Sample records for comparatively large amounts

  1. A divide-and-conquer algorithm for large-scale de novo transcriptome assembly through combining small assemblies from existing algorithms.

    PubMed

    Sze, Sing-Hoi; Parrott, Jonathan J; Tarone, Aaron M

    2017-12-06

    While the continued development of high-throughput sequencing has facilitated studies of entire transcriptomes in non-model organisms, the incorporation of an increasing amount of RNA-Seq libraries has made de novo transcriptome assembly difficult. Although algorithms that can assemble a large amount of RNA-Seq data are available, they are generally very memory-intensive and can only be used to construct small assemblies. We develop a divide-and-conquer strategy that allows these algorithms to be utilized, by subdividing a large RNA-Seq data set into small libraries. Each individual library is assembled independently by an existing algorithm, and a merging algorithm is developed to combine these assemblies by picking a subset of high quality transcripts to form a large transcriptome. When compared to existing algorithms that return a single assembly directly, this strategy achieves accuracy comparable to or higher than that of memory-efficient algorithms that can process a large amount of RNA-Seq data, and accuracy comparable to or lower than that of memory-intensive algorithms that can only be used to construct small assemblies. Our divide-and-conquer strategy allows memory-intensive de novo transcriptome assembly algorithms to be utilized to construct large assemblies.
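
    For orientation only (not from the paper): the merging step described above can be pictured as keeping one high-quality representative per cluster of similar transcripts across the per-library assemblies. The clustering key, quality criterion, and function names below are illustrative assumptions.

      # Hypothetical sketch of the divide-and-conquer merge step; each library is
      # assembled with any existing assembler first, then the outputs are combined.
      # The clustering key and the "quality" criterion are crude placeholders.
      def merge_assemblies(assemblies, min_len=300):
          """Combine per-library assemblies, keeping one representative per cluster."""
          best = {}
          for transcripts in assemblies:        # one list of assembled sequences per library
              for seq in transcripts:
                  if len(seq) < min_len:        # discard short, low-quality contigs
                      continue
                  key = seq[:25]                # stand-in for a similarity-based cluster id
                  if key not in best or len(seq) > len(best[key]):
                      best[key] = seq           # keep the highest-quality member seen so far
          return list(best.values())

      # usage: merged = merge_assemblies([assembly_lib1, assembly_lib2, assembly_lib3])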

  2. A thermal and chemical degradation approach to decipher pristane and phytane precursors in sedimentary organic matter

    USGS Publications Warehouse

    Koopmans, M.P.; Rijpstra, W.I.C.; Klapwijk, M.M.; De Leeuw, J. W.; Lewan, M.D.; Sinninghe, Damste J.S.

    1999-01-01

    A thermal and chemical degradation approach was followed to determine the precursors of pristane (Pr) and phytane (Ph) in samples from the Gessoso-solfifera, Ghareb and Green River Formations. Hydrous pyrolysis of these samples yields large amounts of Pr and Ph carbon skeletons, indicating that their precursors are predominantly sequestered in high-molecular-weight fractions. However, chemical degradation of the polar fraction and the kerogen of the unheated samples generally does not release large amounts of Pr and Ph. Additional information on the precursors of Pr and Ph is obtained from flash pyrolysis analyses of kerogens and residues after hydrous pyrolysis and after chemical degradation. Multiple precursors for Pr and Ph are recognised in these three samples. The main increase of the Pr/Ph ratio with increasing maturation temperature, which is associated with strongly increasing amounts of Pr and Ph, is probably due to the higher amount of precursors of Pr compared to Ph, and not to the different timing of generation of Pr and Ph.

  3. Model for fluorescence quenching in light harvesting complex II in different aggregation states.

    PubMed

    Andreeva, Atanaska; Abarova, Silvia; Stoitchkova, Katerina; Busheva, Mira

    2009-02-01

    Low-temperature (77 K) steady-state fluorescence emission spectroscopy and dynamic light scattering were applied to the main chlorophyll a/b protein light harvesting complex of photosystem II (LHC II) in different aggregation states to elucidate the mechanism of fluorescence quenching within LHC II oligomers. Evidence is presented that LHC II oligomers are heterogeneous and consist of large and small particles with different fluorescence yield. At intermediate detergent concentrations the mean size of the small particles is similar to that of trimers, while the size of large particles is comparable to that of aggregated trimers without added detergent. It is suggested that in small particles and trimers the emitter is monomeric chlorophyll, whereas in large aggregates there is also another emitter, which is a poorly fluorescing chlorophyll associate. A model, describing populations of antenna chlorophyll molecules in small and large aggregates in their ground and first singlet excited states, is considered. The model enables us to obtain the ratio of the singlet excited-state lifetimes in small and large particles, the relative amount of chlorophyll molecules in large particles, and the amount of quenchers as a function of the degree of aggregation. These dependencies reveal that the quenching of the chl a fluorescence upon aggregation is due to the formation of large aggregates and the increase in the amount of chlorophyll molecules forming these aggregates. As a consequence, the amount of quenchers, located in large aggregates, is increased, and their singlet excited-state lifetimes steeply decrease.

  4. Development and freeze-thaw durability of high flyash-content concrete

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sajadi, J.

    1987-01-01

    Objectives were to investigate the effects on concrete strength, drying shrinkage, freeze-thaw durability, and air-void system parameters of replacing various amounts of portland cement with different types of fly ash and to compare selected characteristics of such fly-ash concretes and fly-ash concretes containing a high-range water-reducing admixture to those of a control mixture. It was concluded that concrete mixtures with 90-day compressive strengths equal to the control could be produced when large amounts of cement were replaced by fly ash. In addition, when the high-range water-reducing admixture was employed, very large amounts of cement could be replaced by fly ash to yield mixtures whose compressive strengths were equal to or greater than the strengths of the control mix at all ages. The maximum amount of cement that could be replaced for equal-strength mixtures depended upon the nature of the fly ash. Drying shrinkage of plain fly-ash concretes and fly-ash concretes containing the high-range water-reducing admixture was similar to that of the control mix. The optimum fly-ash content in a concrete that is comparable in strength and durability to a conventional (control) concrete was influenced by the chemical and physical characteristics of the fly ash.

  5. Conifer DBMagic: A database housing multiple de novo transcriptome assemblies for twelve diverse conifer species

    Treesearch

    W. Walter Lorenz; Savavanaraj Ayyampalayam; John M. Bordeaux; Glenn T. Howe; Kathleen D. Jermstad; David B. Neale; Deborah L. Rogers; Jeffrey F.D. Dean

    2012-01-01

    Conifers comprise an ancient and widespread plant lineage of enormous commercial and ecological value. However, compared to model woody angiosperms, such as Populus and Eucalyptus, our understanding of conifers remains quite limited at a genomic level. Large genome sizes (10,000-40,000 Mbp) and large amounts of repetitive DNA...

  6. A MBD-seq protocol for large-scale methylome-wide studies with (very) low amounts of DNA.

    PubMed

    Aberg, Karolina A; Chan, Robin F; Shabalin, Andrey A; Zhao, Min; Turecki, Gustavo; Staunstrup, Nicklas Heine; Starnawska, Anna; Mors, Ole; Xie, Lin Y; van den Oord, Edwin Jcg

    2017-09-01

    We recently showed that, after optimization, our methyl-CpG binding domain sequencing (MBD-seq) application approximates the methylome-wide coverage obtained with whole-genome bisulfite sequencing (WGB-seq), but at a cost that enables adequately powered large-scale association studies. A prior drawback of MBD-seq is the relatively large amount of genomic DNA (ideally >1 µg) required to obtain high-quality data. Biomaterials are typically expensive to collect, provide a finite amount of DNA, and may simply not yield sufficient starting material. The ability to use low amounts of DNA will increase the breadth and number of studies that can be conducted. Therefore, we further optimized the enrichment step. With this low starting material protocol, MBD-seq performed as well as, or better than, the protocol requiring ample starting material (>1 µg). Using only 15 ng of DNA as input, there is minimal loss in data quality, achieving 93% of the coverage of WGB-seq (with standard amounts of input DNA) at similar false-positive rates. Furthermore, across a large number of genomic features, the MBD-seq methylation profiles closely tracked those observed for WGB-seq with even slightly larger effect sizes. This suggests that MBD-seq provides similar information about the methylome and classifies methylation status somewhat more accurately. Performance decreases with <15 ng DNA as starting material but, even with as little as 5 ng, MBD-seq still achieves 90% of the coverage of WGB-seq with comparable genome-wide methylation profiles. Thus, the proposed protocol is an attractive option for adequately powered and cost-effective methylome-wide investigations using (very) low amounts of DNA.

  7. Comparing Laboratory and Field Measured Bioaccumulation Endpoints

    EPA Science Inventory

    The report presents an approach that allows comparisons of all laboratory and field bioaccumulation endpoint measurements. The approach will enable the inclusion of large amounts of field data into evaluations of bioaccumulation potential for legacy chemicals. Currently, these...

  8. Analysis of ecdysteroids in different developmental stages of Hymenolepis diminuta.

    PubMed

    Mercer, J G; Munn, A E; Arme, C; Rees, H H

    1987-08-01

    Prepatent and patent adult Hymenolepis diminuta from the intestines of rats, H. diminuta eggs recovered from the faeces of rats harbouring patent infections, and infective cysticercoids from the beetle intermediate host were analysed for free and conjugated ecdysteroids. Adult worms and eggs contained both free ecdysteroids and hydrolysable polar conjugated ecdysteroids, with comparatively large amounts of immunoreactive material also being detected following hydrolysis of the possible apolar conjugated ecdysteroid fraction. Free ecdysteroids were not detected in the cysticercoid sample. The concentration of free ecdysteroids in H. diminuta eggs was higher than that detected in the tissues of the adult worms. Ecdysone and 20-hydroxyecdysone were the major identified compounds of the free ecdysteroid fraction, whereas in the hydrolysed polar conjugated ecdysteroid fraction these two compounds were accompanied by 20,26-dihydroxyecdysone. The free ecdysteroid fraction also contained comparatively large amounts of unidentified immunoreactive material.

  9. Evidence for ultramafic lavas on Syrtis Major

    NASA Technical Reports Server (NTRS)

    Reyes, D. P.; Christensen, P. R.

    1993-01-01

    Pyroxene compositions from ISM data compared with pyroxene compositions of Apollo 12 pigeonite basalt, Shergottite meteorite, and pyroxenitic komatiite show that the Syrtis Major volcanic materials are consistent with pyroxenitic komatiite. Pyroxenitic komatiite is significant for the earth because it contains a large amount of MgO, implying generation under unique circumstances compared to typical basaltic compositions.

  10. The effect of spin in swing bowling in cricket: model trajectories for spin alone

    NASA Astrophysics Data System (ADS)

    Robinson, Garry; Robinson, Ian

    2015-02-01

    In ‘swing’ bowling, as employed by fast and fast-medium bowlers in cricket, back-spin along the line of the seam is normally applied in order to keep the seam vertical and to provide stability against ‘wobble’ of the seam. Whilst spin is normally thought of as primarily being the slow bowler's domain, the spin applied by the swing bowler has the side-effect of generating a lift or Magnus force. This force, depending on the orientation of the seam and hence that of the back-spin, can have a side-ways component as well as the expected vertical ‘lift’ component. The effect of the spin itself, in influencing the trajectory of the fast bowler's delivery, is normally not considered, presumably being thought of as negligible. The purpose of this paper is to investigate, using calculated model trajectories, the amount of side-ways movement due to the spin and to see how this predicted movement compares with the total observed side-ways movement. The size of the vertical lift component is also estimated. It is found that, although the spin is an essential part of the successful swing bowler's delivery, the amount of side-ways movement due to the spin itself amounts to a few centimetres or so, and is therefore small, but perhaps not negligible, compared to the total amount of side-ways movement observed. The spin does, however, provide a considerable amount of lift compared to the equivalent delivery bowled without spin, altering the point of pitching by up to 3 m, a very large amount indeed. Thus, for example, bowling a ball with the seam pointing directly down the pitch and not designed to swing side-ways at all, but with the amount of back-spin varied, could provide a very powerful additional weapon in the fast bowler's arsenal. So-called ‘sling bowlers’, who use a very low arm action, can take advantage of spin since effectively they can apply side-spin to the ball, giving rise to a large side-ways movement, ~20 cm or more, which certainly is significant. For a given amount of spin the amount of side-ways movement increases as the bowler's delivery arm becomes more horizontal. This technique could also be exploited by normal spin bowlers as well as swing bowlers.
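
    A rough numerical sketch of how such model trajectories can be computed (not the authors' code): integrate Newton's equations with gravity, drag, and a Magnus (spin) force. The coefficients, ball parameters, and release conditions below are assumed values for illustration.

      import numpy as np

      # Illustrative trajectory integration with gravity, drag and a Magnus force.
      # Mass, radius, Cd, Cl and the release conditions are assumed, not from the paper.
      m, r, rho = 0.156, 0.036, 1.2              # ball mass (kg), radius (m), air density
      A, Cd, Cl = np.pi * r**2, 0.4, 0.2         # cross-section, drag and lift coefficients
      g = np.array([0.0, 0.0, -9.81])

      def step(pos, vel, spin_axis, dt=1e-3):
          """One Euler step; spin_axis is a unit vector along the angular velocity."""
          speed = np.linalg.norm(vel)
          drag = -0.5 * rho * Cd * A * speed * vel / m
          magnus = 0.5 * rho * Cl * A * speed * np.cross(spin_axis, vel) / m
          return pos + vel * dt, vel + (g + drag + magnus) * dt

      pos = np.array([0.0, 0.0, 2.0])            # release height ~2 m
      vel = np.array([38.0, 0.0, 0.0])           # ~137 km/h, straight down the pitch
      axis = np.array([0.0, -1.0, 0.0])          # pure back-spin -> upward Magnus lift
      while pos[2] > 0.0:                        # integrate until the ball pitches
          pos, vel = step(pos, vel, axis)
      print("pitching point (x, y) in metres:", pos[:2])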

  11. Comparative verification between GEM model and official aviation terminal forecasts

    NASA Technical Reports Server (NTRS)

    Miller, Robert G.

    1988-01-01

    The Generalized Exponential Markov (GEM) model uses the local standard airways observation (SAO) to predict hour-by-hour the following elements: temperature, pressure, dew point depression, first and second cloud-layer height and amount, ceiling, total cloud amount, visibility, wind, and present weather conditions. GEM is superior to persistence at all projections for all elements in a large independent sample. A minute-by-minute GEM forecasting system utilizing the Automated Weather Observation System (AWOS) is under development.

  12. Analysis of volatile organic compounds. [trace amounts of organic volatiles in gas samples

    NASA Technical Reports Server (NTRS)

    Zlatkis, A. (Inventor)

    1977-01-01

    An apparatus and method are described for reproducibly analyzing trace amounts of a large number of organic volatiles existing in a gas sample. Direct injection of the trapped volatiles into a cryogenic precolumn provides a sharply defined plug. Applications of the method include: (1) analyzing the headspace gas of body fluids and comparing a profile of the organic volatiles with standard profiles for the detection and monitoring of disease; (2) analyzing the headspace gas of foods and beverages and comparing the profile with standard profiles to monitor and control flavor and aroma; and (3) analyses for determining the organic pollutants in air or water samples.

  13. Highly Sensitive GMO Detection Using Real-Time PCR with a Large Amount of DNA Template: Single-Laboratory Validation.

    PubMed

    Mano, Junichi; Hatano, Shuko; Nagatomi, Yasuaki; Futo, Satoshi; Takabatake, Reona; Kitta, Kazumi

    2018-03-01

    Current genetically modified organism (GMO) detection methods allow for sensitive detection. However, a further increase in sensitivity will enable more efficient testing for large grain samples and reliable testing for processed foods. In this study, we investigated real-time PCR-based GMO detection methods using a large amount of DNA template. We selected target sequences that are commonly introduced into many kinds of GM crops, i.e., 35S promoter and nopaline synthase (NOS) terminator. This makes the newly developed method applicable to a wide range of GMOs, including some unauthorized ones. The estimated LOD of the new method was 0.005% of GM maize events; to the best of our knowledge, this method is the most sensitive among the GM maize detection methods for which the LOD was evaluated in terms of GMO content. A 10-fold increase in the DNA amount as compared with the amount used under common testing conditions gave an approximately 10-fold reduction in the LOD without PCR inhibition. Our method is applicable to various analytical samples, including processed foods. The use of other primers and fluorescence probes would permit highly sensitive detection of various recombinant DNA sequences besides the 35S promoter and NOS terminator.
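
    A back-of-the-envelope check (added for illustration; the genome size and template amounts are assumed, not taken from the paper) of why a 10-fold increase in template DNA supports a roughly 10-fold lower LOD: at 0.005% GMO content, the number of GM target copies in the reaction only becomes reliably non-zero when the total number of genome copies is large enough.

      # Illustrative arithmetic only; the maize genome size and inputs are assumed.
      haploid_genome_pg = 2.7        # approximate maize 1C DNA content, picograms
      gmo_fraction = 0.00005         # 0.005% GMO content (the reported LOD)

      for template_ng in (25, 250):  # typical input vs. a 10-fold increased input
          total_copies = template_ng * 1000.0 / haploid_genome_pg
          gm_copies = total_copies * gmo_fraction
          print(f"{template_ng} ng template ~ {total_copies:.0f} genome copies, "
                f"~ {gm_copies:.1f} GM target copies at 0.005% GMO")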

  14. Stroke volume variation as a guide for fluid resuscitation in patients undergoing large-volume liposuction.

    PubMed

    Jain, Anil Kumar; Khan, Asma M

    2012-09-01

    The potential for fluid overload in large-volume liposuction is a source of serious concern. Fluid management in these patients is controversial and governed by various formulas that have been advanced by many authors. Basically, it is the ratio of what goes into the patient and what comes out. Central venous pressure has been used to monitor fluid therapy. Dynamic parameters, such as stroke volume and pulse pressure variation, are better predictors of volume responsiveness and are superior to static indicators, such as central venous pressure and pulmonary capillary wedge pressure. Stroke volume variation was used in this study to guide fluid resuscitation and compared with resuscitation guided by an intraoperative fluid ratio of 1.2 (i.e., the Rohrich formula). Stroke volume variation was used as a guide for intraoperative fluid administration in 15 patients subjected to large-volume liposuction. In another 15 patients, fluid resuscitation was guided by an intraoperative fluid ratio of 1.2. The amounts of intravenous fluid administered in the two groups were compared. The mean amount of fluid infused was 561 ± 181 ml in the stroke volume variation group and 2383 ± 1208 ml in the intraoperative fluid ratio group. The intraoperative fluid ratio calculated for the stroke volume variation group was 0.936 ± 0.084. All patients maintained hemodynamic parameters (heart rate and systolic, diastolic, and mean blood pressure). Renal and metabolic indices remained within normal limits. Stroke volume variation-guided fluid application could result in an appropriate amount of intravenous fluid use in patients undergoing large-volume liposuction. Level of evidence: Therapeutic, II.

  15. The U.S. Farm Sector in the Mid-1980's. Agricultural Economic Report Number 548.

    ERIC Educational Resources Information Center

    Reimund, Donn A.; And Others

    This report compares several farm characteristics of the mid-1980s with those of a decade earlier to document the real amount of change in the farm sector. Farms are stratified into five groups based on their farm income: rural residence, small family, family, large family, and very large. Sources and levels of farm operator income and wealth are…

  16. ENCAPSULATING WASTE DISPOSAL METHODS - PHASE I

    EPA Science Inventory

    The release of chemical and biological agents on a large-scale urban environment would be devastating. The amount of waste generated during such an event would be comparable to a tornado ripping through a town. Building materials, furniture, office materials, building ins...

  17. Energy Storage Requirements for Achieving 50% Penetration of Solar Photovoltaic Energy in California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denholm, Paul; Margolis, Robert

    2016-09-01

    We estimate the storage required to enable PV penetration up to 50% in California (with renewable penetration over 66%), and we quantify the complex relationships among storage, PV penetration, grid flexibility, and PV costs due to increased curtailment. We find that the storage needed depends strongly on the amount of other flexibility resources deployed. With very low-cost PV (three cents per kilowatt-hour) and a highly flexible electric power system, about 19 gigawatts of energy storage could enable 50% PV penetration with a marginal net PV levelized cost of energy (LCOE) comparable to the variable costs of future combined-cycle gas generators under carbon constraints. This system requires extensive use of flexible generation, transmission, demand response, and electrifying one quarter of the vehicle fleet in California with largely optimized charging. A less flexible system, or more expensive PV would require significantly greater amounts of storage. The amount of storage needed to support very large amounts of PV might fit within a least-cost framework driven by declining storage costs and reduced storage-duration needs due to high PV penetration.

  18. Energy Storage Requirements for Achieving 50% Solar Photovoltaic Energy Penetration in California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denholm, Paul; Margolis, Robert

    2016-08-01

    We estimate the storage required to enable PV penetration up to 50% in California (with renewable penetration over 66%), and we quantify the complex relationships among storage, PV penetration, grid flexibility, and PV costs due to increased curtailment. We find that the storage needed depends strongly on the amount of other flexibility resources deployed. With very low-cost PV (three cents per kilowatt-hour) and a highly flexible electric power system, about 19 gigawatts of energy storage could enable 50% PV penetration with a marginal net PV levelized cost of energy (LCOE) comparable to the variable costs of future combined-cycle gas generators under carbon constraints. This system requires extensive use of flexible generation, transmission, demand response, and electrifying one quarter of the vehicle fleet in California with largely optimized charging. A less flexible system, or more expensive PV would require significantly greater amounts of storage. The amount of storage needed to support very large amounts of PV might fit within a least-cost framework driven by declining storage costs and reduced storage-duration needs due to high PV penetration.

  19. Deuterium Retention and Physical Sputtering of Low Activation Ferritic Steel

    NASA Astrophysics Data System (ADS)

    Hino, T.; Yamaguchi, K.; Yamauchi, Y.; Hirohata, Y.; Tsuzuki, K.; Kusama, Y.

    2005-04-01

    Low activation materials have to be developed for fusion demonstration reactors. Ferritic steel, vanadium alloy and SiC/SiC composite are candidate materials of the first wall, vacuum vessel and blanket components, respectively. Although changes of mechanical and thermal properties owing to neutron irradiation have been investigated so far, there is little data on plasma-material interactions, such as fuel hydrogen retention and erosion. In the present study, deuterium retention and physical sputtering of low activation ferritic steel, F82H, were investigated by using a deuterium ion irradiation apparatus. After a ferritic steel sample was irradiated by 1.7 keV D+ ions, the weight loss was measured to obtain the physical sputtering yield. The sputtering yield was 0.04, comparable to that of stainless steel. In order to obtain the retained amount of deuterium, thermal desorption spectroscopy (TDS) was applied to the irradiated sample. The retained deuterium desorbed at temperatures ranging from 450 K to 700 K, in the forms of DHO, D2, D2O and hydrocarbons. Hence, the retained deuterium can be reduced by baking at a relatively low temperature. The fluence dependence of the retained amount of deuterium was measured by varying the ion fluence. In the ferritic steel without mechanical polish, the retained amount was large even when the fluence was low. In such a case, a large amount of deuterium was trapped in the surface oxide layer containing O and C. When the fluence was large, the thickness of the surface oxide layer was reduced by ion sputtering, and the retained amount in the oxide layer decreased. In the case of a high fluence, the retained amount of deuterium became comparable to that of ferritic steel with mechanical polish or SS 316L, and one order of magnitude smaller than that of graphite. When ferritic steel is used, the surface oxide layer must be removed to reduce fuel hydrogen retention. A ferritic steel sample was also exposed to the environment of the JFT-2M tokamak at JAERI, after which the deuterium retention was examined. The result was roughly the same as in the deuterium ion irradiation experiment.

  20. GREEN RETROFITTING RESIDENTIAL BUILDINGS

    EPA Science Inventory

    When compared with the rest of the world, the United States consumes a disproportionately large amount of energy and is a major source of greenhouse gases from fossil fuel combustion. As much as two thirds of U.S. electricity production is consumed by residential and commerci...

  1. Overview of en route noise prediction using an integrated noise model

    DOT National Transportation Integrated Search

    2010-04-20

    En route aircraft noise is often ignored in aircraft noise modeling because of the large amount of noise attenuation due to long propagation distances between the aircraft and the receivers on the ground, reduced power in cruise flight compared to takeoff ...

  2. LanzaTech- Capturing Carbon. Fueling Growth.

    ScienceCinema

    NONE

    2018-01-16

    LanzaTech will design a gas fermentation system that will significantly improve the rate at which methane gas is delivered to a biocatalyst. Current gas fermentation processes are not cost effective compared to other gas-to-liquid technologies because they are too slow for large-scale production. If successful, LanzaTech's system will process large amounts of methane at a high rate, reducing the energy inputs and costs associated with methane conversion.

  3. Ion Heating During Local Helicity Injection Plasma Startup in the Pegasus ST

    NASA Astrophysics Data System (ADS)

    Burke, M. G.; Barr, J. L.; Bongard, M. W.; Fonck, R. J.; Hinson, E. T.; Perry, J. M.; Reusch, J. A.

    2015-11-01

    Plasmas in the Pegasus ST are initiated either through standard, MHD stable, inductive current drive or non-solenoidal local helicity injection (LHI) current drive with strong reconnection activity, providing a rich environment to study ion dynamics. During LHI discharges, a large amount of impurity ion heating has been observed, with the passively measured impurity Ti as high as 800 eV compared to Ti ~ 60 eV and Te ~ 175 eV during standard inductive current drive discharges. In addition, non-thermal ion velocity distributions are observed and appear to be strongest near the helicity injectors. The ion heating is hypothesized to be a result of large-scale magnetic reconnection activity, as the amount of heating scales with increasing fluctuation amplitude of the dominant, edge localized, n =1 MHD mode. An approximate temporal scaling of the heating with the amplitude of higher frequency magnetic fluctuations has also been observed, with large amounts of power spectral density present at several impurity ion cyclotron frequencies. Recent experiments have focused on investigating the impurity ion heating scaling with the ion charge to mass ratio as well as the reconnecting field strength. The ion charge to mass ratio was modified by observing different impurity charge states in similar LHI plasmas while the reconnecting field strength was modified by changing the amount of injected edge current. Work supported by US DOE grant DE-FG02-96ER54375.

  4. How much information? East Asian and North American cultural products and information search performance.

    PubMed

    Wang, Huaitang; Masuda, Takahiko; Ito, Kenichi; Rashid, Marghalara

    2012-12-01

    Literature in cultural psychology suggests that compared with North Americans, East Asians prefer context-rich cultural products (e.g., paintings and photographs). The present article further examines the preferred amount of information in cultural products produced by East Asians and North Americans (Study 1: Society for Personality and Social Psychology conference posters; Study 2: government and university portal pages). The authors found that East Asians produced more information-rich products than did North Americans. Study 3 further examined people's information search speed when identifying target objects on mock webpages containing large amounts of information. The results indicated that East Asians were faster than North Americans in dealing with information on mock webpages with large amounts of information. Finally, the authors found that there were cultural differences as well as similarities in functional and aesthetic preferences regarding styles of information presentation. The interplay between cultural products and skills for accommodating to the cultural products is discussed.

  5. The Gist of Juries: Testing a Model of Damage Award Decision Making

    PubMed Central

    Reyna, Valerie F.; Hans, Valerie P.; Corbin, Jonathan C.; Yeh, Ryan; Lin, Kelvin; Royer, Caisa

    2017-01-01

    Despite the importance of damage awards, juries are often at sea about the amounts that should be awarded, with widely differing awards for cases that seem comparable. We tested a new model of damage award decision making by systematically varying the size, context, and meaningfulness of numerical comparisons or anchors. As a result, we were able to elicit large differences in award amounts that replicated for 2 different cases. Although even arbitrary dollar amounts (unrelated to the cases) influenced the size of award judgments, the most consistent effects of numerical anchors were achieved when the amounts were meaningful in the sense that they conveyed the gist of numbers as small or large. Consistent with the model, the ordinal gist of the severity of plaintiff’s damages and defendant’s liability predicted damage awards, controlling for other factors such as motivation for the award-judgment task and perceived economic damages. Contrary to traditional dual-process approaches, numeracy and cognitive style (e.g., need for cognition and cognitive reflection) were not significant predictors of these numerical judgments, but they were associated with lower levels of variability once the gist of the judgments was taken into account. Implications for theory and policy are discussed. PMID:29075092

  6. Efficient multifeature index structures for music data retrieval

    NASA Astrophysics Data System (ADS)

    Lee, Wegin; Chen, Arbee L. P.

    1999-12-01

    In this paper, we propose four index structures for music data retrieval. Based on suffix trees, we develop two index structures called combined suffix tree and independent suffix trees. These methods still show shortcomings for some search functions. Hence we develop another index, called Twin Suffix Trees, to overcome these problems. However, the Twin Suffix Trees lack scalability when the amount of music data becomes large. Therefore we propose the fourth index, called Grid-Twin Suffix Trees, to provide scalability and flexibility for a large amount of music data. For each index, we can use different search functions, like exact search and approximate search, on different music features, like melody, rhythm or both. We compare the performance of the different search functions applied on each index structure by a series of experiments.
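
    A toy sketch of exact melody search over a note string, added for illustration; it uses a plain suffix array as a simpler stand-in for the suffix-tree index structures proposed in the paper.

      # Toy exact-match melody search; a suffix array stands in for the suffix trees.
      def build_suffix_array(text):
          return sorted(range(len(text)), key=lambda i: text[i:])   # O(n^2 log n) toy build

      def find(text, sa, pattern):
          """Binary-search the suffix array for all exact occurrences of pattern."""
          def prefix(i):
              return text[sa[i]:sa[i] + len(pattern)]
          lo, hi = 0, len(sa)
          while lo < hi:                         # left edge of the matching block
              mid = (lo + hi) // 2
              if prefix(mid) < pattern:
                  lo = mid + 1
              else:
                  hi = mid
          start, hi = lo, len(sa)
          while lo < hi:                         # right edge of the matching block
              mid = (lo + hi) // 2
              if prefix(mid) <= pattern:
                  lo = mid + 1
              else:
                  hi = mid
          return sorted(sa[start:lo])

      melody = "CDEFGEDC" * 2                    # pitches encoded as characters
      sa = build_suffix_array(melody)
      print(find(melody, sa, "EFG"))             # -> [2, 10]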

  7. Analysis of returns above variable costs for management of Verticillium wilt in cotton

    USDA-ARS?s Scientific Manuscript database

    A large plot study located in Halfway, TX, was conducted from 2007 to 2013 in an irrigated field infested with Verticillium wilt. Management options (crop rotation, irrigation amount, variety selection) and combinations of options that can reduce this disease were compared using returns above variabl...

  8. Supporting Distance Learners: Making Practice More Effective

    ERIC Educational Resources Information Center

    Pratt, Keryn

    2015-01-01

    This paper reports on a qualitative evaluation of the postgraduate courses offered by distance in one university department. The types and amount of support provided to students were evaluated and compared with Simpson's (2008a) Proactive Motivational Support model (PaMS). While students were largely satisfied with the support they received during…

  9. Major soybean maturity gene haplotypes revealed by SNPViz analysis of 72 sequenced soybean genomes

    USDA-ARS?s Scientific Manuscript database

    In this Genomics Era, vast amounts of next generation sequencing data have become publicly-available for multiple genomes across hundreds of species. Analysis of these large-scale datasets can become cumbersome, especially when comparing nucleotide polymorphisms across many samples within a dataset...

  10. World Survey of Education, V: Educational Policy, Legislation and Administration.

    ERIC Educational Resources Information Center

    United Nations Educational, Scientific, and Cultural Organization, Paris (France).

    This report presents comprehensive, standardized, statistical, and descriptive educational information for 194 States and territories. The information was compiled from responses to questionnaires sent to country representatives and is designed to contribute to the comparative study of education by marshalling a large amount of material normally…

  11. Fate of peat-derived carbon and associated CO2 and CO emissions from two Southeast Asian estuaries

    NASA Astrophysics Data System (ADS)

    Müller, D.; Warneke, T.; Rixen, T.; Müller, M.; Mujahid, A.; Bange, H. W.; Notholt, J.

    2015-06-01

    Coastal peatlands in Southeast Asia release large amounts of organic carbon to rivers, which transport it further to the adjacent estuaries. However, little is known about the fate of this terrestrial material in the coastal ocean. Although Southeast Asia is, by area, considered a hotspot of estuarine CO2 emissions, studies in this region are very scarce. We measured dissolved and particulate organic carbon, carbon dioxide (CO2) partial pressure and carbon monoxide (CO) concentrations in two tropical estuaries in Sarawak, Malaysia, whose coastal area is covered by peatlands. We surveyed the estuaries of the rivers Lupar and Saribas during the wet and dry season, respectively. The spatial distribution and the carbon-to-nitrogen ratios of dissolved organic matter (DOM) suggest that peat-draining rivers convey terrestrial organic carbon to the estuaries. We found evidence that a large fraction of this carbon is respired. The median pCO2 in the estuaries ranged between 618 and 5064 μatm with little seasonal variation. CO2 fluxes were determined with a floating chamber and estimated to amount to 14-272 mol m-2 yr-1, which is high compared to other studies from tropical and subtropical sites. In contrast, CO concentrations and fluxes were relatively moderate (0.3-1.4 nmol L-1 and 0.8-1.9 mmol m-2 yr-1) if compared to published data for oceanic or upwelling systems. We attributed this to the large amounts of suspended matter (4-5004 mg L-1), limiting the light penetration depth. However, the diurnal variation of CO suggests that it is photochemically produced, implying that photodegradation might play a role for the removal of DOM from the estuary as well. We concluded that unlike smaller peat-draining tributaries, which tend to transport most carbon downstream, estuaries in this region function as an efficient filter for organic carbon and release large amounts of CO2 to the atmosphere. The Lupar and Saribas mid-estuaries release 0.4 ± 0.2 Tg C yr-1, which corresponds to approximately 80% of the emissions from the aquatic systems in these two catchments.

  12. Stream measurement work: Chapter 8 in Seventeenth biennial report of the State Engineer to the governor of Utah: 1929-1930

    USGS Publications Warehouse

    Purton, A.B.

    1930-01-01

    General stream measurement work looking toward a comprehensive inventory of the water resources of the state has been continued during the biennium by the United States Geological Survey under the usual cooperative agreement with the State Engineer. Since 1909 Utah in company with many other states has made regular legislative appropriations for the purpose of assisting and hastening the determination of the water supply of the United States by the Geological Survey. Because of the comparatively small Federal appropriations the scope of this work in the individual states has been largely influenced by the amount of the state cooperation. The funds contributed by each state have all been expended within that state and matched as far as possible by funds of the Geological Survey. Up to the present, however, the Federal funds have been insufficient to match the state contributions beyond a very limited amount and in many localities the large amount of work done has been made possible only by correspondingly large unmatched state appropriations. During this period the regular stream gaging work in Utah has been practically limited to that possible with approximately ten thousand dollars annually divided about equally between the state and Geological Survey with the government’s share including the cost at Washington of general supervision, and the review, editing, and publication of the records. This has been the maximum amount that it has been possible to allot any one state to meet state cooperation.

  13. Fate of terrestrial organic carbon and associated CO2 and CO emissions from two Southeast Asian estuaries

    NASA Astrophysics Data System (ADS)

    Müller, D.; Warneke, T.; Rixen, T.; Müller, M.; Mujahid, A.; Bange, H. W.; Notholt, J.

    2016-02-01

    Southeast Asian rivers convey large amounts of organic carbon, but little is known about the fate of this terrestrial material in estuaries. Although Southeast Asia is, by area, considered a hotspot of estuarine carbon dioxide (CO2) emissions, studies in this region are very scarce. We measured dissolved and particulate organic carbon, as well as CO2 partial pressures and carbon monoxide (CO) concentrations in two tropical estuaries in Sarawak, Malaysia, whose coastal area is covered by carbon-rich peatlands. We surveyed the estuaries of the rivers Lupar and Saribas during the wet and dry season, respectively. Carbon-to-nitrogen ratios suggest that dissolved organic matter (DOM) is largely of terrestrial origin. We found evidence that a large fraction of this carbon is respired. The median pCO2 in the estuaries ranged between 640 and 5065 µatm with little seasonal variation. CO2 fluxes were determined with a floating chamber and estimated to amount to 14-268 mol m-2 yr-1, which is high compared to other studies from tropical and subtropical sites. Estimates derived from a merely wind-driven turbulent diffusivity model were considerably lower, indicating that these models might be inappropriate in estuaries, where tidal currents and river discharge make an important contribution to the turbulence driving water-air gas exchange. Although an observed diurnal variability of CO concentrations suggested that CO was photochemically produced, the overall concentrations and fluxes were relatively moderate (0.4-1.3 nmol L-1 and 0.7-1.8 mmol m-2 yr-1) if compared to published data for oceanic or upwelling systems. We attributed this to the large amounts of suspended matter (4-5004 mg L-1), limiting the light penetration depth and thereby inhibiting CO photoproduction. We concluded that estuaries in this region function as an efficient filter for terrestrial organic carbon and release large amounts of CO2 to the atmosphere. The Lupar and Saribas rivers deliver 0.3 ± 0.2 Tg C yr-1 to the South China Sea as organic carbon and their mid-estuaries release approximately 0.4 ± 0.2 Tg C yr-1 into the atmosphere as CO2.
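
    For orientation only (not the authors' parameterization): bulk air-water CO2 fluxes of this kind are commonly estimated as F = k * K0 * (pCO2,water - pCO2,air). The gas transfer velocity, solubility, and pCO2 values below are assumed round numbers.

      # Bulk flux sketch, F = k * K0 * (pCO2_water - pCO2_air); all numbers below
      # are illustrative assumptions, not values from the study.
      SECONDS_PER_YEAR = 3.156e7

      k = 8.0 * 0.01 / 3600.0                  # gas transfer velocity: 8 cm/h -> m/s
      K0 = 3.2e-5                              # CO2 solubility, mol m-3 uatm-1 (approx.)
      pco2_water, pco2_air = 2400.0, 400.0     # uatm

      flux = k * K0 * (pco2_water - pco2_air)  # mol m-2 s-1
      print(f"CO2 flux ~ {flux * SECONDS_PER_YEAR:.0f} mol m-2 yr-1")
      # ~45 mol m-2 yr-1, i.e. inside the 14-268 mol m-2 yr-1 range reported above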

  14. Quality evaluation of radiographic contrast media in large-volume prefilled syringes and vials.

    PubMed

    Sendo, T; Hirakawa, M; Yaginuma, M; Aoyama, T; Oishi, R

    1998-06-01

    The authors compared the particle contaminations of radiographic contrast media packaged in large-volume prefilled syringes and vials. Particle counting was performed for four contrast media packaged in large-volume prefilled syringes (iohexol, ioversol, ioversol for angiography, and ioxaglate) and three contrast media packaged in vials (iohexol, ioversol, and ioxaglate). X-ray emission spectrometry was performed to characterize the individual particles. The amount of silicone oil in the syringe was quantified with infrared spectrophotometry. The particle contamination in syringes containing ioversol was higher than that in syringes containing iohexol or ioxaglate. Particle contamination in the vials was relatively low, except with ioxaglate. X-ray emission spectrometry of the components of the syringe and vial showed that the source of particles was internal material released from the rubber stopper or inner surface. The particle counts for contrast media packaged in syringes and vials varied considerably among the different contrast media and were related to the amount of silicone oil on the inner surface and rubber piston of the syringe.

  15. Evaluation of high-level clouds in cloud resolving model simulations with ARM and KWAJEX observations

    DOE PAGES

    Liu, Zheng; Muhlbauer, Andreas; Ackerman, Thomas

    2015-11-05

    In this paper, we evaluate high-level clouds in a cloud resolving model during two convective cases, ARM9707 and KWAJEX. The simulated joint histograms of cloud occurrence and radar reflectivity compare well with cloud radar and satellite observations when using a two-moment microphysics scheme. However, simulations performed with a single-moment microphysics scheme exhibit low biases of approximately 20 dB. During convective events, the two-moment microphysics overestimates the amount of high-level cloud, while the one-moment microphysics precipitates too readily and underestimates the amount and height of high-level cloud. For ARM9707, persistent large positive biases in high-level cloud are found, which are not sensitive to changes in ice particle fall velocity and ice nuclei number concentration in the two-moment microphysics. These biases are caused by biases in large-scale forcing and maintained by the periodic lateral boundary conditions. The combined effects include significant biases in high-level cloud amount, radiation, and high sensitivity of cloud amount to nudging time scale in both convective cases. The high sensitivity of high-level cloud amount to the thermodynamic nudging time scale suggests that thermodynamic nudging can be a powerful "tuning" parameter for the simulated cloud and radiation but should be applied with caution. The role of the periodic lateral boundary conditions in reinforcing the biases in cloud and radiation suggests that reducing the uncertainty in the large-scale forcing at high levels is important for similar convective cases and has far reaching implications for simulating high-level clouds in super-parameterized global climate models such as the multiscale modeling framework.
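
    One way to build the kind of joint histogram of cloud occurrence versus radar reflectivity used in such an evaluation (illustrative only; the synthetic data and bin choices are assumptions):

      import numpy as np

      # Joint frequency histogram of radar reflectivity (dBZ) versus height, of the
      # kind used to compare model output with cloud-radar observations; the data
      # here are synthetic and the bin widths are arbitrary choices.
      rng = np.random.default_rng(0)
      height_km = rng.uniform(0.0, 15.0, 10000)                # sample heights
      reflectivity_dbz = rng.normal(-20.0 + height_km, 8.0)    # synthetic reflectivities

      h_bins = np.arange(0.0, 16.0, 1.0)                       # 1 km height bins
      z_bins = np.arange(-60.0, 31.0, 5.0)                     # 5 dBZ reflectivity bins
      counts, _, _ = np.histogram2d(height_km, reflectivity_dbz, bins=(h_bins, z_bins))

      freq = counts / counts.sum(axis=1, keepdims=True)        # normalise per height bin
      print(freq.shape)                                        # (15 height bins, 18 dBZ bins)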

  16. Comparative effects of fructose and glucose on lipogenic gene expression and intermediary metabolism in HepG2 liver cells

    USDA-ARS?s Scientific Manuscript database

    It is well established that the consumption of large amounts of fructose or sucrose increases lipogenesis and circulating triglycerides in humans. Although the underlying molecular mechanisms responsible for this effect are not completely understood, it is possible that as reported for rodents, hig...

  17. Use of the disease severity index for null hypothesis testing

    USDA-ARS?s Scientific Manuscript database

    A disease severity index (DSI) is a single number for summarizing a large amount of disease severity information. It is used to indicate relative resistance of cultivars, to relate disease severity to yield loss, or to compare treatments. The DSI has most often been based on a special type of ordina...

  18. Evaluation of a stream channel-type system for southeast Alaska.

    Treesearch

    M.D. Bryant; P.E. Porter; S.J. Paustian

    1991-01-01

    Nine channel types within a hierarchical channel-type classification system (CTCS) were surveyed to determine relations between salmonid densities and species distribution, and channel type. Two other habitat classification systems and the amount of large woody debris also were compared to species distribution and salmonid densities, and to stream channel types....

  19. CO2 and CH4 exchanges between land ecosystems and the atmosphere in northern high latitudes over the 21st century

    USGS Publications Warehouse

    Zhuang, Q.; Melillo, J.M.; Sarofim, M.C.; Kicklighter, D.W.; McGuire, A.D.; Felzer, B.S.; Sokolov, A.; Prinn, R.G.; Steudler, P.A.; Hu, S.

    2006-01-01

    Terrestrial ecosystems of the northern high latitudes (above 50°N) exchange large amounts of CO2 and CH4 with the atmosphere each year. Here we use a process-based model to estimate the budget of CO2 and CH4 of the region for current climate conditions and for future scenarios by considering effects of permafrost dynamics, CO2 fertilization of photosynthesis and fire. We find that currently the region is a net source of carbon to the atmosphere at 276 Tg C yr-1. We project that throughout the 21st century, the region will most likely continue as a net source of carbon and the source will increase by up to 473 Tg C yr-1 by the end of the century compared to the current emissions. However our coupled carbon and climate model simulations show that these emissions will exert relatively small radiative forcing on the global climate system compared to large amounts of anthropogenic emissions. Copyright 2006 by the American Geophysical Union.

  20. Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency.

    PubMed

    Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio

    2015-01-01

    Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them refers to the management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. Finding an alternative to the frequently considered relational database model becomes a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB.
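
    A minimal sketch of a write/read path for genomic reads in Cassandra using the Python cassandra-driver; the keyspace, table layout, and contact point are illustrative assumptions, not the schema used in the paper.

      from cassandra.cluster import Cluster    # pip install cassandra-driver

      # Hypothetical keyspace/table for processed reads; names are illustrative only.
      cluster = Cluster(["127.0.0.1"])
      session = cluster.connect()
      session.execute("""
          CREATE KEYSPACE IF NOT EXISTS genomics
          WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
      """)
      session.execute("""
          CREATE TABLE IF NOT EXISTS genomics.reads (
              sample_id text, read_id text, sequence text, quality text,
              PRIMARY KEY (sample_id, read_id)
          )
      """)

      insert = session.prepare(
          "INSERT INTO genomics.reads (sample_id, read_id, sequence, quality) "
          "VALUES (?, ?, ?, ?)"
      )
      session.execute(insert, ("S1", "r0001", "ACGTACGT", "IIIIIIII"))   # write path
      rows = session.execute("SELECT sequence FROM genomics.reads WHERE sample_id = 'S1'")
      print([row.sequence for row in rows])                              # read path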

  1. Hot chemistry in the diffuse medium: spectral signature in the H2 rotational lines

    NASA Astrophysics Data System (ADS)

    Verstraete, L.; Falgarone, E.; Pineau des Forets, G.; Flower, D.; Puget, J. L.

    1999-03-01

    Most of the diffuse interstellar medium is cold, but it must harbor pockets of hot gas to explain the large observed abundances of molecules like CH+ and HCO+. Because they locally dissipate large amounts of kinetic energy, MHD shocks and coherent vortices in turbulence can drive endothermic chemical reactions or reactions with large activation barriers. We predict the spectroscopic signatures in the H2 rotational lines of MHD shocks and vortices and compare them to those observed with the ISO-SWS along a line of sight through the Galaxy which samples 20 magnitudes of mostly diffuse gas.

  2. Estimates of ozone response to various combinations of NO(x) and VOC emission reductions in the eastern United States

    NASA Technical Reports Server (NTRS)

    Roselle, Shawn J.; Schere, Kenneth L.; Chu, Shao-Hang

    1994-01-01

    There is increasing recognition that controls on NO(x) emissions may be necessary, in addition to existing and future Volatile Organic Compounds (VOC) controls, for the abatement of ozone (O3) over portions of the United States. This study compares various combinations of anthropogenic NO(x) and VOC emission reductions through a series of model simulations. A total of 6 simulations were performed with the Regional Oxidant Model (ROM) for a 9-day period in July 1988. Each simulation reduced anthropogenic NO(x) and VOC emissions across-the-board by different amounts. Maximum O3 concentrations for the period were compared between the simulations. Comparison of the simulations suggests that: (1) NO(x) controls may be more effective than VOC controls in reducing peak O3 over most of the eastern United States; (2) VOC controls are most effective in urban areas having large sources of emissions; (3) NO(x) controls may increase O3 near large point sources; and (4) the benefit gained from increasing the amount of VOC controls may lessen as the amount of NO(x) control is increased. This paper has been reviewed in accordance with the U.S. Environmental Protection Agency's peer and administrative review policies and approved for presentation and publication. Mention of trade names or commercial products does not constitute endorsement or recommendation for use.

  3. Effect of Expansion of Fertilization Width on Nitrogen Recovery Rate in Tea Plants

    NASA Astrophysics Data System (ADS)

    Nonaka, Kunihiko; Hirono, Yuhei; Watanabe, Iriki

    In the cultivation of tea plants, large amounts of nitrogen, compared to the amounts used for other crops, have been used for fertilization, resulting in degradation of the soil environment between hedges and an increase in concentrations of nitrate nitrogen in surrounding water systems. To reduce the environmental load, new methods of fertilizer application are needed. This report deals with the effect of expansion of fertilization width on the nitrogen recovery rate in tea plants. In the test field, 15N-labeled ammonium sulfate was applied on top of the customary fertilization, using either between-hedges fertilization (fertilization width of 15 cm) or wide fertilization (fertilization width of 40 cm), and nitrogen recovery rates were compared. Expansion of fertilization width resulted in an approximately 30% increase in nitrogen recovery rate compared to that in the case of fertilization between hedges. Increases in nitrogen recovery rates were observed with fall-applied fertilization, spring-applied fertilization, pop-up fertilizer application, and summer-applied fertilization.
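
    Recovery rates of this kind are typically computed by isotope dilution: the fraction of plant nitrogen derived from the labeled fertilizer is the ratio of the 15N atom% excess in the plant to that of the fertilizer. A worked example with assumed numbers:

      # Isotope-dilution recovery calculation with assumed (illustrative) numbers.
      n_applied_g = 40.0           # labeled N applied per plot, g
      atom_pct_excess_fert = 5.0   # 15N atom% excess of the labeled ammonium sulfate
      plant_n_g = 60.0             # total N taken up by the tea plants, g
      atom_pct_excess_plant = 1.0  # 15N atom% excess measured in the plant

      n_from_fertilizer = plant_n_g * atom_pct_excess_plant / atom_pct_excess_fert
      recovery_pct = 100.0 * n_from_fertilizer / n_applied_g
      print(f"N derived from fertilizer: {n_from_fertilizer:.1f} g; "
            f"recovery rate: {recovery_pct:.0f}%")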

  4. How restrained eaters perceive the amount they eat.

    PubMed

    Jansen, A

    1996-09-01

    The cognitive model of binge eating states that it is the awareness of a broken diet that disinhibits the restrained eater. It is, according to that model, the perception of having overeaten that triggers disinhibited eating. However, although the perception of the amount eaten plays a central role in cognitive restraint theory, it has never directly been tested how restrained subjects perceive the amount of food they eat. In the present studies, participants were given ad libitum access to large amounts of palatable food and both their perception of the amount eaten and their estimated caloric intake were compared with the amount they actually ate. The restrained participants in these studies ate more than the unrestrained participants. In the first and second studies, the restrained participants consumed 571 and 372 'forbidden' calories respectively, without having the feeling that they had eaten very much, let alone too much. Moreover in both studies, the restrained eaters underestimated their caloric intake, whereas unrestrained eaters estimated their caloric intake quite well. The potential implications of the present findings for the cognitive restraint model are discussed.

  5. Computationally efficient simulation of unsteady aerodynamics using POD on the fly

    NASA Astrophysics Data System (ADS)

    Moreno-Ramos, Ruben; Vega, José M.; Varas, Fernando

    2016-12-01

    Modern industrial aircraft design requires a large amount of sufficiently accurate aerodynamic and aeroelastic simulations. Current computational fluid dynamics (CFD) solvers with aeroelastic capabilities, such as the NASA URANS unstructured solver FUN3D, require very large computational resources. Since a very large amount of simulation is necessary, the CFD cost is just unaffordable in an industrial production environment and must be significantly reduced. Thus, a more inexpensive, yet sufficiently precise solver is strongly needed. An opportunity to approach this goal could follow some recent results (Terragni and Vega 2014 SIAM J. Appl. Dyn. Syst. 13 330-65; Rapun et al 2015 Int. J. Numer. Meth. Eng. 104 844-68) on an adaptive reduced order model that combines ‘on the fly’ a standard numerical solver (to compute some representative snapshots), proper orthogonal decomposition (POD) (to extract modes from the snapshots), Galerkin projection (onto the set of POD modes), and several additional ingredients such as projecting the equations using a limited amount of points and fairly generic mode libraries. When applied to the complex Ginzburg-Landau equation, the method produces acceleration factors (comparing with standard numerical solvers) of the order of 20 and 300 in one and two space dimensions, respectively. Unfortunately, the extension of the method to unsteady, compressible flows around deformable geometries requires new approaches to deal with deformable meshes, high-Reynolds numbers, and compressibility. A first step in this direction is presented considering the unsteady compressible, two-dimensional flow around an oscillating airfoil using a CFD solver in a rigidly moving mesh. POD on the Fly gives results whose accuracy is comparable to that of the CFD solver used to compute the snapshots.
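
    A bare-bones sketch of the POD step only (mode extraction from snapshots via the SVD and projection onto the leading modes); this is not the adaptive 'on the fly' method itself, and all sizes and data below are synthetic.

      import numpy as np

      # Bare-bones POD: snapshots are the columns of X; modes come from the thin SVD.
      # This is only the mode-extraction/projection step, with synthetic rank-5 data.
      rng = np.random.default_rng(1)
      n_dof, n_snap = 2000, 60
      X = rng.standard_normal((n_dof, 5)) @ rng.standard_normal((5, n_snap))

      U, s, _ = np.linalg.svd(X, full_matrices=False)
      energy = np.cumsum(s**2) / np.sum(s**2)
      r = int(np.searchsorted(energy, 0.999)) + 1      # modes for 99.9% of the "energy"
      modes = U[:, :r]                                 # POD basis

      a = modes.T @ X                                  # reduced (Galerkin) coordinates
      X_rec = modes @ a                                # low-order reconstruction
      print(r, float(np.linalg.norm(X - X_rec) / np.linalg.norm(X)))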

  6. An Exploration on Greenhouse Gas and Ammonia Production by Insect Species Suitable for Animal or Human Consumption

    PubMed Central

    Oonincx, Dennis G. A. B.; van Itterbeeck, Joost; Heetkamp, Marcel J. W.; van den Brand, Henry; van Loon, Joop J. A.; van Huis, Arnold

    2010-01-01

    Background Greenhouse gas (GHG) production, as a cause of climate change, is considered as one of the biggest problems society is currently facing. The livestock sector is one of the large contributors of anthropogenic GHG emissions. Also, large amounts of ammonia (NH3), leading to soil nitrification and acidification, are produced by livestock. Therefore other sources of animal protein, like edible insects, are currently being considered. Methodology/Principal Findings An experiment was conducted to quantify production of carbon dioxide (CO2) and average daily gain (ADG) as a measure of feed conversion efficiency, and to quantify the production of the greenhouse gases methane (CH4) and nitrous oxide (N2O) as well as NH3 by five insect species of which the first three are considered edible: Tenebrio molitor, Acheta domesticus, Locusta migratoria, Pachnoda marginata, and Blaptica dubia. Large differences were found among the species regarding their production of CO2 and GHGs. The insects in this study had a higher relative growth rate and emitted comparable or lower amounts of GHG than described in literature for pigs and much lower amounts of GHG than cattle. The same was true for CO2 production per kg of metabolic weight and per kg of mass gain. Furthermore, also the production of NH3 by insects was lower than for conventional livestock. Conclusions/Significance This study therefore indicates that insects could serve as a more environmentally friendly alternative for the production of animal protein with respect to GHG and NH3 emissions. The results of this study can be used as basic information to compare the production of insects with conventional livestock by means of a life cycle analysis. PMID:21206900

  7. Distribution and biokinetic analysis of 210Pb and 210Po in poultry due to ingestion of dicalcium phosphate.

    PubMed

    Casacuberta, N; Traversa, F L; Masqué, P; Garcia-Orellana, J; Anguita, M; Gasa, J; Garcia-Tenorio, R

    2010-09-15

    Dicalcium phosphate (DCP) is used as a calcium supplement for food-producing animals (i.e., cattle, poultry and pigs). When DCP is produced via wet acid digestion of the phosphate rock, and depending on the acid used in the industrial process, the final product can have enhanced (210)Pb and (210)Po specific activities (approximately 2000 Bq.kg(-1)). Both (210)Pb and (210)Po are of great interest because their contribution to the dose received by ingestion is potentially large. The aims of this work are to examine the accumulation of (210)Pb and (210)Po in chicken tissues during the first 42 days of life and to build a suitable single-compartment biokinetic model, based on the experimental results, to understand the behavior of both radionuclides within the entire animal. Three commercial corn-soybean-based diets containing different amounts and sources of DCP were fed to broilers during a period of 42 days. The results show that diets containing enhanced concentrations of (210)Pb and (210)Po lead to larger specific accumulation in broiler tissues compared to the blank diet. Radionuclides do not accumulate homogeneously within the animal body: (210)Pb follows the calcium pathways to some extent and accumulates largely in bones, while (210)Po accumulates to a large extent in liver and kidneys. However, the total amount of radionuclide accumulation in tissues is small compared to the amounts excreted in feces. The single-compartment non-linear biokinetic model proposed here for (210)Pb and (210)Po in the whole animal takes into account the size evolution of the animal and is self-consistent in that no fitting parameterization of intake and excretion rates is required. Copyright 2010 Elsevier B.V. All rights reserved.
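
    The record does not give the model equations, so the following is only a minimal, hypothetical sketch of a single-compartment biokinetic balance (intake minus mass-dependent elimination) of the general kind described; the intake rate, elimination law, and growth curve are invented placeholders, not the authors' fitted values.

      import numpy as np
      from scipy.integrate import solve_ivp

      def biokinetic_model(t, q, intake_rate, k_out, mass_of):
          """Single-compartment balance for a radionuclide burden q(t) (Bq):
          dq/dt = intake(t) - k_out(m(t)) * q, where the elimination rate is
          allowed to depend on the growing body mass m(t) (hence non-linear)."""
          m = mass_of(t)
          return intake_rate(t) - k_out(m) * q

      # Hypothetical inputs: constant dietary intake, mass-dependent elimination,
      # and a logistic growth curve for broiler mass over 42 days.
      intake = lambda t: 5.0                                      # Bq per day
      k_out = lambda m: 0.25 * m ** (-0.25)                       # 1/day
      mass = lambda t: 2.5 / (1.0 + np.exp(-0.15 * (t - 21.0)))   # kg

      sol = solve_ivp(biokinetic_model, (0.0, 42.0), [0.0],
                      args=(intake, k_out, mass), dense_output=True)
      print("burden at day 42 (Bq):", float(sol.y[0, -1]))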

  8. An exploration on greenhouse gas and ammonia production by insect species suitable for animal or human consumption.

    PubMed

    Oonincx, Dennis G A B; van Itterbeeck, Joost; Heetkamp, Marcel J W; van den Brand, Henry; van Loon, Joop J A; van Huis, Arnold

    2010-12-29

    Greenhouse gas (GHG) production, as a cause of climate change, is considered one of the biggest problems society is currently facing. The livestock sector is one of the large contributors to anthropogenic GHG emissions. Large amounts of ammonia (NH(3)), leading to soil nitrification and acidification, are also produced by livestock. Therefore, other sources of animal protein, like edible insects, are currently being considered. An experiment was conducted to quantify the production of carbon dioxide (CO₂) and average daily gain (ADG) as a measure of feed conversion efficiency, and to quantify the production of the greenhouse gases methane (CH₄) and nitrous oxide (N₂O) as well as NH₃ by five insect species, of which the first three are considered edible: Tenebrio molitor, Acheta domesticus, Locusta migratoria, Pachnoda marginata, and Blaptica dubia. Large differences were found among the species regarding their production of CO₂ and GHGs. The insects in this study had a higher relative growth rate and emitted comparable or lower amounts of GHG than described in the literature for pigs, and much lower amounts of GHG than cattle. The same was true for CO₂ production per kg of metabolic weight and per kg of mass gain. The production of NH₃ by insects was also lower than that of conventional livestock. This study therefore indicates that insects could serve as a more environmentally friendly alternative for the production of animal protein with respect to GHG and NH₃ emissions. The results of this study can be used as basic information to compare the production of insects with conventional livestock by means of a life cycle analysis.

  9. Destruction of Navy Hazardous Wastes by Supercritical Water Oxidation

    DTIC Science & Technology

    1994-08-01

    cleaning and derusting (nitrite and citric acid solutions), electroplating (acids and metal-bearing solutions), electronics and refrigeration... Waste streams that contain a large amount of mineral-acid-forming chemical species or that contain a large amount of dissolved solids present a challenge to current SCWO technology.

  10. Plasma homovanillic acid in the prodromal phase of schizophrenia.

    PubMed

    Sumiyoshi, T; Kurachi, M; Kurokawa, K; Yotsutsuji, T; Uehara, T; Itoh, H; Saitoh, O

    2000-03-01

    Plasma levels of homovanillic acid (pHVA) have been used as a peripheral measure of central dopaminergic activity. Despite a large body of studies investigating pHVA in schizophrenia, little is known about pHVA in patients in the prodromal phase of the illness. Plasma HVA levels of 12 male outpatients meeting DSM-III-R criteria for the prodromal phase of schizophrenia at the time of blood sampling (who later developed psychotic symptoms) were compared with those of 12 healthy male volunteers. Task amounts in the Kraepelin arithmetic test at the time of blood sampling were compared between the prodromal patients and normal controls and were correlated with pHVA levels. The prodromal patients had significantly higher pHVA levels than the normal control subjects. The mean amount of the arithmetic task completed by the prodromal patients was significantly less than that completed by controls. In the patient group, a significant negative correlation was observed between pHVA levels and the task amounts. Data from the present study indicate the presence of dopaminergic dysfunction in the prodromal stage of schizophrenia that is associated with neuropsychological impairment. Increased pHVA levels in prodromal patients may have implications for the early detection of schizophrenia.

  11. Empirical relationships between tree fall and landscape-level amounts of logging and fire

    PubMed Central

    Blanchard, Wade; Blair, David; McBurney, Lachlan; Stein, John; Banks, Sam C.

    2018-01-01

    Large old trees are critically important keystone structures in forest ecosystems globally. Populations of these trees are also in rapid decline in many forest ecosystems, making it important to quantify the factors that influence their dynamics at different spatial scales. Large old trees often occur in forest landscapes also subject to fire and logging. However, the effects of the amount of logging and fire in the surrounding landscape on the risk of collapse of large old trees are not well understood. Using an 18-year study in the Mountain Ash (Eucalyptus regnans) forests of the Central Highlands of Victoria, we quantify relationships between the probability of collapse of large old hollow-bearing trees at a site and the amount of logging and the amount of fire in the surrounding landscape. We found that the probability of collapse increased with an increasing amount of logged forest in the surrounding landscape. It also increased with a greater amount of burned area in the surrounding landscape, particularly for trees in highly advanced stages of decay. The most likely explanation for elevated tree fall with an increasing amount of logged or burned area in the surrounding landscape is a change in wind movement patterns associated with cutblocks or burned areas. Previous studies show that large old hollow-bearing trees are already at high risk of collapse in our study area. New analyses presented here indicate that additional logging operations in the surrounding landscape will further elevate that risk. Current logging prescriptions require the protection of large old hollow-bearing trees on cutblocks. We suggest that efforts to reduce the probability of collapse of large old hollow-bearing trees on unlogged sites will demand careful landscape planning to limit the amount of timber harvesting in the surrounding landscape. PMID:29474487
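
    The abstract does not specify the statistical model, so the sketch below only illustrates the general idea of relating a binary collapse outcome to landscape-level amounts of logging and fire with a logistic regression on synthetic data; the coefficients and data are invented, not the authors' results.

      import numpy as np
      import statsmodels.api as sm

      # Synthetic stand-in data: fraction of surrounding landscape logged and
      # burned, plus a binary collapse outcome for each monitored tree.
      rng = np.random.default_rng(1)
      n = 400
      logged = rng.uniform(0, 0.6, n)
      burned = rng.uniform(0, 0.8, n)
      logit_p = -2.0 + 3.0 * logged + 2.0 * burned      # assumed true effects
      collapsed = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

      X = sm.add_constant(np.column_stack([logged, burned]))
      fit = sm.Logit(collapsed, X).fit(disp=False)
      print(fit.params)   # positive coefficients -> higher collapse probability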

  12. Empirical relationships between tree fall and landscape-level amounts of logging and fire.

    PubMed

    Lindenmayer, David B; Blanchard, Wade; Blair, David; McBurney, Lachlan; Stein, John; Banks, Sam C

    2018-01-01

    Large old trees are critically important keystone structures in forest ecosystems globally. Populations of these trees are also in rapid decline in many forest ecosystems, making it important to quantify the factors that influence their dynamics at different spatial scales. Large old trees often occur in forest landscapes also subject to fire and logging. However, the effects of the amount of logging and fire in the surrounding landscape on the risk of collapse of large old trees are not well understood. Using an 18-year study in the Mountain Ash (Eucalyptus regnans) forests of the Central Highlands of Victoria, we quantify relationships between the probability of collapse of large old hollow-bearing trees at a site and the amount of logging and the amount of fire in the surrounding landscape. We found that the probability of collapse increased with an increasing amount of logged forest in the surrounding landscape. It also increased with a greater amount of burned area in the surrounding landscape, particularly for trees in highly advanced stages of decay. The most likely explanation for elevated tree fall with an increasing amount of logged or burned area in the surrounding landscape is a change in wind movement patterns associated with cutblocks or burned areas. Previous studies show that large old hollow-bearing trees are already at high risk of collapse in our study area. New analyses presented here indicate that additional logging operations in the surrounding landscape will further elevate that risk. Current logging prescriptions require the protection of large old hollow-bearing trees on cutblocks. We suggest that efforts to reduce the probability of collapse of large old hollow-bearing trees on unlogged sites will demand careful landscape planning to limit the amount of timber harvesting in the surrounding landscape.

  13. Simplified method for calculating shear deflections of beams.

    Treesearch

    I. Orosz

    1970-01-01

    When one designs with wood, shear deflections can become substantial compared to deflections due to moments, because the modulus of elasticity in bending differs from that in shear by a large amount. This report presents a simplified energy method to calculate shear deflections in bending members. This simplified approach should help designers decide whether or not...
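
    For orientation only, a standard unit-load (energy) expression for the shear contribution to beam deflection is reproduced below; the report's own simplified formulation may differ in detail.

      \[
      \delta_s \;=\; \int_0^L \frac{k\,V(x)\,v(x)}{G A}\,dx ,
      \qquad
      \delta_{s,\text{midspan}} \;=\; \frac{k\,P L}{4\,G A}
      \quad\text{(simply supported beam, central point load } P\text{)},
      \]
      where \(V\) is the shear force from the applied loads, \(v\) the shear force from a unit load at the point of interest, \(G\) the shear modulus, \(A\) the cross-sectional area, and \(k\) a shear (form) factor that depends on the cross-section (e.g., 6/5 for a rectangular section).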

  14. Genetic variation between populations of the stable fly, Stomoxys calcitrans (L.)(Diptera: Muscidae) from Nebraska, Denmark and Australia

    USDA-ARS?s Scientific Manuscript database

    The stable fly, Stomoxys calcitrans L., is a cosmopolitan, major pest of livestock. Previous studies on this insect, from samples within the United States, suggested a large amount of gene flow; more genetic variation was detected within populations than between populations. To compare the genetic v...

  15. Ignorance- versus Evidence-Based Decision Making: A Decision Time Analysis of the Recognition Heuristic

    ERIC Educational Resources Information Center

    Hilbig, Benjamin E.; Pohl, Rudiger F.

    2009-01-01

    According to part of the adaptive toolbox notion of decision making known as the recognition heuristic (RH), the decision process in comparative judgments--and its duration--is determined by whether recognition discriminates between objects. By contrast, some recently proposed alternative models predict that choices largely depend on the amount of…

  16. Evidence of cue synergism in termite corpse response behavior

    Treesearch

    Michael D. Ulyshen; Thomas G. Shelton

    2012-01-01

    Subterranean termites of the genus Reticulitermes are known to build walls and tubes and move considerable amounts of soil into wood but the causes of this behavior remain largely unexplored. In laboratory assays, we tested the hypothesis that Reticulitermes virginicus (Banks) would carry more sand into wooden blocks containing corpses compared to corpse-free controls...

  17. Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency

    PubMed Central

    Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio

    2015-01-01

    Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them is the management of the massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. Finding an alternative to the frequently used relational database model becomes a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB. PMID:26558254
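
    As an illustration of the kind of layout such a study might use, the following sketch stores sequence reads in Cassandra with the DataStax Python driver; the keyspace, table, and column names are hypothetical and are not the schema used in the paper.

      from cassandra.cluster import Cluster  # DataStax Python driver

      # Connect to a local Cassandra node (address and keyspace are hypothetical).
      cluster = Cluster(["127.0.0.1"])
      session = cluster.connect()

      session.execute("""
          CREATE KEYSPACE IF NOT EXISTS genomics
          WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
      """)
      session.set_keyspace("genomics")

      # A wide-row table keyed by sample, clustered by read identifier.
      session.execute("""
          CREATE TABLE IF NOT EXISTS reads (
              sample_id text,
              read_id   text,
              sequence  text,
              quality   text,
              PRIMARY KEY (sample_id, read_id)
          )
      """)

      insert = session.prepare(
          "INSERT INTO reads (sample_id, read_id, sequence, quality) VALUES (?, ?, ?, ?)")
      session.execute(insert, ("S1", "read-0001", "ACGTACGT", "IIIIIIII"))

      rows = session.execute(
          "SELECT read_id, sequence FROM reads WHERE sample_id = %s", ("S1",))
      for row in rows:
          print(row.read_id, row.sequence)
      cluster.shutdown()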

  18. Method and apparatus for displaying information

    NASA Technical Reports Server (NTRS)

    Huang, Sui (Inventor); Eichler, Gabriel (Inventor); Ingber, Donald E. (Inventor)

    2010-01-01

    A method for displaying large amounts of information. The method includes the steps of forming a spatial layout of tiles each corresponding to a representative reference element; mapping observed elements onto the spatial layout of tiles of representative reference elements; assigning a respective value to each respective tile of the spatial layout of the representative elements; and displaying an image of the spatial layout of tiles of representative elements. Each tile includes atomic attributes of representative elements. The invention also relates to an apparatus for displaying large amounts of information. The apparatus includes a tiler forming a spatial layout of tiles, each corresponding to a representative reference element; a comparator mapping observed elements onto said spatial layout of tiles of representative reference elements; an assigner assigning a respective value to each respective tile of said spatial layout of representative reference elements; and a display displaying an image of the spatial layout of tiles of representative reference elements.
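
    A minimal sketch of the mapping step described in the claim language is given below, assuming Euclidean nearest-tile assignment and a mean-value aggregation per tile; both choices, and the toy data, are assumptions for illustration rather than the patented method itself.

      import numpy as np

      def map_to_tiles(reference, observed, values):
          """Assign each observed element to its nearest reference tile and
          give every tile the mean value of the elements mapped onto it."""
          # distance of every observed element to every reference tile
          d = np.linalg.norm(observed[:, None, :] - reference[None, :, :], axis=2)
          nearest = d.argmin(axis=1)
          tile_values = np.full(len(reference), np.nan)
          for t in range(len(reference)):
              hits = values[nearest == t]
              if hits.size:
                  tile_values[t] = hits.mean()
          return nearest, tile_values

      # Hypothetical 2-D layout of 16 reference tiles and 50 observed elements.
      rng = np.random.default_rng(2)
      ref = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)
      obs = rng.uniform(0, 3, size=(50, 2))
      vals = rng.normal(size=50)
      assignment, tile_vals = map_to_tiles(ref, obs, vals)
      print(tile_vals.reshape(4, 4))   # the image that would be displayed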

  19. A machine learning model with human cognitive biases capable of learning from small and biased datasets.

    PubMed

    Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro

    2018-05-09

    Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods, naïve Bayes, support vector machine, neural networks, logistic regression and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.
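
    The cognitive-bias model itself is not specified in the abstract, so the sketch below only shows the conventional baseline it is compared against: a multinomial naive Bayes spam classifier trained on a deliberately small, biased toy sample (the texts and labels are invented).

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB

      # A deliberately small and biased training sample (mostly spam),
      # standing in for the "small and biased datasets" in the study.
      train_texts = [
          "win a free prize now", "cheap meds online", "claim your reward",
          "free money guaranteed", "meeting agenda attached",
      ]
      train_labels = [1, 1, 1, 1, 0]          # 1 = spam, 0 = ham

      vectorizer = CountVectorizer()
      X_train = vectorizer.fit_transform(train_texts)
      clf = MultinomialNB().fit(X_train, train_labels)

      test_texts = ["free prize meeting", "project agenda for tomorrow"]
      X_test = vectorizer.transform(test_texts)
      print(clf.predict(X_test))              # baseline predictions to compare against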

  20. Fuzzy Document Clustering Approach using WordNet Lexical Categories

    NASA Astrophysics Data System (ADS)

    Gharib, Tarek F.; Fouad, Mohammed M.; Aref, Mostafa M.

    Text mining refers generally to the process of extracting interesting information and knowledge from unstructured text. This area is growing rapidly, mainly because of the strong need for analysing the huge amount of textual data that resides on internal file systems and the Web. Text document clustering provides an effective navigation mechanism to organize this large amount of data by grouping documents into a small number of meaningful classes. In this paper we propose a fuzzy text document clustering approach using WordNet lexical categories and the Fuzzy c-Means algorithm. Experiments are performed to compare the efficiency of the proposed approach with recently reported approaches. Experimental results show that fuzzy clustering leads to strong performance; the Fuzzy c-Means algorithm outperforms classical clustering algorithms such as k-means and bisecting k-means in both clustering quality and running-time efficiency.
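
    A minimal NumPy sketch of the Fuzzy c-Means updates applied to toy document vectors (e.g., counts over a few lexical categories) is given below; the fuzzifier m, iteration count, and data are illustrative assumptions, and the paper's WordNet feature-extraction step is not reproduced.

      import numpy as np

      def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
          """Plain fuzzy c-means: alternate membership and centroid updates
          for n_iter passes. X is (n_docs, n_features)."""
          rng = np.random.default_rng(seed)
          U = rng.random((len(X), c))
          U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships
          for _ in range(n_iter):
              W = U ** m
              centroids = (W.T @ X) / W.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
              U = 1.0 / (d ** (2 / (m - 1)))
              U /= U.sum(axis=1, keepdims=True)
          return U, centroids

      # Toy "documents" described by counts over a handful of lexical categories.
      docs = np.array([[5, 0, 1], [4, 1, 0], [0, 5, 2], [1, 4, 3]], dtype=float)
      memberships, centers = fuzzy_c_means(docs, c=2)
      print(memberships.round(2))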

  1. A simple biosynthetic pathway for large product generation from small substrate amounts

    NASA Astrophysics Data System (ADS)

    Djordjevic, Marko; Djordjevic, Magdalena

    2012-10-01

    A recently emerging discipline of synthetic biology has the aim of constructing new biosynthetic pathways with useful biological functions. A major application of these pathways is generating a large amount of the desired product. However, toxicity due to the possible presence of toxic precursors is one of the main problems for such production. We consider here the problem of generating a large amount of product from a potentially toxic substrate. To address this, we propose a simple biosynthetic pathway, which can be induced in order to produce a large number of product molecules while keeping the substrate amount at low levels. Surprisingly, we show that large product generation crucially depends on fast non-specific degradation of the substrate molecules. We derive an optimal induction strategy, which allows as much as a three-order-of-magnitude increase in the product amount for biologically realistic parameter values. We point to a recently discovered bacterial immune system (CRISPR/Cas in E. coli) as a putative example of the pathway analysed here. We also argue that the scheme proposed here can be used not only as a stand-alone pathway, but also as a strategy to produce a large amount of the desired molecules with small perturbations of endogenous biosynthetic pathways.
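
    The paper's full model and optimal induction strategy are not reproduced here; the sketch below is only a minimal two-variable ODE skeleton (substrate produced at an induction rate, removed by non-specific degradation, and converted to product) with invented rate constants, intended to show the structure rather than the counterintuitive result derived in the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      def pathway(t, y, k_in, k_deg, k_cat):
          """dS/dt = k_in - (k_deg + k_cat) * S ;  dP/dt = k_cat * S.
          Fast non-specific degradation (large k_deg) keeps the potentially
          toxic substrate S at a low steady level during induction."""
          S, P = y
          return [k_in - (k_deg + k_cat) * S, k_cat * S]

      sol = solve_ivp(pathway, (0.0, 100.0), [0.0, 0.0],
                      args=(1.0, 5.0, 0.5), dense_output=True)
      S_end, P_end = sol.y[:, -1]
      print(f"steady substrate ~ {S_end:.3f}, accumulated product ~ {P_end:.1f}")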

  2. Influence of dietary fiber on luminal environment and morphology in the small and large intestine of sows.

    PubMed

    Serena, A; Hedemann, M S; Bach Knudsen, K E

    2008-09-01

    The effect of feeding different types and amounts of dietary fiber (DF) on the luminal environment and morphology in the small and large intestine of sows was studied. Three diets were used: a low-fiber diet (LF) and 2 high-fiber diets (high fiber 1, HF1, and high fiber 2, HF2). Diet LF (DF, 17%; soluble DF, 4.6%) was based on wheat and barley, whereas the 2 high-fiber diets (HF1: DF, 43%; soluble DF, 11.0%; and HF2: DF, 45%; soluble DF, 7.6%) were based on wheat and barley supplemented with different coproducts from the vegetable food and agroindustry (HF1 and HF2: sugar beet pulp, potato pulp, and pectin residue; HF2: brewers spent grain, seed residue, and pea hull). The diets were fed for a 4-wk period to 12 sows (4 receiving each diet). Thereafter, the sows were killed 4 h postfeeding, and digesta and tissue samples were collected from various parts of the small and large intestine. The carbohydrates in the LF diet were well digested in the small intestine, resulting in less digesta in all segments of the intestinal tract. The fermentation of nonstarch polysaccharides in the large intestine was affected by their chemical composition and physicochemical properties. The digesta from pigs fed the LF diet provided low levels of fermentable carbohydrates that were depleted in the proximal colon, whereas for pigs fed the 2 high-DF diets, the digesta was depleted of fermentable carbohydrates at more distal locations of the colon. The consequence was an increased retention time, greater DM percentage, decreased amount of material, and a decreased tissue weight after feeding the LF diet compared with the HF diets. The concentration of short-chain fatty acids was consistent with the fermentability of carbohydrates in the large intestine, but there was no effect of the dietary composition on the molar short-chain fatty acid proportions. It was further shown that feeding the diet providing the greatest amount of fermentable carbohydrates (diet HF1, which was high in soluble DF) resulted in significant morphological changes in the colon compared with the LF diet.

  3. Advanced spacecraft thermal control techniques

    NASA Technical Reports Server (NTRS)

    Fritz, C. H.

    1977-01-01

    The problems of rejecting large amounts of heat from spacecraft were studied. Shuttle Space Laboratory heat rejection uses 1 kW of pump and fan power for every 5 kW of thermal heat rejection. This is rather inefficient, and more efficient methods were examined for future programs. Two advanced systems were studied and compared to the present pumped-loop system. The advanced concepts are the air-cooled semipassive system, which features rejection of a large percentage of the load through the outer skin, and the heat pipe system, which incorporates heat pipes for every thermal control function.

  4. [Physical properties of resins for veneer crown. (Part 1) Bending strength of thermosetting methacrylic resins (author's transl)].

    PubMed

    Kashiwada, T

    1979-01-01

    The physical properties of thermosetting methacrylic resins containing one or more kinds of cross-linking agents were investigated. Knoop hardness and bending strength after drying, water sorption and thermal cycling are listed in Tables 4 and 5. Hydrophilic resins absorbed about 3 times as much water as hydrophobic resins. Materials containing a small amount of hydrophobic cross-linking agent in MMA showed comparatively excellent properties after drying, water sorption and thermal cycling. The Knoop hardness of the resins was generally reduced by water sorption, especially in the case of the resin containing a large amount of triethylene glycol dimethacrylate.

  5. Large-Eddy Simulation of Shallow Cumulus over Land: A Composite Case Based on ARM Long-Term Observations at Its Southern Great Plains Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yunyan; Klein, Stephen A.; Fan, Jiwen

    Based on long-term observations by the Atmospheric Radiation Measurement program at its Southern Great Plains site, a new composite case of continental shallow cumulus (ShCu) convection is constructed for large-eddy simulations (LES) and single-column models. The case represents a typical daytime nonprecipitating ShCu whose formation and dissipation are driven by the local atmospheric conditions and land surface forcing and are not influenced by synoptic weather events. The case includes early morning initial profiles of temperature and moisture with a residual layer; diurnally varying sensible and latent heat fluxes, which represent a domain average over different land surface types; simplified large-scale horizontal advective tendencies and subsidence; and horizontal winds with prevailing direction and average speed. Observed composite cloud statistics are provided for model evaluation. The observed diurnal cycle is well reproduced by LES; however, the cloud amount, liquid water path, and shortwave radiative effect are generally underestimated. LES are compared between simulations with an all-or-nothing bulk microphysics and a spectral bin microphysics. The latter shows improved agreement with observations in the total cloud cover and the amount of clouds with depths greater than 300 m. When compared with radar retrievals of in-cloud air motion, LES produce comparable downdraft vertical velocities, but a larger updraft area, velocity, and updraft mass flux. Both observations and LES show a significantly larger in-cloud downdraft fraction and downdraft mass flux than marine ShCu.

  6. Large-Eddy Simulation of Shallow Cumulus over Land: A Composite Case Based on ARM Long-Term Observations at Its Southern Great Plains Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yunyan; Klein, Stephen A.; Fan, Jiwen

    Based on long-term observations by the Atmospheric Radiation Measurement program at its Southern Great Plains site, a new composite case of continental shallow cumulus (ShCu) convection is constructed for large-eddy simulations (LES) and single-column models. The case represents a typical daytime non-precipitating ShCu whose formation and dissipation are driven by the local atmospheric conditions and land-surface forcing, and are not influenced by synoptic weather events. The case includes: early-morning initial profiles of temperature and moisture with a residual layer; diurnally-varying sensible and latent heat fluxes which represent a domain average over different land-surface types; simplified large-scale horizontal advective tendencies and subsidence; and horizontal winds with prevailing direction and average speed. Observed composite cloud statistics are provided for model evaluation. The observed diurnal cycle is well-reproduced by LES, however the cloud amount, liquid water path, and shortwave radiative effect are generally underestimated. LES are compared between simulations with an all-or-nothing bulk microphysics and a spectral bin microphysics. The latter shows improved agreement with observations in the total cloud cover and the amount of clouds with depths greater than 300 meters. When compared with radar retrievals of in-cloud air motion, LES produce comparable downdraft vertical velocities, but a larger updraft area, velocity and updraft mass flux. Finally, both observation and LES show a significantly larger in-cloud downdraft fraction and downdraft mass flux than marine ShCu.

  7. Large-Eddy Simulation of Shallow Cumulus over Land: A Composite Case Based on ARM Long-Term Observations at Its Southern Great Plains Site

    DOE PAGES

    Zhang, Yunyan; Klein, Stephen A.; Fan, Jiwen; ...

    2017-09-19

    Based on long-term observations by the Atmospheric Radiation Measurement program at its Southern Great Plains site, a new composite case of continental shallow cumulus (ShCu) convection is constructed for large-eddy simulations (LES) and single-column models. The case represents a typical daytime non-precipitating ShCu whose formation and dissipation are driven by the local atmospheric conditions and land-surface forcing, and are not influenced by synoptic weather events. The case includes: early-morning initial profiles of temperature and moisture with a residual layer; diurnally-varying sensible and latent heat fluxes which represent a domain average over different land-surface types; simplified large-scale horizontal advective tendencies and subsidence; and horizontal winds with prevailing direction and average speed. Observed composite cloud statistics are provided for model evaluation. The observed diurnal cycle is well-reproduced by LES, however the cloud amount, liquid water path, and shortwave radiative effect are generally underestimated. LES are compared between simulations with an all-or-nothing bulk microphysics and a spectral bin microphysics. The latter shows improved agreement with observations in the total cloud cover and the amount of clouds with depths greater than 300 meters. When compared with radar retrievals of in-cloud air motion, LES produce comparable downdraft vertical velocities, but a larger updraft area, velocity and updraft mass flux. Finally, both observation and LES show a significantly larger in-cloud downdraft fraction and downdraft mass flux than marine ShCu.

  8. Specific Method for the Determination of Ozone in the Atmosphere.

    ERIC Educational Resources Information Center

    Sachdev, Sham L.; And Others

    A description is given of work undertaken to develop a simple, specific, and reliable method for ozone. Reactions of ozone with several 1-alkenes were studied at room temperature (25 °C). Eugenol (4-allyl-2-methoxyphenol), when reacted with ozone, was found to produce relatively large amounts of formaldehyde compared to the other 1-alkenes tested.…

  9. Violence or Nonviolence: Which do We Choose?

    ERIC Educational Resources Information Center

    Schafer, John

    2005-01-01

    A large body of research on violence now exists, yet the utilitarian value of this vast amount of scientific endeavor may be rated as low when compared with the levels of violence abounding in the world today. The author calls for centralizing funding and work on violence at the national level in the United States, perhaps forming a…

  10. Production of Ginkgo leaf-shaped basidiocarps of the Lingzhi or Reishi medicinal mushroom Ganoderma lucidum (higher Basidiomycetes), containing high levels of α- and β-D-glucan and ganoderic acid A.

    PubMed

    Yajima, Yuka; Miyazaki, Minoru; Okita, Noriyasu; Hoshino, Tamotsu

    2013-01-01

    Ganoderic acid A and α- and β-D-glucan content were compared among morphologically different basidiocarps of the medicinal mushroom Ganoderma lucidum. Ginkgo leaf-shaped basidiocarps gradually hardened from the base to the pileus and accumulated a higher amount of bioactive components than normal (kidney-shaped) and antler/deer horn-shaped basidiocarps. In the normal G. lucidum stipe, the outer context contained the highest amount of α- and β-D-glucan (approximately 55%) and the highest amount of ganoderic acid A (approximately 0.3%). Ginkgo leaf-shaped G. lucidum had a large area of outer layer and stout outer context, which contributed to their high α- and β-D-glucan and ganoderic acid A content.

  11. Modeling an exhumed basin: A method for estimating eroded overburden

    USGS Publications Warehouse

    Poelchau, H.S.

    2001-01-01

    The Alberta Deep Basin in western Canada has undergone a large amount of erosion following deep burial in the Eocene. Basin modeling and simulation of burial and temperature history require estimates of maximum overburden for each gridpoint in the basin model. Erosion can be estimated using shale compaction trends. For instance, the widely used Magara method attempts to establish a sonic log gradient for shales and uses the extrapolation to a theoretical uncompacted shale value as a first indication of overcompaction and an estimate of the amount of erosion. Because such gradients are difficult to establish in many wells, an extension of this method was devised to help map erosion over a large area. Sonic Δt values of one suitable shale formation are calibrated with maximum-depth-of-burial estimates from sonic log extrapolation for several wells. The resulting regression equation can then be used to estimate and map the maximum depth of burial or amount of erosion for all wells in which this formation has been logged. The example from the Alberta Deep Basin shows that the magnitude of erosion calculated by this method is conservative and comparable to independent estimates using vitrinite reflectance gradient methods. © 2001 International Association for Mathematical Geology.
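
    A minimal sketch of the calibration idea, under the assumption of a simple linear relation between the marker shale's sonic transit time and maximum burial depth, is shown below; the calibration points and the linear form are invented placeholders, not the paper's regression.

      import numpy as np

      # Calibration wells: mean sonic transit time of the marker shale (us/ft)
      # and maximum burial depth (m) estimated from sonic-log extrapolation.
      dt_cal = np.array([75.0, 82.0, 90.0, 98.0, 105.0])
      zmax_cal = np.array([3600.0, 3200.0, 2800.0, 2350.0, 2000.0])

      slope, intercept = np.polyfit(dt_cal, zmax_cal, 1)   # linear calibration

      def eroded_overburden(dt_shale, present_depth):
          """Estimated erosion = calibrated maximum burial depth - present depth."""
          z_max = slope * dt_shale + intercept
          return max(z_max - present_depth, 0.0)

      print(eroded_overburden(dt_shale=88.0, present_depth=1900.0))  # metres removed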

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, P.D.

    Intercontinental ballistic missiles based in silos have become relatively vulnerable, at least in theory, to counter-force attacks. This theoretical vulnerability may not, in fact, be a serious practical concern; it is nonetheless troubling both to policy-makers and to the public. Furthermore, the present generation of ICBMs is aging (the Minuteman II single warhead missile will exceed its operational life-span early in the next decade) and significant restructuring of the ballistic missile force may well be necessary if a Strategic Arms Reduction Treaty (START) is signed. This paper compares several proposed schemes for modernizing the ICBM force. Because the rail-garrison MIRV'd mobile system is the least costly alternative to secure a large number of strategic warheads, it receives a comparatively large amount of attention.

  13. A successful strategy for the recovering of active P21, an insoluble recombinant protein of Trypanosoma cruzi

    NASA Astrophysics Data System (ADS)

    Santos, Marlus Alves Dos; Teixeira, Francesco Brugnera; Moreira, Heline Hellen Teixeira; Rodrigues, Adele Aud; Machado, Fabrício Castro; Clemente, Tatiana Mordente; Brigido, Paula Cristina; Silva, Rebecca Tavares E.; Purcino, Cecílio; Gomes, Rafael Gonçalves Barbosa; Bahia, Diana; Mortara, Renato Arruda; Munte, Claudia Elisabeth; Horjales, Eduardo; da Silva, Claudio Vieira

    2014-03-01

    Structural studies of proteins normally require large quantities of pure material that can only be obtained through heterologous expression systems and recombinant technique. In these procedures, large amounts of expressed protein are often found in the insoluble fraction, making protein purification from the soluble fraction inefficient, laborious, and costly. Usually, protein refolding is avoided due to a lack of experimental assays that can validate correct folding and that can compare the conformational population to that of the soluble fraction. Herein, we propose a validation method using simple and rapid 1D 1H nuclear magnetic resonance (NMR) spectra that can efficiently compare protein samples, including individual information of the environment of each proton in the structure.

  14. Correlation between emission property and concentration of Sn2+ center in the SnO-ZnO-P2O5 glass.

    PubMed

    Masai, Hirokazu; Tanimoto, Toshiro; Fujiwara, Takumi; Matsumoto, Syuji; Tokuda, Yomei; Yoko, Toshinobu

    2012-12-03

    The authors report on the correlation between the photoluminescence (PL) property and the SnO amount in SnO-ZnO-P2O5 (SZP) glass. In the PL excitation (PLE) spectra of the SZP glass containing the Sn2+ emission center, two S1 states, one of which is strongly affected by the SnO amount, are assumed to exist. The PLE band closely correlates with the optical band edge originating from Sn2+ species, and both red-shift markedly with increasing amount of SnO. The emission decay time of the SZP glass decreased with increasing amount of SnO, and the internal quantum efficiencies of the SZP glasses containing 1~5 mol% of SnO are comparable to that of MgWO4. It is expected that the composition-dependent S1 state (the lower energy excitation band) governs the quantum efficiency of the SZP glasses.

  15. A replacement for islet equivalents with improved reliability and validity.

    PubMed

    Huang, Han-Hung; Ramachandran, Karthik; Stehno-Bittel, Lisa

    2013-10-01

    Islet equivalent (IE), the standard estimate of isolated islet volume, is an essential measure to determine the amount of transplanted islet tissue in the clinic and is used in research laboratories to normalize results, yet it is based on the false assumption that all islets are spherical. Here, we developed and tested a new easy-to-use method to quantify islet volume with greater accuracy. Isolated rat islets were dissociated into single cells, and the total cell number per islet was determined by using computer-assisted cytometry. Based on the cell number per islet, we created a regression model to convert islet diameter to cell number with a high R2 value (0.8) and good validity and reliability with the same model applicable to young and old rats and males or females. Conventional IE measurements overestimated the tissue volume of islets. To compare results obtained using IE or our new method, we compared Glut2 protein levels determined by Western Blot and proinsulin content via ELISA between small (diameter≤100 μm) and large (diameter≥200 μm) islets. When normalized by IE, large islets showed significantly lower Glut2 level and proinsulin content. However, when normalized by cell number, large and small islets had no difference in Glut2 levels, but large islets contained more proinsulin. In conclusion, normalizing islet volume by IE overestimated the tissue volume, which may lead to erroneous results. Normalizing by cell number is a more accurate method to quantify tissue amounts used in islet transplantation and research.
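
    To make the two normalizations concrete, the sketch below contrasts the conventional IE count (total islet volume expressed in 150-µm-sphere equivalents) with a diameter-to-cell-number power-law calibration of the kind fitted in the study; the power-law coefficients are placeholders, not the published regression.

      import numpy as np

      def islet_equivalents(diameters_um):
          """Conventional IE: total islet volume divided by the volume of a
          150-um-diameter sphere, assuming every islet is a perfect sphere."""
          volumes = (np.pi / 6.0) * np.asarray(diameters_um, dtype=float) ** 3
          return volumes.sum() / ((np.pi / 6.0) * 150.0 ** 3)

      def cells_from_diameters(diameters_um, a=2.0, b=2.6):
          """Placeholder power-law calibration, cells ~ a * d**b, of the kind
          that would be fitted to dissociated-cell counts per islet."""
          return (a * np.asarray(diameters_um, dtype=float) ** b).sum()

      islets = [80, 120, 150, 220, 310]            # hypothetical diameters (um)
      print("IE:", round(islet_equivalents(islets), 2))
      print("estimated cells:", int(cells_from_diameters(islets)))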

  16. Querying Large Biological Network Datasets

    ERIC Educational Resources Information Center

    Gulsoy, Gunhan

    2013-01-01

    New experimental methods have resulted in an increasing amount of genetic interaction data being generated every day. Biological networks are used to store the genetic interaction data gathered. The increasing amount of available data requires fast, large-scale analysis methods. Therefore, we address the problem of querying large biological network datasets.…

  17. Requirement analysis to promote small-sized E-waste collection from consumers.

    PubMed

    Mishima, Kuniko; Nishimura, Hidekazu

    2016-02-01

    The collection and recycling of small-sized waste electrical and electronic equipment is an emerging problem, since these products contain certain amounts of critical metals and rare earths. Even if the amount is not large, having a few supply routes for such recycled resources could be a good strategy to be competitive in a world of finite resources. Small-sized e-waste sometimes contains personal information, and consumers are therefore often reluctant to put it into recycling bins. In order to promote the recycling of e-waste, collection of used products from the consumer becomes important. Effective methods involving incentives for consumers might be necessary; without such methods, it will be difficult to achieve the critical amounts necessary for an efficient recycling system. This article focused on used mobile phones among information appliances as the first case study, since they contain relatively large amounts of valuable metals compared with other small-sized waste electrical and electronic equipment and a large number of these products exist in the market. Surveys were carried out to determine what kinds of recycled-material collection services are preferred by consumers. The results clarify that incentive or reward money alone is not a driving force for recycling behaviour. The article discusses the types of effective services required to promote recycling behaviour. The article concludes that securing information, transferring data and providing proper information about resources and the environment can be an effective tool to encourage recycling behaviour, together with a potential discount on purchasing new products associated with the return of recycled mobile phones. © The Author(s) 2015.

  18. Tropospheric O3 over Indonesia during biomass burning events measured with GOME (Global Ozone Monitoring Experiment) and compared with backtrajectory calculation

    NASA Astrophysics Data System (ADS)

    Ladstaetter-Weissenmayer, A.; Meyer-Arnek, J.; Burrows, J. P.

    During the dry season, biomass burning is an important source of ozone precursors for the tropical troposphere, and ozone formation can occur in biomass burning plumes originating in Indonesia and northern Australia. Satellite-based GOME (Global Ozone Monitoring Experiment) data are used to characterize the amount of tropospheric ozone production over this region during the El Niño event in September 1997 compared to a so-called "normal" year, 1998. Large-scale biomass burning occurred over Kalimantan in 1997, caused by the absence of the northern monsoon rains, leading to significant increases in tropospheric ozone. Tropospheric ozone was determined from GOME data using the Tropospheric Excess Method (TEM). Backtrajectory calculations show that Indonesia is influenced every summer by the emissions of trace gases from biomass burning over northern Australia. In 1997, however, an increase in tropospheric ozone amounts over Indonesia was observed, caused by the fires over Indonesia itself as well as over northern Australia. Analysis of the BIBLE-A (Biomass Burning and Lightning Experiment) measurements together with ATSR (Along Track Scanning Radiometer) data shows differences in the intensity of fire counts, and therefore in the emission of tropospheric ozone precursors, between September 1997 and September 1998.
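
    In its simplest schematic form (which may differ from the exact retrieval used in this study), the Tropospheric Excess Method attributes to the troposphere the difference between the total O3 column at the location of interest and the total column over a clean reference longitude at the same latitude:

      \[
      \Omega_{\mathrm{trop}}(\lambda,\phi) \;\approx\; \Omega_{\mathrm{total}}(\lambda,\phi) \;-\; \Omega_{\mathrm{total}}(\lambda_{\mathrm{ref}},\phi),
      \]
      where \(\Omega\) is the ozone column amount, \(\phi\) latitude, \(\lambda\) longitude, and \(\lambda_{\mathrm{ref}}\) a reference longitude (e.g., over the remote Pacific) where the tropospheric column is assumed small and the stratospheric column zonally invariant.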

  19. Mechanical properties of concrete containing recycled concrete aggregate (RCA) and ceramic waste as coarse aggregate replacement

    NASA Astrophysics Data System (ADS)

    Khalid, Faisal Sheikh; Azmi, Nurul Bazilah; Sumandi, Khairul Azwa Syafiq Mohd; Mazenan, Puteri Natasya

    2017-10-01

    Many construction and development activities today consume large amounts of concrete. The amount of construction waste is also increasing because of the demolition process. Much of this waste can be recycled to produce new products and increase the sustainability of construction projects. As recyclable construction wastes, concrete and ceramic can replace the natural aggregate in concrete because of their hard and strong physical properties. This research used 25%, 35%, and 45% recycled concrete aggregate (RCA) and ceramic waste as coarse aggregate in producing concrete. Several tests, such as concrete cube compression and splitting tensile tests, were also performed to determine and compare the mechanical properties of the recycled concrete with those of the normal concrete that contains 100% natural aggregate. The concrete containing 35% RCA and 35% ceramic waste showed the best properties compared with the normal concrete.

  20. Characteristics of the microwave pyrolysis and microwave CO2-assisted gasification of dewatered sewage sludge.

    PubMed

    Chun, Young Nam; Jeong, Byeo Ri

    2017-07-28

    Microwave drying-pyrolysis or drying-gasification characteristics were examined to convert sewage sludge into energy and resources. The gasification was carried out with carbon dioxide as the gasifying agent. The results were compared with those of a conventional heating-type electric furnace to evaluate the product characteristics of both heating methods. The pyrolysis or gasification produced gas, tar, and char. The produced gas was the largest fraction in each process, followed by the sludge char and the tar. During the pyrolysis process, the main components of the produced gas were hydrogen and carbon monoxide, with a small amount of hydrocarbons such as methane and ethylene. In the gasification process, however, the amount of carbon monoxide was greater than the amount of hydrogen. In microwave gasification, a large amount of heavy tar was produced. Benzene was the most abundant component of the light tar generated from the pyrolysis or gasification. Ammonia and hydrogen cyanide, which are precursors of NOx, were also generated. In the microwave heating method, the sludge char produced by pyrolysis and gasification had pores in the mesopore range. These results indicate that the gas obtained from the microwave pyrolysis or gasification of wet sewage sludge can be used as an alternative fuel, although the tar and NOx precursors in the produced gas should be treated. The sludge char can be used as a biomass solid fuel or, if necessary, as a tar removal adsorbent.

  1. Aboveground tree growth varies with belowground carbon allocation in a tropical rainforest environment

    Treesearch

    J.W. Raich; D.A. Clark; L. Schwendenmann; Tana Wood

    2014-01-01

    Young secondary forests and plantations in the moist tropics often have rapid rates of biomass accumulation and thus sequester large amounts of carbon. Here, we compare results from mature forest and nearby 15–20 year old tree plantations in lowland Costa Rica to evaluate differences in allocation of carbon to aboveground production and root systems. We found that the...

  2. The role of explicit and implicit self-esteem in peer modeling of palatable food intake: a study on social media interaction among youngsters.

    PubMed

    Bevelander, Kirsten E; Anschütz, Doeschka J; Creemers, Daan H M; Kleinjan, Marloes; Engels, Rutger C M E

    2013-01-01

    This experimental study investigated the impact of peers on palatable food intake of youngsters within a social media setting. To determine whether this effect was moderated by self-esteem, the present study examined the roles of global explicit self-esteem (ESE), body esteem (BE) and implicit self-esteem (ISE). Participants (N = 118; 38.1% boys; M age 11.14±.79) were asked to play a computer game while they believed they were interacting online with a same-sex normal-weight remote confederate (i.e., an instructed peer) who ate either nothing, a small or a large amount of candy. Participants modeled the candy intake of peers via the social media interaction, but this was qualified by their self-esteem. Participants with higher ISE adjusted their candy intake to that of a peer more closely than those with lower ISE when the confederate ate nothing compared to when eating a modest (β = .26, p = .05) or considerable amount of candy (kcal) (β = .32, p = .001). In contrast, participants with lower BE modeled peer intake more than those with higher BE when eating nothing compared to a considerable amount of candy (kcal) (β = .21, p = .02); ESE did not moderate social modeling behavior. In addition, participants with higher discrepant or "damaged" self-esteem (i.e., high ISE and low ESE) modeled peer intake more when the peer ate nothing or a modest amount compared to a substantial amount of candy (kcal) (β = -.24, p = .004; β = -.26, p<.0001, respectively). Youngsters conform to the amount of palatable food eaten by peers through social media interaction. Those with lower body esteem or damaged self-esteem may be more susceptible to peer influences on food intake.

  3. The Role of Explicit and Implicit Self-Esteem in Peer Modeling of Palatable Food Intake: A Study on Social Media Interaction among Youngsters

    PubMed Central

    Bevelander, Kirsten E.; Anschütz, Doeschka J.; Creemers, Daan H. M.; Kleinjan, Marloes; Engels, Rutger C. M. E.

    2013-01-01

    Objective This experimental study investigated the impact of peers on palatable food intake of youngsters within a social media setting. To determine whether this effect was moderated by self-esteem, the present study examined the roles of global explicit self-esteem (ESE), body esteem (BE) and implicit self-esteem (ISE). Methods Participants (N = 118; 38.1% boys; M age 11.14±.79) were asked to play a computer game while they believed they were interacting online with a same-sex normal-weight remote confederate (i.e., an instructed peer) who ate either nothing, a small or a large amount of candy. Results Participants modeled the candy intake of peers via the social media interaction, but this was qualified by their self-esteem. Participants with higher ISE adjusted their candy intake to that of a peer more closely than those with lower ISE when the confederate ate nothing compared to when eating a modest (β = .26, p = .05) or considerable amount of candy (kcal) (β = .32, p = .001). In contrast, participants with lower BE modeled peer intake more than those with higher BE when eating nothing compared to a considerable amount of candy (kcal) (β = .21, p = .02); ESE did not moderate social modeling behavior. In addition, participants with higher discrepant or “damaged” self-esteem (i.e., high ISE and low ESE) modeled peer intake more when the peer ate nothing or a modest amount compared to a substantial amount of candy (kcal) (β = −.24, p = .004; β = −.26, p<.0001, respectively). Conclusion Youngsters conform to the amount of palatable food eaten by peers through social media interaction. Those with lower body esteem or damaged self-esteem may be more susceptible to peer influences on food intake. PMID:24015251

  4. Patterns of text reuse in a scientific corpus

    PubMed Central

    Citron, Daniel T.; Ginsparg, Paul

    2015-01-01

    We consider the incidence of text “reuse” by researchers via a systematic pairwise comparison of the text content of all articles deposited to arXiv.org from 1991 to 2012. We measure the global frequencies of three classes of text reuse and measure how chronic text reuse is distributed among authors in the dataset. We infer a baseline for accepted practice, perhaps surprisingly permissive compared with other societal contexts, and a clearly delineated set of aberrant authors. We find a negative correlation between the amount of reused text in an article and its influence, as measured by subsequent citations. Finally, we consider the distribution of countries of origin of articles containing large amounts of reused text. PMID:25489072
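
    The abstract does not detail the comparison pipeline beyond "systematic pairwise comparison", so the following is only a generic sketch of one common approach, shared word n-gram overlap between two documents; the n-gram length and the simple overlap fraction are assumptions for illustration.

      def ngrams(text, n=7):
          """Set of word n-grams used as a fingerprint of a document."""
          words = text.lower().split()
          return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

      def reuse_fraction(doc_a, doc_b, n=7):
          """Fraction of doc_a's n-grams that also occur in doc_b."""
          a, b = ngrams(doc_a, n), ngrams(doc_b, n)
          return len(a & b) / len(a) if a else 0.0

      print(reuse_fraction("the quick brown fox jumps over the lazy dog today again",
                           "a quick brown fox jumps over the lazy dog today and more",
                           n=5))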

  5. Transfer of interferon alfa into human breast milk.

    PubMed

    Kumar, A R; Hale, T W; Mock, R E

    2000-08-01

    Although originally assumed to be purely antiviral substances, interferons are increasingly recognized as efficacious in a number of pathologies, including malignancies, multiple sclerosis, and other immune syndromes. This study provides data on the transfer of interferon alfa (2B) into the milk of a patient receiving massive intravenous doses for the treatment of malignant melanoma. Following an intravenous dose of 30 million IU, the amount of interferon in the milk was only slightly elevated (1551 IU/mL) compared to control milk (1249 IU/mL). These data suggest that, even following enormous doses, interferon is probably too large in molecular weight to transfer into human milk in clinically relevant amounts.

  6. Comparison of predicted and experimental performance of large-bore roller bearing operating to 3.0 million DN

    NASA Technical Reports Server (NTRS)

    Coe, H. H.; Huller, F. T.

    1980-01-01

    Bearing inner and outer race temperatures and the amount of heat transferred to the lubricant were calculated by using the computer program CYBEAN. The results obtained were compared with previously reported experimental data for a 118 mm bore roller bearing that operated at shaft speeds to 25,500 rpm, radial loads to 8,900 N (2000 lb), and total lubricant flow rates to 0.0102 cu m/min (2.7 gal/min). The calculated results compared well with the experimental data.

  7. Response of lead-acid batteries to chopper-controlled discharge: Preliminary results

    NASA Technical Reports Server (NTRS)

    Cataldo, R. L.

    1978-01-01

    The preliminary results of simulated electric vehicle, chopper, speed controller discharge of a battery show energy output losses up to 25 percent compared to constant current discharges at the same average discharge current of 100 amperes. These energy losses are manifested as temperature rises during discharge, amounting to a two-fold increase for a 400-ampere pulse compared to the constant current case. Because of the potentially large energy inefficiency, the results suggest that electric vehicle battery/speed controller interaction must be carefully considered in vehicle design.

  8. Response of lead-acid batteries to chopper-controlled discharge

    NASA Technical Reports Server (NTRS)

    Cataldo, R. L.

    1978-01-01

    The preliminary results of simulated electric vehicle, chopper, speed controller discharge of a battery show energy output losses at up to 25 percent compared to constant current discharges at the same average discharge current of 100 A. These energy losses are manifested as temperature rises during discharge, amounting to a two-fold increase for a 400-A pulse compared to the constant current case. Because of the potentially large energy inefficiency, the results suggest that electric vehicle battery/speed controller interaction must be carefully considered in vehicle design.

  9. Comparing centralised and decentralised anaerobic digestion of stillage from a large-scale bioethanol plant to animal feed production.

    PubMed

    Drosg, B; Wirthensohn, T; Konrad, G; Hornbachner, D; Resch, C; Wäger, F; Loderer, C; Waltenberger, R; Kirchmayr, R; Braun, R

    2008-01-01

    A comparison of stillage treatment options for large-scale bioethanol plants was based on the data of an existing plant producing approximately 200,000 t/yr of bioethanol and 1,400,000 t/yr of stillage. Animal feed production--the state-of-the-art technology at the plant--was compared to anaerobic digestion. The latter was simulated in two different scenarios: digestion in small-scale biogas plants in the surrounding area versus digestion in a large-scale biogas plant at the bioethanol production site. Emphasis was placed on a holistic simulation, balancing chemical parameters and calculating logistic algorithms, to compare the efficiency of the stillage treatment solutions. For central anaerobic digestion, different digestate handling solutions were considered because of the large amount of digestate. For land application, a minimum of 36,000 ha of available agricultural area and 600,000 m(3) of storage volume would be needed. Secondly, membrane purification of the digestate was investigated, consisting of a decanter, microfiltration, and reverse osmosis. As a third option, aerobic wastewater treatment of the digestate was discussed. The final outcome was an economic evaluation of the three mentioned stillage treatment options, as a guide to stillage management for operators of large-scale bioethanol plants. Copyright IWA Publishing 2008.

  10. Accumulation of Reserve Carbohydrate by Rumen Protozoa and Bacteria in Competition for Glucose

    PubMed Central

    Denton, Bethany L.; Diese, Leanne E.; Firkins, Jeffrey L.

    2014-01-01

    The aim of this study was to determine if rumen protozoa could form large amounts of reserve carbohydrate compared to the amounts formed by bacteria when competing for glucose in batch cultures. We separated large protozoa and small bacteria from rumen fluid by filtration and centrifugation, recombined equal protein masses of each group into one mixture, and subsequently harvested (reseparated) these groups at intervals after glucose dosing. This method allowed us to monitor reserve carbohydrate accumulation of protozoa and bacteria individually. When mixtures were dosed with a moderate concentration of glucose (4.62 or 5 mM) (n = 2 each), protozoa accumulated large amounts of reserve carbohydrate; 58.7% (standard error of the mean [SEM], 2.2%) glucose carbon was recovered from protozoal reserve carbohydrate at time of peak reserve carbohydrate concentrations. Only 1.7% (SEM, 2.2%) was recovered in bacterial reserve carbohydrate, which was less than that for protozoa (P < 0.001). When provided a high concentration of glucose (20 mM) (n = 4 each), 24.1% (SEM, 2.2%) of glucose carbon was recovered from protozoal reserve carbohydrate, which was still higher (P = 0.001) than the 5.0% (SEM, 2.2%) glucose carbon recovered from bacterial reserve carbohydrate. Our novel competition experiments directly demonstrate that mixed protozoa can sequester sugar away from bacteria by accumulating reserve carbohydrate, giving protozoa a competitive advantage and stabilizing fermentation in the rumen. Similar experiments could be used to investigate the importance of starch sequestration. PMID:25548053

  11. Metagenomics of rumen bacteriophage from thirteen lactating dairy cattle

    PubMed Central

    2013-01-01

    Background The bovine rumen hosts a diverse and complex community of Eukarya, Bacteria, Archea and viruses (including bacteriophage). The rumen viral population (the rumen virome) has received little attention compared to the rumen microbial population (the rumen microbiome). We used massively parallel sequencing of virus like particles to investigate the diversity of the rumen virome in thirteen lactating Australian Holstein dairy cattle all housed in the same location, 12 of which were sampled on the same day. Results Fourteen putative viral sequence fragments over 30 Kbp in length were assembled and annotated. Many of the putative genes in the assembled contigs showed no homology to previously annotated genes, highlighting the large amount of work still required to fully annotate the functions encoded in viral genomes. The abundance of the contig sequences varied widely between animals, even though the cattle were of the same age, stage of lactation and fed the same diets. Additionally the twelve animals which were co-habited shared a number of their dominant viral contigs. We compared the functional characteristics of our bovine viromes with that of other viromes, as well as rumen microbiomes. At the functional level, we found strong similarities between all of the viral samples, which were highly distinct from the rumen microbiome samples. Conclusions Our findings suggest a large amount of between animal variation in the bovine rumen virome and that co-habiting animals may have more similar viromes than non co-habited animals. We report the deepest sequencing to date of the rumen virome. This work highlights the enormous amount of novelty and variation present in the rumen virome. PMID:24180266

  12. Simultaneous determination of the quantity and isotopic signature of dissolved organic matter from soil water using high-performance liquid chromatography/isotope ratio mass spectrometry.

    PubMed

    Scheibe, Andrea; Krantz, Lars; Gleixner, Gerd

    2012-01-30

    We assessed the accuracy and utility of a modified high-performance liquid chromatography/isotope ratio mass spectrometry (HPLC/IRMS) system for measuring the amount and stable carbon isotope signature of dissolved organic matter (DOM) <1 µm. Using a range of standard compounds as well as soil solutions sampled in the field, we compared the results of the HPLC/IRMS analysis with those from other methods for determining carbon and (13)C content. The conversion efficiency of the in-line wet oxidation of the HPLC/IRMS averaged 99.3% for a range of standard compounds. The agreement between HPLC/IRMS and other methods in the amount and isotopic signature of both standard compounds and soil water samples was excellent. For DOM concentrations below 10 mg C L(-1) (250 ng C total), pre-concentration or large-volume injections are recommended in order to prevent background interferences. We were able to detect large differences in the (13)C signatures of soil solution DOM sampled at 10 cm depth in plots with either C3 or C4 vegetation and in two different parent materials. These measurements also revealed changes in the (13)C signature that indicate rapid loss of plant-derived C with depth. Overall, the modified HPLC/IRMS system has the advantages of rapid sample preparation, small required sample volume and high sample throughput, while showing performance comparable to other methods for measuring the amount and isotopic signature of DOM. Copyright © 2011 John Wiley & Sons, Ltd.
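
    For context on the notation used in this record, the stable carbon isotope signature measured by IRMS is conventionally reported in delta notation relative to the VPDB standard; the following is the standard textbook definition, not a formula taken from the record:

    ```latex
    \delta^{13}\mathrm{C} =
    \left( \frac{R_{\mathrm{sample}}}{R_{\mathrm{VPDB}}} - 1 \right) \times 1000
    \ \text{(in per mil)}, \qquad R = {}^{13}\mathrm{C}/{}^{12}\mathrm{C}
    ```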

  13. Comparison of Surgically Treated Large Versus Small Intestinal Volvulus (2009-2014).

    PubMed

    Davis, Elizabeth; Townsend, Forrest I; Bennett, Julie W; Takacs, Joel; Bloch, Christopher P

    2016-01-01

    The purpose of this retrospective study was to compare the outcome for dogs with surgically treated large versus small intestinal volvulus between October 2009 and February 2014. A total of 15 dogs met the inclusion criteria and underwent exploratory abdominal surgery. Nine dogs were diagnosed with large intestinal volvulus during the study period, and all nine had surgical correction for large intestinal volvulus. All dogs were discharged from the hospital. Of the seven dogs available for phone follow-up (74 to 955 days postoperatively), all seven were alive and doing well. Six dogs were diagnosed with small intestinal volvulus during the study period. One of the six survived to hospital discharge. Three of the six were euthanized at the time of surgery due to an extensive amount of necrotic bowel. Of the three that were not euthanized, one died postoperatively the same day, one died 3 days later, and one dog survived for greater than 730 days. The results indicate that the outcome in dogs with surgically corrected large intestinal volvulus is excellent, compared with a poor outcome in dogs with small intestinal volvulus. The overall survival to discharge for large intestinal volvulus was 100%, versus 16% for small intestinal volvulus.

  14. Mechanistic Analysis of Mechano-Electrochemical Interaction in Silicon Electrodes with Surface Film

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verma, Ankit; Mukherjee, Partha P.

    2017-11-17

    High-capacity anode materials for lithium-ion batteries, such as silicon, are prone to large volume change during lithiation/delithiation, which may cause particle cracking and disintegration, thereby resulting in severe capacity fade and reduction in cycle life. In this work, a stochastic analysis is presented in order to understand the mechano-electrochemical interaction in silicon active particles along with a surface film during cycling. Amorphous silicon particles exhibiting single-phase lithiation incur a lower amount of cracking than crystalline silicon particles exhibiting two-phase lithiation for the same degree of volumetric expansion. Rupture of the brittle surface film is observed for both amorphous and crystalline silicon particles and is attributed to the large volumetric expansion of the silicon active particle with lithiation. The mechanical properties of the surface film play an important role in determining the amount of degradation in the particle/film assembly. A strategy to ameliorate particle cracking in silicon active particles is proposed.

  15. Analyzing large-scale spiking neural data with HRLAnalysis™

    PubMed Central

    Thibeault, Corey M.; O'Brien, Michael J.; Srinivasa, Narayan

    2014-01-01

    The additional capabilities provided by high-performance neural simulation environments and modern computing hardware have allowed for the modeling of increasingly larger spiking neural networks. This is important for exploring more anatomically detailed networks, but the corresponding growth in data can make analyzing the results of these simulations difficult. This is further compounded by the fact that many existing analysis packages were not developed with large spiking data sets in mind. Presented here is a software suite developed not only to process the increased amount of spike-train data in a reasonable amount of time, but also to provide a user-friendly Python interface. We describe the design considerations, implementation and features of the HRLAnalysis™ suite. In addition, performance benchmarks demonstrating the speedup of this design compared to a published Python implementation are also presented. The result is a high-performance analysis toolkit that is not only usable and readily extensible, but also straightforward to interface with existing Python modules. PMID:24634655
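
    The record above describes a toolkit for post-processing very large spike-train data sets through a Python interface. The sketch below is a generic illustration of that kind of computation (per-neuron mean rates and a binned population rate) written with NumPy; the function names and array layout are assumptions for illustration and are not the HRLAnalysis™ API.

    ```python
    import numpy as np

    def population_rate(spike_times_ms, n_neurons, t_stop_ms, bin_ms=1.0):
        """Bin spike times (ms) into a population rate histogram (Hz per neuron)."""
        edges = np.arange(0.0, t_stop_ms + bin_ms, bin_ms)
        counts, _ = np.histogram(spike_times_ms, bins=edges)
        rate_hz = counts / n_neurons / (bin_ms / 1000.0)   # spikes/neuron/second per bin
        return edges, rate_hz

    def mean_firing_rates(spike_ids, n_neurons, t_stop_ms):
        """Per-neuron mean firing rate in Hz over the whole recording."""
        counts = np.bincount(spike_ids, minlength=n_neurons)
        return counts / (t_stop_ms / 1000.0)
    ```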

  16. Principles of gene microarray data analysis.

    PubMed

    Mocellin, Simone; Rossi, Carlo Riccardo

    2007-01-01

    The development of several gene expression profiling methods, such as comparative genomic hybridization (CGH), differential display, serial analysis of gene expression (SAGE), and gene microarray, together with the sequencing of the human genome, has provided an opportunity to monitor and investigate the complex cascade of molecular events leading to tumor development and progression. The availability of such large amounts of information has shifted the attention of scientists towards a nonreductionist approach to biological phenomena. High throughput technologies can be used to follow changing patterns of gene expression over time. Among them, gene microarray has become prominent because it is easier to use, does not require large-scale DNA sequencing, and allows for the parallel quantification of thousands of genes from multiple samples. Gene microarray technology is rapidly spreading worldwide and has the potential to drastically change the therapeutic approach to patients affected by tumors. Therefore, it is of paramount importance for both researchers and clinicians to know the principles underlying the analysis of the huge amount of data generated with microarray technology.

  17. Biogeographic patterns in below-ground diversity in New York City's Central Park are similar to those observed globally

    PubMed Central

    Ramirez, Kelly S.; Leff, Jonathan W.; Barberán, Albert; Bates, Scott Thomas; Betley, Jason; Crowther, Thomas W.; Kelly, Eugene F.; Oldfield, Emily E.; Shaw, E. Ashley; Steenbock, Christopher; Bradford, Mark A.; Wall, Diana H.; Fierer, Noah

    2014-01-01

    Soil biota play key roles in the functioning of terrestrial ecosystems; however, compared to our knowledge of above-ground plant and animal diversity, the biodiversity found in soils remains largely uncharacterized. Here, we present an assessment of soil biodiversity and biogeographic patterns across Central Park in New York City that spanned all three domains of life, demonstrating that even an urban, managed system harbours large amounts of undescribed soil biodiversity. Despite high variability across the Park, below-ground diversity patterns were predictable based on soil characteristics, with prokaryotic and eukaryotic communities exhibiting overlapping biogeographic patterns. Further, Central Park soils harboured nearly as many distinct soil microbial phylotypes and types of soil communities as we found in biomes across the globe (including arctic, tropical and desert soils). This integrated cross-domain investigation highlights that the amount and patterning of novel and uncharacterized diversity at a single urban location matches that observed across natural ecosystems spanning multiple biomes and continents. PMID:25274366

  18. Nonphotic phase shifting in female Syrian hamsters: interactions with the estrous cycle.

    PubMed

    Young Janik, L; Janik, Daniel

    2003-08-01

    Nonphotic phase shifting of circadian rhythms was examined in female Syrian hamsters. Animals were stimulated at zeitgeber time 4.5 by either placing them in a novel running wheel or by transferring them to a clean home cage. Placement in a clean home cage was more effective than novel wheel treatment in stimulating large (> 1.5 h) phase shifts. Peak phase shifts (ca. 3.5 h) and the percentage of females showing large phase shifts were comparable to those found in male hamsters stimulated with novel wheels. The amount of activity induced by nonphotic stimulation and the amount of phase shifting varied slightly with respect to the 4-day estrous cycle. Animals tended to run less and shift less on the day of estrus. Nonphotic stimulation on proestrus often resulted in a 1-day delay of the estrous cycle reflected in animals' postovulatory vaginal discharge and the expression of sexual receptivity (lordosis). This delay of the estrous cycle was associated with large phase advances and high activity. These results extend the generality of nonphotic phase shifting to females for the first time and raise the possibility that resetting of circadian rhythms can induce changes in the estrous cycle.

  19. A systematic review to assess comparative effectiveness studies in epidural steroid injections for lumbar spinal stenosis and to estimate reimbursement amounts.

    PubMed

    Bresnahan, Brian W; Rundell, Sean D; Dagadakis, Marissa C; Sullivan, Sean D; Jarvik, Jeffrey G; Nguyen, Hiep; Friedly, Janna L

    2013-08-01

    To systematically appraise published comparative effectiveness evidence (clinical and economic) of epidural steroid injections (ESI) for lumbar spinal stenosis and to estimate Medicare reimbursement amounts for ESI procedures. TYPE: Systematic review. PubMed, Embase, and CINAHL were searched through August 2012 for key words that pertain to low back pain, spinal stenosis or sciatica, and epidural steroid injection. We used institutional and Medicare reimbursement amounts for our cost estimation. Articles published in English that assessed ESIs for adults with lumbar spinal stenosis versus a comparison intervention were included. Our search identified 146 unique articles, and 138 were excluded due to noncomparative study design, not having a study population with lumbar spinal stenosis, not having an appropriate outcome, or not being in English. We fully summarized 6 randomized controlled trials and 2 large observational studies. Randomized controlled trial articles were reviewed, and the study population, sample size, treatment groups, ESI dosage, ESI approaches, concomitant interventions, outcomes, and follow-up time were reported. Descriptive resource use estimates for ESIs were calculated with use of data from our institution during 2010 and Medicare-based reimbursement amounts. ESIs or anesthetic injections alone resulted in better short-term improvement in walking distance compared with control injections; however, there were no longer-term differences. No differences in self-reported improvement in pain were reported between ESIs and anesthetic alone. Transforaminal approaches produced better improvement in pain scores (≤4 months) compared with interlaminar injections. Two observational studies indicated increased rates of lumbar ESI in Medicare beneficiaries. Our sample included 279 patients who received at least 1 ESI during 2010, with an estimated mean total outpatient reimbursement for one ESI procedure "event" of $637, based on 2010 Medicare reimbursement amounts ($505 technical and $132 professional payments). This systematic review of ESI for treating lumbar spinal stenosis found a limited amount of data suggesting that ESI is effective in some patients for improving select short-term outcomes, but results differed depending on study design, outcome measures used, and comparison groups evaluated. Overall, there are relatively few comparative clinical or economic studies of ESI procedures for lumbar spinal stenosis in adults, indicating a need for additional evidence. Copyright © 2013. Published by Elsevier Inc.

  20. GPU-Meta-Storms: computing the structure similarities among massive amount of microbial community samples using GPU.

    PubMed

    Su, Xiaoquan; Wang, Xuetao; Jing, Gongchao; Ning, Kang

    2014-04-01

    The number of microbial community samples is increasing exponentially. Data mining among microbial community samples could facilitate the discovery of valuable biological information that is still hidden in the massive data. However, current methods for the comparison of microbial communities are limited in their ability to process large numbers of samples, each with a complex community structure. We have developed an optimized GPU-based software tool, GPU-Meta-Storms, to efficiently measure the quantitative phylogenetic similarity among massive numbers of microbial community samples. Our results have shown that GPU-Meta-Storms can compute the pair-wise similarity scores for 10,240 samples within 20 min, achieving a speed-up of >17,000 times compared with a single-core CPU, and >2,600 times compared with a 16-core CPU. Therefore, the high performance of GPU-Meta-Storms could facilitate in-depth data mining among massive microbial community samples, and make the real-time analysis and monitoring of temporal or conditional changes in microbial communities possible. GPU-Meta-Storms is implemented in CUDA (Compute Unified Device Architecture) and C++. Source code is available at http://www.computationalbioenergy.org/meta-storms.html.
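
    To make the all-pairs structure of such a comparison concrete, the sketch below computes a pairwise similarity matrix for a small abundance table. It is a minimal CPU illustration only: GPU-Meta-Storms scores samples with a phylogeny-aware measure and parallelizes the O(N²) comparison on the GPU, whereas this sketch uses a plain cosine similarity on relative abundances, and all names and toy data are assumptions.

    ```python
    import numpy as np

    def pairwise_similarity(abundance):
        """All-pairs similarity for an (N samples x M taxa) abundance matrix.

        Illustrates the O(N^2) structure that the GPU tool parallelizes; the
        real scoring is phylogeny-weighted, not the cosine used here.
        """
        X = abundance / abundance.sum(axis=1, keepdims=True)   # relative abundance
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        return (X @ X.T) / (norms @ norms.T)                    # cosine similarity

    # Toy usage: 4 samples, 6 taxa
    counts = np.random.default_rng(0).integers(1, 100, size=(4, 6)).astype(float)
    print(pairwise_similarity(counts).round(3))
    ```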

  1. The impact of a windshield in a tipping bucket rain gauge on the reduction of losses in precipitation measurements during snowfall events

    NASA Astrophysics Data System (ADS)

    Buisan, Samuel T.; Collado, Jose Luis; Alastrue, Javier

    2016-04-01

    The amount of snow available controls the ecology and hydrological response of mountainous areas and cold regions and affects economic activities including winter tourism, hydropower generation, floods and water supply. An accurate measurement of snowfall accumulation is critical, and remains a major source of error, for the evaluation and verification of numerical weather forecast, hydrological and climate models. It is well known that the undercatch of solid precipitation resulting from wind-induced updrafts at the gauge orifice is the main factor affecting the quality and accuracy of measured snowfall amounts. This effect can be reduced by the use of different windshields. Overall, tipping bucket rain gauges (TPBRGs) provide a large percentage of precipitation amount measurements in all climate regimes, estimated at about 80% of the total of observations by automatic instruments. In the frame of the WMO-SPICE project, we compared at the Formigal-Sarrios station (Spanish Pyrenees, 1800 m a.s.l.) the precipitation measured by two heated TPBRGs, one of them protected with a single Alter windshield in order to reduce the wind bias. Results were contrasted with the precipitation measured by the SPICE reference gauge (OTT Pluvio2) in a Double Fence Intercomparison Reference (DFIR). The results showed that the shield reduces undercatch by up to 40% when wind speed exceeds 6 m/s. The differences when compared with the reference gauge reached values higher than 70%. The inaccuracy of these measurements has a significant impact on nowcasting operations and climatology in Spain, especially during some heavy snowfall episodes. Also, hydrological models showed better agreement with observed river flows when including the precipitation not accounted for during these snowfall events. The conclusions of this experiment will be used to make decisions on the suitability of installing windshields at stations characterized by a large quantity of snowfall during the winter season, which are mainly located in northern Spain.

  2. Soil erosion model predictions using parent material/soil texture-based parameters compared to using site-specific parameters

    Treesearch

    R. B. Foltz; W. J. Elliot; N. S. Wagenbrenner

    2011-01-01

    Forested areas disturbed by access roads produce large amounts of sediment. One method to predict erosion and, hence, manage forest roads is the use of physically based soil erosion models. A perceived advantage of a physically based model is that it can be parameterized at one location and applied at another location with similar soil texture or geological parent...

  3. Reanalysis of 24 Nearby Open Clusters using Gaia data

    NASA Astrophysics Data System (ADS)

    Yen, Steffi X.; Reffert, Sabine; Röser, Siegfried; Schilbach, Elena; Kharchenko, Nina V.; Piskunov, Anatoly E.

    2018-04-01

    We have developed a fully automated cluster characterization pipeline, which simultaneously determines cluster membership and fits the fundamental cluster parameters: distance, reddening, and age. We present results for 24 established clusters and compare them to literature values. Given the large amount of stellar data for clusters available from Gaia DR2 in 2018, this pipeline will be beneficial to analyzing the parameters of open clusters in our Galaxy.

  4. Transparency of an instantaneously created electron-positron-photon plasma

    NASA Astrophysics Data System (ADS)

    Bégué, D.; Vereshchagin, G. V.

    2014-03-01

    The problem of the expansion of a relativistic plasma generated when a large amount of energy is released in a small volume has been considered by many authors. We use the analytical solution of Bisnovatyi-Kogan and Murzina for the spherically symmetric relativistic expansion. The light curves and the spectra from transparency of an electron-positron-photon plasma are obtained. We compare our results with the work of Goodman.

  5. Extra-metabolic energy use and the rise in human hyper-density

    NASA Astrophysics Data System (ADS)

    Burger, Joseph R.; Weinberger, Vanessa P.; Marquet, Pablo A.

    2017-03-01

    Humans, like all organisms, are subject to fundamental biophysical laws. Van Valen predicted that, because of zero-sum dynamics, all populations of all species in a given environment flux the same amount of energy on average. Damuth's 'energetic equivalence rule' supported Van Valen's conjecture by showing a tradeoff between few big animals per area with high individual metabolic rates compared to abundant small species with low energy requirements. We use metabolic scaling theory to compare variation in densities and individual energy use in human societies to other land mammals. We show that hunter-gatherers occurred at densities lower than the average for a mammal of our size. Most modern humans, in contrast, concentrate in large cities at densities up to four orders of magnitude greater than hunter-gatherers, yet consume up to two orders of magnitude more energy per capita. Today, cities across the globe flux greater energy than net primary productivity on a per area basis. This is possible by importing enormous amounts of energy and materials required to sustain hyper-dense, modern humans. The metabolic rift with nature created by modern cities fueled largely by fossil energy poses formidable challenges for establishing a sustainable relationship on a rapidly urbanizing, yet finite planet.
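
    The 'energetic equivalence' argument referenced above can be illustrated with a few lines of arithmetic: if per-capita metabolic rate scales roughly as M^(3/4) and population density as M^(-3/4), their product (energy flux per unit area) is approximately independent of body mass. The sketch below is illustrative only, with arbitrary coefficients, and is not taken from the paper.

    ```python
    # Illustrative sketch of Damuth's energetic equivalence (arbitrary coefficients):
    # metabolic rate B ~ b0 * M**(3/4), density D ~ d0 * M**(-3/4),
    # so area-specific energy flux E = B * D is roughly independent of body mass M.
    def energy_flux_per_area(mass_kg, b0=1.0, d0=1.0):
        metabolic_rate = b0 * mass_kg ** 0.75     # per-individual rate (scaling only)
        density = d0 * mass_kg ** -0.75           # individuals per unit area (scaling only)
        return metabolic_rate * density           # constant = b0 * d0

    for m in (0.01, 1.0, 100.0, 10000.0):
        print(m, energy_flux_per_area(m))
    ```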

  6. Alpha Air Sample Counting Efficiency Versus Dust Loading: Evaluation of a Large Data Set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogue, M. G.; Gause-Lott, S. M.; Owensby, B. N.

    Dust loading on air sample filters is known to cause a loss of efficiency for direct counting of alpha activity on the filters, but the amount of dust loading and the correction factor needed to account for attenuated alpha particles is difficult to assess. In this paper, correction factors are developed by statistical analysis of a large database of air sample results for a uranium and plutonium processing facility at the Savannah River Site. As is typically the case, dust-loading data is not directly available, but sample volume is found to be a reasonable proxy measure; the amount of dust loading is inferred by a combination of the derived correction factors and a Monte Carlo model. The technique compares the distribution of activity ratios [beta/(beta + alpha)] by volume and applies a range of correction factors on the raw alpha count rate. The best-fit results with this method are compared with MCNP modeling of activity uniformly deposited in the dust and analytical laboratory results of digested filters. Finally, a linear fit is proposed for evenly deposited alpha activity collected on filters with dust loading over a range of about 2 mg cm(-2) to 1,000 mg cm(-2).
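
    The correction-factor approach described in this record can be illustrated schematically: compute the activity ratio beta/(beta + alpha) per filter and regress it against sampled volume, the proxy for dust loading. The sketch below is a hypothetical illustration of that idea, with assumed function and variable names, and is not the facility's actual analysis.

    ```python
    import numpy as np

    def activity_ratio_fit(volumes_m3, alpha_cpm, beta_cpm):
        """Fit the activity ratio beta/(beta + alpha) against sampled volume.

        A rising ratio with volume indicates increasing alpha self-absorption,
        from which a volume-dependent correction factor can be derived.
        """
        v = np.asarray(volumes_m3, dtype=float)
        alpha = np.asarray(alpha_cpm, dtype=float)
        beta = np.asarray(beta_cpm, dtype=float)
        ratio = beta / (beta + alpha)
        slope, intercept = np.polyfit(v, ratio, deg=1)   # simple linear fit
        return slope, intercept
    ```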

  7. Alpha Air Sample Counting Efficiency Versus Dust Loading: Evaluation of a Large Data Set

    DOE PAGES

    Hogue, M. G.; Gause-Lott, S. M.; Owensby, B. N.; ...

    2018-03-03

    Dust loading on air sample filters is known to cause a loss of efficiency for direct counting of alpha activity on the filters, but the amount of dust loading and the correction factor needed to account for attenuated alpha particles is difficult to assess. In this paper, correction factors are developed by statistical analysis of a large database of air sample results for a uranium and plutonium processing facility at the Savannah River Site. As is typically the case, dust-loading data is not directly available, but sample volume is found to be a reasonable proxy measure; the amount of dust loading is inferred by a combination of the derived correction factors and a Monte Carlo model. The technique compares the distribution of activity ratios [beta/(beta + alpha)] by volume and applies a range of correction factors on the raw alpha count rate. The best-fit results with this method are compared with MCNP modeling of activity uniformly deposited in the dust and analytical laboratory results of digested filters. Finally, a linear fit is proposed for evenly deposited alpha activity collected on filters with dust loading over a range of about 2 mg cm(-2) to 1,000 mg cm(-2).

  8. A comparative evaluation of supervised and unsupervised representation learning approaches for anaplastic medulloblastoma differentiation

    NASA Astrophysics Data System (ADS)

    Cruz-Roa, Angel; Arevalo, John; Basavanhally, Ajay; Madabhushi, Anant; González, Fabio

    2015-01-01

    Learning data representations directly from the data itself is an approach that has shown great success in different pattern recognition problems, outperforming state-of-the-art feature extraction schemes for different tasks in computer vision, speech recognition and natural language processing. Representation learning applies unsupervised and supervised machine learning methods to large amounts of data to find building blocks that better represent the information in it. Digitized histopathology images represent a very good testbed for representation learning since they involve large amounts of highly complex visual data. This paper presents a comparative evaluation of different supervised and unsupervised representation learning architectures to specifically address open questions on which type of learning architecture (deep or shallow) and which type of learning (unsupervised or supervised) is optimal. In this paper we limit ourselves to addressing these questions in the context of distinguishing between anaplastic and non-anaplastic medulloblastomas from routine haematoxylin and eosin stained images. The unsupervised approaches evaluated were sparse autoencoders and topographic reconstruct independent component analysis, and the supervised approach was convolutional neural networks. Experimental results show that shallow architectures with more neurons are better than deeper architectures when local spatial invariances are not taken into account, and that topographic constraints provide useful scale- and rotation-invariant features for efficient tumor differentiation.

  9. Alternatives for the intermediate recovery of plasmid DNA: performance, economic viability and environmental impact.

    PubMed

    Freitas, Sindelia; Canário, Sónia; Santos, José A L; Prazeres, Duarte M F

    2009-02-01

    Robust cGMP manufacturing is required to produce high-quality plasmid DNA (pDNA). Three established techniques, isopropanol and ammonium sulfate (AS) precipitation (PP), tangential flow filtration (TFF) and aqueous two-phase systems (ATPS) with PEG600/AS, were tested as alternatives to recover pDNA from alkaline lysates. Yield and purity data were used to evaluate the economic and environmental impact of each option. Although pDNA yields ≥90% were always obtained, ATPS delivered the highest HPLC purity (59%), followed by PP (48%) and TFF (18%). However, the ability of ATPS to concentrate pDNA was very poor when compared with PP or TFF. Processes were also implemented by coupling TFF with ATPS or AS-PP. Process simulations indicate that all options require large amounts of water (100-200 tons/kg pDNA) and that the ATPS process uses large amounts of mass separating agents (65 tons/kg pDNA). Estimates indicate that operating costs of the ATPS process are 2.5-fold larger when compared with the PP and TFF processes. The most significant contributions to the costs in the PP, TFF and ATPS processes came from operators (59%), consumables (75%) and raw materials (84%), respectively. The ATPS process presented the highest environmental impact, whereas the impact of the TFF process was negligible.

  10. Optimization of multiple turbine arrays in a channel with tidally reversing flow by numerical modelling with adaptive mesh.

    PubMed

    Divett, T; Vennell, R; Stevens, C

    2013-02-28

    At tidal energy sites, large arrays of hundreds of turbines will be required to generate economically significant amounts of energy. Owing to wake effects within the array, the placement of turbines within the array will be vital to capturing the maximum energy from the resource. This study presents preliminary results using Gerris, an adaptive mesh flow solver, to investigate the flow through four different arrays of 15 turbines each. The goal is to optimize the position of turbines within an array in an idealized channel. The turbines are represented as areas of increased bottom friction in an adaptive mesh model so that the flow and power capture in tidally reversing flow through large arrays can be studied. The effect of oscillating tides is studied, with interesting dynamics generated as the tidal current reverses direction, forcing turbulent flow through the array. The energy removed from the flow by each of the four arrays is compared over a tidal cycle. A staggered array is found to extract 54 per cent more energy than a non-staggered array. Furthermore, an array positioned to one side of the channel is found to remove a similar amount of energy compared with an array in the centre of the channel.

  11. Extra-metabolic energy use and the rise in human hyper-density.

    PubMed

    Burger, Joseph R; Weinberger, Vanessa P; Marquet, Pablo A

    2017-03-02

    Humans, like all organisms, are subject to fundamental biophysical laws. Van Valen predicted that, because of zero-sum dynamics, all populations of all species in a given environment flux the same amount of energy on average. Damuth's 'energetic equivalence rule' supported Van Valen's conjecture by showing a tradeoff between few big animals per area with high individual metabolic rates compared to abundant small species with low energy requirements. We use metabolic scaling theory to compare variation in densities and individual energy use in human societies to other land mammals. We show that hunter-gatherers occurred at densities lower than the average for a mammal of our size. Most modern humans, in contrast, concentrate in large cities at densities up to four orders of magnitude greater than hunter-gatherers, yet consume up to two orders of magnitude more energy per capita. Today, cities across the globe flux greater energy than net primary productivity on a per area basis. This is possible by importing enormous amounts of energy and materials required to sustain hyper-dense, modern humans. The metabolic rift with nature created by modern cities fueled largely by fossil energy poses formidable challenges for establishing a sustainable relationship on a rapidly urbanizing, yet finite planet.

  12. THE SEGUE K GIANT SURVEY. III. QUANTIFYING GALACTIC HALO SUBSTRUCTURE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janesh, William; Morrison, Heather L.; Ma, Zhibo

    2016-01-10

    We statistically quantify the amount of substructure in the Milky Way stellar halo using a sample of 4568 halo K giant stars at Galactocentric distances ranging over 5–125 kpc. These stars have been selected photometrically and confirmed spectroscopically as K giants from the Sloan Digital Sky Survey’s Sloan Extension for Galactic Understanding and Exploration project. Using a position–velocity clustering estimator (the 4distance) and a model of a smooth stellar halo, we quantify the amount of substructure in the halo, divided by distance and metallicity. Overall, we find that the halo as a whole is highly structured. We also confirm earlier work using blue horizontal branch (BHB) stars which showed that there is an increasing amount of substructure with increasing Galactocentric radius, and additionally find that the amount of substructure in the halo increases with increasing metallicity. Comparing to resampled BHB stars, we find that K giants and BHBs have similar amounts of substructure over equivalent ranges of Galactocentric radius. Using a friends-of-friends algorithm to identify members of individual groups, we find that a large fraction (∼33%) of grouped stars are associated with Sgr, and identify stars belonging to other halo star streams: the Orphan Stream, the Cetus Polar Stream, and others, including previously unknown substructures. A large fraction of sample K giants (more than 50%) are not grouped into any substructure. We find also that the Sgr stream strongly dominates groups in the outer halo for all except the most metal-poor stars, and suggest that this is the source of the increase of substructure with Galactocentric radius and metallicity.
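
    The grouping step mentioned above can be illustrated with a generic friends-of-friends routine: points whose chained pairwise separations fall below a linking length share a group label. The sketch below is a simplified stand-in that assumes pre-scaled position-velocity coordinates; it is not the 4distance estimator or the exact procedure of the paper.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    def friends_of_friends(points, linking_length):
        """Label groups of points linked by chained separations < linking_length.

        `points`: (N, d) array of already-scaled coordinates (e.g. position and
        velocity each divided by a characteristic scale). Generic FoF sketch only.
        """
        n = len(points)
        pairs = cKDTree(points).query_pairs(r=linking_length)
        if not pairs:
            return np.arange(n)                      # every point is its own group
        i, j = np.array(sorted(pairs)).T
        adj = csr_matrix((np.ones(len(i)), (i, j)), shape=(n, n))
        _, labels = connected_components(adj, directed=False)
        return labels
    ```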

  13. VizieR Online Data Catalog: The SEGUE K giant survey. III. Galactic halo (Janesh+, 2016)

    NASA Astrophysics Data System (ADS)

    Janesh, W.; Morrison, H. L.; Ma, Z.; Rockosi, C.; Starkenburg, E.; Xue, X. X.; Rix, H.-W.; Harding, P.; Beers, T. C.; Johnson, J.; Lee, Y. S.; Schneider, D. P.

    2016-03-01

    We statistically quantify the amount of substructure in the Milky Way stellar halo using a sample of 4568 halo K giant stars at Galactocentric distances ranging over 5-125 kpc. These stars have been selected photometrically and confirmed spectroscopically as K giants from the Sloan Digital Sky Survey's Sloan Extension for Galactic Understanding and Exploration (SEGUE) project. Using a position-velocity clustering estimator (the 4distance) and a model of a smooth stellar halo, we quantify the amount of substructure in the halo, divided by distance and metallicity. Overall, we find that the halo as a whole is highly structured. We also confirm earlier work using blue horizontal branch (BHB) stars which showed that there is an increasing amount of substructure with increasing Galactocentric radius, and additionally find that the amount of substructure in the halo increases with increasing metallicity. Comparing to resampled BHB stars, we find that K giants and BHBs have similar amounts of substructure over equivalent ranges of Galactocentric radius. Using a friends-of-friends algorithm to identify members of individual groups, we find that a large fraction (~33%) of grouped stars are associated with Sgr, and identify stars belonging to other halo star streams: the Orphan Stream, the Cetus Polar Stream, and others, including previously unknown substructures. A large fraction of sample K giants (more than 50%) are not grouped into any substructure. We find also that the Sgr stream strongly dominates groups in the outer halo for all except the most metal-poor stars, and suggest that this is the source of the increase of substructure with Galactocentric radius and metallicity. (2 data files).

  14. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we proposed the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
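
    The prototype idea sketched in this record resembles a Nyström-style low-rank kernel approximation, in which a small set of prototype points stands in for the full kernel matrix. The snippet below illustrates that general construction with random prototype selection; the published PVM grounds prototype choice in the two criteria described above, so this is an assumption-laden sketch rather than the paper's algorithm.

    ```python
    import numpy as np

    def rbf(X, Y, gamma=0.5):
        """RBF kernel matrix between the rows of X and the rows of Y."""
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def nystrom_approximation(X, n_prototypes=50, gamma=0.5, seed=0):
        """Low-rank kernel approximation K ~= K_nm @ pinv(K_mm) @ K_nm.T,
        where m randomly chosen prototypes stand in for all n points."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(X), size=min(n_prototypes, len(X)), replace=False)
        P = X[idx]                        # prototype vectors
        K_nm = rbf(X, P, gamma)
        K_mm = rbf(P, P, gamma)
        return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T
    ```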

  15. Planform changes and large wood dynamics in two torrents during a severe flash flood in Braunsbach, Germany 2016.

    PubMed

    Lucía, Ana; Schwientek, Marc; Eberle, Joachim; Zarfl, Christiane

    2018-05-30

    This work presents a post-event survey study addressing the geomorphic response and large wood budget of two torrents, Grimmbach and Orlacher Bach, in southwestern Germany that were affected by a flash flood on May 29, 2016. During the event, large amounts of wood clogged and damaged a bridge of a cycling path at the outlet of the Grimmbach, while the town of Braunsbach was devastated by the discharge and material transported along the Orlacher Bach. The severity of the event in these two small catchments (30.0 km² and 5.95 km², respectively) is remarkable for basins with a relatively low average slope (10.7 and 12.0%, respectively). In order to gain a better understanding of the driving forces during this flood event, an integrated approach was applied including (i) an estimate of peak discharges, (ii) an analysis of changes in channel width by comparing available aerial photographs taken before the flood with a post-flood aerial survey from an Unmanned Aerial Vehicle, validated with field observations, (iii) a detailed mapping of landslides and analysis of their connectivity with the channel network, and finally (iv) an analysis of the amounts of large wood (LW) recruited and deposited in the channel. The morphological changes in the channels can be explained by hydraulic parameters, such as stream power and unit stream power, and by morphological parameters such as valley confinement. The same holds for LW recruitment amounts and the volume of exported LW, since most of it comes from erosion of the valley floor. The morphological changes and large wood recruitment and deposition are within the range reported for studied mountain rivers. Both factors thus need to be considered for mapping and mitigating flash flood hazards in this kind of low mountain range as well. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
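
    For reference, the hydraulic parameters named in this record are conventionally defined as total stream power and unit (specific) stream power; the definitions below are the standard textbook ones and are not quoted from the paper:

    ```latex
    \Omega = \rho\, g\, Q\, S, \qquad \omega = \frac{\Omega}{w} = \frac{\rho\, g\, Q\, S}{w}
    ```

    where ρ is the density of water, g the gravitational acceleration, Q the (peak) discharge, S the channel slope, and w the active channel width.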

  16. Low-cost floating emergence net and bottle trap: Comparison of two designs

    USGS Publications Warehouse

    Cadmus, Pete; Pomeranz, Justin; Kraus, Johanna M.

    2016-01-01

    Sampling emergent aquatic insects is of interest to many freshwater ecologists. Many quantitative emergence traps require the use of aspiration for collection. However, aspiration is infeasible in studies with the large amount of replication often required in large biomonitoring projects. We designed an economical, collapsible, pyramid-shaped floating emergence trap with an external collection bottle that avoids the need for aspiration. This design was compared experimentally to a design of similar dimensions that relied on aspiration to ensure comparable results. The pyramid-shaped design captured twice as many total emerging insects. When a preservative was used in bottle collectors, >95% of the emergent abundance was collected in the bottle. When no preservative was used, >81% of the total insects were collected from the bottle. In addition to capturing fewer emergent insects, the traps that required aspiration took significantly longer to sample. Large studies and studies sampling remote locations could benefit from the economical construction, speed of sampling, and capture efficiency.

  17. The Flint Animal Cancer Center (FACC) Canine Tumour Cell Line Panel: a resource for veterinary drug discovery, comparative oncology and translational medicine.

    PubMed

    Fowles, J S; Dailey, D D; Gustafson, D L; Thamm, D H; Duval, D L

    2017-06-01

    Mammalian cell tissue culture has been a critical tool leading to our current understanding of cancer including many aspects of cellular transformation, growth and response to therapies. The current use of large panels of cell lines with associated phenotypic and genotypic information now allows for informatics approaches and in silico screens to rapidly test hypotheses based on simple as well as complex relationships. Current cell line panels with large amounts of associated drug sensitivity and genomics data are comprised of human cancer cell lines (i.e. NCI60 and GDSC). There is increased recognition of the contribution of canine cancer to comparative cancer research as a spontaneous large animal model with application in basic and translational studies. We have assembled a panel of canine cancer cell lines to facilitate studies in canine cancer and report here phenotypic and genotypic data associated with these cells. © 2016 John Wiley & Sons Ltd.

  18. The color and size of chili peppers (Capsicum annuum) influence Hep-G2 cell growth.

    PubMed

    Popovich, David G; Sia, Sharon Y; Zhang, Wei; Lim, Mon L

    2014-11-01

    Four types of chili (Capsicum annuum) extracts, categorized according to color (green and red) and size (small and large), were studied in Hep-G2 cells. Red small (RS) chili had an LC50 value of 0.378 ± 0.029 mg/mL compared to green big (GB) 1.034 ± 0.061 and green small (GS) 1.070 ± 0.21 mg/mL. Red big (RB) was not cytotoxic. Capsaicin content was highest in RS, which produced a greater percentage of sub-G1 cells (6.47 ± 1.8%) after 24 h compared to GS (2.96 ± 1.3%) and control (1.29 ± 0.8%) cells. The G2/M phase was reduced by GS compared to RS and control cells. RS at the LC50 concentration contained 1.6 times the LC50 amount of pure capsaicin needed to achieve the same effect as capsaicin alone. The capsaicin content of GS and GB at the LC50 value was lower (0.2 and 0.66 times, respectively) than the amount of capsaicin needed to achieve a similar reduction in cell growth.

  19. Refolding Active Human DNA Polymerase ν from Inclusion Bodies

    PubMed Central

    Arana, Mercedes E.; Powell, Gary K.; Edwards, Lori L.; Kunkel, Thomas A.; Petrovich, Robert M.

    2017-01-01

    Human DNA polymerase ν (Pol ν) is a conserved family A DNA polymerase of uncertain biological function. Physical and biochemical characterization aimed at understanding Pol ν function is hindered by the fact that, when over-expressed in E. coli, Pol ν is largely insoluble, and the small amount of soluble protein is difficult to purify. Here we describe the use of high hydrostatic pressure to refold Pol ν from inclusion bodies, in soluble and active form. The refolded Pol ν has properties comparable to those of the small amount of Pol ν that was purified from the soluble fraction. The approach described here may be applicable to other DNA polymerases that are expressed as insoluble inclusion bodies in E. coli. PMID:19853037

  20. Preparation of Small RNAs Using Rolling Circle Transcription and Site-Specific RNA Disconnection.

    PubMed

    Wang, Xingyu; Li, Can; Gao, Xiaomeng; Wang, Jing; Liang, Xingguo

    2015-01-13

    A facile and robust RNA preparation protocol was developed by combining rolling circle transcription (RCT) with RNA cleavage by RNase H. Circular DNA with a complementary sequence was used as the template for promoter-free transcription. With the aid of a 2'-O-methylated DNA, the RCT-generated tandem repeats of the desired RNA sequence were disconnected at the exact end-to-end position to harvest the desired RNA oligomers. Compared with the template DNA, more than 4 × 10(3) times the amount of small RNA products were obtained when modest cleavage was carried out during transcription. Large amounts of RNA oligomers could easily be obtained by simply increasing the reaction volume.

  1. A Comparison of Gene Expression of Decorin and MMP13 in Hypertrophic Scars Treated With Calcium Channel Blocker, Steroid, and Interferon: A Human-Scar-Carrying Animal Model Study.

    PubMed

    Yang, Shih-Yi; Yang, Jui-Yung; Hsiao, Yen-Chang; Chuang, Shiow-Shuh

    2017-01-01

    The formation of hypertrophic scarring (HSc) is an abnormal wound-healing response. In a previous study, an animal model with human scar tissue implanted into nude mice (BALB/c) was successfully established, and the effects of verapamil as well as combination therapy with verapamil and kenacort were studied and compared. To treat persistent hypertrophic scars, local injection of drugs composed of steroids, calcium channel blockers (CCBs), and interferon might be a good method. Determining the best dose of such a regimen and the underlying mechanisms is also worthwhile. Scar specimens were harvested from patients with HSc or keloid resulting from burn injury, and then implanted into BALB/c-nu nude mice for 4 weeks. Before implantation, the specimen was injected with or without drugs, namely a steroid (kenacort), a CCB (verapamil), and an interferon (INFα2b), respectively. After removal of the implants, quantitative gene expression of decorin and collagenase (MMP13) was measured using real-time polymerase chain reaction to detect their mRNAs. Two-way ANOVA and post hoc tests were used for statistical analysis with the software SPSS 15.0. All drug-treated groups showed increased expression of decorin and MMP13 in comparison with the noninjected group (p < .001) in a dose-dependent manner. Comparing equal amounts of the individual drugs, decorin gene expression increased with increasing injection amount; with the low injection amount (0.02 mL of each), the best result was obtained in the group injected with INFα2b, followed by kenacort and verapamil. However, the results changed when the injection amount was increased to 0.04 mL, with the strongest decorin gene expression found in the kenacort injection group. Regarding MMP13 expression, the low-amount injection (0.02 mL) of INFα2b produced the strongest gene expression, followed by kenacort and verapamil, but in the large-amount regimen (0.04 mL), verapamil had the strongest gene expression, followed by INFα2b and kenacort. This study showed that kenacort, verapamil, and INFα2b all inhibited HSc in a dose-dependent manner, as evidenced by the gene expression of decorin and MMP13. Among the small-amount injections, INFα2b potentiated the strongest decorin and MMP13 expression. On the contrary, among the large-amount injection regimens, kenacort was more effective on decorin expression, as was verapamil on MMP13 expression. To decrease drug side effects and produce promising results for clinical practice, it is suggested to maintain the dose of INFα2b along with an increased dose of verapamil for HSc improvement.

  2. An overview of HyFIE Technical Research Project: cross-testing in main European hypersonic wind tunnels on EXPERT body

    NASA Astrophysics Data System (ADS)

    Brazier, Jean-Philippe; Martinez Schramm, Jan; Paris, Sébastien; Gawehn, Thomas; Reimann, Bodo

    2016-09-01

    The HyFIE project aimed at improving the measurement techniques in hypersonic wind tunnels and comparing the experimental data provided by four major European facilities: DLR HEG and H2K, ONERA F4 and VKI Longshot. A common geometry of the EXPERT body was chosen and four different models were used. A large amount of experimental data was collected and compared with the results of numerical simulations. Collapsing all the measured values showed good agreement between the different facilities, as well as between experimental and computed data.

  3. Measurement of charged particle transverse momentum spectra in deep inelastic scattering

    NASA Astrophysics Data System (ADS)

    Adloff, C.; Aid, S.; Anderson, M.; Andreev, V.; Andrieu, B.; Babaev, A.; Bähr, J.; Bán, J.; Ban, Y.; Baranov, P.; Barrelet, E.; Barschke, R.; Bartel, W.; Barth, M.; Bassler, U.; Beck, H. P.; Beck, M.; Behrend, H.-J.; Belousov, A.; Berger, Ch.; Bernardi, G.; Bertrand-Coremans, G.; Besançon, M.; Beyer, R.; Biddulph, P.; Bispham, P.; Bizot, J. C.; Blobel, V.; Borras, K.; Botterweck, F.; Boudry, V.; Braemer, A.; Braunschweig, W.; Brisson, V.; Brückner, W.; Bruel, P.; Bruncko, D.; Brune, C.; Buchholz, R.; Büngener, L.; Bürger, J.; Büsser, F. W.; Buniatian, A.; Burke, S.; Burton, M. J.; Calvet, D.; Campbell, A. T.; Carli, T.; Charlet, M.; Clarke, D.; Clegg, A. B.; Clerbaux, B.; Cocks, S.; Contreras, J. G.; Cormack, C.; Coughlan, J. A.; Courau, A.; Cousinou, M.-C.; Cozzika, G.; Criegee, L.; Cussans, D. G.; Cvach, J.; Dagoret, S.; Dainton, J. B.; Dau, W. D.; Daum, K.; David, M.; Davis, C. L.; Delcourt, B.; De Roeck, A.; De Wolf, E. A.; Dirkmann, M.; Dixon, P.; Di Nezza, P.; Dlugosz, W.; Dollfus, C.; Donovan, K. T.; Dowell, J. D.; Dreis, H. B.; Droutskoi, A.; Dünger, O.; Duhm, H.; Ebert, J.; Ebert, T. R.; Eckerlin, G.; Efremenko, V.; Egli, S.; Eichler, R.; Eisele, F.; Eisenhandler, E.; Elsen, E.; Erdmann, M.; Erdmann, W.; Fahr, A. B.; Favart, L.; Fedotov, A.; Felst, R.; Feltesse, J.; Ferencei, J.; Ferrarotto, F.; Flamm, K.; Fleischer, M.; Flieser, M.; Flügge, G.; Fomenko, A.; Formánek, J.; Foster, J. M.; Franke, G.; Fretwurst, E.; Gabathuler, E.; Gabathuler, K.; Gaede, F.; Garvey, J.; Gayler, J.; Gebauer, M.; Genzel, H.; Gerhards, R.; Glazov, A.; Goerlich, L.; Gogitidze, N.; Goldberg, M.; Goldner, D.; Golec-Biernat, K.; Gonzalez-Pineiro, B.; Gorelov, I.; Grab, C.; Grässler, H.; Greenshaw, T.; Griffiths, R. K.; Grindhammer, G.; Gruber, A.; Gruber, C.; Hadig, T.; Haidt, D.; Hajduk, L.; Haller, T.; Hampel, M.; Haynes, W. J.; Heinemann, B.; Heinzelmann, G.; Henderson, R. C. W.; Henschel, H.; Herynek, I.; Hess, M. F.; Hewitt, K.; Hildesheim, W.; Hiller, K. H.; Hilton, C. D.; Hladký, J.; Höppner, M.; Hoffmann, D.; Holtom, T.; Horisberger, R.; Hudgson, V. L.; Hütte, M.; Ibbotson, M.; Itterbeck, H.; Jacholkowska, A.; Jacobsson, C.; Jaffre, M.; Janoth, J.; Jansen, D. M.; Jansen, T.; Jönson, L.; Johnson, D. P.; Jung, H.; Kalmus, P. I. P.; Kander, M.; Kant, D.; Kaschowitz, R.; Kathage, U.; Katzy, J.; Kaufmann, H. H.; Kaufmann, O.; Kausch, M.; Kazarian, S.; Kenyon, I. R.; Kermiche, S.; Keuker, C.; Kiesling, C.; Klein, M.; Kleinwort, C.; Knies, G.; Köhler, T.; Köhne, J. H.; Kolanoski, H.; Kolya, S. D.; Korbel, V.; Kostka, P.; Kotelnikov, S. K.; Krämerkämper, T.; Krasny, M. W.; Krehbiel, H.; Krücker, D.; Küster, H.; Kuhlen, M.; Kurča, T.; Kurzhöfer, J.; Lacour, D.; Laforge, B.; Landon, M. P. J.; Lange, W.; Langenegger, U.; Lebedev, A.; Lehner, F.; Levonian, S.; Lindström, G.; Lindstroem, M.; Linsel, F.; Lipinski, J.; List, B.; Lobo, G.; Loch, P.; Lomas, J. W.; Lopez, G. C.; Lubimov, V.; Liike, D.; Lytkin, L.; Magnussen, N.; Malinovski, E.; Maraček, R.; Marage, P.; Marks, J.; Marshall, R.; Martens, J.; Martin, G.; Martin, R.; Martyn, H.-U.; Martyniak, J.; Mavroidis, T.; Maxfield, S. J.; McMahon, S. J.; Mehta, A.; Meier, K.; Metlica, F.; Meyer, A.; Meyer, A.; Meyer, H.; Meyer, J.; Meyer, P.-O.; Migliori, A.; Mikocki, S.; Milstead, D.; Moeck, J.; Moreau, F.; Morris, J. V.; Mroczko, E.; Müller, D.; Müller, G.; Müller, K.; Murín, P.; Nagovizin, V.; Nahnhauer, R.; Naroska, B.; Naumann, Th.; Négri, I.; Newman, P. R.; Newton, D.; Nguyen, H. K.; Nicholls, T. 
C.; Niebergall, F.; Niebuhr, C.; Niedzballa, Ch.; Niggli, H.; Nowak, G.; Noyes, G. W.; Nunnemann, T.; Nyberg-Werther, M.; Oakden, M.; Oberlack, H.; Olsson, J. E.; Ozerov, D.; Palmen, P.; Panaro, E.; Panitch, A.; Pascaud, C.; Patel, G. D.; Pawletta, H.; Peppel, E.; Perez, E.; Phillips, J. P.; Pieuchot, A.; Pitzl, D.; Pope, G.; Povh, B.; Prell, S.; Rabbertz, K.; Rädel, G.; Reimer, P.; Reinshagen, S.; Rick, H.; Riepenhausen, F.; Riess, S.; Rizvi, E.; Robmann, P.; Roloff, P. H. E.; Roosen, R.; Rosenbauer, K.; Rostovtsev, A.; Rouse, F.; Royon, C.; Rüter, K.; Rusakov, S.; Rybicki, K.; Sankey, D. P. C.; Schacht, P.; Schiek, S.; Schleif, S.; Schleper, P.; von Schlippe, W.; Schmidt, D.; Schmidt, G.; Schoeffel, L.; Schöning, A.; Schröder, V.; Schuhmann, E.; Schwab, B.; Sefkow, F.; Sell, R.; Semenovy, A.; Shekelyan, V.; Sheviakov, I.; Shtarkov, L. N.; Siegmon, G.; Siewert, U.; Sirois, Y.; Skillicorni, I. O.; Smirnov, F.; Solochenko, V.; Soloviev, Y.; Specka, A.; Spiekermann, J.; Spielman, S.; Spitzer, H.; Squinabol, F.; Steffen, F.; Steinberg, F.; Steiner, H.; Steinhart, J.; Stella, B.; Stellbergr, A.; Stier, P. J.; Stiewe, J.; Stöβlein, U.; Stolze, K.; Straumann, U.; Struczinski, W.; Sutton, J. P.; Tapprogge, S.; Tagevˇský, M.; Tchernyshov, V.; Tchetchelnitski, S.; Theissen, J.; Thiebaux, C.; Thompson, G.; Tobien, N.; Todenhagen, R.; Truöl, P.; Tsipolitis, G.; Turnau, J.; Tutas, J.; Tzamariudaki, E.; Uelkes, P.; Usik, A.; Valkár, S.; Valkárová, A.; Vallée, C.; Vandenplas, D.; Van Esch, P.; Van Mechelen, P.; Vazdik, Y.; Verrecchia, P.; Villet, G.; Wacker, K.; Wagener, A.; Wagener, M.; Waugh, B.; Weber, G.; Weber, M.; Wegener, D.; Wenger, A.; Wengler, T.; Werner, M.; West, L. R.; Wilksen, T.; Willard, S.; Winde, M.; Winter, G.-G.; Wittek, C.; Wobisch, M.; Wünsch, E.; Žáček, J.; Zarbock, D.; Zhang, Z.; Zhokin, A.; Zini, P.; Zomer, F.; Zsembery, J.; Zuber, K.; zurNedden, M.; Hl Collaboration

    1997-02-01

    Transverse momentum spectra of charged particles produced in deep inelastic scattering are measured as a function of the kinematic variables x and Q using the H1 detector at the ep collider HERA. The data are compared to different parton emission models, either with or without ordering of the emissions in transverse momentum. The data provide evidence for a relatively large amount of parton radiation between the current and the remnant systems.

  4. Integrating Representation Learning and Skill Learning in a Human-Like Intelligent Agent

    DTIC Science & Technology

    2013-06-21

    of 10 full-year controlled studies [Koedinger and MacLaren, 1997]. Nevertheless, the quality of the personalized instructions depends largely on the...relation among its children . The value of the direction field can be d, h, or v. d is the default value set for grammar rules that have only one child ...nearly comparable performance while significantly reducing the amount of knowledge engineering effort needed. 6.3 Experimental Study on

  5. Spatial and temporal patterns of beetles associated with coarse woody debris in managed bottomland hardwood forests

    Treesearch

    Michael D. Ulyshen; James L. Hanula; Scott Horn; John C. Kilgo; Christopher E. Moorman

    2004-01-01

    Malaise traps were used to sample beetles in artificial canopy gaps of different size (0.13 ha, 0.26 ha, and 0.50 ha) and age in a South Carolina bottomland hardwood forest. Traps were placed at the center, edge, and in the surrounding forest of each gap. Young gaps (~1 year) had large amounts of coarse woody debris compared to the surrounding forest, while older gaps...

  6. Rational calculation accuracy in acousto-optical matrix-vector processor

    NASA Astrophysics Data System (ADS)

    Oparin, V. V.; Tigin, Dmitry V.

    1994-01-01

    The high speed of parallel computations for a comparatively small-size processor and acceptable power consumption make the use of an acousto-optic matrix-vector multiplier (AOMVM) attractive for processing large amounts of information in real time. The limited accuracy of computations is an essential disadvantage of such a processor. Reduced accuracy requirements allow for considerable simplification of the AOMVM architecture and a reduction of the demands on its components.

  7. TransAtlasDB: an integrated database connecting expression data, metadata and variants

    PubMed Central

    Adetunji, Modupeore O; Lamont, Susan J; Schmidt, Carl J

    2018-01-01

    High-throughput transcriptome sequencing (RNAseq) is the universally applied method for target-free transcript identification and gene expression quantification, generating huge amounts of data. The difficulty of accessing such data and interpreting results can be a major impediment to postulating suitable hypotheses, so an innovative storage solution that addresses limitations such as hard disk storage requirements, efficiency and reproducibility is paramount. By offering a uniform data storage and retrieval mechanism, various data can be compared and easily investigated. We present a sophisticated system, TransAtlasDB, which incorporates a hybrid architecture of both relational and NoSQL databases for fast and efficient data storage, processing and querying of large datasets from transcript expression analysis with corresponding metadata, as well as gene-associated variants (such as SNPs) and their predicted gene effects. TransAtlasDB provides a data model for accurate storage of the large amount of data derived from RNAseq analysis, together with methods for interacting with the database, either through command-line data management workflows, written in Perl, with useful functionalities that simplify the storage and manipulation of the massive amounts of data generated from RNAseq analysis, or through the web interface. The database application is currently modeled to handle analysis data from agricultural species, and will be expanded to include more species groups. Overall, TransAtlasDB aims to serve as an accessible repository for the large, complex results files derived from RNAseq gene expression profiling and variant analysis. Database URL: https://modupeore.github.io/TransAtlasDB/ PMID:29688361
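
    As a toy illustration of the hybrid storage idea described above (structured metadata in a relational store, bulky expression results as schema-free documents), the snippet below combines SQLite with JSON documents. The table names, fields, and values are hypothetical and do not reflect the actual TransAtlasDB schema or API.

    ```python
    import json
    import sqlite3

    # Relational side: structured sample metadata for querying.
    # Document side: per-sample expression results stored as JSON text.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sample (id TEXT PRIMARY KEY, species TEXT, tissue TEXT)")
    conn.execute("CREATE TABLE expression_doc (sample_id TEXT, doc TEXT)")

    conn.execute("INSERT INTO sample VALUES (?, ?, ?)", ("S1", "Gallus gallus", "liver"))
    doc = json.dumps({"GAPDH": 812.4, "HSP70": 95.1})   # gene -> TPM, toy values
    conn.execute("INSERT INTO expression_doc VALUES (?, ?)", ("S1", doc))

    # Query metadata relationally, then unpack the matching document.
    row = conn.execute(
        "SELECT e.doc FROM expression_doc e JOIN sample s ON s.id = e.sample_id "
        "WHERE s.tissue = ?", ("liver",)
    ).fetchone()
    print(json.loads(row[0])["GAPDH"])
    ```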

  8. Accumulation of reserve carbohydrate by rumen protozoa and bacteria in competition for glucose.

    PubMed

    Denton, Bethany L; Diese, Leanne E; Firkins, Jeffrey L; Hackmann, Timothy J

    2015-03-01

    The aim of this study was to determine if rumen protozoa could form large amounts of reserve carbohydrate compared to the amounts formed by bacteria when competing for glucose in batch cultures. We separated large protozoa and small bacteria from rumen fluid by filtration and centrifugation, recombined equal protein masses of each group into one mixture, and subsequently harvested (reseparated) these groups at intervals after glucose dosing. This method allowed us to monitor reserve carbohydrate accumulation of protozoa and bacteria individually. When mixtures were dosed with a moderate concentration of glucose (4.62 or 5 mM) (n = 2 each), protozoa accumulated large amounts of reserve carbohydrate; 58.7% (standard error of the mean [SEM], 2.2%) glucose carbon was recovered from protozoal reserve carbohydrate at time of peak reserve carbohydrate concentrations. Only 1.7% (SEM, 2.2%) was recovered in bacterial reserve carbohydrate, which was less than that for protozoa (P < 0.001). When provided a high concentration of glucose (20 mM) (n = 4 each), 24.1% (SEM, 2.2%) of glucose carbon was recovered from protozoal reserve carbohydrate, which was still higher (P = 0.001) than the 5.0% (SEM, 2.2%) glucose carbon recovered from bacterial reserve carbohydrate. Our novel competition experiments directly demonstrate that mixed protozoa can sequester sugar away from bacteria by accumulating reserve carbohydrate, giving protozoa a competitive advantage and stabilizing fermentation in the rumen. Similar experiments could be used to investigate the importance of starch sequestration. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  9. The radioactivity of seasonal dust storms in the Middle East: the May 2012 case study in Jordan.

    PubMed

    Hamadneh, Hamed S; Ababneh, Zaid Q; Hamasha, Khadeejeh M; Ababneh, Anas M

    2015-02-01

    Dust storms in the Middle East are common during spring. Some of these storms are massive and carry large amounts of dust from faraway regions, posing health and pollution risks. The huge dust storm event that occurred in early May 2012 was investigated for its radioactive content using gamma-ray spectroscopy. Dust samples were collected from Northern Jordan, and it was found that the storm carried a large amount of both artificial and natural radioactivity. The average activity concentration of fallout (137)Cs was 17.0 Bq/kg, which is larger than that found in soil (2.3 Bq/kg); this enrichment is attributed to particle size effects. (7)Be, which is of atmospheric origin and has a relatively short half-life, was detected in dust at relatively large activity concentrations, as would be expected, with an average of 2860 Bq/kg, but it was not detected in soil. Despite the large activity concentration of (7)Be, dose assessment showed that it does not contribute significantly to the effective dose through inhalation. The concentrations of the primordial nuclides (40)K, (232)Th and (238)U were 547, 30.0 and 49.3 Bq/kg, respectively. With the exception of (40)K, these were comparable to what was found in soil. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Comparison of Antioxidant Constituents of Agriophyllum squarrosum Seed with Conventional Crop Seeds.

    PubMed

    Xu, Hai-Yan; Zheng, Hua-Chuan; Zhang, Hui-Wen; Zhang, Jin-Yu; Ma, Chao-Mei

    2018-06-05

    Twelve chemical constituents were identified from the Agriophyllum squarrosum seed (ASS). ASS contained large amounts of flavonoids, which were more concentrated in the seed coat. ASS-coat (1 g) contained 335.7 μg flavonoids of rutin equivalent, which was similar to the flavonoid content in soybean (351.2 μg/g), and greater than that in millet, wheat, rice, peanut, and corn. By LC-MS analysis, the major constituents in ASS were 3-O-[α-L-rhamnopyranosyl-(1→6)-β-D-glucopyranosyl]-7-O-(β-D-glucopyranosyl)-quercetin (1), rutin (4), quercetin-3-O-β-D-apiosyl(1→2)-[α-L-rhamnosyl(1→6)]-β-D-glucoside (2), isorhamnetin-3-O-rutinoside (5), and allantoin (3), compared with the isoflavonoids genistin (16), daidzin (14), and glycitin (18) in soybean. Among the constituents in ASS, compounds 1, 2, 4, protocatechuic acid (8), isoquercitrin (11), and luteolin-6-C-glucoside (12) potently scavenged DPPH radicals and intracellular ROS; strongly protected against peroxyl radical-induced DNA scission; and upregulated Nrf2, phosphorylated p38, phosphorylated JNK, and Bcl-2 in HepG2 cells. These results indicate that ASS is rich in antioxidant constituents that can enrich the varieties of food flavonoids, with significant beneficial implications for those who suffer from oxidative stress-related conditions. This study found that A. squarrosum seed contains large amounts of antioxidative flavonoids and compared its chemical constituents with those of conventional foods. These results should increase the interest in planting the sand-fixing A. squarrosum on a large scale, thus preventing desertification and providing valuable foods. © 2018 Institute of Food Technologists®.

  11. Classification of brain MRI with big data and deep 3D convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Wegmayr, Viktor; Aitharaju, Sai; Buhmann, Joachim

    2018-02-01

    Our ever-aging society faces the growing problem of neurodegenerative diseases, in particular dementia. Magnetic Resonance Imaging provides a unique tool for non-invasive investigation of these brain diseases. However, it is extremely difficult for neurologists to identify complex disease patterns from large amounts of three-dimensional images. In contrast, machine learning excels at automatic pattern recognition from large amounts of data. In particular, deep learning has achieved impressive results in image classification. Unfortunately, its application to medical image classification remains difficult. We consider two reasons for this difficulty: First, volumetric medical image data is considerably scarcer than natural images. Second, the complexity of 3D medical images is much higher compared to common 2D images. To address the problem of small data set size, we assemble the largest dataset ever used for training a deep 3D convolutional neural network to classify brain images as healthy (HC), mild cognitive impairment (MCI) or Alzheimer's disease (AD). We use more than 20,000 images from subjects of these three classes, which is almost 9x the size of the previously largest data set. The problem of high dimensionality is addressed by using a deep 3D convolutional neural network, which is state-of-the-art in large-scale image classification. We exploit its ability to process the images directly, with only standard preprocessing and without the need for elaborate feature engineering. Compared to other work, our workflow is considerably simpler, which increases clinical applicability. Accuracy is measured on the ADNI+AIBL data sets, and the independent CADDementia benchmark.
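
    For readers unfamiliar with 3D convolutions, a minimal PyTorch sketch follows; the depth, channel counts, input size and class labels are placeholders, not the architecture the authors trained on their 20,000-image data set.

    ```python
    # Minimal 3D convolutional classifier sketch (PyTorch). The layer sizes and input
    # dimensions are placeholders for illustration only.
    import torch
    import torch.nn as nn

    class Tiny3DCNN(nn.Module):
        def __init__(self, n_classes=3):          # HC / MCI / AD
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            )
            self.classifier = nn.Linear(16 * 16 * 16 * 16, n_classes)  # for 64^3 input

        def forward(self, x):
            x = self.features(x)                   # (B, 16, 16, 16, 16)
            return self.classifier(x.flatten(1))

    model = Tiny3DCNN()
    volume = torch.randn(2, 1, 64, 64, 64)         # batch of 2 single-channel 64^3 volumes
    print(model(volume).shape)                     # torch.Size([2, 3])
    ```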

  12. Use of tropical maize for bioethanol production

    USDA-ARS?s Scientific Manuscript database

    Tropical maize is an alternative energy crop being considered as a feedstock for bioethanol production in the North Central and Midwest United States. Tropical maize is advantageous because it produces large amounts of soluble sugars in its stalks, creates a large amount of biomass, and requires lo...

  13. Effect of Particle Size on Thermal Conductivity of Nanofluid

    NASA Astrophysics Data System (ADS)

    Chopkar, M.; Sudarshan, S.; Das, P. K.; Manna, I.

    2008-07-01

    Nanofluids, containing nanometric metallic or oxide particles, exhibit extraordinarily high thermal conductivity. It is reported that the identity (composition), amount (volume percent), size, and shape of the nanoparticles largely determine the extent of this enhancement. In the present study, we have experimentally investigated the impact of Al2Cu and Ag2Al nanoparticle size and volume fraction on the effective thermal conductivity of water- and ethylene glycol-based nanofluids prepared by a two-stage process: mechanical alloying of appropriate Al-Cu and Al-Ag elemental powder blends, followed by dispersion of these nanoparticles (1 to 2 vol pct), with different particle sizes, in water and ethylene glycol. The thermal conductivity ratio of the nanofluid, measured using an indigenously developed thermal comparator device, shows a significant increase of up to 100 pct with only 1.5 vol pct nanoparticles of 30- to 40-nm average diameter. Furthermore, an analytical model shows that the interfacial layer significantly influences the effective thermal conductivity ratio of the nanofluid for a comparable amount of nanoparticles.
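
    For context, the classical Maxwell effective-medium expression is the usual baseline against which such enhancements are judged. The sketch below implements that textbook relation only, not the interfacial-layer model developed in the paper, and the particle conductivity is an assumed order of magnitude.

    ```python
    # Classical Maxwell effective-medium estimate of nanofluid thermal conductivity.
    # This is the textbook baseline, NOT the interfacial-layer model of the paper;
    # the particle conductivity value is an illustrative assumption.
    def maxwell_k_eff(k_fluid, k_particle, phi):
        """Effective conductivity of a dilute suspension (phi = particle volume fraction)."""
        num = k_particle + 2 * k_fluid + 2 * phi * (k_particle - k_fluid)
        den = k_particle + 2 * k_fluid - phi * (k_particle - k_fluid)
        return k_fluid * num / den

    k_water = 0.613        # W/m-K at ~25 C
    k_particle = 200.0     # W/m-K, assumed order of magnitude for a metallic/intermetallic particle
    for phi in (0.01, 0.015, 0.02):
        ratio = maxwell_k_eff(k_water, k_particle, phi) / k_water
        print(f"phi = {phi:.3f}: k_eff/k_f = {ratio:.3f}")
    ```

    At 1 to 2 vol pct the classical prediction is only a few per cent, far below the reported enhancement of up to 100 pct, which is why particle size and interfacial effects are invoked.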

  14. Fractional labelmaps for computing accurate dose volume histograms

    NASA Astrophysics Data System (ADS)

    Sunderland, Kyle; Pinter, Csaba; Lasso, Andras; Fichtinger, Gabor

    2017-03-01

    PURPOSE: In radiation therapy treatment planning systems, structures are represented as parallel 2D contours. For treatment planning algorithms, structures must be converted into labelmap (i.e. 3D image denoting structure inside/outside) representations. This is often done by triangulating a surface from the contours, which is then converted into a binary labelmap. This surface-to-binary-labelmap conversion can cause large errors in small structures. Binary labelmaps are often represented using one byte per voxel, meaning a large amount of memory is unused. Our goal is to develop a fractional labelmap representation containing non-binary values, allowing more information to be stored in the same amount of memory. METHODS: We implemented an algorithm in 3D Slicer which converts surfaces to fractional labelmaps by creating 216 binary labelmaps, changing the labelmap origin on each iteration. The binary labelmap values are summed to create the fractional labelmap. In addition, an algorithm is implemented in the SlicerRT toolkit that calculates dose volume histograms (DVH) using fractional labelmaps. RESULTS: We found that with manually segmented RANDO head and neck structures, fractional labelmaps represented structure volume up to 19.07% (average 6.81%) more accurately than binary labelmaps, while occupying the same amount of memory. When compared to baseline DVH from treatment planning software, DVH from fractional labelmaps had agreement acceptance percent (1% ΔD, 1% ΔV) up to 57.46% higher (average 4.33%) than DVH from binary labelmaps. CONCLUSION: Fractional labelmaps promise to be an effective method for structure representation, allowing considerably more information to be stored in the same amount of memory.
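
    The conversion step can be sketched as follows: sample inside/outside membership at 216 (6×6×6) sub-voxel offsets and average the resulting binary labelmaps. In the sketch an analytic sphere stands in for the triangulated structure surface, and the grid size and spacing are assumptions; it illustrates the principle rather than reproducing the 3D Slicer implementation.

    ```python
    # Sketch of the fractional-labelmap idea: sample inside/outside at many sub-voxel
    # offsets and average. A sphere stands in for the triangulated structure surface;
    # the 6x6x6 = 216 offset grid mirrors the count mentioned in the abstract.
    import numpy as np

    def fractional_labelmap(shape=(32, 32, 32), spacing=1.0, radius=10.0, n=6):
        frac = np.zeros(shape, dtype=float)
        centre = (np.array(shape) - 1) / 2.0
        offsets = (np.arange(n) + 0.5) / n - 0.5          # n sub-voxel shifts in [-0.5, 0.5)
        idx = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1)
        for ox in offsets:
            for oy in offsets:
                for oz in offsets:
                    pts = (idx + np.array([ox, oy, oz]) - centre) * spacing
                    inside = (pts ** 2).sum(axis=-1) <= radius ** 2
                    frac += inside                        # accumulate binary labelmaps
        return frac / n ** 3                              # fractional occupancy in [0, 1]

    frac = fractional_labelmap()
    # The fractional volume is closer to the analytic sphere volume than a single binary labelmap.
    print("fractional volume:", frac.sum(), "analytic:", 4 / 3 * np.pi * 10.0 ** 3)
    ```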

  15. Examination of snowmelt over Western Himalayas using remote sensing data

    NASA Astrophysics Data System (ADS)

    Tiwari, Sarita; Kar, Sarat C.; Bhatla, R.

    2016-07-01

    Snowmelt variability in the Western Himalayas has been examined using remotely sensed snow water equivalent (SWE) and snow-covered area (SCA) datasets. Climatological snowfall and snowmelt amounts vary in the Himalayan region from west to east and from month to month. Maximum snowmelt occurs in the elevation zone between 4500 and 5000 m. As spring and summer approach and snowmelt begins, a large amount of snow melts in May. Strengths and weaknesses of temperature-based snowmelt models have been analyzed for this region by computing the snowmelt factor, or degree-day factor (DDF). The average DDF in the Himalayas is higher in April and lower in July. During the spring and summer months, the melting rate is higher in areas above 2500 m. The region lying between the 4500 and 5000 m elevation zones contributes more snowmelt, with a higher melting rate. Snowmelt models have been developed to estimate interannual variations of the monthly snowmelt amount using the DDF, observed SWE, and surface air temperature from reanalysis datasets. To further improve the snowmelt estimate, a regression between observed and modeled snowmelt has been carried out and revised DDF values have been computed. It is found that both models fail to capture the interannual variability of snowmelt in April. The skill of the models is moderate in May and June, and relatively better in July. To explain this skill, the interannual variability (IAV) of surface air temperature has been examined. Compared to July, the IAV of temperature in April is large, indicating that a climatological value of the DDF is not sufficient to explain the snowmelt rate in April. Snow area and snow amount depletion curves over the Himalayas indicate that in a small area at high altitude, snow is still observed with large SWE, whereas over most of the region all the snow has melted.
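
    The degree-day model referred to above is conventionally written M = DDF · max(T − T0, 0), with M the daily melt in mm of water equivalent. A minimal sketch follows; the DDF, threshold temperature and SWE values are illustrative assumptions, not the calibrated values of this study.

    ```python
    # Minimal degree-day (temperature-index) snowmelt sketch. DDF and the threshold
    # temperature are illustrative values, not the calibrated factors from the study.
    def daily_melt(mean_temp_c, ddf=4.0, t_threshold=0.0):
        """Snowmelt in mm water equivalent per day: M = DDF * max(T - T0, 0)."""
        return ddf * max(mean_temp_c - t_threshold, 0.0)

    april_temps = [-2.0, 1.5, 3.0, 4.2]          # example daily mean temperatures (deg C)
    swe = 500.0                                  # initial snow water equivalent (mm), assumed
    for t in april_temps:
        melt = min(daily_melt(t), swe)           # cannot melt more snow than is present
        swe -= melt
        print(f"T = {t:+.1f} C  melt = {melt:.1f} mm  SWE left = {swe:.1f} mm")
    ```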

  16. Test of Von Baer's law of the conservation of early development.

    PubMed

    Poe, Steven

    2006-11-01

    One of the oldest and most pervasive ideas in comparative embryology is the perceived evolutionary conservation of early ontogeny relative to late ontogeny. Karl Von Baer first noted the similarity of early ontogeny across taxa, and Ernst Haeckel and Charles Darwin gave evolutionary interpretation to this phenomenon. In spite of a resurgence of interest in comparative embryology and the development of mechanistic explanations for Von Baer's law, the pattern itself has been largely untested. Here, I use statistical phylogenetic approaches to show that Von Baer's law is an unnecessarily complex explanation of the patterns of ontogenetic timing in several clades of vertebrates. Von Baer's law suggests a positive correlation between ontogenetic time and amount of evolutionary change. I compare ranked position in ontogeny to frequency of evolutionary change in rank for developmental events and find that these measures are not correlated, thus failing to support Von Baer's model. An alternative model that postulates that small changes in ontogenetic rank are evolutionarily easier than large changes is tentatively supported.

  17. Establishment of Class e1 Mass Standard of 50 kg

    NASA Astrophysics Data System (ADS)

    Yao, Hong; Wang, Jian; Ding, Jingan; Zhong, Ruilin; Ren, Xiaoping

    Because of equipment limits, the dissemination of large masses in China has, since the 1950s, been realized through large numbers of higher-class 20 kg weights. With improvements in technique and growing customer requirements, it became necessary to establish a 50 kg mass standard. In the 1990s the mass standard laboratory set up Class E1 weight sets from 20 kg to 1 mg. Extending the Class E1 capacity up to 50 kg requires not only producing Class E1 50 kg weights and importing a mass comparator, but also lifting the heavy weight safely from the weight box onto the balance receptor. The mass comparator has now been installed at the Hepingli campus of NIM. Two Class E1 50 kg weights are determined by the combination weighing method, and a lifting device has been mounted close to the mass comparator so that the 50 kg weights can be moved easily.
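
    Combination weighing determines several unknown weights from a redundant set of comparator readings solved by least squares. The sketch below shows the general idea with an invented comparison scheme and readings; it is not NIM's actual weighing design.

    ```python
    # Sketch of combination (least-squares) weighing: two unknown 50 kg weights, A and B,
    # are compared with each other and with a known 50 kg reference built from smaller
    # standards. The comparison scheme and readings below are invented for illustration.
    import numpy as np

    ref = 50.000_012          # known reference mass, kg (assumed value)
    # Each row expresses one comparator reading: coeff_A * m_A + coeff_B * m_B = reading
    #   A - B   = d1
    #   A       = ref + d2
    #   B       = ref + d3
    design = np.array([[1.0, -1.0],
                       [1.0,  0.0],
                       [0.0,  1.0]])
    readings = np.array([+0.000_004,           # A heavier than B by 4 mg
                         ref + 0.000_007,      # A vs reference
                         ref + 0.000_003])     # B vs reference
    solution, *_ = np.linalg.lstsq(design, readings, rcond=None)
    print("m_A = %.9f kg, m_B = %.9f kg" % tuple(solution))
    ```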

  18. Gene Expression Browser: Large-Scale and Cross-Experiment Microarray Data Management, Search & Visualization

    USDA-ARS?s Scientific Manuscript database

    The amount of microarray gene expression data in public repositories has been increasing exponentially for the last couple of decades. High-throughput microarray data integration and analysis has become a critical step in exploring the large amount of expression data for biological discovery. Howeve...

  19. Nonparametric Density Estimation Based on Self-Organizing Incremental Neural Network for Large Noisy Data.

    PubMed

    Nakamura, Yoshihiro; Hasegawa, Osamu

    2017-01-01

    With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. Predicting the distribution underlying such data beforehand is difficult; furthermore, the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that learns the given data as a network of prototypes; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
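
    A minimal sketch of the prototype-based idea follows, assuming k-means centres as a stand-in for the SOINN prototype network and a fixed bandwidth; it illustrates the general approach only and is not the KDESOINN algorithm itself.

    ```python
    # Prototype-based density sketch: place Gaussian kernels on a small set of prototypes
    # instead of on every data point. K-means centres stand in for the SOINN prototype
    # network here; this is an illustration of the general idea, not KDESOINN.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(-2.0, 0.5, 3000), rng.normal(3.0, 1.0, 7000)])

    km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(data.reshape(-1, 1))
    centres = km.cluster_centers_.ravel()
    weights = np.bincount(km.labels_, minlength=20) / data.size   # mass carried by each prototype
    bandwidth = 0.4                                               # assumed, fixed for simplicity

    def density(x):
        """Mixture of Gaussians centred on the prototypes, weighted by cluster mass."""
        kernels = np.exp(-0.5 * ((x[:, None] - centres) / bandwidth) ** 2)
        kernels /= bandwidth * np.sqrt(2 * np.pi)
        return kernels @ weights

    grid = np.linspace(-5, 7, 5)
    print(np.round(density(grid), 4))   # density evaluated from 20 prototypes, not 10,000 points
    ```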

  20. Field experiment with liquid manure and enhanced biochar

    NASA Astrophysics Data System (ADS)

    Dunst, Gerald

    2017-04-01

    Field experiments were conducted with low application rates of various liquid-manure-enhanced biochars. In 2016 a new machine was developed to inject liquid biochar-based fertilizer directly into the crop root zone. A large-scale field experiment with corn and oil seed pumpkin was set up on 42 hectares across 15 fields in the south-east of Austria. Three treatments were compared: (1) surface spreading of liquid manure as control (common practice), (2) 20 cm deep root-zone injection of the same amount of liquid manure, and (3) 20 cm deep root-zone injection of the same amount of liquid manure mixed with 1 to 2 tons of various nutrient-enhanced biochars. The biochars were quenched with the liquid phase of a separated digestate from a biogas plant (feedstock: cow manure). From May to October, nitrate and ammonium contents were analyzed monthly in the 0-30 cm and 30-60 cm soil horizons. At the end of the growing season the yield was determined. Root-zone injection of the liquid manure reduced the nitrate content during the first two months by 13-16% compared to the control. When the liquid manure was blended with biochar, soil nitrate content was lowest (a reduction of 40-47%). On average, root-zone injection of manure-biochar increased the yield by 7% compared to the surface-applied control and by 3% compared to root-zone-injected manure without biochar. The results show that biochar can reduce the nitrate load in soils and increase corn yield at the same time, and that the nutrient efficiency of organic liquid fertilizers can be increased.

  1. Consumption with Large Sip Sizes Increases Food Intake and Leads to Underestimation of the Amount Consumed

    PubMed Central

    Bolhuis, Dieuwerke P.; Lakemond, Catriona M. M.; de Wijk, Rene A.; Luning, Pieternel A.; de Graaf, Cees

    2013-01-01

    Background A number of studies have shown that bite and sip sizes influence the amount of food intake. Consuming with small sips instead of large sips means relatively more sips for the same amount of food; people may believe that their intake is higher, which leads to faster satiation. This effect may be disturbed when people are distracted. Objective The objective of the study is to assess the effects of sip size, in a focused state and a distracted state, on ad libitum intake and on the estimated amount consumed. Design In this 3×2 cross-over design, 53 healthy subjects consumed soup ad libitum with small sips (5 g, 60 g/min), large sips (15 g, 60 g/min), and free sips (where sip size was determined by the subjects themselves), in both a distracted and a focused state. Sips were administered via a pump. There were no visual cues toward consumption. Subjects then estimated how much they had consumed by filling soup into soup bowls. Results Intake in the small-sip condition was ∼30% lower than in both the large-sip and free-sip conditions (P<0.001). In addition, subjects underestimated how much they had consumed in the large-sip and free-sip conditions (P<0.03). Distraction led to a general increase in food intake (P = 0.003), independent of sip size. Distraction did not influence sip size or estimations. Conclusions Consumption with large sips led to higher food intake, as expected. Large sips, whether fixed or chosen by the subjects themselves, led to underestimation of the amount consumed. This may be a risk factor for over-consumption. Reducing sip or bite sizes may successfully lower food intake, even in a distracted state. PMID:23372657

  2. Biogeographic patterns in below-ground diversity in New York City's Central Park are similar to those observed globally.

    PubMed

    Ramirez, Kelly S; Leff, Jonathan W; Barberán, Albert; Bates, Scott Thomas; Betley, Jason; Crowther, Thomas W; Kelly, Eugene F; Oldfield, Emily E; Shaw, E Ashley; Steenbock, Christopher; Bradford, Mark A; Wall, Diana H; Fierer, Noah

    2014-11-22

    Soil biota play key roles in the functioning of terrestrial ecosystems; however, compared to our knowledge of above-ground plant and animal diversity, the biodiversity found in soils remains largely uncharacterized. Here, we present an assessment of soil biodiversity and biogeographic patterns across Central Park in New York City that spanned all three domains of life, demonstrating that even an urban, managed system harbours large amounts of undescribed soil biodiversity. Despite high variability across the Park, below-ground diversity patterns were predictable based on soil characteristics, with prokaryotic and eukaryotic communities exhibiting overlapping biogeographic patterns. Further, Central Park soils harboured nearly as many distinct soil microbial phylotypes and types of soil communities as we found in biomes across the globe (including arctic, tropical and desert soils). This integrated cross-domain investigation highlights that the amount and patterning of novel and uncharacterized diversity at a single urban location matches that observed across natural ecosystems spanning multiple biomes and continents. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  3. THE CHEMISTRY OF THE LIVER IN ACUTE YELLOW ATROPHY

    PubMed Central

    Wells, H. Gideon

    1907-01-01

    From the liver of a young man who died of typical, "idiopathic" acute yellow atrophy of the liver, after an illness of six weeks, there were isolated and identified the following amino acids: Histidin, lysin, tyrosin, leucin, glycocoll, alanin, prolin, glutaminic acid, aspartic acid. These were found free in extracts of the liver, and presumably represent products of the autolysis of liver cells, although the amount of soluble non-protein nitrogen present in the extracts was so large as to suggest that there must be some other source for these substances. Small quantities of free proteoses and peptones, and of xanthin and hypoxanthin, were also found in the extracts. In the insoluble proteins of the liver the proportion of diamino acids was decreased slightly as compared with normal livers. The proportion of protein phosphorus was increased, probably because of active regenerative proliferation, while the sulphur was normal in amount. Iron was increased because of the large quantity of blood in the liver and the hematogenous pigmentation of the liver cells. Gelatigenous material was increased both absolutely and relatively, because of the loss of parenchyma and the proliferation of the stroma. The proportion of water to solids was much increased, there having been a loss of over two-thirds of the entire parenchymatous elements of the liver. The amount of fat, lecithin and cholesterin was not far from that normal for the liver. PMID:19867115

  4. A Self Sustaining Solar-Bio-Nano Based Wastewater Treatment System for Forward Operating Bases

    DTIC Science & Technology

    2017-06-21

    fouling problem and requires a relatively high operational pressure (more than 500 psi) [52]. It has also been reported that pulsed electric discharge as...large amount of working fluid to the targeted temperature. In addition, energy loss to the ambient environment is another problem that significantly...heat. Gas and steam turbines as engine units were compared to determine the most suitable for the studied solar–bio hybrid system. The net capacity

  5. Interaction between lactic acid bacteria and yeasts in airag, an alcoholic fermented milk.

    PubMed

    Sudun; Wulijideligen; Arakawa, Kensuke; Miyamoto, Mari; Miyamoto, Taku

    2013-01-01

    The interaction between nine lactic acid bacteria (LAB) and five yeast strains isolated from airag of the Inner Mongolia Autonomous Region, China was investigated. Three representative LAB and two yeasts that showed symbiosis were selected and incubated in 10% (w/v) reconstituted skim milk as single and mixed cultures, and viable count, titratable acidity, ethanol and sugar content were measured every 24 h for 1 week. LAB and yeasts showed higher viable counts in the mixed cultures compared to the single cultures. Titratable acidity of the mixed cultures was obviously enhanced compared with that of the single cultures, except for the combinations of Lactobacillus reuteri 940B3 with Saccharomyces cerevisiae 4C and Lactobacillus helveticus 130B4 with Candida kefyr 2Y305. C. kefyr 2Y305 produced large amounts of ethanol (maximum 1.35 g/L), whereas non-lactose-fermenting S. cerevisiae 4C produced large amounts of ethanol only in the mixed cultures. Total glucose and galactose content increased while lactose content decreased in the single cultures of Leuconostoc mesenteroides 6B2081 and Lb. helveticus 130B4. However, both glucose and galactose were completely consumed and lactose was markedly reduced in the mixed cultures with yeasts. The results suggest that yeasts utilize glucose and galactose produced by LAB lactase to promote their cell growth. © 2012 The Authors. Animal Science Journal © 2012 Japanese Society of Animal Science.

  6. Economical ground data delivery

    NASA Technical Reports Server (NTRS)

    Markley, Richard W.; Byrne, Russell H.; Bromberg, Daniel E.

    1994-01-01

    Data delivery in the Deep Space Network (DSN) involves transmission of a small amount of constant, high-priority traffic and a large amount of bursty, low priority data. The bursty traffic may be initially buffered and then metered back slowly as bandwidth becomes available. Today both types of data are transmitted over dedicated leased circuits. The authors investigated the potential of saving money by designing a hybrid communications architecture that uses leased circuits for high-priority network communications and dial-up circuits for low-priority traffic. Such an architecture may significantly reduce costs and provide an emergency backup. The architecture presented here may also be applied to any ground station-to-customer network within the range of a common carrier. The authors compare estimated costs for various scenarios and suggest security safeguards that should be considered.

  7. A Cost Effective Block Framing Scheme for Underwater Communication

    PubMed Central

    Shin, Soo-Young; Park, Soo-Hyun

    2011-01-01

    In this paper, the Selective Multiple Acknowledgement (SMA) method, based on Multiple Acknowledgement (MA), is proposed to efficiently reduce the amount of data transmission by redesigning the transmission frame structure and taking into consideration underwater transmission characteristics. The method is suited to integrated underwater system models, as the proposed method can handle the same amount of data in a much more compact frame structure without any appreciable loss of reliability. Herein, the performance of the proposed SMA method was analyzed and compared to those of the conventional Automatic Repeat-reQuest (ARQ), Block Acknowledgement (BA), block response, and MA methods. The efficiency of the underwater sensor network, which forms a large cluster and mostly contains uplink data, is expected to be improved by the proposed method. PMID:22247689

  8. Experience of the JPL Exploratory Data Analysis Team at validating HIRS2/MSU cloud parameters

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph; Haskins, Robert D.; Granger-Gallegos, Stephanie; Pursch, Andrew; Delgenio, Anthony

    1992-01-01

    Validation of the HIRS2/MSU cloud parameters began with the cloud/climate feedback problem. The derived effective cloud amount is less sensitive to surface temperature for higher clouds. This occurs because as the cloud elevation increases, the difference between surface temperature and cloud temperature increases, so only a small change in cloud amount is needed to effect a large change in radiance at the detector. By validating the cloud parameters it is meant 'developing a quantitative sense for the physical meaning of the measured parameters', by: (1) identifying the assumptions involved in deriving parameters from the measured radiances, (2) testing the input data and derived parameters for statistical error, sensitivity, and internal consistency, and (3) comparing with similar parameters obtained from other sources using other techniques.
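
    The sensitivity argument can be made concrete with the standard single-layer cloud radiance relation R = N·B(Tc) + (1 − N)·B(Ts), where N is the effective cloud amount and B the Planck radiance; this is the generic form, not necessarily the exact HIRS2/MSU retrieval, and the 11 μm window channel is an assumption.

    ```python
    # Why effective cloud amount is less sensitive for high clouds: with
    # R = N*B(Tc) + (1 - N)*B(Ts), one gets N = (B(Ts) - R) / (B(Ts) - B(Tc)), so
    # |dN/dR| = 1 / (B(Ts) - B(Tc)) shrinks as the cloud gets colder (higher).
    import numpy as np

    H, C, K = 6.626e-34, 2.998e8, 1.381e-23     # Planck, speed of light, Boltzmann (SI)

    def planck(temp_k, wavelength_m=11e-6):     # 11-micron window channel assumed
        return (2 * H * C**2 / wavelength_m**5) / np.expm1(H * C / (wavelength_m * K * temp_k))

    t_surface = 290.0
    for t_cloud in (280.0, 250.0, 220.0):       # progressively higher (colder) cloud
        sensitivity = 1.0 / (planck(t_surface) - planck(t_cloud))
        print(f"Tc = {t_cloud:.0f} K: |dN/dR| = {sensitivity:.3e} per unit radiance")
    ```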

  9. A Comparative Analysis of Extract, Transformation and Loading (ETL) Process

    NASA Astrophysics Data System (ADS)

    Runtuwene, J. P. A.; Tangkawarow, I. R. H. T.; Manoppo, C. T. M.; Salaki, R. J.

    2018-02-01

    Data and information are currently growing rapidly, in varying amounts and media. This development will eventually produce very large volumes of data, better known as Big Data. Business Intelligence (BI) analyzes large volumes of data and information so that important insights can be obtained and used to support decision-making. In practice, a process that integrates existing data and information into a data warehouse is needed. This data integration process is known as Extract, Transformation and Loading (ETL). Many applications have been developed to carry out the ETL process, but selecting the application that is most effective and efficient in terms of time, cost and effort can be a challenge. The objective of this study was therefore to provide a comparative analysis of the ETL process using Microsoft SQL Server Integration Services (SSIS) and Pentaho Data Integration (PDI).
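
    A minimal end-to-end ETL sketch (extract, transform, load) in Python follows, purely to illustrate the process that SSIS and PDI automate; the data, file and table names are invented, and the sketch does not reproduce either tool's packages.

    ```python
    # Minimal ETL sketch (extract -> transform -> load) using pandas and SQLite.
    import pandas as pd
    import sqlite3

    # Extract: a raw operational export (stand-in for pd.read_csv("orders.csv")).
    raw = pd.DataFrame({"order_id": [1, 2, 3],
                        "amount": ["10.5", "n/a", "7.25"],
                        "country": ["id", "ID", "sg"]})

    # Transform: clean types, normalise codes, drop unusable rows.
    raw["amount"] = pd.to_numeric(raw["amount"], errors="coerce")
    raw["country"] = raw["country"].str.upper()
    clean = raw.dropna(subset=["amount"])

    # Load: append into a warehouse fact table.
    with sqlite3.connect("warehouse_demo.db") as con:
        clean.to_sql("fact_orders", con, if_exists="append", index=False)
        print(pd.read_sql("SELECT COUNT(*) AS rows FROM fact_orders", con))
    ```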

  10. Use of High-Resolution Satellite Observations to Evaluate Cloud and Precipitation Statistics from Cloud-Resolving Model Simulations

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Tao, W.; Hou, A. Y.; Zeng, X.; Shie, C.

    2007-12-01

    The cloud and precipitation statistics simulated by the 3D Goddard Cumulus Ensemble (GCE) model for different environmental conditions, i.e., the South China Sea Monsoon Experiment (SCSMEX), CRYSTAL-FACE, and KAWJEX, are compared with Tropical Rainfall Measuring Mission (TRMM) TMI and PR rainfall measurements as well as cloud observations from the Clouds and the Earth's Radiant Energy System (CERES) and Moderate Resolution Imaging Spectroradiometer (MODIS) instruments. It is found that the GCE is capable of simulating major convective system development and reproducing the total surface rainfall amount, as compared with rainfall estimated from the soundings. The model shows large discrepancies in the rain spectrum and vertical hydrometeor profiles. The discrepancy in the precipitation field is also consistent with the cloud and radiation observations. The study will focus on the contributions of large-scale forcing and microphysics to the simulated model-observation discrepancies.

  11. mySyntenyPortal: an application package to construct websites for synteny block analysis.

    PubMed

    Lee, Jongin; Lee, Daehwan; Sim, Mikang; Kwon, Daehong; Kim, Juyeon; Ko, Younhee; Kim, Jaebum

    2018-06-05

    Advances in sequencing technologies have facilitated large-scale comparative genomics based on whole genome sequencing. Constructing and investigating conserved genomic regions among multiple species (called synteny blocks) is essential in comparative genomics. However, doing so requires significant amounts of computational resources and time, in addition to bioinformatics skills. Many web interfaces have been developed to make such tasks easier, but these web interfaces cannot be customized for users who want to use their own set of genome sequences or their own definition of synteny blocks. To resolve this limitation, we present mySyntenyPortal, a stand-alone application package to construct websites for synteny block analyses using users' own genome data. mySyntenyPortal provides both command-line and web-based interfaces to build and manage websites for large-scale comparative genomic analyses. These websites can also be easily published and accessed by other users. To demonstrate the usability of mySyntenyPortal, we present an example study in which websites were built to compare the genomes of three mammalian species (human, mouse, and cow) and show how they can be easily utilized to identify potential genes affected by genome rearrangements. mySyntenyPortal will contribute to extended comparative genomic analyses based on large-scale whole genome sequences by providing unique functionality for the easy creation of interactive websites for synteny block analyses from users' own genome data.

  12. Role of tropical cyclones in determining the fate of Bay of Bengal vapor contributed rain δ18O values

    NASA Astrophysics Data System (ADS)

    Sanyal, Prasanta; Basu, Sayak

    2016-04-01

    Robust prediction of future climate depends on an understanding of past and present hydrological systems, which can be traced through the oxygen isotopic composition (δ18O) of rain. Compared to Peninsular and Southern India, explanations for the variability in δ18O values of monsoonal rain are sparse for Eastern India. Analysis (together with published records) of Indian summer monsoon (ISM) rain at the entry point of Bay of Bengal (BoB) vapor into the continent showed that the gradual depletion of 18O in ISM rain is determined by surface run-off and by the location of cyclone generation in the BoB. The timing and density of cyclones control the maxima in amount and minima in δ18O values of ISM rain, and are possibly also responsible for the long-term (last 10 years) decrease in rain δ18O values (and amount). The large spatial variation and temporal robustness of the weak and insignificant amount effect suggest that reconstructed past climate records along the track of BoB vapor should be reconsidered. The memory effect of atmospheric vapor is found to weaken the amount effect.

  13. Transfer learning improves supervised image segmentation across imaging protocols.

    PubMed

    van Opbroek, Annegreet; Ikram, M Arfan; Vernooij, Meike W; de Bruijne, Marleen

    2015-05-01

    The variation between images obtained with different scanners or different imaging protocols presents a major challenge in automatic segmentation of biomedical images. This variation especially hampers the application of otherwise successful supervised-learning techniques which, in order to perform well, often require a large amount of labeled training data that is exactly representative of the target data. We therefore propose to use transfer learning for image segmentation. Transfer-learning techniques can cope with differences in distributions between training and target data, and therefore may improve performance over supervised learning for segmentation across scanners and scan protocols. We present four transfer classifiers that can train a classification scheme with only a small amount of representative training data, in addition to a larger amount of other training data with slightly different characteristics. The performance of the four transfer classifiers was compared to that of standard supervised classification on two magnetic resonance imaging brain-segmentation tasks with multi-site data: white matter, gray matter, and cerebrospinal fluid segmentation; and white-matter-/MS-lesion segmentation. The experiments showed that when there is only a small amount of representative training data available, transfer learning can greatly outperform common supervised-learning approaches, minimizing classification errors by up to 60%.
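
    Instance weighting is one simple way to combine a small amount of representative training data with a larger amount of differently distributed data; the sketch below illustrates that flavour only and is not one of the four transfer classifiers evaluated in the paper. The synthetic data, weighting factor and classifier choice are assumptions.

    ```python
    # Instance-weighting sketch: train on a large amount of differently-distributed data
    # plus a small representative set, up-weighting the representative samples.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    # "Other-scanner" data: plentiful but with a shifted intensity distribution.
    X_other = rng.normal(0.0, 1.0, (500, 5)) + 0.8
    y_other = (X_other[:, 0] > 0.8).astype(int)
    # "Target-scanner" data: scarce but representative.
    X_target = rng.normal(0.0, 1.0, (20, 5))
    y_target = (X_target[:, 0] > 0.0).astype(int)

    X = np.vstack([X_other, X_target])
    y = np.concatenate([y_other, y_target])
    weights = np.concatenate([np.full(len(y_other), 1.0),
                              np.full(len(y_target), 10.0)])   # assumed up-weighting factor

    clf = SVC(kernel="rbf").fit(X, y, sample_weight=weights)
    X_test = rng.normal(0.0, 1.0, (200, 5))
    print("target-domain accuracy:", (clf.predict(X_test) == (X_test[:, 0] > 0.0)).mean())
    ```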

  14. Thirsty tree roots exude more carbon.

    PubMed

    Preece, Catherine; Farré-Armengol, Gerard; Llusià, Joan; Peñuelas, Josep

    2018-05-01

    Root exudation is an important input of carbon into soils and affects plant and soil communities, but little is known about the effect of climatic factors such as drought on exudation, and its ability to recover. We studied the impact of increasing drought on root exudation and its subsequent recovery in the Mediterranean tree species Quercus ilex L. in a greenhouse study by measuring the amount of total organic carbon in exudates. The amount of exudation per unit root area increased with drought duration and was 21% higher under the most extreme drought scenario compared with the non-droughted control. The amount of root exudation did not differ between the treatments following 6 weeks of re-watering, indicating a strong capacity for recovery in this species. We concluded that drought could affect the amount of root exudation, which could in turn have a large impact on microbial activity in the rhizosphere, and alter these microbial communities, at least in the short term. This tree species may be able to return to normal levels of root exudation after a drought event, but long-term exudate-mediated impacts on Mediterranean forest soils may be an unforeseen effect of drought.

  15. Artificial maturation of an immature sulfur- and organic matter-rich limestone from the Ghareb Formation, Jordan

    USGS Publications Warehouse

    Koopmans, M.P.; Rijpstra, W.I.C.; De Leeuw, J. W.; Lewan, M.D.; Damste, J.S.S.

    1998-01-01

    An immature (Ro = 0.39%), S-rich (Sorg/C = 0.07), organic matter-rich (19.6 wt.% TOC) limestone from the Ghareb Formation (Upper Cretaceous) in Jordan was artificially matured by hydrous pyrolysis (200, 220, ..., 300 °C; 72 h) to study the effect of progressive diagenesis and early catagenesis on the amounts and distributions of hydrocarbons, organic sulfur compounds and S-rich geomacromolecules. The use of internal standards allowed the determination of absolute amounts. With increasing thermal maturation, large amounts of alkanes and alkylthiophenes with predominantly linear carbon skeletons are generated from the kerogen. The alkylthiophene isomer distributions do not change significantly with increasing thermal maturation, indicating the applicability of alkylthiophenes as biomarkers at relatively high levels of thermal maturity. For a given carbon skeleton, the saturated hydrocarbon, alkylthiophenes and alkylbenzo[b]thiophenes are stable forms at relatively high temperatures, whereas the alkylsulfides are not stable. The large amount of alkylthiophenes produced relative to the alkanes may be explained by the large number of monosulfide links per carbon skeleton. These results are in good agreement with those obtained previously for an artificial maturation series of an immature S-rich sample from the Gessoso-solfifera Formation.

  16. Large-scale circulation patterns, instability factors and global precipitation modeling as influenced by external forcing

    NASA Astrophysics Data System (ADS)

    Bundel, A.; Kulikova, I.; Kruglova, E.; Muravev, A.

    2003-04-01

    The scope of the study is to estimate the relationship between large-scale circulation regimes, various instability indices and global precipitation with different boundary conditions, considered as external forcing. The experiments were carried out in the ensemble-prediction framework of the dynamic-statistical monthly forecast scheme run in the Hydrometeorological Research Center of Russia every ten days. The extension to seasonal intervals makes it necessary to investigate the role of slowly changing boundary conditions among which the sea surface temperature (SST) may be defined as the most effective factor. Continuous integrations of the global spectral T41L15 model for the whole year 2000 (starting from January 1) were performed with the climatic SST and the Reynolds Archive SSTs. Monthly values of the SST were projected on the year days using spline interpolation technique. First, the global precipitation values in experiments were compared to the GPCP (Global Precipitation Climate Program) daily observation data. Although the global mean precipitation is underestimated by the model, some large-scale regional amounts correspond to the real ones (e.g. for Europe) fairly well. On the whole, however, anomaly phases failed to be reproduced. The precipitation averaged over the whole land revealed a greater sensitivity to the SSTs than that over the oceans. The wavelet analysis was applied to separate the low- and high-frequency signal of the SST influence on the large-scale circulation and precipitation. A derivative of the Wallace-Gutzler teleconnection index for the East-Atlantic oscillation was taken as the circulation characteristic. The daily oscillation index values and precipitation amounts averaged over Europe were decomposed using wavelet approach with different “mother wavelets” up to approximation level 3. It was demonstrated that an increase in the precipitation amount over Europe was associated with the zonal flow intensification over the Northern Atlantic when the real SSTs were used. Blocking structures in the circulation caused decreasing precipitation amounts. The wavelet approach gave a more distinctive discrimination in the modeled circulation and precipitation patterns versus different external forcing than a number of other statistical techniques. Several atmospheric instability indices (e.g. the Phillips like parameters, Richardson number etc) were additionally used in post-processing for a more detailed validation of the modeled large-scale and total precipitation amounts. It was shown that a reasonable variety of instability indices must be used for such validations and for precipitation output corrections. Their statistical stability may be substantiated only on the ensemble modeling basis. This work was performed with the financial support of the Russian Foundation for Basic Research (02-05-64655).
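
    The level-3 wavelet decomposition described above can be sketched with PyWavelets; the synthetic daily series and the choice of the 'db4' mother wavelet are assumptions for illustration, not the exact index series or wavelet used in the study.

    ```python
    # Sketch of a level-3 wavelet decomposition separating low- and high-frequency parts
    # of a daily index or precipitation series (PyWavelets).
    import numpy as np
    import pywt

    days = np.arange(365)
    series = (np.sin(2 * np.pi * days / 90)            # slow, seasonal-scale signal
              + 0.4 * np.random.default_rng(0).normal(size=days.size))   # daily noise

    coeffs = pywt.wavedec(series, "db4", level=3)      # [cA3, cD3, cD2, cD1]
    approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    low_freq = pywt.waverec(approx_only, "db4")[:days.size]   # level-3 approximation

    print("variance of raw series:      ", series.var().round(3))
    print("variance of level-3 approx.: ", low_freq.var().round(3))
    ```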

  17. Thermal and microstructural properties of fine-grained material at the Viking Lander 1 site

    NASA Astrophysics Data System (ADS)

    Paton, M. D.; Harri, A.-M.; Savijärvi, H.; Mäkinen, T.; Hagermann, A.; Kemppinen, O.; Johnston, A.

    2016-06-01

    As Viking Lander 1 touched down on Mars, one of its footpads fully penetrated a patch of loose fine-grained drift material. The surrounding landing site, as observed by VL-1, was found to exhibit a complex terrain consisting of a crusted surface with an assortment of rocks, large dune-like drifts and smaller patches of drift material. We use a temperature sensor attached to the buried footpad and covered in fine-grained material to determine the thermal properties of drift material at the VL-1 site. The thermal properties are used to investigate the microstructure of the drift material and understand its relevance to surface-atmosphere interactions. We obtained a thermal inertia value of 103 ± 22 tiu. This value is in the upper range of previous thermal inertia estimates of martian dust as measured from orbit and is significantly lower than the regional thermal inertia of the VL-1 site, of around 283 tiu, obtained from orbit. We estimate a thermal inertia of around 263 ± 29 tiu for the duricrust at the VL-1 site. It was noted that the patch of fine-grained regolith around the footpad was about 20-30 K warmer than similar material beyond the thermal influence of the lander. An effective diameter of 8 ± 5 μm was calculated for the particles in the drift material. This is larger than atmospheric dust and large compared to previous estimates of the drift material particle diameter. We interpret our results as indicating the presence of a range of particle sizes, <8 μm, in the drift material, with the thermal properties being controlled by a small amount of large particles (∼8 μm) and its cohesion being controlled by a large amount of smaller particles. The bulk of the particles in the drift material are therefore likely comparable in size to atmospheric dust. The possibility of larger particles being locked into a fine-grained material has implications for understanding the mobilisation of wind-blown materials on Mars.
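
    The quantities involved follow the standard relations I = sqrt(k·ρ·c) for thermal inertia and d = sqrt(k·P/(π·ρ·c)) for the diurnal skin depth; the sketch below uses illustrative property values, not the fitted properties of the VL-1 drift material.

    ```python
    # Standard relations behind a thermal-inertia estimate. The material property values
    # below are illustrative assumptions for a loose fine-grained regolith.
    import math

    k = 0.01        # thermal conductivity, W m^-1 K^-1 (assumed)
    rho = 1200.0    # bulk density, kg m^-3 (assumed)
    c = 800.0       # specific heat, J kg^-1 K^-1 (assumed)
    P = 88775.0     # length of the martian sol, s

    inertia = math.sqrt(k * rho * c)                    # "tiu" = J m^-2 K^-1 s^-1/2
    skin_depth = math.sqrt(k * P / (math.pi * rho * c))
    print(f"thermal inertia ~ {inertia:.0f} tiu, diurnal skin depth ~ {skin_depth:.3f} m")
    ```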

  18. Responses of arthropods to large-scale manipulations of dead wood in loblolly pine stands of the southeastern United States.

    PubMed

    Ulyshen, Michael D; Hanula, James L

    2009-08-01

    Large-scale experimental manipulations of dead wood are needed to better understand its importance to animal communities in managed forests. In this experiment, we compared the abundance, species richness, diversity, and composition of arthropods in 9.3-ha plots in which either (1) all coarse woody debris was removed, (2) a large number of logs were added, (3) a large number of snags were added, or (4) no coarse woody debris was added or removed. The target taxa were ground-dwelling arthropods, sampled by pitfall traps, and saproxylic beetles (i.e., dependent on dead wood), sampled by flight intercept traps and emergence traps. There were no differences in total ground-dwelling arthropod abundance, richness, diversity, or composition among treatments. Only the results for ground beetles (Carabidae), which were more species rich and diverse in log input plots, supported our prediction that ground-dwelling arthropods would benefit from additions of dead wood. There were also no differences in saproxylic beetle abundance, richness, diversity, or composition among treatments. The findings from this study are encouraging in that arthropods seem less sensitive than expected to manipulations of dead wood in managed pine forests of the southeastern United States. Based on our results, we cannot recommend inputting large amounts of dead wood for conservation purposes, given the expense of such measures. However, the persistence of saproxylic beetles requires that an adequate amount of dead wood is available in the landscape, and we recommend that dead wood be retained whenever possible in managed pine forests.

  19. A prospective survey of nutritional support practices in intensive care unit patients: what is prescribed? What is delivered?

    PubMed

    De Jonghe, B; Appere-De-Vechi, C; Fournier, M; Tran, B; Merrer, J; Melchior, J C; Outin, H

    2001-01-01

    To assess the amount of nutrients delivered, prescribed, and required for critically ill patients and to identify the reasons for discrepancies between prescriptions and requirements and between prescriptions and actual delivery of nutrition. Prospective cohort study. Twelve-bed medical intensive care unit in a university-affiliated general hospital. Fifty-one consecutive patients, receiving nutritional support either enterally or intravenously for > or = 2 days. We followed patients for the first 14 days of nutritional delivery. The amount of calories prescribed and the amount actually delivered were recorded daily and compared with the theoretical energy requirements. A combined regimen of enteral and parenteral nutrition was administered on 58% of the 484 nutrition days analyzed, and 63.5% of total caloric intake was delivered enterally. Seventy-eight percent of the mean caloric amount required was prescribed, and 71% was effectively delivered. The amount of calories actually delivered compared with the amount prescribed was significantly lower in enteral than in parenteral administration (86.8% vs. 112.4%, p < .001). Discrepancies between prescription and delivery of enterally administered nutrients were attributable to interruptions caused by digestive intolerance (27.7%, mean daily wasted volume 641 mL), airway management (30.8%, wasted volume 745 mL), and diagnostic procedures (26.6%, wasted volume 567 mL). Factors significantly associated with a low prescription rate of nutritional support were the administration of vasoactive drugs, central venous catheterization, and the need for extrarenal replacement. An inadequate delivery of enteral nutrition and a low rate of nutrition prescription resulted in low caloric intake in our intensive care unit patients. A large volume of enterally administered nutrients was wasted because of inadequate timing in stopping and restarting enteral feeding. The inverse correlation between the prescription rate of nutrition and the intensity of care required suggests that physicians need to pay more attention to providing appropriate nutritional support for the most severely ill patients.

  20. Bone Regeneration after Treatment with Covering Materials Composed of Flax Fibers and Biodegradable Plastics: A Histological Study in Rats

    PubMed Central

    Gedrange, Tomasz

    2016-01-01

    The aim of this study was to examine the osteogenic potential of new flax covering materials. Bone defects were created on the skull of forty rats. Materials of pure PLA and PCL and their composites with flax fibers, genetically modified producing PHB (PLA-transgen, PCL-transgen) and unmodified (PLA-wt, PCL-wt), were inserted. The skulls were harvested after four weeks and subjected to histological examination. The percentage of bone regeneration by using PLA was less pronounced than after usage of pure PCL in comparison with controls. After treatment with PCL-transgen, a large amount of new formed bone could be found. In contrast, PCL-wt decreased significantly the bone regeneration, compared to the other tested groups. The bone covers made of pure PLA had substantially less influence on bone regeneration and the bone healing proceeded with a lot of connective tissue, whereas PLA-transgen and PLA-wt showed nearly comparable amount of new formed bone. Regarding the histological data, the hypothesis could be proposed that PCL and its composites have contributed to a higher quantity of the regenerated bone, compared to PLA. The histological studies showed comparable bone regeneration processes after treatment with tested covering materials, as well as in the untreated bone lesions. PMID:27597965

  1. Bone Regeneration after Treatment with Covering Materials Composed of Flax Fibers and Biodegradable Plastics: A Histological Study in Rats.

    PubMed

    Gredes, Tomasz; Kunath, Franziska; Gedrange, Tomasz; Kunert-Keil, Christiane

    2016-01-01

    The aim of this study was to examine the osteogenic potential of new flax covering materials. Bone defects were created on the skull of forty rats. Materials of pure PLA and PCL and their composites with flax fibers, genetically modified producing PHB (PLA-transgen, PCL-transgen) and unmodified (PLA-wt, PCL-wt), were inserted. The skulls were harvested after four weeks and subjected to histological examination. The percentage of bone regeneration by using PLA was less pronounced than after usage of pure PCL in comparison with controls. After treatment with PCL-transgen, a large amount of new formed bone could be found. In contrast, PCL-wt decreased significantly the bone regeneration, compared to the other tested groups. The bone covers made of pure PLA had substantially less influence on bone regeneration and the bone healing proceeded with a lot of connective tissue, whereas PLA-transgen and PLA-wt showed nearly comparable amount of new formed bone. Regarding the histological data, the hypothesis could be proposed that PCL and its composites have contributed to a higher quantity of the regenerated bone, compared to PLA. The histological studies showed comparable bone regeneration processes after treatment with tested covering materials, as well as in the untreated bone lesions.

  2. Large Amounts of Reactivated Virus in Tears Precedes Recurrent Herpes Stromal Keratitis in Stressed Rabbits Latently Infected with Herpes Simplex Virus.

    PubMed

    Perng, Guey-Chuen; Osorio, Nelson; Jiang, Xianzhi; Geertsema, Roger; Hsiang, Chinhui; Brown, Don; BenMohamed, Lbachir; Wechsler, Steven L

    2016-01-01

    Recurrent herpetic stromal keratitis (rHSK), due to an immune response to reactivation of herpes simplex virus (HSV-1), can cause corneal blindness. The development of therapeutic interventions such as drugs and vaccines to decrease rHSK have been hampered by the lack of a small and reliable animal model in which rHSK occurs at a high frequency during HSV-1 latency. The aim of this study is to develop a rabbit model of rHSK in which stress from elevated temperatures increases the frequency of HSV-1 reactivations and rHSK. Rabbits latently infected with HSV-1 were subjected to elevated temperatures and the frequency of viral reactivations and rHSK were determined. In an experiment in which rabbits latently infected with HSV-1 were subjected to ill-defined stress as a result of failure of the vivarium air conditioning system, reactivation of HSV-1 occurred at over twice the normal frequency. In addition, 60% of eyes developed severe rHSK compared to <1% of eyes normally. All episodes of rHSK were preceded four to five days prior by an unusually large amount of reactivated virus in the tears of that eye and whenever this unusually large amount of reactivated virus was detected in tears, rHSK always appeared 4-5 days later. In subsequent experiments using well defined heat stress the reactivation frequency was similarly increased, but no eyes developed rHSK. The results reported here support the hypothesis that rHSK is associated not simply with elevated reactivation frequency, but rather with rare episodes of very high levels of reactivated virus in tears 4-5 days earlier.

  3. Large Amounts of Reactivated Virus in Tears Precedes Recurrent Herpes Stromal Keratitis in Stressed Rabbits Latently Infected with Herpes Simplex Virus

    PubMed Central

    Perng, Guey-Chuen; Osorio, Nelson; Jiang, Xianzhi; Geertsema, Roger; Hsiang, Chinhui; Brown, Don; BenMohamed, Lbachir; Wechsler, Steven L.

    2017-01-01

    Aim Recurrent herpetic stromal keratitis (rHSK), due to an immune response to reactivation of herpes simplex virus (HSV-1), can cause corneal blindness. The development of therapeutic interventions such as drugs and vaccines to decrease rHSK have been hampered by the lack of a small and reliable animal model in which rHSK occurs at a high frequency during HSV-1 latency. The aim of this study is to develop a rabbit model of rHSK in which stress from elevated temperatures increases the frequency of HSV-1 reactivations and rHSK. Materials and methods Rabbits latently infected with HSV-1 were subjected to elevated temperatures and the frequency of viral reactivations and rHSK were determined. Results In an experiment in which rabbits latently infected with HSV-1 were subjected to ill-defined stress as a result of failure of the vivarium air conditioning system, reactivation of HSV-1 occurred at over twice the normal frequency. In addition, 60% of eyes developed severe rHSK compared to <1% of eyes normally. All episodes of rHSK were preceded four to five days prior by an unusually large amount of reactivated virus in the tears of that eye and whenever this unusually large amount of reactivated virus was detected in tears, rHSK always appeared 4–5 days later. In subsequent experiments using well defined heat stress the reactivation frequency was similarly increased, but no eyes developed rHSK. Conclusions The results reported here support the hypothesis that rHSK is associated not simply with elevated reactivation frequency, but rather with rare episodes of very high levels of reactivated virus in tears 4–5 days earlier. PMID:25859798

  4. New Insights into Handling Missing Values in Environmental Epidemiological Studies

    PubMed Central

    Roda, Célina; Nicolis, Ioannis; Momas, Isabelle; Guihenneuc, Chantal

    2014-01-01

    Missing data are unavoidable in environmental epidemiologic surveys. The aim of this study was to compare methods for handling large amounts of missing values: omission of missing values, single and multiple imputation (through linear regression or partial least squares regression), and a fully Bayesian approach. These methods were applied to the PARIS birth cohort, where indoor domestic pollutant measurements were performed in a random sample of babies' dwellings. A simulation study was conducted to assess the performance of the different approaches with a high proportion of missing values (from 50% to 95%). Different simulation scenarios were carried out, controlling the true value of the association (odds ratio of 1.0, 1.2, and 1.4) and varying the health outcome prevalence. When a large amount of data is missing, omitting these missing data reduces statistical power and inflates standard errors, which affects the significance of the association. Single imputation underestimates the variability and considerably increases the risk of type I error. All approaches were conservative, except the Bayesian joint model. In the case of a common health outcome, the fully Bayesian approach is the most efficient (low root mean square error, reasonable type I error, and high statistical power). Nevertheless, for a less prevalent event, the type I error is increased and the statistical power is reduced. The estimated posterior distribution of the OR is useful to refine the conclusion. Among the methods for handling missing values, no approach is absolutely the best, but when the usual approaches (e.g., single imputation) are not sufficient, joint modelling of the missingness process and the health association is more efficient when large amounts of data are missing. PMID:25226278
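
    A minimal multiple-imputation sketch follows, using scikit-learn's IterativeImputer with posterior sampling as a generic stand-in for the imputation models compared in the study; the synthetic data, missingness pattern and number of imputations are assumptions.

    ```python
    # Multiple-imputation sketch: draw several completed data sets, fit the model on each,
    # and pool the estimates. This is a generic stand-in, not the specific PLS / fully
    # Bayesian machinery of the paper.
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=300) > 0).astype(int)
    X_missing = X.copy()
    X_missing[rng.random(X.shape) < 0.5] = np.nan        # 50% of exposure values missing

    coefs = []
    for m in range(5):                                    # m = 5 imputed data sets
        imputer = IterativeImputer(sample_posterior=True, random_state=m)
        X_imp = imputer.fit_transform(X_missing)
        model = LogisticRegression().fit(X_imp, y)
        coefs.append(model.coef_[0])

    pooled = np.mean(coefs, axis=0)                       # Rubin-style pooling of point estimates
    print("pooled log-odds coefficients:", np.round(pooled, 2))
    ```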

  5. Amount of Hispanic youth exposure to food and beverage advertising on Spanish- and English-language television.

    PubMed

    Fleming-Milici, Frances; Harris, Jennifer L; Sarda, Vishnudas; Schwartz, Marlene B

    2013-08-01

    Exposure to large numbers of television advertisements for foods and beverages with little or no nutritional value likely contributes to poor diet among youth. Given higher rates of obesity and overweight for Hispanic youth, it is important to understand the amount and types of food advertising they view. The objective was to quantify the amount of food and beverage advertising viewed by Hispanic youth on Spanish- and English-language television and compare it with the amount of food and beverage advertising viewed by non-Hispanic youth. The study analyzed gross rating points, which measured advertising viewed on national broadcast and cable television in 2010, using a Nielsen panel of television-viewing households of Hispanic and non-Hispanic preschoolers (2-5 years), children (6-11 years), and adolescents (12-17 years). Main outcome measures were food and beverage television advertisements viewed on English- and Spanish-language television by product category, and television-viewing times by age and language preference. The exposure of interest was food and beverage advertising on Spanish- and English-language television. In 2010, Hispanic preschoolers, children, and adolescents viewed, on average, 11.6 to 12.4 television food ads per day; the majority of these ads (75%-85%) appeared on English-language television. Fast food represented a higher proportion of food ads on Spanish-language television. Consistent with television-viewing patterns, Hispanic preschoolers saw more Spanish-language food advertisements than did Hispanic children and adolescents. Owing to somewhat less food advertising on Spanish-language television, Hispanic children and adolescents viewed 14% and 24% fewer food ads overall, respectively, compared with non-Hispanic youth. Spanish-language television viewing was highly concentrated among youth who primarily speak Spanish. Both Hispanic and non-Hispanic youth view large numbers of television advertisements for nutrient-poor categories of food and beverage. Although Hispanic children and adolescents see somewhat fewer of these ads, the higher obesity rates among Hispanic youth, the greater exposure of Hispanic preschoolers, and the potential enhanced effects of targeted advertising on Hispanic youth suggest that this exposure may pose additional risks for Hispanic youth. Continued monitoring is warranted owing to food companies' stated intentions to increase marketing to Hispanics.

  6. "Big" versus "little" science: comparative analysis of program projects and individual research grants.

    PubMed

    Baumeister, A A; Bacharach, V R; Baumeister, A A

    1997-11-01

    Controversy about the amount and nature of funding for mental retardation research has persisted since the creation of NICHD. An issue that has aroused considerable debate, within the mental retardation research community as well as beyond, is the distribution of funds between large group research grants, such as the program project (PO1), and individual grants (RO1). Currently, within the Mental Retardation and Developmental Disabilities Branch, more money is allocated to the PO1 mechanism than to the RO1. We compared the two types of grants, focusing on success rates, productivity, costs, impact, publication practices, and outcomes, and conducted a comparative analysis of biomedical and behavioral research. Other related issues were considered, including review processes and cost-effectiveness.

  7. A MODIFIED METHOD OF OBTAINING LARGE AMOUNTS OF RICKETTSIA PROWAZEKI BY ROENTGEN IRRADIATION OF RATS

    PubMed Central

    Macchiavello, Atilio; Dresser, Richard

    1935-01-01

    The radiation method described by Zinsser and Castaneda for obtaining large amounts of Rickettsia has been carried out successfully with an ordinary radiographic machine. This allows the extension of the method to those communities which do not possess a high voltage Roentgen therapy unit as originally employed. PMID:19870416

  8. Distributed Processing of Projections of Large Datasets: A Preliminary Study

    USGS Publications Warehouse

    Maddox, Brian G.

    2004-01-01

    Modern information needs have resulted in very large amounts of data being used in geographic information systems. Problems arise, however, when trying to project these data in a reasonable amount of time and with acceptable accuracy. Current single-threaded methods can suffer from one of two problems: fast projection with poor accuracy, or accurate projection with long processing time. A possible solution may be to combine accurate interpolation methods with distributed processing algorithms to quickly and accurately convert digital geospatial data between coordinate systems. Modern technology has made it possible to construct systems, such as Beowulf clusters, at low cost and thereby provide access to supercomputer-class capability. Combining these techniques may result in the ability to use large amounts of geographic data in time-critical situations.
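
    The following is a minimal sketch of the divide-and-process idea described above: a large grid of lon/lat points is split into chunks and reprojected in parallel. It assumes a simple spherical Web Mercator formula and a chunk-per-worker split; it is not the USGS implementation, and the projection choice and chunking scheme are illustrative only.

        # Minimal sketch: chunked, parallel reprojection of lon/lat points to
        # Web Mercator. Illustrative only; not the USGS distributed system.
        import numpy as np
        from multiprocessing import Pool

        R = 6378137.0                                  # spherical Earth radius (m)

        def to_web_mercator(chunk):
            lon, lat = chunk[:, 0], chunk[:, 1]
            x = np.radians(lon) * R
            y = np.log(np.tan(np.pi / 4 + np.radians(lat) / 2)) * R
            return np.column_stack([x, y])

        if __name__ == "__main__":
            rng = np.random.default_rng(4)
            pts = np.column_stack([rng.uniform(-180, 180, 1_000_000),
                                   rng.uniform(-85, 85, 1_000_000)])
            chunks = np.array_split(pts, 8)            # one chunk per worker
            with Pool(processes=8) as pool:
                projected = np.vstack(pool.map(to_web_mercator, chunks))
            print(projected.shape)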

  9. An evaluation of multi-probe locality sensitive hashing for computing similarities over web-scale query logs.

    PubMed

    Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon

    2018-01-01

    Many modern applications of AI, such as web search, mobile browsing, image processing, and natural language processing, rely on finding similar items in a large database of complex objects. Due to the very large scale of data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance and are suitable for deployment in very large scale settings. The experimental results demonstrate that our LSH variants achieve robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space.
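
    To make the bucketing idea concrete, here is a minimal sketch of signed-random-projection LSH with one-bit multi-probing (querying the item's own bucket plus all buckets at Hamming distance 1) on toy vectors. It is not the paper's Hadoop implementation, and the parameters (12 bits, a single hash table) are arbitrary assumptions.

        # Minimal sketch: random-hyperplane LSH with 1-bit multi-probing.
        import numpy as np
        from collections import defaultdict

        rng = np.random.default_rng(1)
        dim, n_bits, n_items = 64, 12, 5000
        data = rng.normal(size=(n_items, dim))
        planes = rng.normal(size=(n_bits, dim))        # random hyperplanes

        def signature(v):
            return tuple((planes @ v > 0).astype(int))

        # Index: bit-signature key -> list of item ids sharing that bucket.
        buckets = defaultdict(list)
        for i, v in enumerate(data):
            buckets[signature(v)].append(i)

        def query(q, multiprobe=True, k=10):
            sig = signature(q)
            keys = [sig]
            if multiprobe:                             # probe buckets one bit away
                for b in range(n_bits):
                    alt = list(sig)
                    alt[b] ^= 1
                    keys.append(tuple(alt))
            cand = {i for key in keys for i in buckets.get(key, [])}
            # Re-rank candidates by exact cosine similarity.
            ranked = sorted(cand, key=lambda i: -(data[i] @ q) /
                            (np.linalg.norm(data[i]) * np.linalg.norm(q)))
            return ranked[:k]

        q = data[42] + 0.05 * rng.normal(size=dim)     # near-duplicate of item 42
        print("vanilla probe:", query(q, multiprobe=False)[:5])
        print("multi-probe  :", query(q, multiprobe=True)[:5])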

  10. Profiling of lipid and glycogen accumulations under different growth conditions in the sulfothermophilic red alga Galdieria sulphuraria.

    PubMed

    Sakurai, Toshihiro; Aoki, Motohide; Ju, Xiaohui; Ueda, Tatsuya; Nakamura, Yasunori; Fujiwara, Shoko; Umemura, Tomonari; Tsuzuki, Mikio; Minoda, Ayumi

    2016-01-01

    The unicellular red alga Galdieria sulphuraria grows efficiently and produces a large amount of biomass in acidic conditions at high temperatures. It has great potential to produce biofuels and other beneficial compounds without becoming contaminated with other organisms. In G. sulphuraria, biomass measurements and glycogen and lipid analyses demonstrated that the amounts and compositions of glycogen and lipids differed when cells were grown under autotrophic, mixotrophic, and heterotrophic conditions. Maximum biomass production was obtained in the mixotrophic culture. High amounts of glycogen were obtained in the mixotrophic cultures, while the amounts of neutral lipids were similar between mixotrophic and heterotrophic cultures. These neutral lipid amounts were among the highest reported for red algae, including thermophiles. Glycogen structure and fatty acid compositions largely depended on the growth conditions. Copyright © 2015. Published by Elsevier Ltd.

  11. Compression and information recovery in ptychography

    NASA Astrophysics Data System (ADS)

    Loetgering, L.; Treffer, D.; Wilhein, T.

    2018-04-01

    Ptychographic coherent diffraction imaging (PCDI) is a scanning microscopy modality that allows for simultaneous recovery of object and illumination information. This ability renders PCDI a suitable technique for x-ray lensless imaging and optics characterization. Its potential for information recovery typically relies on large amounts of data redundancy. However, the field of view in ptychography is practically limited by the memory and the computational facilities available. We describe techniques that achieve robust ptychographic information recovery at high compression rates. The techniques are compared and tested with experimental data.

  12. 1H NMR quantitative determination of photosynthetic pigments from green beans (Phaseolus vulgaris L.).

    PubMed

    Valverde, Juan; This, Hervé

    2008-01-23

    Using 1H nuclear magnetic resonance spectroscopy (1D and 2D), the two types of photosynthetic pigments (chlorophylls, their derivatives, and carotenoids) of "green beans" (immature pods of Phaseolus vulgaris L.) were analyzed. Compared to other analytical methods (light spectroscopy or chromatography), 1H NMR spectroscopy is a fast analytical method that provides more information on chlorophyll derivatives (allomers and epimers) than ultraviolet-visible spectroscopy. Moreover, it gives a large amount of data without prior chromatographic separation.

  13. Application of Synchrophasor Measurements for Improving Situational Awareness of the Power System

    NASA Astrophysics Data System (ADS)

    Obushevs, A.; Mutule, A.

    2018-04-01

    The paper focuses on the application of synchrophasor measurements, which offer unprecedented benefits compared to SCADA systems, in order to facilitate the successful transformation of the Nordic-Baltic and European electric power system to operate with large amounts of renewable energy sources and to improve situational awareness of the power system. The article describes new functionalities of visualisation tools to estimate grid inertia level in real time, with monitoring results from the Nordic and Baltic power systems.

  14. Development of dielectric elastomer nanocomposites as stretchable actuating materials

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Sun, L. Z.

    2017-10-01

    Dielectric elastomer nanocomposites (DENCs) filled with multi-walled carbon nanotubes are developed. The electromechanical responses of DENCs to applied electric fields are investigated through laser Doppler vibrometry. It is found that a small amount of carbon nanotube filler can effectively enhance the electromechanical performance of DENCs. The enhanced electromechanical properties show not only that a desired thickness strain can be achieved at reduced electric fields, but also that significantly larger thickness strains can be obtained at a given electric field compared to pristine dielectric elastomers.

  15. A Physiological Stimulating Factor of Water Intake during and after Dry Forage Feeding in Large-type Goats.

    PubMed

    Van Thang, Tran; Sunagawa, Katsunori; Nagamine, Itsuki; Kishi, Tetsuya; Ogura, Go

    2012-04-01

    When ruminants consume dry forage, they also drink large volumes of water. The objective of this study was to clarify which factor produced when feed boluses enter the rumen is mainly responsible for the marked increase in water intake in the second hour of the 2 h feeding period in large-type goats fed on dry forage for 2 h twice daily. Six large-type male esophageal- and ruminal-fistulated goats (crossbred Japanese Saanen/Nubian, aged 2 to 6 years, weighing 85.1±4.89 kg) were used in two experiments. In experiment 1, the water deprivation (WD) control and the water availability (WA) treatment were conducted to compare changes in water intake during and after dry forage feeding. In experiment 2, a normal feeding conditions (NFC) control and a feed bolus removal (FBR) treatment were carried out to investigate whether a decrease in circulating plasma volume or an increase in plasma osmolality is mainly responsible for the marked increase in water intake in the second hour of the 2 h feeding period. The results of experiment 1 showed that in the WA treatment, small amounts of water were consumed during the first hour of feeding, while the majority of water intake was observed during the second hour of the 2 h feeding period. Therefore, the amounts of water consumed in the second hour of the 2 h feeding period accounted for 82.8% of the total water intake. The results of experiment 2 indicated that, in comparison with the NFC control, the decrease in plasma volume in the FBR treatment, which was indicated by increases in hematocrit and plasma total protein concentrations, was greater (p<0.05) in the second hour of the 2 h feeding period. However, plasma osmolality in the FBR treatment was lower (p<0.05) than in the NFC control from 30 min after the start of feeding. Therefore, the thirst level in the FBR treatment was 82.7% lower (p<0.01) than that in the NFC control upon conclusion of the 30 min drinking period. The results of the study indicate that the increased plasma osmolality in the second hour of the 2 h feeding period is the main physiological stimulating factor of water intake during and after dry forage feeding in large-type goats.

  16. A Physiological Stimulating Factor of Water Intake during and after Dry Forage Feeding in Large-type Goats

    PubMed Central

    Van Thang, Tran; Sunagawa, Katsunori; Nagamine, Itsuki; Kishi, Tetsuya; Ogura, Go

    2012-01-01

    When ruminants consume dry forage, they also drink large volumes of water. The objective of this study was to clarify which factor produced when feed boluses enter the rumen is mainly responsible for the marked increase in water intake in the second hour of the 2 h feeding period in large-type goats fed on dry forage for 2 h twice daily. Six large-type male esophageal- and ruminal-fistulated goats (crossbred Japanese Saanen/Nubian, aged 2 to 6 years, weighing 85.1±4.89 kg) were used in two experiments. In experiment 1, the water deprivation (WD) control and the water availability (WA) treatment were conducted to compare changes in water intake during and after dry forage feeding. In experiment 2, a normal feeding conditions (NFC) control and a feed bolus removal (FBR) treatment were carried out to investigate whether a decrease in circulating plasma volume or an increase in plasma osmolality is mainly responsible for the marked increase in water intake in the second hour of the 2 h feeding period. The results of experiment 1 showed that in the WA treatment, small amounts of water were consumed during the first hour of feeding, while the majority of water intake was observed during the second hour of the 2 h feeding period. Therefore, the amounts of water consumed in the second hour of the 2 h feeding period accounted for 82.8% of the total water intake. The results of experiment 2 indicated that, in comparison with the NFC control, the decrease in plasma volume in the FBR treatment, which was indicated by increases in hematocrit and plasma total protein concentrations, was greater (p<0.05) in the second hour of the 2 h feeding period. However, plasma osmolality in the FBR treatment was lower (p<0.05) than in the NFC control from 30 min after the start of feeding. Therefore, the thirst level in the FBR treatment was 82.7% lower (p<0.01) than that in the NFC control upon conclusion of the 30 min drinking period. The results of the study indicate that the increased plasma osmolality in the second hour of the 2 h feeding period is the main physiological stimulating factor of water intake during and after dry forage feeding in large-type goats. PMID:25049591

  17. OpenWebGlobe 2: Visualization of Complex 3D-Geodata in the (Mobile) Webbrowser

    NASA Astrophysics Data System (ADS)

    Christen, M.

    2016-06-01

    Providing worldwide high-resolution data for virtual globes involves compute- and storage-intensive data processing tasks. Furthermore, rendering complex 3D geodata, such as 3D city models with an extremely high polygon count and a vast amount of textures, at interactive frame rates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering on a web-based virtual globe. Cloud computing is used for processing large amounts of geospatial data and also for providing 2D and 3D map data to a large number of (mobile) web clients. The paper shows the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2", which displays 3D geodata on nearly every device.

  18. The Physician Payments Sunshine Act: Data Evaluation Regarding Payments to Ophthalmologists

    PubMed Central

    Chang, Jonathan S.

    2014-01-01

    Objective/Purpose To review data for ophthalmologists published online under the Physician Payments Sunshine Act. Design Retrospective data review using a publicly available electronic database. Methods and Main Outcome Measures A database was downloaded from the Centers for Medicare and Medicaid Services (CMS) website under Identified General Payments to Physicians for a primary specialty of ophthalmology. Basic statistical analysis was performed, including the mean, median and range of payments, both for single payments and per provider. Data were also summarized by category of payment and geographic region and compared with other surgical subspecialties. Results From August 1, 2013 to December 31, 2013, a total of 55,996 individual payments were reported to 9,855 ophthalmologists, for a total of $10,926,447. The mean amount received in a single payment was $195.13 (range $0.04–$193,073). The mean amount received per physician ID was $1,108 (range $1–$397,849) and the median amount was $112.01. Consulting fees made up the largest percentage of fees. There was not a large difference in payments received by region. The mean payments for the subspecialties of dermatology, neurosurgery, orthopedic surgery and urology ranged from $954 to $6,980, and median payments in each field by provider identifier ranged from $88 to $173. Conclusions A large amount of data was released by CMS for the Physician Payments Sunshine Act. In ophthalmology, mean and median payments per physician did not vary greatly from those of other surgical subspecialties. Most single payments were under $100, and most physicians received less than $500 in total payments. Payments for consulting made up the largest category of spending. How this affects patient perception, patient care and medical costs warrants further study. PMID:25578254

  19. Chemical weathering on the North Island of New Zealand: CO2 consumption and fluxes of Sr and Os

    NASA Astrophysics Data System (ADS)

    Blazina, Tim; Sharma, Mukul

    2013-09-01

    We present Os and Sr isotope ratios and Os, Sr and major/trace element concentrations for river waters, spring waters and rains on the North Island of New Zealand (NINZ). The Os and Sr data are used to examine whether the NINZ is a significant contributor of unradiogenic Os and Sr to the oceans. Major element chemistry is used to quantify weathering and CO2 consumption rates on the island and to investigate relationships between these processes and Os and Sr behavior. Chemical erosion rates and CO2 consumption rates across the island range from 44 to 555 km-2 yr-1 and 95 to 1900 × 103 mol CO2 km-2 yr-1, respectively. Strontium fluxes for the island range from 177 to 16,100 mol km-2 yr-1, and the rivers have an average flux-normalized 87Sr/86Sr ratio of 0.7075. In agreement with previous studies, these findings provide further evidence that weathering of arc terrains contributes a disproportionately large amount of Sr to the oceans and consumes very large amounts of CO2 annually compared to their areal extent. However, the 87Sr/86Sr from the NINZ is not particularly unradiogenic, and the island is likely not contributing significant amounts of unradiogenic Sr to the oceans. Repeated Os analyses and bottle leaching experiments revealed extensive and variable sample contamination by Os leaching from rigorously precleaned LDPE bottles. An upper bound on the flux of Os from the NINZ can nevertheless be assessed and indicates that island arcs cannot provide significant amounts of unradiogenic Os to the oceans.

  20. Effects of season and nitrogen supply on the partitioning of recently fixed carbon in understory vegetation using a 13CO2 pulse labeling technique

    NASA Astrophysics Data System (ADS)

    Hasselquist, Niles; Metcalfe, Daniel; Högberg, Peter

    2013-04-01

    Vegetation research in boreal forests has traditionally focused on trees, with little attention given to understory vegetation. However, understory vegetation has been identified as a key driver of the functioning of boreal forests and may play an important role in the amount of carbon (C) entering and leaving these forested ecosystems. We conducted a large-scale 13C pulse labeling experiment to better understand how recently fixed C is allocated in the understory vegetation characteristic of boreal forests. We used transparent plastic chambers to pulse label the understory vegetation with enriched 13CO2 in the early (June) and late (August) growing seasons. The study was also replicated across a nitrogen (N) fertilization treatment to better understand the effects of N availability on C allocation patterns. We present data on the amount of 13C label found in different components of the understory vegetation (i.e., leaves, stems, lichens, mosses, rhizomes and fine roots) as well as in CO2 efflux. Additionally, we provide estimates of the mean residence time (MRT) of C among the different components and examine how the MRT of C is affected by seasonality and N availability. Seasonality had a large effect on how recently fixed C is allocated in understory vegetation, whereas N fertilization influenced the MRT of C in the different components of ericaceous vegetation. Moreover, there was a general trend that N additions increased the amount of 13C in CO2 efflux compared to the amount of 13C in biomass, suggesting that N fertilization may lead to an increase in the utilization of recently fixed C, whereas N limitation promotes the storage of recently fixed C.

  1. The Influence of Cloud Field Uniformity on Observed Cloud Amount

    NASA Astrophysics Data System (ADS)

    Riley, E.; Kleiss, J.; Kassianov, E.; Long, C. N.; Riihimaki, L.; Berg, L. K.

    2017-12-01

    Two ground-based measurements of cloud amount include cloud fraction (CF) obtained from time series of zenith-pointing radar-lidar observations and fractional sky cover (FSC) acquired from a Total Sky Imager (TSI). In comparison with the radars and lidars, the TSI has a considerably larger field of view (FOV; 100° vs. 0.2°) and is therefore expected to have a different sensitivity to inhomogeneity in a cloud field. Radiative transfer calculations based on cloud properties retrieved from narrow-FOV overhead cloud observations may differ from shortwave and longwave flux observations due to spatial variability in local cloud cover. This bias will impede radiative closure for sampling reasons rather than because of the accuracy of cloud microphysics retrievals or radiative transfer calculations. Furthermore, the comparison between observed cloud amount and cloud amount modeled by large eddy simulation (LES) may be affected by cloud field inhomogeneity. The main goal of our study is to estimate the anticipated impact of cloud field inhomogeneity on the level of agreement between CF and FSC. We focus on shallow cumulus clouds observed at the U.S. Department of Energy Atmospheric Radiation Measurement Facility's Southern Great Plains (SGP) site in Oklahoma, USA. Our analysis identifies cloud field inhomogeneity using a novel metric that quantifies the spatial and temporal uniformity of FSC over 100-degree-FOV TSI images. We demonstrate that (1) large differences between CF and FSC are partly attributable to increases in inhomogeneity and (2) the uniformity metric can provide a meaningful assessment of uncertainties in observed cloud amount to aid in comparing ground-based measurements to radiative transfer or LES model outputs at SGP.

  2. Resolving the tips of the tree of life: How much mitochondrial data do we need?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonett, Ronald M.; Macey, J. Robert; Boore, Jeffrey L.

    2005-04-29

    Mitochondrial (mt) DNA sequences are used extensively to reconstruct evolutionary relationships among recently diverged animals, and have constituted the most widely used markers for species- and generic-level relationships for the last decade or more. However, most studies to date have employed relatively small portions of the mt-genome. In contrast, complete mt-genomes primarily have been used to investigate deep divergences, including several studies of the amount of mt sequence necessary to recover ancient relationships. We sequenced and analyzed 24 complete mt-genomes from a group of salamander species exhibiting divergences typical of those in many species-level studies. We present the first comprehensive investigation of the amount of mt sequence data necessary to consistently recover the mt-genome tree at this level, using parsimony and Bayesian methods. Both methods of phylogenetic analysis revealed extremely similar results. A surprising number of well supported, yet conflicting, relationships were found in trees based on fragments of less than ~2,000 nucleotides (nt), typical of the vast majority of the thousands of mt-based studies published to date. Large amounts of data (11,500+ nt) were necessary to consistently recover the whole mt-genome tree. Some relationships were consistently recovered with fragments of all sizes, but many nodes required the majority of the mt-genome to stabilize, particularly those associated with short internal branches. Although moderate amounts of data (2,000-3,000 nt) were adequate to recover mt-based relationships for which most nodes were congruent with the whole mt-genome tree, many thousands of nucleotides were necessary to resolve rapid bursts of evolution. Recent advances in genomics are making collection of large amounts of sequence data highly feasible, and our results provide the basis for comparative studies of other closely related groups to optimize mt sequence sampling and phylogenetic resolution at the "tips" of the Tree of Life.

  3. Mouthwash overdose

    MedlinePlus

    ... are: Chlorhexidine gluconate Ethanol (ethyl alcohol) Hydrogen peroxide Methyl salicylate ... amounts of alcohol (drunkenness). Swallowing large amounts of methyl salicylate and hydrogen peroxide may also cause serious stomach ...

  4. Casein polymorphism heterogeneity influences casein micelle size in milk of individual cows.

    PubMed

    Day, L; Williams, R P W; Otter, D; Augustin, M A

    2015-06-01

    Milk samples from individual cows producing small (148-155 nm) or large (177-222 nm) casein micelles were selected to investigate the relationship between the individual casein proteins, specifically κ- and β-casein phenotypes, and casein micelle size. Only κ-casein AA and β-casein A1A1, A1A2 and A2A2 phenotypes were found in the large casein micelle group. Among the small micelle group, both κ-casein and β-casein phenotypes were more diverse. κ-Casein AB was the dominant phenotype, and 3 combinations (AA, AB, and BB) were present in the small casein micelle group. A considerable mix of β-casein phenotypes was found, including B and I variants, which were only found in the small casein micelle group. The relative amount of κ-casein to total casein was significantly higher in the small micelle group, and the nonglycosylated and glycosylated κ-casein contents were higher in the milks with small casein micelles (primarily with κ-casein AB and BB variants) compared with the large micelle group. The ratio of glycosylated to nonglycosylated κ-casein was higher in the milks with small casein micelles compared with the milks with large casein micelles. This suggests that although the amount of κ-casein (both glycosylated and nonglycosylated) is associated with micelle size, an increased proportion of glycosylated κ-casein could be a more important and favorable factor for small micelle size. This suggests that the increased spatial requirement due to addition of the glycosyl group with increasing extent of glycosylation of κ-casein is one mechanism that controls casein micelle assembly and growth. In addition, increased electrostatic repulsion due to the sialyl residues on the glycosyl group could be a contributory factor. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  5. Using information theory to identify redundancy in common laboratory tests in the intensive care unit.

    PubMed

    Lee, Joon; Maslove, David M

    2015-07-31

    Clinical workflow is infused with large quantities of data, particularly in areas with enhanced monitoring such as the Intensive Care Unit (ICU). Information theory can quantify the expected amounts of total and redundant information contained in a given clinical data type, and as such has the potential to inform clinicians on how to manage the vast volumes of data they are required to analyze in their daily practice. The objective of this proof-of-concept study was to quantify the amounts of redundant information associated with common ICU lab tests. We analyzed the information content of 11 laboratory test results from 29,149 adult ICU admissions in the MIMIC II database. Information theory was applied to quantify the expected amount of redundant information both between lab values from the same ICU day, and between consecutive ICU days. Most lab values showed a decreasing trend over time in the expected amount of novel information they contained. Platelet, blood urea nitrogen (BUN), and creatinine measurements exhibited the most amount of redundant information on days 2 and 3 compared to the previous day. The creatinine-BUN and sodium-chloride pairs had the most redundancy. Information theory can help identify and discourage unnecessary testing and bloodwork, and can in general be a useful data analytic technique for many medical specialties that deal with information overload.
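
    As a concrete illustration of the kind of entropy and mutual-information calculation described above, the following sketch uses synthetic creatinine and BUN values rather than MIMIC II data; the decile discretization and the synthetic correlation are assumptions made only for the example.

        # Minimal sketch: redundancy between two correlated lab tests measured as
        # mutual information (bits shared by the two discretized variables).
        import numpy as np

        rng = np.random.default_rng(2)
        n = 10000
        creatinine = rng.lognormal(mean=0.0, sigma=0.3, size=n)
        bun = 12.0 * creatinine + rng.normal(scale=2.0, size=n)   # correlated analyte

        def discretize(x, bins=10):
            cuts = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
            return np.digitize(x, cuts)

        def entropy(labels):
            _, counts = np.unique(labels, return_counts=True)
            p = counts / counts.sum()
            return -(p * np.log2(p)).sum()

        def mutual_information(a, b):
            # H(A) + H(B) - H(A,B): the information A and B share, i.e. redundancy.
            joint = np.column_stack([a, b])
            counts = np.unique(joint, axis=0, return_counts=True)[1]
            p = counts / counts.sum()
            h_joint = -(p * np.log2(p)).sum()
            return entropy(a) + entropy(b) - h_joint

        a, b = discretize(creatinine), discretize(bun)
        print(f"H(creatinine)={entropy(a):.2f} bits, H(BUN)={entropy(b):.2f} bits, "
              f"shared (redundant) information={mutual_information(a, b):.2f} bits")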

  6. Exploring Cloud Computing for Large-scale Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guang; Han, Binh; Yin, Jian

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications, which often require only a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high-performance hardware with low-latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high-performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a systems biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  7. Alkali activation processes for incinerator residues management.

    PubMed

    Lancellotti, Isabella; Ponzoni, Chiara; Barbieri, Luisa; Leonelli, Cristina

    2013-08-01

    Incinerator bottom ash (BA) is produced in large amounts worldwide, including in Italy, where 5.1 million tons of municipal solid residues were incinerated in 2010, corresponding to 1.2-1.5 million tons of bottom ash. In the present study, this residue was used to produce dense geopolymers containing a high percentage (50-70 wt%) of ash. The amount of potentially reactive aluminosilicate fraction in the ash was determined by means of a test in NaOH. The final properties of geopolymers prepared with or without taking this reactive fraction into account were compared. The results showed that, due to the presence of both amorphous and crystalline fractions with different degrees of reactivity, the incinerator BA geopolymers exhibit significant differences in Si/Al ratio and microstructure when the reactive fraction is considered. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Short-range, overpressure-driven methane migration in coarse-grained gas hydrate reservoirs

    DOE PAGES

    Nole, Michael; Daigle, Hugh; Cook, Ann E.; ...

    2016-08-31

    Two methane migration mechanisms have been proposed for coarse-grained gas hydrate reservoirs: short-range diffusive gas migration and long-range advective fluid transport from depth. Herein we demonstrate that short-range fluid flow due to overpressure in marine sediments is a significant additional methane transport mechanism that allows hydrate to precipitate in large quantities in thick, coarse-grained hydrate reservoirs. Two-dimensional simulations demonstrate that this migration mechanism, short-range advective transport, can supply significant amounts of dissolved gas and is unencumbered by limitations of the other two end-member mechanisms. Here, short-range advective migration can increase the amount of methane delivered to sands as compared to the slow process of diffusion, yet it is not necessarily limited by effective porosity reduction as is typical of updip advection from a deep source.

  9. Short-range, overpressure-driven methane migration in coarse-grained gas hydrate reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nole, Michael; Daigle, Hugh; Cook, Ann E.

    Two methane migration mechanisms have been proposed for coarse-grained gas hydrate reservoirs: short-range diffusive gas migration and long-range advective fluid transport from depth. Herein we demonstrate that short-range fluid flow due to overpressure in marine sediments is a significant additional methane transport mechanism that allows hydrate to precipitate in large quantities in thick, coarse-grained hydrate reservoirs. Two-dimensional simulations demonstrate that this migration mechanism, short-range advective transport, can supply significant amounts of dissolved gas and is unencumbered by limitations of the other two end-member mechanisms. Here, short-range advective migration can increase the amount of methane delivered to sands as compared to the slow process of diffusion, yet it is not necessarily limited by effective porosity reduction as is typical of updip advection from a deep source.

  10. Unit-Dose Bags For Formulating Intravenous Solutions

    NASA Technical Reports Server (NTRS)

    Finley, Mike; Kipp, Jim; Scharf, Mike; Packard, Jeff; Owens, Jim

    1993-01-01

    Smaller unit-dose flowthrough bags devised for use with large-volume parenteral (LVP) bags in preparing sterile intravenous solutions. Premeasured amount of solute stored in such unit-dose bag flushed by predetermined amount of water into LVP bag. Relatively small number of LVP bags used in conjunction with smaller unit-dose bags to formulate large number of LVP intravenous solutions in nonsterile environment.

  11. Taking Energy to the Physics Classroom from the Large Hadron Collider at CERN

    ERIC Educational Resources Information Center

    Cid, Xabier; Cid, Ramon

    2009-01-01

    In 2008, the greatest experiment in history began. When in full operation, the Large Hadron Collider (LHC) at CERN will generate the greatest amount of information that has ever been produced in an experiment before. It will also reveal some of the most fundamental secrets of nature. Despite the enormous amount of information available on this…

  12. Patchy reaction-diffusion and population abundance: the relative importance of habitat amount and arrangement

    Treesearch

    Curtis H. Flather; Michael Bevers

    2002-01-01

    A discrete reaction-diffusion model was used to estimate long-term equilibrium populations of a hypothetical species inhabiting patchy landscapes to examine the relative importance of habitat amount and arrangement in explaining population size. When examined over a broad range of habitat amounts and arrangements, population size was largely determined by a pure amount...

  13. Assessing the Provenance of regolith components in the South Pole-Aitken Basin: Results from LRO, M3, GRAIL, and Ejecta Modeling

    NASA Astrophysics Data System (ADS)

    Petro, N. E.; Cohen, B. A.; Jolliff, B. L.; Moriarty, D. P.

    2016-12-01

    Results from recent lunar missions are reshaping our view of the lunar surface, the evolution of the Moon, and the scale of processes that have affected the Moon. From orbital remote sensing data we can investigate surface mineralogy at the 100s m scale, as well as corresponding high-resolution images, to evaluate the exposures of various compositions. Coupled with geophysical data from the GRAIL mission, we can now assess the effects of large impacts (>200 km in diameter). These data are essential for assessing the composition of the interior of the South Pole-Aitken Basin (SPA), a key destination for future sample return (Jolliff et al., this conference). Data from the Lunar Reconnaissance Orbiter (LRO) show that variations in surface roughness and morphology are broad and likely reflect both the ancient age of the basin floor and younger volcanic and impact-related resurfacing events. Data from the Moon Mineralogy Mapper also reveal compositional variations across the interior of the basin that reflect both ancient volcanic activity and surface exposures of deep-seated crustal (SPA substrate) materials. These datasets are critical for delineating variations in surface compositions, which indicate formation mechanisms (e.g., volcanic vs. impact-derived). We investigate the resurfacing history of SPA, focusing on integrating data from multiple instruments as well as updated modeling of the origin of regolith components (in the form of ejecta from near and distant impact craters). Recent advances include determination of the inventory of large craters as well as improved estimates of the amount of ejecta from such craters. As with past estimates of basin ejecta distribution, the volume of ejecta introduced to SPA is relatively small and quickly becomes diluted within the regolith. In addition, the contribution of ejecta by smaller, local craters is shown to distribute a comparable amount of material within the basin. Much of the material distributed by these local craters is SPA substrate, with a small amount of re-melted material. In most locations within SPA, the amount of SPA substrate reworked by ballistic ejecta emplacement and mixing from impacts within the presumed transient cavity greatly exceeds the amount of material contributed by ballistic sedimentation from large craters outside of SPA.

  14. Limited Effect of Rebamipide in Addition to Proton Pump Inhibitor (PPI) in the Treatment of Post-Endoscopic Submucosal Dissection Gastric Ulcers: A Randomized Controlled Trial Comparing PPI Plus Rebamipide Combination Therapy with PPI Monotherapy.

    PubMed

    Nakamura, Kazuhiko; Ihara, Eikichi; Akiho, Hirotada; Akahoshi, Kazuya; Harada, Naohiko; Ochiai, Toshiaki; Nakamura, Norimoto; Ogino, Haruei; Iwasa, Tsutomu; Aso, Akira; Iboshi, Yoichiro; Takayanagi, Ryoichi

    2016-11-15

    The ability of endoscopic submucosal dissection (ESD) to resect large early gastric cancers (EGCs) results in the need to treat large artificial gastric ulcers. This study assessed whether the combination therapy of rebamipide plus a proton pump inhibitor (PPI) offered benefits over PPI monotherapy. In this prospective, randomized, multicenter, open-label, and comparative study, patients who had undergone ESD for EGC or gastric adenoma were randomized into groups receiving either rabeprazole monotherapy (10 mg/day, n=64) or a combination of rabeprazole plus rebamipide (300 mg/day, n=66). The Scar stage (S stage) ratio after treatment was compared, and factors independently associated with ulcer healing were identified by using multivariate analyses. The S stage rates at 4 and 8 weeks were similar in the two groups, even in the subgroups of patients with large amounts of tissue resected and regardless of CYP2C19 genotype. Independent factors for ulcer healing were circumferential location of the tumor and resected tissue size; the type of treatment did not affect ulcer healing. Combination therapy with rebamipide and PPI had limited benefits compared with PPI monotherapy in the treatment of post-ESD gastric ulcer (UMIN Clinical Trials Registry, UMIN000007435).

  15. Limited Effect of Rebamipide in Addition to Proton Pump Inhibitor (PPI) in the Treatment of Post-Endoscopic Submucosal Dissection Gastric Ulcers: A Randomized Controlled Trial Comparing PPI Plus Rebamipide Combination Therapy with PPI Monotherapy

    PubMed Central

    Nakamura, Kazuhiko; Ihara, Eikichi; Akiho, Hirotada; Akahoshi, Kazuya; Harada, Naohiko; Ochiai, Toshiaki; Nakamura, Norimoto; Ogino, Haruei; Iwasa, Tsutomu; Aso, Akira; Iboshi, Yoichiro; Takayanagi, Ryoichi

    2016-01-01

    Background/Aims The ability of endoscopic submucosal dissection (ESD) to resect large early gastric cancers (EGCs) results in the need to treat large artificial gastric ulcers. This study assessed whether the combination therapy of rebamipide plus a proton pump inhibitor (PPI) offered benefits over PPI monotherapy. Methods In this prospective, randomized, multicenter, open-label, and comparative study, patients who had undergone ESD for EGC or gastric adenoma were randomized into groups receiving either rabeprazole monotherapy (10 mg/day, n=64) or a combination of rabeprazole plus rebamipide (300 mg/day, n=66). The Scar stage (S stage) ratio after treatment was compared, and factors independently associated with ulcer healing were identified by using multivariate analyses. Results The S stage rates at 4 and 8 weeks were similar in the two groups, even in the subgroups of patients with large amounts of tissue resected and regardless of CYP2C19 genotype. Independent factors for ulcer healing were circumferential location of the tumor and resected tissue size; the type of treatment did not affect ulcer healing. Conclusions Combination therapy with rebamipide and PPI had limited benefits compared with PPI monotherapy in the treatment of post-ESD gastric ulcer (UMIN Clinical Trials Registry, UMIN000007435). PMID:27282261

  16. Comparative study of absorption in tilted silicon nanowire arrays for photovoltaics

    PubMed Central

    2014-01-01

    Silicon nanowire arrays have been shown to demonstrate light trapping properties and promising potential for next-generation photovoltaics. In this paper, we show that the absorption enhancement in vertical nanowire arrays on a perfectly electric conductor can be further improved through tilting. Vertical nanowire arrays have a 66.2% improvement in ultimate efficiency over an ideal double-pass thin film of the equivalent amount of material. Tilted nanowire arrays, with the same amount of material, exhibit improved performance over vertical nanowire arrays across a broad range of tilt angles (from 38° to 72°). The optimum tilt of 53° has an improvement of 8.6% over that of vertical nanowire arrays and 80.4% over that of the ideal double-pass thin film. Tilted nanowire arrays exhibit improved absorption over the solar spectrum compared with vertical nanowires since the tilt allows for the excitation of additional modes besides the HE 1m modes that are excited at normal incidence. We also observed that tilted nanowire arrays have improved performance over vertical nanowire arrays for a large range of incidence angles (under about 60°). PMID:25435833

  17. Comparative study of absorption in tilted silicon nanowire arrays for photovoltaics.

    PubMed

    Kayes, Md Imrul; Leu, Paul W

    2014-01-01

    Silicon nanowire arrays have been shown to demonstrate light trapping properties and promising potential for next-generation photovoltaics. In this paper, we show that the absorption enhancement in vertical nanowire arrays on a perfectly electric conductor can be further improved through tilting. Vertical nanowire arrays have a 66.2% improvement in ultimate efficiency over an ideal double-pass thin film of the equivalent amount of material. Tilted nanowire arrays, with the same amount of material, exhibit improved performance over vertical nanowire arrays across a broad range of tilt angles (from 38° to 72°). The optimum tilt of 53° has an improvement of 8.6% over that of vertical nanowire arrays and 80.4% over that of the ideal double-pass thin film. Tilted nanowire arrays exhibit improved absorption over the solar spectrum compared with vertical nanowires since the tilt allows for the excitation of additional modes besides the HE 1m modes that are excited at normal incidence. We also observed that tilted nanowire arrays have improved performance over vertical nanowire arrays for a large range of incidence angles (under about 60°).

  18. Development of a new seal for use on large openings of pressurized spacecraft

    NASA Technical Reports Server (NTRS)

    Weddendorf, B.

    1994-01-01

    The goal of this project was to design, build, and test an example of the seal invented by the author for use on Space Station Freedom and patented in 1991. The seal features a metallic spring core and replaceable elastomeric sealing elements. The metallic spring is designed to retain the sealing force of the elastomeric element against both sides of the face seal gland for any specified amount of waviness or separation of the glands. A seal able to tolerate at least 1.3 mm (0.05 in) of flange distortion or separation, and a test fixture for this seal that allowed direct comparison testing against O-rings, were built. These designs were tested to compare leakage at different amounts of flange deflection. Results of the testing show that the development seal exceeded its requirement to seal 1.3 mm of flange separation by 1 mm. By comparison, O-ring leakage increased dramatically at 0.5 mm of separation. The development seal also leaked at a lower rate than the O-ring seals in all tests.

  19. Comparing spatially varying coefficient models: a case study examining violent crime rates and their relationships to alcohol outlets and illegal drug arrests

    NASA Astrophysics Data System (ADS)

    Wheeler, David C.; Waller, Lance A.

    2009-03-01

    In this paper, we compare and contrast a Bayesian spatially varying coefficient process (SVCP) model with a geographically weighted regression (GWR) model for the estimation of the potentially spatially varying regression effects of alcohol outlets and illegal drug activity on violent crime in Houston, Texas. In addition, we focus on the inherent coefficient shrinkage properties of the Bayesian SVCP model as a way to address increased coefficient variance that follows from collinearity in GWR models. We outline the advantages of the Bayesian model in terms of reducing inflated coefficient variance, enhanced model flexibility, and more formal measuring of model uncertainty for prediction. We find spatially varying effects for alcohol outlets and drug violations, but the amount of variation depends on the type of model used. For the Bayesian model, this variation is controllable through the amount of prior influence placed on the variance of the coefficients. For example, the spatial pattern of coefficients is similar for the GWR and Bayesian models when a relatively large prior variance is used in the Bayesian model.
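
    For readers unfamiliar with the GWR side of this comparison, the following is a minimal sketch of a locally weighted fit with a fixed Gaussian kernel on synthetic point data. It is not the Houston crime analysis and does not implement the Bayesian SVCP model or its prior-based coefficient shrinkage; the coordinates, covariate, bandwidth, and spatially varying slope are all illustrative assumptions.

        # Minimal sketch: geographically weighted regression (GWR) with a fixed
        # Gaussian kernel, calibrated at every observation location.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 400
        coords = rng.uniform(0, 10, size=(n, 2))              # site locations
        x = rng.normal(size=n)                                # e.g. outlet density
        beta_true = 0.5 + 0.2 * coords[:, 0]                  # effect varies west->east
        y = 1.0 + beta_true * x + rng.normal(scale=0.3, size=n)

        def gwr_coefficients(coords, x, y, bandwidth=1.5):
            X = np.column_stack([np.ones_like(x), x])
            betas = np.empty((len(y), 2))
            for i, c in enumerate(coords):
                d = np.linalg.norm(coords - c, axis=1)
                w = np.exp(-0.5 * (d / bandwidth) ** 2)       # Gaussian kernel weights
                W = np.diag(w)
                # Weighted least squares solved at each calibration location.
                betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
            return betas

        betas = gwr_coefficients(coords, x, y)
        print("local slope range:", betas[:, 1].min().round(2),
              "to", betas[:, 1].max().round(2))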

  20. Improvements in nanoscale zero-valent iron production by milling through the addition of alumina

    NASA Astrophysics Data System (ADS)

    Ribas, D.; Cernik, M.; Martí, V.; Benito, J. A.

    2016-07-01

    A new milling procedure for cost-effective production of nanoscale zero-valent iron for environmental remediation is presented. Conventional ball milling of iron in an organic solvent such as mono ethylene glycol produces flattened iron particles that are unlikely to break even after very long milling times. With the aim of breaking down these iron flakes, in this new procedure further milling is carried out after adding an amount of fine alumina powder to the previously milled solution. As the amount of added alumina increases from 9 to 54 g l-1, a progressive decrease in the presence of flakes is observed. At the highest of these alumina contents, the appearance of the particles formed from fragments of the former flakes is rather homogeneous, with most of the final nanoparticles having an equivalent diameter well below 1 µm and an average particle size in solution of around 400 nm. A further increase in alumina content results in a highly viscous solution and a worse particle size distribution. Particles milled with an alumina concentration of 54 g l-1 have a fairly large specific surface area and a high Fe(0) content. These new particles show very good Cr(VI) removal efficiency compared with other commercial products available. This good reactivity is related to the absence of an oxide layer, the large number of surface irregularities generated by the repetitive fracture process during milling, and the presence of a fine nanostructure within the iron nanoparticles.

  1. [Injudicious and excessive use of antibiotics: public health and salmon aquaculture in Chile].

    PubMed

    Millanao B, Ana; Barrientos H, Marcela; Gómez C, Carolina; Tomova, Alexandra; Buschmann, Alejandro; Dölz, Humberto; Cabello, Felipe C

    2011-01-01

    Salmon aquaculture was one of the major growing and exporting industries in Chile. Its development was accompanied by an increasing and excessive use of large amounts of antimicrobials, such as quinolones, tetracyclines and florfenicol. The examination of the sanitary conditions in the industry as part of a more general investigation into the uncontrolled and extensive dissemination of the ISA virus epizootic in 2008, found numerous and wide-ranging shortcomings and limitations in management of preventive fish health. There was a growing industrial use of large amounts of antimicrobials as an attempt at prophylaxis of bacterial infections resulting from widespread unsanitary and unhealthy fish rearing conditions. As might be expected, these attempts were unsuccessful and this heavy antimicrobial use failed to prevent viral and parasitic epizootics. Comparative analysis of the amounts of antimicrobials, especially quinolones, consumed in salmon aquaculture and in human medicine in Chile robustly suggests that the most important selective pressure for antibiotic resistant bacteria in the country will be excessive antibiotic use in this industry. This excessive use will facilitate selection of resistant bacteria and resistance genes in water environments. The commonality of antibiotic resistance genes and the mobilome between environmental aquatic bacteria, fish pathogens and pathogens of terrestrial animals and humans suggests that horizontal gene transfer occurs between the resistome of these apparently independent and isolated bacterial populations. Thus, excessive antibiotic use in the marine environment in aquaculture is not innocuous and can potentially negatively affect therapy of bacterial infections of humans and terrestrial animals.

  2. The first products made in space: Monodisperse latex particles

    NASA Technical Reports Server (NTRS)

    Vanderhoff, J. W.; El-Aasser, M. S.; Micale, F. J.; Sudol, E. D.; Tseng, C.-M.; Sheu, H.-R.; Kornfeld, D. M.

    1988-01-01

    The preparation of large-particle-size (3 to 30 micrometer) monodisperse latexes in space confirmed the original rationale unequivocally. The flight polymerizations formed negligible amounts of coagulum, compared to increasing amounts for the ground-based polymerizations. The number of offsize large particles in the flight latexes was smaller than in the ground-based latexes. The particle size distribution broadened and more large offsize particles were formed when the polymerizations of the partially converted STS-4 latexes were completed on Earth. Polymerization in space also showed other unanticipated advantages. The flight latexes had narrower particle size distributions than the ground-based latexes. The particles of the flight latexes were more perfect spheres than those of the ground-based latexes. The superior uniformity of the flight latexes was confirmed by the National Bureau of Standards' acceptance of the 10 micrometer STS-6 latex and the 30 micrometer STS-11 latexes as Standard Reference Materials, the first products made in space for sale on Earth. The polymerization rates in space were the same as those on Earth within experimental error. Further development of the ground-based polymerization recipes gave monodisperse particles as large as 100 micrometers with tolerable levels of coagulum, but their uniformity was significantly poorer than that of the flight latexes. Careful control of the polymerization parameters gave uniform nonspherical particles: symmetrical and asymmetrical doublets, ellipsoids, egg-shaped, ice cream cone-shaped, and popcorn-shaped particles.

  3. A comparison of shoreline seines with fyke nets for sampling littoral fish communities in floodplain lakes

    USGS Publications Warehouse

    Clark, S.J.; Jackson, J.R.; Lochmann, S.E.

    2007-01-01

    We compared shoreline seines with fyke nets in terms of their ability to sample fish species in the littoral zone of 22 floodplain lakes of the White River, Arkansas. Lakes ranged in size from less than 0.5 to 51.0 ha. Most contained large amounts of coarse woody debris within the littoral zone, thus making seining in shallow areas difficult. We sampled large lakes (>2 ha) using three fyke nets; small lakes (<2 ha) were sampled using two fyke nets. Fyke nets were set for 24 h. Large lakes were sampled with an average of 11 seine hauls/ lake and small lakes were sampled with an average of 3 seine hauls/lake, but exact shoreline seining effort varied among lakes depending on the amount of open shoreline. Fyke nets collected more fish and produced greater species richness and diversity measures than did seining. Species evenness was similar for the two gear types. Two species were unique to seine samples, whereas 13 species and 3 families were unique to fyke-net samples. Although fyke nets collected more fish and more species than did shoreline seines, neither gear collected all the species present in the littoral zone of floodplain lakes. These results confirm the need for a multiple-gear approach to fully characterize the littoral fish assemblages in floodplain lakes. © Copyright by the American Fisheries Society 2007.

  4. Improvement of Strength-Toughness-Hardness Balance in Large Cross-Section 718H Pre-Hardened Mold Steel

    PubMed Central

    Liu, Hanghang; Fu, Paixian; Liu, Hongwei; Li, Dianzhong

    2018-01-01

    The strength-toughness combination and hardness uniformity in large cross-section 718H pre-hardened mold steel from a 20 ton ingot were investigated with three different heat treatments for industrial applications. Different microstructures, including tempered martensite, lower bainite, and retained austenite, were obtained at equivalent hardness. The microstructures were characterized using metallographic observations, scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), and electron back-scattered diffraction (EBSD). The mechanical properties were compared by tensile, Charpy U-notch impact and hardness uniformity tests at room temperature. The results showed that the test steels after normalizing-quenching-tempering (N-QT) possessed the best strength-toughness combination and hardness uniformity compared with the conventional quenched-and-tempered (QT) steel. In addition, the test steel after austempering-tempering (A-T) demonstrated worse hardness uniformity and lower yield strength, while possessing relatively higher elongation (17%) compared with the samples after N-QT treatment (14.5%). The better ductility of the A-T steel depended mainly on the amount and morphology of retained austenite and thermal/deformation-induced twinned martensite. This work elucidates the mechanisms of microstructure evolution during the heat treatments and will help improve the strength-toughness-hardness trade-off in large cross-section steels. PMID:29642642

  5. Multiple Small Diameter Drillings Increase Femoral Neck Stability Compared with Single Large Diameter Femoral Head Core Decompression Technique for Avascular Necrosis of the Femoral Head.

    PubMed

    Brown, Philip J; Mannava, Sandeep; Seyler, Thorsten M; Plate, Johannes F; Van Sikes, Charles; Stitzel, Joel D; Lang, Jason E

    2016-10-26

    Femoral head core decompression is an efficacious joint-preserving procedure for treatment of early stage avascular necrosis. However, postoperative fractures have been described which may be related to the decompression technique used. Femoral head decompressions were performed on 12 matched human cadaveric femora comparing large 8 mm single bore versus multiple 3 mm small drilling techniques. Ultimate failure strength of the femora was tested using a servo-hydraulic material testing system. Ultimate load to failure was compared between the different decompression techniques using two paired ANCOVA linear regression models. Prior to biomechanical testing and after the intervention, volumetric bone mineral density was determined using quantitative computed tomography to account for variation between cadaveric samples and to assess the amount of bone disruption by the core decompression. Core decompression, using the small diameter bore and multiple drilling technique, withstood significantly greater load prior to failure compared with the single large bore technique after adjustment for bone mineral density (p < 0.05). The 8 mm single bore technique removed a significantly larger volume of bone compared to the 3 mm multiple drilling technique (p < 0.001). However, total fracture energy was similar between the two core decompression techniques. When considering core decompression for the treatment of early stage avascular necrosis, the multiple small bore technique removed less bone volume, thereby potentially leading to higher load to failure.

  6. Proteinortho: detection of (co-)orthologs in large-scale analysis.

    PubMed

    Lechner, Marcus; Findeiss, Sven; Steiner, Lydia; Marz, Manja; Stadler, Peter F; Prohaska, Sonja J

    2011-04-28

    Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware.
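
    The core of the approach is the reciprocal best alignment heuristic. As a rough, hypothetical illustration only (not Proteinortho's actual implementation, which adds adaptive thresholds, graph clustering and distributed execution), a reciprocal-best-hit step over made-up alignment scores might look like this:

```python
# Minimal reciprocal-best-hit (RBH) sketch. The score tables are hypothetical
# stand-ins for BLAST-like bit scores between two proteomes A and B.

def best_hit(scores):
    """Return {query: best-matching subject} from {(query, subject): score}."""
    best = {}
    for (q, s), sc in scores.items():
        if q not in best or sc > scores[(q, best[q])]:
            best[q] = s
    return best

def reciprocal_best_hits(a_vs_b, b_vs_a):
    """Pairs (a, b) that are each other's best hit in both directions."""
    best_ab = best_hit(a_vs_b)
    best_ba = best_hit(b_vs_a)
    return [(a, b) for a, b in best_ab.items() if best_ba.get(b) == a]

# Toy example with made-up scores.
a_vs_b = {("a1", "b1"): 250.0, ("a1", "b2"): 90.0, ("a2", "b2"): 310.0}
b_vs_a = {("b1", "a1"): 240.0, ("b2", "a2"): 300.0, ("b2", "a1"): 85.0}
print(reciprocal_best_hits(a_vs_b, b_vs_a))   # [('a1', 'b1'), ('a2', 'b2')]
```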

  7. Stopover decision during migration: physiological conditions predict nocturnal restlessness in wild passerines.

    PubMed

    Fusani, Leonida; Cardinale, Massimiliano; Carere, Claudio; Goymann, Wolfgang

    2009-06-23

    During migration, a number of bird species rely on stopover sites for resting and feeding before and after crossing ecological barriers such as deserts or seas. The duration of a stopover depends on the combined effects of environmental factors, endogenous programmes and physiological conditions. Previous studies indicated that lean birds prolong their refuelling stopover compared with fat birds; however, the quantitative relationship between physiological conditions and stopover behaviour has not been studied yet. Here, we tested in a large sample of free-living birds of three European passerines (whinchats, Saxicola rubetra, garden warblers, Sylvia borin and whitethroats, Sylvia communis) whether the amount of migratory restlessness (Zugunruhe) shown at a stopover site depends on physiological conditions. An integrated measure of condition based on body mass, amount of subcutaneous fat and thickness of pectoral muscles strongly predicted the intensity of Zugunruhe shown in recording cages in the night following capture. These results provide novel and robust quantitative evidence in support of the hypothesis that the amount of energy reserves plays a major role in determining the stopover duration in migratory birds.

  8. Stopover decision during migration: physiological conditions predict nocturnal restlessness in wild passerines

    PubMed Central

    Fusani, Leonida; Cardinale, Massimiliano; Carere, Claudio; Goymann, Wolfgang

    2009-01-01

    During migration, a number of bird species rely on stopover sites for resting and feeding before and after crossing ecological barriers such as deserts or seas. The duration of a stopover depends on the combined effects of environmental factors, endogenous programmes and physiological conditions. Previous studies indicated that lean birds prolong their refuelling stopover compared with fat birds; however, the quantitative relationship between physiological conditions and stopover behaviour has not been studied yet. Here, we tested in a large sample of free-living birds of three European passerines (whinchats, Saxicola rubetra, garden warblers, Sylvia borin and whitethroats, Sylvia communis) whether the amount of migratory restlessness (Zugunruhe) shown at a stopover site depends on physiological conditions. An integrated measure of condition based on body mass, amount of subcutaneous fat and thickness of pectoral muscles strongly predicted the intensity of Zugunruhe shown in recording cages in the night following capture. These results provide novel and robust quantitative evidence in support of the hypothesis that the amount of energy reserves plays a major role in determining the stopover duration in migratory birds. PMID:19324648

  9. Memory consolidation and contextual interference effects with computer games.

    PubMed

    Shewokis, Patricia A

    2003-10-01

    Some investigators of the contextual interference effect contend that there is a direct relation between the amount of practice and the contextual interference effect based on the prediction that the improvement in learning tasks in a random practice schedule, compared to a blocked practice schedule, increases in magnitude as the amount of practice during acquisition on the tasks increases. Research using computer games in contextual interference studies has yielded a large effect (f = .50) with a random practice schedule advantage during transfer. These investigations had a total of 36 and 72 acquisition trials, respectively. The present study tested this prediction by having 72 college students, who were randomly assigned to a blocked or random practice schedule, practice 102 trials of three computer-game tasks across three days. After a 24-hr. interval, 6 retention and 5 transfer trials were performed. Dependent variables were time to complete an event in seconds and number of errors. No significant differences were found for retention and transfer. These results are discussed in terms of how the amount of practice, task-related factors, and memory consolidation mediate the contextual interference effect.

  10. Electric Charge Accumulation in Polar and Non-Polar Polymers under Electron Beam Irradiation

    NASA Astrophysics Data System (ADS)

    Nagasawa, Kenichiro; Honjoh, Masato; Takada, Tatsuo; Miyake, Hiroaki; Tanaka, Yasuhiro

    The electric charge accumulation under electron beam irradiation (40 keV and 60 keV) was measured using the pressure wave propagation (PWP) method in dielectric insulation materials, including polar polymeric films (polycarbonate (PC), polyethylene-naphthalate (PEN), polyimide (PI), and polyethylene-terephthalate (PET)) and non-polar polymeric films (polystyrene (PS), polypropylene (PP), polyethylene (PE) and polytetrafluoroethylene (PTFE)). The non-polar polymers PE and PTFE showed large charge accumulation, over 50 C/m3, and long saturation times, over 80 minutes. The non-polar polymers PP and PS showed intermediate charge accumulation of about 20 C/m3 and intermediate saturation times of about 1 to 20 minutes. The polar polymers PC, PEN, PI and PET showed small charge accumulation of about 5 to 20 C/m3 and short saturation times of about 1 minute. This paper summarizes the relationship between the charge accumulation properties and the chemical structural formulae, and compares the electrostatic potential distribution of the negatively charged polymers with their chemical structural formulae.

  11. Branched ZnO wire structures for water collection inspired by cacti.

    PubMed

    Heng, Xin; Xiang, Mingming; Lu, Zhihui; Luo, Cheng

    2014-06-11

    In this work, motivated by an approach used by a cactus to collect fog, we have developed an artificial water-collection structure. This structure includes a large ZnO wire and an array of small ZnO wires that are branched on the large wire. All these wires have conical shapes, whose diameters gradually increase from the tip to the root of a wire. Accordingly, a water drop that is condensed on the tip of each wire is driven to the root by a capillary force induced by this diameter gradient. The lengths of the stem and branched wires in the synthesized structures are on the orders of 1 mm and 100 μm, respectively. These dimensions are, respectively, comparable to and larger than their counterparts in the case of a cactus. Two groups of tests were conducted at a relative humidity of 100% to compare the amounts of water collected by artificial and cactus structures within specific time durations of 2 and 35 s, respectively. The amount of water collected by either type of structure was on the order of 0.01 μL. However, on average, the artificial structures collected 1.4-5.0 times more water than the cactus structures. We further examined the mechanism that a cactus uses to absorb a collected water drop into its stem. On the basis of the gained understanding, we developed a setup to successfully collect about 6 μL of water within 30 min.

  12. Desiderata for Healthcare Integrated Data Repositories Based on Architectural Comparison of Three Public Repositories

    PubMed Central

    Huser, Vojtech; Cimino, James J.

    2013-01-01

    Integrated data repositories (IDRs) are indispensable tools for numerous biomedical research studies. We compare three large IDRs (Informatics for Integrating Biology and the Bedside (i2b2), HMO Research Network’s Virtual Data Warehouse (VDW) and Observational Medical Outcomes Partnership (OMOP) repository) in order to identify common architectural features that enable efficient storage and organization of large amounts of clinical data. We define three high-level classes of underlying data storage models and we analyze each repository using this classification. We look at how a set of sample facts is represented in each repository and conclude with a list of desiderata for IDRs that deal with the information storage model, terminology model, data integration and value-sets management. PMID:24551366

  13. Desiderata for healthcare integrated data repositories based on architectural comparison of three public repositories.

    PubMed

    Huser, Vojtech; Cimino, James J

    2013-01-01

    Integrated data repositories (IDRs) are indispensable tools for numerous biomedical research studies. We compare three large IDRs (Informatics for Integrating Biology and the Bedside (i2b2), HMO Research Network's Virtual Data Warehouse (VDW) and Observational Medical Outcomes Partnership (OMOP) repository) in order to identify common architectural features that enable efficient storage and organization of large amounts of clinical data. We define three high-level classes of underlying data storage models and we analyze each repository using this classification. We look at how a set of sample facts is represented in each repository and conclude with a list of desiderata for IDRs that deal with the information storage model, terminology model, data integration and value-sets management.

  14. Efficient tiled calculation of over-10-gigapixel holograms using ray-wavefront conversion.

    PubMed

    Igarashi, Shunsuke; Nakamura, Tomoya; Matsushima, Kyoji; Yamaguchi, Masahiro

    2018-04-16

    In the calculation of large-scale computer-generated holograms, an approach called "tiling," which divides the hologram plane into small rectangles, is often employed due to limitations on computational memory. However, the total computational cost increases severely with the number of divisions. In this paper, we propose an efficient method for calculating tiled large-scale holograms using ray-wavefront conversion. In experiments, the effectiveness of the proposed method was verified by comparing its calculation cost with that of the previous method. Additionally, a hologram of 128K × 128K pixels was calculated and fabricated by a laser-lithography system, and a high-quality 105 mm × 105 mm 3D image including complicated reflection and translucency was optically reconstructed.
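
    The ray-wavefront conversion itself is not reproduced here, but the underlying tiling idea, evaluating one small rectangle of a large output plane at a time so that memory stays bounded by the tile size, can be sketched as follows. The wavelength, source positions and tile sizes below are assumptions for illustration only:

```python
# Generic tiling sketch: evaluate a large interference pattern one tile at a
# time so memory stays bounded by the tile size, not the full plane.
# This only illustrates the tiling idea, not the paper's ray-wavefront
# conversion algorithm.
import numpy as np

WAVELEN = 0.5e-6                      # assumed wavelength [m]
K = 2.0 * np.pi / WAVELEN
SOURCES = np.array([[0.0, 0.0, 0.1],  # hypothetical point sources (x, y, z)
                    [1e-3, 2e-3, 0.12]])

def tile_field(x0, y0, nx, ny, pitch=1e-6):
    """Real part of the field on one nx-by-ny tile whose corner is (x0, y0)."""
    xs = x0 + pitch * np.arange(nx)
    ys = y0 + pitch * np.arange(ny)
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    field = np.zeros((nx, ny))
    for sx, sy, sz in SOURCES:
        r = np.sqrt((X - sx) ** 2 + (Y - sy) ** 2 + sz ** 2)
        field += np.cos(K * r) / r     # spherical-wave contribution
    return field

# Walk over a 4 x 4 grid of 512 x 512 tiles; each finished tile could be
# written to disk (or handed to a GPU) before the next one is generated.
for ti in range(4):
    for tj in range(4):
        tile = tile_field(ti * 512e-6, tj * 512e-6, 512, 512)
        print(ti, tj, float(tile.max()))
```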

  15. Accelerating calculations of RNA secondary structure partition functions using GPUs

    PubMed Central

    2013-01-01

    Background: RNA performs many diverse functions in the cell in addition to its role as a messenger of genetic information. These functions depend on its ability to fold to a unique three-dimensional structure determined by the sequence. The conformation of RNA is in part determined by its secondary structure, or the particular set of contacts between pairs of complementary bases. Prediction of the secondary structure of RNA from its sequence is therefore of great interest, but can be computationally expensive. In this work we accelerate computations of base-pair probabilities using parallel graphics processing units (GPUs). Results: Calculation of the probabilities of base pairs in RNA secondary structures using nearest-neighbor standard free energy change parameters has been implemented using CUDA to run on hardware with multiprocessor GPUs. A modified set of recursions was introduced, which reduces memory usage by about 25%. GPUs are fastest in single precision, and for some hardware, restricted to single precision. This may introduce significant roundoff error. However, deviations in base-pair probabilities calculated using single precision were found to be negligible compared to those resulting from shifting the nearest-neighbor parameters by a random amount of magnitude similar to their experimental uncertainties. For large sequences running on our particular hardware, the GPU implementation reduces execution time by a factor of close to 60 compared with an optimized serial implementation, and by a factor of 116 compared with the original code. Conclusions: Using GPUs can greatly accelerate computation of RNA secondary structure partition functions, allowing calculation of base-pair probabilities for large sequences in a reasonable amount of time, with a negligible compromise in accuracy due to working in single precision. The source code is integrated into the RNAstructure software package and available for download at http://rna.urmc.rochester.edu. PMID:24180434
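
    The single-versus-double precision comparison can be mimicked with a toy calculation: compute Boltzmann-style pairing probabilities from hypothetical free energies in float32 and float64, then compare the deviation with the effect of perturbing the parameters by a nominal experimental uncertainty. This is only a roundoff sanity check, not the RNAstructure recursions:

```python
# Toy roundoff check: float32 vs float64 probabilities from made-up
# free-energy changes, compared with the effect of a parameter perturbation.
import numpy as np

rng = np.random.default_rng(0)
RT = 0.616                                  # approx. kcal/mol at 37 C
dG = rng.uniform(-3.0, 0.0, size=10_000)    # hypothetical free-energy changes

def probs(dg, dtype):
    w = np.exp(-dg.astype(dtype) / dtype(RT))
    return w / w.sum(dtype=dtype)

p64 = probs(dG, np.float64)
p32 = probs(dG, np.float32).astype(np.float64)
roundoff = np.abs(p64 - p32).max()

# Compare with shifting parameters by a nominal 0.1 kcal/mol "uncertainty".
p_shift = probs(dG + rng.normal(0.0, 0.1, size=dG.size), np.float64)
param_effect = np.abs(p64 - p_shift).max()

print(f"max |float64 - float32| deviation    : {roundoff:.3e}")
print(f"max deviation from 0.1 kcal/mol shift: {param_effect:.3e}")
```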

  16. Dosage form design and in vitro/in vivo evaluation of cevimeline extended-release tablet formulations.

    PubMed

    Tajiri, Shinichiro; Kanamaru, Taro; Kamada, Makoto; Konno, Tsutomu; Nakagami, Hiroaki

    2010-01-04

    The objective of the present work was to develop an extended-release dosage form of cevimeline. Two types of extended-release tablets (simple matrix tablets and press-coated tablets) were prepared and their potential as extended-release dosage forms was assessed. The simple matrix tablets contained a large amount of hydroxypropylcellulose as a rate-controlling polymer, and the matrix was homogeneous throughout the tablet. The press-coated tablets consisted of a matrix core tablet completely surrounded by an outer shell containing a large amount of hydroxypropylcellulose. The simple matrix tablets could not sustain the release of cevimeline effectively. In contrast, the press-coated tablets showed a slower dissolution rate compared with the simple matrix tablets and the release curve was nearly linear. The dissolution of cevimeline from the press-coated tablets was not markedly affected by the pH of the dissolution medium or by a paddle rotating speed over the range of 50-200 rpm. Furthermore, cevimeline was released constantly from the press-coated tablets in the gastrointestinal tract and steady-state plasma drug levels were maintained in beagle dogs. These results suggest that the designed press-coated tablets have potential as an extended-release dosage form.

  17. Solar array flight experiment

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Emerging satellite designs require increasing amounts of electrical power to operate spacecraft instruments and to provide environments suitable for human habitation. In the past, electrical power was generated by covering rigid honeycomb panels with solar cells. This technology results in unacceptable weight and volume penalties when large amounts of power are required. To fill the need for large-area, lightweight solar arrays, a fabrication technique in which solar cells are attached to a copper printed circuit laminated to a plastic sheet was developed. The result is a flexible solar array with one-tenth the stowed volume and one-third the weight of comparably sized rigid arrays. An automated welding process developed to attach the cells to the printed circuit guarantees repeatable welds that are more tolerant of severe environments than conventional soldered connections. To demonstrate the flight readiness of this technology, the Solar Array Flight Experiment (SAFE) was developed and flown on the space shuttle Discovery in September 1984. The tests showed the modes and frequencies of the array to be very close to preflight predictions. Structural damping, however, was higher than anticipated. Electrical performance of the active solar panel was also tested. The flight performance and postflight data evaluation are described.

  18. Parallel hyperspectral compressive sensing method on GPU

    NASA Astrophysics Data System (ADS)

    Bernabé, Sergio; Martín, Gabriel; Nascimento, José M. P.

    2015-10-01

    Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. It is known that the bandwidth connection between the satellite/airborne platform and the ground station is reduced, thus an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPU) using the compute unified device architecture (CUDA). This method takes into account two main properties of hyperspectral datasets, namely the high correlation existing among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral datasets on two different GPU architectures by NVIDIA: GeForce GTX 590 and GeForce GTX TITAN, reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times when compared with the processing time of HYCA running on one core of the Intel i7-2600 CPU (3.4 GHz), with 16 Gbyte memory.

  19. Accumulation of vitamin A in the hepatic stellate cell of arctic top predators.

    PubMed

    Senoo, Haruki; Imai, Katsuyuki; Mezaki, Yoshihiro; Miura, Mitsutaka; Morii, Mayako; Fujiwara, Mutsunori; Blomhoff, Rune

    2012-10-01

    We performed a systematic characterization of the hepatic vitamin A storage in mammals and birds of the Svalbard Archipelago and Greenland. The liver of top predators, including polar bear, Arctic fox, bearded seal, and glaucous gull, contained about 10-20 times more vitamin A than the liver of all other arctic animals studied, as well as their genetically related continental top predators. The values are also high compared to those of normal humans and of experimental animals such as mice and rats. This massive amount of hepatic vitamin A was located in large autofluorescent lipid droplets in hepatic stellate cells (HSCs; also called vitamin A-storing cells, lipocytes, interstitial cells, fat-storing cells, or Ito cells). The droplets made up most of the cells' cytoplasm. The development of such an efficient vitamin A-storing mechanism in HSCs may have contributed to the survival of top predators in the extreme environment of the arctic. These animals demonstrated no signs of hypervitaminosis A. We suggest that HSCs have the capacity to take up and store large amounts of vitamin A, which may play a pivotal role in the maintenance of the food web, food chain, biodiversity, and eventually ecology of the arctic. Copyright © 2012 Wiley Periodicals, Inc.

  20. Self-reported overeating and attributions for food intake.

    PubMed

    Vartanian, Lenny R; Reily, Natalie M; Spanos, Samantha; Herman, C Peter; Polivy, Janet

    2017-04-01

    We examined whether people's attributions for their eating behaviour differ according to whether they believe they have eaten more, less or about the same as they normally would. Participants were served a small or large portion of pasta for lunch. Afterwards, they were asked to compare how much they ate in the study to how much they normally eat for lunch, resulting in three intake-evaluation categories: 'ate less', 'ate about the same' or 'ate more'. The measures were how much participants ate and the extent to which they attributed their food intake to an internal cue (i.e. hunger) and an external cue (i.e. the amount of food served). Participants served a large portion ate more than those served a small portion, but the magnitude of the portion-size effect did not vary across intake-evaluation categories. Furthermore, although participants in all groups indicated that their hunger influenced how much they ate, only those in the 'ate more' group indicated that the amount of food available influenced how much they ate. People appear to be willing to explain their food intake in terms of an external cue only when they believe that they have eaten more than they normally would.

  1. Old age and underlying interstitial abnormalities are risk factors for development of ARDS after pleurodesis using limited amount of large particle size talc.

    PubMed

    Shinno, Yuki; Kage, Hidenori; Chino, Haruka; Inaba, Atsushi; Arakawa, Sayaka; Noguchi, Satoshi; Amano, Yosuke; Yamauchi, Yasuhiro; Tanaka, Goh; Nagase, Takahide

    2018-01-01

    Talc pleurodesis is commonly performed to manage refractory pleural effusion or pneumothorax. It is considered as a safe procedure as long as a limited amount of large particle size talc is used. However, acute respiratory distress syndrome (ARDS) is a rare but serious complication after talc pleurodesis. We sought to determine the risk factors for the development of ARDS after pleurodesis using a limited amount of large particle size talc. We retrospectively reviewed patients who underwent pleurodesis with talc or OK-432 at the University of Tokyo Hospital. Twenty-seven and 35 patients underwent chemical pleurodesis using large particle size talc (4 g or less) or OK-432, respectively. Four of 27 (15%) patients developed ARDS after talc pleurodesis. Patients who developed ARDS were significantly older than those who did not (median 80 vs 66 years, P = 0.02) and had a higher prevalence of underlying interstitial abnormalities on chest computed tomography (CT; 2/4 vs 1/23, P < 0.05). No patient developed ARDS after pleurodesis with OK-432. This is the first case series of ARDS after pleurodesis using a limited amount of large particle size talc. Older age and underlying interstitial abnormalities on chest CT seem to be risk factors for developing ARDS after talc pleurodesis. © 2017 Asian Pacific Society of Respirology.

  2. 9 CFR 381.412 - Reference amounts customarily consumed per eating occasion.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... appropriate national food consumption surveys. (2) The Reference Amounts are calculated for an infant or child... are based on data set forth in appropriate national food consumption surveys. Such Reference Amounts... child under 4 years of age. (3) An appropriate national food consumption survey includes a large sample...

  3. 9 CFR 381.412 - Reference amounts customarily consumed per eating occasion.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... appropriate national food consumption surveys. (2) The Reference Amounts are calculated for an infant or child... are based on data set forth in appropriate national food consumption surveys. Such Reference Amounts... child under 4 years of age. (3) An appropriate national food consumption survey includes a large sample...

  4. 9 CFR 317.312 - Reference amounts customarily consumed per eating occasion.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... appropriate national food consumption surveys. (2) The Reference Amounts are calculated for an infant or child... are based on data set forth in appropriate national food consumption surveys. Such Reference Amounts... child under 4 years of age. (3) An appropriate national food consumption survey includes a large sample...

  5. 9 CFR 317.312 - Reference amounts customarily consumed per eating occasion.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... appropriate national food consumption surveys. (2) The Reference Amounts are calculated for an infant or child... are based on data set forth in appropriate national food consumption surveys. Such Reference Amounts... child under 4 years of age. (3) An appropriate national food consumption survey includes a large sample...

  6. 9 CFR 381.412 - Reference amounts customarily consumed per eating occasion.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... appropriate national food consumption surveys. (2) The Reference Amounts are calculated for an infant or child... are based on data set forth in appropriate national food consumption surveys. Such Reference Amounts... child under 4 years of age. (3) An appropriate national food consumption survey includes a large sample...

  7. 9 CFR 317.312 - Reference amounts customarily consumed per eating occasion.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... appropriate national food consumption surveys. (2) The Reference Amounts are calculated for an infant or child... are based on data set forth in appropriate national food consumption surveys. Such Reference Amounts... child under 4 years of age. (3) An appropriate national food consumption survey includes a large sample...

  8. Bag For Formulating And Dispersing Intravenous Solution

    NASA Technical Reports Server (NTRS)

    Kipp, Jim; Owens, Jim; Scharf, Mike; Finley, Mike; Dudar, Tom; Veillon, Joe; Ogle, Jim

    1993-01-01

    Large-volume parenteral (LVP) bag in which a predetermined amount of sterile solution is formulated by combining a premeasured, prepackaged amount of sterile solute with a predetermined amount of water. The bag is designed to hold a predetermined amount, typically 1 L, of sterile solution. Sterility of the solution is maintained during mixing by passing the water into the bag through a sterilizing filter. The system is intended for use in the field or in hospitals without proper sterile facilities, and in field research.

  9. Low-authority control synthesis for large space structures

    NASA Technical Reports Server (NTRS)

    Aubrun, J. N.; Margulies, G.

    1982-01-01

    The control of vibrations of large space structures by distributed sensors and actuators is studied. A procedure is developed for calculating the feedback loop gains required to achieve specified amounts of damping. For moderate damping (Low Authority Control) the procedure is purely algebraic, but it can be applied iteratively when larger amounts of damping are required and is generalized for arbitrary time invariant systems.

  10. The algorithm study for using the back propagation neural network in CT image segmentation

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Liu, Jie; Chen, Chen; Li, Ying Qi

    2017-01-01

    A back propagation neural network (BP neural network) is a type of multi-layer feed-forward network in which signals propagate forward while errors propagate backward. Because a BP network can learn and store the mapping between a large number of inputs and outputs without requiring explicit mathematical equations to describe that mapping, it is very widely used. BP iteratively computes the weight coefficients and thresholds of the network from the training samples through back propagation, minimizing the error sum of squares of the network. Because the boundary of computed tomography (CT) heart images is usually discontinuous, and the volume and boundary of the heart vary greatly between images, conventional segmentation methods such as region growing and the watershed algorithm cannot achieve satisfactory results. Moreover, there are large differences between diastolic and systolic images, and conventional methods cannot accurately classify the two cases. In this paper, we applied a BP network to the segmentation of heart images. We manually segmented a large number of CT images to obtain training samples, and the BP network was trained on these samples. To obtain an appropriate BP network for the segmentation of heart images, we normalized the heart images and extracted their gray-level information. The boundary of each image was then input into the network, the difference between the theoretical output and the actual output was computed, and the error was fed back into the BP network to modify the weight coefficients of the layers. After a large amount of training, the BP network became stable and the weight coefficients of the layers could be determined, capturing the relationship between the CT images and the heart boundary.
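
    As a minimal sketch of the training loop the abstract describes (forward pass, error back-propagation, weight update), the following two-layer network is trained on toy data with NumPy. It is not the authors' segmentation network; the data, layer sizes and learning rate are assumptions:

```python
# Minimal back-propagation sketch: two-layer sigmoid network, mean-squared
# error, trained by gradient descent on toy "boundary feature" vectors.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))                 # toy input features
y = (X[:, :2].sum(axis=1) > 0).astype(float).reshape(-1, 1)

W1 = rng.normal(scale=0.5, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for epoch in range(500):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the output error toward the input layer
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    # gradient-descent weight updates
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

print("training accuracy:", ((out > 0.5) == (y > 0.5)).mean())
```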

  11. Amazon River carbon dioxide outgassing fuelled by wetlands.

    PubMed

    Abril, Gwenaël; Martinez, Jean-Michel; Artigas, L Felipe; Moreira-Turcq, Patricia; Benedetti, Marc F; Vidal, Luciana; Meziane, Tarik; Kim, Jung-Hyun; Bernardes, Marcelo C; Savoye, Nicolas; Deborde, Jonathan; Souza, Edivaldo Lima; Albéric, Patrick; Landim de Souza, Marcelo F; Roland, Fabio

    2014-01-16

    River systems connect the terrestrial biosphere, the atmosphere and the ocean in the global carbon cycle. A recent estimate suggests that up to 3 petagrams of carbon per year could be emitted as carbon dioxide (CO2) from global inland waters, offsetting the carbon uptake by terrestrial ecosystems. It is generally assumed that inland waters emit carbon that has been previously fixed upstream by land plant photosynthesis, then transferred to soils, and subsequently transported downstream in run-off. But at the scale of entire drainage basins, the lateral carbon fluxes carried by small rivers upstream do not account for all of the CO2 emitted from inundated areas downstream. Three-quarters of the world's flooded land consists of temporary wetlands, but the contribution of these productive ecosystems to the inland water carbon budget has been largely overlooked. Here we show that wetlands pump large amounts of atmospheric CO2 into river waters in the floodplains of the central Amazon. Flooded forests and floating vegetation export large amounts of carbon to river waters and the dissolved CO2 can be transported dozens to hundreds of kilometres downstream before being emitted. We estimate that Amazonian wetlands export half of their gross primary production to river waters as dissolved CO2 and organic carbon, compared with only a few per cent of gross primary production exported in upland (not flooded) ecosystems. Moreover, we suggest that wetland carbon export is potentially large enough to account for at least the 0.21 petagrams of carbon emitted per year as CO2 from the central Amazon River and its floodplains. Global carbon budgets should explicitly address temporary or vegetated flooded areas, because these ecosystems combine high aerial primary production with large, fast carbon export, potentially supporting a substantial fraction of CO2 evasion from inland waters.

  12. Tryptophan promotes charitable donating

    PubMed Central

    Steenbergen, Laura; Sellaro, Roberta; Colzato, Lorenza S.

    2014-01-01

    The link between serotonin (5-HT) and one of the most important elements of prosocial behavior, charity, has remained largely uninvestigated. In the present study, we tested whether charitable donating can be promoted by administering the food supplement L-Tryptophan (TRP), the biochemical precursor of 5-HT. Participants were compared with respect to the amount of money they donated when given the opportunity to make a charitable donation. As expected, compared to a neutral placebo, TRP appears to increase the participants’ willingness to donate money to a charity. This result supports the idea that the food we eat may act as a cognitive enhancer modulating the way we think and perceive the world and others. PMID:25566132

  13. Fermentation of animal components in strict carnivores: a comparative study with cheetah fecal inoculum.

    PubMed

    Depauw, S; Bosch, G; Hesta, M; Whitehouse-Tedd, K; Hendriks, W H; Kaandorp, J; Janssens, G P J

    2012-08-01

    The natural diet of felids contains highly digestible animal tissues but also fractions resistant to small intestinal digestion, which enter the large intestine where they may be fermented by the resident microbial population. Little information exists on the microbial degradability of animal tissues in the large intestine of felids consuming a natural diet. This study aimed to rank animal substrates in their microbial degradability by means of an in vitro study using captive cheetahs fed a strict carnivorous diet as fecal donors. Fresh cheetah fecal samples were collected, pooled, and incubated with various raw animal substrates (chicken cartilage, collagen, glucosamine-chondroitin, glucosamine, rabbit bone, rabbit hair, and rabbit skin; 4 replicates per substrate) for cumulative gas production measurement in a batch culture technique. Negative (cellulose) and positive (casein and fructo-oligosaccharides; FOS) controls were incorporated in the study. Additionally, after 72 h of incubation, short-chain fatty acids (SCFA), including branched-chain fatty acids (BCFA), and ammonia concentrations were determined for each substrate. Glucosamine and glucosamine-chondroitin yielded the greatest organic matter cumulative gas volume (OMCV) among animal substrates (P < 0.05), whereas total SCFA production was greatest for collagen (P < 0.05). Collagen induced an acetate production comparable with FOS and a markedly high acetate-to-propionate ratio (8.41:1) compared with all other substrates (1.67:1 to 2.97:1). Chicken cartilage was rapidly fermentable, indicated by a greater maximal rate of gas production (R(max)) compared with all other substrates (P < 0.05). In general, animal substrates showed an earlier occurrence for maximal gas production rate compared with FOS. Rabbit hair, skin, and bone were poorly fermentable substrates, indicated by the least amount of OMCV and total SCFA among animal substrates (P < 0.05). The greatest amount of ammonia production among animal substrates was measured after incubation of collagen and rabbit bone (P < 0.05). This study provides the first insight into the potential of animal tissues to influence large intestinal fermentation in a strict carnivore, and indicates that animal tissues have potentially similar functions as soluble or insoluble plant fibers in vitro. Further research is warranted to assess the impact of fermentation of each type of animal tissue on gastro-intestinal function and health in the cheetah and other felid species.

  14. Discovering cell types in flow cytometry data with random matrix theory

    NASA Astrophysics Data System (ADS)

    Shen, Yang; Nussenblatt, Robert; Losert, Wolfgang

    Flow cytometry is a widely used experimental technique in immunology research. During the experiments, peripheral blood mononuclear cells (PBMC) from a single patient, labeled with multiple fluorescent stains that bind to different proteins, are illuminated by a laser. The intensity of each stain on a single cell is recorded and reflects the amount of protein expressed by that cell. The data analysis focuses on identifying specific cell types related to a disease. Different cell types can be identified by the type and amount of protein they express. To date, this has most often been done manually by labelling a protein as expressed or not while ignoring the amount of expression. Using a cross correlation matrix of stain intensities, which contains both information on the proteins expressed and their amount, has been largely ignored by researchers as it suffers from measurement noise. Here we present an algorithm to identify cell types in flow cytometry data which uses random matrix theory (RMT) to reduce noise in a cross correlation matrix. We demonstrate our method using a published flow cytometry data set. Compared with previous analysis techniques, we were able to rediscover relevant cell types in an automatic way.
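
    A minimal sketch of the random-matrix-theory step, assuming simulated stain intensities, is shown below: eigenvalues of the stain-stain correlation matrix that fall below the Marchenko-Pastur upper edge are treated as noise and flattened, keeping only the "signal" modes. This is not the authors' full cell-type identification pipeline:

```python
# RMT denoising sketch for a stain-stain correlation matrix.
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_stains = 5000, 12
data = rng.normal(size=(n_cells, n_stains))
data[:, :3] += rng.normal(size=(n_cells, 1))     # one shared "cell type" signal

corr = np.corrcoef(data, rowvar=False)
evals, evecs = np.linalg.eigh(corr)

# Marchenko-Pastur upper edge for a pure-noise correlation matrix.
q = n_stains / n_cells
lambda_plus = (1.0 + np.sqrt(q)) ** 2

signal = evals > lambda_plus
print("signal eigenvalues:", evals[signal])

# Rebuild a denoised correlation matrix: keep signal modes, average the rest.
noise_level = evals[~signal].mean()
evals_clean = np.where(signal, evals, noise_level)
corr_clean = (evecs * evals_clean) @ evecs.T
np.fill_diagonal(corr_clean, 1.0)
```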

  15. Production of Low Cost Carbon-Fiber through Energy Optimization of Stabilization Process.

    PubMed

    Golkarnarenji, Gelayol; Naebe, Minoo; Badii, Khashayar; Milani, Abbas S; Jazar, Reza N; Khayyam, Hamid

    2018-03-05

    To produce high quality and low cost carbon fiber-based composites, optimization of the carbon fiber production process and the resulting fiber properties is one of the key factors. The stabilization process is the most important step in carbon fiber production; it consumes a large amount of energy, and its optimization can reduce the cost to a large extent. In this study, two intelligent optimization techniques, namely Support Vector Regression (SVR) and Artificial Neural Network (ANN), were studied and compared, using a limited dataset, to predict a physical property (density) of oxidatively stabilized PAN fiber (OPF) in the second zone of a stabilization oven within a carbon fiber production line. The results were then used to optimize the energy consumption in the process. The case study can be beneficial to chemical industries involved in carbon fiber manufacturing, for assessing and optimizing different stabilization process conditions at large.

  16. Production of Low Cost Carbon-Fiber through Energy Optimization of Stabilization Process

    PubMed Central

    Golkarnarenji, Gelayol; Naebe, Minoo; Badii, Khashayar; Milani, Abbas S.; Jazar, Reza N.; Khayyam, Hamid

    2018-01-01

    To produce high quality and low cost carbon fiber-based composites, optimization of the carbon fiber production process and the resulting fiber properties is one of the key factors. The stabilization process is the most important step in carbon fiber production; it consumes a large amount of energy, and its optimization can reduce the cost to a large extent. In this study, two intelligent optimization techniques, namely Support Vector Regression (SVR) and Artificial Neural Network (ANN), were studied and compared, using a limited dataset, to predict a physical property (density) of oxidatively stabilized PAN fiber (OPF) in the second zone of a stabilization oven within a carbon fiber production line. The results were then used to optimize the energy consumption in the process. The case study can be beneficial to chemical industries involved in carbon fiber manufacturing, for assessing and optimizing different stabilization process conditions at large. PMID:29510592

  17. Coordinated Parameterization Development and Large-Eddy Simulation for Marine and Arctic Cloud-Topped Boundary Layers

    NASA Technical Reports Server (NTRS)

    Bretherton, Christopher S.

    2002-01-01

    The goal of this project was to compare observations of marine and arctic boundary layers with: (1) parameterization systems used in climate and weather forecast models; and (2) two and three dimensional eddy resolving (LES) models for turbulent fluid flow. Based on this comparison, we hoped to better understand, predict, and parameterize the boundary layer structure and cloud amount, type, and thickness as functions of large scale conditions that are predicted by global climate models. The principal achievements of the project were as follows: (1) Development of a novel boundary layer parameterization for large-scale models that better represents the physical processes in marine boundary layer clouds; and (2) Comparison of column output from the ECMWF global forecast model with observations from the SHEBA experiment. Overall the forecast model did predict most of the major precipitation events and synoptic variability observed over the year of observation of the SHEBA ice camp.

  18. An evaluation of multi-probe locality sensitive hashing for computing similarities over web-scale query logs

    PubMed Central

    2018-01-01

    Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of the data involved (e.g., users’ queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance and are suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with “vanilla” LSH, even when using the same amount of space. PMID:29346410
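
    A minimal LSH sketch (random-hyperplane signatures with banding, not the paper's specific multi-probe variant or its Hadoop deployment) illustrates how near-duplicate query vectors land in the same bucket; all dimensions and parameters below are assumptions:

```python
# Minimal locality-sensitive hashing sketch: random-hyperplane signatures
# plus banding into buckets, then candidate pairs from shared buckets.
from collections import defaultdict
import numpy as np

rng = np.random.default_rng(3)
DIM, N_BITS, BAND = 64, 32, 8                 # 4 bands of 8 bits each
planes = rng.normal(size=(N_BITS, DIM))

def signature(vec):
    """32-bit sign pattern of the vector against random hyperplanes."""
    return tuple((planes @ vec > 0).astype(int))

def candidate_pairs(vectors):
    buckets = defaultdict(list)
    for idx, v in enumerate(vectors):
        sig = signature(v)
        for b in range(0, N_BITS, BAND):      # one bucket key per band
            buckets[(b, sig[b:b + BAND])].append(idx)
    pairs = set()
    for members in buckets.values():
        for i in members:
            for j in members:
                if i < j:
                    pairs.add((i, j))
    return pairs

# Two near-identical "query embeddings" plus unrelated ones.
base = rng.normal(size=DIM)
vectors = [base, base + 0.01 * rng.normal(size=DIM)] + \
          [rng.normal(size=DIM) for _ in range(20)]
print((0, 1) in candidate_pairs(vectors))     # very likely True
```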

  19. A Study of NetCDF as an Approach for High Performance Medical Image Storage

    NASA Astrophysics Data System (ADS)

    Magnus, Marcone; Coelho Prado, Thiago; von Wangenhein, Aldo; de Macedo, Douglas D. J.; Dantas, M. A. R.

    2012-02-01

    The spread of telemedicine systems increases every day, and systems and PACS based on DICOM images have become common. This rise reflects the need to develop new storage systems that are more efficient and have lower computational costs. With this in mind, this article presents a study of the NetCDF data format as a basic platform for the storage of DICOM images. The case study compares an ordinary database, HDF5, and NetCDF for storing the medical images. Empirical results, using a real set of images, indicate that retrieving large images from NetCDF has a higher latency compared to the other two methods. In addition, the latency is proportional to the file size, which represents a drawback for a telemedicine system that is characterized by a large number of large image files.
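
    A small sketch of writing and reading an image stack with NetCDF is shown below, assuming the netCDF4 Python package is available; the dimension and variable names, compression settings and data are illustrative, not the schema used in the study:

```python
# Store a stack of 16-bit image slices in NetCDF and read one slice back.
import numpy as np
from netCDF4 import Dataset

slices = np.random.randint(0, 4096, size=(8, 512, 512), dtype=np.uint16)

with Dataset("images.nc", "w") as ds:
    ds.createDimension("slice", None)          # unlimited: append more later
    ds.createDimension("row", 512)
    ds.createDimension("col", 512)
    pixels = ds.createVariable("pixels", "u2", ("slice", "row", "col"),
                               zlib=True, complevel=4)
    pixels.units = "raw detector counts"       # free-form metadata attribute
    pixels[0:slices.shape[0], :, :] = slices

with Dataset("images.nc") as ds:               # read one slice back
    middle = ds.variables["pixels"][4, :, :]
    print(middle.shape, middle.dtype)
```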

  20. Forming-free resistive switching characteristics of Ag/CeO2/Pt devices with a large memory window

    NASA Astrophysics Data System (ADS)

    Zheng, Hong; Kim, Hyung Jun; Yang, Paul; Park, Jong-Sung; Kim, Dong Wook; Lee, Hyun Ho; Kang, Chi Jung; Yoon, Tae-Sik

    2017-05-01

    Ag/CeO2(∼45 nm)/Pt devices exhibited forming-free bipolar resistive switching with a large memory window (low-resistance-state (LRS)/high-resistance-state (HRS) ratio >106) at a low switching voltage (<±1 ∼ 2 V) in voltage sweep condition. Also, they retained a large memory window (>104) at a pulse operation (±5 V, 50 μs). The high oxygen ionic conductivity of the CeO2 layer as well as the migration of silver facilitated the formation of filament for the transition to LRS at a low voltage without a high voltage forming operation. Also, a certain amount of defects in the CeO2 layer was required for stable HRS with space-charge-limited-conduction, which was confirmed by comparing the devices with non-annealed and annealed CeO2 layers.

  1. A comparison of solute-transport solution techniques based on inverse modelling results

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2000-01-01

    Five common numerical techniques (finite difference, predictor-corrector, total-variation-diminishing, method-of-characteristics, and modified-method-of-characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using randomly distributed homogeneous blocks of five sand types. This experimental model provides an outstanding opportunity to compare the solution techniques because of the heterogeneous hydraulic conductivity distribution of known structure, and the availability of detailed measurements with which to compare simulated concentrations. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation, given the different methods of simulating solute transport. The results show that simulated peak concentrations, even at very fine grid spacings, varied because of different amounts of numerical dispersion. Sensitivity analysis results were robust in that they were independent of the solution technique. They revealed extreme correlation between hydraulic conductivity and porosity, and that the breakthrough curve data did not provide enough information about the dispersivities to estimate individual values for the five sands. However, estimated hydraulic conductivity values are significantly influenced by both the large possible variations in model dispersion and the amount of numerical dispersion present in the solution technique.
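
    The numerical dispersion discussed here can be illustrated with a deliberately simple one-dimensional sketch: a first-order upwind finite-difference scheme smears a sharp tracer front even though no physical dispersion is specified. This is only a schematic example, not the two-dimensional tank models compared in the paper:

```python
# 1-D upwind finite-difference advection of a sharp tracer front; the
# spreading of the front is purely numerical dispersion.
import numpy as np

nx, dx = 200, 1.0          # grid
v, dt = 0.5, 1.0           # pore velocity and time step (Courant number 0.5)
c = np.zeros(nx)
c[:20] = 1.0               # sharp tracer front near the inlet

for _ in range(200):       # explicit first-order upwind advection
    c[1:] = c[1:] - v * dt / dx * (c[1:] - c[:-1])
    c[0] = 1.0             # constant-concentration inlet boundary

# The exact solution is still a step near x = 20 + v * t = 120; the upwind
# scheme spreads it over tens of cells -- that spreading is numerical
# dispersion.
front = np.where((c < 0.95) & (c > 0.05))[0]
print("smeared front spans cells", front.min(), "to", front.max())
```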

  2. Rapid MALDI-TOF mass spectrometry strain typing during a large outbreak of Shiga-Toxigenic Escherichia coli.

    PubMed

    Christner, Martin; Trusch, Maria; Rohde, Holger; Kwiatkowski, Marcel; Schlüter, Hartmut; Wolters, Manuel; Aepfelbacher, Martin; Hentschke, Moritz

    2014-01-01

    In 2011 northern Germany experienced a large outbreak of Shiga-Toxigenic Escherichia coli O104:H4. The large number of samples sent to microbiology laboratories for epidemiological assessment highlighted the importance of fast and inexpensive typing procedures. We have therefore evaluated the applicability of a MALDI-TOF mass spectrometry based strategy for outbreak strain identification. Specific peaks in the outbreak strain's spectrum were identified by comparative analysis of archived pre-outbreak spectra that had been acquired for routine species-level identification. Proteins underlying these discriminatory peaks were identified by liquid chromatography tandem mass spectrometry and validated against publicly available databases. The resulting typing scheme was evaluated against PCR genotyping with 294 E. coli isolates from clinical samples collected during the outbreak. Comparative spectrum analysis revealed two characteristic peaks at m/z 6711 and m/z 10883. The underlying proteins were found to be of low prevalence among genome sequenced E. coli strains. Marker peak detection correctly classified 292 of 293 study isolates, including all 104 outbreak isolates. MALDI-TOF mass spectrometry allowed for reliable outbreak strain identification during a large outbreak of Shiga-Toxigenic E. coli. The applied typing strategy could probably be adapted to other typing tasks and might facilitate epidemiological surveys as part of the routine pathogen identification workflow.
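
    The marker-peak classification step can be sketched as a simple tolerance check against the two reported masses (m/z 6711 and 10883); the peak lists and the tolerance window below are hypothetical, and real spectra would first go through peak picking and calibration:

```python
# Flag an isolate as the outbreak strain if its peak list contains peaks
# near both reported marker masses.
MARKERS = (6711.0, 10883.0)
TOLERANCE = 5.0            # assumed m/z window

def is_outbreak_strain(peaks, markers=MARKERS, tol=TOLERANCE):
    """True if every marker mass is matched by some peak within +/- tol."""
    return all(any(abs(p - m) <= tol for p in peaks) for m in markers)

isolate_a = [4365.2, 6710.4, 9742.0, 10881.7]   # hypothetical peak lists
isolate_b = [4365.2, 9742.0, 10881.7]
print(is_outbreak_strain(isolate_a))  # True
print(is_outbreak_strain(isolate_b))  # False -- 6711 marker missing
```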

  3. Temporal Variations of Different Solar Activity Indices Through the Solar Cycles 21-23

    NASA Astrophysics Data System (ADS)

    Göker, Ü. D.; Singh, J.; Nutku, F.; Priyal, M.

    2017-12-01

    Here, we compare the sunspot counts and the number of sunspot groups (SGs) with variations in total solar irradiance (TSI), magnetic activity, Ca II K-flux, and faculae and plage areas. We applied a time series method to extract the data over the descending phases of solar activity cycles (SACs) 21, 22 and 23, and the ascending phases of 22 and 23. Our results suggest that there is a strong correlation between solar activity indices and the changes in small (A, B, C and H of the modified Zurich classification) and large (D, E and F) SGs. This somewhat unexpected finding suggests that plage regions decreased substantially in SAC 23 in spite of the higher number of large SGs, while the Ca II K-flux did not decrease by a large amount, was not comparable with SAC 22, and is related to the C and DEF type SGs. In addition, the increase in facular areas, which are influenced by large SGs, caused a small percentage decrease in TSI, while the decrease in plage areas triggered a larger decrease in the magnetic field flux. Our results thus reveal the potential of such a detailed comparison of the SG analysis with solar activity indices for better understanding and predicting future trends in the SACs.

  4. Supply of large woody debris in a stream channel

    USGS Publications Warehouse

    Diehl, Timothy H.; Bryan, Bradley A.

    1993-01-01

    The amount of large woody debris that potentially could be transported to bridge sites was assessed in the basin of the West Harpeth River in Tennessee in the fall of 1992. The assessment was based on inspections of study sites at 12 bridges and examination of channel reaches between bridges. It involved estimating the amount of woody material at least 1.5 meters long, stored in the channel, and not rooted in soil. Study of multiple sites allowed estimation of the amount, characteristics, and sources of debris stored in the channel, and identification of geomorphic features of the channel associated with debris production. Woody debris is plentiful in the channel network, and much of the debris could be transported by a large flood. Tree trunks with attached root masses are the dominant large debris type. Death of these trees is primarily the result of bank erosion. Bank instability seems to be the basin characteristic most useful in identifying basins with a high potential for abundant production of debris.

  5. Diet - liver disease

    MedlinePlus

    ... of toxic waste products. Increasing your intake of carbohydrates to be in proportion with the amount of ... severe liver disease include: Eat large amounts of carbohydrate foods. Carbohydrates should be the major source of ...

  6. Storage and Retrieval of Large RDF Graph Using Hadoop and MapReduce

    NASA Astrophysics Data System (ADS)

    Farhan Husain, Mohammad; Doshi, Pankil; Khan, Latifur; Thuraisingham, Bhavani

    Handling huge amounts of data scalably has been a matter of concern for a long time, and the same is true for semantic web data. Current semantic web frameworks lack this ability. In this paper, we describe a framework that we built using Hadoop to store and retrieve large numbers of RDF triples. We describe our schema for storing RDF data in the Hadoop Distributed File System, and we present our algorithms to answer a SPARQL query. We make use of Hadoop's MapReduce framework to actually answer the queries. Our results reveal that we can store huge amounts of semantic web data in Hadoop clusters built mostly from cheap commodity-class hardware and still answer queries fast enough. We conclude that ours is a scalable framework, able to handle large amounts of RDF data efficiently.
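
    The map-shuffle-reduce pattern used to organize triples can be simulated in plain Python; the sketch below groups RDF triples by predicate, roughly mirroring a predicate-based file layout, but the triple data and naming are made up and none of the actual Hadoop/HDFS machinery is shown:

```python
# Pure-Python simulation of map -> shuffle -> reduce over RDF triples.
from collections import defaultdict

triples = [
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:alice", "ex:worksAt", "ex:acme"),
    ("ex:bob",   "ex:knows", "ex:carol"),
]

def map_phase(triple):
    subject, predicate, obj = triple
    yield predicate, (subject, obj)            # key intermediate pairs by predicate

def reduce_phase(predicate, pairs):
    return predicate, sorted(pairs)            # one grouped "file" per predicate

# shuffle: group intermediate pairs by key, as Hadoop would between phases
grouped = defaultdict(list)
for t in triples:
    for key, value in map_phase(t):
        grouped[key].append(value)

store = dict(reduce_phase(k, v) for k, v in grouped.items())
# A simple SPARQL-like pattern "?x ex:knows ?y" becomes a lookup on one group.
print(store["ex:knows"])   # [('ex:alice', 'ex:bob'), ('ex:bob', 'ex:carol')]
```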

  7. Comparing deep neural network and other machine learning algorithms for stroke prediction in a large-scale population-based electronic medical claims database.

    PubMed

    Chen-Ying Hung; Wei-Chen Chen; Po-Tsun Lai; Ching-Heng Lin; Chi-Chun Lee

    2017-07-01

    Electronic medical claims (EMCs) can be used to accurately predict the occurrence of a variety of diseases, which can contribute to precise medical interventions. While there is growing interest in the application of machine learning (ML) techniques to clinical problems, the use of deep learning in healthcare has gained attention only recently. Deep learning, such as the deep neural network (DNN), has achieved impressive results in the areas of speech recognition, computer vision, and natural language processing in recent years. However, deep learning is often difficult to comprehend due to the complexities of its framework. Furthermore, this method has not yet been demonstrated to achieve better performance than other conventional ML algorithms in disease prediction tasks using EMCs. In this study, we utilize a large population-based EMC database of around 800,000 patients to compare DNN with three other ML approaches for predicting 5-year stroke occurrence. The results show that DNN and the gradient boosting decision tree (GBDT) achieve similarly high prediction accuracies that are better than those of the logistic regression (LR) and support vector machine (SVM) approaches. Meanwhile, DNN achieves optimal results using smaller amounts of patient data than the GBDT method.
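
    A hedged sketch of such a model comparison, using scikit-learn on synthetic data rather than the study's claims database or its deeper network, could look like the following; the feature counts, class balance and model settings are assumptions:

```python
# Compare LR, SVM, GBDT and a small neural network by ROC AUC on synthetic,
# imbalanced data (a rough stand-in for a rare-outcome prediction task).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=5000, n_features=40, n_informative=10,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LR":   LogisticRegression(max_iter=1000),
    "SVM":  SVC(probability=True),
    "GBDT": GradientBoostingClassifier(),
    "DNN":  MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:4s} AUC = {auc:.3f}")
```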

  8. The use of modeling and suspended sediment concentration measurements for quantifying net suspended sediment transport through a large tidally dominated inlet

    USGS Publications Warehouse

    Erikson, Li H.; Wright, Scott A.; Elias, Edwin; Hanes, Daniel M.; Schoellhamer, David H.; Largier, John; Barnard, P.L.; Jaffee, B.E.; Schoellhamer, D.H.

    2013-01-01

    Sediment exchange at large energetic inlets is often difficult to quantify due to complex flows, massive amounts of water and sediment exchange, and environmental conditions limiting long-term data collection. In an effort to better quantify such exchange, this study investigated the use of suspended sediment concentrations (SSC) measured at an offsite location as a surrogate for sediment exchange at the tidally dominated Golden Gate inlet in San Francisco, CA. A numerical model was calibrated and validated against water and suspended sediment flux measured during a spring–neap tide cycle across the Golden Gate. The model was then run for five months and net exchange was calculated on a tidal time-scale and compared to SSC measurements at the Alcatraz monitoring site located in Central San Francisco Bay ~ 5 km from the Golden Gate. Numerically modeled tide averaged flux across the Golden Gate compared well (r2 = 0.86, p-value

  9. Impact of Healthy Vending Machine Options in a Large Community Health Organization.

    PubMed

    Grivois-Shah, Ravi; Gonzalez, Juan R; Khandekar, Shashank P; Howerter, Amy L; O'Connor, Patrick A; Edwards, Barbara A

    2017-01-01

    To determine whether increasing the proportion of healthier options in vending machines decreases the amount of calories, fat, sugar, and sodium vended, while maintaining total sales revenue. This study evaluated the impact of shifting vending machines toward more nutritious options throughout the Banner Health organization by comparing vended items' sales and nutrition information over 6 months with the same 6 months of the previous year. Twenty-three locations including corporate and patient-care centers. Changing vending machine composition toward more nutritious options. Comparisons of monthly aggregates of sales, units vended, and calories, fat, sodium, and sugar vended by site, using a pre-post analysis with paired t tests comparing the 6 months before implementation to the equivalent 6 months postimplementation. Significant average monthly decreases were seen for calories (16.7%, P = .002), fat (27.4%, P ≤ .0001), sodium (25.9%, P ≤ .0001), and sugar (11.8%, P = .045) vended from 2014 to 2015. Revenue and units vended did not change from 2014 to 2015 (P = .58 and P = .45, respectively). Increasing the proportion of healthier options in vending machines from 20% to 80% significantly lowered the amount of calories, sodium, fat, and sugar vended, while not reducing units vended or having a negative financial impact.
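
    The pre-post design described above reduces to a paired t test on monthly aggregates per site. A tiny scipy sketch of that comparison, using invented monthly calorie totals rather than the study's data:

      import numpy as np
      from scipy import stats

      # Hypothetical monthly calories vended at one site (6 months pre vs. post).
      pre = np.array([52000, 48000, 50500, 47000, 49800, 51200])
      post = np.array([43500, 40900, 42800, 39500, 41600, 42200])

      t_stat, p_value = stats.ttest_rel(pre, post)
      pct_change = 100 * (post.mean() - pre.mean()) / pre.mean()
      print(f"t = {t_stat:.2f}, p = {p_value:.4f}, mean change = {pct_change:.1f}%")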

  10. Increasing glucose load while maintaining normoglycemia does not evoke neuronal damage in prolonged critically ill rabbits.

    PubMed

    Sonneville, Romain; den Hertog, Heleen M; Derde, Sarah; Güiza, Fabian; Derese, Inge; Van den Berghe, Greet; Vanhorebeek, Ilse

    2013-12-01

    Preventing severe hyperglycemia with insulin reduced the neuropathological alterations in the frontal cortex during critical illness. We investigated the impact of increasing glucose load under normoglycemia on neurons and glial cells. Hyperinflammatory critically ill rabbits were randomized to fasting or combined parenteral nutrition containing progressively increasing amounts of glucose (low, intermediate, high) within the physiological range but with a similar amount of amino acids and lipids. In all groups, normoglycemia was maintained with insulin. On day 7, we studied the neuropathological alterations in frontal cortex neurons, astrocytes and microglia, and MnSOD expression as a marker of oxidative stress. The percentage of damaged neurons was comparable among all critically ill and healthy rabbits. Critical illness induced an overall 1.8-fold increase in astrocyte density and activation status, largely irrespective of the nutritional intake. The percentage of microglia activation in critically ill rabbits was comparable with that in healthy rabbits, irrespective of glucose load. Likewise, MnSOD expression was comparable in critically ill and healthy rabbits without any clear impact of the nutritional interventions. During prolonged critical illness, increasing intravenous glucose infusion while strictly maintaining normoglycemia appeared safe for neuronal integrity and did not substantially affect glial cells in the frontal cortex. Copyright © 2013 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.

  11. Microstructures of superhydrophobic plant leaves - inspiration for efficient oil spill cleanup materials.

    PubMed

    Zeiger, Claudia; Rodrigues da Silva, Isabelle C; Mail, Matthias; Kavalenka, Maryna N; Barthlott, Wilhelm; Hölscher, Hendrik

    2016-08-16

    The cleanup of accidental oil spills in water is an enormous challenge; conventional oil sorbents absorb large amounts of water in addition to oil, and other cleanup methods can cause secondary pollution. In contrast, fresh leaves of the aquatic ferns Salvinia are superhydrophobic and superoleophilic, and can selectively absorb oil while repelling water. These selective wetting properties are optimal for natural oil absorbent applications and bioinspired oil sorbent materials. In this paper we quantify the oil absorption capacity of four Salvinia species with different surface structures, water lettuce (Pistia stratiotes) and Lotus leaves (Nelumbo nucifera), and compare their absorption capacity to artificial oil sorbents. Interestingly, the oil absorption capacities of Salvinia molesta and Pistia stratiotes leaves are comparable to artificial oil sorbents. Therefore, these pantropical invasive plants, often considered pests, qualify as environmentally friendly materials for oil spill cleanup. Furthermore, we investigate the influence of oil density and viscosity on the oil absorption, and examine how the presence and morphology of trichomes affect the amount of oil absorbed by their surfaces. Specifically, the influence of hair length and shape is analyzed by comparing different hair types ranging from single trichomes of Salvinia cucullata to complex eggbeater-shaped trichomes of Salvinia molesta to establish a basis for improving artificial bioinspired oil absorbents.

  12. Spontaneous, generalized lipidosis in captive greater horseshoe bats (Rhinolophus ferrumequinum).

    PubMed

    Gozalo, Alfonso S; Schwiebert, Rebecca S; Metzner, Walter; Lawson, Gregory W

    2005-11-01

    During a routine 6-month quarantine period, 3 of 34 greater horseshoe bats (Rhinolophus ferrumequinum) captured in mainland China and transported to the United States for use in echolocation studies were found dead with no prior history of illness. All animals were in good body condition at the time of death. At necropsy, a large amount of white fat was found within the subcutis, especially in the sacrolumbar region. The liver, kidneys, and heart were diffusely tan in color. Microscopic examination revealed that hepatocytes throughout the liver were filled with lipid, and in some areas, lipid granulomas were present. Renal lesions included moderate amounts of lipid in the cortical tubular epithelium and large amounts of protein and lipid within Bowman's capsules in the glomeruli. In addition, one bat had large lipid vacuoles diffusely distributed throughout the myocardium. The exact pathologic mechanism inducing the hepatic, renal, and cardiac lipidosis is unknown. The horseshoe bats were captured during hibernation and immediately transported to the United States. It is possible that the large amount of stored fat, coupled with changes in photoperiod, lack of exercise, and/or the stress of captivity, might have contributed to altering the normal metabolic processes, leading to anorexia and consequently lipidosis in these animals.

  13. Method for large-scale fabrication of atomic-scale structures on material surfaces using surface vacancies

    DOEpatents

    Lim, Chong Wee; Ohmori, Kenji; Petrov, Ivan Georgiev; Greene, Joseph E.

    2004-07-13

    A method for forming atomic-scale structures on a surface of a substrate on a large scale includes creating a predetermined amount of surface vacancies on the surface of the substrate by removing an amount of atoms on the surface of the material corresponding to the predetermined amount of the surface vacancies. Once the surface vacancies have been created, atoms of a desired structure material are deposited on the surface of the substrate to enable the surface vacancies and the atoms of the structure material to interact. The interaction causes the atoms of the structure material to form the atomic-scale structures.

  14. Contrails of Small and Very Large Optical Depth

    NASA Technical Reports Server (NTRS)

    Atlas, David; Wang, Zhien

    2010-01-01

    This work deals with two kinds of contrails. The first comprises a large number of optically thin contrails near the tropopause. They are mapped geographically using a lidar to obtain their height and a camera to obtain azimuth and elevation. These high-resolution maps provide the local contrail geometry and the amount of optically clear atmosphere. The second kind is a single trail of unprecedentedly large optical thickness that occurs at a lower height. The latter was observed fortuitously when an aircraft moving along the wind direction passed over the lidar, thus providing measurements for more than 3 h and an equivalent distance of 620 km. It was also observed by Geostationary Operational Environmental Satellite (GOES) sensors. The lidar measured an optical depth of 2.3. The corresponding extinction coefficient of 0.023 per kilometer and ice water content of 0.063 grams per cubic meter are close to the maximum values found for midlatitude cirrus. The associated large radar reflectivity compares to that measured by ultrasensitive radar, thus providing support for the reality of the large optical depth.

  15. Effects of consumption of choline and lecithin on neurological and cardiovascular systems.

    PubMed

    Wood, J L; Allison, R G

    1982-12-01

    This report concerns possible adverse health effects and benefits that might result from consumption of large amounts of choline, lecithin, or phosphatidylcholine. Indications from preliminary investigations that administration of choline or lecithin might alleviate some neurological disturbances, prevent hypercholesteremia and atherosclerosis, and restore memory and cognition have resulted in much research and public interest. Symptoms of tardive dyskinesia and Alzheimer's disease have been ameliorated in some patients, and varied responses have been observed in the treatment of Gilles de la Tourette's disease, Friedreich's ataxia, levodopa-induced dyskinesia, mania, Huntington's disease, and myasthenic syndrome. Further clinical trials, especially in conjunction with cholinergic drugs, are considered worthwhile but will require sufficient amounts of pure phosphatidylcholine. The public has access to large amounts of commercial lecithin. Because high intakes of lecithin or choline produce acute gastrointestinal distress, sweating, salivation, and anorexia, it is improbable that individuals will incur lasting health hazards from self-administration of either compound. Development of depression or supersensitivity of dopamine receptors and disturbance of the cholinergic-dopaminergic-serotonergic balance are concerns with prolonged, repeated intakes of large amounts of lecithin.

  16. A convolutional neural network-based screening tool for X-ray serial crystallography

    PubMed Central

    Ke, Tsung-Wei; Brewster, Aaron S.; Yu, Stella X.; Ushizima, Daniela; Yang, Chao; Sauter, Nicholas K.

    2018-01-01

    A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source. Based on a data-driven deep learning approach, the proposed tool executes a convolutional neural network to detect Bragg spots. The automatic image processing algorithms described can enable the classification of large data sets, acquired under realistic conditions consisting of noisy data with experimental artifacts. Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization. PMID:29714177
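
    As a rough illustration of such a screening network, here is a minimal PyTorch sketch of a binary "contains Bragg spots / does not" classifier. The layer sizes, the 128x128 single-channel input and the random smoke-test data are arbitrary assumptions, not the published architecture or training pipeline.

      import torch
      import torch.nn as nn

      class BraggScreen(nn.Module):
          """Tiny CNN: does a diffraction image contain Bragg spots?"""
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.classifier = nn.Linear(32 * 32 * 32, 2)  # assumes 128x128 input

          def forward(self, x):
              return self.classifier(self.features(x).flatten(1))

      # One training step on a random batch, as a smoke test.
      model = BraggScreen()
      images = torch.randn(8, 1, 128, 128)   # stand-in diffraction images
      labels = torch.randint(0, 2, (8,))     # 1 = image contains Bragg spots
      loss = nn.CrossEntropyLoss()(model(images), labels)
      loss.backward()
      print(float(loss))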

  17. Implementation of NFC technology for industrial applications: case flexible production

    NASA Astrophysics Data System (ADS)

    Sallinen, Mikko; Strömmer, Esko; Ylisaukko-oja, Arto

    2007-09-01

    Near Field Communication (NFC) technology enables flexible short-range communication. It has a large number of envisaged applications in the consumer, welfare and industrial sectors. Compared with other short-range communication technologies such as Bluetooth or Wibree, it provides advantages that we introduce in this paper. We also present an example of applying NFC technology to an industrial application in which simple tasks can be automated and the industrial assembly process can be improved radically by replacing manual paperwork and increasing the traceability of products during production.

  18. A convolutional neural network-based screening tool for X-ray serial crystallography.

    PubMed

    Ke, Tsung Wei; Brewster, Aaron S; Yu, Stella X; Ushizima, Daniela; Yang, Chao; Sauter, Nicholas K

    2018-05-01

    A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source. Based on a data-driven deep learning approach, the proposed tool executes a convolutional neural network to detect Bragg spots. The automatic image processing algorithms described can enable the classification of large data sets, acquired under realistic conditions consisting of noisy data with experimental artifacts. Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization. Open access.

  19. A convolutional neural network-based screening tool for X-ray serial crystallography

    DOE PAGES

    Ke, Tsung-Wei; Brewster, Aaron S.; Yu, Stella X.; ...

    2018-04-24

    A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source. Based on a data-driven deep learning approach, the proposed tool executes a convolutional neural network to detect Bragg spots. The automatic image processing algorithms described can enable the classification of large data sets, acquired under realistic conditions consisting of noisy data with experimental artifacts. Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization.

  20. A convolutional neural network-based screening tool for X-ray serial crystallography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ke, Tsung-Wei; Brewster, Aaron S.; Yu, Stella X.

    A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source. Based on a data-driven deep learning approach, the proposed tool executes a convolutional neural network to detect Bragg spots. The automatic image processing algorithms described can enable the classification of large data sets, acquired under realistic conditions consisting of noisy data with experimental artifacts. Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization.

  1. Land use and pollution patterns on the Great Lakes. [eastern wisconsin

    NASA Technical Reports Server (NTRS)

    Haugen, R. K. (Principal Investigator); Mckim, H. L.; Marlar, T. L.

    1975-01-01

    The author has identified the following significant results. The final mapping of the large watersheds of the Manitowoc and the Oconto was done using the 25% sampling approach. Comparisons were made with earlier strip mapping efforts of the Oconto and Manitowoc watersheds. Regional differences were noted. Strip mapping of the Oconto resulted in overestimation of the amount of agricultural land compared to the random sampling method. For the Manitowoc, the strip mapping approach produced a slight underestimate of agricultural land, and an overestimate of the forest category.

  2. Subsurface damage in precision ground ULE(R) and Zerodur(R) surfaces.

    PubMed

    Tonnellier, X; Morantz, P; Shore, P; Baldwin, A; Evans, R; Walker, D D

    2007-09-17

    The total process cycle time for large ULE(R) and Zerodur(R) optics can be improved using a precise and rapid grinding process, with low levels of surface waviness and subsurface damage. In this paper, the amounts of defects beneath ULE(R) and Zerodur(R) surfaces ground using a selected grinding mode were compared. The grinding response was characterised by measuring: surface roughness, surface profile and subsurface damage. The observed subsurface damage can be separated into two distinct depth zones, which are: 'process' and 'machine dynamics' related.

  3. High-speed cell recognition algorithm for ultrafast flow cytometer imaging system.

    PubMed

    Zhao, Wanyue; Wang, Chao; Chen, Hongwei; Chen, Minghua; Yang, Sigang

    2018-04-01

    An optical time-stretch flow imaging system enables high-throughput examination of cells/particles with unprecedented high speed and resolution. A significant amount of raw image data is produced. A high-speed cell recognition algorithm is therefore in high demand to analyze large amounts of data efficiently. A high-speed cell recognition algorithm consisting of two-stage cascaded detection and Gaussian mixture model (GMM) classification is proposed. The first stage of detection extracts cell regions. The second stage integrates distance transform and the watershed algorithm to separate clustered cells. Finally, the cells detected are classified by GMM. We compared the performance of our algorithm with support vector machine. Results show that our algorithm increases the running speed by over 150% without sacrificing the recognition accuracy. This algorithm provides a promising solution for high-throughput and automated cell imaging and classification in the ultrafast flow cytometer imaging platform. © 2018 Society of Photo-Optical Instrumentation Engineers (SPIE).
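
    A condensed sketch of the detect-separate-classify pipeline outlined above, written with scikit-image, SciPy and scikit-learn. The threshold choice, the shape/intensity features and the synthetic test image are illustrative assumptions, not the authors' implementation.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.filters import threshold_otsu
      from skimage.feature import peak_local_max
      from skimage.measure import label, regionprops
      from skimage.segmentation import watershed
      from sklearn.mixture import GaussianMixture

      def recognize_cells(image):
          # Stage 1: extract candidate cell regions by global thresholding.
          mask = image > threshold_otsu(image)
          # Stage 2: split touching cells with distance transform + watershed.
          distance = ndi.distance_transform_edt(mask)
          peaks = peak_local_max(distance, labels=label(mask), min_distance=5)
          markers = np.zeros(mask.shape, dtype=int)
          markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
          segmented = watershed(-distance, markers, mask=mask)
          # Classification: simple shape/intensity features fed to a GMM.
          feats = np.array([[r.area, r.eccentricity, r.mean_intensity]
                            for r in regionprops(segmented, intensity_image=image)])
          gmm = GaussianMixture(n_components=2, random_state=0).fit(feats)
          return segmented, gmm.predict(feats)

      # Smoke test on a synthetic image with three bright blobs.
      img = np.zeros((64, 64))
      img[10:20, 10:20] = 1.0
      img[10:20, 45:55] = 1.0
      img[40:55, 40:55] = 1.0
      regions, classes = recognize_cells(img)
      print(regions.max(), classes)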

  4. High-speed cell recognition algorithm for ultrafast flow cytometer imaging system

    NASA Astrophysics Data System (ADS)

    Zhao, Wanyue; Wang, Chao; Chen, Hongwei; Chen, Minghua; Yang, Sigang

    2018-04-01

    An optical time-stretch flow imaging system enables high-throughput examination of cells/particles with unprecedented high speed and resolution. A significant amount of raw image data is produced. A high-speed cell recognition algorithm is therefore in high demand to analyze large amounts of data efficiently. A high-speed cell recognition algorithm consisting of two-stage cascaded detection and Gaussian mixture model (GMM) classification is proposed. The first stage of detection extracts cell regions. The second stage integrates distance transform and the watershed algorithm to separate clustered cells. Finally, the cells detected are classified by GMM. We compared the performance of our algorithm with support vector machine. Results show that our algorithm increases the running speed by over 150% without sacrificing the recognition accuracy. This algorithm provides a promising solution for high-throughput and automated cell imaging and classification in the ultrafast flow cytometer imaging platform.

  5. State of gas exchange in recumbent and orthostatic positions and under physical load in healthy persons of varying age, sex and body build

    NASA Technical Reports Server (NTRS)

    Glezer, G. A.; Charyyev, M.; Zilbert, N. L.

    1980-01-01

    The age effect on gas exchange was studied in the recumbent and orthostatic positions and under physical load. In the case of the older age group and for normal as compared with hypersthenic persons, oxygen consumption during rest and during moderate physical overload diminishes. When the vertical position is assumed, oxygen consumption in persons of various age groups is distinctly increased, particularly in the elderly group. There is a reduction in the amount of oxygen consumption, oxygen pulse, recovery coefficient, and work efficiency under moderate overload. In persons over 50, physical labor induces a large oxygen requirement and a sharp rise in the level of lactic acid and the blood's lactate/pyruvate ratio. No distinct difference was noted in the amount of oxygen consumed during rest and during physical overload in men and women of the same physical development and age.

  6. The association between seizures and deposition of collagen in the brain in porcine Taenia solium neurocysticercosis.

    PubMed

    Christensen, Nina M; Trevisan, Chiara; Leifsson, Páll S; Johansen, Maria V

    2016-09-15

    Neurocysticercosis caused by infection with Taenia solium is a significant cause of epilepsy and seizures in humans. The aim of this study was to assess the association between seizures and the deposition of collagen in brain tissue in pigs with T. solium neurocysticercosis. In total, 78 brain tissue sections from seven pigs were examined histopathologically: two pigs with epileptic seizures and T. solium cysts, four pigs without seizures but with cysts, and one non-infected control pig. Pigs with epileptic seizures had a larger amount of collagen in their brain tissue, manifesting as large fibrotic scars and a moderate amount of collagen deposited around cysts, compared to pigs without seizures and the negative control pig. Our results indicate that collagen is likely to play a considerable part in the pathogenesis of seizures in T. solium neurocysticercosis. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  7. The preparation and structure of salty ice VII under pressure

    NASA Astrophysics Data System (ADS)

    Klotz, Stefan; Bove, Livia E.; Strässle, Thierry; Hansen, Thomas C.; Saitta, Antonino M.

    2009-05-01

    It is widely accepted that ice, no matter what phase, is unable to incorporate large amounts of salt into its structure. This conclusion is based on the observation that on freezing of salt water, ice expels the salt almost entirely as brine. Here, we show that this behaviour is not an intrinsic physico-chemical property of ice phases. We demonstrate by neutron diffraction that substantial amounts of dissolved LiCl can be built homogeneously into the ice VII structure if it is produced by recrystallization of its glassy (amorphous) state under pressure. Such 'alloyed' ice VII has significantly different structural properties compared with pure ice VII, such as an 8% larger unit cell volume, 5 times larger displacement factors, an absence of a transition to an ordered ice VIII structure and plasticity. Our study suggests that there could be a whole new class of 'salty' high-pressure ice forms.

  8. Smelting Magnesium Metal using a Microwave Pidgeon Method

    PubMed Central

    Wada, Yuji; Fujii, Satoshi; Suzuki, Eiichi; Maitani, Masato M.; Tsubaki, Shuntaro; Chonan, Satoshi; Fukui, Miho; Inazu, Naomi

    2017-01-01

    Magnesium (Mg) is a lightweight metal with applications in transportation and sustainable battery technologies, but its current production through ore reduction using the conventional Pidgeon process emits large amounts of CO2 and particulate matter (PM2.5). In this work, a novel Pidgeon process driven by microwaves has been developed to produce Mg metal with less energy consumption and no direct CO2 emission. An antenna structure consisting of dolomite as the Mg source and a ferrosilicon antenna as the reducing material was used to confine microwave energy emitted from a magnetron installed in a microwave oven to produce a practical amount of pure Mg metal. This microwave Pidgeon process with an antenna configuration made it possible to produce Mg with an energy consumption of 58.6 GJ/t, corresponding to a 68.6% reduction when compared to the conventional method. PMID:28401910

  9. The preparation and structure of salty ice VII under pressure.

    PubMed

    Klotz, Stefan; Bove, Livia E; Strässle, Thierry; Hansen, Thomas C; Saitta, Antonino M

    2009-05-01

    It is widely accepted that ice, no matter what phase, is unable to incorporate large amounts of salt into its structure. This conclusion is based on the observation that on freezing of salt water, ice expels the salt almost entirely as brine. Here, we show that this behaviour is not an intrinsic physico-chemical property of ice phases. We demonstrate by neutron diffraction that substantial amounts of dissolved LiCl can be built homogeneously into the ice VII structure if it is produced by recrystallization of its glassy (amorphous) state under pressure. Such 'alloyed' ice VII has significantly different structural properties compared with pure ice VII, such as an 8% larger unit cell volume, 5 times larger displacement factors, an absence of a transition to an ordered ice VIII structure and plasticity. Our study suggests that there could be a whole new class of 'salty' high-pressure ice forms.

  10. Contribution of Organically Grown Crops to Human Health

    PubMed Central

    Johansson, Eva; Hussain, Abrar; Kuktaite, Ramune; Andersson, Staffan C.; Olsson, Marie E.

    2014-01-01

    An increasing interest in organic agriculture for food production is seen throughout the world and one key reason for this interest is the assumption that organic food consumption is beneficial to public health. The present paper focuses on the background of organic agriculture, important public health related compounds from crop food and variations in the amount of health related compounds in crops. In addition, influence of organic farming on health related compounds, on pesticide residues and heavy metals in crops, and relations between organic food and health biomarkers as well as in vitro studies are also the focus of the present paper. Nutritionally beneficial compounds of highest relevance for public health were micronutrients, especially Fe and Zn, and bioactive compounds such as carotenoids (including pro-vitamin A compounds), tocopherols (including vitamin E) and phenolic compounds. Extremely large variations in the contents of these compounds were seen, depending on genotype, climate, environment, farming conditions, harvest time, and part of the crop. Highest amounts seen were related to the choice of genotype and were also increased by genetic modification of the crop. Organic cultivation did not influence the content of most of the nutritional beneficial compounds, except the phenolic compounds that were increased with the amounts of pathogens. However, higher amounts of pesticide residues and in many cases also of heavy metals were seen in the conventionally produced crops compared to the organic ones. Animal studies as well as in vitro studies showed a clear indication of a beneficial effect of organic food/extracts as compared to conventional ones. Thus, consumption of organic food seems to be positive from a public health point of view, although the reasons are unclear, and synergistic effects between various constituents within the food are likely. PMID:24717360

  11. Characteristics of Alcian-blue Dye Adsorption of Natural Biofilm Matrix

    NASA Astrophysics Data System (ADS)

    Kurniawan, A.; Yamamoto, T.; Sukandar; Guntur

    2018-01-01

    In this study, natural biofilm matrices formed on stones were used for the adsorption of Alcian blue dye. Alcian blue is a polyvalent basic dye that is widely used for purposes ranging from laboratory staining to industrial dyeing. Adsorption of the dye onto the biofilm matrix was examined under different experimental conditions, including adsorption isotherm and adsorption kinetics measurements. The electric charge properties of the biofilm matrix and their changes upon Alcian blue adsorption were also investigated. Moreover, the results of Alcian blue adsorption to the biofilm were compared to those for acidic and neutral resins. The adsorption kinetics showed that Alcian blue reached its maximum adsorption amount within 60 minutes. The adsorbed amount of Alcian blue on the biofilm increased monotonically, and the maximum adsorbed amount was greater than for the resins. In contrast, Alcian blue did not attach to the uncharged neutral resin. It seems that Alcian blue attached to the acidic resins due to electrostatic attraction, and the same seems to be the case for adsorption of Alcian blue to the biofilm. The adsorption of Alcian blue to the biofilm and acidic resins fitted a Langmuir-type isotherm, indicating that the binding occurred in a monolayer-like form. The maximum adsorption amount of Alcian blue on the biofilm (0.24 mmol/dry-g) was greater than that of the acidic resin (0.025 mmol/dry-g), indicating that the biofilm has many more sites for Alcian blue attachment than the acidic resins. According to the results of this study, the biofilm matrix can be a good adsorbent for dyes such as Alcian blue or other dyes that cause environmental hazards.
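
    The Langmuir fit mentioned above amounts to estimating q_max and the affinity constant K in q = q_max*K*C/(1 + K*C) from equilibrium data. A short scipy sketch with made-up data points (not the study's measurements):

      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir(C, q_max, K):
          """Langmuir isotherm: adsorbed amount q at equilibrium concentration C."""
          return q_max * K * C / (1.0 + K * C)

      # Hypothetical equilibrium concentrations (mmol/L) and adsorbed amounts (mmol/dry-g).
      C_eq = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])
      q_ads = np.array([0.06, 0.10, 0.15, 0.20, 0.22, 0.23])

      (q_max, K), _ = curve_fit(langmuir, C_eq, q_ads, p0=[0.25, 5.0])
      print(f"q_max = {q_max:.3f} mmol/dry-g, K = {K:.2f} L/mmol")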

  12. Unprecedented Arctic Ozone Loss in 2011

    NASA Image and Video Library

    2011-10-02

    In mid-March 2011, NASA's Aura spacecraft observed ozone in Earth's stratosphere: low ozone amounts are shown in purple and grey, and large amounts of chlorine monoxide are shown in dark blue.

  13. Detection of tiny amounts of fissile materials in large-sized containers with radioactive waste

    NASA Astrophysics Data System (ADS)

    Batyaev, V. F.; Skliarov, S. V.

    2018-01-01

    The paper is devoted to non-destructive control of tiny amounts of fissile materials in large-sized containers filled with radioactive waste (RAW). The aim of this work is to model an active neutron interrogation facility for detection of fissile materials inside NZK-type containers with RAW and to determine the minimal detectable mass of U-235 as a function of various parameters: matrix type, nonuniformity of container filling, neutron generator parameters (flux, pulse frequency, pulse duration), and measurement time. As a result, the dependence of the minimal detectable mass on the location of fissile materials inside the container is shown. Nonuniformity of the thermal neutron flux inside a container is the main reason for the spatial heterogeneity of the minimal detectable mass inside a large-sized container. Our experiments with tiny amounts of uranium-235 (<1 g) confirm the detection of fissile materials in NZK containers by using the active neutron interrogation technique.

  14. Sound Levels in East Texas Schools.

    ERIC Educational Resources Information Center

    Turner, Aaron Lynn

    A survey of sound levels was taken in several Texas schools to determine the amount of noise and sound present by size of class, type of activity, location of building, and the presence of air conditioning and large amounts of glass. The data indicate that class size and relative amounts of glass have no significant bearing on the production of…

  15. What Determines the Amount Students Borrow? Revisiting the Crisis-Convenience Debate

    ERIC Educational Resources Information Center

    Hart, Natala K.; Mustafa, Shoumi

    2008-01-01

    Recent studies have questioned the wisdom in blaming college costs for the escalation of student loans. It would appear that less affluent students borrow large amounts because inexpensive subsidized loans are available. This study attempted to verify the claim, estimating a model of the amount of loan received by students as a function of net…

  16. Flux Calculation Using CARIBIC DOAS Aircraft Measurements: SO2 Emission of Norilsk

    NASA Technical Reports Server (NTRS)

    Walter, D.; Heue, K.-P.; Rauthe-Schoech, A.; Brenninkmeijer, C. A. M.; Lamsal, L. N.; Krotkov, N. A.; Platt, U.

    2012-01-01

    Based on a case-study of the nickel smelter in Norilsk (Siberia), the retrieval of trace gas fluxes using airborne remote sensing is discussed. A DOAS system onboard an Airbus 340 detected large amounts of SO2 and NO2 near Norilsk during a regular passenger flight within the CARIBIC project. The remote sensing data were combined with ECMWF wind data to estimate the SO2 output of the Norilsk industrial complex to be around 1 Mt per year, which is in agreement with independent estimates. This value is compared to results using data from satellite remote sensing (GOME, OMI). The validity of the assumptions underlying our estimate is discussed, including the adaptation of this method to other gases and sources like the NO2 emissions of large industries or cities.
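
    The flux estimate described above is, in essence, the wind speed multiplied by the SO2 column density integrated across the plume transect. A back-of-the-envelope Python sketch with invented column densities and wind speed (not the CARIBIC retrievals):

      import numpy as np

      # Hypothetical SO2 vertical column densities along a 60 km transect (molecules/cm^2)
      # and the distance coordinate along the flight track (m).
      columns = np.array([0.1, 0.4, 0.9, 1.3, 0.9, 0.5, 0.1]) * 1e17
      distance = np.linspace(0.0, 60e3, columns.size)

      wind_speed = 8.0      # m/s, component perpendicular to the transect
      M_SO2 = 64.07e-3      # kg/mol
      N_A = 6.022e23        # molecules/mol

      # Trapezoidal integration of the column density (converted to m^-2) along the track.
      cols_m2 = columns * 1e4
      line_integral = np.sum(0.5 * (cols_m2[:-1] + cols_m2[1:]) * np.diff(distance))

      flux_kg_s = line_integral * wind_speed / N_A * M_SO2
      print(f"Estimated SO2 emission: {flux_kg_s * 3.15e7 / 1e9:.1f} Mt/yr")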

  17. Cellular interface morphologies in directional solidification. III - The effects of heat transfer and solid diffusivity

    NASA Technical Reports Server (NTRS)

    Ungar, Lyle H.; Bennett, Mark J.; Brown, Robert A.

    1985-01-01

    The shape and stability of two-dimensional finite-amplitude cellular interfaces arising during directional solidification are compared for several solidification models that account differently for latent heat released at the interface, unequal thermal conductivities of melt and solid, and solute diffusivity in the solid. Finite-element analysis and computer-implemented perturbation methods are used to analyze the families of steadily growing cellular forms that evolve from the planar state. In all models a secondary bifurcation between different families of finite-amplitude cells exists that halves the spatial wavelength of the stable interface. The quantitative location of this transition is very dependent on the details of the model. Large amounts of solute diffusion in the solid retard the growth of large-amplitude cells.

  18. New dielectric elastomers with improved properties for energy harvesting and actuation

    NASA Astrophysics Data System (ADS)

    Stiubianu, George; Bele, Adrian; Tugui, Codrin; Musteata, Valentina

    2015-02-01

    New materials with a large dielectric constant were obtained by combining siloxane and chemically modified lignin. The modified lignin does not act as a stiffening filler for the siloxane but as a bulk filler, preserving the softness and the low Young's modulus typical of silicones. The measured dielectric constants compare favorably with those of previously tested dielectric elastomers based on siloxane rubber or acrylic rubber loaded with ceramic nanoparticles. The new materials use well-known silicone chemistry and lignin, which is available worldwide in large amounts as a by-product of the pulp and paper industry, making their manufacture affordable. The prepared dielectric elastomers were tested for possible applications in wave, wind and kinetic body motion energy harvesting.

  19. The effect of the 2011 flood on agricultural chemical and sediment movement in the lower Mississippi River Basin

    NASA Astrophysics Data System (ADS)

    Welch, H.; Coupe, R.; Aulenbach, B.

    2012-04-01

    Extreme hydrologic events, such as floods, can overwhelm a surface water system's ability to process chemicals and can move large amounts of material downstream to larger surface water bodies. The Mississippi River is the 3rd largest river in the world behind the Amazon in South America and the Congo in Africa. The Mississippi-Atchafalaya River basin produces much of the country's corn, soybeans, rice, cotton, pigs, and chickens. This is large-scale, modern-day agriculture with large inputs of nutrients to increase yields and large applied amounts of crop protection chemicals, such as pesticides. The basin drains approximately 41% of the conterminous United States and is the largest contributor of nutrients to the Gulf of Mexico each spring. The amount of water and nutrients discharged from the Mississippi River has been related to the size of the low dissolved oxygen area that forms off the coast of Louisiana and Texas each summer. From March through April 2011, the upper Mississippi River basin received more than five times its normal precipitation, which, combined with snowmelt from the Missouri River basin, created a historic flood event that lasted from April through July. The U.S. Geological Survey, as part of the National Stream Quality Accounting Network (NASQAN), collected samples from six sites located in the lower Mississippi-Atchafalaya River basin, as well as samples from the three flow-diversion structures or floodways: the Birds Point-New Madrid in Missouri and the Morganza and Bonnet Carré in Louisiana, from April through July. Samples were analyzed for nutrients, pesticides, suspended sediments, and particle size; results were used to determine the water quality of the river during the 2011 flood. Monthly loads for nitrate, phosphorus, pesticides (atrazine, glyphosate, fluometuron, and metolachlor), and sediment were calculated to quantify the movement of agricultural chemicals and sediment into the Gulf of Mexico. Nutrient loads were compared to historic loads to assess the effect of the flood on the zone of hypoxia that formed in the Gulf of Mexico during the spring of 2011.
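
    For context on what a monthly load is, the basic bookkeeping is load = concentration x discharge summed over time (large-river load estimates are usually computed with regression-based methods rather than this simple sum). A minimal sketch with invented daily values:

      import numpy as np

      # Hypothetical daily means for one month at a single station.
      discharge_m3s = np.full(30, 45000.0)   # streamflow, m^3/s
      nitrate_mgL = np.full(30, 1.8)         # nitrate concentration, mg/L (= g/m^3)

      seconds_per_day = 86400.0
      daily_load_t = nitrate_mgL * discharge_m3s * seconds_per_day / 1e6  # grams -> metric tons
      print(f"Monthly nitrate load: {daily_load_t.sum():,.0f} t")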

  20. Energy Contents of Frequently Ordered Restaurant Meals and Comparison with Human Energy Requirements and US Department of Agriculture Database Information: A Multisite Randomized Study

    PubMed Central

    Urban, Lorien E.; Weber, Judith L.; Heyman, Melvin B.; Schichtl, Rachel L.; Verstraete, Sofia; Lowery, Nina S.; Das, Sai Krupa; Schleicher, Molly M.; Rogers, Gail; Economos, Christina; Masters, William A.; Roberts, Susan B.

    2017-01-01

    Background: Excess energy intake from meals consumed away from home is implicated as a major contributor to obesity, and ~50% of US restaurants are individual or small-chain (non-chain) establishments that do not provide nutrition information. Objective: To measure the energy content of frequently ordered meals in non-chain restaurants in three US locations, and compare with the energy content of meals from large-chain restaurants, energy requirements, and food database information. Design: A multisite random-sampling protocol was used to measure the energy contents of the most frequently ordered meals from the most popular cuisines in non-chain restaurants, together with equivalent meals from large-chain restaurants. Setting: Meals were obtained from restaurants in San Francisco, CA; Boston, MA; and Little Rock, AR, between 2011 and 2014. Main outcome measures: Meal energy content determined by bomb calorimetry. Statistical analysis performed: Regional and cuisine differences were assessed using a mixed model with restaurant nested within region×cuisine as the random factor. Paired t tests were used to evaluate differences between non-chain and chain meals, human energy requirements, and food database values. Results: Meals from non-chain restaurants contained 1,205±465 kcal/meal, amounts that were not significantly different from equivalent meals from large-chain restaurants (+5.1%; P=0.41). There was a significant effect of cuisine on non-chain meal energy, and three of the four most popular cuisines (American, Italian, and Chinese) had the highest mean energy (1,495 kcal/meal). Ninety-two percent of meals exceeded typical energy requirements for a single eating occasion. Conclusions: Non-chain restaurants lacking nutrition information serve amounts of energy that are typically far in excess of human energy requirements for single eating occasions, and are equivalent to amounts served by the large-chain restaurants that have previously been criticized for providing excess energy. Restaurants in general, rather than specific categories of restaurant, expose patrons to excessive portions that induce overeating through established biological mechanisms. PMID:26803805
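
    The stated analysis (a mixed model with restaurant as a random factor, plus paired t tests) can be sketched with statsmodels and scipy as below. The data are randomly generated, and the nesting of restaurant within region x cuisine is approximated by simply using restaurant as the grouping factor, so this shows only the shape of the analysis, not a reproduction of it.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf
      from scipy import stats

      rng = np.random.default_rng(0)
      n = 120
      df = pd.DataFrame({
          "region": rng.choice(["SF", "Boston", "LittleRock"], n),
          "cuisine": rng.choice(["American", "Italian", "Chinese", "Mexican"], n),
          "restaurant": rng.integers(0, 40, n),
          "kcal": rng.normal(1205, 465, n),
      })

      # Fixed region x cuisine effects, random intercept per restaurant.
      fit = smf.mixedlm("kcal ~ region * cuisine", df, groups=df["restaurant"]).fit()
      print(fit.summary())

      # Paired comparison of non-chain meals with matched chain meals (hypothetical pairs).
      nonchain = rng.normal(1205, 465, 30)
      chain = nonchain * rng.normal(1.05, 0.1, 30)
      print(stats.ttest_rel(nonchain, chain))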

  1. Energy Contents of Frequently Ordered Restaurant Meals and Comparison with Human Energy Requirements and U.S. Department of Agriculture Database Information: A Multisite Randomized Study.

    PubMed

    Urban, Lorien E; Weber, Judith L; Heyman, Melvin B; Schichtl, Rachel L; Verstraete, Sofia; Lowery, Nina S; Das, Sai Krupa; Schleicher, Molly M; Rogers, Gail; Economos, Christina; Masters, William A; Roberts, Susan B

    2016-04-01

    Excess energy intake from meals consumed away from home is implicated as a major contributor to obesity, and ∼50% of US restaurants are individual or small-chain (non-chain) establishments that do not provide nutrition information. To measure the energy content of frequently ordered meals in non-chain restaurants in three US locations, and compare with the energy content of meals from large-chain restaurants, energy requirements, and food database information. A multisite random-sampling protocol was used to measure the energy contents of the most frequently ordered meals from the most popular cuisines in non-chain restaurants, together with equivalent meals from large-chain restaurants. Meals were obtained from restaurants in San Francisco, CA; Boston, MA; and Little Rock, AR, between 2011 and 2014. Meal energy content determined by bomb calorimetry. Regional and cuisine differences were assessed using a mixed model with restaurant nested within region×cuisine as the random factor. Paired t tests were used to evaluate differences between non-chain and chain meals, human energy requirements, and food database values. Meals from non-chain restaurants contained 1,205±465 kcal/meal, amounts that were not significantly different from equivalent meals from large-chain restaurants (+5.1%; P=0.41). There was a significant effect of cuisine on non-chain meal energy, and three of the four most popular cuisines (American, Italian, and Chinese) had the highest mean energy (1,495 kcal/meal). Ninety-two percent of meals exceeded typical energy requirements for a single eating occasion. Non-chain restaurants lacking nutrition information serve amounts of energy that are typically far in excess of human energy requirements for single eating occasions, and are equivalent to amounts served by the large-chain restaurants that have previously been criticized for providing excess energy. Restaurants in general, rather than specific categories of restaurant, expose patrons to excessive portions that induce overeating through established biological mechanisms. Copyright © 2016 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  2. Experimental investigations about the effect of trace amount of propane on the formation of mixed hydrates of methane and propane

    NASA Astrophysics Data System (ADS)

    Cai, W.; Lu, H.; Huang, X.

    2016-12-01

    In natural gas hydrates, some heavy hydrocarbons are always detected in addition to methane. However, it is still not well understood how trace amounts of heavy gases affect hydrate properties. Intensive studies have been carried out on the thermodynamic properties and structure types of mixed-gas hydrates, but comparatively few investigations have addressed the cage occupancies of guest molecules in mixed-gas hydrates. To understand how a trace amount of propane affects the formation of mixed methane-propane hydrates, X-ray diffraction, Raman spectroscopy, and gas chromatography were applied to synthesized mixed methane-propane hydrate specimens to obtain their structural characteristics (structure type, structural parameters, cage occupancy, etc.) and gas compositions. The mixed methane-propane hydrates were prepared by reacting fine ice powders with various gas mixtures of methane and propane. When the propane content was below 0.4%, the synthesized hydrates were found to contain both sI methane hydrate and sII methane-propane hydrate, while the hydrates were always sII when the propane content exceeded a certain level. Detailed studies of the cage occupancies of propane and methane in the sII hydrate revealed that: 1) with increasing propane content in the methane-propane mixture, the occupancy of propane in the large cages increased, accompanied by a decrease in methane occupancy in the large cages, whereas the occupancy of methane in the small cages did not change significantly; 2) temperature and pressure appeared to have no obvious influence on cage occupancy.

  3. Optimized energy harvesting materials and generator design

    NASA Astrophysics Data System (ADS)

    Graf, Christian; Hitzbleck, Julia; Feller, Torsten; Clauberg, Karin; Wagner, Joachim; Krause, Jens; Maas, Jürgen

    2013-04-01

    Electroactive polymers are soft capacitors made of thin elastic and electrically insulating films coated with compliant electrodes, offering a large amount of deformation. They can either be used as actuators by applying an electric charge or as energy converters based on the electrostatic principle. These unique properties enable the industrial development of highly efficient and environmentally sustainable energy converters, which opens up the possibility to further exploit large renewable and inexhaustible energy sources like wind and water that are otherwise widely unused. Compared to other electroactive polymer materials, polyurethanes, whose formulations have been systematically modified and optimized for energy harvesting applications, have certain advantages over silicones and acrylates. The inherently higher dipole content results in a significantly increased permittivity, and the dielectric breakdown strength is higher too, whereby the overall specific energy, a measure of the energy gain, is better by at least a factor of ten, i.e. more than ten times as much energy can be gained from the same amount of material. In order to reduce conduction losses on the electrode during charging and discharging, a highly conductive bidirectionally stretchable electrode has been developed. Other important material parameters like stiffness and bulk resistivity have been optimized to fit the requirements. To realize high-power energy harvesting systems, substantial amounts of electroactive polymer material are necessary, as well as a smart mechanical and electrical design of the generator. Here we report on different measures to evaluate and improve electroactive polymer materials for energy harvesting, e.g. by reducing the defect occurrence and improving the electrode behavior.
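
    The "factor of ten" claim above follows from the electrostatic energy density of a dielectric film, u = 1/2 * eps0 * eps_r * E^2: raising the permittivity and the usable field together multiplies the energy stored per unit volume. A quick worked calculation with round illustrative numbers (not measured values for the reported polyurethanes):

      EPS0 = 8.854e-12  # vacuum permittivity, F/m

      def energy_density(eps_r, e_field):
          """Electrostatic energy density (J/m^3) for relative permittivity eps_r and field e_field (V/m)."""
          return 0.5 * EPS0 * eps_r * e_field ** 2

      silicone = energy_density(eps_r=3.0, e_field=80e6)        # ~80 V/um
      polyurethane = energy_density(eps_r=10.0, e_field=130e6)  # ~130 V/um
      print(f"silicone:     {silicone / 1e3:.0f} kJ/m^3")
      print(f"polyurethane: {polyurethane / 1e3:.0f} kJ/m^3")
      print(f"ratio:        {polyurethane / silicone:.1f}x")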

  4. Environmental impact evaluation of feeds prepared from food residues using life cycle assessment.

    PubMed

    Ogino, Akifumi; Hirooka, Hiroyuki; Ikeguchi, Atsuo; Tanaka, Yasuo; Waki, Miyoko; Yokoyama, Hiroshi; Kawashima, Tomoyuki

    2007-01-01

    There is increasing concern about feeds prepared from food residues (FFR) from an environmental viewpoint; however, various forms of energy are consumed in the production of FFR. Environmental impacts of three scenarios were therefore investigated and compared using life cycle assessment (LCA): production of liquid FFR by sterilization with heat (LQ), production of dehydrated FFR by dehydration (DH), and disposal of food residues by incineration (IC). The functional unit was defined as 1 kg dry matter of produced feed standardized to a fixed energy content. The system boundaries included collection of food residues and production of feed from food residues. In IC, food residues are incinerated as waste, and thus the impacts of production and transportation of commercial concentrate feeds equivalent to the FFR in the other scenarios are included in the analysis. Our results suggested that the average amounts of greenhouse gas (GHG) emissions from LQ, DH, and IC were 268, 1073, and 1066 g of CO(2) equivalent, respectively. The amount of GHG emissions from LQ was remarkably small, indicating that LQ was effective for reducing the environmental impact of animal production. Although the average amount of GHG emissions from DH was nearly equal to that from IC, a large variation of GHG emissions was observed among the DH units. The energy consumption of the three scenarios followed a pattern similar to that of GHG emissions. The water consumption of the FFR-producing units was remarkably smaller than that of IC due to the large volumes of water consumed in forage crop production.

  5. Fog-Based Two-Phase Event Monitoring and Data Gathering in Vehicular Sensor Networks

    PubMed Central

    Yang, Fan; Su, Jinsong; Zhou, Qifeng; Wang, Tian; Zhang, Lu; Xu, Yifan

    2017-01-01

    Vehicular nodes are equipped with more and more sensing units, and a large amount of sensing data is generated. Recently, more and more research considers cooperative urban sensing as the heart of intelligent and green city traffic management. The key components of the platform will be a combination of a pervasive vehicular sensing system and a central control and analysis system, where data gathering is a fundamental component. However, data gathering and monitoring are also challenging issues in vehicular sensor networks because of the large amount of data and the dynamic nature of the network. In this paper, we propose an efficient continuous event-monitoring and data-gathering framework based on fog nodes in vehicular sensor networks. A fog-based two-level threshold strategy is adopted to suppress unnecessary data uploads and transmissions. In the monitoring phase, nodes sense the environment in a low-cost sensing mode and generate sensed data. When the probability of an event is high and exceeds a threshold, nodes switch to the event-checking phase, and some nodes are selected to switch to a deep sensing mode to generate more accurate data about the environment. Furthermore, the framework adaptively adjusts the threshold to upload a suitable amount of data for decision making, while at the same time suppressing unnecessary message transmissions. Simulation results showed that the proposed scheme could reduce data transmissions by more than 84 percent compared with other existing algorithms, while still detecting events and gathering event data. PMID:29286320
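
    A toy state-machine sketch of the two-phase idea described above: cheap monitoring until a low threshold is crossed, then deeper sensing with a second threshold before any upload. The thresholds and sensing functions are placeholders, not the paper's protocol or simulation setup.

      import random

      LOW_THRESHOLD = 0.6    # switch from monitoring to event-checking
      HIGH_THRESHOLD = 0.85  # upload event data

      def low_cost_sense():
          """Cheap sensing: rough event probability."""
          return random.random()

      def deep_sense():
          """More accurate (and more expensive) sensing: refined probability."""
          return random.random()

      def node_step(state):
          if state == "monitoring":
              if low_cost_sense() > LOW_THRESHOLD:
                  return "event_checking", False
              return "monitoring", False       # suppress upload
          p = deep_sense()                     # event-checking phase
          if p > HIGH_THRESHOLD:
              return "monitoring", True        # upload event data, then reset
          if p < LOW_THRESHOLD:
              return "monitoring", False       # false alarm, back to cheap mode
          return "event_checking", False       # keep checking

      state, uploads = "monitoring", 0
      for _ in range(1000):
          state, upload = node_step(state)
          uploads += upload
      print(f"uploads triggered: {uploads} / 1000 steps")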

  6. The single chest tube versus double chest tube application after pulmonary lobectomy: A systematic review and meta-analysis.

    PubMed

    Zhang, Xuefei; Lv, Desheng; Li, Mo; Sun, Ge; Liu, Changhong

    2016-12-01

    Draining the chest cavity with two chest tubes after pulmonary lobectomy is a common practice. The objective of this study was to evaluate whether using two tubes after a pulmonary lobectomy is more effective than using a single tube. We performed a meta-analysis of five randomized studies that compared the single chest tube with the double chest tube application after pulmonary lobectomy. The primary end-points were the amount of drainage and the duration of chest tube drainage. The secondary end-points were the number of patients needing new drain insertion after operation, hospital stay after operation, the number of patients with subcutaneous emphysema after operation, the number of patients with residual pleural air space, pain score, the number of patients who needed thoracentesis, and cost. Five randomized controlled trials totaling 502 patients were included. The meta-analysis results are as follows: there were statistically significant differences in the amount of drainage (risk ratio [RR] = -0.15; 95% confidence interval [CI] = -3.17, -0.12, P = 0.03), duration of chest tube drainage (RR = -0.43; 95% CI = -0.57, -0.19, P = 0.02), and pain score (P < 0.05). There were no statistically significant differences between the single and double chest tube groups with regard to the number of patients needing new drain insertion after operation. Compared with the double chest tube, the single chest tube significantly decreases the amount of drainage, the duration of chest tube drainage, pain score, the number of patients who need thoracentesis, and cost. Although these findings are supported by the evidence presented here, they still need to be confirmed by large-sample, multicenter, randomized, controlled trials.

  7. Hall Effect–Mediated Magnetic Flux Transport in Protoplanetary Disks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Xue-Ning; Stone, James M.

    2017-02-10

    The global evolution of protoplanetary disks (PPDs) has recently been shown to be largely controlled by the amount of poloidal magnetic flux threading the disk. The amount of magnetic flux must also coevolve with the disk, as a result of magnetic flux transport, a process that is poorly understood. In weakly ionized gas as in PPDs, magnetic flux is largely frozen in the electron fluid, except when resistivity is large. When the disk is largely laminar, we show that the relative drift between the electrons and ions (the Hall drift), and the ions and neutral fluids (ambipolar drift) can play a dominant role on the transport of magnetic flux. Using two-dimensional simulations that incorporate the Hall effect and ambipolar diffusion (AD) with prescribed diffusivities, we show that when large-scale poloidal field is aligned with disk rotation, the Hall effect rapidly drags magnetic flux inward at the midplane region, while it slowly pushes flux outward above/below the midplane. This leads to a highly radially elongated field configuration as a global manifestation of the Hall-shear instability. This field configuration further promotes rapid outward flux transport by AD at the midplane, leading to instability saturation. In quasi-steady state, magnetic flux is transported outward at approximately the same rate at all heights, and the rate is comparable to the Hall-free case. For anti-aligned field polarity, the Hall effect consistently transports magnetic flux outward, leading to a largely vertical field configuration in the midplane region. The field lines in the upper layer first bend radially inward and then outward to launch a disk wind. Overall, the net rate of outward flux transport is about twice as fast as that of the aligned case. In addition, the rate of flux transport increases with increasing disk magnetization. The absolute rate of transport is sensitive to disk microphysics, which remains to be explored in future studies.

  8. Fermi Large Area Telescope observations of Local Group galaxies: detection of M 31 and search for M 33

    DOE PAGES

    Abdo, A. A.

    2010-11-01

    Context. Cosmic rays (CRs) can be studied through the galaxy-wide gamma-ray emission that they generate when propagating in the interstellar medium. The comparison of the diffuse signals from different systems may inform us about the key parameters in CR acceleration and transport. Aims. We aim to determine and compare the properties of the cosmic-ray-induced gamma-ray emission of several Local Group galaxies. Methods. We use 2 years of nearly continuous sky-survey observations obtained with the Large Area Telescope aboard the Fermi Gamma-ray Space Telescope to search for gamma-ray emission from M 31 and M 33. We compare the results with those for the Large Magellanic Cloud, the Small Magellanic Cloud, the Milky Way, and the starburst galaxies M 82 and NGC 253. Results. We detect a gamma-ray signal at 5σ significance in the energy range 200 MeV–20 GeV that is consistent with originating from M 31. The integral photon flux above 100 MeV amounts to (9.1 ± 1.9 (stat) ± 1.0 (sys)) × 10^-9 ph cm^-2 s^-1. We find no evidence for emission from M 33 and derive an upper limit on the photon flux >100 MeV of 5.1 × 10^-9 ph cm^-2 s^-1 (2σ). Comparing these results to the properties of other Local Group galaxies, we find indications of a correlation between star formation rate and gamma-ray luminosity that also holds for the starburst galaxies. Conclusions. The gamma-ray luminosity of M 31 is about half that of the Milky Way, which implies that the ratio between the average CR densities in M 31 and the Milky Way amounts to ξ = 0.35 ± 0.25. The observed correlation between gamma-ray luminosity and star formation rate suggests that the flux of M 33 is not far below the current upper limit from the LAT observations.

  9. Salt- and pH-induced desorption: Comparison between non-aggregated and aggregated mussel adhesive protein, Mefp-1, and a synthetic cationic polyelectrolyte.

    PubMed

    Krivosheeva, Olga; Dedinaite, Andra; Claesson, Per M

    2013-10-15

    Mussel adhesive proteins are of great interest in many applications due to their ability to bind strongly to many types of surfaces under water. Effective use of such proteins, for instance the Mytilus edulis foot protein Mefp-1, for surface modification requires achieving a large adsorbed amount and forming a layer that is resistant towards desorption under changing conditions. In this work we compare the adsorbed amount and layer properties obtained by using a sample containing small Mefp-1 aggregates with those obtained by using a non-aggregated sample. We find that the use of the sample containing small aggregates leads to a higher adsorbed amount, a larger layer thickness and a similar water content compared to what can be achieved with a non-aggregated sample. The layer formed by the aggregated Mefp-1 was, after removal of the protein from bulk solution, exposed to aqueous solutions with high ionic strength (up to 1 M NaCl) and to solutions with low pH in order to reduce the electrostatic surface affinity. It was found that the preadsorbed Mefp-1 layer was, under all conditions explored, significantly more resistant towards desorption than a layer built by a synthetic cationic polyelectrolyte with similar charge density. These results suggest that the non-electrostatic surface affinity of Mefp-1 is larger than that of the cationic polyelectrolyte. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  10. Interaction of SO2 and CO with the Ti2O3(101¯2) surface

    NASA Astrophysics Data System (ADS)

    Smith, Kevin E.; Henrich, Victor E.

    1985-10-01

    The interaction of sulfur dioxide with the nearly perfect (101¯2) surface of the corundum transition-metal oxide Ti2O3 has been studied using ultraviolet and x-ray photoemission spectroscopies and low-energy electron diffraction. The reaction of SO2 with Ti2O3 is found to be extremely vigorous, with SO2 adsorbing dissociatively and catalyzing the complete oxidation of the surface to TiO2 and TiS2. This result is significant since exposure to large amounts of O2 does not result in the production of large amounts of TiO2 at the Ti2O3 surface. Dissociative adsorption of SO2 continues for exposures up to at least 10⁴ L (1 L = 10⁻⁶ Torr s). The reaction is accompanied by large-scale surface disorder and by an increase in the work function of 1.32 eV. In contrast, CO adsorbs molecularly for exposures ≥10⁵ L, with an extramolecular relaxation-polarization shift of 3.0 eV. For CO exposures ≤10⁴ L, the chemisorption mechanism is tentatively identified as dissociative adsorption at defect sites. Inclusive of this study, the interaction of four oxygen-containing molecules (SO2, CO, H2O, and O2) with Ti2O3(101¯2) surfaces has been studied, and their behavior is compared and trends isolated with a view to understanding the oxidation of Ti2O3.

  11. Large temporal scale and capacity subsurface bulk energy storage with CO2

    NASA Astrophysics Data System (ADS)

    Saar, M. O.; Fleming, M. R.; Adams, B. M.; Ogland-Hand, J.; Nelson, E. S.; Randolph, J.; Sioshansi, R.; Kuehn, T. H.; Buscheck, T. A.; Bielicki, J. M.

    2017-12-01

    Decarbonizing energy systems by increasing the penetration of variable renewable energy (VRE) technologies requires efficient and short- to long-term energy storage. Very large amounts of energy can be stored in the subsurface as heat and/or pressure energy in order to provide both short- and long-term (seasonal) storage, depending on the implementation. This energy storage approach can be quite efficient, especially where geothermal energy is naturally added to the system. Here, we present subsurface heat and/or pressure energy storage with supercritical carbon dioxide (CO2) and discuss the system's efficiency, deployment options, as well as its advantages and disadvantages, compared to several other energy storage options. CO2-based subsurface bulk energy storage has the potential to be particularly efficient and large-scale, both temporally (i.e., seasonal) and spatially. The latter refers to the amount of energy that can be stored underground, using CO2, at a geologically conducive location, potentially enabling storing excess power from a substantial portion of the power grid. The implication is that it would be possible to employ centralized energy storage for (a substantial part of) the power grid, where the geology enables CO2-based bulk subsurface energy storage, whereas the VRE technologies (solar, wind) are located on that same power grid, where (solar, wind) conditions are ideal. However, this may require reinforcing the power grid's transmission lines in certain parts of the grid to enable high-load power transmission from/to a few locations.

  12. Human Growth Hormone Adsorption Kinetics and Conformation on Self-Assembled Monolayers

    PubMed Central

    Buijs, Jos; Britt, David W.; Hlady, Vladimir

    2012-01-01

    The adsorption process of the recombinant human growth hormone on organic films, created by self-assembly of octadecyltrichlorosilane, arachidic acid, and dipalmitoylphosphatidylcholine, is investigated and compared to adsorption on silica and methylated silica substrates. Information on the adsorption process of human growth hormone (hGH) is obtained by using total internal reflection fluorescence (TIRF). The intensity, spectra, and quenching of the intrinsic fluorescence emitted by the growth hormone’s single tryptophan are monitored and related to adsorption kinetics and protein conformation. For the various alkylated hydrophobic surfaces with differences in surface density and conformational freedom it is observed that the adsorbed amount of growth hormone is relatively large if the alkyl chains are in an ordered structure while the amounts adsorbed are considerably lower for adsorption onto less ordered alkyl chains of fatty acid and phospholipid layers. Adsorption on methylated surfaces results in a relatively large conformational change in the growth hormone’s structure, as displayed by a 7 nm blue shift in emission wavelength and a large increase in the effectiveness of fluorescence quenching. Conformational changes are less evident for hGH adsorption onto the fatty acid and phospholipid alkyl chains. Adsorption kinetics on the hydrophilic head groups of the self-assembled monolayers are similar to those on solid hydrophilic surfaces. The relatively small conformational changes in the hGH structure observed for adsorption on silica are even further reduced for adsorption on fatty acid head groups. PMID:25125795

  13. Identification of a Lead Candidate in the Search for Carbene-Stabilised Homoaromatics.

    PubMed

    Mattock, James D; Vargas, Alfredo; Dewhurst, Rian D

    2015-11-16

    The effect of carbenes as Lewis donor groups on the homoaromaticity of mono- and bicyclic organic molecules is surveyed. The search for viable carbene-stabilised homoaromatics resulted in a large number of rejected candidates as well as nine promising candidates that are further analysed for their homoaromaticity by using a number of metrics. Of these, five appeared to show modest homoaromaticity, whereas another compound showed a level of homoaromaticity comparable with the homotropylium cation benchmark compound. Isoelectronic analogues and constitutional isomers of the lead compound were investigated; however, none of these showed comparable homoaromaticity. The implications of these calculations on the design of donor-stabilised homoaromatics are discussed. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Vanadium accumulation in carbonaceous rocks: A review of geochemical controls during deposition and diagenesis

    USGS Publications Warehouse

    Breit, G.N.; Wanty, R.B.

    1991-01-01

    Published data relevant to the geochemistry of vanadium were used to evaluate processes and conditions that control vanadium accumulation in carbonaceous rocks. Reduction, adsorption, and complexation of dissolved vanadium favor addition of vanadium to sediments rich in organic carbon. Dissolved vanadate (V(V)) species predominate in oxic seawater and are reduced to vanadyl ion (V(IV)) by organic compounds or H2S. Vanadyl ion readily adsorbs to particle surfaces and is added to the sediment as the particles settle. The large vanadium concentrations of rocks deposited in marine as compared to lacustrine environments are the result of the relatively large amount of vanadium provided by circulating ocean water compared to terrestrial runoff. Vanadium-rich carbonaceous rocks typically have high contents of organically bound sulfur and are stratigraphically associated with phosphate-rich units. A correspondence between vanadium content and organically bound sulfur is consistent with high activities of H2S during sediment deposition. Excess H2S exited the sediment into bottom waters and favored reduction of dissolved V(V) to V(IV) or possibly V(III). The stratigraphic association of vanadiferous and phosphatic rocks reflects temporal and spatial shifts in bottom water chemistry from suboxic (phosphate concentrated) to more reducing (euxinic?) conditions that favor vanadium accumulation. During diagenesis some vanadium-organic complexes migrate with petroleum out of carbonaceous rocks, but significant amounts of vanadium are retained in refractory organic matter or clay minerals. As carbon in the rock evolves toward graphite during metamorphism, vanadium is incorporated into silicate minerals. © 1991.

  15. Lost in space: Onboard star identification using CCD star tracker data without an a priori attitude

    NASA Technical Reports Server (NTRS)

    Ketchum, Eleanor A.; Tolson, Robert H.

    1993-01-01

    There are many algorithms in use today which determine spacecraft attitude by identifying stars in the field of view of a star tracker. Some methods, which date from the early 1960's, compare the angular separation between observed stars with a small catalog. In the last 10 years, several methods have been developed which speed up the process and reduce the amount of memory needed, a key element of onboard attitude determination. However, each of these methods requires some a priori knowledge of the spacecraft attitude. Although the Sun and magnetic field generally provide the necessary coarse attitude information, there are occasions when a spacecraft could get lost when it is not prudent to wait for sunlight. Also, the possibility of efficient attitude determination using only the highly accurate CCD star tracker could lead to fully autonomous spacecraft attitude determination. The need for redundant coarse sensors could thus be eliminated at substantial cost reduction. Some groups have extended their algorithms to implement a computation-intensive full-sky scan. Some require large databases. Both storage and speed are concerns for autonomous onboard systems. Neural network technology is even being explored by some as a possible solution, but because of the limited number of patterns that can be stored and the large overhead, nothing concrete has resulted from these efforts. This paper presents an algorithm which, by discretizing the sky and filtering by the visual magnitude of the brightest observed star, speeds up the lost-in-space star identification process while reducing the amount of necessary onboard computer storage compared to existing techniques.
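
    The cell-plus-magnitude filtering idea described above can be illustrated with a minimal sketch; it is not the paper's actual algorithm, and the catalog, cell size, and magnitude tolerance below are hypothetical choices used only to show how discretizing the sky and filtering on the brightest observed star prunes the candidate search.

```python
# Hypothetical bright-star catalog: (right ascension deg, declination deg, visual magnitude).
CATALOG = [
    (10.7, 41.3, 2.1), (88.8, 7.4, 0.5), (201.3, -11.2, 1.0),
    (279.2, 38.8, 0.0), (114.8, 5.2, 0.4),
]

CELL_DEG = 10.0  # illustrative cell size used to discretize the sky


def build_index(catalog, cell_deg=CELL_DEG):
    """Map each sky cell to the magnitude of its brightest (lowest-magnitude) star."""
    index = {}
    for ra, dec, mag in catalog:
        cell = (int(ra // cell_deg), int(dec // cell_deg))
        if cell not in index or mag < index[cell]:
            index[cell] = mag
    return index


def candidate_cells(index, observed_brightest_mag, tol=0.3):
    """Keep only cells whose brightest catalog star matches the observed brightest star.

    This magnitude filter prunes most of the sky before any angular-separation
    matching between observed and catalog stars is attempted.
    """
    return [cell for cell, mag in index.items()
            if abs(mag - observed_brightest_mag) <= tol]


if __name__ == "__main__":
    idx = build_index(CATALOG)
    print(candidate_cells(idx, observed_brightest_mag=0.4))
```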

  16. Stress-induced microcrack density evolution in β-eucryptite ceramics: Experimental observations and possible route to strain hardening

    DOE PAGES

    Müller, B. R.; Cooper, R. C.; Lange, A.; ...

    2017-11-01

    In order to investigate their microcracking behaviour, the microstructures of several β-eucryptite ceramics, obtained from glass precursor and cerammed to yield different grain sizes and microcrack densities, were characterized by laboratory and synchrotron x-ray refraction and tomography. Here, results were compared with those obtained from scanning electron microscopy (SEM). In SEM images, the characterized materials appeared fully dense but computed tomography showed the presence of pore clusters. Uniaxial tensile testing was performed on specimens while strain maps were recorded and analyzed by Digital Image Correlation (DIC). X-ray refraction techniques were applied on specimens before and after tensile testing to measure the amount of the internal specific surface (i.e., area per unit volume). X-ray refraction revealed that (a) the small grain size (SGS) material contained a large specific surface, originating from the grain boundaries and the interfaces of TiO2 precipitates; (b) the medium (MGS) and large grain size (LGS) materials possessed higher amounts of specific surface compared to the SGS material due to microcracks, which decreased after tensile loading; (c) the precursor glass had negligible internal surface. The unexpected decrease in the internal surface of MGS and LGS after tensile testing is explained by the presence of compressive regions in the DIC strain maps and further by theoretical arguments. It is suggested that, while some microcracks merge via propagation, a greater number close mechanically, thereby explaining the observed X-ray refraction results. Lastly, the mechanisms proposed would allow the development of a strain hardening route in ceramics.

  17. Lossless Astronomical Image Compression and the Effects of Random Noise

    NASA Technical Reports Server (NTRS)

    Pence, William

    2009-01-01

    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
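
    A rough sketch of the quantization idea mentioned above (discarding noise bits by storing floating-point pixels as scaled integers) is given below; the noise estimator and the scale parameter q are illustrative assumptions, not the fpack implementation.

```python
import numpy as np


def estimate_noise(image):
    """Crude background-noise estimate from the MAD of horizontal pixel differences."""
    diffs = np.diff(image, axis=1).ravel()
    return 1.4826 * np.median(np.abs(diffs - np.median(diffs))) / np.sqrt(2.0)


def quantize(image, q=4.0):
    """Store floats as scaled integers; a larger q keeps more of the noise bits."""
    scale = estimate_noise(image) / q
    return np.round(image / scale).astype(np.int32), scale


def dequantize(int_image, scale):
    """Recover approximate floating-point values from the scaled integers."""
    return int_image.astype(np.float64) * scale


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = 100.0 + rng.normal(scale=5.0, size=(256, 256))  # synthetic noisy image
    ints, scale = quantize(img, q=4.0)
    rms_err = np.sqrt(np.mean((dequantize(ints, scale) - img) ** 2))
    print(f"scale={scale:.3f}, RMS quantization error={rms_err:.3f}")
```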

  18. Influence of impurities and contact scale on the lubricating properties of bovine submaxillary mucin (BSM) films on a hydrophobic surface.

    PubMed

    Nikogeorgos, Nikolaos; Madsen, Jan Busk; Lee, Seunghwan

    2014-10-01

    Lubricating properties of bovine submaxillary mucin (BSM) on a compliant, hydrophobic surface were studied as influenced by impurities, in particular bovine serum albumin (BSA), at macro and nanoscale contacts by means of pin-on-disk tribometry and friction force microscopy (FFM), respectively. At both contact scales, the purity of BSM and the presence of BSA were quantitatively discriminated. The presence of BSA was responsible for the higher frictional forces observed for BSM samples containing a relatively larger amount of BSA. However, the mechanisms by which BSA contributed to higher friction forces differed between the two contact scales. At the macroscale contact, higher friction forces were caused by faster and dominant adsorption of BSA into the contacting area under a continuous cycle of desorption and re-adsorption of the macromolecules from tribostress. Nevertheless, all BSMs lowered the interfacial friction forces due to the large contact area and the large number of BSM molecules in the contact area. At the nanoscale contact, however, no significant desorption of the macromolecules is expected in tribological contacts because of the very small contact area and the extremely small number of BSM molecules involved. Instead, the increasingly higher friction forces with increasing amount of BSA in the BSM layer are attributed to the higher viscosity caused by BSA in the layer. The comparable size of the AFM probes and the BSM molecules allowed the probes to penetrate through the BSM layers and scratch the underlying substrates, thus inducing higher friction forces compared to sliding contact on bare substrates. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Stress-induced microcrack density evolution in β-eucryptite ceramics: Experimental observations and possible route to strain hardening

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller, B. R.; Cooper, R. C.; Lange, A.

    In order to investigate their microcracking behaviour, the microstructures of several β-eucryptite ceramics, obtained from glass precursor and cerammed to yield different grain sizes and microcrack densities, were characterized by laboratory and synchrotron x-ray refraction and tomography. Here, results were compared with those obtained from scanning electron microscopy (SEM). In SEM images, the characterized materials appeared fully dense but computed tomography showed the presence of pore clusters. Uniaxial tensile testing was performed on specimens while strain maps were recorded and analyzed by Digital Image Correlation (DIC). X-ray refraction techniques were applied on specimens before and after tensile testing to measure the amount of the internal specific surface (i.e., area per unit volume). X-ray refraction revealed that (a) the small grain size (SGS) material contained a large specific surface, originating from the grain boundaries and the interfaces of TiO2 precipitates; (b) the medium (MGS) and large grain size (LGS) materials possessed higher amounts of specific surface compared to the SGS material due to microcracks, which decreased after tensile loading; (c) the precursor glass had negligible internal surface. The unexpected decrease in the internal surface of MGS and LGS after tensile testing is explained by the presence of compressive regions in the DIC strain maps and further by theoretical arguments. It is suggested that, while some microcracks merge via propagation, a greater number close mechanically, thereby explaining the observed X-ray refraction results. Lastly, the mechanisms proposed would allow the development of a strain hardening route in ceramics.

  20. Supraventricular tachycardia induced by chocolate: is chocolate too sweet for the heart?

    PubMed

    Parasramka, Saurabh; Dufresne, Alix

    2012-09-01

    Conflicting studies have been published concerning the association between chocolate and cardiovascular diseases. Fewer articles have described the potential arrhythmogenic risk related to chocolate intake. We present a case of paroxysmal supraventricular tachycardia in a woman after consumption of a large quantity of chocolate. A 53-year-old woman with no significant medical history presented to us with complaints of palpitations and shortness of breath after consuming large amounts of chocolate. Electrocardiogram showed supraventricular tachycardia at 165 beats per minute, which was restored to sinus rhythm after an adenosine bolus injection. Electrophysiology studies showed atrioventricular nodal reentry tachycardia, which was treated with radiofrequency ablation. Chocolate contains caffeine and theobromine, which are methylxanthines, competitive antagonists of adenosine with arrhythmogenic potential. Our case describes an episode of tachycardia precipitated by consumption of a large amount of chocolate in a patient with an underlying substrate. There are occasional case reports describing an association between chocolate, caffeine, and arrhythmias. A large Danish study, however, did not find any association between the amount of daily caffeine consumption and the risk of arrhythmia.

  1. Role of Stress and Smoking as Modifiable Risk Factors for Nonpersistent and Persistent Back Pain in Women.

    PubMed

    Schmelzer, Amy C; Salt, Elizabeth; Wiggins, Amanda; Crofford, Leslie J; Bush, Heather; Mannino, David M

    2016-03-01

    The purpose of this study was to examine the association between smoking and stress with nonpersistent and persistent back pain. Participants included 3703 women who took part in the Kentucky Women's Health Registry in 2008 and 2011. Multivariate logistic regression modeling was used to examine whether smoking status and stress levels were predictive of nonpersistent and persistent back pain, controlling for sociodemographic characteristics. Stress level was associated with both nonpersistent and persistent back pain, whereas smoking was associated with only persistent back pain. Current smokers were 1.5 times more likely to report persistent back pain compared with never smokers, controlling for age, race, body mass index, educational attainment, and employment status. Women experiencing large or overwhelming amounts of stress were 1.8 times more likely to have nonpersistent back pain and 1.6 times more likely to report persistent back pain, compared with women experiencing small amounts of stress. This study further substantiates the findings of prior research that describes a significant relationship between back pain, stress, and smoking. Understanding the role of modifiable risk factors (ie, smoking and stress) and their impact on back pain provides an opportunity to offer a comprehensive and tailored treatment plan.

  2. The relationship between violent video games, acculturation, and aggression among Latino adolescents.

    PubMed

    Escobar-Chaves, S Liliana; Kelder, Steve; Orpinas, Pamela

    2002-12-01

    Multiple factors are involved in the occurrence of aggressive behavior. The purpose of this study was to evaluate the hypotheses that Latino middle school children exposed to higher levels of video game playing will exhibit a higher level of aggression and fighting compared to children exposed to lower levels and that the more acculturated middle school Latino children will play more video games and will prefer more violent video games compared to less acculturated middle school Latino children. This study involved 5,831 students attending eight public schools in Texas. A linear relationship was observed between the time spent playing video games and aggression scores. Higher aggression scores were significantly associated with heavier video playing for boys and girls (p < 0.0001). The more students played video games, the more they fought at school (p < 0.0001). As Latino middle school students were more acculturated, their preference for violent video game playing increased, as well as the amount of time they played video games. Students who reported speaking more Spanish at home and with their friends were less likely to spend large amounts of time playing video games and less likely to prefer violent video games (p < 0.05).

  3. A Cost Benefit Analysis of Emerging LED Water Purification Systems in Expeditionary Environments

    DTIC Science & Technology

    2017-03-23

    the initial contingency response phase, ROWPUs are powered by large generators which require relatively large amounts of fossil fuels. The amount of...they attract and cling together forming a larger particle (Chem Treat, 2016). Flocculation is the addition of a polymer to water that clumps...smaller particles together to form larger particles. The idea for both methods is that larger particles will either settle out of or be removed from the

  4. Galaxy And Mass Assembly (GAMA): the connection between metals, specific SFR and H I gas in galaxies: the Z-SSFR relation

    NASA Astrophysics Data System (ADS)

    Lara-López, M. A.; Hopkins, A. M.; López-Sánchez, A. R.; Brough, S.; Colless, M.; Bland-Hawthorn, J.; Driver, S.; Foster, C.; Liske, J.; Loveday, J.; Robotham, A. S. G.; Sharp, R. G.; Steele, O.; Taylor, E. N.

    2013-06-01

    We study the interplay between gas phase metallicity (Z), specific star formation rate (SSFR) and neutral hydrogen gas (H I) for galaxies of different stellar masses. Our study uses spectroscopic data from Galaxy and Mass Assembly and Sloan Digital Sky Survey (SDSS) star-forming galaxies, as well as H I detection from the Arecibo Legacy Fast Arecibo L-band Feed Array (ALFALFA) and Galex Arecibo SDSS Survey (GASS) public catalogues. We present a model based on the Z-SSFR relation that shows that at a given stellar mass, depending on the amount of gas, galaxies will follow opposite behaviours. Low-mass galaxies with a large amount of gas will show high SSFR and low metallicities, while low-mass galaxies with small amounts of gas will show lower SSFR and high metallicities. In contrast, massive galaxies with a large amount of gas will show moderate SSFR and high metallicities, while massive galaxies with small amounts of gas will show low SSFR and low metallicities. Using ALFALFA and GASS counterparts, we find that the amount of gas is related to those drastic differences in Z and SSFR for galaxies of a similar stellar mass.

  5. Effects of partial mixed rations and supplement amounts on milk production and composition, ruminal fermentation, bacterial communities, and ruminal acidosis.

    PubMed

    Golder, H M; Denman, S E; McSweeney, C; Wales, W J; Auldist, M J; Wright, M M; Marett, L C; Greenwood, J S; Hannah, M C; Celi, P; Bramley, E; Lean, I J

    2014-09-01

    Late-lactation Holstein cows (n=144) that were offered 15kg dry matter (DM)/cow per day of perennial ryegrass to graze were randomized into 24 groups of 6. Each group contained a fistulated cow and groups were allocated to 1 of 3 feeding strategies: (1) control (10 groups): cows were fed crushed wheat grain twice daily in the milking parlor and ryegrass silage at pasture; (2) partial mixed ration (PMR; 10 groups): PMR that was isoenergetic to the control diet and fed twice daily on a feed pad; (3) PMR+canola (4 groups): a proportion of wheat in the PMR was replaced with canola meal to produce more estimated metabolizable protein than other groups. Supplements were fed to the control and PMR cows at 8, 10, 12, 14, or 16kg of DM/d, and to the PMR+canola cows at 14 or 16kg of DM/d. The PMR-fed cows had a lower incidence of ruminal acidosis compared with controls, and ruminal acidosis increased linearly and quadratically with supplement fed. Yield of milk fat was highest in the PMR+canola cows fed 14 or 16kg of total supplement DM/d, followed by the PMR-fed cows, and was lowest in controls fed at these amounts; a similar trend was observed for milk fat percentage. Milk protein yield was higher in the PMR+canola cows fed 14 or 16kg of total supplement DM/d. Milk yield and milk protein percentage were not affected by feeding strategy. Milk, energy-corrected milk, and milk protein yields increased linearly with supplement fed, whereas milk fat percentage decreased. Ruminal butyrate and d-lactate concentrations, acetate-to-propionate ratio, (acetate + butyrate)/propionate, and pH increased in PMR-fed cows compared with controls for all supplement amounts, whereas propionate and valerate concentrations decreased. Ruminal acetate, butyrate, and ammonia concentrations, acetate-to-propionate ratio, (acetate + butyrate)/propionate, and pH linearly decreased with amounts of supplement fed. Ruminal propionate concentration linearly increased and valerate concentration linearly and quadratically increased with supplement feeding amount. The Bacteroidetes and Firmicutes were the dominant bacterial phyla identified. The Prevotellaceae, Ruminococcaceae, and Lachnospiraceae were the dominant bacterial families, regardless of feeding group, and were influenced by feeding strategy, supplement feeding amount, or both. The Veillonellaceae family decreased in relative abundance in PMR-fed cows compared with controls, and the Streptococcaeae and Lactobacillaceae families were present in only minor relative abundances, regardless of feeding group. Despite large among- and within-group variation in bacterial community composition, distinct bacterial communities occurred among feeding strategies, supplement amounts, and sample times and were associated with ruminal fermentation measures. Control cows fed 16kg of DM of total supplement per day had the most distinct ruminal bacterial community composition. Bacterial community composition was most significantly associated with supplement feeding amount and ammonia, butyrate, valerate, and propionate concentrations. Feeding supplements in a PMR reduced the incidence of ruminal acidosis and altered ruminal bacterial communities, regardless of supplement feeding amount, but did not result in increased milk measures compared with isoenergetic control diets component-fed to late-lactation cows. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  6. Inclusion bodies of aggregated hemosiderins in liver macrophages.

    PubMed

    Hayashi, Hisao; Tatsumi, Yasuaki; Wakusawa, Shinya; Shigemasa, Ryota; Koide, Ryoji; Tsuchida, Ken-Ichi; Morotomi, Natsuko; Yamashita, Tetsuji; Kumagai, Kotaro; Ono, Yukiya; Hayashi, Kazuhiko; Ishigami, Masatoshi; Goto, Hidemi; Kato, Ayako; Kato, Koichi

    2017-12-01

    Hemosiderin formation is a structural indication of iron overload. We investigated further adaptations of the liver to excess iron. Five patients with livers showing iron-rich inclusions larger than 2 µm were selected from our database. The clinical features of the patients and the structures of the inclusions were compared with those of 2 controls with mild iron overload. All patients had severe iron overload with more than 5000 ng/mL of serum ferritin. Etiologies were variable, from hemochromatosis to iatrogenic iron overload. Their histological stages were either portal fibrosis or cirrhosis. Inclusion bodies were ultra-structurally visualized as aggregated hemosiderins in the periportal macrophages. X-ray analysis always identified, in addition to a large amount of iron complexes including oxygen and phosphorus, a small amount of copper and sulfur in the mosaic matrices of the inclusions. There were no inclusions in the control livers. Inclusion bodies, when the liver is loaded with excess iron, may appear in the macrophages as isolated organelles of aggregated hemosiderins. Trace amounts of copper-sulfur complexes were always identified in the mosaic matrices of the inclusions, suggesting cuproprotein induction against excess iron. In conclusion, inclusion formation in macrophages may be an adaptation of the liver loaded with excess iron.

  7. [The intestinal microflora of persons subjected to a radiation lesion].

    PubMed

    Sudenko, V I; Nagornaia, S S; Groma, L I

    1992-01-01

    The content of the large intestine has been studied in persons exposed to radiation injury as a consequence of the accident at the Chernobyl Atomic Power Plant. Bifidobacteria (10⁷-10¹⁰ cells per 1 g of feces) prevailed, as in healthy people, with Bifidobacterium indicum the dominant species. The amount of lactic-acid bacteria in 1 g of feces of the examined patients was within the range of 10⁶-10⁹ cells, and in certain persons it reached 10¹⁰ cells (primarily fecal enterococci). A considerable number of patients with third-degree acute radiation sickness had 10⁹ lactic-acid bacteria per gram in their intestine, with Lactobacillus casei and L. plantarum prevailing. The frequency of yeast isolation from patients' feces was 83%, with 10 to 10⁴ cells per 1 g of feces. Yeasts of the Candida genus, mainly Candida parapsilosis, prevailed. The species composition of the isolated microorganisms showed no substantial differences from the microbial community of healthy people. The intestinal content of persons exposed to radiation is characterized only by a greater amount of lactic-acid bacteria and enterococci as compared with healthy adults.

  8. Bulk Thermoelectric Materials Reinforced with SiC Whiskers

    NASA Astrophysics Data System (ADS)

    Akao, Takahiro; Fujiwara, Yuya; Tarui, Yuki; Onda, Tetsuhiko; Chen, Zhong-Chun

    2014-06-01

    SiC whiskers have been incorporated into the Zn4Sb3 compound as reinforcements to overcome its extremely brittle nature. The bulk samples were prepared by either hot-extrusion or hot-pressing techniques. The obtained products containing 1 vol.% to 5 vol.% SiC whiskers were confirmed to exhibit sound appearance, high density, and fine-grained microstructure. Mechanical properties such as the hardness and fracture resistance were improved by the addition of SiC whiskers, as a result of dispersion strengthening and microstructural refinement induced by a pinning effect. Furthermore, crack deflection and/or bridging/pullout mechanisms are invoked by the whiskers. Regarding the thermoelectric properties, Seebeck coefficient and electrical resistivity values comparable to those of the pure compound are retained over the entire range of added whisker amounts. However, the thermal conductivity increases with increasing amounts of SiC whiskers because of the much higher thermal conductivity of SiC relative to the Zn4Sb3 matrix. This results in a marked degradation of the dimensionless figure of merit in the samples with added SiC whiskers. Therefore, the optimum amount of SiC whiskers in the Zn4Sb3 matrix should be determined by balancing the mechanical properties and thermoelectric performance.

  9. [Food consumption and anthropometry related to the frailty syndrome in low-income community-living elderly in a large city].

    PubMed

    Mello, Amanda de Carvalho; Carvalho, Marilia Sá; Alves, Luciana Correia; Gomes, Viviane Pereira; Engstrom, Elyne Montenegro

    2017-08-21

    The aim of this study was to describe anthropometric and food intake data related to the frailty syndrome in the elderly. This was a cross-sectional study in individuals ≥ 60 years of age in a household survey in the Manguinhos neighborhood of Rio de Janeiro, Brazil (n = 137). Frailty syndrome was diagnosed according to Fried et al., anthropometric measures were taken, and a food frequency questionnaire was applied and the results compared to Brazilian Ministry of Health guidelines. In the pre-frail and frail groups, body mass index and measures of central adiposity showed higher levels, while lean muscle parameters showed lower values, proportional to the syndrome's gradation. Frail elderly consumed higher amounts of grains and lower amounts of beans and fruit; pre-frail elderly consumed more vegetables, dairy products, and high-sugar and high-fat foods; the two groups consumed similar amounts of meat. Thus, diagnosis of the syndrome, anthropometric evaluation, and dietary assessment should be included in health policies for the elderly, since they assist in early identification of risk and favor interventions for disease prevention and health and nutritional promotion.

  10. Pitch discrimination as a function of the inter-stimulus interval: Evidence against a simple model of perceptual memory

    NASA Astrophysics Data System (ADS)

    Demany, Laurent; Montandon, Gaspard; Semal, Catherine

    2003-04-01

    A listener's ability to compare two sounds separated by a silent time interval T is limited by a sum of "sensory noise" and "memory noise." The present work was intended to test a model according to which these two components of internal noise are independent and, for a given sensory continuum, the memory noise depends only on T. In three experiments using brief sounds (<80 ms), pitch discrimination performances were measured in terms of d' as a function of T (0.1-4 s) and a physical parameter affecting the amount of sensory noise (pitch salience). As T increased, d' first increased rapidly and then declined more slowly. According to the tested model, the relative decline of d' beyond the optimal value of T should have been slower when pitch salience was low (large amount of sensory noise) than when pitch salience was high (small amount of sensory noise). However, this prediction was disproved in each of the three experiments. It was also found, when a "roving" procedure was used, that the optimal value of T was markedly shorter for very brief tone bursts (6 sine cycles) than for longer tone bursts (30 sine cycles).
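
    One way to formalize the tested model, under the assumption of independent Gaussian sensory and memory noise (the notation below is mine, not the authors'):

```latex
% Illustrative formalization: \Delta is the mean pitch difference, \sigma_s the sensory
% noise, and \sigma_m(T) the memory noise assumed to grow with the interval T.
\[
d'(T) = \frac{\Delta}{\sqrt{\sigma_s^{2} + \sigma_m^{2}(T)}},
\qquad
\frac{d'(T)}{d'(T_{\mathrm{opt}})}
  = \sqrt{\frac{\sigma_s^{2} + \sigma_m^{2}(T_{\mathrm{opt}})}{\sigma_s^{2} + \sigma_m^{2}(T)}}.
\]
```

    Under this formalization a large sensory term σ_s dilutes the growth of σ_m(T), so low-salience conditions should show a shallower relative decline of d' beyond the optimal interval; this is the prediction that the three experiments disproved.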

  11. Effect of mask dead space and occlusion of mask holes on delivery of nebulized albuterol.

    PubMed

    Berlinski, Ariel

    2014-08-01

    Infants and children with respiratory conditions are often prescribed bronchodilators. Face masks are used to facilitate the administration of nebulized therapy in patients unable to use a mouthpiece. Masks incorporate holes into their design, and their occlusion during aerosol delivery has been a common practice. Masks are available in different sizes and different dead volumes. The aim of this study was to compare the effect of different degrees of occlusion of the mask holes and different mask dead space on the amount of nebulized albuterol available at the mouth opening in a model of a spontaneously breathing child. A breathing simulator mimicking infant (tidal volume [VT] = 50 mL, breathing frequency = 30 breaths/min, inspiratory-expiratory ratio [I:E] = 1:3), child (VT = 155 mL, breathing frequency = 25 breaths/min, I:E = 1:2), and adult (VT = 500 mL, breathing frequency = 15 breaths/min, I:E = 1:2) breathing patterns was connected to a collection filter hidden behind a face plate. A pediatric size mask and an adult size mask connected to a continuous output jet nebulizer were sealed to the face plate. Three nebulizers were loaded with albuterol sulfate (2.5 mg/3 mL) and operated with 6 L/min compressed air for 5 min. Experiments were repeated with different degrees of occlusion (0%, 50%, and 90%). Albuterol was extracted from the filter and measured with a spectrophotometer at 276 nm. Occlusion of the holes in the large mask did not increase the amount of albuterol in any of the breathing patterns. The amount of albuterol captured at the mouth opening did not change when the small mask was switched to the large mask, except with the breathing pattern of a child, and when the holes in the mask were 50% occluded (P = .02). Neither decreasing the dead space of the mask nor occluding the mask holes increased the amount of nebulized albuterol captured at the mouth opening.

  12. Characterizing differences in precipitation regimes of extreme wet and dry years: implications for climate change experiments.

    PubMed

    Knapp, Alan K; Hoover, David L; Wilcox, Kevin R; Avolio, Meghan L; Koerner, Sally E; La Pierre, Kimberly J; Loik, Michael E; Luo, Yiqi; Sala, Osvaldo E; Smith, Melinda D

    2015-02-03

    Climate change is intensifying the hydrologic cycle and is expected to increase the frequency of extreme wet and dry years. Beyond precipitation amount, extreme wet and dry years may differ in other ways, such as the number of precipitation events, event size, and the time between events. We assessed 1614 long-term (100 year) precipitation records from around the world to identify key attributes of precipitation regimes, besides amount, that distinguish statistically extreme wet from extreme dry years. In general, in regions where mean annual precipitation (MAP) exceeded 1000 mm, precipitation amounts in extreme wet and dry years differed from average years by ~40% and 30%, respectively. The magnitude of these deviations increased to >60% for dry years and to >150% for wet years in arid regions (MAP<500 mm). Extreme wet years were primarily distinguished from average and extreme dry years by the presence of multiple extreme (large) daily precipitation events (events >99th percentile of all events); these occurred twice as often in extreme wet years compared to average years. In contrast, these large precipitation events were rare in extreme dry years. Less important for distinguishing extreme wet from dry years were mean event size and frequency, or the number of dry days between events. However, extreme dry years were distinguished from average years by an increase in the number of dry days between events. These precipitation regime attributes consistently differed between extreme wet and dry years across 12 major terrestrial ecoregions from around the world, from deserts to the tropics. Thus, we recommend that climate change experiments and model simulations incorporate these differences in key precipitation regime attributes, as well as amount into treatments. This will allow experiments to more realistically simulate extreme precipitation years and more accurately assess the ecological consequences. © 2015 John Wiley & Sons Ltd.
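
    The kind of classification described here, flagging statistically extreme wet and dry years from annual totals and counting extreme (>99th percentile) daily events within them, can be sketched as follows; the percentile cut-offs and the synthetic record are illustrative assumptions rather than the study's exact definitions.

```python
import numpy as np


def classify_years(daily_precip_by_year, wet_pct=90, dry_pct=10, event_pct=99):
    """Flag extreme wet/dry years from annual totals and count extreme daily events.

    daily_precip_by_year: dict mapping year -> 1-D array of daily precipitation (mm).
    Thresholds are illustrative; the study defines extremes from 100-year records.
    """
    totals = {yr: float(np.sum(p)) for yr, p in daily_precip_by_year.items()}
    wet_cut = np.percentile(list(totals.values()), wet_pct)
    dry_cut = np.percentile(list(totals.values()), dry_pct)
    # Extreme-event threshold from all wet-day events across the whole record.
    all_events = np.concatenate([p[p > 0] for p in daily_precip_by_year.values()])
    event_cut = np.percentile(all_events, event_pct)

    summary = {}
    for yr, p in daily_precip_by_year.items():
        label = "wet" if totals[yr] >= wet_cut else "dry" if totals[yr] <= dry_cut else "average"
        summary[yr] = {
            "total_mm": totals[yr],
            "class": label,
            "n_extreme_events": int(np.sum(p > event_cut)),
        }
    return summary


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    record = {1900 + i: rng.gamma(shape=0.4, scale=8.0, size=365) for i in range(100)}
    flagged = classify_years(record)
    print({yr: v for yr, v in flagged.items() if v["class"] != "average"})
```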

  13. Evaluating the cloud radiative forcing over East Asia during summer simulated by CMIP5 models

    NASA Astrophysics Data System (ADS)

    Lin, Z.; Wang, Y.; Liu, X.

    2017-12-01

    A large part of the uncertainty in global climate models (GCMs) can be attributed to the representation of clouds and their radiative forcing (CRF). In this study, the simulated CRFs, total cloud fraction (CF) and cloud properties over East Asia from 20 CMIP5 AMIP models are evaluated and compared with multiple satellite observations, and the possible causes for the CRF biases in the CMIP5 models are then investigated. Based on the satellite observations, strong longwave CRF (LWCRF) and shortwave CRF (SWCRF) are found over Southwestern China, with a minimum SWCRF of less than -130 W m⁻², associated with the large cloud amount in that region. By contrast, weak CRFs are located over Northwest China and the Western Pacific region because of the smaller cloud amount. In Northeastern China, strong SWCRF and weak LWCRF are found, due to the dominant low-level cloud. In Eastern China, the CRFs are moderate due to the co-existence of multi-layer cloud. The CMIP5 models can broadly capture the structure of the CRFs in East Asia, with spatial correlation coefficients between 0.5 and 0.9, but most models underestimate the CRFs, which is strongly associated with the underestimation of cloud amount in the region. The performance of the CMIP5 models varies across the East Asian region, with larger deviations in Eastern China (EC). Further investigation suggests that underestimation of the cloud amount in EC leads to a weak bias in the CRFs there; however, this bias can be partially cancelled by the overestimation of CRF due to the excessive cloud optical depth (COD) simulated by the models. The annual cycle of the simulated CRF over Eastern China is also examined, and it is found that the CMIP5 models are unable to reproduce the northward migration of the CRF during the summer monsoon season, which is closely related to the northward shift of the East Asian summer monsoon rain belt.
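
    For reference, CRF is conventionally computed as the difference between clear-sky and all-sky top-of-atmosphere fluxes, and evaluations of the kind described typically rely on pattern correlations against satellite fields. The sketch below assumes hypothetical gridded arrays and follows the usual sign conventions; it is not the evaluation code used in the study.

```python
import numpy as np


def lw_crf(olr_clear, olr_all):
    """Longwave CRF: clear-sky minus all-sky outgoing longwave flux (W m^-2, usually positive)."""
    return olr_clear - olr_all


def sw_crf(sw_net_all, sw_net_clear):
    """Shortwave CRF: all-sky minus clear-sky net downward shortwave at TOA (usually negative)."""
    return sw_net_all - sw_net_clear


def pattern_correlation(model_field, obs_field):
    """Pearson pattern correlation between two gridded fields over valid grid points."""
    m, o = np.ravel(model_field), np.ravel(obs_field)
    mask = np.isfinite(m) & np.isfinite(o)
    return np.corrcoef(m[mask], o[mask])[0, 1]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    obs_swcrf = -60.0 + 20.0 * rng.standard_normal((40, 60))            # hypothetical satellite SWCRF
    model_swcrf = 0.7 * obs_swcrf + 5.0 * rng.standard_normal((40, 60))  # weak-biased model field
    print(f"mean bias    = {np.mean(model_swcrf - obs_swcrf):+.1f} W m^-2")
    print(f"pattern corr = {pattern_correlation(model_swcrf, obs_swcrf):.2f}")
```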

  14. Absorption and distribution of lycopene in rat colon.

    PubMed

    Oshima, S; Inakuma, T; Narisawa, T

    1999-01-01

    Colonic absorption and distribution of lycopene, which inhibited rat colon carcinogenesis in our previous studies, were investigated in Sprague-Dawley rats. Three groups of six rats each, with or without a single-barreled colostomy at the mid colon, were given a single intragastric or intracolonic dose of 0.2 mL of corn oil containing 12 mg of lycopene. Twenty-four hours later, all rats were sacrificed and the blood and some tissues were collected. The lycopene contents of the samples were assayed by HPLC. Lycopene was detected in appreciable amounts in the liver, but only in trace amounts in the serum, of all rats treated with an intracolonic dose of lycopene as well as of rats given an intragastric dose. After intragastric lycopene treatment, lycopene was detected in the mucosa of the proximal colon and of the distal colon of the colostomized rats, whose distal colon had been excluded from the fecal stream. A large amount of lycopene was recovered in the feces. None was detected in any sample from the control rats treated with an intragastric or intracolonic dose of plain corn oil. The results suggest that lycopene is absorbed from the colon and also from the small intestine. It might be concluded that both routes of absorption contribute comparably to lycopene accumulation in the colonic mucosa after ingestion of this carotenoid.

  15. Relationship Between Nutritional Knowledge and the Amount of Sugar-Sweetened Beverages Consumed in Los Angeles County.

    PubMed

    Gase, Lauren N; Robles, Brenda; Barragan, Noel C; Kuo, Tony

    2014-08-01

    Although consumption of sugar-sweetened beverages (SSBs) is associated with many negative health outcomes, including obesity, diabetes, and cardiovascular disease, the relationship between consumer nutritional knowledge and the amount consumed is poorly understood. The objective of this study was to examine the relationship between knowledge of daily calorie recommendations and the amount of SSBs consumed in a large, economically and racially diverse sample of adults recruited at selected Metro subway and bus shelters in Los Angeles County. In June 2012, the Los Angeles County Department of Public Health conducted street intercept surveys to assess food attitudes and consumption behaviors and public opinions related to a recent 8-week health marketing campaign targeting SSB consumption. Descriptive and comparative analyses were conducted, including a negative binomial regression model, to examine the relationship between knowledge of the daily calorie recommendations and the amount of SSBs consumed. Among survey respondents (n = 1,041), less than one third correctly identified the daily calorie recommendations for a typical adult. After controlling for sociodemographics and weight status, respondents who correctly identified recommended calorie needs reported, on average, drinking nine fewer SSBs per month than respondents who did not. Results suggest that efforts to reduce SSB consumption might benefit from the inclusion of educational interventions that empower consumers to make healthy choices. © 2014 Society for Public Health Education.
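
    A minimal sketch of the kind of count model described (monthly SSB servings regressed on calorie-recommendation knowledge with sociodemographic controls) is shown below; the data frame and variable names are hypothetical, not the survey's actual coding.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical survey extract; column names are illustrative, not the actual instrument.
df = pd.DataFrame({
    "ssb_per_month":  [30, 12, 45, 8, 20, 60, 5, 15],
    "knows_calories": [0, 1, 0, 1, 1, 0, 1, 0],   # 1 = correctly identified recommendation
    "age":            [25, 34, 45, 52, 29, 41, 60, 37],
    "bmi":            [27.0, 22.5, 31.2, 24.8, 26.1, 33.0, 23.4, 28.9],
})

# Negative binomial GLM for an over-dispersed count outcome.
model = smf.glm(
    "ssb_per_month ~ knows_calories + age + bmi",
    data=df,
    family=sm.families.NegativeBinomial(),
).fit()
print(model.summary())
```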

  16. Proteinortho: Detection of (Co-)orthologs in large-scale analysis

    PubMed Central

    2011-01-01

    Background Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools as brute-force approaches with quadratic memory requirements become infeasible in practise. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. Results The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Conclusions Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware. PMID:21526987
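
    The reciprocal best alignment idea at the heart of the heuristic can be illustrated with a small sketch; the input dictionaries stand in for parsed all-vs-all alignment scores and are hypothetical, and the sketch omits Proteinortho's additional filtering and co-ortholog clustering.

```python
def best_hits(hits):
    """Reduce {query: [(subject, bit_score), ...]} to {query: best-scoring subject}."""
    return {q: max(subjects, key=lambda s: s[1])[0]
            for q, subjects in hits.items() if subjects}


def reciprocal_best_hits(hits_a_vs_b, hits_b_vs_a):
    """Return (gene_a, gene_b) pairs that are each other's best hit."""
    best_ab = best_hits(hits_a_vs_b)
    best_ba = best_hits(hits_b_vs_a)
    return [(a, b) for a, b in best_ab.items() if best_ba.get(b) == a]


if __name__ == "__main__":
    # Hypothetical bit scores from comparing the proteomes of genomes A and B.
    a_vs_b = {"A1": [("B1", 410.0), ("B2", 55.0)], "A2": [("B3", 120.0)]}
    b_vs_a = {"B1": [("A1", 405.0)], "B3": [("A2", 118.0), ("A1", 40.0)]}
    print(reciprocal_best_hits(a_vs_b, b_vs_a))  # [('A1', 'B1'), ('A2', 'B3')]
```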

  17. Information Fusion of Conflicting Input Data.

    PubMed

    Mönks, Uwe; Dörksen, Helene; Lohweg, Volker; Hübner, Michael

    2016-10-29

    Sensors, and also actuators or external sources such as databases, serve as data sources in order to realise condition monitoring of industrial applications or the acquisition of characteristic parameters like production speed or reject rate. Modern facilities create such a large amount of complex data that a machine operator is unable to comprehend and process the information contained in the data. Thus, information fusion mechanisms gain increasing importance. Besides the management of large amounts of data, further challenges towards the fusion algorithms arise from epistemic uncertainties (incomplete knowledge) in the input signals as well as conflicts between them. These aspects must be considered during information processing to obtain reliable results, which are in accordance with the real world. The analysis of the scientific state of the art shows that current solutions fulfil said requirements at most only partly. This article proposes the multilayered information fusion system MACRO (multilayer attribute-based conflict-reducing observation) employing the μ BalTLCS (fuzzified balanced two-layer conflict solving) fusion algorithm to reduce the impact of conflicts on the fusion result. The performance of the contribution is shown by its evaluation in the scope of a machine condition monitoring application under laboratory conditions. Here, the MACRO system yields the best results compared to state-of-the-art fusion mechanisms. The utilised data is published and freely accessible.

  18. Anomalous Ion Heating, Intrinsic and Induced Rotation in the Pegasus Toroidal Experiment

    NASA Astrophysics Data System (ADS)

    Burke, M. G.; Barr, J. L.; Bongard, M. W.; Fonck, R. J.; Hinson, E. T.; Perry, J. M.; Redd, A. J.; Thome, K. E.

    2014-10-01

    Pegasus plasmas are initiated through either standard, MHD stable, inductive current drive or non-solenoidal local helicity injection (LHI) current drive with strong reconnection activity, providing a rich environment to study ion dynamics. During LHI discharges, a large amount of anomalous impurity ion heating has been observed, with Ti ~ 800 eV but Te < 100 eV. The ion heating is hypothesized to be a result of large-scale magnetic reconnection activity, as the amount of heating scales with increasing fluctuation amplitude of the dominant, edge localized, n = 1 MHD mode. Chordal Ti spatial profiles indicate centrally peaked temperatures, suggesting a region of good confinement near the plasma core surrounded by a stochastic region. LHI plasmas are observed to rotate, perhaps due to an inward radial current generated by the stochastization of the plasma edge by the injected current streams. H-mode plasmas are initiated using a combination of high-field side fueling and Ohmic current drive. This regime shows a significant increase in rotation shear compared to L-mode plasmas. In addition, these plasmas have been observed to rotate in the counter-Ip direction without any external momentum sources. The intrinsic rotation direction is consistent with predictions from the saturated Ohmic confinement regime. Work supported by US DOE Grant DE-FG02-96ER54375.

  19. Integrated process development-a robust, rapid method for inclusion body harvesting and processing at the microscale level.

    PubMed

    Walther, Cornelia; Kellner, Martin; Berkemeyer, Matthias; Brocard, Cécile; Dürauer, Astrid

    2017-10-21

    Escherichia coli stores large amounts of highly pure product within inclusion bodies (IBs). To take advantage of this beneficial feature, after cell disintegration, the first step to optimal product recovery is efficient IB preparation. This step is also important in evaluating upstream optimization and process development, due to the potential impact of bioprocessing conditions on product quality and on the nanoscale properties of IBs. Proper IB preparation is often neglected, due to laboratory-scale methods requiring large amounts of materials and labor. Miniaturization and parallelization can accelerate analyses of individual processing steps and provide a deeper understanding of up- and downstream processing interdependencies. Consequently, reproducible, predictive microscale methods are in demand. In the present study, we complemented a recently established high-throughput cell disruption method with a microscale method for preparing purified IBs. This preparation provided results comparable to laboratory-scale IB processing, regarding impurity depletion, and product loss. Furthermore, with this method, we performed a "design of experiments" study to demonstrate the influence of fermentation conditions on the performance of subsequent downstream steps and product quality. We showed that this approach provided a 300-fold reduction in material consumption for each fermentation condition and a 24-fold reduction in processing time for 24 samples.

  20. Information Fusion of Conflicting Input Data

    PubMed Central

    Mönks, Uwe; Dörksen, Helene; Lohweg, Volker; Hübner, Michael

    2016-01-01

    Sensors, and also actuators or external sources such as databases, serve as data sources in order to realise condition monitoring of industrial applications or the acquisition of characteristic parameters like production speed or reject rate. Modern facilities create such a large amount of complex data that a machine operator is unable to comprehend and process the information contained in the data. Thus, information fusion mechanisms gain increasing importance. Besides the management of large amounts of data, further challenges towards the fusion algorithms arise from epistemic uncertainties (incomplete knowledge) in the input signals as well as conflicts between them. These aspects must be considered during information processing to obtain reliable results, which are in accordance with the real world. The analysis of the scientific state of the art shows that current solutions fulfil said requirements at most only partly. This article proposes the multilayered information fusion system MACRO (multilayer attribute-based conflict-reducing observation) employing the μBalTLCS (fuzzified balanced two-layer conflict solving) fusion algorithm to reduce the impact of conflicts on the fusion result. The performance of the contribution is shown by its evaluation in the scope of a machine condition monitoring application under laboratory conditions. Here, the MACRO system yields the best results compared to state-of-the-art fusion mechanisms. The utilised data is published and freely accessible. PMID:27801874

  1. Evaluation of new superficially porous particles with carbon core and nanodiamond-polymer shell for proteins characterization.

    PubMed

    Bobály, Balázs; Guillarme, Davy; Fekete, Szabolcs

    2015-02-01

    A new superficially porous material possessing a carbon core, a nanodiamond-polymer shell and a pore size of 180 Å was evaluated for the analysis of large proteins. Because the stationary phase on this new support contains a certain amount of protonated amino groups within the shell structure, the resulting retention mechanism is most probably a mix of reversed-phase and anion-exchange interactions. However, under the applied conditions (0.1-0.5% TFA in the mobile phase), it seemed that the main retention mechanism for proteins was hydrophobic interaction with the C18 alkyl chains on this carbon-based material. In this study, we demonstrated that there was no need to increase mobile phase temperature, as the peak capacity was not modified considerably between 30 and 80 °C for model proteins. Thus, the risk of thermal on-column degradation or denaturation of large proteins is not relevant. Another important difference compared to silica-based materials is that this carbon-based column requires a larger amount of TFA, between 0.2 and 0.5%. Finally, it is important to mention that the selectivity between closely related proteins (oxidized, native and reduced forms of Interferon α-2A variants) could be changed mostly through mobile phase temperature. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. The unexpectedly large dust and gas content of quiescent galaxies at z > 1.4

    NASA Astrophysics Data System (ADS)

    Gobat, R.; Daddi, E.; Magdis, G.; Bournaud, F.; Sargent, M.; Martig, M.; Jin, S.; Finoguenov, A.; Béthermin, M.; Hwang, H. S.; Renzini, A.; Wilson, G. W.; Aretxaga, I.; Yun, M.; Strazzullo, V.; Valentino, F.

    2018-03-01

    Early-type galaxies (ETGs) contain most of the stars present in the local Universe and, above a stellar mass content of 5 × 10¹⁰ solar masses, vastly outnumber spiral galaxies such as the Milky Way. These massive spheroidal galaxies have, in the present day, very little gas or dust in proportion to their mass [1], and their stellar populations have been evolving passively for over 10 billion years. The physical mechanisms that led to the termination of star formation in these galaxies and the depletion of their interstellar medium remain largely conjectural. In particular, there are currently no direct measurements of the amount of residual gas that might still be present in newly quiescent spheroidals at high redshift [2]. Here we show that quiescent ETGs at redshift z ≈ 1.8, close to their epoch of quenching, contained at least two orders of magnitude more dust at a fixed stellar mass compared with local ETGs. This implies the presence of substantial amounts of gas (5-10%), which has been consumed less efficiently than in more active galaxies, probably due to their spheroidal morphology, consistent with our simulations. This lower star formation efficiency, combined with an extended hot gas halo possibly maintained by persistent feedback from an active galactic nucleus, keeps ETGs mostly passive throughout cosmic time.

  3. Comparing forest fragmentation and its drivers in China and the USA with Globcover v2.2

    USGS Publications Warehouse

    Chen, Mingshi; Mao, Lijun; Zhou, Chunguo; Vogelmann, James E.; Zhu, Zhiliang

    2010-01-01

    Forest loss and fragmentation are of major concern to the international community, in large part because they impact so many important environmental processes. The main objective of this study was to assess the differences in forest fragmentation patterns and drivers between China and the conterminous United States (USA). Using the latest 300-m resolution global land cover product, Globcover v2.2, a comparative analysis of forest fragmentation patterns and drivers was made. The fragmentation patterns were characterized by using a forest fragmentation model built on the sliding window analysis technique in association with landscape indices. Results showed that China’s forests were substantially more fragmented than those of the USA. This was evidenced by a large difference in the amount of interior forest area share, with China having 48% interior forest versus the 66% for the USA. China’s forest fragmentation was primarily attributed to anthropogenic disturbances, driven particularly by agricultural expansion from an increasing and large population, as well as poor forest management practices. In contrast, USA forests were principally fragmented by natural land cover types. However, USA urban sprawl contributed more to forest fragmentation than in China. This is closely tied to the USA’s economy, lifestyle and institutional processes. Fragmentation maps were generated from this study, which provide valuable insights and implications regarding habitat planning for rare and endangered species. Such maps enable development of strategic plans for sustainable forest management by identifying areas with high amounts of human-induced fragmentation, which improve risk assessments and enable better targeting for protection and remediation efforts. Because forest fragmentation is a long-term, complex process that is highly related to political, institutional, economic and philosophical arenas, both nations need to take effective and comprehensive measures to mitigate the negative effects of forest loss and fragmentation on the existing forest ecosystems.
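
    The sliding-window idea behind such fragmentation models can be illustrated with a generic interior-forest calculation on a hypothetical binary forest raster; this is only a sketch of the technique, not the Globcover v2.2 processing chain used in the study, and the window size and interior threshold are assumptions.

      # Generic sliding-window fragmentation sketch (hypothetical raster, not Globcover v2.2).
      import numpy as np
      from scipy.ndimage import uniform_filter

      rng = np.random.default_rng(0)
      forest = (rng.random((200, 200)) > 0.4).astype(float)    # 1 = forest, 0 = non-forest

      window = 9                                                # assumed 9x9 pixel window
      pf = uniform_filter(forest, size=window, mode="nearest")  # forest proportion per window

      # A forest pixel counts as "interior" if its surrounding window is (nearly) fully forested.
      interior = (forest == 1) & (pf >= 0.9)
      interior_share = interior.sum() / forest.sum()
      print(f"interior forest share: {interior_share:.2%}")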

  4. Genome Data Exploration Using Correspondence Analysis

    PubMed Central

    Tekaia, Fredj

    2016-01-01

    Recent developments of sequencing technologies that allow the production of massive amounts of genomic and genotyping data have highlighted the need for synthetic data representation and pattern recognition methods that can mine and help discovering biologically meaningful knowledge included in such large data sets. Correspondence analysis (CA) is an exploratory descriptive method designed to analyze two-way data tables, including some measure of association between rows and columns. It constructs linear combinations of variables, known as factors. CA has been used for decades to study high-dimensional data, and remarkable inferences from large data tables were obtained by reducing the dimensionality to a few orthogonal factors that correspond to the largest amount of variability in the data. Herein, I review CA and highlight its use by considering examples in handling high-dimensional data that can be constructed from genomic and genetic studies. Examples in amino acid compositions of large sets of species (viruses, phages, yeast, and fungi) as well as an example related to pairwise shared orthologs in a set of yeast and fungal species, as obtained from their proteome comparisons, are considered. For the first time, results show striking segregations between yeasts and fungi as well as between viruses and phages. Distributions obtained from shared orthologs show clusters of yeast and fungal species corresponding to their phylogenetic relationships. A direct comparison with the principal component analysis method is discussed using a recently published example of genotyping data related to newly discovered traces of an ancient hominid that was compared to modern human populations in the search for ancestral similarities. CA offers more detailed results highlighting links between modern humans and the ancient hominid and their characterizations. Compared to the popular principal component analysis method, CA allows easier and more effective interpretation of results, particularly by the ability of relating individual patterns with their corresponding characteristic variables. PMID:27279736
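
    As a rough illustration of the mechanics summarized above (not the author's implementation), correspondence analysis of a two-way count table can be sketched as a singular value decomposition of the matrix of standardized residuals; the small table below is entirely hypothetical.

      # Minimal correspondence-analysis sketch (illustrative only; made-up counts).
      import numpy as np

      X = np.array([[30., 12.,  8.],
                    [10., 25., 15.],
                    [ 5., 18., 40.]])      # hypothetical two-way contingency table

      P = X / X.sum()                      # correspondence matrix
      r = P.sum(axis=1)                    # row masses
      c = P.sum(axis=0)                    # column masses
      S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))    # standardized residuals

      U, s, Vt = np.linalg.svd(S, full_matrices=False)
      row_coords = (U * s) / np.sqrt(r)[:, None]             # principal row coordinates
      col_coords = (Vt.T * s) / np.sqrt(c)[:, None]          # principal column coordinates
      explained = s**2 / (s**2).sum()      # share of total inertia carried by each factor
      print(np.round(explained, 3))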

  5. Comparing forest fragmentation and its drivers in China and the USA with Globcover v2.2.

    PubMed

    Li, Mingshi; Mao, Lijun; Zhou, Chunguo; Vogelmann, James E; Zhu, Zhiliang

    2010-12-01

    Forest loss and fragmentation are of major concern to the international community, in large part because they impact so many important environmental processes. The main objective of this study was to assess the differences in forest fragmentation patterns and drivers between China and the conterminous United States (USA). Using the latest 300-m resolution global land cover product, Globcover v2.2, a comparative analysis of forest fragmentation patterns and drivers was made. The fragmentation patterns were characterized by using a forest fragmentation model built on the sliding window analysis technique in association with landscape indices. Results showed that China's forests were substantially more fragmented than those of the USA. This was evidenced by a large difference in the amount of interior forest area share, with China having 48% interior forest versus the 66% for the USA. China's forest fragmentation was primarily attributed to anthropogenic disturbances, driven particularly by agricultural expansion from an increasing and large population, as well as poor forest management practices. In contrast, USA forests were principally fragmented by natural land cover types. However, USA urban sprawl contributed more to forest fragmentation than in China. This is closely tied to the USA's economy, lifestyle and institutional processes. Fragmentation maps were generated from this study, which provide valuable insights and implications regarding habitat planning for rare and endangered species. Such maps enable development of strategic plans for sustainable forest management by identifying areas with high amounts of human-induced fragmentation, which improve risk assessments and enable better targeting for protection and remediation efforts. Because forest fragmentation is a long-term, complex process that is highly related to political, institutional, economic and philosophical arenas, both nations need to take effective and comprehensive measures to mitigate the negative effects of forest loss and fragmentation on the existing forest ecosystems. Copyright © 2010 Elsevier Ltd. All rights reserved.

  6. A Comparison Study of Multivariate Fixed Models and Gene Association with Multiple Traits (GAMuT) for Next-Generation Sequencing

    PubMed Central

    Chiu, Chi-yang; Jung, Jeesun; Wang, Yifan; Weeks, Daniel E.; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Amos, Christopher I.; Mills, James L.; Boehnke, Michael; Xiong, Momiao; Fan, Ruzong

    2016-01-01

    In this paper, extensive simulations are performed to compare two statistical methods to analyze multiple correlated quantitative phenotypes: (1) approximate F-distributed tests of multivariate functional linear models (MFLM) and additive models of multivariate analysis of variance (MANOVA), and (2) Gene Association with Multiple Traits (GAMuT) for association testing of high-dimensional genotype data. It is shown that approximate F-distributed tests of MFLM and MANOVA have higher power and are more appropriate for major gene association analysis (i.e., scenarios in which some genetic variants have relatively large effects on the phenotypes); GAMuT has higher power and is more appropriate for analyzing polygenic effects (i.e., effects from a large number of genetic variants each of which contributes a small amount to the phenotypes). MFLM and MANOVA are very flexible and can be used to perform association analysis for: (i) rare variants, (ii) common variants, and (iii) a combination of rare and common variants. Although GAMuT was designed to analyze rare variants, it can be applied to analyze a combination of rare and common variants and it performs well when (1) the number of genetic variants is large and (2) each variant contributes a small amount to the phenotypes (i.e., polygenes). MFLM and MANOVA are fixed effect models which perform well for major gene association analysis. GAMuT can be viewed as an extension of sequence kernel association tests (SKAT). Both GAMuT and SKAT are more appropriate for analyzing polygenic effects and they perform well not only in the rare variant case, but also in the case of a combination of rare and common variants. Data analyses of European cohorts and the Trinity Students Study are presented to compare the performance of the two methods. PMID:27917525

  7. Large-ion lithophile elements delivered by saline fluids to the sub-arc mantle

    NASA Astrophysics Data System (ADS)

    Kawamoto, Tatsuhiko; Mibe, Kenji; Bureau, Hélène; Reguer, Solenn; Mocuta, Cristian; Kubsky, Stefan; Thiaudière, Dominique; Ono, Shigeaki; Kogiso, Tetsu

    2014-12-01

    Geochemical signatures of arc basalts can be explained by addition of aqueous fluids, melts, and/or supercritical fluids from the subducting slab to the sub-arc mantle. Partitioning of large-ion lithophile elements between aqueous fluids and melts is crucial as these two liquid phases are present in the sub-arc pressure-temperature conditions. Using a micro-focused synchrotron X-ray beam, in situ X-ray fluorescence (XRF) spectra were obtained from aqueous fluids and haplogranite or jadeite melts at 0.3 to 1.3 GPa and 730°C to 830°C under varied concentrations of (Na, K)Cl (0 to 25 wt.%). Fluid/melt partition coefficients (D) were calculated for Pb, Rb, and Sr. There was a positive correlation between the D values and pressure, as well as between the D values and salinity. As compared to the saline fluids with 25 wt.% (Na, K)Cl, the Cl-free aqueous fluids can only dissolve one tenth (Pb, Rb) to one fifth (Sr) of the amount of large-ion lithophile elements when they coexist with the melts. In the systems with 13 to 25 wt.% (Na, K)Cl, the D values were greater than unity, which is indicative of the capacity of such highly saline fluids to effectively transfer Pb and Rb. Enrichment of large-ion lithophile elements such as Pb and Rb in arc basalts relative to mid-oceanic ridge basalts (MORB) has been attributed to mantle source fertilization by aqueous fluids from dehydrating oceanic plates. Such aqueous fluids are likely to contain Cl, although the amount remains to be quantified.

  8. A case for automated tape in clinical imaging.

    PubMed

    Bookman, G; Baune, D

    1998-08-01

    Electronic archiving of radiology images over many years will require many terabytes of storage with a need for rapid retrieval of these images. As more large PACS installations are implemented, a data crisis occurs. The ability to store this large amount of data using the traditional method of optical jukeboxes or online disk alone becomes an unworkable solution. The amount of floor space, the number of optical jukeboxes, and the off-line shelf storage required to store the images become unmanageable. With the recent advances in tape and tape drives, the use of tape for long term storage of PACS data has become the preferred alternative. A PACS system consisting of a centrally managed combination of RAID disk, software and, at the heart of the system, tape presents a solution that for the first time solves the problems of multi-modality high-end PACS, non-DICOM image, electronic medical record and ADT data storage. This paper will examine the installation of the University of Utah, Department of Radiology PACS system and the integration of an automated tape archive. The tape archive is also capable of storing data other than traditional PACS data. The implementation of an automated data archive to serve the many other needs of a large hospital will also be discussed. This will include the integration of a filmless cardiology department and the backup/archival needs of a traditional MIS department. The need for high bandwidth to tape with a large RAID cache will be examined, as well as how, with an interface to a RIS pre-fetch engine, tape can be a superior solution to optical platters or other archival solutions. The data management software will be discussed in detail. The performance and cost of RAID disk cache and automated tape compared to a solution that includes optical will be examined.

  9. Antibody recognition of the glycoprotein g of viral haemorrhagic septicemia virus (VHSV) purified in large amounts from insect larvae

    PubMed Central

    2011-01-01

    Background There are currently no purification methods capable of producing the large amounts of fish rhabdoviral glycoprotein G (gpG) required for diagnosis and immunisation purposes or for studying the structure and molecular mechanisms of action of this molecule (i.e., pH-dependent membrane fusion). As a result of the unavailability of large amounts of the gpG from viral haemorrhagic septicaemia rhabdovirus (VHSV), one of the most dangerous viruses affecting cultured salmonid species, research interests in this field are severely hampered. Previous purification methods to obtain recombinant gpG from VHSV in E. coli, yeast and baculovirus grown in insect cells have not produced soluble conformations or acceptable yields. The development of large-scale purification methods for gpGs will also further research into other fish rhabdoviruses, such as infectious haematopoietic necrosis virus (IHNV), spring carp viremia virus (SVCV), hirame rhabdovirus (HIRRV) and snakehead rhabdovirus (SHRV). Findings Here we designed a method to produce milligram amounts of soluble VHSV gpG. Only the transmembrane and carboxy terminal-deleted (amino acid 21 to 465) gpG was efficiently expressed in insect larvae. Recognition of G21-465 by β-mercaptoethanol-dependent neutralizing monoclonal antibodies (N-MAbs) and pH-dependent recognition by sera from VHSV-hyperimmunized or VHSV-infected rainbow trout (Oncorhynchus mykiss) were demonstrated. Conclusions Given that the purified G21-465 conserved some of its most important properties, this method might be suitable for the large-scale production of fish rhabdoviral gpGs for use in diagnosis, fusion and antigenicity studies. PMID:21693048

  10. Antibody recognition of the glycoprotein g of viral haemorrhagic septicemia virus (VHSV) purified in large amounts from insect larvae.

    PubMed

    Encinas, Paloma; Gomez-Sebastian, Silvia; Nunez, Maria Carmen; Gomez-Casado, Eduardo; Escribano, Jose M; Estepa, Amparo; Coll, Julio

    2011-06-21

    There are currently no purification methods capable of producing the large amounts of fish rhabdoviral glycoprotein G (gpG) required for diagnosis and immunisation purposes or for studying the structure and molecular mechanisms of action of this molecule (i.e., pH-dependent membrane fusion). As a result of the unavailability of large amounts of the gpG from viral haemorrhagic septicaemia rhabdovirus (VHSV), one of the most dangerous viruses affecting cultured salmonid species, research interests in this field are severely hampered. Previous purification methods to obtain recombinant gpG from VHSV in E. coli, yeast and baculovirus grown in insect cells have not produced soluble conformations or acceptable yields. The development of large-scale purification methods for gpGs will also further research into other fish rhabdoviruses, such as infectious haematopoietic necrosis virus (IHNV), spring carp viremia virus (SVCV), hirame rhabdovirus (HIRRV) and snakehead rhabdovirus (SHRV). Here we designed a method to produce milligram amounts of soluble VHSV gpG. Only the transmembrane and carboxy terminal-deleted (amino acid 21 to 465) gpG was efficiently expressed in insect larvae. Recognition of G21-465 by β-mercaptoethanol-dependent neutralizing monoclonal antibodies (N-MAbs) and pH-dependent recognition by sera from VHSV-hyperimmunized or VHSV-infected rainbow trout (Oncorhynchus mykiss) were demonstrated. Given that the purified G21-465 conserved some of its most important properties, this method might be suitable for the large-scale production of fish rhabdoviral gpGs for use in diagnosis, fusion and antigenicity studies.

  11. Rapid MALDI-TOF Mass Spectrometry Strain Typing during a Large Outbreak of Shiga-Toxigenic Escherichia coli

    PubMed Central

    Christner, Martin; Trusch, Maria; Rohde, Holger; Kwiatkowski, Marcel; Schlüter, Hartmut; Wolters, Manuel; Aepfelbacher, Martin; Hentschke, Moritz

    2014-01-01

    Background In 2011 northern Germany experienced a large outbreak of Shiga-Toxigenic Escherichia coli O104:H4. The large amount of samples sent to microbiology laboratories for epidemiological assessment highlighted the importance of fast and inexpensive typing procedures. We have therefore evaluated the applicability of a MALDI-TOF mass spectrometry based strategy for outbreak strain identification. Methods Specific peaks in the outbreak strain’s spectrum were identified by comparative analysis of archived pre-outbreak spectra that had been acquired for routine species-level identification. Proteins underlying these discriminatory peaks were identified by liquid chromatography tandem mass spectrometry and validated against publicly available databases. The resulting typing scheme was evaluated against PCR genotyping with 294 E. coli isolates from clinical samples collected during the outbreak. Results Comparative spectrum analysis revealed two characteristic peaks at m/z 6711 and m/z 10883. The underlying proteins were found to be of low prevalence among genome sequenced E. coli strains. Marker peak detection correctly classified 292 of 293 study isolates, including all 104 outbreak isolates. Conclusions MALDI-TOF mass spectrometry allowed for reliable outbreak strain identification during a large outbreak of Shiga-Toxigenic E. coli. The applied typing strategy could probably be adapted to other typing tasks and might facilitate epidemiological surveys as part of the routine pathogen identification workflow. PMID:25003758
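
    A toy version of the marker-peak classification described above might look like the following; only the two m/z values come from the abstract, while the mass tolerance and peak lists are invented for illustration.

      # Toy marker-peak typing sketch: call an isolate "outbreak-like" when both
      # characteristic peaks (m/z 6711 and 10883, from the abstract) are present.
      MARKERS = (6711.0, 10883.0)
      TOLERANCE = 5.0   # assumed mass tolerance; not specified here

      def has_peak(peak_list, target, tol=TOLERANCE):
          return any(abs(mz - target) <= tol for mz in peak_list)

      def classify(peak_list):
          return "outbreak strain" if all(has_peak(peak_list, m) for m in MARKERS) else "other"

      print(classify([4365.2, 6710.6, 9742.1, 10884.3]))   # -> outbreak strain
      print(classify([4365.2, 9742.1, 10884.3]))           # -> other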

  12. Engineering large cartilage tissues using dynamic bioreactor culture at defined oxygen conditions.

    PubMed

    Daly, Andrew C; Sathy, Binulal N; Kelly, Daniel J

    2018-01-01

    Mesenchymal stem cells maintained in appropriate culture conditions are capable of producing robust cartilage tissue. However, gradients in nutrient availability that arise during three-dimensional culture can result in the development of spatially inhomogeneous cartilage tissues with core regions devoid of matrix. Previous attempts at developing dynamic culture systems to overcome these limitations have reported suppression of mesenchymal stem cell chondrogenesis compared to static conditions. We hypothesize that by modulating oxygen availability during bioreactor culture, it is possible to engineer cartilage tissues of scale. The objective of this study was to determine whether dynamic bioreactor culture, at defined oxygen conditions, could facilitate the development of large, spatially homogeneous cartilage tissues using mesenchymal stem cell-laden hydrogels. A dynamic culture regime was directly compared to static conditions for its capacity to support chondrogenesis of mesenchymal stem cells in both small and large alginate hydrogels. The influence of external oxygen tension on the response to the dynamic culture conditions was explored by performing the experiment at 20% O2 and 3% O2. At 20% O2, dynamic culture significantly suppressed chondrogenesis in engineered tissues of all sizes. In contrast, at 3% O2 dynamic culture significantly enhanced the distribution and amount of cartilage matrix components (sulphated glycosaminoglycan and collagen II) in larger constructs compared to static conditions. Taken together, these results demonstrate that dynamic culture regimes that provide adequate nutrient availability and a low oxygen environment can be employed to engineer large homogeneous cartilage tissues. Such culture systems could facilitate the scaling up of cartilage tissue engineering strategies towards clinically relevant dimensions.

  13. Computational fluid dynamics study of the variable-pitch split-blade fan concept

    NASA Technical Reports Server (NTRS)

    Kepler, C. E.; Elmquist, A. R.; Davis, R. L.

    1992-01-01

    A computational fluid dynamics study was conducted to evaluate the feasibility of the variable-pitch split-blade supersonic fan concept. This fan configuration was conceived as a means to enable a supersonic fan to switch from the supersonic through-flow type of operation at high speeds to a conventional fan with subsonic inflow and outflow at low speeds. During this off-design, low-speed mode of operation, the fan would operate with a substantial static pressure rise across the blade row like a conventional transonic fan; the front (variable-pitch) blade would be aligned with the incoming flow, and the aft blade would remain fixed in the position set by the supersonic design conditions. Because of these geometrical features, this low-speed configuration would inherently have a large amount of turning and, thereby, would have the potential for a large total pressure increase in a single stage. Such a high-turning blade configuration is prone to flow separation; it was hoped that the channeling of the flow between the blades would act like a slotted wing and help alleviate this problem. A total of 20 blade configurations representing various supersonic and transonic configurations were evaluated using a Navier-Stokes CFD program called ADAPTNS, chosen for its adaptive grid features. The flow fields generated by this computational procedure were processed by another data reduction program which calculated average flow properties and simulated fan performance. These results were employed to make quantitative comparisons and evaluations of blade performance. The supersonic split-blade configurations generated performance comparable to a single-blade supersonic, through-flow fan configuration. Simulated rotor total pressure ratios of the order of 2.5 or better were achieved for Mach 2.0 inflow conditions. The corresponding fan efficiencies were approximately 75 percent or better. The transonic split-blade configurations, with their large amounts of turning, were able to achieve simulated total pressure ratios of 3.0 or better with subsonic inflow conditions. However, these configurations had large losses and fan efficiencies only in the 70-percent range, with large separated regions and low-velocity wakes. Additional turning and diffusion of this flow in a subsequent stator row would probably be very inefficient. The high total pressure ratios indicated by the rotor performance would be substantially reduced by the stators, and the stage efficiency would be substantially lower. Such performance leaves this dual-mode fan concept less attractive than originally postulated.

  14. Synthesis of Large and Few Atomic Layers of Hexagonal Boron Nitride on Melted Copper

    PubMed Central

    Khan, Majharul Haque; Huang, Zhenguo; Xiao, Feng; Casillas, Gilberto; Chen, Zhixin; Molino, Paul J.; Liu, Hua Kun

    2015-01-01

    Hexagonal boron nitride nanosheets (h-BNNS) have been proposed as an ideal substrate for graphene-based electronic devices, but the synthesis of large and homogeneous h-BNNS is still challenging. In this contribution, we report a facile synthesis of few-layer h-BNNS on melted copper via an atmospheric pressure chemical vapor deposition process. Comparative studies confirm the advantage of using melted copper over solid copper as a catalyst substrate. The former leads to the formation of single crystalline h-BNNS that is several microns in size and mostly in mono- and bi-layer forms, in contrast to the polycrystalline and mixed multiple layers (1–10) yielded by the latter. This difference is likely to be due to the significantly reduced and uniformly distributed nucleation sites on the smooth melted surface, in contrast to the large amounts of unevenly distributed nucleation sites that are associated with grain boundaries and other defects on the solid surface. This synthesis is expected to contribute to the development of large-scale manufacturing of h-BNNS/graphene-based electronics. PMID:25582557

  15. Polarization of the prompt gamma-ray emission from the gamma-ray burst of 6 December 2002.

    PubMed

    Coburn, Wayne; Boggs, Steven E

    2003-05-22

    Observations of the afterglows of gamma-ray bursts (GRBs) have revealed that they lie at cosmological distances, and so correspond to the release of an enormous amount of energy. The nature of the central engine that powers these events and the prompt gamma-ray emission mechanism itself remain enigmatic because, once a relativistic fireball is created, the physics of the afterglow is insensitive to the nature of the progenitor. Here we report the discovery of linear polarization in the prompt gamma-ray emission from GRB021206, which indicates that it is synchrotron emission from relativistic electrons in a strong magnetic field. The polarization is at the theoretical maximum, which requires a uniform, large-scale magnetic field over the gamma-ray emission region. A large-scale magnetic field constrains possible progenitors to those either having or producing organized fields. We suggest that the large magnetic energy densities in the progenitor environment (comparable to the kinetic energy densities of the fireball), combined with the large-scale structure of the field, indicate that magnetic fields drive the GRB explosion.

  16. High-performance compression and double cryptography based on compressive ghost imaging with the fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Leihong, Zhang; Zilan, Pan; Luying, Wu; Xiuhua, Ma

    2016-11-01

    To address the problems that large images can hardly be retrieved under stringent hardware restrictions and that the security level is low, a method based on compressive ghost imaging (CGI) with the fast Fourier transform (FFT), named FFT-CGI, is proposed. Initially, the information is encrypted by the sender with the FFT, and the FFT-coded image is then encrypted by the CGI system with a secret key. The receiver decrypts the image with the aid of compressive sensing (CS) and the FFT. Simulation results are given to verify the feasibility, security, and compression performance of the proposed encryption scheme. The experiments suggest that the method can improve the quality of large images compared with conventional ghost imaging and achieve imaging of large-sized images; furthermore, the amount of transmitted data is largely reduced because of the combination of compressive sensing and the FFT, and the security level of ghost imaging is improved, as assessed through ciphertext-only attack (COA), chosen-plaintext attack (CPA), and noise attack. This technique can be immediately applied to encryption and data storage with the advantages of high security, fast transmission, and high quality of reconstructed information.
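
    The ghost-imaging building block of such a scheme can be illustrated with a plain correlation reconstruction; the sketch below deliberately omits the FFT encryption and the compressive-sensing solver described in the abstract, and the object and pattern counts are made up.

      # Conventional computational ghost imaging sketch (no FFT coding or CS solver here).
      import numpy as np

      rng = np.random.default_rng(1)
      N = 16                                    # small hypothetical object, N x N pixels
      obj = np.zeros((N, N)); obj[4:12, 6:10] = 1.0

      M = 4000                                  # number of random illumination patterns
      patterns = rng.random((M, N, N))
      bucket = (patterns * obj).sum(axis=(1, 2))           # single-pixel "bucket" signals

      # Correlation reconstruction: <B * I(x, y)> - <B> <I(x, y)>
      recon = (bucket[:, None, None] * patterns).mean(axis=0) - bucket.mean() * patterns.mean(axis=0)
      recon = (recon - recon.min()) / (recon.max() - recon.min() + 1e-12)   # normalize to [0, 1]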

  17. Synthesis of large scale graphene oxide using plasma enhanced chemical vapor deposition method and its application in humidity sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yang; Chen, Yuming (Engineering Research Center of Advanced Lighting Technology, Ministry of Education, 220 Handan Road, Shanghai)

    2016-03-14

    Large scale graphene oxide (GO) is directly synthesized on copper (Cu) foil by a plasma enhanced chemical vapor deposition method at 500 °C and even lower temperatures. Compared to the modified Hummers' method, the GO sheets obtained in this article are large, and the sheet size is scalable according to the Cu foil size. The oxygen-containing groups in the GO are introduced through the residual gas of methane (99.9% purity). To prevent the Cu surface from the bombardment of the ions in the plasma, we use a low intensity discharge. Our experiment reveals that growth temperature has an important influence on the carbon to oxygen ratio (C/O ratio) in the GO; it also affects the amount of π-π* bonds between carbon atoms. Preliminary experiments on a 6 mm × 12 mm GO based humidity sensor prove that the synthesized GO responds well to humidity changes. Our GO synthesis method may provide another channel for obtaining large scale GO in gas sensing or other applications.

  18. Synthesis of large and few atomic layers of hexagonal boron nitride on melted copper.

    PubMed

    Khan, Majharul Haque; Huang, Zhenguo; Xiao, Feng; Casillas, Gilberto; Chen, Zhixin; Molino, Paul J; Liu, Hua Kun

    2015-01-13

    Hexagonal boron nitride nanosheets (h-BNNS) have been proposed as an ideal substrate for graphene-based electronic devices, but the synthesis of large and homogeneous h-BNNS is still challenging. In this contribution, we report a facile synthesis of few-layer h-BNNS on melted copper via an atmospheric pressure chemical vapor deposition process. Comparative studies confirm the advantage of using melted copper over solid copper as a catalyst substrate. The former leads to the formation of single crystalline h-BNNS that is several microns in size and mostly in mono- and bi-layer forms, in contrast to the polycrystalline and mixed multiple layers (1-10) yielded by the latter. This difference is likely to be due to the significantly reduced and uniformly distributed nucleation sites on the smooth melted surface, in contrast to the large amounts of unevenly distributed nucleation sites that are associated with grain boundaries and other defects on the solid surface. This synthesis is expected to contribute to the development of large-scale manufacturing of h-BNNS/graphene-based electronics.

  19. Transmitted wavefront testing with large dynamic range based on computer-aided deflectometry

    NASA Astrophysics Data System (ADS)

    Wang, Daodang; Xu, Ping; Gong, Zhidong; Xie, Zhongmin; Liang, Rongguang; Xu, Xinke; Kong, Ming; Zhao, Jun

    2018-06-01

    The transmitted wavefront testing technique is needed for the performance evaluation of transmission optics and transparent glass, in which the achievable dynamic range is a key issue. A computer-aided deflectometric testing method with fringe projection is proposed for the accurate testing of transmitted wavefronts with a large dynamic range. Ray tracing of the modeled testing system is carried out to achieve the virtual ‘null’ testing of transmitted wavefront aberrations. The ray aberration is obtained from the ray tracing result and the measured slope, with which the test wavefront aberration can be reconstructed. To eliminate testing system modeling errors, a system geometry calibration based on computer-aided reverse optimization is applied to realize accurate testing. Both numerical simulation and experiments have been carried out to demonstrate the feasibility and high accuracy of the proposed testing method. The proposed testing method can achieve a large dynamic range compared with the interferometric method, providing a simple, low-cost and accurate way to test transmitted wavefronts from various kinds of optics and a large number of industrial transmission elements.
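
    The slope-to-wavefront step mentioned above can be caricatured by a least-squares zonal integration of measured slope maps; this is a generic reconstruction on a hypothetical grid with synthetic slopes, not the authors' ray-traced deflectometric model.

      # Generic least-squares reconstruction of a wavefront from its x/y slope maps
      # (synthetic data; grid size and spacing are assumptions).
      import numpy as np

      n, h = 32, 1.0                             # grid size and sample spacing
      y, x = np.mgrid[0:n, 0:n] * h
      w_true = 0.01 * (x - n / 2)**2 + 0.02 * (y - n / 2)**2   # synthetic wavefront
      sx = np.gradient(w_true, h, axis=1)        # "measured" x-slopes
      sy = np.gradient(w_true, h, axis=0)        # "measured" y-slopes

      idx = lambda i, j: i * n + j               # flatten (row, col) -> unknown index
      rows, cols, vals, b = [], [], [], []
      eq = 0
      for i in range(n):                         # finite differences in x
          for j in range(n - 1):
              rows += [eq, eq]; cols += [idx(i, j + 1), idx(i, j)]; vals += [1.0, -1.0]
              b.append(0.5 * (sx[i, j] + sx[i, j + 1]) * h); eq += 1
      for i in range(n - 1):                     # finite differences in y
          for j in range(n):
              rows += [eq, eq]; cols += [idx(i + 1, j), idx(i, j)]; vals += [1.0, -1.0]
              b.append(0.5 * (sy[i, j] + sy[i + 1, j]) * h); eq += 1

      A = np.zeros((eq, n * n))
      A[rows, cols] = vals
      w_hat, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
      w_hat = w_hat.reshape(n, n)
      w_hat -= w_hat.mean() - w_true.mean()      # remove the piston ambiguity
      print(f"max reconstruction error: {np.abs(w_hat - w_true).max():.2e}")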

  20. Effect of noble gases on an atmospheric greenhouse /Titan/.

    NASA Technical Reports Server (NTRS)

    Cess, R.; Owen, T.

    1973-01-01

    Several models for the atmosphere of Titan have been investigated, taking into account various combinations of neon and argon. The investigation shows that the addition of large amounts of Ne and/or Ar will substantially reduce the hydrogen abundance required for a given greenhouse effect. The fact that a large amount of neon should be present if the atmosphere is a relic of the solar nebula is an especially attractive feature of the models, because it is hard to justify appropriate abundances of other enhancing agents.

  1. Overuse or underuse? An observation of pesticide use in China.

    PubMed

    Zhang, Chao; Hu, Ruifa; Shi, Guanming; Jin, Yanhong; Robson, Mark G; Huang, Xusheng

    2015-12-15

    Pesticide use has experienced a dramatic increase worldwide, especially in China, where a wide variety of pesticides are used in large amounts by farmers to control crop pests. While Chinese farmers are often criticized for pesticide overuse, this study shows the coexistence of overuse and underuse of pesticides based on survey data on pesticide use in rice, cotton, maize, and wheat production in three provinces in China. A novel index amount approach is proposed to convert the amount of multiple pesticides used to control the same pest into an index amount of a referenced pesticide. We compare the summed index amount with the recommended dosage range of the referenced pesticide to classify whether pesticides are overused or underused. Using this new approach, the following main results were obtained. Pesticide overuse and underuse coexist after examining a total of 107 pesticides used to control up to 54 crop pests in rice, cotton, maize, and wheat production. In particular, pesticide overuse is detected in more than half of the total cases for 9 crop pest species. In contrast, pesticide underuse accounts for more than 20% of the total cases for 11 pests. We further indicate that the lack of knowledge and information on pesticide use and pest control among Chinese farmers may cause the coexistence of pesticide overuse and underuse. Our analysis provides indirect evidence that the commercialized agricultural extension system in China probably contributes to the coexistence of overuse and underuse. To improve pesticide use, it is urgent to reestablish the monitoring and forecasting system for pest control in China. Copyright © 2015 Elsevier B.V. All rights reserved.
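
    A bare-bones version of the index-amount bookkeeping described above might look like this; the product names, conversion factors, dosages and recommended range are all hypothetical placeholders, not the paper's data or its exact conversion rule.

      # Hypothetical index-amount sketch: express each pesticide applied against one pest
      # as an equivalent amount of a chosen reference pesticide, sum, and compare the sum
      # with the reference pesticide's recommended dosage range.
      recommended_range = (0.8, 1.2)     # assumed recommended dose range of the reference, kg/ha

      # assumed conversion factors to reference-pesticide equivalents
      conversion = {"product_A": 1.0, "product_B": 0.5, "product_C": 2.0}
      applied = {"product_A": 0.6, "product_B": 0.9, "product_C": 0.1}   # amounts used, kg/ha

      index_amount = sum(conversion[p] * amount for p, amount in applied.items())
      low, high = recommended_range
      if index_amount > high:
          status = "overuse"
      elif index_amount < low:
          status = "underuse"
      else:
          status = "within recommended range"
      print(f"index amount = {index_amount:.2f} kg/ha -> {status}")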

  2. Measurements of jet-related observables at the LHC

    NASA Astrophysics Data System (ADS)

    Kokkas, P.

    2015-11-01

    During the first years of the LHC operation a large amount of jet data was recorded by the ATLAS and CMS experiments. In this review several measurements of jet-related observables are presented, such as multi-jet rates and cross sections, ratios of jet cross sections, jet shapes and event shape observables. All results presented here are based on jet data collected at a centre-of-mass energy of 7 TeV. Data are compared to various Monte Carlo generators, as well as to theoretical next-to-leading-order calculations allowing a test of perturbative Quantum Chromodynamics in a previously unexplored energy region.

  3. Communication system analysis for manned space flight

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1977-01-01

    One- and two-dimensional adaptive delta modulator (ADM) algorithms are discussed and compared. Results are shown for bit rates of two bits/pixel, one bit/pixel and 0.5 bits/pixel. Pictures showing the difference between the encoded-decoded pictures and the original pictures are presented. The effect of channel errors on the reconstructed picture is illustrated. A two-dimensional ADM using interframe encoding is also presented. This system operates at the rate of two bits/pixel and produces excellent quality pictures when there is little motion. The effect of large amounts of motion on the reconstructed picture is described.
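
    A one-dimensional adaptive delta modulator of the general kind discussed above can be sketched as follows; the step-adaptation rule (grow the step on repeated bits, shrink it otherwise) is a common textbook variant and not necessarily the specific algorithm evaluated in the report.

      # Textbook-style 1-D adaptive delta modulation sketch (parameters are assumptions).
      import numpy as np

      def adm_encode(signal, step0=0.1, grow=1.5, shrink=0.66):
          bits, est, step, prev_bit = [], 0.0, step0, 1
          for s in signal:
              bit = 1 if s >= est else 0                    # one bit per sample
              step = step * grow if bit == prev_bit else max(step * shrink, 1e-6)
              est += step if bit else -step                 # tracked estimate
              bits.append(bit); prev_bit = bit
          return bits

      def adm_decode(bits, step0=0.1, grow=1.5, shrink=0.66):
          out, est, step, prev_bit = [], 0.0, step0, 1
          for bit in bits:                                  # mirror the encoder's tracking
              step = step * grow if bit == prev_bit else max(step * shrink, 1e-6)
              est += step if bit else -step
              out.append(est); prev_bit = bit
          return np.array(out)

      t = np.linspace(0, 1, 400)
      x = np.sin(2 * np.pi * 3 * t)
      recon = adm_decode(adm_encode(x))
      print(f"mean absolute error: {np.mean(np.abs(recon - x)):.3f}")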

  4. Impingement of Water Droplets on NACA 65A004 Airfoil at 8 deg Angle of Attack

    NASA Technical Reports Server (NTRS)

    Brun, R. J.; Gallagher, H. M.; Vogt, D. E.

    1954-01-01

    The trajectories of droplets in the air flowing past an NACA 65A004 airfoil at an angle of attack of 8 deg were determined. The amount of water in droplet form impinging on the airfoil, the area of droplet impingement, and the rate of droplet impingement per unit area on the airfoil surface were calculated from the trajectories and presented to cover a large range of flight and atmospheric conditions. These impingement characteristics are compared briefly with those previously reported for the same airfoil at an angle of attack of 4 deg.

  5. Positron annihilation spectroscopy techniques applied to the study of an HPGe detector

    NASA Astrophysics Data System (ADS)

    Nascimento, E. do; Vanin, V. R.; Maidana, N. L.; Silva, T. F.; Rizzutto, M. A.; Fernández-Varea, J. M.

    2013-05-01

    Doppler broadening spectroscopy of the large Ge crystal of an HPGe detector was performed using positrons from pair production of 6.13 MeV γ-rays from the ¹⁹F(p,αγ)¹⁶O reaction. Two HPGe detectors facing opposite sides of the Ge crystal acting as target provided both coincidence and singles spectra. Changes in the shape of the annihilation peak were observed when the high voltage applied to the target detector was switched on or off, amounting to somewhat less than 20% when the areas of equivalent energy intervals in the corresponding normalized spectra are compared.

  6. Characterizing Marine Soundscapes.

    PubMed

    Erbe, Christine; McCauley, Robert; Gavrilov, Alexander

    2016-01-01

    The study of marine soundscapes is becoming widespread and the amount of data collected is increasing rapidly. Data owners (typically academia, industry, government, and defense) are negotiating data sharing and generating potential for data syntheses, comparative studies, analyses of trends, and large-scale and long-term acoustic ecology research. A problem is the lack of standards and commonly agreed protocols for the recording of marine soundscapes, data analysis, and reporting that make a synthesis and comparison of results difficult. We provide a brief overview of the components in a marine soundscape, the hard- and software tools for recording and analyzing marine soundscapes, and common reporting formats.

  7. Impact of the BALLOTS Shared Cataloging System on the Amount of Change in the Library Technical Processing Department.

    ERIC Educational Resources Information Center

    Kershner, Lois M.

    The amount of change resulting from the implementation of the Bibliographic Automation of Large Library Operations using a Time-sharing System (BALLOTS) is analyzed, in terms of (1) physical room arrangement, (2) work procedure, and (3) organizational structure. Also considered is the factor of amount of time the new system has been in use.…

  8. An Earth-System Approach to Understanding the Deepwater Horizon Oil Spill

    ERIC Educational Resources Information Center

    Robeck, Edward

    2011-01-01

    The Deepwater Horizon explosion on April 20, 2010, and the subsequent release of oil into the Gulf of Mexico created an ecological disaster of immense proportions. The estimates of the amounts of oil, whether for the amount released per day or the total amount of oil disgorged from the well, call on numbers so large they defy the capacity of most…

  9. Cloud/climate sensitivity experiments

    NASA Technical Reports Server (NTRS)

    Roads, J. O.; Vallis, G. K.; Remer, L.

    1982-01-01

    A study of the relationships between large-scale cloud fields and large scale circulation patterns is presented. The basic tool is a multi-level numerical model comprising conservation equations for temperature, water vapor and cloud water and appropriate parameterizations for evaporation, condensation, precipitation and radiative feedbacks. Incorporating an equation for cloud water in a large-scale model is somewhat novel and allows the formation and advection of clouds to be treated explicitly. The model is run on a two-dimensional, vertical-horizontal grid with constant winds. It is shown that cloud cover increases with decreased eddy vertical velocity, decreased horizontal advection, decreased atmospheric temperature, increased surface temperature, and decreased precipitation efficiency. The cloud field is found to be well correlated with the relative humidity field except at the highest levels. When radiative feedbacks are incorporated and the temperature increased by increasing CO2 content, cloud amounts decrease at upper-levels or equivalently cloud top height falls. This reduces the temperature response, especially at upper levels, compared with an experiment in which cloud cover is fixed.

  10. On the unseasonal flooding over the Central United States during December 2015 and January 2016

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Villarini, Gabriele

    2017-11-01

    The unseasonal winter heavy rainfall and flooding that occurred during December 2015-January 2016 had large socio-economic impacts on the central United States. Here we examine the climatic conditions that led to the observed extreme precipitation, and compare and contrast them with the 1982/1983 and 2011/2012 winters. The large precipitation amounts associated with the 1982/1983 and 2015/2016 winter flooding were linked to the strongly positive North Atlantic Oscillation (NAO), with large amounts of moisture transported from the Gulf of Mexico. The anomalous upper-level trough over the western United States in December 1982 and December 2015 was also favorable for strong precipitation, steering the cold front over the central United States. In contrast, the extremely positive NAO in December 2011 did not lead to heavy rainfall and flooding because the Azores High center shifted too far westward (like a blocking high), preventing moisture from moving towards the central and southeastern United States.

  11. Monitoring diffuse volcanic degassing during volcanic unrests: the case of Campi Flegrei (Italy)

    NASA Astrophysics Data System (ADS)

    Cardellini, Carlo; Chiodini, Giovanni; Avino, Rosario; Bagnato, Emanuela; Caliro, Stefano; Frondini, Francesco; Lelli, Matteo; Rosiello, Angelo

    2017-04-01

    Hydrothermal activity at Solfatara of Pozzuoli (Campi Flegrei caldera, Italy) results in a large area of hot soils, diffuse CO2 degassing and numerous fumaroles, releasing large amounts of gas and thermal energy at the surface. Solfatara is one of the first sites in the world where the techniques for measuring and interpreting soil CO2 diffuse degassing were developed, during the 1990s, and, more recently, it has become a sort of natural laboratory for testing new types of measurements of the CO2 fluxes from hydrothermal sites. The results of 30 diffuse CO2 flux surveys performed at Solfatara from 1998 to 2016 are presented and discussed. CO2 soil fluxes were measured over an area of about 1.2 × 1.2 km, including the Solfatara crater and the hydrothermal site of Pisciarelli, using the accumulation chamber technique. Each survey consisted of a number of CO2 flux measurements varying from 372 to 583, resulting in a total of 13158 measurements. This data set is one of the largest ever acquired on a single degassing volcanic-hydrothermal system. It is particularly relevant in the frame of volcanological sciences because it was acquired during a long period of unrest at Campi Flegrei caldera and because Solfatara releases an amount of CO2 comparable to that released by medium-large volcanic plumes. Statistical and geostatistical elaborations of the CO2 flux data allowed us to characterise the sources of soil diffuse degassing, to define the extent of the area interested by the release of hydrothermal CO2 (Solfatara DDS) and to quantify the total amount of released CO2. During the last eighteen years relevant variations affected Solfatara degassing, in particular the "background" CO2 emission, the extent of the DDS and the total CO2 output, which may reflect variations in the subterraneous gas plume feeding the Solfatara and Pisciarelli emissions. In fact, the most relevant variations in Solfatara diffuse degassing correlate well with the steam condensation and temperature increase affecting the Solfatara system, resulting from repeated inputs of magmatic fluids into the hydrothermal system as suggested by Chiodini et al. (2015; 2016; 2017), and show a long-term increase in the amount of released CO2 that accompanies the ongoing unrest of Campi Flegrei caldera.
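
    The final step of such a survey, going from point flux measurements to a total CO2 output, can be caricatured as interpolating the fluxes onto a grid and summing flux times cell area; the point data, grid spacing and nearest-neighbour interpolation below are placeholders for the geostatistical treatment actually used in this kind of work.

      # Crude total-CO2-output sketch: grid the point fluxes and sum flux * cell area.
      import numpy as np
      from scipy.interpolate import griddata

      rng = np.random.default_rng(2)
      pts = rng.random((500, 2)) * 1200.0        # hypothetical survey points in a 1.2 x 1.2 km area (m)
      flux = rng.lognormal(mean=3.0, sigma=1.0, size=500)   # hypothetical CO2 fluxes, g m^-2 d^-1

      cell = 10.0                                # assumed grid cell size in metres
      gx, gy = np.meshgrid(np.arange(0, 1200, cell), np.arange(0, 1200, cell))
      grid_flux = griddata(pts, flux, (gx, gy), method="nearest")

      total_g_per_day = np.nansum(grid_flux) * cell**2      # g d^-1 over the gridded area
      print(f"total CO2 output ~ {total_g_per_day / 1e6:.1f} t/d")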

  12. Microwave absorption properties of Ni/(C, silicides) nanocapsules

    PubMed Central

    2012-01-01

    The microwave absorption properties of Ni/(C, silicides) nanocapsules prepared by an arc discharge method have been studied. The composition and the microstructure of the Ni/(C, silicides) nanocapsules were determined by means of X-ray diffraction, X-ray photoelectron spectroscopy, and transmission electron microscope observations. Silicides, in the forms of SiOx and SiC, mainly exist in the shells of the nanocapsules and result in a large number of defects at the ‘core/shell’ interfaces as well as in the shells. The complex permittivity and microwave absorption properties of the Ni/(C, silicides) nanocapsules are improved by the doped silicides. Compared with those of Ni/C nanocapsules, the positions of the maximum absorption peaks of the Ni/(C, silicides) nanocapsules exhibit large red shifts. An electric dipole model is proposed to explain this red shift phenomenon. PMID:22548846

  13. Fine structure of spectral properties for random correlation matrices: An application to financial markets

    NASA Astrophysics Data System (ADS)

    Livan, Giacomo; Alfarano, Simone; Scalas, Enrico

    2011-07-01

    We study some properties of eigenvalue spectra of financial correlation matrices. In particular, we investigate the nature of the large eigenvalue bulks which are observed empirically, and which have often been regarded as a consequence of the supposedly large amount of noise contained in financial data. We challenge this common knowledge by acting on the empirical correlation matrices of two data sets with a filtering procedure which highlights some of the cluster structure they contain, and we analyze the consequences of such filtering on eigenvalue spectra. We show that empirically observed eigenvalue bulks emerge as superpositions of smaller structures, which in turn emerge as a consequence of cross correlations between stocks. We interpret and corroborate these findings in terms of factor models, and we compare empirical spectra to those predicted by random matrix theory for such models.
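
    A common way to make the comparison with random matrix theory concrete is to check the eigenvalues of a sample correlation matrix against the Marchenko-Pastur bulk edges; the returns below are simulated with a single common factor and are not the financial data sets analysed in the paper.

      # Compare correlation-matrix eigenvalues with the Marchenko-Pastur bounds (simulated data).
      import numpy as np

      T, N = 1000, 100                     # observations and assets (hypothetical)
      rng = np.random.default_rng(3)
      returns = rng.standard_normal((T, N))
      returns += 0.3 * rng.standard_normal((T, 1))   # add a common "market" factor

      C = np.corrcoef(returns, rowvar=False)
      eig = np.sort(np.linalg.eigvalsh(C))[::-1]

      q = N / T
      lam_max = (1 + np.sqrt(q))**2        # Marchenko-Pastur upper edge for pure noise
      lam_min = (1 - np.sqrt(q))**2
      print("largest eigenvalues:", np.round(eig[:3], 2))
      print(f"MP bulk edges: [{lam_min:.2f}, {lam_max:.2f}]")
      print("eigenvalues above the MP edge:", int((eig > lam_max).sum()))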

  14. Phylogenetic search through partial tree mixing

    PubMed Central

    2012-01-01

    Background Recent advances in sequencing technology have created large data sets upon which phylogenetic inference can be performed. Current research is limited by the prohibitive time necessary to perform tree search on a reasonable number of individuals. This research develops new phylogenetic algorithms that can operate on tens of thousands of species in a reasonable amount of time through several innovative search techniques. Results When compared to popular phylogenetic search algorithms, better trees are found much more quickly for large data sets. These algorithms are incorporated in the PSODA application, available at http://dna.cs.byu.edu/psoda. Conclusions The use of Partial Tree Mixing in a partition based tree space allows the algorithm to quickly converge on near optimal tree regions. These regions can then be searched in a methodical way to determine the overall optimal phylogenetic solution. PMID:23320449

  15. Reanalysis of Mariner 9 UV spectrometer data for ozone, cloud, and dust abundances, and their interaction over climate timescales

    NASA Technical Reports Server (NTRS)

    Lindner, Bernhard Lee

    1992-01-01

    Research activities to date are discussed. Selected Mariner 9 UV spectra were obtained. Radiative transfer models were updated and then exercised to simulate spectra. Simulated and observed spectra compare favorably. It is noted that large amounts of ozone are currently not retrieved with reflectance spectroscopy, raising large doubts about earlier published ozone abundances. As these published abundances have been used as a benchmark for all theoretical photochemical models of Mars, this deserves further exploration. Three manuscripts were published, and one is in review. Papers were presented and published at three conferences, and are planned for five more conferences in the next six months. The research plan for the next reporting period is discussed and involves continuing studies of reflectance spectroscopy, further examination of Mariner 9 data, and climate change studies of ozone.

  16. Large-scale dialysis of sample lipids

    USGS Publications Warehouse

    Meadows, Jill; Tillitt, Donald E.; Huckins, James; Schroeder, D.

    1993-01-01

    The use of a semipermeable membrane device (SPMD) for dialysis in an organic solvent phase is an efficient alternative approach to separation of contaminants from large amounts of lipid (up to 50 grams or more) prior to organic chemical analysis. Passive separation of contaminants can be accomplished with a minimum of equipment and a comparatively small volume of solvent. This study examines the effects of factors such as dialytic solvent, lipid type, dialytic solvent:lipid volume ratio, dialysis time, and temperature on the performance of polyethylene SPMDs during lipid-contaminant separations. The experimental conditions for maximal recoveries of organochlorine pesticides and polychlorinated biphenyls with minimal lipid carryover are determined for the examined variables. When the dialytic procedure is optimized, very satisfactory and highly reproducible analyte recoveries can be obtained in a few days while separating > 90% of the lipid material in a single operation.

  17. A prototype splitter apparatus for dividing large catches of small fish

    USGS Publications Warehouse

    Stapanian, Martin A.; Edwards, William H.

    2012-01-01

    Due to financial and time constraints, it is often necessary in fisheries studies to divide large samples of fish and estimate total catch from the subsample. The subsampling procedure may involve potential human biases or may be difficult to perform in rough conditions. We present a prototype gravity-fed splitter apparatus for dividing large samples of small fish (30–100 mm TL). The apparatus features a tapered hopper with a sliding and removable shutter. The apparatus provides a comparatively stable platform for objectively obtaining subsamples, and it can be modified to accommodate different sizes of fish and different sample volumes. The apparatus is easy to build, inexpensive, and convenient to use in the field. To illustrate the performance of the apparatus, we divided three samples (total N = 2,000 fish) composed of four fish species. Our results indicated no significant bias in estimating either the number or proportion of each species from the subsample. Use of this apparatus or a similar apparatus can help to standardize subsampling procedures in large surveys of fish. The apparatus could be used for other applications that require dividing a large amount of material into one or more smaller subsamples.
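
    The arithmetic of estimating a total catch from such a split is simple; the sketch below assumes the subsample fraction is known from the number of passes through the splitter, and the species counts are invented.

      # Estimating total catch composition from a subsample produced by repeated 1:2 splits.
      splits = 3                             # e.g. three passes -> keep 1/8 of the catch
      subsample_fraction = 0.5 ** splits

      sub_counts = {"species_A": 96, "species_B": 41, "species_C": 13}   # hypothetical counts
      n_sub = sum(sub_counts.values())

      est_total = {sp: c / subsample_fraction for sp, c in sub_counts.items()}
      proportions = {sp: c / n_sub for sp, c in sub_counts.items()}
      print({sp: round(v) for sp, v in est_total.items()})
      print({sp: round(p, 3) for sp, p in proportions.items()})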

  18. The Coriolis Program.

    ERIC Educational Resources Information Center

    Lissaman, P. B. S.

    1979-01-01

    Detailed are the history, development, and future objectives of the Coriolis program, a project designed to place large turbine units in the Florida Current that would generate large amounts of electric power. (BT)

  19. Cutting Edge: Protection by Antiviral Memory CD8 T Cells Requires Rapidly Produced Antigen in Large Amounts.

    PubMed

    Remakus, Sanda; Ma, Xueying; Tang, Lingjuan; Xu, Ren-Huan; Knudson, Cory; Melo-Silva, Carolina R; Rubio, Daniel; Kuo, Yin-Ming; Andrews, Andrew; Sigal, Luis J

    2018-05-15

    Numerous attempts to produce antiviral vaccines by harnessing memory CD8 T cells have failed. A barrier to progress is that we do not know what makes an Ag a viable target of protective CD8 T cell memory. We found that in mice susceptible to lethal mousepox (the mouse homolog of human smallpox), a dendritic cell vaccine that induced memory CD8 T cells fully protected mice when the infecting virus produced Ag in large quantities and with rapid kinetics. Protection did not occur when the Ag was produced in low amounts, even with rapid kinetics, and protection was only partial when the Ag was produced in large quantities but with slow kinetics. Hence, the amount and timing of Ag expression appear to be key determinants of memory CD8 T cell antiviral protective immunity. These findings may have important implications for vaccine design. Copyright © 2018 by The American Association of Immunologists, Inc.

  20. Profiling Oman education data using data mining approach

    NASA Astrophysics Data System (ADS)

    Alawi, Sultan Juma Sultan; Shaharanee, Izwan Nizal Mohd; Jamil, Jastini Mohd

    2017-10-01

    Nowadays, the large amount of data generated by many application services in different learning fields has led to new challenges in the education field. An education portal is an important system that supports the development of the education field. This research paper presents innovative data mining techniques to understand and summarize the information in Oman's education data generated from the Ministry of Education Oman "Educational Portal". This research embarks on performing student profiling of the Oman student database. The study utilized the k-means clustering technique to determine the students' profiles. A total of 42484 student records from the Sultanate of Oman were extracted for this study. The findings of this study show the practicality of the clustering technique for investigating students' profiles, allowing for a better understanding of students' behavior and their academic performance. The Oman Education Portal contains large amounts of user activity and interaction data. Analysis of these large data sets can help educators improve student performance levels and recognize students who need additional attention.
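
    A minimal version of the clustering step described above can be written with scikit-learn's KMeans on a few standardized attributes; the feature names, record counts and cluster number below are invented for illustration and are not the portal's actual schema.

      # Minimal k-means student-profiling sketch (synthetic records, invented features).
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(4)
      n = 1000
      X = np.column_stack([
          rng.normal(70, 15, n),     # e.g. average grade
          rng.normal(12, 6, n),      # e.g. absence days
          rng.normal(30, 20, n),     # e.g. portal logins per term
      ])

      Xs = StandardScaler().fit_transform(X)
      km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Xs)

      for k in range(4):
          members = km.labels_ == k
          centre = X[members].mean(axis=0)
          print(f"profile {k}: size={members.sum()}, centre={np.round(centre, 1)}")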

  1. Information Management System Supporting a Multiple Property Survey Program with Legacy Radioactive Contamination.

    PubMed

    Stager, Ron; Chambers, Douglas; Wiatzka, Gerd; Dupre, Monica; Callough, Micah; Benson, John; Santiago, Erwin; van Veen, Walter

    2017-04-01

    The Port Hope Area Initiative is a project mandated and funded by the Government of Canada to remediate properties with legacy low-level radioactive waste contamination in the Town of Port Hope, Ontario. The management and use of large amounts of data from surveys of some 4800 properties is a significant task critical to the success of the project. A large amount of information is generated through the surveys, including scheduling individual field visits to the properties, capture of field data, laboratory sample tracking, QA/QC, property report generation and project management reporting. Web-mapping tools were used to track and display temporal progress of various tasks and facilitated consideration of spatial associations of contamination levels. The IM system facilitated the management and integrity of the large amounts of information collected, evaluation of spatial associations, automated report reproduction and consistent application and traceable execution for this project. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. Shielding analyses for repetitive high energy pulsed power accelerators

    NASA Astrophysics Data System (ADS)

    Jow, H. N.; Rao, D. V.

    Sandia National Laboratories (SNL) designs, tests and operates a variety of accelerators that generate large amounts of high energy Bremsstrahlung radiation over an extended time. Typically, groups of similar accelerators are housed in a large building that is inaccessible to the general public. To facilitate independent operation of each accelerator, test cells are constructed around each accelerator to shield it from the radiation workers occupying surrounding test cells and work-areas. These test cells, about 9 ft. high, are constructed of high density concrete block walls that provide direct radiation shielding. Above the target areas (radiation sources), lead or steel plates are used to minimize skyshine radiation. Space, accessibility and cost considerations impose certain restrictions on the design of these test cells. SNL Health Physics division is tasked to evaluate the adequacy of each test cell design and compare resultant dose rates with the design criteria stated in DOE Order 5480.11. In response, SNL Health Physics has undertaken an intensive effort to assess existing radiation shielding codes and compare their predictions against measured dose rates. This paper provides a summary of the effort and its results.

  3. Testing and Validation of the Dynamic Inertia Measurement Method

    NASA Technical Reports Server (NTRS)

    Chin, Alexander W.; Herrera, Claudia Y.; Spivey, Natalie D.; Fladung, William A.; Cloutier, David

    2015-01-01

    The Dynamic Inertia Measurement (DIM) method uses a ground vibration test setup to determine the mass properties of an object using information from frequency response functions. Most conventional mass properties testing involves using spin tables or pendulum-based swing tests, which for large aerospace vehicles becomes increasingly difficult and time-consuming, and therefore expensive, to perform. The DIM method has been validated on small test articles but has not been successfully proven on large aerospace vehicles. In response, the National Aeronautics and Space Administration Armstrong Flight Research Center (Edwards, California) conducted mass properties testing on an "iron bird" test article that is comparable in mass and scale to a fighter-type aircraft. The simple two-I-beam design of the "iron bird" was selected to ensure accurate analytical mass properties. Traditional swing testing was also performed to compare the level of effort, amount of resources, and quality of data with the DIM method. The DIM test showed favorable results for the center of gravity and moments of inertia; however, the products of inertia showed disagreement with analytical predictions.

  4. Complex Networks in Different Languages: A Study of an Emergent Multilingual Encyclopedia

    NASA Astrophysics Data System (ADS)

    Pembe, F. Canan; Bingol, Haluk

    There is an increasing interest in the study of complex networks in an interdisciplinary way. Language, as a complex network, has been a part of this study due to its importance in human life. Moreover, the Internet has also been at the center of this study by making access to large amounts of information possible. With these ideas in mind, this work aims to evaluate conceptual networks in different languages with data from a large and open source of information on the Internet, namely Wikipedia. As an evolving multilingual encyclopedia that can be edited by any Internet user, Wikipedia is a good example of an emergent complex system. In this paper, in contrast to previous work on conceptual networks, which usually concentrated on single languages, we concentrate on possible ways to compare the usages of different languages and possibly the underlying cultures. This also involves the analysis of local network properties around certain concepts in different languages. For an initial evaluation, the concept "family" is used to compare the English and German Wikipedias. Although the work is currently at an early stage, the results are promising.

  5. An adjoint-based simultaneous estimation method of the asthenosphere's viscosity and afterslip using a fast and scalable finite-element adjoint solver

    NASA Astrophysics Data System (ADS)

    Agata, Ryoichiro; Ichimura, Tsuyoshi; Hori, Takane; Hirahara, Kazuro; Hashimoto, Chihiro; Hori, Muneo

    2018-04-01

    The simultaneous estimation of the asthenosphere's viscosity and coseismic slip/afterslip is expected to largely improve the consistency of the estimation results with crustal deformation data collected at widely spread observation points, compared to estimation of slips only. Such an estimation can be formulated as a non-linear inverse problem for the viscosity and for an input force equivalent to the fault slips, based on large-scale finite-element (FE) modeling of crustal deformation in which the number of degrees of freedom is of the order of 10^9. We formulated and developed a computationally efficient adjoint-based estimation method for this inverse problem, together with a fast and scalable FE solver for the associated forward and adjoint problems. In a numerical experiment that imitates the 2011 Tohoku-Oki earthquake, the advantage of the proposed method is confirmed by comparing the estimated results with those obtained using simplified estimation methods. The computational cost required for the optimization shows that the proposed method enables the targeted estimation to be completed with a moderate amount of computational resources.
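
    The core computational trick referred to above, evaluating the gradient of the data-misfit functional with one forward and one adjoint solve regardless of the number of observations, can be illustrated on a toy problem. The sketch below assumes a small linear system A(m)u = f with a single viscosity-like scalar m; the matrices, the optimizer, and the parameter are illustrative stand-ins, not the authors' 10^9-degree-of-freedom crustal-deformation solver.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 50
# A(m) = K0 + m*K1: a small stand-in for the FE stiffness operator.
K0 = (np.diag(2.0 * np.ones(n))
      + np.diag(-1.0 * np.ones(n - 1), 1)
      + np.diag(-1.0 * np.ones(n - 1), -1))
K1 = np.diag(np.linspace(0.5, 1.5, n))     # the part of the operator scaled by m
f = rng.normal(size=n)                     # stand-in for the fault-slip forcing

m_true = 0.8
d = np.linalg.solve(K0 + m_true * K1, f)   # synthetic "observed" deformation

def cost_and_grad(x):
    m = x[0]
    A = K0 + m * K1
    u = np.linalg.solve(A, f)              # forward solve
    r = u - d
    lam = np.linalg.solve(A.T, r)          # adjoint solve
    grad = -(lam @ (K1 @ u))               # dJ/dm = -lambda^T (dA/dm) u
    return 0.5 * float(r @ r), np.array([grad])

res = minimize(cost_and_grad, x0=np.array([0.3]), jac=True,
               method="L-BFGS-B", bounds=[(0.05, 5.0)])
print(f"estimated m = {res.x[0]:.3f} (true value {m_true})")
```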

  6. Investigation of mesoscale precipitation processes in the Carolinas using a radar-based climatology

    NASA Astrophysics Data System (ADS)

    Boyles, Ryan Patrick

    The complex topography, shoreline, soils, and land use patterns make the Carolinas a unique location to study mesoscale processes. Using gage-calibrated radar estimates and a series of numerical model simulations, warm season mesoscale precipitation patterns are analyzed over the Carolinas. Gage-calibrated radar precipitation estimates are compared with surface gage observations. Stage IV estimates generally compared better than Stage II estimates, but some Stage II and Stage IV estimates have gross errors during the autumn, winter, and spring seasons. Analysis of days when a sea breeze is observed suggests that sea breeze induced precipitation occurs on nearly 40% of days in June, July, and August, but only 18% of days in May and 6% of days in April. Precipitation on days with sea breeze convection can contribute over 50% of seasonal precipitation. Rainfall associated with the sea breeze is generally maximized along east-facing shores 10-20 km inland, and minimized along south-facing shores in North Carolina. The shape of the shoreline along Cape Fear is associated with a local precipitation maximum that may be caused by the convergence of two sea breeze fronts from the south and east shores. Differential heating associated with contrasting soils along the Carolina Sandhills is suggested as a mechanism for enhancement of local precipitation. A high-resolution summer precipitation climatology suggests that precipitation is enhanced along the Sandhills region in both wet and dry years. Analysis of four numerical simulations suggests that contrasts in soils over the Carolina Sandhills dominate over vegetation contrasts to produce heat flux gradients and a convergence zone along the sand-to-clay transition. Orographically induced precipitation is consistently observed in the summer, and appears to be isolated along windward slopes 20-40 km from the ridge line. Amounts over exterior ridges are generally 50-100% higher than amounts observed over the foothills. Precipitation amounts over interior ridges and valleys are lower than those observed on exterior ridges and are similar to values observed over the foothills. When compared with Stage IV estimates, the PRISM (Precipitation-elevation Regressions on Independent Slopes Model) method for estimating precipitation in complex terrain appears to largely over-estimate precipitation amounts over the interior ridges.

  7. Data Compression Algorithm Architecture for Large Depth-of-Field Particle Image Velocimeters

    NASA Technical Reports Server (NTRS)

    Bos, Brent; Memarsadeghi, Nargess; Kizhner, Semion; Antonille, Scott

    2013-01-01

    A large depth-of-field particle image velocimeter (PIV) is designed to characterize dynamic dust environments on planetary surfaces. This instrument detects lofted dust particles, and senses the number of particles per unit volume, measuring their sizes, velocities (both speed and direction), and shape factors when the particles are large. To measure these particle characteristics in-flight, the instrument gathers two-dimensional image data at a high frame rate, typically >4,000 Hz, generating large amounts of data for every second of operation, approximately 6 GB/s. To characterize a planetary dust environment that is dynamic, the instrument would have to operate for at least several minutes during an observation period, easily producing more than a terabyte of data per observation. Given current technology, this amount of data would be very difficult to store onboard a spacecraft, and downlink to Earth. Since 2007, innovators have been developing an autonomous image analysis algorithm architecture for the PIV instrument to greatly reduce the amount of data that it has to store and downlink. The algorithm analyzes PIV images and automatically reduces the image information down to only the particle measurement data that is of interest, reducing the amount of data that is handled by a factor of more than 10^3. The state of development for this innovation is now fairly mature, with a functional algorithm architecture, along with several key pieces of algorithm logic, that has been proven through field test data acquired with a proof-of-concept PIV instrument.
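
    The data-reduction idea described above, keeping only per-particle measurements instead of raw frames, can be sketched generically: threshold a frame, label connected bright regions, and record centroid and size for each region. The snippet below is an assumed, simplified illustration on synthetic data, not the flight algorithm.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
frame = rng.poisson(5.0, size=(512, 512)).astype(float)   # background noise
for _ in range(20):                                       # add bright "particles"
    y, x = rng.integers(10, 502, size=2)
    frame[y - 2:y + 3, x - 2:x + 3] += 200.0

mask = frame > frame.mean() + 5 * frame.std()             # crude threshold
labels, n = ndimage.label(mask)                           # connected components
idx = np.arange(1, n + 1)
centroids = ndimage.center_of_mass(frame, labels, idx)    # sub-pixel centroids
areas = ndimage.sum(mask.astype(float), labels, idx)      # pixel area per particle

# The record kept/downlinked is tiny compared with the raw frame.
particles = [(cy, cx, a) for (cy, cx), a in zip(centroids, areas)]
print(f"{frame.nbytes} bytes of image -> {len(particles)} particle records")
```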

  8. Comparative genome-wide polymorphic microsatellite markers in Antarctic penguins through next generation sequencing

    PubMed Central

    Vianna, Juliana A.; Noll, Daly; Mura-Jornet, Isidora; Valenzuela-Guerra, Paulina; González-Acuña, Daniel; Navarro, Cristell; Loyola, David E.; Dantas, Gisele P. M.

    2017-01-01

    Microsatellites are valuable molecular markers for evolutionary and ecological studies. Next generation sequencing is responsible for the increasing number of microsatellites for non-model species. The Pygoscelis genus comprises three penguin species: Adélie (P. adeliae), Chinstrap (P. antarcticus) and Gentoo penguin (P. papua), all distributed around Antarctica and the sub-Antarctic. The species have been affected differently by climate change, and the use of microsatellite markers will be crucial to monitor population dynamics. We characterized a large set of genome-wide microsatellites and evaluated polymorphisms in all three species. SOLiD reads were generated from the libraries of each species, identifying a large number of microsatellite loci: 33,677, 35,265 and 42,057 for P. adeliae, P. antarcticus and P. papua, respectively. A large number of dinucleotide (66,139), trinucleotide (29,490) and tetranucleotide (11,849) microsatellites are described. Microsatellite abundance, diversity and orthology were characterized in the penguin genomes. We evaluated polymorphisms in 170 tetranucleotide loci, obtaining 34 polymorphic loci in at least one species and 15 polymorphic loci in all three species, which allow comparative studies to be performed. The polymorphic markers presented here enable a range of ecological, population, individual identification, parentage and evolutionary studies of Pygoscelis, with potential use in other penguin species. PMID:28898354
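
    A minimal sketch of how perfect di-, tri- and tetranucleotide repeats can be located in assembled reads is shown below, using a back-referencing regular expression. The repeat-count threshold and the toy sequence are illustrative assumptions; the paper's actual detection pipeline and criteria are not specified here.

```python
import re

def find_microsatellites(seq, motif_len, min_repeats=6):
    """Return (start, motif, n_repeats) for perfect tandem repeats."""
    pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (motif_len, min_repeats - 1))
    hits = []
    for m in pattern.finditer(seq.upper()):
        motif = m.group(1)
        if len(set(motif)) > 1:            # skip homopolymer runs
            hits.append((m.start(), motif, len(m.group(0)) // motif_len))
    return hits

# Toy sequence containing an (AC)9 and an (AGAT)6 repeat.
seq = "TTGAC" + "AC" * 8 + "GGT" + "AGAT" * 6 + "CCGT"
for k, name in [(2, "di"), (3, "tri"), (4, "tetra")]:
    print(name, find_microsatellites(seq, k))
```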

  9. A comparative study on phyllosphere nitrogen fixation by newly isolated Corynebacterium sp. & Flavobacterium sp. and their potentialities as biofertilizer.

    PubMed

    Giri, S; Pati, B R

    2004-01-01

    A number of nitrogen fixing bacteria have been isolated from the forest phyllosphere on the basis of nitrogenase activity. Among them, the two best isolates were selected and identified as Corynebacterium sp. AN1 and Flavobacterium sp. TK2, able to reduce 88 and 132 nmol of acetylene per 10^8 cells per hour, respectively. They were grown in large amounts and sprayed on the phyllosphere of maize plants as a substitute for nitrogenous fertilizer. Marked improvements in growth and total nitrogen content of the plant were observed following the application of these nitrogen-fixing bacteria. An average 30-37% increase in yield was obtained, which is close to that obtained with chemical fertilizer treatment. A comparatively better effect was obtained with the application of Flavobacterium sp.

  10. Estimation of sediment yield from subsequent expanded landslides after heavy rainfalls : a case study in central Hokkaido, Japan

    NASA Astrophysics Data System (ADS)

    Koshimizu, K.; Uchida, T.

    2015-12-01

    Initial large-scale sediment yields caused by heavy rainfall or major storms are striking events. Previous studies focusing on landslide management investigated the initial sediment movement and its mechanism. However, integrated management of catchment-scale sediment movements requires estimating the sediment yield produced by subsequent expanded landslides due to rainfall, in addition to the initial landslide movement. This study presents a quantitative analysis of expanded landslides by surveying the Shukushubetsu River basin, at the foot of the Hidaka mountain range in central Hokkaido, Japan. This area recorded heavy rainfall in 2003, reaching a maximum daily precipitation of 388 mm. We extracted the expanded landslides from 2003 to 2008 using aerial photographs taken over the river area. In particular, we calculated the probability of expansion for each landslide, the ratio of the landslide area in 2008 to that in 2003, and the amount of expanded landslide area corresponding to the initial landslide area. As a result, the probability of expansion for each landslide is estimated at about 24%. In addition, each expanded landslide area is smaller than the initial landslide area. Furthermore, the amount of each expanded landslide area in 2008 is approximately 7% of the corresponding landslide area in 2003. Therefore, the sediment yield from subsequent expanded landslides is equal to or only slightly greater than the sediment yield under typical base flow. Thus, we concluded that the amount of sediment yield from subsequent expanded landslides is small relative to the initial large-scale sediment yield caused by heavy rainfall in terms of its effect on the management of catchment-scale sediment movement.

  11. The effects of moderately high temperature on zeaxanthin accumulation and decay.

    PubMed

    Zhang, Ru; Kramer, David M; Cruz, Jeffrey A; Struck, Kimberly R; Sharkey, Thomas D

    2011-09-01

    Moderately high temperature reduces photosynthetic capacities of leaves with large effects on thylakoid reactions of photosynthesis, including xanthophyll conversion in the lipid phase of the thylakoid membrane. In previous studies, we found that a leaf temperature of 40°C increased zeaxanthin accumulation in dark-adapted, intact tobacco leaves following a brief illumination, but did not change the amount of zeaxanthin in light-adapted leaves. To investigate heat effects on zeaxanthin accumulation and decay, the zeaxanthin level was monitored optically in dark-adapted, intact tobacco and Arabidopsis thaliana leaves at either 23 or 40°C under 45-min illumination. Heated leaves had more zeaxanthin following 3 min of light but had less or comparable amounts of zeaxanthin by the end of 45 min of illumination. Zeaxanthin accumulated faster at light initiation and decayed faster upon darkening in leaves at 40°C than in leaves at 23°C, indicating that heat increased the activities of both violaxanthin de-epoxidase (VDE) and zeaxanthin epoxidase (ZE). In addition, our optical measurements demonstrated in vivo that weak light enhances zeaxanthin decay relative to darkness in intact leaves of tobacco and Arabidopsis, confirming previous observations in isolated spinach chloroplasts. However, the maximum rate of decay is similar for weak light and darkness, and we used the maximum rate of decay following darkness as a measure of the rate of ZE during steady-state light. A simulation indicated that high temperature should cause a large shift in the pH dependence of the amount of zeaxanthin in leaves because of differential effects on VDE and ZE. This allows the reduction in ΔpH caused by heat to be offset by increased VDE activity relative to ZE.

  12. Validation of a simplified food frequency questionnaire for the assessment of dietary habits in Iranian adults: Isfahan Healthy Heart Program, Iran.

    PubMed

    Mohammadifard, Noushin; Sajjadi, Firouzeh; Maghroun, Maryam; Alikhasi, Hassan; Nilforoushzadeh, Farzaneh; Sarrafzadegan, Nizal

    2015-03-01

    Dietary assessment is the first step of dietary modification in community-based interventional programs. This study was performed to validate a simple food frequency questionnaire (SFFQ) for the assessment of selected food items in epidemiological studies with large sample sizes as well as in community trials. This validation study was carried out on 264 healthy adults aged ≥ 41 years living in three central districts of Iran (Isfahan, Najafabad, and Arak). Selected food intakes were assessed using a 48-item food frequency questionnaire (FFQ). The FFQ was interviewer-administered and was completed twice: at the beginning of the study and 2 weeks thereafter. The validity of the SFFQ was examined against the amounts estimated by a single 24-h dietary recall and a 2-day dietary record. Validity was evaluated using Spearman correlation coefficients between the daily frequency of food-group consumption assessed by the FFQ and the daily food-group intake assessed by the dietary reference methods. Intraclass correlation coefficients (ICC) were used to determine reproducibility. Spearman correlation coefficients between the amounts of food-group intake estimated by the examined and reference methods ranged from 0.105 (P = 0.378) for pickles to 0.48 (P < 0.001) for plant protein. ICCs for the reproducibility of the FFQ ranged from 0.47 to 0.69 across food groups (P < 0.001). The designed SFFQ has good relative validity and reproducibility for the assessment of selected food-group intakes. Thus, it can serve as a valid tool in epidemiological studies and clinical trials with large numbers of participants.
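
    The two statistics named above, Spearman rank correlation for relative validity and the intraclass correlation coefficient for reproducibility, can be computed as sketched below. The data are synthetic stand-ins, and the one-way random-effects ICC(1,1) formula is one common choice; the paper's exact ICC model is not stated in the abstract.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n = 264
true_intake = rng.gamma(shape=2.0, scale=1.5, size=n)    # daily servings of a food group
ffq_1 = true_intake + rng.normal(0, 0.8, n)              # FFQ, first administration
ffq_2 = true_intake + rng.normal(0, 0.8, n)              # FFQ, two weeks later
reference = true_intake + rng.normal(0, 0.5, n)          # 24-h recall / dietary record

rho, p = spearmanr(ffq_1, reference)
print(f"relative validity: Spearman rho = {rho:.2f} (p = {p:.3g})")

def icc_oneway(x, y):
    """ICC(1,1): one-way random effects, two measurements per subject."""
    data = np.column_stack([x, y])
    n_subj, k = data.shape
    grand = data.mean()
    ms_between = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n_subj - 1)
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n_subj * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

print(f"reproducibility: ICC = {icc_oneway(ffq_1, ffq_2):.2f}")
```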

  13. Soil Inorganic Carbon Formation: Can Parent Material Overcome Climate?

    NASA Astrophysics Data System (ADS)

    Stanbery, C.; Will, R. M.; Seyfried, M. S.; Benner, S. G.; Flores, A. N.; Guilinger, J.; Lohse, K. A.; Good, A.; Black, C.; Pierce, J. L.

    2014-12-01

    Soil carbon is the third largest carbon reservoir and is composed of both organic and inorganic constituents. However, the storage and flux of soil carbon within the global carbon cycle are not fully understood. While organic carbon is often the focus of research, the factors controlling the formation and dissolution of soil inorganic carbon (SIC) are complex. Climate is largely accepted as the primary control on SIC, but the effects of soil parent material are less clear. We hypothesize that effects of parent material are significant and that SIC accumulation will be greater in soils formed from basalts than granites due to the finer textured soils and more abundant calcium and magnesium cations. This research is being conducted in the Reynolds Creek Experimental Watershed (RCEW) in southwestern Idaho. The watershed is an ideal location because it has a range of gradients in precipitation (250 mm to 1200 mm), ecology (sagebrush steppe to juniper), and parent materials (a wide array of igneous and sedimentary rock types) over a relatively small area. Approximately 20 soil profiles will be excavated throughout the watershed and will capture the effects of differing precipitation amounts and parent material on soil characteristics. Several samples at each site will be collected for analysis of SIC content and grain size distribution using a pressure calcimeter and hydrometers, respectively. Initial field data suggests that soils formed over basalts have a higher concentration of SIC than those on granitic material. If precipitation is the only control on SIC, we would expect to see comparable amounts in soils formed on both rock types within the same precipitation zone. However, field observations suggest that for all but the driest sites, soils formed over granite had no SIC detected while basalt soils with comparable precipitation had measurable amounts of SIC. Grain size distribution appears to be a large control on SIC as the sandier, granitic soils promote deeper percolation. This ongoing research will clarify the processes involved in SIC formation and identify the situations where it is an atmospheric source or sink.

  14. Cloud radiative effects and changes simulated by the Coupled Model Intercomparison Project Phase 5 models

    NASA Astrophysics Data System (ADS)

    Shin, Sun-Hee; Kim, Ok-Yeon; Kim, Dongmin; Lee, Myong-In

    2017-07-01

    Using 32 CMIP5 (Coupled Model Intercomparison Project Phase 5) models, this study examines the fidelity of the simulated cloud amounts and their radiative effects (CREs) in the historical run driven by observed external radiative forcing for 1850-2005, and their future changes in the RCP (Representative Concentration Pathway) 4.5 scenario runs for 2006-2100. Validation metrics for the historical run are designed to examine the accuracy of the representation of spatial patterns for the climatological mean and for the annual and interannual variations of clouds and CREs. The models show large spread in the simulation of cloud amounts, specifically in the low cloud amount. The observed relationship between cloud amount and the controlling large-scale environment is also reproduced with varying skill across models. Based on the validation metrics, four models (ACCESS1.0, ACCESS1.3, HadGEM2-CC, and HadGEM2-ES) are selected as the best models, and the average of the four models performs more skillfully than the multimodel ensemble average. All models project global-mean SST warming as greenhouse gases increase, but the magnitude varies across the simulations between 1 and 2 K, which is largely attributable to differences in the change of cloud amount and distribution. The models that simulate more SST warming show a greater increase in the net CRE due to reduced low cloud and increased incoming shortwave radiation, particularly over the regions of marine boundary layer in the subtropics. The selected best-performing models project a significant reduction in global-mean cloud amount of about -0.99% K^-1 and a net radiative warming of 0.46 W m^-2 K^-1, suggesting a positive cloud feedback to global warming.
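
    One typical ingredient of such validation metrics is an area-weighted pattern statistic comparing a simulated climatological field with an observed one. The sketch below computes a cosine-latitude-weighted pattern correlation and RMSE on placeholder cloud-amount maps; it is an assumed illustration, not the study's metric suite.

```python
import numpy as np

rng = np.random.default_rng(4)
lat = np.linspace(-89, 89, 90)
lon = np.linspace(0, 358, 180)
w = np.cos(np.deg2rad(lat))[:, None] * np.ones((1, lon.size))  # area weights

obs = rng.normal(60, 15, size=(lat.size, lon.size))    # "observed" cloud amount (%)
model = obs + rng.normal(0, 10, size=obs.shape)        # a model field with random error

def weighted_stats(a, b, w):
    wm = lambda x: np.sum(w * x) / np.sum(w)           # area-weighted mean
    aa, bb = a - wm(a), b - wm(b)
    corr = wm(aa * bb) / np.sqrt(wm(aa ** 2) * wm(bb ** 2))
    rmse = np.sqrt(wm((a - b) ** 2))
    return corr, rmse

corr, rmse = weighted_stats(model, obs, w)
print(f"pattern correlation = {corr:.2f}, RMSE = {rmse:.1f} %")
```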

  15. Secondary scintillation yield from GEM and THGEM gaseous electron multipliers for direct dark matter search

    NASA Astrophysics Data System (ADS)

    Monteiro, C. M. B.; Fernandes, L. M. P.; Veloso, J. F. C. A.; Oliveira, C. A. B.; dos Santos, J. M. F.

    2012-07-01

    The search for alternatives to PMTs as photosensors in optical TPCs for rare event detection has significantly increased in the last few years. In particular, in view of the next generation of large volume detectors, the use of photosensors with lower natural radioactivity, such as large area APDs or GM-APDs, with the additional possibility of sparse surface coverage, has triggered the intense study of secondary scintillation production in micropattern electron multipliers, such as GEMs and THGEMs, as alternatives to the commonly used uniform electric field region between two parallel meshes. The much higher scintillation output obtained from the electron avalanches in such microstructures presents an advantage in those situations. The accurate knowledge of the amount of such scintillation is important for correct detector simulation and optimization. It will also serve as a benchmark for software tools developed and/or under development for the calculation of the amount of such scintillation. The secondary scintillation yield, or electroluminescence yield, in the electron avalanches of GEMs and THGEMs operating in gaseous xenon and argon has been determined for different gas pressures. At 1 bar, THGEMs deliver electroluminescence yields that are more than one order of magnitude higher than those achieved in GEMs and two orders of magnitude higher than those achieved in a uniform field gap. The THGEM electroluminescence yield presents a faster decrease with pressure than the GEM electroluminescence yield, reaching values similar to those achieved in GEMs at xenon pressures of 2.5 bar, but still one order of magnitude higher than that produced in a uniform field gap. Another exception is the GEM operating in argon, which presents an electroluminescence yield similar to that produced in a uniform electric field gap, while the THGEM achieves yields that are more than one order of magnitude higher.

  16. Evaluation of the Impacts of Marine Salts and Asian Dust on the Forested Yakushima Island Ecosystem, a World Natural Heritage Site in Japan.

    PubMed

    Nakano, Takanori; Yokoo, Yoriko; Okumura, Masao; Jean, Seo-Ryong; Satake, Kenichi

    2012-11-01

    To elucidate the influence of airborne materials on the ecosystem of Japan's Yakushima Island, we determined the elemental compositions and Sr and Nd isotope ratios in streamwater, soils, vegetation, and rocks. Streamwater had high Na and Cl contents, low Ca and HCO3 contents, and Na/Cl and Mg/Cl ratios close to those of seawater, but it had low pH (5.4 to 7.1), a higher Ca/Cl ratio than seawater, and distinct 87Sr/86Sr ratios that depended on the bedrock type. The proportions of rain-derived cations in streamwater, estimated by assuming that Cl was derived from sea salt aerosols, averaged 81 % for Na, 83 % for Mg, 36 % for K, 32 % for Ca, and 33 % for Sr. The Sr value was comparable to the 28 % estimated by comparing Sr isotope ratios between rain and granite bedrock. The soils are depleted in Ca, Na, P, and Sr compared with the parent materials. At Yotsuse in the northwestern side, plants and the soil pool have 87Sr/86Sr ratios similar to that of rainwater with a high sea salt component. In contrast, the Sr and Nd isotope ratios of soil minerals in the A and B horizons approach those of silicate minerals in northern China's loess soils. The soil Ca and P depletion results largely from chemical weathering of plagioclase and of small amounts of apatite and calcite in granitic rocks. This suggests that Yakushima's ecosystem is affected by large amounts of acidic precipitation with a high sea salt component, which leaches Ca and its proxy (Sr) from bedrock into streams, and by Asian dust-derived apatite, which is an important source of P in base cation-depleted soils.
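
    The sea-salt apportionment assumed above can be illustrated with a short calculation: if all streamwater Cl is taken to be marine, the rain-derived part of each cation X is Cl_stream x (X/Cl)_seawater and the remainder is attributed to weathering. The stream concentrations below are invented for illustration; only the seawater molar ratios are standard values.

```python
# Molar X/Cl ratios in seawater (standard values).
seawater_ratio = {"Na": 0.858, "Mg": 0.097, "K": 0.0187, "Ca": 0.0188}

# Hypothetical streamwater sample, concentrations in micromol/L.
stream = {"Cl": 250.0, "Na": 260.0, "Mg": 30.0, "K": 12.0, "Ca": 55.0}

for ion, ratio in seawater_ratio.items():
    marine = stream["Cl"] * ratio          # sea-salt (rain-derived) contribution
    frac = 100.0 * marine / stream[ion]
    print(f"{ion}: {frac:5.1f}% rain-derived, {100 - frac:5.1f}% attributed to weathering")
```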

  17. Novel Bioreactor Platform for Scalable Cardiomyogenic Differentiation from Pluripotent Stem Cell-Derived Embryoid Bodies.

    PubMed

    Rungarunlert, Sasitorn; Ferreira, Joao N; Dinnyes, Andras

    2016-01-01

    Generation of cardiomyocytes from pluripotent stem cells (PSCs) is a common and valuable approach to produce large amounts of cells for various applications, including assays and models for drug development, cell-based therapies, and tissue engineering. All these applications would benefit from a reliable bioreactor-based methodology to consistently generate homogeneous PSC-derived embryoid bodies (EBs) at a large scale, which can further undergo cardiomyogenic differentiation. The goal of this chapter is to describe a scalable method to consistently generate large amounts of homogeneous and synchronized EBs from PSCs. This method utilizes a slow-turning lateral vessel bioreactor to direct EB formation and the subsequent cardiomyogenic lineage differentiation.

  18. Hydrocyclone/Filter for Concentrating Biomarkers from Soil

    NASA Technical Reports Server (NTRS)

    Ponce, Adrian; Obenhuber, Donald

    2008-01-01

    The hydrocyclone-filtration extractor (HFE), now undergoing development, is a simple, robust apparatus for processing large amounts of soil to extract trace amounts of microorganisms, soluble organic compounds, and other biomarkers from soil and to concentrate the extracts in amounts sufficient to enable such traditional assays as cell culturing, deoxyribonucleic acid (DNA) analysis, and isotope analysis. Originally intended for incorporation into a suite of instruments for detecting signs of life on Mars, the HFE could also be used on Earth for similar purposes, including detecting trace amounts of biomarkers or chemical wastes in soils.

  19. Antitumor efficacy and intratumoral distribution of SN-38 from polymeric depots in brain tumor model

    PubMed Central

    Vejjasilpa, Ketpat; Manaspon, Chawan; Larbcharoensub, Noppadol; Boongird, Atthaporn; Hongeng, Suradej; Israsena, Nipan

    2015-01-01

    We investigate the antitumor efficacy and the 2D and 3D intratumoral distribution of 7-ethyl-10-hydroxycamptothecin (SN-38) from polymeric depots inside a U-87MG xenograft tumor model in nude mice. Results showed that polymeric depots could be used for the administration and controlled release of a large amount of SN-38 directly to the brain tumor model. SN-38 released from depots suppressed tumor growth, where the extent of suppression greatly depended on doses and the number of depot injections. Tumor suppression by SN-38 from depots was three-fold higher in animals which received double injections of depots at high dose (9.7 mg of SN-38) compared to a single injection (2.2 mg). H&E staining of tumor sections showed that the area of tumor cell death/survival of the former group was two-fold higher than that of the latter group. Fluorescence imaging based on the self-fluorescent property of SN-38 was used to evaluate the intratumoral distribution of this drug in comparison with the histological results. The linear correlation between fluorescence intensity and the amount of SN-38 allowed quantitative determination of SN-38 in tumor tissues. Results clearly showed a direct correlation between the amount of SN-38 in tumor sections and cancer cell death. Moreover, a 3D reconstruction representing the distribution of SN-38 in tumors was obtained. Results from this study suggest the rationale for intratumoral drug administration and release of drugs inside the tumor, which is necessary to design drug delivery systems with efficient antitumor activity. PMID:26080460

  20. PARENTS’ UNDERSTANDING OF INFORMATION REGARDING THEIR CHILD’S POSTOPERATIVE PAIN MANAGEMENT

    PubMed Central

    Tait, Alan R.; Voepel-Lewis, Terri; Snyder, Robin M.; Malviya, Shobha

    2009-01-01

    Objectives Unlike information provided for research, information disclosed to patients for treatment or procedures is largely unregulated and, as such, there is likely considerable variability in the type and amount of disclosure. This study was designed to examine the nature of information provided to parents regarding options for postoperative pain control and their understanding thereof. Methods 187 parents of children scheduled to undergo a surgical procedure requiring inpatient postoperative pain control completed questionnaires that elicited information regarding their perceptions and understanding of, and satisfaction with, information regarding postoperative pain management. Results Results showed that there was considerable variability in the content and amount of information provided to parents based on the method of postoperative pain control provided. Parents whose child received Patient Controlled Analgesia (PCA) were given significantly (P< 0.025) more information on the risks and benefits compared to those receiving Nurse Controlled or intravenous-prn (NCA or IV) analgesia. Approximately one third of parents had no understanding of the risks associated with postoperative pain management. Parents who received pain information preoperatively and who were given information regarding the risks and benefits had improved understanding compared to parents who received no or minimal information (P< 0.001). Furthermore, information that was deemed unclear or insufficient resulted in decreased parental understanding. Discussion These results demonstrate the variability in the type and amount of information provided to parents regarding their child’s postoperative pain control and reinforce the importance of clear and full disclosure of pain information, particularly with respect to the risks and benefits. PMID:18716495

  1. Drinking High Amounts of Alcohol as a Short-Term Mating Strategy: The Impact of Short-Term Mating Motivations on Young Adults' Drinking Behavior.

    PubMed

    Vincke, Eveline

    2017-01-01

    Previous research indicates that drinking large quantities of alcohol could function as a short-term mating strategy for young adults in mating situations. However, no study investigated whether this is actually the case. Therefore, in this article, the link between short-term mating motivations and drinking high amounts of alcohol is tested. First, a survey study (N = 345) confirmed that young adults who engage in binge drinking are more short-term oriented in their mating strategy than young adults who never engage in binge drinking. Also, the more short-term-oriented young adults were in their mating strategy, the more often binge drinking behavior was conducted. In addition, an experimental study (N = 229) empirically verified that short-term mating motivations increase young adults' drinking behavior, more so than long-term mating motivations. Results of the experiment clearly showed that young men and young women are triggered to drink more alcoholic beverages in a short-term mating situation compared to a long-term mating situation. Furthermore, the mating situation also affected young adults' perception of drinking behavior. Young adults in a short-term mating context perceived a higher amount of alcoholic beverages as heavy drinking compared to peers in a long-term mating context. These findings confirm that a high alcohol consumption functions as a short-term mating strategy for both young men and young women. Insights gained from this article might be of interest to institutions aimed at targeting youth alcohol (ab)use.

  2. Application study of Bio-FGD based on environmental safety during the coal combustion

    NASA Astrophysics Data System (ADS)

    Zhang, Pin

    2018-05-01

    Coal combustion produces a large amount of acidic gas, which is the main cause of acid rain and related environmental problems. Flue Gas Desulfurization (FGD) is a necessary requirement for clean coal combustion. Compared with traditional chemical desulfurization technology, biological desulfurization has the advantages of low operating cost, no secondary pollution, low carbon emissions and additional economic benefits. The process and structure of BioDeSOx, one Bio-FGD technology, are introduced. The major factors that influence a BioDeSOx Bio-FGD system are the pH, the oxidation-reduction potential (-300 mV to -400 mV), the electrical conductivity, the amount of added nutrient and the temperature (30°C-40°C). The BioDeSOx technology was applied in the Bio-FGD project of the Yixing xielian thermal power plant, which is taken as an example. The environmental and economic benefits of the project were greater than those of traditional desulfurization technology. With the continuous improvement of environmental safety standards, Bio-FGD technology will have broad application prospects.

  3. Effect of simultaneous exposure to occupational noise and cigarette smoke on binaural hearing impairment.

    PubMed

    Mohammadi, Saber; Mazhari, Mohammad Mahdi; Mehrparvar, Amir Houshang; Attarchi, Mir Saeed

    2010-01-01

    In recent years, it has been postulated that cigarette smoking can aggravate noise-induced hearing loss. In this study, we aimed to assess the effect of concurrent exposure to cigarette smoke and occupational noise on binaural hearing impairment (BHI). In an analytic study on the workers of a large wagon manufacturing company in 2007, 622 male workers (252 smokers and 370 non-smokers, matched for other variables) participated and their BHI was compared. BHI was significantly higher in smokers than in non-smokers (odds ratio = 5.6, P < 0.001, 95% CI = 3.4-9.4). Logistic regression confirmed this significant difference as well, and showed a direct relationship between the amount of BHI and pack-years of smoking. Cigarette smoking accompanied by exposure to workplace noise may play a role in causing binaural hearing impairment, so giving up or decreasing the amount of smoking may prevent or at least delay binaural hearing impairment, and eventually reduce its compensation costs.
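
    The odds-ratio result quoted above can be reproduced in form with a short calculation from a 2x2 table, using Woolf's logit method for the confidence interval. The counts below are invented to match the group sizes (252 smokers, 370 non-smokers); the paper's actual table is not given in the abstract.

```python
import math

a, b = 90, 162   # smokers:     with BHI, without BHI   (hypothetical counts)
c, d = 37, 333   # non-smokers: with BHI, without BHI   (hypothetical counts)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf's logit method
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.1f}, 95% CI {lo:.1f}-{hi:.1f}")
```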

  4. Observations of Highly Variable Deuterium in the Martian Upper Atmosphere

    NASA Astrophysics Data System (ADS)

    Clarke, John T.; Mayyasi-Matta, Majd A.; Bhattacharyya, Dolon; Chaufray, Jean-Yves; Chaffin, Michael S.; Deighan, Justin; Schneider, Nicholas M.; Jain, Sonal; Jakosky, Bruce

    2017-10-01

    One of the key pieces of evidence for historic high levels of water on Mars is the present elevated ratio of deuterium/hydrogen (D/H) in near-surface water. This can be explained by the loss of large amounts of water into space, with the lighter H atoms escaping faster than D atoms. Understanding the specific physical processes and controlling factors behind the present escape of H and D is the key objective of the MAVEN IUVS echelle channel. This knowledge can then be applied to an accurate extrapolation back in time to understand the water history of Mars. Observations of D in the martian upper atmosphere over the first martian year of the MAVEN mission have shown highly variable amounts of D, with a short-lived maximum just after perihelion and during southern summer. The timing and nature of this increase provide constraints on its possible origin. These results will be presented and compared with other measurements of the upper atmosphere of Mars.

  5. Modular cell biology: retroactivity and insulation

    PubMed Central

    Del Vecchio, Domitilla; Ninfa, Alexander J; Sontag, Eduardo D

    2008-01-01

    Modularity plays a fundamental role in the prediction of the behavior of a system from the behavior of its components, guaranteeing that the properties of individual components do not change upon interconnection. Just as electrical, hydraulic, and other physical systems often do not display modularity, nor do many biochemical systems, and specifically, genetic networks. Here, we study the effect of interconnections on the input–output dynamic characteristics of transcriptional components, focusing on a property, which we call 'retroactivity', that plays a role analogous to non-zero output impedance in electrical systems. In transcriptional networks, retroactivity is large when the amount of transcription factor is comparable to, or smaller than, the amount of promoter-binding sites, or when the affinity of such binding sites is high. To attenuate the effect of retroactivity, we propose a feedback mechanism inspired by the design of amplifiers in electronics. We introduce, in particular, a mechanism based on a phosphorylation–dephosphorylation cycle. This mechanism enjoys a remarkable insulation property, due to the fast timescales of the phosphorylation and dephosphorylation reactions. PMID:18277378
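
    The condition highlighted above, that retroactivity becomes large when the promoter-binding sites are comparable in amount to the transcription factor, can be illustrated with a minimal two-state model of a factor Z binding a finite site pool. The sketch below uses arbitrary parameter values, not those of the paper, and simply compares the response of free Z with and without the downstream load.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_on, k_off, delta = 10.0, 10.0, 0.01      # binding, unbinding, dilution/degradation
# Dissociation constant Kd = k_off/k_on = 1, so p_total = 100 is a heavy load.

def model(t, y, p_total):
    Z, C = y                               # free factor, factor-promoter complex
    k_t = 0.01 * (1.0 + np.sin(0.005 * t))          # slowly varying production rate
    binding = k_on * Z * (p_total - C) - k_off * C
    return [k_t - delta * Z - binding, binding]

t = np.linspace(0, 4000, 2000)
for p_total in (0.0, 100.0):               # no downstream sites vs. many sites
    sol = solve_ivp(model, (0, 4000), [0.0, 0.0], args=(p_total,),
                    t_eval=t, method="LSODA")
    z_late = sol.y[0][len(t) // 2:]        # ignore the initial transient
    print(f"p_total = {p_total:5.1f}: free Z swings between "
          f"{z_late.min():.2f} and {z_late.max():.2f}")
```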

  6. Minimising generation of acid whey during Greek yoghurt manufacturing.

    PubMed

    Uduwerella, Gangani; Chandrapala, Jayani; Vasiljevic, Todor

    2017-08-01

    Greek yoghurt, a popular dairy product, generates large amounts of acid whey as a by-product during manufacturing. Post-processing treatment of this stream presents one of the main concerns for the industry. The objective of this study was to manipulate the initial milk total solids content (15, 20 or 23 g/100 g) by addition of milk protein concentrate, thus reducing whey expulsion. Such an adjustment was investigated from the technological standpoint, including starter culture performance and the chemical and physical properties of the manufactured Greek yoghurt and the generated acid whey. A comparison was made to commercially available products. Increasing the protein content in regular yoghurt reduced the amount of acid whey during whey draining. This protein fortification also enhanced the Lb. bulgaricus growth rate and proteolytic activity. The best structural properties, including higher gel strength and lower syneresis, were observed in the Greek yoghurt produced with 20 g/100 g initial milk total solids, compared to the other manufactured or commercially available products, while acid whey generation was lowered due to the lower drainage requirement.

  7. Identification and characterization of genes responsible for biosynthesis of kojic acid, an industrially important compound from Aspergillus oryzae.

    PubMed

    Terabayashi, Yasunobu; Sano, Motoaki; Yamane, Noriko; Marui, Junichiro; Tamano, Koichi; Sagara, Junichi; Dohmoto, Mitsuko; Oda, Ken; Ohshima, Eiji; Tachibana, Kuniharu; Higa, Yoshitaka; Ohashi, Shinichi; Koike, Hideaki; Machida, Masayuki

    2010-12-01

    Kojic acid is produced in large amounts by Aspergillus oryzae as a secondary metabolite and is widely used in the cosmetic industry. Glucose can be converted to kojic acid, perhaps by only a few steps, but no genes for the conversion have thus far been revealed. Using a DNA microarray, gene expression profiles under three pairs of conditions significantly affecting kojic acid production were compared. All genes were ranked using an index parameter reflecting both a high amount of transcription and a high induction ratio under producing conditions. After disruption of nine candidate genes selected from the top of the list, two genes of unknown function were found to be responsible for kojic acid biosynthesis, one having an oxidoreductase motif and the other a transporter motif. These two genes are closely associated in the genome, showing typical characteristics of genes involved in secondary metabolism. Copyright © 2010 Elsevier Inc. All rights reserved.
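
    The ranking step described above can be sketched as follows: score each gene with an index that rewards both a high absolute signal under producing conditions and a high producing/non-producing induction ratio, then take candidates from the top of the list. The product of log-signal and log-ratio used below is an assumed stand-in for the authors' unspecified index, and the data are synthetic.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
genes = [f"AO_gene_{i:04d}" for i in range(2000)]
producing = rng.lognormal(mean=5, sigma=1.5, size=len(genes))      # array signal
non_producing = rng.lognormal(mean=5, sigma=1.5, size=len(genes))

df = pd.DataFrame({"gene": genes,
                   "producing": producing,
                   "non_producing": non_producing})
df["ratio"] = df["producing"] / df["non_producing"]
# Stand-in index: rewards high expression AND high induction under producing conditions.
df["index"] = np.log2(df["producing"]) * np.log2(df["ratio"].clip(lower=1.0))

candidates = df.sort_values("index", ascending=False).head(9)      # genes to disrupt
print(candidates[["gene", "producing", "ratio", "index"]].to_string(index=False))
```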

  8. Investigation of plastic debris ingestion by four species of sea turtles collected as bycatch in pelagic Pacific longline fisheries

    USGS Publications Warehouse

    Clukey, Katherine E.; Lepczyk, Christopher A.; Balazs, George H.; Work, Thierry M.; Lynch, Jennifer M.

    2017-01-01

    Ingestion of marine debris is an established threat to sea turtles. The amount, type, color and location of ingested plastics in the gastrointestinal tracts of 55 sea turtles from Pacific longline fisheries from 2012 to 2016 were quantified, and compared across species, turtle length, body condition, sex, capture location, season and year. Six approaches for quantifying amounts of ingested plastic strongly correlated with one another and included: number of pieces, mass, volume and surface area of plastics, ratio of plastic mass to body mass, and percentage of the mass of gut contents consisting of plastic. All olive ridley (n = 37), 90% of green (n = 10), 80% of loggerhead (n = 5) and 0% of leatherback (n = 3) turtles had ingested plastic; green turtles ingested significantly more than olive ridleys. Most debris was in the large intestines. No adverse health impacts (intestinal lesions, blockage, or poor body condition) due directly to plastic ingestion were noted.

  9. Domain-independent information extraction in unstructured text

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Irwin, N.H.

    Extracting information from unstructured text has become an important research area in recent years due to the large amount of text now electronically available. This status report describes the findings and work done during the second year of a two-year Laboratory Directed Research and Development Project. Building on the first year's work of identifying important entities, this report details techniques used to group words into semantic categories and to output templates containing selective document content. Using word profiles and category clustering derived during a training run, the time-consuming knowledge-building task can be avoided. Though the output still lacks completeness when compared to systems with domain-specific knowledge bases, the results do look promising. The two approaches are compatible and could complement each other within the same system. Domain-independent approaches retain appeal as a system that adapts and learns will soon outpace a system with any amount of a priori knowledge.

  10. BANNER: an executable survey of advances in biomedical named entity recognition.

    PubMed

    Leaman, Robert; Gonzalez, Graciela

    2008-01-01

    There has been an increasing amount of research on biomedical named entity recognition, the most basic text extraction problem, resulting in significant progress by different research teams around the world. This has created a need for a freely-available, open source system implementing the advances described in the literature. In this paper we present BANNER, an open-source, executable survey of advances in biomedical named entity recognition, intended to serve as a benchmark for the field. BANNER is implemented in Java as a machine-learning system based on conditional random fields and includes a wide survey of the best techniques recently described in the literature. It is designed to maximize domain independence by not employing brittle semantic features or rule-based processing steps, and achieves significantly better performance than existing baseline systems. It is therefore useful to developers as an extensible NER implementation, to researchers as a standard for comparing innovative techniques, and to biologists requiring the ability to find novel entities in large amounts of text.
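
    BANNER itself is a Java CRF-based tagger, so the sketch below does not reproduce it; it only illustrates the kind of orthographic, domain-independent token features (word shape, affixes, character classes, neighboring tokens) that such a system typically feeds to a conditional random field. Feature names and the example sentence are made up.

```python
import re

def word_shape(token):
    """Collapse character classes, e.g. 'IL-2' -> 'A-0'."""
    shape = re.sub(r"[A-Z]+", "A", token)
    shape = re.sub(r"[a-z]+", "a", shape)
    return re.sub(r"[0-9]+", "0", shape)

def token_features(tokens, i):
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "shape": word_shape(tok),
        "prefix3": tok[:3],
        "suffix3": tok[-3:],
        "has_digit": any(c.isdigit() for c in tok),
        "has_dash_or_greek": bool(re.search(r"-|alpha|beta|kappa", tok.lower())),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

sentence = "Mutations in the BRCA1 gene increase IL-2 expression .".split()
print(token_features(sentence, sentence.index("IL-2")))
```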

  11. Occurrence and estimation of trans-resveratrol in one-year-old canes from seven major Chinese grape producing regions.

    PubMed

    Zhang, Ang; Fang, Yulin; Li, Xuan; Meng, Jiangfei; Wang, Hua; Li, Hua; Zhang, Zhenwen; Guo, Zhijun

    2011-03-31

    The concentrations of trans-resveratrol in 165 grape cane samples from three major grape production regions and four large distribution centers of Chinese wild Vitis species were determined by reversed-phase high-performance liquid chromatography (HPLC). Among the different genotype groups and purposes of use, cultivars of V. vinifera had much higher amounts of trans-resveratrol than did cultivars of V. labrusca or of V. labrusca and V. vinifera hybrids, and within the V. vinifera species, significantly higher amounts of trans-resveratrol were found in wine grapes compared to table grapes. No significant differences were observed between V. labrusca and its hybrids from crosses with V. vinifera, or between red cultivars and white ones (P < 0.05 or P < 0.01). The contents of trans-resveratrol, a normal constituent of grape canes, in the Chinese wild species V. amurensis, V. pentagona, and V. davidii from their native habitats were also relatively high.

  12. Fog collecting biomimetic surfaces: Influence of microstructure and wettability.

    PubMed

    Azad, M A K; Ellerbrok, D; Barthlott, W; Koch, K

    2015-01-19

    We analyzed the fog collection efficiency of three different sets of samples: replicas (with and without microstructures), copper wires (smooth and microgrooved) and polyolefin meshes (hydrophilic, superhydrophilic and hydrophobic). The collection efficiency of the samples was compared within each set separately to investigate the influence of microstructures and/or the wettability of the surfaces on fog collection. Under the controlled experimental conditions chosen here, large differences in efficiency were found. Microstructured plant replica samples collected 2-3 times higher amounts of water than unstructured (smooth) samples. Copper wire samples showed similar results. Moreover, microgrooved wires showed faster dripping of water droplets than smooth wires. The superhydrophilic mesh tested here proved more efficient than any of the other mesh samples with different wettability. The amount of fog collected by the superhydrophilic mesh was about 5 times higher than that of the hydrophilic (untreated) mesh and about 2 times higher than that of the hydrophobic mesh.

  13. Swarm motility inhibitory and antioxidant activities of pomegranate peel processed under three drying conditions.

    PubMed

    John, K M Maria; Bhagwat, Arvind A; Luthria, Devanand L

    2017-11-15

    During the processing of ready-to-eat fresh fruits, large amounts of peel and seeds are discarded as waste. Pomegranate (Punica granatum) peels contain high amounts of bioactive compounds which inhibit the migration of Salmonella on wet surfaces. The metabolic distribution of bioactives in pomegranate peel, inner membrane, and edible aril portion was investigated under three different drying conditions, along with the anti-swarming activity against Citrobacter rodentium. Based on the multivariate analysis, 29 metabolites discriminated the pomegranate peel, inner membrane, and edible aril portion, as well as the three different drying methods. Punicalagins (∼38.6-50.3 mg/g) were detected in higher quantities in all fractions as compared to ellagic acid (∼0.1-3.2 mg/g) and punicalins (∼0-2.4 mg/g). The bioactivity (antioxidant, anti-swarming) and phenolics content were significantly higher in peels than in the edible aril portion. Natural anti-swarming agents from food waste may have promising potential for controlling food-borne pathogens. Published by Elsevier Ltd.

  14. Investigation of plastic debris ingestion by four species of sea turtles collected as bycatch in pelagic Pacific longline fisheries.

    PubMed

    Clukey, Katharine E; Lepczyk, Christopher A; Balazs, George H; Work, Thierry M; Lynch, Jennifer M

    2017-07-15

    Ingestion of marine debris is an established threat to sea turtles. The amount, type, color and location of ingested plastics in the gastrointestinal tracts of 55 sea turtles from Pacific longline fisheries from 2012 to 2016 were quantified, and compared across species, turtle length, body condition, sex, capture location, season and year. Six approaches for quantifying amounts of ingested plastic strongly correlated with one another and included: number of pieces, mass, volume and surface area of plastics, ratio of plastic mass to body mass, and percentage of the mass of gut contents consisting of plastic. All olive ridley (n=37), 90% of green (n=10), 80% of loggerhead (n=5) and 0% of leatherback (n=3) turtles had ingested plastic; green turtles ingested significantly more than olive ridleys. Most debris was in the large intestines. No adverse health impacts (intestinal lesions, blockage, or poor body condition) due directly to plastic ingestion were noted. Copyright © 2017. Published by Elsevier Ltd.

  15. Trace metals in Bermuda rainwater

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jickells, T.D.; Knap, A.H.; Church, T.M.

    1984-02-20

    The concentrations of Cd, Cu, Fe, Mn, Ni, Pb, and Zn have been measured in Bermuda rainwater. Factor analysis indicates that Fe, Mn, and Pb behave similarly to the acidic components derived from North America. The other metals all behave similarly to one another but differently from the acids. Sea salt, even after allowances for fractionation, apparently contributes minor amounts of Cu, Pb, and Zn and uncertain amounts of Fe, Mn, and Cd to Atlantic Ocean precipitation. Washout ratios, calculated from these data along with earlier measurements of atmospheric trace metal concentrations on Bermuda, are of the same order as those reported from other remote ocean areas. The wet depositional fluxes of Cu, Ni, Pb, and Zn to the western Atlantic Ocean are significant compared to measured oceanic flux rates. However, the wet depositional fluxes of Fe and Mn to this area are relatively small, suggesting additional inputs, while an excess wet depositional flux of Cd suggests large-scale atmospheric recycling of this element.

  16. Effects of multiple-scale driving on turbulence statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Hyunju; Cho, Jungyeon, E-mail: hyunju527@gmail.com, E-mail: jcho@cnu.ac.kr

    2014-01-01

    Turbulence is ubiquitous in astrophysical fluids such as the interstellar medium and the intracluster medium. In turbulence studies, it is customary to assume that the fluid is driven on a single scale. However, in astrophysical fluids, there can be many different driving mechanisms that act on different scales. If there are multiple energy-injection scales, the process of energy cascade and turbulence dynamo will differ from the case of a single energy-injection scale. In this work, we perform three-dimensional incompressible/compressible magnetohydrodynamic turbulence simulations. We drive turbulence in Fourier space in two wavenumber ranges, 2 ≤ k ≤ √12 (large scale) and 15 ≲ k ≲ 26 (small scale). We inject different amounts of energy in each range by changing the amplitude of the forcing in that range. We present the time evolution of the kinetic and magnetic energy densities and discuss the turbulence dynamo in the presence of energy injection at two scales. We show how the kinetic, magnetic, and density spectra are affected by the two-scale energy injection and discuss the observational implications. In the case ε_L < ε_S, where ε_L and ε_S are the energy-injection rates at the large and small scales, respectively, our results show that even a tiny amount of large-scale energy injection can significantly change the properties of the turbulence. On the other hand, when ε_L ≳ ε_S, the small-scale driving does not influence the turbulence statistics much unless ε_L ∼ ε_S.
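
    The two-band spectral driving described above can be sketched by injecting random Fourier amplitudes only inside the two wavenumber shells and controlling the energy-injection rates through separate amplitudes. The snippet below builds a single scalar forcing component on a small grid as an assumed illustration; a full MHD driver would do this per velocity component and handle solenoidal projection.

```python
import numpy as np

n = 64
k1d = np.fft.fftfreq(n, d=1.0 / n)                    # integer wavenumbers
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
kmag = np.sqrt(kx**2 + ky**2 + kz**2)

rng = np.random.default_rng(6)

def band_forcing(amplitude, kmin, kmax):
    """Random-phase scalar forcing restricted to a wavenumber shell."""
    phases = np.exp(2j * np.pi * rng.random((n, n, n)))
    f_hat = np.where((kmag >= kmin) & (kmag <= kmax), amplitude * phases, 0.0)
    return np.fft.ifftn(f_hat).real

# Large-scale band 2 <= k <= sqrt(12) and small-scale band 15 <= k <= 26,
# with separate amplitudes setting the two energy-injection rates.
force = band_forcing(1.0, 2.0, np.sqrt(12.0)) + band_forcing(0.3, 15.0, 26.0)
print("rms forcing per cell:", force.std())
```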

  17. Who should conduct ethnobotanical studies? Effects of different interviewers in the case of the Chácobo Ethnobotany project, Beni, Bolivia.

    PubMed

    Paniagua-Zambrana, Narel Y; Bussmann, Rainer W; Hart, Robbie E; Moya-Huanca, Araceli L; Ortiz-Soria, Gere; Ortiz-Vaca, Milton; Ortiz-Álvarez, David; Soria-Morán, Jorge; Soria-Morán, María; Chávez, Saúl; Chávez-Moreno, Bertha; Chávez-Moreno, Gualberto; Roca, Oscar; Siripi, Erlin

    2018-01-26

    That the answers elicited through interviews may be influenced by the knowledge of the interviewer is accepted across disciplines. However, in ethnobotany, there is little evidence to quantitatively assess what impact this effect may have. We use the results of a large study of traditional ecological knowledge (TEK) of plant use of the Chácobo and Pacahuara of Beni, Bolivia, to explore the effects of interviewer identity and knowledge upon the elicited plant species and uses. The Chácobo are a Panoan speaking tribe of about 1000 members (300+ adults) in Beni, Bolivia. Researchers have collected anthropological and ethnobotanical data from the Chácobo for more than a century. Here, we present a complete ethnobotanical inventory of the entire adult Chácobo population, with interviews and plant collection conducted directly by Chácobo counterparts, with a focus on the effects caused by external interviewers. Within this large study, with a unified training for interviewers, we did find that different interviewers did elicit different knowledge sets, that some interviewers were more likely to elicit knowledge similar to their own, and that participants interviewed multiple times often gave information as different as that from two randomly chosen participants. Despite this, we did not find this effect to be overwhelming: the amount of knowledge an interviewer reported on the research subject had comparatively little effect on the amount of knowledge that interviewer recorded from others, and even those interviewers who tended to elicit similar answers from participants also elicited a large percentage of novel information.

  18. What happens in hospitals does not stay in hospitals: antibiotic-resistant bacteria in hospital wastewater systems.

    PubMed

    Hocquet, D; Muller, A; Bertrand, X

    2016-08-01

    Hospitals are hotspots for antimicrobial-resistant bacteria (ARB) and play a major role in both their emergence and spread. Large numbers of these ARB will be ejected from hospitals via wastewater systems. In this review, we present quantitative and qualitative data of extended-spectrum β-lactamase (ESBL)-producing Escherichia coli, vancomycin-resistant enterococci and Pseudomonas aeruginosa in hospital wastewaters compared to community wastewaters. We also discuss the fate of these ARB in wastewater treatment plants and in the downstream environment. Published studies have shown that hospital effluents contain ARB, the burden of these bacteria being dependent on their local prevalence. The large amounts of antimicrobials rejected in wastewater exert a continuous selective pressure. Only a few countries recommend the primary treatment of hospital effluents before their discharge into the main wastewater flow for treatment in municipal wastewater treatment plants. Despite the lack of conclusive data, some studies suggest that treatment could favour the ARB, notably ESBL-producing E. coli. Moreover, treatment plants are described as hotspots for the transfer of antibiotic resistance genes between bacterial species. Consequently, large amounts of ARB are released in the environment, but it is unclear whether this release contributes to the global epidemiology of these pathogens. It is reasonable, nevertheless, to postulate that it plays a role in the worldwide progression of antibiotic resistance. Antimicrobial resistance should now be seen as an 'environmental pollutant', and new wastewater treatment processes must be assessed for their capability in eliminating ARB, especially from hospital effluents. Copyright © 2016. Published by Elsevier Ltd.

  19. A combinatory approach for analysis of protein sets in barley sieve-tube samples using EDTA-facilitated exudation and aphid stylectomy.

    PubMed

    Gaupels, Frank; Knauer, Torsten; van Bel, Aart J E

    2008-01-01

    This study investigated the advantages and drawbacks of two sieve-tube sap sampling methods for comparing phloem proteins in powdery mildew-infested vs. non-infested Hordeum vulgare plants. In one approach, sieve-tube sap was collected by stylectomy. Aphid stylets were cut and immediately covered with silicone oil to prevent any contamination or modification of the exudates. In this way, a maximum of 1 μL of pure phloem sap could be obtained per hour. Interestingly, after pathogen infection, exudation from microcauterized stylets was reduced to less than 40% of that of control plants, suggesting that powdery mildew induced sieve-tube occlusion mechanisms. In contrast to the laborious stylectomy, facilitated exudation, which uses EDTA to prevent calcium-mediated callose formation, is quick and easy and gives a large volume yield. After two-dimensional (2D) electrophoresis, a digital overlay of the protein sets extracted from EDTA solutions and stylet exudates showed that some major spots were the same with both sampling techniques. However, EDTA exudates also contained large amounts of contaminating proteins of unknown origin. A combinatory approach may be most favourable for studies in which the protein composition of phloem sap is compared between control and pathogen-infected plants. Facilitated exudation may be applied for subtractive identification of differentially expressed proteins by 2D/mass spectrometry, which requires large amounts of protein. A reference gel loaded with pure phloem sap from stylectomy may be useful for confirming the phloem origin of candidate spots by digital overlay. The method provides a novel opportunity to study differential expression of phloem proteins in monocotyledonous plant species.

  20. Interannual kinetics (2010-2013) of large wood in a river corridor exposed to a 50-year flood event and fluvial ice dynamics

    NASA Astrophysics Data System (ADS)

    Boivin, Maxime; Buffin-Bélanger, Thomas; Piégay, Hervé

    2017-02-01

    Semi-alluvial rivers of the Gaspé Peninsula, Québec, are prone to produce and transport vast quantities of large wood (LW). The high rate of lateral erosion owing to high-energy flows and noncohesive banks is the main process leading to the recruitment of large wood, which in turn initiates complex patterns of wood accumulation and re-entrainment within the active channel. The delta of the Saint-Jean River (SJR) has accumulated large annual wood fluxes since 1960, culminating in a wood raft > 3 km in length in 2014. To document the kinetics of large wood in the main channel of the SJR, four annual surveys were carried out from 2010 to 2013 to locate and describe > 1000 large wood jams (LWJ) and 2000 large wood individuals (LWI) along a 60-km river section. Airborne and ground photo/video images were used to estimate the wood volume introduced by lateral erosion and to identify the local geomorphic conditions that control wood mobility and deposits. Video camera analysis allowed the examination of transport rates for specific river sections during three hydrometeorological events. Results indicate that the volume of LW recruited between 2010 and 2013 represents 57% of the total LW production over the 2004-2013 period. Volumes of wood deposited along the 60-km section were four times higher in 2013 than in 2010. Increases in wood amount occurred mainly in the upper alluvial sections of the river, whereas decreases were observed in the semi-alluvial middle sections. Observations suggest that the 50-year flood event of 2010 produced large amounts of LW that were only partly exported out of the basin, so that a significant amount was still available for subsequent floods. Large wood storage continued after this flood until a similar flood or an ice-breakup event could remobilise these LW accumulations within the river corridor. Ice-jam floods transport large amounts of wood during events with fairly low flow but do not contribute significantly to recruitment rates (ca. 10 to 30% yearly). It is fairly probable that the wood export peak observed in 2012 at the river mouth, a year in which no flood occurred, and which is similar in magnitude to that of the 1-in-10-year flood of 2010, is mainly linked to the ice-breakup events that occurred in March 2012.

  1. Source-Receptor Relationship Analysis of the Atmospheric Deposition of PAHs Subject to Long-Range Transport in Northeast Asia.

    PubMed

    Inomata, Yayoi; Kajino, Mizuo; Sato, Keiichi; Kurokawa, Junichi; Tang, Ning; Ohara, Toshimasa; Hayakawa, Kazuichi; Ueda, Hiromasa

    2017-07-18

    Source-receptor relationships for PAH deposition in Northeast Asia were investigated using an Eulerian regional-scale aerosol chemical transport model. Dry deposition (DD) of PAHs was controlled by wind flow patterns, whereas wet deposition (WD) depended on precipitation in addition to wind flow patterns. The contribution of WD was approximately 50-90% of the total deposition, except during winter in Northern China (NCHN) and Eastern Russia (ERUS) because of the low amount of precipitation there. The amount of PAH deposition showed clear seasonal variation and was high in winter and low in summer in the downwind (South Korea, Japan) and oceanic-receptor regions. In the downwind region, the contributions from NCHN (WD 28-52%; DD 54-55%) and Central China (CCHN) (WD 43-65%; DD 33-38%) were large in winter, whereas self-contributions (WD 20-51%; DD 79-81%) were relatively high in summer. In the oceanic-receptor region, the deposition amount decreased with distance from the Asian continent. The amount of DD was strongly influenced by emissions from neighboring domains. The contributions of WD from NCHN (16-20%) and CCHN (28-35%) were large. The large contributions from China in summer to the downwind region were linked to vertical transport of PAHs over the Asian continent associated with convection.

  2. Dynamic travel information personalized and delivered to your cell phone : addendum.

    DOT National Transportation Integrated Search

    2011-03-01

    Real-time travel information must reach a significant number of travelers to create a large amount of travel behavior change. For this project, since the TRAC-IT mobile phone application is used to monitor user context in terms of location, the mobil...

  3. New insights into liquid chromatography for more eco-friendly analysis of pharmaceuticals.

    PubMed

    Shaaban, Heba

    2016-10-01

    Greening the analytical methods used for the analysis of pharmaceuticals has received great interest, with the aim of eliminating or minimizing the amount of organic solvents consumed daily worldwide without loss of chromatographic performance. Traditional analytical LC techniques employed in pharmaceutical analysis consume tremendous amounts of hazardous solvents and consequently generate large amounts of waste. The monetary and ecological impact of using large amounts of solvents, and of waste disposal, has motivated the analytical community to search for alternatives that replace polluting analytical methodologies with clean ones. In this context, implementing the principles of green analytical chemistry (GAC) in analytical laboratories is highly desirable. This review gives a comprehensive overview of the different green LC pathways for implementing GAC principles in analytical laboratories and focuses on evaluating the greenness of LC analytical procedures. It presents green LC approaches for eco-friendly analysis of pharmaceuticals in industrial, biological, and environmental matrices. Graphical Abstract Green pathways of liquid chromatography for more eco-friendly analysis of pharmaceuticals.

  4. Long-term Effects of Organic Waste Fertilizers on Soil Structure, Tracer Transport, and Leaching of Colloids.

    PubMed

    Lekfeldt, Jonas Duus Stevens; Kjaergaard, Charlotte; Magid, Jakob

    2017-07-01

    Organic waste fertilizers have previously been observed to significantly affect soil organic carbon (SOC) content and soil structure. However, the effect of organic waste fertilizers on colloid dispersibility and leaching of colloids from topsoil has not yet been studied extensively. We investigated how the repeated application of different types of agricultural (liquid cattle slurry and solid cattle manure) and urban waste fertilizers (sewage sludge and composted organic household waste) affected soil physical properties, colloid dispersion from aggregates, tracer transport, and colloid leaching from intact soil cores. Total porosity was positively correlated with SOC content. Yearly applications of sewage sludge increased absolute microporosity (pores <30 μm) and decreased relative macroporosity (pores >30 μm) compared with the unfertilized control, whereas organic household waste compost fertilization increased both total porosity and the absolute porosity in all pore size classes (though not significant for 100-600 μm). Treatments receiving large amounts of organic fertilizers exhibited significantly lower levels of dispersible colloids compared with an unfertilized control and a treatment that had received moderate applications of cattle slurry. The content of water-dispersible colloids could not be explained by a single factor, but differences in SOC content, electrical conductivity, and sodium adsorption ratio were important factors. Moreover, we found that the fertilizer treatments did not significantly affect the solute transport properties of the topsoil. Finally, we found that the leaching of soil colloids was significantly decreased in treatments that had received large amounts of organic waste fertilizers, and we ascribe this primarily to treatment-induced differences in effluent electrical conductivity during leaching. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  5. Characterization of large-pore polymeric supports for use in perfusion biochromatography.

    PubMed

    Whitney, D; McCoy, M; Gordon, N; Afeyan, N

    1998-05-22

    Perfusion chromatography is uniquely characterized by the flow of a portion of the column eluent directly through the resin in the packed bed. The benefits of this phenomenon and some of the properties of perfusive resins have been described before, and can be summarized as enhanced mass transport to interior binding sites. Here we extend the understanding of this phenomenon by comparing resins with different pore size distributions. Resins are chosen to give approximately the same specific pore volumes (as shown in the characterization section), but the varying contribution of large pores is used to control the amount of liquid flowing through the beads. POROS R1 has the largest contribution of throughpores, and therefore the greatest intraparticle flow. POROS R2 has a lower contribution of throughpores, and a higher surface area coming from a greater population of diffusive pores, but still shows significant mass transport enhancements relative to a purely diffusive control. Oligo R3 is dominated by a high population of diffusive pores, and is used comparatively as a non-perfusive resin. Although the pore size distribution can be engineered to control mass transport rates, the resulting surface area is not the only means by which binding capacity can be controlled. Surface coatings are employed to increase binding capacity without fundamentally altering the mass transport properties. Models are used to describe the amount of flow transecting the beads, and comparisons of coated resins to uncoated (polystyrene) resins lead to the conclusion that these coatings do not obstruct the throughpore structures. This is an important conclusion since the binding capacity of the coated product, in some cases, is shown to be over 10-fold higher than that of the precursor polystyrene scaffold (i.e., POROS R1 or POROS R2).

  6. Adaptive semi-supervised recursive tree partitioning: The ART towards large scale patient indexing in personalized healthcare.

    PubMed

    Wang, Fei

    2015-06-01

    With the rapid development of information technologies, tremendous amounts of data have become readily available in various application domains. This big data era presents challenges to many conventional data analytics research directions, including data capture, storage, search, sharing, analysis, and visualization. It is no surprise that the success of next-generation healthcare systems relies heavily on the effective utilization of gigantic amounts of medical data. The ability to analyze big data in modern healthcare systems plays a vital role in improving the quality of care delivery. Specifically, patient similarity evaluation aims at estimating the clinical affinity and diagnostic proximity of patients. As one of the successful data-driven techniques adopted in healthcare systems, patient similarity evaluation plays a fundamental role in many healthcare research areas such as prognosis, risk assessment, and comparative effectiveness analysis. However, existing algorithms for patient similarity evaluation are inefficient in handling massive patient data. In this paper, we propose an Adaptive Semi-Supervised Recursive Tree Partitioning (ART) framework for large-scale patient indexing such that patients with similar clinical or diagnostic patterns can be correctly and efficiently retrieved. The framework is designed for semi-supervised settings, since it is crucial to leverage experts' supervision knowledge in medical scenarios, and such knowledge is fairly limited compared with the available data. Starting from the proposed ART framework, we discuss several specific instantiations and validate them on both benchmark and real-world healthcare data. Our results show that with the ART framework, patients can be efficiently and effectively indexed in the sense that (1) similar patients can be retrieved in a very short time; (2) the retrieval performance can beat state-of-the-art indexing methods. Copyright © 2015. Published by Elsevier Inc.
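
    The ART algorithm itself is not described in this record. Purely as an illustration of the general idea of recursive tree partitioning for fast similarity retrieval, the sketch below (hypothetical and unsupervised, using random-projection splits rather than the authors' adaptive semi-supervised splits) indexes synthetic patient feature vectors and answers a query by searching only the leaf it falls into.

      # Illustrative sketch only (not the authors' ART implementation): unsupervised
      # recursive tree partitioning with random-projection splits. Patients are
      # split recursively along a random direction; a query descends the tree and
      # is compared only against the patients stored in the leaf it reaches.
      import numpy as np

      class Node:
          def __init__(self, indices=None, direction=None, threshold=None, left=None, right=None):
              self.indices = indices          # patient row indices kept at a leaf
              self.direction = direction      # projection vector used for the split
              self.threshold = threshold      # median projection value at the split
              self.left, self.right = left, right

      def build(X, indices, leaf_size=20, rng=np.random.default_rng(0)):
          if len(indices) <= leaf_size:
              return Node(indices=indices)
          d = rng.normal(size=X.shape[1])
          d /= np.linalg.norm(d)
          proj = X[indices] @ d
          thr = np.median(proj)
          left, right = indices[proj <= thr], indices[proj > thr]
          if len(left) == 0 or len(right) == 0:     # degenerate split: stop here
              return Node(indices=indices)
          return Node(direction=d, threshold=thr,
                      left=build(X, left, leaf_size, rng),
                      right=build(X, right, leaf_size, rng))

      def query(root, X, x, k=5):
          node = root
          while node.indices is None:               # descend to a single leaf
              node = node.left if x @ node.direction <= node.threshold else node.right
          cand = node.indices
          nearest = np.argsort(np.linalg.norm(X[cand] - x, axis=1))[:k]
          return cand[nearest]

      # Usage: index 1,000 synthetic 30-dimensional patient feature vectors.
      X = np.random.default_rng(1).normal(size=(1000, 30))
      root = build(X, np.arange(len(X)))
      print(query(root, X, X[0]))                   # indices of similar "patients"

    Restricting the comparison to one small leaf is what makes tree-based indexes fast; ART's adaptive, semi-supervised elements would change how the splits are chosen, not this overall retrieval pattern.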

  7. Spatial and Temporal Patterns of Throughfall Amounts and Solutes in a Tropical Montane Forest - Comparisons with Findings From Lowland Rain Forests

    NASA Astrophysics Data System (ADS)

    Zimmermann, A.

    2007-05-01

    The diverse tree species composition, irregularly shaped tree crowns and multi-layered forest structure affect the redistribution of rainfall in lower montane rain forests. In addition, abundant epiphyte biomass and the associated canopy humus influence the spatial patterns of throughfall. The spatial variability of throughfall amounts controls spatial patterns of solute concentrations and deposition. Moreover, the living and dead biomass interacts with the rainwater during its passage through the canopy and creates a chemical variability of its own. Since spatial and temporal patterns are intimately linked, the analysis of temporal solute concentration dynamics is an important step towards understanding the emerging spatial patterns. I hypothesized that: (1) the spatial variability of volumes and chemical composition of throughfall is particularly high compared with other forests because of the high biodiversity and epiphytism, (2) the temporal stability of the spatial pattern is high because of stable structures in the canopy (e.g. large epiphytes) that show only minor changes during the short-term observation period, and (3) element concentrations decrease with increasing rainfall because of exhausted element pools in the canopy. The study area, at 1950 m above sea level, is located in the south Ecuadorian Andes, far from anthropogenic emission sources and marine influences. Rain and throughfall were collected from August to October 2005 on an event and within-event basis for five precipitation periods and analyzed for pH, K, Na, Ca, Mg, NH4+, Cl-, NO3-, PO43-, TN, TP and TOC. Throughfall amounts and most of the solutes showed a high spatial variability, and the variability of H+, K, Ca, Mg, Cl- and NO3- exceeded that reported from a Brazilian tropical rain forest. The temporal persistence of the spatial patterns was high for throughfall amounts and varied depending on the solute. Highly persistent time stability patterns were detected for K, Mg and TOC concentrations. Time stability patterns of solute deposition were somewhat weaker than those of concentrations for most of the solutes. Epiphytes strongly affected time stability patterns in that collectors situated below thick moss mats or arboreal bromeliads were largely responsible for the extreme persistence, with low throughfall amounts and high ion concentrations (H+ showed low concentrations). Rainfall solute concentrations were low compared with a variety of other tropical lowland and montane forest sites and showed small temporal variability during the study period for both between- and within-event dynamics. Throughfall solute concentrations were more within the range reported from other sites and showed highly variable within-event dynamics. For most of the solutes, within-event concentrations did not reach low, constant concentrations in later event stages; rather, concentrations fluctuated (e.g. Cl-) or increased (e.g. K and TOC). The within-event throughfall solute concentration dynamics in this lower montane rain forest contrast with recent observations from lowland tropical rain forests in Panama and Brazil. The observed within-event patterns are attributed (1) to the influence of epiphytes and associated canopy humus, and (2) to low rainfall intensities.

  8. Seawater strontium isotopes, acid rain, and the Cretaceous-Tertiary boundary

    NASA Technical Reports Server (NTRS)

    Macdougall, J. D.

    1988-01-01

    A large bolide impact at the end of the Cretaceous would have produced significant amounts of nitrogen oxides by shock heating of the atmosphere. The resulting acid precipitation would have increased continental weathering greatly and could be an explanation for the observed high ratio of strontium-87 to strontium-86 in seawater at about this time, due to the dissolution of large amounts of strontium from the continental crust. Spikes to high values in the seawater strontium isotope record at other times may reflect similar episodes.

  9. 26 CFR 49.4251-4 - Prepaid telephone cards.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...), the face amount of the PTC is treated as an amount paid for communications services and that amount is... amount of communications services as the PTC to which it is being compared. Dollar card means a PTC the...) means a card or similar arrangement that permits its holder to obtain a fixed amount of communications...

  10. Diet and Co-ecology of Pleistocene Short-Faced Bears and Brown Bears in Eastern Beringia

    NASA Astrophysics Data System (ADS)

    Matheus, Paul E.

    1995-11-01

    Carbon and nitrogen stable isotope analysis of fossil bone collagen reveals that Pleistocene short-faced bears (Arctodus simus) of Beringia were highly carnivorous, while contemporaneous brown bears (Ursus arctos) had highly variable diets that included varying amounts of terrestrial vegetation, salmon, and small amounts of terrestrial meat. A reconsideration of the short-faced bear's highly derived morphology indicates that they foraged as scavengers of widely dispersed large mammal carcasses and were simultaneously designed both for highly efficient locomotion and for intimidating other large carnivores. This allowed Arctodus to forage economically over a large home range and seek out, procure, and defend carcasses from other large carnivores. The isotope data and this reconstruction of Arctodus' foraging behavior refute the hypothesis that competition from brown bears was a significant factor in the extinction of short-faced bears.

  11. The Development and Microstructure Analysis of High Strength Steel Plate NVE36 for Large Heat Input Welding

    NASA Astrophysics Data System (ADS)

    Peng, Zhang; Liangfa, Xie; Ming, Wei; Jianli, Li

    In the shipbuilding industry, the welding efficiency of the ship plate not only has a great effect on the construction cost of the ship, but also affects the construction speed and determines the delivery cycle. Steel plate suitable for large heat input welding has therefore been actively developed. In this paper, the composition of the steel was designed along a micro-alloying route, with small amounts of Nb and Ti and a large amount of Mn. The C content and the carbon equivalent were also kept at a low level. Oxide metallurgy technology was used during the smelting of the steel. Thermo-mechanical controlled processing (TMCP) rolling was carried out at a low rolling temperature and ultra-fast cooling was applied, for the purpose of controlling the transformation of the microstructure. The microstructure of the steel plate was controlled to be a mixed microstructure of low-carbon bainite and ferrite. A large number of oxide particles were dispersed in the microstructure of the steel, which had a positive effect on its mechanical properties and welding performance. The mechanical properties of the steel plate were excellent, and the longitudinal Akv value at -60 °C was more than 200 J. The toughness of the weld metal (WM) and heat-affected zone (HAZ) remained excellent after the steel plate was welded with a large heat input of 100-250 kJ/cm. The steel plate processed as described above can meet the requirements of large heat input welding.
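
    The carbon equivalent mentioned above is a standard weldability index; the record does not state which formula was used for NVE36, but a common choice is the IIW formula, sketched below with illustrative element contents (an assumption, not values from the paper).

      # Hypothetical illustration: the IIW carbon equivalent, a common weldability
      # index; the specific formula used for NVE36 is not stated in this record.
      def carbon_equivalent_iiw(C, Mn, Cr=0.0, Mo=0.0, V=0.0, Ni=0.0, Cu=0.0):
          """CE(IIW) = C + Mn/6 + (Cr + Mo + V)/5 + (Ni + Cu)/15, contents in wt%."""
          return C + Mn / 6 + (Cr + Mo + V) / 5 + (Ni + Cu) / 15

      # Example with invented contents: a low-C, Mn-rich design keeps the carbon
      # equivalent low, which generally favours weldability at large heat inputs.
      print(round(carbon_equivalent_iiw(C=0.07, Mn=1.5), 3))   # -> 0.32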

  12. A population-based, case–control study of green tea consumption and leukemia risk in southwestern Taiwan

    PubMed Central

    Yu, Chu-Ling; Liu, Chen-Yu; Wang, Su-Fen; Pan, Pi-Chen; Wu, Ming-Tsang; Ho, Chi-Kung; Lo, Yu-Shing; Li, Yi; Christiani, David C.

    2011-01-01

    Objective This study investigated the association between green tea consumption and leukemia. Methods A total of 252 cases (90.3% response) and 637 controls (53.4% response) were enrolled. Controls were matched for cases on age and gender. Information was collected on participants’ living habits, including tea consumption. Green tea was used as a standard to estimate the total amount of individual catechin consumption. We stratified individual consumption of catechins into four levels. Conditional logistic regression models were fit to subjects aged 0–15 and 16–29 years to evaluate separate associations between leukemia and catechin consumption. Results A significant inverse association between green tea consumption and leukemia risk was found in individuals aged 16–29 years, whereas no significant association was found in the younger age groups. For the older group with higher amounts of tea consumption (>550 units of catechins), the adjusted odds ratio (OR) compared with the group without tea consumption was 0.47 [95% confidence interval (CI) = 0.23–0.97]. After we adjusted for smoking status and medical irradiation exposure, the overall OR for all participants was 0.49 (95% CI = 0.27–0.91), indicating an inverse relation between large amounts of catechins and leukemia. Conclusion Drinking sufficient amounts of tea, especially green tea, which contains more catechins than oolong tea and black tea, may reduce the risk of leukemia. PMID:18752033
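
    For orientation only: the odds ratios above come from adjusted conditional logistic regression models on matched data, which this record does not detail. The minimal sketch below shows how an unadjusted odds ratio and a Wald 95% confidence interval would be computed from a simple 2x2 table; the counts are invented and the matching and covariate adjustment of the actual study are not reproduced.

      # Illustration only: unadjusted odds ratio with a Wald 95% confidence interval
      # from a 2x2 table. The counts are invented; the published ORs come from
      # adjusted conditional logistic regression models on matched data.
      import math

      def odds_ratio_ci(a, b, c, d, z=1.96):
          """a, b: exposed/unexposed cases; c, d: exposed/unexposed controls."""
          or_ = (a * d) / (b * c)
          se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
          lower = math.exp(math.log(or_) - z * se_log)
          upper = math.exp(math.log(or_) + z * se_log)
          return or_, lower, upper

      print(odds_ratio_ci(a=20, b=60, c=120, d=180))   # hypothetical counts -> OR = 0.5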

  13. CONSTRUCTED WETLANDS VS. RETENTION POND BMPS: MESOCOSM STUDIES FOR IMPROVED POLLUTANT MANAGEMENT IN URBAN STORMWATER TREATMENT

    EPA Science Inventory

    Increased urbanization has increased the amount of directly connected impervious area that results in large quantities of stormwater runoff. This runoff can contribute significant amounts of debris and pollutants to receiving waters. Urban watershed managers often incorporate b...

  14. Bringing home the trash: do colony-based differences in foraging distribution lead to increased plastic ingestion in Laysan albatrosses?

    PubMed

    Young, Lindsay C; Vanderlip, Cynthia; Duffy, David C; Afanasyev, Vsevolod; Shaffer, Scott A

    2009-10-28

    When searching for prey, animals should maximize energetic gain while minimizing energy expenditure by altering their movements relative to prey availability. However, with increasing amounts of marine debris, what once may have been 'optimal' foraging strategies for top marine predators are leading to sub-optimal diets composed in large part of plastic. Indeed, the highly vagile Laysan albatross (Phoebastria immutabilis), which forages throughout the North Pacific, is well known for its tendency to ingest plastic. Here we examine whether Laysan albatrosses nesting on Kure Atoll and Oahu Island, 2,150 km apart, experience different levels of plastic ingestion. Twenty-two geolocators were deployed on breeding adults for up to two years. Regurgitated boluses of indigestible material were also collected from chicks at each site to compare the amount of plastic vs. natural foods. Chicks from Kure Atoll were fed almost ten times the amount of plastic compared to chicks from Oahu, despite boluses from both colonies having similar amounts of natural food. Tracking data indicated that adults from the two colonies did not have overlapping core distributions during the early half of the breeding period and that adults from Kure had a greater overlap with the putative range of the Western Garbage Patch, corroborating our observation of higher plastic loads at this colony. At-sea distributions also varied throughout the year, suggesting that Laysan albatrosses adjusted their foraging behavior according either to constraints on time away from the nest or to variation in resources. However, in the non-breeding season, distributional overlap was greater, indicating that the energy required to reach the foraging grounds was less important than the total energy available. These results demonstrate how a marine predator that is not dispersal limited alters its foraging strategy throughout the reproductive cycle to maximize energetic gain, and how this has led to differences in plastic ingestion.

  15. Quality Indicators for Human Milk Use in Very Low Birthweight Infants: Are We Measuring What We Should be Measuring?

    PubMed Central

    Bigger, Harold R.; Fogg, Louis J.; Patel, Aloka; Johnson, Tricia; Engstrom, Janet L.; Meier, Paula P.

    2014-01-01

    Objective The objective of this study was to compare the currently used human milk (HM) quality indicators that measure whether very low birthweight (VLBW; <1500 g birthweight) infants “ever” received HM and whether they were still receiving HM at discharge from the neonatal intensive care unit (NICU) to the actual amount and timing of HM received. Study Design This study used data from a large NIH-funded cohort study and calculated whether VLBW infants ever received HM (HM-Ever) and, of these infants, the percentage who were still receiving HM at NICU discharge (HM-DC). Then, the HM-DC indicator (exclusive, partial and none) was compared with the amount and timing of HM feedings received by these same infants. Results Of the 291 VLBW infants who met inclusion criteria, 285 received some HM (HM-Ever = 98%). At NICU discharge (HM-DC), 24.2%, 15.1% and 60.7% were receiving exclusive, partial and no HM, respectively. Of the 60.7% of infants with no HM-DC, some had received higher amounts of HM during the NICU hospitalization than infants categorized as exclusive and partial for HM-DC. Of the infants with no HM-DC, 76.8% and 59.7% had received exclusive HM during the Days 1–14 and Days 1–28 exposure periods, respectively. Conclusion The average daily dose (HM-DD; in mL/kg/d) and cumulative percentage (HM-PCT; as % of cumulative enteral intake) of HM feedings were sufficient to significantly reduce the risk of multiple morbidities, including late onset sepsis, necrotizing enterocolitis, neurocognitive delay and rehospitalization, in the majority of the VLBW infants who were discharged with no HM-DC. Quality indicators that focus on the amount and timing of HM feedings in the NICU should be added to the HM-Ever and HM-DC measures. PMID:24526005
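
    The record defines the two dose-based indicators only by their units. Under that reading (an assumption), a minimal sketch of how HM-DD (mL/kg/day) and HM-PCT (% of cumulative enteral intake) might be computed from daily intake records is given below; the variable names, values and 14-day window are illustrative only.

      # Sketch under stated assumptions: HM-DD as the mean daily human-milk dose in
      # mL/kg/day and HM-PCT as human milk's share of cumulative enteral intake (%).
      # The variable names, values and 14-day window are illustrative, not study data.
      def hm_dose_indicators(daily_hm_ml, daily_enteral_ml, daily_weight_kg):
          hm_dd = sum(hm / kg for hm, kg in zip(daily_hm_ml, daily_weight_kg)) / len(daily_hm_ml)
          hm_pct = 100.0 * sum(daily_hm_ml) / sum(daily_enteral_ml)
          return hm_dd, hm_pct

      hm = [10, 20, 30, 40, 50, 60, 60, 60, 60, 60, 60, 60, 60, 60]        # mL of HM per day
      enteral = [20, 30, 40, 50, 60, 70, 80, 80, 80, 80, 80, 80, 80, 80]   # total enteral mL per day
      weight = [1.2] * 14                                                  # kg, days 1-14
      print(hm_dose_indicators(hm, enteral, weight))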

  16. Evaluation of Fluoride Retention Due to Most Commonly Consumed Estuarine Fishes Among Fish Consuming Population of Andhra Pradesh as a Contributing Factor to Dental Fluorosis: A Cross-Sectional Study

    PubMed Central

    Ganta, Shravani; Nagaraj, Anup; Pareek, Sonia; Sidiq, Mohsin; Singh, Kushpal; Vishnani, Preeti

    2015-01-01

    Background Fluoride in drinking water is known for both beneficial and detrimental effects on health. The principal sources of fluoride include water, some species of vegetation, certain edible marine animals, dust and industrial processes. The purpose of this study was to evaluate the fluoride retention of the most commonly consumed estuarine fishes among the fish-consuming population of Andhra Pradesh. Materials and Methods A cross-sectional study was conducted to evaluate the amount of fluoride retained in the ten most commonly consumed estuarine fishes, as a contributing factor to fluorosis, using the SPADNS spectrophotometric method. The presence and severity of dental fluorosis in the fish-consuming population was recorded using the Community Fluorosis Index. Statistical analysis was done using MedCalc v12.2.1.0 software. Results Among seawater fishes, the fluoride levels in bone were highest in Indian sardine (4.22 ppm). Among the river water fishes, the fluoride levels in bone were highest in catla (1.51 ppm). Also, the mean total fluoride concentrations of all the river fishes in skin, muscle and bone were lower (0.86 ppm) than those of the seawater fishes (2.59 ppm). This revealed that sea fishes accumulate relatively large amounts of fluoride compared with river water fishes. The mean Community Fluorosis Index (CFI) was found to be 1.06 in the sampled fish-consuming population, suggesting that fluorosis is of medium public health importance. Conclusion The analysis showed that bone tends to accumulate the largest amount of fluoride, followed by muscle and skin, which might be due to the increased permeability and chemical trapping of fluoride inside the tissues. The amount of fluoride present in the fishes is directly related to the severity of fluorosis in the fish-consuming population, suggesting that fish are a contributing factor to fluorosis depending upon dietary consumption. PMID:26266208

  17. Evaluation of Fluoride Retention Due to Most Commonly Consumed Estuarine Fishes Among Fish Consuming Population of Andhra Pradesh as a Contributing Factor to Dental Fluorosis: A Cross-Sectional Study.

    PubMed

    Ganta, Shravani; Yousuf, Asif; Nagaraj, Anup; Pareek, Sonia; Sidiq, Mohsin; Singh, Kushpal; Vishnani, Preeti

    2015-06-01

    Fluoride in drinking water is known for both beneficial and detrimental effects on health. The principal sources of fluoride include water, some species of vegetation, certain edible marine animals, dust and industrial processes. The purpose of this study was to evaluate the fluoride retention of the most commonly consumed estuarine fishes among the fish-consuming population of Andhra Pradesh. A cross-sectional study was conducted to evaluate the amount of fluoride retained in the ten most commonly consumed estuarine fishes, as a contributing factor to fluorosis, using the SPADNS spectrophotometric method. The presence and severity of dental fluorosis in the fish-consuming population was recorded using the Community Fluorosis Index. Statistical analysis was done using MedCalc v12.2.1.0 software. Among seawater fishes, the fluoride levels in bone were highest in Indian sardine (4.22 ppm). Among the river water fishes, the fluoride levels in bone were highest in catla (1.51 ppm). Also, the mean total fluoride concentrations of all the river fishes in skin, muscle and bone were lower (0.86 ppm) than those of the seawater fishes (2.59 ppm). This revealed that sea fishes accumulate relatively large amounts of fluoride compared with river water fishes. The mean Community Fluorosis Index (CFI) was found to be 1.06 in the sampled fish-consuming population, suggesting that fluorosis is of medium public health importance. The analysis showed that bone tends to accumulate the largest amount of fluoride, followed by muscle and skin, which might be due to the increased permeability and chemical trapping of fluoride inside the tissues. The amount of fluoride present in the fishes is directly related to the severity of fluorosis in the fish-consuming population, suggesting that fish are a contributing factor to fluorosis depending upon dietary consumption.
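
    The Community Fluorosis Index cited in both versions of this record is conventionally computed as a weighted mean of Dean's fluorosis categories. A minimal sketch, assuming the standard Dean weights and using invented counts, is:

      # Minimal sketch of the Community Fluorosis Index (CFI), assuming the
      # conventional Dean weights; the counts below are invented, not study data.
      DEAN_WEIGHTS = {"normal": 0.0, "questionable": 0.5, "very_mild": 1.0,
                      "mild": 2.0, "moderate": 3.0, "severe": 4.0}

      def community_fluorosis_index(counts):
          """counts: mapping of Dean category -> number of individuals examined."""
          n_total = sum(counts.values())
          return sum(DEAN_WEIGHTS[cat] * n for cat, n in counts.items()) / n_total

      print(community_fluorosis_index(
          {"normal": 30, "questionable": 15, "very_mild": 25, "mild": 20, "moderate": 10}))  # -> ~1.03

    A value around 1.0, as in this invented example, falls in the range conventionally read as being of medium public health importance, consistent with the CFI of 1.06 reported in the record.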

  18. Bringing Home the Trash: Do Colony-Based Differences in Foraging Distribution Lead to Increased Plastic Ingestion in Laysan Albatrosses?

    PubMed Central

    Young, Lindsay C.; Vanderlip, Cynthia; Duffy, David C.; Afanasyev, Vsevolod; Shaffer, Scott A.

    2009-01-01

    When searching for prey, animals should maximize energetic gain while minimizing energy expenditure by altering their movements relative to prey availability. However, with increasing amounts of marine debris, what once may have been ‘optimal’ foraging strategies for top marine predators are leading to sub-optimal diets composed in large part of plastic. Indeed, the highly vagile Laysan albatross (Phoebastria immutabilis), which forages throughout the North Pacific, is well known for its tendency to ingest plastic. Here we examine whether Laysan albatrosses nesting on Kure Atoll and Oahu Island, 2,150 km apart, experience different levels of plastic ingestion. Twenty-two geolocators were deployed on breeding adults for up to two years. Regurgitated boluses of indigestible material were also collected from chicks at each site to compare the amount of plastic vs. natural foods. Chicks from Kure Atoll were fed almost ten times the amount of plastic compared to chicks from Oahu, despite boluses from both colonies having similar amounts of natural food. Tracking data indicated that adults from the two colonies did not have overlapping core distributions during the early half of the breeding period and that adults from Kure had a greater overlap with the putative range of the Western Garbage Patch, corroborating our observation of higher plastic loads at this colony. At-sea distributions also varied throughout the year, suggesting that Laysan albatrosses adjusted their foraging behavior according either to constraints on time away from the nest or to variation in resources. However, in the non-breeding season, distributional overlap was greater, indicating that the energy required to reach the foraging grounds was less important than the total energy available. These results demonstrate how a marine predator that is not dispersal limited alters its foraging strategy throughout the reproductive cycle to maximize energetic gain, and how this has led to differences in plastic ingestion. PMID:19862322

  19. Determining and validating the effective snow grain size and pollution amount from satellite measurements in polar regions

    NASA Astrophysics Data System (ADS)

    Heygster, Georg; Wiebe, Heidrun; Zege, Eleonora; Aoki, Teruo; Kokhanovsky, Alexander; Katsev, I. L.; Prikhach, Alexander; Malinka, A. V.; Grudo, J. O.

    Sea ice is part of the cryosphere, alongside the ice sheets, ice shelves, and glaciers. Compared to the other components, it is small in volume but large in area. Snow on top of the sea ice contributes even less mass, but strongly influences the albedo of the sea ice, and thus the local radiative balance, which plays an essential role in the albedo feedback process. The albedo of snow does not have a constant value: it depends on the grain size (smaller grains have higher albedo) and on the amount of pollution, such as soot and, less commonly, dust, both of which lower the albedo significantly. Our retrievals are based on an algorithm that uses optical satellite observations to calculate the size of the snow grains and their pollution, the Snow Grain Size and Pollution amount (SGSP) algorithm (Zege et al. 2009). Here we present the algorithm and its operational implementation, based on MODIS data, to calculate the snow grain size and pollution amount in near real time, together with a destriping procedure. The resulting data are used for a validation study by comparing them to in situ data taken at several sites near Hokkaido (Japan) and Barrow (Alaska, USA) between 2002 and 2005, and in Antarctica in 2003. While each single set of observations, in the Arctic and in the Antarctic, shows encouraging correlations, the regression lines between in situ and satellite retrievals of the snow grain size are quite different, with slopes of 1.01 (Arctic and Japan) and 0.44 (Antarctica). The discrepancy remains unresolved, emphasizing the need for more in situ observations for validation. Among the potential reasons for the discrepancy are the different kinds of in situ snow grain size measurements: the crystal size was measured in the Arctic (Barrow) and Japan (Hokkaido) using a lens, whereas optical methods were used in Antarctica.

  20. Assessing the uncertainty of soil moisture impacts on convective precipitation using a new ensemble approach

    NASA Astrophysics Data System (ADS)

    Henneberg, Olga; Ament, Felix; Grützun, Verena

    2018-05-01

    Soil moisture amount and distribution control evapotranspiration and thus impact the occurrence of convective precipitation. Many recent model studies demonstrate that changes in initial soil moisture content result in modified convective precipitation. However, to quantify the resulting precipitation changes, the chaotic behavior of the atmospheric system needs to be considered. Slight changes in the simulation setup, such as the chosen model domain, also result in modifications to the simulated precipitation field. This causes an uncertainty due to stochastic variability, which can be large compared to the effects caused by soil moisture variations. By shifting the model domain, we estimate the uncertainty of the model results. Our novel uncertainty estimate comprises 10 simulations with shifted model boundaries and is compared to the effects on precipitation caused by variations in soil moisture amount and local distribution. With this approach, the influence of soil moisture amount and distribution on convective precipitation is quantified. Deviations in simulated precipitation can only be attributed to soil moisture impacts if the systematic effects of soil moisture modifications are larger than the inherent simulation uncertainty at the convection-resolving scale. We performed seven experiments with modified soil moisture amount or distribution to address the effect of soil moisture on precipitation. Each of the experiments consists of 10 ensemble members using the deep convection-resolving COSMO model with a grid spacing of 2.8 km. Only in experiments with very strong modifications of soil moisture do precipitation changes exceed the model spread in amplitude, location or structure. These changes are caused by a 50 % soil moisture increase in either the whole or part of the model domain or by drying the whole model domain. Both increasing and decreasing soil moisture predominantly result in reduced precipitation rates. Replacing the soil moisture with realistic fields from different days has an insignificant influence on precipitation. The findings of this study underline the need for uncertainty estimates in soil moisture studies based on convection-resolving models.
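
    As a generic illustration of the comparison described above (not the COSMO/domain-shifting machinery itself), the sketch below treats a soil-moisture experiment's precipitation change as robust only where it exceeds the spread of a 10-member ensemble; all fields are synthetic and the numbers are placeholders.

      # Generic illustration (not the COSMO setup): a soil-moisture experiment's
      # precipitation change is treated as robust only where it exceeds the spread
      # of a 10-member ensemble generated by perturbing the simulation setup.
      import numpy as np

      rng = np.random.default_rng(0)
      ny, nx, n_members = 50, 50, 10

      control = rng.gamma(shape=2.0, scale=1.5, size=(n_members, ny, nx))    # mm/day, synthetic
      perturbed = control + rng.normal(0.0, 0.5, size=(n_members, ny, nx))   # synthetic experiment

      spread = control.std(axis=0)                             # ensemble spread per grid cell
      signal = perturbed.mean(axis=0) - control.mean(axis=0)   # soil-moisture-induced change
      robust = np.abs(signal) > spread                         # signal exceeds inherent uncertainty

      print(f"grid fraction with a robust precipitation response: {robust.mean():.2f}")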

  1. Diagnostic evaluation of the Community Earth System Model in simulating mineral dust emission with insight into large-scale dust storm mobilization in the Middle East and North Africa (MENA)

    NASA Astrophysics Data System (ADS)

    Parajuli, Sagar Prasad; Yang, Zong-Liang; Lawrence, David M.

    2016-06-01

    Large amounts of mineral dust are injected into the atmosphere during dust storms, which are common in the Middle East and North Africa (MENA), where most of the global dust hotspots are located. In this work, we present simulations of dust emission using the Community Earth System Model Version 1.2.2 (CESM 1.2.2) and evaluate how well it captures the spatio-temporal characteristics of dust emission in the MENA region, with a focus on large-scale dust storm mobilization. We explicitly focus our analysis on the model's two major input parameters that affect the vertical mass flux of dust: surface winds and the soil erodibility factor. We analyze dust emissions in simulations with both prognostic CESM winds and with CESM winds nudged towards ERA-Interim reanalysis values. Simulations with three existing erodibility maps and a new observation-based erodibility map are also conducted. We compare the simulated results with MODIS satellite data, MACC reanalysis data, AERONET station data, and CALIPSO 3D aerosol profile data. The dust emission simulated by CESM, when driven by nudged reanalysis winds, compares reasonably well with observations on daily to monthly time scales, despite CESM being a global general circulation model. However, considerable bias exists around known high dust source locations in northwest/northeast Africa and over the Arabian Peninsula, where recurring large-scale dust storms are common. The new observation-based erodibility map, which can represent anthropogenic dust sources that are not directly represented by existing erodibility maps, shows improved performance in terms of the simulated dust optical depth (DOD) and aerosol optical depth (AOD) compared to existing erodibility maps, although the performance of different erodibility maps varies by region.

  2. Comparative study on nutrient depletion-induced lipidome adaptations in Staphylococcus haemolyticus and Staphylococcus epidermidis.

    PubMed

    Luo, Yu; Javed, Muhammad Afzal; Deneer, Harry

    2018-02-05

    Staphylococcus species are emerging opportunistic pathogens that cause outbreaks of hospital and community-acquired infections. Some of these bacteria such as methicillin-resistant Staphylococcus aureus (MRSA) are difficult to treat due to their resistance to multiple antibiotics. We carried out a comparative study on the lipidome adaptations in response to starvation in the two most common coagulase-negative Staphylococcus species: a S. epidermidis strain sensitive to ampicillin and erythromycin and a S. haemolyticus strain resistant to both. The predominant fatty acid composition in glycerolipids was (17:0-15:0) in both bacteria. During the exponential phase, the two bacterial lipidomes were similar. Both were dominated by diacylglycerol (DAG), phosphatidylglycerol (PG), lysyl-phosphatidylglycerol (Lysyl-PG) and Diglucosyl-diacylglycerol (DGDG). Alanyl-PG was detected in small amounts in both bacterial lipids. N-succinyl-lysyl-PG was detected only in S. haemolyticus, while lysyl-DAG only in S. epidermidis. As the two bacteria entered stationary phase, both lipidomes became essentially nitrogen-free. Both bacteria accumulated large amounts of free fatty acids. Strikingly, the lipidome of S. epidermidis became dominated by cardiolipin (CL), while that of S. haemolyticus was simplified to DGDG and PG. The S. epidermidis strain also produced acyl-phosphatidylglycerol (APG) in the stationary phase.

  3. Application of PLE for the determination of essential oil components from Thymus vulgaris L.

    PubMed

    Dawidowicz, Andrzej L; Rado, Ewelina; Wianowska, Dorota; Mardarowicz, Marek; Gawdzik, Jan

    2008-08-15

    Essential oil plants, owing to their long presence in human history, their status in the culinary arts, and their use in medicine and perfume manufacture, are among the most frequently examined stock materials in scientific and industrial laboratories. Because a large number of freshly cut, dried or frozen plant samples require the determination of essential oil amount and composition, a fast, safe, simple, efficient and highly automated sample preparation method is needed. Five sample preparation methods (steam distillation, extraction in the Soxhlet apparatus, supercritical fluid extraction, solid-phase microextraction and pressurized liquid extraction) used for the isolation of aroma-active components from Thymus vulgaris L. are compared in the paper. The methods are discussed mainly with regard to the recovery of components that typically exist in essential oil isolated by steam distillation. According to the obtained data, PLE is the most efficient sample preparation method for determining the essential oil of the thyme herb. Although co-extraction of non-volatile ingredients is the main drawback of this method, it is characterized by the highest yield of essential oil components and the shortest required extraction time. Moreover, the relative peak amounts of essential oil components revealed by PLE are comparable with those obtained by steam distillation, which is recognized as the standard sample preparation method for the analysis of essential oils in aromatic plants.

  4. Ultrastructural Alterations of Von Economo Neurons in the Anterior Cingulate Cortex in Schizophrenia.

    PubMed

    Krause, Martin; Theiss, Carsten; Brüne, Martin

    2017-11-01

    Von Economo neurons (VENs) are large bipolar projection neurons mainly located in layer Vb of the anterior cingulate cortex (ACC) and anterior insula. Both regions are involved in cognitive and emotional processes and are functionally and anatomically altered in schizophrenia. Although the detailed function of VENs remains unclear, it has been suggested that these neurons are involved in the pathomechanism of schizophrenia. Here, we were interested in whether the VENs of schizophrenia patients show abnormalities at the ultrastructural level. Accordingly, we examined the amount of lysosomal aggregations in VENs in post-mortem tissue of patients with schizophrenia, patients with bipolar disorder and psychologically unaffected individuals, and compared the findings with aggregations in adjacent pyramidal cells in layer Vb of the ACC. VENs of patients with schizophrenia, and to a lesser degree of individuals with bipolar disorder, contained significantly more lysosomal aggregations than those in tissue from unaffected controls. Specifically, the larger amount of lysosomal aggregations in schizophrenia seemed to be selective for VENs, with no differences occurring in pyramidal cells. These findings may indicate that the VENs of schizophrenia patients are selectively vulnerable to neuronal damage. Anat Rec, 300:2017-2024, 2017. © 2017 Wiley Periodicals, Inc.

  5. When Less is More: Like Humans, Chimpanzees (Pan troglodytes) Misperceive Food Amounts Based on Plate Size

    PubMed Central

    Parrish, Audrey E.; Beran, Michael J.

    2013-01-01

    We investigated whether chimpanzees (Pan troglodytes) misperceived food portion sizes depending upon the context in which they were presented, something that often affects how much humans serve themselves and subsequently consume. Chimpanzees judged same-sized and smaller food portions to be larger in amount when presented on a small plate compared to an equal or larger food portion presented on a large plate, and did so despite clearly being able to tell the difference in portions when plate size was identical. These results are consistent with data from the human literature in which people misperceive food portion sizes as a function of plate size. This misperception is attributed to the Delboeuf illusion which occurs when the size of a central item is misperceived on the basis of its surrounding context. These results demonstrate a cross-species shared visual misperception of portion size that affects choice behavior, here in a nonhuman species for which there is little experience with tests that involve choosing between food amounts on dinnerware. The biases resulting in this form of misperception of food portions appear to have a deep-rooted evolutionary history which we share with, at minimum, our closest living nonhuman relative, the chimpanzee. PMID:23949698

  6. Musical training, individual differences and the cocktail party problem.

    PubMed

    Swaminathan, Jayaganesh; Mason, Christine R; Streeter, Timothy M; Best, Virginia; Kidd, Gerald; Patel, Aniruddh D

    2015-06-26

    Are musicians better able to understand speech in noise than non-musicians? Recent findings have produced contradictory results. Here we addressed this question by asking musicians and non-musicians to understand target sentences masked by other sentences presented from different spatial locations, the classical 'cocktail party problem' in speech science. We found that musicians obtained a substantial benefit in this situation, with thresholds ~6 dB better than non-musicians. Large individual differences in performance were noted particularly for the non-musically trained group. Furthermore, in different conditions we manipulated the spatial location and intelligibility of the masking sentences, thus changing the amount of 'informational masking' (IM) while keeping the amount of 'energetic masking' (EM) relatively constant. When the maskers were unintelligible and spatially separated from the target (low in IM), musicians and non-musicians performed comparably. These results suggest that the characteristics of speech maskers and the amount of IM can influence the magnitude of the differences found between musicians and non-musicians in multiple-talker "cocktail party" environments. Furthermore, considering the task in terms of the EM-IM distinction provides a conceptual framework for future behavioral and neuroscientific studies which explore the underlying sensory and cognitive mechanisms contributing to enhanced "speech-in-noise" perception by musicians.

  7. Musical training, individual differences and the cocktail party problem

    PubMed Central

    Swaminathan, Jayaganesh; Mason, Christine R.; Streeter, Timothy M.; Best, Virginia; Kidd, Jr, Gerald; Patel, Aniruddh D.

    2015-01-01

    Are musicians better able to understand speech in noise than non-musicians? Recent findings have produced contradictory results. Here we addressed this question by asking musicians and non-musicians to understand target sentences masked by other sentences presented from different spatial locations, the classical ‘cocktail party problem’ in speech science. We found that musicians obtained a substantial benefit in this situation, with thresholds ~6 dB better than non-musicians. Large individual differences in performance were noted particularly for the non-musically trained group. Furthermore, in different conditions we manipulated the spatial location and intelligibility of the masking sentences, thus changing the amount of ‘informational masking’ (IM) while keeping the amount of ‘energetic masking’ (EM) relatively constant. When the maskers were unintelligible and spatially separated from the target (low in IM), musicians and non-musicians performed comparably. These results suggest that the characteristics of speech maskers and the amount of IM can influence the magnitude of the differences found between musicians and non-musicians in multiple-talker “cocktail party” environments. Furthermore, considering the task in terms of the EM-IM distinction provides a conceptual framework for future behavioral and neuroscientific studies which explore the underlying sensory and cognitive mechanisms contributing to enhanced “speech-in-noise” perception by musicians. PMID:26112910

  8. The use of waste materials for concrete production in construction applications

    NASA Astrophysics Data System (ADS)

    Teara, Ashraf; Shu Ing, Doh; Tam, Vivian WY

    2018-04-01

    To sustain the environment, it is crucial to find solutions for dealing with waste, pollution, and the depletion and degradation of resources. In construction, concrete from building demolition makes up 30-40% of total waste. Expensive dumping costs, landfill taxes and limited disposal sites provide an incentive to develop recycled concrete. Recycled aggregates were used for reconstructing damaged infrastructure and roads after World War II. However, recycled concrete, which consists of fly ash, slag and recycled aggregate, is not widely used because of its poor quality compared with ordinary concrete. This research investigates the possibility of using recycled concrete in construction applications as normal concrete. The methods include varying the proportion of natural aggregate replaced by recycled aggregate, and substituting cement with slag cement combined with fly ash. The study reveals that slag and fly ash are effective supplementary elements for improving the properties of concrete containing cement; without cement, however, these two elements do not play an important role in improving the properties. Also, slag is more useful than fly ash provided its amount does not exceed 50%. Moreover, recycled aggregate contributes positively to the concrete mixture in terms of compressive strength. Finally, concrete strength increases as the amount of recycled aggregate (RA) increases, which is related to the high quality of the RA, the mixing method, or both.

  9. Partitioning Tracer Test for Detection, Estimation, and Remediation Performance Assessment of Subsurface Nonaqueous Phase Liquids

    NASA Astrophysics Data System (ADS)

    Jin, Minquan; Delshad, Mojdeh; Dwarakanath, Varadarajan; McKinney, Daene C.; Pope, Gary A.; Sepehrnoori, Kamy; Tilburg, Charles E.; Jackson, Richard E.

    1995-05-01

    In this paper we present a partitioning interwell tracer test (PITT) technique for the detection, estimation, and remediation performance assessment of the subsurface contaminated by nonaqueous phase liquids (NAPLs). We demonstrate the effectiveness of this technique by examples of experimental and simulation results. The experimental results are from partitioning tracer experiments in columns packed with Ottawa sand. Both the method of moments and inverse modeling techniques for estimating NAPL saturation in the sand packs are demonstrated. In the simulation examples we use UTCHEM, a comprehensive three-dimensional, chemical flood compositional simulator developed at the University of Texas, to simulate a hypothetical two-dimensional aquifer with properties similar to the Borden site contaminated by tetrachloroethylene (PCE), and we show how partitioning interwell tracer tests can be used to estimate the amount of PCE contaminant before remedial action and as the remediation process proceeds. Tracer test results from different stages of remediation are compared to determine the quantity of PCE removed and the amount remaining. Both the experimental (small-scale) and simulation (large-scale) results demonstrate that PITT can be used as an innovative and effective technique to detect and estimate the amount of residual NAPL and for remediation performance assessment in subsurface formations.
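
    The working equations are not given in this record. A commonly used form of the first-moment analysis for partitioning tracer tests (assumed here, not quoted from the paper) estimates the retardation of the partitioning tracer relative to a conservative tracer and converts it to NAPL saturation, as sketched below with invented breakthrough data.

      # Assumed form of the first-moment analysis for a partitioning interwell
      # tracer test (not quoted from this paper). With breakthrough curves C(t) of
      # a conservative and a partitioning tracer and partition coefficient K, the
      # retardation factor R = t_mean(partitioning) / t_mean(conservative) gives
      # the NAPL saturation as S_N = (R - 1) / (R - 1 + K).
      def mean_arrival_time(t, c):
          """First temporal moment of a breakthrough curve (uniform time steps)."""
          return sum(ti * ci for ti, ci in zip(t, c)) / sum(c)

      def napl_saturation(t, c_conservative, c_partitioning, K):
          R = mean_arrival_time(t, c_partitioning) / mean_arrival_time(t, c_conservative)
          return (R - 1.0) / (R - 1.0 + K)

      # Hypothetical breakthrough data; K would come from batch partitioning tests.
      t = list(range(1, 21))
      c_cons = [0, 1, 4, 9, 12, 10, 7, 5, 3, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
      c_part = [0, 0, 1, 3, 6, 9, 11, 10, 8, 6, 4, 3, 2, 1, 1, 0, 0, 0, 0, 0]
      print(round(napl_saturation(t, c_cons, c_part, K=50.0), 4))   # small residual saturation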

  10. Partitioning tracer test for detection, estimation, and remediation performance assessment of subsurface nonaqueous phase liquids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, M.; Delshad, M.; Dwarakanath, V.

    1995-05-01

    In this paper we present a partitioning interwell tracer test (PITT) technique for the detection, estimation, and remediation performance assessment of the subsurface contaminated by nonaqueous phase liquids (NAPLs). We demonstrate the effectiveness of this technique by examples of experimental and simulation results. The experimental results are from partitioning tracer experiments in columns packed with Ottawa sand. Both the method of moments and inverse modeling techniques for estimating NAPL saturation in the sand packs are demonstrated. In the simulation examples we use UTCHEM, a comprehensive three-dimensional, chemical flood compositional simulator developed at the University of Texas, to simulate a hypothetical two-dimensional aquifer with properties similar to the Borden site contaminated by tetrachloroethylene (PCE), and we show how partitioning interwell tracer tests can be used to estimate the amount of PCE contaminant before remedial action and as the remediation process proceeds. Tracer test results from different stages of remediation are compared to determine the quantity of PCE removed and the amount remaining. Both the experimental (small-scale) and simulation (large-scale) results demonstrate that PITT can be used as an innovative and effective technique to detect and estimate the amount of residual NAPL and for remediation performance assessment in subsurface formations. 43 refs., 10 figs., 1 tab.

  11. Molecular analysis of a phytohemagglutinin-defective cultivar of Phaseolus vulgaris L.

    PubMed

    Vitale, A; Ceriotti, A; Bollini, R

    1985-10-01

    The seeds of Phaseolus vulgaris cv. Pinto III are known to lack detectable amounts of phytohemagglutinin (PHA) and to accumulate very reduced levels of PHA mRNA compared with normal cultivars. Using PHA complementary-DNA clones and monospecific antibodies we analyzed cv. Pinto III genomic DNA and cotyledonary proteins synthesized both in vitro and in vivo. We detected genomic DNA sequences that hybridize with complementary-DNA clones for the two different classes of PHA polypeptides (PHA-E and PHA-L), at levels comparable to a normal bean cultivar. This indicates that the cv. Pinto III phenotype is not the result of a large deletion of the PHA structural genes. Messenger RNA isolated from cv. Pinto III developing cotyledons synthesizes in vitro very small amounts of a protein which is recognized by antibodies specific for PHA, and gives, on sodium dodecyl sulfate-polyacrylamide gel electrophoresis, a single band with molecular weight similar but not identical to that of PHA-L polypeptides. This protein is also synthesized in vivo at a very reduced level, less than 1% compared with PHA in normal cultivars, and has mitogenic activity comparable to that of the PHA-L subunit, while it shows very weak erythroagglutinating activity. The initial steps in the synthesis and processing of this protein are identical to those already identified for PHA polypeptides. The cv. Pinto III protein could be either a PHA-L polypeptide whose synthesis is not affected by the mutation or a PHA-like lectin present normally at low levels in P. vulgaris.

  12. Comparing the accuracy and precision of three techniques used for estimating missing landmarks when reconstructing fossil hominin crania.

    PubMed

    Neeser, Rudolph; Ackermann, Rebecca Rogers; Gain, James

    2009-09-01

    Various methodological approaches have been used for reconstructing fossil hominin remains in order to increase sample sizes and to better understand morphological variation. Among these, morphometric quantitative techniques for reconstruction are increasingly common. Here we compare the accuracy of three approaches--mean substitution, thin plate splines, and multiple linear regression--for estimating missing landmarks of damaged fossil specimens. Comparisons are made varying the number of missing landmarks, sample sizes, and the reference species of the population used to perform the estimation. The testing is performed on landmark data from individuals of Homo sapiens, Pan troglodytes and Gorilla gorilla, and nine hominin fossil specimens. Results suggest that when a small, same-species fossil reference sample is available to guide reconstructions, thin plate spline approaches perform best. However, if no such sample is available (or if the species of the damaged individual is uncertain), estimates of missing morphology based on a single individual (or even a small sample) of close taxonomic affinity are less accurate than those based on a large sample of individuals drawn from more distantly related extant populations using a technique (such as a regression method) able to leverage the information (e.g., variation/covariation patterning) contained in this large sample. Thin plate splines also show an unexpectedly large amount of error in estimating landmarks, especially over large areas. Recommendations are made for estimating missing landmarks under various scenarios. Copyright 2009 Wiley-Liss, Inc.
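
    As a sketch of the regression approach compared above (one of the three techniques, alongside mean substitution and thin plate splines), the example below fills in missing landmark coordinates of a damaged configuration by multiple linear regression on the observed coordinates, fitted to a reference sample of complete specimens. The data, dimensions and variable names are hypothetical; a real application would use Procrustes-aligned landmark data and a reference sample comfortably larger than the number of predictors.

```python
import numpy as np

def estimate_missing(reference, target, missing_idx):
    """Estimate missing coordinates of a damaged configuration by multiple
    linear regression of the missing coordinates on the observed ones,
    fitted to a reference sample of complete configurations.

    reference   : (n_specimens, n_coords) complete landmark data
    target      : (n_coords,) damaged specimen, NaN at missing entries
    missing_idx : indices of the missing coordinates
    """
    present_idx = np.setdiff1d(np.arange(reference.shape[1]), missing_idx)
    X = np.column_stack([np.ones(len(reference)), reference[:, present_idx]])
    Y = reference[:, missing_idx]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)     # regression coefficients
    filled = target.copy()
    filled[missing_idx] = np.concatenate(([1.0], target[present_idx])) @ B
    return filled

# Hypothetical example: 40 complete reference specimens, 12 coordinates each
rng = np.random.default_rng(0)
reference = rng.normal(size=(40, 12))
damaged = rng.normal(size=12)
damaged[[3, 7]] = np.nan                          # two coordinates are missing
print(estimate_missing(reference, damaged, [3, 7]))
```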

  13. Comparison of Multi-Scale Digital Elevation Models for Defining Waterways and Catchments Over Large Areas

    NASA Astrophysics Data System (ADS)

    Harris, B.; McDougall, K.; Barry, M.

    2012-07-01

    Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also facilitate a consistent tool for the creation and analysis of waterways over extensive areas. However, rarely are they developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines definition of waterways and catchments over an area of approximately 25,000 km2 to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (Wivenhoe catchment, 543 km2 and a detailed 13 km2 within the Wivenhoe catchment) including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data was compared to high resolution Lidar based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.

  14. Fibroblast responses and antibacterial activity of Cu and Zn co-doped TiO2 for percutaneous implants

    NASA Astrophysics Data System (ADS)

    Zhang, Lan; Guo, Jiaqi; Yan, Ting; Han, Yong

    2018-03-01

    In order to enhance skin integration and antibacterial activity of Ti percutaneous implants, microporous TiO2 coatings co-doped with different doses of Cu2+ and Zn2+ were directly fabricated on Ti via micro-arc oxidation (MAO). The structures of the coatings were investigated; the behaviors of fibroblasts (L-929) as well as the response of Staphylococcus aureus (S. aureus) were evaluated. During the MAO process, a large number of micro-arc discharges formed on Ti and acted as penetrating channels; O2-, Ca2+, Zn2+, Cu2+ and PO43- were delivered via the channels, giving rise to the formation of doped TiO2. Surface characteristics including phase component, topography, surface roughness and wettability were almost the same for different coatings, whereas the amount of Cu doped in TiO2 decreased with increasing Zn amount. Compared with Cu single-doped TiO2 (0.77 wt% Cu), co-doping with appropriate amounts of Cu and Zn, for example 0.55 wt% Cu and 2.53 wt% Zn, further improved proliferation of L-929, facilitated fibroblasts to switch to a fibrotic phenotype, and enhanced synthesis of collagen I as well as extracellular collagen secretion; the antibacterial properties, including contact-killing and release-killing, were also enhanced. By analyzing the relationship between the Cu/Zn amount in TiO2 and the behaviors of L-929 and S. aureus, it can be deduced that when the doped Zn is at a low dose (<1.79 wt%), the behaviors of L-929 and S. aureus are sensitive to the reduced amount of Cu2+, whereas Zn2+ plays a key role in accelerating fibroblast functions and reducing S. aureus when its dose increases markedly from 2.63 to 6.47 wt%.

  15. Differential host growth regulation by the solitary endoparasitoid, Meteorus pulchricornis in two hosts of greatly differing mass.

    PubMed

    Harvey, Jeffrey A; Sano, Takeshi; Tanaka, Toshiharu

    2010-09-01

    Solitary koinobiont endoparasitoids generally reduce the growth of their hosts by a significant amount compared with healthy larvae. Here, we compared the development and host usage strategies of the solitary koinobiont endoparasitoid, Meteorus pulchricornis, when developing in larvae of a large host species (Mythimna separata) and a much smaller host species (Plutella xylostella). Caterpillars of M. separata were parasitized as L2 and P. xylostella as L3, when they weighed approximately 2mg. The growth of parasitized M. separata larvae was reduced by almost 95% compared with controls, whereas parasitized P. xylostella larvae grew some 30% larger than controls. Still, adult wasps emerging from M. separata larvae were almost twice as large as wasps emerging from P. xylostella larvae, had larger egg loads after 5 days and produced more progeny. Survival to eclosion was also higher on M. separata than on P. xylostella, although parasitoids developed significantly faster when developing on P. xylostella. Our results provide evidence that koinobionts are able to differentially regulate the growth of different host species. However, there are clearly also limitations in the ability of parasitoids to regulate phenotypic host traits when size differences between different host species are as extreme as demonstrated here.

  16. Performance of FFT methods in local gravity field modelling

    NASA Technical Reports Server (NTRS)

    Forsberg, Rene; Solheim, Dag

    1989-01-01

    Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of the flat-earth approximation and the requirement for gridded data. In spite of this, the method often yields excellent results in practice when compared to other more rigorous (and computationally expensive) methods, such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, which is especially important for geoid predictions. For best results good data gridding algorithms are essential. In practice truncated collocation approaches may be used. For large areas at high latitudes the gridding must be done using suitable map projections such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g, the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to other methods.
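
    As a toy illustration of the planar FFT evaluation described above, the sketch below convolves a grid of gravity anomalies with a 1/distance kernel (the flat-earth analogue of the Stokes integral) using zero-padded 2D FFTs. The grid, anomaly values and the crude handling of the singular inner zone are assumptions for illustration only; rigorous geoid computation involves spherical kernels, kernel modification and careful treatment of edge effects.

```python
import numpy as np

def fft_convolve2d(grid, kernel_func, dx, dy):
    """Evaluate a planar convolution integral over gridded data with FFTs,
    zero-padding to twice the grid size to avoid circular wrap-around.
    kernel_func(X, Y) must return kernel values for planar offsets X, Y."""
    ny, nx = grid.shape
    Ny, Nx = 2 * ny, 2 * nx
    x = (np.arange(Nx) - nx) * dx
    y = (np.arange(Ny) - ny) * dy
    X, Y = np.meshgrid(x, y)
    K = np.fft.ifftshift(kernel_func(X, Y))        # put the kernel origin at index (0, 0)
    G = np.zeros((Ny, Nx))
    G[:ny, :nx] = grid
    conv = np.fft.ifft2(np.fft.fft2(G) * np.fft.fft2(K)).real
    return conv[:ny, :nx] * dx * dy                # discrete area element

# Planar (flat-earth) analogue of the Stokes integral on a hypothetical grid:
# N ~ (1 / (2*pi*gamma)) * convolution of gravity anomalies with 1/distance.
gamma = 9.81                                        # normal gravity, m/s^2
dx = dy = 5000.0                                    # 5 km grid spacing, m
dg = np.random.default_rng(1).normal(0.0, 30e-5, (64, 64))   # anomalies (~30 mGal), m/s^2

def inverse_distance(X, Y):
    r = np.hypot(X, Y)
    r[r == 0.0] = 0.5 * dx                          # crude treatment of the singular inner zone
    return 1.0 / r

N = fft_convolve2d(dg, inverse_distance, dx, dy) / (2.0 * np.pi * gamma)
print(N.shape, float(N.std()))
```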

  17. Different Amounts of DNA in Newborn Cells of Escherichia coli Preclude a Role for the Chromosome in Size Control According to the "Adder" Model.

    PubMed

    Huls, Peter G; Vischer, Norbert O E; Woldringh, Conrad L

    2018-01-01

    According to the recently-revived adder model for cell size control, newborn cells of Escherichia coli will grow and divide after having added a constant size or length, ΔL, irrespective of their size at birth. Assuming exponential elongation, this implies that large newborns will divide earlier than small ones. The molecular basis for the constant size increment is still unknown. As DNA replication and cell growth are coordinated, the constant ΔL could be based on duplication of an equal amount of DNA, ΔG, present in newborn cells. To test this idea, we measured amounts of DNA and lengths of nucleoids in DAPI-stained cells growing in batch culture at slow and fast rates. Deeply-constricted cells were divided into two subpopulations of longer and shorter lengths than average; these were considered to represent large and small prospective daughter cells, respectively. While at slow growth large and small prospective daughter cells contained similar amounts of DNA, fast-growing cells with multiforked replicating chromosomes showed a significantly higher amount of DNA (20%) in the larger cells. This observation precludes the hypothesis that ΔL is based on the synthesis of a constant ΔG. Growth curves were constructed for siblings generated by asymmetric division and growing according to the adder model. Under the assumption that all cells at the same growth rate exhibit the same time between initiation of DNA replication and cell division (i.e., a constant C+D period), the constructions predict that initiation occurs at different sizes (Li) and that, at fast growth, large newborn cells transiently contain more DNA than small newborns, in accordance with the observations. Because the state of segregation, measured as the distance between separated nucleoids, was found to be more advanced in larger deeply-constricted cells, we propose that in larger newborns nucleoid separation occurs faster and at a shorter length, allowing them to divide earlier. We propose a composite model in which both differential initiation and segregation lead to an adder-like behavior of large and small newborn cells.
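
    A minimal simulation of the adder model discussed above, assuming exponential single-cell elongation, symmetric division and a noisy but birth-size-independent length increment; all parameter values are hypothetical. It reproduces the qualitative prediction that larger newborns divide earlier.

```python
import numpy as np

def simulate_adder(L_birth, delta_L, growth_rate, n_generations, cv=0.1, seed=0):
    """Simulate the adder model: a cell elongates exponentially and divides
    once it has added a (noisy) constant increment delta_L, irrespective of
    its birth length; division is symmetric. All parameters are hypothetical."""
    rng = np.random.default_rng(seed)
    L = L_birth
    births, divisions, cycle_times = [], [], []
    for _ in range(n_generations):
        added = delta_L * (1.0 + cv * rng.standard_normal())  # noisy increment
        L_div = L + added
        tau = np.log(L_div / L) / growth_rate                 # time needed to add it
        births.append(L)
        divisions.append(L_div)
        cycle_times.append(tau)
        L = L_div / 2.0                                       # symmetric division
    return np.array(births), np.array(divisions), np.array(cycle_times)

Lb, Ld, tau = simulate_adder(L_birth=3.0, delta_L=2.0,
                             growth_rate=np.log(2.0) / 25.0, n_generations=200)
print(round((Ld - Lb).mean(), 2))            # ~delta_L: added length is constant on average
print(round(np.corrcoef(Lb, tau)[0, 1], 2))  # negative: larger newborns divide earlier
```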

  18. Heat Sinking, Cross Talk, and Temperature Stability for Large, Close-Packed Arrays of Microcalorimeters

    NASA Technical Reports Server (NTRS)

    Imoto, Naoko; Bandler, Simon; Brekosky, Regis; Chervenak, James; Figueroa-Feliciano, Enectali; Finkbeiner, Frederick; Kelley, Richard; Kilbourne, Caroline; Porter, Frederick; Sadleir, Jack

    2007-01-01

    We are developing large, close-packed arrays of x-ray transition-edge sensor (TES) microcalorimeters. In such a device, sufficient heat sinking is important to minimize thermal cross talk between pixels and to stabilize the bath temperature for all pixels. We have measured cross talk on our 8 x 8 arrays and studied the shape and amount of thermal crosstalk as a function of pixel location and efficiency of electrothermal feedback. In this presentation, we will compare measurements made on arrays with and without a backside, heat-sinking copper layer, as well as results of devices on silicon-nitride membranes and on solid substrates, and we will discuss the implications for energy resolution and maximum count rate. We will also discuss the dependence of pulse height upon bath temperature, and the measured and required stability of the bath temperature.

  19. Linking salmon aquaculture synergies and trade-offs on ecosystem services to human wellbeing constituents.

    PubMed

    Outeiro, Luis; Villasante, Sebastian

    2013-12-01

    Salmon aquaculture has emerged as a successful economic industry generating high economic revenues to invest in the development of Chiloe region, Southern Chile. However, salmon aquaculture also consumes a substantial amount of ecosystem services, and the direct and indirect impacts on human wellbeing are still unknown and unexplored. This paper identifies the synergies and trade-offs caused by the salmon industry on a range of ecosystem services. The results show that large economic benefits due to the increase of provisioning ecosystem services are also causing a reduction on regulating and cultural services. Despite the improvement on average income and poverty levels experienced in communities closely associated with the sector, this progress is not large enough and social welfare did not improve substantially over the last decade. The rest of human wellbeing constituents in Chiloe region have not changed significantly compared to the development in the rest of the country.

  20. Gasification Reaction Characteristics of Ferro-Coke at Elevated Temperatures

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Zhang, Jian-liang; Gao, Bing

    2017-01-01

    In this paper, the effects of temperature and atmosphere on the gasification reaction of ferro-coke were investigated in consideration of actual blast furnace conditions. In addition, the microstructure of the cokes was observed by scanning electron microscopy (SEM). It is found that the weight loss of ferro-coke during the gasification reaction is significantly enhanced when either the reaction temperature or the CO2 concentration is increased. Furthermore, compared with the normal type of metallurgical coke, ferro-coke exhibits a higher weight loss when gasified at the same temperature or under the same atmosphere. As to the microstructure, the reacted ferro-coke contains a large number of pores. In contrast to the normal coke, the proportions of large-size pores and through holes are greatly increased after gasification, giving rise to thinner pore walls and hence a degradation in coke strength after reaction (CSR).

  1. Observational evidence for enhanced magnetic activity of superflare stars.

    PubMed

    Karoff, Christoffer; Knudsen, Mads Faurschou; De Cat, Peter; Bonanno, Alfio; Fogtmann-Schulz, Alexandra; Fu, Jianning; Frasca, Antonio; Inceoglu, Fadil; Olsen, Jesper; Zhang, Yong; Hou, Yonghui; Wang, Yuefei; Shi, Jianrong; Zhang, Wei

    2016-03-24

    Superflares are large explosive events on stellar surfaces one to six orders-of-magnitude larger than the largest flares observed on the Sun throughout the space age. Due to the huge amount of energy released in these superflares, it has been speculated if the underlying mechanism is the same as for solar flares, which are caused by magnetic reconnection in the solar corona. Here, we analyse observations made with the LAMOST telescope of 5,648 solar-like stars, including 48 superflare stars. These observations show that superflare stars are generally characterized by larger chromospheric emissions than other stars, including the Sun. However, superflare stars with activity levels lower than, or comparable to, the Sun do exist, suggesting that solar flares and superflares most likely share the same origin. The very large ensemble of solar-like stars included in this study enables detailed and robust estimates of the relation between chromospheric activity and the occurrence of superflares.

  2. Observational evidence for enhanced magnetic activity of superflare stars

    PubMed Central

    Karoff, Christoffer; Knudsen, Mads Faurschou; De Cat, Peter; Bonanno, Alfio; Fogtmann-Schulz, Alexandra; Fu, Jianning; Frasca, Antonio; Inceoglu, Fadil; Olsen, Jesper; Zhang, Yong; Hou, Yonghui; Wang, Yuefei; Shi, Jianrong; Zhang, Wei

    2016-01-01

    Superflares are large explosive events on stellar surfaces one to six orders-of-magnitude larger than the largest flares observed on the Sun throughout the space age. Due to the huge amount of energy released in these superflares, it has been speculated if the underlying mechanism is the same as for solar flares, which are caused by magnetic reconnection in the solar corona. Here, we analyse observations made with the LAMOST telescope of 5,648 solar-like stars, including 48 superflare stars. These observations show that superflare stars are generally characterized by larger chromospheric emissions than other stars, including the Sun. However, superflare stars with activity levels lower than, or comparable to, the Sun do exist, suggesting that solar flares and superflares most likely share the same origin. The very large ensemble of solar-like stars included in this study enables detailed and robust estimates of the relation between chromospheric activity and the occurrence of superflares. PMID:27009381

  3. Multiplexed analysis of protein-ligand interactions by fluorescence anisotropy in a microfluidic platform.

    PubMed

    Cheow, Lih Feng; Viswanathan, Ramya; Chin, Chee-Sing; Jennifer, Nancy; Jones, Robert C; Guccione, Ernesto; Quake, Stephen R; Burkholder, William F

    2014-10-07

    Homogeneous assay platforms for measuring protein-ligand interactions are highly valued due to their potential for high-throughput screening. However, the implementation of these multiplexed assays in conventional microplate formats is considerably expensive due to the large amounts of reagents required and the need for automation. We implemented a homogeneous fluorescence anisotropy-based binding assay in an automated microfluidic chip to simultaneously interrogate >2300 pairwise interactions. We demonstrated the utility of this platform in determining the binding affinities between chromatin-regulatory proteins and different post-translationally modified histone peptides. The microfluidic chip assay produces comparable results to conventional microtiter plate assays, yet requires 2 orders of magnitude less sample and an order of magnitude fewer pipetting steps. This approach enables one to use small samples for medium-scale screening and could ease the bottleneck of large-scale protein purification.
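
    A small sketch of the fluorescence anisotropy readout and binding analysis underlying such assays, assuming SciPy is available. The anisotropy formula r = (I_par - I_perp)/(I_par + 2 I_perp) and the single-site quadratic binding model are standard, but the concentrations, parameter values and noise level below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def anisotropy(I_parallel, I_perpendicular):
    """Steady-state fluorescence anisotropy from the two polarized
    emission intensities."""
    return (I_parallel - I_perpendicular) / (I_parallel + 2.0 * I_perpendicular)

def binding_curve(P_total, Kd, r_free, r_bound, L_total):
    """Observed anisotropy of a labelled ligand (total concentration L_total)
    titrated with protein, assuming single-site binding (exact quadratic)."""
    b = P_total + L_total + Kd
    fraction_bound = (b - np.sqrt(b ** 2 - 4.0 * P_total * L_total)) / (2.0 * L_total)
    return r_free + (r_bound - r_free) * fraction_bound

# Hypothetical titration: recover Kd from anisotropy vs. protein concentration
L_total = 20e-9                                        # 20 nM labelled peptide
P = np.logspace(-8, -4, 12)                            # protein, M
rng = np.random.default_rng(2)
r_obs = binding_curve(P, 2e-6, 0.05, 0.25, L_total) + 0.003 * rng.standard_normal(P.size)
popt, _ = curve_fit(lambda P, Kd, rf, rb: binding_curve(P, Kd, rf, rb, L_total),
                    P, r_obs, p0=(1e-6, 0.05, 0.25))
print(popt[0])                                         # fitted Kd, close to 2e-6
```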

  4. [Antibacterial prevention of suppurative complications after operations on the large intestine].

    PubMed

    Kuzin, M I; Pomelov, V S; Vandiaev, G K; Ialgashev, T Ia; Blatun, L A

    1983-05-01

    The data on comparative study of complications after operations on the large intestine are presented. During the preoperative period, 62 patients of the control group were treated with phthalylsulfathiazole, nevigramon and nystatin. Thirty-nine patients of the test group were treated with metronidazole and kanamycin monosulfate. Kanamycin monosulfate was used 3 days before the operation in a dose of 0.5 g orally 4 times a day whereas metronidazole in a dose of 0.5 g 3 times a day. The last doses of the drugs were administered 4-5 hours before the operation. After the operations the patients were treated with kanamycin sulfate for 3-5 days in a daily dose of 2 g intramuscularly. The number of the postoperative suppurative complications decreased from 22 to 5 per cent. No lethal outcomes were registered in the test group. The number of lethal outcomes in the control group amounted to 8 per cent.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Donald F.; Schulz, Carl; Konijnenburg, Marco

    High-resolution Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry imaging enables the spatial mapping and identification of biomolecules from complex surfaces. The need for long time-domain transients, and thus large raw file sizes, results in a large amount of raw data (“big data”) that must be processed efficiently and rapidly. This can be compounded by large-area imaging and/or high spatial resolution imaging. For FT-ICR, data processing and data reduction must not compromise the high mass resolution afforded by the mass spectrometer. The continuous mode “Mosaic Datacube” approach allows high mass resolution visualization (0.001 Da) of mass spectrometry imaging data, but requires additional processing as compared to feature-based processing. We describe the use of distributed computing for processing of FT-ICR MS imaging datasets with generation of continuous mode Mosaic Datacubes for high mass resolution visualization. An eight-fold improvement in processing time is demonstrated using a Dutch nationally available cloud service.

  6. An interior-point method-based solver for simulation of aircraft parts riveting

    NASA Astrophysics Data System (ADS)

    Stefanova, Maria; Yakunin, Sergey; Petukhova, Margarita; Lupuleac, Sergey; Kokkolaras, Michael

    2018-05-01

    The particularities of the aircraft parts riveting process simulation necessitate the solution of a large number of contact problems. A primal-dual interior-point method-based solver is proposed for solving such problems efficiently. The proposed method features a worst-case polynomial bound on the number of iterations in terms of the problem dimension n and a threshold ε related to the desired accuracy (for primal-dual interior-point methods such bounds are typically of order √n log(1/ε)). In practice, the convergence is often faster than this worst case bound, which makes the method applicable to large-scale problems. The computational challenge is solving the system of linear equations because the associated matrix is ill conditioned. To that end, the authors introduce a preconditioner and a strategy for determining effective initial guesses based on the physics of the problem. Numerical results are compared with ones obtained using the Goldfarb-Idnani algorithm. The results demonstrate the efficiency of the proposed method.

  7. Collapse of axion stars

    DOE PAGES

    Eby, Joshua; Leembruggen, Madelyn; Suranyi, Peter; ...

    2016-12-15

    Axion stars, gravitationally bound states of low-energy axion particles, have a maximum mass allowed by gravitational stability. Weakly bound states obtaining this maximum mass have sufficiently large radii such that they are dilute, and as a result, they are well described by a leading-order expansion of the axion potential. Here, heavier states are susceptible to gravitational collapse. Inclusion of higher-order interactions, present in the full potential, can give qualitatively different results in the analysis of collapsing heavy states, as compared to the leading-order expansion. In this work, we find that collapsing axion stars are stabilized by repulsive interactions present in the full potential, providing evidence that such objects do not form black holes. In the last moments of collapse, the binding energy of the axion star grows rapidly, and we provide evidence that a large amount of its energy is lost through rapid emission of relativistic axions.

  8. Innovative Double Bypass Engine for Increased Performance

    NASA Astrophysics Data System (ADS)

    Manoharan, Sanjivan

    Engines continue to grow in size to meet the current thrust requirements of the civil aerospace industry. Large engines pose significant transportation problems and must be split in order to be shipped. Thus, large amounts of time have been spent researching methods to increase thrust capabilities while maintaining a reasonable engine size. Unfortunately, much of this research has been focused on increasing the performance and efficiencies of individual components, while limited research has been done on innovative engine configurations. This thesis focuses on an innovative engine configuration, the High Double Bypass Engine, aimed at increasing fuel efficiency and thrust while maintaining a competitive fan diameter and engine length. The 1-D analysis was done in Excel and then compared to the results from the Numerical Propulsion System Simulation (NPSS) software, and was found to be within 4% error. Flow performance characteristics were also determined and validated against their criteria.

  9. Setting up a Rayleigh Scattering Based Flow Measuring System in a Large Nozzle Testing Facility

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta; Gomez, Carlos R.

    2002-01-01

    A molecular Rayleigh scattering based air density measurement system has been built in a large nozzle testing facility at NASA Glenn Research Center. The technique depends on the light scattering by gas molecules present in air; no artificial seeding is required. Light from a single mode, continuous wave laser was transmitted to the nozzle facility by optical fiber, and light scattered by gas molecules, at various points along the laser beam, is collected and measured by photon-counting electronics. By placing the laser beam and collection optics on synchronized traversing units, the point measurement technique is made effective for surveying density variation over a cross-section of the nozzle plume. Various difficulties associated with dust particles, stray light, high noise level and vibration are discussed. Finally, a limited amount of data from an underexpanded jet are presented and compared with expected variations to validate the technique.

  10. Lithium wall conditioning by high frequency pellet injection in RFX-mod

    NASA Astrophysics Data System (ADS)

    Innocente, P.; Mansfield, D. K.; Roquemore, A. L.; Agostini, M.; Barison, S.; Canton, A.; Carraro, L.; Cavazzana, R.; De Masi, G.; Fassina, A.; Fiameni, S.; Grando, L.; Rais, B.; Rossetto, F.; Scarin, P.

    2015-08-01

    In the RFX-mod reversed field pinch experiment, lithium wall conditioning has been tested with multiple aims: to improve density control, to reduce impurities and to increase energy and particle confinement time. Large single lithium pellet injection, a lithium capillary-pore system and lithium evaporation have been used for lithiumization. The last two methods, which presently provide the best results in tokamak devices, have limited applicability in the RFX-mod device due to the magnetic field characteristics and geometrical constraints. On the other hand, the first-mentioned technique did not allow injecting large amounts of lithium. To improve the deposition, small lithium multi-pellet injection has recently been tested in RFX-mod. In this paper we compare lithium multi-pellet injection to the other techniques. Multi-pellet injection gave more uniform Li deposition than the evaporator, but provided similar effects on plasma parameters, showing that further optimizations are required.

  11. The comparison and analysis of extracting video key frame

    NASA Astrophysics Data System (ADS)

    Ouyang, S. Z.; Zhong, L.; Luo, R. Q.

    2018-05-01

    Video key frame extraction is an important part of large-scale data processing. Building on previous work in key frame extraction, we summarize four important key frame extraction algorithms; these methods are largely based on comparing the difference between pairs of frames, and if the difference exceeds a threshold value, the corresponding frames are taken as two different key frames. We then propose a key frame extraction method based on the amount of mutual information: information entropy is introduced, appropriate threshold values are selected to form the initial classes, and frames with a similar mean mutual information are finally taken as candidate key frames. In this paper, these algorithms are used to extract key frames from tunnel traffic videos. With an analysis of the experimental results and a comparison of the pros and cons of these algorithms, a basis for practical applications is provided.
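
    A minimal sketch of the frame-difference family of methods summarized above: a frame becomes a key frame when its grey-level histogram differs from the last key frame by more than a threshold. The histogram metric, bin count and the synthetic video are illustrative assumptions, not the specific algorithms evaluated in the paper.

```python
import numpy as np

def extract_keyframes(frames, threshold=0.5):
    """Keep a frame as a key frame whenever its grey-level histogram differs
    from the last key frame by more than `threshold` (L1 distance between
    normalized 64-bin histograms)."""
    def hist(frame):
        h, _ = np.histogram(frame, bins=64, range=(0, 255))
        return h / h.sum()

    keyframes = [0]
    last = hist(frames[0])
    for i in range(1, len(frames)):
        h = hist(frames[i])
        if np.abs(h - last).sum() > threshold:
            keyframes.append(i)
            last = h
    return keyframes

# Hypothetical grey-scale video with an abrupt scene change at frame 50
rng = np.random.default_rng(3)
video = np.concatenate([rng.integers(0, 80, (50, 120, 160)),
                        rng.integers(150, 255, (30, 120, 160))]).astype(np.uint8)
print(extract_keyframes(video))   # -> [0, 50]
```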

  12. Volcanic Aerosol Radiative Properties

    NASA Technical Reports Server (NTRS)

    Lacis, Andrew

    2015-01-01

    Large sporadic volcanic eruptions inject large amounts of sulfur bearing gases into the stratosphere which then get photochemically converted to sulfuric acid aerosol droplets that exert a radiative cooling effect on the global climate system lasting for several years.

  13. [Blue-light induced expression of S-adenosyl-L-homocysteine hydrolase-like gene in Mucor amphibiorum RCS1].

    PubMed

    Gao, Ya; Wang, Shu; Fu, Mingjia; Zhong, Guolin

    2013-09-04

    To determine blue-light induced expression of the S-adenosyl-L-homocysteine hydrolase-like (sahhl) gene in the fungus Mucor amphibiorum RCS1. In a random PCR amplification, a sequence of 555 bp was obtained from M. amphibiorum RCS1. The 555 bp sequence was labeled with digoxigenin to prepare a probe for northern hybridization. By northern hybridization, the transcription of the sahhl gene was analyzed during culture of M. amphibiorum RCS1 mycelia from darkness to blue light to darkness; a real-time PCR method was used simultaneously for sahhl gene expression analysis. Compared with the sequence of the sahh gene from Homo sapiens, Mus musculus and several fungal species, a high homology of the 555 bp sequence was confirmed, preliminarily supporting that the 555 bp sequence is the sahhl gene of M. amphibiorum RCS1. After dark pre-culture for 24 h, a large amount of sahhl transcript could be detected in the mycelia by northern hybridization and real-time PCR under 24 h of blue light. However, a large amount of sahhl transcript was not found after dark pre-culture for 48 h, even though M. amphibiorum RCS1 mycelia were induced by blue light. Blue light can thus induce the expression of the sahhl gene in vigorously growing M. amphibiorum RCS1 mycelia.

  14. Feeding strategy, nitrogen cycling, and profitability of dairy farms.

    PubMed

    Rotz, C A; Satter, L D; Mertens, D R; Muck, R E

    1999-12-01

    On a typical dairy farm today, large amounts of N are imported as feed supplements and fertilizer. If this N is not recycled through crop growth, it can lead to large losses to the atmosphere and ground water. More efficient use of protein feed supplements can potentially reduce the import of N in feeds, excretion of N in manure, and losses to the environment. A simulation study with a dairy farm model (DAFOSYM) illustrated that more efficient feeding and use of protein supplements increased farm profit and reduced N loss from the farm. Compared to soybean meal as the sole protein supplement, use of soybean meal along with a less rumen degradable protein feed reduced volatile N loss by 13 to 34 kg/ha of cropland with a small reduction in N leaching loss (about 1 kg/ha). Using the more expensive but less degradable protein supplement along with soybean meal improved net return by $46 to $69/cow per year, dependent on other management strategies of the farm. Environmental and economic benefits from more efficient supplementation of protein were generally greater with more animals per unit of land, higher milk production, more sandy soils, or a daily manure hauling strategy. Relatively less benefit was obtained when either alfalfa or corn silage was the sole forage on the farm or when relatively high amounts of forage were used in animal rations.

  15. Distribution of a pelagic tunicate, Salpa fusiformis in warm surface current of the eastern Korean waters and its impingement on cooling water intakes of Uljin nuclear power plant.

    PubMed

    Chae, Jinho; Choi, Hyun Woo; Lee, Woo Jin; Kim, Dongsung; Lee, Jae Hac

    2008-07-01

    Impingement of a large amount of the gelatinous plankton Salpa fusiformis on the seawater intake screens of the nuclear power plant at Uljin was first recorded on 18 June 2003. The total amount of clogged animals was estimated at approximately 295 tons, and the resulting shortage of cooling seawater supply caused a 38% decrease in the generation capability of the power plant. Zooplankton collections with a multiple towing net during the day and at night on 5-6 June 2003 included various gelatinous zooplankton known to be warm-water species, such as salps and siphonophores. The comparatively large species Salpa fusiformis accounted for 25.4% of the individual density among the gelatinous plankton and was distributed at depths shallower than the thermocline, performing little diel vertical migration. Temperature, salinity and satellite data also showed that a warm surface current predominated over the southern coastal region near the power plant in June. The results suggest that the warm surface current, occasionally extending into the neritic region, may transfer S. fusiformis to the waters off the power plant. The environmental factors and their relation to the ecobiology of the large salp population being sucked into the intake channel of the power plant are discussed.

  16. Hydrogen Production by Steam Reforming of Liquefied Natural Gas (LNG) Over Nickel-Phosphorus-Alumina Xerogel Catalyst Prepared by a Carbon-Templating Epoxide-Driven Sol-Gel Method.

    PubMed

    Bang, Yongju; Park, Seungwon; Han, Seung Ju; Yoo, Jaekyeong; Choi, Jung Ho; Kang, Tae Hun; Lee, Jinwon; Song, In Kyu

    2016-05-01

    A nickel-phosphorus-alumina xerogel catalyst was prepared by a carbon-templating epoxide-driven sol-gel method (denoted as CNPA catalyst), and it was applied to the hydrogen production by steam reforming of liquefied natural gas (LNG). For comparison, a nickel-phosphorus-alumina xerogel catalyst was also prepared by a similar method in the absence of carbon template (denoted as NPA catalyst). The effect of carbon template addition on the physicochemical properties and catalytic activities of the catalysts in the steam reforming of LNG was investigated. Both CNPA and NPA catalysts showed excellent textural properties with well-developed mesoporous structure. However, the CNPA catalyst retained a more reducible nickel aluminate phase than the NPA catalyst. XRD analysis of the reduced CNPA and NPA catalysts revealed that nickel sintering on the CNPA catalyst was suppressed compared to that on the NPA catalyst. From H2-TPD and CH4-TPD measurements of the reduced CNPA and NPA catalysts, it was also revealed that the CNPA catalyst, with a large amount of hydrogen uptake and strong hydrogen-binding sites, showed a larger amount of methane adsorption than the NPA catalyst. In the hydrogen production by steam reforming of LNG, the CNPA catalyst, with its large methane adsorption capacity, showed better catalytic activity than the NPA catalyst.

  17. Aspartame and sucrose produce a similar increase in the plasma phenylalanine to large neutral amino acid ratio in healthy subjects.

    PubMed

    Burns, T S; Stargel, W W; Tschanz, C; Kotsonis, F N; Hurwitz, A

    1991-01-01

    Aspartame (L-aspartyl-L-phenylalanine methyl ester) consumption has been postulated to increase brain phenylalanine levels by increasing the molar ratio of the plasma phenylalanine concentration to the sum of the plasma concentrations of the other large neutral amino acids (Phe/LNAA). Dietary manipulations with carbohydrate or protein can also produce changes in the Phe/LNAA value. To compare the effects of aspartame and carbohydrate on Phe/LNAA, beverages sweetened with aspartame, sucrose, and aspartame plus sucrose, and unsweetened beverage were ingested by 8 healthy, fasted subjects in a randomized, four-way crossover design. The beverages were sweetened with an amount of aspartame (500 mg) and/or sucrose (100 g) approximately equivalent to that used to sweeten 1 liter of soft drink. The baseline-corrected plasma Phe/LNAA values did not differ significantly following ingestion of aspartame or sucrose. Following aspartame alone, the high mean ratio increased 26% over baseline 1 h after ingestion. Following sucrose alone, the high mean ratio increased 19% at 2.5 h. Sucrose increased the Phe/LNAA value due to an insulin-mediated decrease in the plasma LNAA, while aspartame increased the ratio by increasing the plasma Phe concentration. These findings indicate that similar increases in plasma Phe/LNAA occur when healthy, fasting subjects ingest amounts of equivalent sweetness of sucrose or aspartame.
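
    For concreteness, the Phe/LNAA ratio referred to above is simple arithmetic on plasma concentrations; which amino acids are counted as the other large neutral amino acids varies somewhat between studies. The sketch below uses one common choice and entirely hypothetical plasma values.

```python
# Large neutral amino acids counted here: Tyr, Trp, Leu, Ile, Val, Met (one common choice)
LNAA = ("tyr", "trp", "leu", "ile", "val", "met")

def phe_lnaa_ratio(plasma_umol_per_l):
    """Molar ratio of plasma phenylalanine to the sum of the other
    large neutral amino acids."""
    return plasma_umol_per_l["phe"] / sum(plasma_umol_per_l[aa] for aa in LNAA)

# Hypothetical fasting plasma concentrations (umol/L)
baseline = {"phe": 55, "tyr": 60, "trp": 50, "leu": 120, "ile": 65, "val": 230, "met": 25}
one_hour = dict(baseline, phe=69)            # e.g. a ~25% rise in phenylalanine only
print(round(phe_lnaa_ratio(baseline), 3), round(phe_lnaa_ratio(one_hour), 3))
```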

  18. Solute-Filled Syringe For Formulating Intravenous Solution

    NASA Technical Reports Server (NTRS)

    Owens, Jim; Bindokas, AL; Dudar, Tom; Finley, Mike; Scharf, Mike

    1993-01-01

    Prefilled syringe contains premeasured amount of solute in powder or concentrate form used to deliver solute to sterile interior of large-volume parenteral (LVP) bag. Predetermined amount of sterile water also added to LVP bag through sterilizing filter, and mixed with contents of syringe, yielding sterile intravenous solution of specified concentration.

  19. An Alkalophilic Bacillus sp. Produces 2-Phenylethylamine

    PubMed Central

    Hamasaki, Nobuko; Shirai, Shinji; Niitsu, Masaru; Kakinuma, Katsumi; Oshima, Tairo

    1993-01-01

    A large amount of 2-phenylethylamine was produced in cells of alkalophilic Bacillus sp. strain YN-2000. This amine is secreted in the medium during the cell growth. The amounts of 2-phenylethylamine in both cells and medium change upon changing the pH of the medium. PMID:16349025

  20. Effects of corn processing, particle size, and diet form on performance of calves in bedded pens.

    PubMed

    Bateman, H G; Hill, T M; Aldrich, J M; Schlotterbeck, R L

    2009-02-01

    In a series of 5 trials, Holstein calves from zero to 12 wk old were housed in pens bedded with straw and fed diets to evaluate the effect of the physical form of starters containing differently processed corn on calf performance. Starters were formulated to have similar ingredient and nutrient compositions. Calves, initially less than 1 wk old, were housed in individual pens through 8 wk and weaned at 6 wk in trial 1 and at 4 wk in trials 2 and 3. In trials 4 and 5, calves initially 8 wk old were housed in group pens (6 calves/pen) from 8 to 12 wk. Trial 1 compared feeding calves a pelleted versus textured starter. Trial 2 compared feeding calves a textured starter versus feeding half meal starter with half textured starter. Trial 3 compared feeding calves textured starters containing whole, steam-flaked, or dry rolled corn. Trial 4 compared feeding calves textured starters containing steam-flaked versus dry rolled corn. Trial 5 compared feeding calves textured starters containing whole or dry rolled corn. Measurements included average daily gain (ADG), starter intake, feed efficiency, hip width change, body condition score change, fecal scores, and medical treatments. Physical form of starter feed did not affect any measurements in trials 1, 3, 4, and 5. In trial 2, calves fed starters manufactured with large amounts of fines had 11% less feed intake and 6% slower ADG than calves fed a textured starter. When starters contained similar ingredient and nutrient contents, manufacturing processes did not affect calf performance unless the diet contained a significant amount of fines, which reduced intake and ADG.

  1. An integrated model for assessing both crop productivity and agricultural water resources at a large scale

    NASA Astrophysics Data System (ADS)

    Okada, M.; Sakurai, G.; Iizumi, T.; Yokozawa, M.

    2012-12-01

    Agricultural production utilizes regional resources (e.g. river water and ground water) as well as local resources (e.g. temperature, rainfall, solar energy). Future climate changes and increasing demand due to population increases and economic developments would strongly affect the availability of water resources for agricultural production. While many studies assessed the impacts of climate change on agriculture, there are few studies that dynamically account for changes in water resources and crop production. This study proposes an integrated model for assessing both crop productivity and agricultural water resources at a large scale. Also, irrigation management in response to subseasonal variability in weather and crop response varies for each region and each crop. To deal with such variations, we used the Markov Chain Monte Carlo technique to quantify region-specific parameters associated with crop growth and irrigation water estimations. We coupled a large-scale crop model (Sakurai et al. 2012) with a global water resources model, H08 (Hanasaki et al. 2008). The integrated model consisted of five sub-models for the following processes: land surface, crop growth, river routing, reservoir operation, and anthropogenic water withdrawal. The land surface sub-model was based on a watershed hydrology model, SWAT (Neitsch et al. 2009). Surface and subsurface runoffs simulated by the land surface sub-model were input to the river routing sub-model of the H08 model. A part of the regional water resources available for agriculture, simulated by the H08 model, was input as irrigation water to the land surface sub-model. The timing and amount of irrigation water were simulated at a daily step. The integrated model reproduced the observed streamflow in an individual watershed. Additionally, the model accurately reproduced the trends and interannual variations of crop yields. To demonstrate the usefulness of the integrated model, we compared two types of impact assessment of climate change on crop productivity in a watershed. The first was carried out by the large-scale crop model alone. The second was carried out by the integrated model of the large-scale crop model and the H08 model. The former projected that changes in temperature and precipitation due to future climate change would increase the water stress in crops. Nevertheless, the latter projected that the increasing amount of agricultural water resources in the watershed would supply a sufficient amount of water for irrigation and consequently reduce the water stress. The integrated model demonstrated the importance of taking into account the water circulation in the watershed when predicting regional crop production.

  2. Low Reynolds number numerical solutions of chaotic flow

    NASA Technical Reports Server (NTRS)

    Pulliam, Thomas H.

    1989-01-01

    Numerical computations of two-dimensional flow past an airfoil at low Mach number, large angle of attack, and low Reynolds number are reported which show a sequence of flow states leading from single-period vortex shedding to chaos via the period-doubling mechanism. Analysis of the flow in terms of phase diagrams, Poincare sections, and flowfield variables are used to substantiate these results. The critical Reynolds number for the period-doubling bifurcations is shown to be sensitive to mesh refinement and the influence of large amounts of numerical dissipation. In extreme cases, large amounts of added dissipation can delay or completely eliminate the chaotic response. The effect of artificial dissipation at these low Reynolds numbers is to produce a new effective Reynolds number for the computations.

  3. Social modeling effects on young women's breakfast intake.

    PubMed

    Hermans, Roel C J; Herman, C Peter; Larsen, Junilla K; Engels, Rutger C M E

    2010-12-01

    Numerous studies have shown that the presence of others influences young women's food intake. They eat more when the other eats more, and eat less when the other eats less. However, most of these studies have focused on snack situations. The present study assesses the degree to which young women model the breakfast intake of a same-sex peer in a semi-naturalistic setting. The study took place in a laboratory setting at the Radboud University Nijmegen, the Netherlands, during the period January to April 2009. After completing three cover tasks, normal-weight participants (n=57) spent a 20-minute break with a peer who ate a large amount or a small amount of breakfast or no breakfast at all. The participants' total amount of energy consumed (in kilocalories) during the break was measured. An analysis of variance was used to examine whether young women modeled the breakfast intake of same-sex peers. Results indicate a main effect of breakfast condition, F(2,54)=8.44; P<0.01. Participants exposed to a peer eating nothing ate less than did participants exposed to a peer eating a small amount (d=0.85) or large amount of breakfast (d=1.23). Intake in the Small-Breakfast condition did not differ substantially from intake in the Large-Breakfast condition. The findings from the present study provide evidence that modeling effects of food intake are weaker in eating contexts in which scripts or routines guide an individual's eating behavior. Copyright © 2010 American Dietetic Association. Published by Elsevier Inc. All rights reserved.

  4. Nanoscale welding aerosol sensing based on whispering gallery modes in a cylindrical silica resonator.

    PubMed

    Lee, Aram; Mills, Thomas; Xu, Yong

    2015-03-23

    We report an experimental technique where one uses a standard silica fiber as a cylindrical whispering gallery mode (WGM) resonator to sense airborne nanoscale aerosols produced by electric arc welding. We find that the accumulation of aerosols on the resonator surface induces a measurable red-shift in resonance frequency, and establish an empirical relation that links the magnitude of resonance shift with the amount of aerosol deposition. The WGM quality factors, by contrast, do not decrease significantly, even for samples with a large percentage of surface area covered by aerosols. Our experimental results are discussed and compared with existing literature on WGM-based nanoparticle sensing.

  5. Preliminary results on interstellar reddening as deduced from filter photometry

    NASA Technical Reports Server (NTRS)

    Laget, M.

    1972-01-01

    Filter photometry has been used to derive the interstellar reddening law from stars through the study of a single spectral type, B0. The deficiency in the far ultraviolet flux of a supergiant relative to a main sequence star is compared with the difference in the flux distribution due to a change of one spectral class. Individual interstellar reddening curves show the general feature reported by Stecher (1969) and by Bless and Savage (1970). There is a large amount of scatter in the far ultraviolet which may be partially due to a real difference in interstellar extinction and partially due to observational inaccuracy.

  6. Fiber-Optic Linear Displacement Sensor Based On Matched Interference Filters

    NASA Astrophysics Data System (ADS)

    Fuhr, Peter L.; Feener, Heidi C.; Spillman, William B.

    1990-02-01

    A fiber optic linear displacement sensor has been developed in which a pair of matched interference filters are used to encode linear position on a broadband optical signal as relative intensity variations. As the filters are displaced, the optical beam illuminates varying amounts of each filter. Determination of the relative intensities at each filter pair's passband is based on measurements acquired with matching filters and photodetectors. Errors induced by source power variation are minimized by basing the determination of linear position on the signal visibility. A theoretical prediction of the sensor's performance is developed and compared with experiments performed in the near IR spectral region using large core multimode optical fiber.
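
    A small sketch of the ratiometric idea described above: defining the signal visibility as (I1 - I2)/(I1 + I2) cancels common source-power fluctuations, so displacement can be recovered from the two filtered channels. The linear illumination split and the drift model below are hypothetical.

```python
import numpy as np

def visibility(I1, I2):
    """Ratiometric signal: common source-power fluctuations cancel."""
    return (I1 - I2) / (I1 + I2)

# Hypothetical sensor model: the beam illuminates complementary fractions
# x and (1 - x) of the two matched filters as the element is displaced.
x = np.linspace(0.0, 1.0, 11)            # normalized displacement
P = 1.0 + 0.2 * np.sin(5.0 * x)          # slow source-power drift
I1, I2 = P * x, P * (1.0 - x)            # intensities in the two filtered channels
print(np.allclose(visibility(I1, I2), 2.0 * x - 1.0))   # True: displacement recovered despite drift
```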

  7. Status of wind-energy conversion

    NASA Technical Reports Server (NTRS)

    Thomas, R. L.; Savino, J. M.

    1973-01-01

    The utilization of wind energy is technically feasible as evidenced by the many past demonstrations of wind generators. The cost of energy from the wind has been high compared to fossil fuel systems; a sustained development effort is needed to obtain economical systems. The variability of the wind makes it an unreliable source on a short term basis. However, the effects of this variability can be reduced by storage systems or connecting wind generators to: (1) fossil fuel systems; (2) hydroelectric systems; or (3) dispersing them throughout a large grid network. Wind energy appears to have the potential to meet a significant amount of our energy needs.

  8. Storage hierarchies and multimedia file servers

    NASA Astrophysics Data System (ADS)

    Wullert, John R.; Von Lehman, Ann C.

    1994-11-01

    A variety of multimedia and video services have been proposed and investigated, including services such as video-on-demand, distance learning, home shopping, and telecommuting. These services tend to rely on high-data-rate communications and most have a corresponding need for a large amount of storage with high data rates and short access times. For some services, it has been predicted that the cost of storage will be significant compared to the cost of switching and transmission in a broadband network. This paper discusses architectures of a variety of multimedia and video services, with an emphasis on the relationship between technological considerations of the storage hierarchy to support these services and service architectures.

  9. An Approach for Removing Redundant Data from RFID Data Streams

    PubMed Central

    Mahdin, Hairulnizam; Abawajy, Jemal

    2011-01-01

    Radio frequency identification (RFID) systems are emerging as the primary object identification mechanism, especially in supply chain management. However, RFID naturally generates a large amount of duplicate readings. Removing these duplicates from the RFID data stream is paramount as they do not contribute new information to the system and waste system resources. Existing approaches to deal with this problem cannot fulfill the real time demands to process the massive RFID data stream. We propose a data filtering approach that efficiently detects and removes duplicate readings from RFID data streams. Experimental results show that the proposed approach offers a significant improvement as compared to the existing approaches. PMID:22163730
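
    A minimal sketch of time-window duplicate filtering of the kind discussed above (not the specific approach proposed in the paper): a reading is dropped if the same tag was seen within the last few seconds. Tag IDs, timestamps and the window length are hypothetical.

```python
import time

class TimeWindowFilter:
    """Drop readings of a tag that was already seen within the last `window`
    seconds (sliding window). Real deployments often replace the dictionary
    with Bloom-filter variants to bound memory on dense tag streams."""

    def __init__(self, window=2.0):
        self.window = window
        self.last_seen = {}

    def accept(self, tag_id, timestamp=None):
        t = time.time() if timestamp is None else timestamp
        previous = self.last_seen.get(tag_id)
        self.last_seen[tag_id] = t
        return previous is None or (t - previous) > self.window

# Hypothetical reading stream of (tag, time-in-seconds) pairs
stream = [("EPC-1", 0.0), ("EPC-1", 0.4), ("EPC-2", 0.5), ("EPC-1", 3.1)]
rfid_filter = TimeWindowFilter(window=2.0)
print([r for r in stream if rfid_filter.accept(*r)])   # duplicates within 2 s are removed
```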

  10. Spectroscopic identification of dichlorobenzyl radicals: Jet-cooled 2,3-dichlorobenzyl radical

    NASA Astrophysics Data System (ADS)

    Chae, Sang Youl; Yoon, Young Wook; Lee, Sang Kuk

    2015-07-01

    The vibronically excited but jet-cooled 2,3-dichlorobenzyl radical was generated from the corona discharge of the precursor 2,3-dichlorotoluene seeded in a large amount of He carrier gas using a pinhole-type glass nozzle. From an analysis of the visible vibronic emission spectrum observed, we obtained the electronic energy of the D1 → D0 transition and the vibrational mode frequencies in the D0 state of the 2,3-dichlorobenzyl radical by comparing the observations with the results of ab initio calculations. In addition, we discuss the substituent effect of the Cl atoms on the electronic transition energy in terms of substituent orientation for the first time.

  11. Evaluation of nutraceutical and antinutritional properties in barnyard and finger millet varieties grown in Himalayan region.

    PubMed

    Panwar, Priyankar; Dubey, Ashutosh; Verma, A K

    2016-06-01

    Five elite varieties of barnyard millet (Echinochloa frumentacea) and finger millet (Eleusine coracana) grown in the northwestern Himalaya were investigated for nutraceutical and antinutritional properties. Barnyard millet contained higher amounts of crude fiber, total dietary fiber, tryptophan, total carotenoids and α-tocopherol compared to finger millet, whereas finger millet contained higher amounts of methionine and ascorbic acid compared to barnyard millet. Secondary metabolites with biological functions were analyzed, and barnyard millet was found to contain higher amounts of polyphenols, tannins and ortho-dihydroxy phenols compared to finger millet. Among antinutritional compounds, barnyard millet contained a lower phytic acid content compared to finger millet, whereas no significant difference in trypsin inhibition activity between barnyard and finger millet varieties was found. Barnyard millet contained higher acid phosphatase, α-galactosidase and α-amylase inhibitor activity compared to finger millet. Finger millet seeds contained about 10-13-fold higher calcium content and double the manganese content in comparison to barnyard millet seeds. The present study suggests that the barnyard millet varieties investigated are nutritionally superior to the finger millet varieties.

  12. Amounts and activity concentrations of radioactive wastes from the cleanup of large areas contaminated in nuclear accidents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehto, J.; Ikaeheimonen, T.K.; Salbu, B.

    The fallout from a major nuclear accident at a nuclear plant may result in a wide-scale contamination of the environment. Cleanup of contaminated areas is of special importance if these areas are populated or cultivated. All cleanup measures generate high amounts of radioactive waste, which have to be treated and disposed of in a safe manner. Scenarios assessing the amounts and activity concentrations of radioactive wastes for various cleanup measures after severe nuclear accidents have been worked out for urban, forest and agricultural areas. These scenarios are based on contamination levels and areas of contaminated lands from a model accident, which simulates a worst-case accident at a nuclear power plant. Amounts and activity concentrations of cleanup wastes are not only dependent on the contamination levels and areas of affected lands, but also on the type of deposition, wet or dry, on the time between the deposition and the cleanup work, on the season at which the deposition took place, and finally on the level of cleanup work. In this study practically all types of cleanup wastes were considered, whether or not the corresponding cleanup measures are cost-effective or justified. All cleanup measures are shown to create large amounts of radioactive wastes, but the amounts, as well as the activity concentrations, vary widely from case to case.

  13. Gigwa-Genotype investigator for genome-wide analyses.

    PubMed

    Sempéré, Guilhem; Philippe, Florian; Dereeper, Alexis; Ruiz, Manuel; Sarah, Gautier; Larmande, Pierre

    2016-06-06

    Exploring the structure of genomes and analyzing their evolution is essential to understanding the ecological adaptation of organisms. However, with the large amounts of data being produced by next-generation sequencing, computational challenges arise in terms of storage, search, sharing, analysis and visualization. This is particularly true with regards to studies of genomic variation, which are currently lacking scalable and user-friendly data exploration solutions. Here we present Gigwa, a web-based tool that provides an easy and intuitive way to explore large amounts of genotyping data by filtering it not only on the basis of variant features, including functional annotations, but also on genotype patterns. The data storage relies on MongoDB, which offers good scalability properties. Gigwa can handle multiple databases and may be deployed in either single- or multi-user mode. In addition, it provides a wide range of popular export formats. The Gigwa application is suitable for managing large amounts of genomic variation data. Its user-friendly web interface makes such processing widely accessible. It can either be simply deployed on a workstation or be used to provide a shared data portal for a given community of researchers.
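
    As an illustration of the kind of combined variant-feature and genotype-pattern filtering described above, the sketch below issues a MongoDB query via pymongo. The database name, collection layout and field names are hypothetical placeholders, not Gigwa's actual schema.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
variants = client["genotyping_demo"]["variants"]       # hypothetical database/collection

# Filter on variant features (type, position, functional annotation) ...
feature_query = {
    "type": "SNP",
    "chrom": "chr1",
    "pos": {"$gte": 100000, "$lte": 2000000},
    "annotation.effect": {"$in": ["missense_variant", "stop_gained"]},
}

# ... combined with a genotype pattern: sample S1 homozygous for the
# alternate allele, sample S2 carrying at least one reference allele.
genotype_query = {
    "genotypes.S1": "1/1",
    "genotypes.S2": {"$in": ["0/0", "0/1"]},
}

for variant in variants.find({**feature_query, **genotype_query}).limit(50):
    print(variant["chrom"], variant["pos"], variant["annotation"]["effect"])
```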

  14. Faster growth in warmer winters for large trees in a Mediterranean-climate ecosystem

    Treesearch

    Seth W. Bigelow; Michael J. Papaik; Caroline Caum; Malcolm P. North

    2014-01-01

    Large trees (>76 cm breast-height diameter) are vital components of Sierra Nevada/Cascades mixed-conifer ecosystems because of their fire resistance, ability to sequester large amounts of carbon, and role as preferred habitat for sensitive species such as the California spotted owl. To investigate the likely performance of large trees in a rapidly changing...

  15. Increase in ozone due to the use of biodiesel fuel rather than diesel fuel.

    PubMed

    Thang, Phan Quang; Muto, Yusuke; Maeda, Yasuaki; Trung, Nguyen Quang; Itano, Yasuyuki; Takenaka, Norimichi

    2016-09-01

    The consumption of fuel by vehicles emits nitrogen oxides (NOx) and non-methane hydrocarbons (NMHCs) into the atmosphere, which are important ozone precursors. Ozone is formed as a secondary pollutant via photochemical processes and is not emitted directly into the atmosphere. In this paper, the ozone increase resulting from the use of biodiesel and diesel fuels was investigated, and the different ozone formation trends were experimentally evaluated. Known amounts of exhaust gas from a power generator operated on biodiesel and diesel fuels were added to ambient air. The quality of the ambient air, such as the initial NMHC and NOx concentrations, and the irradiation intensity have an effect on the ozone levels. When 30 cm³ of biodiesel fuel exhaust gas (BFEG) or diesel fuel exhaust gas (DFEG) was added to 18 dm³ of ambient air, the highest ratios of ozone increase from BFEG compared with DFEG in Japan and Vietnam were 31.2 and 42.8%, respectively, and the maximum ozone increases resulting from DFEG and BFEG compared with the ambient air in Japan were 17.4 and 26.4 ppb, respectively. The ozone increase resulting from the use of BFEG was large and significant compared to that from DFEG under all experimental conditions. The ozone concentration increased as the amount of added exhaust gas increased. The ozone increase from the Jatropha-BFEG was slightly higher than that from waste cooking oil-BFEG. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Ontogenetic and static allometry in the human face: contrasting Khoisan and Inuit.

    PubMed

    Freidline, Sarah E; Gunz, Philipp; Hublin, Jean-Jacques

    2015-09-01

    Regional differences in modern human facial features are present at birth, and ontogenetic allometry contributes to variation in adults. However, details regarding differential rates of growth and timing among regional groups are lacking. We explore ontogenetic and static allometry in a cross-sectional sample spanning Africa, Europe and North America, and evaluate tempo and mode in two regional groups with very different adult facial morphology, the Khoisan and Inuit. Semilandmark geometric morphometric methods, multivariate statistics and growth simulations were used to quantify and compare patterns of facial growth and development. Region-specific facial morphology develops early in ontogeny. The Inuit have the most distinct morphology and exhibit heterochronic differences in development compared to other regional groups. Allometric patterns differ during early postnatal development, when significant increases in size are coupled with large amounts of shape change. All regional groups share a common adult static allometric trajectory, which can be attributed to sexual dimorphism, and the corresponding allometric shape changes resemble developmental patterns during later ontogeny. The amount and pattern of growth and development may not be shared between regional groups, indicating that a certain degree of flexibility is permitted in achieving adult size. In early postnatal development the face is less constrained than other parts of the cranium, allowing for greater evolvability. The early development of region-specific facial features, combined with heterochronic differences in the timing or rate of growth reflected in differences in facial size, suggests different patterns of postnatal growth. © 2015 Wiley Periodicals, Inc.

  17. Urbanization and agricultural land loss in India: comparing satellite estimates with census data.

    PubMed

    Pandey, Bhartendu; Seto, Karen C

    2015-01-15

    We examine the impacts of urbanization on agricultural land loss in India from 2001 to 2010. We combined a hierarchical classification approach with econometric time series analysis to reconstruct land-cover change histories using time series MODIS 250 m VI images composited at 16-day intervals and nighttime lights (NTL) data. We compared estimates of agricultural land loss from satellite data with agricultural census data. Our analysis highlights six key results. First, agricultural land loss is occurring around smaller cities more than around bigger cities. Second, from 2001 to 2010, each state lost less than 1% of its total geographical area to the conversion of agricultural land for urban expansion. Third, the northeastern states experienced the least amount of agricultural land loss. Fourth, agricultural land loss is largely in states and districts that have a larger number of operational or approved SEZs. Fifth, urban conversion of agricultural land is concentrated in a few districts and states with high rates of economic growth. Sixth, agricultural land loss is predominantly in states with higher agricultural land suitability compared to other states. Although the total area of agricultural land lost to urban expansion has been relatively low, our results show that since 2006 the amount of agricultural land converted has been increasing steadily. Given that the preponderance of India's urban population growth has yet to occur, the results suggest an increase in the conversion of agricultural land going into the future. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel Adatron for Large Scale Classification of Protein Structures.

    PubMed

    Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana

    2016-01-01

    With the increasing power of computers, the amount of data that can be processed in small periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification. Most state-of-the-art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area worth exploring. The present paper proposes an approach, simple to implement, based on evolutionary algorithms and the Kernel-Adatron for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures, so knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.
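
    For context on the training scheme, the following is a minimal Python sketch of the classical Kernel-Adatron update rule for a kernel SVM on a toy dataset. It shows only the basic iterative rule; the paper's contribution of driving the training with evolutionary algorithms is not reproduced here, and the kernel width, learning rate and toy data are illustrative assumptions.

    ```python
    # Minimal sketch of the classical Kernel-Adatron rule for training a kernel
    # SVM. Only the basic iterative update is shown; the evolutionary search
    # proposed in the paper is not reproduced here.
    import numpy as np

    def rbf_kernel(X, gamma=0.5):
        sq = np.sum(X**2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        return np.exp(-gamma * d2)

    def kernel_adatron(X, y, eta=0.1, epochs=200):
        """y must be in {-1, +1}; returns the dual coefficients alpha."""
        K = rbf_kernel(X)
        alpha = np.zeros(len(y))
        for _ in range(epochs):
            for i in range(len(y)):
                # Margin of sample i under the current dual solution.
                z_i = np.sum(alpha * y * K[i])
                # Additive update, clipped at zero to keep alpha feasible.
                alpha[i] = max(0.0, alpha[i] + eta * (1.0 - y[i] * z_i))
        return alpha

    def predict(X_train, y, alpha, X_new, gamma=0.5):
        K = np.exp(-gamma * np.sum((X_new[:, None, :] - X_train[None, :, :])**2, axis=2))
        return np.sign(K @ (alpha * y))

    # Toy usage: two Gaussian blobs.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
    y = np.array([-1] * 20 + [1] * 20)
    alpha = kernel_adatron(X, y)
    print(predict(X, y, alpha, np.array([[-1.0, -1.0], [1.0, 1.0]])))
    ```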

  19. Analyzing large scale genomic data on the cloud with Sparkhit

    PubMed Central

    Huang, Liren; Krüger, Jan

    2018-01-01

    Motivation: The increasing amount of next-generation sequencing data poses a fundamental challenge for large-scale genomic analytics. Existing tools use different distributed computational platforms to scale out bioinformatics workloads, but their scalability is limited and they carry heavy runtime overheads when pre-processing large amounts of data. To address these limitations, we have developed Sparkhit: a distributed bioinformatics framework built on top of the Apache Spark platform. Results: Sparkhit integrates a variety of analytical methods and is implemented in the Spark extended MapReduce model. It runs 92–157 times faster than MetaSpark on metagenomic fragment recruitment and 18–32 times faster than Crossbow on data pre-processing. We analyzed 100 terabytes of data across four genomic projects in the cloud in 21 h, including the run times of cluster deployment and data downloading. Furthermore, our application on the entire Human Microbiome Project shotgun sequencing data was completed in 2 h, presenting an approach to easily associate large amounts of public datasets with reference data. Availability and implementation: Sparkhit is freely available at https://rhinempi.github.io/sparkhit/. Contact: asczyrba@cebitec.uni-bielefeld.de. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29253074
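
    For readers unfamiliar with the execution model, the following is a generic PySpark sketch of the MapReduce-style pre-processing pattern the abstract refers to (distributed filtering and aggregation of reads). It uses plain PySpark rather than Sparkhit's own API, and the input path, read format and length cutoff are hypothetical.

    ```python
    # Generic PySpark sketch of MapReduce-style pre-processing of sequencing
    # reads, illustrating the kind of distributed workload the abstract
    # describes. This uses plain PySpark, not Sparkhit's own API; the input path
    # and quality cutoff are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("read-preprocessing-sketch").getOrCreate()
    sc = spark.sparkContext

    # Each line is assumed to be one read sequence (simplified FASTA/FASTQ handling).
    reads = sc.textFile("hdfs:///data/reads/*.txt")

    # Map: keep reads above a minimum length, emit (GC-content bucket, 1).
    def gc_bucket(read):
        gc = sum(1 for b in read.upper() if b in "GC") / max(len(read), 1)
        return (round(gc, 1), 1)

    counts = (reads
              .filter(lambda r: len(r) >= 50)
              .map(gc_bucket)
              .reduceByKey(lambda a, b: a + b))  # Reduce: aggregate per bucket.

    for bucket, n in sorted(counts.collect()):
        print(f"GC ~ {bucket:.1f}: {n} reads")

    spark.stop()
    ```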

  20. Implementing Parquet equations using HPX

    NASA Astrophysics Data System (ADS)

    Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark

    A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevance of the parquet equations depends upon the ability to solve systems that require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from the stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and the limitations they impose vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.
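
    The key idea of overlapping computation with communication can be illustrated generically; the sketch below does so in Python with a thread pool, prefetching the next block of data while the current block is processed. HPX itself is a C++ runtime and is not used here; fetch_block and process_block are hypothetical stand-ins for a transfer and a compute kernel.

    ```python
    # Conceptual illustration, in Python, of overlapping computation with
    # communication using futures; HPX itself is a C++ runtime and is not used
    # here. "fetch_block" stands in for a network transfer and "process_block"
    # for a compute kernel; both are hypothetical.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def fetch_block(i):
        time.sleep(0.2)                    # pretend network transfer
        return [i] * 1000

    def process_block(block):
        return sum(x * x for x in block)   # pretend compute kernel

    with ThreadPoolExecutor(max_workers=2) as pool:
        next_block = pool.submit(fetch_block, 0)      # prefetch the first block
        total = 0
        for i in range(1, 5):
            block = next_block.result()               # wait for the transfer to finish
            next_block = pool.submit(fetch_block, i)  # overlap: fetch i while computing i-1
            total += process_block(block)
        total += process_block(next_block.result())
    print(total)
    ```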

  1. A Cerebellar-model Associative Memory as a Generalized Random-access Memory

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1989-01-01

    A versatile neural-net model is explained in terms familiar to computer scientists and engineers. It is called the sparse distributed memory, and it is a random-access memory for very long words (for patterns with thousands of bits). Its potential utility is the result of several factors: (1) a large pattern representing an object or a scene or a moment can encode a large amount of information about what it represents; (2) this information can serve as an address to the memory, and it can also serve as data; (3) the memory is noise tolerant--the information need not be exact; (4) the memory can be made arbitrarily large and hence an arbitrary amount of information can be stored in it; and (5) the architecture is inherently parallel, allowing large memories to be fast. Such memories can become important components of future computers.
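
    A compact sketch of a Kanerva-style sparse distributed memory illustrates properties (1) to (5): random binary hard locations, activation of every location within a Hamming radius of the address, additive bipolar counters for writing, and thresholded counter sums for reading. The dimensions and radius below are illustrative choices, not values from the report.

    ```python
    # Compact sketch of a Kanerva-style sparse distributed memory: random binary
    # "hard locations", activation of every location within a Hamming radius of
    # the address, additive bipolar counters for writing, and thresholded
    # counter sums for reading. Dimensions and radius are illustrative choices.
    import numpy as np

    class SparseDistributedMemory:
        def __init__(self, n_bits=256, n_locations=2000, radius=112, seed=0):
            rng = np.random.default_rng(seed)
            self.locations = rng.integers(0, 2, size=(n_locations, n_bits))
            self.counters = np.zeros((n_locations, n_bits), dtype=np.int32)
            self.radius = radius

        def _active(self, address):
            # All hard locations whose Hamming distance to the address is small.
            dist = np.sum(self.locations != address, axis=1)
            return dist <= self.radius

        def write(self, address, data):
            bipolar = 2 * data - 1                     # 0/1 -> -1/+1
            self.counters[self._active(address)] += bipolar

        def read(self, address):
            sums = self.counters[self._active(address)].sum(axis=0)
            return (sums > 0).astype(int)              # threshold back to bits

    # Toy usage: store a pattern at its own address and recall it from a noisy cue.
    rng = np.random.default_rng(1)
    pattern = rng.integers(0, 2, 256)
    sdm = SparseDistributedMemory()
    sdm.write(pattern, pattern)
    noisy = pattern.copy()
    noisy[rng.choice(256, 20, replace=False)] ^= 1     # flip 20 bits in the cue
    print(np.sum(sdm.read(noisy) != pattern), "bits differ after recall")
    ```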

  2. Occupational cancer in the European part of the Commonwealth of Independent States.

    PubMed Central

    Bulbulyan, M A; Boffetta, P

    1999-01-01

    Precise information on the number of workers currently exposed to carcinogens in the Commonwealth of Independent States (CIS) is lacking. However, the large number of workers employed in high-risk industries such as the chemical and metal industries suggests that the number of workers potentially exposed to carcinogens may be large. In the CIS, women account for almost 50% of the industrial work force. Although no precise data are available on the number of cancers caused by occupational exposures, indirect evidence suggests that the magnitude of the problem is comparable to that observed in Western Europe, representing some 20,000 cases per year. The large number of women employed in the past and at present in industries that create potential exposure to carcinogens is a special characteristic of the CIS. In recent years an increasing amount of high-quality research has been conducted on occupational cancer in the CIS; there is, however, room for further improvement. International training programs should be established, and funds from international research and development programs should be devoted to this area. In recent years, following privatization of many large-scale industries, access to employment and exposure data is becoming increasingly difficult. PMID:10350512

  3. Chaotic Traversal (CHAT): Very Large Graphs Traversal Using Chaotic Dynamics

    NASA Astrophysics Data System (ADS)

    Changaival, Boonyarit; Rosalie, Martin; Danoy, Grégoire; Lavangnananda, Kittichai; Bouvry, Pascal

    2017-12-01

    Graph traversal algorithms find applications in various fields such as routing problems, natural language processing and database querying. Exploration can be considered a first stepping stone toward knowledge extraction from a graph, which is now a popular topic. Classical solutions such as Breadth First Search (BFS) and Depth First Search (DFS) require huge amounts of memory for exploring very large graphs. In this research, we present a novel memoryless graph traversal algorithm, Chaotic Traversal (CHAT), which integrates chaotic dynamics to traverse large unknown graphs via the Lozi map and the Rössler system. To compare the effects of various dynamics on our algorithm, we present an original way to explore a parameter space using a bifurcation diagram with respect to the topological structure of attractors. The resulting algorithm is efficient and undemanding of resources, and is therefore very suitable for partial traversal of very large and/or unknown environment graphs. CHAT performance using the Lozi map is shown to be superior to the commonly known random walk in terms of the number of nodes visited (coverage percentage) and computation time when the environment is unknown and memory usage is restricted.
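
    The general idea of a chaos-driven, memoryless traversal can be sketched as follows in Python: the next neighbor is chosen from an iterate of the Lozi map rather than from a pseudo-random number. The parameter values and the mapping from chaotic state to neighbor index are illustrative assumptions, not the authors' exact scheme.

    ```python
    # Minimal sketch of a chaos-driven, memoryless graph traversal: the next
    # neighbor is selected from an iterate of the Lozi map instead of a
    # pseudo-random number. Parameters (a=1.7, b=0.5) are the usual chaotic
    # regime; the state-to-neighbor mapping is an illustrative choice, not the
    # authors' exact scheme.
    import networkx as nx

    def lozi_step(x, y, a=1.7, b=0.5):
        return 1.0 - a * abs(x) + y, b * x

    def chaotic_traversal(graph, start, steps=1000):
        x, y = 0.1, 0.1
        node = start
        visited = {node}
        for _ in range(steps):
            neighbors = sorted(graph.neighbors(node))
            if not neighbors:
                break
            x, y = lozi_step(x, y)
            # Fold the chaotic coordinate into [0, 1) and pick a neighbor index.
            idx = int((abs(x) % 1.0) * len(neighbors)) % len(neighbors)
            node = neighbors[idx]
            visited.add(node)
        return visited

    # Toy usage: coverage of a random graph after a fixed number of chaotic steps.
    g = nx.erdos_renyi_graph(200, 0.05, seed=42)
    covered = chaotic_traversal(g, start=0, steps=2000)
    print(f"coverage: {100.0 * len(covered) / g.number_of_nodes():.1f}%")
    ```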

  4. A microfluidic array for high-content screening at whole-organism resolution

    NASA Astrophysics Data System (ADS)

    Migliozzi, D.; Cornaglia, M.; Mouchiroud, L.; Auwerx, J.; Gijs, M. A. M.

    2018-02-01

    A main step in the development and validation of medical drugs is screening on whole organisms, which gives the systemic information that is missing when using cellular models. Among the organisms of choice, Caenorhabditis elegans is a soil worm that catches the interest of researchers who study systemic physiopathology (e.g. metabolic and neurodegenerative diseases) because: (1) its large genetic homology with humans supports translational analysis; (2) worms are much easier to handle and grow in large amounts compared to rodents, for which (3) the costs and (4) the ethical concerns are substantial. C. elegans is therefore well suited for large screens, dose-response analysis and target discovery involving an entire organism. We have developed and tested a microfluidic array for high-content screening, enabling the selection of small populations of its first larval stage in many separated chambers divided into channels for multiplexed screens. With automated protocols for feeding, drug administration and image acquisition, our chip enables the study of the nematodes throughout their entire lifespan. Using a paralyzing agent and a mitochondrial-stress inducer as case studies, we have demonstrated large field-of-view motility analysis and worm segmentation/signal detection for mode-of-action quantification with genetically encoded fluorescence reporters.

  5. MaRaCluster: A Fragment Rarity Metric for Clustering Fragment Spectra in Shotgun Proteomics.

    PubMed

    The, Matthew; Käll, Lukas

    2016-03-04

    Shotgun proteomics experiments generate large amounts of fragment spectra as primary data, normally with high redundancy between and within experiments. Here, we have devised a clustering technique to identify fragment spectra stemming from the same species of peptide. This is a powerful alternative method to traditional search engines for analyzing spectra, specifically useful for larger scale mass spectrometry studies. As an aid in this process, we propose a distance calculation relying on the rarity of experimental fragment peaks, following the intuition that peaks shared by only a few spectra offer more evidence than peaks shared by a large number of spectra. We used this distance calculation and a complete-linkage scheme to cluster data from a recent large-scale mass spectrometry-based study. The clusterings produced by our method have up to 40% more identified peptides for their consensus spectra compared to those produced by the previous state-of-the-art method. We see that our method would advance the construction of spectral libraries as well as serve as a tool for mining large sets of fragment spectra. The source code and Ubuntu binary packages are available at https://github.com/statisticalbiotechnology/maracluster (under an Apache 2.0 license).
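
    The core intuition, that shared peaks carry more evidence when they are rare across the dataset, can be sketched with a toy example: a rarity-weighted distance between binned spectra followed by complete-linkage clustering with SciPy. The binning and weighting below are simplified illustrations, not MaRaCluster's exact metric.

    ```python
    # Toy sketch of the core intuition: shared fragment peaks count for more
    # when they are rare across the whole dataset, and spectra are then grouped
    # with complete-linkage hierarchical clustering. Peak binning and the
    # weighting scheme are simplified, not MaRaCluster's exact metric.
    from collections import Counter
    from itertools import combinations
    import math
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    # Each spectrum is represented as a set of integer m/z bins (hypothetical data).
    spectra = [
        {100, 150, 200, 337, 512},
        {100, 150, 200, 337, 640},
        {110, 151, 480, 730, 900},
        {110, 151, 481, 730, 901},
    ]

    # Rarity weight: peaks observed in few spectra carry more evidence.
    peak_counts = Counter(p for s in spectra for p in s)
    def weight(peak):
        return math.log(len(spectra) / peak_counts[peak]) + 1.0

    def distance(s1, s2):
        shared = sum(weight(p) for p in s1 & s2)
        total = sum(weight(p) for p in s1 | s2)
        return 1.0 - shared / total

    n = len(spectra)
    dmat = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        dmat[i, j] = dmat[j, i] = distance(spectra[i], spectra[j])

    # Complete linkage on the condensed distance matrix; cut into two clusters.
    labels = fcluster(linkage(squareform(dmat), method="complete"),
                      t=2, criterion="maxclust")
    print(labels)   # expected: spectra 0,1 in one cluster and 2,3 in the other
    ```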

  6. Plasma issues associated with the use of electrodynamic tethers

    NASA Technical Reports Server (NTRS)

    Hastings, D. E.

    1986-01-01

    The use of an electrodynamic tether to generate power or thrust on the space station raises important plasma issues associated with the current flow. In addition to the issue of current closure through the space station, high-power tethers (tens of kilowatts or more) require the use of plasma contactors to enhance the current flow. These will generate large amounts of electrostatic turbulence in the vicinity of the space station, because the contactors work best when a large amount of current-driven turbulence is excited. Current work is reviewed and future directions are suggested.

  7. Models of resource planning during formation of calendar construction plans for erection of high-rise buildings

    NASA Astrophysics Data System (ADS)

    Pocebneva, Irina; Belousov, Vadim; Fateeva, Irina

    2018-03-01

    This article provides a methodical description of resource-time analysis for a wide range of requirements imposed on resource consumption processes in scheduling tasks during the construction of high-rise buildings and facilities. The core of the proposed approach is the set of resource models being determined. Generalized network models are the elements of those models, and their number can be too large to allow analysis of each element. The problem, therefore, is to approximate the original resource model by simpler time models whose number is not too large.

  8. Numerical experiments on short-term meteorological effects on solar variability

    NASA Technical Reports Server (NTRS)

    Somerville, R. C. J.; Hansen, J. E.; Stone, P. H.; Quirk, W. J.; Lacis, A. A.

    1975-01-01

    A set of numerical experiments was conducted to test the short-range sensitivity of a large atmospheric general circulation model to changes in solar constant and ozone amount. On the basis of the results of 12-day sets of integrations with very large variations in these parameters, it is concluded that realistic variations would produce insignificant meteorological effects. Any causal relationships between solar variability and weather, for time scales of two weeks or less, rely upon changes in parameters other than solar constant or ozone amounts, or upon mechanisms not yet incorporated in the model.

  9. A New Approach for Validating Satellite Estimates of Soil Moisture Using Large-Scale Precipitation: Comparing AMSR-E Products

    NASA Astrophysics Data System (ADS)

    Tuttle, S. E.; Salvucci, G.

    2012-12-01

    Soil moisture influences many hydrological processes in the water and energy cycles, such as runoff generation, groundwater recharge, and evapotranspiration, and thus is important for climate modeling, water resources management, agriculture, and civil engineering. Large-scale estimates of soil moisture are produced almost exclusively from remote sensing, while validation of remotely sensed soil moisture has relied heavily on ground truthing, which is at an inherently smaller scale. Here we present a complementary method to determine the information content in different soil moisture products using only large-scale precipitation data (i.e. without modeling). This study builds on the work of Salvucci [2001], Saleem and Salvucci [2002], and Sun et al. [2011], in which precipitation was conditionally averaged according to soil moisture level, resulting in moisture-outflow curves that estimate the dependence of drainage, runoff, and evapotranspiration on soil moisture (i.e. sigmoidal relations that reflect stressed evapotranspiration for dry soils, roughly constant flux equal to potential evaporation minus capillary rise for moderately dry soils, and rapid drainage for very wet soils). We postulate that high quality satellite estimates of soil moisture, using large-scale precipitation data, will yield similar sigmoidal moisture-outflow curves to those that have been observed at field sites, while poor quality estimates will yield flatter, less informative curves that explain less of the precipitation variability. Following this logic, gridded ¼ degree NLDAS precipitation data were compared to three AMSR-E derived soil moisture products (VUA-NASA, or LPRM [Owe et al., 2001], NSIDC [Njoku et al., 2003], and NSIDC-LSP [Jones & Kimball, 2011]) for a period of nine years (2001-2010) across the contiguous United States. Gaps in the daily soil moisture data were filled using a multiple regression model reliant on past and future soil moisture and precipitation, and soil moisture was then converted to a ranked wetness index, in order to reconcile the wide range and magnitude of the soil moisture products. Generalized linear models were employed to fit a polynomial model to precipitation, given wetness index. Various measures of fit (e.g. log likelihood) were used to judge the amount of information in each soil moisture product, as indicated by the amount of precipitation variability explained by the fitted model. Using these methods, regional patterns appear in soil moisture product performance.
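
    A compact sketch of the evaluation idea, on synthetic stand-in data, is given below: rank-transform a soil-moisture series into a wetness index, fit a polynomial GLM of daily precipitation on that index with statsmodels, and score the product by the fitted model's log-likelihood. The data generation and model family are illustrative assumptions, not the study's exact configuration.

    ```python
    # Compact sketch of the evaluation idea: rank-transform a soil-moisture
    # series into a wetness index, fit a polynomial GLM of daily precipitation
    # on that index, and score the product by the fitted log-likelihood. Data
    # here are synthetic stand-ins for the NLDAS precipitation and AMSR-E
    # retrievals; the Gamma/log-link family is an illustrative choice.
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import rankdata

    rng = np.random.default_rng(0)
    n_days = 3000

    # Synthetic "truth": precipitation statistics depend on soil wetness.
    true_moisture = rng.beta(2, 5, n_days)
    precip = rng.gamma(shape=1.0 + 3.0 * true_moisture, scale=2.0)

    # Two hypothetical satellite products: one informative, one mostly noise.
    good_product = true_moisture + rng.normal(0, 0.05, n_days)
    poor_product = true_moisture + rng.normal(0, 0.50, n_days)

    def loglik_of_product(soil_moisture, precip, degree=3):
        w = rankdata(soil_moisture) / len(soil_moisture)       # ranked wetness index
        X = sm.add_constant(np.column_stack([w**d for d in range(1, degree + 1)]))
        model = sm.GLM(precip, X,
                       family=sm.families.Gamma(link=sm.families.links.Log()))
        return model.fit().llf

    print("informative product:", loglik_of_product(good_product, precip))
    print("noisy product:      ", loglik_of_product(poor_product, precip))
    ```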

  10. Human-Machine Cooperation in Large-Scale Multimedia Retrieval: A Survey

    ERIC Educational Resources Information Center

    Shirahama, Kimiaki; Grzegorzek, Marcin; Indurkhya, Bipin

    2015-01-01

    "Large-Scale Multimedia Retrieval" (LSMR) is the task to fast analyze a large amount of multimedia data like images or videos and accurately find the ones relevant to a certain semantic meaning. Although LSMR has been investigated for more than two decades in the fields of multimedia processing and computer vision, a more…

  11. Large Groups in the Boundary Waters Canoe Area - Their Numbers, Characteristics, and Impact

    Treesearch

    David W. Lime

    1972-01-01

    The impact of "large" parties in the BWCA is discussed in terms of their effect on the resource and on the experience of other visitors. The amount of use by large groups and the visitors most likely to be affected by a reduction in party size limit are described.

  12. Large-scale retrieval for medical image analytics: A comprehensive review.

    PubMed

    Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting

    2018-01-01

    Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, with huge amounts of medical images produced at ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of handling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing and searching. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, covering a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
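
    The core of such a retrieval pipeline, feature representation, indexing and searching, can be sketched in a few lines of NumPy: images are represented as feature vectors, the collection is indexed as a single matrix, and queries are answered by cosine similarity. The random features below are stand-ins; a real large-scale system would use learned descriptors and an approximate nearest-neighbor index.

    ```python
    # Tiny sketch of the core of a retrieval pipeline: images represented as
    # feature vectors, the collection indexed as one matrix, queries answered by
    # cosine similarity. Features here are random stand-ins; a real large-scale
    # system would use learned descriptors and an approximate NN index.
    import numpy as np

    rng = np.random.default_rng(0)
    n_images, dim = 10_000, 128

    # Feature representation + indexing: one L2-normalized row per image.
    features = rng.normal(size=(n_images, dim)).astype(np.float32)
    features /= np.linalg.norm(features, axis=1, keepdims=True)

    def search(query_vector, top_k=5):
        q = query_vector / np.linalg.norm(query_vector)
        scores = features @ q                   # cosine similarity against the index
        best = np.argpartition(-scores, top_k)[:top_k]
        return best[np.argsort(-scores[best])]  # top_k image ids, best first

    query = features[42] + 0.1 * rng.normal(size=dim)
    print(search(query))                        # image 42 should rank near the top
    ```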

  13. 75 FR 54059 - Extension of Filing Accommodation for Static Pool Information in Filings With Respect to Asset...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-03

    ... information could include a significant amount of statistical information that would be difficult to file... required static pool information. Given the large amount of statistical information involved, commentators....; and 18 U.S.C. 1350. * * * * * 2. Amend Sec. 232.312 paragraph (a) introductory text by removing...

  14. EFFECT OF X RADIATION ON THE AMOUNT OF PROPERDINE SERUM IN THE RAT (in French)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verain, A.; Despaux, E.; Verain, A.

    1958-01-01

    The effect of x radiation on the amount of properdine in rat serum was studied in vivo and in vitro. White rats were exposed to 1000 r, and the amount of properdine was determined before and after irradiation. Serum was irradiated in vitro with 1 to 2 Mr. The results showed a rapid, almost constant decrease of properdine in the irradiated rat. This effect was found in vitro only when large radiation doses were used. (J.S.R.)

  15. Managing Materials and Wastes for Homeland Security Incidents

    EPA Pesticide Factsheets

    To provide information on waste management planning and preparedness before a homeland security incident, including preparing for the large amounts of waste that would need to be managed when an incident occurs, such as a large-scale natural disaster.

  16. Reliability-based optimization design of geosynthetic reinforced road embankment.

    DOT National Transportation Integrated Search

    2014-07-01

    Road embankments are typically large earth structures, the construction of which requires large amounts of competent fill soil. In order to limit costs, the utilization of geosynthetics in road embankments allows for construction of steep slopes ...

  17. Inter-comparison of precipitable water among reanalyses and its effect on downscaling in the tropics

    NASA Astrophysics Data System (ADS)

    Takahashi, H. G.; Fujita, M.; Hara, M.

    2012-12-01

    This paper compared precipitable water (PW) among four major reanalyses. In addition, we investigated the effect of the boundary conditions on downscaling in the tropics using a regional climate model. The spatial pattern of PW in the reanalyses agreed closely with observations. However, the absolute amounts of PW in some reanalyses were very small compared to observations. The discrepancies in the 12-year mean PW in July over the Southeast Asian monsoon region exceeded the inter-annual standard deviation of the PW. There was also a discrepancy in tropical PW throughout the year, an indication that the problem is not regional but global. Downscaling experiments forced by the four different reanalyses were conducted. The differences in atmospheric circulation, including the monsoon westerlies and various disturbances, were very small among the reanalyses. However, simulated precipitation was only 60% of observed precipitation, although the dry bias in the boundary conditions was only 6%. This result indicates that dry bias has large effects on precipitation in downscaling over the tropics. It suggests that, in the tropics, a regional climate downscaled from ensemble-mean boundary conditions is quite different from the ensemble mean of regional climates downscaled separately from the boundary conditions of each ensemble member. Downscaled models can provide realistic simulations of regional tropical climates only if the boundary conditions include realistic absolute amounts of PW; the use of such boundary conditions for downscaling in the tropics is imperative at the present time. This work was partly supported by the Global Environment Research Fund (RFa-1101) of the Ministry of the Environment, Japan.

  18. The status of coral reef ecology research in the Red Sea

    NASA Astrophysics Data System (ADS)

    Berumen, M. L.; Hoey, A. S.; Bass, W. H.; Bouwmeester, J.; Catania, D.; Cochran, J. E. M.; Khalil, M. T.; Miyake, S.; Mughal, M. R.; Spaet, J. L. Y.; Saenz-Agudelo, P.

    2013-09-01

    The Red Sea has long been recognized as a region of high biodiversity and endemism. Despite this diversity and early history of scientific work, our understanding of the ecology of coral reefs in the Red Sea has lagged behind that of other large coral reef systems. We carried out a quantitative assessment of ISI-listed research published from the Red Sea in eight specific topics (apex predators, connectivity, coral bleaching, coral reproductive biology, herbivory, marine protected areas, non-coral invertebrates and reef-associated bacteria) and compared the amount of research conducted in the Red Sea to that from Australia's Great Barrier Reef (GBR) and the Caribbean. On average, for these eight topics, the Red Sea had 1/6th the amount of research compared to the GBR and about 1/8th the amount of the Caribbean. Further, more than 50 % of the published research from the Red Sea originated from the Gulf of Aqaba, a small area (<2 % of the area of the Red Sea) in the far northern Red Sea. We summarize the general state of knowledge in these eight topics and highlight the areas of future research priorities for the Red Sea region. Notably, data that could inform science-based management approaches are badly lacking in most Red Sea countries. The Red Sea, as a geologically "young" sea located in one of the warmest regions of the world, has the potential to provide insight into pressing topics such as speciation processes as well as the capacity of reef systems and organisms to adapt to global climate change. As one of the world's most biodiverse coral reef regions, the Red Sea may yet have a significant role to play in our understanding of coral reef ecology at a global scale.

  19. Efficacy of CM-Wire, M-Wire, and Nickel-Titanium Instruments for Removing Filling Material from Curved Root Canals: A Micro-Computed Tomography Study.

    PubMed

    Rodrigues, Clarissa Teles; Duarte, Marco Antonio Hungaro; de Almeida, Marcela Milanezi; de Andrade, Flaviana Bombarda; Bernardineli, Norberti

    2016-11-01

    The aim of this ex vivo study was to evaluate the removal of filling material after using CM-wire, M-wire, and nickel-titanium instruments in both reciprocating and rotary motions in curved canals. Thirty maxillary lateral incisors were divided into 9 groups according to retreatment procedures: Reciproc R25 followed by Mtwo 40/.04 and ProDesign Logic 50/.01 files; ProDesign R 25/.06 followed by ProDesign Logic 40/.05 and ProDesign Logic 50/.01 files; and Gates-Glidden drills, Hedström files, and K-files up to apical size 30 followed by K-file 40 and K-file 50 up to the working length. Micro-computed tomography scans were performed before and after each reinstrumentation procedure to evaluate root canal filling removal. Statistical analysis was performed with Kruskal-Wallis, Friedman, and Wilcoxon tests (P < .05). No significant differences in filling material removal were found in the 3 groups of teeth. The use of Mtwo and ProDesign Logic 40/.05 rotary files did not enhance filling material removal after the use of reciprocating files. The use of ProDesign Logic 50/.01 files significantly reduced the amount of filling material at the apical levels compared with the use of reciprocating files. Association of reciprocating and rotary files was capable of removing a large amount of filling material in the retreatment of curved canals, irrespective of the type of alloy of the instruments. The use of a ProDesign Logic 50/.01 file for apical preparation significantly reduced the amount of remnant material in the apical portion when compared with reciprocating instruments. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  20. RNA sequencing: current and prospective uses in metabolic research.

    PubMed

    Vikman, Petter; Fadista, Joao; Oskolkov, Nikolay

    2014-10-01

    Previous global RNA analysis was restricted to known transcripts in species with a defined transcriptome. Next-generation sequencing has transformed transcriptomics by making it possible to analyse expressed genes with exon-level resolution from any tissue in any species, without any a priori knowledge of which genes are being expressed, their splice patterns or their nucleotide sequence. In addition, RNA sequencing is a more sensitive technique than microarrays, with a larger dynamic range, and it also allows for investigation of imprinting and allele-specific expression. This can be done at a cost that competes with that of a microarray, making RNA sequencing a technique available to most researchers. RNA sequencing has therefore recently become the state of the art for large-scale RNA investigations and has to a large extent replaced microarrays. The only drawback is the large amount of data produced, which, together with its complexity, can make a researcher spend far more time on analysis than on performing the actual experiment. © 2014 Society for Endocrinology.
