Sample records for methods yield consistent

  1. Planting pattern and weed control method influence on yield production of corn (Zea mays L.)

    NASA Astrophysics Data System (ADS)

    Purba, E.; Nasution, D. P.

    2018-02-01

    A field experiment was carried out to evaluate the influence of planting pattern and weed control method on the growth and yield of corn. The two factors were studied in a split-plot design. The main plots were three planting patterns: single row (25 cm x 60 cm), double row (25 cm x 25 cm x 60 cm) and triangle row (25 cm x 25 cm x 25 cm). The subplots were five weed control methods: weed-free throughout the growing season, hand weeding, spraying with glyphosate, spraying with paraquat, and no weeding. Results showed that neither planting pattern nor weed control method affected the growth of corn. However, both factors significantly affected yield. Yields from the double row and triangle planting patterns were 14% and 41% higher, respectively, than from the single row pattern. The triangle planting pattern combined with any weed control method produced the highest corn yield.

  2. Similar Estimates of Temperature Impacts on Global Wheat Yield by Three Independent Methods

    NASA Technical Reports Server (NTRS)

    Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; et al.

    2016-01-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.

  3. Similar estimates of temperature impacts on global wheat yield by three independent methods

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; Rosenzweig, Cynthia; Aggarwal, Pramod K.; Alderman, Phillip D.; Anothai, Jakarat; Basso, Bruno; Biernath, Christian; Cammarano, Davide; Challinor, Andy; Deryng, Delphine; Sanctis, Giacomo De; Doltra, Jordi; Fereres, Elias; Folberth, Christian; Garcia-Vila, Margarita; Gayler, Sebastian; Hoogenboom, Gerrit; Hunt, Leslie A.; Izaurralde, Roberto C.; Jabloun, Mohamed; Jones, Curtis D.; Kersebaum, Kurt C.; Kimball, Bruce A.; Koehler, Ann-Kristin; Kumar, Soora Naresh; Nendel, Claas; O'Leary, Garry J.; Olesen, Jørgen E.; Ottman, Michael J.; Palosuo, Taru; Prasad, P. V. Vara; Priesack, Eckart; Pugh, Thomas A. M.; Reynolds, Matthew; Rezaei, Ehsan E.; Rötter, Reimund P.; Schmid, Erwin; Semenov, Mikhail A.; Shcherbak, Iurii; Stehfest, Elke; Stöckle, Claudio O.; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Thorburn, Peter; Waha, Katharina; Wall, Gerard W.; Wang, Enli; White, Jeffrey W.; Wolf, Joost; Zhao, Zhigan; Zhu, Yan

    2016-12-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.
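The multi-method comparison above reduces to simple ensemble statistics over the per-method impact estimates. A minimal sketch follows; the three input numbers are hypothetical values chosen inside the reported 4.1-6.4% decline range, not the study's actual per-method outputs.

```python
def ensemble_summary(estimates):
    """Summarize per-method impact estimates (% global wheat yield
    change per 1 degree C warming) into an ensemble range and median.
    The spread between methods is a basic measure of 'method
    uncertainty' on top of model uncertainty."""
    ordered = sorted(estimates)
    n = len(ordered)
    if n % 2:
        median = ordered[n // 2]
    else:
        median = 0.5 * (ordered[n // 2 - 1] + ordered[n // 2])
    return min(ordered), max(ordered), median

# Hypothetical estimates: grid-based, point-based, statistical regression.
low, high, med = ensemble_summary([-4.1, -5.2, -6.4])
print(low, high, med)  # -6.4 -4.1 -5.2
```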

  4. 21 CFR 660.51 - Processing.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...) Processing method. (1) The processing method shall be one that has been shown to yield consistently a... be colored green. (3) Only that material which has been fully processed, thoroughly mixed in a single...

  5. A consistent transported PDF model for treating differential molecular diffusion

    NASA Astrophysics Data System (ADS)

    Wang, Haifeng; Zhang, Pei

    2016-11-01

    Differential molecular diffusion is a fundamentally significant phenomenon in multi-component turbulent reacting or non-reacting flows, caused by the different rates of molecular diffusion of energy and species concentrations. In the transported probability density function (PDF) method, differential molecular diffusion can be treated by using a mean drift model developed by McDermott and Pope. This model correctly accounts for the differential molecular diffusion in the scalar mean transport and yields a correct DNS limit of the scalar variance production. The model, however, misses the molecular diffusion term in the scalar variance transport equation, which yields an inconsistent prediction of the scalar variance in the transported PDF method. In this work, a new model is introduced that remedies this problem and yields a consistent scalar variance prediction. The model formulation along with its numerical implementation is discussed, and the model is validated in a turbulent mixing layer problem.

  6. Comparison of amino acid digestibility of feedstuffs determined with the precision-fed cecectomized rooster assay and the standardized ileal amino acid digestibility assay.

    PubMed

    Kim, E J; Utterback, P L; Applegate, T J; Parsons, C M

    2011-11-01

    The objective of this study was to evaluate and compare amino acid digestibility of several feedstuffs using 2 commonly accepted methods: the precision-fed cecectomized rooster assay (PFR) and the standardized ileal amino acid digestibility assay (SIAAD). Six corn samples, 6 corn distillers dried grains with or without solubles (DDGS/DDG), 1 wet distillers grains, 1 condensed solubles, 2 meat and bone meals (MBM), and a poultry byproduct meal were evaluated. Due to insufficient amounts, the wet distillers grains and condensed solubles were only evaluated in roosters. Standardized amino acid digestibility varied among the feed ingredients and among samples of the same ingredient for both methods. For corn, there were generally no differences in amino acid digestibility between the 2 methods. When differences did occur, there was no consistent pattern among the individual amino acids and methods. Standardized amino acid digestibility was not different between the 2 methods for 4 of the DDG samples; however, the PFR yielded higher digestibility values for a high-protein DDG and a conventionally processed DDGS. The PFR yielded higher amino acid digestibility values than the SIAAD for several amino acids in 1 MBM and the poultry byproduct meal, but it yielded lower digestibility values for the other MBM. Overall, there were no consistent differences between methods for amino acid digestibility values. In conclusion, both the PFR and SIAAD methods are acceptable for determining amino acid digestibility. However, these procedures do not always yield similar results for all feedstuffs evaluated. Thus, further studies are needed to understand the underlying causes of this variability.

  7. Residual effects of applied chemical fertilisers on growth and seed yields of sunflower (Helianthus annuus cv. high sun 33) after the harvests of initial main crops of maize (Zea mays L.), soybean (Glycine max L.) and sunflower (Helianthus annuus).

    PubMed

    Srisa-ard, K

    2007-03-15

    The experiments were conducted at two locations: the first on a grower's upland area in Saraburi Province, in the Central Plain region of Thailand, on the Chatturat soil series (Typic Haplustalfs, fine, mixed), and the second at the Suranaree Technology University Experimental Farm in Northeast Thailand, on the Korat soil series (Oxic Paleustults). The experiments aimed to investigate the residual effects of applied chemical fertilisers on growth and seed yields of sunflower (Helianthus annuus) after the harvests of initial main crops of maize, soybean and sunflower. The experiments consisted of four cultural methods practiced by growers in both regions. Methods 1 and 2 each had four fertiliser treatments; Method 3 consisted of two fertiliser treatments; and Method 4 served as the control. The results showed that the soil pH, organic matter and nutrient status of the Korat soil series were the most suitable conditions for growth of sunflower plants, whereas the Chatturat soil series at Saraburi Province was an alkaline soil with a mean pH of 7.8. The Chatturat soil series, in most cases, gave higher seed yields (1,943.75 kg ha(-1)) than the Korat soil series. Residual effects of chemical fertilisers applied to a main crop of soybean gave the best growth and seed yields of the subsequent sunflower crop and are considered the first choice. Sunflower and maize as main crops were the second choice for a subsequent crop of sunflower.

  8. A network-based approach for semi-quantitative knowledge mining and its application to yield variability

    NASA Astrophysics Data System (ADS)

    Schauberger, Bernhard; Rolinski, Susanne; Müller, Christoph

    2016-12-01

    Variability of crop yields is detrimental for food security. Under climate change its amplitude is likely to increase, so it is essential to understand the underlying causes and mechanisms. Crop models are the primary tool to project future changes in crop yields under climate change. A systematic overview of drivers and mechanisms of crop yield variability (YV) can thus inform crop model development and facilitate improved understanding of climate change impacts on crop yields. Yet there is a vast body of literature on crop physiology and YV, which makes a prioritization of mechanisms for implementation in models challenging. Therefore, this paper takes a novel approach to systematically mine and organize existing knowledge from the literature. The aim is to identify important mechanisms lacking in models, which can help to set priorities in model improvement. We structure knowledge from the literature in a semi-quantitative network. This network consists of complex interactions between growing conditions, plant physiology and crop yield. We utilize the resulting network structure to assign relative importance to causes of YV and related plant physiological processes. As expected, our findings confirm existing knowledge, in particular on the dominant role of temperature and precipitation, but they also highlight other important drivers of YV. More importantly, our method allows for identifying the relevant physiological processes that transmit variability in growing conditions to variability in yield. We can identify explicit targets for the improvement of crop models. The network can additionally guide model development by outlining complex interactions between processes and by easily retrieving quantitative information for each of the 350 interactions. We show the validity of our network method as a structured, consistent and scalable dictionary of the literature. The method can easily be applied to many other research fields.
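The importance ranking described above can be sketched as path counting in a directed network from growing conditions, through physiological processes, to yield. The toy graph and the path-count importance proxy below are illustrative assumptions, not the paper's actual 350-interaction network or its scoring method.

```python
def rank_drivers(edges, target="yield"):
    """Rank driver nodes (sources with no incoming edges) by the number
    of directed paths that reach the target node. This is a crude
    stand-in for a semi-quantitative importance measure on a
    driver -> process -> yield network (assumed acyclic)."""
    adjacency = {}
    for src, dst in edges:
        adjacency.setdefault(src, []).append(dst)

    def count_paths(node):
        # Each complete route to the target counts once.
        if node == target:
            return 1
        return sum(count_paths(nxt) for nxt in adjacency.get(node, []))

    drivers = {src for src, _ in edges} - {dst for _, dst in edges}
    return {driver: count_paths(driver) for driver in drivers}

# Toy network: growing conditions -> physiological processes -> yield.
edges = [
    ("temperature", "photosynthesis"), ("temperature", "grain_filling"),
    ("precipitation", "photosynthesis"),
    ("photosynthesis", "yield"), ("grain_filling", "yield"),
]
print(rank_drivers(edges))
```

Here temperature outranks precipitation because it influences yield through two processes rather than one.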

  9. Adjusting site index and age to account for genetic effects in yield equations for loblolly pine

    Treesearch

    Steven A. Knowe; G. Sam Foster

    2010-01-01

    Nine combinations of site index curves and age adjustment methods were evaluated for incorporating genetic effects for open-pollinated loblolly pine (Pinus taeda L.) families. An explicit yield system consisting of dominant height, basal area, and merchantable green weight functions was used to compare the accuracy of predictions associated with...

  10. Specific yield: compilation of specific yields for various materials

    USGS Publications Warehouse

    Johnson, A.I.

    1967-01-01

    Specific yield is defined as the ratio of (1) the volume of water that a saturated rock or soil will yield by gravity to (2) the total volume of the rock or soil. Specific yield is usually expressed as a percentage. The value is not definitive, because the quantity of water that will drain by gravity depends on variables such as duration of drainage, temperature, mineral composition of the water, and various physical characteristics of the rock or soil under consideration. Values of specific yield nevertheless offer a convenient means by which hydrologists can estimate the water-yielding capacities of earth materials and, as such, are very useful in hydrologic studies. The present report consists mostly of direct or modified quotations from many selected reports that present and evaluate methods for determining specific yield, limitations of those methods, and results of the determinations made on a wide variety of rock and soil materials. Although no particular values are recommended in this report, a table summarizes values of specific yield, and their averages, determined for 10 rock textures. The following is an abstract of the table.
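The defining ratio above is a one-line computation; a minimal sketch, with a hypothetical sample (the volumes are illustrative, not values from the report):

```python
def specific_yield_percent(drained_volume, total_volume):
    """Specific yield: the volume of water a saturated rock or soil
    yields by gravity, expressed as a percentage of the total volume
    of the rock or soil. Both volumes must use the same units."""
    if total_volume <= 0:
        raise ValueError("total volume must be positive")
    return 100.0 * drained_volume / total_volume

# Hypothetical sample: 0.25 m^3 of water drains from 1.0 m^3 of material.
print(specific_yield_percent(0.25, 1.0))  # 25.0
```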

  11. The Second Report on the State of the World's Animal Genetic Resources for Food and Agriculture, Part 4, The State of the Art: Box 4A4: A digital enumeration method for collecting phenotypic data for genome association

    USDA-ARS?s Scientific Manuscript database

    Consistent data across animal populations are required to inform genomic science aimed at finding important adaptive genetic variations. The ADAPTMap Digital Phenotype Collection-Prototype Method will yield a new procedure to provide consistent phenotypic data by digital enumeration of categorical ...

  12. The Safe Yield and Climatic Variability: Implications for Groundwater Management.

    PubMed

    Loáiciga, Hugo A

    2017-05-01

    Methods for calculating the safe yield are evaluated in this paper using a high-quality and long historical data set of groundwater recharge, discharge, extraction, and precipitation in a karst aquifer. Consideration is given to the role that climatic variability has on the determination of a climatically representative period with which to evaluate the safe yield. The methods employed to estimate the safe yield are consistent with its definition as a long-term average extraction rate that avoids adverse impacts on groundwater. The safe yield is a useful baseline for groundwater planning; yet, it is herein shown that it is not an operational rule that works well under all climatic conditions. This paper shows that due to the nature of dynamic groundwater processes it may be most appropriate to use an adaptive groundwater management strategy that links groundwater extraction rates to groundwater discharge rates, thus achieving a safe yield that represents an estimated long-term sustainable yield. An example of the calculation of the safe yield of the Edwards Aquifer (Texas) demonstrates that it is about one-half of the average annual recharge. © 2016, National Ground Water Association.
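The contrast drawn above, between a fixed safe-yield baseline and an adaptive rule tied to discharge, can be sketched as follows. The one-half fraction on the adaptive rule and all input numbers are illustrative assumptions; only the Edwards Aquifer result (safe yield of about one-half of average annual recharge) comes from the abstract.

```python
def safe_yield(mean_annual_recharge, fraction=0.5):
    """Safe yield as a fixed long-term average extraction rate. The
    default one-half fraction echoes the Edwards Aquifer example;
    units are arbitrary (e.g. thousand acre-feet per year)."""
    return fraction * mean_annual_recharge

def adaptive_extraction(observed_discharge, fraction=0.5):
    """Adaptive management rule sketched in the abstract: link the
    extraction rate to currently observed groundwater discharge
    rather than to a fixed climatic baseline."""
    return fraction * observed_discharge

print(safe_yield(700.0))           # 350.0
print(adaptive_extraction(420.0))  # 210.0
```

In a dry year the adaptive rule automatically cuts extraction as discharge falls, which the fixed baseline cannot do.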

  13. Multi-approach assessment of the spatial distribution of the specific yield: application to the Crau plain aquifer, France

    NASA Astrophysics Data System (ADS)

    Seraphin, Pierre; Gonçalvès, Julio; Vallet-Coulomb, Christine; Champollion, Cédric

    2018-06-01

    Spatially distributed values of the specific yield, a fundamental parameter for transient groundwater mass balance calculations, were obtained by means of three independent methods for the Crau plain, France. In contrast to its traditional use to assess recharge based on a given specific yield, the water-table fluctuation (WTF) method, applied using major recharging events, gave a first set of reference values. Then, large infiltration processes recorded by monitored boreholes and caused by major precipitation events were interpreted in terms of specific yield by means of a one-dimensional vertical numerical model solving Richards' equations within the unsaturated zone. Finally, two gravity field campaigns, at low and high piezometric levels, were carried out to assess the groundwater mass variation and thus alternative specific yield values. The range obtained by the WTF method for this aquifer made of alluvial detrital material was 2.9–26%, in line with the scarce data available so far. The average spatial value of specific yield by the WTF method (9.1%) is consistent with the aquifer-scale value from the hydro-gravimetric approach. In this investigation, an estimate of the hitherto unknown spatial distribution of the specific yield over the Crau plain was obtained using the most reliable method (the WTF method). A groundwater mass balance calculation over the domain using this distribution yielded similar results to an independent quantification based on a stable isotope-mixing model. This agreement reinforces the relevance of such estimates, which can be used to build a more accurate transient hydrogeological model.
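Inverting the water-table fluctuation method for the specific yield, as described above, is a one-line calculation: the specific yield is the recharge depth divided by the observed water-table rise. The event values below are hypothetical, chosen only to land near the 9.1% spatial average reported for the Crau plain.

```python
def specific_yield_wtf(recharge_mm, water_table_rise_mm):
    """Water-table fluctuation (WTF) method inverted for specific
    yield: Sy = recharge depth / water-table rise, with both depths
    in the same length units."""
    if water_table_rise_mm <= 0:
        raise ValueError("water-table rise must be positive")
    return recharge_mm / water_table_rise_mm

# Hypothetical event: 90 mm of recharge raises the water table 1000 mm.
print(100.0 * specific_yield_wtf(90.0, 1000.0))  # 9.0 (percent)
```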

  14. Multi-approach assessment of the spatial distribution of the specific yield: application to the Crau plain aquifer, France

    NASA Astrophysics Data System (ADS)

    Seraphin, Pierre; Gonçalvès, Julio; Vallet-Coulomb, Christine; Champollion, Cédric

    2018-03-01

    Spatially distributed values of the specific yield, a fundamental parameter for transient groundwater mass balance calculations, were obtained by means of three independent methods for the Crau plain, France. In contrast to its traditional use to assess recharge based on a given specific yield, the water-table fluctuation (WTF) method, applied using major recharging events, gave a first set of reference values. Then, large infiltration processes recorded by monitored boreholes and caused by major precipitation events were interpreted in terms of specific yield by means of a one-dimensional vertical numerical model solving Richards' equations within the unsaturated zone. Finally, two gravity field campaigns, at low and high piezometric levels, were carried out to assess the groundwater mass variation and thus alternative specific yield values. The range obtained by the WTF method for this aquifer made of alluvial detrital material was 2.9–26%, in line with the scarce data available so far. The average spatial value of specific yield by the WTF method (9.1%) is consistent with the aquifer-scale value from the hydro-gravimetric approach. In this investigation, an estimate of the hitherto unknown spatial distribution of the specific yield over the Crau plain was obtained using the most reliable method (the WTF method). A groundwater mass balance calculation over the domain using this distribution yielded similar results to an independent quantification based on a stable isotope-mixing model. This agreement reinforces the relevance of such estimates, which can be used to build a more accurate transient hydrogeological model.

  15. Effects of fission yield data in the calculation of antineutrino spectra for 235U(n, fission) at thermal and fast neutron energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonzogni, A. A.; McCutchan, E. A.; Johnson, T. D.

    Fission yields form an integral part of the prediction of antineutrino spectra generated by nuclear reactors, but little attention has been paid to the quality and reliability of the data used in current calculations. Following a critical review of the thermal and fast ENDF/B-VII.1 235U fission yields, deficiencies are identified and improved yields are obtained, based on corrections of erroneous yields, consistency between decay and fission yield data, and updated isomeric ratios. These corrected yields are used to calculate antineutrino spectra using the summation method. An anomalous value for the thermal fission yield of 86Ge generates an excess of antineutrinos at 5–7 MeV, a feature which is no longer present when the corrected yields are used. Thermal spectra calculated with two distinct fission yield libraries (corrected ENDF/B and JEFF) differ by up to 6% in the 0–7 MeV energy window, allowing for a basic estimate of the uncertainty involved in the fission yield component of summation calculations. Lastly, the fast neutron antineutrino spectrum is calculated, which at the moment can only be obtained with the summation method and may be relevant for short baseline reactor experiments using highly enriched uranium fuel.

  16. Sea ice classification using fast learning neural networks

    NASA Technical Reports Server (NTRS)

    Dawson, M. S.; Fung, A. K.; Manry, M. T.

    1992-01-01

    A fast learning (FL) neural network approach to the classification of sea ice is presented. The FL neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) were tested on simulated data sets based on the known dominant scattering characteristics of the target classes. Four classes were used in the data simulation: open water, thick lossy saline ice, thin saline ice, and multiyear ice. The BP network was unable to consistently converge to less than 25 percent error, while the FL method yielded an average error of approximately 1 percent on the first iteration of training. The fast learning method presented can significantly reduce the CPU time necessary to train a neural network as well as consistently yield higher classification accuracy than BP networks.

  17. A Comparison of Imputation Methods for Bayesian Factor Analysis Models

    ERIC Educational Resources Information Center

    Merkle, Edgar C.

    2011-01-01

    Imputation methods are popular for the handling of missing data in psychology. The methods generally consist of predicting missing data based on observed data, yielding a complete data set that is amenable to standard statistical analyses. In the context of Bayesian factor analysis, this article compares imputation under an unrestricted…

  18. Effects of fission yield data in the calculation of antineutrino spectra for 235U(n, fission) at thermal and fast neutron energies

    DOE PAGES

    Sonzogni, A. A.; McCutchan, E. A.; Johnson, T. D.; ...

    2016-04-01

    Fission yields form an integral part of the prediction of antineutrino spectra generated by nuclear reactors, but little attention has been paid to the quality and reliability of the data used in current calculations. Following a critical review of the thermal and fast ENDF/B-VII.1 235U fission yields, deficiencies are identified and improved yields are obtained, based on corrections of erroneous yields, consistency between decay and fission yield data, and updated isomeric ratios. These corrected yields are used to calculate antineutrino spectra using the summation method. An anomalous value for the thermal fission yield of 86Ge generates an excess of antineutrinos at 5–7 MeV, a feature which is no longer present when the corrected yields are used. Thermal spectra calculated with two distinct fission yield libraries (corrected ENDF/B and JEFF) differ by up to 6% in the 0–7 MeV energy window, allowing for a basic estimate of the uncertainty involved in the fission yield component of summation calculations. Lastly, the fast neutron antineutrino spectrum is calculated, which at the moment can only be obtained with the summation method and may be relevant for short baseline reactor experiments using highly enriched uranium fuel.
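The summation method referred to above amounts to a fission-yield-weighted sum of individual precursor spectra. A minimal sketch with a made-up two-nuclide inventory follows; real calculations use thousands of fission products, evaluated cumulative yields, and evaluated beta-decay spectra, none of which are represented by these toy numbers.

```python
def summation_spectrum(fission_yields, precursor_spectra):
    """Summation method sketch: the total antineutrino spectrum is the
    cumulative-fission-yield-weighted sum of the per-precursor spectra,
    evaluated bin by bin over a common energy grid."""
    n_bins = len(next(iter(precursor_spectra.values())))
    total = [0.0] * n_bins
    for nuclide, cumulative_yield in fission_yields.items():
        spectrum = precursor_spectra[nuclide]
        for i in range(n_bins):
            total[i] += cumulative_yield * spectrum[i]
    return total

# Toy inventory: two hypothetical precursors "A" and "B", two energy bins.
yields = {"A": 0.06, "B": 0.02}
spectra = {"A": [1.0, 0.5], "B": [2.0, 1.0]}
print(summation_spectrum(yields, spectra))
```

An erroneous yield for a single precursor (like the anomalous 86Ge value) propagates linearly into every bin where that precursor's spectrum contributes, which is why the 5-7 MeV excess disappears once the yield is corrected.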

  19. Swab Protocol for Rapid Laboratory Diagnosis of Cutaneous Anthrax

    PubMed Central

    Marston, Chung K.; Bhullar, Vinod; Baker, Daniel; Rahman, Mahmudur; Hossain, M. Jahangir; Chakraborty, Apurba; Khan, Salah Uddin; Hoffmaster, Alex R.

    2012-01-01

    The clinical laboratory diagnosis of cutaneous anthrax is generally established by conventional microbiological methods, such as culture and directly staining smears of clinical specimens. However, these methods rely on recovery of viable Bacillus anthracis cells from swabs of cutaneous lesions and often yield negative results. This study developed a rapid protocol for detection of B. anthracis on clinical swabs. Three types of swabs, flocked-nylon, rayon, and polyester, were evaluated by 3 extraction methods, the swab extraction tube system (SETS), sonication, and vortex. Swabs were spiked with virulent B. anthracis cells, and the methods were compared for their efficiency over time by culture and real-time PCR. Viability testing indicated that the SETS yielded greater recovery of B. anthracis from 1-day-old swabs; however, reduced viability was consistent for the 3 extraction methods after 7 days and nonviability was consistent by 28 days. Real-time PCR analysis showed that the PCR amplification was not impacted by time for any swab extraction method and that the SETS method provided the lowest limit of detection. When evaluated using lesion swabs from cutaneous anthrax outbreaks, the SETS yielded culture-negative, PCR-positive results. This study demonstrated that swab extraction methods differ in their efficiency of recovery of viable B. anthracis cells. Furthermore, the results indicated that culture is not reliable for isolation of B. anthracis from swabs at ≥7 days. Thus, we recommend the use of the SETS method with subsequent testing by culture and real-time PCR for diagnosis of cutaneous anthrax from clinical swabs of cutaneous lesions. PMID:23035192

  20. Evaluation of the CEAS trend and monthly weather data models for soybean yields in Iowa, Illinois, and Indiana

    NASA Technical Reports Server (NTRS)

    French, V. (Principal Investigator)

    1982-01-01

    The CEAS models evaluated use historic trend and meteorological and agroclimatic variables to forecast soybean yields in Iowa, Illinois, and Indiana. Indicators of yield reliability and current measures of modeled yield reliability were obtained from bootstrap tests on the end of season models. Indicators of yield reliability show that the state models are consistently better than the crop reporting district (CRD) models. One CRD model is especially poor. At the state level, the bias of each model is less than one half quintal/hectare. The standard deviation is between one and two quintals/hectare. The models are adequate in terms of coverage and are to a certain extent consistent with scientific knowledge. Timely yield estimates can be made during the growing season using truncated models. The models are easy to understand and use and are not costly to operate. Other than the specification of values used to determine evapotranspiration, the models are objective. Because the method of variable selection used in the model development is adequately documented, no evaluation can be made of the objectivity and cost of redevelopment of the model.
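The bootstrap reliability testing mentioned above can be sketched by resampling model residuals and summarizing the resampled bias. The residual values and sample sizes below are hypothetical illustrations, not CEAS model output; the abstract's actual reliability indicators and units (quintals/hectare) are only echoed in the comments.

```python
import random
import statistics

def bootstrap_bias_sd(residuals, n_boot=2000, seed=1):
    """Bootstrap sketch of end-of-season model reliability: resample
    yield-model residuals (observed minus predicted, in q/ha) with
    replacement, and report the mean resampled bias and its standard
    deviation across bootstrap replicates."""
    rng = random.Random(seed)
    boot_means = []
    for _ in range(n_boot):
        sample = [rng.choice(residuals) for _ in residuals]
        boot_means.append(statistics.fmean(sample))
    return statistics.fmean(boot_means), statistics.stdev(boot_means)

# Hypothetical state-level residuals (q/ha) over eight seasons.
residuals = [0.3, -0.4, 0.2, -0.1, 0.5, -0.3, 0.1, -0.2]
bias, sd = bootstrap_bias_sd(residuals)
print(round(bias, 2), round(sd, 2))
```

A model meeting the abstract's benchmark would show a bootstrap bias under 0.5 q/ha, as this toy series does.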

  1. A Comparison of DNA Extraction Methods using Petunia hybrida Tissues

    PubMed Central

    Tamari, Farshad; Hinkley, Craig S.; Ramprashad, Naderia

    2013-01-01

    Extraction of DNA from plant tissue is often problematic, as many plants contain high levels of secondary metabolites that can interfere with downstream applications, such as the PCR. Removal of these secondary metabolites usually requires further purification of the DNA using organic solvents or other toxic substances. In this study, we have compared two methods of DNA purification: the cetyltrimethylammonium bromide (CTAB) method that uses the ionic detergent hexadecyltrimethylammonium bromide and chloroform-isoamyl alcohol and the Edwards method that uses the anionic detergent SDS and isopropyl alcohol. Our results show that the Edwards method works better than the CTAB method for extracting DNA from tissues of Petunia hybrida. For six of the eight tissues, the Edwards method yielded more DNA than the CTAB method. In four of the tissues, this difference was statistically significant, and the Edwards method yielded 27–80% more DNA than the CTAB method. Among the different tissues tested, we found that buds, 4 days before anthesis, had the highest DNA concentrations and that buds and reproductive tissue, in general, yielded higher DNA concentrations than other tissues. In addition, DNA extracted using the Edwards method was more consistently PCR-amplified than that of CTAB-extracted DNA. Based on these results, we recommend using the Edwards method to extract DNA from plant tissues and to use buds and reproductive structures for highest DNA yields. PMID:23997658

  2. Laboratory Measurements of Photolytic Parameters for Formaldehyde.

    DTIC Science & Technology

    1980-11-01

    dynamic dilution methods. Compressed air stored in steel cylinders, carefully selected to contain carbon monoxide and hydrogen at mixing ratios of...in air has been investigated in the laboratory at two temperatures: 300 and 220 K. Quantum yields for the formation of CO and H2 were determined at...procedures in the case of pure formaldehyde gave consistent results. (b) Quantum Yields Mixtures of formaldehyde in air were photolyzed in a

  3. Investigation of Inconsistent ENDF/B-VII.1 Independent and Cumulative Fission Product Yields with Proposed Revisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pigni, M.T., E-mail: pignimt@ornl.gov; Francis, M.W.; Gauld, I.C.

    A recent implementation of ENDF/B-VII.1 independent fission product yields and nuclear decay data identified inconsistencies in the data caused by the use of updated nuclear schemes in the decay sub-library that are not reflected in legacy fission product yield data. Recent changes in the decay data sub-library, particularly the delayed neutron branching fractions, result in calculated fission product concentrations that do not agree with the cumulative fission yields in the library or with experimental measurements. To address these issues, a comprehensive set of independent fission product yields was generated for thermal and fission spectrum neutron-induced fission of 235,238U and 239,241Pu in order to provide a preliminary assessment of the updated fission product yield data consistency. These updated independent fission product yields were utilized in the ORIGEN code to compare the calculated fission product inventories with experimentally measured inventories, with particular attention given to the noble gases. Another important outcome of this work is the development of fission product yield covariance data necessary for fission product uncertainty quantification. The evaluation methodology combines a sequential Bayesian method to guarantee consistency between independent and cumulative yields along with the physical constraints on the independent yields. This work was motivated by the need to improve the performance of the ENDF/B-VII.1 library for stable and long-lived fission products. The revised fission product yields and the new covariance data are proposed as a revision to the fission yield data currently in ENDF/B-VII.1.

  4. [Contrast of Z-Pinch X-Ray Yield Measure Technique].

    PubMed

    Li, Mo; Wang, Liang-ping; Sheng, Liang; Lu, Yi

    2015-03-01

The resistive bolometer and the scintillant detection system are the two main techniques for measuring Z-pinch X-ray yield, and they rest on different diagnostic principles. Comparing results from the two methods can help increase the precision of X-ray yield measurement. Experiments with loads of different materials and shapes were carried out on the "QiangGuang-I" facility. For Al wire arrays, the X-ray yields measured by the two techniques were largely consistent. However, for insulating-coated W wire arrays, the X-ray yields from the bolometer changed with load parameters while the data from the scintillant detection system hardly changed. Simulation and analysis lead to the following conclusions: (1) The scintillant detection system is much more sensitive to low-energy X-ray photons and its spectral response is wider than that of the resistive bolometer; thus, results from the former are always larger than those from the latter. (2) The responses of the two systems are both flat to Al plasma radiation, so their results are consistent for Al wire array loads. (3) Radiation from planar W wire arrays is mainly composed of sub-keV soft X-rays. X-ray yields measured by the bolometer are expected to be accurate because the nickel foil absorbs almost all of the soft X-rays. (4) By contrast, for planar W wire arrays, the data from the scintillant detection system hardly change with load parameters. A possible explanation is that as the distance between wires increases, the plasma temperature at stagnation decreases and the spectrum moves toward the soft X-ray region. The scintillator is much more sensitive to soft X-rays below 200 eV; thus, although the total X-ray yield decreases with a large-diameter load, the signal from the scintillant detection system remains almost the same. (5) Both techniques are affected by electron beams produced by the loads.

  5. The long-term strength of Europe and its implications for plate-forming processes.

    PubMed

    Pérez-Gussinyé, M; Watts, A B

    2005-07-21

Field-based geological studies show that continental deformation preferentially occurs in young tectonic provinces rather than in old cratons. This partitioning of deformation suggests that the cratons are stronger than surrounding younger Phanerozoic provinces. However, although Archaean and Phanerozoic lithosphere differ in their thickness and composition, their relative strength is a matter of much debate. One proxy of strength is the effective elastic thickness of the lithosphere, Te. Unfortunately, spatial variations in Te are not well understood, as different methods yield different results. The differences are most apparent in cratons, where the 'Bouguer coherence' method yields large Te values (> 60 km) whereas the 'free-air admittance' method yields low values (< 25 km). Here we present estimates of the variability of Te in Europe using both methods. We show that when they are consistently formulated, both methods yield comparable Te values that correlate with geology, and that the strength of old lithosphere (≥ 1.5 Gyr old) is much larger (mean Te > 60 km) than that of younger lithosphere (mean Te < 30 km). We propose that this strength difference reflects changes in lithospheric plate structure (thickness, geothermal gradient and composition) that result from mantle temperature and volatile content decrease through Earth's history.
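For context, the effective elastic thickness Te discussed above enters flexural models through the standard thin-elastic-plate relation between Te and the flexural rigidity D (with E Young's modulus and ν Poisson's ratio); this relation is textbook background, not a formula from the paper itself:

```latex
D = \frac{E \, T_e^{3}}{12\,(1-\nu^{2})}
```

Because resistance to bending grows with the cube of Te, the contrast between mean Te > 60 km and mean Te < 30 km corresponds to roughly an order of magnitude difference in rigidity.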

  6. A bivariate contaminated binormal model for robust fitting of proper ROC curves to a pair of correlated, possibly degenerate, ROC datasets.

    PubMed

    Zhai, Xuetong; Chakraborty, Dev P

    2017-06-01

The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets, possibly degenerate, with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit variance peaks, one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit variances bivariate normal distribution centered at (0,0) with a specified correlation ρ₁; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition, contributing two peaks, and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates and the covariance matrix of the parameters, and other statistics. 
A limited simulation validation of the method was performed. CORCBM and CORROC2 were applied to two datasets containing nine readers each contributing paired interpretations. CORCBM successfully fitted the data for all readers, whereas CORROC2 failed to fit a degenerate dataset. All fits were visually reasonable. All CORCBM fits were proper, whereas all CORROC2 fits were improper. CORCBM and CORROC2 were in agreement (a) in declaring only one of the nine readers as having significantly different performances in the two modalities; (b) in estimating higher correlations for diseased cases than for nondiseased ones; and (c) in finding that the intermodality correlation estimates for nondiseased cases were consistent between the two methods. All CORCBM fits yielded higher area under curve (AUC) than the CORROC2 fits, consistent with the fact that a proper ROC model like CORCBM is based on a likelihood-ratio-equivalent decision variable, and consequently yields higher performance than the binormal model-based CORROC2. The method gave satisfactory fits to four simulated datasets. CORCBM is a robust method for fitting paired ROC datasets, always yielding proper ROC curves, and able to fit degenerate datasets. © 2017 American Association of Physicists in Medicine.

  7. Investigation of inconsistent ENDF/B-VII.1 independent and cumulative fission product yields with proposed revisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pigni, Marco T; Francis, Matthew W; Gauld, Ian C

A recent implementation of ENDF/B-VII.1 independent fission product yields and nuclear decay data identified inconsistencies in the data caused by the use of updated nuclear schemes in the decay sub-library that are not reflected in legacy fission product yield data. Recent changes in the decay data sub-library, particularly the delayed neutron branching fractions, result in calculated fission product concentrations that are incompatible with the cumulative fission yields in the library, and also with experimental measurements. A comprehensive set of independent fission product yields was generated for thermal and fission-spectrum neutron-induced fission of ²³⁵,²³⁸U and ²³⁹,²⁴¹Pu in order to provide a preliminary assessment of the updated fission product yield data consistency. These updated independent fission product yields were utilized in the ORIGEN code to compare the calculated fission product inventories with experimentally measured inventories, with particular attention given to the noble gases. An important outcome of this work is the development of fission product yield covariance data necessary for fission product uncertainty quantification. The evaluation methodology combines a sequential Bayesian method, which guarantees consistency between independent and cumulative yields, with the physical constraints on the independent yields. This work was motivated by the inconsistency between the ENDF/B-VII.1 fission product yield and decay data sub-libraries, and aims to improve the performance of the library for stable and long-lived fission products. The revised fission product yields and the new covariance data are proposed as a revision to the fission yield data currently in ENDF/B-VII.1.

  8. Self consistency grouping: a stringent clustering method

    PubMed Central

    2012-01-01

    Background Numerous types of clustering like single linkage and K-means have been widely studied and applied to a variety of scientific problems. However, the existing methods are not readily applicable for the problems that demand high stringency. Methods Our method, self consistency grouping, i.e. SCG, yields clusters whose members are closer in rank to each other than to any member outside the cluster. We do not define a distance metric; we use the best known distance metric and presume that it measures the correct distance. SCG does not impose any restriction on the size or the number of the clusters that it finds. The boundaries of clusters are determined by the inconsistencies in the ranks. In addition to the direct implementation that finds the complete structure of the (sub)clusters we implemented two faster versions. The fastest version is guaranteed to find only the clusters that are not subclusters of any other clusters and the other version yields the same output as the direct implementation but does so more efficiently. Results Our tests have demonstrated that SCG yields very few false positives. This was accomplished by introducing errors in the distance measurement. Clustering of protein domain representatives by structural similarity showed that SCG could recover homologous groups with high precision. Conclusions SCG has potential for finding biological relationships under stringent conditions. PMID:23320864
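The rank-based criterion above can be made concrete with a short sketch: a set is a valid SCG cluster when, from every member's point of view, all fellow members outrank every outside item in the distance ordering. The function and toy data below are our own illustration, not the published implementation (which also enumerates nested subclusters and provides faster variants).

```python
# Minimal check of the SCG self-consistency criterion described above.
# `cluster` must have at least two members and `items` must contain
# at least one item outside it.

def is_self_consistent(cluster, items, dist):
    """True if every member ranks all fellow members above all outsiders."""
    cluster = set(cluster)
    outside = set(items) - cluster
    if len(cluster) < 2 or not outside:
        return True
    for a in cluster:
        # rank all other items by their distance from a
        ranked = sorted((x for x in items if x != a), key=lambda x: dist(a, x))
        worst_inside = max(ranked.index(b) for b in cluster if b != a)
        best_outside = min(ranked.index(o) for o in outside)
        if worst_inside > best_outside:
            return False  # some outsider outranks a fellow member
    return True

# 1-D toy data: two well-separated groups.
pts = {1: 0.0, 2: 0.1, 3: 0.2, 4: 5.0, 5: 5.1}
d = lambda a, b: abs(pts[a] - pts[b])
print(is_self_consistent([1, 2, 3], pts, d))  # True
print(is_self_consistent([1, 2, 4], pts, d))  # False
```

Because the boundary is set by rank inconsistencies rather than a distance threshold, no cluster size or count needs to be fixed in advance, matching the abstract's claim.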

  9. Oxidation of Benzoin to Benzil Using Alumina-Supported Active MnO2

    NASA Astrophysics Data System (ADS)

    Crouch, R. David; Holden, Michael S.; Burger, Jennifer S.

    2001-07-01

    The use of alumina-supported active MnO2 to oxidize benzoin to benzil is described. The advantages of this reagent include ease of handling and separation from the product and lower toxicity than previously reported supported oxidizing agents. The product is purified by elution through a simple chromatography column consisting of a silica gel-packed Pasteur pipet. Students' yields are comparable to yields from other reported oxidation methods.

  10. Temperature Increase Reduces Global Yields of Major Crops in Four Independent Estimates

    NASA Technical Reports Server (NTRS)

    Zhao, Chuang; Liu, Bing; Piao, Shilong; Wang, Xuhui; Lobell, David B.; Huang, Yao; Huang, Mengtian; Yao, Yitong; Bassu, Simona; Ciais, Philippe; hide

    2017-01-01

    Wheat, rice, maize, and soybean provide two-thirds of human caloric intake. Assessing the impact of global temperature increase on production of these crops is therefore critical to maintaining global food supply, but different studies have yielded different results. Here, we investigated the impacts of temperature on yields of the four crops by compiling extensive published results from four analytical methods: global grid-based and local point-based models, statistical regressions, and field-warming experiments. Results from the different methods consistently showed negative temperature impacts on crop yield at the global scale, generally underpinned by similar impacts at country and site scales. Without CO2 fertilization, effective adaptation, and genetic improvement, each degree-Celsius increase in global mean temperature would, on average, reduce global yields of wheat by 6.0%, rice by 3.2%, maize by 7.4%, and soybean by 3.1%. Results are highly heterogeneous across crops and geographical areas, with some positive impact estimates. Multi-method analyses improved the confidence in assessments of future climate impacts on global major crops and suggest crop- and region-specific adaptation strategies to ensure food security for an increasing world population.

  11. Effect of chemical and mechanical weed control on cassava yield, soil quality and erosion under cassava cropping system

    NASA Astrophysics Data System (ADS)

    Islami, Titiek; Wisnubroto, Erwin; Utomo, Wani

    2016-04-01

A three-year field experiment was conducted to study the effect of chemical and mechanical weed control on cassava yield, soil quality, and erosion under a cassava cropping system. The experiment was conducted at the Universitas Brawijaya field experimental station, Jatikerto, Malang, Indonesia, from 2011 to 2014. The treatments consisted of three cropping systems (cassava monoculture, cassava + maize intercropping, and cassava + peanut intercropping) and two weed control methods (chemical and mechanical). The results showed that cassava yield in the first and second years was not influenced by weed control method or cropping system. However, third-year cassava yield was influenced by both factors. The cassava yield in the cassava + maize intercropping system with chemical weed control was only 24 t/ha, lower than in the other treatments, including the same cropping system with mechanical weed control. The highest third-year cassava yield was obtained with the cassava + peanut cropping system and mechanical weed control. After three years, the soil of the cassava monoculture system with chemical weed control had the lowest soil organic matter content and soil aggregate stability. Over the three years of cropping, soil erosion under chemical weed control, especially in cassava monoculture, was higher than under mechanical weed control. The soil losses under chemical weed control were 40 t/ha, 44 t/ha, and 54 t/ha for the first, second, and third year crops; under mechanical weed control, the losses in the same years were 36 t/ha, 36 t/ha, and 38 t/ha. Key words: herbicide, intercropping, soil organic matter, aggregate stability.

  12. Comparison of the DNA extraction methods for polymerase chain reaction amplification from formalin-fixed and paraffin-embedded tissues.

    PubMed

    Sato, Y; Sugie, R; Tsuchiya, B; Kameya, T; Natori, M; Mukai, K

    2001-12-01

    To obtain an adequate quality and quantity of DNA from formalin-fixed and paraffin-embedded tissue, six different DNA extraction methods were compared. Four methods used deparaffinization by xylene followed by proteinase K digestion and phenol-chloroform extraction. The temperature of the different steps was changed to obtain higher yields and improved quality of extracted DNA. The remaining two methods used microwave heating for deparaffinization. The best DNA extraction method consisted of deparaffinization by microwave irradiation, protein digestion with proteinase K at 48 degrees C overnight, and no further purification steps. By this method, the highest DNA yield was obtained and the amplification of a 989-base pair beta-globin gene fragment was achieved. Furthermore, DNA extracted by means of this procedure from five gastric carcinomas was successfully used for single strand conformation polymorphism and direct sequencing assays of the beta-catenin gene. Because the microwave-based DNA extraction method presented here is simple, has a lower contamination risk, and results in a higher yield of DNA compared with the ordinary organic chemical reagent-based extraction method, it is considered applicable to various clinical and basic fields.

  13. Size consistent formulations of the perturb-then-diagonalize Møller-Plesset perturbation theory correction to non-orthogonal configuration interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yost, Shane R.; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu; Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720

    2016-08-07

In this paper we introduce two size consistent forms of the non-orthogonal configuration interaction with second-order Møller-Plesset perturbation theory method, NOCI-MP2. We show that the original NOCI-MP2 formulation [S. R. Yost, T. Kowalczyk, and T. Van Voorhis, J. Chem. Phys. 139, 174104 (2013)], which is a perturb-then-diagonalize multi-reference method, is not size consistent. We also show that this causes significant errors in large systems like the linear acenes. By contrast, the size consistent versions of the method give satisfactory results for singlet and triplet excited states when compared to other multi-reference methods that include dynamic correlation. For NOCI-MP2, however, the number of required determinants to yield similar levels of accuracy is significantly smaller. These results show the promise of the NOCI-MP2 method, though work still needs to be done in creating a more consistent black-box approach to computing the determinants that comprise the many-electron NOCI basis.

  14. Parallel tempering for the traveling salesman problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Percus, Allon; Wang, Richard; Hyman, Jeffrey

We explore the potential of parallel tempering as a combinatorial optimization method, applying it to the traveling salesman problem. We compare simulation results of parallel tempering with a benchmark implementation of simulated annealing, and study how different choices of parameters affect the relative performance of the two methods. We find that a straightforward implementation of parallel tempering can outperform simulated annealing in several crucial respects. When parameters are chosen appropriately, both methods yield close approximation to the actual minimum distance for an instance with 200 nodes. However, parallel tempering yields more consistently accurate results when a series of independent simulations are performed. Our results suggest that parallel tempering might offer a simple but powerful alternative to simulated annealing for combinatorial optimization problems.
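As a concrete sketch of the method studied above, here is a minimal parallel-tempering loop for the TSP: several replicas run Metropolis 2-opt moves at different temperatures and periodically attempt swaps between neighboring temperatures. The city layout, temperature ladder, and sweep count are illustrative choices of ours, not the paper's settings.

```python
# Minimal parallel-tempering sketch for the traveling salesman problem.
import math, random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour):
    """Reverse a random segment of the tour (the classic 2-opt move)."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def parallel_tempering(pts, temps, sweeps=2000):
    n = len(pts)
    replicas = [list(range(n)) for _ in temps]
    lengths = [tour_length(t, pts) for t in replicas]
    for _ in range(sweeps):
        # Metropolis 2-opt update at each temperature
        for k, T in enumerate(temps):
            cand = two_opt(replicas[k])
            dL = tour_length(cand, pts) - lengths[k]
            if dL < 0 or random.random() < math.exp(-dL / T):
                replicas[k], lengths[k] = cand, lengths[k] + dL
        # attempt swaps between neighboring temperatures
        for k in range(len(temps) - 1):
            d = (1 / temps[k] - 1 / temps[k + 1]) * (lengths[k] - lengths[k + 1])
            if d > 0 or random.random() < math.exp(d):
                replicas[k], replicas[k + 1] = replicas[k + 1], replicas[k]
                lengths[k], lengths[k + 1] = lengths[k + 1], lengths[k]
    best = min(range(len(temps)), key=lambda k: lengths[k])
    return replicas[best], lengths[best]

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(20)]
tour, length = parallel_tempering(cities, temps=[0.01, 0.05, 0.2, 1.0])
```

The swap criterion accepts with probability min(1, exp(Δβ·ΔE)), which is what lets cold replicas escape local minima via excursions through hotter chains.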

  15. Boosted Regression Trees Outperforms Support Vector Machines in Predicting (Regional) Yields of Winter Wheat from Single and Cumulated Dekadal Spot-VGT Derived Normalized Difference Vegetation Indices

    NASA Astrophysics Data System (ADS)

    Stas, Michiel; Dong, Qinghan; Heremans, Stien; Zhang, Beier; Van Orshoven, Jos

    2016-08-01

This paper compares two machine learning techniques for predicting regional winter wheat yields. The models, based on Boosted Regression Trees (BRT) and Support Vector Machines (SVM), are constructed from Normalized Difference Vegetation Indices (NDVI) derived from low-resolution SPOT VEGETATION satellite imagery. Three types of NDVI-related predictors were used: Single NDVI, Incremental NDVI, and Targeted NDVI. BRT and SVM were first used to select features with high relevance for predicting yield. Although the exact selections differed between the prefectures, certain periods with high influence scores for multiple prefectures could be identified; the same period of high influence, stretching from March to June, was detected by both machine learning methods. After feature selection, BRT and SVM models were applied to the subset of selected features for actual yield forecasting. Whereas both machine learning methods returned very low prediction errors, BRT seems to slightly but consistently outperform SVM.

  16. Prototype Procedures to Describe Army Jobs

    DTIC Science & Technology

    2010-07-01

ratings for the same MOS. Consistent with a multi-trait multi-method framework, high profile similarities (or low mean differences) among different ...rater types for the same MOS would indicate convergent validity. That is, different methods (i.e., rater types) yield converging results for the same... different methods of data collection depends upon the type of data collected. For example, it could be that data on work-oriented descriptors are most

  17. Ranging through Gabor logons-a consistent, hierarchical approach.

    PubMed

    Chang, C; Chatterjee, S

    1993-01-01

    In this work, the correspondence problem in stereo vision is handled by matching two sets of dense feature vectors. Inspired by biological evidence, these feature vectors are generated by a correlation between a bank of Gabor sensors and the intensity image. The sensors consist of two-dimensional Gabor filters at various scales (spatial frequencies) and orientations, which bear close resemblance to the receptive field profiles of simple V1 cells in visual cortex. A hierarchical, stochastic relaxation method is then used to obtain the dense stereo disparities. Unlike traditional hierarchical methods for stereo, feature based hierarchical processing yields consistent disparities. To avoid false matchings due to static occlusion, a dual matching, based on the imaging geometry, is used.
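The sensor bank described above can be sketched as follows: each Gabor sensor is a sinusoidal carrier under a Gaussian envelope, parameterized by scale (wavelength) and orientation. This is a pure-Python, even-symmetric (cosine) variant for illustration; the function names and parameter values are our own, and a real system would correlate such a bank with the intensity image at every pixel.

```python
# A single 2-D Gabor sensor: Gaussian envelope times a cosine carrier.
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Return a size x size real, even-symmetric Gabor kernel."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates to the filter orientation
            xr = x * math.cos(theta) + y * math.sin(theta)
            envelope = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            carrier = math.cos(2 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

# A small bank: 2 scales x 4 orientations, a simplified V1-style layout.
bank = [gabor_kernel(9, wl, k * math.pi / 4, sigma=2.5)
        for wl in (4, 8) for k in range(4)]
```

Correlating the image against every kernel in the bank yields, at each pixel, the dense feature vector that the matching stage compares between the two views.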

  18. Mechanical Properties of Elastomeric Impression Materials: An In Vitro Comparison

    PubMed Central

    De Angelis, Francesco; Caputi, Sergio; D'Amario, Maurizio; D'Arcangelo, Camillo

    2015-01-01

Purpose. Although new elastomeric impression materials have been introduced into the market, there are still insufficient data about their mechanical features. The tensile properties of 17 hydrophilic impression materials with different consistencies were compared. Materials and Methods. 12 vinylpolysiloxane, 2 polyether, and 3 hybrid vinylpolyether silicone-based impression materials were tested. For each material, 10 dumbbell-shaped specimens were fabricated (n = 10), according to the ISO 37:2005 specifications, and loaded in tension until failure. Mean values for tensile strength, yield strength, strain at break, and strain at yield point were calculated. Data were statistically analyzed using one-way ANOVA and Tukey's tests (α = 0.05). Results. Vinylpolysiloxanes consistently showed higher tensile strength values than polyethers. Heavy-body materials showed higher tensile strength than the light bodies from the same manufacturer. Among the light bodies, the highest yield strength was achieved by the hybrid vinylpolyether silicone (2.70 MPa). Polyethers showed the lowest tensile (1.44 MPa) and yield (0.94 MPa) strengths, regardless of the viscosity. Conclusion. The choice of an impression material should be based on the specific physical behavior of the elastomer. The light-body vinylpolyether silicone showed high tensile strength, yield strength, and adequate strain at yield/break; those features might help to reduce tearing phenomena in the thin interproximal and crevicular areas. PMID:26693227

  19. Efficient production of recombinant adeno-associated viral vector, serotype DJ/8, carrying the GFP gene.

    PubMed

    Hashimoto, Haruo; Mizushima, Tomoko; Chijiwa, Tsuyoshi; Nakamura, Masato; Suemizu, Hiroshi

    2017-06-15

    The purpose of this study was to establish an efficient method for the preparation of an adeno-associated viral (AAV), serotype DJ/8, carrying the GFP gene (AAV-DJ/8-GFP). We compared the yields of AAV-DJ/8 vector, which were produced by three different combination methods, consisting of two plasmid DNA transfection methods (lipofectamine and calcium phosphate co-precipitation; CaPi) and two virus DNA purification methods (iodixanol and cesium chloride; CsCl). The results showed that the highest yield of AAV-DJ/8-GFP vector was accomplished with the combination method of lipofectamine transfection and iodixanol purification. The viral protein expression levels and the transduction efficacy in HEK293 and CHO cells were not different among four different combination methods for AAV-DJ/8-GFP vectors. We confirmed that the AAV-DJ/8-GFP vector could transduce to human and murine hepatocyte-derived cell lines. These results show that AAV-DJ/8-GFP, purified by the combination of lipofectamine and iodixanol, produces an efficient yield without altering the characteristics of protein expression and AAV gene transduction. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Normalized Rotational Multiple Yield Surface Framework (NRMYSF) stress-strain curve prediction method based on small strain triaxial test data on undisturbed Auckland residual clay soils

    NASA Astrophysics Data System (ADS)

    Noor, M. J. Md; Ibrahim, A.; Rahman, A. S. A.

    2018-04-01

Small-strain triaxial measurement is considered significantly more accurate than external strain measurement using the conventional method, which is subject to systematic errors normally associated with the test. Three submersible miniature linear variable differential transducers (LVDTs) were mounted on yokes clamped directly onto the soil sample, spaced equally at 120° from one another. The setup, using a 0.4 N resolution load cell and a 16-bit AD converter, was capable of consistently resolving displacements of less than 1 µm and measuring axial strains ranging from less than 0.001% to 2.5%. Further analysis of the small-strain local measurement data was performed using the new Normalized Rotational Multiple Yield Surface Framework (NRMYSF) method and compared with the existing Rotational Multiple Yield Surface Framework (RMYSF) prediction method. The prediction of shear strength based on the combined intrinsic curvilinear shear strength envelope using small-strain triaxial test data confirmed the significant improvement and reliability of the measurement and analysis methods. Moreover, the NRMYSF method shows excellent data prediction and a significant improvement toward more reliable prediction of soil strength that can reduce the cost and time of laboratory testing.

  1. FEASIBILITY OF HYDRAULIC FRACTURING OF SOILS TO IMPROVE REMEDIAL ACTIONS

    EPA Science Inventory

    Hydraulic fracturing, a technique commonly used to increase the yields of oil wells, could improve the effectiveness of several methods of in situ remediation. This project consisted of laboratory and field tests in which hydraulic fractures were created in soil. Laboratory te...

  2. A TRACER METHOD FOR COMPUTING TYPE IA SUPERNOVA YIELDS: BURNING MODEL CALIBRATION, RECONSTRUCTION OF THICKENED FLAMES, AND VERIFICATION FOR PLANAR DETONATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Townsley, Dean M.; Miles, Broxton J.; Timmes, F. X.

    2016-07-01

We refine our previously introduced parameterized model for explosive carbon–oxygen fusion during thermonuclear Type Ia supernovae (SNe Ia) by adding corrections to post-processing of recorded Lagrangian fluid-element histories to obtain more accurate isotopic yields. Deflagration and detonation products are verified for propagation in a medium of uniform density. A new method is introduced for reconstructing the temperature–density history within the artificially thick model deflagration front. We obtain better than 5% consistency between the electron capture computed by the burning model and yields from post-processing. For detonations, we compare to a benchmark calculation of the structure of driven steady-state planar detonations performed with a large nuclear reaction network and error-controlled integration. We verify that, for steady-state planar detonations down to a density of 5 × 10⁶ g cm⁻³, our post-processing matches the major abundances in the benchmark solution typically to better than 10% for times greater than 0.01 s after the passage of the shock front. As a test case to demonstrate the method, presented here with post-processing for the first time, we perform a two-dimensional simulation of a SN Ia in the scenario of a Chandrasekhar-mass deflagration–detonation transition (DDT). We find that reconstruction of deflagration tracks leads to slightly more complete silicon burning than without reconstruction. The resulting abundance structure of the ejecta is consistent with inferences from spectroscopic studies of observed SNe Ia. We confirm the absence of a central region of stable Fe-group material for the multi-dimensional DDT scenario. Detailed isotopic yields are tabulated and change only modestly when using deflagration reconstruction.

  3. Lagrangian and Hamiltonian constraints for guiding-center Hamiltonian theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tronko, Natalia; Brizard, Alain J.

A consistent guiding-center Hamiltonian theory is derived by Lie-transform perturbation method, with terms up to second order in magnetic-field nonuniformity. Consistency is demonstrated by showing that the guiding-center transformation presented here satisfies separate Jacobian and Lagrangian constraints that have not been explored before. A new first-order term appearing in the guiding-center phase-space Lagrangian is identified through a calculation of the guiding-center polarization. It is shown that this new polarization term also yields a simpler expression of the guiding-center toroidal canonical momentum, which satisfies an exact conservation law in axisymmetric magnetic geometries. Finally, an application of the guiding-center Lagrangian constraint on the guiding-center Hamiltonian yields a natural interpretation for its higher-order corrections.

  4. Temperature increase reduces global yields of major crops in four independent estimates

    PubMed Central

    Zhao, Chuang; Piao, Shilong; Wang, Xuhui; Lobell, David B.; Huang, Yao; Huang, Mengtian; Yao, Yitong; Bassu, Simona; Ciais, Philippe; Durand, Jean-Louis; Elliott, Joshua; Ewert, Frank; Janssens, Ivan A.; Li, Tao; Lin, Erda; Liu, Qiang; Martre, Pierre; Peng, Shushi; Wallach, Daniel; Wang, Tao; Wu, Donghai; Liu, Zhuo; Zhu, Yan; Zhu, Zaichun; Asseng, Senthold

    2017-01-01

    Wheat, rice, maize, and soybean provide two-thirds of human caloric intake. Assessing the impact of global temperature increase on production of these crops is therefore critical to maintaining global food supply, but different studies have yielded different results. Here, we investigated the impacts of temperature on yields of the four crops by compiling extensive published results from four analytical methods: global grid-based and local point-based models, statistical regressions, and field-warming experiments. Results from the different methods consistently showed negative temperature impacts on crop yield at the global scale, generally underpinned by similar impacts at country and site scales. Without CO2 fertilization, effective adaptation, and genetic improvement, each degree-Celsius increase in global mean temperature would, on average, reduce global yields of wheat by 6.0%, rice by 3.2%, maize by 7.4%, and soybean by 3.1%. Results are highly heterogeneous across crops and geographical areas, with some positive impact estimates. Multimethod analyses improved the confidence in assessments of future climate impacts on global major crops and suggest crop- and region-specific adaptation strategies to ensure food security for an increasing world population. PMID:28811375

  5. Temperature increase reduces global yields of major crops in four independent estimates.

    PubMed

    Zhao, Chuang; Liu, Bing; Piao, Shilong; Wang, Xuhui; Lobell, David B; Huang, Yao; Huang, Mengtian; Yao, Yitong; Bassu, Simona; Ciais, Philippe; Durand, Jean-Louis; Elliott, Joshua; Ewert, Frank; Janssens, Ivan A; Li, Tao; Lin, Erda; Liu, Qiang; Martre, Pierre; Müller, Christoph; Peng, Shushi; Peñuelas, Josep; Ruane, Alex C; Wallach, Daniel; Wang, Tao; Wu, Donghai; Liu, Zhuo; Zhu, Yan; Zhu, Zaichun; Asseng, Senthold

    2017-08-29

    Wheat, rice, maize, and soybean provide two-thirds of human caloric intake. Assessing the impact of global temperature increase on production of these crops is therefore critical to maintaining global food supply, but different studies have yielded different results. Here, we investigated the impacts of temperature on yields of the four crops by compiling extensive published results from four analytical methods: global grid-based and local point-based models, statistical regressions, and field-warming experiments. Results from the different methods consistently showed negative temperature impacts on crop yield at the global scale, generally underpinned by similar impacts at country and site scales. Without CO2 fertilization, effective adaptation, and genetic improvement, each degree-Celsius increase in global mean temperature would, on average, reduce global yields of wheat by 6.0%, rice by 3.2%, maize by 7.4%, and soybean by 3.1%. Results are highly heterogeneous across crops and geographical areas, with some positive impact estimates. Multimethod analyses improved the confidence in assessments of future climate impacts on global major crops and suggest crop- and region-specific adaptation strategies to ensure food security for an increasing world population.

  6. Improved recovery of functionally active eosinophils and neutrophils using novel immunomagnetic technology.

    PubMed

    Son, Kiho; Mukherjee, Manali; McIntyre, Brendan A S; Eguez, Jose C; Radford, Katherine; LaVigne, Nicola; Ethier, Caroline; Davoine, Francis; Janssen, Luke; Lacy, Paige; Nair, Parameswaran

    2017-10-01

    Clinically relevant and reliable reports derived from in vitro research are dependent on the choice of cell isolation protocols adopted between different laboratories. Peripheral blood eosinophils are conventionally isolated using density-gradient centrifugation followed by immunomagnetic selection (positive/negative) while neutrophils follow a more simplified dextran-sedimentation methodology. With the increasing sophistication of molecular techniques, methods are now available that promise protocols with reduced user-manipulations, improved efficiency, and better yield without compromising the purity of enriched cell populations. These recent techniques utilize immunomagnetic particles with multiple specificities against differential cell surface markers to negatively select non-target cells from whole blood, greatly reducing the cost/time taken to isolate granulocytes. Herein, we compare the yield efficiencies, purity and baseline activation states of eosinophils/neutrophils isolated using one of these newer protocols that use immunomagnetic beads (MACSxpress isolation) vs. the standard isolation procedures. The study shows that the MACSxpress method consistently allowed higher yields per mL of peripheral blood compared to conventional methods (P<0.001, n=8, Wilcoxon paired test), with high isolation purities for both eosinophils (95.0±1.7%) and neutrophils (94.2±10.1%) assessed by two methods: Wright's staining and flow cytometry. In addition, enumeration of CD63+ (marker for eosinophil activation) and CD66b+ (marker for neutrophil activation) cells within freshly isolated granulocytes, respectively, confirmed that conventional protocols using density-gradient centrifugation caused cellular activation of the granulocytes at baseline compared to the MACSxpress method. In conclusion, MACSxpress isolation kits were found to be superior to conventional techniques for consistent purifications of eosinophils and neutrophils that were suitable for activation assays involving degranulation markers. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Conductivity map from scanning tunneling potentiometry.

    PubMed

    Zhang, Hao; Li, Xianqi; Chen, Yunmei; Durand, Corentin; Li, An-Ping; Zhang, X-G

    2016-08-01

    We present a novel method for extracting two-dimensional (2D) conductivity profiles from large electrochemical potential datasets acquired by scanning tunneling potentiometry of a 2D conductor. The method consists of a data preprocessing procedure to reduce or eliminate noise and a numerical conductivity reconstruction. The preprocessing procedure employs an inverse-consistent image registration method to align the forward and backward scans of each image line, followed by a total variation (TV) based image restoration method to obtain a (nearly) noise-free potential from the aligned scans. The preprocessed potential is then used for numerical conductivity reconstruction, based on a TV model solved by an accelerated alternating direction method of multipliers. The method is demonstrated on a measurement of a grain boundary in monolayer graphene, yielding a nearly 10:1 ratio for the grain boundary resistivity over bulk resistivity.
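
    The flavor of the TV-based restoration step can be sketched in one dimension. The code below is a generic gradient descent on a smoothed total-variation objective applied to an invented step signal; it is not the authors' accelerated ADMM solver, and the signal, regularization weight, smoothing parameter, and step size are arbitrary illustrative choices.

```python
# Minimize 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps)
# by plain gradient descent: a smoothed 1-D total-variation denoiser.
import random

random.seed(5)
n, lam, eps, step = 60, 0.4, 1e-2, 0.1
clean = [0.0 if i < n // 2 else 1.0 for i in range(n)]   # step "potential"
noisy = [c + random.gauss(0, 0.15) for c in clean]

x = list(noisy)
for _ in range(1000):
    g = [xi - yi for xi, yi in zip(x, noisy)]            # data-fit gradient
    for i in range(n - 1):
        d = x[i + 1] - x[i]
        t = lam * d / (d * d + eps) ** 0.5               # gradient of smoothed |d|
        g[i] -= t
        g[i + 1] += t
    x = [xi - step * gi for xi, gi in zip(x, g)]

def mse(u):
    return sum((a - b) ** 2 for a, b in zip(u, clean)) / n

print(mse(noisy), mse(x))   # the TV-restored signal sits much closer to the step
```

    The TV penalty suppresses noise on the flat plateaus while largely preserving the jump, which is why it suits piecewise-constant potentials such as a grain-boundary step.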

  8. Using Photo-Interviewing as Tool for Research and Evaluation.

    ERIC Educational Resources Information Center

    Dempsey, John V.; Tucker, Susan A.

    Arguing that photo-interviewing yields richer data than that usually obtained from verbal interviewing procedures alone, it is proposed that this method of data collection be added to "standard" methodologies in instructional development research and evaluation. The process, as described in this paper, consists of using photographs of…

  9. Evaluation of trends in wheat yield models

    NASA Technical Reports Server (NTRS)

    Ferguson, M. C.

    1982-01-01

    Trend terms in models for wheat yield in the U.S. Great Plains for the years 1932 to 1976 are evaluated. The subset of meteorological variables yielding the largest adjusted R(2) is selected using the method of leaps and bounds. Latent root regression is used to eliminate multicollinearities, and generalized ridge regression is used to introduce bias to provide stability in the data matrix. The regression model used provides for two trends in each of two models: a dependent model in which the trend line is piece-wise continuous, and an independent model in which the trend line is discontinuous at the year of the slope change. It was found that the trend lines best describing the wheat yields consisted of combinations of increasing, decreasing, and constant trend: four combinations for the dependent model and seven for the independent model.
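
    The "dependent" (piece-wise continuous) trend model can be illustrated with a small ordinary-least-squares fit. The breakpoint year, coefficients, and noise level below are invented, and the sketch omits the leaps-and-bounds variable selection, latent root regression, and ridge machinery used in the study.

```python
# Fit yield = a + b*t + c*max(0, t - t0): slope b before the breakpoint
# t0, slope b + c after it, with the trend line continuous at t0.
import random

def fit_ols(X, y):
    # Solve the normal equations (X^T X) beta = X^T y by Gaussian elimination.
    p = len(X[0])
    M = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(p)]
         + [sum(X[k][i] * y[k] for k in range(len(X)))] for i in range(p)]
    for i in range(p):                            # forward elimination with pivoting
        piv = max(range(i, p), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, p):
            f = M[r][i] / M[i][i]
            M[r] = [M[r][c] - f * M[i][c] for c in range(p + 1)]
    beta = [0.0] * p
    for i in reversed(range(p)):                  # back substitution
        beta[i] = (M[i][p] - sum(M[i][j] * beta[j]
                                 for j in range(i + 1, p))) / M[i][i]
    return beta

random.seed(1)
t0 = 20                                           # hypothetical slope-change year (index)
years = list(range(45))                           # 45 seasons, like 1932-1976
truth = [10 + 0.3 * t + (0.2 * (t - t0) if t > t0 else 0) for t in years]
y = [v + random.gauss(0, 0.5) for v in truth]

X = [[1.0, float(t), float(max(0, t - t0))] for t in years]
a, b, c = fit_ols(X, y)
print(b, b + c)   # slopes before/after the break, near 0.3 and 0.5
```

    The "independent" model of the abstract would instead drop the continuity constraint, allowing a jump in the trend line at the slope-change year.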

  10. Towards a rational theory for CFD global stability

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Iannelli, G. S.

    1989-01-01

    The fundamental notion of the consistent stability of semidiscrete analogues of evolution PDEs is explored. Lyapunov's direct method is used to develop CFD semidiscrete algorithms which yield the TVD constraint as a special case. A general formula for supplying dissipation parameters for arbitrary multidimensional conservation law systems is proposed. The reliability of the method is demonstrated by the results of two numerical tests for representative Euler shocked flows.

  11. [The importance of using the computer in treating children with strabismus and amblyopia].

    PubMed

    Tatarinov, S A; Amel'ianova, S G; Kashchenko, T P; Lakomkin, V I; Avuchenkova, T N; Galich, V I

    1993-01-01

    A method for therapy of strabismus and amblyopia with the use of an IBM PC AT type computer is suggested. It consists of active interaction of the patient with various test objects on the monitor and is realized via a special set of programs. Clinical indications for the use of the new method are defined. Its use yielded good results in 82 of 97 children.

  12. Experiment, monitoring, and gradient methods used to infer climate change effects on plant communities yield consistent patterns

    Treesearch

    Sarah C. Elmendorf; Gregory H.R. Henry; Robert D. Hollister; Anna Maria Fosaa; William A. Gould; Luise Hermanutz; Annika Hofgaard; Ingibjorg I. Jonsdottir; Janet C. Jorgenson; Esther Levesque; Borgþór Magnusson; Ulf Molau; Isla H. Myers-Smith; Steven F. Oberbauer; Christian Rixen; Craig E. Tweedie; Marilyn Walker

    2015-01-01

    Inference about future climate change impacts typically relies on one of three approaches: manipulative experiments, historical comparisons (broadly defined to include monitoring the response to ambient climate fluctuations using repeat sampling of plots, dendroecology, and paleoecology techniques), and space-for-time substitutions derived from sampling along...

  13. Using Least Squares to Solve Systems of Equations

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2016-01-01

    The method of least squares (LS) yields exact solutions for the adjustable parameters when the number of data values n equals the number of parameters "p". This holds also when the fit model consists of "m" different equations and "m = p", which means that LS algorithms can be used to obtain solutions to systems of…
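
    The point of the excerpt — that least squares reproduces the exact solution when the number of data values equals the number of parameters — can be checked with a minimal sketch: a hypothetical 2×2 system solved through the normal equations.

```python
# Solve a 2-equation, 2-parameter system via the least-squares normal
# equations (A^T A) x = A^T b. With n = p and A invertible, the LS
# solution is the exact solution and the residual vanishes.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def solve2(M, v):
    # Cramer's rule for a 2x2 system M x = v
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(v[0] * M[1][1] - v[1] * M[0][1]) / det,
            (M[0][0] * v[1] - M[1][0] * v[0]) / det]

# System: 2x + y = 5, x - y = 1  (exact solution x = 2, y = 1)
A = [[2.0, 1.0], [1.0, -1.0]]
b = [[5.0], [1.0]]

AtA = matmul(transpose(A), A)
Atb = [row[0] for row in matmul(transpose(A), b)]
x = solve2(AtA, Atb)

residual = [A[i][0] * x[0] + A[i][1] * x[1] - b[i][0] for i in range(2)]
print(x)         # → [2.0, 1.0]
print(residual)  # → [0.0, 0.0]
```

    With m = p consistent equations the residual is exactly zero, so any LS routine doubles as a linear-system solver.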

  14. Simulation-Extrapolation for Estimating Means and Causal Effects with Mismeasured Covariates

    ERIC Educational Resources Information Center

    Lockwood, J. R.; McCaffrey, Daniel F.

    2015-01-01

    Regression, weighting and related approaches to estimating a population mean from a sample with nonrandom missing data often rely on the assumption that conditional on covariates, observed samples can be treated as random. Standard methods using this assumption generally will fail to yield consistent estimators when covariates are measured with…
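
    The simulation-extrapolation (SIMEX) idea named in the title can be sketched for the simplest case, a regression slope attenuated by covariate measurement error. Everything below is invented for illustration (model, noise levels, λ grid, quadratic extrapolant); it is not the authors' estimator.

```python
# SIMEX sketch: deliberately add extra measurement error with variance
# lam * sigma_u^2, track how the fitted slope degrades as lam grows,
# then extrapolate the slope-vs-lam curve back to lam = -1 (no error).
import random

random.seed(7)
n, true_slope, sigma_u = 4000, 2.0, 0.7

x = [random.gauss(0, 1) for _ in range(n)]           # true covariate
y = [true_slope * xi + random.gauss(0, 0.3) for xi in x]
w = [xi + random.gauss(0, sigma_u) for xi in x]      # mismeasured covariate

def slope(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v))
            / sum((a - mu) ** 2 for a in u))

naive = slope(w, y)                                  # biased toward zero

lams, slopes = [0.0, 0.5, 1.0, 1.5, 2.0], []
for lam in lams:
    wl = [wi + random.gauss(0, sigma_u * lam ** 0.5) for wi in w]
    slopes.append(slope(wl, y))

# Quadratic (Lagrange) extrapolation through three grid points to lam = -1.
l0, l1, l2 = lams[0], lams[2], lams[4]
s0, s1, s2 = slopes[0], slopes[2], slopes[4]

def lag(l):
    return (s0 * (l - l1) * (l - l2) / ((l0 - l1) * (l0 - l2))
            + s1 * (l - l0) * (l - l2) / ((l1 - l0) * (l1 - l2))
            + s2 * (l - l0) * (l - l1) / ((l2 - l0) * (l2 - l1)))

simex = lag(-1.0)
print(naive, simex)   # simex sits much closer to the true slope of 2
```

    The quadratic extrapolant does not remove the bias completely (the true attenuation curve is not a polynomial), but it recovers most of it, which is the practical appeal of SIMEX.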

  15. Screening cucurbit rootstocks for varietal resistance to Meloidogyne incognita and Rotylenchulus reniformis

    USDA-ARS?s Scientific Manuscript database

    Fusarium wilt (caused by the fungus Fusarium oxysporum f.sp. niveum) has been a consistent problem in the production of cucurbit species like watermelon. One method for combating this pathogen in the field is to graft a susceptible, high yielding scion on to a Fusarium wilt resistant rootstock. A co...

  16. Elevated Genetic Diversity in an F2:6 Population of Quinoa (Chenopodium quinoa) Developed through an Inter-ecotype Cross

    PubMed Central

    Benlhabib, Ouafae; Boujartani, Noura; Maughan, Peter J.; Jacobsen, Sven E.; Jellen, Eric N.

    2016-01-01

    Quinoa (Chenopodium quinoa) is a seed crop of the Andean highlands and Araucanian coastal regions of South America that has recently expanded in use and production beyond its native range. This is largely due to its superb nutritional value, consisting of protein that is rich in essential amino acids along with vitamins and minerals. Quinoa also presents a remarkable degree of tolerance to saline conditions, drought, and frost. The present study involved 72 F2:6 recombinant-inbred lines and parents developed through hybridization between highland (0654) and coastal (NL-6) germplasm groups. The purpose was to characterize the quinoa germplasm developed, to assess the discriminating potential of 21 agro-morpho-phenological traits, and to evaluate the extent of genetic variability recovered through selfing. A vast amount of genetic variation was detected among the 72 lines evaluated for quantitative and qualitative traits. Impressive transgressive segregation was measured for seed yield (22.42 g/plant), while plant height and maturity had higher heritabilities (73 and 89%, respectively). Other notable characters segregating in the population included panicle and stem color, panicle form, and resistance to downy mildew. In the Principal Component analysis, the first axis explained 74% of the total variation and was correlated to plant height, panicle size, stem diameter, biomass, mildew reaction, maturation, and seed yield; those traits are relevant discriminatory characters. Yield correlated positively with panicle length and biomass. Unweighted Pair Group Method with Arithmetic Mean-based cluster analysis identified three groups: one consisting of late, mildew-resistant, high-yielding lines; one having semi-late lines with intermediate yield and mildew susceptibility; and a third cluster consisting of early to semi-late accessions with low yield and mildew susceptibility. This study highlighted the extended diversity regenerated among the 72 accessions and helped to identify potentially adapted quinoa genotypes for production in the Moroccan coastal environment. PMID:27582753
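
    The UPGMA (average-linkage) clustering named in the abstract can be illustrated with a toy implementation. The 2-D "trait scores" below are made up; only the three-group outcome mirrors the study.

```python
# Toy UPGMA: repeatedly merge the pair of clusters with the smallest
# average inter-cluster distance, stopping when k clusters remain.
def upgma(points, k):
    clusters = [[i] for i in range(len(points))]

    def dist(a, b):
        return sum((pa - pb) ** 2 for pa, pb in zip(a, b)) ** 0.5

    def avg_link(c1, c2):
        return (sum(dist(points[i], points[j]) for i in c1 for j in c2)
                / (len(c1) * len(c2)))

    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: avg_link(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# three well-separated hypothetical trait groups of accessions
pts = [(0, 0), (0.4, 0.2), (0.1, 0.5),      # e.g. late, high-yield lines
       (5, 5), (5.3, 4.8), (4.9, 5.4),      # intermediate lines
       (10, 0), (9.7, 0.4), (10.2, 0.3)]    # early, low-yield lines
groups = sorted(sorted(c) for c in upgma(pts, 3))
print(groups)   # → [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
```

    In practice the linkage would be run on a full multi-trait distance matrix (and usually with a library routine), but the merge rule is exactly this averaging step.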

  17. Optimizing Adipose Tissue Extract Isolation with Stirred Suspension Culture.

    PubMed

    Zhang, Yan; Yu, Mei; Zhao, Xueyong; Dai, Minjia; Chen, Chang; Tian, Weidong

    2018-05-31

    Adherent culture, previously used to collect adipose tissue extract (ATE), suffers from inhomogeneity and poor repeatability. Here we aim to extract ATE with stirred suspension culture to speed up the extraction process, stabilize the yield and improve the consistency of ATE potency metrics. ATE was collected with adherent culture (ATE-A) and stirred suspension culture (ATE-S) separately. Protein yield and composition were detected by SDS-PAGE while cytokines in ATE were determined with ELISA. The adipogenic and angiogenic potential of ATE were compared by Western blot and qPCR. In addition, HE staining and LDH activity assays were used to analyze the cell viability of adipose tissue cultured with the different methods. The yield of ATE-S was consistent while that of ATE-A varied notably. Characterization of the protein composition and exosome-like vesicles (ELVs) indicated no significant difference between ATE-S and ATE-A. The concentrations of cytokines (VEGF, bFGF and IL-6) showed no significant difference, while IGF in ATE-S was higher than that in ATE-A. ATE-S showed upregulated adipogenic and angiogenic potential compared to ATE-A. Moreover, stirred suspension culture decreased the LDH activity of ATE while increasing the number of viable adipocytes and reducing adipose tissue necrosis. Compared with adherent culture, stirred suspension culture is a reliable, time- and labor-saving method to collect ATE, which might improve the downstream applications of ATE.

  18. Water yields from forests: an agnostic view

    Treesearch

    Robert R. Ziemer

    1987-01-01

    Abstract - Although experimental watershed studies have consistently shown that water yield can be increased by removing trees and shrubs, programs to increase water yield on an operational scale have consistently failed. Failure has been related to overstated goals and benefits, unrealistic assumptions, political naivete, and the emergence of new interest groups....

  19. Spectrum-shape method and the next-to-leading-order terms of the β-decay shape factor

    NASA Astrophysics Data System (ADS)

    Haaranen, M.; Kotila, J.; Suhonen, J.

    2017-02-01

    Effective values of the axial-vector coupling constant gA have lately attracted much attention due to the prominent role of gA in determining the half-lives of double β decays, in particular their neutrinoless mode. The half-life method, i.e., comparing the calculated half-lives to the corresponding experimental ones, is the most widely used method to access the effective values of gA. The present paper investigates the possibilities offered by a complementary method: the spectrum-shape method (SSM). In the SSM, comparison of the shapes of the calculated and measured β electron spectra of forbidden nonunique β decays yields information on the magnitude of gA. In parallel, we investigate the impact of the next-to-leading-order terms of the β-decay shape function and the radiative corrections on the half-life method and the SSM by analyzing the fourfold forbidden decays of 113Cd and 115In using three nuclear-structure theory frameworks, namely, the nuclear shell model, the microscopic interacting boson-fermion model, and the microscopic quasiparticle-phonon model. The three models yield a consistent result, gA ≈ 0.92, when the SSM is applied to the decay of 113Cd, for which β-spectrum data are available. At the same time, the half-life method yields results which are in tension with each other and with the SSM result.

  20. An Investigation into the Relationship Between Distillate Yield and Stable Isotope Fractionation

    NASA Astrophysics Data System (ADS)

    Sowers, T.; Wagner, A. J.

    2016-12-01

    Recent breakthroughs in laser spectrometry have allowed for faster, more efficient analyses of stable isotopic ratios in water samples. Commercially available instruments from Los Gatos Research and Picarro allow users to quickly analyze a wide range of samples, from seawater to groundwater, with accurate isotope ratios of D/H to within ± 0.2 ‰ and 18O/16O to within ± 0.03 ‰. While these instruments have increased the efficiency of stable isotope laboratories, they come with some major limitations, such as not being able to analyze hypersaline waters. The Los Gatos Research Liquid Water Isotope Analyzer (LWIA) can accurately and consistently measure the stable isotope ratios in waters with salinities ranging from 0 to 4 grams per liter (0 to 40 parts per thousand). In order to analyze water samples with salinities greater than 4 grams per liter, however, it was necessary to develop a consistent method through which to reduce salinity while causing as little fractionation as possible. Using a consistent distillation method, predictable fractionation of δ18O and δ2H values was found to occur. This fractionation occurs according to a linear relationship with respect to the percent yield of the water in the sample. Using this method, samples with high salinity can be analyzed using laser spectrometry instruments, thereby enabling laboratories with Los Gatos or Picarro instruments to analyze those samples in house without having to dilute them using labor-intensive in-house standards or expensive premade standards.
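
    A linear fractionation-versus-yield relationship suggests a simple correction scheme: fit a calibration line to standards distilled to known percent yields, then subtract the predicted offset from a sample measured at its own yield. The calibration numbers in this sketch are invented for illustration.

```python
# Correct a measured delta value for distillation fractionation using a
# calibration line of isotope offset vs. distillate percent yield.

def fit_line(x, y):
    # ordinary least squares for one predictor: returns (intercept, slope)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# hypothetical calibration: (percent yield, measured d18O offset in permil)
pct = [60.0, 70.0, 80.0, 90.0]
offset = [1.20, 0.90, 0.60, 0.30]

a, b = fit_line(pct, offset)
measured, yield_pct = -4.50, 75.0          # hypothetical sample
corrected = measured - (a + b * yield_pct)
print(round(corrected, 2))   # → -5.25
```

    The same two-parameter fit, applied per isotope ratio (δ18O and δ2H separately), is all that is needed once the linearity reported in the abstract holds.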

  1. Dry matter yields and quality of forages derived from grass species and organic production methods (year III).

    PubMed

    Pholsen, S; Rodchum, P; Higgs, D E B

    2014-07-01

    This third-year study was carried out at Khon Kaen University during 2008-2009 to investigate dry matter yields of grass and grass plus legumes grown on the Korat soil series (Oxic Paleustults). The experiment consisted of twelve treatment combinations of a 3x4 factorial arranged in a Randomized Complete Block Design (RCBD) with four replications. The results showed that dry matter yields (DMY) of Ruzi and Guinea grass were similar, with mean values of 6,585 and 6,130 kg ha(-1), whilst Napier gave the lowest (884 kg ha(-1)). With grass plus legume, grass species and production methods gave highly significant differences in dry matter yields, where Guinea and Ruzi gave 7,165 and 7,181 kg ha(-1), respectively, and Napier the least (2,790 kg ha(-1)). The production methods with the use of cattle manure gave the highest DMY (grass alone) of 10,267 kg ha(-1), followed by Wynn and Verano with values of 6,064 and 3,623 kg ha(-1), respectively. Guinea plus cattle manure gave the highest DMY of 14,599 kg ha(-1) whilst Ruzi gave 12,977 kg ha(-1). Guinea plus Wynn gave a DMY of 7,082 kg ha(-1), and Ruzi plus Verano a DMY of 6,501 kg ha(-1). Crude protein levels were highest in forages grown as grass plus legumes. Some prospects for improving production are discussed.

  2. Effects of Dopant Metal Variation and Material Synthesis Method on the Material Properties of Mixed Metal Ferrites in Yttria Stabilized Zirconia for Solar Thermochemical Fuel Production

    DOE PAGES

    Leonard, Jeffrey; Reyes, Nichole; Allen, Kyle M.; ...

    2015-01-01

    Mixed metal ferrites have shown much promise in two-step solar-thermochemical fuel production. Previous work has typically focused on evaluating a particular metal ferrite produced by a particular synthesis process, which makes comparisons between studies performed by independent researchers difficult. A comparative study was undertaken to explore the effects different synthesis methods have on the performance of a particular material during redox cycling using thermogravimetry. This study revealed that materials made via wet chemistry methods and extended periods of high temperature calcination yield better redox performance. Differences in redox performance between materials made via wet chemistry methods were minimal, and these demonstrated much better performance than those synthesized via the solid state method. Subsequently, various metal ferrite samples (NiFe2O4, MgFe2O4, CoFe2O4, and MnFe2O4) in yttria stabilized zirconia (8YSZ) were synthesized via coprecipitation and tested to determine the most promising metal ferrite combination. It was determined that 10 wt.% CoFe2O4 in 8YSZ produced the highest and most consistent yields of O2 and CO. By testing the effects of synthesis methods and dopants in a consistent fashion, those aspects of ferrite preparation which are most significant can be revealed. More importantly, these insights can guide future efforts in developing the next generation of thermochemical fuel production materials.

  3. Estimating the impact of mineral aerosols on crop yields in food insecure regions using statistical crop models

    NASA Astrophysics Data System (ADS)

    Hoffman, A.; Forest, C. E.; Kemanian, A.

    2016-12-01

    A significant number of food-insecure nations exist in regions of the world where dust plays a large role in the climate system. While the impacts of common climate variables (e.g. temperature, precipitation, ozone, and carbon dioxide) on crop yields are relatively well understood, the impact of mineral aerosols on yields has not yet been thoroughly investigated. This research aims to develop the data and tools to progress our understanding of mineral aerosol impacts on crop yields. Suspended dust affects crop yields by altering the amount and type of radiation reaching the plant and by modifying local temperature and precipitation, while dust events (i.e. dust storms) affect crop yields by depleting the soil of nutrients or by defoliation via particle abrasion. The impact of dust on yields is modeled statistically because we are uncertain which impacts will dominate the response on the national and regional scales considered in this study. Multiple linear regression is used in a number of large-scale statistical crop modeling studies to estimate yield responses to various climate variables. In alignment with previous work, we develop linear crop models, but build upon this simple method of regression with machine-learning techniques (e.g. random forests) to identify important statistical predictors and isolate how dust affects yields on the scales of interest. To perform this analysis, we develop a crop-climate dataset for maize, soybean, groundnut, sorghum, rice, and wheat for the regions of West Africa, East Africa, South Africa, and the Sahel. Random forest regression models consistently model historic crop yields better than the linear models. In several instances, the random forest models accurately capture the temperature and precipitation threshold behavior in crops. Additionally, improving agricultural technology has caused a well-documented positive trend that dominates time series of global and regional yields. This trend is often removed before regression with traditional crop models, but likely at the cost of removing climate information. Our random forest models consistently discover the positive trend without removing any additional data. The application of random forests as a statistical crop model provides insight into understanding the impact of dust on yields in marginal food producing regions.
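
    The advantage of tree-based models over linear regression for threshold behavior can be caricatured with a bagged ensemble of depth-1 regression trees (stumps) on synthetic data; the temperature threshold and yield values below are invented, and a real random forest would of course use deeper trees and feature subsampling.

```python
# Compare a bagged stump ensemble against a linear fit on data with a
# hard temperature threshold, the kind of response a line cannot bend to.
import random

def fit_stump(x, y):
    # depth-1 regression tree: best single split minimizing squared error
    best = None
    order = sorted(range(len(x)), key=lambda i: x[i])
    for cut in range(1, len(x)):
        left = [y[order[i]] for i in range(cut)]
        right = [y[order[i]] for i in range(cut, len(x))]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - ml) ** 2 for v in left)
               + sum((v - mr) ** 2 for v in right))
        if best is None or sse < best[0]:
            thr = (x[order[cut - 1]] + x[order[cut]]) / 2
            best = (sse, thr, ml, mr)
    _, thr, ml, mr = best
    return lambda v: ml if v <= thr else mr

random.seed(3)
# hypothetical response: yield collapses above a 30-degree threshold
temp = [random.uniform(20, 40) for _ in range(300)]
yld = [(8.0 if t <= 30 else 3.0) + random.gauss(0, 0.5) for t in temp]

stumps = []                                   # bag 25 stumps on bootstrap resamples
for _ in range(25):
    idx = [random.randrange(300) for _ in range(300)]
    stumps.append(fit_stump([temp[i] for i in idx], [yld[i] for i in idx]))
forest = lambda v: sum(s(v) for s in stumps) / len(stumps)

mt, my = sum(temp) / 300, sum(yld) / 300      # linear fit for comparison
b = (sum((t - mt) * (v - my) for t, v in zip(temp, yld))
     / sum((t - mt) ** 2 for t in temp))
a = my - b * mt
lin = lambda v: a + b * v

def mse(model):
    return sum((model(t) - v) ** 2 for t, v in zip(temp, yld)) / 300

print(mse(forest), mse(lin))   # the stump ensemble fits the threshold far better
```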

  4. Development of a European Ensemble System for Seasonal Prediction: Application to crop yield

    NASA Astrophysics Data System (ADS)

    Terres, J. M.; Cantelaube, P.

    2003-04-01

    Western European agriculture is highly intensive, and the weather is the main source of uncertainty for crop yield assessment and crop management. In the current system, at the time a crop yield forecast is issued, the weather conditions leading up to harvest are unknown and are therefore a major source of uncertainty. The use of seasonal weather forecasts would bring additional information for the remaining crop season and has valuable benefits for improving the management of agricultural markets and environmentally sustainable farm practices. An innovative method for supplying seasonal forecast information to crop simulation models has been developed in the frame of the EU-funded research project DEMETER. It consists of running a crop model on each individual member of the seasonal hindcasts to derive a probability distribution of crop yield. Preliminary results for the cumulative probability function of wheat yield provide information on both the yield anomaly and the reliability of the forecast. Based on the spread of the probability distribution, the end-user can directly quantify the benefits and risks of taking weather-sensitive decisions.
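
    The ensemble-to-probability mechanics can be sketched with a toy crop model. The rainfall response, ensemble size, and statistics below are invented; only the pattern of running the model once per ensemble member and reading off an empirical CDF follows the abstract.

```python
# Drive a (toy) crop model with each seasonal-forecast ensemble member
# and summarize the resulting yields as an empirical CDF.
import random

random.seed(11)
ensemble_rain = [random.gauss(400, 80) for _ in range(51)]  # mm, 51 members

def toy_crop_model(rain_mm):
    # hypothetical water-limited yield response, capped at 450 mm
    return min(rain_mm, 450.0) / 450.0 * 8.0                # t/ha

yields = sorted(toy_crop_model(r) for r in ensemble_rain)

def prob_below(threshold):
    # empirical CDF from the ensemble of simulated yields
    return sum(1 for v in yields if v < threshold) / len(yields)

print(prob_below(7.0))   # probability of a poor harvest (< 7 t/ha)
```

    A tight spread in `yields` signals a confident forecast; a wide spread tells the end-user that a weather-sensitive decision carries real risk, which is exactly the information the abstract highlights.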

  5. Performance of Blind Source Separation Algorithms for FMRI Analysis using a Group ICA Method

    PubMed Central

    Correa, Nicolle; Adali, Tülay; Calhoun, Vince D.

    2007-01-01

    Independent component analysis (ICA) is a popular blind source separation (BSS) technique that has proven to be promising for the analysis of functional magnetic resonance imaging (fMRI) data. A number of ICA approaches have been used for fMRI data analysis, and even more ICA algorithms exist, however the impact of using different algorithms on the results is largely unexplored. In this paper, we study the performance of four major classes of algorithms for spatial ICA, namely information maximization, maximization of non-gaussianity, joint diagonalization of cross-cumulant matrices, and second-order correlation based methods when they are applied to fMRI data from subjects performing a visuo-motor task. We use a group ICA method to study the variability among different ICA algorithms and propose several analysis techniques to evaluate their performance. We compare how different ICA algorithms estimate activations in expected neuronal areas. The results demonstrate that the ICA algorithms using higher-order statistical information prove to be quite consistent for fMRI data analysis. Infomax, FastICA, and JADE all yield reliable results; each having their strengths in specific areas. EVD, an algorithm using second-order statistics, does not perform reliably for fMRI data. Additionally, for the iterative ICA algorithms, it is important to investigate the variability of the estimates from different runs. We test the consistency of the iterative algorithms, Infomax and FastICA, by running the algorithm a number of times with different initializations and note that they yield consistent results over these multiple runs. Our results greatly improve our confidence in the consistency of ICA for fMRI data analysis. PMID:17540281

  6. Performance of blind source separation algorithms for fMRI analysis using a group ICA method.

    PubMed

    Correa, Nicolle; Adali, Tülay; Calhoun, Vince D

    2007-06-01

    Independent component analysis (ICA) is a popular blind source separation technique that has proven to be promising for the analysis of functional magnetic resonance imaging (fMRI) data. A number of ICA approaches have been used for fMRI data analysis, and even more ICA algorithms exist; however, the impact of using different algorithms on the results is largely unexplored. In this paper, we study the performance of four major classes of algorithms for spatial ICA, namely, information maximization, maximization of non-Gaussianity, joint diagonalization of cross-cumulant matrices and second-order correlation-based methods, when they are applied to fMRI data from subjects performing a visuo-motor task. We use a group ICA method to study variability among different ICA algorithms, and we propose several analysis techniques to evaluate their performance. We compare how different ICA algorithms estimate activations in expected neuronal areas. The results demonstrate that the ICA algorithms using higher-order statistical information prove to be quite consistent for fMRI data analysis. Infomax, FastICA and joint approximate diagonalization of eigenmatrices (JADE) all yield reliable results, with each having its strengths in specific areas. Eigenvalue decomposition (EVD), an algorithm using second-order statistics, does not perform reliably for fMRI data. Additionally, for iterative ICA algorithms, it is important to investigate the variability of estimates from different runs. We test the consistency of the iterative algorithms Infomax and FastICA by running the algorithm a number of times with different initializations, and we note that they yield consistent results over these multiple runs. Our results greatly improve our confidence in the consistency of ICA for fMRI data analysis.

  7. Reliable yields of public water-supply wells in the fractured-rock aquifers of central Maryland, USA

    NASA Astrophysics Data System (ADS)

    Hammond, Patrick A.

    2018-02-01

    Most studies of fractured-rock aquifers are about analytical models used for evaluating aquifer tests or numerical methods for describing groundwater flow, but there have been few investigations on how to estimate the reliable long-term drought yields of individual hard-rock wells. During the drought period of 1998 to 2002, many municipal water suppliers in the Piedmont/Blue Ridge areas of central Maryland (USA) had to institute water restrictions due to declining well yields. Previous estimates of the yields of those wells were commonly based on extrapolating drawdowns, measured during short-term single-well hydraulic pumping tests, to the first primary water-bearing fracture in a well. The extrapolations were often made from pseudo-equilibrium phases, frequently resulting in substantially over-estimated well yields. The methods developed in the present study to predict yields consist of extrapolating drawdown data from infinite acting radial flow periods or by fitting type curves of other conceptual models to the data, using diagnostic plots, inverse analysis and derivative analysis. Available drawdowns were determined by the positions of transition zones in crystalline rocks or thin-bedded consolidated sandstone/limestone layers (reservoir rocks). Aquifer dewatering effects were detected by type-curve matching of step-test data or by breaks in the drawdown curves constructed from hydraulic tests. Operational data were then used to confirm the predicted yields and compared to regional groundwater levels to determine seasonal variations in well yields. Such well yield estimates are needed by hydrogeologists and water engineers for the engineering design of water systems, but should be verified by the collection of long-term monitoring data.
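
    The derivative analysis mentioned above can be illustrated on synthetic drawdown data: during infinite-acting radial flow the log-time derivative of drawdown, ds/d(ln t), flattens at Q/(4πT), which is one signature used to pick the period to extrapolate. The aquifer parameters below are invented, and the Cooper-Jacob approximation stands in for real test data.

```python
# Diagnostic-plot sketch: the log-derivative of Cooper-Jacob drawdown
# is the constant Q/(4*pi*T) throughout infinite-acting radial flow.
import math

Q, T, S, r = 500.0, 250.0, 1e-4, 0.15      # m3/d, m2/d, -, m (hypothetical)

def drawdown(t_days):
    # Cooper-Jacob late-time approximation of the Theis solution
    return (Q / (4 * math.pi * T)) * math.log(2.25 * T * t_days / (r * r * S))

times = [0.01 * 2 ** k for k in range(12)]
s = [drawdown(t) for t in times]

# numerical log-time derivative between successive points
deriv = [(s[i + 1] - s[i]) / (math.log(times[i + 1]) - math.log(times[i]))
         for i in range(len(s) - 1)]

expected = Q / (4 * math.pi * T)
print(deriv[0], expected)   # flat derivative equal to Q/(4*pi*T)
```

    On field data the derivative plateau ends when a boundary or dewatering effect bends the curve, and the abstract's point is that only the plateau portion should be extrapolated to a reliable yield.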

  8. Using a time-series statistical framework to quantify trends and abrupt change in US corn, soybean, and wheat yields from 1970-2016

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Ives, A. R.; Turner, M. G.; Kucharik, C. J.

    2017-12-01

    Previous studies have identified global agricultural regions where "stagnation" of long-term crop yield increases has occurred. These studies have used a variety of simple statistical methods that often ignore important aspects of time series regression modeling. These methods can lead to differing and contradictory results, which creates uncertainty regarding food security given rapid global population growth. Here, we present a new statistical framework incorporating time series-based algorithms into standard regression models to quantify spatiotemporal yield trends of US maize, soybean, and winter wheat from 1970-2016. Our primary goal was to quantify spatial differences in yield trends for these three crops using USDA county level data. This information was used to identify regions experiencing the largest changes in the rate of yield increases over time, and to determine whether abrupt shifts in the rate of yield increases have occurred. Although crop yields continue to increase in most maize-, soybean-, and winter wheat-growing areas, yield increases have stagnated in some key agricultural regions during the most recent 15 to 16 years: some maize-growing areas, except for the northern Great Plains, have shown a significant trend towards smaller annual yield increases; soybean has maintained consistent long-term yield gains in the northern Great Plains, the Midwest, and the southeast US, but has experienced a shift to smaller annual increases in other regions; winter wheat maintained a moderate annual increase in eastern South Dakota and eastern US locations, but showed a decline in the magnitude of annual increases across the central Great Plains and western US regions. Our results suggest that there were abrupt shifts in the rate of annual yield increases in a variety of US regions among the three crops. The framework presented here can be broadly applied to additional yield trend analyses for different crops and regions of the Earth.

  9. Synthesis of block copolymers consisting of vinylidene chloride and α-methylstyrene by cationic polymerization using an acid-exchanged montmorillonite clay as a heterogeneous catalyst (Algerian MMT)

    NASA Astrophysics Data System (ADS)

    Ayat, Moulkheir; Belbachir, Mohamed; Rahmouni, Abdelkader

    2017-07-01

    The aim of this study was to develop an efficient and versatile method for the synthesis of block copolymers of vinylidene chloride (VDC) and α-methylstyrene (α-MS) by cationic polymerization in the presence of a natural Algerian montmorillonite clay modified with 0.05-0.35 M H2SO4 (Algerian MMT-H+). It was found that the H2SO4 concentration allows control of the chemical composition, the porous structure of the acid-activated clays, and their catalytic performance. The maximal polymer yield was observed in the presence of Algerian MMT modified by 0.25 M H2SO4. The effects of the VDC/α-MS molar ratio, catalyst concentration, reaction time, reaction temperature, and medium polarity on the yield and molecular weight of the polymer were examined for the most active sample.

  10. Pressure-driven flow of a Herschel-Bulkley fluid with pressure-dependent rheological parameters

    NASA Astrophysics Data System (ADS)

    Panaseti, Pandelitsa; Damianou, Yiolanda; Georgiou, Georgios C.; Housiadas, Kostas D.

    2018-03-01

    The lubrication flow of a Herschel-Bulkley fluid in a symmetric long channel of varying width, 2h(x), is modeled extending the approach proposed by Fusi et al. ["Pressure-driven lubrication flow of a Bingham fluid in a channel: A novel approach," J. Non-Newtonian Fluid Mech. 221, 66-75 (2015)] for a Bingham plastic. Moreover, both the consistency index and the yield stress are assumed to be pressure-dependent. Under the lubrication approximation, the pressure at zero order depends only on x and the semi-width of the unyielded core is found to be given by σ(x) = -(1 + 1/n)h(x) + C, where n is the power-law exponent and the constant C depends on the Bingham number and the consistency-index and yield-stress growth numbers. Hence, in a channel of constant width, the width of the unyielded core is also constant, despite the pressure dependence of the yield stress, and the pressure distribution is not affected by the yield-stress function. With the present model, the pressure is calculated numerically solving an integro-differential equation and then the position of the yield surface and the two velocity components are computed using analytical expressions. Some analytical solutions are also derived for channels of constant and linearly varying widths. The lubrication solutions for other geometries are calculated numerically. The implications of the pressure-dependence of the material parameters and the limitations of the method are discussed.
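    The core result quoted above can be evaluated numerically as a toy sketch (n, C, and the channel profile h(x) below are arbitrary illustrative values, not derived from the Bingham and growth numbers as in the paper):

```python
# Zero-order semi-width of the unyielded core: sigma(x) = -(1 + 1/n) h(x) + C.

def core_semiwidth(h, n, C):
    """Semi-width of the unyielded plug for channel semi-width h."""
    return -(1.0 + 1.0 / n) * h + C

n = 0.5   # power-law exponent (shear-thinning, assumed)
C = 2.0   # constant fixed by the Bingham and growth numbers (assumed)

# Constant-width channel: the core semi-width is constant as well.
sigma_flat = [core_semiwidth(0.5, n, C) for _ in range(3)]

# Linearly varying width: the core semi-width varies linearly with x.
sigma_lin = [core_semiwidth(0.5 + 0.05 * x, n, C) for x in (0.0, 1.0, 2.0)]
```

    The constant-width case reproduces the abstract's observation that the core width stays constant despite the pressure dependence of the yield stress.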

  11. Estimation of 305 Day Milk Yield from Cumulative Monthly and Bimonthly Test Day Records in Indonesian Holstein Cattle

    NASA Astrophysics Data System (ADS)

    Rahayu, A. P.; Hartatik, T.; Purnomoadi, A.; Kurnianto, E.

    2018-02-01

    The aims of this study were to estimate the 305-day first-lactation milk yield of Indonesian Holstein cattle from cumulative monthly and bimonthly test day records and to analyze its accuracy. The first-lactation records of 258 dairy cows from 2006 to 2014, consisting of 2571 monthly (MTDY) and 1281 bimonthly test day yield (BTDY) records, were used. Milk yields were estimated by a regression method. Correlation coefficients between actual and estimated milk yield by cumulative MTDY were 0.70, 0.78, 0.83, 0.86, 0.89, 0.92, 0.94 and 0.96 for 2-9 months, respectively, while those by cumulative BTDY were 0.69, 0.81, 0.87 and 0.92 for 2, 4, 6 and 8 months, respectively. The accuracy of the fitted regression models (R2) increased with the number of cumulative test days used. The use of 5 cumulative MTDY was considered sufficient for estimating 305-day first-lactation milk yield, with 80.6% accuracy and a 7% error percentage of estimation. Estimates from MTDY were more accurate than those from BTDY, with a 1.1 to 2% lower error percentage over the same period.
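    The regression idea summarized above can be illustrated with a one-predictor least-squares fit (the numbers below are invented for illustration; the study fitted its models on 258 actual lactation records):

```python
# Predict 305-day lactation yield from a cumulative test-day total
# with simple ordinary least squares (one predictor).

def fit_ols(x, y):
    """Return intercept a and slope b for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical cumulative 5-month yields (kg) and actual 305-day yields (kg).
cum5 = [1800, 2100, 1950, 2300, 2050]
y305 = [3600, 4150, 3900, 4500, 4050]

a, b = fit_ols(cum5, y305)
pred = [a + b * x for x in cum5]   # fitted 305-day yields
```

    With more cumulative months in the predictor, the fit tightens, which is the mechanism behind the rising correlation coefficients reported in the abstract.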

  12. Emerging Issues in the Utilization of Weblogs in Higher Education Classrooms

    ERIC Educational Resources Information Center

    Ayao-ao, Shirley

    2014-01-01

    This paper examines the emerging issues in the utilization of weblogs in Philippine higher education and how these issues affect the performance of students. This study used a modified Delphi method. The Delphi panel consisted of 12 experts in the integration of technology, particularly blogs, in their teaching. The study yielded the following…

  13. Modeling maximum daily temperature using a varying coefficient regression model

    Treesearch

    Han Li; Xinwei Deng; Dong-Yum Kim; Eric P. Smith

    2014-01-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature...

  14. Facile mechanical shaking method is an improved isolation approach for islet preparation and transplantation.

    PubMed

    Yin, Nina; Chen, Tao; Yu, Yuling; Han, Yongming; Yan, Fei; Zheng, Zhou; Chen, Zebin

    2016-12-01

    Successful islet isolation is crucial for islet transplantation and cell treatment for type 1 diabetes. Current isolation methods are able to obtain 500-1,000 islets per rat, which results in a waste of ≥50% of total islets. In the present study, a facile mechanical shaking method for improving islet yield (up to 1,500 per rat) was developed and summarized, which was demonstrated to be more effective than the existing well-established stationary method. The present results showed that isolated islets reached a maximum yield of 1,326±152 when shaking for 15 min for the fully-cannulated pancreas. For both fully-cannulated and half-cannulated pancreas in the presence of rat DNAse inhibitor, the optimal shaking time was amended to 20 min with a further increased yield of 1,344±134 and 1,286±124 islets, respectively. Furthermore, the majority of the isolated islets were morphologically intact with a well-defined surface and almost no central necrotic zone, which suggested that the condition of islets obtained via the mechanical shaking method was consistent with the stationary method. Islet size distribution was also calculated, and it was demonstrated that islets from the stationary method exhibited the same size distribution as the non-cannulated group, which had a higher proportion of larger islets than the fully-cannulated and half-cannulated groups isolated via the shaking method. In addition, the results of the glucose challenge showed that the stimulation index of each group was >2.5, which indicated the well-preserved function of the isolated islets. Furthermore, the transplanted islets exhibited a therapeutic effect after 1 day of transplantation; however, they failed to control blood glucose levels after ~7 days of transplantation.
In conclusion, these results demonstrated that the facile mechanical shaking method may markedly improve the yield of rat islet isolation, and in vitro and in vivo investigation demonstrated the well-preserved function of isolated islets in the control of blood glucose. Therefore, the facile mechanical shaking method may be an alternative improved procedure to obtain higher islet yield for islet preparation and transplantation in the treatment of type 1 diabetes.

  15. Self-consistent DFT +U method for real-space time-dependent density functional theory calculations

    NASA Astrophysics Data System (ADS)

    Tancogne-Dejean, Nicolas; Oliveira, Micael J. T.; Rubio, Angel

    2017-12-01

    We implemented various DFT+U schemes, including the Agapito, Curtarolo, and Buongiorno Nardelli functional (ACBN0) self-consistent density-functional version of the DFT +U method [Phys. Rev. X 5, 011006 (2015), 10.1103/PhysRevX.5.011006] within the massively parallel real-space time-dependent density functional theory (TDDFT) code octopus. We further extended the method to the case of the calculation of response functions with real-time TDDFT+U and to the description of noncollinear spin systems. The implementation is tested by investigating the ground-state and optical properties of various transition-metal oxides, bulk topological insulators, and molecules. Our results are found to be in good agreement with previously published results for both the electronic band structure and structural properties. The self-consistent calculated values of U and J are also in good agreement with the values commonly used in the literature. We found that the time-dependent extension of the self-consistent DFT+U method yields improved optical properties when compared to the empirical TDDFT+U scheme. This work thus opens a different theoretical framework to address the nonequilibrium properties of correlated systems.

  16. Causal inference with measurement error in outcomes: Bias analysis and estimation methods.

    PubMed

    Shu, Di; Yi, Grace Y

    2017-01-01

    Inverse probability weighting estimation has been widely used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis that ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of the average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
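    A bare-bones sketch of the inverse probability weighting (IPW) estimator discussed above, with propensity scores taken as known (the paper's contributions on outcome measurement error, validation data, and double robustness are not reproduced here):

```python
# Horvitz-Thompson style IPW estimate of the average treatment effect.

def ipw_ate(y, t, e):
    """y: outcomes, t: 0/1 treatment indicators, e: propensity scores P(T=1|X)."""
    n = len(y)
    mu1 = sum(ti * yi / ei for yi, ti, ei in zip(y, t, e)) / n
    mu0 = sum((1 - ti) * yi / (1 - ei) for yi, ti, ei in zip(y, t, e)) / n
    return mu1 - mu0

# Tiny illustrative dataset with a constant propensity of 0.5,
# for which IPW reduces to the difference in group means.
y = [3.0, 1.0, 2.5, 0.5]
t = [1, 0, 1, 0]
e = [0.5, 0.5, 0.5, 0.5]
ate = ipw_ate(y, t, e)
```

    When the outcome y is measured with additive error of mean zero, this estimator remains unbiased for the ATE, which is the continuous-outcome consistency result highlighted in the abstract.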

  17. Effect of incomplete pedigrees on estimates of inbreeding and inbreeding depression for days to first service and summit milk yield in Holsteins and Jerseys.

    PubMed

    Cassell, B G; Adamec, V; Pearson, R E

    2003-09-01

    A method to measure completeness of pedigree information is applied to populations of Holstein (registered and grade) and Jersey (largely registered) cows. Inbreeding coefficients in which missing ancestors make no contribution were compared to a method using average relationships for missing ancestors. Estimated inbreeding depression was from an animal model that simultaneously adjusted for breeding values. Inbreeding and its standard deviation increased with more information, from 0.04 +/- 0.84 to 1.65 +/- 2.05 and 2.06 +/- 2.22 for grade Holsteins with <31%, 31 to 70%, and 71 to 100% complete five-generation pedigrees. Inbreeding from the method of average relationships for missing ancestors was 2.75 +/- 1.06, 3.10 +/- 2.21, and 2.89 +/- 2.37 for the same groups. Pedigrees of registered Holsteins and Jerseys were over 97% and over 89% complete, respectively. Inbreeding depression in days to first service and summit milk yield was estimated by both methods. Inbreeding depression for days to first service was not consistently significant for grade Holsteins and ranged from -0.37 d per 1% increase in inbreeding (grade Holstein pedigrees <31% complete) to 0.15 d for grade Holstein pedigrees >70% complete. Estimates were similar for both methods. Inbreeding depression for registered Holsteins and Jerseys was positive (undesirable) but not significant for days to first service. Inbreeding depressed summit milk yield significantly in all groups by both methods. Summit milk yield declined by 0.06 to 0.12 kg/d per 1% increase in inbreeding in Holsteins and by 0.08 kg per 1% increase in inbreeding in Jerseys. Pedigrees of grade animals are frequently incomplete and can yield misleading estimates of inbreeding depression. This problem is not overcome by inserting average relationships for missing ancestors in the calculation of inbreeding coefficients.

  18. Selective aerobic alcohol oxidation method for conversion of lignin into simple aromatic compounds

    DOEpatents

    Stahl, Shannon S; Rahimi, Alireza

    2015-03-03

    Described is a method to oxidize lignin or lignin sub-units. The method includes oxidation of a secondary benzylic alcohol in the lignin or lignin sub-unit to the corresponding ketone in the presence of an unprotected primary aliphatic alcohol in the lignin or lignin sub-unit. The optimal catalyst system consists of HNO3 in combination with another Brønsted acid, in the absence of a metal-containing catalyst, thereby yielding a selectively oxidized lignin or lignin sub-unit. The method may be carried out in the presence or absence of additional reagents including TEMPO and TEMPO derivatives.

  19. Growth and yield of patchouli (Pogostemon cablin, Benth) due to mulching and method of fertilizer on rain-fed land

    NASA Astrophysics Data System (ADS)

    Nasruddin; Harahap, E. M.; Hanum, C.; Siregar, L. A. M.

    2018-02-01

    The drought stress that occurs during growth results in a drastic reduction in growth and yield. This study aimed to examine the effects of mulching and the method of fertilizer application in reducing the impact of drought stress on patchouli plants. The experiment was conducted from July to December 2016 using a split-plot design with three replications and two treatment factors. The first factor was mulch, with three levels: M0 (without mulch), M1 (rice straw mulch) and M2 (silver black plastic mulch). The second factor was the method of fertilizer application, with three levels: C1 (once), C2 (twice), C3 (three times). The parameters included plant height, number of branches, number of leaves, root length, wet weight of plant, root-canopy ratio, total chlorophyll, soil temperature and soil moisture content. The results showed that the use of straw mulch reduced the impact of drought stress on patchouli plants. Applying fertilizer twice gave better growth and yield. The use of straw mulch also produced lower soil temperatures and maintained soil moisture content.

  20. Measuring 3D point configurations in pictorial space

    PubMed Central

    Wagemans, Johan; van Doorn, Andrea J; Koenderink, Jan J

    2011-01-01

    We propose a novel method to probe the depth structure of the pictorial space evoked by paintings. The method involves an exocentric pointing paradigm that allows one to find the slope of the geodesic connection between any pair of points in pictorial space. Since the locations of the points in the picture plane are known, this immediately yields the depth difference between the points. A set of depth differences between all pairs of points from an N-point (N > 2) configuration then yields the configuration in depth up to an arbitrary depth offset. Since an N-point configuration implies N(N−1) (ordered) pairs, the number of observations typically far exceeds the number of inferred depths. This yields a powerful check on the geometrical consistency of the results. We report that the remaining inconsistencies are fully accounted for by the spread encountered in repeated observations. This implies that the concept of ‘pictorial space’ indeed has an empirical significance. The method is analyzed and empirically verified in considerable detail. We report large quantitative interobserver differences, though the results of all observers agree modulo a certain affine transformation that describes the basic cue ambiguities. This is expected on the basis of a formal analysis of monocular optical structure. The method will prove useful in a variety of potential applications. PMID:23145227

  1. Total suspended solids concentrations and yields for water-quality monitoring stations in Gwinnett County, Georgia, 1996-2009

    USGS Publications Warehouse

    Landers, Mark N.

    2013-01-01

    The U.S. Geological Survey, in cooperation with the Gwinnett County Department of Water Resources, established a water-quality monitoring program during late 1996 to collect comprehensive, consistent, high-quality data for use by watershed managers. As of 2009, continuous streamflow and water-quality data as well as discrete water-quality samples were being collected for 14 watershed monitoring stations in Gwinnett County. This report provides statistical summaries of total suspended solids (TSS) concentrations for 730 stormflow and 710 base-flow water-quality samples collected between 1996 and 2009 for 14 watershed monitoring stations in Gwinnett County. Annual yields of TSS were estimated for each of the 14 watersheds using methods described in previous studies. TSS yield was estimated using linear, ordinary least-squares regression of TSS and explanatory variables of discharge, turbidity, season, date, and flow condition. The error of prediction for estimated yields ranged from 1 to 42 percent for the stations in this report; however, the actual overall uncertainty of the estimated yields cannot be less than that of the observed yields (± 15 to 20 percent). These watershed yields provide a basis for evaluation of how watershed characteristics, climate, and watershed management practices affect suspended sediment yield.

  2. Second order Møller-Plesset and coupled cluster singles and doubles methods with complex basis functions for resonances in electron-molecule scattering

    DOE PAGES

    White, Alec F.; Epifanovsky, Evgeny; McCurdy, C. William; ...

    2017-06-21

    The method of complex basis functions is applied to molecular resonances at correlated levels of theory. Møller-Plesset perturbation theory at second order and equation-of-motion electron attachment coupled-cluster singles and doubles (EOM-EA-CCSD) methods based on a non-Hermitian self-consistent-field reference are used to compute accurate Siegert energies for shape resonances in small molecules including N2-, CO-, CO2-, and CH2O-. Analytic continuation of complex θ-trajectories is used to compute Siegert energies, and the θ-trajectories of energy differences are found to yield more consistent results than those of total energies. Furthermore, the ability of such methods to accurately compute complex potential energy surfaces is investigated, and the possibility of using EOM-EA-CCSD for Feshbach resonances is explored in the context of e-helium scattering.

  3. On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method

    PubMed Central

    Roux, Benoît; Weare, Jonathan

    2013-01-01

    An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140
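    The maximum entropy biasing referred to above can be illustrated with a toy reweighting problem (an assumption-laden sketch, not the restrained-ensemble machinery of the paper): choose weights w_i proportional to exp(lambda * f_i) over an ensemble so that the reweighted mean of an observable f matches an experimental target, solving for lambda by bisection.

```python
import math

def maxent_weights(f, target, lo=-50.0, hi=50.0, iters=200):
    """Weights w_i ~ exp(lam*f_i) whose reweighted mean of f hits target.
    Assumes min(f) < target < max(f); the mean is monotone in lam."""
    def mean_at(lam):
        w = [math.exp(lam * fi) for fi in f]
        s = sum(w)
        return sum(wi * fi for wi, fi in zip(w, f)) / s
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_at(mid) < target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * fi) for fi in f]
    s = sum(w)
    return [wi / s for wi in w]

f = [0.0, 1.0, 2.0, 3.0]            # observable values over ensemble members
w = maxent_weights(f, target=2.0)   # uniform mean is 1.5; bias it up to 2.0
```

    With several observables there is one lambda per constraint, and the coefficients must be determined jointly, which is the iterative burden the abstract mentions.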

  4. Second order Møller-Plesset and coupled cluster singles and doubles methods with complex basis functions for resonances in electron-molecule scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Alec F.; Epifanovsky, Evgeny; McCurdy, C. William

    The method of complex basis functions is applied to molecular resonances at correlated levels of theory. Møller-Plesset perturbation theory at second order and equation-of-motion electron attachment coupled-cluster singles and doubles (EOM-EA-CCSD) methods based on a non-Hermitian self-consistent-field reference are used to compute accurate Siegert energies for shape resonances in small molecules including N2-, CO-, CO2-, and CH2O-. Analytic continuation of complex θ-trajectories is used to compute Siegert energies, and the θ-trajectories of energy differences are found to yield more consistent results than those of total energies. Furthermore, the ability of such methods to accurately compute complex potential energy surfaces is investigated, and the possibility of using EOM-EA-CCSD for Feshbach resonances is explored in the context of e-helium scattering.

  5. Stress and multiple sclerosis: A systematic review considering potential moderating and mediating factors and methods of assessing stress.

    PubMed

    Briones-Buixassa, Laia; Milà, Raimon; Mª Aragonès, Josep; Bufill, Enric; Olaya, Beatriz; Arrufat, Francesc Xavier

    2015-07-01

    Research about the effects of stress on multiple sclerosis has yielded contradictory results. This study aims to systematically review the evidence focusing on two possible causes: the role of stress assessment and potential moderating and mediating factors. The Web of Knowledge (MEDLINE and Web of Science), Scopus, and PsycINFO databases were searched for relevant articles published from 1900 through December 2014 using the terms "stress*" AND "multiple sclerosis." Twenty-three articles were included. Studies focused on the effect of stress on multiple sclerosis onset (n = 9) were mostly retrospective, and semi-structured interviews and scales yielded the most consistent associations. Studies focused on multiple sclerosis progression (n = 14) were mostly prospective, and self-reported diaries yielded the most consistent results. The most important modifying factors were stressor duration, severity, and frequency; cardiovascular reactivity and heart rate; and social support and escitalopram intake. Future studies should consider the use of prospective design with self-reported evaluations and the study of moderators and mediators related to amount of stress and autonomic nervous system reactivity to determine the effects of stress on multiple sclerosis.

  6. Stress and multiple sclerosis: A systematic review considering potential moderating and mediating factors and methods of assessing stress

    PubMed Central

    Briones-Buixassa, Laia; Milà, Raimon; Mª Aragonès, Josep; Bufill, Enric; Olaya, Beatriz; Arrufat, Francesc Xavier

    2015-01-01

    Research about the effects of stress on multiple sclerosis has yielded contradictory results. This study aims to systematically review the evidence focusing on two possible causes: the role of stress assessment and potential moderating and mediating factors. The Web of Knowledge (MEDLINE and Web of Science), Scopus, and PsycINFO databases were searched for relevant articles published from 1900 through December 2014 using the terms “stress*” AND “multiple sclerosis.” Twenty-three articles were included. Studies focused on the effect of stress on multiple sclerosis onset (n = 9) were mostly retrospective, and semi-structured interviews and scales yielded the most consistent associations. Studies focused on multiple sclerosis progression (n = 14) were mostly prospective, and self-reported diaries yielded the most consistent results. The most important modifying factors were stressor duration, severity, and frequency; cardiovascular reactivity and heart rate; and social support and escitalopram intake. Future studies should consider the use of prospective design with self-reported evaluations and the study of moderators and mediators related to amount of stress and autonomic nervous system reactivity to determine the effects of stress on multiple sclerosis. PMID:28070374

  7. The effects of nutrient solution sterilization on the growth and yield of hydroponically grown lettuce

    NASA Technical Reports Server (NTRS)

    Schwartzkopf, S. H.; Dudzinski, D.; Minners, R. S.

    1987-01-01

    Two methods of removing bacteria from hydroponic nutrient solution [ultraviolet (UV) radiation and submicronic filtration] were evaluated for efficiency and for their effects on lettuce (Lactuca sativa L.) production. Both methods were effective in removing bacteria, but at high intensity the ultraviolet sterilizer significantly inhibited the production of plants grown in the treated solution. Bacterial removal by lower-intensity UV or a submicronic filter seemed to promote plant growth slightly, but showed no consistent, statistically significant effect.

  8. Inference regarding multiple structural changes in linear models with endogenous regressors☆

    PubMed Central

    Hall, Alastair R.; Han, Sanggohn; Boldea, Otilia

    2012-01-01

    This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US. PMID:23805021
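    The consistency result above concerns the 2SLS criterion; the estimator itself can be sketched in the just-identified case (the data below are illustrative and noise-free, not the paper's Phillips-curve application):

```python
# Just-identified IV/2SLS slope: beta = cov(z, y) / cov(z, x).

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

def iv_slope(y, x, z):
    """2SLS slope with one endogenous regressor x and one instrument z."""
    return cov(z, y) / cov(z, x)

z = [1.0, 2.0, 3.0, 4.0]      # instrument
x = [2.0, 4.0, 6.0, 8.0]      # endogenous regressor (x = 2z, noise-free)
y = [6.0, 12.0, 18.0, 24.0]   # outcome (y = 3x)
beta = iv_slope(y, x, z)      # recovers the structural slope
```

    In the paper's setting, this criterion is minimized over candidate break fractions as well, and it is that minimization, rather than a GMM criterion, that yields consistent break-fraction estimates.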

  9. Recent Developments in the Formability of Aluminum Alloys

    NASA Astrophysics Data System (ADS)

    Banabic, Dorel; Cazacu, Oana; Paraianu, Liana; Jurco, Paul

    2005-08-01

    The paper presents a few recent contributions brought by the authors in the field of the formability of aluminum alloys. A new concept for calculating Forming Limit Diagrams (FLD) using the finite element method is presented. The article presents a new strategy for calculating both branches of an FLD, using a Hutchinson-Neale model implemented in a finite element code. The simulations have been performed with Abaqus/Standard. The constitutive model has been implemented using a UMAT subroutine. The plastic anisotropy of the sheet metal is described by the Cazacu-Barlat and the BBC2003 yield criteria. The theoretical predictions have been compared with the results given by the classical Hutchinson-Neale method and also with experimental data for different aluminum alloys. The comparison proves the capability of the finite element method to predict the strain localization. A computer program used for interactive calculation and graphical representation of different Yield Loci and Forming Limit Diagrams has also been developed. The program is based on a Hutchinson-Neale model. Different yield criteria (Hill 1948, Barlat-Lian and BBC 2003) are implemented in this model. The program consists of three modules: a graphical interface for input, a module for the identification and visualization of the yield surfaces, and a module for calculating and visualizing the forming limit curves. A useful facility offered by the program is the possibility to perform the sensitivity analysis both for the yield surface and the forming limit curves. The numerical results can be compared with experimental data, using the import/export facilities included in the program.

  10. Recent Developments in the Formability of Aluminum Alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banabic, Dorel; Paraianu, Liana; Jurco, Paul

    The paper presents a few recent contributions brought by the authors in the field of the formability of aluminum alloys. A new concept for calculating Forming Limit Diagrams (FLD) using the finite element method is presented. The article presents a new strategy for calculating both branches of an FLD, using a Hutchinson-Neale model implemented in a finite element code. The simulations have been performed with Abaqus/Standard. The constitutive model has been implemented using a UMAT subroutine. The plastic anisotropy of the sheet metal is described by the Cazacu-Barlat and the BBC2003 yield criteria. The theoretical predictions have been compared with the results given by the classical Hutchinson-Neale method and also with experimental data for different aluminum alloys. The comparison proves the capability of the finite element method to predict the strain localization. A computer program used for interactive calculation and graphical representation of different Yield Loci and Forming Limit Diagrams has also been developed. The program is based on a Hutchinson-Neale model. Different yield criteria (Hill 1948, Barlat-Lian and BBC 2003) are implemented in this model. The program consists of three modules: a graphical interface for input, a module for the identification and visualization of the yield surfaces, and a module for calculating and visualizing the forming limit curves. A useful facility offered by the program is the possibility to perform the sensitivity analysis both for the yield surface and the forming limit curves. The numerical results can be compared with experimental data, using the import/export facilities included in the program.

  11. Electromagnetic Navigation Diagnostic Bronchoscopy

    PubMed Central

    Gildea, Thomas R.; Mazzone, Peter J.; Karnak, Demet; Meziane, Moulay; Mehta, Atul C.

    2006-01-01

    Rationale: Electromagnetic navigation bronchoscopy using the superDimension/Bronchus system is a novel method to increase the diagnostic yield of peripheral and mediastinal lung lesions. Objectives: A prospective, open-label, single-center, pilot study was conducted to determine the ability of electromagnetic navigation bronchoscopy to sample peripheral lung lesions and mediastinal lymph nodes with standard bronchoscopic instruments and to demonstrate safety. Methods: Electromagnetic navigation bronchoscopy was performed using the superDimension/Bronchus system, consisting of an electromagnetic board, a position sensor encapsulated in the tip of a steerable probe, an extended working channel, and real-time reconstruction of previously acquired multiplanar computed tomography images. The final distance of the steerable probe to the lesion, the expected error based on the actual and virtual markers, and the procedure yield were recorded. Measurements: 60 subjects were enrolled between December 2004 and September 2005. Mean navigation times were 7 ± 6 min and 2 ± 2 min for peripheral lesions and lymph nodes, respectively. The steerable probe tip was navigated to the target lung area in all cases. The mean peripheral lesion and lymph node sizes were 22.8 ± 12.6 mm and 28.1 ± 12.8 mm, respectively. Yield was determined by results obtained during the bronchoscopy per patient. Results: The yield per procedure was 74% and 100% for peripheral lesions and lymph nodes, respectively. A diagnosis was obtained in 80.3% of bronchoscopic procedures. A definitive diagnosis of lung malignancy was made in 74.4% of subjects. Pneumothorax occurred in two subjects. Conclusion: Electromagnetic navigation bronchoscopy is a safe method for sampling peripheral and mediastinal lesions with high diagnostic yield independent of lesion size and location. PMID:16873767

  12. Forecasting volcanic air pollution in Hawaii: Tests of time series models

    NASA Astrophysics Data System (ADS)

    Reikard, Gordon

    2012-12-01

Volcanic air pollution, known as vog (volcanic smog), has recently become a major issue in the Hawaiian Islands. Vog is caused when volcanic gases react with oxygen and water vapor. It consists of a mixture of gases and aerosols, including sulfur dioxide and other sulfates. The source of the volcanic gases is the continuing eruption of Kilauea. This paper studies the prediction of vog using statistical methods. The data sets include time series for SO2 and SO4 over locations spanning the west, south and southeast coasts of Hawaii, and the city of Hilo. The forecasting models include regressions, neural networks, and a frequency domain algorithm. The most typical pattern for the SO2 data is for the frequency domain method to yield the most accurate forecasts over the first few hours and at the 24 h horizon. The neural net places second. For the SO4 data, the results are less consistent. At two sites, the neural net generally yields the most accurate forecasts, except at the 1 and 24 h horizons, where the frequency domain technique wins narrowly. At one site, the neural net and the frequency domain algorithm yield comparable errors over the first 5 h, after which the neural net dominates. At the remaining site, the frequency domain method is more accurate over the first 4 h, after which the neural net achieves smaller errors. For all the series, the average errors are well within one standard deviation of the actual data at all horizons. However, the errors also show irregular outliers. In essence, the models capture the central tendency of the data, but are less effective in predicting the extreme events.
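The model comparison described above can be illustrated with a toy version. The snippet below uses synthetic hourly data, not the paper's vog series: a single 24 h harmonic fitted by least squares stands in for a frequency-domain method, compared against a naive persistence baseline over a 24 h horizon.

```python
import numpy as np

# Toy comparison on synthetic hourly SO2-like data (illustrative only).
rng = np.random.default_rng(1)
t = np.arange(24 * 14)                               # two weeks, hourly
series = 50 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 5, t.size)

train, holdout = series[:-24], series[-24:]
tt = np.arange(train.size)

# Harmonic regression: y ~ a + b*sin(w*t) + c*cos(w*t), w = 2*pi/24
w = 2 * np.pi / 24
X = np.column_stack([np.ones_like(tt), np.sin(w * tt), np.cos(w * tt)])
coef, *_ = np.linalg.lstsq(X, train, rcond=None)

th = np.arange(train.size, train.size + 24)          # next 24 hours
Xh = np.column_stack([np.ones_like(th), np.sin(w * th), np.cos(w * th)])
harmonic_fc = Xh @ coef
persistence_fc = np.full(24, train[-1])              # naive baseline

mae = lambda fc: np.mean(np.abs(fc - holdout))       # mean absolute error
print(f"harmonic MAE: {mae(harmonic_fc):.2f}, "
      f"persistence MAE: {mae(persistence_fc):.2f}")
```

On strongly periodic data like this, the harmonic fit beats persistence at longer horizons, mirroring the pattern the abstract reports for the frequency domain method at 24 h.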

  13. The Comparative Effectiveness of Different Item Analysis Techniques in Increasing Change Score Reliability.

    ERIC Educational Resources Information Center

    Crocker, Linda M.; Mehrens, William A.

    Four new methods of item analysis were used to select subsets of items which would yield measures of attitude change. The sample consisted of 263 students at Michigan State University who were tested on the Inventory of Beliefs as freshmen and retested on the same instrument as juniors. Item change scores and total change scores were computed for…

  14. Photoionization and Recombination

    NASA Technical Reports Server (NTRS)

    Nahar, Sultana N.

    2000-01-01

Theoretically self-consistent calculations for photoionization and (e + ion) recombination are described. The same eigenfunction expansion for the ion is employed in coupled channel calculations for both processes, thus ensuring consistency between cross sections and rates. The theoretical treatment of (e + ion) recombination subsumes both the non-resonant recombination ("radiative recombination") and the resonant recombination ("dielectronic recombination") processes in a unified scheme. In addition to the total, unified recombination rates, level-specific recombination rates and photoionization cross sections are obtained for a large number of atomic levels. Both relativistic Breit-Pauli and non-relativistic LS coupling calculations are carried out in the close coupling approximation using the R-matrix method. Although the calculations are computationally intensive, they yield nearly all photoionization and recombination parameters needed for astrophysical photoionization models with higher precision than hitherto possible, estimated at about 10-20% from comparison with experimentally available data (including experimentally derived DR rates). Results are electronically available for over 40 atoms and ions. Photoionization and recombination of He- and Li-like C and Fe are described for X-ray modeling. The unified method yields total and complete (e + ion) recombination rate coefficients that cannot otherwise be obtained theoretically or experimentally.

  15. Probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Wing, Kam Liu

    1987-01-01

In the Probabilistic Finite Element Method (PFEM), finite element methods have been efficiently combined with second-order perturbation techniques to provide an effective method for informing the designer of the range of response which is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than with simply a single response curve. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probabilistic density functions possess decaying tails. By incorporating the computational techniques developed over the past 3 years for efficiency, the probabilistic finite element methods are capable of handling large systems with many sources of uncertainty. Sample results are given for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subject to cyclic loadings, with the yield stress modeled as a random field.
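The mean/variance propagation at the heart of PFEM can be sketched on a one-degree-of-freedom example. The snippet below is a first-order perturbation on a single spring (a toy illustration, not the authors' finite element formulation; all numbers are assumed), with a Monte Carlo cross-check.

```python
import numpy as np

# One-variable sketch of perturbation-based uncertainty propagation:
# response u = F/k of a spring whose stiffness k is random.
F = 1000.0                        # deterministic load (N)
k_mean = 2.0e5                    # mean stiffness (N/m)
k_var = (0.05 * k_mean) ** 2      # 5% coefficient of variation

u_mean = F / k_mean               # first-order mean response
du_dk = -F / k_mean**2            # sensitivity du/dk at the mean
u_var = du_dk**2 * k_var          # Var(u) ~ (du/dk)^2 * Var(k)

# Monte Carlo cross-check of the perturbation estimate
rng = np.random.default_rng(0)
samples = F / rng.normal(k_mean, np.sqrt(k_var), 100_000)
print(u_mean, np.sqrt(u_var))            # perturbation estimate
print(samples.mean(), samples.std())     # sampling estimate
```

In PFEM the same expansion is applied to the full stiffness matrix and load vector, so the mean and variance of the nodal response come from one deterministic solve plus sensitivity solves, rather than thousands of samples.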

  16. Extraction of basil leaf (Ocimum canum) oleoresin with ethyl acetate solvent by using the soxhletation method

    NASA Astrophysics Data System (ADS)

    Tambun, R.; Purba, R. R. H.; Ginting, H. K.

    2017-09-01

The goal of this research is to produce oleoresin from basil leaves (Ocimum canum) by the soxhletation method using ethyl acetate as solvent. Basil is commonly used in cooking as a fresh vegetable. Basil contains essential oils and oleoresin, which are used as flavouring agents in food, in cosmetics, and as ingredients in traditional medicine. The extraction method commonly used to obtain oleoresin is maceration. The drawbacks of this method are the large amount of solvent required and the long time needed to extract the raw material. To resolve these problems and to produce more oleoresin, we used the soxhletation method with combinations of extraction time and material-to-solvent ratio. The analysis consists of yield, density, refractive index, and essential oil content. The best treatment for basil leaf oleoresin extraction is a material-to-solvent ratio of 1:6 (w/v) with an extraction time of 6 hours. Under these conditions, the yield of basil oleoresin is 20.152%, with a density of 0.9688 g/cm³, a refractive index of 1.502, an essential oil content of 15.77%, and a dark-green colour.

  17. Comparison of non-toxic methods for creating beta-carotene encapsulated in PMMA nanoparticles

    NASA Astrophysics Data System (ADS)

    Dobrzanski, Christopher D.

Nano/microcapsules are becoming more prevalent in industries such as drug delivery and cosmetics. Current methods of particle formation often use toxic or carcinogenic/mutagenic/reprotoxic (CMR) chemicals. This study intends to improve upon existing methods of particle formation and compare their effectiveness in terms of entrapment efficiency, mean particle size, and yield, utilizing only non-toxic chemicals. The solvent evaporation (SE), spontaneous emulsification, and spontaneous emulsion solvent diffusion (SESD) methods were compared in systems containing the green solvents ethyl acetate, dimethyl carbonate or acetone. PMMA particles containing encapsulated beta-carotene, an ultraviolet-sensitive substance, were synthesized. The aim was to produce particles with minimum mean size and maximum yield and entrapment of beta-carotene. The mass of the water phase, the mass of the polymer and the pumping or blending rate were varied for each synthesis method. The smallest particle sizes for SE and SESD were both obtained with the middle water-phase masses, 200 g and 100 g respectively. The particles obtained from the larger water phase in SESD were much bigger, about 5 microns in diameter, even larger than those obtained from SE. When the mass of PMMA used in each synthesis method was varied, more PMMA led, as expected, to larger particles. Increasing the blending rate in SE from 6,500 to 13,500 rpm had a minimal effect on average particle size, but the higher shear resulted in highly polydisperse particles (PDI = 0.87). Decreasing the pump rate in SESD made the particles smaller but lowered the entrapment efficiency. The entrapment efficiencies were generally higher for the larger particles within a mode; thus, minimizing particle size and maximizing entrapment proved to be somewhat contradictory goals. The solvent evaporation method was very consistent in the values of mean particle size, yield, and entrapment efficiency obtained. Comparing the synthesis methods, the smallest particles with the highest yield and entrapment efficiency were generated by the spontaneous emulsification method.

  18. Lineaments on Skylab photographs: Detection, mapping, and hydrologic significance in central Tennessee

    NASA Technical Reports Server (NTRS)

    Moore, G. K. (Principal Investigator)

    1976-01-01

The author has identified the following significant results. Lineaments were detected on Skylab photographs by stereo viewing, projection viewing, and composite viewing. Sixty-nine percent more lineaments were found by stereo viewing than by projection, but segments of projection lineaments are longer; the total length of lineaments found by these two methods is nearly the same. Most Skylab lineaments consist of topographic depressions: stream channel alignments, straight valley walls, elongated swales, and belts where sinkholes are abundant. Most of the remainder are vegetation alignments. Lineaments are most common in dissected areas having a thin soil cover. Results of test drilling show: (1) the median yield of test wells on Skylab lineaments is about six times the median yield of all existing wells; (2) three out of seven wells on Skylab lineaments yield more than 6.3 L/s (110 gal/min); (3) low yields are possible on lineaments as well as in other favorable locations; and (4) the largest well yields can be obtained at well locations on Skylab lineaments that also are favorably located with respect to topography and geologic structure, and are in the vicinity of wells with large yields.

  19. Deuterium-tritium neutron yield measurements with the 4.5 m neutron-time-of-flight detectors at NIF.

    PubMed

    Moran, M J; Bond, E J; Clancy, T J; Eckart, M J; Khater, H Y; Glebov, V Yu

    2012-10-01

The first several campaigns of laser fusion experiments at the National Ignition Facility (NIF) included a family of high-sensitivity scintillator/photodetector neutron-time-of-flight (nTOF) detectors for measuring deuterium-deuterium (DD) and deuterium-tritium (DT) neutron yields. The detectors provided consistent neutron yield (Y_n) measurements from below 10^9 (DD) to nearly 10^15 (DT). The detectors initially demonstrated detector-to-detector Y_n precisions better than 5%, but lacked in situ absolute calibrations. Recent experiments at NIF have now provided in situ DT yield calibration data that establish the absolute sensitivity of the 4.5 m nTOF detector with an accuracy of ±10% and a precision of ±1%. The 4.5 m nTOF calibration measurements have also helped to establish improved detector impulse response functions and data analysis methods, which have contributed to improving the accuracy of the Y_n measurements. These advances have also helped to extend the usefulness of nTOF measurements of ion temperature and the downscattered neutron ratio (neutron yield in 10-12 MeV divided by yield in 13-15 MeV) with other nTOF detectors.
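The downscattered neutron ratio quoted above is a ratio of yields integrated over two energy windows. As a hedged sketch (synthetic trace, non-relativistic kinematics for simplicity; only the 4.5 m flight path is taken from the abstract), one can convert time of flight to energy and integrate:

```python
import numpy as np

# Sketch of a downscattered-ratio (DSR) computation from a TOF trace.
M_N = 939.565          # neutron rest-mass energy (MeV)
C = 2.998e8            # speed of light (m/s)
D = 4.5                # flight path (m), as in the abstract

def tof_to_energy(t):
    """Kinetic energy (MeV) for a neutron arriving after flight time t (s)."""
    beta = (D / t) / C
    return 0.5 * M_N * beta**2

def energy_to_tof(e):
    """Flight time (s) for a neutron of kinetic energy e (MeV)."""
    return D / (C * np.sqrt(2 * e / M_N))

# Synthetic trace: a 14.1 MeV primary peak plus a flat downscattered tail
t = np.linspace(energy_to_tof(20.0), energy_to_tof(5.0), 2000)
energy = tof_to_energy(t)
counts = 1e5 * np.exp(-0.5 * ((energy - 14.1) / 0.4) ** 2) + 50.0

primary = counts[(energy >= 13.0) & (energy <= 15.0)].sum()
downscattered = counts[(energy >= 10.0) & (energy <= 12.0)].sum()
dsr = downscattered / primary
print(f"DSR = {dsr:.4f}")
```

A real analysis would first deconvolve the detector impulse response mentioned in the abstract; this sketch only shows the window-ratio step.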

  20. Rate constant for the reaction SO + BrO yields SO2 + Br

    NASA Technical Reports Server (NTRS)

    Brunning, J.; Stief, L.

    1986-01-01

The rate of the radical-radical reaction SO + BrO yields SO2 + Br has been determined at 298 K in a discharge flow system near 1 torr pressure with detection of SO and BrO via collision-free sampling mass spectrometry. The rate constant was determined using two different methods: measuring the decay of SO radicals in the presence of an excess of BrO and measuring the decay of BrO radicals in excess SO. The results from the two methods are in reasonable agreement and the simple mean of the two values gives the recommended rate constant at 298 K, k = (5.7 ± 2.0) × 10⁻¹¹ cm³ s⁻¹. This represents the first determination of this rate constant and it is consistent with a previously derived lower limit based on SO2 formation. Comparison is made with other radical-radical reactions involving SO or BrO. The reaction SO + BrO yields SO2 + Br is of interest for models of the upper atmosphere of the earth and provides a potential coupling between atmospheric sulfur and bromine chemistry.
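The excess-reactant analysis behind such a measurement can be sketched as a pseudo-first-order fit. The snippet below uses synthetic, noise-free data; only the reported k is taken from the abstract, while [BrO] and [SO]0 are assumed values for illustration.

```python
import numpy as np

# Pseudo-first-order sketch: with BrO in large excess,
# [SO](t) = [SO]0 * exp(-k * [BrO] * t), so the slope of ln[SO] versus t
# gives k' = k*[BrO], and k = k'/[BrO].
k_true = 5.7e-11                       # cm^3 molecule^-1 s^-1 (reported value)
bro = 5.0e12                           # excess [BrO] (molecule cm^-3), assumed
t = np.linspace(0, 0.02, 50)           # reaction time (s)
so = 1.0e11 * np.exp(-k_true * bro * t)

slope, _ = np.polyfit(t, np.log(so), 1)
k_fit = -slope / bro
print(f"k = {k_fit:.2e} cm^3 s^-1")    # recovers the input 5.7e-11
```

Running the same fit with SO in excess and averaging the two k values is the "simple mean of the two methods" step described in the abstract.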

  1. Characterization of cap-shaped silver particles for surface-enhanced fluorescence effects.

    PubMed

    Yamaguchi, Tetsuji; Kaya, Takatoshi; Takei, Hiroyuki

    2007-05-15

Surface-enhanced fluorescence has many potentially desirable properties as an analytical method for medical diagnostics, but the effect observed so far is rather modest and only in conjunction with fluorophores with low quantum yields. Coupled with the fact that preparation of suitable surfaces at low cost has been difficult, this has limited its utility. Here we report a novel method for forming uniform and reproducible surfaces with respectable enhancement ratios even for high-quantum-yield fluorophores. Formation of dense surface-adsorbed latex spheres on a flat surface via partial aggregation, followed by evaporation of silver, results in a film consisting of cap-shaped silver particles at high densities. Binding of fluorescent biomolecules, either through physisorption or antigen-antibody reaction, was performed, and enhancements close to 50 have been observed with fluorophores such as R-phycoerythrin and Alexa 546-labeled bovine serum albumin, both of which have quantum yields around 0.8. We attribute this to the unique shape of the silver particles and the presence of abundant gaps among adjacent particles at high densities. The effectiveness of the new surface is also demonstrated with IL-6 sandwich assays.

  2. Effect of ultrasonic treatment on the polyphenol content and antioxidant capacity of extract from defatted hemp, flax and canola seed cakes.

    PubMed

    Teh, Sue-Siang; Birch, Edward John

    2014-01-01

The effectiveness of ultrasonic extraction of phenolics and flavonoids from defatted hemp, flax and canola seed cakes was compared to the conventional extraction method. Ultrasonic treatment at room temperature increased the polyphenol extraction yield and antioxidant capacity two-fold over the conventional extraction method. Different combinations of ultrasonic treatment parameters, consisting of solvent volume (25, 50, 75 and 100 mL), extraction time (20, 30 and 40 min) and temperature (40, 50, 60 and 70 °C), were selected for polyphenol extraction from the seed cakes. The chosen parameters had a significant effect (p < 0.05) on the polyphenol extraction yield and subsequent antioxidant capacity. Application of heat during ultrasonic extraction yielded higher polyphenol content in extracts than non-heated extraction. From an orthogonal design test, the best combination of parameters was a solvent volume of 50 mL, an extraction time of 20 min and an ultrasonic temperature of 70 °C. Copyright © 2013. Published by Elsevier B.V.

  3. Depth profile of production yields of ^{nat}Pb(p, xn)^{206,205,204,203,202,201}Bi nuclear reactions

    NASA Astrophysics Data System (ADS)

    Mokhtari Oranj, Leila; Jung, Nam-Suk; Kim, Dong-Hyun; Lee, Arim; Bae, Oryun; Lee, Hee-Seock

    2016-11-01

Experimental and simulation studies on the depth profiles of the production yields of ^{nat}Pb(p, xn)^{206,205,204,203,202,201}Bi nuclear reactions were carried out. Irradiation experiments were performed at the high-intensity proton linac facility (KOMAC) in Korea. The targets, irradiated by 100-MeV protons, were arranged in a stack consisting of natural Pb, Al and Au foils and Pb plates. The proton beam intensity was determined by the activation analysis method using the ^{27}Al(p, 3p1n)^{24}Na, ^{197}Au(p, p1n)^{196}Au, and ^{197}Au(p, p3n)^{194}Au monitor reactions and also by the Gafchromic film dosimetry method. The yields of the radionuclides produced in the ^{nat}Pb activation foils and monitor foils were measured with an HPGe spectroscopy system. Monte Carlo simulations were performed with the FLUKA, PHITS/DCHAIN-SP, and MCNPX/FISPACT codes, and the calculated data were compared with the experimental results. A satisfactory agreement was observed between the present experimental data and the simulations.

  4. Integration process of fermentation and liquid biphasic flotation for lipase separation from Burkholderia cepacia.

    PubMed

    Sankaran, Revathy; Show, Pau Loke; Lee, Sze Ying; Yap, Yee Jiun; Ling, Tau Chuan

    2018-02-01

Liquid biphasic flotation (LBF) is an advanced recovery method that has been effectively applied to biomolecule extraction. The objective of this investigation is to integrate the fermentation and extraction of lipase from Burkholderia cepacia in a flotation system. An initial study was conducted to compare bacterial growth and lipase production in the flotation and shaker systems. The bacteria showed quicker growth and higher lipase yield in the flotation system. The integrated process for lipase separation was then investigated, and the results showed a high separation efficiency of 92.29% and a yield of 95.73%. Upscaling of the flotation system gave results consistent with the lab scale, with 89.53% efficiency and 93.82% yield. Combining the upstream and downstream processes in a single system accelerates product formation, improves product yield and facilitates downstream processing. This integrated system demonstrated its potential for biomolecule fermentation and separation, possibly opening new opportunities for industrial production. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Reference point detection for camera-based fingerprint image based on wavelet transformation.

    PubMed

    Khalil, Mohammed S

    2015-04-30

Fingerprint recognition systems essentially require core-point detection prior to fingerprint matching. The core-point is used as a reference point to align the fingerprint with a template database. When processing a larger fingerprint database, it is necessary to consider the core-point during feature extraction. Numerous core-point detection methods are available and have been reported in the literature. However, these methods are generally applied to scanner-based images. Hence, this paper explores the feasibility of applying a core-point detection method to fingerprint images obtained using a camera phone. The proposed method utilizes a discrete wavelet transform to extract the ridge information from a color image. The performance of the proposed method is evaluated in terms of accuracy and consistency; these two indicators are calculated automatically by comparing the method's output with the defined core points. The proposed method was tested on two data sets, collected from 13 different subjects in controlled and uncontrolled environments. In the controlled environment, the proposed method achieved a detection rate of 82.98%; in the uncontrolled environment, it yielded a detection rate of 78.21%. The proposed method thus yields promising results on the collected image database and outperformed an existing method.

  6. A comparison of two methods for measuring vessel length in woody plants.

    PubMed

    Pan, Ruihua; Geng, Jing; Cai, Jing; Tyree, Melvin T

    2015-12-01

Vessel lengths are important to plant hydraulic studies, but are not often reported because of the time required to obtain measurements. This paper compares the fast dynamic method (air injection method) with the slower but traditional static method (rubber injection method). Our hypothesis was that the dynamic method should yield a larger mean vessel length than the static method. Vessel length was measured by both methods in current-year stems of Acer, Populus, Vitis and Quercus, representing short- to long-vessel species. The hypothesis was verified. The reason for the consistently larger values of vessel length is that the dynamic method measures air flow rates in cut-open vessels. The Hagen-Poiseuille law predicts that the air flow rate should depend on the product of the number of cut-open vessels and the fourth power of vessel diameter. An argument is advanced that the dynamic method is more appropriate because it measures the length of the vessels that contribute most to hydraulic flow. If all vessels had the same vessel length distribution regardless of diameter, then both methods should yield the same average length; since they did not, this supports the hypothesis that large-diameter vessels might be longer than small-diameter vessels in most species. © 2015 John Wiley & Sons Ltd.

  7. The Zugspitze radiative closure experiment for quantifying water vapor absorption over the terrestrial and solar infrared - Part 2: Accurate calibration of high spectral-resolution infrared measurements of surface solar radiation

    NASA Astrophysics Data System (ADS)

    Reichert, Andreas; Rettinger, Markus; Sussmann, Ralf

    2016-09-01

    Quantitative knowledge of water vapor absorption is crucial for accurate climate simulations. An open science question in this context concerns the strength of the water vapor continuum in the near infrared (NIR) at atmospheric temperatures, which is still to be quantified by measurements. This issue can be addressed with radiative closure experiments using solar absorption spectra. However, the spectra used for water vapor continuum quantification have to be radiometrically calibrated. We present for the first time a method that yields sufficient calibration accuracy for NIR water vapor continuum quantification in an atmospheric closure experiment. Our method combines the Langley method with spectral radiance measurements of a high-temperature blackbody calibration source (< 2000 K). The calibration scheme is demonstrated in the spectral range 2500 to 7800 cm-1, but minor modifications to the method enable calibration also throughout the remainder of the NIR spectral range. The resulting uncertainty (2σ) excluding the contribution due to inaccuracies in the extra-atmospheric solar spectrum (ESS) is below 1 % in window regions and up to 1.7 % within absorption bands. The overall radiometric accuracy of the calibration depends on the ESS uncertainty, on which at present no firm consensus has been reached in the NIR. However, as is shown in the companion publication Reichert and Sussmann (2016), ESS uncertainty is only of minor importance for the specific aim of this study, i.e., the quantification of the water vapor continuum in a closure experiment. The calibration uncertainty estimate is substantiated by the investigation of calibration self-consistency, which yields compatible results within the estimated errors for 91.1 % of the 2500 to 7800 cm-1 range. Additionally, a comparison of a set of calibrated spectra to radiative transfer model calculations yields consistent results within the estimated errors for 97.7 % of the spectral range.
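The Langley component of such a calibration can be sketched as a straight-line fit of log signal versus airmass. The snippet below uses synthetic data; the optical depth and top-of-atmosphere signal V0 are assumed values, not figures from the study.

```python
import numpy as np

# Langley extrapolation sketch: the measured signal follows
# V = V0 * exp(-tau * m), so a linear fit of ln(V) against airmass m
# yields ln(V0) as the intercept, i.e. the instrument response
# extrapolated to zero airmass (outside the atmosphere).
V0_true, tau_true = 2.5, 0.12              # assumed values
m = np.linspace(1.2, 5.0, 30)              # airmass values over one morning
rng = np.random.default_rng(2)
V = V0_true * np.exp(-tau_true * m) * (1 + rng.normal(0, 0.002, m.size))

slope, intercept = np.polyfit(m, np.log(V), 1)
V0_est = np.exp(intercept)                 # Langley-extrapolated signal
tau_est = -slope                           # fitted optical depth
print(f"tau = {tau_est:.4f}, V0 = {V0_est:.4f}")
```

The study combines this relative Langley calibration with blackbody radiance measurements to pin down the absolute scale, which a Langley fit alone cannot provide.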

  8. Ultrasound-assisted extraction and purification of schisandrin B from Schisandra chinensis (Turcz.) Baill seeds: optimization by response surface methodology.

    PubMed

    Zhang, Y B; Wang, L H; Zhang, D Y; Zhou, L L; Guo, Y X

    2014-03-01

The objective of this study is to develop a process consisting of ultrasonic-assisted extraction, silica-gel column chromatography and crystallization to optimize pilot-scale recovery of schisandrin B (SAB) from Schisandra chinensis seeds. The effects of five independent variables, including liquid-to-solid ratio, ethanol concentration, ultrasonic power, extraction time, and temperature, on the SAB yield were evaluated with a fractional factorial design (FFD). The FFD results showed that the ethanol concentration was the only significant factor for the yield of SAB. Then, with the liquid-to-solid ratio fixed at 5 mL/g and the ultrasonic power at 600 W, the other three parameters were further optimized by means of response surface methodology (RSM). The RSM results revealed that the optimal conditions were 95% ethanol, 60 °C and 70 min. The average experimental SAB yield under the optimum conditions was 5.80 mg/g, consistent with the predicted value of 5.83 mg/g. Subsequently, a silica gel chromatographic process was used to prepare the SAB-enriched extract with petroleum ether/acetone (95:5, v/v) as eluent. After final crystallization, 1.46 g of SAB with a purity of 99.4% and an overall recovery of 57.1% was obtained from 400 g of seed powder. This method provides an efficient and low-cost route to SAB purification for pharmaceutical industrial applications. Copyright © 2013 Elsevier B.V. All rights reserved.
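The RSM fitting step can be sketched with an ordinary least-squares quadratic surface. The snippet below uses synthetic design data deliberately centered near the reported optimum (60 °C, 70 min, ~5.8 mg/g), not the paper's measurements.

```python
import numpy as np

# Response-surface sketch: fit a full quadratic model of yield versus
# extraction temperature and time, then locate the predicted optimum.
rng = np.random.default_rng(3)
temp = rng.uniform(40, 70, 40)            # extraction temperature (degC)
minutes = rng.uniform(30, 90, 40)         # extraction time (min)
sab_yield = (5.8 - 0.002 * (temp - 60) ** 2
             - 0.0005 * (minutes - 70) ** 2
             + rng.normal(0, 0.02, 40))   # synthetic SAB yield (mg/g)

# Design matrix for a second-order (quadratic + interaction) model
X = np.column_stack([np.ones(40), temp, minutes, temp * minutes,
                     temp**2, minutes**2])
beta, *_ = np.linalg.lstsq(X, sab_yield, rcond=None)

# Evaluate the fitted surface on a grid and report the predicted optimum
tg, tm = np.meshgrid(np.linspace(40, 70, 61), np.linspace(30, 90, 61))
pred = (beta[0] + beta[1] * tg + beta[2] * tm + beta[3] * tg * tm
        + beta[4] * tg**2 + beta[5] * tm**2)
i = np.unravel_index(pred.argmax(), pred.shape)
print(f"predicted optimum: {tg[i]:.1f} degC, {tm[i]:.1f} min, "
      f"{pred[i]:.2f} mg/g")
```

In the study the surface is fitted to measured yields from the designed experiments; here the grid search simply recovers the optimum built into the synthetic surface.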

  9. An Optimized Centrifugal Method for Separation of Semen from Superabsorbent Polymers for Forensic Analysis.

    PubMed

    Camarena, Lucy R; Glasscock, Bailey K; Daniels, Demi; Ackley, Nicolle; Sciarretta, Marybeth; Seashols-Williams, Sarah J

    2017-03-01

Connection of a perpetrator to a sexual assault is best performed through the confirmed presence of semen, thereby proving sexual contact. Evidentiary items can include sanitary napkins or diapers containing superabsorbent polymers (SAPs), complicating spermatozoa visualization and DNA analysis. In this report, we evaluated the impact of SAPs on the current forensic DNA workflow and developed an efficient centrifugal protocol for separating spermatozoa from SAP material. The optimized filtration method was compared to the common practice of excising the top layer only, and taking a core sample of the substrate resulted in significantly higher sperm yields. Direct isolation from the SAP-containing materials without filtering resulted in 20% sample failure; additionally, SAP material was observed in the final eluted DNA samples, causing physical interference. Thus, use of the described centrifugal-filtering method is a simple preliminary step that improves spermatozoa visualization and enables more consistent DNA yields, while also avoiding SAP interference. © 2016 American Academy of Forensic Sciences.

  10. GC-FID coupled with chemometrics for quantitative and chemical fingerprinting analysis of Alpinia oxyphylla oil.

    PubMed

    Miao, Qing; Kong, Weijun; Zhao, Xiangsheng; Yang, Shihai; Yang, Meihua

    2015-01-01

    Analytical methods for quantitative analysis and chemical fingerprinting of volatile oils from Alpinia oxyphylla were established. The volatile oils were prepared by hydrodistillation, and the yields were between 0.82% and 1.33%. The developed gas chromatography-flame ionization detection (GC-FID) method showed good specificity, linearity, reproducibility, stability and recovery, and could be used satisfactorily for quantitative analysis. The results showed that the volatile oils contained 2.31-77.30 μL/mL p-cymene and 12.38-99.34 mg/mL nootkatone. A GC-FID fingerprinting method was established, and the profiles were analyzed using chemometrics. GC-MS was used to identify the principal compounds in the GC-FID profiles. The profiles of almost all the samples were consistent and stable. The harvesting time and source were major factors that affected the profile, while the volatile oil yield and the nootkatone content had minor secondary effects. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Discussion on the installation checking method of precast composite floor slab with lattice girders

    NASA Astrophysics Data System (ADS)

    Chen, Li; Jin, Xing; Wang, Yahui; Zhou, Hele; Gu, Jianing

    2018-03-01

Based on the installation checking requirements of China's current standards and international norms for prefabricated structural precast components, this paper proposes an installation checking method for precast composite floor slabs with lattice girders. Taking as the checking object an equivalent composite beam consisting of a single lattice girder and the precast concrete slab, the method checks the compression instability stress of the upper chords and the yield stress of the slab distribution reinforcement at the maximum positive moment, and the tensile yield stress of the upper chords, the normal compression stress of the slab normal section and the shear instability stress of the diagonal bars at the maximum negative moment. At the same time, the bending stress and deflection of the support beams, the strength and compression stability bearing capacity of the vertical supports, the shear bearing capacity of the bolts and the compression bearing capacity of the steel tube wall at the bolts are checked. Each checking object is given a specific load value and load combination. An application of the installation checking method is presented and verified by example.

  12. A new method to derive electronegativity from resonant inelastic x-ray scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carniato, S.; Journel, L.; Guillemin, R.

    2012-10-14

Electronegativity is a well-known property of atoms and substituent groups. Because there is no direct way to measure it, establishing a useful scale for electronegativity often entails correlating it to another chemical parameter; a wide variety of methods have been proposed over the past 80 years to do just that. This work reports a new approach that connects electronegativity to a spectroscopic parameter derived from resonant inelastic x-ray scattering. The new method is demonstrated using a series of chlorine-containing compounds, focusing on the Cl 2p^{-1}LUMO^{1} electronic states reached after Cl 1s → LUMO core excitation and subsequent KL radiative decay. Based on an electron-density analysis of the LUMOs, the relative weights of the Cl 2p_z atomic orbital contributing to the Cl 2p_{3/2} molecular spin-orbit components are shown to yield a linear electronegativity scale consistent with previous approaches.

  13. Identification of saline soils with multi-year remote sensing of crop yields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobell, D; Ortiz-Monasterio, I; Gurrola, F C

    2006-10-17

Soil salinity is an important constraint to agricultural sustainability, but accurate information on its variation across agricultural regions or its impact on regional crop productivity remains sparse. We evaluated the relationships between remotely sensed wheat yields and salinity in an irrigation district in the Colorado River Delta Region. The goals of this study were to (1) document the relative importance of salinity as a constraint to regional wheat production and (2) develop techniques to accurately identify saline fields. Estimates of wheat yield from six years of Landsat data agreed well with ground-based records on individual fields (R² = 0.65). Salinity measurements on 122 randomly selected fields revealed that average 0-60 cm salinity levels > 4 dS m⁻¹ reduced wheat yields, but the relative scarcity of such fields resulted in less than 1% regional yield loss attributable to salinity. Moreover, low yield was not a reliable indicator of high salinity, because many other factors contributed to yield variability in individual years. However, temporal analysis of yield images showed a significant fraction of fields exhibited consistently low yields over the six year period. A subsequent survey of 60 additional fields, half of which were consistently low yielding, revealed that this targeted subset had significantly higher salinity at 30-60 cm depth than the control group (p = 0.02). These results suggest that high subsurface salinity is associated with consistently low yields in this region, and that multi-year yield maps derived from remote sensing therefore provide an opportunity to map salinity across agricultural regions.
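The multi-year screening logic (flagging fields that are low-yielding in every season) can be sketched as a boolean intersection across years. All numbers below are synthetic, not Landsat-derived.

```python
import numpy as np

# Sketch of multi-year yield screening: flag fields whose yield falls in
# the bottom quartile in every one of six seasons, the persistence pattern
# the study links to subsurface salinity.
rng = np.random.default_rng(4)
n_fields, n_years = 500, 6
yields = rng.normal(6.0, 1.0, (n_fields, n_years))   # t/ha per field-year
yields[:25] -= 2.0        # 25 fields with a persistent yield constraint

q25 = np.quantile(yields, 0.25, axis=0)              # per-year threshold
low_every_year = (yields < q25).all(axis=1)          # low in all six years
print(f"{low_every_year.sum()} of {n_fields} fields "
      "are consistently low-yielding")
```

Because transient causes of low yield rarely repeat six years in a row, the intersection isolates fields with a persistent constraint, which is why the targeted survey of such fields was enriched in high subsurface salinity.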

  14. Benchmarks and Reliable DFT Results for Spin Gaps of Small Ligand Fe(II) Complexes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Suhwan; Kim, Min-Cheol; Sim, Eunji

    2017-05-01

    All-electron fixed-node diffusion Monte Carlo provides benchmark spin gaps for four Fe(II) octahedral complexes. Standard quantum chemical methods (semilocal DFT and CCSD(T)) fail badly for the energy difference between their high- and low-spin states. Density-corrected DFT is both significantly more accurate and reliable, and yields a consistent prediction for the Fe-porphyrin complex.

  15. Reduced-Volume Fracture Toughness Characterization for Transparent Polymers

    DTIC Science & Technology

    2015-03-21

    Caruthers et al. (2004) developed a thermodynamically consistent, nonlinear viscoelastic bulk constitutive model based on a potential energy clock (PEC) ... except that relaxation times change. Because of its formulation, the PEC model predicts mechanical yield as a natural consequence of relaxation ... softening type of behavior, but hysteresis effects are not naturally accounted for. Adolf et al. (2009) developed a method of simplifying the PEC model

  16. Isolation and purification of monosialotetrahexosylgangliosides from pig brain by extraction and liquid chromatography.

    PubMed

    Bian, Liujiao; Yang, Jianting; Sun, Yu

    2015-10-01

    Monosialotetrahexosylganglioside (GM1), a glycosphingolipid containing sialic acid, plays a particularly important role in fighting paralysis, dementia, and other diseases caused by brain and nerve damage. In this work, a simple, highly efficient, high-yield method was developed for the isolation and purification of GM1 from pig brain. The method consisted of extraction with chloroform-methanol-water and a two-step chromatographic separation on DEAE-Sepharose Fast Flow anion-exchange medium and Sephacryl S-100 HR size-exclusion medium. The purified GM1 was shown to be homogeneous, with a purity of >98.0% by high-performance anion-exchange and size-exclusion chromatography. The molecular weight was 30.0 kDa by high-performance size-exclusion chromatography and 1546.9 Da by electrospray ionization mass spectrometry. The chromogenic reaction with resorcinol-hydrochloric acid solution indicated that the purified GM1 showed the specific chromogenic reaction of sialic acid. Through this isolation and purification program, ~1.0 mg of pure GM1 could be recovered from 500 g of wet pig brain tissue, a yield of around 0.022%, which was higher than the yields of other methods. The method may provide an alternative for the isolation and purification of GM1 from other biological tissues. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Minimization of Residual Stress in an Al-Cu Alloy Forged Plate by Different Heat Treatments

    NASA Astrophysics Data System (ADS)

    Dong, Ya-Bo; Shao, Wen-Zhu; Jiang, Jian-Tang; Zhang, Bao-You; Zhen, Liang

    2015-06-01

    In order to improve the balance between mechanical properties and residual stress, various quenching and aging treatments were applied to an Al-Cu alloy forged plate. Residual stresses determined by the x-ray diffraction method and the slitting method were compared; the surface residual stress measured by x-ray diffraction was consistent with that measured by slitting. The residual stress distribution of samples quenched in water at different temperatures (20, 60, 80, and 100 °C) was measured, and the results showed that boiling-water quenching gives a 91.4% reduction in residual stress magnitude compared with cold-water quenching (20 °C), but the tensile properties of samples quenched in boiling water were unacceptably low. Quenching in 80 °C water results in a 75% reduction in residual stress with a 12.7% reduction in yield strength; both the residual stress and yield strength levels matter considerably for the dimensional stability of aluminum alloys. Quenching samples into a 30% polyalkylene glycol quenchant produced a 52.2% reduction in the maximum compressive residual stress, with a 19.7% reduction in yield strength. Moreover, the effects of uphill quenching and thermal-cold cycling on residual stress were also investigated: both produced approximately 25-40% reductions in residual stress, while their effect on tensile properties was quite slight.

  18. A Comparison of Observed Abundances in Five Well-Studied Planetary Nebulae

    NASA Astrophysics Data System (ADS)

    Tanner, Jolene; Balick, B.; Kwitter, K. B.

    2013-01-01

    We have assembled data and derived abundances from several recent careful studies of five bright planetary nebulae (PNe) of low, moderate, and high ionization and relatively simple morphology. Each of the studies employs different apertures, aperture placements, and observing facilities, and various methods were used to derive total abundances. All used spectral windows extending from [OII]3727 in the UV through the argon lines in the red. Our ultimate goal is to determine the extent to which the derived abundances are consistent. We show that the reddening-corrected line ratios are surprisingly similar despite the different modes of observation, and that the various abundance analysis methods yield generally consistent results for He/H, N/H, O/H, and Ne/H (within 50%, with a few larger deviations). In addition, we processed the line ratios from the different sources using a common abundance derivation method (ELSA) to search for clues of systematic methodological inconsistencies. None were uncovered.

  19. Multidimensional heuristic process for high-yield production of astaxanthin and fragrance molecules in Escherichia coli.

    PubMed

    Zhang, Congqiang; Seow, Vui Yin; Chen, Xixian; Too, Heng-Phon

    2018-05-11

    Optimization of metabolic pathways consisting of large numbers of genes is challenging. Multivariate modular methods (MMMs) are currently available solutions, in which regulatory complexity is reduced by grouping multiple genes into modules. However, these methods work well for balancing between modules but not within them, and application of MMMs to the 15-step heterologous route of astaxanthin biosynthesis has met with limited success. Here, we expand the solution space of MMMs and develop a multidimensional heuristic process (MHP). MHP can simultaneously balance different modules by varying promoter strength and coordinate intra-module activities by using ribosome binding sites (RBSs) and enzyme variants. Consequently, MHP increases enantiopure 3S,3'S-astaxanthin production to 184 mg l⁻¹ day⁻¹ or 320 mg l⁻¹. Similarly, MHP improves the yields of nerolidol and linalool. MHP may be useful for optimizing other complex biochemical pathways.

  20. A new method for measuring low resistivity contacts between silver and YBa2Cu3O(7-x) superconductor

    NASA Technical Reports Server (NTRS)

    Hsi, Chi-Shiung; Haertling, Gene H.; Sherrill, Max D.

    1991-01-01

    Several methods of measuring the contact resistivity between silver electrodes and YBa2Cu3O(7-x) superconductors were investigated, including the two-point, three-point, and lap-joint methods. The lap-joint method was found to yield the most consistent and reliable results and is proposed as a new technique for this measurement. Painting, embedding, and melting methods were used to apply the electrodes to the superconductor. Silver electrodes produced good ohmic contacts to YBa2Cu3O(7-x) superconductors, with contact resistivities as low as 1.9 × 10⁻⁹ Ω cm².

  1. An Efficient Extraction Method for Fragrant Volatiles from Jasminum sambac (L.) Ait.

    PubMed

    Ye, Qiuping; Jin, Xinyi; Zhu, Xinliang; Lin, Tongxiang; Hao, Zhilong; Yang, Qian

    2015-01-01

    The sweet aroma of Jasminum sambac (L.) Ait. is released while the flowers are blooming. Although the components of its volatile oil have been extensively studied, problems remain, such as low extraction yield and flavor distortion. Here, subcritical fluid extraction (SFE) was used to extract fragrant volatiles from activated carbon that had adsorbed the aroma of jasmine flowers. This novel method effectively recovered the main aromatic compounds with quality significantly better than solvent extraction (SE). Based on analysis with response surface methodology (RSM), we optimized the extraction conditions: a temperature of 44°C, a solvent-to-material ratio of 3.5:1, and an extraction time of 53 min. Under these conditions, the extraction yield was 4.91%. Furthermore, the key jasmine essence oil components, benzyl acetate and linalool, increased 7-fold and 2-fold, respectively, giving the oil its strong, typical jasmine smell. The new method also reduces the pungent components, so the essential oil smells sweeter. Thus, the quality of the jasmine essence oil was dramatically improved, and yields of the key components increased dramatically. Our results provide a new, effective technique for extracting fragrant volatiles from jasmine flowers.

  2. Chemical properties of colliding sources in 124, 136Xe and 112, 124Sn induced collisions in isobaric yield ratio difference and isoscaling methods

    NASA Astrophysics Data System (ADS)

    Ma, Chun-Wang; Wang, Shan-Shan; Zhang, Yan-Li; Wei, Hui-Ling

    2013-12-01

    Isoscaling and isobaric yield ratio difference (IBD) methods are used to study Δμ/T (Δμ being the difference between the chemical potentials of the neutron and the proton, and T the temperature) in the measured 1 A GeV 124Sn + 124Sn, 112Sn + 112Sn, 136Xe + Pb and 124Xe + Pb reactions. The isoscaling phenomena in the 124Sn/112Sn and 136Xe/124Xe reaction pairs are investigated, and the isoscaling parameters α and β are obtained. The Δμ/T determined by the isoscaling method (IS-Δμ/T) and by the IBD method (IB-Δμ/T) in the measured Sn and Xe reactions are compared. For most fragments the IS- and IB-Δμ/T are consistent in the Xe reactions, whereas in the Sn reactions they are similar only for the less neutron-rich fragments. The shell effects in IB-Δμ/T are also discussed.
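The isoscaling analysis rests on the standard relation R21(N, Z) = C·exp(αN + βZ) between fragment yields from a neutron-rich and a neutron-poor system: at fixed Z, ln R21 is linear in N with slope α, from which Δμ/T is extracted. A minimal sketch of the slope fit, using synthetic yield ratios (all numbers illustrative, not data from these reactions):

```python
import math

def fit_alpha(N_values, R21_values):
    """Least-squares slope of ln(R21) versus neutron number N (fixed Z)."""
    logs = [math.log(r) for r in R21_values]
    n = len(N_values)
    mean_N = sum(N_values) / n
    mean_L = sum(logs) / n
    num = sum((N - mean_N) * (L - mean_L) for N, L in zip(N_values, logs))
    den = sum((N - mean_N) ** 2 for N in N_values)
    return num / den

# Synthetic yield ratios generated with alpha = 0.5 recover the input slope:
alpha_true, C = 0.5, 1.2
N = [4, 5, 6, 7, 8]
R21 = [C * math.exp(alpha_true * k) for k in N]
print(round(fit_alpha(N, R21), 3))  # → 0.5
```

The analogous fit of ln R21 versus Z at fixed N gives β; with noisy measured yields the same least-squares machinery applies, only the recovered slope carries an uncertainty.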

  3. Determination of the Effective Detector Area of an Energy-Dispersive X-Ray Spectrometer at the Scanning Electron Microscope Using Experimental and Theoretical X-Ray Emission Yields.

    PubMed

    Procop, Mathias; Hodoroaba, Vasile-Dan; Terborg, Ralf; Berger, Dirk

    2016-12-01

    A method is proposed to determine the effective detector area of energy-dispersive X-ray spectrometers (EDS). Nowadays, detectors are available with nominal areas ranging from 10 up to 150 mm². However, in most cases it remains unknown whether this nominal area coincides with the "net active sensor area" that should be given according to the related standard, ISO 15632, or with some other area of the detector device. Moreover, the specific geometry of the EDS installation may further reduce a given detector area. The proposed method can be applied to most scanning electron microscope/EDS configurations. The basic idea is to compare the measured count rate with the count rate expected from known X-ray yields of copper, titanium, or silicon. The method was successfully tested on three detectors with known effective areas and applied further to seven spectrometers from different manufacturers. In most cases the method gave an effective area smaller than the area given in the detector description.

  4. Characterizing Facesheet/Core Disbonding in Honeycomb Core Sandwich Structure

    NASA Technical Reports Server (NTRS)

    Rinker, Martin; Ratcliffe, James G.; Adams, Daniel O.; Krueger, Ronald

    2013-01-01

    Results are presented from an experimental investigation into facesheet/core disbonding in carbon fiber reinforced plastic/Nomex honeycomb sandwich structures using a Single Cantilever Beam test. Specimens with three-, six- and twelve-ply facesheets were tested, along with specimens of three different widths and specimens with honeycomb cores of four different cell sizes. Three data reduction methods were employed for computing apparent fracture toughness values from the test data: an area method, a compliance calibration technique, and a modified beam theory method. The compliance calibration and modified beam theory approaches yielded comparable apparent fracture toughness values, which were generally lower than those computed using the area method. Disbonding in the three-ply facesheet specimens took place at the facesheet/core interface and yielded the lowest apparent fracture toughness values; disbonding in the six- and twelve-ply facesheet specimens took place within the core, near the facesheet/core interface. Specimen width was not found to have a significant effect on apparent fracture toughness. The scatter in the apparent fracture toughness data was found to increase with honeycomb core cell size.

  5. Climate Change and Its Impact on the Yield of Major Food Crops: Evidence from Pakistan

    PubMed Central

    Ali, Sajjad; Liu, Ying; Ishaq, Muhammad; Shah, Tariq; Abdullah; Ilyas, Aasir; Din, Izhar Ud

    2017-01-01

    Pakistan is vulnerable to climate change, and extreme climatic conditions are threatening food security. This study examines the effects of climate variables (maximum temperature, minimum temperature, rainfall, relative humidity, and sunshine) on the major crops of Pakistan (wheat, rice, maize, and sugarcane). Feasible generalized least squares (FGLS) and heteroscedasticity- and autocorrelation-consistent (HAC) standard errors were employed using time series data for the period 1989 to 2015. The results of the study reveal that maximum temperature adversely affects wheat production, while the effect of minimum temperature is positive and significant for all crops. The effect of rainfall on yield is negative for all selected crops except wheat. To cope with and mitigate the adverse effects of climate change, heat- and drought-resistant high-yielding varieties need to be developed to ensure food security in the country. PMID:28538704

  6. Structure in the 3D Galaxy Distribution. III. Fourier Transforming the Universe: Phase and Power Spectra

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.; Way, M. J.; Gazis, P. G.

    2017-01-01

    We demonstrate the effectiveness of a relatively straightforward analysis of the complex 3D Fourier transform of galaxy coordinates derived from redshift surveys. Numerical demonstrations of this approach are carried out on a volume-limited sample of the Sloan Digital Sky Survey redshift survey. The direct unbinned transform yields a complex 3D data cube quite similar to that from the Fast Fourier Transform of finely binned galaxy positions. In both cases, deconvolution of the sampling window function yields estimates of the true transform. Simple power spectrum estimates from these transforms are roughly consistent with those from more elaborate methods. The complex Fourier transform characterizes spatial distributional properties beyond the power spectrum in a manner different from (and, we argue, more easily interpreted than) the conventional multipoint hierarchy. We identify some threads of modern large-scale inference methodology that will presumably yield detections in new, wider, and deeper surveys.

  7. Climate Change and Its Impact on the Yield of Major Food Crops: Evidence from Pakistan.

    PubMed

    Ali, Sajjad; Liu, Ying; Ishaq, Muhammad; Shah, Tariq; Abdullah; Ilyas, Aasir; Din, Izhar Ud

    2017-05-24

    Pakistan is vulnerable to climate change, and extreme climatic conditions are threatening food security. This study examines the effects of climate variables (maximum temperature, minimum temperature, rainfall, relative humidity, and sunshine) on the major crops of Pakistan (wheat, rice, maize, and sugarcane). Feasible generalized least squares (FGLS) and heteroscedasticity- and autocorrelation-consistent (HAC) standard errors were employed using time series data for the period 1989 to 2015. The results of the study reveal that maximum temperature adversely affects wheat production, while the effect of minimum temperature is positive and significant for all crops. The effect of rainfall on yield is negative for all selected crops except wheat. To cope with and mitigate the adverse effects of climate change, heat- and drought-resistant high-yielding varieties need to be developed to ensure food security in the country.

  8. Total lipid extraction of homogenized and intact lean fish muscles using pressurized fluid extraction and batch extraction techniques.

    PubMed

    Isaac, Giorgis; Waldebäck, Monica; Eriksson, Ulla; Odham, Göran; Markides, Karin E

    2005-07-13

    The reliability and efficiency of the pressurized fluid extraction (PFE) technique for extracting the total lipid content from cod, and the effect of sample treatment on extraction efficiency, were evaluated. The results were compared with two liquid-liquid extraction methods: the traditional and a modified method according to Jensen. Optimum conditions were found to be 2-propanol/n-hexane (65:35, v/v) as the first solvent and n-hexane/diethyl ether (90:10, v/v) as the second, 115 degrees C, and 10 min of static time. PFE extracts were cleaned up using the same procedure as in the Jensen methods. When total lipid yields obtained from homogenized cod muscle using PFE were compared with yields obtained with the original and modified Jensen methods, PFE gave significantly higher yields, approximately 10% higher (t test, P < 0.05). Infrared and NMR spectroscopy suggested that the additional material that inflates the gravimetric results is rather homogeneous and consists primarily of phospholipids with headgroups of inositidic and/or glycosidic nature. The comparative study demonstrated that PFE is a suitable alternative technique for extracting the total lipid content from homogenized cod (lean fish) and herring (fat fish) muscle, with a precision comparable to that of the traditional and modified Jensen methods. Despite the necessary cleanup step, PFE showed important advantages: solvent consumption was cut by approximately 50%, and automated extraction was possible.

  9. Seven-Year Clinical Surveillance Program Demonstrates Consistent MARD Accuracy Performance of a Blood Glucose Test Strip.

    PubMed

    Setford, Steven; Grady, Mike; Mackintosh, Stephen; Donald, Robert; Levy, Brian

    2018-05-01

    MARD (mean absolute relative difference) is increasingly used to describe the performance of glucose monitoring systems, providing a single-value quantitative measure of accuracy and allowing comparisons between different monitoring systems. This study reports MARDs for the OneTouch Verio® glucose meter clinical data set of 80,258 data points (671 individual batches) gathered as part of a 7.5-year self-surveillance program. Test strips were routinely sampled from randomly selected manufacturer's production batches and sent to one of 3 clinic sites for clinical accuracy assessment against a standard laboratory reference instrument, using fresh capillary blood from patients with diabetes. Evaluation of the distribution of strip batch MARD yielded a mean value of 5.05% (range: 3.68-6.43% at ±1.96 standard deviations from the mean). The overall MARD for all clinic data points (N = 80,258) was also 5.05%, while a mean bias of 1.28 was recorded. MARD by glucose level was consistent, with a maximum value of 4.81% at higher glucose (≥100 mg/dL) and a mean absolute difference (MAD) of 5.60 mg/dL at low glucose (<100 mg/dL). MARD by year of manufacture varied from 4.67-5.42%, indicating consistent accuracy performance over the surveillance period. This 7.5-year surveillance program showed that the meter system exhibits consistently low MARD by batch, glucose level, and year, indicating close agreement with established reference methods while exhibiting lower MARD values than continuous glucose monitoring (CGM) systems, and providing users with confidence in performance when transitioning to each new strip batch.
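The two accuracy metrics used in this record follow their standard definitions: MARD averages the relative deviation from the reference (in percent), while MAD averages the absolute deviation in mg/dL and is typically preferred at low glucose. A minimal sketch (the readings below are made-up illustrations, not study data):

```python
def mard(meter, reference):
    """Mean absolute relative difference between meter and reference, in percent."""
    return 100 * sum(abs(m - r) / r for m, r in zip(meter, reference)) / len(meter)

def mad(meter, reference):
    """Mean absolute difference in mg/dL, often reported below 100 mg/dL."""
    return sum(abs(m - r) for m, r in zip(meter, reference)) / len(meter)

# Made-up paired readings (mg/dL):
meter = [102, 95, 250, 68]
reference = [100, 100, 240, 70]
print(round(mard(meter, reference), 2))  # → 3.51
print(round(mad(meter, reference), 2))   # → 4.75
```

Because MARD divides by the reference value, a fixed absolute error inflates the percentage at low glucose, which is why the low-glucose range is summarized with MAD instead.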

  10. Degradation spectra and ionization yields of electrons in gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inokuti, M.; Douthat, D.A.; Rau, A.R.P.

    1975-01-01

    Progress in the microscopic theory of electron degradation in gases by Platzman, Fano, and co-workers is outlined. The theory consists of (1) the cataloging of all major inelastic-collision cross sections for electrons (including the secondary-electron energy distribution in a single ionizing collision) and (2) the evaluation of the cumulative consequences of individual electron collisions for the electrons themselves as well as for target molecules. For assessing data consistency and reliability and for extrapolating the data to unexplored ranges of variables (such as electron energy), a series of plots devised by Platzman is very powerful. Electron degradation spectra were obtained through numerical solution of the Spencer-Fano equation for all electron energies down to the first ionization thresholds for a few examples such as He and Ne. The systematics of the solutions led to the recognition of approximate scaling properties of the degradation spectra for different initial electron energies and pointed to new methods of more efficient treatment. Systematics of the ionization yields and their dependence on the initial electron energy were also recognized. Finally, the Spencer-Fano equation for the degradation spectra and the Fowler equation for the ionization and other yields are tightly linked with each other by a set of variational principles. (52 references, 7 figures) (DLC)

  11. Improved assessment of pyrogenic carbon quantity and quality in soils by liquid chromatography

    NASA Astrophysics Data System (ADS)

    Wiedemeier, Daniel B.; Hilf, Michael D.; Smittenberg, Rienk H.; Schmidt, Michael W. I.

    2013-04-01

    Fire-derived (pyrogenic) carbon (PyC) is produced by the incomplete combustion of biomass, for example during wildfires. It can persist in the environment for a long time because of its relative resistance to biological and chemical breakdown. Its accurate quantification in soils, sediments, and other environmental media is of great interest because the slow turnover of PyC has implications for the global carbon cycle, and it is thus relevant for climate scenarios and mitigation. Moreover, PyC in pedological and sedimentological records can be used to reconstruct wildfire history, which is closely linked to climate history, and PyC assessment is also a valuable tool for characterizing biochars and other pyrogenic products. A whole suite of PyC quantification methods exists because PyC is not a defined chemical structure but rather a continuum of thermally altered biomass. Benzene polycarboxylic acid (BPCA) analysis is a molecular marker method that has been shown to yield conservative estimates of PyC quantity in soils and environmental samples. In addition, it yields unique qualitative information about the degree of aromaticity and condensation of PyC, which is indicative of the pyrolysis temperature of PyC and its resistance to degradation. The commonly used BPCA method consists of digesting samples with nitric acid, which breaks down the PyC into a suite of BPCAs that are then cleaned, derivatized, and finally analyzed by gas chromatography-flame ionization detection (GC-FID). Here, we present a modified BPCA quantification method for soils, sediments, and other environmental samples that uses a high-performance liquid chromatography system coupled to diode array detection (HPLC-DAD). We demonstrate that this method greatly enhances the reproducibility of PyC measurements while significantly reducing analysis time. Moreover, much less sample material is needed for precise PyC assessment, and we show that the HPLC-DAD method yields more consistent PyC measurements than the GC-FID method. Additionally, the new method facilitates δ13C and 14C measurements of the PyC fraction in these complex-matrix samples. The isotopic information on PyC further supports the assessment of carbon budgets in soils, sediments, or (bio-)chars and the reconstruction of past burning and climate events, as will be shown with examples.

  12. (U-Th)/He ages of phosphates from Zagami and ALHA77005 Martian meteorites: Implications to shock temperatures

    NASA Astrophysics Data System (ADS)

    Min, Kyoungwon; Farah, Annette E.; Lee, Seung Ryeol; Lee, Jong Ik

    2017-01-01

    Shock conditions of Martian meteorites provide crucial information about ejection dynamics and the original features of Martian rocks. To better constrain the equilibrium shock temperatures (Tequi-shock) of Martian meteorites, we investigated the (U-Th)/He systematics of moderately shocked (Zagami) and intensively shocked (ALHA77005) Martian meteorites. Multiple phosphate aggregates from Zagami and ALHA77005 yielded overall (U-Th)/He ages of 92.2 ± 4.4 Ma (2σ) and 8.4 ± 1.2 Ma, respectively. These ages correspond to fractional losses of 0.49 ± 0.03 (Zagami) and 0.97 ± 0.01 (ALHA77005), assuming that the ejection-related shock event at ∼3 Ma is solely responsible for diffusive helium loss since crystallization. For He diffusion modeling, the diffusion domain radius was estimated from a detailed examination of fracture patterns in the phosphates using a scanning electron microscope. For Zagami, the diffusion domain radius is estimated to be ∼2-9 μm, generally consistent with calculations from isothermal heating experiments (1-4 μm); for ALHA77005, a radius of ∼4-20 μm is estimated. Using the newly constrained (U-Th)/He data, the diffusion domain radii, and other previously estimated parameters, conductive cooling models yield Tequi-shock estimates of 360-410 °C for Zagami and 460-560 °C for ALHA77005. According to a sensitivity test, the estimated Tequi-shock values are relatively robust to the input parameters. The Tequi-shock estimates for Zagami are more robust than those for ALHA77005, primarily because Zagami yielded an intermediate fHe value (0.49) compared with ALHA77005 (0.97). For the less intensively shocked Zagami, the He diffusion-based Tequi-shock estimates (this study) are significantly higher than expected from previously reported Tpost-shock values; for the intensively shocked ALHA77005, the two independent approaches yielded generally consistent results. Using two other examples of previously studied Martian meteorites (ALHA84001 and Los Angeles), we compared Tequi-shock and Tpost-shock estimates. For intensively shocked meteorites (ALHA77005, Los Angeles), the He diffusion-based approach yields Tequi-shock values slightly higher than or consistent with those estimated from Tpost-shock, and the discrepancy between the two methods increases as the intensity of shock increases. The reason for the discrepancy between the two methods, particularly for the less intensively shocked meteorites (Zagami, ALHA84001), remains to be resolved, but we prefer the He diffusion-based approach because its Tequi-shock estimates are relatively robust to the input parameters.

  13. Analysis of atomic force microscopy data for surface characterization using fuzzy logic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Mousa, Amjed, E-mail: aalmousa@vt.edu; Niemann, Darrell L.; Niemann, Devin J.

    2011-07-15

    In this paper we present a methodology to characterize surface nanostructures of thin films. The methodology identifies and isolates nanostructures using Atomic Force Microscopy (AFM) data and extracts quantitative information, such as their size and shape. The fuzzy logic based methodology relies on a Fuzzy Inference Engine (FIE) to classify the data points as being top, bottom, uphill, or downhill. The resulting data sets are then further processed to extract quantitative information about the nanostructures. In the present work we introduce a mechanism which can consistently distinguish crowded surfaces from those with sparsely distributed structures and present an omni-directional search technique to improve the structural recognition accuracy. In order to demonstrate the effectiveness of our approach we present a case study which uses our approach to quantitatively identify particle sizes of two specimens, each with a unique gold nanoparticle size distribution. Research Highlights: • A fuzzy logic analysis technique capable of characterizing AFM images of thin films. • The technique is applicable to different surfaces regardless of their densities. • The fuzzy logic technique does not require manual adjustment of the algorithm parameters. • The technique can quantitatively capture differences between surfaces. • This technique yields more realistic structure boundaries compared to other methods.

  14. Using statistical model to simulate the impact of climate change on maize yield with climate and crop uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Zhao, Yanxia; Wang, Chunyi; Chen, Sining

    2017-11-01

    Assessing the impact of climate change on crop production while accounting for uncertainty is essential for properly identifying sustainable agricultural practices and making decisions about them. In this study, we employed 24 climate projections, consisting of the combinations of eight GCMs and three emission scenarios, to represent climate projection uncertainty, and two statistical crop models with 100 parameter sets each to represent parameter uncertainty within the crop models. The goal of this study was to evaluate the impact of climate change on maize (Zea mays L.) yield at three locations (Benxi, Changling, and Hailun) across Northeast China (NEC) in the periods 2010-2039 and 2040-2069, taking 1976-2005 as the baseline period. Multi-model ensembles are an effective way to deal with such uncertainties, and the ensemble simulations showed that maize yield reductions were less than 5% in both future periods relative to the baseline. To further understand the contributions of the individual sources of uncertainty, climate projections and crop model parameters, to the ensemble yield simulations, variance decomposition was performed. The results indicated that the uncertainty from climate projections was much larger than that contributed by crop model parameters. Increased ensemble yield variance also revealed growing uncertainty in the yield simulations in the future periods.
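The variance decomposition step can be illustrated with a small sketch (hypothetical numbers; the study's actual decomposition may use a fuller ANOVA): the spread of the main-effect means across climate projections is compared with the spread across crop-parameter sets.

```python
import statistics

def variance_components(yields):
    """yields[i][j]: simulated yield for climate projection i and parameter set j.
    Returns (total variance, climate main-effect variance, parameter main-effect variance)."""
    flat = [y for row in yields for y in row]
    total = statistics.pvariance(flat)
    climate_means = [statistics.mean(row) for row in yields]     # mean over parameter sets
    param_means = [statistics.mean(col) for col in zip(*yields)] # mean over projections
    return total, statistics.pvariance(climate_means), statistics.pvariance(param_means)

# Hypothetical ensemble: rows = 3 climate projections, columns = 2 parameter sets.
yields = [[10.0, 11.0],
          [20.0, 21.0],
          [30.0, 31.0]]
total, v_climate, v_param = variance_components(yields)
print(round(v_climate, 2), round(v_param, 2))  # → 66.67 0.25
```

In this toy case the rows (climate projections) differ far more than the columns (parameter sets), so the climate main-effect variance dominates, mirroring the study's conclusion.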

  15. Development of Yield and Tensile Strength Design Curves for Alloy 617

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nancy Lybeck; T. -L. Sham

    2013-10-01

    The U.S. Department of Energy Very High Temperature Reactor Program is acquiring data in preparation for developing an Alloy 617 Code Case for inclusion in the nuclear section of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel (B&PV) Code. A draft code case was previously developed, but the effort was suspended before acceptance by ASME. As part of the draft code case effort, a database was compiled of yield and tensile strength data from tests performed in air. Yield strength and tensile strength at temperature are used to set the time-independent allowable stress for construction materials in B&PV Code, Section III, Subsection NH. The yield and tensile strength data used for the draft code case have been augmented with additional data generated by Idaho National Laboratory and Oak Ridge National Laboratory in the U.S. and CEA in France. The standard ASME Section II procedure for generating yield and tensile strength at temperature is presented, along with alternate methods that accommodate the change in temperature trends seen at high temperatures, resulting in a more consistent design margin over the temperature range of interest.

  16. Lunar Proton Albedo Anomalies: Soil, Surveyors, and Statistics

    NASA Astrophysics Data System (ADS)

    Wilson, J. K.; Schwadron, N.; Spence, H. E.; Case, A. W.; Golightly, M. J.; Jordan, A.; Looper, M. D.; Petro, N. E.; Robinson, M. S.; Stubbs, T. J.; Zeitlin, C. J.; Blake, J. B.; Kasper, J. C.; Mazur, J. E.; Smith, S. S.; Townsend, L. W.

    2014-12-01

    Since the launch of LRO in 2009, the CRaTER instrument has been mapping albedo protons (~100 MeV) from the Moon. These protons are produced by nuclear spallation, a consequence of galactic cosmic ray (GCR) bombardment of the lunar regolith. Just as spalled neutrons and gamma rays reveal elemental abundances in the lunar regolith, albedo protons may offer a complementary method for mapping compositional variations. We presently find that the lunar maria have an average proton yield 0.9% ±0.3% higher than the average yield in the highlands; this is consistent with neutron data that are sensitive to the regolith's average atomic weight. We also see cases where two or more adjacent pixels (15° × 15°) have significantly anomalous yields above or below the mean. These include two high-yielding regions in the maria and three low-yielding regions in the far-side highlands. Some of the regions could be artifacts of Poisson noise, but for completeness we consider possible effects from compositional anomalies in the lunar regolith, including pyroclastic flows, antipodes of fresh craters, and so-called "red spots". We also consider man-made landers and crash sites that may have introduced elements not normally found in the lunar regolith.
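Whether a pixel's yield anomaly could plausibly be Poisson noise comes down to a one-sided tail probability on its counts. A minimal sketch, using invented counts for a single pixel and the normal approximation to the Poisson tail (adequate at the large counts involved):

```python
import math

def poisson_tail(k, lam):
    """Approximate one-sided P(X >= k) for X ~ Poisson(lam) using the
    normal approximation with continuity correction."""
    z = (k - 0.5 - lam) / math.sqrt(lam)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Invented counts for one 15x15-degree pixel: 1200 albedo protons observed
# where the regional mean predicts 1100.
observed, expected = 1200, 1100
p = poisson_tail(observed, expected)
print(f"one-sided p-value: {p:.4f}")
```

A small p-value argues against pure Poisson noise, though with many pixels mapped, a multiple-comparisons correction would be needed before calling any one pixel anomalous.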

  17. Common methods for fecal sample storage in field studies yield consistent signatures of individual identity in microbiome sequencing data.

    PubMed

    Blekhman, Ran; Tang, Karen; Archie, Elizabeth A; Barreiro, Luis B; Johnson, Zachary P; Wilson, Mark E; Kohn, Jordan; Yuan, Michael L; Gesquiere, Laurence; Grieneisen, Laura E; Tung, Jenny

    2016-08-16

    Field studies of wild vertebrates are frequently associated with extensive collections of banked fecal samples, which are unique resources for understanding ecological, behavioral, and phylogenetic effects on the gut microbiome. However, we do not understand whether sample storage methods confound the ability to investigate interindividual variation in gut microbiome profiles. Here, we extend previous work on storage methods for gut microbiome samples by comparing immediate freezing, the gold standard of preservation, to three methods commonly used in vertebrate field studies: lyophilization, storage in ethanol, and storage in RNAlater. We found that the signature of individual identity consistently outweighed storage effects: alpha diversity and beta diversity measures were significantly correlated across methods, and while samples often clustered by donor, they never clustered by storage method. Provided that all analyzed samples are stored the same way, banked fecal samples therefore appear highly suitable for investigating variation in gut microbiota. Our results open the door to a much-expanded perspective on variation in the gut microbiome across species and ecological contexts.

  18. Current lipid extraction methods are significantly enhanced adding a water treatment step in Chlorella protothecoides.

    PubMed

    Ren, Xiaojie; Zhao, Xinhe; Turcotte, François; Deschênes, Jean-Sébastien; Tremblay, Réjean; Jolicoeur, Mario

    2017-02-11

    Microalgae have the potential to rapidly accumulate lipids of high interest for the food, cosmetics, pharmaceutical and energy (e.g. biodiesel) industries. However, current lipid extraction methods show efficiency limitations, and until now extraction protocols have not been fully optimized for specific lipid compounds. The present study thus presents a novel lipid extraction method, consisting of the addition of a water treatment of the biomass between the two solvent extraction stages of current methods. The resulting modified method not only enhances lipid extraction efficiency, but also yields a higher triacylglycerol (TAG) ratio, which is highly desirable for biodiesel production. Modification of four existing methods using acetone, chloroform/methanol (Chl/Met), chloroform/methanol/H2O (Chl/Met/H2O) and dichloromethane/methanol (Dic/Met) showed respective lipid extraction yield enhancements of 72.3, 35.8, 60.3 and 60.9%. The modified acetone method resulted in the highest extraction yield, with 68.9 ± 0.2% DW total lipids. Extraction of TAG was particularly improved by the water treatment, especially for the Chl/Met/H2O and Dic/Met methods. The acetone method with the water treatment led to the highest extraction level of TAG, 73.7 ± 7.3 µg/mg DW, which is 130.8 ± 10.6% higher than the maximum value obtained with the four classical methods (31.9 ± 4.6 µg/mg DW). Interestingly, the water treatment preferentially improved the extraction of intracellular fractions, i.e. TAG, sterols, and free fatty acids, compared to the lipid fractions of the cell membranes, which consist of phospholipids (PL), acetone mobile polar lipids and hydrocarbons. Finally, from the 32 fatty acids analyzed for both the neutral lipid (NL) and polar lipid (PL) fractions, it is clear that the water treatment greatly improves the NL-to-PL ratio for the four standard methods assessed.
Water treatment of the biomass after the first solvent extraction step helps the subsequent release of intracellular lipids in the second extraction step, thus improving the overall lipid extraction yield. In addition, the water treatment favorably modifies the intracellular lipid class ratios of the final extract, in which the TAG ratio is significantly increased without changes in fatty acid composition. The novel method thus provides an efficient way to improve the lipid extraction yield of existing methods, while selectively favoring TAG, a lipid of the utmost interest for biodiesel production.

  19. Optimization of analytical parameters for inferring relationships among Escherichia coli isolates from repetitive-element PCR by maximizing correspondence with multilocus sequence typing data.

    PubMed

    Goldberg, Tony L; Gillespie, Thomas R; Singer, Randall S

    2006-09-01

    Repetitive-element PCR (rep-PCR) is a method for genotyping bacteria based on the selective amplification of repetitive genetic elements dispersed throughout bacterial chromosomes. The method has great potential for large-scale epidemiological studies because of its speed and simplicity; however, objective guidelines for inferring relationships among bacterial isolates from rep-PCR data are lacking. We used multilocus sequence typing (MLST) as a "gold standard" to optimize the analytical parameters for inferring relationships among Escherichia coli isolates from rep-PCR data. We chose 12 isolates from a large database to represent a wide range of pairwise genetic distances, based on the initial evaluation of their rep-PCR fingerprints. We conducted MLST with these same isolates and systematically varied the analytical parameters to maximize the correspondence between the relationships inferred from rep-PCR and those inferred from MLST. Methods that compared the shapes of densitometric profiles ("curve-based" methods) yielded consistently higher correspondence values between data types than did methods that calculated indices of similarity based on shared and different bands (maximum correspondences of 84.5% and 80.3%, respectively). Curve-based methods were also markedly more robust in accommodating variations in user-specified analytical parameter values than were "band-sharing coefficient" methods, and they enhanced the reproducibility of rep-PCR. Phylogenetic analyses of rep-PCR data yielded trees with high topological correspondence to trees based on MLST and high statistical support for major clades. These results indicate that rep-PCR yields accurate information for inferring relationships among E. coli isolates and that accuracy can be enhanced with the use of analytical methods that consider the shapes of densitometric profiles.
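The advantage of curve-based over band-sharing comparison can be illustrated with a toy densitometric profile: a small run-to-run shift barely affects the profile correlation but can destroy band matching entirely. The Gaussian peak shapes, the Dice band-sharing coefficient, and all positions below are illustrative stand-ins, not the specific measures used in the study:

```python
import numpy as np

def pearson(a, b):
    """Curve-based similarity: Pearson correlation of two profiles."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def dice(bands_a, bands_b):
    """Band-sharing (Dice) coefficient from two sets of called band positions."""
    return 2.0 * len(bands_a & bands_b) / (len(bands_a) + len(bands_b))

# Two hypothetical densitometric profiles of the same isolate; the second is
# shifted by a small run-to-run offset that band calling turns into
# non-matching bands.
x = np.linspace(0.0, 1.0, 500)
peaks = [0.2, 0.45, 0.7, 0.9]
shift = 0.006
profile = sum(np.exp(-((x - p) / 0.01) ** 2) for p in peaks)
profile2 = sum(np.exp(-((x - p - shift) / 0.01) ** 2) for p in peaks)

r = pearson(profile, profile2)             # curve-based comparison
d = dice({round(p, 2) for p in peaks},     # band-based comparison
         {round(p + shift, 2) for p in peaks})
print(f"profile correlation: {r:.2f}, band-sharing coefficient: {d:.2f}")
```

The correlation stays high while the band-sharing coefficient collapses to zero, mirroring the robustness of curve-based methods to small analytical-parameter perturbations.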

  20. A simple-rapid method to separate uranium, thorium, and protactinium for U-series age-dating of materials

    PubMed Central

    Knight, Andrew W.; Eitrheim, Eric S.; Nelson, Andrew W.; Nelson, Steven; Schultz, Michael K.

    2017-01-01

    Uranium-series dating techniques require the isolation of radionuclides in high yields and in fractions free of impurities. Within this context, we describe a novel, rapid method for the separation and purification of U, Th, and Pa. The method takes advantage of differences in the chemistry of U, Th, and Pa, utilizing a commercially-available extraction chromatographic resin (TEVA) and standard reagents. The elution behavior of U, Th, and Pa was optimized using liquid scintillation counting techniques, and fractional purity was evaluated by alpha-spectrometry. The overall method was further assessed by isotope dilution alpha-spectrometry for the preliminary age determination of an ancient carbonate sample obtained from the Lake Bonneville site in western Utah (United States). Preliminary evaluations of the method produced elemental purity of greater than 99.99% and radiochemical recoveries exceeding 90% for U and Th and 85% for Pa. Excellent purity and yields (76% for U, 96% for Th and 55% for Pa) were also obtained for the analysis of the carbonate samples, and the preliminary Pa and Th ages of about 39,000 years before present are consistent with the 14C-derived age of the material. PMID:24681438

  1. Evaluation of Thompson-type trend and monthly weather data models for corn yields in Iowa, Illinois, and Indiana

    NASA Technical Reports Server (NTRS)

    French, V. (Principal Investigator)

    1982-01-01

    An evaluation was made of Thompson-type models which use trend terms (as a surrogate for technology), meteorological variables based on monthly average temperature, and total precipitation to forecast and estimate corn yields in Iowa, Illinois, and Indiana. Pooled and unpooled Thompson-type models were compared; neither was found to be consistently superior to the other. Yield reliability indicators show that the models are of limited use for large-area yield estimation. The models are objective and consistent with scientific knowledge. Timely yield forecasts and estimates can be made during the growing season by using normals or long-range weather forecasts. The models are not costly to operate and are easy to use and understand. The model standard errors of prediction do not provide a useful current measure of modeled yield reliability.

  2. Analytical study to define a helicopter stability derivative extraction method, volume 1

    NASA Technical Reports Server (NTRS)

    Molusis, J. A.

    1973-01-01

    A method is developed for extracting six-degree-of-freedom stability and control derivatives from helicopter flight data. Different combinations of filtering and derivative estimation are investigated and used with a Bayesian approach for derivative identification. The combination of filtering and estimation found to yield the most accurate time-response match to flight test data is determined and applied to CH-53A and CH-54B flight data. The method found to be most accurate consists of (1) filtering the flight test data with a digital filter followed by an extended Kalman filter, (2) identifying a derivative estimate with a least-squares estimator, and (3) obtaining derivatives with the Bayesian derivative extraction method.

  3. Interpretation of protein quantitation using the Bradford assay: comparison with two calculation models.

    PubMed

    Ku, Hyung-Keun; Lim, Hyuk-Min; Oh, Kyong-Hwa; Yang, Hyo-Jin; Jeong, Ji-Seon; Kim, Sook-Kyung

    2013-03-01

    The Bradford assay is a simple method for protein quantitation, but variation in the results between proteins is a matter of concern. In this study, we compared and normalized quantitative values from two models for protein quantitation, where the residues in the protein that bind to anionic Coomassie Brilliant Blue G-250 comprise either Arg and Lys (Method 1, M1) or Arg, Lys, and His (Method 2, M2). Use of the M2 model yielded much more consistent quantitation values compared with use of the M1 model, which exhibited marked overestimations against protein standards. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Determination of Peukert's Constant Using Impedance Spectroscopy: Application to Supercapacitors.

    PubMed

    Mills, Edmund Martin; Kim, Sangtae

    2016-12-15

    Peukert's equation is widely used to model the rate dependence of battery capacity, and has recently attracted attention for application to supercapacitors. Here we present a newly developed method to readily determine Peukert's constant using impedance spectroscopy. Impedance spectroscopy is ideal for this purpose, as it probes the electrical performance of a device over a wide range of time scales within a single measurement. We demonstrate that the new method yields results consistent with conventional galvanostatic measurements by applying it to commercially available supercapacitors. Additionally, the new method is much simpler and more precise, making it an attractive alternative for the determination of Peukert's constant.
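For context, Peukert's equation states that I^k · t is constant across discharge rates, so the constant k can be recovered from any two current/time pairs. A minimal sketch of the conventional galvanostatic route, with invented discharge data (this is not the impedance-based method of the paper):

```python
import math

def peukert_constant(i1, t1, i2, t2):
    """Peukert's law  I^k * t = const  =>  k = ln(t1/t2) / ln(i2/i1).
    Currents in A, discharge times in s."""
    return math.log(t1 / t2) / math.log(i2 / i1)

# Invented galvanostatic data: discharge to cutoff takes 3600 s at 1.0 A
# but only 1500 s at 2.0 A.
k = peukert_constant(1.0, 3600.0, 2.0, 1500.0)
print(f"Peukert constant k = {k:.3f}")
```

For an ideal (rate-independent) capacity, k = 1; values above 1 quantify how strongly capacity falls off at higher discharge rates.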

  5. Measuring Ultrasonic Acoustic Velocity in a Thin Sheet of Graphite Epoxy Composite

    NASA Technical Reports Server (NTRS)

    2008-01-01

    A method for measuring the acoustic velocity in a thin sheet of a graphite epoxy composite (GEC) material was investigated. This method uses two identical acoustic-emission (AE) sensors, one to transmit and one to receive. The delay time as a function of distance between sensors determines a bulk velocity. A lightweight fixture (balsa wood in the current implementation) provides a consistent method of positioning the sensors, thus providing multiple measurements of the time delay between sensors at different known distances. A linear fit to separation, x, versus delay time, t, will yield an estimate of the velocity from the slope of the line.
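The slope extraction described above can be sketched with an ordinary least-squares line fit. The separations, delay times, and the assumed ~2900 m/s bulk velocity below are all invented for illustration:

```python
import numpy as np

# Hypothetical sensor separations (m) and measured AE delay times (s):
# delays follow an assumed ~2900 m/s velocity plus small timing jitter.
x = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
t = x / 2900.0 + np.array([1.0e-6, -2.0e-6, 0.5e-6, -1.0e-6, 1.5e-6])

# Fit x = v*t + x0; the slope is the bulk acoustic velocity estimate.
slope, intercept = np.polyfit(t, x, 1)
print(f"estimated velocity: {slope:.0f} m/s")
```

Fitting over multiple separations averages out per-measurement timing jitter, which is the point of the repositionable fixture described in the abstract.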

  6. [Predicting the impact of climate change in the next 40 years on the yield of maize in China].

    PubMed

    Ma, Yu-ping; Sun, Lin-li; E, You-hao; Wu, Wei

    2015-01-01

    Climate change will significantly affect agricultural production in China. Combining an integral regression model with the latest climate projections makes it possible to assess the impact of future climate change on crop yield. In this paper, a correlation model between maize yield and meteorological factors was first established for different provinces in China using the integral regression method; the impact of climate change over the next 40 years on China's maize production was then evaluated by combining this model with the latest climate prediction, and the reasons for the projected changes were analyzed. The results showed that if the current rates of maize variety improvement and of scientific and technological development remain constant, maize yield in China would mainly show a reduction that grows with time over the next 40 years, generally within 5%. Under the A2 climate change scenario, the region with the largest reduction of maize yield would be the Northeast, except during 2021-2030, with reductions generally in the range of 2.3%-4.2%. Maize yield reduction would also be high in the Northwest, the Southwest, and the middle and lower reaches of the Yangtze River after 2031. Under the B2 scenario, the reduction of 5.3% in the Northeast in 2031-2040 would be the greatest across all regions. Other regions with considerable maize yield reduction would be mainly the Northwest and the Southwest. Reduction in maize yield in North China would be small, generally within 2%, under either scenario, and yield in South China would be almost unchanged. The reduction of maize yield in most regions would be greater under the A2 scenario than under the B2 scenario, except for the period 2021-2030. The effect of ten-day precipitation on maize yield in northern China would be almost always positive, whereas the effect of ten-day average temperature on maize yield in all regions would be generally negative.
The main reason for maize yield reduction was temperature increase in most provinces, but precipitation decrease in a few provinces. Assessments of the future change of maize yield in China based on different methods have not been consistent. Further evaluation needs to consider changes in maize varieties and scientific and technological progress, and to enhance the reliability of the evaluation models.

  7. Self-consistent projection operator theory in nonlinear quantum optical systems: A case study on degenerate optical parametric oscillators

    NASA Astrophysics Data System (ADS)

    Degenfeld-Schonburg, Peter; Navarrete-Benlloch, Carlos; Hartmann, Michael J.

    2015-05-01

    Nonlinear quantum optical systems are of paramount relevance for modern quantum technologies, as well as for the study of dissipative phase transitions. Their nonlinear nature makes their theoretical study very challenging and hence they have always served as great motivation to develop new techniques for the analysis of open quantum systems. We apply the recently developed self-consistent projection operator theory to the degenerate optical parametric oscillator to exemplify its general applicability to quantum optical systems. We show that this theory provides an efficient method to calculate the full quantum state of each mode with a high degree of accuracy, even at the critical point. It is equally successful in describing both the stationary limit and the dynamics, including regions of the parameter space where the numerical integration of the full problem is significantly less efficient. We further develop a Gaussian approach consistent with our theory, which yields sensibly better results than the previous Gaussian methods developed for this system, most notably standard linearization techniques.

  8. Determination of melanterite-rozenite and chalcanthite-bonattite equilibria by humidity measurements at 0.1 MPa

    USGS Publications Warehouse

    Chou, I.-Ming; Seal, R.R.; Hemingway, B.S.

    2002-01-01

    Melanterite (FeSO4·7H2O)-rozenite (FeSO4·4H2O) and chalcanthite (CuSO4·5H2O)-bonattite (CuSO4·3H2O) equilibria were determined by humidity measurements at 0.1 MPa. Two methods were used: one is the gas-flow-cell method (between 21 and 98 °C), and the other is the humidity-buffer method (between 21 and 70 °C). The first method has a larger temperature uncertainty even though it is more efficient. With the aid of humidity buffers, which correspond to a series of saturated binary salt solutions, the second method yields reliable results, as demonstrated by very tight reversals along each humidity buffer. These results are consistent with those obtained by the first method, and also with the solubility data reported in the literature. Thermodynamic analysis of these data yields values of 29.231 ± 0.025 and 22.593 ± 0.040 kJ/mol for the standard Gibbs free energy of reaction at 298.15 K and 0.1 MPa for the melanterite-rozenite and chalcanthite-bonattite equilibria, respectively. The methods used in this study hold great potential for unraveling the thermodynamic properties of sulfate salts involved in dehydration reactions at near-ambient conditions.
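The thermodynamic step rests on ΔG° = −nRT ln(p_H2O/p°) for a dehydration reaction releasing n moles of water vapor. As a consistency sketch (assuming the melanterite → rozenite reaction releases 3 H2O(g) and taking p° = 1 bar, an illustrative reference state rather than the paper's exact convention), the reported ΔG° implies an equilibrium water-vapor pressure:

```python
import math

R = 8.314462618e-3   # gas constant, kJ/(mol*K)

def delta_g(n_h2o, p_h2o, T=298.15, p_ref=1.0):
    """Delta_G (kJ/mol) for hydrate = lower hydrate + n H2O(g),
    Delta_G = -n R T ln(p_H2O / p_ref); pressures in bar."""
    return -n_h2o * R * T * math.log(p_h2o / p_ref)

# Back out the equilibrium water-vapor pressure implied by the reported
# 29.231 kJ/mol for melanterite -> rozenite + 3 H2O(g) at 298.15 K.
dg = 29.231
p_eq = math.exp(-dg / (3.0 * R * 298.15))
print(f"implied equilibrium p_H2O: {p_eq * 1000:.1f} mbar")
```

The implied pressure of roughly 20 mbar is about 60% of water saturation at 25 °C, i.e., a relative humidity in the range that saturated-salt humidity buffers can bracket.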

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diaz-Cruz, J. L.

    We discuss several aspects of the flavor problem in supersymmetry. First, in order to quantify the SUSY flavor problem, we randomly generate the entries of the sfermion mass matrices and determine what percentage of the points is consistent with current bounds on flavor-violating transitions, taking as an illustration the lepton flavor violating (LFV) decays li → ljγ. In the first instance we apply the mass-insertion method and study how this percentage changes as one varies the parameters of the model. It is found that for 10^5 points about 10% pass current LFV bounds on μ → eγ provided the slepton masses are ≈ 10 TeV, while bounds on τ → μγ, eγ are satisfied for almost 100% of points even for slepton masses as low as 360 GeV. Then, we consider an ansatz for sfermion masses that can be diagonalized exactly, and compare with the results obtained previously for τ → μγ. Now, we find that 100% of points satisfy the experimental bounds, but with slepton masses larger than 460 GeV.

  10. Spatial and Temporal Patterns of Suspended Sediment Yields in Nested Urban Catchments

    NASA Astrophysics Data System (ADS)

    Kemper, J. T.; Miller, A. J.; Welty, C.

    2017-12-01

    In a highly regulated area such as the Chesapeake Bay watershed, suspended sediment is a matter of primary concern. Near real-time turbidity and discharge data have been collected continuously for more than four years at five stream gages representing three nested watershed scales (1-2 sq km, 5-6 sq km, 14 sq km) in the highly impervious Dead Run watershed, located in Baltimore County, MD. Using turbidity-concentration relationships based on sample analyses at the gage site, sediment yields for each station can be quantified for a variety of temporal scales. Sediment yields have been calculated for 60+ different storms across four years. Yields show significant spatial variation, both at equivalent sub-watershed scales and from headwaters to mouth. Yields are higher at the headwater station with older development and virtually no stormwater management (DR5) than at the station with more recent development and more extensive stormwater management (DR2). However, this pattern is reversed for the stations at the next larger scale: yields are lower at DR4, downstream of DR5, than at DR3, downstream of DR2. This suggests spatial variation in the dominant sediment sources within each subwatershed. Additionally, C-Q hysteresis curves display consistent counterclockwise behavior at the DR4 station, in contrast to the consistent clockwise behavior displayed at the DR3 station. This further suggests variation in dominant sediment sources (perhaps distal vs local, respectively). We observe consistent seasonal trends in the relative magnitudes of sediment yield for different subwatersheds (e.g. DR3>DR4 in summer, DR5>DR2 in spring). We also observe significant year-to-year variation in sediment yield at the headwater and intermediate scales, whereas yields at the 14 sq km scale are largely similar across the monitored years. 
This observation would be consistent with the possibility that internal storage and remobilization tend to modulate downstream yields even with spatial and temporal variation in upstream sources. The fine-scale design of this study represents a unique opportunity to compare and contrast sediment yields across a variety of spatial and temporal scales, and provide insight into sediment transport dynamics within an urbanized watershed.

  11. Binarization of Gray-Scaled Digital Images Via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A.; Klinko, Steve; Voska, Ned (Technical Monitor)

    2002-01-01

    A new fast-computational technique based on a fuzzy entropy measure has been developed to find an optimal binary image threshold. In this method, the image pixel membership functions depend on the threshold value and reflect the distribution of pixel values in two classes; thus, the technique minimizes the classification error. This new method is compared with two of the best-known threshold selection techniques, Otsu and Huang-Wang. The performance of the proposed method supersedes that of the Huang-Wang and Otsu methods when the image consists of a textured background and poor printing quality. The three methods perform well but yield different binarization approaches if the background and foreground of the image have well-separated gray-level ranges.
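A threshold-dependent fuzzy-entropy criterion of the general kind described can be sketched as follows. This is a simplified Huang-Wang-style formulation with an ad hoc membership function, not the authors' exact method:

```python
import numpy as np

def fuzzy_threshold(image):
    """Pick the gray level minimizing total fuzzy (Shannon) entropy.
    Memberships depend on the candidate threshold through the class means,
    in the spirit of Huang-Wang; the membership function is an illustrative
    choice, not the paper's formulation."""
    g = image.ravel().astype(float)
    c = g.max() - g.min()                    # normalizing gray-level span
    best_t, best_e = None, np.inf
    for t in range(int(g.min()) + 1, int(g.max())):
        lo, hi = g[g <= t], g[g > t]
        if lo.size == 0 or hi.size == 0:
            continue
        mu0, mu1 = lo.mean(), hi.mean()
        # membership in (0.5, 1]: the closer a pixel is to its class mean,
        # the less fuzzy its classification
        m = np.where(g <= t,
                     1.0 / (1.0 + np.abs(g - mu0) / c),
                     1.0 / (1.0 + np.abs(g - mu1) / c))
        m = np.clip(m, 1e-12, 1.0 - 1e-12)   # guard the log terms
        e = -np.mean(m * np.log(m) + (1.0 - m) * np.log(1.0 - m))
        if e < best_e:
            best_t, best_e = t, e
    return best_t

# Bimodal toy "image": dark background (~40) and bright foreground (~200).
rng = np.random.default_rng(2)
img = np.clip(np.concatenate([rng.normal(40, 5, 5000),
                              rng.normal(200, 5, 5000)]), 0, 255).astype(np.uint8)
th = fuzzy_threshold(img)
print("selected threshold:", th)
```

On a cleanly bimodal histogram like this one, the entropy minimum falls in the valley between the two modes, which is where Otsu and Huang-Wang also land; the methods differ mainly on textured, low-contrast images.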


  13. Lipoabdominoplasty: An exponential advantage for a consistently safe and aesthetic outcome.

    PubMed

    Kanjoor, J R; Singh, A K

    2012-01-01

    Extensive liposuction along with limited dissection of the abdominal flap is slowly emerging as a well-proven, advantageous method over standard abdominoplasty. A retrospective study analyzed 146 patients managed for abdominal contour deformities from March 2004 to February 2010. A simple method of projecting the postoperative outcome, by rotating a supine lateral photograph to the upright posture, was applied prospectively in 46 patients and succeeded in predicting the result. All patients were encouraged to practice chest physiotherapy in the 'tummy tuck' position during preoperative counseling. Aggressive liposuction of the entire upper abdomen, limited dissection in the midline, plication of the rectus diastasis whenever indicated, panniculectomy and neoumbilicoplasty were done in all patients. The patients had a mean age of 43 years, the youngest being 29 and the oldest 72. The majority were of normal weight (94%); twelve were morbidly obese. Fifty-seven patients had undergone previous abdominal surgeries, and 49 had associated hernias. Lipoabdominoplasty yielded a satisfactory result in 110 (94%) patients, who postoperatively had a distinctly lighter, more harmonious abdomen with an improved waistline. Complications were more frequent with higher BMI, fat thickness of more than 7 cm, and prolonged operating time when other procedures were combined. Extensive liposuction combined with the limited dissection method, applied to all abdominoplasty patients, yielded consistently safe, reliable and predictable aesthetic results with fewer complications and faster recovery. The simple photographic manipulation helped project the postoperative outcome reliably, and preoperative chest physiotherapy in the tummy tuck position helped prevent chest complications.

  14. Psychometric Properties of the Autism-Spectrum Quotient for Assessing Low and High Levels of Autistic Traits in College Students.

    PubMed

    Stevenson, Jennifer L; Hart, Kari R

    2017-06-01

    The current study systematically investigated the effects of scoring and categorization methods on the psychometric properties of the Autism-Spectrum Quotient. Four hundred and three college students completed the Autism-Spectrum Quotient at least once. Total scores on the Autism-Spectrum Quotient had acceptable internal consistency and test-retest reliability using a binary or Likert scoring method, but the results were more varied for the subscales. Overall, Likert scoring yielded higher internal consistency and test-retest reliability than binary scoring. However, agreement in categorization of low and high autistic traits was poor over time (except for a median split on Likert scores). The results support using Likert scoring and administering the Autism-Spectrum Quotient at the same time as the task of interest with neurotypical participants.
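The reliability comparison can be reproduced in miniature with Cronbach's alpha on simulated item responses. The latent-trait model, item count, and dichotomization cut point below are all invented, but they illustrate why dichotomizing Likert responses tends to lower internal consistency:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Simulated responses: one latent trait plus item noise, scored 1-4
# ("Likert") or dichotomized at >= 3 ("binary"). All numbers invented.
rng = np.random.default_rng(3)
trait = rng.normal(0.0, 1.0, (400, 1))
likert = np.clip(np.round(2.5 + trait + rng.normal(0.0, 0.8, (400, 10))), 1, 4)
binary = (likert >= 3).astype(float)

a_likert, a_binary = cronbach_alpha(likert), cronbach_alpha(binary)
print(f"alpha (Likert): {a_likert:.2f}, alpha (binary): {a_binary:.2f}")
```

Collapsing four response options to two discards within-category information about the latent trait, which attenuates inter-item correlations and hence alpha, consistent with the study's finding that Likert scoring yields higher reliability.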

  15. Recruitment methods employed in the prostate, lung, colorectal, and ovarian cancer screening trial

    PubMed Central

    Gren, Lisa; Broski, Karen; Childs, Jeffery; Cordes, Jill; Engelhard, Deborah; Gahagan, Betsy; Gamito, Eduard; Gardner, Vivien; Geisser, Mindy; Higgins, Darlene; Jenkins, Victoria; Lamerato, Lois; Lappe, Karen; Lowery, Heidi; McGuire, Colleen; Miedzinski, Mollie; Ogden, Sheryl; Tenorio, Sally; Watt, Gavin; Wohlers, Bonita; Marcus, Pamela

    2015-01-01

    Background The Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial (PLCO) is a US National Cancer Institute (NCI)-funded randomized controlled trial designed to evaluate whether certain screening tests reduce mortality from prostate, lung, colorectal, and ovarian cancer. To obtain adequate statistical power, it was necessary to enroll over 150,000 healthy volunteers. Recruitment began in 1993 and ended in 2001. Purpose Our goal is to evaluate the success of recruitment methods employed by the 10 PLCO screening centers. We also provide estimates of recruitment yield and cost for our most successful strategy, direct mail. Methods Each screening center selected its own methods of recruitment. Methods changed throughout the recruitment period as needed. For this manuscript, representatives from each screening center provided information on methods utilized and their success. Results In the United States between 1993 and 2001, ten screening centers enrolled 154,934 study participants. Based on participant self-report, an estimated 95% of individuals were recruited by direct mail. Overall, enrollment yield for direct mail was 1.0%. Individual center enrollment yield ranged from 0.7% to 3.8%. Cost per enrolled participant was $9.64–35.38 for direct mail, excluding personnel costs. Limitations Numeric data on recruitment processes were not kept consistently at individual screening centers. Numeric data in this manuscript are based on the experiences of 5 of the 10 centers. Conclusions Direct mail, using rosters of names and addresses from profit and not-for-profit (including government) organizations, was the most successful and most often used recruitment method. Other recruitment strategies, such as community outreach and use of mass media, can be an important adjunct to direct mail in recruiting minority populations. PMID:19254935

  16. Linkages and Interactions Analysis of Major Effect Drought Grain Yield QTLs in Rice.

    PubMed

    Vikram, Prashant; Swamy, B P Mallikarjuna; Dixit, Shalabh; Trinidad, Jennylyn; Sta Cruz, Ma Teresa; Maturan, Paul C; Amante, Modesto; Kumar, Arvind

    2016-01-01

    Quantitative trait loci conferring high grain yield under drought in rice are important genomic resources for climate-resilient breeding. Major and consistent drought grain yield QTLs usually co-locate with flowering and/or plant height QTLs, which could be due to either linkage or pleiotropy. Five mapping populations used for the identification of major and consistent drought grain yield QTLs were subjected to multiple-trait multiple-interval mapping (MT-MIM) tests to estimate the significance of pleiotropic effects. Results indicated possible linkages between the drought grain yield QTLs and the co-locating flowering and/or plant height QTLs. Linkages with days to flowering and plant height were eliminated through a marker-assisted breeding approach. Drought grain yield QTLs also showed interaction effects with flowering QTLs. Drought responsiveness of the flowering locus on chromosome 3 (qDTY3.2) was revealed through allelic analysis. Considering the linkage and interaction effects associated with drought QTLs, a comprehensive marker-assisted breeding strategy was followed to develop rice genotypes with improved grain yield under drought stress.

  17. Contradictory genetic make-up of Dutch harbour porpoises: Response to van der Plas-Duivesteijn et al.

    NASA Astrophysics Data System (ADS)

    Kopps, Anna M.; Palsbøll, Per J.

    2016-02-01

    Assessments of the status of endangered species or populations typically draw generously on the plethora of population genetic software available to detect population genetic structuring. However, despite the many available analytical approaches, population genetic inference methods (of neutral genetic variation) essentially capture three basic processes: migration, random genetic drift, and mutation. Consequently, different analytical approaches essentially capture the same basic processes and should yield consistent results.

  18. CdTe₁₋ₓSₓ (x ⩽ 0.05) thin films synthesized by aqueous solution deposition and annealing

    NASA Astrophysics Data System (ADS)

    Pruzan, Dennis S.; Hahn, Carina E.; Misra, Sudhajit; Scarpulla, Michael A.

    2017-11-01

    While CdS thin films are commonly deposited from aqueous solutions, CdTe thin films are extremely difficult to deposit directly from aqueous solution. In this work, we report on polycrystalline CdTe₁₋ₓSₓ thin films synthesized via deposition from aqueous precursor solutions followed by annealing treatments, and on their physical properties. The deposition method uses spin-coating of alternating Cd²⁺ and Te²⁻ aqueous solutions with rinse steps, allowing film formation while shearing off excess reactants and poorly bonded solids. Films are then annealed in the presence of CdCl₂, as is commonly done for CdTe photovoltaic absorber layers deposited by any means. Scanning electron microscopy (SEM) reveals low void fractions and grain sizes up to 4 µm, and x-ray diffraction (XRD) shows that the films are primarily cubic CdTe₁₋ₓSₓ (x ⩽ 0.05) with random crystallographic orientation. Optical transmission yields bandgap absorption consistent with a dilute CdTe₁₋ₓSₓ alloy, and low-temperature photoluminescence (PL) consists of an emission band centered at 1.35 eV, consistent with donor-acceptor pair (DAP) transitions in CdTe₁₋ₓSₓ. Together, the crystalline quality and PL yield from films produced by this method represent an important step towards electroless, ligand-free solution-processed CdTe and related alloy thin films suitable for optoelectronic device applications such as thin-film heterojunction or nanodipole-based photovoltaics.

  19. A consistent modelling methodology for secondary settling tanks in wastewater treatment.

    PubMed

    Bürger, Raimund; Diehl, Stefan; Nopens, Ingmar

    2011-03-01

    The aim of this contribution is partly to build consensus on a consistent modelling methodology (CMM) of complex real processes in wastewater treatment by combining classical concepts with results from applied mathematics, and partly to apply it to the clarification-thickening process in the secondary settling tank. In the CMM, the real process should be approximated by a mathematical model (process model; ordinary or partial differential equation (ODE or PDE)), which in turn is approximated by a simulation model (numerical method) implemented on a computer. These steps have often not been carried out in a correct way. The secondary settling tank was chosen as a case since this is one of the most complex processes in a wastewater treatment plant and simulation models developed decades ago have no guarantee of satisfying fundamental mathematical and physical properties. Nevertheless, such methods are still used in commercial tools to date. This particularly becomes of interest as the state-of-the-art practice is moving towards plant-wide modelling. Then all submodels interact and errors propagate through the model and severely hamper any calibration effort and, hence, the predictive purpose of the model. The CMM is described by applying it first to a simple conversion process in the biological reactor yielding an ODE solver, and then to the solid-liquid separation in the secondary settling tank, yielding a PDE solver. Time has come to incorporate established mathematical techniques into environmental engineering, and wastewater treatment modelling in particular, and to use proven reliable and consistent simulation models. Copyright © 2011 Elsevier Ltd. All rights reserved.
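
    The step from process model (ODE) to simulation model (numerical method) can be illustrated with a minimal sketch. The snippet below assumes a hypothetical first-order conversion process dS/dt = -k*S in the biological reactor; the explicit Euler scheme is a generic stand-in for a consistent solver, not the authors' implementation, and all parameter values are made up for illustration.

```python
import math

def euler(f, y0, t_end, n_steps):
    """Explicit Euler integration of dy/dt = f(t, y) from t = 0 to t_end."""
    h = t_end / n_steps
    t, y = 0.0, y0
    for _ in range(n_steps):
        y += h * f(t, y)
        t += h
    return y

# hypothetical first-order conversion: dS/dt = -k*S, exact solution S0*exp(-k*t)
k, s0, t_end = 0.5, 1.0, 4.0
numeric = euler(lambda t, s: -k * s, s0, t_end, n_steps=1000)
exact = s0 * math.exp(-k * t_end)
```

    A consistent scheme in the sense of the CMM is one whose numerical solution converges to the exact solution of the process model as the step size shrinks, which can be checked by refining `n_steps`.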

  20. Influence of the Extractive Method on the Recovery of Phenolic Compounds in Different Parts of Hymenaea martiana Hayne

    PubMed Central

    Oliveira, Fernanda Granja da Silva; de Lima-Saraiva, Sarah Raquel Gomes; Oliveira, Ana Paula; Rabêlo, Suzana Vieira; Rolim, Larissa Araújo; Almeida, Jackson Roberto Guedes da Silva

    2016-01-01

    Background: Popularly known as “jatobá,” Hymenaea martiana Hayne is a medicinal plant widely used in the Brazilian Northeast for the treatment of various diseases. Objective: The aim of this study was to evaluate the influence of different extractive methods on the production of phenolic compounds from different parts of H. martiana. Materials and Methods: The leaves, bark, fruits, and seeds were dried, pulverized, and submitted to maceration, ultrasound, and percolation extractive methods, which were evaluated for yield, visual aspects, qualitative phytochemical screening, phenolic compound content, and total flavonoids. Results: The highest yields were obtained from the maceration of the leaves, which may be related to the contact time between the plant drug and the solvent. The visual aspects of the extracts presented some differences between the extractive methods. The phytochemical screening showed data consistent with other studies of the genus. Both the plant part and the different extractive methods significantly influenced the levels of phenolic compounds, and the highest content was found in the maceration of the barks, even higher than the content found previously. Differences between the levels of total flavonoids were not significant. The highest concentration of total flavonoids was found in the ultrasound extraction of the barks, followed by maceration of the same material. According to the results, the barks of H. martiana presented the highest total flavonoid contents. Conclusion: The results demonstrate that both the plant part and the different extractive methods significantly influenced various parameters of the extracts, demonstrating the importance of systematic comparative studies for the development of pharmaceuticals and cosmetics.
SUMMARY The phytochemical screening showed data consistent with other studies of the genus Hymenaea. Both the plant part and the different extractive methods significantly influenced various parameters of the extracts, including the levels of phenolic compounds. The barks of H. martiana presented the highest total phenolic and flavonoid contents. PMID:27695267

  1. Multivariate Statistical Models for Predicting Sediment Yields from Southern California Watersheds

    USGS Publications Warehouse

    Gartner, Joseph E.; Cannon, Susan H.; Helsel, Dennis R.; Bandurraga, Mark

    2009-01-01

    Debris-retention basins in Southern California are frequently used to protect communities and infrastructure from the hazards of flooding and debris flow. Empirical models that predict sediment yields are used to determine the size of the basins. Such models have been developed using analyses of records of the amount of material removed from debris retention basins, associated rainfall amounts, measures of watershed characteristics, and wildfire extent and history. In this study we used multiple linear regression methods to develop two updated empirical models to predict sediment yields for watersheds located in Southern California. The models are based on both new and existing measures of volume of sediment removed from debris retention basins, measures of watershed morphology, and characterization of burn severity distributions for watersheds located in Ventura, Los Angeles, and San Bernardino Counties. The first model presented reflects conditions in watersheds located throughout the Transverse Ranges of Southern California and is based on volumes of sediment measured following single storm events with known rainfall conditions. The second model presented is specific to conditions in Ventura County watersheds and was developed using volumes of sediment measured following multiple storm events. To relate sediment volumes to triggering storm rainfall, a rainfall threshold was developed to identify storms likely to have caused sediment deposition. A measured volume of sediment deposited by numerous storms was parsed among the threshold-exceeding storms based on relative storm rainfall totals. The predictive strength of the two models developed here, and of previously-published models, was evaluated using a test dataset consisting of 65 volumes of sediment yields measured in Southern California. 
The evaluation indicated that the model developed using information from single storm events in the Transverse Ranges best predicted sediment yields for watersheds in San Bernardino, Los Angeles, and Ventura Counties. This model predicts sediment yield as a function of the peak 1-hour rainfall, the watershed area burned by the most recent fire (at all severities), the time since the most recent fire, watershed area, average gradient, and relief ratio. The model that reflects conditions specific to Ventura County watersheds consistently under-predicted sediment yields and is not recommended for application. Some previously-published models performed reasonably well, while others either under-predicted sediment yields or had a larger range of errors in the predicted sediment yields.
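
    The fitting machinery behind such empirical models can be sketched in a few lines. The snippet below fits an ordinary least-squares line to synthetic data with a single hypothetical predictor; the published models use several watershed and rainfall variables, and none of the numbers below come from the paper.

```python
import random

def ols_fit(xs, ys):
    """Closed-form ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    return my - b * mx, b

random.seed(1)
# synthetic "log sediment yield" generated from a known line plus noise
true_a, true_b = 2.0, 0.8
xs = [i / 10 for i in range(50)]
ys = [true_a + true_b * x + random.gauss(0, 0.1) for x in xs]
a, b = ols_fit(xs, ys)
```

    A multiple-regression model of the kind described above is the same idea with a design matrix of several predictors solved simultaneously.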

  2. Microwave-assisted extraction (MAE) of bioactive saponin from mahogany seed (Swietenia mahogany Jacq)

    NASA Astrophysics Data System (ADS)

    Waziiroh, E.; Harijono; Kamilia, K.

    2018-03-01

    Mahogany is frequently used in medicines for cancer, tumors, and diabetes, as it contains saponins and flavonoids. Saponin is a complex glycosidic compound consisting of triterpenoids or steroids and can be extracted from a plant by solvent extraction. Microwave-assisted extraction (MAE) is a non-conventional extraction method that uses microwaves in the process. This research was conducted as a completely randomized design with two factors: extraction time (120, 150, and 180 seconds) and solvent ratio (10:1, 15:1, and 20:1 v/w). The best MAE treatment was the solvent ratio of 15:1 (v/w) for 180 seconds, yielding 41.46% crude saponin extract containing 11.53% total saponins, with 49.17% antioxidant activity. Meanwhile, the best maceration treatment, the solvent ratio of 20:1 (v/w) for 48 hours, yielded 39.86% crude saponin extract, 9.26% total saponins, and 56.23% antioxidant activity. The results showed that MAE was more efficient (less extraction time and solvent) than the maceration method.

  3. First-principles method for calculating the rate constants of internal-conversion and intersystem-crossing transitions.

    PubMed

    Valiev, R R; Cherepanov, V N; Baryshnikov, G V; Sundholm, D

    2018-02-28

    A method for calculating the rate constants for internal-conversion (k IC ) and intersystem-crossing (k ISC ) processes within the adiabatic and Franck-Condon (FC) approximations is proposed. The applicability of the method is demonstrated by calculation of k IC and k ISC for a set of organic and organometallic compounds with experimentally known spectroscopic properties. The studied molecules were pyrromethene-567 dye, psoralene, hetero[8]circulenes, free-base porphyrin, naphthalene, and larger polyacenes. We also studied fac-Alq 3 and fac-Ir(ppy) 3 , which are important molecules in organic light emitting diodes (OLEDs). The excitation energies were calculated at the multi-configuration quasi-degenerate second-order perturbation theory (XMC-QDPT2) level, which is found to yield excitation energies in good agreement with experimental data. Spin-orbit coupling matrix elements, non-adiabatic coupling matrix elements, Huang-Rhys factors, and vibrational energies were calculated at the time-dependent density functional theory (TDDFT) and complete active space self-consistent field (CASSCF) levels. The computed fluorescence quantum yields for the pyrromethene-567 dye, psoralene, hetero[8]circulenes, fac-Alq 3 and fac-Ir(ppy) 3 agree well with experimental data, whereas for the free-base porphyrin, naphthalene, and the polyacenes, the obtained quantum yields significantly differ from the experimental values, because the FC and adiabatic approximations are not accurate for these molecules.

  4. Modeling ultrafast solvated electronic dynamics using time-dependent density functional theory and polarizable continuum model.

    PubMed

    Liang, Wenkel; Chapman, Craig T; Ding, Feizhi; Li, Xiaosong

    2012-03-01

    A first-principles solvated electronic dynamics method is introduced. Solvent electronic degrees of freedom are coupled to the time-dependent electronic density of a solute molecule by means of the implicit reaction field method, and the entire electronic system is propagated in time. This real-time time-dependent approach, incorporating the polarizable continuum solvation model, is shown to be very effective in describing the dynamical solvation effect in the charge transfer process and yields a consistent absorption spectrum in comparison to the conventional linear response results in solution. © 2012 American Chemical Society

  5. Hydrazine-Assisted Liquid Exfoliation of MoS2 for Catalytic Hydrodeoxygenation of 4-Methylphenol.

    PubMed

    Liu, Guoliang; Ma, Hualong; Teixeira, Ivo; Sun, Zhenyu; Xia, Qineng; Hong, Xinlin; Tsang, Shik Chi Edman

    2016-02-24

    A simple but effective method to exfoliate bulk MoS2 in a range of solvents is presented for the preparation of colloidal flakes consisting of one to a few molecular layers by application of ultrasonic treatment in N2 H4 . Their high yield in solution and the exposure of more active surface sites allow the synthesis of corresponding solid catalysts with remarkably high activity in the hydrodeoxygenation of 4-methylphenol, and this method can also be applied to other two-dimensional materials. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Targeted Next Generation Sequencing in Patients with Inborn Errors of Metabolism

    PubMed Central

    Yubero, Dèlia; Brandi, Núria; Ormazabal, Aida; Garcia-Cazorla, Àngels; Pérez-Dueñas, Belén; Campistol, Jaime; Ribes, Antonia; Palau, Francesc

    2016-01-01

    Background Next-generation sequencing (NGS) technology has advanced genetic diagnosis and is becoming increasingly inexpensive and fast. To evaluate the utility of NGS in the clinical field, a targeted genetic panel approach was designed for the diagnosis of a set of inborn errors of metabolism (IEM). The final aim of the study was to compare the diagnostic yield of NGS in patients who presented with consistent clinical and biochemical suspicion of IEM with that obtained for patients who did not have specific biomarkers. Methods The subjects studied (n = 146) were classified into two categories: Group 1 (n = 81), which consisted of patients with clinical and biochemical suspicion of IEM, and Group 2 (n = 65), which consisted of IEM cases with clinical suspicion but unspecific biomarkers. A total of 171 genes were analyzed using a custom targeted panel followed by Sanger validation. Results Genetic diagnosis was achieved in 50% of patients (73/146). The diagnostic yield for Group 1 was 78% (63/81), and this rate decreased to 15.4% (10/65) in Group 2 (X2 = 76.171; p < 0.0001). Conclusions A rapid and effective genetic diagnosis was achieved in our cohort, particularly in the group that had both clinical and biochemical indications for the diagnosis. PMID:27243974
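
    The comparison of diagnostic yields between the two groups is a standard chi-squared test on a 2×2 contingency table; a minimal sketch using the published counts (63/81 vs. 10/65) is shown below. This illustrates the method only — the statistic reported in the abstract may reflect a differently specified test.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic (no continuity correction) for
    the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, r, col in ((a, row1, col1), (b, row1, col2),
                        (c, row2, col1), (d, row2, col2)):
        exp = r * col / n
        chi2 += (obs - exp) ** 2 / exp
    return chi2

# Group 1: 63 diagnosed of 81; Group 2: 10 diagnosed of 65
stat = chi2_2x2(63, 81 - 63, 10, 65 - 10)
# stat far exceeds 10.83, the 1-df critical value for p = 0.001, consistent
# with the paper's conclusion of a highly significant difference in yield
```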

  7. Measuring NMHC and NMOG emissions from motor vehicles via FTIR spectroscopy

    NASA Astrophysics Data System (ADS)

    Gierczak, Christine A.; Kralik, Lora L.; Mauti, Adolfo; Harwell, Amy L.; Maricq, M. Matti

    2017-02-01

    The determination of non-methane organic gases (NMOG) emissions according to United States Environmental Protection Agency (EPA) regulations is currently a multi-step process requiring separate measurement of various emissions components by a number of independent on-line and off-line techniques. The Fourier transform infrared spectroscopy (FTIR) method described in this paper records all required components using a single instrument. It gives data consistent with the regulatory method, greatly simplifies the process, and provides second by second time resolution. Non-methane hydrocarbons (NMHCs) are measured by identifying a group of hydrocarbons, including oxygenated species, that serve as a surrogate for this class, the members of which are dynamically included if they are present in the exhaust above predetermined threshold levels. This yields an FTIR equivalent measure of NMHC that correlates within 5% to the regulatory flame ionization detection (FID) method. NMOG is then determined per regulatory calculation solely from FTIR recorded emissions of NMHC, ethanol, acetaldehyde, and formaldehyde, yielding emission rates that also correlate within 5% with the reference method. Examples are presented to show how the resulting time resolved data benefit aftertreatment development for light duty vehicles.

  8. Improved modified energy ratio method using a multi-window approach for accurate arrival picking

    NASA Astrophysics Data System (ADS)

    Lee, Minho; Byun, Joongmoo; Kim, Dowan; Choi, Jihun; Kim, Myungsun

    2017-04-01

    To identify accurately the location of microseismic events generated during hydraulic fracture stimulation, it is necessary to detect the first break of the P- and S-wave arrival times recorded at multiple receivers. These microseismic data often contain high-amplitude noise, which makes it difficult to identify the P- and S-wave arrival times. The short-term-average to long-term-average (STA/LTA) and modified energy ratio (MER) methods are based on the differences in the energy densities of the noise and signal, and are widely used to identify the P-wave arrival times. The MER method yields more consistent results than the STA/LTA method for data with a low signal-to-noise (S/N) ratio. However, although the MER method shows good results regardless of the delay of the signal wavelet for signals with a high S/N ratio, it may yield poor results if the signal is contaminated by high-amplitude noise and does not have the minimum delay. Here we describe an improved MER (IMER) method, whereby we apply a multiple-windowing approach to overcome the limitations of the MER method. The IMER method contains calculations of an additional MER value using a third window (in addition to the original MER window), as well as the application of a moving average filter to each MER data point to eliminate high-frequency fluctuations in the original MER distributions. The resulting distribution makes it easier to apply thresholding. The proposed IMER method was applied to synthetic and real datasets with various S/N ratios and mixed-delay wavelets. The results show that the IMER method yields a high accuracy rate of around 80% within five sample errors for the synthetic datasets. Likewise, in the case of real datasets, 94.56% of the P-wave picking results obtained by the IMER method had a deviation of less than 0.5 ms (corresponding to 2 samples) from the manual picks.
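
    The core of the energy-ratio idea can be sketched as follows. This is a simplified single-window MER picker run on synthetic data, not the IMER algorithm itself; the window length, noise level, signal shape, and onset position are all made-up illustration values.

```python
import math
import random

def mer_pick(x, w):
    """Pick the first break as the sample maximizing the modified energy
    ratio: trailing-window energy over leading-window energy, scaled by
    the instantaneous amplitude and cubed to sharpen the peak."""
    best_i, best_v = 0, -1.0
    for i in range(w, len(x) - w):
        pre = sum(v * v for v in x[i - w:i]) + 1e-12  # leading-window energy
        post = sum(v * v for v in x[i:i + w])         # trailing-window energy
        mer = (post / pre * abs(x[i])) ** 3
        if mer > best_v:
            best_i, best_v = i, mer
    return best_i

random.seed(0)
onset = 200
# low-amplitude noise everywhere, sinusoidal "arrival" starting at the onset
x = [0.02 * random.gauss(0, 1) for _ in range(400)]
for k in range(onset, 400):
    x[k] += math.sin(2 * math.pi * 0.05 * (k - onset))
pick = mer_pick(x, w=20)
```

    The IMER refinements described above (a third window and a moving-average filter over the MER trace) aim to keep this peak stable when the noise amplitude is high or the wavelet is not minimum-delay.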

  9. Search for the decay modes D⁰→e⁺e⁻, D⁰→μ⁺μ⁻, and D⁰→e±μ∓

    DOE PAGES

    Lees, J. P.; Poireau, V.; Tisserand, V.; ...

    2012-08-01

    We present searches for the rare decay modes D⁰→e⁺e⁻, D⁰→μ⁺μ⁻, and D⁰→e±μ∓ in continuum e⁺e⁻→cc¯ events recorded by the BABAR detector in a data sample that corresponds to an integrated luminosity of 468 fb⁻¹. These decays are highly Glashow–Iliopoulos–Maiani suppressed but may be enhanced in several extensions of the standard model. Our observed event yields are consistent with the expected backgrounds. An excess is seen in the D⁰→μ⁺μ⁻ channel, although the observed yield is consistent with an upward background fluctuation at the 5% level. Using the Feldman–Cousins method, we set the following 90% confidence level intervals on the branching fractions: B(D⁰→e⁺e⁻)<1.7×10⁻⁷, B(D⁰→μ⁺μ⁻) within [0.6,8.1]×10⁻⁷, and B(D⁰→e±μ∓)<3.3×10⁻⁷.

  10. Measuring leader perceptions of school readiness for reforms: use of an iterative model combining classical and Rasch methods.

    PubMed

    Chatterji, Madhabi

    2002-01-01

    This study examines validity of data generated by the School Readiness for Reforms: Leader Questionnaire (SRR-LQ) using an iterative procedure that combines classical and Rasch rating scale analysis. Following content-validation and pilot-testing, principal axis factor extraction and promax rotation of factors yielded a five factor structure consistent with the content-validated subscales of the original instrument. Factors were identified based on inspection of pattern and structure coefficients. The rotated factor pattern, inter-factor correlations, convergent validity coefficients, and Cronbach's alpha reliability estimates supported the hypothesized construct properties. To further examine unidimensionality and efficacy of the rating scale structures, item-level data from each factor-defined subscale were subjected to analysis with the Rasch rating scale model. Data-to-model fit statistics and separation reliability for items and persons met acceptable criteria. Rating scale results suggested consistency of expected and observed step difficulties in rating categories, and correspondence of step calibrations with increases in the underlying variables. The combined approach yielded more comprehensive diagnostic information on the quality of the five SRR-LQ subscales; further research is continuing.

  11. A practical material decomposition method for x-ray dual spectral computed tomography.

    PubMed

    Hu, Jingjing; Zhao, Xing

    2016-03-17

    X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform the material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated in two groups: image-based and rawdata-based. The image-based method is an approximative method, and beam hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but this method requires geometrically consistent rawdata. However, today's clinical dual energy CT scanners usually measure different rays for different energy spectra and acquire geometrically inconsistent rawdata sets, and thus cannot meet the requirement. This paper proposes a practical material decomposition method to perform rawdata-based material decomposition in the case of inconsistent measurement. This method first yields the desired consistent rawdata sets from the measured inconsistent rawdata sets, and then employs rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated by use of simulated FORBILD thorax phantom rawdata and dental CT rawdata, and simulation results indicate that this method can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.
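
    The core linear-algebra step of basis-material decomposition can be sketched as a 2×2 solve per ray: the low- and high-spectrum measurements are modeled as linear combinations of the two material line integrals. The coefficients below are made-up illustration values, not calibrated DSCT data, and the method described above additionally restores rawdata consistency before any such step.

```python
def decompose(p_low, p_high, m):
    """Solve [p_low, p_high] = m @ [a1, a2] for the two material line
    integrals a1, a2 via Cramer's rule (m is a 2x2 coefficient matrix)."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    a1 = (p_low * m[1][1] - m[0][1] * p_high) / det
    a2 = (m[0][0] * p_high - p_low * m[1][0]) / det
    return a1, a2

# hypothetical attenuation coefficients (rows: spectrum, cols: material)
m = [[0.35, 0.20],   # low-energy spectrum
     [0.18, 0.15]]   # high-energy spectrum

# forward-model known material thicknesses, then recover them
a1_true, a2_true = 2.0, 1.5
p_low = m[0][0] * a1_true + m[0][1] * a2_true
p_high = m[1][0] * a1_true + m[1][1] * a2_true
a1, a2 = decompose(p_low, p_high, m)
```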

  12. Kupffer Cell Isolation for Nanoparticle Toxicity Testing

    PubMed Central

    Bourgognon, Maxime; Klippstein, Rebecca; Al-Jamal, Khuloud T.

    2015-01-01

    The large majority of in vitro nanotoxicological studies have used immortalized cell lines for their practicality. However, results from nanoparticle toxicity testing in immortalized cell lines or primary cells have shown discrepancies, highlighting the need to extend the use of primary cells for in vitro assays. This protocol describes the isolation of mouse liver macrophages, named Kupffer cells, and their use to study nanoparticle toxicity. Kupffer cells are the most abundant macrophage population in the body and constitute part of the reticulo-endothelial system (RES), responsible for the capture of circulating nanoparticles. The Kupffer cell isolation method reported here is based on a 2-step perfusion method followed by purification on a density gradient. The method, based on collagenase digestion and density centrifugation, is adapted from the original protocol developed by Smedsrød et al. designed for rat liver cell isolation and provides high yield (up to 14 x 106 cells per mouse) and high purity (>95%) of Kupffer cells. This isolation method does not require sophisticated or expensive equipment and therefore represents an ideal compromise between complexity and cell yield. The use of heavier mice (35-45 g) improves the yield of the isolation method and also markedly facilitates the procedure of portal vein cannulation. The toxicity of functionalized carbon nanotubes f-CNTs was measured in this model by the modified LDH assay. This method assesses cell viability by measuring the loss of structural integrity of the Kupffer cell membrane after incubation with f-CNTs. Toxicity induced by f-CNTs can be measured consistently using this assay, highlighting that isolated Kupffer cells are useful for nanoparticle toxicity testing. The overall understanding of nanotoxicology could benefit from such models, making nanoparticle selection for clinical translation more efficient. PMID:26327223

  13. Transition of Premature Infants From Hospital to Home Life

    PubMed Central

    Lopez, Greta L.; Anderson, Kathryn Hoehn; Feutchinger, Johanna

    2013-01-01

    Purpose To conduct an integrative review of the literature on the transition of premature infants from the neonatal intensive care unit (NICU) to home. Method A literature search was performed in the Cumulative Index to Nursing and Allied Health Literature (CINAHL), PubMed, and MEDLINE to identify studies focusing on the transition of premature infants from hospital to home life. Results The search yielded seven articles that emphasized the need for home visits, child and family assessment methods, methods of keeping contact with health care providers, and educational and support groups, and that described the nurse’s role in the transition program. The strategy to ease the transition differed in each article. Conclusion Home visits by a nurse were a key component, providing education, support, and nursing care. A program therefore should provide parents of premature infants with home visits implemented by a nurse or ongoing contact with a nurse (e.g., via video-conference). PMID:22763247

  14. Long-term variability in sugarcane bagasse feedstock compositional methods: Sources and magnitude of analytical variability

    DOE PAGES

    Templeton, David W.; Sluiter, Justin B.; Sluiter, Amie; ...

    2016-10-18

    In an effort to find economical, carbon-neutral transportation fuels, biomass feedstock compositional analysis methods are used to monitor, compare, and improve biofuel conversion processes. These methods are empirical, and the analytical variability seen in the feedstock compositional data propagates into variability in the conversion yields, component balances, mass balances, and ultimately the minimum ethanol selling price (MESP). We report the average composition and standard deviations of 119 individually extracted National Institute of Standards and Technology (NIST) bagasse [Reference Material (RM) 8491] samples run by seven analysts over 7 years. Two additional datasets, using bulk-extracted bagasse (containing 58 and 291 replicates each), were examined to separate out the effects of batch, analyst, sugar recovery standard calculation method, and extraction from the total analytical variability seen in the individually extracted dataset. We believe this is the world's largest NIST bagasse compositional analysis dataset, and it provides unique insight into long-term analytical variability. Understanding the long-term variability of the feedstock analysis will help determine the minimum difference that can be detected in yield, mass balance, and efficiency calculations. The long-term data show consistent bagasse component values through time and across analysts. This suggests that the standard compositional analysis methods were performed consistently and that the bagasse RM itself remained unchanged during this period. The long-term variability seen here is generally higher than short-term variabilities. It is worth noting that the effect of short-term or long-term feedstock compositional variability on MESP is small, about $0.03 per gallon. The long-term analysis variabilities reported here are plausible minimum values for these methods, though not necessarily average or expected variabilities.
Finally, we emphasize the importance of the training and good analytical procedures needed to generate these data. When combined with a robust QA/QC oversight protocol, these empirical methods can be relied upon to generate high-quality data over a long period of time.

  15. Long-term variability in sugarcane bagasse feedstock compositional methods: Sources and magnitude of analytical variability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Templeton, David W.; Sluiter, Justin B.; Sluiter, Amie

    In an effort to find economical, carbon-neutral transportation fuels, biomass feedstock compositional analysis methods are used to monitor, compare, and improve biofuel conversion processes. These methods are empirical, and the analytical variability seen in the feedstock compositional data propagates into variability in the conversion yields, component balances, mass balances, and ultimately the minimum ethanol selling price (MESP). We report the average composition and standard deviations of 119 individually extracted National Institute of Standards and Technology (NIST) bagasse [Reference Material (RM) 8491] samples run by seven analysts over 7 years. Two additional datasets, using bulk-extracted bagasse (containing 58 and 291 replicates each), were examined to separate out the effects of batch, analyst, sugar recovery standard calculation method, and extraction from the total analytical variability seen in the individually extracted dataset. We believe this is the world's largest NIST bagasse compositional analysis dataset, and it provides unique insight into long-term analytical variability. Understanding the long-term variability of the feedstock analysis will help determine the minimum difference that can be detected in yield, mass balance, and efficiency calculations. The long-term data show consistent bagasse component values through time and across analysts. This suggests that the standard compositional analysis methods were performed consistently and that the bagasse RM itself remained unchanged during this period. The long-term variability seen here is generally higher than short-term variabilities. It is worth noting that the effect of short-term or long-term feedstock compositional variability on MESP is small, about $0.03 per gallon. The long-term analysis variabilities reported here are plausible minimum values for these methods, though not necessarily average or expected variabilities.
Finally, we emphasize the importance of the training and good analytical procedures needed to generate these data. When combined with a robust QA/QC oversight protocol, these empirical methods can be relied upon to generate high-quality data over a long period of time.

  16. Analysis of impact melt and vapor production in CTH for planetary applications

    DOE PAGES

    Quintana, S. N.; Crawford, D. A.; Schultz, P. H.

    2015-05-19

    This study explores impact melt and vapor generation for a variety of impact speeds and materials using the shock physics code CTH. The study first compares the results of two common methods of estimating impact melt and vapor generation to demonstrate that both the peak pressure method and the final temperature method are appropriate for high-speed impact models (speeds greater than 10 km/s). However, for low-speed impact models (speeds less than 10 km/s), only the final temperature method yields melting and vaporization consistent with laboratory analyses. Finally, a constitutive model for material strength is important for low-speed impacts because strength can cause an increase in melting and vaporization.

  17. Analysis of impact melt and vapor production in CTH for planetary applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quintana, S. N.; Crawford, D. A.; Schultz, P. H.

    This study explores impact melt and vapor generation for a variety of impact speeds and materials using the shock physics code CTH. The study first compares the results of two common methods of estimating impact melt and vapor generation to demonstrate that both the peak pressure method and the final temperature method are appropriate for high-speed impact models (speeds greater than 10 km/s). However, for low-speed impact models (speeds less than 10 km/s), only the final temperature method yields melting and vaporization consistent with laboratory analyses. Finally, a constitutive model for material strength is important for low-speed impacts because strength can cause an increase in melting and vaporization.

  18. Evaluation of the methods for enumerating coliform bacteria from water samples using precise reference standards.

    PubMed

    Wohlsen, T; Bates, J; Vesey, G; Robinson, W A; Katouli, M

    2006-04-01

    Our aim was to use BioBall cultures as a precise reference standard to evaluate methods for the enumeration of Escherichia coli and other coliform bacteria in water samples. Eight methods were evaluated, including membrane filtration, standard plate counts (pour and spread plate methods), defined substrate technology methods (Colilert and Colisure), the most probable number method, and the Petrifilm disposable plate method. Escherichia coli and Enterobacter aerogenes BioBall cultures containing 30 organisms each were used. All tests were performed using 10 replicates. The mean recovery of both bacteria varied with the different methods employed. The best and most consistent results were obtained with Petrifilm and the pour plate method; other methods either yielded a low recovery or showed significantly high variability between replicates. The BioBall is thus a very suitable quality control tool for evaluating the efficiency of methods for bacterial enumeration in water samples.

  19. Enhancing cellulose accessibility of corn stover by deep eutectic solvent pretreatment for butanol fermentation.

    PubMed

    Xu, Guo-Chao; Ding, Ji-Cai; Han, Rui-Zhi; Dong, Jin-Jun; Ni, Ye

    2016-03-01

    In this study, an effective corn stover (CS) pretreatment method was developed for biobutanol fermentation. Deep eutectic solvents (DESs), consisting of quaternary ammonium salts and hydrogen donors, display properties similar to room-temperature ionic liquids. Seven DESs with different hydrogen donors were readily synthesized. Choline chloride:formic acid (ChCl:formic acid), an acidic DES, displayed excellent performance in the pretreatment of corn stover by removing hemicellulose and lignin, as confirmed by SEM, FTIR, and XRD analysis. After optimization, glucose released from pretreated CS reached 17.0 g L(-1), with a yield of 99%. The CS hydrolysate was successfully utilized in butanol fermentation by Clostridium saccharobutylicum DSM 13864, achieving a butanol titer of 5.63 g L(-1) with a yield of 0.17 g g(-1) total sugar and a productivity of 0.12 g L(-1) h(-1). This study demonstrates that DES pretreatment is a promising and biocompatible method for the conversion of lignocellulosic biomass into biofuel. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. STRUCTURE IN THE 3D GALAXY DISTRIBUTION: III. FOURIER TRANSFORMING THE UNIVERSE: PHASE AND POWER SPECTRA.

    PubMed

    Scargle, Jeffrey D; Way, M J; Gazis, P R

    2017-04-10

    We demonstrate the effectiveness of a relatively straightforward analysis of the complex 3D Fourier transform of galaxy coordinates derived from redshift surveys. Numerical demonstrations of this approach are carried out on a volume-limited sample of the Sloan Digital Sky Survey redshift survey. The direct unbinned transform yields a complex 3D data cube quite similar to that from the Fast Fourier Transform (FFT) of finely binned galaxy positions. In both cases deconvolution of the sampling window function yields estimates of the true transform. Simple power spectrum estimates from these transforms are roughly consistent with those using more elaborate methods. The complex Fourier transform characterizes spatial distributional properties beyond the power spectrum in a manner different from (and we argue is more easily interpreted than) the conventional multi-point hierarchy. We identify some threads of modern large scale inference methodology that will presumably yield detections in new wider and deeper surveys.
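The binned variant of the analysis described above can be sketched in a few lines (a toy example with uniform random points standing in for the galaxy sample; the window-function deconvolution step is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid mock "galaxy" positions onto a coarse 3D density cube, FFT it, and
# form a simple (unnormalized) power estimate |delta_k|^2 per Fourier mode.
n_gal, n_grid, box = 5000, 32, 1.0
positions = rng.random((n_gal, 3)) * box

density, _ = np.histogramdd(positions, bins=(n_grid,) * 3,
                            range=[(0, box)] * 3)
delta = density / density.mean() - 1.0   # overdensity field

delta_k = np.fft.fftn(delta)             # complex 3D transform
power = np.abs(delta_k) ** 2             # raw power spectrum cube

print(power.shape)
```

The zero-frequency mode vanishes by construction (the overdensity field has zero mean), and the remaining modes would be binned in |k| to produce a 1D power spectrum estimate.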

  1. The SPIDER fission fragment spectrometer for fission product yield measurements

    DOE PAGES

    Meierbachtol, K.; Tovesson, F.; Shields, D.; ...

    2015-04-01

    We developed the SPectrometer for Ion DEtermination in fission Research (SPIDER) for measuring mass yield distributions of fission products from spontaneous and neutron-induced fission. The 2E–2v method of measuring the kinetic energy (E) and velocity (v) of both outgoing fission products was utilized, with the goal of measuring the mass of the fission products with an average resolution of 1 atomic mass unit (amu). The SPIDER instrument, consisting of detector components for time-of-flight, trajectory, and energy measurements, was assembled and tested using 229Th and 252Cf radioactive decay sources. For commissioning, the fully assembled system measured fission products from spontaneous fission of 252Cf. Individual measurement resolutions were met for time-of-flight (250 ps FWHM), spatial resolution (2 mm FWHM), and energy (92 keV FWHM at 8.376 MeV). Finally, mass yield results measured from 252Cf spontaneous fission products are reported from an E–v measurement.

  2. Calculated quantum yield of photosynthesis of phytoplankton in the Marine Light-Mixed Layers (59 deg N, 21 deg W)

    NASA Technical Reports Server (NTRS)

    Carder, K. L.; Lee, Z. P.; Marra, John; Steward, R. G.; Perry, M. J.

    1995-01-01

    The quantum yield of photosynthesis, phi (mol C/mol photons), was calculated at six depths for the waters of the Marine Light-Mixed Layer (MLML) cruise of May 1991. As there were photosynthetically available radiation (PAR) measurements but no spectral irradiance measurements for the primary production incubations, three ways are presented here to calculate the photons absorbed (AP) by phytoplankton for the purpose of calculating phi. The first is based on a simple, nonspectral model; the second is based on a nonlinear regression using measured PAR values with depth; and the third is derived through remote sensing measurements. We show that the results of phi calculated using the nonlinear regression method and those using remote sensing are in good agreement with each other, and are consistent with the values reported in other studies. In deep waters, however, the simple nonspectral model may produce quantum yield values much higher than theoretically possible.

  3. STRUCTURE IN THE 3D GALAXY DISTRIBUTION: III. FOURIER TRANSFORMING THE UNIVERSE: PHASE AND POWER SPECTRA

    PubMed Central

    Scargle, Jeffrey D.; Way, M. J.; Gazis, P. R.

    2017-01-01

    We demonstrate the effectiveness of a relatively straightforward analysis of the complex 3D Fourier transform of galaxy coordinates derived from redshift surveys. Numerical demonstrations of this approach are carried out on a volume-limited sample of the Sloan Digital Sky Survey redshift survey. The direct unbinned transform yields a complex 3D data cube quite similar to that from the Fast Fourier Transform (FFT) of finely binned galaxy positions. In both cases deconvolution of the sampling window function yields estimates of the true transform. Simple power spectrum estimates from these transforms are roughly consistent with those using more elaborate methods. The complex Fourier transform characterizes spatial distributional properties beyond the power spectrum in a manner different from (and we argue is more easily interpreted than) the conventional multi-point hierarchy. We identify some threads of modern large scale inference methodology that will presumably yield detections in new wider and deeper surveys. PMID:29628519

  4. Structure in the 3D Galaxy Distribution. III. Fourier Transforming the Universe: Phase and Power Spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scargle, Jeffrey D.; Way, M. J.; Gazis, P. R., E-mail: Jeffrey.D.Scargle@nasa.gov, E-mail: Michael.J.Way@nasa.gov, E-mail: PGazis@sbcglobal.net

    We demonstrate the effectiveness of a relatively straightforward analysis of the complex 3D Fourier transform of galaxy coordinates derived from redshift surveys. Numerical demonstrations of this approach are carried out on a volume-limited sample of the Sloan Digital Sky Survey redshift survey. The direct unbinned transform yields a complex 3D data cube quite similar to that from the Fast Fourier Transform of finely binned galaxy positions. In both cases, deconvolution of the sampling window function yields estimates of the true transform. Simple power spectrum estimates from these transforms are roughly consistent with those using more elaborate methods. The complex Fourier transform characterizes spatial distributional properties beyond the power spectrum in a manner different from (and we argue is more easily interpreted than) the conventional multipoint hierarchy. We identify some threads of modern large-scale inference methodology that will presumably yield detections in new wider and deeper surveys.

  5. Structure in the 3D Galaxy Distribution: III. Fourier Transforming the Universe: Phase and Power Spectra

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.; Way, M. J.; Gazis, P. R.

    2017-01-01

    We demonstrate the effectiveness of a relatively straightforward analysis of the complex 3D Fourier transform of galaxy coordinates derived from redshift surveys. Numerical demonstrations of this approach are carried out on a volume-limited sample of the Sloan Digital Sky Survey redshift survey. The direct unbinned transform yields a complex 3D data cube quite similar to that from the Fast Fourier Transform (FFT) of finely binned galaxy positions. In both cases deconvolution of the sampling window function yields estimates of the true transform. Simple power spectrum estimates from these transforms are roughly consistent with those using more elaborate methods. The complex Fourier transform characterizes spatial distributional properties beyond the power spectrum in a manner different from (and we argue is more easily interpreted than) the conventional multi-point hierarchy. We identify some threads of modern large scale inference methodology that will presumably yield detections in new wider and deeper surveys.

  6. Enhanced yields and soil quality in a wheat-maize rotation using buried straw mulch.

    PubMed

    Guo, Zhibin; Liu, Hui; Wan, Shuixia; Hua, Keke; Jiang, Chaoqiang; Wang, Daozhong; He, Chuanlong; Guo, Xisheng

    2017-08-01

    Straw return may improve soil quality and crop yields. In a 2-year field study, a straw return method (ditch-buried straw return, DB-SR) was used to investigate its effects on soil quality and crop productivity in a wheat-corn rotation system. This study consisted of three treatments, each with three replicates: (1) mineral fertilisation alone (CK0); (2) mineral fertilisation + 7500 kg ha -1 wheat straw incorporated at a depth of 0-15 cm (NPKWS); and (3) mineral fertilisation + 7500 kg ha -1 wheat straw ditch-buried at 15-30 cm (NPKDW). NPKWS and NPKDW enhanced crop yield and improved soil biotic properties compared to mineral fertilisation alone. NPKDW contributed to greater crop yields and soil nutrient availability at 15-30 cm depths compared to the NPKWS treatment. NPKDW enhanced soil microbial activity and bacterial species richness and diversity in the 0-15 cm layer, whereas NPKWS increased soil microbial biomass and bacterial species richness and diversity at 15-30 cm. These comparisons indicate that straw ditch-buried at a depth of 15-30 cm can improve crop yields and soil quality in a wheat-maize rotation system. © 2016 Society of Chemical Industry.

  7. Regional crop gross primary production and yield estimation using fused Landsat-MODIS data

    NASA Astrophysics Data System (ADS)

    He, M.; Kimball, J. S.; Maneta, M. P.; Maxwell, B. D.; Moreno, A.

    2017-12-01

    Accurate crop yield assessments using satellite-based remote sensing are of interest for the design of regional policies that promote agricultural resiliency and food security. However, current vegetation productivity algorithms derived from global satellite observations are generally too coarse to capture cropland heterogeneity. Merging information from sensors with complementary spatial and temporal resolutions can improve the accuracy of these retrievals. In this study, we estimate annual crop yields for seven important crop types (alfalfa, barley, corn, durum wheat, peas, spring wheat, and winter wheat) over Montana, United States (U.S.) from 2008 to 2015. Yields are estimated as the product of gross primary production (GPP) and a crop-specific harvest index (HI) at 30 m spatial resolution. To calculate GPP, we used a modified form of the MOD17 LUE algorithm driven by a 30 m, 8-day fused NDVI dataset constructed by blending Landsat (5 or 7) and MODIS Terra reflectance data. The fused 30 m NDVI record shows good consistency with the original Landsat and MODIS data, but provides better spatiotemporal information on cropland vegetation growth. The resulting GPP estimates capture characteristic cropland patterns and seasonal variations, while the estimated annual 30 m crop yield results correspond favorably with county-level crop yield data (r = 0.96, p < 0.05). The estimated crop yield performance was generally lower, but still favorable, in relation to field-scale crop yield surveys (r = 0.42, p < 0.01). Our methods and results are suitable for operational applications at regional scales.
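The yield model at the core of this abstract is a one-line product; a minimal sketch (the GPP and harvest-index values are hypothetical illustrations, not the study's retrievals):

```python
# Annual yield estimated as accumulated gross primary production (GPP)
# times a crop-specific harvest index (HI), per the approach described above.

def crop_yield(gpp_g_c_m2, harvest_index):
    """Yield in g C m^-2 from annual GPP and a harvest index."""
    return gpp_g_c_m2 * harvest_index

annual_gpp = 1200.0    # g C m^-2 yr^-1, hypothetical pixel total
hi_spring_wheat = 0.3  # hypothetical harvest index

print(crop_yield(annual_gpp, hi_spring_wheat))  # 360.0
```

In the study this product is evaluated per 30 m pixel from the fused Landsat-MODIS GPP record, then aggregated for comparison against county-level yield statistics.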

  8. Stereoselective synthesis of unsaturated α-amino acids.

    PubMed

    Fanelli, Roberto; Jeanne-Julien, Louis; René, Adeline; Martinez, Jean; Cavelier, Florine

    2015-06-01

    Stereoselective synthesis of unsaturated α-amino acids was performed by asymmetric alkylation. Two methods were investigated and their enantiomeric excesses measured and compared. The first route consisted of an enantioselective approach induced by the Corey-Lygo catalyst under chiral phase-transfer conditions, while the second involved the hydroxypinanone chiral auxiliary, both employing Schiff bases as substrates. In all cases, the use of a prochiral Schiff base gave a higher enantiomeric excess and yield of the final desired amino acid.

  9. High Energy Explosive Yield Enhancer Using Microencapsulation.

    DTIC Science & Technology

    The invention consists of a class of high energy explosive yield enhancers created through the use of microencapsulation techniques. The microcapsules consist of combinations of highly reactive oxidizers that are encapsulated in either passivated inorganic fuels or inert materials and inorganic fuels. Depending on the application, the availability of the various oxidizers and fuels within the microcapsules can be customized to increase the

  10. Measurement of acoustic attenuation in South Pole ice

    NASA Astrophysics Data System (ADS)

    IceCube Collaboration; Abbasi, R.; Abdou, Y.; Abu-Zayyad, T.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bissok, M.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Böser, S.; Botner, O.; Bradley, L.; Braun, J.; Buitink, S.; Carson, M.; Chirkin, D.; Christy, B.; Clem, J.; Clevermann, F.; Cohen, S.; Colnard, C.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; de Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; Deyoung, T.; Díaz-Vélez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Feusels, T.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Geisler, M.; Gerhardt, L.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Goodman, J. A.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Gustafsson, L.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Homeier, A.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Imlay, R. L.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kemming, N.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Knops, S.; Köhne, J.-H.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Koskinen, D. 
J.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Krings, T.; Kroll, G.; Kuehn, K.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Lauer, R.; Lehmann, R.; Lennarz, D.; Lünemann, J.; Madsen, J.; Majumdar, P.; Maruyama, R.; Mase, K.; Matis, H. S.; Matusik, M.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Middell, E.; Milke, N.; Montaruli, T.; Morse, R.; Movit, S. M.; Nahnhauer, R.; Nam, J. W.; Naumann, U.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; Ono, M.; Panknin, S.; Paul, L.; Pérez de Los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Porrata, R.; Posselt, J.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Roucelle, C.; Ruhe, T.; Rutledge, D.; Ruzybayev, B.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Schatto, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Schukraft, A.; Schultes, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sullivan, G. W.; Swillens, Q.; Taboada, I.; Tamburro, A.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Tilav, S.; Toale, P. A.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; van Overloop, A.; van Santen, J.; Voigt, B.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, C.; Xu, X. W.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; IceCube Collaboration

    2011-01-01

    Using the South Pole Acoustic Test Setup (SPATS) and a retrievable transmitter deployed in holes drilled for the IceCube experiment, we have measured the attenuation of acoustic signals by South Pole ice at depths between 190 m and 500 m. Three data sets, using different acoustic sources, have been analyzed and give consistent results. The method with the smallest systematic uncertainties yields an amplitude attenuation coefficient α = 3.20 ± 0.57 km⁻¹ between 10 and 30 kHz, considerably larger than previous theoretical estimates. Expressed as an attenuation length, the analyses give a consistent result for λ ≡ 1/α of ~300 m with 20% uncertainty. No significant depth or frequency dependence has been found.
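The reported numbers follow from the standard exponential amplitude-decay model, A(d) = A0·exp(-α·d). A sketch of how α and the attenuation length λ = 1/α relate to amplitudes at two distances (the amplitudes are hypothetical, chosen only to land near the published coefficient):

```python
import math

def attenuation_coefficient(a1, a2, d1_km, d2_km):
    """Amplitude attenuation coefficient (1/km) from two measurements
    along the same propagation path: alpha = ln(a1/a2) / (d2 - d1)."""
    return math.log(a1 / a2) / (d2_km - d1_km)

alpha = attenuation_coefficient(a1=1.0, a2=0.2, d1_km=0.0, d2_km=0.5)
lam = 1.0 / alpha  # attenuation length in km

print(f"alpha = {alpha:.2f} /km, lambda = {lam * 1000:.0f} m")
```

With these illustrative inputs the coefficient comes out near 3.2 km⁻¹ and the attenuation length near 310 m, i.e. the same order as the SPATS result.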

  11. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles.

    PubMed

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-08-13

    In this paper, a novel iterative sparse extended information filter (ISEIF) is proposed to solve the simultaneous localization and mapping (SLAM) problem, which is crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with adaptive iterative methods to reduce linearization errors, improving the consistency and accuracy of SEIF while retaining its scalability advantage. Simulations and practical experiments were carried out with both a land-car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF, and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates than SEIF and preserves the scalability advantage over EKF as well.

  12. Pancreatic islet isolation variables in non-human primates (rhesus macaques).

    PubMed

    Andrades, P; Asiedu, C K; Gansuvd, B; Inusah, S; Goodwin, K J; Deckard, L A; Jargal, U; Thomas, J M

    2008-07-01

    Non-human primates (NHPs) are important preclinical models for pancreatic islet transplantation (PIT) because of their close phylogenetic and immunological relationship with humans. However, low availability of NHP tissue, long learning curves and prohibitive expenses constrain the consistency of isolated NHP islets for PIT studies. To advance preclinical studies, we attempted to identify key variables that consistently influence the quantity and quality of NHP islets. Seventy-two consecutive pancreatic islet isolations from rhesus macaques were reviewed retrospectively. A scaled down, semi-automated islet isolation method was used, and monkeys with streptozotocin-induced diabetes, weighing 3-7 kg, served as recipients for allotransplantation. We analysed the effects of 22 independent variables grouped as donor factors, surgical factors and isolation technique factors. Islet yields, success of isolation and transplantation results were used as quantitative and qualitative outcomes. In the multivariate analysis, variables that significantly affected islet yield were the type of monkey, pancreas preservation, enzyme lot and volume of enzyme delivered. The variables associated with successful isolation were the enzyme lot and volume delivered. The transplant result was correlated with pancreas preservation, enzyme lot, endotoxin levels and COBE collection method. Islet quantity and quality are highly variable between isolations. The data reviewed suggest that future NHP isolations should use bilayer preservation, infuse more than 80 ml of Liberase into the pancreas, collect non-fractioned tissue from the COBE, and strictly monitor for infection.

  13. Midwest agriculture and ENSO: A comparison of AVHRR NDVI3g data and crop yields in the United States Corn Belt from 1982 to 2014

    NASA Astrophysics Data System (ADS)

    Glennie, Erin; Anyamba, Assaf

    2018-06-01

    A time series of Advanced Very High Resolution Radiometer (AVHRR) derived normalized difference vegetation index (NDVI) data was compared to National Agricultural Statistics Service (NASS) corn yield data in the United States Corn Belt from 1982 to 2014. The main objectives of the comparison were to assess 1) the consistency of regional Corn Belt responses to El Niño/Southern Oscillation (ENSO) teleconnection signals, and 2) the reliability of using NDVI as an indicator of crop yield. Regional NDVI values were used to model a seasonal curve and to define the growing season, May to October. Seasonal conditions in each county were represented by NDVI and land surface temperature (LST) composites, and corn yield was represented by average annual bushels produced per acre. Correlation analysis between the NDVI, LST, corn yield, and equatorial Pacific sea surface temperature anomalies revealed patterns in land surface dynamics and corn yield, as well as typical impacts of ENSO episodes. Growing seasons coincident with La Niña events were consistently warmer, but El Niño events did not consistently impact NDVI, temperature, or corn yield data. Moreover, the El Niño and La Niña composite images suggest that impacts vary spatially across the Corn Belt. While corn is the dominant crop in the region, some inconsistencies between corn yield and NDVI may be attributed to soy crops and other background interference. The overall correlation between the total growing season NDVI anomaly and detrended corn yield was 0.61 (p = 0.00013), though the strength of the relationship varies across the Corn Belt.
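The NDVI-versus-yield comparison above rests on two steps: removing a linear technology trend from the yield series, then correlating the residuals with the seasonal NDVI anomaly. A sketch with synthetic data (not the NASS/AVHRR records):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic yield series: linear trend + NDVI-driven signal + noise.
years = np.arange(1982, 2015)
ndvi_anom = rng.normal(0.0, 0.05, years.size)
yields = 90 + 1.5 * (years - 1982) + 120 * ndvi_anom \
         + rng.normal(0.0, 3.0, years.size)

# Detrend: fit and subtract a linear trend, then correlate residuals.
trend = np.polyval(np.polyfit(years, yields, 1), years)
detrended = yields - trend

r = np.corrcoef(ndvi_anom, detrended)[0, 1]
print(round(r, 2))
```

Detrending matters because both yields (via technology) and time-series NDVI can drift over decades; correlating raw values would conflate the long-term trend with the seasonal signal of interest.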

  14. Non-Fermi-liquid superconductivity: Eliashberg approach versus the renormalization group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Huajia; Raghu, Srinivas; Torroba, Gonzalo

    Here, we address the problem of superconductivity for non-Fermi liquids using two commonly adopted, yet apparently distinct, methods: (1) the renormalization group (RG) and (2) Eliashberg theory. The extent to which both methods yield consistent solutions for the low-energy behavior of quantum metals has remained unclear. We show that the perturbative RG beta function for the 4-Fermi coupling can be explicitly derived from the linearized Eliashberg equations, under the assumption that quantum corrections are approximately local across energy scales. We apply our analysis to the test case of phonon-mediated superconductivity and show the consistency of both the Eliashberg and RG treatments. We next study superconductivity near a class of quantum critical points and find a transition between superconductivity and a “naked” metallic quantum critical point with finite, critical BCS couplings. We speculate on the applications of our theory to the phenomenology of unconventional metals.

  15. Non-Fermi-liquid superconductivity: Eliashberg approach versus the renormalization group

    DOE PAGES

    Wang, Huajia; Raghu, Srinivas; Torroba, Gonzalo

    2017-04-15

    Here, we address the problem of superconductivity for non-Fermi liquids using two commonly adopted, yet apparently distinct, methods: (1) the renormalization group (RG) and (2) Eliashberg theory. The extent to which both methods yield consistent solutions for the low-energy behavior of quantum metals has remained unclear. We show that the perturbative RG beta function for the 4-Fermi coupling can be explicitly derived from the linearized Eliashberg equations, under the assumption that quantum corrections are approximately local across energy scales. We apply our analysis to the test case of phonon-mediated superconductivity and show the consistency of both the Eliashberg and RG treatments. We next study superconductivity near a class of quantum critical points and find a transition between superconductivity and a “naked” metallic quantum critical point with finite, critical BCS couplings. We speculate on the applications of our theory to the phenomenology of unconventional metals.

  16. Multitask assessment of roads and vehicles network (MARVN)

    NASA Astrophysics Data System (ADS)

    Yang, Fang; Yi, Meng; Cai, Yiran; Blasch, Erik; Sullivan, Nichole; Sheaff, Carolyn; Chen, Genshe; Ling, Haibin

    2018-05-01

    Vehicle detection in wide area motion imagery (WAMI) has drawn increasing attention from the computer vision research community in recent decades. In this paper, we present a new architecture for vehicle detection on roads using a multi-task network, which is able to detect and segment vehicles, estimate their pose, and meanwhile yield road isolation for a given region. The multi-task network consists of three components: 1) vehicle detection, 2) vehicle and road segmentation, and 3) detection screening. The segmentation and detection components share the same backbone network and are trained jointly in an end-to-end way. Unlike background subtraction or frame differencing based methods, the proposed Multitask Assessment of Roads and Vehicles Network (MARVN) method can detect vehicles which are slowing down, stopped, and/or partially occluded in a single image. In addition, the method can eliminate detections located outside the road using the yielded road segmentation, so as to decrease the false positive rate. As few WAMI datasets have road mask and vehicle bounding-box annotations, we extract 512 frames from the WPAFB 2009 dataset and carefully refine the original annotations; the resulting dataset is named WAMI512. We extensively compare the proposed method with state-of-the-art methods on the WAMI512 dataset, and demonstrate superior performance in terms of efficiency and accuracy.

  17. The evaluation of extraction techniques for Tetranychus urticae (Acari: Tetranychidae) from apple (Malus domestica) and cherry (Prunus avium) leaves.

    PubMed

    Harris, Adrian L; Ullah, Roshan; Fountain, Michelle T

    2017-08-01

    Tetranychus urticae is a widespread polyphagous mite found on a variety of fruit crops. Tetranychus urticae feeds on the underside of leaves, perforating plant cells and sucking out the cell contents. Foliar damage and excess webbing produced by T. urticae can reduce fruit yield. Assessments of T. urticae populations while they are still small provide reliable and accurate ways of targeting control strategies and recording their efficacy against T. urticae. The aim of this study was to evaluate four methods for extracting low levels of T. urticae from leaf samples representative of developing infestations. These methods, ethanol washing, a modified paraffin/ethanol meniscus technique, Tullgren funnel extraction, and the Henderson and McBurnie mite-brushing machine, were compared to direct counting of mites on leaves under a dissecting microscope, with consideration of accuracy, precision, and simplicity. In addition, two physically different leaf morphologies were compared: glabrous Prunus leaves and setaceous Malus leaves. Ethanol extraction consistently yielded the highest numbers of mites and was the most rapid method for recovering T. urticae from leaf samples, irrespective of leaf structure; in addition, the samples could be processed and stored before final counting. The advantages and disadvantages of each method are discussed in detail.

  18. High-efficient Extraction of Drainage Networks from Digital Elevation Model Data Constrained by Enhanced Flow Enforcement from Known River Map

    NASA Astrophysics Data System (ADS)

    Wu, T.; Li, T.; Li, J.; Wang, G.

    2017-12-01

    Improved drainage network extraction can be achieved by flow enforcement, whereby information from known river maps is imposed on the flow-path modeling process. However, the common elevation-based stream burning method can sometimes cause unintended topological errors and misinterpret the overall drainage pattern. We present an enhanced flow enforcement method to facilitate accurate and efficient extraction of drainage networks. Both the topology of the mapped hydrography and the initial landscape of the DEM are preserved and fully utilized in the proposed method. An improved stream rasterization is achieved, yielding continuous, unambiguous, and stream-collision-free raster equivalents of stream vectors for flow enforcement. By imposing priority-based enforcement with a complementary flow direction enhancement procedure, the drainage patterns of the mapped hydrography are fully represented in the derived results. The proposed method was tested over the Rogue River Basin, using DEMs at various resolutions. As indicated by visual and statistical analyses, the proposed method has three major advantages: (1) it significantly reduces the occurrence of topological errors, yielding very accurate watershed partition and channel delineation; (2) it ensures scale-consistent performance on DEMs of various resolutions; and (3) the entire extraction process is well designed to achieve great computational efficiency.

  19. A simple-rapid method to separate uranium, thorium, and protactinium for U-series age-dating of materials.

    PubMed

    Knight, Andrew W; Eitrheim, Eric S; Nelson, Andrew W; Nelson, Steven; Schultz, Michael K

    2014-08-01

    Uranium-series dating techniques require the isolation of radionuclides in high yields and in fractions free of impurities. Within this context, we describe a novel, rapid method for the separation and purification of U, Th, and Pa. The method takes advantage of differences in the chemistry of U, Th, and Pa, utilizing a commercially-available extraction chromatographic resin (TEVA) and standard reagents. The elution behavior of U, Th, and Pa was optimized using liquid scintillation counting techniques, and fractional purity was evaluated by alpha-spectrometry. The overall method was further assessed by isotope dilution alpha-spectrometry for the preliminary age determination of an ancient carbonate sample obtained from the Lake Bonneville site in western Utah (United States). Preliminary evaluations of the method produced elemental purity of greater than 99.99% and radiochemical recoveries exceeding 90% for U and Th and 85% for Pa. Excellent purity and yields (76% for U, 96% for Th and 55% for Pa) were also obtained for the analysis of the carbonate samples, and the preliminary Pa and Th ages of about 39,000 years before present are consistent with the (14)C-derived age of the material. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Estimating the abundance of mouse populations of known size: promises and pitfalls of new methods

    USGS Publications Warehouse

    Conn, P.B.; Arthur, A.D.; Bailey, L.L.; Singleton, G.R.

    2006-01-01

    Knowledge of animal abundance is fundamental to many ecological studies. Frequently, researchers cannot determine true abundance, and so must estimate it using a method such as mark-recapture or distance sampling. Recent advances in abundance estimation allow one to model heterogeneity with individual covariates or mixture distributions and to derive multimodel abundance estimators that explicitly address uncertainty about which model parameterization best represents truth. Further, it is possible to borrow information on detection probability across several populations when data are sparse. While promising, these methods have not been evaluated using mark-recapture data from populations of known abundance, and thus far have largely been overlooked by ecologists. In this paper, we explored the utility of newly developed mark-recapture methods for estimating the abundance of 12 captive populations of wild house mice (Mus musculus). We found that mark-recapture methods employing individual covariates yielded satisfactory abundance estimates for most populations. In contrast, model sets with heterogeneity formulations consisting solely of mixture distributions did not perform well for several of the populations. We show through simulation that a higher number of trapping occasions would have been necessary to achieve good estimator performance in this case. Finally, we show that simultaneous analysis of data from low abundance populations can yield viable abundance estimates.
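    For context, the simplest mark-recapture abundance estimator (two samples, no heterogeneity) is the bias-corrected Lincoln-Petersen (Chapman) form; the covariate and mixture models evaluated above generalize this idea. A minimal sketch with illustrative numbers (not the study's data):

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator.
    n1: animals marked and released in sample 1
    n2: animals captured in sample 2
    m2: marked animals among the sample-2 captures"""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Illustrative numbers: mark 40 mice, later catch 45, of which 18 are marked.
N_hat = chapman_estimate(40, 45, 18)
print(round(N_hat, 1))  # → 98.3
```

    The estimator assumes equal catchability of all individuals; it is precisely the failure of that assumption (individual heterogeneity) that motivates the covariate and mixture models compared in the paper.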

  1. Psychometric Properties of the Persian Version of the Social Anxiety - Acceptance and Action Questionnaire

    PubMed Central

    Soltani, Esmail; Bahrainian, Seyed Abdolmajid; Masjedi Arani, Abbas; Farhoudian, Ali; Gachkar, Latif

    2016-01-01

    Background Social anxiety disorder is often related to specific impairment or distress in different areas of life, including occupational, social and family settings. Objective The purpose of the present study was to examine the psychometric properties of the Persian version of the Social Anxiety-Acceptance and Action Questionnaire (SA-AAQ) in university students. Materials and Methods In this descriptive cross-sectional study, 324 students from Shahid Beheshti University of Medical Sciences participated via the cluster sampling method during year 2015. Factor analysis by the principal component analysis method, internal consistency analysis, and convergent and divergent validity were conducted to examine the validity of the SA-AAQ. To calculate the reliability of the SA-AAQ, Cronbach’s alpha and test-retest reliability were used. Results The results from factor analysis by the principal component analysis method yielded three factors that were named acceptance, action and non-judging of experience. The three-factor solution explained 51.82% of the variance. Evidence for the internal consistency of the SA-AAQ was obtained via calculating correlations between the SA-AAQ and its subscales. Support for the convergent and discriminant validity of the SA-AAQ was obtained via its correlations with the Acceptance and Action Questionnaire-II, Social Interaction Anxiety Scale, Cognitive Fusion Questionnaire, Believability of Anxious Feelings and Thoughts Questionnaire, Valued Living Questionnaire and WHOQOL-BREF. The reliability of the SA-AAQ via calculating Cronbach’s alpha and test-retest coefficients yielded values of 0.84 and 0.84, respectively. Conclusions The Iranian version of the SA-AAQ has acceptable levels of psychometric properties in university students. The SA-AAQ is a valid and reliable measure to be utilized in research investigations and therapeutic interventions. PMID:27803719
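    The reliability coefficient reported above, Cronbach's alpha, can be computed directly from an item-score matrix. A minimal sketch using the standard formula, with hypothetical 5-respondent, 3-item data (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of each respondent's total
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical scores of 5 respondents on 3 items.
scores = np.array([[4, 5, 4],
                   [2, 3, 2],
                   [5, 5, 4],
                   [1, 2, 1],
                   [3, 4, 3]])
print(round(cronbach_alpha(scores), 3))  # → 0.988
```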

  2. An in vitro biomechanical comparison of two fixation methods for transverse osteotomies of the medial proximal forelimb sesamoid bones in horses.

    PubMed

    Wilson, D A; Keegan, K G; Carson, W L

    1999-01-01

    This study compared the mechanical properties of the normal intact suspensory apparatus and two methods of fixation for repair of transverse, midbody fractures of the proximal sesamoid bones of adult horses: transfixation wiring (TW) and screws placed in lag fashion (LS). An in vitro, paired study using equine cadaver limbs mounted in a loading apparatus was used to test the mechanical properties of TW and LS. Seventeen paired (13 repaired, 4 normal) equine cadaver limbs, consisting of the suspensory apparatus, third metacarpal bone, and first and second phalanges, were used. The two methods of repair and normal intact specimens were evaluated in single cycle-to-failure loading. Yield failure was defined to occur at the first notable discontinuity (>50 N) in the load-displacement curve, the first visible failure as evident on the videotape, or a change in the slope of the moment-fetlock angle curve. Ultimate failure was defined to occur at the highest load resisted by the specimen. Corresponding resultant force and force per kg of body weight on the suspensory apparatus, fetlock joint moment, and angle of fetlock dorsiflexion were calculated by use of specimen dimensions and applied load. These were compared along with specimen stiffness, and ram displacement. Load on the suspensory apparatus, load on the suspensory apparatus per kg of body weight, moment, applied load, and angle of fetlock dorsiflexion at yield failure were significantly greater for the TW-repaired than for the LS-repaired specimens. A 3 to 5 mm gap was observed before yield failure in most TW-repaired osteotomies. Transfixation wiring provided greater strength to yield failure than screws placed in lag fashion in single cycle load-to-failure mechanical testing of repaired transverse osteotomized specimens of the medial proximal forelimb sesamoid bone.

  3. Modified Lipid Extraction Methods for Deep Subsurface Shale

    PubMed Central

    Akondi, Rawlings N.; Trexler, Ryan V.; Pfiffner, Susan M.; Mouser, Paula J.; Sharma, Shikha

    2017-01-01

    Growing interest in the utilization of black shales for hydrocarbon development and environmental applications has spurred investigations of microbial functional diversity in the deep subsurface shale ecosystem. Lipid biomarker analyses including phospholipid fatty acids (PLFAs) and diglyceride fatty acids (DGFAs) represent sensitive tools for estimating biomass and characterizing the diversity of microbial communities. However, complex shale matrix properties create immense challenges for microbial lipid extraction procedures. Here, we test three different lipid extraction methods: modified Bligh and Dyer (mBD), Folch (FOL), and microwave assisted extraction (MAE), to examine their recovery and reproducibility of lipid biomarkers in deeply buried shales. The lipid biomarkers were analyzed as fatty acid methyl esters (FAMEs) by GC-MS; the average PL-FAME yield ranged from 67 to 400 pmol/g, while the average DG-FAME yield ranged from 600 to 3,000 pmol/g. The biomarker yields in the intact phospholipid Bligh and Dyer treatment (mBD + Phos + POPC), the Folch, the Bligh and Dyer citrate buffer (mBD-Cit), and the MAE treatments were all relatively higher and statistically similar compared to the other extraction treatments for both PLFAs and DGFAs. The biomarker yields were, however, highly variable within replicates for most extraction treatments, although the mBD + Phos + POPC treatment showed relatively better reproducibility, with consistent fatty acid profiles. This variability across treatments, which is associated with the highly complex nature of the deeply buried shale matrix, further necessitates customized methodological developments to improve lipid biomarker recovery. PMID:28790998

  4. Resonating group method as applied to the spectroscopy of α-transfer reactions

    NASA Astrophysics Data System (ADS)

    Subbotin, V. B.; Semjonov, V. M.; Gridnev, K. A.; Hefter, E. F.

    1983-10-01

    In the conventional approach to α-transfer reactions the finite- and/or zero-range distorted-wave Born approximation is used in liaison with a macroscopic description of the captured α particle in the residual nucleus. Here the specific example of 16O(6Li,d)20Ne reactions at different projectile energies is taken to present a microscopic resonating group method analysis of the α particle in the final nucleus (for the reaction part the simple zero-range distorted-wave Born approximation is employed). In the discussion of suitable nucleon-nucleon interactions, force number one of the effective interactions presented by Volkov is shown to be most appropriate for the system considered. Application of the continuous analog of Newton's method to the evaluation of the resonating group method equations yields increased accuracy with respect to traditional methods. The resonating group method description induces only minor changes in the structures of the angular distributions, but it does serve its purpose in yielding reliable and consistent spectroscopic information. NUCLEAR STRUCTURE 16O(6Li,d)20Ne; E=20 to 32 MeV; calculated B(E2), reduced widths, dσ/dΩ; extracted α-spectroscopic factors. ZRDWBA with microscopic RGM description of residual α particle in 20Ne; application of continuous analog of Newton's method; tested and applied Volkov force No. 1; direct mechanism.

  5. Quantitative analysis of relationships between irradiation parameters and the reproducibility of cyclotron-produced (99m)Tc yields.

    PubMed

    Tanguay, J; Hou, X; Buckley, K; Schaffer, P; Bénard, F; Ruth, T J; Celler, A

    2015-05-21

    Cyclotron production of (99m)Tc through the (100)Mo(p,2n) (99m)Tc reaction channel is actively being investigated as an alternative to reactor-based (99)Mo generation by nuclear fission of (235)U. An exciting aspect of this approach is that it can be implemented using currently-existing cyclotron infrastructure to supplement, or potentially replace, conventional (99m)Tc production methods that are based on aging and increasingly unreliable nuclear reactors. Successful implementation will require consistent production of large quantities of high-radionuclidic-purity (99m)Tc. However, variations in proton beam currents and the thickness and isotopic composition of enriched (100)Mo targets, in addition to other irradiation parameters, may degrade reproducibility of both radionuclidic purity and absolute (99m)Tc yields. The purpose of this article is to present a method for quantifying relationships between random variations in production parameters, including (100)Mo target thicknesses and proton beam currents, and reproducibility of absolute (99m)Tc yields (defined as the end of bombardment (EOB) (99m)Tc activity). Using the concepts of linear error propagation and the theory of stochastic point processes, we derive a mathematical expression that quantifies the influence of variations in various irradiation parameters on yield reproducibility, quantified in terms of the coefficient of variation of the EOB (99m)Tc activity. The utility of the developed formalism is demonstrated with an example. We show that achieving less than 20% variability in (99m)Tc yields will require highly-reproducible target thicknesses and proton currents. These results are related to the service rate which is defined as the percentage of (99m)Tc production runs that meet the minimum daily requirement of one (or many) nuclear medicine departments. 
For example, we show that achieving service rates of 84.0%, 97.5% and 99.9% with 20% variations in target thicknesses requires producing on average 1.2, 1.5 and 1.9 times the minimum daily activity requirement. The irradiation parameters that would be required to achieve these service rates are described. We believe the developed formalism will aid in the development of quality-control criteria required to ensure consistent supply of large quantities of high-radionuclidic-purity cyclotron-produced (99m)Tc.
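    The relationship between production-parameter variability and service rate can also be illustrated by direct Monte Carlo simulation. The sketch below assumes a simplified yield model (EOB activity proportional to target thickness times beam current, each normally distributed); this is an illustration only, not the authors' error-propagation formalism:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_service_rate(mean_factor, cv_thickness, cv_current, runs=100_000):
    """Fraction of production runs whose EOB yield meets the minimum
    requirement (normalized to 1.0) when average production equals
    mean_factor times that requirement.  Simplified model: yield is
    proportional to target thickness times proton beam current, each
    varying run to run with the given coefficient of variation."""
    thickness = rng.normal(1.0, cv_thickness, runs)
    current = rng.normal(1.0, cv_current, runs)
    yields = mean_factor * thickness * current
    return (yields >= 1.0).mean()

# 10% CV each in thickness and current combine to roughly 14% CV in yield;
# producing 1.5x the daily requirement then meets demand on ~99% of runs.
rate = simulate_service_rate(mean_factor=1.5, cv_thickness=0.10, cv_current=0.10)
print(round(rate, 3))
```

    Under this toy model, raising the production margin (mean_factor) trades extra average activity for a higher probability of meeting the daily requirement, which is the trade-off the paper quantifies analytically.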

  6. Quantitative analysis of relationships between irradiation parameters and the reproducibility of cyclotron-produced 99mTc yields

    NASA Astrophysics Data System (ADS)

    Tanguay, J.; Hou, X.; Buckley, K.; Schaffer, P.; Bénard, F.; Ruth, T. J.; Celler, A.

    2015-05-01

    Cyclotron production of 99mTc through the 100Mo(p,2n) 99mTc reaction channel is actively being investigated as an alternative to reactor-based 99Mo generation by nuclear fission of 235U. An exciting aspect of this approach is that it can be implemented using currently-existing cyclotron infrastructure to supplement, or potentially replace, conventional 99mTc production methods that are based on aging and increasingly unreliable nuclear reactors. Successful implementation will require consistent production of large quantities of high-radionuclidic-purity 99mTc. However, variations in proton beam currents and the thickness and isotopic composition of enriched 100Mo targets, in addition to other irradiation parameters, may degrade reproducibility of both radionuclidic purity and absolute 99mTc yields. The purpose of this article is to present a method for quantifying relationships between random variations in production parameters, including 100Mo target thicknesses and proton beam currents, and reproducibility of absolute 99mTc yields (defined as the end of bombardment (EOB) 99mTc activity). Using the concepts of linear error propagation and the theory of stochastic point processes, we derive a mathematical expression that quantifies the influence of variations in various irradiation parameters on yield reproducibility, quantified in terms of the coefficient of variation of the EOB 99mTc activity. The utility of the developed formalism is demonstrated with an example. We show that achieving less than 20% variability in 99mTc yields will require highly-reproducible target thicknesses and proton currents. These results are related to the service rate which is defined as the percentage of 99mTc production runs that meet the minimum daily requirement of one (or many) nuclear medicine departments. 
For example, we show that achieving service rates of 84.0%, 97.5% and 99.9% with 20% variations in target thicknesses requires producing on average 1.2, 1.5 and 1.9 times the minimum daily activity requirement. The irradiation parameters that would be required to achieve these service rates are described. We believe the developed formalism will aid in the development of quality-control criteria required to ensure consistent supply of large quantities of high-radionuclidic-purity cyclotron-produced 99mTc.

  7. Effect of concentrate feeding method on the performance of dairy cows in early to mid lactation.

    PubMed

    Purcell, P J; Law, R A; Gordon, A W; McGettrick, S A; Ferris, C P

    2016-04-01

    The objective of the current study was to determine the effects of concentrate feeding method on milk yield and composition, dry matter (DM) intake (DMI), body weight and body condition score, reproductive performance, energy balance, and blood metabolites of housed (i.e., accommodated indoors) dairy cows in early to mid lactation. Eighty-eight multiparous Holstein-Friesian cows were managed on 1 of 4 concentrate feeding methods (CFM; 22 cows per CFM) for the first 21 wk postpartum. Cows on all 4 CFM were offered grass silage plus maize silage (in a 70:30 ratio on a DM basis) ad libitum throughout the study. In addition, cows had a target concentrate allocation of 11 kg/cow per day (from d 13 postpartum) via 1 of 4 CFM, consisting of (1) offered on a flat-rate basis via an out-of-parlor feeding system, (2) offered based on individual cow's milk yields in early lactation via an out-of-parlor feeding system, (3) offered as part of a partial mixed ration (target intake of 5 kg/cow per day) with additional concentrate offered based on individual cow's milk yields in early lactation via an out-of-parlor feeding system, and (4) offered as part of a partial mixed ration containing a fixed quantity of concentrate for each cow in the group. In addition, all cows were offered 1 kg/cow per day of concentrate pellets via an in-parlor feeding system. We detected no effect of CFM on concentrate or total DMI, mean daily milk yield, concentrations and yields of milk fat and protein, or metabolizable energy intakes, requirements, or balances throughout the study. We also found no effects of CFM on mean or final body weight, mean or final body condition score, conception rates to first service, or any of the blood metabolites examined. The results of this study suggest that CFM has little effect on the overall performance of higher-yielding dairy cows in early to mid lactation when offered diets based on conserved forages. Copyright © 2016 American Dairy Science Association. 
Published by Elsevier Inc. All rights reserved.

  8. Optimizing Dense Plasma Focus Neutron Yields with Fast Gas Jets

    NASA Astrophysics Data System (ADS)

    McMahon, Matthew; Kueny, Christopher; Stein, Elizabeth; Link, Anthony; Schmidt, Andrea

    2016-10-01

    We report a study using the particle-in-cell code LSP to perform fully kinetic simulations modeling dense plasma focus (DPF) devices with high-density gas jets on axis. The high-density jet models fast gas puffs, which allow for more mass on axis while maintaining the optimal pressure for the DPF. As the density of the jet relative to the background fill increases, we find the neutron yield increases, as does the variability in the neutron yield. Introducing perturbations in the jet density allows for consistent seeding of the m = 0 instability, leading to more consistent ion acceleration and higher neutron yields with less variability. Jets with higher on-axis density are found to have the greatest yield. The optimal jet configuration is explored. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  9. Cluster Free Energies from Simple Simulations of Small Numbers of Aggregants: Nucleation of Liquid MTBE from Vapor and Aqueous Phases.

    PubMed

    Patel, Lara A; Kindt, James T

    2017-03-14

    We introduce a global fitting analysis method to obtain free energies of association of noncovalent molecular clusters using equilibrated cluster size distributions from unbiased constant-temperature molecular dynamics (MD) simulations. Because the systems simulated are small enough that the law of mass action does not describe the aggregation statistics, the method relies on iteratively determining a set of cluster free energies that, using appropriately weighted sums over all possible partitions of N monomers into clusters, produces the best-fit size distribution. The quality of these fits can be used as an objective measure of self-consistency to optimize the cutoff distance that determines how clusters are defined. To showcase the method, we have simulated a united-atom model of methyl tert-butyl ether (MTBE) in the vapor phase and in explicit water solution over a range of system sizes (up to 95 MTBE in the vapor phase and 60 MTBE in the aqueous phase) and concentrations at 273 K. The resulting size-dependent cluster free energy functions follow a form derived from classical nucleation theory (CNT) quite well over the full range of cluster sizes, although deviations are more pronounced for small cluster sizes. The CNT fit to cluster free energies yielded surface tensions that were in both cases lower than those for the simulated planar interfaces. We use a simple model to derive a condition for minimizing non-ideal effects on cluster size distributions and show that the cutoff distance that yields the best global fit is consistent with this condition.
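    The "appropriately weighted sums over all possible partitions of N monomers into clusters" can be made concrete for very small systems by brute-force enumeration. The sketch below uses hypothetical cluster weights q_n playing the role of exp(-beta*F_n) and checks monomer conservation; it illustrates the combinatorics only, not the authors' global fitting procedure:

```python
import math

def partitions(n, max_part=None):
    """Yield the integer partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def mean_cluster_counts(N, q):
    """Average number of clusters of each size for N labeled monomers.
    q[n] plays the role of exp(-beta*F_n) for an n-cluster (q[1] = 1).
    A partition with m_n clusters of size n occurs in
    N! / prod_n((n!)^m_n * m_n!) labeled ways, each carrying weight
    prod_n q[n]^m_n."""
    Z = 0.0
    counts = {n: 0.0 for n in range(1, N + 1)}
    for p in partitions(N):
        mult = {}
        for n in p:
            mult[n] = mult.get(n, 0) + 1
        w = float(math.factorial(N))
        for n, m in mult.items():
            w *= q[n] ** m / (math.factorial(n) ** m * math.factorial(m))
        Z += w
        for n, m in mult.items():
            counts[n] += w * m
    return {n: c / Z for n, c in counts.items()}

# Hypothetical cluster weights: small clusters mildly favored.
q = {1: 1.0, 2: 0.8, 3: 0.5, 4: 0.1, 5: 0.02, 6: 0.003}
avg = mean_cluster_counts(6, q)
# Monomers are conserved across the ensemble of partitions.
print(round(sum(n * m for n, m in avg.items()), 6))  # → 6.0
```

    Because the sum runs over exact partitions rather than assuming independent cluster concentrations, the resulting size distribution deviates from the law of mass action at small N, which is exactly the regime the paper's fitting method addresses.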

  10. Short communication: Development of a rapid laboratory method to polymerize lactose to nondigestible carbohydrates.

    PubMed

    Kuechel, A F; Schoenfuss, T C

    2018-04-01

    Nondigestible carbohydrates with a degree of polymerization between 3 and 10 (oligosaccharides) are commonly used as dietary fiber ingredients in the food industry, once they have been confirmed to have positive effects on human health by regulatory authorities. These carbohydrates are produced through chemical or enzymatic synthesis. Polylactose, a polymerization product of lactose and glucose, has been produced by reactive extrusion using a twin-screw extruder, with citric acid as the catalyst. Trials using powdered cheese whey permeate as the lactose source for this reaction were unsuccessful. The development of a laboratory method was necessary to investigate the effect of ingredients present in permeate powder that could be inhibiting polymerization. A Mars 6 Microwave Digestion System (CEM Corp., Matthews, NC) was used to heat and polymerize the sugars. The temperatures had to be lowered from extrusion conditions to produce a caramel-like product and not decompose the sugars. Small amounts of water had to be added to the reaction vessels to allow consistent heating of sugars between vessels. Elevated levels of water (22.86 and 28.57%, vol/wt) and calcium phosphate (0.928 and 1.856%, wt/wt) reduced the oligosaccharide yield in the laboratory method. Increasing the citric acid (catalyst) concentration increased the oligosaccharide yield for the pure sugar blend and when permeate powder was used. The utility of the laboratory method to predict oligosaccharide yields was confirmed during extrusion trials of permeate when this increased acid catalyst concentration resulted in similar oligosaccharide concentrations. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  11. Microfluidic labeling of biomolecules with radiometals for use in nuclear medicine.

    PubMed

    Wheeler, Tobias D; Zeng, Dexing; Desai, Amit V; Önal, Birce; Reichert, David E; Kenis, Paul J A

    2010-12-21

    Radiometal-based radiopharmaceuticals, used as imaging and therapeutic agents in nuclear medicine, consist of a radiometal that is bound to a targeting biomolecule (BM) using a bifunctional chelator (BFC). Conventional, macroscale radiolabeling methods use an excess of the BFC-BM conjugate (ligand) to achieve high radiolabeling yields. Subsequently, to achieve maximal specific activity (minimal amount of unlabeled ligand), extensive chromatographic purification is required to remove unlabeled ligand, often resulting in longer synthesis times and loss of imaging sensitivity due to radioactive decay. Here we describe a microreactor that overcomes the above issues through integration of efficient mixing and heating strategies while working with small volumes of concentrated reagents. As a model reaction, we radiolabel 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) conjugated to the peptide cyclo(Arg-Gly-Asp-DPhe-Lys) with (64)Cu(2+). We show that the microreactor (made from polydimethylsiloxane and glass) can withstand 260 mCi of activity over 720 hours and retains only minimal amounts of (64)Cu(2+) (<5%) upon repeated use. A direct comparison between the radiolabeling yields obtained using the microreactor and conventional radiolabeling methods shows that improved mixing and heat transfer in the microreactor leads to higher yields for identical reaction conditions. Most importantly, by using small volumes (~10 µL) of concentrated solutions of reagents (>50 µM), yields of over 90% can be achieved in the microreactor when using a 1:1 stoichiometry of radiometal to BFC-BM. These high yields eliminate the need for use of excess amounts of often precious BM and obviate the need for a chromatographic purification process to remove unlabeled ligand. The results reported here demonstrate the potential of microreactor technology to improve the production of patient-tailored doses of radiometal-based radiopharmaceuticals in the clinic.

  12. Effect of preparation method and CuO promotion in the conversion of ethanol into 1,3-butadiene over SiO₂-MgO catalysts.

    PubMed

    Angelici, Carlo; Velthoen, Marjolein E Z; Weckhuysen, Bert M; Bruijnincx, Pieter C A

    2014-09-01

    Silica-magnesia (Si/Mg=1:1) catalysts were studied in the one-pot conversion of ethanol to butadiene. The catalyst synthesis method was found to greatly influence morphology and performance, with materials prepared through wet-kneading performing best both in terms of ethanol conversion and butadiene yield. Detailed characterization of the catalysts synthesized through co-precipitation or wet-kneading allowed correlation of activity and selectivity with morphology, textural properties, crystallinity, and acidity/basicity. The higher yields achieved with the wet-kneaded catalysts were attributed to a morphology consisting of SiO2 spheres embedded in a thin layer of MgO. The particle size of the SiO2 catalysts also influenced performance, with catalysts with smaller SiO2 spheres showing higher activity. Temperature-programmed desorption (TPD) measurements showed that best butadiene yields were obtained with SiO2-MgO catalysts characterized by an intermediate amount of acidic and basic sites. A Hammett indicator study showed the catalysts' pK(a) value to be inversely correlated with the amount of dehydration by-products formed. Butadiene yields could be further improved by the addition of 1 wt% of CuO as promoter to give butadiene yields and selectivities as high as 40% and 53%, respectively. The copper promoter boosts the production of the acetaldehyde intermediate changing the rate-determining step of the process. TEM-energy-dispersive X-ray (EDX) analyses showed CuO to be present on both the SiO2 and MgO components. UV/Vis spectra of promoted catalysts in turn pointed at the presence of cluster-like CuO species, which are proposed to be responsible for the increased butadiene production. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Generalization of the swelling method to measure the intrinsic curvature of lipids

    NASA Astrophysics Data System (ADS)

    Barragán Vidal, I. A.; Müller, M.

    2017-12-01

    Via computer simulation of a coarse-grained model of two-component lipid bilayers, we compare two methods of measuring the intrinsic curvatures of the constituting monolayers. The first one is a generalization of the swelling method that, in addition to the assumption that the spontaneous curvature linearly depends on the composition of the lipid mixture, incorporates contributions from its elastic energy. The second method measures the effective curvature-composition coupling between the apposing leaflets of bilayer structures (planar bilayers or cylindrical tethers) to extract the spontaneous curvature. Our findings demonstrate that both methods yield consistent results. However, we highlight that the two-leaflet structure inherent to the latter method has the advantage of allowing measurements for mixed lipid systems up to their critical point of demixing as well as in the regime of high concentration (of either species).
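    Under the linear-mixing assumption named above, the per-species intrinsic curvatures follow from a least-squares fit of measured spontaneous curvatures at several compositions. A minimal sketch with illustrative (hypothetical) data:

```python
import numpy as np

# Hypothetical measured spontaneous curvatures c0 (1/nm) of mixed
# monolayers at mole fraction x of lipid B; linear mixing assumed:
# c0(x) = (1 - x) * cA + x * cB.
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
c0 = np.array([-0.05, 0.01, 0.08, 0.14, 0.21])

# Least-squares estimate of the two intrinsic curvatures.
A = np.column_stack([1 - x, x])
(cA, cB), *_ = np.linalg.lstsq(A, c0, rcond=None)
print(round(cA, 3), round(cB, 3))  # → -0.052 0.208
```

    The fitted cA and cB are the model's intrinsic curvatures of the pure components; the elastic-energy contribution incorporated by the generalized swelling method refines this purely linear picture.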

  14. Reference Genes for qPCR Analysis in Resin-Tapped Adult Slash Pine As a Tool to Address the Molecular Basis of Commercial Resinosis

    PubMed Central

    de Lima, Júlio C.; de Costa, Fernanda; Füller, Thanise N.; Rodrigues-Corrêa, Kelly C. da Silva; Kerber, Magnus R.; Lima, Mariano S.; Fett, Janette P.; Fett-Neto, Arthur G.

    2016-01-01

    Pine oleoresin is a major source of terpenes, consisting of turpentine (mono- and sesquiterpenes) and rosin (diterpenes) fractions. Higher oleoresin yields are of economic interest, since oleoresin derivatives make up a valuable source of materials for chemical industries. Oleoresin can be extracted from living trees, often by the bark streak method, in which bark removal is done periodically, followed by application of stimulant paste containing sulfuric acid and other chemicals on the freshly wounded exposed surface. To better understand the molecular basis of chemically stimulated and wound-induced oleoresin production, we evaluated the stability of 11 putative reference genes for the purpose of normalization in studying Pinus elliottii gene expression during oleoresinosis. Samples for RNA extraction were collected from field-grown adult trees under tapping operations using stimulant pastes with different compositions and at various time points after paste application. Statistical methods established by the geNorm, NormFinder, and BestKeeper software were consistent in identifying HISTO3 and UBI as adequate reference genes. To confirm expression stability of the candidate reference genes, expression profiles of putative P. elliottii orthologs of resin biosynthesis-related genes encoding Pinus contorta β-pinene synthase [PcTPS-(−)β-pin1], P. contorta levopimaradiene/abietadiene synthase (PcLAS1), Pinus taeda α-pinene synthase [PtTPS-(+)αpin], and P. taeda α-farnesene synthase (PtαFS) were examined following stimulant paste application. Increased oleoresin yields observed in stimulated treatments using phytohormone-based pastes were consistent with higher expression of pinene synthases. Overall, the expression of all genes examined matched the expected profiles of oleoresin-related transcript changes reported for previously examined conifers. PMID:27379135

  15. Reference Genes for qPCR Analysis in Resin-Tapped Adult Slash Pine As a Tool to Address the Molecular Basis of Commercial Resinosis.

    PubMed

    de Lima, Júlio C; de Costa, Fernanda; Füller, Thanise N; Rodrigues-Corrêa, Kelly C da Silva; Kerber, Magnus R; Lima, Mariano S; Fett, Janette P; Fett-Neto, Arthur G

    2016-01-01

    Pine oleoresin is a major source of terpenes, consisting of turpentine (mono- and sesquiterpenes) and rosin (diterpenes) fractions. Higher oleoresin yields are of economic interest, since oleoresin derivatives make up a valuable source of materials for chemical industries. Oleoresin can be extracted from living trees, often by the bark streak method, in which bark removal is done periodically, followed by application of stimulant paste containing sulfuric acid and other chemicals on the freshly wounded exposed surface. To better understand the molecular basis of chemically stimulated and wound-induced oleoresin production, we evaluated the stability of 11 putative reference genes for the purpose of normalization in studying Pinus elliottii gene expression during oleoresinosis. Samples for RNA extraction were collected from field-grown adult trees under tapping operations using stimulant pastes with different compositions and at various time points after paste application. Statistical methods established by the geNorm, NormFinder, and BestKeeper software were consistent in identifying HISTO3 and UBI as adequate reference genes. To confirm expression stability of the candidate reference genes, expression profiles of putative P. elliottii orthologs of resin biosynthesis-related genes encoding Pinus contorta β-pinene synthase [PcTPS-(-)β-pin1], P. contorta levopimaradiene/abietadiene synthase (PcLAS1), Pinus taeda α-pinene synthase [PtTPS-(+)αpin], and P. taeda α-farnesene synthase (PtαFS) were examined following stimulant paste application. Increased oleoresin yields observed in stimulated treatments using phytohormone-based pastes were consistent with higher expression of pinene synthases. Overall, the expression of all genes examined matched the expected profiles of oleoresin-related transcript changes reported for previously examined conifers.

  16. Greater power and computational efficiency for kernel-based association testing of sets of genetic variants

    PubMed Central

    Lippert, Christoph; Xiang, Jing; Horta, Danilo; Widmer, Christian; Kadie, Carl; Heckerman, David; Listgarten, Jennifer

    2014-01-01

    Motivation: Set-based variance component tests have been identified as a way to increase power in association studies by aggregating weak individual effects. However, the choice of test statistic has been largely ignored even though it may play an important role in obtaining optimal power. We compared a standard statistical test, a score test, with a recently developed likelihood ratio (LR) test. Further, when correction for hidden structure is needed, or gene–gene interactions are sought, state-of-the-art algorithms for both the score and LR tests can be computationally impractical. Thus we develop new computationally efficient methods. Results: After reviewing theoretical differences in performance between the score and LR tests, we find empirically on real data that the LR test generally has more power. In particular, on 15 of 17 real datasets, the LR test yielded at least as many associations as the score test (up to 23 more associations), whereas the score test yielded at most one more association than the LR test in the two remaining datasets. On synthetic data, we find that the LR test yielded up to 12% more associations, consistent with our results on real data, but also observe a regime of extremely small signal where the score test yielded up to 25% more associations than the LR test, consistent with theory. Finally, our computational speedups now enable (i) efficient LR testing when the background kernel is full rank, and (ii) efficient score testing when the background kernel changes with each test, as for gene–gene interaction tests. The latter yielded a factor of 2000 speedup on a cohort of size 13 500. Availability: Software available at http://research.microsoft.com/en-us/um/redmond/projects/MSCompBio/Fastlmm/. Contact: heckerma@microsoft.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25075117
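    The score statistic at the heart of such set-based variance component tests can be sketched under simplifying assumptions: residuals about the phenotype mean as the null model and a linear kernel K = G G^T built from the variant set's genotypes. This is an illustration of the general idea, not the FaST-LMM implementation referenced above; the genotype and phenotype values are hypothetical.

    ```python
    def score_statistic(y, G):
        """Variance-component score statistic Q = r^T K r with a linear
        kernel K = G G^T, where r are phenotype residuals about the mean.
        A large Q suggests the variant set explains phenotype variance."""
        n = len(y)
        mean = sum(y) / n
        r = [v - mean for v in y]  # residuals under the null model
        p = len(G[0])
        # Since K = G G^T, we have Q = r^T G G^T r = ||G^T r||^2.
        gtr = [sum(G[i][j] * r[i] for i in range(n)) for j in range(p)]
        return sum(v * v for v in gtr)

    # Hypothetical genotype dosages (4 individuals x 2 variants).
    G = [[0, 1],
         [1, 0],
         [2, 1],
         [0, 2]]
    y_assoc = [0.1, 1.0, 2.1, 0.2]   # roughly tracks the first variant
    y_null  = [1.0, 1.0, 1.0, 1.0]   # constant phenotype, no signal
    q_assoc = score_statistic(y_assoc, G)
    q_null = score_statistic(y_null, G)
    ```

    In practice the residuals come from a fitted null model with covariates and the statistic is compared against a mixture-of-chi-squared null distribution; the sketch only shows why aggregation over a set of weak effects can accumulate signal in Q.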

  17. Greater power and computational efficiency for kernel-based association testing of sets of genetic variants.

    PubMed

    Lippert, Christoph; Xiang, Jing; Horta, Danilo; Widmer, Christian; Kadie, Carl; Heckerman, David; Listgarten, Jennifer

    2014-11-15

    Set-based variance component tests have been identified as a way to increase power in association studies by aggregating weak individual effects. However, the choice of test statistic has been largely ignored even though it may play an important role in obtaining optimal power. We compared a standard statistical test, a score test, with a recently developed likelihood ratio (LR) test. Further, when correction for hidden structure is needed, or gene-gene interactions are sought, state-of-the-art algorithms for both the score and LR tests can be computationally impractical. Thus we develop new computationally efficient methods. After reviewing theoretical differences in performance between the score and LR tests, we find empirically on real data that the LR test generally has more power. In particular, on 15 of 17 real datasets, the LR test yielded at least as many associations as the score test (up to 23 more associations), whereas the score test yielded at most one more association than the LR test in the two remaining datasets. On synthetic data, we find that the LR test yielded up to 12% more associations, consistent with our results on real data, but also observe a regime of extremely small signal where the score test yielded up to 25% more associations than the LR test, consistent with theory. Finally, our computational speedups now enable (i) efficient LR testing when the background kernel is full rank, and (ii) efficient score testing when the background kernel changes with each test, as for gene-gene interaction tests. The latter yielded a factor of 2000 speedup on a cohort of size 13 500. Software available at http://research.microsoft.com/en-us/um/redmond/projects/MSCompBio/Fastlmm/. heckerma@microsoft.com Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

  18. Using flow cytometry to estimate pollen DNA content: improved methodology and applications

    PubMed Central

    Kron, Paul; Husband, Brian C.

    2012-01-01

    Background and Aims Flow cytometry has been used to measure nuclear DNA content in pollen, mostly to understand pollen development and detect unreduced gametes. Published data have not always met the high-quality standards required for some applications, in part due to difficulties inherent in the extraction of nuclei. Here we describe a simple and relatively novel method for extracting pollen nuclei, involving the bursting of pollen through a nylon mesh, compare it with other methods and demonstrate its broad applicability and utility. Methods The method was tested across 80 species, 64 genera and 33 families, and the data were evaluated using established criteria for estimating genome size and analysing cell cycle. Filter bursting was directly compared with chopping in five species, yields were compared with published values for sonicated samples, and the method was applied by comparing genome size estimates for leaf and pollen nuclei in six species. Key Results Data quality met generally applied standards for estimating genome size in 81 % of species and the higher best practice standards for cell cycle analysis in 51 %. In 41 % of species we met the most stringent criterion of screening 10 000 pollen grains per sample. In direct comparison with two chopping techniques, our method produced better quality histograms with consistently higher nuclei yields, and yields were higher than previously published results for sonication. In three binucleate and three trinucleate species we found that pollen-based genome size estimates differed from leaf tissue estimates by 1·5 % or less when 1C pollen nuclei were used, while estimates from 2C generative nuclei differed from leaf estimates by up to 2·5 %. Conclusions The high success rate, ease of use and wide applicability of the filter bursting method show that this method can facilitate the use of pollen for estimating genome size and dramatically improve unreduced pollen production estimation with flow cytometry. PMID:22875815
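    The genome-size estimation underlying the leaf-versus-pollen comparison above is a simple ratio against a co-processed internal standard of known DNA content. The sketch below shows the arithmetic only; the fluorescence values and the 2.50 pg standard are hypothetical, not data from the study.

    ```python
    def genome_size_pg(sample_fluor, standard_fluor, standard_pg):
        """Estimate genome size (pg) from the ratio of the sample's mean
        nuclei fluorescence to that of an internal standard of known
        genome size, measured in the same flow-cytometry run."""
        return standard_pg * (sample_fluor / standard_fluor)

    # Hypothetical mean fluorescence channels and a 2.50 pg standard.
    pollen_1c = genome_size_pg(sample_fluor=180.0, standard_fluor=300.0, standard_pg=2.50)
    leaf_2c = genome_size_pg(sample_fluor=365.0, standard_fluor=300.0, standard_pg=2.50)

    # Compare the 1C pollen estimate with half the 2C leaf estimate,
    # expressed as a percent difference (as in the abstract's comparison).
    pct_diff = abs(pollen_1c - leaf_2c / 2) / (leaf_2c / 2) * 100.0
    ```

    A percent difference of a point or two between pollen-based and leaf-based estimates, as in this toy example, is the kind of agreement the abstract reports for 1C pollen nuclei.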

  19. Solubilities of crystalline drugs in polymers: an improved analytical method and comparison of solubilities of indomethacin and nifedipine in PVP, PVP/VA, and PVAc.

    PubMed

    Sun, Ye; Tao, Jing; Zhang, Geoff G Z; Yu, Lian

    2010-09-01

    A previous method for measuring solubilities of crystalline drugs in polymers has been improved to enable longer equilibration and used to survey the solubilities of indomethacin (IMC) and nifedipine (NIF) in two homo-polymers [polyvinyl pyrrolidone (PVP) and polyvinyl acetate (PVAc)] and their co-polymer (PVP/VA). These data are important for understanding the stability of amorphous drug-polymer dispersions, a strategy actively explored for delivering poorly soluble drugs. Measuring solubilities in polymers is difficult because their high viscosities impede the attainment of solubility equilibrium. In this method, a drug-polymer mixture prepared by cryo-milling is annealed at different temperatures and analyzed by differential scanning calorimetry to determine whether undissolved crystals remain and thus the upper and lower bounds of the equilibrium solution temperature. The new annealing method yielded results consistent with those obtained with the previous scanning method at relatively high temperatures, but slightly revised the previous results at lower temperatures. It also lowered the temperature of measurement closer to the glass transition temperature. For D-mannitol and IMC dissolving in PVP, the polymer's molecular weight has little effect on the weight-based solubility. For IMC and NIF, the dissolving powers of the polymers follow the order PVP > PVP/VA > PVAc. In each polymer studied, NIF is less soluble than IMC. The activities of IMC and NIF dissolved in various polymers are reasonably well fitted to the Flory-Huggins model, yielding the relevant drug-polymer interaction parameters. The new annealing method yields more accurate data than the previous scanning method when solubility equilibrium is slow to achieve. In practice, these two methods can be combined for efficiency. The measured solubilities are not readily anticipated, which underscores the importance of accurate experimental data for developing predictive models.
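    The Flory-Huggins fit mentioned above can be sketched as follows. For drug volume fraction phi and polymer-to-drug molar-volume ratio m, the model gives ln a = ln(phi) + (1 - 1/m)(1 - phi) + chi*(1 - phi)^2, and the interaction parameter chi can be recovered by least squares. This is an illustrative sketch with synthetic data, not the paper's measurements or fitting procedure.

    ```python
    import math

    def fh_ln_activity(phi, m, chi):
        """Flory-Huggins log activity of a drug at volume fraction phi,
        with m the polymer/drug molar-volume ratio and chi the
        drug-polymer interaction parameter."""
        return math.log(phi) + (1.0 - 1.0 / m) * (1.0 - phi) + chi * (1.0 - phi) ** 2

    def fit_chi(data, m, grid=None):
        """Least-squares grid search for chi over (phi, ln_a) pairs."""
        if grid is None:
            grid = [i / 100.0 for i in range(-300, 301)]  # chi in [-3, 3]
        def sse(chi):
            return sum((fh_ln_activity(phi, m, chi) - ln_a) ** 2
                       for phi, ln_a in data)
        return min(grid, key=sse)

    # Synthetic "measurements" generated with chi = -0.5 and m = 100.
    m = 100.0
    data = [(phi, fh_ln_activity(phi, m, -0.5)) for phi in (0.05, 0.10, 0.20)]
    chi_hat = fit_chi(data, m)
    ```

    A negative fitted chi corresponds to favorable drug-polymer interactions (a better "dissolving power"), which is the qualitative quantity the paper extracts for each drug-polymer pair.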

  20. Method to identify wells that yield water that will be replaced by Colorado River water in Arizona, California, Nevada, and Utah

    USGS Publications Warehouse

    Wilson, Richard P.; Owen-Joyce, Sandra J.

    1994-01-01

    Accounting for the use of Colorado River water is required by the U.S. Supreme Court decree, 1964, Arizona v. California. Water pumped from wells on the flood plain and from certain wells on alluvial slopes outside the flood plain is presumed to be river water and is accounted for as Colorado River water. A method was developed to identify wells outside the flood plain of the lower Colorado River that yield water that will be replaced by water from the river. The method provides a uniform criterion of identification for all users pumping water from wells. Wells that have a static water-level elevation equal to or below the accounting surface are presumed to yield water that will be replaced by water from the river. Wells that have a static water-level elevation above the accounting surface are presumed to yield water that will be replaced by water from precipitation and inflow from tributary valleys. The method is based on the concept of a river aquifer and an accounting surface within the river aquifer. The river aquifer consists of permeable, partly saturated sediments and sedimentary rocks that are hydraulically connected to the Colorado River so that water can move between the river and the aquifer in response to withdrawal of water from the aquifer or differences in water-level elevations between the river and the aquifer. The accounting surface represents the elevation and slope of the unconfined static water table in the river aquifer outside the flood plain and reservoirs that would exist if the river were the only source of water to the river aquifer. Maps at a scale of 1:100,000 show the extent and elevation of the accounting surface from the area surrounding Lake Mead to Laguna Dam near Yuma, Arizona.
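    The criterion described above reduces to a single elevation comparison per well: static water level at or below the accounting surface means the well is presumed to draw river water. A minimal sketch, with hypothetical well data:

    ```python
    def classify_well(static_water_level_ft, accounting_surface_ft):
        """Apply the report's criterion: a well whose static water-level
        elevation is at or below the accounting surface is presumed to
        yield water that will be replaced by Colorado River water;
        otherwise its water is presumed to come from precipitation and
        tributary inflow."""
        if static_water_level_ft <= accounting_surface_ft:
            return "river water"
        return "tributary/precipitation water"

    # Hypothetical wells: (static water-level elevation, accounting
    # surface elevation at the well site), both in feet.
    wells = [(450.0, 455.0), (480.0, 455.0)]
    labels = [classify_well(wl, acc) for wl, acc in wells]
    ```

    The accounting-surface elevation at each well site would in practice be read from the report's 1:100,000-scale maps rather than supplied as a constant.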

  1. Rooting the archaebacterial tree: the pivotal role of Thermococcus celer in archaebacterial evolution

    NASA Technical Reports Server (NTRS)

    Achenbach-Richter, L.; Gupta, R.; Zillig, W.; Woese, C. R.

    1988-01-01

    The sequence of the 16S ribosomal RNA gene from the archaebacterium Thermococcus celer shows the organism to be related to the methanogenic archaebacteria rather than to its phenotypic counterparts, the extremely thermophilic archaebacteria. This conclusion turns on the position of the root of the archaebacterial phylogenetic tree, however. The problems encountered in rooting this tree are analyzed in detail. Under conditions that suppress evolutionary noise, both the parsimony and evolutionary distance methods yield a root location (using a number of eubacterial or eukaryotic outgroup sequences) that is consistent with that determined by an "internal rooting" method, based upon an (approximate) determination of relative evolutionary rates.
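    The evolutionary-distance methods referred to above start from corrected pairwise distances between aligned sequences. A common one-parameter correction is Jukes-Cantor, sketched here with hypothetical aligned fragments; the study itself used full 16S rRNA alignments and more elaborate treatments of evolutionary noise.

    ```python
    import math

    def jukes_cantor(seq1, seq2):
        """Jukes-Cantor corrected evolutionary distance between two
        aligned sequences: d = -(3/4) * ln(1 - (4/3) * p), where p is
        the observed proportion of differing sites. The correction
        accounts for multiple substitutions at the same site."""
        assert len(seq1) == len(seq2), "sequences must be aligned"
        diffs = sum(a != b for a, b in zip(seq1, seq2))
        p = diffs / len(seq1)
        return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

    # Hypothetical aligned rRNA fragments differing at 1 of 10 sites.
    d = jukes_cantor("ACGTACGTAC", "ACGTACGTAA")
    ```

    Distances such as these feed a tree-building method, and the root is then placed on the branch leading to the outgroup sequences, which is the step the abstract identifies as sensitive to noise.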

  2. [A New Simple Technique for Producing Labeled Monoclonal Antibodies for Antibody Pair Screening in Sandwich-ELISA].

    PubMed

    Zaripov, M M; Afanasieva, G V; Glukhova, X A; Trizna, Y A; Glukhov, A S; Beletsky, I P; Prusakova, O V

    2015-01-01

    A simple and fast method was developed for obtaining biotin-labeled monoclonal antibodies, using the antibody content of hybridoma culture supernatant, which is sufficient for selecting antibody pairs in sandwich ELISA. The method consists in chemical biotinylation of antigen-bound antibodies in a well of an ELISA plate. Using the Vaccinia virus A27L protein as an example target, it was shown that the yield of biotinylated reactant is enough to set up a comprehensive sandwich ELISA for a moderate-size panel of up to 25 monoclonal antibodies in order to determine candidate pairs. The technique is a cheap and effective solution, since it avoids the production of preparative amounts of purified antibodies.
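    Once the biotinylated antibodies are available, the pair screen itself reduces to reading out a capture-by-detector signal matrix and keeping combinations well above background. The sketch below shows that selection step only; the OD values, threshold, and panel size are hypothetical, not the paper's data.

    ```python
    def candidate_pairs(od, background=0.2, fold=3.0):
        """Pick capture/detection antibody pairs from a sandwich-ELISA
        signal matrix: od[i][j] is the optical density with antibody i
        as capture and biotinylated antibody j as detector. Keep pairs
        whose signal is at least `fold` times background; skip
        self-pairs, which typically fail when both antibodies compete
        for the same epitope."""
        hits = []
        for i, row in enumerate(od):
            for j, signal in enumerate(row):
                if i != j and signal >= fold * background:
                    hits.append((i, j))
        return hits

    # Hypothetical 3x3 screen of monoclonal antibodies.
    od = [[0.15, 0.90, 0.25],
          [0.80, 0.10, 0.18],
          [0.22, 0.95, 0.12]]
    pairs = candidate_pairs(od)
    ```

    For the 25-antibody panel mentioned above the matrix would be 25x25 (600 non-self combinations), which is why labeling directly from supernatant, rather than from purified antibody, makes the screen practical.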

  3. Measurement of the top quark mass with the template method in the [Formula: see text] channel using ATLAS data.

    PubMed

    Aad, G; Abbott, B; Abdallah, J; Abdelalim, A A; Abdesselam, A; Abdinov, O; Abi, B; Abolins, M; AbouZeid, O S; Abramowicz, H; Abreu, H; Acerbi, E; Acharya, B S; Adamczyk, L; Adams, D L; Addy, T N; Adelman, J; Aderholz, M; Adomeit, S; Adragna, P; Adye, T; Aefsky, S; Aguilar-Saavedra, J A; Aharrouche, M; Ahlen, S P; Ahles, F; Ahmad, A; Ahsan, M; Aielli, G; Akdogan, T; Åkesson, T P A; Akimoto, G; Akimov, A V; Akiyama, A; Alam, M S; Alam, M A; Albert, J; Albrand, S; Aleksa, M; Aleksandrov, I N; Alessandria, F; Alexa, C; Alexander, G; Alexandre, G; Alexopoulos, T; Alhroob, M; Aliev, M; Alimonti, G; Alison, J; Aliyev, M; Allbrooke, B M M; Allport, P P; Allwood-Spiers, S E; Almond, J; Aloisio, A; Alon, R; Alonso, A; Alvarez Gonzalez, B; Alviggi, M G; Amako, K; Amaral, P; Amelung, C; Ammosov, V V; Amorim, A; Amorós, G; Amram, N; Anastopoulos, C; Ancu, L S; Andari, N; Andeen, T; Anders, C F; Anders, G; Anderson, K J; Andreazza, A; Andrei, V; Andrieux, M-L; Anduaga, X S; Angerami, A; Anghinolfi, F; Anisenkov, A; Anjos, N; Annovi, A; Antonaki, A; Antonelli, M; Antonov, A; Antos, J; Anulli, F; Aoun, S; Aperio Bella, L; Apolle, R; Arabidze, G; Aracena, I; Arai, Y; Arce, A T H; Arfaoui, S; Arguin, J-F; Arik, E; Arik, M; Armbruster, A J; Arnaez, O; Arnault, C; Artamonov, A; Artoni, G; Arutinov, D; Asai, S; Asfandiyarov, R; Ask, S; Åsman, B; Asquith, L; Assamagan, K; Astbury, A; Astvatsatourov, A; Aubert, B; Auge, E; Augsten, K; Aurousseau, M; Avolio, G; Avramidou, R; Axen, D; Ay, C; Azuelos, G; Azuma, Y; Baak, M A; Baccaglioni, G; Bacci, C; Bach, A M; Bachacou, H; Bachas, K; Backes, M; Backhaus, M; Badescu, E; Bagnaia, P; Bahinipati, S; Bai, Y; Bailey, D C; Bain, T; Baines, J T; Baker, O K; Baker, M D; Baker, S; Banas, E; Banerjee, P; Banerjee, Sw; Banfi, D; Bangert, A; Bansal, V; Bansil, H S; Barak, L; Baranov, S P; Barashkou, A; Barbaro Galtieri, A; Barber, T; Barberio, E L; Barberis, D; Barbero, M; Bardin, D Y; Barillari, T; Barisonzi, M; Barklow, T; Barlow, N; Barnett, B 
M; Barnett, R M; Baroncelli, A; Barone, G; Barr, A J; Barreiro, F; Barreiro Guimarães da Costa, J; Barrillon, P; Bartoldus, R; Barton, A E; Bartsch, V; Bates, R L; Batkova, L; Batley, J R; Battaglia, A; Battistin, M; Bauer, F; Bawa, H S; Beale, S; Beare, B; Beau, T; Beauchemin, P H; Beccherle, R; Bechtle, P; Beck, H P; Becker, S; Beckingham, M; Becks, K H; Beddall, A J; Beddall, A; Bedikian, S; Bednyakov, V A; Bee, C P; Begel, M; Behar Harpaz, S; Behera, P K; Beimforde, M; Belanger-Champagne, C; Bell, P J; Bell, W H; Bella, G; Bellagamba, L; Bellina, F; Bellomo, M; Belloni, A; Beloborodova, O; Belotskiy, K; Beltramello, O; Ben Ami, S; Benary, O; Benchekroun, D; Benchouk, C; Bendel, M; Benekos, N; Benhammou, Y; Benhar Noccioli, E; Benitez Garcia, J A; Benjamin, D P; Benoit, M; Bensinger, J R; Benslama, K; Bentvelsen, S; Berge, D; Bergeaas Kuutmann, E; Berger, N; Berghaus, F; Berglund, E; Beringer, J; Bernat, P; Bernhard, R; Bernius, C; Berry, T; Bertella, C; Bertin, A; Bertinelli, F; Bertolucci, F; Besana, M I; Besson, N; Bethke, S; Bhimji, W; Bianchi, R M; Bianco, M; Biebel, O; Bieniek, S P; Bierwagen, K; Biesiada, J; Biglietti, M; Bilokon, H; Bindi, M; Binet, S; Bingul, A; Bini, C; Biscarat, C; Bitenc, U; Black, K M; Blair, R E; Blanchard, J-B; Blanchot, G; Blazek, T; Blocker, C; Blocki, J; Blondel, A; Blum, W; Blumenschein, U; Bobbink, G J; Bobrovnikov, V B; Bocchetta, S S; Bocci, A; Boddy, C R; Boehler, M; Boek, J; Boelaert, N; Bogaerts, J A; Bogdanchikov, A; Bogouch, A; Bohm, C; Boisvert, V; Bold, T; Boldea, V; Bolnet, N M; Bona, M; Bondarenko, V G; Bondioli, M; Boonekamp, M; Booth, C N; Bordoni, S; Borer, C; Borisov, A; Borissov, G; Borjanovic, I; Borri, M; Borroni, S; Bortolotto, V; Bos, K; Boscherini, D; Bosman, M; Boterenbrood, H; Botterill, D; Bouchami, J; Boudreau, J; Bouhova-Thacker, E V; Boumediene, D; Bourdarios, C; Bousson, N; Boveia, A; Boyd, J; Boyko, I R; Bozhko, N I; Bozovic-Jelisavcic, I; Bracinik, J; Braem, A; Branchini, P; Brandenburg, G W; 
Brandt, A; Brandt, G; Brandt, O; Bratzler, U; Brau, B; Brau, J E; Braun, H M; Brelier, B; Bremer, J; Brenner, R; Bressler, S; Breton, D; Britton, D; Brochu, F M; Brock, I; Brock, R; Brodbeck, T J; Brodet, E; Broggi, F; Bromberg, C; Bronner, J; Brooijmans, G; Brooks, W K; Brown, G; Brown, H; Bruckman de Renstrom, P A; Bruncko, D; Bruneliere, R; Brunet, S; Bruni, A; Bruni, G; Bruschi, M; Buanes, T; Buat, Q; Bucci, F; Buchanan, J; Buchanan, N J; Buchholz, P; Buckingham, R M; Buckley, A G; Buda, S I; Budagov, I A; Budick, B; Büscher, V; Bugge, L; Bulekov, O; Bunse, M; Buran, T; Burckhart, H; Burdin, S; Burgess, T; Burke, S; Busato, E; Bussey, P; Buszello, C P; Butin, F; Butler, B; Butler, J M; Buttar, C M; Butterworth, J M; Buttinger, W; Cabrera Urbán, S; Caforio, D; Cakir, O; Calafiura, P; Calderini, G; Calfayan, P; Calkins, R; Caloba, L P; Caloi, R; Calvet, D; Calvet, S; Camacho Toro, R; Camarri, P; Cambiaghi, M; Cameron, D; Caminada, L M; Campana, S; Campanelli, M; Canale, V; Canelli, F; Canepa, A; Cantero, J; Capasso, L; Capeans Garrido, M D M; Caprini, I; Caprini, M; Capriotti, D; Capua, M; Caputo, R; Caramarcu, C; Cardarelli, R; Carli, T; Carlino, G; Carminati, L; Caron, B; Caron, S; Carrillo Montoya, G D; Carter, A A; Carter, J R; Carvalho, J; Casadei, D; Casado, M P; Cascella, M; Caso, C; Castaneda Hernandez, A M; Castaneda-Miranda, E; Castillo Gimenez, V; Castro, N F; Cataldi, G; Cataneo, F; Catinaccio, A; Catmore, J R; Cattai, A; Cattani, G; Caughron, S; Cauz, D; Cavalleri, P; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Ceradini, F; Cerqueira, A S; Cerri, A; Cerrito, L; Cerutti, F; Cetin, S A; Cevenini, F; Chafaq, A; Chakraborty, D; Chan, K; Chapleau, B; Chapman, J D; Chapman, J W; Chareyre, E; Charlton, D G; Chavda, V; Chavez Barajas, C A; Cheatham, S; Chekanov, S; Chekulaev, S V; Chelkov, G A; Chelstowska, M A; Chen, C; Chen, H; Chen, S; Chen, T; Chen, X; Cheng, S; Cheplakov, A; Chepurnov, V F; Cherkaoui El Moursli, R; Chernyatin, V; Cheu, E; Cheung, S L; 
Chevalier, L; Chiefari, G; Chikovani, L; Childers, J T; Chilingarov, A; Chiodini, G; Chisholm, A S; Chizhov, M V; Choudalakis, G; Chouridou, S; Christidi, I A; Christov, A; Chromek-Burckhart, D; Chu, M L; Chudoba, J; Ciapetti, G; Ciba, K; Ciftci, A K; Ciftci, R; Cinca, D; Cindro, V; Ciobotaru, M D; Ciocca, C; Ciocio, A; Cirilli, M; Citterio, M; Ciubancan, M; Clark, A; Clark, P J; Cleland, W; Clemens, J C; Clement, B; Clement, C; Clifft, R W; Coadou, Y; Cobal, M; Coccaro, A; Cochran, J; Coe, P; Cogan, J G; Coggeshall, J; Cogneras, E; Colas, J; Colijn, A P; Collins, N J; Collins-Tooth, C; Collot, J; Colon, G; Conde Muiño, P; Coniavitis, E; Conidi, M C; Consonni, M; Consorti, V; Constantinescu, S; Conta, C; Conventi, F; Cook, J; Cooke, M; Cooper, B D; Cooper-Sarkar, A M; Copic, K; Cornelissen, T; Corradi, M; Corriveau, F; Cortes-Gonzalez, A; Cortiana, G; Costa, G; Costa, M J; Costanzo, D; Costin, T; Côté, D; Coura Torres, R; Courneyea, L; Cowan, G; Cowden, C; Cox, B E; Cranmer, K; Crescioli, F; Cristinziani, M; Crosetti, G; Crupi, R; Crépé-Renaudin, S; Cuciuc, C-M; Cuenca Almenar, C; Cuhadar Donszelmann, T; Curatolo, M; Curtis, C J; Cuthbert, C; Cwetanski, P; Czirr, H; Czodrowski, P; Czyczula, Z; D'Auria, S; D'Onofrio, M; D'Orazio, A; Da Silva, P V M; Da Via, C; Dabrowski, W; Dai, T; Dallapiccola, C; Dam, M; Dameri, M; Damiani, D S; Danielsson, H O; Dannheim, D; Dao, V; Darbo, G; Darlea, G L; Davey, W; Davidek, T; Davidson, N; Davidson, R; Davies, E; Davies, M; Davison, A R; Davygora, Y; Dawe, E; Dawson, I; Dawson, J W; Daya-Ishmukhametova, R K; De, K; de Asmundis, R; De Castro, S; De Castro Faria Salgado, P E; De Cecco, S; de Graat, J; De Groot, N; de Jong, P; De La Taille, C; De la Torre, H; De Lotto, B; de Mora, L; De Nooij, L; De Pedis, D; De Salvo, A; De Sanctis, U; De Santo, A; De Vivie De Regie, J B; Dean, S; Dearnaley, W J; Debbe, R; Debenedetti, C; Dedovich, D V; Degenhardt, J; Dehchar, M; Del Papa, C; Del Peso, J; Del Prete, T; Delemontex, T; Deliyergiyev, 
M; Dell'Acqua, A; Dell'Asta, L; Della Pietra, M; Della Volpe, D; Delmastro, M; Delruelle, N; Delsart, P A; Deluca, C; Demers, S; Demichev, M; Demirkoz, B; Deng, J; Denisov, S P; Derendarz, D; Derkaoui, J E; Derue, F; Dervan, P; Desch, K; Devetak, E; Deviveiros, P O; Dewhurst, A; DeWilde, B; Dhaliwal, S; Dhullipudi, R; Di Ciaccio, A; Di Ciaccio, L; Di Girolamo, A; Di Girolamo, B; Di Luise, S; Di Mattia, A; Di Micco, B; Di Nardo, R; Di Simone, A; Di Sipio, R; Diaz, M A; Diblen, F; Diehl, E B; Dietrich, J; Dietzsch, T A; Diglio, S; Dindar Yagci, K; Dingfelder, J; Dionisi, C; Dita, P; Dita, S; Dittus, F; Djama, F; Djobava, T; do Vale, M A B; Do Valle Wemans, A; Doan, T K O; Dobbs, M; Dobinson, R; Dobos, D; Dobson, E; Dodd, J; Doglioni, C; Doherty, T; Doi, Y; Dolejsi, J; Dolenc, I; Dolezal, Z; Dolgoshein, B A; Dohmae, T; Donadelli, M; Donega, M; Donini, J; Dopke, J; Doria, A; Dos Anjos, A; Dosil, M; Dotti, A; Dova, M T; Dowell, J D; Doxiadis, A D; Doyle, A T; Drasal, Z; Drees, J; Dressnandt, N; Drevermann, H; Driouichi, C; Dris, M; Dubbert, J; Dube, S; Duchovni, E; Duckeck, G; Dudarev, A; Dudziak, F; Dührssen, M; Duerdoth, I P; Duflot, L; Dufour, M-A; Dunford, M; Duran Yildiz, H; Duxfield, R; Dwuznik, M; Dydak, F; Düren, M; Ebenstein, W L; Ebke, J; Eckweiler, S; Edmonds, K; Edwards, C A; Edwards, N C; Ehrenfeld, W; Ehrich, T; Eifert, T; Eigen, G; Einsweiler, K; Eisenhandler, E; Ekelof, T; El Kacimi, M; Ellert, M; Elles, S; Ellinghaus, F; Ellis, K; Ellis, N; Elmsheuser, J; Elsing, M; Emeliyanov, D; Engelmann, R; Engl, A; Epp, B; Eppig, A; Erdmann, J; Ereditato, A; Eriksson, D; Ernst, J; Ernst, M; Ernwein, J; Errede, D; Errede, S; Ertel, E; Escalier, M; Escobar, C; Espinal Curull, X; Esposito, B; Etienne, F; Etienvre, A I; Etzion, E; Evangelakou, D; Evans, H; Fabbri, L; Fabre, C; Fakhrutdinov, R M; Falciano, S; Fang, Y; Fanti, M; Farbin, A; Farilla, A; Farley, J; Farooque, T; Farrington, S M; Farthouat, P; Fassnacht, P; Fassouliotis, D; Fatholahzadeh, B; Favareto, A; 
Fayard, L; Fazio, S; Febbraro, R; Federic, P; Fedin, O L; Fedorko, W; Fehling-Kaschek, M; Feligioni, L; Fellmann, D; Feng, C; Feng, E J; Fenyuk, A B; Ferencei, J; Ferland, J; Fernando, W; Ferrag, S; Ferrando, J; Ferrara, V; Ferrari, A; Ferrari, P; Ferrari, R; Ferrer, A; Ferrer, M L; Ferrere, D; Ferretti, C; Ferretto Parodi, A; Fiascaris, M; Fiedler, F; Filipčič, A; Filippas, A; Filthaut, F; Fincke-Keeler, M; Fiolhais, M C N; Fiorini, L; Firan, A; Fischer, G; Fischer, P; Fisher, M J; Flechl, M; Fleck, I; Fleckner, J; Fleischmann, P; Fleischmann, S; Flick, T; Flores Castillo, L R; Flowerdew, M J; Fokitis, M; Fonseca Martin, T; Forbush, D A; Formica, A; Forti, A; Fortin, D; Foster, J M; Fournier, D; Foussat, A; Fowler, A J; Fowler, K; Fox, H; Francavilla, P; Franchino, S; Francis, D; Frank, T; Franklin, M; Franz, S; Fraternali, M; Fratina, S; French, S T; Friedrich, F; Froeschl, R; Froidevaux, D; Frost, J A; Fukunaga, C; Fullana Torregrosa, E; Fuster, J; Gabaldon, C; Gabizon, O; Gadfort, T; Gadomski, S; Gagliardi, G; Gagnon, P; Galea, C; Gallas, E J; Gallo, V; Gallop, B J; Gallus, P; Gan, K K; Gao, Y S; Gapienko, V A; Gaponenko, A; Garberson, F; Garcia-Sciveres, M; García, C; García Navarro, J E; Gardner, R W; Garelli, N; Garitaonandia, H; Garonne, V; Garvey, J; Gatti, C; Gaudio, G; Gaur, B; Gauthier, L; Gavrilenko, I L; Gay, C; Gaycken, G; Gayde, J-C; Gazis, E N; Ge, P; Gee, C N P; Geerts, D A A; Geich-Gimbel, Ch; Gellerstedt, K; Gemme, C; Gemmell, A; Genest, M H; Gentile, S; George, M; George, S; Gerlach, P; Gershon, A; Geweniger, C; Ghazlane, H; Ghodbane, N; Giacobbe, B; Giagu, S; Giakoumopoulou, V; Giangiobbe, V; Gianotti, F; Gibbard, B; Gibson, A; Gibson, S M; Gilbert, L M; Gilewsky, V; Gillberg, D; Gillman, A R; Gingrich, D M; Ginzburg, J; Giokaris, N; Giordani, M P; Giordano, R; Giorgi, F M; Giovannini, P; Giraud, P F; Giugni, D; Giunta, M; Giusti, P; Gjelsten, B K; Gladilin, L K; Glasman, C; Glatzer, J; Glazov, A; Glitza, K W; Glonti, G L; Goddard, J R; 
Godfrey, J; Godlewski, J; Goebel, M; Göpfert, T; Goeringer, C; Gössling, C; Göttfert, T; Goldfarb, S; Golling, T; Gomes, A; Gomez Fajardo, L S; Gonçalo, R; Goncalves Pinto Firmino Da Costa, J; Gonella, L; Gonidec, A; Gonzalez, S; González de la Hoz, S; Gonzalez Parra, G; Gonzalez Silva, M L; Gonzalez-Sevilla, S; Goodson, J J; Goossens, L; Gorbounov, P A; Gordon, H A; Gorelov, I; Gorfine, G; Gorini, B; Gorini, E; Gorišek, A; Gornicki, E; Gorokhov, S A; Goryachev, V N; Gosdzik, B; Gosselink, M; Gostkin, M I; Gough Eschrich, I; Gouighri, M; Goujdami, D; Goulette, M P; Goussiou, A G; Goy, C; Gozpinar, S; Grabowska-Bold, I; Grafström, P; Grahn, K-J; Grancagnolo, F; Grancagnolo, S; Grassi, V; Gratchev, V; Grau, N; Gray, H M; Gray, J A; Graziani, E; Grebenyuk, O G; Greenshaw, T; Greenwood, Z D; Gregersen, K; Gregor, I M; Grenier, P; Griffiths, J; Grigalashvili, N; Grillo, A A; Grinstein, S; Grishkevich, Y V; Grivaz, J-F; Groh, M; Gross, E; Grosse-Knetter, J; Groth-Jensen, J; Grybel, K; Guarino, V J; Guest, D; Guicheney, C; Guida, A; Guindon, S; Guler, H; Gunther, J; Guo, B; Guo, J; Gupta, A; Gusakov, Y; Gushchin, V N; Gutierrez, P; Guttman, N; Gutzwiller, O; Guyot, C; Gwenlan, C; Gwilliam, C B; Haas, A; Haas, S; Haber, C; Hadavand, H K; Hadley, D R; Haefner, P; Hahn, F; Haider, S; Hajduk, Z; Hakobyan, H; Hall, D; Haller, J; Hamacher, K; Hamal, P; Hamer, M; Hamilton, A; Hamilton, S; Han, H; Han, L; Hanagaki, K; Hanawa, K; Hance, M; Handel, C; Hanke, P; Hansen, J R; Hansen, J B; Hansen, J D; Hansen, P H; Hansson, P; Hara, K; Hare, G A; Harenberg, T; Harkusha, S; Harper, D; Harrington, R D; Harris, O M; Harrison, K; Hartert, J; Hartjes, F; Haruyama, T; Harvey, A; Hasegawa, S; Hasegawa, Y; Hassani, S; Hatch, M; Hauff, D; Haug, S; Hauschild, M; Hauser, R; Havranek, M; Hawes, B M; Hawkes, C M; Hawkings, R J; Hawkins, A D; Hawkins, D; Hayakawa, T; Hayashi, T; Hayden, D; Hayward, H S; Haywood, S J; Hazen, E; He, M; Head, S J; Hedberg, V; Heelan, L; Heim, S; Heinemann, B; 
Heisterkamp, S; Helary, L; Heller, C; Heller, M; Hellman, S; Hellmich, D; Helsens, C; Henderson, R C W; Henke, M; Henrichs, A; Henriques Correia, A M; Henrot-Versille, S; Henry-Couannier, F; Hensel, C; Henß, T; Hernandez, C M; Hernández Jiménez, Y; Herrberg, R; Hershenhorn, A D; Herten, G; Hertenberger, R; Hervas, L; Hessey, N P; Higón-Rodriguez, E; Hill, D; Hill, J C; Hill, N; Hiller, K H; Hillert, S; Hillier, S J; Hinchliffe, I; Hines, E; Hirose, M; Hirsch, F; Hirschbuehl, D; Hobbs, J; Hod, N; Hodgkinson, M C; Hodgson, P; Hoecker, A; Hoeferkamp, M R; Hoffman, J; Hoffmann, D; Hohlfeld, M; Holder, M; Holmgren, S O; Holy, T; Holzbauer, J L; Homma, Y; Hong, T M; Hooft van Huysduynen, L; Horazdovsky, T; Horn, C; Horner, S; Hostachy, J-Y; Hou, S; Houlden, M A; Hoummada, A; Howarth, J; Howell, D F; Hristova, I; Hrivnac, J; Hruska, I; Hryn'ova, T; Hsu, P J; Hsu, S-C; Huang, G S; Hubacek, Z; Hubaut, F; Huegging, F; Huettmann, A; Huffman, T B; Hughes, E W; Hughes, G; Hughes-Jones, R E; Huhtinen, M; Hurst, P; Hurwitz, M; Husemann, U; Huseynov, N; Huston, J; Huth, J; Iacobucci, G; Iakovidis, G; Ibbotson, M; Ibragimov, I; Ichimiya, R; Iconomidou-Fayard, L; Idarraga, J; Iengo, P; Igonkina, O; Ikegami, Y; Ikeno, M; Ilchenko, Y; Iliadis, D; Ilic, N; Imori, M; Ince, T; Inigo-Golfin, J; Ioannou, P; Iodice, M; Ippolito, V; Irles Quiles, A; Isaksson, C; Ishikawa, A; Ishino, M; Ishmukhametov, R; Issever, C; Istin, S; Ivashin, A V; Iwanski, W; Iwasaki, H; Izen, J M; Izzo, V; Jackson, B; Jackson, J N; Jackson, P; Jaekel, M R; Jain, V; Jakobs, K; Jakobsen, S; Jakubek, J; Jana, D K; Jankowski, E; Jansen, E; Jansen, H; Jantsch, A; Janus, M; Jarlskog, G; Jeanty, L; Jelen, K; Jen-La Plante, I; Jenni, P; Jeremie, A; Jež, P; Jézéquel, S; Jha, M K; Ji, H; Ji, W; Jia, J; Jiang, Y; Jimenez Belenguer, M; Jin, G; Jin, S; Jinnouchi, O; Joergensen, M D; Joffe, D; Johansen, L G; Johansen, M; Johansson, K E; Johansson, P; Johnert, S; Johns, K A; Jon-And, K; Jones, G; Jones, R W L; Jones, T W; Jones, T 
J; Jonsson, O; Joram, C; Jorge, P M; Joseph, J; Jovin, T; Ju, X; Jung, C A; Jungst, R M; Juranek, V; Jussel, P; Juste Rozas, A; Kabachenko, V V; Kabana, S; Kaci, M; Kaczmarska, A; Kadlecik, P; Kado, M; Kagan, H; Kagan, M; Kaiser, S; Kajomovitz, E; Kalinin, S; Kalinovskaya, L V; Kama, S; Kanaya, N; Kaneda, M; Kaneti, S; Kanno, T; Kantserov, V A; Kanzaki, J; Kaplan, B; Kapliy, A; Kaplon, J; Kar, D; Karagounis, M; Karagoz, M; Karnevskiy, M; Karr, K; Kartvelishvili, V; Karyukhin, A N; Kashif, L; Kasieczka, G; Kass, R D; Kastanas, A; Kataoka, M; Kataoka, Y; Katsoufis, E; Katzy, J; Kaushik, V; Kawagoe, K; Kawamoto, T; Kawamura, G; Kayl, M S; Kazanin, V A; Kazarinov, M Y; Keeler, R; Kehoe, R; Keil, M; Kekelidze, G D; Kennedy, J; Kenney, C J; Kenyon, M; Kepka, O; Kerschen, N; Kerševan, B P; Kersten, S; Kessoku, K; Keung, J; Khalil-Zada, F; Khandanyan, H; Khanov, A; Kharchenko, D; Khodinov, A; Kholodenko, A G; Khomich, A; Khoo, T J; Khoriauli, G; Khoroshilov, A; Khovanskiy, N; Khovanskiy, V; Khramov, E; Khubua, J; Kim, H; Kim, M S; Kim, P C; Kim, S H; Kimura, N; Kind, O; King, B T; King, M; King, R S B; Kirk, J; Kirsch, L E; Kiryunin, A E; Kishimoto, T; Kisielewska, D; Kittelmann, T; Kiver, A M; Kladiva, E; Klaiber-Lodewigs, J; Klein, M; Klein, U; Kleinknecht, K; Klemetti, M; Klier, A; Klimek, P; Klimentov, A; Klingenberg, R; Klinger, J A; Klinkby, E B; Klioutchnikova, T; Klok, P F; Klous, S; Kluge, E-E; Kluge, T; Kluit, P; Kluth, S; Knecht, N S; Kneringer, E; Knobloch, J; Knoops, E B F G; Knue, A; Ko, B R; Kobayashi, T; Kobel, M; Kocian, M; Kodys, P; Köneke, K; König, A C; Koenig, S; Köpke, L; Koetsveld, F; Koevesarki, P; Koffas, T; Koffeman, E; Kogan, L A; Kohn, F; Kohout, Z; Kohriki, T; Koi, T; Kokott, T; Kolachev, G M; Kolanoski, H; Kolesnikov, V; Koletsou, I; Koll, J; Kollar, D; Kollefrath, M; Kolya, S D; Komar, A A; Komori, Y; Kondo, T; Kono, T; Kononov, A I; Konoplich, R; Konstantinidis, N; Kootz, A; Koperny, S; Korcyl, K; Kordas, K; Koreshev, V; Korn, A; Korol, A; 
Korolkov, I; Korolkova, E V; Korotkov, V A; Kortner, O; Kortner, S; Kostyukhin, V V; Kotamäki, M J; Kotov, S; Kotov, V M; Kotwal, A; Kourkoumelis, C; Kouskoura, V; Koutsman, A; Kowalewski, R; Kowalski, T Z; Kozanecki, W; Kozhin, A S; Kral, V; Kramarenko, V A; Kramberger, G; Krasny, M W; Krasznahorkay, A; Kraus, J; Kraus, J K; Kreisel, A; Krejci, F; Kretzschmar, J; Krieger, N; Krieger, P; Kroeninger, K; Kroha, H; Kroll, J; Kroseberg, J; Krstic, J; Kruchonak, U; Krüger, H; Kruker, T; Krumnack, N; Krumshteyn, Z V; Kruth, A; Kubota, T; Kuday, S; Kuehn, S; Kugel, A; Kuhl, T; Kuhn, D; Kukhtin, V; Kulchitsky, Y; Kuleshov, S; Kummer, C; Kuna, M; Kundu, N; Kunkle, J; Kupco, A; Kurashige, H; Kurata, M; Kurochkin, Y A; Kus, V; Kuwertz, E S; Kuze, M; Kvita, J; Kwee, R; La Rosa, A; La Rotonda, L; Labarga, L; Labbe, J; Lablak, S; Lacasta, C; Lacava, F; Lacker, H; Lacour, D; Lacuesta, V R; Ladygin, E; Lafaye, R; Laforge, B; Lagouri, T; Lai, S; Laisne, E; Lamanna, M; Lampen, C L; Lampl, W; Lancon, E; Landgraf, U; Landon, M P J; Lane, J L; Lange, C; Lankford, A J; Lanni, F; Lantzsch, K; Laplace, S; Lapoire, C; Laporte, J F; Lari, T; Larionov, A V; Larner, A; Lasseur, C; Lassnig, M; Laurelli, P; Lavrijsen, W; Laycock, P; Lazarev, A B; Le Dortz, O; Le Guirriec, E; Le Maner, C; Le Menedeu, E; Lebel, C; LeCompte, T; Ledroit-Guillon, F; Lee, H; Lee, J S H; Lee, S C; Lee, L; Lefebvre, M; Legendre, M; Leger, A; LeGeyt, B C; Legger, F; Leggett, C; Lehmacher, M; Lehmann Miotto, G; Lei, X; Leite, M A L; Leitner, R; Lellouch, D; Leltchouk, M; Lemmer, B; Lendermann, V; Leney, K J C; Lenz, T; Lenzen, G; Lenzi, B; Leonhardt, K; Leontsinis, S; Leroy, C; Lessard, J-R; Lesser, J; Lester, C G; Leung Fook Cheong, A; Levêque, J; Levin, D; Levinson, L J; Levitski, M S; Lewis, A; Lewis, G H; Leyko, A M; Leyton, M; Li, B; Li, H; Li, S; Li, X; Liang, Z; Liao, H; Liberti, B; Lichard, P; Lichtnecker, M; Lie, K; Liebig, W; Lifshitz, R; Limbach, C; Limosani, A; Limper, M; Lin, S C; Linde, F; Linnemann, J T; 
Lipeles, E; Lipinsky, L; Lipniacka, A; Liss, T M; Lissauer, D; Lister, A; Litke, A M; Liu, C; Liu, D; Liu, H; Liu, J B; Liu, M; Liu, Y; Livan, M; Livermore, S S A; Lleres, A; Llorente Merino, J; Lloyd, S L; Lobodzinska, E; Loch, P; Lockman, W S; Loddenkoetter, T; Loebinger, F K; Loginov, A; Loh, C W; Lohse, T; Lohwasser, K; Lokajicek, M; Loken, J; Lombardo, V P; Long, R E; Lopes, L; Lopez Mateos, D; Lorenz, J; Lorenzo Martinez, N; Losada, M; Loscutoff, P; Lo Sterzo, F; Losty, M J; Lou, X; Lounis, A; Loureiro, K F; Love, J; Love, P A; Lowe, A J; Lu, F; Lubatti, H J; Luci, C; Lucotte, A; Ludwig, A; Ludwig, D; Ludwig, I; Ludwig, J; Luehring, F; Luijckx, G; Lumb, D; Luminari, L; Lund, E; Lund-Jensen, B; Lundberg, B; Lundberg, J; Lundquist, J; Lungwitz, M; Lutz, G; Lynn, D; Lys, J; Lytken, E; Ma, H; Ma, L L; Macana Goia, J A; Maccarrone, G; Macchiolo, A; Maček, B; Machado Miguens, J; Mackeprang, R; Madaras, R J; Mader, W F; Maenner, R; Maeno, T; Mättig, P; Mättig, S; Magnoni, L; Magradze, E; Mahalalel, Y; Mahboubi, K; Mahout, G; Maiani, C; Maidantchik, C; Maio, A; Majewski, S; Makida, Y; Makovec, N; Mal, P; Malaescu, B; Malecki, Pa; Malecki, P; Maleev, V P; Malek, F; Mallik, U; Malon, D; Malone, C; Maltezos, S; Malyshev, V; Malyukov, S; Mameghani, R; Mamuzic, J; Manabe, A; Mandelli, L; Mandić, I; Mandrysch, R; Maneira, J; Mangeard, P S; Manhaes de Andrade Filho, L; Manjavidze, I D; Mann, A; Manning, P M; Manousakis-Katsikakis, A; Mansoulie, B; Manz, A; Mapelli, A; Mapelli, L; March, L; Marchand, J F; Marchese, F; Marchiori, G; Marcisovsky, M; Marin, A; Marino, C P; Marroquim, F; Marshall, R; Marshall, Z; Martens, F K; Marti-Garcia, S; Martin, A J; Martin, B; Martin, B; Martin, F F; Martin, J P; Martin, Ph; Martin, T A; Martin, V J; Martin Dit Latour, B; Martin-Haugh, S; Martinez, M; Martinez Outschoorn, V; Martyniuk, A C; Marx, M; Marzano, F; Marzin, A; Masetti, L; Mashimo, T; Mashinistov, R; Masik, J; Maslennikov, A L; Massa, I; Massaro, G; Massol, N; Mastrandrea, P; 
Mastroberardino, A; Masubuchi, T; Mathes, M; Matricon, P; Matsumoto, H; Matsunaga, H; Matsushita, T; Mattravers, C; Maugain, J M; Maurer, J; Maxfield, S J; Maximov, D A; May, E N; Mayne, A; Mazini, R; Mazur, M; Mazzanti, M; Mazzoni, E; Mc Kee, S P; McCarn, A; McCarthy, R L; McCarthy, T G; McCubbin, N A; McFarlane, K W; Mcfayden, J A; McGlone, H; Mchedlidze, G; McLaren, R A; Mclaughlan, T; McMahon, S J; McPherson, R A; Meade, A; Mechnich, J; Mechtel, M; Medinnis, M; Meera-Lebbai, R; Meguro, T; Mehdiyev, R; Mehlhase, S; Mehta, A; Meier, K; Meirose, B; Melachrinos, C; Mellado Garcia, B R; Mendoza Navas, L; Meng, Z; Mengarelli, A; Menke, S; Menot, C; Meoni, E; Mercurio, K M; Mermod, P; Merola, L; Meroni, C; Merritt, F S; Merritt, H; Messina, A; Metcalfe, J; Mete, A S; Meyer, C; Meyer, C; Meyer, J-P; Meyer, J; Meyer, J; Meyer, T C; Meyer, W T; Miao, J; Michal, S; Micu, L; Middleton, R P; Migas, S; Mijović, L; Mikenberg, G; Mikestikova, M; Mikuž, M; Miller, D W; Miller, R J; Mills, W J; Mills, C; Milov, A; Milstead, D A; Milstein, D; Minaenko, A A; Miñano Moya, M; Minashvili, I A; Mincer, A I; Mindur, B; Mineev, M; Ming, Y; Mir, L M; Mirabelli, G; Miralles Verge, L; Misiejuk, A; Mitrevski, J; Mitrofanov, G Y; Mitsou, V A; Mitsui, S; Miyagawa, P S; Miyazaki, K; Mjörnmark, J U; Moa, T; Mockett, P; Moed, S; Moeller, V; Mönig, K; Möser, N; Mohapatra, S; Mohr, W; Mohrdieck-Möck, S; Moisseev, A M; Moles-Valls, R; Molina-Perez, J; Monk, J; Monnier, E; Montesano, S; Monticelli, F; Monzani, S; Moore, R W; Moorhead, G F; Mora Herrera, C; Moraes, A; Morange, N; Morel, J; Morello, G; Moreno, D; Moreno Llácer, M; Morettini, P; Morii, M; Morin, J; Morley, A K; Mornacchi, G; Morozov, S V; Morris, J D; Morvaj, L; Moser, H G; Mosidze, M; Moss, J; Mount, R; Mountricha, E; Mouraviev, S V; Moyse, E J W; Mudrinic, M; Mueller, F; Mueller, J; Mueller, K; Müller, T A; Mueller, T; Muenstermann, D; Muir, A; Munwes, Y; Murray, W J; Mussche, I; Musto, E; Myagkov, A G; Myska, M; Nadal, J; Nagai, K; 
Nagano, K; Nagarkar, A; Nagasaka, Y; Nagel, M; Nairz, A M; Nakahama, Y; Nakamura, K; Nakamura, T; Nakano, I; Nanava, G; Napier, A; Narayan, R; Nash, M; Nation, N R; Nattermann, T; Naumann, T; Navarro, G; Neal, H A; Nebot, E; Nechaeva, P Yu; Neep, T J; Negri, A; Negri, G; Nektarijevic, S; Nelson, A; Nelson, S; Nelson, T K; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Neubauer, M S; Neusiedl, A; Neves, R M; Nevski, P; Newman, P R; Nguyen Thi Hong, V; Nickerson, R B; Nicolaidou, R; Nicolas, L; Nicquevert, B; Niedercorn, F; Nielsen, J; Niinikoski, T; Nikiforou, N; Nikiforov, A; Nikolaenko, V; Nikolaev, K; Nikolic-Audit, I; Nikolics, K; Nikolopoulos, K; Nilsen, H; Nilsson, P; Ninomiya, Y; Nisati, A; Nishiyama, T; Nisius, R; Nodulman, L; Nomachi, M; Nomidis, I; Nordberg, M; Nordkvist, B; Norton, P R; Novakova, J; Nozaki, M; Nozka, L; Nugent, I M; Nuncio-Quiroz, A-E; Nunes Hanninger, G; Nunnemann, T; Nurse, E; O'Brien, B J; O'Neale, S W; O'Neil, D C; O'Shea, V; Oakes, L B; Oakham, F G; Oberlack, H; Ocariz, J; Ochi, A; Oda, S; Odaka, S; Odier, J; Ogren, H; Oh, A; Oh, S H; Ohm, C C; Ohshima, T; Ohshita, H; Ohsugi, T; Okada, S; Okawa, H; Okumura, Y; Okuyama, T; Olariu, A; Olcese, M; Olchevski, A G; Oliveira, M; Oliveira Damazio, D; Oliver Garcia, E; Olivito, D; Olszewski, A; Olszowska, J; Omachi, C; Onofre, A; Onyisi, P U E; Oram, C J; Oreglia, M J; Oren, Y; Orestano, D; Orlov, I; Oropeza Barrera, C; Orr, R S; Osculati, B; Ospanov, R; Osuna, C; Otero Y Garzon, G; Ottersbach, J P; Ouchrif, M; Ouellette, E A; Ould-Saada, F; Ouraou, A; Ouyang, Q; Ovcharova, A; Owen, M; Owen, S; Ozcan, V E; Ozturk, N; Pacheco Pages, A; Padilla Aranda, C; Pagan Griso, S; Paganis, E; Paige, F; Pais, P; Pajchel, K; Palacino, G; Paleari, C P; Palestini, S; Pallin, D; Palma, A; Palmer, J D; Pan, Y B; Panagiotopoulou, E; Panes, B; Panikashvili, N; Panitkin, S; Pantea, D; Panuskova, M; Paolone, V; Papadelis, A; Papadopoulou, Th D; Paramonov, A; Park, W; Parker, M A; Parodi, F; Parsons, J A; 
Parzefall, U; Pasqualucci, E; Passaggio, S; Passeri, A; Pastore, F; Pastore, Fr; Pásztor, G; Pataraia, S; Patel, N; Pater, J R; Patricelli, S; Pauly, T; Pecsy, M; Pedraza Morales, M I; Peleganchuk, S V; Peng, H; Pengo, R; Penson, A; Penwell, J; Perantoni, M; Perez, K; Perez Cavalcanti, T; Perez Codina, E; Pérez García-Estañ, M T; Perez Reale, V; Perini, L; Pernegger, H; Perrino, R; Perrodo, P; Persembe, S; Perus, A; Peshekhonov, V D; Peters, K; Petersen, B A; Petersen, J; Petersen, T C; Petit, E; Petridis, A; Petridou, C; Petrolo, E; Petrucci, F; Petschull, D; Petteni, M; Pezoa, R; Phan, A; Phillips, P W; Piacquadio, G; Piccaro, E; Piccinini, M; Piec, S M; Piegaia, R; Pignotti, D T; Pilcher, J E; Pilkington, A D; Pina, J; Pinamonti, M; Pinder, A; Pinfold, J L; Ping, J; Pinto, B; Pirotte, O; Pizio, C; Plamondon, M; Pleier, M-A; Pleskach, A V; Poblaguev, A; Poddar, S; Podlyski, F; Poggioli, L; Poghosyan, T; Pohl, M; Polci, F; Polesello, G; Policicchio, A; Polini, A; Poll, J; Polychronakos, V; Pomarede, D M; Pomeroy, D; Pommès, K; Pontecorvo, L; Pope, B G; Popeneciu, G A; Popovic, D S; Poppleton, A; Portell Bueso, X; Posch, C; Pospelov, G E; Pospisil, S; Potrap, I N; Potter, C J; Potter, C T; Poulard, G; Poveda, J; Prabhu, R; Pralavorio, P; Pranko, A; Prasad, S; Pravahan, R; Prell, S; Pretzl, K; Pribyl, L; Price, D; Price, J; Price, L E; Price, M J; Prieur, D; Primavera, M; Prokofiev, K; Prokoshin, F; Protopopescu, S; Proudfoot, J; Prudent, X; Przybycien, M; Przysiezniak, H; Psoroulas, S; Ptacek, E; Pueschel, E; Purdham, J; Purohit, M; Puzo, P; Pylypchenko, Y; Qian, J; Qian, Z; Qin, Z; Quadt, A; Quarrie, D R; Quayle, W B; Quinonez, F; Raas, M; Radescu, V; Radics, B; Radloff, P; Rador, T; Ragusa, F; Rahal, G; Rahimi, A M; Rahm, D; Rajagopalan, S; Rammensee, M; Rammes, M; Randle-Conde, A S; Randrianarivony, K; Ratoff, P N; Rauscher, F; Raymond, M; Read, A L; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reeves, K; Reichold, A; Reinherz-Aronis, E; Reinsch, A; 
Reisinger, I; Reljic, D; Rembser, C; Ren, Z L; Renaud, A; Renkel, P; Rescigno, M; Resconi, S; Resende, B; Reznicek, P; Rezvani, R; Richards, A; Richter, R; Richter-Was, E; Ridel, M; Rijpstra, M; Rijssenbeek, M; Rimoldi, A; Rinaldi, L; Rios, R R; Riu, I; Rivoltella, G; Rizatdinova, F; Rizvi, E; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robinson, M; Robson, A; Rocha de Lima, J G; Roda, C; Roda Dos Santos, D; Rodriguez, D; Roe, A; Roe, S; Røhne, O; Rojo, V; Rolli, S; Romaniouk, A; Romano, M; Romanov, V M; Romeo, G; Romero Adam, E; Roos, L; Ros, E; Rosati, S; Rosbach, K; Rose, A; Rose, M; Rosenbaum, G A; Rosenberg, E I; Rosendahl, P L; Rosenthal, O; Rosselet, L; Rossetti, V; Rossi, E; Rossi, L P; Rotaru, M; Roth, I; Rothberg, J; Rousseau, D; Royon, C R; Rozanov, A; Rozen, Y; Ruan, X; Rubinskiy, I; Ruckert, B; Ruckstuhl, N; Rud, V I; Rudolph, C; Rudolph, G; Rühr, F; Ruggieri, F; Ruiz-Martinez, A; Rumiantsev, V; Rumyantsev, L; Runge, K; Rurikova, Z; Rusakovich, N A; Rust, D R; Rutherfoord, J P; Ruwiedel, C; Ruzicka, P; Ryabov, Y F; Ryadovikov, V; Ryan, P; Rybar, M; Rybkin, G; Ryder, N C; Rzaeva, S; Saavedra, A F; Sadeh, I; Sadrozinski, H F-W; Sadykov, R; Safai Tehrani, F; Sakamoto, H; Salamanna, G; Salamon, A; Saleem, M; Salihagic, D; Salnikov, A; Salt, J; Salvachua Ferrando, B M; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sampsonidis, D; Samset, B H; Sanchez, A; Sanchez Martinez, V; Sandaker, H; Sander, H G; Sanders, M P; Sandhoff, M; Sandoval, T; Sandoval, C; Sandstroem, R; Sandvoss, S; Sankey, D P C; Sansoni, A; Santamarina Rios, C; Santoni, C; Santonico, R; Santos, H; Saraiva, J G; Sarangi, T; Sarkisyan-Grinbaum, E; Sarri, F; Sartisohn, G; Sasaki, O; Sasao, N; Satsounkevitch, I; Sauvage, G; Sauvan, E; Sauvan, J B; Savard, P; Savinov, V; Savu, D O; Sawyer, L; Saxon, D H; Says, L P; Sbarra, C; Sbrizzi, A; Scallon, O; Scannicchio, D A; Scarcella, M; Schaarschmidt, J; Schacht, P; Schäfer, U; Schaepe, S; Schaetzel, S; Schaffer, A 
C; Schaile, D; Schamberger, R D; Schamov, A G; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Scherzer, M I; Schiavi, C; Schieck, J; Schioppa, M; Schlenker, S; Schlereth, J L; Schmidt, E; Schmieden, K; Schmitt, C; Schmitt, S; Schmitz, M; Schöning, A; Schott, M; Schouten, D; Schovancova, J; Schram, M; Schroeder, C; Schroer, N; Schuh, S; Schuler, G; Schultens, M J; Schultes, J; Schultz-Coulon, H-C; Schulz, H; Schumacher, J W; Schumacher, M; Schumm, B A; Schune, Ph; Schwanenberger, C; Schwartzman, A; Schwemling, Ph; Schwienhorst, R; Schwierz, R; Schwindling, J; Schwindt, T; Schwoerer, M; Scott, W G; Searcy, J; Sedov, G; Sedykh, E; Segura, E; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Selbach, K E; Seliverstov, D M; Sellden, B; Sellers, G; Seman, M; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Seuster, R; Severini, H; Sevior, M E; Sfyrla, A; Shabalina, E; Shamim, M; Shan, L Y; Shank, J T; Shao, Q T; Shapiro, M; Shatalov, P B; Shaver, L; Shaw, K; Sherman, D; Sherwood, P; Shibata, A; Shichi, H; Shimizu, S; Shimojima, M; Shin, T; Shiyakova, M; Shmeleva, A; Shochet, M J; Short, D; Shrestha, S; Shulga, E; Shupe, M A; Sicho, P; Sidoti, A; Siegert, F; Sijacki, Dj; Silbert, O; Silva, J; Silver, Y; Silverstein, D; Silverstein, S B; Simak, V; Simard, O; Simic, Lj; Simion, S; Simmons, B; Simonyan, M; Sinervo, P; Sinev, N B; Sipica, V; Siragusa, G; Sircar, A; Sisakyan, A N; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skinnari, L A; Skottowe, H P; Skovpen, K; Skubic, P; Skvorodnev, N; Slater, M; Slavicek, T; Sliwa, K; Sloper, J; Smakhtin, V; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smith, B C; Smith, D; Smith, K M; Smizanska, M; Smolek, K; Snesarev, A A; Snow, S W; Snow, J; Snuverink, J; Snyder, S; Soares, M; Sobie, R; Sodomka, J; Soffer, A; Solans, C A; Solar, M; Solc, J; Soldatov, E; Soldevila, U; Solfaroli Camillocci, E; Solodkov, A A; Solovyanov, O V; Soni, N; Sopko, V; Sopko, B; Sosebee, M; Soualah, R; Soukharev, A; Spagnolo, 
S; Spanò, F; Spighi, R; Spigo, G; Spila, F; Spiwoks, R; Spousta, M; Spreitzer, T; Spurlock, B; St Denis, R D; Stahlman, J; Stamen, R; Stanecka, E; Stanek, R W; Stanescu, C; Stapnes, S; Starchenko, E A; Stark, J; Staroba, P; Starovoitov, P; Staude, A; Stavina, P; Stavropoulos, G; Steele, G; Steinbach, P; Steinberg, P; Stekl, I; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stern, S; Stevenson, K; Stewart, G A; Stillings, J A; Stockton, M C; Stoerig, K; Stoicea, G; Stonjek, S; Strachota, P; Stradling, A R; Straessner, A; Strandberg, J; Strandberg, S; Strandlie, A; Strang, M; Strauss, E; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Strong, J A; Stroynowski, R; Strube, J; Stugu, B; Stumer, I; Stupak, J; Sturm, P; Styles, N A; Soh, D A; Su, D; Subramania, Hs; Succurro, A; Sugaya, Y; Sugimoto, T; Suhr, C; Suita, K; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, X; Sundermann, J E; Suruliz, K; Sushkov, S; Susinno, G; Sutton, M R; Suzuki, Y; Suzuki, Y; Svatos, M; Sviridov, Yu M; Swedish, S; Sykora, I; Sykora, T; Szeless, B; Sánchez, J; Ta, D; Tackmann, K; Taffard, A; Tafirout, R; Taiblum, N; Takahashi, Y; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A; Tamsett, M C; Tanaka, J; Tanaka, R; Tanaka, S; Tanaka, S; Tanaka, Y; Tanasijczuk, A J; Tani, K; Tannoury, N; Tappern, G P; Tapprogge, S; Tardif, D; Tarem, S; Tarrade, F; Tartarelli, G F; Tas, P; Tasevsky, M; Tassi, E; Tatarkhanov, M; Tayalati, Y; Taylor, C; Taylor, F E; Taylor, G N; Taylor, W; Teinturier, M; Teixeira Dias Castanheira, M; Teixeira-Dias, P; Temming, K K; Ten Kate, H; Teng, P K; Terada, S; Terashi, K; Terron, J; Testa, M; Teuscher, R J; Thadome, J; Therhaag, J; Theveneaux-Pelzer, T; Thioye, M; Thoma, S; Thomas, J P; Thompson, E N; Thompson, P D; Thompson, P D; Thompson, A S; Thomson, E; Thomson, M; Thun, R P; Tian, F; Tibbetts, M J; Tic, T; Tikhomirov, V O; Tikhonov, Y A; Timoshenko, S; Tipton, P; Tique Aires Viegas, F J; Tisserant, S; Toczek, B; Todorov, 
T; Todorova-Nova, S; Toggerson, B; Tojo, J; Tokár, S; Tokunaga, K; Tokushuku, K; Tollefson, K; Tomoto, M; Tompkins, L; Toms, K; Tong, G; Tonoyan, A; Topfel, C; Topilin, N D; Torchiani, I; Torrence, E; Torres, H; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Trinh, T N; Tripiana, M F; Trischuk, W; Trivedi, A; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trzebinski, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiakiris, M; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsiskaridze, V; Tskhadadze, E G; Tsukerman, I I; Tsulaia, V; Tsung, J-W; Tsuno, S; Tsybychev, D; Tua, A; Tudorache, A; Tudorache, V; Tuggle, J M; Turala, M; Turecek, D; Turk Cakir, I; Turlay, E; Turra, R; Tuts, P M; Tykhonov, A; Tylmad, M; Tyndel, M; Tzanakos, G; Uchida, K; Ueda, I; Ueno, R; Ugland, M; Uhlenbrock, M; Uhrmacher, M; Ukegawa, F; Unal, G; Underwood, D G; Undrus, A; Unel, G; Unno, Y; Urbaniec, D; Usai, G; Uslenghi, M; Vacavant, L; Vacek, V; Vachon, B; Vahsen, S; Valenta, J; Valente, P; Valentinetti, S; Valkar, S; Valladolid Gallego, E; Vallecorsa, S; Valls Ferrer, J A; van der Graaf, H; van der Kraaij, E; Van Der Leeuw, R; van der Poel, E; van der Ster, D; van Eldik, N; van Gemmeren, P; van Kesteren, Z; van Vulpen, I; Vanadia, M; Vandelli, W; Vandoni, G; Vaniachine, A; Vankov, P; Vannucci, F; Varela Rodriguez, F; Vari, R; Varnes, E W; Varouchas, D; Vartapetian, A; Varvell, K E; Vassilakopoulos, V I; Vazeille, F; Vegni, G; Veillet, J J; Vellidis, C; Veloso, F; Veness, R; Veneziano, S; Ventura, A; Ventura, D; Venturi, M; Venturi, N; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinek, E; Vinogradov, V B; Virchaux, M; Virzi, J; Vitells, O; Viti, M; Vivarelli, I; Vives Vaque, F; Vlachos, S; Vladoiu, D; Vlasak, M; Vlasov, N; Vogel, A; Vokac, P; Volpi, G; Volpi, M; 
Volpini, G; von der Schmitt, H; von Loeben, J; von Radziewski, H; von Toerne, E; Vorobel, V; Vorobiev, A P; Vorwerk, V; Vos, M; Voss, R; Voss, T T; Vossebeld, J H; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vu Anh, T; Vuillermet, R; Vukotic, I; Wagner, W; Wagner, P; Wahlen, H; Wakabayashi, J; Walbersloh, J; Walch, S; Walder, J; Walker, R; Walkowiak, W; Wall, R; Waller, P; Wang, C; Wang, H; Wang, H; Wang, J; Wang, J; Wang, J C; Wang, R; Wang, S M; Warburton, A; Ward, C P; Warsinsky, M; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, A T; Waugh, B M; Weber, M; Weber, M S; Weber, P; Weidberg, A R; Weigell, P; Weingarten, J; Weiser, C; Wellenstein, H; Wells, P S; Wen, M; Wenaus, T; Wendler, S; Weng, Z; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Werth, M; Wessels, M; Weydert, C; Whalen, K; Wheeler-Ellis, S J; Whitaker, S P; White, A; White, M J; Whitehead, S R; Whiteson, D; Whittington, D; Wicek, F; Wicke, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wijeratne, P A; Wildauer, A; Wildt, M A; Wilhelm, I; Wilkens, H G; Will, J Z; Williams, E; Williams, H H; Willis, W; Willocq, S; Wilson, J A; Wilson, M G; Wilson, A; Wingerter-Seez, I; Winkelmann, S; Winklmeier, F; Wittgen, M; Wolter, M W; Wolters, H; Wong, W C; Wooden, G; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wraight, K; Wright, C; Wright, M; Wrona, B; Wu, S L; Wu, X; Wu, Y; Wulf, E; Wunstorf, R; Wynne, B M; Xella, S; Xiao, M; Xie, S; Xie, Y; Xu, C; Xu, D; Xu, G; Yabsley, B; Yacoob, S; Yamada, M; Yamaguchi, H; Yamamoto, A; Yamamoto, K; Yamamoto, S; Yamamura, T; Yamanaka, T; Yamaoka, J; Yamazaki, T; Yamazaki, Y; Yan, Z; Yang, H; Yang, U K; Yang, Y; Yang, Y; Yang, Z; Yanush, S; Yao, Y; Yasu, Y; Ybeles Smit, G V; Ye, J; Ye, S; Yilmaz, M; Yoosoofmiya, R; Yorita, K; Yoshida, R; Young, C; Youssef, S; Yu, D; Yu, J; Yu, J; Yuan, L; Yurkewicz, A; Zabinski, B; Zaets, V G; Zaidan, R; Zaitsev, A M; Zajacova, 
Z; Zanello, L; Zarzhitsky, P; Zaytsev, A; Zeitnitz, C; Zeller, M; Zeman, M; Zemla, A; Zendler, C; Zenin, O; Ženiš, T; Zinonos, Z; Zenz, S; Zerwas, D; Zevi Della Porta, G; Zhan, Z; Zhang, D; Zhang, H; Zhang, J; Zhang, X; Zhang, Z; Zhao, L; Zhao, T; Zhao, Z; Zhemchugov, A; Zheng, S; Zhong, J; Zhou, B; Zhou, N; Zhou, Y; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zhuravlov, V; Zieminska, D; Zimmermann, R; Zimmermann, S; Zimmermann, S; Ziolkowski, M; Zitoun, R; Živković, L; Zmouchko, V V; Zobernig, G; Zoccoli, A; Zolnierowski, Y; Zsenei, A; Zur Nedden, M; Zutshi, V; Zwalinski, L

    The top quark mass has been measured using the template method in the [Formula: see text] channel based on data recorded in 2011 with the ATLAS detector at the LHC. The data were taken at a proton-proton centre-of-mass energy of [Formula: see text] and correspond to an integrated luminosity of 1.04 fb⁻¹. The analyses in the e+jets and μ+jets decay channels yield consistent results. The top quark mass is measured to be m_top = 174.5 ± 0.6 (stat.) ± 2.3 (syst.) GeV.
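Since the per-channel results are consistent, they can be combined; a minimal sketch of an inverse-variance weighted combination is below. The per-channel values are hypothetical placeholders, not the published ATLAS numbers (the abstract quotes only the combined result):

```python
def combine(measurements):
    """Combine (value, stat_uncertainty) pairs by inverse-variance weighting."""
    weights = [1.0 / sigma ** 2 for _, sigma in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    # Combined statistical uncertainty is 1/sqrt(sum of weights).
    return value, total ** -0.5

# Hypothetical per-channel measurements (m_top, stat. uncertainty) in GeV:
channels = [(174.8, 0.9), (174.3, 0.8)]
m_top, stat = combine(channels)
```

The combined value lands between the channel values, pulled toward the more precise channel, with a smaller statistical uncertainty than either input.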

  4. Large scale preparation and crystallization of neuron-specific enolase.

    PubMed

    Ishioka, N; Isobe, T; Kadoya, T; Okuyama, T; Nakajima, T

    1984-03-01

    A simple method has been developed for the large scale purification of neuron-specific enolase [EC 4.2.1.11]. The method consists of ammonium sulfate fractionation of brain extract, and two subsequent column chromatography steps on DEAE Sephadex A-50. The chromatography was performed on a short (25 cm height) and thick (8.5 cm inside diameter) column unit that was specially devised for the large scale preparation. The purified enolase was crystallized in 0.05 M imidazole-HCl buffer containing 1.6 M ammonium sulfate (pH 6.39), with a yield of 0.9 g/kg of bovine brain tissue.

  5. Collaborative simulations and experiments for a novel yield model of coal devolatilization in oxy-coal combustion conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iavarone, Salvatore; Smith, Sean T.; Smith, Philip J.

    Oxy-coal combustion is an emerging low-cost “clean coal” technology for emissions reduction and Carbon Capture and Sequestration (CCS). The use of Computational Fluid Dynamics (CFD) tools is crucial for the development of cost-effective oxy-fuel technologies and the minimization of environmental concerns at industrial scale. The coupling of detailed chemistry models and CFD simulations is still challenging, especially for large-scale plants, because of the high computational effort required. The development of scale-bridging models is therefore necessary to find a good compromise between computational effort and physical-chemical modeling accuracy. This paper presents a procedure for scale-bridging modeling of coal devolatilization in the presence of experimental error that puts emphasis on the thermodynamic aspect of devolatilization, namely the final volatile yield of coal, rather than kinetics. The procedure consists of an engineering approach based on dataset consistency and a Bayesian methodology including Gaussian-Process Regression (GPR). Experimental data from devolatilization tests carried out in an oxy-coal entrained flow reactor were considered, and CFD simulations of the reactor were performed. Jointly evaluating experiments and simulations, a novel yield model was validated against the data via consistency analysis. In parallel, a Gaussian-Process Regression was performed to improve the understanding of the uncertainty associated with devolatilization, based on the experimental measurements. Potential model forms that could predict yield during devolatilization were obtained. The set of model forms obtained via GPR includes the yield model that was proven consistent with the data. Finally, the overall procedure has resulted in a novel yield model for coal devolatilization and in a valuable evaluation of uncertainty in the data, in the model form, and in the model parameters.
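The GPR step described above can be sketched with a minimal NumPy implementation of a Gaussian-process posterior under an RBF kernel. The temperature/yield points below are invented for illustration, and the kernel hyperparameters are arbitrary, not the study's:

```python
import numpy as np

def rbf(a, b, length=0.1, variance=1.0):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

def gpr_posterior(x_train, y_train, x_test, noise=1e-2):
    """GP posterior mean and pointwise standard deviation at x_test."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf(x_train, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    cov = rbf(x_test, x_test) - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Hypothetical data: scaled reactor temperature vs volatile yield fraction.
x = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
y = np.array([0.20, 0.35, 0.55, 0.62, 0.65])
mean, std = gpr_posterior(x, y, np.array([0.4, 0.8]))
```

The posterior standard deviation is what makes GPR useful here: it quantifies how uncertain the predicted yield is between measured conditions.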

  6. Collaborative simulations and experiments for a novel yield model of coal devolatilization in oxy-coal combustion conditions

    DOE PAGES

    Iavarone, Salvatore; Smith, Sean T.; Smith, Philip J.; ...

    2017-06-03

    Oxy-coal combustion is an emerging low-cost “clean coal” technology for emissions reduction and Carbon Capture and Sequestration (CCS). The use of Computational Fluid Dynamics (CFD) tools is crucial for the development of cost-effective oxy-fuel technologies and the minimization of environmental concerns at industrial scale. The coupling of detailed chemistry models and CFD simulations is still challenging, especially for large-scale plants, because of the high computational effort required. The development of scale-bridging models is therefore necessary to find a good compromise between computational effort and physical-chemical modeling accuracy. This paper presents a procedure for scale-bridging modeling of coal devolatilization in the presence of experimental error that puts emphasis on the thermodynamic aspect of devolatilization, namely the final volatile yield of coal, rather than kinetics. The procedure consists of an engineering approach based on dataset consistency and a Bayesian methodology including Gaussian-Process Regression (GPR). Experimental data from devolatilization tests carried out in an oxy-coal entrained flow reactor were considered, and CFD simulations of the reactor were performed. Jointly evaluating experiments and simulations, a novel yield model was validated against the data via consistency analysis. In parallel, a Gaussian-Process Regression was performed to improve the understanding of the uncertainty associated with devolatilization, based on the experimental measurements. Potential model forms that could predict yield during devolatilization were obtained. The set of model forms obtained via GPR includes the yield model that was proven consistent with the data. Finally, the overall procedure has resulted in a novel yield model for coal devolatilization and in a valuable evaluation of uncertainty in the data, in the model form, and in the model parameters.

  7. Meteorological fluctuations define long-term crop yield patterns in conventional and organic production systems

    USDA-ARS?s Scientific Manuscript database

    Periodic variability in meteorological patterns presents significant challenges to crop production consistency and yield stability. Meteorological influences on corn and soybean grain yields were analyzed over an 18-year period at a long-term experiment in Beltsville, Maryland, U.S.A., comparing c...

  8. A comparison of radiometric correction techniques in the evaluation of the relationship between LST and NDVI in Landsat imagery.

    PubMed

    Tan, Kok Chooi; Lim, Hwee San; Matjafri, Mohd Zubir; Abdullah, Khiruddin

    2012-06-01

    Atmospheric corrections for multi-temporal optical satellite images are necessary, especially in change detection analyses, such as normalized difference vegetation index (NDVI) rationing. Abrupt change detection analysis using remote-sensing techniques requires radiometric congruity and atmospheric correction to monitor terrestrial surfaces over time. Two atmospheric correction methods were used for this study: relative radiometric normalization and the simplified method for atmospheric correction (SMAC) in the solar spectrum. A multi-temporal data set consisting of two sets of Landsat images from the period between 1991 and 2002 of Penang Island, Malaysia, was used to compare NDVI maps generated using the proposed atmospheric correction methods. Land surface temperature (LST) was retrieved using ATCOR3_T in PCI Geomatica 10.1 image processing software. Linear regression analysis was used to analyze the relationship between NDVI and LST. This study reveals that both of the proposed atmospheric correction methods yielded high accuracy, as shown by the linear correlation coefficients. To check the accuracy of the equation obtained through linear regression analysis for each satellite image, 20 points were randomly chosen. The results showed that the SMAC method yielded a consistent error when predicting NDVI values from the equation derived by linear regression analysis. The average errors from both proposed atmospheric correction methods were less than 10%.
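The two analysis steps named above, NDVI rationing from red/near-infrared reflectance and a linear fit of LST against NDVI, can be sketched as follows. The reflectance and temperature values are synthetic placeholders, not Landsat data:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

# Synthetic surface reflectances for five pixels.
nir = np.array([0.45, 0.50, 0.40, 0.30, 0.25])
red = np.array([0.10, 0.08, 0.15, 0.20, 0.22])
v = ndvi(nir, red)

# Synthetic land surface temperatures (K) for the same pixels:
# vegetated (high-NDVI) pixels are cooler, so the fitted slope is negative.
lst = np.array([298.0, 296.5, 301.0, 304.5, 306.0])
slope, intercept = np.polyfit(v, lst, 1)  # LST = slope * NDVI + intercept
```

`np.polyfit` returns coefficients from highest degree down, so the first element is the slope of the LST-NDVI regression line.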

  9. Microgravity processing of particulate reinforced metal matrix composites

    NASA Technical Reports Server (NTRS)

    Morel, Donald E.; Stefanescu, Doru M.; Curreri, Peter A.

    1989-01-01

    The elimination of such gravity-related effects as buoyancy-driven sedimentation can yield more homogeneous microstructures in composite materials whose individual constituents have widely differing densities. A comparison of composite samples consisting of particulate ceramics in a nickel aluminide matrix solidified under gravity levels ranging from 0.01 to 1.8 G indicates that the G force normal to the growth direction plays a fundamental role in determining the distribution of the reinforcement in the matrix. Composites with extremely uniform microstructures can be produced by these methods.

  10. [Free from stress by autogenic therapy. Relaxation technique yielding peace of mind and self-insight].

    PubMed

    Broms, C

    1999-02-10

    The utilisation of self-regulatory capacity is one of the purposes of autogenic therapy, a method consisting of exercises focused on the limbs, lungs, heart, diaphragm and head. The physiological response is muscle relaxation, increased peripheral blood flow, lower heart rate and blood pressure, slower and deeper breathing, and reduced oxygen consumption. Autogenic training is applicable in most pathological conditions associated with stress, and can be used preventively or as a complement to conventional treatment.

  11. Transistor-like behavior of single metalloprotein junctions.

    PubMed

    Artés, Juan M; Díez-Pérez, Ismael; Gorostiza, Pau

    2012-06-13

    Single protein junctions consisting of azurin bridged between a gold substrate and the probe of an electrochemical tunneling microscope (ECSTM) have been obtained by two independent methods that allowed statistical analysis over a large number of measured junctions. Conductance measurements yield (7.3 ± 1.5) × 10⁻⁶ G₀, in agreement with reported estimates using other techniques. Redox gating of the protein with an on/off ratio of 20 was demonstrated and constitutes a proof-of-principle of a single redox protein field-effect transistor.

  12. The effect of lactation number, stage, length, and milking frequency on milk yield in Korean Holstein dairy cows using automatic milking system

    PubMed Central

    Vijayakumar, Mayakrishnan; Park, Ji Hoo; Ki, Kwang Seok; Lim, Dong Hyun; Kim, Sang Bum; Park, Seong Min; Jeong, Ha Yeon; Park, Beom Young; Kim, Tae Il

    2017-01-01

    Objective The aim of the current study was to describe the relationship between milk yield and lactation number, stage, length, and milking frequency in Korean Holstein dairy cows using an automatic milking system (AMS). Methods The original data set consisted of observations from April to October 2016 of 780 Holstein cows, with a total of 10,751 milkings. Each time a cow was milked by an AMS during the 24 h, the AMS management system recorded the identification numbers of the AMS unit and of the cow being milked, the date and time of the milking, the milk yield (kg) as measured by the milk meters installed on each AMS unit, the lactation number, the lactation stage, and the milking frequency. Lactation stage is defined as the number of days in milk per cow per lactation. Milk yield was calculated per udder quarter in the AMS and added as one record per cow and trait for each milking. Milking frequency was measured as the number of milkings per cow per 24 hours. Results A significant relationship was found between milk yield and lactation number (p<0.001), with the maximum milk yield occurring in third-lactation cows. The highest milk yield was recorded in the early stage of lactation (55 to 90 days) at a milking frequency of 4× per day, and the lowest milk yield was observed in the late stage (>201 days). Milking frequency also had a significant influence on milk yield (p<0.001) in Korean Holstein cows using AMS. Conclusion Detailed knowledge of factors such as lactation number, stage, length, and milking frequency associated with increased milk yield using AMS will help guide future recommendations to producers for maximizing milk yield in the Korean dairy industry. PMID:28423887
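The kind of grouping behind the reported lactation-number effect can be sketched as a mean-yield-per-group aggregation over AMS-style milking records. The records below are invented for illustration, not the study's data:

```python
from collections import defaultdict

# Invented (lactation_number, milk_yield_kg_per_milking) records.
records = [
    (1, 9.5), (1, 10.1),
    (2, 11.2), (2, 11.8),
    (3, 13.0), (3, 12.6),
    (4, 12.1),
]

# Accumulate sum and count of yields per lactation number.
totals = defaultdict(lambda: [0.0, 0])
for lact, yield_kg in records:
    totals[lact][0] += yield_kg
    totals[lact][1] += 1

mean_yield = {lact: s / n for lact, (s, n) in totals.items()}
peak = max(mean_yield, key=mean_yield.get)  # lactation number with highest mean yield
```

With these invented numbers the mean yield peaks at the third lactation, mirroring the pattern the abstract reports.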

  13. Statistical approaches to lifetime measurements with restricted observation times

    NASA Astrophysics Data System (ADS)

    Chen, X. C.; Zeng, Q.; Litvinov, Yu. A.; Tu, X. L.; Walker, P. M.; Wang, M.; Wang, Q.; Yue, K.; Zhang, Y. H.

    2017-09-01

    Two generic methods based on frequentism and Bayesianism are presented in this work, aiming to adequately estimate decay lifetimes from measured data while accounting for restricted observation times in the measurements. All the experimental scenarios that can possibly arise from the observation constraints are treated systematically and formulas are derived. The methods are then tested against the decay data of bare isomeric ⁹⁴ᵐRu⁴⁴⁺ ions, which were measured using isochronous mass spectrometry with a timing detector at the CSRe in Lanzhou, China. Applying both methods in three distinct scenarios yields six different but consistent lifetime estimates. The deduced values are all in good agreement with a prediction based on the neutral-atom value, modified to take the absence of internal conversion into account. Potential applications of such methods are discussed.
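The frequentist side of the problem can be sketched for the simplest restricted-observation scenario: decays are only seen inside a window [0, T], so the exponential density must be renormalized over the window before maximizing the likelihood. This is a generic textbook construction, not the paper's formulas; the decay times are synthetic:

```python
import math

def neg_log_likelihood(tau, times, T):
    """NLL of decay times under an exponential truncated to the window [0, T]."""
    norm = 1.0 - math.exp(-T / tau)  # probability of decaying inside the window
    return sum(math.log(tau) + t / tau + math.log(norm) for t in times)

def fit_lifetime(times, T, lo=0.1, hi=50.0, steps=2000):
    """Simple grid search for the tau minimizing the truncated-exponential NLL."""
    taus = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
    return min(taus, key=lambda tau: neg_log_likelihood(tau, times, T))

times = [0.4, 1.1, 0.7, 2.3, 0.2, 1.8, 0.9, 3.0, 0.5, 1.4]  # synthetic decays
T = 4.0
tau_hat = fit_lifetime(times, T)
```

Because long-lived decays fall outside the window, the naive sample mean underestimates the lifetime; the truncated-likelihood estimate comes out above the sample mean of 1.23.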

  14. On the interfacial thermodynamics of nanoscale droplets and bubbles

    NASA Astrophysics Data System (ADS)

    Corti, David S.; Kerr, Karl J.; Torabi, Korosh

    2011-07-01

    We present a new self-consistent thermodynamic formalism for the interfacial properties of nanoscale embryos whose interiors do not exhibit bulklike behavior and are in complete equilibrium with the surrounding mother phase. In contrast to the standard Gibbsian analysis, whereby a bulk reference pressure based on the same temperature and chemical potentials of the mother phase is introduced, our approach naturally incorporates the normal pressure at the center of the embryo as an appropriate reference pressure. While the interfacial properties of small embryos that follow from the use of these two reference pressures are different, both methods yield by construction the same reversible work of embryo formation as well as consistency between their respective thermodynamic and mechanical routes to the surface tension. Hence, there is no a priori reason to select one method over another. Nevertheless, we argue, and demonstrate via a density-functional theory (with the local density approximation) analysis of embryo formation in the pure component Lennard-Jones fluid, that our new method generates more physically appealing trends. For example, within the new approach the surface tension at all locations of the dividing surface vanishes at the spinodal where the density profile spanning the embryo and mother phase becomes completely uniform (only the surface tension at the Gibbs surface of tension vanishes in the Gibbsian method at this same limit). Also, for bubbles, the location of the surface of tension now diverges at the spinodal, similar to the divergent behavior exhibited by the equimolar dividing surface (in the Gibbsian method, the location of the surface of tension vanishes instead). For droplets, the new method allows for the appearance of negative surface tensions (the Gibbsian method always yields positive tensions) when the normal pressures within the interior of the embryo become less than the bulk pressure of the surrounding vapor phase. 
Such a prediction, which is allowed by thermodynamics, is consistent with the interpretation that the mother phase's attempted compression of the droplet is counterbalanced by the negative surface tension, or free-energy cost of decreasing the interfacial area. Furthermore, for these same droplets, the surface of tension can no longer be meaningfully defined (the surface of tension always remains well defined in the Gibbsian method). Within the new method, the dividing surface at which the surface tension equals zero emerges as a new length scale, with various thermodynamic analogs to, and behavior similar to, the surface of tension.

  15. An Evaluation of Two Methods for Generating Synthetic HL7 Segments Reflecting Real-World Health Information Exchange Transactions

    PubMed Central

    Mwogi, Thomas S.; Biondich, Paul G.; Grannis, Shaun J.

    2014-01-01

    Motivated by the need for readily available data for testing an open-source health information exchange platform, we developed and evaluated two methods for generating synthetic messages. The methods used HL7 version 2 messages obtained from the Indiana Network for Patient Care. Data from both methods were analyzed to assess how effectively the output reflected original ‘real-world’ data. The Markov Chain method (MCM) used an algorithm based on a transition probability matrix, while the Music Box model (MBM) randomly selected messages of a particular trigger type from the original data to generate new messages. The MBM was faster and generated shorter messages with less variation in message length. The MCM required more computational power and generated longer messages with more message length variability. Both methods exhibited adequate coverage, producing a high proportion of messages consistent with original messages. Both methods yielded similar rates of valid messages. PMID:25954458
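As a rough illustration of the MCM idea (using a toy corpus, not INPC data; the segment sequences below are invented for demonstration), one can estimate a first-order transition probability matrix over HL7 segment types and random-walk it to synthesize new segment layouts:

```python
import random
from collections import defaultdict

# Toy corpus of HL7 v2 segment-type sequences (MSH = message header,
# PID = patient identification, PV1 = patient visit, OBR = observation
# request, OBX = observation result).
corpus = [
    ["MSH", "PID", "OBR", "OBX", "OBX"],
    ["MSH", "PID", "OBR", "OBX"],
    ["MSH", "PID", "PV1", "OBR", "OBX", "OBX", "OBX"],
]

def build_transitions(sequences):
    """Estimate a first-order transition probability matrix from the corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(["START"] + seq, seq + ["END"]):
            counts[a][b] += 1
    probs = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        probs[a] = {b: c / total for b, c in nxt.items()}
    return probs

def sample_message(trans, rng):
    """Random-walk the chain from START to END to synthesize a segment layout."""
    state, out = "START", []
    while True:
        nxt = trans[state]
        state = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        if state == "END":
            return out
        out.append(state)

rng = random.Random(7)
trans = build_transitions(corpus)
msg = sample_message(trans, rng)
```

Because every training sequence begins with MSH, every synthesized message does too; the abstract's observation that MCM output is longer and more variable follows from the chain's freedom to repeat high-probability loops such as OBX → OBX.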

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stark, Christopher C.; Roberge, Aki; Mandell, Avi

    ExoEarth yield is a critical science metric for future exoplanet imaging missions. Here we estimate exoEarth candidate yield using single visit completeness for a variety of mission design and astrophysical parameters. We review the methods used in previous yield calculations and show that the method choice can significantly impact yield estimates as well as how the yield responds to mission parameters. We introduce a method, called Altruistic Yield Optimization, that optimizes the target list and exposure times to maximize mission yield, adapts maximally to changes in mission parameters, and increases exoEarth candidate yield by up to 100% compared to previous methods. We use Altruistic Yield Optimization to estimate exoEarth candidate yield for a large suite of mission and astrophysical parameters using single visit completeness. We find that exoEarth candidate yield is most sensitive to telescope diameter, followed by coronagraph inner working angle, followed by coronagraph contrast, and finally coronagraph contrast noise floor. We find a surprisingly weak dependence of exoEarth candidate yield on exozodi level. Additionally, we provide a quantitative approach to defining a yield goal for future exoEarth-imaging missions.

  17. Development of a biologically based fertilizer, incorporating Bacillus megaterium A6, for improved phosphorus nutrition of oilseed rape.

    PubMed

    Hu, Xiaojia; Roberts, Daniel P; Xie, Lihua; Maul, Jude E; Yu, Changbing; Li, Yinshui; Zhang, Shujie; Liao, Xing

    2013-04-01

    Sustainable methods with diminished impact on the environment need to be developed for the production of oilseed rape in China and other regions of the world. A biological fertilizer consisting of Bacillus megaterium A6 cultured on oilseed rape meal improved oilseed rape seed yield (P < 0.0001) relative to the nontreated control in 2 greenhouse pot experiments using natural soil. This treatment resulted in slightly greater yield than oilseed rape meal without strain A6 in 1 of 2 experiments, suggesting a role for strain A6 in improving yield. Strain A6 was capable of solubilizing phosphorus from rock phosphate in liquid culture and produced enzymes capable of mineralizing organic phosphorus (acid phosphatase, phytase) in liquid culture and in the biological fertilizer. The biologically based fertilizer, containing strain A6, improved plant phosphorus nutrition in greenhouse pot experiments, resulting in significantly greater available phosphorus in natural soil and significantly greater plant phosphorus content relative to the nontreated control. Seed yield and available phosphorus in natural soil were significantly greater with a phosphorus-reduced synthetic chemical fertilizer treatment than with the biological fertilizer treatment, but a treatment combining the biological fertilizer with the synthetic fertilizer provided the greatest seed yield, available phosphorus in natural soil, and plant phosphorus content. These results suggest that the biological fertilizer was capable of improving oilseed rape seed yield, at least in part, through the phosphorus-solubilizing activity of B. megaterium A6.

  18. A Method for Preparing DNA Sequencing Templates Using a DNA-Binding Microplate

    PubMed Central

    Yang, Yu; Hebron, Haroun R.; Hang, Jun

    2009-01-01

    A DNA-binding matrix was immobilized on the surface of a 96-well microplate and used for plasmid DNA preparation for DNA sequencing. The same DNA-binding plate was used for bacterial growth, cell lysis, DNA purification, and storage. In a single step using one buffer, bacterial cells were lysed by enzymes, and released DNA was captured on the plate simultaneously. After two wash steps, DNA was eluted and stored in the same plate. Inclusion of phosphates in the culture medium was found to enhance the yield of plasmid significantly. Purified DNA samples were used successfully in DNA sequencing with high consistency and reproducibility. Eleven vectors and nine libraries were tested using this method. In 10 μl sequencing reactions using 3 μl sample and 0.25 μl BigDye Terminator v3.1, the results from a 3730xl sequencer gave a success rate of 90–95% and read-lengths of 700 bases or more. The method is fully automatable and convenient for manual operation as well. It enables reproducible, high-throughput, rapid production of DNA with purity and yields sufficient for high-quality DNA sequencing at a substantially reduced cost. PMID:19568455

  19. A modular method for the extraction of DNA and RNA, and the separation of DNA pools from diverse environmental sample types

    PubMed Central

    Lever, Mark A.; Torti, Andrea; Eickenbusch, Philip; Michaud, Alexander B.; Šantl-Temkiv, Tina; Jørgensen, Bo Barker

    2015-01-01

    A method for the extraction of nucleic acids from a wide range of environmental samples was developed. This method consists of several modules, which can be individually modified to maximize yields in extractions of DNA and RNA or separations of DNA pools. Modules were designed based on elaborate tests, in which permutations of all nucleic acid extraction steps were compared. The final modular protocol is suitable for extractions from igneous rock, air, water, and sediments. Sediments range from high-biomass, organic rich coastal samples to samples from the most oligotrophic region of the world's oceans and the deepest borehole ever studied by scientific ocean drilling. Extraction yields of DNA and RNA are higher than with widely used commercial kits, indicating an advantage to optimizing extraction procedures to match specific sample characteristics. The ability to separate soluble extracellular DNA pools without cell lysis from intracellular and particle-complexed DNA pools may enable new insights into the cycling and preservation of DNA in environmental samples in the future. A general protocol is outlined, along with recommendations for optimizing this general protocol for specific sample types and research goals. PMID:26042110

  20. Evolution of learning strategies in temporally and spatially variable environments: A review of theory

    PubMed Central

    Aoki, Kenichi; Feldman, Marcus W.

    2013-01-01

    The theoretical literature from 1985 to the present on the evolution of learning strategies in variable environments is reviewed, with the focus on deterministic dynamical models that are amenable to local stability analysis, and on deterministic models yielding evolutionarily stable strategies. Individual learning, unbiased and biased social learning, mixed learning, and learning schedules are considered. A rapidly changing environment or frequent migration in a spatially heterogeneous environment favors individual learning over unbiased social learning. However, results are not so straightforward in the context of learning schedules or when biases in social learning are introduced. The three major methods of modeling temporal environmental change – coevolutionary, two-timescale, and information decay – are compared and shown to sometimes yield contradictory results. The so-called Rogers’ paradox is inherent in the two-timescale method as originally applied to the evolution of pure strategies, but is often eliminated when the other methods are used. Moreover, Rogers’ paradox is not observed for the mixed learning strategies and learning schedules that we review. We believe that further theoretical work is necessary on learning schedules and biased social learning, based on models that are logically consistent and empirically pertinent. PMID:24211681

  1. Evolution of learning strategies in temporally and spatially variable environments: a review of theory.

    PubMed

    Aoki, Kenichi; Feldman, Marcus W

    2014-02-01

    The theoretical literature from 1985 to the present on the evolution of learning strategies in variable environments is reviewed, with the focus on deterministic dynamical models that are amenable to local stability analysis, and on deterministic models yielding evolutionarily stable strategies. Individual learning, unbiased and biased social learning, mixed learning, and learning schedules are considered. A rapidly changing environment or frequent migration in a spatially heterogeneous environment favors individual learning over unbiased social learning. However, results are not so straightforward in the context of learning schedules or when biases in social learning are introduced. The three major methods of modeling temporal environmental change--coevolutionary, two-timescale, and information decay--are compared and shown to sometimes yield contradictory results. The so-called Rogers' paradox is inherent in the two-timescale method as originally applied to the evolution of pure strategies, but is often eliminated when the other methods are used. Moreover, Rogers' paradox is not observed for the mixed learning strategies and learning schedules that we review. We believe that further theoretical work is necessary on learning schedules and biased social learning, based on models that are logically consistent and empirically pertinent. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machines classification is applied. Then, at each iteration the two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. The important contribution of this work is the estimation of a DC between regions as a function of statistical, classification, and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results when compared to previously proposed methods.

  3. Computational method for determining n and k for a thin film from the measured reflectance, transmittance, and film thickness.

    PubMed

    Bennett, J M; Booty, M J

    1966-01-01

    A computational method of determining n and k for an evaporated film from the measured reflectance, transmittance, and film thickness has been programmed for an IBM 7094 computer. The method consists of modifications to the NOTS multilayer film program. The basic program computes normal incidence reflectance, transmittance, phase change on reflection, and other parameters from the optical constants and thicknesses of all materials. In the modification, n and k for the film are varied in a prescribed manner, and the computer picks from among these values one n and one k which yield reflectance and transmittance values almost equalling the measured values. Results are given for films of silicon and aluminum.
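The search described above can be sketched in modern terms (a hedged reconstruction, not the NOTS program itself): compute R and T for a single film from the standard characteristic-matrix formulas, then scan n and k over a grid and keep the pair that best reproduces the measured values. Note that the (n, k) → (R, T) mapping can be multivalued, which is presumably why the original program varies n and k "in a prescribed manner". The film thickness, wavelength, substrate index, and scan ranges below are illustrative.

```python
import cmath

def film_RT(n, k, d_nm, wl_nm, n_sub=1.52):
    """Normal-incidence reflectance and transmittance of one absorbing film
    on a transparent substrate (standard characteristic-matrix formulas)."""
    N = complex(n, -k)                          # film complex index, n - ik
    delta = 2.0 * cmath.pi * N * d_nm / wl_nm   # phase thickness of the layer
    B = cmath.cos(delta) + 1j * cmath.sin(delta) * n_sub / N
    C = 1j * N * cmath.sin(delta) + cmath.cos(delta) * n_sub
    r = (B - C) / (B + C)                       # incident medium is air, eta0 = 1
    return abs(r) ** 2, 4.0 * n_sub / abs(B + C) ** 2

def fit_nk(R_meas, T_meas, d_nm, wl_nm, steps=200):
    """Scan (n, k) on a grid; keep the pair whose computed R and T
    come closest to the measured values."""
    best, best_err = (None, None), float("inf")
    for i in range(steps + 1):
        n = 1.0 + 2.0 * i / steps               # n scanned over [1, 3]
        for j in range(steps + 1):
            k = 1.0 * j / steps                 # k scanned over [0, 1]
            R, T = film_RT(n, k, d_nm, wl_nm)
            err = (R - R_meas) ** 2 + (T - T_meas) ** 2
            if err < best_err:
                best, best_err = (n, k), err
    return best

# Recover known constants from synthetic "measured" data.
R0, T0 = film_RT(2.0, 0.5, 100.0, 550.0)
n_fit, k_fit = fit_nk(R0, T0, 100.0, 550.0)
```

On synthetic data generated from known constants, the grid scan recovers (n, k) to within one grid step; with real measurements, measurement error and solution multiplicity make the choice of scan range and step the delicate part.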

  4. IRRADIATION METHOD OF CONVERTING ORGANIC COMPOUNDS

    DOEpatents

    Allen, A.O.; Caffrey, J.M. Jr.

    1960-10-11

    A method is given for changing the distribution of organic compounds from that produced by the irradiation of bulk alkane hydrocarbons. This method consists of depositing an alkane hydrocarbon on the surface of a substrate material and irradiating with gamma radiation at a dose rate of more than 100,000 rads. The substrate material may be a metal, a metal salt, a metal oxide, or carbon having a surface area in excess of 1 m²/g. The hydrocarbons are deposited in layers of from 0.1 to 10 monolayers on the surfaces of these substrates and irradiated. The product yields are found to vary from those which result from the irradiation of bulk hydrocarbons in that there is an increase in the quantity of branched hydrocarbons.

  5. A Convenient Approach to Synthesizing Peptide C-Terminal N-Alkyl Amides

    PubMed Central

    Fang, Wei-Jie; Yakovleva, Tatyana; Aldrich, Jane V.

    2014-01-01

    Peptide C-terminal N-alkyl amides have gained more attention over the past decade due to their biological properties, including improved pharmacokinetic and pharmacodynamic profiles. However, the synthesis of this type of peptide on solid phase by currently available methods can be challenging. Here we report a convenient method to synthesize peptide C-terminal N-alkyl amides using the well-known Fukuyama N-alkylation reaction on a standard resin commonly used for the synthesis of peptide C-terminal primary amides, the PAL-PEG-PS (Peptide Amide Linker-polyethylene glycol-polystyrene) resin. The alkylation and oNBS deprotection were conducted under basic conditions and were therefore compatible with this acid-labile resin. The alkylation reaction was very efficient on this resin with a number of different alkyl iodides or bromides, and the synthesis of model enkephalin N-alkyl amide analogs using this method gave consistently high yields and purities, demonstrating the applicability of this methodology. The synthesis of N-alkyl amides was more difficult on a Rink amide resin, especially the coupling of the first amino acid to the N-alkyl amine, resulting in lower yields for loading the first amino acid onto the resin. This method can be widely applied in the synthesis of peptide N-alkyl amides. PMID:22252422

  6. Thidiazuron (TDZ) increases fruit set and yield of 'Hosui' and 'Packham's Triumph' pear trees.

    PubMed

    Pasa, Mateus S; Silva, Carina P DA; Carra, Bruno; Brighenti, Alberto F; Souza, André Luiz K DE; Petri, José Luiz

    2017-01-01

    Low fruit set is one of the main factors leading to poor yield of pear orchards in Brazil. The exogenous application of thidiazuron (TDZ) and aminoethoxyvinylglycine (AVG) has shown promising results in some pear cultivars and other temperate fruit trees. The objective of this study was to evaluate the effect of TDZ and AVG on fruit set, yield, and fruit quality of 'Hosui' and 'Packham's Triumph' pears. The study was performed in a commercial orchard located in São Joaquim, SC. Plant material consisted of 'Hosui' and 'Packham's Triumph' pear trees grafted on Pyrus calleryana. Treatments consisted of different rates of TDZ (0 mg L-1, 20 mg L-1, 40 mg L-1 and 60 mg L-1) sprayed at full bloom for both cultivars. An additional treatment of AVG 60 mg L-1 was sprayed one week after full bloom in 'Hosui'. Fruit set, number of fruit per tree, yield, fruit weight, seed number, and fruit quality attributes were assessed. Fruit set and yield of both cultivars were consistently increased by TDZ at rates of 20 to 60 mg L-1. In addition, TDZ application increased fruit size of 'Hosui' and did not negatively affect fruit quality attributes of either cultivar.

  7. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles

    PubMed Central

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-01-01

    In this paper, a novel iterative sparse extended information filter (ISEIF) was proposed to solve the simultaneous localization and mapping (SLAM) problem, which is very crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. With the scalability advantage being kept, the consistency and accuracy of SEIF are improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates compared to SEIF and preserves the scalability advantage over EKF, as well. PMID:26287194

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bleiziffer, Patrick, E-mail: patrick.bleiziffer@fau.de; Krug, Marcel; Görling, Andreas

    A self-consistent Kohn-Sham method based on the adiabatic-connection fluctuation-dissipation (ACFD) theorem, employing the frequency-dependent exact exchange kernel fₓ, is presented. The resulting SC-exact-exchange-only (EXX)-ACFD method leads to even more accurate correlation potentials than those obtained within the direct random phase approximation (dRPA). In contrast to dRPA methods, not only the Coulomb kernel but also the exact exchange kernel fₓ is taken into account in the EXX-ACFD correlation, which results in a method that, unlike dRPA methods, is free of self-correlations, i.e., a method that treats exactly all one-electron systems, like, e.g., the hydrogen atom. The self-consistent evaluation of EXX-ACFD total energies improves the accuracy compared to EXX-ACFD total energies evaluated non-self-consistently with EXX or dRPA orbitals and eigenvalues. Reaction energies of a set of small molecules, for which highly accurate experimental reference data are available, are calculated and compared to quantum chemistry methods like Møller-Plesset perturbation theory of second order (MP2) or coupled cluster methods [CCSD; coupled cluster singles, doubles, and perturbative triples (CCSD(T))]. Moreover, we compare our methods to other ACFD variants like dRPA combined with perturbative corrections such as the second-order screened exchange correction or a renormalized singles correction. Similarly, the performance of our EXX-ACFD methods is investigated for the non-covalently bonded dimers of the S22 reference set and for potential energy curves of noble gas, water, and benzene dimers. The computational effort of the SC-EXX-ACFD method exhibits the same scaling of N⁵ with respect to the system size N as the non-self-consistent evaluation of only the EXX-ACFD correlation energy; however, the prefactor increases significantly. Reaction energies from the SC-EXX-ACFD method deviate quite little from EXX-ACFD energies obtained non-self-consistently with dRPA orbitals and eigenvalues, and the deviation reduces even further if the Coulomb kernel is scaled by a factor of 0.75 in the dRPA to reduce self-correlations in the dRPA correlation potential. For larger systems, such a non-self-consistent EXX-ACFD method is a competitive alternative to high-level wave-function-based methods, yielding higher accuracy than MP2 and CCSD methods while exhibiting a better scaling of the computational effort than CCSD or CCSD(T) methods. Moreover, EXX-ACFD methods were shown to be applicable in situations characterized by static correlation.

  9. Activation of Aspen Wood with Carbon Dioxide and Phosphoric Acid for Removal of Total Organic Carbon from Oil Sands Produced Water: Increasing the Yield with Bio-Oil Recycling

    PubMed Central

    Veksha, Andrei; Bhuiyan, Tazul I.; Hill, Josephine M.

    2016-01-01

    Several samples of activated carbon were prepared by physical (CO2) and chemical (H3PO4) activation of aspen wood and tested for the adsorption of organic compounds from water generated during the recovery of bitumen using steam assisted gravity drainage. Total organic carbon removal by the carbon samples increased proportionally with total pore volume as determined from N2 adsorption isotherms at −196 °C. The activated carbon produced by CO2 activation had similar removal levels for total organic carbon from the water (up to 70%) to those samples activated with H3PO4, but lower yields, due to losses during pyrolysis and activation. A method to increase the yield when using CO2 activation was proposed and consisted of recycling bio-oil produced from previous runs to the aspen wood feed, followed by either KOH addition (0.48%) or air pretreatment (220 °C for 3 h) before pyrolysis and activation. By recycling the bio-oil, the yield of CO2 activated carbon (after air pretreatment of the mixture) was increased by a factor of 1.3. Due to the higher carbon yield, the corresponding total organic carbon removal, per mass of wood feed, increased by a factor of 1.2 thus improving the overall process efficiency. PMID:28787817

  10. Direct optical band gap measurement in polycrystalline semiconductors: A critical look at the Tauc method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolgonos, Alex; Mason, Thomas O.; Poeppelmeier, Kenneth R., E-mail: krp@northwestern.edu

    2016-08-15

    The direct optical band gap of semiconductors is traditionally measured by extrapolating the linear region of the square of the absorption curve to the x-axis, and a variation of this method, developed by Tauc, has also been widely used. The application of the Tauc method to crystalline materials is rooted in misconception, and traditional linear extrapolation methods are inappropriate for use on degenerate semiconductors, where the occupation of conduction band energy states cannot be ignored. A new method is proposed for extracting a direct optical band gap from absorption spectra of degenerately-doped bulk semiconductors. This method was applied to pseudo-absorption spectra of Sn-doped In₂O₃ (ITO), converted from diffuse-reflectance measurements on bulk specimens. The results of this analysis were corroborated by room-temperature photoluminescence excitation measurements, which yielded values of optical band gap and Burstein–Moss shift that are consistent with previous studies on In₂O₃ single crystals and thin films. - Highlights: • The Tauc method of band gap measurement is re-evaluated for crystalline materials. • Graphical method proposed for extracting optical band gaps from absorption spectra. • The proposed method incorporates an energy broadening term for energy transitions. • Values for ITO were self-consistent between two different measurement methods.
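For context, the traditional direct-gap extrapolation that the abstract critiques can be sketched on synthetic data (the gap value and fit window below are illustrative): plot (αhν)² against hν, fit the linear region, and read the band gap off the energy-axis intercept. This baseline works on an ideal non-degenerate direct-gap absorption edge, and is exactly what breaks down for degenerate semiconductors with a Burstein–Moss shift.

```python
import numpy as np

Eg_true = 3.6                                   # illustrative direct gap, eV
hv = np.linspace(3.0, 4.2, 200)                 # photon energy axis, eV
# Direct allowed transitions: alpha*h*nu ~ sqrt(h*nu - Eg) above the gap.
alpha_hv = np.where(hv > Eg_true,
                    np.sqrt(np.clip(hv - Eg_true, 0.0, None)), 0.0)

# Traditional extrapolation: fit the linear region of (alpha*h*nu)^2 vs h*nu
# and extend the line to the energy axis; the intercept estimates Eg.
y = alpha_hv ** 2
window = (hv > Eg_true + 0.1) & (hv < Eg_true + 0.4)
slope, intercept = np.polyfit(hv[window], y[window], 1)
Eg_est = -intercept / slope
```

On this ideal edge the intercept recovers the gap exactly; on degenerately-doped material the apparent edge shifts with carrier concentration, which is the motivation for the modified extraction method in the record above.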

  11. Video image analysis as a potential grading system for Uruguayan beef carcasses.

    PubMed

    Vote, D J; Bowling, M B; Cunha, B C N; Belk, K E; Tatum, J D; Montossi, F; Smith, G C

    2009-07-01

    A study was conducted in 2 phases to evaluate the effectiveness of 1) the VIAscan Beef Carcass System (BCSys; hot carcass system) and the CVS BeefCam (chilled carcass system), used independently or in combination, to predict Uruguayan beef carcass fabrication yields; and 2) the CVS BeefCam to segregate Uruguayan beef carcasses into groups that differ in the Warner-Bratzler shear force (WBSF) values of their LM steaks. The results from the meat yield phase of the present study indicated that the prediction of saleable meat yield percentages from Uruguayan beef carcasses by use of the BCSys or CVS BeefCam is similar to, or slightly better than, the use of USDA yield grade calculated to the nearest 0.1 and was much more effective than prediction based on Uruguay National Institute of Meat (INAC) grades. A further improvement in fabrication yield prediction could be obtained by use of a dual-component video image analysis (VIA) system. Whichever method of VIA prediction of fabrication yield is used, a single predicted value of fabrication yield for every carcass removes an impediment to the implementation of a value-based pricing system. Additionally, a VIA method of predicting carcass yield has the advantage over the current INAC classification system in that estimates would be produced by an instrument rather than by packing plant personnel, which would appeal to cattle producers. Results from the tenderness phase of the study indicated that the CVS BeefCam output variable for marbling was not (P > 0.05) able to segregate steer and heifer carcasses into groups that differed in WBSF values. In addition, the results of segregating steer and heifer carcasses according to muscle color output variables indicate that muscle maturity and skeletal maturity were useful for segregating carcasses according to differences in WBSF values of their steaks (P > 0.05). 
Use of VIA to predict beef carcass fabrication yields could improve accuracy and reduce subjectivity in comparison with use of current INAC grades. Use of VIA to sort carcasses according to muscle color would allow for the marketing of more consistent beef products with respect to tenderness. This would help facilitate the initiation of a value-based marketing system for the Uruguayan beef industry.

  12. Auditing Associative Relations across Two Knowledge Sources

    PubMed Central

    Vizenor, Lowell T.; Bodenreider, Olivier; McCray, Alexa T.

    2009-01-01

    Objectives This paper proposes a novel semantic method for auditing associative relations in biomedical terminologies. We tested our methodology on two Unified Medical Language System (UMLS) knowledge sources. Methods We use the UMLS semantic groups as high-level representations of the domain and range of relationships in the Metathesaurus and in the Semantic Network. A mapping created between Metathesaurus relationships and Semantic Network relationships forms the basis for comparing the signatures of a given Metathesaurus relationship to the signatures of the semantic relationship to which it is mapped. The consistency of Metathesaurus relations is studied for each relationship. Results Of the 177 associative relationships in the Metathesaurus, 84 (48%) exhibit a high degree of consistency with the corresponding Semantic Network relationships. Overall, 63% of the 1.8M associative relations in the Metathesaurus are consistent with relations in the Semantic Network. Conclusion The semantics of associative relationships in biomedical terminologies should be defined explicitly by their developers. The Semantic Network would benefit from being extended with new relationships and with new relations for some existing relationships. The UMLS editing environment could take advantage of the correspondence established between relationships in the Metathesaurus and the Semantic Network. Finally, the auditing method also yielded useful information for refining the mapping of associative relationships between the two sources. PMID:19475724
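The signature-comparison step can be illustrated with toy data (the semantic groups and pairs below are invented for demonstration, not actual UMLS content): each relation instance contributes a (domain group, range group) pair, and consistency is the fraction of a Metathesaurus relationship's instances whose pair falls inside the signature of its mapped Semantic Network relationship.

```python
# Invented instances of one Metathesaurus relationship, each reduced to the
# semantic groups of its two arguments (domain group, range group).
meta_rel = [("Disorders", "Chemicals"), ("Disorders", "Chemicals"),
            ("Disorders", "Procedures"), ("Genes", "Chemicals")]

# Signature of the mapped Semantic Network relationship: the set of
# (domain group, range group) pairs that relationship permits.
sn_signature = {("Disorders", "Chemicals"), ("Disorders", "Procedures")}

consistent = sum(1 for pair in meta_rel if pair in sn_signature)
consistency = consistent / len(meta_rel)   # fraction of consistent relations
```

Aggregating this fraction over all instances of a relationship yields per-relationship consistency scores like the 48% / 63% figures reported in the record above; low-scoring pairs flag candidates for auditing.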

  13. A propensity score approach to correction for bias due to population stratification using genetic and non-genetic factors.

    PubMed

    Zhao, Huaqing; Rebbeck, Timothy R; Mitra, Nandita

    2009-12-01

    Confounding due to population stratification (PS) arises when differences in both allele and disease frequencies exist in a population of mixed racial/ethnic subpopulations. Genomic control, structured association, principal components analysis (PCA), and multidimensional scaling (MDS) approaches have been proposed to address this bias using genetic markers. However, confounding due to PS can also be due to non-genetic factors. Propensity scores are widely used to address confounding in observational studies but have not been adapted to deal with PS in genetic association studies. We propose a genomic propensity score (GPS) approach to correct for bias due to PS that considers both genetic and non-genetic factors. We compare the GPS method with PCA and MDS using simulation studies. Our results show that GPS can adequately adjust and consistently correct for bias due to PS. Under no/mild, moderate, and severe PS, GPS yielded estimates with bias close to 0 (mean=-0.0044, standard error=0.0087). Under moderate or severe PS, the GPS method consistently outperforms the PCA method in terms of bias, coverage probability (CP), and type I error. Under moderate PS, the GPS method consistently outperforms the MDS method in terms of CP. PCA maintains relatively high power compared to both MDS and GPS methods under the simulated situations. GPS and MDS are comparable in terms of statistical properties such as bias, type I error, and power. The GPS method provides a novel and robust tool for obtaining less-biased estimates of genetic associations that can consider both genetic and non-genetic factors. 2009 Wiley-Liss, Inc.

  14. Prediction of beta-turns from amino acid sequences using the residue-coupled model.

    PubMed

    Guruprasad, K; Shukla, S

    2003-04-01

We evaluated the prediction of beta-turns from amino acid sequences using the residue-coupled model with an enlarged representative protein data set selected from the Protein Data Bank. Our results show that the probability values derived from a data set comprising 425 protein chains yielded an overall beta-turn prediction accuracy of 68.74%, compared with 94.7% reported earlier on a data set of 30 proteins using the same method. However, we noted that the overall beta-turn prediction accuracy using probability values derived from the 30-protein data set reduces to 40.74% when tested on the data set comprising 425 protein chains. In contrast, using probability values derived from the 425-chain data set used in this analysis, the overall beta-turn prediction accuracy yielded consistent results when tested on either the 30-protein data set (64.62%) used earlier, a more recent representative data set comprising 619 protein chains (64.66%), or a jackknife data set comprising 476 representative protein chains (63.38%). We therefore recommend the use of probability values derived from the 425 representative protein chains data set reported here, which gives more realistic and consistent predictions of beta-turns from amino acid sequences.
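The flavor of probability-table turn prediction can be sketched in a few lines: score each 4-residue window by multiplying per-position propensities and call a turn when the product clears a threshold. The tiny propensity table and the threshold below are invented for illustration; the residue-coupled model proper uses pairwise (coupled) probabilities derived from the reference data set.

```python
# Illustrative sketch of window-based beta-turn prediction. The propensity
# values and the threshold are hypothetical, not the paper's derived tables.

PROPENSITY = {  # per-residue propensity at each of the 4 window positions
    "P": [1.5, 1.3, 0.4, 0.5],
    "G": [0.8, 1.1, 1.6, 1.5],
    "A": [0.9, 0.8, 0.7, 0.9],
    "V": [0.4, 0.4, 0.3, 0.5],
}

def predict_turns(seq, threshold=1.0):
    """Return start indices of 4-residue windows predicted as beta-turns."""
    hits = []
    for i in range(len(seq) - 3):
        score = 1.0
        for pos in range(4):
            # unknown residues default to a neutral propensity of 1.0
            score *= PROPENSITY.get(seq[i + pos], [1.0] * 4)[pos]
        if score > threshold:
            hits.append(i)
    return hits

print(predict_turns("PGGA"))  # 1.5*1.1*1.6*0.9 ≈ 2.38 > 1.0 -> [0]
```

The accuracy figures in the abstract come from exactly this kind of evaluation: fix the probability tables on one data set, then score windows on another.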

  15. A simple, cost-effective method for generating murine colonic 3D enteroids and 2D monolayers for studies of primary epithelial cell function.

    PubMed

    Fernando, Elizabeth H; Dicay, Michael; Stahl, Martin; Gordon, Marilyn H; Vegso, Andrew; Baggio, Cristiane; Alston, Laurie; Lopes, Fernando; Baker, Kristi; Hirota, Simon; McKay, Derek M; Vallance, Bruce; MacNaughton, Wallace K

    2017-11-01

Cancer cell lines have been the mainstay of intestinal epithelial experimentation for decades, due primarily to their immortality and ease of culture. However, because of the inherent biological abnormalities of cancer cell lines, many cellular biologists are currently transitioning away from these models and toward more representative primary cells. This has been particularly challenging, but recent advances in the generation of intestinal organoids have brought the routine use of primary cells within reach of most epithelial biologists. Nevertheless, even with the proliferation of publications that use primary intestinal epithelial cells, there is still a considerable amount of trial and error required for laboratories to establish a consistent and reliable method to culture three-dimensional (3D) intestinal organoids and primary epithelial monolayers. We aim to minimize the time other laboratories spend troubleshooting the technique and present a standard method for culturing primary epithelial cells. Therefore, we have described our optimized, high-yield, cost-effective protocol to grow 3D murine colonoids for more than 20 passages and our detailed methods to culture these cells as confluent monolayers for at least 14 days, enabling a wide variety of potential future experiments. By supporting and expanding on the current literature of primary epithelial culture optimization and detailed use in experiments, we hope to help enable the widespread adoption of these innovative methods and allow consistency of results obtained across laboratories and institutions. NEW & NOTEWORTHY Primary intestinal epithelial monolayers are notoriously difficult to maintain in culture, even with the recent advances in the field. We describe, in detail, the protocols required to maintain three-dimensional cultures of murine colonoids and passage these primary epithelial cells to confluent monolayers in a standardized, high-yield and cost-effective manner.
Copyright © 2017 the American Physiological Society.

  16. Detection of prostate cancer-specific transcripts in extracellular vesicles isolated from post-DRE urine

    PubMed Central

    Pellegrini, Kathryn L.; Patil, Dattatraya; Douglas, Kristen J.S.; Lee, Grace; Wehrmeyer, Kathryn; Torlak, Mersiha; Clark, Jeremy; Cooper, Colin S.; Moreno, Carlos S.; Sanda, Martin G.

    2018-01-01

    Background The measurement of gene expression in post-digital rectal examination (DRE) urine specimens provides a non-invasive method to determine a patient’s risk of prostate cancer. Many currently available assays use whole urine or cell pellets for the analysis of prostate cancer-associated genes, although the use of extracellular vesicles (EVs) has also recently been of interest. We investigated the expression of prostate-, kidney-, and bladder-specific transcripts and known prostate cancer biomarkers in urine EVs. Methods Cell pellets and EVs were recovered from post-DRE urine specimens, with the total RNA yield and quality determined by Bioanalyzer. The levels of prostate, kidney, and bladder-associated transcripts in EVs were assessed by TaqMan qPCR and targeted sequencing. Results RNA was more consistently recovered from the urine EV specimens, with over 80% of the patients demonstrating higher RNA yields in the EV fraction as compared to urine cell pellets. The median EV RNA yield of 36.4 ng was significantly higher than the median urine cell pellet RNA yield of 4.8 ng. Analysis of the post-DRE urine EVs indicated that prostate-specific transcripts were more abundant than kidney- or bladder-specific transcripts. Additionally, patients with prostate cancer had significantly higher levels of the prostate cancer-associated genes PCA3 and ERG. Conclusions Post-DRE urine EVs are a viable source of prostate-derived RNAs for biomarker discovery and prostate cancer status can be distinguished from analysis of these specimens. Continued analysis of urine EVs offers the potential discovery of novel biomarkers for pre-biopsy prostate cancer detection. PMID:28419548

  17. Analysis of hydrocarbons generated in coalbeds

    NASA Astrophysics Data System (ADS)

    Butala, Steven John M.

This dissertation describes kinetic calculations using literature data to predict formation rates and product yields of oil and gas at typical low-temperature conditions in coalbeds. These data indicate that gas formation rates from hydrocarbon thermolysis are too low to have generated commercial quantities of natural gas, assuming bulk first-order kinetics. Acid-mineral-catalyzed cracking, transition-metal-catalyzed hydrogenolysis of liquid hydrocarbons, and catalyzed CO2 hydrogenation form gas at high rates. The gaseous product compositions for these reactions are nearly the same as those for typical natural coalbed gases, while those from thermal and catalytic cracking are more representative of atypical coalbed gases. Three Argonne Premium Coals (Upper-Freeport, Pittsburgh #8 and Lewiston-Stockton) were extracted with benzene in both Soxhlet and elevated pressure extraction (EPE) systems. The extracts were compared on the basis of dry mass yield and hydrocarbon profiles obtained by gas chromatography/mass spectrometry. The dry mass yields for the Upper-Freeport coal gave consistent results by both methods, while the yields from the Pittsburgh #8 and Lewiston-Stockton coals were greater by the EPE method. EPE required ~90 vol. % less solvent compared to Soxhlet extraction. Single-ion-chromatograms of the Soxhlet extracts all exhibited bimodal distributions, while those of the EPE extracts did not. Hydrocarbons analyzed from Greater Green River Basin samples indicate that the natural oils in the basin originated from the coal seams. Analysis of artificially produced oil indicates that hydrous pyrolysis mimics generation of C15+ n-alkanes, but significant variations were found in the branched alkane, low-molecular-weight n-alkane, and high-molecular-weight aromatic hydrocarbon distributions.
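The "too low to generate commercial gas" argument rests on first-order Arrhenius kinetics: at coalbed temperatures the rate constant is so small that conversion over geologic time is negligible. The prefactor and activation energy below are round illustrative numbers, not the literature values used in the dissertation.

```python
# Back-of-envelope first-order Arrhenius conversion estimate. A and Ea are
# illustrative round numbers for C-C bond thermolysis, not sourced values.
import math

R = 8.314    # J/(mol K), gas constant
A = 1e14     # 1/s, illustrative pre-exponential factor
Ea = 250e3   # J/mol, illustrative activation energy

def first_order_conversion(T_kelvin, years):
    """Fraction converted after `years` at temperature T, bulk first order."""
    k = A * math.exp(-Ea / (R * T_kelvin))   # Arrhenius rate constant, 1/s
    t = years * 365.25 * 24 * 3600           # elapsed time in seconds
    return 1.0 - math.exp(-k * t)

# At ~50 degrees C (323 K) over 100 million years:
x = first_order_conversion(323.0, 1e8)
print(f"fraction converted: {x:.2e}")
```

With these inputs the converted fraction comes out many orders of magnitude below 1, which is the shape of the argument that thermolysis alone cannot account for commercial gas volumes and that catalyzed pathways are needed.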

  18. A New Method to Constrain Supernova Fractions Using X-ray Observations of Clusters of Galaxies

    NASA Technical Reports Server (NTRS)

    Bulbul, Esra; Smith, Randall K.; Loewenstein, Michael

    2012-01-01

Supernova (SN) explosions enrich the intracluster medium (ICM) both by creating and dispersing metals. We introduce a method to measure the number of SNe and relative contribution of Type Ia supernovae (SNe Ia) and core-collapse supernovae (SNe cc) by directly fitting X-ray spectral observations. The method has been implemented as an XSPEC model called snapec. snapec utilizes a single-temperature thermal plasma code (apec) to model the spectral emission based on metal abundances calculated using the latest SN yields from SN Ia and SN cc explosion models. This approach provides a self-consistent single set of uncertainties on the total number of SN explosions and relative fraction of SN types in the ICM over the cluster lifetime by directly allowing these parameters to be determined by SN yields provided by simulations. We apply our approach to XMM-Newton European Photon Imaging Camera (EPIC), Reflection Grating Spectrometer (RGS), and 200 ks simulated Astro-H observations of a cooling flow cluster, A3112. We find that various sets of SN yields present in the literature produce an acceptable fit to the EPIC and RGS spectra of A3112. We infer that 30.3% ± 5.4% to 37.1% ± 7.1% of the total SN explosions are SNe Ia, and the total number of SN explosions required to create the observed metals is in the range of (1.06 ± 0.34) × 10^9 to (1.28 ± 0.43) × 10^9, from snapec fits to RGS spectra. These values may be compared to the enrichment expected based on well-established empirically measured SN rates per star formed. The proportions of SNe Ia and SNe cc inferred to have enriched the ICM in the inner 52 kiloparsecs of A3112 are consistent with these specific rates, if one applies a correction for the metals locked up in stars.
At the same time, the inferred level of SN enrichment corresponds to a star-to-gas mass ratio that is several times greater than the 10% estimated globally for clusters in the A3112 mass range.
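The core of this kind of fit can be sketched as a linear mixture: observed element abundances are modeled as counts of the two SN types times their per-event yields, and the counts are recovered by least squares. The per-event yield numbers below are placeholders, not values from any SN explosion model, and the real snapec fit works through full spectral modeling rather than this toy abundance vector.

```python
# Sketch of recovering SN Ia / core-collapse numbers from metal abundances
# as a 2-parameter least-squares problem. Yield vectors are placeholders.

def fit_sn_numbers(y_ia, y_cc, observed):
    """Least-squares solve observed ≈ n_ia*y_ia + n_cc*y_cc via normal equations."""
    a11 = sum(a * a for a in y_ia)
    a12 = sum(a * b for a, b in zip(y_ia, y_cc))
    a22 = sum(b * b for b in y_cc)
    b1 = sum(a * o for a, o in zip(y_ia, observed))
    b2 = sum(b * o for b, o in zip(y_cc, observed))
    det = a11 * a22 - a12 * a12
    n_ia = (b1 * a22 - b2 * a12) / det
    n_cc = (a11 * b2 - a12 * b1) / det
    return n_ia, n_cc

# Placeholder per-event yields (arbitrary units) for, say, Fe, Si, O:
y_ia = [0.7, 0.15, 0.05]
y_cc = [0.07, 0.10, 1.50]
# Synthetic "observed" abundances from a true mix of 2 Ia and 5 cc events:
observed = [2.0 * a + 5.0 * b for a, b in zip(y_ia, y_cc)]

n_ia, n_cc = fit_sn_numbers(y_ia, y_cc, observed)
frac_ia = n_ia / (n_ia + n_cc)
print(round(n_ia, 3), round(n_cc, 3), round(frac_ia, 3))  # 2.0 5.0 0.286
```

Because the synthetic data are noise-free, the fit recovers the true counts exactly; with real spectra the same structure yields the uncertainty ranges quoted in the abstract.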

  19. Performance and Self-Consistency of the Generalized Dielectric Dependent Hybrid Functional

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brawand, Nicholas P.; Govoni, Marco; Vörös, Márton

Here, we analyze the performance of the recently proposed screened exchange constant functional (SX) on the GW100 test set, and we discuss results obtained at different levels of self-consistency. The SX functional is a generalization of dielectric dependent hybrid functionals to finite systems; it is nonempirical and depends on the average screening of the exchange interaction. We compare results for ionization potentials obtained with SX to those of CCSD(T) calculations and experiments, and we find excellent agreement, on par with recent state of the art methods based on many body perturbation theory. Applying SX perturbatively to correct PBE eigenvalues yields improved results in most cases, except for ionic molecules, for which wave function self-consistency is instead crucial. Calculations where wave functions and the screened exchange constant (αSX) are determined self-consistently, and those where αSX is fixed to the value determined within PBE, yield results of comparable accuracy. Perturbative G0W0 corrections of eigenvalues obtained with self-consistent αSX are small on average, for all molecules in the GW100 test set.

  20. Performance and Self-Consistency of the Generalized Dielectric Dependent Hybrid Functional

    DOE PAGES

    Brawand, Nicholas P.; Govoni, Marco; Vörös, Márton; ...

    2017-05-24

Here, we analyze the performance of the recently proposed screened exchange constant functional (SX) on the GW100 test set, and we discuss results obtained at different levels of self-consistency. The SX functional is a generalization of dielectric dependent hybrid functionals to finite systems; it is nonempirical and depends on the average screening of the exchange interaction. We compare results for ionization potentials obtained with SX to those of CCSD(T) calculations and experiments, and we find excellent agreement, on par with recent state of the art methods based on many body perturbation theory. Applying SX perturbatively to correct PBE eigenvalues yields improved results in most cases, except for ionic molecules, for which wave function self-consistency is instead crucial. Calculations where wave functions and the screened exchange constant (αSX) are determined self-consistently, and those where αSX is fixed to the value determined within PBE, yield results of comparable accuracy. Perturbative G0W0 corrections of eigenvalues obtained with self-consistent αSX are small on average, for all molecules in the GW100 test set.

  1. Combined uranous nitrate production consisting of undivided electrolytic cell and divided electrolytic cell (Electrolysis → Electrolytic cell)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Zhongwei; Yan, Taihong; Zheng, Weifang

    2013-07-01

The electrochemical reduction of uranyl nitrate is a green, mild way to make uranous ions. Undivided electrolyzers require less maintenance, but their conversion ratio and current efficiency are low; at the beginning of undivided electrolysis, however, high current efficiency can still be maintained. Divided electrolyzers' conversion ratio and current efficiency are much higher because the re-oxidation of uranous ions on the anode is avoided, but their maintenance costs are higher, because in a radioactive environment the membrane has to be changed after several operations. In this paper, a combined method of uranous production is proposed which consists of 2 stages: undivided electrolysis (early stage) and divided electrolysis (late stage), to benefit from the advantages of both electrolysis modes. The performance of the combined method was tested. The results show that in combined mode, after 200 min of electrolysis (80 min undivided electrolysis and 120 min divided electrolysis), U(IV) yield can reach 92.3% (500 ml feed, U 199 g/l, 72 cm² cathode, 120 mA/cm²). Compared with divided mode, about 1/3 of the working time in the divided electrolyzer is saved to achieve the same U(IV) yield. If 120 min of undivided electrolysis is used, more than 1/2 of the working time in the divided electrolyzer can be saved, which means that about half of the maintenance cost can also be saved. (authors)

  2. Diffusion quantum Monte Carlo and density functional calculations of the structural stability of bilayer arsenene

    NASA Astrophysics Data System (ADS)

    Kadioglu, Yelda; Santana, Juan A.; Özaydin, H. Duygu; Ersan, Fatih; Aktürk, O. Üzengi; Aktürk, Ethem; Reboredo, Fernando A.

    2018-06-01

    We have studied the structural stability of monolayer and bilayer arsenene (As) in the buckled (b) and washboard (w) phases with diffusion quantum Monte Carlo (DMC) and density functional theory (DFT) calculations. DMC yields cohesive energies of 2.826(2) eV/atom for monolayer b-As and 2.792(3) eV/atom for w-As. In the case of bilayer As, DMC and DFT predict that AA-stacking is the more stable form of b-As, while AB is the most stable form of w-As. The DMC layer-layer binding energies for b-As-AA and w-As-AB are 30(1) and 53(1) meV/atom, respectively. The interlayer separations were estimated with DMC at 3.521(1) Å for b-As-AA and 3.145(1) Å for w-As-AB. A comparison of DMC and DFT results shows that the van der Waals density functional method yields energetic properties of arsenene close to DMC, while the DFT + D3 method closely reproduced the geometric properties from DMC. The electronic properties of monolayer and bilayer arsenene were explored with various DFT methods. The bandgap values vary significantly with the DFT method, but the results are generally qualitatively consistent. We expect the present work to be useful for future experiments attempting to prepare multilayer arsenene and for further development of DFT methods for weakly bonded systems.

  3. Alternative Methods of Accounting for Underreporting and Overreporting When Measuring Dietary Intake-Obesity Relations

    PubMed Central

    Mendez, Michelle A.; Popkin, Barry M.; Buckland, Genevieve; Schroder, Helmut; Amiano, Pilar; Barricarte, Aurelio; Huerta, José-María; Quirós, José R.; Sánchez, María-José; González, Carlos A

    2011-01-01

Misreporting characterized by the reporting of implausible energy intakes may undermine the valid estimation of diet-disease relations, but the methods to best identify and account for misreporting are unknown. The present study compared how alternate approaches affected associations between selected dietary factors and body mass index (BMI) by using data from the European Prospective Investigation Into Cancer and Nutrition-Spain. A total of 24,332 women and 15,061 men 29–65 years of age recruited from 1992 to 1996 for whom measured height and weight and validated diet history data were available were included. Misreporters were identified on the basis of disparities between reported energy intakes and estimated requirements calculated using the original Goldberg method and 2 alternatives: one that substituted basal metabolic rate equations that are more valid at higher BMIs and another that used doubly labeled water-predicted total energy expenditure equations. Compared with results obtained using the original method, underreporting was considerably lower and overreporting higher with alternative methods, which were highly concordant. Accounting for misreporters with all methods yielded diet-BMI relations that were more consistent with expectations; alternative methods often strengthened associations. For example, among women, multivariable-adjusted differences in BMI for the highest versus lowest vegetable intake tertile (β = 0.37 (standard error, 0.07)) were neutral after adjusting with the original method (β = 0.01 (standard error, 0.07)) and negative using the predicted total energy expenditure method with stringent cutoffs (β = −0.15 (standard error, 0.07)). Alternative methods may yield more valid associations between diet and obesity-related outcomes. PMID:21242302

  4. Alternative methods of accounting for underreporting and overreporting when measuring dietary intake-obesity relations.

    PubMed

    Mendez, Michelle A; Popkin, Barry M; Buckland, Genevieve; Schroder, Helmut; Amiano, Pilar; Barricarte, Aurelio; Huerta, José-María; Quirós, José R; Sánchez, María-José; González, Carlos A

    2011-02-15

Misreporting characterized by the reporting of implausible energy intakes may undermine the valid estimation of diet-disease relations, but the methods to best identify and account for misreporting are unknown. The present study compared how alternate approaches affected associations between selected dietary factors and body mass index (BMI) by using data from the European Prospective Investigation Into Cancer and Nutrition-Spain. A total of 24,332 women and 15,061 men 29-65 years of age recruited from 1992 to 1996 for whom measured height and weight and validated diet history data were available were included. Misreporters were identified on the basis of disparities between reported energy intakes and estimated requirements calculated using the original Goldberg method and 2 alternatives: one that substituted basal metabolic rate equations that are more valid at higher BMIs and another that used doubly labeled water-predicted total energy expenditure equations. Compared with results obtained using the original method, underreporting was considerably lower and overreporting higher with alternative methods, which were highly concordant. Accounting for misreporters with all methods yielded diet-BMI relations that were more consistent with expectations; alternative methods often strengthened associations. For example, among women, multivariable-adjusted differences in BMI for the highest versus lowest vegetable intake tertile (β = 0.37 (standard error, 0.07)) were neutral after adjusting with the original method (β = 0.01 (standard error, 0.07)) and negative using the predicted total energy expenditure method with stringent cutoffs (β = -0.15 (standard error, 0.07)). Alternative methods may yield more valid associations between diet and obesity-related outcomes.
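A Goldberg-style plausibility screen boils down to a ratio test: reported energy intake divided by basal metabolic rate, flagged against lower and upper cutoffs. The cutoffs below (1.35 and 2.4) are commonly cited single-cutoff values used purely for illustration; the study above derives study-specific limits and swaps in alternative BMR and predicted-TEE equations.

```python
# Hedged sketch of a Goldberg-type misreporting screen on the EI/BMR ratio.
# Cutoff values are illustrative, not those derived in the study.

def classify_reporter(energy_intake_kcal, bmr_kcal, low=1.35, high=2.4):
    """Return 'under', 'plausible', or 'over' based on the EI/BMR ratio."""
    ratio = energy_intake_kcal / bmr_kcal
    if ratio < low:
        return "under"
    if ratio > high:
        return "over"
    return "plausible"

print(classify_reporter(1500, 1400))  # ratio ≈ 1.07 -> 'under'
print(classify_reporter(2400, 1400))  # ratio ≈ 1.71 -> 'plausible'
```

The paper's "alternative methods" amount to changing how `bmr_kcal` (or a predicted total energy expenditure standing in for it) is computed, which shifts who gets flagged and, downstream, the diet-BMI estimates.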

  5. Gynecomastia: the horizontal ellipse method for its correction.

    PubMed

    Gheita, Alaa

    2008-09-01

Gynecomastia is an extremely disturbing deformity affecting males, especially when it occurs in young subjects. Such subjects generally have no hormonal anomalies, and thus either liposuction or surgical intervention, depending on the type and consistency of the breast, is required for treatment. If there is slight hypertrophy alone with no ptosis, then subcutaneous mastectomy is usually sufficient. However, when hypertrophy and/or ptosis are present, then corrective surgery on the skin and breast is mandatory to obtain a good cosmetic result. Most of the procedures suggested for reduction of the male breast are usually derived from reduction mammaplasty methods used for females. They have some disadvantages, mainly the multiple scars, which remain apparent in males, unusual shape, and the lack of symmetry with regard to the size of both breasts and/or the nipple position. The author presents a new, simple method that has proven superior to any previous method described so far. It consists of a horizontal excision ellipse of the breast's redundant skin and deep excess tissue and a superior pedicle flap carrying the areola-nipple complex to its new site on the chest wall. The method described yields excellent shape, symmetry, and minimal scars. A new method for treating gynecomastia is described in detail, its early and late operative results are shown, and its advantages are discussed.

  6. A comparison of three methods of assessing differential item functioning (DIF) in the Hospital Anxiety Depression Scale: ordinal logistic regression, Rasch analysis and the Mantel chi-square procedure.

    PubMed

    Cameron, Isobel M; Scott, Neil W; Adler, Mats; Reid, Ian C

    2014-12-01

It is important for clinical practice and research that measurement scales of well-being and quality of life exhibit only minimal differential item functioning (DIF). DIF occurs where different groups of people endorse items in a scale to different extents after being matched by the intended scale attribute. We investigate the equivalence or otherwise of common methods of assessing DIF. Three methods of measuring age- and sex-related DIF (ordinal logistic regression, Rasch analysis and the Mantel χ² procedure) were applied to Hospital Anxiety Depression Scale (HADS) data pertaining to a sample of 1,068 patients consulting primary care practitioners. Three items were flagged by all three approaches as having either age- or sex-related DIF with a consistent direction of effect; a further three items identified did not meet stricter criteria for important DIF using at least one method. When applying strict criteria for significant DIF, ordinal logistic regression was slightly less sensitive. Ordinal logistic regression, Rasch analysis and contingency table methods yielded consistent results when identifying DIF in the HADS depression and HADS anxiety scales. Regardless of the methods applied, investigators should use a combination of statistical significance, magnitude of the DIF effect and investigator judgement when interpreting the results.
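Of the three approaches, the Mantel procedure is the easiest to show concretely: examinees are stratified by total scale score, each stratum contributes a 2x2 table of group by item endorsement, and observed-minus-expected endorsement is pooled across strata into a chi-square statistic. The stratum counts below are invented toy data, not HADS results.

```python
# Sketch of the Mantel chi-square idea for DIF detection. Toy counts only.

def mantel_chi_square(tables):
    """tables: list of (a, b, c, d) = (ref yes, ref no, focal yes, focal no),
    one 2x2 table per total-score stratum. Returns the continuity-corrected
    Mantel-Haenszel chi-square statistic (1 degree of freedom)."""
    sum_a = sum_e = sum_v = 0.0
    for a, b, c, d in tables:
        n_ref, n_foc = a + b, c + d
        m_yes, m_no = a + c, b + d
        n = n_ref + n_foc
        sum_a += a                                # observed reference 'yes'
        sum_e += n_ref * m_yes / n                # expected under no DIF
        sum_v += n_ref * n_foc * m_yes * m_no / (n * n * (n - 1))
    return (abs(sum_a - sum_e) - 0.5) ** 2 / sum_v

# One table per score stratum; the reference group endorses the item more often.
strata = [(30, 10, 20, 20), (25, 5, 15, 15), (40, 20, 25, 35)]
chi2 = mantel_chi_square(strata)
print(f"MH chi-square = {chi2:.2f}")  # compare to the 1-df critical value 3.84
```

Ordinal logistic regression tests the same hypothesis by adding a group term (and a group-by-score interaction) to a model of item response on total score, which is why the paper can compare the methods head to head.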

  7. Bioluminescent system for dynamic imaging of cell and animal behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hara-Miyauchi, Chikako; Laboratory for Cell Function Dynamics, Brain Science Institute, RIKEN, Saitama 351-0198; Department of Biophysics and Biochemistry, Graduate School of Health Care Sciences, Tokyo Medical and Dental University, Tokyo 113-8510

    2012-03-09

Highlights: • We combined a yellow variant of GFP and firefly luciferase to make ffLuc-cp156. • ffLuc-cp156 showed improved photon yield in cultured cells and transgenic mice. • ffLuc-cp156 enabled video-rate bioluminescence imaging of freely-moving animals. • ffLuc-cp156 mice enabled tracking real-time drug delivery in conscious animals. -- Abstract: The current utility of bioluminescence imaging is constrained by a low photon yield that limits temporal sensitivity. Here, we describe an imaging method that uses a chemiluminescent/fluorescent protein, ffLuc-cp156, which consists of a yellow variant of Aequorea GFP and firefly luciferase. We report an improvement in photon yield by over three orders of magnitude over current bioluminescent systems. We imaged cellular movement at high resolution including neuronal growth cones and microglial cell protrusions. Transgenic ffLuc-cp156 mice enabled video-rate bioluminescence imaging of freely moving animals, which may provide a reliable assay for drug distribution in behaving animals for pre-clinical studies.

  8. Emergent aquatics: stand establishment, management, and species screening

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratt, D.C.; Andrews, N.J.; Dubbe, D.R.

    1982-11-01

Several emergent aquatic species have been identified as potential biomass crops, including Typha spp. (cattail), Scirpus spp. (rush), Sparganium spp. (bur reed), and Phragmites (reed). This report discusses first year results from studies of stand establishment and management, Typha nutrient requirements, wetland species yield comparisons, and Typha micropropagation. In a comparison of the relative effectiveness of seed, seedlings, and rhizomes for stand establishment, rhizomes appeared to be more consistent and productive under a wide variety of conditions. Both rhizome- and seedling-established plots grew successfully on excavated peatland sites. First season results from a multiyear fertilizer rate experiment indicate that fertilizer treatment resulted in significantly increased tissue nutrient concentrations, which should carry over into subsequent growing seasons. Shoot density and belowground dry weight were also significantly increased by phosphorus + potassium and potassium applications, respectively. First season yields of selected wetland species from managed paddies generally were comparable to yields reported from natural stands. Several particularly productive clones of Typha spp. have been identified. A method of establishing Typha in tissue culture is described.

  9. A new constitutive model for simulation of softening, plateau, and densification phenomena for trabecular bone under compression.

    PubMed

    Lee, Chi-Seung; Lee, Jae-Myung; Youn, BuHyun; Kim, Hyung-Sik; Shin, Jong Ki; Goh, Tae Sik; Lee, Jung Sub

    2017-01-01

    A new type of constitutive model and its computational implementation procedure for the simulation of a trabecular bone are proposed in the present study. A yield surface-independent Frank-Brockman elasto-viscoplastic model is introduced to express the nonlinear material behavior such as softening beyond yield point, plateau, and densification under compressive loads. In particular, the hardening- and softening-dominant material functions are introduced and adopted in the plastic multiplier to describe each nonlinear material behavior separately. In addition, the elasto-viscoplastic model is transformed into an implicit type discrete model, and is programmed as a user-defined material subroutine in commercial finite element analysis code. In particular, the consistent tangent modulus method is proposed to improve the computational convergence and to save computational time during finite element analysis. Through the developed material library, the nonlinear stress-strain relationship is analyzed qualitatively and quantitatively, and the simulation results are compared with the results of compression test on the trabecular bone to validate the proposed constitutive model, computational method, and material library. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Development of a Solid-State Fermentation System for Producing Bioethanol from Food Waste

    NASA Astrophysics Data System (ADS)

    Honda, Hiroaki; Ohnishi, Akihiro; Fujimoto, Naoshi; Suzuki, Masaharu

Liquid fermentation is the conventional method of producing bioethanol. However, this method leaves high concentrations of waste after distillation, and further treatment requires large amounts of costly energy. Saccharification of dried raw garbage was tested for 12 types of Koji starters under the following optimum culture conditions: temperature of 30°C and initial moisture content of 50%. Among all the types, Aspergillus oryzae KBN650 had the highest saccharifying power. The ethanol-producing ability of the raw garbage was investigated for 72 strains of yeast, of which Saccharomyces cerevisiae A30 had the highest ethanol yield under the following optimum conditions: 1:1 ratio of dried garbage to saccharified garbage by weight, and initial moisture content of 60%. Thus, the solid-state fermentation system consisted of the following 4 processes: moisture control, saccharification, ethanol production and distillation. This system produced 0.6 kg of ethanol from 9.6 kg of garbage. Moreover, the ethanol yield from all sugars was calculated to be 0.37.
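The reported figures can be sanity-checked with simple arithmetic. The ~0.51 g/g theoretical maximum for glucose fermentation is textbook stoichiometry (2 ethanol per glucose); the implied sugar mass is our back-calculation, not a figure from the paper.

```python
# Quick check on the reported solid-state fermentation numbers.
ethanol_kg = 0.6
garbage_kg = 9.6
yield_per_sugar = 0.37     # g ethanol / g sugar, as reported
theoretical_max = 0.511    # g ethanol / g glucose, stoichiometric limit

overall_yield = ethanol_kg / garbage_kg           # ethanol per unit garbage
implied_sugar_kg = ethanol_kg / yield_per_sugar   # back-calculated sugar consumed
efficiency = yield_per_sugar / theoretical_max    # fraction of the limit

print(f"{overall_yield:.3f} kg ethanol/kg garbage")  # 0.062
print(f"~{implied_sugar_kg:.1f} kg sugar implied")   # ~1.6
print(f"{efficiency:.0%} of theoretical maximum")    # 72%
```

So the 0.37 g/g sugar-basis yield corresponds to roughly 72% of the stoichiometric limit, a reasonable figure for a solid-state process.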

  11. Experimental Investigations on Subsequent Yield Surface of Pure Copper by Single-Sample and Multi-Sample Methods under Various Pre-Deformation.

    PubMed

    Liu, Gui-Long; Huang, Shi-Hong; Shi, Che-Si; Zeng, Bin; Zhang, Ke-Shi; Zhong, Xian-Ci

    2018-02-10

Using copper thin-walled tubular specimens, the subsequent yield surfaces under pre-tension, pre-torsion and pre-combined tension-torsion are measured, where the single-sample and multi-sample methods are applied respectively to determine the yield stresses at a specified offset strain. The rule and characteristics of the evolution of the subsequent yield surface are investigated. Under the conditions of different pre-strains, the influence of test point number, test sequence and specified offset strain on the measurement of the subsequent yield surface and the concave phenomenon for the measured yield surface are studied. Moreover, the feasibility and validity of the two methods are compared. The main conclusions are drawn as follows: (1) For the single- or multi-sample method, the measured subsequent yield surfaces are remarkably different from the cylindrical yield surfaces proposed by the classical plasticity theory; (2) there are apparent differences between the test results from the two kinds of methods: the multi-sample method is not influenced by the number of test points, the test order, or the cumulative effect of residual plastic strain from other test points, while these factors strongly influence the single-sample method; and (3) the measured subsequent yield surface may appear concave, which can be made convex for the single-sample method by changing the test sequence. However, for the multi-sample method, the concave phenomenon will disappear when a larger offset strain is specified.
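The "yield stress at a specified offset strain" used in both methods is found by intersecting the measured stress-strain curve with a line of elastic slope shifted by the offset. The bilinear toy curve and the modulus below are illustrative, not data from the copper tests.

```python
# Sketch of extracting an offset yield stress from sampled stress-strain data.
# The bilinear test curve and E value are illustrative assumptions.

def offset_yield_stress(strains, stresses, modulus, offset):
    """Stress where the curve crosses the line stress = modulus*(strain - offset),
    found by linear interpolation between sampled points; None if no crossing."""
    def gap(i):  # curve minus offset line at sample i
        return stresses[i] - modulus * (strains[i] - offset)
    for i in range(1, len(strains)):
        if gap(i - 1) > 0 >= gap(i):
            t = gap(i - 1) / (gap(i - 1) - gap(i))   # interpolation fraction
            return stresses[i - 1] + t * (stresses[i] - stresses[i - 1])
    return None

# Bilinear toy curve: elastic with E = 100 GPa up to 200 MPa, then a flat plateau.
E = 100e3  # MPa
strains = [0.0, 0.002, 0.004, 0.006, 0.008, 0.010]
stresses = [0.0, 200.0, 200.0, 200.0, 200.0, 200.0]
sigma_y = offset_yield_stress(strains, stresses, E, 0.002)
print(sigma_y)  # 200.0 for a perfectly flat plateau
```

Sweeping probe directions in stress space and recording this crossing point for each is what traces out a measured yield surface; the single-sample vs multi-sample distinction is whether all probes reuse one specimen (accumulating residual plastic strain) or each probe gets a fresh one.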

  12. Improved Method for Isolation of Microbial RNA from Biofuel Feedstock for Metatranscriptomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piao, Hailan; Markillie, Lye Meng; Culley, David E.

    2013-03-28

Metatranscriptomics—gene expression profiling via DNA sequencing—is a powerful tool to identify genes that are actively expressed and might contribute to the phenotype of individual organisms or the phenome (the sum of several phenotypes) of a microbial community. Furthermore, metatranscriptome studies can result in extensive catalogues of genes that encode for enzymes of industrial relevance. In both cases, a major challenge for generating a high quality metatranscriptome is the extreme lability of RNA and its susceptibility to ubiquitous RNases. The microbial community (the microbiome) of the cow rumen efficiently degrades lignocellulosic biomass, generates significant amounts of methane, a greenhouse gas twenty times more potent than carbon dioxide, and is of general importance for the physiological wellbeing of the host animal. Metatranscriptomes of the rumen microbiome from animals kept under different conditions and from various types of rumen-incubated biomass can be expected to provide new insights into these highly interesting phenotypes and subsequently provide the framework for an enhanced understanding of this socioeconomically important ecosystem. The ability to isolate large amounts of intact RNA will significantly facilitate accurate transcript annotation and expression profiling. Here we report a method that combines mechanical disruption with chemical homogenization of the sample material and consistently yields 1 mg of intact RNA from 1 g of rumen-incubated biofuel feedstock. The yield of total RNA obtained with our method exceeds the RNA yield achieved with previously reported isolation techniques, which renders RNA isolated with the method presented here an ideal starting material for metatranscriptomic analyses and other molecular biology applications that require significant amounts of starting material.

  13. Psychometric Properties of the Persian Version of the Social Anxiety - Acceptance and Action Questionnaire.

    PubMed

    Soltani, Esmail; Bahrainian, Seyed Abdolmajid; Masjedi Arani, Abbas; Farhoudian, Ali; Gachkar, Latif

    2016-06-01

Social anxiety disorder is often associated with specific impairment or distress in different areas of life, including occupational, social and family settings. The purpose of the present study was to examine the psychometric properties of the Persian version of the Social Anxiety-Acceptance and Action Questionnaire (SA-AAQ) in university students. In this descriptive cross-sectional study, 324 students from Shahid Beheshti University of Medical Sciences participated via the cluster sampling method during the year 2015. Factor analysis by the principal component analysis method, internal consistency analysis, and convergent and divergent validity were conducted to examine the validity of the SA-AAQ. To calculate the reliability of the SA-AAQ, Cronbach's alpha and test-retest reliability were used. Factor analysis by the principal component analysis method yielded three factors, named acceptance, action and non-judging of experience. The three-factor solution explained 51.82% of the variance. Evidence for the internal consistency of the SA-AAQ was obtained by calculating correlations between the SA-AAQ and its subscales. Support for the convergent and discriminant validity of the SA-AAQ was obtained via its correlations with the Acceptance and Action Questionnaire-II, Social Interaction Anxiety Scale, Cognitive Fusion Questionnaire, Believability of Anxious Feelings and Thoughts Questionnaire, Valued Living Questionnaire and WHOQOL-BREF. The reliability of the SA-AAQ via Cronbach's alpha and test-retest coefficients yielded values of 0.84 and 0.84, respectively. The Iranian version of the SA-AAQ has acceptable psychometric properties in university students. The SA-AAQ is a valid and reliable measure for use in research investigations and therapeutic interventions.

  14. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.

    PubMed

    Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis

    2017-10-16

Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporal sampled data structure for characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatio-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information in the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatio-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatio-temporal covariance matrix or by the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem.
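A minimal sketch of the kind of kernel-based interpolation the abstract describes, assuming scikit-learn's SVR with a callable Mercer kernel. The separable RBF-style spatio-temporal kernel, the length-scale parameters, and the synthetic samples below are illustrative stand-ins for the paper's autocorrelation-derived kernel and actual river measurements:

```python
import numpy as np
from sklearn.svm import SVR

def st_kernel(A, B, ls_space=2.0, ls_time=2.0):
    """Separable RBF-style Mercer kernel over (location, time) pairs;
    a stand-in for the paper's autocorrelation-based kernel."""
    d_s = (A[:, [0]] - B[:, 0]) ** 2 / ls_space**2
    d_t = (A[:, [1]] - B[:, 1]) ** 2 / ls_time**2
    return np.exp(-0.5 * (d_s + d_t))

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(80, 2))   # non-uniform (location, time) samples
y = np.sin(X[:, 0]) + 0.1 * X[:, 1]    # synthetic water quality parameter

model = SVR(kernel=st_kernel).fit(X, y)

# Interpolate onto a regular 5x5 spatio-temporal grid for a map view
grid = np.array([[x, t] for x in np.linspace(0, 10, 5)
                        for t in np.linspace(0, 10, 5)])
z = model.predict(grid)
```

The callable-kernel route keeps the model a standard SVR while letting the Gram matrix encode problem-specific spatio-temporal structure, which is the essence of the approach benchmarked in the paper.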

  15. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods

    PubMed Central

    Vizcaíno, Iván P.; Muñoz-Romero, Sergio; Cumbal, Luis H.

    2017-01-01

Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporal sampled data structure for characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatio-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information in the quality parameter measurements through Support Vector Regression (SVR) based on Mercer’s kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatio-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer’s kernel given by either the Mahalanobis spatio-temporal covariance matrix or by the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem. PMID:29035333

  16. Toxic reagents and expensive equipment: are they really necessary for the extraction of good quality fungal DNA?

    PubMed

    Rodrigues, P; Venâncio, A; Lima, N

    2018-01-01

The aim of this work was to evaluate a fungal DNA extraction procedure with the lowest inputs in terms of time as well as of expensive and toxic chemicals, but able to consistently produce genomic DNA of good quality for PCR purposes. Two types of fungal biological material were tested - mycelium and conidia - combined with two protocols for DNA extraction using Sodium Dodecyl Sulphate (SDS) and Cetyl Trimethyl Ammonium Bromide as extraction buffers and glass beads for mechanical disruption of cell walls. Our results showed that conidia with SDS buffer was the combination that led to the best DNA quality and yield, with the lowest variation between samples. This study clearly demonstrates that it is possible to obtain high-yield, pure DNA from pigmented conidia without the use of strong cell-disrupting procedures and toxic reagents. There are numerous methods for DNA extraction from fungi. Some rely on expensive commercial kits and/or equipment, unavailable to many laboratories, or make use of toxic chemicals such as chloroform, phenol and mercaptoethanol. This study clearly demonstrates that it is possible to obtain high yields of pure DNA from pigmented conidia without the use of strong and expensive cell-disrupting procedures and toxic reagents. The method described herein is simultaneously inexpensive and adequate for DNA extraction from several different types of fungi. © 2017 The Society for Applied Microbiology.

  17. Estimating tar and nicotine exposure: human smoking versus machine generated smoke yields.

    PubMed

    St Charles, F K; Kabbani, A A; Borgerding, M F

    2010-02-01

To determine human smoked (HS) cigarette yields of tar and nicotine for smokers using their own brand in their everyday environment, a robust filter analysis method was used to estimate the tar and nicotine yields for 784 subjects. Seventeen brands were chosen to represent a wide range of styles: 85 and 100 mm lengths; menthol and non-menthol; 17, 23, and 25 mm circumferences; with tar yields [Federal Trade Commission (FTC) method] ranging from 1 to 18 mg. Tar bands chosen corresponded to yields of 1-3 mg, 4-6 mg, 7-12 mg, and 13+ mg. A significant difference (p<0.0001) in HS yields of tar and nicotine between tar bands was found. Machine-smoked yields were reasonable predictors of the HS yields for groups of subjects, but the relationship was neither exact nor linear. Neither the FTC, the Massachusetts (MA), nor the Canadian Intensive (CI) machine-smoking method accurately reflects the HS yields across all brands. The FTC method was closest for the 7-12 mg and 13+ mg products and the MA method was closest for the 1-3 mg products. The HS yields for the 4-6 mg products were approximately midway between the FTC and the MA yields. HS nicotine yields corresponded well with published urinary and plasma nicotine biomarker studies. © 2009 Elsevier Inc. All rights reserved.

  18. Cupping - is it reproducible? Experiments about factors determining the vacuum.

    PubMed

    Huber, R; Emerich, M; Braeunig, M

    2011-04-01

Cupping is a traditional method for treating pain which is nowadays investigated in clinical studies. Because the methods for producing the vacuum vary considerably, we tested their reproducibility. In a first set of experiments (study 1), four methods for producing the vacuum (lighter flame at 2 cm (LF1), lighter flame at 4 cm (LF2), alcohol flame (AF) and mechanical suction with a balloon (BA)) were compared in 50 trials each. The cupping glass was fitted with an outlet and stop-cock, and the vacuum was measured with a pressure gauge after the cup was set on a soft rubber pad. In a second series of experiments (study 2), we investigated the stability of pressures in 20 consecutive trials by two experienced cupping practitioners and ten beginners using method AF. In study 1, all four methods yielded consistent pressures. Large differences in magnitude were, however, observed between methods (mean pressures -200±30 hPa with LF1, -310±30 hPa with LF2, -560±30 hPa with AF, and -270±16 hPa with BA). With method BA, the standard deviation was reduced by a factor of 2 compared to the flame methods. In study 2, beginners had considerably more difficulty obtaining a stable pressure yield than advanced cupping practitioners, showing a distinct learning curve before reaching expert levels after about 10-20 trials. Cupping is reproducible if the exact method is described in detail. Mechanical suction with a balloon has the best reproducibility. Beginners need at least 10-20 trials to produce stable pressures. Copyright © 2010 Elsevier Ltd. All rights reserved.

  19. Anomalous neutron yield in indirect-drive inertial-confinement-fusion due to the formation of collisionless shocks in the corona

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Shuai; Cai, Hong-Bo; Shan, Lian-Qiang; Zhang, Hua-Sen; Gu, Yu-Qiu; Zhu, Shao-Ping

    2017-06-01

Observations of anomalous neutron yield in the indirect-drive inertial confinement fusion implosion experiments conducted at the SG-III prototype and SG-II upgrade laser facilities are interpreted. The anomalous mechanism results in a neutron yield 100 times higher than that predicted by 1D radiation-hydrodynamic simulations. 2D radiation-hydrodynamic simulations show that the supersonic, radially directed gold (Au) plasma jets arising from the laser-hohlraum interactions can collide with the carbon-deuterium (CD) corona plasma of the compressed pellet. It is found that in the interaction front of the high-Z jet with the low-Z corona, with low density ∼10²⁰ cm⁻³ and high temperature ∼keV, kinetic effects become important. Particle-in-cell simulations indicate that an electrostatic shock wave can be driven when the high-temperature Au jet expands into the low-temperature CD corona. About 10¹⁵ deuterium ions can be accelerated to ∼25 keV by the collisionless shock wave, causing efficient neutron production through the beam-target mechanism as these energetic ions are stopped in the corona. The evaluated neutron yield is consistent with the experiments conducted at the SG laser facilities.

  20. Proposed method for determining the thickness of glass in solar collector panels

    NASA Technical Reports Server (NTRS)

    Moore, D. M.

    1980-01-01

    An analytical method was developed for determining the minimum thickness for simply supported, rectangular glass plates subjected to uniform normal pressure environmental loads such as wind, earthquake, snow, and deadweight. The method consists of comparing an analytical prediction of the stress in the glass panel to a glass breakage stress determined from fracture mechanics considerations. Based on extensive analysis using the nonlinear finite element structural analysis program ARGUS, design curves for the structural analysis of simply supported rectangular plates were developed. These curves yield the center deflection, center stress and corner stress as a function of a dimensionless parameter describing the load intensity. A method of estimating the glass breakage stress as a function of a specified failure rate, degree of glass temper, design life, load duration time, and panel size is also presented.

  1. An economical method of analyzing transient motion of gas-lubricated rotor-bearing systems.

    NASA Technical Reports Server (NTRS)

    Falkenhagen, G. L.; Ayers, A. L.; Barsalou, L. C.

    1973-01-01

A method of economically evaluating the hydrodynamic forces generated in a gas-lubricated tilting-pad bearing is presented. The numerical method consists of solving the case of the infinite-width bearing and then converting this solution to the case of the finite bearing by accounting for end leakage. The approximate method is compared to the finite-difference solution of the Reynolds equation and yields acceptable accuracy while running about one hundred times faster. A mathematical model of a gas-lubricated tilting-pad vertical rotor system is developed. The model is capable of analyzing a two-bearing rotor system in which the rotor center of mass is not at midspan by accounting for gyroscopic moments. The numerical results from the model are compared to actual test data as well as analytical results of other investigators.

  2. Single-reactor process for producing liquid-phase organic compounds from biomass

    DOEpatents

    Dumesic, James A.; Simonetti, Dante A.; Kunkes, Edward L.

    2015-12-08

    Disclosed is a method for preparing liquid fuel and chemical intermediates from biomass-derived oxygenated hydrocarbons. The method includes the steps of reacting in a single reactor an aqueous solution of a biomass-derived, water-soluble oxygenated hydrocarbon reactant, in the presence of a catalyst comprising a metal selected from the group consisting of Cr, Mn, Fe, Co, Ni, Cu, Mo, Tc, Ru, Rh, Pd, Ag, W, Re, Os, Ir, Pt, and Au, at a temperature, and a pressure, and for a time sufficient to yield a self-separating, three-phase product stream comprising a vapor phase, an organic phase containing linear and/or cyclic mono-oxygenated hydrocarbons, and an aqueous phase.

  3. Single-reactor process for producing liquid-phase organic compounds from biomass

    DOEpatents

    Dumesic, James A [Verona, WI; Simonetti, Dante A [Middleton, WI; Kunkes, Edward L [Madison, WI

    2011-12-13

    Disclosed is a method for preparing liquid fuel and chemical intermediates from biomass-derived oxygenated hydrocarbons. The method includes the steps of reacting in a single reactor an aqueous solution of a biomass-derived, water-soluble oxygenated hydrocarbon reactant, in the presence of a catalyst comprising a metal selected from the group consisting of Cr, Mn, Fe, Co, Ni, Cu, Mo, Tc, Ru, Rh, Pd, Ag, W, Re, Os, Ir, Pt, and Au, at a temperature, and a pressure, and for a time sufficient to yield a self-separating, three-phase product stream comprising a vapor phase, an organic phase containing linear and/or cyclic mono-oxygenated hydrocarbons, and an aqueous phase.

  4. Molecular dynamics study of the melting curve of NiTi alloy under pressure

    NASA Astrophysics Data System (ADS)

    Zeng, Zhao-Yi; Hu, Cui-E.; Cai, Ling-Cang; Chen, Xiang-Rong; Jing, Fu-Qian

    2011-02-01

The melting curve of NiTi alloy was predicted using molecular dynamics simulations combined with an embedded atom model potential. The calculated thermal equation of state is consistent with our previous results obtained from the quasiharmonic Debye approximation. Fitting the well-known Simon form to our Tm data yields the melting curves for NiTi: Tm = 1850(1 + P/21.938)^0.328 K (one-phase method) and Tm = 1575(1 + P/7.476)^0.305 K (two-phase method). The two-phase simulations can effectively eliminate the superheating seen in one-phase simulations. At 1 bar, the melting temperature of NiTi is 1575 ± 25 K and the corresponding melting slope is 64 K/GPa.
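The quoted Simon-form fits can be evaluated directly. A quick check (with a hypothetical helper name) reproduces both the ambient-pressure melting temperature and the 64 K/GPa melting slope, since dTm/dP at P = 0 equals T0·c/a for the Simon form Tm(P) = T0(1 + P/a)^c:

```python
def simon_melting_T(P_GPa, T0, a, c):
    """Simon form T_m(P) = T0 * (1 + P/a)**c, with P in GPa and T in K."""
    return T0 * (1.0 + P_GPa / a) ** c

# Two-phase fit parameters from the abstract
T0, a, c = 1575.0, 7.476, 0.305

Tm_ambient = simon_melting_T(0.0, T0, a, c)  # 1575 K at P ~ 0
slope = T0 * c / a                           # dTm/dP at P = 0, ~64 K/GPa
```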

  5. Measurement of the top quark mass with the template method in the $$t\\bar{t} \\to\\mathrm{lepton}+\\mathrm{jets}$$ channel using ATLAS data

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2012-06-21

The top quark mass has been measured using the template method in the $t\bar{t} \to \mathrm{lepton}+\mathrm{jets}$ channel based on data recorded in 2011 with the ATLAS detector at the LHC. The data were taken at a proton-proton centre-of-mass energy of √s = 7 TeV and correspond to an integrated luminosity of 1.04 fb⁻¹. The analyses in the e + jets and μ + jets decay channels yield consistent results. The top quark mass is measured to be m_top = 174.5 ± 0.6 (stat) ± 2.3 (syst) GeV.
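When a single total uncertainty is wanted, statistical and systematic components like those quoted above are conventionally combined in quadrature (assuming they are uncorrelated; the abstract itself does not perform this step):

```python
import math

m_top, stat, syst = 174.5, 0.6, 2.3   # GeV, from the measurement above
total = math.hypot(stat, syst)        # quadrature sum, ~2.4 GeV
summary = f"m_top = {m_top} ± {total:.1f} GeV"
```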

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Li-Min, E-mail: limin.sun@yahoo.com; Huang, Chih-Jen; Faculty of Medicine, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung City, Taiwan

Acute skin reaction during adjuvant radiotherapy for breast cancer is an inevitable process, and its severity is related to the skin dose. A high-skin-dose area can be inferred from the isodose distribution shown on a treatment plan. To determine whether treatment planning can reflect the high-skin-dose location, 80 patients were enrolled and their skin doses in different areas were measured using a thermoluminescent dosimeter to locate the highest-skin-dose area in each patient. We determined whether the measured skin dose was consistent with the highest-dose area estimated by the treatment planning of the same patient. The χ² and Fisher exact tests revealed that these 2 methods yielded more consistent results when the highest-dose spots were located in the axillary and breast areas but not in the inframammary area. We suggest that skin doses shown on the treatment planning might be a reliable and simple alternative method for estimating the highest skin doses in some areas.

  7. Use of a simplified method of optical recording to identify foci of maximal neuron activity in the somatosensory cortex of white rats.

    PubMed

    Inyushin, M Y; Volnova, A B; Lenkov, D N

    2001-01-01

Eight mongrel white male rats were studied under urethane anesthesia, and neuron activity evoked by mechanical and/or electrical stimulation of the contralateral whiskers was recorded in the primary somatosensory cortex. Recordings were made using a digital USB camera attached to the printer port of a Pentium 200 MMX computer running standard programs. Optical images were obtained in the barrel-field zone using a differential signal, i.e., the difference signal for cortex images in control and experimental animals. The results showed that subtraction of averaged sequences of frames yielded images consisting of spots reflecting the probable positions of activated groups of neurons. The most effective stimulation was natural low-frequency stimulation of the whiskers. The method can be used for preliminary mapping of cortical zones, as it provides rapid and reproducible testing of the activity of neuron ensembles over large areas of the cortex.

  8. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression.

    PubMed

    Meng, Yilin; Roux, Benoît

    2015-08-11

The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimensions. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.

  9. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression

    PubMed Central

    2015-01-01

The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimensions. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost. PMID:26574437

  10. Modeling Music Emotion Judgments Using Machine Learning Methods

    PubMed Central

    Vempala, Naresh N.; Russo, Frank A.

    2018-01-01

    Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments inclusive of neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches that were used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion. PMID:29354080

  11. Distinguishing short duration noise transients in LIGO data to improve the PyCBC search for gravitational waves from high mass binary black hole mergers

    NASA Astrophysics Data System (ADS)

    Nitz, Alexander H.

    2018-02-01

‘Blip glitches’ are a type of short-duration transient noise in LIGO data. The cause of the majority of these is currently unknown. Short-duration transient noise creates challenges for searches for the highest-mass binary black hole systems, as standard methods of applying signal consistency, which look for consistency in the accumulated signal-to-noise of the candidate event, are unable to distinguish many blip glitches from short-duration gravitational-wave signals due to similarities in their time and frequency evolution. We demonstrate a straightforward method, employed during Advanced LIGO’s second observing run, including the period of joint observation with the Virgo observatory, to separate the majority of this transient noise from potential gravitational-wave sources. This yields a ∼20% improvement in the detection rate of high-mass binary black hole mergers (>60 M⊙) for the PyCBC analysis.

  12. Modeling Music Emotion Judgments Using Machine Learning Methods.

    PubMed

    Vempala, Naresh N; Russo, Frank A

    2017-01-01

    Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments inclusive of neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches that were used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.

  13. SCA with rotation to distinguish common and distinctive information in linked data.

    PubMed

    Schouteden, Martijn; Van Deun, Katrijn; Pattyn, Sven; Van Mechelen, Iven

    2013-09-01

    Often data are collected that consist of different blocks that all contain information about the same entities (e.g., items, persons, or situations). In order to unveil both information that is common to all data blocks and information that is distinctive for one or a few of them, an integrated analysis of the whole of all data blocks may be most useful. Interesting classes of methods for such an approach are simultaneous-component and multigroup factor analysis methods. These methods yield dimensions underlying the data at hand. Unfortunately, however, in the results from such analyses, common and distinctive types of information are mixed up. This article proposes a novel method to disentangle the two kinds of information, by making use of the rotational freedom of component and factor models. We illustrate this method with data from a cross-cultural study of emotions.

  14. Trans-dimensional MCMC methods for fully automatic motion analysis in tagged MRI.

    PubMed

    Smal, Ihor; Carranza-Herrezuelo, Noemí; Klein, Stefan; Niessen, Wiro; Meijering, Erik

    2011-01-01

    Tagged magnetic resonance imaging (tMRI) is a well-known noninvasive method allowing quantitative analysis of regional heart dynamics. Its clinical use has so far been limited, in part due to the lack of robustness and accuracy of existing tag tracking algorithms in dealing with low (and intrinsically time-varying) image quality. In this paper, we propose a novel probabilistic method for tag tracking, implemented by means of Bayesian particle filtering and a trans-dimensional Markov chain Monte Carlo (MCMC) approach, which efficiently combines information about the imaging process and tag appearance with prior knowledge about the heart dynamics obtained by means of non-rigid image registration. Experiments using synthetic image data (with ground truth) and real data (with expert manual annotation) from preclinical (small animal) and clinical (human) studies confirm that the proposed method yields higher consistency, accuracy, and intrinsic tag reliability assessment in comparison with other frequently used tag tracking methods.

  15. Augmented Reality for Real-Time Detection and Interpretation of Colorimetric Signals Generated by Paper-Based Biosensors.

    PubMed

    Russell, Steven M; Doménech-Sánchez, Antonio; de la Rica, Roberto

    2017-06-23

Colorimetric tests are becoming increasingly popular in point-of-need analyses due to the possibility of detecting the signal with the naked eye, which eliminates the utilization of bulky and costly instruments only available in laboratories. However, colorimetric tests may be interpreted incorrectly by nonspecialists due to disparities in color perception or a lack of training. Here we solve this issue with a method that not only detects colorimetric signals but also interprets them so that the test outcome is understandable for anyone. It consists of an augmented reality (AR) app that uses a camera to detect the colored signals generated by a nanoparticle-based immunoassay, and that yields a warning symbol or message when the concentration of analyte is higher than a certain threshold. The proposed method detected the model analyte mouse IgG with a limit of detection of 0.3 μg mL⁻¹, which was comparable to the limit of detection afforded by classical densitometry performed with a nonportable device. When adapted to the detection of E. coli, the app always yielded a "hazard" warning symbol when the concentration of E. coli in the sample was above the infective dose (10⁶ cfu mL⁻¹ or higher). The proposed method could help nonspecialists make a decision about drinking from a potentially contaminated water source by yielding an unambiguous message that is easily understood by anyone. The widespread availability of smartphones, along with the inexpensive paper test that requires no enzymes to generate the signal, makes the proposed assay promising for analyses in remote locations and developing countries.
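The interpretation step the app performs reduces to a threshold comparison against a calibrated dose. This sketch (hypothetical function and constant names, not the app's actual code) labels an estimated concentration against the infective dose quoted above:

```python
INFECTIVE_DOSE_CFU_PER_ML = 1e6   # threshold quoted in the abstract

def classify(estimated_cfu_per_ml):
    """Map an estimated E. coli concentration to an unambiguous label,
    as the AR app does when rendering its warning symbol."""
    return "hazard" if estimated_cfu_per_ml >= INFECTIVE_DOSE_CFU_PER_ML else "ok"

label = classify(5e6)   # a sample above the infective dose
```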

  16. Asymptotic Normality of the Maximum Pseudolikelihood Estimator for Fully Visible Boltzmann Machines.

    PubMed

    Nguyen, Hien D; Wood, Ian A

    2016-04-01

    Boltzmann machines (BMs) are a class of binary neural networks for which there have been numerous proposed methods of estimation. Recently, it has been shown that in the fully visible case of the BM, the method of maximum pseudolikelihood estimation (MPLE) results in parameter estimates, which are consistent in the probabilistic sense. In this brief, we investigate the properties of MPLE for the fully visible BMs further, and prove that MPLE also yields an asymptotically normal parameter estimator. These results can be used to construct confidence intervals and to test statistical hypotheses. These constructions provide a closed-form alternative to the current methods that require Monte Carlo simulation or resampling. We support our theoretical results by showing that the estimator behaves as expected in simulation studies.

  17. Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh

    1998-01-01

In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle, which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and the kriging models - using a constant underlying global model and a Gaussian correlation function - yield comparable results.

  18. Evaluation of phase-diversity techniques for solar-image restoration

    NASA Technical Reports Server (NTRS)

    Paxman, Richard G.; Seldin, John H.; Lofdahl, Mats G.; Scharmer, Goran B.; Keller, Christoph U.

    1995-01-01

    Phase-diversity techniques provide a novel observational method for overcoming the effects of turbulence and instrument-induced aberrations in ground-based astronomy. Two implementations of phase-diversity techniques that differ with regard to noise model, estimator, optimization algorithm, method of regularization, and treatment of edge effects are described. Reconstructions of solar granulation derived by applying these two implementations to common data sets are shown to yield nearly identical images. For both implementations, reconstructions from phase-diverse speckle data (involving multiple realizations of turbulence) are shown to be superior to those derived from conventional phase-diversity data (involving a single realization). Phase-diverse speckle reconstructions are shown to achieve near diffraction-limited resolution and are validated by internal and external consistency tests, including a comparison with a reconstruction using a well-accepted speckle-imaging method.

  19. Anomalous effects in the aluminum oxide sputtering yield

    NASA Astrophysics Data System (ADS)

    Schelfhout, R.; Strijckmans, K.; Depla, D.

    2018-04-01

    The sputtering yield of aluminum oxide during reactive magnetron sputtering has been quantified by a new and fast method. The method is based on the meticulous determination of the reactive gas consumption during reactive DC magnetron sputtering and has been deployed to determine the sputtering yield of aluminum oxide. The accuracy of the proposed method is demonstrated by comparing its results to the common weight loss method, excluding secondary effects such as redeposition. Both methods exhibit a decrease in sputtering yield with increasing discharge current. This feature of the aluminum oxide sputtering yield is described for the first time. It mirrors the discrepancy between the high sputtering yield values published from low-current ion-beam measurements and the low deposition rate in the poisoned mode during reactive magnetron sputtering. Moreover, the usefulness of the new method arises from its time-resolved capabilities: the evolution of the alumina sputtering yield can now be measured with a resolution of seconds. This reveals the complex dynamical behavior of the sputtering yield. A plausible explanation of the observed anomalies appears to be the balance between retention and out-diffusion of implanted gas atoms; other possible causes are also discussed.

  20. A unified method to process biosolids samples for the recovery of bacterial, viral, and helminths pathogens.

    PubMed

    Alum, Absar; Rock, Channah; Abbaszadegan, Morteza

    2014-01-01

    For land application, biosolids are classified as Class A or Class B based on the levels of bacterial, viral, and helminth pathogens in residual biosolids. The current EPA methods for the detection of these groups of pathogens in biosolids involve discrete steps, so a separate sample must be processed independently to quantify each group of pathogens in biosolids. The aim of the study was to develop a unified method for simultaneous processing of a single biosolids sample to recover bacterial, viral, and helminth pathogens. In the first stage of developing a simultaneous method, nine eluents were compared for their efficiency in recovering viruses from a 100 g spiked biosolids sample. In the second stage, the three top-performing eluents were thoroughly evaluated for the recovery of bacteria, viruses, and helminths. For all three groups of pathogens, the glycine-based eluent provided higher recovery than the beef extract-based eluent. Additional experiments were performed to optimize the performance of the glycine-based eluent under various procedural factors such as the solids-to-eluent ratio, stir time, and centrifugation conditions. Finally, the new method was directly compared with the EPA methods for the recovery of the three groups of pathogens spiked in duplicate samples of biosolids collected from different sources. For viruses, the new method yielded up to 10% higher recoveries than the EPA method. For bacteria and helminths, recoveries were 74% and 83% by the new method compared to 34% and 68% by the EPA method, respectively. The unified sample processing method significantly reduces the time required for processing biosolids samples for different groups of pathogens; it is less affected by the intrinsic variability of samples, while providing higher yields (P = 0.05) and greater consistency than the current EPA methods.

  1. Large Area Crop Inventory Experiment (LACIE). Feasibility of assessing crop condition and yield from LANDSAT data

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The author has identified the following significant results. Yield modelling for crop production estimation provided a means of predicting the within-a-year yield and the year-to-year variability of yield over some fixed or randomly located unit of area. Preliminary studies indicated that the requirements for interpreting LANDSAT data for yield may be sufficiently similar to those of signature extension that it is feasible to investigate the automated estimation of production. The concept of an advanced yield model consisting of both spectral and meteorological components was endorsed. The rationale for using meteorological parameters originated from known between-season and near-harvest dynamics in crop environmental-condition-yield relationships.

  2. Seasonal modulation of the 7Be solar neutrino rate in Borexino

    NASA Astrophysics Data System (ADS)

    Agostini, M.; Altenmüller, K.; Appel, S.; Atroshchenko, V.; Basilico, D.; Bellini, G.; Benziger, J.; Bick, D.; Bonfini, G.; Borodikhina, L.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Caminata, A.; Caprioli, S.; Carlini, M.; Cavalcante, P.; Chepurnov, A.; Choi, K.; D'Angelo, D.; Davini, S.; Derbin, A.; Ding, X. F.; Di Noto, L.; Drachnev, I.; Fomenko, K.; Franco, D.; Froborg, F.; Gabriele, F.; Galbiati, C.; Ghiano, C.; Giammarchi, M.; Goeger-Neff, M.; Goretti, A.; Gromov, M.; Hagner, C.; Houdy, T.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Jany, A.; Jeschke, D.; Kobychev, V.; Korablev, D.; Korga, G.; Kryn, D.; Laubenstein, M.; Lehnert, B.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Manecki, S.; Manuzio, G.; Marcocci, S.; Martyn, J.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Montuschi, M.; Muratova, V.; Neumair, B.; Oberauer, L.; Opitz, B.; Ortica, F.; Pallavicini, M.; Papp, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Roncin, R.; Rossi, N.; Schönert, S.; Semenov, D.; Shakina, P.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Stokes, L. F. F.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Thurn, J.; Toropova, M.; Unzhakov, E.; Vishneva, A.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Weinz, S.; Wojcik, M.; Wurm, M.; Yokley, Z.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.; Borexino Collaboration

    2017-06-01

    We present the evidence for the seasonal modulation of the 7Be neutrino interaction rate with the Borexino detector at the Laboratori Nazionali del Gran Sasso in Italy. The period, amplitude, and phase of the observed time evolution of the signal are consistent with its solar origin, and the absence of an annual modulation is rejected at 99.99% C.L. The data are analyzed using three methods: an analytical fit to the event rate, the Lomb-Scargle periodogram, and Empirical Mode Decomposition; all three yield results in excellent agreement.
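    For readers unfamiliar with the second of the three methods, a minimal Lomb-Scargle sketch on a synthetic event rate with an annual modulation can be written as follows; the amplitude, exposure, and noise level are invented for the example and are not Borexino's values.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

# Synthetic daily event rate with a ~1-year modulation (illustrative stand-in
# for the 7Be rate).
t = np.arange(0.0, 4 * 365.0)    # observation times in days
rate = 100.0 * (1.0 + 0.033 * np.cos(2 * np.pi * t / 365.25)) \
       + rng.normal(0.0, 2.0, t.size)

periods = np.linspace(100.0, 700.0, 2000)   # trial periods in days
omega = 2 * np.pi / periods                 # lombscargle wants angular frequencies
power = lombscargle(t, rate - rate.mean(), omega)

best = periods[np.argmax(power)]
print(f"periodogram peak at ~{best:.0f} days")   # expect close to one year
```

The periodogram handles unevenly sampled data as well, which is why it suits detector live-time gaps.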

  3. Kit for the rapid preparation of .sup.99m Tc red blood cells

    DOEpatents

    Richards, Powell; Smith, Terry D.

    1976-01-01

    A method and sample kit for the preparation of .sup.99m Tc-labeled red blood cells in a closed, sterile system. A partially evacuated tube, containing a freeze-dried stannous citrate formulation with heparin as an anticoagulant, allows whole blood to be automatically drawn from the patient. The radioisotope is added at the end of the labeling sequence to minimize operator exposure. Consistent 97% yields in 20 minutes are obtained with small blood samples. Freeze-dried kits have remained stable after five months.

  4. Arithmetical functions and irrationality of Lambert series

    NASA Astrophysics Data System (ADS)

    Duverney, Daniel

    2011-09-01

    We use a method of Erdös in order to prove the linear independence over Q of the numbers 1, ∑_{n=1}^{+∞} 1/(q^{n^2}-1), and ∑_{n=1}^{+∞} n/(q^{n^2}-1) for every q∈Z with |q|≥2. The main idea consists in considering the two above series as Lambert series. This allows them to be expanded as power series in 1/q. The Taylor coefficients of these expansions are arithmetical functions whose properties allow us to apply an elementary irrationality criterion, which yields the result.

  5. Water quality in the St Croix National Scenic Riverway, Wisconsin

    USGS Publications Warehouse

    Graczyk, D.J.

    1986-01-01

    Yields for suspended sediment, total phosphorus, total nitrogen, and dissolved solids at the study stations were consistently lower than at other stations in the State. Suspended-sediment yields ranged from 1.9 to 13.3 tons per square mile. The average suspended-sediment yield for Wisconsin is 80 tons per square mile. Total phosphorus and the other constituents exhibited the same trend.

  6. Modeling precipitation-runoff relationships to determine water yield from a ponderosa pine forest watershed

    Treesearch

    Assefa S. Desta

    2006-01-01

    A stochastic precipitation-runoff model is used to estimate cold- and warm-season water yields from a ponderosa pine forested watershed in north-central Arizona. The model consists of two parts: simulation of the temporal and spatial distribution of precipitation using a stochastic, event-based approach, and estimation of water yield from the watershed...

  7. Integrated use of surface geophysical methods for site characterization — A case study in North Kingstown, Rhode Island

    USGS Publications Warehouse

    Johnson, Carole D.; Lane, John W.; Brandon, William C.; Williams, Christine A.P.; White, Eric A.

    2010-01-01

    A suite of complementary, non‐invasive surface geophysical methods was used to assess their utility for site characterization in a pilot investigation at a former defense site in North Kingstown, Rhode Island. The methods included frequency‐domain electromagnetics (FDEM), ground‐penetrating radar (GPR), electrical resistivity tomography (ERT), and multi‐channel analysis of surface‐wave (MASW) seismic. The results of each method were compared to each other and to drive‐point data from the site. FDEM was used as a reconnaissance method to assess buried utilities and anthropogenic structures; to identify near‐surface changes in water chemistry related to conductive leachate from road‐salt storage; and to investigate a resistive signature possibly caused by groundwater discharge. Shallow anomalies observed in the GPR and ERT data were caused by near‐surface infrastructure and were consistent with anomalies observed in the FDEM data. Several parabolic reflectors were observed in the upper part of the GPR profiles, and a fairly continuous reflector that was interpreted as bedrock could be traced across the lower part of the profiles. MASW seismic data showed a sharp break in shear wave velocity at depth, which was interpreted as the overburden/bedrock interface. The MASW profile indicates the presence of a trough in the bedrock surface in the same location where the ERT data indicate lateral variations in resistivity. Depths to bedrock interpreted from the ERT, MASW, and GPR profiles were similar and consistent with the depths of refusal identified in the direct‐push wells. The interpretations of data collected using the individual methods yielded non‐unique solutions with considerable uncertainty. Integrated interpretation of the electrical, electromagnetic, and seismic geophysical profiles produced a more consistent and unique estimation of depth to bedrock that is consistent with ground‐truth data at the site. 
This test case shows that using complementary techniques that measure different properties can be more effective for site characterization than a single‐method investigation.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meierbachtol, K.; Tovesson, F.; Shields, D.

    We developed the SPectrometer for Ion DEtermination in fission Research (SPIDER) for measuring mass yield distributions of fission products from spontaneous and neutron-induced fission. The 2E-2v method of measuring the kinetic energy (E) and velocity (v) of both outgoing fission products has been utilized, with the goal of measuring the mass of the fission products with an average resolution of 1 atomic mass unit (amu). The SPIDER instrument, consisting of detector components for time-of-flight, trajectory, and energy measurements, has been assembled and tested using 229Th and 252Cf radioactive decay sources. For commissioning, the fully assembled system measured fission products from spontaneous fission of 252Cf. Individual measurement resolutions were met for time-of-flight (250 ps FWHM), spatial resolution (2 mm FWHM), and energy (92 keV FWHM at 8.376 MeV). Finally, the mass yield results measured from 252Cf spontaneous fission products are reported from an E-v measurement.
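    The kinematic core of the 2E-2v method is simply m = 2E/v^2: with the kinetic energy from the energy detector and the velocity from time-of-flight over a known path, each fragment mass follows directly. The flight path and example fragment below are assumed for illustration, not SPIDER's actual geometry.

```python
# 2E-2v mass determination sketch (non-relativistic, adequate for ~1 MeV/u
# fission fragments): m = 2E / v^2.
AMU_MEV = 931.494          # 1 amu in MeV/c^2
C_M_PER_NS = 0.299792458   # speed of light in m/ns

def mass_amu(energy_mev, tof_ns, path_m):
    """Fragment mass in amu from kinetic energy, time-of-flight, and path."""
    v = path_m / tof_ns            # velocity in m/ns
    beta = v / C_M_PER_NS
    return 2.0 * energy_mev / (beta**2 * AMU_MEV)

# Example (assumed): an A = 100 fragment carrying 100 MeV over a 0.5 m path.
# Its time-of-flight is back-computed here so the round trip is exact.
E, L = 100.0, 0.5
tof = L / (C_M_PER_NS * (2 * E / (100 * AMU_MEV)) ** 0.5)   # ns
print(f"recovered mass: {mass_amu(E, tof, L):.1f} amu")     # 100.0 amu
```

In practice the quoted timing (250 ps) and energy (92 keV) resolutions propagate through this relation to set the achievable mass resolution.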

  9. Identification of sorghum hybrids with high phenotypic stability using GGE biplot methodology.

    PubMed

    Teodoro, P E; Almeida Filho, J E; Daher, R F; Menezes, C B; Cardoso, M J; Godinho, V P C; Torres, F E; Tardin, F D

    2016-06-10

    The aim of this study was to identify sorghum hybrids that have both high yield and phenotypic stability in Brazilian environments. Seven trials were conducted between February and March 2011. The experimental design was a randomized complete block with 25 treatments and three replicates. The treatments consisted of 20 simple pre-commercial hybrids and five checks of grain sorghum. Sorghum genotypes were analyzed by the genotype main effects + genotype x environment interaction (GGE) biplot method to assess genotype x environment interaction, adaptability, and phenotypic stability. The GGE biplot methodology identified two groups of environments, the first composed of Água Comprida-MG, Montividiu-GO, and Vilhena-RO and the second of Guaíra-SP and Sete Lagoas-MG. The BRS 308 and 1G282 genotypes were found to have high grain yield, adaptability, and phenotypic stability and are thus indicated for cultivation in the first and second groups of environments, respectively.

  10. Lineaments on Skylab photographs: Detection, mapping, and hydrologic significance in central Tennessee

    NASA Technical Reports Server (NTRS)

    Moore, G. K.

    1976-01-01

    An investigation was carried out to determine the feasibility of mapping lineaments on SKYLAB photographs of central Tennessee and to determine the hydrologic significance of these lineaments, particularly as concerns the occurrence and productivity of ground water. Sixty-nine percent more lineaments were found on SKYLAB photographs by stereo viewing than by projection viewing, but longer lineaments were detected by projection viewing. Most SKYLAB lineaments consisted of topographic depressions and followed or paralleled streams. The remainder were identified by vegetation alignments and the straight sides of ridges. Test drilling showed that the median yield of wells located on SKYLAB lineaments was about six times the median yield of wells located by random drilling. The best single detection method, in terms of potential savings, was stereo viewing. Larger savings might be achieved by locating wells on lineaments detected by both stereo and projection viewing.

  11. Barrier infrared detector

    NASA Technical Reports Server (NTRS)

    Ting, David Z. (Inventor); Khoshakhlagh, Arezou (Inventor); Soibel, Alexander (Inventor); Hill, Cory J. (Inventor); Gunapala, Sarath D. (Inventor)

    2012-01-01

    A superlattice-based infrared absorber and the matching electron-blocking and hole-blocking unipolar barriers, absorbers and barriers with graded band gaps, high-performance infrared detectors, and methods of manufacturing such devices are provided herein. The infrared absorber material is made from a superlattice (periodic structure) where each period consists of two or more layers of InAs, InSb, InSbAs, or InGaAs. The layer widths and alloy compositions are chosen to yield the desired energy band gap, absorption strength, and strain balance for the particular application. Furthermore, the periodicity of the superlattice can be "chirped" (varied) to create a material with a graded or varying energy band gap. The superlattice-based barrier infrared detectors described and demonstrated herein have spectral ranges covering the entire 3-5 micron atmospheric transmission window, excellent dark current characteristics when operating at temperatures of at least 150 K, high yield, and the potential for high-operability, high-uniformity focal plane arrays.

  12. Self-consistent method for quantifying indium content from X-ray spectra of thick compound semiconductor specimens in a transmission electron microscope.

    PubMed

    Walther, T; Wang, X

    2016-05-01

    Based on Monte Carlo simulations of X-ray generation by fast electrons, we calculate curves of effective sensitivity factors for analytical transmission electron microscopy based energy-dispersive X-ray spectroscopy, including absorption and fluorescence effects, as a function of Ga K/L ratio for different indium- and gallium-containing compound semiconductors. For the case of InGaN alloy thin films we show that experimental spectra can thus be quantified without the need to measure specimen thickness or density, yielding self-consistent values for quantification with Ga K and Ga L lines. The effect of uncertainties in the detector efficiency is also shown to be reduced. © 2015 The Authors. Journal of Microscopy © 2015 Royal Microscopical Society.

  13. Capability of crop water content for revealing variability of winter wheat grain yield and soil moisture under limited irrigation.

    PubMed

    Zhang, Chao; Liu, Jiangui; Shang, Jiali; Cai, Huanjie

    2018-08-01

    Winter wheat (Triticum aestivum L.) is a major crop in the Guanzhong Plain, China. Understanding its water status is important for irrigation planning. A few crop water indicators, such as the leaf equivalent water thickness (EWT: g cm^-2), leaf water content (LWC: %), and canopy water content (CWC: kg m^-2), have been estimated using remote sensing techniques for a wide range of crops, yet their suitability and utility for revealing winter wheat growth and soil moisture status have not been well studied. To bridge this knowledge gap, field-scale irrigation experiments were conducted over two consecutive years (2014 and 2015) to investigate the relationships of crop water content with soil moisture and grain yield, and to assess the performance of four spectral processing methods for retrieving these three crop water indicators. The results revealed that the water indicators were more sensitive to soil moisture variation before the jointing stage. All three water indicators were significantly correlated with soil moisture during the reviving stage, and the correlations were stronger for the leaf water indicators than for the canopy water indicator at the jointing stage. No correlation was observed after the heading stage. All three water indicators showed good capability of revealing grain yield variability at the jointing stage, with R^2 up to 0.89. CWC had a consistent relationship with grain yield over different growing seasons, but the performances of EWT and LWC were growing-season specific. Partial least squares regression was the most accurate method for estimating LWC (R^2 = 0.72; RMSE = 3.6%) and showed comparable capability for EWT and CWC. Finally, the work highlights the usefulness of crop water indicators for assessing crop growth, productivity, and soil water status, and demonstrates the potential of various spectral processing methods for retrieving crop water contents from canopy reflectance spectra. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Weathering, landscape equilibrium, and carbon in four watersheds in eastern Puerto Rico: Chapter H in Water quality and landscape processes of four watersheds in eastern Puerto Rico

    USGS Publications Warehouse

    Stallard, Robert F.; Murphy, Sheila F.; Stallard, Robert F.

    2012-01-01

    The U.S. Geological Survey's Water, Energy, and Biogeochemical Budgets (WEBB) program research in eastern Puerto Rico involves a double pair-wise comparison of four montane river basins, two on granitic bedrock and two on fine-grained volcaniclastic bedrock; for each rock type, one is forested and the other is developed. A confounding factor in this comparison is that the developed watersheds are substantially drier than the forested (runoff of 900–1,600 millimeters per year compared with 2,800–3,700 millimeters per year). To reduce the effects of contrasting runoff, the relation between annual runoff and annual constituent yield were used to estimate mean-annual yields at a common, intermediate mean-annual runoff of 1,860 millimeters per year. Upon projection to this intermediate runoff, the ranges of mean-annual yields among all watersheds became more compact or did not substantially change for dissolved bedrock, sodium, silica, chloride, dissolved organic carbon, and calcium. These constituents are the primary indicators of chemical weathering, biological activity on the landscape, and atmospheric inputs; the narrow ranges indicate little preferential influence by either geology or land cover. The projected yields of biologically active constituents (potassium, nitrate, ammonium ion, phosphate), and particulate constituents (suspended bedrock and particulate organic carbon) were considerably greater for developed landscapes compared with forested watersheds, consistent with the known effects of land clearing and human waste inputs. Equilibrium rates of combined chemical and physical weathering were estimated by using a method based on concentrations of silicon and sodium in bedrock, river-borne solids, and river-borne solutes. The observed rates of landscape denudation greatly exceed rates expected for a dynamic equilibrium, except possibly for the forested watershed on volcaniclastic rock. 
Deforestation and agriculture can explain the accelerated physical erosion in the two developed watersheds. Because there has been no appreciable deforestation, something else, possibly climate or forest-quality change, must explain the accelerated erosion in the forested watersheds on granitic rocks. Particulate organic carbon yields are closely linked to sediment yields. This relation implies that much of the particulate organic carbon transport in the four rivers is being caused by this enhanced erosion aided by landslides and fast carbon recovery. The increase in particulate organic carbon yields over equilibrium is estimated to range from 300 kilomoles per square kilometer per year (6 metric tons carbon per square kilometer per year) to 1,700 kilomoles per square kilometer per year (22 metric tons carbon per square kilometer per year) and is consistent with human-accelerated particulate-organic-carbon erosion and burial observed globally. There is no strong evidence of human perturbation of silicate weathering in the four study watersheds, and differences in dissolved inorganic carbon are consistent with watershed geology. Although dissolved organic carbon is slightly elevated in the developed watersheds, that elevation is not enough to unambiguously demonstrate human causes; more work is needed. Accordingly, the dissolved organic carbon and dissolved inorganic carbon yields of tropical rivers, although large, are of secondary importance in the study of the anthropogenically perturbed carbon cycle.

  15. Molecular elimination of Br2 in photodissociation of CH2BrC(O)Br at 248 nm using cavity ring-down absorption spectroscopy.

    PubMed

    Fan, He; Tsai, Po-Yu; Lin, King-Chuen; Lin, Cheng-Wei; Yan, Chi-Yu; Yang, Shu-Wei; Chang, A H H

    2012-12-07

    The primary elimination channel of the bromine molecule in one-photon dissociation of CH(2)BrC(O)Br at 248 nm is investigated using cavity ring-down absorption spectroscopy. By means of spectral simulation, the ratio of nascent vibrational populations in the v = 0, 1, and 2 levels is evaluated to be 1:(0.5 ± 0.1):(0.2 ± 0.1), corresponding to a Boltzmann vibrational temperature of 581 ± 45 K. The quantum yield of the ground state Br(2) elimination reaction is determined to be 0.24 ± 0.08. With the aid of ab initio potential energy calculations, the obtained Br(2) fragments are anticipated to dissociate on the electronic ground state, yielding vibrationally hot Br(2) products. The temperature-dependence measurements support the proposed pathway via internal conversion. For comparison, the Br(2) yields obtained analogously from CH(3)CHBrC(O)Br and (CH(3))(2)CBrC(O)Br are 0.03 and 0.06, respectively. The trend of Br(2) yields among the three compounds is consistent with the branching ratios evaluated by the Rice-Ramsperger-Kassel-Marcus (RRKM) method. However, the latter result for each molecule is smaller by an order of magnitude than the measured yield. A non-statistical pathway, the so-called roaming process, might be an alternative route to Br(2) production, and its contribution might account for the underestimate of the branching ratio calculations.
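    The quoted vibrational temperature can be checked from the reported populations alone: for a Boltzmann distribution, ln(N_v) is linear in v with slope -hc*omega_e/(k_B*T). The harmonic spacing of about 325 cm^-1 for Br2 used below is an assumed literature value, not a number given in the abstract.

```python
import numpy as np

# Reported nascent populations of Br2 in v = 0, 1, 2 (central values).
v = np.array([0, 1, 2])
pop = np.array([1.0, 0.5, 0.2])

K_PER_CM = 1.438777   # hc/k_B in K*cm (second radiation constant)
omega_e = 325.0       # assumed Br2 harmonic vibrational spacing, cm^-1

# Boltzmann: ln(N_v) = const - v * omega_e * (hc/k_B) / T  ->  linear fit in v.
slope, _ = np.polyfit(v, np.log(pop), 1)
T = -omega_e * K_PER_CM / slope
print(f"T_vib ~ {T:.0f} K")   # ~581 K, matching the 581 ± 45 K quoted above
```

The agreement with the quoted 581 K indicates the reported ratios and temperature are mutually consistent under a simple harmonic-oscillator Boltzmann model.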

  16. Measurement of the B0 ---> Psi (2S) Lambda0 Branching Fraction on BaBar at the Stanford Linear Accelerator Center (Abstract Only)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olivas, Alexander Raymond, Jr.; /Colorado U.

    2005-11-16

    The decays of B{sup 0} mesons to hadronic final states remain a rich area of physics on BaBar. Not only do the c{bar c}-K final states (e.g. B{sup 0} {yields} {psi}(2S)K{sup 0}) allow for the measurement of CP violation, but the branching fractions provide a sensitive test of the theoretical methods used to account for low energy non-perturbative QCD effects. We present the measurement of the branching fraction for the decay B{sup 0} {yields} {psi}(2S)K{sub s}. The data set consists of 88.8 {+-} 1.0 x 10{sup 6} B{bar B} pairs collected on the e{sup +}e{sup -} {yields} {Upsilon}(4S) resonance on BaBar/PEP-II at the Stanford Linear Accelerator Center (SLAC). This analysis features a modification of present cuts, with respect to those published so far on BaBar, on the K{sub S} {yields} {pi}{sup +}{pi}{sup -} and {psi}(2S) {yields} J/{psi}{pi}{sup +}{pi}{sup -} selections, which aim at reducing the background while keeping the signal intact. Various data selection criteria are studied for the lepton modes (e{sup +}e{sup -} and {mu}{sup +}{mu}{sup -}) of the J/{psi} and {psi}(2S) to improve signal purity as well as to study the stability of the resultant branching fractions.

  17. A multivariate model and statistical method for validating tree grade lumber yield equations

    Treesearch

    Donald W. Seegrist

    1975-01-01

    Lumber yields within lumber grades can be described by a multivariate linear model. A method for validating lumber yield prediction equations when there are several tree grades is presented. The method is based on multivariate simultaneous test procedures.

  18. Synthesis and characterization of transparent conductive zinc oxide thin films by sol-gel spin coating method

    NASA Astrophysics Data System (ADS)

    Winarski, David

    Zinc oxide has been given much attention recently as it is promising for various semiconductor device applications. ZnO has a direct band gap of 3.3 eV and a high exciton binding energy of 60 meV, and can exist in various bulk powder and thin film forms for different applications. ZnO is naturally n-type with various structural defects, which sparks further investigation into the material properties. Although there are many potential applications for ZnO, an overall lack of understanding and control of intrinsic defects has made it difficult to obtain consistent, repeatable results. This work studies both synthesis and characterization of zinc oxide in an effort to produce high quality transparent conductive oxides. The sol-gel spin coating method was used to obtain highly transparent ZnO thin films with high UV absorbance. This research develops a new, more consistent method for synthesis of these thin films, providing insight for maintaining quality control at each step of the procedure. A sol-gel spin coating technique is optimized, yielding highly transparent polycrystalline ZnO thin films with tunable electrical properties. Annealing treatment in hydrogen and zinc atmospheres is investigated in an effort to increase electrical conductivity and better understand intrinsic properties of the material. These treatments have shown significant effects on the properties of ZnO. Characterization of doped and undoped ZnO synthesized by the sol-gel spin coating method was carried out using scanning electron microscopy, UV-visible absorbance, X-ray diffraction, and Hall effect measurements. Treatment in hydrogen shows an overall decrease in the number of crystal phases and in visible absorbance, while zinc seems to have the opposite effect. Hall effect measurements show that both annealing environments increase the n-type conductivity, yielding a ZnO thin film with a carrier concentration as high as 3.001 x 10^21 cm^-3.
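    The carrier concentration quoted above comes from Hall-effect data; the standard single-carrier relation n = IB/(e t V_H) converts a Hall voltage into a bulk carrier density. The film and measurement values below are assumed for illustration and merely land in the same 10^21 cm^-3 regime reported here.

```python
# Single-carrier Hall-effect sketch: n = I*B / (e * t * V_H).
E_CHARGE = 1.602176634e-19   # elementary charge, C

def carrier_density_cm3(current_a, field_t, thickness_m, v_hall_v):
    """Bulk carrier concentration in cm^-3 from a Hall measurement."""
    n_m3 = current_a * field_t / (E_CHARGE * thickness_m * v_hall_v)
    return n_m3 / 1e6        # convert m^-3 -> cm^-3

# Assumed example: a 200 nm film driven at 1 mA in a 0.5 T field,
# giving a 10.4 uV Hall voltage.
n_cm3 = carrier_density_cm3(1e-3, 0.5, 200e-9, 10.4e-6)
print(f"n ~ {n_cm3:.2e} cm^-3")   # on the order of 10^21 cm^-3
```

The sign of the Hall voltage, omitted here, is what establishes the n-type character mentioned in the abstract.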

  19. Apparent annual survival estimates of tropical songbirds better reflect life history variation when based on intensive field methods

    USGS Publications Warehouse

    Martin, Thomas E.; Riordan, Margaret M.; Repin, Rimi; Mouton, James C.; Blake, William M.

    2017-01-01

    Aim: Adult survival is central to theories explaining latitudinal gradients in life history strategies. Life history theory predicts higher adult survival in tropical than north temperate regions given lower fecundity and parental effort. Early studies were consistent with this prediction, but standard-effort netting studies in recent decades suggested that apparent survival rates in temperate and tropical regions strongly overlap. Such results do not fit with life history theory. Targeted marking and resighting of breeding adults yielded higher survival estimates in the tropics, but this approach is thought to overestimate survival because it does not sample social and age classes with lower survival. We compared the effect of field methods on tropical survival estimates and their relationships with life history traits. Location: Sabah, Malaysian Borneo. Time period: 2008–2016. Major taxon: Passeriformes. Methods: We used standard-effort netting and resighted individuals of all social and age classes of 18 tropical songbird species over 8 years. We compared apparent survival estimates between these two field methods with differing analytical approaches. Results: Estimated detection and apparent survival probabilities from standard-effort netting were similar to those from other tropical studies that used standard-effort netting. Resighting data verified that a high proportion of individuals that were never recaptured in standard-effort netting remained in the study area, and many were observed breeding. Across all analytical approaches, addition of resighting yielded substantially higher survival estimates than did standard-effort netting alone. These apparent survival estimates were higher than for temperate zone species, consistent with latitudinal differences in life histories.
Moreover, apparent survival estimates from addition of resighting, but not from standard-effort netting alone, were correlated with parental effort as measured by egg temperature across species.Main conclusionsInclusion of resighting showed that standard-effort netting alone can negatively bias apparent survival estimates and obscure life history relationships across latitudes and among tropical species.

  20. Sustainable biochar effects for low carbon crop production: A 5-crop season field experiment on a low fertility soil from Central China

    NASA Astrophysics Data System (ADS)

    Liu, X.

    2014-12-01

    Biochar's effects on improving soil fertility, enhancing crop productivity and reducing greenhouse gas (GHG) emissions from croplands have been well documented in numerous short-term experiments with biochar soil amendment (BSA), mostly within a single crop season or cropping year. However, the persistence of these effects after a single biochar application remains poorly characterized owing to the scarcity of long-term field studies. Large-scale BSA in agriculture is also often criticized for its high cost, given the large amount of biochar required in a single application. Here we examine the persistence of biochar effects on soil fertility and crop productivity improvement, as well as on GHG emission reduction, using data from a field experiment with BSA over 5 crop seasons in central China. A single amendment of biochar was performed at rates of 0 (C0), 20 (C20) and 40 t ha-1 (C40) before sowing of the first crop season. Emissions of CO2, CH4 and N2O were monitored with the static closed-chamber method throughout the growing season for the 1st, 2nd and 5th crops. Crop yield was measured and topsoil samples were collected at harvest of each crop season. BSA altered most of the soil physico-chemical properties, with a significant increase over the control in soil organic carbon (SOC) and available potassium (K) content. The increase in SOC and available K was consistent over the 5 crop seasons after BSA. Despite a significant yield increase in the first maize season, the enhancement of crop yield was not consistent over crop seasons and did not correspond to the changes in soil nutrient availability. BSA did not change seasonal total CO2 efflux but greatly reduced N2O emissions throughout the five seasons. This supports a stable nature of biochar carbon in soil, which played a consistent role in reducing N2O emission; the emission itself showed inter-annual variation with changes in temperature and soil moisture conditions. The biochar effect was much more consistent under C40 than under C20, and for GHG emissions than for soil properties and crop yield. Thus, our study suggests that biochar amendment in dryland cropping can sustain low-carbon production of both maize and wheat, in terms of efficient carbon sequestration, lower GHG emission intensity and soil improvement, over 5 crop seasons after a single amendment.
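    The seasonal GHG totals above are aggregated from individual static-chamber measurements. As a rough sketch of how one closed-chamber flux is commonly computed (a linear fit of headspace concentration against closure time, converted via chamber height and the temperature-corrected molar volume), with an illustrative function name and made-up sample values rather than the authors' data:

```python
import numpy as np

def chamber_flux(times_min, conc_ppm, height_m, temp_c, molar_mass):
    """Estimate a gas flux (mg m^-2 h^-1) from a static closed chamber.

    A straight line is fitted to headspace concentration vs. time; the
    slope (ppm/min) is converted to a mass flux using the ideal-gas
    molar volume at chamber temperature and the chamber height
    (headspace volume divided by footprint area).
    """
    slope_ppm_per_min = np.polyfit(times_min, conc_ppm, 1)[0]
    molar_volume = 22.4 * (273.15 + temp_c) / 273.15  # L/mol at temp_c
    # ppm -> mg m^-3 via molar mass over molar volume; x60 for per hour
    return slope_ppm_per_min * (molar_mass / molar_volume) * height_m * 60.0

# N2O example (44 g/mol): headspace sampled at 0, 10, 20, 30 min
flux = chamber_flux([0, 10, 20, 30], [0.33, 0.36, 0.39, 0.42], 0.5, 25.0, 44.0)
```

    A nonlinear (e.g. HMR-type) fit is often substituted for the linear one when the headspace concentration visibly saturates over the closure period.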

  1. A method to validate quantitative high-frequency power doppler ultrasound with fluorescence in vivo video microscopy.

    PubMed

    Pinter, Stephen Z; Kim, Dae-Ro; Hague, M Nicole; Chambers, Ann F; MacDonald, Ian C; Lacefield, James C

    2014-08-01

    Flow quantification with high-frequency (>20 MHz) power Doppler ultrasound can be performed objectively using the wall-filter selection curve (WFSC) method to select the cutoff velocity that yields a best-estimate color pixel density (CPD). An in vivo video microscopy (IVVM) system is combined with high-frequency power Doppler ultrasound to provide a method for validation of CPD measurements based on WFSCs in mouse testicular vessels. The ultrasound and IVVM systems are instrumented so that the mouse remains on the same imaging platform when switching between the two modalities. In vivo video microscopy provides gold-standard measurements of vascular diameter to validate power Doppler CPD estimates. Measurements in four image planes from three mice exhibit wide variation in the optimal cutoff velocity and indicate that a predetermined cutoff velocity setting can introduce significant errors in studies intended to quantify vascularity. Consistent with previously published flow-phantom data, in vivo WFSCs exhibited three characteristic regions and detectable plateaus. Selection of a cutoff velocity at the right end of the plateau yielded a CPD close to the gold-standard vascular volume fraction estimated using IVVM. An investigator can implement the WFSC method to help adapt cutoff velocity to current blood flow conditions and thereby improve the accuracy of power Doppler for quantitative microvascular imaging. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
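    The plateau-selection idea behind the WFSC method can be sketched as follows: scan the CPD-versus-cutoff curve for its longest flat run and take the cutoff at that run's right end. The helper name, tolerance, and synthetic numbers below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def plateau_cutoff(cutoffs, cpd, tol=0.02):
    """Pick the cutoff velocity at the right end of the CPD plateau.

    A wall-filter selection curve (CPD vs. cutoff velocity) typically
    falls steeply, flattens into a plateau, then falls again.  Here the
    plateau is taken as the longest run of adjacent points whose CPD
    changes by less than `tol`, and its right end is returned.
    """
    cutoffs, cpd = np.asarray(cutoffs, float), np.asarray(cpd, float)
    flat = np.abs(np.diff(cpd)) < tol            # True where curve is flat
    best_len, best_end, run_len = 0, 0, 0
    for i, f in enumerate(flat):
        run_len = run_len + 1 if f else 0
        if run_len > best_len:
            best_len, best_end = run_len, i + 1  # right end of the run
    return cutoffs[best_end]

# Synthetic WFSC: steep drop, plateau near CPD ~0.30, second drop
cut = plateau_cutoff([1, 2, 3, 4, 5, 6, 7, 8],
                     [0.80, 0.45, 0.31, 0.30, 0.30, 0.29, 0.15, 0.05])
```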

  2. Estimating and testing interactions when explanatory variables are subject to non-classical measurement error.

    PubMed

    Murad, Havi; Kipnis, Victor; Freedman, Laurence S

    2016-10-01

    Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.
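    To illustrate the basic regression-calibration idea for an interaction term (the NBRC and LRC estimators in the paper are more elaborate), here is a sketch under classical measurement error with independent mean-zero normal covariates, where calibration slopes are estimated from an internal sub-study in which the true covariates are observed. All names and numeric settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5*x1 + 0.5*x2 + 0.8*x1*x2 + rng.normal(scale=0.5, size=n)

# Classical measurement error: observed W = X + U
w1 = x1 + rng.normal(scale=0.7, size=n)
w2 = x2 + rng.normal(scale=0.7, size=n)

def ols(design, resp):
    return np.linalg.lstsq(design, resp, rcond=None)[0]

# Naive fit on the error-prone covariates attenuates the interaction
beta_naive = ols(np.column_stack([np.ones(n), w1, w2, w1*w2]), y)

# Regression calibration: replace X by E[X|W] = lam*W (mean-zero
# normals), with lam estimated from an internal sub-study of the first
# m subjects; for independent X1, X2 the calibrated product term is
# E[X1*X2|W] = (lam1*W1)*(lam2*W2)
m = 2000
lam1 = np.cov(x1[:m], w1[:m])[0, 1] / np.var(w1[:m])
lam2 = np.cov(x2[:m], w2[:m])[0, 1] / np.var(w2[:m])
beta_rc = ols(np.column_stack([np.ones(n), lam1*w1, lam2*w2,
                               (lam1*w1)*(lam2*w2)]), y)
```

    With these assumptions the calibrated fit recovers the interaction coefficient (0.8) up to sampling error, while the naive fit is pulled toward zero.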

  3. Coating cells with colloidal silica for high yield isolation of plasma membrane sheets and identification of transmembrane proteins.

    PubMed

    Chaney, L K; Jacobson, B S

    1983-08-25

    Plasma membrane (PM) can be isolated by binding to a positively charged solid support. Using this concept, we have developed a novel method of PM isolation using cationic colloidal silica. The method is designed for the comparative study of various physiological states of PM and for transbilayer protein mapping. The procedure consists of coating intact cells with a dense pellicle of silica particles and polyanion. Since cells remain intact during pellicle formation, the external face of the PM is selectively coated. The pellicle greatly enhances PM density and stabilizes it against vesiculation or lateral reorientation. Upon cell lysis, large open sheets of PM are rapidly isolated by centrifugation. PM from Dictyostelium discoideum was prepared by this method. Marker enzymes, cell surface labeling and microscopy demonstrate that the PM was isolated in high yield (70-80%) with a 10-17-fold purification and only low levels of cytoplasmic contamination. The pellicle remains intact during cell lysis and membrane isolation, shielding the external surface of the membranes up to 92% from chemical or enzymatic attack. The PM can thus be labeled selectively from inside and/or outside. Transmembrane proteins were identified in Dictyostelium PM by means of lactoperoxidase iodination and autoradiography.

  4. Effect of acidic and enzymatic pretreatment on the analysis of mountain tea (Sideritis spp.) volatiles via distillation and ultrasound-assisted extraction.

    PubMed

    Dimaki, Virginia D; Iatrou, Gregoris; Lamari, Fotini N

    2017-11-17

    A number of beneficial medicinal properties are attributed to the extract and essential oil of the aerial parts of Sideritis species (Lamiaceae). Hydrodistillation of the aerial parts of wild Sideritis clandestina ssp. peloponnesiaca (an endemic taxon in northern Peloponnesus, Greece) gave a low essential oil yield (<0.12%); about 65 components, mainly α-pinene, β-caryophyllene, β-pinene, globulol and caryophyllene oxide, were identified via GC-MS. Internal and external standards were used for quantification. For miniaturization of the procedure, we studied maceration (MAC) and ultrasound-assisted extraction (UAE) methods side by side, as well as the effect of preincubation in acidic medium (pH 4.8) for 75 min at 37°C with or without a mixture of cellulase, hemicellulase and pectinase. Maceration and UAE provide consistent chemoprofiling of the main volatile compounds (about 20); UAE has lower demands on time, solvent and plant material (3 g), and results in higher yields. Pretreatment with enzymes can increase the respective yields of hydrodistillation and UAE, but this effect is in fact attributable to the concurrent acidic pretreatment. In conclusion, incubation of plant material in citrate buffer, pH 4.8, prior to hydrodistillation or UAE significantly enhances the overall yield and number of components obtained, and is recommended for the analysis of Sideritis volatiles. The acidic pretreatment method was also successfully applied to the analysis of cultivated Sideritis raeseri Boiss. & Heldr. in Boiss. ssp. raeseri; α-pinene, α- and γ-terpinene and β-thujene were predominant, albeit in different percentages, in flowers and leaves. Copyright © 2017. Published by Elsevier B.V.

  5. Can we improve the clinical utility of respiratory rate as a monitored vital sign?

    PubMed

    Chen, Liangyou; Reisner, Andrew T; Gribok, Andrei; McKenna, Thomas M; Reifman, Jaques

    2009-06-01

    Respiratory rate (RR) is a basic vital sign, measured and monitored throughout a wide spectrum of health care settings, although RR is historically difficult to measure in a reliable fashion. We explore an automated method that computes RR only during intervals of clean, regular, and consistent respiration and investigate its diagnostic use in a retrospective analysis of prehospital trauma casualties. At least 5 s of basic vital signs, including heart rate, RR, and systolic, diastolic, and mean arterial blood pressures, were continuously collected from 326 spontaneously breathing trauma casualties during helicopter transport to a level I trauma center. "Reliable" RR data were identified retrospectively using automated algorithms. The diagnostic performances of reliable versus standard RR were evaluated by calculation of the receiver operating characteristic curves using the maximum-likelihood method and comparison of the summary areas under the receiver operating characteristic curves (AUCs). Respiratory rate shows significant data-reliability differences. For identifying prehospital casualties who subsequently receive a respiratory intervention (hospital intubation or tube thoracotomy), standard RR yields an AUC of 0.59 (95% confidence interval, 0.48-0.69), whereas reliable RR yields an AUC of 0.67 (0.57-0.77), P < 0.05. For identifying casualties subsequently diagnosed with a major hemorrhagic injury and requiring blood transfusion, standard RR yields an AUC of 0.60 (0.49-0.70), whereas reliable RR yields 0.77 (0.67-0.85), P < 0.001. Reliable RR, as determined by an automated algorithm, is a useful parameter for the diagnosis of respiratory pathology and major hemorrhage in a trauma population. It may be a useful input to a wide variety of clinical scores and automated decision-support algorithms.
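    The AUC figures above reduce to the Mann-Whitney two-sample statistic. A minimal sketch with toy numbers (not the study's data; the paper's maximum-likelihood ROC fitting and confidence intervals are not reproduced here):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen positive case scores above a
    randomly chosen negative case, counting ties as one half."""
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy respiratory rates (breaths/min): casualties with vs. without a
# subsequent respiratory intervention -- illustrative numbers only
with_interv = [34, 30, 28, 8, 36]
without_interv = [16, 18, 14, 20, 22, 17]
a = auc(with_interv, without_interv)
```

    Note the deliberately abnormal low value (8 breaths/min) among the intervention cases: RR can be discriminative in both directions, which a single threshold would miss but the AUC captures.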

  6. An intercomparison of methods for solving the stochastic collection equation with a focus on cloud radar Doppler spectra in drizzling stratocumulus

    NASA Astrophysics Data System (ADS)

    Lee, H.; Fridlind, A. M.; Ackerman, A. S.; Kollias, P.

    2017-12-01

    Cloud radar Doppler spectra provide rich information for evaluating the fidelity of particle size distributions from cloud models. The intrinsic simplifications of bulk microphysics schemes generally preclude the generation of plausible Doppler spectra, unlike bin microphysics schemes, which develop particle size distributions more organically at substantial computational expense. However, bin microphysics schemes face the difficulty of numerical diffusion leading to overly rapid large drop formation, particularly while solving the stochastic collection equation (SCE). Because such numerical diffusion can cause an even greater overestimation of radar reflectivity, an accurate method for solving the SCE is essential for bin microphysics schemes to accurately simulate Doppler spectra. While several methods have been proposed to solve the SCE, here we examine those of Berry and Reinhardt (1974, BR74), Jacobson et al. (1994, J94), and Bott (2000, B00). Using a simple box model to simulate drop size distribution evolution during precipitation formation with a realistic kernel, it is shown that each method yields a converged solution as the resolution of the drop size grid increases. However, the BR74 and B00 methods yield nearly identical size distributions in time, whereas the J94 method produces consistently larger drops throughout the simulation. In contrast to an earlier study, the performance of the B00 method is found to be satisfactory; it converges at relatively low resolution and long time steps, and its computational efficiency is the best among the three methods considered here. Finally, a series of idealized stratocumulus large-eddy simulations are performed using the J94 and B00 methods. The reflectivity size distributions and Doppler spectra obtained from the different SCE solution methods are presented and compared with observations.
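    A box model of the kind described can be sketched as an explicit-Euler integration of the discrete stochastic collection (Smoluchowski) equation. The sketch below uses a Golovin-type sum-of-masses kernel and arbitrary parameter values; it implements none of the BR74, J94, or B00 flux schemes:

```python
import numpy as np

def step_sce(n, K, dt):
    """One explicit-Euler step of the discrete stochastic collection
    (Smoluchowski) equation.  n[k] is the number density of drops of
    mass k+1 monomer units; K[i, j] is the collection kernel for the
    pair of sizes i+1 and j+1.  Coalesced mass that would land beyond
    the largest tracked bin is simply dropped."""
    m = len(n)
    dndt = np.zeros(m)
    for k in range(m):
        # Gain: pairs of masses (i+1) and (k-i) coalescing into k+1
        gain = 0.5 * sum(K[i, k - 1 - i] * n[i] * n[k - 1 - i]
                         for i in range(k))
        # Loss: a drop of mass k+1 collected by any other drop
        loss = n[k] * sum(K[k, j] * n[j] for j in range(m))
        dndt[k] = gain - loss
    return n + dt * dndt

m_bins = 40
masses = np.arange(1, m_bins + 1, dtype=float)
K = 1e-3 * (masses[:, None] + masses[None, :])   # Golovin-type kernel

n = np.zeros(m_bins)
n[0] = 100.0                                     # monomers only at t=0
mass0 = (masses * n).sum()
for _ in range(200):
    n = step_sce(n, K, dt=0.05)
```

    On a uniform integer-mass grid this scheme conserves mass exactly (apart from leakage past the largest bin); the published schemes differ precisely in how coalesced mass is remapped onto a coarse, geometrically stretched size grid, which is where numerical diffusion enters.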

  7. Evaluation of the percentage of ganglion cells in the ganglion cell layer of the rodent retina

    PubMed Central

    Schlamp, Cassandra L.; Montgomery, Angela D.; Mac Nair, Caitlin E.; Schuart, Claudia; Willmer, Daniel J.

    2013-01-01

    Purpose: Retinal ganglion cells comprise a percentage of the neurons actually residing in the ganglion cell layer (GCL) of the rodent retina. This estimate is useful to extrapolate ganglion cell loss in models of optic nerve disease, but the values reported in the literature are highly variable depending on the methods used to obtain them. Methods: We tested three retrograde labeling methods and two immunostaining methods to calculate ganglion cell number in the mouse retina (C57BL/6). Additionally, a double-stain retrograde staining method was used to label rats (Long-Evans). The number of total neurons was estimated using a nuclear stain and selecting for nuclei that met specific criteria. Cholinergic amacrine cells were identified using transgenic mice expressing Tomato fluorescent protein. Total neurons and total ganglion cell numbers were measured in microscopic fields of 10⁴ µm² to determine the percentage of neurons comprising ganglion cells in each field. Results: Historical estimates of the percentage of ganglion cells in the mouse GCL range from 36.1% to 67.5% depending on the method used. Experimentally, retrograde labeling methods yielded a combined estimate of 50.3% in mice. A retrograde method also yielded a value of 50.21% for rat retinas. Immunolabeling estimates were higher at 64.8%. Immunolabeling may introduce overestimates, however, with non-specific labeling effects, or ectopic expression of antigens in neurons other than ganglion cells. Conclusions: Since immunolabeling methods may overestimate ganglion cell numbers, we conclude that 50%, which is consistently derived from retrograde labeling methods, is a reliable estimate of the ganglion cells in the neuronal population of the GCL. PMID:23825918

  8. Electrorheological suspensions of laponite in oil: rheometry studies.

    PubMed

    Parmar, K P S; Méheust, Y; Schjelderupsen, Børge; Fossum, J O

    2008-03-04

    We have studied the effect of an external direct current (DC) electric field (≈1 kV/mm) on the rheological properties of colloidal suspensions consisting of aggregates of laponite particles in a silicone oil. Microscopy observations show that, under application of an electric field greater than a triggering electric field Ec ≈ 0.6 kV/mm, laponite aggregates assemble into chain- and/or columnlike structures in the oil. Without an applied electric field, the steady-state shear behavior of such suspensions is Newtonian-like. Under application of an electric field larger than Ec, it changes dramatically as a result of the changes in the microstructure: a significant yield stress is measured, and under continuous shear the fluid is shear-thinning. The rheological properties, in particular the dynamic and static shear stress, were studied as a function of particle volume fraction for various strengths (including null) of the applied electric field. The flow curves at constant shear rate can be scaled with respect to both the particle fraction and electric field strength onto a master curve. This scaling is consistent with simple scaling arguments. The shape of the master curve accounts for the system's complexity; it approaches a standard power-law model at high Mason numbers. Both dynamic and static yield stresses are observed to depend on the particle fraction Φ and electric field E as Φ^β E^α, with α ≈ 1.85 and β ≈ 1 and 1.70 for the dynamic and static yield stresses, respectively. The yield stress was also determined as the critical stress at which there occurs a bifurcation in the rheological behavior of suspensions that are submitted to a constant shear stress; a scaling law with α ≈ 1.84 and β ≈ 1.70 was obtained. The effectiveness of the latter technique confirms that such electrorheological (ER) fluids can be studied in the framework of thixotropic fluids. The method is very reproducible; we suggest that it could be used routinely for studying ER fluids. The measured overall yield stress behavior of the suspensions may be explained in terms of standard conduction models for electrorheological systems. Interesting prospects include using such systems for guided self-assembly of clay nanoparticles.
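    A power-law scaling of this kind is typically recovered from measurements by ordinary least squares in log space. A sketch with synthetic, noise-free data generated from the static-yield exponents quoted above (the prefactor and function name are illustrative):

```python
import numpy as np

def fit_power_law(phi, E, tau):
    """Fit tau = c * phi**beta * E**alpha by least squares in log
    space: log tau = log c + beta*log phi + alpha*log E."""
    A = np.column_stack([np.ones(len(tau)), np.log(phi), np.log(E)])
    coef, *_ = np.linalg.lstsq(A, np.log(tau), rcond=None)
    logc, beta, alpha = coef
    return np.exp(logc), beta, alpha

# Synthetic, noise-free yield-stress data (prefactor c = 5.0 arbitrary)
rng = np.random.default_rng(1)
phi = rng.uniform(0.01, 0.10, 30)   # particle volume fraction
E = rng.uniform(0.7, 2.0, 30)       # field above the trigger, kV/mm
tau = 5.0 * phi**1.70 * E**1.85
c, beta, alpha = fit_power_law(phi, E, tau)
```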

  9. Fresh and Stored Pollen From Slash and Loblolly Pines Compared For Seed Yields

    Treesearch

    John F. Kraus; Davie L. Hunt

    1970-01-01

    Seed yields showed no consistent differences between fresh and stored pollen from 8 years of controlled pollination on slash pine and 4 years on loblolly pine. Collection of male strobili at the proper stage of pollen maturity was an important factor in obtaining good seed yields from stored pollen. Criteria are described which were useful in determining when to...

  10. Impairment Rating Ambiguity in the United States: The Utah Impairment Guides for Calculating Workers' Compensation Impairments

    PubMed Central

    Hunter, Bradley; Bunkall, Larry D.; Holmes, Edward B.

    2009-01-01

    Since the implementation of workers' compensation, accurately and consistently rating impairment has been a concern for the employee and employer, as well as rating physicians. In an attempt to standardize and classify impairments, the American Medical Association (AMA) publishes the AMA Guides ("Guides"), and recently published its 6th edition. Common critiques of the 6th edition are that it is too complex, lacks evidence-based methods, and rarely yields consistent ratings. Many states mandate use of some edition of the AMA Guides, but few states are adopting the current edition due to the increasing difficulty and frustration with its implementation. A clearer, simpler approach is needed. Some states have begun to develop their own supplemental guides to combat problems in complexity and validity. Likewise, studies in Korea show that past methods for rating impairment are outdated and inconsistent, and call for measures to adapt current methods to Korea's specific needs. The Utah Supplemental Guides to the AMA Guides have been effective in increasing consistency in rating impairment. It is estimated that litigation of permanent impairment has fallen below 1%, and Utah is now one of the least costly states for obtaining workers' compensation insurance while maintaining a medical fee schedule above the national average. Utah's guides serve as a model for national or international impairment guides. PMID:19503678

  11. Optimal consistency in microRNA expression analysis using reference-gene-based normalization.

    PubMed

    Wang, Xi; Gardiner, Erin J; Cairns, Murray J

    2015-05-01

    Normalization of high-throughput molecular expression profiles secures differential expression analysis between samples of different phenotypes or biological conditions, and facilitates comparison between experimental batches. While the same general principles apply to microRNA (miRNA) normalization, there is mounting evidence that global shifts in their expression patterns occur in specific circumstances, which pose a challenge for normalizing miRNA expression data. As an alternative to global normalization, which has the propensity to flatten large trends, normalization against constitutively expressed reference genes presents an advantage through their relative independence. Here we investigated the performance of reference-gene-based (RGB) normalization for differential miRNA expression analysis of microarray expression data, and compared the results with other normalization methods, including: quantile, variance stabilization, robust spline, simple scaling, rank invariant, and Loess regression. The comparative analyses were executed using miRNA expression in tissue samples derived from subjects with schizophrenia and non-psychiatric controls. We proposed a consistency criterion for evaluating methods by examining the overlapping of differentially expressed miRNAs detected using different partitions of the whole data. Based on this criterion, we found that RGB normalization generally outperformed global normalization methods. Thus we recommend the application of RGB normalization for miRNA expression data sets, and believe that this will yield a more consistent and useful readout of differentially expressed miRNAs, particularly in biological conditions characterized by large shifts in miRNA expression.
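    The core of reference-gene-based normalization can be sketched as a per-sample offset in log space, equivalent to dividing raw intensities by the geometric mean of the reference genes. The helper name and toy matrix below are illustrative, not the code used in the study:

```python
import numpy as np

def rgb_normalize(log_expr, ref_rows):
    """Reference-gene-based normalization of a log2 expression matrix
    (rows = miRNAs, columns = samples): subtract, per sample, the mean
    log2 expression of the designated reference genes.  Because only
    the references define the offset, global shifts in the remaining
    miRNAs are preserved rather than flattened."""
    ref_means = log_expr[ref_rows, :].mean(axis=0)  # one offset/sample
    return log_expr - ref_means[None, :]

# Toy matrix: 4 miRNAs x 3 samples; row 0 is a stable reference gene
expr = np.array([[5.0, 6.0, 5.5],    # reference
                 [8.0, 9.0, 8.5],
                 [2.0, 3.0, 2.5],
                 [7.0, 8.0, 7.5]])
norm = rgb_normalize(expr, [0])
```

    After normalization the reference row is identically zero and each other miRNA is expressed relative to it, so a batch-wide intensity shift (here +1 log2 unit in sample 2) cancels out.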

  12. Further experience with the local lymph node assay using standard radioactive and nonradioactive cell count measurements.

    PubMed

    Kolle, Susanne N; Basketter, David; Schrage, Arnhild; Gamer, Armin O; van Ravenzwaay, Bennard; Landsiedel, Robert

    2012-08-01

    In a previous study, the predictive capacity of a modified local lymph node assay (LLNA) based on cell counts, the LNCC, was demonstrated to be closely similar to that of the original assay. In addition, a range of substances, including some technical/commercial materials and a range of agrochemical formulations (n = 180) have also been assessed in both methods in parallel. The results in the LNCC and LLNA were generally consistent, with 86% yielding an identical classification outcome. Discordant results were associated with borderline data and were evenly distributed between the two methods. Potency information derived from each method also demonstrated good consistency (n = 101), with 93% of predictions being close. Skin irritation was observed only infrequently and was most commonly associated with positive results; it was not associated with the discordant results. Where different vehicles were used with the same test material, the effect on sensitizing activity was modest, consistent with historical data. Analysis of positive control data indicated that the LNCC and LLNA displayed similar levels of biological variation. When taken in combination with the previously published results on LLNA Performance Standard chemicals, it is concluded that the LNCC provides a viable non-radioactive alternative to the LLNA for the assessment of substances, including potency predictions, as well as for the evaluation of preparations. Copyright © 2012 John Wiley & Sons, Ltd.

  13. Accounting for missing data in the estimation of contemporary genetic effective population size (Ne).

    PubMed

    Peel, D; Waples, R S; Macbeth, G M; Do, C; Ovenden, J R

    2013-03-01

    Theoretical models are often applied to population genetic data sets without fully considering the effect of missing data. Researchers can deal with missing data by removing individuals that have failed to yield genotypes and/or by removing loci that have failed to yield allelic determinations, but despite their best efforts, most data sets still contain some missing data. As a consequence, realized sample size differs among loci, and this poses a problem for unbiased methods that must explicitly account for random sampling error. One commonly used solution for the calculation of contemporary effective population size (Ne) is to calculate the effective sample size as an unweighted mean or harmonic mean across loci. This is not ideal because it fails to account for the fact that loci with different numbers of alleles have different information content. Here we consider this problem for genetic estimators of contemporary effective population size (Ne). To evaluate bias and precision of several statistical approaches for dealing with missing data, we simulated populations with known Ne and various degrees of missing data. Across all scenarios, one method of correcting for missing data (fixed-inverse variance-weighted harmonic mean) consistently performed the best for both single-sample and two-sample (temporal) methods of estimating Ne and outperformed some methods currently in widespread use. The approach adopted here may be a starting point to adjust other population genetics methods that include per-locus sample size components. © 2012 Blackwell Publishing Ltd.
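    The fixed inverse-variance weighting in the paper is tailored to the Ne estimators themselves; as a generic sketch of the underlying idea, here is a weighted harmonic mean of per-locus sample sizes, with weights proportional to each locus's number of independent alleles (k - 1). The weights and numbers are illustrative stand-ins, not the paper's exact scheme:

```python
def weighted_harmonic_mean(values, weights):
    """Weighted harmonic mean: sum(w_i) / sum(w_i / x_i)."""
    return sum(weights) / sum(w / x for x, w in zip(values, weights))

# Realized per-locus sample sizes (individuals successfully genotyped)
# and allele counts; locus 3 suffers the most missing data
sample_sizes = [48, 50, 35, 50]
n_alleles = [8, 5, 12, 6]
weights = [k - 1 for k in n_alleles]   # independent alleles per locus
s_eff = weighted_harmonic_mean(sample_sizes, weights)
```

    The harmonic mean is pulled toward the poorly genotyped locus, and the allele-count weighting pulls it further when that locus is also the most informative, so the effective sample size here (about 42.2) sits well below the arithmetic mean (45.75).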

  14. Elastic-plastic models for multi-site damage

    NASA Technical Reports Server (NTRS)

    Actis, Ricardo L.; Szabo, Barna A.

    1994-01-01

    This paper presents recent developments in advanced methods for the analysis of structures with multi-site damage. The method of solution is based on the p-version of the finite element method. Its implementation was designed to permit extraction of linear stress intensity factors using a superconvergent extraction method (known as the contour integral method) and evaluation of the J-integral following an elastic-plastic analysis. Coarse meshes are adequate for obtaining accurate results supported by p-convergence data. The elastic-plastic analysis is based on the deformation theory of plasticity and the von Mises yield criterion. The model problem consists of an aluminum plate with six equally spaced holes and a crack emanating from each hole. The cracks are of different sizes. The panel is subjected to a remote tensile load. Experimental results are available for the panel. The plasticity analysis provided the same limit load as the experimentally determined load. The results of elastic-plastic analysis were compared with the results of linear elastic analysis in an effort to evaluate how plastic zone sizes influence the crack growth rates. The onset of net-section yielding was determined also. The results show that crack growth rate is accelerated by the presence of adjacent damage, and the critical crack size is shorter when the effects of plasticity are taken into consideration. This work also addresses the effects of alternative stress-strain laws: the elastic-ideally-plastic material model is compared against the Ramberg-Osgood model.
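    The pointwise yield check underlying such a plasticity analysis is the von Mises criterion. A plane-stress sketch (the stress values and yield strength below are generic aluminum-scale numbers, not taken from the paper):

```python
import math

def von_mises_plane_stress(sx, sy, txy):
    """Von Mises equivalent stress under plane stress:
    sigma_vm = sqrt(sx^2 - sx*sy + sy^2 + 3*txy^2)."""
    return math.sqrt(sx**2 - sx*sy + sy**2 + 3.0*txy**2)

# Illustrative stress state near a hole in a remotely loaded panel
sigma_yield = 345.0                  # MPa, aluminum-alloy scale
s_vm = von_mises_plane_stress(300.0, 80.0, 60.0)  # MPa components
yielded = s_vm >= sigma_yield        # has this point yielded?
```

    Net-section yielding in the panel corresponds to this check being satisfied across the whole remaining ligament between holes, not just at the crack tips.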

  15. Experiment, monitoring, and gradient methods used to infer climate change effects on plant communities yield consistent patterns.

    PubMed

    Elmendorf, Sarah C; Henry, Gregory H R; Hollister, Robert D; Fosaa, Anna Maria; Gould, William A; Hermanutz, Luise; Hofgaard, Annika; Jónsdóttir, Ingibjörg S; Jorgenson, Janet C; Lévesque, Esther; Magnusson, Borgþór; Molau, Ulf; Myers-Smith, Isla H; Oberbauer, Steven F; Rixen, Christian; Tweedie, Craig E; Walker, Marilyn D

    2015-01-13

    Inference about future climate change impacts typically relies on one of three approaches: manipulative experiments, historical comparisons (broadly defined to include monitoring the response to ambient climate fluctuations using repeat sampling of plots, dendroecology, and paleoecology techniques), and space-for-time substitutions derived from sampling along environmental gradients. Potential limitations of all three approaches are recognized. Here we address the congruence among these three main approaches by comparing the degree to which tundra plant community composition changes (i) in response to in situ experimental warming, (ii) with interannual variability in summer temperature within sites, and (iii) over spatial gradients in summer temperature. We analyzed changes in plant community composition from repeat sampling (85 plant communities in 28 regions) and experimental warming studies (28 experiments in 14 regions) throughout arctic and alpine North America and Europe. Increases in the relative abundance of species with a warmer thermal niche were observed in response to warmer summer temperatures using all three methods; however, effect sizes were greater over broad-scale spatial gradients relative to either temporal variability in summer temperature within a site or summer temperature increases induced by experimental warming. The effect sizes for change over time within a site and with experimental warming were nearly identical. These results support the view that inferences based on space-for-time substitution overestimate the magnitude of responses to contemporary climate warming, because spatial gradients reflect long-term processes. In contrast, in situ experimental warming and monitoring approaches yield consistent estimates of the magnitude of response of plant communities to climate warming.

  16. Direct PCR amplification of DNA from human bloodstains, saliva, and touch samples collected with microFLOQ® swabs.

    PubMed

    Ambers, Angie; Wiley, Rachel; Novroski, Nicole; Budowle, Bruce

    2018-01-01

    Previous studies have shown that nylon flocked swabs outperform traditional fiber swabs in DNA recovery due to their innovative design and lack of internal absorbent core to entrap cellular materials. The microFLOQ® Direct swab, a miniaturized version of the 4N6 FLOQSwab®, has a small swab head that is treated with a lysing agent which allows for direct amplification and DNA profiling from sample collection to final result in less than two hours. Additionally, the microFLOQ® system subsamples only a minute portion of a stain and preserves the vast majority of the sample for subsequent testing or re-analysis, if desired. The efficacy of direct amplification of DNA from dilute bloodstains, saliva stains, and touch samples was evaluated using microFLOQ® Direct swabs and the GlobalFiler™ Express system. Comparisons were made to traditional methods to assess the robustness of this alternate workflow. Controlled studies with 1:19 and 1:99 dilutions of bloodstains and saliva stains consistently yielded higher STR peak heights than standard methods with 1 ng input DNA from the same samples. Touch samples from common items yielded single source and mixed profiles that were consistent with primary users of the objects. With this novel methodology/workflow, no sample loss occurs and therefore more template DNA is available during amplification. This approach may have important implications for analysis of low quantity and/or degraded samples that plague forensic casework. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  17. Environmental probabilistic quantitative assessment methodologies

    USGS Publications Warehouse

    Crovelli, R.A.

    1995-01-01

    In this paper, four petroleum resource assessment methodologies are presented as possible pollution assessment methodologies, even though petroleum as a resource is desirable, whereas pollution is undesirable. A methodology is defined in this paper to consist of a probability model and a probabilistic method, where the method is used to solve the model. The following four basic types of probability models are considered: 1) direct assessment, 2) accumulation size, 3) volumetric yield, and 4) reservoir engineering. Three of the four petroleum resource assessment methodologies were written as microcomputer systems, viz. TRIAGG for direct assessment, APRAS for accumulation size, and FASPU for reservoir engineering. A fourth microcomputer system termed PROBDIST supports the three assessment systems. The three assessment systems have different probability models but the same type of probabilistic method. The advantages of the analytic method are computational speed and flexibility, making it ideal for a microcomputer. - from Author

  18. Passive particle dosimetry. [silver halide crystal growth

    NASA Technical Reports Server (NTRS)

    Childs, C. B.

    1977-01-01

    Present methods of dosimetry are reviewed with emphasis on the processes using silver chloride crystals for ionizing particle dosimetry. Differences between the abilities of various crystals to record ionizing particle paths are directly related to impurities in the range of a few ppm (parts per million). To understand the roles of these impurities in the process, a method for consistent production of high-purity silver chloride and silver bromide was developed which yields silver halides with a detectable impurity content of less than 1 ppm. This high-purity silver chloride was used in growing crystals with controlled doping. Crystals were grown by both the Czochralski method and the Bridgman method, and the Bridgman-grown crystals were used for the experiments discussed. The distribution coefficients of ten divalent cations were determined for the Bridgman crystals. The best dosimeters were made with silver chloride crystals containing 5 to 10 ppm of lead; the other impurities tested did not produce proper dosimeters.

  19. Methods for improved forewarning of critical events across multiple data channels

    DOEpatents

    Hively, Lee M [Philadelphia, TN]

    2007-04-24

    This disclosed invention concerns improvements in forewarning of critical events via phase-space dissimilarity analysis of data from mechanical devices, electrical devices, biomedical data, and other physical processes. First, a single channel of process-indicative data is selected that can be used in place of multiple data channels without sacrificing consistent forewarning of critical events. Second, the method discards data of inadequate quality via statistical analysis of the raw data, because the analysis of poor-quality data always yields inferior results. Third, two separate filtering operations are used in sequence to remove both high-frequency and low-frequency artifacts using a zero-phase quadratic filter. Fourth, the method constructs phase-space dissimilarity measures (PSDM) by combining multi-channel time-serial data into a multi-channel time-delay phase-space reconstruction. Fifth, the method uses a composite measure of dissimilarity (C.sub.i) to provide a forewarning of failure and an indicator of failure onset.

  20. Determination of molybdenum in sea and estuarine water with β-naphthoin oxime and neutron activation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuathilake, A.I.; Chatt, A.

    1980-05-01

    An analytical method has been developed for the determination of submicrogram quantities of molybdenum in sea and estuarine water. The method consists of preconcentration of molybdenum with β-naphthoin oxime followed by determination of the element employing neutron activation analysis. Various factors that can influence the yield and selectivity of the preconcentration process have been investigated in detail. A comparison study between α-benzoin oxime and β-naphthoin oxime in preconcentrating molybdenum has been carried out using a standard steel sample. The method has been applied to determine the molybdenum content of sea and estuarine water. A detection limit of 0.32 μg Mo L^-1 seawater has been achieved. The precision and accuracy of the method have been evaluated using an intercomparison fresh water and a biological standard reference material. 1 figure, 9 tables.

  1. Lamb Shift Measurement in Hydrogen by the Anisotropy Method

    NASA Astrophysics Data System (ADS)

    Drake, G. W. F.; van Wijngaarden, A.; Holuj, F.

    1998-05-01

    The Lamb shift in hydrogen and hydrogenic ions continues to provide one of the most important tests of quantum electrodynamics. A previous measurement in He^+ by the anisotropy method (A. van Wijngaarden, J. Kwela and G. W. F. Drake, Phys. Rev. A 43, 3325 (1991)) yields a value that is 70(12) parts per million higher than theory when two-loop binding corrections are included (K. Pachucki et al., J. Phys. B 29, 117 (1996)). A new high-precision measurement of the Lamb shift in hydrogen by the same method will be reported (Can. J. Phys. 76, February (1998)). The result of 1057.852(15) MHz is consistent with theory and other measurements, thereby confirming that the anisotropy method and its interpretation are valid at the 15 parts per million level of accuracy. The remaining discrepancy for He^+ could be explained by an additional contribution to theory that scales as Z^6.

  2. Cropping system diversification for food production in Mindanao rubber plantations: a rice cultivar mixture and rice intercropped with mungbean

    PubMed Central

    Elazegui, Francisco; Duque, Jo-Anne Lynne Joy E.; Mundt, Christopher C.; Vera Cruz, Casiana M.

    2017-01-01

    Including food production in non-food systems, such as rubber plantations and biofuel or bioenergy crops, may contribute to household food security. We evaluated the potential for planting rice, mungbean, rice cultivar mixtures, and rice intercropped with mungbean in young rubber plantations in experiments in the Arakan Valley of Mindanao in the Philippines. Rice mixtures consisted of two- or three-row strips of cultivar Dinorado, a cultivar with higher value but lower yield, and high-yielding cultivar UPL Ri-5. Rice and mungbean intercropping treatments consisted of different combinations of two- or three-row strips of rice and mungbean. We used generalized linear mixed models to evaluate the yield of each crop alone and in the mixture or intercropping treatments. We also evaluated a land equivalent ratio for yield, along with weed biomass (where Ageratum conyzoides was particularly abundant), the severity of disease caused by Magnaporthe oryzae and Cochliobolus miyabeanus, and rice bug (Leptocorisa acuta) abundance. We analyzed the yield ranking of each cropping system across site-year combinations to determine mean relative performance and yield stability. When weighted by their relative economic value, UPL Ri-5 had the highest mean performance, but with decreasing performance in low-yielding environments. A rice and mungbean intercropping system had the second highest performance, tied with high-value Dinorado but without decreasing relative performance in low-yielding environments. Rice and mungbean intercropped with rubber have been adopted by farmers in the Arakan Valley. PMID:28194318
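    The land equivalent ratio (LER) evaluated in this study is a standard intercropping metric: the sum, over component crops, of each crop's intercrop yield relative to its sole-crop yield. A minimal sketch with hypothetical yield figures (the function name and numbers are ours, not the paper's):

```python
def land_equivalent_ratio(intercrop_yields, sole_yields):
    """Sum of relative yields across component crops.
    LER > 1 means the intercrop uses land more efficiently
    than growing each crop alone."""
    return sum(yi / ys for yi, ys in zip(intercrop_yields, sole_yields))

# Hypothetical yields (t/ha): rice and mungbean intercropped vs sole-cropped.
ler = land_equivalent_ratio([2.4, 0.6], [3.0, 1.0])  # 0.8 + 0.6 = 1.4
```

    An LER above 1 indicates an intercropping advantage for a given land area.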

  3. Optimizing Dense Plasma Focus Neutron Yields With Fast Gas Jets

    NASA Astrophysics Data System (ADS)

    McMahon, Matthew; Stein, Elizabeth; Higginson, Drew; Kueny, Christopher; Link, Anthony; Schmidt, Andrea

    2017-10-01

    We report a study using the particle-in-cell code LSP to perform fully kinetic simulations modeling dense plasma focus (DPF) devices with high-density gas jets on axis. The high-density jets are modeled in the large-eddy Navier-Stokes code CharlesX, which is suitable for modeling both subsonic and supersonic gas flow. The gas pattern, which is essentially static on z-pinch time scales, is imported from CharlesX to LSP for neutron yield predictions. Fast gas puffs allow for more mass on axis while maintaining the optimal pressure for the DPF. As the density of a subsonic jet increases relative to the background fill, we find the neutron yield increases, as does the variability in the neutron yield. Introducing perturbations in the jet density via supersonic flow (also known as Mach diamonds) allows for consistent seeding of the m = 0 instability, leading to more consistent ion acceleration and higher neutron yields with less variability. Jets with higher on-axis density are found to have the greatest yield. The optimal jet configuration and the necessary jet conditions for increasing neutron yield and reducing yield variability are explored. Simulations of realistic jet profiles are performed and compared to the ideal scenario. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and supported by the Laboratory Directed Research and Development Program (15-ERD-034) at LLNL.

  4. Calibrating random forests for probability estimation.

    PubMed

    Dankowski, Theresa; Ziegler, Andreas

    2016-09-30

    Probabilities can be consistently estimated using random forests. It is, however, unclear how random forests should be updated to make predictions for other centers or at different time points. In this work, we present two approaches for updating random forests for probability estimation. The first method has been proposed by Elkan and may be used for updating any machine learning approach yielding consistent probabilities, so-called probability machines. The second approach is a new strategy specifically developed for random forests. Using the terminal nodes, which represent conditional probabilities, the random forest is first translated to logistic regression models. These are, in turn, used for re-calibration. The two updating strategies were compared in a simulation study and are illustrated with data from the German Stroke Study Collaboration. In most simulation scenarios, both methods led to similar improvements. In the simulation scenario in which the stricter assumptions of Elkan's method were not met, the logistic regression-based re-calibration approach for random forests outperformed Elkan's method. It also performed better on the stroke data than Elkan's method. The strength of Elkan's method is its general applicability to any probability machine. However, if the strict assumptions underlying this approach are not met, the logistic regression-based approach is preferable for updating random forests for probability estimation. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
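    Elkan's updating rule adjusts an already calibrated probability for a shift in the class base rate between training and target populations. A minimal sketch of that rule (the function name is ours; the paper's logistic-regression re-calibration step is not reproduced here):

```python
def elkan_update(p, b_train, b_target):
    """Adjust a calibrated probability p, obtained under training base
    rate b_train, to a population with base rate b_target (Elkan, 2001).
    Assumes p itself is well calibrated on the training population."""
    num = b_target * (p - p * b_train)
    den = b_train - p * b_train + b_target * p - b_target * b_train
    return num / den

# An unchanged base rate leaves the probability untouched; a lower
# target base rate pulls the predicted probability down.
p_same = elkan_update(0.7, b_train=0.5, b_target=0.5)
p_rare = elkan_update(0.7, b_train=0.5, b_target=0.2)
```

    Note the formula reduces to the identity when b_train equals b_target, and fixes the endpoints p = 0 and p = 1.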

  5. DYNAMICAL MEASUREMENTS OF THE YOUNG UPPER SCORPIUS TRIPLE NTTS 155808-2219

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mace, G. N.; McLean, I. S.; Prato, L.

    2012-08-15

    The young, low-mass, triple system NTTS 155808-2219 (ScoPMS 20) was previously identified as a ~17 day period single-lined spectroscopic binary (SB) with a tertiary component at 0.21 arcsec. Using high-resolution infrared spectra, acquired with NIRSPEC on Keck II, both with and without adaptive optics (AO), we measured radial velocities (RVs) of all three components. Reanalysis of the single-lined visible light observations, made from 1987 to 1993, also yielded RV detections of the three stars. Combining visible light and infrared data to compute the orbital solution produces orbital parameters consistent with the single-lined solution and a mass ratio of q = 0.78 ± 0.01 for the SB. We discuss the consistency between our results and previously published data on this system, our RV analysis with both observed and synthetic templates, and the possibility that this system is eclipsing, providing a potential method for the determination of the stars' absolute masses. Over the ~20 year baseline of our observations, we have measured the acceleration of the SB's center of mass in its orbit with the tertiary. Long-term, AO imaging of the tertiary will eventually yield dynamical data useful for component mass estimates.

  6. redGEM: Systematic reduction and analysis of genome-scale metabolic reconstructions for development of consistent core metabolic models

    PubMed Central

    Ataman, Meric

    2017-01-01

    Genome-scale metabolic reconstructions have proven to be valuable resources in enhancing our understanding of metabolic networks, as they encapsulate all known metabolic capabilities of the organisms from genes to proteins to their functions. However, the complexity of these large metabolic networks often hinders their utility in various practical applications. Although reduced models are commonly used for modeling and in integrating experimental data, they are often inconsistent across different studies and laboratories due to differing criteria and levels of detail, which can compromise transferability of the findings and also integration of experimental data from different groups. In this study, we have developed a systematic semi-automatic approach to reduce genome-scale models into core models in a consistent and logical manner, focusing on the central metabolism or subsystems of interest. The method minimizes the loss of information using an approach that combines graph-based search and optimization methods. The resulting core models are shown to be able to capture key properties of the genome-scale models and preserve consistency in terms of biomass and by-product yields, flux and concentration variability, and gene essentiality. The development of these “consistently-reduced” models will help to clarify and facilitate integration of different experimental data to draw new understanding that can be directly extendable to genome-scale models. PMID:28727725

  7. Contribution of insect pollinators to crop yield and quality varies with agricultural intensification

    PubMed Central

    Potts, Simon G.; Steffan-Dewenter, Ingolf; Vaissière, Bernard E.; Woyciechowski, Michal; Krewenka, Kristin M.; Tscheulin, Thomas; Roberts, Stuart P.M.; Szentgyörgyi, Hajnalka; Westphal, Catrin; Bommarco, Riccardo

    2014-01-01

    Background. Up to 75% of crop species benefit at least to some degree from animal pollination for fruit or seed set and yield. However, basic information on the level of pollinator dependence and pollinator contribution to yield is lacking for many crops. Even less is known about how insect pollination affects crop quality. Given that habitat loss and agricultural intensification are known to decrease pollinator richness and abundance, there is a need to assess the consequences for different components of crop production. Methods. We used pollination exclusion on flowers or inflorescences on a whole plant basis to assess the contribution of insect pollination to crop yield and quality in four flowering crops (spring oilseed rape, field bean, strawberry, and buckwheat) located in four regions of Europe. For each crop, we recorded abundance and species richness of flower visiting insects in ten fields located along a gradient from simple to heterogeneous landscapes. Results. Insect pollination enhanced average crop yield between 18 and 71% depending on the crop. Yield quality was also enhanced in most crops. For instance, oilseed rape had higher oil and lower chlorophyll contents when adequately pollinated, the proportion of empty seeds decreased in buckwheat, and strawberries’ commercial grade improved; however, we did not find higher nitrogen content in open pollinated field beans. Complex landscapes had a higher overall species richness of wild pollinators across crops, but visitation rates were only higher in complex landscapes for some crops. On the contrary, the overall yield was consistently enhanced by higher visitation rates, but not by higher pollinator richness. Discussion. For the four crops in this study, there is clear benefit delivered by pollinators on yield quantity and/or quality, but it is not maximized under current agricultural intensification. 
Honeybees, the most abundant pollinator, might partially compensate the loss of wild pollinators in some areas, but our results suggest the need of landscape-scale actions to enhance wild pollinator populations. PMID:24749007

  8. Contribution of insect pollinators to crop yield and quality varies with agricultural intensification.

    PubMed

    Bartomeus, Ignasi; Potts, Simon G; Steffan-Dewenter, Ingolf; Vaissière, Bernard E; Woyciechowski, Michal; Krewenka, Kristin M; Tscheulin, Thomas; Roberts, Stuart P M; Szentgyörgyi, Hajnalka; Westphal, Catrin; Bommarco, Riccardo

    2014-01-01

    Background. Up to 75% of crop species benefit at least to some degree from animal pollination for fruit or seed set and yield. However, basic information on the level of pollinator dependence and pollinator contribution to yield is lacking for many crops. Even less is known about how insect pollination affects crop quality. Given that habitat loss and agricultural intensification are known to decrease pollinator richness and abundance, there is a need to assess the consequences for different components of crop production. Methods. We used pollination exclusion on flowers or inflorescences on a whole plant basis to assess the contribution of insect pollination to crop yield and quality in four flowering crops (spring oilseed rape, field bean, strawberry, and buckwheat) located in four regions of Europe. For each crop, we recorded abundance and species richness of flower visiting insects in ten fields located along a gradient from simple to heterogeneous landscapes. Results. Insect pollination enhanced average crop yield between 18 and 71% depending on the crop. Yield quality was also enhanced in most crops. For instance, oilseed rape had higher oil and lower chlorophyll contents when adequately pollinated, the proportion of empty seeds decreased in buckwheat, and strawberries' commercial grade improved; however, we did not find higher nitrogen content in open pollinated field beans. Complex landscapes had a higher overall species richness of wild pollinators across crops, but visitation rates were only higher in complex landscapes for some crops. On the contrary, the overall yield was consistently enhanced by higher visitation rates, but not by higher pollinator richness. Discussion. For the four crops in this study, there is clear benefit delivered by pollinators on yield quantity and/or quality, but it is not maximized under current agricultural intensification. 
Honeybees, the most abundant pollinator, might partially compensate the loss of wild pollinators in some areas, but our results suggest the need of landscape-scale actions to enhance wild pollinator populations.

  9. A strategy of combining SILAR with solvothermal process for In2S3 sensitized quantum dot-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Yang, Peizhi; Tang, Qunwei; Ji, Chenming; Wang, Haobo

    2015-12-01

    Pursuit of an efficient strategy for the quantum dot-sensitized photoanode has been a persistent objective in enhancing the photovoltaic performance of quantum dot-sensitized solar cells (QDSCs). We present here the fabrication of an indium sulfide (In2S3) quantum dot-sensitized titanium dioxide (TiO2) photoanode by combining successive ionic layer adsorption and reaction (SILAR) with solvothermal processes. The resultant QDSC consists of an In2S3-sensitized TiO2 photoanode, a liquid polysulfide electrolyte, and a Co0.85Se counter electrode. The optimized QDSC, whose photoanode was prepared by the SILAR method at 20 deposition cycles combined with the solvothermal method, yields a maximum power conversion efficiency of 1.39%.

  10. Kramers-Kronig relations in Laser Intensity Modulation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuncer, Enis

    2006-01-01

    In this short paper, the Kramers-Kronig relations for the Laser Intensity Modulation Method (LIMM) are presented to check the self-consistency of experimentally obtained complex current densities. The numerical procedure yields well defined, precise estimates for the real and the imaginary parts of the LIMM current density calculated from its imaginary and real parts, respectively. The procedure also determines an accurate high-frequency real current value which appears to be an intrinsic material parameter similar to that of the dielectric permittivity at optical frequencies. Note that the problem considered here couples two different material properties, thermal and electrical; consequently, the validity of the Kramers-Kronig relation indicates that the problem is invariant and linear.
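    The consistency check described here rests on the fact that the real and imaginary parts of a causal linear response are a Hilbert-transform pair. A minimal numerical sketch on a known analytic pair (not the LIMM data themselves), assuming a discrete FFT-based Hilbert transform is an acceptable stand-in for the principal-value integral:

```python
import numpy as np

# Dense symmetric frequency grid; a wide window keeps periodization error small.
n = 2**18
w = np.linspace(-200.0, 200.0, n, endpoint=False)
re_chi = 1.0 / (1.0 + w**2)        # Lorentzian: Re of chi(w) = 1/(1 - i*w)
im_chi_exact = w / (1.0 + w**2)    # its exact Kramers-Kronig partner

# Discrete Hilbert transform via FFT: multiply each bin by -i*sign(frequency).
sgn = np.sign(np.fft.fftfreq(n))
im_chi_kk = np.real(np.fft.ifft(np.fft.fft(re_chi) * (-1j) * sgn))

# Agreement away from the window edges confirms KK self-consistency.
central = np.abs(w) < 50
central_err = np.max(np.abs(im_chi_kk - im_chi_exact)[central])
```

    Applied to measured data, a large residual between the transformed real part and the measured imaginary part would flag an inconsistency in the experiment.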

  11. Iterative deep convolutional encoder-decoder network for medical image segmentation.

    PubMed

    Jung Uk Kim; Hak Gu Kim; Yong Man Ro

    2017-07-01

    In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach with an encoder-decoder network to improve segmentation results, which enables precise localization of regions of interest (ROIs), including complex shapes or detailed textures of medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework is able to yield excellent segmentation performance for various medical images. The effectiveness of the proposed method was demonstrated by comparison with other state-of-the-art medical image segmentation methods.

  12. Effects of large-angle Coulomb collisions on inertial confinement fusion plasmas.

    PubMed

    Turrell, A E; Sherlock, M; Rose, S J

    2014-06-20

    Large-angle Coulomb collisions affect the rates of energy and momentum exchange in a plasma, and it is expected that their effects will be important in many plasmas of current research interest, including in inertial confinement fusion. Their inclusion is a long-standing problem, and the first fully self-consistent method for calculating their effects is presented. This method is applied to "burn" in the hot fuel in inertial confinement fusion capsules and finds that the yield increases due to an increase in the rate of temperature equilibration between electrons and ions which is not predicted by small-angle collision theories. The equilibration rate increases are 50%-100% for number densities of 10^30 m^-3 and temperatures around 1 keV.

  13. Minimal-Drift Heading Measurement using a MEMS Gyro for Indoor Mobile Robots.

    PubMed

    Hong, Sung Kyung; Park, Sungsu

    2008-11-17

    To meet the challenges of using low-cost MEMS yaw rate gyros for the precise self-localization of indoor mobile robots, this paper examines a practical and effective method of minimizing drift in the heading angle that relies solely on integration of rate signals from a gyro. The proposed approach consists of two parts: 1) self-identification of calibration coefficients that affect long-term performance, and 2) a threshold filter to reject the broadband noise component that affects short-term performance. Experimental results with the proposed method applied to the Epson XV3500 gyro demonstrate that it effectively yields minimal-drift heading angle measurements, overcoming major error sources in the MEMS gyro output.
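    The two-part idea, bias self-calibration followed by a threshold filter before integration, can be sketched as follows. The data, threshold value, and function name are illustrative assumptions, not the XV3500 procedure itself:

```python
import numpy as np

def integrate_heading(rates, dt, bias, threshold):
    """Integrate gyro rate samples into a heading angle.  Samples within
    +/- threshold of the calibrated bias are treated as broadband noise
    and rejected (zeroed) before integration."""
    corrected = rates - bias                    # remove self-calibrated bias
    corrected[np.abs(corrected) < threshold] = 0.0
    return np.cumsum(corrected) * dt

# Hypothetical data: a stationary gyro with a constant bias plus noise.
rng = np.random.default_rng(0)
dt, true_bias = 0.01, 0.5                       # s, deg/s
rates = true_bias + 0.05 * rng.standard_normal(1000)

bias_est = rates[:500].mean()                   # self-calibration at rest
heading = integrate_heading(rates, dt, bias_est, threshold=0.2)
naive = np.cumsum(rates) * dt                   # raw integration drifts
```

    For this stationary case, raw integration accumulates roughly 5 degrees of drift over 10 s, while the calibrated-and-thresholded estimate stays near zero.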

  14. Exciton coupling between enones: Quassinoids revisited.

    PubMed

    Pescitelli, Gennaro; Di Bari, Lorenzo

    2017-09-01

    The electronic circular dichroism (ECD) spectra of two previously reported quassinoids containing a pair of enone chromophores are revisited to gain insight into the consistency and applicability of the exciton chirality method. Our study is based on time-dependent Density Functional Theory calculations, transition and orbital analysis, and numerical exciton coupling calculations. In quassin (1) the enone/enone exciton coupling is quasi-degenerate, yielding strong rotational strengths that account for the observed ECD spectrum in the enone π-π* region. In perforalactone C (2) the nondegenerate coupling produces weak rotational strengths, and the ECD spectrum is dominated by other mechanisms of optical activity. We stress the need for careful application of the nondegenerate exciton coupling method in similar cases. © 2017 Wiley Periodicals, Inc.

  15. Eigenvalue computations with the QUAD4 consistent-mass matrix

    NASA Technical Reports Server (NTRS)

    Butler, Thomas A.

    1990-01-01

    The NASTRAN user has the option of using either a lumped-mass matrix or a consistent- (coupled-) mass matrix with the QUAD4 shell finite element. At the Sixteenth NASTRAN Users' Colloquium (1988), Melvyn Marcus and associates of the David Taylor Research Center summarized a study comparing the results of the QUAD4 element with results of other NASTRAN shell elements for a cylindrical-shell modal analysis. Results of this study, in which both the lumped- and consistent-mass matrix formulations were used, implied that the consistent-mass matrix yielded poor results. In an effort to further evaluate the consistent-mass matrix, a study was performed using both a cylindrical-shell geometry and a flat-plate geometry. Modal parameters were extracted for several modes for both geometries, leading to some significant conclusions. First, there do not appear to be any fundamental errors associated with the consistent-mass matrix. However, its accuracy is quite different for the two geometries studied. The consistent-mass matrix yields better results for the flat-plate geometry, and the lumped-mass matrix seems to be the better choice for cylindrical-shell geometries.
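    The lumped/consistent contrast can be reproduced on the simplest possible model, an axial rod discretized with two-node bar elements (a sketch of the general behavior, not the QUAD4 shell element itself): the consistent mass matrix bounds natural frequencies from above, while for bar elements the lumped matrix bounds them from below, and both converge with mesh refinement.

```python
import numpy as np

def first_eigenvalue(n_el, consistent=True):
    """First axial eigenvalue of a fixed-free rod (E = A = rho = L = 1)
    discretized with n_el two-node bar elements, using either the
    consistent or the lumped element mass matrix."""
    h = 1.0 / n_el
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    me = (h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]]) if consistent
          else h / 2.0 * np.eye(2))
    K = np.zeros((n_el + 1, n_el + 1))
    M = np.zeros_like(K)
    for e in range(n_el):                 # assemble element matrices
        K[e:e + 2, e:e + 2] += ke
        M[e:e + 2, e:e + 2] += me
    K, M = K[1:, 1:], M[1:, 1:]           # clamp the left end
    lam = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(lam.real)[0]

exact = (np.pi / 2.0) ** 2                # exact first eigenvalue, ~2.467
lam_consistent = first_eigenvalue(20, consistent=True)
lam_lumped = first_eigenvalue(20, consistent=False)
```

    With 20 elements both formulations land within a fraction of a percent of the exact value, bracketing it from opposite sides.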

  16. Dose-volume histogram prediction using density estimation.

    PubMed

    Skarpman Munter, Johanna; Sjölund, Jens

    2015-09-07

    Knowledge of what dose-volume histograms can be expected for a previously unseen patient could increase consistency and quality in radiotherapy treatment planning. We propose a machine learning method that uses previous treatment plans to predict such dose-volume histograms. The key to the approach is the framing of dose-volume histograms in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of some predictive features and the dose. The joint distribution immediately provides an estimate of the conditional probability of the dose given the values of the predictive features. The prediction consists of estimating, from the new patient, the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimate of the dose-volume histogram. To illustrate how the proposed method relates to previously proposed methods, we use the signed distance to the target boundary as a single predictive feature. As a proof-of-concept, we predicted dose-volume histograms for the brainstems of 22 acoustic schwannoma patients treated with stereotactic radiosurgery, and for the lungs of 9 lung cancer patients treated with stereotactic body radiation therapy. Comparing with two previous attempts at dose-volume histogram prediction, we find that, given the same input data, the predictions are similar. In summary, we propose a method for dose-volume histogram prediction that exploits the intrinsic probabilistic properties of dose-volume histograms. We argue that the proposed method makes up for some deficiencies in previously proposed methods, thereby potentially increasing ease of use, flexibility and ability to perform well with small amounts of training data.
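    The pipeline described, estimate a joint density of feature and dose, condition, marginalize over the new patient's feature distribution, then integrate into a histogram, can be sketched on toy data. Here Gaussian kernel density estimation stands in for whatever density estimator the paper uses, and the dose model is entirely invented:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Toy training set: per-voxel (signed distance to target, dose) pairs,
# with dose falling off away from the target boundary.
x_train = rng.uniform(0.0, 3.0, 2000)
y_train = np.exp(-x_train) + 0.05 * rng.standard_normal(2000)

joint = gaussian_kde(np.vstack([x_train, y_train]))   # estimate p(x, y)
x_marg = gaussian_kde(x_train)                        # estimate p(x)

# "New patient": only its own feature distribution is available.
x_new = rng.uniform(0.0, 3.0, 200)

# Marginalize p(y | x) = p(x, y) / p(x) over the new patient's feature
# samples, then integrate the resulting dose density into a cumulative DVH.
y_grid = np.linspace(-0.2, 1.2, 81)
dy = y_grid[1] - y_grid[0]
p_y = np.array([
    np.mean(joint(np.vstack([x_new, np.full_like(x_new, y)])) / x_marg(x_new))
    for y in y_grid
])
p_y /= p_y.sum() * dy                                 # normalize the density
dvh = 1.0 - np.cumsum(p_y) * dy                       # P(dose >= d)
```

    The resulting `dvh` decreases monotonically from about 1 to 0 over the dose grid, as a cumulative dose-volume histogram must.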

  17. Combined slope ratio analysis and linear-subtraction: An extension of the Pearce ratio method

    NASA Astrophysics Data System (ADS)

    De Waal, Sybrand A.

    1996-07-01

    A new technique, called combined slope ratio analysis, has been developed by extending the Pearce element ratio or conserved-denominator method (Pearce, 1968) to its logical conclusions. If two stoichiometric substances are mixed and certain chemical components are uniquely contained in either one of the two mixing substances, then by treating these unique components as conserved, the composition of the substance not containing the relevant component can be accurately calculated within the limits allowed by analytical and geological error. The calculated composition can then be subjected to rigorous statistical testing using the linear-subtraction method recently advanced by Woronow (1994). Application of combined slope ratio analysis to the rocks of the Uwekahuna Laccolith, Hawaii, USA, and the lavas of the 1959-summit eruption of Kilauea Volcano, Hawaii, USA, yields results that are consistent with field observations.
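    The conserved-denominator idea underlying the Pearce ratio method can be illustrated with a toy two-component mixing calculation (all compositions are hypothetical, and the paper's linear-subtraction statistical testing is not reproduced): ratioing every oxide against a component absent from the added phase removes the closure (dilution) effect, so the ratio-ratio trend becomes linear with a slope fixed by the added phase's stoichiometry.

```python
import numpy as np

# Hypothetical whole-rock analyses (wt%): a melt (MgO 8, SiO2 50, K2O 1)
# progressively mixed with an olivine-like phase (MgO 48, SiO2 40, K2O 0).
f = np.array([0.0, 0.1, 0.2, 0.3])        # mass fraction of added phase
mgo = (1 - f) * 8.0 + f * 48.0
sio2 = (1 - f) * 50.0 + f * 40.0
k2o = (1 - f) * 1.0                        # conserved: absent in the phase

# Pearce ratios: the conserved denominator removes the dilution effect,
# and the ratio-ratio slope recovers SiO2/MgO of the added phase (40/48).
slope = np.polyfit(mgo / k2o, sio2 / k2o, 1)[0]
```

    The raw oxide concentrations are curved functions of the mixing fraction, but the conserved-denominator ratios fall on an exact straight line.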

  18. Emotion-independent face recognition

    NASA Astrophysics Data System (ADS)

    De Silva, Liyanage C.; Esther, Kho G. P.

    2000-12-01

    Current face recognition techniques tend to work well when recognizing faces under small variations in lighting, facial expression and pose, but deteriorate under more extreme conditions. In this paper, a face recognition system to recognize faces of known individuals, despite variations in facial expression due to different emotions, is developed. The eigenface approach is used for feature extraction. Classification methods include Euclidean distance, back propagation neural network and generalized regression neural network. These methods yield 100% recognition accuracy when the training database is representative, containing one image representing the peak expression for each emotion of each person apart from the neutral expression. The feature vectors used for comparison in the Euclidean distance method and for training the neural network must be all the feature vectors of the training set. These results are obtained for a face database consisting of only four persons.
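    The eigenface-plus-Euclidean-distance pipeline is PCA on flattened training images followed by nearest-neighbour matching in the reduced feature space. A minimal sketch in which random vectors stand in for face images (the gallery, noise level, and function names are all illustrative):

```python
import numpy as np

def eigenface_fit(images, n_components):
    """PCA ("eigenfaces") on flattened training images, one per row."""
    mean = images.mean(axis=0)
    # Right singular vectors of the centered data are the eigenfaces.
    _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
    basis = vt[:n_components]
    return mean, basis, (images - mean) @ basis.T   # training features

def classify(image, mean, basis, train_features, labels):
    """Nearest neighbour in eigenface space, Euclidean distance."""
    f = (image - mean) @ basis.T
    return labels[np.argmin(np.linalg.norm(train_features - f, axis=1))]

# Hypothetical tiny gallery: 4 "persons", 5 noisy variants each
# (random vectors stand in for flattened face images).
rng = np.random.default_rng(2)
prototypes = rng.standard_normal((4, 64))
train = np.vstack([p + 0.1 * rng.standard_normal((5, 64)) for p in prototypes])
labels = np.repeat(np.arange(4), 5)

mean, basis, feats = eigenface_fit(train, n_components=8)
probe = prototypes[2] + 0.1 * rng.standard_normal(64)
pred = classify(probe, mean, basis, feats, labels)
```

    The noisy variants play the role of different facial expressions of the same person, and the probe is correctly matched to its identity.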

  19. Revealing Dimensions of Thinking in Open-Ended Self-Descriptions: An Automated Meaning Extraction Method for Natural Language

    PubMed Central

    2008-01-01

    A new method for extracting common themes from written text is introduced and applied to 1,165 open-ended self-descriptive narratives. Drawing on a lexical approach to personality, the most commonly used adjectives within narratives written by college students were identified using computerized text analytic tools. A factor analysis on the use of these adjectives in the self-descriptions produced a 7-factor solution consisting of psychologically meaningful dimensions. Some dimensions were unipolar (e.g., a Negativity factor, wherein most loaded items were negatively valenced adjectives); others were bipolar in that semantically opposite words clustered together (e.g., a Sociability factor, wherein terms such as shy, outgoing, reserved, and loud all loaded on the same factor). The factors exhibited modest reliability across different types of writing samples and were correlated with self-reports and behaviors consistent with the dimensions. Similar analyses with additional content words (adjectives, adverbs, nouns, and verbs) yielded additional psychological dimensions associated with physical appearance, school, relationships, etc., in which people contextualize their self-concepts. The results suggest that the meaning extraction method is a promising strategy for determining the dimensions along which people think about themselves. PMID:18802499
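    The core of the meaning extraction method, a factor analysis of word-usage patterns across texts, can be sketched on an invented miniature corpus. Principal component analysis stands in here for the paper's factor analysis with rotation:

```python
import numpy as np

# Toy corpus: self-descriptions reduced to adjective usage (invented data).
docs = [
    "happy outgoing loud friendly",
    "shy quiet reserved anxious",
    "outgoing loud sociable happy",
    "reserved shy quiet calm",
    "friendly sociable outgoing loud",
    "anxious quiet shy reserved calm",
]
vocab = sorted({w for d in docs for w in d.split()})
X = np.array([[1.0 if w in d.split() else 0.0 for w in vocab] for d in docs])

# Standardize the adjective columns and take the first principal axis
# as a stand-in for the leading factor of the meaning extraction method.
Xs = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
_, _, vt = np.linalg.svd(Xs, full_matrices=False)
loadings = dict(zip(vocab, vt[0]))   # first "dimension of thinking"
```

    On this corpus the leading axis behaves like the Sociability factor described above: semantically opposite adjectives (outgoing/shy, loud/quiet) load with opposite signs on the same dimension.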

  20. Hydrothermal deformation of granular quartz sand

    NASA Astrophysics Data System (ADS)

    Karner, Stephen L.; Kronenberg, Andreas K.; Chester, Frederick M.; Chester, Judith S.; Hajash, Andrew

    2008-05-01

    Isotropic and triaxial compression experiments were performed on porous aggregates of St Peter quartz sand to explore the influence of temperature (to 225°C). During isotropic stressing, samples loaded at elevated temperature exhibit the same sigmoidal stress-strain curves and non-linear acoustic emission rates as have previously been observed from room temperature studies on sands, sandstones, and soils. However, results from our hydrothermal experiments show that the critical effective pressure (P*) associated with the onset of significant pore collapse and pervasive cataclastic flow is lower at increased temperature. Samples subjected to triaxial loading at elevated temperature show yield behavior resembling that observed from room temperature studies on granular rocks and soils. When considered in terms of distortional and mean stresses, the yield strength data for a given temperature define an elliptical envelope consistent with critical state and CAP models from soil mechanics. For the conditions we tested, triaxial yield data at low effective pressure are essentially temperature-insensitive whereas yield levels at high effective pressure are lowered as a function of elevated temperature. We interpret our yield data in a manner consistent with Arrhenius behavior expected for thermally assisted subcritical crack growth. Taken together, our results indicate that increased stresses and temperatures associated with subsurface burial will significantly alter the yield strength of deforming granular media in systematic and predictable ways.
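
    For reference, the elliptical cap envelope and the Arrhenius interpretation invoked above take the following standard forms (generic symbols, not the authors' fitted parameters): a yield surface in mean stress p and differential stress q, and a thermally activated subcritical crack-growth rate v with activation energy Q:

```latex
\left(\frac{p - p_c}{a}\right)^{2} + \left(\frac{q}{b}\right)^{2} = 1,
\qquad
v \;\propto\; \exp\!\left(-\frac{Q}{RT}\right)
```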

  1. Assessment of In-Place Oil Shale Resources of the Green River Formation, Piceance Basin, Western Colorado

    USGS Publications Warehouse

    Johnson, Ronald C.; Mercier, Tracey J.; Brownfield, Michael E.; Pantea, Michael P.; Self, Jesse G.

    2009-01-01

    The U.S. Geological Survey (USGS) recently completed a reassessment of in-place oil shale resources, regardless of richness, in the Eocene Green River Formation in the Piceance Basin, western Colorado. A considerable amount of oil-yield data has been collected after previous in-place assessments were published, and these data were incorporated into this new assessment. About twice as many oil-yield data points were used, and several additional oil shale intervals were included that were not assessed previously for lack of data. Oil yields are measured using the Fischer assay method. The Fischer assay method is a standardized laboratory test for determining the oil yield from oil shale that has been almost universally used to determine oil yields for Green River Formation oil shales. Fischer assay does not necessarily measure the maximum amount of oil that an oil shale can produce, and there are retorting methods that yield more than the Fischer assay yield. However, the oil yields achieved by other technologies are typically reported as a percentage of the Fischer assay oil yield, and thus Fischer assay is still considered the standard by which other methods are compared.

  2. Application of the BMWP-Costa Rica biotic index in aquatic biomonitoring: sensitivity to collection method and sampling intensity.

    PubMed

    Gutiérrez-Fonseca, Pablo E; Lorion, Christopher M

    2014-04-01

    The use of aquatic macroinvertebrates as bio-indicators in water quality studies has increased considerably over the last decade in Costa Rica, and standard biomonitoring methods have now been formulated at the national level. Nevertheless, questions remain about the effectiveness of different methods of sampling freshwater benthic assemblages, and how sampling intensity may influence biomonitoring results. In this study, we compared the results of qualitative sampling using commonly applied methods with a more intensive quantitative approach at 12 sites in small, lowland streams on the southern Caribbean slope of Costa Rica. Qualitative samples were collected following the official protocol using a strainer during a set time period and macroinvertebrates were field-picked. Quantitative sampling involved collecting ten replicate Surber samples and picking out macroinvertebrates in the laboratory with a stereomicroscope. The strainer sampling method consistently yielded fewer individuals and families than quantitative samples. As a result, site scores calculated using the Biological Monitoring Working Party-Costa Rica (BMWP-CR) biotic index often differed greatly depending on the sampling method. Site water quality classifications using the BMWP-CR index differed between the two sampling methods for 11 of the 12 sites in 2005, and for 9 of the 12 sites in 2006. Sampling intensity clearly had a strong influence on BMWP-CR index scores, as well as perceived differences between reference and impacted sites. Achieving reliable and consistent biomonitoring results for lowland Costa Rican streams may demand intensive sampling and requires careful consideration of sampling methods.
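
    BMWP-type indices are computed by summing a per-family tolerance score over the families recorded at a site, which is why a sampling method that recovers fewer families directly depresses the site score. A sketch with hypothetical scores (the official BMWP-CR score table is not reproduced here):

```python
# Hypothetical per-family tolerance scores (the real BMWP-CR table differs)
FAMILY_SCORES = {"Baetidae": 5, "Leptophlebiidae": 9,
                 "Chironomidae": 2, "Hydropsychidae": 5}

def bmwp_score(families_present):
    """BMWP-style site score: each family found counts once."""
    return sum(FAMILY_SCORES[f] for f in set(families_present)
               if f in FAMILY_SCORES)

site_a = ["Baetidae", "Leptophlebiidae", "Baetidae"]  # duplicates count once
print(bmwp_score(site_a))  # 5 + 9 = 14
```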

  3. Refining historical limits method to improve disease cluster detection, New York City, New York, USA.

    PubMed

    Levin-Rector, Alison; Wilson, Elisha L; Fine, Annie D; Greene, Sharon K

    2015-02-01

    Since the early 2000s, the Bureau of Communicable Disease of the New York City Department of Health and Mental Hygiene has analyzed reportable infectious disease data weekly by using the historical limits method to detect unusual clusters that could represent outbreaks. This method typically produced too many signals for each to be investigated with available resources while possibly failing to signal during true disease outbreaks. We made method refinements that improved the consistency of case inclusion criteria and accounted for data lags and trends and aberrations in historical data. During a 12-week period in 2013, we prospectively assessed these refinements using actual surveillance data. The refined method yielded 74 signals, a 45% decrease from what the original method would have produced. Fewer and less biased signals included a true citywide increase in legionellosis and a localized campylobacteriosis cluster subsequently linked to live-poultry markets. Future evaluations using simulated data could complement this descriptive assessment.
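
    The historical limits method flags a current count that is extreme relative to the mean and standard deviation of comparable historical counts (classically 15 values: the same period plus its two neighbours over the previous five years). A minimal sketch of that rule with hypothetical counts; the Bureau's refinements for data lags, trends, and aberrant historical data are not modelled:

```python
from statistics import mean, stdev

def historical_limits_signal(current, historical, k=2.0):
    """Flag if the current count exceeds the historical mean
    by more than k standard deviations."""
    m, s = mean(historical), stdev(historical)
    return current > m + k * s

# Hypothetical case counts for the comparable historical periods
hist = [4, 5, 3, 6, 4, 5, 4, 3, 5, 6, 4, 5, 3, 4, 5]
print(historical_limits_signal(12, hist))  # unusually high count
print(historical_limits_signal(5, hist))   # within historical limits
```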

  4. Dual-fission chamber and neutron beam characterization for fission product yield measurements using monoenergetic neutrons

    NASA Astrophysics Data System (ADS)

    Bhatia, C.; Fallin, B.; Gooden, M. E.; Howell, C. R.; Kelley, J. H.; Tornow, W.; Arnold, C. W.; Bond, E. M.; Bredeweg, T. A.; Fowler, M. M.; Moody, W. A.; Rundberg, R. S.; Rusev, G.; Vieira, D. J.; Wilhelmy, J. B.; Becker, J. A.; Macri, R.; Ryan, C.; Sheets, S. A.; Stoyer, M. A.; Tonchev, A. P.

    2014-09-01

    A program has been initiated to measure the energy dependence of selected high-yield fission products used in the analysis of nuclear test data. We present our initial work on neutron activation using a dual-fission chamber with quasi-monoenergetic neutrons and a gamma-counting method. Quasi-monoenergetic neutrons with energies from 0.5 to 15 MeV were produced using the TUNL 10 MV FM tandem to provide high-precision and self-consistent measurements of fission product yields (FPY). The final FPY results will be coupled with theoretical analysis to provide a more fundamental understanding of the fission process. To accomplish this goal, we have developed and tested a set of dual-fission ionization chambers to provide an accurate determination of the number of fissions occurring in a thick target located in the middle plane of the chamber assembly. Details of the fission chamber and its performance are presented along with neutron beam production and characterization. Also presented are studies on the background issues associated with room-return and off-energy neutron production. We show that the off-energy neutron contribution can be significant, but correctable, while room-return neutron background levels contribute less than 1% to the fission signal.

  5. Reverse-engineering flow-cytometry gating strategies for phenotypic labelling and high-performance cell sorting.

    PubMed

    Becht, Etienne; Simoni, Yannick; Coustan-Smith, Elaine; Maximilien, Evrard; Cheng, Yang; Ng, Lai Guan; Campana, Dario; Newell, Evan

    2018-06-21

    Recent flow and mass cytometers generate datasets of 20 to 40 dimensions and on the order of a million single cells. From these, many tools facilitate the discovery of new cell populations associated with diseases or physiology. These new cell populations require the identification of new gating strategies, but gating strategies become exponentially more difficult to optimize as dimensionality increases. To facilitate this step, we developed Hypergate, an algorithm which, given a cell population of interest, identifies a gating strategy optimized for high yield and purity. Hypergate achieves higher yield and purity than human experts, Support Vector Machines and Random Forests on public datasets. We use it to revisit some established gating strategies for the identification of innate lymphoid cells, which yields concise and efficient strategies that allow gating these cells with fewer parameters but higher yield and purity than the current standards. For phenotypic description, Hypergate's outputs are consistent with the field's knowledge and sparser than those from a competing method. Hypergate is implemented in R and available on CRAN. The source code is published at http://github.com/ebecht/hypergate under an Open Source Initiative-compliant licence. Supplementary data are available at Bioinformatics online.
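
    The yield and purity that Hypergate optimizes are the two standard gate-quality measures: the fraction of target cells captured by the gate, and the fraction of gated events that truly belong to the target population. A sketch on hypothetical event indices (this is not Hypergate's internal code, which is in R):

```python
def gate_yield_purity(gated, target):
    """Yield = |gated & target| / |target|; purity = |gated & target| / |gated|."""
    gated, target = set(gated), set(target)
    tp = len(gated & target)
    return tp / len(target), tp / len(gated)

# Hypothetical event indices selected by a gate vs. the true population
y, p = gate_yield_purity(gated={1, 2, 3, 4}, target={2, 3, 4, 5})
print(y, p)  # 0.75 0.75
```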

  6. Robust hepatic vessel segmentation using multi deep convolution network

    NASA Astrophysics Data System (ADS)

    Kitrungrotsakul, Titinunt; Han, Xian-Hua; Iwamoto, Yutaro; Foruzan, Amir Hossein; Lin, Lanfen; Chen, Yen-Wei

    2017-03-01

    Extraction of the blood vessels of an organ is a challenging task in medical image processing. It is difficult to obtain accurate vessel segmentation results even with manual labeling by human experts. The difficulty of vessel segmentation lies in the complicated structure of blood vessels and their large variations, which make them hard to recognize. In this paper, we present a deep artificial neural network architecture to automatically segment the hepatic vessels from computed tomography (CT) images. We propose a novel deep neural network (DNN) architecture for vessel segmentation from a medical CT volume, which consists of three deep convolution neural networks that extract features from different planes of the CT data. The three networks share features at the first convolution layer but separately learn their own features in the second layer; all three networks join again at the top layer. To validate the effectiveness and efficiency of our proposed method, we conducted experiments on 12 CT volumes, with training data randomly generated from 5 CT volumes and the remaining 7 used for testing. Our network yields an average Dice coefficient of 0.830, while a 3D deep convolution neural network yields around 0.7 and a multi-scale approach yields only 0.6.
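
    The Dice coefficient reported above measures overlap between a predicted and a reference segmentation mask. A minimal sketch on hypothetical binary masks:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2|A & B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(pred, truth))  # 2*2 / (3+3), about 0.667
```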

  7. Isolation of methyl gamma linolenate from Spirulina platensis using flash chromatography and its apoptosis inducing effect.

    PubMed

    Jubie, S; Dhanabal, S P; Chaitanya, M V N L

    2015-08-04

    Methyl gamma linolenate was isolated from Spirulina platensis using flash chromatography, and its apoptosis-inducing effect was evaluated against human lung carcinoma A-549 cell lines. Gamma linolenic acid, an important omega-6 polyunsaturated fatty acid (PUFA) of medicinal interest, was isolated from the microalga Spirulina platensis as its methyl ester using a flash chromatography system (Isolera system). The isolated methyl gamma linolenate was characterized by IR, (1)H NMR, (13)C NMR and mass spectral analysis, and the data were consistent with the structure. The yield of isolated methyl gamma linolenate was found to be 71% w/w, which is very good in comparison with other conventional methods. It was subjected to in-vitro cytotoxic screening on A-549 lung cancer cell lines using the SRB assay, and the result was compared with the standard rutin. It may be concluded that the flash chromatography system plays a major role in improving the yield of methyl gamma linolenate isolated from Spirulina platensis, and that the isolated molecule is a potent cytotoxic agent towards human lung carcinoma cell lines; it may be further taken up for an extensive study.

  8. Biodiesel production using lipase immobilized on epoxychloropropane-modified Fe3O4 sub-microspheres.

    PubMed

    Zhang, Qian; Zheng, Zhong; Liu, Changxia; Liu, Chunqiao; Tan, Tianwei

    2016-04-01

    Superparamagnetic Fe3O4 sub-microspheres with diameters of approximately 200 nm were prepared via a solvothermal method, and then modified with epoxychloropropane. Lipase was immobilized on the modified sub-microspheres. The immobilized lipase was used in the production of biodiesel fatty acid methyl esters (FAMEs) from acidified waste cooking oil (AWCO). The effects of the reaction conditions on the biodiesel yield were investigated using a combination of response surface methodology and three-level/three-factor Box-Behnken design (BBD). The optimum synthetic conditions, which were identified using Ridge max analysis, were as follows: immobilized lipase:AWCO mass ratio 0.02:1, fatty acid:methanol molar ratio 1:1.10, hexane:AWCO ratio 1.33:1 (mL/g), and temperature 40 °C. A 97.11% yield was obtained under these conditions. The BBD and experimental data showed that the immobilized lipase could generate biodiesel over a wide temperature range, from 0 to 40 °C. Consistently high FAME yields, in excess of 80%, were obtained when the immobilized lipase was reused in six replicate trials at 10 and 20 °C. Copyright © 2016 Elsevier B.V. All rights reserved.
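
    The three-level/three-factor Box-Behnken design used above consists of the midpoints of the edges of the factor cube (+/-1 on each pair of factors with the third held at its centre) plus centre points. A sketch that enumerates the design in coded units (run order and number of centre replicates are choices for illustration, not taken from the paper):

```python
from itertools import combinations

def box_behnken_3(center_points=1):
    """Three-factor Box-Behnken design in coded (-1, 0, +1) units."""
    pts = []
    for i, j in combinations(range(3), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                p = [0, 0, 0]
                p[i], p[j] = a, b
                pts.append(tuple(p))
    pts.extend([(0, 0, 0)] * center_points)
    return pts

design = box_behnken_3()
print(len(design))  # 12 edge midpoints + 1 centre point = 13 runs
```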

  9. A Practical Method for the Vinylation of Aromatic Halides using Inexpensive Organosilicon Reagents

    PubMed Central

    Denmark, Scott E.; Butler, Christopher R.

    2009-01-01

    The preparation of styrenes by palladium-catalyzed cross-coupling of aromatic iodides and bromides with divinyltetramethyldisiloxane (DVDS) in the presence of inexpensive silanolate activators has been developed. To facilitate the discovery of optimal reaction conditions, Design of Experiment protocols were used. By the guided selection of reagents, stoichiometries, temperatures, and solvents the vinylation reaction was rapidly optimized with three stages consisting of ca. 175 experiments (of a possible 1440 combinations). A variety of aromatic iodides undergo cross-coupling at room temperature in the presence of potassium trimethylsilanoate using Pd(dba)2 in DMF in good yields. Triphenylphosphine oxide is needed to extend catalyst lifetime. Application of these conditions to aryl bromides was accomplished by the development of two complementary protocols. First, the direct implementation of the successful reaction conditions using aryl iodides at elevated temperature in THF provided the corresponding styrenes in good to excellent yields. Alternatively, the use of potassium triethylsilanolate and a bulky “Buchwald-type” ligand allows for the vinylation reactions to occur at or just above room temperature. A wide range of bromides underwent coupling in good yields for each of the protocols described. PMID:18303892

  10. Composition and ethanol production potential of cotton gin residues.

    PubMed

    Agblevor, Foster A; Batz, Sandra; Trumbo, Jessica

    2003-01-01

    Cotton gin residue (CGR) collected from five cotton gins was fractionated and characterized for summative composition. The major fractions of the CGR varied widely between cotton gins and consisted of clean lint (5-12%), hulls (16-48%), seeds (6-24%), motes (16-24%), and leaves (14-30%). The summative composition varied within and between cotton gins and consisted of ash (7.9-14.6%), acid-insoluble material (18-26%), xylan (4-15%), and cellulose (20-38%). Overlimed steam-exploded cotton gin waste was readily fermented to ethanol by Escherichia coli KO11. Ethanol yields were feedstock and severity dependent and ranged from 58 to 92.5% of the theoretical yields. The highest ethanol yield was 191 L (50 gal)/t, and the lowest was 120 L (32 gal)/t.
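
    Percent-of-theoretical ethanol yields such as the 58-92.5% range above are computed against the stoichiometric maximum obtainable from the fermentable carbohydrate fractions. A sketch using the standard conversion factors (the feedstock numbers are hypothetical, not the paper's):

```python
# Stoichiometric constants
GLUCAN_TO_GLUCOSE = 180.0 / 162.0  # water added on hydrolysis
XYLAN_TO_XYLOSE = 150.0 / 132.0
SUGAR_TO_ETHANOL = 0.511           # g ethanol per g fermentable sugar

def pct_theoretical_yield(ethanol_g, cellulose_g, xylan_g):
    """Ethanol produced as a percentage of the stoichiometric maximum."""
    theoretical = SUGAR_TO_ETHANOL * (cellulose_g * GLUCAN_TO_GLUCOSE
                                      + xylan_g * XYLAN_TO_XYLOSE)
    return 100.0 * ethanol_g / theoretical

# Hypothetical: 100 g feedstock at 30% cellulose and 10% xylan, 15 g ethanol
print(round(pct_theoretical_yield(15.0, 30.0, 10.0), 1))
```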

  11. Whole blood transcriptional profiling comparison between different milk yield of Chinese Holstein cows using RNA-seq data.

    PubMed

    Bai, Xue; Zheng, Zhuqing; Liu, Bin; Ji, Xiaoyang; Bai, Yongsheng; Zhang, Wenguang

    2016-08-22

    The objective of this research was to investigate the variation of gene expression in the blood transcriptome profile of Chinese Holstein cows associated with milk yield traits. We used RNA-seq to generate the bovine transcriptome from the blood of 23 lactating Chinese Holstein cows with extremely high and low milk yield. A total of 100 differentially expressed genes (DEGs) (p < 0.05, FDR < 0.05) were revealed between the high and low groups. Gene ontology (GO) analysis demonstrated that the 100 DEGs were enriched in specific biological processes with regard to defense response, immune response, inflammatory response, icosanoid metabolic process, and fatty acid metabolic process (p < 0.05). The KEGG pathway analysis with the 100 DEGs revealed that the most statistically significant metabolic pathway was related to the Toll-like receptor signaling pathway (p < 0.05). The expression level of four selected DEGs was analyzed by qRT-PCR, and the results indicated that the expression patterns were consistent with the deep sequencing results by RNA-seq. Furthermore, alternative splicing analysis of the 100 DEGs demonstrated that there were different splicing patterns between high and low yielders. The alternative 3' splicing site was the major splicing pattern detected in high yielders, whereas in low yielders the major type was exon skipping. This study provides a non-invasive method to identify milk yield-related DEGs in cattle blood using RNA-seq. The 100 DEGs revealed between Holstein cows with extremely high and low milk yield, and the immunological pathways they implicate, are likely involved in the milk yield trait. Finally, this study allowed us to explore associations between immune traits and production traits related to milk production.

  12. Crack tip field and fatigue crack growth in general yielding and low cycle fatigue

    NASA Technical Reports Server (NTRS)

    Minzhong, Z.; Liu, H. W.

    1984-01-01

    Fatigue life consists of crack nucleation and crack propagation periods. The fatigue crack nucleation period is shorter relative to the propagation period at higher stresses. The crack nucleation period in low cycle fatigue might even be shortened by material and fabrication defects and by environmental attack. In these cases, fatigue life is largely the crack propagation period. The characteristic crack tip field was studied by the finite element method, and the crack tip field is related to the far field parameters: the deformation work density, and the product of applied stress and applied strain. The cyclic crack growth rates in specimens in general yielding as measured by Solomon are analyzed in terms of the J-integral. A generalized crack behavior in terms of delta is developed. The relations between J and the far field parameters and the relation for the general cyclic crack growth behavior are used to analyze fatigue lives of specimens under general-yielding cyclic load. Fatigue life is related to the applied stress and strain ranges, the deformation work density, crack nucleus size, fracture toughness, fatigue crack growth threshold, Young's modulus, and the cyclic yield stress and strain. The fatigue lives of two aluminum alloys correlate well with the deformation work density as depicted by the derived theory. The general relation reduces to the Coffin-Manson low cycle fatigue law in the high strain region.
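
    The Coffin-Manson low cycle fatigue law to which the general relation reduces is conventionally written as (standard form; the fitted constants for the two aluminum alloys are not reproduced here):

```latex
\frac{\Delta\varepsilon_{p}}{2} \;=\; \varepsilon'_{f}\,(2N_{f})^{c}
```

    where Δε_p is the plastic strain range, N_f the number of cycles to failure, ε'_f the fatigue ductility coefficient, and c the fatigue ductility exponent.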

  13. A new modeling and inference approach for the Systolic Blood Pressure Intervention Trial outcomes.

    PubMed

    Yang, Song; Ambrosius, Walter T; Fine, Lawrence J; Bress, Adam P; Cushman, William C; Raj, Dominic S; Rehman, Shakaib; Tamariz, Leonardo

    2018-06-01

    Background/aims In clinical trials with time-to-event outcomes, usually the significance tests and confidence intervals are based on a proportional hazards model. Thus, the temporal pattern of the treatment effect is not directly considered. This could be problematic if the proportional hazards assumption is violated, as such violation could impact both interim and final estimates of the treatment effect. Methods We describe the application of inference procedures developed recently in the literature for time-to-event outcomes when the treatment effect may or may not be time-dependent. The inference procedures are based on a new model which contains the proportional hazards model as a sub-model. The temporal pattern of the treatment effect can then be expressed and displayed. The average hazard ratio is used as the summary measure of the treatment effect. The test of the null hypothesis uses adaptive weights that often lead to improvement in power over the log-rank test. Results Without needing to assume proportional hazards, the new approach yields results consistent with previously published findings in the Systolic Blood Pressure Intervention Trial. It provides a visual display of the time course of the treatment effect. At four of the five scheduled interim looks, the new approach yields smaller p values than the log-rank test. The average hazard ratio and its confidence interval indicate a treatment effect nearly a year earlier than a restricted mean survival time-based approach. Conclusion When the hazards are proportional between the comparison groups, the new methods yield results very close to the traditional approaches. When the proportional hazards assumption is violated, the new methods continue to be applicable and can potentially be more sensitive to departure from the null hypothesis.

  14. Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding

    2018-02-01

    The photoacoustic (PA) signal of an ideal optical absorbing particle is a single N-shape wave, and the PA signals of a complicated biological tissue can be considered the combination of individual N-shape waves. However, the N-shape wave basis not only complicates the subsequent work, but also results in aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality through signal processing applied directly to the raw signals, including deconvolution and empirical mode decomposition (EMD). During the deconvolution procedure, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) which is measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals under two constraints: positive polarity and spectral consistency. With our proposed method, the resulting PA images yield more detailed structural information. Micro-structures are clearly separated and revealed. To validate the effectiveness of this method, we present numerical simulations and phantom studies consisting of a densely distributed point-source model and a blood vessel model. In the future, our study might hold the potential for clinical PA imaging as it can help to distinguish micro-structures from the optimized images and even measure the size of objects from deconvolved signals.
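
    The deconvolution step can be sketched as frequency-domain division by the measured PSF, with a small regularization term to avoid amplifying noise where the PSF spectrum is near zero. A Wiener-style sketch on a hypothetical 1-D signal (the paper's measured PSF and the EMD step are not reproduced):

```python
import numpy as np

def wiener_deconvolve(signal, psf, eps=1e-3):
    """Frequency-domain deconvolution of a raw 1-D signal with a known
    PSF, regularized to avoid dividing by near-zero spectral values."""
    n = len(signal)
    S, H = np.fft.fft(signal, n), np.fft.fft(psf, n)
    deconv = np.fft.ifft(S * np.conj(H) / (np.abs(H) ** 2 + eps))
    return np.real(deconv)

# Hypothetical example: a spike convolved with a short bipolar kernel
psf = np.array([0.0, 1.0, -1.0, 0.0])
true = np.zeros(64)
true[10] = 1.0
raw = np.convolve(true, psf)[:64]
rec = wiener_deconvolve(raw, psf)
print(int(np.argmax(rec)))  # spike recovered near index 10
```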

  15. A permutation-based non-parametric analysis of CRISPR screen data.

    PubMed

    Jia, Gaoxiang; Wang, Xinlei; Xiao, Guanghua

    2017-07-19

    Clustered regularly-interspaced short palindromic repeats (CRISPR) screens are usually implemented in cultured cells to identify genes with critical functions. Although several methods have been developed or adapted to analyze CRISPR screening data, no single specific algorithm has gained popularity. Thus, rigorous procedures are needed to overcome the shortcomings of existing algorithms. We developed a Permutation-Based Non-Parametric Analysis (PBNPA) algorithm, which computes p-values at the gene level by permuting sgRNA labels, and thus it avoids restrictive distributional assumptions. Although PBNPA is designed to analyze CRISPR data, it can also be applied to analyze genetic screens implemented with siRNAs or shRNAs and drug screens. We compared the performance of PBNPA with competing methods on simulated data as well as on real data. PBNPA outperformed recent methods designed for CRISPR screen analysis, as well as methods used for analyzing other functional genomics screens, in terms of Receiver Operating Characteristics (ROC) curves and False Discovery Rate (FDR) control for simulated data under various settings. Remarkably, the PBNPA algorithm showed better consistency and FDR control on published real data as well. PBNPA yields more consistent and reliable results than its competitors, especially when the data quality is low. R package of PBNPA is available at: https://cran.r-project.org/web/packages/PBNPA/ .
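
    The heart of PBNPA is a permutation null distribution: an observed statistic is compared against statistics recomputed after shuffling labels, avoiding restrictive parametric assumptions. A generic sketch on a difference in means (the data are hypothetical; PBNPA itself permutes sgRNA labels and aggregates to the gene level):

```python
import random

def permutation_pvalue(group_a, group_b, n_perm=10000, seed=1):
    """Two-sided permutation p-value for a difference in group means."""
    rng = random.Random(seed)
    pooled = group_a + group_b
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one to avoid p = 0

# Hypothetical fold-changes: sgRNAs for one gene vs. control guides
p = permutation_pvalue([2.1, 2.4, 2.2, 2.6], [0.9, 1.1, 1.0, 1.2, 0.8, 1.05])
print(p)
```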

  16. Comparison of methods for determining the numbers and species distribution of coliform bacteria in well water samples.

    PubMed

    Niemi, R M; Heikkilä, M P; Lahti, K; Kalso, S; Niemelä, S I

    2001-06-01

    Enumeration of coliform bacteria and Escherichia coli is the most widely used method for estimating the hygienic quality of drinking water. The yield of target bacteria and the species composition of different populations of coliform bacteria may depend on the method. Three membrane filtration methods were compared for the enumeration of coliform bacteria in shallow well waters. The yield of confirmed coliform bacteria was highest on Differential Coliform agar, followed by LES Endo agar. Differential Coliform agar had the highest proportion of typical colonies, of which 74% were confirmed as belonging to the Enterobacteriaceae. Of the typical colonies on Lactose Tergitol 7 TTC agar, 75% were confirmed as Enterobacteriaceae, whereas 92% of typical colonies on LES Endo agar belonged to the Enterobacteriaceae. LES Endo agar yielded many Serratia strains, Lactose Tergitol 7 TTC agar yielded numerous strains of Rahnella aquatilis and Enterobacter, whereas Differential Coliform agar yielded the widest range of species. The yield of coliform bacteria varied between methods. Each method compared had a characteristic species distribution of target bacteria and a typical level of interference from non-target bacteria. Identification to distinct species with routine physiological tests was hampered by the slight differences between species. High yield and sufficient selectivity are difficult to achieve simultaneously, especially if the target group is diverse. The results showed that several aspects of method performance should be considered, and that the target group must be distinctly defined to enable method comparisons.

  17. Experimental study on the dynamic mechanical behaviors of polycarbonate

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Gao, Yubo; Cai, Xuanming; Ye, Nan; Huang, Wei; Hypervelocity Impact Research Center Team

    2015-06-01

    Polycarbonate (PC) is an engineering material widely used in the aerospace field, since it has excellent mechanical and optical properties. In the present study, both compression and tensile tests of PC were conducted at high strain rates using a split Hopkinson pressure bar. A high-speed camera and the 2D digital speckle correlation method (DIC) were used to analyze the dynamic deformation behavior of PC. Meanwhile, a plate impact experiment was carried out in a single-stage gas gun to measure the equation of state of PC, using asymmetric impact technology, manganin gauges, PVDF, and electromagnetic particle velocity gauges. The results indicate that the yield stress of PC increases with strain rate. Strain softening occurred once the stress exceeded the yield point, except in the tensile tests at strain rates of 1076 s-1 and 1279 s-1. By comparison with the 2D-DIC results, the ZWT model describes the constitutive behavior of PC accurately at different strain rates. Finally, the D-u Hugoniot curve of polycarbonate at high pressure was fitted by the least squares method, and the final results lay closer to those of Cater and Mash than to other previous data.
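
    The final fitting step is an ordinary least-squares fit of the linear Hugoniot relation D = c0 + s*u between shock velocity D and particle velocity u. A sketch with hypothetical velocity pairs (not the paper's data or fitted constants):

```python
import numpy as np

# Hypothetical shock (D) vs. particle (u) velocity pairs, km/s
u = np.array([0.2, 0.5, 1.0, 1.5, 2.0])
D = np.array([2.4, 2.9, 3.6, 4.4, 5.1])

# Least-squares fit of the linear Hugoniot form D = c0 + s * u
s, c0 = np.polyfit(u, D, 1)
print(round(c0, 2), round(s, 2))  # bulk sound speed c0 and slope s
```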

  18. Comparing Sanger sequencing and high-throughput metabarcoding for inferring photobiont diversity in lichens.

    PubMed

    Paul, Fiona; Otte, Jürgen; Schmitt, Imke; Dal Grande, Francesco

    2018-06-05

    The implementation of HTS (high-throughput sequencing) approaches is rapidly changing our understanding of the lichen symbiosis, by uncovering high bacterial and fungal diversity, which is often host-specific. Recently, HTS methods revealed the presence of multiple photobionts inside a single thallus in several lichen species. This differs from Sanger technology, which typically yields a single, unambiguous algal sequence per individual. Here we compared HTS and Sanger methods for estimating the diversity of green algal symbionts within lichen thalli using 240 lichen individuals belonging to two species of lichen-forming fungi. According to HTS data, Sanger technology consistently yielded the most abundant photobiont sequence in the sample. However, if the second most abundant photobiont exceeded 30% of the total HTS reads in a sample, Sanger sequencing generally failed. Our results suggest that most lichen individuals in the two analyzed species, Lasallia hispanica and L. pustulata, indeed contain a single, predominant green algal photobiont. We conclude that Sanger sequencing is a valid approach to detect the dominant photobionts in lichen individuals and populations. We discuss which research areas in lichen ecology and evolution will continue to benefit from Sanger sequencing, and which areas will profit from HTS approaches to assessing symbiont diversity.

  19. Variable screening via quantile partial correlation

    PubMed Central

    Ma, Shujie; Tsai, Chih-Ling

    2016-01-01

    In quantile linear regression with ultra-high dimensional data, we propose an algorithm for screening all candidate variables and subsequently selecting relevant predictors. Specifically, we first employ quantile partial correlation for screening, and then we apply the extended Bayesian information criterion (EBIC) for best subset selection. Our proposed method can successfully select predictors when the variables are highly correlated, and it can also identify variables that make a contribution to the conditional quantiles but are marginally uncorrelated or weakly correlated with the response. Theoretical results show that the proposed algorithm can yield the sure screening set. By controlling the false selection rate, model selection consistency can be achieved theoretically. In practice, we proposed using EBIC for best subset selection so that the resulting model is screening consistent. Simulation studies demonstrate that the proposed algorithm performs well, and an empirical example is presented. PMID:28943683
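
    The screening statistic is built on the quantile correlation, which replaces the usual residual with the quantile check score psi_tau(w) = tau - 1{w < 0}. A sketch of one common sample form on simulated data (the *partial* version, which additionally adjusts for the remaining predictors, is omitted):

```python
import numpy as np

def quantile_correlation(y, x, tau=0.5):
    """Sample quantile correlation between x and the check score of y
    about its tau-th quantile (a marginal screening statistic)."""
    q = np.quantile(y, tau)
    psi = tau - (y < q).astype(float)
    cov = np.mean(psi * (x - x.mean()))
    return cov / np.sqrt((tau - tau**2) * np.var(x))

rng = np.random.default_rng(0)
x_rel = rng.normal(size=2000)    # relevant predictor
x_noise = rng.normal(size=2000)  # irrelevant predictor
y = 2 * x_rel + rng.normal(size=2000)

print(quantile_correlation(y, x_rel), quantile_correlation(y, x_noise))
```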

  20. Purifying Nucleic Acids from Samples of Extremely Low Biomass

    NASA Technical Reports Server (NTRS)

    La Duc, Myron; Osman, Shariff; Venkateswaran, Kasthuri

    2008-01-01

    A new method is able to circumvent the bias to which one commercial DNA extraction method falls prey with regard to the lysing of certain types of microbial cells, resulting in a truncated spectrum of microbial diversity. By prefacing the protocol with glass-bead-beating agitation (mechanically lysing a much more encompassing array of cell types and spores), the resulting microbial diversity detection is greatly enhanced. In preliminary studies, a commercially available automated DNA extraction method was effective at delivering total DNA yield, but only the non-hardy members of the bacterial bisque were represented in clone libraries, suggesting that this method was ineffective at lysing the hardier cell types. To circumvent such a bias, yet another extraction method was devised. In this technique, samples are first subjected to a stringent bead-beating step, and then are processed via standard protocols. Prior to being loaded into extraction vials, samples are placed in micro-centrifuge bead tubes containing 50 µL of commercially produced lysis solution. After inverting several times, tubes are agitated at maximum speed for two minutes. Following agitation, tubes are centrifuged at 10,000 x g for one minute. At this time, the aqueous volumes are removed from the bead tubes and are loaded into extraction vials to be further processed via the extraction regime. The new method couples two independent methodologies in such a way as to yield the highest concentration of PCR-amplifiable DNA with consistent and reproducible results and with the most accurate and encompassing report of species richness.

  1. A novel rheometer design for yield stress fluids

    Treesearch

    Joseph R. Samaniuk; Timothy W. Shay; Thatcher W. Root; Daniel J. Klingenberg; C. Tim Scott

    2014-01-01

    An inexpensive, rapid method for measuring the rheological properties of yield stress fluids is described and tested. The method uses an auger that does not rotate during measurements, and avoids material and instrument-related difficulties, for example, wall slip and the presence of large particles, associated with yield stress fluids. The method can be used...

  2. Branching fractions for χ_cJ → pp̄π⁰, pp̄η, and pp̄ω

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onyisi, P. U. E.; Rosner, J. L.; Alexander, J. P.

    2010-07-01

    Using a sample of 25.9 × 10⁶ ψ(2S) decays acquired with the CLEO-c detector at the CESR e⁺e⁻ collider, we report branching fractions for the decays χ_cJ → pp̄π⁰, pp̄η, and pp̄ω, with J = 0, 1, 2. Our results for B(χ_cJ → pp̄π⁰) and B(χ_cJ → pp̄η) are consistent with, but more precise than, previous measurements. Furthermore, we include the first measurement of B(χ_cJ → pp̄ω).

  3. Impact of sulphate geoengineering on rice yield in China

    NASA Astrophysics Data System (ADS)

    Zhan, Pei; Zhu, Wenquan; Zheng, Zhoutao; Zhang, Donghai; Li, Nan

    2017-04-01

    Sulphate geoengineering is one of the most widely discussed mitigation methods against global warming, owing to its feasibility and low cost. With SO2 continuously injected into the stratosphere to offset the radiative forcing caused by anthropogenic emissions, sulphate geoengineering would significantly influence the climate across the planet and, moreover, affect agricultural productivity. In our study, the BNU-ESM model was used to simulate the impact of sulphate geoengineering on climate, and the ORYZA(v3) model was used to simulate the impact of climate change on rice yield/production in China. First, the ORYZA(v3) model was evaluated and calibrated using daily climate data, management data and county-level yield records during 1981-2010 in 19 provinces of China. Then, climate anomalies under sulphate geoengineering simulated by the BNU-ESM model were used to perturb the observed climate data at 318 stations evenly distributed across China during 1981-2010: a 30-year record of climate anomalies was extracted from the BNU-ESM model to match the observed climate data, consisting of a 15-year geoengineering period and a 15-year post-geoengineering period. Lastly, the perturbed climate data were used in the calibrated ORYZA(v3) model to simulate rice yield at the 318 stations, and the station yields were then averaged into corresponding provincial yields. The results showed that (1) geoengineering would offset solar radiation by approximately 140 W·m⁻² per year (about 0.9 K per year in temperature), which would meet the pre-concerted goal of geoengineering, but it would take only about 3 years for temperature to recover after the termination of geoengineering; in spite of this, vapour pressure would decline by about 0.12 kPa per year during the geoengineering period and would take about 15 years to recover during the post-geoengineering period.
The simulations showed that geoengineering would slightly reduce average precipitation and would have little impact on wind speed. (2) Rice production in China would decline by 7.67% (22.64 Mt) on average during the 15 years of geoengineering; over the last five years of geoengineering, this decline would increase to 16.67% (40.38 Mt). During the 15 years of post-geoengineering, rice production in China would decline by 5.18% compared with the baseline. (3) When geoengineering was turned on, the yields of 12 provinces, including all 7 coastal provinces in China, exhibited an increasing trend. During this period, inland provinces showed both decreasing and increasing trends: inland provinces nearer the ocean were more likely to decrease in yield, while those deeper in the interior were more likely to increase in yield.

  4. Unimolecular Thermal Fragmentation of Ortho-Benzyne

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, X.; Maccarone, A. T.; Nimlos, M. R.

    2007-01-01

    The ortho-benzyne diradical, o-C₆H₄, has been produced with a supersonic nozzle and its subsequent thermal decomposition has been studied. As the temperature of the nozzle is increased, the benzyne molecule fragments: o-C₆H₄ + Δ → products. The thermal dissociation products were identified by three experimental methods: (i) time-of-flight photoionization mass spectrometry, (ii) matrix-isolation Fourier transform infrared absorption spectroscopy, and (iii) chemical ionization mass spectrometry. At the threshold dissociation temperature, o-benzyne cleanly decomposes into acetylene and diacetylene via an apparent retro-Diels-Alder process: o-C₆H₄ + Δ → HC≡CH + HC≡C-C≡CH. The experimental Δ_rxn H_298(o-C₆H₄ → HC≡CH + HC≡C-C≡CH) is found to be 57 ± 3 kcal mol⁻¹. Further experiments with the substituted benzyne, 3,6-(CH₃)₂-o-C₆H₂, are consistent with a retro-Diels-Alder fragmentation. But at higher nozzle temperatures, the cracking pattern becomes more complicated. To interpret these experiments, the retro-Diels-Alder fragmentation of o-benzyne has been investigated by rigorous ab initio electronic structure computations. These calculations used basis sets as large as [C(7s6p5d4f3g2h1i)/H(6s5p4d3f2g1h)] (cc-pV6Z) and electron correlation treatments as extensive as full coupled cluster through triple excitations (CCSDT), in cases with a perturbative term for connected quadruples [CCSDT(Q)]. Focal point extrapolations of the computational data yield a 0 K barrier for the concerted, C_2v-symmetric decomposition of o-benzyne, E_b(o-C₆H₄ → HC≡CH + HC≡C-C≡CH) = 88.0 ± 0.5 kcal mol⁻¹. A barrier of this magnitude is consistent with the experimental results.
A careful assessment of the thermochemistry for the high-temperature fragmentation of benzene is presented: C₆H₆ → H + [C₆H₅] → H + [o-C₆H₄] → HC≡CH + HC≡C-C≡CH. Benzyne may be an important intermediate in the thermal decomposition of many alkylbenzenes (arenes). High engine temperatures above 1500 K may crack these alkylbenzenes to a mixture of alkyl radicals and phenyl radicals. The phenyl radicals will then dissociate first to benzyne and then to acetylene and diacetylene.

  6. Persistence of biological nitrogen fixation in high latitude grass-clover grasslands under different management practices

    NASA Astrophysics Data System (ADS)

    Tzanakakis, Vasileios; Sturite, Ievina; Dörsch, Peter

    2016-04-01

    Biological nitrogen fixation (BNF) can contribute substantially to N supply in permanent grasslands, improving N yield and forage quality while reducing inorganic N inputs. Among the factors critical to the performance of BNF in grass-legume mixtures are the grass and legume species selected, the proportion of legumes, the soil and climatic conditions (in particular winter conditions), and management practices (e.g. fertilization and compaction). In high-latitude grasslands, low temperatures can reduce the performance of BNF by hampering legume growth and by suppressing N2 fixation. Estimating BNF in field experiments is not straightforward: different methods have been developed and provide different results. In the present study, we evaluated the performance of BNF in a newly established field experiment in North Norway over four years. The grassland consisted of white clover (Trifolium repens L.) and red clover (Trifolium pratense L.) sown in three proportions (0, 15 and 30% in total) together with timothy (Phleum pratense L.) and meadow fescue (Festuca pratensis L.). Three levels of compaction were applied each year (no tractor, light tractor, heavy tractor) together with two different N rates (110 kg N/ha as cattle slurry, or 170 kg N/ha as cattle slurry and inorganic N fertilizer). We applied two different methods, 15N natural abundance and the difference method, to estimate BNF in the first harvest of each year. Overall, the difference method overestimated BNF relative to the 15N natural abundance method. BNF in the first harvest was compared to the winter survival of red and white clover plants, which decreased with increasing age of the grassland. However, winter conditions did not seem to affect the grassland's ability to fix N in spring. The fraction of N derived from the atmosphere (NdfA) in white and red clover was close to 100% in each spring, indicating no suppression of BNF.
BNF increased the total N yield of the grasslands by up to 75%, mainly due to high N yields in red clover. However, the total biomass and N yield of red clover decreased dramatically over the following years, reflecting the negative cumulative effects of N fertilization and compaction. Overall, BNF by clover can contribute substantially to the N supply of northern grasslands, but better cultivation strategies are needed to improve the persistence of clover.
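
The two BNF estimators compared in this study reduce to simple arithmetic. The sketch below is a generic illustration with entirely hypothetical numbers (not the study's data); B denotes the 15N signature of the legume grown with atmospheric N2 as its sole N source.

```python
def ndfa_natural_abundance(d15n_ref, d15n_legume, b_value):
    """15N natural abundance method: fraction of legume N derived from the atmosphere."""
    return (d15n_ref - d15n_legume) / (d15n_ref - b_value)

def bnf_difference(n_yield_mixture, n_yield_reference):
    """Difference method: fixed N = N yield of the legume sward minus a non-fixing reference."""
    return n_yield_mixture - n_yield_reference

# Hypothetical example values (delta-15N in permil, N yields in kg N/ha)
ndfa = ndfa_natural_abundance(d15n_ref=5.0, d15n_legume=-0.9, b_value=-1.0)
print(round(ndfa, 3))               # ~0.983, i.e. close to 100% as reported above
print(bnf_difference(120.0, 80.0))  # 40.0 kg N/ha fixed, by the difference method
```

With numbers like these it is easy to see how the difference method can overestimate BNF: any non-BNF cause of higher N yield in the legume sward is counted as fixation.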

  7. Investigation of Atmospheric Effects on Retrieval of Sun-Induced Fluorescence Using Hyperspectral Imagery.

    PubMed

    Ni, Zhuoya; Liu, Zhigang; Li, Zhao-Liang; Nerry, Françoise; Huo, Hongyuan; Sun, Rui; Yang, Peiqi; Zhang, Weiwei

    2016-04-06

    Significant research progress has recently been made in estimating fluorescence in the oxygen absorption bands; however, quantitative retrieval of fluorescence data is still affected by factors such as atmospheric effects. In this paper, top-of-atmosphere (TOA) radiance is generated by the MODTRAN 4 and SCOPE models. Based on simulated data, a sensitivity analysis is conducted to assess the sensitivities of four indicators (depth_absorption_band, depth_nofs-depth_withfs, radiance and Fs/radiance) to atmospheric parameters (sun zenith angle (SZA), sensor height, elevation, visibility (VIS) and water content) in the oxygen absorption bands. The results indicate that the SZA and sensor height are the most sensitive parameters and that variations in these two parameters result in large variations (calculated as the variation value divided by the base value) in the oxygen absorption depth in the O₂-A and O₂-B bands (111.4% and 77.1% in the O₂-A band; 27.5% and 32.6% in the O₂-B band, respectively). A comparison of fluorescence retrieval using three methods (the Damm method, the Braun method and DOAS) against SCOPE Fs indicates that the Damm method yields good results and that atmospheric correction can improve the accuracy of fluorescence retrieval; the Damm method is an improved 3FLD method that accounts for atmospheric effects. Finally, hyperspectral airborne images combined with other parameters (SZA, VIS and water content) are exploited to estimate fluorescence using the Damm method and the 3FLD method. The retrieved fluorescence is compared with the field-measured fluorescence, yielding good results (R² = 0.91 for Damm vs. SCOPE SIF; R² = 0.65 for 3FLD vs. SCOPE SIF). Five types of vegetation, including ailanthus, elm, mountain peach, willow and Chinese ash, exhibit consistent associations between the retrieved fluorescence and the field-measured fluorescence.
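
For readers unfamiliar with FLD-type retrievals, the core idea can be sketched as follows. This is a minimal standard-FLD illustration with synthetic numbers, assuming reflectance and fluorescence are constant across the narrow absorption window; the 3FLD and Damm methods build on this by weighting two shoulder bands and (for Damm) adding atmospheric correction.

```python
import math

def fld_fluorescence(e_in, e_out, l_in, l_out):
    """Standard Fraunhofer Line Discrimination: solve the two-band system
    L = rho*E/pi + F at an in-band and an out-of-band wavelength."""
    return (e_out * l_in - e_in * l_out) / (e_out - e_in)

# Build synthetic radiances with a known fluorescence F to verify recovery.
rho, f_true = 0.30, 1.5      # reflectance; fluorescence (mW m-2 sr-1 nm-1), assumed
e_in, e_out = 40.0, 160.0    # irradiance inside/outside the O2-A band, assumed
l_in = rho * e_in / math.pi + f_true
l_out = rho * e_out / math.pi + f_true

print(fld_fluorescence(e_in, e_out, l_in, l_out))  # recovers 1.5
```

Because the deep O2-A absorption makes E_in much smaller than E_out while F is added equally to both radiances, the small fluorescence signal becomes separable from the much larger reflected term.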

  8. Investigation of Atmospheric Effects on Retrieval of Sun-Induced Fluorescence Using Hyperspectral Imagery

    PubMed Central

    Ni, Zhuoya; Liu, Zhigang; Li, Zhao-Liang; Nerry, Françoise; Huo, Hongyuan; Sun, Rui; Yang, Peiqi; Zhang, Weiwei

    2016-01-01

    Significant research progress has recently been made in estimating fluorescence in the oxygen absorption bands; however, quantitative retrieval of fluorescence data is still affected by factors such as atmospheric effects. In this paper, top-of-atmosphere (TOA) radiance is generated by the MODTRAN 4 and SCOPE models. Based on simulated data, a sensitivity analysis is conducted to assess the sensitivities of four indicators (depth_absorption_band, depth_nofs-depth_withfs, radiance and Fs/radiance) to atmospheric parameters (sun zenith angle (SZA), sensor height, elevation, visibility (VIS) and water content) in the oxygen absorption bands. The results indicate that the SZA and sensor height are the most sensitive parameters and that variations in these two parameters result in large variations (calculated as the variation value divided by the base value) in the oxygen absorption depth in the O2-A and O2-B bands (111.4% and 77.1% in the O2-A band; 27.5% and 32.6% in the O2-B band, respectively). A comparison of fluorescence retrieval using three methods (the Damm method, the Braun method and DOAS) against SCOPE Fs indicates that the Damm method yields good results and that atmospheric correction can improve the accuracy of fluorescence retrieval; the Damm method is an improved 3FLD method that accounts for atmospheric effects. Finally, hyperspectral airborne images combined with other parameters (SZA, VIS and water content) are exploited to estimate fluorescence using the Damm method and the 3FLD method. The retrieved fluorescence is compared with the field-measured fluorescence, yielding good results (R2 = 0.91 for Damm vs. SCOPE SIF; R2 = 0.65 for 3FLD vs. SCOPE SIF). Five types of vegetation, including ailanthus, elm, mountain peach, willow and Chinese ash, exhibit consistent associations between the retrieved fluorescence and the field-measured fluorescence. PMID:27058542

  9. New method to enhance the extraction yield of rutin from Sophora japonica using a novel ultrasonic extraction system by determining optimum ultrasonic frequency.

    PubMed

    Liao, Jianqing; Qu, Baida; Liu, Da; Zheng, Naiqin

    2015-11-01

    A new method is proposed for enhancing the extraction yield of rutin from Sophora japonica, in which a novel ultrasonic extraction system was developed to determine the optimum ultrasonic frequency via a two-step procedure. This study systematically investigated the influence of a continuous frequency range of 20-92 kHz on rutin yields. The effects of different operating conditions on rutin yields, such as solvent concentration, solvent-to-solid ratio, ultrasound power, temperature and particle size, were also studied in detail. A higher extraction yield was obtained at an ultrasonic frequency of 60-62 kHz, and this optimum was little affected by the other extraction conditions. Comparative studies between existing methods and the present method were carried out to verify its effectiveness. The results indicated that the new extraction method gave a higher extraction yield than existing ultrasound-assisted extraction (UAE) and Soxhlet extraction (SE). Thus, this method may be promising for the extraction of natural materials on an industrial scale. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Exponential Formulae and Effective Operations

    NASA Technical Reports Server (NTRS)

    Mielnik, Bogdan; Fernandez, David J. C.

    1996-01-01

    One of the standard methods to predict squeezing phenomena consists in splitting the unitary evolution operator into a product of simpler operations. The technique, while mathematically general, is not so simple in applications and leaves some pragmatic problems open. We report an extended class of exponential formulae which yield quicker insight into the laboratory details for a class of squeezing operations and which, moreover, can alternatively be used to program different types of operations, such as: (1) the inversion of free evolution; and (2) soft simulations of sharp kicks (so that abstract results involving kicks of the oscillator potential become realistic laboratory prescriptions).

  11. Estimation and classification by sigmoids based on mutual information

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1994-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.
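
The density-estimation idea (maximize the mutual information between the input and a sigmoid output; the derivative of the learned sigmoid then estimates the density) can be sketched in one dimension. This is an illustrative reconstruction under simplified assumptions, not the paper's multivariate recursive estimator: for a deterministic map, maximizing I(x; y) reduces to maximizing E[log|dy/dx|].

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)  # samples from an "unknown" density (here N(0,1))

# One sigmoid unit y = sigmoid(w*x + b).  Objective: E[log|dy/dx|]
# = E[log w + log y + log(1 - y)]; plain gradient ascent on (w, b).
w, b = 1.0, 0.0
lr = 0.05
for _ in range(2000):
    y = 1.0 / (1.0 + np.exp(-(w * x + b)))
    grad_w = 1.0 / w + np.mean(x * (1.0 - 2.0 * y))
    grad_b = np.mean(1.0 - 2.0 * y)
    w += lr * grad_w
    b += lr * grad_b

def density(t):
    """Estimated density: derivative of the learned sigmoid at t."""
    s = 1.0 / (1.0 + np.exp(-(w * t + b)))
    return w * s * (1.0 - s)

print(density(0.0))  # should be near the standard normal peak of ~0.4
```

The learned sigmoid approximates the data's cumulative distribution, so its slope approximates the density; classification then follows by evaluating one such estimator per class.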

  12. Native Chemical Ligation Strategy to Overcome Side Reactions during Fmoc-Based Synthesis of C-Terminal Cysteine-Containing Peptides.

    PubMed

    Lelièvre, Dominique; Terrier, Victor P; Delmas, Agnès F; Aucagne, Vincent

    2016-03-04

    The Fmoc-based solid-phase synthesis of C-terminal cysteine-containing peptides is problematic due to side reactions provoked by the pronounced acidity of the Cα proton of cysteine esters. We herein describe a general strategy consisting of the postsynthetic introduction of the C-terminal Cys through a key chemoselective native chemical ligation reaction with N-Hnb-Cys peptide crypto-thioesters. This method was successfully applied to the demanding peptide sequences of two natural products of biological interest, giving remarkably high overall yields compared to a state-of-the-art strategy.

  13. The attractor dimension of solar decimetric radio pulsations

    NASA Technical Reports Server (NTRS)

    Kurths, J.; Benz, A. O.; Aschwanden, M. J.

    1991-01-01

    The temporal characteristics of decimetric pulsations and related radio emissions during solar flares are analyzed using statistical methods recently developed for nonlinear dynamic systems. The results of the analysis are consistent with earlier reports on low-dimensional attractors of such events and yield a quantitative description of their temporal characteristics and hidden order. The estimated dimensions of typical decimetric pulsations are generally in the range of 3.0 ± 0.5. Quasi-periodic oscillations and sudden reductions may have dimensions as low as 2. Pulsations of decimetric type IV continua typically have a dimension of about 4.
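
Attractor-dimension estimates of this kind typically come from the Grassberger-Procaccia correlation integral: count the fraction of point pairs closer than r and read the dimension off the log-log slope. The sketch below runs on synthetic data (points on a circle, correlation dimension 1); for a real flux time series one would first embed the signal in delay coordinates, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic "attractor": 800 points uniformly on the unit circle (dimension 1).
theta = rng.uniform(0.0, 2.0 * np.pi, 800)
pts = np.column_stack([np.cos(theta), np.sin(theta)])

# All pairwise distances with i < j.
diff = pts[:, None, :] - pts[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))
d = dist[np.triu_indices(len(pts), k=1)]

# Correlation integral C(r) over a range of scales; the slope of
# log C(r) versus log r estimates the correlation dimension.
radii = np.logspace(np.log10(0.05), np.log10(0.5), 10)
c = np.array([np.mean(d < r) for r in radii])
slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
print(round(slope, 1))  # ~1.0 for a circle
```

In practice the slope is read only over a scaling region of r, small enough to probe local structure but large enough to beat noise, which is why quoted dimensions carry uncertainties like 3.0 ± 0.5.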

  14. Growth and Yield Responses of Cowpea to Inoculation and Phosphorus Fertilization in Different Environments

    PubMed Central

    Kyei-Boahen, Stephen; Savala, Canon E. N.; Chikoye, David; Abaidoo, Robert

    2017-01-01

    Cowpea (Vigna unguiculata) is a major source of dietary protein and an essential component of the cropping systems in semi-arid regions of Sub-Saharan Africa. However, yields are very low due to a lack of improved cultivars, poor management practices, and limited input use. The objectives of this study were to assess the effects of rhizobia inoculant and P on nodulation, N accumulation and yield of two cowpea cultivars in Mozambique. A field study was conducted in three contrasting environments during the 2013/2014 and 2014/2015 seasons using a randomized complete block design with four replications and four treatments. Treatments consisted of seed inoculation, application of 40 kg P2O5 ha⁻¹, inoculation + P, and a non-inoculated control. The most probable number (MPN) technique was used to estimate the indigenous bradyrhizobia populations at the experimental sites. The rhizobia numbers at the sites varied from 5.27 × 10² to 1.07 × 10³ cells g⁻¹ soil. Inoculation increased nodule number by 34-76% and doubled nodule dry weight (78 to 160 mg plant⁻¹). P application improved nodulation and interacted positively with the inoculant. Inoculation, P, and inoculant + P increased shoot dry weight, and shoot and grain N content across locations, but increases in the number of pods plant⁻¹, seeds pod⁻¹, and 100-seed weight were not consistent among treatments across locations. Shoot N content was consistently high for the inoculated plants and for the inoculated + P-fertilized plants, whereas the non-inoculated control plants had the lowest tissue N content. P uptake in shoots ranged from 1.72 to 3.77 g kg⁻¹ and was higher for plants that received P fertilizer alone. Inoculation and P, either alone or in combination, consistently increased cowpea grain yield across locations, with yields ranging from 1097 kg ha⁻¹ for the non-inoculated control to 1674 kg ha⁻¹ for the inoculant + P treatment.
Grain protein concentration followed a similar trend to grain yield and ranged from 223 to 252 g kg⁻¹, but a negative correlation between grain yield and protein concentration was observed. Inoculation increased net returns by $104-163 ha⁻¹ over that of the control. The results demonstrate the potential of improving cowpea grain yield, quality and profitability using inoculant, although the cost-benefit of using P at the current fertilizer price is not attractive except when P is applied together with inoculant at the low-P site. PMID:28515729

  15. A comparison of two adaptive multivariate analysis methods (PLSR and ANN) for winter wheat yield forecasting using Landsat-8 OLI images

    NASA Astrophysics Data System (ADS)

    Chen, Pengfei; Jing, Qi

    2017-02-01

    The assumption that a non-linear method is more reasonable than a linear method when canopy reflectance is used to establish a yield prediction model was proposed and tested in this study. For this purpose, partial least squares regression (PLSR) and artificial neural networks (ANN), representing linear and non-linear analysis methods, respectively, were applied and compared for wheat yield prediction. Multi-period Landsat-8 OLI images were collected at two different wheat growth stages, and a field campaign was conducted to obtain grain yields at selected sampling sites in 2014. The field data were divided into a calibration database and a testing database. Using the calibration data, a cross-validation concept was introduced for the PLSR and ANN model construction to prevent over-fitting. All models were tested using the test data. The ANN yield-prediction model produced R², RMSE and RMSE% values of 0.61, 979 kg ha⁻¹, and 10.38%, respectively, in the testing phase, performing better than the PLSR yield-prediction model, which produced R², RMSE, and RMSE% values of 0.39, 1211 kg ha⁻¹, and 12.84%, respectively. The non-linear method was therefore suggested as the better method for yield prediction.
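
The linear-versus-non-linear comparison can be illustrated on synthetic data. The sketch below deliberately substitutes ordinary least squares for PLSR and a tiny hand-written one-hidden-layer network for the ANN (both are stand-ins, since the study's reflectance data are not available); the point is only that a non-linear fit can capture structure a linear fit cannot.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, (200, 1))
y = np.sin(3.0 * x[:, 0]) + 0.1 * rng.standard_normal(200)  # non-linear "yield"

# Linear stand-in (in place of PLSR): least squares with intercept.
A = np.column_stack([x[:, 0], np.ones(len(x))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
rmse_lin = np.sqrt(np.mean((A @ coef - y) ** 2))

# Non-linear stand-in (in place of the ANN): one tanh hidden layer,
# trained by full-batch gradient descent on squared error.
h = 10
W1, b1 = 0.5 * rng.standard_normal((1, h)), np.zeros(h)
W2, b2 = 0.5 * rng.standard_normal(h), 0.0
lr = 0.05
for _ in range(5000):
    z = np.tanh(x @ W1 + b1)            # hidden activations, shape (200, h)
    err = z @ W2 + b2 - y               # prediction error
    gW2 = z.T @ err / len(x)
    gb2 = err.mean()
    gz = np.outer(err, W2) * (1.0 - z ** 2)   # backprop through tanh
    gW1 = x.T @ gz / len(x)
    gb1 = gz.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
rmse_mlp = np.sqrt(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))

print(rmse_mlp < rmse_lin)  # the non-linear fit wins on non-linear data
```

On a genuinely linear response the ordering can reverse, which is why the study's conclusion rests on testing both model families on held-out data rather than assuming one is better a priori.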

  16. New findings on green sweet pepper (Capsicum annum) pectins: Rhamnogalacturonan and type I and II arabinogalactans.

    PubMed

    do Nascimento, Georgia Erdmann; Iacomini, Marcello; Cordeiro, Lucimara M C

    2017-09-01

    Polysaccharides were extracted from sweet pepper (Capsicum annum) with hot water and named ANW (9% yield). Starch was precipitated by freeze-thaw treatment, while pectic polysaccharides (8% yield) remained soluble and consisted of GalA (67.0%), Rha (1.6%), Ara (6.4%), Xyl (0.3%), Gal (6.7%) and Glc (4.4%). A highly methoxylated homogalacturonan (HG, degree of methyl-esterification of 85% and degree of acetylation of 5%), and type I and type II arabinogalactans (AG-I and AG-II) were observed in NMR analyses. These were fractionated with Fehling's solution to give HG (5.5% yield) and AG fractions (0.6% yield). AG-I and AG-II were further separated by ultrafiltration. AG-II (0.2% yield) consisted of Ara (17.1%), Gal (36.0%), Rha (5.6%) and GalA (12.0%), had a molecular weight of 5.3 × 10⁴ g/mol, and methylation and ¹H/¹³C HSQC-DEPT NMR analyses showed that it was anchored in type I rhamnogalacturonan. This is the first study that reports the presence of AG-I and AG-II in sweet pepper fruits. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. The quantitative significance of Syntrophaceae and syntrophic partnerships in methanogenic degradation of crude oil alkanes

    PubMed Central

    Gray, N D; Sherry, A; Grant, R J; Rowan, A K; Hubert, C R J; Callbeck, C M; Aitken, C M; Jones, D M; Adams, J J; Larter, S R; Head, I M

    2011-01-01

    Libraries of 16S rRNA genes cloned from methanogenic oil-degrading microcosms amended with North Sea crude oil and inoculated with estuarine sediment indicated that bacteria from the genera Smithella (Deltaproteobacteria, Syntrophaceae) and Marinobacter sp. (Gammaproteobacteria) were enriched during degradation. Growth yields and doubling times (36 days for both Smithella and Marinobacter) were determined using qPCR and quantitative data on alkanes, which were the predominant hydrocarbons degraded. The growth yield of the Smithella sp. [0.020 g(cell-C)/g(alkane-C)], assuming it utilized all alkanes removed, was consistent with yields of bacteria that degrade hydrocarbons and other organic compounds in methanogenic consortia. Over 450 days of incubation, the predominance and exponential growth of Smithella were coincident with alkane removal and exponential accumulation of methane. This growth is consistent with Smithella's occurrence in near-surface anoxic hydrocarbon-degrading systems and their complete oxidation of crude oil alkanes to acetate and/or hydrogen in syntrophic partnership with methanogens in such systems. The calculated growth yield of the Marinobacter sp., assuming it grew on alkanes, was [0.0005 g(cell-C)/g(alkane-C)], suggesting that it played a minor role in alkane degradation. The dominant methanogens were hydrogenotrophs (Methanocalculus spp. from the Methanomicrobiales). Enrichment of hydrogen-oxidizing methanogens relative to acetoclastic methanogens was consistent with syntrophic acetate oxidation measured in methanogenic crude oil degrading enrichment cultures. qPCR of the Methanomicrobiales indicated growth characteristics consistent with measured rates of methane production and growth in partnership with Smithella. PMID:21914097

  18. The quantitative significance of Syntrophaceae and syntrophic partnerships in methanogenic degradation of crude oil alkanes.

    PubMed

    Gray, N D; Sherry, A; Grant, R J; Rowan, A K; Hubert, C R J; Callbeck, C M; Aitken, C M; Jones, D M; Adams, J J; Larter, S R; Head, I M

    2011-11-01

    Libraries of 16S rRNA genes cloned from methanogenic oil-degrading microcosms amended with North Sea crude oil and inoculated with estuarine sediment indicated that bacteria from the genera Smithella (Deltaproteobacteria, Syntrophaceae) and Marinobacter sp. (Gammaproteobacteria) were enriched during degradation. Growth yields and doubling times (36 days for both Smithella and Marinobacter) were determined using qPCR and quantitative data on alkanes, which were the predominant hydrocarbons degraded. The growth yield of the Smithella sp. [0.020 g(cell-C)/g(alkane-C)], assuming it utilized all alkanes removed, was consistent with yields of bacteria that degrade hydrocarbons and other organic compounds in methanogenic consortia. Over 450 days of incubation, the predominance and exponential growth of Smithella were coincident with alkane removal and exponential accumulation of methane. This growth is consistent with Smithella's occurrence in near-surface anoxic hydrocarbon-degrading systems and their complete oxidation of crude oil alkanes to acetate and/or hydrogen in syntrophic partnership with methanogens in such systems. The calculated growth yield of the Marinobacter sp., assuming it grew on alkanes, was [0.0005 g(cell-C)/g(alkane-C)], suggesting that it played a minor role in alkane degradation. The dominant methanogens were hydrogenotrophs (Methanocalculus spp. from the Methanomicrobiales). Enrichment of hydrogen-oxidizing methanogens relative to acetoclastic methanogens was consistent with syntrophic acetate oxidation measured in methanogenic crude oil degrading enrichment cultures. qPCR of the Methanomicrobiales indicated growth characteristics consistent with measured rates of methane production and growth in partnership with Smithella. © 2011 Society for Applied Microbiology and Blackwell Publishing Ltd.
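
Growth parameters of the kind reported here (yield in g cell-C per g alkane-C, doubling times from qPCR time series) reduce to simple arithmetic. The sketch below uses hypothetical gene-copy counts and carbon masses purely for illustration, and assumes exponential growth between two qPCR time points.

```python
import math

def doubling_time(n0, n1, dt_days):
    """Doubling time from two qPCR abundances, assuming exponential growth."""
    mu = math.log(n1 / n0) / dt_days   # specific growth rate, per day
    return math.log(2.0) / mu

def growth_yield(cell_c_produced_g, alkane_c_consumed_g):
    """Yield as g of cell carbon formed per g of alkane carbon consumed."""
    return cell_c_produced_g / alkane_c_consumed_g

# Hypothetical numbers: a 16-fold increase in gene copies over 144 days
print(doubling_time(1.0e5, 1.6e6, 144.0))  # ~36 days, as in the text
print(growth_yield(0.004, 0.2))            # 0.020 g(cell-C)/g(alkane-C)
```

Coupling the two numbers (how fast the population doubles versus how much substrate carbon each doubling consumes) is what lets the study attribute the bulk of alkane removal to Smithella rather than Marinobacter.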

  19. A demand-centered, hybrid life-cycle methodology for city-scale greenhouse gas inventories.

    PubMed

    Ramaswami, Anu; Hillman, Tim; Janson, Bruce; Reiner, Mark; Thomas, Gregg

    2008-09-01

    Greenhouse gas (GHG) accounting for individual cities is confounded by spatial scale and boundary effects that impact the allocation of regional material and energy flows. This paper develops a demand-centered, hybrid life-cycle-based methodology for conducting city-scale GHG inventories that incorporates (1) spatial allocation of surface and airline travel across colocated cities in larger metropolitan regions, and (2) life-cycle assessment (LCA) to quantify the embodied energy of key urban materials: food, water, fuel, and concrete. The hybrid methodology enables cities to separately report the GHG impact associated with direct end-use of energy by cities (consistent with EPA and IPCC methods), as well as the impact of extra-boundary activities such as air travel and production of key urban materials (consistent with Scope 3 protocols recommended by the World Resources Institute). Application of this hybrid methodology to Denver, Colorado, yielded a more holistic GHG inventory that approaches a GHG footprint computation, with consistency of inclusions across spatial scales as well as convergence of city-scale per capita GHG emissions (approximately 25 mt CO2e/person/year) with state and national data. The method is shown to have significant policy impacts and also demonstrates the utility of benchmarks in understanding energy use in various city sectors.
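
The hybrid accounting itself is careful bookkeeping: direct end-use emissions plus extra-boundary air travel and embodied materials, normalized per capita. The sketch below uses entirely hypothetical figures (not the paper's Denver inventory) to show the structure of that aggregation.

```python
# All figures are hypothetical, in metric tons CO2e per year.
direct_end_use = {            # direct energy end-use (EPA/IPCC-consistent)
    "buildings_energy": 7_000_000,
    "surface_transport": 4_500_000,
}
extra_boundary = {            # Scope 3-style items, from LCA factors
    "air_travel": 1_500_000,
    "food": 900_000,
    "water": 100_000,
    "fuel_production": 700_000,
    "concrete": 300_000,
}
population = 600_000

total = sum(direct_end_use.values()) + sum(extra_boundary.values())
per_capita = total / population
print(round(per_capita, 1))  # 25.0 mt CO2e/person/yr with these made-up inputs
```

Reporting the two dictionaries separately is the point of the hybrid design: the direct block stays comparable with conventional inventories while the extra-boundary block captures the footprint-style additions.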

  20. Cosmic structure and dynamics of the local Universe

    NASA Astrophysics Data System (ADS)

    Kitaura, Francisco-Shu; Erdoǧdu, Pirin; Nuza, Sebastián. E.; Khalatyan, Arman; Angulo, Raul E.; Hoffman, Yehuda; Gottlöber, Stefan

    2012-11-01

    We present a cosmography analysis of the local Universe based on the recently released Two-Micron All-Sky Redshift Survey catalogue. Our method is based on a Bayesian Networks Machine Learning algorithm (the KIGEN-code) which self-consistently samples the initial density fluctuations compatible with the observed galaxy distribution and a structure formation model given by second-order Lagrangian perturbation theory (2LPT). From the initial conditions we obtain an ensemble of reconstructed density and peculiar velocity fields which characterize the local cosmic structure with high accuracy, unveiling non-linear structures like filaments and voids in detail. Coherent redshift-space distortions are consistently corrected within 2LPT. From the ensemble of cross-correlations between the reconstructions and the galaxy field and the variance of the recovered density fields, we find that our method is extremely accurate up to k ~ 1 h Mpc^-1 and still yields reliable results down to scales of about 3-4 h^-1 Mpc. The motion of the Local Group we obtain within ~80 h^-1 Mpc (v_LG = 522 ± 86 km s^-1, l_LG = 291° ± 16°, b_LG = 34° ± 8°) is in good agreement with measurements derived from the cosmic microwave background and from direct observations of peculiar motions and is consistent with the predictions of ΛCDM.

  1. A new statistical distance scale for planetary nebulae

    NASA Astrophysics Data System (ADS)

    Ali, Alaa; Ismail, H. A.; Alsolami, Z.

    2015-05-01

    In the first part of the present article we discuss the consistency among different individual distance methods for Galactic planetary nebulae, while in the second part we develop a new statistical distance scale based on a calibrating sample of well-determined distances. A set of 315 planetary nebulae with individual distances was extracted from the literature. Inspecting the data set indicates that the accuracy of distances varies among different individual methods and also among different sources where the same individual method was applied. Therefore, we derive a reliable weighted mean distance for each object by considering the influence of the distance error and the weight of each individual method. The results reveal that the discussed individual methods are consistent with each other, except the gravity method, which produces larger distances compared to other individual methods. From the initial data set, we construct a standard calibrating sample consisting of 82 objects. This sample is restricted to objects with distances determined from at least two different individual methods, except for a few objects with trusted distances determined from the trigonometric, spectroscopic, and cluster membership methods. In addition to its well-determined distances, this sample shows several advantages over those used in prior distance scales. It is used to recalibrate the mass-radius and radio surface brightness temperature-radius relationships. An average error of ~30% is estimated for the new distance scale. The new distance scale is compared with the most widely used statistical scales in the literature, and the results show that it is roughly similar to the majority of them, within a ~±20% difference. Furthermore, the new scale yields a weighted mean distance to the Galactic center of 7.6 ± 1.35 kpc, which is in good agreement with the very recent measurement of Malkin (2013).
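The weighted mean distance described above is, in the standard treatment, an inverse-variance weighted average over the individual methods. A minimal sketch with illustrative distances and errors (not catalogue values):

```python
# Each individual method i gives a distance d_i with quoted error sigma_i;
# the values below are illustrative, not data from the article.
distances = [1.20, 1.35, 1.10]     # kpc, e.g. from different individual methods
sigmas = [0.10, 0.20, 0.15]        # 1-sigma errors, kpc

# Inverse-variance weights: more precise methods count for more.
weights = [1.0 / s**2 for s in sigmas]
d_mean = sum(w * d for w, d in zip(weights, distances)) / sum(weights)
sigma_mean = (1.0 / sum(weights)) ** 0.5   # error of the weighted mean

print(round(d_mean, 3), round(sigma_mean, 3))
```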

  2. Practical no-gold-standard evaluation framework for quantitative imaging methods: application to lesion segmentation in positron emission tomography

    PubMed Central

    Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.

    2017-01-01

    Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from 18F-Fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883
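The bootstrap step can be sketched as resampling lesions with replacement and recomputing the figure of merit each time; the per-lesion values below are synthetic, and the FoM is reduced to a simple mean purely for illustration:

```python
import random

random.seed(7)

# Synthetic per-lesion figure-of-merit values (not from the study).
fom_per_lesion = [random.gauss(0.85, 0.05) for _ in range(100)]

def bootstrap_ci(vals, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean FoM."""
    means = []
    for _ in range(n_boot):
        sample = [random.choice(vals) for _ in vals]   # resample with replacement
        means.append(sum(sample) / len(sample))
    means.sort()
    return means[int(alpha / 2 * n_boot)], means[int((1 - alpha / 2) * n_boot) - 1]

lo, hi = bootstrap_ci(fom_per_lesion)
print(round(lo, 3), round(hi, 3))
```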

  3. Mechanical, thermal and morphological characterization of polycarbonate/oxidized carbon nanofiber composites produced with a lean 2-step manufacturing process.

    PubMed

    Lively, Brooks; Kumar, Sandeep; Tian, Liu; Li, Bin; Zhong, Wei-Hong

    2011-05-01

    In this study we report the advantages of a 2-step method that incorporates an additional pre-conditioning process step for rapid and precise blending of the constituents prior to the commonly used melt compounding method for preparing polycarbonate/oxidized carbon nanofiber composites. This additional step (equivalent to a manufacturing cell) involves the formation of a highly concentrated solid nano-nectar of polycarbonate/carbon nanofiber composite using a solution mixing process, followed by melt mixing with pure polycarbonate. This combined method yields excellent dispersion and improved mechanical and thermal properties as compared to the 1-step melt mixing method. The test results indicated that inclusion of carbon nanofibers into composites via the 2-step method resulted in a dramatically reduced (48% lower) coefficient of thermal expansion compared to that of pure polycarbonate, and 30% lower than that from the 1-step processing, at the same loading of 1.0 wt%. Improvements were also found in dynamic mechanical analysis and flexural mechanical properties. The 2-step approach is more precise and leads to better dispersion, higher quality, consistency, and improved performance in critical application areas. It is also consistent with Lean Manufacturing principles, in which manufacturing cells are linked together using fewer key resources to create a smoother production flow. Therefore, this 2-step process can be more attractive for industry.

  4. Optimization of the Ethanol Recycling Reflux Extraction Process for Saponins Using a Design Space Approach

    PubMed Central

    Gong, Xingchu; Zhang, Ying; Pan, Jianyang; Qu, Haibin

    2014-01-01

    A solvent recycling reflux extraction process for Panax notoginseng was optimized using a design space approach to improve the batch-to-batch consistency of the extract. Saponin yields, total saponin purity, and pigment yield were defined as the process critical quality attributes (CQAs). Ethanol content, extraction time, and the ratio of the recycling ethanol flow rate and initial solvent volume in the extraction tank (RES) were identified as the critical process parameters (CPPs) via quantitative risk assessment. Box-Behnken design experiments were performed. Quadratic models between CPPs and process CQAs were developed, with determination coefficients higher than 0.88. As the ethanol concentration decreases, saponin yields first increase and then decrease. A longer extraction time leads to higher yields of the ginsenosides Rb1 and Rd. The total saponin purity increases as the ethanol concentration increases. The pigment yield increases as the ethanol concentration decreases or extraction time increases. The design space was calculated using a Monte-Carlo simulation method with an acceptable probability of 0.90. Normal operation ranges to attain process CQA criteria with a probability of more than 0.914 are recommended as follows: ethanol content of 79–82%, extraction time of 6.1–7.1 h, and RES of 0.039–0.040 min−1. Most of the results of the verification experiments agreed well with the predictions. The verification experiment results showed that the selection of proper operating ethanol content, extraction time, and RES within the design space can ensure that the CQA criteria are met. PMID:25470598
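The design-space calculation can be sketched as a Monte Carlo loop: perturb the CPPs (and add model error), re-evaluate the fitted quadratic model, and count how often the CQA criterion is met. The model coefficients, criterion, and noise levels below are invented for illustration, not the paper's fitted values:

```python
import random

random.seed(0)

# Hypothetical quadratic response model for one CQA versus coded CPPs
# (ethanol content, extraction time, RES); coefficients are illustrative.
def purity(ethanol, time, res):
    return 60.0 + 5.0*ethanol - 2.0*ethanol**2 + 1.5*time - 0.5*time**2 + 0.8*res

def prob_meeting_spec(ethanol, time, res, criterion=62.0,
                      cpp_sd=0.5, model_sd=1.0, n=20000):
    """Monte-Carlo probability that the CQA criterion is met at this operating point."""
    hits = 0
    for _ in range(n):
        y = purity(ethanol + random.gauss(0, cpp_sd),
                   time + random.gauss(0, cpp_sd),
                   res + random.gauss(0, cpp_sd)) + random.gauss(0, model_sd)
        hits += y >= criterion
    return hits / n

# Operating points with probability above a threshold (e.g. 0.90) form the design space.
p = prob_meeting_spec(1.0, 1.0, 0.5)
print(round(p, 3))
```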

  5. Quantum Yields in Mixed-Conifer Forests and Ponderosa Pine Plantations

    NASA Astrophysics Data System (ADS)

    Wei, L.; Marshall, J. D.; Zhang, J.

    2008-12-01

    Most process-based physiological models require the canopy quantum yield of photosynthesis as a starting point to simulate carbon sequestration and, subsequently, gross primary production (GPP). The quantum yield is a measure of photosynthetic efficiency expressed in moles of CO2 assimilated per mole of photons absorbed; the process is influenced by environmental factors. In the summer of 2008, we measured quantum yields on both sun and shade leaves for four conifer species at five sites within the Mica Creek Experimental Watershed (MCEW) in northern Idaho and for one conifer species at three sites in northern California. The MCEW forest is typical of mixed conifer stands dominated by grand fir (Abies grandis (Douglas ex D. Don) Lindl.). In northern California, the three sites with contrasting site qualities are ponderosa pine (Pinus ponderosa C. Lawson var. ponderosa) plantations that were experimentally treated with vegetation control, fertilization, and a combination of both. We found that quantum yields in MCEW ranged from ~0.045 to ~0.075 mol CO2 per mol incident photons. However, there were no significant differences between canopy positions, or among sites or tree species. In northern California, the mean quantum yield across the three sites was 0.051 mol CO2/mol incident photons. No significant difference in quantum yield was found between canopy positions, or among treatments or sites. The results suggest that these conifer species maintain a relatively consistent quantum yield in both MCEW and northern California. This consistency simplifies the use of a process-based model to accurately predict forest productivity in these areas.
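Operationally, the quantum yield is the initial slope of the photosynthetic light-response curve. A minimal least-squares sketch on synthetic, exactly linear data (slope 0.05, inside the reported 0.045-0.075 range):

```python
# Synthetic low-light gas-exchange data (illustrative, exactly linear here):
ppfd = [0.0, 25.0, 50.0, 75.0, 100.0]     # incident photons, umol m-2 s-1
assim = [-0.5, 0.75, 2.0, 3.25, 4.5]      # net CO2 uptake, umol m-2 s-1

# Ordinary least-squares slope = quantum yield (mol CO2 per mol incident photons)
n = len(ppfd)
mx, my = sum(ppfd) / n, sum(assim) / n
quantum_yield = (sum((x - mx) * (y - my) for x, y in zip(ppfd, assim))
                 / sum((x - mx) ** 2 for x in ppfd))
print(quantum_yield)   # 0.05
```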

  6. Multivariate models for prediction of rheological characteristics of filamentous fermentation broth from the size distribution.

    PubMed

    Petersen, Nanna; Stocks, Stuart; Gernaey, Krist V

    2008-05-01

    The main purpose of this article is to demonstrate that principal component analysis (PCA) and partial least squares regression (PLSR) can be used to extract information from particle size distribution data and predict rheological properties. Samples from commercially relevant Aspergillus oryzae fermentations conducted in 550 L pilot scale tanks were characterized with respect to particle size distribution, biomass concentration, and rheological properties. The rheological properties were described using the Herschel-Bulkley model. Estimation of all three parameters in the Herschel-Bulkley model (yield stress (τ_y), consistency index (K), and flow behavior index (n)) resulted in a large standard deviation of the parameter estimates. The flow behavior index was not found to be correlated with any of the other measured variables, and previous studies have suggested a constant value of the flow behavior index in filamentous fermentations. It was therefore chosen to fix this parameter to the average value, thereby decreasing the standard deviation of the estimates of the remaining rheological parameters significantly. Using a PLSR model, a reasonable prediction of apparent viscosity (μ_app), yield stress (τ_y), and consistency index (K) could be made from the size distributions, biomass concentration, and process information. This provides a predictive method with high predictive power for the rheology of fermentation broth, with the advantage over previous models that τ_y and K can be predicted as well as μ_app. Validation on an independent test set yielded a root mean square error of 1.21 Pa for τ_y, 0.209 Pa·s^n for K, and 0.0288 Pa·s for μ_app, corresponding to R^2 = 0.95, R^2 = 0.94, and R^2 = 0.95, respectively. Copyright 2007 Wiley Periodicals, Inc.
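With the flow behavior index n fixed, as the abstract describes, the remaining Herschel-Bulkley parameters follow from ordinary linear least squares on the transformed shear rate. A sketch on noiseless synthetic data (τ_y = 5 Pa and K = 2 Pa·s^n are assumed values, which the fit should recover exactly):

```python
# Herschel-Bulkley model: tau = tau_y + K * gamma_dot**n, with n fixed.
n_fix = 0.5
gamma_dot = [1.0, 4.0, 9.0, 16.0, 25.0]           # shear rate, 1/s
tau = [5.0 + 2.0 * g**n_fix for g in gamma_dot]   # synthetic stresses, Pa

# With x = gamma_dot**n the model is linear: tau = tau_y + K*x.
x = [g**n_fix for g in gamma_dot]
N = len(x)
sx, sy = sum(x), sum(tau)
sxx = sum(v * v for v in x)
sxy = sum(a * b for a, b in zip(x, tau))
K = (N * sxy - sx * sy) / (N * sxx - sx * sx)     # consistency index, Pa s^n
tau_y = (sy - K * sx) / N                         # yield stress, Pa

print(round(tau_y, 3), round(K, 3))   # 5.0 2.0
```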

  7. SU-F-T-450: The Investigation of Radiotherapy Quality Assurance and Automatic Treatment Planning Based On the Kernel Density Estimation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, J; Fan, J; Hu, W

    Purpose: To develop a fast automatic algorithm based on two-dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), which can be employed for the investigation of radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimation of the conditional probability of the dose given the values of the predictive features. For a new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimation of the DVH. The 2D KDE is implemented to predict the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organ at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinate system, are considered as the predictive features to represent the OAR-target spatial relationship. The feasibility of our method has been demonstrated with rectum, breast, and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: Consistent results were found between these two DVHs for each cancer type, and the average of the relative point-wise differences was about 5%, within the clinically acceptable extent. Conclusion: According to the results of this study, our method can be used to predict a clinically acceptable DVH and has the ability to evaluate the quality and consistency of treatment planning.
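A pure-Python sketch of the core idea, with a single predictive feature instead of two: estimate the joint density p(dose, feature) with a Gaussian kernel, then read off the conditional dose distribution for a new voxel. The training pairs are synthetic (dose falling off linearly with distance-to-target) and the bandwidths are assumed, not optimized:

```python
import math, random

random.seed(1)

# Synthetic training pairs (dose, feature): dose falls off with the signed
# distance-to-target feature, plus noise.  Not data from the study.
feats = [random.uniform(0.0, 10.0) for _ in range(200)]
train = [(max(0.0, 60.0 - 4.0 * f) + random.gauss(0, 2), f) for f in feats]

h_dose, h_feat = 3.0, 1.0      # assumed kernel bandwidths

def kde_joint(dose, feat):
    """Unnormalized 2D Gaussian kernel density estimate of p(dose, feature)."""
    return sum(math.exp(-0.5 * ((dose - d0) / h_dose) ** 2
                        - 0.5 * ((feat - f0) / h_feat) ** 2)
               for d0, f0 in train)

def conditional_mean_dose(feat, grid=range(0, 71)):
    """Mean of p(dose | feature) on a dose grid; normalization cancels."""
    w = [kde_joint(d, feat) for d in grid]
    return sum(d * wi for d, wi in zip(grid, w)) / sum(w)

# For a voxel 2 distance units from the target, the conditional mean should sit
# near the synthetic falloff 60 - 4*2 = 52, modulo smoothing.
print(round(conditional_mean_dose(2.0), 1))
```

Integrating the conditional distribution over the new patient's feature distribution, as the abstract describes, would then yield the predicted DVH.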

  8. Questa baseline and pre-mining ground-water quality investigation. 21. Hydrology and water balance of the Red River basin, New Mexico 1930-2004

    USGS Publications Warehouse

    Naus, Cheryl A.; McAda, Douglas P.; Myers, Nathan C.

    2006-01-01

    A study of the hydrology of the Red River Basin of northern New Mexico, including development of a pre-mining water balance, contributes to a greater understanding of processes affecting the flow and chemistry of water in the Red River and its alluvial aquifer. Estimates of mean annual precipitation for the Red River Basin ranged from 22.32 to 25.19 inches. Estimates of evapotranspiration for the Red River Basin ranged from 15.02 to 22.45 inches, or 63.23 to 94.49 percent of mean annual precipitation. Mean annual yield from the Red River Basin estimated using regression equations ranged from 45.26 to 51.57 cubic feet per second. Mean annual yield from the Red River Basin estimated by subtracting evapotranspiration from mean annual precipitation ranged from 55.58 to 93.15 cubic feet per second. In comparison, naturalized 1930-2004 mean annual streamflow at the Red River near Questa gage was 48.9 cubic feet per second. Although estimates developed using regression equations appear to be a good representation of yield from the Red River Basin as a whole, the methods that consider evapotranspiration may more accurately represent yield from smaller basins that have a substantial amount of sparsely vegetated scar area. Hydrograph separation using the HYSEP computer program indicated that subsurface flow for 1930-2004 ranged from 76 to 94 percent of streamflow for individual years, with a mean of 87 percent of streamflow. By using a chloride mass-balance method, ground-water recharge was estimated to range from 7 to 17 percent of mean annual precipitation for water samples from wells in Capulin Canyon and the Hansen, Hottentot, La Bobita, and Straight Creek Basins, and was 21 percent of mean annual precipitation for water samples from the Red River. Comparisons of mean annual basin yield and measured streamflow indicate that streamflow does not consistently increase as cumulative estimated mean annual basin yield increases. Comparisons of estimated mean annual yield and measured streamflow profiles indicate that, in general, the river is gaining ground water from the alluvium in the reach from the town of Red River to between Hottentot and Straight Creeks, and from Columbine Creek to near Thunder Bridge. The river is losing water to the alluvium from upstream of the mill area to Columbine Creek. Interpretations of ground- and surface-water interactions based on comparisons of mean annual basin yield and measured streamflow are supported further by water-level data from piezometers, wells, and the Red River.
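The water-balance estimate of basin yield is essentially a unit-conversion exercise: runoff depth (precipitation minus evapotranspiration) times basin area, expressed as a mean discharge. The basin area below is assumed purely for illustration; the depths are picked from within the ranges quoted in the abstract:

```python
# Assumed drainage area (illustrative, not from the report) and depths chosen
# from within the abstract's reported ranges.
basin_area_mi2 = 113.0            # square miles (assumed)
precip_in = 23.5                  # mean annual precipitation, inches
et_in = 18.0                      # mean annual evapotranspiration, inches

runoff_ft = (precip_in - et_in) / 12.0      # runoff depth, feet per year
area_ft2 = basin_area_mi2 * 5280.0**2       # basin area, square feet
seconds_per_year = 365.25 * 24 * 3600

yield_cfs = runoff_ft * area_ft2 / seconds_per_year
print(round(yield_cfs, 1))   # mean annual yield, cubic feet per second
```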

  9. Towards a multiconfigurational method of increments

    NASA Astrophysics Data System (ADS)

    Fertitta, E.; Koch, D.; Paulus, B.; Barcza, G.; Legeza, Ö.

    2018-06-01

    The method of increments (MoI) allows one to successfully calculate cohesive energies of bulk materials with high accuracy, but it encounters difficulties when calculating dissociation curves. The reason is that its standard formalism is based on a single Hartree-Fock (HF) configuration whose orbitals are localised and used for the many-body expansion. In situations where HF does not allow a size-consistent description of the dissociation, the MoI cannot be guaranteed to yield proper results either. Herein, we address the problem by employing a size-consistent multiconfigurational reference for the MoI formalism. This leads to a matrix equation where a coupling derived from the reference itself is employed. In principle, such an approach allows one to evaluate approximate values for the ground-state as well as excited-state energies. While the latter are accurate only close to the avoided crossing, the ground-state results are very promising for the whole dissociation curve, as shown by comparison with density matrix renormalisation group benchmarks. We tested this two-state constant-coupling MoI on beryllium rings of different sizes and studied the error introduced by the constant coupling.

  10. Principles for valid histopathologic scoring in research

    PubMed Central

    Gibson-Corley, Katherine N.; Olivier, Alicia K.; Meyerholz, David K.

    2013-01-01

    Histopathologic scoring is a tool by which semi-quantitative data can be obtained from tissues. Initially, a thorough understanding of the experimental design, study objectives and methods are required to allow the pathologist to appropriately examine tissues and develop lesion scoring approaches. Many principles go into the development of a scoring system such as tissue examination, lesion identification, scoring definitions and consistency in interpretation. Masking (a.k.a. “blinding”) of the pathologist to experimental groups is often necessary to constrain bias and multiple mechanisms are available. Development of a tissue scoring system requires appreciation of the attributes and limitations of the data (e.g. nominal, ordinal, interval and ratio data) to be evaluated. Incidence, ordinal and rank methods of tissue scoring are demonstrated along with key principles for statistical analyses and reporting. Validation of a scoring system occurs through two principal measures: 1) validation of repeatability and 2) validation of tissue pathobiology. Understanding key principles of tissue scoring can help in the development and/or optimization of scoring systems so as to consistently yield meaningful and valid scoring data. PMID:23558974

  11. Influence of inocula and grains on sclerotia biomass and carotenoid yield of Penicillium sp. PT95 during solid-state fermentation.

    PubMed

    Han, Jian-Rong; Yuan, Jing-Ming

    2003-10-01

    Various inocula and grains were evaluated for carotenoid production by solid-state fermentation using Penicillium sp. PT95. Millet medium was more effective in both sclerotia growth and carotenoid production than other grain media. An inoculum in the form of sclerotia yielded higher sclerotia biomass compared to either a spore inoculum or a mycelial pellet inoculum. Adding wheat bran to grain medium favored the formation of sclerotia. However, neither the inoculum type nor the addition of wheat bran resulted in a significant change in the carotenoid content of sclerotia. Among grain media supplemented with wheat bran (wheat bran:grain = 1:4 w/w, dry basis), a medium consisting of rice and wheat bran gave the highest sclerotia biomass (15.10 g/100 g grain), a medium consisting of buckwheat and wheat bran gave the highest content of carotenoid in sclerotia (0.826 mg/g dry sclerotia), and a medium consisting of millet and wheat bran gave the highest carotenoid yield (11.457 mg/100 g grain).

  12. Biochemical indicators for the bioavailability of organic carbon in ground water

    USGS Publications Warehouse

    Chapelle, F.H.; Bradley, P.M.; Goode, D.J.; Tiedeman, C.; Lacombe, P.J.; Kaiser, K.; Benner, R.

    2009-01-01

    The bioavailability of total organic carbon (TOC) was examined in ground water from two hydrologically distinct aquifers using biochemical indicators widely employed in chemical oceanography. Concentrations of total hydrolyzable neutral sugars (THNS), total hydrolyzable amino acids (THAA), and carbon-normalized percentages of TOC present as THNS and THAA (referred to as "yields") were assessed as indicators of bioavailability. A shallow coastal plain aquifer in Kings Bay, Georgia, was characterized by relatively high concentrations (425 to 1492 μM; 5.1 to 17.9 mg/L) of TOC but relatively low THNS and THAA yields (~0.2%-1.0%). These low yields are consistent with the highly biodegraded nature of TOC mobilized from relatively ancient (Pleistocene) sediments overlying the aquifer. In contrast, a shallow fractured rock aquifer in West Trenton, New Jersey, exhibited lower TOC concentrations (47 to 325 μM; 0.6 to 3.9 mg/L) but higher THNS and THAA yields (~1% to 4%). These higher yields were consistent with the younger, and thus more bioavailable, TOC being mobilized from modern soils overlying the aquifer. Consistent with these apparent differences in TOC bioavailability, no significant correlation between TOC and dissolved inorganic carbon (DIC), a product of organic carbon mineralization, was observed at Kings Bay, whereas a strong correlation was observed at West Trenton. In contrast to TOC, THNS and THAA concentrations were observed to correlate with DIC at the Kings Bay site. These observations suggest that biochemical indicators such as THNS and THAA may provide information concerning the bioavailability of organic carbon present in ground water that is not available from TOC measurements alone.
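The carbon-normalized "yield" is simply the percentage of TOC present as the compound class. A one-line sketch with illustrative concentrations (both expressed as micromolar carbon):

```python
# Illustrative concentrations, both as micromolar carbon (not data from the study).
toc_uM_C = 500.0       # total organic carbon
thns_uM_C = 4.0        # total hydrolyzable neutral sugars, as carbon

# Carbon-normalized yield: percent of TOC present as THNS.
thns_yield_pct = 100.0 * thns_uM_C / toc_uM_C
print(thns_yield_pct)   # 0.8, i.e. within the ~0.2%-1.0% range reported for Kings Bay
```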

  13. On the method of lumens

    PubMed Central

    Shera, Christopher A.

    2014-01-01

    Parent and Allen [(2007). J. Acoust. Soc. Am. 122, 918–931] introduced the “method of lumens” to compute the plane-wave reflectance in a duct terminated with a nonuniform impedance. The method involves splitting the duct into multiple, fictitious subducts (lumens), solving for the reflectance in each subduct, and then combining the results. The method of lumens has considerable intuitive appeal and is easily implemented in the time domain. Previously applied only in a complex acoustical setting where proper evaluation is difficult (i.e., in a model of the ear canal and tympanic membrane), the method is tested here by using it to compute the reflectance from an area constriction in an infinite lossless duct considered in the long-wavelength limit. Neither the original formulation of the method—shown here to violate energy conservation except when the termination impedance is uniform—nor a reformulation consistent with basic physical constraints yields the correct solution to this textbook problem in acoustics. The results are generalized and the nature of the errors illuminated. PMID:25480060
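The textbook benchmark used here has a closed form: in the long-wavelength limit, the acoustic impedance of a lossless duct scales inversely with area, so a sudden change from area A1 to A2 reflects plane waves with pressure coefficient R = (A1 - A2)/(A1 + A2). A minimal sketch of that relation:

```python
# Long-wavelength reflectance at an abrupt area change in a lossless duct.
# Impedance Z = rho*c/A, so R = (Z2 - Z1)/(Z2 + Z1) = (A1 - A2)/(A1 + A2).
def constriction_reflectance(a1, a2):
    return (a1 - a2) / (a1 + a2)

# No area change -> no reflection; halving the area reflects one third
# of the incident pressure amplitude.
print(constriction_reflectance(1.0, 1.0))
print(constriction_reflectance(2.0, 1.0))
```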

  14. Measurement of Postmortem Pupil Size: A New Method with Excellent Reliability and Its Application to Pupil Changes in the Early Postmortem Period.

    PubMed

    Fleischer, Luise; Sehner, Susanne; Gehl, Axel; Riemer, Martin; Raupach, Tobias; Anders, Sven

    2017-05-01

    Measurement of postmortem pupil width is a potential component of death time estimation. However, no standardized measurement method has been described. We analyzed a total of 71 digital images for pupil-iris ratio using the software ImageJ. Images were analyzed three times by four different examiners. In addition, serial images from 10 cases were taken between 2 and 50 h postmortem to detect spontaneous pupil changes. Intra- and inter-rater reliability of the method was excellent (ICC > 0.95). The method is observer independent and yields consistent results, and images can be digitally stored and re-evaluated. The method seems highly eligible for forensic and scientific purposes. While statistical analysis of spontaneous pupil changes revealed a significant polynomial of quartic degree for postmortem time (p = 0.001), an obvious pattern was not detected. These results do not indicate suitability of spontaneous pupil changes for forensic death time estimation, as formerly suggested. © 2016 American Academy of Forensic Sciences.

  15. High Density Linkage Map Construction and Mapping of Yield Trait QTLs in Maize (Zea mays) Using the Genotyping-by-Sequencing (GBS) Technology

    PubMed Central

    Su, Chengfu; Wang, Wei; Gong, Shunliang; Zuo, Jinghui; Li, Shujiang; Xu, Shizhong

    2017-01-01

    Increasing grain yield is the ultimate goal for maize breeding. High-resolution quantitative trait locus (QTL) mapping can help us understand the molecular basis of phenotypic variation in yield and thus facilitate marker-assisted breeding. The aim of this study was to use genotyping-by-sequencing (GBS) for large-scale SNP discovery and simultaneous genotyping of all F2 individuals from a cross between two varieties of maize that contrast clearly in yield and related traits. A set of 199 F2 progeny derived from the cross of varieties SG-5 and SG-7 were generated and genotyped by GBS. A total of 1,046,524,604 reads, with an average of 5,258,918 reads per F2 individual, were generated. This number of reads represents approximately 0.36-fold coverage of the maize reference genome Zea_mays.AGPv3.29 for each F2 individual. A total of 68,882 raw SNPs were discovered in the F2 population, which, after stringent filtering, yielded a total of 29,927 high-quality SNPs. Comparative analysis using these physically mapped marker loci revealed a high degree of synteny with the reference genome. The SNP genotype data were used to construct an intraspecific genetic linkage map of maize consisting of 3,305 bins on 10 linkage groups, spanning 2,236.66 cM at an average distance of 0.68 cM between consecutive markers. From this map, we identified 28 QTLs associated with yield traits (100-kernel weight, ear length, ear diameter, cob diameter, kernel row number, corn grains per row, ear weight, and grain weight per plant) using the composite interval mapping (CIM) method and 29 QTLs using the least absolute shrinkage and selection operator (LASSO) method. QTLs identified by the CIM method account for 6.4% to 19.7% of the phenotypic variation. Small intervals of three QTLs (qCGR-1, qKW-2, and qGWP-4) contain several genes, including one gene (GRMZM2G139872) encoding an F-box protein, three genes (GRMZM2G180811, GRMZM5G828139, and GRMZM5G873194) encoding WD40-repeat proteins, and one gene (GRMZM2G019183) encoding a UDP-glycosyltransferase. This work will not only help to elucidate the mechanisms that control yield traits in maize, but also provide a basis for marker-assisted selection and map-based cloning in further studies. PMID:28533786
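As a much-simplified stand-in for the CIM and LASSO models used in the study, a single-marker regression scan captures the core of QTL detection: regress the trait on each SNP genotype and rank markers by the variance they explain. Genotypes and phenotypes below are simulated, with marker 3 made causal:

```python
import random

random.seed(42)

# Simulated F2 data: genotypes coded 0/1/2 copies of one allele; marker 3 causal.
n_ind, n_snp = 199, 10
geno = [[random.choice([0, 1, 2]) for _ in range(n_snp)] for _ in range(n_ind)]
pheno = [5.0 * row[3] + random.gauss(0, 1) for row in geno]   # large effect at SNP 3

def r_squared(x, y):
    """Variance explained by simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

scores = [r_squared([row[j] for row in geno], pheno) for j in range(n_snp)]
best = max(range(n_snp), key=lambda j: scores[j])
print(best)   # 3, the simulated causal marker
```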

  16. Nanoscale Roughness of Faults Explained by the Scale-Dependent Yield Stress of Geologic Materials

    NASA Astrophysics Data System (ADS)

    Thom, C.; Brodsky, E. E.; Carpick, R. W.; Goldsby, D. L.; Pharr, G.; Oliver, W.

    2017-12-01

    Despite significant differences in their lithologies and slip histories, natural fault surfaces exhibit remarkably similar scale-dependent roughness over lateral length scales spanning 7 orders of magnitude, from microns to tens of meters. Recent work has suggested that a scale-dependent yield stress may result in such a characteristic roughness, but experimental evidence in favor of this hypothesis has been lacking. We employ an atomic force microscope (AFM) operating in intermittent-contact mode to map the topography of the Corona Heights fault surface. Our experiments demonstrate that the Corona Heights fault exhibits isotropic self-affine roughness with a Hurst exponent of 0.75 ± 0.05 at all wavelengths from 60 nm to 10 μm. If yield stress controls roughness, then the roughness data predict that yield strength varies with length scale as λ^(-0.25 ± 0.05). To test the relationship between roughness and yield stress, we conducted nanoindentation tests on the same Corona Heights sample and a sample of the Yair Fault, a carbonate fault surface that has been previously characterized by AFM. A diamond Berkovich indenter tip was used to indent the samples at a nominally constant strain rate (defined as the loading rate divided by the load) of 0.2 s^-1. The continuous stiffness method (CSM) was used to measure the indentation hardness (which is proportional to yield stress) and the elastic modulus of the sample as a function of depth in each test. For both samples, the yield stress decreases with increasing size of the indents, a behavior consistent with that observed for many engineering materials and recently for other geologic materials such as olivine. The magnitude of this "indentation size effect" is best described by a power law with exponents of -0.12 ± 0.06 and -0.18 ± 0.08 for the Corona Heights and Yair faults, respectively. These results demonstrate a link between surface roughness and yield stress, and suggest that fault geometry is the physical manifestation of a scale-dependent yield stress.
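The indentation size effect exponent is obtained from a power-law fit, i.e., a straight-line fit in log-log space. A sketch on noiseless synthetic hardness data generated with the -0.12 exponent reported for the Corona Heights sample, which the fit recovers:

```python
import math

# Synthetic indentation data: hardness H = A * L**m with m = -0.12 (noiseless).
depth_nm = [50.0, 100.0, 200.0, 400.0, 800.0, 1600.0]
hardness = [4.0 * L**-0.12 for L in depth_nm]      # GPa, illustrative prefactor

# Least-squares slope in log-log space is the power-law exponent m.
lx = [math.log(L) for L in depth_nm]
ly = [math.log(H) for H in hardness]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
m = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
     / sum((a - mx) ** 2 for a in lx))
print(round(m, 3))   # -0.12
```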

  17. Classification of burn wounds using support vector machines

    NASA Astrophysics Data System (ADS)

    Acha, Begona; Serrano, Carmen; Palencia, Sergio; Murillo, Juan Jose

    2004-05-01

    The purpose of this work is to improve a previous method developed by the authors for the classification of burn wounds into their depths. The inputs of the system are color and texture information, as these are the characteristics observed by physicians in order to give a diagnosis. Our previous work consisted of segmenting the burn wound from the rest of the image and classifying the burn into its depth. In this paper we focus on the classification problem only. We previously proposed to use a Fuzzy-ARTMAP neural network (NN). However, we may take advantage of newer, powerful classification tools such as Support Vector Machines (SVM). We apply the five-fold cross-validation scheme to divide the database into training and validation sets. Then, we apply a feature selection method for each classifier, which gives us the set of features that yields the smallest classification error for each classifier. The features used for classification are first-order statistical parameters extracted from the L*, u* and v* color components of the image. The feature selection algorithms used are the Sequential Forward Selection (SFS) and the Sequential Backward Selection (SBS) methods. As the data of the problem faced here are not linearly separable, the SVM was trained using several different kernels. The validation process shows that the SVM method, when using a Gaussian kernel of variance 1, outperforms the classification results obtained with the rest of the classifiers, yielding a classification error rate of 0.7%, whereas the Fuzzy-ARTMAP NN attained 1.6%.
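The Gaussian (RBF) kernel behind the best-performing SVM can be written down directly. The form below is the standard parameterization, assumed here to match the abstract's "variance 1" setting, and the feature vectors are purely illustrative:

```python
import math

# Gaussian (RBF) kernel: k(x, z) = exp(-||x - z||^2 / (2 * sigma^2)).
def rbf_kernel(x, z, sigma2=1.0):
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-sq / (2.0 * sigma2))

# Illustrative first-order color-statistic feature vectors (hypothetical values).
x = [0.2, 0.5, 0.1]
print(round(rbf_kernel(x, x), 3))                 # identical points -> 1.0
print(round(rbf_kernel(x, [1.2, 0.5, 0.1]), 3))   # unit distance -> exp(-0.5) ~ 0.607
```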

  18. Magnetic resonance imaging: A tool to monitor and optimize enzyme distribution during porcine pancreas distention for islet isolation

    PubMed Central

    Scott, WE; Weegman, BP; Balamurugan, AN; Ferrer-Fabrega, J; Anazawa, T; Karatzas, T; Jie, T; Hammer, BE; Matsumoto, S; Avgoustiniatos, ES; Maynard, KS; Sutherland, DER; Hering, BJ; Papas, KK

    2014-01-01

    Background Porcine islet xenotransplantation is emerging as a potential alternative to allogeneic clinical islet transplantation. Optimization of porcine islet isolation in terms of yield and quality is critical for the success and cost-effectiveness of this approach. Incomplete pancreas distension and inhomogeneous enzyme distribution have been identified as key factors limiting viable islet yield per porcine pancreas. The aim of this study was to explore the utility of Magnetic Resonance Imaging (MRI) as a tool to investigate the homogeneity of enzyme delivery in porcine pancreata. Traditional and novel methods for enzyme delivery aimed at optimizing enzyme distribution were examined. Methods Pancreata were procured from Landrace pigs via en bloc viscerectomy. The main pancreatic duct was then cannulated with an 18G winged catheter and MRI performed at 1.5 T. Images were collected before and after ductal infusion of chilled MRI contrast agent (gadolinium) in physiological saline. Results Regions of the distal aspect of the splenic lobe and portions of the connecting lobe and bridge exhibited reduced delivery of solution when traditional methods of distension were utilized. Use of alternative methods of delivery (such as selective re-cannulation and distension of identified problem regions) resolved these issues, and MRI was successfully utilized as a guide and assessment tool for improved delivery. Conclusion Current methods of porcine pancreas distension do not consistently deliver enzyme uniformly or adequately to all regions of the pancreas. Novel methods of enzyme delivery should be investigated and implemented for improved enzyme distribution. MRI serves as a valuable tool to visualize and evaluate the efficacy of current and prospective methods of pancreas distension and enzyme delivery. PMID:24986758

  19. GAMA/H-ATLAS: a meta-analysis of SFR indicators - comprehensive measures of the SFR-M* relation and cosmic star formation history at z < 0.4

    NASA Astrophysics Data System (ADS)

    Davies, L. J. M.; Driver, S. P.; Robotham, A. S. G.; Grootes, M. W.; Popescu, C. C.; Tuffs, R. J.; Hopkins, A.; Alpaslan, M.; Andrews, S. K.; Bland-Hawthorn, J.; Bremer, M. N.; Brough, S.; Brown, M. J. I.; Cluver, M. E.; Croom, S.; da Cunha, E.; Dunne, L.; Lara-López, M. A.; Liske, J.; Loveday, J.; Moffett, A. J.; Owers, M.; Phillipps, S.; Sansom, A. E.; Taylor, E. N.; Michalowski, M. J.; Ibar, E.; Smith, M.; Bourne, N.

    2016-09-01

    We present a meta-analysis of star formation rate (SFR) indicators in the Galaxy And Mass Assembly (GAMA) survey, producing 12 different SFR metrics and determining the SFR-M* relation for each. We compare and contrast published methods to extract the SFR from each indicator, using a well-defined local sample of morphologically selected spiral galaxies, which excludes sources that potentially have large recent changes to their SFR. The different methods are found to yield SFR-M* relations with inconsistent slopes and normalizations, suggesting differences between calibration methods. The recovered SFR-M* relations also have a large range in scatter which, as SFRs of the targets may be considered constant over the different time-scales, suggests differences in the accuracy with which methods correct for attenuation in individual targets. We then recalibrate all SFR indicators to provide new, robust and consistent luminosity-to-SFR calibrations, finding that the most consistent slopes and normalizations of the SFR-M* relations are obtained when recalibrated using the radiation transfer method of Popescu et al. These new calibrations can be used to directly compare SFRs across different observations, epochs and galaxy populations. We then apply our calibrations to the GAMA II equatorial data set and explore the evolution of star formation in the local Universe. We determine the evolution of the normalization of the SFR-M* relation over 0 < z < 0.35, finding trends consistent with previous estimates at 0.3 < z < 1.2. We then provide the definitive z < 0.35 cosmic star formation history, SFR-M* relation and its evolution over the last 3 billion years.

  20. Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV

    NASA Astrophysics Data System (ADS)

    Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.

    2011-04-01

    When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach, as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest between-group significant differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, and thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions, were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study. This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between-group differences in specific binding.
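    The coupling idea, one shared clearance parameter fitted jointly across regions so that fewer total parameters are estimated, can be sketched with mono-exponential washout curves in place of the full SRTM operational equation. Region names, rate constants and noise level below are invented:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic washout curves for two regions sharing one clearance constant
t = np.linspace(0.0, 60.0, 61)               # minutes
rng = np.random.default_rng(0)
true_k2 = 0.1
curves = {"region_a": 5.0 * np.exp(-true_k2 * t) + 0.05 * rng.normal(size=t.size),
          "region_b": 2.0 * np.exp(-true_k2 * t) + 0.05 * rng.normal(size=t.size)}

def coupled_residuals(p):
    """p = [k2_shared, amp_a, amp_b]: one clearance constant is fitted
    jointly across regions (3 parameters instead of 4 independent ones)."""
    k2, amp_a, amp_b = p
    return np.concatenate([amp_a * np.exp(-k2 * t) - curves["region_a"],
                           amp_b * np.exp(-k2 * t) - curves["region_b"]])

fit = least_squares(coupled_residuals, x0=[0.05, 1.0, 1.0])
k2_hat, amp_a_hat, amp_b_hat = fit.x
```

    Because every region's data constrains the single shared clearance term, its estimate is steadier than in per-region fits, which is the variance reduction the abstract describes.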

  1. Anti-fungal activity of essential oil from Baeckea frutescens L. against Pleurotus ostreatus

    NASA Astrophysics Data System (ADS)

    Jemi, Renhart; Barus, Ade Irma; Nuwa, Sarinah; Luhan, Gimson

    2017-11-01

    Ujung Atap (Baeckea frutescens L.) is an herb whose leaves have a distinctive odor. The plant's essential oil contains bioactive compounds, but its anti-fungal activity against Pleurotus ostreatus had not previously been investigated. Essential oil from Ujung Atap leaves is an environmentally friendly natural preservative. This study consisted of distilling Ujung Atap leaves by the boiling method, determining the acid number and ester number of the essential oil, and testing anti-fungal activity against Pleurotus ostreatus. The anti-fungal activity data were analyzed by the probit method to determine the IC50. Distillation of Ujung Atap leaves produced an essential oil yield of 0.071%, and the average acid number and ester number of the oil were 5.24 and 12.15, respectively. At concentrations of 1000 µg/mL, 100 µg/mL, 75 µg/mL and 50 µg/mL the fungus was declared dead, while at concentrations of 25 µg/mL, 10 µg/mL and 5 µg/mL growth inhibition still occurred. Probit analysis yielded an IC50 of 35.48 µg/mL, meaning the essential oil of Ujung Atap leaves inhibits fungal growth by 50% at a concentration of 35.48 µg/mL.
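    A probit-based IC50 of the kind reported above can be sketched as a linear regression of probit-transformed response fractions on log10 concentration. The dose-response numbers below are hypothetical, not the study's raw data:

```python
import numpy as np
from scipy.stats import norm

def probit_ic50(conc_ug_ml, frac_inhibited):
    """Estimate IC50 by linear regression of probits on log10 concentration."""
    probits = norm.ppf(frac_inhibited)          # inverse-normal (probit) transform
    slope, intercept = np.polyfit(np.log10(conc_ug_ml), probits, 1)
    # IC50 is the concentration at which the probit equals 0 (50% response)
    return 10 ** (-intercept / slope)

# Hypothetical dose-response data (NOT the study's measurements)
conc = np.array([5.0, 10.0, 25.0, 50.0, 75.0, 100.0])   # µg/mL
frac = np.array([0.10, 0.22, 0.40, 0.58, 0.70, 0.79])   # fraction inhibited
ic50 = probit_ic50(conc, frac)
```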

  2. Automated cell disruption is a reliable and effective method of isolating RNA from fresh snap-frozen normal and malignant oral mucosa samples.

    PubMed

    Van der Vorst, Sébastien; Dekairelle, Anne-France; Irenge, Léonid; Hamoir, Marc; Robert, Annie; Gala, Jean-Luc

    2009-01-01

    This study compared automated vs. manual tissue grinding in terms of RNA yield obtained from oral mucosa biopsies. A total of 20 patients undergoing uvulectomy for sleep-related disorders and 10 patients undergoing biopsy for head and neck squamous cell carcinoma were enrolled in the study. Samples were collected, snap-frozen in liquid nitrogen, and divided into two parts of similar weight. Sample grinding was performed on one sample from each pair, either manually or using an automated cell disruptor. The performance and efficacy of each homogenization approach were compared in terms of total RNA yield (spectrophotometry, fluorometry), mRNA quantity [densitometry of specific TP53 amplicons and TP53 quantitative reverse-transcribed real-time PCR (qRT-PCR)], and mRNA quality (functional analysis of separated alleles in yeast). Although spectrophotometry and fluorometry results were comparable for both homogenization methods, TP53 expression values obtained by amplicon densitometry and qRT-PCR were significantly and consistently better after automated homogenization (p<0.005) for both uvula and tumor samples. Results of functional analysis of separated alleles in yeast were also better with the automated technique for tumor samples. Automated tissue homogenization appears to be a versatile, quick, and reliable method of cell disruption and is especially useful in the case of small malignant samples, which show unreliable results when processed by manual homogenization.

  3. Radioligand binding analysis of α2 adrenoceptors with [11C]yohimbine in brain in vivo: Extended Inhibition Plot correction for plasma protein binding.

    PubMed

    Phan, Jenny-Ann; Landau, Anne M; Jakobsen, Steen; Wong, Dean F; Gjedde, Albert

    2017-11-22

    We describe a novel method of kinetic analysis of radioligand binding to neuroreceptors in brain in vivo, here applied to noradrenaline receptors in rat brain. The method uses positron emission tomography (PET) of [11C]yohimbine binding in brain to quantify the density and affinity of α2 adrenoceptors under conditions of changing radioligand binding to plasma proteins. We obtained dynamic PET recordings from the brains of Sprague Dawley rats at baseline, followed by pharmacological challenge with unlabeled yohimbine (0.3 mg/kg). The challenge with unlabeled ligand failed to diminish radioligand accumulation in brain tissue, because blocking of radioligand binding to plasma proteins elevated the free fraction of the radioligand in plasma. We devised a method that graphically resolves the masking of unlabeled ligand binding by the increase of the radioligand free fraction in plasma. The Extended Inhibition Plot introduced here yielded an estimate of the volume of distribution of non-displaceable ligand in brain tissue that increased with the free fraction of the radioligand in plasma. The resulting binding potentials of the radioligand declined by 50-60% in the presence of unlabeled ligand. The kinetic unmasking of inhibited binding, reflected in the increase of the reference volume of distribution, yielded estimates of receptor saturation consistent with the binding of unlabeled ligand.

  4. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    PubMed

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study, a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination was low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
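    The "robust Poisson" idea, a Poisson working model for a binary outcome combined with sandwich (HC0) standard errors, can be sketched from first principles. The simulated cohort below is hypothetical; the exposure effect and sample size are invented:

```python
import numpy as np

def robust_poisson(X, y, iters=50):
    """Poisson regression by Newton-Raphson with HC0 'sandwich' standard
    errors; applied to a binary outcome this yields log relative risks."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)                     # score of the Poisson log-likelihood
        hess = X.T @ (X * mu[:, None])            # expected information
        beta = beta + np.linalg.solve(hess, grad)
    mu = np.exp(X @ beta)
    bread = np.linalg.inv(X.T @ (X * mu[:, None]))
    meat = X.T @ (X * ((y - mu) ** 2)[:, None])   # empirical score variance
    se = np.sqrt(np.diag(bread @ meat @ bread))   # sandwich standard errors
    return beta, se

# Simulated cohort (hypothetical numbers): exposure doubles the risk
rng = np.random.default_rng(42)
n = 4000
exposed = rng.integers(0, 2, n)
risk = np.where(exposed == 1, 0.4, 0.2)
y = (rng.random(n) < risk).astype(float)
X = np.column_stack([np.ones(n), exposed])

beta, se = robust_poisson(X, y)
rr = np.exp(beta[1])   # estimated relative risk, near the true value of 2
```

    The sandwich correction is what makes the Poisson model valid for binary data: the model-based Poisson variance would be wrong, but the empirical "meat" term repairs it.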

  5. [Reproductive function of the male rat after a flight on the Kosmos-1129 biosatellite].

    PubMed

    Serova, L V; Denisova, L A; Apanasenko, Z I; Kuznetsova, M A; Meĭzerov, E S

    1982-01-01

    Male rats that were flown for 18.5 days on Cosmos-1129 were mated postflight with intact females. Mating 5 days postflight, when the ejaculate consisted of spermatozoa that had been exposed to zero-g in the mature stage, yielded litters that lagged behind controls in growth and development during the first postnatal month. Mating 2.5-3 months postflight, when the ejaculate consisted of spermatozoa that had been exposed to zero-g at the stem-cell stage, yielded litters that did not differ from controls.

  6. An Evaluation of Partial Digestion Protocols for the Extraction and Measurement of Trace Metals of Environmental Concern in Marine and Estuarine Sediments

    NASA Astrophysics Data System (ADS)

    Winters, S. J.; Krahforst, C.; Sherman, L.; Kehm, K.

    2013-12-01

    As part of a broad study of the fate and transport of trace metals in estuarine sediments (Krahforst et al., 2013), the efficacy of commonly used partial digestion protocols, including ISO 11466 (treatment with aqua regia), EPA 3050B (nitric acid followed by H2O2) and a modified rock digestion method (the 'RD' method: H2O2 followed by nitric acid), was evaluated for two NIST SRM materials, marine sediment 2702 and estuarine sediment 1646a. Unlike so-called total sediment digestions, the methods studied in this work do not employ hydrofluoric acid and are thought to leave silicates substantially or wholly intact. These methods can in principle complement studies based on total digestions by providing information about trace metals in phases that are potentially more labile in the marine environment. Samples were digested in ~150 mg aliquots. Application of ISO 11466 and EPA 3050B followed published protocols except that digestions were carried out in trace-metal-clean 15 mL capped Teflon vessels in an Al block digester and, at the end of the procedure, the supernatant was decanted from undigested material following repeated centrifugation in 2% nitric acid. Digested solutions were analyzed for Al, V, Cr, Mn, Fe, Ni, Cu, Zn, As, Ag, Cd, Sn and Pb content by ICP-MS. All elements were analyzed in collision reaction cell mode to minimize isobaric interferences, except Cd and Ag, which were analyzed in standard mode. Instrument performance was monitored in-run by analyzing SRM 1643e and several quality-check standards. Two repeated digestions of SRM 2702 and SRM 1646a using EPA 3050B produced identical yields, within the standard deviation of repeated analyses (0-5%), for all analyzed elements except Cu, which varied by 30% for SRM 2702. The same was true for ISO 11466, although the standard deviation of repeated analyses for this digestion series tended to be larger (< ~15%). The RD method, which consists of pre-treatment with H2O2 followed by repeated treatments with nitric acid, produced the highest average yields for all elements, ranging from 50% of the Al in SRM 2702 up to ~100% for Cd and Pb. The higher recoveries for the RD method may indicate that pre-treatment with H2O2 more effectively removes organics compared with the conventional methods. Yields for ISO 11466 digestions typically range from 5-15% higher than those for EPA 3050B for all studied elements. Comparisons between the two sediments demonstrated that the acid-extractable fraction differs for several elements. For example, results from all three digestion methods confirm a ~40% difference in yield for Mn between SRM 2702 and SRM 1646a. Overall, the results indicate that yields for trace element analyses of marine and estuarine sediments resulting from partial digestion are sensitive to the digestion technique, and in particular to the methods employed for removal of organic phases. This work was supported by NSF Grant EAR-0922733 and a Maryland Sea Grant Program Development Award.

  7. Comparison of Methods for Determining the Mechanical Properties of Semiconducting Polymer Films for Stretchable Electronics.

    PubMed

    Rodriquez, Daniel; Kim, Jae-Han; Root, Samuel E; Fei, Zhuping; Boufflet, Pierre; Heeney, Martin; Kim, Taek-Soo; Lipomi, Darren J

    2017-03-15

    This paper describes a comparison of two characterization techniques for determining the mechanical properties of thin-film organic semiconductors for applications in soft electronics. In the first method, the film is supported by water (film-on-water, FOW), and a stress-strain curve is obtained using a direct tensile test. In the second method, the film is supported by an elastomer (film-on-elastomer, FOE), and is subjected to three tests to reconstruct the key features of the stress-strain curve: the buckling test (tensile modulus), the onset of buckling (yield point), and the crack-onset strain (strain at fracture). The specimens used for the comparison are four poly(3-hexylthiophene) (P3HT) samples of increasing molecular weight (Mn = 15, 40, 63, and 80 kDa). The methods produced qualitatively similar results for mechanical properties including the tensile modulus, the yield point, and the strain at fracture. The agreement was not quantitative because of differences in mode of loading (tension vs compression), strain rate, and processing between the two methods. Experimental results are corroborated by coarse-grained molecular dynamics simulations, which lead to the conclusion that in low molecular weight samples (Mn = 15 kDa), fracture occurs by chain pullout. Conversely, in high molecular weight samples (Mn > 25 kDa), entanglements concentrate the stress on a few chains; this concentration is consistent with chain scission as the dominant mode of fracture. Our results provide a basis for comparing mechanical properties that have been measured by these two techniques, and provide mechanistic insight into fracture modes in this class of materials.
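    In the film-on-elastomer route, the tensile modulus is typically extracted from the buckling wavelength via the standard Stafford-type buckling relation. A sketch with hypothetical numbers, where the film thickness, substrate modulus and Poisson ratios are assumed illustrative values, not figures from the paper:

```python
import numpy as np

def film_modulus_from_buckling(lambda_b_um, d_f_um, e_s_mpa,
                               nu_f=0.35, nu_s=0.5):
    """Film tensile modulus (MPa) from the buckling wavelength lambda_b of a
    stiff film of thickness d_f on a soft substrate of modulus E_s, using the
    standard buckling relation E_f_bar = 3 * E_s_bar * (lambda_b / 2*pi*d_f)**3.
    Poisson ratios are typical assumed values."""
    e_s_bar = e_s_mpa / (1.0 - nu_s ** 2)            # plane-strain substrate modulus
    e_f_bar = 3.0 * e_s_bar * (lambda_b_um / (2.0 * np.pi * d_f_um)) ** 3
    return e_f_bar * (1.0 - nu_f ** 2)               # back to tensile modulus

# Hypothetical numbers: 100 nm film, PDMS-like substrate (~1.8 MPa),
# buckling wavelength ~3 um
e_f = film_modulus_from_buckling(lambda_b_um=3.0, d_f_um=0.1, e_s_mpa=1.8)
```

    The cubic dependence on wavelength is why accurate wavelength measurement dominates the uncertainty of buckling metrology.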

  8. Breast mass segmentation in mammograms combining fuzzy c-means and active contours

    NASA Astrophysics Data System (ADS)

    Hmida, Marwa; Hamrouni, Kamel; Solaiman, Basel; Boussetta, Sana

    2018-04-01

    Segmentation of breast masses in mammograms is a challenging issue due to the nature of mammography and the characteristics of masses. In fact, mammographic images are poor in contrast, and breast masses have various shapes and densities with fuzzy, ill-defined borders. In this paper, we propose a method based on a modified Chan-Vese active contour model for mass segmentation in mammograms. We conduct the experiment on mass regions of interest (ROIs) extracted from the MIAS database. The proposed method consists mainly of three stages: first, the ROI is preprocessed to enhance the contrast. Next, two fuzzy membership maps are generated from the preprocessed ROI based on the fuzzy C-Means algorithm. These fuzzy membership maps are finally used to modify the energy of the Chan-Vese model and to perform the final segmentation. Experimental results indicate that the proposed method yields good mass segmentation results.
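    The membership-map stage can be sketched with a plain NumPy fuzzy C-Means on pixel intensities; the synthetic one-dimensional "ROI" below stands in for a real mammographic region:

```python
import numpy as np

def fcm_memberships(values, n_clusters=2, m=2.0, iters=50):
    """Fuzzy C-Means on a 1-D array of pixel intensities.
    Returns cluster centers and a membership map summing to 1 per pixel."""
    centers = np.quantile(values, np.linspace(0.25, 0.75, n_clusters))
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        d = np.abs(values[:, None] - centers[None, :]) + 1e-12   # pixel-center distances
        u = (1.0 / d ** p) / (1.0 / d ** p).sum(axis=1, keepdims=True)
        w = u ** m
        centers = (w * values[:, None]).sum(axis=0) / w.sum(axis=0)
    return centers, u

# Synthetic ROI intensities: dark background near 0.2, bright "mass" near 0.8
rng = np.random.default_rng(0)
roi = np.concatenate([rng.normal(0.2, 0.02, 200), rng.normal(0.8, 0.02, 200)])
centers, u = fcm_memberships(roi)
```

    In the paper's pipeline, maps like `u` then reweight the Chan-Vese energy rather than being thresholded directly, which is what lets the contour handle the fuzzy borders.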

  9. Esculin hydrolysis by Gram-positive bacteria. A rapid test and its comparison with other methods.

    PubMed

    Qadri, S M; Smith, J C; Zubairi, S; DeSilva, M I

    1981-01-01

    A number of bacteria hydrolyze esculin enzymatically to esculetin. This characteristic is used by taxonomists and clinical microbiologists in the differentiation and identification of bacteria, especially to distinguish Lancefield group D streptococci from non-group D organisms and Listeria monocytogenes from the morphologically similar Erysipelothrix rhusiopathiae and diphtheroids. Conventional methods for esculin hydrolysis require 4-48 h for completion. We developed and evaluated a medium which gives positive results more rapidly. The 2,330 isolates used in this study consisted of 1,680 esculin-positive and 650 esculin-negative organisms. The sensitivity and specificity of this method were compared with the PathoTec esculin hydrolysis strip and the procedure of Vaughn and Levine (VL). Of the 1,680 esculin-positive organisms, 97% gave positive reactions within 30 minutes with the rapid test, whereas PathoTec required 3-4 h of incubation for the same number of organisms to yield a positive reaction.

  10. High-performance liquid chromatographic determination of the beta2-selective adrenergic agonist fenoterol in human plasma after fluorescence derivatization.

    PubMed

    Kramer, S; Blaschke, G

    2001-02-10

    A sensitive high-performance liquid chromatographic method has been developed for the determination of the beta2-selective adrenergic agonist fenoterol in human plasma. To improve the sensitivity of the method, fenoterol was derivatized with N-(chloroformyl)-carbazole prior to HPLC analysis, yielding highly fluorescent derivatives. The assay involves protein precipitation with acetonitrile and liquid-liquid extraction of fenoterol from plasma with isobutanol under alkaline conditions, followed by derivatization with N-(chloroformyl)-carbazole. Reversed-phase liquid chromatographic determination of the fenoterol derivative was performed using a column-switching system consisting of a LiChrospher 100 RP-18 and a LiChrospher RP-Select B column with acetonitrile, methanol and water as the mobile phase. The limit of quantitation in human plasma was 376 pg fenoterol/ml. The method was successfully applied to the assay of fenoterol in patient plasma.

  11. Wavelength Selection Method Based on Differential Evolution for Precise Quantitative Analysis Using Terahertz Time-Domain Spectroscopy.

    PubMed

    Li, Zhi; Chen, Weidong; Lian, Feiyu; Ge, Hongyi; Guan, Aihong

    2017-12-01

    Quantitative analysis of component mixtures is an important application of terahertz time-domain spectroscopy (THz-TDS) and has attracted broad interest in recent research. Although the accuracy of quantitative analysis using THz-TDS is affected by a host of factors, wavelength selection from the sample's THz absorption spectrum is the most crucial one. The raw spectrum consists of the signal from the sample together with scattering and other random disturbances that can critically influence quantitative accuracy. For precise quantitative analysis using THz-TDS, the signal from the sample needs to be retained while the scattering and other noise sources are eliminated. In this paper, a novel wavelength selection method based on differential evolution (DE) is investigated. By performing quantitative experiments on a series of binary amino acid mixtures using THz-TDS, we demonstrate the efficacy of the DE-based wavelength selection method, which yields an error rate below 5%.
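    A generic version of DE-based wavelength selection can be sketched by letting SciPy's differential evolution assign a continuous weight to each wavelength and binarizing the weights. The synthetic spectra and informative-wavelength indices below are invented, and this is a sketch of the general idea, not the paper's exact algorithm:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Synthetic "spectra": concentration depends only on wavelengths 2 and 7
rng = np.random.default_rng(3)
n_samples, n_wl = 40, 12
spectra = rng.normal(size=(n_samples, n_wl))
conc = 1.5 * spectra[:, 2] - 0.8 * spectra[:, 7] + 0.05 * rng.normal(size=n_samples)

def objective(weights):
    """Fitness: leave-half-out prediction error using only wavelengths whose
    continuous DE weight exceeds 0.5 (a simple binarization)."""
    mask = weights > 0.5
    if not mask.any():
        return 1e6
    X_tr, X_te = spectra[:20][:, mask], spectra[20:][:, mask]
    coef, *_ = np.linalg.lstsq(X_tr, conc[:20], rcond=None)
    return float(np.mean((X_te @ coef - conc[20:]) ** 2))

result = differential_evolution(objective, bounds=[(0, 1)] * n_wl,
                                seed=3, maxiter=60, tol=1e-8, polish=False)
selected = np.flatnonzero(result.x > 0.5)
```

    The held-out error penalizes both missing informative wavelengths and carrying noisy ones, so the evolved mask concentrates on the signal-bearing channels.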

  12. Measurement of subcellular texture by optical Gabor-like filtering with a digital micromirror device

    PubMed Central

    Pasternack, Robert M.; Qian, Zhen; Zheng, Jing-Yi; Metaxas, Dimitris N.; White, Eileen; Boustany, Nada N.

    2010-01-01

    We demonstrate an optical Fourier processing method to quantify object texture arising from subcellular feature orientation within unstained living cells. Using a digital micromirror device as a Fourier spatial filter, we measured cellular responses to two-dimensional optical Gabor-like filters optimized to sense orientation of nonspherical particles, such as mitochondria, with a width around 0.45 μm. Our method showed significantly rounder structures within apoptosis-defective cells lacking the proapoptotic mitochondrial effectors Bax and Bak, when compared with Bax/Bak expressing cells functional for apoptosis, consistent with reported differences in mitochondrial shape in these cells. By decoupling spatial frequency resolution from image resolution, this method enables rapid analysis of nonspherical submicrometer scatterers in an under-sampled large field of view and yields spatially localized morphometric parameters that improve the quantitative assessment of biological function. PMID:18830354
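    The filtering principle, multiplying the image spectrum by an oriented Gaussian band-pass ("Gabor-like") mask and measuring the transmitted energy, can be sketched digitally. The filter parameters and the bar-shaped test object below are arbitrary illustrative choices:

```python
import numpy as np

def gabor_response(image, theta, f0=0.15, sigma_f=0.05):
    """Energy passed by a Gaussian band-pass filter centred at spatial
    frequency f0 along orientation theta, applied in the Fourier plane."""
    n = image.shape[0]
    fx = np.fft.fftfreq(n)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    # rotate frequency coordinates; keep the mirror lobe so the filter is real
    u = FX * np.cos(theta) + FY * np.sin(theta)
    v = -FX * np.sin(theta) + FY * np.cos(theta)
    g = np.exp(-((u - f0) ** 2 + v ** 2) / (2 * sigma_f ** 2)) \
      + np.exp(-((u + f0) ** 2 + v ** 2) / (2 * sigma_f ** 2))
    filtered = np.fft.ifft2(np.fft.fft2(image) * g)
    return float(np.sum(np.abs(filtered) ** 2))

# Elongated object: stronger mid-frequency content across its short axis
img = np.zeros((64, 64))
img[28:36, 16:48] = 1.0                            # bar, 8 x 32 pixels
r_across = gabor_response(img, theta=0.0)          # frequencies across the bar
r_along = gabor_response(img, theta=np.pi / 2)     # frequencies along the bar
```

    Comparing responses across orientations, as in `r_across` vs `r_along`, is what lets the method score particle elongation without resolving each particle in the image.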

  13. DIRBoost-an algorithm for boosting deformable image registration: application to lung CT intra-subject registration.

    PubMed

    Muenzing, Sascha E A; van Ginneken, Bram; Viergever, Max A; Pluim, Josien P W

    2014-04-01

    We introduce a boosting algorithm to improve on existing methods for deformable image registration (DIR). The proposed DIRBoost algorithm is inspired by the theory on hypothesis boosting, well known in the field of machine learning. DIRBoost utilizes a method for automatic registration error detection to obtain estimates of local registration quality. All areas detected as erroneously registered are subjected to boosting, i.e. undergo iterative registrations by employing boosting masks on both the fixed and moving image. We validated the DIRBoost algorithm on three different DIR methods (ANTS gSyn, NiftyReg, and DROP) on three independent reference datasets of pulmonary image scan pairs. DIRBoost reduced registration errors significantly and consistently on all reference datasets for each DIR algorithm, yielding an improvement of the registration accuracy by 5-34% depending on the dataset and the registration algorithm employed.

  14. A new method to derive electronegativity from resonant inelastic x-ray scattering.

    PubMed

    Carniato, S; Journel, L; Guillemin, R; Piancastelli, M N; Stolte, W C; Lindle, D W; Simon, M

    2012-10-14

    Electronegativity is a well-known property of atoms and substituent groups. Because there is no direct way to measure it, establishing a useful scale for electronegativity often entails correlating it to another chemical parameter; a wide variety of methods have been proposed over the past 80 years to do just that. This work reports a new approach that connects electronegativity to a spectroscopic parameter derived from resonant inelastic x-ray scattering. The new method is demonstrated using a series of chlorine-containing compounds, focusing on the Cl 2p(-1)LUMO(1) electronic states reached after Cl 1s → LUMO core excitation and subsequent KL radiative decay. Based on an electron-density analysis of the LUMOs, the relative weights of the Cl 2p(z) atomic orbital contributing to the Cl 2p(3/2) molecular spin-orbit components are shown to yield a linear electronegativity scale consistent with previous approaches.

  15. White constancy method for mobile displays

    NASA Astrophysics Data System (ADS)

    Yum, Ji Young; Park, Hyun Hee; Jang, Seul Ki; Lee, Jae Hyang; Kim, Jong Ho; Yi, Ji Young; Lee, Min Woo

    2014-03-01

    Consumer demand for the image quality of mobile devices is increasing as smartphones become widely used. For example, colors may be perceived differently when content is displayed under different illuminants: white displayed under an incandescent lamp is perceived as bluish, while the same content under LED light is perceived as yellowish. When the perceived white changes with the viewing illuminant, image quality is degraded. The objective of the proposed white constancy method is to maintain consistent output colors regardless of the illuminant. Human visual experiments were performed to analyze viewers' perceptual constancy: participants were asked to choose the displayed white under a variety of illuminants. The relationship between the illuminants and the colors selected as white is modeled by a mapping function based on the results of these experiments, and white constancy values for image control are determined from the predesigned functions. Experimental results indicate that the proposed method yields better image quality by keeping the display white consistent.

  16. Non-orthogonal internally contracted multi-configurational perturbation theory (NICPT): Dynamic electron correlation for large, compact active spaces

    NASA Astrophysics Data System (ADS)

    Kähler, Sven; Olsen, Jeppe

    2017-11-01

    A computational method is presented for systems that require high-level treatments of static and dynamic electron correlation but cannot be treated using conventional complete active space self-consistent field-based methods due to the required size of the active space. Our method introduces an efficient algorithm for perturbative dynamic correlation corrections for compact non-orthogonal MCSCF calculations. In the algorithm, biorthonormal expansions of orbitals and CI wave functions are used to reduce the scaling of the performance-determining step from quadratic to linear in the number of configurations. We describe a hierarchy of configuration spaces that can be chosen for the active space. Potential curves for the nitrogen molecule and the chromium dimer are compared for different configuration spaces. Already the most compact spaces yield qualitatively correct potentials that, with increasing size of the configuration spaces, systematically approach complete active space results.

  17. Pressure Flammability Thresholds in Oxygen of Selected Aerospace Materials

    NASA Technical Reports Server (NTRS)

    Hirsch, David; Williams, Jim; Harper, Susana; Beeson, Harold; Ruff, Gary; Pedley, Mike

    2010-01-01

    The experimental approach consisted of concentrating the testing in the flammability transition zone following the Bruceton up-and-down method. For attribute data, the method has been shown to be very repeatable and most efficient. Other methods for characterization of critical levels (Kärber and Probit) were also considered. The data yielded the upward limiting pressure index (ULPI), the pressure level at which approximately 50% of materials self-extinguish in a given environment. Parametric flammability thresholds other than oxygen concentration can be determined with the methodology proposed for evaluating the MOC when extinguishment occurs. In this case, a pressure threshold in 99.8% oxygen was determined with the methodology and found to be 0.4 to 0.9 psia for typical spacecraft materials. Correlation of flammability thresholds obtained with chemical, hot-wire, and other ignition sources will be conducted to provide recommendations for using alternate ignition sources to evaluate the flammability of aerospace materials.
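    The Bruceton procedure itself is simple to sketch: step the stimulus down after a "go" (burn) and up after a "no-go" (self-extinguishment), then estimate the 50% level from the tested levels. Everything below, including the pressure grid, the logistic response curve and the true 50% point, is hypothetical:

```python
import numpy as np

def p_of_burn(pressure, p50, spread=0.15):
    """Assumed probability-of-burning curve (logistic around p50); the real
    material response is unknown, so this is purely illustrative."""
    return 1.0 / (1.0 + np.exp(-(pressure - p50) / spread))

def up_and_down(levels, p50, n_trials=40, seed=7):
    """Bruceton up-and-down test on a fixed grid of pressure levels: step down
    after a burn ('go'), up after self-extinguishment ('no-go'). The 50% level
    is estimated as the mean of the tested levels (Dixon-Mood refines this)."""
    rng = np.random.default_rng(seed)
    i = len(levels) // 2
    tested = []
    for _ in range(n_trials):
        tested.append(levels[i])
        burned = rng.random() < p_of_burn(levels[i], p50)
        i = max(i - 1, 0) if burned else min(i + 1, len(levels) - 1)
    return float(np.mean(tested))

grid = np.linspace(0.2, 1.2, 11)    # hypothetical test pressures, psia
ulpi_est = up_and_down(grid, p50=0.65)
```

    Because each trial is placed near the previous response, the walk concentrates testing around the transition zone, which is what makes the design efficient for attribute data.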

  18. Mixture models reveal multiple positional bias types in RNA-Seq data and lead to accurate transcript concentration estimates.

    PubMed

    Tuerk, Andreas; Wiktorin, Gregor; Güler, Serhat

    2017-05-01

    Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (read "mixquare"), a transcript quantification method which uses a mixture of probability distributions to model and thereby neutralize the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization, resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress and PennSeq, state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R2 between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates with a relative increase in R2 between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM and 34% for eXpress. We further observe improved repeatability across laboratory sites with a relative increase in R2 between 8% and 44% and reduced standard deviation.
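    The core modelling idea, fitting a mixture of positional distributions by Expectation Maximization, can be sketched in one dimension with a Gaussian mixture over fragment positions. Mix2's actual distributions and training details differ; all numbers below are synthetic:

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=200):
    """EM for a one-dimensional Gaussian mixture; returns (weights, means, stds).
    A toy stand-in for modelling positional fragment bias as a mixture."""
    mu = np.quantile(x, np.linspace(0.2, 0.8, k))     # spread out initial means
    sigma = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each fragment position
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and spreads from responsibilities
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return w, mu, sigma

# Synthetic 5'-biased fragment start positions along a unit-length transcript
rng = np.random.default_rng(1)
positions = np.concatenate([rng.normal(0.25, 0.08, 600),
                            rng.normal(0.70, 0.10, 400)])
w, mu, sigma = em_gmm_1d(positions)
```

    In the full method the mixture is fitted jointly with transcript abundances, so the bias model and the quantification regularize each other.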

  19. Synthesis of 4-hydroxy-3-methylchalcone from Reimer-Tiemann reaction product and its antibacterial activity test

    NASA Astrophysics Data System (ADS)

    Hapsari, M.; Windarti, T.; Purbowatiningrum; Ngadiwiyana; Ismiyarto

    2018-04-01

    A 4-hydroxy-3-methylchalcone has been synthesized from 4-hydroxy-3-methylbenzaldehyde as the Reimer-Tiemann reaction product. This research consists of three steps: synthesis of 4-hydroxy-3-methylbenzaldehyde from ortho-cresol; synthesis of chalcone derivatives from 4-hydroxy-3-methylbenzaldehyde and from 4-hydroxy-3-methoxybenzaldehyde (vanillin) for comparison; and antibacterial activity testing of both chalcone derivatives against Escherichia coli (gram-negative) and Staphylococcus aureus (gram-positive) bacteria using the disc diffusion method. The Reimer-Tiemann reaction yielded the 4-hydroxy-3-methylbenzaldehyde compound as an orange solid in 43% yield with a melting point of 110-114°C. The 4-hydroxy-3-methylbenzaldehyde was then reacted with acetophenone under basic conditions to form the 4-hydroxy-3-methylchalcone compound as a yellow solid in 40% yield with a melting point of 83-86°C. The antibacterial activity of the 4-hydroxy-3-methylchalcone against the gram-positive bacterium Staphylococcus aureus is better than that of the 4-hydroxy-3-methoxychalcone.

  20. Sources and reactivities of marine-derived organic matter in coastal sediments as determined by alkaline CuO oxidation

    NASA Astrophysics Data System (ADS)

    Goñi, Miguel A.; Hedges, John I.

    1995-07-01

    Alkaline CuO oxidation of ubiquitous biochemicals such as proteins, polysaccharides, and lipids, yields specific products, including fatty acids, diacids, and carboxylated phenols. Oxidation of a variety of marine organisms, including macrophytes, phytoplankton, zooplankton, and bacteria, yields these CuO products in characteristic patterns that can often differentiate these biological sources. Sediments from Skan Bay (Unalaska Island, Alaska) display organic carbon and total nitrogen profiles which are consistent with three kinetically distinct pools of organic matter. The CuO fingerprints of these sediments distinguish these three pools at the molecular level, indicating a highly labile, fatty acid-rich surface organic layer of likely bacterial origin, intermediately reactive kelp debris and a background of phytoplankton remains that predominates at depth. The CuO method, which has been previously applied only to characterize cutin and lignin constituents of vascular land plants, also provides information on other types of abundant biochemicals, including those indicative of marine sources.

  1. A model-updating procedure to simulate piezoelectric transducers accurately.

    PubMed

    Piranda, B; Ballandras, S; Steichen, W; Hecart, B

    2001-09-01

    The use of numerical calculations based on finite element methods (FEM) has yielded significant improvements in the simulation and design of piezoelectric transducers utilized in acoustic imaging. However, the ultimate precision of such models is directly controlled by the accuracy of material characterization. The present work is dedicated to the development of a model-updating technique adapted to piezoelectric transducers. The updating process is applied using the experimental admittance of a given structure for which a finite element analysis is performed. The mathematical developments are reported and then applied to update the entries of a FEM of a two-layer structure (a PbZrTi-PZT-ridge glued on a backing) for which measurements were available. The efficiency of the proposed approach is demonstrated, yielding the definition of a new set of constants well adapted to predict the structure response accurately. Improvement of the proposed approach, consisting of updating the material coefficients not only on the admittance but also on the impedance data, is finally discussed.

  2. Discrimination and quantification of Fe and Ni abundances in Genesis solar wind implanted collectors using X-ray standing wave fluorescence yield depth profiling with internal referencing

    DOE PAGES

    Choi, Y.; Eng, P.; Stubbs, J.; ...

    2016-08-21

    In this paper, X-ray standing wave fluorescence yield depth profiling was used to determine the solar wind implanted Fe and Ni fluences in a silicon-on-sapphire (SoS) Genesis collector (60326). An internal reference standardization method was developed based on fluorescence from Si and Al in the collector materials. Measured Fe fluence agreed well with that measured previously by us on a sapphire collector (50722) as well as SIMS results by Jurewicz et al. Measured Ni fluence was higher than expected by a factor of two; neither instrumental errors nor solar wind fractionation effects are considered significant perturbations to this value. Impurity Ni within the epitaxial Si layer, if present, could explain the high Ni fluences and therefore needs further investigation. As they stand, these results are consistent with minor temporally-variable Fe and Ni fractionation on the timescale of a year.

  3. Modeling residence-time distribution in horizontal screw hydrolysis reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sievers, David A.; Stickel, Jonathan J.

    The dilute-acid thermochemical hydrolysis step used in the production of liquid fuels from lignocellulosic biomass requires precise residence-time control to achieve high monomeric sugar yields. Difficulty has been encountered reproducing residence times and yields when small batch reaction conditions are scaled up to larger pilot-scale horizontal auger-tube type continuous reactors. A commonly used naive model estimated residence times of 6.2-16.7 min, but measured mean times were actually 1.4-2.2 times the estimates. Here, this study investigated how reactor residence-time distribution (RTD) is affected by reactor characteristics and operational conditions, and developed a method to accurately predict the RTD based on key parameters. Screw speed, reactor physical dimensions, throughput rate, and process material density were identified as major factors affecting both the mean and standard deviation of RTDs. The general shape of RTDs was consistent with a constant value determined for skewness. The Peclet number quantified reactor plug-flow performance, which ranged between 20 and 357.
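
    The RTD statistics discussed above (mean, variance, skewness, and a Peclet number from the dimensionless variance) can be computed from a measured tracer-response curve. The sketch below uses a synthetic Gaussian pulse and the small-dispersion approximation sigma_theta^2 ≈ 2/Pe; it is an illustration of the standard moment analysis, not the paper's predictive model.

```python
import numpy as np

# Synthetic tracer-response curve E(t) (Gaussian pulse; illustrative only)
t = np.linspace(0.0, 30.0, 301)                 # time, min
dt = t[1] - t[0]
E = np.exp(-0.5 * ((t - 10.0) / 1.5) ** 2)
E /= E.sum() * dt                               # normalize so integral of E dt = 1

# Moments of the residence-time distribution
t_mean = (t * E).sum() * dt                     # mean residence time
var = ((t - t_mean) ** 2 * E).sum() * dt        # variance
skew = ((t - t_mean) ** 3 * E).sum() * dt / var ** 1.5  # skewness (shape)

# Peclet number from the dimensionless variance
# (small-dispersion approximation sigma_theta^2 ~ 2/Pe, near-plug flow)
sigma_theta2 = var / t_mean ** 2
Pe = 2.0 / sigma_theta2
```

    For this synthetic pulse the computed Pe falls inside the 20-357 range the study reports for its pilot reactor.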

  4. Materials Screening for the Discovery of New Half-Heuslers: Machine Learning versus ab Initio Methods.

    PubMed

    Legrain, Fleur; Carrete, Jesús; van Roekeghem, Ambroise; Madsen, Georg K H; Mingo, Natalio

    2018-01-18

    Machine learning (ML) is increasingly becoming a helpful tool in the search for novel functional compounds. Here we use classification via random forests to predict the stability of half-Heusler (HH) compounds, using only experimentally reported compounds as a training set. Cross-validation yields an excellent agreement between the fraction of compounds classified as stable and the actual fraction of truly stable compounds in the ICSD. The ML model is then employed to screen 71 178 different 1:1:1 compositions, yielding 481 likely stable candidates. The predicted stability of HH compounds from three previous high-throughput ab initio studies is critically analyzed from the perspective of the alternative ML approach. The incomplete consistency among the three separate ab initio studies and between them and the ML predictions suggests that additional factors beyond those considered by ab initio phase stability calculations might be determinant to the stability of the compounds. Such factors can include configurational entropies and quasiharmonic contributions.
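
    A classification-via-random-forests screen with cross-validation, as described above, can be sketched with scikit-learn. The descriptors and the planted stability rule below are synthetic stand-ins, not the paper's elemental features or its ICSD training set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in data: 8 composition-derived descriptors per candidate
# with a planted rule separating "stable" (1) from "unstable" (0) compounds.
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Random-forest classifier assessed by 5-fold cross-validation accuracy
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
mean_acc = scores.mean()
```

    In the actual workflow, the fitted model would then score the 71 178 unseen 1:1:1 compositions; cross-validation on the labeled set is what justifies trusting those out-of-sample predictions.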

  5. Discrimination and quantification of Fe and Ni abundances in Genesis solar wind implanted collectors using X-ray standing wave fluorescence yield depth profiling with internal referencing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Y.; Eng, P.; Stubbs, J.

    In this paper, X-ray standing wave fluorescence yield depth profiling was used to determine the solar wind implanted Fe and Ni fluences in a silicon-on-sapphire (SoS) Genesis collector (60326). An internal reference standardization method was developed based on fluorescence from Si and Al in the collector materials. Measured Fe fluence agreed well with that measured previously by us on a sapphire collector (50722) as well as SIMS results by Jurewicz et al. Measured Ni fluence was higher than expected by a factor of two; neither instrumental errors nor solar wind fractionation effects are considered significant perturbations to this value. Impurity Ni within the epitaxial Si layer, if present, could explain the high Ni fluences and therefore needs further investigation. As they stand, these results are consistent with minor temporally-variable Fe and Ni fractionation on the timescale of a year.

  6. Crystal Growth and Scintillation Properties of Ho-Doped Lu3Al5O12 Single Crystals

    NASA Astrophysics Data System (ADS)

    Sugiyama, Makoto; Yanagida, Takayuki; Fujimoto, Yutaka; Totsuka, Daisuke; Yokota, Yuui; Kurosawa, Shunsuke; Futami, Yoshisuke; Yoshikawa, Akira

    2012-10-01

    Crystals of 0.1, 0.5, 1, and 3% Ho-doped Lu3Al5O12 (Ho:LuAG) grown by the micro-pulling-down method were examined for their scintillation properties. At wavelengths longer than 300 nm, Ho:LuAG crystals demonstrated around 60% transparency with many absorption peaks attributed to Ho3+ 4f10-4f10 transitions. When excited by 241Am α-rays to obtain radioluminescence spectra, broad host emission and four sharp Ho3+ 4f10-4f10 emission peaks were detected in the visible region. Light yields and decay time profiles of the samples irradiated by 137Cs γ-rays were measured using R7600 photomultiplier tubes (Hamamatsu). Ho 0.5%:LuAG showed the highest light yield of 3100 ± 310 photons/MeV among the present samples. The decay time profiles were well reproduced by a two-component exponential approximation consisting of 0.5-1 μs and 3-6 μs components.

  7. Modeling residence-time distribution in horizontal screw hydrolysis reactors

    DOE PAGES

    Sievers, David A.; Stickel, Jonathan J.

    2017-10-12

    The dilute-acid thermochemical hydrolysis step used in the production of liquid fuels from lignocellulosic biomass requires precise residence-time control to achieve high monomeric sugar yields. Difficulty has been encountered reproducing residence times and yields when small batch reaction conditions are scaled up to larger pilot-scale horizontal auger-tube type continuous reactors. A commonly used naive model estimated residence times of 6.2-16.7 min, but measured mean times were actually 1.4-2.2 times the estimates. Here, this study investigated how reactor residence-time distribution (RTD) is affected by reactor characteristics and operational conditions, and developed a method to accurately predict the RTD based on key parameters. Screw speed, reactor physical dimensions, throughput rate, and process material density were identified as major factors affecting both the mean and standard deviation of RTDs. The general shape of RTDs was consistent with a constant value determined for skewness. The Peclet number quantified reactor plug-flow performance, which ranged between 20 and 357.

  8. Protein Dynamics from NMR: The Slowly Relaxing Local Structure Analysis Compared with Model-Free Analysis

    PubMed Central

    Meirovitch, Eva; Shapiro, Yury E.; Polimeno, Antonino; Freed, Jack H.

    2009-01-01

    15N-1H spin relaxation is a powerful method for deriving information on protein dynamics. The traditional method of data analysis is model-free (MF), where the global and local N-H motions are independent and the local geometry is simplified. The common MF analysis consists of fitting single-field data. The results are typically field-dependent, and multi-field data cannot be fit with standard fitting schemes. Cases where known functional dynamics has not been detected by MF were identified by us and others. Recently we applied to spin relaxation in proteins the Slowly Relaxing Local Structure (SRLS) approach which accounts rigorously for mode-mixing and general features of local geometry. SRLS was shown to yield MF in appropriate asymptotic limits. We found that the experimental spectral density corresponds quite well to the SRLS spectral density. The MF formulae are often used outside of their validity ranges, allowing small data sets to be force-fitted with good statistics but inaccurate best-fit parameters. This paper focuses on the mechanism of force-fitting and its implications. It is shown that MF force-fits the experimental data because mode-mixing, the rhombic symmetry of the local ordering and general features of local geometry are not accounted for. Combined multi-field multi-temperature data analyzed by MF may lead to the detection of incorrect phenomena, while conformational entropy derived from MF order parameters may be highly inaccurate. On the other hand, fitting to more appropriate models can yield consistent physically insightful information. This requires that the complexity of the theoretical spectral densities matches the integrity of the experimental data. As shown herein, the SRLS densities comply with this requirement. PMID:16821820

  9. Creativity of Junior High School’s Students in Designing Earthquake Resistant Buildings

    NASA Astrophysics Data System (ADS)

    Fitriani, D. N.; Kaniawati, I.; Ramalis, T. R.

    2017-09-01

    This research was stimulated by the fact that the territory of Indonesia is largely prone to earthquakes and by the issue that human resources and the disaster-response planning process are still insufficiently competent and not optimal. In addition, the construction of houses and public facilities has not been in accordance with earthquake-resistant building standards. This study aims to develop students’ creativity through earthquake-resistant building model projects. The research method used is a descriptive qualitative method. The sample is one 7th-grade class consisting of 32 students in a junior high school in Indonesia. Data were collected using observation sheets and student worksheets. Results showed that students’ creativity in designing earthquake-resistant building models varies greatly and yields new solutions to solve problems.

  10. Microhartree precision in density functional theory calculations

    NASA Astrophysics Data System (ADS)

    Gulans, Andris; Kozhevnikov, Anton; Draxl, Claudia

    2018-04-01

    To address ultimate precision in density functional theory calculations we employ the full-potential linearized augmented plane-wave + local-orbital (LAPW + lo) method and justify its usage as a benchmark method. LAPW + lo and two completely unrelated numerical approaches, the multiresolution analysis (MRA) and the linear combination of atomic orbitals, yield total energies of atoms with mean deviations of 0.9 and 0.2 μHa, respectively. Spectacular agreement with the MRA is reached also for total and atomization energies of the G2-1 set consisting of 55 molecules. With the example of α-iron we demonstrate the capability of LAPW + lo to reach μHa/atom precision also for periodic systems, which allows also for the distinction between the numerical precision and the accuracy of a given functional.

  11. Advanced lattice Boltzmann scheme for high-Reynolds-number magneto-hydrodynamic flows

    NASA Astrophysics Data System (ADS)

    De Rosis, Alessandro; Lévêque, Emmanuel; Chahine, Robert

    2018-06-01

    Is the lattice Boltzmann method suitable to investigate numerically high-Reynolds-number magneto-hydrodynamic (MHD) flows? It is shown that a standard approach based on the Bhatnagar-Gross-Krook (BGK) collision operator rapidly yields unstable simulations as the Reynolds number increases. In order to circumvent this limitation, it is here suggested to address the collision procedure in the space of central moments for the fluid dynamics. Therefore, a hybrid lattice Boltzmann scheme is introduced, which couples a central-moment scheme for the velocity with a BGK scheme for the space-and-time evolution of the magnetic field. This method outperforms the standard approach in terms of stability, allowing us to simulate high-Reynolds-number MHD flows with non-unitary Prandtl number while maintaining accuracy and physical consistency.
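
    For context, the standard BGK collision step that the paper identifies as unstable at high Reynolds numbers looks like this on a single D2Q9 lattice site. This is a minimal sketch of the textbook operator only; the MHD coupling and the central-moment collision of the hybrid scheme are not reproduced.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Second-order Maxwellian equilibrium populations (cs^2 = 1/3)."""
    cu = c @ u
    return rho * w * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * (u @ u))

def bgk_collide(f, tau):
    """Relax populations toward local equilibrium at rate 1/tau."""
    rho = f.sum()
    u = (f[:, None] * c).sum(axis=0) / rho
    return f - (f - equilibrium(rho, u)) / tau

# One collision on a perturbed site; BGK conserves mass and momentum exactly
rng = np.random.default_rng(3)
f0 = equilibrium(1.0, np.array([0.05, 0.0])) + 1e-3 * rng.normal(size=9)
f1 = bgk_collide(f0, tau=0.8)
```

    The instability at high Reynolds number arises because tau must approach 1/2 as viscosity decreases, amplifying non-hydrodynamic modes; collision in central-moment space damps those modes more selectively.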

  12. Evaluating direct medical expenditures estimation methods of adults using the medical expenditure panel survey: an example focusing on head and neck cancer.

    PubMed

    Coughlan, Diarmuid; Yeh, Susan T; O'Neill, Ciaran; Frick, Kevin D

    2014-01-01

    To inform policymakers of the importance of evaluating various methods for estimating the direct medical expenditures for a low-incidence condition, head and neck cancer (HNC). Four methods of estimation have been identified: 1) summing all health care expenditures, 2) estimating disease-specific expenditures consistent with an attribution approach, 3) estimating disease-specific expenditures by matching, and 4) estimating disease-specific expenditures by using a regression-based approach. A literature review of studies (2005-2012) that used the Medical Expenditure Panel Survey (MEPS) was undertaken to establish the most popular expenditure estimation methods. These methods were then applied to a sample of 120 respondents with HNC, derived from pooled data (2003-2008). The literature review shows that varying expenditure estimation methods have been used with MEPS, but no study compared and contrasted all four methods. Our estimates are reflective of the national treated prevalence of HNC. The upper-bound estimate of annual direct medical expenditures of adult respondents with HNC between 2003 and 2008 was $3.18 billion (in 2008 dollars). Comparable estimates arising from methods focusing on disease-specific and incremental expenditures were all lower in magnitude. The attribution method yielded annual expenditures of $1.41 billion, the matching method $1.56 billion, and the regression method $1.09 billion. This research demonstrates that variation exists across and within expenditure estimation methods applied to MEPS data. Despite concerns regarding aspects of reliability and consistency, reporting a combination of the four methods offers a degree of transparency and validity to estimating the likely range of annual direct medical expenditures of a condition. © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by International Society for Pharmacoeconomics and Outcomes Research (ISPOR). All rights reserved.
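
    Two of the four strategies, summing all expenditures of affected respondents (method 1) versus a regression-based incremental estimate (method 4), can be contrasted on synthetic data. Everything below is invented for illustration (not MEPS): the $20,000 condition-related spending and the lognormal background spending are assumptions, but the sketch shows why method 1 acts as an upper bound.

```python
import numpy as np

rng = np.random.default_rng(1)
n_case, n_ctrl = 120, 1000                      # 120 affected respondents, as in the sample
base = rng.lognormal(mean=8.0, sigma=0.5, size=n_case + n_ctrl)  # background spending
has_cond = np.r_[np.ones(n_case), np.zeros(n_ctrl)]
spend = base + has_cond * 20_000                # assumed $20k condition-related spending

# Method 1: sum ALL expenditures of affected respondents (upper bound)
total_method = spend[has_cond == 1].sum()

# Method 4: regress spending on a condition indicator; the coefficient is
# the incremental (condition-attributable) expenditure per respondent
X = np.column_stack([np.ones_like(spend), has_cond])
beta, *_ = np.linalg.lstsq(X, spend, rcond=None)
incremental_total = beta[1] * n_case
```

    The regression estimate excludes background spending the cases would have incurred anyway, so it lands below the all-expenditure total, mirroring the $1.09B-vs-$3.18B gap reported above.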

  13. A GIHS-based spectral preservation fusion method for remote sensing images using edge restored spectral modulation

    NASA Astrophysics Data System (ADS)

    Zhou, Xiran; Liu, Jun; Liu, Shuguang; Cao, Lei; Zhou, Qiming; Huang, Huawen

    2014-02-01

    High spatial resolution and spectral fidelity are basic standards for evaluating an image fusion algorithm. Numerous fusion methods for remote sensing images have been developed. Some of these methods are based on the intensity-hue-saturation (IHS) transform and the generalized IHS (GIHS), which may cause serious spectral distortion. Spectral distortion in the GIHS is proven to result from changes in saturation during fusion. Therefore, reducing such changes can achieve high spectral fidelity. A GIHS-based spectral preservation fusion method that can theoretically reduce spectral distortion is proposed in this study. The proposed algorithm consists of two steps. The first step is spectral modulation (SM), which uses the Gaussian function to extract spatial details and conduct SM of multispectral (MS) images. This method yields a desirable visual effect without requiring histogram matching between the panchromatic image and the intensity of the MS image. The second step uses the Gaussian convolution function to restore lost edge details during SM. The proposed method is proven effective and shown to provide better results compared with other GIHS-based methods.
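
    The GIHS step the method builds on, extracting spatial detail from the panchromatic band and injecting it identically into every multispectral band, can be sketched as follows. The arrays are toy data, not real imagery, and the proposed Gaussian-based SM and edge-restoration refinements are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
ms = rng.random((3, 64, 64))                        # toy multispectral bands
pan = ms.mean(axis=0) + 0.1 * rng.random((64, 64))  # toy high-res pan band

intensity = ms.mean(axis=0)   # GIHS intensity = average of the MS bands
detail = pan - intensity      # spatial detail missing from the MS image
fused = ms + detail           # inject the same detail into every band
```

    Because the identical detail term is added to each band, the band differences (and hence hue) are preserved while the fused intensity matches the pan band exactly; saturation changes during this step are what the proposed SM stage is designed to suppress.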

  14. A Palladium Iodide-Catalyzed Oxidative Aminocarbonylation-Heterocyclization Approach to Functionalized Benzimidazoimidazoles.

    PubMed

    Veltri, Lucia; Giofrè, Salvatore V; Devo, Perry; Romeo, Roberto; Dobbs, Adrian P; Gabriele, Bartolo

    2018-02-02

    A novel carbonylative approach to the synthesis of functionalized 1H-benzo[d]imidazo[1,2-a]imidazoles is presented. The method consists of the oxidative aminocarbonylation of N-substituted-1-(prop-2-yn-1-yl)-1H-benzo[d]imidazol-2-amines, carried out in the presence of secondary nucleophilic amines, to give the corresponding alkynylamide intermediates, followed by in situ conjugated addition and double-bond isomerization, to give 2-(1-alkyl-1H-benzo[d]imidazo[1,2-a]imidazol-2-yl)acetamides. Products were obtained in good to excellent yields (64-96%) and high turnover numbers (192-288 mol of product per mol of catalyst) under relatively mild conditions (100 °C under 20 atm of a 4:1 mixture of CO-air), using a simple catalytic system consisting of PdI2 (0.33 mol%) in conjunction with KI (0.33 equiv).

  15. A microscopic derivation of nuclear collective rotation-vibration model and its application to nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gulshani, P., E-mail: matlap@bell.net

    We derive a microscopic version of the successful phenomenological hydrodynamic model of Bohr-Davydov-Faessler-Greiner for collective rotation-vibration motion of an axially symmetric deformed nucleus. The derivation is not limited to small oscillation amplitude. The nuclear Schrödinger equation is canonically transformed to collective co-ordinates, which is then linearized using a constrained variational method. The associated constraints are imposed on the wavefunction rather than on the particle co-ordinates. The approach yields three self-consistent, time-reversal invariant, cranking-type Schrödinger equations for the rotation-vibration and intrinsic motions, and a self-consistency equation. For harmonic oscillator mean-field potentials, these equations are solved in closed forms for excitation energy, cut-off angular momentum, and other nuclear properties for the ground-state rotational band in some deformed nuclei. The results are compared with measured data.

  16. Next-generation sequencing yields the complete mitochondrial genome of the flathead mullet, Mugil cephalus cryptic species in East Australia (Teleostei: Mugilidae).

    PubMed

    Shen, Kang-Ning; Chen, Ching-Hung; Hsiao, Chung-Der; Durand, Jean-Dominique

    2016-09-01

    In this study, the complete mitogenome sequence of a cryptic species from East Australia (Mugil sp. H) belonging to the worldwide Mugil cephalus species complex (Teleostei: Mugilidae) has been sequenced by a next-generation sequencing method. The assembled mitogenome, consisting of 16,845 bp, had the typical vertebrate mitochondrial gene arrangement, including 13 protein-coding genes, 22 transfer RNA genes, 2 ribosomal RNA genes and a non-coding control region (D-loop). The D-loop is 1067 bp in length and is located between tRNA-Pro and tRNA-Phe. The overall base composition of East Australia M. cephalus is 28.4% A, 29.3% C, 15.4% G and 26.9% T. The complete mitogenome may provide essential and important DNA molecular data for further phylogenetic and evolutionary analysis of the flathead mullet species complex.

  17. Onset of η-nuclear binding in a pionless EFT approach

    NASA Astrophysics Data System (ADS)

    Barnea, N.; Bazak, B.; Friedman, E.; Gal, A.

    2017-08-01

    ηNNN and ηNNNN bound states are explored in stochastic variational method (SVM) calculations within a pionless effective field theory (EFT) approach at leading order. The theoretical input consists of regulated NN and NNN contact terms, and a regulated energy-dependent ηN contact term derived from coupled-channel models of the N*(1535) nucleon resonance. A self-consistency procedure is applied to deal with the energy dependence of the ηN subthreshold input, resulting in a weak dependence of the calculated η-nuclear binding energies on the EFT regulator. It is found, in terms of the ηN scattering length aηN, that the onset of binding η 3He requires a minimal value of Re aηN close to 1 fm, yielding then a few MeV η binding in η 4He. The onset of binding η 4He requires a lower value of Re aηN, but exceeding 0.7 fm.

  18. Influence of spacing and depth of planting to growth and yield of arrowroot (Marantha arundinacea)

    NASA Astrophysics Data System (ADS)

    Qodliyati, M.; Supriyono; Nyoto, S.

    2018-03-01

    This study was conducted to determine the optimum spacing and depth of planting for the growth and yield of arrowroot. The research was conducted at the Experimental Field of the Agriculture Faculty, Sebelas Maret University, in Jumantono, Karanganyar. It used a Randomized Complete Block Design (RCBD) with two treatment factors, plant spacing and depth of planting. Plant spacing consisted of 3 levels: J1 (30×30 cm), J2 (30×40 cm) and J3 (30×50 cm). Depth of planting consisted of 2 levels: K1 (10 cm) and K2 (20 cm). Data were analyzed by DMRT (Duncan’s Multiple Range Test) at the 5% significance level. The results showed that a spacing of 30×50 cm gave significantly greater plant height, tuber (common name for rhizome) length, and tuber weight per plant. The depth of 20 cm gave higher values for the number of tubers per plant and tuber weight per plot. The two treatments had no significant interaction on growth and yield.

  19. Fall rice straw management and winter flooding treatment effects on a subsequent soybean crop

    USGS Publications Warehouse

    Anders, M.M.; Windham, T.E.; McNew, R.W.; Reinecke, K.J.

    2005-01-01

    The effects of fall rice (Oryza sativa L.) straw management and winter flooding on the yield and profitability of subsequent irrigated and dryland soybean [Glycine max (L.) Merr.] crops were studied for 3 years. Rice straw treatments consisted of disking, rolling, or standing stubble. Winter flooding treatments consisted of maintaining a minimum water depth of 10 cm by pumping water when necessary, impounding available rainfall, and draining fields to prevent flooding. The following soybean crop was managed as a conventional-tillage system or no-till system. Tillage system treatments were further divided into irrigated or dryland. Results indicated that there were no significant effects from either fall rice straw management or winter flooding treatments on soybean seed yields. Soybean seed yields for the conventional-tillage system were significantly greater than those for the no-till system for the first 2 years and not different in the third year. Irrigated soybean seed yields were significantly greater than those from dryland plots for all years. Net economic returns averaged over the 3 years were greatest ($390.00 ha-1) from the irrigated no-till system.

  20. Purification and Characterization of Axial Filaments from Treponema phagedenis Biotype reiterii (the Reiter Treponeme)

    PubMed Central

    Bharier, Michael; Allis, David

    1974-01-01

    Axial filaments have been purified from Treponema phagedenis biotype reiterii (the Reiter treponeme) and partially characterized chemically. The preparations consist largely of protein but also contain small amounts of hexose (3%). Filaments dissociate to subunits in acid, alkali, urea, guanidine, and various detergents. Amino acid analyses show an overall resemblance to other spirochetal axial filaments and to bacterial flagella. Dissociated filaments migrate as a single band upon acrylamide gel electrophoresis at pH 4.3 (in 4 M urea and 10⁻³ M ethylenediaminetetraacetate) and at pH 12, but in sodium dodecyl sulfate gels, three bands are obtained under a wide variety of conditions. Two of these bands migrate very close together, with molecular weights of 33,000 ± 500. The other band has a molecular weight of 36,500 ± 500. Analysis of axial filaments by the dansyl chloride method yields both methionine and glutamic acid as amino terminal end groups. Sedimentation equilibrium measurements on dissociated axial filaments in 7 M guanidine hydrochloride yield plots of log C against r² which vary with the speed and initial protein concentration used. Molecular weight values calculated from these plots are consistent with a model in which axial filament subunits are heterogeneous with respect to molecular weight in the approximate range of 32,000 to 36,000. PMID:4436261

  1. Comparative Performance of Patient Health Questionnaire-9 and Edinburgh Postnatal Depression Scale for Screening Antepartum Depression

    PubMed Central

    Zhong, Qiuyue; Gelaye, Bizu; Rondon, Marta; Sánchez, Sixto E; García, Pedro J; Sánchez, Elena; Barrios, Yasmin V; Simon, Gregory E.; Henderson, David C.; Cripe, Swee May; Williams, Michelle A

    2014-01-01

    Objective We sought to evaluate the psychometric properties of two widely used screening scales: the Patient Health Questionnaire (PHQ-9) and Edinburgh Postnatal Depression Scale (EPDS) among pregnant Peruvian women. Methods This cross-sectional study included 1,517 women receiving prenatal care from February 2012 to March 2013. A structured interview was used to collect data using PHQ-9 and EPDS. We examined reliability, construct and concurrent validity between two scales using internal consistency indices, factor structures, correlations, and Cohen’s kappa. Results Both scales had good internal consistency (Cronbach’s alpha > 0.8). Correlation between PHQ-9 and EPDS scores was fair (rho=0.52). Based on exploratory factor analysis (EFA), both scales yielded a two-factor structure. EFA including all items from PHQ-9 and EPDS yielded four factors, namely, “somatization”, “depression and suicidal ideation”, “anxiety and depression”, and “anhedonia”. The agreement between the two scales was generally fair at different cutoff scores with the highest Cohen’s kappa being 0.46. Conclusions Both the PHQ-9 and EPDS are reliable and valid scales for antepartum depression assessment. The PHQ-9 captures somatic symptoms, while EPDS detects depressive symptoms comorbid with anxiety during early pregnancy. Our findings suggest simultaneous administration of both scales may improve identification of antepartum depressive disorders in clinical settings. PMID:24766996
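
    The reliability and agreement statistics used above (Cronbach's alpha and Cohen's kappa) are straightforward to compute. The sketch below runs them on synthetic item scores; the data are invented stand-ins, not the Peruvian cohort, and the 9-item scale is only loosely modeled on PHQ-9.

```python
import numpy as np

def cronbach_alpha(items):
    """Internal consistency of an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

def cohen_kappa(a, b):
    """Chance-corrected agreement between two binary screening outcomes."""
    a, b = np.asarray(a), np.asarray(b)
    po = (a == b).mean()                                        # observed
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())  # by chance
    return (po - pe) / (1 - pe)

# Synthetic correlated item scores standing in for questionnaire data
rng = np.random.default_rng(7)
latent = rng.normal(size=200)                          # "true" severity
items = latent[:, None] + 0.5 * rng.normal(size=(200, 9))
alpha = cronbach_alpha(items)                          # internal consistency

screen_a = (items.sum(axis=1) > 0).astype(int)         # full-scale screen
screen_b = (items[:, :5].sum(axis=1) > 0).astype(int)  # partial-scale screen
kappa = cohen_kappa(screen_a, screen_b)
```

    Kappa discounts the agreement two screens would reach by chance alone, which is why the study reports it alongside raw cutoff-based agreement.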

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, J.P., E-mail: chengjp@zju.edu.cn; Chen, X.; Ma, R.

    Flower-like Co3O4 hierarchical microspheres composed of self-assembled porous nanoplates have been prepared by a two-step method without employing templates. The first step involves the synthesis of flower-like Co(OH)2 microspheres by a solution route at low temperatures. The second step includes the calcination of the as-prepared Co(OH)2 microspheres at 200 °C for 1 h, causing their decomposition to form porous Co3O4 microspheres without destruction of their original morphology. The samples were characterized by scanning electron microscopy, transmission electron microscopy, X-ray diffractometry and Fourier transform infrared spectroscopy. The effects of some experimental factors, including solution temperature and surfactant, on the morphologies of the final products have been investigated. The magnetic properties of the Co3O4 microspheres were also investigated. Graphical Abstract: Flower-like Co3O4 microspheres are composed of self-assembled nanoplates and these nanoplates appear to be closely packed in the microspheres. The nanoplates consist of a large number of nanocrystallites less than 5 nm in size with a porous structure, in which the connection between nanocrystallites is random. Research Highlights: Flower-like Co3O4 hierarchical microspheres composed of self-assembled porous nanoplates have been prepared by a two-step method without employing templates. Layered Co(OH)2 microspheres were prepared with an appropriate approach at low temperatures with a 1 h reaction. Calcination caused Co(OH)2 decomposition to form porous Co3O4 microspheres without destruction of their original morphology.

  3. Catalytic conversion of cellulose to liquid hydrocarbon fuels by progressive removal of oxygen to facilitate separation processes and achieve high selectivities

    DOEpatents

    Dumesic, James A.; Ruiz, Juan Carlos Serrano; West, Ryan M.

    2015-06-30

    Described is a method to make liquid chemicals. The method includes deconstructing cellulose to yield a product mixture comprising levulinic acid and formic acid, converting the levulinic acid to .gamma.-valerolactone, and converting the .gamma.-valerolactone to pentanoic acid. Alternatively, the .gamma.-valerolactone can be converted to a mixture of n-butenes. The pentanoic acid can be decarboxylated to yield 1-butene or ketonized to yield 5-nonanone. The 5-nonanone can be hydrodeoxygenated to yield nonane, or 5-nonanone can be reduced to yield 5-nonanol. The 5-nonanol can be dehydrated to yield nonene, which can be dimerized to yield a mixture of C.sub.9 and C.sub.18 olefins, which can be hydrogenated to yield a mixture of alkanes.

  4. Catalytic conversion of cellulose to liquid hydrocarbon fuels by progressive removal of oxygen to facilitate separation processes and achieve high selectivities

    DOEpatents

    Dumesic, James A [Verona, WI; Ruiz, Juan Carlos Serrano [Madison, WI; West, Ryan M [Madison, WI

    2014-01-07

    Described is a method to make liquid chemicals. The method includes deconstructing cellulose to yield a product mixture comprising levulinic acid and formic acid, converting the levulinic acid to .gamma.-valerolactone, and converting the .gamma.-valerolactone to pentanoic acid. Alternatively, the .gamma.-valerolactone can be converted to a mixture of n-butenes. The pentanoic acid can be decarboxylated to yield 1-butene or ketonized to yield 5-nonanone. The 5-nonanone can be hydrodeoxygenated to yield nonane, or 5-nonanone can be reduced to yield 5-nonanol. The 5-nonanol can be dehydrated to yield nonene, which can be dimerized to yield a mixture of C.sub.9 and C.sub.18 olefins, which can be hydrogenated to yield a mixture of alkanes.

  5. Coupling constant for N*(1535)N{rho}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie Jujun; Graduate University of Chinese Academy of Sciences, Beijing 100049; Wilkin, Colin

    2008-05-15

    The value of the N*(1535)N{rho} coupling constant g{sub N*N{rho}} derived from the N*(1535){yields}N{rho}{yields}N{pi}{pi} decay is compared with that deduced from the radiative decay N*(1535){yields}N{gamma} using the vector-meson-dominance model. On the basis of an effective Lagrangian approach, we show that the values of g{sub N*N{rho}} extracted from the available experimental data on the two decays are consistent, though the error bars are rather large.

  6. Magnetic resonance imaging: a tool to monitor and optimize enzyme distribution during porcine pancreas distention for islet isolation.

    PubMed

    Scott, William E; Weegman, Bradley P; Balamurugan, Appakalai N; Ferrer-Fabrega, Joana; Anazawa, Takayuki; Karatzas, Theodore; Jie, Tun; Hammer, Bruce E; Matsumoto, Shuchiro; Avgoustiniatos, Efstathios S; Maynard, Kristen S; Sutherland, David E R; Hering, Bernhard J; Papas, Klearchos K

    2014-01-01

    Porcine islet xenotransplantation is emerging as a potential alternative for allogeneic clinical islet transplantation. Optimization of porcine islet isolation in terms of yield and quality is critical for the success and cost-effectiveness of this approach. Incomplete pancreas distention and inhomogeneous enzyme distribution have been identified as key factors for limiting viable islet yield per porcine pancreas. The aim of this study was to explore the utility of magnetic resonance imaging (MRI) as a tool to investigate the homogeneity of enzyme delivery in porcine pancreata. Traditional and novel methods for enzyme delivery aimed at optimizing enzyme distribution were examined. Pancreata were procured from Landrace pigs via en bloc viscerectomy. The main pancreatic duct was then cannulated with an 18-g winged catheter and MRI performed at 1.5-T. Images were collected before and after ductal infusion of chilled MRI contrast agent (gadolinium) in physiological saline. Regions of the distal aspect of the splenic lobe and portions of the connecting lobe and bridge exhibited reduced delivery of solution when traditional methods of distention were utilized. Use of alternative methods of delivery (such as selective re-cannulation and distention of identified problem regions) resolved these issues, and MRI was successfully utilized as a guide and assessment tool for improved delivery. Current methods of porcine pancreas distention do not consistently deliver enzyme uniformly or adequately to all regions of the pancreas. Novel methods of enzyme delivery should be investigated and implemented for improved enzyme distribution. MRI serves as a valuable tool to visualize and evaluate the efficacy of current and prospective methods of pancreas distention and enzyme delivery. © 2014 John Wiley & Sons A/S Published by John Wiley & Sons Ltd.

  7. Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.

    2015-12-01

    Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces and it is necessary to ensure physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high order numerical methods. This is computationally expensive. We demonstrate here the use of a novel control volume - finite element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability. Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher order or discontinuous elements where existing approaches are often excessively diffusive.
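The conservative Galerkin projection idea in the last part of this abstract can be illustrated in one dimension for a piecewise-constant control-volume field. This is a simplified sketch under stated assumptions (1D mesh, lowest-order elements); the paper's method works on unstructured meshes with discontinuous, higher-order elements:

```python
import numpy as np

def conservative_project(x_old, u_old, x_new):
    """Project a piecewise-constant CV field (values u_old on cells bounded
    by edges x_old) onto new cell edges x_new by overlap-weighted averaging,
    the lowest-order Galerkin (L2) projection. Unlike consistent
    interpolation, the integral of the field is conserved exactly."""
    u_new = np.zeros(len(x_new) - 1)
    for j in range(len(x_new) - 1):
        a, b = x_new[j], x_new[j + 1]
        acc = 0.0
        for i in range(len(x_old) - 1):
            lo = max(a, x_old[i])
            hi = min(b, x_old[i + 1])
            if hi > lo:                      # old cell i overlaps new cell j
                acc += u_old[i] * (hi - lo)
        u_new[j] = acc / (b - a)             # new cell average
    return u_new
```

Conservation (the integral of the saturation field is unchanged by the remap) is exactly the property that matters when adapting the mesh many times during a fingering simulation.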

  8. STRONG LENS TIME DELAY CHALLENGE. II. RESULTS OF TDC1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Kai; Treu, Tommaso; Marshall, Phil

    2015-02-10

    We present the results of the first strong lens time delay challenge. The motivation, experimental design, and entry level challenge are described in a companion paper. This paper presents the main challenge, TDC1, which consisted of analyzing thousands of simulated light curves blindly. The observational properties of the light curves cover the range in quality obtained for current targeted efforts (e.g., COSMOGRAIL) and expected from future synoptic surveys (e.g., LSST), and include simulated systematic errors. Seven teams participated in TDC1, submitting results from 78 different method variants. After describing each method, we compute and analyze basic statistics measuring accuracy (or bias) A, goodness of fit χ{sup 2}, precision P, and success rate f. For some methods we identify outliers as an important issue. Other methods show that outliers can be controlled via visual inspection or conservative quality control. Several methods are competitive, i.e., give |A| < 0.03, P < 0.03, and χ{sup 2} < 1.5, with some of the methods already reaching sub-percent accuracy. The fraction of light curves yielding a time delay measurement is typically in the range f = 20%-40%. It depends strongly on the quality of the data: COSMOGRAIL-quality cadence and light curve lengths yield significantly higher f than does sparser sampling. Taking the results of TDC1 at face value, we estimate that LSST should provide around 400 robust time-delay measurements, each with P < 0.03 and |A| < 0.01, comparable to current lens modeling uncertainties. In terms of observing strategies, we find that A and f depend mostly on season length, while P depends mostly on cadence and campaign duration.
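The three summary statistics quoted above (A, P, χ²) can be sketched per submission from the true delays, the estimated delays and the reported uncertainties. These are plausible textbook definitions in the spirit of the challenge, not necessarily the exact TDC1 formulas from the companion paper:

```python
import numpy as np

def tdc_metrics(dt_true, dt_est, dt_err):
    """Accuracy (fractional bias) A, claimed precision P and reduced
    chi-square for a set of time-delay estimates."""
    dt_true = np.asarray(dt_true, float)
    dt_est = np.asarray(dt_est, float)
    dt_err = np.asarray(dt_err, float)
    A = np.mean((dt_est - dt_true) / dt_true)            # bias
    P = np.mean(dt_err / np.abs(dt_true))                # precision
    chi2 = np.mean(((dt_est - dt_true) / dt_err) ** 2)   # goodness of fit
    return A, P, chi2
```

The success rate f would then simply be the fraction of light curves for which a submission returned an estimate at all.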

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dholabhai, Pratik P., E-mail: pratik.dholabhai@asu.ed; Anwar, Shahriar, E-mail: anwar@asu.ed; Adams, James B., E-mail: jim.adams@asu.ed

    A kinetic lattice Monte Carlo (KLMC) model is developed for investigating oxygen vacancy diffusion in praseodymium-doped ceria. The current approach uses a database of activation energies for oxygen vacancy migration, calculated using first-principles methods, for various migration pathways in praseodymium-doped ceria. Since the first-principles calculations revealed significant vacancy-vacancy repulsion, we investigate the importance of that effect by conducting simulations with and without a repulsive interaction. Initially, as dopant concentrations increase, vacancy concentration and thus conductivity increases. However, at higher concentrations, vacancies interfere and repel one another, and dopants trap vacancies, creating a 'traffic jam' that decreases conductivity, which is consistent with the experimental findings. The modeled effective activation energy for vacancy migration slightly increased with increasing dopant concentration, in qualitative agreement with experiment. The current methodology, comprising a blend of first-principles calculations and a KLMC model, provides a very powerful fundamental tool for predicting the optimal dopant concentration in ceria-related materials. -- Graphical abstract: Ionic conductivity in praseodymium-doped ceria as a function of dopant concentration calculated using the kinetic lattice Monte Carlo vacancy-repelling model, which predicts the optimal composition for achieving maximum conductivity. Research highlights: {yields} KLMC method calculates the accurate time-dependent diffusion of oxygen vacancies. {yields} KLMC-VR model predicts a dopant concentration of {approx}15-20% to be optimal in PDC. {yields} At higher dopant concentration, vacancies interfere and repel one another, and dopants trap vacancies. {yields} Activation energy for vacancy migration increases as a function of dopant content.
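The core KLMC step, choosing a hop from Arrhenius rates and advancing a physical clock, can be sketched as follows. This is schematic only (the barrier values, temperature and attempt frequency are illustrative, and the paper's model additionally includes vacancy-vacancy repulsion and dopant trapping):

```python
import math
import random

def klmc_time(barriers_eV, T=800.0, n_steps=1000, nu0=1e13, seed=1):
    """Minimal kinetic Monte Carlo loop: each candidate hop i gets an
    Arrhenius rate k_i = nu0*exp(-Ea_i/kB*T); a hop is selected with
    probability k_i/k_tot and the clock advances by an exponential
    waiting time -ln(u)/k_tot. Returns elapsed time and hop counts."""
    kB = 8.617333e-5                                   # Boltzmann const, eV/K
    rng = random.Random(seed)
    rates = [nu0 * math.exp(-Ea / (kB * T)) for Ea in barriers_eV]
    ktot = sum(rates)
    t, counts = 0.0, [0] * len(rates)
    for _ in range(n_steps):
        r = rng.random() * ktot
        acc = 0.0
        for i, k in enumerate(rates):
            acc += k
            if r <= acc:
                counts[i] += 1                         # hop i selected
                break
        t += -math.log(1.0 - rng.random()) / ktot      # advance the clock
    return t, counts
```

Because the clock advances by physically meaningful waiting times, KLMC gives time-dependent diffusion (and hence conductivity) rather than just equilibrium averages.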

  10. Structured Syncope Care Pathways Based on Lean Six Sigma Methodology Optimises Resource Use with Shorter Time to Diagnosis and Increased Diagnostic Yield

    PubMed Central

    Martens, Leon; Goode, Grahame; Wold, Johan F. H.; Beck, Lionel; Martin, Georgina; Perings, Christian; Stolt, Pelle; Baggerman, Lucas

    2014-01-01

    Aims To conduct a pilot study on the potential to optimise care pathways in syncope/Transient Loss of Consciousness management by using Lean Six Sigma methodology while maintaining compliance with ESC and/or NICE guidelines. Methods Five hospitals in four European countries took part. The Lean Six Sigma methodology consisted of 3 phases: 1) Assessment phase, in which baseline performance was mapped in each centre, processes were evaluated and a new operational model was developed with an improvement plan that included best practices and change management; 2) Improvement phase, in which optimisation pathways and standardised best practice tools and forms were developed and implemented. Staff were trained on new processes and change-management support provided; 3) Sustaining phase, which included support, refinement of tools and metrics. The impact of the implementation of new pathways was evaluated on number of tests performed, diagnostic yield, time to diagnosis and compliance with guidelines. One hospital with focus on geriatric populations was analysed separately from the other four. Results With the new pathways, there was a 59% reduction in the average time to diagnosis (p = 0.048) and a 75% increase in diagnostic yield (p = 0.007). There was a marked reduction in repetitions of diagnostic tests and improved prioritisation of indicated tests. Conclusions Applying a structured Lean Six Sigma based methodology to pathways for syncope management has the potential to improve time to diagnosis and diagnostic yield. PMID:24927475

  11. [Prediction of the side-cut product yield of atmospheric/vacuum distillation unit by NIR crude oil rapid assay].

    PubMed

    Wang, Yan-Bin; Hu, Yu-Zhong; Li, Wen-Le; Zhang, Wei-Song; Zhou, Feng; Luo, Zhi

    2014-10-01

    In the present paper, based on rapid near-infrared (NIR) crude oil evaluation, a method to predict the side-cut yields of an atmospheric/vacuum distillation unit was developed in combination with H/CAMS software. Firstly, an NIR spectroscopic method for rapidly determining the true boiling point of crude oil was developed. Using a commercially available crude oil spectroscopy database and experimental tests from Guangxi Petrochemical Company, a calibration model was established, with a topological method used for calibration. The model can be employed to predict the true boiling point of crude oil. Secondly, the true boiling point from the NIR rapid assay was converted to the side-cut product yields of the atmospheric/vacuum distillation unit by the H/CAMS software. The predicted and actual yields of the distillation products naphtha, diesel, wax and residual oil were compared over a 7-month period. The results showed that the NIR rapid crude assay can predict the side-cut product yields accurately. The NIR analytical method for predicting yield has the advantages of fast analysis, reliable results, and easy online operation, and it can provide elementary data for refinery planning optimization and crude oil blending.
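The conversion from a true-boiling-point (TBP) curve to side-cut yields amounts to interpolating the cumulative distillation curve at the cut temperatures and differencing. A minimal sketch, with illustrative cut temperatures rather than refinery data (the actual conversion here is done by the H/CAMS software):

```python
import numpy as np

def side_cut_yields(tbp_temp_C, tbp_cum_pct, cut_temps_C):
    """Side-cut yields from a TBP curve: interpolate cumulative % distilled
    at each cut temperature, then take differences between cuts."""
    cum = np.interp(cut_temps_C, tbp_temp_C, tbp_cum_pct)
    edges = np.concatenate(([0.0], cum, [100.0]))
    return np.diff(edges)   # e.g. naphtha, diesel, ..., residue
```

Given an accurate NIR-predicted TBP curve, the same differencing yields each product fraction directly, which is why TBP prediction quality drives yield prediction quality.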

  12. Development of steel foam processing methods and characterization of metal foam

    NASA Astrophysics Data System (ADS)

    Park, Chanman

    2000-10-01

    Steel foam was synthesized by a powder metallurgical route, resulting in densities less than half that of steel. Process parameters for foam synthesis were investigated, and two standard powder formulations were selected consisting of Fe-2.5% C and 0.2 wt% foaming agent (either MgCO3 or SrCO3). Compression tests were performed on annealed and pre-annealed foam samples of different density to determine mechanical response and energy absorption behavior. The stress-strain response was strongly affected by annealing, which reduced the carbon content and converted much of the pearlitic structure to ferrite. Different powder blending methods and melting times were employed and the effects on the geometric structure of steel foam were examined. Dispersion of the foaming agent affected the pore size distribution of the expanded foams. With increasing melt time, pores coalesced, leading to the eventual collapse of the foam. Inserting interlayer membranes in the powder compacts inhibited coalescence of pores and produced foams with more uniform cell size and distribution. The closed-cell foam samples exhibited anisotropy in compression, a phenomenon that was caused primarily by the ellipsoidal cell shapes within the foam. Yield strengths were 3x higher in the transverse direction than in the longitudinal direction. Yield strength also showed a power-law dependence on relative density (n ≅ 1.8). Compressive strain was highly localized and occurred in discrete bands that extended transverse to the loading direction. The yield strength of foam samples showed stronger strain rate dependence at higher strain rates. The increased strain rate dependence was attributed to microinertial hardening. Energy absorption was also observed to increase with strain rate. Measurements of cell wall curvature showed that an increased mean curvature correlated with a reduced yield strength, and foam strengths generally fell below predictions of Gibson-Ashby theory. 
Morphological defects reduced yield strength and altered the dependence on density. Microstructural analysis was performed on porous Mg and AZ31 Mg alloy synthesized by the GASAR process. The pore distribution depended on the distance from the chill end of the ingots. TEM observations revealed apparent gas tracks near the pores and ternary intermetallic phases in the alloy.
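The Gibson-Ashby scaling against which the foam strengths above were compared can be written as a one-line estimate. The prefactor C = 0.3 and exponent n = 1.5 are the standard textbook values for plastic collapse of open-cell foams, not fitted to this study (which measured n ≈ 1.8):

```python
def gibson_ashby_strength(sigma_ys, rel_density, C=0.3, n=1.5):
    """Gibson-Ashby plastic-collapse estimate for foam yield strength:
    sigma* = C * sigma_ys * (rho/rho_s)**n, with sigma_ys the cell-wall
    yield strength and rel_density = rho/rho_s the relative density."""
    return C * sigma_ys * rel_density ** n
```

The observation that measured strengths fell below this prediction, with a steeper density exponent, is consistent with the cell-wall curvature and morphological defects discussed above.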

  13. Effect of extraction method on the yield of furanocoumarins from fruits of Archangelica officinalis Hoffm.

    PubMed

    Waksmundzka-Hajnos, M; Petruczynik, A; Dragan, A; Wianowska, D; Dawidowicz, A L

    2004-01-01

    Optimal conditions for the extraction and analysis of furanocoumarins from fruits of Archangelica officinalis Hoffm. have been determined. The following extraction methods were used: exhaustive extraction in a Soxhlet apparatus, ultrasonication at 25 and 60 degrees C, microwave-assisted solvent extraction in open and closed systems, and accelerated solvent extraction (ASE). In most cases the yields of furanocoumarins were highest using the ASE method. The effects of extracting solvent, temperature and time of extraction using this method were investigated. The highest yield of furanocoumarins by ASE was obtained with methanol at 100-130 degrees C for 10 min. The extraction yields of furanocoumarins from plant material by ultrasonication at 60 degrees C and microwave-assisted solvent extraction in an open system were comparable to the extraction yields obtained in the time- and solvent-consuming exhaustive process involving the Soxhlet apparatus.

  14. An efficient scan diagnosis methodology according to scan failure mode for yield enhancement

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Tae; Seo, Nam-Sik; Oh, Ghil-Geun; Kim, Dae-Gue; Lee, Kyu-Taek; Choi, Chi-Young; Kim, InSoo; Min, Hyoung Bok

    2008-12-01

    Yield has always been a driving consideration in the fabrication processes of the modern semiconductor industry. Statistically, the largest portion of wafer yield loss comes from defective scan failures. This paper presents efficient failure analysis methods for initial yield ramp-up and ongoing products with scan diagnosis. Our analysis shows that more than 60% of the scan failure dies fall into the category of shift mode in very deep submicron (VDSM) devices. However, localization of scan shift mode failure is very difficult in comparison to capture mode failure because it is caused by malfunction of the scan chain. Addressing this biggest challenge, we propose the most suitable analysis method according to scan failure mode (capture / shift) for yield enhancement. For capture failure mode, this paper describes a method that integrates the scan diagnosis flow and backside probing technology to obtain more accurate candidates. We also describe several unique techniques, such as a bulk back-grinding solution, efficient backside probing and a signal analysis method. Lastly, we introduce a blocked chain analysis algorithm for efficient analysis of shift failure mode. In this paper, we contribute to enhancement of the yield through the combination of the two methods. We confirm the failure candidates with physical failure analysis (PFA). The direct feedback of the defect visualization is useful for mass-producing devices in a shorter time. The experimental data on mass products show that our method produces an average reduction of 13.7% in defective SCAN & SRAM-BIST failure rates and of 18.2% in wafer yield loss rates.

  15. Real-Time 3D Reconstruction from Images Taken from a UAV

    NASA Astrophysics Data System (ADS)

    Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.

    2015-08-01

    We designed a method for creating 3D models of objects and areas from two aerial images acquired from a UAV. The models are generated automatically and in real-time, and consist of dense and true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method only needs a cheap compact camera, mounted on a small UAV. No additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and for the camera parameters are optimized, in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing the accuracy under the real-time constraint. A test was performed in monitoring a construction yard, obtaining very promising results. Highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5m. For its characteristics, the designed method is suitable for video-surveillance, remote sensing and monitoring, especially in those applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.
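The trade-off behind optimizing the acquisition geometry can be seen from the standard first-order stereo depth-error relation; a longer baseline (at fixed flying height and focal length) reduces depth uncertainty. This is a generic two-view sketch with illustrative numbers, not the paper's actual optimization:

```python
def depth_uncertainty(Z, baseline_m, focal_px, disparity_err_px=1.0):
    """First-order stereo depth error: dZ ~ Z**2 * d_disp / (f * B),
    where Z is the distance to the scene, B the camera baseline, f the
    focal length in pixels, and d_disp the disparity matching error."""
    return Z ** 2 * disparity_err_px / (focal_px * baseline_m)
```

For example, at Z = 50 m with a 10 m baseline and f = 5000 px, a one-pixel matching error maps to a few centimetres of depth error, comfortably within the ~0.5 m accuracy reported above.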

  16. Ceramic Honeycomb Structures and Method Thereof

    NASA Technical Reports Server (NTRS)

    Cagliostro, Domenick E.; Riccitiello, Salvatore R.

    1989-01-01

    The present invention relates to a method for producing ceramic articles and the articles, the process comprising the chemical vapor deposition (CVD) and/or chemical vapor infiltration (CVI) of a honeycomb structure. Specifically the present invention relates to a method for the production of a ceramic honeycomb structure, including: (a) obtaining a loosely woven fabric/binder wherein the fabric consists essentially of metallic, ceramic or organic fiber and the binder consists essentially of an organic or inorganic material wherein the fabric/binder has and retains a honeycomb shape, with the proviso that when the fabric is metallic or ceramic the binder is organic only; (b) substantially evenly depositing at least one layer of a ceramic on the fabric/binder of step (a); and (c) recovering the ceramic coated fiber honeycomb structure. In another aspect, the present invention relates to a method for the manufacture of a lightweight ceramic-ceramic composite honeycomb structure, which process comprises: (d) pyrolyzing a loosely woven fabric a honeycomb shaped and having a high char yield and geometric integrity after pyrolysis at between about 700 degrees and 1,100 degrees Centigrade; (e) substantially evenly depositing at least one layer of ceramic material on the pyrolyzed fabric of step (a); and (f) recovering the coated ceramic honeycomb structure. The ceramic articles produced have enhanced physical properties and are useful in aircraft and aerospace uses.

  17. Soviet test yields

    NASA Astrophysics Data System (ADS)

    Vergino, Eileen S.

    Soviet seismologists have published descriptions of 96 nuclear explosions conducted from 1961 through 1972 at the Semipalatinsk test site, in Kazakhstan, central Asia [Bocharov et al., 1989]. With the exception of releasing news about some of their peaceful nuclear explosions (PNEs) the Soviets have never before published such a body of information.To estimate the seismic yield of a nuclear explosion it is necessary to obtain a calibrated magnitude-yield relationship based on events with known yields and with a consistent set of seismic magnitudes. U.S. estimation of Soviet test yields has been done through application of relationships to the Soviet sites based on the U.S. experience at the Nevada Test Site (NTS), making some correction for differences due to attenuation and near-source coupling of seismic waves.
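The calibrated magnitude-yield relationship described above is conventionally written as mb = a + b·log10(Y), with a site-dependent bias term capturing attenuation and coupling differences. Inverting it for yield is one line; the coefficients below are illustrative NTS-style values, not the calibrated constants used in actual U.S. assessments:

```python
def yield_from_mb(mb, a=4.45, b=0.75, site_bias=0.0):
    """Invert mb = a + b*log10(Y) + site_bias for yield Y in kilotons.
    site_bias absorbs attenuation/coupling differences between test sites;
    a and b here are assumed, generic values for illustration."""
    return 10 ** ((mb - site_bias - a) / b)
```

The published Semipalatinsk yields matter precisely because they allow a and the site bias to be fitted directly, rather than transferred from NTS experience.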

  18. Enhanced Thermal Performance of Mosques in Qatar

    NASA Astrophysics Data System (ADS)

    Touma, A. Al; Ouahrani, D.

    2017-12-01

    Qatar has an abundance of mosques that contribute significantly to the increasing energy consumption in the country. Little attention has been given to mitigation methods that limit the energy demands of mosques without violating the worshippers’ thermal comfort. Most previous research has dealt with enhancing the mosque envelope through the addition of insulation layers. Since most mosque walls in Qatar are already insulated, this study proposes the installation of shading on the mosque roof, which is anticipated to yield energy savings similar to those of insulated roofs. An actual mosque in Qatar, comprising six different spaces consisting of men’s and women’s prayer rooms, ablutions and toilets, was simulated and yielded a total annual energy demand of 619.55 kWh/m2. The mosque, whose walls are already insulated, yielded 9.1% energy savings when an insulation layer was added to its roof, and 6.2% energy savings when a shading layer was added above the roof. As reconstruction of the roof envelope is practically unrealistic in existing mosques, the addition of shading to the roof was found to produce comparable energy savings. Lastly, it was found that new mosques with thin-roof insulation and shading tend to be more energy-efficient than those with thick-roof insulation.

  19. Improving database enrichment through ensemble docking

    NASA Astrophysics Data System (ADS)

    Rao, Shashidhar; Sanschagrin, Paul C.; Greenwood, Jeremy R.; Repasky, Matthew P.; Sherman, Woody; Farid, Ramy

    2008-09-01

    While it may seem intuitive that using an ensemble of multiple conformations of a receptor in structure-based virtual screening experiments would necessarily yield improved enrichment of actives relative to using just a single receptor, it turns out that at least in the p38 MAP kinase model system studied here, a very large majority of all possible ensembles do not yield improved enrichment of actives. However, there are combinations of receptor structures that do lead to improved enrichment results. We present here a method to select the ensembles that produce the best enrichments that does not rely on knowledge of active compounds or sophisticated analyses of the 3D receptor structures. In the system studied here, the small fraction of ensembles of up to 3 receptors that do yield good enrichments of actives were identified by selecting ensembles that have the best mean GlideScore for the top 1% of the docked ligands in a database screen of actives and drug-like "decoy" ligands. Ensembles of two receptors identified using this mean GlideScore metric generally outperform single receptors, while ensembles of three receptors identified using this metric consistently give optimal enrichment factors in which, for example, 40% of the known actives outrank all the other ligands in the database.
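The ensemble-selection metric described above, ranking receptor combinations by the mean score of the top 1% of docked ligands, can be re-implemented schematically. This is a generic sketch (it assumes a precomputed score matrix; it is not Glide or the authors' code):

```python
import numpy as np
from itertools import combinations

def rank_ensembles(scores, max_size=3, top_frac=0.01):
    """scores: (n_ligands, n_receptors) docking scores, lower = better.
    A ligand's ensemble score is its best (minimum) score over the member
    receptors; ensembles are ranked by the mean score of their top
    `top_frac` fraction of ligands (best first)."""
    n_lig, n_rec = scores.shape
    n_top = max(1, int(round(top_frac * n_lig)))
    ranked = []
    for size in range(1, max_size + 1):
        for combo in combinations(range(n_rec), size):
            ens = scores[:, combo].min(axis=1)          # per-ligand best score
            metric = float(np.sort(ens)[:n_top].mean()) # mean of top ligands
            ranked.append((metric, combo))
    ranked.sort()
    return ranked
```

Note that the metric needs no knowledge of which ligands are active, which is the practical appeal of the approach.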

  20. Covariance generation and uncertainty propagation for thermal and fast neutron induced fission yields

    NASA Astrophysics Data System (ADS)

    Terranova, Nicholas; Serot, Olivier; Archier, Pascal; De Saint Jean, Cyrille; Sumini, Marco

    2017-09-01

    Fission product yields (FY) are fundamental nuclear data for several applications, including decay heat, shielding, dosimetry, burn-up calculations. To be safe and sustainable, modern and future nuclear systems require accurate knowledge on reactor parameters, with reduced margins of uncertainty. Present nuclear data libraries for FY do not provide consistent and complete uncertainty information which are limited, in many cases, to only variances. In the present work we propose a methodology to evaluate covariance matrices for thermal and fast neutron induced fission yields. The semi-empirical models adopted to evaluate the JEFF-3.1.1 FY library have been used in the Generalized Least Square Method available in CONRAD (COde for Nuclear Reaction Analysis and Data assimilation) to generate covariance matrices for several fissioning systems such as the thermal fission of U235, Pu239 and Pu241 and the fast fission of U238, Pu239 and Pu240. The impact of such covariances on nuclear applications has been estimated using deterministic and Monte Carlo uncertainty propagation techniques. We studied the effects on decay heat and reactivity loss uncertainty estimation for simplified test case geometries, such as PWR and SFR pin-cells. The impact on existing nuclear reactors, such as the Jules Horowitz Reactor under construction at CEA-Cadarache, has also been considered.
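For the deterministic propagation mentioned above, the first-order "sandwich rule" maps a fission-yield covariance matrix into the variance of an integral response such as decay heat. A minimal sketch (the sensitivity vector and covariance below are placeholders, not evaluated JEFF data):

```python
import numpy as np

def sandwich_variance(S, C):
    """First-order uncertainty propagation: var(R) = S @ C @ S.T for a
    response linearised as R = S . y, with S the sensitivities of R to
    the fission yields y and C the yield covariance matrix."""
    S = np.atleast_2d(np.asarray(S, float))
    return float(S @ np.asarray(C, float) @ S.T)
```

Off-diagonal covariance terms can dominate the result, which is why libraries carrying only variances, as noted above, are insufficient.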

  1. The use of holographic interferometry for measurements of temperature in a rectangular heat pipe. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Marn, Jure

    1989-01-01

    Holographic interferometry is a nonintrusive method and as such possesses considerable advantages, such as not disturbing the velocity and temperature fields by introducing obstacles that would alter the flow. These optical methods have disadvantages as well. Holography, as one of the interferometry methods, retains the accuracy of older methods while eliminating the systematic error of the participating components. Holographic interferometry consists of comparing the objective beam with the reference beam and observing the difference in the lengths of the optical paths, which can be observed as light propagates through a medium with a locally varying refractive index. Thus, changes in refractive index can be observed as a family of nonintersecting surfaces in space (wave fronts). The object of the investigation was a rectangular heat pipe. The goal was to measure temperatures in the heat pipe, yielding data for computer code or model assessment. The results were obtained by calculating the temperatures by means of finite fringes.

  2. Quality control assurance of strontium-90 in foodstuffs by LSC.

    PubMed

    Lopes, I; Mourato, A; Abrantes, J; Carvalhal, G; Madruga, M J; Reis, M

    2014-11-01

    A method based on the separation of Sr-90 by extraction chromatography and beta determination by the Liquid Scintillation Counting (LSC) technique was used for strontium analysis in food samples. The methodology consisted of prior sample treatment (drying and incineration) followed by radiochemical separation of Sr-90 by extraction chromatography, using the Sr-resin. The chemical yield was determined by a gravimetric method, adding stable strontium to the matrix. Beta activity (Sr-90/Y-90) was determined using a low-background liquid scintillation spectrometer (Tri-Carb 3170 TR/SL, Packard). The accuracy and precision of the method were assessed previously through recovery trials with Sr-90 spiked samples, using the same types of matrices (milk, complete meals, meat and vegetables). A reference material (IAEA_321) was now used to measure the accuracy of the procedure. Participation in interlaboratory comparison exercises also provided an external control on the measurements and ensured the adequacy of the method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. A two step method to treat variable winds in fallout smearing codes. Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopkins, A.T.

    1982-03-01

    A method was developed to treat non-constant winds in fallout smearing codes. The method consists of two steps: (1) location of the curved hotline and (2) determination of the off-hotline activity. To locate the curved hotline, the method begins with an initial cloud of 20 discretely-sized pancake clouds, located at altitudes determined by weapon yield. Next, the particles are tracked through a 300-layer atmosphere, translating with different winds in each layer. The connection of the 20 particles' impact points is the fallout hotline. The hotline location was found to be independent of the assumed particle size distribution in the stabilized cloud. The off-hotline activity distribution is represented as a two-dimensional Gaussian function, centered on the curved hotline. Hotline locator model results were compared to numerical calculations for a hypothetical 100-kt burst and to the actual hotline produced by the Castle-Bravo 15-Mt nuclear test.
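The layer-by-layer particle tracking in step (1) reduces to simple kinematics: a particle spends thickness/fall-speed seconds in each wind layer and is advected accordingly. A sketch with illustrative numbers (constant fall speed per particle size; the real code uses 300 layers and size-dependent settling):

```python
def impact_distance(layers, fall_speed_mps):
    """Horizontal distance travelled by one falling pancake cloud crossing
    horizontal wind layers, each given as (thickness_m, wind_mps).
    Connecting the impact points of the discrete particle sizes traces
    the curved hotline."""
    x = 0.0
    for thickness, wind in layers:
        x += wind * (thickness / fall_speed_mps)  # residence time per layer
    return x
```

Running this for each of the 20 particle sizes (each with its own fall speed and release altitude) and joining the impact points reproduces the hotline construction described above.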

  4. Highly luminescent core-shell InP/ZnX (X = S, Se) quantum dots prepared via a phosphine synthetic route.

    PubMed

    Mordvinova, Natalia; Vinokurov, Alexander; Kuznetsova, Tatiana; Lebedev, Oleg I; Dorofeev, Sergey

    2017-01-24

    Here we report a simple method for the creation of highly luminescent core-shell InP/ZnX (X = S, Se) quantum dots (QDs) on the basis of a phosphine synthetic route. In this method a Zn precursor was added to the reaction mixture at the beginning of the synthesis to form an In(Zn)P alloy structure, which promoted the formation of a ZnX shell. The core-shell InP/ZnX QDs exhibit highly intense emission with a quantum yield over 50%. The proposed method is of particular importance for practical applications. Advantages of this method compared to the widely used SILAR technique are discussed. We further demonstrate that the SILAR approach, which consists of consecutive addition of Zn and chalcogen precursors to pre-prepared non-doped InP colloidal nanoparticles, is not quite suitable for shell growth without the addition of special activator agents or the use of very reactive precursors.

  5. Free energy perturbation method for measuring elastic constants of liquid crystals

    NASA Astrophysics Data System (ADS)

    Joshi, Abhijeet

    There is considerable interest in designing liquid crystals capable of yielding specific morphological responses in confined environments, including capillaries and droplets. The morphology of a liquid crystal is largely dictated by the elastic constants, which are difficult to measure and are only available for a handful of substances. In this work, a first-principles-based method is proposed to calculate the Frank elastic constants of nematic liquid crystals directly from atomistic models. These include the standard splay, twist and bend deformations, and the often-ignored but important saddle-splay constant. The proposed method is validated using the well-studied Gay-Berne (3,5,2,1) model; we examine the effects of temperature and system size on the elastic constants in the nematic and smectic phases. We find that our measurements of the splay, twist, and bend elastic constants are consistent with previous estimates for the nematic phase. We further outline the implementation of our approach for the saddle-splay elastic constant, and find it to have a value at the limits of the Ericksen inequalities. We then report results for the elastic constants of a commonly known liquid crystal, namely 4-pentyl-4'-cyanobiphenyl (5CB), using an atomistic model, and show that the values predicted by our approach are consistent with a subset of the available but limited experimental literature.

  6. A Sparse Dictionary Learning-Based Adaptive Patch Inpainting Method for Thick Clouds Removal from High-Spatial Resolution Remote Sensing Imagery

    PubMed Central

    Meng, Fan; Yang, Xiaomei; Zhou, Chenghu; Li, Zhi

    2017-01-01

    Cloud cover is inevitable in optical remote sensing (RS) imagery on account of the influence of observation conditions, which limits the availability of RS data. Therefore, it is of great significance to be able to reconstruct the cloud-contaminated ground information. This paper presents a sparse dictionary learning-based image inpainting method for adaptively recovering, patch by patch, the missing information corrupted by thick clouds. A feature dictionary was learned from exemplars in the cloud-free regions, which was later utilized to infer the missing patches via sparse representation. To maintain the coherence of structures, structure sparsity was introduced to encourage missing patches on image structures to be filled in first. The optimization model of patch inpainting was formulated under an adaptive neighborhood-consistency constraint and solved by a modified orthogonal matching pursuit (OMP) algorithm. In light of these ideas, the thick-cloud removal scheme was designed and applied to images with simulated and real clouds. Comparisons and experiments show that our method not only keeps structures and textures consistent with the surrounding ground information, but also yields few smoothing and blocking artifacts, making it more suitable for the removal of clouds from high-spatial-resolution RS imagery with salient structures and abundant textural features. PMID:28914787

  7. A Sparse Dictionary Learning-Based Adaptive Patch Inpainting Method for Thick Clouds Removal from High-Spatial Resolution Remote Sensing Imagery.

    PubMed

    Meng, Fan; Yang, Xiaomei; Zhou, Chenghu; Li, Zhi

    2017-09-15

    Cloud cover is inevitable in optical remote sensing (RS) imagery on account of the influence of observation conditions, which limits the availability of RS data. Therefore, it is of great significance to be able to reconstruct the cloud-contaminated ground information. This paper presents a sparse dictionary learning-based image inpainting method for adaptively recovering, patch by patch, the missing information corrupted by thick clouds. A feature dictionary was learned from exemplars in the cloud-free regions, which was later utilized to infer the missing patches via sparse representation. To maintain the coherence of structures, structure sparsity was introduced to encourage missing patches on image structures to be filled in first. The optimization model of patch inpainting was formulated under an adaptive neighborhood-consistency constraint and solved by a modified orthogonal matching pursuit (OMP) algorithm. In light of these ideas, the thick-cloud removal scheme was designed and applied to images with simulated and real clouds. Comparisons and experiments show that our method not only keeps structures and textures consistent with the surrounding ground information, but also yields few smoothing and blocking artifacts, making it more suitable for the removal of clouds from high-spatial-resolution RS imagery with salient structures and abundant textural features.
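The paper's modified OMP is not reproduced here, but the sparse-coding step it builds on can be sketched generically: greedily select the dictionary atom most correlated with the current residual, then re-fit the coefficients of all selected atoms by least squares. A minimal NumPy version, with a synthetic dictionary and signal:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: approximate y as a k-sparse
    combination of the columns (atoms) of dictionary D."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the coefficients on the chosen support by least squares.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
true = np.zeros(50)
true[[3, 17]] = [2.0, -1.5]              # a 2-sparse ground truth
y = D @ true
x_hat = omp(D, y, k=2)
```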

  8. Self-consistent self-interaction corrected density functional theory calculations for atoms using Fermi-Löwdin orbitals: Optimized Fermi-orbital descriptors for Li-Kr

    NASA Astrophysics Data System (ADS)

    Kao, Der-you; Withanage, Kushantha; Hahn, Torsten; Batool, Javaria; Kortus, Jens; Jackson, Koblar

    2017-10-01

    In the Fermi-Löwdin orbital method for implementing self-interaction corrections (FLO-SIC) in density functional theory (DFT), the local orbitals used to make the corrections are generated in a unitary-invariant scheme via the choice of the Fermi orbital descriptors (FODs). These are M positions in 3-d space (for an M-electron system) that can be loosely thought of as classical electron positions. The orbitals that minimize the DFT energy including the SIC are obtained by finding optimal positions for the FODs. In this paper, we present optimized FODs for the atoms from Li-Kr obtained using an unbiased search method and self-consistent FLO-SIC calculations. The FOD arrangements display a clear shell structure that reflects the principal quantum numbers of the orbitals. We describe trends in the FOD arrangements as a function of atomic number. FLO-SIC total energies for the atoms are presented and are shown to be in close agreement with the results of previous SIC calculations that imposed explicit constraints to determine the optimal local orbitals, suggesting that FLO-SIC yields the same solutions for atoms as these computationally demanding earlier methods, without invoking the constraints.

  9. Denitrification and nitrogen transport in a coastal aquifer receiving wastewater discharge

    USGS Publications Warehouse

    DeSimone, L.A.; Howes, B.L.

    1996-01-01

    Denitrification and nitrogen transport were quantified in a sandy glacial aquifer receiving wastewater from a septage-treatment facility on Cape Cod, MA. The resulting groundwater plume contained high concentrations of NO3- (32 mg of N L-1), total dissolved nitrogen (40.5 mg of N L-1), and dissolved organic carbon (1.9 mg of C L-1) and developed a central anoxic zone after 17 months of effluent discharge. Denitrifying activity was measured using four approaches throughout the major biogeochemical zones of the plume. Three approaches that maintained the structure of aquifer materials yielded comparable rates: acetylene block in intact sediment cores, 9.6 ng of N cm-3 d-1 (n = 61); in situ N2 production, 3.0 ng of N cm-3 d-1 (n = 11); and in situ NO3- depletion, 7.1 ng of N cm-3 d-1 (n = 3). In contrast, the mixing of aquifer materials using a standard slurry method yielded rates that were more than 15-fold higher (150 ng of N cm-3 d-1, n = 16) than other methods. Concentrations and δ15N of groundwater and effluent N2, NO3-, and NH4+ were consistent with the lower rates of denitrification determined by the intact-core or in situ methods. These methods and a plumewide survey of excess N2 indicate that 2-9% of the total mass of fixed nitrogen recharged to the anoxic zone of the plume was denitrified during the 34-month study period. Denitrification was limited by organic carbon (not NO3-) concentrations, as evidenced by a nitrate and carbon addition experiment, the correlation of denitrifying activity with in situ concentrations of dissolved organic carbon, and the assessments of available organic carbon in plume sediments. Carbon limitation is consistent with the observed conservative transport of 85-96% of the nitrate in the anoxic zone. Although denitrifying activity removed a significant amount (46250 kg) of fixed nitrogen during transport, the effects of aquifer denitrification on the nitrogen load to receiving ecosystems are likely to be small (<10%).

  10. Formation of quasi-single crystalline porous ZnO nanostructures with a single large cavity

    NASA Astrophysics Data System (ADS)

    Cho, Seungho; Kim, Semi; Jung, Dae-Won; Lee, Kun-Hong

    2011-09-01

    We report a method for synthesizing quasi-single crystalline porous ZnO nanostructures containing a single large cavity. The microwave-assisted route consists of a short (about 2 min) temperature ramping stage (from room temperature to 120 °C) and a stage in which the temperature is maintained at 120 °C for 2 h. The structures produced by this route were 200-480 nm in diameter. The morphological yields of this method were very high. The temperature- and time-dependent evolution of the synthesized powders and the effects of an additive, vitamin C, were studied. Spherical amorphous/polycrystalline structures (70-170 nm in diameter), which appeared transitorily, may play a key role in the formation of the single crystalline porous hollow ZnO nanostructures. Studies and characterization of the nanostructures suggested a possible mechanism for formation of the quasi-single crystalline porous ZnO nanostructures with an interior space.
Electronic supplementary information (ESI) available: TEM images and the corresponding SAED image of a ZnO nanostructure synthesized from the reaction without l(+)-ascorbic acid at the 85 °C time point (Fig. S1). See DOI: 10.1039/c1nr10609k

  11. Retrieval of volcanic SO2 from HIRS/2 using optimal estimation

    NASA Astrophysics Data System (ADS)

    Miles, Georgina M.; Siddans, Richard; Grainger, Roy G.; Prata, Alfred J.; Fisher, Bradford; Krotkov, Nickolay

    2017-07-01

    We present an optimal-estimation (OE) retrieval scheme for stratospheric sulfur dioxide from the High-Resolution Infrared Radiation Sounder 2 (HIRS/2) instruments on the NOAA and MetOp platforms, infrared radiometers that have been operational since 1979. This algorithm is an improvement upon a previous method based on channel brightness temperature differences, which demonstrated the potential for monitoring volcanic SO2 using HIRS/2. The Prata method is fast but of limited accuracy. This algorithm uses an optimal-estimation retrieval approach, yielding increased accuracy for only moderate computational cost. This is principally achieved by fitting the column water vapour and accounting for its interference in the retrieval of SO2. A cloud and aerosol model is used to evaluate the sensitivity of the scheme to the presence of ash and water/ice cloud. This identifies that cloud or ash above 6 km limits the accuracy of the water vapour fit, increasing the error in the SO2 estimate. Cloud top height is also retrieved. The scheme is applied to a case study event, the 1991 eruption of Cerro Hudson in Chile. The total erupted mass of SO2 is estimated to be 2300 ± 600 kt. This confirms it as one of the largest events since the 1991 eruption of Pinatubo, and of comparable scale to the Northern Hemisphere eruption of Kasatochi in 2008. This retrieval method yields a minimum mass per unit area detection limit of 3 DU, which is slightly less than that for the Total Ozone Mapping Spectrometer (TOMS), the only other instrument capable of monitoring SO2 from 1979 to 1996. We show an initial comparison to TOMS for part of this eruption, with broadly consistent results. Operating in the infrared (IR), HIRS has the advantage of being able to measure both during the day and at night, and there have frequently been multiple HIRS instruments operated simultaneously for better than daily sampling.
If applied to all data from the series of past and future HIRS instruments, this method presents the opportunity to produce a comprehensive and consistent volcanic SO2 time series spanning over 40 years.
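The core of an optimal-estimation retrieval is the Bayesian combination of a forward model with prior knowledge of the state (here, e.g., SO2 column together with the interfering water vapour column). The HIRS forward model is not reproduced; this is a generic linear OE sketch in the Rodgers formalism, with an invented two-parameter state and 3-channel sensitivity matrix:

```python
import numpy as np

def linear_oe(y, K, x_a, S_a, S_e):
    """One linear optimal-estimation step: combine a measurement
    y = K x + noise (covariance S_e) with a prior x_a (covariance S_a)."""
    S_a_inv = np.linalg.inv(S_a)
    S_e_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(S_a_inv + K.T @ S_e_inv @ K)   # posterior covariance
    x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a)  # posterior mean
    return x_hat, S_hat

# Toy state (e.g. SO2 column and water vapour) observed by a made-up
# 3-channel linear forward model.
K = np.array([[1.0, 0.2], [0.5, 1.0], [0.1, 0.8]])
x_true = np.array([3.0, 1.5])
y = K @ x_true                   # noise-free synthetic measurement
x_a = np.zeros(2)                # prior state
S_a = np.eye(2) * 100.0          # weak prior constraint
S_e = np.eye(3) * 0.01           # small measurement noise
x_hat, S_hat = linear_oe(y, K, x_a, S_a, S_e)
```

With a weak prior and low noise the retrieval reduces essentially to weighted least squares, so the posterior mean lands close to the true state.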

  12. A data-oriented semi-process model for evaluating the yields of major crops at global scale (PRYSBI-2)

    NASA Astrophysics Data System (ADS)

    Sakurai, G.; Iizumi, T.; Yokozawa, M.

    2013-12-01

    Demand for major cereal crops will double by 2050 compared to the amount in 2005 due to population growth, dietary change, and increased biofuel use. This requires substantial efforts to increase crop yields under changing climate, water resources, and land use. In order to explore possible paths to meet the supply target, global crop modeling is a useful approach. To that end, we developed a process-based large-area crop model (called PRYSBI-2) for major crops, including soybean. The model consists of an enzyme-kinetics model for photosynthetic carbon assimilation and the soil water balance model from SWAT. The parameter values for water stress and nitrogen stress were calibrated over global croplands from one grid cell to another (1.125° in latitude and longitude) using Markov chain Monte Carlo (MCMC) methods. The historical yield data collected from major crop-producing countries on a state, county, or prefecture scale were used as the calibration data. We thereby obtained model parameter sets that give high correlation coefficients between the historical and estimated yield time series for the period 1980-2006. Using the model, we analyzed the impacts on soybean yields in the three top soybean-producing countries (the USA, China, and Brazil) associated with the changes in climate and CO2 during the period 1980-2006. We found that, given the simulated yields and reported harvested areas, the estimated average net benefit from the CO2 fertilization effect (with one standard deviation) in the USA, Brazil, and China was 42.70 ± 32.52 Mt, 35.30 ± 28.55 Mt, and 12.52 ± 15.11 Mt, respectively. Results suggest that the CO2-induced increases in soybean yields in the USA and China likely offset a part of the negative impacts on yields due to the historical temperature rise. In contrast, the net effect of the past changes in climate and CO2 in Brazil appeared to be positive. 
This study demonstrates a quantitative estimation of the impacts of the changes in climate and CO2 during the past few decades using a new global crop model.
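The per-grid-cell MCMC calibration can be illustrated with a toy Metropolis sampler that fits a single stress parameter of a stand-in linear yield model to synthetic "historical" yields. The actual PRYSBI-2 likelihood and parameter set are far richer; everything below is invented for illustration:

```python
import math
import random

# Toy Metropolis calibration: fit parameter b in yield = b * input.
random.seed(42)
inputs = [1.0, 2.0, 3.0, 4.0, 5.0]
b_true, sigma = 1.8, 0.1
observed = [b_true * x + random.gauss(0.0, sigma) for x in inputs]

def log_likelihood(b):
    # Gaussian errors around the toy yield model.
    return -sum((y - b * x) ** 2 for x, y in zip(inputs, observed)) / (2 * sigma ** 2)

b, samples = 1.0, []
ll = log_likelihood(b)
for step in range(5000):
    proposal = b + random.gauss(0.0, 0.05)     # random-walk proposal
    ll_new = log_likelihood(proposal)
    if math.log(random.random()) < ll_new - ll:  # Metropolis acceptance rule
        b, ll = proposal, ll_new
    if step >= 1000:                             # discard burn-in
        samples.append(b)

b_mean = sum(samples) / len(samples)             # posterior mean estimate of b
```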

  13. Measurement of airborne ultrasonic slow waves in calcaneal cancellous bone.

    PubMed

    Strelitzki, R; Paech, V; Nicholson, P H

    1999-05-01

    Measurements of an airborne ultrasonic wave were made in defatted cancellous bone from the human calcaneus using standard ultrasonic equipment. The wave propagating under these conditions was consistent with a decoupled Biot slow wave travelling in the air alone, as previously reported in gas-saturated foams. Reproducible measurements of phase velocity and attenuation coefficient were possible, and an estimate of the tortuosity of the trabecular framework was derived from the high frequency limit of the phase velocity. Thus the method offers a new approach to the acoustic characterisation of bone in vitro which, in contrast to existing techniques, has the potential to yield information directly characterising the trabecular structure.
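The tortuosity estimate mentioned above follows from Biot theory for a rigid frame: in the high-frequency limit the slow wave travels at the saturating fluid's sound speed reduced by the square root of the tortuosity, so alpha = (c_fluid / v_inf)**2. A one-function sketch with illustrative values, not data from the study:

```python
# Tortuosity from the high-frequency limit of the slow-wave phase velocity
# (rigid-frame Biot theory). Numerical values below are illustrative only.

def tortuosity(c_fluid: float, v_high_freq: float) -> float:
    """alpha = (c_fluid / v_inf)^2; alpha >= 1 for a tortuous pore space."""
    return (c_fluid / v_high_freq) ** 2

alpha = tortuosity(c_fluid=343.0, v_high_freq=300.0)   # air-saturated sample
```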

  14. Optical configuration with fixed transverse magnification for self-interference incoherent digital holography.

    PubMed

    Imbe, Masatoshi

    2018-03-20

    The optical configuration proposed in this paper consists of a 4-f optical setup with a wavefront modulation device, such as a concave mirror or a spatial light modulator, on the Fourier plane. The transverse magnification of images reconstructed with the proposed configuration is independent of the locations of the object and the image sensor; therefore, reconstructed images of objects at different distances can be scaled with a fixed transverse magnification. This property is derived from Fourier optics and mathematically verified with the optical matrix method. Numerical simulation results and experimental results are also given to confirm the fixed magnification of the reconstructed images.

  15. Rare isotope studies involving catalytic oxidation of CO over platinum-tin oxide

    NASA Technical Reports Server (NTRS)

    Upchurch, Billy T.; Wood, George M., Jr.; Hess, Robert V.; Hoyt, Ronald F.

    1987-01-01

    Results of studies utilizing normal and rare oxygen isotopes in the catalytic oxidation of carbon monoxide over a platinum-tin oxide catalyst substrate are presented. Chemisorption of labeled carbon monoxide on the catalyst followed by thermal desorption yielded a carbon dioxide product with an oxygen-18 composition consistent with the formation of a carbonate-like intermediate in the chemisorption process. The efficacy of a method developed for the oxygen-18 labeling of the platinum-tin oxide catalyst surface, for use in closed-cycle pulsed rare-isotope carbon dioxide lasers, is demonstrated for the equivalent of 10^6 pulses at 10 pulses per second.

  16. An electron diffraction study of alkali chloride vapors

    NASA Technical Reports Server (NTRS)

    Mawhorter, R. J.; Fink, M.; Hartley, J. G.

    1985-01-01

    A study of monomers and dimers of the four alkali chlorides NaCl, KCl, RbCl, and CsCl in the vapor phase using the counting method of high-energy electron diffraction is reported. Nozzle temperatures from 850-960 K were required to achieve the necessary vapor pressures of approximately 0.01 torr. Using harmonic calculations for the monomer and dimer l values, a consistent set of structures for all four molecules was obtained. The corrected monomer distances reproduce the microwave values very well. The experiment yields information on the amount of dimer present in the vapor, and these results are compared with thermodynamic values.

  17. Multivariate data analysis and metabolic profiling of artemisinin and related compounds in high yielding varieties of Artemisia annua field-grown in Madagascar.

    PubMed

    Suberu, John; Gromski, Piotr S; Nordon, Alison; Lapkin, Alexei

    2016-01-05

    An improved liquid chromatography-tandem mass spectrometry (LC-MS/MS) protocol for rapid analysis of co-metabolites of A. annua in raw extracts was developed and extensively characterized. The new method was used to analyse metabolic profiles of 13 varieties of A. annua from an in-field growth programme in Madagascar. Several multivariate data analysis techniques consistently show the association of artemisinin with dihydroartemisinic acid. These data support the hypothesis of dihydroartemisinic acid being the late stage precursor to artemisinin in its biosynthetic pathway. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  18. Synthesis and Characterization of Manganese Doped Silicon Nanoparticles

    PubMed Central

    Zhang, Xiaoming; Brynda, Marcin; Britt, R. David; Carroll, Elizabeth; Larsen, Delmar S.; Louie, Angelique Y.; Kauzlarich, Susan M.

    2008-01-01

    Mn-doped Si nanoparticles have been synthesized via a low-temperature solution route and characterized by X-ray powder diffraction, TEM, optical and emission spectroscopy, and EPR. The particle diameter was 4 nm and the surface was capped by octyl groups. Doping with 5% Mn resulted in a green emission with a slightly lower quantum yield than undoped Si nanoparticles prepared by the same method. Mn2+ doping into the nanoparticles is confirmed by the EPR hyperfine structure, and the charge carrier dynamics were probed by ultrafast transient absorption spectroscopy. Both techniques are consistent with Mn2+ on or close to the surface of the nanoparticle. PMID:17691792

  19. Comparison of 21-Gauge and 22-Gauge Aspiration Needle in Endobronchial Ultrasound-Guided Transbronchial Needle Aspiration

    PubMed Central

    Akulian, Jason; Lechtzin, Noah; Yasin, Faiza; Kamdar, Biren; Ernst, Armin; Ost, David E.; Ray, Cynthia; Greenhill, Sarah R.; Jimenez, Carlos A.; Filner, Joshua; Feller-Kopman, David

    2013-01-01

    Background: Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is a minimally invasive procedure originally performed using a 22-gauge (22G) needle. A recently introduced 21-gauge (21G) needle may improve the diagnostic yield and sample adequacy of EBUS-TBNA, but prior smaller studies have shown conflicting results. To our knowledge, this is the largest study undertaken to date to determine whether the 21G needle adds diagnostic benefit. Methods: We retrospectively evaluated the results of 1,299 patients from the American College of Chest Physicians Quality Improvement Registry, Education, and Evaluation (AQuIRE) Diagnostic Registry who underwent EBUS-TBNA between February 2009 and September 2010 at six centers throughout the United States. Data collection included patient demographics, sample adequacy, and diagnostic yield. Analysis consisted of univariate and multivariate hierarchical logistic regression comparing diagnostic yield and sample adequacy of EBUS-TBNA specimens by needle gauge. Results: A total of 1,235 patients met inclusion criteria. Sample adequacy was obtained in 94.9% of the 22G needle group and in 94.6% of the 21G needle group (P = .81). A diagnosis was made in 51.4% of the 22G and 51.3% of the 21G groups (P = .98). Multivariate hierarchical logistic regression showed no statistical difference in sample adequacy or diagnostic yield between the two groups. The presence of rapid onsite cytologic evaluation was associated with significantly fewer needle passes per procedure when using the 21G needle (P < .001). Conclusions: There is no difference in specimen adequacy or diagnostic yield between the 21G and 22G needle groups. EBUS-TBNA in conjunction with rapid onsite cytologic evaluation and a 21G needle is associated with fewer needle passes compared with a 22G needle. PMID:23632441

  20. Long-term effects of conventional and reduced tillage systems on soil condition and yield of maize

    NASA Astrophysics Data System (ADS)

    Rátonyi, Tamás; Széles, Adrienn; Harsányi, Endre

    2015-04-01

    As a consequence of operations which neglect soil condition and involve frequent soil disturbance, conventional tillage (primary tillage with autumn ploughing) results in the degradation and compaction of soil structure, as well as the reduction of organic matter. These unfavourable processes pose an increasing economic and environmental protection problem today. The unfavourable physical condition of soils under conventional tillage indicates the need for soil-preserving methods and tools. The examinations were performed in the multifactorial long-term tillage experiment established at the Látókép experiment site of DE MÉK. The experiment site is located on the Hajdúság loess ridge (Hungary) and its soil is a loess-based calcareous chernozem with a deep humus layer. The physical soil type is mid-heavy adobe. The long-term experiment has a split-split-plot design. The main plots are different tillage methods (autumn ploughing, spring shallow tillage) without replication. In this paper, the effect of conventional and reduced (shallow) tillage methods on soil condition and maize yield was examined. A manual penetrometer was used to determine the physical condition and compactness of the soil. The soil moisture content was determined with deep-probe measurement (based on a capacitive method). In addition to soil analyses, the yield per hectare of the different plots was also observed. In reduced tillage, one compacted layer is shown in the soil resistance profile determined with a penetrometer, while there are two compacted layers in autumn ploughing. In both treatments, the highest resistance was measured in the compacted (pan disk) layer that developed under the cultivated layer when primary tillage was performed at the same depth for several years. The unfavourable impact of spring shallow primary tillage on physical soil condition is shown by the fact that the compaction of the pan disk exceeds the critical limit value of 3 MPa.
Over the years, further deterioration of physical condition was observed below the regularly cultivated layer. In shallow tillage, the soil contained more moisture (at 40-50 cm depth and below) than in the ploughed treatment. There are multiple reasons for this phenomenon. This tillage method is moisture-preserving, as the depth of disturbance (15 cm) is lower than in ploughed treatments (25-30 cm). The soil surface is covered by stem residues after sowing, which may reduce the extent of evaporation. The soil-surface CO2 emission was determined based on primary tillage depth, intensity, and the period which passed since primary tillage. Spring shallow primary tillage resulted in higher CO2 emission than conventional tillage. The average maize yield was significantly higher in the autumn ploughing treatment (6.6-13.9 t/ha) in the first half (7 years) of the examined period (2000-2014). Higher average yields were observed in two years in the spring shallow tillage treatment, and no significant yield difference was observed between tillage treatments in the other examined years. Reduced (shallow) tillage increases the risk of near-surface soil compaction and increases the biological activity of the soil, while it reduces the moisture loss of the soil. Reducing tillage intensity does not necessarily reduce the average yield of the produced crop (maize).

Top