Science.gov

Sample records for sample temperature modeling

  1. Recommended Maximum Temperature For Mars Returned Samples

    NASA Technical Reports Server (NTRS)

    Beaty, D. W.; McSween, H. Y.; Czaja, A. D.; Goreva, Y. S.; Hausrath, E.; Herd, C. D. K.; Humayun, M.; McCubbin, F. M.; McLennan, S. M.; Hays, L. E.

    2016-01-01

    The Returned Sample Science Board (RSSB) was established in 2015 by NASA to provide expertise from the planetary sample community to the Mars 2020 Project. The RSSB's first task was to address the effect of heating during acquisition and storage of samples on scientific investigations that could be expected to be conducted if the samples are returned to Earth. Sample heating may cause changes that could adversely affect scientific investigations. Previous studies of temperature requirements for returned martian samples fall within a wide range (-73 to 50 degrees Centigrade) and, for mission concepts that have a life detection component, the recommended threshold was less than or equal to -20 degrees Centigrade. The RSSB was asked by the Mars 2020 project to determine whether or not a temperature requirement was needed within the range of 30 to 70 degrees Centigrade. There are eight expected temperature regimes to which the samples could be exposed, from the moment that they are drilled until they are placed into a temperature-controlled environment on Earth. Two of those - heating during sample acquisition (drilling) and heating while cached on the Martian surface - potentially subject samples to the highest temperatures. The RSSB focused on the upper temperature limit that Mars samples should be allowed to reach. We considered 11 scientific investigations where thermal excursions may have an adverse effect on the science outcome. Those are: (T-1) organic geochemistry, (T-2) stable isotope geochemistry, (T-3) prevention of mineral hydration/dehydration and phase transformation, (T-4) retention of water, (T-5) characterization of amorphous materials, (T-6) putative Martian organisms, (T-7) oxidation/reduction reactions, (T-8) 4He thermochronometry, (T-9) radiometric dating using fission, cosmic-ray or solar-flare tracks, (T-10) analyses of trapped gases, and (T-11) magnetic studies.

  2. Multiphoton cryo microscope with sample temperature control

    NASA Astrophysics Data System (ADS)

    Breunig, H. G.; Uchugonova, A.; König, K.

    2013-02-01

    We present a multiphoton microscope system which combines the advantages of multiphoton imaging with precise control of the sample temperature. The microscope provides online insight into temperature-induced changes and effects in plant tissue and animal cells with subcellular resolution during cooling and thawing processes. Image contrast is based on multiphoton fluorescence intensity or fluorescence lifetime in the range from liquid nitrogen temperature up to +600°C. In addition, microspectra from the imaged regions can be recorded. We present measurement results from plant leaf samples as well as Chinese hamster ovary cells.

  3. Coercivity maxima at low temperatures of lunar samples

    NASA Technical Reports Server (NTRS)

    Schwerer, F. C.; Nagata, T.

    1974-01-01

    Recent measurements have shown that the magnetic coercive forces of some Apollo lunar samples show an unexpected decrease with decreasing temperature at cryogenic temperatures. This behavior can be explained quantitatively in terms of a model which considers additive contributions from a soft, reversible magnetic phase and from a harder, hysteretic magnetic phase.

  4. Estimation of river and stream temperature trends under haphazard sampling

    USGS Publications Warehouse

    Gray, Brian R.; Lyubchich, Vyacheslav; Gel, Yulia R.; Rogala, James T.; Robertson, Dale M.; Wei, Xiaoqiao

    2015-01-01

    Long-term temporal trends in water temperature in rivers and streams are typically estimated under the assumption of evenly spaced space-time measurements. However, sampling times and dates associated with historical water temperature datasets and some sampling designs may be haphazard. As a result, trends in temperature may be confounded with trends in time or space of sampling which, in turn, may yield biased trend estimators and thus unreliable conclusions. We address this concern using multilevel (hierarchical) linear models, where time effects are allowed to vary randomly by day and date effects by year. We evaluate the proposed approach by Monte Carlo simulations with imbalance, sparse data, and confounding by trend in time and date of sampling. Simulation results indicate unbiased trend estimators, while results from a case study of temperature data from the Illinois River, USA, conform to river thermal assumptions. We also propose a new nonparametric bootstrap inference on multilevel models that allows for a relatively flexible and distribution-free quantification of uncertainties. The proposed multilevel modeling approach may be elaborated to accommodate nonlinearities within days and years when sampling times or dates typically span temperature extremes.
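
    A minimal sketch of this kind of multilevel trend model is given below. It uses synthetic data (the variable names, effect sizes, and sampling scheme are assumptions, not the authors' Illinois River analysis) and fits a fixed long-term trend together with season and time-of-day terms and a random intercept for each sampling day, using statsmodels.

      # Sketch: multilevel trend model with a random intercept per sampling day.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      rows = []
      for day in range(300):
          date = pd.Timestamp("1995-01-01") + pd.Timedelta(days=int(rng.integers(0, 7300)))
          year = date.year + date.dayofyear / 365.0
          day_effect = rng.normal(0, 1.0)                        # shared within-day deviation
          for _ in range(3):                                     # a few haphazard samples per day
              hour = rng.uniform(6, 18)                          # haphazard sampling time
              temp = (15 + 8 * np.sin(2 * np.pi * date.dayofyear / 365)   # season
                      + 0.3 * (hour - 12)                        # diel cycle
                      + 0.05 * (year - 1995)                     # true trend: +0.05 degC/yr
                      + day_effect + rng.normal(0, 0.3))
              rows.append(dict(temp=temp, year=year, hour=hour, day=day,
                               sin_doy=np.sin(2 * np.pi * date.dayofyear / 365),
                               cos_doy=np.cos(2 * np.pi * date.dayofyear / 365)))
      df = pd.DataFrame(rows)
      # Trend, diel, and seasonal terms are fixed effects; sampling day is a random intercept.
      fit = smf.mixedlm("temp ~ year + hour + sin_doy + cos_doy", df, groups=df["day"]).fit()
      print(f"estimated trend: {fit.params['year']:.3f} degC per year")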

  5. Apparatus for low temperature thermal desorption spectroscopy of portable samples.

    PubMed

    Stuckenholz, S; Büchner, C; Ronneburg, H; Thielsch, G; Heyde, M; Freund, H-J

    2016-04-01

    An experimental setup for low temperature thermal desorption spectroscopy (TDS) integrated in an ultrahigh vacuum-chamber housing a high-end scanning probe microscope for comprehensive multi-tool surface science analysis is described. This setup enables the characterization with TDS at low temperatures (T > 22 K) of portable sample designs, as is the case for scanning probe optimized setups or high-throughput experiments. This combination of techniques allows a direct correlation between surface morphology, local spectroscopy, and reactivity of model catalysts. The performance of the multi-tool setup is illustrated by measurements of a model catalyst. TDS of CO from Mo(001) and from Mo(001) supported MgO thin films were carried out and combined with scanning tunneling microscopy measurements.

  6. Apparatus for low temperature thermal desorption spectroscopy of portable samples

    NASA Astrophysics Data System (ADS)

    Stuckenholz, S.; Büchner, C.; Ronneburg, H.; Thielsch, G.; Heyde, M.; Freund, H.-J.

    2016-04-01

    An experimental setup for low temperature thermal desorption spectroscopy (TDS) integrated in an ultrahigh vacuum-chamber housing a high-end scanning probe microscope for comprehensive multi-tool surface science analysis is described. This setup enables the characterization with TDS at low temperatures (T > 22 K) of portable sample designs, as is the case for scanning probe optimized setups or high-throughput experiments. This combination of techniques allows a direct correlation between surface morphology, local spectroscopy, and reactivity of model catalysts. The performance of the multi-tool setup is illustrated by measurements of a model catalyst. TDS of CO from Mo(001) and from Mo(001) supported MgO thin films were carried out and combined with scanning tunneling microscopy measurements.

  7. Proximity effect thermometer for local temperature measurements on mesoscopic samples.

    SciTech Connect

    Aumentado, J.; Eom, J.; Chandrasekhar, V.; Baldo, P. M.; Rehn, L. E.; Materials Science Division; Northwestern Univ; Univ. of Chicago

    1999-11-29

    Using the strong temperature-dependent resistance of a normal metal wire in proximity to a superconductor, we have been able to measure the local temperature of electrons heated by flowing a direct current (dc) through a metallic wire to within a few tens of millikelvin at low temperatures. By placing two such thermometers at different parts of a sample, we have been able to measure the temperature difference induced by a dc flowing in the samples. This technique may provide a flexible means of making quantitative thermal and thermoelectric measurements on mesoscopic metallic samples.

  8. Calibration of tip and sample temperature of a scanning tunneling microscope using a superconductive sample

    SciTech Connect

    Stocker, Matthias; Pfeifer, Holger; Koslowski, Berndt

    2014-05-15

    The temperature of the electrodes is a crucial parameter in virtually all tunneling experiments. The temperature not only controls the thermodynamic state of the electrodes but also causes thermal broadening, which limits the energy resolution. Unfortunately, the construction of many scanning tunneling microscopes entails a weak thermal link between tip and sample in order to make one side movable, so the temperature of that electrode is poorly defined. Here, the authors present a procedure to calibrate the tip temperature by very simple means. The authors use a superconducting sample (Nb) and a standard tip made from W. Due to the asymmetry in the density of states of the superconductor (SC) to normal metal (NM) tunneling junction, the SC temperature controls predominantly the density of states while the NM controls the thermal smearing. By numerically simulating the I-V curves and numerically optimizing the tip temperature and the SC gap width, the tip temperature can be accurately deduced if the sample temperature is known or measurable. In our case, the temperature dependence of the SC gap may serve as a temperature sensor, leading to an accurate NM temperature even if the SC temperature is unknown.

  9. Radiation and temperature effects on LDEF fiber optic samples

    NASA Technical Reports Server (NTRS)

    Johnston, A. R.; Hartmayer, R.; Bergman, L. A.

    1993-01-01

    Results obtained from the JPL Fiber Optics Long Duration Exposure Facility (LDEF) Experiment since the June 1991 Experimenters' Workshop are addressed. Radiation darkening of laboratory control samples and the subsequent annealing were measured in the laboratory. The long-term residual loss was compared with that of the LDEF flight samples and found to be in agreement. The results of laboratory temperature tests on the flight samples, extending over a period of about nine years, including the pre-flight and post-flight analysis periods, are described. The temperature response of the different cable samples varies widely, and appears in two samples to be affected by polymer aging. Conclusions to date are summarized.

  10. Rotating sample magnetometer for cryogenic temperatures and high magnetic fields.

    PubMed

    Eisterer, M; Hengstberger, F; Voutsinas, C S; Hörhager, N; Sorta, S; Hecher, J; Weber, H W

    2011-06-01

    We report on the design and implementation of a rotating sample magnetometer (RSM) operating in the variable temperature insert (VTI) of a cryostat equipped with a high-field magnet. The limited space and the cryogenic temperatures impose the most critical design parameters: the small bore size of the magnet requires a very compact pick-up coil system and the low temperatures demand a very careful design of the bearings. Despite these difficulties the RSM achieves excellent resolution at high magnetic field sweep rates, exceeding that of a typical vibrating sample magnetometer by about a factor of ten. In addition the gas-flow cryostat and the high-field superconducting magnet provide a temperature and magnetic field range unprecedented for this type of magnetometer.

  11. The effects of spatial sampling choices on MR temperature measurements.

    PubMed

    Todd, Nick; Vyas, Urvi; de Bever, Josh; Payne, Allison; Parker, Dennis L

    2011-02-01

    The purpose of this article is to quantify the effects that spatial sampling parameters have on the accuracy of magnetic resonance temperature measurements during high intensity focused ultrasound treatments. Spatial resolution and position of the sampling grid were considered using experimental and simulated data for two different types of high intensity focused ultrasound heating trajectories (a single point and a 4-mm circle) with maximum measured temperature and thermal dose volume as the metrics. It is demonstrated that measurement accuracy is related to the curvature of the temperature distribution, where regions with larger spatial second derivatives require higher resolution. The location of the sampling grid relative to the temperature distribution has a significant effect on the measured values. When imaging at 1.0 × 1.0 × 3.0 mm³ resolution, the measured values for maximum temperature and volume dosed to 240 cumulative equivalent minutes (CEM) or greater varied by 17% and 33%, respectively, for the single-point heating case, and by 5% and 18%, respectively, for the 4-mm circle heating case. Accurate measurement of the maximum temperature required imaging at 1.0 × 1.0 × 3.0 mm³ resolution for the single-point heating case and 2.0 × 2.0 × 5.0 mm³ resolution for the 4-mm circle heating case.
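
    As a rough numerical illustration of the partial-volume effect described above (a hypothetical one-dimensional Gaussian temperature rise, not the study's data), the sketch below averages a narrow hot spot over voxels of different sizes and grid offsets and reports the worst-case apparent peak.

      # Sketch: apparent peak temperature vs. voxel size and sampling-grid offset.
      import numpy as np

      def peak_measured(voxel_mm, offset_mm, sigma_mm=1.5, dT=20.0):
          """Average a 1-D Gaussian temperature profile over voxels; return the hottest voxel."""
          x = np.linspace(-20, 20, 4001)                         # 10-micron evaluation grid
          profile = dT * np.exp(-x**2 / (2 * sigma_mm**2))
          voxel_averages = []
          for lo in np.arange(-20, 20, voxel_mm) + offset_mm:
              inside = (x >= lo) & (x < lo + voxel_mm)
              if inside.any():
                  voxel_averages.append(profile[inside].mean())  # partial-volume averaging
          return max(voxel_averages)

      for voxel in (1.0, 2.0, 3.0):
          worst = min(peak_measured(voxel, off) for off in np.linspace(0, voxel, 11))
          print(f"voxel {voxel:.1f} mm: worst-case measured peak {worst:.1f} of 20.0 C")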

  12. High-temperature constitutive modeling

    NASA Technical Reports Server (NTRS)

    Robinson, D. N.; Ellis, J. R.

    1984-01-01

    Thermomechanical service conditions involving high temperatures, thermal transients, and mechanical loads severe enough to cause measurable inelastic deformation are studied. Structural analysis in support of the design of high-temperature components depends strongly on accurate mathematical representations of the nonlinear, hereditary, inelastic behavior of structural alloys at high temperature, particularly in the relatively small strain range. Progress is discussed in the following areas: multiaxial experimentation to provide a basis for high-temperature multiaxial constitutive relationships; nonisothermal testing and theoretical development toward a complete thermomechanically path dependent formulation of viscoplasticity; and development of a viscoplastic constitutive model accounting for initial anisotropy.

  13. Ultrasound absorption measurements in rock samples at low temperatures

    NASA Technical Reports Server (NTRS)

    Herminghaus, C.; Berckhemer, H.

    1974-01-01

    A new technique, comparable with the reverberation method in room acoustics, is described. It allows Q measurements on rock samples of arbitrary shape in the frequency range of 50 to 600 kHz in vacuum (0.1 mTorr) and at low temperatures (+20 to -180 °C). The method was developed in particular to investigate rock samples under lunar conditions. Ultrasound absorption has been measured on volcanics, breccias, gabbros, feldspar, and quartz of different grain size and texture, yielding the following results: evacuation raises Q mainly through lowering the humidity in the rock. In a dry compact rock, the effect of evacuation is small. With decreasing temperature, Q generally increases. Between +20 and -30 °C, Q does not change much. With further decrease of temperature, in many cases distinct anomalies appear, where Q becomes frequency dependent.

  14. Dual-temperature acoustic levitation and sample transport apparatus

    NASA Technical Reports Server (NTRS)

    Trinh, E.; Robey, J.; Jacobi, N.; Wang, T.

    1986-01-01

    The properties of a dual-temperature resonant chamber to be used for acoustical levitation and positioning have been theoretically and experimentally studied. The predictions of a first-order dissipationless treatment of the generalized wave equation for an inhomogeneous medium are in close agreement with experimental results for the temperature dependence of the resonant mode spectrum and the acoustic pressure distribution, although the measured magnitude of the pressure variations does not correlate well with the calculated one. Ground-based levitation of low-density samples has been demonstrated at 800 °C, where steady-state forces up to 700 dyn were generated.

  15. A low temperature scanning force microscope for biological samples

    SciTech Connect

    Gustafsson, Mats Gustaf Lennart

    1993-05-01

    An SFM has been constructed capable of operating at 143 K. Two contributions to SFM technology are described: a new method of fabricating tips, and new designs of SFM springs that significantly lower the noise level. The SFM has been used to image several biological samples (including collagen, ferritin, RNA, purple membrane) at 143 K and room temperature. No improvement in resolution resulted from 143 K operation; several possible reasons for this are discussed. Possibly sharper tips may help. The 143 K SFM will allow the study of new categories of samples, such as those prepared by freeze-frame, single molecules (temperature dependence of mechanical properties), etc. The SFM was used to cut single collagen molecules into segments with a precision of ≤ 10 nm.

  16. Influence of probe-sample temperature difference on thermal mapping contrast in scanning thermal microscopy imaging

    NASA Astrophysics Data System (ADS)

    Kaźmierczak-Bałata, Anna; Juszczyk, Justyna; Trefon-Radziejewska, Dominika; Bodzenta, Jerzy

    2017-03-01

    The purpose of this work is to investigate the influence of a temperature difference through a probe-sample contact on thermal contrast in Scanning Thermal Microscopy imaging. A variety of combinations of temperature differences in the probe-sample system were first analyzed based on an electro-thermal finite element model. The numerical analysis included cooling the sample, as well as heating the sample and the probe. Owing to its simple implementation, experimental verification involved modifying the standard imaging technique by heating the sample. Experiments were carried out in the temperature range between 298 K and 328 K. Contrast in thermal mapping was improved for a low probe current with a heated sample.

  17. Microcalorimetry: Wide Temperature Range, High Field, Small Sample Measurements

    NASA Astrophysics Data System (ADS)

    Hellman, Frances

    2000-03-01

    We have used Si micromachining techniques to fabricate devices for measuring specific heat or other calorimetric signals from microgram-quantity samples over a temperature range from 1 to 900 K in magnetic fields to date up to 8 T. The devices are based on a relatively robust silicon nitride membrane with thin film heaters and thermometers. Different types of thermometers are used for different purposes and in different temperature ranges. These devices are particularly useful for thin film samples (typically 200-400 nm thick at present) deposited directly onto the membrane through a Si micromachined evaporation mask. They have also been used for small single crystal samples attached by conducting grease or solder, and for powder samples dissolved in a solvent and dropped onto devices. The measurement technique used (relaxation method) is particularly suited to high field measurements because the thermal conductance can be measured once in zero field and is field independent, while the time constant of the relaxation does not depend on thermometer calibration. Present development efforts include designs which show promise for time-resolved calorimetry measurements of biological samples in small amounts of water. Samples measured to date include amorphous magnetic thin films (a-TbFe2 and giant negative magnetoresistance a-Gd-Si alloys), empty and filled fullerenes (C60, K3C60, C82, La@C82, C84, and Sc2@C84), single crystal manganites (La1-xSrxMnO3), antiferromagnetic multilayers (NiO/CoO, NiO/MgO, and CoO/MgO), and nanoparticle magnetic materials (CoO in a Ag matrix).
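
    The relaxation-method bookkeeping mentioned above can be pictured with a short toy calculation (all numbers below are hypothetical): the heat capacity follows from C = K*tau, where K is the membrane link conductance measured once in zero field and tau is the fitted decay time of the temperature relaxation.

      # Sketch: extract tau from a simulated relaxation decay and form C = K * tau.
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(8)
      t = np.linspace(0, 0.5, 400)                               # time, s
      tau_true, dT0, K = 0.08, 0.05, 2.0e-7                      # s, K, W/K (made-up values)
      signal = dT0 * np.exp(-t / tau_true) + rng.normal(0, 5e-4, t.size)
      decay = lambda t, amp, tau: amp * np.exp(-t / tau)
      popt, _ = curve_fit(decay, t, signal, p0=[0.04, 0.05])
      tau_fit = popt[1]
      print(f"tau = {tau_fit * 1e3:.1f} ms, C = K*tau = {K * tau_fit:.2e} J/K")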

  18. Sampling Errors in Satellite-derived Infrared Sea Surface Temperatures

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Minnett, P. J.

    2014-12-01

    Sea Surface Temperature (SST) measured from satellites has been playing a crucial role in understanding geophysical phenomena. Generating SST Climate Data Records (CDRs) is considered to impose the most stringent requirements on data accuracy. For infrared SSTs, sampling uncertainties caused by cloud presence and persistence generate errors. In addition, for sensors with narrow swaths, the swath gap acts as another source of sampling error. This study is concerned with quantifying and understanding such sampling errors, which are important for SST CDR generation and for a wide range of satellite SST users. In order to quantify these errors, a reference Level 4 SST field (Multi-scale Ultra-high Resolution SST) is sampled by using realistic swath and cloud masks of the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Advanced Along Track Scanning Radiometer (AATSR). Global and regional SST uncertainties are studied by assessing the sampling error at different temporal and spatial resolutions (7 spatial resolutions from 4 kilometers to 5.0° at the equator and 5 temporal resolutions from daily to monthly). Global annual and seasonal mean sampling errors are large in the high latitude regions, especially the Arctic, and have geographical distributions that are most likely related to stratus cloud occurrence and persistence. The region between 30°N and 30°S has smaller errors compared to higher latitudes, except for the Tropical Instability Wave area, where persistent negative errors are found. Important differences in sampling errors are also found between the broad and narrow swath scan patterns and between day and night fields. This is the first time that realistic magnitudes of the sampling errors have been quantified. Future improvement in the accuracy of SST products will benefit from this quantification.
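
    A minimal sketch of the masking exercise described above, using a synthetic "truth" field and a random cloud mask in place of the realistic MODIS/AATSR masks used in the study:

      # Sketch: sampling error of a grid-cell mean SST caused by unsampled (cloudy) pixels.
      import numpy as np

      rng = np.random.default_rng(0)
      truth = 20 + 2 * np.sin(np.linspace(0, 4 * np.pi, 100))[None, :] + rng.normal(0, 0.3, (100, 100))
      cloud_fraction = 0.6
      cloudy = rng.random(truth.shape) < cloud_fraction          # True = unsampled pixel
      sampled = np.ma.array(truth, mask=cloudy)
      error = sampled.mean() - truth.mean()                      # clear-sky mean vs. true mean
      print(f"grid-cell mean error at {cloud_fraction:.0%} cloud cover: {error:+.3f} K")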

  19. Modeling abundance effects in distance sampling

    USGS Publications Warehouse

    Royle, J. Andrew; Dawson, D.K.; Bates, S.

    2004-01-01

    Distance-sampling methods are commonly used in studies of animal populations to estimate population density. A common objective of such studies is to evaluate the relationship between abundance or density and covariates that describe animal habitat or other environmental influences. However, little attention has been focused on methods of modeling abundance covariate effects in conventional distance-sampling models. In this paper we propose a distance-sampling model that accommodates covariate effects on abundance. The model is based on specification of the distance-sampling likelihood at the level of the sample unit in terms of local abundance. This model is augmented with a Poisson regression model for local abundance that is parameterized in terms of available covariates. Maximum-likelihood estimation of detection and density parameters is based on the integrated likelihood, wherein local abundance is removed from the likelihood by integration. We provide an example using avian point-transect data of Ovenbirds (Seiurus aurocapillus) collected using a distance-sampling protocol and two measures of habitat structure (understory cover and basal area of overstory trees). The model yields a sensible description (a positive effect of understory cover, a negative effect of basal area) of the relationship between habitat and Ovenbird density that can be used to evaluate the effects of habitat management on Ovenbird populations.
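
    A simplified sketch of this model structure follows (my own toy simulation and notation, not the paper's code or the Ovenbird data): local abundance is Poisson with a log-linear covariate effect, detection is half-normal with scale sigma, and marginalizing abundance gives site counts that are Poisson with mean lambda_i * pbar(sigma); the observed distances contribute the usual conditional density given detection.

      # Sketch: integrated likelihood for distance sampling with an abundance covariate.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import poisson

      rng = np.random.default_rng(2)
      W, n_sites = 100.0, 150                                    # strip half-width (m), sample units
      cover = rng.uniform(0, 1, n_sites)                         # habitat covariate
      b0_true, b1_true, sigma_true = 1.0, 1.2, 40.0

      def pbar(sigma):
          """Average half-normal detection probability over the strip [0, W]."""
          x = np.linspace(0, W, 2001)
          return np.exp(-x**2 / (2 * sigma**2)).mean()

      lam = np.exp(b0_true + b1_true * cover)                    # simulate abundance and detections
      N = rng.poisson(lam)
      counts, distances = [], []
      for i in range(n_sites):
          x = rng.uniform(0, W, N[i])
          detected = rng.random(N[i]) < np.exp(-x**2 / (2 * sigma_true**2))
          counts.append(detected.sum())
          distances.extend(x[detected])
      counts, distances = np.array(counts), np.array(distances)

      def negloglik(theta):
          b0, b1, log_sigma = theta
          sigma = np.exp(log_sigma)
          p = pbar(sigma)
          ll_counts = poisson.logpmf(counts, np.exp(b0 + b1 * cover) * p).sum()  # N integrated out
          ll_dist = np.sum(-distances**2 / (2 * sigma**2) - np.log(W * p))       # f(x | detected)
          return -(ll_counts + ll_dist)

      fit = minimize(negloglik, x0=[0.0, 0.0, np.log(30.0)], method="Nelder-Mead")
      print("b0, b1, sigma estimates:", fit.x[0], fit.x[1], np.exp(fit.x[2]))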

  20. Wang-Landau sampling with logarithmic windows for continuous models.

    PubMed

    Xie, Y L; Chu, P; Wang, Y L; Chen, J P; Yan, Z B; Liu, J-M

    2014-01-01

    We present a modified Wang-Landau sampling (MWLS) for continuous statistical models by partitioning the energy space into a set of windows with logarithmically shrinking width. To demonstrate its necessity and advantages, we apply this sampling to several continuous models, including the two-dimensional square XY spin model, the triangular J1-J2 spin model, and the Lennard-Jones cluster model. Given a finite number of bins for partitioning the energy space, the conventional Wang-Landau sampling may not generate a sufficiently accurate density of states (DOS) around the energy boundaries. However, it is demonstrated that much more accurate DOS can be obtained by this MWLS, and thus a precise evaluation of the thermodynamic behavior of the continuous models at extremely low temperature (kBT < 0.1) becomes accessible. In addition to highly reliable data sampling, the present algorithm also allows efficient computation.
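
    For orientation, here is a sketch of conventional Wang-Landau sampling on a toy one-dimensional continuous system (a single coordinate in a double-well potential). It is not the modified windowed scheme of the paper, which would additionally partition this energy range into logarithmically shrinking windows; all parameters below are illustrative.

      # Sketch: conventional Wang-Landau estimate of the DOS of E(x) = (x^2 - 1)^2.
      import numpy as np

      rng = np.random.default_rng(3)
      E = lambda x: (x**2 - 1.0)**2
      e_min, e_max, n_bins = 0.0, 4.0, 80
      ln_g = np.zeros(n_bins)                                    # running log density of states
      hist = np.zeros(n_bins)
      bin_of = lambda e: min(int((e - e_min) / (e_max - e_min) * n_bins), n_bins - 1)
      ln_f, x = 1.0, 0.0                                         # modification factor, walker position
      while ln_f > 1e-4:
          for _ in range(20000):
              x_new = x + rng.normal(0, 0.3)
              if E(x_new) < e_max and np.log(rng.random()) < ln_g[bin_of(E(x))] - ln_g[bin_of(E(x_new))]:
                  x = x_new                                      # accept with min(1, g_old/g_new)
              b = bin_of(E(x))
              ln_g[b] += ln_f                                    # update DOS and visit histogram
              hist[b] += 1
          visited = hist > 0
          if hist[visited].min() > 0.8 * hist[visited].mean():   # flat enough: halve ln(f)
              hist[:] = 0
              ln_f /= 2.0
      print(np.round(ln_g - ln_g.min(), 2))                      # relative log-DOS estimate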

  1. High temperature furnace modeling and performance verifications

    NASA Technical Reports Server (NTRS)

    Smith, James E., Jr.

    1992-01-01

    Analytical, numerical, and experimental studies were performed on two classes of high temperature materials processing sources for their potential use as directional solidification furnaces. The research concentrated on a commercially available high temperature furnace using a zirconia ceramic tube as the heating element and an Arc Furnace based on a tube welder. The first objective was to assemble the zirconia furnace and construct the parts needed to perform experiments successfully. The second objective was to evaluate the zirconia furnace's performance as a directional solidification furnace element. The third objective was to establish a database on materials used in the furnace construction, with particular emphasis on emissivities, transmissivities, and absorptivities as functions of wavelength and temperature. One- and two-dimensional spectral radiation heat transfer models were developed for comparison with standard modeling techniques and were used to predict wall and crucible temperatures. The fourth objective addressed the development of a SINDA model for the Arc Furnace, which was used to design sample holders and to estimate cooling-media temperatures for steady-state operation of the furnace. The fifth objective addressed the initial performance evaluation of the Arc Furnace and associated equipment for directional solidification. Results for these objectives are presented.

  2. Monte Carlo Sampling of Negative-temperature Plasma States

    SciTech Connect

    John A. Krommes; Sharadini Rath

    2002-07-19

    A Monte Carlo procedure is used to generate N-particle configurations compatible with two-temperature canonical equilibria in two dimensions, with particular attention to nonlinear plasma gyrokinetics. An unusual feature of the problem is the importance of a nontrivial probability density function R0(Φ), the probability of realizing a set Φ of Fourier amplitudes associated with an ensemble of uniformly distributed, independent particles. This quantity arises because the equilibrium distribution is specified in terms of Φ, whereas the sampling procedure naturally produces particle states γ; Φ and γ are related via a gyrokinetic Poisson equation, highly nonlinear in its dependence on γ. Expansion and asymptotic methods are used to calculate R0(Φ) analytically; excellent agreement is found between the large-N asymptotic result and a direct numerical calculation. The algorithm is tested by successfully generating a variety of states of both positive and negative temperature, including ones in which either the longest- or shortest-wavelength modes are excited to relatively very large amplitudes.

  3. Tissue Sampling Guides for Porcine Biomedical Models.

    PubMed

    Albl, Barbara; Haesner, Serena; Braun-Reichhart, Christina; Streckel, Elisabeth; Renner, Simone; Seeliger, Frank; Wolf, Eckhard; Wanke, Rüdiger; Blutke, Andreas

    2016-04-01

    This article provides guidelines for organ and tissue sampling adapted to porcine animal models in translational medical research. Detailed protocols for the determination of sampling locations and numbers as well as recommendations on the orientation, size, and trimming direction of samples from ∼50 different porcine organs and tissues are provided in the Supplementary Material. The proposed sampling protocols include the generation of samples suitable for subsequent qualitative and quantitative analyses, including cryohistology, paraffin and plastic histology, immunohistochemistry, in situ hybridization, electron microscopy, and quantitative stereology, as well as molecular analyses of DNA, RNA, proteins, metabolites, and electrolytes. With regard to the planned extent of sampling efforts, time, and personnel expenses, and dependent upon the scheduled analyses, different protocols are provided. These protocols are adjusted for (I) routine screenings, as used in general toxicity studies or in analyses of gene expression patterns or histopathological organ alterations, (II) advanced analyses of single organs/tissues, and (III) large-scale sampling procedures to be applied in biobank projects. Providing a robust reference for studies of porcine models, the described protocols will ensure the efficiency of sampling, the systematic recovery of high-quality samples representing the entire organ or tissue, as well as the intra-/interstudy comparability and reproducibility of results.

  4. New high temperature plasmas and sample introduction systems for analytical atomic emission and mass spectrometry

    SciTech Connect

    Montaser, A.

    1992-01-01

    New high temperature plasmas and new sample introduction systems are explored for rapid elemental and isotopic analysis of gases, solutions, and solids using mass spectrometry and atomic emission spectrometry. Emphasis was placed on atmospheric pressure He inductively coupled plasmas (ICPs) suitable for atomization, excitation, and ionization of elements; simulation and computer modeling of plasma sources with potential for use in spectrochemical analysis; spectroscopic imaging and diagnostic studies of high temperature plasmas, particularly He ICP discharges; and development of new, low-cost sample introduction systems and examination of techniques for probing the aerosols over a wide range.

  5. Observing System Simulation Experiments for the assessment of temperature sampling strategies in the Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Raicich, F.; Rampazzo, A.

    2003-01-01

    For the first time in the Mediterranean Sea various temperature sampling strategies are studied and compared to each other by means of the Observing System Simulation Experiment technique. Their usefulness in the framework of the Mediterranean Forecasting System (MFS) is assessed by quantifying their impact in a Mediterranean General Circulation Model in numerical twin experiments via univariate data assimilation of temperature profiles in summer and winter conditions. Data assimilation is performed by means of the optimal interpolation algorithm implemented in the SOFA (System for Ocean Forecasting and Analysis) code. The sampling strategies studied here include various combinations of eXpendable BathyThermograph (XBT) profiles collected along Volunteer Observing Ship (VOS) tracks, Airborne XBTs (AXBTs) and sea surface temperatures. The actual sampling strategy adopted in the MFS Pilot Project during the Targeted Operational Period (TOP, winter-spring 2000) is also studied.

  6. Real sample temperature: a critical issue in the experiments of nuclear resonant vibrational spectroscopy on biological samples.

    PubMed

    Wang, Hongxin; Yoda, Yoshitaka; Kamali, Saeed; Zhou, Zhao Hui; Cramer, Stephen P

    2012-03-01

    There are several practical and intertwined issues that make nuclear resonant vibrational spectroscopy (NRVS) experiments on biological samples difficult to perform. The sample temperature is one of the most important issues. In NRVS the real sample temperatures can be very different from the readings on the temperature sensors. In this study the following have been performed: (i) citing and analyzing various existing NRVS data to assess the real sample temperatures during the NRVS measurements and to understand their trends with the samples' loading conditions; (ii) designing several NRVS measurements with (Et4N)[FeCl4] to verify these trends; and (iii) proposing a new sample-loading procedure to achieve significantly lower real sample temperatures and to balance among the intertwined experimental issues in biological NRVS measurements.

  7. Origin and temperature dependence of radiation damage in biological samples at cryogenic temperatures.

    PubMed

    Meents, Alke; Gutmann, Sascha; Wagner, Armin; Schulze-Briese, Clemens

    2010-01-19

    Radiation damage is the major impediment for obtaining structural information from biological samples by using ionizing radiation such as x-rays or electrons. The knowledge of underlying processes especially at cryogenic temperatures is still fragmentary, and a consistent mechanism has not been found yet. By using a combination of single-crystal x-ray diffraction, small-angle scattering, and qualitative and quantitative radiolysis experiments, we show that hydrogen gas, formed inside the sample during irradiation, rather than intramolecular bond cleavage between non-hydrogen atoms, is mainly responsible for the loss of high-resolution information and contrast in diffraction experiments and microscopy. The experiments that are presented in this paper cover a temperature range between 5 and 160 K and reveal that the commonly used temperature in x-ray crystallography of 100 K is not optimal in terms of minimizing radiation damage and thereby increasing the structural information obtainable in a single experiment. At 50 K, specific radiation damage to disulfide bridges is reduced by a factor of 4 compared to 100 K, and samples can tolerate a factor of 2.6 and 3.9 higher dose, as judged by the increase of Rfree values of elastase and cubic insulin crystals, respectively.

  8. Beam Heating of Samples: Modeling and Verification. Part 2

    NASA Technical Reports Server (NTRS)

    Kazmierczak, Michael; Gopalakrishnan, Pradeep; Kumar, Raghav; Banerjee, Rupak; Snell, Edward; Bellamy, Henry; Rosenbaum, Gerd; vanderWoerd, Mark

    2006-01-01

    Energy absorbed from the X-ray beam by the sample requires cooling by forced convection (i.e. a cryostream) to minimize the temperature increase and the damage caused to the sample by X-ray heating. In this presentation we will first review the current theoretical models and recent studies in the literature, which predict the sample temperature rise for a given set of beam parameters. It should be noted that a common weakness of these previous studies is that none of them provide actual experimental confirmation. This situation is now remedied in our investigation, where the problem of X-ray sample heating is taken up once more. We have investigated the problem theoretically and, in addition to the numerical computations, performed experiments to validate the predictions. We have modeled, analyzed, and experimentally tested the temperature rise of a 1 mm diameter glass sphere (sample surrogate) exposed to an intense synchrotron X-ray beam while it is being cooled in a uniform flow of nitrogen gas. The heat transfer, including external convection and internal heat conduction, was theoretically modeled using CFD to predict the temperature variation in the sphere during cooling and while it was subjected to an undulator (ID sector 19) X-ray beam at the APS. The surface temperature of the sphere during the X-ray beam heating was measured using the infrared camera measurement technique described in a previous talk. The temperatures from the numerical predictions and experimental measurements are compared and discussed. Additional results are reported for two different sphere sizes and for two different supporting-pin orientations.

  9. Magnetic microscopy based on high-Tc SQUIDs for room temperature samples

    NASA Astrophysics Data System (ADS)

    Wang, H. W.; Kong, X. Y.; Ren, Y. F.; Yu, H. W.; Ding, H. S.; Zhao, S. P.; Chen, G. H.; Zhang, L. H.; Zhou, Y. L.; Yang, Q. S.

    2003-11-01

    The SQUID microscope is the most suitable instrument for imaging magnetic fields above sample surfaces if one is mainly interested in field sensitivity. In this paper, both the magnetic moment sensitivity and the spatial resolution of the SQUID microscope are analysed with a simple point-moment model. The result shows that the ratio of SQUID sensor size to sensor-sample distance effectively influences the sensitivity and spatial resolution. These results are compared with experimental magnetic images of room temperature samples from our high-Tc SQUID microscope operating in an unshielded environment, and a brief discussion of further improvements is presented.

  10. Statistical analysis of temperature data sampled at Station-M in the Norwegian Sea

    NASA Astrophysics Data System (ADS)

    Lorentzen, Torbjørn

    2014-02-01

    The paper analyzes sea temperature data sampled at Station-M in the Norwegian Sea. The data cover the period 1948-2010. The following questions are addressed: What type of stochastic process characterizes the temperature series? Are there any changes or patterns which indicate climate change? Are there any characteristics in the data which can be linked to the shrinking sea ice in the Arctic area? Can the series be modeled consistently and applied in forecasting of the future sea temperature? The paper applies the following methods: augmented Dickey-Fuller tests for unit roots and stationarity, ARIMA models for univariate modeling, cointegration and error-correction models for estimating short- and long-term dynamics of non-stationary series, Granger-causality tests for analyzing the interaction pattern between the deep and upper layer temperatures, and simultaneous equation systems for forecasting future temperature. The paper shows that temperature at 2000 m Granger-causes temperature at 150 m, and that the 2000 m series can represent an important information carrier of the long-term development of the sea temperature in the geographical area. Descriptive statistics show that the temperature level has been on a positive trend since the beginning of the 1980s, which is also measured in most of the oceans in the North Atlantic. The analysis shows that the temperature series are cointegrated, which means they share the same long-term stochastic trend and they do not diverge too far from each other. The measured long-term temperature increase is one of the factors that can explain the shrinking summer sea ice in the Arctic region. The analysis shows that there is a significant negative correlation between the shrinking sea ice and the sea temperature at Station-M. The paper shows that the temperature forecasts are conditioned on the properties of the stochastic processes, the causality pattern between the variables, and the specification of the model.
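
    The sketch below illustrates, on synthetic series rather than the Station-M data, the kind of toolchain the paper applies: an augmented Dickey-Fuller unit-root test, a univariate ARIMA fit, and a Granger-causality test of whether the deep series helps predict the upper-layer series (statsmodels is assumed to be available).

      # Sketch: ADF test, ARIMA(1,1,1) fit, and Granger causality on synthetic temperature series.
      import numpy as np
      import pandas as pd
      from statsmodels.tsa.stattools import adfuller, grangercausalitytests
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(4)
      n = 720                                                    # monthly values, 60 years
      deep = 0.002 * np.arange(n) + np.cumsum(rng.normal(0, 0.05, n))   # slowly drifting 2000 m series
      upper = 0.6 * np.roll(deep, 3) + rng.normal(0, 0.2, n)     # upper layer lags the deep layer
      upper[:3] = upper[3]

      adf_stat, pvalue, *_ = adfuller(upper)
      print(f"ADF p-value (upper layer): {pvalue:.3f}")          # large p: unit root not rejected
      print(ARIMA(upper, order=(1, 1, 1)).fit().params)          # univariate ARIMA(1,1,1)
      gc = grangercausalitytests(pd.DataFrame({"upper": upper, "deep": deep}), maxlag=3)
      print(f"Granger p-value (deep -> upper, 3 lags): {gc[3][0]['ssr_ftest'][1]:.4f}")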

  11. Modeling abundance using hierarchical distance sampling

    USGS Publications Warehouse

    Royle, Andy; Kery, Marc

    2016-01-01

    In this chapter, we provide an introduction to classical distance sampling ideas for point and line transect data, and for continuous and binned distance data. We introduce the conditional and the full likelihood, and we discuss Bayesian analysis of these models in BUGS using the idea of data augmentation, which we discussed in Chapter 7. We then extend the basic ideas to the problem of hierarchical distance sampling (HDS), where we have multiple point or transect sample units in space (or possibly in time). The benefit of HDS in practice is that it allows us to directly model spatial variation in population size among these sample units. This is a preeminent concern of most field studies that use distance sampling methods, but it is not a problem that has received much attention in the literature. We show how to analyze HDS models in both the unmarked package and in the BUGS language for point and line transects, and for continuous and binned distance data. We provide a case study of HDS applied to a survey of the island scrub-jay on Santa Cruz Island, California.

  12. Mixture models for distance sampling detection functions.

    PubMed

    Miller, David L; Thomas, Len

    2015-01-01

    We present a new class of models for the detection function in distance sampling surveys of wildlife populations, based on finite mixtures of simple parametric key functions such as the half-normal. The models share many of the features of the widely-used "key function plus series adjustment" (K+A) formulation: they are flexible, produce plausible shapes with a small number of parameters, allow incorporation of covariates in addition to distance and can be fitted using maximum likelihood. One important advantage over the K+A approach is that the mixtures are automatically monotonic non-increasing and non-negative, so constrained optimization is not required to ensure distance sampling assumptions are honoured. We compare the mixture formulation to the K+A approach using simulations to evaluate its applicability in a wide set of challenging situations. We also re-analyze four previously problematic real-world case studies. We find mixtures outperform K+A methods in many cases, particularly spiked line transect data (i.e., where detectability drops rapidly at small distances) and larger sample sizes. We recommend that current standard model selection methods for distance sampling detection functions are extended to include mixture models in the candidate set.
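
    A hedged sketch of the idea (not the authors' package or data): a two-component half-normal mixture detection function for line-transect distances, fitted by maximum likelihood to a simulated "spiked" data set. Because each component is a half-normal, the mixture is automatically non-increasing and non-negative, which is the monotonicity point made above.

      # Sketch: fit a 2-component half-normal mixture detection function by maximum likelihood.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(5)
      W = 60.0                                                   # truncation distance
      s1_true, s2_true, w_true = 5.0, 25.0, 0.6                  # spike + shoulder
      x_all = rng.uniform(0, W, 20000)
      g_true = w_true * np.exp(-x_all**2 / (2 * s1_true**2)) + (1 - w_true) * np.exp(-x_all**2 / (2 * s2_true**2))
      x_obs = x_all[rng.random(x_all.size) < g_true]             # distances of detected animals

      def negloglik(theta):
          s1, s2 = np.exp(theta[:2])
          w = 1.0 / (1.0 + np.exp(-theta[2]))                    # mixing weight in (0, 1)
          g = lambda x: w * np.exp(-x**2 / (2 * s1**2)) + (1 - w) * np.exp(-x**2 / (2 * s2**2))
          mu = np.mean(g(np.linspace(0, W, 2001))) * W           # integral of g over [0, W]
          return -(np.log(g(x_obs)).sum() - x_obs.size * np.log(mu))

      fit = minimize(negloglik, x0=[np.log(10.0), np.log(10.0), 0.0], method="Nelder-Mead")
      s1, s2 = np.exp(fit.x[:2])
      print(f"sigma1={s1:.1f}  sigma2={s2:.1f}  w={1 / (1 + np.exp(-fit.x[2])):.2f}")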

  13. 40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Title 40, Protection of Environment, § 53.57 (procedures for testing Class I and Class II equivalent methods for PM2.5 or PM10-2.5) specifies a test for filter temperature control. The test covers filter temperature during a 4-hour period of active sampling as well as during a subsequent 4-hour non-sampling period.

  14. Annealed Importance Sampling for Neural Mass Models

    PubMed Central

    Penny, Will; Sengupta, Biswa

    2016-01-01

    Neural Mass Models provide a compact description of the dynamical activity of cell populations in neocortical regions. Moreover, models of regional activity can be connected together into networks, and inferences made about the strength of connections, using M/EEG data and Bayesian inference. To date, however, Bayesian methods have been largely restricted to the Variational Laplace (VL) algorithm which assumes that the posterior distribution is Gaussian and finds model parameters that are only locally optimal. This paper explores the use of Annealed Importance Sampling (AIS) to address these restrictions. We implement AIS using proposals derived from Langevin Monte Carlo (LMC) which uses local gradient and curvature information for efficient exploration of parameter space. In terms of the estimation of Bayes factors, VL and AIS agree about which model is best but report different degrees of belief. Additionally, AIS finds better model parameters and we find evidence of non-Gaussianity in their posterior distribution. PMID:26942606
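
    The sketch below shows the AIS recipe on a deliberately trivial model (a one-parameter Gaussian prior/likelihood pair, not a neural mass model), so the estimated log evidence can be checked against the exact value; the temperature schedule, proposal scale, and the Langevin refinement used in the paper are all simplified to a plain random-walk Metropolis step.

      # Sketch: Annealed Importance Sampling estimate of the marginal likelihood of a toy model.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(6)
      y, s_prior, s_lik = 2.0, 3.0, 1.0                          # datum, prior sd, likelihood sd
      log_lik = lambda x: norm.logpdf(y, loc=x, scale=s_lik)
      n_chains, betas = 500, np.linspace(0.0, 1.0, 101)          # geometric path prior -> posterior

      x = rng.normal(0.0, s_prior, n_chains)                     # start each chain from the prior
      log_w = np.zeros(n_chains)
      for b_prev, b in zip(betas[:-1], betas[1:]):
          log_w += (b - b_prev) * log_lik(x)                     # importance-weight increment
          prop = x + rng.normal(0.0, 1.0, n_chains)              # Metropolis step targeting prior * lik^b
          log_alpha = (norm.logpdf(prop, 0.0, s_prior) + b * log_lik(prop)
                       - norm.logpdf(x, 0.0, s_prior) - b * log_lik(x))
          x = np.where(np.log(rng.random(n_chains)) < log_alpha, prop, x)

      log_Z = np.logaddexp.reduce(log_w) - np.log(n_chains)
      print(f"AIS log evidence:   {log_Z:.3f}")
      print(f"exact log evidence: {norm.logpdf(y, 0.0, np.sqrt(s_prior**2 + s_lik**2)):.3f}")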

  15. Fitting models to correlated data (large samples)

    NASA Astrophysics Data System (ADS)

    Féménias, Jean-Louis

    2004-03-01

    The study of the ordered series of residuals of a fit proved to be useful in evaluating separately the pure experimental error and the model bias, leading to a possible improvement of the modeling [J. Mol. Spectrosc. 217 (2003) 32]. In the present work this procedure is extended to homogeneous correlated data. This new method allows a separate estimation of pure experimental error, model bias, and data correlation; furthermore, it brings a new insight into the difference between goodness of fit and model relevance. It can be considered either as a study of 'random systematic errors' or as an extended approach to the Durbin-Watson problem [Biometrika 37 (1950) 409] taking into account the model error. In the present work an empirical approach is proposed for large samples (n ≥ 500), where numerical tests are done showing the accuracy and the limits of the method.

  16. Latin hypercube sampling with the SESOIL model

    SciTech Connect

    Hetrick, D.M.; Luxmoore, R.J.; Tharp, M.L.

    1994-09-01

    The seasonal soil compartment model SESOIL, a one-dimensional vertical transport code for chemicals in the unsaturated soil zone, has been coupled with the Monte Carlo computer code PRISM, which utilizes a Latin hypercube sampling method. Frequency distributions are assigned to each of 64 soil, chemical, and climate input variables for the SESOIL model, and these distributions are randomly sampled to generate N (200, for example) input data sets. The SESOIL model is run by PRISM for each set of input values, and the combined set of model variables and predictions is evaluated statistically by PRISM to summarize the relative influence of input variables on model results. Output frequency distributions for selected SESOIL components are produced. As an initial analysis and to illustrate the PRISM/SESOIL approach, input data were compiled for the model for three sites in different regions of the country (Oak Ridge, Tenn.; Fresno, Calif.; Fargo, N.D.). The chemical chosen for the analysis was trichloroethylene (TCE), which was initially loaded in the soil column at a 60- to 90-cm depth. The soil type at each site was assumed to be identical to the cherty silt loam at Oak Ridge; the only difference in the three data sets was the climatic data. Output distributions for TCE mass flux volatilized, TCE mass flux to groundwater, and residual TCE concentration in the lowest soil layer are vastly different for the three sites.
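
    The workflow can be sketched as follows with a stand-in toy model (SESOIL and PRISM are not reproduced here; the function, parameter names, and ranges are hypothetical): draw a Latin hypercube sample over the input ranges, run the model once per input set, then summarize the output distribution and the input-output rank correlations.

      # Sketch: Latin hypercube sampling driving repeated runs of a stand-in leaching model.
      import numpy as np
      from scipy.stats import qmc, spearmanr

      def toy_leaching_model(params):
          """Stand-in for SESOIL: fraction of chemical mass reaching groundwater."""
          k_decay, k_sorb, recharge = params
          return recharge / (recharge + k_sorb) * np.exp(-k_decay)

      n_runs = 200
      unit = qmc.LatinHypercube(d=3, seed=7).random(n_runs)      # n_runs points in [0, 1)^3
      lower, upper = np.array([0.1, 0.5, 10.0]), np.array([2.0, 5.0, 60.0])
      param_sets = qmc.scale(unit, lower, upper)                 # rescale to the assumed input ranges
      outputs = np.array([toy_leaching_model(p) for p in param_sets])

      print(f"median mass fraction to groundwater: {np.median(outputs):.3f}")
      print(f"5th-95th percentiles: {np.percentile(outputs, [5, 95])}")
      for name, column in zip(["k_decay", "k_sorb", "recharge"], param_sets.T):
          print(f"{name:9s} rank correlation with output: {spearmanr(column, outputs)[0]:+.2f}")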

  17. Far infrared reflectance of sintered nickel manganite samples for negative temperature coefficient thermistors

    SciTech Connect

    Nikolic, M. V. (E-mail: maria@mi.sanu.ac.yu); Paraskevopoulos, K. M.; Aleksic, O. S.; Zorba, T. T.; Savic, S. M.; Lukovic, D. T.

    2007-08-07

    Single phase complex spinel (Mn,Ni,Co,Fe)3O4 samples were sintered at 1050, 1200, and 1300 °C for 30 min and at 1200 °C for 120 min. Morphological changes of the obtained samples with the sintering temperature and time were analyzed by X-ray diffraction and scanning electron microscopy (SEM). Room temperature far infrared reflectivity spectra for all samples were measured in the frequency range between 50 and 1200 cm-1. The obtained spectra for all samples showed the presence of the same oscillators, but their intensities increased with the sintering temperature and time, in correlation with the increase in sample density and microstructure changes during sintering. The measured spectra were numerically analyzed using the Kramers-Kronig method and the four-parameter model of coupled oscillators. Optical modes were calculated for six observed ionic oscillators belonging to the spinel structure of (Mn,Ni,Co,Fe)3O4, of which four were strong and two were weak.

  18. Fast temperature spectrometer for samples under extreme conditions

    SciTech Connect

    Zhang, Dongzhou; Jackson, Jennifer M.; Sturhahn, Wolfgang; Zhao, Jiyong; Alp, E. Ercan; Toellner, Thomas S.; Hu, Michael Y.

    2015-01-15

    We have developed a multi-wavelength Fast Temperature Readout (FasTeR) spectrometer to capture a sample's transient temperature fluctuations and reduce uncertainties in melting temperature determination. Without sacrificing accuracy, FasTeR features a fast readout rate (about 100 Hz), high sensitivity, large dynamic range, and a well-constrained focus. Complementing a charge-coupled device spectrometer, FasTeR consists of an array of photomultiplier tubes and optical dichroic filters. The temperatures determined by FasTeR outside of the vicinity of melting are, generally, in good agreement with results from the charge-coupled device spectrometer. Near melting, FasTeR is capable of capturing transient temperature fluctuations, at least on the order of 300 K/s. A software tool, SIMFaster, is described and has been developed to simulate FasTeR and assess design configurations. FasTeR is especially suitable for temperature determinations that utilize ultra-fast techniques under extreme conditions. Working in parallel with the laser-heated diamond-anvil cell, synchrotron Mössbauer spectroscopy, and X-ray diffraction, we have applied the FasTeR spectrometer to measure the melting temperature of 57Fe0.9Ni0.1 at high pressure.

  19. Multiple temperatures sampled using only one reference junction

    NASA Technical Reports Server (NTRS)

    Cope, G. W.

    1966-01-01

    In a multitemperature sampling system where the reference thermocouples are a distance from the test thermocouples, an intermediate thermal junction block is placed between the sets of thermocouples permitting switching between a single reference and the test thermocouples. This reduces the amount of cabling, reference thermocouples, and cost of the sampling system.

  20. [Study on temperature correction models for quantitative analysis with near infrared spectroscopy].

    PubMed

    Zhang, Jun; Chen, Hua-cai; Chen, Xing-dan

    2005-06-01

    The effect of environment temperature on near infrared spectroscopic quantitative analysis was studied. The temperature correction model was calibrated with 45 wheat samples at different environment temperatures, with the temperature as an external variable. The constant temperature model was calibrated with 45 wheat samples at the same temperature. The predicted results of the two models for the protein contents of wheat samples at different temperatures were compared. The results showed that the mean standard error of prediction (SEP) of the temperature correction model was 0.333, whereas the SEP of the constant temperature (22 °C) model increased as the temperature difference grew, reaching 0.602 when the model was used at 4 °C. It was suggested that the temperature correction model improves the analysis precision.
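
    A hedged sketch of this kind of temperature-corrected calibration (synthetic spectra and arbitrary effect sizes, not the wheat data): the sample temperature is appended to the spectral matrix as an external variable before PLS regression, and the calibration error is compared with a spectra-only model.

      # Sketch: PLS calibration with temperature appended as an external variable.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(10)
      n_samples, n_wavelengths = 45, 120
      protein = rng.uniform(9, 16, n_samples)                    # reference protein content (%)
      temperature = rng.uniform(4, 30, n_samples)                # sample temperature (degC)
      basis = rng.normal(0, 1, (3, n_wavelengths))               # spectral signatures (made up)
      spectra = (np.outer(protein, basis[0]) + np.outer(0.05 * temperature, basis[1])
                 + np.outer(rng.normal(0, 1, n_samples), basis[2])
                 + rng.normal(0, 0.02, (n_samples, n_wavelengths)))

      for name, X in [("temperature-corrected", np.column_stack([spectra, temperature])),
                      ("spectra only", spectra)]:
          model = PLSRegression(n_components=5).fit(X, protein)
          sep = np.sqrt(np.mean((model.predict(X).ravel() - protein) ** 2))
          print(f"{name:22s} calibration error = {sep:.3f}")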

  1. Fast temperature relaxation model in dense plasmas

    NASA Astrophysics Data System (ADS)

    Faussurier, Gérald; Blancard, Christophe

    2017-01-01

    We present a fast model to calculate the temperature-relaxation rates in dense plasmas. The electron-ion interaction potential is calculated by combining a Yukawa approach and a finite-temperature Thomas-Fermi model. We include the internal energy as well as the excess energy of ions using the QEOS model. Comparisons with molecular dynamics simulations and calculations based on an average-atom model are presented. This approach allows the study of temperature relaxation in a two-temperature electron-ion system in warm and hot dense matter.
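
    The relaxation itself can be pictured with a toy two-temperature energy balance (arbitrary units and a fixed coupling constant standing in for the rate such a model actually computes): Ce dTe/dt = -g(Te - Ti) and Ci dTi/dt = +g(Te - Ti).

      # Sketch: relaxation of a two-temperature electron-ion system with a constant coupling g.
      import numpy as np
      from scipy.integrate import solve_ivp

      Ce, Ci, g = 1.0, 3.0, 5.0                                  # heat capacities and coupling (arbitrary)

      def rhs(t, T):
          Te, Ti = T
          q = g * (Te - Ti)                                      # energy exchange rate
          return [-q / Ce, q / Ci]

      sol = solve_ivp(rhs, (0.0, 2.0), [100.0, 10.0], dense_output=True)
      for t in (0.0, 0.2, 0.5, 1.0, 2.0):
          Te, Ti = sol.sol(t)
          print(f"t={t:.1f}  Te={Te:7.2f}  Ti={Ti:7.2f}")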

  2. Thermospheric temperature, density, and composition: New models

    NASA Technical Reports Server (NTRS)

    Jacchia, L. G.

    1977-01-01

    The models essentially consist of two parts: the basic static models, which give temperature and density profiles for the relevant atmospheric constituents for any specified exospheric temperature, and a set of formulae to compute the exospheric temperature and the expected deviations from the static models as a result of all the recognized types of thermospheric variation. For the basic static models, tables are given for heights from 90 to 2,500 km and for exospheric temperatures from 500 to 2600 K. In the formulae for the variations, an attempt has been made to represent the changes in composition observed by mass spectrometers on the OGO 6 and ESRO 4 satellites.

  3. Thermal modeling of core sampling in flammable gas waste tanks. Part 2: Rotary-mode sampling

    SciTech Connect

    Unal, C.; Poston, D.; Pasamehmetoglu, K.O.; Witwer, K.S.

    1997-08-01

    The radioactive waste stored in underground storage tanks at the Hanford site includes mixtures of sodium nitrate and sodium nitrite with organic compounds. The waste can produce undesired violent exothermic reactions when heated locally during rotary-mode sampling. Experiments are performed varying the downward force at a maximum rotational speed of 55 rpm and a minimum nitrogen purge flow of 30 scfm. The rotary drill bit teeth-face temperatures are measured. The waste is simulated with a low thermal conductivity hard material, pumice blocks. A torque meter is used to determine the energy provided to the drill string. The exhaust air-chip temperature as well as drill string and drill bit temperatures and other key operating parameters were recorded. A two-dimensional thermal model is developed. Safe operating limits were determined for normal operating conditions. A downward force of 750 at 55 rpm and 30 scfm nitrogen purge flow was found to yield acceptable substrate temperatures. The model predicted the experimental results reasonably well. Therefore, it could be used to simulate abnormal conditions to develop procedures for safe operations.

  4. New high temperature plasmas and sample introduction systems for analytical atomic emission and mass spectrometry

    NASA Astrophysics Data System (ADS)

    Montaser, A.

    In this project, new high temperature plasmas and new sample introduction systems are developed for rapid elemental and isotopic analysis of gases, solutions, and solids using atomic emission spectrometry (AES) and mass spectrometry (MS). These devices offer promise of solving singularly difficult analytical problems that either exist now or are likely to arise in the future in the various fields of energy generation, environmental pollution, nutrition, and biomedicine. Emphasis is being placed on: (1) generation of annular, helium inductively coupled plasmas (He ICPs) that are suitable for atomization, excitation, and ionization of elements possessing high excitation and ionization energies, with the intent of enhancing the detecting powers of a number of elements; (2) computer modeling of ICP discharges to predict the behavior of new and existing plasmas; (3) diagnostic studies of high temperature plasmas and sample introduction systems to quantify their fundamental properties, with the ultimate aim of improving the analytical performance of atomic spectrometry; (4) development and characterization of new, low cost sample introduction systems that consume microliter or microgram quantities of samples; and (5) investigation of new membrane separators for stripping solvent from sample aerosol to reduce various interferences and to enhance sensitivity and selectivity in plasma spectrometry.

  5. Confocal sample-scanning microscope for single-molecule spectroscopy and microscopy with fast sample exchange at cryogenic temperatures.

    PubMed

    Hussels, Martin; Konrad, Alexander; Brecht, Marc

    2012-12-01

    The construction of a microscope with a fast sample transfer system for single-molecule spectroscopy and microscopy at low temperatures using 2D/3D sample scanning is reported. The presented construction enables the insertion of a sample from the outside (room temperature) into the cooled (4.2 K) cryostat within seconds. We describe the mechanical and optical design and present data from individual Photosystem I complexes. With the described setup numerous samples can be investigated within one cooling cycle. It opens the possibility of investigating (i) biological samples without artifacts introduced by prolonged cooling procedures and (ii) samples that require preparation steps like plunge-freezing or specific illumination procedures prior to insertion into the cryostat.

  6. Estimation of surface heat flux and surface temperature during inverse heat conduction under varying spray parameters and sample initial temperature.

    PubMed

    Aamir, Muhammad; Liao, Qiang; Zhu, Xun; Aqeel-ur-Rehman; Wang, Hong; Zubair, Muhammad

    2014-01-01

    An experimental study was carried out to investigate the effects of inlet pressure, sample thickness, initial sample temperature, and temperature sensor location on the surface heat flux, surface temperature, and surface ultrafast cooling rate using stainless steel samples of diameter 27 mm and thicknesses of 8.5, 13, 17.5, and 22 mm. Inlet pressure was varied from 0.2 MPa to 1.8 MPa, while the sample initial temperature varied from 600°C to 900°C. Beck's sequential function specification method was utilized to estimate surface heat flux and surface temperature. Inlet pressure has a positive effect on surface heat flux (SHF) within a critical value of pressure. Thickness of the sample affects the maximum achieved SHF negatively. A surface heat flux as high as 0.4024 MW/m² was estimated for a thickness of 8.5 mm. Insulation effects of the vapor film become apparent at sample initial temperatures around 900°C, causing a reduction in surface heat flux and cooling rate of the sample. A sensor location near the quenched surface is found to be a better choice for visualizing the effects of spray parameters on surface heat flux and surface temperature. The cooling rate showed a profound increase for an inlet pressure of 0.8 MPa.

  7. Estimation of Surface Heat Flux and Surface Temperature during Inverse Heat Conduction under Varying Spray Parameters and Sample Initial Temperature

    PubMed Central

    Aamir, Muhammad; Liao, Qiang; Zhu, Xun; Aqeel-ur-Rehman; Wang, Hong

    2014-01-01

    An experimental study was carried out to investigate the effects of inlet pressure, sample thickness, initial sample temperature, and temperature sensor location on the surface heat flux, surface temperature, and surface ultrafast cooling rate, using stainless steel samples of 27 mm diameter and thicknesses of 8.5, 13, 17.5, and 22 mm. Inlet pressure was varied from 0.2 MPa to 1.8 MPa, while the initial sample temperature varied from 600°C to 900°C. Beck's sequential function specification method was utilized to estimate surface heat flux and surface temperature. Inlet pressure has a positive effect on surface heat flux (SHF) up to a critical value of pressure, whereas sample thickness affects the maximum achieved SHF negatively. A surface heat flux as high as 0.4024 MW/m2 was estimated for a thickness of 8.5 mm. The insulating effect of the vapor film becomes apparent at an initial sample temperature of around 900°C, reducing the surface heat flux and the cooling rate of the sample. A sensor location near the quenched surface is found to be a better choice for visualizing the effects of spray parameters on surface heat flux and surface temperature. The cooling rate showed a profound increase for an inlet pressure of 0.8 MPa. PMID:24977219

  8. Effects of sample storage time, temperature and syringe type on blood gas tensions in samples with high oxygen partial pressures.

    PubMed Central

    Pretto, J. J.; Rochford, P. D.

    1994-01-01

    BACKGROUND--Although plastic arterial sampling syringes are now commonly used, the effects of sample storage time and temperature on blood gas tensions are poorly described for samples with a high oxygen partial pressure (PaO2) taken with these high density polypropylene syringes. METHODS--Two ml samples of tonometered whole blood (PaO2 86.7 kPa, PaCO2 4.27 kPa) were placed in glass syringes and in three brands of plastic blood gas syringes. The syringes were placed either at room temperature or in iced water and blood gas analysis was performed at baseline and after 5, 10, 20, 40, 60, 90, and 120 minutes. RESULTS--In the first 10 minutes measured PaO2 in plastic syringes at room temperature fell by an average of 1.21 kPa/min; placing the sample on ice reduced the rate of PaO2 decline to 0.19 kPa/min. The rate of fall of PaO2 in glass at room temperature was 0.49 kPa/min. The changes in PaCO2 were less dramatic and at room temperature averaged increases of 0.47 kPa for plastic syringes and 0.71 kPa for glass syringes over the entire two hour period. These changes in gas tension for plastic syringes would lead to an overestimation of pulmonary shunt measured by the 100% oxygen technique of 0.6% for each minute left at room temperature before analysis. CONCLUSIONS--Glass syringes are superior to plastic syringes in preserving samples with a high PaO2, and prompt and adequate cooling of such samples is essential for accurate blood gas analysis. PMID:8016801

  9. Modeling monthly mean air temperature for Brazil

    NASA Astrophysics Data System (ADS)

    Alvares, Clayton Alcarde; Stape, José Luiz; Sentelhas, Paulo Cesar; de Moraes Gonçalves, José Leonardo

    2013-08-01

    Air temperature is one of the main weather variables influencing agriculture around the world. Its availability, however, is a concern, mainly in Brazil, where weather stations are concentrated mostly along the coastal regions of the country. Therefore, the objective of the present study was to develop models for estimating monthly and annual mean air temperature for the Brazilian territory using multiple regression and geographic information system techniques. Temperature data from 2,400 stations distributed across the Brazilian territory were used, 1,800 to develop the equations and 600 to validate them, with their geographical coordinates and altitude serving as independent variables for the models. A total of 39 models were developed, relating the dependent variables maximum, mean, and minimum air temperature (monthly and annual) to the independent variables latitude, longitude, altitude, and their combinations. All regression models were statistically significant (α ≤ 0.01). The monthly and annual temperature models presented coefficients of determination between 0.54 and 0.96. We obtained an overall spatial correlation higher than 0.9 between the proposed models and the 16 major models already published for some Brazilian regions, over a total of 3.67 × 10^8 evaluated pixels. Our national temperature models are recommended for predicting air temperature across the entire Brazilian territory.
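
    To illustrate the kind of multiple-regression model described above, the sketch below fits mean temperature against latitude, longitude, altitude, and one interaction term by ordinary least squares. The data and coefficients are synthetic placeholders; the 39 published models are not reproduced here.

```python
import numpy as np

# Minimal sketch of a geographic multiple-regression temperature model
# (synthetic data; the predictors mirror those used in the study: lat, lon, alt).

rng = np.random.default_rng(0)
n = 2400
lat = rng.uniform(-34, 5, n)       # deg
lon = rng.uniform(-74, -34, n)     # deg
alt = rng.uniform(0, 1500, n)      # m
# Synthetic "observed" mean temperature with noise, for illustration only
t_obs = 27.0 + 0.25 * lat - 0.0055 * alt + 0.02 * lon + rng.normal(0, 0.8, n)

# Design matrix with intercept, lat, lon, alt and a lat*alt interaction
X = np.column_stack([np.ones(n), lat, lon, alt, lat * alt])
coef, *_ = np.linalg.lstsq(X, t_obs, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((t_obs - pred)**2) / np.sum((t_obs - t_obs.mean())**2)
print("coefficients:", np.round(coef, 4), " R^2 =", round(r2, 3))
```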

  10. Effects of High-frequency Wind Sampling on Simulated Mixed Layer Depth and Upper Ocean Temperature

    NASA Technical Reports Server (NTRS)

    Lee, Tong; Liu, W. Timothy

    2005-01-01

    Effects of high-frequency wind sampling on a near-global ocean model are studied by forcing the model with a 12 hourly averaged wind product and its 24 hourly subsamples in separate experiments. The differences in mixed layer depth and sea surface temperature resulting from these experiments are examined, and the underlying physical processes are investigated. The 24 hourly subsampling not only reduces the high-frequency variability of the wind but also affects the annual mean wind because of aliasing. While the former effect largely impacts mid- to high-latitude oceans, the latter primarily affects tropical and coastal oceans. At mid- to high-latitude regions the subsampled wind results in a shallower mixed layer and higher sea surface temperature because of reduced vertical mixing associated with weaker high-frequency wind. In tropical and coastal regions, however, the change in upper ocean structure due to the wind subsampling is primarily caused by the difference in advection resulting from aliased annual mean wind, which varies with the subsampling time. The results of the study indicate a need for more frequent sampling of satellite wind measurement and have implications for data assimilation in terms of identifying the nature of model errors.

  11. Modeling daily average stream temperature from air temperature and watershed area

    NASA Astrophysics Data System (ADS)

    Butler, N. L.; Hunt, J. R.

    2012-12-01

    Habitat restoration efforts within watersheds require spatial and temporal estimates of water temperature for aquatic species, especially species that migrate within watersheds at different life stages. Monitoring programs are not able to fully sample all aquatic environments within watersheds under the extreme conditions that determine long-term habitat viability. Under these circumstances a combination of selective monitoring and modeling is required for predicting future geospatial and temporal conditions. This study describes a model that is broadly applicable to different watersheds while using readily available regional air temperature data. Daily water temperature data from thirty-eight gauges with drainage areas from 2 km² to 2,000 km² in the Sonoma Valley, Napa Valley, and Russian River Valley in California were used to develop, calibrate, and test a stream temperature model. Air temperature data from seven NOAA gauges provided the daily maximum and minimum air temperatures. The model was developed and calibrated using five years of data from the Sonoma Valley at ten water temperature gauges and a NOAA air temperature gauge. The daily average stream temperatures within this watershed were bounded by the preceding maximum and minimum air temperatures, with smaller upstream watersheds being more dependent on the minimum air temperature than on the maximum air temperature. The model assumed a linear dependence on maximum and minimum air temperature with a weighting factor dependent on upstream area, determined by error minimization using observed data. Fitted minimum air temperature weighting factors were consistent over all five years of data for each gauge, and they ranged from 0.75 for upstream drainage areas less than 2 km² to 0.45 for upstream drainage areas greater than 100 km². For the calibration data sets within the Sonoma Valley, the average error between the model estimated daily water temperature and the observed water temperature data ranged from 0.7
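
    A minimal sketch of the weighting idea described above is given below: the daily mean stream temperature is modelled as w*Tmin + (1 - w)*Tmax of the preceding air temperatures, with w chosen by error minimization. The short data arrays are hypothetical placeholders, not the Sonoma Valley records.

```python
import numpy as np

# Weighted combination of preceding min/max air temperature as a stream
# temperature predictor; the weight is fitted by brute-force RMSE minimization.

t_min = np.array([8.0, 9.5, 10.2, 11.0, 9.8])    # preceding daily min air temp, deg C
t_max = np.array([19.0, 21.5, 24.0, 26.5, 22.0]) # preceding daily max air temp, deg C
t_wat = np.array([12.1, 13.6, 15.0, 16.4, 14.2]) # observed daily mean water temp, deg C

def rmse(w):
    pred = w * t_min + (1.0 - w) * t_max
    return np.sqrt(np.mean((pred - t_wat) ** 2))

# Search over the weight (w near 1: minimum-dominated, w near 0: maximum-dominated)
weights = np.linspace(0.0, 1.0, 201)
w_best = weights[np.argmin([rmse(w) for w in weights])]
print(f"fitted weight w = {w_best:.2f}, RMSE = {rmse(w_best):.2f} deg C")
```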

  12. Using Inverse Probability Bootstrap Sampling to Eliminate Sample Induced Bias in Model Based Analysis of Unequal Probability Samples.

    PubMed

    Nahorniak, Matthew; Larsen, David P; Volk, Carol; Jordan, Chris E

    2015-01-01

    In ecology, as in other research fields, efficient sampling for population estimation often drives sample designs toward unequal probability sampling, such as in stratified sampling. Design-based statistical analysis tools are appropriate for seamless integration of the sample design into the statistical analysis. However, it is also common and necessary, after a sampling design has been implemented, to use datasets to address questions that, in many cases, were not considered during the sampling design phase. Questions may arise requiring the use of model-based statistical tools such as multiple regression, quantile regression, or regression tree analysis. Such model-based tools may require, to ensure unbiased estimation, data from simple random samples, which can be problematic when analyzing data from unequal probability designs. Despite the numerous method-specific tools available to properly account for sampling design, too often in the analysis of ecological data the sample design is ignored and the consequences are not properly considered. We demonstrate here that violation of this assumption can lead to biased parameter estimates in ecological research. In addition, we add inverse probability bootstrapping (IPB) to the set of tools available for researchers to properly account for sampling design in model-based analysis. Inverse probability bootstrapping is an easily implemented method for obtaining equal probability re-samples from a probability sample, from which unbiased model-based estimates can be made. We demonstrate the potential for bias in model-based analyses that ignore sample inclusion probabilities, and the effectiveness of IPB sampling in eliminating this bias, using both simulated and actual ecological data. For illustration, we considered three model-based analysis tools: linear regression, quantile regression, and boosted regression tree analysis. In all models, using both simulated and actual ecological data, we found inferences to be
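
    The sketch below illustrates the core of the IPB idea on simulated data: units are re-sampled with selection weights proportional to the inverse of their inclusion probabilities, so that statistics computed on the re-samples are approximately unbiased for the population. The strata, inclusion probabilities, and values are invented for illustration.

```python
import numpy as np

# Inverse probability bootstrapping (IPB) sketch: re-sample a probability sample
# with weights proportional to 1/pi_i to obtain approximately equal-probability
# re-samples for model-based analysis. Data below are simulated.

rng = np.random.default_rng(1)

# Simulated unequal-probability sample: units from stratum A were over-sampled
y = np.concatenate([rng.normal(10, 2, 300), rng.normal(20, 2, 100)])
pi = np.concatenate([np.full(300, 0.6), np.full(100, 0.1)])  # inclusion probabilities

w = 1.0 / pi
w /= w.sum()                       # normalized selection weights for re-sampling

def ipb_estimate(stat, n_boot=1000):
    """Mean of a statistic over IPB re-samples."""
    vals = []
    for _ in range(n_boot):
        idx = rng.choice(len(y), size=len(y), replace=True, p=w)
        vals.append(stat(y[idx]))
    return np.mean(vals)

print("naive sample mean :", round(y.mean(), 2))               # biased toward stratum A
print("IPB bootstrap mean:", round(ipb_estimate(np.mean), 2))  # close to population mean
```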

  13. Using Inverse Probability Bootstrap Sampling to Eliminate Sample Induced Bias in Model Based Analysis of Unequal Probability Samples

    PubMed Central

    Nahorniak, Matthew

    2015-01-01

    In ecology, as in other research fields, efficient sampling for population estimation often drives sample designs toward unequal probability sampling, such as in stratified sampling. Design-based statistical analysis tools are appropriate for seamless integration of the sample design into the statistical analysis. However, it is also common and necessary, after a sampling design has been implemented, to use datasets to address questions that, in many cases, were not considered during the sampling design phase. Questions may arise requiring the use of model-based statistical tools such as multiple regression, quantile regression, or regression tree analysis. Such model-based tools may require, to ensure unbiased estimation, data from simple random samples, which can be problematic when analyzing data from unequal probability designs. Despite the numerous method-specific tools available to properly account for sampling design, too often in the analysis of ecological data the sample design is ignored and the consequences are not properly considered. We demonstrate here that violation of this assumption can lead to biased parameter estimates in ecological research. In addition, we add inverse probability bootstrapping (IPB) to the set of tools available for researchers to properly account for sampling design in model-based analysis. Inverse probability bootstrapping is an easily implemented method for obtaining equal probability re-samples from a probability sample, from which unbiased model-based estimates can be made. We demonstrate the potential for bias in model-based analyses that ignore sample inclusion probabilities, and the effectiveness of IPB sampling in eliminating this bias, using both simulated and actual ecological data. For illustration, we considered three model-based analysis tools: linear regression, quantile regression, and boosted regression tree analysis. In all models, using both simulated and actual ecological data, we found inferences to be

  14. Method and apparatus for transport, introduction, atomization and excitation of emission spectrum for quantitative analysis of high temperature gas sample streams containing vapor and particulates without degradation of sample stream temperature

    DOEpatents

    Eckels, David E.; Hass, William J.

    1989-05-30

    A sample transport, sample introduction, and flame excitation system for spectrometric analysis of high temperature gas streams which eliminates degradation of the sample stream by condensation losses.

  15. 40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... energy distribution and permitted tolerances specified in table E-2 of this subpart. The solar radiation... sequential sample operation. (3) The solar radiant energy source shall be installed in the test chamber such... temperature control system or by the radiant energy from the solar radiation source that may be present...

  16. 40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... energy distribution and permitted tolerances specified in table E-2 of this subpart. The solar radiation... sequential sample operation. (3) The solar radiant energy source shall be installed in the test chamber such... temperature control system or by the radiant energy from the solar radiation source that may be present...

  17. 40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... energy distribution and permitted tolerances specified in table E-2 of this subpart. The solar radiation... sequential sample operation. (3) The solar radiant energy source shall be installed in the test chamber such... temperature control system or by the radiant energy from the solar radiation source that may be present...

  18. 40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... energy distribution and permitted tolerances specified in table E-2 of this subpart. The solar radiation... sequential sample operation. (3) The solar radiant energy source shall be installed in the test chamber such... temperature control system or by the radiant energy from the solar radiation source that may be present...

  19. A numerical model for ground temperature determination

    NASA Astrophysics Data System (ADS)

    Jaszczur, M.; Polepszyc, I.; Biernacka, B.; Sapińska-Śliwa, A.

    2016-09-01

    The ground surface temperature and the temperature variation with depth are among the most important issues for geotechnical and environmental applications, as well as for plants and other living organisms. In geothermal systems, temperature is directly related to the energy resources in the ground and it influences the efficiency of the ground source system. The ground temperature depends on a very large number of parameters, yet it often needs to be evaluated with good accuracy. In the present work, models for the prediction of the ground temperature, with a focus on the surface temperature, in which all or only selected important ground and environmental phenomena are taken into account, have been analysed. It has been found that the simplest models and the most complex model may result in similar temperature variations, but only at very shallow depths and for specific cases. A detailed analysis shows that taking into account different types of pavement or greater depths requires more complex and advanced models.
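
    For context, one of the simplest model classes alluded to above is the classical analytical solution for a homogeneous soil driven by a sinusoidal annual surface temperature wave; the sketch below evaluates it. The soil properties and amplitudes are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Classical harmonic ground temperature solution: an exponentially damped,
# phase-shifted annual wave in a homogeneous soil. Parameter values are illustrative.

T_mean = 10.0          # annual mean surface temperature, deg C
A0 = 12.0              # annual surface temperature amplitude, deg C
alpha = 0.6e-6         # soil thermal diffusivity, m^2/s
P = 365.0 * 86400.0    # period of the annual wave, s
omega = 2.0 * np.pi / P
d = np.sqrt(2.0 * alpha / omega)   # damping depth, m

def ground_temperature(z, t):
    """Temperature at depth z (m) and time t (s after the upward mean crossing)."""
    return T_mean + A0 * np.exp(-z / d) * np.sin(omega * t - z / d)

# Example: temperature 2 m below the surface at the time of the surface maximum
print(round(ground_temperature(2.0, P / 4.0), 2), "deg C at 2 m depth")
```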

  20. Modeling of global surface air temperature

    NASA Astrophysics Data System (ADS)

    Gusakova, M. A.; Karlin, L. N.

    2012-04-01

    A model to assess a number of factors affecting climate change, such as total solar irradiance, albedo, greenhouse gases, and water vapor, has been developed on the basis of the Earth's radiation balance principle. To develop the model, the transformation of solar energy in the atmosphere was investigated. It is well known that part of the incoming radiation is reflected into space by the atmosphere, land, and water surfaces, and another part is absorbed by the Earth's surface. Some part of the outgoing terrestrial radiation is retained in the atmosphere by greenhouse gases (carbon dioxide, methane, nitrous oxide) and water vapor. Using regression analysis, a correlation between the concentrations of greenhouse gases, water vapor, and global surface air temperature was obtained, which, in turn, made it possible to develop the proposed model. The model showed that even the smallest fluctuations in total solar irradiance intensify both positive and negative feedbacks, which give rise to considerable changes in global surface air temperature. The model was used both to reconstruct the global surface air temperature for the 1981-2005 period and to predict global surface air temperature until 2030. The reconstruction of global surface air temperature for 1981-2005 demonstrated the model's validity. The model makes it possible to assess the contribution of the factors listed above to climate change.

  1. Measurement of temperature and temperature gradient in millimeter samples by chlorine NQR

    NASA Astrophysics Data System (ADS)

    Lužnik, Janko; Pirnat, Janez; Trontelj, Zvonko

    2009-09-01

    A mini-thermometer based on the temperature dependence of the 35Cl nuclear quadrupole resonance (NQR) frequency in the chlorates KClO3 and NaClO3 was built and successfully tested by measuring temperature and temperature gradient at 77 K and above in the roughly 100 mm3 active volume of a mini Joule-Thomson refrigerator. In the design of the tank-circuit coil, an array of small coils connected in series enabled us (a) to achieve a suitable ratio of inductance to capacitance in the NQR spectrometer input tank circuit, (b) to use a single crystal of KClO3 or NaClO3 (of 1-2 mm3 size) in one coil as a mini-thermometer with a resolution of 0.03 K, and (c) to construct a system for measuring temperature gradients when the spatial coordinates of each chlorate single crystal within an individual coil are known.

  2. The effectiveness of cooling conditions on temperature of canine EDTA whole blood samples

    PubMed Central

    Sun, Xiaocun; Flatland, Bente

    2016-01-01

    Background Preanalytic factors such as time and temperature can have significant effects on laboratory test results. For example, ammonium concentration will increase 31% in blood samples stored at room temperature for 30 min before centrifugation. To reduce preanalytic error, blood samples may be placed in precooled tubes and chilled on ice or in ice water baths; however, the effectiveness of these modalities in cooling blood samples has not been formally evaluated. The purpose of this study was to evaluate the effectiveness of various cooling modalities on reducing temperature of EDTA whole blood samples. Methods Pooled samples of canine EDTA whole blood were divided into two aliquots. Saline was added to one aliquot to produce a packed cell volume (PCV) of 40% and to the second aliquot to produce a PCV of 20% (simulated anemia). Thirty samples from each aliquot were warmed to 37.7 °C and cooled in 2 ml allotments under one of three conditions: in ice, in ice after transfer to a precooled tube, or in an ice water bath. Temperature of each sample was recorded at one minute intervals for 15 min. Results Within treatment conditions, sample PCV had no significant effect on cooling. Cooling in ice water was significantly faster than cooling in ice only or transferring the sample to a precooled tube and cooling it on ice. Mean temperature of samples cooled in ice water was significantly lower at 15 min than mean temperatures of those cooled in ice, whether or not the tube was precooled. By 4 min, samples cooled in an ice water bath had reached mean temperatures less than 4 °C (refrigeration temperature), while samples cooled in other conditions remained above 4.0 °C for at least 11 min. For samples with a PCV of 40%, precooling the tube had no significant effect on rate of cooling on ice. For samples with a PCV of 20%, transfer to a precooled tube resulted in a significantly faster rate of cooling than direct placement of the warmed tube onto ice. Discussion Canine

  3. High Temperature High Pressure Thermodynamic Measurements for Coal Model Compounds

    SciTech Connect

    John C. Chen; Vinayak N. Kabadi

    1998-11-12

    The overall objective of this project is to develop a better thermodynamic model for predicting properties of high-boiling coal-derived liquids, especially the phase equilibria of different fractions at elevated temperatures and pressures. The development of such a model requires data on vapor-liquid equilibria (VLE), enthalpy, and heat capacity, which would be experimentally determined for binary systems of coal model compounds and compiled into a database. The data will be used to refine existing models such as UNIQUAC and UNIFAC. The flow VLE apparatus designed and built for a previous project was upgraded and recalibrated for data measurements for this project. The modifications include a better and more accurate sampling technique and the addition of a digital recorder to monitor temperature, pressure, and liquid level inside the VLE cell. VLE data measurements for the benzene-ethylbenzene system have been completed. The vapor and liquid samples were analysed using a Perkin-Elmer Autosystem gas chromatograph.

  4. Temperature dependence of standard model CP violation.

    PubMed

    Brauner, Tomáš; Taanila, Olli; Tranberg, Anders; Vuorinen, Aleksi

    2012-01-27

    We analyze the temperature dependence of CP violation effects in the standard model by determining the effective action of its bosonic fields, obtained after integrating out the fermions from the theory and performing a covariant gradient expansion. We find nonvanishing CP violating terms starting at the sixth order of the expansion, albeit only in the C-odd-P-even sector, with coefficients that depend on quark masses, Cabibbo-Kobayashi-Maskawa matrix elements, temperature and the magnitude of the Higgs field. The CP violating effects are observed to decrease rapidly with temperature, which has important implications for the generation of a matter-antimatter asymmetry in the early Universe. Our results suggest that the cold electroweak baryogenesis scenario may be viable within the standard model, provided the electroweak transition temperature is at most of order 1 GeV.

  5. Modeling of concrete response at high temperature

    SciTech Connect

    Pfeiffer, P.; Marchertas, A.

    1984-01-01

    A rate-type creep law is implemented into the computer code TEMP-STRESS for high temperature concrete analysis. The disposition of temperature, pore pressure and moisture for the particular structure in question is provided as input for the thermo-mechanical code. The loss of moisture from concrete also induces material shrinkage which is accounted for in the analytical model. Examples are given to illustrate the numerical results.

  6. Temperature Dependent Residual Stress Models for Ultra-High-Temperature Ceramics on High Temperature Oxidation

    NASA Astrophysics Data System (ADS)

    Wang, Ruzhuan; Li, Weiguo

    2016-11-01

    The strength of the SiC-depleted layer that forms on ultra-high-temperature ceramics during high temperature oxidation degrades seriously. Research on the residual stresses developed within the SiC-depleted layer is therefore important and necessary. In this work, the residual stress evolution in the SiC-depleted layer and the unoxidized substrate at various stages of oxidation is studied using characterization models. The temperature- and oxidation-time-dependent mechanical and thermal properties of each phase in the SiC-depleted layer are considered in the models. The study shows that the SiC-depleted layer suffers from large tensile stresses due to the large temperature changes and the formation of pores during high temperature oxidation. These stresses may lead to cracking and even delamination of the oxidation layer.

  7. Pre-analytical sample quality: metabolite ratios as an intrinsic marker for prolonged room temperature exposure of serum samples.

    PubMed

    Anton, Gabriele; Wilson, Rory; Yu, Zhong-Hao; Prehn, Cornelia; Zukunft, Sven; Adamski, Jerzy; Heier, Margit; Meisinger, Christa; Römisch-Margl, Werner; Wang-Sattler, Rui; Hveem, Kristian; Wolfenbuttel, Bruce; Peters, Annette; Kastenmüller, Gabi; Waldenberger, Melanie

    2015-01-01

    Advances in the "omics" field bring about the need for a high number of good quality samples. Many omics studies take advantage of biobanked samples to meet this need. Most of the laboratory errors occur in the pre-analytical phase. Therefore evidence-based standard operating procedures for the pre-analytical phase as well as markers to distinguish between 'good' and 'bad' quality samples taking into account the desired downstream analysis are urgently needed. We studied concentration changes of metabolites in serum samples due to pre-storage handling conditions as well as due to repeated freeze-thaw cycles. We collected fasting serum samples and subjected aliquots to up to four freeze-thaw cycles and to pre-storage handling delays of 12, 24 and 36 hours at room temperature (RT) and on wet and dry ice. For each treated aliquot, we quantified 127 metabolites through a targeted metabolomics approach. We found a clear signature of degradation in samples kept at RT. Storage on wet ice led to less pronounced concentration changes. 24 metabolites showed significant concentration changes at RT. In 22 of these, changes were already visible after only 12 hours of storage delay. Especially pronounced were increases in lysophosphatidylcholines and decreases in phosphatidylcholines. We showed that the ratio between the concentrations of these molecule classes could serve as a measure to distinguish between 'good' and 'bad' quality samples in our study. In contrast, we found quite stable metabolite concentrations during up to four freeze-thaw cycles. We concluded that pre-analytical RT handling of serum samples should be strictly avoided and serum samples should always be handled on wet ice or in cooling devices after centrifugation. Moreover, serum samples should be frozen at or below -80°C as soon as possible after centrifugation.

  8. Global modeling of fresh surface water temperature

    NASA Astrophysics Data System (ADS)

    Bierkens, M. F.; Eikelboom, T.; van Vliet, M. T.; Van Beek, L. P.

    2011-12-01

    Temperature determines a range of physical properties of water and the solubility of oxygen and other gases, and it acts as a strong control on fresh water biogeochemistry, influencing chemical reaction rates, phytoplankton and zooplankton composition, and the presence or absence of pathogens. Thus, in freshwater ecosystems the thermal regime affects the geographical distribution of aquatic species through their growth and metabolism, tolerance to parasites, diseases and pollution, and life history. Compared to statistical approaches, physically based models of surface water temperature have the advantage that they are robust to changes in flow regime, river morphology, radiation balance, and upstream hydrology. Such models are therefore better suited for projecting the effects of global change on water temperature. Until now, physically based models have only been applied to well-defined fresh water bodies of limited size (e.g., lakes or stream segments), where the numerous parameters can be measured or otherwise established, whereas attempts to model water temperature over larger scales have thus far been limited to regression-type models. Here, we present a first attempt to apply a physically based model of global fresh surface water temperature. The model adds a surface water energy balance to river discharge modelled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff, and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by short- and long-wave radiation and by sensible and latent heat fluxes. Also included are ice formation and its effect on heat storage and river hydraulics. We used the coupled surface water and energy balance model to simulate global fresh surface water temperature at daily time steps on a 0.5x0.5 degree grid for the period 1970-2000. Meteorological forcing was obtained from the CRU data set, downscaled to daily values with ECMWF
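
    The sketch below shows, in strongly simplified form, the type of surface water energy balance update described above: net shortwave and longwave radiation plus bulk sensible and latent heat fluxes change the heat content of a water column. All coefficients and forcing values are rough assumptions and do not reproduce the PCR-GLOBWB formulation.

```python
# Simplified surface-water energy balance step (illustrative coefficients only).

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)
RHO_W, CP_W = 1000.0, 4180.0  # water density (kg/m^3) and heat capacity (J/(kg K))

def update_water_temperature(Tw, depth, dt, sw_down, lw_down, Ta, wind):
    """Advance water temperature Tw (deg C) of a column of given depth (m) by dt (s)."""
    Tw_K = Tw + 273.15
    net_sw = (1.0 - 0.06) * sw_down                        # assumed 6% water albedo
    net_lw = 0.97 * (lw_down - SIGMA * Tw_K**4)            # assumed emissivity 0.97
    sensible = -1.2 * 1005.0 * 1.5e-3 * wind * (Tw - Ta)   # bulk formula, C_H = 1.5e-3
    latent = -1.2 * 2.5e6 * 1.5e-3 * wind * 0.004          # crude fixed humidity deficit
    net_flux = net_sw + net_lw + sensible + latent         # W/m^2
    return Tw + net_flux * dt / (RHO_W * CP_W * depth)

# One daily step for a 2 m deep water column
print(round(update_water_temperature(Tw=18.0, depth=2.0, dt=86400.0,
                                     sw_down=200.0, lw_down=340.0,
                                     Ta=20.0, wind=3.0), 2), "deg C")
```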

  9. Simple method for highlighting the temperature distribution into a liquid sample heated by microwave power field

    SciTech Connect

    Surducan, V.; Surducan, E.; Dadarlat, D.

    2013-11-13

    Microwave-induced heating is widely used in medical treatments and in scientific and industrial applications. The temperature field inside a microwave-heated sample is often inhomogeneous, therefore multiple temperature sensors are required for an accurate result. Nowadays, non-contact methods (infrared thermography or microwave radiometry) or direct-contact temperature measurements (expensive and sophisticated fiber optic temperature sensors transparent to microwave radiation) are mainly used. IR thermography gives only the surface temperature and cannot be used for measuring temperature distributions in cross sections of a sample. In this paper we present a very simple experimental method for highlighting the temperature distribution inside a cross section of a liquid sample heated by microwave radiation through a coaxial applicator. The proposed method offers qualitative information about the heating distribution using a temperature-sensitive liquid crystal sheet. Inhomogeneities as small as 1-2°C produced by the symmetry irregularities of the microwave applicator can easily be detected by visual inspection or by computer-assisted color-to-temperature conversion. The microwave applicator can therefore be tuned and verified with the described method until the temperature inhomogeneities are eliminated.

  10. High temperature furnace modeling and performance verifications

    NASA Technical Reports Server (NTRS)

    Smith, James E., Jr.

    1991-01-01

    A two dimensional conduction/radiation problem for an alumina crucible in a zirconia heater/muffle tube enclosing a liquid iron sample was solved numerically. Variations in the crucible wall thickness were numerically examined. The results showed that the temperature profiles within the liquid iron sample were significantly affected by the crucible wall thicknesses. New zirconia heating elements are under development that will permit continued experimental investigations of the zirconia furnace. These elements have been designed to work with the existing furnace and have been shown to have longer lifetimes than commercially available zirconia heating elements. The first element has been constructed and tested successfully.

  11. The XXL Survey. IV. Mass-temperature relation of the bright cluster sample

    NASA Astrophysics Data System (ADS)

    Lieu, M.; Smith, G. P.; Giles, P. A.; Ziparo, F.; Maughan, B. J.; Démoclès, J.; Pacaud, F.; Pierre, M.; Adami, C.; Bahé, Y. M.; Clerc, N.; Chiappetti, L.; Eckert, D.; Ettori, S.; Lavoie, S.; Le Fevre, J. P.; McCarthy, I. G.; Kilbinger, M.; Ponman, T. J.; Sadibekova, T.; Willis, J. P.

    2016-06-01

    Context. The XXL Survey is the largest survey carried out by XMM-Newton. Covering an area of 50 deg^2, the survey contains ~450 galaxy clusters out to a redshift of ~2 and down to an X-ray flux limit of ~5 × 10^-15 erg s^-1 cm^-2. This paper is part of the first release of XXL results, focussed on the bright cluster sample. Aims: We investigate the scaling relation between weak-lensing mass and X-ray temperature for the brightest clusters in XXL. The scaling relation discussed in this article is used to estimate the mass of all 100 clusters in XXL-100-GC. Methods: Based on a subsample of 38 objects that lie within the intersection of the northern XXL field and the publicly available CFHTLenS shear catalog, we derive the weak-lensing mass of each system with careful consideration of the systematics. The clusters lie at redshifts z > 0.1 and span a temperature range of T ≃ 1-5 keV. We combine our sample with an additional 58 clusters from the literature, increasing the range to T ≃ 1-10 keV. To date, this is the largest sample of clusters with weak-lensing mass measurements that has been used to study the mass-temperature relation. Results: The mass-temperature relation fit (M ∝ T^b) to the XXL clusters returns a slope and an intrinsic scatter of σ_lnM|T ≃ 0.53; the scatter is dominated by disturbed clusters. The fit to the combined sample of 96 clusters is in tension with self-similarity, with b = 1.67 ± 0.12 and σ_lnM|T ≃ 0.41. Conclusions: Overall our results demonstrate the feasibility of ground-based weak-lensing scaling relation studies down to cool systems of ~1 keV temperature and highlight that the current data and samples are a limit to our statistical precision. As such, we are unable to determine whether the validity of hydrostatic equilibrium is a function of halo mass. An enlarged sample of cool systems, deeper weak-lensing data, and robust modelling of the selection function will help to explore these issues further. Based on observations obtained with XMM-Newton, an ESA
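
    As an illustration of how such a scaling relation is commonly fitted, the sketch below performs a power-law fit M ∝ T^b in log space on made-up data points; it is not the XXL analysis, and the numbers are not the XXL measurements.

```python
import numpy as np

# Power-law mass-temperature fit in log space on invented data points.

T = np.array([1.2, 1.8, 2.5, 3.3, 4.1, 5.0, 7.0, 9.0])      # keV
M = np.array([0.6, 1.2, 2.2, 3.6, 5.0, 7.5, 14.0, 22.0])    # 10^14 solar masses

logT, logM = np.log10(T), np.log10(M)
b, logA = np.polyfit(logT, logM, 1)                  # slope b and normalization
scatter = np.std(logM - (b * logT + logA)) * np.log(10)  # raw scatter in ln M at fixed T

print(f"slope b = {b:.2f}, scatter sigma_lnM|T = {scatter:.2f}")
```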

  12. Apparatus Measures Thermal Conductance Through a Thin Sample from Cryogenic to Room Temperature

    NASA Technical Reports Server (NTRS)

    Tuttle, James G.

    2009-01-01

    An apparatus allows the measurement of the thermal conductance across a thin sample clamped between metal plates, including thermal boundary resistances. It allows in-situ variation of the clamping force from zero to 30 lb (133.4 N), and variation of the sample temperature between 40 and 300 K. It has a special design feature that minimizes the effect of thermal radiation on this measurement. The apparatus includes a heater plate sandwiched between two identical thin samples. On the side of each sample opposite the heater plate is a cold plate. In order to take data, the heater plate is controlled at a slightly higher temperature than the two cold plates, which are controlled at a single lower temperature. The steady-state controlling power supplied to the hot plate, the area and thickness of samples, and the temperature drop across the samples are then used in a simple calculation of the thermal conductance. The conductance measurements can be taken at arbitrary temperatures down to about 40 K, as the entire setup is cooled by a mechanical cryocooler. The specific geometry combined with the pneumatic clamping force control system and the steady-state temperature control approach make this a unique apparatus.
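
    The steady-state calculation mentioned above can be written down directly: with two identical samples sandwiching the heater plate, each sample carries roughly half of the controlling power. The sketch below assumes this symmetric split; the numbers are illustrative.

```python
# Steady-state conductance from heater power, temperature drop and sample geometry.

def thermal_conductance(power_w, delta_t_k):
    """Conductance per sample, W/K (heater power assumed split between two samples)."""
    return (power_w / 2.0) / delta_t_k

def apparent_conductivity(power_w, delta_t_k, area_m2, thickness_m):
    """Apparent thermal conductivity including boundary resistances, W/(m K)."""
    return (power_w / 2.0) * thickness_m / (area_m2 * delta_t_k)

# Example: 0.2 W heater power, 1.5 K drop, 1 cm^2 sample area, 0.5 mm thickness
print(thermal_conductance(0.2, 1.5), "W/K")
print(round(apparent_conductivity(0.2, 1.5, 1e-4, 5e-4), 4), "W/(m K)")
```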

  13. Remote sensing of sample temperatures in nuclear magnetic resonance using photoluminescence of semiconductor quantum dots.

    PubMed

    Tycko, Robert

    2014-07-01

    Knowledge of sample temperatures during nuclear magnetic resonance (NMR) measurements is important for acquisition of optimal NMR data and proper interpretation of the data. Sample temperatures can be difficult to measure accurately for a variety of reasons, especially because it is generally not possible to make direct contact to the NMR sample during the measurements. Here I show that sample temperatures during magic-angle spinning (MAS) NMR measurements can be determined from temperature-dependent photoluminescence signals of semiconductor quantum dots that are deposited in a thin film on the outer surface of the MAS rotor, using a simple optical fiber-based setup to excite and collect photoluminescence. The accuracy and precision of such temperature measurements can be better than ±5 K over a temperature range that extends from approximately 50 K (-223°C) to well above 310 K (37°C). Importantly, quantum dot photoluminescence can be monitored continuously while NMR measurements are in progress. While this technique is likely to be particularly valuable in low-temperature MAS NMR experiments, including experiments involving dynamic nuclear polarization, it may also be useful in high-temperature MAS NMR and other forms of magnetic resonance.

  14. Remote sensing of sample temperatures in nuclear magnetic resonance using photoluminescence of semiconductor quantum dots

    PubMed Central

    Tycko, Robert

    2014-01-01

    Knowledge of sample temperatures during nuclear magnetic resonance (NMR) measurements is important for acquisition of optimal NMR data and proper interpretation of the data. Sample temperatures can be difficult to measure accurately for a variety of reasons, especially because it is generally not possible to make direct contact to the NMR sample during the measurements. Here I show that sample temperatures during magic-angle spinning (MAS) NMR measurements can be determined from temperature-dependent photoluminescence signals of semiconductor quantum dots that are deposited in a thin film on the outer surface of the MAS rotor, using a simple optical fiber-based setup to excite and collect photoluminescence. The accuracy and precision of such temperature measurements can be better than ±5 K over a temperature range that extends from approximately 50 K (−223° C) to well above 310 K (37° C). Importantly, quantum dot photoluminescence can be monitored continuously while NMR measurements are in progress. While this technique is likely to be particularly valuable in low-temperature MAS NMR experiments, including experiments involving dynamic nuclear polarization, it may also be useful in high-temperature MAS NMR and other forms of magnetic resonance. PMID:24859817

  15. Energy based model for temperature dependent behavior of ferromagnetic materials

    NASA Astrophysics Data System (ADS)

    Sah, Sanjay; Atulasimha, Jayasimha

    2017-03-01

    An energy based model for temperature dependent anhysteretic magnetization curves of ferromagnetic materials is proposed and benchmarked against experimental data. This is based on the calculation of macroscopic magnetic properties by performing an energy weighted average over all possible orientations of the magnetization vector. Most prior approaches that employ this method are unable to independently account for the effect of both inhomogeneity and temperature in performing the averaging necessary to model experimental data. Here we propose a way to account for both effects simultaneously and benchmark the model against experimental data from 5 K to 300 K for two different materials in both annealed (fewer inhomogeneities) and deformed (more inhomogeneities) samples. This demonstrates that this framework is well suited to simulate temperature dependent experimental magnetic behavior.
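
    The sketch below illustrates the basic energy-weighted orientation average that such models build on: the mean projection of the magnetization along the field is computed with Boltzmann weights over all orientations. Only the Zeeman energy is included here (which reduces to the Langevin function); the paper's model additionally treats anisotropy, inhomogeneity, and temperature-dependent parameters, and the material values below are assumptions.

```python
import numpy as np

# Boltzmann-weighted orientation average for the reduced anhysteretic
# magnetization of a moment Ms*V in a field H, Zeeman energy only.

MU0, KB = 4e-7 * np.pi, 1.381e-23

def anhysteretic_m(H, Ms, V, T, n_theta=2000):
    """Reduced magnetization <cos(theta)> for field H (A/m) at temperature T (K)."""
    theta = np.linspace(0.0, np.pi, n_theta)
    E = -MU0 * Ms * V * H * np.cos(theta)                    # Zeeman energy
    w = np.exp(-(E - E.min()) / (KB * T)) * np.sin(theta)    # Boltzmann weight x solid angle
    return np.sum(np.cos(theta) * w) / np.sum(w)             # uniform-grid average

# Example: a ~10 nm region of a ferromagnet at two temperatures (assumed values)
Ms, V = 1.7e6, (10e-9) ** 3
for T in (5.0, 300.0):
    print(T, "K:", round(anhysteretic_m(2e4, Ms, V, T), 3))
```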

  16. Automated sample exchange and tracking system for neutron research at cryogenic temperatures.

    PubMed

    Rix, J E; Weber, J K R; Santodonato, L J; Hill, B; Walker, L M; McPherson, R; Wenzel, J; Hammons, S E; Hodges, J; Rennich, M; Volin, K J

    2007-01-01

    An automated system for sample exchange and tracking in a cryogenic environment and under remote computer control was developed. Up to 24 sample "cans" per cycle can be inserted and retrieved in a programmed sequence. A video camera acquires a unique identification marked on the sample can to provide a record of the sequence. All operations are coordinated via a LABVIEW program that can be operated locally or over a network. The samples are contained in vanadium cans of 6-10 mm in diameter, equipped with a hermetically sealed lid that interfaces with the sample handler. The system uses a closed-cycle refrigerator (CCR) for cooling. The sample was delivered to a precooling location at a temperature of approximately 25 K; after several minutes, it was moved onto a "landing pad" at approximately 10 K that locates the sample in the probe beam. After the sample was released onto the landing pad, the sample handler was retracted. Reading the sample identification and performing the exchange operation takes approximately 2 min. The time to cool the sample from ambient temperature to approximately 10 K was approximately 7 min including precooling time; the cooling time increases to approximately 12 min if precooling is not used. Small differences in cooling rate were observed between sample materials and for different sample can sizes. Filling the sample well and the sample can with low pressure helium is essential to provide heat transfer and to achieve useful cooling rates. A resistive heating coil can be used to offset the refrigeration so that temperatures up to approximately 350 K can be accessed and controlled using a proportional-integral-derivative control loop. The time for the landing pad to cool to approximately 10 K after it has been heated to approximately 240 K was approximately 20 min.

  17. Flight summaries and temperature climatology at airliner cruise altitudes from GASP (Global Atmospheric Sampling Program) data

    NASA Technical Reports Server (NTRS)

    Nastrom, G. D.; Jasperson, W. H.

    1983-01-01

    Temperature data obtained by the Global Atmospheric Sampling Program (GASP) during the period March 1975 to July 1979 are compiled to form flight summaries of static air temperature and a geographic temperature climatology. The flight summaries include the height and location of the coldest observed temperature and the mean flight level, temperature, and standard deviation of temperature for each flight as well as for flight segments. These summaries are ordered by route and month. The temperature climatology was computed from all statistically independent temperature data for each flight. The grid used consists of 5 deg latitude, 30 deg longitude, and 2000 feet vertical resolution from FL270 to FL430 for each month of the year. The number of statistically independent observations, their mean, standard deviation, and the empirical 98, 50, 16, 2, and 0.3 percentiles are presented.

  18. Exploring HP protein models using Wang-Landau sampling

    NASA Astrophysics Data System (ADS)

    Wuest, Thomas; Landau, David P.

    2008-03-01

    The hydrophobic-polar (HP) protein model has become a standard in assessing the efficiency of computational methods for protein structure prediction as well as for exploring the statistical physics of protein folding in general. Numerous methods have been proposed to address the challenges of finding minimal energy conformations within the rough energy landscape of this lattice heteropolymer model. However, only a few studies have been dedicated to the more revealing - but also more demanding - problem of estimating the density of states, which allows access to the thermodynamic properties of a system at any temperature. Here, we show that Wang-Landau sampling, in connection with a suitable move set (``pull moves''), provides a powerful route for the ground state search and the precise determination of the density of states for HP sequences (with up to 100 monomers) in both two and three dimensions. Our procedure possesses an intrinsic simplicity and overcomes the inevitable limitations inherent in other, more tailored approaches. The main advantage lies in its general applicability to a broad range of lattice protein models that go beyond the scope of the HP model.
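
    The sketch below shows the generic Wang-Landau skeleton (random walk in energy, ln g updates, histogram flatness check, modification factor reduction) on a small 2D Ising lattice. It is meant only to illustrate the flat-histogram idea; it does not implement the HP lattice protein or the pull-move set used in the study.

```python
import numpy as np

# Wang-Landau density-of-states estimation on a small 2D Ising lattice (demo).

rng = np.random.default_rng(2)
L = 6
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    return -np.sum(s * np.roll(s, 1, 0)) - np.sum(s * np.roll(s, 1, 1))

E_min, E_max = -2 * L * L, 2 * L * L
n_bins = E_max - E_min + 1
ln_g = np.zeros(n_bins)          # running estimate of ln g(E)
hist = np.zeros(n_bins)          # visit histogram
ln_f = 1.0                       # modification factor (in log form)
E = energy(spins)

while ln_f > 1e-2:               # short demo; production runs go much lower
    for _ in range(10000):
        i, j = rng.integers(0, L, 2)
        dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                                spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        E_new = E + dE
        # Accept with min(1, g(E)/g(E_new)) to drive a flat histogram in energy
        if np.log(rng.random()) < ln_g[E - E_min] - ln_g[E_new - E_min]:
            spins[i, j] *= -1
            E = E_new
        ln_g[E - E_min] += ln_f
        hist[E - E_min] += 1
    visited = hist[hist > 0]
    if visited.size and visited.min() > 0.8 * visited.mean():   # flatness criterion
        hist[:] = 0
        ln_f /= 2.0              # f -> sqrt(f), i.e. ln f -> ln f / 2

print("relative ln g(E) estimated over", np.count_nonzero(ln_g), "energy bins")
```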

  19. Radiation and temperature effects on LDEF fiber optic cable samples. [long duration exposure facility

    NASA Technical Reports Server (NTRS)

    Johnston, Alan R.; Hartmayer, Ron; Bergman, Larry A.

    1992-01-01

    This paper will concentrate on results obtained from the Jet Propulsion Lab (JPL) Fiber Optics Long Duration Exposure Facility (LDEF) Experiment since the June 1991 Experimenters Workshop. Radiation darkening of the laboratory control samples will be compared with the LDEF flight samples. The results of laboratory temperature tests on the flight samples extending over a period of about nine years including the preflight and postflight analysis periods will be described.

  20. New high temperature plasmas and sample introduction systems for analytical atomic emission and mass spectrometry

    NASA Astrophysics Data System (ADS)

    Montaser, A.

    This research follows a multifaceted approach, from theory to practice, to the investigation and development of novel helium plasmas, sample introduction systems, and diagnostic techniques for atomic and mass spectrometries. During the period January 1994 - December 1994, four major sets of challenging research programs were addressed, each including a number of discrete but complementary projects: (1) The first program is concerned with fundamental and analytical investigations of novel atmospheric-pressure helium inductively coupled plasmas (He ICPs) that are suitable for the atomization-excitation-ionization of elements, especially those possessing high excitation and ionization energies, for the purpose of enhancing the sensitivity and selectivity of analytical measurements. (2) The second program includes simulation and computer modeling of He ICPs. The aim is to ease the hunt for new helium plasmas by predicting their structure and their fundamental and analytical properties, without incurring the enormous cost of extensive experimental studies. (3) The third program involves spectroscopic imaging and diagnostic studies of plasma discharges to instantly visualize their prevailing structures, to quantify key fundamental properties, and to verify predictions by mathematical models. (4) The fourth program entails investigation of new, low-cost sample introduction systems that consume micro- to nanoliter quantities of sample solution in plasma spectrometries. A portion of this research involves the development and application of novel diagnostic techniques suitable for probing key fundamental properties of aerosols prior to and after injection into high-temperature plasmas. These efforts, still in progress, collectively offer promise of solving singularly difficult analytical problems that either exist now or are likely to arise in the future in the various fields of energy generation, environmental pollution, material science, biomedicine and nutrition.

  1. Chopped sample heating for quantitative profile analysis of low energy electron diffraction spots at high temperatures

    SciTech Connect

    Kury, P.; Zahl, P.; Horn-von Hoegen, M.; Voges, C.; Frischat, H.; Guenter, H.-L.; Pfnuer, H.; Henzler, M.

    2004-11-01

    Spot profile analysis low energy electron diffraction (SPA-LEED) is one of the most versatile and powerful methods for determining the structure and morphology of surfaces, even at elevated temperatures. In setups where the sample is heated directly by an electric current, the resolution of the diffraction images at higher temperatures can be heavily degraded by the inhomogeneous electric and magnetic fields around the sample. Here we present an easily applicable modification of the common data acquisition hardware of the SPA-LEED which enables the system to work in a pulsed heating mode: instead of heating the sample with a constant current, a square wave is used and electron counting is only performed when the current through the sample vanishes. Thus, undistorted diffraction images can be acquired at high temperatures.

  2. Modelling Brain Temperature and Cerebral Cooling Methods

    NASA Astrophysics Data System (ADS)

    Blowers, Stephen; Valluri, Prashant; Marshall, Ian; Andrews, Peter; Harris, Bridget; Thrippleton, Michael

    2014-11-01

    Direct measurement of cerebral temperature is invasive and impractical, meaning that treatments for reducing core brain temperature rely on predictive mathematical models. Current models rely on continuum equations which heavily simplify the thermal interactions between blood and tissue. A novel two-phase 3D porous-fluid model is developed to address these limitations. The model solves porous flow equations in 3D along with energy transport equations in both the blood and tissue phases, including metabolic heat generation. By incorporating geometry data extracted from MRI scans, 3D vasculature can be inserted into a porous brain structure to realistically represent blood distribution within the brain. Thermal transport and convective heat transfer of blood are therefore solved by means of direct numerical simulation. In application, results show that external scalp cooling has a higher impact on both maximum and average core brain temperatures than previously predicted. Additionally, the extent of alternative treatment methods such as pharyngeal cooling and carotid infusion can be investigated using this model. Acknowledgement: EPSRC DTA.

  3. Frequency sampling in microhistological studies: An alternative model

    USGS Publications Warehouse

    Williams, B.K.

    1987-01-01

    Frequency sampling in microhistological studies is discussed in terms of sampling procedures, statistical properties, and biological inferences. Two sampling approaches are described and contrasted, and some standard methods for improving the stability of density estimators are discussed. Possible sources of difficulty are highlighted in terms of sampling design and statistical analysis. An alternative model is proposed that accounts for 2-stage sampling and yields reasonable, well-behaved estimates of relative densities.

  4. Meth math: modeling temperature responses to methamphetamine.

    PubMed

    Molkov, Yaroslav I; Zaretskaia, Maria V; Zaretsky, Dmitry V

    2014-04-15

    Methamphetamine (Meth) can evoke extreme hyperthermia, which correlates with neurotoxicity and death in laboratory animals and humans. The objective of this study was to uncover the mechanisms of a complex dose dependence of temperature responses to Meth by mathematical modeling of the neuronal circuitry. On the basis of previous studies, we composed an artificial neural network with the core comprising three sequentially connected nodes: excitatory, medullary, and sympathetic preganglionic neuronal (SPN). Meth directly stimulated the excitatory node, an inhibitory drive targeted the medullary node, and, in high doses, an additional excitatory drive affected the SPN node. All model parameters (weights of connections, sensitivities, and time constants) were subject to fitting experimental time series of temperature responses to 1, 3, 5, and 10 mg/kg Meth. Modeling suggested that the temperature response to the lowest dose of Meth, which caused an immediate and short hyperthermia, involves neuronal excitation at a supramedullary level. The delay in response after the intermediate doses of Meth is a result of neuronal inhibition at the medullary level. Finally, the rapid and robust increase in body temperature induced by the highest dose of Meth involves activation of high-dose excitatory drive. The impairment in the inhibitory mechanism can provoke a life-threatening temperature rise and makes it a plausible cause of fatal hyperthermia in Meth users. We expect that studying putative neuronal sites of Meth action and the neuromediators involved in a detailed model of this system may lead to more effective strategies for prevention and treatment of hyperthermia induced by amphetamine-like stimulants.

  5. Spatiotemporal modeling of node temperatures in supercomputers

    DOE PAGES

    Storlie, Curtis Byron; Reich, Brian James; Rust, William Newton; ...

    2016-06-10

    Los Alamos National Laboratory (LANL) is home to many large supercomputing clusters. These clusters require an enormous amount of power (~500-2000 kW each), and most of this energy is converted into heat. Thus, cooling the components of the supercomputer becomes a critical and expensive endeavor. Recently a project was initiated to investigate the effect that changes to the cooling system in a machine room had on three large machines that were housed there. Coupled with this goal was the aim to develop a general good practice for characterizing the effect of cooling changes and monitoring machine node temperatures in this and other machine rooms. This paper focuses on the statistical approach used to quantify the effect that several cooling changes to the room had on the temperatures of the individual nodes of the computers. The largest cluster in the room has 1,600 nodes that run a variety of jobs during general use. Since extreme temperatures are important, a Normal distribution plus a generalized Pareto distribution for the upper tail is used to model the marginal distribution, along with a Gaussian process copula to account for spatio-temporal dependence. A Gaussian Markov random field (GMRF) model is used to model the spatial effects on the node temperatures as the cooling changes take place. This model is then used to assess the condition of the node temperatures after each change to the room. The analysis approach was used to uncover the cause of a problematic episode of overheating nodes on one of the supercomputing clusters. Lastly, this same approach can easily be applied to monitor and investigate cooling systems at other data centers as well.
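
    The tail model named above can be illustrated with a few lines of code: a generalized Pareto distribution is fitted to exceedances of simulated node temperatures over a high threshold and used to estimate an extreme quantile. The data are simulated, and the copula and GMRF components of the full analysis are not included.

```python
import numpy as np
from scipy import stats

# Normal bulk plus generalized Pareto (GPD) upper tail, fitted on simulated data.

rng = np.random.default_rng(3)
temps = rng.normal(60.0, 5.0, 50_000)                 # simulated node temperatures, deg C
temps += rng.exponential(2.0, temps.size) * (rng.random(temps.size) < 0.02)  # rare hot spikes

u = np.quantile(temps, 0.95)                          # tail threshold
exceed = temps[temps > u] - u

shape, loc, scale = stats.genpareto.fit(exceed, floc=0.0)
# Overall 99.9th percentile via the conditional tail quantile above the threshold
p99_9 = u + stats.genpareto.ppf((0.999 - 0.95) / 0.05, shape, loc=0.0, scale=scale)

print(f"threshold u = {u:.1f} C, GPD shape = {shape:.2f}, scale = {scale:.2f}")
print(f"estimated 99.9th percentile node temperature: {p99_9:.1f} C")
```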

  6. Spatiotemporal modeling of node temperatures in supercomputers

    SciTech Connect

    Storlie, Curtis Byron; Reich, Brian James; Rust, William Newton; Ticknor, Lawrence O.; Bonnie, Amanda Marie; Montoya, Andrew J.; Michalak, Sarah E.

    2016-06-10

    Los Alamos National Laboratory (LANL) is home to many large supercomputing clusters. These clusters require an enormous amount of power (~500-2000 kW each), and most of this energy is converted into heat. Thus, cooling the components of the supercomputer becomes a critical and expensive endeavor. Recently a project was initiated to investigate the effect that changes to the cooling system in a machine room had on three large machines that were housed there. Coupled with this goal was the aim to develop a general good practice for characterizing the effect of cooling changes and monitoring machine node temperatures in this and other machine rooms. This paper focuses on the statistical approach used to quantify the effect that several cooling changes to the room had on the temperatures of the individual nodes of the computers. The largest cluster in the room has 1,600 nodes that run a variety of jobs during general use. Since extreme temperatures are important, a Normal distribution plus a generalized Pareto distribution for the upper tail is used to model the marginal distribution, along with a Gaussian process copula to account for spatio-temporal dependence. A Gaussian Markov random field (GMRF) model is used to model the spatial effects on the node temperatures as the cooling changes take place. This model is then used to assess the condition of the node temperatures after each change to the room. The analysis approach was used to uncover the cause of a problematic episode of overheating nodes on one of the supercomputing clusters. Lastly, this same approach can easily be applied to monitor and investigate cooling systems at other data centers as well.

  7. Meth math: modeling temperature responses to methamphetamine

    PubMed Central

    Molkov, Yaroslav I.; Zaretskaia, Maria V.

    2014-01-01

    Methamphetamine (Meth) can evoke extreme hyperthermia, which correlates with neurotoxicity and death in laboratory animals and humans. The objective of this study was to uncover the mechanisms of a complex dose dependence of temperature responses to Meth by mathematical modeling of the neuronal circuitry. On the basis of previous studies, we composed an artificial neural network with the core comprising three sequentially connected nodes: excitatory, medullary, and sympathetic preganglionic neuronal (SPN). Meth directly stimulated the excitatory node, an inhibitory drive targeted the medullary node, and, in high doses, an additional excitatory drive affected the SPN node. All model parameters (weights of connections, sensitivities, and time constants) were subject to fitting experimental time series of temperature responses to 1, 3, 5, and 10 mg/kg Meth. Modeling suggested that the temperature response to the lowest dose of Meth, which caused an immediate and short hyperthermia, involves neuronal excitation at a supramedullary level. The delay in response after the intermediate doses of Meth is a result of neuronal inhibition at the medullary level. Finally, the rapid and robust increase in body temperature induced by the highest dose of Meth involves activation of high-dose excitatory drive. The impairment in the inhibitory mechanism can provoke a life-threatening temperature rise and makes it a plausible cause of fatal hyperthermia in Meth users. We expect that studying putative neuronal sites of Meth action and the neuromediators involved in a detailed model of this system may lead to more effective strategies for prevention and treatment of hyperthermia induced by amphetamine-like stimulants. PMID:24500434

  8. Temperature-dependent magnetic properties of individual glass spherules, Apollo 11, 12, and 14 lunar samples.

    NASA Technical Reports Server (NTRS)

    Thorpe, A. N.; Sullivan, S.; Alexander, C. C.; Senftle, F. E.; Dwornik, E. J.

    1972-01-01

    The magnetic susceptibility of 11 glass spherules from the Apollo 14 lunar fines has been measured from room temperature to 4 K. Data taken at room temperature, 77 K, and 4.2 K show that the soft saturation magnetization was temperature independent. In the temperature range 300 to 77 K the temperature-dependent component of the magnetic susceptibility obeys the Curie law. Susceptibility measurements on these same specimens, and on 14 similar spherules from the Apollo 11 and 12 missions, show a Curie-Weiss relation at temperatures below 77 K with a Weiss temperature of 3-7 degrees, in contrast to the 2-3 degrees found for tektites and synthetic glasses of tektite composition. A proposed model and a theoretical expression closely predict the variation of the susceptibility of the glass spherules with temperature.
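
    The Curie-Weiss behaviour reported above, χ(T) = χ0 + C/(T − θ), can be recovered from susceptibility-versus-temperature data with a standard nonlinear fit; the sketch below uses synthetic values, not the Apollo measurements:

      # Fit the Curie-Weiss form chi(T) = chi0 + C / (T - theta) to synthetic
      # susceptibility data; theta plays the role of the 3-7 K Weiss temperature
      # reported for the lunar spherules.
      import numpy as np
      from scipy.optimize import curve_fit

      def curie_weiss(T, chi0, C, theta):
          return chi0 + C / (T - theta)

      T = np.linspace(10.0, 77.0, 40)                       # K
      rng = np.random.default_rng(1)
      chi = curie_weiss(T, 1e-6, 2e-4, 5.0) * (1 + 0.02 * rng.standard_normal(T.size))

      popt, _ = curve_fit(curie_weiss, T, chi, p0=[1e-6, 1e-4, 2.0])
      print(f"chi0 = {popt[0]:.2e}, C = {popt[1]:.2e}, Weiss temperature = {popt[2]:.1f} K")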

  9. Modeling quantum fluid dynamics at nonzero temperatures

    PubMed Central

    Berloff, Natalia G.; Brachet, Marc; Proukakis, Nick P.

    2014-01-01

    The detailed understanding of the intricate dynamics of quantum fluids, in particular in the rapidly growing subfield of quantum turbulence which elucidates the evolution of a vortex tangle in a superfluid, requires an in-depth understanding of the role of finite temperature in such systems. The Landau two-fluid model is the most successful hydrodynamical theory of superfluid helium, but by the nature of the scale separations it cannot give an adequate description of the processes involving vortex dynamics and interactions. In our contribution we introduce a framework based on a nonlinear classical-field equation that is mathematically identical to the Landau model and provides a mechanism for severing and coalescence of vortex lines, so that the questions related to the behavior of quantized vortices can be addressed self-consistently. The correct equation of state as well as nonlocality of interactions that leads to the existence of the roton minimum can also be introduced in such a description. We review and apply the ideas developed for finite-temperature description of weakly interacting Bose gases as possible extensions and numerical refinements of the proposed method. We apply this method to elucidate the behavior of the vortices during expansion and contraction following the change in applied pressure. We show that at low temperatures, during the contraction of the vortex core as the negative pressure grows back to positive values, the vortex line density grows through a mechanism of vortex multiplication. This mechanism is suppressed at high temperatures. PMID:24704874

  10. Disposable sample holder for high temperature measurements in MPMS superconducting quantum interference device magnetometers.

    PubMed

    Sesé, J; Bartolomé, J; Rillo, C

    2007-04-01

    A sample holder for high-temperature (above 300 K) measurements in MPMS superconducting quantum interference device magnetometers is described that is suitable for samples in either solid or powder form. The holder is homogeneous as seen by the gradiometer coil, and this results in a contribution to the background signal that is below the instrument noise at any field (<10⁻⁹ A m² at μ0H = 200 mT). Further, it is inexpensive and simple to fabricate, and it can be considered a disposable sample holder that avoids cross-contamination between different samples.

  11. Modeling Low-temperature Geochemical Processes

    NASA Astrophysics Data System (ADS)

    Nordstrom, D. K.

    2003-12-01

    Geochemical modeling has become a popular and useful tool for a wide range of applications, from research on the fundamental processes of water-rock interactions to regulatory requirements and decisions regarding permits for industrial and hazardous wastes. In low-temperature environments, generally thought of as those in the temperature range of 0-100 °C and close to atmospheric pressure (1 atm=1.01325 bar=101,325 Pa), complex hydrobiogeochemical reactions participate in an array of interconnected processes that affect us, and that, in turn, we affect. Understanding these complex processes often requires tools that are sufficiently sophisticated to portray multicomponent, multiphase chemical reactions yet transparent enough to reveal the main driving forces. Geochemical models are such tools. The major processes that they are required to model include mineral dissolution and precipitation; aqueous inorganic speciation and complexation; solute adsorption and desorption; ion exchange; oxidation-reduction (redox) transformations; gas uptake or production; organic matter speciation and complexation; evaporation; dilution; water mixing; reaction during fluid flow; reaction involving biotic interactions; and photoreaction. These processes occur in rain, snow, fog, dry atmosphere, soils, bedrock weathering, streams, rivers, lakes, groundwaters, estuaries, brines, and diagenetic environments. Geochemical modeling attempts to understand the redistribution of elements and compounds, through anthropogenic and natural means, over a large range of scales, from nanometer to global. "Aqueous geochemistry" and "environmental geochemistry" are often used interchangeably with "low-temperature geochemistry" to emphasize hydrologic or environmental objectives. Recognition of the strategy or philosophy behind the use of geochemical modeling is not often discussed or explicitly described. Plummer (1984, 1992) and Parkhurst and Plummer (1993) compare and contrast two approaches for

  12. Chamber validation of a passive air sampling device for measuring ambient VOCs at subzero temperatures

    SciTech Connect

    Gagner, R.V.; Hrudey, S.E.

    1997-12-31

    The performance of the 3M Organic Vapor Monitor No. 3500 was evaluated through experiments conducted in permeation-tube-generated atmospheres in a controlled chamber environment. A range of typical ambient benzene and toluene concentrations was produced in the chamber to test the consistency of the sampling rate under different exposure levels. All tests were repeated at room temperature and under subzero Celsius conditions to determine the effect of lowered temperatures on the performance of the badge. As expected, relatively low concentrations of benzene and toluene produced small incremental increases in analyte above the background levels inherent to the badge and analytical methods, resulting in a loss of method precision. The badge sampling rate was not significantly affected by decreases in temperature down to minus fifteen degrees Celsius. This finding was not consistent with the theoretically based temperature correction factors identified in the product literature.

  13. Comparison of single-point and continuous sampling methods for estimating residential indoor temperature and humidity

    PubMed Central

    Johnston, James D.; Magnusson, Brianna M.; Eggett, Dennis; Collingwood, Scott C.; Bernhardt, Scott A.

    2016-01-01

    Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hrs, and 12-days) in 9 northern Utah homes, from March – June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions. PMID:26030088
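
    The r and β values quoted above pair a Pearson correlation with a simple regression slope; the comparison can be reproduced on any set of paired measurements along these lines (synthetic numbers here, and the regression direction is assumed):

      # Compare single-point measurements against data-logger means: Pearson r
      # and the slope (beta) of a simple linear regression, as reported above.
      # Synthetic paired data stand in for the Utah measurements.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      logger_mean = rng.normal(21.0, 2.0, 114)              # e.g. 30-min mean air temperature (C)
      single_point = 0.8 * logger_mean + 4.0 + rng.normal(0, 1.0, 114)

      r, _ = stats.pearsonr(single_point, logger_mean)
      slope, intercept, *_ = stats.linregress(single_point, logger_mean)
      print(f"r = {r:.2f}, beta = {slope:.2f}")

      # Fraction of deviations beyond +/-5 units, analogous to the RH tallies above:
      deviation = single_point - logger_mean
      print(f"> +/-5 units: {np.mean(np.abs(deviation) > 5):.1%}")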

  14. Comparison of Single-Point and Continuous Sampling Methods for Estimating Residential Indoor Temperature and Humidity.

    PubMed

    Johnston, James D; Magnusson, Brianna M; Eggett, Dennis; Collingwood, Scott C; Bernhardt, Scott A

    2015-01-01

    Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hr, and 12-days) in 9 northern Utah homes, from March-June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions.

  15. The use of ESR technique for assessment of heating temperatures of archaeological lentil samples

    NASA Astrophysics Data System (ADS)

    Aydaş, Canan; Engin, Birol; Dönmez, Emel Oybak; Belli, Oktay

    2010-01-01

    Heat-induced paramagnetic centers in modern and archaeological lentils (Lens culinaris Medik.) were studied by the X-band (9.3 GHz) electron spin resonance (ESR) technique. The modern red lentil samples were heated in an electrical furnace at increasing temperatures in the range 70-500 °C. The ESR spectral parameters (the intensity, g-value and peak-to-peak line width) of the heat-induced organic radicals were investigated for modern red lentil (Lens culinaris Medik.) samples. The obtained ESR spectra indicate that the relative number of heat-induced paramagnetic species and the peak-to-peak line widths depend on the temperature and heating time of the modern lentil. The g-values also depend on the heating temperature but not on the heating time. Heated modern red lentils produced a range of organic radicals with g-values from g = 2.0062 to 2.0035. ESR signals of carbonised archaeological lentil samples from two archaeological deposits of the Van province in Turkey were studied, and g-values, peak-to-peak line widths, intensities and elemental compositions were compared with those obtained for modern samples in order to assess at which temperature these archaeological lentils were heated in prehistoric sites. The maximum temperatures reached during the previous heating of the carbonised UA5 and Y11 lentil seeds are estimated to be about 500 °C and above 500 °C, respectively.
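
    The g-values above follow from the standard ESR resonance condition hν = gμBB at the 9.3 GHz X-band frequency; a minimal calculation, with an illustrative resonance field, is:

      # Compute an ESR g-value from the resonance condition h*nu = g * mu_B * B
      # at X-band (9.3 GHz). The resonance field below is illustrative only.
      h = 6.62607015e-34        # Planck constant, J s
      mu_B = 9.2740100783e-24   # Bohr magneton, J/T
      nu = 9.3e9                # microwave frequency, Hz

      B_res = 0.3315            # hypothetical resonance field, T
      g = h * nu / (mu_B * B_res)
      print(f"g = {g:.4f}")     # values near 2.003-2.006 match the radicals reported above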

  16. The use of variable temperature and magic-angle sample spinning in studies of fulvic acids

    USGS Publications Warehouse

    Earl, W.L.; Wershaw, R. L.; Thorn, K.A.

    1987-01-01

    Intensity distortions and poor signal to noise in the cross-polarization magic-angle sample spinning NMR of fulvic acids were investigated and attributed to molecular mobility in these ostensibly "solid" materials. We have shown that inefficiencies in cross polarization can be overcome by lowering the sample temperature to about -60 °C. These difficulties can be generalized to many other synthetic and natural products. The use of variable temperature and cross-polarization intensity as a function of contact time can yield valuable qualitative information which can aid in the characterization of many materials. © 1987.

  17. Temperature-controlled neutron reflectometry sample cell suitable for study of photoactive thin films

    SciTech Connect

    Yager, Kevin G.; Tanchak, Oleh M.; Barrett, Christopher J.; Watson, Mike J.; Fritzsche, Helmut

    2006-04-15

    We describe a novel cell design intended for the study of photoactive materials using neutron reflectometry. The cell can maintain sample temperature and control of ambient atmospheric environment. Critically, the cell is built with an optical port, enabling light irradiation or light probing of the sample, simultaneous with neutron reflectivity measurements. The ability to measure neutron reflectivity with simultaneous temperature ramping and/or light illumination presents unique opportunities for measuring photoactive materials. To validate the cell design, we present preliminary results measuring the photoexpansion of thin films of azobenzene polymer.

  18. Method for determining temperatures and heat transfer coefficients with a superconductive sample

    SciTech Connect

    Gentile, D.; Hassenzahl, W.; Polak, M.

    1980-05-01

    The method that is described here uses the current-sharing characteristic of a copper-stabilized, superconductive NbTi wire to determine the temperature. The measurements were made for magnetic fields up to 6 T and the precision actually attained with this method is about 0.1 K. It is an improvement over one that has been used at 4.2 K to measure transient heat transfer in that all the parameters of the sample are well known and the current in the sample is measured directly. The response time of the probe is less than 5 μs and it has been used to measure temperatures during heat pulses as short as 20 μs. Temperature measurements between 1.6 and 8.5 K are described. An accurate formula based on the current and electric field along the sample has been developed for temperatures between 2.5 K and the critical temperature of the conductor, which, of course, depends on the applied field. Also described is a graphical method that must be used below 2.5 K, where the critical current is not a linear function of temperature.

  19. Sample size matters: Investigating the optimal sample size for a logistic regression debris flow susceptibility model

    NASA Astrophysics Data System (ADS)

    Heckmann, Tobias; Gegg, Katharina; Becht, Michael

    2013-04-01

    Statistical approaches to landslide susceptibility modelling on the catchment and regional scale are used very frequently compared to heuristic and physically based approaches. In the present study, we deal with the problem of the optimal sample size for a logistic regression model. More specifically, a stepwise approach has been chosen in order to select those independent variables (from a number of derivatives of a digital elevation model and landcover data) that explain best the spatial distribution of debris flow initiation zones in two neighbouring central alpine catchments in Austria (used mutually for model calculation and validation). In order to minimise problems arising from spatial autocorrelation, we sample a single raster cell from each debris flow initiation zone within an inventory. In addition, as suggested by previous work using the "rare events logistic regression" approach, we take a sample of the remaining "non-event" raster cells. The recommendations given in the literature on the size of this sample appear to be motivated by practical considerations, e.g. the time and cost of acquiring data for non-event cases, which do not apply to the case of spatial data. In our study, we aim at finding empirically an "optimal" sample size in order to avoid two problems: First, a sample too large will violate the independent sample assumption as the independent variables are spatially autocorrelated; hence, a variogram analysis leads to a sample size threshold above which the average distance between sampled cells falls below the autocorrelation range of the independent variables. Second, if the sample is too small, repeated sampling will lead to very different results, i.e. the independent variables and hence the result of a single model calculation will be extremely dependent on the choice of non-event cells. Using a Monte-Carlo analysis with stepwise logistic regression, 1000 models are calculated for a wide range of sample sizes. For each sample size
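
    A stripped-down version of the Monte Carlo idea: repeatedly draw non-event samples of a given size, fit a logistic regression, and track how much the coefficients vary. Synthetic predictors are used, and the stepwise selection and variogram steps are omitted:

      # Stripped-down Monte Carlo check of coefficient stability versus sample
      # size for a logistic regression, in the spirit of the study above.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      n_events, n_pool = 200, 50000
      X_events = rng.normal(1.0, 1.0, size=(n_events, 3))           # debris-flow cells
      X_pool = rng.normal(0.0, 1.0, size=(n_pool, 3))               # non-event cells

      for n_nonevent in (200, 1000, 5000):
          coefs = []
          for _ in range(100):                                      # 100 repeated samples
              idx = rng.choice(n_pool, size=n_nonevent, replace=False)
              X = np.vstack([X_events, X_pool[idx]])
              y = np.r_[np.ones(n_events), np.zeros(n_nonevent)]
              model = LogisticRegression(max_iter=1000).fit(X, y)
              coefs.append(model.coef_.ravel())
          spread = np.std(coefs, axis=0)
          print(f"n_nonevent={n_nonevent:5d}  coefficient std devs: {spread.round(3)}")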

  20. Estimation of sampling error uncertainties in observed surface air temperature change in China

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun

    2016-06-01

    This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with the maximum value exceeding 2.0 K², while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K². In general, negative temperature anomalies existed in each month prior to the 1980s, and a warming began thereafter, which accelerated in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.

  1. Effects of different temperature treatments on biological ice nuclei in snow samples

    NASA Astrophysics Data System (ADS)

    Hara, Kazutaka; Maki, Teruya; Kakikawa, Makiko; Kobayashi, Fumihisa; Matsuki, Atsushi

    2016-09-01

    The heat tolerance of biological ice nucleation activity (INA) depends on the type of ice nucleus. Different temperature treatments may therefore cause varying degrees of inactivation of biological ice nuclei (IN) in precipitation samples. In this study, we measured IN concentration and bacterial INA in snow samples using a drop freezing assay, and compared the results for unheated snow and snow treated at 40 °C and 90 °C. At a measured temperature of -7 °C, the concentration of IN in untreated snow was 100-570 L⁻¹, whereas the concentration in snow treated at 40 °C and 90 °C was 31-270 L⁻¹ and 2.5-14 L⁻¹, respectively. Heat-sensitive IN inactivated by heating at 40 °C were predominant, accounting for 23-78% of the IN at -7 °C relative to untreated samples. Ice nucleation active Pseudomonas strains were also isolated from the snow samples, and heating at 40 °C and 90 °C inactivated these microorganisms. Consequently, different temperature treatments induced varying degrees of inactivation of IN in snow samples. Differences in the concentration of IN across a range of treatment temperatures might reflect the abundance of different heat-sensitive biological IN components.

  2. Preliminary Proactive Sample Size Determination for Confirmatory Factor Analysis Models

    ERIC Educational Resources Information Center

    Koran, Jennifer

    2016-01-01

    Proactive preliminary minimum sample size determination can be useful for the early planning stages of a latent variable modeling study to set a realistic scope, long before the model and population are finalized. This study examined existing methods and proposed a new method for proactive preliminary minimum sample size determination.

  3. Dependence of Characteristic Diode Parameters in Ni/n-GaAs Contacts on Thermal Annealing and Sample Temperature

    NASA Astrophysics Data System (ADS)

    Yildirim, N.; Dogan, H.; Korkut, H.; Turut, A.

    We have prepared sputtered Ni/n-GaAs Schottky diodes consisting of an as-deposited sample and samples annealed at 200 and 400°C for 2 min. The effect of thermal annealing on the temperature-dependent current-voltage (I-V) characteristics of the diodes has been investigated experimentally. The I-V characteristics were measured in the temperature range of 60-320 K in steps of 20 K. The barrier height (BH) at 300 K increased slightly, from 0.84 eV (as-deposited sample) to 0.88 eV, when the contact was annealed at 400°C. The SBH increased, whereas the ideality factor decreased, with increasing annealing temperature at each sample temperature. The I-V measurements showed a dependence of the ideality factor n and the BH on the measuring temperature that cannot be explained by classical thermionic emission theory. The experimental data are consistent with the presence of an inhomogeneity of the SBHs. Therefore, the temperature-dependent I-V characteristics of the diodes have been discussed in terms of the multi-Gaussian distribution model. The experimental data agree well with the fitting curves over the whole measurement temperature range, indicating that the SBH inhomogeneity of our as-deposited and annealed Ni/n-GaAs SBDs can be well described by a double-Gaussian distribution. The slope of the nT versus T plot for the samples approaches unity with increasing annealing temperature and becomes parallel to the ideal Schottky contact behavior for the 400°C annealed diode. Thus, it is concluded that the thermal annealing process transforms the metal-semiconductor contacts into thermally stable Schottky contacts.
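
    In the barrier-inhomogeneity picture invoked above, the apparent barrier height falls linearly with 1/(2kT), with slope −σ²; extracting the mean barrier and σ is then a straight-line fit, sketched here on synthetic values rather than the measured Ni/n-GaAs data:

      # Barrier-height inhomogeneity (Gaussian picture): the apparent barrier
      # height varies as phi_ap = phi_mean - sigma**2 / (2*kT), so a straight-line
      # fit of phi_ap against 1/(2*kT) yields phi_mean and sigma.
      import numpy as np

      k_eV = 8.617333262e-5                       # Boltzmann constant, eV/K
      T = np.arange(60.0, 321.0, 20.0)            # measurement temperatures, K

      phi_mean, sigma = 0.92, 0.10                # assumed mean barrier (eV) and spread (V)
      phi_ap = phi_mean - sigma**2 / (2 * k_eV * T)

      x = 1.0 / (2 * k_eV * T)
      slope, intercept = np.polyfit(x, phi_ap, 1)
      print(f"phi_mean = {intercept:.3f} eV, sigma = {np.sqrt(-slope):.3f} V")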

  4. Method and apparatus for transport, introduction, atomization and excitation of emission spectrum for quantitative analysis of high temperature gas sample streams containing vapor and particulates without degradation of sample stream temperature

    SciTech Connect

    Eckels, D.E.; Hass, W.J.

    1989-05-30

    A sample transport, sample introduction, and flame excitation system is described for spectrometric analysis of high temperature gas streams which eliminates degradation of the sample stream by condensation losses. 4 figs.

  5. Temperature response functions introduce high uncertainty in modelled carbon stocks in cold temperature regimes

    NASA Astrophysics Data System (ADS)

    Portner, H.; Bugmann, H.; Wolf, A.

    2010-11-01

    Models of carbon cycling in terrestrial ecosystems contain formulations for the dependence of respiration on temperature, but the sensitivity of predicted carbon pools and fluxes to these formulations and their parameterization is not well understood. Thus, we performed an uncertainty analysis of soil organic matter decomposition with respect to its temperature dependency using the ecosystem model LPJ-GUESS. We used five temperature response functions (Exponential, Arrhenius, Lloyd-Taylor, Gaussian, Van't Hoff). We determined the parameter confidence ranges of the formulations by nonlinear regression analysis based on eight experimental datasets from Northern Hemisphere ecosystems. We sampled over the confidence ranges of the parameters and ran simulations for each pair of temperature response function and calibration site. We analyzed both the long-term and the short-term heterotrophic soil carbon dynamics over a virtual elevation gradient in southern Switzerland. The temperature relationship of Lloyd-Taylor fitted the overall data set best as the other functions either resulted in poor fits (Exponential, Arrhenius) or were not applicable for all datasets (Gaussian, Van't Hoff). There were two main sources of uncertainty for model simulations: (1) the lack of confidence in the parameter estimates of the temperature response, which increased with increasing temperature, and (2) the size of the simulated soil carbon pools, which increased with elevation, as slower turn-over times lead to higher carbon stocks and higher associated uncertainties. Our results therefore indicate that such projections are more uncertain for higher elevations and hence also higher latitudes, which are of key importance for the global terrestrial carbon budget.
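
    The response functions named above have closed forms, and the calibration step amounts to nonlinear regression of respiration against temperature; a sketch with the Lloyd-Taylor and simple exponential forms on synthetic data:

      # Fit two of the temperature response functions named above (Lloyd-Taylor
      # and a simple exponential) to synthetic respiration data by nonlinear
      # regression, mirroring the calibration step described.
      import numpy as np
      from scipy.optimize import curve_fit

      def lloyd_taylor(T, R10, E0, T0=227.13):
          # T in kelvin; R10 is respiration at 283.15 K (10 C)
          return R10 * np.exp(E0 * (1.0 / (283.15 - T0) - 1.0 / (T - T0)))

      def exponential(T, a, b):
          return a * np.exp(b * (T - 273.15))

      T = np.linspace(263.0, 303.0, 50)                     # -10 C to 30 C
      rng = np.random.default_rng(4)
      resp = lloyd_taylor(T, 2.0, 308.56) * (1 + 0.05 * rng.standard_normal(T.size))

      for name, func, p0 in [("Lloyd-Taylor", lloyd_taylor, [1.0, 300.0]),
                             ("Exponential", exponential, [1.0, 0.05])]:
          popt, pcov = curve_fit(func, T, resp, p0=p0)
          perr = np.sqrt(np.diag(pcov))                     # parameter confidence ranges
          print(f"{name:12s} params = {popt.round(2)} +/- {perr.round(2)}")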

  6. Selective determination of methyl mercury in biological samples by means of programmed temperature gas chromatography.

    PubMed

    Lorenzo, R A; Carro, A; Rubí, E; Casais, C; Cela, R

    1993-01-01

    A programmed temperature gas chromatographic method is presented by which it is possible to carry out routine analysis of methyl mercury in biological samples prepared according to the AOAC official first action recommendations without the need for preliminary treatment of the columns. This method greatly extends the life of the columns as well as the useful time for analysis; it has good linearity and repeatability. With the proposed method a total of 36 samples can be analyzed daily.

  7. Preferential sampling and Bayesian geostatistics: Statistical modeling and examples.

    PubMed

    Cecconi, Lorenzo; Grisotto, Laura; Catelan, Dolores; Lagazio, Corrado; Berrocal, Veronica; Biggeri, Annibale

    2016-08-01

    Preferential sampling refers to any situation in which the spatial process and the sampling locations are not stochastically independent. In this paper, we present two examples of geostatistical analysis in which the usual assumption of stochastic independence between the point process and the measurement process is violated. To account for preferential sampling, we specify a flexible and general Bayesian geostatistical model that includes a shared spatial random component. We apply the proposed model to two different case studies that allow us to highlight three different modeling and inferential aspects of geostatistical modeling under preferential sampling: (1) continuous or finite spatial sampling frame; (2) underlying causal model and relevant covariates; and (3) inferential goals related to mean prediction surface or prediction uncertainty.

  8. TEMPERATURE HISTORY AND DYNAMICAL EVOLUTION OF (101955) 1999 RQ 36: A POTENTIAL TARGET FOR SAMPLE RETURN FROM A PRIMITIVE ASTEROID

    SciTech Connect

    Delbo, Marco; Michel, Patrick

    2011-02-20

    It has recently been shown that near-Earth objects (NEOs) have a temperature history, due to radiative heating by the Sun, that is non-trivially correlated with their present orbits. This is because the perihelion distance of NEOs varies as a consequence of dynamical mechanisms, such as resonances and close encounters with planets. Thus, it is worth investigating the temperature history of NEOs that are potential targets of space missions devoted to returning samples of prebiotic organic compounds. Some of these compounds, expected to be found on NEOs of primitive composition, break up at moderate temperatures, e.g., 300-670 K. Using a model of the orbital evolution of NEOs and thermal models, we studied the temperature history of (101955) 1999 RQ36 (the primary target of the mission OSIRIS-REx, proposed in the New Frontiers program of NASA). Assuming that the same material always lies on the surface (i.e., there is no regolith turnover), our results suggest that the temperatures reached during its past evolution affected the stability of some organic compounds at the surface (e.g., there is a 50% probability that the surface of 1999 RQ36 was heated to temperatures ≥500 K). However, the temperature drops rapidly with depth: the regolith at a depth of 3-5 cm, which is not considered difficult to reach with the current designs of sampling devices, has experienced temperatures about 100 K below those at the surface. This is sufficient to protect some subsurface organics from thermal breakup.

  9. Thermal Response Modeling System for a Mars Sample Return Vehicle

    NASA Technical Reports Server (NTRS)

    Chen, Y.-K.; Miles, Frank S.; Arnold, Jim (Technical Monitor)

    2001-01-01

    A multi-dimensional, coupled thermal response modeling system for analysis of hypersonic entry vehicles is presented. The system consists of a high fidelity Navier-Stokes equation solver (GIANTS), a two-dimensional implicit thermal response, pyrolysis and ablation program (TITAN), and a commercial finite-element thermal and mechanical analysis code (MARC). The simulations performed by this integrated system include hypersonic flowfield, fluid and solid interaction, ablation, shape change, pyrolysis gas generation and flow, and thermal response of heatshield and structure. The thermal response of the heatshield is simulated using TITAN, and that of the underlying structure is simulated using MARC. The ablating heatshield is treated as an outer boundary condition of the structure, and continuity conditions of temperature and heat flux are imposed at the interface between TITAN and MARC. Aerothermal environments with fluid and solid interaction are predicted by coupling TITAN and GIANTS through surface energy balance equations. With this integrated system, the aerothermal environments for an entry vehicle and the thermal response of the entire vehicle can be obtained simultaneously. Representative computations for a flat-faced arc-jet test model and a proposed Mars sample return capsule are presented and discussed.

  10. Thermal Response Modeling System for a Mars Sample Return Vehicle

    NASA Technical Reports Server (NTRS)

    Chen, Y.-K.; Milos, F. S.

    2002-01-01

    A multi-dimensional, coupled thermal response modeling system for analysis of hypersonic entry vehicles is presented. The system consists of a high fidelity Navier-Stokes equation solver (GIANTS), a two-dimensional implicit thermal response, pyrolysis and ablation program (TITAN), and a commercial finite element thermal and mechanical analysis code (MARC). The simulations performed by this integrated system include hypersonic flowfield, fluid and solid interaction, ablation, shape change, pyrolysis gas generation and flow, and thermal response of heatshield and structure. The thermal response of the heatshield is simulated using TITAN, and that of the underlying structure is simulated using MARC. The ablating heatshield is treated as an outer boundary condition of the structure, and continuity conditions of temperature and heat flux are imposed at the interface between TITAN and MARC. Aerothermal environments with fluid and solid interaction are predicted by coupling TITAN and GIANTS through surface energy balance equations. With this integrated system, the aerothermal environments for an entry vehicle and the thermal response of the entire vehicle can be obtained simultaneously. Representative computations for a flat-faced arc-jet test model and a proposed Mars sample return capsule are presented and discussed.

  11. Electric transport measurements on bulk, polycrystalline MgB2 samples prepared at various reaction temperatures

    NASA Astrophysics Data System (ADS)

    Wiederhold, A.; Koblischka, M. R.; Inoue, K.; Muralidhar, M.; Murakami, M.; Hartmann, U.

    2016-03-01

    A series of disk-shaped, bulk MgB2 superconductors (sample diameter up to 4 cm) was prepared in order to improve the performance for superconducting super-magnets. Several samples were fabricated using a solid state reaction in pure Ar atmosphere from 750 to 950°C in order to determine the optimum processing parameters to obtain the highest critical current density as well as large trapped field values. Additional samples were prepared with silver (up to 10 wt.%) added to the Mg and B powder. Magneto-resistance data and I/V-characteristics were recorded using an Oxford Instruments Teslatron system. From Arrhenius plots, we determine the TAFF pinning potential, U0. The I/V-characteristics yield detailed information on the current flow through the polycrystalline samples. The current flow is influenced by the presence of pores in the samples. Our analysis of the achieved critical currents together with a thorough microstructure investigation reveals that the samples prepared at temperatures between 775°C and 805°C exhibit the smallest grains and the best connectivity between them, while the samples fabricated at higher reaction temperatures show a reduced connectivity and lower pinning potential. Doping the samples with silver leads to a considerable increase of the pinning potential and hence of the critical current densities.
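
    The TAFF pinning potential U0 mentioned above is conventionally taken from the slope of an Arrhenius plot of ln R against 1/T; the extraction is a linear fit, sketched here on synthetic resistance data rather than the MgB2 measurements:

      # Extract the TAFF pinning potential U0 from an Arrhenius plot,
      # ln(R) versus 1/T: the slope is -U0/kB, so U0/kB (in kelvin) is just
      # the negative slope. Synthetic resistance data only.
      import numpy as np

      rng = np.random.default_rng(5)
      U0_over_kB = 2500.0                                   # assumed pinning potential, K
      T = np.linspace(25.0, 35.0, 30)                       # K, in the flux-flow regime
      R = 1e-3 * np.exp(-U0_over_kB / T) * (1 + 0.02 * rng.standard_normal(T.size))

      slope, intercept = np.polyfit(1.0 / T, np.log(R), 1)
      print(f"U0/kB = {-slope:.0f} K")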

  12. Temperature influences in receiver clock modelling

    NASA Astrophysics Data System (ADS)

    Wang, Kan; Meindl, Michael; Rothacher, Markus; Schoenemann, Erik; Enderle, Werner

    2016-04-01

    In Precise Point Positioning (PPP), hardware delays at the receiver site (receiver, cables, antenna, …) are always difficult to separate from the estimated receiver clock parameters. As a result, they are partially or fully contained in the estimated "apparent" clocks and will influence the deterministic and stochastic modelling of the receiver clock behaviour. In this contribution, using three years of data, the receiver clock corrections of a set of high-precision Hydrogen Masers (H-Masers) connected to stations of the ESA/ESOC network and the International GNSS Service (IGS) are first characterized in terms of clock offsets, drifts, modified Allan deviations and stochastic parameters. In a second step, the apparent behaviour of the clocks is modelled with the help of a low-order polynomial and a known temperature coefficient (Weinbach, 2013). The correlations between the temperature and the hardware delays generated by different types of antennae are then analysed over daily, 3-day and weekly time intervals. The outcome of these analyses is crucial if we intend to model the receiver clocks in the ground station network to improve the estimation of station-related parameters like coordinates, troposphere zenith delays and ambiguities. References: Weinbach, U. (2013) Feasibility and impact of receiver clock modeling in precise GPS data analysis. Dissertation, Leibniz Universität Hannover, Germany.
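
    The deterministic part of the clock model described above, a low-order polynomial plus a temperature term, reduces to linear least squares; the sketch below uses synthetic clock corrections and temperatures, not the ESA/ESOC or IGS data:

      # Deterministic receiver clock model: low-order polynomial in time plus a
      # temperature term, estimated by linear least squares. Synthetic data only.
      import numpy as np

      rng = np.random.default_rng(6)
      t = np.linspace(0.0, 3.0, 288)                         # days
      temp = 20.0 + 5.0 * np.sin(2 * np.pi * t)              # station temperature, C
      clock = (1e-9 + 2e-10 * t + 5e-12 * t**2
               + 3e-12 * (temp - 20.0)
               + 1e-13 * rng.standard_normal(t.size))        # apparent clock correction, s

      A = np.column_stack([np.ones_like(t), t, t**2, temp - 20.0])   # design matrix
      coef, *_ = np.linalg.lstsq(A, clock, rcond=None)
      offset, drift, ageing, k_temp = coef
      print(f"temperature coefficient k = {k_temp:.2e} s/C")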

  13. An environmental sampling model for combining judgment and randomly placed samples

    SciTech Connect

    Sego, Landon H.; Anderson, Kevin K.; Matzke, Brett D.; Sieber, Karl; Shulman, Stanley; Bennett, James; Gillen, M.; Wilson, John E.; Pulsipher, Brent A.

    2007-08-23

    In the event of the release of a lethal agent (such as anthrax) inside a building, law enforcement and public health responders take samples to identify and characterize the contamination. Sample locations may be rapidly chosen based on available incident details and professional judgment. To achieve greater confidence of whether or not a room or zone was contaminated, or to certify that detectable contamination is not present after decontamination, we consider a Bayesian model for combining the information gained from both judgment and randomly placed samples. We investigate the sensitivity of the model to the parameter inputs and make recommendations for its practical use.

  14. Temperature response functions introduce high uncertainty in modelled carbon stocks in cold temperature regimes

    NASA Astrophysics Data System (ADS)

    Portner, H.; Bugmann, H.; Wolf, A.

    2009-08-01

    Models of carbon cycling in terrestrial ecosystems contain formulations for the dependence of respiration on temperature, but the sensitivity of predicted carbon pools and fluxes to these formulations and their parameterization is not well understood. Thus, we performed an uncertainty analysis of soil organic matter decomposition with respect to its temperature dependency using the ecosystem model LPJ-GUESS. We used five temperature response functions (Exponential, Arrhenius, Lloyd-Taylor, Gaussian, Van't Hoff). We determined the parameter uncertainty ranges of the functions by nonlinear regression analysis based on eight experimental datasets from northern hemisphere ecosystems. We sampled over the uncertainty bounds of the parameters and ran simulations for each pair of temperature response function and calibration site. The uncertainty in both long-term and short-term soil carbon dynamics was analyzed over an elevation gradient in southern Switzerland. The Lloyd-Taylor function turned out to be adequate for modelling the temperature dependency of soil organic matter decomposition, whereas the other functions either resulted in poor fits (Exponential, Arrhenius) or were not applicable for all datasets (Gaussian, Van't Hoff). There were two main sources of uncertainty for model simulations: (1) the uncertainty in the parameter estimates of the response functions, which increased with increasing temperature, and (2) the uncertainty in the simulated size of carbon pools, which increased with elevation, as slower turn-over times lead to higher carbon stocks and higher associated uncertainties. The higher uncertainty in carbon pools with slow turn-over rates has important implications for the uncertainty in the projection of the change of soil carbon stocks driven by climate change, which turned out to be more uncertain for higher elevations and hence higher latitudes, which are of key importance for the global terrestrial carbon budget.

  15. Stratospheric Temperature Changes: Observations and Model Simulations

    NASA Technical Reports Server (NTRS)

    Ramaswamy, V.; Chanin, M.-L.; Angell, J.; Barnett, J.; Gaffen, D.; Gelman, M.; Keckhut, P.; Koshelkov, Y.; Labitzke, K.; Lin, J.-J. R.

    1999-01-01

    This paper reviews observations of stratospheric temperatures that have been made over a period of several decades. Those observed temperatures have been used to assess variations and trends in stratospheric temperatures. A wide range of observation datasets has been used, comprising measurements by radiosonde (1940s to the present), satellite (1979 - present), lidar (1979 - present) and rocketsonde (periods varying with location, but most terminating by about the mid-1990s). In addition, trends have also been assessed from meteorological analyses, based on radiosonde and/or satellite data, and products based on assimilating observations into a general circulation model. Radiosonde and satellite data indicate a cooling trend of the annual-mean lower stratosphere since about 1980. Over the period 1979-1994, the trend is 0.6 K/decade. For the period prior to 1980, the radiosonde data exhibit a substantially weaker long-term cooling trend. In the northern hemisphere, the cooling trend is about 0.75 K/decade in the lower stratosphere, with a reduction in the cooling in mid-stratosphere (near 35 km), and increased cooling in the upper stratosphere (approximately 2 K per decade at 50 km). Model simulations indicate that the depletion of lower stratospheric ozone is the dominant factor in the observed lower stratospheric cooling. In the middle and upper stratosphere both the well-mixed greenhouse gases (such as CO2) and ozone changes contribute in an important manner to the cooling.

  16. Long-term storage of salivary cortisol samples at room temperature

    NASA Technical Reports Server (NTRS)

    Chen, Yu-Ming; Cintron, Nitza M.; Whitson, Peggy A.

    1992-01-01

    Collection of saliva samples for the measurement of cortisol during space flights provides a simple technique for studying changes in adrenal function due to microgravity. In the present work, several methods for preserving saliva cortisol at room temperature were investigated using radioimmunoassays for determining cortisol in saliva samples collected on a saliva-collection device called Salivettes. It was found that pretreatment of Salivettes with citric acid preserved more than 85 percent of the salivary cortisol for as long as six weeks. The results correlated well with those for a sample stored in a freezer on an untreated Salivette.

  17. Graphite sample preparation for AMS in a high pressure and temperature press

    USGS Publications Warehouse

    Rubin, Meyer; Mysen, Bjorn O.; Polach, Henry

    1984-01-01

    A high pressure-temperature press is used to make target material for accelerator mass spectrometry. Graphite was produced from typical 14C samples including oxalic acid and carbonates. Beam strength of 12C was generally adequate, but random radioactive contamination by 14C made age measurements impractical.

  18. Graphite sample preparation for AMS in a high pressure and temperature press

    USGS Publications Warehouse

    Rubin, M.; Mysen, B.O.; Polach, H.

    1984-01-01

    A high pressure-high temperature press is used to make target material for accelerator mass spectrometry. Graphite was produced from typical 14C samples including oxalic acid and carbonates. Beam strength of 12C was generally adequate, but random radioactive contamination by 14C made age measurements impractical. © 1984.

  19. Fast sweep-rate plastic Faraday force magnetometer with simultaneous sample temperature measurement.

    PubMed

    Slobinsky, D; Borzi, R A; Mackenzie, A P; Grigera, S A

    2012-12-01

    We present a design for a magnetometer capable of operating at temperatures down to 50 mK and magnetic fields up to 15 T with integrated sample temperature measurement. Our design is based on the concept of a Faraday force magnetometer with a load-sensing variable capacitor. A plastic body allows for fast sweep rates and sample temperature measurement, and the possibility of regulating the initial capacitance simplifies the initial bridge balancing. Under moderate gradient fields of ~1 T/m our prototype performed with a resolution better than 1 × 10⁻⁵ emu. The magnetometer can be operated either in a dc mode, or in an oscillatory mode which allows the determination of the magnetic susceptibility. We present measurements on Dy2Ti2O7 and Sr3Ru2O7 as an example of its performance.

  20. Correcting for Microbial Blooms in Fecal Samples during Room-Temperature Shipping

    PubMed Central

    Amir, Amnon; McDonald, Daniel; Navas-Molina, Jose A.; Debelius, Justine; Morton, James T.; Hyde, Embriette; Robbins-Pianka, Adam

    2017-01-01

    ABSTRACT The use of sterile swabs is a convenient and common way to collect microbiome samples, and many studies have shown that the effects of room-temperature storage are smaller than physiologically relevant differences between subjects. However, several bacterial taxa, notably members of the class Gammaproteobacteria, grow at room temperature, sometimes confusing microbiome results, particularly when stability is assumed. Although comparative benchmarking has shown that several preservation methods, including the use of 95% ethanol, fecal occult blood test (FOBT) and FTA cards, and Omnigene-GUT kits, reduce changes in taxon abundance during room-temperature storage, these techniques all have drawbacks and cannot be applied retrospectively to samples that have already been collected. Here we performed a meta-analysis using several different microbiome sample storage condition studies, showing consistent trends in which specific bacteria grew (i.e., “bloomed”) at room temperature, and introduce a procedure for removing the sequences that most distort analyses. In contrast to similarity-based clustering using operational taxonomic units (OTUs), we use a new technique called “Deblur” to identify the exact sequences corresponding to blooming taxa, greatly reducing false positives and also dramatically decreasing runtime. We show that applying this technique to samples collected for the American Gut Project (AGP), for which participants simply mail samples back without the use of ice packs or other preservatives, yields results consistent with published microbiome studies performed with frozen or otherwise preserved samples. IMPORTANCE In many microbiome studies, the necessity to store samples at room temperature (i.e., remote fieldwork) and the ability to ship samples without hazardous materials that require special handling training, such as ethanol (i.e., citizen science efforts), is paramount. However, although room-temperature storage for a few days has

  1. Correcting for Microbial Blooms in Fecal Samples during Room-Temperature Shipping.

    PubMed

    Amir, Amnon; McDonald, Daniel; Navas-Molina, Jose A; Debelius, Justine; Morton, James T; Hyde, Embriette; Robbins-Pianka, Adam; Knight, Rob

    2017-01-01

    The use of sterile swabs is a convenient and common way to collect microbiome samples, and many studies have shown that the effects of room-temperature storage are smaller than physiologically relevant differences between subjects. However, several bacterial taxa, notably members of the class Gammaproteobacteria, grow at room temperature, sometimes confusing microbiome results, particularly when stability is assumed. Although comparative benchmarking has shown that several preservation methods, including the use of 95% ethanol, fecal occult blood test (FOBT) and FTA cards, and Omnigene-GUT kits, reduce changes in taxon abundance during room-temperature storage, these techniques all have drawbacks and cannot be applied retrospectively to samples that have already been collected. Here we performed a meta-analysis using several different microbiome sample storage condition studies, showing consistent trends in which specific bacteria grew (i.e., "bloomed") at room temperature, and introduce a procedure for removing the sequences that most distort analyses. In contrast to similarity-based clustering using operational taxonomic units (OTUs), we use a new technique called "Deblur" to identify the exact sequences corresponding to blooming taxa, greatly reducing false positives and also dramatically decreasing runtime. We show that applying this technique to samples collected for the American Gut Project (AGP), for which participants simply mail samples back without the use of ice packs or other preservatives, yields results consistent with published microbiome studies performed with frozen or otherwise preserved samples. IMPORTANCE In many microbiome studies, the necessity to store samples at room temperature (i.e., remote fieldwork) and the ability to ship samples without hazardous materials that require special handling training, such as ethanol (i.e., citizen science efforts), is paramount. However, although room-temperature storage for a few days has been shown not to

  2. Calorimeters for precision power dissipation measurements on controlled-temperature superconducting radiofrequency samples.

    PubMed

    Xiao, B P; Reece, C E; Phillips, H L; Kelley, M J

    2012-12-01

    Two calorimeters, with stainless steel and Cu as the thermal path material for high precision and high power versions, respectively, have been designed and commissioned for the 7.5 GHz surface impedance characterization system at Jefferson Lab to provide low temperature control and measurement for CW power up to 22 W on a 5 cm diameter disk sample which is thermally isolated from the radiofrequency (RF) portion of the system. A power compensation method has been developed to measure the RF induced power on the sample. Simulation and experimental results show that with these two calorimeters, the whole thermal range of interest for superconducting radiofrequency materials has been covered. The power measurement error in the power range of interest is within 1.2% and 2.7% for the high precision and high power versions, respectively. Temperature distributions on the sample surface for both versions have been simulated, and the accuracy of sample temperature measurements has been analyzed. Both versions have the ability to accept bulk superconductors and thin film superconducting samples with a variety of substrate materials such as Al, Al2O3, Cu, MgO, Nb, and Si.

  3. Calorimeters for Precision Power Dissipation Measurements on Controlled-Temperature Superconducting Radiofrequency Samples

    SciTech Connect

    Xiao, Binping P.; Kelley, Michael J.; Reece, Charles E.; Phillips, H. L.

    2012-12-01

    Two calorimeters, with stainless steel and Cu as the thermal path material for high precision and high power versions, respectively, have been designed and commissioned for the surface impedance characterization (SIC) system at Jefferson Lab to provide low temperature control and measurement for CW power up to 22 W on a 5 cm dia. disk sample which is thermally isolated from the RF portion of the system. A power compensation method has been developed to measure the RF induced power on the sample. Simulation and experimental results show that with these two calorimeters, the whole thermal range of interest for superconducting radiofrequency (SRF) materials has been covered. The power measurement error in the power range of interest is within 1.2% and 2.7% for the high precision and high power versions, respectively. Temperature distributions on the sample surface for both versions have been simulated, and the accuracy of sample temperature measurements has been analysed. Both versions have the ability to accept bulk superconductors and thin film superconducting samples with a variety of substrate materials such as Al, Al2O3, Cu, MgO, Nb and Si.

  4. Flexible sample environment for high resolution neutron imaging at high temperatures in controlled atmosphere

    SciTech Connect

    Makowska, Małgorzata G.; Theil Kuhn, Luise; Cleemann, Lars N.; Lauridsen, Erik M.; Bilheux, Hassina Z.; Molaison, Jamie J.; Santodonato, Louis J.; Tremsin, Anton S.; Grosse, Mirco; Morgano, Manuel; Kabra, Saurabh; Strobl, Markus

    2015-12-15

    High material penetration by neutrons allows for experiments using sophisticated sample environments providing complex conditions. Thus, neutron imaging holds potential for performing in situ nondestructive measurements on large samples or even full technological systems, which are not possible with any other technique. This paper presents a new sample environment for in situ high resolution neutron imaging experiments at temperatures from room temperature up to 1100 °C and/or using controllable flow of reactive atmospheres. The design also offers the possibility to directly combine imaging with diffraction measurements. Design, special features, and specification of the furnace are described. In addition, examples of experiments successfully performed at various neutron facilities with the furnace, as well as examples of possible applications are presented. This covers a broad field of research from fundamental to technological investigations of various types of materials and components.

  5. Dynamical opacity-sampling models of Mira variables - I. Modelling description and analysis of approximations

    NASA Astrophysics Data System (ADS)

    Ireland, M. J.; Scholz, M.; Wood, P. R.

    2008-12-01

    We describe the Cool Opacity-sampling Dynamic EXtended (CODEX) atmosphere models of Mira variable stars, and examine in detail the physical and numerical approximations that go into the model creation. The CODEX atmospheric models are obtained by computing the temperature and the chemical and radiative states of the atmospheric layers, assuming gas pressure and velocity profiles from Mira pulsation models, which extend from near the H-burning shell to the outer layers of the atmosphere. Although the code uses the approximation of Local Thermodynamic Equilibrium (LTE) and a grey approximation in the dynamical atmosphere code, many key observable quantities, such as infrared diameters and low-resolution spectra, are predicted robustly in spite of these approximations. We show that in visible light, radiation from Mira variables is dominated by fluorescence scattering processes, and that the LTE approximation likely underpredicts visible-band fluxes by a factor of 2.

  6. Modelling nanofluidic field amplified sample stacking with inhomogeneous surface charge

    NASA Astrophysics Data System (ADS)

    McCallum, Christopher; Pennathur, Sumita

    2015-11-01

    Nanofluidic technology has exceptional applications as a platform for biological sample preconcentration, which will allow for an effective electronic detection method of low concentration analytes. One such preconcentration method is field amplified sample stacking, a capillary electrophoresis technique that utilizes large concentration differences to generate high electric field gradients, causing the sample of interest to form a narrow, concentrated band. Field amplified sample stacking has been shown to work well at the microscale, with models and experiments confirming expected behavior. However, nanofluidics allows for further concentration enhancement due to focusing of the sample ions toward the channel center by the electric double layer. We have developed a two-dimensional model that can be used for both micro- and nanofluidics, fully accounting for the electric double layer. This model has been used to investigate even more complex physics such as the role of inhomogeneous surface charge.

  7. Estimation of Lunar Surface Temperatures: a Numerical Model

    NASA Astrophysics Data System (ADS)

    Bauch, K.; Hiesinger, H.; Helbert, J.

    2009-04-01

    About 40 years after the Apollo and other lunar missions, several nations return to the Moon. Indian, Chinese, Japanese and American missions are already in orbit or will soon be launched, and the possibility of a "Made in Germany" mission (Lunar Exploration Orbiter - LEO) looms on the horizon [1]. In preparation of this mission, which will include a thermal infrared spectrometer (SERTIS - SElenological Radiometer and Thermal infrared Imaging Spectrometer), accurate temperature maps of the lunar surface are required. Because the orbiter will be imaging the Moon's surface at different times of the lunar day, an accurate estimation of the thermal variations of the surface with time is necessary to optimize signal-to-noise ratios and define optimal measurement areas. In this study we present new global temperature estimates for sunrise, noontime and sunset. This work provides new and updated research on the temperature variations of the lunar surface, by taking into account the surface and subsurface bulk thermophysical properties, namely their bulk density, heat capacity, thermal conductivity, emissivity and albedo. These properties have been derived from previous spacecraft-based observations, in-situ measurements and returned samples [e.g. 2-4]. In order to determine surface and subsurface temperatures, the one-dimensional heat conduction equation is solved for a resolution of about 0.4°, which is better by a factor of 2 compared to the Clementine measurement and temperature modeling described in [2]. Our work expands on the work of Lawson et al. [2], who calculated global brightness temperatures of subsolar points from the instantaneous energy balance equation assuming the Moon to be a spherical object [2]. Surface daytime temperatures are mainly controlled by their surface albedo and angle of incidence. On the other hand nighttime temperatures are affected by the thermal inertia of the observed surface. Topographic effects are expected to cause earlier or later
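
    Solving the one-dimensional heat conduction equation for a regolith column, as described above, can be sketched with an explicit finite-difference scheme; the thermal diffusivity, surface forcing, and boundary conditions below are placeholders, not the lunar parameters used in the study:

      # Explicit finite-difference sketch of the 1-D heat conduction equation
      # dT/dt = alpha * d2T/dz2 for a regolith column with a periodically heated
      # surface. All values are placeholders, not the study's lunar parameters.
      import numpy as np

      alpha = 1e-8                  # thermal diffusivity, m^2/s (placeholder)
      dz, nz = 0.01, 50             # 1 cm layers, 50 cm column
      dt = 0.4 * dz**2 / alpha      # stable explicit time step
      period = 29.5 * 86400.0       # lunation, s

      T = np.full(nz, 250.0)        # initial temperature, K
      for step in range(int(2 * period / dt)):
          t = step * dt
          T[0] = 250.0 + 100.0 * np.sin(2 * np.pi * t / period)   # surface forcing
          T[1:-1] += alpha * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
          T[-1] = T[-2]             # insulated lower boundary

      print(f"surface {T[0]:.0f} K, 10 cm depth {T[10]:.0f} K")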

  8. Slice sampling technique in Bayesian extreme of gold price modelling

    NASA Astrophysics Data System (ADS)

    Rostami, Mohammad; Adam, Mohd Bakri; Ibrahim, Noor Akma; Yahya, Mohamed Hisham

    2013-09-01

    In this paper, a simulation study of Bayesian extreme values using Markov chain Monte Carlo via the slice sampling algorithm is presented. We compared the accuracy of slice sampling with that of other methods for a Gumbel model. This study revealed that the slice sampling algorithm offers more accurate estimates, with lower RMSE, than the other methods. Finally, we successfully employed this procedure to estimate the parameters of Malaysian extreme gold prices from 2000 to 2011.
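
    A minimal slice sampler for the location parameter of a Gumbel model, assuming a flat prior and known scale, illustrates the algorithm named above without reproducing the gold-price analysis:

      # Minimal slice sampler (stepping-out plus shrinkage) for the location
      # parameter mu of a Gumbel model with known scale and a flat prior.
      # Synthetic data, not the Malaysian gold-price series.
      import numpy as np

      rng = np.random.default_rng(7)
      beta = 1.0
      data = rng.gumbel(loc=3.0, scale=beta, size=200)

      def log_post(mu):
          z = (data - mu) / beta
          return np.sum(-z - np.exp(-z))          # Gumbel log-likelihood (flat prior)

      def slice_sample(mu, w=1.0):
          log_y = log_post(mu) + np.log(rng.uniform())     # vertical slice level
          left = mu - w * rng.uniform()                    # random initial interval
          right = left + w
          while log_post(left) > log_y:                    # step out to the left
              left -= w
          while log_post(right) > log_y:                   # step out to the right
              right += w
          while True:                                      # shrink until accepted
              prop = rng.uniform(left, right)
              if log_post(prop) > log_y:
                  return prop
              if prop < mu:
                  left = prop
              else:
                  right = prop

      mu, draws = 0.0, []
      for _ in range(2000):
          mu = slice_sample(mu)
          draws.append(mu)
      print(f"posterior mean of mu ~ {np.mean(draws[500:]):.2f}")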

  9. Importance of sample form and surface temperature for analysis by ambient plasma mass spectrometry (PADI).

    PubMed

    Salter, Tara La Roche; Bunch, Josephine; Gilmore, Ian S

    2014-09-16

    Many different types of samples have been analyzed in the literature using plasma-based ambient mass spectrometry sources; however, comprehensive studies of the important parameters for analysis are only just beginning. Here, we investigate the effect of the sample form and surface temperature on the signal intensities in plasma-assisted desorption ionization (PADI). The form of the sample is very important, with powders of all volatilities effectively analyzed. However, for the analysis of thin films at room temperature and using a low plasma power, a vapor pressure of greater than 10⁻⁴ Pa is required to achieve a sufficiently good quality spectrum. Using thermal desorption, we are able to increase the signal intensity of less volatile materials with vapor pressures less than 10⁻⁴ Pa, in thin film form, by between 4 and 7 orders of magnitude. This is achieved by increasing the temperature of the sample up to a maximum of 200 °C. Thermal desorption can also increase the signal intensity for the analysis of powders.

  10. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  11. Sampling theory applied to measurement and analysis of temperature for climate studies

    NASA Technical Reports Server (NTRS)

    Edwards, Howard B.

    1987-01-01

    Of all the errors discussed in the climatology literature, aliasing errors caused by undersampling of unsmoothed or improperly smoothed temperature data seem to be completely overlooked. This is a serious oversight in view of long-term trends of 1 K or less. Adequate sampling of properly smoothed data is demonstrated with a Hamming digital filter. It is also demonstrated that hourly temperatures, daily averages, and annual averages free of aliasing errors can be obtained by adding a microprocessor to standard weather sensors and recorders.
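
    A minimal sketch of the anti-aliasing idea described above: low-pass smoothing an hourly temperature record with a normalized Hamming window before decimating to daily values. The window length and synthetic data are illustrative only, not the filter design used in the record:

```python
import numpy as np

# Hedged sketch: smooth before you subsample. The synthetic series and the
# 49-tap window (~2 days of hourly data) are placeholders for illustration.

def hamming_lowpass(x, ntaps):
    """Smooth a 1-D series with a normalized Hamming window."""
    w = np.hamming(ntaps)
    w /= w.sum()
    return np.convolve(x, w, mode="same")

t = np.arange(0, 365 * 24)                          # hourly samples for one year
signal = 15 + 0.002 * t + 5 * np.sin(2 * np.pi * t / 24)   # slow trend + diurnal cycle

smoothed = hamming_lowpass(signal, ntaps=49)
daily_means_raw = signal.reshape(-1, 24).mean(axis=1)       # naive daily averages
daily_subsampled = smoothed[::24]                           # decimate after smoothing

print(daily_means_raw[:3], daily_subsampled[:3])
```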

  12. Thermal mapping and trends of Mars analog materials in sample acquisition operations using experimentation and models

    NASA Astrophysics Data System (ADS)

    Szwarc, Timothy; Hubbard, Scott

    2014-09-01

    The effects of atmosphere, ambient temperature, and geologic material were studied experimentally and using a computer model to predict the heating undergone by Mars rocks during rover sampling operations. Tests were performed on five well-characterized and/or Mars analog materials: Indiana limestone, Saddleback basalt, kaolinite, travertine, and water ice. Eighteen tests were conducted to 55 mm depth using a Mars Sample Return prototype coring drill, with each sample containing six thermal sensors. A thermal simulation was written to predict the complete thermal profile within each sample during coring and this model was shown to be capable of predicting temperature increases with an average error of about 7%. This model may be used to schedule power levels and periods of rest during actual sample acquisition processes to avoid damaging samples or freezing the bit into icy formations. Maximum rock temperature increase is found to be modeled by a power law incorporating rock and operational parameters. Energy transmission efficiency in coring is found to increase linearly with rock hardness and decrease by 31% at Mars pressure.
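
    The record above reports that the maximum rock temperature increase is modeled by a power law in rock and operational parameters; as a generic, hedged sketch, a power law of the form dT_max = a·x^b can be fitted to measured temperature rises by linear regression in log space. The predictor x and the data below are placeholders, not the authors' parameters:

```python
import numpy as np

# Hedged sketch: fit dT_max = a * x**b in log space. The predictor and the
# synthetic data stand in for the rock/operational parameters of the record.

x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # e.g. drilling energy (arbitrary units)
dT = np.array([1.8, 3.1, 5.2, 9.0, 15.6])   # measured temperature rise (K), synthetic

b, log_a = np.polyfit(np.log(x), np.log(dT), 1)
a = np.exp(log_a)
print(f"dT_max ~ {a:.2f} * x^{b:.2f}")
```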

  13. Sample weight and digestion temperature as critical factors in mercury determination in fish

    SciTech Connect

    Sadiq, M.; Zaidi, T.H.; Al-Mohana, H.

    1991-09-01

    The concern about mercury (Hg) pollution of the marine environment started with the well publicized case of Minamata (Japan), where in the 1950s several persons died or became seriously ill after consuming fish or shellfish containing high levels of methylmercury. It is now accepted that Hg-contaminated seafood constitutes a hazard to human health. To safeguard humans, accurate determination of Hg in marine biota is therefore very important. Two steps are involved in the determination of total Hg in biological materials: (a) decomposition of the organic matrix (sample preparation), and (b) determination of Hg in aliquot samples. Although the procedures for determining Hg using the cold vapor technique are well established, sample preparation procedures have not been standardized. In general, samples of marine biota have been prepared by digesting different weights at different temperatures, by using mixtures of different chemicals in varying quantities, and by digesting for variable durations. The objectives of the present paper were to evaluate the effects of sample weight and digestion temperature on Hg determination in fish.

  14. Integrated research in constitutive modelling at elevated temperatures, part 1

    NASA Technical Reports Server (NTRS)

    Haisler, W. E.; Allen, D. H.

    1986-01-01

    Topics covered include: numerical integration techniques; thermodynamics and internal state variables; experimental lab development; comparison of models at room temperature; comparison of models at elevated temperature; and integrated software development.

  15. Characterization of Decommissioned PWR Vessel Internals Materials Samples: Material Certification, Fluence, and Temperature (Nonproprietary Version)

    SciTech Connect

    M. Krug; R. Shogan; A. Fero; M. Snyder

    2004-11-01

    Pressurized water reactor (PWR) cores operate under extreme environmental conditions due to coolant chemistry, operating temperature, and neutron exposure. Extending the life of PWRs requires detailed knowledge of the changes in mechanical and corrosion properties of the structural austenitic stainless steel components adjacent to the fuel. This report contains basic material characterization information for the as-installed samples of reactor internals material which were harvested from a decommissioned PWR.

  16. Accelerating the Convergence of Replica Exchange Simulations Using Gibbs Sampling and Adaptive Temperature Sets

    DOE PAGES

    Vogel, Thomas; Perez, Danny

    2015-08-28

    We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T (U) or, equivalently, of the density of states g(U) over a wide range of energies.

  17. Accelerating the Convergence of Replica Exchange Simulations Using Gibbs Sampling and Adaptive Temperature Sets

    SciTech Connect

    Vogel, Thomas; Perez, Danny

    2015-08-28

    We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T (U) or, equivalently, of the density of states g(U) over a wide range of energies.
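
    The two records above describe a global Gibbs-sampling move over a configuration database together with an adaptive temperature set; for context only, here is a minimal sketch of the standard pairwise replica-exchange (parallel tempering) swap test that such schemes generalize. All values are illustrative:

```python
import math, random

# Minimal sketch of the conventional pairwise replica-exchange swap test,
# shown for context; the records above replace this pairwise exchange with a
# global Gibbs-sampling move over a shared configuration database.

def try_swap(E_i, E_j, T_i, T_j, k_B=1.0):
    """Metropolis acceptance for exchanging configurations between two temperatures."""
    delta = (1.0 / (k_B * T_i) - 1.0 / (k_B * T_j)) * (E_i - E_j)
    return delta >= 0 or random.random() < math.exp(delta)

print(try_swap(E_i=-102.0, E_j=-98.0, T_i=1.0, T_j=1.2))
```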

  18. Errors of five-day mean surface wind and temperature conditions due to inadequate sampling

    NASA Technical Reports Server (NTRS)

    Legler, David M.

    1991-01-01

    Surface meteorological reports of wind components, wind speed, air temperature, and sea-surface temperature from buoys located in equatorial and midlatitude regions are used in a simulation of random sampling to determine errors of the calculated means due to inadequate sampling. Subsampling the data with several different sample sizes leads to estimates of the accuracy of the subsampled means. The number N of random observations needed to compute mean winds with chosen accuracies of 0.5 (N sub 0.5) and 1.0 (N sub 1.0) m/s and mean air and sea surface temperatures with chosen accuracies of 0.1 (N sub 0.1) and 0.2 (N sub 0.2) C were calculated for each 5-day and 30-day period in the buoy datasets. Mean values of N for the various accuracies and datasets are given. A second-order polynomial relation is established between N and the variability of the data record. This relationship demonstrates that for the same accuracy, N increases as the variability of the data record increases. The relationship is also independent of the data source. Volunteer-observing ship data do not satisfy the recommended minimum number of observations for obtaining 0.5 m/s and 0.2 C accuracy for most locations. The effect of having remotely sensed data is discussed.
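
    For context, the textbook standard-error relation gives the number of independent observations needed for a chosen accuracy of a mean; the record above instead derives an empirical second-order polynomial in the record's variability, so the sketch below is only a baseline under an independence assumption, with placeholder numbers:

```python
import math

# Hedged baseline sketch: observations needed so the half-width of an
# approximate 95% confidence interval on the mean stays within "accuracy".
# This is not the empirical polynomial relation of the record above.

def n_required(std_dev, accuracy, z=1.96):
    return math.ceil((z * std_dev / accuracy) ** 2)

print(n_required(std_dev=2.0, accuracy=0.5))   # wind component, m/s (placeholder variability)
print(n_required(std_dev=0.3, accuracy=0.1))   # temperature, deg C (placeholder variability)
```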

  19. Effect of vacuum packing and temperature on survival and hatching of strongyle eggs in faecal samples.

    PubMed

    Sengupta, Mita E; Thapa, Sundar; Thamsborg, Stig M; Mejer, Helena

    2016-02-15

    Strongyle eggs of helminths of livestock usually hatch within a few hours or days after deposition with faeces. This poses a problem when faecal sampling is performed in the field. As oxygen is needed for embryonic development, it is recommended to reduce air supply during transport and refrigerate. The present study therefore investigated the combined effect of vacuum packing and temperature on survival of strongyle eggs and their subsequent ability to hatch and develop into L3. Fresh faecal samples were collected from calves infected with Cooperia oncophora, pigs infected with Oesophagostomum dentatum, and horses infected with Strongylus vulgaris and cyathostomins. The samples were allocated into four treatments: vacuum packing and storage at 5 °C or 20 °C (5 V and 20 V); normal packing in plastic gloves closed with a loose knot and storage at 5 °C or 20 °C (5 N and 20 N). The number of eggs per gram faeces (EPG) was estimated every fourth day until day 28 post set-up (p.s.) by a concentration McMaster method. Larval cultures were prepared on days 0, 12 and 28 p.s. and the larval yield determined. For C. oncophora, the EPG was significantly higher in vacuum packed samples after 28 days as compared to normal storage, regardless of temperature. However, O. dentatum EPG was significantly higher in samples kept at 5 °C as compared to 20 °C, irrespective of packing. For the horse strongyles, vacuum packed samples at 5 °C had a significantly higher EPG compared to the other treatments after 28 days. The highest larval yields of O. dentatum and horse strongyles were obtained from fresh faecal samples; however, if storage is necessary prior to setting up larval cultures, O. dentatum should be kept at room temperature (aerobic or anaerobic), whereas horse strongyle coprocultures should ideally be set up on the day of collection to ensure maximum yield. Eggs of C. oncophora should be kept vacuum packed at room temperature for the highest larval yield.

  20. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    ERIC Educational Resources Information Center

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  1. Bayesian Estimation of the DINA Model with Gibbs Sampling

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew

    2015-01-01

    A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…

  2. A new two-temperature dissociation model for reacting flows

    NASA Technical Reports Server (NTRS)

    Olynick, David R.; Hassan, H. A.

    1992-01-01

    A new two-temperature dissociation model for flows undergoing compression is derived from kinetic theory. The model minimizes uncertainties associated with the two-temperature model of Park. The effects of the model on AOTV type flowfields are examined and compared with the Park model. Calculations are carried out for flows with and without ionization. When considering flows with ionization, a four temperature model is employed. For Fire II conditions, the assumption of equilibrium between the vibrational and electron-electronic temperatures is somewhat poor. A similar statement holds for the translational and rotational temperatures. These trends are consistent with results obtained using the direct simulation Monte Carlo method.
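
    For context, Park-type two-temperature models evaluate the dissociation rate at an effective temperature that blends the translational and vibrational temperatures, typically as a geometric average; a hedged sketch of that standard form follows (the exponent q and the specific rate expression of the new model in this record are not given here):

```latex
k_d(T, T_v) = A\, T_a^{\,\eta} \exp\!\left(-\frac{\theta_d}{T_a}\right),
\qquad T_a = T^{\,q}\, T_v^{\,1-q}, \quad q \approx 0.5
```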

  3. Scaled tests and modeling of effluent stack sampling location mixing.

    PubMed

    Recknagle, Kurtis P; Yokuda, Satoru T; Ballinger, Marcel Y; Barnett, J Matthew

    2009-02-01

    A three-dimensional computational fluid dynamics computer model was used to evaluate the mixing at a sampling system for radioactive air emissions. Researchers sought to determine whether the location would meet the criteria for uniform air velocity and contaminant concentration as prescribed in the American National Standards Institute standard, Sampling and Monitoring Releases of Airborne Radioactive Substances from the Stacks and Ducts of Nuclear Facilities. This standard requires that the sampling location be well-mixed and stipulates specific tests to verify the extent of mixing. The exhaust system for the Radiochemical Processing Laboratory was modeled with a computational fluid dynamics code to better understand the flow and contaminant mixing and to predict mixing test results. The modeled results were compared to actual measurements made at a scale-model stack and to the limited data set for the full-scale facility stack. Results indicated that the computational fluid dynamics code provides reasonable predictions for velocity, cyclonic flow, gas, and aerosol uniformity, although the code predicts greater improvement in mixing as the injection point is moved farther away from the sampling location than is actually observed by measurements. In expanding from small to full scale, the modeled predictions for full-scale measurements show uniformity values similar to those in the scale model. This work indicated that a computational fluid dynamics code can be a cost-effective aid in designing or retrofitting a facility's stack sampling location that will be required to meet standard ANSI/HPS N13.1-1999.
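
    A hedged sketch of the style of uniformity metric used in such qualification tests: the coefficient of variation (COV) of velocity or tracer concentration measured over a traverse grid at the sampling plane. The acceptance limits themselves are set by ANSI/HPS N13.1-1999 and are not restated here; the grid values below are synthetic:

```python
import numpy as np

# Hedged sketch: coefficient of variation (COV) across a measurement grid at a
# stack sampling plane. Acceptance limits are defined by ANSI/HPS N13.1-1999
# and are intentionally not asserted here; the traverse values are synthetic.

def cov_percent(values):
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

velocity_grid = [12.1, 11.8, 12.5, 12.9, 11.6, 12.3, 12.0, 12.7]   # m/s, synthetic traverse
print(f"velocity COV = {cov_percent(velocity_grid):.1f}%")
```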

  4. Latent spatial models and sampling design for landscape genetics

    USGS Publications Warehouse

    Hanks, Ephraim M.; Hooten, Mevin B.; Knick, Steven T.; Oyler-McCance, Sara J.; Fike, Jennifer A.; Cross, Todd B.; Schwartz, Michael K.

    2016-01-01

    We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial random effect to allow for spatial correlation between genetic observations. We illustrate how modern dimension reduction approaches to spatial statistics can allow for efficient computation in landscape genetic statistical models covering large spatial domains. We apply our approach to propose a retrospective spatial sampling design for greater sage-grouse (Centrocercus urophasianus) population genetics in the western United States.

  5. RNA modeling using Gibbs sampling and stochastic context free grammars

    SciTech Connect

    Grate, L.; Herbster, M.; Hughey, R.; Haussler, D.

    1994-12-31

    A new method of discovering the common secondary structure of a family of homologous RNA sequences using Gibbs sampling and stochastic context-free grammars is proposed. Given an unaligned set of sequences, a Gibbs sampling step simultaneously estimates the secondary structure of each sequence and a set of statistical parameters describing the common secondary structure of the set as a whole. These parameters describe a statistical model of the family. After the Gibbs sampling has produced a crude statistical model for the family, this model is translated into a stochastic context-free grammar, which is then refined by an Expectation Maximization (EM) procedure to produce a more complete model. A prototype implementation of the method is tested on tRNA, pieces of 16S rRNA and on U5 snRNA with good results.

  6. New high temperature plasmas and sample introduction systems for analytical atomic emission and mass spectrometry

    SciTech Connect

    Montaser, A.

    1990-01-01

    In this project, new high temperature plasmas and new sample introduction systems are developed for rapid elemental and isotopic analysis of gases, solutions, and solids using atomic emission spectrometry (AES) and mass spectrometry (MS). These devices offer promise of solving singularly difficult analytical problems that either exist now or are likely to arise in the future in the various fields of energy generation, environmental pollution, biomedicine and nutrition. Emphasis is being placed on: generation of annular, helium inductively coupled plasmas (He ICPs) that are suitable for atomization, excitation, and ionization of elements possessing high excitation and ionization energies, with the intent of enhancing the detecting powers of a number of elements; diagnostic studies of high-temperature plasmas to quantify their fundamental properties, with the ultimate aim to improve analytical performance of atomic spectrometry; development and characterization of new sample introduction systems that consume microliter or microgram quantities of samples, and investigation of new membrane separators for stripping solvent from sample aerosol to reduce various interferences and to enhance sensitivity in plasma spectrometry.

  7. Compact low temperature scanning tunneling microscope with in-situ sample preparation capability

    NASA Astrophysics Data System (ADS)

    Kim, Jungdae; Nam, Hyoungdo; Qin, Shengyong; Kim, Sang-ui; Schroeder, Allan; Eom, Daejin; Shih, Chih-Kang

    2015-09-01

    We report on the design of a compact low temperature scanning tunneling microscope (STM) having in-situ sample preparation capability. The in-situ sample preparation chamber was designed to be compact allowing quick transfer of samples to the STM stage, which is ideal for preparing temperature sensitive samples such as ultra-thin metal films on semiconductor substrates. Conventional spring suspensions on the STM head often cause mechanical issues. To address this problem, we developed a simple vibration damper consisting of welded metal bellows and rubber pads. In addition, we developed a novel technique to ensure an ultra-high-vacuum (UHV) seal between the copper and stainless steel, which provides excellent reliability for cryostats operating in UHV. The performance of the STM was tested from 2 K to 77 K by using epitaxial thin Pb films on Si. Very high mechanical stability was achieved with clear atomic resolution even when using cryostats operating at 77 K. At 2 K, a clean superconducting gap was observed, and the spectrum was easily fit using the BCS density of states with negligible broadening.

  8. Compact low temperature scanning tunneling microscope with in-situ sample preparation capability

    SciTech Connect

    Kim, Jungdae; Nam, Hyoungdo; Schroeder, Allan; Shih, Chih-Kang; Qin, Shengyong; Kim, Sang-ui; Eom, Daejin

    2015-09-15

    We report on the design of a compact low temperature scanning tunneling microscope (STM) having in-situ sample preparation capability. The in-situ sample preparation chamber was designed to be compact allowing quick transfer of samples to the STM stage, which is ideal for preparing temperature sensitive samples such as ultra-thin metal films on semiconductor substrates. Conventional spring suspensions on the STM head often cause mechanical issues. To address this problem, we developed a simple vibration damper consisting of welded metal bellows and rubber pads. In addition, we developed a novel technique to ensure an ultra-high-vacuum (UHV) seal between the copper and stainless steel, which provides excellent reliability for cryostats operating in UHV. The performance of the STM was tested from 2 K to 77 K by using epitaxial thin Pb films on Si. Very high mechanical stability was achieved with clear atomic resolution even when using cryostats operating at 77 K. At 2 K, a clean superconducting gap was observed, and the spectrum was easily fit using the BCS density of states with negligible broadening.

  9. Effect of short-term room temperature storage on the microbial community in infant fecal samples

    PubMed Central

    Guo, Yong; Li, Sheng-Hui; Kuang, Ya-Shu; He, Jian-Rong; Lu, Jin-Hua; Luo, Bei-Jun; Jiang, Feng-Ju; Liu, Yao-Zhong; Papasian, Christopher J.; Xia, Hui-Min; Deng, Hong-Wen; Qiu, Xiu

    2016-01-01

    Sample storage conditions are important for unbiased analysis of microbial communities in metagenomic studies. Specifically, for infant gut microbiota studies, stool specimens are often exposed to room temperature (RT) conditions prior to analysis. This could lead to variations in structural and quantitative assessment of bacterial communities. To estimate such effects of RT storage, we collected feces from 29 healthy infants (0–3 months) and partitioned each sample into 5 portions to be stored for different lengths of time at RT before freezing at −80 °C. Alpha diversity did not differ between samples with storage time from 0 to 2 hours. The UniFrac distances and microbial composition analysis showed significant differences by testing among individuals, but not by testing between different time points at RT. Changes in the relative abundance of some specific (less common, minor) taxa were still found during storage at room temperature. Our results support previous studies in children and adults, and provided useful information for accurate characterization of infant gut microbiomes. In particular, our study furnished a solid foundation and justification for using fecal samples exposed to RT for less than 2 hours for comparative analyses between various medical conditions. PMID:27226242

  10. New high temperature plasmas and sample introduction systems for analytical atomic emission and mass spectrometry

    NASA Astrophysics Data System (ADS)

    Montaser, Akbar

    In this project, new high temperature plasmas and new sample introduction systems are developed for rapid elemental and isotopic analysis of gases, solutions, and solids using atomic emission spectrometry (AES) and mass spectrometry (MS). These devices offer promise of solving singularly difficult analytical problems that either exist now or are likely to arise in the future in the various fields of energy generation, environmental pollution, biomedicine and nutrition. Emphasis is being placed on: generation of annular, helium inductively coupled plasmas (He ICPs) that are suitable for atomization, excitation, and ionization of elements possessing high excitation and ionization energies, with the intent of enhancing the detecting powers of a number of elements; diagnostic studies of high-temperature plasmas to quantify their fundamental properties, with the ultimate aim to improve analytical performance of atomic spectrometry; development and characterization of new sample introduction systems that consume microliter or microgram quantities of samples, and investigation of new membrane separators for stripping solvent from sample aerosol to reduce various interferences and to enhance sensitivity in plasma spectrometry.

  11. Compact low temperature scanning tunneling microscope with in-situ sample preparation capability.

    PubMed

    Kim, Jungdae; Nam, Hyoungdo; Qin, Shengyong; Kim, Sang-ui; Schroeder, Allan; Eom, Daejin; Shih, Chih-Kang

    2015-09-01

    We report on the design of a compact low temperature scanning tunneling microscope (STM) having in-situ sample preparation capability. The in-situ sample preparation chamber was designed to be compact allowing quick transfer of samples to the STM stage, which is ideal for preparing temperature sensitive samples such as ultra-thin metal films on semiconductor substrates. Conventional spring suspensions on the STM head often cause mechanical issues. To address this problem, we developed a simple vibration damper consisting of welded metal bellows and rubber pads. In addition, we developed a novel technique to ensure an ultra-high-vacuum (UHV) seal between the copper and stainless steel, which provides excellent reliability for cryostats operating in UHV. The performance of the STM was tested from 2 K to 77 K by using epitaxial thin Pb films on Si. Very high mechanical stability was achieved with clear atomic resolution even when using cryostats operating at 77 K. At 2 K, a clean superconducting gap was observed, and the spectrum was easily fit using the BCS density of states with negligible broadening.

  12. Multiple sample characterization of coals and other substances by controlled-atmosphere programmed temperature oxidation

    DOEpatents

    LaCount, Robert B.

    1993-01-01

    A furnace with two hot zones holds multiple analysis tubes. Each tube has a separable sample-packing section positioned in the first hot zone and a catalyst-packing section positioned in the second hot zone. A mass flow controller is connected to an inlet of each sample tube, and gas is supplied to the mass flow controller. Oxygen is supplied through a mass flow controller to each tube to either or both of an inlet of the first tube and an intermediate portion between the tube sections to intermingle with and oxidize the entrained gases evolved from the sample. Oxidation of those gases is completed in the catalyst in each second tube section. A thermocouple within a sample reduces furnace temperature when an exothermic condition is sensed within the sample. Oxidized gases flow from outlets of the tubes to individual gas cells. The cells are sequentially aligned with an infrared detector, which senses the composition and quantities of the gas components. Each elongated cell is tapered inward toward the center from cell windows at the ends. Volume is reduced from a conventional cell, while permitting maximum interaction of gas with the light beam. Reduced volume and angulation of the cell inlets provide rapid purgings of the cell, providing shorter cycles between detections. For coal and other high molecular weight samples, from 50% to 100% oxygen is introduced to the tubes.

  13. Cusp Catastrophe Polynomial Model: Power and Sample Size Estimation

    PubMed Central

    Chen, Ding-Geng(Din); Chen, Xinguang(Jim); Lin, Feng; Tang, Wan; Lio, Y. L.; Guo, (Tammy) Yuanyuan

    2016-01-01

    Guastello's polynomial regression method for solving the cusp catastrophe model has been widely applied to analyze nonlinear behavior outcomes. However, no statistical power analysis for this modeling approach has been reported, probably due to the complex nature of the cusp catastrophe model. Since statistical power analysis is essential for research design, we propose a novel method in this paper to fill in the gap. The method is simulation-based and can be used to calculate statistical power and sample size when Guastello's polynomial regression method is used for cusp catastrophe modeling analysis. With this novel approach, a power curve is produced first to depict the relationship between statistical power and sample size under different model specifications. This power curve is then used to determine the sample size required for a specified statistical power. We verify the method first through four scenarios generated through Monte Carlo simulations, followed by an application of the method to real published data in modeling early sexual initiation among young adolescents. Findings of our study suggest that this simulation-based power analysis method can be used to estimate sample size and statistical power for Guastello's polynomial regression method in cusp catastrophe modeling. PMID:27158562

  14. Modelling LARES temperature distribution and thermal drag

    NASA Astrophysics Data System (ADS)

    Nguyen, Phuc H.; Matzner, Richard

    2015-10-01

    The LARES satellite, a laser-ranged space experiment to contribute to geophysics observation and to measure the general relativistic Lense-Thirring effect, has been observed to undergo an anomalous along-track orbital acceleration of -0.4 pm/s² (pm := picometer). This thermal "drag" is not surprising; along-track thermal drag has previously been observed with the related LAGEOS satellites (-3.4 pm/s²). It is hypothesized that the thermal drag is principally due to anisotropic thermal radiation from the satellite's exterior. We report the results of numerical computations of the along-track orbital decay of the LARES satellite during the first 126 days after launch. The results depend to a significant degree on the visual and IR absorbance α and emissivity ε of the fused silica Cube Corner Reflectors. We present results for two values of α_IR = ε_IR: 0.82, a standard number for "clean" fused silica; and 0.60, a possible value for silica with slight surface contamination subjected to the space environment. The heating and the resultant along-track acceleration depend on the plane of the orbit, the sun position, and, in particular, on the occurrence of eclipses, all of which are functions of time. Thus we compute the thermal drag for specific days. We compare our model to observational data, available for a 120 day period starting with the 7th day after launch, which shows the average acceleration of -0.4 pm/s². With our model the average along-track thermal drag over this 120 day period for CCR α_IR = ε_IR = 0.82 was computed to be -0.59 pm/s². For CCR α_IR = ε_IR = 0.60 we compute -0.36 pm/s². LARES consists of a solid tungsten sphere, into which the CCRs are set in colatitude circles. Our calculation models the satellite as 93 isothermal elements: the tungsten part, and each of the 92 Cube Corner Reflectors. The satellite is heated from two sources: sunlight and Earth's infrared (IR) radiation. We work in the fast-spin regime, where CCRs with

  15. Effect of flux adjustments on temperature variability in climate models

    NASA Astrophysics Data System (ADS)

    CMIP investigators; Duffy, P. B.; Bell, J.; Covey, C.; Sloan, L.

    2000-03-01

    It has been suggested that “flux adjustments” in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore the conclusion that at least some of observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability.

  16. Temperature calibration of lacustrine alkenones using in-situ sampling and growth cultures

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Toney, J. L.; Andersen, R.; Fritz, S. C.; Baker, P. A.; Grimm, E. C.; Theroux, S.; Amaral Zettler, L.; Nyren, P. E.

    2010-12-01

    Sedimentary alkenones have been found in an increasing number of lakes around the globe. Studies using molecular biological tools, however, indicate that the haptophyte species that produce lacustrine alkenones differ from the oceanic species. In order to convert alkenone unsaturation ratios measured in sediments into temperature, it is necessary to obtain an accurate calibration for individual lakes. Using Lake George, North Dakota, U.S., as an example, we have carried out temperature calibrations by both in-situ water column sampling and culture growth experiments. In-situ measured lake water temperatures show a strong correlation with the alkenone unsaturation indices (r-squared = 0.82), indicating a rapid equilibration of alkenone distributions with the lake water temperature in the water column. We applied the in-situ calibration to down-core measurements for Lake George and generated realistic temperature estimates for the past 8 kyr. Algal isolation and culture growth, on the other hand, reveal the presence of two different types of alkenone-producing haptophytes. The species making a predominant C37:4 alkenone (species A) produced much greater concentrations of alkenones per unit volume than the species that produced a predominant C37:3 alkenone (species B). This is the first time that a haptophyte species making a predominant C37:4 alkenone (species A) has been cultured successfully, and the culture has now been replicated at four different growth temperatures. The distribution of alkenones in Lake George sediments matches extremely well with the alkenones produced by species A, indicating that species A is likely the producer of the alkenones in the sediments. The alkenone unsaturation ratio of alkenones produced by species A shows a primary dependence on growth temperature as expected, but the slope of change appears to vary depending on the growth stage. The implications of our findings for paleoclimate reconstructions using lacustrine alkenones will be discussed.

  17. Modeling temperature dependence of trace element concentrations in groundwater using temperature dependent distribution coefficient

    NASA Astrophysics Data System (ADS)

    Saito, H.; Saito, T.; Hamamoto, S.; Komatsu, T.

    2015-12-01

    In our previous study, we observed that trace element concentrations in groundwater increased when the groundwater temperature was increased by constant thermal loading using a 50-m long vertical heat exchanger installed at Saitama University, Japan. During the field experiment, 38 °C fluid was circulated in the heat exchanger, resulting in 2.8 kW of thermal loading over 295 days. Groundwater samples were collected regularly from 17-m and 40-m deep aquifers at four observation wells located 1, 2, 5, and 10 m, respectively, from the heat exchange well and were analyzed with ICP-MS. As a result, concentrations of some trace elements such as boron increased with temperature, especially at the 17-m deep aquifer, which is known to be marine sediment. It has also been observed that the increased concentrations decreased after the thermal loading was terminated, indicating that this phenomenon may be reversible. Although the mechanism is not fully understood, changes in the liquid phase concentration should be associated with dissolution and/or desorption from the solid phase. We therefore attempt to model this phenomenon by introducing temperature dependence into equilibrium linear adsorption isotherms. We assumed that distribution coefficients decrease with temperature so that the liquid phase concentration of a given element becomes higher as the temperature increases, under the condition that the total mass stays constant. A shape function was developed to model the temperature dependence of the distribution coefficient. By solving the mass balance equation between the liquid phase and the solid phase for a given element, a new term describing changes in the concentration was implemented in a source/sink term of a standard convection dispersion equation (CDE). The CDE was then solved under a constant groundwater flow using FlexPDE. By calibrating parameters in the newly developed shape function, the changes in element concentrations observed were quite well predicted. The
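
    A hedged sketch of the bookkeeping described above: a distribution coefficient that decreases with temperature releases mass from the solid phase into solution when the total mass per bulk volume is held fixed. The linear shape function and all parameter values below are placeholders, not the authors' calibrated form:

```python
# Hedged sketch of a temperature-dependent distribution coefficient Kd(T):
# with total mass (solid + liquid) held fixed, a lower Kd at higher temperature
# shifts mass into the liquid phase. The linear shape function and the numbers
# are placeholders; the record above does not give the authors' actual form.

def liquid_concentration(total_mass, Kd, rho_b, theta):
    """Dissolved concentration from total mass per bulk volume, assuming S = Kd * C.

    total_mass -- element mass per bulk aquifer volume (mg per L of bulk volume)
    Kd         -- distribution coefficient (L/kg)
    rho_b      -- bulk density (kg/L)
    theta      -- volumetric water content (-)
    """
    # total = theta * C + rho_b * S = (theta + rho_b * Kd) * C
    return total_mass / (theta + rho_b * Kd)

def Kd_of_T(T, Kd_ref=2.0, T_ref=15.0, slope=0.05):
    """Placeholder shape function: Kd decreases linearly with temperature."""
    return max(Kd_ref - slope * (T - T_ref), 0.01)

for T in (15.0, 25.0, 35.0):
    C = liquid_concentration(total_mass=1.0, Kd=Kd_of_T(T), rho_b=1.6, theta=0.4)
    print(f"T = {T:4.1f} C  ->  dissolved concentration {C:.3f}")
```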

  18. Long-term room temperature preservation of corpse soft tissue: an approach for tissue sample storage

    PubMed Central

    2011-01-01

    Background Disaster victim identification (DVI) represents one of the most difficult challenges in forensic sciences, and subsequent DNA typing is essential. Collected samples for DNA-based human identification are usually stored at low temperature to halt the degradation processes of human remains. We have developed a simple and reliable procedure for soft tissue storage and preservation for DNA extraction. It ensures high quality DNA suitable for PCR-based DNA typing after at least 1 year of room temperature storage. Methods Fragments of human psoas muscle were exposed to three different environmental conditions for diverse time periods at room temperature. Storage conditions included: (a) a preserving medium consisting of solid sodium chloride (salt), (b) no additional substances and (c) garden soil. DNA was extracted with proteinase K/SDS followed by organic solvent treatment and concentration by centrifugal filter devices. Quantification was carried out by real-time PCR using commercial kits. Short tandem repeat (STR) typing profiles were analysed with 'expert software'. Results DNA quantities recovered from samples stored in salt were similar up to the complete storage time and underscored the effectiveness of the preservation method. It was possible to reliably and accurately type different genetic systems including autosomal STRs and mitochondrial and Y-chromosome haplogroups. Autosomal STR typing quality was evaluated by expert software, denoting high quality profiles from DNA samples obtained from corpse tissue stored in salt for up to 365 days. Conclusions The procedure proposed herein is a cost efficient alternative for storage of human remains in challenging environmental areas, such as mass disaster locations, mass graves and exhumations. This technique should be considered as an additional method for sample storage when preservation of DNA integrity is required for PCR-based DNA typing. PMID:21846338

  19. An open-population hierarchical distance sampling model

    USGS Publications Warehouse

    Sollmann, Rachel; Beth Gardner,; Richard B Chandler,; Royle, J. Andrew; T Scott Sillett,

    2015-01-01

    Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for direct estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for island scrub-jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying number of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.

  20. A Nonlinear Viscoelastic Model for Ceramics at High Temperatures

    NASA Technical Reports Server (NTRS)

    Powers, Lynn M.; Panoskaltsis, Vassilis P.; Gasparini, Dario A.; Choi, Sung R.

    2002-01-01

    High-temperature creep behavior of ceramics is characterized by nonlinear time-dependent responses, asymmetric behavior in tension and compression, and nucleation and coalescence of voids leading to creep rupture. Moreover, creep rupture experiments show considerable scatter or randomness in fatigue lives of nominally equal specimens. To capture the nonlinear, asymmetric time-dependent behavior, the standard linear viscoelastic solid model is modified. Nonlinearity and asymmetry are introduced in the volumetric components by using a nonlinear function similar to a hyperbolic sine function but modified to model asymmetry. The nonlinear viscoelastic model is implemented in an ABAQUS user material subroutine. To model the random formation and coalescence of voids, each element is assigned a failure strain sampled from a lognormal distribution. An element is deleted when its volumetric strain exceeds its failure strain. Element deletion has been implemented within ABAQUS. Temporal increases in strains produce a sequential loss of elements (a model for void nucleation and growth), which in turn leads to failure. Nonlinear viscoelastic model parameters are determined from uniaxial tensile and compressive creep experiments on silicon nitride. The model is then used to predict the deformation of four-point bending and ball-on-ring specimens. Simulation is used to predict statistical moments of creep rupture lives. Numerical simulation results compare well with results of experiments of four-point bending specimens. The analytical model is intended to be used to predict the creep rupture lives of ceramic parts in arbitrary stress conditions.

  1. NERVE AS MODEL TEMPERATURE END ORGAN

    PubMed Central

    Bernhard, C. G.; Granit, Ragnar

    1946-01-01

    Rapid local cooling of mammalian nerve sets up a discharge which is preceded by a local temperature potential, the cooled region being electronegative relative to a normal portion of the nerve. Heating the nerve locally above its normal temperature similarly makes the heated region electronegative relative to a region at normal temperature, and again a discharge is set up from the heated region. These local temperature potentials, set up by the nerve itself, are held to serve as "generator potentials" and the mechanism found is regarded as the prototype for temperature end organs. PMID:19873460

  2. Water adsorption at high temperature on core samples from The Geysers geothermal field

    SciTech Connect

    Gruszkiewicz, M.S.; Horita, J.; Simonson, J.M.; Mesmer, R.E.

    1998-06-01

    The quantity of water retained by rock samples taken from three wells located in The Geysers geothermal field, California, was measured at 150, 200, and 250 °C as a function of steam pressure in the range 0.00 ≤ p/p0 ≤ 0.98, where p0 is the saturated water vapor pressure. Both adsorption and desorption runs were made in order to investigate the extent of the hysteresis. Additionally, low temperature gas adsorption analyses were made on the same rock samples. Mercury intrusion porosimetry was also used to obtain similar information extending to very large pores (macropores). A qualitative correlation was found between the surface properties obtained from nitrogen adsorption and the mineralogical and petrological characteristics of the solids. However, there was no direct correlation between BET specific surface areas and the capacity of the rocks for water adsorption at high temperatures. The hysteresis decreased significantly at 250 °C. The results indicate that multilayer adsorption, rather than capillary condensation, is the dominant water storage mechanism at high temperatures.

  3. Water adsorption at high temperature on core samples from The Geysers geothermal field

    SciTech Connect

    Gruszkiewicz, M.S.; Horita, J.; Simonson, J.M.; Mesmer, R.E.

    1998-06-01

    The quantity of water retained by rock samples taken from three wells located in The Geysers geothermal reservoir, California, was measured at 150, 200, and 250 °C as a function of pressure in the range 0.00 ≤ p/p0 ≤ 0.98, where p0 is the saturated water vapor pressure. Both adsorption (increasing pressure) and desorption (decreasing pressure) runs were made in order to investigate the nature and the extent of the hysteresis. Additionally, low temperature gas adsorption analyses were performed on the same rock samples. Nitrogen or krypton adsorption and desorption isotherms at 77 K were used to obtain BET specific surface areas, pore volumes and their distributions with respect to pore sizes. Mercury intrusion porosimetry was also used to obtain similar information extending to very large pores (macropores). A qualitative correlation was found between the surface properties obtained from nitrogen adsorption and the mineralogical and petrological characteristics of the solids. However, there is in general no proportionality between BET specific surface areas and the capacity of the rocks for water adsorption at high temperatures. The results indicate that multilayer adsorption rather than capillary condensation is the dominant water storage mechanism at high temperatures.

  4. On species sampling sequences induced by residual allocation models

    PubMed Central

    Rodríguez, Abel; Quintana, Fernando A.

    2014-01-01

    We discuss fully Bayesian inference in a class of species sampling models that are induced by residual allocation (sometimes called stick-breaking) priors on almost surely discrete random measures. This class provides a generalization of the well-known Ewens sampling formula that allows for additional flexibility while retaining computational tractability. In particular, the procedure is used to derive the exchangeable predictive probability functions associated with the generalized Dirichlet process of Hjort (2000) and the probit stick-breaking prior of Chung and Dunson (2009) and Rodriguez and Dunson (2011). The procedure is illustrated with applications to genetics and nonparametric mixture modeling. PMID:25477705
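
    A minimal, hedged sketch of the generic residual allocation (stick-breaking) construction and the species sampling sequence it induces; this is not the specific generalized Dirichlet process or probit stick-breaking priors analyzed in the record above, and all parameter values are illustrative:

```python
import numpy as np

# Minimal sketch of a truncated stick-breaking (residual allocation) prior and
# the species labels it induces; generic construction only, for illustration.

rng = np.random.default_rng(0)

def stick_breaking_weights(alpha, n_atoms):
    """Truncated weights w_k = v_k * prod_{j<k} (1 - v_j) with v_k ~ Beta(1, alpha)."""
    v = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining

w = stick_breaking_weights(alpha=2.0, n_atoms=200)
labels = rng.choice(len(w), size=50, p=w / w.sum())   # induced species labels
print("distinct species in a sample of 50:", len(set(labels)))
```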

  5. Geostatistical modeling of riparian forest microclimate and its implications for sampling

    USGS Publications Warehouse

    Eskelson, B.N.I.; Anderson, P.D.; Hagar, J.C.; Temesgen, H.

    2011-01-01

    Predictive models of microclimate under various site conditions in forested headwater stream - riparian areas are poorly developed, and sampling designs for characterizing underlying riparian microclimate gradients are sparse. We used riparian microclimate data collected at eight headwater streams in the Oregon Coast Range to compare ordinary kriging (OK), universal kriging (UK), and kriging with external drift (KED) for point prediction of mean maximum air temperature (Tair). Several topographic and forest structure characteristics were considered as site-specific parameters. Height above stream and distance to stream were the most important covariates in the KED models, which outperformed OK and UK in terms of root mean square error. Sample patterns were optimized based on the kriging variance and the weighted means of shortest distance criterion using the simulated annealing algorithm. The optimized sample patterns outperformed systematic sample patterns in terms of mean kriging variance mainly for small sample sizes. These findings suggest methods for increasing efficiency of microclimate monitoring in riparian areas.

  6. Abstract: Sample Size Planning for Latent Curve Models.

    PubMed

    Lai, Keke

    2011-11-30

    When designing a study that uses structural equation modeling (SEM), an important task is to decide an appropriate sample size. Historically, this task is approached from the power analytic perspective, where the goal is to obtain sufficient power to reject a false null hypothesis. However, hypothesis testing only tells if a population effect is zero and fails to address the question about the population effect size. Moreover, significance tests in the SEM context often reject the null hypothesis too easily, and therefore the problem in practice is having too much power instead of not enough power. An alternative means to infer the population effect is forming confidence intervals (CIs). A CI is more informative than hypothesis testing because a CI provides a range of plausible values for the population effect size of interest. Given the close relationship between CI and sample size, the sample size for an SEM study can be planned with the goal to obtain sufficiently narrow CIs for the population model parameters of interest. Latent curve models (LCMs) are an application of SEM with a mean structure for studying change over time. The sample size planning method for LCMs from the CI perspective is based on maximum likelihood and the expected information matrix. Given a sample, forming a CI for the model parameter of interest in an LCM requires the sample covariance matrix S, the sample mean vector x̄, and the sample size N. Therefore, the width (w) of the resulting CI can be considered a function of S, x̄, and N. Inverting the CI formation process gives the sample size planning process. The inverted process requires a proxy for the population covariance matrix Σ, population mean vector μ, and the desired width ω as input, and it returns N as output. The specification of the input information for sample size planning needs to be performed based on a systematic literature review. In the context of covariance structure analysis, Lai and Kelley

  7. A temperature dependent SPICE macro-model for power MOSFETs

    SciTech Connect

    Pierce, D.G.

    1991-01-01

    The power MOSFET SPICE Macro-Model has been developed for use over the temperature range -55 to 125 °C. The model is comprised of a single parameter set with temperature dependence accessed through the SPICE .TEMP card. SPICE parameter extraction techniques for the model and model predictive accuracy are discussed. 7 refs., 8 figs., 1 tab.

  8. Simulating canopy temperature for modelling heat stress in cereals

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Crop models must be improved to account for the large effects of heat stress on crop yields. To date, most approaches in crop models use air temperature despite evidence that crop canopy temperature better explains yield reductions associated with high temperature events. This study presents...

  9. Far-infrared Dust Temperatures and Column Densities of the MALT90 Molecular Clump Sample

    NASA Astrophysics Data System (ADS)

    Guzmán, Andrés E.; Sanhueza, Patricio; Contreras, Yanett; Smith, Howard A.; Jackson, James M.; Hoq, Sadia; Rathborne, Jill M.

    2015-12-01

    We present dust column densities and dust temperatures for ~3000 young, high-mass molecular clumps from the Millimeter Astronomy Legacy Team 90 GHz survey, derived by fitting single-temperature dust emission models to the far-infrared intensity maps measured between 160 and 870 μm from the Herschel/Herschel Infrared Galactic Plane Survey (Hi-Gal) and APEX/APEX Telescope Large Area Survey of the Galaxy (ATLASGAL) surveys. We discuss the methodology employed in analyzing the data, calculating physical parameters, and estimating their uncertainties. The population-averaged dust temperatures of the clumps are 16.8 ± 0.2 K for the clumps that do not exhibit mid-infrared signatures of star formation (quiescent clumps), 18.6 ± 0.2 K for the clumps that display mid-infrared signatures of ongoing star formation but have not yet developed an H ii region (protostellar clumps), and 23.7 ± 0.2 and 28.1 ± 0.3 K for clumps associated with H ii and photo-dissociation regions, respectively. These four groups exhibit large overlaps in their temperature distributions, with dispersions ranging between 4 and 6 K. The median of the peak column densities of the protostellar clump population is 0.20 ± 0.02 g cm⁻², which is about 50% higher than the median of the peak column densities associated with clumps in the other evolutionary stages. We compare the dust temperatures and column densities measured toward the center of the clumps with the mean values of each clump. We find that in the quiescent clumps, the dust temperature increases toward the outer regions and that these clumps are associated with the shallowest column density profiles. In contrast, molecular clumps in the protostellar or H ii region phase have dust temperature gradients more consistent with internal heating and are associated with steeper column density profiles compared with the quiescent clumps.
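
    A hedged sketch of a single-temperature modified-blackbody (greybody) fit to far-infrared fluxes, of the general kind described above; the fixed emissivity index, the reference wavelength, and the synthetic fluxes are placeholders, not the values used in the survey pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: fit a single-temperature modified blackbody (greybody),
# S_nu proportional to nu**beta * B_nu(T), to far-infrared fluxes. The fixed
# beta, the 250 um normalization, and the fluxes are illustrative only.

h, k_B, c = 6.626e-34, 1.381e-23, 2.998e8

def planck(nu, T):
    """Planck function B_nu(T) in SI units."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

nu0 = c / 250e-6                                   # reference frequency (250 um)

def greybody(nu, T, S250, beta=1.8):
    """Flux density normalized to its value S250 at 250 um."""
    return S250 * (nu / nu0)**beta * planck(nu, T) / planck(nu0, T)

wavelengths_um = np.array([160.0, 250.0, 350.0, 500.0, 870.0])   # far-IR/submm bands
nu = c / (wavelengths_um * 1e-6)
fluxes = np.array([180.0, 120.0, 60.0, 25.0, 5.0])               # Jy, synthetic clump SED

(T_fit, S250_fit), _ = curve_fit(greybody, nu, fluxes, p0=(20.0, 100.0))
print(f"fitted dust temperature ~{T_fit:.1f} K")
```

    Fixing the emissivity index in the sketch avoids the well-known degeneracy between temperature and spectral index when only a few photometric bands are available.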

  10. Study of Low Temperature Baking Effect on Field Emission on Nb Samples Treated by BEP, EP, and BCP

    SciTech Connect

    Andy Wu, Song Jin, Robert Rimmer, Xiang Yang Lu, K. Zhao, Laura MacIntyre, Robert Ike

    2010-05-01

    Field emission is still one of the major obstacles facing the Nb superconducting radio frequency (SRF) community in allowing Nb SRF cavities to routinely reach the accelerating gradient of 35 MV/m that is required for the International Linear Collider. Nowadays, the well-known low temperature baking at 120 °C for 48 hours is a common procedure used in the SRF community to improve the high field Q slope. However, some cavity production data have shown that the low temperature baking may induce field emission for cavities treated by EP. On the other hand, an earlier study of field emission on Nb flat samples treated by BCP reached the opposite conclusion. In this presentation, preliminary measurements of Nb flat samples treated by BEP, EP, and BCP via our unique home-made scanning field emission microscope before and after the low temperature baking are reported. Some correlations between surface smoothness and the number of observed field emitters were found. The observed experimental results can be understood, at least partially, by a simple model that involves the change in thickness of the pentoxide layer on Nb surfaces.

  11. Modeling 3D faces from samplings via compressive sensing

    NASA Astrophysics Data System (ADS)

    Sun, Qi; Tang, Yanlong; Hu, Ping

    2013-07-01

    3D data is easier to acquire for family entertainment purposes today because of the mass production, cheapness and portability of domestic RGBD sensors, e.g., Microsoft Kinect. However, the accuracy of facial modeling is affected by the roughness and instability of the raw input data from such sensors. To overcome this problem, we introduce a compressive sensing (CS) method to build a novel 3D super-resolution scheme that reconstructs high-resolution facial models from rough samples captured by Kinect. Unlike simple frame-fusion super-resolution methods, this approach acquires compressed samples for storage before a high-resolution image is produced. In this scheme, depth frames are first captured and then each of them is measured into compressed samples using sparse coding. Next, the samples are fused to produce an optimal one, and finally a high-resolution image is recovered from the fused sample. This framework is able to recover the 3D facial model of a given user from compressed samples, which can reduce storage space as well as measurement cost in future devices, e.g., single-pixel depth cameras. Hence, this work can potentially be applied to future applications, such as access control systems using face recognition and smart phones with depth cameras, which need high resolution and little measurement time.

  12. Temperature Chaos in Some Spherical Mixed p-Spin Models

    NASA Astrophysics Data System (ADS)

    Chen, Wei-Kuo; Panchenko, Dmitry

    2017-03-01

    We give two types of examples of the spherical mixed even-p-spin models for which chaos in temperature holds. These complement some known results for the spherical pure p-spin models and for models with Ising spins. For example, in contrast to a recent result of Subag who showed absence of chaos in temperature in the spherical pure p-spin models for p≥3, we show that even a smaller order perturbation induces temperature chaos.

  13. Modeling Background Attenuation by Sample Matrix in Gamma Spectrometric Analyses

    SciTech Connect

    Bastos, Rodrigo O.; Appoloni, Carlos R.

    2008-08-07

    In laboratory gamma spectrometric analyses, the procedures for estimating background usually overestimate it. If an empty container similar to that used to hold samples is measured, the background attenuation by the sample matrix is not taken into account. If a 'blank' sample is measured, the hypothesis that this sample is free of radionuclides is generally not true; the activity of the 'blank' sample is frequently sufficient to mask or overwhelm the effect of attenuation, so that the background remains overestimated. In order to overcome this problem, a model was developed to obtain the attenuated background from the spectrum acquired with the empty container. In addition to reasonable hypotheses, the model requires knowledge of the linear attenuation coefficient of the samples and its dependence on photon energy and sample density. An evaluation of the effect of this model on the Lowest Limit of Detection (LLD) is presented for geological samples placed in cylindrical containers that completely cover the top of an HPGe detector with 66% relative efficiency. The results are presented for energies in the range of 63 to 2614 keV, for sample densities varying from 1.5 to 2.5 g cm^-3, and for material heights on the detector of 2 cm and 5 cm. For a sample density of 2.0 g cm^-3 and a 2 cm height, the method allowed a lowering of the LLD by 3.4% for the 1460 keV line of ⁴⁰K, 3.9% for the 911 keV line of ²²⁸Ac, 4.5% for the 609 keV line of ²¹⁴Bi, and 8.3% for the 92 keV line of ²³⁴Th. For a sample density of 1.75 g cm^-3 and a 5 cm height, the method indicates a lowering of the LLD by 6.5%, 7.4%, 8.3%, and 12.9% for the same respective energies.
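    The core correction amounts to scaling the empty-container background by the mean photon transmission through the sample matrix, exp(-mu*rho*x) averaged over path lengths. The snippet below is a hedged sketch assuming a simple slab geometry and vertical incidence; the paper's full detector geometry is richer, and the mass attenuation coefficient and count numbers are illustrative only.

      # Hedged sketch: attenuate the empty-container background by the mean slab transmission.
      import numpy as np

      def mean_transmission(mu_mass, rho, h, n=1000):
          """Average of exp(-mu*rho*x) over path lengths x uniformly spread in [0, h] cm;
          mu_mass is the mass attenuation coefficient (cm^2/g) at the photon energy."""
          x = np.linspace(0.0, h, n)
          return float(np.mean(np.exp(-mu_mass * rho * x)))

      # illustrative numbers: mu_mass ~ 0.055 cm^2/g near 1460 keV for silicate material
      b_empty = 120.0            # counts in the 1460 keV region with the empty container
      b_attenuated = b_empty * mean_transmission(0.055, 2.0, 2.0)
      print(f"attenuated background estimate: {b_attenuated:.1f} counts")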

  14. Application of sample-sample two-dimensional correlation spectroscopy to determine the glass transition temperature of poly(ethylene terephthalate) thin films.

    PubMed

    Hu, Yun; Zhang, Ying; Li, Boyan; Ozaki, Yukihiro

    2007-01-01

    The glass transition temperatures (Tg) of poly(ethylene terephthalate) (PET) thin films with different thicknesses are determined by analyzing their in situ reflection-absorption infrared (RAIR) spectra measured over a temperature range of 28 to 84 degrees C. The standard deviation of the covariance matrices is used as a graphical criterion for locating Tg in the sample-sample two-dimensional (2D) correlation spectra calculated from the temperature-dependent RAIR spectra. After two data pretreatments, taking the first derivative of the spectral absorbance with respect to temperature and mean-normalizing over the wavenumbers, are sequentially carried out on the RAIR spectra, an abrupt change of the first-derivative correlation spectra with respect to temperature is readily obtained. It reflects the temperature at which the apparent intensity changes in pertinent absorption bands of the PET thin films take place due to the dramatic segmental motion of the PET chain conformation. The Tg of the thin PET films is accordingly determined. The results reveal that Tg decreases with a strong dependence on film thickness and that sample-sample 2D correlation spectroscopy enables one to determine the transition temperature of polymer thin films in an easy and valid way.
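    For readers unfamiliar with the technique, the synchronous sample-sample correlation matrix is simply the covariance between spectra recorded at different temperatures, computed over the wavenumber axis. The toy sketch below uses one common convention (mean-centering over temperatures) and synthetic spectra with an intensity step near a hypothetical Tg; the paper's pretreatments are only loosely mirrored.

      # Hedged sketch of synchronous sample-sample 2D correlation on toy spectra.
      import numpy as np

      def sample_sample_corr(spectra):
          """spectra: (n_temperatures, n_wavenumbers) array -> (n_T, n_T) covariance matrix."""
          a = spectra - spectra.mean(axis=0, keepdims=True)   # center at each wavenumber
          return a @ a.T / (spectra.shape[1] - 1)

      temps = np.linspace(28, 84, 29)
      wn = np.linspace(1300, 1500, 200)
      # toy band whose intensity steps up above a hypothetical Tg of 70 C
      spectra = np.exp(-((wn - 1410) / 20) ** 2) * (1 + 0.2 * (temps[:, None] > 70))
      corr = sample_sample_corr(spectra)
      # a rough analogue of the paper's standard-deviation criterion, row by row
      print("row-wise std of the covariance matrix:", corr.std(axis=1)[:5])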

  15. A three stage sampling model for remote sensing applications

    NASA Technical Reports Server (NTRS)

    Eisgruber, L. M.

    1972-01-01

    A conceptual model and an empirical application of the relationship between the manner of selecting observations and its effect on the precision of estimates from remote sensing are reported. This three stage sampling scheme considers flightlines, segments within flightlines, and units within these segments. The error of estimate is dependent on the number of observations in each of the stages.
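    For orientation, a textbook expression for the variance of the sample mean under three-stage sampling with equal-sized units and simple random sampling at every stage (finite-population corrections omitted; the report's own estimator may differ in detail) is

      $$ \operatorname{Var}(\bar{y}) \;\approx\; \frac{S_1^2}{n} \;+\; \frac{S_2^2}{nm} \;+\; \frac{S_3^2}{nmk}, $$

    where n is the number of flightlines, m the number of segments sampled per flightline, k the number of units per segment, and S_1^2, S_2^2, S_3^2 are the between-flightline, between-segment (within flightline), and within-segment variance components. This makes explicit how the error of estimate depends on the number of observations allocated to each stage.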

  16. Automated biowaste sampling system urine subsystem operating model, part 1

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.; Mangialardi, J. K.; Rosen, F.

    1973-01-01

    The urine subsystem automatically provides for the collection, volume sensing, and sampling of urine from six subjects during space flight. Verification of the subsystem design was a primary objective of the current effort, which was accomplished through the detailed design, fabrication, and verification testing of an operating model of the subsystem.

  17. Language Arts Curriculum Framework: Sample Curriculum Model, Grade 8.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    Based on the 1998 Arkansas English Language Arts Curriculum Frameworks, this sample curriculum model for grade eight language arts is divided into sections focusing on writing; reading; and listening, speaking, and viewing. The writing section's stated goals are to help students employ a wide range of strategies as they write; use different…

  18. Language Arts Curriculum Framework: Sample Curriculum Model, Grade 5.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    Based on the 1998 Arkansas English Language Arts Curriculum Frameworks, this sample curriculum model for grade five language arts is divided into sections focusing on writing; reading; and listening, speaking, and viewing. The writing section's stated goals are to help students employ a wide range of strategies as they write; use different writing…

  19. Language Arts Curriculum Framework: Sample Curriculum Model, Grade 7.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    Based on the 1998 Arkansas English Language Arts Curriculum Frameworks, this sample curriculum model for grade seven language arts is divided into sections focusing on writing; reading; and listening, speaking, and viewing. The writing section's stated goals are to help students employ a wide range of strategies as they write; use different…

  20. Language Arts Curriculum Framework: Sample Curriculum Model, Grade 6.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    Based on the 1998 Arkansas English Language Arts Curriculum Frameworks, this sample curriculum model for grade six language arts is divided into sections focusing on writing; reading; and listening, speaking, and viewing. The writing section's stated goals are to help students employ a wide range of strategies as they write; use different writing…

  1. Description of a sample holder for ion channeling near liquid-helium temperature

    NASA Astrophysics Data System (ADS)

    Daudin, B.; Dubus, M.; Viargues, F.

    1990-01-01

    Ion channeling is sensitive to very small shifts (10^-2 nm) of the atomic equilibrium positions. As a consequence, this technique is suitable for studying lattice dynamics, in particular when a displacive phase transition occurs. As many phase transitions of interest are observed at low temperature, we developed a three-axis goniometer in order to perform channeling experiments between 5 and 30 K. As no thermal screen could be placed between the sample and the ion beam, the quantity of heat radiated onto the sample holder was very large. The technical solutions that were chosen to overcome this difficulty and ensure both efficient cooling and good rotational mobility of the sample are described in detail. A liquid-helium flow of ~6.5 l/h was found to be necessary to achieve continuous refrigeration of the sample at 5 K. To conclude, proton channeling experiments in the blue bronze, K0.3MoO3, are presented as an illustration of the capabilities of the device.

  2. The X-ray luminosity-temperature relation of a complete sample of low-mass galaxy clusters

    NASA Astrophysics Data System (ADS)

    Zou, S.; Maughan, B. J.; Giles, P. A.; Vikhlinin, A.; Pacaud, F.; Burenin, R.; Hornstrup, A.

    2016-11-01

    We present Chandra observations of 23 galaxy groups and low-mass galaxy clusters at 0.03 < z < 0.15 with a median temperature of ~2 keV. The sample is a statistically complete flux-limited subset of the 400 deg^2 survey. We investigated the scaling relation between X-ray luminosity (L) and temperature (T), taking selection biases fully into account. The logarithmic slope of the bolometric L-T relation was found to be 3.29 ± 0.33, consistent with values typically found for samples of more massive clusters. In combination with other recent studies of the L-T relation, we show that there is no evidence for the slope, normalization, or scatter of the L-T relation of galaxy groups being different than that of massive clusters. The exception to this is that in the special case of the most relaxed systems, the slope of the core-excised L-T relation appears to steepen from the self-similar value found for massive clusters to a steeper slope for the lower mass sample studied here. Thanks to our rigorous treatment of selection biases, these measurements provide a robust reference against which to compare predictions of models of the impact of feedback on the X-ray properties of galaxy groups.

  3. Temperature distributions in the laser-heated diamond anvil cell from 3-D numerical modeling

    SciTech Connect

    Rainey, E. S. G.; Kavner, A.; Hernlund, J. W.

    2013-11-28

    We present TempDAC, a 3-D numerical model for calculating the steady-state temperature distribution for continuous wave laser-heated experiments in the diamond anvil cell. TempDAC solves the steady heat conduction equation in three dimensions over the sample chamber, gasket, and diamond anvils and includes material-, temperature-, and direction-dependent thermal conductivity, while allowing for flexible sample geometries, laser beam intensity profile, and laser absorption properties. The model has been validated against an axisymmetric analytic solution for the temperature distribution within a laser-heated sample. Example calculations illustrate the importance of considering heat flow in three dimensions for the laser-heated diamond anvil cell. In particular, we show that a “flat top” input laser beam profile does not lead to a more uniform temperature distribution or flatter temperature gradients than a wide Gaussian laser beam.
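    Schematically, the equation such a model solves is the steady-state heat conduction equation with material-, temperature-, and direction-dependent conductivity and a volumetric laser heating term (notation generic, not the paper's own):

      $$ \nabla \cdot \big[ \mathbf{k}(T,\mathbf{r})\, \nabla T(\mathbf{r}) \big] \;+\; q_{\mathrm{laser}}(\mathbf{r}) \;=\; 0, $$

    where \mathbf{k} is the (possibly anisotropic) thermal conductivity over the sample chamber, gasket, and anvils, and q_laser is the absorbed laser power per unit volume.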

  4. Accelerated rare event sampling: Refinement and Ising model analysis

    NASA Astrophysics Data System (ADS)

    Yevick, David; Lee, Yong Hwan

    In this paper, a recently introduced accelerated sampling technique [D. Yevick, Int. J. Mod. Phys. C 27, 1650041 (2016)] for constructing transition matrices is further developed and applied to a two-dimensional 32×32 Ising spin system. By permitting backward displacements up to a certain limit for each forward step while evolving the system to first higher and then lower energies within a restricted interval that is steadily displaced toward zero temperature as the computation proceeds, accuracy can be greatly enhanced. Simultaneously, the elements obtained from numerous independent calculations are collected in a single transition matrix. The relative accuracy of this novel method is established through a comparison to a transition matrix procedure based on the Metropolis algorithm in which the temperature is appropriately varied during the calculation and the results interpreted in terms of the distribution of realizations over both energy and magnetization.
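    For context, a generic transition-matrix Metropolis sketch for a small 2-D Ising lattice is given below: attempted single-spin moves are tallied in a matrix indexed by the energies before and after the move. The paper's accelerated scheme (bounded backward displacements, a window steadily displaced toward zero temperature, merging of independent runs) is not reproduced; lattice size and temperature are illustrative.

      # Generic transition-matrix tally during Metropolis sampling of a small Ising lattice.
      import numpy as np

      L, beta, sweeps = 8, 0.4, 200
      rng = np.random.default_rng(6)
      spins = rng.choice([-1, 1], size=(L, L))

      def energy(s):
          return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

      def idx(e):
          # energies of a periodic L*L Ising lattice are multiples of 4 in [-2L^2, 2L^2]
          return int((e + 2 * L * L) // 4)

      counts = np.zeros((L * L + 1, L * L + 1))
      e = energy(spins)
      for _ in range(sweeps * L * L):
          i, j = rng.integers(L, size=2)
          de = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                                  spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
          counts[idx(e), idx(e + de)] += 1          # tally the attempted transition
          if de <= 0 or rng.random() < np.exp(-beta * de):
              spins[i, j] *= -1
              e += de

      row_sums = counts.sum(axis=1, keepdims=True)
      T = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
      print("visited energy levels:", int((row_sums > 0).sum()))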

  5. Modeling air temperature changes in Northern Asia

    NASA Astrophysics Data System (ADS)

    Onuchin, A.; Korets, M.; Shvidenko, A.; Burenina, T.; Musokhranova, A.

    2014-11-01

    Based on time series (1950-2005) of monthly temperatures from 73 weather stations in Northern Asia (bounded by 70-180° E longitude and 48-75° N latitude), it is shown that there are statistically significant spatial differences in the character and intensity of the monthly and yearly temperature trends. These differences are determined by geomorphological and geographical parameters of the area, including exposure of the territory to Arctic and Pacific air masses, geographic coordinates, elevation, and distance to the Arctic and Pacific oceans. The study area has been divided into six domains with unique groupings of the temperature trends based on cluster analysis. An original methodology for mapping temperature trends has been developed and applied to the region. The assessment of spatial patterns of temperature trends at the regional level requires consideration of specific regional features in the complex of factors operating in the atmosphere-hydrosphere-lithosphere-biosphere system.

  6. Sequential Sampling Models in Cognitive Neuroscience: Advantages, Applications, and Extensions

    PubMed Central

    Forstmann, B.U.; Ratcliff, R.; Wagenmakers, E.-J.

    2016-01-01

    Sequential sampling models assume that people make speeded decisions by gradually accumulating noisy information until a threshold of evidence is reached. In cognitive science, one such model—the diffusion decision model—is now regularly used to decompose task performance into underlying processes such as the quality of information processing, response caution, and a priori bias. In the cognitive neurosciences, the diffusion decision model has recently been adopted as a quantitative tool to study the neural basis of decision making under time pressure. We present a selective overview of several recent applications and extensions of the diffusion decision model in the cognitive neurosciences. PMID:26393872
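    To make the accumulation idea concrete, the sketch below simulates the core of a drift-diffusion trial: noisy evidence accumulates until it reaches an upper or lower bound, yielding a choice and a response time. Parameter names and values are illustrative and not taken from the review.

      # Hedged drift-diffusion sketch: evidence accumulation to a bound.
      import numpy as np

      def ddm_trial(drift=0.3, bound=1.0, start=0.0, noise=1.0, dt=1e-3, rng=None):
          if rng is None:
              rng = np.random.default_rng()
          x, t = start, 0.0
          while abs(x) < bound:
              x += drift * dt + noise * np.sqrt(dt) * rng.normal()
              t += dt
          return ("upper" if x > 0 else "lower"), t

      rng = np.random.default_rng(3)
      trials = [ddm_trial(rng=rng) for _ in range(1000)]
      rts = np.array([t for _, t in trials])
      acc = np.mean([choice == "upper" for choice, _ in trials])
      print(f"proportion of upper-bound choices: {acc:.2f}, mean RT: {rts.mean():.2f} s")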

  7. Modeling the Freezing of Sn in High Temperature Furnaces

    NASA Technical Reports Server (NTRS)

    Brush, Lucien

    1999-01-01

    Presently, crystal growth furnaces are being designed that will be used to monitor the crystal-melt interface shape and the solutal and thermal fields in its vicinity during the directional freezing of dilute binary alloys. To monitor the thermal field within the solidifying materials, thermocouple arrays (AMITA) are inserted into the sample. Intrusive thermocouple monitoring devices can affect the experimental data being measured. Therefore, one objective of this work is to minimize the effect of the thermocouples on the data generated. To aid in accomplishing this objective, two models of solidification have been developed. Model A is a fully transient, one-dimensional model for the freezing of a dilute binary alloy that is used to compute temperature profiles for comparison with measurements taken from the thermocouples. Model B is a fully transient, two-dimensional model of the solidification of a pure metal. It will be used to uncover the manner in which thermocouple placement and orientation within the ampoule break the longitudinal axis of symmetry of the thermal field and the crystal-melt interface. Results and conclusions are based on the comparison of the models with experimental results taken during the freezing of pure Sn.

  8. Data augmentation for models based on rejection sampling

    PubMed Central

    Rao, Vinayak; Lin, Lizhen; Dunson, David B.

    2016-01-01

    We present a data augmentation scheme to perform Markov chain Monte Carlo inference for models where data generation involves a rejection sampling algorithm. Our idea is a simple scheme to instantiate the rejected proposals preceding each data point. The resulting joint probability over observed and rejected variables can be much simpler than the marginal distribution over the observed variables, which often involves intractable integrals. We consider three problems: modelling flow-cytometry measurements subject to truncation; the Bayesian analysis of the matrix Langevin distribution on the Stiefel manifold; and Bayesian inference for a nonparametric Gaussian process density model. The latter two are instances of doubly-intractable Markov chain Monte Carlo problems, where evaluating the likelihood is intractable. Our experiments demonstrate superior performance over state-of-the-art sampling algorithms for such problems. PMID:27279660
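    A toy illustration of the augmentation idea is sketched below: when each observation is produced by rejection sampling (here, a normal truncated to x > c), the rejected proposals preceding each accepted draw are retained; the joint over accepted and rejected draws has a simple product form even though the marginal involves an intractable normalizing term. The truncated-normal example is ours, not one of the paper's three applications.

      # Hedged toy sketch: record the rejected proposals behind each accepted draw.
      import numpy as np

      def truncated_normal_with_rejects(c, rng):
          rejects = []
          while True:
              x = rng.normal()
              if x > c:
                  return x, rejects
              rejects.append(x)

      rng = np.random.default_rng(4)
      for x, rejects in (truncated_normal_with_rejects(1.5, rng) for _ in range(5)):
          print(f"accepted {x:+.2f} after {len(rejects)} rejected proposals")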

  9. Tigers on trails: occupancy modeling for cluster sampling.

    PubMed

    Hines, J E; Nichols, J D; Royle, J A; MacKenzie, D I; Gopalaswamy, A M; Kumar, N Samba; Karanth, K U

    2010-07-01

    Occupancy modeling focuses on inference about the distribution of organisms over space, using temporal or spatial replication to allow inference about the detection process. Inference based on spatial replication strictly requires that replicates be selected randomly and with replacement, but the importance of these design requirements is not well understood. This paper focuses on an increasingly popular sampling design based on spatial replicates that are not selected randomly and that are expected to exhibit Markovian dependence. We develop two new occupancy models for data collected under this sort of design, one based on an underlying Markov model for spatial dependence and the other based on a trap response model with Markovian detections. We then simulated data under the model for Markovian spatial dependence and fit the data to standard occupancy models and to the two new models. Bias of occupancy estimates was substantial for the standard models, smaller for the new trap response model, and negligible for the new spatial process model. We also fit these models to data from a large-scale tiger occupancy survey recently conducted in Karnataka State, southwestern India. In addition to providing evidence of a positive relationship between tiger occupancy and habitat, model selection statistics and estimates strongly supported the use of the model with Markovian spatial dependence. This new model provides another tool for the decomposition of the detection process, which is sometimes needed for proper estimation and which may also permit interesting biological inferences. In addition to designs employing spatial replication, we note the likely existence of temporal Markovian dependence in many designs using temporal replication. The models developed here will be useful either directly, or with minor extensions, for these designs as well. We believe that these new models represent important additions to the suite of modeling tools now available for occupancy

  10. β-Galactosidase activity of commercial lactase samples in raw and pasteurized milk at refrigerated temperatures.

    PubMed

    Horner, T W; Dunn, M L; Eggett, D L; Ogden, L V

    2011-07-01

    Many consumers are unable to enjoy the benefits of milk due to lactose intolerance. Lactose-free milk is available, but at roughly twice the cost of regular milk or more, it may be difficult for consumers to afford. The high cost of lactose-free milk is due in part to the added cost of the lactose hydrolysis process. Hydrolysis at refrigerated temperatures, possibly in the bulk tank or package, could increase the flexibility of the process and potentially reduce the cost. A rapid β-galactosidase assay was used to determine the relative activity of commercially available lactase samples at different temperatures. Four enzymes exhibited low-temperature activity and were added to refrigerated raw and pasteurized milk at various concentrations and allowed to react for various lengths of time. The degree of lactose hydrolysis by each of the enzymes as a function of time and enzyme concentration was determined by HPLC. The 2 most active enzymes, as determined by the β-galactosidase assay, hydrolyzed over 98% of the lactose in 24 h at 2°C using the supplier's recommended dosage. The other 2 enzymes hydrolyzed over 95% of the lactose in 24 h at twice the supplier's recommended dosage at 2°C. Results were consistent in all milk types tested. These results show that it is feasible to hydrolyze lactose during refrigerated storage of milk using currently available enzymes.

  11. Ambient temperature modelling with soft computing techniques

    SciTech Connect

    Bertini, Ilaria; Ceravolo, Francesco; Citterio, Marco; Di Pietra, Biagio; Margiotta, Francesca; Pizzuti, Stefano; Puglisi, Giovanni; De Felice, Matteo

    2010-07-15

    This paper proposes a hybrid approach based on soft computing techniques in order to estimate monthly and daily ambient temperature. Indeed, we combine the back-propagation (BP) algorithm and the simple Genetic Algorithm (GA) in order to effectively train artificial neural networks (ANN) in such a way that the BP algorithm initialises a few individuals of the GA's population. Experiments concerned monthly temperature estimation of unknown places and daily temperature estimation for thermal load computation. Results have shown remarkable improvements in accuracy compared to traditional methods. (author)

  12. De novo protein conformational sampling using a probabilistic graphical model.

    PubMed

    Bhattacharya, Debswapna; Cheng, Jianlin

    2015-11-06

    Efficient exploration of protein conformational space remains challenging, especially for large proteins, when assembling discretized structural fragments extracted from a protein structure database. We propose a fragment-free probabilistic graphical model, FUSION, for conformational sampling in continuous space and assess its accuracy using 'blind' protein targets with lengths up to 250 residues from the CASP11 structure prediction exercise. The method reduces sampling bottlenecks, exhibits strong convergence, and demonstrates better performance than the popular fragment assembly method, ROSETTA, on relatively larger proteins with lengths of more than 150 residues in our benchmark set. FUSION is freely available through a web server at http://protein.rnet.missouri.edu/FUSION/.

  13. Field Portable Low Temperature Porous Layer Open Tubular Cryoadsorption Headspace Sampling and Analysis Part II: Applications*

    PubMed Central

    Harries, Megan; Bukovsky-Reyes, Santiago; Bruno, Thomas J.

    2016-01-01

    This paper details the sampling methods used with the field portable porous layer open tubular cryoadsorption (PLOT-cryo) approach, described in Part I of this two-part series, applied to several analytes of interest. We conducted tests with coumarin and 2,4,6-trinitrotoluene (two solutes that were used in initial development of PLOT-cryo technology), naphthalene, aviation turbine kerosene, and diesel fuel, on a variety of matrices and test beds. We demonstrated that these analytes can be easily detected and reliably identified using the portable unit for analyte collection. By leveraging efficiency-boosting temperature control and the high flow rate multiple capillary wafer, very short collection times (as low as 3 s) yielded accurate detection. For diesel fuel spiked on glass beads, we determined a method detection limit below 1 ppm. We observed greater variability among separate samples analyzed with the portable unit than previously documented in work using the laboratory-based PLOT-cryo technology. We identify three likely sources that may help explain the additional variation: the use of a compressed air source to generate suction, matrix geometry, and variability in the local vapor concentration around the sampling probe as solute depletion occurs both locally around the probe and in the test bed as a whole. This field-portable adaptation of the PLOT-cryo approach has numerous and diverse potential applications. PMID:26726934

  14. The genealogy of samples in models with selection.

    PubMed

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.
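    For reference, the neutral baseline against which the ancestral selection graph is compared is Kingman's coalescent: with k lineages remaining, the waiting time to the next coalescence is exponential with rate k(k-1)/2. The sketch below simulates only this neutral genealogy; the branching events introduced by selection are not modelled.

      # Hedged sketch of Kingman's coalescent (the neutral case only).
      import numpy as np

      def kingman_tree_height(n, rng):
          t = 0.0
          for k in range(n, 1, -1):
              t += rng.exponential(2.0 / (k * (k - 1)))   # mean waiting time 2/(k(k-1))
          return t

      rng = np.random.default_rng(5)
      heights = [kingman_tree_height(10, rng) for _ in range(10000)]
      print("mean tree height:", np.mean(heights), "(theory: 2*(1 - 1/10) = 1.8)")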

  15. Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data.

    PubMed

    Liu, Zitao; Hauskrecht, Milos

    2016-02-01

    Building accurate predictive models of clinical multivariate time series is crucial for understanding the patient condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive, reflecting patient-specific temporal behaviors well even when the available patient-specific data are sparse and cover only a short time span. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on the prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both the population-based and patient-specific time series prediction models in terms of prediction accuracy.

  16. Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2016-01-01

    Building accurate predictive models of clinical multivariate time series is crucial for understanding the patient condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive, reflecting patient-specific temporal behaviors well even when the available patient-specific data are sparse and cover only a short time span. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on the prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both the population-based and patient-specific time series prediction models in terms of prediction accuracy. PMID:27525189

  17. The Genealogy of Samples in Models with Selection

    PubMed Central

    Neuhauser, C.; Krone, S. M.

    1997-01-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case. PMID:9071604

  18. NASTRAN thermal analyzer: Theory and application including a guide to modeling engineering problems, volume 2. [sample problem library guide

    NASA Technical Reports Server (NTRS)

    Jackson, C. E., Jr.

    1977-01-01

    A sample problem library containing 20 problems covering most facets of Nastran Thermal Analyzer modeling is presented. Areas discussed include radiative interchange, arbitrary nonlinear loads, transient temperature and steady-state structural plots, temperature-dependent conductivities, simulated multi-layer insulation, and constraint techniques. The use of the major control options and important DMAP alters is demonstrated.

  19. FAR-INFRARED DUST TEMPERATURES AND COLUMN DENSITIES OF THE MALT90 MOLECULAR CLUMP SAMPLE

    SciTech Connect

    Guzmán, Andrés E.; Smith, Howard A.; Sanhueza, Patricio; Contreras, Yanett; Rathborne, Jill M.; Jackson, James M.; Hoq, Sadia

    2015-12-20

    We present dust column densities and dust temperatures for ~3000 young, high-mass molecular clumps from the Millimeter Astronomy Legacy Team 90 GHz survey, derived by fitting single-temperature dust emission models to the far-infrared intensity maps measured between 160 and 870 μm by the Herschel Infrared Galactic Plane Survey (Hi-Gal) and the APEX Telescope Large Area Survey of the Galaxy (ATLASGAL). We discuss the methodology employed in analyzing the data, calculating physical parameters, and estimating their uncertainties. The population-averaged dust temperatures of the clumps are 16.8 ± 0.2 K for clumps that do not exhibit mid-infrared signatures of star formation (quiescent clumps), 18.6 ± 0.2 K for clumps that display mid-infrared signatures of ongoing star formation but have not yet developed an H ii region (protostellar clumps), and 23.7 ± 0.2 and 28.1 ± 0.3 K for clumps associated with H ii and photo-dissociation regions, respectively. These four groups exhibit large overlaps in their temperature distributions, with dispersions ranging between 4 and 6 K. The median of the peak column densities of the protostellar clump population is 0.20 ± 0.02 g cm^-2, about 50% higher than the median peak column density of clumps in the other evolutionary stages. We compare the dust temperatures and column densities measured toward the centers of the clumps with the mean values of each clump. We find that in the quiescent clumps the dust temperature increases toward the outer regions and that these clumps are associated with the shallowest column density profiles. In contrast, molecular clumps in the protostellar or H ii region phase have dust temperature gradients more consistent with internal heating and are associated with steeper column density profiles than the quiescent clumps.

  20. Imputation for semiparametric transformation models with biased-sampling data

    PubMed Central

    Liu, Hao; Qin, Jing; Shen, Yu

    2012-01-01

    Widely recognized in many fields including economics, engineering, epidemiology, health sciences, technology, and wildlife management, length-biased sampling generates biased and right-censored data but often provides the best information available for statistical inference. Unlike traditional right-censored data, length-biased data have unique aspects resulting from their sampling procedures. We exploit these unique aspects and propose a general imputation-based estimation method for analyzing length-biased data under a class of flexible semiparametric transformation models. We present new computational algorithms that can jointly estimate the regression coefficients and the baseline function semiparametrically. The imputation-based method under the transformation model provides an unbiased estimator regardless of whether or not the censoring is independent of the covariates. We establish large-sample properties using empirical process methods. Simulation studies show that under small to moderate sample sizes, the proposed procedure has smaller mean square errors than two existing estimation procedures. Finally, we demonstrate the estimation procedure with a real data example. PMID:22903245

  1. Modeling HIV Prevention Strategies among Two Puerto Rican Samples

    PubMed Central

    Santiago-Rivas, Marimer; Pérez-Jiménez, David

    2012-01-01

    The Information-Motivation-Behavioral Skills model examines factors used to initiate and maintain sexual and reproductive health promotion behaviors. The present study evaluated the associations among these constructs as applied to sexually active heterosexual adults with steady partners, using a structural equation modeling approach. We also analyzed whether the same model structure could be generalized to two samples of participants whose data were collected using two different formats. Two hundred ninety-one participants completed the Information-Motivation-Behavioral Skills Questionnaire (Spanish version), and 756 participants completed an Internet version of the instrument. The proposed model fits the data for both groups, supporting a predictive and positive relationship among all of the latent variables, with Information predicting Motivation, and Motivation in turn predicting Behavioral Skills. The findings support the notion that there are important issues that need to be addressed when promoting HIV prevention. PMID:23243320

  2. OPC model sampling evaluation and weakpoint "in-situ" improvement

    NASA Astrophysics Data System (ADS)

    Fu, Nan; Elshafie, Shady; Ning, Guoxiang; Roling, Stefan

    2016-10-01

    One of the major challenges of optical proximity correction (OPC) models is to maximize the coverage of real design features by the sampling pattern. Normally, OPC model building is based on 1-D and 2-D test patterns with systematically varying pitches in alignment with design rules. However, because IC designs are effectively infinite and the number of model test patterns is limited, features with different optical and geometric properties will generate weak points where OPC simulation cannot precisely predict resist contours on the wafer. In this paper, optical property data of real design features were collected from full chips and classified for comparison with the same kind of data from OPC test patterns. Sample coverage could therefore be visually mapped according to different optical properties. Design features that are beyond OPC capability were distinguished by their optical properties and marked as weak points. New patterns with similar optical properties would then be added to the model-build site list. Further, an alternative and more efficient method was created in this paper to improve the treatment of issue features and remove weak points without rebuilding models. Since certain classes of optical properties generate weak points, an OPC-integrated repair algorithm was developed and implemented to scan the full chip for optical properties, locate those features, and then optimize the OPC treatment or apply precise sizing on site. This is a so-called "in-situ" weak-point improvement flow, which includes issue feature definition, allocation in the full chip, and real-time improvement.

  3. Manipulation of Samples at Extreme Temperatures for Fast in-situ Synchrotron Measurements

    SciTech Connect

    Weber, Richard

    2016-04-22

    An aerodynamic sample levitation system with laser beam heating was integrated with the APS beamlines 6 ID-D, 11 ID-C and 20 BM-B. The new capability enables in-situ measurements of structure and XANES at extreme temperatures (300-3500 °C) and in conditions that completely avoid contact with container surfaces. In addition to maintaining a high degree of sample purity, the use of aerodynamic levitation enables deep supercooling and greatly enhanced glass formation from a wide variety of melts and liquids. Development and integration of controlled extreme sample environments and new measurement techniques is an important aspect of beamline operations and user support. Processing and solidifying liquids is a critical value-adding step in manufacturing semiconductors, optical materials, and metals, and in the operation of many energy conversion devices. Understanding structural evolution is of fundamental importance in condensed materials, geology, and biology. The new capability provides unique possibilities for materials research and helps to develop and maintain a competitive materials manufacturing and energy utilization industry. Test samples were used to demonstrate key features of the capability, including experiments on hot crystalline materials and liquids at temperatures from about 500 to 3500 °C. The use of controlled atmospheres with redox gas mixtures enabled in-situ changes in the oxidation states of cations in melts. Significant innovations in this work were: (i) use of redox gas mixtures to adjust the oxidation state of cations in-situ; (ii) operation with a fully enclosed system suitable for work with nuclear fuel materials; (iii) making high quality, high energy in-situ x-ray diffraction measurements; (iv) making high quality in-situ XANES measurements; (v) publishing high impact results; and (vi) developing independent funding for research on nuclear materials. This SBIR project work led to a commercial instrument product for the niche market of processing and

  4. THE TWO-LEVEL MODEL AT FINITE-TEMPERATURE

    SciTech Connect

    Goodman, A.L.

    1980-07-01

    The finite-temperature HFB cranking equations are solved for the two-level model. The pair gap, moment of inertia and internal energy are determined as functions of spin and temperature. Thermal excitations and rotations collaborate to destroy the pair correlations. Raising the temperature eliminates the backbending effect and improves the HFB approximation.

  5. Two-Temperature Model of Nonequilibrium Electron Relaxation:. a Review

    NASA Astrophysics Data System (ADS)

    Singh, Navinder

    The present paper is a review of the phenomena related to nonequilibrium electron relaxation in bulk and nano-scale metallic samples. The workable Two-Temperature Model (TTM), based on the Boltzmann-Bloch-Peierls kinetic equation, has been applied to study ultra-fast (femtosecond) electronic relaxation in various metallic systems. The advent of new ultra-fast (femtosecond) laser technology and pump-probe spectroscopy has produced a wealth of new results for micro- and nano-scale electronic technology. The aim of this paper is to clarify the TTM, the conditions of its validity and nonvalidity, and its modifications for nano-systems, to sum up the progress, and to point out open problems in this field. We also give a phenomenological integro-differential equation for the kinetics of nondegenerate electrons that goes beyond the TTM.
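    In its simplest local form, the TTM couples an electron temperature T_e and a lattice temperature T_l through an electron-phonon coupling constant G (symbols generic and schematic, not necessarily the review's notation):

      $$ C_e(T_e)\,\frac{\partial T_e}{\partial t} \;=\; \nabla\!\cdot\!\big(k_e \nabla T_e\big) \;-\; G\,(T_e - T_l) \;+\; S(\mathbf{r},t), \qquad C_l\,\frac{\partial T_l}{\partial t} \;=\; G\,(T_e - T_l), $$

    where C_e and C_l are the electronic and lattice heat capacities and S is the absorbed laser source term; lattice heat diffusion is often neglected on femtosecond time scales.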

  6. Three-dimensional temperature fields of the North Patagonian Sea recorded by Magellanic penguins as biological sampling platforms

    NASA Astrophysics Data System (ADS)

    Sala, Juan E.; Pisoni, Juan P.; Quintana, Flavio

    2017-04-01

    Temperature is a primary determinant of biogeographic patterns and ecosystem processes. Standard techniques to study ocean temperature in situ are, however, particularly limited in their temporal and spatial coverage, problems that might be partially mitigated by using marine top predators as biological platforms for oceanographic sampling. We used small archival tags deployed on 33 Magellanic penguins (Spheniscus magellanicus) and obtained 21,070 geo-localized profiles of water temperature during late spring of 2008, 2011, 2012 and 2013, in a region of the North Patagonian Sea with limited in situ oceanographic records. We compared our in situ sea surface temperature (SST) data with those available from satellite remote sensing; described the three-dimensional temperature fields around the area of influence of two important tidal frontal systems; and studied the inter-annual variation in the three-dimensional temperature fields. There was a strong positive relationship between satellite- and animal-derived SST data, although remote sensing overestimated SST by up to 2 °C. Little inter-annual variability in the three-dimensional temperature fields was found, with the exception of 2012 (and, to a lesser extent, 2013), when the SST was significantly higher. In 2013, we found weak stratification in a region where this was unexpected. In addition, during the same year, a warm small-scale vortex is indicated by the animal-derived temperature data. This allowed us to describe and better understand the dynamics of the water masses, which, so far, have been studied mainly by remote sensors and numerical models. Our results highlight again the potential of using marine top predators as biological platforms to collect oceanographic data, which will enhance and accelerate studies of the Southwest Atlantic Ocean. In a changing world, threatened by climate change, it is urgent to fill information gaps on the coupled ocean-atmosphere system

  7. Modelling of tandem cell temperature coefficients

    SciTech Connect

    Friedman, D.J.

    1996-05-01

    This paper discusses the temperature dependence of the basic solar-cell operating parameters for a GaInP/GaAs series-connected two-terminal tandem cell. The effects of series resistance and of different incident solar spectra are also discussed.

  8. Optimizing headspace temperature and time sampling for identification of volatile compounds in ground roasted Arabica coffee.

    PubMed

    Sanz, C; Ansorena, D; Bello, J; Cid, C

    2001-03-01

    Equilibration time and temperature were the factors studied to choose the best conditions for analyzing volatiles in roasted ground Arabica coffee by a static headspace sampling extraction method. Three temperatures of equilibration were studied: 60, 80, and 90 degrees C. A larger quantity of volatile compounds was extracted at 90 degrees C than at 80 or 60 degrees C, although the same qualitative profile was found for each. The extraction of the volatile compounds was studied at seven different equilibration times: 30, 45, 60, 80, 100, 120, and 150 min. The best time of equilibration for headspace analysis of roasted ground Arabica coffee should be selected depending on the chemical class or compound studied. One hundred and twenty-two volatile compounds were identified, including 26 furans, 20 ketones, 20 pyrazines, 9 alcohols, 9 aldehydes, 8 esters, 6 pyrroles, 6 thiophenes, 4 sulfur compounds, 3 benzenic compounds, 2 phenolic compounds, 2 pyridines, 2 thiazoles, 1 oxazole, 1 lactone, 1 alkane, 1 alkene, and 1 acid.

  9. Physical Models of Seismic-Attenuation Measurements on Lab Samples

    NASA Astrophysics Data System (ADS)

    Coulman, T. J.; Morozov, I. B.

    2012-12-01

    Seismic attenuation in Earth materials is often measured in the lab by using low-frequency forced oscillations or static creep experiments. The usual assumption in interpreting and even designing such experiments is the "viscoelastic" behavior of materials, i.e., their description by the notions of a Q-factor and material memory. However, this is not the only theoretical approach to internal friction, and it also involves several contradictions with conventional mechanics. From the viewpoint of mechanics, the frequency-dependent Q becomes a particularly enigmatic property attributed to the material. At the same time, the behavior of rock samples in seismic-attenuation experiments can be explained by a strictly mechanical approach. We use this approach to simulate such experiments analytically and numerically for a system of two cylinders consisting of a rock sample and elastic standard undergoing forced oscillations, and also for a single rock sample cylinder undergoing static creep. The system is subject to oscillatory compression or torsion, and the phase-lag between the sample and standard is measured. Unlike in the viscoelastic approach, a full Lagrangian formulation is considered, in which material anelasticity is described by parameters of "solid viscosity" and a dissipation function from which the constitutive equation is derived. Results show that this physical model of anelasticity predicts creep results very close to those obtained by using empirical Burger's bodies or Andrade laws. With nonlinear (non-Newtonian) solid viscosity, the system shows an almost instantaneous initial deformation followed by slow creep towards an equilibrium. For Aheim Dunite, the "rheologic" parameters of nonlinear viscosity are υ=0.79 and η=2.4 GPa-s. Phase-lag results for nonlinear viscosity show Q's slowly decreasing with frequency. To explain a Q increasing with frequency (which is often observed in the lab and in the field), one has to consider nonlinear viscosity with

  10. New high temperature plasmas and sample introduction systems for analytical atomic emission and mass spectrometry. Progress report, January 1, 1990--December 31, 1992

    SciTech Connect

    Montaser, A.

    1992-09-01

    New high temperature plasmas and new sample introduction systems are explored for rapid elemental and isotopic analysis of gases, solutions, and solids using mass spectrometry and atomic emission spectrometry. Emphasis was placed on atmospheric pressure He inductively coupled plasmas (ICP) suitable for atomization, excitation, and ionization of elements; simulation and computer modeling of plasma sources with potential for use in spectrochemical analysis; spectroscopic imaging and diagnostic studies of high temperature plasmas, particularly He ICP discharges; and development of new, low-cost sample introduction systems, and examination of techniques for probing the aerosols over a wide range. Refs., 14 figs. (DLC)

  11. Sampling the NCAR TIEGCM, TIME-GCM, and GSWM models for CEDAR and TIMED related studies

    NASA Astrophysics Data System (ADS)

    Oberheide, J.; Hagan, M. E.; Roble, R. G.; Lu, G.

    2003-04-01

    The instruments on the TIMED satellite and a complement of ground based CEDAR instruments will provide invaluable diagnostics of mesosphere, lower thermosphere, and E-region ionosphere (MLTI, ca. 60-180 km) forcings, dynamics, and energetics. The interpretation of these diagnostics and elucidation of the impact of the associated processes on the MLTI requires complementary modeling initiatives. We make samples of the NCAR/HAO TIME-GCM, TIEGCM, and GSWM model outputs available to the community via the web. The model results are sampled in a way to provide winds, temperatures, and trace constituents that would be measured by the TIMED instruments if the satellite flew through the model atmosphere. We also provide an analogous product for the CEDAR ground-based component of TIMED.

  12. High temperature furnace modeling and performance verifications

    NASA Technical Reports Server (NTRS)

    Smith, James E., Jr.

    1988-01-01

    Analytical, numerical and experimental studies were performed on two classes of high temperature materials processing furnaces. The research concentrates on a commercially available high temperature furnace using zirconia as the heating element and an arc furnace based on a ST International tube welder. The zirconia furnace was delivered and work is progressing on schedule. The work on the arc furnace was initially stalled due to the unavailability of the NASA prototype, which is actively being tested aboard the KC-135 experimental aircraft. A proposal was written and funded to purchase an additional arc welder to alleviate this problem. The ST International weld head and power supply were received and testing will begin in early November. The first 6 months of the grant are covered.

  13. Temperature-variable high-frequency dynamic modeling of PIN diode

    NASA Astrophysics Data System (ADS)

    Shangbin, Ye; Jiajia, Zhang; Yicheng, Zhang; Yongtao, Yao

    2016-04-01

    The PIN diode model for high frequency dynamic transient characteristic simulation is important in conducted EMI analysis. The model should take junction temperature into consideration since equipment usually works at a wide range of temperature. In this paper, a temperature-variable high frequency dynamic model for the PIN diode is built, which is based on the Laplace-transform analytical model at constant temperature. The relationship between model parameters and temperature is expressed as temperature functions by analyzing the physical principle of these parameters. A fast recovery power diode MUR1560 is chosen as the test sample and its dynamic performance is tested under inductive load by a temperature chamber experiment, which is used for model parameter extraction and model verification. Results show that the model proposed in this paper is accurate for reverse recovery simulation with relatively small errors at the temperature range from 25 to 120 °C. Project supported by the National High Technology and Development Program of China (No. 2011AA11A265).

  14. Sample size matters: investigating the effect of sample size on a logistic regression susceptibility model for debris flows

    NASA Astrophysics Data System (ADS)

    Heckmann, T.; Gegg, K.; Gegg, A.; Becht, M.

    2014-02-01

    Predictive spatial modelling is an important task in natural hazard assessment and regionalisation of geomorphic processes or landforms. Logistic regression is a multivariate statistical approach frequently used in predictive modelling; it can be conducted stepwise in order to select from a number of candidate independent variables those that lead to the best model. In our case study on a debris flow susceptibility model, we investigate the sensitivity of model selection and quality to different sample sizes in light of the following problem: on the one hand, a sample has to be large enough to cover the variability of geofactors within the study area, and to yield stable and reproducible results; on the other hand, the sample must not be too large, because a large sample is likely to violate the assumption of independent observations due to spatial autocorrelation. Using stepwise model selection with 1000 random samples for a number of sample sizes between n = 50 and n = 5000, we investigate the inclusion and exclusion of geofactors and the diversity of the resulting models as a function of sample size; the multiplicity of different models is assessed using numerical indices borrowed from information theory and biodiversity research. Model diversity decreases with increasing sample size and reaches either a local minimum or a plateau; even larger sample sizes do not further reduce it, and they approach the upper limit of sample size given, in this study, by the autocorrelation range of the spatial data sets. In this way, an optimised sample size can be derived from an exploratory analysis. Model uncertainty due to sampling and model selection, and its predictive ability, are explored statistically and spatially through the example of 100 models estimated in one study area and validated in a neighbouring area: depending on the study area and on sample size, the predicted probabilities for debris flow release differed, on average, by 7 to 23 percentage points. In
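    The experimental design can be illustrated with a small sketch: draw random samples of different sizes from a synthetic "study area", run stepwise (forward, AIC-based) logistic regression on each, and count how many distinct models are selected. This is a hedged approximation of the study's setup only; the data, predictors, number of repeat samples, and selection criterion are illustrative, and the authors' spatial geofactors and autocorrelation analysis are not reproduced.

      # Hedged sketch: model diversity under stepwise logistic regression vs. sample size.
      import numpy as np

      def fit_logit(X, y, iters=30):
          """Logistic regression with intercept via Newton/IRLS; returns the log-likelihood."""
          Xb = np.hstack([np.ones((len(y), 1)), X])
          beta = np.zeros(Xb.shape[1])
          for _ in range(iters):
              eta = np.clip(Xb @ beta, -30, 30)
              p = 1.0 / (1.0 + np.exp(-eta))
              w = p * (1.0 - p) + 1e-6
              hess = (Xb.T * w) @ Xb + 1e-6 * np.eye(Xb.shape[1])   # small ridge for stability
              beta = beta + np.linalg.solve(hess, Xb.T @ (y - p))
          return float(np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12)))

      def forward_aic(X, y):
          """Greedy forward selection of predictors minimizing AIC = 2k - 2 loglik."""
          chosen, remaining = [], list(range(X.shape[1]))
          best = 2 * 1 - 2 * fit_logit(X[:, []], y)                 # intercept-only model
          while remaining:
              aic, j = min((2 * (len(chosen) + 2) - 2 * fit_logit(X[:, chosen + [j]], y), j)
                           for j in remaining)
              if aic >= best:
                  break
              best, chosen = aic, chosen + [j]
              remaining.remove(j)
          return tuple(sorted(chosen))

      rng = np.random.default_rng(7)
      N, p = 20000, 6                                # toy "geofactors": 2 informative, 4 noise
      Xall = rng.normal(size=(N, p))
      yall = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * Xall[:, 0] - 1.2 * Xall[:, 1]))))
      for n in (50, 200, 1000):
          models = set()
          for _ in range(30):                        # 30 random samples per sample size
              idx = rng.choice(N, size=n, replace=False)
              models.add(forward_aic(Xall[idx], yall[idx]))
          print(f"n = {n:4d}: {len(models)} distinct selected models")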

  15. Sample size matters: investigating the effect of sample size on a logistic regression debris flow susceptibility model

    NASA Astrophysics Data System (ADS)

    Heckmann, T.; Gegg, K.; Gegg, A.; Becht, M.

    2013-06-01

    Predictive spatial modelling is an important task in natural hazard assessment and regionalisation of geomorphic processes or landforms. Logistic regression is a multivariate statistical approach frequently used in predictive modelling; it can be conducted stepwise in order to select from a number of candidate independent variables those that lead to the best model. In our case study on a debris flow susceptibility model, we investigate the sensitivity of model selection and quality to different sample sizes in light of the following problem: on the one hand, a sample has to be large enough to cover the variability of geofactors within the study area, and to yield stable results; on the other hand, the sample must not be too large, because a large sample is likely to violate the assumption of independent observations due to spatial autocorrelation. Using stepwise model selection with 1000 random samples for a number of sample sizes between n = 50 and n = 5000, we investigate the inclusion and exclusion of geofactors and the diversity of the resulting models as a function of sample size; the multiplicity of different models is assessed using numerical indices borrowed from information theory and biodiversity research. Model diversity decreases with increasing sample size and reaches either a local minimum or a plateau; even larger sample sizes do not further reduce it, and approach the upper limit of sample size given, in this study, by the autocorrelation range of the spatial datasets. In this way, an optimised sample size can be derived from an exploratory analysis. Model uncertainty due to sampling and model selection, and its predictive ability, are explored statistically and spatially through the example of 100 models estimated in one study area and validated in a neighbouring area: depending on the study area and on sample size, the predicted probabilities for debris flow release differed, on average, by 7 to 23 percentage points. In view of these results, we

  16. Modeling Climate Change Effects on Stream Temperatures in Regulated Rivers

    NASA Astrophysics Data System (ADS)

    Null, S. E.; Akhbari, M.; Ligare, S. T.; Rheinheimer, D. E.; Peek, R.; Yarnell, S. M.; Viers, J. H.

    2013-12-01

    We provide a method for examining mesoscale stream temperature objectives downstream of dams with anticipated climate change using an integrated multi-model approach. Changing hydroclimatic conditions will likely impact stream temperatures within reservoirs and below dams, and affect downstream ecology. We model hydrology and water temperature using a series of linked models that includes a hydrology model to predict natural unimpaired flows in upstream reaches, a reservoir temperature simulation model, an operations model to simulate reservoir releases, and a stream temperature simulation model to simulate downstream conditions. All models are 1-dimensional and operate on either a weekly or daily timestep. First, we model reservoir thermal dynamics and release operations of hypothetical reservoirs of different sizes, elevations, and latitudes with climate-forced inflow hydrologies to examine the potential to manage stream temperatures for coldwater habitat. Results are presented as stream temperature change from the historical time period and indicate that reservoir releases are cooler than upstream conditions, although the absolute temperatures of reaches below dams warm with climate change. We also apply our method to a case study in California's Yuba River watershed to evaluate water regulation and hydropower operation effects on stream temperatures with climate change. Catchments of the upper Yuba River are highly-engineered, with multiple, interconnected infrastructure to provide hydropower, water supply, flood control, environmental flows, and recreation. Results illustrate climate-driven versus operations-driven changes to stream temperatures. This work highlights the need for methods to consider reservoir regulation effects on stream temperatures with climate change, particularly for hydropower relicensing (which currently ignores climate change) such that impacts to other beneficial uses like coldwater habitat and instream ecosystems can be

  17. An Accurate Temperature Correction Model for Thermocouple Hygrometers 1

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241

  18. An accurate temperature correction model for thermocouple hygrometers.

    PubMed

    Savage, M J; Cass, A; de Jager, J M

    1982-02-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38 degrees C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25 degrees C, if the calibration slopes are corrected for temperature.

  19. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    SciTech Connect

    Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
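    As a point of reference for the EM approach discussed above, the sketch below implements a bare-bones EM estimation of BMA weights and variances for a Gaussian ensemble, assuming bias-corrected forecasts and one variance per member (a simplification of the Raftery et al. formulation; the DREAM/MCMC alternative is not shown). The toy ensemble and its error levels are invented.

```python
# Minimal EM sketch for BMA weights/variances of a Gaussian forecast ensemble.
import numpy as np
from scipy.stats import norm

def bma_em(F, y, n_iter=200, tol=1e-8):
    """F: (T, K) ensemble forecasts, y: (T,) verifying observations."""
    T, K = F.shape
    w = np.full(K, 1.0 / K)
    sigma2 = np.full(K, np.var(y - F.mean(axis=1)))
    for _ in range(n_iter):
        # E-step: responsibility of member k for observation t
        dens = norm.pdf(y[:, None], loc=F, scale=np.sqrt(sigma2))   # (T, K)
        z = w * dens
        z /= z.sum(axis=1, keepdims=True)
        # M-step: update weights and per-member variances
        w_new = z.mean(axis=0)
        sigma2_new = (z * (y[:, None] - F) ** 2).sum(axis=0) / z.sum(axis=0)
        converged = np.max(np.abs(w_new - w)) < tol
        w, sigma2 = w_new, sigma2_new
        if converged:
            break
    return w, sigma2

# Toy example: 3-member temperature ensemble, member 0 most skilful.
rng = np.random.default_rng(1)
truth = 15 + 5 * np.sin(np.linspace(0, 6 * np.pi, 300))
F = np.column_stack([truth + rng.normal(0, s, truth.size) for s in (0.5, 1.5, 3.0)])
w, s2 = bma_em(F, truth)
print("weights:", np.round(w, 3), " variances:", np.round(s2, 2))
```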

  20. Motif Yggdrasil: sampling sequence motifs from a tree mixture model.

    PubMed

    Andersson, Samuel A; Lagergren, Jens

    2007-06-01

    In phylogenetic foot-printing, putative regulatory elements are found in upstream regions of orthologous genes by searching for common motifs. Motifs in different upstream sequences are subject to mutations along the edges of the corresponding phylogenetic tree; consequently, taking advantage of the tree in the motif search is an appealing idea. We describe the Motif Yggdrasil sampler, the first Gibbs sampler based on a general tree that uses unaligned sequences. Previous tree-based Gibbs samplers have assumed a star-shaped tree or partially aligned upstream regions. We give a probabilistic model (MY model) describing upstream sequences with regulatory elements and build a Gibbs sampler with respect to this model. The model allows toggling, i.e., the restriction of a position to a subset of nucleotides, but requires neither aligned sequences nor edge lengths, which may be difficult to come by. We apply the collapsing technique to eliminate the need to sample nuisance parameters, and give a derivation of the predictive update formula. We show that the MY model improves the modeling of difficult motif instances and that the use of the tree achieves a substantial increase in nucleotide-level correlation coefficient for both synthetic data and 37 bacterial lexA genes. We investigate the sensitivity to errors in the tree and show that, even with random trees, the MY sampler still performs similarly to the original version.

  1. Volcanic Aerosol Evolution: Model vs. In Situ Sampling

    NASA Astrophysics Data System (ADS)

    Pfeffer, M. A.; Rietmeijer, F. J.; Brearley, A. J.; Fischer, T. P.

    2002-12-01

    Volcanoes are the most significant non-anthropogenic source of tropospheric aerosols. Aerosol samples were collected at different distances from a 92°C fumarolic source at Poás Volcano. Aerosols were captured on TEM grids coated with a thin C-film using a specially designed collector. In the sampling, grids were exposed to the plume for 30-second intervals, then sealed and frozen to prevent reaction before ATEM analysis to determine aerosol size and chemistry. Gas composition was established using gas chromatography, wet chemistry techniques, AAS and ion chromatography on samples collected directly from a fumarolic vent. SO2 flux was measured remotely by COSPEC. A Gaussian plume dispersion model was used to model concentrations of the gases at different distances down-wind. Calculated mixing ratios of air and the initial gas species were used as input to the thermo-chemical model GASWORKS (Symonds and Reed, Am. Jour. Sci., 1993). Modeled products were compared with measured aerosol compositions. Aerosols predicted to precipitate out of the plume one meter above the fumarole are [CaSO4, Fe2.3SO4, H2SO4, MgF2, Na2SO4, silica, water]. Where the plume leaves the confines of the crater, 380 meters distant, the predicted aerosols are the same, except that FeF3 replaces Fe2.3SO4. Collected aerosols show considerable compositional differences between the sampling locations and are more complex than those predicted. Aerosols from the fumarole consist of [Fe +/- Si,S,Cl], [S +/- O] and [Si +/- O]. Aerosols collected on the crater rim consist of the same plus [O,Na,Mg,Ca], [O,Si,Cl +/- Fe], [Fe,O,F] and [S,O +/- Mg,Ca]. The comparison between results obtained by the equilibrium gas model and the actual aerosol compositions shows that an assumption of chemical and thermal equilibrium evolution is invalid. The complex aerosols collected contrast with the simple formulae predicted. These findings show that complex, non-equilibrium chemical reactions take place immediately upon volcanic
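    For readers unfamiliar with the dispersion step, the sketch below evaluates a textbook Gaussian plume formula with ground reflection to estimate relative down-wind concentrations; the source strength, wind speed, receptor height and the linear sigma_y/sigma_z growth coefficients are placeholder values, not those used for Poás.

```python
# Hedged sketch of a Gaussian plume calculation of the kind used to estimate
# down-wind dilution of fumarole gases (illustrative dispersion coefficients).
import numpy as np

def gaussian_plume(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Ground-reflected Gaussian plume concentration (kg m^-3).

    Q : source emission rate (kg/s), u : wind speed (m/s),
    x, y, z : receptor coordinates (m) down-wind, cross-wind, above ground,
    H : effective source height (m); a, b parameterise sigma_y, sigma_z ~ x.
    """
    sig_y = a * x            # crude linear growth of plume width with distance
    sig_z = b * x
    cross = np.exp(-y**2 / (2 * sig_y**2))
    vert = (np.exp(-(z - H)**2 / (2 * sig_z**2)) +
            np.exp(-(z + H)**2 / (2 * sig_z**2)))     # image source = ground reflection
    return Q / (2 * np.pi * u * sig_y * sig_z) * cross * vert

# Relative concentration on the plume centreline height at increasing distance
for x in (1.0, 50.0, 380.0):                          # 380 m ~ crater rim in the study
    c = gaussian_plume(Q=1.0, u=3.0, x=x, y=0.0, z=2.0, H=2.0)
    print(f"x = {x:6.1f} m   relative concentration = {c:.3e}")
```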

  2. Temperature-accelerated molecular dynamics gives insights into globular conformations sampled in the free state of the AC catalytic domain.

    PubMed

    Selwa, Edithe; Huynh, Tru; Ciccotti, Giovanni; Maragliano, Luca; Malliavin, Thérèse E

    2014-10-01

    The catalytic domain of the adenyl cyclase (AC) toxin from Bordetella pertussis is activated by interaction with calmodulin (CaM), resulting in cAMP overproduction in the infected cell. In the X-ray crystallographic structure of the complex between AC and the C terminal lobe of CaM, the toxin displays a markedly elongated shape. As for the structure of the isolated protein, experimental results support the hypothesis that more globular conformations are sampled, but information at atomic resolution is still lacking. Here, we use temperature-accelerated molecular dynamics (TAMD) simulations to generate putative all-atom models of globular conformations sampled by CaM-free AC. As collective variables, we use centers of mass coordinates of groups of residues selected from the analysis of standard molecular dynamics (MD) simulations. Results show that TAMD allows extended conformational sampling and generates AC conformations that are more globular than in the complexed state. These structures are then refined via energy minimization and further unrestrained MD simulations to optimize inter-domain packing interactions, thus resulting in the identification of a set of hydrogen bonds present in the globular conformations.

  3. De novo protein conformational sampling using a probabilistic graphical model

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Debswapna; Cheng, Jianlin

    2015-11-01

    Efficient exploration of protein conformational space remains challenging, especially for large proteins, when assembling discretized structural fragments extracted from a protein structure database. We propose a fragment-free probabilistic graphical model, FUSION, for conformational sampling in continuous space and assess its accuracy using ‘blind’ protein targets with a length up to 250 residues from the CASP11 structure prediction exercise. The method reduces sampling bottlenecks, exhibits strong convergence, and demonstrates better performance than the popular fragment assembly method, ROSETTA, on relatively larger proteins with a length of more than 150 residues in our benchmark set. FUSION is freely available through a web server at http://protein.rnet.missouri.edu/FUSION/.

  4. Monitoring, Modeling, and Diagnosis of Alkali-Silica Reaction in Small Concrete Samples

    SciTech Connect

    Agarwal, Vivek; Cai, Guowei; Gribok, Andrei V.; Mahadevan, Sankaran

    2015-09-01

    Assessment and management of aging concrete structures in nuclear power plants require a more systematic approach than simple reliance on existing code margins of safety. Structural health monitoring of concrete structures aims to understand the current health condition of a structure based on heterogeneous measurements to produce high-confidence actionable information regarding structural integrity that supports operational and maintenance decisions. This report describes alkali-silica reaction (ASR) degradation mechanisms and the factors influencing ASR. A fully coupled thermo-hydro-mechanical-chemical model developed by Saouma and Perotti, which takes into consideration the effects of stress on the reaction kinetics and anisotropic volumetric expansion, is presented in this report. This model is implemented in the GRIZZLY code based on the Multiphysics Object Oriented Simulation Environment. The implemented model is used to randomly initiate ASR in 2D and 3D lattices to study the percolation aspects of concrete. The percolation aspects help determine the transport properties of the material and therefore the durability and service life of concrete. This report summarizes the effort to develop small-size concrete samples with embedded glass to mimic ASR. The concrete samples were treated in water and sodium hydroxide solution at elevated temperature to study how ingress of sodium ions and hydroxide ions at elevated temperature impacts concrete samples embedded with glass. A thermal camera was used to monitor the changes in the concrete samples and the results are summarized.

  5. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  6. A Simple Dewar/Cryostat for Thermally Equilibrating Samples at Known Temperatures for Accurate Cryogenic Luminescence Measurements.

    PubMed

    Weaver, Phoebe G; Jagow, Devin M; Portune, Cameron M; Kenney, John W

    2016-07-19

    The design and operation of a simple liquid nitrogen Dewar/cryostat apparatus based upon a small fused silica optical Dewar, a thermocouple assembly, and a CCD spectrograph are described. The experiments for which this Dewar/cryostat is designed require fast sample loading, fast sample freezing, fast alignment of the sample, accurate and stable sample temperatures, and small size and portability of the Dewar/cryostat cryogenic unit. When coupled with the fast data acquisition rates of the CCD spectrograph, this Dewar/cryostat is capable of supporting cryogenic luminescence spectroscopic measurements on luminescent samples at a series of known, stable temperatures in the 77-300 K range. A temperature-dependent study of the oxygen quenching of luminescence in a rhodium(III) transition metal complex is presented as an example of the type of investigation possible with this Dewar/cryostat. In the context of this apparatus, a stable temperature for cryogenic spectroscopy means a luminescent sample that is thermally equilibrated with either liquid nitrogen or gaseous nitrogen at a known measurable temperature that does not vary (ΔT < 0.1 K) during the short time scale (~1-10 sec) of the spectroscopic measurement by the CCD. The Dewar/cryostat works by taking advantage of the positive thermal gradient dT/dh that develops above the liquid nitrogen level in the Dewar, where h is the height of the sample above the liquid nitrogen level. The slow evaporation of the liquid nitrogen results in a slow increase in h over several hours and a consequent slow increase in the sample temperature T over this time period. A quickly acquired luminescence spectrum effectively catches the sample at a constant, thermally equilibrated temperature.

  7. Modeling and Simulation of a Tethered Harpoon for Comet Sampling

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco B.

    2014-01-01

    This paper describes the development of a dynamic model and simulation results of a tethered harpoon for comet sampling. This model and simulation were developed in order to carry out an initial sensitivity analysis for key design parameters of the tethered system. The harpoon would contain a canister which would collect a sample of soil from a cometary surface. Both a spring-ejected canister and a tethered canister are considered. To arrive in close proximity to the spacecraft at the end of its trajectory so it could be captured, the free-flying canister would need to be ejected at the right time and with the proper impulse, while the tethered canister must be recovered by properly retrieving the tether at a rate that would avoid an excessive amplitude of oscillatory behavior during the retrieval. The paper describes the model of the tether dynamics and harpoon penetration physics. The simulations indicate that, without the tether, the canister would still reach the spacecraft for collection, that the tether retrieval of the canister would be achievable with reasonable fuel consumption, and that the canister amplitude upon retrieval would be insensitive to variations in vertical velocity dispersion.

  8. A thermocouple-based remote temperature controller of an electrically floated sample to study plasma CVD growth of carbon nanotube

    NASA Astrophysics Data System (ADS)

    Miura, Takuya; Xie, Wei; Yanase, Takashi; Nagahama, Taro; Shimada, Toshihiro

    2015-09-01

    Plasma chemical vapor deposition (CVD) is now gathering attention from a novel viewpoint, because it is easy to combine plasma processes and electrochemistry by applying a bias voltage to the sample. In order to explore electrochemistry during plasma CVD, the temperature of the sample must be controlled precisely. In traditional equipment, the sample temperature is measured by a radiation thermometer. Since the emissivity of the sample surface changes in the course of CVD growth, it is difficult to measure the exact temperature using the radiation thermometer. In this work, we developed new equipment to control the temperature of electrically floated samples by thermocouple with Wi-Fi transmission. The growth of CNTs was investigated using our plasma CVD equipment. We examined the accuracy and stability of the temperature controlled by the thermocouple while monitoring the radiation thermometer. We noticed that the thermocouple readings were stable, whereas the readings of the radiation thermometer changed significantly (by 20 °C) during plasma CVD. This result clearly shows that the sample temperature should be measured with a direct connection. In the CVD experiments, different carbon structures, including CNTs, were obtained by changing the bias voltage.

  9. HIGH TEMPERATURE HIGH PRESSURE THERMODYNAMIC MEASUREMENTS FOR COAL MODEL COMPOUNDS

    SciTech Connect

    Vinayak N. Kabadi

    1999-02-20

    It is well known that fluid phase equilibria can be represented by a number of γ-models, but unfortunately most of them do not function well at high temperature. In this calculation, we mainly investigate the performance of the UNIQUAC and NRTL models at high temperature, using temperature-dependent parameters rather than the original formulas. The other feature of this calculation is that we try to relate the excess Gibbs energy G^E and the enthalpy of mixing H^E simultaneously. In other words, we use the high-temperature, high-pressure G^E and H^E data to regress the temperature-dependent parameters to find out which model and what kind of temperature-dependent parameters should be used.
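    A minimal illustration of the relationship exploited above, assuming a binary NRTL model with tau_ij = a_ij + b_ij/T and obtaining H^E from G^E via the Gibbs-Helmholtz relation H^E = d(G^E/T)/d(1/T) by numerical differentiation; the parameter values are hypothetical and not fitted to any data.

```python
# Sketch: NRTL excess Gibbs energy with temperature-dependent interaction
# parameters, and the excess enthalpy from the Gibbs-Helmholtz relation.
import numpy as np

R = 8.314  # J mol^-1 K^-1

def nrtl_ge(x1, T, a12, b12, a21, b21, alpha=0.3):
    """Binary NRTL excess Gibbs energy (J/mol) with tau_ij = a_ij + b_ij / T."""
    x2 = 1.0 - x1
    tau12, tau21 = a12 + b12 / T, a21 + b21 / T
    G12, G21 = np.exp(-alpha * tau12), np.exp(-alpha * tau21)
    gE_RT = x1 * x2 * (tau21 * G21 / (x1 + x2 * G21) +
                       tau12 * G12 / (x2 + x1 * G12))
    return gE_RT * R * T

def nrtl_he(x1, T, *params, dT=0.01):
    """Excess enthalpy by central-difference Gibbs-Helmholtz differentiation."""
    g_over_T = lambda temp: nrtl_ge(x1, temp, *params) / temp
    d_invT = 1.0 / (T + dT) - 1.0 / (T - dT)
    return (g_over_T(T + dT) - g_over_T(T - dT)) / d_invT

params = (0.3, 120.0, -0.2, 350.0)   # hypothetical a12, b12, a21, b21
for T in (298.15, 373.15, 473.15):
    x = 0.5
    print(f"T = {T:6.1f} K  G^E = {nrtl_ge(x, T, *params):7.1f} J/mol"
          f"   H^E = {nrtl_he(x, T, *params):7.1f} J/mol")
```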

  10. Temperature Dependent Constitutive Modeling for Magnesium Alloy Sheet

    SciTech Connect

    Lee, Jong K.; Lee, June K.; Kim, Hyung S.; Kim, Heon Y.

    2010-06-15

    Magnesium alloys have been increasingly used in the automotive and electronic industries because of their excellent strength-to-weight ratio and EMI shielding properties. However, magnesium alloys have low formability at room temperature due to their unique mechanical behavior (twinning and untwinning), prompting forming at elevated temperature. In this study, a temperature-dependent constitutive model for magnesium alloy (AZ31B) sheet is developed. A hardening law based on a nonlinear kinematic hardening model is used to properly account for the Bauschinger effect. Material parameters are determined from a series of uni-axial cyclic experiments (T-C-T or C-T-C) with temperatures ranging from 150 to 250 deg. C. The influence of temperature on the constitutive equation is introduced by assuming the material parameters to be functions of temperature. The fitting process of the assumed model to measured data is presented and the results are compared.

  11. Simulation of soil temperature dynamics with models using different concepts.

    PubMed

    Sándor, Renáta; Fodor, Nándor

    2012-01-01

    This paper presents two soil temperature models with empirical and mechanistic concepts. At the test site (a calcaric arenosol), meteorological parameters as well as soil moisture content and temperature at 5 different depths were measured in an experiment with 8 parcels covering the combinations of fertilized/nonfertilized and irrigated/nonirrigated treatments in two replicates. Leaf area dynamics was also monitored. Soil temperature was calculated with the original and a modified version of CERES as well as with the HYDRUS-1D model. The simulated soil temperature values were compared to the observed ones. The vegetation reduced both the average soil temperature and its diurnal amplitude; therefore, considering the leaf area dynamics is important in modeling. The models underestimated the actual soil temperature and overestimated the temperature oscillation within the winter period. All models failed to account for the insulating effect of snow cover. The modified CERES provided distinctly more accurate soil temperature values than the original one. Though HYDRUS-1D provided more accurate soil temperature estimations, its superiority to CERES is not unequivocal as it requires more detailed inputs.
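    The mechanistic ingredient shared by such models is vertical heat conduction in the soil profile; the sketch below advances a 1-D explicit finite-difference solution of the heat equation under an assumed sinusoidal surface forcing. Diffusivity, grid spacing and boundary conditions are illustrative only and do not represent CERES or HYDRUS-1D.

```python
# Minimal 1-D soil heat conduction sketch (explicit finite differences).
import numpy as np

alpha = 5e-7          # thermal diffusivity of soil (m^2/s), assumed
dz, dt = 0.05, 600.0  # grid spacing (m) and time step (s)
assert alpha * dt / dz**2 < 0.5          # explicit-scheme stability criterion

depths = np.arange(0.0, 2.0 + dz, dz)
T = np.full(depths.size, 12.0)           # initial profile (deg C)

def surface_temperature(t_seconds):
    """Assumed sinusoidal daily forcing at the soil surface."""
    day = 86400.0
    return 12.0 + 8.0 * np.sin(2 * np.pi * t_seconds / day)

n_steps = int(5 * 86400 / dt)            # simulate five days
for step in range(n_steps):
    T[0] = surface_temperature(step * dt)           # upper boundary condition
    T[-1] = 12.0                                    # constant temperature at 2 m depth
    T[1:-1] += alpha * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

for d in (0.05, 0.25, 0.5, 1.0):
    print(f"T at {d:4.2f} m after 5 days: {T[int(d / dz)]:.2f} deg C")
```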

  12. Multi-Relaxation Temperature-Dependent Dielectric Model of the Arctic Soil at Positive Temperatures

    NASA Astrophysics Data System (ADS)

    Savin, I. V.; Mironov, V. L.

    2014-11-01

    Frequency spectra of the dielectric permittivity of the Arctic soil of Alaska are investigated with allowance for the dipole and ionic relaxation of molecules of the soil moisture at frequencies from 40 MHz to 16 GHz and temperatures from -5 to +25°C. A generalized temperature-dependent multi-relaxation refraction dielectric model of the humid Arctic soil is suggested.

  13. Sparse model selection in the highly under-sampled regime

    NASA Astrophysics Data System (ADS)

    Bulso, Nicola; Marsili, Matteo; Roudi, Yasser

    2016-09-01

    We propose a method for recovering the structure of a sparse undirected graphical model when very few samples are available. The method decides on the presence or absence of bonds between pairs of variables by considering one pair at a time and using a closed-form formula, analytically derived by calculating the posterior probability for every possible model explaining a two-body system using Jeffreys prior. The approach does not rely on the optimization of any cost function and consequently is much faster than existing algorithms. Despite this time and computational advantage, numerical results show that for several sparse topologies the algorithm is comparable to the best existing algorithms, and is more accurate in the presence of hidden variables. We apply this approach to the analysis of US stock market data and to neural data, in order to show its efficiency in recovering robust statistical dependencies in real data with non-stationary correlations in time and/or space.

  14. Hierarchical Bayesian Modeling, Estimation, and Sampling for Multigroup Shape Analysis

    PubMed Central

    Yu, Yen-Yun; Fletcher, P. Thomas; Awate, Suyash P.

    2016-01-01

    This paper proposes a novel method for the analysis of anatomical shapes present in biomedical image data. Motivated by the natural organization of population data into multiple groups, this paper presents a novel hierarchical generative statistical model on shapes. The proposed method represents shapes using pointsets and defines a joint distribution on the population’s (i) shape variables and (ii) object-boundary data. The proposed method solves for optimal (i) point locations, (ii) correspondences, and (iii) model-parameter values as a single optimization problem. The optimization uses expectation maximization relying on a novel Markov-chain Monte-Carlo algorithm for sampling in Kendall shape space. Results on clinical brain images demonstrate advantages over the state of the art. PMID:25320776

  15. West Flank Coso, CA FORGE 3D temperature model

    SciTech Connect

    Doug Blankenship

    2016-03-01

    x,y,z data of the 3D temperature model for the West Flank Coso FORGE site. Model grid spacing is 250 m. The temperature model for the Coso geothermal field used over 100 geothermal production-sized wells and intermediate-depth temperature holes. At the near surface of this model, two boundary temperatures were assumed: (1) in areas with surface manifestations, including fumaroles along the northeast-striking normal faults and northwest-striking dextral faults within the hydrothermal field, a temperature of ~104°C was applied at a datum of +1066 meters above sea level, and (2) a near-surface temperature of 20°C, at about 10 meters depth, was applied below the diurnal and annual conductive temperature perturbations. These assumptions were based on heat flow studies conducted at the CVF and for the Mojave Desert. On the edges of the hydrothermal system, a 73°C/km (4°F/100') temperature gradient contour was established using conductive gradient data from shallow and intermediate-depth temperature holes. This contour was continued to all elevation datums between the 20°C surface and -1520 meters below mean sea level. Because the West Flank is outside of the geothermal field footprint, during Phase 1 the three wells inside the FORGE site were incorporated into the preexisting temperature model. To ensure a complete model was built based on all the available data sets, measured bottom-hole temperature gradients in certain wells were extrapolated downward to the next deepest elevation datum (or a maximum of about 25% of the well depth where conductive gradients are evident in the lower portions of the wells). After assuring that the margins of the geothermal field would be adequately modelled, the data were contoured using a kriging algorithm. Although the extrapolated temperatures and boundary conditions are not rigorous, the calculated temperatures are anticipated to be within ~6°C (20°F), or one contour interval, of the
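    As an illustration of the final interpolation step, the sketch below performs ordinary kriging with an exponential variogram on synthetic well temperatures at a fixed elevation datum; it is not the FORGE workflow or dataset, and the variogram parameters and well locations are invented.

```python
# Hedged sketch of ordinary kriging interpolation of borehole temperatures
# onto a regular grid (exponential variogram; synthetic data).
import numpy as np

def exp_variogram(h, nugget=0.0, sill=25.0, rng_m=3000.0):
    return nugget + sill * (1.0 - np.exp(-h / rng_m))

def ordinary_krige(xy_obs, z_obs, xy_grid, vgm=exp_variogram):
    n = len(z_obs)
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = vgm(d_obs)
    A[n, n] = 0.0                               # Lagrange-multiplier row/column
    d_tgt = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=-1)
    b = np.ones((len(xy_grid), n + 1))
    b[:, :n] = vgm(d_tgt)
    weights = np.linalg.solve(A, b.T).T[:, :n]  # drop the multiplier column
    return weights @ z_obs

# Synthetic "wells": temperatures increase toward a hot zone at the domain centre.
rng = np.random.default_rng(2)
xy_obs = rng.uniform(0, 10000, size=(40, 2))                     # metres
dist = np.linalg.norm(xy_obs - 5000.0, axis=1)
z_obs = 60 + 0.01 * (10000 - dist) + rng.normal(0, 2, 40)        # deg C
gx, gy = np.meshgrid(np.linspace(0, 10000, 41), np.linspace(0, 10000, 41))
xy_grid = np.column_stack([gx.ravel(), gy.ravel()])
T_grid = ordinary_krige(xy_obs, z_obs, xy_grid).reshape(gx.shape)
print("kriged temperature range:",
      T_grid.min().round(1), "-", T_grid.max().round(1), "deg C")
```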

  16. Sample Collection from Small Airless Bodies: Examination of Temperature Constraints for the TGIP Sample Collector for the Hera Near-Earth Asteroid Sample Return Mission

    NASA Technical Reports Server (NTRS)

    Franzen, M. A.; Roe, L. A.; Buffington, J. A.; Sears, D. W. G.

    2005-01-01

    There have been a number of missions that have explored the solar system with cameras and other instruments, but profound questions remain that can only be addressed through the analysis of returned samples. However, due to lack of appropriate technology, high cost, and high risk, sample return has only recently become a feasible part of robotic solar system exploration. One specific objective of the President's new vision is that robotic exploration of the solar system should enhance human exploration as it discovers and understands the solar system and searches for life and resources [1]. Missions to small bodies, asteroids and comets, will partially fill the huge technological void between missions to the Moon and missions to Mars. However, such missions must be low cost and inherently simple, so they can be applied routinely to many missions. Sample return from asteroids, comets, Mars, and Jupiter's moons will be an important and natural part of the human exploration of space effort. Here we describe the collector designed for the Hera Near-Earth Asteroid Sample Return Mission. We have built a small prototype for preliminary evaluation, but expect the final collector to gather approximately 100 g of sample, from dust grains to centimeter-sized clasts, on each application to the surface of the asteroid.

  17. Activation energy for a model ferrous-ferric half reaction from transition path sampling.

    PubMed

    Drechsel-Grau, Christof; Sprik, Michiel

    2012-01-21

    Activation parameters for the model oxidation half reaction of the classical aqueous ferrous ion are compared for different molecular simulation techniques. In particular, activation free energies are obtained from umbrella integration and Marcus theory based thermodynamic integration, which rely on the diabatic gap as the reaction coordinate. The latter method also assumes linear response, and both methods obtain the activation entropy and the activation energy from the temperature dependence of the activation free energy. In contrast, transition path sampling does not require knowledge of the reaction coordinate and directly yields the activation energy [C. Dellago and P. G. Bolhuis, Mol. Simul. 30, 795 (2004)]. Benchmark activation energies from transition path sampling agree within statistical uncertainty with activation energies obtained from standard techniques requiring knowledge of the reaction coordinate. In addition, it is found that the activation energy for this model system is significantly smaller than the activation free energy for the Marcus model, approximately half the value, implying an equally large entropy contribution.
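    The temperature-dependence route mentioned above can be illustrated in a few lines: given activation free energies at several temperatures, a linear fit of ΔG‡(T) = ΔH‡ - TΔS‡ separates the enthalpic part (approximately the activation energy) from the entropic part. The numbers below are invented, chosen only to mirror the roughly factor-of-two gap between activation energy and activation free energy reported in the abstract.

```python
# Sketch: activation enthalpy/entropy from the temperature dependence of the
# activation free energy (synthetic, illustrative values).
import numpy as np

T = np.array([280.0, 290.0, 300.0, 310.0, 320.0])                 # K
dG = np.array([0.658, 0.669, 0.680, 0.691, 0.702])                # eV, hypothetical

# Linear fit dG = dH - T*dS  ->  slope = -dS, intercept = dH
slope, intercept = np.polyfit(T, dG, 1)
dS, dH = -slope, intercept
print(f"activation entropy  ~ {dS * 1e3:+.2f} meV/K")
print(f"activation enthalpy ~ {dH:.2f} eV   vs   activation free energy at 300 K ~ {dG[2]:.2f} eV")
```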

  18. Modeling the Orbital Sampling Effect of Extrasolar Moons

    NASA Astrophysics Data System (ADS)

    Heller, René; Hippke, Michael; Jackson, Brian

    2016-04-01

    The orbital sampling effect (OSE) appears in phase-folded transit light curves of extrasolar planets with moons. Analytical OSE models have hitherto neglected stellar limb darkening and non-zero transit impact parameters and assumed that the moon is on a circular, co-planar orbit around the planet. Here, we present an analytical OSE model for eccentric moon orbits, which we implement in a numerical simulator with stellar limb darkening that allows for arbitrary transit impact parameters. We also describe and publicly release a fully numerical OSE simulator (PyOSE) that can model arbitrary inclinations of the transiting moon orbit. Both our analytical solution for the OSE and PyOSE can be used to search for exomoons in long-term stellar light curves such as those by Kepler and the upcoming PLATO mission. Our updated OSE model offers an independent method for the verification of possible future exomoon claims via transit timing variations and transit duration variations. Photometrically quiet K and M dwarf stars are particularly promising targets for an exomoon discovery using the OSE.

  19. Estimating sampling biases and measurement uncertainties of AIRS/AMSU-A temperature and water vapor observations using MERRA reanalysis

    NASA Astrophysics Data System (ADS)

    Hearty, Thomas J.; Savtchenko, Andrey; Tian, Baijun; Fetzer, Eric; Yung, Yuk L.; Theobald, Michael; Vollmer, Bruce; Fishbein, Evan; Won, Young-In

    2014-03-01

    We use MERRA (Modern Era Retrospective-Analysis for Research Applications) temperature and water vapor data to estimate the sampling biases of climatologies derived from the AIRS/AMSU-A (Atmospheric Infrared Sounder/Advanced Microwave Sounding Unit-A) suite of instruments. We separate the total sampling bias into temporal and instrumental components. The temporal component is caused by the AIRS/AMSU-A orbit and swath that are not able to sample all of time and space. The instrumental component is caused by scenes that prevent successful retrievals. The temporal sampling biases are generally smaller than the instrumental sampling biases except in regions with large diurnal variations, such as the boundary layer, where the temporal sampling biases of temperature can be ± 2 K and water vapor can be 10% wet. The instrumental sampling biases are the main contributor to the total sampling biases and are mainly caused by clouds. They are up to 2 K cold and > 30% dry over midlatitude storm tracks and tropical deep convective cloudy regions and up to 20% wet over stratus regions. However, other factors such as surface emissivity and temperature can also influence the instrumental sampling bias over deserts where the biases can be up to 1 K cold and 10% wet. Some instrumental sampling biases can vary seasonally and/or diurnally. We also estimate the combined measurement uncertainties of temperature and water vapor from AIRS/AMSU-A and MERRA by comparing similarly sampled climatologies from both data sets. The measurement differences are often larger than the sampling biases and have longitudinal variations.
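    The temporal component of such a sampling bias can be estimated by sub-sampling a continuous series at satellite overpass times, as in the sketch below; the hourly series, its diurnal asymmetry and the nominal 01:30/13:30 local overpass hours are synthetic stand-ins, not MERRA or AIRS data.

```python
# Sketch of a temporal sampling-bias estimate: sub-sample an hourly
# "reanalysis-like" temperature series at two sun-synchronous overpass times
# and compare with the full-time mean.
import numpy as np

rng = np.random.default_rng(3)
hours = np.arange(0, 24 * 365)                        # one year, hourly
local_hour = hours % 24

# Boundary-layer-like series: asymmetric diurnal cycle + synoptic noise (synthetic).
phase = 2 * np.pi * (local_hour - 9) / 24
T = (15.0
     + 6.0 * np.sin(phase)                            # main diurnal cycle
     + 2.0 * np.sin(2 * phase)                        # asymmetry (second harmonic)
     + rng.normal(0, 1.5, hours.size))

full_mean = T.mean()
overpass = np.isin(local_hour, (1, 13))               # nearest whole hours to overpasses
sampled_mean = T[overpass].mean()

print(f"all-hours mean        : {full_mean:6.2f} K")
print(f"overpass-only mean    : {sampled_mean:6.2f} K")
print(f"temporal sampling bias: {sampled_mean - full_mean:+.2f} K")
```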

  20. Estimating Sampling Biases and Measurement Uncertainties of AIRS-AMSU-A Temperature and Water Vapor Observations Using MERRA Reanalysis

    NASA Technical Reports Server (NTRS)

    Hearty, Thomas J.; Savtchenko, Andrey K.; Tian, Baijun; Fetzer, Eric; Yung, Yuk L.; Theobald, Michael; Vollmer, Bruce; Fishbein, Evan; Won, Young-In

    2014-01-01

    We use MERRA (Modern Era Retrospective-Analysis for Research Applications) temperature and water vapor data to estimate the sampling biases of climatologies derived from the AIRS/AMSU-A (Atmospheric Infrared Sounder/Advanced Microwave Sounding Unit-A) suite of instruments. We separate the total sampling bias into temporal and instrumental components. The temporal component is caused by the AIRS/AMSU-A orbit and swath that are not able to sample all of time and space. The instrumental component is caused by scenes that prevent successful retrievals. The temporal sampling biases are generally smaller than the instrumental sampling biases except in regions with large diurnal variations, such as the boundary layer, where the temporal sampling biases of temperature can be +/- 2 K and water vapor can be 10% wet. The instrumental sampling biases are the main contributor to the total sampling biases and are mainly caused by clouds. They are up to 2 K cold and greater than 30% dry over mid-latitude storm tracks and tropical deep convective cloudy regions and up to 20% wet over stratus regions. However, other factors such as surface emissivity and temperature can also influence the instrumental sampling bias over deserts where the biases can be up to 1 K cold and 10% wet. Some instrumental sampling biases can vary seasonally and/or diurnally. We also estimate the combined measurement uncertainties of temperature and water vapor from AIRS/AMSU-A and MERRA by comparing similarly sampled climatologies from both data sets. The measurement differences are often larger than the sampling biases and have longitudinal variations.

  1. Design and evaluation of a new Peltier-cooled laser ablation cell with on-sample temperature control.

    PubMed

    Konz, Ioana; Fernández, Beatriz; Fernández, M Luisa; Pereiro, Rosario; Sanz-Medel, Alfredo

    2014-01-27

    A new custom-built Peltier-cooled laser ablation cell is described. The proposed cryogenic cell combines a small internal volume (20 cm³) with a unique and reliable on-sample temperature control. The use of a flexible temperature sensor, directly located on the sample surface, ensures a rigorous sample temperature control throughout the entire analysis time and allows instant response to any possible fluctuation. In this way sample integrity and, therefore, reproducibility can be guaranteed during the ablation. The refrigeration of the proposed cryogenic cell combines an internal refrigeration system, controlled by a sensitive thermocouple, with an external refrigeration system. Cooling of the sample is directly carried out by 8 small (1 cm×1 cm) Peltier elements placed in a circular arrangement in the base of the cell. These Peltier elements are located below a copper plate where the sample is placed. Due to the small size of the cooling electronics and their circular arrangement, it was possible to maintain a peephole under the sample for illumination, allowing a much better visualization of the sample, a factor especially important when working with structurally complex tissue sections. The analytical performance of the cryogenic cell was studied using a glass reference material (SRM NIST 612) at room temperature and at -20°C. The proposed cell design shows a reasonable signal washout (signal decay to background level within less than 10 s), high sensitivity and good signal stability (in the range 6.6-11.7%). Furthermore, high precision (0.4-2.6%) and accuracy (0.3-3.9%) in the isotope ratio measurements were also observed operating the cell both at room temperature and at -20°C. Finally, experimental results obtained for the cell application to qualitative elemental imaging of structurally complex tissue samples (e.g. eye sections from a native frozen porcine eye and fresh flower leaves) demonstrate that working in cryogenic conditions is critical in such

  2. A stochastic model for the analysis of maximum daily temperature

    NASA Astrophysics Data System (ADS)

    Sirangelo, B.; Caloiero, T.; Coscarelli, R.; Ferrari, E.

    2016-08-01

    In this paper, a stochastic model for the analysis of daily maximum temperature is proposed. First, a deseasonalization procedure based on a truncated Fourier expansion is adopted. Then, Johnson transformation functions were applied for data normalization. Finally, an autoregressive fractionally integrated moving average (FARIMA) model was used to reproduce both the short- and long-memory behavior of the temperature series. The model was applied to the data of the Cosenza gauge (Calabria region) and verified on four other gauges of southern Italy. Through a Monte Carlo simulation procedure based on the proposed model, 10^5 years of daily maximum temperature have been generated. Among the possible applications of the model, the occurrence probabilities of the annual maximum values have been evaluated. Moreover, the procedure was applied for the estimation of the return periods of long sequences of days with maximum temperature above prefixed thresholds.
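    The deseasonalization step lends itself to a compact illustration: fit a truncated Fourier expansion of the annual cycle by least squares and retain the residuals for the later normalization and FARIMA stages (which are omitted here). The synthetic series and harmonic count below are assumptions.

```python
# Sketch of the deseasonalization step for a daily maximum temperature series.
import numpy as np

rng = np.random.default_rng(4)
days = np.arange(20 * 365)                     # twenty synthetic years
doy = days % 365

true_cycle = 22 + 9 * np.cos(2 * np.pi * (doy - 200) / 365)
tmax = true_cycle + rng.normal(0, 3, days.size)        # daily maximum temperature

def fourier_design(doy, n_harmonics=2, period=365.0):
    """Design matrix of a truncated Fourier expansion of the annual cycle."""
    cols = [np.ones_like(doy, dtype=float)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * doy / period))
        cols.append(np.sin(2 * np.pi * k * doy / period))
    return np.column_stack(cols)

X = fourier_design(doy)
coef, *_ = np.linalg.lstsq(X, tmax, rcond=None)
seasonal = X @ coef
residual = tmax - seasonal                     # input to the normalization / FARIMA steps

print("fitted annual mean            :", round(coef[0], 2), "deg C")
print("residual std (deseasonalized) :", round(residual.std(), 2), "deg C")
```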

  3. Ozone and temperature: A test of the consistency of models and observations in the middle atmosphere

    NASA Astrophysics Data System (ADS)

    Orris, Rebecca Lyn

    1997-08-01

    Several stratospheric monthly-, zonally-averaged satellite ozone and temperature datasets have been created, merged with other observational datasets, and extrapolated to form ozone climatologies, with coverage from the surface to 80 km and from 90°S to 90°N. Equilibrium temperatures in the stratosphere for each ozone dataset are calculated using a fixed dynamical heating (FDH) model and are compared with measured temperatures. An extensive study is conducted of the sensitivity of the modeled temperatures to uncertainties of inputs, with emphasis on the accuracy of the radiative transfer models, the uncertainty of the ozone mixing ratios, and inter-annual variability. We examine the long-term variability of the temperature with 25 years of data from the 3° resolution SKYHI GCM and find evidence of low frequency variation of the 3° model temperatures with a time scale of about 10 years. This long-term variability creates a significant source of uncertainty in our study, since dynamical heating rates derived from only 1 year of 1° SKYHI data are used. Most measured datasets are only available for a few years, which is an inadequate sample for averaging purposes. The uncertainty introduced into the comparison of FDH-modeled temperatures and measurements near 1 mb in the tropics due to interannual variability has a maximum of approximately ±8 K. Global-mean calculations on isobaric surfaces are shown to eliminate most of the interannual variability of the modeled and measured temperatures. Multiple years of global-mean UARS MLS temperatures, as well as MLS and LIMS temperatures at pressures of 1 mb and greater, agree to within ±2 K. For most months studied, global-mean Barnett and Corney (BC) temperatures are found to be significantly warmer (3.5-5 K) than either the MLS or LIMS temperatures between 2-10 mb. Comparisons of global-mean FDH-modeled temperatures with measured LIMS and MLS temperatures show the model is colder than measurements by 3-7 K. Consistency between

  4. Ignition temperature of magnesium powder clouds: a theoretical model.

    PubMed

    Chunmiao, Yuan; Chang, Li; Gang, Li; Peihong, Zhang

    2012-11-15

    Minimum ignition temperature of dust clouds (MIT-DC) is an important consideration when adopting explosion prevention measures. This paper presents a model for determining minimum ignition temperature for a magnesium powder cloud under conditions simulating a Godbert-Greenwald (GG) furnace. The model is based on heterogeneous oxidation of metal particles and Newton's law of motion, while correlating particle size, dust concentration, and dust dispersion pressure with MIT-DC. The model predicted values in close agreement with experimental data and is especially useful in predicting temperature and velocity change as particles pass through the furnace tube.

  5. The effects of sampling frequency on the climate statistics of the ECMWF general circulation model

    SciTech Connect

    Phillips, T.J.; Gates, W.L.; Arpe, K.

    1992-09-01

    The effects of sampling frequency on the first- and second-moment statistics of selected EC model variables are investigated in a simulation of "perpetual July" with a diurnal cycle included and with surface and atmospheric fields saved at hourly intervals. The shortest characteristic time scales (as determined by the e-folding time of lagged autocorrelation functions) are those of ground heat fluxes and temperatures, precipitation and run-off, convective processes, cloud properties, and atmospheric vertical motion, while the longest time scales are exhibited by soil temperature and moisture, surface pressure, and atmospheric specific humidity, temperature and wind. The time scales of surface heat and momentum fluxes and of convective processes are substantially shorter over land than over the oceans.
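    The time-scale diagnostic used above is easy to reproduce: the sketch below computes the lagged autocorrelation function of a series and reports the lag at which it first drops below 1/e, using AR(1) surrogates in place of model output.

```python
# Sketch: characteristic time scale as the e-folding time of the lagged
# autocorrelation function (synthetic AR(1) series stand in for model fields).
import numpy as np

def acf(x, max_lag):
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom for k in range(max_lag + 1)])

def efolding_time(x, dt_hours=1.0, max_lag=240):
    r = acf(x, max_lag)
    below = np.where(r < 1.0 / np.e)[0]
    return below[0] * dt_hours if below.size else np.inf

rng = np.random.default_rng(5)
n = 24 * 365
fast = np.zeros(n)
slow = np.zeros(n)
for t in range(1, n):                          # hourly AR(1) surrogates
    fast[t] = 0.70 * fast[t - 1] + rng.normal()    # e.g. convective precipitation
    slow[t] = 0.99 * slow[t - 1] + rng.normal()    # e.g. soil temperature

print("e-folding time, 'fast' variable:", efolding_time(fast), "h")
print("e-folding time, 'slow' variable:", efolding_time(slow), "h")
```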

  7. Wang-Landau sampling in face-centered-cubic hydrophobic-hydrophilic lattice model proteins.

    PubMed

    Liu, Jingfa; Song, Beibei; Yao, Yonglei; Xue, Yu; Liu, Wenjie; Liu, Zhaoxia

    2014-10-01

    Finding the global minimum-energy structure is one of the main problems of protein structure prediction. The face-centered-cubic (fcc) hydrophobic-hydrophilic (HP) lattice model can reach high approximation ratios of real protein structures, so the fcc lattice model is a good choice for predicting protein structures. The lack of an effective global optimization method is the key obstacle in solving this problem. The Wang-Landau sampling method is especially useful for complex systems with a rough energy landscape and has been successfully applied to solving many optimization problems. We apply an improved Wang-Landau (IWL) sampling method, which incorporates the generation of an initial conformation based on a greedy strategy and a neighborhood strategy based on pull moves, to predict protein structures on the fcc HP lattice model. Unlike conventional Monte Carlo simulations that generate a probability distribution at a given temperature, the Wang-Landau sampling method can estimate the density of states accurately via a random walk, which produces a flat histogram in energy space. We test 12 general benchmark instances on both two-dimensional and three-dimensional (3D) fcc HP lattice models. The lowest energies obtained by the IWL sampling method are as good as or better than those of other methods in the literature for all instances. We then test five sets of larger-scale instances, denoted by the S, R, F90, F180, and CASP target instances, on the 3D fcc HP lattice model. The numerical results show that our algorithm performs better than the other five methods in the literature on both the lowest energies and the average lowest energies over all runs. The IWL sampling method turns out to be a powerful tool for studying structure prediction of fcc HP lattice model proteins.
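    To make the density-of-states idea concrete, the sketch below runs a generic Wang-Landau random walk on a toy system whose density of states is known exactly (the number of up bits in a 20-bit string); it is not the IWL algorithm with pull moves, and the flatness criterion and modification-factor schedule are simple illustrative choices.

```python
# Generic Wang-Landau sketch: estimate ln g(E) for E = number of up bits,
# where the exact answer is ln C(N, E).
import numpy as np
from math import comb

rng = np.random.default_rng(6)
N = 20
state = rng.integers(0, 2, N)
E = int(state.sum())

ln_g = np.zeros(N + 1)          # running estimate of ln g(E)
hist = np.zeros(N + 1)
ln_f = 1.0                      # modification factor, halved when the histogram is flat

while ln_f > 1e-4:
    for _ in range(20000):
        i = rng.integers(N)                          # propose a single bit flip
        E_new = E + (1 - 2 * int(state[i]))
        if np.log(rng.random()) < ln_g[E] - ln_g[E_new]:
            state[i] ^= 1                            # accept with prob min(1, g(E)/g(E_new))
            E = E_new
        ln_g[E] += ln_f                              # always update the visited level
        hist[E] += 1
    if hist.min() > 0.8 * hist.mean():               # simple flatness criterion
        hist[:] = 0
        ln_f /= 2.0

ln_g -= ln_g[0]                                      # fix the arbitrary offset via g(0) = 1
exact = np.array([np.log(comb(N, k)) for k in range(N + 1)])
print("max |ln g - exact|:", np.abs(ln_g - exact).max().round(2))
```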

  8. A physically based model of global freshwater surface temperature

    NASA Astrophysics Data System (ADS)

    Beek, Ludovicus P. H.; Eikelboom, Tessa; Vliet, Michelle T. H.; Bierkens, Marc F. P.

    2012-09-01

    Temperature determines a range of physical properties of water and exerts a strong control on surface water biogeochemistry. Thus, in freshwater ecosystems the thermal regime directly affects the geographical distribution of aquatic species through their growth and metabolism and indirectly through their tolerance to parasites and diseases. Models used to predict surface water temperature range between physically based deterministic models and statistical approaches. Here we present the initial results of a physically based deterministic model of global freshwater surface temperature. The model adds a surface water energy balance to river discharge modeled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff, and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by shortwave and longwave radiation and sensible and latent heat fluxes. Also included are ice formation and its effect on heat storage and river hydraulics. We use the coupled surface water and energy balance model to simulate global freshwater surface temperature at daily time steps with a spatial resolution of 0.5° on a regular grid for the period 1976-2000. We opt to parameterize the model with globally available data and apply it without calibration in order to preserve its physical basis with the outlook of evaluating the effects of atmospheric warming on freshwater surface temperature. We validate our simulation results with daily temperature data from rivers and lakes (U.S. Geological Survey (USGS), limited to the USA) and compare mean monthly temperatures with those recorded in the Global Environment Monitoring System (GEMS) data set. Results show that the model is able to capture the mean monthly surface temperature for the majority of the GEMS stations, while the interannual variability as derived from the USGS and NOAA data was captured reasonably well. Results are poorest for
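    A drastically reduced version of the surface energy balance at the core of such a model is sketched below: a well-mixed water column is warmed or cooled by net shortwave and longwave radiation and a lumped turbulent exchange term. The transfer coefficients, forcing and column depth are assumptions, and advection, routing and ice are omitted.

```python
# Minimal surface-water energy-balance sketch (illustrative coefficients only).
import numpy as np

SIGMA = 5.67e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
RHO_W, CP_W = 1000.0, 4181.0

def step_water_temperature(Tw, Ta, sw_down, wind, depth, dt=3600.0,
                           albedo=0.06, emissivity=0.97, bulk_coeff=15.0):
    """Advance water temperature Tw (K) of a well-mixed column by one step of dt (s)."""
    sw_net = (1 - albedo) * sw_down
    lw_net = emissivity * SIGMA * ((Ta - 2.0) ** 4 - Tw ** 4)    # crude sky emission
    turb = -bulk_coeff * (1.0 + 0.5 * wind) * (Tw - Ta)          # sensible + latent, lumped
    net_flux = sw_net + lw_net + turb                            # W m^-2
    return Tw + net_flux * dt / (RHO_W * CP_W * depth)

# One synthetic summer day, hourly, for a 2 m deep reach
Tw = 288.0
for hour in range(24):
    sw = max(0.0, 800.0 * np.sin(np.pi * (hour - 6) / 12))       # daylight 06-18 h
    Ta_h = 291.0 + 5.0 * np.sin(np.pi * (hour - 8) / 12)
    Tw = step_water_temperature(Tw, Ta_h, sw, wind=3.0, depth=2.0)
print(f"water temperature after one day: {Tw - 273.15:.1f} deg C")
```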

  9. Dynamic modeling of temperature change in outdoor operated tubular photobioreactors.

    PubMed

    Androga, Dominic Deo; Uyar, Basar; Koku, Harun; Eroglu, Inci

    2017-04-06

    In this study, a one-dimensional transient model was developed to analyze the temperature variation of tubular photobioreactors operated outdoors and the validity of the model was tested by comparing the predictions of the model with the experimental data. The model included the effects of convection and radiative heat exchange on the reactor temperature throughout the day. The temperatures in the reactors increased with increasing solar radiation and air temperatures, and the predicted reactor temperatures corresponded well to the measured experimental values. The heat transferred to the reactor was mainly through radiation: the radiative heat absorbed by the reactor medium, ground radiation, air radiation, and solar (direct and diffuse) radiation, while heat loss was mainly through the heat transfer to the cooling water and forced convection. The amount of heat transferred by reflected radiation and metabolic activities of the bacteria and pump work was negligible. Counter-current cooling was more effective in controlling reactor temperature than co-current cooling. The model developed identifies major heat transfer mechanisms in outdoor operated tubular photobioreactors, and accurately predicts temperature changes in these systems. This is useful in determining cooling duty under transient conditions and scaling up photobioreactors. The photobioreactor design and the thermal modeling were carried out and experimental results obtained for the case study of photofermentative hydrogen production by Rhodobacter capsulatus, but the approach is applicable to photobiological systems that are to be operated under outdoor conditions with significant cooling demands.

  10. A temperature dependent SPICE macro-model for power MOSFETs

    SciTech Connect

    Pierce, D.G.

    1992-05-01

    A power MOSFET macro-model for use with the circuit simulator SPICE has been developed suitable for use over the temperature range of -55 to 125°C. The model comprises a single parameter set with the temperature dependence accessed through the SPICE TEMP card. This report describes in detail the development of the model and the extraction algorithms used to obtain model parameters. The extraction algorithms are described in sufficient detail to allow for automated measurements, which in turn allows for rapid and cost-effective development of an accurate SPICE model for any power MOSFET. 22 refs.

  11. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
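    The study used Maxent modelling; as a stand-in for the underlying idea of iteratively picking the most environmentally dissimilar site, the sketch below applies a greedy maximin (farthest-point) rule in standardised environmental space and reports a crude coverage estimate. The synthetic factors and the 1-SD coverage radius are arbitrary choices, not the NEON design values.

```python
# Hedged sketch: iterative selection of "most dissimilar" sample sites in
# standardised environmental space (a simple maximin rule, not Maxent itself).
import numpy as np

rng = np.random.default_rng(7)

# Synthetic domain: 5000 candidate cells x 4 factors
# (mean annual temperature, precipitation, elevation, vegetation class score).
env = np.column_stack([
    rng.normal(10, 4, 5000),
    rng.gamma(2.0, 200.0, 5000),
    rng.uniform(300, 2500, 5000),
    rng.integers(0, 6, 5000).astype(float),
])
Z = (env - env.mean(axis=0)) / env.std(axis=0)         # standardise the factors

def select_sites(Z, n_sites):
    chosen = [int(np.argmin(np.linalg.norm(Z - Z.mean(axis=0), axis=1)))]  # start near centroid
    d_min = np.linalg.norm(Z - Z[chosen[0]], axis=1)
    while len(chosen) < n_sites:
        nxt = int(np.argmax(d_min))                    # most dissimilar remaining cell
        chosen.append(nxt)
        d_min = np.minimum(d_min, np.linalg.norm(Z - Z[nxt], axis=1))
    return chosen, d_min

chosen, d_min = select_sites(Z, n_sites=8)
coverage = np.mean(d_min < 1.0)                        # cells "represented" within 1 SD-unit
print("selected cells:", chosen)
print(f"fraction of environmental envelope represented: {coverage:.0%}")
```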

  12. Heat propagation models for superconducting nanobridges at millikelvin temperatures

    NASA Astrophysics Data System (ADS)

    Blois, A.; Rozhko, S.; Hao, L.; Gallop, J. C.; Romans, E. J.

    2017-01-01

    Nanoscale superconducting quantum interference devices (nanoSQUIDs) most commonly use Dayem bridges as Josephson elements to reduce the loop size and achieve high spin sensitivity. Except at temperatures close to the critical temperature Tc, the electrical characteristics of these bridges exhibit undesirable thermal hysteresis which complicates device operation. This makes proper thermal analysis an essential design consideration for optimising nanoSQUID performance at ultralow temperatures. However, the existing theoretical models for this hysteresis were developed for micron-scale devices operating close to liquid helium temperatures, and are not fully applicable to a new generation of much smaller devices operating at significantly lower temperatures. We have therefore developed a new analytic heat model which enables a more accurate prediction of the thermal behaviour in such circumstances. We demonstrate that this model is in good agreement with experimental results measured down to 100 mK and discuss its validity for different nanoSQUID geometries.

  13. Midnight Temperature Maximum (MTM) in Whole Atmosphere Model (WAM) Simulations

    DTIC Science & Technology

    2016-04-14

    Akmaev, R. A.; Wu, F.; Fuller-Rowell, T. J.; Wang, H.

    Discovered almost four decades ago, the midnight temperature maximum (MTM) with

  14. Statistical Modeling of Daily Stream Temperature for Mitigating Fish Mortality

    NASA Astrophysics Data System (ADS)

    Caldwell, R. J.; Rajagopalan, B.

    2011-12-01

    Water allocations in the Central Valley Project (CVP) of California require the consideration of short- and long-term needs of many socioeconomic factors including, but not limited to, agriculture, urban use, flood mitigation/control, and environmental concerns. The Endangered Species Act (ESA) ensures that the decision-making process provides sufficient water to limit the impact on protected species, such as salmon, in the Sacramento River Valley. Current decision support tools in the CVP were deemed inadequate by the National Marine Fisheries Service due to the limited temporal resolution of forecasts for monthly stream temperature and fish mortality. Finer scale temporal resolution is necessary to account for the stream temperature variations critical to salmon survival and reproduction. In addition, complementary, long-range tools are needed for monthly and seasonal management of water resources. We will present a Generalized Linear Model (GLM) framework of maximum daily stream temperatures and related attributes, such as: daily stream temperature range, exceedance/non-exceedance of critical threshold temperatures, and the number of hours of exceedance. A suite of predictors that impact stream temperatures are included in the models, including current and prior day values of streamflow, water temperatures of upstream releases from Shasta Dam, air temperature, and precipitation. Monthly models are developed for each stream temperature attribute at the Balls Ferry gauge, an EPA compliance point for meeting temperature criteria. The statistical framework is also coupled with seasonal climate forecasts using a stochastic weather generator to provide ensembles of stream temperature scenarios that can be used for seasonal scale water allocation planning and decisions. Short-term weather forecasts can also be used in the framework to provide near-term scenarios useful for making water release decisions on a daily basis. The framework can be easily translated to other
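    One element of such a framework can be sketched as a logistic GLM for exceedance of a critical temperature threshold, fitted with statsmodels to synthetic predictors (air temperature, streamflow, release temperature). The threshold, coefficients and predictor distributions below are invented and do not represent Balls Ferry conditions.

```python
# Sketch: logistic GLM for daily exceedance of a stream-temperature threshold.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 500
air_t   = rng.normal(25, 5, n)           # daily air temperature (deg C)
flow    = rng.lognormal(3.0, 0.4, n)     # streamflow (m^3/s)
release = rng.normal(11, 1.5, n)         # temperature of upstream dam release (deg C)

# Synthetic "truth": exceedance of a 17 deg C threshold is more likely with warm
# air and warm releases, less likely with high flow.
eta = -14.0 + 0.25 * air_t + 0.6 * release - 0.08 * flow
exceed = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

X = sm.add_constant(np.column_stack([air_t, flow, release]))
model = sm.GLM(exceed, X, family=sm.families.Binomial()).fit()
print(model.summary(xname=["const", "air_t", "flow", "release_t"]))

# Predicted exceedance probability for a hot, low-flow day with a warm release
x_new = sm.add_constant(np.array([[35.0, 10.0, 13.0]]), has_constant="add")
print("P(exceed 17 deg C):", round(float(model.predict(x_new)[0]), 3))
```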

  15. A generalized conditional heteroscedastic model for temperature downscaling

    NASA Astrophysics Data System (ADS)

    Modarres, R.; Ouarda, T. B. M. J.

    2014-11-01

    This study describes a method for deriving the time-varying second-order moment, or heteroscedasticity, of local daily temperature and its association with large-scale Coupled Canadian General Circulation Model predictors. This is carried out by applying a multivariate generalized autoregressive conditional heteroscedasticity (MGARCH) approach to construct the conditional variance-covariance structure between General Circulation Model (GCM) predictors and maximum and minimum temperature time series during 1980-2000. Two MGARCH specifications, namely diagonal VECH and dynamic conditional correlation (DCC), are applied, and 25 GCM predictors were selected for bivariate temperature heteroscedastic modeling. It is observed that the conditional covariance between predictors and temperature is not very strong and mostly depends on the interaction between the random processes governing the temporal variation of predictors and predictands. The DCC model reveals a time-varying conditional correlation between GCM predictors and temperature time series. No remarkable increasing or decreasing change is observed in the correlation coefficients between GCM predictors and observed temperature during 1980-2000, while a weak winter-summer seasonality is clear for both conditional covariance and correlation. Furthermore, the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) stationarity and Brock-Dechert-Scheinkman (BDS) nonlinearity tests showed that the GCM predictors, temperature, and their conditional correlation time series are nonlinear but stationary during 1980-2000. However, the degree of nonlinearity of the temperature time series is higher than that of most of the GCM predictors.

  16. Temperature sensitivity of a numerical pollen forecast model

    NASA Astrophysics Data System (ADS)

    Scheifinger, Helfried; Meran, Ingrid; Szabo, Barbara; Gallaun, Heinz; Natali, Stefano; Mantovani, Simone

    2016-04-01

    Allergic rhinitis has become a global health problem, especially affecting children and adolescents. Timely and reliable warnings ahead of increases in atmospheric pollen concentration provide substantial support for physicians and allergy sufferers. Recently developed numerical pollen forecast models have become a means of supporting the pollen forecast service, but they still require refinement. One of the problem areas concerns the correct timing of the beginning and end of the flowering period of the species under consideration, which is identical to the period of possible pollen emission. Both are governed essentially by the temperature accumulated before the onset of flowering and during flowering. Phenological models are sensitive to a bias in the input temperature. A mean bias of -1°C can shift the modelled entry date of a phenological phase by about a week into the future. A bias of this order of magnitude is still possible in numerical weather forecast models. If the assimilation of additional temperature information (e.g. ground measurements as well as satellite-retrieved air/surface temperature fields) is able to reduce such systematic temperature deviations, the precision of the timing of phenological entry dates might be enhanced. With a number of sensitivity experiments, the effect of a possible temperature bias on the modelled phenology and the pollen concentration in the atmosphere is determined. The actual bias of the ECMWF IFS 2 m temperature will also be calculated and its effect on the numerical pollen forecast procedure presented.
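
    To illustrate the sensitivity being tested, the sketch below uses a generic thermal-time (growing degree day) phenology model and shifts its input temperature by -1 °C. The 5 °C base temperature, the degree-day threshold, and the synthetic temperature series are illustrative assumptions, not the values of the operational pollen model.

        # Minimal thermal-time (growing degree day) sketch showing how a systematic
        # temperature bias shifts the modelled flowering onset. Base temperature,
        # threshold, and the synthetic temperature series are assumptions.
        import numpy as np

        def flowering_onset(daily_mean_temp, base=5.0, gdd_threshold=120.0):
            """Day index at which accumulated degree days first exceed the threshold."""
            gdd = np.cumsum(np.maximum(daily_mean_temp - base, 0.0))
            return int(np.argmax(gdd >= gdd_threshold))

        days = np.arange(120)                                 # Jan 1 .. end of April
        temp = 2.0 + 12.0 * days / days.max()                 # simple warming trend, degC

        onset_unbiased = flowering_onset(temp)
        onset_biased = flowering_onset(temp - 1.0)            # -1 degC input bias
        print(onset_biased - onset_unbiased, "days later with a -1 degC bias")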

  17. Modeling the Effect of Temperature on Ozone-Related Mortality.

    EPA Science Inventory

    Modeling the Effect of Temperature on Ozone-Related Mortality. Wilson, Ander, Reich, Brian J, Neas, Lucas M., Rappold, Ana G. Background: Previous studies show ozone and temperature are associated with increased mortality; however, the joint effect is not well explored. Underst...

  18. Chironomids as indicators of climate change: a temperature inference model for Greenland

    NASA Astrophysics Data System (ADS)

    Maddison, Eleanor J.; Long, Antony J.; Woodroffe, Sarah A.; Ranner, P. Helen; Huntley, Brian

    2014-05-01

    Current climate warming is predicted to accelerate melting of the Greenland Ice Sheet and cause global sea level to rise, but there is uncertainty about whether changes will be abrupt or more gradual, and whether the key forcing will be air or ocean temperatures. Examining past ice sheet response to climate change is therefore important, yet only a few quantitative temperature reconstructions exist from the Greenland Ice Sheet margin. Subfossil chironomids are a widely used biological proxy, with modern calibration data sets used to reconstruct past temperatures. Many chironomid-inferred temperature models exist for the northern hemisphere high latitudes; however, no model currently exists for Greenland. Here we present a new model from south-west Greenland which utilises 22 lakes from the Nuup Kangerlua area (samples collected in summer 2011) and 19 lakes from the Kangerlussuaq fjord area (part of a dataset reported in Brodersen and Anderson (2002)). Monthly mean air temperatures were modelled for each lake site from air temperature logger data, collected in 2011-2012 from the Nuup Kangerlua area, and meteorological station temperature data. In the field, lake physical parameters and environmental variables were measured. Collected lake water and sediment samples were analysed in the laboratory. Statistical analysis of air temperature, geographical information, lake water chemistry and contemporary chironomid assemblage data was subsequently undertaken on the 41-lake training set. Mean June air temperature was found to be the main environmental control on the chironomid community, although other factors, including sample depth, conductivity and total nitrogen water content, were also found to be important. Weighted averaging partial least squares (WA-PLS) analysis was used to develop a new mean June air temperature inference model. Analysis indicated that the best model was a two-component WA-PLS with r2=0.77, r2boot=0.56 and root mean square error of prediction = 1
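
    The paper's model is a WA-PLS transfer function; as a simpler relative of that approach, the sketch below implements plain weighted averaging (WA) with inverse deshrinking on a hypothetical lakes-by-taxa abundance matrix. It is only meant to show the structure of a chironomid-temperature inference model, not to reproduce the authors' calibration.

        # Sketch of a simple weighted-averaging (WA) transfer function with inverse
        # deshrinking, a simpler relative of the WA-PLS model used in the abstract.
        # `abund` (lakes x taxa relative abundances) and `temp` (mean June air
        # temperatures of the training lakes) are hypothetical.
        import numpy as np

        def wa_fit(abund, temp):
            # Taxon optima: abundance-weighted mean temperature of each taxon
            optima = (abund * temp[:, None]).sum(axis=0) / abund.sum(axis=0)
            # Initial inferences for the training lakes
            raw = (abund * optima[None, :]).sum(axis=1) / abund.sum(axis=1)
            # Inverse deshrinking: regress observed temperature on the raw inferences
            slope, intercept = np.polyfit(raw, temp, 1)
            return optima, slope, intercept

        def wa_predict(abund, optima, slope, intercept):
            raw = (abund * optima[None, :]).sum(axis=1) / abund.sum(axis=1)
            return intercept + slope * raw

        rng = np.random.default_rng(1)
        abund = rng.random((41, 60))          # 41 training lakes, 60 taxa (synthetic)
        temp = rng.uniform(6, 12, 41)         # mean June air temperature, degC (synthetic)
        optima, slope, intercept = wa_fit(abund, temp)
        reconstruction = wa_predict(abund, optima, slope, intercept)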

  19. Effects of Sample Size, Estimation Methods, and Model Specification on Structural Equation Modeling Fit Indexes.

    ERIC Educational Resources Information Center

    Fan, Xitao; Wang, Lin; Thompson, Bruce

    1999-01-01

    A Monte Carlo simulation study investigated the effects on 10 structural equation modeling fit indexes of sample size, estimation method, and model specification. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)

  20. Note: A sample holder design for sensitive magnetic measurements at high temperatures in a magnetic properties measurement system

    SciTech Connect

    Arauzo, A.; Guerrero, E.; Urtizberea, A.; Stankiewicz, J.; Rillo, C.

    2012-06-15

    A sample holder design for high temperature measurements in a commercial MPMS SQUID magnetometer from Quantum Design is presented. It fulfills the requirements for the simultaneous use of the oven and the reciprocating sample option (RSO), thus allowing sensitive magnetic measurements up to 800 K. Alternating current susceptibility can also be measured, since the holder does not induce any phase shift relative to the ac drive field. It is easily fabricated by twisting Constantan® wires into a braid that nests the sample inside. This design ensures that the sample is held tightly in a tough holder with its orientation fixed, and prevents any sample displacement during the fast movements of the RSO transport, up to high temperatures.

  1. Experiments and modeling of variably permeable carbonate reservoir samples in contact with CO₂-acidified brines

    SciTech Connect

    Smith, Megan M.; Hao, Yue; Mason, Harris E.; Carroll, Susan A.

    2014-12-31

    Reactive experiments were performed to expose sample cores from the Arbuckle carbonate reservoir to CO₂-acidified brine under reservoir temperature and pressure conditions. The samples consisted of dolomite with varying quantities of calcite and silica/chert. The timescales of monitored pressure decline across each sample in response to CO₂ exposure, as well as the amount and nature of dissolution features, varied widely among these three experiments. For all sample cores, the experimentally measured initial permeability was at least one order of magnitude lower than the values estimated from downhole methods. Nondestructive X-ray computed tomography (XRCT) imaging revealed dissolution features including “wormholes,” removal of fracture-filling crystals, and widening of pre-existing pore spaces. In the injection zone sample, multiple fractures may have contributed to the high initial permeability of this core and restricted the distribution of CO₂-induced mineral dissolution. In contrast, the pre-existing porosity of the baffle zone sample was much lower and less connected, leading to a lower initial permeability and contributing to the development of a single dissolution channel. While calcite may make up only a small percentage of the overall sample composition, its location and the effects of its dissolution have an outsized effect on permeability responses to CO₂ exposure. The XRCT data presented here are informative for building the model domain for numerical simulations of these experiments but require calibration by higher-resolution means to confidently evaluate different porosity-permeability relationships.

  2. Evaluation of CIRA temperature model with lidar and future perspectives

    NASA Astrophysics Data System (ADS)

    Keckhut, Philippe; Hauchecorne, Alain

    The CIRA model is widely used for many atmospheric applications. Numerous comparisons with lidar temperature measurements have revealed similar biases and will be presented. The mean temperature alone is today not sufficient, and future models will require additional functionality. The use of statistical mean temperature fields requires some information about the variability in order to estimate the significance of comparisons with other sources. Some tentative estimates of this variability will be presented. Another crucial issue for temperature comparisons concerns the tidal variability. How this effect can be accounted for in a model will be discussed. Finally, the pertinence of statistical models in a changing atmosphere is also an issue that needs specific consideration.

  3. ACTINIDE REMOVAL PROCESS SAMPLE ANALYSIS, CHEMICAL MODELING, AND FILTRATION EVALUATION

    SciTech Connect

    Martino, C.; Herman, D.; Pike, J.; Peters, T.

    2014-06-05

    Filtration within the Actinide Removal Process (ARP) currently limits the throughput in interim salt processing at the Savannah River Site. In this process, batches of salt solution with Monosodium Titanate (MST) sorbent are concentrated by crossflow filtration. The filtrate is subsequently processed to remove cesium in the Modular Caustic Side Solvent Extraction Unit (MCU) followed by disposal in saltstone grout. The concentrated MST slurry is washed and sent to the Defense Waste Processing Facility (DWPF) for vitrification. During recent ARP processing, there has been a degradation of filter performance manifested as the inability to maintain high filtrate flux throughout a multi-batch cycle. The objectives of this effort were to characterize the feed streams, to determine if solids (in addition to MST) are precipitating and causing the degraded performance of the filters, and to assess the particle size and rheological data to address potential filtration impacts. Equilibrium modelling with OLI Analyzer™ and OLI ESP™ was performed to determine chemical components at risk of precipitation and to simulate the ARP process. The performance of ARP filtration was evaluated to review potential causes of the observed filter behavior. Task activities for this study included extensive physical and chemical analysis of samples from the Late Wash Pump Tank (LWPT) and the Late Wash Hold Tank (LWHT) within ARP as well as samples of the tank farm feed from Tank 49H. The samples from the LWPT and LWHT were obtained from several stages of processing of Salt Batch 6D, Cycle 6, Batch 16.

  4. A New Empirical Model of the Temperature Humidity Index.

    NASA Astrophysics Data System (ADS)

    Schoen, Carl

    2005-09-01

    A simplified scale of apparent temperature, considering only dry-bulb temperature and humidity, has become known as the temperature humidity index (THI). The index was empirically constructed and was presented in the form of a table. It is often useful to have a formula instead, for use in interpolation or for programming calculators or computers. The National Weather Service uses a polynomial multiple regression formula, but it is in some ways unsatisfactory. A new model of the THI is presented that is much simpler, having only 3 parameters as compared with 16 for the NWS model. The new model also more closely fits the tabulated values and has the advantage that it allows extrapolation outside of the temperature range of the table. Temperature-humidity pairs above the effective range of the NWS model are occasionally encountered, and the ability to extrapolate into colder temperature ranges allows the new model to be more effectively incorporated as part of a more general apparent temperature index.
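
    For context, the kind of polynomial regression the abstract refers to looks like the widely published Rothfusz approximation of the NWS heat index, shown below. Whether this 9-coefficient form is exactly the 16-parameter NWS formula cited in the abstract is an assumption; it is included only to illustrate the style of fit being compared against.

        # The widely published Rothfusz regression for the NWS heat index
        # (temperature in degF, relative humidity in %). Shown for illustration;
        # the abstract's 16-parameter NWS polynomial is a longer variant.
        def heat_index_rothfusz(T, RH):
            """Approximate heat index in degF for air temperature T (degF) and RH (%)."""
            return (-42.379 + 2.04901523 * T + 10.14333127 * RH
                    - 0.22475541 * T * RH - 6.83783e-3 * T * T
                    - 5.481717e-2 * RH * RH + 1.22874e-3 * T * T * RH
                    + 8.5282e-4 * T * RH * RH - 1.99e-6 * T * T * RH * RH)

        print(round(heat_index_rothfusz(90.0, 70.0), 1))  # roughly 106 degF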

  5. Defining Predictive Probability Functions for Species Sampling Models.

    PubMed

    Lee, Jaeyong; Quintana, Fernando A; Müller, Peter; Trippa, Lorenzo

    2013-01-01

    We review the class of species sampling models (SSM). In particular, we investigate the relation between the exchangeable partition probability function (EPPF) and the predictive probability function (PPF). It is straightforward to define a PPF from an EPPF, but the converse is not necessarily true. In this paper we introduce the notion of putative PPFs and show novel conditions for a putative PPF to define an EPPF. We show that all possible PPFs in a certain class have to define (unnormalized) probabilities for cluster membership that are linear in cluster size. We give a new necessary and sufficient condition for arbitrary putative PPFs to define an EPPF. Finally, we show posterior inference for a large class of SSMs with a PPF that is not linear in cluster size and discuss a numerical method to derive its PPF.
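
    The "linear in cluster size" condition is exemplified by the Dirichlet process, whose PPF is the Chinese restaurant process: an item joins cluster j with unnormalized weight n_j (its size) or opens a new cluster with weight alpha. The sketch below samples a partition from that PPF; alpha and n are illustrative.

        # Chinese restaurant process: the canonical PPF whose unnormalized
        # cluster-membership probabilities are linear in cluster size.
        import random

        def crp_sample(n, alpha=1.0, seed=0):
            """Draw one partition of n items from the Chinese restaurant process."""
            random.seed(seed)
            cluster_sizes = []
            labels = []
            for i in range(n):
                weights = cluster_sizes + [alpha]      # linear in cluster size, plus new-cluster mass
                r = random.uniform(0, i + alpha)       # total weight is i + alpha
                acc, k = 0.0, 0
                for k, w in enumerate(weights):
                    acc += w
                    if r <= acc:
                        break
                if k == len(cluster_sizes):            # open a new cluster
                    cluster_sizes.append(1)
                else:
                    cluster_sizes[k] += 1
                labels.append(k)
            return labels, cluster_sizes

        labels, sizes = crp_sample(100, alpha=2.0)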

  6. Elevated body temperature is linked to fatigue in an Italian sample of relapsing-remitting multiple sclerosis patients.

    PubMed

    Leavitt, V M; De Meo, E; Riccitelli, G; Rocca, M A; Comi, G; Filippi, M; Sumowski, J F

    2015-11-01

    Elevated body temperature was recently reported for the first time in patients with relapsing-remitting multiple sclerosis (RRMS) relative to healthy controls. In addition, warmer body temperature was associated with worse fatigue. These findings are highly novel, may indicate a new pathophysiology for MS fatigue, and therefore warrant replication in a geographically separate sample. Here, we investigated body temperature and its association with fatigue in an Italian sample of 44 RRMS patients and 44 age- and sex-matched healthy controls. Consistent with our original report, we found elevated body temperature in the RRMS sample compared to healthy controls. Warmer body temperature was associated with worse fatigue, thereby supporting the notion of endogenous temperature elevations in patients with RRMS as a novel pathophysiological factor underlying fatigue. Our findings highlight a paradigm shift in our understanding of the effect of heat in RRMS, from exogenous (i.e., Uhthoff's phenomenon) to endogenous. Although randomized controlled trials of cooling treatments (i.e., aspirin, cooling garments) to reduce fatigue in RRMS have been successful, consideration of endogenously elevated body temperature as the underlying target will enhance our development of novel treatments.

  7. Estimation of effective temperatures in quantum annealers for sampling applications: A case study with possible applications in deep learning

    NASA Astrophysics Data System (ADS)

    Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro

    2016-08-01

    An increase in the efficiency of sampling from Boltzmann distributions would have a significant impact on deep learning and other machine-learning applications. Recently, quantum annealers have been proposed as a potential candidate to speed up this task, but several limitations still bar these state-of-the-art technologies from being used effectively. One of the main limitations is that, while the device may indeed sample from a Boltzmann-like distribution, quantum dynamical arguments suggest it will do so with an instance-dependent effective temperature, different from its physical temperature. Unless this unknown temperature can be unveiled, it might not be possible to effectively use a quantum annealer for Boltzmann sampling. In this work, we propose a strategy to overcome this challenge with a simple effective-temperature estimation algorithm. We provide a systematic study assessing the impact of the effective temperatures in the learning of a special class of a restricted Boltzmann machine embedded on quantum hardware, which can serve as a building block for deep-learning architectures. We also provide a comparison to k-step contrastive divergence (CD-k) with k up to 100. Although assuming a suitable fixed effective temperature also allows us to outperform one-step contrastive divergence (CD-1), only when using an instance-dependent effective temperature do we find a performance close to that of CD-100 for the case studied here.
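
    The sketch below is not the authors' estimation algorithm; it only illustrates the underlying idea that if samples follow p(s) proportional to exp(-E(s)/T_eff), then regressing the log of the empirical frequency of each observed configuration against its energy gives a slope of -1/T_eff. The tiny two-spin system and its energies are synthetic.

        # Illustration of effective-temperature estimation from samples assumed to
        # follow p(s) ~ exp(-E(s)/T_eff): regress log empirical frequency of each
        # distinct configuration on its energy; slope estimates -1/T_eff.
        from collections import Counter
        import numpy as np

        def estimate_effective_temperature(samples, energy):
            counts = Counter(map(tuple, samples))
            E = np.array([energy(np.array(s)) for s in counts])
            logf = np.log(np.array(list(counts.values()), dtype=float))
            slope = np.polyfit(E, logf, 1)[0]
            return -1.0 / slope

        # Synthetic check: a two-spin system sampled at a known temperature
        rng = np.random.default_rng(0)
        def energy(s):                       # E = -s0*s1, a single ferromagnetic bond
            return -s[0] * s[1]
        states = [np.array(s) for s in [(-1, -1), (-1, 1), (1, -1), (1, 1)]]
        T_true = 1.5
        w = np.array([np.exp(-energy(s) / T_true) for s in states]); w /= w.sum()
        idx = rng.choice(4, size=20000, p=w)
        samples = [states[i] for i in idx]
        print(estimate_effective_temperature(samples, energy))   # ~1.5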

  8. High temperature spice modeling of partially depleted SOI MOSFETs

    SciTech Connect

    Osman, M.A.; Osman, A.A.

    1996-03-01

    Several partially depleted SOI N- and P-MOSFETs with dimensions ranging from W/L=30/10 to 15/3 were characterized from room temperature up to 300 °C. The devices exhibited a well defined and sharp zero temperature coefficient (ZTC) biasing point up to 573 K in both the linear and saturation regions. Simulations of the I-V characteristics using a temperature-dependent SOI SPICE model were in excellent agreement with measurements. Additionally, the measured ZTC points agreed favorably with those predicted using expressions for the ZTC derived from the temperature-dependent SOI model. © 1996 American Institute of Physics.

  9. Mechanistic modeling of broth temperature in outdoor photobioreactors.

    PubMed

    Béchet, Quentin; Shilton, Andy; Fringer, Oliver B; Muñoz, Raul; Guieysse, Benoit

    2010-03-15

    This study presents the first mechanistic model describing broth temperature in column photobioreactors as a function of static (location, reactor geometry) and dynamic (light irradiance, air temperature, wind velocity) parameters. Based on a heat balance on the liquid phase, the model predicted the temperature in a pneumatically agitated column photobioreactor (1 m² illuminated area, 0.19 m internal diameter, 50 L gas-free cultivation broth) operated outdoors in Singapore to an accuracy of 2.4 °C at the 95% confidence interval over the entire data set used (104 measurements from 7 different batches). Solar radiation (0 to 200 W) and air convection (-30 to 50 W) were the main contributors to broth temperature change. The model predicted that broth temperatures above 40 °C would be reached during summer months in the same photobioreactor operated in California, a value well over the maximum temperature tolerated by most commercial algae species. Accordingly, 18,000 and 5500 GJ year⁻¹ ha⁻¹ of heat energy must be removed to maintain broth temperature at or below 25 and 35 °C, respectively, assuming a reactor density of one reactor per square meter. Clearly, the significant issue of temperature control must be addressed when evaluating the technical feasibility, costs, and sustainability of large-scale algae production.
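
    A minimal lumped heat-balance sketch of the kind described in the abstract is shown below: broth temperature evolves as the sum of heat fluxes divided by the broth heat capacity. The flux terms, loss coefficient, and diurnal forcing are illustrative placeholders, not the calibrated model of the paper.

        # Lumped heat-balance sketch: dT/dt = (sum of heat fluxes) / (rho*cp*V).
        # Absorptivity, loss coefficient, and forcing are assumptions.
        import numpy as np

        RHO_CP = 4.18e6     # J m-3 K-1, water-like broth
        V = 0.05            # m3 (50 L)
        A_SURF = 1.0        # m2 illuminated/exchange area (assumed)
        H_LOSS = 30.0       # W m-2 K-1, combined convective/evaporative/radiative loss (assumed)

        def dT_dt(T_broth, solar_wm2, T_air):
            q_solar = 0.8 * solar_wm2 * A_SURF               # absorbed shortwave, assumed absorptivity
            q_loss = H_LOSS * A_SURF * (T_air - T_broth)     # linearized loss term, sign follows gradient
            return (q_solar + q_loss) / (RHO_CP * V)

        # Explicit Euler integration over one day with a crude diurnal forcing
        dt = 60.0                                            # s
        t = np.arange(0, 24 * 3600, dt)
        solar = np.maximum(0.0, 800.0 * np.sin(2 * np.pi * (t / 86400.0 - 0.25)))
        T_air = 27.0 + 4.0 * np.sin(2 * np.pi * (t / 86400.0 - 0.3))
        T = np.empty_like(t, dtype=float); T[0] = 27.0
        for i in range(1, len(t)):
            T[i] = T[i - 1] + dt * dT_dt(T[i - 1], solar[i - 1], T_air[i - 1])
        print(round(T.max(), 1), "degC peak broth temperature (illustrative)")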

  10. Modeling the wet bulb globe temperature using standard meteorological measurements.

    PubMed

    Liljegren, James C; Carhart, Richard A; Lawday, Philip; Tschopp, Stephen; Sharp, Robert

    2008-10-01

    The U.S. Army has a need for continuous, accurate estimates of the wet bulb globe temperature to protect soldiers and civilian workers from heat-related injuries, including those involved in the storage and destruction of aging chemical munitions at depots across the United States. At these depots, workers must don protective clothing that increases their risk of heat-related injury. Because of the difficulty in making continuous, accurate measurements of wet bulb globe temperature outdoors, the authors have developed a model of the wet bulb globe temperature that relies only on standard meteorological data available at each storage depot for input. The model is composed of separate submodels of the natural wet bulb and globe temperatures that are based on fundamental principles of heat and mass transfer, has no site-dependent parameters, and achieves an accuracy of better than 1 degree C based on comparisons with wet bulb globe temperature measurements at all depots.
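
    The substance of the paper is the heat- and mass-transfer submodels for the natural wet bulb (Tnw) and globe (Tg) temperatures; once those are estimated from standard meteorological data, they feed the standard outdoor WBGT weighting shown below.

        # Standard outdoor wet bulb globe temperature combination; the paper's
        # contribution is estimating Tnw and Tg from routine meteorology.
        def wbgt_outdoor(t_natural_wet_bulb, t_globe, t_air):
            """Outdoor WBGT in degC from natural wet bulb, globe, and air temperatures (degC)."""
            return 0.7 * t_natural_wet_bulb + 0.2 * t_globe + 0.1 * t_air

        print(wbgt_outdoor(25.0, 45.0, 32.0))   # 29.7 degC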

  11. Modeling the wet bulb globe temperature using standard meteorological measurements.

    SciTech Connect

    Liljegren, J. C.; Carhart, R. A.; Lawday, P.; Tschopp, S.; Sharp, R.; Decision and Information Sciences

    2008-10-01

    The U.S. Army has a need for continuous, accurate estimates of the wet bulb globe temperature to protect soldiers and civilian workers from heat-related injuries, including those involved in the storage and destruction of aging chemical munitions at depots across the United States. At these depots, workers must don protective clothing that increases their risk of heat-related injury. Because of the difficulty in making continuous, accurate measurements of wet bulb globe temperature outdoors, the authors have developed a model of the wet bulb globe temperature that relies only on standard meteorological data available at each storage depot for input. The model is composed of separate submodels of the natural wet bulb and globe temperatures that are based on fundamental principles of heat and mass transfer, has no site-dependent parameters, and achieves an accuracy of better than 1 C based on comparisons with wet bulb globe temperature measurements at all depots.

  12. The stability of hydrogen ion and specific conductance in filtered wet-deposition samples stored at ambient temperatures

    USGS Publications Warehouse

    Gordon, J.D.; Schroder, L.J.; Morden-Moore, A. L.; Bowersox, V.C.

    1995-01-01

    Separate experiments by the U.S. Geological Survey (USGS) and the Illinois State Water Survey Central Analytical Laboratory (CAL) independently assessed the stability of hydrogen ion and specific conductance in filtered wet-deposition samples stored at ambient temperatures. The USGS experiment represented a test of sample stability under a diverse range of conditions, whereas the CAL experiment was a controlled test of sample stability. In the experiment by the USGS, a statistically significant (α = 0.05) relation between [H+] and time was found for the composited filtered, natural, wet-deposition solution when all reported values are included in the analysis. However, if two outlying pH values most likely representing measurement error are excluded from the analysis, the change in [H+] over time was not statistically significant. In the experiment by the CAL, randomly selected samples were reanalyzed between July 1984 and February 1991. The original analysis and reanalysis pairs revealed that [H+] differences, although very small, were statistically different from zero, whereas specific-conductance differences were not. Nevertheless, the results of the CAL reanalysis project indicate there appears to be no consistent, chemically significant degradation in sample integrity with regard to [H+] and specific conductance while samples are stored at room temperature at the CAL. Based on the results of the CAL and USGS studies, short-term (45-60 day) stability of [H+] and specific conductance in natural filtered wet-deposition samples that are shipped and stored unchilled at ambient temperatures was satisfactory.

  13. Lee-Wick standard model at finite temperature

    NASA Astrophysics Data System (ADS)

    Lebed, Richard F.; Long, Andrew J.; TerBeek, Russell H.

    2013-10-01

    The Lee-Wick Standard Model at temperatures near the electroweak scale is considered, with the aim of studying the electroweak phase transition. While Lee-Wick theories possess states of negative norm, they are not pathological but instead are treated by imposing particular boundary conditions and using particular integration contours in the calculation of S-matrix elements. It is not immediately clear how to extend this prescription to formulate the theory at finite temperature; we explore two different pictures of finite-temperature Lee-Wick theories, and calculate the thermodynamic variables and the (one-loop) thermal effective potential. We apply these results to study the Lee-Wick Standard Model and find that the electroweak phase transition is a continuous crossover, much like in the Standard Model. However, the high-temperature behavior is modified due to cancellations between thermal corrections arising from the negative- and positive-norm states.

  14. A Temperature-Dependent Battery Model for Wireless Sensor Networks

    PubMed Central

    Rodrigues, Leonardo M.; Montez, Carlos; Moraes, Ricardo; Portugal, Paulo; Vasques, Francisco

    2017-01-01

    Energy consumption is a major issue in Wireless Sensor Networks (WSNs), as nodes are powered by chemical batteries with an upper bounded lifetime. Estimating the lifetime of batteries is a difficult task, as it depends on several factors, such as operating temperatures and discharge rates. Analytical battery models can be used for estimating both the battery lifetime and the voltage behavior over time. Still, available models usually do not consider the impact of operating temperatures on the battery behavior. The target of this work is to extend the widely-used Kinetic Battery Model (KiBaM) to include the effect of temperature on the battery behavior. The proposed Temperature-Dependent KiBaM (T-KiBaM) is able to handle operating temperatures, providing better estimates for the battery lifetime and voltage behavior. The performed experimental validation shows that T-KiBaM achieves an average accuracy error smaller than 0.33%, when estimating the lifetime of Ni-MH batteries for different temperature conditions. In addition, T-KiBaM significantly improves the original KiBaM voltage model. The proposed model can be easily adapted to handle other battery technologies, enabling the consideration of different WSN deployments. PMID:28241444

  15. A Temperature-Dependent Battery Model for Wireless Sensor Networks.

    PubMed

    Rodrigues, Leonardo M; Montez, Carlos; Moraes, Ricardo; Portugal, Paulo; Vasques, Francisco

    2017-02-22

    Energy consumption is a major issue in Wireless Sensor Networks (WSNs), as nodes are powered by chemical batteries with an upper bounded lifetime. Estimating the lifetime of batteries is a difficult task, as it depends on several factors, such as operating temperatures and discharge rates. Analytical battery models can be used for estimating both the battery lifetime and the voltage behavior over time. Still, available models usually do not consider the impact of operating temperatures on the battery behavior. The target of this work is to extend the widely-used Kinetic Battery Model (KiBaM) to include the effect of temperature on the battery behavior. The proposed Temperature-Dependent KiBaM (T-KiBaM) is able to handle operating temperatures, providing better estimates for the battery lifetime and voltage behavior. The performed experimental validation shows that T-KiBaM achieves an average accuracy error smaller than 0.33%, when estimating the lifetime of Ni-MH batteries for different temperature conditions. In addition, T-KiBaM significantly improves the original KiBaM voltage model. The proposed model can be easily adapted to handle other battery technologies, enabling the consideration of different WSN deployments.
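
    The base Kinetic Battery Model that T-KiBaM extends splits the charge between an available well and a bound well that exchange at rate k. The sketch below simulates that base model and adds a simple temperature derating of usable capacity; the derating law, parameters, and load are labelled assumptions and are not the paper's fitted T-KiBaM.

        # Base KiBaM sketch with a hypothetical temperature derating of capacity
        # (the paper's T-KiBaM uses its own fitted temperature dependence).
        def simulate_kibam(i_load, capacity_as, c=0.6, k=1e-4, dt=1.0, temp_c=25.0):
            """Return time (s) until the available charge well is exhausted."""
            cap = capacity_as * (1.0 - 0.005 * (25.0 - temp_c))   # assumed derating, illustration only
            y1, y2 = c * cap, (1.0 - c) * cap                     # available and bound charge wells
            t = 0.0
            while y1 > 0.0:
                h1, h2 = y1 / c, y2 / (1.0 - c)
                flow = k * (h2 - h1)                              # bound -> available diffusion
                y1 += dt * (-i_load + flow)
                y2 += dt * (-flow)
                t += dt
            return t

        for temp in (0.0, 25.0, 40.0):
            hours = simulate_kibam(i_load=0.05, capacity_as=2.0 * 3600, temp_c=temp) / 3600
            print(f"{temp:>5.1f} degC -> {hours:.2f} h (illustrative)")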

  16. Event-based stormwater management pond runoff temperature model

    NASA Astrophysics Data System (ADS)

    Sabouri, F.; Gharabaghi, B.; Sattar, A. M. A.; Thompson, A. M.

    2016-09-01

    Stormwater management wet ponds are generally very shallow and hence can significantly increase (about 5.4 °C on average in this study) runoff temperatures in summer months, which adversely affects receiving urban stream ecosystems. This study uses gene expression programming (GEP) and artificial neural network (ANN) modeling techniques to advance our knowledge of the key factors governing the thermal enrichment effects of stormwater ponds. The models developed in this study build upon and complement the ANN model developed by Sabouri et al. (2013) that predicts the catchment event mean runoff temperature entering the pond as a function of event climatic and catchment characteristic parameters. The key factors that control pond outlet runoff temperature include: (1) Upland Catchment Parameters (catchment drainage area and event mean runoff temperature inflow to the pond); (2) Climatic Parameters (rainfall depth, event mean air temperature, and pond initial water temperature); and (3) Pond Design Parameters (pond length-to-width ratio, pond surface area, pond average depth, and pond outlet depth). We used monitoring data for three summers from 2009 to 2011 in four stormwater management ponds, located in the cities of Guelph and Kitchener, Ontario, Canada, to develop the models. The prediction uncertainties of the developed ANN and GEP models for the case study sites are around 0.4% and 1.7% of the median value. Sensitivity analysis of the trained models indicates that the thermal enrichment of the pond outlet runoff is inversely proportional to pond length-to-width ratio and pond outlet depth, and directly proportional to event runoff volume, event mean pond inflow runoff temperature, and pond initial water temperature.
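
    The abstract does not give the network architecture, so the sketch below simply wires the nine listed predictors into scikit-learn's MLPRegressor as a stand-in ANN; the file name, column names, and hyperparameters are assumptions.

        # Stand-in ANN for pond outlet runoff temperature using the nine predictors
        # listed in the abstract; data file and hyperparameters are hypothetical.
        import pandas as pd
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        predictors = ["drainage_area", "inflow_runoff_temp",                     # upland catchment
                      "rain_depth", "mean_air_temp", "initial_pond_temp",        # climatic
                      "length_to_width", "surface_area", "avg_depth", "outlet_depth"]  # pond design

        events = pd.read_csv("pond_events.csv")                                  # hypothetical monitoring data
        X_train, X_test, y_train, y_test = train_test_split(
            events[predictors], events["outlet_runoff_temp"], test_size=0.2, random_state=0)

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))
        model.fit(X_train, y_train)
        print("R^2 on held-out events:", model.score(X_test, y_test))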

  17. A model for estimating the value of sampling programs and the optimal number of samples for contaminated soil

    NASA Astrophysics Data System (ADS)

    Back, Pär-Erik

    2007-04-01

    A model is presented for estimating the value of information of sampling programs for contaminated soil. The purpose is to calculate the optimal number of samples when the objective is to estimate the mean concentration. A Bayesian risk-cost-benefit decision analysis framework is applied and the approach is design-based. The model explicitly includes sample uncertainty at a complexity level that can be applied to practical contaminated land problems with limited amount of data. Prior information about the contamination level is modelled by probability density functions. The value of information is expressed in monetary terms. The most cost-effective sampling program is the one with the highest expected net value. The model was applied to a contaminated scrap yard in Göteborg, Sweden, contaminated by metals. The optimal number of samples was determined to be in the range of 16-18 for a remediation unit of 100 m2. Sensitivity analysis indicates that the perspective of the decision-maker is important, and that the cost of failure and the future land use are the most important factors to consider. The model can also be applied for other sampling problems, for example, sampling and testing of wastes to meet landfill waste acceptance procedures.
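
    A compact Monte Carlo sketch of the value-of-information idea is given below: for each candidate number of samples n, simulate the remediation decision based on the sample mean and compare expected total costs (minimizing expected cost is equivalent to maximizing expected net value). The prior, the sampling variability, the regulatory limit, and all costs are illustrative assumptions, not the paper's case-study values.

        # Monte Carlo sketch of choosing the optimal number of samples by expected
        # cost; priors, costs, and the action threshold are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        PRIOR_MEAN, PRIOR_SD = 80.0, 40.0      # prior on true mean concentration (mg/kg)
        SAMPLING_SD = 30.0                     # within-unit sampling variability
        LIMIT = 100.0                          # regulatory limit
        COST_SAMPLE, COST_REMEDIATE, COST_FAILURE = 300.0, 50_000.0, 400_000.0

        def expected_total_cost(n, n_mc=20000):
            true_mean = rng.normal(PRIOR_MEAN, PRIOR_SD, n_mc)
            sample_mean = true_mean + rng.normal(0.0, SAMPLING_SD / np.sqrt(n), n_mc)
            remediate = sample_mean > LIMIT
            failure = (~remediate) & (true_mean > LIMIT)    # contaminated unit left in place
            cost = n * COST_SAMPLE + remediate * COST_REMEDIATE + failure * COST_FAILURE
            return cost.mean()

        costs = {n: expected_total_cost(n) for n in range(2, 31)}
        best_n = min(costs, key=costs.get)
        print("optimal number of samples (illustrative):", best_n)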

  18. Assessment of two-temperature kinetic model for ionizing air

    NASA Technical Reports Server (NTRS)

    Park, Chul

    1987-01-01

    A two-temperature chemical-kinetic model for air is assessed by comparing theoretical results with existing experimental data obtained in shock tubes, ballistic ranges, and flight experiments. In the model, named the TTv model, one temperature (T) is assumed to characterize the heavy-particle translational and molecular rotational energies, and another temperature (Tv) to characterize the molecular vibrational, electron translational, and electronic excitation energies. The theoretical results for nonequilibrium air flow in shock tubes are obtained using the computer code STRAP (Shock-Tube Radiation Program), and those for flow along the stagnation streamline in the shock layer over spherical bodies using the newly developed code SPRAP (Stagnation-Point Radiation Program). Substantial agreement is shown between the theoretical and experimental results for relaxation times and radiative heat fluxes. At very high temperatures the spectral calculations need further improvement. The present agreement provides strong evidence that the two-temperature model characterizes the principal features of nonequilibrium air flow. New theoretical results using the model are presented for the radiative heat fluxes at the stagnation point of a 6-m-radius sphere, representing an aeroassisted orbital transfer vehicle, over a range of free-stream conditions. Assumptions, approximations, and limitations of the model are discussed.

  19. Estimating transient climate response using consistent temperature reconstruction methods in models and observations

    NASA Astrophysics Data System (ADS)

    Richardson, M.; Cowtan, K.; Hawkins, E.; Stolpe, M.

    2015-12-01

    Observational temperature records such as HadCRUT4 typically have incomplete geographical coverage and blend air temperature over land with sea surface temperatures over ocean, in contrast to model output which is commonly reported as global air temperature. This complicates estimation of properties such as the transient climate response (TCR). Observation-based estimates of TCR have been made using energy-budget constraints applied to time series of historical radiative forcing and surface temperature changes, while model TCR is formally derived from simulations where CO2 increases at 1% per year. We perform a like-with-like comparison using three published energy-budget methods to derive modelled TCR from historical CMIP5 temperature series sampled in a manner consistent with HadCRUT4. Observation-based TCR estimates agree to within 0.12 K of the multi-model mean in each case and for 2 of the 3 energy-budget methods the observation-based TCR is higher than the multi-model mean. For one energy-budget method, using the HadCRUT4 blending method leads to a TCR underestimate of 0.3±0.1 K, relative to that estimated using global near-surface air temperatures.
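
    The energy-budget constraint used in such studies takes the simple form TCR ≈ F_2x ΔT/ΔF, where ΔT and ΔF are the changes in global temperature and radiative forcing between a base and a recent period and F_2x is the forcing for doubled CO2. The numbers below are round illustrative values, not the paper's estimates.

        # Energy-budget TCR estimate: TCR ~= F_2x * dT / dF. Inputs are illustrative.
        F_2X = 3.7           # W m-2, commonly used forcing for CO2 doubling

        def tcr_energy_budget(delta_T, delta_F, f_2x=F_2X):
            return f_2x * delta_T / delta_F

        print(round(tcr_energy_budget(delta_T=0.75, delta_F=1.9), 2), "K (illustrative)")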

  20. Unraveling the Beautiful Complexity of Simple Lattice Model Polymers and Proteins Using Wang-Landau Sampling

    NASA Astrophysics Data System (ADS)

    Wüst, T.; Li, Y. W.; Landau, D. P.

    2011-08-01

    We describe a class of "bare bones" models of homopolymers which undergo coil-globule collapse and proteins which fold into their native states in free space or into denatured states when captured by an attractive substrate as the temperature is lowered. We then show how, with the use of a properly chosen trial move set, Wang-Landau Monte Carlo sampling can be used to study the rough free energy landscape and ground (native) states of these intriguingly simple systems and thus elucidate their thermodynamic complexity.
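
    The sketch below applies the same Wang-Landau algorithm to a small periodic 2D Ising lattice rather than to the lattice polymer/protein models of the talk: a random walk in energy space accepted with probability min(1, g(E_old)/g(E_new)) that iteratively flattens the energy histogram and estimates the density of states. Lattice size, flatness criterion, and stopping factor are chosen only to keep the example fast.

        # Wang-Landau sampling for a 4x4 periodic Ising model (illustrative stand-in
        # for the lattice polymer/protein systems discussed in the abstract).
        import numpy as np

        L = 4
        N = L * L
        rng = np.random.default_rng(0)
        spins = rng.choice([-1, 1], size=(L, L))

        def total_energy(s):
            return -int(np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))

        energies = np.arange(-2 * N, 2 * N + 1, 4)          # possible energy levels
        index = {int(e): i for i, e in enumerate(energies)}
        log_g = np.zeros(len(energies))                     # ln g(E), up to a constant
        hist = np.zeros(len(energies))

        E = total_energy(spins)
        ln_f = 1.0
        while ln_f > 1e-4:                                  # modification-factor schedule
            for _ in range(20000):
                i, j = rng.integers(L, size=2)
                nn = spins[(i+1) % L, j] + spins[(i-1) % L, j] + spins[i, (j+1) % L] + spins[i, (j-1) % L]
                dE = 2 * int(spins[i, j]) * int(nn)
                # accept with probability min(1, g(E)/g(E+dE))
                if np.log(rng.random()) < log_g[index[E]] - log_g[index[E + dE]]:
                    spins[i, j] *= -1
                    E += dE
                log_g[index[E]] += ln_f
                hist[index[E]] += 1
            visited = hist > 0                              # crude flatness check
            if hist[visited].min() > 0.8 * hist[visited].mean():
                hist[:] = 0.0
                ln_f /= 2.0

        # Ground state and maximal-energy state both have degeneracy 2 here,
        # so this difference should come out near zero.
        print(log_g[index[-2 * N]] - log_g[index[2 * N]])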

  1. A non-intrusive method for temperature measurements in flames produced by milligram-sized solid samples

    NASA Astrophysics Data System (ADS)

    Frances, Colleen Elizabeth

    Fires are responsible for the loss of thousands of lives and billions of dollars in property damage each year in the United States. Flame retardants can assist in the prevention of fires through mechanisms which either prevent or greatly inhibit flame spread and development. In this study, samples of both brominated and non-brominated polystyrene were tested in the Milligram-scale Flaming Calorimeter, and images captured with two DSLR cameras were analyzed to determine flame temperatures through use of a non-intrusive method. Based on the flame temperature measurement results, a better understanding of the gas-phase mechanisms of flame retardants may result, as temperature is an important diagnostic in the study of fire and combustion. Measurements taken at 70% of the total flame height resulted in average maximum temperatures of about 1656 K for polystyrene and about 1614 K for brominated polystyrene, suggesting that the polymer flame retardant may reduce flame temperatures.

  2. Temperature dependence of heterogeneous nucleation: Extension of the Fletcher model

    NASA Astrophysics Data System (ADS)

    McGraw, Robert; Winkler, Paul; Wagner, Paul

    2015-04-01

    Recently, several cases have been reported where the critical saturation ratio for the onset of heterogeneous nucleation increases with nucleation temperature (positive slope dependence). This behavior contrasts with the behavior observed in homogeneous nucleation, where a decreasing critical saturation ratio with increasing nucleation temperature (negative slope dependence) seems universal. For this reason the positive slope dependence is referred to as anomalous. Negative slope dependence is found in heterogeneous nucleation as well, but because so few temperature-dependent measurements have been reported, it is not presently clear which slope condition (positive or negative) will prove more frequent. Especially interesting is the case of water vapor condensation on silver nanoparticles [Kupc et al., AS&T 47: i-iv, 2013], where the critical saturation ratio for heterogeneous nucleation onset passes through a maximum, at about 278 K, with higher (lower) temperatures showing the usual (anomalous) temperature dependence. In the present study we develop an extension of Fletcher's classical, capillarity-based model of heterogeneous nucleation that explicitly resolves the roles of surface energy and surface entropy in determining temperature dependence. Application of the second nucleation theorem, which relates the temperature dependence of the nucleation rate to cluster energy, yields both necessary and sufficient conditions for anomalous temperature behavior in the extended Fletcher model. In particular, it is found that an increasing contact angle with temperature is a necessary, but not sufficient, condition for anomalous temperature dependence to occur. Methods for inferring the microscopic contact angle and its temperature dependence from heterogeneous nucleation probability measurements are discussed in light of the new theory.

  3. LOW TEMPERATURE X-RAY DIFFRACTION STUDIES OF NATURAL GAS HYDRATE SAMPLES FROM THE GULF OF MEXICO

    SciTech Connect

    Rawn, Claudia J; Sassen, Roger; Ulrich, Shannon M; Phelps, Tommy Joe; Chakoumakos, Bryan C; Payzant, E Andrew

    2008-01-01

    Clathrate hydrates of methane and other small alkanes occur widely in marine sediments of the continental margins and in permafrost sediments of the Arctic. Quantitative study of natural clathrate hydrates is hampered by the difficulty in obtaining pristine samples, particularly from submarine environments. Bringing samples of clathrate hydrate up from the seafloor without compromising their integrity is not trivial. Most physical property measurements are based on studies of laboratory-synthesized samples. Here we report X-ray powder diffraction measurements of a natural gas hydrate sample from the Green Canyon, Gulf of Mexico. The first data were collected in 2002 and revealed ice and structure II gas hydrate. In the intervening time the sample has been stored in liquid nitrogen. More recent X-ray powder diffraction data have been collected as functions of temperature and time. These new data indicate that the larger sample is heterogeneous in ice content and show that the amount of sII hydrate decreases with increasing temperature and time, as expected. However, the dissociation rate is higher at lower temperatures and earlier in the experiment.

  4. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    USGS Publications Warehouse

    Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, G.G.

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
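
    The methodological point that resubstitution accuracy overstates predictive accuracy relative to cross-validation is easy to reproduce with a classification tree on synthetic presence/absence data, as in the sketch below (this is not the lichen data set).

        # Resubstitution vs. cross-validation accuracy for an unpruned tree on
        # synthetic data; illustrates the paper's methodological point only.
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=300, n_features=10, n_informative=4, random_state=0)

        resub = DecisionTreeClassifier(random_state=0).fit(X, y).score(X, y)      # resubstitution
        cv = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10).mean()
        print(f"resubstitution accuracy: {resub:.2f}, 10-fold CV accuracy: {cv:.2f}")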

  5. An Analytic Function of Lunar Surface Temperature for Exospheric Modeling

    NASA Technical Reports Server (NTRS)

    Hurley, Dana M.; Sarantos, Menelaos; Grava, Cesare; Williams, Jean-Pierre; Retherford, Kurt D.; Siegler, Matthew; Greenhagen, Benjamin; Paige, David

    2014-01-01

    We present an analytic expression to represent the lunar surface temperature as a function of Sun-state latitude and local time. The approximation represents neither topographical features nor compositional effects and therefore does not change as a function of selenographic latitude and longitude. The function reproduces the surface temperature measured by Diviner to within +/-10 K at 72% of grid points for dayside solar zenith angles of less than 80°, and at 98% of grid points for nightside solar zenith angles greater than 100°. The analytic function is least accurate at the terminator, where there is a strong gradient in the temperature, and in the polar regions. Topographic features have a larger effect on the actual temperature near the terminator than at other solar zenith angles. For exospheric modeling, the effects of topography on the thermal model can be approximated by using an effective longitude for determining the temperature. This effective longitude is randomly redistributed with a 1-sigma spread of 4.5°. The resulting ''roughened'' analytical model well represents the statistical dispersion in the Diviner data and is expected to be generally useful for future models of lunar surface temperature, especially those implemented within exospheric simulations that address questions of volatile transport.

  6. Forecasting Groundwater Temperature with Linear Regression Models Using Historical Data.

    PubMed

    Figura, Simon; Livingstone, David M; Kipfer, Rolf

    2015-01-01

    Although temperature is an important determinant of many biogeochemical processes in groundwater, very few studies have attempted to forecast the response of groundwater temperature to future climate warming. Using a composite linear regression model based on the lagged relationship between historical groundwater and regional air temperature data, empirical forecasts were made of groundwater temperature in several aquifers in Switzerland up to the end of the current century. The model was fed with regional air temperature projections calculated for greenhouse-gas emissions scenarios A2, A1B, and RCP3PD. Model evaluation revealed that the approach taken is adequate only when the data used to calibrate the models are sufficiently long and contain sufficient variability. These conditions were satisfied for three aquifers, all fed by riverbank infiltration. The forecasts suggest that with respect to the reference period 1980 to 2009, groundwater temperature in these aquifers will most likely increase by 1.1 to 3.8 K by the end of the current century, depending on the greenhouse-gas emissions scenario employed.
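
    A sketch of the composite lagged-regression idea is shown below: groundwater temperature is regressed on regional air temperature lagged by k years, the best lag is selected, and the fitted relationship is then driven with a projected air-temperature change. The lag search range, synthetic data, and the +2.5 K scenario are illustrative assumptions.

        # Lagged linear regression of groundwater temperature on air temperature,
        # with an illustrative forecast under an assumed air-warming scenario.
        import numpy as np

        def fit_lagged_regression(air, gw, max_lag=5):
            best = None
            for lag in range(max_lag + 1):
                x, y = air[: len(air) - lag], gw[lag:]
                slope, intercept = np.polyfit(x, y, 1)
                score = np.var(y - (slope * x + intercept))
                if best is None or score < best[0]:
                    best = (score, lag, slope, intercept)
            return best[1:]

        rng = np.random.default_rng(0)
        years = np.arange(1980, 2010)
        air = 9.0 + 0.03 * (years - 1980) + rng.normal(0, 0.4, len(years))
        gw = np.empty_like(air)
        gw[2:] = 3.0 + 0.8 * air[:-2] + rng.normal(0, 0.1, len(air) - 2)   # ~2-year lagged response
        gw[:2] = gw[2]

        lag, slope, intercept = fit_lagged_regression(air, gw)
        projection = intercept + slope * (air[-1] + 2.5)    # groundwater temp under +2.5 K air warming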

  7. Evaluating Small Sample Approaches for Model Test Statistics in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Nevitt, Jonathan; Hancock, Gregory R.

    2004-01-01

    Through Monte Carlo simulation, small sample methods for evaluating overall data-model fit in structural equation modeling were explored. Type I error behavior and power were examined using maximum likelihood (ML), Satorra-Bentler scaled and adjusted (SB; Satorra & Bentler, 1988, 1994), residual-based (Browne, 1984), and asymptotically…

  8. Space Weathering of Olivine: Samples, Experiments and Modeling

    NASA Technical Reports Server (NTRS)

    Keller, L. P.; Berger, E. L.; Christoffersen, R.

    2016-01-01

    Olivine is a major constituent of chondritic bodies and its response to space weathering processes likely dominates the optical properties of asteroid regoliths (e.g. S- and many C-type asteroids). Analyses of olivine in returned samples and laboratory experiments provide details and insights regarding the mechanisms and rates of space weathering. Analyses of olivine grains from lunar soils and asteroid Itokawa reveal that they display solar wind damaged rims that are typically not amorphized despite long surface exposure ages, which are inferred from solar flare track densities (up to 10⁷ y). The olivine damaged rim width rapidly approaches approximately 120 nm in approximately 10⁶ y and then reaches steady state with longer exposure times. The damaged rims are nanocrystalline with high dislocation densities, but crystalline order exists up to the outermost exposed surface. Sparse nanophase Fe metal inclusions occur in the damaged rims and are believed to be produced during irradiation through preferential sputtering of oxygen from the rims. The observed space weathering effects in lunar and Itokawa olivine grains are difficult to reconcile with laboratory irradiation studies and our numerical models, which indicate that olivine surfaces should readily blister and amorphize on relatively short time scales (less than 10³ y). These results suggest that it is not the ion fluence alone, but also the ion flux, that controls the type and extent of irradiation damage that develops in olivine. This flux dependence argues for caution in extrapolating between high-flux laboratory experiments and the natural case. Additional measurements, experiments, and modeling are required to resolve the discrepancies among the observations and calculations involving solar wind processing of olivine.

  9. Ignition and temperature behavior of a single-wall carbon nanotube sample.

    PubMed

    Volotskova, O; Shashurin, A; Keidar, M; Raitses, Y; Demidov, V; Adams, S

    2010-03-05

    The electrical resistance of mats of single-wall carbon nanotubes (SWNTs) is measured as a function of mat temperature under various helium pressures, in vacuum, and in atmospheric air. The objective of this paper is to study the thermal stability of SWNTs produced in a helium arc discharge under experimental conditions close to the natural conditions of SWNT growth in an arc, using a furnace instead of an arc discharge. For each tested condition, there is a temperature threshold at which the mat's resistance reaches its minimum. The threshold value depends on the helium pressure. An increase of the temperature above the temperature threshold leads to the destruction of SWNT bundles at a certain critical temperature. For instance, the critical temperature is about 1100 K in the case of a helium background at a pressure of about 500 Torr. Based on the experimental data on critical temperature, it is suggested that SWNTs produced by an anodic arc discharge and collected in the web area outside the arc plasma most likely originate from the arc discharge peripheral region.

  10. River water temperature and fish growth forecasting models

    NASA Astrophysics Data System (ADS)

    Danner, E.; Pike, A.; Lindley, S.; Mendelssohn, R.; Dewitt, L.; Melton, F. S.; Nemani, R. R.; Hashimoto, H.

    2010-12-01

    Water is a valuable, limited, and highly regulated resource throughout the United States. When making decisions about water allocations, state and federal water project managers must consider the short-term and long-term needs of agriculture, urban users, hydroelectric production, flood control, and the ecosystems downstream. In the Central Valley of California, river water temperature is a critical indicator of habitat quality for endangered salmonid species and affects re-licensing of major water projects and dam operations worth billions of dollars. There is consequently strong interest in modeling water temperature dynamics and the subsequent impacts on fish growth in such regulated rivers. However, the accuracy of current stream temperature models is limited by the lack of spatially detailed meteorological forecasts. To address these issues, we developed a high-resolution deterministic 1-dimensional stream temperature model (sub-hourly time step, sub-kilometer spatial resolution) in a state-space framework, and applied this model to the Upper Sacramento River. We then adapted salmon bioenergetics models to incorporate the temperature data at sub-hourly time steps to provide more realistic estimates of salmon growth. The temperature model uses physically-based heat budgets to calculate the rate of heat transfer to/from the river. We use variables provided by the TOPS-WRF (Terrestrial Observation and Prediction System - Weather Research and Forecasting) model—a high-resolution assimilation of satellite-derived meteorological observations and numerical weather simulations—as inputs. The TOPS-WRF framework allows us to improve the spatial and temporal resolution of stream temperature predictions. The salmon growth models are adapted from the Wisconsin bioenergetics model. We have made the output from both models available on an interactive website so that water and fisheries managers can determine the past, current and three-day forecasted water temperatures at

  11. A constitutive model with damage for high temperature superalloys

    NASA Technical Reports Server (NTRS)

    Sherwood, J. A.; Stouffer, D. C.

    1988-01-01

    A unified constitutive model applicable to the high temperature superalloys used in modern gas turbines is sought. Two unified inelastic state variable constitutive models were evaluated for use with the damage parameter proposed by Kachanov. The first (Bodner-Partom) models hardening through the use of a single state variable that is similar to a drag stress. The other (Ramaswamy) employs both a drag stress and a back stress. The extension was successful in predicting the tensile, creep, fatigue, torsional, and nonproportional response of Rene' 80 at several temperatures. In both formulations, a cumulative damage parameter is introduced to model the changes in material properties due to the formation of microcracks and microvoids that ultimately produce a macroscopic crack. A back stress/drag stress/damage model was evaluated for Rene' 95 at 1200 F and is shown to predict the tensile, creep, and cyclic loading responses reasonably well.

  12. Applications of a New England stream temperature model to ...

    EPA Pesticide Factsheets

    We have applied a statistical stream network (SSN) model to predict stream thermal metrics (summer monthly medians, growing season maximum magnitude and timing, and daily rates of change) across New England nontidal streams and rivers, excluding northern Maine watersheds that extend into Canada (Detenbeck et al., in review). We excluded stream temperature observations within one kilometer downstream of dams from our model development, so our predictions for those reaches represent potential thermal regimes in the absence of dam effects. We used stream thermal thresholds for mean July temperatures delineating transitions between coldwater, transitional coolwater, and warmwater fish communities derived by Beauchene et al. (2014) to classify expected stream and river thermal regimes across New England. Within the model domain and based on 2006 land use and air temperatures, the model predicts that 21.8% of stream and river kilometers would support coldwater fish communities, with the remaining kilometers classified as transitional coolwater or warmwater (the latter above 22.3 degrees C mean July temperatures). Application of the model allows us to assess potential condition given full riparian zone restoration as well as potential loss of cold or coolwater habitat given loss of riparian shading. Given restoration of all ripa

  13. Quantum coherence of spin-boson model at finite temperature

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Xu, Jing-Bo

    2017-02-01

    We investigate the dynamical behavior of quantum coherence in spin-boson model, which consists of a qubit coupled to a finite-temperature bosonic bath with power-law spectral density beyond rotating wave approximation, by employing l1-norm as well as quantum relative entropy. It is shown that the temperature of bosonic bath and counter-rotating terms significantly affect the decoherence rate in sub-Ohmic, Ohmic and super-Ohmic baths. At high temperature, we find the counter-rotating terms of spin-boson model are able to increase the decoherence rate for sub-Ohmic baths, however, for Ohmic and super-Ohmic baths, the counter-rotating terms tend to decrease the value of decoherence rate. At low temperature, we find the counter-rotating terms always play a positive role in preserving the qubit's quantum coherence regardless of sub-Ohmic, Ohmic and super-Ohmic baths.
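
    For reference, the l1-norm of coherence used here is simply the sum of the absolute values of the off-diagonal elements of the density matrix; the sketch below evaluates it for a single qubit partially dephased by a bath, with the dephasing factor as an assumed illustrative value.

        # l1-norm of coherence: C_l1(rho) = sum_{i!=j} |rho_ij|.
        import numpy as np

        def l1_coherence(rho):
            return np.abs(rho).sum() - np.trace(np.abs(rho))

        plus = np.array([[0.5, 0.5], [0.5, 0.5]])         # |+><+|, maximally coherent qubit
        gamma = 0.7                                        # assumed dephasing factor from the bath
        dephased = np.array([[0.5, 0.5 * gamma], [0.5 * gamma, 0.5]])
        print(l1_coherence(plus), l1_coherence(dephased))  # 1.0 and 0.7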

  14. Three-point bending setup for piezoresistive gauge factor measurement of thin-film samples at high temperatures.

    PubMed

    Madsen, Nis Dam; Kjelstrup-Hansen, Jakob

    2017-01-01

    We present a new method for measuring the piezoresistive gauge factor of a thin-film resistor based on three-point bending. A ceramic fixture has been designed and manufactured to fit a state-of-the-art mechanical testing apparatus (TA Instruments Q800). The method has been developed to test thin-film samples deposited on silicon substrates with an insulating layer of SiO2. The electrical connections to the resistor are achieved through contacts in the support points. This ensures that the influence of the electrical contacts is reduced to a minimum and eliminates wire-bonding or connectors attached to the sample. During measurement, both force and deflection of the sample are recorded simultaneously with the electrical data. The data analysis extracts a precise measurement of the sample thickness (<1% error) in addition to the gauge factor and the temperature coefficient of resistivity. The sample thickness is a critical parameter for an accurate calculation of the strain in the thin-film resistor. This method provides a faster sample evaluation by eliminating an additional sample thickness measurement or alternatively an option for cross checking data. Furthermore, the method implements a full compensation of thermoelectrical effects, which could otherwise lead to significant errors at high temperature. We also discuss the magnitude of the error sources in the setup. The performance of the setup is demonstrated using a titanium nitride thin-film, which is tested up to 400 °C revealing the gauge factor behavior in this temperature span and the temperature coefficient of resistivity.
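
    The quantities behind such a measurement can be sketched as follows: standard beam theory gives the centre surface strain of a simply supported beam loaded at mid-span as eps = 6*t*d/L^2 (thickness t, deflection d, span L), and the gauge factor is GF = (dR/R0)/eps. The geometry and resistance values below are illustrative, not data from the paper.

        # Gauge factor from three-point bending data (illustrative numbers).
        def centre_surface_strain(thickness_m, deflection_m, span_m):
            """Surface strain at mid-span of a simply supported, centre-loaded beam."""
            return 6.0 * thickness_m * deflection_m / span_m ** 2

        def gauge_factor(r0_ohm, r_ohm, strain):
            return (r_ohm - r0_ohm) / r0_ohm / strain

        eps = centre_surface_strain(thickness_m=525e-6, deflection_m=100e-6, span_m=30e-3)
        gf = gauge_factor(r0_ohm=1000.0, r_ohm=1000.70, strain=eps)
        print(f"strain = {eps:.2e}, gauge factor = {gf:.1f}")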

  15. Low-temperature dynamic nuclear polarization with helium-cooled samples and nitrogen-driven magic-angle spinning.

    PubMed

    Thurber, Kent; Tycko, Robert

    2016-03-01

    We describe novel instrumentation for low-temperature solid state nuclear magnetic resonance (NMR) with dynamic nuclear polarization (DNP) and magic-angle spinning (MAS), focusing on aspects of this instrumentation that have not been described in detail in previous publications. We characterize the performance of an extended interaction oscillator (EIO) microwave source, operating near 264 GHz with 1.5 W output power, which we use in conjunction with a quasi-optical microwave polarizing system and a MAS NMR probe that employs liquid helium for sample cooling and nitrogen gas for sample spinning. Enhancement factors for cross-polarized ¹³C NMR signals in the 100-200 range are demonstrated with DNP at 25K. The dependences of signal amplitudes on sample temperature, as well as microwave power, polarization, and frequency, are presented. We show that sample temperatures below 30K can be achieved with helium consumption rates below 1.3 L/h. To illustrate potential applications of this instrumentation in structural studies of biochemical systems, we compare results from low-temperature DNP experiments on a calmodulin-binding peptide in its free and bound states.

  16. Effect of the sample annealing temperature and sample crystallographic orientation on the charge kinetics of MgO single crystals subjected to keV electron irradiation.

    PubMed

    Boughariou, A; Damamme, G; Kallel, A

    2015-04-01

    This paper focuses on the effect of sample annealing temperature and crystallographic orientation on the secondary electron yield of MgO during charging by defocused electron beam irradiation. The experimental results show that there are two regimes during the charging process that are better identified by plotting the logarithm of the secondary electron emission yield, lnσ, as a function of the total trapped charge in the material, QT. The impact of the annealing temperature and crystallographic orientation on the evolution of lnσ is presented here. The slope of the asymptotic regime of the curve of lnσ as a function of QT, expressed in cm² per trapped charge, is probably linked to the elementary cross section of electron-hole recombination, σhole, which controls the trapping evolution as the stationary flow regime is approached.

  17. Effects of coagulation temperature on measurements of complement function in serum samples from patients with systemic lupus erythematosus.

    PubMed Central

    Baatrup, G; Sturfelt, G; Junker, A; Svehag, S E

    1992-01-01

    Blood samples from 15 patients with systemic lupus erythematosus (SLE) and 15 healthy blood donors were allowed to coagulate for one hour at room temperature, followed by one hour at 4 or 37 degrees C. The complement activity of the serum samples was assessed by three different functional assays. Serum samples from patients with SLE obtained by coagulation at 37 degrees C had a lower complement activity than serum samples from blood coagulated at 4 degrees C when the capacity of the serum samples to solubilise precipitable immune complexes and to support the attachment of complement factors to solid phase immune complexes was determined. Haemolytic complement activity was not affected by the coagulation temperature. The content of C1q binding immune complexes in paired serum samples obtained after coagulation at 4 and 37 degrees C was similar, and the size distribution of the immune complexes, determined by high performance gel permeation chromatography, was also similar. This study shows that the results of functional complement assays applied to serum samples from patients with SLE cannot be compared unless the conditions for blood coagulation and serum handling are defined and are the same. The data also indicate that assays measuring complement mediated solubilisation of immune complexes and the fixation of complement factors to solid phase immune complexes are more sensitive indicators of complement activity than the haemolytic assay. PMID:1632665

  18. On the temperature model of CO₂ lasers

    SciTech Connect

    Nevdakh, Vladimir V; Ganjali, Monireh; Arshinov, K I

    2007-03-31

    A refined temperature model of CO₂ lasers is presented, which takes into account the fact that the vibrational modes of the CO₂ molecule share a common ground vibrational level. New formulas for the occupation numbers and the vibrational energy storage in individual modes are obtained, as well as expressions relating the vibrational temperatures of the CO₂ molecules to the excitation and relaxation rates of the lower vibrational levels of the modes upon excitation of the CO₂-N₂-He mixture in an electric discharge. The character of the dependences of the vibrational temperatures on the discharge current is discussed. (active media)

  19. Temperature-dependent bursting pattern analysis by modified Plant model

    PubMed Central

    2014-01-01

    Many electrophysiological properties of neurons, including firing rates and rhythmical oscillation, change in response to a temperature variation, but the mechanism underlying these correlations remains unverified. In this study, we analyzed various action potential (AP) parameters of bursting pacemaker neurons in the abdominal ganglion of Aplysia juliana to examine whether or not bursting patterns are altered in response to temperature change. Here we found that the inter-burst interval, burst duration, and number of spikes per burst decreased as temperature increased. On the other hand, the number of bursts per minute and the number of spikes per minute first increased and then decreased, whereas the interspike interval during bursts first decreased and then increased. We also tested the reproducibility of temperature-dependent changes in bursting patterns and AP parameters. Finally, we performed computational simulations of these phenomena by using a modified Plant model composed of equations with temperature-dependent scaling factors to mathematically clarify the temperature-dependent changes of bursting patterns in burst-firing neurons. Taken together, we found that the modified Plant model could trace the ionic mechanism underlying the temperature-dependent change in bursting pattern observed in experiments with bursting pacemaker neurons in the abdominal ganglia of Aplysia juliana. PMID:25051923
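
    The abstract does not reproduce the modified equations, but a common way to build temperature-dependent scaling factors into a conductance-based burster such as the Plant model is to multiply gating rates by Q10-type factors. The sketch below illustrates only that generic idea; the Q10 values and function names are assumptions, not the parameters used in the study.

```python
# Hedged sketch: one common way to introduce temperature dependence into a
# conductance-based burster such as the Plant model is to multiply gating rates
# (and sometimes maximal conductances) by Q10-type scaling factors.  The exact
# factors used in the paper are not reproduced here; the Q10 values below are
# illustrative assumptions.
def q10_factor(temp_c, q10=3.0, ref_temp_c=22.0):
    """Temperature scaling factor relative to a reference temperature."""
    return q10 ** ((temp_c - ref_temp_c) / 10.0)

def scaled_rates(temp_c, alpha, beta, q10_gates=3.0):
    """Scale channel opening/closing rates alpha, beta by a common Q10 factor."""
    phi = q10_factor(temp_c, q10_gates)
    return phi * alpha, phi * beta

# Example: gating kinetics roughly triple for a 10 degC warming.
print(scaled_rates(32.0, alpha=0.1, beta=0.125))  # ~ (0.3, 0.375)
```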

  20. Heat Transfer Modeling for Rigid High-Temperature Fibrous Insulation

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran; Cunnington, George R.; Knutson, Jeffrey R.

    2012-01-01

    Combined radiation and conduction heat transfer through a high-temperature, high-porosity, rigid multiple-fiber fibrous insulation was modeled using a thermal model previously used to model heat transfer in flexible single-fiber fibrous insulation. The rigid insulation studied was alumina enhanced thermal barrier (AETB) at densities between 130 and 260 kilograms per cubic meter. The model combines the diffusion approximation for radiation heat transfer with a semi-empirical solid conduction model and a standard gas conduction model. The relevant parameters needed for the heat transfer model were estimated from steady-state thermal measurements in nitrogen gas at various temperatures and environmental pressures. The heat transfer modeling methodology was evaluated by comparison with standard thermal conductivity measurements and steady-state thermal measurements in helium and carbon dioxide gases. The heat transfer model is applicable over the temperature range of 300 to 1360 K, the pressure range of 0.133 to 101.3 × 10³ Pa, and the insulation density range of 130 to 260 kilograms per cubic meter in various gaseous environments.
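
    For context, the diffusion approximation named in the abstract treats radiation as an additional conductive term, so the effective conductivity is a sum of solid, gas, and radiative contributions. The sketch below shows that generic form only; the paper's semi-empirical solid- and gas-conduction terms are not reproduced, and the placeholder values are assumptions.

```python
# Hedged sketch of the generic form behind such models: under the diffusion
# (Rosseland) approximation, radiation acts like conduction with
# k_rad = 16*sigma*n**2*T**3 / (3*beta_R), and the effective conductivity is the
# sum of solid, gas, and radiative contributions.  The specific semi-empirical
# solid- and gas-conduction terms of the paper are not reproduced; k_solid and
# k_gas below are placeholder inputs.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def k_radiative(T, beta_R, n=1.0):
    """Radiative conductivity from the diffusion approximation (W m^-1 K^-1)."""
    return 16.0 * SIGMA * n**2 * T**3 / (3.0 * beta_R)

def k_effective(T, beta_R, k_solid, k_gas):
    """Effective conductivity as a sum of solid, gas, and radiative terms."""
    return k_solid + k_gas + k_radiative(T, beta_R)

# Example: at 1000 K with an assumed extinction coefficient of 5000 m^-1.
print(k_effective(1000.0, beta_R=5000.0, k_solid=0.03, k_gas=0.02))
```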

  1. Friedberg-Lee model at finite temperature and density

    NASA Astrophysics Data System (ADS)

    Mao, Hong; Yao, Minjie; Zhao, Wei-Qin

    2008-06-01

    The Friedberg-Lee model is studied at finite temperature and density. By using finite temperature field theory, the effective potential of the Friedberg-Lee model and the bag constants B(T) and B(T,μ) have been calculated at different temperatures and densities. It is shown that there is a critical temperature TC≃106.6 MeV when μ=0 MeV and a critical chemical potential μC≃223.1 MeV when the temperature is fixed at T=50 MeV. We also calculate the soliton solutions of the Friedberg-Lee model at finite temperature and density. It turns out that when T⩽TC (or μ⩽μC), there is a bag constant B(T) [or B(T,μ)] and the soliton solutions are stable. However, when T>TC (or μ>μC), the bag constant B(T)=0 MeV [or B(T,μ)=0 MeV] and no soliton solution exists any longer; the confinement of quarks therefore disappears quickly.

  2. Corn blight review: Sampling model and ground data measurements program

    NASA Technical Reports Server (NTRS)

    Allen, R. D.

    1972-01-01

    The sampling plan involved the selection of the study area, determination of the flightline and segment sample design within the study area, and determination of a field sample design. Initial interview survey data consisting of crop species acreage and land use were collected. For all corn fields, additional information such as seed type, row direction, population, planting date, etc. was also collected. From this information, sample corn fields were selected to be observed through the growing season on a biweekly basis by county extension personnel.

  3. Monte Carlo path sampling approach to modeling aeolian sediment transport

    NASA Astrophysics Data System (ADS)

    Hardin, E. J.; Mitasova, H.; Mitas, L.

    2011-12-01

    Coastal communities and vital infrastructure are subject to coastal hazards including storm surge and hurricanes. Coastal dunes offer protection by acting as natural barriers from waves and storm surge. During storms, these landforms and their protective function can erode; however, they can also erode even in the absence of storms due to daily wind and waves. Costly and often controversial beach nourishment and coastal construction projects are common erosion mitigation practices. With a more complete understanding of coastal morphology, the efficacy and consequences of anthropogenic activities could be better predicted. Currently, research on coastal landscape evolution is focused on waves and storm surge, while only limited effort is devoted to understanding aeolian forces. Aeolian transport occurs when the wind supplies a shear stress that exceeds a critical value, consequently ejecting sand grains into the air. If the grains are too heavy to be suspended, they fall back to the grain bed, where the collision ejects more grains. This is called saltation and is the salient process by which sand mass is transported. The shear stress required to dislodge grains is related to turbulent air speed. Subsequently, as sand mass is injected into the air, the wind loses speed along with its ability to eject more grains. In this way, the saltation flux feeds back on the wind that drives it, and aeolian transport becomes nonlinear. Aeolian sediment transport is difficult to study experimentally for reasons arising from the orders-of-magnitude difference between grain size and dune size. It is difficult to study theoretically because aeolian transport is highly nonlinear, especially over complex landscapes. Current computational approaches have limitations as well; single-grain models are mathematically simple but are computationally intractable even with modern computing power, whereas cellular automata-based approaches are computationally efficient

  4. Heating and temperature gradients of lipid bilayer samples induced by RF irradiation in MAS solid-state NMR experiments.

    PubMed

    Wang, Jing; Zhang, Zhengfeng; Zhao, Weijing; Wang, Liying; Yang, Jun

    2016-05-09

    MAS solid-state NMR has been a powerful technique for studying membrane proteins within a native-like lipid bilayer environment. In general, RF irradiation in MAS NMR experiments can heat and potentially destroy expensive membrane protein samples. However, under practical MAS NMR experimental conditions, detailed characterization of the RF heating effect in lipid bilayer samples is still lacking. Herein, using the ¹H chemical shift of water for temperature calibration, we systematically study the dependence of RF heating on the hydration levels and salt concentrations of three lipids in MAS NMR experiments. Under practical ¹H decoupling conditions used in biological MAS NMR experiments, the three lipids show different dependences of RF heating on hydration levels as well as salt concentrations, which are closely associated with the properties of the lipids. The maximum temperature elevation of about 10 °C is similar for the three lipids containing 200% hydration, which is much lower than that in static solid-state NMR experiments. The RF heating due to salt is observed to be less than that due to hydration, with a maximum temperature elevation of less than 4 °C in the hydrated samples containing 120 mmol l⁻¹ of salt. Upon RF irradiation, the temperature gradient across the sample is observed to increase greatly, by up to 20 °C, as demonstrated by the remarkable broadening of the ¹H signal of water. Based on this detailed characterization of the RF heating effect, we demonstrate that RF heating and the temperature gradient can be significantly reduced by decreasing the hydration levels of lipid bilayer samples from 200% to 30%. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Field and sample history dependence of the compensation temperature in Sm0.97Gd0.03Al2

    NASA Astrophysics Data System (ADS)

    Vaidya, U. V.; Rakhecha, V. C.; Sumithra, S.; Ramakrishnan, S.; Grover, A. K.

    2007-03-01

    We present magnetization data on three polycrystalline specimens of Sm0.97Gd0.03Al2: (1) as-cast (grainy texture), (2) powder, and (3) re-melted fast-quenched (plate). The data are presented for nominally zero-field-cooled (ZFC) and high-field-cooled (HFC) histories. A zero cross-over in the magnetization curve at some temperature T = T0 was seen in the ZFC data on the grainy and powder samples, but not in the plate sample. At fields surpassing the magnetocrystalline anisotropy, a 4f magnetic moment flip was still evidenced by the HFC data in all samples at a compensation temperature Tcomp, which must necessarily be treated as distinct from T0 (T0 may not even exist). A proper understanding of Tcomp should take account of thermomagnetic history effects.

  6. Understanding and quantifying foliar temperature acclimation for Earth System Models

    NASA Astrophysics Data System (ADS)

    Smith, N. G.; Dukes, J.

    2015-12-01

    Photosynthesis and respiration on land are the two largest carbon fluxes between the atmosphere and Earth's surface. The parameterization of these processes represents a major uncertainty in the terrestrial component of the Earth System Models used to project future climate change. Research has shown that much of this uncertainty is due to the parameterization of the temperature responses of leaf photosynthesis and autotrophic respiration, which are typically based on short-term empirical responses. Here, we show that including longer-term responses to temperature, such as temperature acclimation, can help to reduce this uncertainty and improve model performance, leading to drastic changes in future land-atmosphere carbon feedbacks across multiple models. However, these acclimation formulations have many flaws, including an underrepresentation of many important global flora. In addition, these parameterizations were derived from multiple studies that employed differing methodologies. As such, we used a consistent methodology to quantify the short- and long-term temperature responses of maximum Rubisco carboxylation (Vcmax), the maximum rate of ribulose-1,5-bisphosphate regeneration (Jmax), and dark respiration (Rd) in multiple species representing each of the plant functional types used in global-scale land surface models. Short-term temperature responses of each process were measured in individuals acclimated for 7 days at one of 5 temperatures (15-35°C). The comparison of short-term curves in plants acclimated to different temperatures was used to evaluate long-term responses. Our analyses indicated that the instantaneous response of each parameter was highly sensitive to the temperature at which the plants were acclimated. However, we found that this sensitivity was larger in species whose leaves typically experience a greater range of temperatures over the course of their lifespan. These data indicate that models using previous acclimation formulations are likely incorrectly

  7. High-Temperature Expansions for Frenkel-Kontorova Model

    NASA Astrophysics Data System (ADS)

    Takahashi, K.; Mannari, I.; Ishii, T.

    1995-02-01

    Two high-temperature series expansions of the Frenkel-Kontorova (FK) model are investigated: the high-temperature approximation of Schneider-Stoll is extended to the FK model having the density ρ ≠ 1, and an alternative series expansion in terms of the modified Bessel function is examined. The first six-order terms for both expansions in free energy are explicitly obtained and compared with Ishii's approximation of the transfer-integral method. The specific heat based on the expansions is discussed by comparing with those of the transfer-integral method and Monte Carlo simulation.

  8. Matrix-assisted laser desorption/ionization mass spectrometry of covalently cationized polyethylene as a function of sample temperature

    NASA Astrophysics Data System (ADS)

    Wallace, W. E.; Blair, W. R.

    2007-05-01

    A pre-charged, low molecular mass, low polydispersity linear polyethylene was analyzed with matrix-assisted laser desorption/ionization (MALDI) mass spectrometry as a function of sample temperature between 25 °C and 150 °C. This temperature range crosses the polyethylene melting temperature. Buckminsterfullerene (C60) was used as the MALDI matrix because typical MALDI matrices are too volatile to be heated in vacuum. Starting at 90 °C there is an increase in polyethylene ion intensity at fixed laser energy. By 150 °C the integrated total ion intensity had grown six-fold, indicating that melting did indeed increase ion yield. At 150 °C the threshold laser intensity required to produce intact polyethylene ions decreased by about 25%. Nevertheless, significant fragmentation accompanied the intact polyethylene ions even at the highest temperatures and the lowest laser energies.

  9. Measuring and modeling hemoglobin aggregation below the freezing temperature.

    PubMed

    Rosa, Mónica; Lopes, Carlos; Melo, Eduardo P; Singh, Satish K; Geraldes, Vitor; Rodrigues, Miguel A

    2013-08-01

    Freezing of protein solutions is required for many applications such as storage, transport, or lyophilization; however, freezing has inherent risks for protein integrity. It is difficult to study protein stability below the freezing temperature because phase separation constrains solute concentration in solution. In this work, we developed an isochoric method to study protein aggregation in solutions at -5, -10, -15, and -20 °C. Lowering the temperature below the freezing point in a fixed volume prevents the aqueous solution from freezing, as pressure rises until equilibrium (P,T) is reached. Aggregation rates of bovine hemoglobin (BHb) increased at lower temperature (-20 °C) and higher BHb concentration. However, the addition of sucrose substantially decreased the aggregation rate and prevented aggregation when its concentration reached 300 g/L. The unfolding thermodynamics of BHb was studied using fluorescence, and the fraction of unfolded protein as a function of temperature was determined. A mathematical model was applied to describe BHb aggregation below the freezing temperature. This model was able to predict the aggregation curves for various storage temperatures and initial concentrations of BHb. The aggregation mechanism was revealed to be mediated by an unfolded state, followed by fast growth of aggregates that readily precipitate. The aggregation kinetics were faster at lower temperature because of the higher fraction of unfolded BHb closer to the cold denaturation temperature. Overall, the results obtained herein suggest that the isochoric method could provide a relatively simple approach to obtain fundamental thermodynamic information about the protein and the aggregation mechanism, thus providing a new approach to developing accelerated formulation studies below the freezing temperature.

  10. Sample Size Considerations in Prevention Research Applications of Multilevel Modeling and Structural Equation Modeling.

    PubMed

    Hoyle, Rick H; Gottfredson, Nisha C

    2015-10-01

    When the goal of prevention research is to capture in statistical models some measure of the dynamic complexity in structures and processes implicated in problem behavior and its prevention, approaches such as multilevel modeling (MLM) and structural equation modeling (SEM) are indicated. Yet the assumptions that must be satisfied if these approaches are to be used responsibly raise concerns regarding their use in prevention research involving smaller samples. In this article, we discuss in nontechnical terms the role of sample size in MLM and SEM and present findings from the latest simulation work on the performance of each approach at sample sizes typical of prevention research. For each statistical approach, we draw from extant simulation studies to establish lower bounds for sample size (e.g., MLM can be applied with as few as ten groups comprising ten members with normally distributed data, restricted maximum likelihood estimation, and a focus on fixed effects; sample sizes as small as N = 50 can produce reliable SEM results with normally distributed data and at least three reliable indicators per factor) and suggest strategies for making the best use of the modeling approach when N is near the lower bound.

  11. Sample Size Considerations in Prevention Research Applications of Multilevel Modeling and Structural Equation Modeling

    PubMed Central

    Hoyle, Rick H.; Gottfredson, Nisha C.

    2014-01-01

    When the goal of prevention research is to capture in statistical models some measure of the dynamic complexity in structures and processes implicated in problem behavior and its prevention, approaches such as multilevel modeling (MLM) and structural equation modeling (SEM) are indicated. Yet the assumptions that must be satisfied if these approaches are to be used responsibly raise concerns regarding their use in prevention research involving smaller samples. In this manuscript we discuss in nontechnical terms the role of sample size in MLM and SEM and present findings from the latest simulation work on the performance of each approach at sample sizes typical of prevention research. For each statistical approach, we draw from extant simulation studies to establish lower bounds for sample size (e.g., MLM can be applied with as few as 10 groups comprising 10 members with normally distributed data, restricted maximum likelihood estimation, and a focus on fixed effects; sample sizes as small as N = 50 can produce reliable SEM results with normally distributed data and at least three reliable indicators per factor) and suggest strategies for making the best use of the modeling approach when N is near the lower bound. PMID:24752569

  12. Modeling acclimation of photosynthesis to temperature in evergreen conifer forests.

    PubMed

    Gea-Izquierdo, Guillermo; Mäkelä, Annikki; Margolis, Hank; Bergeron, Yves; Black, T Andrew; Dunn, Allison; Hadley, Julian; Kyaw Tha Paw U; Falk, Matthias; Wharton, Sonia; Monson, Russell; Hollinger, David Y; Laurila, Tuomas; Aurela, Mika; McCaughey, Harry; Bourque, Charles; Vesala, Timo; Berninger, Frank

    2010-10-01

    • In this study, we used a canopy photosynthesis model which describes changes in photosynthetic capacity with slow temperature-dependent acclimations. • A flux-partitioning algorithm was applied to fit the photosynthesis model to net ecosystem exchange data for 12 evergreen coniferous forests from northern temperate and boreal regions. • The model accounted for much of the variation in photosynthetic production, with modeling efficiencies (mean > 67%) similar to those of more complex models. The parameter describing the rate of acclimation was larger at the northern sites, leading to a slower acclimation of photosynthesis to temperature. The response of the rates of photosynthesis to air temperature in spring was delayed up to several days at the coldest sites. Overall photosynthesis acclimation processes were slower at colder, northern locations than at warmer, more southern, and more maritime sites. • Consequently, slow changes in photosynthetic capacity were essential to explaining variations of photosynthesis for colder boreal forests (i.e. where acclimation of photosynthesis to temperature was slower), whereas the importance of these processes was minor in warmer conifer evergreen forests.

  13. Sampling biases in datasets of historical mean air temperature over land.

    PubMed

    Wang, Kaicun

    2014-04-10

    Global mean surface air temperature (Ta) has been reported to have risen by 0.74°C over the last 100 years. However, the definition of mean Ta is still a subject of debate. The most defensible definition might be the integral of the continuous temperature measurements over a day (Td0). However, for technological and historical reasons, mean Ta over land has been taken to be the average of the daily maximum and minimum temperature measurements (Td1). All existing principal global temperature analyses over land rely heavily on Td1. Here, I make a first quantitative assessment of the bias in the use of Td1 to estimate trends of mean Ta using hourly Ta observations at 5600 globally distributed weather stations from the 1970s to 2013. I find that the use of Td1 has a negligible impact on the global mean warming rate. However, the trend of Td1 has a substantial bias at regional and local scales, with a root mean square error of over 25% on 5° × 5° grids. Therefore, caution should be taken when using mean Ta datasets based on Td1 to examine high-resolution details of warming trends.

  14. Sampling Biases in Datasets of Historical Mean Air Temperature over Land

    NASA Astrophysics Data System (ADS)

    Wang, Kaicun

    2014-04-01

    Global mean surface air temperature (Ta) has been reported to have risen by 0.74°C over the last 100 years. However, the definition of mean Ta is still a subject of debate. The most defensible definition might be the integral of the continuous temperature measurements over a day (Td0). However, for technological and historical reasons, mean Ta over land has been taken to be the average of the daily maximum and minimum temperature measurements (Td1). All existing principal global temperature analyses over land rely heavily on Td1. Here, I make a first quantitative assessment of the bias in the use of Td1 to estimate trends of mean Ta using hourly Ta observations at 5600 globally distributed weather stations from the 1970s to 2013. I find that the use of Td1 has a negligible impact on the global mean warming rate. However, the trend of Td1 has a substantial bias at regional and local scales, with a root mean square error of over 25% on 5° × 5° grids. Therefore, caution should be taken when using mean Ta datasets based on Td1 to examine high-resolution details of warming trends.
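
    The comparison at the heart of both records can be stated compactly: from hourly observations, Td0 is the 24-hour mean and Td1 is the average of the daily maximum and minimum, and their difference is the bias of interest. The sketch below is an illustrative calculation on synthetic data, not the author's processing code.

```python
# Hedged sketch of the comparison the study describes: from hourly temperatures,
# Td0 is the 24-h mean and Td1 is the average of the daily maximum and minimum;
# their difference is the bias introduced by the (Tmax+Tmin)/2 convention.
import numpy as np

def daily_means(hourly_temps):
    """hourly_temps: array of 24 hourly values for one day."""
    t = np.asarray(hourly_temps, dtype=float)
    td0 = t.mean()                    # integral-type daily mean
    td1 = 0.5 * (t.max() + t.min())   # max/min daily mean
    return td0, td1, td1 - td0        # bias of Td1 relative to Td0

# Example: an asymmetric diurnal cycle makes Td1 differ from Td0.
hours = np.arange(24)
temps = 15 + 8 * np.sin(np.pi * (hours - 6) / 14.0) ** 2   # synthetic day
print(daily_means(temps))
```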

  15. Modeling apple surface temperature dynamics based on weather data.

    PubMed

    Li, Lei; Peters, Troy; Zhang, Qin; Zhang, Jingjin; Huang, Danfeng

    2014-10-27

    The exposure of fruit surfaces to direct sunlight during the summer months can result in sunburn damage. Losses due to sunburn damage are a major economic problem when marketing fresh apples. The objective of this study was to develop and validate a model for simulating fruit surface temperature (FST) dynamics based on an energy balance and measured weather data. Weather data (air temperature, humidity, solar radiation, and wind speed) were recorded at fifteen-minute intervals for seven hours each day (11:00-18:00) over two months. To validate the model, the FSTs of "Fuji" apples were monitored using an infrared camera in a natural orchard environment. The FST dynamics were measured using a series of thermal images. For the apples that were completely exposed to the sun, the RMSE of the model for estimating FST was less than 2.0 °C. A sensitivity analysis of the emissivity of the apple surface and the conductance of the fruit surface to water vapour showed that accurate estimates of the apple surface emissivity were important for the model. The validation results showed that the model was capable of accurately describing the thermal behavior of apples under different solar radiation intensities. Thus, this model could be used to estimate the FST more accurately than estimates that consider only the air temperature. In addition, this model provides useful information for sunburn protection management.
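
    A minimal sketch of the kind of energy balance the abstract describes is given below: absorbed solar radiation balanced against longwave exchange and convection, solved for the surface temperature. Evaporative cooling is omitted and all coefficients are illustrative assumptions, so this is not the paper's validated model.

```python
# Hedged sketch of a steady-state energy balance of the kind used to estimate
# fruit surface temperature (FST): absorbed solar radiation is balanced by
# longwave exchange and convection (evaporative cooling is omitted for brevity).
# All parameter values are illustrative assumptions, not the paper's.
from scipy.optimize import brentq

SIGMA = 5.670374419e-8  # W m^-2 K^-4

def fst(t_air_c, solar_wm2, wind_ms, absorptivity=0.6, emissivity=0.95):
    t_air = t_air_c + 273.15
    h_c = 6.0 + 4.0 * wind_ms  # crude convective coefficient, W m^-2 K^-1 (assumed)

    def residual(t_s):
        absorbed = absorptivity * solar_wm2
        longwave = emissivity * SIGMA * (t_s**4 - t_air**4)
        convection = h_c * (t_s - t_air)
        return absorbed - longwave - convection

    return brentq(residual, t_air - 5.0, t_air + 40.0) - 273.15

# Example: a sunlit fruit surface in calm air runs well above air temperature.
print(round(fst(t_air_c=30.0, solar_wm2=800.0, wind_ms=0.5), 1))
```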

  16. 12 CFR Appendix B to Part 1030 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Model Clauses and Sample Forms B Appendix B to.... 1030, App. B Appendix B to Part 1030—Model Clauses and Sample Forms 1. Modifications. Institutions that.... Institutions may use inserts to a document (see Sample Form B-4) or fill-in blanks (see Sample Forms B-5,...

  17. 3.5 D temperature model of a coal stockpile

    SciTech Connect

    Ozdeniz, A.H.; Corumluoglu, O.; Kalayci, I.; Sensogut, C.

    2008-07-01

    Coal that is produced in excess of sales remains stockpiled at coal stock sites. If these stockpiles remain in the stock yard beyond a certain period of time, spontaneous combustion can begin. Stockpiles threatened by combustion can impose substantial economic losses on coal companies. Therefore, it is important to take precautions to protect the stockpiles from spontaneous combustion. In this research, a coal stock that was 5 m wide, 10 m long, and 3 m in height, with a weight of 120 tons, was monitored to observe internal temperature changes with respect to time under normal atmospheric conditions. Internal temperature measurements were obtained at 20 points distributed over two layers in the stockpile. Temperatures measured by a specially designed mechanism were then stored on a computer every 3 h for a period of 3 months. Afterward, this dataset was used to build 3.5 D temporal temperature distribution models for the two levels, which were then analyzed and interpreted to derive conclusions. The 3.5 D models created for this research clearly showed internal temperatures in the stockpile rising to 31°C.

  18. Modelling Brain Temperature and Perfusion for Cerebral Cooling

    NASA Astrophysics Data System (ADS)

    Blowers, Stephen; Valluri, Prashant; Marshall, Ian; Andrews, Peter; Harris, Bridget; Thrippleton, Michael

    2015-11-01

    Brain temperature depends heavily on two aspects: i) blood perfusion and porous heat transport through tissue and ii) blood flow and heat transfer through the embedded arterial and venous vasculature. Moreover, brain temperature cannot be measured directly unless highly invasive surgical procedures are used. A 3D two-phase fluid-porous model for mapping flow and temperature in the brain is presented, with arterial and venous vessels extracted from MRI scans. Heat generation through metabolism is also included. The model is robust and reveals flow and temperature maps in unprecedented 3D detail. However, the Kozeny-Carman parameters of the porous (tissue) phase need to be optimised for the expected perfusion profiles. In order to optimise the K-K parameters, a reduced-order two-phase model is developed in which 1D vessels are created with a tree-generation algorithm embedded inside a 3D porous domain. Results reveal that blood perfusion is a strong function of the porosity distribution in the tissue. We present a qualitative comparison between the simulated perfusion maps and those obtained clinically. We also present results studying the effect of scalp cooling on core brain temperature, and preliminary results agree with those observed clinically.
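
    One standard porosity-permeability closure that could be used for the porous (tissue) phase in such a model is the Kozeny-Carman relation sketched below; the constant and length scale are illustrative assumptions rather than the values optimised in the study.

```python
# Hedged sketch: the Kozeny-Carman relation is one standard porosity-permeability
# closure for a porous phase, k = phi**3 * d**2 / (C * (1 - phi)**2), with C ~ 180
# for a packing of spheres of diameter d.  The values below are illustrative, not
# those optimised in the study.
def kozeny_carman(porosity, grain_diameter_m, c=180.0):
    """Permeability (m^2) from porosity and a characteristic length scale."""
    phi = porosity
    return phi**3 * grain_diameter_m**2 / (c * (1.0 - phi)**2)

# Example: doubling porosity from 0.05 to 0.10 raises permeability ~9-fold,
# illustrating why perfusion is such a strong function of porosity distribution.
print(kozeny_carman(0.05, 10e-6), kozeny_carman(0.10, 10e-6))
```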

  19. Modeling the effect of temperature on survival rate of Salmonella Enteritidis in yogurt.

    PubMed

    Szczawiński, J; Szczawińska, M E; Łobacz, A; Jackowska-Tracz, A

    2014-01-01

    The aim of the study was to determine the inactivation rates of Salmonella Enteritidis in commercially produced yogurt and to generate primary and secondary mathematical models to predict the behaviour of these bacteria during storage at different temperatures. The samples were inoculated with a mixture of three S. Enteritidis strains and stored at 5°C, 10°C, 15°C, 20°C and 25°C for 24 h. The number of salmonellae was determined every two hours. It was found that the number of bacteria decreased linearly with storage time in all samples. Storage temperature and the pH of the yogurt significantly influenced the survival rate of S. Enteritidis (p < 0.05). In samples kept at 5°C the number of salmonellae decreased at the lowest rate, whereas at 25°C the reduction in the number of bacteria was the most dynamic. The natural logarithm of the mean inactivation rates of Salmonella calculated from the primary model was fitted to two secondary models, linear and polynomial. The equations obtained from both secondary models can be applied as tools for predicting the inactivation rate of Salmonella in yogurt stored in the temperature range from 5 to 25°C; however, the polynomial model gave the better fit to the experimental data.
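
    The two-level workflow described here (a log-linear primary model per storage temperature, then a secondary model for ln(rate) versus temperature) can be sketched as below. The data and coefficients are synthetic; this is an illustration of the general approach, not the study's fitted model.

```python
# Hedged sketch of the generic two-level modelling workflow the abstract
# describes: a log-linear primary model gives an inactivation rate (or D-value)
# at each storage temperature, and a secondary model relates ln(rate) to
# temperature.  Coefficients below are fitted to synthetic data, not the paper's.
import numpy as np

def primary_rate(times_h, log_counts):
    """Slope of log10 N versus time; -1/slope is the D-value (h)."""
    slope, _ = np.polyfit(times_h, log_counts, 1)
    return slope

def secondary_fit(temps_c, rates, degree=1):
    """Fit ln|rate| against temperature (degree=1 linear, degree=2 polynomial)."""
    return np.polyfit(temps_c, np.log(np.abs(rates)), degree)

# Synthetic example: faster decline at higher storage temperature.
temps = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
t = np.arange(0, 26, 2.0)                                   # hours
rates = [primary_rate(t, 7.0 - k * t) for k in 0.01 * 1.3 ** (temps / 5.0)]
print(secondary_fit(temps, rates, degree=1))
print(secondary_fit(temps, rates, degree=2))
```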

  20. Empirical model of temperature structure, Anadarko basin, Oklahoma

    SciTech Connect

    Gallardo, J.D.; Blackwell, D.D. )

    1989-08-01

    Attempts at mapping the thermal structure of sedimentary basins most often are based on bottom-hole temperature (BHT) data. Aside from the inaccuracy of the BHT data itself, this approach uses a straight-line geothermal gradient, which is an unrealistic representation of the thermal structure. In fact, the temperature gradient is dependent upon the lithology of the rocks because each rock type has a different thermal conductivity. The mean gradient through a given sedimentary section is a composite of the gradients through the individual sedimentary units. Thus, a more accurate representation of the temperature variations within a basin can be obtained by calculating the temperature gradient through each layer of contrasting conductivity. In this study, synthetic temperature profiles are calculated from lithologic data interpreted from well logs, and these profiles are used to build a three-dimensional model of the temperature structure of the Anadarko basin. The lithologies that control the temperature in the Anadarko basin include very high-conductivity evaporites in the Permian, low-conductivity shales dominating the thick Pennsylvanian section, and relatively intermediate conductivity carbonates throughout the lower Paleozoic. Shale is the primary controlling factor because it is the most abundant lithology in the basin and has a low thermal conductivity. This is unfortunate because shale thermal conductivity is the factor least well constrained by laboratory measurements.
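
    The layered-gradient idea can be written directly from Fourier's law: for a given heat flow q, each layer of conductivity k_i contributes a gradient q/k_i, and temperatures are accumulated layer by layer instead of assuming one straight-line gradient. The sketch below uses generic textbook-style conductivities, not the study's calibrated values.

```python
# Hedged sketch of the layered-gradient idea: with a (locally) constant heat flow
# q, Fourier's law gives a gradient q/k_i in each layer, so temperature is
# accumulated layer by layer rather than taken from a single straight-line
# gradient.  The conductivities below are generic illustrative values.
def layered_temperatures(surface_temp_c, heat_flow_wm2, layers):
    """layers: list of (thickness_m, conductivity_W_per_mK); returns layer-base temps."""
    temps = [surface_temp_c]
    for thickness, k in layers:
        temps.append(temps[-1] + heat_flow_wm2 * thickness / k)
    return temps

# Example: the low-conductivity shale interval concentrates most of the warming.
section = [(1000.0, 5.5),   # evaporite-rich interval (high conductivity)
           (2000.0, 1.2),   # shale-dominated interval (low conductivity)
           (1500.0, 2.8)]   # carbonate interval (intermediate conductivity)
print(layered_temperatures(15.0, 0.06, section))
```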

  1. Experiments and modeling of variably permeable carbonate reservoir samples in contact with CO₂-acidified brines

    DOE PAGES

    Smith, Megan M.; Hao, Yue; Mason, Harris E.; ...

    2014-12-31

    Reactive experiments were performed to expose sample cores from the Arbuckle carbonate reservoir to CO₂-acidified brine under reservoir temperature and pressure conditions. The samples consisted of dolomite with varying quantities of calcite and silica/chert. The timescales of the monitored pressure decline across each sample in response to CO₂ exposure, as well as the amount and nature of dissolution features, varied widely among these three experiments. For all sample cores, the experimentally measured initial permeability was at least one order of magnitude lower than the values estimated from downhole methods. Nondestructive X-ray computed tomography (XRCT) imaging revealed dissolution features including “wormholes,” removal of fracture-filling crystals, and widening of pre-existing pore spaces. In the injection zone sample, multiple fractures may have contributed to the high initial permeability of this core and restricted the distribution of CO₂-induced mineral dissolution. In contrast, the pre-existing porosity of the baffle zone sample was much lower and less connected, leading to a lower initial permeability and contributing to the development of a single dissolution channel. While calcite may make up only a small percentage of the overall sample composition, its location and the effects of its dissolution have an outsized effect on permeability responses to CO₂ exposure. The XRCT data presented here are informative for building the model domain for numerical simulations of these experiments but require calibration by higher resolution means to confidently evaluate different porosity-permeability relationships.

  2. A simple model for electron temperature in dilute plasma flows

    NASA Astrophysics Data System (ADS)

    Cai, Chunpei; Cooke, David L.

    2016-10-01

    In this short note, we present work investigating electron temperatures and potentials in steady dilute plasma flows. The analysis is based on a detailed fluid model for electrons. Ionization, normalized electron number density gradients, and magnetic fields are neglected. The transport properties are assumed to be locally constant. With these treatments, the partial differential equation for the electron temperature reduces to an ordinary differential equation. Along an electron streamline, two simple formulas for the electron temperature and the plasma potential are obtained. These formulas offer some insights: for example, the electron temperature and plasma potential distributions along an electron streamline involve two exponential functions, and the expression for the plasma potential includes an additional linear contribution.

  3. On Modeling and Measuring the Temperature of the z ~ 5 Intergalactic Medium

    NASA Astrophysics Data System (ADS)

    Lidz, Adam; Malloy, Matthew

    2014-06-01

    The temperature of the low-density intergalactic medium (IGM) at high redshift is sensitive to the timing and nature of hydrogen and He II reionization, and can be measured from Lyman-alpha (Lyα) forest absorption spectra. Since the memory of intergalactic gas to heating during reionization gradually fades, measurements as close as possible to reionization are desirable. In addition, measuring the IGM temperature at sufficiently high redshifts should help to isolate the effects of hydrogen reionization since He II reionization starts later, at lower redshift. Motivated by this, we model the IGM temperature at z ≳ 5 using semi-numeric models of patchy reionization. We construct mock Lyα forest spectra from these models and consider their observable implications. We find that the small-scale structure in the Lyα forest is sensitive to the temperature of the IGM even at redshifts where the average absorption in the forest is as high as 90%. We forecast the accuracy at which the z ≳ 5 IGM temperature can be measured using existing samples of high resolution quasar spectra, and find that interesting constraints are possible. For example, an early reionization model in which reionization ends at z ~ 10 should be distinguishable, at high statistical significance, from a lower redshift model where reionization completes at z ~ 6. We discuss improvements to our modeling that may be required to robustly interpret future measurements.

  4. Land-surface temperature measurement from space - Physical principles and inverse modeling

    NASA Technical Reports Server (NTRS)

    Wan, Zhengming; Dozier, Jeff

    1989-01-01

    To apply the multiple-wavelength (split-window) method used for satellite measurement of sea-surface temperature from thermal-infrared data to land-surface temperatures, the authors statistically analyze simulations using an atmospheric radiative transfer model. The range of atmospheric conditions and surface temperatures simulated is wide enough to cover variations in clear atmospheric properties and surface temperatures, both of which are larger over land than over sea. Surface elevation is also included in the simulation as the most important topographic effect. Land covers characterized by measured or modeled spectral emissivities include snow, clay, sands, and tree leaf samples. The empirical inverse model can estimate the surface temperature with a standard deviation less than 0.3 K and a maximum error less than 1 K, for viewing angles up to 40 degrees from nadir under cloud-free conditions, given satellite measurements in three infrared channels. A band in the region from 10.2 to 11.0 microns will usually give the most reliable single-band estimate of surface temperature. In addition, a band in either the 3.5-4.0-micron region or in the 11.5-12.6-micron region must be included for accurate atmospheric correction, and a band below the ozone absorption feature at 9.6 microns (e.g., 8.2-8.8 microns) will increase the accuracy of the estimate of surface temperature.
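
    For context, the split-window approach the paper builds on estimates surface temperature from a linear combination of two thermal-infrared brightness temperatures, using the channel difference as an atmospheric water-vapour correction. The coefficients in the sketch below are placeholders; in practice they come from regressions on radiative-transfer simulations like those described above.

```python
# Hedged sketch of the split-window idea: surface temperature is estimated from a
# linear combination of two thermal-infrared brightness temperatures, with the
# channel difference correcting for atmospheric water vapour.  The coefficients
# below are purely illustrative placeholders, not values from the paper.
def split_window_lst(t11_k, t12_k, a=1.0, b=2.0, c=0.5):
    """LST ~ a*T11 + b*(T11 - T12) + c  (coefficients are assumptions)."""
    return a * t11_k + b * (t11_k - t12_k) + c

# Example: a 1.5 K channel difference contributes a ~3 K atmospheric correction here.
print(split_window_lst(t11_k=295.0, t12_k=293.5))
```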

  5. Determination of cross-grain properties of clearwood samples under kiln-drying conditions at temperature up to 140 C

    SciTech Connect

    Keep, L.B.; Keey, R.B.

    2000-07-01

    Small specimens of Pinus radiata have been tested to determine the creep strain that occurs during the kiln drying of boards. The samples have been tested over a range of temperatures from 20°C to 140°C. The samples, measuring 150 x 50 x 5 mm, were conditioned at various relative humidities in a pilot-plant kiln, in which the experiments at constant moisture content (MC) in the range of 5-20% MC were undertaken to eliminate mechano-sorptive strains. To determine the creep strain, the samples were brought to their equilibrium moisture content (EMC), then mechanically loaded under tension in the direction perpendicular to the grain. The strain was measured using small linear position sensors (LPS) which detect any elongation or shrinkage in the sample. The instantaneous compliance was measured within 60 sec of the application of the load (stress). The subsequent creep was monitored by the continued logging of strain data from the LPS units. The results of these experiments are consistent with previous studies of Wu and Milota (1995) on Douglas-fir (Pseudotsuga menziesii). An increase in temperature or moisture content causes a rise in the creep strain while the sample is under tension. Values for the instantaneous compliance range from 1.7 × 10⁻³ to 1.28 × 10⁻² M/Pa at temperatures between 20°C and 140°C and moisture contents in the range of 5-20%. The rates of change of the creep strains are on the order of 10⁻⁷ to 10⁻⁸ s⁻¹ for these temperatures and moisture contents. The experimental data have been fitted to the constitutive equations of Wu and Milota (1996) for Douglas-fir to give material parameters for the instantaneous and creep strain components for Pinus radiata.

  6. Automated biowaste sampling system, solids subsystem operating model, part 2

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.; Mangialardi, J. K.; Stauffer, R. E.

    1973-01-01

    The detail design and fabrication of the Solids Subsystem were implemented. The system's capacity for the collection, storage or sampling of feces and vomitus from six subjects was tested and verified.

  7. Sample stream distortion modeled in continuous-flow electrophoresis

    NASA Technical Reports Server (NTRS)

    Rhodes, P. H.

    1979-01-01

    Buoyancy-induced disturbances in an electrophoresis-type chamber were investigated. Five tracer streams (latex) were used to visualize the flows while a nine-thermistor array sensed the temperature field. The internal heating to the chamber was provided by a 400 Hz electrical field. Cooling to the chamber was provided on the front and back faces and, in addition, on both chamber side walls. Disturbances to the symmetric base flow in the chamber occurred in the broad plane of the chamber and resulted from the formation of lateral and axial temperature gradients. The effect of these gradients was to retard or increase local flow velocities at different positions in the chamber cross section, which resulted in lateral secondary flows being induced in the broad plane of the chamber. As the adverse temperature gradients increased in magnitude, the critical Rayleigh number was approached and reverse (separated) flow became apparent, which, subsequently, led to the onset of time variant secondary flows.

  8. Modeling the effect of temperature on survival rate of Listeria monocytogenes in yogurt.

    PubMed

    Szczawiński, J; Szczawińska, M E; Łobacz, A; Jackowska-Tracz, A

    2016-01-01

    The aim of the study was to (i) evaluate the behavior of Listeria monocytogenes in a commercially produced yogurt, (ii) determine the survival/inactivation rates of L. monocytogenes during cold storage of yogurt and (iii) generate primary and secondary mathematical models to predict the behavior of these bacteria during storage at different temperatures. The samples of yogurt were inoculated with a mixture of three L. monocytogenes strains and stored at 3, 6, 9, 12 and 15°C for 16 days. The number of listeriae was determined after 0, 1, 2, 3, 5, 7, 9, 12, 14 and 16 days of storage. From each sample a series of decimal dilutions was prepared and plated onto ALOA agar (agar for Listeria according to Ottaviani and Agosti). It was found that the applied temperature and storage time significantly influenced the survival rate of listeriae (p<0.01). The number of L. monocytogenes in all the samples decreased linearly with storage time. The slowest decrease in the number of the bacteria was found in the samples stored at 6°C (D-10 value = 243.9 h), whereas the highest reduction in the number of the bacteria was observed in the samples stored at 15°C (D-10 value = 87.0 h). The number of L. monocytogenes was correlated with the pH value of the samples (p<0.01). The natural logarithm of the mean survival/inactivation rates of L. monocytogenes calculated from the primary model was fitted to two secondary models, namely linear and polynomial. The mathematical equations obtained from both secondary models can be applied as tools for the prediction of the survival/inactivation rate of L. monocytogenes in yogurt stored in the temperature range from 3 to 15°C; however, the polynomial model gave a better fit to the experimental data.

  9. Temperature Driven Annealing of Perforations in Bicellar Model Membranes

    SciTech Connect

    Nieh, Mu-Ping; Raghunathan, V.A.; Pabst, Georg; Harroun, Thad; Nagashima, K; Morales, H; Katsaras, John; Macdonald, P

    2011-01-01

    Bicellar model membranes composed of 1,2-dimyristoylphosphatidylcholine (DMPC) and 1,2-dihexanoylphosphatidylcholine (DHPC), with a DMPC/DHPC molar ratio of 5, and doped with the negatively charged lipid 1,2-dimyristoylphosphatidylglycerol (DMPG), at DMPG/DMPC molar ratios of 0.02 or 0.1, were examined using small angle neutron scattering (SANS), ³¹P NMR, and ¹H pulsed field gradient (PFG) diffusion NMR with the goal of understanding temperature effects on the DHPC-dependent perforations in these self-assembled membrane mimetics. Over the temperature range studied via SANS (300-330 K), these bicellar lipid mixtures exhibited a well-ordered lamellar phase. The interlamellar spacing d increased with increasing temperature, in direct contrast to the decrease in d observed upon increasing temperature with otherwise identical lipid mixtures lacking DHPC. ³¹P NMR measurements on magnetically aligned bicellar mixtures of identical composition indicated a progressive migration of DHPC from regions of high curvature into planar regions with increasing temperature, and in accord with the 'mixed bicelle model' (Triba, M. N.; Warschawski, D. E.; Devaux, P. E. Biophys. J. 2005, 88, 1887-1901). Parallel PFG diffusion NMR measurements of transbilayer water diffusion, where the observed diffusion is dependent on the fractional surface area of lamellar perforations, showed that transbilayer water diffusion decreased with increasing temperature. A model is proposed consistent with the SANS, ³¹P NMR, and PFG diffusion NMR data, wherein increasing temperature drives the progressive migration of DHPC out of high-curvature regions, consequently decreasing the fractional volume of lamellar perforations, so that water occupying these perforations redistributes into the interlamellar volume, thereby increasing the interlamellar spacing.

  10. Monitoring temperature for gas turbine blade: correction of reflection model

    NASA Astrophysics Data System (ADS)

    Gao, Shan; Wang, Lixin; Feng, Chi; Xiao, Yihan; Daniel, Ketui

    2015-06-01

    For a gas turbine blade working in a narrow space, the accuracy of blade temperature measurements is greatly impacted by environmental irradiation. A reflection model is established by using discrete irregular surfaces to calculate the angle factor between the blade surface and the hot adjacent parts. The model is based on the rotational angles and positions of the blades, and can correct for the measurement error caused by background radiation when the blade is located at different rotational positions. This method reduces the impact of reflected radiation on the basis of the turbine's known geometry and the physical properties of the material. The experimental results show that when the blade temperature is 911.2±5 K and the vane temperature ranges from 1011.3 to 1065.8 K, the error decreases from 4.21% to 0.75%.

  11. Method for Effective Calibration of Temperature Loggers with Automated Data Sampling and Evaluation

    NASA Astrophysics Data System (ADS)

    Ljungblad, S.; Josefson, L. E.; Holmsten, M.

    2011-12-01

    A highly automated calibration method for temperature loggers is presented. By using an automated procedure, a time- and cost-efficient calibration of temperature loggers is made possible. The method is directed at loggers that lack the function/property of direct reading from a display. This type of logger has to be connected to a computer for the setting-up of the measurement and again for collection of the measurement results. During the calibration, the loggers are offline. This method has been developed for temperature loggers from Gemini Data loggers, but the software and method could be modified to suit other types of loggers as well. Calibration is performed by comparison to a reference thermometer in liquid baths; and for loggers which have external sensors, only the sensor is normally placed in the bath. Loggers with internal sensors are protected from the liquid by placing them in an exterior plastic or metallic cover, and thereafter the entire loggers are placed in the bath. A digital thermometer measures the reference temperature of the bath and transmits it to a computer by way of Bluetooth. The developed calibration software, SPTempLogger, controls the logger software, and thus the communication protocol of the logger software does not need to be known. The previous method, with manual handling of the start and termination of every measuring sequence, evaluation of the resulting data and its corresponding uncertainty components, can be replaced by this automated method. Both the logger and reference measurement data are automatically downloaded once the logger has been connected to a computer after the calibration, and the calibration software started. The data are then evaluated automatically, and by statistical analysis of the confidence coefficient and standard deviation, the temperature plateaus that the calibration includes are identified. If a number of control parameters comply with the requirements, then the correction, resolution, and short

  12. A computer model of global thermospheric winds and temperatures

    NASA Technical Reports Server (NTRS)

    Killeen, T. L.; Roble, R. G.; Spencer, N. W.

    1987-01-01

    Output data from the NCAR Thermospheric GCM and a vector-spherical-harmonic (VSH) representation of the wind field are used in constructing a computer model of time-dependent global horizontal vector neutral wind and temperature fields at altitude 130-300 km. The formulation of the VSH model is explained in detail, and some typical results obtained with a preliminary version (applicable to December solstice at solar maximum) are presented graphically. Good agreement with DE-2 satellite measurements is demonstrated.

  13. Dynamic mechanical response and a constitutive model of Fe-based high temperature alloy at high temperatures and strain rates.

    PubMed

    Su, Xiang; Wang, Gang; Li, Jianfeng; Rong, Yiming

    2016-01-01

    The effects of strain rate and temperature on the dynamic behavior of an Fe-based high-temperature alloy were studied. The strain rates were 0.001-12,000 s⁻¹, at temperatures ranging from room temperature to 800 °C. A phenomenological constitutive model (a Power-Law constitutive model) was proposed that accounts for the adiabatic temperature rise and accurate material thermophysical properties; within this framework, the effect of the specific heat capacity on the adiabatic temperature rise was studied. The constitutive model was verified to be accurate by comparison of predicted and experimental results.
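
    The abstract does not give the model's exact terms, so the sketch below shows only a generic multiplicative power-law flow-stress form with strain hardening, rate sensitivity, and thermal softening; all exponents and constants are illustrative assumptions, not the fitted parameters of the paper.

```python
# Hedged sketch of a generic multiplicative power-law flow-stress form,
# sigma = A * eps**n * (rate/rate0)**m * (1 - T_star**q), where T_star is the
# homologous temperature.  This is only the generic shape of such phenomenological
# models; the exact terms and coefficients of the paper's Power-Law model (and its
# adiabatic temperature-rise coupling) are not reproduced here.
def flow_stress(strain, strain_rate, temp_c,
                A=900.0, n=0.15, m=0.02, q=1.0,
                rate0=1.0, t_room=25.0, t_melt=1450.0):
    t_star = max(0.0, min(1.0, (temp_c - t_room) / (t_melt - t_room)))
    return A * strain**n * (strain_rate / rate0)**m * (1.0 - t_star**q)

# Example: thermal softening at 800 degC versus room temperature, at 1000 1/s.
print(flow_stress(0.2, 1000.0, 25.0), flow_stress(0.2, 1000.0, 800.0))
```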

  14. Apply a hydrological model to estimate local temperature trends

    NASA Astrophysics Data System (ADS)

    Igarashi, Masao; Shinozawa, Tatsuya

    2014-03-01

    Continuous time series {f(x)}, such as a depth of water, are written f(x) = T(x)+P(x)+S(x)+C(x) in hydrological science, where T(x), P(x), S(x) and C(x) are called the trend, periodic, stochastic and catastrophic components, respectively. We simplify this model and apply it to local temperature data such as those given by E. Halley (1693) and records for the UK (1853-2010), Germany (1880-2010), and Japan (1876-2010). We also apply the model to CO2 data. The model coefficients are evaluated by symbolic computation using a standard personal computer. The accuracy of the obtained nonlinear curve is evaluated by the arithmetic mean of the relative errors between the data and the estimates. E. Halley estimated the temperature of Gresham College from 11/1692 to 11/1693. The simplified model shows that the temperature at that time was rather cold compared with recent London temperatures. The UK and Germany data sets show that the maximum and minimum temperatures increased slowly from the 1890s to the 1940s, increased rapidly from the 1940s to the 1980s and have been decreasing since the 1980s, with the exception of a few local stations. The trend for Japan is similar to these results.
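
    The decomposition f(x) = T(x) + P(x) + S(x) + C(x) can be illustrated with an ordinary least-squares fit of a low-order trend plus an annual sinusoid, leaving the stochastic and catastrophic parts in the residual. The sketch below is such an illustration on synthetic data, not the paper's symbolic-computation procedure.

```python
# Hedged sketch of the decomposition idea f(x) = T(x) + P(x) + S(x) + C(x): the
# trend T is taken as a low-order polynomial and the periodic part P as an annual
# sinusoid, fitted jointly by linear least squares; the stochastic and catastrophic
# components remain in the residual.  Illustration only, not the paper's procedure.
import numpy as np

def fit_trend_plus_annual(years, temps, trend_degree=2):
    t = np.asarray(years, dtype=float)
    y = np.asarray(temps, dtype=float)
    omega = 2.0 * np.pi  # one cycle per year when time is measured in years
    columns = [t**d for d in range(trend_degree + 1)]
    columns += [np.sin(omega * t), np.cos(omega * t)]
    design = np.column_stack(columns)
    coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
    fitted = design @ coeffs
    mean_rel_error = np.mean(np.abs((y - fitted) / y))
    return coeffs, mean_rel_error

# Example with synthetic monthly data: a warming trend plus a seasonal cycle.
time = np.arange(1880, 2010, 1.0 / 12.0)
series = (9.0 + 0.007 * (time - 1880) + 4.0 * np.sin(2 * np.pi * time)
          + 0.3 * np.random.default_rng(0).normal(size=time.size))
print(fit_trend_plus_annual(time - 1880, series)[1])
```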

  15. STREAM TEMPERATURE SIMULATION OF FORESTED RIPARIAN AREAS: II. MODEL APPLICATION

    EPA Science Inventory

    The SHADE-HSPF modeling system described in a companion paper has been tested and applied to the Upper Grande Ronde (UGR) watershed in northeast Oregon. Sensitivities of stream temperature to the heat balance parameters in Hydrologic Simulation Program-FORTRAN (HSPF) and the ripa...

  16. Temperature dependence of bag pressure from quasiparticle model

    NASA Astrophysics Data System (ADS)

    Prasad, N.; Singh, C. P.

    2001-03-01

    A quasiparticle model with effective thermal gluon and quark masses is used to derive a temperature (T)- and baryon chemical potential (μ)-dependent bag constant B(μ,T). The consequences of such a bag constant for the equation of state (EOS) of a deconfined quark-gluon plasma (QGP) are obtained.

  17. Modeling temperature variations in a pilot plant thermophilic anaerobic digester.

    PubMed

    Valle-Guadarrama, Salvador; Espinosa-Solares, Teodoro; López-Cruz, Irineo L; Domaschko, Max

    2011-05-01

    A model that predicts temperature changes in a pilot plant thermophilic anaerobic digester was developed based on fundamental thermodynamic laws. The methodology utilized two simulation strategies. In the first, the model equations were solved through a search routine based on a least-squares optimization criterion, from which the overall heat transfer coefficient values for both the biodigester and the heat exchanger were determined. In the second, the simulation was performed with variable values of these overall coefficients. The predictions with both strategies reproduced the experimental data within 5% of the temperature span permitted in the equipment by the system control, which validated the model. The temperature variation was affected by the heterogeneity of the feeding and extraction processes, by the heterogeneity of the digestate recirculation through the heating system and by the lack of perfect mixing inside the biodigester tank. The use of variable overall heat transfer coefficients improved the temperature change prediction and reduced the effect of the non-ideal performance of the pilot plant modeled.

  18. Models of Solar Irradiance Variability and the Instrumental Temperature Record

    NASA Technical Reports Server (NTRS)

    Marcus, S. L.; Ghil, M.; Ide, K.

    1998-01-01

    The effects of decade-to-century (Dec-Cen) variations in total solar irradiance (TSI) on global mean surface temperature Ts during the pre-Pinatubo instrumental era (1854-1991) are studied by using two different proxies for TSI and a simplified version of the IPCC climate model.

  19. HIGH TEMPERATURE HIGH PRESSURE THERMODYNAMIC MEASUREMENTS FOR COAL MODEL COMPOUNDS

    SciTech Connect

    Vinayak N. Kabadi

    2000-05-01

    The Vapor Liquid Equilibrium (VLE) measurement setup of this work was first established several years ago. It is a flow-type high temperature, high pressure apparatus which was designed to operate below 500 C and 2000 psia. Compared with the static method, this method has three major advantages: first, a large quantity of sample can be obtained from the system without disturbing the previously established equilibrium state; second, the residence time of the sample in the equilibrium cell is greatly reduced, so decomposition or contamination of the sample can be effectively prevented; third, the flow system allows the sample to degas as it heats up, since any non-condensable gas will exit in the vapor stream, accumulate in the vapor condenser, and not be recirculated. The first few runs were made with the Quinoline-Tetralin system, and the results were in fair agreement with the literature data. A former graduate student, Amad, used the same apparatus to acquire VLE data for the Benzene-Ethylbenzene system. This work used essentially the same setup (with several modifications) to obtain VLE data for the Ethylbenzene-Quinoline system.

  20. Modeling Allometric Relationships in Leaves of Young Rapeseed (Brassica napus L.) Grown at Different Temperature Treatments

    PubMed Central

    Tian, Tian; Wu, Lingtong; Henke, Michael; Ali, Basharat; Zhou, Weijun; Buck-Sorlin, Gerhard

    2017-01-01

    Functional–structural plant modeling (FSPM) is a fast and dynamic method to predict plant growth under varying environmental conditions. Temperature is a primary factor affecting the rate of plant development. In the present study, we used three different temperature treatments (10/14°C, 18/22°C, and 26/30°C) to test the effect of temperature on growth and development of rapeseed (Brassica napus L.) seedlings. Plants were sampled at regular intervals (every 3 days) to obtain growth data during the length of the experiment (1 month in total). Total leaf dry mass, leaf area, leaf mass per area (LMA), width-length ratio, and the ratio of petiole length to leaf blade length (PBR), were determined and statistically analyzed, and contributed to a morphometric database. LMA under high temperature was significantly smaller than LMA under medium and low temperature, while leaves at high temperature were significantly broader. An FSPM of rapeseed seedlings featuring a growth function used for leaf extension and biomass accumulation was implemented by combining measurement with literature data. The model delivered new insights into growth and development dynamics of winter oilseed rape seedlings. The present version of the model mainly focuses on the growth of plant leaves. However, future extensions of the model could be used in practice to better predict plant growth in spring and potential cold damage of the crop. PMID:28377775

  1. Daily indoor-to-outdoor temperature and humidity relationships: a sample across seasons and diverse climatic regions

    PubMed Central

    Nguyen, Jennifer L.; Dockery, Douglas W.

    2015-01-01

    The health consequences of heat and cold are usually evaluated based on associations with outdoor measurements at the nearest weather reporting station. However, people in the developed world spend little time outdoors, especially during extreme temperature events. We examined the association between indoor and outdoor temperature and humidity in a range of climates. We measured indoor temperature, apparent temperature, relative humidity, dew point, and specific humidity (a measure of moisture content in air) for one calendar year (2012) in a convenience sample of eight diverse locations ranging from the equatorial region (10°N) to the Arctic (64°N). We then compared the indoor conditions to outdoor values recorded at the nearest airport weather station. We found that the shape of the indoor-to-outdoor temperature and humidity relationships varied across seasons and locations. Indoor temperatures showed little variation across season and location. There was large variation in indoor relative humidity between seasons and between locations which was independent of outdoor, airport measurements. On the other hand, indoor specific humidity, and to a lesser extent dew point, tracked with outdoor, airport measurements both seasonally and between climates, across a wide range of outdoor temperatures. Our results suggest that, depending on the measure, season, and location, outdoor weather measurements can be reliably used to represent indoor exposures and that, in general, outdoor measures of actual moisture content in air better capture indoor exposure than temperature and relative humidity. Therefore, absolute measures of water vapor should be examined in conjunction with other measures (e.g. temperature, relative humidity) in studies of the effect of weather and climate on human health. PMID:26054827
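
    The absolute moisture measures compared in this study (dew point and specific humidity) can be derived from routine temperature and relative-humidity readings. The sketch below uses the Magnus approximation and standard psychrometric relations as illustrative assumptions; it is not the procedure used by the authors.

```python
# Illustrative sketch: deriving dew point and specific humidity (the absolute
# moisture measures favored in the abstract) from temperature and relative
# humidity. The Magnus coefficients and surface pressure are assumptions.
import math

def saturation_vapor_pressure_hpa(t_c):
    """Magnus approximation over liquid water (hPa)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def dew_point_c(t_c, rh_percent):
    e = rh_percent / 100.0 * saturation_vapor_pressure_hpa(t_c)
    gamma = math.log(e / 6.112)
    return 243.12 * gamma / (17.62 - gamma)

def specific_humidity_g_per_kg(t_c, rh_percent, p_hpa=1013.25):
    e = rh_percent / 100.0 * saturation_vapor_pressure_hpa(t_c)
    q = 0.622 * e / (p_hpa - 0.378 * e)   # kg water vapor per kg moist air
    return 1000.0 * q

# Example: an outdoor airport reading vs. a heated indoor space holding roughly
# the same moisture content -- RH differs widely, specific humidity does not.
print(round(specific_humidity_g_per_kg(0.0, 80.0), 2), "g/kg outdoors")
print(round(specific_humidity_g_per_kg(21.0, 25.0), 2), "g/kg indoors")
print(round(dew_point_c(21.0, 25.0), 1), "degC indoor dew point")
```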

  2. Daily indoor-to-outdoor temperature and humidity relationships: a sample across seasons and diverse climatic regions.

    PubMed

    Nguyen, Jennifer L; Dockery, Douglas W

    2016-02-01

    The health consequences of heat and cold are usually evaluated based on associations with outdoor measurements collected at a nearby weather reporting station. However, people in the developed world spend little time outdoors, especially during extreme temperature events. We examined the association between indoor and outdoor temperature and humidity in a range of climates. We measured indoor temperature, apparent temperature, relative humidity, dew point, and specific humidity (a measure of moisture content in air) for one calendar year (2012) in a convenience sample of eight diverse locations ranging from the equatorial region (10 °N) to the Arctic (64 °N). We then compared the indoor conditions to outdoor values recorded at the nearest airport weather station. We found that the shape of the indoor-to-outdoor temperature and humidity relationships varied across seasons and locations. Indoor temperatures showed little variation across season and location. There was large variation in indoor relative humidity between seasons and between locations which was independent of outdoor airport measurements. On the other hand, indoor specific humidity, and to a lesser extent dew point, tracked with outdoor, airport measurements both seasonally and between climates, across a wide range of outdoor temperatures. These results suggest that, in general, outdoor measures of actual moisture content in air better capture indoor conditions than outdoor temperature and relative humidity. Therefore, in studies where water vapor is among the parameters of interest for examining weather-related health effects, outdoor measurements of actual moisture content can be more reliably used as a proxy for indoor exposure than the more commonly examined variables of temperature and relative humidity.

  3. Study of Aerothermodynamic Modeling Issues Relevant to High-Speed Sample Return Vehicles

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.

    2014-01-01

This paper examines the application of state-of-the-art coupled ablation and radiation simulations to high-speed sample return vehicles, such as those returning from Mars or an asteroid. A defining characteristic of these entries is that the surface recession rates and temperatures are driven by nonequilibrium convective and radiative heating through a boundary layer with significant surface blowing and ablation products. Measurements relevant to validating the simulation of these phenomena are reviewed and the Stardust entry is identified as providing the best relevant measurements. A coupled ablation and radiation flowfield analysis is presented that implements a finite-rate surface chemistry model. Comparisons between this finite-rate model and an equilibrium ablation model show that, while good agreement is seen for diffusion-limited oxidation cases, the finite-rate model predicts up to 50% lower char rates than the equilibrium model at sublimation conditions. Both the equilibrium and finite-rate models predict significant negative mass flux at the surface due to sublimation of atomic carbon. A sensitivity analysis to flowfield and surface chemistry rates shows that, for a sample return capsule at 10, 12, and 14 km/s, the sublimation rates for C and C3 provide the largest changes to the convective flux, radiative flux, and char rate. A parametric uncertainty analysis of the radiative heating due to radiation modeling parameters indicates uncertainties ranging from 27% at 10 km/s to 36% at 14 km/s. Applying the developed coupled analysis to the Stardust entry results in temperatures within 10% of those inferred from observations, and final recession values within 20% of measurements, which improves upon the 60% over-prediction at the stagnation point obtained through an uncoupled analysis. Emission from CN Violet is shown to be over-predicted by nearly an order of magnitude, which is consistent with the results of previous independent analyses. Finally, the

  4. RMSRo: A vitrinite reflectance model consistent with the temperature-apatite fission track system

    NASA Astrophysics Data System (ADS)

    Nielsen, Søren B.; Clausen, Ole R.; McGregor, Eoin D.

    2014-05-01

Observed temperature, vitrinite reflectance and apatite fission tracks provide different but related information regarding temperature history. Their combined use in borehole heat flow determination as well as thermal and tectonic reconstruction requires a set of predictive models which are internally consistent. While the temperature-fission track system seems well-calibrated, several different vitrinite reflectance models exist. Although variability in vitrinite reflectance values is related to natural variations in the organic material, such as initial composition, depositional environment, and degree of oxygenation, the most important factor affecting the construction of vitrinite reflectance models is bias in the geological temperature history of the samples used for calibration. Here we add to the vitrinite reflectance calibration data set of Suggate (1998) with more borehole data and construct a kinetic vitrinite reflectance model by minimizing the root mean square (RMS) distance between the calibration data set and model predictions. We validate this kinetic model on wells in the North Sea which have maximum temperature at the present day, and on two wells in the eastern North Sea, which have experienced cooling since the early Eocene thermal maximum. The latter two wells have unusually high quality temperature, vitrinite reflectance and fission track data, and it appears that the independently derived RMSRo model is consistent with the temperature-apatite fission track system. Keywords: vitrinite reflectance, basin analysis, thermal history, hydrocarbon exploration, apatite fission tracks. Suggate, R.P., 1998. Relations between depth of burial, vitrinite reflectance and geothermal gradient. Journal of Petroleum Geology, v. 21(1), January 1998, 5-32.

  5. Modelling of temperature and perfusion during scalp cooling.

    PubMed

    Janssen, F E M; Van Leeuwen, G M J; Van Steenhoven, A A

    2005-09-07

    Hair loss is a feared side effect of chemotherapy treatment. It may be prevented by cooling the scalp during administration of cytostatics. The supposed mechanism is that by cooling the scalp, both temperature and perfusion are diminished, affecting drug supply and drug uptake in the hair follicle. However, the effect of scalp cooling varies strongly. To gain more insight into the effect of cooling, a computer model has been developed that describes heat transfer in the human head during scalp cooling. Of main interest in this study are the mutual influences of scalp temperature and perfusion during cooling. Results of the standard head model show that the temperature of the scalp skin is reduced from 34.4 degrees C to 18.3 degrees C, reducing tissue blood flow to 25%. Based upon variations in both thermal properties and head anatomies found in the literature, a parameter study was performed. The results of this parameter study show that the most important parameters affecting both temperature and perfusion are the perfusion coefficient Q10 and the thermal resistances of both the fat and the hair layer. The variations in the parameter study led to skin temperature ranging from 10.1 degrees C to 21.8 degrees C, which in turn reduced relative perfusion to 13% and 33%, respectively.
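
    The temperature-perfusion coupling at the core of this model can be sketched with a simple Q10 relation, in which relative perfusion falls off exponentially as the skin cools. The functional form and the Q10 value below are assumptions chosen only to be broadly consistent with the numbers quoted in the abstract, not the authors' head model.

```python
# Minimal sketch of the temperature-perfusion coupling used in scalp-cooling
# models: relative perfusion falls off with temperature via a Q10 law.
# The Q10 value and the functional form below are illustrative assumptions.

def relative_perfusion(t_skin_c, t_ref_c=34.4, q10=2.5):
    """Perfusion relative to the reference (warm) state."""
    return q10 ** ((t_skin_c - t_ref_c) / 10.0)

for t in (34.4, 21.8, 18.3, 10.1):
    print(f"T_skin = {t:5.1f} degC -> perfusion ~ {relative_perfusion(t):.0%}")
```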

  6. Modelling of temperature and perfusion during scalp cooling

    NASA Astrophysics Data System (ADS)

    Janssen, F. E. M.; Van Leeuwen, G. M. J.; Van Steenhoven, A. A.

    2005-09-01

    Hair loss is a feared side effect of chemotherapy treatment. It may be prevented by cooling the scalp during administration of cytostatics. The supposed mechanism is that by cooling the scalp, both temperature and perfusion are diminished, affecting drug supply and drug uptake in the hair follicle. However, the effect of scalp cooling varies strongly. To gain more insight into the effect of cooling, a computer model has been developed that describes heat transfer in the human head during scalp cooling. Of main interest in this study are the mutual influences of scalp temperature and perfusion during cooling. Results of the standard head model show that the temperature of the scalp skin is reduced from 34.4 °C to 18.3 °C, reducing tissue blood flow to 25%. Based upon variations in both thermal properties and head anatomies found in the literature, a parameter study was performed. The results of this parameter study show that the most important parameters affecting both temperature and perfusion are the perfusion coefficient Q10 and the thermal resistances of both the fat and the hair layer. The variations in the parameter study led to skin temperature ranging from 10.1 °C to 21.8 °C, which in turn reduced relative perfusion to 13% and 33%, respectively.

  7. Numerical modeling of high temperature fracture of metallic composites

    NASA Astrophysics Data System (ADS)

    Cendales, E. D.; García, A.

    2016-02-01

Mechanical properties of materials are strongly affected by increasing temperature, showing behaviors, such as creep, that can lead to failure. This article provides a brief theoretical description of the fracture of materials, focusing on creep and intergranular creep. Parameters such as creep strain, strain rate, time to failure, and displacement of the crack tip were studied for a selected metallic glass at high temperature. The paper presents a computational numerical model that establishes the mechanical behavior of a bulk metallic glass composite, Zr52.5Cu18Ni14.5Al10Ti5, in the presence of cracking when the material is subjected to temperatures exceeding 30% of its melting temperature. The results obtained by computer simulation correlate with the material behavior observed in creep tests. From the results we conclude that the mechanical properties of the material generally do not undergo major changes at high temperatures. However, at temperatures greater than 650 °C, the application of stress during creep leads to failure in this kind of material.

  8. Effects of high temperature on different restorations in forensic identification: Dental samples and mandible

    PubMed Central

    Patidar, Kalpana A; Parwani, Rajkumar; Wanjari, Sangeeta

    2010-01-01

Introduction: The forensic odontologist strives to utilize the charred human dentition throughout each stage of dental evaluation; restorations are as unique as fingerprints, and their radiographic morphology as well as the types of filling materials are often the main features for identification. The ability to detect residual restorative material and the composition of an unrecovered adjacent restoration is a valuable tool in the presumptive identification of the dentition of a burned victim. Gold, silver amalgam, silicate restorations, and so on, have different resistances to prolonged high temperature; therefore, the identification of burned bodies can be correlated with adequate qualities and quantities of the traces. Most of the dental examination relies heavily on the presence of the restoration as well as the relationship of one dental structure to another. This greatly narrows the search for the final identification that is based on postmortem data. Aim: The purpose of this study is to examine the resistance of teeth, different restorative materials, and the mandible to variable temperature and duration, for the purpose of identification. Materials and Methods: The study was conducted on 72 extracted teeth, which were divided into six groups of 12 teeth each based on the type of restorative material (group 1 - unrestored teeth, group 2 - teeth restored with Zn3(PO4)2, group 3 - silver amalgam, group 4 - glass ionomer cement, group 5 - Ni-Cr metal crown, group 6 - metal ceramic crown), and two specimens of the mandible. The effect of incineration at 400°C (5 mins, 15 mins, 30 mins) and 1100°C (15 mins) was studied. Results: Damage to the teeth subjected to variable temperatures and time can be categorized as intact (no damage), scorched (superficially parched and discolored), charred (reduced to carbon by incomplete combustion) and incinerated (burned to ashes). PMID:21189989

  9. Geostationary Operational Environmental Satellite (GOES) Gyro Temperature Model

    NASA Technical Reports Server (NTRS)

    Rowe, J. N.; Noonan, C. H.; Garrick, J.

    1996-01-01

The Geostationary Operational Environmental Satellite (GOES) I/M series of spacecraft are geostationary weather satellites that use the latest in weather imaging technology. The inertial reference unit package onboard consists of three gyroscopes measuring angular velocity along each of the spacecraft's body axes. This digital integrating rate assembly (DIRA) is calibrated and used to maintain spacecraft attitude during orbital delta-V maneuvers. During the early orbit support of GOES-8 (April 1994), the gyro drift rate biases exhibited a large dependency on gyro temperature. This complicated the calibration and introduced errors into the attitude during delta-V maneuvers. Following GOES-8, a model of the DIRA temperature and drift rate bias variation was developed for GOES-9 (May 1995). This model was used to project a value of the DIRA bias to use during the orbital delta-V maneuvers based on the bias change observed as the DIRA warmed up during the calibration. The model also optimizes the yaw reorientation necessary to achieve the correct delta-V pointing attitude. As a result, a higher accuracy was achieved on GOES-9, leading to more efficient delta-V maneuvers and a propellant savings. This paper summarizes the data observed on GOES-8 and the complications they caused in calibration, the DIRA temperature/drift rate model, and the application and results of the model during GOES-9 support.
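
    At its core, the DIRA temperature/drift-rate model described here is a regression of gyro bias against gyro temperature that can be extrapolated to the conditions expected during a maneuver. The linear form and every number in the sketch below are illustrative assumptions, not GOES flight data.

```python
# Illustrative sketch of a gyro drift-rate bias vs. temperature model of the
# kind described for GOES-9: fit the bias observed while the DIRA warms up
# during calibration, then project it to the expected maneuver temperature.
# The linear form and all numbers are assumptions for illustration.
import numpy as np

# (temperature [degC], drift-rate bias [deg/hr]) observed during warm-up
temp = np.array([22.0, 24.5, 27.0, 29.5, 32.0])
bias = np.array([0.41, 0.46, 0.52, 0.57, 0.63])

slope, intercept = np.polyfit(temp, bias, 1)   # bias ~ intercept + slope * T

t_maneuver = 35.0   # assumed DIRA temperature during the delta-V burn
print(f"projected bias at {t_maneuver} degC: "
      f"{intercept + slope * t_maneuver:.3f} deg/hr")
```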

  10. Comparison of climate model simulated and observed borehole temperature profiles

    NASA Astrophysics Data System (ADS)

    Gonzalez-Rouco, J. F.; Stevens, M. B.; Beltrami, H.; Goosse, H.; Rath, V.; Zorita, E.; Smerdon, J.

    2009-04-01

Advances in understanding climate variability through the last millennium lean on simulation and reconstruction efforts. Progress in the integration of both approaches can potentially provide new means of assessing confidence in model projections of future climate change, of constraining the range of climate sensitivity and/or attributing past changes found in proxy evidence to external forcing. This work specifically addresses possible strategies for comparing paleoclimate model simulations and the information recorded in borehole temperature profiles (BTPs). Initial efforts have allowed us to design means of comparing model-simulated and observed BTPs in the context of the climate of the last millennium. This can be done by diffusing the simulated temperatures into the ground in order to produce synthetic BTPs that can in turn be assigned to collocated, real BTPs. Results suggest that borehole temperatures at large and regional scales are sensitive to changes in external forcing over the last centuries. The comparison between borehole climate reconstructions and model simulations may also be subject to non-negligible uncertainties produced by the influence of past glacial and Holocene changes. While the thermal climate influence of the last deglaciation can be found well below 1000 m depth, such changes can potentially exert an influence on our understanding of subsurface climate in the top ca. 500 m. This issue is illustrated in control and externally forced climate simulations of the last millennium with the ECHO-G and LOVECLIM models, respectively.
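
    Producing a synthetic BTP from a simulated surface-temperature history amounts to diffusing that history into a half-space. A standard way to sketch this is to decompose the history into step changes, each contributing a complementary-error-function anomaly; the diffusivity, the toy history, and the background gradient below are assumptions, not values from the study.

```python
# Sketch of how a simulated surface-temperature history can be diffused into
# the subsurface to produce a synthetic borehole temperature profile (BTP).
# The surface history is discretised into step changes, each contributing an
# erfc-shaped anomaly. Thermal diffusivity and the toy history are assumptions.
import numpy as np
from scipy.special import erfc

kappa = 1.0e-6                    # m^2/s, thermal diffusivity (assumed)
year = 365.25 * 24 * 3600.0
z = np.linspace(0.0, 500.0, 101)  # depth grid, m

# Toy surface-temperature anomaly history: (time before present [yr], step [K])
steps = [(500.0, -0.3), (150.0, 0.4), (50.0, 0.6)]

anomaly = np.zeros_like(z)
for t_bp, dT in steps:
    anomaly += dT * erfc(z / (2.0 * np.sqrt(kappa * t_bp * year)))

# Add a steady background: surface temperature plus a geothermal gradient
T_profile = 8.0 + 0.025 * z + anomaly
print("anomaly at 20 m depth: %.2f K" % anomaly[np.searchsorted(z, 20.0)])
```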

  11. A model of the tropical Pacific sea surface temperature climatology

    NASA Technical Reports Server (NTRS)

    Seager, Richard; Zebiak, Stephen E.; Cane, Mark A.

    1988-01-01

    A model for the climatological mean sea surface temperature (SST) of the tropical Pacific Ocean is developed. The upper ocean response is computed using a time dependent, linear, reduced gravity model, with the addition of a constant depth frictional surface layer. The full three-dimensional temperature equation and a surface heat flux parameterization that requires specification of only wind speed and total cloud cover are used to evaluate the SST. Specification of atmospheric parameters, such as air temperature and humidity, over which the ocean has direct influence, is avoided. The model simulates the major features of the observed tropical Pacific SST. The seasonal evolution of these features is generally captured by the model. Analysis of the results demonstrates the control the ocean has over the surface heat flux from ocean to atmosphere and the crucial role that dynamics play in determining the mean SST in the equatorial Pacific. The sensitivity of the model to perturbations in the surface heat flux, cloud cover specification, diffusivity, and mixed layer depth is discussed.

  12. Modelling spoilage of fresh turbot and evaluation of a time-temperature integrator (TTI) label under fluctuating temperature.

    PubMed

    Nuin, Maider; Alfaro, Begoña; Cruz, Ziortza; Argarate, Nerea; George, Susie; Le Marc, Yvan; Olley, June; Pin, Carmen

    2008-10-31

Kinetic models were developed to predict the microbial spoilage and the sensory quality of fresh fish and to evaluate the efficiency of a commercial time-temperature integrator (TTI) label, Fresh Check®, to monitor shelf life. Farmed turbot (Psetta maxima) samples were packaged in PVC film and stored at 0, 5, 10 and 15 degrees C. Microbial growth and sensory attributes were monitored at regular time intervals. The response of the Fresh Check device was measured at the same temperatures during the storage period. The sensory perception was quantified according to a global sensory indicator obtained by principal component analysis as well as to the Quality Index Method, QIM, as described by Rahman and Olley [Rahman, H.A., Olley, J., 1984. Assessment of sensory techniques for quality assessment of Australian fish. CSIRO Tasmanian Regional Laboratory. Occasional paper n. 8. Available from the Australian Maritime College library. Newnham. Tasmania]. Both methods were found equally valid to monitor the loss of sensory quality. The maximum specific growth rate of spoilage bacteria, the rate of change of the sensory indicators and the rate of change of the colour measurements of the TTI label were modelled as a function of temperature. The temperature had a similar effect on the bacteria, sensory and Fresh Check kinetics. At the time of sensory rejection, the bacterial load was ca. 10^5-10^6 cfu/g. The end of shelf life indicated by the Fresh Check label was close to the sensory rejection time. The performance of the models was validated under fluctuating temperature conditions by comparing the predicted and measured values for all microbial, sensory and TTI responses. The models have been implemented in a Visual Basic add-in for Excel called "Fish Shelf Life Prediction (FSLP)". This program predicts sensory acceptability and growth of spoilage bacteria in fish and the response of the TTI at constant and fluctuating temperature conditions. The program is freely
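
    A minimal sketch of shelf-life prediction under fluctuating temperature is given below: a secondary square-root (Ratkowsky-type) model supplies the specific growth rate at each storage temperature, and log counts are integrated over the temperature profile until the spoilage level is reached. All parameter values are assumptions and do not reproduce the paper's calibration or the FSLP tool.

```python
# Illustrative sketch of shelf-life prediction under fluctuating temperature:
# a secondary square-root (Ratkowsky-type) model gives the specific growth
# rate at each temperature, and log counts are integrated over the storage
# profile until the spoilage level is reached. All parameter values (b, T_min,
# initial and spoilage levels) are assumptions, not the paper's calibration.
import numpy as np

b, T_min = 0.03, -5.0            # Ratkowsky parameters (assumed), rate in 1/h
N0, N_spoiled = 3.0, 7.0         # log10 cfu/g at packing and at rejection (assumed)

def mu_max(T_c):
    """Maximum specific growth rate (ln units per hour)."""
    return (b * (T_c - T_min)) ** 2

# Fluctuating storage profile: 12 h at 2 degC alternating with 12 h at 8 degC
hours = np.arange(0, 240)
T_profile = np.where((hours // 12) % 2 == 0, 2.0, 8.0)

logN = N0 + np.cumsum(mu_max(T_profile)) / np.log(10)   # convert ln to log10
shelf_life = hours[np.argmax(logN >= N_spoiled)]
print(f"predicted shelf life: ~{shelf_life} h")
```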

  13. Zero temperature landscape of the random sine-Gordon model

    SciTech Connect

    Sanchez, A.; Bishop, A.R.; Cai, D.

    1997-04-01

We present a preliminary summary of the zero temperature properties of the two-dimensional random sine-Gordon model of surface growth on disordered substrates. We found that the properties of this model can be accurately computed by using lattices of moderate size, as the behavior of the model turns out to be independent of the size above a certain length (≈128 × 128 lattices). Subsequently, we show that the behavior of the height difference correlation function is of (log r)^2 type up to a certain correlation length (ξ ≈ 20), which rules out predictions of log r behavior for all temperatures obtained by replica-variational techniques. Our results open the way to a better understanding of the complex landscape presented by this system, which has been the subject of very many (contradictory) analyses.

  14. Unified constitutive models for high-temperature structural applications

    NASA Technical Reports Server (NTRS)

    Lindholm, U. S.; Chan, K. S.; Bodner, S. R.; Weber, R. M.; Walker, K. P.

    1988-01-01

    Unified constitutive models are characterized by the use of a single inelastic strain rate term for treating all aspects of inelastic deformation, including plasticity, creep, and stress relaxation under monotonic or cyclic loading. The structure of this class of constitutive theory pertinent for high temperature structural applications is first outlined and discussed. The effectiveness of the unified approach for representing high temperature deformation of Ni-base alloys is then evaluated by extensive comparison of experimental data and predictions of the Bodner-Partom and the Walker models. The use of the unified approach for hot section structural component analyses is demonstrated by applying the Walker model in finite element analyses of a benchmark notch problem and a turbine blade problem.

  15. Hall Thruster Modeling with a Given Temperature Profile

    SciTech Connect

    L. Dorf; V. Semenov; Y. Raitses; N.J. Fisch

    2002-06-12

A quasi one-dimensional steady-state model of the Hall thruster is presented. For a given mass flow rate, magnetic field profile, and discharge voltage, a unique solution can be constructed, assuming that the thruster operates in one of the two regimes: with or without the anode sheath. It is shown that for a given temperature profile, the applied discharge voltage uniquely determines the operating regime; for discharge voltages greater than a certain value, the sheath disappears. That result is obtained over a wide range of incoming neutral velocities, channel lengths and widths, and cathode plane locations. A good correlation between the quasi one-dimensional model and experimental results can be achieved by selecting an appropriate temperature profile. We also show how the presented model can be used to obtain a two-dimensional potential distribution.

  16. A middle atmosphere temperature reference model from satellite measurements

    NASA Astrophysics Data System (ADS)

    Barnett, J. J.; Corney, M.

    Temperature fields in the stratosphere and mesosphere have been derived from radiance measurements made by the Nimbus 5 SCR, the Nimbus 6 PMR, and the Nimbus 7 SAMS and LIMS radiometers. These instruments cover different latitude and height ranges and different times during the 1973-1983 period. The problems of combining different data sets are discussed, and examples from a proposed model atmosphere for the stratosphere and mesosphere are presented. The model is given in terms of zonal means and amplitude and phase of zonal waves 1 and 2 for temperature and geopotential height, as functions of latitude and pressure for each calendar month. Comparisons are made with the CIRA 1972 and the Koshelkov Southern Hemisphere models and with the SAMS results and in-situ rocket/radio sondes.
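
    A reference model tabulated as a zonal mean plus the amplitude and phase of zonal waves 1 and 2 is evaluated at a given longitude by a short harmonic sum, as sketched below. The numerical values are placeholders, not entries from the proposed model.

```python
# Sketch of how a reference-model entry of the tabulated form (zonal mean plus
# amplitude/phase of zonal waves 1 and 2) is turned into a temperature at a
# given longitude. The numerical values are placeholders, not CIRA-style data.
import math

def temperature(lon_deg, zonal_mean, waves):
    """waves: list of (wavenumber s, amplitude [K], phase [deg longitude])."""
    lon = math.radians(lon_deg)
    return zonal_mean + sum(
        amp * math.cos(s * lon - math.radians(phase)) for s, amp, phase in waves
    )

# Example grid point: 60 N, 10 hPa, January (placeholder values)
entry = dict(zonal_mean=215.0, waves=[(1, 8.0, 120.0), (2, 3.0, 40.0)])
for lon in (0, 90, 180, 270):
    print(lon, "deg E:", round(temperature(lon, **entry), 1), "K")
```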

  17. Effects of Low-Temperature Plasma-Sterilization on Mars Analog Soil Samples Mixed with Deinococcus radiodurans

    PubMed Central

    Schirmack, Janosch; Fiebrandt, Marcel; Stapelmann, Katharina; Schulze-Makuch, Dirk

    2016-01-01

We used Ar plasma-sterilization at a temperature below 80 °C to examine its effects on the viability of microorganisms when intermixed with the tested soil. Due to the relatively low temperature, this method is not thought to affect the properties of the soil, particularly its organic component, to a significant degree. The method has previously been shown to work well on spacecraft parts. The selected microorganism for this test was Deinococcus radiodurans R1, which is known for its remarkable resistance to radiation effects. Our results showed a reduction in microbial counts after applying a low temperature plasma, but not to a degree suitable for sterilization of the soil. Even increasing the treatment duration from 1.5 to 45 min did not achieve satisfactory results, but only resulted in a mean cell reduction rate of 75% compared to the untreated control samples. PMID:27240407

  18. Numerical Modeling of High-Temperature Corrosion Processes

    NASA Technical Reports Server (NTRS)

    Nesbitt, James A.

    1995-01-01

    Numerical modeling of the diffusional transport associated with high-temperature corrosion processes is reviewed. These corrosion processes include external scale formation and internal subscale formation during oxidation, coating degradation by oxidation and substrate interdiffusion, carburization, sulfidation and nitridation. The studies that are reviewed cover such complexities as concentration-dependent diffusivities, cross-term effects in ternary alloys, and internal precipitation where several compounds of the same element form (e.g., carbides of Cr) or several compounds exist simultaneously (e.g., carbides containing varying amounts of Ni, Cr, Fe or Mo). In addition, the studies involve a variety of boundary conditions that vary with time and temperature. Finite-difference (F-D) techniques have been applied almost exclusively to model either the solute or corrodant transport in each of these studies. Hence, the paper first reviews the use of F-D techniques to develop solutions to the diffusion equations with various boundary conditions appropriate to high-temperature corrosion processes. The bulk of the paper then reviews various F-D modeling studies of diffusional transport associated with high-temperature corrosion.
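
    The finite-difference approach reviewed here can be illustrated with a minimal explicit scheme for one-dimensional diffusion with a concentration-dependent diffusivity and a fixed surface concentration. The form of D(C), the grid, and the boundary values are assumptions, not taken from any of the reviewed studies.

```python
# Minimal explicit finite-difference sketch of the kind of diffusion problem
# reviewed here: 1-D transport of a solute/corrodant with a concentration-
# dependent diffusivity and a fixed surface concentration. D(C), the grid and
# the boundary values are illustrative assumptions, not from a specific study.
import numpy as np

nx, L = 101, 100e-6               # grid points, domain depth (m)
dx = L / (nx - 1)
D0, D1 = 1e-14, 5e-14             # m^2/s, diffusivity bounds (assumed)
C = np.zeros(nx)                  # initial solute concentration (mole fraction)
C_surface = 0.05                  # fixed surface boundary condition
C[0] = C_surface

def D(conc):
    """Assumed concentration-dependent diffusivity, linear in C."""
    return D0 + (D1 - D0) * conc / C_surface

t, t_end = 0.0, 10.0 * 3600.0     # simulate 10 h of exposure
while t < t_end:
    Dmid = 0.5 * (D(C[1:]) + D(C[:-1]))          # diffusivity at cell faces
    dt = 0.4 * dx**2 / Dmid.max()                # explicit stability limit
    flux = -Dmid * np.diff(C) / dx
    C[1:-1] -= dt * np.diff(flux) / dx           # dC/dt = -d(flux)/dx
    C[0] = C_surface                             # surface held constant
    t += dt

print("penetration depth (C > 1%% of surface): %.1f um"
      % (1e6 * dx * np.argmax(C < 0.01 * C_surface)))
```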

  19. Numerical modeling of high-temperature corrosion processes

    SciTech Connect

    Nesbitt, J.A.

    1995-08-01

    Numerical modeling of the diffusional transport associated with high-temperature corrosion processes is reviewed. These corrosion processes include external scale formation and internal subscale formation during oxidation, coating degradation by oxidation and substrate interdiffusion, carburization, sulfidation and nitridation. The studies that are reviewed cover such complexities as concentration-dependent diffusivities, cross-term effects in ternary alloys, and internal precipitation where several compounds of the same element may form (e.g., carbides of Cr) or several compounds exist simultaneously (e.g., carbides containing amounts of Ni, Cr, Fe or Mo). In addition, the studies involve a variety of boundary conditions that vary with time and temperature. Finite-difference (F-D) techniques have been applied almost exclusively to model either the solute or corrodant transport in each of these studies. Hence, the paper first reviews the use of F-D techniques to develop solutions to the diffusion equations with various boundary conditions appropriate to high-temperature corrosion processes. The bulk of the paper then reviews various F-D modeling studies of diffusional transport associated with high-temperature corrosion.

  20. Sample-dependent phase transitions in disordered exclusion models

    NASA Astrophysics Data System (ADS)

    Enaud, C.; Derrida, B.

    2004-04-01

    We give numerical evidence that the location of the first-order phase transition between the low- and the high-density phases of the one-dimensional asymmetric simple exclusion process with open boundaries becomes sample dependent when quenched disorder is introduced for the hopping rates.
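
    A minimal Monte Carlo sketch of the model in question, an open-boundary asymmetric simple exclusion process with quenched random hopping rates, is given below. The lattice size, rate distribution, and boundary rates are illustrative assumptions; measuring the bulk density for a single disorder sample is one way to see which phase that sample sits in.

```python
# Minimal Monte Carlo sketch of the model studied here: a one-dimensional
# asymmetric simple exclusion process with open boundaries and quenched
# random hopping rates, from which the bulk density (and hence the phase)
# can be measured for a given disorder sample. Lattice size, the rate
# distribution and the boundary rates alpha/beta are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
L = 200
p = rng.uniform(0.5, 1.0, L - 1)     # quenched forward hopping rates (one sample)
alpha, beta = 0.4, 0.6               # injection and extraction rates

tau = np.zeros(L, dtype=int)         # site occupations
density_trace = []
for sweep in range(5000):
    for _ in range(L):               # random sequential updates
        i = rng.integers(-1, L)
        if i == -1:                  # injection at the left boundary
            if tau[0] == 0 and rng.random() < alpha:
                tau[0] = 1
        elif i == L - 1:             # extraction at the right boundary
            if tau[-1] == 1 and rng.random() < beta:
                tau[-1] = 0
        elif tau[i] == 1 and tau[i + 1] == 0 and rng.random() < p[i]:
            tau[i], tau[i + 1] = 0, 1
    if sweep > 1000:                 # discard the transient
        density_trace.append(tau.mean())

print("bulk density for this disorder sample:", round(np.mean(density_trace), 3))
```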

  1. The topomer-sampling model of protein folding

    PubMed Central

    Debe, Derek A.; Carlson, Matt J.; Goddard, William A.

    1999-01-01

Clearly, a protein cannot sample all of its conformations (e.g., ≈3^100 ≈ 10^48 for a 100 residue protein) on an in vivo folding timescale (<1 s). To investigate how the conformational dynamics of a protein can accommodate subsecond folding time scales, we introduce the concept of the native topomer, which is the set of all structures similar to the native structure (obtainable from the native structure through local backbone coordinate transformations that do not disrupt the covalent bonding of the peptide backbone). We have developed a computational procedure for estimating the number of distinct topomers required to span all conformations (compact and semicompact) for a polypeptide of a given length. For 100 residues, we find ≈3 × 10^7 distinct topomers. Based on the distance calculated between different topomers, we estimate that a 100-residue polypeptide diffusively samples one topomer every ≈3 ns. Hence, a 100-residue protein can find its native topomer by random sampling in just ≈100 ms. These results suggest that subsecond folding of modest-sized, single-domain proteins can be accomplished by a two-stage process of (i) topomer diffusion: random, diffusive sampling of the 3 × 10^7 distinct topomers to find the native topomer (≈0.1 s), followed by (ii) intratopomer ordering: nonrandom, local conformational rearrangements within the native topomer to settle into the precise native state. PMID:10077555
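
    The abstract's two headline estimates are easy to verify with a couple of lines of arithmetic, as sketched below.

```python
# Quick check of the abstract's estimates: the conformational count 3^100 is
# about 10^48, and randomly sampling ~3e7 topomers at ~3 ns each takes ~0.1 s.
import math

print("log10(3^100) =", round(100 * math.log10(3), 1))   # ~47.7, i.e. ~10^48
print("search time  =", 3e7 * 3e-9, "s")                 # ~0.09 s
```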

  2. Activation energy for a model ferrous-ferric half reaction from transition path sampling

    NASA Astrophysics Data System (ADS)

    Drechsel-Grau, Christof; Sprik, Michiel

    2012-01-01

    Activation parameters for the model oxidation half reaction of the classical aqueous ferrous ion are compared for different molecular simulation techniques. In particular, activation free energies are obtained from umbrella integration and Marcus theory based thermodynamic integration, which rely on the diabatic gap as the reaction coordinate. The latter method also assumes linear response, and both methods obtain the activation entropy and the activation energy from the temperature dependence of the activation free energy. In contrast, transition path sampling does not require knowledge of the reaction coordinate and directly yields the activation energy [C. Dellago and P. G. Bolhuis, Mol. Simul. 30, 795 (2004), 10.1080/08927020412331294869]. Benchmark activation energies from transition path sampling agree within statistical uncertainty with activation energies obtained from standard techniques requiring knowledge of the reaction coordinate. In addition, it is found that the activation energy for this model system is significantly smaller than the activation free energy for the Marcus model, approximately half the value, implying an equally large entropy contribution.

  3. Sensitivities and uncertainties of modeled ground temperatures in mountain environments

    NASA Astrophysics Data System (ADS)

    Gubler, S.; Endrizzi, S.; Gruber, S.; Purves, R. S.

    2013-02-01

Before operational use or for decision making, models must be validated, and the degree of trust in model outputs should be quantified. Often, model validation is performed at single locations due to the lack of spatially-distributed data. Since the analysis of parametric model uncertainties can be performed independently of observations, it is a suitable method to test the influence of environmental variability on model evaluation. In this study, the sensitivities and uncertainty of a physically-based mountain permafrost model are quantified within an artificial topography consisting of different elevations and exposures combined with six ground types characterized by their hydraulic properties. The analyses performed for all combinations of topographic factors and ground types allowed us to quantify the variability of model sensitivity and uncertainty within mountain regions. We found that modeled snow duration considerably influences the mean annual ground temperature (MAGT). The melt-out day of snow (MD) is determined by the processes governing snow accumulation and melting. Parameters such as the temperature and precipitation lapse rate and the snow correction factor therefore have a great impact on modeled MAGT. Ground albedo changes MAGT by 0.5 to 4°C, depending on elevation, aspect and ground type. South-exposed inclined locations are more sensitive to changes in ground albedo than north-exposed slopes since they receive more solar radiation. The sensitivity to ground albedo increases with decreasing elevation due to shorter snow cover. Snow albedo and other parameters determining the amount of reflected solar radiation are important, changing MAGT at different depths by more than 1°C. Parameters influencing the turbulent fluxes, such as the roughness length or the dew temperature, are more sensitive at low-elevation sites due to higher air temperatures and decreased solar radiation. Modeling the individual terms of the energy balance correctly is

  4. Annual cycle and temperature dependence of pinene oxidation products and other water-soluble organic compounds in coarse and fine aerosol samples

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Müller, L.; Winterhalter, R.; Moortgat, G. K.; Hoffmann, T.; Pöschl, U.

    2010-05-01

Filter samples of fine and coarse particulate matter were collected over a period of one year and analyzed for water-soluble organic compounds, including the pinene oxidation products pinic acid, pinonic acid, 3-methyl-1,2,3-butanetricarboxylic acid (3-MBTCA) and a variety of dicarboxylic acids (C5-C16) and nitrophenols. Seasonal variations and other characteristic features are discussed with regard to aerosol sources and sinks and data from other studies and regions. The ratios of adipic acid (C6) and phthalic acid (Ph) to azelaic acid (C9) indicate that the investigated aerosol samples were mainly influenced by biogenic sources. An Arrhenius-type correlation was found between the 3-MBTCA concentration and inverse temperature. Model calculations suggest that the temperature dependence is largely due to enhanced emissions and OH radical concentrations at elevated temperatures, whereas the influence of gas-particle partitioning appears to play a minor role. Enhanced ratios of pinic acid to 3-MBTCA indicate strong chemical aging of the investigated aerosols in summer and spring. Acknowledgment: The authors would like to thank M. Claeys for providing synthetic 3-methyl-1,2,3-butanetricarboxylic acid standards for LC-MS analysis and J. Fröhlich for providing filter samples and related information.
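
    The reported Arrhenius-type correlation amounts to a linear fit of ln(concentration) against inverse absolute temperature, whose slope gives an effective activation energy. The data points in the sketch below are synthetic placeholders, not the campaign's measurements.

```python
# Sketch of the Arrhenius-type correlation reported for 3-MBTCA: a linear fit
# of ln(concentration) against inverse absolute temperature. The data points
# below are synthetic placeholders, not the campaign's measurements.
import numpy as np

T_c = np.array([2.0, 8.0, 15.0, 22.0, 28.0])   # mean sample temperature, degC
conc = np.array([0.6, 1.1, 2.4, 5.0, 8.9])     # ng/m^3 (synthetic)

inv_T = 1.0 / (T_c + 273.15)
slope, intercept = np.polyfit(inv_T, np.log(conc), 1)

# The slope corresponds to -E_a/R for an effective activation energy
R = 8.314  # J mol-1 K-1
print("effective activation energy: %.0f kJ/mol" % (-slope * R / 1000.0))
```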

  5. Homogenous Nucleation and Crystal Growth in a Model Liquid from Direct Energy Landscape Sampling Simulation

    NASA Astrophysics Data System (ADS)

    Walter, Nathan; Zhang, Yang

Nucleation and crystal growth are understood to be activated processes involving the crossing of free-energy barriers. Attempts to capture the entire crystallization process over long timescales with molecular dynamics simulations have met major obstacles because of molecular dynamics' temporal constraints. Herein, we circumvent this temporal limitation by using a brute-force, metadynamics-like, adaptive basin-climbing algorithm and directly sample the free-energy landscape of model liquid argon. The algorithm biases the system to evolve from an amorphous, liquid-like structure towards an FCC crystal through inherent structures, and then traces back the energy barriers. Consequently, the sampled timescale is macroscopically long. We observe that the formation of a crystal involves two processes, each with a unique temperature-dependent energy barrier. One barrier corresponds to the crystal nucleus formation; the other barrier corresponds to the crystal growth. We find the two processes dominate in different temperature regimes. Compared to other computation techniques, our method requires no assumptions about the shape or chemical potential of the critical crystal nucleus. The success of this method is encouraging for studying the crystallization of more complex

  6. Elevated temperature alters carbon cycling in a model microbial community

    NASA Astrophysics Data System (ADS)

    Mosier, A.; Li, Z.; Thomas, B. C.; Hettich, R. L.; Pan, C.; Banfield, J. F.

    2013-12-01

Earth's climate is regulated by biogeochemical carbon exchanges between the land, oceans and atmosphere that are chiefly driven by microorganisms. Microbial communities are therefore indispensable to the study of carbon cycling and its impacts on the global climate system. In spite of the critical role of microbial communities in carbon cycling processes, microbial activity is currently minimally represented or altogether absent from most Earth System Models. Method development and hypothesis-driven experimentation on tractable model ecosystems of reduced complexity, as presented here, are essential for building molecularly resolved, benchmarked carbon-climate models. Here, we use chemoautotrophic acid mine drainage biofilms as a model community to determine how elevated temperature, a key parameter of global climate change, regulates the flow of carbon through microbial-based ecosystems. This study represents the first community proteomics analysis using tandem mass tags (TMT), which enable accurate, precise, and reproducible quantification of proteins. We compare protein expression levels of biofilms growing over a narrow temperature range expected to occur with predicted climate changes. We show that elevated temperature leads to up-regulation of proteins involved in amino acid metabolism and protein modification, and down-regulation of proteins involved in growth and reproduction. Closely related bacterial genotypes differ in their response to temperature: Elevated temperature represses carbon fixation by two Leptospirillum genotypes, whereas carbon fixation is significantly up-regulated at higher temperature by a third closely related genotypic group. Leptospirillum group III bacteria are more susceptible to viral stress at elevated temperature, which may lead to greater carbon turnover in the microbial food web through the release of viral lysate. Overall, this proteogenomics approach revealed the effects of climate change on carbon cycling pathways and other

  7. Modeling and Compensating Temperature-Dependent Non-Uniformity Noise in IR Microbolometer Cameras

    PubMed Central

    Wolf, Alejandro; Pezoa, Jorge E.; Figueroa, Miguel

    2016-01-01

Images rendered by uncooled microbolometer-based infrared (IR) cameras are severely degraded by the spatial non-uniformity (NU) noise. The NU noise imposes a fixed-pattern over the true images, and the intensity of the pattern changes with time due to the temperature instability of such cameras. In this paper, we present a novel model and a compensation algorithm for the spatial NU noise and its temperature-dependent variations. The model separates the NU noise into two components: a constant term, which corresponds to a set of NU parameters determining the spatial structure of the noise, and a dynamic term, which scales linearly with the fluctuations of the temperature surrounding the array of microbolometers. We use a black-body radiator and samples of the temperature surrounding the IR array to offline characterize both the constant and the temperature-dependent NU noise parameters. Next, the temperature-dependent variations are estimated online using both a spatially uniform Hammerstein-Wiener estimator and a pixelwise least mean squares (LMS) estimator. We compensate for the NU noise in IR images from two long-wave IR cameras. Results show an excellent NU correction performance and a root mean square error of less than 0.25 °C, when the array’s temperature varies by approximately 15 °C. PMID:27447637
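
    The two-term noise model and the pixelwise LMS estimator described here can be sketched as follows: each pixel's offset is a constant fixed pattern plus a gain times the camera-temperature fluctuation, and the gain is adapted online from the image data. The array sizes, learning rate, flat-scene assumption, and synthetic data are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two-term non-uniformity (NU) model described here:
# each pixel's offset is a constant fixed pattern plus a term that scales
# linearly with the camera-temperature fluctuation, with the scale adapted
# online by a pixelwise LMS rule. Array sizes, the learning rate and the
# synthetic data are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
H, W, n_frames = 64, 64, 500
fpn = rng.normal(0.0, 2.0, (H, W))       # constant fixed-pattern offset
gain = rng.normal(0.5, 0.1, (H, W))      # per-pixel temperature sensitivity
dT = 7.5 * np.sin(np.linspace(0, 4 * np.pi, n_frames))   # camera temperature drift

w = np.zeros((H, W))                     # LMS estimate of the dynamic gain
mu = 3e-4                                # LMS step size (assumed)
for k in range(n_frames):
    scene = rng.normal(20.0, 1.0, (H, W))          # unknown true scene
    frame = scene + fpn + gain * dT[k]             # degraded IR frame
    corrected = frame - fpn - w * dT[k]            # fpn known from offline calibration
    error = corrected - corrected.mean()           # flat-scene (constant-statistics) assumption
    w += mu * error * dT[k]                        # pixelwise LMS update

# A spatially uniform component of the gain is unobservable under the
# flat-scene assumption, so compare de-meaned gain maps.
resid = (gain - gain.mean()) - (w - w.mean())
print("residual NU gain error (RMS):", float(np.sqrt((resid ** 2).mean())))
```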

  8. Modeling and Compensating Temperature-Dependent Non-Uniformity Noise in IR Microbolometer Cameras.

    PubMed

    Wolf, Alejandro; Pezoa, Jorge E; Figueroa, Miguel

    2016-07-19

Images rendered by uncooled microbolometer-based infrared (IR) cameras are severely degraded by the spatial non-uniformity (NU) noise. The NU noise imposes a fixed-pattern over the true images, and the intensity of the pattern changes with time due to the temperature instability of such cameras. In this paper, we present a novel model and a compensation algorithm for the spatial NU noise and its temperature-dependent variations. The model separates the NU noise into two components: a constant term, which corresponds to a set of NU parameters determining the spatial structure of the noise, and a dynamic term, which scales linearly with the fluctuations of the temperature surrounding the array of microbolometers. We use a black-body radiator and samples of the temperature surrounding the IR array to offline characterize both the constant and the temperature-dependent NU noise parameters. Next, the temperature-dependent variations are estimated online using both a spatially uniform Hammerstein-Wiener estimator and a pixelwise least mean squares (LMS) estimator. We compensate for the NU noise in IR images from two long-wave IR cameras. Results show an excellent NU correction performance and a root mean square error of less than 0.25 °C, when the array's temperature varies by approximately 15 °C.

  9. An Importance Sampling EM Algorithm for Latent Regression Models

    ERIC Educational Resources Information Center

    von Davier, Matthias; Sinharay, Sandip

    2007-01-01

    Reporting methods used in large-scale assessments such as the National Assessment of Educational Progress (NAEP) rely on latent regression models. To fit the latent regression model using the maximum likelihood estimation technique, multivariate integrals must be evaluated. In the computer program MGROUP used by the Educational Testing Service for…

  10. Quasi-steady model for predicting temperature of aqueous foams circulating in geothermal wellbores

    SciTech Connect

    Blackwell, B.F.; Ortega, A.

    1983-01-01

A quasi-steady model has been developed for predicting the temperature profiles of aqueous foams circulating in geothermal wellbores. The model assumes steady one-dimensional incompressible flow in the wellbore; heat transfer by conduction from the geologic formation to the foam is one-dimensional radially and time-dependent. The vertical temperature distribution in the undisturbed geologic formation is assumed to be composed of two linear segments. For constant values of the convective heat-transfer coefficient, a closed-form analytical solution is obtained. It is demonstrated that the Prandtl number of aqueous foams is large (1000 to 5000); hence, a fully developed temperature profile may not exist for representative drilling applications. Existing convective heat-transfer-coefficient solutions are adapted to aqueous foams. The simplified quasi-steady model is successfully compared with a more-sophisticated finite-difference computer code. Sample temperature-profile calculations are presented for representative values of the primary parameters. For a 5000-ft wellbore with a bottom hole temperature of 375 °F, the maximum foam temperature can be as high as 300 °F.

  11. The room temperature preservation of filtered environmental DNA samples and assimilation into a phenol-chloroform-isoamyl alcohol DNA extraction.

    PubMed

    Renshaw, Mark A; Olds, Brett P; Jerde, Christopher L; McVeigh, Margaret M; Lodge, David M

    2015-01-01

    Current research targeting filtered macrobial environmental DNA (eDNA) often relies upon cold ambient temperatures at various stages, including the transport of water samples from the field to the laboratory and the storage of water and/or filtered samples in the laboratory. This poses practical limitations for field collections in locations where refrigeration and frozen storage is difficult or where samples must be transported long distances for further processing and screening. This study demonstrates the successful preservation of eDNA at room temperature (20 °C) in two lysis buffers, CTAB and Longmire's, over a 2-week period of time. Moreover, the preserved eDNA samples were seamlessly integrated into a phenol-chloroform-isoamyl alcohol (PCI) DNA extraction protocol. The successful application of the eDNA extraction to multiple filter membrane types suggests the methods evaluated here may be broadly applied in future eDNA research. Our results also suggest that for many kinds of studies recently reported on macrobial eDNA, detection probabilities could have been increased, and at a lower cost, by utilizing the Longmire's preservation buffer with a PCI DNA extraction.

  12. The room temperature preservation of filtered environmental DNA samples and assimilation into a phenol–chloroform–isoamyl alcohol DNA extraction

    PubMed Central

    Renshaw, Mark A; Olds, Brett P; Jerde, Christopher L; McVeigh, Margaret M; Lodge, David M

    2015-01-01

    Current research targeting filtered macrobial environmental DNA (eDNA) often relies upon cold ambient temperatures at various stages, including the transport of water samples from the field to the laboratory and the storage of water and/or filtered samples in the laboratory. This poses practical limitations for field collections in locations where refrigeration and frozen storage is difficult or where samples must be transported long distances for further processing and screening. This study demonstrates the successful preservation of eDNA at room temperature (20 °C) in two lysis buffers, CTAB and Longmire's, over a 2-week period of time. Moreover, the preserved eDNA samples were seamlessly integrated into a phenol–chloroform–isoamyl alcohol (PCI) DNA extraction protocol. The successful application of the eDNA extraction to multiple filter membrane types suggests the methods evaluated here may be broadly applied in future eDNA research. Our results also suggest that for many kinds of studies recently reported on macrobial eDNA, detection probabilities could have been increased, and at a lower cost, by utilizing the Longmire's preservation buffer with a PCI DNA extraction. PMID:24834966

  13. Modeling stream temperature in the Anthropocene: An earth system modeling approach

    SciTech Connect

    Li, Hong -Yi; Leung, L. Ruby; Tesfa, Teklu; Voisin, Nathalie; Hejazi, Mohamad; Liu, Lu; Liu, Ying; Rice, Jennie; Wu, Huan; Yang, Xiaofan

    2015-10-29

A new large-scale stream temperature model has been developed within the Community Earth System Model (CESM) framework. The model is coupled with the Model for Scale Adaptive River Transport (MOSART) that represents river routing and a water management model (WM) that represents the effects of reservoir operations and water withdrawals on flow regulation. The coupled models allow the impacts of reservoir operations and withdrawals on stream temperature to be explicitly represented in a physically based and consistent way. The models have been applied to the Contiguous United States driven by observed meteorological forcing. It is shown that the model is capable of reproducing stream temperature spatiotemporal variation satisfactorily by comparison against the observed streamflow from over 320 USGS stations. Including water management in the models improves the agreement between the simulated and observed streamflow at a large number of stream gauge stations. Both climate and water management are found to have important influence on the spatiotemporal patterns of stream temperature. More interestingly, it is quantitatively estimated that reservoir operation could cool down stream temperature in the summer low-flow season (August – October) by as much as 1-2 °C over many places, as water management generally mitigates low flow, which has important implications to aquatic ecosystems. In conclusion, sensitivity of the simulated stream temperature to input data and reservoir operation rules used in the WM model motivates future directions to address some limitations in the current modeling framework.

  14. Modeling stream temperature in the Anthropocene: An earth system modeling approach

    NASA Astrophysics Data System (ADS)

    Li, Hong-Yi; Ruby Leung, L.; Tesfa, Teklu; Voisin, Nathalie; Hejazi, Mohamad; Liu, Lu; Liu, Ying; Rice, Jennie; Wu, Huan; Yang, Xiaofan

    2015-12-01

A new large-scale stream temperature model has been developed within the Community Earth System Model (CESM) framework. The model is coupled with the Model for Scale Adaptive River Transport (MOSART) that represents river routing and a water management model (WM) that represents the effects of reservoir operations and water withdrawals on flow regulation. The coupled models allow the impacts of reservoir operations and withdrawals on stream temperature to be explicitly represented in a physically based and consistent way. The models have been applied to the Contiguous United States driven by observed meteorological forcing. Including water management in the models improves the agreement between the simulated and observed streamflow at a large number of stream gauge stations. It is then shown that the model is capable of reproducing stream temperature spatiotemporal variation satisfactorily by comparing against the observed data from over 320 USGS stations. Both climate and water management are found to have important influence on the spatiotemporal patterns of stream temperature. Furthermore, it is quantitatively estimated that reservoir operation could cool down stream temperature in the summer low-flow season (August-October) by as much as 1-2°C due to enhanced low-flow conditions, which have important implications to aquatic ecosystems. Sensitivity of the simulated stream temperature to input data and reservoir operation rules used in the WM model motivates future directions to address some limitations in the current modeling framework.

  15. Modeling stream temperature in the Anthropocene: An earth system modeling approach

    DOE PAGES

    Li, Hong -Yi; Leung, L. Ruby; Tesfa, Teklu; ...

    2015-10-29

A new large-scale stream temperature model has been developed within the Community Earth System Model (CESM) framework. The model is coupled with the Model for Scale Adaptive River Transport (MOSART) that represents river routing and a water management model (WM) that represents the effects of reservoir operations and water withdrawals on flow regulation. The coupled models allow the impacts of reservoir operations and withdrawals on stream temperature to be explicitly represented in a physically based and consistent way. The models have been applied to the Contiguous United States driven by observed meteorological forcing. It is shown that the model is capable of reproducing stream temperature spatiotemporal variation satisfactorily by comparison against the observed streamflow from over 320 USGS stations. Including water management in the models improves the agreement between the simulated and observed streamflow at a large number of stream gauge stations. Both climate and water management are found to have important influence on the spatiotemporal patterns of stream temperature. More interestingly, it is quantitatively estimated that reservoir operation could cool down stream temperature in the summer low-flow season (August – October) by as much as 1-2 °C over many places, as water management generally mitigates low flow, which has important implications to aquatic ecosystems. In conclusion, sensitivity of the simulated stream temperature to input data and reservoir operation rules used in the WM model motivates future directions to address some limitations in the current modeling framework.

  16. Mathematical model of the metal mould surface temperature optimization

    SciTech Connect

    Mlynek, Jaroslav Knobloch, Roman; Srb, Radek

    2015-11-30

The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used e.g. in the automotive industry for artificial leather production. To produce artificial leather with uniform surface structure and colour shade, the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of infrared heaters over the mould, so that approximately uniform heat radiation intensity is generated. A version of the differential evolution algorithm, programmed in the Matlab development environment, was created by the authors for the optimization process. For temperature calculations, the software system ANSYS was used. A practical example of optimizing the heater locations and calculating the temperature of the mould is included at the end of the article.
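
    The optimization idea, choosing heater positions so that the combined radiation intensity on the surface is as uniform as possible, can be sketched with a differential evolution run on a toy geometry. A 1-D surface, point-like heaters with an inverse-square/cosine law and scipy's optimizer stand in for the authors' Matlab/ANSYS workflow; all values are assumptions.

```python
# Illustrative sketch of the optimization idea: choose heater positions above
# a mould surface so that the combined radiation intensity is as uniform as
# possible, using a differential evolution algorithm. A 1-D surface, point-
# like heaters with an inverse-square/cosine law and scipy's optimizer stand
# in for the authors' Matlab/ANSYS workflow; all values are assumptions.
import numpy as np
from scipy.optimize import differential_evolution

x_surf = np.linspace(0.0, 1.0, 200)   # mould surface coordinate (m)
height = 0.15                         # heater height above the surface (m)
n_heaters = 4

def irradiance(heater_x):
    """Summed intensity on the surface from point heaters at the given x positions."""
    dx = x_surf[None, :] - np.asarray(heater_x)[:, None]
    r2 = dx ** 2 + height ** 2
    return (height / np.sqrt(r2) / r2).sum(axis=0)   # cos(theta) / r^2

def nonuniformity(heater_x):
    E = irradiance(heater_x)
    return E.std() / E.mean()         # relative non-uniformity to minimize

result = differential_evolution(nonuniformity, bounds=[(0.0, 1.0)] * n_heaters,
                                seed=0, tol=1e-6)
print("heater positions [m]:", np.round(np.sort(result.x), 3))
print("relative non-uniformity:", round(result.fun, 4))
```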

  17. Mathematical model of the metal mould surface temperature optimization

    NASA Astrophysics Data System (ADS)

    Mlynek, Jaroslav; Knobloch, Roman; Srb, Radek

    2015-11-01

    The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used e.g. in the automotive industry for artificial leather production. To produce artificial leather with uniform surface structure and colour shade, the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of infrared heaters over the mould, so that approximately uniform heat radiation intensity is generated. A version of the differential evolution algorithm programmed in the Matlab development environment was created by the authors for the optimization process. For temperature calculations the software system ANSYS was used. A practical example of optimization of heater locations and calculation of the temperature of the mould is included at the end of the article.
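
    As a rough illustration of the optimization step described above, the sketch below uses SciPy's differential evolution routine (in Python, rather than the authors' Matlab implementation) to place a few heaters so that the radiation intensity over a discretized mould surface is as uniform as possible. The inverse-square intensity kernel, the heater count, and the cost function are illustrative assumptions, not the paper's actual radiation model.

      # Hedged sketch: differential-evolution placement of infrared heaters so that
      # the radiation intensity over a mould surface is approximately uniform.
      # The intensity kernel and heater count are assumptions made for illustration.
      import numpy as np
      from scipy.optimize import differential_evolution

      # Discretize the (flattened) mould surface on a 2D grid.
      xs, ys = np.meshgrid(np.linspace(0, 1, 30), np.linspace(0, 1, 30))
      surface = np.column_stack([xs.ravel(), ys.ravel()])

      N_HEATERS = 4
      HEIGHT = 0.2  # assumed fixed heater height above the surface

      def intensity(params):
          """Total intensity at each surface point from all heaters
          (inverse-square kernel as a stand-in for the real radiation model)."""
          heaters = params.reshape(N_HEATERS, 2)
          d2 = ((surface[:, None, :] - heaters[None, :, :]) ** 2).sum(axis=2) + HEIGHT ** 2
          return (1.0 / d2).sum(axis=1)

      def nonuniformity(params):
          """Cost function: relative spread of intensity (smaller = more uniform)."""
          field = intensity(params)
          return field.std() / field.mean()

      bounds = [(0.0, 1.0)] * (2 * N_HEATERS)  # heater (x, y) positions over the mould
      result = differential_evolution(nonuniformity, bounds, seed=0, maxiter=200)
      print("optimized heater positions:\n", result.x.reshape(N_HEATERS, 2))
      print("residual non-uniformity:", result.fun)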

  18. A multifluid model extended for strong temperature nonequilibrium

    SciTech Connect

    Chang, Chong

    2016-08-08

    We present a multifluid model in which the material temperature is strongly affected by the degree of segregation of each material. To track the temperatures of the segregated and mixed forms of the same material, the two forms are treated as different materials, each with its own energy, which requires extending multifluid models to the case in which each form is a separate material. Statistical variations associated with the morphology of the mixture have to be simplified; the simplifications introduced include combining all molecularly mixed species into a single composite material, which is treated as another segregated material. Relative motion within the composite material, i.e. diffusion, is represented by the material velocity of each component in the composite material. Compression work, momentum and energy exchange, virtual mass forces, and dissipation of the unresolved kinetic energy have been generalized to the heterogeneous mixture in temperature nonequilibrium. The present model can be further simplified by combining all mixed forms of materials into a composite material; molecular diffusion is then modeled by the Stefan-Maxwell equations.
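
    For reference, the Stefan-Maxwell equations mentioned in the closing sentence are quoted below in their standard isothermal form; that the paper uses exactly this form is an assumption here.

      \nabla x_i = \sum_{j \neq i} \frac{x_i \mathbf{N}_j - x_j \mathbf{N}_i}{c\, D_{ij}}

    where x_i are mole fractions, N_i molar fluxes, c the total molar concentration, and D_ij the binary diffusion coefficients of the species making up the molecularly mixed composite.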

  19. Melting Temperature Mapping Method: A Novel Method for Rapid Identification of Unknown Pathogenic Microorganisms within Three Hours of Sample Collection

    PubMed Central

    Niimi, Hideki; Ueno, Tomohiro; Hayashi, Shirou; Abe, Akihito; Tsurue, Takahiro; Mori, Masashi; Tabata, Homare; Minami, Hiroshi; Goto, Michihiko; Akiyama, Makoto; Yamamoto, Yoshihiro; Saito, Shigeru; Kitajima, Isao

    2015-01-01

    Acquiring the earliest possible identification of pathogenic microorganisms is critical for selecting the appropriate antimicrobial therapy in infected patients. We herein report the novel “melting temperature (Tm) mapping method” for rapidly identifying the dominant bacteria in a clinical sample from sterile sites. Employing only seven primer sets, more than 100 bacterial species can be identified. In particular, using the Difference Value, it is possible to identify samples suitable for Tm mapping identification. Moreover, this method can be used to rapidly diagnose the absence of bacteria in clinical samples. We tested the Tm mapping method using 200 whole blood samples obtained from patients with suspected sepsis, 85% (171/200) of which matched the culture results based on the detection level. A total of 130 samples were negative according to the Tm mapping method, 98% (128/130) of which were also negative based on the culture method. Meanwhile, 70 samples were positive according to the Tm mapping method, and of the 59 suitable for identification, 100% (59/59) exhibited a “match” or “broad match” with the culture or sequencing results. These findings were obtained within three hours of whole blood collection. The Tm mapping method is therefore useful for identifying infectious diseases requiring prompt treatment. PMID:26218169

  20. Melting Temperature Mapping Method: A Novel Method for Rapid Identification of Unknown Pathogenic Microorganisms within Three Hours of Sample Collection.

    PubMed

    Niimi, Hideki; Ueno, Tomohiro; Hayashi, Shirou; Abe, Akihito; Tsurue, Takahiro; Mori, Masashi; Tabata, Homare; Minami, Hiroshi; Goto, Michihiko; Akiyama, Makoto; Yamamoto, Yoshihiro; Saito, Shigeru; Kitajima, Isao

    2015-07-28

    Acquiring the earliest possible identification of pathogenic microorganisms is critical for selecting the appropriate antimicrobial therapy in infected patients. We herein report the novel "melting temperature (Tm) mapping method" for rapidly identifying the dominant bacteria in a clinical sample from sterile sites. Employing only seven primer sets, more than 100 bacterial species can be identified. In particular, using the Difference Value, it is possible to identify samples suitable for Tm mapping identification. Moreover, this method can be used to rapidly diagnose the absence of bacteria in clinical samples. We tested the Tm mapping method using 200 whole blood samples obtained from patients with suspected sepsis, 85% (171/200) of which matched the culture results based on the detection level. A total of 130 samples were negative according to the Tm mapping method, 98% (128/130) of which were also negative based on the culture method. Meanwhile, 70 samples were positive according to the Tm mapping method, and of the 59 suitable for identification, 100% (59/59) exhibited a "match" or "broad match" with the culture or sequencing results. These findings were obtained within three hours of whole blood collection. The Tm mapping method is therefore useful for identifying infectious diseases requiring prompt treatment.

  1. Visual Sample Plan (VSP) Models and Code Verification

    SciTech Connect

    Gilbert, Richard O.; Davidson, James R.; Wilson, John E.; Pulsipher, Brent A.

    2001-03-06

    VSP is an easy to use, visual and graphic software tool being developed to select the right number and location of environmental samples so that the results of statistical tests performed to provide input to environmental decisions have the required confidence and performance. It is a significant help for implementing the 6th and 7th steps of the Data Quality Objectives (DQO) planning process ("Specify Tolerable Limits on Decision Errors" and "Optimize the Design for Obtaining Data," respectively).
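
    As a hedged illustration of the kind of calculation such a tool automates in DQO step 7 (not necessarily the exact formula VSP applies), the number of samples needed for a one-sample test of a mean against an action level, given tolerable false-rejection rate alpha, false-acceptance rate beta, standard deviation sigma, and minimum detectable difference Delta, is approximately

      n \approx \frac{\left(z_{1-\alpha} + z_{1-\beta}\right)^2 \sigma^2}{\Delta^2}

    where z_p denotes the p-th quantile of the standard normal distribution; some DQO guidance adds a small correction term of 0.5\, z_{1-\alpha}^2.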

  2. High-Pressure, High-Temperature Equations of State Using Fabricated Controlled-Geometry Ni/SiO2 Double Hot-Plate Samples

    NASA Astrophysics Data System (ADS)

    Pigott, J. S.; Ditmer, D. A.; Fischer, R. A.; Reaman, D. M.; Davis, R. J.; Panero, W. R.

    2014-12-01

    To model and predict the structure, dynamics, and composition of Earth's deep interior, accurate and precise measurements of thermal expansion and compressibility are required. The laser-heated diamond-anvil cell (LHDAC) coupled with synchrotron-based x-ray diffraction (XRD) is a powerful tool to determine pressure-volume-temperature (P-V-T) relationships. However, LHDAC experiments may be hampered by non-uniform heating caused by the mixing of transparent materials with opaque laser absorbers. Additionally, radial temperature gradients are exacerbated by small misalignments (1-3 µm) of the x-ray beam with respect to the center of the laser-heated hotspot. We have fabricated three-dimensional, controlled-geometry, double hot-plate samples. In this double hot-plate arrangement, a transparent oxide layer (SiO2) is sandwiched between two laser absorbing layers (Ni) in a single, cohesive sample. These samples were mass manufactured (>10^5 samples) using a combination of physical vapor deposition, photolithography, wet etching, and plasma etching. The double hot-plate arrangement coupled with the chemical and spatial homogeneity of the laser absorbing layers addresses problems caused by mixtures of transparent and opaque samples. The controlled-geometry samples have dimensions of 50 μm × 50 μm × 1.4 μm. The dimensions of the samples are much larger than the synchrotron x-ray beam. With a heating laser FWHM of ~50 μm, the radial temperature gradients within the volume probed by the x-ray are reduced. We conducted XRD experiments to P > 50 GPa and T > 2200 K at beamline 16-ID-B (HPCAT) of the Advanced Photon Source. Here we present relevant thermal modeling of the LHDAC environment along with Ni and SiO2 P-V-T equations of state. Our photolithography method of sample fabrication can be extended to different materials including but not limited to Fe and MgO.

  3. A novel powder sample holder for the determination of glass transition temperatures by DMA.

    PubMed

    Mahlin, Denny; Wood, John; Hawkins, Nicholas; Mahey, Jas; Royall, Paul G

    2009-04-17

    The use of a new sample holder for dynamic mechanical analysis (DMA) as a means to characterise the Tg of powdered hydroxypropyl methyl cellulose (HPMC) has been investigated. A sample holder was constructed consisting of a rectangular stainless steel container and a lid engineered to fit exactly within the walls of the container when clamped within a TA instruments Q800 DMA in dual cantilever configuration. Physical mixtures of HPMC (E4M) and aluminium oxide powders were placed in the holder and subjected to oscillating strains (1 Hz, 10 Hz and 100 Hz) whilst heated at 3 degrees C/min. The storage and loss modulus signals showed a large reduction in the mechanical strength above 150 degrees C which was attributed to a glass transition. Optimal experimental parameters were determined using a design of experiment procedure and by analysing the frequency dependence of Tg in Arrhenius plots. The parameters were a clamping pressure of 62 kPa, a mass ratio of 0.2 HPMC in aluminium oxide, and a loading mass of either 120 mg or 180 mg. At 1 Hz, a Tg of 177+/-1.2 degrees C (n=6) for powdered HPMC was obtained. In conclusion, the new powder holder was capable of measuring the Tg of pharmaceutical powders and a simple optimization protocol was established, useful in further applications of the DMA powder holder.
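
    The frequency dependence of Tg analysed in the Arrhenius plots above is commonly treated with a relation of the following standard form (assumed here rather than quoted from the paper):

      f = f_0 \exp\!\left(-\frac{E_a}{R\,T_g}\right)
      \quad\Longrightarrow\quad
      \ln f = \ln f_0 - \frac{E_a}{R}\cdot\frac{1}{T_g}

    so plotting ln f for the 1, 10 and 100 Hz measurements against 1/Tg should give a straight line whose slope is -E_a/R, the apparent activation energy of the glass transition relaxation.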

  4. HIGH TEMPERATURE HIGH PRESSURE THERMODYNAMIC MEASUREMENTS FOR COAL MODEL COMPOUNDS

    SciTech Connect

    Vinayak N. Kabadi

    2000-05-01

    The flow VLE apparatus designed and built for a previous project was upgraded and recalibrated for data measurements for this project. The modifications include a better and more accurate sampling technique, addition of a digital recorder to monitor temperature and pressure inside the VLE cell, and a new technique for remote sensing of the liquid level in the cell. VLE data measurements for three binary systems, tetralin-quinoline, benzene-ethylbenzene and ethylbenzene-quinoline, have been completed. The temperature ranges of data measurements were 325 °C to 370 °C for the first system, 180 °C to 300 °C for the second system, and 225 °C to 380 °C for the third system. The smoothed data were found to be fairly well behaved when subjected to thermodynamic consistency tests. A SETARAM C-80 calorimeter was used for incremental enthalpy and heat capacity measurements for benzene-ethylbenzene binary liquid mixtures. Data were measured from 30 °C to 285 °C for liquid mixtures covering the entire composition range. An apparatus has been designed for simultaneous measurement of excess volume and incremental enthalpy of liquid mixtures at temperatures from 30 °C to 300 °C. The apparatus has been tested and is ready for data measurements. A flow apparatus for measurement of heat of mixing of liquid mixtures at high temperatures has also been designed, and is currently being tested and calibrated.
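
    A common thermodynamic consistency check for binary VLE data of this kind is the Redlich-Kister area test derived from the Gibbs-Duhem equation; whether this particular test was the one applied is an assumption:

      \int_0^1 \ln\!\frac{\gamma_1}{\gamma_2}\, dx_1 \approx 0

    where gamma_1 and gamma_2 are the activity coefficients derived from the measured P-x-y data at fixed temperature; in practice the net area is required to be small relative to the total area under the curve.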

  5. Stream Segment Temperature Model (SSTEMP) Version 2.0

    USGS Publications Warehouse

    Bartholow, John

    2002-01-01

    SSTEMP is a much-scaled down version of the Stream Network Temperature Model (SNTEMP) by Theurer et al. (1984). SSTEMP may be used to evaluate alternative reservoir release proposals, analyze the effects of changing riparian shade or the physical features of a stream, and examine the effects of different stream withdrawals and returns on instream temperature. Unlike the large network model, SNTEMP, this program handles only single stream segments for a single time period (e.g., month, week, day) for any given “run”. Initially designed as a training tool, SSTEMP may be used satisfactorily for a variety of simple cases that one might face on a day-to-day basis. It is especially useful to perform sensitivity and uncertainty analysis. The program requires inputs describing the average stream geometry, as well as (steady-state) hydrology and meteorology, and stream shading. SSTEMP optionally estimates the combined topographic and vegetative shade as well as solar radiation penetrating the water. It then predicts the mean daily water temperatures at specified distances downstream. It also estimates the daily maximum and minimum temperatures, and unlike SNTEMP, handles the special case of a dam with steady-state release at the upstream end of the segment. With good quality input data, SSTEMP should faithfully reproduce mean daily water temperatures throughout a stream reach. If it does not, there is a research opportunity to explain why not. One should not expect too much from SSTEMP if the input values are of poor quality or if the modeler has not adhered to the model’s assumptions.
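
    A minimal sketch of the heat-budget idea underlying models of this family (a linearized simplification, not SSTEMP's full flux budget): for steady discharge Q, stream width W, and a first-order exchange coefficient K, the downstream evolution of mean water temperature can be written as

      \rho c_p Q\, \frac{dT_w}{dx} = K\, W\, (T_e - T_w)
      \quad\Longrightarrow\quad
      T_w(x) = T_e + (T_0 - T_e)\, e^{-K W x / (\rho c_p Q)}

    where T_e is the equilibrium temperature set by the meteorology and shading, and T_0 is the upstream temperature, for example a steady-state dam release.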

  6. Modeling the effect of water activity and storage temperature on chemical stability of coffee brews.

    PubMed

    Manzocco, Lara; Nicoli, Maria Cristina

    2007-08-08

    This work was addressed to study the chemical stability of coffee brew derivatives as a function of water activity (aw) and storage temperature. To this purpose, coffee brew was freeze-dried, equilibrated at increasing aw values, and stored for up to 10 months at different temperatures from -30 to 60 degrees C. The chemical stability of the samples was assessed by measuring H3O+ formation during storage. Independently of storage temperature, the rate of H3O+ formation was considerably low only when aw was reduced below 0.5 (94% w/w). Beyond this critical boundary, the rate increased, reaching a maximum value at ca. 0.8 aw (78% w/w). Further hydration up to the aw of the freshly prepared beverage significantly increased chemical stability. It was suggested that mechanisms other than lactones' hydrolysis, probably related to nonenzymatic browning pathways, could contribute to the observed increase in acidity during coffee staling. The temperature dependence of H3O+ formation was well-described by the Arrhenius equation in the entire aw range considered. However, aw affected the apparent activation energy and frequency factor. These effects were described by simple equations that were used to set up a modified Arrhenius equation. This model was validated by comparing experimental values, not used to generate the model, with those estimated by the model itself. The model allowed efficient prediction of the chemical stability of coffee derivatives on the basis of only the aw value and storage temperature.
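
    The modified Arrhenius equation referred to above can be sketched as follows; the exact functional forms of the water-activity dependence are the authors' and only the general structure is shown:

      k(T, a_w) = A(a_w)\, \exp\!\left(-\frac{E_a(a_w)}{R\,T}\right)

    i.e. a conventional Arrhenius law in which both the frequency factor A and the apparent activation energy E_a are treated as empirical functions of water activity rather than as constants.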

  7. Near infrared spectroscopy to estimate the temperature reached on burned soils: strategies to develop robust models.

    NASA Astrophysics Data System (ADS)

    Guerrero, César; Pedrosa, Elisabete T.; Pérez-Bejarano, Andrea; Keizer, Jan Jacob

    2014-05-01

    The temperature reached on soils is an important parameter needed to describe wildfire effects. However, methods for measuring the temperature reached on burned soils have been poorly developed. Recently, near-infrared (NIR) spectroscopy has been identified as a valuable tool for this purpose. The NIR spectrum of a soil sample contains information on the organic matter (quantity and quality), clay (quantity and quality), minerals (such as carbonates and iron oxides) and water content. Some of these components are modified by heat, and each temperature causes a group of changes, leaving a typical fingerprint on the NIR spectrum. This technique requires a model (or calibration) in which the changes in the NIR spectra are related to the temperature reached. For the development of the model, several aliquots are heated at known temperatures and used as standards in the calibration set. This model offers the possibility of estimating the temperature reached on a burned sample from its NIR spectrum. However, the estimation of the temperature reached using NIR spectroscopy is due to changes in several components and cannot be attributed to changes in a single soil component. Thus, we estimate the temperature reached through the interaction between temperature and the thermo-sensitive soil components. In addition, we cannot expect a uniform distribution of these components, even at small scale. Consequently, the proportion of these soil components can vary spatially across the site. This variation will be present in the samples used to construct the model and also in the samples affected by the wildfire. Therefore, the strategies followed to develop robust models should focus on managing this expected variation. In this work we compared the prediction accuracy of models constructed with different approaches. These approaches were designed to provide insights about how to distribute the efforts needed for the development of robust
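
    Calibrations of this type are usually built with multivariate regression on the spectra; the sketch below uses partial least squares regression in Python purely as a stand-in, since the abstract does not state which regression method the authors used, and the data shapes are placeholders.

      # Hedged sketch: NIR calibration relating soil spectra to the temperature
      # reached during heating, with PLS regression as an illustrative choice.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      # X: NIR spectra of laboratory-heated aliquots (rows = aliquots, cols = wavelengths)
      # y: known heating temperatures of those aliquots (the calibration standards)
      rng = np.random.default_rng(0)                # synthetic stand-in data
      X = rng.normal(size=(120, 700))               # e.g. 120 aliquots x 700 wavelengths
      y = rng.uniform(20, 700, size=120)            # heating temperatures in deg C

      model = PLSRegression(n_components=10)
      y_cv = cross_val_predict(model, X, y, cv=10)  # cross-validated predictions
      rmse = float(np.sqrt(np.mean((y - np.ravel(y_cv)) ** 2)))
      print("cross-validated RMSE of predicted temperature (deg C):", rmse)

      model.fit(X, y)
      # model.predict(new_spectra) would then estimate the temperature reached
      # on samples collected from a burned site.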

  8. Improving Shade Modelling in a Regional River Temperature Model Using Fine-Scale LIDAR Data

    NASA Astrophysics Data System (ADS)

    Hannah, D. M.; Loicq, P.; Moatar, F.; Beaufort, A.; Melin, E.; Jullian, Y.

    2015-12-01

    Air temperature is often considered as a proxy of stream temperature when modelling the distribution areas of aquatic species, because water temperature is not available at a regional scale. To simulate water temperature at a regional scale (10^5 km²), a physically-based model using the equilibrium temperature concept and including upstream-downstream propagation of the thermal signal was developed and applied to the entire Loire basin (Beaufort et al., submitted). This model, called T-NET (Temperature-NETwork), is based on a hydrographical network topology. Computations are made hourly on 52,000 reaches which average 1.7 km in length in the Loire drainage basin. The model gives a median Root Mean Square Error of 1.8°C at an hourly time step on the basis of 128 water temperature stations (2008-2012). In that version of the model, tree shading is modelled by a constant factor proportional to the vegetation cover within 10 m on each side of the river reaches. According to sensitivity analysis, improving the shade representation would enhance T-NET accuracy, especially for the maximum daily temperatures, which are currently not modelled very well. This study evaluates the most efficient way (accuracy/computing time) to improve the shade model using 1-m resolution LIDAR data available on a tributary of the Loire River (317 km long, draining an area of 8280 km²). Two methods are tested and compared: the first is a spatially explicit computation of the cast shadow for every LIDAR pixel; the second is based on averaged vegetation cover characteristics of buffers and reaches of variable size. Validation of the water temperature model is made against 4 temperature sensors well spread along the stream, as well as two airborne thermal infrared images acquired in summer 2014 and winter 2015 over an 80 km reach. The poster will present the optimal lengthwise and crosswise scales for characterizing the vegetation from LIDAR data.

  9. [Temperature dependence of parameters of plant photosynthesis models: a review].

    PubMed

    Borjigidai, Almaz; Yu, Gui-Rui

    2013-12-01

    This paper reviewed the progress on the temperature response models of plant photosynthesis. Mechanisms involved in changes in the photosynthesis-temperature curve were discussed based on four parameters: intercellular CO2 concentration, the activation energy of the maximum rate of RuBP (ribulose-1,5-bisphosphate) carboxylation (Vcmax), the activation energy of the rate of RuBP regeneration (Jmax), and the ratio of Jmax to Vcmax. All species increased the activation energy of Vcmax with increasing growth temperature, while the other parameters changed but differed among species, suggesting that the activation energy of Vcmax might be the most important parameter for the temperature response of plant photosynthesis. In addition, research problems and prospects were proposed. It is necessary to combine the photosynthesis models at foliage and community levels, and to investigate the mechanisms of plant responses to global change from the aspects of leaf area, solar radiation, canopy structure, canopy microclimate and photosynthetic capacity. This would benefit the understanding and quantitative assessment of plant growth, carbon balance of communities and primary productivity of ecosystems.
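
    The activation-energy parameters discussed above usually enter through an Arrhenius-type temperature response of the following standard form (shown for Vcmax; the peaked variants used by some of the reviewed models add a deactivation term):

      V_{c\max}(T_k) = V_{c\max,25}\, \exp\!\left[\frac{E_a\,(T_k - 298.15)}{298.15\, R\, T_k}\right]

    where T_k is leaf temperature in kelvin, E_a the activation energy, and R the universal gas constant; an analogous expression is commonly used for Jmax.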

  10. On the fate of the Standard Model at finite temperature

    NASA Astrophysics Data System (ADS)

    Rose, Luigi Delle; Marzo, Carlo; Urbano, Alfredo

    2016-05-01

    In this paper we revisit and update the computation of thermal corrections to the stability of the electroweak vacuum in the Standard Model. At zero temperature, we make use of the full two-loop effective potential, improved by three-loop beta functions with two-loop matching conditions. At finite temperature, we include one-loop thermal corrections together with resummation of daisy diagrams. We solve the bounce equation numerically, both at zero and finite temperature, thus providing an accurate description of the thermal tunneling. Assuming a maximum temperature in the early Universe of the order of 10^18 GeV, we find that the instability bound excludes values of the top mass M_t ≳ 173.6 GeV, with M_h ≃ 125 GeV and including uncertainties on the strong coupling. We discuss the validity and temperature-dependence of this bound in the early Universe, with a special focus on the reheating phase after inflation.

  11. Systems Modeling for Crew Core Body Temperature Prediction Postlanding

    NASA Technical Reports Server (NTRS)

    Cross, Cynthia; Ochoa, Dustin

    2010-01-01

    The Orion Crew Exploration Vehicle, NASA's latest crewed spacecraft project, presents many challenges to its designers including ensuring crew survivability during nominal and off nominal landing conditions. With a nominal water landing planned off the coast of San Clemente, California, off nominal water landings could range from the far North Atlantic Ocean to the middle of the equatorial Pacific Ocean. For all of these conditions, the vehicle must provide sufficient life support resources to ensure that the crew members' core body temperatures are maintained at a safe level prior to crew rescue. This paper will examine the natural environments, environments created inside the cabin and constraints associated with post landing operations that affect the temperature of the crew member. Models of the capsule and the crew members are examined and analysis results are compared to the requirement for safe human exposure. Further, recommendations for updated modeling techniques and operational limits are included.

  12. Measurement of Laser Weld Temperatures for 3D Model Input

    SciTech Connect

    Dagel, Daryl; Grossetete, Grant; Maccallum, Danny O.

    2016-10-01

    Laser welding is a key joining process used extensively in the manufacture and assembly of critical components for several weapons systems. Sandia National Laboratories advances the understanding of the laser welding process through coupled experimentation and modeling. This report summarizes the experimental portion of the research program, which focused on measuring temperatures and thermal history of laser welds on steel plates. To increase confidence in measurement accuracy, researchers utilized multiple complementary techniques to acquire temperatures during laser welding. This data serves as input to and validation of 3D laser welding models aimed at predicting microstructure and the formation of defects and their impact on weld-joint reliability, a crucial step in rapid prototyping of weapons components.

  13. Modeling of Boehmite Leaching from Actual Hanford High-Level Waste Samples

    SciTech Connect

    Peterson, Reid A.; Lumetta, Gregg J.; Rapko, Brian M.; Poloski, Adam P.

    2007-06-27

    The Department of Energy plans to vitrify approximately 60,000 metric tons of high level waste sludge from underground storage tanks at the Hanford Nuclear Reservation. To reduce the volume of high level waste requiring treatment, a goal has been set to remove about 90 percent of the aluminum, which comprises nearly 70 percent of the sludge. Aluminum in the form of gibbsite and sodium aluminate can be easily dissolved by washing the waste stream with caustic, but boehmite, which comprises nearly half of the total aluminum, is more resistant to caustic dissolution and requires higher treatment temperatures and hydroxide concentrations. In this work, the dissolution kinetics of aluminum species during caustic leaching of actual Hanford high level waste samples is examined. The experimental results are used to develop a shrinking core model that provides a basis for prediction of dissolution dynamics from known process temperature and hydroxide concentration. This model is further developed to include the effects of particle size polydispersity, which is found to strongly influence the rate of dissolution.
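
    For reference, the classical shrinking-core expressions that a model of this kind builds on relate the dissolved fraction X of a particle to time t; which rate-controlling step and which polydispersity weighting the authors adopted is not restated here:

      \text{surface-reaction control:}\quad \frac{t}{\tau} = 1 - (1 - X)^{1/3},
      \qquad
      \text{product-layer diffusion control:}\quad \frac{t}{\tau} = 1 - 3(1 - X)^{2/3} + 2(1 - X)

    with the characteristic time tau depending on particle radius, temperature, and hydroxide concentration.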

  14. Application of Sampling Based Model Predictive Control to an Autonomous Underwater Vehicle

    DTIC Science & Technology

    2010-07-01

    Unmanned Underwater Vehicles (UUVs) can be utilized...the vehicle can feasibly traverse. As a result, Sampling-Based Model Predictive Control (SBMPC) is proposed to simultaneously generate control...inputs and system trajectories for an autonomous underwater vehicle (AUV). The algorithm combines the benefits of sampling-based motion planning with

  15. A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models

    ERIC Educational Resources Information Center

    Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.

    2013-01-01

    Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…

  16. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…

  17. Sample environment for neutron scattering measurements of internal stresses in engineering materials in the temperature range of 6 K to 300 K

    NASA Astrophysics Data System (ADS)

    Kirichek, O.; Timms, J. D.; Kelleher, J. F.; Down, R. B. E.; Offer, C. D.; Kabra, S.; Zhang, S. Y.

    2017-02-01

    Internal stresses in materials have a considerable effect on material properties including strength, fracture toughness, and fatigue resistance. The ENGIN-X beamline is an engineering science facility at ISIS optimized for the measurement of strain and stress using the atomic lattice planes as a strain gauge. Nowadays, the rapidly rising interest in the mechanical properties of engineering materials at low temperatures has been stimulated by the dynamic development of the cryogenic industry and the advanced applications of the superconductor technology. Here we present the design and discuss the test results of a new cryogenic sample environment system for neutron scattering measurements of internal stresses in engineering materials under a load of up to 100 kN and in the temperature range of 6 K to 300 K. Complete cooling of the system starting from the room temperature down to the base temperature takes around 90 min. Understanding of internal stresses in engineering materials at cryogenic temperatures is vital for the modelling and designing of cutting-edge superconducting magnets and other superconductor based applications.

  18. Large Sample Hydrology : Building an international sample of watersheds to improve consistency and robustness of model evaluation

    NASA Astrophysics Data System (ADS)

    Mathevet, Thibault; Kumar, Rohini; Gupta, Hoshin; Vaze, Jai; Andréassian, Vazken

    2015-04-01

    This poster introduces the aims of the Large Sample Hydrology working group (LSH-WG) of the new IAHS Panta Rhei decade (2013-2022). The aim of the LSH-WG is to promote large sample hydrology, as discussed by Gupta et al. (2014) and to invite the community to collaborate on building and sharing a comprehensive and representative world-wide sample of watershed datasets. By doing so, LSH will allow the community to work towards 'hydrological consistency' (Martinez and Gupta, 2011) as a basis for hydrologic model development and evaluation, thereby increasing robustness of the model evaluation process. Classical model evaluation metrics based on 'robust statistics' are needed, but clearly not sufficient: multi-criteria assessments based on multiple hydrological signatures can help to better characterize hydrological functioning. Further, large-sample data sets can greatly facilitate: (i) improved understanding through rigorous testing and comparison of competing model hypothesis and structures, (ii) improved robustness of generalizations through statistical analyses that minimize the influence of outliers and case-specific studies, (iii) classification, regionalization and model transfer across a broad diversity of hydrometeorological contexts, and (iv) estimation of predictive uncertainties at a location and across locations (Mathevet et al., 2006; Andréassian et al., 2009; Gupta et al., 2014) References Andréassian, V., Perrin, C., Berthet, L., Le Moine, N., Lerat, J., Loumagne, C., Oudin, L., Mathevet, T., Ramos, M. H., and Valéry, A.: Crash tests for a standardized evaluation of hydrological models, Hydrology and Earth System Sciences, 1757-1764, 2009. Gupta, H. V., Perrin, C., Blöschl, G., Montanari, A., Kumar, R., Clark, M., and Andréassian, V.: Large-sample hydrology: a need to balance depth with breadth, Hydrol. Earth Syst. Sci., 18, 463-477, doi:10.5194/hess-18-463-2014, 2014. Martinez, G. F., and H. V.Gupta (2011), Hydrologic consistency as a basis for

  19. Chaos in Temperature in Generic 2 p-Spin Models

    NASA Astrophysics Data System (ADS)

    Panchenko, Dmitry

    2016-09-01

    We prove chaos in temperature for even p-spin models which include sufficiently many p-spin interaction terms. Our approach is based on a new invariance property for coupled asymptotic Gibbs measures, similar in spirit to the invariance property that appeared in the proof of ultrametricity in Panchenko (Ann Math (2) 177(1):383-393, 2013), used in combination with Talagrand's analogue of Guerra's replica symmetry breaking bound for coupled systems.

  20. Reheating temperature in non-minimal derivative coupling model

    SciTech Connect

    Sadjadi, H. Mohseni; Goodarzi, Parviz E-mail: p_goodarzi@ut.ac.ir

    2013-07-01

    We consider the inflaton as a scalar field described by a non-minimal derivative coupling model with a power law potential. We study the slow roll inflation, the rapid oscillation phase, the radiation dominated and the recombination eras respectively, and estimate e-folds numbers during these epochs. Using these results and recent astrophysical data we determine the reheating temperature in terms of the spectral index and the amplitude of the power spectrum of scalar perturbations.

  1. An Empirical Temperature Variance Source Model in Heated Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations are divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  2. A regional neural network model for predicting mean daily river water temperature

    USGS Publications Warehouse

    Wagner, Tyler; DeWeber, Jefferson Tyrell

    2014-01-01

    Water temperature is a fundamental property of river habitat and often a key aspect of river resource management, but measurements to characterize thermal regimes are not available for most streams and rivers. As such, we developed an artificial neural network (ANN) ensemble model to predict mean daily water temperature in 197,402 individual stream reaches during the warm season (May–October) throughout the native range of brook trout Salvelinus fontinalis in the eastern U.S. We compared four models with different groups of predictors to determine how well water temperature could be predicted by climatic, landform, and land cover attributes, and used the median prediction from an ensemble of 100 ANNs as our final prediction for each model. The final model included air temperature, landform attributes and forested land cover and predicted mean daily water temperatures with moderate accuracy as determined by root mean squared error (RMSE) at 886 training sites with data from 1980 to 2009 (RMSE = 1.91 °C). Based on validation at 96 sites (RMSE = 1.82) and separately for data from 2010 (RMSE = 1.93), a year with relatively warmer conditions, the model was able to generalize to new stream reaches and years. The most important predictors were mean daily air temperature, prior 7 day mean air temperature, and network catchment area according to sensitivity analyses. Forest land cover at both riparian and catchment extents had relatively weak but clear negative effects. Predicted daily water temperature averaged for the month of July matched expected spatial trends with cooler temperatures in headwaters and at higher elevations and latitudes. Our ANN ensemble is unique in predicting daily temperatures throughout a large region, while other regional efforts have predicted at relatively coarse time steps. The model may prove a useful tool for predicting water temperatures in sampled and unsampled rivers under current conditions and future projections of climate
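
    A minimal sketch of the ensemble strategy described above, written in Python with scikit-learn; the authors' network architecture, software, and predictor set are not restated here, and a small ensemble with synthetic data stands in for the real inputs.

      # Hedged sketch: ensemble of small neural networks predicting mean daily
      # stream temperature, with the ensemble median as the final prediction.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)
      # Stand-in predictors: mean daily air T, prior 7-day mean air T,
      # log catchment area, riparian forest fraction.
      X = rng.normal(size=(886, 4))
      y = 10.0 + 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=1.5, size=886)

      ensemble = []
      for seed in range(25):   # the study used 100 ANNs; 25 kept here for brevity
          net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=seed)
          ensemble.append(net.fit(X, y))

      preds = np.column_stack([net.predict(X) for net in ensemble])
      y_hat = np.median(preds, axis=1)   # median over the ensemble
      rmse = float(np.sqrt(np.mean((y - y_hat) ** 2)))
      print("training RMSE on synthetic data (deg C):", rmse)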

  3. Transient magnetic field and temperature modeling in large magnet applications

    SciTech Connect

    Gurol, H.; Hardy, G.E.; Peck, S.D.; Leung, E. . Space Systems Div.)

    1989-07-01

    This paper discusses a coupled magnetic/thermal model developed to study heat and magnetic field diffusion in conducting materials subject to time-varying external fields. There are numerous applications, both military and commercial. These include: energy storage devices, pulsed power transformers, and electromagnetic launchers. The time scales of interest may range from a magnetic field pulse of a microsecond in an electromagnetic launcher, to hundreds of seconds in an energy storage magnet. The problem can be dominated by either the magnetic field or heat diffusion, depending on the temperature and the material properties of the conductor. In general, heat diffuses much more rapidly in high electrical conductivity materials at cryogenic temperatures. The magnetic field takes longer to diffuse, since screening currents can be rapidly set up which shield the interior of the material from further magnetic field penetration. Conversely, in high resistivity materials, the magnetic field diffuses much more rapidly. A coupled two-dimensional thermal/magnetic model has been developed. The results of this model, showing the time and spatial variation of the magnetic field and temperature, are discussed for the projectile of an electromagnetic launcher.
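
    In its simplest constant-property form, the coupled problem described above is governed by a pair of diffusion equations of the following type (the paper's model additionally lets the material properties vary with temperature):

      \frac{\partial \mathbf{B}}{\partial t} = \frac{1}{\mu_0 \sigma}\, \nabla^2 \mathbf{B},
      \qquad
      \rho\, c_p\, \frac{\partial T}{\partial t} = \nabla\!\cdot\!\left(k\, \nabla T\right) + \frac{J^2}{\sigma}

    where sigma is the electrical conductivity (strongly temperature dependent at cryogenic conditions), J the induced current density, and the Joule term J^2/sigma couples the magnetic and thermal fields.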

  4. Finite-temperature corrections in the dilated chiral quark model

    SciTech Connect

    Kim, Y.; Lee, Hyun Kyu; Rho, M.

    1995-03-01

    We calculate the finite-temperature corrections in the dilated chiral quark model using the effective potential formalism. Assuming that the dilaton limit is applicable at some short length scale, we interpret the results to represent the behavior of hadrons in dense and hot matter. We obtain the scaling law f_π(T)/f_π = m_Q(T)/m_Q ≈ m_σ(T)/m_σ, while we argue, using PCAC, that the pion mass does not scale within the temperature range involved in our Lagrangian. It is found that the hadron masses and the pion decay constant drop faster with temperature in the dilated chiral quark model than in the conventional linear sigma model that does not take into account the QCD scale anomaly. We attribute the difference in scaling in the heat bath to the effect of the baryonic medium on thermal properties of the hadrons. Our finding would imply that the AGS experiments (dense and hot matter) and the RHIC experiments (hot and dilute matter) will "see" different hadron properties in the hadronization exit phase.

  5. Stability of 11 prevalent synthetic cannabinoids in authentic neat oral fluid samples: glass versus polypropylene containers at different temperatures.

    PubMed

    Kneisel, Stefan; Speck, Michael; Moosmann, Bjoern; Auwärter, Volker

    2013-07-01

    Although synthetic cannabinoids have been intensively investigated in recent years and oral fluid testing is becoming increasingly popular in suspected driving under the influence of drugs cases, only scarce data on their stability in authentic neat oral fluid (nOF) samples are yet available. However, especially for these new psychoactive drugs, investigations focusing on stability issues are necessary as inappropriate storage conditions may lead to considerable analytical problems. Since it has been shown for Δ9-tetrahydrocannabinol that adsorption to plastic surfaces may lead to considerable drug loss, we aimed to evaluate whether adsorption also has to be taken into account for synthetic cannabinoids in nOF samples. In this paper, the results of investigations on the recovery of 11 prevalent synthetic cannabinoids from authentic nOF samples stored over 72 h in RapidEASE (high quality borosilicate glass) and Sciteck Saliva Split Collector (polypropylene) tubes at 4 and 25 °C are presented. Our findings clearly demonstrate that lipophilic synthetic cannabinoids present in nOF samples adsorb to the surface of polypropylene containers when stored at room temperature, leading to considerable drug loss. Hence, when using polypropylene tubes, samples should be shipped cooled in order to avoid a substantial decrease of the analyte concentration during transportation.

  6. Exact Tests for the Rasch Model via Sequential Importance Sampling

    ERIC Educational Resources Information Center

    Chen, Yuguo; Small, Dylan

    2005-01-01

    Rasch proposed an exact conditional inference approach to testing his model but never implemented it because it involves the calculation of a complicated probability. This paper furthers Rasch's approach by (1) providing an efficient Monte Carlo methodology for accurately approximating the required probability and (2) illustrating the usefulness…

  7. Precipitates/Salts Model Calculations for Various Drift Temperature Environments

    SciTech Connect

    P. Marnier

    2001-12-20

    The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation within a repository drift. This work is developed and documented using procedure AP-3.12Q, Calculations, in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The primary objective of this calculation is to predict the effects of evaporation on the abstracted water compositions established in ''EBS Incoming Water and Gas Composition Abstraction Calculations for Different Drift Temperature Environments'' (BSC 2001c). A secondary objective is to predict evaporation effects on observed Yucca Mountain waters for subsequent cement interaction calculations (BSC 2001d). The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b).

  8. Chemical vapor deposition modeling for high temperature materials

    NASA Technical Reports Server (NTRS)

    Goekoglu, Sueleyman

    1992-01-01

    The formalism for the accurate modeling of chemical vapor deposition (CVD) processes has matured based on the well established principles of transport phenomena and chemical kinetics in the gas phase and on surfaces. The utility and limitations of such models are discussed in practical applications for high temperature structural materials. Attention is drawn to the complexities and uncertainties in chemical kinetics. Traditional approaches based on only equilibrium thermochemistry and/or transport phenomena are defended as useful tools, within their validity, for engineering purposes. The role of modeling is discussed within the context of establishing the link between CVD process parameters and material microstructures/properties. It is argued that CVD modeling is an essential part of designing CVD equipment and controlling/optimizing CVD processes for the production and/or coating of high performance structural materials.

  9. Comparison of Temperature-Index Snowmelt Models for Use within an Operational Water Quality Model.

    PubMed

    Watson, Brett M; Putz, Gordon

    2014-01-01

    The accurate prediction of snowmelt runoff is a critical component of integrated hydrological and water quality models in regions where snowfall constitutes a significant portion of the annual precipitation. In cold regions, the accumulation of a snowpack and the subsequent spring snowmelt generally constitutes a major proportion of the annual water yield. Furthermore, the snowmelt runoff transports significant quantities of sediment and nutrients to receiving streams and strongly influences downstream water quality. Temperature-index models are commonly used in operational hydrological and water quality models to predict snowmelt runoff. Due to their simplicity, computational efficiency, low data requirements, and ability to consistently achieve good results, numerous temperature-index models of varying complexity have been developed in the past few decades. The objective of this study was to determine how temperature-index models of varying complexity would affect the performance of the water quality model SWAT (a modified version of SWAT that was developed for watersheds dominated by boreal forest) for predicting runoff. Temperature-index models used by several operational hydrological models were incorporated into SWAT. Model performance was tested on five watersheds on the Canadian Boreal Plain whose hydrologic response is dominated by snowmelt runoff. The results of this study indicate that simpler temperature-index models can perform as well as more complex temperature-index models for predicting runoff from the study watersheds. The outcome of this study has important implications because the incorporation of simpler temperature-index snowmelt models into hydrological and water quality models can lead to a reduction in the number of parameters that need to be optimized without sacrificing predictive accuracy.
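
    The simplest member of the temperature-index family compared above is the classical degree-day formulation; the more complex variants add radiation terms, seasonally varying melt factors, or antecedent-temperature indices:

      M = \max\bigl(0,\; C_m\,(T_a - T_b)\bigr)

    where M is the daily melt depth (mm d^{-1}), T_a the mean (or maximum) daily air temperature, T_b a base temperature near 0°C, and C_m the melt factor (mm d^{-1} °C^{-1}).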

  10. Effects of electrostatic discharge on three cryogenic temperature sensor models

    SciTech Connect

    Courts, S. Scott; Mott, Thomas B.

    2014-01-29

    Cryogenic temperature sensors are not usually thought of as electrostatic discharge (ESD) sensitive devices. However, the most common cryogenic thermometers in use today are thermally sensitive diodes or resistors - both electronic devices in their base form. As such, they are sensitive to ESD at some level above which either catastrophic or latent damage can occur. Instituting an ESD program for safe handling and installation of the sensor is costly and it is desirable to balance the risk of ESD damage against this cost. However, this risk cannot be evaluated without specific knowledge of the ESD vulnerability of the devices in question. This work examines three types of cryogenic temperature sensors for ESD sensitivity - silicon diodes, Cernox™ resistors, and wire wound platinum resistors, all manufactured by Lake Shore Cryotronics, Inc. Testing was performed per TIA/EIA FOTP129 (Human Body Model). Damage was found to occur in the silicon diode sensors at discharge levels of 1,500 V. For Cernox™ temperature sensors, damage was observed at 3,500 V. The platinum temperature sensors were not damaged by ESD exposure levels of 9,900 V. At the lower damage limit, both the silicon diode and the Cernox™ temperature sensors showed relatively small calibration shifts of 1 to 3 K at room temperature. The diode sensors were stable with time and thermal cycling, but the long term stability of the Cernox™ sensors was degraded. Catastrophic failure occurred at higher levels of ESD exposure.

  11. Rasch-modeling the Portuguese SOCRATES in a clinical sample.

    PubMed

    Lopes, Paulo; Prieto, Gerardo; Delgado, Ana R; Gamito, Pedro; Trigo, Hélder

    2010-06-01

    The Stages of Change Readiness and Treatment Eagerness Scale (SOCRATES) assesses motivation for treatment in the drug-dependent population. The development of adequate measures of motivation is needed in order to properly understand the role of this construct in rehabilitation. This study probed the psychometric properties of the SOCRATES in the Portuguese population by means of the Rasch Rating Scale Model, which allows the conjoint measurement of items and persons. The participants were 166 substance abusers under treatment for their addiction. Results show that the functioning of the five response categories is not optimal; our re-analysis indicates that a three-category system is the most appropriate one. By using this response category system, both model fit and estimation accuracy are improved. The discussion takes into account other factors such as item format and content in order to make suggestions for the development of better motivation-for-treatment scales.

  12. A TWO-TEMPERATURE MODEL OF MAGNETIZED PROTOSTELLAR OUTFLOWS

    SciTech Connect

    Wang, Liang-Yao; Shang, Hsien; Krasnopolsky, Ruben; Chiang, Tzu-Yang

    2015-12-10

    We explore kinematics and morphologies of molecular outflows driven by young protostars using magnetohydrodynamic simulations in the context of the unified wind model of Shang et al. The model explains the observed high-velocity jet and low-velocity shell features. In this work we investigate how these characteristics are affected by the underlying temperature and magnetic field strength. We study the problem of a warm wind running into a cold ambient toroid by using a tracer field that keeps track of the wind material. While an isothermal equation of state is adopted, the effective temperature is determined locally based on the wind mass fraction. In the unified wind model, the density of the wind is cylindrically stratified and highly concentrated toward the outflow axis. Our simulations show that for a sufficiently magnetized wind, the jet identity can be well maintained even at high temperatures. However, for a high temperature wind with low magnetization, the thermal pressure of the wind gas can drive material away from the axis, making the jet less collimated as it propagates. We also study the role of the poloidal magnetic field of the toroid. It is shown that the wind-ambient interface becomes more resistant to corrugation when the poloidal field is present, and the poloidal field that bunches up within the toroid prevents the swept-up material from being compressed into a thin layer. This suggests that the ambient poloidal field may play a role in producing a smoother and thicker swept-up shell structure in the molecular outflow.

  13. Determination of plasma temperature and electron density of iron in iron slag samples using laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Hussain, T.; Gondal, M. A.; Shamraiz, M.

    2016-08-01

    The plasma temperature and electron density of iron in iron slag samples taken from a local plant are studied. Optimal experimental conditions were evaluated using an Nd:YAG laser at 1064 nm. Some toxic elements were identified and quantitative measurements were also made. Plasma temperature and electron density were estimated using standard equations and well resolved iron spectral lines in the 229.06-358.11 nm region at 10, 20, 30 and 40 mJ laser pulse energy with a 4.5 μs delay time. These parameters were found to increase with increasing laser pulse energy. The Boltzmann distribution and experimentally measured line intensities support the assumption that the laser-induced plasma was in local thermal equilibrium. It is worth mentioning that the iron and steel sector generates tons of solid waste and residues annually, containing a variety of contaminants which can be harmful to the environment; therefore, knowledge, proper analysis and investigation of such iron slag are important.
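
    The standard relations alluded to above presumably include the Boltzmann-plot equation for the excitation temperature, which for optically thin lines of a single species reads

      \ln\!\left(\frac{I_{ki}\, \lambda_{ki}}{g_k\, A_{ki}}\right) = -\frac{E_k}{k_B T} + C

    so that plotting the left-hand side for several Fe lines against the upper-level energy E_k yields a straight line of slope -1/(k_B T); the electron density is then typically obtained from the Stark broadening of a suitable line.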

  14. Computer Modeling of Planetary Surface Temperatures in Introductory Astronomy Courses

    NASA Astrophysics Data System (ADS)

    Barker, Timothy; Goodman, J.

    2013-01-01

    Barker, T., and Goodman, J. C., Wheaton College, Norton, MA Computer modeling is an essential part of astronomical research, and so it is important that students be exposed to its powers and limitations in the first (and, perhaps, only) astronomy course they take in college. Building on the ideas of Walter Robinson (“Modeling Dynamic Systems,” Springer, 2002) we have found that STELLA software (ISEE Systems) allows introductory astronomy students to do sophisticated modeling by the end of two classes of instruction, with no previous experience in computer programming or calculus. STELLA’s graphical interface allows students to visualize systems in terms of “flows” in and out of “stocks,” avoiding the need to invoke differential equations. Linking flows and stocks allows feedback systems to be constructed. Students begin by building an easily understood system: a leaky bucket. This is a simple negative feedback system in which the volume in the bucket (a “stock”) depends on a fixed inflow rate and an outflow that increases in proportion to the volume in the bucket. Students explore how changing inflow rate and feedback parameters affect the steady-state volume and equilibration time of the system. This model is completed within a 50-minute class meeting. In the next class, students are given an analogous but more sophisticated problem: modeling a planetary surface temperature (“stock”) that depends on the “flow” of energy from the Sun, the planetary albedo, the outgoing flow of infrared radiation from the planet’s surface, and the infrared return from the atmosphere. Students then compare their STELLA model equilibrium temperatures to observed planetary temperatures, which agree with model ones for worlds without atmospheres, but give underestimates for planets with atmospheres, thus introducing students to the concept of greenhouse warming. We find that if we give the students part of this model at the start of a 50-minute class they are
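
    A hedged Python translation of the stock-and-flow exercise described above (the STELLA model itself is graphical; the numbers are illustrative, and the single-layer greenhouse factor is a common textbook simplification rather than necessarily the one used in the course):

      # Hedged sketch: planetary surface temperature as a "stock" fed by absorbed
      # sunlight and drained by outgoing infrared, integrated with simple Euler steps.
      SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
      S = 1361.0         # solar constant at 1 AU, W m^-2 (illustrative planet)
      ALBEDO = 0.3
      EPS_ATM = 0.0      # 0 = airless world; ~0.8 mimics a one-layer greenhouse
      HEAT_CAP = 1.0e7   # areal heat capacity of the surface "stock", J m^-2 K^-1

      T = 150.0          # arbitrary starting temperature, K
      dt = 3600.0        # time step, s
      for _ in range(200000):
          inflow = S * (1.0 - ALBEDO) / 4.0               # absorbed solar flux
          outflow = (1.0 - EPS_ATM / 2.0) * SIGMA * T**4  # infrared escaping to space
          T += dt * (inflow - outflow) / HEAT_CAP         # update the stock
      print("equilibrium surface temperature (K):", round(T, 1))
      # With EPS_ATM = 0 this settles near the airless value [S(1-A)/(4*SIGMA)]**0.25;
      # raising EPS_ATM warms the surface, illustrating greenhouse forcing.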

  15. 12 CFR Appendix B to Part 230 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 4 2014-01-01 2014-01-01 false Model Clauses and Sample Forms B Appendix B to... SYSTEM (CONTINUED) TRUTH IN SAVINGS (REGULATION DD) Pt. 230, App. B Appendix B to Part 230—Model Clauses and Sample Forms Table of contents B-1—Model Clauses for Account Disclosures (Section 230.4(b))...

  16. 12 CFR Appendix B to Part 230 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 4 2013-01-01 2013-01-01 false Model Clauses and Sample Forms B Appendix B to... SYSTEM (CONTINUED) TRUTH IN SAVINGS (REGULATION DD) Pt. 230, App. B Appendix B to Part 230—Model Clauses and Sample Forms Table of contents B-1—Model Clauses for Account Disclosures (Section 230.4(b))...

  17. 12 CFR Appendix B to Part 230 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 4 2012-01-01 2012-01-01 false Model Clauses and Sample Forms B Appendix B to... SYSTEM (CONTINUED) TRUTH IN SAVINGS (REGULATION DD) Pt. 230, App. B Appendix B to Part 230—Model Clauses and Sample Forms Table of contents B-1—Model Clauses for Account Disclosures (Section 230.4(b))...

  18. Low reheating temperatures in monomial and binomial inflationary models

    NASA Astrophysics Data System (ADS)

    Rehagen, Thomas; Gelmini, Graciela B.

    2015-06-01

    We investigate the allowed range of reheating temperature values in light of the Planck 2015 results and the recent joint analysis of Cosmic Microwave Background (CMB) data from the BICEP2/Keck Array and Planck experiments, using monomial and binomial inflationary potentials. While the well studied φ^2 inflationary potential is no longer favored by current CMB data, as well as φ^p with p > 2, a φ^1 potential and canonical reheating (w_re = 0) provide a good fit to the CMB measurements. In this last case, we find that the Planck 2015 68% confidence limit upper bound on the spectral index, n_s, implies an upper bound on the reheating temperature of T_re ≲ 6 × 10^10 GeV, and excludes instantaneous reheating. The low reheating temperatures allowed by this model open the possibility that dark matter could be produced during the reheating period instead of when the Universe is radiation dominated, which could lead to very different predictions for the relic density and momentum distribution of WIMPs, sterile neutrinos, and axions. We also study binomial inflationary potentials and show the effects of a small departure from a φ^1 potential. We find that as a subdominant φ^2 term in the potential increases, first instantaneous reheating becomes allowed, and then the lowest possible reheating temperature of T_re = 4 MeV is excluded by the Planck 2015 68% confidence limit.

  19. Rheological modelling of physiological variables during temperature variations at rest

    NASA Astrophysics Data System (ADS)

    Vogelaere, P.; de Meyer, F.

    1990-06-01

    The evolution with time of cardio-respiratory variables, blood pressure and body temperature has been studied on six males, resting in semi-nude conditions during short (30 min) cold stress exposure (0°C) and during passive recovery (60 min) at 20°C. Passive cold exposure does not induce a change in HR but increases VO2, VCO2, VE and core temperature Tre, whereas peripheral temperature is significantly lowered. The kinetic evolution of the studied variables was investigated using a Kelvin-Voigt rheological model. The results suggest that the human body, and by extension the measured physiological variables of its functioning, does not react as a perfect viscoelastic system. Cold exposure induces a more rapid adaptation for heart rate, blood pressure and skin temperatures than that observed during the rewarming period (20°C), whereas respiratory adjustments show an opposite evolution. During the cooling period of the experiment the adaptive mechanisms, taking effect to preserve core homeothermy and to obtain a higher oxygen supply, increase the energy loss of the body.
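
    For readers unfamiliar with the rheological analogy, the Kelvin-Voigt element used above pairs a spring and a dashpot in parallel; under a step load its response is the saturating exponential that is fitted to the physiological time courses (the mapping of stress and strain onto specific physiological variables is the paper's and is not restated here):

      \sigma(t) = E\,\varepsilon(t) + \eta\, \frac{d\varepsilon}{dt},
      \qquad
      \varepsilon(t) = \frac{\sigma_0}{E}\left(1 - e^{-t/\tau}\right), \quad \tau = \frac{\eta}{E}

    so the time constant tau characterizes how quickly each variable adapts to the cold step and to rewarming.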

  20. Tantalum strength model incorporating temperature, strain rate and pressure

    NASA Astrophysics Data System (ADS)

    Lim, Hojun; Battaile, Corbett; Brown, Justin; Lane, Matt

    Tantalum is a body-centered-cubic (BCC) refractory metal that is widely used in many applications in high temperature, strain rate and pressure environments. In this work, we propose a physically-based strength model for tantalum that incorporates effects of temperature, strain rate and pressure. A constitutive model for single crystal tantalum is developed based on dislocation kink-pair theory, and calibrated to measurements on single crystal specimens. The model is then used to predict deformations of single- and polycrystalline tantalum. In addition, the proposed strength model is implemented into Sandia's ALEGRA solid dynamics code to predict plastic deformations of tantalum in engineering-scale applications at extreme conditions, e.g. Taylor impact tests and Z machine's high pressure ramp compression tests, and the results are compared with available experimental data. Sandia National Laboratories is a multi program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  1. PHD TUTORIAL: Finite-temperature models of Bose Einstein condensation

    NASA Astrophysics Data System (ADS)

    Proukakis, Nick P.; Jackson, Brian

    2008-10-01

    The theoretical description of trapped weakly interacting Bose-Einstein condensates is characterized by a large number of seemingly very different approaches which have been developed over the course of time by researchers with very distinct backgrounds. Newcomers to this field, experimentalists and young researchers all face a considerable challenge in navigating through the 'maze' of abundant theoretical models, and simple correspondences between existing approaches are not always very transparent. This tutorial provides a generic introduction to such theories, in an attempt to single out common features and deficiencies of certain 'classes of approaches' identified by their physical content, rather than their particular mathematical implementation. This tutorial is structured in a manner accessible to a non-specialist with a good working knowledge of quantum mechanics. Although some familiarity with concepts of quantum field theory would be an advantage, key notions, such as the occupation number representation of second quantization, are nonetheless briefly reviewed. Following a general introduction, the complexity of models is gradually built up, starting from the basic zero-temperature formalism of the Gross-Pitaevskii equation. This structure enables readers to probe different levels of theoretical developments (mean field, number conserving and stochastic) according to their particular needs. In addition to its 'training element', we hope that this tutorial will prove useful to active researchers in this field, both in terms of the correspondences made between different theoretical models, and as a source of reference for existing and developing finite-temperature theoretical models.

  2. Effect of gravity on the erosion of model samples

    NASA Astrophysics Data System (ADS)

    Larionov, G. A.; Bushueva, O. G.; Dobrovol'skaya, N. G.; Kiryukhina, Z. P.; Krasnov, S. F.; Litvin, L. F.

    2015-07-01

    The article is devoted to the theoretical analysis and experimental investigation of the bottom and lateral erosion in shallow flows on slopes. The analysis of the ratio between the forces detaching and retaining a soil particle has showed that the erosion of the bed sidewall exceeds manifold the erosion of the bottom at the flow velocity close to the threshold value. When the flow velocity increases, the differences in the rate of erosion between the bottom and the sidewalls of the rill are leveled. The rate of sidewall erosion strongly depends on the slope of rill sides. The experimental studies of the effect of the sample surface inclination have completely confirmed the theoretical conclusions. It should be kept in mind that the lateral erosion under natural conditions is also limited by the laws of hydraulics. When the rill bed is widened due to the lateral erosion, the flow width increases and, hence, its velocity decreases to below the threshold value, which stops the erosion of the bed.

  3. HIGH TEMPERATURE HIGH PRESSURE THERMODYNAMIC MEASUREMENTS FOR COAL MODEL COMPOUNDS

    SciTech Connect

    Vinayak N. Kabadi

    1999-02-24

    The enthalpy of a fluid measured with respect to some reference temperature and pressure (enthalpy increment or Cp) is required for many engineering designs. Different techniques for determining enthalpy increments include direct measurement, integration of heat capacity as a function of temperature at constant pressure, and calculation from accurate density measurements as a function of temperature and pressure with ideal-gas enthalpies. Techniques have been developed for measurement of heat capacities using differential scanning calorimeters, but routine measurements with a precision better than 3% are rare. For thermodynamic model development, excess enthalpies or enthalpies of mixing of binary and ternary systems are generally required. Although these data can be calculated from measured values of incremental enthalpies of mixtures and corresponding pure components, the method of calculation involves subtraction of large numbers, and it is impossible to obtain accurate results from relatively accurate incremental enthalpy data. Directly measured heats of mixing provide better data for model development. In what follows, we give a brief literature survey of experimental methods available for measurement of incremental enthalpies as well as heats of mixing.

  4. Viability of human chondrocytes in an ex vivo model in relation to temperature and cartilage depth.

    PubMed

    Drobnic, M; Mars, T; Alibegović, A; Bole, V; Balazic, J; Grubic, Z; Brecelj, J

    2005-01-01

    Chondrocytes in human articular cartilage remain viable post-mortem. It has however not been established yet how the storage temperature affects their survival, which is essential information when post-mortem cartilage is used for toxicologic studies. Our aim was to construct a simple model of explanted knee cartilage and to test the influences of time and temperature on the viability of chondrocytes in the ex vivo conditions. Osteochondral cylinders were procured from the cadaveric femoral condyles. The cylinders were embedded in water-tight rubber tubes, which formed separate chondral and osteal compartments. Tubes were filled with normal saline, without additives, to keep chondrocytes under close-to-normal conditions. The samples were divided into two groups stored at 4 degrees C and 35 degrees C, respectively. Three samples of each of these two groups were analysed at the time of removal, and then three and nine days later. Images of Live-Dead staining were scanned by a confocal laser microscope. Count of viable chondrocytes in four regions, from surface to bone, was obtained using image analysis software. The regression model revealed that the number of viable chondrocytes decreased every day by 19% and that an increase in temperature by 1 degree C decreased their viability by 5.8%. The temperature effect fell by 0.2 percentage points for every 100 microm from the surface to the bone. Herein we demonstrate that chondrocytes remain viable in the ex vivo model of human knee cartilage long enough to be able to serve as a model for toxicologic studies. Their viability is, however, significantly influenced by time and temperature.

  5. Airfoil sampling of a pulsed Laval beam with tunable vacuum ultraviolet (VUV) synchrotron ionization quadrupole mass spectrometry: Application to low-temperature kinetics and product detection

    SciTech Connect

    Soorkia, Satchin; Liu, Chen-Lin; Savee, John D; Ferrell, Sarah J; Leone, Stephen R; Wilson, Kevin R

    2011-10-12

    A new pulsed Laval nozzle apparatus with vacuum ultraviolet (VUV) synchrotron photoionization quadrupole mass spectrometry is constructed to study low-temperature radical-neutral chemical reactions of importance for modeling the atmosphere of Titan and the outer planets. A design for the sampling geometry of a pulsed Laval nozzle expansion has been developed that operates successfully for the determination of rate coefficients by time-resolved mass spectrometry. The new concept employs airfoil sampling of the collimated expansion with excellent sampling throughput. Time-resolved profiles of the high Mach number gas flow obtained by photoionization signals show that perturbation of the collimated expansion by the airfoil is negligible. The reaction of C2H with C2H2 is studied at 70 K as a proof-of-principle result for both low-temperature rate coefficient measurements and product identification based on the photoionization spectrum of the reaction product versus VUV photon energy. This approach can be used to provide new insights into reaction mechanisms occurring at kinetic rates close to the collision-determined limit.
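    For readers unfamiliar with how rate coefficients are obtained from such time-resolved data, the following sketch (with synthetic numbers, not the published measurements) shows the usual pseudo-first-order analysis: the radical decay is fitted to an exponential at each excess co-reagent concentration, and the slope of k' versus [C2H2] then gives the bimolecular rate coefficient.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, s0, kprime, baseline):
    """Pseudo-first-order decay of the radical photoionization signal."""
    return s0 * np.exp(-kprime * t) + baseline

# Synthetic example data (arbitrary units); in practice these come from the
# time-resolved mass spectrometer signal recorded at each [C2H2].
t = np.linspace(0.0, 200e-6, 100)                    # s
signal = exp_decay(t, 1.0, 2.0e4, 0.05)
signal += np.random.default_rng(0).normal(0.0, 0.01, t.size)

popt, _ = curve_fit(exp_decay, t, signal, p0=(1.0, 1.0e4, 0.0))
kprime = popt[1]

# Repeating the fit at several excess-reagent concentrations and fitting
# k' = k2*[C2H2] (a line through the origin) yields the bimolecular rate
# coefficient k2.
print(f"fitted k' = {kprime:.3e} s^-1")
```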

  6. Baryon number dissipation at finite temperature in the standard model

    SciTech Connect

    Mottola, E. ); Raby, S. . Dept. of Physics); Starkman, G. . Dept. of Astronomy)

    1990-01-01

    We analyze the phenomenon of baryon number violation at finite temperature in the standard model, and derive the relaxation rate for the baryon density in the high temperature electroweak plasma. The relaxation rate, γ, is given in terms of real time correlation functions of the operator E·B, and is directly proportional to the sphaleron transition rate, Γ: γ ≼ n_f Γ/T³. Hence it is not instanton suppressed, as claimed by Cohen, Dugan and Manohar (CDM). We show explicitly how this result is consistent with the methods of CDM, once it is recognized that a new anomalous commutator is required in their approach. 19 refs., 2 figs.

  7. Modeling Verwey transition temperature of Fe3O4 nanocrystals

    NASA Astrophysics Data System (ADS)

    Jiang, Xiao bao; Xiao, Bei bei; Yang, Hong yu; Gu, Xiao yan; Sheng, Hong chao; Zhang, Xing hua

    2016-11-01

    The Verwey transition in nanoscale is an important physical property for Fe3O4 nanocrystals and has attracted extensive attention in recent years. In this work, an analytic thermodynamic model without any adjusting parameters is developed to estimate the size and shape effects on modulating the Verwey transition temperature of Fe3O4 nanocrystals. The results show that the Verwey transition temperature reduces with increasing shape parameter λ or decreasing size D. A good agreement between the prediction and the experimental data verified our physical insight that the Verwey transition of Fe3O4 can be directly related to the atomic thermal vibration. The results presented in this work will be of benefit to the understanding of the microscopic mechanism of the Verwey transition and the design of future generation switching and memory devices.

  8. Comparison of ET estimations by the three-temperature model, SEBAL model and eddy covariance observations

    NASA Astrophysics Data System (ADS)

    Zhou, Xinyao; Bi, Shaojie; Yang, Yonghui; Tian, Fei; Ren, Dandan

    2014-11-01

    The three-temperature (3T) model is a simple model which estimates plant transpiration from only temperature data. In-situ field experimental results have shown that 3T is a reliable evapotranspiration (ET) estimation model. Despite encouraging results from recent efforts extending the 3T model to remote sensing applications, literature shows limited comparisons of the 3T model with other remote sensing driven ET models. This research used ET obtained from eddy covariance to evaluate the 3T model and in turn compared the model-simulated ET with that of the more traditional SEBAL (Surface Energy Balance Algorithm for Land) model. A field experiment was conducted in the cotton fields of Taklamakan desert oasis in Xinjiang, Northwest China. Radiation and surface temperature were obtained from hyperspectral and thermal infrared images for clear days in 2013. The images covered the time period of 0900-1800 h at four different phenological stages of cotton. Meteorological data were automatically recorded in a station located at the center of the cotton field. Results showed that the 3T model accurately captured daily and seasonal variations in ET. As low dry soil surface temperatures induced significant errors in the 3T model, it was unsuitable for estimating ET in the early morning and late afternoon periods. The model-simulated ET was relatively more accurate for the squaring, bolling and boll-opening stages than for the seedling stage of cotton, when ET was generally low. Wind speed was apparently not a limiting factor of ET in the 3T model. This was attributed to the fact that surface temperature, a vital input of the model, indirectly accounted for the effect of wind speed on ET. Although the 3T model slightly overestimated ET compared with SEBAL and eddy covariance, it was generally reliable for estimating daytime ET during 0900-1600 h.

  9. Modeling of the viscoelastic behavior of a polyimide matrix at elevated temperature

    NASA Astrophysics Data System (ADS)

    Crochon, Thibaut

    Use of Polymer Matrix Composite Materials (PMCMs) in aircraft engines requires materials able to withstand extreme service conditions, such as elevated temperatures, high mechanical loadings and an oxidative environment. In such an environment, the polymer matrix is likely to exhibit a viscoelastic behavior dependent on the mechanical loading and temperature. In addition, the combined effects of elevated temperature and the environment near the engines are likely to increase physical as well as chemical aging. These various parameters need to be taken into consideration for the designer to be able to predict the material behavior over the service life of the components. The main objective of this thesis was to study the viscoelastic behavior of a high temperature polyimide matrix and develop a constitutive theory able to predict the material behavior for every service condition. The model then had to be implemented into commercially available finite-element software such as ABAQUS or ANSYS. Firstly, chemical aging of the material at service temperature was studied. To that end, a thermogravimetric analysis of the matrix was conducted on powder samples in air atmosphere. Two kinds of tests were performed: i) kinetic tests in which powder samples were heated at a constant rate until complete sublimation; ii) isothermal tests in which the samples were maintained at a constant temperature for 24 hours. The first tests were used to develop a degradation model, leading to an excellent fit of the experimental data. The model was then used to predict the isothermal data, but with much less success, particularly for the lowest temperatures. At those temperatures, the chemical degradation was preceded by an oxidation phase which the model was not designed to predict. Other isothermal degradation tests were also performed on tensile test samples instead of powders. Those tests were conducted at service temperature for a much longer period of time. The samples

  10. Temperature effects on pitfall catches of epigeal arthropods: a model and method for bias correction

    PubMed Central

    Saska, Pavel; van der Werf, Wopke; Hemerik, Lia; Luff, Martin L; Hatten, Timothy D; Honek, Alois; Pocock, Michael

    2013-01-01

    Carabids and other epigeal arthropods make important contributions to biodiversity, food webs and biocontrol of invertebrate pests and weeds. Pitfall trapping is widely used for sampling carabid populations, but this technique yields biased estimates of abundance (‘activity-density’) because individual activity – which is affected by climatic factors – affects the rate of catch. To date, the impact of temperature on pitfall catches, while suspected to be large, has not been quantified, and no method is available to account for it. This lack of knowledge and the unavailability of a method for bias correction affect the confidence that can be placed on results of ecological field studies based on pitfall data. Here, we develop a simple model for the effect of temperature, assuming a constant proportional change in the rate of catch per °C change in temperature, r, consistent with an exponential Q10 response to temperature. We fit this model to 38 time series of pitfall catches and accompanying temperature records from the literature, using first differences and other detrending methods to account for seasonality. We use meta-analysis to assess consistency of the estimated parameter r among studies. The mean rate of increase in total catch across data sets was 0.0863 ± 0.0058 per °C of maximum temperature and 0.0497 ± 0.0107 per °C of minimum temperature. Multiple regression analyses of 19 data sets showed that temperature is the key climatic variable affecting total catch. Relationships between temperature and catch were also identified at species level. Correction for temperature bias had substantial effects on seasonal trends of carabid catches. Synthesis and Applications. The effect of temperature on pitfall catches is shown here to be substantial and worthy of consideration when interpreting results of pitfall trapping. The exponential model can be used both for effect estimation and for bias correction of observed data. Correcting for
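    A minimal sketch of the bias correction implied by this exponential model is given below; r = 0.0863 per °C is the mean response to maximum temperature quoted in the abstract, while the reference temperature and the example catches are arbitrary.

```python
import numpy as np

def correct_pitfall_catch(catch, temp_c, r=0.0863, t_ref=20.0):
    """Standardize observed pitfall catches to a reference temperature.

    Assumes the catch rate changes by a constant proportion r per degree C
    (an exponential, Q10-type response), so the observed catch is scaled by
    exp(-r * (T - T_ref)).  r = 0.0863 is the mean per-degree increase with
    maximum temperature reported in the abstract; t_ref is an arbitrary choice.
    """
    catch = np.asarray(catch, dtype=float)
    temp_c = np.asarray(temp_c, dtype=float)
    return catch * np.exp(-r * (temp_c - t_ref))

# Example: the same underlying density sampled on a cool and on a warm day.
print(correct_pitfall_catch([12, 25], [15.0, 28.0]))
```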

  11. Modeling of Boehmite Leaching from Actual Hanford High-Level Waste Samples

    SciTech Connect

    Snow, L.A.; Rapko, B.M.; Poloski, A.P.; Peterson, R.A.

    2007-07-01

    The U.S. Department of Energy plans to vitrify approximately 60,000 metric tons of high-level waste (HLW) sludge from underground storage tanks at the Hanford Site in southeastern Washington State. To reduce the volume of HLW requiring treatment, a goal has been set to remove a significant quantity of the aluminum, which comprises nearly 70 percent of the sludge. Aluminum is found in the form of gibbsite and sodium aluminate, which can be easily dissolved by washing the waste stream with caustic, and boehmite, which comprises nearly half of the total aluminum, but is more resistant to caustic dissolution and requires higher treatment temperatures and hydroxide concentrations. Chromium, which makes up a much smaller amount (approximately 3%) of the sludge, must also be removed because there is a low tolerance for chromium in the HLW immobilization process. In this work, the coupled dissolution kinetics of aluminum and chromium species during caustic leaching of actual Hanford HLW samples is examined. The experimental results are used to develop a model that provides a basis for predicting dissolution dynamics from known process temperature and hydroxide concentration. (authors)
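    The record does not state the functional form of the dissolution model, so the sketch below only illustrates, with assumed placeholder parameters, how temperature and hydroxide concentration typically enter such a rate law (an Arrhenius term multiplied by an empirical hydroxide order); it is not the fitted Hanford model.

```python
import numpy as np

R = 8.314  # J/(mol*K)

def boehmite_dissolution_rate(T_kelvin, oh_molar, A=1.0e6, Ea=9.0e4, n=1.0):
    """Illustrative Arrhenius-type rate law, r = A*exp(-Ea/(R*T))*[OH-]^n.

    A, Ea and n are placeholder values, NOT the parameters of the published
    leaching model; they only show how temperature and hydroxide concentration
    would enter such a model.
    """
    return A * np.exp(-Ea / (R * T_kelvin)) * oh_molar**n

# Relative effect of raising the leach temperature from 85 C to 100 C at 3 M OH-:
r85 = boehmite_dissolution_rate(358.15, 3.0)
r100 = boehmite_dissolution_rate(373.15, 3.0)
print(f"rate ratio 100C/85C: {r100 / r85:.2f}")
```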

  12. Computational modelling of bone cement polymerization: temperature and residual stresses.

    PubMed

    Pérez, M A; Nuño, N; Madrala, A; García-Aznar, J M; Doblaré, M

    2009-09-01

    The two major concerns associated with the use of bone cement are the generation of residual stresses and possible thermal necrosis of surrounding bone. An accurate modelling of these two factors could be a helpful tool to improve cemented hip designs. Therefore, a computational methodology based on previously published works is presented in this paper, combining a kinetic and an energy balance equation. New assumptions are that both the elasticity modulus and the thermal expansion coefficient depend on the bone cement polymerization fraction. This model makes it possible to estimate the thermal distribution in the cement, which is later used to predict the stress-locking effect and to estimate the cement residual stresses. In order to validate the model, computational results are compared with experiments performed on an idealized cemented femoral implant. It will be shown that the standard finite element approach cannot predict the exact temporal evolution of either the temperature or the residual stresses, underestimating and overestimating their values, respectively. However, this standard approach can estimate the peak and long-term values of temperature and residual stresses within acceptable limits of measured values. Therefore, this approach is adequate to evaluate residual stresses for the mechanical design of cemented implants. In conclusion, new numerical techniques should be proposed in order to achieve accurate simulations of the problem involved in cemented hip replacements.
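    As a hedged, zero-dimensional caricature of the coupled kinetic and energy-balance equations described above (not the authors' finite-element formulation, and with placeholder parameters throughout), one can integrate a cure-fraction rate law together with a lumped heat balance:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 0-D cure/heat balance (placeholder values, not the paper's model):
# d(alpha)/dt = A*exp(-Ea/(R*T)) * (1 - alpha)**n        (kinetic equation)
# dT/dt       = Q_over_cp*d(alpha)/dt - h*(T - T_body)   (energy balance)
R = 8.314
A, Ea, n = 1.0e6, 5.0e4, 1.5          # placeholder kinetic parameters
Q_over_cp = 120.0                      # K of adiabatic temperature rise at full cure
h, T_body = 0.02, 310.0                # lumped loss coefficient (1/s), 37 C surroundings

def rhs(t, y):
    alpha, T = y
    dalpha = A * np.exp(-Ea / (R * T)) * max(1.0 - alpha, 0.0) ** n
    dT = Q_over_cp * dalpha - h * (T - T_body)
    return [dalpha, dT]

sol = solve_ivp(rhs, (0.0, 1200.0), [0.0, 296.0], max_step=1.0)
print(f"peak temperature: {sol.y[1].max() - 273.15:.1f} C")
```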

  13. High-temperature, microwave-assisted UV digestion: a promising sample preparation technique for trace element analysis.

    PubMed

    Florian, D; Knapp, G

    2001-04-01

    A novel, microwave-assisted, high-temperature UV digestion procedure was developed for the accelerated decomposition of interfering dissolved organic carbon (DOC) prior to trace element analysis of liquid samples such as industrial/municipal wastewater, groundwater and surface water, body fluids, infusions, beverages, and sewage. The technique is based on a closed, pressurized, microwave digestion device. UV irradiation is generated by immersed electrodeless Cd discharge lamps (228 nm) operated by the microwave field in the oven cavity. To enhance excitation efficiency, an antenna was fixed on top of the microwave lamp. The established immersion system enables maximum reaction temperatures up to 250-280 degrees C, resulting in a tremendous increase in mineralization efficiency. Compared to open UV digestion devices, decomposition time is reduced by a factor of 5 and the maximum initial concentration of DOC can be raised by at least a factor of 50. The system's performance on a real sample was evaluated by mineralization of skimmed milk (IRMM, CRM 151) and subsequent determination of trace elements using standard spectroscopic techniques. Recoveries for Cd (109%), Cu (112%), Fe (99%), and Pb (96%) showed good agreement with the 95% confidence interval of the certified values.

  14. Assessment of precipitation and temperature data from CMIP3 global climate models for hydrologic simulation

    NASA Astrophysics Data System (ADS)

    McMahon, T. A.; Peel, M. C.; Karoly, D. J.

    2015-01-01

    The objective of this paper is to identify better performing Coupled Model Intercomparison Project phase 3 (CMIP3) global climate models (GCMs) that reproduce grid-scale climatological statistics of observed precipitation and temperature for input to hydrologic simulation over global land regions. Current assessments are aimed mainly at examining the performance of GCMs from a climatology perspective and not from a hydrology standpoint. The performance of each GCM in reproducing the precipitation and temperature statistics was ranked and better performing GCMs identified for later analyses. Observed global land surface precipitation and temperature data were drawn from the Climatic Research Unit (CRU) 3.10 gridded data set and re-sampled to the resolution of each GCM for comparison. Observed and GCM-based estimates of mean and standard deviation of annual precipitation, mean annual temperature, mean monthly precipitation and temperature and Köppen-Geiger climate type were compared. The main metrics for assessing GCM performance were the Nash-Sutcliffe efficiency (NSE) index and root mean square error (RMSE) between modelled and observed long-term statistics. This information combined with a literature review of the performance of the CMIP3 models identified the following better performing GCMs from a hydrologic perspective: HadCM3 (Hadley Centre for Climate Prediction and Research), MIROCm (Model for Interdisciplinary Research on Climate) (Center for Climate System Research (The University of Tokyo), National Institute for Environmental Studies, and Frontier Research Center for Global Change), MIUB (Meteorological Institute of the University of Bonn, Meteorological Research Institute of KMA, and Model and Data group), MPI (Max Planck Institute for Meteorology) and MRI (Japan Meteorological Research Institute). The future response of these GCMs was found to be representative of the 44 GCM ensemble members which confirms that the selected GCMs are reasonably
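    The two headline metrics used in this assessment are straightforward to reproduce; the sketch below defines them and applies them to made-up numbers (not the CRU or GCM data).

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance about the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root mean square error between observed and modelled statistics."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

# Toy example: observed versus GCM long-term mean annual precipitation (mm)
# for a handful of grid cells (values are illustrative only).
obs = [820, 430, 1210, 660, 95]
gcm = [790, 510, 1150, 700, 140]
print(f"NSE = {nse(obs, gcm):.2f}, RMSE = {rmse(obs, gcm):.1f} mm")
```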

  15. Multi-water-bag models of ion temperature gradient instability in cylindrical geometry

    SciTech Connect

    Coulette, David; Besse, Nicolas

    2013-05-15

    Ion temperature gradient instabilities play a major role in the understanding of anomalous transport in core fusion plasmas. In the considered cylindrical geometry, ion dynamics is described using a drift-kinetic multi-water-bag model for the parallel velocity dependency of the ion distribution function. In a first stage, global linear stability analysis is performed. From the obtained normal modes, parametric dependencies of the main spectral characteristics of the instability are then examined. Comparison of the multi-water-bag results with a reference continuous Maxwellian case allows us to evaluate the effects of discrete parallel velocity sampling induced by the Multi-Water-Bag model. Differences between the global model and local models considered in previous works are discussed. Using results from linear, quasilinear, and nonlinear numerical simulations, an analysis of the first stage saturation dynamics of the instability is proposed, where the divergence between the three models is examined.

  16. Closed-population capture-recapture modeling of samples drawn one at a time.

    PubMed

    Barker, Richard J; Schofield, Matthew R; Wright, Janine A; Frantz, Alain C; Stevens, Chris

    2014-12-01

    Motivated by field sampling of DNA fragments, we describe a general model for capture-recapture modeling of samples drawn one at a time in continuous-time. Our model is based on Poisson sampling where the sampling time may be unobserved. We show that previously described models correspond to partial likelihoods from our Poisson model and their use may be justified through arguments concerning S- and Bayes-ancillarity of discarded information. We demonstrate a further link to continuous-time capture-recapture models and explain observations that have been made about this class of models in terms of partial ancillarity. We illustrate application of our models using data from the European badger (Meles meles) in which genotyping of DNA fragments was subject to error.

  17. How does observation uncertainty influence which stream water samples are most informative for model calibration?

    NASA Astrophysics Data System (ADS)

    Wang, Ling; van Meerveld, Ilja; Seibert, Jan

    2016-04-01

    Streamflow isotope samples taken during rainfall-runoff events are very useful for multi-criteria model calibration because they can help decrease parameter uncertainty and improve internal model consistency. However, the number of samples that can be collected and analysed is often restricted by practical and financial constraints. It is, therefore, important to choose an appropriate sampling strategy and to obtain samples that have the highest information content for model calibration. We used the Birkenes hydrochemical model and synthetic rainfall, streamflow and isotope data to explore which samples are most informative for model calibration. Starting with error-free observations, we investigated how many samples are needed to obtain a certain model fit. Based on different parameter sets, representing different catchments, and different rainfall events, we also determined which sampling times provide the most informative data for model calibration. Our results show that simulation performance for models calibrated with the isotopic data from two intelligently selected samples was comparable to simulations based on isotopic data for all 100 time steps. The models calibrated with the intelligently selected samples also performed better than the model calibrations with two benchmark sampling strategies (random selection and selection based on hydrologic information). Surprisingly, samples on the rising limb and at the peak were less informative than expected and, generally, samples taken at the end of the event were most informative. The timing of the most informative samples depends on the proportion of different flow components (baseflow, slow response flow, fast response flow and overflow). For events dominated by baseflow and slow response flow, samples taken at the end of the event after the fast response flow has ended were most informative; when the fast response flow was dominant, samples taken near the peak were most informative. However when overflow

  18. Control and diagnosis of temperature, density, and uniformity in x-ray heated iron/magnesium samples for opacity measurements

    SciTech Connect

    Nagayama, T.; Bailey, J. E.; Loisel, G.; Hansen, S. B.; Rochau, G. A.; Mancini, R. C.; MacFarlane, J. J.; Golovkin, I.

    2014-05-15

    Experimental tests are in progress to evaluate the accuracy of the modeled iron opacity at solar interior conditions, in particular to better constrain the solar abundance problem [S. Basu and H. M. Antia, Phys. Rep. 457, 217 (2008)]. Here, we describe measurements addressing three of the key requirements for reliable opacity experiments: control of sample conditions, independent sample condition diagnostics, and verification of sample condition uniformity. The opacity samples consist of iron/magnesium layers tamped by plastic. By changing the plastic thicknesses, we have controlled the iron plasma conditions to reach (1) Te = 167 ± 3 eV and ne = (7.1 ± 1.5) × 10^21 cm^-3, (2) Te = 170 ± 2 eV and ne = (2.0 ± 0.2) × 10^22 cm^-3, and (3) Te = 196 ± 6 eV and ne = (3.8 ± 0.8) × 10^22 cm^-3, which were measured by magnesium tracer K-shell spectroscopy. The opacity sample non-uniformity was directly measured by a separate experiment where Al is mixed into the side of the sample facing the radiation source and Mg into the other side. The iron condition was confirmed to be uniform within their measurement uncertainties by Al and Mg K-shell spectroscopy. The conditions are suitable for testing opacity calculations needed for modeling the solar interior, other stars, and high energy density plasmas.

  19. Spring Fluids from a Low-temperature Hydrothermal System at Dorado Outcrop: The First Samples of a Massive Global Flux

    NASA Astrophysics Data System (ADS)

    Wheat, C. G.; Fisher, A. T.; McManus, J.; Hulme, S.; Orcutt, B.

    2015-12-01

    Hydrothermal circulation through the volcanic ocean crust extracts about one fourth of Earth's lithospheric heat. Most of this advective heat loss occurs through ridge flanks, areas far from the magmatic influence of seafloor spreading, at relatively low temperatures (2-25 degrees Celsius). This process results in a flux of seawater through the oceanic crust that is commensurate with that delivered to the ocean from rivers. Given this large flow, even a modest (1-5 percent) change in concentration during circulation would impact geochemical cycles for many ions. Until recently such fluids that embody this process have not been collected or quantified despite the importance of this process, mainly because no site of focused, low-temperature discharge has been found. In 2013 we used Sentry (an AUV) and Jason II (an ROV) to generate a bathymetric map and locate springs within a geologic context on Dorado Outcrop, a ridge flank hydrothermal system that typifies such hydrothermal processes in the Pacific. Dorado Outcrop is located on 23 M.y. old seafloor of the Cocos Plate, where 70-90 percent of the lithospheric heat is removed. Spring fluids collected in 2013 confirmed small chemical anomalies relative to seawater, requiring new methods to collect, analyze, and interpret samples and data. In 2014 the submersible Alvin utilized these methods to recover the first high-quality spring samples from this system and year-long experiments. These unique data and samples represent the first of their type. For example, the presence of dissolved oxygen is the first evidence of an oxic ridge flank hydrothermal fluid, even though such fluids have been postulated to exist throughout a vast portion of the oceanic crust. Furthermore, chemical data confirm modest anomalies relative to seawater for some elements. Such anomalies, if characteristic throughout the global ocean, impact global geochemical cycles, crustal evolution, and subsurface microbial activity.

  20. Order-parameter-aided temperature-accelerated sampling for the exploration of crystal polymorphism and solid-liquid phase transitions

    NASA Astrophysics Data System (ADS)

    Yu, Tang-Qing; Chen, Pei-Yang; Chen, Ming; Samanta, Amit; Vanden-Eijnden, Eric; Tuckerman, Mark

    2014-06-01

    The problem of predicting polymorphism in atomic and molecular crystals constitutes a significant challenge both experimentally and theoretically. From the theoretical viewpoint, polymorphism prediction falls into the general class of problems characterized by an underlying rough energy landscape, and consequently, free energy based enhanced sampling approaches can be brought to bear on the problem. In this paper, we build on a scheme previously introduced by two of the authors in which the lengths and angles of the supercell are targeted for enhanced sampling via temperature accelerated adiabatic free energy dynamics [T. Q. Yu and M. E. Tuckerman, Phys. Rev. Lett. 107, 015701 (2011)]. Here, that framework is expanded to include general order parameters that distinguish different crystalline arrangements as target collective variables for enhanced sampling. The resulting free energy surface, being of quite high dimension, is nontrivial to reconstruct, and we discuss one particular strategy for performing the free energy analysis. The method is applied to the study of polymorphism in xenon crystals at high pressure and temperature using the Steinhardt order parameters without and with the supercell included in the set of collective variables. The expected fcc and bcc structures are obtained, and when the supercell parameters are included as collective variables, we also find several new structures, including fcc states with hcp stacking faults. We also apply the new method to the solid-liquid phase transition in copper at 1300 K using the same Steinhardt order parameters. Our method is able to melt and refreeze the system repeatedly, and the free energy profile can be obtained with high efficiency.

  1. Order-parameter-aided temperature-accelerated sampling for the exploration of crystal polymorphism and solid-liquid phase transitions

    PubMed Central

    Yu, Tang-Qing; Chen, Pei-Yang; Chen, Ming; Samanta, Amit; Vanden-Eijnden, Eric; Tuckerman, Mark

    2014-01-01

    The problem of predicting polymorphism in atomic and molecular crystals constitutes a significant challenge both experimentally and theoretically. From the theoretical viewpoint, polymorphism prediction falls into the general class of problems characterized by an underlying rough energy landscape, and consequently, free energy based enhanced sampling approaches can be brought to bear on the problem. In this paper, we build on a scheme previously introduced by two of the authors in which the lengths and angles of the supercell are targeted for enhanced sampling via temperature accelerated adiabatic free energy dynamics [T. Q. Yu and M. E. Tuckerman, Phys. Rev. Lett. 107, 015701 (2011)]. Here, that framework is expanded to include general order parameters that distinguish different crystalline arrangements as target collective variables for enhanced sampling. The resulting free energy surface, being of quite high dimension, is nontrivial to reconstruct, and we discuss one particular strategy for performing the free energy analysis. The method is applied to the study of polymorphism in xenon crystals at high pressure and temperature using the Steinhardt order parameters without and with the supercell included in the set of collective variables. The expected fcc and bcc structures are obtained, and when the supercell parameters are included as collective variables, we also find several new structures, including fcc states with hcp stacking faults. We also apply the new method to the solid-liquid phase transition in copper at 1300 K using the same Steinhardt order parameters. Our method is able to melt and refreeze the system repeatedly, and the free energy profile can be obtained with high efficiency. PMID:24907992

  2. Capillary gas chromatography with cryogenic oven temperature for headspace samples: analysis of chloroform or methylene chloride in whole blood.

    PubMed

    Watanabe, K; Seno, H; Ishii, A; Suzuki, O; Kumazawa, T

    1997-12-15

    A new and sensitive gas chromatography (GC) method for measurement of chloroform or methylene chloride in whole blood is presented. Trace levels of these analytes present in the headspace of samples were cryogenically trapped prior to on-line GC analysis. After heating of a blood sample containing chloroform and methylene chloride (internal standard, and vice versa) in a 7.0-mL vial at 55 degrees C for 20 min, 5 mL of the headspace vapor was drawn into a glass syringe. All vapor was introduced into an Rtx-Volatiles middle-bore capillary column in the splitless mode at -30 degrees C oven temperature to trap the entire analytes, and the oven temperature was programmed up to 280 degrees C for detection of the compounds and for cleaning of the column. The present conditions gave sharp peaks for both chloroform and methylene chloride and very low background noises for whole blood samples. As much as 11.5 and 20.0% of chloroform and methylene chloride, respectively, which had been added to whole blood in a vial, could be introduced into the GC column. The calibration curves showed linearity in the range of 0.05-5.0 micrograms/0.5 mL of whole blood. The detection limit was estimated to be about 2 ng/0.5 mL. The coefficients of intraday and interday variations were 1.31 and 8.90% for chloroform and 1.37 and 9.03% for methylene chloride, respectively. The data on chloroform or methylene chloride in rat blood after inhalation of each compound were also presented.
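    The quantification step behind the reported linear range follows the usual internal-standard calibration; the sketch below uses invented peak-area ratios (not the published data) to show the fit and the back-calculation of an unknown.

```python
import numpy as np

# Hypothetical calibration points: spiked chloroform (ug per 0.5 mL blood)
# versus peak-area ratio to the internal standard.  Values are illustrative;
# the abstract only states linearity over 0.05-5.0 ug/0.5 mL.
conc = np.array([0.05, 0.1, 0.5, 1.0, 2.5, 5.0])
ratio = np.array([0.021, 0.043, 0.22, 0.41, 1.05, 2.08])

slope, intercept = np.polyfit(conc, ratio, 1)
r = np.corrcoef(conc, ratio)[0, 1]
print(f"slope={slope:.3f}, intercept={intercept:.3f}, r^2={r**2:.4f}")

# Quantifying an unknown sample from its measured peak-area ratio:
unknown_ratio = 0.30
print(f"estimated concentration: {(unknown_ratio - intercept) / slope:.2f} ug/0.5 mL")
```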

  3. Improving the Performance of Temperature Index Snowmelt Model of SWAT by Using MODIS Land Surface Temperature Data

    PubMed Central

    Yang, Yan; Onishi, Takeo; Hiramatsu, Ken

    2014-01-01

    Simulation results of the widely used temperature index snowmelt model are greatly influenced by input air temperature data. Spatially sparse air temperature data remain the main factor inducing uncertainties and errors in that model, which limits its applications. Thus, to solve this problem, we created new air temperature data using linear regression relationships that can be formulated based on MODIS land surface temperature data. The Soil Water Assessment Tool model, which includes an improved temperature index snowmelt module, was chosen to test the newly created data. By evaluating simulation performance for daily snowmelt in three test basins of the Amur River, performance of the newly created data was assessed. The coefficient of determination (R2) and Nash-Sutcliffe efficiency (NSE) were used for evaluation. The results indicate that MODIS land surface temperature data can be used as a new source for air temperature data creation. This will improve snow simulation using the temperature index model in an area with sparse air temperature observations. PMID:25165746
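    The core idea, creating surrogate air-temperature inputs from a linear regression against MODIS land surface temperature, can be sketched as follows; the station/LST pairs and coefficients are illustrative, not those fitted for the Amur basins.

```python
import numpy as np

def fit_lst_to_air_temperature(lst, t_air):
    """Fit T_air = a*LST + b from days where both observations are available."""
    a, b = np.polyfit(np.asarray(lst, float), np.asarray(t_air, float), 1)
    return a, b

# Illustrative pairs of MODIS LST and station air temperature (deg C);
# in the study such a regression would be built per basin or station.
lst   = [-18.0, -10.5, -3.0, 2.5, 9.0, 15.5]
t_air = [-15.2,  -8.8, -2.1, 1.9, 7.4, 13.0]
a, b = fit_lst_to_air_temperature(lst, t_air)

# Create air-temperature input for the snowmelt model on days with LST only.
print(f"T_air ~= {a:.2f}*LST + {b:.2f}")
print(f"predicted T_air for LST = -5 C: {a * -5.0 + b:.1f} C")
```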

  4. 12 CFR Appendix B to Part 1030 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Model Clauses and Sample Forms B Appendix B to.... 1030, App. B Appendix B to Part 1030—Model Clauses and Sample Forms Table of Contents B-1—Model Clauses for Account Disclosures (Section 1030.4(b)) B-2—Model Clauses for Change in Terms (Section...

  5. 12 CFR Appendix B to Part 1030 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 8 2012-01-01 2012-01-01 false Model Clauses and Sample Forms B Appendix B to.... 1030, App. B Appendix B to Part 1030—Model Clauses and Sample Forms Table of Contents B-1—Model Clauses for Account Disclosures (Section 1030.4(b)) B-2—Model Clauses for Change in Terms (Section...

  6. 12 CFR Appendix B to Part 230 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 3 2011-01-01 2011-01-01 false Model Clauses and Sample Forms B Appendix B to... SYSTEM TRUTH IN SAVINGS (REGULATION DD) Pt. 230, App. B Appendix B to Part 230—Model Clauses and Sample Forms Table of contents B-1—Model Clauses for Account Disclosures (Section 230.4(b)) B-2—Model...

  7. 12 CFR Appendix B to Part 707 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 7 2014-01-01 2014-01-01 false Model Clauses and Sample Forms B Appendix B to...) Account Disclosures) B-10—Sample Form (Periodic Statement) B-11—Sample Form (Rate and Fee Schedule) B-12... terms, leaving the decision instead to each credit union's board of directors. 12 CFR 204.2(c)(2)....

  8. Accuracy of Parameter Estimation in Gibbs Sampling under the Two-Parameter Logistic Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho; Cohen, Allan S.

    The accuracy of Gibbs sampling, a Markov chain Monte Carlo procedure, was considered for estimation of item and ability parameters under the two-parameter logistic model. Memory test data were analyzed to illustrate the Gibbs sampling procedure. Simulated data sets were analyzed using Gibbs sampling and the marginal Bayesian method. The marginal…
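    For context, the two-parameter logistic (2PL) model whose item and ability parameters are estimated by Gibbs sampling is simply a logistic item response function; the sketch below simulates a small response matrix of the kind such a sampler would be run on (all parameter values are arbitrary).

```python
import numpy as np

def p_correct(theta, a, b):
    """Two-parameter logistic model: P(X=1 | theta) = 1 / (1 + exp(-a*(theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Simulate a small response matrix such as the one a Gibbs sampler (or the
# marginal Bayesian method) would be applied to.  Parameter values are arbitrary.
rng = np.random.default_rng(1)
theta = rng.normal(0.0, 1.0, size=500)        # examinee abilities
a = rng.uniform(0.8, 2.0, size=10)            # item discriminations
b = rng.normal(0.0, 1.0, size=10)             # item difficulties
prob = p_correct(theta[:, None], a[None, :], b[None, :])
responses = rng.binomial(1, prob)             # 500 examinees x 10 items
print(responses.mean(axis=0))                 # observed proportion correct per item
```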

  9. Integrating a human thermoregulatory model with a clothing model to predict core and skin temperatures.

    PubMed

    Yang, Jie; Weng, Wenguo; Wang, Faming; Song, Guowen

    2017-05-01

    This paper aims to integrate a human thermoregulatory model with a clothing model to predict core and skin temperatures. The human thermoregulatory model, consisting of an active system and a passive system, was used to determine the thermoregulation and heat exchanges within the body. The clothing model simulated heat and moisture transfer from the human skin to the environment through the microenvironment and fabric. In this clothing model, the air gap between skin and clothing, as well as clothing properties such as thickness, thermal conductivity, density, porosity, and tortuosity were taken into consideration. The simulated core and mean skin temperatures were compared to the published experimental results of subject tests at three levels of ambient temperatures of 20 °C, 30 °C, and 40 °C. Although lower signal-to-noise-ratio was observed, the developed model demonstrated positive performance at predicting core temperatures with a maximum difference between the simulations and measurements of no more than 0.43 °C. Generally, the current model predicted the mean skin temperatures with reasonable accuracy. It could be applied to predict human physiological responses and assess thermal comfort and heat stress.

  10. Satellite Derived Land Surface Temperature for Model Assimilation

    NASA Technical Reports Server (NTRS)

    Suggs, Ronnie J.; Jedlovec, Gary J.; Lapenta, William

    1999-01-01

    Studies have shown that land surface temperature (LST) tendencies are sensitive to the surface moisture availability which is a function of soil moisture and vegetation. The assimilation of satellite derived LST tendencies into the surface energy budget of mesoscale models has shown promise in improving the representation of the complex effects of both soil moisture and vegetation within the models for short term simulations. LST derived from geostationary satellites has the potential of providing the temporal and spatial resolution needed for an LST assimilation process. This paper presents an analysis comparing the LST derived from GOES-8 infrared measurements with LST calculated by the MM5 numerical model. The satellite derived LSTs are calculated using a physical split window approach using channels 4 and 5 of GOES-8. The differences in the LST data sets, especially the tendencies, are presented and examined. Quantifying the differences between the data sets provides insight into possible weaknesses in the model parameterizations affecting the surface energy budget calculations and an indication of the potential effectiveness of assimilating LST into the models.

  11. Comparison of eruptive and intrusive samples from Unzen Volcano, Japan: Effects of contrasting pressure-temperature-time paths

    NASA Astrophysics Data System (ADS)

    Almberg, L. D.; Larsen, J. F.; Eichelberger, J. C.; Vogel, T. A.; Patino, L. C.

    2008-07-01

    Core samples from the conduit of Unzen Volcano, obtained only 9 years after cessation of the 1991-1995 eruption, exhibit important differences in physical characteristics and mineralogy, and subtle differences in bulk chemistry from erupted samples. These differences in the conduit samples reflect emplacement under a confining pressure where about half of the original magmatic water was retained in the melt phase, maintenance at hypersolidus temperature for some unknown but significant time span, and subsequent subsolidus hydrothermal alteration. In contrast, magma that extruded as lava underwent decompression to 1 atm with nearly complete loss of magmatic water and cooling at a sufficiently rapid rate to produce glass. The resulting hypabyssal texture of the conduit samples, while clearly distinct from eruptive rocks, is also distinct from plutonic suites. Given the already low temperature of the conduit (less than 200 °C, [Nakada, S., Uto, K., Yoshimoto, M., Eichelberger, J.C., Shimizu, H., 2005. Scientific Results of Conduit Drilling in the Unzen Scientific Drilling Project (USDP), Sci. Drill., 1, 18-22]) when it was sampled by drilling, this texture must have developed within a decade, and perhaps within a much shorter time, after emplacement. The fact that all trace-element concentrations of the conduit and the last-emplaced lava of the spine, 1300 m above it, are identical to within analytical uncertainty provides strong evidence that both were produced during the same eruption sequence. Changes in conduit magma that occurred between emplacement and cooling to the solidus were collapse of vesicles from less than or equal to the equilibrium value of about 50 vol.% to about 0.1 vol.%; continued resorption of quartz and reaction of biotite phenocrysts due to heating of magma prior to ascent by intruding mafic magma; breakdown of hornblende; and micro-crystallization of rhyolitic melt to feldspar and quartz. Subsolidus changes were deposition of calcite and

  12. High Temperature Chemical Kinetic Combustion Modeling of Lightly Methylated Alkanes

    SciTech Connect

    Sarathy, S M; Westbrook, C K; Pitz, W J; Mehl, M

    2011-03-01

    Conventional petroleum jet and diesel fuels, as well as alternative Fischer-Tropsch (FT) fuels and hydrotreated renewable jet (HRJ) fuels, contain high molecular weight lightly branched alkanes (i.e., methylalkanes) and straight chain alkanes (n-alkanes). Improving the combustion of these fuels in practical applications requires a fundamental understanding of large hydrocarbon combustion chemistry. This research project presents a detailed high temperature chemical kinetic mechanism for n-octane and three lightly branched octane isomers (i.e., 2-methylheptane, 3-methylheptane, and 2,5-dimethylhexane). The model is validated against experimental data from a variety of fundamental combustion devices. This new model is used to show how the location and number of methyl branches affect fuel reactivity, including laminar flame speed and species formation.

  13. Low reheating temperatures in monomial and binomial inflationary models

    SciTech Connect

    Rehagen, Thomas; Gelmini, Graciela B. E-mail: gelmini@physics.ucla.edu

    2015-06-01

    We investigate the allowed range of reheating temperature values in light of the Planck 2015 results and the recent joint analysis of Cosmic Microwave Background (CMB) data from the BICEP2/Keck Array and Planck experiments, using monomial and binomial inflationary potentials. While the well studied φ² inflationary potential is no longer favored by current CMB data, as well as φ^p with p>2, a φ^1 potential and canonical reheating (w_re = 0) provide a good fit to the CMB measurements. In this last case, we find that the Planck 2015 68% confidence limit upper bound on the spectral index, n_s, implies an upper bound on the reheating temperature of T_re ≲ 6×10^10 GeV, and excludes instantaneous reheating. The low reheating temperatures allowed by this model open the possibility that dark matter could be produced during the reheating period instead of when the Universe is radiation dominated, which could lead to very different predictions for the relic density and momentum distribution of WIMPs, sterile neutrinos, and axions. We also study binomial inflationary potentials and show the effects of a small departure from a φ^1 potential. We find that as a subdominant φ² term in the potential increases, first instantaneous reheating becomes allowed, and then the lowest possible reheating temperature of T_re = 4 MeV is excluded by the Planck 2015 68% confidence limit.

  14. Low reheating temperatures in monomial and binomial inflationary models

    SciTech Connect

    Rehagen, Thomas; Gelmini, Graciela B.

    2015-06-23

    We investigate the allowed range of reheating temperature values in light of the Planck 2015 results and the recent joint analysis of Cosmic Microwave Background (CMB) data from the BICEP2/Keck Array and Planck experiments, using monomial and binomial inflationary potentials. While the well studied φ² inflationary potential is no longer favored by current CMB data, as well as φ^p with p>2, a φ^1 potential and canonical reheating (w_re = 0) provide a good fit to the CMB measurements. In this last case, we find that the Planck 2015 68% confidence limit upper bound on the spectral index, n_s, implies an upper bound on the reheating temperature of T_re ≲ 6×10^10 GeV, and excludes instantaneous reheating. The low reheating temperatures allowed by this model open the possibility that dark matter could be produced during the reheating period instead of when the Universe is radiation dominated, which could lead to very different predictions for the relic density and momentum distribution of WIMPs, sterile neutrinos, and axions. We also study binomial inflationary potentials and show the effects of a small departure from a φ^1 potential. We find that as a subdominant φ² term in the potential increases, first instantaneous reheating becomes allowed, and then the lowest possible reheating temperature of T_re = 4 MeV is excluded by the Planck 2015 68% confidence limit.

  15. Model-based estimation of changes in air temperature seasonality

    NASA Astrophysics Data System (ADS)

    Barbosa, Susana; Trigo, Ricardo

    2010-05-01

    Seasonality is a ubiquitous feature in climate time series. Climate change is expected to involve not only changes in the mean of climate parameters but also changes in the characteristics of the corresponding seasonal cycle. Therefore the identification and quantification of changes in seasonality is a highly relevant topic in climate analysis, particularly in a global warming context. However, the analysis of seasonality is far from a trivial task. A key challenge is the discrimination between long-term changes in the mean and long-term changes in the seasonal pattern itself, which requires the use of appropriate statistical approaches in order to be able to distinguish between overall trends in the mean and trends in the seasons. Model-based approaches are particularly suitable for the analysis of seasonality, enabling assessment of uncertainties in the amplitude and phase of seasonal patterns within a well defined statistical framework. This work addresses the changes in the seasonality of air temperature over the 20th century. The analysed data are global air temperature values close to the surface (2 m above ground) and in the mid-troposphere (500 hPa geopotential height) from the recently developed 20th century reanalysis. This new 3-D reanalysis dataset extends back to 1891, considerably further than all other reanalyses currently in use (e.g. NCAR, ECMWF), and was obtained with the Ensemble Filter (Compo et al., 2006) by assimilation of pressure observations into a state-of-the-art atmospheric general circulation model that includes the radiative effects of historical time-varying CO2 concentrations, volcanic aerosol emissions and solar output variations. A modeling approach based on autoregression (Barbosa et al, 2008; Barbosa, 2009) is applied within a Bayesian framework for the estimation of a time-varying seasonal pattern and further quantification of changes in the amplitude and phase of air temperature over the 20th century. Barbosa, SM, Silva, ME, Fernandes, MJ

  16. Multiscale regression model to infer historical temperatures in a central Mediterranean sub-regional area

    NASA Astrophysics Data System (ADS)

    Diodato, N.; Bellocchi, G.; Bertolin, C.; Camuffo, D.

    2010-12-01

    To reconstruct sub-regional European climate over the past centuries, several efforts have been made using historical datasets. However, only scattered information at low spatial and temporal resolution has been produced to date for the Mediterranean area. This paper has exploited, for Southern and Central Italy (Mediterranean Sub-Regional Area), an unprecedented historical dataset in an attempt to model seasonal (winter and summer) air temperatures in pre-instrumental time (back to 1500). Combining information derived from proxy documentary data and large-scale simulation, a statistical methodology in the form of a multiscale-temperature regression (MTR) model was developed to adapt larger-scale estimations to the sub-regional temperature pattern. The modelled response is essentially free of autocorrelation among the residuals (marginal or no significance in the Durbin-Watson statistic), and agrees well with the independent data from the validation sample (Nash-Sutcliffe efficiency coefficient >0.60). The advantage of the approach is not merely increased accuracy in estimation. Rather, it relies on the ability to extract (and exploit) the right information to replicate coherent temperature series in historical times.

  17. Modeling thermomechanical fatigue life of high-temperature titanium alloy IMI 834

    NASA Astrophysics Data System (ADS)

    Maier, H. J.; Teteruk, R. G.; Christ, H.-J.

    2000-02-01

    A microcrack propagation model was developed to predict thermomechanical fatigue (TMF) life of high-temperature titanium alloy IMI 834 from isothermal data. Pure fatigue damage, which is assumed to evolve independent of time, is correlated using the cyclic J integral. For test temperatures exceeding about 600 °C, oxygen-induced embrittlement of the material ahead of the advancing crack tip is the dominating environmental effect. To model the contribution of this damage mechanism to fatigue crack growth, extensive use of metallographic measurements was made. Comparisons between stress-free annealed samples and fatigued specimens revealed that oxygen uptake is strongly enhanced by cyclic plastic straining. In fatigue tests with a temperature below about 500 °C, the contribution of oxidation was found to be negligible, and the detrimental environmental effect was attributed to the reaction of water vapor with freshly exposed material at the crack tip. Both environmental degradation mechanisms contributed to damage evolution only in out-of-phase TMF tests, and thus, this loading mode is most detrimental. Electron microscopy revealed that cyclic stress-strain response and crack initiation mechanisms are affected by the change from planar dislocation slip to a more wavy type as test temperature is increased. The predictive capabilities of the model are shown to result from the close correlation with the microstructural observations.

  18. Techniques for Down-Sampling a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to down-sample a measured surface map for model validation, not only without introducing re-sampling errors, but also while eliminating the existing measurement noise and measurement errors. The two new techniques implemented in this software tool can be used in all optical model validation processes involving large space optical surfaces.
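    The abstract does not describe the two techniques themselves, so the Python sketch below shows only a generic baseline for comparison: block-mean down-sampling of a measured surface height map onto a coarser model grid (array sizes and names are illustrative).

        import numpy as np

        def downsample_block_mean(height_map, factor):
            # Average non-overlapping factor x factor blocks; edges are cropped so the
            # array dimensions are multiples of the factor.
            h, w = height_map.shape
            h2, w2 = (h // factor) * factor, (w // factor) * factor
            blocks = height_map[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
            return blocks.mean(axis=(1, 3))

        surface = np.random.normal(0.0, 10e-9, (1024, 1024))   # measured heights, metres
        coarse = downsample_block_mean(surface, 8)
        print(coarse.shape)                                     # (128, 128) model-grid map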

  19. GY SAMPLING THEORY AND GEOSTATISTICS: ALTERNATE MODELS OF VARIABILITY IN CONTINUOUS MEDIA

    EPA Science Inventory



    In the sampling theory developed by Pierre Gy, sample variability is modeled as the sum of a set of seven discrete error components. The variogram used in geostatistics provides an alternate model in which several of Gy's error components are combined in a continuous mode...
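    For readers unfamiliar with the geostatistical side of this comparison, the sketch below (Python, synthetic one-dimensional data) computes the classical empirical semivariogram, the tool referred to above as the alternate, continuous model of sample variability.

        import numpy as np

        def empirical_variogram(x, z, bin_edges):
            # Matheron estimator: gamma(h) = mean of 0.5 * (z_i - z_j)^2 over all pairs
            # whose separation |x_i - x_j| falls in the given lag bin.
            x, z = np.asarray(x, float), np.asarray(z, float)
            dist = np.abs(x[:, None] - x[None, :])
            half_sq = 0.5 * (z[:, None] - z[None, :]) ** 2
            gamma = []
            for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
                mask = (dist > lo) & (dist <= hi)
                gamma.append(half_sq[mask].mean() if mask.any() else np.nan)
            return np.array(gamma)

        x = np.sort(np.random.uniform(0.0, 100.0, 200))          # sample locations on a transect
        z = np.sin(x / 10.0) + np.random.normal(0.0, 0.2, x.size)
        print(empirical_variogram(x, z, np.arange(0.0, 55.0, 5.0)))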

  20. Application of the Tripartite Model to a Complicated Sample of Residential Youth with Externalizing Problems

    ERIC Educational Resources Information Center

    Chin, Eu Gene; Ebesutani, Chad; Young, John

    2013-01-01

    The tripartite model of anxiety and depression has received strong support among child and adolescent populations. Clinical samples of children and adolescents in these studies, however, have usually been referred for treatment of anxiety and depression. This study investigated the fit of the tripartite model with a complicated sample of…

  1. Modification of an RBF ANN-Based Temperature Compensation Model of Interferometric Fiber Optical Gyroscopes

    PubMed Central

    Cheng, Jianhua; Qi, Bing; Chen, Daidai; Landry, René Jr.

    2015-01-01

    This paper presents a modification of Radial Basis Function Artificial Neural Network (RBF ANN)-based temperature compensation models for Interferometric Fiber Optical Gyroscopes (IFOGs). Based on the mathematical expression of the IFOG output, three temperature-relevant terms are extracted: (1) the temperature of the fiber loops; (2) the temperature variation of the fiber loops; (3) the temperature product term of the fiber loops. Then, the input-modified RBF ANN-based temperature compensation scheme is established, in which the temperature-relevant terms are fed to the RBF ANN for training. Experimental temperature tests are conducted and sufficient data are collected and post-processed to form the novel RBF ANN. Finally, we apply the modified RBF ANN-based temperature compensation model to two IFOGs with temperature compensation capabilities. The experimental results show that the proposed temperature compensation model can efficiently reduce the influence of environmental temperature on the IFOG output and exhibits better temperature compensation performance than the conventional scheme without the proposed improvements. PMID:25985163
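    A minimal sketch of the idea (Python with NumPy, synthetic data; the actual training procedure, centre selection and IFOG data are not reproduced here): Gaussian RBF features are built from the three temperature-relevant inputs named above and fitted by least squares to a drift signal, which is then subtracted as the compensation.

        import numpy as np

        def rbf_features(X, centers, sigma):
            # Gaussian radial basis functions of the (standardised) inputs.
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-d2 / (2.0 * sigma ** 2))

        rng = np.random.default_rng(1)
        T = rng.uniform(-20.0, 60.0, 500)                 # fibre-loop temperature
        dT = rng.uniform(-0.05, 0.05, 500)                # temperature variation (rate)
        X = np.column_stack([T, dT, T * dT])              # the three temperature-relevant terms
        drift = 0.02 * T + 3.0 * dT + 0.5 * T * dT + rng.normal(0.0, 0.01, 500)

        Xn = (X - X.mean(axis=0)) / X.std(axis=0)         # standardise the inputs
        centers = Xn[rng.choice(len(Xn), 20, replace=False)]
        Phi = rbf_features(Xn, centers, sigma=1.0)
        w = np.linalg.lstsq(Phi, drift, rcond=None)[0]
        compensated = drift - Phi @ w                     # output after compensation
        print("rms drift before/after:", drift.std().round(4), compensated.std().round(4))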

  2. An improved temperature model of the Antarctic uppermost mantle for the benefit of GIA modelling

    NASA Astrophysics Data System (ADS)

    Stolk, Ward; Kaban, Mikhail; van der Wal, Wouter; Wiens, Doug

    2014-05-01

    Mass changes in Antarctica's ice cap influence the underlying lithosphere and upper mantle. The dynamics of the solid earth are in turn coupled back to the surface and ice dynamics. Furthermore, mass changes due to lithosphere and uppermost mantle dynamics pollute measurements of ice mass change in Antarctica. Thus an improved understanding of the temperature, composition and rheology of the Antarctic lithosphere is required, not only to improve geodynamic modelling of the Antarctic continent (e.g. glacial isostatic adjustment (GIA) modelling), but also to improve climate monitoring and research. Recent field studies in Antarctica have generated much new data. These data, especially an improved assessment of crustal thickness and seismic tomography of the upper mantle, now allow for the construction of an improved regional temperature model of the Antarctic uppermost mantle. Even a small improvement in the temperature models for the uppermost mantle could have a significant effect on GIA modelling in Antarctica. Our regional temperature model is based on a joint analysis of a high resolution seismic tomography model (Heeszel et al., forthcoming) and a recent global gravity model (Foerste et al., 2011). The model will be further constrained by additional local data where available. Based on an initial general mantle composition, the temperature and density in the uppermost mantle are modelled, elaborating on the methodology of Goes et al. (2000) and Cammarano et al. (2003). The gravity signal of the constructed model is obtained using forward gravity modelling. This signal is compared with the observed gravity signal, and the differences form the basis for the compositional model in the next iteration. The first preliminary results of this study, presented here, will focus on the cratonic areas in East Antarctica, for which the modelling converges after a few iterations. Cammarano, F. and Goes, S. and Vacher, P. and Giardini, D. (2003) Inferring upper-mantle temperatures from
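    The full joint inversion is beyond a short example, but the first step described above can be sketched as follows (Python; the velocity-to-temperature sensitivity and reference values are rough placeholders, not the calibrations of Goes et al. or Cammarano et al.): a shear-velocity anomaly from tomography is converted to a temperature anomaly and then to a density anomaly through thermal expansion, which is what the forward gravity modelling would subsequently use.

        import numpy as np

        # Placeholder sensitivities for the uppermost mantle.
        DLNVS_DT = -1.0e-4      # fractional Vs change per kelvin (order of magnitude only)
        ALPHA = 3.0e-5          # thermal expansion coefficient, 1/K
        RHO_REF = 3300.0        # reference mantle density, kg/m^3
        T_REF = 1300.0          # reference temperature, degrees C

        def temperature_from_vs_anomaly(dlnvs):
            # dlnVs = DLNVS_DT * dT  ->  dT = dlnVs / DLNVS_DT
            return T_REF + dlnvs / DLNVS_DT

        def density_from_temperature(T):
            # rho = rho_ref * (1 - alpha * (T - T_ref))
            return RHO_REF * (1.0 - ALPHA * (T - T_REF))

        dlnvs = np.array([-0.02, 0.0, 0.03])         # slow, neutral, fast (e.g. cratonic) anomalies
        T = temperature_from_vs_anomaly(dlnvs)
        print(T)                                     # roughly 1500, 1300, 1000 degrees C
        print(density_from_temperature(T))           # densities passed to the gravity modelling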

  3. Modeling the snow surface temperature with a one-layer energy balance snowmelt model

    NASA Astrophysics Data System (ADS)

    You, J.; Tarboton, D. G.; Luce, C. H.

    2013-12-01

    Snow surface temperature is a key control on energy exchanges at the snow surface, particularly net longwave radiation and turbulent energy fluxes. The snow surface temperature is in turn controlled by the balance between various external fluxes and the conductive heat flux, internal to the snowpack. Because of the strong insulating properties of snow, thermal gradients in snowpacks are large and nonlinear, a fact that has led many to advocate multiple layer snowmelt models over single layer models. In an effort to keep snowmelt modeling simple and parsimonious, the Utah Energy Balance (UEB) snowmelt model used only one layer but allowed the snow surface temperature to be different from the snow average temperature by using an equilibrium gradient parameterization based on the surface energy balance. Although this procedure was considered an improvement over the ordinary single layer snowmelt models, it still resulted in discrepancies between modeled and measured snowpack energy contents. In this paper we examine the parameterization of snow surface temperature in single layer snowmelt models from the perspective of heat conduction into a semi-infinite medium. We evaluate the equilibrium gradient approach, the force-restore approach, and a modified force-restore approach. In addition, we evaluate a scheme for representing the penetration of a refreezing front in cold periods following melt. We also introduce a method to adjust effective conductivity to account for the presence of ground near a shallow snow surface. These parameterizations were tested against data from the Central Sierra Snow Laboratory, CA, Utah State University experimental farm, UT, and Subnivean snow laboratory at Niwot Ridge, CO. These tests compare modeled and measured snow surface temperature, snow energy content, snow water equivalent, and snowmelt outflow. We found that with these refinements the model is able to better represent the snowpack energy balance and
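    As a hedged illustration of the equilibrium-gradient idea discussed above (not the UEB formulation itself), the Python sketch below balances simplified surface fluxes against a conduction term k_eff*(T_avg - T_s)/z_e and solves for the surface temperature by bisection; all flux parameterizations and parameter values are placeholders.

        SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m-2 K-4

        def surface_energy_balance(Ts, T_snow_avg, T_air, sw_net, lw_down,
                                   k_eff=0.3, z_e=0.05, h_conv=10.0, emissivity=0.99):
            # Net flux at the snow surface (W m-2); positive means the surface gains energy.
            lw_up = emissivity * SIGMA * Ts ** 4
            turbulent = h_conv * (T_air - Ts)                 # crude bulk exchange term
            conduction = k_eff * (T_snow_avg - Ts) / z_e      # equilibrium-gradient conduction
            return sw_net + emissivity * lw_down - lw_up + turbulent + conduction

        def solve_surface_temperature(*forcing, lo=200.0, hi=300.0):
            # Bisection on the energy balance; assumes a sign change between lo and hi (K).
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if surface_energy_balance(lo, *forcing) * surface_energy_balance(mid, *forcing) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)

        # Bulk snow at -10 C, air at -5 C, weak sun, clear-sky longwave.
        Ts = solve_surface_temperature(263.15, 268.15, 50.0, 250.0)
        print(round(Ts, 2), "K")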

  4. Subsurface Temperature Modeling using Integrated Modeling for Nuclear Reactor Site Assessment in Volcanic Zone

    NASA Astrophysics Data System (ADS)

    Nurhandoko, Bagus Endar B.; Kurniadi, Rizal; Fatiah, Elfa; Rizal Abda, Muhammad; Martha, Rio; Widowati, Sri

    2017-01-01

    Indonesia has a giant volcanic arc, among the largest volcanic arcs in the world. Volcanic activity is therefore one of the main risks for a nuclear site plan, and to reduce this risk, old (inactive) volcanic areas are among the safest candidate sites for a nuclear plant. In this paper, we propose to predict the subsurface temperature profile to assess the condition of the subsurface in a volcanic zone. Geothermal heat flow is an important parameter in modeling subsurface temperature. The subsurface temperature is a volcanic activity parameter that is very important for nuclear site risk assessment. The integrated modeling for predicting the subsurface temperature profile is carried out by combining geothermal heat flow with subsurface profiles obtained from either seismic or gravity measurements. A finite-difference form of Fourier's law is applied to the surface temperature, temperature gradient, geothermal heat flow and thermal conductivity profile to produce the subsurface temperature distribution accurately. This subsurface temperature profile is essential to characterize whether the volcanic zone is still active or inactive. Characterization of volcanic activity is very useful for minimizing the risk of a nuclear site plant in a volcanic zone. An interesting case study of a nuclear site plan in Indonesia is the Mount Muriah site; this method is useful to assess whether Mount Muriah is still active or inactive.
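    A minimal sketch of the finite-difference use of Fourier's law mentioned above (Python; heat flow, conductivities and depths are placeholder values, not those of the Muriah study): with a constant geothermal heat flow q and a depth-varying conductivity k(z), the temperature profile follows T(z) = T_surface + q * integral of dz/k.

        import numpy as np

        def subsurface_temperature(z, k, T_surface, q_heat_flow):
            # Steady-state 1-D Fourier's law, q = -k dT/dz, accumulated with a
            # trapezoidal finite-difference sum of the thermal resistance dz/k.
            inv_k = 1.0 / np.asarray(k, float)
            dz = np.diff(z)
            resistance = np.concatenate([[0.0], np.cumsum(0.5 * (inv_k[1:] + inv_k[:-1]) * dz)])
            return T_surface + q_heat_flow * resistance

        z = np.linspace(0.0, 5000.0, 51)                    # depth, metres
        k = np.where(z < 2000.0, 2.0, 3.0)                  # layered conductivity, W m-1 K-1
        T = subsurface_temperature(z, k, T_surface=25.0, q_heat_flow=0.08)   # 80 mW m-2
        print(round(T[-1], 1), "degrees C at 5 km depth")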

  5. Using Set Covering with Item Sampling to Analyze the Infeasibility of Linear Programming Test Assembly Models

    ERIC Educational Resources Information Center

    Huitzing, Hiddo A.

    2004-01-01

    This article shows how set covering with item sampling (SCIS) methods can be used in the analysis and preanalysis of linear programming models for test assembly (LPTA). LPTA models can construct tests, fulfilling a set of constraints set by the test assembler. Sometimes, no solution to the LPTA model exists. The model is then said to be…

  6. The impact of orbital sampling, monthly averaging and vertical resolution on climate chemistry model evaluation with satellite observations

    NASA Astrophysics Data System (ADS)

    Aghedo, A. M.; Bowman, K. W.; Shindell, D. T.; Faluvegi, G.

    2011-07-01

    Ensemble climate model simulations used for the Intergovernmental Panel on Climate Change (IPCC) assessments have become important tools for exploring the response of the Earth System to changes in anthropogenic and natural forcings. The systematic evaluation of these models through global satellite observations is a critical step in assessing the uncertainty of climate change projections. This paper presents the technical steps required for using nadir sun-synchronous infrared satellite observations for multi-model evaluation and the uncertainties associated with each step. This is motivated by the need to use satellite observations to evaluate climate models. We quantified the implications of the effect of satellite orbit and spatial coverage, the effect of variations in vertical sensitivity as quantified by the observation operator and the impact of averaging the operators for use with monthly-mean model output. We calculated these biases in ozone, carbon monoxide, atmospheric temperature and water vapour by using the output from two global chemistry climate models (ECHAM5-MOZ and GISS-PUCCINI) and the observations from the Tropospheric Emission Spectrometer (TES) instrument on board the NASA-Aura satellite from January 2005 to December 2008. The results show that sampling and monthly averaging of the observation operators produce zonal-mean biases of less than ±3 % for ozone and carbon monoxide throughout the entire troposphere in both models. Water vapour sampling zonal-mean biases were also within the insignificant range of ±3 % (that is ±0.14 g kg-1) in both models. Sampling led to a temperature zonal-mean bias of ±0.3 K over the tropical and mid-latitudes in both models, and up to -1.4 K over the boundary layer in the higher latitudes. Using the monthly average of the temperature and water vapour operators leads to large biases over the boundary layer in the southern-hemispheric higher latitudes and in the upper troposphere, respectively. Up to 8 % bias was
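    The central operation being evaluated, applying a retrieval observation operator to a model profile before comparison, can be sketched as follows (Python; the prior, averaging kernel and profile values are illustrative placeholders rather than actual TES quantities): x_hat = x_a + A (x_model - x_a).

        import numpy as np

        def apply_observation_operator(x_model, x_prior, averaging_kernel):
            # Map a model profile into retrieval space so it carries the same vertical
            # sensitivity as the satellite product.
            return x_prior + averaging_kernel @ (x_model - x_prior)

        levels = 10
        x_a = np.full(levels, 50.0)                          # prior profile (e.g. ozone, ppbv)
        A = np.diag(np.linspace(0.2, 0.9, levels))           # toy averaging kernel
        x_model = np.linspace(30.0, 80.0, levels)            # model profile at the overpass time

        print(apply_observation_operator(x_model, x_a, A))
        # The sampling and averaging biases quantified above arise from using monthly-mean
        # model fields and monthly-averaged operators in place of instantaneous ones.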

  7. The impact of orbital sampling, monthly averaging and vertical resolution on climate chemistry model evaluation with satellite observations

    NASA Astrophysics Data System (ADS)

    Aghedo, A. M.; Bowman, K. W.; Shindell, D. T.; Faluvegi, G.

    2011-03-01

    Ensemble climate model simulations used for the Intergovernmental Panel on Climate Change (IPCC) assessments have become important tools for exploring the response of the Earth System to changes in anthropogenic and natural forcings. The systematic evaluation of these models through global satellite observations is a critical step in assessing the uncertainty of climate change projections. This paper presents the technical steps required for using nadir sun-synchronous infrared satellite observations for multi-model evaluation and the uncertainties associated with each step. This is motivated by the need to use satellite observations to evaluate climate models. We quantified the implications of the effect of satellite orbit and spatial coverage, the effect of variations in vertical sensitivity as quantified by the observation operator and the impact of averaging the operators for use with monthly-mean model output. We calculated these biases in ozone, carbon monoxide, atmospheric temperature and water vapour by using the output from two global chemistry climate models (ECHAM5-MOZ and GISS-PUCCINI) and the observations from the Tropospheric Emission Spectrometer (TES) satellite from January 2005 to December 2008. The results show that sampling and monthly averaging of the observation operators produce biases of less than ±3% for ozone and carbon monoxide throughout the entire troposphere in both models. Water vapour sampling biases were also within the insignificant range of ±3% (that is ±0.14 g kg-1) in both models. Sampling led to a temperature bias of ±0.3 K over the tropical and mid-latitudes in both models, and up to -1.4 K over the boundary layer in the higher latitudes. Using the monthly average of the temperature and water vapour operators leads to large biases over the boundary layer in the southern-hemispheric higher latitudes and in the upper troposphere, respectively. Up to 8% bias was calculated in the upper troposphere water vapour due to monthly

  8. Modeling Tree Shade Effect on Urban Ground Surface Temperature.

    PubMed

    Napoli, Marco; Massetti, Luciano; Brandani, Giada; Petralli, Martina; Orlandini, Simone

    2016-01-01

    There is growing interest in the role that urban forests can play as urban microclimate modifiers. Tree shade and evapotranspiration affect energy fluxes and mitigate microclimate conditions, with beneficial effects on human health and outdoor comfort. The aim of this study was to investigate surface temperature (Ts) variability under the shade of different tree species and to test the capability of a proposed heat transfer model in predicting Ts. Surface temperature data on asphalt and grass under different shading conditions were collected in the Cascine park, Florence, Italy, and were used to test the performance of a one-dimensional heat transfer model integrated with a routine for estimating the effect of plant canopies on surface heat transfer. Shading effects of 10 tree species commonly used in Italian urban settings were determined by considering the infrared radiation and the tree canopy leaf area index (LAI). The results indicate that, on asphalt, Ts was negatively related to the LAI of trees (Ts reduction ranging from 13.8 to 22.8°C). On grass, this relationship was weaker, probably because of the combined effect of shade and grass evapotranspiration on Ts (Ts reduction ranged from 6.9 to 9.4°C). A sensitivity analysis confirmed that other factors linked to soil water content play an important role in the Ts reduction of grassed areas. Our findings suggest that the energy balance model can be effectively used to estimate Ts of urban pavements under different shading conditions and can be applied to the analysis of microclimate conditions of urban green spaces.
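    A small, hedged illustration of how canopy shading typically enters such a surface heat-transfer model (not the specific routine used in the study): the shortwave flux reaching the ground is attenuated roughly exponentially with leaf area index, Beer-Lambert style, with the extinction coefficient treated here as a placeholder.

        import numpy as np

        def shaded_shortwave(sw_above_canopy, lai, extinction=0.5):
            # Fraction of shortwave transmitted through the canopy decays with LAI.
            return sw_above_canopy * np.exp(-extinction * lai)

        sw = 900.0                                  # W m-2 above the canopy at midday
        for lai in (1.0, 2.5, 4.0, 5.5):
            print(lai, round(float(shaded_shortwave(sw, lai)), 1), "W m-2 at the surface")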

  9. Data-Model Comparison of Pliocene Sea Surface Temperature

    NASA Astrophysics Data System (ADS)

    Dowsett, H. J.; Foley, K.; Robinson, M. M.; Bloemers, J. T.

    2013-12-01

    The mid-Piacenzian (late Pliocene) climate represents the most geologically recent interval of long-term average warmth and shares similarities with the climate projected for the end of the 21st century. As such, its fossil and sedimentary record represents a natural experiment from which we can gain insight into potential climate change impacts, enabling more informed policy decisions for mitigation and adaptation. We present the first systematic comparison of Pliocene sea surface temperatures (SST) between an ensemble of eight climate model simulations produced as part of PlioMIP (Pliocene Model Intercomparison Project) and the PRISM (Pliocene Research, Interpretation and Synoptic Mapping) Project mean annual SST field. Our results highlight key regional (mid- to high latitude North Atlantic and tropics) and dynamic (upwelling) situations where there is discord between reconstructed SST and the PlioMIP simulations. These differences can lead to improved strategies for both experimental design and temporal refinement of the palaeoenvironmental reconstruction.

    Figure caption: Scatter plot of multi-model-mean anomalies (squares) and PRISM3 data anomalies (large blue circles) by latitude. Vertical bars on data anomalies represent the variability of warm climate phase within the time-slab at each locality. Small colored circles represent individual model anomalies and show the spread of model estimates about the multi-model-mean. While not directly comparable in terms of the development of the means nor the meaning of variability, this plot provides a first order comparison of the anomalies. Encircled areas are a, PRISM low latitude sites outside of upwelling areas; b, North Atlantic coastal sequences and Mediterranean sites; c, large anomaly PRISM sites from the northern hemisphere. Numbers identify Ocean Drilling Program sites.

  10. Temperature accelerated Monte Carlo (TAMC): a method for sampling the free energy surface of non-analytical collective variables.

    PubMed

    Ciccotti, Giovanni; Meloni, Simone

    2011-04-07

    We introduce a new method to simulate the physics of rare events. The method, an extension of Temperature Accelerated Molecular Dynamics, comes into use when the collective variables introduced to characterize the rare events are either non-analytical or so complex that computing their derivatives is not practical. We illustrate the functioning of the method by studying homogeneous crystallization in a sample of Lennard-Jones particles. The process is studied by introducing a new collective variable that we call the Effective Nucleus Size N. We have computed the free energy barriers and the size of the critical nucleus, which are in agreement with data available in the literature. We have also performed simulations in the liquid domain of the phase diagram. We found a free energy curve monotonically growing with the nucleus size, consistent with the liquid domain.

  11. Low-frequency echo-reduction and insertion-loss measurements from small passive-material samples under ocean environmental temperatures and hydrostatic pressures.

    PubMed

    Piquette, J C; Forsythe, S E

    2001-10-01

    System L is a horizontal tube designed for acoustical testing of underwater materials and devices, and is part of the Low Frequency Facility of the Naval Undersea Warfare Center in Newport, Rhode Island. The tube contains a fill fluid that is composed of a propylene glycol/water mixture. This system is capable of achieving test temperatures in the range of -3 to 40 degrees Centigrade, and hydrostatic test pressures in the range of 40 to 68,950 kPa. A unidirectional traveling wave can be established within the tube over frequencies of 100 to 1750 Hz. Described here is a technique for measuring the (normal-incidence) echo reduction and insertion loss of small passive-material samples that approximately fill the tube diameter of 38 cm. (Presented also is a waveguide model that corrects the measurements when the sample fills the tube diameter incompletely.) The validity of the system L measurements was established by comparison with measurements acquired in a large acoustic pressure-test vessel using a relatively large panel of a candidate material, a subsample of which was subsequently evaluated in system L. The first step in effecting the comparison was to least-squares fit the data acquired from the large panel to a causal material model. The material model was used to extrapolate the panel measurements into the frequency range of system L. The extrapolations show good agreement with the direct measurements acquired in system L.

  12. Estimating species – area relationships by modeling abundance and frequency subject to incomplete sampling

    USGS Publications Warehouse

    Yamaura, Yuichi; Connor, Edward F.; Royle, Andy; Itoh, Katsuo; Sato, Kiyoshi; Taki, Hisatomo; Mishima, Yoshio

    2016-01-01

    Models and data used to describe species–area relationships confound sampling with ecological process as they fail to acknowledge that estimates of species richness arise due to sampling. This compromises our ability to make ecological inferences from and about species–area relationships. We develop and illustrate hierarchical community models of abundance and frequency to estimate species richness. The models we propose separate sampling from ecological processes by explicitly accounting for the fact that sampled patches are seldom completely covered by sampling plots and that individuals present in the sampling plots are imperfectly detected. We propose a multispecies abundance model in which community assembly is treated as the summation of an ensemble of species-level Poisson processes and estimate patch-level species richness as a derived parameter. We use sampling process models appropriate for specific survey methods. We propose a multispecies frequency model that treats the number of plots in which a species occurs as a binomial process. We illustrate these models using data collected in surveys of early-successional bird species and plants in young forest plantation patches. Results indicate that only mature forest plant species deviated from the constant density hypothesis, but the null model suggested that the deviations were too small to alter the form of species–area relationships. Nevertheless, results from simulations clearly show that the aggregate pattern of individual species density–area relationships and occurrence probability–area relationships can alter the form of species–area relationships. The plant community model estimated that only half of the species present in the regional species pool were encountered during the survey. The modeling framework we propose explicitly accounts for sampling processes so that ecological processes can be examined free of sampling artefacts. Our modeling approach is extensible and could be applied
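    To make the separation of ecological and sampling processes concrete, the Python sketch below simulates the data-generating process that such a hierarchical model inverts: species-level Poisson abundances in patches, thinned by partial plot coverage and imperfect detection, with patch-level species richness as a derived quantity (all parameter values are hypothetical; actual fitting would require MCMC or a similar inference engine).

        import numpy as np

        rng = np.random.default_rng(42)

        n_species, n_patches = 30, 25
        patch_area = rng.uniform(0.5, 10.0, n_patches)        # patch areas (ha)
        plot_fraction = 0.1                                   # fraction of each patch covered by plots
        detection_prob = 0.6                                  # per-individual detection probability

        # Community ensemble of species-level densities (individuals per ha).
        density = rng.lognormal(mean=-1.0, sigma=1.0, size=n_species)

        # Ecological process: true abundance per species and patch.
        N_true = rng.poisson(density[:, None] * patch_area[None, :])
        # Sampling process: individuals must fall in a surveyed plot and be detected.
        counts = rng.binomial(N_true, plot_fraction * detection_prob)

        richness_true = (N_true > 0).sum(axis=0)              # derived patch-level species richness
        richness_observed = (counts > 0).sum(axis=0)
        print("mean richness, true vs observed:", richness_true.mean(), richness_observed.mean())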

  13. Models to estimate the minimum ignition temperature of dusts and hybrid mixtures.

    PubMed

    Addai, Emmanuel Kwasi; Gabel, Dieter; Krause, Ulrich

    2016-03-05

    The minimum ignition temperatures (MIT) of hybrid mixtures have been investigated by performing several series of tests in a modified Godbert-Greenwald furnace. Five dusts as well as three perfect gases and three real gases were used in different combinations as test samples. Further, seven mathematical models for predicting the MIT of dust/air mixtures were presented, of which three were chosen for deeper study and comparison with the experimental results, based on the availability of the required input quantities and their applicability. Additionally, two alternative models were proposed to calculate the MIT of hybrid mixtures and were validated against the experimental results. A significant decrease of the minimum ignition temperature of either the gas or the vapor, as well as an increase in the explosion likelihood, could be observed when a small amount of dust, which was either below its minimum explosible concentration or not ignitable by itself at that particular temperature, was mixed with the gas. The various models developed by Cassel, Krishma and Mitsui to predict the MIT of dust were in good agreement with the experimental results, and the two models proposed to predict the MIT of hybrid mixtures were also in agreement with the experimental values.

  14. 3D printed sample holder for in-operando EPR spectroscopy on high temperature polymer electrolyte fuel cells.

    PubMed

    Niemöller, Arvid; Jakes, Peter; Kayser, Steffen; Lin, Yu; Lehnert, Werner; Granwehr, Josef

    2016-08-01

    Electrochemical cells contain electrically conductive components, which causes various problems if such a cell is analyzed during operation in an EPR resonator. The optimum cell design strongly depends on the application and it is necessary to make certain compromises that need to be individually arranged. Rapid prototyping presents a straightforward option to implement a variable cell design that can be easily adapted to changing requirements. In this communication, it is demonstrated that sample containers produced by 3D printing are suitable for EPR applications, with a particular emphasis on electrochemical applications. The housing of a high temperature polymer electrolyte fuel cell (HT-PEFC) with a phosphoric acid doped polybenzimidazole membrane was prepared from polycarbonate by 3D printing. Using a custom glass Dewar, this fuel cell could be operated at temperatures up to 140°C in a standard EPR cavity. The carbon-based gas diffusion layer showed an EPR signal with a characteristic Dysonian line shape, whose evolution could be monitored in-operando in a non-invasive manner.

  15. 3D printed sample holder for in-operando EPR spectroscopy on high temperature polymer electrolyte fuel cells

    NASA Astrophysics Data System (ADS)

    Niemöller, Arvid; Jakes, Peter; Kayser, Steffen; Lin, Yu; Lehnert, Werner; Granwehr, Josef

    2016-08-01

    Electrochemical cells contain electrically conductive components, which causes various problems if such a cell is analyzed during operation in an EPR resonator. The optimum cell design strongly depends on the application and it is necessary to make certain compromises that need to be individually arranged. Rapid prototyping presents a straightforward option to implement a variable cell design that can be easily adapted to changing requirements. In this communication, it is demonstrated that sample containers produced by 3D printing are suitable for EPR applications, with a particular emphasis on electrochemical applications. The housing of a high temperature polymer electrolyte fuel cell (HT-PEFC) with a phosphoric acid doped polybenzimidazole membrane was prepared from polycarbonate by 3D printing. Using a custom glass Dewar, this fuel cell could be operated at temperatures up to 140 °C in a standard EPR cavity. The carbon-based gas diffusion layer showed an EPR signal with a characteristic Dysonian line shape, whose evolution could be monitored in-operando in a non-invasive manner.

  16. A simultaneous derivatization of 3-monochloropropanediol and 1,3-dichloropropanol with hexamethyldisilazane-trimethylsilyl trifluoromethanesulfonate at room temperature for efficient analysis of food samples.

    PubMed

    Lee, Bai Qin; Wan Mohamed Radzi, Che Wan Jasimah Bt; Khor, Sook Mei

    2016-02-05

    This paper reports the application of hexamethyldisilazane-trimethylsilyl trifluoromethanesulfonate (HMDS-TMSOTf) for the simultaneous silylation of 3-monochloro-1,2-propanediol (3-MCPD) and 1,3-dichloropropanol (1,3-DCP) in solid and liquid food samples. 3-MCPD and 1,3-DCP are chloropropanols that have been classified as Group 2B carcinogens. They can be found in heat-processed food, especially when an extended high-temperature treatment is required. However, the current AOAC detection method is time-consuming and expensive. Thus, HMDS-TMSOTf was used in this study to provide a safer and more cost-effective alternative to the HFBI method. Three important steps are involved in the quantification of 3-MCPD and 1,3-DCP: extraction, derivatization and quantification. The optimization of the derivatization process, which focused on the catalyst volume, derivatization temperature, and derivatization time, was performed based on the findings obtained from both Box-Behnken modeling and a real experimental set-up. With the optimized conditions, the newly developed method was used for actual food sample quantification and the results were compared with those obtained via the standard AOAC method. The developed method required fewer samples and reagents, but achieved lower limits of quantification (0.0043 mg L(-1) for 1,3-DCP and 0.0011 mg L(-1) for 3-MCPD) and detection (0.0028 mg L(-1) for 1,3-DCP and 0.0008 mg L(-1) for 3-MCPD). All the detected concentrations are below the maximum tolerable limit of 0.02 mg L(-1). The percentage recovery obtained from food sample analysis was between 83% and 96%. The new procedure was validated against the AOAC method and showed comparable performance. The HMDS-TMSOTf derivatization strategy is capable of simultaneously derivatizing 1,3-DCP and 3-MCPD at room temperature, and it serves as a rapid, sensitive, and accurate analytical method for food sample analysis.
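    As a hedged sketch of the response-surface side of that optimization (not the actual experimental design or data), the Python code below fits a full quadratic model in the three coded factors named above, catalyst volume, derivatization temperature and derivatization time, and locates the predicted optimum on a grid.

        import numpy as np
        from itertools import combinations_with_replacement

        def quadratic_design(X):
            # Intercept, linear, interaction and squared terms for a second-order model.
            n, d = X.shape
            cols = [np.ones(n)] + [X[:, j] for j in range(d)]
            cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(d), 2)]
            return np.column_stack(cols)

        rng = np.random.default_rng(3)
        X = rng.choice([-1.0, 0.0, 1.0], size=(15, 3))        # coded factors: volume, temperature, time
        y = 80 + 5 * X[:, 0] + 3 * X[:, 1] - 4 * X[:, 1] ** 2 + rng.normal(0.0, 0.5, 15)

        beta = np.linalg.lstsq(quadratic_design(X), y, rcond=None)[0]
        grid = np.array(np.meshgrid(*[np.linspace(-1, 1, 21)] * 3)).reshape(3, -1).T
        best = grid[np.argmax(quadratic_design(grid) @ beta)]
        print("predicted optimum in coded units:", best)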

  17. Deterministic Modeling of the High Temperature Test Reactor

    SciTech Connect

    Ortensi, J.; Cogliati, J. J.; Pope, M. A.; Ferrer, R. M.; Ougouag, A. M.

    2010-06-01

    Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability of the Next Generation Nuclear Plant (NGNP) project. In order to examine INL’s current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19 column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green’s function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and the nodal diffusion solver codes. The results from this study show a consistent bias of 2–3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and U-235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the

  18. [Outlier sample discriminating methods for building calibration model in melons quality detecting using NIR spectra].

    PubMed

    Tian, Hai-Qing; Wang, Chun-Guang; Zhang, Hai-Jun; Yu, Zhi-Hong; Li, Jian-Kang

    2012-11-01

    Outlier samples strongly influence the precision of the calibration model in soluble solids content measurement of melons using NIR spectra. According to the possible sources of outlier samples, three methods (predicted concentration residual test; Chauvenet test; leverage and studentized residual test) were used to discriminate these outliers. Nine suspicious outliers were detected from a calibration set that included 85 fruit samples. Considering that the 9 suspicious outlier samples might contain some non-outlier samples, they were returned to the model one by one to see whether they influenced the model and its prediction precision. In this way, 5 samples that were helpful to the model rejoined the calibration set, and a new model was developed with a correlation coefficient (r) of 0.889 and a root mean square error of calibration (RMSEC) of 0.601 degrees Brix. For 35 unknown samples, the root mean square error of prediction (RMSEP) was 0.854 degrees Brix. The performance of this model was better than that of the model developed with no outliers eliminated from the calibration set (r = 0.797, RMSEC = 0.849 degrees Brix, RMSEP = 1.19 degrees Brix), and it was more representative and stable than the model with all 9 samples eliminated from the calibration set (r = 0.892, RMSEC = 0.605 degrees Brix, RMSEP = 0.862 degrees Brix).
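    The leverage and studentized-residual test mentioned above can be illustrated with the hedged Python sketch below (synthetic scores standing in for the spectral variables; the thresholds are common rules of thumb, not those of the paper): samples with unusually high leverage or large studentized residuals are flagged as suspicious.

        import numpy as np

        def leverage_and_studentized_residuals(X, y):
            # Hat-matrix leverages and internally studentized residuals for y ~ X.
            X = np.column_stack([np.ones(len(X)), X])          # add intercept
            H = X @ np.linalg.pinv(X.T @ X) @ X.T
            h = np.diag(H)
            resid = y - H @ y
            sigma2 = resid @ resid / (len(y) - X.shape[1])
            return h, resid / np.sqrt(sigma2 * (1.0 - h))

        rng = np.random.default_rng(7)
        scores = rng.normal(size=(85, 3))                      # e.g. leading latent-variable scores
        ssc = 10 + scores @ np.array([1.0, 0.5, -0.3]) + rng.normal(0.0, 0.3, 85)
        ssc[5] += 3.0                                          # inject one suspicious sample
        h, t = leverage_and_studentized_residuals(scores, ssc)
        flagged = np.where((np.abs(t) > 2.5) | (h > 3 * h.mean()))[0]
        print("suspicious samples:", flagged)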

  19. A moment model for phonon transport at room temperature

    NASA Astrophysics Data System (ADS)

    Mohammadzadeh, Alireza; Struchtrup, Henning

    2017-01-01

    Heat transfer in solids is modeled by deriving the macroscopic equations for phonon transport from the phonon-Boltzmann equation. In these equations, the Callaway model with frequency-dependent relaxation time is considered to describe the Resistive and Normal processes in the phonon interactions. Also, the Brillouin zone is considered to be a sphere, and its diameter depends on the temperature of the system. A simple model to describe phonon interaction with crystal boundary is employed to obtain macroscopic boundary conditions, where the reflection kernel is the superposition of diffusive reflection, specular reflection and isotropic scattering. Macroscopic moments are defined using a polynomial of the frequency and wave vector of phonons. As an example, a system of moment equations, consisting of three directional and seven frequency moments, i.e., 63