Science.gov

Sample records for sample temperature modeling

  1. Recommended Maximum Temperature For Mars Returned Samples

    NASA Technical Reports Server (NTRS)

    Beaty, D. W.; McSween, H. Y.; Czaja, A. D.; Goreva, Y. S.; Hausrath, E.; Herd, C. D. K.; Humayun, M.; McCubbin, F. M.; McLennan, S. M.; Hays, L. E.

    2016-01-01

The Returned Sample Science Board (RSSB) was established in 2015 by NASA to provide expertise from the planetary sample community to the Mars 2020 Project. The RSSB's first task was to address the effect of heating during acquisition and storage of samples on scientific investigations that could be expected to be conducted if the samples are returned to Earth. Sample heating may cause changes that could adversely affect scientific investigations. Previous studies of temperature requirements for returned martian samples fall within a wide range (-73 to 50 °C) and, for mission concepts that have a life-detection component, the recommended threshold was less than or equal to -20 °C. The RSSB was asked by the Mars 2020 project to determine whether a temperature requirement was needed within the range of 30 to 70 °C. There are eight expected temperature regimes to which the samples could be exposed, from the moment that they are drilled until they are placed into a temperature-controlled environment on Earth. Two of those - heating during sample acquisition (drilling) and heating while cached on the Martian surface - potentially subject samples to the highest temperatures. The RSSB focused on the upper temperature limit that Mars samples should be allowed to reach. We considered 11 scientific investigations where thermal excursions may have an adverse effect on the science outcome: (T-1) organic geochemistry, (T-2) stable isotope geochemistry, (T-3) prevention of mineral hydration/dehydration and phase transformation, (T-4) retention of water, (T-5) characterization of amorphous materials, (T-6) putative Martian organisms, (T-7) oxidation/reduction reactions, (T-8) 4He thermochronometry, (T-9) radiometric dating using fission, cosmic-ray or solar-flare tracks, (T-10) analyses of trapped gases, and (T-11) magnetic studies.

  2. Multiphoton cryo microscope with sample temperature control

    NASA Astrophysics Data System (ADS)

    Breunig, H. G.; Uchugonova, A.; König, K.

    2013-02-01

We present a multiphoton microscope system which combines the advantages of multiphoton imaging with precise control of the sample temperature. The microscope provides online insight into temperature-induced changes and effects in plant tissue and animal cells with subcellular resolution during cooling and thawing processes. Image contrast is based on multiphoton fluorescence intensity or fluorescence lifetime in the range from liquid-nitrogen temperature up to +600°C. In addition, microspectra from the imaged regions can be recorded. We present measurement results from plant leaf samples as well as Chinese hamster ovary cells.

  3. Estimation of river and stream temperature trends under haphazard sampling

    USGS Publications Warehouse

    Gray, Brian R.; Lyubchich, Vyacheslav; Gel, Yulia R.; Rogala, James T.; Robertson, Dale M.; Wei, Xiaoqiao

    2015-01-01

Long-term temporal trends in water temperature in rivers and streams are typically estimated under the assumption of evenly spaced space-time measurements. However, sampling times and dates associated with historical water temperature datasets and some sampling designs may be haphazard. As a result, trends in temperature may be confounded with trends in the time or space of sampling, which, in turn, may yield biased trend estimators and thus unreliable conclusions. We address this concern using multilevel (hierarchical) linear models, where time effects are allowed to vary randomly by day and date effects by year. We evaluate the proposed approach by Monte Carlo simulations with imbalanced and sparse data and with confounding by trend in the time and date of sampling. Simulation results indicate unbiased trend estimators, and results from a case study of temperature data from the Illinois River, USA conform to river thermal assumptions. We also propose a new nonparametric bootstrap inference on multilevel models that allows for a relatively flexible and distribution-free quantification of uncertainties. The proposed multilevel modeling approach may be elaborated to accommodate nonlinearities within days and years when sampling times or dates span temperature extremes.
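The confounding described above can be reproduced in a minimal simulation (all numbers invented for illustration): when sampling dates drift toward the seasonal peak over the years, a naive regression of temperature on time absorbs part of the seasonal cycle into the trend estimate, while adding day-of-year covariates, a crude fixed-effect stand-in for the paper's random day and year effects, largely removes the bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily water temperatures over 20 years: a true trend of
# +0.05 degC/yr plus a seasonal cycle (all values illustrative).
years, true_trend = 20, 0.05
day = np.arange(years * 365)
year = day / 365.0
temp = (15.0 + true_trend * year
        + 8.0 * np.sin(2 * np.pi * day / 365.0)
        + rng.normal(0, 0.5, day.size))

# Haphazard sampling: visit dates drift toward the seasonal peak
# (around day 91) in later years, confounding trend with time of sampling.
idx = []
for y in range(years):
    d = np.clip(rng.normal(30 + 3 * y, 20, 12).astype(int), 0, 364)
    idx.extend(y * 365 + d)
idx = np.array(idx)

# Naive estimator: regress temperature on year only -> biased, since
# later samples sit closer to the seasonal peak.
X = np.column_stack([np.ones(idx.size), year[idx]])
naive = np.linalg.lstsq(X, temp[idx], rcond=None)[0][1]

# Season-adjusted estimator: add sin/cos day-of-year covariates,
# a fixed-effect stand-in for the paper's random day/date effects.
doy = day[idx] % 365.0
Xs = np.column_stack([X,
                      np.sin(2 * np.pi * doy / 365),
                      np.cos(2 * np.pi * doy / 365)])
adjusted = np.linalg.lstsq(Xs, temp[idx], rcond=None)[0][1]

print(f"true {true_trend:.3f}  naive {naive:.3f}  adjusted {adjusted:.3f}")
```

The naive slope mixes the warming signal with the drift of sampling dates along the seasonal cycle; the adjusted fit separates the two.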

  4. Cone sampling array models

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Poirson, Allen

    1987-01-01

    A model is described for positioning cones in the retina. Each cone has a circular disk of influence, and the disks are tightly packed outward from the center. This model has three parameters that can vary with eccentricity: the mean radius of the cone disk, the standard deviation of the cone disk radius, and the standard deviation of postpacking jitter. Estimates for these parameters out to 1.6 deg are found by using measurements reported by Hirsch and Hylton (1985) and Hirsch and Miller (1987) of the positions of the cone inner segments of an adult macaque. The estimation is based on fitting measures of variation in local intercone distances, and the fit to these measures is good.
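The flavor of such a mosaic model can be sketched without the full disk-packing step (a simplification of the paper's procedure; all parameter values are hypothetical): a hexagonal cone lattice stands in for the tightly packed disks, with `spacing` playing the role of twice the mean disk radius, and Gaussian post-packing jitter is added before computing a local intercone-distance statistic of the kind used in the fitting.

```python
import numpy as np

rng = np.random.default_rng(1)

def cone_mosaic(n=20, spacing=1.0, jitter_sd=0.1):
    """Hexagonal cone lattice with Gaussian post-packing jitter.

    `spacing` stands in for twice the mean disk radius and `jitter_sd`
    for the positional jitter; both values are hypothetical.
    """
    pts = [(col * spacing + 0.5 * spacing * (row % 2),
            row * spacing * np.sqrt(3) / 2)
           for row in range(n) for col in range(n)]
    pts = np.array(pts)
    return pts + rng.normal(0.0, jitter_sd, pts.shape)

def nn_distance_stats(pts):
    """Mean and std of nearest-neighbor distances, a local
    intercone-distance measure like those used in the fitting."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = d.min(axis=1)
    return nn.mean(), nn.std()

m0, s0 = nn_distance_stats(cone_mosaic(jitter_sd=0.0))
m1, s1 = nn_distance_stats(cone_mosaic(jitter_sd=0.1))
# Jitter spreads the nearest-neighbor distances (s1 > s0 = 0) and pulls
# their mean below the lattice spacing.
print(m0, s0, m1, s1)
```

Fitting would then amount to adjusting the spacing, radius spread, and jitter parameters until such statistics match the measured mosaic.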

  5. Spin models and boson sampling

    NASA Astrophysics Data System (ADS)

    Garcia Ripoll, Juan Jose; Peropadre, Borja; Aspuru-Guzik, Alan

    Aaronson & Arkhipov showed that predicting the measurement statistics of random linear optics circuits (i.e. boson sampling) is a classically hard problem for highly non-classical input states. A typical boson-sampling circuit requires N single photon emitters and M photodetectors, and it is a natural idea to rely on few-level systems for both tasks. Indeed, we show that 2M two-level emitters at the input and output ports of a general M-port interferometer interact via an XY-model with collective dissipation and a large number of dark states that could be used for quantum information storage. More important is the fact that, when we neglect dissipation, the resulting long-range XY spin-spin interaction is equivalent to boson sampling under the same conditions that make boson sampling efficient. This allows efficient implementations of boson sampling using quantum simulators & quantum computers. We acknowledge support from Spanish Mineco Project FIS2012-33022, CAM Research Network QUITEMAD+ and EU FP7 FET-Open Project PROMISCE.

  6. Experiment 2030. EE-2 Temperature Log and Downhole Water Sample

    SciTech Connect

    Grigsby, Charles O.

    1983-07-29

A temperature log and downhole water sample run were conducted in EE-2 on July 13, 1983. The temperature log was taken to show any changes which had occurred in the fracture-to-wellbore intersections as a result of the Experiment 2020 pumping and to locate fluid entries for taking the water sample. The water sample was requested primarily to determine the arsenic concentration in EE-2 fluids (see the memo from C. Grigsby, June 28, 1983, concerning arsenic in EE-3 samples). The temperature log was run using the thermistor in the ESS-6 water sampler.

  7. Proximity effect thermometer for local temperature measurements on mesoscopic samples.

    SciTech Connect

    Aumentado, J.; Eom, J.; Chandrasekhar, V.; Baldo, P. M.; Rehn, L. E.; Materials Science Division; Northwestern Univ; Univ. of Chicago

    1999-11-29

Using the strongly temperature-dependent resistance of a normal-metal wire in proximity to a superconductor, we have been able to measure the local temperature of electrons heated by flowing a direct current (dc) through a metallic wire to within a few tens of millikelvin at low temperatures. By placing two such thermometers at different parts of a sample, we have been able to measure the temperature difference induced by a dc flowing in the samples. This technique may provide a flexible means of making quantitative thermal and thermoelectric measurements on mesoscopic metallic samples.

  8. Calibration of tip and sample temperature of a scanning tunneling microscope using a superconductive sample

    SciTech Connect

    Stocker, Matthias; Pfeifer, Holger; Koslowski, Berndt

    2014-05-15

The temperature of the electrodes is a crucial parameter in virtually all tunneling experiments. The temperature not only controls the thermodynamic state of the electrodes but also causes thermal broadening, which limits the energy resolution. Unfortunately, the construction of many scanning tunneling microscopes incorporates a weak thermal link between tip and sample in order to keep one side movable, so the temperature of that electrode is poorly defined. Here, the authors present a procedure to calibrate the tip temperature by very simple means. The authors use a superconducting sample (Nb) and a standard tip made from W. Due to the asymmetry in the density of states of the superconductor (SC)-normal metal (NM) tunneling junction, the SC temperature predominantly controls the density of states while the NM temperature controls the thermal smearing. By numerically simulating the I-V curves and numerically optimizing the tip temperature and the SC gap width, the tip temperature can be accurately deduced if the sample temperature is known or measurable. In our case, the temperature dependence of the SC gap may serve as a temperature sensor, yielding an accurate NM temperature even if the SC temperature is unknown.
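The simulation step can be illustrated with a minimal numeric sketch of the SC-NM tunneling current (our own toy code, not the authors' fitting routine; the Dynes broadening and all numeric values are assumptions):

```python
import numpy as np

def bcs_dos(E, delta, gamma=1e-4):
    """BCS density of states with a tiny Dynes broadening `gamma`
    (a numerical regularization; the value is our assumption)."""
    Ec = E + 1j * gamma
    return np.abs(np.real(Ec / np.sqrt(Ec**2 - delta**2)))

def fermi(E, kT):
    return 1.0 / (1.0 + np.exp(np.clip(E / kT, -60.0, 60.0)))

def iv_curve(V, delta, kT_tip, kT_sample):
    """Tunneling current of an SC-NM junction, arbitrary units.

    As in the abstract: the superconducting sample enters through the
    gap `delta` and its Fermi function, while the normal-metal tip
    temperature `kT_tip` sets the thermal smearing. Energies are in
    units of the gap.
    """
    E = np.linspace(-10 * delta, 10 * delta, 4001)
    dE = E[1] - E[0]
    dos = bcs_dos(E, delta)
    return np.array([np.sum(dos * (fermi(E - v, kT_tip) - fermi(E, kT_sample))) * dE
                     for v in np.atleast_1d(V)])

# A hotter tip produces measurable sub-gap current at V = 0.5*delta;
# fitting simulated curves like these to data is what pins down the
# tip temperature and the gap width.
cold = iv_curve(0.5, 1.0, kT_tip=0.02, kT_sample=0.02)[0]
hot = iv_curve(0.5, 1.0, kT_tip=0.20, kT_sample=0.02)[0]
print(cold, hot)
```

In an actual calibration, `kT_tip` and `delta` would be free parameters optimized against measured I-V curves.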

  9. Helium Pot System for Maintaining Sample Temperature after Cryocooler Deactivation

    SciTech Connect

    Haid, B J

    2005-01-26

A system for maintaining a sample at a constant temperature below 10 K after deactivating the cooling source is demonstrated. In this system, the cooling source is a GM cryocooler that is joined with the sample through an adaptor that consists of a helium pot and a resistive medium. Upon deactivating the cryocooler, the power applied to a heater located on the sample side of the resistive medium is decreased gradually to maintain an appropriate temperature rise across the resistive medium as the helium pot warms. The temperature is held constant in this manner without the use of solid or liquid cryogens and without mechanically disconnecting the sample from the cooler. Shutting off the cryocooler significantly reduces sample motion that results from vibration and expansion/contraction of the cold-head housing. The reduction in motion permits certain processes that are very sensitive to sample-position stability but are not performed for the entire time the sample is at low temperature. An apparatus was constructed to demonstrate this technique using a 4 K GM cryocooler. Experimental results and theoretical predictions indicate that when the helium pot is pressurized to the working pressure of the cryocooler's helium supply, a sample with continuous heat dissipation of several hundred milliwatts can be maintained at 7 K for several minutes when using an extension that increases the cold-head length by less than 50%.

  10. Benzodiazepine stability in postmortem samples stored at different temperatures.

    PubMed

    Melo, Paula; Bastos, M Lourdes; Teixeira, Helena M

    2012-01-01

    Benzodiazepine (lorazepam, estazolam, chlordiazepoxide, and ketazolam) stability was studied in postmortem blood, bile, and vitreous humor stored at different temperatures over six months. The influence of NaF, in blood and bile samples, was also investigated. A solid-phase extraction technique was used on all the studied samples, and benzodiazepine quantification was performed by high-performance liquid chromatography-diode-array detection. Benzodiazepine concentration remained almost stable in all samples stored at -20°C and -80°C. Estazolam appeared to be a stable benzodiazepine during the six-month study, and ketazolam proved to be the most unstable benzodiazepine. A 100% loss of ketazolam occurred in all samples stored over 1 or 2 weeks at room temperature and over 8 or 12 weeks at 4°C, with the simultaneous detection of diazepam. Chlordiazepoxide suffered complete degradation in all samples, except preserved bile samples, stored at room temperature. Samples stored at 4°C for 6 months had a 29-100% decrease in chlordiazepoxide concentration. The data obtained suggest that results from samples with these benzodiazepines stored long-term should be cautiously interpreted. Bile and vitreous humor proved to be the most advantageous samples in cases where degradation of benzodiazepines by microorganisms may occur.

  11. Helium Pot System for Maintaining Sample Temperature after Cryocooler Deactivation

    NASA Astrophysics Data System (ADS)

    Haid, B. J.

    2006-04-01

    A system for maintaining a sample at a constant temperature below 10 K after deactivating the cooling source is demonstrated. In this system, the cooling source is a 4 K GM cryocooler that is joined with the sample through an extension that consists of a helium pot and a thermal resistance. Upon stopping the cryocooler, the power applied to a heater located on the sample side of the thermal resistance is decreased gradually to maintain an appropriate temperature rise across the thermal resistance as the helium pot warms. The sample temperature is held constant in this manner without the use of solid or liquid cryogens and without mechanically disconnecting the sample from the cooler. Shutting off the cryocooler significantly reduces sample motion that results from vibration and expansion/contraction of the cold-head housing. The reduction in motion permits certain procedures that are very sensitive to sample position stability, but are performed with limited duration. A proof-of-concept system was built and operated with the helium pot pressurized to the cryocooler's charge pressure. A sample with 200 mW of continuous heat dissipation was maintained at 7 K while the cryocooler operated intermittently with a duty cycle of 9.5 minutes off and 20 minutes on.

  12. Rotating sample magnetometer for cryogenic temperatures and high magnetic fields.

    PubMed

    Eisterer, M; Hengstberger, F; Voutsinas, C S; Hörhager, N; Sorta, S; Hecher, J; Weber, H W

    2011-06-01

    We report on the design and implementation of a rotating sample magnetometer (RSM) operating in the variable temperature insert (VTI) of a cryostat equipped with a high-field magnet. The limited space and the cryogenic temperatures impose the most critical design parameters: the small bore size of the magnet requires a very compact pick-up coil system and the low temperatures demand a very careful design of the bearings. Despite these difficulties the RSM achieves excellent resolution at high magnetic field sweep rates, exceeding that of a typical vibrating sample magnetometer by about a factor of ten. In addition the gas-flow cryostat and the high-field superconducting magnet provide a temperature and magnetic field range unprecedented for this type of magnetometer.

  13. Parametric models for samples of random functions

    SciTech Connect

    Grigoriu, M.

    2015-09-15

A new class of parametric models, referred to as sample parametric models, is developed for random elements; these models match samples of the target elements rather than their first two moments and/or other global properties. The models can be used to characterize, e.g., material properties at small scale, in which case their samples represent microstructures of material specimens selected at random from a population. The samples of the proposed models are elements of finite-dimensional vector spaces spanned by samples, eigenfunctions of Karhunen–Loève (KL) representations, or modes of singular value decompositions (SVDs). The implementation of sample parametric models requires knowledge of the probability laws of target random elements. Numerical examples including stochastic processes and random fields are used to demonstrate the construction of sample parametric models, assess their accuracy, and illustrate how these models can be used to solve stochastic equations efficiently.
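A minimal sketch of one such construction, assuming a Gaussian target and independent Gaussian mode coefficients (the paper's models treat the probability law of the target more carefully):

```python
import numpy as np

rng = np.random.default_rng(2)

# Training set: 200 realizations of a smooth Gaussian process on a
# 64-point grid (an invented target; any sample set could be used).
n_samples, n_pts = 200, 64
x = np.linspace(0.0, 1.0, n_pts)
cov = np.exp(-(x[:, None] - x[None, :])**2 / 0.08)
samples = rng.multivariate_normal(np.zeros(n_pts), cov, n_samples)

# Karhunen-Loeve basis from the sample covariance: eigenvectors are
# the modes, eigenvalues the variance carried by each mode.
w, V = np.linalg.eigh(np.cov(samples, rowvar=False))
order = np.argsort(w)[::-1]
w, V = w[order], V[:, order]

# Truncated sample parametric model: keep the m dominant modes and
# draw independent Gaussian coefficients with the empirical variances.
m = 10
def draw(n):
    xi = rng.normal(size=(n, m)) * np.sqrt(w[:m])
    return xi @ V[:, :m].T

new = draw(500)
explained = w[:m].sum() / w.sum()
print(f"{explained:.1%} of sample variance captured by {m} modes")
```

New realizations live in the finite-dimensional span of the KL modes, mirroring the vector-space construction described in the abstract.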

  14. Ultrasound absorption measurements in rock samples at low temperatures

    NASA Technical Reports Server (NTRS)

    Herminghaus, C.; Berckhemer, H.

    1974-01-01

A new technique, comparable to the reverberation method in room acoustics, is described. It allows Q measurements on rock samples of arbitrary shape in the frequency range of 50 to 600 kHz in vacuum (0.1 mtorr) and at low temperatures (+20 to -180 °C). The method was developed in particular to investigate rock samples under lunar conditions. Ultrasound absorption has been measured on volcanics, breccia, gabbros, feldspar, and quartz of different grain size and texture, yielding the following results: evacuation raises Q mainly by lowering the humidity in the rock; in a dry, compact rock the effect of evacuation is small. With decreasing temperature, Q generally increases. Between +20 and -30 °C, Q does not change much. With further decrease of temperature, distinct anomalies appear in many cases, where Q becomes frequency dependent.

  15. MuSTAR MD: multi-scale sampling using temperature accelerated and replica exchange molecular dynamics.

    PubMed

    Yamamori, Yu; Kitao, Akio

    2013-10-14

A new and efficient conformational sampling method, MuSTAR MD (Multi-scale Sampling using Temperature Accelerated and Replica exchange Molecular Dynamics), is proposed to calculate the free energy landscape on a space spanned by a set of collective variables. This method is an extension of temperature accelerated molecular dynamics and can also be considered a variation of replica-exchange umbrella sampling. In MuSTAR MD, each replica contains an all-atom fine-grained model, at least one coarse-grained model, and a model defined by the collective variables that interacts with the other models in the same replica through coupling energy terms. The coarse-grained model is introduced to drive efficient sampling of large conformational space, and the fine-grained model serves to conduct more accurate conformational sampling. The collective variable model serves not only to mediate the coarse- and fine-grained models, but also to enhance sampling efficiency by temperature acceleration. We have applied this method to Ala-dipeptide and examined the sampling efficiency of MuSTAR MD in the free energy landscape calculation compared to that for replica exchange molecular dynamics, replica exchange umbrella sampling, temperature accelerated molecular dynamics, and conventional MD. The results clearly indicate the advantage of sampling a relatively high-energy conformational space, which is not sufficiently sampled with other methods. This feature is important in the investigation of transition pathways that cross energy barriers. MuSTAR MD was also applied to Met-enkephalin as a test case in which two Gō-like models were employed as the coarse-grained model.
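The replica-exchange ingredient can be illustrated by its core acceptance rule (a generic Metropolis swap criterion, not code from the MuSTAR MD implementation):

```python
import math

def swap_accept(E_i, E_j, T_i, T_j):
    """Metropolis criterion for exchanging the configurations of two
    replicas at temperatures T_i and T_j (k_B = 1): accept with
    probability min(1, exp(-(1/T_i - 1/T_j) * (E_j - E_i)))."""
    delta = (1.0 / T_i - 1.0 / T_j) * (E_j - E_i)
    return min(1.0, math.exp(-delta))

# If the hotter replica (T_j = 2) holds the lower-energy configuration,
# the swap is always accepted; this is how barrier crossings made at
# high temperature propagate down to the target temperature.
print(swap_accept(E_i=2.0, E_j=1.0, T_i=1.0, T_j=2.0))  # -> 1.0
```

In MuSTAR MD the exchanged energies would also include the coupling terms between the fine-grained, coarse-grained, and collective-variable models, so the rule above is only the generic skeleton.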

  16. Modeling uncertainty: quicksand for water temperature modeling

    USGS Publications Warehouse

    Bartholow, John M.

    2003-01-01

Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper will address important components of uncertainty in modeling water temperatures, and discuss several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the references, are meant to supplement the presentation given at this conference.

  17. Fast temperature spectrometer for samples under extreme conditions.

    PubMed

    Zhang, Dongzhou; Jackson, Jennifer M; Zhao, Jiyong; Sturhahn, Wolfgang; Alp, E Ercan; Toellner, Thomas S; Hu, Michael Y

    2015-01-01

We have developed a multi-wavelength Fast Temperature Readout (FasTeR) spectrometer to capture a sample's transient temperature fluctuations and reduce uncertainties in melting-temperature determination. Without sacrificing accuracy, FasTeR features a fast readout rate (about 100 Hz), high sensitivity, large dynamic range, and a well-constrained focus. Complementing a charge-coupled device spectrometer, FasTeR consists of an array of photomultiplier tubes and optical dichroic filters. The temperatures determined by FasTeR outside of the vicinity of melting are, generally, in good agreement with results from the charge-coupled device spectrometer. Near melting, FasTeR is capable of capturing transient temperature fluctuations, at least on the order of 300 K/s. A software tool, SIMFaster, has been developed to simulate FasTeR and assess design configurations. FasTeR is especially suitable for temperature determinations that utilize ultra-fast techniques under extreme conditions. Working in parallel with the laser-heated diamond-anvil cell, synchrotron Mössbauer spectroscopy, and X-ray diffraction, we have applied the FasTeR spectrometer to measure the melting temperature of 57Fe0.9Ni0.1 at high pressure.

  18. Accurate sampling of PCDD/F in high temperature flue-gas using cooled sampling probes.

    PubMed

    Phan, Duong Ngoc Chau; Weidemann, Eva; Lundin, Lisa; Marklund, Stellan; Jansson, Stina

    2012-08-01

In a laboratory-scale combustion reactor, flue-gas samples were collected at two temperatures in the post-combustion zone, 700°C and 400°C, using two different water-cooled sampling probes. The probes were the cooled probe described in the European Standard method EN-1948:1, referred to as the original probe, and a modified probe that contained a salt/ice mixture to assist the cooling, referred to as the sub-zero probe. To determine the efficiency of the cooling probes, internal temperature measurements were recorded at 5 cm intervals inside the probes. Flue-gas samples were analyzed for polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs). Samples collected at 700°C using the original cooling probe showed higher concentrations of PCDD/Fs than samples collected using the sub-zero probe. No significant differences were observed between samples collected at 400°C. The results indicated that artifact formation of PCDD/Fs readily occurs during flue-gas sampling at high temperatures if the cooling within the probe is insufficient, as found for the original probe at 700°C. It was also shown that this problem could be alleviated by using probes with an enhanced cooling capacity, such as the sub-zero probe. Although this may not affect samples collected for regulatory purposes in exit gases, it is of great importance for research conducted in the high-temperature region of the post-combustion zone.

  19. Dual-temperature acoustic levitation and sample transport apparatus

    NASA Technical Reports Server (NTRS)

    Trinh, E.; Robey, J.; Jacobi, N.; Wang, T.

    1986-01-01

The properties of a dual-temperature resonant chamber to be used for acoustic levitation and positioning have been theoretically and experimentally studied. The predictions of a first-order dissipationless treatment of the generalized wave equation for an inhomogeneous medium are in close agreement with experimental results for the temperature dependence of the resonant-mode spectrum and the acoustic pressure distribution, although the measured magnitude of the pressure variations does not correlate well with the calculated one. Ground-based levitation of low-density samples has been demonstrated at 800 °C, where steady-state forces up to 700 dyn were generated.

  20. Fluorescence temperature sensing on rotating samples in the cryogenic range

    NASA Astrophysics Data System (ADS)

    Bresson, F.; Devillers, R.

    1999-07-01

A surface temperature measurement technique for rotating samples is proposed, based on the concept of fluorescence thermometry. Fluorescence and phosphorescence have been applied in thermometry for ambient and high-temperature measurements, but not in the cryogenic domain, which is usually covered by thermocouple- or platinum-resistance thermometers. However, the thermal behavior of Yb2+ ions in fluoride matrices appears promising for thermometry in the range 20-120 K. We present here a remote sensing method which uses the fluorescence behavior of Yb2+ ion-doped fluoride crystals: the fluorescence decay time of such a crystal is related to its temperature. Since we developed a specific sol-gel process (OrMoSils) to make strongly adherent fluorescent layers, we applied the fluorescence thermometry method to rotating-object surface temperature measurement. The main application is the monitoring of the surface temperature of ball bearings or the turbopump axis in liquid-propulsion rocket engines. Our method is presented and discussed, and we give some experimental results. An accurate calibration of the decay time of CaF2:Yb2+ versus temperature is also given.
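The decay-time readout can be sketched as follows (a toy calibration with invented numbers; the real CaF2:Yb2+ calibration curve comes from measurement):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical calibration of decay time vs temperature over 20-120 K
# (invented values, chosen only to be monotonic like a real curve).
T_cal = np.array([20.0, 40.0, 60.0, 80.0, 100.0, 120.0])   # K
tau_cal = np.array([5.0, 4.2, 3.1, 2.0, 1.2, 0.7])         # ms

def fit_decay_time(t, signal):
    """Log-linear least-squares fit of I(t) = A * exp(-t / tau)."""
    slope, _ = np.polyfit(t, np.log(signal), 1)
    return -1.0 / slope

def temperature_from_tau(tau):
    """Invert the monotonic calibration curve by interpolation
    (np.interp needs increasing abscissae, hence the reversal)."""
    return np.interp(tau, tau_cal[::-1], T_cal[::-1])

# Simulated decay recorded from the rotating surface at an unknown
# temperature near 60 K (tau = 3.1 ms), with 1% multiplicative noise.
t = np.linspace(0.0, 10.0, 200)                      # ms
signal = np.exp(-t / 3.1) * (1.0 + rng.normal(0.0, 0.01, t.size))
tau_hat = fit_decay_time(t, signal)
T_hat = temperature_from_tau(tau_hat)
print(tau_hat, T_hat)  # tau close to 3.1 ms -> T close to 60 K
```

Because only the decay time carries the temperature information, the readout is insensitive to the intensity modulation caused by the sample's rotation.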

  1. A low temperature scanning force microscope for biological samples

    SciTech Connect

    Gustafsson, M. G.L.

    1993-05-01

An SFM has been constructed that is capable of operating at 143 K. Two contributions to SFM technology are described: a new method of fabricating tips, and new designs of SFM springs that significantly lower the noise level. The SFM has been used to image several biological samples (including collagen, ferritin, RNA, and purple membrane) at 143 K and room temperature. No improvement in resolution resulted from 143 K operation; several possible reasons for this are discussed. Sharper tips may help. The 143 K SFM will allow the study of new categories of samples, such as those prepared by freeze-frame, single molecules (temperature dependence of mechanical properties), etc. The SFM was used to cut single collagen molecules into segments with a precision of ≤ 10 nm.

  2. Advances in downhole sampling of high temperature solutions

    SciTech Connect

    Bayhurst, G.K.; Janecky, D.R.

    1991-01-01

A fluid sampler capable of sampling hot and/or deep wells has been developed at Los Alamos National Laboratory. In collaboration with Leutert Instruments, an off-the-shelf sampler design was modified to meet gas-tight and minimal chemical-reactivity/contamination specifications for use in geothermal wells and deep-ocean drillholes. This downhole sampler has been routinely used at temperatures up to 300°C and hole depths of greater than 5 km. We have tested this sampler in various continental wells, including Valles Caldera VC-2a and VC-2b, German KTB, Cajon Pass, and Yellowstone Y-10. Both the standard commercial and enhanced samplers have also been used to obtain samples from a range of depths in the Ocean Drilling Program's hole 504B and during recent mid-ocean ridge drilling efforts. The sampler has made it possible to collect samples at temperatures and conditions beyond the limits of other tools, with the added advantage of chemical corrosion resistance.

  3. A Simple Model for Solidification of Undercooled Metallic Samples

    NASA Astrophysics Data System (ADS)

    Saleh, Abdala M.; Clemente, Roberto A.

    2004-06-01

A simple model for reproducing temperature recalescence behaviour in spherical undercooled liquid metallic samples undergoing crystallization transformations is presented. The model assumes a constant heat-extraction rate; a uniform but time-dependent temperature distribution inside the sample (even after the start of crystallization); a classical temperature-dependent nucleation rate (including contributions from phase-dependent specific heats and a catalytic factor modeling heterogeneously distributed impurities); and a solidified-grain interface velocity proportional to the temperature undercooling. Different assumptions are considered for the sample transformed fraction as a function of the extended volume of nuclei: the classical Kolmogoroff-Johnson-Mehl-Avrami one (corresponding to a random distribution of nuclei), the Austin-Rickett one (corresponding to a somewhat clustered distribution), and an empirical one corresponding to some ordering in the distribution of nuclei. As an example of application, a published experimental temperature curve for a zirconium sample in the electromagnetic containerless facility TEMPUS, obtained during the 2nd International Microgravity Laboratory Mission in 1994, is modeled. Some thermo-physical parameters of interest for Zr are discussed.
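The two classical transformed-fraction relations mentioned above can be written down directly (a minimal isothermal sketch; the paper couples these to temperature-dependent nucleation and growth):

```python
import numpy as np

def kjma(x_ext):
    """Kolmogoroff-Johnson-Mehl-Avrami relation (randomly placed
    nuclei): transformed fraction X = 1 - exp(-X_ext)."""
    return 1.0 - np.exp(-x_ext)

def austin_rickett(x_ext):
    """Austin-Rickett relation (clustered nuclei, earlier mutual
    impingement): X = X_ext / (1 + X_ext)."""
    return x_ext / (1.0 + x_ext)

# Isothermal toy case: constant nucleation and growth rates in 3-D give
# an extended volume growing as t^4 (the rate constant is made up).
t = np.linspace(0.0, 3.0, 301)
x_ext = (0.8 * t)**4
X_random = kjma(x_ext)
X_clustered = austin_rickett(x_ext)

# For the same extended volume the clustered variant always transforms
# less, since its nuclei overlap sooner.
print(kjma(2.0), austin_rickett(2.0))
```

In the full model, `x_ext(t)` would instead come from integrating the temperature-dependent nucleation rate and the undercooling-proportional interface velocity along the sample's thermal history.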

  4. Functional Error Models to Accelerate Nested Sampling

    NASA Astrophysics Data System (ADS)

    Josset, L.; Elsheikh, A. H.; Demyanov, V.; Lunati, I.

    2014-12-01

The main challenge in groundwater problems is the reliance on large numbers of unknown parameters with a wide range of associated uncertainties. To translate this uncertainty into quantities of interest (for instance, the concentration of pollutant in a drinking well), a large number of forward flow simulations is required. To make the problem computationally tractable, Josset et al. (2013, 2014) introduced the concept of functional error models. It consists of two elements: a proxy model that is cheaper to evaluate than the full-physics flow solver, and an error model to account for the missing physics. The coupling of the proxy model and the error model provides reliable predictions that approximate the full-physics model's responses. The error model is tailored to the problem at hand by building it for the question of interest. It follows a typical approach in machine learning where both the full-physics and proxy models are evaluated for a training set (a subset of realizations) and the set of responses is used to construct the error model using functional data analysis. Once the error model is devised, a prediction of the full-physics response for a new geostatistical realization can be obtained by computing the proxy response and applying the error model. We propose the use of functional error models in a Bayesian inference context by combining them with Nested Sampling (Skilling 2006; Elsheikh et al. 2013, 2014). Nested Sampling offers a means to compute the Bayesian evidence by transforming the multidimensional integral into a 1-D integral. The algorithm is simple: starting with an active set of samples, at each iteration the sample with the lowest likelihood is set aside and replaced by a sample of higher likelihood. The main challenge is to find this sample of higher likelihood. We suggest a new approach: first the active set is sampled, both the proxy and full-physics models are run, and the functional error model is built. Then, at each iteration of the Nested
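The Nested Sampling loop described above can be sketched on a toy problem (our illustration; the likelihood stands in for the full-physics solver, and the constrained draw is where the proxy plus error model would save computation):

```python
import numpy as np

rng = np.random.default_rng(4)

def log_like(theta):
    """Toy Gaussian likelihood on the unit square, standing in for the
    expensive full-physics flow simulation."""
    return -0.5 * np.sum(((theta - 0.5) / 0.1)**2, axis=-1)

def nested_sampling(n_live=100, n_iter=800):
    """Minimal nested sampling (Skilling): shrink the prior volume by
    replacing the worst live point with a new point drawn above its
    likelihood. Here the constrained draw is brute-force rejection
    sampling; the paper's idea is to answer this 'find a better point'
    query cheaply with the proxy + functional error model."""
    live = rng.uniform(size=(n_live, 2))
    live_ll = log_like(live)
    log_Z = -np.inf
    for i in range(n_iter):
        worst = int(np.argmin(live_ll))
        # prior-volume shell: X_i ~ exp(-i/n_live), w_i = X_i - X_{i+1}
        log_w = -i / n_live + np.log(1.0 - np.exp(-1.0 / n_live))
        log_Z = np.logaddexp(log_Z, log_w + live_ll[worst])
        while True:  # likelihood-constrained replacement draw
            cand = rng.uniform(size=(256, 2))
            ok = log_like(cand) > live_ll[worst]
            if ok.any():
                j = int(np.argmax(ok))
                live[worst], live_ll[worst] = cand[j], log_like(cand[j])
                break
    # remaining mass carried by the final live set
    log_Z = np.logaddexp(log_Z,
                         -n_iter / n_live + np.log(np.mean(np.exp(live_ll))))
    return log_Z

# Analytic evidence for this toy problem is (0.1*sqrt(2*pi))^2, i.e.
# log Z ~ -2.77; the estimate should land within ~sqrt(H/n_live).
est = nested_sampling()
print(est)
```

Each iteration discards one sample and needs one higher-likelihood replacement, which is exactly the step the functional error model is meant to accelerate.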

  5. Tissue Sampling Guides for Porcine Biomedical Models.

    PubMed

    Albl, Barbara; Haesner, Serena; Braun-Reichhart, Christina; Streckel, Elisabeth; Renner, Simone; Seeliger, Frank; Wolf, Eckhard; Wanke, Rüdiger; Blutke, Andreas

    2016-04-01

This article provides guidelines for organ and tissue sampling adapted to porcine animal models in translational medical research. Detailed protocols for the determination of sampling locations and numbers, as well as recommendations on the orientation, size, and trimming direction of samples from ∼50 different porcine organs and tissues, are provided in the Supplementary Material. The proposed sampling protocols include the generation of samples suitable for subsequent qualitative and quantitative analyses, including cryohistology, paraffin, and plastic histology; immunohistochemistry; in situ hybridization; electron microscopy; and quantitative stereology, as well as molecular analyses of DNA, RNA, proteins, metabolites, and electrolytes. With regard to the planned extent of sampling efforts, time, and personnel expenses, and depending upon the scheduled analyses, different protocols are provided. These protocols are adjusted for (I) routine screenings, as used in general toxicity studies or in analyses of gene expression patterns or histopathological organ alterations, (II) advanced analyses of single organs/tissues, and (III) large-scale sampling procedures to be applied in biobank projects. Providing a robust reference for studies of porcine models, the described protocols will ensure the efficiency of sampling, the systematic recovery of high-quality samples representing the entire organ or tissue, and the intra-/interstudy comparability and reproducibility of results.

  6. High temperature furnace modeling and performance verifications

    NASA Technical Reports Server (NTRS)

    Smith, James E., Jr.

    1992-01-01

    Analytical, numerical, and experimental studies were performed on two classes of high temperature materials processing sources for their potential use as directional solidification furnaces. The research concentrated on a commercially available high temperature furnace using a zirconia ceramic tube as the heating element and an Arc Furnace based on a tube welder. The first objective was to assemble the zirconia furnace and construct the parts needed to successfully perform experiments. The second objective was to evaluate the zirconia furnace's performance as a directional solidification furnace element. The third objective was to establish a database on materials used in the furnace construction, with particular emphasis on emissivities, transmissivities, and absorptivities as functions of wavelength and temperature. One-dimensional and two-dimensional spectral radiation heat transfer models were developed for comparison with standard modeling techniques and were used to predict wall and crucible temperatures. The fourth objective addressed the development of a SINDA model for the Arc Furnace, which was used to design sample holders and to estimate cooling media temperatures for steady-state operation of the furnace. The fifth objective addressed the initial performance evaluation of the Arc Furnace and associated equipment for directional solidification. Results for each of these objectives are presented.

  7. An enhanced compost temperature sampling framework: case study of a covered aerated static pile.

    PubMed

    Isobaev, Pulat; Bouferguene, Ahmed; Wichuk, Kristine M; McCartney, Daryl

    2014-07-01

    Spatial and temporal temperature variations exist in a compost pile. This study demonstrates that systematic temperature sampling of a compost pile, as is widely done, tends to underestimate these variations, which in turn may lead to false conclusions about the sanitary condition of the final product. To address these variations, a proper scheme of temperature sampling needs to be used. A comparison of the results from 21 temperature data loggers randomly introduced into a compost pile with those from 20 systematically introduced data loggers showed that the mean, maximum and minimum temperatures in both methods were very similar in their magnitudes. Overall, greater temperature variation was captured using the random method. In addition, 95% of the probes introduced systematically had attained thermophilic sanitation conditions (≥ 55°C for three consecutive days), as compared to 76% from the group that was randomly introduced. Furthermore, it was found that, from a statistical standpoint, readings from at least 47 randomly introduced temperature loggers are necessary to capture the observed temperature variation. Lastly, turning the compost pile was found to increase the chance that any random particle would be exposed to temperatures ≥ 55°C for three consecutive days. One turning was done during the study, and it increased the probability from 76% to nearly 85%. Using a Markov chain model, it was calculated that if five turnings had been implemented on the evaluated technology, the likelihood that every particle would experience the required time-temperature condition would be 98%. PMID:24767412
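
    The reported turning probabilities can be reproduced with a minimal two-state absorbing Markov chain. The abstract does not give the actual transition matrix, so the assumption here (labeled in the comments) is that each turning gives a not-yet-sanitized particle an independent chance of reaching the hot zone, calibrated from the reported 76% → 85% jump.

    ```python
    def sanitation_probability(p_initial, p_per_turn, n_turnings):
        """Two-state absorbing chain: once a particle has met the time-temperature
        requirement it stays 'sanitized'; each turning gives an unsanitized
        particle an independent chance p_per_turn of meeting it (an assumed
        simplification, not the paper's stated transition matrix)."""
        return 1.0 - (1.0 - p_initial) * (1.0 - p_per_turn) ** n_turnings

    p0 = 0.76                                   # before any turning (reported)
    p_turn = 1.0 - (1.0 - 0.85) / (1.0 - p0)    # calibrated from the 76% -> 85% jump
    print(round(sanitation_probability(p0, p_turn, 1), 2))   # 0.85 by construction
    print(round(sanitation_probability(p0, p_turn, 5), 2))   # 0.98, matching the abstract
    ```

    That this simple chain reproduces the reported 98% after five turnings suggests the paper's calculation follows essentially this geometric-decay structure.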

  8. Adaptive importance sampling for network growth models

    PubMed Central

    Holmes, Susan P.

    2016-01-01

    Network Growth Models such as Preferential Attachment and Duplication/Divergence are popular generative models with which to study complex networks in biology, sociology, and computer science. However, analyzing them within the framework of model selection and statistical inference is often complicated and computationally difficult, particularly when comparing models that are not directly related or nested. In practice, ad hoc methods are often used with uncertain results. If possible, the use of standard likelihood-based statistical model selection techniques is desirable. With this in mind, we develop an Adaptive Importance Sampling algorithm for estimating likelihoods of Network Growth Models. We introduce the use of the classic Plackett-Luce model of rankings as a family of importance distributions. Updates to importance distributions are performed iteratively via the Cross-Entropy Method with an additional correction for degeneracy/over-fitting inspired by the Minimum Description Length principle. This correction can be applied to other estimation problems using the Cross-Entropy method for integration/approximate counting, and it provides an interpretation of Adaptive Importance Sampling as iterative model selection. Empirical results for the Preferential Attachment model are given, along with a comparison to an alternative established technique, Annealed Importance Sampling. PMID:27182098
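
    A minimal sketch of the Plackett-Luce rankings the authors use as importance distributions; `pl_sample` and `pl_log_prob` are illustrative names, not the paper's code. Sequential choice with probability proportional to weight gives both a sampler and a tractable log-density, which is exactly what an importance distribution needs.

    ```python
    import math
    import random
    from itertools import permutations

    def pl_log_prob(ranking, weights):
        """Log-probability of a full ranking under the Plackett-Luce model:
        at each stage the next item is chosen with probability proportional
        to its weight among the items not yet ranked."""
        lp, remaining = 0.0, list(ranking)
        while remaining:
            lp += math.log(weights[remaining[0]] / sum(weights[i] for i in remaining))
            remaining.pop(0)
        return lp

    def pl_sample(weights, rng):
        """Draw a ranking by sequential sampling without replacement."""
        items, ranking = list(range(len(weights))), []
        while items:
            total = sum(weights[i] for i in items)
            r, acc = rng.random() * total, 0.0
            for i in items:
                acc += weights[i]
                if r <= acc:
                    ranking.append(i)
                    items.remove(i)
                    break
        return ranking

    weights = [2.0, 1.0, 3.0]
    # Sanity check: the probabilities of all 3! rankings sum to one.
    total = sum(math.exp(pl_log_prob(p, weights)) for p in permutations(range(3)))
    print(round(total, 6))                              # 1.0
    print(sorted(pl_sample(weights, random.Random(0)))) # a permutation of [0, 1, 2]
    ```

    In the network-growth setting, a ranking drawn this way would stand in for a candidate node-arrival order whose importance weight is then `exp(pl_log_prob(...))` relative to the model likelihood.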

  9. New high temperature plasmas and sample introduction systems for analytical atomic emission and mass spectrometry

    SciTech Connect

    Montaser, A.

    1992-01-01

    New high temperature plasmas and new sample introduction systems are explored for rapid elemental and isotopic analysis of gases, solutions, and solids using mass spectrometry and atomic emission spectrometry. Emphasis was placed on atmospheric pressure He inductively coupled plasmas (ICP) suitable for atomization, excitation, and ionization of elements; simulation and computer modeling of plasma sources with potential for use in spectrochemical analysis; spectroscopic imaging and diagnostic studies of high temperature plasmas, particularly He ICP discharges; and development of new, low-cost sample introduction systems, and examination of techniques for probing the aerosols over a wide range.

  10. Current Sharing Temperature Test and Simulation with GANDALF Code for ITER PF2 Conductor Sample

    NASA Astrophysics Data System (ADS)

    Li, Shaolei; Wu, Yu; Liu, Bo; Weng, Peide

    2011-10-01

    A cable-in-conduit conductor (CICC) sample of the PF2 coil for ITER was tested in the SULTAN facility. According to the test results, the CICC sample exhibited stable performance with regard to the current sharing temperature. Under the typical PF2 operational conditions of a current of 45 kA, a magnetic field of 4 T and a temperature of 5 K, the measured current sharing temperature is 6.71 K, giving a temperature margin of 1.71 K. For comparison, a thermal-hydraulic analysis of the PF2 conductor was carried out using the GANDALF code in a 1-D model; the result is consistent with the test.

  11. Modeling abundance using hierarchical distance sampling

    USGS Publications Warehouse

    Royle, Andy; Kery, Marc

    2016-01-01

    In this chapter, we provide an introduction to classical distance sampling ideas for point and line transect data, and for continuous and binned distance data. We introduce the conditional and the full likelihood, and we discuss Bayesian analysis of these models in BUGS using the idea of data augmentation, which we discussed in Chapter 7. We then extend the basic ideas to the problem of hierarchical distance sampling (HDS), where we have multiple point or transect sample units in space (or possibly in time). The benefit of HDS in practice is that it allows us to directly model spatial variation in population size among these sample units. This is a preeminent concern of most field studies that use distance sampling methods, but it is not a problem that has received much attention in the literature. We show how to analyze HDS models in both the unmarked package and in the BUGS language for point and line transects, and for continuous and binned distance data. We provide a case study of HDS applied to a survey of the island scrub-jay on Santa Cruz Island, California.
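
    A minimal sketch of the distance-sampling idea the chapter builds on, under simplifying assumptions (line transect, half-normal detectability, truncation far enough out to be negligible); it is not the unmarked or BUGS code the chapter presents.

    ```python
    import math
    import random

    rng = random.Random(7)
    sigma_true, w = 20.0, 100.0   # half-normal scale and truncation distance (m), illustrative

    # Simulate detections along a line transect: animals at uniform perpendicular
    # distances, detected with half-normal probability g(x) = exp(-x^2 / (2 sigma^2)).
    distances = []
    for _ in range(5000):
        x = rng.uniform(0.0, w)
        if rng.random() < math.exp(-x * x / (2.0 * sigma_true ** 2)):
            distances.append(x)

    # With negligible truncation, the half-normal MLE is sigma^2 = mean(x^2);
    # the fitted detection function then converts counts into density estimates.
    sigma_hat = math.sqrt(sum(d * d for d in distances) / len(distances))
    print(round(sigma_hat, 1))
    ```

    Hierarchical distance sampling adds a layer on top of this: the per-unit abundances feeding the simulation above become draws from a spatial model, which is what lets HDS describe variation in population size among sample units.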

  12. Mixture Models for Distance Sampling Detection Functions

    PubMed Central

    Miller, David L.; Thomas, Len

    2015-01-01

    We present a new class of models for the detection function in distance sampling surveys of wildlife populations, based on finite mixtures of simple parametric key functions such as the half-normal. The models share many of the features of the widely-used “key function plus series adjustment” (K+A) formulation: they are flexible, produce plausible shapes with a small number of parameters, allow incorporation of covariates in addition to distance and can be fitted using maximum likelihood. One important advantage over the K+A approach is that the mixtures are automatically monotonic non-increasing and non-negative, so constrained optimization is not required to ensure distance sampling assumptions are honoured. We compare the mixture formulation to the K+A approach using simulations to evaluate its applicability in a wide set of challenging situations. We also re-analyze four previously problematic real-world case studies. We find mixtures outperform K+A methods in many cases, particularly spiked line transect data (i.e., where detectability drops rapidly at small distances) and larger sample sizes. We recommend that current standard model selection methods for distance sampling detection functions are extended to include mixture models in the candidate set. PMID:25793744
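
    A minimal sketch of such a mixture detection function, assuming two half-normal components (the parameter values are illustrative, not from the paper). Because each component is non-increasing in distance and the weights are non-negative, the mixture is automatically monotone, which is the property highlighted above.

    ```python
    import math

    def halfnormal_mixture(x, sigmas, phis):
        """Detection probability at distance x for a finite mixture of
        half-normal key functions; the weights phis must sum to one."""
        return sum(p * math.exp(-x * x / (2.0 * s * s)) for s, p in zip(sigmas, phis))

    # Illustrative two-point mixture: a narrow component for a 'spike' near zero
    # distance plus a wide shoulder -- the shape that troubles K+A models.
    g = [halfnormal_mixture(x / 10.0, sigmas=(0.05, 1.0), phis=(0.4, 0.6)) for x in range(21)]
    print(round(g[0], 3))                        # 1.0: detection is certain at distance zero
    assert all(a >= b for a, b in zip(g, g[1:])) # monotone non-increasing, no constraints needed
    ```

    Under a K+A formulation the same spiked shape can require constrained optimization to keep the fitted function non-increasing; here monotonicity holds by construction.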

  13. Statistical analysis of temperature data sampled at Station-M in the Norwegian Sea

    NASA Astrophysics Data System (ADS)

    Lorentzen, Torbjørn

    2014-02-01

    The paper analyzes sea temperature data sampled at Station-M in the Norwegian Sea. The data cover the period 1948-2010. The following questions are addressed: What type of stochastic process characterizes the temperature series? Are there any changes or patterns which indicate climate change? Are there any characteristics in the data which can be linked to the shrinking sea ice in the Arctic area? Can the series be modeled consistently and applied in forecasting of the future sea temperature? The paper applies the following methods: augmented Dickey-Fuller tests for testing of unit roots and stationarity, ARIMA models in univariate modeling, cointegration and error-correction models for estimating short- and long-term dynamics of non-stationary series, Granger-causality tests for analyzing the interaction pattern between the deep and upper layer temperatures, and simultaneous equation systems for forecasting future temperature. The paper shows that temperature at 2000 m Granger-causes temperature at 150 m, and that the 2000 m series can represent an important information carrier of the long-term development of the sea temperature in the geographical area. Descriptive statistics show that the temperature level has been on a positive trend since the beginning of the 1980s, which is also measured in most of the oceans in the North Atlantic. The analysis shows that the temperature series are cointegrated, which means that they share the same long-term stochastic trend and do not diverge too far from each other. The measured long-term temperature increase is one of the factors that can explain the shrinking summer sea ice in the Arctic region. The analysis shows that there is a significant negative correlation between the shrinking sea ice and the sea temperature at Station-M. The paper shows that the temperature forecasts are conditioned on the properties of the stochastic processes, causality pattern between the variables and specification of model
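
    The unit-root testing step can be illustrated with a bare-bones Dickey-Fuller regression (no augmentation lags, unlike the augmented test used in the paper) on synthetic series; this is a sketch of the idea, not the paper's analysis.

    ```python
    import numpy as np

    def dickey_fuller_stat(y):
        """t-statistic of rho in: Delta y_t = alpha + rho * y_{t-1} + eps.
        Strongly negative values speak against a unit root (no lag augmentation here)."""
        dy, ylag = np.diff(y), y[:-1]
        X = np.column_stack([np.ones_like(ylag), ylag])
        beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
        resid = dy - X @ beta
        s2 = resid @ resid / (len(dy) - 2)
        cov = s2 * np.linalg.inv(X.T @ X)
        return beta[1] / np.sqrt(cov[1, 1])

    rng = np.random.default_rng(0)
    eps = rng.standard_normal(500)
    random_walk = np.cumsum(eps)        # unit-root (non-stationary) series
    ar1 = np.zeros(500)                 # stationary AR(1) with phi = 0.5
    for t in range(1, 500):
        ar1[t] = 0.5 * ar1[t - 1] + eps[t]

    print(round(dickey_fuller_stat(ar1), 1), round(dickey_fuller_stat(random_walk), 1))
    ```

    The stationary series yields a strongly negative statistic while the random walk does not, which is the distinction the paper's ADF tests draw before choosing between ARIMA and cointegration/error-correction modeling.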

  14. Cancer progression modeling using static sample data.

    PubMed

    Sun, Yijun; Yao, Jin; Nowak, Norma J; Goodison, Steve

    2014-01-01

    As molecular profiling data continues to accumulate, the design of integrative computational analyses that can provide insights into the dynamic aspects of cancer progression becomes feasible. Here, we present a novel computational method for the construction of cancer progression models based on the analysis of static tumor samples. We demonstrate the reliability of the method with simulated data, and describe the application to breast cancer data. Our findings support a linear, branching model for breast cancer progression. An interactive model facilitates the identification of key molecular events in the advance of disease to malignancy.

  15. Annealed Importance Sampling for Neural Mass Models.

    PubMed

    Penny, Will; Sengupta, Biswa

    2016-03-01

    Neural Mass Models provide a compact description of the dynamical activity of cell populations in neocortical regions. Moreover, models of regional activity can be connected together into networks, and inferences made about the strength of connections, using M/EEG data and Bayesian inference. To date, however, Bayesian methods have been largely restricted to the Variational Laplace (VL) algorithm which assumes that the posterior distribution is Gaussian and finds model parameters that are only locally optimal. This paper explores the use of Annealed Importance Sampling (AIS) to address these restrictions. We implement AIS using proposals derived from Langevin Monte Carlo (LMC) which uses local gradient and curvature information for efficient exploration of parameter space. In terms of the estimation of Bayes factors, VL and AIS agree about which model is best but report different degrees of belief. Additionally, AIS finds better model parameters and we find evidence of non-Gaussianity in their posterior distribution. PMID:26942606
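
    A minimal sketch of Annealed Importance Sampling on a toy 1-D Gaussian model standing in for the neural mass model; a random-walk Metropolis step replaces the Langevin Monte Carlo transitions used in the paper, and all parameter values are illustrative.

    ```python
    import math
    import random

    def ais_log_evidence(n_chains=200, n_temps=50, seed=0):
        """AIS from prior N(0, 2^2) toward prior x likelihood, with one
        Metropolis transition per intermediate temperature."""
        rng = random.Random(seed)
        log_prior = lambda t: -0.5 * (t / 2.0) ** 2    # up to a constant
        log_like = lambda t: -0.5 * (t - 1.0) ** 2     # toy 'data' term
        betas = [j / n_temps for j in range(n_temps + 1)]
        log_w = []
        for _ in range(n_chains):
            theta = rng.gauss(0.0, 2.0)                # exact draw from the prior
            lw = 0.0
            for b0, b1 in zip(betas, betas[1:]):
                lw += (b1 - b0) * log_like(theta)      # importance-weight update
                # Metropolis transition targeting prior * like^b1 (the paper
                # uses Langevin Monte Carlo here instead).
                prop = theta + rng.gauss(0.0, 0.5)
                if math.log(rng.random() + 1e-300) < (
                    log_prior(prop) + b1 * log_like(prop)
                    - log_prior(theta) - b1 * log_like(theta)
                ):
                    theta = prop
            log_w.append(lw)
        m = max(log_w)
        return m + math.log(sum(math.exp(w - m) for w in log_w) / n_chains)

    # The analytic log-evidence for this toy model is about -0.905.
    print(round(ais_log_evidence(), 2))
    ```

    The averaged importance weights estimate the model evidence, so running this for two competing models yields the Bayes factors the abstract compares against Variational Laplace.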

  18. Variable range hopping conduction in n-CdSe samples at very low temperature

    NASA Astrophysics Data System (ADS)

    Errai, M.; El Kaaouachi, A.; El Idrissi, H.

    2015-12-01

    We reanalyzed experimental data already published in Friedman J R, Zhang Y, Dai P, et al. Phys Rev B, 1996, 53(15): 9528. Variable range hopping (VRH) conduction in insulating three-dimensional n-CdSe samples has been studied over the entire temperature range from 0.03 to 1 K. In the absence of a magnetic field, the low temperature conductivity σ of the three samples (A, B and C) obeys Mott VRH conduction with an appropriate temperature dependence in the prefactor (σ = σ0 exp[-(T0/T)^p] with p ≈ 0.25). This behavior can be explained by a VRH model in which transport occurs by hopping between localized states in the vicinity of the Fermi level, EF, without creation of a Coulomb gap (CG). In contrast, no Efros-Shklovskii VRH is observed, suggesting that the density of states is constant in the vicinity of EF.
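
    The Mott law quoted above can be checked numerically: on synthetic conductivity data, the hopping exponent is recovered from the slope of ln[−ln(σ/σ₀)] against ln T. A sketch with illustrative parameter values (σ₀ and T₀ are not the paper's fitted values):

    ```python
    import numpy as np

    sigma0, T0, p_true = 1.0, 50.0, 0.25      # illustrative, not the paper's fitted values
    T = np.linspace(0.03, 1.0, 40)            # the temperature range studied, in K
    sigma = sigma0 * np.exp(-(T0 / T) ** p_true)

    # ln(-ln(sigma/sigma0)) = p*ln(T0) - p*ln(T): a straight line whose slope
    # gives back the hopping exponent.
    y = np.log(-np.log(sigma / sigma0))
    slope, _ = np.polyfit(np.log(T), y, 1)
    print(round(-slope, 3))   # 0.25, the Mott value for 3-D systems
    ```

    On real data this log-log linearization is how p ≈ 0.25 (Mott, 3-D) is distinguished from p = 0.5 (Efros-Shklovskii, Coulomb gap).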

  19. Effects of room temperature aging on two cryogenic temperature sensor models used in aerospace applications

    NASA Astrophysics Data System (ADS)

    Courts, S. Scott; Krause, John

    2012-06-01

    Cryogenic temperature sensors used in aerospace applications are typically procured far in advance of the mission launch date. Depending upon the program, the temperature sensors may be stored at room temperature for extended periods, as installation and ground-based testing can take years before the actual flight. The effects of long-term storage at room temperature are sometimes approximated by accelerated aging at temperatures well above room temperature, but this practice can yield invalid results, as the sensing material and/or electrical contacting method can be increasingly unstable with higher temperature exposure. To date, little data are available on the effects of extended room temperature aging on sensors commonly used in aerospace applications. This research examines two such temperature sensor models: the Lake Shore Cryotronics, Inc. Cernox™ and DT-670-SD temperature sensors. Sample groups of each model type have been maintained for ten years or longer with room temperature storage between calibrations. Over an eighteen-year period, the Cernox™ temperature sensors exhibited a stability of better than ±20 mK for T < 30 K and better than ±0.1% of temperature for T > 30 K. Over a ten-year period, the model DT-670-SD sensors exhibited a stability of better than ±140 mK for T < 25 K and better than ±75 mK for T > 25 K.

  20. Multistage sampling for latent variable models.

    PubMed

    Thomas, Duncan C

    2007-12-01

    I consider the design of multistage sampling schemes for epidemiologic studies involving latent variable models, with surrogate measurements of the latent variables on a subset of subjects. Such models arise in various situations: when detailed exposure measurements are combined with variables that can be used to assign exposures to unmeasured subjects; when biomarkers are obtained to assess an unobserved pathophysiologic process; or when additional information is to be obtained on confounding or modifying variables. In such situations, it may be possible to stratify the subsample on data available for all subjects in the main study, such as outcomes, exposure predictors, or geographic locations. Three circumstances where analytic calculations of the optimal design are possible are considered: (i) when all variables are binary; (ii) when all are normally distributed; and (iii) when the latent variable and its measurement are normally distributed, but the outcome is binary. In each of these cases, it is often possible to considerably improve the cost efficiency of the design by appropriate selection of the sampling fractions. More complex situations arise when the data are spatially distributed: the spatial correlation can be exploited to improve exposure assignment for unmeasured locations using available measurements on neighboring locations; some approaches for informative selection of the measurement sample using location and/or exposure predictor data are considered.

  1. Far infrared reflectance of sintered nickel manganite samples for negative temperature coefficient thermistors

    SciTech Connect

    Nikolic, M.V.; Paraskevopoulos, K.M.; Aleksic, O.S.; Zorba, T.T.; Savic, S.M.; Lukovic, D.T.

    2007-08-07

    Single-phase complex spinel (Mn,Ni,Co,Fe)₃O₄ samples were sintered at 1050, 1200 and 1300 °C for 30 min and at 1200 °C for 120 min. Morphological changes of the obtained samples with sintering temperature and time were analyzed by X-ray diffraction and scanning electron microscopy (SEM). Room temperature far infrared reflectivity spectra for all samples were measured in the frequency range between 50 and 1200 cm⁻¹. The obtained spectra for all samples showed the presence of the same oscillators, but their intensities increased with sintering temperature and time, in correlation with the increase in sample density and microstructure changes during sintering. The measured spectra were numerically analyzed using the Kramers-Kronig method and the four-parameter model of coupled oscillators. Optical modes were calculated for six observed ionic oscillators belonging to the spinel structure of (Mn,Ni,Co,Fe)₃O₄, of which four were strong and two were weak.

  2. Fast temperature spectrometer for samples under extreme conditions

    SciTech Connect

    Zhang, Dongzhou; Jackson, Jennifer M.; Sturhahn, Wolfgang; Zhao, Jiyong; Alp, E. Ercan; Toellner, Thomas S.; Hu, Michael Y.

    2015-01-15

    We have developed a multi-wavelength Fast Temperature Readout (FasTeR) spectrometer to capture a sample's transient temperature fluctuations and reduce uncertainties in melting temperature determination. Without sacrificing accuracy, FasTeR features a fast readout rate (about 100 Hz), high sensitivity, a large dynamic range, and a well-constrained focus. Complementing a charge-coupled device spectrometer, FasTeR consists of an array of photomultiplier tubes and optical dichroic filters. The temperatures determined by FasTeR outside of the vicinity of melting are, generally, in good agreement with results from the charge-coupled device spectrometer. Near melting, FasTeR is capable of capturing transient temperature fluctuations, at least on the order of 300 K/s. A software tool, SIMFaster, is described that has been developed to simulate FasTeR and assess design configurations. FasTeR is especially suitable for temperature determinations that utilize ultra-fast techniques under extreme conditions. Working in parallel with the laser-heated diamond-anvil cell, synchrotron Mössbauer spectroscopy, and X-ray diffraction, we have applied the FasTeR spectrometer to measure the melting temperature of ⁵⁷Fe₀.₉Ni₀.₁ at high pressure.
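
    Multi-wavelength radiometric temperature determination of this kind typically rests on fitting or ratioing Planck intensities. As an illustration only (the FasTeR analysis itself is not described at this level of detail), a two-color ratio under the Wien approximation recovers a gray-body temperature:

    ```python
    import math

    C2 = 1.4388e-2   # second radiation constant, m*K

    def planck(lam, T):
        """Spectral radiance shape (arbitrary units) at wavelength lam, temperature T."""
        return lam ** -5 / (math.exp(C2 / (lam * T)) - 1.0)

    def two_color_temperature(lam1, lam2, i1, i2):
        """Ratio pyrometry in the Wien approximation: a gray-body emissivity
        cancels in the two-wavelength intensity ratio."""
        return C2 * (1.0 / lam2 - 1.0 / lam1) / math.log((i1 * lam1 ** 5) / (i2 * lam2 ** 5))

    lam1, lam2, T_true = 600e-9, 800e-9, 3000.0   # illustrative wavelengths and temperature
    T_est = two_color_temperature(lam1, lam2, planck(lam1, T_true), planck(lam2, T_true))
    print(round(T_est))   # close to 3000 K; the small offset is the Wien-approximation error
    ```

    With more than two wavelengths, as in a filtered photomultiplier array, the same idea becomes an overdetermined fit, which is what makes the fast readout compatible with reliable temperatures.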

  3. Multiple temperatures sampled using only one reference junction

    NASA Technical Reports Server (NTRS)

    Cope, G. W.

    1966-01-01

    In a multitemperature sampling system where the reference thermocouples are a distance from the test thermocouples, an intermediate thermal junction block is placed between the sets of thermocouples permitting switching between a single reference and the test thermocouples. This reduces the amount of cabling, reference thermocouples, and cost of the sampling system.

  4. Thermal modeling of core sampling in flammable gas waste tanks. Part 2: Rotary-mode sampling

    SciTech Connect

    Unal, C.; Poston, D.; Pasamehmetoglu, K.O.; Witwer, K.S.

    1997-08-01

    The radioactive waste stored in underground storage tanks at the Hanford site includes mixtures of sodium nitrate and sodium nitrite with organic compounds. The waste can produce undesired violent exothermic reactions when heated locally during rotary-mode sampling. Experiments were performed varying the downward force at a maximum rotational speed of 55 rpm and a minimum nitrogen purge flow of 30 scfm. The rotary drill bit teeth-face temperatures were measured. The waste was simulated with a low-thermal-conductivity hard material, pumice blocks. A torque meter was used to determine the energy provided to the drill string. The exhaust air-chip temperature as well as drill string and drill bit temperatures and other key operating parameters were recorded. A two-dimensional thermal model was developed, and safe operating conditions were determined. A downward force of 750 at 55 rpm and 30 scfm nitrogen purge flow was found to yield acceptable substrate temperatures. The model predicted experimental results reasonably well. Therefore, it could be used to simulate abnormal conditions to develop procedures for safe operations.

  5. Thermospheric temperature, density, and composition: New models

    NASA Technical Reports Server (NTRS)

    Jacchia, L. G.

    1977-01-01

    The models essentially consist of two parts: the basic static models, which give temperature and density profiles for the relevant atmospheric constituents for any specified exospheric temperature, and a set of formulae to compute the exospheric temperature and the expected deviations from the static models as a result of all the recognized types of thermospheric variation. For the basic static models, tables are given for heights from 90 to 2,500 km and for exospheric temperatures from 500 to 2600 K. In the formulae for the variations, an attempt has been made to represent the changes in composition observed by mass spectrometers on the OGO 6 and ESRO 4 satellites.

  6. Temperature-responsive Solid-phase Extraction Column for Biological Sample Pretreatment.

    PubMed

    Akimaru, Michiko; Okubo, Kohei; Hiruta, Yuki; Kanazawa, Hideko

    2015-01-01

    We have developed a novel solid-phase extraction (SPE) system utilizing a temperature-responsive polymer hydrogel-modified stationary phase. Aminopropyl silica beads (average diameter, 40 - 64 μm) were coated with poly(N-isopropylacrylamide) (PNIPAAm)-based thermo-responsive hydrogels. Butyl methacrylate (BMA) and N,N-dimethylaminopropyl acrylamide (DMAPAAm) were used as the hydrophobic and cationic monomers, respectively, and copolymerized with NIPAAm. To evaluate the use of this SPE cartridge for the analysis of drugs and proteins in biological fluids, we studied the separation of phenytoin and theophylline from human serum albumin (HSA) as a model system. The retention of the analytes in an exclusively aqueous eluent could be modulated by changing the temperature and salt content. These results indicated that this temperature-responsive SPE system can be applied to the pretreatment of biological samples for the measurement of serum drug levels.

  7. New high temperature plasmas and sample introduction systems for analytical atomic emission and mass spectrometry

    NASA Astrophysics Data System (ADS)

    Montaser, A.

    In this project, new high temperature plasmas and new sample introduction systems are developed for rapid elemental and isotopic analysis of gases, solutions, and solids using atomic emission spectrometry (AES) and mass spectrometry (MS). These devices offer promise of solving singularly difficult analytical problems that either exist now or are likely to arise in the future in the various fields of energy generation, environmental pollution, nutrition, and biomedicine. Emphasis is being placed on: (1) generation of annular, helium inductively coupled plasmas (He ICPs) that are suitable for atomization, excitation, and ionization of elements possessing high excitation and ionization energies, with the intent of enhancing the detection powers of a number of elements; (2) computer modeling of ICP discharges to predict the behavior of new and existing plasmas; (3) diagnostic studies of high temperature plasmas and sample introduction systems to quantify their fundamental properties, with the ultimate aim of improving the analytical performance of atomic spectrometry; (4) development and characterization of new, low-cost sample introduction systems that consume microliter or microgram quantities of samples; and (5) investigation of new membrane separators for stripping solvent from sample aerosol to reduce various interferences and to enhance sensitivity and selectivity in plasma spectrometry.

  8. Modeling maximum daily temperature using a varying coefficient regression model

    NASA Astrophysics Data System (ADS)

    Li, Han; Deng, Xinwei; Kim, Dong-Yun; Smith, Eric P.

    2014-04-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature. A good predictive model for daily maximum temperature is required because daily maximum temperature is an important measure for predicting survival of temperature sensitive fish. To appropriately model the strong relationship between water and air temperatures at a daily time step, it is important to incorporate information related to the time of the year into the modeling. In this work, a time-varying coefficient model is used to study the relationship between air temperature and water temperature. The time-varying coefficient model enables dynamic modeling of the relationship, and can be used to understand how the air-water temperature relationship varies over time. The proposed model is applied to 10 streams in Maryland, West Virginia, Virginia, North Carolina, and Georgia using daily maximum temperatures. It provides a better fit and better predictions than those produced by a simple linear regression model or a nonlinear logistic model.
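
    A time-varying coefficient regression can be reduced to ordinary least squares by expanding the coefficients in a periodic basis. The sketch below uses a first-order Fourier basis and synthetic data; the paper's estimation method and basis choice may differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    day = np.arange(365)
    omega = 2.0 * np.pi * day / 365.0
    air = 15.0 + 10.0 * np.sin(omega) + rng.normal(0.0, 2.0, 365)

    # Synthetic truth: both the intercept a(t) and the slope b(t) linking water
    # temperature to air temperature drift over the year.
    a_true = 4.0 + 2.0 * np.sin(omega)
    b_true = 0.6 + 0.2 * np.cos(omega)
    water = a_true + b_true * air + rng.normal(0.0, 0.5, 365)

    # Expanding a(t) and b(t) in a first-order Fourier basis turns the
    # varying-coefficient model into ordinary least squares.
    basis = np.column_stack([np.ones(365), np.sin(omega), np.cos(omega)])
    X = np.hstack([basis, basis * air[:, None]])
    coef, *_ = np.linalg.lstsq(X, water, rcond=None)
    b_hat = basis @ coef[3:]                   # recovered time-varying slope b(t)
    print(round(float(np.abs(b_hat - b_true).mean()), 3))
    ```

    Plotting `b_hat` against day of year is the kind of output that shows how the air-water temperature relationship varies seasonally, which a constant-slope regression cannot capture.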

  9. Estimation of Surface Heat Flux and Surface Temperature during Inverse Heat Conduction under Varying Spray Parameters and Sample Initial Temperature

    PubMed Central

    Aamir, Muhammad; Liao, Qiang; Zhu, Xun; Aqeel-ur-Rehman; Wang, Hong

    2014-01-01

    An experimental study was carried out to investigate the effects of inlet pressure, sample thickness, initial sample temperature, and temperature sensor location on the surface heat flux, surface temperature, and surface ultrafast cooling rate, using stainless steel samples of diameter 27 mm and thicknesses of 8.5, 13, 17.5, and 22 mm, respectively. Inlet pressure was varied from 0.2 MPa to 1.8 MPa, while sample initial temperature varied from 600°C to 900°C. Beck's sequential function specification method was utilized to estimate surface heat flux and surface temperature. Inlet pressure has a positive effect on surface heat flux (SHF) up to a critical value of pressure. Thickness of the sample affects the maximum achieved SHF negatively. Surface heat flux as high as 0.4024 MW/m² was estimated for a thickness of 8.5 mm. Insulation effects of the vapor film become apparent at sample initial temperatures around 900°C, causing a reduction in the surface heat flux and cooling rate of the sample. A sensor location near the quenched surface is found to be a better choice for visualizing the effects of spray parameters on surface heat flux and surface temperature. The cooling rate showed a profound increase for an inlet pressure of 0.8 MPa. PMID:24977219

  10. Modeling monthly mean air temperature for Brazil

    NASA Astrophysics Data System (ADS)

    Alvares, Clayton Alcarde; Stape, José Luiz; Sentelhas, Paulo Cesar; de Moraes Gonçalves, José Leonardo

    2013-08-01

    Air temperature is one of the main weather variables influencing agriculture around the world. Its availability, however, is a concern, mainly in Brazil, where the weather stations are concentrated in the coastal regions of the country. Therefore, the present study had as its objective to develop models for estimating monthly and annual mean air temperature for the Brazilian territory using multiple regression and geographic information system techniques. Temperature data from 2,400 stations distributed across the Brazilian territory were used, 1,800 to develop the equations and 600 for validating them, with their geographical coordinates and altitude as independent variables for the models. A total of 39 models were developed, relating the dependent variables maximum, mean, and minimum air temperature (monthly and annual) to the independent variables latitude, longitude, altitude, and their combinations. All regression models were statistically significant (α ≤ 0.01). The monthly and annual temperature models presented coefficients of determination between 0.54 and 0.96. We obtained an overall spatial correlation higher than 0.9 between the models proposed and the 16 major models already published for some Brazilian regions, considering a total of 3.67 × 10⁸ pixels evaluated. Our national temperature models are recommended for predicting air temperature throughout the Brazilian territory.
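
    The regression structure described (temperature as a function of latitude, longitude, altitude, and their combinations) can be sketched on synthetic data; the coefficients below, such as the 6.5 K/km lapse rate, are illustrative assumptions, not the paper's fitted values.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1800                                   # matching the calibration-set size
    lat = rng.uniform(-33.0, 5.0, n)           # Brazil spans roughly 33°S to 5°N
    lon = rng.uniform(-74.0, -35.0, n)
    alt = rng.uniform(0.0, 1500.0, n)          # metres

    # Synthetic truth: warmer toward the equator, cooler with altitude
    # (a 6.5 K/km lapse rate is assumed for illustration).
    t_mean = 27.0 + 0.25 * lat - 0.0065 * alt + 0.02 * lon + rng.normal(0.0, 1.0, n)

    X = np.column_stack([np.ones(n), lat, lon, alt, lat * lon, lat * alt])
    beta, *_ = np.linalg.lstsq(X, t_mean, rcond=None)
    resid = t_mean - X @ beta
    r2 = 1.0 - resid @ resid / np.sum((t_mean - t_mean.mean()) ** 2)
    print(round(float(r2), 2))
    ```

    Applied over a gridded digital elevation model, a fit of this form yields the temperature maps against which the spatial correlation with previously published regional models was computed.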

  11. Loop modeling: Sampling, filtering, and scoring

    PubMed Central

    Soto, Cinque S; Fasnacht, Marc; Zhu, Jiang; Forrest, Lucy; Honig, Barry

    2008-01-01

    We describe a fast and accurate protocol, LoopBuilder, for the prediction of loop conformations in proteins. The procedure includes extensive sampling of backbone conformations, side chain addition, the use of a statistical potential to select a subset of these conformations, and, finally, an energy minimization and ranking with an all-atom force field. We find that the Direct Tweak algorithm used in the previously developed LOOPY program is successful in generating an ensemble of conformations that on average are closer to the native conformation than those generated by other methods. An important feature of Direct Tweak is that it checks for interactions between the loop and the rest of the protein during the loop closure process. DFIRE is found to be a particularly effective statistical potential that can bias conformation space toward conformations that are close to the native structure. Its application as a filter prior to a full molecular mechanics energy minimization both improves prediction accuracy and offers a significant savings in computer time. Final scoring is based on the OPLS/SBG-NP force field implemented in the PLOP program. The approach is also shown to be quite successful in predicting loop conformations for cases where the native side chain conformations are assumed to be unknown, suggesting that it will prove effective in real homology modeling applications. Proteins 2008. © 2007 Wiley-Liss, Inc. PMID:17729286

  12. Effects of High-frequency Wind Sampling on Simulated Mixed Layer Depth and Upper Ocean Temperature

    NASA Technical Reports Server (NTRS)

    Lee, Tong; Liu, W. Timothy

    2005-01-01

    Effects of high-frequency wind sampling on a near-global ocean model are studied by forcing the model with a 12 hourly averaged wind product and its 24 hourly subsamples in separate experiments. The differences in mixed layer depth and sea surface temperature resulting from these experiments are examined, and the underlying physical processes are investigated. The 24 hourly subsampling not only reduces the high-frequency variability of the wind but also affects the annual mean wind because of aliasing. While the former effect largely impacts mid- to high-latitude oceans, the latter primarily affects tropical and coastal oceans. At mid- to high-latitude regions the subsampled wind results in a shallower mixed layer and higher sea surface temperature because of reduced vertical mixing associated with weaker high-frequency wind. In tropical and coastal regions, however, the change in upper ocean structure due to the wind subsampling is primarily caused by the difference in advection resulting from aliased annual mean wind, which varies with the subsampling time. The results of the study indicate a need for more frequent sampling of satellite wind measurement and have implications for data assimilation in terms of identifying the nature of model errors.
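
    The aliasing mechanism described above can be illustrated with a toy diurnal wind series (purely synthetic; not the forcing used in the study): 12-hourly averages preserve the long-term mean, while once-daily snapshots sample the diurnal cycle at a fixed phase and bias the mean depending on sampling time.

    ```python
    import numpy as np

    # Hourly wind with a diurnal cycle over 30 days (hypothetical series)
    t = np.arange(0, 30 * 24)                     # hours
    wind = 8.0 + 3.0 * np.sin(2 * np.pi * t / 24.0)   # m/s, 24 h period

    # 12-hourly averages retain the correct 30-day mean ...
    w12 = wind.reshape(-1, 12).mean(axis=1)

    # ... but every-24-hour snapshots hit the diurnal cycle at a fixed phase,
    # so the inferred mean wind depends on the subsampling time (aliasing)
    snap_00 = wind[::24]      # sampled at hour 0 (diurnal cycle crosses zero)
    snap_06 = wind[6::24]     # sampled at hour 6 (near the diurnal peak)

    print(wind.mean(), w12.mean(), snap_00.mean(), snap_06.mean())
    ```

    Here the hour-6 snapshots overestimate the mean wind by the full diurnal amplitude, which is the analogue of the aliased annual-mean wind driving the tropical advection differences discussed above.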

  13. Effect of the Target Motion Sampling Temperature Treatment Method on the Statistics and Performance

    NASA Astrophysics Data System (ADS)

    Viitanen, Tuomas; Leppänen, Jaakko

    2014-06-01

    Target Motion Sampling (TMS) is a stochastic on-the-fly temperature treatment technique being developed as part of the Monte Carlo reactor physics code Serpent. The method enables modeling of arbitrary temperatures in continuous-energy Monte Carlo tracking routines with only one set of cross sections stored in computer memory. Previously, only the performance of the TMS method in terms of CPU time per transported neutron had been discussed. Since effective cross sections are not calculated at any point of a transport simulation with TMS, reaction rate estimators must be scored using sampled cross sections, which is expected to increase the variances and, consequently, to decrease the figures-of-merit. This paper examines the effects of TMS on the statistics and performance of practical calculations involving reaction rate estimation with collision estimators. Contrary to expectations, it turned out that the use of sampled response values has no practical effect on the performance of reaction rate estimators when TMS is used with elevated basis cross section temperatures (EBT), i.e. in the usual way. With 0 K cross sections, a significant increase in the variances of capture rate estimators was observed just below the energy region of unresolved resonances, but at these energies the figures-of-merit could be increased using a simple resampling technique to decrease the variances of the responses. It was, however, noticed that the TMS method increases the statistical deviations of all estimators, including the flux estimator, by tens of percent in the vicinity of very strong resonances. This effect is not related to the use of sampled responses, but is instead an inherent property of the TMS tracking method, and concerns both EBT and 0 K calculations.

  14. The X-ray luminosity temperature relation of a complete sample of low mass galaxy clusters

    NASA Astrophysics Data System (ADS)

    Zou, S.; Maughan, B. J.; Giles, P. A.; Vikhlinin, A.; Pacaud, F.; Burenin, R.; Hornstrup, A.

    2016-08-01

    We present Chandra observations of 23 galaxy groups and low-mass galaxy clusters at 0.03 < z < 0.15 with a median temperature of ~2 keV. The sample is a statistically complete flux-limited subset of the 400 deg² survey. We investigated the scaling relation between X-ray luminosity (L) and temperature (T), taking selection biases fully into account. The logarithmic slope of the bolometric L-T relation was found to be 3.29 ± 0.33, consistent with values typically found for samples of more massive clusters. In combination with other recent studies of the L-T relation, we show that there is no evidence for the slope, normalisation, or scatter of the L-T relation of galaxy groups being different from that of massive clusters. The exception is that, in the special case of the most relaxed systems, the slope of the core-excised L-T relation appears to steepen from the self-similar value found for massive clusters to a steeper slope for the lower-mass sample studied here. Thanks to our rigorous treatment of selection biases, these measurements provide a robust reference against which to compare predictions of models of the impact of feedback on the X-ray properties of galaxy groups.
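
    A naive version of the L-T fit (ignoring the selection-bias treatment that is central to the paper) is simply a straight line in log-log space; the data below are synthetic:

    ```python
    import numpy as np

    def lt_slope(T_keV, L_bol):
        """Fit log10(L) = a + b*log10(T) by least squares; return the slope b."""
        x = np.log10(T_keV)
        y = np.log10(L_bol)
        b, a = np.polyfit(x, y, 1)   # polyfit returns highest degree first
        return b

    # Synthetic group sample following L ~ T^3.3 with lognormal scatter
    rng = np.random.default_rng(1)
    T = rng.uniform(1.0, 5.0, 200)                       # keV
    L = 1e43 * T**3.3 * 10**rng.normal(0, 0.1, 200)      # erg/s

    print(round(lt_slope(T, L), 1))
    ```

    In a flux-limited sample this direct fit would be biased, because intrinsically luminous systems are over-represented near the flux limit; that is the selection effect the paper models explicitly.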

  15. Method and apparatus for transport, introduction, atomization and excitation of emission spectrum for quantitative analysis of high temperature gas sample streams containing vapor and particulates without degradation of sample stream temperature

    DOEpatents

    Eckels, David E.; Hass, William J.

    1989-05-30

    A sample transport, sample introduction, and flame excitation system for spectrometric analysis of high temperature gas streams which eliminates degradation of the sample stream by condensation losses.

  16. 40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (40 CFR part 50, appendix L, figure L-30) or equivalent adaptor to facilitate measurement of sampler... recommended. (6) Sample filter or filters, as specified in section 6 of 40 CFR part 50, appendix L. (d... 40 Protection of Environment 6 2013-07-01 2013-07-01 false Test for filter temperature...

  17. 40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... energy distribution and permitted tolerances specified in table E-2 of this subpart. The solar radiation... sequential sample operation. (3) The solar radiant energy source shall be installed in the test chamber such... temperature control system or by the radiant energy from the solar radiation source that may be present...

  18. 40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... energy distribution and permitted tolerances specified in table E-2 of this subpart. The solar radiation... sequential sample operation. (3) The solar radiant energy source shall be installed in the test chamber such... temperature control system or by the radiant energy from the solar radiation source that may be present...

  19. Modeling equinox temperature variations in Saturn's rings

    NASA Astrophysics Data System (ADS)

    Spilker, L. J.; Ferrari, C. C.; Morishima, R.

    2011-12-01

    For a few days around Saturn ring equinox, the Cassini Composite Infrared Spectrometer (CIRS) obtained data on Saturn's rings at different local times and phase angles. We examine results from 15 scans taken near equinox. The sun was shining on the south side of the rings prior to the equinox crossing. The solar elevation angle in the 15 scans varied between -0.00007 degrees and 0.036 degrees, and the phase angle ranged from 30 degrees to 147 degrees. The equinox geometry is unique because the sun is edge-on to the rings: Saturn heating dominates while solar heating is at a minimum. The ring temperature varies between the lit and unlit sides of the A and B rings when the sun is the dominant heat source. With the sun shining on the rings, the temperature of the lit rings decreases with increasing phase angle, and the ring temperature in the shadow is less than the ring temperature at noon. At equinox the ring temperature does not decrease with increasing phase angle, and the temperature at noon is no longer greater than the temperature in the shadow. As the solar elevation angle decreased through the last few degrees, the ring temperatures on the lit and unlit sides rapidly decreased to the coldest temperatures observed thus far. At equinox, radial and longitudinal temperature variations are observed in the A, B and C rings and the Cassini Division. The radial temperature variations result both from the decreasing Saturn solid angle with increasing distance from the planet and from varying optical depth, as the screening effect of optically thicker rings limits the heat contribution to primarily one hemisphere of Saturn. Both monolayer and multilayer models can explain the radial variations in ring temperature except in the A ring, where model fits produce temperatures lower than those observed, perhaps because of effects from gravitational wakes, density waves and bending waves that are not included in the models. Saturn ring temperatures near equinox also vary

  20. Random energy model at complex temperatures

    PubMed

    Saakian

    2000-06-01

    The complete phase diagram of the random energy model is obtained for complex temperatures using the method proposed by Derrida. We find the density of zeroes of the partition function (statistical sum). The method is then applied to the generalized random energy model. This allows us to propose an analytical method for investigating the zeroes of the partition function for finite-dimensional systems. PMID:11088286

  1. Factors affecting quality of temperature models for the pre-appearance interval of forensically useful insects.

    PubMed

    Matuszewski, Szymon; Mądra, Anna

    2015-02-01

    In the case of many forensically important insects, the interval preceding the appearance of an insect stage on a corpse (the pre-appearance interval, or PAI) is strongly temperature-dependent. Accordingly, it has been proposed to estimate PAI from temperature by using temperature models for the PAI of particular insect species together with temperature data specific to a given case. The quality of temperature models for PAI depends on the protocols used in PAI field studies. In this article we analyze the effects of sampling frequency and techniques, temperature data, and sample size on the quality of PAI models. Models were created using data from a highly replicated PAI field study, and their performance in estimation was tested against an external body of PAI data. It was found that a low frequency of insect sampling distinctly deteriorated temperature models for PAI. The effect of sampling techniques was clearly smaller. Temperature data from a local weather station gave models of poor quality; however, retrospective correction of these data clearly improved the models. Most importantly, the current results demonstrate that the sample size in PAI field studies may be substantially reduced with no deterioration of the models. Samples of 11-14 carcasses gave models of high quality, as long as the whole range of relevant temperatures was studied. Moreover, it was found that carcasses exposed in forests and carcasses exposed in early spring are particularly important, as they ensure that PAI data are collected at low temperatures. A preliminary best-practice model for PAI field studies is given. PMID:25541074
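
    A PAI temperature model of the kind discussed above might, for illustration, take a log-linear form; the functional form, data and coefficients here are assumptions for the sketch, not the models of the study:

    ```python
    import numpy as np

    def fit_pai_model(temp_C, pai_days):
        """Fit ln(PAI) = a + b*T; PAI shortens as temperature rises (b < 0)."""
        b, a = np.polyfit(temp_C, np.log(pai_days), 1)
        return a, b

    def predict_pai(a, b, temp_C):
        return np.exp(a + b * temp_C)

    # Hypothetical carcass data: PAI roughly halves for every 7 C of warming
    rng = np.random.default_rng(2)
    T = rng.uniform(5, 25, 14)       # 14 carcasses, the reduced sample size above
    pai = 30.0 * np.exp(-0.1 * T) * np.exp(rng.normal(0, 0.05, 14))

    a, b = fit_pai_model(T, pai)
    print(round(b, 2))
    ```

    The point about covering the whole range of relevant temperatures shows up directly here: if all 14 carcasses came from a narrow temperature band, the slope b would be poorly constrained.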

  2. Global modeling of fresh surface water temperature

    NASA Astrophysics Data System (ADS)

    Bierkens, M. F.; Eikelboom, T.; van Vliet, M. T.; Van Beek, L. P.

    2011-12-01

    Temperature determines a range of physical properties of water, including the solubility of oxygen and other gases, and acts as a strong control on freshwater biogeochemistry, influencing chemical reaction rates, phytoplankton and zooplankton composition, and the presence or absence of pathogens. Thus, in freshwater ecosystems the thermal regime affects the geographical distribution of aquatic species through their growth and metabolism, tolerance to parasites, diseases and pollution, and life history. Compared to statistical approaches, physically based models of surface water temperature have the advantage of being robust to changes in flow regime, river morphology, radiation balance and upstream hydrology. Such models are therefore better suited for projecting the effects of global change on water temperature. Until now, physically based models have only been applied to well-defined fresh water bodies of limited size (e.g., lakes or stream segments), where the numerous parameters can be measured or otherwise established, whereas attempts to model water temperature over larger scales have thus far been limited to regression-type models. Here, we present a first attempt to apply a physically based model of global fresh surface water temperature. The model adds a surface water energy balance to river discharge modelled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by short- and long-wave radiation and by sensible and latent heat fluxes. Also included are ice formation and its effect on heat storage and river hydraulics. We used the coupled surface water and energy balance model to simulate global fresh surface water temperature at daily time steps on a 0.5 x 0.5 degree grid for the period 1970-2000. Meteorological forcing was obtained from the CRU data set, downscaled to daily values with ECMWF
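
    The surface water energy balance at the core of such a model can be sketched as a single explicit time step (a minimal illustration; the flux values and fixed water depth are assumptions, and the actual model also includes advection, ice formation and river hydraulics):

    ```python
    def step_water_temp(T_w, sw_down, lw_net, q_sens, q_lat, depth, dt):
        """Advance water temperature by one explicit time step of the surface
        energy balance: rho * c_p * d * dT/dt = SW + LW - H - LE."""
        RHO, CP = 1000.0, 4186.0    # water density (kg/m3) and heat capacity (J/kg/K)
        net_flux = sw_down + lw_net - q_sens - q_lat   # W/m2, positive warms the water
        return T_w + net_flux * dt / (RHO * CP * depth)

    # One daily step for a 2 m deep water column with 100 W/m2 net warming
    T1 = step_water_temp(T_w=15.0, sw_down=200.0, lw_net=-50.0,
                         q_sens=30.0, q_lat=20.0, depth=2.0, dt=86400.0)
    print(round(T1, 2))   # warms by about 1 K per day
    ```

    In the global model this local balance is coupled to the discharge simulated by PCR-GLOBWB, so heat is also carried downstream rather than only exchanged vertically.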

  3. Pre-analytical sample quality: metabolite ratios as an intrinsic marker for prolonged room temperature exposure of serum samples.

    PubMed

    Anton, Gabriele; Wilson, Rory; Yu, Zhong-Hao; Prehn, Cornelia; Zukunft, Sven; Adamski, Jerzy; Heier, Margit; Meisinger, Christa; Römisch-Margl, Werner; Wang-Sattler, Rui; Hveem, Kristian; Wolfenbuttel, Bruce; Peters, Annette; Kastenmüller, Gabi; Waldenberger, Melanie

    2015-01-01

    Advances in the "omics" field bring about the need for a high number of good quality samples. Many omics studies take advantage of biobanked samples to meet this need. Most of the laboratory errors occur in the pre-analytical phase. Therefore evidence-based standard operating procedures for the pre-analytical phase as well as markers to distinguish between 'good' and 'bad' quality samples taking into account the desired downstream analysis are urgently needed. We studied concentration changes of metabolites in serum samples due to pre-storage handling conditions as well as due to repeated freeze-thaw cycles. We collected fasting serum samples and subjected aliquots to up to four freeze-thaw cycles and to pre-storage handling delays of 12, 24 and 36 hours at room temperature (RT) and on wet and dry ice. For each treated aliquot, we quantified 127 metabolites through a targeted metabolomics approach. We found a clear signature of degradation in samples kept at RT. Storage on wet ice led to less pronounced concentration changes. 24 metabolites showed significant concentration changes at RT. In 22 of these, changes were already visible after only 12 hours of storage delay. Especially pronounced were increases in lysophosphatidylcholines and decreases in phosphatidylcholines. We showed that the ratio between the concentrations of these molecule classes could serve as a measure to distinguish between 'good' and 'bad' quality samples in our study. In contrast, we found quite stable metabolite concentrations during up to four freeze-thaw cycles. We concluded that pre-analytical RT handling of serum samples should be strictly avoided and serum samples should always be handled on wet ice or in cooling devices after centrifugation. Moreover, serum samples should be frozen at or below -80°C as soon as possible after centrifugation. PMID:25823017

  5. Temperature dependence on the pesticide sampling rate of polar organic chemical integrative samplers (POCIS).

    PubMed

    Yabuki, Yoshinori; Nagai, Takashi; Inao, Keiya; Ono, Junko; Aiko, Nobuyuki; Ohtsuka, Nobutoshi; Tanaka, Hitoshi; Tanimori, Shinji

    2016-10-01

    Laboratory experiments were performed to determine the sampling rates of pesticides for the polar organic chemical integrative samplers (POCIS) used in Japan. The concentrations of pesticides in aquatic environments were estimated from the amounts of pesticide accumulated on the POCIS, and the effect of water temperature on the pesticide sampling rates was evaluated. Sampling rates of 48 pesticides at 18, 24, and 30 °C were obtained, and the study confirmed an increasing trend in sampling rates with increasing water temperature for many pesticides. PMID:27305429
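
    The way a sampling rate converts accumulated mass into a time-weighted average (TWA) water concentration, and why a temperature-mismatched rate biases the estimate, can be sketched as follows (all numbers are hypothetical):

    ```python
    def twa_concentration(accumulated_ng, sampling_rate_L_per_day, days):
        """Time-weighted average water concentration (ng/L) from the mass
        accumulated on a POCIS: C_w = M / (R_s * t)."""
        return accumulated_ng / (sampling_rate_L_per_day * days)

    # Hypothetical deployment: 240 ng of a pesticide accumulated over 14 days.
    # Using a sampling rate measured at 18 C when the water was actually 30 C
    # (rates generally rise with temperature) overestimates the concentration.
    c_18 = twa_concentration(240.0, 0.10, 14)   # R_s determined at 18 C
    c_30 = twa_concentration(240.0, 0.15, 14)   # higher R_s appropriate at 30 C
    print(round(c_18, 1), round(c_30, 1))
    ```

    This is why temperature-specific sampling rates, as measured in the study, matter for field deployments.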

  6. Interpolation of climate variables and temperature modeling

    NASA Astrophysics Data System (ADS)

    Samanta, Sailesh; Pal, Dilip Kumar; Lohar, Debasish; Pal, Babita

    2012-01-01

    Geographic Information Systems (GIS) and modeling are becoming powerful tools in agricultural research and natural resource management. This study proposes an empirical methodology for modeling and mapping monthly and annual air temperature using remote sensing and GIS techniques. The study area is Gangetic West Bengal and its neighborhood in eastern India, where a number of weather systems occur throughout the year. Gangetic West Bengal is a region of strongly heterogeneous surface with several weather disturbances. This paper also examines statistical approaches for interpolating climatic data over large regions, comparing interpolation techniques for climate variables used in agricultural research. Three interpolation approaches, inverse distance weighted averaging, thin-plate smoothing splines, and co-kriging, are evaluated for a 4° × 4° area covering the eastern part of India. Land use/land cover, soil texture, and a digital elevation model are used as independent variables for temperature modeling. Multiple regression analysis with the standard method is used to enter the variables into the regression equation. Prediction of mean temperature is better for the monsoon season than for the winter season. Finally, standard deviations of the errors are evaluated by comparing the predicted and observed temperatures of the area. For further improvement, distance from the coastline and seasonal wind pattern should be included as additional independent variables.
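
    Of the three interpolation approaches evaluated, inverse distance weighted averaging is the simplest; a minimal sketch with synthetic stations and the standard power-2 weights:

    ```python
    import numpy as np

    def idw(x, y, values, xi, yi, power=2.0, eps=1e-12):
        """Inverse-distance-weighted estimate at (xi, yi) from scattered stations."""
        d = np.hypot(x - xi, y - yi)
        if d.min() < eps:                 # query point coincides with a station
            return float(values[d.argmin()])
        w = 1.0 / d**power
        return float(np.sum(w * values) / np.sum(w))

    # Four hypothetical stations at the corners of a unit square
    x = np.array([0.0, 1.0, 0.0, 1.0])
    y = np.array([0.0, 0.0, 1.0, 1.0])
    t = np.array([20.0, 22.0, 24.0, 26.0])

    print(idw(x, y, t, 0.5, 0.5))   # centre: all weights equal, so plain mean 23.0
    ```

    Thin-plate splines and co-kriging add smoothness constraints and spatial covariance modeling, respectively, which is why they can outperform IDW on heterogeneous terrain like the study area.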

  7. On the Utilization of Sample Weights in Latent Variable Models.

    ERIC Educational Resources Information Center

    Kaplan, David; Ferguson, Aaron J.

    1999-01-01

    Examines the use of sample weights in latent variable models in the case where a simple random sample is drawn from a population containing a mixture of strata through a bootstrap simulation study. Results show that ignoring weights can lead to serious bias in latent variable model parameters and reveal the advantages of using sample weights. (SLD)

  8. Automated sample plan selection for OPC modeling

    NASA Astrophysics Data System (ADS)

    Casati, Nathalie; Gabrani, Maria; Viswanathan, Ramya; Bayraktar, Zikri; Jaiswal, Om; DeMaris, David; Abdo, Amr Y.; Oberschmidt, James; Krause, Andreas

    2014-03-01

    It is desired to reduce the time required to produce metrology data for calibration of Optical Proximity Correction (OPC) models while maintaining or improving how well the collected data represent the types of patterns that occur in real circuit designs. Previous work based on clustering in geometry and/or image parameter space has shown some benefit over strictly manual or intuitive selection, but leads to arbitrary pattern exclusion or selection which may not best represent the product. Formulating pattern selection as an optimization problem, which co-optimizes a number of objective functions reflecting modelers' insight and expertise, has been shown to produce models of quality equivalent to the traditional plan of record (POR) set, but in less time.

  9. High temperature furnace modeling and performance verifications

    NASA Technical Reports Server (NTRS)

    Smith, James E., Jr.

    1991-01-01

    A two-dimensional conduction/radiation problem for an alumina crucible in a zirconia heater/muffle tube enclosing a liquid iron sample was solved numerically. Variations in the crucible wall thickness were examined numerically. The results showed that the temperature profiles within the liquid iron sample were significantly affected by the crucible wall thickness. New zirconia heating elements are under development that will permit continued experimental investigations of the zirconia furnace. These elements have been designed to work with the existing furnace and have been shown to have longer lifetimes than commercially available zirconia heating elements. The first element has been constructed and tested successfully.
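
    A steady-state conduction problem of this general type can be sketched with a Jacobi finite-difference iteration; the square-slab geometry and boundary temperatures below are illustrative assumptions, not the crucible model of the study (which also includes radiation):

    ```python
    import numpy as np

    def solve_steady_conduction(n=21, T_hot=1800.0, T_cold=300.0, iters=5000):
        """Jacobi iteration for steady 2-D heat conduction (Laplace equation)
        on a square slab with a hot left wall and cold remaining walls."""
        T = np.full((n, n), T_cold)
        T[:, 0] = T_hot                    # hot boundary (K)
        for _ in range(iters):
            # Each interior node relaxes toward the mean of its four neighbours
            T[1:-1, 1:-1] = 0.25 * (T[2:, 1:-1] + T[:-2, 1:-1] +
                                    T[1:-1, 2:] + T[1:-1, :-2])
        return T

    T = solve_steady_conduction()
    # Temperature along the mid-plane falls monotonically from the hot wall
    print(round(T[10, 1]), round(T[10, 10]), round(T[10, 19]))
    ```

    Adding a radiation term, as in the paper's problem, couples T⁴ source terms into this balance and makes the update nonlinear, but the relaxation structure is the same.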

  10. Simple method for highlighting the temperature distribution into a liquid sample heated by microwave power field

    SciTech Connect

    Surducan, V.; Surducan, E.; Dadarlat, D.

    2013-11-13

    Microwave-induced heating is widely used in medical treatments and in scientific and industrial applications. The temperature field inside a microwave-heated sample is often inhomogeneous, so multiple temperature sensors are required for an accurate result. Nowadays, either non-contact methods (infrared thermography or microwave radiometry) or direct-contact methods (expensive and sophisticated fiber-optic temperature sensors transparent to microwave radiation) are mainly used. IR thermography gives only the surface temperature and cannot be used for measuring temperature distributions in cross sections of a sample. In this paper we present a very simple experimental method for highlighting the temperature distribution within a cross section of a liquid sample heated by microwave radiation through a coaxial applicator. The proposed method offers qualitative information about the heating distribution using a temperature-sensitive liquid crystal sheet. Inhomogeneities as small as 1-2°C, produced by symmetry irregularities of the microwave applicator, can easily be detected by visual inspection or by computer-assisted color-to-temperature conversion. The microwave applicator is then tuned and verified with the described method until the temperature inhomogeneities are resolved.

  11. Sample Size Determination for Rasch Model Tests

    ERIC Educational Resources Information Center

    Draxler, Clemens

    2010-01-01

    This paper is concerned with supplementing statistical tests for the Rasch model so that, in addition to the probability of the error of the first kind (Type I probability), the probability of the error of the second kind (Type II probability) can be controlled at a predetermined level by basing the test on the appropriate number of observations.…

  12. Modeling complexometric titrations of natural water samples.

    PubMed

    Hudson, Robert J M; Rue, Eden L; Bruland, Kenneth W

    2003-04-15

    Complexometric titrations are the primary source of metal speciation data for aquatic systems, yet their interpretation in waters containing humic and fulvic acids remains problematic. In particular, the accuracy of inferred ambient free metal ion concentrations and of parameters quantifying metal complexation by natural ligands has been challenged because of the difficulties inherent in calibrating common analytical methods and in modeling the diverse array of ligands present. This work tests and applies a new method of modeling titration data that combines calibration of analytical sensitivity (S) and estimation of concentrations and stability constants for discrete natural ligand classes ([Li]T and Ki) into a single step, using nonlinear regression and a new analytical solution to the one-metal/two-ligand equilibrium problem. When applied to jointly model data from multiple titrations conducted at different analytical windows, it yields accurate estimates of S, [Li]T, Ki, and [Cu2+], plus Monte Carlo-based estimates of the uncertainty in [Cu2+]. Jointly modeling titration data at low and high analytical windows leads to an efficient adaptation of the recently proposed "overload" approach to calibrating ACSV/CLE measurements. Application of the method to published data sets yields model results with greater accuracy and precision than originally obtained. The discrete ligand-class model is also re-parametrized using humic and fulvic acids, the L1 class (K1 = 10^13 M^-1), and strong ligands (LS) with KS > K1 as "natural components". This approach suggests that Cu complexation in NW Mediterranean Sea water can be well represented as 0.8 ± 0.3/0.2 mg humic equiv/L, 13 ± 1 nM L1, and 2.5 ± 0.1 nM LS with [Cu]T = 3 nM. In coastal seawater from Narragansett Bay, RI, Cu speciation can be modeled as 0.6 ± 0.1 mg humic equiv/L and 22 ± 1 nM L1, or approximately 12 nM L1 and approximately 9 nM LS, with [Cu]T = 13 nM. In both waters, the large excess
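
    The one-metal/multi-ligand equilibrium underlying such titration models can be sketched numerically; here bisection stands in for the paper's analytical two-ligand solution, and the concentrations echo the Mediterranean example above:

    ```python
    def free_metal(M_T, ligands):
        """Free metal ion concentration [M] for one metal and several discrete
        ligand classes, from the mass balance
            M_T = [M] + sum_i K_i * [Li]_T * [M] / (1 + K_i * [M]),
        solved by bisection (a numeric stand-in for the analytical solution)."""
        def residual(m):
            bound = sum(K * LT * m / (1.0 + K * m) for K, LT in ligands)
            return m + bound - M_T
        lo, hi = 0.0, M_T            # residual is negative at 0, positive at M_T
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if residual(mid) > 0.0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    # 3 nM total Cu with 13 nM of L1 (K = 1e13 /M) and 2.5 nM of a stronger
    # ligand class (K = 1e14 /M assumed here for illustration)
    ligands = [(1e13, 13e-9), (1e14, 2.5e-9)]
    m = free_metal(3e-9, ligands)
    print(f"{m:.2e}")   # nearly all Cu is complexed; pCu is close to 14
    ```

    Because the ligands are in excess of the total Cu, the free ion concentration ends up orders of magnitude below [Cu]T, which is the quantity the titration modeling is ultimately trying to pin down.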

  13. The XXL Survey . IV. Mass-temperature relation of the bright cluster sample

    NASA Astrophysics Data System (ADS)

    Lieu, M.; Smith, G. P.; Giles, P. A.; Ziparo, F.; Maughan, B. J.; Démoclès, J.; Pacaud, F.; Pierre, M.; Adami, C.; Bahé, Y. M.; Clerc, N.; Chiappetti, L.; Eckert, D.; Ettori, S.; Lavoie, S.; Le Fevre, J. P.; McCarthy, I. G.; Kilbinger, M.; Ponman, T. J.; Sadibekova, T.; Willis, J. P.

    2016-06-01

    Context. The XXL Survey is the largest survey carried out by XMM-Newton. Covering an area of 50 deg², the survey contains ~450 galaxy clusters out to a redshift of ~2 and to an X-ray flux limit of ~5 × 10^-15 erg s^-1 cm^-2. This paper is part of the first release of XXL results, focussed on the bright cluster sample. Aims: We investigate the scaling relation between weak-lensing mass and X-ray temperature for the brightest clusters in XXL. The scaling relation discussed in this article is used to estimate the mass of all 100 clusters in XXL-100-GC. Methods: Based on a subsample of 38 objects that lie within the intersection of the northern XXL field and the publicly available CFHTLenS shear catalog, we derive the weak-lensing mass of each system with careful consideration of the systematics. The clusters lie at redshifts above 0.1 and span the temperature range T ≃ 1-5 keV. We combine our sample with an additional 58 clusters from the literature, increasing the range to T ≃ 1-10 keV. To date, this is the largest sample of clusters with weak-lensing mass measurements that has been used to study the mass-temperature relation. Results: The mass-temperature relation fit (M ∝ T^b) to the XXL clusters returns a slope and an intrinsic scatter σ_ln M|T ≃ 0.53; the scatter is dominated by disturbed clusters. The fit to the combined sample of 96 clusters is in tension with self-similarity, with b = 1.67 ± 0.12 and σ_ln M|T ≃ 0.41. Conclusions: Overall our results demonstrate the feasibility of ground-based weak-lensing scaling relation studies down to cool systems of ~1 keV temperature and highlight that the current data and samples limit our statistical precision. As such, we are unable to determine whether the validity of hydrostatic equilibrium is a function of halo mass. An enlarged sample of cool systems, deeper weak-lensing data, and robust modelling of the selection function will help to explore these issues further. Based on observations obtained with XMM-Newton, an ESA

  14. The Effect of Sample Size on Latent Growth Models.

    ERIC Educational Resources Information Center

    Hamilton, Jennifer; Gagne, Phillip E.; Hancock, Gregory R.

    A Monte Carlo simulation approach was taken to investigate the effect of sample size on a variety of latent growth models. A fully balanced experimental design was implemented, with samples drawn from multivariate normal populations specified to represent 12 unique growth models. The models varied factorially by crossing number of time points,…

  15. Flight summaries and temperature climatology at airliner cruise altitudes from GASP (Global Atmospheric Sampling Program) data

    NASA Technical Reports Server (NTRS)

    Nastrom, G. D.; Jasperson, W. H.

    1983-01-01

    Temperature data obtained by the Global Atmospheric Sampling Program (GASP) during the period March 1975 to July 1979 are compiled to form flight summaries of static air temperature and a geographic temperature climatology. The flight summaries include the height and location of the coldest observed temperature, as well as the mean flight level, the mean temperature, and the standard deviation of temperature for each flight and for flight segments. These summaries are ordered by route and month. The temperature climatology was computed from all statistically independent temperature data for each flight. The grid used consists of 5 deg latitude, 30 deg longitude and 2000 feet vertical resolution from FL270 to FL430 for each month of the year. The number of statistically independent observations, their mean and standard deviation, and the empirical 98, 50, 16, 2 and 0.3 probability percentiles are presented.
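The per-cell climatology statistics described above (count, mean, standard deviation, and the empirical percentiles) can be sketched as follows; the data are synthetic and the grid bookkeeping is omitted:

```python
import numpy as np

# Hypothetical sample of statistically independent static air temperatures
# (deg C) for one grid cell at one flight level; the GASP climatology
# reports, per cell, the count, mean, standard deviation, and the
# 98, 50, 16, 2 and 0.3 probability percentiles.
rng = np.random.default_rng(42)
temps = rng.normal(-55.0, 3.0, 500)    # synthetic observations

summary = {
    "n": temps.size,
    "mean": temps.mean(),
    "std": temps.std(ddof=1),
    "percentiles": np.percentile(temps, [98, 50, 16, 2, 0.3]),
}
```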

  16. Metamorphism during temperature gradient with undersaturated advective airflow in a snow sample

    NASA Astrophysics Data System (ADS)

    Ebner, Pirmin Philipp; Schneebeli, Martin; Steinfeld, Aldo

    2016-04-01

    Snow at or close to the surface commonly undergoes temperature gradient metamorphism under advective flow, which alters its microstructure and physical properties. Time-lapse X-ray microtomography is applied to investigate the structural dynamics of temperature gradient snow metamorphism exposed to an advective airflow under controlled laboratory conditions. Cold saturated air was blown into the snow samples at the inlet and warmed up while flowing across the sample under a temperature gradient of around 50 K m-1. Changes in the porous ice structure were observed at mid-height of the snow sample. Sublimation occurred because the incoming air became slightly undersaturated with respect to the warmer ice matrix. Diffusion of water vapor opposite to the direction of the temperature gradient counteracted the mass transport of advection. The total net ice change was therefore negligible, leading to a constant porosity profile. However, the strong recrystallization of water molecules in snow may impact its isotopic or chemical content.

  17. Two-temperature models for nitrogen dissociation

    NASA Astrophysics Data System (ADS)

    da Silva, M. Lino; Guerra, V.; Loureiro, J.

    2007-12-01

    Accurate sets of nitrogen state-resolved dissociation rates have been reduced to two-temperature (translational T and vibrational Tv) dissociation rates. The analysis of these two-temperature dissociation rates shows evidence of two different dissociation behaviors. For Tv < 0.3 T dissociation proceeds predominantly from the lower-lying vibrational levels, whereas for Tv > 0.3 T dissociation proceeds predominantly from the near-dissociative vibrational levels, with an abrupt change of behavior at Tv = 0.3 T. These two-temperature sets have then been used as a benchmark for comparison against popular multitemperature dissociation models (Park, Hansen, Marrone-Treanor, Hammerling, Losev-Shatalov, Gordiets, Kuznetsov, and Macheret-Fridman). This allowed us to verify the accuracy of each theoretical model and to propose adequate values for the semi-empirical parameters present in the different theories. The Macheret-Fridman model, which acknowledges the existence of the two aforementioned dissociation regimes, has been found to provide significantly more accurate results than the other models. Although these different theoretical approaches have been tested and validated solely for nitrogen dissociation processes, it is reasonable to expect that the general conclusions of this work, regarding the adequacy of the different dissociation models, could be extended to the description of arbitrary diatomic dissociation processes.
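As an illustration of a two-temperature rate model of the kind compared above, here is a sketch of the Park model with controlling temperature Ta = T^q · Tv^(1-q); the coefficients below are representative values for N2 dissociation, not the paper's fitted parameters:

```python
import math

def park_rate(T, Tv, A=7.0e21, n=-1.6, theta_d=113200.0, q=0.5):
    """Park two-temperature dissociation rate (sketch).
    Ta is the controlling temperature; A (cm^3 mol^-1 s^-1 units
    folded in), n, and the characteristic dissociation temperature
    theta_d (K) are representative N2 values, not the paper's fit."""
    Ta = T**q * Tv**(1 - q)
    return A * Ta**n * math.exp(-theta_d / Ta)

k_eq = park_rate(10000.0, 10000.0)    # thermal equilibrium
k_noneq = park_rate(10000.0, 3000.0)  # vibrationally cold gas
```

Vibrational nonequilibrium (Tv < T) lowers Ta and hence the predicted rate, which is the qualitative behavior the benchmark comparison probes.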

  18. Meth math: modeling temperature responses to methamphetamine.

    PubMed

    Molkov, Yaroslav I; Zaretskaia, Maria V; Zaretsky, Dmitry V

    2014-04-15

    Methamphetamine (Meth) can evoke extreme hyperthermia, which correlates with neurotoxicity and death in laboratory animals and humans. The objective of this study was to uncover the mechanisms of the complex dose dependence of temperature responses to Meth by mathematical modeling of the neuronal circuitry. On the basis of previous studies, we composed an artificial neural network with the core comprising three sequentially connected nodes: excitatory, medullary, and sympathetic preganglionic neuronal (SPN). Meth directly stimulated the excitatory node, an inhibitory drive targeted the medullary node, and, at high doses, an additional excitatory drive affected the SPN node. All model parameters (weights of connections, sensitivities, and time constants) were fitted to experimental time series of temperature responses to 1, 3, 5, and 10 mg/kg Meth. Modeling suggested that the temperature response to the lowest dose of Meth, which caused an immediate and short hyperthermia, involves neuronal excitation at a supramedullary level. The delayed response to intermediate doses of Meth results from neuronal inhibition at the medullary level. Finally, the rapid and robust increase in body temperature induced by the highest dose of Meth involves activation of the high-dose excitatory drive. Impairment of the inhibitory mechanism can provoke a life-threatening temperature rise, making it a plausible cause of fatal hyperthermia in Meth users. We expect that studying putative neuronal sites of Meth action and the neuromediators involved in a detailed model of this system may lead to more effective strategies for prevention and treatment of hyperthermia induced by amphetamine-like stimulants.

  19. Spatiotemporal modeling of node temperatures in supercomputers

    DOE PAGES

    Storlie, Curtis Byron; Reich, Brian James; Rust, William Newton; Ticknor, Lawrence O.; Bonnie, Amanda Marie; Montoya, Andrew J.; Michalak, Sarah E.

    2016-06-10

    Los Alamos National Laboratory (LANL) is home to many large supercomputing clusters. These clusters require an enormous amount of power (~500-2000 kW each), and most of this energy is converted into heat. Thus, cooling the components of the supercomputer becomes a critical and expensive endeavor. Recently a project was initiated to investigate the effect that changes to the cooling system in a machine room had on three large machines that were housed there. Coupled with this goal was the aim to develop a general good practice for characterizing the effect of cooling changes and monitoring machine node temperatures in this and other machine rooms. This paper focuses on the statistical approach used to quantify the effect that several cooling changes to the room had on the temperatures of the individual nodes of the computers. The largest cluster in the room has 1,600 nodes that run a variety of jobs during general use. Since extreme temperatures are important, a Normal distribution plus a generalized Pareto distribution for the upper tail is used to model the marginal distribution, along with a Gaussian process copula to account for spatio-temporal dependence. A Gaussian Markov random field (GMRF) model is used to model the spatial effects on the node temperatures as the cooling changes take place. This model is then used to assess the condition of the node temperatures after each change to the room. The analysis approach was used to uncover the cause of a problematic episode of overheating nodes on one of the supercomputing clusters. Lastly, this same approach can easily be applied to monitor and investigate cooling systems at other data centers as well.
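The "Normal body with a generalized Pareto upper tail" marginal model can be sketched as follows; this is a simplified method-of-moments fit to exceedances over a high threshold, on synthetic data, not the authors' Bayesian implementation:

```python
import numpy as np

def fit_gpd_tail(x, q=0.90):
    """Method-of-moments fit of a generalized Pareto distribution to
    exceedances above the q-th sample quantile -- a minimal sketch of
    modeling the upper tail of node temperatures separately from the
    Normal body. For GPD with mean m and variance v:
    shape xi = (1 - m^2/v)/2, scale sigma = m*(1 - xi)."""
    u = np.quantile(x, q)            # tail threshold
    exc = x[x > u] - u               # exceedances over threshold
    m, v = exc.mean(), exc.var(ddof=1)
    xi = 0.5 * (1.0 - m * m / v)     # shape parameter
    sigma = m * (1.0 - xi)           # scale parameter
    return u, xi, sigma

rng = np.random.default_rng(1)
node_temps = rng.normal(35.0, 4.0, 20000)   # synthetic node temps (C)
u, xi, sigma = fit_gpd_tail(node_temps)
```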

  20. The XXL Survey. III. Luminosity-temperature relation of the bright cluster sample

    NASA Astrophysics Data System (ADS)

    Giles, P. A.; Maughan, B. J.; Pacaud, F.; Lieu, M.; Clerc, N.; Pierre, M.; Adami, C.; Chiappetti, L.; Démoclés, J.; Ettori, S.; Le Févre, J. P.; Ponman, T.; Sadibekova, T.; Smith, G. P.; Willis, J. P.; Ziparo, F.

    2016-06-01

    Context. The XXL Survey is the largest homogeneous survey carried out with XMM-Newton. Covering an area of 50 deg2, the survey contains several hundred galaxy clusters out to a redshift of ~2 above an X-ray flux limit of ~5 × 10-15 erg cm-2 s-1. This paper belongs to the first series of XXL papers focusing on the bright cluster sample. Aims: We investigate the luminosity-temperature (LT) relation for the brightest clusters detected in the XXL Survey, taking fully into account the selection biases. We investigate the form of the LT relation, placing constraints on its evolution. Methods: We have classified the 100 brightest clusters in the XXL Survey based on their measured X-ray flux. These 100 clusters have been analysed to determine their luminosity and temperature to evaluate the LT relation. We used three methods to fit the form of the LT relation, with two of these methods providing a prescription to fully take into account the selection effects of the survey. We measure the evolution of the LT relation internally using the broad redshift range of the sample. Results: Taking fully into account selection effects, we find a slope of the bolometric LT relation of BLT = 3.08 ± 0.15, steeper than the self-similar expectation (BLT = 2). Our best-fit result for the evolution factor is E(z)1.64 ± 0.77, fully consistent with "strong self-similar" evolution where clusters scale self-similarly with both mass and redshift. However, this result is marginally stronger than "weak self-similar" evolution, where clusters scale with redshift alone. We investigate the sensitivity of our results to the assumptions made in our fitting model, finding that using an external LT relation as a low-z baseline can have a profound effect on the measured evolution. However, more clusters are needed in order to break the degeneracy between the choice of likelihood model and the assumed mass-temperature relation in the derived evolution. Based on observations obtained with XMM-Newton, an ESA science

  1. Further development of the temperature model

    SciTech Connect

    Entner, P.M.; Guðmundsson, G.A.

    1996-10-01

    At the TMS Annual Meeting 1995 a model was presented to control the bath temperature of electrolytic pots. Regression methods calculate the parameters of the model equations from past pot data. With these equations, optimal set values of the pot voltage and aluminum fluoride feeding rate are determined. A pot may be in the reactive or the inactive pot state. In the reactive state it responds readily to a change, such as an increased AlF{sub 3} feeding rate, with the corresponding reaction, i.e. an increased AlF{sub 3} concentration. In the inactive state, however, the bath temperature or the aluminum fluoride concentration may stay constant even when pot operation makes strong efforts to change the pot state. Especially interesting is the transition from one state to the other. To predict the moment of transition the temperature model uses indicators. These indicators are derived from the time behavior of pot parameters such as metal height, bath height, etc. With the knowledge of a coming transition the model adjusts the optimal pot parameters. A mechanism is presented to explain the reactive and inactive pot states and the transitions.

  2. Preliminary Proactive Sample Size Determination for Confirmatory Factor Analysis Models

    ERIC Educational Resources Information Center

    Koran, Jennifer

    2016-01-01

    Proactive preliminary minimum sample size determination can be useful for the early planning stages of a latent variable modeling study to set a realistic scope, long before the model and population are finalized. This study examined existing methods and proposed a new method for proactive preliminary minimum sample size determination.

  3. Sample size matters: Investigating the optimal sample size for a logistic regression debris flow susceptibility model

    NASA Astrophysics Data System (ADS)

    Heckmann, Tobias; Gegg, Katharina; Becht, Michael

    2013-04-01

    Statistical approaches to landslide susceptibility modelling on the catchment and regional scale are used much more frequently than heuristic and physically based approaches. In the present study, we deal with the problem of the optimal sample size for a logistic regression model. More specifically, a stepwise approach has been chosen in order to select those independent variables (from a number of derivatives of a digital elevation model and landcover data) that best explain the spatial distribution of debris flow initiation zones in two neighbouring central alpine catchments in Austria (used mutually for model calculation and validation). In order to minimise problems arising from spatial autocorrelation, we sample a single raster cell from each debris flow initiation zone within an inventory. In addition, as suggested by previous work using the "rare events logistic regression" approach, we take a sample of the remaining "non-event" raster cells. The recommendations given in the literature on the size of this sample appear to be motivated by practical considerations, e.g. the time and cost of acquiring data for non-event cases, which do not apply to the case of spatial data. In our study, we aim to find empirically an "optimal" sample size that avoids two problems: First, a sample that is too large will violate the independent-sample assumption because the independent variables are spatially autocorrelated; hence, a variogram analysis leads to a sample size threshold above which the average distance between sampled cells falls below the autocorrelation range of the independent variables. Second, if the sample is too small, repeated sampling will lead to very different results, i.e. the selected independent variables and hence the result of a single model calculation will be extremely dependent on the choice of non-event cells. Using a Monte-Carlo analysis with stepwise logistic regression, 1000 models are calculated for a wide range of sample sizes. For each sample size
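The second problem (instability of the fitted model under repeated non-event sampling) can be illustrated with a small Monte-Carlo sketch; the univariate Newton-method logistic fit and the synthetic one-predictor data are hypothetical stand-ins for the stepwise model and the DEM-derived predictors used in the study:

```python
import numpy as np

def fit_logistic(X, y, iters=50):
    """Univariate logistic regression (intercept + slope) fitted by
    Newton's method -- a minimal stand-in for the stepwise model."""
    Z = np.column_stack([np.ones_like(X), X])
    w = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Z @ w))
        g = Z.T @ (y - p)                  # gradient of log-likelihood
        W = p * (1 - p)
        H = Z.T @ (Z * W[:, None])         # observed information
        w += np.linalg.solve(H, g)
    return w

rng = np.random.default_rng(7)

def coef_spread(n_nonevent, n_rep=200):
    """Std. dev. of the fitted slope across repeated non-event samples,
    holding one synthetic set of 60 debris-flow (event) cells fixed."""
    x_event = rng.normal(1.0, 1.0, 60)     # event cells (fixed per call)
    slopes = []
    for _ in range(n_rep):
        x_non = rng.normal(0.0, 1.0, n_nonevent)
        X = np.concatenate([x_event, x_non])
        y = np.concatenate([np.ones(60), np.zeros(n_nonevent)])
        slopes.append(fit_logistic(X, y)[1])
    return np.std(slopes)
```

Comparing `coef_spread(30)` with `coef_spread(300)` shows the slope estimate stabilizing as the non-event sample grows, which is the effect the abstract's Monte-Carlo analysis quantifies.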

  4. Modeling quantum fluid dynamics at nonzero temperatures

    PubMed Central

    Berloff, Natalia G.; Brachet, Marc; Proukakis, Nick P.

    2014-01-01

    The detailed understanding of the intricate dynamics of quantum fluids, in particular in the rapidly growing subfield of quantum turbulence which elucidates the evolution of a vortex tangle in a superfluid, requires an in-depth understanding of the role of finite temperature in such systems. The Landau two-fluid model is the most successful hydrodynamical theory of superfluid helium, but by the nature of the scale separations it cannot give an adequate description of the processes involving vortex dynamics and interactions. In our contribution we introduce a framework based on a nonlinear classical-field equation that is mathematically identical to the Landau model and provides a mechanism for severing and coalescence of vortex lines, so that the questions related to the behavior of quantized vortices can be addressed self-consistently. The correct equation of state as well as nonlocality of interactions that leads to the existence of the roton minimum can also be introduced in such description. We review and apply the ideas developed for finite-temperature description of weakly interacting Bose gases as possible extensions and numerical refinements of the proposed method. We apply this method to elucidate the behavior of the vortices during expansion and contraction following the change in applied pressure. We show that at low temperatures, during the contraction of the vortex core as the negative pressure grows back to positive values, the vortex line density grows through a mechanism of vortex multiplication. This mechanism is suppressed at high temperatures. PMID:24704874

  5. Modeling Low-temperature Geochemical Processes

    NASA Astrophysics Data System (ADS)

    Nordstrom, D. K.

    2003-12-01

    Geochemical modeling has become a popular and useful tool for a wide number of applications from research on the fundamental processes of water-rock interactions to regulatory requirements and decisions regarding permits for industrial and hazardous wastes. In low-temperature environments, generally thought of as those in the temperature range of 0-100 °C and close to atmospheric pressure (1 atm=1.01325 bar=101,325 Pa), complex hydrobiogeochemical reactions participate in an array of interconnected processes that affect us, and that, in turn, we affect. Understanding these complex processes often requires tools that are sufficiently sophisticated to portray multicomponent, multiphase chemical reactions yet transparent enough to reveal the main driving forces. Geochemical models are such tools. The major processes that they are required to model include mineral dissolution and precipitation; aqueous inorganic speciation and complexation; solute adsorption and desorption; ion exchange; oxidation-reduction (redox) transformations; gas uptake or production; organic matter speciation and complexation; evaporation; dilution; water mixing; reaction during fluid flow; reaction involving biotic interactions; and photoreaction. These processes occur in rain, snow, fog, dry atmosphere, soils, bedrock weathering, streams, rivers, lakes, groundwaters, estuaries, brines, and diagenetic environments. Geochemical modeling attempts to understand the redistribution of elements and compounds, through anthropogenic and natural means, over a large range of scales from nanometer to global. "Aqueous geochemistry" and "environmental geochemistry" are often used interchangeably with "low-temperature geochemistry" to emphasize hydrologic or environmental objectives. Recognition of the strategy or philosophy behind the use of geochemical modeling is not often discussed or explicitly described. Plummer (1984, 1992) and Parkhurst and Plummer (1993) compare and contrast two approaches for

  6. Preferential sampling and Bayesian geostatistics: Statistical modeling and examples.

    PubMed

    Cecconi, Lorenzo; Grisotto, Laura; Catelan, Dolores; Lagazio, Corrado; Berrocal, Veronica; Biggeri, Annibale

    2016-08-01

    Preferential sampling refers to any situation in which the spatial process and the sampling locations are not stochastically independent. In this paper, we present two examples of geostatistical analysis in which the usual assumption of stochastic independence between the point process and the measurement process is violated. To account for preferential sampling, we specify a flexible and general Bayesian geostatistical model that includes a shared spatial random component. We apply the proposed model to two different case studies that allow us to highlight three different modeling and inferential aspects of geostatistical modeling under preferential sampling: (1) continuous or finite spatial sampling frame; (2) underlying causal model and relevant covariates; and (3) inferential goals related to mean prediction surface or prediction uncertainty.

  8. Thermal Response Modeling System for a Mars Sample Return Vehicle

    NASA Technical Reports Server (NTRS)

    Chen, Y.-K.; Milos, Frank S.; Arnold, Jim (Technical Monitor)

    2001-01-01

    A multi-dimensional, coupled thermal response modeling system for analysis of hypersonic entry vehicles is presented. The system consists of a high fidelity Navier-Stokes equation solver (GIANTS), a two-dimensional implicit thermal response, pyrolysis and ablation program (TITAN), and a commercial finite-element thermal and mechanical analysis code (MARC). The simulations performed by this integrated system include hypersonic flowfield, fluid and solid interaction, ablation, shape change, pyrolysis gas generation and flow, and thermal response of heatshield and structure. The thermal response of the heatshield is simulated using TITAN, and that of the underlying structure is simulated using MARC. The ablating heatshield is treated as an outer boundary condition of the structure, and continuity conditions of temperature and heat flux are imposed at the interface between TITAN and MARC. Aerothermal environments with fluid and solid interaction are predicted by coupling TITAN and GIANTS through surface energy balance equations. With this integrated system, the aerothermal environments for an entry vehicle and the thermal response of the entire vehicle can be obtained simultaneously. Representative computations for a flat-faced arc-jet test model and a proposed Mars sample return capsule are presented and discussed.

  9. Thermal Response Modeling System for a Mars Sample Return Vehicle

    NASA Technical Reports Server (NTRS)

    Chen, Y.-K.; Milos, F. S.

    2002-01-01

    A multi-dimensional, coupled thermal response modeling system for analysis of hypersonic entry vehicles is presented. The system consists of a high fidelity Navier-Stokes equation solver (GIANTS), a two-dimensional implicit thermal response, pyrolysis and ablation program (TITAN), and a commercial finite element thermal and mechanical analysis code (MARC). The simulations performed by this integrated system include hypersonic flowfield, fluid and solid interaction, ablation, shape change, pyrolysis gas generation and flow, and thermal response of heatshield and structure. The thermal response of the heatshield is simulated using TITAN, and that of the underlying structure is simulated using MARC. The ablating heatshield is treated as an outer boundary condition of the structure, and continuity conditions of temperature and heat flux are imposed at the interface between TITAN and MARC. Aerothermal environments with fluid and solid interaction are predicted by coupling TITAN and GIANTS through surface energy balance equations. With this integrated system, the aerothermal environments for an entry vehicle and the thermal response of the entire vehicle can be obtained simultaneously. Representative computations for a flat-faced arc-jet test model and a proposed Mars sample return capsule are presented and discussed.

  10. Improved Estimation Model of Lunar Surface Temperature

    NASA Astrophysics Data System (ADS)

    Zheng, Y.

    2015-12-01

    Lunar surface temperature (LST) is of great scientific interest, both for uncovering thermal properties and for designing lunar robotic or manned landing missions. In this paper, we propose an improved LST estimation model based on the one-dimensional partial differential equation (PDE). The shadow and surface tilt effects were combined into the model. Using the Chang'E (CE-1) DEM data from the Laser Altimeter (LA), the topographic effect can be estimated with an improved effective solar irradiance (ESI) model. In Fig. 1, the highest LST of the global Moon has been estimated at a spatial resolution of 1 degree/pixel, applying the solar albedo data derived from Clementine UV-750nm in solving the PDE function. The topographic effect is significant in the LST map. The maria, highlands, and craters can be identified clearly. The maximum daytime LST occurs in regions with low albedo, e.g. Mare Procellarum, Mare Serenitatis and Mare Imbrium. The results are consistent with the Diviner measurements of the LRO mission. Fig. 2 shows the temperature variations at the center of the disk over one year, assuming the Moon to be a standard sphere. The seasonal variation of LST at the equator is about 10 K. The highest LST occurs in early May. Fig. 1. Estimated maximum surface temperatures of the global Moon at a spatial resolution of 1 degree/pixel.
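A one-dimensional heat-conduction PDE of the kind underlying such LST models can be sketched with an explicit finite-difference scheme; the diffusivity, grid, and sinusoidal surface forcing below are illustrative, not the paper's parameters:

```python
import numpy as np

# Explicit finite-difference sketch of dT/dt = kappa * d2T/dz2 for a
# regolith column driven by a periodic surface temperature. All values
# (kappa, grid, forcing amplitude) are hypothetical round numbers.
kappa = 1.0e-8              # thermal diffusivity (m^2/s), regolith-like
dz, nz = 0.01, 60           # 1 cm cells, 60 cm column
dt = 0.4 * dz**2 / kappa    # explicit stability limit (CFL number 0.4)
period = 2.36e6             # one lunation, in seconds
T = np.full(nz, 250.0)      # initial temperature (K)

t = 0.0
for _ in range(8000):
    T[0] = 250.0 + 100.0 * np.sin(2 * np.pi * t / period)  # surface forcing
    T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]           # insulated lower boundary
    t += dt
```

The diurnal wave damps out with depth (skin depth ~ sqrt(kappa * period / pi), here roughly 9 cm), so the bottom of the column stays near the 250 K mean while the surface swings by ±100 K.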

  11. An environmental sampling model for combining judgment and randomly placed samples

    SciTech Connect

    Sego, Landon H.; Anderson, Kevin K.; Matzke, Brett D.; Sieber, Karl; Shulman, Stanley; Bennett, James; Gillen, M.; Wilson, John E.; Pulsipher, Brent A.

    2007-08-23

    In the event of the release of a lethal agent (such as anthrax) inside a building, law enforcement and public health responders take samples to identify and characterize the contamination. Sample locations may be rapidly chosen based on available incident details and professional judgment. To achieve greater confidence about whether a room or zone was contaminated, or to certify that detectable contamination is not present after decontamination, we consider a Bayesian model for combining the information gained from both judgment and randomly placed samples. We investigate the sensitivity of the model to the parameter inputs and make recommendations for its practical use.
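A drastically simplified sketch of combining judgment and random samples in a Bayesian update, assuming (hypothetically) independent detections with a higher hit probability for judgment samples; the published model is more general:

```python
def posterior_contaminated(prior, n_judge, p_judge, n_rand, p_rand):
    """Posterior probability that a zone is contaminated after all
    samples test negative. Simplifying assumptions: each judgment
    sample independently detects contamination with probability
    p_judge, each random sample with p_rand, and there are no
    false positives."""
    p_all_neg_if_cont = (1 - p_judge) ** n_judge * (1 - p_rand) ** n_rand
    num = p_all_neg_if_cont * prior            # Bayes numerator
    return num / (num + (1 - prior))           # P(neg | clean) = 1

# 10 negative judgment samples and 20 negative random samples shrink
# a 50% prior belief in contamination to well under 1%
post = posterior_contaminated(0.5, 10, 0.3, 20, 0.1)
```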

  12. Modeling forces in high-temperature superconductors

    SciTech Connect

    Turner, L. R.; Foster, M. W.

    1997-11-18

    We have developed a simple model that uses computed shielding currents to determine the forces acting on a high-temperature superconductor (HTS). The model has been applied to measurements of the force between HTS and permanent magnets (PM). Results show the expected hysteretic variation of force as the HTS moves first toward and then away from a permanent magnet, including the reversal of the sign of the force. Optimization of the shielding currents is carried out through a simulated annealing algorithm in a C++ program that repeatedly calls a commercial electromagnetic software code. Agreement with measured forces is encouraging.
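The optimization loop can be sketched as generic simulated annealing on a toy cost function; in the paper each cost evaluation instead calls a commercial electromagnetic solver, and the unknowns are shielding currents:

```python
import math, random

def anneal(cost, x0, step=0.5, t0=1.0, cooling=0.995, iters=4000, seed=3):
    """Generic simulated-annealing minimizer (sketch). Accepts a worse
    candidate with probability exp(-delta/T); T decays geometrically."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = list(x0), fx
    T = t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        T *= cooling
    return best, fbest

# Toy stand-in for the shielding-current problem: choose three current
# amplitudes so the (fictitious) net force matches a measured target.
target = 2.5
cost = lambda c: (sum(c) - target) ** 2   # mismatch to measured force
best, fbest = anneal(cost, [0.0, 0.0, 0.0])
```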

  13. Analysis of a workplace air particulate sample by synchronous luminescence and room-temperature phosphorescence

    SciTech Connect

    Vo-Dinh, T.; Gammage, R.B.; Martinez, P.R.

    1981-02-01

    An analysis of a XAD-2 resin extract of a particulate air sample collected in an industrial environment was conducted using two simple spectroscopic methods performed at ambient temperature: the synchronous luminescence and room-temperature phosphorescence techniques. Results of the analysis of 13 polynuclear aromatic compounds, including anthracene, benzo(a)pyrene, benzo(e)pyrene, 2,3-benzofluorene, chrysene, 1,2,5,6-dibenzanthracene, dibenzthiophene, fluoranthene, fluorene, phenanthrene, perylene, pyrene, and tetracene, are reported.

  14. Comparison of Single-Point and Continuous Sampling Methods for Estimating Residential Indoor Temperature and Humidity.

    PubMed

    Johnston, James D; Magnusson, Brianna M; Eggett, Dennis; Collingwood, Scott C; Bernhardt, Scott A

    2015-01-01

    Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hr, and 12-days) in 9 northern Utah homes, from March-June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions.
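The two statistics quoted above (Pearson r and regression slope β between single-point readings and data-logger means) can be reproduced on synthetic data as follows; the noise level is illustrative, not taken from the study:

```python
import numpy as np

# Simulate 114 paired measurements: a data-logger 30-min mean and a
# simultaneous handheld spot reading with instrument/placement noise.
rng = np.random.default_rng(5)
logger_mean = rng.normal(21.0, 2.0, 114)        # 30-min means (deg C)
spot = logger_mean + rng.normal(0.0, 1.3, 114)  # single-point readings

r = np.corrcoef(spot, logger_mean)[0, 1]        # Pearson correlation
beta = np.polyfit(spot, logger_mean, 1)[0]      # regression slope
```

Even modest spot-measurement noise attenuates the slope below 1, which is consistent with the moderate r and β values reported in the abstract.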

  16. The use of ESR technique for assessment of heating temperatures of archaeological lentil samples.

    PubMed

    Aydaş, Canan; Engin, Birol; Dönmez, Emel Oybak; Belli, Oktay

    2010-01-01

    Heat-induced paramagnetic centers in modern and archaeological lentils (Lens culinaris, Medik.) were studied by the X-band (9.3 GHz) electron spin resonance (ESR) technique. The modern red lentil samples were heated in an electrical furnace at increasing temperatures in the range 70-500 degrees C. The ESR spectral parameters (the intensity, g-value and peak-to-peak line width) of the heat-induced organic radicals were investigated for modern red lentil (Lens culinaris, Medik.) samples. The obtained ESR spectra indicate that the relative number of heat-induced paramagnetic species and the peak-to-peak line widths depend on the temperature and heating time of the modern lentil. The g-values also depend on the heating temperature but not on the heating time. Heated modern red lentils produced a range of organic radicals with g-values from g=2.0062 to 2.0035. ESR signals of carbonised archaeological lentil samples from two archaeological deposits of the Van province in Turkey were studied, and g-values, peak-to-peak line widths, intensities and elemental compositions were compared with those obtained for modern samples in order to assess at which temperature these archaeological lentils were heated in prehistoric sites. The maximum temperatures of previous heating of the carbonised UA5 and Y11 lentil seeds are about 500 degrees C and above 500 degrees C, respectively.
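The g-value of an ESR line follows from the resonance condition hν = g·μB·B; here is a small worked example at the X-band frequency quoted above (the resonance field value is illustrative):

```python
# Converting an ESR resonance field to a g-value via g = h*nu / (mu_B * B).
H = 6.62607015e-34       # Planck constant (J s)
MU_B = 9.2740100783e-24  # Bohr magneton (J/T)

def g_value(freq_hz, field_tesla):
    return H * freq_hz / (MU_B * field_tesla)

# At 9.3 GHz, a resonance near 0.33 T corresponds to g close to the
# free-electron value of ~2.0023 (field chosen for illustration)
g = g_value(9.3e9, 0.3317)
```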

  17. The use of ESR technique for assessment of heating temperatures of archaeological lentil samples

    NASA Astrophysics Data System (ADS)

    Aydaş, Canan; Engin, Birol; Dönmez, Emel Oybak; Belli, Oktay

    2010-01-01

    Heat-induced paramagnetic centers in modern and archaeological lentils (Lens culinaris, Medik.) were studied by the X-band (9.3 GHz) electron spin resonance (ESR) technique. The modern red lentil samples were heated in an electrical furnace at increasing temperatures in the range 70-500 °C. The ESR spectral parameters (intensity, g-value and peak-to-peak line width) of the heat-induced organic radicals were investigated for modern red lentil (Lens culinaris, Medik.) samples. The obtained ESR spectra indicate that the relative number of heat-induced paramagnetic species and the peak-to-peak line widths depend on the temperature and heating time of the modern lentil. The g-values also depend on the heating temperature but not on the heating time. Heated modern red lentils produced a range of organic radicals with g-values from g = 2.0062 to 2.0035. ESR signals of carbonised archaeological lentil samples from two archaeological deposits of the Van province in Turkey were studied, and their g-values, peak-to-peak line widths, intensities and elemental compositions were compared with those obtained for modern samples in order to assess the temperatures at which these archaeological lentils were heated at prehistoric sites. The maximum temperatures previously reached by the carbonised UA5 and Y11 lentil seeds are estimated at about 500 °C and above 500 °C, respectively.

  18. A low-temperature sample mount for an inelastic electron scattering spectrometer

    NASA Astrophysics Data System (ADS)

    Tarrio, C.; Schnatterly, S. E.; Benitez, E. L.

    1990-10-01

    A continuously operable low-temperature (10-20 K) sample mount for a solid-state inelastic electron scattering spectrometer is described. The cooling is achieved by a closed-cycle gas phase He refrigerator. Because the entire sample chamber is at a potential of 300 kV, it must be isolated from ground, requiring computer automation for positioning, and insulating plumbing for the helium. The motion control has a detachable coupling that allows for complete thermal isolation from room temperature. Details and problems encountered in the design are described.

  19. Surface self-diffusion constants at low temperature: Monte Carlo transition state theory with importance sampling

    SciTech Connect

    Voter, A.F.; Doll, J.D.

    1984-06-01

    We present an importance-sampling method which, when combined with a Monte Carlo procedure for evaluating transition state theory rates, allows computation of classically exact, transition state theory surface diffusion constants at arbitrarily low temperature. In the importance-sampling method, a weighting factor is applied to the transition state region, and Metropolis steps are chosen from a special distribution which facilitates transfer between the two important regions of configuration space: the binding site minimum and the saddle point between two binding sites. We apply the method to the diffusion of Rh on Rh(111) and Rh on Rh(100), in the temperature range of existing field ion microscope experiments.
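A minimal 1-D sketch of the importance-sampling idea described above, assuming an invented double-well potential in place of the Rh surface: a weight w(x) boosts sampling of the transition-state (barrier) region, and observables are reweighted by 1/w so the estimate stays unbiased.

```python
import math, random

random.seed(0)

def V(x):
    """Invented double-well potential with a barrier at x = 0."""
    return (x * x - 1.0) ** 2

def w(x):
    """Importance weight boosting the transition-state region."""
    return 10.0 if abs(x) < 0.3 else 1.0

beta = 8.0  # inverse temperature 1/kT (arbitrary units)
x, n_barrier_w, norm_w = -1.0, 0.0, 0.0
for _ in range(200000):
    xp = x + random.uniform(-0.4, 0.4)
    # Metropolis acceptance for the biased distribution w(x) * exp(-beta V)
    a = (w(xp) / w(x)) * math.exp(-beta * (V(xp) - V(x)))
    if random.random() < a:
        x = xp
    # unbiased estimator: reweight each sample by 1/w(x)
    n_barrier_w += (abs(x) < 0.3) / w(x)
    norm_w += 1.0 / w(x)

# equilibrium probability of the barrier region (tiny at low temperature,
# yet well sampled thanks to the weighting factor)
p_barrier = n_barrier_w / norm_w
print(round(p_barrier, 4))
```

The transition state theory rate then follows from such equilibrium ratios; the potential, weight, and temperature here are placeholders, not the paper's Rh surface model.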

  20. Temperature-controlled neutron reflectometry sample cell suitable for study of photoactive thin films

    SciTech Connect

    Yager, Kevin G.; Tanchak, Oleh M.; Barrett, Christopher J.; Watson, Mike J.; Fritzsche, Helmut

    2006-04-15

    We describe a novel cell design intended for the study of photoactive materials using neutron reflectometry. The cell can maintain sample temperature and control the ambient atmospheric environment. Critically, the cell is built with an optical port, enabling light irradiation or light probing of the sample simultaneously with neutron reflectivity measurements. The ability to measure neutron reflectivity with simultaneous temperature ramping and/or light illumination presents unique opportunities for measuring photoactive materials. To validate the cell design, we present preliminary results measuring the photoexpansion of thin films of azobenzene polymer.

  1. The use of variable temperature and magic-angle sample spinning in studies of fulvic acids

    USGS Publications Warehouse

    Earl, W.L.; Wershaw, R. L.; Thorn, K.A.

    1987-01-01

    Intensity distortions and poor signal to noise in the cross-polarization magic-angle sample spinning NMR of fulvic acids were investigated and attributed to molecular mobility in these ostensibly "solid" materials. We have shown that inefficiencies in cross polarization can be overcome by lowering the sample temperature to about -60 °C. These difficulties can be generalized to many other synthetic and natural products. The use of variable temperature and cross-polarization intensity as a function of contact time can yield valuable qualitative information which can aid in the characterization of many materials.

  2. Estimation of sampling error uncertainties in observed surface air temperature change in China

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun

    2016-06-01

    This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with maximum values exceeding 2.0 K², while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values below 0.05 K². In general, negative temperature anomalies existed in each month prior to the 1980s, and warming began thereafter, accelerating in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series differ clearly from those obtained with other uncertainty estimation methods, which is a plausible reason for the inconsistencies between our estimate and those of other studies during this period.
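A trend with an uncertainty of the form quoted above (e.g. 0.51 ± 0.29 K (10 year)⁻¹) can be obtained from an ordinary least-squares fit; the SAT anomaly series below is synthetic, not the study's data.

```python
import numpy as np

# Synthetic annual-mean SAT anomalies (K): a 0.2 K/decade trend plus noise.
years = np.arange(1960, 2010)
rng = np.random.default_rng(1)
sat = 0.02 * (years - years[0]) + rng.normal(0.0, 0.3, years.size)

# OLS fit; cov=True returns the parameter covariance matrix,
# whose [0, 0] entry is the variance of the slope estimate.
coef, cov = np.polyfit(years, sat, 1, cov=True)
trend_per_decade = 10.0 * coef[0]
ci95_per_decade = 10.0 * 1.96 * np.sqrt(cov[0, 0])
print(f"{trend_per_decade:.2f} ± {ci95_per_decade:.2f} K per decade")
```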

  3. Effects of different temperature treatments on biological ice nuclei in snow samples

    NASA Astrophysics Data System (ADS)

    Hara, Kazutaka; Maki, Teruya; Kakikawa, Makiko; Kobayashi, Fumihisa; Matsuki, Atsushi

    2016-09-01

    The heat tolerance of biological ice nucleation activity (INA) depends on the type of ice nucleus. Different temperature treatments may therefore cause varying degrees of inactivation of biological ice nuclei (IN) in precipitation samples. In this study, we measured IN concentration and bacterial INA in snow samples using a drop freezing assay, and compared the results for unheated snow and snow treated at 40 °C and 90 °C. At a measured temperature of -7 °C, the concentration of IN in untreated snow was 100-570 L⁻¹, whereas the concentrations in snow treated at 40 °C and 90 °C were 31-270 L⁻¹ and 2.5-14 L⁻¹, respectively. Heat-sensitive IN inactivated by heating at 40 °C were predominant, accounting for 23-78% of the IN active at -7 °C in untreated samples. Ice nucleation active Pseudomonas strains were also isolated from the snow samples, and heating at 40 °C and 90 °C inactivated these microorganisms. Consequently, different temperature treatments induced varying degrees of inactivation of IN in snow samples. Differences in IN concentration across the range of treatment temperatures might reflect the abundance of different heat-sensitive biological IN components.
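Drop freezing assays of the kind used above are commonly converted to cumulative IN concentrations with Vali's formula K = -ln(f_unfrozen)/V_drop; a sketch with invented drop counts and volume (the abstract does not state which exact conversion was used):

```python
import math

def in_concentration_per_liter(n_total, n_frozen, drop_volume_ml):
    """Cumulative ice-nuclei concentration (per liter of sample water)
    from the fraction of drops remaining unfrozen at a given temperature."""
    f_unfrozen = (n_total - n_frozen) / n_total
    per_ml = -math.log(f_unfrozen) / drop_volume_ml
    return per_ml * 1000.0  # convert mL^-1 to L^-1

# e.g. 30 of 100 drops of 50 uL (0.05 mL) frozen at the read-out temperature
print(round(in_concentration_per_liter(100, 30, 0.05), 1))
```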

  4. Temperature response functions introduce high uncertainty in modelled carbon stocks in cold temperature regimes

    NASA Astrophysics Data System (ADS)

    Portner, H.; Bugmann, H.; Wolf, A.

    2010-11-01

    Models of carbon cycling in terrestrial ecosystems contain formulations for the dependence of respiration on temperature, but the sensitivity of predicted carbon pools and fluxes to these formulations and their parameterization is not well understood. Thus, we performed an uncertainty analysis of soil organic matter decomposition with respect to its temperature dependency using the ecosystem model LPJ-GUESS. We used five temperature response functions (Exponential, Arrhenius, Lloyd-Taylor, Gaussian, Van't Hoff). We determined the parameter confidence ranges of the formulations by nonlinear regression analysis based on eight experimental datasets from Northern Hemisphere ecosystems. We sampled over the confidence ranges of the parameters and ran simulations for each pair of temperature response function and calibration site. We analyzed both the long-term and the short-term heterotrophic soil carbon dynamics over a virtual elevation gradient in southern Switzerland. The temperature relationship of Lloyd-Taylor fitted the overall data set best as the other functions either resulted in poor fits (Exponential, Arrhenius) or were not applicable for all datasets (Gaussian, Van't Hoff). There were two main sources of uncertainty for model simulations: (1) the lack of confidence in the parameter estimates of the temperature response, which increased with increasing temperature, and (2) the size of the simulated soil carbon pools, which increased with elevation, as slower turn-over times lead to higher carbon stocks and higher associated uncertainties. Our results therefore indicate that such projections are more uncertain for higher elevations and hence also higher latitudes, which are of key importance for the global terrestrial carbon budget.
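Two of the five response functions named above, in their common literature forms (the Lloyd-Taylor constants below are the standard published values; R10 and the Arrhenius parameters are placeholders):

```python
import numpy as np

def arrhenius(T_k, A, Ea):
    """Arrhenius response: A * exp(-Ea / (R*T)); A and Ea are placeholders."""
    R = 8.314  # gas constant, J mol^-1 K^-1
    return A * np.exp(-Ea / (R * T_k))

def lloyd_taylor(T_k, R10, E0=308.56, T0=227.13):
    """Lloyd & Taylor (1994) respiration response, normalized so that the
    function equals R10 at the 10 °C (283.15 K) reference temperature."""
    return R10 * np.exp(E0 * (1.0 / (283.15 - T0) - 1.0 / (T_k - T0)))

# At the reference temperature the Lloyd-Taylor response returns R10 exactly,
# and it increases monotonically with temperature above it.
print(round(lloyd_taylor(283.15, 2.0), 2), round(lloyd_taylor(293.15, 2.0), 2))
```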

  5. Temperature influences in receiver clock modelling

    NASA Astrophysics Data System (ADS)

    Wang, Kan; Meindl, Michael; Rothacher, Markus; Schoenemann, Erik; Enderle, Werner

    2016-04-01

    In Precise Point Positioning (PPP), hardware delays at the receiver site (receiver, cables, antenna, …) are difficult to separate from the estimated receiver clock parameters. As a result, they are partially or fully contained in the estimated "apparent" clocks and influence the deterministic and stochastic modelling of the receiver clock behaviour. In this contribution, using three years of data, the receiver clock corrections of a set of high-precision Hydrogen Masers (H-Masers) connected to stations of the ESA/ESOC network and the International GNSS Service (IGS) are first characterized in terms of clock offsets, drifts, modified Allan deviations and stochastic parameters. In a second step, the apparent behaviour of the clocks is modelled with the help of a low-order polynomial and a known temperature coefficient (Weinbach, 2013). The correlations between the temperature and the hardware delays generated by different types of antennae are then analysed over daily, 3-day and weekly time intervals. The outcome of these analyses is crucial if we intend to model the receiver clocks in the ground station network to improve the estimation of station-related parameters like coordinates, troposphere zenith delays and ambiguities. References: Weinbach, U. (2013) Feasibility and impact of receiver clock modeling in precise GPS data analysis. Dissertation, Leibniz Universität Hannover, Germany.
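The deterministic model described above (a low-order polynomial plus a temperature-dependent term) can be sketched as a least-squares fit; all series and coefficients below are invented for illustration, not values from the study.

```python
import numpy as np

# Synthetic apparent-clock series: offset + drift + temperature term + noise.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 100)                  # time, days
temp = 5.0 * np.sin(2 * np.pi * t)              # station temperature anomaly, °C
clk = 3.0 + 0.5 * t + 0.02 * temp + rng.normal(0.0, 1e-3, t.size)  # ns

# Least-squares estimate of clock offset, drift, and temperature coefficient.
A = np.column_stack([np.ones_like(t), t, temp])
offset, drift, k_temp = np.linalg.lstsq(A, clk, rcond=None)[0]
print(round(drift, 3), round(k_temp, 3))
```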

  6. TEMPERATURE HISTORY AND DYNAMICAL EVOLUTION OF (101955) 1999 RQ 36: A POTENTIAL TARGET FOR SAMPLE RETURN FROM A PRIMITIVE ASTEROID

    SciTech Connect

    Delbo, Marco; Michel, Patrick

    2011-02-20

    It has been recently shown that near-Earth objects (NEOs) have a temperature history, due to radiative heating by the Sun, that is non-trivially correlated to their present orbits. This is because the perihelion distance of NEOs varies as a consequence of dynamical mechanisms, such as resonances and close encounters with planets. Thus, it is worth investigating the temperature history of NEOs that are potential targets of space missions devoted to return samples of prebiotic organic compounds. Some of these compounds, expected to be found on NEOs of primitive composition, break up at moderate temperatures, e.g., 300-670 K. Using a model of the orbital evolution of NEOs and thermal models, we studied the temperature history of (101955) 1999 RQ36 (the primary target of the mission OSIRIS-REx, proposed in the program New Frontiers of NASA). Assuming that the same material always lies on the surface (i.e., there is no regolith turnover), our results suggest that the temperatures reached during its past evolution affected the stability of some organic compounds at the surface (e.g., there is 50% probability that the surface of 1999 RQ36 was heated at temperatures ≥500 K). However, the temperature drops rapidly with depth: the regolith at a depth of 3-5 cm, which is not considered difficult to reach with the current designs of sampling devices, has experienced temperatures about 100 K below those at the surface. This is sufficient to protect some subsurface organics from thermal breakup.

  7. Stratospheric Temperature Changes: Observations and Model Simulations

    NASA Technical Reports Server (NTRS)

    Ramaswamy, V.; Chanin, M.-L.; Angell, J.; Barnett, J.; Gaffen, D.; Gelman, M.; Keckhut, P.; Koshelkov, Y.; Labitzke, K.; Lin, J.-J. R.

    1999-01-01

    This paper reviews observations of stratospheric temperatures that have been made over a period of several decades. Those observed temperatures have been used to assess variations and trends in stratospheric temperatures. A wide range of observation datasets have been used, comprising measurements by radiosonde (1940s to the present), satellite (1979 - present), lidar (1979 - present) and rocketsonde (periods varying with location, but most terminating by about the mid-1990s). In addition, trends have also been assessed from meteorological analyses, based on radiosonde and/or satellite data, and products based on assimilating observations into a general circulation model. Radiosonde and satellite data indicate a cooling trend of the annual-mean lower stratosphere since about 1980. Over the period 1979-1994, the trend is 0.6 K/decade. For the period prior to 1980, the radiosonde data exhibit a substantially weaker long-term cooling trend. In the northern hemisphere, the cooling trend is about 0.75 K/decade in the lower stratosphere, with a reduction in the cooling in mid-stratosphere (near 35 km), and increased cooling in the upper stratosphere (approximately 2 K per decade at 50 km). Model simulations indicate that the depletion of lower stratospheric ozone is the dominant factor in the observed lower stratospheric cooling. In the middle and upper stratosphere both the well-mixed greenhouse gases (such as CO2) and ozone changes contribute in an important manner to the cooling.

  8. Electric transport measurements on bulk, polycrystalline MgB2 samples prepared at various reaction temperatures

    NASA Astrophysics Data System (ADS)

    Wiederhold, A.; Koblischka, M. R.; Inoue, K.; Muralidhar, M.; Murakami, M.; Hartmann, U.

    2016-03-01

    A series of disk-shaped, bulk MgB2 superconductors (sample diameter up to 4 cm) was prepared in order to improve the performance for superconducting super-magnets. Several samples were fabricated using a solid state reaction in pure Ar atmosphere at temperatures from 750 to 950 °C in order to determine the optimum processing parameters to obtain the highest critical current density as well as large trapped field values. Additional samples were prepared with silver (up to 10 wt.-%) added to the Mg and B powder. Magneto-resistance data and I/V-characteristics were recorded using an Oxford Instruments Teslatron system. From Arrhenius plots, we determine the TAFF pinning potential, U0. The I/V-characteristics yield detailed information on the current flow through the polycrystalline samples. The current flow is influenced by the presence of pores in the samples. Our analysis of the achieved critical currents together with a thorough microstructure investigation reveals that the samples prepared at temperatures between 775 °C and 805 °C exhibit the smallest grains and the best connectivity between them, while the samples fabricated at higher reaction temperatures show a reduced connectivity and lower pinning potential. Doping the samples with silver leads to a considerable increase of the pinning potential and hence of the critical current densities.
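The TAFF pinning potential U0 mentioned above is obtained from the slope of an Arrhenius plot, ln R versus 1/T, since in the thermally activated flux flow regime R(T) = R0·exp(-U0/(kB·T)); a sketch with synthetic resistance data:

```python
import numpy as np

KB = 8.617333e-5   # Boltzmann constant, eV/K
U0_true = 0.5      # invented pinning potential, eV
T = np.linspace(20.0, 35.0, 10)            # temperatures, K
R = 1e-3 * np.exp(-U0_true / (KB * T))     # synthetic TAFF resistance, ohm

# The slope of ln R vs 1/T equals -U0/kB, so the fit recovers U0.
slope = np.polyfit(1.0 / T, np.log(R), 1)[0]
U0_est = -slope * KB
print(round(U0_est, 3))
```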

  9. Temperature response functions introduce high uncertainty in modelled carbon stocks in cold temperature regimes

    NASA Astrophysics Data System (ADS)

    Portner, H.; Bugmann, H.; Wolf, A.

    2009-08-01

    Models of carbon cycling in terrestrial ecosystems contain formulations for the dependence of respiration on temperature, but the sensitivity of predicted carbon pools and fluxes to these formulations and their parameterization is not well understood. Thus, we made an uncertainty analysis of soil organic matter decomposition with respect to its temperature dependency using the ecosystem model LPJ-GUESS. We used five temperature response functions (Exponential, Arrhenius, Lloyd-Taylor, Gaussian, Van't Hoff). We determined the parameter uncertainty ranges of the functions by nonlinear regression analysis based on eight experimental datasets from Northern Hemisphere ecosystems. We sampled over the uncertainty bounds of the parameters and ran simulations for each pair of temperature response function and calibration site. The uncertainty in both long-term and short-term soil carbon dynamics was analyzed over an elevation gradient in southern Switzerland. The function of Lloyd-Taylor turned out to be adequate for modelling the temperature dependency of soil organic matter decomposition, whereas the other functions either resulted in poor fits (Exponential, Arrhenius) or were not applicable for all datasets (Gaussian, Van't Hoff). There were two main sources of uncertainty for model simulations: (1) the uncertainty in the parameter estimates of the response functions, which increased with increasing temperature and (2) the uncertainty in the simulated size of carbon pools, which increased with elevation, as slower turn-over times lead to higher carbon stocks and higher associated uncertainties. The higher uncertainty in carbon pools with slow turn-over rates has important implications for the uncertainty in the projection of the change of soil carbon stocks driven by climate change, which turned out to be more uncertain for higher elevations and hence higher latitudes, which are of key importance for the global terrestrial carbon budget.

  10. Long-term storage of salivary cortisol samples at room temperature

    NASA Technical Reports Server (NTRS)

    Chen, Yu-Ming; Cintron, Nitza M.; Whitson, Peggy A.

    1992-01-01

    Collection of saliva samples for the measurement of cortisol during space flights provides a simple technique for studying changes in adrenal function due to microgravity. In the present work, several methods for preserving saliva cortisol at room temperature were investigated using radioimmunoassays to determine cortisol in saliva samples collected on a saliva-collection device called a Salivette. It was found that pretreatment of Salivettes with citric acid preserved more than 85 percent of the salivary cortisol for as long as six weeks. The results correlated well with those for a sample stored in a freezer on an untreated Salivette.

  11. Graphite sample preparation for AMS in a high pressure and temperature press

    USGS Publications Warehouse

    Rubin, M.; Mysen, B.O.; Polach, H.

    1984-01-01

    A high pressure-high temperature press is used to make target material for accelerator mass spectrometry. Graphite was produced from typical 14C samples including oxalic acid and carbonates. Beam strength of 12C was generally adequate, but random radioactive contamination by 14C made age measurements impractical.

  12. Graphite sample preparation for AMS in a high pressure and temperature press

    USGS Publications Warehouse

    Rubin, Meyer; Mysen, Bjorn O.; Polach, Henry

    1984-01-01

    A high pressure-temperature press is used to make target material for accelerator mass spectrometry. Graphite was produced from typical 14C samples including oxalic acid and carbonates. Beam strength of 12C was generally adequate, but random radioactive contamination by 14C made age measurements impractical.

  13. Fast sweep-rate plastic Faraday force magnetometer with simultaneous sample temperature measurement.

    PubMed

    Slobinsky, D; Borzi, R A; Mackenzie, A P; Grigera, S A

    2012-12-01

    We present a design for a magnetometer capable of operating at temperatures down to 50 mK and magnetic fields up to 15 T with integrated sample temperature measurement. Our design is based on the concept of a Faraday force magnetometer with a load-sensing variable capacitor. A plastic body allows for fast sweep rates and sample temperature measurement, and the possibility of regulating the initial capacitance simplifies the initial bridge balancing. Under moderate gradient fields of ~1 T/m our prototype performed with a resolution better than 1 × 10⁻⁵ emu. The magnetometer can be operated either in a dc mode, or in an oscillatory mode which allows the determination of the magnetic susceptibility. We present measurements on Dy2Ti2O7 and Sr3Ru2O7 as an example of its performance.

  14. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  15. Calorimeters for precision power dissipation measurements on controlled-temperature superconducting radiofrequency samples.

    PubMed

    Xiao, B P; Reece, C E; Phillips, H L; Kelley, M J

    2012-12-01

    Two calorimeters, with stainless steel and Cu as the thermal path material for high precision and high power versions, respectively, have been designed and commissioned for the 7.5 GHz surface impedance characterization system at Jefferson Lab to provide low temperature control and measurement for CW power up to 22 W on a 5 cm diameter disk sample which is thermally isolated from the radiofrequency (RF) portion of the system. A power compensation method has been developed to measure the RF induced power on the sample. Simulation and experimental results show that with these two calorimeters, the whole thermal range of interest for superconducting radiofrequency materials has been covered. The power measurement error in the interested power range is within 1.2% and 2.7% for the high precision and high power versions, respectively. Temperature distributions on the sample surface for both versions have been simulated and the accuracy of sample temperature measurements have been analyzed. Both versions have the ability to accept bulk superconductors and thin film superconducting samples with a variety of substrate materials such as Al, Al2O3, Cu, MgO, Nb, and Si. PMID:23278016

  16. Calorimeters for Precision Power Dissipation Measurements on Controlled-Temperature Superconducting Radiofrequency Samples

    SciTech Connect

    Xiao, Binping P.; Kelley, Michael J.; Reece, Charles E.; Phillips, H. L.

    2012-12-01

    Two calorimeters, with stainless steel and Cu as the thermal path material for high precision and high power versions, respectively, have been designed and commissioned for the surface impedance characterization (SIC) system at Jefferson Lab to provide low temperature control and measurement for CW power up to 22 W on a 5 cm dia. disk sample which is thermally isolated from the RF portion of the system. A power compensation method has been developed to measure the RF induced power on the sample. Simulation and experimental results show that with these two calorimeters, the whole thermal range of interest for superconducting radiofrequency (SRF) materials has been covered. The power measurement error in the interested power range is within 1.2% and 2.7% for the high precision and high power versions, respectively. Temperature distributions on the sample surface for both versions have been simulated and the accuracy of sample temperature measurements have been analysed. Both versions have the ability to accept bulk superconductors and thin film superconducting samples with a variety of substrate materials such as Al, Al2O3, Cu, MgO, Nb and Si.

  17. Temperature programmed desorption studies of water interactions with Apollo lunar samples 12001 and 72501

    NASA Astrophysics Data System (ADS)

    Poston, Michael J.; Grieves, Gregory A.; Aleksandrov, Alexandr B.; Hibbitts, Charles A.; Dyar, M. Darby; Orlando, Thomas M.

    2015-07-01

    The desorption activation energies for water molecules chemisorbed on Apollo lunar samples 72501 (highlands soil) and 12001 (mare soil) were determined by temperature programmed desorption experiments in ultra-high vacuum. A significant difference in both the energies and abundance of chemisorption sites was observed, with 72501 retaining up to 40 times more water (by mass) and with much stronger adsorption interactions, possibly approaching 1.5 eV. The dramatic difference between the samples may be due to differences in mineralogy and surface exposure age. The distribution function of water desorption activation energies for sample 72501 was used as an initial condition to simulate water persistence through a temperature profile matching the lunar day.
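Desorption activation energies of the kind reported above are often estimated from TPD peak temperatures via the first-order Redhead approximation, E ≈ R·Tp·(ln(ν·Tp/β) - 3.64); the prefactor and ramp rate below are typical assumed values, not parameters from the paper.

```python
import math

def redhead_energy_ev(tp_k, nu=1e13, beta_k_per_s=1.0):
    """First-order Redhead estimate of the desorption activation energy (eV)
    from a TPD peak temperature tp_k, attempt frequency nu (s^-1), and
    heating ramp rate beta_k_per_s (K/s)."""
    KB_EV = 8.617333e-5  # Boltzmann constant, eV/K
    return KB_EV * tp_k * (math.log(nu * tp_k / beta_k_per_s) - 3.64)

# A desorption peak near 500 K with nu = 1e13 s^-1 and a 1 K/s ramp
# corresponds to an activation energy on the order of the ~1.5 eV
# strong-adsorption sites mentioned in the abstract.
print(round(redhead_energy_ev(500.0), 2))
```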

  18. Flexible sample environment for high resolution neutron imaging at high temperatures in controlled atmosphere

    DOE PAGES

    Makowska, Małgorzata G.; Theil Kuhn, Luise; Cleemann, Lars N.; Lauridsen, Erik M.; Bilheux, Hassina Z.; Molaison, Jamie J.; Santodonato, Louis J.; Tremsin, Anton S.; Grosse, Mirco; Morgano, Manuel; et al

    2015-12-17

    High material penetration by neutrons allows for experiments using sophisticated sample environments providing complex conditions. Thus, neutron imaging holds potential for performing in situ nondestructive measurements on large samples or even full technological systems, which are not possible with any other technique. Our paper presents a new sample environment for in situ high resolution neutron imaging experiments at temperatures from room temperature up to 1100 degrees C and/or using controllable flow of reactive atmospheres. The design also offers the possibility to directly combine imaging with diffraction measurements. Design, special features, and specification of the furnace are described. In addition, examples of experiments successfully performed at various neutron facilities with the furnace, as well as examples of possible applications, are presented. Our work covers a broad field of research from fundamental to technological investigations of various types of materials and components.

  19. Flexible sample environment for high resolution neutron imaging at high temperatures in controlled atmosphere

    SciTech Connect

    Makowska, Małgorzata G.; Theil Kuhn, Luise; Cleemann, Lars N.; Lauridsen, Erik M.; Bilheux, Hassina Z.; Molaison, Jamie J.; Santodonato, Louis J.; Tremsin, Anton S.; Grosse, Mirco; Morgano, Manuel; Kabra, Saurabh; Strobl, Markus

    2015-12-17

    High material penetration by neutrons allows for experiments using sophisticated sample environments providing complex conditions. Thus, neutron imaging holds potential for performing in situ nondestructive measurements on large samples or even full technological systems, which are not possible with any other technique. Our paper presents a new sample environment for in situ high resolution neutron imaging experiments at temperatures from room temperature up to 1100 degrees C and/or using controllable flow of reactive atmospheres. The design also offers the possibility to directly combine imaging with diffraction measurements. Design, special features, and specification of the furnace are described. In addition, examples of experiments successfully performed at various neutron facilities with the furnace, as well as examples of possible applications are presented. Our work covers a broad field of research from fundamental to technological investigations of various types of materials and components.

  20. Flexible sample environment for high resolution neutron imaging at high temperatures in controlled atmosphere

    SciTech Connect

    Makowska, Małgorzata G.; Theil Kuhn, Luise; Cleemann, Lars N.; Lauridsen, Erik M.; Bilheux, Hassina Z.; Molaison, Jamie J.; Santodonato, Louis J.; Tremsin, Anton S.; Grosse, Mirco; Morgano, Manuel; Kabra, Saurabh; Strobl, Markus

    2015-12-15

    High material penetration by neutrons allows for experiments using sophisticated sample environments providing complex conditions. Thus, neutron imaging holds potential for performing in situ nondestructive measurements on large samples or even full technological systems, which are not possible with any other technique. This paper presents a new sample environment for in situ high resolution neutron imaging experiments at temperatures from room temperature up to 1100 °C and/or using controllable flow of reactive atmospheres. The design also offers the possibility to directly combine imaging with diffraction measurements. Design, special features, and specification of the furnace are described. In addition, examples of experiments successfully performed at various neutron facilities with the furnace, as well as examples of possible applications are presented. This covers a broad field of research from fundamental to technological investigations of various types of materials and components.

  1. Effects and Mitigation of Clear Sky Sampling on Recorded Trends in Land Surface Temperature

    NASA Astrophysics Data System (ADS)

    Holmes, T. R.; Hain, C.; de Jeu, R.; Anderson, M. C.; Crow, W. T.

    2015-12-01

    Land surface temperature (LST) is a key input for physically-based retrieval algorithms of hydrological states and fluxes. Yet, it remains a poorly constrained parameter for global scale studies. The two main observational methods for remotely measuring LST are based on thermal infrared (TIR) observations and passive microwave (MW) observations. TIR is the most commonly used approach and the method of choice for standard LST products of various satellite missions. MW-based LST retrievals, on the other hand, are not as widely adopted for land applications; currently their principal use is in soil moisture retrieval algorithms. MW and TIR technologies present two highly complementary and independent means of measuring LST. MW observations have a high tolerance to clouds but a low spatial resolution, whereas TIR has a high spatial resolution with temporal sampling restricted to clear skies. This paper builds on recent progress in characterizing the main structural differences between TIR LST and MW Ka-band observations, the MW frequency most suitable for LST sensing. By accounting for differences in diurnal timing (phase lag with solar noon), amplitude, and emissivity, we construct a MW-based LST dataset that matches the diurnal characteristics of the TIR-based LSA SAF LST record. This new global dataset of MW-based LST currently spans the period 2003-2013. In this paper we will present results of a validation of MW LST against in situ data, with special emphasis on the effect of cloudiness on performance. The ability to remotely sense the temperature of cloud-covered land is what sets this MW-LST dataset apart from existing (much higher resolution) TIR-based products. As an example, we will explore how MW LST can mitigate the effect of clear-sky sampling in the context of trend and anomaly detection by contrasting monthly means of TIR-LST with its clear-sky and all-sky equivalents from MW-LST and an NWP model.

  2. Characterization of Wafer-Level Au-In-Bonded Samples at Elevated Temperatures

    NASA Astrophysics Data System (ADS)

    Luu, Thi-Thuy; Hoivik, Nils; Wang, Kaiying; Aasmundtveit, Knut E.; Vardøy, Astrid-Sofie B.

    2015-06-01

    Wafer-level bonding using Au-In solid liquid interdiffusion (SLID) bonding is a promising approach to enable low-temperature assembly and MEMS packaging/encapsulation. Due to the low melting point of In, wafer-level bonding can be performed at considerably lower temperatures than Sn-based bonding; this work treats bonds performed at 453 K (180 °C). Following bonding, the die shear strength was investigated from room temperature to 573 K (300 °C), revealing excellent mechanical integrity at temperatures well above the bonding temperature. For shear test temperatures from room temperature to 473 K (200 °C), the measured shear strength was stable at 30 MPa, whereas it increased to 40 MPa at a shear test temperature of 573 K (300 °C). The fracture surfaces of Au-In-bonded samples revealed brittle fracture modes (at the original bond interface and at the adhesion layers) for shear test temperatures up to 473 K (200 °C), but a ductile fracture mode at 573 K (300 °C). The as-bonded samples have a layered structure consisting of the two intermetallic phases AuIn and γ', as shown by cross-section microscopy and predicted from the phase diagram. The change in behavior for the tests at 573 K (300 °C) is attributed to a solid-state phase transition occurring at 497 K (224 °C), where the phase diagram predicts a AuIn/ψ structure and a phase boundary moving across the initial bond interface. The associated interdiffusion of Au and In will strengthen the initial bond interface and, as a consequence, increase the measured shear strength. This work provides experimental evidence for the high-temperature stability of wafer-level, low-temperature bonded Au-In SLID bonds. The high bond strength obtained is limited by the strength at the initial bond interface and at the adhesion layers, showing that the Au-In SLID system itself is capable of even higher bond strength.

  3. Polaron models of high-temperature superconductivity

    NASA Astrophysics Data System (ADS)

    Mott, N. F.

    1993-01-01

    A review is given of theories of high-temperature superconductors in which the current is carried by bipolarons, which form a condensed Bose gas below Tc and a non-degenerate gas above it. Such theories were first proposed by Schafroth, Alexandrov, Ranninger and de Jongh; the present author has, for the copper oxide materials, proposed spin bipolarons. Experimental work has, however, shown no magnetic moments in the superconducting and “spin glass” ranges of composition; a modification of the spin bipolaron model is proposed to take account of these observations. Other aspects of the model are discussed, particularly heat conduction and the effect of disorder. A comparison is made with the cubic bismuth materials.

  4. Thermal mapping and trends of Mars analog materials in sample acquisition operations using experimentation and models

    NASA Astrophysics Data System (ADS)

    Szwarc, Timothy; Hubbard, Scott

    2014-09-01

    The effects of atmosphere, ambient temperature, and geologic material were studied experimentally and using a computer model to predict the heating undergone by Mars rocks during rover sampling operations. Tests were performed on five well-characterized and/or Mars analog materials: Indiana limestone, Saddleback basalt, kaolinite, travertine, and water ice. Eighteen tests were conducted to 55 mm depth using a Mars Sample Return prototype coring drill, with each sample containing six thermal sensors. A thermal simulation was written to predict the complete thermal profile within each sample during coring and this model was shown to be capable of predicting temperature increases with an average error of about 7%. This model may be used to schedule power levels and periods of rest during actual sample acquisition processes to avoid damaging samples or freezing the bit into icy formations. Maximum rock temperature increase is found to be modeled by a power law incorporating rock and operational parameters. Energy transmission efficiency in coring is found to increase linearly with rock hardness and decrease by 31% at Mars pressure.
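    The power-law dependence reported above can be sketched generically. The functional form, parameter names, prefactor, and exponents below are placeholders for illustration, not the fitted values from this study:

```python
# Hedged sketch of a power-law temperature-rise model of the kind the
# abstract describes. The exponents and prefactor are assumptions; the
# study fits its own rock and operational parameters.

def max_temp_rise(power_w, duration_s, conductivity_w_mk,
                  c0=1.0, a=1.0, b=0.5, c=1.0):
    """Peak temperature increase (K): dT = c0 * P^a * t^b / k^c (illustrative)."""
    return c0 * power_w ** a * duration_s ** b / conductivity_w_mk ** c

# Qualitative behaviour consistent with the study: more drill power or a
# longer duty cycle raises the peak temperature, while more conductive
# rock sheds heat faster and warms less.
```

    A scheduler could invert such a relation to cap power levels or insert rest periods so icy samples stay below a damage threshold.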

  5. A Unimodal Model for Double Observer Distance Sampling Surveys

    PubMed Central

    Becker, Earl F.; Christ, Aaron M.

    2015-01-01

    Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, where the two observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied. PMID:26317984
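    A two-piece normal detection curve of the kind described can be sketched as follows. The apex location, the two spreads, and the unit normalization at the apex are illustrative assumptions, not the authors' exact parameterization:

```python
import math

# Hedged sketch: a split-normal detection function with a single apex
# and different spreads on either side of it.

def two_piece_normal(dist_m, apex=200.0, sd_left=80.0, sd_right=150.0):
    """Detection probability at perpendicular distance dist_m; equals 1 at the apex."""
    sd = sd_left if dist_m < apex else sd_right
    return math.exp(-0.5 * ((dist_m - apex) / sd) ** 2)

# Unlike a covariate-shifted gamma curve, the apex stays at `apex` no
# matter how covariates scale the spreads, which is what the point
# independence assumption requires.
```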

  6. Bayesian Estimation of the DINA Model with Gibbs Sampling

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew

    2015-01-01

    A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…

  7. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    ERIC Educational Resources Information Center

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  8. Headspace-programmed temperature vaporizer-mass spectrometry and pattern recognition techniques for the analysis of volatiles in saliva samples.

    PubMed

    Pérez Antón, Ana; Del Nogal Sánchez, Miguel; Crisolino Pozas, Ángel Pedro; Pérez Pavón, José Luis; Moreno Cordero, Bernardo

    2016-11-01

    A rapid method for the analysis of volatiles in saliva samples is proposed. The method is based on the direct coupling of three components: a headspace sampler (HS), a programmable temperature vaporizer (PTV) and a quadrupole mass spectrometer (qMS). Several applications in the biomedical field have been proposed with electronic noses based on different sensors; however, few contributions to date have used a mass spectrometry-based electronic nose in this field. Samples from 23 patients with some type of cancer and 32 healthy volunteers were analyzed with HS-PTV-MS, and the profile signals obtained were subjected to pattern recognition techniques with the aim of studying the ability of the methodology to differentiate patients with cancer from healthy controls. An initial inspection of the information contained in the data by means of principal component analysis (PCA) revealed a complex situation where an overlapping distribution of samples was visualized in the score plot instead of two separated groups. Models using K-nearest neighbors (KNN) and Soft Independent Modeling of Class Analogy (SIMCA) showed poor discrimination, especially SIMCA, where a small distance between classes was obtained and no satisfactory results were achieved in the classification of the external validation samples. Good results were obtained with Mahalanobis discriminant analysis (DA) and support vector machines (SVM), with 2 samples (false positives) and 0 samples misclassified in the external validation set, respectively. No false negatives were found using these techniques.
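    The KNN step can be illustrated with a minimal 1-nearest-neighbour sketch. The two-feature vectors and labels below are invented stand-ins for the real HS-PTV-MS profile signals, which would typically be compressed by PCA first:

```python
import math

# Hedged sketch: classify a query profile by the label of its nearest
# training profile (1-NN). Real inputs would be full m/z intensity
# profiles, not two invented features.

def knn1(train, labels, query):
    dists = [math.dist(t, query) for t in train]
    return labels[dists.index(min(dists))]

train = [(0.2, 0.1), (0.3, 0.2), (0.9, 0.8), (1.0, 0.7)]   # invented features
labels = ["control", "control", "cancer", "cancer"]
```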

  10. Sample size calculation for the proportional hazards cure model.

    PubMed

    Wang, Songfeng; Zhang, Jiajia; Lu, Wenbin

    2012-12-20

    In clinical trials with time-to-event endpoints, it is not uncommon to see a significant proportion of patients being cured (or long-term survivors), as in trials for non-Hodgkin's lymphoma. The widely used sample size formula derived under the proportional hazards (PH) model may not be appropriate for designing a survival trial with a cure fraction, because the PH model assumption may be violated. To account for a cure fraction, the PH cure model is widely used in practice, where a PH model describes the survival times of uncured patients and a logistic model describes the probability of patients being cured. In this paper, we develop a sample size formula on the basis of the PH cure model by investigating the asymptotic distributions of the standard weighted log-rank statistics under the null and local alternative hypotheses. The derived sample size formula under the PH cure model is more flexible because it can be used to test differences in the short-term survival and/or the cure fraction. Furthermore, we investigate, as numerical examples, the impacts of accrual methods and the durations of the accrual and follow-up periods on sample size calculation. The results show that ignoring the cure rate in sample size calculation can lead to either underpowered or overpowered studies. We evaluate the performance of the proposed formula by simulation studies and provide an example to illustrate its application with data from a melanoma trial. PMID:22786805
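    For contrast, the classical no-cure event count that the abstract argues can mis-size trials with a cure fraction can be sketched as follows. This is the textbook Schoenfeld formula for a two-sided log-rank test under an ordinary PH model, shown only as the baseline the cure-model formula generalizes:

```python
import math
from statistics import NormalDist

# Standard PH (no cure fraction) design: required number of events for
# a two-sided log-rank test at a given hazard ratio, significance level,
# power, and allocation fraction.

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.80, alloc=0.5):
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    return (z_a + z_b) ** 2 / (alloc * (1.0 - alloc) * math.log(hazard_ratio) ** 2)

# schoenfeld_events(0.5) -> about 65 events for 80% power at HR = 0.5
```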

  11. Sampling artifact in volume weighted velocity measurement. I. Theoretical modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Pengjie; Zheng, Yi; Jing, Yipeng

    2015-02-01

    Cosmology based on large-scale peculiar velocities prefers volume weighted velocity statistics. However, measuring volume weighted velocity statistics from inhomogeneously distributed galaxies (simulation particles/halos) suffers from an inevitable and significant sampling artifact. We study this sampling artifact in the velocity power spectrum measured by the nearest particle velocity assignment method of Zheng et al. [Phys. Rev. D 88, 103510 (2013)]. We derive the analytical expression of the leading and higher order terms. We find that the sampling artifact suppresses the z = 0 E-mode velocity power spectrum by ~10% at k = 0.1 h/Mpc for samples with number density 10^-3 (Mpc/h)^-3. This suppression becomes larger for larger k and for sparser samples. We argue that this source of systematic error in peculiar velocity cosmology, albeit severe, can be self-calibrated in the framework of our theoretical modelling. We also work out the sampling artifact in the density-velocity cross power spectrum measurement. A more robust evaluation of related statistics through simulations will be presented in a companion paper [Zheng et al., Sampling artifact in volume weighted velocity measurement. II. Detection in simulations and comparison with theoretical modelling, arXiv:1409.6809]. We also argue that a similar sampling artifact exists in other velocity assignment methods and hence must be carefully corrected to avoid systematic bias in peculiar velocity cosmology.

  12. Integrated research in constitutive modelling at elevated temperatures, part 1

    NASA Technical Reports Server (NTRS)

    Haisler, W. E.; Allen, D. H.

    1986-01-01

    Topics covered include: numerical integration techniques; thermodynamics and internal state variables; experimental lab development; comparison of models at room temperature; comparison of models at elevated temperature; and integrated software development.

  13. Database in low temperature plasma modeling

    NASA Astrophysics Data System (ADS)

    Sakai, Y.

    2002-05-01

    This article is composed of recommended sets of electron collision cross-sections and reaction cross-sections of excited species, assessed by a swarm method, together with information on transport coefficients and reaction rates (cross-sections) of ions, which are needed in low temperature plasma modeling. These data have been compiled by the Investigation Committee on "Discharge Plasma Electron Collision Cross-sections", IEE Japan, and the author's laboratory. The gases assessed in this work are the rare gases, Hg, N2, O2, CO2, CF4, CH4, GeH4, SiH4, SF6, C2H6, Si2H6, c-C4F8 and CCl2F2.

  14. Optimizing the Operating Temperature for an array of MOX Sensors on an Open Sampling System

    NASA Astrophysics Data System (ADS)

    Trincavelli, M.; Vergara, A.; Rulkov, N.; Murguia, J. S.; Lilienthal, A.; Huerta, R.

    2011-09-01

    Chemo-resistive transduction is essential for capturing the spatio-temporal structure of chemical compounds dispersed in different environments. Due to gas dispersion mechanisms, namely diffusion, turbulence and advection, the sensors in an open sampling system, i.e. directly exposed to the environment to be monitored, are exposed to low gas concentrations with many fluctuations, making the identification and monitoring of the gases even more complicated and challenging than in a controlled laboratory setting. Therefore, tuning the value of the operating temperature becomes crucial for successfully identifying and monitoring the pollutant gases, particularly in applications such as exploration of hazardous areas, air pollution monitoring, and search and rescue. In this study we demonstrate the benefit of optimizing the sensors' operating temperature when they are deployed in an open sampling system.

  15. Accelerating the Convergence of Replica Exchange Simulations Using Gibbs Sampling and Adaptive Temperature Sets

    SciTech Connect

    Vogel, Thomas; Perez, Danny

    2015-08-28

    We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T (U) or, equivalently, of the density of states g(U) over a wide range of energies.

  17. Errors of five-day mean surface wind and temperature conditions due to inadequate sampling

    NASA Technical Reports Server (NTRS)

    Legler, David M.

    1991-01-01

    Surface meteorological reports of wind components, wind speed, air temperature, and sea-surface temperature from buoys located in equatorial and midlatitude regions are used in a simulation of random sampling to determine errors of the calculated means due to inadequate sampling. Subsampling the data with several different sample sizes leads to estimates of the accuracy of the subsampled means. The number N of random observations needed to compute mean winds with chosen accuracies of 0.5 (N sub 0.5) and 1.0 (N sub 1.0) m/s, and mean air and sea surface temperatures with chosen accuracies of 0.1 (N sub 0.1) and 0.2 (N sub 0.2) C, was calculated for each 5-day and 30-day period in the buoy datasets. Mean values of N for the various accuracies and datasets are given. A second-order polynomial relation is established between N and the variability of the data record. This relationship demonstrates that, for the same accuracy, N increases as the variability of the data record increases. The relationship is also independent of the data source. Volunteer-observing ship data do not satisfy the recommended minimum number of observations for obtaining 0.5 m/s and 0.2 C accuracy at most locations. The effect of having remotely sensed data is discussed.
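    The subsampling experiment can be sketched in miniature. The synthetic wind record, trial count, and seeds below are invented for illustration; the study used real buoy records:

```python
import random
import statistics

# Hedged sketch: draw random subsamples of size n from a record and
# measure the mean absolute error of the subsample mean relative to the
# full-record mean, mimicking the random-sampling simulation.

def subsample_error(record, n, trials=2000, seed=42):
    rng = random.Random(seed)
    mu = statistics.fmean(record)
    errs = [abs(statistics.fmean(rng.sample(record, n)) - mu)
            for _ in range(trials)]
    return statistics.fmean(errs)

rng0 = random.Random(1)
record = [rng0.gauss(5.0, 2.0) for _ in range(120)]  # synthetic wind record, m/s
# The error shrinks as n grows, so the N needed for a chosen accuracy
# can be read off, and it grows with the record's variability.
```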

  18. Effect of vacuum packing and temperature on survival and hatching of strongyle eggs in faecal samples.

    PubMed

    Sengupta, Mita E; Thapa, Sundar; Thamsborg, Stig M; Mejer, Helena

    2016-02-15

    Strongyle eggs of helminths of livestock usually hatch within a few hours or days after deposition with faeces. This poses a problem when faecal sampling is performed in the field. As oxygen is needed for embryonic development, it is recommended to reduce air supply during transport and to refrigerate. The present study therefore investigated the combined effect of vacuum packing and temperature on the survival of strongyle eggs and their subsequent ability to hatch and develop into L3. Fresh faecal samples were collected from calves infected with Cooperia oncophora, pigs infected with Oesophagostomum dentatum, and horses infected with Strongylus vulgaris and cyathostomins. The samples were allocated into four treatments: vacuum packing and storage at 5 °C or 20 °C (5 V and 20 V); normal packing in plastic gloves closed with a loose knot and storage at 5 °C or 20 °C (5 N and 20 N). The number of eggs per gram faeces (EPG) was estimated every fourth day until day 28 post set-up (p.s.) by a concentration McMaster method. Larval cultures were prepared on days 0, 12 and 28 p.s. and the larval yield determined. For C. oncophora, the EPG was significantly higher in vacuum packed samples after 28 days as compared to normal storage, regardless of temperature. However, O. dentatum EPG was significantly higher in samples kept at 5 °C as compared to 20 °C, irrespective of packing. For the horse strongyles, vacuum packed samples at 5 °C had a significantly higher EPG compared to the other treatments after 28 days. The highest larval yields of O. dentatum and the horse strongyles were obtained from fresh faecal samples. However, if storage is necessary prior to setting up larval cultures, O. dentatum should be kept at room temperature (aerobic or anaerobic), whereas horse strongyle coprocultures should ideally be set up on the day of collection to ensure maximum yield. Eggs of C. oncophora should be kept vacuum packed at room temperature for the highest larval yield.

  19. The effect of low-temperature demagnetization on paleointensity determinations from samples with different domain states

    NASA Astrophysics Data System (ADS)

    Kulakov, E.; Smirnov, A. V.

    2013-05-01

    It has recently been proposed that incorporation of low-temperature demagnetization (LTD) into the Thellier double-heating method increases the accuracy and success rate of paleointensity experiments by reducing the effects of magnetic remanence carried by large pseudo-single-domain (PSD) and multidomain (MD) grains (e.g., Celino et al., Geophysical Research Letters, 34, L12306, 2007). However, it has been unclear to what degree LTD affects the remanence carried by single-domain (SD) and small PSD grains. To investigate this problem, we carried out paleointensity experiments on synthetic magnetite-bearing samples containing nearly SD, PSD, and MD grains, as well as mixtures of MD and SD grains. Before the experiments, a thermal remanent magnetization was imparted to the samples in a known laboratory field. Paleointensities were determined using both the LTD-Thellier and multi-specimen parallel pTRM methods. The samples were subjected to a series of three LTD treatments in liquid nitrogen after each heating. LTD significantly improved the quality of paleointensity determinations from the samples containing large PSD and MD magnetite as well as SD-MD mixtures. In particular, LTD resulted in a significant increase of the paleointensity quality factor, producing more linear Arai plots and reducing data scatter. In addition, field intensities calculated after LTD fell within 2-4% of the known laboratory field. On the other hand, the effect of LTD on paleointensity determinations from samples with nearly SD magnetite is negligible. Paleointensity values based on both pre- and post-LTD data were statistically indistinguishable from the laboratory field. LTD treatment significantly reduced the systematic paleofield overestimation of the multi-specimen method for samples containing PSD and MD grains, as well as SD-MD mixtures. The results of multi-specimen paleointensity experiments performed on the PSD and MD samples using different heating temperatures suggest

  20. Improving the replica-exchange molecular-dynamics method for efficient sampling in the temperature space.

    PubMed

    Chen, Changjun; Xiao, Yi; Huang, Yanzhao

    2015-05-01

    Replica-exchange molecular dynamics (REMD) is a popular sampling method in molecular simulation. By frequently exchanging replicas at different temperatures, the molecule can jump out of local minima and sample the conformational space efficiently. Although REMD has been shown to be practical in many applications, it does have a critical limitation: all the replicas at all the temperatures must be simulated for a period between the replica-exchange steps. This may be problematic for reactions with high free-energy barriers, where too many replicas are required in the simulation. To reduce the computational cost and improve performance, in this paper we propose a modified REMD method. During the simulation, each replica at each temperature can stay in either the active or the inactive state, and only switches between the states at the exchange step. In the active state, the replica moves freely in the canonical ensemble by normal molecular dynamics; in the inactive state, the replica is frozen temporarily until the next exchange step. The number of replicas in the active state (active replicas) depends on the number of CPUs in the computer. Using the additional inactive replicas, one can perform an REMD simulation in a wider temperature space. Practical applications show that the modified REMD method is reliable. With the same number of active replicas, this REMD method can produce a more reasonable free energy surface around the free energy minima than the standard REMD method. PMID:26066200
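    The exchange step that both the standard and modified schemes rely on can be sketched as follows. This shows only the textbook Metropolis swap criterion in reduced units, not the paper's active/inactive replica bookkeeping:

```python
import math
import random

# Hedged sketch: Metropolis acceptance test for swapping configurations
# between replicas i and j at temperatures T_i and T_j (k_B = 1 units).

def swap_accepted(e_i, e_j, t_i, t_j, rng):
    """True if the proposed replica exchange is accepted."""
    delta = (1.0 / t_i - 1.0 / t_j) * (e_j - e_i)
    return delta <= 0.0 or rng.random() < math.exp(-delta)

# A swap that lowers the colder replica's energy is always accepted;
# unfavourable swaps are accepted with Boltzmann-weighted probability.
```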

  1. Impact of multicollinearity on small sample hydrologic regression models

    NASA Astrophysics Data System (ADS)

    Kroll, Charles N.; Song, Peter

    2013-06-01

    Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed, since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
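    The VIF screening step reduces to a one-line formula in the two-predictor case, VIF = 1/(1 - r^2); with more predictors, the R^2 comes from regressing each variable on all the others. The data below are invented to show a nearly collinear pair:

```python
# Hedged sketch of variance inflation factor (VIF) screening for two
# explanatory variables, using a hand-rolled Pearson correlation.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def vif_two_predictors(x1, x2):
    r = pearson_r(x1, x2)
    return 1.0 / (1.0 - r * r)

x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [1.1, 2.0, 2.9, 4.2, 5.1]   # nearly collinear with x1
# A VIF well above ~10 is a common screening threshold for dropping a variable.
```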

  2. An open-population hierarchical distance sampling model

    USGS Publications Warehouse

    Sollmann, Rachel; Beth Gardner,; Richard B Chandler,; Royle, J. Andrew; T Scott Sillett,

    2015-01-01

    Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for direct estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for island scrub-jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data-generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying numbers of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than those considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.

  3. Helium flow and temperatures in a heated sample of a final ITER TF cable-in-conduit conductor

    NASA Astrophysics Data System (ADS)

    Herzog, Robert; Lewandowska, Monika; Calvi, Marco; Bessette, Denis

    2010-06-01

    The quest for a detailed understanding of the thermo-hydraulic behaviour of the helium flow in the dual-channel cable-in-conduit conductor (CICC) for the ITER toroidal-field coils led to a series of experiments in the SULTAN test facility on a dedicated sample made according to the final conductor design. With helium flowing through the conductor as expected during ITER operation, the sample was heated by eddy-current losses induced in the strands by an applied AC magnetic field as well as by foil heaters mounted on the outside of the conductor jacket. Temperature sensors mounted on the jacket surface, in the central channel and at different radii in the sub-cable region showed the longitudinal as well as radial temperature distribution at different mass flow rates and heat loads. Spot heaters in the bundle and the central channel created small heated helium regions, which were detected downstream by a series of temperature sensors. With a time-of-flight method the helium velocity could thus be determined independently of any flow model. The temperature and velocity distributions in bundle and central channel under different mass-flow and heat load conditions thus led to a detailed picture of the helium flow in the final ITER TF CICCs.
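    The time-of-flight estimate can be sketched as follows. The sensor spacing, sampling interval, and the two temperature traces are invented for illustration; the real measurement tracks the heated helium parcel from a spot heater past a series of downstream sensors:

```python
# Hedged sketch of the time-of-flight idea: a spot heater marks a parcel
# of helium, and the delay between the temperature peaks seen at two
# downstream sensors gives the flow velocity independent of any flow model.

def tof_velocity(trace_a, trace_b, dt_s, spacing_m):
    """Velocity (m/s) from the peak-to-peak delay between two sensor traces."""
    lag_s = (trace_b.index(max(trace_b)) - trace_a.index(max(trace_a))) * dt_s
    return spacing_m / lag_s

trace_a = [0, 1, 5, 2, 1, 0, 0, 0, 0]   # upstream sensor (invented)
trace_b = [0, 0, 0, 0, 1, 5, 2, 1, 0]   # same pulse seen 3 samples later
# With dt_s = 0.1 s and spacing_m = 0.6 m, the pulse travels at 2.0 m/s.
```

    A cross-correlation of the full traces would be more robust to noise than picking single peaks, at the cost of a little more code.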

  4. Temperature Models for the Mexican Subduction Zone

    NASA Astrophysics Data System (ADS)

    Manea, V. C.; Kostoglodov, V.; Currie, C.; Manea, M.; Wang, K.

    2002-12-01

    It is well known that temperature is one of the major factors controlling the seismogenic zone. The Mexican subduction zone is characterized by a very shallow, flat subducting plate interface in its central part (Acapulco, Oaxaca), and deeper subducting slabs to the north (Jalisco) and south (Chiapas). It has been proposed that the seismogenic zone is controlled, among other factors, by temperature. Therefore, we have developed four two-dimensional steady state thermal models for Jalisco, Guerrero, Oaxaca and Chiapas. The updip limit of the seismogenic zone is taken between 100 °C and 150 °C, while the downdip limit is thought to be at 350 °C because of the transition from stick-slip to stable sliding. The shape of the subducting plate is inferred from gravity and seismicity. The convergence velocity between the oceanic and continental lithospheric plates is taken as follows: 5 cm/yr for the Jalisco profile, 5.5 for Guerrero, 5.8 for Oaxaca, and 7.8 for Chiapas. The ages of the subducting plates, which are young and therefore provide the primary control on the forearc thermal structure, are as follows: 11 My for the Jalisco profile, 14.5 My for Guerrero, 15 My for Oaxaca, and 28 My for Chiapas. We also introduced into the models a small amount of frictional heating (pore pressure ratio 0.98). The value of 0.98 for the pore pressure ratio was obtained for the Guerrero profile, in order to fit the intersection between the 350 °C isotherm and the subducting plate at 200 km from the trench. The value of a 200 km coupling zone from the trench is inferred from GPS data for the steady interseismic period and also from the last slow aseismic slip event that occurred in Guerrero in 2002. We used this pore pressure ratio (0.98) for all the other profiles, obtaining the following coupling extents: Jalisco - 100 km, Oaxaca - 170 km and Chiapas - 125 km (from the trench). Independent constraints of the

  5. Effect of short-term room temperature storage on the microbial community in infant fecal samples

    PubMed Central

    Guo, Yong; Li, Sheng-Hui; Kuang, Ya-Shu; He, Jian-Rong; Lu, Jin-Hua; Luo, Bei-Jun; Jiang, Feng-Ju; Liu, Yao-Zhong; Papasian, Christopher J.; Xia, Hui-Min; Deng, Hong-Wen; Qiu, Xiu

    2016-01-01

    Sample storage conditions are important for unbiased analysis of microbial communities in metagenomic studies. Specifically, in infant gut microbiota studies, stool specimens are often exposed to room temperature (RT) conditions prior to analysis. This could lead to variations in structural and quantitative assessment of bacterial communities. To estimate such effects of RT storage, we collected feces from 29 healthy infants (0–3 months) and partitioned each sample into 5 portions, stored for different lengths of time at RT before freezing at −80 °C. Alpha diversity did not differ among samples stored at RT for 0 to 2 hours. UniFrac distances and microbial composition analysis showed significant differences among individuals, but not among RT storage time points. Changes in the relative abundance of some specific (less common, minor) taxa were nevertheless found during storage at room temperature. Our results support previous studies in children and adults and provide useful information for accurate characterization of infant gut microbiomes. In particular, our study furnishes a solid foundation and justification for using fecal samples exposed to RT for less than 2 hours in comparative analyses between various medical conditions. PMID:27226242
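
    The alpha-diversity comparison described here can be illustrated with a minimal Shannon-index calculation; the OTU count tables below are made-up numbers, not data from the study.

```python
import math

def shannon(counts):
    """Shannon alpha-diversity index H' = -sum(p_i * ln p_i)."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

# Hypothetical OTU counts for one infant stool sample stored at room
# temperature for 0 h and 2 h before freezing (invented numbers).
counts_0h = [500, 300, 120, 50, 20, 10]
counts_2h = [480, 310, 130, 45, 25, 10]

h0, h2 = shannon(counts_0h), shannon(counts_2h)
print(f"H'(0 h) = {h0:.3f}, H'(2 h) = {h2:.3f}")
```
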

  6. New high temperature plasmas and sample introduction systems for analytical atomic emission and mass spectrometry

    SciTech Connect

    Montaser, A.

    1990-01-01

    In this project, new high-temperature plasmas and new sample introduction systems are developed for rapid elemental and isotopic analysis of gases, solutions, and solids using atomic emission spectrometry (AES) and mass spectrometry (MS). These devices offer promise for solving singularly difficult analytical problems that either exist now or are likely to arise in the future in the various fields of energy generation, environmental pollution, biomedicine, and nutrition. Emphasis is being placed on: generation of annular helium inductively coupled plasmas (He ICPs) suitable for atomization, excitation, and ionization of elements possessing high excitation and ionization energies, with the intent of enhancing the powers of detection for a number of elements; diagnostic studies of high-temperature plasmas to quantify their fundamental properties, with the ultimate aim of improving the analytical performance of atomic spectrometry; and development and characterization of new sample introduction systems that consume microliter or microgram quantities of sample, together with investigation of new membrane separators for stripping solvent from the sample aerosol to reduce various interferences and to enhance sensitivity in plasma spectrometry.

  7. Compact low temperature scanning tunneling microscope with in-situ sample preparation capability

    SciTech Connect

    Kim, Jungdae; Nam, Hyoungdo; Schroeder, Allan; Shih, Chih-Kang; Qin, Shengyong; Kim, Sang-ui; Eom, Daejin

    2015-09-15

    We report on the design of a compact low temperature scanning tunneling microscope (STM) having in-situ sample preparation capability. The in-situ sample preparation chamber was designed to be compact allowing quick transfer of samples to the STM stage, which is ideal for preparing temperature sensitive samples such as ultra-thin metal films on semiconductor substrates. Conventional spring suspensions on the STM head often cause mechanical issues. To address this problem, we developed a simple vibration damper consisting of welded metal bellows and rubber pads. In addition, we developed a novel technique to ensure an ultra-high-vacuum (UHV) seal between the copper and stainless steel, which provides excellent reliability for cryostats operating in UHV. The performance of the STM was tested from 2 K to 77 K by using epitaxial thin Pb films on Si. Very high mechanical stability was achieved with clear atomic resolution even when using cryostats operating at 77 K. At 2 K, a clean superconducting gap was observed, and the spectrum was easily fit using the BCS density of states with negligible broadening.
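
    The BCS fit mentioned at the end of this abstract can be sketched as follows; the gap value and the small Dynes broadening parameter are assumed for illustration, not taken from the paper.

```python
import cmath

def bcs_dos(E, delta, gamma=1e-4):
    """Normalized BCS quasiparticle density of states N(E)/N0 with a
    small Dynes broadening parameter gamma (all energies in meV)."""
    z = complex(E, -gamma)
    return abs((z / cmath.sqrt(z * z - delta * delta)).real)

delta = 1.35  # Pb-like gap in meV (assumed value, not from the paper)
for E in (0.0, 0.5, 1.0, 1.4, 2.0, 3.0):
    print(f"E = {E:3.1f} meV -> N(E)/N0 = {bcs_dos(E, delta):6.3f}")
```

    Inside the gap the density of states is essentially zero, it diverges at the gap edge, and it approaches the normal-state value at energies well above the gap.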

  8. Compact low temperature scanning tunneling microscope with in-situ sample preparation capability

    NASA Astrophysics Data System (ADS)

    Kim, Jungdae; Nam, Hyoungdo; Qin, Shengyong; Kim, Sang-ui; Schroeder, Allan; Eom, Daejin; Shih, Chih-Kang

    2015-09-01

    We report on the design of a compact low temperature scanning tunneling microscope (STM) having in-situ sample preparation capability. The in-situ sample preparation chamber was designed to be compact allowing quick transfer of samples to the STM stage, which is ideal for preparing temperature sensitive samples such as ultra-thin metal films on semiconductor substrates. Conventional spring suspensions on the STM head often cause mechanical issues. To address this problem, we developed a simple vibration damper consisting of welded metal bellows and rubber pads. In addition, we developed a novel technique to ensure an ultra-high-vacuum (UHV) seal between the copper and stainless steel, which provides excellent reliability for cryostats operating in UHV. The performance of the STM was tested from 2 K to 77 K by using epitaxial thin Pb films on Si. Very high mechanical stability was achieved with clear atomic resolution even when using cryostats operating at 77 K. At 2 K, a clean superconducting gap was observed, and the spectrum was easily fit using the BCS density of states with negligible broadening.

  9. Compact low temperature scanning tunneling microscope with in-situ sample preparation capability.

    PubMed

    Kim, Jungdae; Nam, Hyoungdo; Qin, Shengyong; Kim, Sang-ui; Schroeder, Allan; Eom, Daejin; Shih, Chih-Kang

    2015-09-01

    We report on the design of a compact low temperature scanning tunneling microscope (STM) having in-situ sample preparation capability. The in-situ sample preparation chamber was designed to be compact allowing quick transfer of samples to the STM stage, which is ideal for preparing temperature sensitive samples such as ultra-thin metal films on semiconductor substrates. Conventional spring suspensions on the STM head often cause mechanical issues. To address this problem, we developed a simple vibration damper consisting of welded metal bellows and rubber pads. In addition, we developed a novel technique to ensure an ultra-high-vacuum (UHV) seal between the copper and stainless steel, which provides excellent reliability for cryostats operating in UHV. The performance of the STM was tested from 2 K to 77 K by using epitaxial thin Pb films on Si. Very high mechanical stability was achieved with clear atomic resolution even when using cryostats operating at 77 K. At 2 K, a clean superconducting gap was observed, and the spectrum was easily fit using the BCS density of states with negligible broadening.

  10. Multiple sample characterization of coals and other substances by controlled-atmosphere programmed temperature oxidation

    DOEpatents

    LaCount, Robert B.

    1993-01-01

    A furnace with two hot zones holds multiple analysis tubes. Each tube has a separable sample-packing section positioned in the first hot zone and a catalyst-packing section positioned in the second hot zone. A mass flow controller is connected to the inlet of each sample tube, and gas is supplied to the mass flow controller. Oxygen is supplied through a mass flow controller to each tube, at the inlet of the sample-packing section, at an intermediate point between the two sections, or both, to intermingle with and oxidize the entrained gases evolved from the sample. Oxidation of those gases is completed in the catalyst-packing section of each tube. A thermocouple within the sample reduces the furnace temperature when an exothermic condition is sensed within the sample. Oxidized gases flow from the tube outlets to individual gas cells, which are sequentially aligned with an infrared detector that senses the composition and quantities of the gas components. Each elongated cell is tapered inward toward the center from the cell windows at its ends; volume is reduced relative to a conventional cell while permitting maximum interaction of the gas with the light beam. The reduced volume and the angling of the cell inlets allow rapid purging of the cell, shortening the cycle between detections. For coal and other high-molecular-weight samples, 50% to 100% oxygen is introduced into the tubes.

  11. On species sampling sequences induced by residual allocation models

    PubMed Central

    Rodríguez, Abel; Quintana, Fernando A.

    2014-01-01

    We discuss fully Bayesian inference in a class of species sampling models that are induced by residual allocation (sometimes called stick-breaking) priors on almost surely discrete random measures. This class provides a generalization of the well-known Ewens sampling formula that allows for additional flexibility while retaining computational tractability. In particular, the procedure is used to derive the exchangeable predictive probability functions associated with the generalized Dirichlet process of Hjort (2000) and the probit stick-breaking prior of Chung and Dunson (2009) and Rodriguez and Dunson (2011). The procedure is illustrated with applications to genetics and nonparametric mixture modeling. PMID:25477705
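
    The residual-allocation (stick-breaking) construction underlying these priors can be sketched for the simplest case, a Dirichlet process with Beta(1, alpha) sticks; the generalized Dirichlet process and probit stick-breaking priors discussed in the paper replace this Beta law with other stick distributions.

```python
import random

def stick_breaking(alpha, n_atoms, rng):
    """Draw truncated stick-breaking (residual-allocation) weights for a
    Dirichlet process with concentration parameter alpha."""
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        v = rng.betavariate(1.0, alpha)  # Beta(1, alpha) stick proportions
        weights.append(remaining * v)    # break off a piece of the stick
        remaining *= (1.0 - v)           # residual mass left to allocate
    return weights

rng = random.Random(7)
w = stick_breaking(alpha=2.0, n_atoms=50, rng=rng)
print(f"first 5 weights: {[round(x, 3) for x in w[:5]]}")
print(f"total mass of 50 sticks: {sum(w):.6f}")
```

    With 50 sticks the unallocated residual mass is negligible, so the truncation is a close approximation to the almost surely discrete random measure.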

  12. On species sampling sequences induced by residual allocation models.

    PubMed

    Rodríguez, Abel; Quintana, Fernando A

    2015-02-01

    We discuss fully Bayesian inference in a class of species sampling models that are induced by residual allocation (sometimes called stick-breaking) priors on almost surely discrete random measures. This class provides a generalization of the well-known Ewens sampling formula that allows for additional flexibility while retaining computational tractability. In particular, the procedure is used to derive the exchangeable predictive probability functions associated with the generalized Dirichlet process of Hjort (2000) and the probit stick-breaking prior of Chung and Dunson (2009) and Rodriguez and Dunson (2011). The procedure is illustrated with applications to genetics and nonparametric mixture modeling. PMID:25477705

  13. Geostatistical modeling of riparian forest microclimate and its implications for sampling

    USGS Publications Warehouse

    Eskelson, B.N.I.; Anderson, P.D.; Hagar, J.C.; Temesgen, H.

    2011-01-01

    Predictive models of microclimate under various site conditions in forested headwater stream–riparian areas are poorly developed, and sampling designs for characterizing underlying riparian microclimate gradients are sparse. We used riparian microclimate data collected at eight headwater streams in the Oregon Coast Range to compare ordinary kriging (OK), universal kriging (UK), and kriging with external drift (KED) for point prediction of mean maximum air temperature (Tair). Several topographic and forest structure characteristics were considered as site-specific parameters. Height above stream and distance to stream were the most important covariates in the KED models, which outperformed OK and UK in terms of root mean square error. Sample patterns were optimized based on the kriging variance and the weighted means of shortest distance criterion using the simulated annealing algorithm. The optimized sample patterns outperformed systematic sample patterns in terms of mean kriging variance mainly for small sample sizes. These findings suggest methods for increasing efficiency of microclimate monitoring in riparian areas.
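
    A minimal ordinary-kriging predictor shows the kriging system and kriging variance referred to above. The covariance model, its parameters, and the Tair observations below are assumed for illustration; the study additionally used UK and KED with topographic covariates.

```python
import numpy as np

def ordinary_krige(xy, z, xy0, sill=1.0, rng_par=50.0):
    """Ordinary-kriging point prediction with an exponential covariance
    model C(h) = sill * exp(-h / range). Returns (prediction, variance)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    C = sill * np.exp(-d / rng_par)
    # Lagrange multiplier row/column enforces unbiasedness (weights sum to 1).
    A = np.ones((n + 1, n + 1)); A[:n, :n] = C; A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = sill * np.exp(-np.linalg.norm(xy - xy0, axis=1) / rng_par)
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    pred = float(w @ z)
    krig_var = float(sill - w @ b[:n] - mu)   # kriging variance at xy0
    return pred, krig_var

# Hypothetical mean-maximum-air-temperature observations (deg C) at
# coordinates (m) near a stream.
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [20.0, 20.0]])
z = np.array([16.5, 17.0, 17.2, 18.4])
pred, kv = ordinary_krige(xy, z, np.array([5.0, 5.0]))
print(f"predicted Tair = {pred:.2f} C, kriging variance = {kv:.3f}")
```

    The kriging variance is what the sample-pattern optimization above minimizes; it is zero at observation locations and grows with distance from them.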

  14. Temperature calibration of lacustrine alkenones using in-situ sampling and growth cultures

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Toney, J. L.; Andersen, R.; Fritz, S. C.; Baker, P. A.; Grimm, E. C.; Theroux, S.; Amaral Zettler, L.; Nyren, P. E.

    2010-12-01

    Sedimentary alkenones have been found in an increasing number of lakes around the globe. Studies using molecular biological tools, however, indicate that the haptophyte species that produce lacustrine alkenones differ from the oceanic species. In order to convert alkenone unsaturation ratios measured in sediments into temperature, it is necessary to obtain an accurate calibration for individual lakes. Using Lake George, North Dakota, USA, as an example, we have carried out temperature calibrations by both in-situ water column sampling and culture growth experiments. In-situ measured lake water temperatures show a strong correlation with the alkenone unsaturation indices (r-squared = 0.82), indicating rapid equilibration of alkenone distributions with lake water temperature in the water column. We applied the in-situ calibration to down-core measurements for Lake George and generated realistic temperature estimates for the past 8 kyr. Algal isolation and culture growth, on the other hand, reveal the presence of two different types of alkenone-producing haptophytes. The species making a predominant C37:4 alkenone (species A) produced much greater concentrations of alkenones per unit volume than the species producing a predominant C37:3 alkenone (species B). This is the first time that a haptophyte species making a predominant C37:4 alkenone (species A) has been cultured successfully, and the cultures have now been replicated at four different growth temperatures. The distribution of alkenones in Lake George sediments matches extremely well with the alkenones produced by species A, indicating that species A is likely the producer of the alkenones in the sediments. The alkenone unsaturation ratio of species A shows a primary dependence on growth temperature as expected, but the slope of the relationship appears to vary with growth stage. The implications of our findings for paleoclimate reconstructions using lacustrine alkenones will be discussed.
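
    An in-situ calibration of the kind described reduces to a least-squares fit of water temperature against the unsaturation index; the (temperature, index) pairs below are invented for illustration, not the Lake George data.

```python
def linfit(xs, ys):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical water-column pairs: (lake temperature in C, unsaturation index).
pairs = [(8, -0.42), (12, -0.30), (16, -0.18), (20, -0.05), (24, 0.06)]
T = [p[0] for p in pairs]
U = [p[1] for p in pairs]

# Regress temperature on the index so the fit can be applied down-core.
slope, intercept = linfit(U, T)
t_est = slope * (-0.12) + intercept
print(f"index = -0.12 -> estimated temperature ~{t_est:.1f} C")
```
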

  15. Modeling temperature dependence of trace element concentrations in groundwater using temperature dependent distribution coefficient

    NASA Astrophysics Data System (ADS)

    Saito, H.; Saito, T.; Hamamoto, S.; Komatsu, T.

    2015-12-01

    In our previous study, we observed that trace element concentrations in groundwater increased when groundwater temperature was raised by constant thermal loading using a 50-m long vertical heat exchanger installed at Saitama University, Japan. During the field experiment, 38 °C fluid was circulated in the heat exchanger, resulting in 2.8 kW of thermal loading over 295 days. Groundwater samples were collected regularly from 17-m and 40-m deep aquifers at four observation wells located 1, 2, 5, and 10 m from the heat exchange well and were analyzed with ICP-MS. As a result, concentrations of some trace elements such as boron increased with temperature, especially in the 17-m deep aquifer, which is known to be marine sediment. The increased concentrations decreased after the thermal loading was terminated, indicating that this phenomenon may be reversible. Although the mechanism is not fully understood, changes in the liquid phase concentration should be associated with dissolution and/or desorption from the solid phase. We therefore attempt to model this phenomenon by introducing temperature dependence into equilibrium linear adsorption isotherms. We assume that distribution coefficients decrease with temperature, so that the liquid phase concentration of a given element becomes higher as the temperature increases under the condition that the total mass stays constant. A shape function was developed to model the temperature dependence of the distribution coefficient. By solving the mass balance equation between the liquid phase and the solid phase for a given element, a new term describing changes in the concentration was implemented in a source/sink term of a standard convection dispersion equation (CDE). The CDE was then solved under a constant groundwater flow using FlexPDE. By calibrating the parameters of the newly developed shape function, the observed changes in element concentrations were predicted quite well. The
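
    The closed mass balance behind this approach can be sketched as follows. The exponential shape function and all parameter values here are assumed placeholders, since the abstract does not give the paper's actual function.

```python
import math

def kd(temp_c, kd_ref=2.0, t_ref=17.0, beta=0.05):
    """Assumed shape function: distribution coefficient (L/kg) decaying
    exponentially with temperature (placeholder, not the paper's form)."""
    return kd_ref * math.exp(-beta * (temp_c - t_ref))

def liquid_conc(c_total, temp_c, theta=0.4, rho_b=1.6):
    """Liquid-phase concentration from the closed mass balance
    c_total = theta * C_l + rho_b * Kd(T) * C_l  (per unit bulk volume),
    with water content theta (-) and bulk density rho_b (kg/L)."""
    return c_total / (theta + rho_b * kd(temp_c))

for t in (17, 25, 33, 38):
    print(f"{t:2d} C: C_liquid = {liquid_conc(1.0, t):.4f} (arbitrary units)")
```

    Because Kd shrinks with temperature while the total mass is fixed, the liquid-phase concentration rises with heating and relaxes back when the loading stops, matching the reversibility observed in the field.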

  16. A Nonlinear Viscoelastic Model for Ceramics at High Temperatures

    NASA Technical Reports Server (NTRS)

    Powers, Lynn M.; Panoskaltsis, Vassilis P.; Gasparini, Dario A.; Choi, Sung R.

    2002-01-01

    High-temperature creep behavior of ceramics is characterized by nonlinear time-dependent responses, asymmetric behavior in tension and compression, and nucleation and coalescence of voids leading to creep rupture. Moreover, creep rupture experiments show considerable scatter or randomness in fatigue lives of nominally equal specimens. To capture the nonlinear, asymmetric time-dependent behavior, the standard linear viscoelastic solid model is modified. Nonlinearity and asymmetry are introduced in the volumetric components by using a nonlinear function similar to a hyperbolic sine function but modified to model asymmetry. The nonlinear viscoelastic model is implemented in an ABAQUS user material subroutine. To model the random formation and coalescence of voids, each element is assigned a failure strain sampled from a lognormal distribution. An element is deleted when its volumetric strain exceeds its failure strain. Element deletion has been implemented within ABAQUS. Temporal increases in strains produce a sequential loss of elements (a model for void nucleation and growth), which in turn leads to failure. Nonlinear viscoelastic model parameters are determined from uniaxial tensile and compressive creep experiments on silicon nitride. The model is then used to predict the deformation of four-point bending and ball-on-ring specimens. Simulation is used to predict statistical moments of creep rupture lives. Numerical simulation results compare well with results of experiments of four-point bending specimens. The analytical model is intended to be used to predict the creep rupture lives of ceramic parts in arbitrary stress conditions.
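
    The stochastic element-deletion idea, assigning each element a failure strain sampled from a lognormal distribution and deleting it once its volumetric strain exceeds that value, can be sketched as follows; the lognormal parameters are assumed values, not the silicon nitride fit.

```python
import random

# Each finite element gets a failure strain drawn from a lognormal
# distribution (parameters assumed for illustration).
rng = random.Random(42)
mu, sigma = -6.0, 0.3
failure_strains = [rng.lognormvariate(mu, sigma) for _ in range(1000)]

def surviving_fraction(applied_strain):
    """Fraction of elements not yet deleted at a given volumetric strain."""
    return sum(e > applied_strain for e in failure_strains) / len(failure_strains)

for s in (1e-3, 2e-3, 3e-3, 4e-3):
    print(f"strain {s:.0e}: {surviving_fraction(s):5.1%} of elements survive")
```

    As the applied strain grows, elements drop out sequentially, which is the mechanism the model uses to represent void nucleation, growth, and eventual creep rupture.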

  17. Long-term room temperature preservation of corpse soft tissue: an approach for tissue sample storage

    PubMed Central

    2011-01-01

    Background Disaster victim identification (DVI) represents one of the most difficult challenges in forensic sciences, and subsequent DNA typing is essential. Collected samples for DNA-based human identification are usually stored at low temperature to halt the degradation processes of human remains. We have developed a simple and reliable procedure for soft tissue storage and preservation for DNA extraction. It ensures high quality DNA suitable for PCR-based DNA typing after at least 1 year of room temperature storage. Methods Fragments of human psoas muscle were exposed to three different environmental conditions for diverse time periods at room temperature. Storage conditions included: (a) a preserving medium consisting of solid sodium chloride (salt), (b) no additional substances and (c) garden soil. DNA was extracted with proteinase K/SDS followed by organic solvent treatment and concentration by centrifugal filter devices. Quantification was carried out by real-time PCR using commercial kits. Short tandem repeat (STR) typing profiles were analysed with 'expert software'. Results DNA quantities recovered from samples stored in salt were similar up to the complete storage time and underscored the effectiveness of the preservation method. It was possible to reliably and accurately type different genetic systems including autosomal STRs and mitochondrial and Y-chromosome haplogroups. Autosomal STR typing quality was evaluated by expert software, denoting high quality profiles from DNA samples obtained from corpse tissue stored in salt for up to 365 days. Conclusions The procedure proposed herein is a cost efficient alternative for storage of human remains in challenging environmental areas, such as mass disaster locations, mass graves and exhumations. This technique should be considered as an additional method for sample storage when preservation of DNA integrity is required for PCR-based DNA typing. PMID:21846338

  18. Modeling 3D faces from samplings via compressive sensing

    NASA Astrophysics Data System (ADS)

    Sun, Qi; Tang, Yanlong; Hu, Ping

    2013-07-01

    3D data is easier to acquire for family entertainment purposes today because of the mass production, low cost, and portability of domestic RGBD sensors, e.g., Microsoft Kinect. However, the accuracy of facial modeling is affected by the roughness and instability of the raw input data from such sensors. To overcome this problem, we introduce a compressive sensing (CS) method to build a novel 3D super-resolution scheme that reconstructs high-resolution facial models from rough samples captured by Kinect. Unlike simple frame-fusion super-resolution methods, this approach acquires compressed samples for storage before a high-resolution image is produced. In this scheme, depth frames are first captured, and each of them is measured into compressed samples using sparse coding. Next, the samples are fused to produce an optimal one, and finally a high-resolution image is recovered from the fused sample. This framework is able to recover the 3D facial model of a given user from compressed samples, which can reduce storage space as well as measurement cost in future devices, e.g., single-pixel depth cameras. Hence, this work can potentially be applied to future applications, such as access control systems using face recognition and smartphones with depth cameras, which require high resolution and short measurement times.
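
    A generic sparse-recovery step of the kind CS schemes rely on can be sketched with orthogonal matching pursuit; this is a stand-in illustration, since the abstract does not specify the paper's exact sparse-coding and fusion algorithms.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x
    by greedily selecting the atom most correlated with the residual."""
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # measurement matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 0.7]             # sparse signal
y = A @ x_true                                      # compressed samples
x_hat = omp(A, y, k=3)
print(f"recovery error: {np.linalg.norm(x_hat - x_true):.2e}")
```

    Forty random measurements suffice here to recover a 3-sparse signal in 100 dimensions, which is the storage saving the scheme above exploits.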

  19. Modeling Background Attenuation by Sample Matrix in Gamma Spectrometric Analyses

    SciTech Connect

    Bastos, Rodrigo O.; Appoloni, Carlos R.

    2008-08-07

    In laboratory gamma spectrometric analyses, the usual procedures for estimating background overestimate it. If an empty container similar to that used to hold samples is measured, the background attenuation by the sample matrix is not taken into account. If a 'blank' sample is measured, the hypothesis that this sample is free of radionuclides is generally not true; its activity is frequently sufficient to mask or overwhelm the attenuation effect, so the background remains overestimated. To overcome this problem, a model was developed to obtain the attenuated background from the spectrum acquired with the empty container. Beyond reasonable hypotheses, the model presumes knowledge of the linear attenuation coefficient of the samples and its dependence on photon energy and sample density. An evaluation of the effect of this model on the Lowest Limit of Detection (LLD) is presented for geological samples placed in cylindrical containers that completely cover the top of an HPGe detector of 66% relative efficiency. Results are presented for energies in the range of 63 to 2614 keV, for sample densities varying from 1.5 to 2.5 g·cm⁻³, and for material heights on the detector of 2 cm and 5 cm. For a sample density of 2.0 g·cm⁻³ and a 2 cm height, the method allowed a lowering of the LLD by 3.4% at 1460 keV (⁴⁰K), 3.9% at 911 keV (²²⁸Ac), 4.5% at 609 keV (²¹⁴Bi), and 8.3% at 92 keV (²³⁴Th). For a sample density of 1.75 g·cm⁻³ and a 5 cm height, the method indicates a lowering of the LLD by 6.5%, 7.4%, 8.3%, and 12.9% at the same respective energies.
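
    The core of the attenuation correction is Beer-Lambert transmission through the sample matrix. The sketch below uses rough assumed mass attenuation coefficients for a silicate-like matrix, not values from the paper; it only illustrates why low-energy background lines are attenuated most.

```python
import math

# Rough assumed mass attenuation coefficients (cm^2/g) for a silicate
# matrix at the energies discussed in the abstract.
MU_RHO = {92: 0.20, 609: 0.080, 911: 0.065, 1460: 0.053}  # keV -> cm^2/g

def transmitted_fraction(energy_kev, density, height_cm):
    """Beer-Lambert fraction I/I0 = exp(-(mu/rho) * rho * x) for a
    background photon crossing the full sample height."""
    return math.exp(-MU_RHO[energy_kev] * density * height_cm)

for e_kev in sorted(MU_RHO):
    f = transmitted_fraction(e_kev, density=2.0, height_cm=2.0)
    print(f"{e_kev:5d} keV: background transmitted through matrix = {f:.2f}")
```

    Lower energies are absorbed more strongly, which is consistent with the larger LLD reduction reported at 92 keV than at 1460 keV.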

  20. Automated biowaste sampling system urine subsystem operating model, part 1

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.; Mangialardi, J. K.; Rosen, F.

    1973-01-01

    The urine subsystem automatically provides for the collection, volume sensing, and sampling of urine from six subjects during space flight. Verification of the subsystem design was a primary objective of the current effort, which was accomplished through the detailed design, fabrication, and verification testing of an operating model of the subsystem.

  1. Language Arts Curriculum Framework: Sample Curriculum Model, Grade 4.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    Based on the 1998 Arkansas State Language Arts Framework, this sample curriculum model for grade four language arts is divided into sections focusing on writing; listening, speaking, and viewing; and reading. Each section lists standards; benchmarks; assessments; and strategies/activities. The reading section itself is divided into print…

  2. Language Arts Curriculum Framework: Sample Curriculum Model, Grade 3.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    Based on the 1998 Arkansas State Language Arts Framework, this sample curriculum model for grade three language arts is divided into sections focusing on writing; listening, speaking, and viewing; and reading. Each section lists standards; benchmarks; assessments; and strategies/activities. The reading section itself is divided into print…

  3. A three stage sampling model for remote sensing applications

    NASA Technical Reports Server (NTRS)

    Eisgruber, L. M.

    1972-01-01

    A conceptual model and an empirical application of the relationship between the manner of selecting observations and its effect on the precision of estimates from remote sensing are reported. This three stage sampling scheme considers flightlines, segments within flightlines, and units within these segments. The error of estimate is dependent on the number of observations in each of the stages.
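
    The dependence of the error of estimate on how observations are allocated across the three stages can be illustrated with a small Monte-Carlo sketch; the variance components are assumed values, chosen only to show the effect of allocation.

```python
import random
import statistics

# Toy three-stage scheme: flightlines -> segments -> units.
rng = random.Random(1)

def survey_mean(n_lines, n_segs, n_units):
    """Mean of one simulated survey with nested random effects."""
    vals = []
    for _ in range(n_lines):
        line = rng.gauss(100, 8)              # flightline-level effect
        for _ in range(n_segs):
            seg = rng.gauss(line, 4)          # segment-within-line effect
            vals.extend(rng.gauss(seg, 2) for _ in range(n_units))
    return statistics.mean(vals)

def sd_of_estimate(n_lines, n_segs, n_units, reps=300):
    """Monte-Carlo standard error of the survey mean."""
    return statistics.stdev(survey_mean(n_lines, n_segs, n_units)
                            for _ in range(reps))

# Same total of 24 units, allocated two different ways:
sd_few_lines = sd_of_estimate(3, 2, 4)
sd_many_lines = sd_of_estimate(6, 2, 2)
print(f"3 lines x 2 segs x 4 units: SE ~ {sd_few_lines:.2f}")
print(f"6 lines x 2 segs x 2 units: SE ~ {sd_many_lines:.2f}")
```

    With the between-flightline variance dominating, spreading the same number of units over more flightlines yields a more precise estimate, which is the kind of trade-off the three-stage model quantifies.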

  4. Water adsorption at high temperature on core samples from The Geysers geothermal field

    SciTech Connect

    Gruszkiewicz, M.S.; Horita, J.; Simonson, J.M.; Mesmer, R.E.

    1998-06-01

    The quantity of water retained by rock samples taken from three wells located in The Geysers geothermal reservoir, California, was measured at 150, 200, and 250 °C as a function of pressure in the range 0.00 ≤ p/p₀ ≤ 0.98, where p₀ is the saturated water vapor pressure. Both adsorption (increasing pressure) and desorption (decreasing pressure) runs were made in order to investigate the nature and the extent of the hysteresis. Additionally, low-temperature gas adsorption analyses were performed on the same rock samples. Nitrogen or krypton adsorption and desorption isotherms at 77 K were used to obtain BET specific surface areas, pore volumes, and their distributions with respect to pore size. Mercury intrusion porosimetry was also used to obtain similar information extending to very large pores (macropores). A qualitative correlation was found between the surface properties obtained from nitrogen adsorption and the mineralogical and petrological characteristics of the solids. However, there is in general no proportionality between BET specific surface areas and the capacity of the rocks for water adsorption at high temperatures. The results indicate that multilayer adsorption rather than capillary condensation is the dominant water storage mechanism at high temperatures.
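
    The BET specific surface areas mentioned here come from the linearized BET plot over low relative pressures. The sketch below recovers the monolayer capacity from a synthetic isotherm; all parameter values are assumed for illustration.

```python
def bet_v(x, vm, c):
    """BET isotherm: adsorbed amount v at relative pressure x = p/p0,
    with monolayer capacity vm and BET constant c."""
    return vm * c * x / ((1 - x) * (1 + (c - 1) * x))

xs = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30]          # BET-valid p/p0 range
vs = [bet_v(x, vm=2.5, c=80.0) for x in xs]        # synthetic isotherm

# Linearized BET form: 1/(v*((1/x)-1)) = (c-1)/(vm*c) * x + 1/(vm*c)
ys = [1.0 / (v * ((1.0 / x) - 1.0)) for x, v in zip(xs, vs)]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx
vm_fit = 1.0 / (slope + intercept)
print(f"recovered monolayer capacity vm = {vm_fit:.3f} (true 2.5)")
```

    The specific surface area then follows from vm, the adsorbate cross-sectional area, and Avogadro's number (omitted here).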

  5. Water adsorption at high temperature on core samples from The Geysers geothermal field

    SciTech Connect

    Gruszkiewicz, M.S.; Horita, J.; Simonson, J.M.; Mesmer, R.E.

    1998-06-01

    The quantity of water retained by rock samples taken from three wells located in The Geysers geothermal field, California, was measured at 150, 200, and 250 °C as a function of steam pressure in the range 0.00 ≤ p/p₀ ≤ 0.98, where p₀ is the saturated water vapor pressure. Both adsorption and desorption runs were made in order to investigate the extent of the hysteresis. Additionally, low-temperature gas adsorption analyses were made on the same rock samples. Mercury intrusion porosimetry was also used to obtain similar information extending to very large pores (macropores). A qualitative correlation was found between the surface properties obtained from nitrogen adsorption and the mineralogical and petrological characteristics of the solids. However, there was no direct correlation between BET specific surface areas and the capacity of the rocks for water adsorption at high temperatures. The hysteresis decreased significantly at 250 °C. The results indicate that multilayer adsorption, rather than capillary condensation, is the dominant water storage mechanism at high temperatures.

  6. A temperature dependent SPICE macro-model for power MOSFETs

    SciTech Connect

    Pierce, D.G.

    1991-01-01

    A power MOSFET SPICE macro-model has been developed that is suitable for use over the temperature range −55 to 125 °C. The model comprises a single parameter set, with temperature dependence accessed through the SPICE .TEMP card. SPICE parameter extraction techniques for the model and the model's predictive accuracy are discussed. 7 refs., 8 figs., 1 tab.
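
    Typical temperature scalings used in such macro-models, a linear threshold-voltage coefficient and a (T/T0)^(-3/2) mobility law, can be sketched as follows; these are generic textbook forms with assumed parameter values, not the specific model of this report.

```python
def vth(temp_c, vth25=3.0, tc=-5e-3):
    """Threshold voltage (V) with a linear temperature coefficient."""
    return vth25 + tc * (temp_c - 25.0)

def kp(temp_c, kp25=2.0, ex=-1.5):
    """Transconductance parameter (A/V^2) scaled by the mobility law
    (T/T0)^-1.5 with T in kelvin."""
    return kp25 * ((temp_c + 273.15) / 298.15) ** ex

def id_sat(vgs, temp_c):
    """Square-law saturation drain current at gate-source voltage vgs."""
    v_ov = vgs - vth(temp_c)
    return 0.0 if v_ov <= 0 else 0.5 * kp(temp_c) * v_ov ** 2

for t in (-55, 25, 125):
    print(f"{t:4d} C: Vth = {vth(t):.2f} V, "
          f"Idsat(Vgs=6 V) = {id_sat(6.0, t):.2f} A")
```

    The two effects pull in opposite directions (threshold drops while mobility degrades with temperature), which is why a single consistent parameter set over the full range is the hard part of such a macro-model.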

  7. Regolith layering processes based on studies of low-temperature volatile elements in Apollo core samples

    NASA Technical Reports Server (NTRS)

    Jovanovic, S.; Reed, G. W., Jr.

    1979-01-01

    The concentrations of Hg released at no more than 130 °C increase with depth in near-surface samples from cores. This is a response to the daytime thermal gradient, with temperatures of approximately 400 K at the surface decreasing to approximately 250 K at depths greater than 10 cm (Keihm and Langseth, 1973). The steepness of the slopes and the depths to which the concentration gradients extend appear to be determined by the color, density, and possibly the grain size of the soils. Earlier surface layers can be identified and are, in general, in agreement with other indicators of such layers. Low-temperature volatilized Br exhibits trends that parallel those of Hg in a number of cases; this is also true of the Br and Hg fractions released in stepwise heating experiments at higher temperatures. The coherence, especially in the higher temperature fractions, between these chemically dissimilar elements implies a common physical process of entrapment, possibly related to the presence of vapor deposits on surfaces and to the opening and closing of microcracks and pores.

  8. Simulating canopy temperature for modelling heat stress in cereals

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Crop models must be improved to account for the large effects of heat stress on crop yields. To date, most approaches in crop models use air temperature, despite evidence that crop canopy temperature better explains yield reductions associated with high-temperature events. This study presents...

  9. Low conductive support for thermal insulation of a sample holder of a variable temperature scanning tunneling microscope

    NASA Astrophysics Data System (ADS)

    Hanzelka, Pavel; Vonka, Jakub; Musilova, Vera

    2013-08-01

    We have designed a supporting system to fix a sample holder of a scanning tunneling microscope in a UHV chamber at room temperature. The microscope will operate down to a temperature of 20 K. Low thermal conductance, high mechanical stiffness, and small dimensions are the main features of the supporting system. Three sets of four glass balls placed at the vertices of a tetrahedron are used for thermal insulation, which relies on the small contact areas between the glass balls. We have analyzed the thermal conductivity of the contacts between the balls themselves and between a ball and a metallic plate, and applied the results to the entire support. The calculation, based on a simple model of the setup, has been verified by experimental measurements. In comparison with other feasible supporting structures, the designed support has the lowest thermal conductance.
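
    The combination of point-contact conductances in such a support reduces to series and parallel sums; the per-contact conductance and the three-contacts-per-path assumption below are placeholders for illustration, not values from the paper.

```python
def series(*gs):
    """Thermal conductances in series: 1/G = sum(1/g_i)."""
    return 1.0 / sum(1.0 / g for g in gs)

def parallel(*gs):
    """Thermal conductances in parallel: G = sum(g_i)."""
    return sum(gs)

g_contact = 5e-4   # W/K per ball contact (assumed placeholder)

# One heat path through a set of balls is modeled here as three point
# contacts in series; the three sets of balls act in parallel.
g_leg = series(g_contact, g_contact, g_contact)
g_support = parallel(g_leg, g_leg, g_leg)
print(f"per-path G = {g_leg:.2e} W/K, total support G = {g_support:.2e} W/K")
```

    Each series contact divides the path conductance, which is why stacking small-area ball contacts gives such a low overall conductance despite the stiff mechanical coupling.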

  11. Data augmentation for models based on rejection sampling

    PubMed Central

    Rao, Vinayak; Lin, Lizhen; Dunson, David B.

    2016-01-01

    We present a data augmentation scheme to perform Markov chain Monte Carlo inference for models where data generation involves a rejection sampling algorithm. Our idea is a simple scheme to instantiate the rejected proposals preceding each data point. The resulting joint probability over observed and rejected variables can be much simpler than the marginal distribution over the observed variables, which often involves intractable integrals. We consider three problems: modelling flow-cytometry measurements subject to truncation; the Bayesian analysis of the matrix Langevin distribution on the Stiefel manifold; and Bayesian inference for a nonparametric Gaussian process density model. The latter two are instances of doubly-intractable Markov chain Monte Carlo problems, where evaluating the likelihood is intractable. Our experiments demonstrate superior performance over state-of-the-art sampling algorithms for such problems. PMID:27279660
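    The augmentation idea is easy to sketch for the simplest case the abstract mentions, truncation: the observed data are the accepted draws of a rejection sampler, and instantiating the rejected proposals preceding each data point restores a tractable joint. A minimal Python toy follows; the truncated-Gaussian setting, conjugate prior, and all parameter values are my own assumptions for illustration, not the authors' code.

    ```python
    import math
    import random

    random.seed(1)

    # Toy setting: observations are draws from N(mu, 1) retained only when
    # positive (rejection sampling with acceptance test x > 0).  The marginal
    # likelihood of accepted points involves the normalizer Phi(mu), but the
    # joint over accepted AND rejected proposals is a plain Gaussian product.
    TRUE_MU = 2.0
    data = []
    while len(data) < 200:
        x = random.gauss(TRUE_MU, 1.0)
        if x > 0:
            data.append(x)

    def phi(z):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def sample_mu(values, prior_var=100.0):
        # Conjugate update: each value ~ N(mu, 1), prior mu ~ N(0, prior_var)
        prec = 1.0 / prior_var + len(values)
        mean = sum(values) / prec
        return random.gauss(mean, math.sqrt(1.0 / prec))

    mu = 0.0
    draws = []
    for it in range(400):
        # Step 1: instantiate the rejected proposals preceding each data point
        augmented = list(data)
        p_accept = phi(mu)  # P(x > 0 | mu)
        for _ in data:
            # Rejections before an acceptance are Geometric(p_accept)
            u = 1.0 - random.random()  # u in (0, 1]
            k = int(math.log(u) / math.log(1.0 - p_accept)) if p_accept < 1.0 else 0
            for _ in range(k):  # each rejection ~ N(mu, 1) truncated to x <= 0
                while True:
                    r = random.gauss(mu, 1.0)
                    if r <= 0:
                        augmented.append(r)
                        break
        # Step 2: with rejections filled in, the update for mu is conjugate
        mu = sample_mu(augmented)
        if it >= 100:
            draws.append(mu)

    est = sum(draws) / len(draws)
    ```

    The posterior mean recovers the untruncated location parameter, which a naive analysis of the truncated sample would overestimate.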

  12. Far-infrared Dust Temperatures and Column Densities of the MALT90 Molecular Clump Sample

    NASA Astrophysics Data System (ADS)

    Guzmán, Andrés E.; Sanhueza, Patricio; Contreras, Yanett; Smith, Howard A.; Jackson, James M.; Hoq, Sadia; Rathborne, Jill M.

    2015-12-01

    We present dust column densities and dust temperatures for ˜3000 young, high-mass molecular clumps from the Millimeter Astronomy Legacy Team 90 GHz survey, derived by fitting single-temperature dust emission models to the far-infrared intensity maps measured between 160 and 870 μm by the Herschel Infrared Galactic Plane Survey (Hi-GAL) and the APEX Telescope Large Area Survey of the Galaxy (ATLASGAL). We discuss the methodology employed in analyzing the data, calculating physical parameters, and estimating their uncertainties. The population-averaged dust temperatures of the clumps are 16.8 ± 0.2 K for the clumps that do not exhibit mid-infrared signatures of star formation (quiescent clumps), 18.6 ± 0.2 K for the clumps that display mid-infrared signatures of ongoing star formation but have not yet developed an H ii region (protostellar clumps), and 23.7 ± 0.2 and 28.1 ± 0.3 K for clumps associated with H ii and photo-dissociation regions, respectively. These four groups exhibit large overlaps in their temperature distributions, with dispersions ranging between 4 and 6 K. The median of the peak column densities of the protostellar clump population is 0.20 ± 0.02 g cm-2, which is about 50% higher than the median of the peak column densities associated with clumps in the other evolutionary stages. We compare the dust temperatures and column densities measured toward the center of the clumps with the mean values of each clump. We find that in the quiescent clumps, the dust temperature increases toward the outer regions and that these clumps are associated with the shallowest column density profiles. In contrast, molecular clumps in the protostellar or H ii region phase have dust temperature gradients more consistent with internal heating and are associated with steeper column density profiles compared with the quiescent clumps.
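    A single-temperature dust emission ("greybody") fit of the kind described can be sketched as follows. The band set, opacity index β, reference frequency, amplitude, and simple grid-search fit are illustrative assumptions, not the survey team's pipeline.

    ```python
    import math

    # Physical constants (SI)
    H = 6.626e-34; C = 2.998e8; KB = 1.381e-23

    def planck(nu, temp):
        # Planck function B_nu(T)
        return 2.0 * H * nu**3 / C**2 / (math.exp(H * nu / (KB * temp)) - 1.0)

    def greybody(nu, temp, beta=1.7, nu0=1.2e12):
        # Optically thin modified blackbody: flux ~ (nu/nu0)^beta * B_nu(T)
        return (nu / nu0) ** beta * planck(nu, temp)

    # Bands roughly spanning the 160-870 um range used in the survey
    wavelengths_um = [160.0, 250.0, 350.0, 500.0, 870.0]
    freqs = [C / (w * 1e-6) for w in wavelengths_um]

    # Synthetic clump: T = 18 K, arbitrary amplitude
    T_TRUE, AMP_TRUE = 18.0, 3.0e22
    fluxes = [AMP_TRUE * greybody(nu, T_TRUE) for nu in freqs]

    def fit_temperature(fluxes, freqs, beta=1.7):
        best_sse, best_t = float("inf"), None
        t = 8.0
        while t <= 40.0:
            model = [greybody(nu, t, beta) for nu in freqs]
            # Best-fit amplitude for this T in closed form (linear least squares)
            amp = sum(f * m for f, m in zip(fluxes, model)) / sum(m * m for m in model)
            sse = sum((f - amp * m) ** 2 for f, m in zip(fluxes, model))
            if sse < best_sse:
                best_sse, best_t = sse, t
            t = round(t + 0.1, 1)
        return best_t

    t_fit = fit_temperature(fluxes, freqs)
    ```

    With noise-free synthetic fluxes the grid search recovers the input temperature exactly; with real maps the fit would be weighted by per-band uncertainties.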

  13. Study of Low Temperature Baking Effect on Field Emission on Nb Samples Treated by BEP, EP, and BCP

    SciTech Connect

    Andy Wu, Song Jin, Robert Rimmer, Xiang Yang Lu, K. Zhao, Laura MacIntyre, Robert Ike

    2010-05-01

    Field emission is still one of the major obstacles preventing Nb superconducting radio frequency (SRF) cavities from routinely reaching the accelerating gradient of 35 MV/m required for the International Linear Collider. Nowadays, the well-known low temperature baking at 120 °C for 48 hours is a common procedure used in the SRF community to improve the high field Q slope. However, some cavity production data have shown that low temperature baking may induce field emission in cavities treated by EP. On the other hand, an earlier study of field emission on Nb flat samples treated by BCP reached the opposite conclusion. In this presentation, preliminary measurements of Nb flat samples treated by BEP, EP, and BCP via our unique home-made scanning field emission microscope, before and after low temperature baking, are reported. Some correlations between surface smoothness and the number of observed field emitters were found. The observed experimental results can be understood, at least partially, by a simple model that involves the change in thickness of the pentoxide layer on Nb surfaces.

  14. Measuring body temperature time series regularity using Approximate Entropy and Sample Entropy.

    PubMed

    Cuesta-Frau, D; Miro-Martinez, P; Oltra-Crespo, S; Varela-Entrecanales, M; Aboy, M; Novak, D; Austin, D

    2009-01-01

    Approximate Entropy (ApEn) and Sample Entropy (SampEn) have proven to be valuable analysis tools for a number of physiological signals. However, the characterization of these metrics is still lacking. We applied ApEn and SampEn to body temperature time series recorded from patients in critical condition. This study aimed to find the optimal analytical configuration to best distinguish between survivor and non-survivor records, and to gain additional insight into the characterization of these tools. A statistical analysis of the results was conducted to support the parameter and metric selection criteria for this type of physiological signal.
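    Sample Entropy has a compact standard definition that can be sketched directly. The parameter choices below (m = 2, r = 0.2·SD) follow common practice, and the two test series are synthetic stand-ins for regular and irregular temperature records.

    ```python
    import math
    import random
    import statistics

    def sample_entropy(series, m=2, r=None):
        # SampEn(m, r) = -ln(A/B): B counts pairs of length-m templates within
        # Chebyshev distance r (self-matches excluded), A does the same for m+1.
        if r is None:
            r = 0.2 * statistics.pstdev(series)
        n = len(series)
        limit = n - m  # same template set for both lengths (standard convention)

        def count_matches(tlen):
            count = 0
            for i in range(limit):
                for j in range(i + 1, limit):
                    if max(abs(series[i + k] - series[j + k])
                           for k in range(tlen)) <= r:
                        count += 1
            return count

        b = count_matches(m)
        a = count_matches(m + 1)
        return -math.log(a / b) if a > 0 and b > 0 else float("inf")

    random.seed(0)
    regular = [math.sin(0.4 * i) for i in range(200)]    # predictable oscillation
    irregular = [random.random() for _ in range(200)]    # white noise

    e_reg = sample_entropy(regular)
    e_irr = sample_entropy(irregular)
    ```

    A regular series yields a lower SampEn than an irregular one, which is the property exploited when comparing patient records.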

  15. Decision Models for Determining the Optimal Life Test Sampling Plans

    NASA Astrophysics Data System (ADS)

    Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.

    2010-11-01

    A life test sampling plan is a technique consisting of sampling, inspection, and decision making to determine the acceptance or rejection of a batch of products by experiments examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution, or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential, and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan not only helps producers save testing time and reduce testing cost, but also positively affects the image of the product, and thus attracts more consumers to buy it. This paper develops frequentist (non-Bayesian) decision models for determining the optimal life test sampling plans with an aim of cost minimization, by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions, each with unknown parameters, are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.
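    The cost-minimizing choice of a failure threshold can be illustrated with a deliberately simplified expected-cost calculation. Known exponential parameters, a two-quality batch mix, and all cost figures are assumptions for illustration only; the paper itself treats the distribution parameters as unknown.

    ```python
    import math

    def binom_cdf(k, n, p):
        # P(X <= k) for X ~ Binomial(n, p)
        return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i)
                   for i in range(k + 1))

    def expected_cost(c, n, t, theta_good, theta_bad, w_good,
                      cost_reject_good, cost_accept_bad, cost_per_unit):
        # Exponential lifetimes: P(fail during a test of length t) = 1 - exp(-t/theta)
        p_good = 1.0 - math.exp(-t / theta_good)
        p_bad = 1.0 - math.exp(-t / theta_bad)
        acc_good = binom_cdf(c, n, p_good)   # accept iff failures <= c
        acc_bad = binom_cdf(c, n, p_bad)
        return (n * cost_per_unit                             # testing cost
                + w_good * (1.0 - acc_good) * cost_reject_good   # good batch rejected
                + (1.0 - w_good) * acc_bad * cost_accept_bad)    # bad batch accepted

    n, t = 20, 100.0
    costs = {c: expected_cost(c, n, t, theta_good=1000.0, theta_bad=200.0,
                              w_good=0.7, cost_reject_good=500.0,
                              cost_accept_bad=2000.0, cost_per_unit=1.0)
             for c in range(n + 1)}
    best_c = min(costs, key=costs.get)
    ```

    The optimum sits strictly between "reject on any failure" (c = 0) and "accept always" (c = n), which is the trade-off the decision models formalize.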

  16. Optimizing the implementation of the target motion sampling temperature treatment technique - How fast can it get?

    SciTech Connect

    Tuomas, V.; Jaakko, L.

    2013-07-01

    This article discusses the optimization of the target motion sampling (TMS) temperature treatment method, previously implemented in the Monte Carlo reactor physics code Serpent 2. The TMS method was introduced in [1] and the first practical results were presented at the PHYSOR 2012 conference [2]. The method is a stochastic method for taking the effect of thermal motion into account on-the-fly in a Monte Carlo neutron transport calculation. It is based on sampling the target velocities at collision sites and then utilizing the 0 K cross sections in the target-at-rest frame for reaction sampling. The fact that the total cross section becomes a distributed quantity is handled using rejection sampling techniques. The original implementation of TMS requires 2.0 times more CPU time in a PWR pin-cell case than a conventional Monte Carlo calculation relying on pre-broadened effective cross sections. In an HTGR case examined in this paper the overhead factor is as high as 3.6. By first changing from a multi-group to a continuous-energy implementation and then fine-tuning a parameter affecting the conservativeness of the majorant cross section, it is possible to decrease the overhead factors to 1.4 and 2.3, respectively. Preliminary calculations are also made using a new and as yet incomplete optimization method in which the temperature of the basis cross section is increased above 0 K. It seems that with the new approach it may be possible to decrease the factors to as low as 1.06 and 1.33, respectively, but its functionality has not yet been proven. Therefore, these performance measures should be considered preliminary. (authors)
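    The core TMS idea, sampling a target velocity and accepting the collision with probability σ/σ_majorant, can be sketched in one dimension with a toy cross section. All physics here is illustrative (no flux weighting, bounded toy σ, arbitrary units), not Serpent 2 code.

    ```python
    import random

    random.seed(7)

    def sigma_0k(v_rel):
        # Toy 0 K cross section, bounded so a global majorant exists
        return 1.0 / (1.0 + v_rel)

    SIGMA_MAJ = 1.0          # majorant: sigma_0k(v) <= 1 for all v >= 0
    V_NEUTRON = 3.0          # neutron speed (arbitrary units)
    THERMAL_SD = 0.5         # 1-D thermal velocity spread of the target

    N = 200_000
    accepted = 0
    brute = 0.0
    for _ in range(N):
        v_t = random.gauss(0.0, THERMAL_SD)   # sample a target velocity
        v_rel = abs(V_NEUTRON - v_t)
        s = sigma_0k(v_rel)
        brute += s
        # TMS-style rejection: keep the sampled target velocity with
        # probability sigma / sigma_majorant
        if random.random() < s / SIGMA_MAJ:
            accepted += 1

    sigma_rejection = SIGMA_MAJ * accepted / N   # estimate via acceptance rate
    sigma_brute = brute / N                      # direct Monte Carlo average
    ```

    Both estimators target the same thermally averaged cross section; the rejection form is what lets the transport code avoid pre-broadened cross-section tables, at the cost of the overhead factors the paper sets out to reduce.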

  17. Temperature distributions in the laser-heated diamond anvil cell from 3-D numerical modeling

    SciTech Connect

    Rainey, E. S. G.; Kavner, A.; Hernlund, J. W.

    2013-11-28

    We present TempDAC, a 3-D numerical model for calculating the steady-state temperature distribution for continuous wave laser-heated experiments in the diamond anvil cell. TempDAC solves the steady heat conduction equation in three dimensions over the sample chamber, gasket, and diamond anvils and includes material-, temperature-, and direction-dependent thermal conductivity, while allowing for flexible sample geometries, laser beam intensity profile, and laser absorption properties. The model has been validated against an axisymmetric analytic solution for the temperature distribution within a laser-heated sample. Example calculations illustrate the importance of considering heat flow in three dimensions for the laser-heated diamond anvil cell. In particular, we show that a “flat top” input laser beam profile does not lead to a more uniform temperature distribution or flatter temperature gradients than a wide Gaussian laser beam.
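    A minimal relaxation solver for the steady heat conduction equation with a Gaussian heating term conveys the numerical core of such a model. Uniform conductivity, a 2-D grid, and all values are toy assumptions; TempDAC itself is 3-D and handles multiple materials and temperature- and direction-dependent conductivity.

    ```python
    import math

    N = 25            # grid points per side
    H_STEP = 1e-6     # m, grid spacing (micron scale, as in a DAC sample chamber)
    K_COND = 10.0     # W/(m K), uniform conductivity
    T_EDGE = 300.0    # K, boundary (gasket/anvil side) held at ambient
    Q0 = 1.0e16       # W/m^3, peak absorbed laser power density
    WIDTH = 4.0       # Gaussian 1/e radius of the laser spot, in cells

    c = N // 2
    q = [[Q0 * math.exp(-((i - c) ** 2 + (j - c) ** 2) / WIDTH**2)
          for j in range(N)] for i in range(N)]

    # Jacobi relaxation of k * laplacian(T) + q = 0 with fixed boundaries:
    # T_new = mean(neighbors) + q * h^2 / (4 k)
    T = [[T_EDGE] * N for _ in range(N)]
    for _ in range(1500):
        Tn = [row[:] for row in T]
        for i in range(1, N - 1):
            for j in range(1, N - 1):
                Tn[i][j] = 0.25 * (T[i - 1][j] + T[i + 1][j] + T[i][j - 1]
                                   + T[i][j + 1]
                                   + q[i][j] * H_STEP**2 / K_COND)
        T = Tn

    t_center = T[c][c]
    t_max = max(max(row) for row in T)
    ```

    The hottest point sits at the beam center while the boundary stays pinned, the basic temperature-distribution behavior the full 3-D model quantifies.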

  18. The X-ray luminosity-temperature relation of a complete sample of low-mass galaxy clusters

    NASA Astrophysics Data System (ADS)

    Zou, S.; Maughan, B. J.; Giles, P. A.; Vikhlinin, A.; Pacaud, F.; Burenin, R.; Hornstrup, A.

    2016-11-01

    We present Chandra observations of 23 galaxy groups and low-mass galaxy clusters at 0.03 < z < 0.15 with a median temperature of ∼2 keV. The sample is a statistically complete flux-limited subset of the 400 deg2 survey. We investigated the scaling relation between X-ray luminosity (L) and temperature (T), taking selection biases fully into account. The logarithmic slope of the bolometric L-T relation was found to be 3.29 ± 0.33, consistent with values typically found for samples of more massive clusters. In combination with other recent studies of the L-T relation, we show that there is no evidence for the slope, normalization, or scatter of the L-T relation of galaxy groups being different than that of massive clusters. The exception to this is that in the special case of the most relaxed systems, the slope of the core-excised L-T relation appears to steepen from the self-similar value found for massive clusters to a steeper slope for the lower mass sample studied here. Thanks to our rigorous treatment of selection biases, these measurements provide a robust reference against which to compare predictions of models of the impact of feedback on the X-ray properties of galaxy groups.
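    Absent selection effects, the L-T slope fit is simply ordinary least squares in log-log space. The sketch below recovers a known slope from synthetic clusters; it deliberately ignores the flux-limit selection-bias modelling that is the paper's main methodological point, and all numbers are invented.

    ```python
    import random

    random.seed(42)

    # Synthetic cluster sample: log10 L = a + b * log10 T + Gaussian scatter
    B_TRUE, A_TRUE, SCATTER = 3.3, 0.5, 0.1
    log_t = [random.uniform(0.0, 0.6) for _ in range(150)]   # T in ~1-4 keV
    log_l = [A_TRUE + B_TRUE * lt + random.gauss(0.0, SCATTER) for lt in log_t]

    def ols_slope(xs, ys):
        # Ordinary least squares slope: cov(x, y) / var(x)
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        return sxy / sxx

    b_hat = ols_slope(log_t, log_l)
    ```

    On a flux-limited sample this naive fit would be biased, which is why the paper fits the relation with the selection function folded in.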

  19. Sampling and specimens: potential application of a general model in geoscience sample registration

    NASA Astrophysics Data System (ADS)

    Cox, S. J.; Habermann, T.; Duclaux, G.

    2011-12-01

    Sampling is a key element of observational science. Specimens are a particular class of sample, in which material is retrieved from its original location and used for ex-situ observations and analysis. Specimens retrieved from difficult locations (e.g. deep ocean sampling, extra-terrestrial sampling) or of rare phenomena have special scientific value. Material from these may be distributed to multiple laboratories for observation. For maximum utility, reports from the different studies must be recognized and compared. This has been a challenge, as the original specimens are often not clearly identified or existing IDs are not reported. To mitigate this, the International Geo Sample Number (IGSN) provides universal, project-neutral identifiers for geoscience specimens, and SESAR provides a system for registering those identifiers. Standard descriptive information required for specimen registration was proposed during a SESAR meeting held in February 2011. The standard ISO 19156 'Observations and Measurements' (O&M) includes an information model for basic description of specimens. The specimen model was designed to accommodate a variety of scenarios in chemistry, geochemistry, field geology, and life sciences, and is believed to be applicable to a wide variety of application domains. O&M is implemented in XML (as a GML Schema) for OGC services and we have recently developed a complementary semantic-web compatible RDF/OWL representation. The GML form is used in several services deployed through AuScope, and for water quality information in WIRADA. The model has underpinned the redevelopment of a large geochemistry database in CSIRO. Capturing the preparation chain is the particular challenge in (geo-) chemistry, so the flexible and scalable model provided by the specimen model in O&M has been critical to its success in this context. This standard model for specimen metadata appears to satisfy all SESAR requirements, so might serve as the basic schema in the SESAR

  20. Model of local temperature changes in brain upon functional activation.

    PubMed

    Collins, Christopher M; Smith, Michael B; Turner, Robert

    2004-12-01

    Experimental results for changes in brain temperature during functional activation show large variations. It is, therefore, desirable to develop a careful numerical model for such changes. Here, a three-dimensional model of temperature in the human head using the bioheat equation, which includes effects of metabolism, perfusion, and thermal conduction, is employed to examine potential temperature changes due to functional activation in brain. It is found that, depending on location in brain and corresponding baseline temperature relative to blood temperature, temperature may increase or decrease on activation and concomitant increases in perfusion and rate of metabolism. Changes in perfusion are generally seen to have a greater effect on temperature than are changes in metabolism, and hence active brain is predicted to approach blood temperature from its initial temperature. All calculated changes in temperature for reasonable physiological parameters have magnitudes <0.12 degrees C and are well within the range reported in recent experimental studies involving human subjects.
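    The direction of the predicted temperature change follows from a zero-dimensional Pennes-style balance in which conduction is neglected (an assumption made here for brevity; the study solves the full 3-D bioheat equation). With rough literature-style values, all of them assumptions:

    ```python
    # 0-D steady-state perfusion balance:
    # rho_b * c_b * w * (T_a - T) + q_m = 0  =>  T = T_a + q_m / (rho_b * c_b * w)
    RHO_C_BLOOD = 3.8e6   # J/(m^3 K), volumetric heat capacity of blood (assumed)
    T_ARTERIAL = 37.0     # degrees C, arterial blood temperature

    def steady_temp(perfusion, metabolism):
        # perfusion in 1/s (volumetric), metabolism in W/m^3
        return T_ARTERIAL + metabolism / (RHO_C_BLOOD * perfusion)

    # Baseline grey matter (rough values)
    t_rest = steady_temp(perfusion=0.009, metabolism=10_000.0)
    # Activation: perfusion up ~50%, metabolism up ~10%
    t_active = steady_temp(perfusion=0.0135, metabolism=11_000.0)

    delta = t_active - t_rest
    ```

    Because the perfusion increase outweighs the metabolic increase, the activated tissue moves toward blood temperature, and the change stays well under the 0.12 °C bound quoted in the abstract.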

  1. Modeling the Freezing of Sn in High Temperature Furnaces

    NASA Technical Reports Server (NTRS)

    Brush, Lucien

    1999-01-01

    Presently, crystal growth furnaces are being designed that will be used to monitor the crystal-melt interface shape and the solutal and thermal fields in its vicinity during the directional freezing of dilute binary alloys. To monitor the thermal field within the solidifying materials, thermocouple arrays (AMITA) are inserted into the sample. Intrusive thermocouple monitoring devices can affect the experimental data being measured. Therefore, one objective of this work is to minimize the effect of the thermocouples on the data generated. To aid in accomplishing this objective, two models of solidification have been developed. Model A is a fully transient, one-dimensional model for the freezing of a dilute binary alloy that is used to compute temperature profiles for comparison with measurements taken from the thermocouples. Model B is a fully transient two-dimensional model of the solidification of a pure metal. It will be used to uncover the manner in which thermocouple placement and orientation within the ampoule breaks the longitudinal axis of symmetry of the thermal field and the crystal-melt interface. Results and conclusions are based on the comparison of the models with experimental results taken during the freezing of pure Sn.

  2. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS

    PubMed Central

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2015-01-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM’s expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses. PMID:26166910

  3. Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2016-01-01

    Building accurate predictive models of clinical multivariate time series is crucial for understanding the patient condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive enough to reflect patient-specific temporal behaviors well, even when the available patient-specific data are sparse and cover only a short time span. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on the prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both the population-based and patient-specific time series prediction models in terms of prediction accuracy. PMID:27525189

  4. The Genealogy of Samples in Models with Selection

    PubMed Central

    Neuhauser, C.; Krone, S. M.

    1997-01-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case. PMID:9071604
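    In the neutral case the genealogy reduces to Kingman's coalescent, which is straightforward to simulate: with k lineages, the waiting time to the next merger is exponential with rate k(k−1)/2. The sketch below (my own illustration, not the authors' code) checks the classical identity E[T_MRCA] = 2(1 − 1/n).

    ```python
    import random

    random.seed(3)

    def tmrca(n):
        # Time to most recent common ancestor under Kingman's coalescent:
        # with k lineages, wait Exp(k*(k-1)/2), then merge two lineages.
        t = 0.0
        k = n
        while k > 1:
            rate = k * (k - 1) / 2.0
            t += random.expovariate(rate)
            k -= 1
        return t

    REPS, N_SAMPLE = 20_000, 10
    mean_tmrca = sum(tmrca(N_SAMPLE) for _ in range(REPS)) / REPS
    expected = 2.0 * (1.0 - 1.0 / N_SAMPLE)   # E[T_MRCA] = 2(1 - 1/n)
    ```

    The ancestral selection graph generalizes this by adding branching events at a rate set by the selection intensity, which is why the neutral coalescent is the natural baseline for the comparisons in the abstract.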

  5. Effect of Sample Storage Temperature and Time Delay on Blood Gases, Bicarbonate and pH in Human Arterial Blood Samples

    PubMed Central

    Mohammadhoseini, Elham; Safavi, Enayat; Seifi, Sepideh; Seifirad, Soroush; Firoozbakhsh, Shahram; Peiman, Soheil

    2015-01-01

    Background: Results of arterial blood gas analysis can be biased by pre-analytical factors, such as the time interval before analysis, the temperature during storage, and the syringe type. Objectives: To investigate the effects of sample storage temperature and time delay on blood gas, bicarbonate and pH results in human arterial blood samples. Patients and Methods: 2.5 mL arterial blood samples were drawn from 45 patients via an indwelling intra-arterial catheter. Each sample was divided into five equal samples stored in multipurpose tuberculin plastic syringes. Blood gas analysis was performed on one of the five samples as soon as possible. The four other samples were divided into two groups stored at 22°C and 0°C. Blood gas analyses were repeated 30 and 60 minutes after sampling. Results: The PaO2 of the samples stored at 0°C increased significantly after 60 minutes (P = 0.007). The PaCO2 of the samples kept for 30 and 60 minutes at 22°C was significantly higher than the primary result (P = 0.04, P < 0.001). In samples stored at 22°C, pH decreased significantly after 30 and 60 minutes (P = 0.017, P = 0.001). There were no significant differences in the other results of samples stored at 0°C or 22°C after 30 or 60 minutes. Conclusions: In samples stored in plastic syringes, overestimation of PaO2 levels should be noted if samples are cooled before analysis. It is not necessary to store samples in iced water when analysis is delayed up to one hour. PMID:26019892

  6. Simple determination of the herbicide napropamide in water and soil samples by room temperature phosphorescence.

    PubMed

    Salinas-Castillo, Alfonso; Fernández-Sanchez, Jorge Fernando; Segura-Carretero, Antonio; Fernández-Gutiérrez, Alberto

    2005-08-01

    A new, simple, rapid and selective phosphorimetric method for determining napropamide is proposed which demonstrates the applicability of heavy-atom-induced room-temperature phosphorescence for analyzing pesticides in real samples. The phosphorescence signals are a consequence of intermolecular protection and are found exclusively with analytes in the presence of heavy atom salts. Sodium sulfite was used as an oxygen scavenger to minimize room-temperature phosphorescence quenching. The determination was performed in 1 M potassium iodide and 6 mM sodium sulfite at 20 degrees C. The phosphorescence intensity was measured at 520 nm with excitation at 290 nm. Phosphorescence was easily developed, with a linear relation to concentration between 3.2 and 600.0 ng ml(-1) and a detection limit of 3.2 ng ml(-1). The method has been successfully applied to the analysis of napropamide in water and soil samples and an exhaustive interference study was also carried out to display the selectivity of the proposed method. PMID:15838936

  7. Automation of sample plan creation for process model calibration

    NASA Astrophysics Data System (ADS)

    Oberschmidt, James; Abdo, Amr; Desouky, Tamer; Al-Imam, Mohamed; Krasnoperova, Azalia; Viswanathan, Ramya

    2010-04-01

    The process of preparing a sample plan for optical and resist model calibration has always been tedious, not only because the plan must accurately represent full chip designs with countless combinations of widths, spaces and environments, but also because of the constraints imposed by metrology, which may limit the number of structures that can be measured. There are also limits on the types of these structures, mainly due to the variation in measurement accuracy across different geometries: pitch measurements, for instance, are normally more accurate than corner rounding, so only certain geometrical shapes are typically considered for a sample plan. In addition, the time factor is becoming crucial as we migrate from one technology node to another, due to the increase in the number of development and production nodes, and the process becomes more complicated still if process-window-aware models are to be developed in a reasonable time frame. There is therefore a need for reliable methods of choosing sample plans that also help reduce cycle time. In this context, an automated flow is proposed for sample plan creation. Once the illumination and film stack are defined, all errors in the input data are fixed and sites are centered. Then, bad sites are excluded. Afterwards, the clean data are reduced based on geometrical resemblance. An editable database of measurement-reliable and critical structures is also provided, and their percentage in the final sample plan, as well as the total number of 1D/2D samples, can be predefined. The flow has the advantage of eliminating manual selection or filtering techniques, it provides powerful tools for customizing the final plan, and the time needed to generate these plans is greatly reduced.
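    The reduction "based on geometrical resemblance" can be sketched as a greedy selection of representatives in a feature space; the (width, space) features, tolerance, and Chebyshev metric below are invented for illustration, not the flow's actual criteria.

    ```python
    # Candidate 1-D gauge structures as (width, space) pairs in nm (values assumed)
    candidates = [(40 + 2 * i, 60 + 3 * (i % 7)) for i in range(60)]

    def reduce_by_resemblance(structures, tol=5.0):
        # Greedy reduction: keep a structure only if it differs by more than
        # tol (Chebyshev distance in feature space) from every kept one.
        kept = []
        for s in structures:
            if all(max(abs(a - b) for a, b in zip(s, k)) > tol for k in kept):
                kept.append(s)
        return kept

    plan = reduce_by_resemblance(candidates)
    ```

    By construction every discarded candidate stays within the tolerance of some kept representative, so model calibration coverage is preserved while the measurement burden shrinks.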

  8. Ambient temperature modelling with soft computing techniques

    SciTech Connect

    Bertini, Ilaria; Ceravolo, Francesco; Citterio, Marco; Di Pietra, Biagio; Margiotta, Francesca; Pizzuti, Stefano; Puglisi, Giovanni; De Felice, Matteo

    2010-07-15

    This paper proposes a hybrid approach based on soft computing techniques in order to estimate monthly and daily ambient temperature. Indeed, we combine the back-propagation (BP) algorithm and the simple Genetic Algorithm (GA) in order to effectively train artificial neural networks (ANN) in such a way that the BP algorithm initialises a few individuals of the GA's population. Experiments concerned monthly temperature estimation of unknown places and daily temperature estimation for thermal load computation. Results have shown remarkable improvements in accuracy compared to traditional methods. (author)
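    The hybrid scheme, a few gradient-trained ("BP") individuals seeded into a GA population, can be sketched on a stand-in calibration problem. A two-parameter linear fit replaces the neural network here, and every value (data, rates, population sizes) is an assumption for illustration only.

    ```python
    import random

    random.seed(5)

    # Toy "temperature" data: linear trend plus noise (stand-in for the ANN task)
    xs = [i / 12.0 for i in range(48)]
    ys = [15.0 + 8.0 * x + random.gauss(0.0, 0.5) for x in xs]

    def mse(w):
        a, b = w
        return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

    def gradient_step(w, lr=0.01):
        # One "BP" step: gradient descent on the MSE
        a, b = w
        ga = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
        gb = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / len(xs)
        return (a - lr * ga, b - lr * gb)

    def random_individual():
        return (random.uniform(-10, 10), random.uniform(0, 30))

    # Population: mostly random individuals, plus a few seeded by "BP"
    pop = [random_individual() for _ in range(18)]
    for _ in range(2):
        w = random_individual()
        for _ in range(20):
            w = gradient_step(w)
        pop.append(w)

    # Simple elitist GA: keep the best, breed the rest by crossover + mutation
    for gen in range(80):
        pop.sort(key=mse)
        elite = pop[:6]
        children = []
        while len(children) < len(pop) - len(elite):
            p1, p2 = random.sample(elite, 2)
            children.append(tuple(0.5 * (u + v) + random.gauss(0.0, 0.4)
                                  for u, v in zip(p1, p2)))
        pop = elite + children

    best = min(pop, key=mse)
    ```

    The seeded individuals give the GA a head start near a good basin, which is the mechanism the abstract credits for the improved accuracy.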

  9. Causal Estimation using Semiparametric Transformation Models under Prevalent Sampling

    PubMed Central

    Cheng, Yu-Jen; Wang, Mei-Cheng

    2015-01-01

    Summary This paper develops methods and inference for causal estimation in semiparametric transformation models for prevalent survival data. Through estimation of the transformation models and covariate distribution, we propose analytical procedures to estimate the causal survival function. As the data are observational, the unobserved potential outcome (survival time) may be associated with the treatment assignment, and therefore there may exist a systematic imbalance between the data observed from each treatment arm. Further, due to prevalent sampling, subjects are observed only if they have not experienced the failure event when data collection began, causing the prevalent sampling bias. We propose a unified approach which simultaneously corrects the bias from the prevalent sampling and balances the systematic differences from the observational data. We illustrate in the simulation study that standard analysis without proper adjustment would result in biased causal inference. Large sample properties of the proposed estimation procedures are established by techniques of empirical processes and examined by simulation studies. The proposed methods are applied to the Surveillance, Epidemiology, and End Results (SEER) and Medicare linked data for women diagnosed with breast cancer. PMID:25715045

  10. NASTRAN thermal analyzer: Theory and application including a guide to modeling engineering problems, volume 2. [sample problem library guide

    NASA Technical Reports Server (NTRS)

    Jackson, C. E., Jr.

    1977-01-01

    A sample problem library containing 20 problems covering most facets of NASTRAN Thermal Analyzer modeling is presented. Areas discussed include radiative interchange, arbitrary nonlinear loads, transient temperature and steady-state structural plots, temperature-dependent conductivities, simulated multi-layer insulation, and constraint techniques. The use of the major control options and important DMAP alters is demonstrated.

  11. A NEW SAMPLE CELL DESIGN FOR STUDYING SOLID-MATRIX ROOM TEMPERATURE PHOSPHORESCENCE MOISTURE QUENCHING. (R824100)

    EPA Science Inventory

    A new sample chamber was developed that can be used in the measurement of the effects of moisture on the room-temperature solid-matrix phosphorescence of phosphors adsorbed onto filter paper. The sample chamber consists of a sealed quartz cell that contains a special teflon sampl...

  12. Field portable low temperature porous layer open tubular cryoadsorption headspace sampling and analysis part II: Applications.

    PubMed

    Harries, Megan; Bukovsky-Reyes, Santiago; Bruno, Thomas J

    2016-01-15

    This paper details the sampling methods used with the field portable porous layer open tubular cryoadsorption (PLOT-cryo) approach, described in Part I of this two-part series, applied to several analytes of interest. We conducted tests with coumarin and 2,4,6-trinitrotoluene (two solutes that were used in initial development of PLOT-cryo technology), naphthalene, aviation turbine kerosene, and diesel fuel, on a variety of matrices and test beds. We demonstrated that these analytes can be easily detected and reliably identified using the portable unit for analyte collection. By leveraging efficiency-boosting temperature control and the high flow rate multiple capillary wafer, very short collection times (as low as 3 s) yielded accurate detection. For diesel fuel spiked on glass beads, we determined a method detection limit below 1 ppm. We observed greater variability among separate samples analyzed with the portable unit than previously documented in work using the laboratory-based PLOT-cryo technology. We identify three likely sources that may help explain the additional variation: the use of a compressed air source to generate suction, matrix geometry, and variability in the local vapor concentration around the sampling probe as solute depletion occurs both locally around the probe and in the test bed as a whole. This field-portable adaptation of the PLOT-cryo approach has numerous and diverse potential applications. PMID:26726934


  14. Physical Models of Seismic-Attenuation Measurements on Lab Samples

    NASA Astrophysics Data System (ADS)

    Coulman, T. J.; Morozov, I. B.

    2012-12-01

    Seismic attenuation in Earth materials is often measured in the lab by using low-frequency forced oscillations or static creep experiments. The usual assumption in interpreting and even designing such experiments is the "viscoelastic" behavior of materials, i.e., their description by the notions of a Q-factor and material memory. However, this is not the only theoretical approach to internal friction, and it also involves several contradictions with conventional mechanics. From the viewpoint of mechanics, the frequency-dependent Q becomes a particularly enigmatic property attributed to the material. At the same time, the behavior of rock samples in seismic-attenuation experiments can be explained by a strictly mechanical approach. We use this approach to simulate such experiments analytically and numerically for a system of two cylinders consisting of a rock sample and an elastic standard undergoing forced oscillations, and also for a single rock-sample cylinder undergoing static creep. The system is subject to oscillatory compression or torsion, and the phase lag between the sample and standard is measured. Unlike in the viscoelastic approach, a full Lagrangian formulation is considered, in which material anelasticity is described by parameters of "solid viscosity" and a dissipation function from which the constitutive equation is derived. Results show that this physical model of anelasticity predicts creep results very close to those obtained by using empirical Burgers bodies or Andrade laws. With nonlinear (non-Newtonian) solid viscosity, the system shows an almost instantaneous initial deformation followed by slow creep towards an equilibrium. For Aheim Dunite, the "rheologic" parameters of nonlinear viscosity are υ=0.79 and η=2.4 GPa·s. Phase-lag results for nonlinear viscosity show Q's slowly decreasing with frequency. To explain a Q increasing with frequency (which is often observed in the lab and in the field), one has to consider nonlinear viscosity with
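    The abstract compares the mechanical model against empirical Burgers bodies and Andrade laws. As a point of reference, the classic Andrade creep law can be sketched as below; all parameter values are illustrative placeholders, not the Aheim Dunite results:

```python
import numpy as np

def andrade_creep(t, eps0, beta, eta):
    """Classic Andrade creep law: an instantaneous elastic strain eps0,
    a transient term growing as t^(1/3), and steady viscous flow t/eta.
    Parameter values used below are illustrative, not fitted to data."""
    return eps0 + beta * np.cbrt(t) + t / eta

# A creep curve sampled over 100 time units (arbitrary units)
t = np.linspace(0.0, 100.0, 101)
strain = andrade_creep(t, 1e-4, 2e-5, 5e4)
```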

  15. FAR-INFRARED DUST TEMPERATURES AND COLUMN DENSITIES OF THE MALT90 MOLECULAR CLUMP SAMPLE

    SciTech Connect

    Guzmán, Andrés E.; Smith, Howard A.; Sanhueza, Patricio; Contreras, Yanett; Rathborne, Jill M.; Jackson, James M.; Hoq, Sadia

    2015-12-20

    We present dust column densities and dust temperatures for ∼3000 young, high-mass molecular clumps from the Millimeter Astronomy Legacy Team 90 GHz survey, derived by fitting single-temperature dust emission models to the far-infrared intensity maps measured between 160 and 870 μm from the Herschel/Herschel Infrared Galactic Plane Survey (Hi-Gal) and APEX/APEX Telescope Large Area Survey of the Galaxy (ATLASGAL) surveys. We discuss the methodology employed in analyzing the data, calculating physical parameters, and estimating their uncertainties. The population-averaged dust temperatures of the clumps are 16.8 ± 0.2 K for the clumps that do not exhibit mid-infrared signatures of star formation (quiescent clumps), 18.6 ± 0.2 K for the clumps that display mid-infrared signatures of ongoing star formation but have not yet developed an H ii region (protostellar clumps), and 23.7 ± 0.2 and 28.1 ± 0.3 K for clumps associated with H ii and photo-dissociation regions, respectively. These four groups exhibit large overlaps in their temperature distributions, with dispersions ranging between 4 and 6 K. The median of the peak column densities of the protostellar clump population is 0.20 ± 0.02 g cm⁻², which is about 50% higher than the median of the peak column densities associated with clumps in the other evolutionary stages. We compare the dust temperatures and column densities measured toward the center of the clumps with the mean values of each clump. We find that in the quiescent clumps, the dust temperature increases toward the outer regions and that these clumps are associated with the shallowest column density profiles. In contrast, molecular clumps in the protostellar or H ii region phase have dust temperature gradients more consistent with internal heating and are associated with steeper column density profiles compared with the quiescent clumps.
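    Fitting a single-temperature dust emission model to far-infrared fluxes, as described above, can be sketched in a few lines. The normalization, the emissivity index β = 1.75, and the synthetic fluxes below are assumptions for illustration, not the Hi-GAL/ATLASGAL pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

H, K_B, C = 6.626e-34, 1.381e-23, 2.998e8  # SI constants

def greybody(nu_hz, log_a, temp, beta=1.75):
    """Optically thin modified blackbody: flux proportional to
    nu^beta * B_nu(temp), with an arbitrary normalization 10**log_a
    (units are illustrative, not calibrated)."""
    x = H * nu_hz / (K_B * temp)
    return 10**log_a * (nu_hz / 1e12)**(3 + beta) / np.expm1(x)

# Synthetic fluxes at the 160-870 um bands used by Hi-GAL and ATLASGAL
wavelengths_um = np.array([160.0, 250.0, 350.0, 500.0, 870.0])
nu = C / (wavelengths_um * 1e-6)
flux = greybody(nu, 0.0, 18.6)   # a "protostellar-like" 18.6 K clump
(popt_log_a, popt_temp), _ = curve_fit(greybody, nu, flux, p0=(0.5, 25.0))
```

    In practice one would fit both the temperature and a column-density-related normalization per pixel, then propagate flux uncertainties into both parameters.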

  16. Modified equilibrium temperature models for cold-water streams

    NASA Astrophysics Data System (ADS)

    Herb, William R.; Stefan, Heinz G.

    2011-06-01

    Water temperature determines the spatial distribution of fish species, including cold-water fish such as trout, and is driven by the balance of the heat flux across the water surface and the heat flux across the sediment surface. In this study, a modified equilibrium temperature model was developed for cold-water streams that includes the effect of groundwater inflow. The modified equilibrium temperature model gives estimates of daily average stream temperature based on climate conditions, riparian shading, stream width, and groundwater input rate and temperature. For a small tributary stream with relatively uniform riparian shading, the modified equilibrium temperature was found to be a good predictor of daily average stream temperature, with a root-mean-square error (RMSE) of 1.2°C. The modified equilibrium temperature model also gave good estimates (1.4°C RMSE) of daily average stream temperature for a larger stream when riparian shading was averaged over sufficiently long distances. A sensitivity analysis using the modified equilibrium temperature model confirmed that water temperature in cold-water streams varies strongly with riparian shading, stream width, and both groundwater inflow rate and temperature. These groundwater parameters therefore need to be taken into account when climate change impacts on stream temperature are projected. The stream temperature model developed in this study is a useful tool to characterize temperature conditions in cold-water streams with different levels of riparian shading and groundwater inputs and to assess the impact of future land use and climate change on temperature in these streams.
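    The heat balance described above can be caricatured in a few lines: a linear surface exchange pulling the stream toward the equilibrium temperature, balanced against groundwater inflow. The linearized form and all coefficient values are assumptions for illustration, not Herb and Stefan's actual equations:

```python
RHO_CP = 4.18e6  # volumetric heat capacity of water, J m^-3 K^-1

def modified_equilibrium_temp(t_eq, k_surf, width, q_gw, t_gw):
    """Steady-state daily-average stream temperature where linear surface
    exchange toward the equilibrium temperature t_eq balances groundwater
    inflow. k_surf: bulk surface-exchange coefficient (W m^-2 K^-1);
    width: stream width (m); q_gw: groundwater inflow per unit stream
    length (m^2 s^-1); t_gw: groundwater temperature (degC).
    An illustrative linearized sketch, not the published model."""
    surf = k_surf * width          # surface exchange per unit length
    gw = q_gw * RHO_CP             # groundwater heat advection per unit length
    return (surf * t_eq + gw * t_gw) / (surf + gw)
```

    With no groundwater the stream sits at the equilibrium temperature; increasing `q_gw` pulls it toward the (usually cooler) groundwater temperature, which is the effect the modified model captures.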

  17. Research on Temperature Modeling of Strapdown Inertial Navigation System

    NASA Astrophysics Data System (ADS)

    Huang, XiaoJuan; Zhao, LiJian; Xu, RuXiang; Yang, Heng

    2016-02-01

    Strapdown inertial navigation systems with laser gyros have been deployed on space tracking ships; compared with the conventional platform inertial navigation system, they offer substantial advantages in performance, accuracy, and stability. Environmental and internal temperature significantly affects the gyros, accelerometers, electrical circuits, and mechanical structure, but the existing temperature compensation model is not accurate enough, especially when there is a large temperature change.

  18. THE TWO-LEVEL MODEL AT FINITE-TEMPERATURE

    SciTech Connect

    Goodman, A.L.

    1980-07-01

    The finite-temperature HFB cranking equations are solved for the two-level model. The pair gap, moment of inertia and internal energy are determined as functions of spin and temperature. Thermal excitations and rotations collaborate to destroy the pair correlations. Raising the temperature eliminates the backbending effect and improves the HFB approximation.

  19. Two-Temperature Model of Nonequilibrium Electron Relaxation:. a Review

    NASA Astrophysics Data System (ADS)

    Singh, Navinder

    The present paper is a review of the phenomena related to nonequilibrium electron relaxation in bulk and nano-scale metallic samples. The workable Two-Temperature Model (TTM) based on the Boltzmann-Bloch-Peierls kinetic equation has been applied to study the ultra-fast (femtosecond) electronic relaxation in various metallic systems. The advent of new ultra-fast (femtosecond) laser technology and pump-probe spectroscopy has produced a wealth of new results for micro- and nano-scale electronic technology. The aim of this paper is to clarify the TTM, the conditions of its validity and nonvalidity, and its modifications for nano-systems, to sum up the progress, and to point out open problems in this field. We also give a phenomenological integro-differential equation for the kinetics of nondegenerate electrons that goes beyond the TTM.
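    The TTM itself is a pair of coupled energy-balance equations for the electron and lattice temperatures. A minimal forward-Euler sketch, with a free-electron heat capacity C_e = γ·T_e and illustrative, unit-free parameter values, is:

```python
def two_temperature_model(t_e0, t_l0, g, gamma, c_l, dt, n_steps):
    """Forward-Euler integration of the two-temperature model:
        C_e(T_e) dT_e/dt = -G (T_e - T_l)
        C_l      dT_l/dt = +G (T_e - T_l)
    with an electron heat capacity C_e = gamma * T_e (free-electron gas).
    All parameter values used below are illustrative, unit-free numbers."""
    t_e, t_l = t_e0, t_l0
    for _ in range(n_steps):
        flow = g * (t_e - t_l)            # electron -> lattice energy flow
        t_e -= dt * flow / (gamma * t_e)  # electrons cool
        t_l += dt * flow / c_l            # lattice warms
    return t_e, t_l

# A hot electron gas (1000) relaxing onto a cold lattice (300)
t_e, t_l = two_temperature_model(1000.0, 300.0, 1.0, 0.01, 10.0, 1e-3, 50000)
```

    Because C_e grows with T_e, the electrons cool quickly at first and more slowly as they approach the lattice temperature, which is the femtosecond relaxation picture the review discusses.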

  20. Quantification Model for Estimating Temperature Field Distributions of Apple Fruit

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Yang, Le; Zhao, Huizhong; Zhang, Leijie; Zhong, Zhiyou; Liu, Yanling; Chen, Jianhua

    A quantification model of transient heat conduction was developed to simulate the temperature distribution of apple fruit during cooling. The model is based on the energy variation at different points within the fruit and accounts for the heat exchange of a representative elemental volume, metabolic heat, and external heat. The following conclusions could be drawn. First, the quantification model satisfactorily describes the trend of the apple fruit temperature distribution during cooling. Second, there is an obvious difference between the fruit temperature and the environment temperature: the temperature of the fruit body lags the change in environment temperature considerably, so a significant temperature change in the fruit body occurs only some time after the environment temperature drops, and the change in fruit body temperature becomes slower and slower as cooling proceeds. This can explain the time-delay phenomenon in biological bodies. Third, the temperature differences between layers increase gradually from the centre to the surface of the fruit; that is, the differences are smallest near the centre of the fruit body and largest near its surface. Finally, the temperature of every part of the fruit body tends toward a uniform value near the environment temperature during cooling, with the residual difference related to the metabolic heat of the plant body.
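    The centre-to-surface lag the abstract describes can be reproduced with a textbook explicit finite-difference scheme for radial conduction in a sphere. The geometry, diffusivity, and convective surface coefficient below are illustrative stand-ins, and metabolic heat is neglected:

```python
import numpy as np

def cool_sphere(radius, nr, alpha, h_ratio, t_init, t_env, dt, n_steps):
    """Explicit finite differences for transient radial conduction in a
    sphere, dT/dt = alpha * (1/r^2) d/dr(r^2 dT/dr), with a convective
    surface condition dT/dr = -h_ratio * (T - t_env). Metabolic heat is
    neglected; all parameter values used below are illustrative."""
    r = np.linspace(0.0, radius, nr)
    dr = r[1] - r[0]
    t = np.full(nr, t_init, dtype=float)
    for _ in range(n_steps):
        lap = np.zeros(nr)
        lap[1:-1] = ((t[2:] - 2.0 * t[1:-1] + t[:-2]) / dr**2
                     + (2.0 / r[1:-1]) * (t[2:] - t[:-2]) / (2.0 * dr))
        lap[0] = 6.0 * (t[1] - t[0]) / dr**2            # symmetry at centre
        t += dt * alpha * lap
        t[-1] = t[-2] - dr * h_ratio * (t[-2] - t_env)  # surface convection
    return t

# A 4 cm "apple" at 25 degC cooled for two hours in 2 degC air
profile = cool_sphere(0.04, 21, 1.4e-7, 50.0, 25.0, 2.0, 2.0, 3600)
```

    After two hours the surface node sits well below the centre node, and both remain above the air temperature, reproducing the hysteresis between fruit body and environment noted above.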

  1. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    SciTech Connect

    Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model streamflow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
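    A stripped-down version of the EM step for BMA weights, using a single shared Gaussian kernel variance, no bias correction, and synthetic data (this sketches EM only, not the DREAM sampler), might look like:

```python
import numpy as np

def bma_em(forecasts, obs, n_iter=200):
    """EM estimation of BMA weights and one shared Gaussian kernel
    variance, with kernels centred on each member's forecasts.
    forecasts: (n_members, n_times); obs: (n_times,). Minimal sketch."""
    k, n = forecasts.shape
    w = np.full(k, 1.0 / k)
    var = np.var(obs - forecasts.mean(axis=0)) + 1e-6
    for _ in range(n_iter):
        dens = (w[:, None] * np.exp(-(obs - forecasts) ** 2 / (2.0 * var))
                / np.sqrt(2.0 * np.pi * var))
        z = dens / dens.sum(axis=0)                   # E-step: responsibilities
        w = z.mean(axis=1)                            # M-step: weights
        var = (z * (obs - forecasts) ** 2).sum() / n  # M-step: shared variance
    return w, var

# Synthetic "ensemble": one skilful member, one noisy member
rng = np.random.default_rng(0)
obs = rng.normal(20.0, 3.0, 300)                      # e.g. surface temperature
members = np.vstack([obs + rng.normal(0.0, 0.5, 300),
                     obs + rng.normal(0.0, 5.0, 300)])
weights, var = bma_em(members, obs)
```

    EM assigns most of the weight to the skilful member; the paper's point is that an MCMC sampler such as DREAM additionally yields posterior uncertainty on these weights and variances.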

  2. Two-zone transient storage modeling using temperature and solute data with multiobjective calibration: 1. Temperature

    NASA Astrophysics Data System (ADS)

    Neilson, B. T.; Chapra, S. C.; Stevens, D. K.; Bandaragoda, C.

    2010-12-01

    This paper presents the formulation and calibration of the temperature portion of a two-zone temperature and solute (TZTS) model which separates transient storage into surface transient storage (STS) and subsurface, or hyporheic, transient storage (HTS) zones. The inclusion of temperature required the TZTS model formulation to differ somewhat from past transient storage models in order to accommodate terms associated with heat transfer. These include surface heat fluxes in the main channel (MC) and STS, heat and mass exchange between the STS and MC, heat and mass exchange between the HTS and MC, and heat exchange due to bed and deeper ground conduction. To estimate the additional parameters associated with a two-zone model, a data collection effort was conducted to provide temperature time series within each zone. Both single-objective and multiobjective calibration algorithms were then linked to the TZTS model to assist in parameter estimation. Single-objective calibrations based on MC temperatures at two different locations along the study reach provided reasonable predictions in the MC and STS. The HTS temperatures, however, were typically poorly estimated. The two-objective calibration using MC temperatures simultaneously at two locations illustrated that the TZTS model accurately predicts temperatures observed in MC, STS, and HTS zones, including those not used in the calibration. These results suggest that multiple data sets representing different characteristics of the system should be used when calibrating complex in-stream models.
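    The exchange structure of such a two-zone model can be illustrated with a zero-dimensional caricature, with advection, surface fluxes, and bed conduction omitted and arbitrary exchange coefficients:

```python
def tzts_step(state, a_sts, a_hts, dt):
    """One explicit step of first-order heat exchange between a main
    channel (MC), surface transient storage (STS), and hyporheic
    transient storage (HTS). Equal zone volumes are assumed here, so
    the three-zone mean temperature is conserved. Coefficients are
    arbitrary; this is a caricature, not the TZTS model itself."""
    t_mc, t_sts, t_hts = state
    ex_s = a_sts * (t_sts - t_mc)   # STS <-> MC exchange
    ex_h = a_hts * (t_hts - t_mc)   # HTS <-> MC exchange
    return (t_mc + dt * (ex_s + ex_h),
            t_sts - dt * ex_s,
            t_hts - dt * ex_h)

state = (20.0, 18.0, 12.0)          # warm MC, cooler storage zones (degC)
for _ in range(5000):
    state = tzts_step(state, 0.1, 0.1, 0.1)
```

    All three zones relax toward a common temperature; in the full TZTS model the distinct exchange rates are exactly the parameters the multiobjective calibration has to identify from zone-by-zone temperature time series.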

  3. Volcanic Aerosol Evolution: Model vs. In Situ Sampling

    NASA Astrophysics Data System (ADS)

    Pfeffer, M. A.; Rietmeijer, F. J.; Brearley, A. J.; Fischer, T. P.

    2002-12-01

    Volcanoes are the most significant non-anthropogenic source of tropospheric aerosols. Aerosol samples were collected at different distances from a 92°C fumarolic source at Poás Volcano. Aerosols were captured on TEM grids coated with a thin C-film using a specially designed collector. During sampling, grids were exposed to the plume for 30-second intervals, then sealed and frozen to prevent reaction before ATEM analysis to determine aerosol size and chemistry. Gas composition was established using gas chromatography, wet chemistry techniques, AAS and ion chromatography on samples collected directly from a fumarolic vent. SO2 flux was measured remotely by COSPEC. A Gaussian plume dispersion model was used to model concentrations of the gases at different distances down-wind. Calculated mixing ratios of air and the initial gas species were used as input to the thermo-chemical model GASWORKS (Symonds and Reed, Am. Jour. Sci., 1993). Modeled products were compared with measured aerosol compositions. Aerosols predicted to precipitate out of the plume one meter above the fumarole are [CaSO4, Fe2.3SO4, H2SO4, MgF2, Na2SO4, silica, water]. Where the plume leaves the confines of the crater, 380 meters distant, the predicted aerosols are the same, except that FeF3 replaces Fe2.3SO4. Collected aerosols show considerable compositional differences between the sampling locations and are more complex than those predicted. Aerosols from the fumarole consist of [Fe +/- Si,S,Cl], [S +/- O] and [Si +/- O]. Aerosols collected on the crater rim consist of the same plus [O,Na,Mg,Ca], [O,Si,Cl +/- Fe], [Fe,O,F] and [S,O +/- Mg,Ca]. The comparison between results obtained by the equilibrium gas model and the actual aerosol compositions shows that an assumption of chemical and thermal equilibrium evolution is invalid. The complex aerosols collected contrast with the simple formulae predicted. These findings show that complex, non-equilibrium chemical reactions take place immediately upon volcanic

  4. Modelling of tandem cell temperature coefficients

    SciTech Connect

    Friedman, D.J.

    1996-05-01

    This paper discusses the temperature dependence of the basic solar-cell operating parameters for a GaInP/GaAs series-connected two-terminal tandem cell. The effects of series resistance and of different incident solar spectra are also discussed.

  5. 40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (40 CFR part 50, appendix L, figure L-30) or equivalent adaptor to facilitate measurement of sampler... recommended. (6) Sample filter or filters, as specified in section 6 of 40 CFR part 50, appendix L. (d...) under conditions of elevated solar insolation. The test evaluates radiative effects on...

  6. 40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (40 CFR part 50, appendix L, figure L-30) or equivalent adaptor to facilitate measurement of sampler... recommended. (6) Sample filter or filters, as specified in section 6 of 40 CFR part 50, appendix L. (d...) under conditions of elevated solar insolation. The test evaluates radiative effects on...

  7. De novo protein conformational sampling using a probabilistic graphical model

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Debswapna; Cheng, Jianlin

    2015-11-01

    Efficient exploration of protein conformational space remains challenging especially for large proteins when assembling discretized structural fragments extracted from a protein structure database. We propose a fragment-free probabilistic graphical model, FUSION, for conformational sampling in continuous space and assess its accuracy using ‘blind’ protein targets with a length up to 250 residues from the CASP11 structure prediction exercise. The method reduces sampling bottlenecks, exhibits strong convergence, and demonstrates better performance than the popular fragment assembly method, ROSETTA, on relatively larger proteins with a length of more than 150 residues in our benchmark set. FUSION is freely available through a web server at http://protein.rnet.missouri.edu/FUSION/.

  8. Blood sample stability at room temperature for counting red and white blood cells and platelets.

    PubMed

    Vogelaar, S A; Posthuma, D; Boomsma, D; Kluft, C

    2002-08-01

    Blood handling requirements differ among cellular variables. In a practical setting in which blood sampling is separated from the first analysis by approximately 4 h, we compared the analysis of blood cell variables at this 4-h point with analysis of blood stored for approximately 48 h (over the weekend) at room temperature. Blood was collected in K3EDTA from 304 apparently healthy individuals aged between 17 and 70 years, with a female/male ratio of 1.8. Measurement was performed with a Beckman Coulter MAXM counter. In addition to the comparison of the data and their correlation at the two time points, we investigated agreement between the data using analysis according to Bland and Altman. Counts of white and red blood cells and platelets were found stable over time, and agreement of data was excellent. Platelet mean volume increased as expected between the two time points from 8.8 to 10.3 fl. The white blood cell subpopulations, however, changed over time with a decrease in neutrophils and monocytes and increases in lymphocytes and eosinophils. Apparently, ageing of the sample resulted in the alteration of certain cell characteristics leading to a change in automated cell classification without changing the total number of cells. Among the preanalytical variables recorded, only the time of the year and gender were found to be minor determinants (r < .25) of some of the differences between approximately 4 and approximately 48 h analysis delay. It is concluded that after storage at room temperature over approximately 48 h, counts of red cells, total white cells and platelets, and analysis of platelet volume can be combined in one assay session.
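    The Bland-Altman agreement analysis used above reduces to a bias (mean difference) and 95% limits of agreement. A minimal sketch with made-up paired counts:

```python
import numpy as np

def bland_altman(x, y):
    """Bland-Altman agreement statistics for paired measurements:
    the mean difference (bias) and the 95% limits of agreement
    (bias +/- 1.96 * SD of the differences)."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired platelet counts from the 4-h and 48-h analyses
bias, (low, high) = bland_altman([212.0, 250.0, 189.0, 301.0],
                                 [210.0, 255.0, 190.0, 298.0])
```

    Agreement is judged by whether the limits of agreement are narrow enough to be clinically acceptable, not by the correlation between the two runs.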

  9. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  10. Modeling and Simulation of a Tethered Harpoon for Comet Sampling

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco B.

    2014-01-01

    This paper describes the development of a dynamic model and simulation results of a tethered harpoon for comet sampling. The model and simulation were developed in order to carry out an initial sensitivity analysis for key design parameters of the tethered system. The harpoon would contain a canister which would collect a sample of soil from a cometary surface. Both a spring-ejected canister and a tethered canister are considered. To arrive in close proximity to the spacecraft at the end of its trajectory so it could be captured, the free-flying canister would need to be ejected at the right time and with the proper impulse, while the tethered canister must be recovered by properly retrieving the tether at a rate that would avoid an excessive amplitude of oscillatory behavior during the retrieval. The paper describes the model of the tether dynamics and harpoon penetration physics. The simulations indicate that, without the tether, the canister would still reach the spacecraft for collection, that the tether retrieval of the canister would be achievable with reasonable fuel consumption, and that the canister amplitude upon retrieval would be insensitive to variations in vertical velocity dispersion.

  11. Monitoring, Modeling, and Diagnosis of Alkali-Silica Reaction in Small Concrete Samples

    SciTech Connect

    Agarwal, Vivek; Cai, Guowei; Gribok, Andrei V.; Mahadevan, Sankaran

    2015-09-01

    Assessment and management of aging concrete structures in nuclear power plants require a more systematic approach than simple reliance on existing code margins of safety. Structural health monitoring of concrete structures aims to understand the current health condition of a structure based on heterogeneous measurements to produce high-confidence actionable information regarding structural integrity that supports operational and maintenance decisions. This report describes alkali-silica reaction (ASR) degradation mechanisms and the factors influencing ASR. A fully coupled thermo-hydro-mechanical-chemical model developed by Saouma and Perotti, which takes into consideration the effects of stress on the reaction kinetics and anisotropic volumetric expansion, is presented in this report. This model is implemented in the GRIZZLY code based on the Multiphysics Object Oriented Simulation Environment. The model implemented in the GRIZZLY code is used to randomly initiate ASR in 2D and 3D lattices to study the percolation aspects of concrete. The percolation aspects help determine the transport properties of the material and therefore the durability and service life of concrete. This report summarizes the effort to develop small-size concrete samples with embedded glass to mimic ASR. The concrete samples were treated in water and sodium hydroxide solution at elevated temperature to study how ingress of sodium ions and hydroxide ions at elevated temperature impacts concrete samples embedded with glass. A thermal camera was used to monitor the changes in the concrete samples, and the results are summarized.

  12. New high temperature plasmas and sample introduction systems for analytical atomic emission and mass spectrometry. Progress report, January 1, 1990--December 31, 1992

    SciTech Connect

    Montaser, A.

    1992-09-01

    New high temperature plasmas and new sample introduction systems are explored for rapid elemental and isotopic analysis of gases, solutions, and solids using mass spectrometry and atomic emission spectrometry. Emphasis was placed on atmospheric pressure He inductively coupled plasmas (ICP) suitable for atomization, excitation, and ionization of elements; simulation and computer modeling of plasma sources with potential for use in spectrochemical analysis; spectroscopic imaging and diagnostic studies of high temperature plasmas, particularly He ICP discharges; and development of new, low-cost sample introduction systems, and examination of techniques for probing the aerosols over a wide range.

  13. Potential models and lattice correlators for quarkonia at finite temperature

    SciTech Connect

    Alberico, W. M.; De Pace, A.; Molinari, A.; Beraudo, A.

    2008-01-01

    We update our recent calculation of quarkonium Euclidean correlators at finite temperatures in a potential model by including the effect of zero modes in the lattice spectral functions. These contributions cure most of the previously observed discrepancies with lattice calculations, supporting the use of potential models at finite temperature as an important tool to complement lattice studies.

  14. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    NASA Technical Reports Server (NTRS)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform self-consistent atmospheric corrections necessary to retrieve cap emissivity from the Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  15. High temperature furnace modeling and performance verifications

    NASA Technical Reports Server (NTRS)

    Smith, James E., Jr.

    1988-01-01

    Analytical, numerical and experimental studies were performed on two classes of high temperature materials processing furnaces. The research concentrates on a commercially available high temperature furnace using zirconia as the heating element and an arc furnace based on a ST International tube welder. The zirconia furnace was delivered and work is progressing on schedule. The work on the arc furnace was initially stalled due to the unavailability of the NASA prototype, which is actively being tested aboard the KC-135 experimental aircraft. A proposal was written and funded to purchase an additional arc welder to alleviate this problem. The ST International weld head and power supply were received and testing will begin in early November. The first 6 months of the grant are covered.

  16. Temperature-variable high-frequency dynamic modeling of PIN diode

    NASA Astrophysics Data System (ADS)

    Shangbin, Ye; Jiajia, Zhang; Yicheng, Zhang; Yongtao, Yao

    2016-04-01

    The PIN diode model for high-frequency dynamic transient characteristic simulation is important in conducted EMI analysis. The model should take junction temperature into consideration, since equipment usually works over a wide range of temperatures. In this paper, a temperature-variable high-frequency dynamic model for the PIN diode is built, which is based on the Laplace-transform analytical model at constant temperature. The relationship between model parameters and temperature is expressed as temperature functions by analyzing the physical principles behind these parameters. A fast recovery power diode, MUR1560, is chosen as the test sample and its dynamic performance is tested under inductive load in a temperature chamber experiment, which is used for model parameter extraction and model verification. Results show that the model proposed in this paper is accurate for reverse recovery simulation, with relatively small errors over the temperature range from 25 to 120 °C. Project supported by the National High Technology and Development Program of China (No. 2011AA11A265).

  17. An Accurate Temperature Correction Model for Thermocouple Hygrometers 1

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
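    The two-temperature calibration idea can be illustrated with a simple linear interpolation of the calibration slope between the two calibration temperatures. This captures only the spirit of the correction, not the authors' radius-based model, and all numbers below are hypothetical:

```python
def slope_at(temp, t1, s1, t2, s2):
    """Calibration slope (e.g. uV per MPa) linearly interpolated between
    calibrations (t1, s1) and (t2, s2) at two temperatures. A hypothetical
    illustration of two-point temperature correction, not the published
    thermojunction-radius formulation."""
    return s1 + (s2 - s1) * (temp - t1) / (t2 - t1)

def water_potential(microvolts, temp, t1, s1, t2, s2):
    """Water potential from thermocouple output using the corrected slope."""
    return microvolts / slope_at(temp, t1, s1, t2, s2)

# Hypothetical calibration: 4.0 uV/MPa at 15 degC, 6.0 uV/MPa at 35 degC
psi = water_potential(-10.0, 25.0, 15.0, 4.0, 35.0, 6.0)
```

    A single-temperature calibration corresponds to using one fixed slope; the abstract's point is that a two-temperature calibration corrects the slope so the instrument need only be calibrated once, e.g. at 25°C.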

  18. An accurate temperature correction model for thermocouple hygrometers.

    PubMed

    Savage, M J; Cass, A; de Jager, J M

    1982-02-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38 degrees C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25 degrees C, if the calibration slopes are corrected for temperature.

  20. Sparse model selection in the highly under-sampled regime

    NASA Astrophysics Data System (ADS)

    Bulso, Nicola; Marsili, Matteo; Roudi, Yasser

    2016-09-01

    We propose a method for recovering the structure of a sparse undirected graphical model when very few samples are available. The method decides on the presence or absence of bonds between pairs of variables by considering one pair at a time and using a closed-form formula, analytically derived by calculating the posterior probability of every possible model explaining a two-body system under Jeffreys prior. The approach does not rely on the optimization of any cost function and consequently is much faster than existing algorithms. Despite this time and computational advantage, numerical results show that for several sparse topologies the algorithm is comparable to the best existing algorithms, and is more accurate in the presence of hidden variables. We apply this approach to the analysis of US stock market data and to neural data, in order to show its efficiency in recovering robust statistical dependencies in real data with non-stationary correlations in time and/or space.
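The one-pair-at-a-time structure can be illustrated with a simplified stand-in: a likelihood-ratio (G) test with a BIC-style penalty applied independently to each pair of binary variables. This is not the paper's closed-form Jeffreys-prior formula, only a sketch of deciding each bond from pairwise statistics alone:

```python
from math import log
from itertools import combinations

def g_statistic(x, y):
    # Likelihood-ratio (G) statistic for independence of two binary sequences
    n = len(x)
    counts = {}
    for a, b in zip(x, y):
        counts[(a, b)] = counts.get((a, b), 0) + 1
    g = 0.0
    for (a, b), o in counts.items():
        pa = sum(v for (aa, _), v in counts.items() if aa == a) / n
        pb = sum(v for (_, bb), v in counts.items() if bb == b) / n
        g += 2 * o * log(o / (n * pa * pb))
    return g

def infer_bonds(samples):
    # samples: list of equal-length tuples, one configuration per row;
    # declare a bond when the pairwise G statistic beats a BIC-style penalty
    cols = list(zip(*samples))
    n = len(samples)
    bonds = set()
    for i, j in combinations(range(len(cols)), 2):
        if g_statistic(cols[i], cols[j]) > log(n):  # ~BIC penalty, 1 dof
            bonds.add((i, j))
    return bonds
```

Because every pair is scored independently, the cost grows only quadratically in the number of variables, which mirrors the speed advantage claimed above.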

  1. Hippocampal and body temperature changes in rats during delayed matching-to-sample performance in a cold environment.

    PubMed

    Ahlers, S T; Thomas, J R; Berkey, D L

    1991-11-01

    In order to study the effects of temperature changes induced by cold stress on working memory, telemetry thermistor probes were implanted into the hippocampal region of the brain and into the peritoneal cavity of rats. Temperatures in these regions were monitored while rats performed a delayed matching-to-sample (DMTS) task at ambient temperatures of 23 degrees C and 2 degrees C. Matching accuracy was significantly decreased during exposure to 2 degrees C, indicating a marked impairment of short-term or working memory. Temperature in the hippocampus increased by 2 degrees C during exposure to 23 degrees C, but by only 1 degree C when the environmental temperature was 2 degrees C. Body temperature showed a similar but less pronounced pattern, in that cold exposure attenuated the increase in temperature observed when animals performed the DMTS task. These results suggest that cold-induced impairment of working memory may be associated with subtle temperature changes in the brain.

  2. Activation energy for a model ferrous-ferric half reaction from transition path sampling.

    PubMed

    Drechsel-Grau, Christof; Sprik, Michiel

    2012-01-21

    Activation parameters for the model oxidation half reaction of the classical aqueous ferrous ion are compared for different molecular simulation techniques. In particular, activation free energies are obtained from umbrella integration and Marcus theory based thermodynamic integration, which rely on the diabatic gap as the reaction coordinate. The latter method also assumes linear response, and both methods obtain the activation entropy and the activation energy from the temperature dependence of the activation free energy. In contrast, transition path sampling does not require knowledge of the reaction coordinate and directly yields the activation energy [C. Dellago and P. G. Bolhuis, Mol. Simul. 30, 795 (2004)]. Benchmark activation energies from transition path sampling agree within statistical uncertainty with activation energies obtained from standard techniques requiring knowledge of the reaction coordinate. In addition, it is found that the activation energy for this model system is significantly smaller than the activation free energy for the Marcus model, approximately half the value, implying an equally large entropy contribution.
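Both families of methods above extract an activation energy from the temperature dependence of a rate or of an activation free energy. A generic Arrhenius extraction (not the transition-path-sampling estimator itself) can be sketched as:

```python
from math import log

R = 8.314  # gas constant, J/(mol K)

def activation_energy(temps_K, rates):
    """Least-squares slope of ln k versus 1/T; the activation energy is
    Ea = -R * slope (Arrhenius form k = A * exp(-Ea / (R T)))."""
    xs = [1.0 / T for T in temps_K]
    ys = [log(k) for k in rates]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return -R * slope
```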

  3. A Simple Dewar/Cryostat for Thermally Equilibrating Samples at Known Temperatures for Accurate Cryogenic Luminescence Measurements.

    PubMed

    Weaver, Phoebe G; Jagow, Devin M; Portune, Cameron M; Kenney, John W

    2016-01-01

    The design and operation of a simple liquid nitrogen Dewar/cryostat apparatus based upon a small fused silica optical Dewar, a thermocouple assembly, and a CCD spectrograph are described. The experiments for which this Dewar/cryostat is designed require fast sample loading, fast sample freezing, fast alignment of the sample, accurate and stable sample temperatures, and small size and portability of the Dewar/cryostat cryogenic unit. When coupled with the fast data acquisition rates of the CCD spectrograph, this Dewar/cryostat is capable of supporting cryogenic luminescence spectroscopic measurements on luminescent samples at a series of known, stable temperatures in the 77-300 K range. A temperature-dependent study of the oxygen quenching of luminescence in a rhodium(III) transition metal complex is presented as an example of the type of investigation possible with this Dewar/cryostat. In the context of this apparatus, a stable temperature for cryogenic spectroscopy means a luminescent sample that is thermally equilibrated with either liquid nitrogen or gaseous nitrogen at a known measurable temperature that does not vary (ΔT < 0.1 K) during the short time scale (~1-10 sec) of the spectroscopic measurement by the CCD. The Dewar/cryostat works by taking advantage of the positive thermal gradient dT/dh that develops above the liquid nitrogen level in the Dewar, where h is the height of the sample above the liquid nitrogen level. The slow evaporation of the liquid nitrogen results in a slow increase in h over several hours and a consequent slow increase in the sample temperature T over this time period. A quickly acquired luminescence spectrum effectively catches the sample at a constant, thermally equilibrated temperature. PMID:27501355
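The positive thermal gradient dT/dh above the liquid nitrogen level means the sample temperature can be read off a height calibration. A minimal sketch with hypothetical (height, temperature) calibration pairs:

```python
def sample_temperature(h, calib):
    """Piecewise-linear sample temperature at height h above the LN2 level,
    from measured (height, temperature) calibration pairs. The example values
    used with this function are illustrative, not the apparatus's calibration."""
    calib = sorted(calib)
    if h <= calib[0][0]:
        return calib[0][1]
    if h >= calib[-1][0]:
        return calib[-1][1]
    for (h0, t0), (h1, t1) in zip(calib, calib[1:]):
        if h0 <= h <= h1:
            return t0 + (t1 - t0) * (h - h0) / (h1 - h0)
```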

  4. Modeling the Orbital Sampling Effect of Extrasolar Moons

    NASA Astrophysics Data System (ADS)

    Heller, René; Hippke, Michael; Jackson, Brian

    2016-04-01

    The orbital sampling effect (OSE) appears in phase-folded transit light curves of extrasolar planets with moons. Analytical OSE models have hitherto neglected stellar limb darkening and non-zero transit impact parameters and assumed that the moon is on a circular, co-planar orbit around the planet. Here, we present an analytical OSE model for eccentric moon orbits, which we implement in a numerical simulator with stellar limb darkening that allows for arbitrary transit impact parameters. We also describe and publicly release a fully numerical OSE simulator (PyOSE) that can model arbitrary inclinations of the transiting moon orbit. Both our analytical solution for the OSE and PyOSE can be used to search for exomoons in long-term stellar light curves such as those by Kepler and the upcoming PLATO mission. Our updated OSE model offers an independent method for the verification of possible future exomoon claims via transit timing variations and transit duration variations. Photometrically quiet K and M dwarf stars are particularly promising targets for an exomoon discovery using the OSE.
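The OSE only appears after phase-folding the light curve on the planet's orbital period. A generic phase-folding utility (not the PyOSE API) can be sketched as:

```python
def phase_fold(times, flux, period, t0=0.0):
    """Fold time stamps on an orbital period; returns (phase, flux) pairs
    sorted by phase, with phase in [0, 1) measured from reference epoch t0."""
    phases = [((t - t0) % period) / period for t in times]
    return sorted(zip(phases, flux))
```

Binning the folded flux in phase would then reveal the moon's smeared-out transit signature that the OSE models describe.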

  5. Laser-induced breakdown spectroscopy on metallic samples at very low temperature in different ambient gas pressures

    NASA Astrophysics Data System (ADS)

    El-Saeid, R. H.; Abdelhamid, M.; Harith, M. A.

    2016-02-01

    Analysis of metals at very low temperature using laser-induced breakdown spectroscopy (LIBS) is greatly beneficial in space exploration expeditions and in some important industrial applications. In the present work, the effect of very low sample temperature on the spectral emission intensity of laser-induced plasma under both atmospheric pressure and vacuum has been studied for different bronze alloy samples. The sample was cooled down to the liquid nitrogen (LN) temperature of 77 K in a special vacuum chamber. Laser-induced plasma was produced on the sample surface using the fundamental wavelength of an Nd:YAG laser. The optical emission from the plasma was collected by an optical fiber and analyzed by an echelle spectrometer combined with an intensified CCD camera. The integrated intensities of certain spectral emission lines of Cu, Pb, Sn, and Zn were estimated from the obtained LIBS spectra and compared with those measured at room temperature. The laser-induced plasma parameters (electron number density Ne and electron temperature Te) were investigated at room and liquid nitrogen temperatures under both atmospheric pressure and vacuum ambient conditions. The results suggest that reducing the sample temperature leads to a decrease in the emission line intensities in both environments. The plasma parameters were found to decrease at atmospheric pressure but to increase under vacuum conditions.
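Electron temperature is commonly estimated from line intensities via a Boltzmann plot: ln(I·λ/(g·A)) is linear in the upper-level energy with slope -1/(kB·Te). The sketch below implements that standard extraction; the line data in the test are synthetic, not measurements from this work:

```python
from math import log

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def boltzmann_temperature(lines):
    """Electron temperature (K) from a Boltzmann plot. Each line is a tuple
    (intensity, wavelength, g_upper, A_ul, E_upper_eV); the least-squares
    slope of ln(I * lam / (g * A)) versus E_upper equals -1/(kB * T)."""
    pts = [(E, log(I * lam / (g * A))) for I, lam, g, A, E in lines]
    n = len(pts)
    xbar = sum(x for x, _ in pts) / n
    ybar = sum(y for _, y in pts) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in pts) / \
            sum((x - xbar) ** 2 for x, _ in pts)
    return -1.0 / (K_B_EV * slope)
```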

  6. Temperature Dependent Constitutive Modeling for Magnesium Alloy Sheet

    SciTech Connect

    Lee, Jong K.; Lee, June K.; Kim, Hyung S.; Kim, Heon Y.

    2010-06-15

    Magnesium alloys have been increasingly used in the automotive and electronics industries because of their excellent strength-to-weight ratio and EMI shielding properties. However, magnesium alloys have low formability at room temperature due to their unique mechanical behavior (twinning and untwinning), prompting forming at elevated temperature. In this study, a temperature-dependent constitutive model for magnesium alloy (AZ31B) sheet is developed. A hardening law based on a nonlinear kinematic hardening model is used to properly capture the Bauschinger effect. Material parameters are determined from a series of uniaxial cyclic experiments (T-C-T or C-T-C) with temperatures ranging from 150 to 250 deg. C. The influence of temperature on the constitutive equation is introduced through material parameters assumed to be functions of temperature. The process of fitting the assumed model to measured data is presented and the results are compared.
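The idea of making material parameters functions of temperature can be sketched with a Voce-type hardening law whose parameters are interpolated between fits at different temperatures. The hardening form and all numerical values below are illustrative assumptions, not the paper's nonlinear kinematic hardening model:

```python
from math import exp

def interp_param(T, table):
    """Piecewise-linear parameter value at temperature T from (T, value) fit points."""
    table = sorted(table)
    if T <= table[0][0]:
        return table[0][1]
    if T >= table[-1][0]:
        return table[-1][1]
    for (Ta, pa), (Tb, pb) in zip(table, table[1:]):
        if Ta <= T <= Tb:
            return pa + (pb - pa) * (T - Ta) / (Tb - Ta)

def flow_stress(strain, T, sigma0_tab, Q_tab, b_tab):
    """Voce-type hardening, sigma = sigma0 + Q * (1 - exp(-b * strain)),
    with each parameter interpolated at temperature T."""
    s0 = interp_param(T, sigma0_tab)
    Q = interp_param(T, Q_tab)
    b = interp_param(T, b_tab)
    return s0 + Q * (1 - exp(-b * strain))
```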

  7. A thermocouple-based remote temperature controller of an electrically floated sample to study plasma CVD growth of carbon nanotube

    NASA Astrophysics Data System (ADS)

    Miura, Takuya; Xie, Wei; Yanase, Takashi; Nagahama, Taro; Shimada, Toshihiro

    2015-09-01

    Plasma chemical vapor deposition (CVD) is now gathering attention from a novel viewpoint, because it is easy to combine plasma processes and electrochemistry by applying a bias voltage to the sample. In order to explore electrochemistry during plasma CVD, the temperature of the sample must be controlled precisely. In traditional equipment, the sample temperature is measured by a radiation thermometer. Since the emissivity of the sample surface changes in the course of CVD growth, it is difficult to measure the exact temperature using the radiation thermometer. In this work, we developed new equipment to control the temperature of electrically floated samples by a thermocouple with Wi-Fi transmission. The growth of CNTs was investigated using our plasma CVD equipment. We examined the temperature accuracy and stability controlled by the thermocouple while monitoring the radiation thermometer. We noticed that the thermocouple readings were stable, whereas the readings of the radiation thermometer changed significantly (by 20 °C) during plasma CVD. This result clearly shows that the sample temperature should be measured with a direct connection. In the CVD experiments, different carbon structures, including CNTs, were obtained by changing the bias voltage.

  8. Simulation of Soil Temperature Dynamics with Models Using Different Concepts

    PubMed Central

    Sándor, Renáta; Fodor, Nándor

    2012-01-01

    This paper presents two soil temperature models with empirical and mechanistic concepts. At the test site (calcaric arenosol), meteorological parameters as well as soil moisture content and temperature at 5 different depths were measured in an experiment with 8 parcels realizing the combinations of the fertilized, nonfertilized, irrigated, nonirrigated treatments in two replicates. Leaf area dynamics was also monitored. Soil temperature was calculated with the original and a modified version of CERES as well as with the HYDRUS-1D model. The simulated soil temperature values were compared to the observed ones. The vegetation reduced both the average soil temperature and its diurnal amplitude; therefore, considering the leaf area dynamics is important in modeling. The models underestimated the actual soil temperature and overestimated the temperature oscillation within the winter period. All models failed to account for the insulation effect of snow cover. The modified CERES provided explicitly more accurate soil temperature values than the original one. Though HYDRUS-1D provided more accurate soil temperature estimations, its superiority to CERES is not unequivocal as it requires more detailed inputs. PMID:22792047
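Model skill against the observed soil temperatures, as compared above for CERES and HYDRUS-1D, is typically summarized by an error statistic such as the root-mean-square error:

```python
def rmse(observed, simulated):
    """Root-mean-square error between observed and simulated values
    (e.g. soil temperatures at one depth)."""
    return (sum((o - s) ** 2 for o, s in zip(observed, simulated))
            / len(observed)) ** 0.5
```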

  9. Fundamental Thermodynamic Model for Analysis of Stream Temperature Data

    NASA Astrophysics Data System (ADS)

    Davis, L.; Reiter, M.; Groom, J.; Dent, L.

    2012-12-01

    Stream temperature is a critical aquatic ecosystem parameter and has been extensively studied for many years. Complex models have been built as a way to understand stream temperature dynamics and estimate the magnitude of anthropogenic influences on temperature. These models have proven very useful in estimating the relative contribution of various thermal energy sources to the stream heat budget and how management can alter the heat budget. However, the large number of measured or estimated input parameters required by such models makes their application to the analysis of specific stream temperature data difficult when the necessary input data is not readily available. To gain insight into the physical processes governing stream temperature behavior in forested streams we analyzed data based on fundamental thermodynamic concepts. The dataset we used is from a recent multi-year study on the effects of timber harvest on stream temperature in the Oregon Coast Range. From the hourly temperature data we extracted time-averaged diurnal heating and cooling rates. Examining the data in this context allowed us to qualitatively assess changes in the relative magnitude of stream temperature (T), stream equilibrium temperature (Teq), and effective heat transfer coefficient (h) across years and treatments. A benefit of analyzing the data in this way is that it separates the influence of timber harvest on stream temperature from that of climate variation. To categorize longitudinal temperature behaviors before and after timber harvest we developed a data-event matrix which specifies qualitative constraints (i.e., what is physically possible for T, Teq and h) for a given set of observed stream temperature responses. We then analyzed data from 18 different streams to categorize the temperature response to management. Understanding stream temperature dynamics using fundamental thermodynamic concepts provides insight into the processes governing stream temperature and the pathways
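Extracting time-averaged diurnal heating and cooling rates from hourly temperature records, as described above, can be sketched as:

```python
def diurnal_rates(hourly_temps):
    """Mean heating and cooling rates (degrees per hour) from an hourly
    temperature series: average the positive and the negative hourly changes
    separately."""
    diffs = [b - a for a, b in zip(hourly_temps, hourly_temps[1:])]
    heat = [d for d in diffs if d > 0]
    cool = [d for d in diffs if d < 0]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(heat), mean(cool)
```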

  10. Wang-Landau sampling in face-centered-cubic hydrophobic-hydrophilic lattice model proteins.

    PubMed

    Liu, Jingfa; Song, Beibei; Yao, Yonglei; Xue, Yu; Liu, Wenjie; Liu, Zhaoxia

    2014-10-01

    Finding the global minimum-energy structure is one of the main problems of protein structure prediction. The face-centered-cubic (fcc) hydrophobic-hydrophilic (HP) lattice model can reach high approximation ratios of real protein structures, so the fcc lattice model is a good choice for predicting protein structures. The lack of an effective global optimization method is the key obstacle to solving this problem. The Wang-Landau sampling method is especially useful for complex systems with a rough energy landscape and has been successfully applied to many optimization problems. We apply an improved Wang-Landau (IWL) sampling method, which incorporates the generation of an initial conformation based on a greedy strategy and a neighborhood strategy based on pull moves into the Wang-Landau sampling method, to predict protein structures on the fcc HP lattice model. Unlike conventional Monte Carlo simulations that generate a probability distribution at a given temperature, the Wang-Landau sampling method can estimate the density of states accurately via a random walk that produces a flat histogram in energy space. We test 12 general benchmark instances on both two-dimensional and three-dimensional (3D) fcc HP lattice models. The lowest energies found by the IWL sampling method are as good as or better than those of other methods in the literature for all instances. We then test five sets of larger-scale instances, denoted the S, R, F90, F180, and CASP target instances, on the 3D fcc HP lattice model. The numerical results show that our algorithm outperforms the other five methods in the literature on both the lowest energies and the average lowest energies over all runs. The IWL sampling method turns out to be a powerful tool for studying structure prediction of fcc HP lattice model proteins.
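The core of Wang-Landau sampling, a random walk in energy space that builds up ln g(E) until the visit histogram is flat, can be demonstrated on a toy model with a known density of states (n independent spins, E = number of up spins, so g(E) = C(n, E)). This is a sketch of the general method, not the IWL variant with pull moves:

```python
import random
from math import exp

def wang_landau(n_spins=6, ln_f_min=1e-4, steps=20000, seed=0):
    """Wang-Landau estimate of ln g(E) for a toy model whose energy is the
    number of up spins, so the exact density of states is g(E) = C(n_spins, E)."""
    rng = random.Random(seed)
    spins = [0] * n_spins
    E = 0                          # current energy (number of up spins)
    ln_g = [0.0] * (n_spins + 1)   # running estimate of ln g(E)
    ln_f = 1.0                     # modification factor, halved when hist is flat
    while ln_f > ln_f_min:
        hist = [0] * (n_spins + 1)
        for _ in range(steps):
            i = rng.randrange(n_spins)
            E_new = E + (1 if spins[i] == 0 else -1)
            # accept the flip with probability min(1, g(E) / g(E_new))
            dlg = ln_g[E] - ln_g[E_new]
            if dlg >= 0 or rng.random() < exp(dlg):
                spins[i] ^= 1
                E = E_new
            ln_g[E] += ln_f        # always update the state actually occupied
            hist[E] += 1
        if min(hist) > 0.8 * sum(hist) / len(hist):  # flat-histogram criterion
            ln_f /= 2.0
    base = ln_g[0]
    return [x - base for x in ln_g]  # normalize so that ln g(0) = 0
```

Against the exact values ln C(6, E) = [0, ln 6, ln 15, ln 20, ln 15, ln 6, 0], the estimate converges as the modification factor shrinks.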

  13. The effects of sampling frequency on the climate statistics of the ECMWF general circulation model

    SciTech Connect

    Phillips, T.J.; Gates, W.L.; Arpe, K.

    1992-09-01

    The effects of sampling frequency on the first- and second-moment statistics of selected EC model variables are investigated in a simulation of "perpetual July" with a diurnal cycle included and with surface and atmospheric fields saved at hourly intervals. The shortest characteristic time scales (as determined by the e-folding time of lagged autocorrelation functions) are those of ground heat fluxes and temperatures, precipitation and run-off, convective processes, cloud properties, and atmospheric vertical motion, while the longest time scales are exhibited by soil temperature and moisture, surface pressure, and atmospheric specific humidity, temperature and wind. The time scales of surface heat and momentum fluxes and of convective processes are substantially shorter over land than over the oceans.
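The characteristic time scale used above is the lag at which the lagged autocorrelation function falls to 1/e. A direct estimate from a time series can be sketched as:

```python
from math import exp

def autocorr(x, lag):
    """Sample lagged autocorrelation (covariance at lag over variance)."""
    n = len(x)
    xbar = sum(x) / n
    var = sum((v - xbar) ** 2 for v in x)
    cov = sum((x[i] - xbar) * (x[i + lag] - xbar) for i in range(n - lag))
    return cov / var

def e_folding_time(x, max_lag=None):
    """Smallest lag (in sample intervals) at which the lagged autocorrelation
    of series x drops below 1/e."""
    max_lag = max_lag or len(x) - 1
    for lag in range(1, max_lag):
        if autocorr(x, lag) < 1.0 / exp(1):
            return lag
    return max_lag
```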

  15. West Flank Coso, CA FORGE 3D temperature model

    DOE Data Explorer

    Doug Blankenship

    2016-03-01

    x,y,z data of the 3D temperature model for the West Flank Coso FORGE site. Model grid spacing is 250 m. The temperature model for the Coso geothermal field used over 100 geothermal production-sized wells and intermediate-depth temperature holes. At the near surface of this model, two boundary temperatures were assumed: (1) in areas with surface manifestations, including fumaroles along the northeast-striking normal faults and northwest-striking dextral faults within the hydrothermal field, a temperature of ~104°C was applied to a datum at +1066 meters above sea level elevation, and (2) a near-surface temperature, at about 10 meters depth, of 20°C was applied below the diurnal and annual conductive temperature perturbations. These assumptions were based on heat flow studies conducted at the CVF and for the Mojave Desert. On the edges of the hydrothermal system, a 73°C/km (4°F/100 ft) temperature gradient contour was established using conductive gradient data from shallow and intermediate-depth temperature holes. This contour was continued to all elevation datums between the 20°C surface and -1520 meters below mean sea level. Because the West Flank is outside of the geothermal field footprint, during Phase 1 the three wells inside the FORGE site were incorporated into the preexisting temperature model. To ensure a complete model was built based on all the available data sets, measured bottom-hole temperature gradients in certain wells were extrapolated downward to the next deepest elevation datum (or a maximum of about 25% of the well depth where conductive gradients are evident in the lower portions of the wells). After assuring that the margins of the geothermal field would be adequately modelled, the data were contoured using the Kriging method algorithm. Although the extrapolated temperatures and boundary conditions are not rigorous, the calculated temperatures are anticipated to be within ~6°C (20°F), or one contour interval, of the

  16. Sample Collection from Small Airless Bodies: Examination of Temperature Constraints for the TGIP Sample Collector for the Hera Near-Earth Asteroid Sample Return Mission

    NASA Technical Reports Server (NTRS)

    Franzen, M. A.; Roe, L. A.; Buffington, J. A.; Sears, D. W. G.

    2005-01-01

    There have been a number of missions that have explored the solar system with cameras and other instruments, but profound questions remain that can only be addressed through the analysis of returned samples. However, due to lack of appropriate technology, high cost, and high risk, sample return has only recently become a feasible part of robotic solar system exploration. One specific objective of the President's new vision is that robotic exploration of the solar system should enhance human exploration as it discovers and understands the solar system and searches for life and resources [1]. Missions to small bodies, asteroids and comets, will partially fill the huge technological void between missions to the Moon and missions to Mars. However, such missions must be low cost and inherently simple, so they can be applied routinely to many missions. Sample return from asteroids, comets, Mars, and Jupiter's moons will be an important and natural part of the human exploration of space effort. Here we describe the collector designed for the Hera Near-Earth Asteroid Sample Return Mission. We have built a small prototype for preliminary evaluation, but expect the final collector to gather approximately 100 g of sample, from dust grains to centimeter-sized clasts, on each application to the surface of the asteroid.

  17. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
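The iterative selection of maximally dissimilar sites can be approximated by a greedy farthest-point rule in standardized environmental-factor space; this is a simplified stand-in for the maximum entropy modeling step, not the authors' procedure:

```python
def select_sites(candidates, k):
    """Greedy farthest-point selection of k sites. candidates is a list of
    feature tuples (e.g. standardized temperature, precipitation, elevation).
    Start from the most extreme candidate (farthest from the centroid), then
    repeatedly add the candidate with the largest minimum squared distance
    to the already-selected set. Returns the chosen indices in order."""
    n = len(candidates)
    dim = len(candidates[0])
    centroid = tuple(sum(c[j] for c in candidates) / n for j in range(dim))

    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    chosen = [max(range(n), key=lambda i: d2(candidates[i], centroid))]
    while len(chosen) < k:
        best = max((i for i in range(n) if i not in chosen),
                   key=lambda i: min(d2(candidates[i], candidates[j])
                                     for j in chosen))
        chosen.append(best)
    return chosen
```

As in the study, each added site extends coverage of the environmental envelope more efficiently than random selection would.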

  18. Multi-Relaxation Temperature-Dependent Dielectric Model of the Arctic Soil at Positive Temperatures

    NASA Astrophysics Data System (ADS)

    Savin, I. V.; Mironov, V. L.

    2014-11-01

    Frequency spectra of the dielectric permittivity of the Arctic soil of Alaska are investigated with allowance for the dipole and ionic relaxation of soil-moisture molecules at frequencies from 40 MHz to 16 GHz and temperatures from -5 to +25°C. A generalized temperature-dependent multi-relaxation refraction dielectric model of the humid Arctic soil is suggested.
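A multi-relaxation dielectric spectrum is commonly written as a sum of Debye terms. The sketch below evaluates such a sum; the generalized refraction model of the paper is more elaborate, and all parameters here are placeholders:

```python
from math import pi

def dielectric_spectrum(freqs_hz, eps_inf, terms):
    """Multi-relaxation Debye model,
    eps(f) = eps_inf + sum_k d_eps_k / (1 + j * 2 * pi * f * tau_k),
    where terms is a list of (delta_eps, tau_seconds) relaxation pairs.
    Returns the complex relative permittivity at each frequency."""
    return [eps_inf + sum(de / (1 + 1j * 2 * pi * f * tau) for de, tau in terms)
            for f in freqs_hz]
```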

  19. Z-estimation and stratified samples: application to survival models.

    PubMed

    Breslow, Norman E; Hu, Jie; Wellner, Jon A

    2015-10-01

    The infinite dimensional Z-estimation theorem offers a systematic approach to joint estimation of both Euclidean and non-Euclidean parameters in probability models for data. It is easily adapted for stratified sampling designs. This is important in applications to censored survival data because the inverse probability weights that modify the standard estimating equations often depend on the entire follow-up history. Since the weights are not predictable, they complicate the usual theory based on martingales. This paper considers joint estimation of regression coefficients and baseline hazard functions in the Cox proportional and Lin-Ying additive hazards models. Weighted likelihood equations are used for the former and weighted estimating equations for the latter. Regression coefficients and baseline hazards may be combined to estimate individual survival probabilities. Efficiency is improved by calibrating or estimating the weights using information available for all subjects. Although inefficient in comparison with likelihood inference for incomplete data, which is often difficult to implement, the approach provides consistent estimates of desired population parameters even under model misspecification.
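The inverse probability weighting that underlies the weighted likelihood and estimating equations can be illustrated in its simplest form, a Horvitz-Thompson (Hajek) mean under known inclusion probabilities; this is only the weighting idea, not the Cox or Lin-Ying machinery:

```python
def ipw_mean(values, probs):
    """Hajek-style inverse-probability-weighted estimate of a population mean
    from a (possibly stratified) sample, where unit i was included with known
    probability probs[i]. Each sampled unit stands in for 1/probs[i] units."""
    w = [1.0 / p for p in probs]
    return sum(wi * v for wi, v in zip(w, values)) / sum(w)
```

In the survival setting the same weights multiply each subject's contribution to the score equations, which is what complicates the usual martingale theory when the weights depend on follow-up history.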

  20. Integrated flow and temperature modeling at the catchment scale

    NASA Astrophysics Data System (ADS)

    Loinaz, Maria C.; Davidsen, Hasse Kampp; Butts, Michael; Bauer-Gottwein, Peter

    2013-07-01

    Changes in natural stream temperature levels can be detrimental to the health of aquatic ecosystems. Water use and land management directly affect the distribution of diffuse heat sources and thermal loads to streams, while riparian vegetation and geomorphology play a critical role in how thermal loads are buffered. In many areas, groundwater flow is a significant contribution to river flow, particularly during low flows and therefore has a strong influence on stream temperature levels and dynamics. However, previous stream temperature models do not properly simulate how surface water-groundwater dynamics affect stream temperature. A coupled surface water-groundwater and temperature model has therefore been developed to quantify the impacts of land management and water use on stream flow and temperatures. The model is applied to the simulation of stream temperature levels in a spring-fed stream, the Silver Creek Basin in Idaho, where stream temperature affects the populations of fish and other aquatic organisms. The model calibration highlights the importance of spatially distributed flow dynamics in the catchment to accurately predict stream temperatures. The results also show the value of including temperature data in an integrated flow model calibration because temperature data provide additional constraints on the flow sources and volumes. Simulations show that a reduction of 10% in the groundwater flow to the Silver Creek Basin can cause average and maximum temperature increases in Silver Creek over 0.3 °C and 1.5 °C, respectively. In spring-fed systems like Silver Creek, it is clearly not feasible to separate river habitat restoration from upstream catchment and groundwater management.

  1. Advanced flight design systems subsystem performance models. Sample model: Environmental analysis routine library

    NASA Technical Reports Server (NTRS)

    Parker, K. C.; Torian, J. G.

    1980-01-01

    A sample environmental control and life support model performance analysis using the environmental analysis routines library is presented. An example of a complete model set up and execution is provided. The particular model was synthesized to utilize all of the component performance routines and most of the program options.

  2. Estimating sampling biases and measurement uncertainties of AIRS/AMSU-A temperature and water vapor observations using MERRA reanalysis

    NASA Astrophysics Data System (ADS)

    Hearty, Thomas J.; Savtchenko, Andrey; Tian, Baijun; Fetzer, Eric; Yung, Yuk L.; Theobald, Michael; Vollmer, Bruce; Fishbein, Evan; Won, Young-In

    2014-03-01

    We use MERRA (Modern Era Retrospective-Analysis for Research Applications) temperature and water vapor data to estimate the sampling biases of climatologies derived from the AIRS/AMSU-A (Atmospheric Infrared Sounder/Advanced Microwave Sounding Unit-A) suite of instruments. We separate the total sampling bias into temporal and instrumental components. The temporal component is caused by the AIRS/AMSU-A orbit and swath that are not able to sample all of time and space. The instrumental component is caused by scenes that prevent successful retrievals. The temporal sampling biases are generally smaller than the instrumental sampling biases except in regions with large diurnal variations, such as the boundary layer, where the temporal sampling biases of temperature can be ± 2 K and water vapor can be 10% wet. The instrumental sampling biases are the main contributor to the total sampling biases and are mainly caused by clouds. They are up to 2 K cold and > 30% dry over midlatitude storm tracks and tropical deep convective cloudy regions and up to 20% wet over stratus regions. However, other factors such as surface emissivity and temperature can also influence the instrumental sampling bias over deserts where the biases can be up to 1 K cold and 10% wet. Some instrumental sampling biases can vary seasonally and/or diurnally. We also estimate the combined measurement uncertainties of temperature and water vapor from AIRS/AMSU-A and MERRA by comparing similarly sampled climatologies from both data sets. The measurement differences are often larger than the sampling biases and have longitudinal variations.

  3. Estimating Sampling Biases and Measurement Uncertainties of AIRS-AMSU-A Temperature and Water Vapor Observations Using MERRA Reanalysis

    NASA Technical Reports Server (NTRS)

    Hearty, Thomas J.; Savtchenko, Andrey K.; Tian, Baijun; Fetzer, Eric; Yung, Yuk L.; Theobald, Michael; Vollmer, Bruce; Fishbein, Evan; Won, Young-In

    2014-01-01

    We use MERRA (Modern Era Retrospective-Analysis for Research Applications) temperature and water vapor data to estimate the sampling biases of climatologies derived from the AIRS/AMSU-A (Atmospheric Infrared Sounder/Advanced Microwave Sounding Unit-A) suite of instruments. We separate the total sampling bias into temporal and instrumental components. The temporal component is caused by the AIRS/AMSU-A orbit and swath that are not able to sample all of time and space. The instrumental component is caused by scenes that prevent successful retrievals. The temporal sampling biases are generally smaller than the instrumental sampling biases except in regions with large diurnal variations, such as the boundary layer, where the temporal sampling biases of temperature can be +/- 2 K and water vapor can be 10% wet. The instrumental sampling biases are the main contributor to the total sampling biases and are mainly caused by clouds. They are up to 2 K cold and greater than 30% dry over mid-latitude storm tracks and tropical deep convective cloudy regions and up to 20% wet over stratus regions. However, other factors such as surface emissivity and temperature can also influence the instrumental sampling bias over deserts where the biases can be up to 1 K cold and 10% wet. Some instrumental sampling biases can vary seasonally and/or diurnally. We also estimate the combined measurement uncertainties of temperature and water vapor from AIRS/AMSU-A and MERRA by comparing similarly sampled climatologies from both data sets. The measurement differences are often larger than the sampling biases and have longitudinal variations.
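The separation of total sampling bias into temporal and instrumental components described in these two records can be mimicked with a toy calculation: compare the mean of the values an instrument actually observes against the all-weather mean. The series and availability masks below are invented for illustration.

```python
def sampling_bias(series, mask):
    """Mean of the observed subset minus the mean of the full series."""
    seen = [v for v, m in zip(series, mask) if m]
    return sum(seen) / len(seen) - sum(series) / len(series)

# Hypothetical temperatures over a diurnal cycle and two availability masks.
truth = [10.0, 14.0, 18.0, 14.0]         # full "reanalysis" record
orbit = [True, False, True, False]       # what the orbit/swath can see
retrieved = [True, False, False, False]  # orbit minus cloud-blocked scenes

temporal_bias = sampling_bias(truth, orbit)      # orbit-induced component
total_bias = sampling_bias(truth, retrieved)     # orbit + failed retrievals
instrumental_bias = total_bias - temporal_bias   # cloud/scene component
```

Calling the function with the orbital-coverage mask isolates the temporal component; the retrieval-success mask (a subset of the orbital mask) gives the total, and their difference approximates the instrumental component.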

  4. A stochastic model for the analysis of maximum daily temperature

    NASA Astrophysics Data System (ADS)

    Sirangelo, B.; Caloiero, T.; Coscarelli, R.; Ferrari, E.

    2016-08-01

    In this paper, a stochastic model for the analysis of the daily maximum temperature is proposed. First, a deseasonalization procedure based on the truncated Fourier expansion is adopted. Then, the Johnson transformation functions were applied for the data normalization. Finally, the fractionally autoregressive integrated moving average model was used to reproduce both short- and long-memory behavior of the temperature series. The model was applied to the data of the Cosenza gauge (Calabria region) and verified on four other gauges of southern Italy. Through a Monte Carlo simulation procedure based on the proposed model, 10⁵ years of daily maximum temperature have been generated. Among the possible applications of the model, the occurrence probabilities of the annual maximum values have been evaluated. Moreover, the procedure was applied for the estimation of the return periods of long sequences of days with maximum temperature above predefined thresholds.
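The first stage of the pipeline, deseasonalization by a truncated Fourier expansion, can be sketched as follows. The projection shortcut assumes the record spans a whole number of periods; the harmonic count and the synthetic data are placeholders, not the paper's fitted values.

```python
import math

def deseasonalize(x, period, harmonics=2):
    """Remove a truncated-Fourier estimate of the seasonal cycle.
    Assumes len(x) is a whole number of periods, so the sampled Fourier
    basis is orthogonal and least squares reduces to simple projections."""
    n = len(x)
    mean = sum(x) / n
    season = [mean] * n
    for k in range(1, harmonics + 1):
        w = 2 * math.pi * k / period
        a = 2.0 / n * sum(v * math.cos(w * t) for t, v in enumerate(x))
        b = 2.0 / n * sum(v * math.sin(w * t) for t, v in enumerate(x))
        for t in range(n):
            season[t] += a * math.cos(w * t) + b * math.sin(w * t)
    # Residuals (to be normalized and fed to a FARIMA model) and the cycle.
    return [v - s for v, s in zip(x, season)], season
```

Applied to two years of a purely sinusoidal daily series, the residuals vanish up to floating-point error, confirming that the projection recovers the seasonal cycle exactly.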

  5. Multiaxial Temperature- and Time-Dependent Failure Model

    NASA Technical Reports Server (NTRS)

    Richardson, David; McLennan, Michael; Anderson, Gregory; Macon, David; Batista-Rodriquez, Alicia

    2003-01-01

    A temperature- and time-dependent mathematical model predicts the conditions for failure of a material subjected to multiaxial stress. The model was initially applied to a filled epoxy below its glass-transition temperature, and is expected to be applicable to other materials, at least below their glass-transition temperatures. The model is justified simply by the fact that it closely approximates the experimentally observed failure behavior of this material: the multiaxiality of the model has been confirmed (see figure), and the model has been shown to be applicable at temperatures from -20 to 115 °F (-29 to 46 °C) and to predict tensile failures of constant-load and constant-load-rate specimens with failure times ranging from minutes to months.

  6. ACTINIDE REMOVAL PROCESS SAMPLE ANALYSIS, CHEMICAL MODELING, AND FILTRATION EVALUATION

    SciTech Connect

    Martino, C.; Herman, D.; Pike, J.; Peters, T.

    2014-06-05

    Filtration within the Actinide Removal Process (ARP) currently limits the throughput in interim salt processing at the Savannah River Site. In this process, batches of salt solution with Monosodium Titanate (MST) sorbent are concentrated by crossflow filtration. The filtrate is subsequently processed to remove cesium in the Modular Caustic Side Solvent Extraction Unit (MCU) followed by disposal in saltstone grout. The concentrated MST slurry is washed and sent to the Defense Waste Processing Facility (DWPF) for vitrification. During recent ARP processing, there has been a degradation of filter performance manifested as the inability to maintain high filtrate flux throughout a multi-batch cycle. The objectives of this effort were to characterize the feed streams, to determine if solids (in addition to MST) are precipitating and causing the degraded performance of the filters, and to assess the particle size and rheological data to address potential filtration impacts. Equilibrium modeling with OLI Analyzer™ and OLI ESP™ was performed to determine chemical components at risk of precipitation and to simulate the ARP process. The performance of ARP filtration was evaluated to review potential causes of the observed filter behavior. Task activities for this study included extensive physical and chemical analysis of samples from the Late Wash Pump Tank (LWPT) and the Late Wash Hold Tank (LWHT) within ARP as well as samples of the tank farm feed from Tank 49H. The samples from the LWPT and LWHT were obtained from several stages of processing of Salt Batch 6D, Cycle 6, Batch 16.

  7. Defining Predictive Probability Functions for Species Sampling Models

    PubMed Central

    Lee, Jaeyong; Quintana, Fernando A.; Müller, Peter; Trippa, Lorenzo

    2013-01-01

    We review the class of species sampling models (SSM). In particular, we investigate the relation between the exchangeable partition probability function (EPPF) and the predictive probability function (PPF). It is straightforward to define a PPF from an EPPF, but the converse is not necessarily true. In this paper we introduce the notion of putative PPFs and show novel conditions for a putative PPF to define an EPPF. We show that all possible PPFs in a certain class have to define (unnormalized) probabilities for cluster membership that are linear in cluster size. We give a new necessary and sufficient condition for arbitrary putative PPFs to define an EPPF. Finally, we show posterior inference for a large class of SSMs with a PPF that is not linear in cluster size and discuss a numerical method to derive its PPF. PMID:24368874

  8. Ignition temperature of magnesium powder clouds: a theoretical model.

    PubMed

    Chunmiao, Yuan; Chang, Li; Gang, Li; Peihong, Zhang

    2012-11-15

    Minimum ignition temperature of dust clouds (MIT-DC) is an important consideration when adopting explosion prevention measures. This paper presents a model for determining the minimum ignition temperature of a magnesium powder cloud under conditions simulating a Godbert-Greenwald (GG) furnace. The model is based on heterogeneous oxidation of metal particles and Newton's law of motion, and correlates particle size, dust concentration, and dust dispersion pressure with MIT-DC. Model predictions agree closely with experimental data, and the model is especially useful in predicting the temperature and velocity changes as particles pass through the furnace tube.
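The heat-balance core of such a model is Newton-law convective heating of a particle by the furnace gas. The sketch below integrates only that term with explicit Euler; the heterogeneous-oxidation source term and the particle's equation of motion are omitted, and all parameter values are invented.

```python
def heat_particle(t_gas, t0, h, area, mass, cp, dt, steps):
    """Explicit-Euler integration of convective heating,
    m * cp * dT/dt = h * A * (T_gas - T),
    for a single particle of surface area A, mass m, and heat capacity cp
    traversing a furnace tube held at gas temperature T_gas."""
    temps = [t0]
    for _ in range(steps):
        t = temps[-1]
        temps.append(t + dt * h * area * (t_gas - t) / (mass * cp))
    return temps
```

The particle temperature rises monotonically and asymptotically approaches the gas temperature, which is the qualitative behavior the full model refines with an oxidation heat source (the source term is what eventually drives ignition above the gas temperature).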

  9. Experiments and modeling of variably permeable carbonate reservoir samples in contact with CO₂-acidified brines

    SciTech Connect

    Smith, Megan M.; Hao, Yue; Mason, Harris E.; Carroll, Susan A.

    2014-12-31

    Reactive experiments were performed to expose sample cores from the Arbuckle carbonate reservoir to CO₂-acidified brine under reservoir temperature and pressure conditions. The samples consisted of dolomite with varying quantities of calcite and silica/chert. The timescales of monitored pressure decline across each sample in response to CO₂ exposure, as well as the amount and nature of dissolution features, varied widely among these three experiments. For all sample cores, the experimentally measured initial permeability was at least one order of magnitude lower than the values estimated from downhole methods. Nondestructive X-ray computed tomography (XRCT) imaging revealed dissolution features including “wormholes,” removal of fracture-filling crystals, and widening of pre-existing pore spaces. In the injection zone sample, multiple fractures may have contributed to the high initial permeability of this core and restricted the distribution of CO₂-induced mineral dissolution. In contrast, the pre-existing porosity of the baffle zone sample was much lower and less connected, leading to a lower initial permeability and contributing to the development of a single dissolution channel. While calcite may make up only a small percentage of the overall sample composition, its location and the effects of its dissolution have an outsized effect on permeability responses to CO₂ exposure. The XRCT data presented here are informative for building the model domain for numerical simulations of these experiments but require calibration by higher resolution means to confidently evaluate different porosity-permeability relationships.

  10. A physically based model of global freshwater surface temperature

    NASA Astrophysics Data System (ADS)

    Beek, Ludovicus P. H.; Eikelboom, Tessa; Vliet, Michelle T. H.; Bierkens, Marc F. P.

    2012-09-01

    Temperature determines a range of physical properties of water and exerts a strong control on surface water biogeochemistry. Thus, in freshwater ecosystems the thermal regime directly affects the geographical distribution of aquatic species through their growth and metabolism and indirectly through their tolerance to parasites and diseases. Models used to predict surface water temperature range between physically based deterministic models and statistical approaches. Here we present the initial results of a physically based deterministic model of global freshwater surface temperature. The model adds a surface water energy balance to river discharge modeled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff, and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by shortwave and longwave radiation and sensible and latent heat fluxes. Also included are ice formation and its effect on heat storage and river hydraulics. We use the coupled surface water and energy balance model to simulate global freshwater surface temperature at daily time steps with a spatial resolution of 0.5° on a regular grid for the period 1976-2000. We opt to parameterize the model with globally available data and apply it without calibration in order to preserve its physical basis with the outlook of evaluating the effects of atmospheric warming on freshwater surface temperature. We validate our simulation results with daily temperature data from rivers and lakes (U.S. Geological Survey (USGS), limited to the USA) and compare mean monthly temperatures with those recorded in the Global Environment Monitoring System (GEMS) data set. Results show that the model is able to capture the mean monthly surface temperature for the majority of the GEMS stations, while the interannual variability as derived from the USGS and NOAA data was captured reasonably well. Results are poorest for

  11. A temperature dependent SPICE macro-model for power MOSFETs

    SciTech Connect

    Pierce, D.G.

    1992-05-01

    A power MOSFET macro-model for use with the circuit simulator SPICE has been developed suitable for use over the temperature range of −55 to 125°C. The model is comprised of a single parameter set with the temperature dependence accessed through the SPICE TEMP card. This report describes in detail the development of the model and the extraction algorithms used to obtain model parameters. The extraction algorithms are described in sufficient detail to allow for automated measurements, which in turn allows for rapid and cost-effective development of an accurate SPICE model for any power MOSFET. 22 refs.

  12. Integrated Modeling of Spacecraft Touch-and-Go Sampling

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco

    2009-01-01

    An integrated modeling tool has been developed to include multi-body dynamics, orbital dynamics, and touch-and-go dynamics for spacecraft covering three types of end-effectors: a sticky pad, a brush-wheel sampler, and a pellet gun. Several multi-body models of a free-flying spacecraft with a multi-link manipulator driving these end-effectors have been tested with typical contact conditions arising when the manipulator arm is to sample the surface of an asteroidal body. The test data have been infused directly into the dynamics formulation including such information as the mass collected as a function of end-effector longitudinal speed for the brush-wheel and sticky-pad samplers, and the mass collected as a function of projectile speed for the pellet gun sampler. These data represent the realistic behavior of the end effector while in contact with a surface, and represent a low-order model of more complex contact conditions that otherwise would have to be simulated. Numerical results demonstrate the adequacy of these multibody models for spacecraft and manipulator- arm control design. The work contributes to the development of a touch-and-go testbed for small body exploration, denoted as the GREX Testbed (GN&C for Rendezvous-based EXploration). The GREX testbed addresses the key issues involved in landing on an asteroidal body or comet; namely, a complex, low-gravity field; partially known terrain properties; possible comet outgassing; dust ejection; and navigating to a safe and scientifically desirable zone.

  13. Model of the magnetization of nanocrystalline materials at low temperatures

    NASA Astrophysics Data System (ADS)

    Bian, Q.; Niewczas, M.

    2014-07-01

    A theoretical model incorporating the material texture has been developed to simulate the magnetic properties of nanocrystalline materials at low temperatures, where the effect of thermal energy on magnetization is neglected. The method is based on Landau-Lifshitz-Gilbert (LLG) theory and describes the magnetization dynamics of individual grains in the effective field. The modified LLG equation incorporates the intrinsic fields from the intragrain magnetocrystalline and grain boundary anisotropies and the interacting fields from intergrain dipolar and exchange couplings between the neighbouring grains. The model is applied to study the magnetic properties of textured nanocrystalline Ni samples at 2 K and closely reproduces the hysteresis loop behaviour at different orientations of the applied magnetic field. Nanocrystalline Ni shows a grain boundary anisotropy constant K₁ˢ = −6.0 × 10⁴ J/m³ and an intergrain exchange coupling denoted by the effective exchange constant Aₚ = 2.16 × 10⁻¹¹ J/m. Analytical expressions to estimate the intergrain exchange energy density and the effective exchange constant have been formulated.

  14. Modelling of aluminium sheet forming at elevated temperatures

    NASA Astrophysics Data System (ADS)

    van den Boogaard, A. H.; Huétink, J.

    2004-06-01

    The formability of Al-Mg sheet can be improved considerably, by increasing the temperature. By heating the sheet in areas with large shear strains, but cooling it on places where the risk of necking is high, the limiting drawing ratio can be increased to values above 2.5. At elevated temperatures, the mechanical response of the material becomes strain rate dependent. To accurately simulate warm forming of aluminium sheet, a material model is required that incorporates the temperature and strain-rate dependency. In this paper simulations are presented of the deep drawing of a cylindrical cup, using shell elements. It is demonstrated that the familiar quadratic Hill yield function is not capable of describing the plastic deformation of aluminium. Hardening can be described successfully with a physically based material model for temperatures up to 200 °C. At higher temperatures and very low strain rates, the flow curve deviates significantly from the model.

  15. Statistical Modeling of Daily Stream Temperature for Mitigating Fish Mortality

    NASA Astrophysics Data System (ADS)

    Caldwell, R. J.; Rajagopalan, B.

    2011-12-01

    Water allocations in the Central Valley Project (CVP) of California require the consideration of short- and long-term needs of many socioeconomic factors including, but not limited to, agriculture, urban use, flood mitigation/control, and environmental concerns. The Endangered Species Act (ESA) ensures that the decision-making process provides sufficient water to limit the impact on protected species, such as salmon, in the Sacramento River Valley. Current decision support tools in the CVP were deemed inadequate by the National Marine Fisheries Service due to the limited temporal resolution of forecasts for monthly stream temperature and fish mortality. Finer scale temporal resolution is necessary to account for the stream temperature variations critical to salmon survival and reproduction. In addition, complementary, long-range tools are needed for monthly and seasonal management of water resources. We will present a Generalized Linear Model (GLM) framework of maximum daily stream temperatures and related attributes, such as: daily stream temperature range, exceedance/non-exceedance of critical threshold temperatures, and the number of hours of exceedance. A suite of predictors that impact stream temperatures are included in the models, including current and prior day values of streamflow, water temperatures of upstream releases from Shasta Dam, air temperature, and precipitation. Monthly models are developed for each stream temperature attribute at the Balls Ferry gauge, an EPA compliance point for meeting temperature criteria. The statistical framework is also coupled with seasonal climate forecasts using a stochastic weather generator to provide ensembles of stream temperature scenarios that can be used for seasonal scale water allocation planning and decisions. Short-term weather forecasts can also be used in the framework to provide near-term scenarios useful for making water release decisions on a daily basis. The framework can be easily translated to other
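The Gaussian, identity-link member of the GLM family used in the study reduces to ordinary least squares; a one-predictor sketch (say, maximum daily stream temperature against air temperature) looks like this. The data are fabricated, and the real models use several predictors and month-specific fits.

```python
def fit_linear(x, y):
    """Closed-form ordinary least squares for y = b0 + b1 * x,
    the Gaussian identity-link case of the GLM family."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b1 = (sum((a - mx) * (b - my) for a, b in zip(x, y))
          / sum((a - mx) ** 2 for a in x))
    b0 = my - b1 * mx
    return b0, b1
```

Attributes such as threshold exceedance (yes/no) would instead use the binomial family with a logit link, but the fitting idea is the same: relate each stream-temperature attribute to daily meteorological and flow predictors.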

  16. Modeling the Effect of Temperature on Ozone-Related Mortality.

    EPA Science Inventory

    Modeling the Effect of Temperature on Ozone-Related Mortality. Wilson, Ander, Reich, Brian J, Neas, Lucas M., Rappold, Ana G. Background: Previous studies show ozone and temperature are associated with increased mortality; however, the joint effect is not well explored. Underst...

  17. A generalized conditional heteroscedastic model for temperature downscaling

    NASA Astrophysics Data System (ADS)

    Modarres, R.; Ouarda, T. B. M. J.

    2014-11-01

    This study describes a method for deriving the time-varying second-order moment, or heteroscedasticity, of local daily temperature and its association with large-scale Canadian Coupled General Circulation Model predictors. This is carried out by applying a multivariate generalized autoregressive conditional heteroscedasticity (MGARCH) approach to construct the conditional variance-covariance structure between General Circulation Model (GCM) predictors and maximum and minimum temperature time series during 1980-2000. Two MGARCH specifications, namely diagonal VECH and dynamic conditional correlation (DCC), are applied, and 25 GCM predictors were selected for bivariate temperature heteroscedastic modeling. It is observed that the conditional covariance between predictors and temperature is not very strong and mostly depends on the interaction between the random processes governing the temporal variation of predictors and predictands. The DCC model reveals a time-varying conditional correlation between GCM predictors and temperature time series. No remarkable increasing or decreasing change is observed in the correlation coefficients between GCM predictors and observed temperature during 1980-2000, while weak winter-summer seasonality is clear for both conditional covariance and correlation. Furthermore, the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) stationarity and Brock-Dechert-Scheinkman (BDS) nonlinearity tests showed that the GCM predictors, temperature, and their conditional correlation time series are nonlinear but stationary during 1980-2000. However, the degree of nonlinearity of the temperature time series is higher than that of most of the GCM predictors.
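The univariate building block of any MGARCH specification is the GARCH(1,1) conditional-variance recursion, sketched below with made-up parameters; the paper's diagonal VECH and DCC models generalize this scalar recursion to a full variance-covariance matrix.

```python
def garch_variances(shocks, omega, alpha, beta):
    """GARCH(1,1) recursion for the conditional variance:
    sigma2_t = omega + alpha * e_{t-1}^2 + beta * sigma2_{t-1},
    seeded with the unconditional variance omega / (1 - alpha - beta).
    Requires alpha + beta < 1 (covariance stationarity)."""
    sig2 = [omega / (1.0 - alpha - beta)]
    for e in shocks[:-1]:
        sig2.append(omega + alpha * e * e + beta * sig2[-1])
    return sig2
```

A large shock raises the next period's conditional variance, which then decays geometrically at rate beta; this is the "time-varying second-order moment" the abstract refers to.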

  18. Temperature sensitivity of a numerical pollen forecast model

    NASA Astrophysics Data System (ADS)

    Scheifinger, Helfried; Meran, Ingrid; Szabo, Barbara; Gallaun, Heinz; Natali, Stefano; Mantovani, Simone

    2016-04-01

    Allergic rhinitis has become a global health problem, especially affecting children and adolescents. Timely and reliable warning before an increase in the atmospheric pollen concentration provides substantial support for physicians and allergy sufferers. Recently developed numerical pollen forecast models have become a means of supporting the pollen forecast service, but they still require refinement. One of the problem areas concerns the correct timing of the beginning and end of the flowering period of the species under consideration, which is identical with the period of possible pollen emission. Both are governed essentially by the temperature accumulated before the onset of flowering and during flowering. Phenological models are sensitive to a bias in the input temperature: a mean bias of -1°C can shift the predicted entry date of a phenological phase about a week into the future. A bias of this order of magnitude is still possible in numerical weather forecast models. If the assimilation of additional temperature information (e.g., ground measurements as well as satellite-retrieved air/surface temperature fields) is able to reduce such systematic temperature deviations, the precision of the timing of phenological entry dates might be enhanced. With a number of sensitivity experiments, the effect of a possible temperature bias on the modelled phenology and the pollen concentration in the atmosphere is determined. The actual bias of the ECMWF IFS 2 m temperature will also be calculated and its effect on the numerical pollen forecast procedure presented.

  19. Note: A sample holder design for sensitive magnetic measurements at high temperatures in a magnetic properties measurement system

    SciTech Connect

    Arauzo, A.; Guerrero, E.; Urtizberea, A.; Stankiewicz, J.; Rillo, C.

    2012-06-15

    A sample holder design for high-temperature measurements in a commercial MPMS SQUID magnetometer from Quantum Design is presented. It fulfills the requirements for the simultaneous use of the oven and the reciprocating sample option (RSO), thus allowing sensitive magnetic measurements up to 800 K. Alternating-current susceptibility can also be measured, since the holder does not induce any phase shift relative to the ac driving field. It is easily fabricated by twisting Constantan© wires into a braid nesting the sample inside. This design ensures that the sample is placed tightly into a tough holder with its orientation fixed, and prevents any sample displacement during the fast movements of the RSO transport, up to high temperatures.

  20. Application of a temperature-dependent fluorescent dye (Rhodamine B) to the measurement of radiofrequency radiation-induced temperature changes in biological samples.

    PubMed

    Chen, Yuen Y; Wood, Andrew W

    2009-10-01

    We have applied a non-contact method for studying the temperature changes produced by radiofrequency (RF) radiation specifically to small biological samples. A temperature-dependent fluorescent dye, Rhodamine B, as imaged by laser scanning confocal microscopy (LSCM), was used to do this. The results were calibrated against real-time temperature measurements from fiber optic probes, with a calibration factor of 3.4% intensity change °C⁻¹ and a reproducibility of ±6%. This non-contact method provided two-dimensional and three-dimensional images of temperature change and distributions in biological samples, at a spatial resolution of a few micrometers and with an estimated absolute precision of around 1.5 °C, with a differential precision of 0.4 °C. Temperature rise within tissue was found to be non-uniform. Estimates of specific absorption rate (SAR) from absorbed power measurements were greater than those estimated from the rate of temperature rise, measured at 1 min intervals, probably because this interval is too long to permit accurate estimation of the initial temperature rise following the start of RF exposure. Future experiments will aim to explore this. PMID:19507188
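Inverting the linear intensity-temperature calibration reported here (3.4% intensity change per °C) is a one-line computation. The sign convention below (fluorescence intensity falling as temperature rises, as is typical for Rhodamine B) is our assumption, not stated in the abstract.

```python
CAL = -0.034  # fractional intensity change per deg C (3.4 %/C from the study;
              # the negative sign - intensity drops as T rises - is assumed)

def temp_change(i_ref, i):
    """Invert the linear calibration I = I_ref * (1 + CAL * dT)
    to recover the temperature change dT from two intensity readings."""
    return (i / i_ref - 1.0) / CAL
```

For example, a drop from 100 to 96.6 intensity units corresponds to a rise of 1 °C under this calibration.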

  1. Reliability and stability of three cryogenic temperature sensor models subjected to accelerated thermal cycling

    NASA Astrophysics Data System (ADS)

    Courts, S. Scott; Krause, John

    2012-06-01

    Reliability of a cryogenic temperature sensor is important for any experimental application, but even more so for aerospace applications, where there is virtually no opportunity to replace a failed sensor. Many factors affect the stability and longevity of a cryogenic temperature sensor, but one of the most detrimental is thermal cycling over an extended temperature range. Strains and stresses caused by thermal contraction can affect both the sensing material and its interface with the electrical contacts, leading to calibration shift and/or catastrophic failure of the sensor. Depending upon the aerospace application, a temperature sensor may cycle from cryogenic temperature to near room temperature hundreds of times or more during the lifetime of the mission. Sample groups of three sensor types, the Lake Shore Cryotronics, Inc. models CX-1050-SD (23 samples), DT-670-SD (12 samples), and DT-470-SD (11 samples), were subjected to accelerated thermal shocking from room temperature to 77 K one thousand times. Recalibrations of each group were performed from 1.2 K to 325 K after 20, 40, 60, 100, 250, 500, and 1,000 thermal shocks. The resulting reliability and stability data are presented.

  2. The influence of model resolution on temperature variability

    NASA Astrophysics Data System (ADS)

    Klavans, Jeremy M.; Poppick, Andrew; Sun, Shanshan; Moyer, Elisabeth J.

    2016-08-01

    Understanding future changes in climate variability, which can impact human activities, is a current research priority. It is often assumed that a key part of this effort involves improving the spatial resolution of climate models; however, few previous studies comprehensively evaluate the effects of model resolution on variability. In this study, we systematically examine the sensitivity of temperature variability to horizontal atmospheric resolution in a single model (CCSM3, the Community Climate System Model 3) at three different resolutions (T85, T42, and T31), using spectral analysis to describe the frequency dependence of differences. We find that in these runs, increased model resolution is associated with reduced temperature variability at all but the highest frequencies (2-5 day periods), though with strong regional differences. (In the tropics, where temperature fluctuations are smallest, increased resolution is associated with increased variability.) At all resolutions, temperature fluctuations in CCSM3 are highly spatially correlated, implying that the changes in variability with model resolution are driven by alterations in large-scale phenomena. Because CCSM3 generally overestimates temperature variability relative to reanalysis output, the reductions in variability associated with increased resolution tend to improve model fidelity. However, the resolution-related variability differences are relatively uniform with frequency, whereas the sign of model bias changes at interannual frequencies. This discrepancy raises questions about the mechanisms underlying the improvement at subannual frequencies. The consistent response across frequencies also implies that the atmosphere plays a significant role in interannual variability.

  3. Elevated body temperature is linked to fatigue in an Italian sample of relapsing-remitting multiple sclerosis patients.

    PubMed

    Leavitt, V M; De Meo, E; Riccitelli, G; Rocca, M A; Comi, G; Filippi, M; Sumowski, J F

    2015-11-01

    Elevated body temperature was recently reported for the first time in patients with relapsing-remitting multiple sclerosis (RRMS) relative to healthy controls. In addition, warmer body temperature was associated with worse fatigue. These findings are highly novel, may indicate a novel pathophysiology for MS fatigue, and therefore warrant replication in a geographically separate sample. Here, we investigated body temperature and its association to fatigue in an Italian sample of 44 RRMS patients and 44 age- and sex-matched healthy controls. Consistent with our original report, we found elevated body temperature in the RRMS sample compared to healthy controls. Warmer body temperature was associated with worse fatigue, thereby supporting the notion of endogenous temperature elevations in patients with RRMS as a novel pathophysiological factor underlying fatigue. Our findings highlight a paradigm shift in our understanding of the effect of heat in RRMS, from exogenous (i.e., Uhthoff's phenomenon) to endogenous. Although randomized controlled trials of cooling treatments (i.e., aspirin, cooling garments) to reduce fatigue in RRMS have been successful, consideration of endogenously elevated body temperature as the underlying target will enhance our development of novel treatments.

  4. Estimation of effective temperatures in quantum annealers for sampling applications: A case study with possible applications in deep learning

    NASA Astrophysics Data System (ADS)

    Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro

    2016-08-01

    An increase in the efficiency of sampling from Boltzmann distributions would have a significant impact on deep learning and other machine-learning applications. Recently, quantum annealers have been proposed as a potential candidate to speed up this task, but several limitations still bar these state-of-the-art technologies from being used effectively. One of the main limitations is that, while the device may indeed sample from a Boltzmann-like distribution, quantum dynamical arguments suggest it will do so with an instance-dependent effective temperature, different from its physical temperature. Unless this unknown temperature can be unveiled, it might not be possible to effectively use a quantum annealer for Boltzmann sampling. In this work, we propose a strategy to overcome this challenge with a simple effective-temperature estimation algorithm. We provide a systematic study assessing the impact of the effective temperature in the learning of a special class of restricted Boltzmann machine embedded on quantum hardware, which can serve as a building block for deep-learning architectures. We also provide a comparison to k-step contrastive divergence (CD-k) with k up to 100. Although assuming a suitable fixed effective temperature also allows us to outperform one-step contrastive divergence (CD-1), only when using an instance-dependent effective temperature do we find a performance close to that of CD-100 for the case studied here.
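    The estimation idea sketched in the abstract can be illustrated in a few lines: if samples really do follow a Boltzmann distribution at some unknown effective temperature, the log-frequencies of states are linear in their energies, and the slope of that line is -1/T_eff. The sketch below is not the paper's algorithm; the energy ladder and temperature are invented for illustration, and T_eff is recovered from synthetic samples by a least-squares fit.

```python
import math
import random
from collections import Counter

def boltzmann_sample(energies, T, n, rng):
    # draw n states with p_i proportional to exp(-E_i / T) over a discrete ladder
    weights = [math.exp(-e / T) for e in energies]
    return Counter(rng.choices(range(len(energies)), weights=weights, k=n))

def estimate_teff(energies, counts):
    # least-squares slope of log(count) vs energy: log p_i = const - E_i / T_eff
    pts = [(energies[s], math.log(c)) for s, c in counts.items()]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    return -sxx / sxy  # T_eff = -1 / slope

rng = random.Random(0)
energies = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
counts = boltzmann_sample(energies, T=1.7, n=100_000, rng=rng)
t_est = estimate_teff(energies, counts)
```

    With enough samples the fitted temperature lands close to the true value used to generate them; on hardware, the energies come from the embedded problem instance rather than a hand-written ladder.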

  5. High temperature SPICE modeling of partially depleted SOI MOSFETs

    SciTech Connect

    Osman, M.A.; Osman, A.A.

    1996-03-01

    Several partially depleted SOI N- and P-MOSFETs with dimensions ranging from W/L = 30/10 to 15/3 were characterized from room temperature up to 300 °C. The devices exhibited a well-defined, sharp zero-temperature-coefficient (ZTC) biasing point up to 573 K in both the linear and saturation regions. Simulations of the I-V characteristics using a temperature-dependent SOI SPICE model were in excellent agreement with measurements. Additionally, the measured ZTC points agreed favorably with those predicted using expressions derived from the temperature-dependent SOI model. © 1996 American Institute of Physics.

  6. A model for estimating the value of sampling programs and the optimal number of samples for contaminated soil

    NASA Astrophysics Data System (ADS)

    Back, Pär-Erik

    2007-04-01

    A model is presented for estimating the value of information of sampling programs for contaminated soil. The purpose is to calculate the optimal number of samples when the objective is to estimate the mean concentration. A Bayesian risk-cost-benefit decision analysis framework is applied, and the approach is design-based. The model explicitly includes sample uncertainty at a complexity level that can be applied to practical contaminated-land problems with a limited amount of data. Prior information about the contamination level is modelled by probability density functions. The value of information is expressed in monetary terms. The most cost-effective sampling program is the one with the highest expected net value. The model was applied to a scrap yard in Göteborg, Sweden, contaminated by metals. The optimal number of samples was determined to be in the range of 16-18 for a remediation unit of 100 m². Sensitivity analysis indicates that the perspective of the decision-maker is important, and that the cost of failure and the future land use are the most important factors to consider. The model can also be applied to other sampling problems, for example, sampling and testing of wastes to meet landfill waste acceptance procedures.
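    The trade-off the model formalizes can be sketched with invented numbers (the paper's actual priors, costs, and the 16-18 result are not reproduced here): each extra sample costs money but shrinks the chance that the estimated mean is wrong enough to cause a costly remediation failure, and the optimal sample count maximizes the expected net value.

```python
import math

def failure_probability(n, sigma=1.0, margin=0.5):
    # chance that the sample mean is off by more than `margin`
    # (normal two-sided tail with standard error sigma / sqrt(n))
    z = margin * math.sqrt(n) / sigma
    return math.erfc(z / math.sqrt(2))

def expected_net_value(n, benefit=100_000.0, cost_per_sample=800.0,
                       cost_of_failure=250_000.0):
    # net value = benefit of the decision - sampling cost - expected failure cost
    return benefit - n * cost_per_sample - failure_probability(n) * cost_of_failure

# the most cost-effective program is the n with the highest expected net value
best_n = max(range(1, 41), key=expected_net_value)
```

    With these illustrative costs the optimum falls at a moderate sample count: below it, the expected failure cost dominates; above it, sampling cost eats the benefit.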

  7. Control of household refrigerators. Part 1: Modeling temperature control performance

    SciTech Connect

    Graviss, K.J.; Collins, R.L.

    1999-07-01

    Commercial household refrigerators use simple, cost-effective temperature controllers to obtain acceptable control. A manually adjusted airflow damper regulates the freezer compartment temperature while a thermostat controls operation of the compressor and evaporator fan to regulate refrigerator compartment temperature. Dual compartment temperature control can be achieved with automatic airflow dampers that function independently of the compressor and evaporator fan thermostat, resulting in improved temperature control quality and reduced energy consumption. Under dual control, freezer temperature is controlled by the thermostat while the damper controls refrigerator temperature by regulating airflow circulation. A simulation model is presented that analyzes a household refrigerator configured with a conventional thermostat and both manual and automatic dampers. The model provides a new paradigm for investigating refrigerator systems and temperature control performance relative to the extensive verification testing that is typically done by manufacturers. The effects of each type of control and damper configuration are compared with respect to energy usage, control quality, and ambient temperature shift criteria. The results indicate that the appropriate control configuration can have significant effects and can improve plant performance.

  8. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    USGS Publications Warehouse

    Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, G.G.

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
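    The optimism of resubstitution accuracy that the study warns about can be demonstrated with a deliberately extreme toy, a 1-nearest-neighbour classifier on synthetic overlapping classes (not the paper's classification trees or lichen data): resubstitution accuracy is trivially perfect because every point's nearest neighbour is itself, while leave-one-out cross-validation gives an honest estimate.

```python
import random

def nn_predict(data, x, exclude=None):
    # 1-nearest-neighbour prediction by squared Euclidean distance;
    # `exclude` skips one training index (used for leave-one-out)
    best_d, best_y = float("inf"), None
    for i, (xi, yi) in enumerate(data):
        if i == exclude:
            continue
        d = sum((a - b) ** 2 for a, b in zip(xi, x))
        if d < best_d:
            best_d, best_y = d, yi
    return best_y

rng = random.Random(1)
# two overlapping Gaussian classes in 2-D
data = ([((rng.gauss(0, 1), rng.gauss(0, 1)), 0) for _ in range(60)]
        + [((rng.gauss(1, 1), rng.gauss(1, 1)), 1) for _ in range(60)])

# resubstitution: evaluate on the same points used as the "training" set
resub = sum(nn_predict(data, x) == y for x, y in data) / len(data)
# leave-one-out: each point is predicted from all the others
loo = sum(nn_predict(data, x, exclude=i) == y
          for i, (x, y) in enumerate(data)) / len(data)
```

    The resubstitution score is 100% regardless of how noisy the classes are; the leave-one-out score reflects the actual class overlap, which is the study's argument for reporting cross-validated rates.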

  9. Evaluating Small Sample Approaches for Model Test Statistics in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Nevitt, Jonathan; Hancock, Gregory R.

    2004-01-01

    Through Monte Carlo simulation, small sample methods for evaluating overall data-model fit in structural equation modeling were explored. Type I error behavior and power were examined using maximum likelihood (ML), Satorra-Bentler scaled and adjusted (SB; Satorra & Bentler, 1988, 1994), residual-based (Browne, 1984), and asymptotically…

  10. Simple and compact optode for real-time in-situ temperature detection in very small samples

    PubMed Central

    Long, Feng; Shi, Hanchang

    2014-01-01

    Real-time in-situ temperature detection is essential in many applications. In this paper, a simple and robust optode, which uses Ruthenium (II) complex as a temperature indicator, has been developed for rapid and sensitive temperature detection in small volume samples (<5 μL). Transmission of excitation light and collection and transmission of fluorescence are performed by a homemade single-multi mode fiber coupler, which provides the entire system with a simple and robust structure. The photoluminescence intensity of Ruthenium (II) complex diminishes monotonically from 0°C to 80°C, and the response to temperature is rapid and completely reversible. When temperature is less than (or higher than) 50°C, a linear correlation exists between the fluorescence intensity and the temperature. Excellent agreement was also observed between the continuous and in situ measurements obtained by the presented optode and the discrete temperature values measured by a conventional thermometer. The proposed optode has high sensitivity, high photostability and chemical stability, a wide detection range, and thermal reversibility, and can be applied to real-time in-situ temperature detection of very small-volume biological, environmental, and chemical samples. PMID:24875420

  11. The stability of hydrogen ion and specific conductance in filtered wet-deposition samples stored at ambient temperatures

    USGS Publications Warehouse

    Gordon, J.D.; Schroder, L.J.; Morden-Moore, A. L.; Bowersox, V.C.

    1995-01-01

    Separate experiments by the U.S. Geological Survey (USGS) and the Illinois State Water Survey Central Analytical Laboratory (CAL) independently assessed the stability of hydrogen ion and specific conductance in filtered wet-deposition samples stored at ambient temperatures. The USGS experiment represented a test of sample stability under a diverse range of conditions, whereas the CAL experiment was a controlled test of sample stability. In the experiment by the USGS, a statistically significant (α = 0.05) relation between [H+] and time was found for the composited filtered, natural, wet-deposition solution when all reported values are included in the analysis. However, if two outlying pH values most likely representing measurement error are excluded from the analysis, the change in [H+] over time was not statistically significant. In the experiment by the CAL, randomly selected samples were reanalyzed between July 1984 and February 1991. The original analysis and reanalysis pairs revealed that [H+] differences, although very small, were statistically different from zero, whereas specific-conductance differences were not. Nevertheless, the results of the CAL reanalysis project indicate there appears to be no consistent, chemically significant degradation in sample integrity with regard to [H+] and specific conductance while samples are stored at room temperature at the CAL. Based on the results of the CAL and USGS studies, short-term (45-60 day) stability of [H+] and specific conductance in natural filtered wet-deposition samples that are shipped and stored unchilled at ambient temperatures was satisfactory.

  12. Space Weathering of Olivine: Samples, Experiments and Modeling

    NASA Technical Reports Server (NTRS)

    Keller, L. P.; Berger, E. L.; Christoffersen, R.

    2016-01-01

    Olivine is a major constituent of chondritic bodies, and its response to space weathering processes likely dominates the optical properties of asteroid regoliths (e.g., S- and many C-type asteroids). Analyses of olivine in returned samples and laboratory experiments provide details and insights regarding the mechanisms and rates of space weathering. Analyses of olivine grains from lunar soils and asteroid Itokawa reveal that they display solar wind damaged rims that are typically not amorphized despite long surface exposure ages, which are inferred from solar flare track densities (up to 10^7 y). The damaged-rim width rapidly approaches approximately 120 nm in approximately 10^6 y and then reaches steady state with longer exposure times. The damaged rims are nanocrystalline with high dislocation densities, but crystalline order exists up to the outermost exposed surface. Sparse nanophase Fe metal inclusions occur in the damaged rims and are believed to be produced during irradiation through preferential sputtering of oxygen from the rims. The observed space weathering effects in lunar and Itokawa olivine grains are difficult to reconcile with laboratory irradiation studies and our numerical models, which indicate that olivine surfaces should readily blister and amorphize on relatively short time scales (less than 10^3 y). These results suggest that it is not the ion fluence alone but also the ion flux that controls the type and extent of irradiation damage that develops in olivine. This flux dependence argues for caution in extrapolating between high-flux laboratory experiments and the natural case. Additional measurements, experiments, and modeling are required to resolve the discrepancies among the observations and calculations involving solar wind processing of olivine.

  13. Constitutive modelling of aluminium alloy sheet at warm forming temperatures

    NASA Astrophysics Data System (ADS)

    Kurukuri, S.; Worswick, M. J.; Winkler, S.

    2016-08-01

    The formability of aluminium alloy sheet can be greatly improved by warm forming. However predicting constitutive behaviour under warm forming conditions is a challenge for aluminium alloys due to strong, coupled temperature- and rate-sensitivity. In this work, uniaxial tensile characterization of 0.5 mm thick fully annealed aluminium alloy brazing sheet, widely used in the fabrication of automotive heat exchanger components, is performed at various temperatures (25 to 250 °C) and strain rates (0.002 and 0.02 s⁻¹). In order to capture the observed rate- and temperature-dependent work hardening behaviour, a phenomenological extended-Nadai model and the physically based (i) Bergstrom and (ii) Nes models are considered and compared. It is demonstrated that the Nes model is able to accurately describe the flow stress of AA3003 sheet at different temperatures, strain rates and instantaneous strain rate jumps.

  14. Theoretical modeling of critical temperature increase in metamaterial superconductors

    NASA Astrophysics Data System (ADS)

    Smolyaninov, Igor I.; Smolyaninova, Vera N.

    2016-05-01

    Recent experiments have demonstrated that the metamaterial approach is capable of a drastic increase of the critical temperature Tc of epsilon-near-zero (ENZ) metamaterial superconductors. For example, tripling of the critical temperature has been observed in Al-Al2O3 ENZ core-shell metamaterials. Here, we perform theoretical modeling of the Tc increase in metamaterial superconductors based on the Maxwell-Garnett approximation of their dielectric response function. Good agreement is demonstrated between theoretical modeling and experimental results in both aluminum- and tin-based metamaterials. Taking advantage of the demonstrated success of this model, the critical temperature of hypothetical niobium-, MgB2-, and H2S-based metamaterial superconductors is evaluated. The MgB2-based metamaterial superconductors are projected to reach the liquid nitrogen temperature range. In the case of an H2S-based metamaterial, Tc appears to reach ~250 K.
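    The Maxwell-Garnett approximation the modeling relies on is a standard mixing rule for the effective permittivity of a composite. The sketch below implements the general formula for spherical inclusions in a host (the values are illustrative, not the paper's materials or fit):

```python
def maxwell_garnett(eps_host, eps_incl, f):
    """Effective permittivity of spherical inclusions (volume fraction f)
    in a host medium, via the Maxwell-Garnett mixing rule:
    (eps_eff - eps_h)/(eps_eff + 2 eps_h) = f (eps_i - eps_h)/(eps_i + 2 eps_h)."""
    num = (eps_incl + 2 * eps_host) + 2 * f * (eps_incl - eps_host)
    den = (eps_incl + 2 * eps_host) - f * (eps_incl - eps_host)
    return eps_host * num / den

# illustrative: lossy metallic inclusions (negative real permittivity)
# embedded in a dielectric host
eps_eff = maxwell_garnett(eps_host=2.25, eps_incl=-10 + 1j, f=0.3)
```

    The formula interpolates correctly between the pure-host (f = 0) and pure-inclusion (f = 1) limits; in an ENZ design, the fraction f is tuned so that the real part of the effective permittivity approaches zero.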

  15. Modeling the formation of some polycyclic aromatic hydrocarbons during the roasting of Arabica coffee samples.

    PubMed

    Houessou, Justin Koffi; Goujot, Daniel; Heyd, Bertrand; Camel, Valerie

    2008-05-28

    Roasting is a critical process in coffee production, as it enables the development of flavor and aroma. At the same time, roasting may lead to the formation of nondesirable compounds, such as polycyclic aromatic hydrocarbons (PAHs). In this study, Arabica green coffee beans from Cuba were roasted under controlled conditions to monitor PAH formation during the roasting process. Roasting was performed in a pilot-spouted bed roaster, with the inlet air temperature varying from 180 to 260 degrees C, for roasting conditions ranging from 5 to 20 min. Several PAHs were determined in both roasted coffee samples and green coffee samples. Different models were tested, with varying numbers of assumptions about the underlying chemistry, with a view to predicting the global behavior of the system. Two kinds of models were used and compared: kinetic models (based on Arrhenius law) and statistical models (neural networks). The number of adjustable parameters differed among the tested models, varying from three to nine for the kinetic models and from five to thirteen for the neural networks. Interesting results are presented, with satisfactory correlations between experimental and predicted concentrations for some PAHs, such as pyrene, benz[a]anthracene, chrysene, and anthracene.

  16. Temperature and wavevector dependence of overdoped Bi2Sr2CaCu2O8+x single crystal samples

    NASA Astrophysics Data System (ADS)

    Rast, S.; Klohs, A.; Frazer, B. H.; Hirai, Y.; Schmauder, T.; Gatt, R.; Abrecht, M.; Pavuna, D.; Margaritondo, G.; Onellion, M.

    2000-03-01

    We report on measurements of the temperature and wavevector dependence of angle-resolved photoemission spectra for overdoped Bi2Sr2CaCu2O8+x single crystal samples. Spectra taken from close to (0, π) to close to (π, π) were analyzed. The changes of spectral lineshape with temperature and wavevector indicate qualitatively different behavior in different parts of the Brillouin zone and will be analyzed and presented.

  17. Event-based stormwater management pond runoff temperature model

    NASA Astrophysics Data System (ADS)

    Sabouri, F.; Gharabaghi, B.; Sattar, A. M. A.; Thompson, A. M.

    2016-09-01

    Stormwater management wet ponds are generally very shallow and hence can significantly increase (about 5.4 °C on average in this study) runoff temperatures in summer months, which adversely affects receiving urban stream ecosystems. This study uses gene expression programming (GEP) and artificial neural networks (ANN) modeling techniques to advance our knowledge of the key factors governing thermal enrichment effects of stormwater ponds. The models developed in this study build upon and complement the ANN model developed by Sabouri et al. (2013) that predicts the catchment event mean runoff temperature entering the pond as a function of event climatic and catchment characteristic parameters. The key factors that control pond outlet runoff temperature include: (1) Upland Catchment Parameters (catchment drainage area and event mean runoff temperature inflow to the pond); (2) Climatic Parameters (rainfall depth, event mean air temperature, and pond initial water temperature); and (3) Pond Design Parameters (pond length-to-width ratio, pond surface area, pond average depth, and pond outlet depth). We used monitoring data for three summers from 2009 to 2011 in four stormwater management ponds, located in the cities of Guelph and Kitchener, Ontario, Canada to develop the models. The prediction uncertainties of the developed ANN and GEP models for the case study sites are around 0.4% and 1.7% of the median value. Sensitivity analysis of the trained models indicates that the thermal enrichment of the pond outlet runoff is inversely proportional to pond length-to-width ratio, pond outlet depth, and directly proportional to event runoff volume, event mean pond inflow runoff temperature, and pond initial water temperature.

  18. Modeling sugarcane growth in response to age, insolation, and temperature

    SciTech Connect

    How, K.T.S.

    1986-01-01

    Modeling sugarcane growth in response to age of cane, insolation and air temperature using first-order multiple regression analysis and a nonlinear approach is investigated. Data are restricted to one variety from irrigated fields to eliminate the impact of varietal response and rainfall. Ten first-order models are investigated. The predictand is cane yield from 600 field tests. The predictors are cumulative values of insolation, maximum temperature, and minimum temperature for 3, 6, 12, and 18 months, or for each crop period derived from weather observations near the test plots. The low R-square values indicate that the selected predictor variables could not account for a substantial proportion of the variations of cane yield and the models have limited predictive value. The nonlinear model is based on known functional relationships between growth and age, growth and insolation, and growth and maximum temperature. A mathematical expression that integrates the effect of age, insolation and maximum temperature is developed. The constant terms and coefficients of the equation are determined from the requirement that the model must produce results that are reasonable when compared with observed monthly elongation data. The nonlinear model is validated and tested using another set of data.

  19. Estimating transient climate response using consistent temperature reconstruction methods in models and observations

    NASA Astrophysics Data System (ADS)

    Richardson, M.; Cowtan, K.; Hawkins, E.; Stolpe, M.

    2015-12-01

    Observational temperature records such as HadCRUT4 typically have incomplete geographical coverage and blend air temperature over land with sea surface temperatures over ocean, in contrast to model output which is commonly reported as global air temperature. This complicates estimation of properties such as the transient climate response (TCR). Observation-based estimates of TCR have been made using energy-budget constraints applied to time series of historical radiative forcing and surface temperature changes, while model TCR is formally derived from simulations where CO2 increases at 1% per year. We perform a like-with-like comparison using three published energy-budget methods to derive modelled TCR from historical CMIP5 temperature series sampled in a manner consistent with HadCRUT4. Observation-based TCR estimates agree to within 0.12 K of the multi-model mean in each case and for 2 of the 3 energy-budget methods the observation-based TCR is higher than the multi-model mean. For one energy-budget method, using the HadCRUT4 blending method leads to a TCR underestimate of 0.3±0.1 K, relative to that estimated using global near-surface air temperatures.
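    The energy-budget estimator at the heart of this comparison is compact: scale the observed warming by the ratio of the CO2-doubling forcing to the realized forcing change. The sketch below uses illustrative values, not the paper's series.

```python
def tcr_energy_budget(delta_T, delta_F, F_2x=3.7):
    """Energy-budget transient climate response estimate:
    TCR = F_2x * delta_T / delta_F, with delta_T the surface temperature
    change (K), delta_F the radiative forcing change (W/m^2), and F_2x
    the forcing from doubling CO2 (~3.7 W/m^2)."""
    return F_2x * delta_T / delta_F

# illustrative: 0.9 K of warming under a 2.0 W/m^2 forcing change
tcr = tcr_energy_budget(delta_T=0.9, delta_F=2.0)
```

    The paper's central point is that delta_T must be constructed identically for models and observations, with the same land/ocean blending and coverage masking, otherwise the estimator is biased by tenths of a kelvin.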

  20. A model of the diurnal variation in lake surface temperature

    NASA Astrophysics Data System (ADS)

    Hodges, Jonathan L.

    Satellite measurements of water surface temperature can benefit several environmental applications, such as predictions of lake evaporation, meteorological forecasts, and predictions of lake overturning events, among others. However, limitations on the temporal resolution of satellite measurements restrict these improvements. A model of the diurnal variation in lake surface temperature could potentially increase the effective temporal resolution of satellite measurements of surface temperature, thereby enhancing the utility of these measurements in the above applications. Herein, a one-dimensional transient thermal model of a lake is combined with surface temperature measurements from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the Aqua and Terra satellites, ambient atmospheric conditions from local weather stations, and bulk temperature measurements to calculate the diurnal surface temperature variation for the five major lakes in the Savannah River Basin in South Carolina: Lakes Jocassee, Keowee, Hartwell, Russell, and Thurmond. The calculated solutions are used to obtain a functional form for the diurnal surface temperature variation of these lakes. Differences in the diurnal variation in surface temperature between these lakes are identified, and potential explanations for the differences are presented.
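    A one-dimensional transient thermal model of this kind can be sketched as explicit finite-difference diffusion on a vertical water column with a diurnal surface forcing. The parameters below (eddy diffusivity, forcing amplitude, grid) are illustrative, not the study's calibrated model.

```python
import math

def diurnal_profile(hours=48.0, dz=0.1, nz=30, alpha=0.036,
                    T_bulk=20.0, amp=3.0):
    """Explicit (FTCS) integration of dT/dt = alpha * d2T/dz2 on a vertical
    water column, with the surface cell forced sinusoidally around T_bulk
    on a 24 h period. alpha is an illustrative eddy diffusivity in m^2/h;
    dz is the layer thickness in m. Returns the final temperature profile."""
    dt = 0.4 * dz * dz / alpha  # time step below the 0.5 stability limit
    r = alpha * dt / (dz * dz)
    T = [T_bulk] * nz
    t = 0.0
    while t < hours:
        T[0] = T_bulk + amp * math.sin(2 * math.pi * t / 24.0)  # surface forcing
        T_new = T[:]
        for i in range(1, nz - 1):  # bottom cell held at the bulk temperature
            T_new[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
        T = T_new
        t += dt
    return T

profile = diurnal_profile()
```

    The diurnal signal decays and lags with depth, which is the behavior the functional form extracted in the study has to capture.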

  1. TEMPERATURE-BASED REACTIVE FLOW MODEL FOR ANFO.

    SciTech Connect

    MULFORD, ROBERTA; SWIFT, DAMIAN C

    2002-06-12

    Reaction rates depend on temperature as well as on the mechanical state. In shock wave initiation, experimental data almost always comprise mechanical measurements such as shock speed, material speed, compression, and pressure, and are accordingly modeled in terms of these parameters. Omission of temperature is one reason why mechanically based reaction rates do not extrapolate well outside the range of states used to normalize them. The model presented addresses chemical processes directly, enabling chemical kinetic data reported in terms of temperature (and at STP, generally) to be used in shock reaction models. We have recently extended a temperature-based model for use with ANFO-type formulations. Reactive material is treated as a heterogeneous mixture of components, each of which has its own model for response to dynamic loading (equation of state, strength model, reactions). A finite-rate equilibration model is used to determine the overall response of the mixture to dynamic loading. In this model of ANFO, the ammonium nitrate and the fuel oil are treated as separate components of the unreacted mixture.

  2. Critical Behavior of the Spin-1/2 Baxter-Wu Model: Entropic Sampling Simulations

    NASA Astrophysics Data System (ADS)

    Jorge, L. N.; Ferreira, L. S.; Leão, S. A.; Caparica, A. A.

    2016-10-01

    In this work, we use a refined entropic sampling technique based on the Wang-Landau method to study the spin-1/2 Baxter-Wu model. We adopt the total magnetization as the order parameter and, as a result, do not divide the system into three sub-lattices. The static critical exponents were determined as α = 0.6697(54), β = 0.0813(67), γ = 1.1772(33), and ν = 0.6574(61). The estimate for the critical temperature was Tc = 2.26924(2). We compare the present results with those obtained from other well-established approaches and find very good agreement with the exact values, in addition to the high precision reached for the critical temperature.
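    The entropic-sampling idea behind the study can be demonstrated on a toy system with a known answer. The sketch below is not the Baxter-Wu implementation: the system is N non-interacting up/down spins with "energy" equal to the number of up spins, chosen so the density of states is exactly the binomial coefficient C(N, E), which lets the Wang-Landau estimate be checked.

```python
import math
import random

def wang_landau(n_spins=10, flat=0.8, ln_f_final=1e-5, seed=2):
    """Wang-Landau estimate of ln g(E) for N non-interacting up/down spins,
    where E = number of up spins, so exactly g(E) = C(N, E)."""
    rng = random.Random(seed)
    spins = [rng.choice((0, 1)) for _ in range(n_spins)]
    E = sum(spins)
    ln_g = [0.0] * (n_spins + 1)
    hist = [0] * (n_spins + 1)
    ln_f = 1.0
    while ln_f > ln_f_final:
        for _ in range(1000):
            i = rng.randrange(n_spins)
            E_new = E + (1 - 2 * spins[i])  # flipping spin i changes E by +/-1
            # accept with probability min(1, g(E) / g(E_new))
            if math.log(rng.random()) < ln_g[E] - ln_g[E_new]:
                spins[i] = 1 - spins[i]
                E = E_new
            ln_g[E] += ln_f
            hist[E] += 1
        if min(hist) > flat * sum(hist) / len(hist):  # flat-histogram check
            hist = [0] * (n_spins + 1)
            ln_f /= 2.0  # refine the modification factor
    return ln_g

ln_g = wang_landau()
# normalize (via log-sum-exp to avoid overflow) so g sums to the 2^10 states
m = max(ln_g)
shift = m + math.log(sum(math.exp(x - m) for x in ln_g)) - 10 * math.log(2.0)
g = [math.exp(x - shift) for x in ln_g]
```

    Once g(E) is known, thermodynamic averages at any temperature follow by reweighting, which is what makes entropic sampling attractive for locating critical points.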

  4. Temperature dependence of heterogeneous nucleation: Extension of the Fletcher model

    NASA Astrophysics Data System (ADS)

    McGraw, Robert; Winkler, Paul; Wagner, Paul

    2015-04-01

    Recently there have been several cases reported where the critical saturation ratio for onset of heterogeneous nucleation increases with nucleation temperature (positive slope dependence). This behavior contrasts with the behavior observed in homogeneous nucleation, where a decreasing critical saturation ratio with increasing nucleation temperature (negative slope dependence) seems universal. For this reason the positive slope dependence is referred to as anomalous. Negative slope dependence is found in heterogeneous nucleation as well, but because so few temperature-dependent measurements have been reported, it is not presently clear which slope condition (positive or negative) will become more frequent. Especially interesting is the case of water vapor condensation on silver nanoparticles [Kupc et al., AS&T 47: i-iv, 2013] where the critical saturation ratio for heterogeneous nucleation onset passes through a maximum, at about 278K, with higher (lower) temperatures showing the usual (anomalous) temperature dependence. In the present study we develop an extension of Fletcher's classical, capillarity-based, model of heterogeneous nucleation that explicitly resolves the roles of surface energy and surface entropy in determining temperature dependence. Application of the second nucleation theorem, which relates temperature dependence of nucleation rate to cluster energy, yields both necessary and sufficient conditions for anomalous temperature behavior in the extended Fletcher model. In particular it is found that an increasing contact angle with temperature is a necessary, but not sufficient, condition for anomalous temperature dependence to occur. Methods for inferring microscopic contact angle and its temperature dependence from heterogeneous nucleation probability measurements are discussed in light of the new theory.
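    The classical Fletcher model that the extension builds on reduces the homogeneous nucleation barrier by a purely geometric factor of the contact angle. A minimal sketch of that factor for a flat substrate follows (the paper's surface-energy and surface-entropy extension is not reproduced here):

```python
import math

def fletcher_f(m):
    """Classical Fletcher geometric factor for nucleation on a flat substrate:
    the fraction of the homogeneous barrier that remains at contact angle
    theta, with m = cos(theta). f(1) = 0 (perfect wetting, no barrier);
    f(-1) = 1 (complete non-wetting, no help from the substrate)."""
    return (2 + m) * (1 - m) ** 2 / 4

# a contact angle that grows with temperature raises the remaining barrier:
barrier_30 = fletcher_f(math.cos(math.radians(30)))
barrier_60 = fletcher_f(math.cos(math.radians(60)))
```

    This makes the paper's necessary condition concrete: if the contact angle increases with temperature, the barrier fraction f grows, which can push the onset saturation ratio upward with temperature, the "anomalous" slope.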

  5. An Analytic Function of Lunar Surface Temperature for Exospheric Modeling

    NASA Technical Reports Server (NTRS)

    Hurley, Dana M.; Sarantos, Menelaos; Grava, Cesare; Williams, Jean-Pierre; Retherford, Kurt D.; Siegler, Matthew; Greenhagen, Benjamin; Paige, David

    2014-01-01

    We present an analytic expression to represent the lunar surface temperature as a function of Sun-state latitude and local time. The approximation represents neither topographical features nor compositional effects and therefore does not change as a function of selenographic latitude and longitude. The function reproduces the surface temperature measured by Diviner to within ±10 K at 72% of grid points for dayside solar zenith angles of less than 80°, and at 98% of grid points for nightside solar zenith angles greater than 100°. The analytic function is least accurate at the terminator, where there is a strong gradient in the temperature, and in the polar regions. Topographic features have a larger effect on the actual temperature near the terminator than at other solar zenith angles. For exospheric modeling, the effects of topography on the thermal model can be approximated by using an effective longitude for determining the temperature. This effective longitude is randomly redistributed with 1σ of 4.5°. The resulting "roughened" analytical model well represents the statistical dispersion in the Diviner data and is expected to be generally useful for future models of lunar surface temperature, especially those implemented within exospheric simulations that address questions of volatile transport.
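    The paper's fitted function is not given in the abstract, but the general shape of such expressions can be sketched with the standard zeroth-order radiative-equilibrium form: dayside temperature falling as the quarter-root of the cosine of the solar zenith angle, with a roughly constant nightside floor. All values below are typical textbook numbers, not the paper's fit.

```python
import math

def lunar_surface_temp(sza_deg, T_subsolar=390.0, T_night=100.0):
    """Zeroth-order lunar surface temperature vs solar zenith angle (SZA):
    radiative equilibrium T = T_ss * cos(SZA)^(1/4) on the dayside and a
    constant floor on the nightside. Illustrative only, not the paper's
    analytic function."""
    if sza_deg >= 90.0:
        return T_night
    return max(T_subsolar * math.cos(math.radians(sza_deg)) ** 0.25, T_night)

t_subsolar = lunar_surface_temp(0.0)   # subsolar point
t_60 = lunar_surface_temp(60.0)        # mid-morning / mid-afternoon
t_night = lunar_surface_temp(120.0)    # nightside floor
```

    The steep gradient of the cos^(1/4) form near SZA = 90° mirrors the paper's observation that any smooth analytic fit is least accurate at the terminator.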

  6. Forecasting Groundwater Temperature with Linear Regression Models Using Historical Data.

    PubMed

    Figura, Simon; Livingstone, David M; Kipfer, Rolf

    2015-01-01

    Although temperature is an important determinant of many biogeochemical processes in groundwater, very few studies have attempted to forecast the response of groundwater temperature to future climate warming. Using a composite linear regression model based on the lagged relationship between historical groundwater and regional air temperature data, empirical forecasts were made of groundwater temperature in several aquifers in Switzerland up to the end of the current century. The model was fed with regional air temperature projections calculated for greenhouse-gas emissions scenarios A2, A1B, and RCP3PD. Model evaluation revealed that the approach taken is adequate only when the data used to calibrate the models are sufficiently long and contain sufficient variability. These conditions were satisfied for three aquifers, all fed by riverbank infiltration. The forecasts suggest that with respect to the reference period 1980 to 2009, groundwater temperature in these aquifers will most likely increase by 1.1 to 3.8 K by the end of the current century, depending on the greenhouse-gas emissions scenario employed. PMID:25412761
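
    The lagged-regression idea can be sketched with ordinary least squares. The paper's composite model and its calibration criteria are more elaborate, so the helper names and the single fixed lag here are illustrative assumptions.

```python
def fit_lagged_linear(air, ground, lag):
    """Ordinary least squares of groundwater temperature against air
    temperature lagged by `lag` time steps (a sketch of the lagged
    relationship, not the paper's composite model)."""
    x = air[:len(air) - lag] if lag else air[:]
    y = ground[lag:]
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return a, b

def forecast(a, b, future_air, lag):
    """Project groundwater temperature from projected air temperature."""
    return [a + b * t for t in future_air]
```

    Feeding the fitted coefficients with projected regional air temperatures (e.g., from an emissions scenario) then yields the empirical groundwater forecast.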

  8. Phasic temperature control appraised with the Ceres-Wheat model.

    PubMed

    Volk, T; Bugbee, B; Tubiello, F

    1997-01-01

    Phasic control refers to the specification of a series of different environmental conditions during a crop's life cycle, with the goal of optimizing some aspect of productivity. Because of the enormous number of possible scenarios, phasic control is an ideal situation for modeling to provide guidance prior to experiments. Here we use the Ceres-Wheat model, modified for hydroponic growth chambers, to examine temperature effects. We first establish a baseline by running the model at constant temperatures from 10 degrees C to 30 degrees C. Grain yield per day peaks at 15 degrees C at a value that is 25% higher than the yield at the commonly used 23 degrees C. We then show results for phasic control limited to a single shift in temperature and, finally, we examine scenarios that allow each of the five phases of the life cycle to have a different temperature. Results indicate that grain yield might be increased by 15-20% over the best yield at constant temperature, primarily from a boosted harvest index, which has the additional advantage of less waste biomass. Such gains, if achievable, would help optimize food production for life support systems. Experimental work should first verify the relationship between yield and temperature, and then move to selected scenarios of phasic control, based on model predictions. PMID:11540452

  9. Measuring the mechanical efficiency of a working cardiac muscle sample at body temperature using a flow-through calorimeter.

    PubMed

    Taberner, Andrew J; Johnston, Callum M; Pham, Toan; June-Chiew Han; Ruddy, Bryan P; Loiselle, Denis S; Nielsen, Poul M F

    2015-08-01

    We have developed a new `work-loop calorimeter' that is capable of measuring, simultaneously, the work done and heat production of isolated cardiac muscle samples at body temperature. Through the innovative use of thermoelectric modules as temperature sensors, the development of a low-noise fluid-flow system, and the implementation of precise temperature control, the heat resolution of this device is 10 nW, an improvement by a factor of ten over previous designs. These advances have allowed us to conduct the first flow-through measurements of work output and heat dissipation from cardiac tissue at body temperature. The mechanical efficiency is found to vary with peak stress, reaching a peak value of approximately 15%, a figure similar to that observed in cardiac muscle at lower temperatures.

  10. A non-intrusive method for temperature measurements in flames produced by milligram-sized solid samples

    NASA Astrophysics Data System (ADS)

    Frances, Colleen Elizabeth

    Fires are responsible for the loss of thousands of lives and billions of dollars in property damage each year in the United States. Flame retardants can assist in the prevention of fires through mechanisms which either prevent or greatly inhibit flame spread and development. In this study, samples of both brominated and non-brominated polystyrene were tested in the Milligram-scale Flaming Calorimeter, and images captured with two DSLR cameras were analyzed to determine flame temperatures using a non-intrusive method. Because temperature is an important diagnostic in the study of fire and combustion, these flame temperature measurements may lead to a better understanding of the gas-phase mechanisms of flame retardants. Measurements taken at 70% of the total flame height yielded average maximum temperatures of about 1656 K for polystyrene and about 1614 K for brominated polystyrene, suggesting that the polymer flame retardant may reduce flame temperatures.

  12. LOW TEMPERATURE X-RAY DIFFRACTION STUDIES OF NATURAL GAS HYDRATE SAMPLES FROM THE GULF OF MEXICO

    SciTech Connect

    Rawn, Claudia J; Sassen, Roger; Ulrich, Shannon M; Phelps, Tommy Joe; Chakoumakos, Bryan C; Payzant, E Andrew

    2008-01-01

    Clathrate hydrates of methane and other small alkanes occur widely in marine sediments of the continental margins and in permafrost sediments of the arctic. Quantitative study of natural clathrate hydrates is hampered by the difficulty of obtaining pristine samples, particularly from submarine environments: bringing samples of clathrate hydrate up from the seafloor without compromising their integrity is not trivial. Most physical property measurements are therefore based on studies of laboratory-synthesized samples. Here we report X-ray powder diffraction measurements of a natural gas hydrate sample from the Green Canyon, Gulf of Mexico. The first data were collected in 2002 and revealed ice and structure II gas hydrate. In the intervening time the sample has been stored in liquid nitrogen. More recent X-ray powder diffraction data have been collected as functions of temperature and time. These new data indicate that the larger sample is heterogeneous in ice content and show that the amount of sII hydrate decreases with increasing temperature and time, as expected. However, the dissociation rate is higher at lower temperatures and earlier in the experiment.

  13. Corn blight review: Sampling model and ground data measurements program

    NASA Technical Reports Server (NTRS)

    Allen, R. D.

    1972-01-01

    The sampling plan involved the selection of the study area, determination of the flightline and segment sample design within the study area, and determination of a field sample design. Initial interview survey data consisting of crop species acreage and land use were collected. On all corn fields, additional information such as seed type, row direction, population, planting date, etc. was also collected. From this information, sample corn fields were selected to be observed through the growing season on a biweekly basis by county extension personnel.

  14. Monte Carlo path sampling approach to modeling aeolian sediment transport

    NASA Astrophysics Data System (ADS)

    Hardin, E. J.; Mitasova, H.; Mitas, L.

    2011-12-01

    Coastal communities and vital infrastructure are subject to coastal hazards including storm surge and hurricanes. Coastal dunes offer protection by acting as natural barriers from waves and storm surge. During storms, these landforms and their protective function can erode; however, they can also erode even in the absence of storms due to daily wind and waves. Costly and often controversial beach nourishment and coastal construction projects are common erosion mitigation practices. With a more complete understanding of coastal morphology, the efficacy and consequences of anthropogenic activities could be better predicted. Currently, the research on coastal landscape evolution is focused on waves and storm surge, while only limited effort is devoted to understanding aeolian forces. Aeolian transport occurs when the wind supplies a shear stress that exceeds a critical value, consequently ejecting sand grains into the air. If the grains are too heavy to be suspended, they fall back to the grain bed where the collision ejects more grains. This is called saltation and is the salient process by which sand mass is transported. The shear stress required to dislodge grains is related to turbulent air speed. Subsequently, as sand mass is injected into the air, the wind loses speed along with its ability to eject more grains. In this way, the flux of saltating grains is itself influenced by the flux of saltating grains and aeolian transport becomes nonlinear. Aeolian sediment transport is difficult to study experimentally for reasons arising from the orders of magnitude difference between grain size and dune size. It is difficult to study theoretically because aeolian transport is highly nonlinear especially over complex landscapes. 
Current computational approaches have limitations as well: single-grain models are mathematically simple but computationally intractable even with modern computing power, whereas cellular automata-based approaches are computationally efficient
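
    The threshold-shear and nonlinear-flux behavior described above can be sketched with a Bagnold-type fluid threshold and a simple Owen-style flux law. The constants below (the empirical coefficient A ≈ 0.1 and the tuning factor C) are conventional illustrative values, not parameters from this work.

```python
import math

def threshold_friction_velocity(d, rho_p=2650.0, rho_a=1.2, A=0.1, g=9.81):
    """Bagnold-type fluid threshold u*_t = A * sqrt((rho_p - rho_a)/rho_a * g * d)
    for a grain of diameter d (m); A ~ 0.1 is a typical value for loose sand."""
    return A * math.sqrt((rho_p - rho_a) / rho_a * g * d)

def saltation_flux(u_star, u_star_t, rho_a=1.2, g=9.81, C=1.0):
    """Simple Owen-style mass flux law: zero below the threshold shear
    velocity, rising nonlinearly above it (C is an assumed tuning constant)."""
    if u_star <= u_star_t:
        return 0.0
    return C * (rho_a / g) * u_star * (u_star ** 2 - u_star_t ** 2)
```

    The zero flux below threshold and the cubic-like growth above it capture, in miniature, the nonlinearity the abstract identifies as the reason saltation is hard to treat analytically.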

  15. River water temperature and fish growth forecasting models

    NASA Astrophysics Data System (ADS)

    Danner, E.; Pike, A.; Lindley, S.; Mendelssohn, R.; Dewitt, L.; Melton, F. S.; Nemani, R. R.; Hashimoto, H.

    2010-12-01

    Water is a valuable, limited, and highly regulated resource throughout the United States. When making decisions about water allocations, state and federal water project managers must consider the short-term and long-term needs of agriculture, urban users, hydroelectric production, flood control, and the ecosystems downstream. In the Central Valley of California, river water temperature is a critical indicator of habitat quality for endangered salmonid species and affects re-licensing of major water projects and dam operations worth billions of dollars. There is consequently strong interest in modeling water temperature dynamics and the subsequent impacts on fish growth in such regulated rivers. However, the accuracy of current stream temperature models is limited by the lack of spatially detailed meteorological forecasts. To address these issues, we developed a high-resolution deterministic 1-dimensional stream temperature model (sub-hourly time step, sub-kilometer spatial resolution) in a state-space framework, and applied this model to the Upper Sacramento River. We then adapted salmon bioenergetics models to incorporate the temperature data at sub-hourly time steps to provide more realistic estimates of salmon growth. The temperature model uses physically-based heat budgets to calculate the rate of heat transfer to/from the river. We use variables provided by the TOPS-WRF (Terrestrial Observation and Prediction System - Weather Research and Forecasting) model, a high-resolution assimilation of satellite-derived meteorological observations and numerical weather simulations, as inputs. The TOPS-WRF framework allows us to improve the spatial and temporal resolution of stream temperature predictions. The salmon growth models are adapted from the Wisconsin bioenergetics model. We have made the output from both models available on an interactive website so that water and fisheries managers can determine the past, current and three day forecasted water temperatures at
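
    For a depth-averaged water column, the heat-budget core of such a 1-D stream temperature model reduces to dT/dt = H_net / (rho * c_p * d). A minimal explicit-update sketch (the actual model is a sub-hourly, sub-kilometer state-space formulation with TOPS-WRF forcing, which this does not reproduce):

```python
def step_stream_temp(T, heat_flux_net, depth, dt, rho=1000.0, cp=4184.0):
    """One explicit time step of a depth-averaged heat budget.

    T             current water temperature (deg C)
    heat_flux_net net surface heat flux (W/m^2, positive = warming)
    depth         water column depth (m)
    dt            time step (s)
    rho, cp       density and specific heat of water (SI units)
    """
    return T + heat_flux_net * dt / (rho * cp * depth)
```

    Marching this update along the channel with solar, longwave, latent, and sensible flux terms in `heat_flux_net` is the essence of a physically based heat-budget temperature model.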

  16. Modeling the melting temperature of nanoscaled bimetallic alloys.

    PubMed

    Li, Ming; Zhu, Tian-Shu

    2016-06-22

    The effect of size, composition and dimension on the melting temperature of nanoscaled bimetallic alloys was investigated by considering the interatomic interaction. The established thermodynamic model, which contains no arbitrarily adjustable parameters, can be used to predict the melting temperature of nanoscaled bimetallic alloys. It is found that the melting temperature and interatomic interaction of nanoscaled bimetallic alloys decrease with decreasing size and with increasing fraction of the lower-surface-energy metal. Moreover, for nanoscaled bimetallic alloys with the same size and composition, the dependence of the melting temperature on dimension can be sequenced as follows: nanoparticles > nanowires > thin films. The accuracy of the developed model is verified by recent experimental and computer simulation results.

  17. Modeling temperature compensation in chemical and biological oscillators.

    PubMed

    Ruoff, P; Rensing, L; Kommedal, R; Mohsenzadeh, S

    1997-09-01

    All physicochemical and biological oscillators maintain a balance between destabilizing reactions (for example, intrinsic autocatalytic or amplifying reactions) and stabilizing processes. These two groups of processes tend to influence the period in opposite directions and may lead to temperature compensation whenever their overall influence balances. This principle of "antagonistic balance" has been tested for several chemical and biological oscillators. The Goodwin negative feedback oscillator appears of particular interest for modeling the circadian clocks in Neurospora and Drosophila and their temperature compensation. Remarkably, the Goodwin oscillator not only gives qualitatively correct phase response curves for temperature steps and temperature pulses, but also simulates the temperature behavior of Neurospora frq and Drosophila per mutants almost quantitatively. The Goodwin oscillator predicts that circadian periods are strongly dependent on the turnover of the clock mRNA or clock protein: a more rapid turnover of clock mRNA or clock protein results in shorter period lengths, and a slower turnover in longer ones.

  18. Sample Size Considerations in Prevention Research Applications of Multilevel Modeling and Structural Equation Modeling.

    PubMed

    Hoyle, Rick H; Gottfredson, Nisha C

    2015-10-01

    When the goal of prevention research is to capture in statistical models some measure of the dynamic complexity in structures and processes implicated in problem behavior and its prevention, approaches such as multilevel modeling (MLM) and structural equation modeling (SEM) are indicated. Yet the assumptions that must be satisfied if these approaches are to be used responsibly raise concerns regarding their use in prevention research involving smaller samples. In this article, we discuss in nontechnical terms the role of sample size in MLM and SEM and present findings from the latest simulation work on the performance of each approach at sample sizes typical of prevention research. For each statistical approach, we draw from extant simulation studies to establish lower bounds for sample size (e.g., MLM can be applied with as few as ten groups comprising ten members with normally distributed data, restricted maximum likelihood estimation, and a focus on fixed effects; sample sizes as small as N = 50 can produce reliable SEM results with normally distributed data and at least three reliable indicators per factor) and suggest strategies for making the best use of the modeling approach when N is near the lower bound.
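
    The basic reason small samples are risky can be illustrated with a toy Monte Carlo experiment on the sampling variability of a simple estimator; the cited simulation studies for MLM and SEM are, of course, far more elaborate than this sketch.

```python
import random
import statistics

def mean_estimate_se(n, reps=2000, seed=1):
    """Monte Carlo standard error of a sample mean at sample size n:
    draw `reps` samples of size n from N(0, 1) and report the standard
    deviation of the resulting means. Smaller n => noisier estimates."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(0.0, 1.0) for _ in range(n))
             for _ in range(reps)]
    return statistics.stdev(means)
```

    At n = 50 the standard error is roughly 1/sqrt(50) ≈ 0.14, about three times larger than at n = 500, which is the kind of inflation that makes complex-model estimates unreliable at small N.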

  19. Heat Transfer Modeling for Rigid High-Temperature Fibrous Insulation

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran; Cunnington, George R.; Knutson, Jeffrey R.

    2012-01-01

    Combined radiation and conduction heat transfer through a high-temperature, high-porosity, rigid multiple-fiber fibrous insulation was modeled using a thermal model previously used to model heat transfer in flexible single-fiber fibrous insulation. The rigid insulation studied was alumina enhanced thermal barrier (AETB) at densities between 130 and 260 kilograms per cubic meter. The model consists of using the diffusion approximation for radiation heat transfer, a semi-empirical solid conduction model, and a standard gas conduction model. The relevant parameters needed for the heat transfer model were estimated from steady-state thermal measurements in nitrogen gas at various temperatures and environmental pressures. The heat transfer modeling methodology was evaluated by comparison with standard thermal conductivity measurements, and steady-state thermal measurements in helium and carbon dioxide gases. The heat transfer model is applicable over the temperature range of 300 to 1360 K, pressure range of 0.133 to 101.3 × 10³ Pa, and over the insulation density range of 130 to 260 kilograms per cubic meter in various gaseous environments.
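
    The three-component structure of the model can be sketched as a superposition of radiative, gas, and solid conduction terms. The functional forms below follow common practice (Rosseland diffusion for radiation, a pressure-dependent interpolation for gas conduction, a linear fit for solid conduction), and every coefficient is an illustrative assumption rather than a fitted AETB value.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def k_radiation(T, beta=5000.0, n=1.0):
    """Diffusion (Rosseland) approximation: k_rad = 16 n^2 sigma T^3 / (3 beta),
    with beta an assumed extinction coefficient (1/m)."""
    return 16.0 * n ** 2 * SIGMA * T ** 3 / (3.0 * beta)

def k_gas(T, p, k_free=0.026, p_half=500.0):
    """Pressure-dependent gas conduction: approaches the free-gas value
    k_free at high pressure and vanishes as p -> 0 (rarefied regime).
    p_half (Pa) sets the transition; both values are assumptions."""
    return k_free * p / (p + p_half)

def k_solid(T, a=0.02, b=1e-5):
    """Semi-empirical solid conduction term, here a simple linear fit in T."""
    return a + b * T

def k_effective(T, p):
    """Total effective conductivity: superposition of the three mechanisms."""
    return k_radiation(T) + k_gas(T, p) + k_solid(T)
```

    Fitting the free coefficients to steady-state measurements in different gases, as the abstract describes, is what turns this skeleton into a predictive model.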

  20. Low-temperature dynamic nuclear polarization with helium-cooled samples and nitrogen-driven magic-angle spinning.

    PubMed

    Thurber, Kent; Tycko, Robert

    2016-03-01

    We describe novel instrumentation for low-temperature solid state nuclear magnetic resonance (NMR) with dynamic nuclear polarization (DNP) and magic-angle spinning (MAS), focusing on aspects of this instrumentation that have not been described in detail in previous publications. We characterize the performance of an extended interaction oscillator (EIO) microwave source, operating near 264 GHz with 1.5 W output power, which we use in conjunction with a quasi-optical microwave polarizing system and a MAS NMR probe that employs liquid helium for sample cooling and nitrogen gas for sample spinning. Enhancement factors for cross-polarized (13)C NMR signals in the 100-200 range are demonstrated with DNP at 25 K. The dependences of signal amplitudes on sample temperature, as well as microwave power, polarization, and frequency, are presented. We show that sample temperatures below 30 K can be achieved with helium consumption rates below 1.3 l/h. To illustrate potential applications of this instrumentation in structural studies of biochemical systems, we compare results from low-temperature DNP experiments on a calmodulin-binding peptide in its free and bound states.

  3. On the temperature model of CO₂ lasers

    SciTech Connect

    Nevdakh, Vladimir V; Ganjali, Monireh; Arshinov, K I

    2007-03-31

    A refined temperature model of CO₂ lasers is presented, which takes into account the fact that the vibrational modes of the CO₂ molecule share a common ground vibrational level. New formulas for the occupation numbers and the vibrational energy storage in individual modes are obtained, as well as expressions relating the vibrational temperatures of the CO₂ molecules to the excitation and relaxation rates of the lower vibrational levels of the modes upon excitation of the CO₂-N₂-He mixture in an electric discharge. The character of the dependences of the vibrational temperatures on the discharge current is discussed.

  4. REFINEMENT OF THE STREAM TEMPERATURE NETWORK MODEL WITH CORRECTIONS FOR SOLAR SHADINGS AND INFLOW TEMPERATURES

    NASA Astrophysics Data System (ADS)

    Miyamoto, Hitoshi; Maeba, Hiroshi; Nakayama, Kazuya; Michioku, Kohji

    A basin-wide stream network model was developed for stream temperature prediction in a river basin. The model used Horton’s geomorphologic laws for channel networks and river basins, with stream ordering systems, in order to connect channel segments from the sources to the river mouth. Within each segment, a theoretical solution derived from a thermal energy equation was used to predict the longitudinal variation of stream temperature. The model also took into account the reduction of solar radiation due to both riparian vegetation and topography, thermal advection from the sources, and lateral land use. Comparison of the model prediction with observations in the Ibo River Basin of Japan showed very good agreement for the thermal structure throughout the river basin in almost all seasons, except for the autumn month in which the thermal budget of the stream water body changed from positive to negative.

  5. Phase behaviors and membrane properties of model liposomes: temperature effect.

    PubMed

    Wu, Hsing-Lun; Sheng, Yu-Jane; Tsao, Heng-Kwong

    2014-09-28

    The phase behaviors and membrane properties of small unilamellar vesicles have been explored at different temperatures by dissipative particle dynamics simulations. The vesicles spontaneously formed by model lipids exhibit pre-transition from gel to ripple phase and main transition from ripple to liquid phase. The vesicle shape exhibits the faceted feature at low temperature, becomes more sphere-like with increasing temperature, but loses its sphericity at high temperature. As the temperature rises, the vesicle size grows but the membrane thickness declines. The main transition (Tm) can be identified by the inflection point. The membrane structural characteristics are analyzed. The inner and outer leaflets are asymmetric. The length of the lipid tail and area density of the lipid head in both leaflets decrease with increasing temperature. However, the mean lipid volume grows at low temperature but declines at high temperature. The membrane mechanical properties are also investigated. The water permeability grows exponentially with increasing T but the membrane tension peaks at Tm. Both the bending and stretching moduli have their minima near Tm. Those results are consistent with the experimental observations, indicating that the main signatures associated with phase transition are clearly observed in small unilamellar vesicles.

  6. Temperature response functions introduce high uncertainty in modelled carbon stocks in cold temperature regimes

    NASA Astrophysics Data System (ADS)

    Portner, H.; Wolf, A.; Bugmann, H.

    2009-04-01

    Many biogeochemical models have been applied to study the response of the carbon cycle to changes in climate, whereby the process of carbon uptake (photosynthesis) has usually gained more attention than the equally important process of carbon release by respiration. The decomposition of soil organic matter is driven by a combination of factors, a prominent one being soil temperature [Berg and Laskowski(2005)]. One uncertainty concerns the response function used to describe the sensitivity of soil organic matter decomposition to temperature. This relationship is often described by one of a set of similar exponential functions, but it has not been investigated how uncertainties in the choice of the response function influence the long-term predictions of biogeochemical models. We built upon the well-established LPJ-GUESS model [Smith et al.(2001)]. We tested five candidate functions and calibrated them against eight datasets from different Ameriflux and CarboEuropeIP sites [Hibbard et al.(2006)]: a simple Exponential function with a constant Q10, the Arrhenius function, the Gaussian function [Tuomi et al.(2008), O'Connell(1990)], the Van't Hoff function [Van't Hoff(1901)] and the Lloyd&Taylor function [Lloyd and Taylor(1994)]. We assessed the impact of uncertainty in the model formulation of the temperature response on estimates of present and future long-term carbon storage in ecosystems, and hence on the CO2 feedback potential to the atmosphere. We specifically investigated the relative importance of model formulation and the error introduced by using different data sets for the parameterization. Our results suggested that the Exponential and Arrhenius functions are inappropriate, as they overestimated the respiration rates at lower temperatures. The Gaussian, Van't Hoff and Lloyd&Taylor functions all fit the observed data better, whereby the Gaussian and Van't Hoff functions underestimated the response at higher temperatures. We suggest that the
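
    Three of the candidate response functions are easy to state explicitly. The parameter values below (a reference rate of 1 at 10 °C, an assumed activation energy, and the Lloyd & Taylor constants E0 = 308.56 K and T0 = 227.13 K from the 1994 paper) are illustrative defaults, not the calibrated values from this study.

```python
import math

def q10_response(T, r10=1.0, q10=2.0):
    """Constant-Q10 exponential: R(T) = r10 * q10**((T - 10) / 10), T in deg C."""
    return r10 * q10 ** ((T - 10.0) / 10.0)

def arrhenius(T, r10=1.0, Ea=55000.0, R=8.314):
    """Arrhenius response normalized to 10 deg C (283.15 K); Ea is an
    illustrative activation energy (J/mol)."""
    TK = T + 273.15
    return r10 * math.exp(-Ea / R * (1.0 / TK - 1.0 / 283.15))

def lloyd_taylor(T, r10=1.0, E0=308.56, T0=227.13):
    """Lloyd & Taylor (1994): R(T) = r10 * exp(E0 * (1/(283.15 - T0) - 1/(TK - T0))),
    which gives a temperature-dependent apparent Q10 (steeper at low T)."""
    TK = T + 273.15
    return r10 * math.exp(E0 * (1.0 / (283.15 - T0) - 1.0 / (TK - T0)))
```

    The differing curvature at low temperature, visible when comparing these functions near 0 °C, is exactly where the abstract finds the Exponential and Arrhenius forms overestimate respiration in cold regimes.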

  7. Automated biowaste sampling system, solids subsystem operating model, part 2

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.; Mangialardi, J. K.; Stauffer, R. E.

    1973-01-01

    The detail design and fabrication of the Solids Subsystem were implemented. The system's capacity for the collection, storage or sampling of feces and vomitus from six subjects was tested and verified.

  8. Experiments and modeling of variably permeable carbonate reservoir samples in contact with CO₂-acidified brines

    DOE PAGES

    Smith, Megan M.; Hao, Yue; Mason, Harris E.; Carroll, Susan A.

    2014-12-31

    Reactive experiments were performed to expose sample cores from the Arbuckle carbonate reservoir to CO₂-acidified brine under reservoir temperature and pressure conditions. The samples consisted of dolomite with varying quantities of calcite and silica/chert. The timescales of monitored pressure decline across each sample in response to CO₂ exposure, as well as the amount and nature of dissolution features, varied widely among these three experiments. For all sample cores, the experimentally measured initial permeability was at least one order of magnitude lower than the values estimated from downhole methods. Nondestructive X-ray computed tomography (XRCT) imaging revealed dissolution features including “wormholes,” removal of fracture-filling crystals, and widening of pre-existing pore spaces. In the injection zone sample, multiple fractures may have contributed to the high initial permeability of this core and restricted the distribution of CO₂-induced mineral dissolution. In contrast, the pre-existing porosity of the baffle zone sample was much lower and less connected, leading to a lower initial permeability and contributing to the development of a single dissolution channel. While calcite may make up only a small percentage of the overall sample composition, its location and the effects of its dissolution have an outsized effect on permeability responses to CO₂ exposure. The XRCT data presented here are informative for building the model domain for numerical simulations of these experiments but require calibration by higher resolution means to confidently evaluate different porosity-permeability relationships.

  9. Sample stream distortion modeled in continuous-flow electrophoresis

    NASA Technical Reports Server (NTRS)

    Rhodes, P. H.

    1979-01-01

    Buoyancy-induced disturbances in an electrophoresis-type chamber were investigated. Five tracer streams (latex) were used to visualize the flows while a nine-thermistor array sensed the temperature field. The internal heating of the chamber was provided by a 400 Hz electric field. Cooling was provided on the front and back faces of the chamber and, in addition, on both chamber side walls. Disturbances to the symmetric base flow in the chamber occurred in the broad plane of the chamber and resulted from the formation of lateral and axial temperature gradients. The effect of these gradients was to retard or increase local flow velocities at different positions in the chamber cross section, which resulted in lateral secondary flows being induced in the broad plane of the chamber. As the adverse temperature gradients increased in magnitude, the critical Rayleigh number was approached and reverse (separated) flow became apparent, which subsequently led to the onset of time-varying secondary flows.

  10. Prediction of Protein Loop Conformations using the AGBNP Implicit Solvent Model and Torsion Angle Sampling.

    PubMed

    Felts, Anthony K; Gallicchio, Emilio; Chekmarev, Dmitriy; Paris, Kristina A; Friesner, Richard A; Levy, Ronald M

    2008-01-01

    The OPLS-AA all-atom force field and the Analytical Generalized Born plus Non-Polar (AGBNP) implicit solvent model, in conjunction with torsion angle conformational search protocols based on the Protein Local Optimization Program (PLOP), are shown to be effective in predicting the native conformations of 57 9-residue and 35 13-residue loops of a diverse series of proteins with low sequence identity. The novel nonpolar solvation free energy estimator implemented in AGBNP, augmented by correction terms aimed at reducing the occurrence of ion pairing, is important for achieving the best prediction accuracy. Extended versions of the previously developed PLOP-based conformational search schemes based on calculations in the crystal environment are reported that are suitable for application to loop homology modeling without the crystal environment. Our results suggest that in general the loop backbone conformation is not strongly influenced by crystal packing. The application of the temperature Replica Exchange Molecular Dynamics (T-REMD) sampling method to a few examples where PLOP sampling is insufficient is also reported. The results indicate that the OPLS-AA/AGBNP effective potential is suitable for high-resolution modeling of proteins in the final stages of homology modeling and/or protein crystallographic refinement.
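
    The T-REMD method mentioned above exchanges configurations between replicas held at different temperatures using a Metropolis criterion; a minimal sketch of the swap acceptance rule (the energies and temperatures are invented for illustration):

```python
import math

def remd_swap_probability(e_i, e_j, t_i, t_j, k_b=0.0019872041):
    """Metropolis acceptance probability for exchanging the configurations
    of two replicas at temperatures t_i, t_j (K): min(1, exp(delta)) with
    delta = (beta_i - beta_j)*(E_i - E_j). Energies in kcal/mol, k_b in
    kcal/(mol K)."""
    beta_i = 1.0 / (k_b * t_i)
    beta_j = 1.0 / (k_b * t_j)
    delta = (beta_i - beta_j) * (e_i - e_j)
    return min(1.0, math.exp(delta))

# A lower-energy configuration found at the hotter replica always swaps down
print(remd_swap_probability(-100.0, -105.0, 300.0, 320.0))  # 1.0
```

    Unfavorable swaps are still accepted with nonzero probability, which is what lets high-temperature replicas carry the system over barriers that defeat single-temperature sampling.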

  11. A Hierarchy of Snowmelt Models for Canadian Prairies: Temperature-Index, Modified Temperature Index and Energy-Balance Models

    NASA Astrophysics Data System (ADS)

    Yew Gan, Thian; Singh, Purushottam; Gobena, Adam

    2010-05-01

    Three semi-distributed snowmelt models were developed and applied to the Paddle River Basin (PRB) in the Canadian Prairies: (1) a physics-based, energy balance model (SDSM-EBM) that considers vertical energy exchange processes in open and forested areas, and snowmelt processes that treat liquid and ice phases separately; (2) a modified temperature-index model (SDSM-MTI) that uses both near-surface soil temperature (Tg) and air temperature (Ta); and (3) a standard temperature-index (SDSM-TI) method using Ta only. Other than the "regulatory" effects of beaver dams that affected the validation results on simulated runoff, both SDSM-MTI and SDSM-EBM simulated reasonably accurate snowmelt runoff, snow water equivalent, and snow depth. For the PRB, where snowpack is shallow to moderately deep and winter is relatively severe, the advantage of using both Ta and Tg is partly attributed to Tg showing a stronger correlation with solar radiation than Ta during the spring snowmelt season, and partly to the onset of major snowmelt, which usually occurs when Tg approaches 0°C. After re-setting model parameters so that SDSM-MTI degenerated to SDSM-TI (the effect of Tg completely removed), the latter performed poorly, even after re-calibrating the melt factors using Ta alone. It seems that if reliable Tg data are available, they should be utilized to model snowmelt processes in a Prairie environment, particularly if the temperature-index approach is adopted.
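
    The degree-day relation underlying temperature-index models like SDSM-TI, and a hypothetical two-temperature variant in the spirit of SDSM-MTI, can be sketched as follows (the melt factors are illustrative, not the calibrated SDSM parameters, and the additive Ta/Tg weighting is an assumption, not the published formulation):

```python
def degree_day_melt(t_air_c, melt_factor=3.0, t_base_c=0.0):
    """Standard temperature-index (degree-day) melt, M = Cm*max(Ta - Tb, 0),
    in mm w.e./day; the melt factor Cm is illustrative, not calibrated."""
    return melt_factor * max(t_air_c - t_base_c, 0.0)

def modified_index_melt(t_air_c, t_soil_c, mf_air=2.0, mf_soil=1.5,
                        t_base_c=0.0):
    """Hypothetical two-temperature variant using both air temperature (Ta)
    and near-surface soil temperature (Tg); the additive weighting is an
    assumption, not the SDSM-MTI formulation."""
    return (mf_air * max(t_air_c - t_base_c, 0.0)
            + mf_soil * max(t_soil_c - t_base_c, 0.0))

print(degree_day_melt(5.0))               # 15.0 mm w.e./day
print(modified_index_melt(5.0, 1.0))      # 11.5 mm w.e./day
```

    The soil-temperature term captures the abstract's observation that major melt begins only as Tg approaches 0°C: while the soil is frozen it contributes nothing to the melt estimate.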

  12. Measuring and modeling hemoglobin aggregation below the freezing temperature.

    PubMed

    Rosa, Mónica; Lopes, Carlos; Melo, Eduardo P; Singh, Satish K; Geraldes, Vitor; Rodrigues, Miguel A

    2013-08-01

    Freezing of protein solutions is required for many applications such as storage, transport, or lyophilization; however, freezing has inherent risks for protein integrity. It is difficult to study protein stability below the freezing temperature because phase separation constrains solute concentration in solution. In this work, we developed an isochoric method to study protein aggregation in solutions at -5, -10, -15, and -20 °C. Lowering the temperature below the freezing point in a fixed volume prevents the aqueous solution from freezing, as pressure rises until equilibrium (P,T) is reached. Aggregation rates of bovine hemoglobin (BHb) increased at lower temperatures (-20 °C) and higher BHb concentrations. However, the addition of sucrose substantially decreased the aggregation rate and prevented aggregation when the concentration reached 300 g/L. The unfolding thermodynamics of BHb was studied using fluorescence, and the fraction of unfolded protein as a function of temperature was determined. A mathematical model was applied to describe BHb aggregation below the freezing temperature. This model was able to predict the aggregation curves for various storage temperatures and initial concentrations of BHb. The aggregation mechanism was revealed to be mediated by an unfolded state, followed by fast growth of aggregates that readily precipitate. Aggregation kinetics increased at lower temperatures because of the higher fraction of unfolded BHb closer to the cold denaturation temperature. Overall, the results obtained herein suggest that the isochoric method could provide a relatively simple approach to obtaining fundamental thermodynamic information about the protein and the aggregation mechanism, thus providing a new approach to developing accelerated formulation studies below the freezing temperature.
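
    The unfolding-mediated mechanism described above can be sketched as a two-state unfolding equilibrium driving first-order monomer loss. All parameter values below (cold-denaturation temperature, stability slope, rate constant) are invented for illustration and are not the fitted BHb model:

```python
import math

def fraction_unfolded(t_k, t_cd_k=255.0, slope_j_mol_k=4000.0, r=8.314):
    """Two-state fraction unfolded with a hypothetical linear stability
    curve dG(T) = slope*(T - T_cd): below the cold-denaturation
    temperature T_cd the unfolded state is favoured. Parameters are
    invented, not fitted to bovine hemoglobin (BHb)."""
    dg = slope_j_mol_k * (t_k - t_cd_k)     # J/mol
    return 1.0 / (1.0 + math.exp(dg / (r * t_k)))

def monomer_remaining(t_k, time_s, k_agg=1.0e-5):
    """Unfolding-mediated aggregation: native monomer fraction decays
    first-order with effective rate k_agg * fraction_unfolded(T)."""
    return math.exp(-k_agg * fraction_unfolded(t_k) * time_s)

# After one day, more monomer is lost at -20 C (253 K) than at -5 C (268 K)
print(monomer_remaining(253.0, 86400.0), monomer_remaining(268.0, 86400.0))
```

    The sketch reproduces the qualitative result: closer to the cold denaturation temperature, the unfolded fraction grows and the effective aggregation rate increases.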

  13. Field portable low temperature porous layer open tubular cryoadsorption headspace sampling and analysis part I: Instrumentation.

    PubMed

    Bruno, Thomas J

    2016-01-15

    Building on the successful laboratory application of PLOT-cryoadsorption as a means of collecting vapor (or headspace) samples for chromatographic analysis, this paper introduces a field-portable apparatus. The device fits inside a briefcase (aluminum tool carrier) and can be easily transported by vehicle or by air. The portable apparatus functions entirely on compressed air, making it suitable for use in locations lacking electrical power, and for use in flammable and explosive environments. The apparatus comprises four components: a field-capable PLOT-capillary platform, the supporting equipment platform, the service interface between the PLOT-capillary and the supporting equipment, and the necessary peripherals. Vapor sampling can be done with either a hand piece (containing the PLOT capillary) or with a custom-fabricated standoff module. Both the hand piece and the standoff module can be heated and cooled to facilitate vapor collection and subsequent vapor sample removal. The service interface between the support platform and the sampling units makes use of a unique countercurrent approach that minimizes loss of cooling and heating due to heat transfer with the surroundings (recuperative thermostatting). Several types of PLOT-capillary elements and sampling probes are described in this report. Applications to a variety of samples relevant to forensic and environmental analysis are discussed in a companion paper. PMID:26687166

  14. Evaluating Small Sample Approaches for Model Test Statistics in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Nevitt, Jonathan

    Structural equation modeling (SEM) attempts to remove the negative influence of measurement error and allows for investigation of relationships at the level of the underlying constructs of interest. SEM has been regarded as a "large sample" technique since its inception. Recent developments in SEM, some of which are currently available in popular…

  15. Modeling acclimation of photosynthesis to temperature in evergreen conifer forests.

    PubMed

    Gea-Izquierdo, Guillermo; Mäkelä, Annikki; Margolis, Hank; Bergeron, Yves; Black, T Andrew; Dunn, Allison; Hadley, Julian; Kyaw Tha Paw U; Falk, Matthias; Wharton, Sonia; Monson, Russell; Hollinger, David Y; Laurila, Tuomas; Aurela, Mika; McCaughey, Harry; Bourque, Charles; Vesala, Timo; Berninger, Frank

    2010-10-01

    • In this study, we used a canopy photosynthesis model which describes changes in photosynthetic capacity with slow temperature-dependent acclimations. • A flux-partitioning algorithm was applied to fit the photosynthesis model to net ecosystem exchange data for 12 evergreen coniferous forests from northern temperate and boreal regions. • The model accounted for much of the variation in photosynthetic production, with modeling efficiencies (mean > 67%) similar to those of more complex models. The parameter describing the rate of acclimation was larger at the northern sites, leading to a slower acclimation of photosynthesis to temperature. The response of the rates of photosynthesis to air temperature in spring was delayed up to several days at the coldest sites. Overall photosynthesis acclimation processes were slower at colder, northern locations than at warmer, more southern, and more maritime sites. • Consequently, slow changes in photosynthetic capacity were essential to explaining variations of photosynthesis for colder boreal forests (i.e. where acclimation of photosynthesis to temperature was slower), whereas the importance of these processes was minor in warmer conifer evergreen forests.
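
    The slow temperature-dependent acclimation described above is commonly modeled as a first-order (low-pass) filter of air temperature with an acclimation time constant. A minimal sketch of the filter (the form is generic, not the exact canopy photosynthesis model used in the study):

```python
def delayed_temperature(temps_c, tau_days=5.0, dt_days=1.0):
    """First-order low-pass filter of daily air temperature,
    dS/dt = (T - S)/tau. S stands in for the slowly acclimating
    photosynthetic state; a larger time constant tau means slower
    acclimation, as reported here for the northern sites."""
    s = temps_c[0]
    out = []
    for t in temps_c:
        s += dt_days * (t - s) / tau_days
        out.append(s)
    return out

# The filtered state tracks a spring warming step only gradually
series = delayed_temperature([0.0] * 3 + [10.0] * 7, tau_days=5.0)
print([round(x, 2) for x in series])
```

    The lag of the filtered state behind a step warming mirrors the multi-day spring delay reported for the coldest sites: the larger tau is, the longer photosynthesis takes to respond.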

  16. Study of Aerothermodynamic Modeling Issues Relevant to High-Speed Sample Return Vehicles

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.

    2014-01-01

    This paper examines the application of state-of-the-art coupled ablation and radiation simulations to high-speed sample return vehicles, such as those returning from Mars or an asteroid. A defining characteristic of these entries is that the surface recession rates and temperatures are driven by nonequilibrium convective and radiative heating through a boundary layer with significant surface blowing and ablation products. Measurements relevant to validating the simulation of these phenomena are reviewed, and the Stardust entry is identified as providing the best relevant measurements. A coupled ablation and radiation flowfield analysis is presented that implements a finite-rate surface chemistry model. Comparisons between this finite-rate model and an equilibrium ablation model show that, while good agreement is seen for diffusion-limited oxidation cases, the finite-rate model predicts up to 50% lower char rates than the equilibrium model at sublimation conditions. Both the equilibrium and finite-rate models predict significant negative mass flux at the surface due to sublimation of atomic carbon. A sensitivity analysis of flowfield and surface chemistry rates shows that, for a sample return capsule at 10, 12, and 14 km/s, the sublimation rates for C and C3 provide the largest changes to the convective flux, radiative flux, and char rate. A parametric uncertainty analysis of the radiative heating due to radiation modeling parameters indicates uncertainties ranging from 27% at 10 km/s to 36% at 14 km/s. Applying the developed coupled analysis to the Stardust entry results in temperatures within 10% of those inferred from observations, and final recession values within 20% of measurements, which improves upon the 60% over-prediction at the stagnation point obtained through an uncoupled analysis. Emission from CN Violet is shown to be over-predicted by nearly an order of magnitude, which is consistent with the results of previous independent analyses. Finally, the

  17. Preisach Model of ER Fluids Considering Temperature Variations

    NASA Astrophysics Data System (ADS)

    Han, Y. M.; Choi, S. B.; Choi, H. J.

    This paper presents a new approach for hysteresis modeling of an electro-rheological (ER) fluid. The Preisach model is adopted to describe the change in ER fluid hysteresis with temperature, and its applicability is experimentally proved by examining two significant properties under two dominant temperature conditions. As a first step, the polymethylaniline (PMA)-based ER fluid is made by dispersing the chemically synthesized PMA particles into non-conducting oil. Then, using the Couette-type electroviscometer, multiple first-order descending (FOD) curves are constructed to consider temperature variations in the model. Subsequently, a nonlinear hysteresis model of the ER fluid is formulated between input (electric field) and output (yield stress). A compensation strategy is also formulated in a discrete manner through Preisach model inversion to attain the desired shear stress of the ER fluid. In order to demonstrate the effectiveness of the identified hysteresis model and the tracking performance of the control strategy, the field-dependent hysteresis loop and tracking error responses are experimentally evaluated in the time domain and compared with responses obtained from the Bingham model.
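
    The discrete Preisach model is a weighted superposition of two-state relay hysterons. A minimal sketch of the operator follows; the thresholds and uniform weights are purely illustrative, whereas in practice the weights are identified from the FOD curves:

```python
class PreisachModel:
    """Minimal discrete Preisach hysteresis model: a weighted sum of relay
    hysterons, each switching up at threshold alpha and down at threshold
    beta (alpha >= beta). The thresholds and uniform weights below are
    illustrative; real weights come from first-order descending curves."""

    def __init__(self, thresholds, weights=None):
        self.relays = list(thresholds)                 # (alpha, beta) pairs
        self.weights = weights or [1.0 / len(self.relays)] * len(self.relays)
        self.states = [-1.0] * len(self.relays)        # all relays start down

    def apply(self, u):
        """Feed one input sample (e.g. electric field) and return the
        hysteretic output (e.g. yield stress), in arbitrary units."""
        for i, (alpha, beta) in enumerate(self.relays):
            if u >= alpha:
                self.states[i] = 1.0
            elif u <= beta:
                self.states[i] = -1.0
        return sum(w * s for w, s in zip(self.weights, self.states))

model = PreisachModel([(1.0, 0.5), (2.0, 1.0), (3.0, 1.5)])
up = [model.apply(u) for u in (0.0, 1.0, 2.0, 3.0)]
down = [model.apply(u) for u in (2.0, 1.0, 0.0)]
print(up, down)  # same inputs give different outputs on the two branches
```

    Because each relay remembers its last switching event, the output at a given field differs between the ascending and descending branches, which is exactly the loop behavior the compensation strategy must invert.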

  18. Modeling apple surface temperature dynamics based on weather data.

    PubMed

    Li, Lei; Peters, Troy; Zhang, Qin; Zhang, Jingjin; Huang, Danfeng

    2014-01-01

    The exposure of fruit surfaces to direct sunlight during the summer months can result in sunburn damage. Losses due to sunburn damage are a major economic problem when marketing fresh apples. The objective of this study was to develop and validate a model for simulating fruit surface temperature (FST) dynamics based on energy balance and measured weather data. A series of weather data (air temperature, humidity, solar radiation, and wind speed) was recorded for seven hours between 11:00-18:00 for two months at fifteen minute intervals. To validate the model, the FSTs of "Fuji" apples were monitored using an infrared camera in a natural orchard environment. The FST dynamics were measured using a series of thermal images. For the apples that were completely exposed to the sun, the RMSE of the model for estimating FST was less than 2.0 °C. A sensitivity analysis of the emissivity of the apple surface and the conductance of the fruit surface to water vapour showed that accurate estimations of the apple surface emissivity were important for the model. The validation results showed that the model was capable of accurately describing the thermal performances of apples under different solar radiation intensities. Thus, this model could be used to more accurately estimate the FST relative to estimates that only consider the air temperature. In addition, this model provides useful information for sunburn protection management. PMID:25350507
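
    The energy-balance approach described above balances absorbed shortwave radiation against longwave and convective losses at the fruit surface. A heavily simplified sketch that ignores latent heat exchange (the convective correlation h = 6 + 4*wind and the property values are assumptions, not the paper's validated model):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def fruit_surface_temperature(t_air_c, solar_w_m2, wind_m_s,
                              absorptivity=0.6, emissivity=0.95):
    """Steady-state balance: absorbed shortwave = net longwave + convection.
    The convective coefficient h = 6.0 + 4.0*wind is a generic correlation
    and latent heat loss is ignored -- a sketch, not the validated model.
    Returns the surface temperature in degrees C."""
    t_air_k = t_air_c + 273.15
    h = 6.0 + 4.0 * wind_m_s

    def residual(t_s_k):  # positive while the surface is still too cool
        longwave = emissivity * SIGMA * (t_s_k**4 - t_air_k**4)
        convection = h * (t_s_k - t_air_k)
        return absorptivity * solar_w_m2 - longwave - convection

    lo, hi = t_air_k, t_air_k + 40.0   # bracket the root, then bisect
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) - 273.15

print(round(fruit_surface_temperature(30.0, 900.0, 1.0), 1))
```

    Even this crude balance shows why air temperature alone underestimates sunburn risk: under strong sun and light wind the computed surface temperature sits far above the air temperature.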

  20. Modeling the effect of temperature on survival rate of Salmonella Enteritidis in yogurt.

    PubMed

    Szczawiński, J; Szczawińska, M E; Łobacz, A; Jackowska-Tracz, A

    2014-01-01

    The aim of the study was to determine the inactivation rates of Salmonella Enteritidis in commercially produced yogurt and to generate primary and secondary mathematical models to predict the behaviour of these bacteria during storage at different temperatures. The samples were inoculated with a mixture of three S. Enteritidis strains and stored at 5 degrees C, 10 degrees C, 15 degrees C, 20 degrees C and 25 degrees C for 24 h. The number of salmonellae was determined every two hours. It was found that the number of bacteria decreased linearly with storage time in all samples. Storage temperature and pH of yogurt significantly influenced the survival rate of S. Enteritidis (p < 0.05). In samples kept at 5 degrees C the number of salmonellae decreased at the lowest rate, whereas at 25 degrees C the reduction in the number of bacteria was the most dynamic. The natural logarithm of the mean inactivation rates of Salmonella calculated from the primary model was fitted to two secondary models: linear and polynomial. Equations obtained from both secondary models can be applied as a tool for predicting the inactivation rate of Salmonella in yogurt stored over the temperature range from 5 to 25 degrees C; however, the polynomial model gave a better fit to the experimental data.
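
    The log-linear primary model and the associated decimal reduction time (D-value) used in studies of this kind can be sketched as follows (the rate constants are illustrative, not the fitted values):

```python
def log_linear_survivors(log10_n0, k_log10_per_h, hours):
    """Primary model: log10 N(t) = log10 N0 - k*t, i.e. a linear decline of
    the log count with storage time. k is the inactivation rate in log10
    units per hour (the values used below are illustrative)."""
    return log10_n0 - k_log10_per_h * hours

def d_value_h(k_log10_per_h):
    """Decimal reduction time: hours required for a 1-log10 reduction."""
    return 1.0 / k_log10_per_h

# Faster inactivation (larger k) at 25 C than at 5 C means a shorter D-value
print(d_value_h(0.05), d_value_h(0.20))
print(log_linear_survivors(6.0, 0.20, 24.0))
```

    A secondary model then regresses ln(k) against storage temperature (linearly or polynomially, as compared in the abstract) so that k, and hence the survivor curve, can be predicted for any temperature in the studied range.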

  1. Spatiotemporal modeling of monthly soil temperature using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Tang, Xiao-Ping; Guo, Nai-Jia; Yang, Chao; Liu, Hong-Bin; Shang, Yue-Feng

    2013-08-01

    Soil temperature data are critical for understanding land-atmosphere interactions. However, in many cases, they are limited at both spatial and temporal scales. In the current study, an attempt was made to predict monthly mean soil temperature at a depth of 10 cm using artificial neural networks (ANNs) over a large region with complex terrain. Gridded independent variables, including latitude, longitude, elevation, topographic wetness index, and normalized difference vegetation index, were derived from a digital elevation model and remote sensing images with a resolution of 1 km. The good performance and robustness of the proposed ANNs were demonstrated by comparisons with multiple linear regressions. On average, the developed ANNs presented a relative improvement of about 44 % in root mean square error, 70 % in mean absolute percentage error, and 18 % in coefficient of determination over classical linear models. The proposed ANN models were then applied to predict soil temperatures at unsampled locations across the study area. Spatiotemporal variability of soil temperature was investigated based on the obtained database. Future work will be needed to test the applicability of ANNs for estimating soil temperature at finer scales.

  2. Modelling Brain Temperature and Perfusion for Cerebral Cooling

    NASA Astrophysics Data System (ADS)

    Blowers, Stephen; Valluri, Prashant; Marshall, Ian; Andrews, Peter; Harris, Bridget; Thrippleton, Michael

    2015-11-01

    Brain temperature relies heavily on two aspects: i) blood perfusion and porous heat transport through tissue and ii) blood flow and heat transfer through the embedded arterial and venous vasculature. Moreover, brain temperature cannot be measured directly unless highly invasive surgical procedures are used. A 3D two-phase fluid-porous model for mapping flow and temperature in the brain is presented, with arterial and venous vessels extracted from MRI scans. Heat generation through metabolism is also included. The model is robust and reveals flow and temperature maps in unprecedented 3D detail. However, the Carman-Kozeny parameters of the porous (tissue) phase need to be optimised for expected perfusion profiles. In order to optimise the Carman-Kozeny parameters, a reduced-order two-phase model is developed in which 1D vessels are created with a tree generation algorithm embedded inside a 3D porous domain. Results reveal that blood perfusion is a strong function of the porosity distribution in the tissue. We present a qualitative comparison between the simulated perfusion maps and those obtained clinically. We also present results studying the effect of scalp cooling on core brain temperature, and preliminary results agree with those observed clinically.

  3. A flexible pinhole camera model for coherent nonuniform sampling.

    PubMed

    Popescu, Voicu; Benes, Bedrich; Rosen, Paul; Cui, Jian; Wang, Lili

    2014-01-01

    The flexible pinhole camera (FPC) allows flexible modulation of the sampling rate over the field of view. The FPC is defined by a viewpoint and a map specifying the sampling locations on the image plane. The map is constructed from known regions of interest with interactive and automatic approaches. The FPC provides inexpensive 3D projection that allows rendering complex datasets quickly, in feed-forward fashion, by projection followed by rasterization. The FPC supports many types of data, including image, height field, geometry, and volume data. The resulting image is a coherent nonuniform sampling (CoNUS) of the dataset that matches the local variation of the dataset's importance. CoNUS images have been successfully implemented for remote visualization, focus-plus-context visualization, and acceleration of expensive rendering effects such as surface geometric detail and specular reflection. A video explaining and demonstrating the FPC is at http://youtu.be/kvFe5XjOPNM.

  4. Micro-electro-mechanical systems/near-infrared validation of different sampling modes and sample sets coupled with multiple models.

    PubMed

    Wu, Zhisheng; Shi, Xinyuan; Wan, Guang; Xu, Manfei; Zhan, Xueyan; Qiao, Yanjiang

    2015-01-01

    The aim of the present study was to demonstrate the reliability of micro-electro-mechanical systems/near-infrared technology by investigating analytical models of two modes of sampling (integrating sphere and fiber optic probe modes) and different sample sets. Baicalin in Yinhuang tablets was used as an example, and the experimental procedure included the optimization of spectral pretreatments, selection of wavelength regions using interval partial least squares, moving window partial least squares, and validation of the method using an accuracy profile. The results demonstrated that models that use the integrating sphere mode are better than those that use fiber optic probe modes. Spectra that use fiber optic probe modes tend to be more susceptible to interference information because the intensity of the incident light on a fiber optic probe mode is significantly weaker than that on an integrating sphere mode. According to the test set validation result of the method parameters, such as accuracy, precision, risk, and linearity, the selection of variables was found to make no significant difference to the performance of the full spectral model. The performance of the models whose sample sets ranged widely in concentration (i.e., 1-4 %) was found to be better than that of models whose samples had relatively narrow ranges (i.e., 1-2 %). The establishment and validation of this method can be used to clarify the analytical guideline in Chinese herbal medicine about two sampling modes and different sample sets in the micro-electro-mechanical systems/near-infrared technique.

  5. Sampling biases in datasets of historical mean air temperature over land.

    PubMed

    Wang, Kaicun

    2014-01-01

    Global mean surface air temperature (Ta) has been reported to have risen by 0.74°C over the last 100 years. However, the definition of mean Ta is still a subject of debate. The most defensible definition might be the integral of the continuous temperature measurements over a day (Td0). However, for technological and historical reasons, mean Ta over land have been taken to be the average of the daily maximum and minimum temperature measurements (Td1). All existing principal global temperature analyses over land rely heavily on Td1. Here, I make a first quantitative assessment of the bias in the use of Td1 to estimate trends of mean Ta using hourly Ta observations at 5600 globally distributed weather stations from the 1970s to 2013. I find that the use of Td1 has a negligible impact on the global mean warming rate. However, the trend of Td1 has a substantial bias at regional and local scales, with a root mean square error of over 25% at 5° × 5° grids. Therefore, caution should be taken when using mean Ta datasets based on Td1 to examine high resolution details of warming trends.
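
    The Td1-versus-Td0 distinction is easy to reproduce: with an asymmetric diurnal cycle, the max/min average can differ substantially from the true mean of the hourly record. A sketch with synthetic hourly values:

```python
def daily_mean_td0(hourly_temps):
    """Td0: the mean of the (ideally continuous) temperature record."""
    return sum(hourly_temps) / len(hourly_temps)

def daily_mean_td1(hourly_temps):
    """Td1: the average of the daily maximum and minimum, the historical
    convention for land stations."""
    return 0.5 * (max(hourly_temps) + min(hourly_temps))

# Synthetic skewed diurnal cycle: long cool night, brief warm afternoon
hourly = [10.0] * 18 + [20.0, 24.0, 26.0, 24.0, 20.0, 14.0]
print(daily_mean_td0(hourly), daily_mean_td1(hourly))
# Td1 overstates the mean because max/min ignore how long each level lasts
```

    Since the shape of the diurnal cycle varies regionally, so does this bias, which is consistent with the finding that Td1 trends are reliable globally but can err substantially on 5° × 5° grids.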

  7. Modeling the effect of temperature on survival rate of Listeria monocytogenes in yogurt.

    PubMed

    Szczawiński, J; Szczawińska, M E; Łobacz, A; Jackowska-Tracz, A

    2016-01-01

    The aim of the study was to (i) evaluate the behavior of Listeria monocytogenes in a commercially produced yogurt, (ii) determine the survival/inactivation rates of L. monocytogenes during cold storage of yogurt and (iii) generate primary and secondary mathematical models to predict the behavior of these bacteria during storage at different temperatures. The samples of yogurt were inoculated with a mixture of three L. monocytogenes strains and stored at 3, 6, 9, 12 and 15°C for 16 days. The number of listeriae was determined after 0, 1, 2, 3, 5, 7, 9, 12, 14 and 16 days of storage. From each sample a series of decimal dilutions was prepared and plated onto ALOA agar (agar for Listeria according to Ottaviani and Agosti). It was found that the applied temperature and storage time significantly influenced the survival rate of listeriae (p<0.01). The number of L. monocytogenes in all the samples decreased linearly with storage time. The slowest decrease in the number of the bacteria was found in the samples stored at 6°C (D-10 value = 243.9 h), whereas the highest reduction was observed in the samples stored at 15°C (D-10 value = 87.0 h). The number of L. monocytogenes was correlated with the pH value of the samples (p<0.01). The natural logarithm of the mean survival/inactivation rates of L. monocytogenes calculated from the primary model was fitted to two secondary models, namely linear and polynomial. Mathematical equations obtained from both secondary models can be applied as a tool for predicting the survival/inactivation rate of L. monocytogenes in yogurt stored over the temperature range from 3 to 15°C; however, the polynomial model gave a better fit to the experimental data. PMID:27487505

  8. Land-surface temperature measurement from space - Physical principles and inverse modeling

    NASA Technical Reports Server (NTRS)

    Wan, Zhengming; Dozier, Jeff

    1989-01-01

    To apply the multiple-wavelength (split-window) method used for satellite measurement of sea-surface temperature from thermal-infrared data to land-surface temperatures, the authors statistically analyze simulations using an atmospheric radiative transfer model. The range of atmospheric conditions and surface temperatures simulated is wide enough to cover variations in clear atmospheric properties and surface temperatures, both of which are larger over land than over sea. Surface elevation is also included in the simulation as the most important topographic effect. Land covers characterized by measured or modeled spectral emissivities include snow, clay, sands, and tree leaf samples. The empirical inverse model can estimate the surface temperature with a standard deviation less than 0.3 K and a maximum error less than 1 K, for viewing angles up to 40 degrees from nadir under cloud-free conditions, given satellite measurements in three infrared channels. A band in the region from 10.2 to 11.0 microns will usually give the most reliable single-band estimate of surface temperature. In addition, a band in either the 3.5-4.0-micron region or in the 11.5-12.6-micron region must be included for accurate atmospheric correction, and a band below the ozone absorption feature at 9.6 microns (e.g., 8.2-8.8 microns) will increase the accuracy of the estimate of surface temperature.
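
    The split-window technique estimates surface temperature from two thermal channels, using their brightness-temperature difference to correct for atmospheric absorption. A generic sketch with placeholder coefficients (real coefficients come from regressions against radiative-transfer simulations like those described above, and also depend on emissivity and view angle):

```python
def split_window_lst(t11_k, t12_k, a0=1.0, a1=1.0, a2=2.5):
    """Generic split-window form: Ts = a0 + a1*T11 + a2*(T11 - T12).
    The brightness-temperature difference between the ~11 um and ~12 um
    channels corrects for atmospheric water-vapour absorption. The
    coefficients here are placeholders, not a fitted algorithm."""
    return a0 + a1 * t11_k + a2 * (t11_k - t12_k)

# A moist atmosphere depresses the 12 um channel more than the 11 um one;
# the difference term restores the lost surface signal
print(split_window_lst(t11_k=295.0, t12_k=293.0))  # 301.0
```

    Over land, the extra channels recommended in the abstract (a 3.5-4.0 or 11.5-12.6 micron band, plus one below the 9.6 micron ozone feature) serve to constrain the larger emissivity and atmospheric variability that a two-channel sea-surface algorithm can ignore.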

  9. Building phenomenological models that relate proteolysis in pork muscles to temperature, water and salt content.

    PubMed

    Harkouss, Rami; Safa, Hassan; Gatellier, Philippe; Lebert, André; Mirade, Pierre-Sylvain

    2014-05-15

    Throughout dry-cured ham production, salt and water content, pH and temperature are key factors affecting proteolysis, one of the main biochemical processes influencing sensory properties and final quality of the product. The aim of this study was to quantify the effect of these variables (except pH) on the time course of proteolysis in laboratory-prepared pork meat samples. Based on a Doehlert design, samples of five different types of pork muscle were prepared, salted, dried and placed at different temperatures, and sampled at different times for quantification of proteolysis. Statistical analysis of the experimental results showed that the proteolysis index (PI) was correlated positively with temperature and water content, but negatively with salt content. Applying response surface methodology and multiple linear regressions enabled us to build phenomenological models relating PI to water and salt content, and to temperature. These models could then be integrated into a 3D numerical ham model, coupling salt and water transfers to proteolysis. PMID:24423495
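
    A phenomenological model of this kind maps temperature, water content, and salt content to a proteolysis index through fitted regression coefficients. A sketch with invented coefficients that merely reproduce the reported signs of the effects (the paper fitted muscle-specific response surfaces, with interaction terms, from the Doehlert design):

```python
def proteolysis_index(temp_c, water_pct, salt_pct,
                      b0=5.0, b_t=0.8, b_w=0.3, b_s=-1.2):
    """Hypothetical linear phenomenological model of the proteolysis index
    (PI): PI rises with temperature and water content and falls with salt
    content, matching the signs reported in the study. The coefficients
    are invented for illustration, not the fitted values."""
    return b0 + b_t * temp_c + b_w * water_pct + b_s * salt_pct

# More salt -> less predicted proteolysis at fixed temperature and water
print(proteolysis_index(15.0, 70.0, 2.0), proteolysis_index(15.0, 70.0, 6.0))
```

    In the intended application, a 3D ham model supplies the local salt and water content at each point and time, and a relation of this form converts them into a local proteolysis rate.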

  10. Sample mounting and transfer for coupling an ultrahigh vacuum variable temperature beetle scanning tunneling microscope with conventional surface probes

    SciTech Connect

    Nafisi, Kourosh; Ranau, Werner; Hemminger, John C.

    2001-01-01

    We present a new ultrahigh vacuum (UHV) chamber for surface analysis and microscopy at controlled, variable temperatures. The new instrument allows surface analysis with Auger electron spectroscopy, low-energy electron diffraction, a quadrupole mass spectrometer, an argon ion sputtering gun, and a variable-temperature scanning tunneling microscope (VT-STM). In this system, we introduce a novel procedure for transferring a sample off a conventional UHV manipulator and onto a scanning tunneling microscope in the conventional "beetle" geometry, without disconnecting the heating or thermocouple wires. The microscope, a modified version of the Besocke beetle microscope, is mounted on a 2.75 in. outer diameter UHV flange and is directly attached to the base of the chamber. The sample is attached to a tripod sample holder that is held by the main manipulator. Under UHV conditions the tripod sample holder can be removed from the main manipulator and placed onto the STM. The VT-STM is capable of acquiring images over the temperature range of 180-500 K. The performance of the chamber is demonstrated here by producing an ordered array of island vacancy defects on a Pt(111) surface and obtaining STM images of these defects.

  11. Temperature Driven Annealing of Perforations in Bicellar Model Membranes

    SciTech Connect

    Nieh, Mu-Ping; Raghunathan, V.A.; Pabst, Georg; Harroun, Thad; Nagashima, K; Morales, H; Katsaras, John; Macdonald, P

    2011-01-01

    Bicellar model membranes composed of 1,2-dimyristoylphosphatidylcholine (DMPC) and 1,2-dihexanoylphosphatidylcholine (DHPC), with a DMPC/DHPC molar ratio of 5, and doped with the negatively charged lipid 1,2-dimyristoylphosphatidylglycerol (DMPG), at DMPG/DMPC molar ratios of 0.02 or 0.1, were examined using small angle neutron scattering (SANS), {sup 31}P NMR, and {sup 1}H pulsed field gradient (PFG) diffusion NMR with the goal of understanding temperature effects on the DHPC-dependent perforations in these self-assembled membrane mimetics. Over the temperature range studied via SANS (300-330 K), these bicellar lipid mixtures exhibited a well-ordered lamellar phase. The interlamellar spacing d increased with increasing temperature, in direct contrast to the decrease in d observed upon increasing temperature with otherwise identical lipid mixtures lacking DHPC. {sup 31}P NMR measurements on magnetically aligned bicellar mixtures of identical composition indicated a progressive migration of DHPC from regions of high curvature into planar regions with increasing temperature, in accord with the 'mixed bicelle model' (Triba, M. N.; Warschawski, D. E.; Devaux, P. E. Biophys. J. 2005, 88, 1887-1901). Parallel PFG diffusion NMR measurements of transbilayer water diffusion, where the observed diffusion is dependent on the fractional surface area of lamellar perforations, showed that transbilayer water diffusion decreased with increasing temperature. A model is proposed consistent with the SANS, {sup 31}P NMR, and PFG diffusion NMR data, wherein increasing temperature drives the progressive migration of DHPC out of high-curvature regions, consequently decreasing the fractional volume of lamellar perforations, so that water occupying these perforations redistributes into the interlamellar volume, thereby increasing the interlamellar spacing.

  12. A computer model of global thermospheric winds and temperatures

    NASA Technical Reports Server (NTRS)

    Killeen, T. L.; Roble, R. G.; Spencer, N. W.

    1987-01-01

    Output data from the NCAR Thermospheric GCM and a vector-spherical-harmonic (VSH) representation of the wind field are used in constructing a computer model of time-dependent global horizontal vector neutral wind and temperature fields at altitude 130-300 km. The formulation of the VSH model is explained in detail, and some typical results obtained with a preliminary version (applicable to December solstice at solar maximum) are presented graphically. Good agreement with DE-2 satellite measurements is demonstrated.

  13. Determination of cross-grain properties of clearwood samples under kiln-drying conditions at temperature up to 140 C

    SciTech Connect

    Keep, L.B.; Keey, R.B.

    2000-07-01

    Small specimens of Pinus radiata have been tested to determine the creep strain that occurs during the kiln drying of boards. The samples have been tested over a range of temperatures from 20 C to 140 C. The samples, measuring 150 x 50 x 5 mm, were conditioned at various relative humidities in a pilot-plant kiln, in which the experiments at constant moisture content (MC) in the range of 5--20% MC were undertaken to eliminate mechano-sorptive strains. To determine the creep strain, the samples were brought to their equilibrium moisture content (EMC), then mechanically loaded under tension in the direction perpendicular to the grain. The strain was measured using small linear position sensors (LPS) which detect any elongation or shrinkage in the sample. The instantaneous compliance was measured within 60 sec of the application of the load (stress). The subsequent creep was monitored by the continued logging of strain data from the LPS units. The results of these experiments are consistent with previous studies of Wu and Milota (1995) on Douglas-fir (Pseudotsuga menziesii). An increase in temperature or moisture content causes a rise in the creep strain while the sample is under tension. Values for the instantaneous compliance range from 1.7 x 10{sup {minus}3} to 1.28 x 10{sup {minus}2} MPa{sup {minus}1} at temperatures between 20 C and 140 C and moisture content in the range of 5--20%. The rates of change of the creep strains are of the order of 10{sup {minus}7} to 10{sup {minus}8} s{sup {minus}1} for these temperatures and moisture contents. The experimental data have been fitted to the constitutive equations of Wu and Milota (1996) for Douglas-fir to give material parameters for the instantaneous and creep strain components for Pinus radiata.

  14. Modelling the effect of temperature on unsaturated soil behaviour

    NASA Astrophysics Data System (ADS)

    Dumont, Matthieu; Taibi, Said; Fleureau, Jean-Marie; Abou Bekr, Nabil; Saouab, Abdelghani

    2010-12-01

    A simple thermohydromechanical (THM) constitutive model for unsaturated soils is described. The effective stress concept is extended to unsaturated soils with the introduction of a capillary stress. This capillary stress is based on a microstructural model and calculated from attraction forces due to water menisci. The effect of desaturation and the thermal softening phenomenon are modelled with a minimal number of material parameters and based on existing models. THM process is qualitatively and quantitatively modelled by using experimental data and previous work to show the application of the model, including a drying path under mechanical stress with transition between saturated and unsaturated states, a heating path under constant suction and a deviatoric path with imposed suction and temperature. The results show that the present model can simulate the THM behaviour in unsaturated soils in a satisfactory way.

  15. Apply a hydrological model to estimate local temperature trends

    NASA Astrophysics Data System (ADS)

    Igarashi, Masao; Shinozawa, Tatsuya

    2014-03-01

    Continuous time series {f(x)} such as a depth of water are written f(x) = T(x)+P(x)+S(x)+C(x) in hydrological science, where T(x), P(x), S(x) and C(x) are called the trend, periodic, stochastic and catastrophic components respectively. We simplify this model and apply it to local temperature data such as those given by E. Halley (1693) and by records from the UK (1853-2010), Germany (1880-2010) and Japan (1876-2010). We also apply the model to CO2 data. The model coefficients are evaluated by symbolic computation using a standard personal computer. The accuracy of the obtained nonlinear curve is evaluated by the arithmetic mean of the relative errors between the data and the estimations. E. Halley estimated the temperature of Gresham College from 11/1692 to 11/1693. The simplified model shows that the temperature at that time was rather cold compared with recent temperatures in London. The UK and Germany data sets show that the maximum and minimum temperatures increased slowly from the 1890s to 1940s, increased rapidly from the 1940s to 1980s and have been decreasing since the 1980s with the exception of a few local stations. The trend of Japan is similar to these results.
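
The decomposition above can be sketched numerically. This is a hedged illustration on synthetic data with an assumed 11-year periodic term, not the authors' symbolic computation: fit T(x) + P(x) by least squares and score the result with the arithmetic mean of relative errors, the accuracy measure named in the abstract.

```python
import numpy as np

# Synthetic "annual temperature" series: linear trend + one periodic
# component + noise (all parameter values invented for illustration).
rng = np.random.default_rng(1)
x = np.arange(1880, 2011, dtype=float)
f = 9.0 + 0.005 * (x - 1880) + 0.3 * np.sin(2 * np.pi * x / 11) \
    + rng.normal(0, 0.05, x.size)

# Design matrix: trend T(x) plus sin/cos pair for the periodic P(x)
A = np.column_stack([np.ones_like(x), x - 1880,
                     np.sin(2 * np.pi * x / 11),
                     np.cos(2 * np.pi * x / 11)])
c, *_ = np.linalg.lstsq(A, f, rcond=None)
fit = A @ c

# Arithmetic mean of relative errors between data and estimation
mre = np.mean(np.abs((f - fit) / f))
print(round(mre, 4))
```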

  16. Models of Solar Irradiance Variability and the Instrumental Temperature Record

    NASA Technical Reports Server (NTRS)

    Marcus, S. L.; Ghil, M.; Ide, K.

    1998-01-01

    The effects of decade-to-century (Dec-Cen) variations in total solar irradiance (TSI) on global mean surface temperature Ts during the pre-Pinatubo instrumental era (1854-1991) are studied by using two different proxies for TSI and a simplified version of the IPCC climate model.

  17. STREAM TEMPERATURE SIMULATION OF FORESTED RIPARIAN AREAS: II. MODEL APPLICATION

    EPA Science Inventory

    The SHADE-HSPF modeling system described in a companion paper has been tested and applied to the Upper Grande Ronde (UGR) watershed in northeast Oregon. Sensitivities of stream temperature to the heat balance parameters in Hydrologic Simulation Program-FORTRAN (HSPF) and the ripa...

  18. Modeling temperature variations in a pilot plant thermophilic anaerobic digester.

    PubMed

    Valle-Guadarrama, Salvador; Espinosa-Solares, Teodoro; López-Cruz, Irineo L; Domaschko, Max

    2011-05-01

    A model that predicts temperature changes in a pilot plant thermophilic anaerobic digester was developed based on fundamental thermodynamic laws. The methodology utilized two simulation strategies. In the first, the model equations were solved through a searching routine based on a minimal square optimization criterion, from which the overall heat transfer coefficient values, for both biodigester and heat exchanger, were determined. In the second, the simulation was performed with variable values of these overall coefficients. Predictions from both strategies reproduced the experimental data within 5% of the temperature span permitted in the equipment by the system control, which validated the model. The temperature variation was affected by the heterogeneity of the feeding and extraction processes, by the heterogeneity of the digestate recirculation through the heating system and by the lack of perfect mixing inside the biodigester tank. The use of variable overall heat transfer coefficients improved the temperature change prediction and reduced the effect of the non-ideal performance of the pilot plant modeled.
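
The kind of lumped heat balance such a model rests on can be sketched as a single ODE, m·c·dT/dt = UA·(T_env - T) + Q, integrated explicitly. All parameter values below (tank mass, UA, heater power) are invented placeholders, not the pilot plant's fitted coefficients.

```python
# Hypothetical lumped heat balance for a stirred digester:
#   m * c * dT/dt = UA * (T_env - T) + Q
# UA in W/K, Q (heating input) in W, m in kg, c in J/(kg K), dt in s.
def simulate(hours, T0=55.0, T_env=20.0, UA=50.0, Q=1800.0,
             m=2000.0, c=4186.0, dt=60.0):
    T = T0
    for _ in range(int(hours * 3600 / dt)):
        T += dt * (UA * (T_env - T) + Q) / (m * c)
    return T

# With heating on, T relaxes toward the steady state T_env + Q/UA = 56 C
t48 = simulate(48)
print(round(t48, 2))
```

Fitting UA (and the exchanger coefficient) to measured temperature traces by least squares, as the first strategy describes, would wrap this integration in an optimizer.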

  19. The topomer-sampling model of protein folding

    PubMed Central

    Debe, Derek A.; Carlson, Matt J.; Goddard, William A.

    1999-01-01

    Clearly, a protein cannot sample all of its conformations (e.g., ≈3^100 ≈ 10^48 for a 100-residue protein) on an in vivo folding timescale (<1 s). To investigate how the conformational dynamics of a protein can accommodate subsecond folding time scales, we introduce the concept of the native topomer, which is the set of all structures similar to the native structure (obtainable from the native structure through local backbone coordinate transformations that do not disrupt the covalent bonding of the peptide backbone). We have developed a computational procedure for estimating the number of distinct topomers required to span all conformations (compact and semicompact) for a polypeptide of a given length. For 100 residues, we find ≈3 × 10^7 distinct topomers. Based on the distance calculated between different topomers, we estimate that a 100-residue polypeptide diffusively samples one topomer every ≈3 ns. Hence, a 100-residue protein can find its native topomer by random sampling in just ≈100 ms. These results suggest that subsecond folding of modest-sized, single-domain proteins can be accomplished by a two-stage process of (i) topomer diffusion: random, diffusive sampling of the 3 × 10^7 distinct topomers to find the native topomer (≈0.1 s), followed by (ii) intratopomer ordering: nonrandom, local conformational rearrangements within the native topomer to settle into the precise native state. PMID:10077555
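
The order-of-magnitude arithmetic behind these estimates checks out directly:

```python
import math

# ~3 conformations per residue, 100 residues
conformations = 3 ** 100
print(math.log10(conformations))   # ~47.7, i.e. ~10^48

# 3e7 distinct topomers sampled diffusively, one every ~3 ns
topomers = 3e7
dt = 3e-9                          # seconds per topomer
t_search = topomers * dt
print(t_search)                    # ~0.09 s, consistent with ~100 ms
```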

  20. Modeling Saturn Ring Temperature Variations as Solar Elevation Decreases

    NASA Astrophysics Data System (ADS)

    Spilker, L.; Flandes, A.; Altobelli, N.; Leyrat, C.; Pilorz, S.; Ferrari, C.

    2008-12-01

    After more than four years in orbit around Saturn, the Cassini Composite Infrared Spectrometer (CIRS) has acquired a wide-ranging set of thermal measurements of Saturn's main rings (A, B, C and Cassini Division). Temperatures were retrieved for the lit and unlit rings over a variety of ring geometries that include solar phase angle, spacecraft elevation, solar elevation and local hour angle. To first order, the largest temperature changes on the lit face of the rings are driven by variations in phase angle, while differences in temperature with changing spacecraft elevation and local time are a secondary effect. Decreasing ring temperatures with decreasing solar elevation are observed for both the lit and unlit faces of the rings after phase angle and local time effects are taken into account. For the lit rings, decreases of 2-4 K are observed in the C ring, and larger decreases, 7-10 K and 10-13 K, are observed in the A and B rings respectively. Our thermal data cover a range of solar elevations from -21 to -8 degrees (south side of the rings). We test two simple models and evaluate how well they fit the observed decreases in temperature. The first model assumes that the particles are so widely spaced that they do not cast shadows on one another, while the second model assumes that the particles are so close together that they essentially form a slab. The optically thinnest and optically thickest regions of the rings show the best fits to these two end-member models. We also extrapolate to the expected minimum ring temperatures at equinox. This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA and at CEA Saclay supported by the "Programme National de Planetologie". Copyright 2008 California Institute of Technology. Government sponsorship acknowledged.
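
The "slab" end member can be sketched with a bare radiative balance: a flat, dense ring layer absorbs sunlight in proportion to sin(solar elevation), so T = [(1 - A)·S·sin(B)/(ε·σ)]^(1/4). The albedo, emissivity and solar constant below are generic placeholders, not fitted CIRS values, and the sketch ignores thermal inertia and mutual heating.

```python
import math

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S_SATURN = 15.0      # approximate solar constant at Saturn, W m^-2

def slab_temp(elev_deg, albedo=0.5, eps=1.0):
    # absorbed flux scales with the sine of the solar elevation angle
    absorbed = (1 - albedo) * S_SATURN * math.sin(math.radians(abs(elev_deg)))
    return (absorbed / (eps * SIGMA)) ** 0.25

# Temperature drops as the Sun sinks toward the ring plane
t21 = slab_temp(-21)
t8 = slab_temp(-8)
print(round(t21, 1), round(t8, 1))
```

Even this crude balance reproduces the qualitative trend of the abstract: cooler rings at lower solar elevation, heading toward a minimum at equinox.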

  1. Activation energy for a model ferrous-ferric half reaction from transition path sampling

    NASA Astrophysics Data System (ADS)

    Drechsel-Grau, Christof; Sprik, Michiel

    2012-01-01

    Activation parameters for the model oxidation half reaction of the classical aqueous ferrous ion are compared for different molecular simulation techniques. In particular, activation free energies are obtained from umbrella integration and Marcus theory based thermodynamic integration, which rely on the diabatic gap as the reaction coordinate. The latter method also assumes linear response, and both methods obtain the activation entropy and the activation energy from the temperature dependence of the activation free energy. In contrast, transition path sampling does not require knowledge of the reaction coordinate and directly yields the activation energy [C. Dellago and P. G. Bolhuis, Mol. Simul. 30, 795 (2004), 10.1080/08927020412331294869]. Benchmark activation energies from transition path sampling agree within statistical uncertainty with activation energies obtained from standard techniques requiring knowledge of the reaction coordinate. In addition, it is found that the activation energy for this model system is significantly smaller than the activation free energy for the Marcus model, approximately half the value, implying an equally large entropy contribution.
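
The quantities being compared can be illustrated with the classical Marcus expressions; the numbers below are invented for illustration and are not the paper's values. The activation free energy is ΔG = (λ + ΔA)²/(4λ), the activation entropy follows from the temperature dependence of ΔG (ΔS = -dΔG/dT), and the activation energy is Ea = ΔG + TΔS.

```python
# Marcus activation free energy; for the symmetric half reaction the
# driving force dA is zero, so dG reduces to lam/4.
def dG(lam_val, dA=0.0):
    return (lam_val + dA) ** 2 / (4.0 * lam_val)

# Hypothetical weak linear temperature dependence of the reorganization
# energy (eV); this value is chosen only to make Ea come out below dG.
def lam(T):
    return 2.0 + (T - 300.0) / 300.0

T = 300.0
h = 1.0
dS = -(dG(lam(T + h)) - dG(lam(T - h))) / (2 * h)   # finite difference
Ea = dG(lam(T)) + T * dS
print(round(dG(lam(T)), 3), round(Ea, 3))
```

With these invented numbers Ea comes out at half of ΔG, mirroring the direction (though not the actual magnitudes) of the entropy effect reported in the abstract.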

  2. Daily indoor-to-outdoor temperature and humidity relationships: a sample across seasons and diverse climatic regions

    NASA Astrophysics Data System (ADS)

    Nguyen, Jennifer L.; Dockery, Douglas W.

    2016-02-01

    The health consequences of heat and cold are usually evaluated based on associations with outdoor measurements collected at a nearby weather reporting station. However, people in the developed world spend little time outdoors, especially during extreme temperature events. We examined the association between indoor and outdoor temperature and humidity in a range of climates. We measured indoor temperature, apparent temperature, relative humidity, dew point, and specific humidity (a measure of moisture content in air) for one calendar year (2012) in a convenience sample of eight diverse locations ranging from the equatorial region (10 °N) to the Arctic (64 °N). We then compared the indoor conditions to outdoor values recorded at the nearest airport weather station. We found that the shape of the indoor-to-outdoor temperature and humidity relationships varied across seasons and locations. Indoor temperatures showed little variation across season and location. There was large variation in indoor relative humidity between seasons and between locations which was independent of outdoor airport measurements. On the other hand, indoor specific humidity, and to a lesser extent dew point, tracked with outdoor, airport measurements both seasonally and between climates, across a wide range of outdoor temperatures. These results suggest that, in general, outdoor measures of actual moisture content in air better capture indoor conditions than outdoor temperature and relative humidity. Therefore, in studies where water vapor is among the parameters of interest for examining weather-related health effects, outdoor measurements of actual moisture content can be more reliably used as a proxy for indoor exposure than the more commonly examined variables of temperature and relative humidity.
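
The moisture variables compared in the study can be computed from temperature and relative humidity. This is a generic sketch using a standard Magnus-type approximation (the 17.625/243.04 constants are a common published fit, assumed here, not the authors' method).

```python
import math

def sat_vapor_pressure_hpa(t_c):
    # Magnus-type saturation vapor pressure over water, hPa
    return 6.1094 * math.exp(17.625 * t_c / (t_c + 243.04))

def dew_point_c(t_c, rh):
    # invert the Magnus formula for the dew point
    gamma = math.log(rh / 100.0) + 17.625 * t_c / (t_c + 243.04)
    return 243.04 * gamma / (17.625 - gamma)

def specific_humidity_g_per_kg(t_c, rh, p_hpa=1013.25):
    e = sat_vapor_pressure_hpa(t_c) * rh / 100.0   # vapor pressure, hPa
    w = 0.622 * e / (p_hpa - e)                    # mixing ratio, kg/kg
    return 1000.0 * w / (1.0 + w)

# Typical indoor conditions: the actual moisture content (dew point,
# specific humidity) is what tracks outdoor air, not relative humidity.
dp = dew_point_c(22.0, 50.0)
q = specific_humidity_g_per_kg(22.0, 50.0)
print(round(dp, 1), round(q, 1))
```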

  3. An Importance Sampling EM Algorithm for Latent Regression Models

    ERIC Educational Resources Information Center

    von Davier, Matthias; Sinharay, Sandip

    2007-01-01

    Reporting methods used in large-scale assessments such as the National Assessment of Educational Progress (NAEP) rely on latent regression models. To fit the latent regression model using the maximum likelihood estimation technique, multivariate integrals must be evaluated. In the computer program MGROUP used by the Educational Testing Service for…

  4. Homogenous Nucleation and Crystal Growth in a Model Liquid from Direct Energy Landscape Sampling Simulation

    NASA Astrophysics Data System (ADS)

    Walter, Nathan; Zhang, Yang

    Nucleation and crystal growth are understood to be activated processes involving the crossing of free-energy barriers. Attempts to capture the entire crystallization process over long timescales with molecular dynamics simulations have met major obstacles because of molecular dynamics' temporal constraints. Herein, we circumvent this temporal limitation by using a brute-force, metadynamics-like, adaptive basin-climbing algorithm to directly sample the free-energy landscape of a model liquid Argon. The algorithm biases the system to evolve from an amorphous, liquid-like structure towards an FCC crystal through inherent structures, and then traces back the energy barriers. Consequently, the sampled timescale is macroscopically long. We observe that the formation of a crystal involves two processes, each with a unique temperature-dependent energy barrier. One barrier corresponds to the crystal nucleus formation; the other barrier corresponds to the crystal growth. We find the two processes dominate in different temperature regimes. Compared to other computational techniques, our method requires no assumptions about the shape or chemical potential of the critical crystal nucleus. The success of this method is encouraging for studying the crystallization of more complex systems.

  5. Modelling of temperature and perfusion during scalp cooling

    NASA Astrophysics Data System (ADS)

    Janssen, F. E. M.; Van Leeuwen, G. M. J.; Van Steenhoven, A. A.

    2005-09-01

    Hair loss is a feared side effect of chemotherapy treatment. It may be prevented by cooling the scalp during administration of cytostatics. The supposed mechanism is that by cooling the scalp, both temperature and perfusion are diminished, affecting drug supply and drug uptake in the hair follicle. However, the effect of scalp cooling varies strongly. To gain more insight into the effect of cooling, a computer model has been developed that describes heat transfer in the human head during scalp cooling. Of main interest in this study are the mutual influences of scalp temperature and perfusion during cooling. Results of the standard head model show that the temperature of the scalp skin is reduced from 34.4 °C to 18.3 °C, reducing tissue blood flow to 25%. Based upon variations in both thermal properties and head anatomies found in the literature, a parameter study was performed. The results of this parameter study show that the most important parameters affecting both temperature and perfusion are the perfusion coefficient Q10 and the thermal resistances of both the fat and the hair layer. The variations in the parameter study led to skin temperature ranging from 10.1 °C to 21.8 °C, which in turn reduced relative perfusion to 13% and 33%, respectively.
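
The perfusion coefficient Q10 named above enters through the usual Q10 relation, perfusion(T) = perfusion(T_ref)·Q10^((T - T_ref)/10). A minimal sketch, with Q10 = 2.0 as an illustrative value rather than the paper's fitted coefficient:

```python
# Q10 temperature scaling of tissue perfusion (Q10 value is illustrative)
def relative_perfusion(t_skin, t_ref=34.4, q10=2.0):
    return q10 ** ((t_skin - t_ref) / 10.0)

# Cooling scalp skin from 34.4 C to 18.3 C (the standard-model result
# quoted above) cuts perfusion to roughly a third with Q10 = 2
r = relative_perfusion(18.3)
print(round(r, 2))
```

The paper's reported reduction to 25% implies a somewhat stronger temperature dependence than this placeholder Q10.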

  6. Visual Sample Plan (VSP) Models and Code Verification

    SciTech Connect

    Gilbert, Richard O.; Davidson, James R.; Wilson, John E.; Pulsipher, Brent A.

    2001-03-06

    VSP is an easy to use, visual and graphic software tool being developed to select the right number and location of environmental samples so that the results of statistical tests performed to provide input to environmental decisions have the required confidence and performance. It is a significant help for implementing the 6th and 7th steps of the Data Quality Objectives (DQO) planning process ("Specify Tolerable Limits on Decision Errors" and "Optimize the Design for Obtaining Data," respectively).

  7. Comparison of climate model simulated and observed borehole temperature profiles

    NASA Astrophysics Data System (ADS)

    Gonzalez-Rouco, J. F.; Stevens, M. B.; Beltrami, H.; Goosse, H.; Rath, V.; Zorita, E.; Smerdon, J.

    2009-04-01

    Advances in understanding climate variability through the last millennium lean on simulation and reconstruction efforts. Progress in the integration of both approaches can potentially provide new means of assessing confidence in model projections of future climate change, of constraining the range of climate sensitivity and/or of attributing past changes found in proxy evidence to external forcing. This work specifically addresses possible strategies for the comparison of paleoclimate model simulations with the information recorded in borehole temperature profiles (BTPs). Initial efforts have allowed us to design means of comparing model-simulated and observed BTPs in the context of the climate of the last millennium. This can be done by diffusing the simulated temperatures into the ground in order to produce synthetic BTPs that can in turn be compared with collocated, real BTPs. Results suggest that borehole temperatures at large and regional scales are sensitive to changes in external forcing over the last centuries. The comparison between borehole climate reconstructions and model simulations may also be subject to non-negligible uncertainties produced by the influence of past glacial and Holocene changes. While the thermal influence of the last deglaciation can be found well below 1000 m depth, such changes can potentially influence our understanding of subsurface climate in the top ca. 500 m. This issue is illustrated in control and externally forced climate simulations of the last millennium with the ECHO-G and LOVECLIM models, respectively.
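
The physics behind "diffusing the simulated temperatures into the ground" is half-space heat conduction: a surface temperature step ΔT0 applied t seconds ago appears at depth z as ΔT(z) = ΔT0·erfc(z/(2·√(κt))). A hedged sketch with a generic rock diffusivity (the values are placeholders, not the study's forcing history):

```python
import math

# Subsurface anomaly from a surface step change dt0_k applied t_s ago,
# for thermal diffusivity kappa (m^2/s). Values are illustrative.
def anomaly(z_m, dt0_k=1.0, kappa=1e-6, t_s=100 * 3.15e7):  # ~100 years
    return dt0_k * math.erfc(z_m / (2.0 * math.sqrt(kappa * t_s)))

# A century-old 1 K surface warming is concentrated in the top few
# hundred meters, which is why the top ca. 500 m records recent climate
for z in (0.0, 50.0, 150.0, 500.0):
    print(z, round(anomaly(z), 3))
```

A synthetic BTP superposes many such contributions from the simulated surface-temperature history; much older events (the deglaciation) have diffused far deeper.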

  8. Temperature-Corrected Model of Turbulence in Hot Jet Flows

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Pao, S. Paul; Massey, Steven J.; Elmiligui, Alaa

    2007-01-01

    An improved correction has been developed to increase the accuracy with which certain formulations of computational fluid dynamics predict mixing in shear layers of hot jet flows. The CFD formulations in question are those derived from the Reynolds-averaged Navier-Stokes equations closed by means of a two-equation model of turbulence, known as the k-epsilon model, wherein effects of turbulence are summarized by means of an eddy viscosity. The need for a correction arises because it is well known among specialists in CFD that two-equation turbulence models, which were developed and calibrated for room-temperature, low Mach-number, plane-mixing-layer flows, underpredict mixing in shear layers of hot jet flows. The present correction represents an attempt to account for increased mixing that takes place in jet flows characterized by high gradients of total temperature. This correction also incorporates a commonly accepted, previously developed correction for the effect of compressibility on mixing.

  9. Zero temperature landscape of the random sine-Gordon model

    SciTech Connect

    Sanchez, A.; Bishop, A.R.; Cai, D.

    1997-04-01

    We present a preliminary summary of the zero temperature properties of the two-dimensional random sine-Gordon model of surface growth on disordered substrates. We found that the properties of this model can be accurately computed by using lattices of moderate size, as the behavior of the model turns out to be independent of the size above a certain length ({approx} 128 x 128 lattices). Subsequently, we show that the behavior of the height difference correlation function is of (log r){sup 2} type up to a certain correlation length ({xi} {approx} 20), which rules out predictions of log r behavior for all temperatures obtained by replica-variational techniques. Our results open the way to a better understanding of the complex landscape presented by this system, which has been the subject of very many (contradictory) analyses.
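
The diagnostic in question is the height-difference correlation function C(r) = <(h(x+r) - h(x))²> measured on the lattice. A minimal sketch of the measurement only, on an uncorrelated random surface (for which C(r) is flat at twice the variance); the paper's surfaces come instead from minimizing the random sine-Gordon energy.

```python
import numpy as np

# Placeholder surface: uncorrelated unit-variance noise on a 128 x 128
# lattice, used only to demonstrate the C(r) measurement itself.
rng = np.random.default_rng(2)
h = rng.normal(0, 1, (128, 128))

def height_diff_corr(h, r):
    # <(h(x + r) - h(x))^2> with periodic boundaries along x
    dx = h - np.roll(h, r, axis=1)
    return float(np.mean(dx ** 2))

for r in (1, 4, 16, 64):
    print(r, round(height_diff_corr(h, r), 3))
```

On a genuine ground-state surface one would fit these values against log r and (log r)² over r below the correlation length ξ ≈ 20.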

  10. Hall Thruster Modeling with a Given Temperature Profile

    SciTech Connect

    L. Dorf; V. Semenov; Y. Raitses; N.J. Fisch

    2002-06-12

    A quasi one-dimensional steady-state model of the Hall thruster is presented. For given mass flow rate, magnetic field profile, and discharge voltage the unique solution can be constructed, assuming that the thruster operates in one of the two regimes: with or without the anode sheath. It is shown that for a given temperature profile, the applied discharge voltage uniquely determines the operating regime; for discharge voltages greater than a certain value, the sheath disappears. That result is obtained over a wide range of incoming neutral velocities, channel lengths and widths, and cathode plane locations. A good correlation between the quasi one-dimensional model and experimental results can be achieved by selecting an appropriate temperature profile. We also show how the presented model can be used to obtain a two-dimensional potential distribution.

  11. Unified constitutive models for high-temperature structural applications

    NASA Technical Reports Server (NTRS)

    Lindholm, U. S.; Chan, K. S.; Bodner, S. R.; Weber, R. M.; Walker, K. P.

    1988-01-01

    Unified constitutive models are characterized by the use of a single inelastic strain rate term for treating all aspects of inelastic deformation, including plasticity, creep, and stress relaxation under monotonic or cyclic loading. The structure of this class of constitutive theory pertinent for high temperature structural applications is first outlined and discussed. The effectiveness of the unified approach for representing high temperature deformation of Ni-base alloys is then evaluated by extensive comparison of experimental data and predictions of the Bodner-Partom and the Walker models. The use of the unified approach for hot section structural component analyses is demonstrated by applying the Walker model in finite element analyses of a benchmark notch problem and a turbine blade problem.

  12. Shock structure and temperature overshoot in macroscopic multi-temperature model of mixtures

    SciTech Connect

    Madjarević, Damir Simić, Srboljub; Ruggeri, Tommaso

    2014-10-15

    The paper discusses the shock structure in a macroscopic multi-temperature model of gaseous mixtures, recently established within the framework of extended thermodynamics. The study is restricted to weak and moderate shocks in a binary mixture of ideal gases with negligible viscosity and heat conductivity. The model predicts the existence of a temperature overshoot of the heavier constituent, like more sophisticated approaches, but also reveals its non-monotonic behavior, which is not documented in other studies. This phenomenon is explained as a consequence of weak energy exchange between the constituents, due either to a large mass difference or to large rarefaction of the mixture. In the range of small Mach numbers it is also shown that the shock thickness (or equivalently, the inverse of the Knudsen number) decreases with increasing Mach number, as well as when the mixture tends to behave like a single-component gas (small mass difference and/or presence of one constituent in traces).

  13. On effective temperature in network models of collective behavior.

    PubMed

    Porfiri, Maurizio; Ariel, Gil

    2016-04-01

    Collective behavior of self-propelled units is studied analytically within the Vectorial Network Model (VNM), a mean-field approximation of the well-known Vicsek model. We propose a dynamical systems framework to study the stochastic dynamics of the VNM in the presence of general additive noise. We establish that a single parameter, which is a linear function of the circular mean of the noise, controls the macroscopic phase of the system: ordered or disordered. By establishing a fluctuation-dissipation relation, we posit that this parameter can be regarded as an effective temperature of collective behavior. The exact critical temperature is obtained analytically for systems with small connectivity, equivalent to low-density ensembles of self-propelled units. Numerical simulations are conducted to demonstrate the applicability of this new notion of effective temperature to the Vicsek model. The identification of an effective temperature of collective behavior is an important step toward understanding order-disorder phase transitions, informing consistent coarse-graining techniques and explaining the physics underlying the emergence of collective phenomena. PMID:27131488
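
The noise-as-temperature idea can be made concrete with a minimal Vicsek-style simulation (not the VNM itself; all parameter values are invented): headings relax toward the neighborhood mean plus uniform angular noise of width eta, and wide noise destroys the ordered phase.

```python
import numpy as np

# Minimal Vicsek-style update on a periodic box; eta plays the role of
# the "effective temperature" controlling the order-disorder transition.
def polarization(eta, n=300, steps=200, box=5.0, r=1.0, seed=3):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, box, (n, 2))
    theta = rng.uniform(-np.pi, np.pi, n)
    for _ in range(steps):
        d = pos[:, None, :] - pos[None, :, :]
        d -= box * np.round(d / box)            # periodic boundaries
        near = (d ** 2).sum(-1) < r ** 2        # includes self
        # circular mean heading of neighbors, then additive noise
        mean_th = np.arctan2((near * np.sin(theta)).sum(1),
                             (near * np.cos(theta)).sum(1))
        theta = mean_th + rng.uniform(-eta / 2, eta / 2, n)
        pos = (pos + 0.1 * np.column_stack([np.cos(theta),
                                            np.sin(theta)])) % box
    # order parameter: magnitude of the mean heading vector
    return float(np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))

p_ord = polarization(0.3)   # narrow noise: ordered phase
p_dis = polarization(6.0)   # noise spanning nearly the full circle
print(round(p_ord, 2), round(p_dis, 2))
```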

  14. Numerical Modeling of High-Temperature Corrosion Processes

    NASA Technical Reports Server (NTRS)

    Nesbitt, James A.

    1995-01-01

    Numerical modeling of the diffusional transport associated with high-temperature corrosion processes is reviewed. These corrosion processes include external scale formation and internal subscale formation during oxidation, coating degradation by oxidation and substrate interdiffusion, carburization, sulfidation and nitridation. The studies that are reviewed cover such complexities as concentration-dependent diffusivities, cross-term effects in ternary alloys, and internal precipitation where several compounds of the same element form (e.g., carbides of Cr) or several compounds exist simultaneously (e.g., carbides containing varying amounts of Ni, Cr, Fe or Mo). In addition, the studies involve a variety of boundary conditions that vary with time and temperature. Finite-difference (F-D) techniques have been applied almost exclusively to model either the solute or corrodant transport in each of these studies. Hence, the paper first reviews the use of F-D techniques to develop solutions to the diffusion equations with various boundary conditions appropriate to high-temperature corrosion processes. The bulk of the paper then reviews various F-D modeling studies of diffusional transport associated with high-temperature corrosion.
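
The simplest instance of the finite-difference solutions reviewed here is explicit FTCS for one-dimensional diffusion, dC/dt = D·d²C/dx², with a fixed surface concentration: a generic sketch of corrodant transport into an alloy, with placeholder values for D and the domain size.

```python
import numpy as np

# Explicit FTCS scheme for 1D diffusion with a fixed surface boundary.
# Parameter values are illustrative, not from any of the reviewed studies.
def diffuse(nx=101, nt=2000, L=1e-4, D=1e-12, c_surf=1.0):
    dx = L / (nx - 1)
    dt = 0.4 * dx * dx / D        # r = D*dt/dx^2 = 0.4 <= 0.5: stable
    c = np.zeros(nx)
    c[0] = c_surf                 # fixed concentration at the surface
    for _ in range(nt):
        # interior update; RHS uses the pre-update values
        c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    return c

profile = diffuse()
print(profile[:5].round(3))       # concentration decays into the depth
```

The studies reviewed layer concentration-dependent D, moving boundaries and precipitation source terms onto this same skeleton.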

  15. Shear modeling: thermoelasticity at high temperature and pressure for tantalum

    SciTech Connect

    Orlikowski, D; Soderlind, P; Moriarty, J A

    2004-12-06

    For large-scale constitutive strength models the shear modulus is typically assumed to be linearly dependent on temperature. However, for materials compressed beyond the Hugoniot or in regimes where there is very little experimental data, accurate and validated models must be used. To this end, we present here a new methodology that fully accounts for electron- and ion-thermal contributions to the elastic moduli over broad ranges of temperature (<20,000 K) and pressure (<10 Mbar). In this approach, the full potential linear muffin-tin orbital (FP-LMTO) method for the cold and electron-thermal contributions is closely coupled with ion-thermal contributions. For the latter, two separate approaches are used. In one approach, the quasi-harmonic, ion-thermal contribution is obtained through a Brillouin zone sum of strain derivatives of the phonons, and in the other a full anharmonic ion-thermal contribution is obtained directly through Monte Carlo (MC) canonical distribution averages of strain derivatives on the multi-ion potential itself. Both approaches use quantum-based interatomic potentials derived from model generalized pseudopotential theory (MGPT). For tantalum, the resulting elastic moduli are compared to available ultrasonic measurements and diamond-anvil-cell compression experiments. Over the range of temperature and pressure considered, the results are then used in a polycrystalline average of the shear modulus to assess the linear temperature dependence for Ta.

  16. On effective temperature in network models of collective behavior.

    PubMed

    Porfiri, Maurizio; Ariel, Gil

    2016-04-01

    Collective behavior of self-propelled units is studied analytically within the Vectorial Network Model (VNM), a mean-field approximation of the well-known Vicsek model. We propose a dynamical systems framework to study the stochastic dynamics of the VNM in the presence of general additive noise. We establish that a single parameter, which is a linear function of the circular mean of the noise, controls the macroscopic phase of the system: ordered or disordered. By establishing a fluctuation-dissipation relation, we posit that this parameter can be regarded as an effective temperature of collective behavior. The exact critical temperature is obtained analytically for systems with small connectivity, equivalent to low-density ensembles of self-propelled units. Numerical simulations are conducted to demonstrate the applicability of this new notion of effective temperature to the Vicsek model. The identification of an effective temperature of collective behavior is an important step toward understanding order-disorder phase transitions, informing consistent coarse-graining techniques and explaining the physics underlying the emergence of collective phenomena.
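    The ordered/disordered phases controlled by the noise can be seen in a toy simulation of the underlying Vicsek model (not the VNM mean-field itself); all parameter values below are illustrative, and distances ignore the periodic wrap for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

def vicsek_order(n, steps, eta, box=10.0, r=1.0, v0=0.5):
    """Minimal 2-D Vicsek simulation: each unit adopts the circular mean
    heading of its neighbours (self included) plus uniform angular noise of
    width eta, then moves at constant speed.  Returns the polar order
    parameter (1 = fully aligned, ~0 = disordered)."""
    pos = rng.uniform(0.0, box, (n, 2))
    theta = rng.uniform(-np.pi, np.pi, n)
    for _ in range(steps):
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        adj = d < r                            # neighbourhood matrix
        mean_dir = np.arctan2(adj @ np.sin(theta), adj @ np.cos(theta))
        theta = mean_dir + rng.uniform(-eta / 2, eta / 2, n)
        pos = (pos + v0 * np.column_stack((np.cos(theta), np.sin(theta)))) % box
    return float(np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))

# The noise width plays the role of an effective temperature: weak noise
# yields an ordered phase, full-circle noise a disordered one.
ordered = vicsek_order(100, 120, eta=0.3)
disordered = vicsek_order(100, 120, eta=2 * np.pi)
```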

  17. On effective temperature in network models of collective behavior

    NASA Astrophysics Data System (ADS)

    Porfiri, Maurizio; Ariel, Gil

    2016-04-01

    Collective behavior of self-propelled units is studied analytically within the Vectorial Network Model (VNM), a mean-field approximation of the well-known Vicsek model. We propose a dynamical systems framework to study the stochastic dynamics of the VNM in the presence of general additive noise. We establish that a single parameter, which is a linear function of the circular mean of the noise, controls the macroscopic phase of the system—ordered or disordered. By establishing a fluctuation-dissipation relation, we posit that this parameter can be regarded as an effective temperature of collective behavior. The exact critical temperature is obtained analytically for systems with small connectivity, equivalent to low-density ensembles of self-propelled units. Numerical simulations are conducted to demonstrate the applicability of this new notion of effective temperature to the Vicsek model. The identification of an effective temperature of collective behavior is an important step toward understanding order-disorder phase transitions, informing consistent coarse-graining techniques and explaining the physics underlying the emergence of collective phenomena.

  18. Modelling spoilage of fresh turbot and evaluation of a time-temperature integrator (TTI) label under fluctuating temperature.

    PubMed

    Nuin, Maider; Alfaro, Begoña; Cruz, Ziortza; Argarate, Nerea; George, Susie; Le Marc, Yvan; Olley, June; Pin, Carmen

    2008-10-31

    Kinetic models were developed to predict the microbial spoilage and the sensory quality of fresh fish and to evaluate the efficiency of a commercial time-temperature integrator (TTI) label, Fresh Check®, to monitor shelf life. Farmed turbot (Psetta maxima) samples were packaged in PVC film and stored at 0, 5, 10 and 15 degrees C. Microbial growth and sensory attributes were monitored at regular time intervals. The response of the Fresh Check device was measured at the same temperatures during the storage period. The sensory perception was quantified according to a global sensory indicator obtained by principal component analysis as well as to the Quality Index Method, QIM, as described by Rahman and Olley [Rahman, H.A., Olley, J., 1984. Assessment of sensory techniques for quality assessment of Australian fish. CSIRO Tasmanian Regional Laboratory. Occasional paper n. 8. Available from the Australian Maritime College library. Newnham. Tasmania]. Both methods were found equally valid to monitor the loss of sensory quality. The maximum specific growth rate of spoilage bacteria, the rate of change of the sensory indicators and the rate of change of the colour measurements of the TTI label were modelled as a function of temperature. The temperature had a similar effect on the bacteria, sensory and Fresh Check kinetics. At the time of sensory rejection, the bacterial load was ca. 10^5-10^6 cfu/g. The end of shelf life indicated by the Fresh Check label was close to the sensory rejection time. The performance of the models was validated under fluctuating temperature conditions by comparing the predicted and measured values for all microbial, sensory and TTI responses. The models have been implemented in a Visual Basic add-in for Excel called "Fish Shelf Life Prediction (FSLP)". This program predicts sensory acceptability and growth of spoilage bacteria in fish and the response of the TTI at constant and fluctuating temperature conditions. The program is freely
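    The temperature dependence of spoilage kinetics of this kind is commonly captured with a Ratkowsky square-root model, and a shelf-life prediction then integrates the rate over the storage temperature history, which is also, loosely, what a TTI label does chemically. The sketch below follows that logic; the parameters b, T_min, and the spoilage threshold are invented, not the fitted turbot values from the paper.

```python
import numpy as np

def sqrt_model_rate(T_c, b=0.03, T_min=-5.0):
    """Ratkowsky square-root model: sqrt(mu_max) = b * (T - T_min),
    so the growth rate itself is (b * (T - T_min))**2."""
    return (b * (np.asarray(T_c, dtype=float) - T_min)) ** 2

def hours_to_spoilage(T_profile_c, dt_h, growth_limit):
    """Integrate the growth rate over a (possibly fluctuating) storage
    temperature history and return the time at which cumulative growth
    crosses the spoilage threshold."""
    growth = np.cumsum(sqrt_model_rate(T_profile_c)) * dt_h
    return float(np.searchsorted(growth, growth_limit)) * dt_h

# Constant storage at 10 C should spoil much faster than storage at 0 C.
life_0 = hours_to_spoilage([0.0] * 2000, dt_h=1.0, growth_limit=10.0)
life_10 = hours_to_spoilage([10.0] * 2000, dt_h=1.0, growth_limit=10.0)
```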

  19. Effects of high temperature on different restorations in forensic identification: Dental samples and mandible

    PubMed Central

    Patidar, Kalpana A; Parwani, Rajkumar; Wanjari, Sangeeta

    2010-01-01

    Introduction: The forensic odontologist strives to utilize the charred human dentition throughout each stage of dental evaluation, and restorations are as unique as fingerprints and their radiographic morphology as well as the types of filling materials are often the main feature for identification. The knowledge of detecting residual restorative material and composition of unrecovered adjacent restoration is a valuable tool in the presumptive identification of the dentition of a burned victim. Gold, silver amalgam, silicate restoration, and so on, have a different resistance to prolonged high temperature, therefore, the identification of burned bodies can be correlated with adequate qualities and quantities of the traces. Most of the dental examination relies heavily on the presence of the restoration as well as the relationship of one dental structure to another. This greatly narrows the research for the final identification that is based on postmortem data. Aim: The purpose of this study is to examine the resistance of teeth and different restorative materials, and the mandible, to variable temperature and duration, for the purpose of identification. Materials and Methods: The study was conducted on 72 extracted teeth which were divided into six groups of 12 teeth each based on the type of restorative material (group 1 - unrestored teeth, group 2 - teeth restored with Zn3(PO4)2, group 3 - silver amalgam, group 4 - glass ionomer cement, group 5 - Ni-Cr metal crown, group 6 - metal ceramic crown) and two specimens of the mandible. The effect of incineration at 400°C (5 mins, 15 mins, 30 mins) and 1100°C (15 mins) was studied. Results: Damage to the teeth subjected to variable temperatures and time can be categorized as intact (no damage), scorched (superficially parched and discolored), charred (reduced to carbon by incomplete combustion) and incinerated (burned to ashes). PMID:21189989

  20. A time series investigation of the stability of nitramine and nitroaromatic explosives in surface water samples at ambient temperature.

    PubMed

    Douglas, Thomas A; Johnson, Laura; Walsh, Marianne; Collins, Charles

    2009-06-01

    We investigated the fate of nitramine and nitroaromatic explosives compounds in surface water to determine how surface water biogeochemistry affects the stability of explosives compounds. Five river water samples and 18.2 MΩ deionized water were spiked with 10 explosives compounds and the samples were held at ambient temperatures (20 degrees C) for 85 d. Surface water represented three rivers with a range of total organic carbon concentrations and two rivers draining glacial watersheds with minimal organic carbon but high suspended solids. 18.2 MΩ deionized water exhibited no explosives transformation. Nitroaromatic compound loss from solution was generally: tetryl>1,3,5-TNB>TNT>1,3-DNB>2,4-DNT. The HMX, RDX, 2,6-DNT, 2ADNT, and 4ADNT concentrations remained somewhat stable over time. The surface water with the highest total organic carbon concentration exhibited the most dramatic nitroaromatic loss from solution with tetryl, 1,3,5-TNB and TNT concentrations decreasing to below detection within 10 d. The two water samples with high suspended solid loads exhibited substantial nitroaromatic explosives loss which could be attributable to adsorption onto fresh mineral surfaces and/or enhanced microbiologic biotransformation on mineral surfaces. An identical set of six water samples was spiked with explosives and acidified with sodium bisulfate to a pH of 2. Acidification maintained stable explosives concentrations in most of the water samples for the entire 85 d. Our results suggest sampling campaigns for explosives in surface water must account for biogeochemical characteristics. Acidification of samples with sodium bisulfate immediately following collection is a robust way to preserve nitroaromatic compound concentrations even at ambient temperature for up to three months.
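    Loss of a compound from solution in studies like this one is often summarized with a first-order rate constant. A minimal sketch of estimating k and a half-life from a concentration time series; the data below are synthetic and illustrative, generated from assumed first-order loss with ~2% noise, not measurements from the paper.

```python
import numpy as np

# Synthetic concentrations (mg/L) of a nitroaromatic-like compound over 85 d,
# generated from C(t) = C0 * exp(-k t) with k = 0.05 /d plus ~2% noise.
t_days = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 85.0])
noise = 1.0 + 0.02 * np.array([0.5, -1.0, 1.0, -0.5, 0.8, -0.2])
conc = 10.0 * np.exp(-0.05 * t_days) * noise

# First-order kinetics are linear in log-concentration, so k is estimated
# as minus the slope of a least-squares line through log(C) vs t.
slope, intercept = np.polyfit(t_days, np.log(conc), 1)
k_est = -slope
half_life_days = np.log(2) / k_est
```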

  1. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    SciTech Connect

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.
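    A stripped-down version of the nested sampling core can illustrate the evidence estimate. The constrained step here is done by naive rejection rather than the paper's HMC, and the toy likelihood and all settings are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_like(x):
    """Toy standard-normal log-likelihood under a U(-5, 5) prior, so the
    evidence is approximately (1/10) * integral of the N(0,1) pdf ~ 0.1."""
    return -0.5 * x**2 - 0.5 * np.log(2.0 * np.pi)

def nested_sampling(n_live=100, n_iter=600):
    """Bare-bones Nested Sampling: at each step discard the worst live
    point, credit it with the shrinking prior-mass weight, and replace it
    by a prior draw constrained to higher likelihood (here via rejection;
    the paper's HNS uses Hamiltonian Monte Carlo for this step)."""
    live = rng.uniform(-5.0, 5.0, n_live)
    log_z = -np.inf
    log_shrink = np.log(1.0 - np.exp(-1.0 / n_live))
    for i in range(n_iter):
        worst = np.argmin(log_like(live))
        log_l_worst = log_like(live[worst])
        # weight of the discarded shell: X_i * (1 - e^{-1/n_live})
        log_z = np.logaddexp(log_z, log_shrink - i / n_live + log_l_worst)
        while True:                           # constrained prior sampling
            x = rng.uniform(-5.0, 5.0)
            if log_like(x) > log_l_worst:
                live[worst] = x
                break
    return log_z

log_z = nested_sampling()   # true log-evidence is log(0.1), about -2.30
```

    Rejection sampling collapses in high dimensions because the constrained region shrinks exponentially, which is exactly why the paper replaces this step with HMC.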

  2. Effects of Low-Temperature Plasma-Sterilization on Mars Analog Soil Samples Mixed with Deinococcus radiodurans.

    PubMed

    Schirmack, Janosch; Fiebrandt, Marcel; Stapelmann, Katharina; Schulze-Makuch, Dirk

    2016-01-01

    We used Ar plasma-sterilization at a temperature below 80 °C to examine its effects on the viability of microorganisms when intermixed with tested soil. Due to a relatively low temperature, this method is not thought to affect the properties of a soil, particularly its organic component, to a significant degree. The method has previously been shown to work well on spacecraft parts. The selected microorganism for this test was Deinococcus radiodurans R1, which is known for its remarkable resistance to radiation effects. Our results showed a reduction in microbial counts after applying a low-temperature plasma, but not to a degree suitable for a sterilization of the soil. Even an increase of the treatment duration from 1.5 to 45 min did not achieve satisfying results, but only resulted in a mean cell reduction rate of 75% compared to the untreated control samples.

  3. Effects of Low-Temperature Plasma-Sterilization on Mars Analog Soil Samples Mixed with Deinococcus radiodurans

    PubMed Central

    Schirmack, Janosch; Fiebrandt, Marcel; Stapelmann, Katharina; Schulze-Makuch, Dirk

    2016-01-01

    We used Ar plasma-sterilization at a temperature below 80 °C to examine its effects on the viability of microorganisms when intermixed with tested soil. Due to a relatively low temperature, this method is not thought to affect the properties of a soil, particularly its organic component, to a significant degree. The method has previously been shown to work well on spacecraft parts. The selected microorganism for this test was Deinococcus radiodurans R1, which is known for its remarkable resistance to radiation effects. Our results showed a reduction in microbial counts after applying a low-temperature plasma, but not to a degree suitable for a sterilization of the soil. Even an increase of the treatment duration from 1.5 to 45 min did not achieve satisfying results, but only resulted in a mean cell reduction rate of 75% compared to the untreated control samples. PMID:27240407

  4. Effects of Low-Temperature Plasma-Sterilization on Mars Analog Soil Samples Mixed with Deinococcus radiodurans.

    PubMed

    Schirmack, Janosch; Fiebrandt, Marcel; Stapelmann, Katharina; Schulze-Makuch, Dirk

    2016-01-01

    We used Ar plasma-sterilization at a temperature below 80 °C to examine its effects on the viability of microorganisms when intermixed with tested soil. Due to a relatively low temperature, this method is not thought to affect the properties of a soil, particularly its organic component, to a significant degree. The method has previously been shown to work well on spacecraft parts. The selected microorganism for this test was Deinococcus radiodurans R1, which is known for its remarkable resistance to radiation effects. Our results showed a reduction in microbial counts after applying a low-temperature plasma, but not to a degree suitable for a sterilization of the soil. Even an increase of the treatment duration from 1.5 to 45 min did not achieve satisfying results, but only resulted in a mean cell reduction rate of 75% compared to the untreated control samples. PMID:27240407

  5. Directional infrared temperature and emissivity of vegetation: Measurements and models

    NASA Technical Reports Server (NTRS)

    Norman, J. M.; Castello, S.; Balick, L. K.

    1994-01-01

    Directional thermal radiance from vegetation depends on many factors, including the architecture of the plant canopy, thermal irradiance, emissivity of the foliage and soil, view angle, slope, and the kinetic temperature distribution within the vegetation-soil system. A one-dimensional model, which includes the influence of topography, indicates that thermal emissivity of vegetation canopies may remain constant with view angle, or emissivity may increase or decrease as view angle from nadir increases. Typically, variations of emissivity with view angle are less than 0.01. As view angle increases away from nadir, directional infrared canopy temperature usually decreases but may remain nearly constant or even increase. Variations in directional temperature with view angle may be 5 °C or more. Model predictions of directional emissivity are compared with field measurements in corn canopies and over a bare soil using a method that requires two infrared thermometers, one sensitive to the 8 to 14 micrometer wavelength band and a second to the 14 to 22 micrometer band. After correction for CO2 absorption by the atmosphere, a directional canopy emissivity can be obtained as a function of view angle in the 8 to 14 micrometer band to an accuracy of about 0.005. Modeled and measured canopy emissivities for corn varied slightly with view angle (0.990 at nadir and 0.982 at 75 deg view zenith angle) and did not appear to vary significantly with view angle for the bare soil. Canopy emissivity is generally nearer to unity than leaf emissivity. At high spectral resolution, canopy thermal emissivity may vary by 0.02 with wavelength even though leaf emissivity may vary by 0.07. The one-dimensional model provides reasonably accurate predictions of infrared temperature and can be used to study the dependence of infrared temperature on various plant, soil, and environmental factors.

  6. Temperature-dependent DIET of alkalis from SiO2 films: Comparison with a lunar sample

    NASA Astrophysics Data System (ADS)

    Yakshinskiy, Boris V.; Madey, Theodore E.

    2005-11-01

    We present recent results in an investigation of source mechanisms for the origin of alkali atoms (Na, K) in tenuous planetary atmospheres. A reversible temperature dependence has recently been observed in the electron and photon stimulated desorption (ESD and PSD) of Na from a lunar basalt sample. The observations were attributed to temperature-related variations in binding sites with different desorption rates. We have now measured the reversible temperature-dependence of the ESD yields for neutral Na and K, and ionic Na+ and K+ from an SiO2 surface. The neutral desorption yields demonstrate opposite behavior from the lunar sample, which is presumably associated with different desorption mechanisms. The sticking probability S for atomic K is nearly constant over the substrate temperature range 100-500 K, whereas S for Na decreases with increasing T in this range. To clarify the charge-transfer desorption mechanism, we compare the DIET of monovalent atoms (Na, K) and divalent atoms (Ba). The threshold for ESD of Ba is ~25 eV, much higher than that for Na, K (3 and 4 eV).

  7. Nonparametric Spatial Models for Extremes: Application to Extreme Temperature Data.

    PubMed

    Fuentes, Montserrat; Henry, John; Reich, Brian

    2013-03-01

    Estimating the probability of extreme temperature events is difficult because of limited records across time and the need to extrapolate the distributions of these events, as opposed to just the mean, to locations where observations are not available. Another related issue is the need to characterize the uncertainty in the estimated probability of extreme events at different locations. Although the tools for statistical modeling of univariate extremes are well-developed, extending these tools to model spatial extreme data is an active area of research. In this paper, in order to make inference about spatial extreme events, we introduce a new nonparametric model for extremes. We present a Dirichlet-based copula model that is a flexible alternative to parametric copula models such as the normal and t-copula. The proposed modelling approach is fitted using a Bayesian framework that allows us to take into account different sources of uncertainty in the data and models. We apply our methods to annual maximum temperature values in the east-south-central United States. PMID:24058280
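    The classical univariate baseline that spatial extreme-value models of this kind generalize is an extreme value distribution fitted to annual maxima at a single site. A minimal sketch using a Gumbel fit (a GEV with zero shape) by the method of moments; the synthetic "annual maximum temperatures" and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic annual maximum temperatures (deg C) at one station.
annual_max = 35.0 + 2.0 * rng.gumbel(size=80)

# Method-of-moments Gumbel fit: scale from the sample SD, location from
# the sample mean via the Euler-Mascheroni constant.
scale = np.std(annual_max, ddof=1) * np.sqrt(6.0) / np.pi
loc = np.mean(annual_max) - 0.5772 * scale

def return_level(T_years):
    """Temperature exceeded on average once every T_years."""
    return loc - scale * np.log(-np.log(1.0 - 1.0 / T_years))

p_exceed_40 = 1.0 - np.exp(-np.exp(-(40.0 - loc) / scale))  # P(annual max > 40 C)
level_50 = return_level(50.0)
```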

  8. Modeling Lunar Borehole Temperature in order to Reconstruct Historical Total Solar Irradiance and Estimate Surface Temperature in Permanently Shadowed Regions

    NASA Astrophysics Data System (ADS)

    Wen, G.; Cahalan, R. F.; Miyahara, H.; Ohmura, A.

    2007-12-01

    The Moon is an ideal place to reconstruct historical total solar irradiance (TSI). With undisturbed lunar surface albedo and the very low thermal diffusivity of lunar regolith, changes in solar input lead to changes in lunar surface temperature that diffuse downward to be recorded in the temperature profile in the near-surface layer. Using regolith thermal properties from Apollo, we model the heat transfer in the regolith layer, and compare modeled surface temperature to Apollo observations to check model performance. Using as alternative input scenarios two reconstructed TSI time series from 1610 to 2000 (Lean, 2000; Wang, Lean, and Sheeley 2005), we conclude that the two scenarios can be distinguished by detectable differences in regolith temperature, with the peak difference of about 10 mK occurring at a depth of about 10 m (Miyahara et al., 2007). The possibility that water ice exists in permanently shadowed areas near the lunar poles (Nozette et al., 1997; Spudis et al., 1998) makes it of interest to estimate surface temperature in such dark regions. "Turning off" the Sun in our time-dependent model, we found it would take several hundred years for the surface temperature to drop from ~100 K immediately after sunset down to a nearly constant equilibrium temperature of about 24-38 K, with the range determined by the range of possible input from Earth, from 0 W/m2 without Earth visible, up to about 0.1 W/m2 at maximum Earth phase. A simple equilibrium model (e.g., Huang 2007) is inappropriate to relate the Apollo-observed nighttime temperature to Earth's radiation budget, given the long multi-centennial time scale needed for equilibration of the lunar surface layer after sunset. Although our results provide the key mechanisms for reconstructing historical TSI, further research is required to account for topography of lunar surfaces, and new measurements of regolith thermal properties will also be needed once a new base of operations is
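    The surface-driven diffusion mechanism the abstract relies on can be sketched with an explicit 1-D conduction model: a sinusoidal surface forcing attenuates exponentially with depth on the scale of the thermal skin depth. The diffusivity, layer depth, and forcing amplitude below are rough lunar-like illustrative values, not the Apollo-derived properties used in the study.

```python
import numpy as np

def regolith_envelope(T_mean, dT_surf, period_s, kappa, depth, n, cycles):
    """Explicit 1-D heat conduction in a regolith-like layer driven by a
    sinusoidal surface temperature; returns the last-cycle temperature
    envelope at each depth node.  Constant diffusivity is assumed."""
    dz = depth / n
    dt = 0.2 * dz**2 / kappa                  # within the explicit stability limit
    steps = int(cycles * period_s / dt)
    T = np.full(n + 1, T_mean)
    amp = np.zeros(n + 1)
    for s in range(steps):
        T[0] = T_mean + dT_surf * np.sin(2.0 * np.pi * s * dt / period_s)
        T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        T[-1] = T[-2]                         # insulated lower boundary
        if s >= (cycles - 1) * period_s / dt: # record only the final cycle
            amp = np.maximum(amp, np.abs(T - T_mean))
    return amp

# Illustrative numbers: kappa ~ 1e-8 m^2/s, lunation ~ 2.55e6 s, 1 m layer.
amp = regolith_envelope(T_mean=220.0, dT_surf=120.0, period_s=2.55e6,
                        kappa=1e-8, depth=1.0, n=100, cycles=4)
```

    With these numbers the skin depth sqrt(kappa * period / pi) is roughly 0.09 m, so the 120 K surface swing is damped to well under 1 K by half a metre, which is why slow solar-input trends survive at depth.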

  9. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
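    Although the article works in R, the same Monte Carlo logic can be sketched in Python: posit a data-generating model, simulate at candidate sample sizes, and read off empirical power. The effect size and unit error SD below are assumptions a researcher would supply, which is exactly the extra information the simulation approach demands.

```python
import numpy as np

rng = np.random.default_rng(3)

def power_at_n(n, beta=0.3, n_sim=500, t_crit=1.96):
    """Monte Carlo power for the slope in simple linear regression
    y = beta*x + N(0,1) noise: simulate at sample size n, t-test the
    slope against zero, and count rejections (two-sided, ~5% level)."""
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        y = beta * x + rng.normal(size=n)
        sxx = np.sum((x - x.mean()) ** 2)
        slope = np.sum((x - x.mean()) * (y - y.mean())) / sxx
        resid = y - y.mean() - slope * (x - x.mean())
        se = np.sqrt(np.sum(resid**2) / (n - 2) / sxx)
        rejections += abs(slope / se) > t_crit
    return rejections / n_sim

# Scan candidate sample sizes; pick the smallest whose power reaches 0.80.
powers = {n: power_at_n(n) for n in (30, 60, 90, 120)}
```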

  10. A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models

    ERIC Educational Resources Information Center

    Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.

    2013-01-01

    Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…

  11. Modeling stream temperature in the Anthropocene: An earth system modeling approach

    DOE PAGES

    Li, Hong -Yi; Leung, L. Ruby; Tesfa, Teklu; Voisin, Nathalie; Hejazi, Mohamad; Liu, Lu; Liu, Ying; Rice, Jennie; Wu, Huan; Yang, Xiaofan

    2015-10-29

    A new large-scale stream temperature model has been developed within the Community Earth System Model (CESM) framework. The model is coupled with the Model for Scale Adaptive River Transport (MOSART) that represents river routing and a water management model (WM) that represents the effects of reservoir operations and water withdrawals on flow regulation. The coupled models allow the impacts of reservoir operations and withdrawals on stream temperature to be explicitly represented in a physically based and consistent way. The models have been applied to the Contiguous United States driven by observed meteorological forcing. It is shown that the model is capable of reproducing stream temperature spatiotemporal variation satisfactorily by comparison against the observed data from over 320 USGS stations. Including water management in the models improves the agreement between the simulated and observed streamflow at a large number of stream gauge stations. Both climate and water management are found to have important influence on the spatiotemporal patterns of stream temperature. More interestingly, it is quantitatively estimated that reservoir operation could cool down stream temperature in the summer low-flow season (August-October) by as much as 1-2 °C over many places, as water management generally mitigates low flow, which has important implications to aquatic ecosystems. In conclusion, sensitivity of the simulated stream temperature to input data and reservoir operation rules used in the WM model motivates future directions to address some limitations in the current modeling framework.

  12. Modeling stream temperature in the Anthropocene: An earth system modeling approach

    NASA Astrophysics Data System (ADS)

    Li, Hong-Yi; Ruby Leung, L.; Tesfa, Teklu; Voisin, Nathalie; Hejazi, Mohamad; Liu, Lu; Liu, Ying; Rice, Jennie; Wu, Huan; Yang, Xiaofan

    2015-12-01

    A new large-scale stream temperature model has been developed within the Community Earth System Model (CESM) framework. The model is coupled with the Model for Scale Adaptive River Transport (MOSART) that represents river routing and a water management model (WM) that represents the effects of reservoir operations and water withdrawals on flow regulation. The coupled models allow the impacts of reservoir operations and withdrawals on stream temperature to be explicitly represented in a physically based and consistent way. The models have been applied to the Contiguous United States driven by observed meteorological forcing. Including water management in the models improves the agreement between the simulated and observed streamflow at a large number of stream gauge stations. It is then shown that the model is capable of reproducing stream temperature spatiotemporal variation satisfactorily by comparing against the observed data from over 320 USGS stations. Both climate and water management are found to have important influence on the spatiotemporal patterns of stream temperature. Furthermore, it is quantitatively estimated that reservoir operation could cool down stream temperature in the summer low-flow season (August-October) by as much as 1-2 °C due to enhanced low-flow conditions, which have important implications to aquatic ecosystems. Sensitivity of the simulated stream temperature to input data and reservoir operation rules used in the WM model motivates future directions to address some limitations in the current modeling framework.

  13. Modeling stream temperature in the Anthropocene: An earth system modeling approach

    SciTech Connect

    Li, Hong -Yi; Leung, L. Ruby; Tesfa, Teklu; Voisin, Nathalie; Hejazi, Mohamad; Liu, Lu; Liu, Ying; Rice, Jennie; Wu, Huan; Yang, Xiaofan

    2015-10-29

    A new large-scale stream temperature model has been developed within the Community Earth System Model (CESM) framework. The model is coupled with the Model for Scale Adaptive River Transport (MOSART) that represents river routing and a water management model (WM) that represents the effects of reservoir operations and water withdrawals on flow regulation. The coupled models allow the impacts of reservoir operations and withdrawals on stream temperature to be explicitly represented in a physically based and consistent way. The models have been applied to the Contiguous United States driven by observed meteorological forcing. It is shown that the model is capable of reproducing stream temperature spatiotemporal variation satisfactorily by comparison against the observed data from over 320 USGS stations. Including water management in the models improves the agreement between the simulated and observed streamflow at a large number of stream gauge stations. Both climate and water management are found to have important influence on the spatiotemporal patterns of stream temperature. More interestingly, it is quantitatively estimated that reservoir operation could cool down stream temperature in the summer low-flow season (August-October) by as much as 1-2 °C over many places, as water management generally mitigates low flow, which has important implications to aquatic ecosystems. In conclusion, sensitivity of the simulated stream temperature to input data and reservoir operation rules used in the WM model motivates future directions to address some limitations in the current modeling framework.

  14. Modeling stream temperature in the Anthropocene: An earth system modeling approach

    SciTech Connect

    Li, Hongyi; Leung, Lai-Yung R.; Tesfa, Teklu K.; Voisin, Nathalie; Hejazi, Mohamad I.; Liu, Lu; Liu, Ying; Rice, Jennie S.; Wu, Huan; Yang, Xiaofan

    2015-10-29

    A new large-scale stream temperature model has been developed within the Community Earth System Model (CESM) framework. The model is coupled with the Model for Scale Adaptive River Transport (MOSART) that represents river routing and a water management model (WM) that represents the effects of reservoir operations and water withdrawals on flow regulation. The coupled models allow the impacts of reservoir operations and withdrawals on stream temperature to be explicitly represented in a physically based and consistent way. The models have been applied to the Contiguous United States driven by observed meteorological forcing. It is shown that the model is capable of reproducing stream temperature spatiotemporal variation satisfactorily by comparison against the observed data from over 320 USGS stations. Including water management in the models improves the agreement between the simulated and observed streamflow at a large number of stream gauge stations. Both climate and water management are found to have important influence on the spatiotemporal patterns of stream temperature. More interestingly, it is quantitatively estimated that reservoir operation could cool down stream temperature in the summer low-flow season (August-October) by as much as 1-2 °C over many places, as water management generally mitigates low flow, which has important implications to aquatic ecosystems. Sensitivity of the simulated stream temperature to input data and reservoir operation rules used in the WM model motivates future directions to address some limitations in the current modeling framework.

  15. Modeling and Compensating Temperature-Dependent Non-Uniformity Noise in IR Microbolometer Cameras.

    PubMed

    Wolf, Alejandro; Pezoa, Jorge E; Figueroa, Miguel

    2016-01-01

    Images rendered by uncooled microbolometer-based infrared (IR) cameras are severely degraded by the spatial non-uniformity (NU) noise. The NU noise imposes a fixed-pattern over the true images, and the intensity of the pattern changes with time due to the temperature instability of such cameras. In this paper, we present a novel model and a compensation algorithm for the spatial NU noise and its temperature-dependent variations. The model separates the NU noise into two components: a constant term, which corresponds to a set of NU parameters determining the spatial structure of the noise, and a dynamic term, which scales linearly with the fluctuations of the temperature surrounding the array of microbolometers. We use a black-body radiator and samples of the temperature surrounding the IR array to offline characterize both the constant and the temperature-dependent NU noise parameters. Next, the temperature-dependent variations are estimated online using both a spatially uniform Hammerstein-Wiener estimator and a pixelwise least mean squares (LMS) estimator. We compensate for the NU noise in IR images from two long-wave IR cameras. Results show an excellent NU correction performance and a root mean square error of less than 0.25 °C, when the array's temperature varies by approximately 15 °C. PMID:27447637
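    The two-term noise model and the pixelwise LMS estimator can be caricatured in a few lines: observed pixel = scene + constant pattern + per-pixel gain times camera-temperature drift, with the gain tracked online. The flat scene, noise levels, ramp, and step size are invented; the constant pattern is taken as already known from an offline black-body calibration, as in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

n_pix, n_frames = 500, 400
scene = 30.0                                   # flat black-body-like scene
c = rng.normal(0.0, 2.0, n_pix)                # constant fixed-pattern term
g = rng.normal(0.0, 0.5, n_pix)                # per-pixel temperature gain
dT = np.linspace(0.0, 15.0, n_frames)          # ~15 C camera temperature ramp

g_hat = np.zeros(n_pix)                        # pixelwise LMS gain estimates
mu = 0.005                                     # LMS step size
for t in range(n_frames):
    y = scene + c + g * dT[t] + rng.normal(0.0, 0.1, n_pix)  # observed frame
    err = (y - c - scene) - g_hat * dT[t]      # residual after current estimate
    g_hat += mu * err * dT[t]                  # LMS update with input dT

# Residual temperature-dependent NU at the end of the ramp (deg C).
rmse_before = np.sqrt(np.mean((g * dT[-1]) ** 2))
rmse_after = np.sqrt(np.mean(((g - g_hat) * dT[-1]) ** 2))
```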

  16. Modeling and Compensating Temperature-Dependent Non-Uniformity Noise in IR Microbolometer Cameras

    PubMed Central

    Wolf, Alejandro; Pezoa, Jorge E.; Figueroa, Miguel

    2016-01-01

    Images rendered by uncooled microbolometer-based infrared (IR) cameras are severely degraded by the spatial non-uniformity (NU) noise. The NU noise imposes a fixed-pattern over the true images, and the intensity of the pattern changes with time due to the temperature instability of such cameras. In this paper, we present a novel model and a compensation algorithm for the spatial NU noise and its temperature-dependent variations. The model separates the NU noise into two components: a constant term, which corresponds to a set of NU parameters determining the spatial structure of the noise, and a dynamic term, which scales linearly with the fluctuations of the temperature surrounding the array of microbolometers. We use a black-body radiator and samples of the temperature surrounding the IR array to offline characterize both the constant and the temperature-dependent NU noise parameters. Next, the temperature-dependent variations are estimated online using both a spatially uniform Hammerstein-Wiener estimator and a pixelwise least mean squares (LMS) estimator. We compensate for the NU noise in IR images from two long-wave IR cameras. Results show an excellent NU correction performance and a root mean square error of less than 0.25 °C, when the array’s temperature varies by approximately 15 °C. PMID:27447637

  17. Modeling and Compensating Temperature-Dependent Non-Uniformity Noise in IR Microbolometer Cameras.

    PubMed

    Wolf, Alejandro; Pezoa, Jorge E; Figueroa, Miguel

    2016-07-19

    Images rendered by uncooled microbolometer-based infrared (IR) cameras are severely degraded by the spatial non-uniformity (NU) noise. The NU noise imposes a fixed-pattern over the true images, and the intensity of the pattern changes with time due to the temperature instability of such cameras. In this paper, we present a novel model and a compensation algorithm for the spatial NU noise and its temperature-dependent variations. The model separates the NU noise into two components: a constant term, which corresponds to a set of NU parameters determining the spatial structure of the noise, and a dynamic term, which scales linearly with the fluctuations of the temperature surrounding the array of microbolometers. We use a black-body radiator and samples of the temperature surrounding the IR array to offline characterize both the constant and the temperature-dependent NU noise parameters. Next, the temperature-dependent variations are estimated online using both a spatially uniform Hammerstein-Wiener estimator and a pixelwise least mean squares (LMS) estimator. We compensate for the NU noise in IR images from two long-wave IR cameras. Results show an excellent NU correction performance and a root mean square error of less than 0.25 °C, when the array's temperature varies by approximately 15 °C.

  18. Large Sample Hydrology : Building an international sample of watersheds to improve consistency and robustness of model evaluation

    NASA Astrophysics Data System (ADS)

    Mathevet, Thibault; Kumar, Rohini; Gupta, Hoshin; Vaze, Jai; Andréassian, Vazken

    2015-04-01

    This poster introduces the aims of the Large Sample Hydrology working group (LSH-WG) of the new IAHS Panta Rhei decade (2013-2022). The aim of the LSH-WG is to promote large-sample hydrology, as discussed by Gupta et al. (2014), and to invite the community to collaborate on building and sharing a comprehensive and representative world-wide sample of watershed datasets. By doing so, LSH will allow the community to work towards 'hydrological consistency' (Martinez and Gupta, 2011) as a basis for hydrologic model development and evaluation, thereby increasing the robustness of the model evaluation process. Classical model evaluation metrics based on 'robust statistics' are needed, but clearly not sufficient: multi-criteria assessments based on multiple hydrological signatures can help to better characterize hydrological functioning. Further, large-sample data sets can greatly facilitate: (i) improved understanding through rigorous testing and comparison of competing model hypotheses and structures, (ii) improved robustness of generalizations through statistical analyses that minimize the influence of outliers and case-specific studies, (iii) classification, regionalization and model transfer across a broad diversity of hydrometeorological contexts, and (iv) estimation of predictive uncertainties at a location and across locations (Mathevet et al., 2006; Andréassian et al., 2009; Gupta et al., 2014). References: Andréassian, V., Perrin, C., Berthet, L., Le Moine, N., Lerat, J., Loumagne, C., Oudin, L., Mathevet, T., Ramos, M. H., and Valéry, A.: Crash tests for a standardized evaluation of hydrological models, Hydrology and Earth System Sciences, 1757-1764, 2009. Gupta, H. V., Perrin, C., Blöschl, G., Montanari, A., Kumar, R., Clark, M., and Andréassian, V.: Large-sample hydrology: a need to balance depth with breadth, Hydrol. Earth Syst. Sci., 18, 463-477, doi:10.5194/hess-18-463-2014, 2014. Martinez, G. F., and H. V. Gupta (2011), Hydrologic consistency as a basis for

  19. Mathematical model of the metal mould surface temperature optimization

    NASA Astrophysics Data System (ADS)

    Mlynek, Jaroslav; Knobloch, Roman; Srb, Radek

    2015-11-01

    The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used, e.g., in the automotive industry for artificial leather production. To produce artificial leather with a uniform surface structure and colour shade, the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of the infrared heaters over the mould so that an approximately uniform heat radiation intensity is generated. A version of the differential evolution algorithm programmed in the Matlab development environment was created by the authors for the optimization process. The software system ANSYS was used for the temperature calculations. A practical example of the optimization of heater locations and calculation of the mould temperature is included at the end of the article.

  20. Mathematical model of the metal mould surface temperature optimization

    SciTech Connect

    Mlynek, Jaroslav; Knobloch, Roman; Srb, Radek

    2015-11-30

    The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used, e.g., in the automotive industry for artificial leather production. To produce artificial leather with a uniform surface structure and colour shade, the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of the infrared heaters over the mould so that an approximately uniform heat radiation intensity is generated. A version of the differential evolution algorithm programmed in the Matlab development environment was created by the authors for the optimization process. The software system ANSYS was used for the temperature calculations. A practical example of the optimization of heater locations and calculation of the mould temperature is included at the end of the article.

  1. Modelling of monovacancy diffusion in W over wide temperature range

    SciTech Connect

    Bukonte, L.; Ahlgren, T.; Heinola, K.

    2014-03-28

    The diffusion of monovacancies in tungsten is studied computationally over a wide temperature range, from 1300 K up to the melting point of the material. Our modelling is based on the Molecular Dynamics technique and Density Functional Theory. The monovacancy migration barriers are calculated using the nudged elastic band method for nearest- and next-nearest-neighbour monovacancy jumps. The diffusion pre-exponential factor for monovacancy diffusion is found to be two to three orders of magnitude higher than commonly used in computational studies, resulting in an attempt frequency of the order of 10^15 Hz. Multiple nearest-neighbour jumps of the monovacancy are found to play an important role in the contribution to the total diffusion coefficient, especially at temperatures above 2/3 of T_m, resulting in an upward curvature of the Arrhenius diagram. The probabilities for different nearest-neighbour jumps for the monovacancy in W are calculated at different temperatures.
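The reported upward Arrhenius curvature follows directly from summing two jump mechanisms with different barriers. A minimal sketch: only the ~10^15 Hz order of magnitude for the attempt frequency comes from the abstract; the migration barriers and the bcc-W lattice constant are placeholder values, not the paper's fitted parameters.

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def vacancy_diffusivity(T, nu0=1e15, a0=3.165e-10, e_nn=1.7, e_nnn=3.1):
    """Monovacancy diffusion coefficient from two jump mechanisms.

    D = (1/6) * sum_i d_i^2 * Gamma_i for a cubic lattice random walk,
    with Gamma_i = nu0 * exp(-E_i / kT).  The higher-barrier next-nearest
    neighbour channel only contributes near the melting point, which
    bends the Arrhenius plot upward.  Barrier values are illustrative.
    """
    d_nn = a0 * np.sqrt(3.0) / 2.0   # nearest-neighbour jump distance in bcc
    d_nnn = a0                       # next-nearest-neighbour jump distance
    rate_nn = nu0 * np.exp(-e_nn / (K_B * T))
    rate_nnn = nu0 * np.exp(-e_nnn / (K_B * T))
    return (d_nn**2 * rate_nn + d_nnn**2 * rate_nnn) / 6.0
```

Evaluating the local slope of ln D versus 1/T at low and high temperature shows the effective activation energy growing with T, i.e. the upward curvature the abstract describes.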

  2. Modeling the Surface Temperature of Earth-like Planets

    NASA Astrophysics Data System (ADS)

    Vladilo, Giovanni; Silva, Laura; Murante, Giuseppe; Filippi, Luca; Provenzale, Antonello

    2015-05-01

    We introduce a novel Earth-like planet surface temperature model (ESTM) for habitability studies based on the spatial-temporal distribution of planetary surface temperatures. The ESTM adopts a surface energy balance model (EBM) complemented by: radiative-convective atmospheric column calculations, a set of physically based parameterizations of meridional transport, and descriptions of surface and cloud properties more refined than in standard EBMs. The parameterization is valid for rotating terrestrial planets with shallow atmospheres and moderate values of axis obliquity (ε ≲ 45°). Comparison with a 3D model of atmospheric dynamics from the literature shows that the equator-to-pole temperature differences predicted by the two models agree within ≈5 K when the rotation rate, insolation, surface pressure and planet radius are varied in the intervals 0.5 ≲ Ω/Ω⊕ ≲ 2, 0.75 ≲ S/S₀ ≲ 1.25, 0.3 ≲ p/(1 bar) ≲ 10, and 0.5 ≲ R/R⊕ ≲ 2, respectively. The ESTM has an extremely low computational cost and can be used when the planetary parameters are scarcely known (as for most exoplanets) and/or whenever many runs for different parameter configurations are needed. Model simulations of a test-case exoplanet (Kepler-62e) indicate that an uncertainty in surface pressure within the range expected for terrestrial planets may impact the mean temperature by ~60 K. Within the limits of validity of the ESTM, the impact of surface pressure is larger than that predicted by uncertainties in rotation rate, axis obliquity, and ocean fractions. We discuss the possibility of performing a statistical ranking of planetary habitability taking advantage of the flexibility of the ESTM.
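For contrast with the ESTM's latitudinal treatment, the simplest member of the EBM family is the zero-dimensional global balance, which already yields a plausible mean surface temperature. A sketch with illustrative Earth-like parameters; the effective emissivity is a crude stand-in for the greenhouse effect, and nothing here comes from the paper itself.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def ebm_equilibrium_temperature(S=1361.0, albedo=0.3, emissivity=0.61,
                                n_steps=200000, dt=3600.0,
                                heat_capacity=4e8, T0=250.0):
    """Zero-dimensional energy balance model (EBM) sketch.

    Far simpler than the latitudinal ESTM of the abstract: one global
    mean temperature, no meridional transport.  Integrates
        C dT/dt = S (1 - albedo) / 4 - emissivity * sigma * T^4
    forward in time until the radiative balance is reached.
    """
    T = T0
    for _ in range(n_steps):
        absorbed = S * (1.0 - albedo) / 4.0   # mean absorbed shortwave flux
        emitted = emissivity * SIGMA * T**4   # outgoing longwave flux
        T += dt * (absorbed - emitted) / heat_capacity
    return T
```

With the defaults, the balance settles near 288 K, i.e. Earth's observed mean surface temperature, which is why even this minimal EBM is a useful baseline before adding transport terms.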

  3. The room temperature preservation of filtered environmental DNA samples and assimilation into a phenol–chloroform–isoamyl alcohol DNA extraction

    PubMed Central

    Renshaw, Mark A; Olds, Brett P; Jerde, Christopher L; McVeigh, Margaret M; Lodge, David M

    2015-01-01

    Current research targeting filtered macrobial environmental DNA (eDNA) often relies upon cold ambient temperatures at various stages, including the transport of water samples from the field to the laboratory and the storage of water and/or filtered samples in the laboratory. This poses practical limitations for field collections in locations where refrigeration and frozen storage is difficult or where samples must be transported long distances for further processing and screening. This study demonstrates the successful preservation of eDNA at room temperature (20 °C) in two lysis buffers, CTAB and Longmire's, over a 2-week period of time. Moreover, the preserved eDNA samples were seamlessly integrated into a phenol–chloroform–isoamyl alcohol (PCI) DNA extraction protocol. The successful application of the eDNA extraction to multiple filter membrane types suggests the methods evaluated here may be broadly applied in future eDNA research. Our results also suggest that for many kinds of studies recently reported on macrobial eDNA, detection probabilities could have been increased, and at a lower cost, by utilizing the Longmire's preservation buffer with a PCI DNA extraction. PMID:24834966

  4. HIGH TEMPERATURE HIGH PRESSURE THERMODYNAMIC MEASUREMENTS FOR COAL MODEL COMPOUNDS

    SciTech Connect

    Vinayak N. Kabadi

    2000-05-01

    The flow VLE apparatus designed and built for a previous project was upgraded and recalibrated for data measurements for this project. The modifications include a better and more accurate sampling technique, the addition of a digital recorder to monitor temperature and pressure inside the VLE cell, and a new technique for remote sensing of the liquid level in the cell. VLE data measurements for three binary systems, tetralin-quinoline, benzene-ethylbenzene and ethylbenzene-quinoline, have been completed. The temperature ranges of data measurements were 325 °C to 370 °C for the first system, 180 °C to 300 °C for the second system, and 225 °C to 380 °C for the third system. The smoothed data were found to be fairly well behaved when subjected to thermodynamic consistency tests. A SETARAM C-80 calorimeter was used for incremental enthalpy and heat capacity measurements for benzene-ethylbenzene binary liquid mixtures. Data were measured from 30 °C to 285 °C for liquid mixtures covering the entire composition range. An apparatus has been designed for simultaneous measurement of the excess volume and incremental enthalpy of liquid mixtures at temperatures from 30 °C to 300 °C. The apparatus has been tested and is ready for data measurements. A flow apparatus for measurement of the heat of mixing of liquid mixtures at high temperatures has also been designed, and is currently being tested and calibrated.

  5. Melting Temperature Mapping Method: A Novel Method for Rapid Identification of Unknown Pathogenic Microorganisms within Three Hours of Sample Collection.

    PubMed

    Niimi, Hideki; Ueno, Tomohiro; Hayashi, Shirou; Abe, Akihito; Tsurue, Takahiro; Mori, Masashi; Tabata, Homare; Minami, Hiroshi; Goto, Michihiko; Akiyama, Makoto; Yamamoto, Yoshihiro; Saito, Shigeru; Kitajima, Isao

    2015-07-28

    Acquiring the earliest possible identification of pathogenic microorganisms is critical for selecting the appropriate antimicrobial therapy in infected patients. We herein report the novel "melting temperature (Tm) mapping method" for rapidly identifying the dominant bacteria in a clinical sample from sterile sites. Employing only seven primer sets, more than 100 bacterial species can be identified. In particular, using the Difference Value, it is possible to identify samples suitable for Tm mapping identification. Moreover, this method can be used to rapidly diagnose the absence of bacteria in clinical samples. We tested the Tm mapping method using 200 whole blood samples obtained from patients with suspected sepsis, 85% (171/200) of which matched the culture results based on the detection level. A total of 130 samples were negative according to the Tm mapping method, 98% (128/130) of which were also negative based on the culture method. Meanwhile, 70 samples were positive according to the Tm mapping method, and of the 59 suitable for identification, 100% (59/59) exhibited a "match" or "broad match" with the culture or sequencing results. These findings were obtained within three hours of whole blood collection. The Tm mapping method is therefore useful for identifying infectious diseases requiring prompt treatment.

  6. Effect of temperature, sample size and gas flow rate on drying of Beulah-Zap lignite and Wyodak subbituminous coal

    SciTech Connect

    Vorres, K.S.

    1993-01-01

    Beulah-Zap lignite and Wyodak-Anderson subbituminous coal (−100 and −20 mesh, from the Argonne Premium Coal Sample Program) were dried in nitrogen under various conditions of temperature (20-80 °C), gas flow rate (20-160 cc/min), and sample size (20-160 mg). An equation relating the initial drying rate in the unimolecular mechanism was developed to relate the drying rate and these three variables over the initial 80-85% of the moisture loss for the lignite. The behavior of the Wyodak-Anderson subbituminous coal is very similar to that of the lignite. The nitrogen BET surface area of the subbituminous sample is much larger than that of the lignite.
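The unimolecular mechanism named in the abstract is a first-order decay of the remaining moisture, so the initial drying rate is simply k·m0. A hedged sketch: the decay law is the standard first-order form, but the correlation of k with temperature, flow rate and sample mass below uses invented constants and exponents, not the paper's fitted equation.

```python
import numpy as np

def unimolecular_moisture(t, m0, k):
    """First-order (unimolecular) drying law for the remaining moisture:
    m(t) = m0 * exp(-k t), which governs roughly the first 80-85% of the
    moisture loss in the abstract's description."""
    return m0 * np.exp(-k * t)

def initial_rate(m0, k):
    """Magnitude of dm/dt at t = 0 for the first-order law."""
    return k * m0

def rate_constant(temp_c, flow_ccmin, mass_mg,
                  a=1e-3, e_over_r=2500.0, p_flow=0.5, p_mass=-0.5):
    """Illustrative correlation in the spirit of the abstract's equation:
    k grows with temperature (Arrhenius-like) and gas flow rate, and
    falls with sample size.  All constants and exponents are made-up
    placeholders for demonstration only.
    """
    T = temp_c + 273.15
    return a * np.exp(-e_over_r / T) * flow_ccmin**p_flow * mass_mg**p_mass
```

With these placeholder parameters, the hottest/fastest-flow/smallest-sample corner of the studied ranges dries fastest, matching the qualitative trends one would expect from the experiment.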

  7. Effect of temperature, sample size and gas flow rate on drying of Beulah-Zap lignite and Wyodak subbituminous coal

    SciTech Connect

    Vorres, K.S.

    1993-03-01

    Beulah-Zap lignite and Wyodak-Anderson subbituminous coal (−100 and −20 mesh, from the Argonne Premium Coal Sample Program) were dried in nitrogen under various conditions of temperature (20-80 °C), gas flow rate (20-160 cc/min), and sample size (20-160 mg). An equation relating the initial drying rate in the unimolecular mechanism was developed to relate the drying rate and these three variables over the initial 80-85% of the moisture loss for the lignite. The behavior of the Wyodak-Anderson subbituminous coal is very similar to that of the lignite. The nitrogen BET surface area of the subbituminous sample is much larger than that of the lignite.

  8. Determination of ultraviolet filters in environmental water samples by temperature-controlled ionic liquid dispersive liquid-phase microextraction.

    PubMed

    Zhang, Yufeng; Lee, Hian Kee

    2013-01-01

    In the present study, a rapid, highly efficient and environmentally friendly sample preparation method named temperature-controlled ionic liquid dispersive liquid-phase microextraction (TC-IL-DLPME), followed by high performance liquid chromatography (HPLC) was developed for the extraction, preconcentration and determination of four benzophenone-type ultraviolet (UV) filters (viz. benzophenone (BP), 2-hydroxy-4-methoxybenzophenone (BP-3), ethylhexyl salicylate (EHS) and homosalate (HMS)) from water samples. An ultra-hydrophobic ionic liquid (IL) 1-hexyl-3-methylimidazolium tris(pentafluoroethyl)trifluorophosphate ([HMIM][FAP]), was used as the extraction solvent in TC-IL-DLPME. Temperature served two functions here, the promotion of the dispersal of the IL to the aqueous sample solution to form infinitesimal IL drops and increase the interface between them and the target analytes (at high temperature), and the facilitation of mass transfer between the phases, and achievement of phase separation (at low temperature). Due to the ultra-hydrophobic feature and high density of the extraction solvent, complete phase separation could be effected by centrifugation. Moreover, no disperser solvent was required. Another prominent feature of the procedure was the combination of extraction and centrifugation in a single step, which not only greatly reduced the total analysis time for TC-IL-DLPME but also simplified the sample preparation procedure. Various parameters that affected the extraction efficiency (such as type and volume of extraction solvent, temperature, salt addition, extraction time and pH) were evaluated. Under optimal conditions, the proposed method provided good enrichment factors in the range of 240-350, and relative standard deviations (n=5) below 6.3%. The limits of detection were in the range of 0.2-5.0 ng/mL, depending on the analytes. 
The linearities were between 1 and 500 ng/mL for BP, 5 and 1000 ng/mL for BP-3, 10 and 1000 ng/mL for HMS and 5 and 1000

  9. Determination of ultraviolet filters in environmental water samples by temperature-controlled ionic liquid dispersive liquid-phase microextraction.

    PubMed

    Zhang, Yufeng; Lee, Hian Kee

    2013-01-01

    In the present study, a rapid, highly efficient and environmentally friendly sample preparation method named temperature-controlled ionic liquid dispersive liquid-phase microextraction (TC-IL-DLPME), followed by high performance liquid chromatography (HPLC) was developed for the extraction, preconcentration and determination of four benzophenone-type ultraviolet (UV) filters (viz. benzophenone (BP), 2-hydroxy-4-methoxybenzophenone (BP-3), ethylhexyl salicylate (EHS) and homosalate (HMS)) from water samples. An ultra-hydrophobic ionic liquid (IL) 1-hexyl-3-methylimidazolium tris(pentafluoroethyl)trifluorophosphate ([HMIM][FAP]), was used as the extraction solvent in TC-IL-DLPME. Temperature served two functions here, the promotion of the dispersal of the IL to the aqueous sample solution to form infinitesimal IL drops and increase the interface between them and the target analytes (at high temperature), and the facilitation of mass transfer between the phases, and achievement of phase separation (at low temperature). Due to the ultra-hydrophobic feature and high density of the extraction solvent, complete phase separation could be effected by centrifugation. Moreover, no disperser solvent was required. Another prominent feature of the procedure was the combination of extraction and centrifugation in a single step, which not only greatly reduced the total analysis time for TC-IL-DLPME but also simplified the sample preparation procedure. Various parameters that affected the extraction efficiency (such as type and volume of extraction solvent, temperature, salt addition, extraction time and pH) were evaluated. Under optimal conditions, the proposed method provided good enrichment factors in the range of 240-350, and relative standard deviations (n=5) below 6.3%. The limits of detection were in the range of 0.2-5.0 ng/mL, depending on the analytes. 
The linearities were between 1 and 500 ng/mL for BP, 5 and 1000 ng/mL for BP-3, 10 and 1000 ng/mL for HMS and 5 and 1000

  10. Exact Tests for the Rasch Model via Sequential Importance Sampling

    ERIC Educational Resources Information Center

    Chen, Yuguo; Small, Dylan

    2005-01-01

    Rasch proposed an exact conditional inference approach to testing his model but never implemented it because it involves the calculation of a complicated probability. This paper furthers Rasch's approach by (1) providing an efficient Monte Carlo methodology for accurately approximating the required probability and (2) illustrating the usefulness…

  11. A novel powder sample holder for the determination of glass transition temperatures by DMA.

    PubMed

    Mahlin, Denny; Wood, John; Hawkins, Nicholas; Mahey, Jas; Royall, Paul G

    2009-04-17

    The use of a new sample holder for dynamic mechanical analysis (DMA) as a means to characterise the Tg of powdered hydroxypropyl methyl cellulose (HPMC) has been investigated. A sample holder was constructed consisting of a rectangular stainless steel container and a lid engineered to fit exactly within the walls of the container when clamped within a TA Instruments Q800 DMA in dual cantilever configuration. Physical mixtures of HPMC (E4M) and aluminium oxide powders were placed in the holder and subjected to oscillating strains (1 Hz, 10 Hz and 100 Hz) whilst heated at 3 degrees C/min. The storage and loss modulus signals showed a large reduction in mechanical strength above 150 degrees C, which was attributed to a glass transition. Optimal experimental parameters were determined using a design-of-experiment procedure and by analysing the frequency dependence of Tg in Arrhenius plots. The parameters were a clamping pressure of 62 kPa, a mass ratio of 0.2 HPMC in aluminium oxide, and a loading mass of either 120 mg or 180 mg. At 1 Hz, a Tg of 177 ± 1.2 degrees C (n=6) for powdered HPMC was obtained. In conclusion, the new powder holder was capable of measuring the Tg of pharmaceutical powders, and a simple optimization protocol was established, useful in further applications of the DMA powder holder. PMID:19167475
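The Arrhenius analysis of the Tg frequency dependence reduces to a linear fit of ln(f) against 1/Tg. A sketch of that analysis; the example values in the usage test are invented HPMC-like numbers, not the measured data from the study.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def apparent_activation_energy(freqs_hz, tg_celsius):
    """Apparent activation energy of the glass transition from the
    frequency dependence of Tg, via an Arrhenius plot of ln(f) vs 1/Tg.

    The negative slope of the plot times -R gives Ea in J/mol; this
    mirrors the Arrhenius-plot step mentioned in the abstract.
    """
    inv_T = 1.0 / (np.array(tg_celsius) + 273.15)  # Tg in kelvin, inverted
    ln_f = np.log(np.array(freqs_hz))
    slope, _ = np.polyfit(inv_T, ln_f, 1)          # linear fit ln(f) = a/Tg + b
    return -R * slope
```

For a Tg that shifts by a few degrees per decade of frequency, this yields activation energies of several hundred kJ/mol, the typical scale for glass transitions.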

  12. Near infrared spectroscopy to estimate the temperature reached on burned soils: strategies to develop robust models.

    NASA Astrophysics Data System (ADS)

    Guerrero, César; Pedrosa, Elisabete T.; Pérez-Bejarano, Andrea; Keizer, Jan Jacob

    2014-05-01

    The temperature reached on soils is an important parameter for describing wildfire effects. However, methods for measuring the temperature reached on burned soils have been poorly developed. Recently, near-infrared (NIR) spectroscopy has been identified as a valuable tool for this purpose. The NIR spectrum of a soil sample contains information on the organic matter (quantity and quality), clay (quantity and quality), minerals (such as carbonates and iron oxides) and water contents. Some of these components are modified by heat, and each temperature causes a group of changes, leaving a characteristic fingerprint on the NIR spectrum. This technique requires a model (or calibration) in which the changes in the NIR spectra are related to the temperature reached. To develop the model, several aliquots are heated at known temperatures and used as standards in the calibration set. This model makes it possible to estimate the temperature reached on a burned sample from its NIR spectrum. However, the estimation of the temperature reached using NIR spectroscopy arises from changes in several components and cannot be attributed to changes in a single soil component. Thus, we estimate the temperature reached through the interaction between temperature and the thermo-sensitive soil components. In addition, we cannot expect a uniform distribution of these components, even at small scales. Consequently, the proportion of these soil components can vary spatially across the site. This variation will be present in the samples used to construct the model and also in the samples affected by the wildfire. Therefore, strategies for developing robust models should focus on managing this expected variation. In this work we compared the prediction accuracy of models constructed with different approaches. 
These approaches were designed to provide insights about how to distribute the efforts needed for the development of robust
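As one concrete calibration approach of the kind discussed above, a spectrum-to-temperature model can be fit by latent-variable regression. The sketch below uses principal-component regression to stay dependency-free (PLS is the more common choice in NIR work, but this is not the authors' method); all data in the usage test are synthetic.

```python
import numpy as np

def fit_pcr(spectra, temperatures, n_components=3):
    """Minimal principal-component regression (PCR) calibration sketch.

    Relates NIR spectra of aliquots heated at known temperatures to the
    temperature they reached.  Returns everything needed to predict the
    temperature of a new burned sample from its spectrum.
    """
    X = np.asarray(spectra, float)
    y = np.asarray(temperatures, float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc = X - x_mean
    # principal components (loadings) of the centred calibration spectra
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    P = vt[:n_components].T
    scores = Xc @ P
    # least-squares regression of temperature on the component scores
    coef, *_ = np.linalg.lstsq(scores, y - y_mean, rcond=None)
    return {"x_mean": x_mean, "y_mean": y_mean, "P": P, "coef": coef}

def predict_temperature(model, spectrum):
    """Project a new spectrum onto the loadings and apply the regression."""
    score = (np.asarray(spectrum, float) - model["x_mean"]) @ model["P"]
    return model["y_mean"] + score @ model["coef"]
```

The robustness strategies the abstract discusses would then amount to choices about which aliquots, sites and component counts go into `fit_pcr`.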

  13. Modeling the effect of water activity and storage temperature on chemical stability of coffee brews.

    PubMed

    Manzocco, Lara; Nicoli, Maria Cristina

    2007-08-01

    This work studied the chemical stability of coffee brew derivatives as a function of water activity (aw) and storage temperature. To this purpose, coffee brew was freeze-dried, equilibrated at increasing aw values, and stored for up to 10 months at different temperatures from -30 to 60 degrees C. The chemical stability of the samples was assessed by measuring H3O+ formation during storage. Independently of storage temperature, the rate of H3O+ formation was considerably low only when aw was reduced below 0.5 (94% w/w). Beyond this critical boundary, the rate increased, reaching a maximum value at ca. 0.8 aw (78% w/w). Further hydration up to the aw of the freshly prepared beverage significantly increased chemical stability. It was suggested that mechanisms other than lactone hydrolysis, probably related to nonenzymatic browning pathways, could contribute to the observed increase in acidity during coffee staling. The temperature dependence of H3O+ formation was well described by the Arrhenius equation over the entire aw range considered. However, aw affected the apparent activation energy and frequency factor. These effects were described by simple equations that were used to set up a modified Arrhenius equation. This model was validated by comparing experimental values, not used to generate the model, with those estimated by the model itself. The model allowed efficient prediction of the chemical stability of coffee derivatives on the basis of only the aw value and storage temperature. PMID:17658750
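A modified Arrhenius equation of the kind described, with aw-dependent frequency factor and activation energy, can be sketched as follows. The linear parameter forms and every constant here are placeholders for illustration, not the fitted coffee-brew values.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def rate_modified_arrhenius(T_kelvin, aw,
                            ln_a0=20.0, ln_a1=10.0,
                            ea0=90000.0, ea1=-20000.0):
    """Modified Arrhenius model in the spirit of the abstract:

        ln A(aw) = ln_a0 + ln_a1 * aw     (frequency factor)
        Ea(aw)   = ea0 + ea1 * aw         (apparent activation energy)
        k(T, aw) = A(aw) * exp(-Ea(aw) / (R T))

    Both parameters depend on water activity, so a single expression
    predicts the rate from temperature and aw alone.  All coefficients
    are invented placeholders.
    """
    ln_A = ln_a0 + ln_a1 * aw
    Ea = ea0 + ea1 * aw
    return np.exp(ln_A - Ea / (R * T_kelvin))
```

With these placeholder coefficients the predicted rate rises with both temperature and (over this range) water activity, reproducing the qualitative behaviour reported below aw ≈ 0.8.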

  14. Stream Segment Temperature Model (SSTEMP) Version 2.0

    USGS Publications Warehouse

    Bartholow, John

    2002-01-01

    SSTEMP is a much-scaled down version of the Stream Network Temperature Model (SNTEMP) by Theurer et al. (1984). SSTEMP may be used to evaluate alternative reservoir release proposals, analyze the effects of changing riparian shade or the physical features of a stream, and examine the effects of different stream withdrawals and returns on instream temperature. Unlike the large network model, SNTEMP, this program handles only single stream segments for a single time period (e.g., month, week, day) for any given “run”. Initially designed as a training tool, SSTEMP may be used satisfactorily for a variety of simple cases that one might face on a day-to-day basis. It is especially useful to perform sensitivity and uncertainty analysis. The program requires inputs describing the average stream geometry, as well as (steady-state) hydrology and meteorology, and stream shading. SSTEMP optionally estimates the combined topographic and vegetative shade as well as solar radiation penetrating the water. It then predicts the mean daily water temperatures at specified distances downstream. It also estimates the daily maximum and minimum temperatures, and unlike SNTEMP, handles the special case of a dam with steady-state release at the upstream end of the segment. With good quality input data, SSTEMP should faithfully reproduce mean daily water temperatures throughout a stream reach. If it does not, there is a research opportunity to explain why not. One should not expect too much from SSTEMP if the input values are of poor quality or if the modeler has not adhered to the model’s assumptions.
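A heavily simplified stand-in for what a segment temperature model computes is the first-order relaxation of mean water temperature toward an equilibrium temperature. This is a textbook simplification, not SSTEMP's actual heat-flux formulation, and every parameter value below is illustrative.

```python
import numpy as np

RHO_CP = 4.182e6  # volumetric heat capacity of water, J m^-3 K^-1

def downstream_temperature(x_m, t_upstream, t_equilibrium,
                           exchange_coeff=30.0, depth_m=0.5,
                           velocity_ms=0.4):
    """Steady-state downstream relaxation of mean water temperature.

    Linearising the net surface exchange about the equilibrium
    temperature Te gives
        T(x) = Te + (T0 - Te) * exp(-K x / (rho_cp * depth * velocity))
    where K is a bulk exchange coefficient (W m^-2 K^-1).  Captures the
    qualitative dam-release case SSTEMP handles: a cold steady release
    warms toward Te with distance downstream.
    """
    decay = exchange_coeff * x_m / (RHO_CP * depth_m * velocity_ms)
    return t_equilibrium + (t_upstream - t_equilibrium) * np.exp(-decay)
```

For an 8 °C dam release into a reach whose equilibrium temperature is 20 °C, the predicted mean temperature climbs monotonically toward 20 °C over tens of kilometres with these parameter values.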

  15. [Temperature dependence of parameters of plant photosynthesis models: a review].

    PubMed

    Borjigidai, Almaz; Yu, Gui-Rui

    2013-12-01

    This paper reviewed progress on temperature response models of plant photosynthesis. Mechanisms involved in changes in the photosynthesis-temperature curve were discussed based on four parameters: the intercellular CO2 concentration, the activation energy of the maximum rate of RuBP (ribulose-1,5-bisphosphate) carboxylation (Vc,max), the activation energy of the rate of RuBP regeneration (Jmax), and the ratio of Jmax to Vc,max. All species increased the activation energy of Vc,max with increasing growth temperature, while the other parameters changed but differed among species, suggesting that the activation energy of Vc,max might be the most important parameter for the temperature response of plant photosynthesis. In addition, research problems and prospects were discussed. It is necessary to combine photosynthesis models at the foliage and community levels, and to investigate the mechanisms of plant responses to global change in terms of leaf area, solar radiation, canopy structure, canopy microclimate and photosynthetic capacity. This would benefit the understanding and quantitative assessment of plant growth, the carbon balance of communities, and the primary productivity of ecosystems.
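The central parameter of the review, the activation energy of Vc,max, enters through the standard Arrhenius normalisation used in Farquhar-type photosynthesis models. A sketch: the 25 °C reference form is the standard one, while the example Ea in the usage is a typical literature-scale value, not a result from this review.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def vcmax_arrhenius(T_leaf_c, vcmax25, ea):
    """Arrhenius-type temperature response of Vc,max, normalised to 25 degC:

        Vcmax(T) = Vcmax25 * exp( Ea * (T - 298.15) / (298.15 * R * T) )

    ea is the activation energy (J/mol) that the review identifies as the
    key temperature-response parameter; larger ea means a steeper rise of
    Vc,max with leaf temperature.
    """
    T = T_leaf_c + 273.15
    return vcmax25 * np.exp(ea * (T - 298.15) / (298.15 * R * T))
```

For example, with Ea around 65 kJ/mol, warming a leaf from 25 °C to 35 °C roughly doubles Vc,max under this form, and a species that acclimates by raising Ea shows an even stronger response.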

  16. On the fate of the Standard Model at finite temperature

    NASA Astrophysics Data System (ADS)

    Rose, Luigi Delle; Marzo, Carlo; Urbano, Alfredo

    2016-05-01

    In this paper we revisit and update the computation of thermal corrections to the stability of the electroweak vacuum in the Standard Model. At zero temperature, we make use of the full two-loop effective potential, improved by three-loop beta functions with two-loop matching conditions. At finite temperature, we include one-loop thermal corrections together with resummation of daisy diagrams. We solve numerically, both at zero and finite temperature, the bounce equation, thus providing an accurate description of the thermal tunneling. Assuming a maximum temperature in the early Universe of the order of 10^18 GeV, we find that the instability bound excludes values of the top mass M_t ≳ 173.6 GeV, with M_h ≃ 125 GeV and including uncertainties on the strong coupling. We discuss the validity and temperature dependence of this bound in the early Universe, with a special focus on the reheating phase after inflation.
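The thermal tunneling computation rests on the standard finite-temperature decay-rate expressions; as a reminder of the quantities involved (these are the textbook formulas, not results specific to this paper):

```latex
% Thermal decay rate per unit volume from the O(3)-symmetric bounce
\frac{\Gamma}{V} \simeq T^4 \left(\frac{S_3(T)}{2\pi T}\right)^{3/2} e^{-S_3(T)/T},
\qquad
S_3(T) = 4\pi \int_0^\infty \! dr\, r^2 \left[ \frac{1}{2}\left(\frac{d\phi}{dr}\right)^{\!2} + V_{\mathrm{eff}}(\phi, T) \right]
```

where the bounce profile solves the equation named in the abstract, φ'' + (2/r) φ' = ∂V_eff/∂φ, with φ'(0) = 0 and φ(r → ∞) approaching the false vacuum; the instability bound follows from requiring the integrated decay probability over the thermal history to stay below one.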

  17. Rasch-modeling the Portuguese SOCRATES in a clinical sample.

    PubMed

    Lopes, Paulo; Prieto, Gerardo; Delgado, Ana R; Gamito, Pedro; Trigo, Hélder

    2010-06-01

    The Stages of Change Readiness and Treatment Eagerness Scale (SOCRATES) assesses motivation for treatment in the drug-dependent population. The development of adequate measures of motivation is needed in order to properly understand the role of this construct in rehabilitation. This study probed the psychometric properties of the SOCRATES in the Portuguese population by means of the Rasch Rating Scale Model, which allows the conjoint measurement of items and persons. The participants were 166 substance abusers under treatment for their addiction. Results show that the functioning of the five response categories is not optimal; our re-analysis indicates that a three-category system is the most appropriate one. By using this response category system, both model fit and estimation accuracy are improved. The discussion takes into account other factors such as item format and content in order to make suggestions for the development of better motivation-for-treatment scales.

  18. Systems Modeling for Crew Core Body Temperature Prediction Postlanding

    NASA Technical Reports Server (NTRS)

    Cross, Cynthia; Ochoa, Dustin

    2010-01-01

    The Orion Crew Exploration Vehicle, NASA's latest crewed spacecraft project, presents many challenges to its designers, including ensuring crew survivability during nominal and off-nominal landing conditions. With a nominal water landing planned off the coast of San Clemente, California, off-nominal water landings could range from the far North Atlantic Ocean to the middle of the equatorial Pacific Ocean. For all of these conditions, the vehicle must provide sufficient life support resources to ensure that the crew members' core body temperatures are maintained at a safe level prior to crew rescue. This paper examines the natural environments, the environments created inside the cabin, and the constraints associated with post-landing operations that affect the temperature of the crew member. Models of the capsule and the crew members are examined, and analysis results are compared to the requirement for safe human exposure. Further, recommendations for updated modeling techniques and operational limits are included.

  19. A short-range objective nocturnal temperature forecasting model

    NASA Technical Reports Server (NTRS)

    Sutherland, R. A.

    1980-01-01

    A relatively simple, objective, nocturnal temperature forecasting model suitable for freezing and near-freezing conditions has been designed so that a user, presumably a weather forecaster, can enter standard meteorological data for a particular location and receive an hour-by-hour prediction of surface and air temperatures at that location for an entire night. The user has the option of supplying his own estimates of wind speed and background sky radiation, which are treated as independent variables. An analysis of 141 test runs shows that the model predicts to within 1 °C 57.4% of the time for the best cases and to within 3 °C for 98.0% of all cases.
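    The abstract does not reproduce the model equations, so as an illustration only, a classic empirical form for clear-night surface cooling, the Brunt square-root law (an assumption here, not necessarily this paper's formulation), yields the same kind of hour-by-hour forecast; the coefficient value is purely illustrative:

```python
import math

def nocturnal_forecast(t_dusk_c, c=1.2, hours=12):
    """Hour-by-hour surface temperature under the Brunt square-root
    cooling law T(t) = T0 - c*sqrt(t). The coefficient c (deg C per
    sqrt(hour)) lumps together net longwave loss, wind, and soil
    thermal properties; 1.2 is an illustrative value, not from the
    paper."""
    return [t_dusk_c - c * math.sqrt(h) for h in range(hours + 1)]

# Forecast for a night starting at 5 deg C at dusk.
temps = nocturnal_forecast(t_dusk_c=5.0)
```

    A forecaster would tune c to local conditions; the paper's model additionally takes wind speed and background sky radiation as independent inputs.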

  20. Modeling compressive reaction and estimating model uncertainty in shock loaded porous samples of Hexanitrostilbene (HNS)

    NASA Astrophysics Data System (ADS)

    Brundage, Aaron; Gump, Jared

    2011-06-01

    Neat pressings of HNS powders have been used in many explosive applications for over 50 years. However, characterization of its crystalline properties has lagged that of other explosives, and the solid stress has been inferred from impact experiments or estimated from mercury porosimetry. This lack of knowledge of the precise crystalline isotherm can contribute to large model uncertainty in the reacted response of pellets to shock impact. At high impact stresses, deflagration-to-detonation transition (DDT) processes initiated by compressive reaction have been interpreted from velocity interferometry at the surface of distended HNS-FP pellets. In particular, the Baer-Nunziato multiphase model in CTH, Sandia's Eulerian, finite volume shock propagation code, was used to predict compressive waves in pellets having approximately a 60% theoretical maximum density (TMD). These calculations were repeated with newly acquired isothermal compression measurements of fine-particle HNS using diamond anvil cells to compress the sample and powder x-ray diffraction to obtain the sample volume at each pressure point. Hence, estimating the model uncertainty provides a simple method for conveying the impact of future model improvements based upon new experimental data.

  1. Modeling compressive reaction and estimating model uncertainty in shock loaded porous samples of hexanitrostilbene (HNS)

    NASA Astrophysics Data System (ADS)

    Brundage, Aaron L.; Gump, Jared C.

    2012-03-01

    Neat pressings of HNS powders have been used in many explosive applications for over 50 years. However, characterization of its crystalline properties has lagged that of other explosives, and the solid stress has been inferred from impact experiments or estimated from mercury porosimetry. This lack of knowledge of the precise crystalline isotherm can contribute to large model uncertainty in the reacted response of pellets to shock impact. At high impact stresses, deflagration-to-detonation transition (DDT) processes initiated by compressive reaction have been interpreted from velocity interferometry at the surface of distended HNS-FP pellets. In particular, the Baer-Nunziato multiphase model in CTH, Sandia's Eulerian, finite volume shock propagation code, was used to predict compressive waves in pellets having approximately a 60% theoretical maximum density (TMD). These calculations were repeated with newly acquired isothermal compression measurements of fine-particle HNS using diamond anvil cells to compress the sample and powder x-ray diffraction to obtain the sample volume at each pressure point. Hence, estimating the model uncertainty provides a simple method for conveying the impact of future model improvements based upon new experimental data.

  2. Preliminary fission track ages for samples from western Maine: Implications for the low temperature thermal history of the region

    SciTech Connect

    Lux, D.R. . Dept. of Geological Sciences); Johnson, K. )

    1992-01-01

    In order to elucidate the low temperature cooling history of the region, 12 preliminary samples from a N-S traverse from the central Sebago batholith to the Chain of Ponds pluton were selected for fission track dating. The range of Ar-40/Ar-39 ages from the selected samples is 140 Ma (371--231 Ma) for biotites, 63 Ma (304--241 Ma) for muscovites and 61 Ma (375--304 Ma) for hornblendes. Twelve samples were dated by the external detector method and yield ages between 114 and 79 Ma. The southernmost samples from the Sebago batholith and Songo pluton, within the highest metamorphic zones, are the youngest and range from 101 to 79 Ma. Those from the Mooselookmeguntic and Chain of Ponds plutons, within lower metamorphic zones, vary between 114 and 92 Ma. This general discordance trend is similar to, but of much smaller magnitude than, the regional pattern of Ar-40/Ar-39 cooling ages. The much smaller range of apatite ages than biotite ages suggests that large differences in the thermal regime across the region during Late Paleozoic time had largely been erased by the Early Cretaceous. The new fission track ages are interpreted to represent regional cooling through the apatite closure temperature, assumed to be ca. 100 °C. Young apatite ages may be the result of a regional thermal disturbance related to the intrusion of magmas of the White Mountains Plutonic Suite, as the youngest plutons are similar in age to the apatites. Alternatively, they could be the result of regional exhumation of the Acadian orogen. The authors conclude that the latter interpretation is more consistent with their data and attribute the ages to the time of regional exhumation and uplift through the apatite closure temperature.

  3. Solid state convection models of lunar internal temperature

    NASA Technical Reports Server (NTRS)

    Schubert, G.; Young, R. E.; Cassen, P.

    1975-01-01

    Thermal models of the Moon were made which include cooling by subsolidus creep and consideration of the creep behavior of geologic material. Measurements from the Apollo program on seismic velocities, electrical conductivity of the Moon's interior, and heat flux at two locations were used in the calculations. Estimates of 1500 to 1600 K were calculated for the temperature, and 10^21 to 10^22 sq cm/sec for the viscosity of the deep lunar interior.

  4. Reheating temperature in non-minimal derivative coupling model

    SciTech Connect

    Sadjadi, H. Mohseni; Goodarzi, Parviz E-mail: p_goodarzi@ut.ac.ir

    2013-07-01

    We consider the inflaton as a scalar field described by a non-minimal derivative coupling model with a power-law potential. We study slow-roll inflation, the rapid oscillation phase, and the radiation-dominated and recombination eras, and estimate the number of e-folds during each epoch. Using these results and recent astrophysical data, we determine the reheating temperature in terms of the spectral index and the amplitude of the power spectrum of scalar perturbations.

  5. An Empirical Temperature Variance Source Model in Heated Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  6. 12 CFR Appendix B to Part 1030 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Model Clauses and Sample Forms B Appendix B to.... 1030, App. B Appendix B to Part 1030—Model Clauses and Sample Forms 1. Modifications. Institutions that modify the model clauses will be deemed in compliance as long as they do not delete required...

  7. 12 CFR Appendix B to Part 230 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 4 2014-01-01 2014-01-01 false Model Clauses and Sample Forms B Appendix B to... SYSTEM (CONTINUED) TRUTH IN SAVINGS (REGULATION DD) Pt. 230, App. B Appendix B to Part 230—Model Clauses and Sample Forms Table of contents B-1—Model Clauses for Account Disclosures (Section 230.4(b))...

  8. 12 CFR Appendix B to Part 230 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 4 2012-01-01 2012-01-01 false Model Clauses and Sample Forms B Appendix B to... SYSTEM (CONTINUED) TRUTH IN SAVINGS (REGULATION DD) Pt. 230, App. B Appendix B to Part 230—Model Clauses and Sample Forms Table of contents B-1—Model Clauses for Account Disclosures (Section 230.4(b))...

  9. 12 CFR Appendix B to Part 230 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 4 2013-01-01 2013-01-01 false Model Clauses and Sample Forms B Appendix B to... SYSTEM (CONTINUED) TRUTH IN SAVINGS (REGULATION DD) Pt. 230, App. B Appendix B to Part 230—Model Clauses and Sample Forms Table of contents B-1—Model Clauses for Account Disclosures (Section 230.4(b))...

  10. A regional neural network model for predicting mean daily river water temperature

    USGS Publications Warehouse

    Wagner, Tyler; DeWeber, Jefferson Tyrell

    2014-01-01

    Water temperature is a fundamental property of river habitat and often a key aspect of river resource management, but measurements to characterize thermal regimes are not available for most streams and rivers. As such, we developed an artificial neural network (ANN) ensemble model to predict mean daily water temperature in 197,402 individual stream reaches during the warm season (May–October) throughout the native range of brook trout Salvelinus fontinalis in the eastern U.S. We compared four models with different groups of predictors to determine how well water temperature could be predicted by climatic, landform, and land cover attributes, and used the median prediction from an ensemble of 100 ANNs as our final prediction for each model. The final model included air temperature, landform attributes and forested land cover and predicted mean daily water temperatures with moderate accuracy as determined by root mean squared error (RMSE) at 886 training sites with data from 1980 to 2009 (RMSE = 1.91 °C). Based on validation at 96 sites (RMSE = 1.82 °C) and separately for data from 2010 (RMSE = 1.93 °C), a year with relatively warmer conditions, the model was able to generalize to new stream reaches and years. The most important predictors were mean daily air temperature, prior 7 day mean air temperature, and network catchment area according to sensitivity analyses. Forest land cover at both riparian and catchment extents had relatively weak but clear negative effects. Predicted daily water temperature averaged for the month of July matched expected spatial trends with cooler temperatures in headwaters and at higher elevations and latitudes. Our ANN ensemble is unique in predicting daily temperatures throughout a large region, while other regional efforts have predicted at relatively coarse time steps. The model may prove a useful tool for predicting water temperatures in sampled and unsampled rivers under current conditions and future projections of climate
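    The ensemble-median step described above is easy to sketch. In this hedged stand-in, simple perturbed linear "models" mimic 100 ANNs trained from different random initialisations (the paper's real predictors and network architecture are omitted; all numbers are illustrative):

```python
import random
import statistics

def predict_ensemble(models, x):
    """Final prediction is the median across the ensemble, as in the
    paper's 100-member ANN ensemble; the median damps unstable
    individual members."""
    return statistics.median(m(x) for m in models)

# Stand-in "models": linear fits with slightly perturbed parameters,
# mimicking ANNs trained from different random initialisations.
random.seed(0)
models = [lambda x, a=random.gauss(0.8, 0.05), b=random.gauss(2.0, 0.3):
          a * x + b
          for _ in range(100)]

air_temp = 20.0  # mean daily air temperature, deg C (illustrative input)
water_temp = predict_ensemble(models, air_temp)
```

    Each lambda freezes its own randomly drawn parameters via default arguments, so the 100 members genuinely differ while sharing one input.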

  11. Finite-temperature corrections in the dilated chiral quark model

    SciTech Connect

    Kim, Y.; Lee, Hyun Kyu |; Rho, M. |

    1995-03-01

    We calculate the finite-temperature corrections in the dilated chiral quark model using the effective potential formalism. Assuming that the dilaton limit is applicable at some short length scale, we interpret the results to represent the behavior of hadrons in dense and hot matter. We obtain the scaling law f_π(T)/f_π = m_Q(T)/m_Q ≈ m_σ(T)/m_σ, while we argue, using PCAC, that the pion mass does not scale within the temperature range involved in our Lagrangian. It is found that the hadron masses and the pion decay constant drop faster with temperature in the dilated chiral quark model than in the conventional linear sigma model that does not take into account the QCD scale anomaly. We attribute the difference in scaling in the heat bath to the effect of baryonic medium on thermal properties of the hadrons. Our finding would imply that the AGS experiments (dense and hot matter) and the RHIC experiments (hot and dilute matter) will "see" different hadron properties in the hadronization exit phase.

  12. K-string tensions at finite temperature and integrable models

    NASA Astrophysics Data System (ADS)

    Caselle, Michele; Giudice, Pietro; Gliozzi, Ferdinando; Grinza, Paolo; Lottini, Stefano

    2007-11-01

    It has recently been pointed out that simple scaling properties of Polyakov correlation functions of gauge systems in the confining phase suggest that the ratios of k-string tensions in the low temperature region are constant up to terms of order T^3. Here we argue that, at least in a three-dimensional Z_4 gauge model, the above ratios are constant in the whole confining phase. This result is obtained by combining numerical experiments with known exact results on the mass spectrum of an integrable two-dimensional spin model describing the infrared behaviour of the gauge system near the deconfining transition.

  13. Modeling sample variables with an Experimental Factor Ontology

    PubMed Central

    Malone, James; Holloway, Ele; Adamusiak, Tomasz; Kapushesky, Misha; Zheng, Jie; Kolesnikov, Nikolay; Zhukova, Anna; Brazma, Alvis; Parkinson, Helen

    2010-01-01

    Motivation: Describing biological sample variables with ontologies is complex due to the cross-domain nature of experiments. Ontologies provide annotation solutions; however, for cross-domain investigations, multiple ontologies are needed to represent the data. These are subject to rapid change, are often not interoperable and present complexities that are a barrier to biological resource users. Results: We present the Experimental Factor Ontology, designed to meet cross-domain, application focused use cases for gene expression data. We describe our methodology and open source tools used to create the ontology. These include tools for creating ontology mappings, ontology views, detecting ontology changes and using ontologies in interfaces to enhance querying. The application of reference ontologies to data is a key problem, and this work presents guidelines on how community ontologies can be presented in an application ontology in a data-driven way. Availability: http://www.ebi.ac.uk/efo Contact: malone@ebi.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20200009

  14. Space resection model calculation based on Random Sample Consensus algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Xinzhu; Kang, Zhizhong

    2016-03-01

    Resection has long been one of the central problems in photogrammetry. It aims to recover the position and attitude of the camera at the moment of exposure. In some cases, however, the observations used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with a DLT model, which effectively avoids the difficulty of determining initial values when using the collinearity equations. The results also show that this strategy can exclude gross errors and leads to an accurate and efficient way to obtain the elements of exterior orientation.
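    The RANSAC strategy the paper applies (draw a minimal sample, fit a candidate model, count inliers, keep the consensus maximiser) can be illustrated with a simpler model. The 2-point line fit below stands in for the DLT resection step, which is an assumption of this sketch, not the paper's code:

```python
import random

def ransac_fit(points, n_iter=200, tol=0.5, seed=1):
    """Generic RANSAC loop: sample a minimal set, fit a candidate
    model, count inliers, and keep the model with the largest
    consensus set. The paper uses the same loop with a DLT resection
    model in place of the 2-point line fit used here."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                       # degenerate sample, skip
            continue
        a = (y2 - y1) / (x2 - x1)          # fit line y = a*x + b
        b = y1 - a * x1
        inliers = [p for p in points
                   if abs(p[1] - (a * p[0] + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 20 observations on y = 2x + 1 plus two gross errors.
pts = ([(float(x), 2.0 * x + 1.0) for x in range(20)]
       + [(3.0, 40.0), (7.0, -15.0)])
model, inliers = ransac_fit(pts)
```

    Despite the two gross errors, the consensus model recovers the underlying line, which is exactly how the resection algorithm excludes bad observations without needing good initial values.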

  15. Comparison of Temperature-Index Snowmelt Models for Use within an Operational Water Quality Model.

    PubMed

    Watson, Brett M; Putz, Gordon

    2014-01-01

    The accurate prediction of snowmelt runoff is a critical component of integrated hydrological and water quality models in regions where snowfall constitutes a significant portion of the annual precipitation. In cold regions, the accumulation of a snowpack and the subsequent spring snowmelt generally constitute a major proportion of the annual water yield. Furthermore, the snowmelt runoff transports significant quantities of sediment and nutrients to receiving streams and strongly influences downstream water quality. Temperature-index models are commonly used in operational hydrological and water quality models to predict snowmelt runoff. Due to their simplicity, computational efficiency, low data requirements, and ability to consistently achieve good results, numerous temperature-index models of varying complexity have been developed in the past few decades. The objective of this study was to determine how temperature-index models of varying complexity would affect the runoff predictions of a modified version of the water quality model SWAT developed for watersheds dominated by boreal forest. Temperature-index models used by several operational hydrological models were incorporated into this modified SWAT. Model performance was tested on five watersheds on the Canadian Boreal Plain whose hydrologic response is dominated by snowmelt runoff. The results of this study indicate that simpler temperature-index models can perform as well as more complex temperature-index models for predicting runoff from the study watersheds. This outcome has important implications because incorporating simpler temperature-index snowmelt models into hydrological and water quality models can reduce the number of parameters that need to be optimized without sacrificing predictive accuracy.
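    The simplest member of the temperature-index family such studies compare is the classic degree-day method. A minimal sketch follows; the melt factor, base temperature, and snowpack values are illustrative, not taken from the paper:

```python
def degree_day_melt(daily_temps_c, melt_factor=3.0, base_temp=0.0,
                    swe=50.0):
    """Classic degree-day snowmelt: M = Cm * max(T - Tbase, 0), capped
    by the remaining snow water equivalent (SWE, mm). Cm is the melt
    factor in mm per deg C per day; 3.0 is an illustrative value.
    Returns the daily melt series and the SWE left in the pack."""
    melt_series = []
    for t in daily_temps_c:
        melt = min(max(t - base_temp, 0.0) * melt_factor, swe)
        swe -= melt
        melt_series.append(melt)
    return melt_series, swe

# Four days spanning the onset of melt: no melt below the base
# temperature, then melt proportional to the degree-day excess.
melt, remaining_swe = degree_day_melt([-2.0, 1.0, 4.0, 6.0])
```

    More complex variants in the family add radiation terms or seasonally varying melt factors, which is precisely the extra parameterization the study found unnecessary for its watersheds.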

  16. Chemical vapor deposition modeling for high temperature materials

    NASA Technical Reports Server (NTRS)

    Goekoglu, Sueleyman

    1992-01-01

    The formalism for the accurate modeling of chemical vapor deposition (CVD) processes has matured based on the well established principles of transport phenomena and chemical kinetics in the gas phase and on surfaces. The utility and limitations of such models are discussed in practical applications for high temperature structural materials. Attention is drawn to the complexities and uncertainties in chemical kinetics. Traditional approaches based on only equilibrium thermochemistry and/or transport phenomena are defended as useful tools, within their validity, for engineering purposes. The role of modeling is discussed within the context of establishing the link between CVD process parameters and material microstructures/properties. It is argued that CVD modeling is an essential part of designing CVD equipment and controlling/optimizing CVD processes for the production and/or coating of high performance structural materials.

  17. Two dimensional modelling of three core cable transient temperature rise

    SciTech Connect

    Lyall, J. )

    1990-01-01

    This paper describes a study of the transient temperature rise of a three core cable. Results from a computer program that models the two dimensional heat flow are compared with those obtained using the normally applied one dimensional model. The modelling technique is an alternative to the finite difference and finite element methods. It develops the concept of a thermal resistance/capacitance analogue, as can be done using the finite difference method, but does so more directly, without the need to use the partial differential equation. In addition, it provides the flexibility of the finite element method when modelling a complex geometry and material combination, such as that found in a 3-core cable, without the complexity of its mathematics.
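    The thermal resistance/capacitance analogue can be shown on a single lumped node; the cable model in the paper couples many such nodes in two dimensions, and all values below are illustrative:

```python
def rc_transient(q_in, r_th, c_th, t_amb=20.0, dt=1.0, steps=2000):
    """Single-node thermal RC analogue stepped with explicit Euler:
    C * dT/dt = q - (T - Tamb) / R, with q in W, R in K/W, C in J/K.
    The steady-state temperature rise above ambient is q * R."""
    temps = [t_amb]
    t = t_amb
    for _ in range(steps):
        t += dt * (q_in - (t - t_amb) / r_th) / c_th
        temps.append(t)
    return temps

# 10 W of conductor loss into a node with R = 2 K/W and C = 100 J/K:
# the temperature rises exponentially toward 20 + 10 * 2 = 40 deg C
# with time constant R * C = 200 s.
history = rc_transient(q_in=10.0, r_th=2.0, c_th=100.0)
```

    The appeal of the analogue, as the abstract notes, is that the network can be written down directly from the cable geometry without first forming the heat conduction partial differential equation.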

  18. Control and diagnosis of temperature, density, and uniformity in x-ray heated iron/magnesium samples for opacity measurements

    NASA Astrophysics Data System (ADS)

    Nagayama, T.; Bailey, J. E.; Loisel, G.; Hansen, S. B.; Rochau, G. A.; Mancini, R. C.; MacFarlane, J. J.; Golovkin, I.

    2014-05-01

    Experimental tests are in progress to evaluate the accuracy of the modeled iron opacity at solar interior conditions, in particular to better constrain the solar abundance problem [S. Basu and H. M. Antia, Phys. Rep. 457, 217 (2008)]. Here, we describe measurements addressing three of the key requirements for reliable opacity experiments: control of sample conditions, independent sample condition diagnostics, and verification of sample condition uniformity. The opacity samples consist of iron/magnesium layers tamped by plastic. By changing the plastic thicknesses, we have controlled the iron plasma conditions to reach (1) Te = 167 ± 3 eV and ne = (7.1 ± 1.5) × 10^21 cm^-3, (2) Te = 170 ± 2 eV and ne = (2.0 ± 0.2) × 10^22 cm^-3, and (3) Te = 196 ± 6 eV and ne = (3.8 ± 0.8) × 10^22 cm^-3, which were measured by magnesium tracer K-shell spectroscopy. The opacity sample non-uniformity was directly measured by a separate experiment in which Al is mixed into the side of the sample facing the radiation source and Mg into the other side. The iron condition was confirmed to be uniform within the measurement uncertainties by Al and Mg K-shell spectroscopy. The conditions are suitable for testing opacity calculations needed for modeling the solar interior, other stars, and high energy density plasmas.

  19. Determination of filbertone in spiked olive oil samples using headspace-programmed temperature vaporization-gas chromatography-mass spectrometry.

    PubMed

    Pérez Pavón, José Luis; del Nogal Sánchez, Miguel; Fernández Laespada, María Esther; Moreno Cordero, Bernardo

    2009-07-01

    A sensitive method for the fast analysis of filbertone in spiked olive oil samples is presented. The applicability of a headspace (HS) autosampler in combination with a gas chromatograph (GC) equipped with a programmable temperature vaporizer (PTV) and a mass spectrometric (MS) detector is explored. A modular accelerated column heater (MACH) was used to control the temperature of the capillary gas chromatography column. This module can be heated and cooled very rapidly, shortening total analysis cycle times to a considerable extent. The proposed method does not require any previous analyte extraction, filtration and preconcentration step, as in most methods described to date. Sample preparation is reduced to placing the olive oil sample in the vial. This reduces the analysis time and the experimental errors associated with this step of the analytical process. By using headspace generation, the volatiles of the sample are analysed without interference by the non-volatile matrix, and by using injection in solvent-vent mode at the PTV inlet, most of the compounds that are more volatile than filbertone are purged and the matrix effect is minimised. Use of a liner packed with Tenax-TA allowed the compound of interest to be retained during the venting process. The limits of detection and quantification were as low as 0.27 and 0.83 µg/L, respectively, and precision (measured as the relative standard deviation) was 5.7%. The method was applied to the determination of filbertone in spiked olive oil samples and the results revealed the good accuracy obtained with the method.

  20. COMPUTER MODEL OF TEMPERATURE DISTRIBUTION IN OPTICALLY PUMPED LASER RODS

    NASA Technical Reports Server (NTRS)

    Farrukh, U. O.

    1994-01-01

    Managing the thermal energy that accumulates within a solid-state laser material under active pumping is of critical importance in the design of laser systems. Earlier models that calculated the temperature distribution in laser rods were single dimensional and assumed laser rods of infinite length. This program presents a new model which solves the temperature distribution problem for finite dimensional laser rods and calculates both the radial and axial components of temperature distribution in these rods. The modeled rod is either side-pumped or end-pumped by a continuous or a single pulse pump beam. (At the present time, the model cannot handle a multiple pulsed pump source.) The optical axis is assumed to be along the axis of the rod. The program also assumes that it is possible to cool different surfaces of the rod at different rates. The user defines the laser rod material characteristics, determines the types of cooling and pumping to be modeled, and selects the time frame desired via the input file. The program contains several self checking schemes to prevent overwriting memory blocks and to provide simple tracing of information in case of trouble. Output for the program consists of 1) an echo of the input file, 2) diffusion properties, radius and length, and time for each data block, 3) the radial increments from the center of the laser rod to the outer edge of the laser rod, and 4) the axial increments from the front of the laser rod to the other end of the rod. This program was written in Microsoft FORTRAN77 and implemented on a Tandon AT with a 287 math coprocessor. The program can also run on a VAX 750 mini-computer. It has a memory requirement of about 147 KB and was developed in 1989.
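    A much-reduced analogue of the program's solver, one axial dimension with uniform pumping and both ends held at the coolant temperature, conveys the finite-dimensional-rod idea; the grid and material numbers are illustrative, not the program's:

```python
def rod_temperature(n=21, alpha=1.0, source=1.0, t_cool=0.0,
                    dx=1.0, dt=0.1, steps=5000):
    """Explicit finite-difference solution of dT/dt = alpha * T'' + S
    on a 1-D rod whose ends are clamped at the coolant temperature.
    Stability of the explicit scheme requires alpha * dt / dx**2
    <= 0.5 (here 0.1). The real program solves both the radial and
    axial directions, with different cooling on different surfaces."""
    T = [t_cool] * n
    for _ in range(steps):
        T_new = T[:]
        for i in range(1, n - 1):
            lap = (T[i + 1] - 2.0 * T[i] + T[i - 1]) / dx ** 2
            T_new[i] = T[i] + dt * (alpha * lap + source)
        T = T_new
    return T

# After many steps the profile approaches the parabolic steady state
# T(x) = S * x * (L - x) / (2 * alpha), peaking at mid-rod.
T = rod_temperature()
```

    Extending the loop to a second (radial) index, with a pump-deposition profile for the source term, gives the structure of the 2-D solver the abstract describes.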

  1. Effects of electrostatic discharge on three cryogenic temperature sensor models

    NASA Astrophysics Data System (ADS)

    Courts, S. Scott; Mott, Thomas B.

    2014-01-01

    Cryogenic temperature sensors are not usually thought of as electrostatic discharge (ESD) sensitive devices. However, the most common cryogenic thermometers in use today are thermally sensitive diodes or resistors - both electronic devices in their base form. As such, they are sensitive to ESD at some level above which either catastrophic or latent damage can occur. Instituting an ESD program for safe handling and installation of the sensor is costly and it is desirable to balance the risk of ESD damage against this cost. However, this risk cannot be evaluated without specific knowledge of the ESD vulnerability of the devices in question. This work examines three types of cryogenic temperature sensors for ESD sensitivity - silicon diodes, Cernox™ resistors, and wire wound platinum resistors, all manufactured by Lake Shore Cryotronics, Inc. Testing was performed per TIA/EIA FOTP129 (Human Body Model). Damage was found to occur in the silicon diode sensors at discharge levels of 1,500 V. For Cernox™ temperature sensors, damage was observed at 3,500 V. The platinum temperature sensors were not damaged by ESD exposure levels of 9,900 V. At the lower damage limit, both the silicon diode and the Cernox™ temperature sensors showed relatively small calibration shifts of 1 to 3 K at room temperature. The diode sensors were stable with time and thermal cycling, but the long term stability of the Cernox™ sensors was degraded. Catastrophic failure occurred at higher levels of ESD exposure.

  2. Effects of electrostatic discharge on three cryogenic temperature sensor models

    SciTech Connect

    Courts, S. Scott; Mott, Thomas B.

    2014-01-29

    Cryogenic temperature sensors are not usually thought of as electrostatic discharge (ESD) sensitive devices. However, the most common cryogenic thermometers in use today are thermally sensitive diodes or resistors - both electronic devices in their base form. As such, they are sensitive to ESD at some level above which either catastrophic or latent damage can occur. Instituting an ESD program for safe handling and installation of the sensor is costly and it is desirable to balance the risk of ESD damage against this cost. However, this risk cannot be evaluated without specific knowledge of the ESD vulnerability of the devices in question. This work examines three types of cryogenic temperature sensors for ESD sensitivity - silicon diodes, Cernox™ resistors, and wire wound platinum resistors, all manufactured by Lake Shore Cryotronics, Inc. Testing was performed per TIA/EIA FOTP129 (Human Body Model). Damage was found to occur in the silicon diode sensors at discharge levels of 1,500 V. For Cernox™ temperature sensors, damage was observed at 3,500 V. The platinum temperature sensors were not damaged by ESD exposure levels of 9,900 V. At the lower damage limit, both the silicon diode and the Cernox™ temperature sensors showed relatively small calibration shifts of 1 to 3 K at room temperature. The diode sensors were stable with time and thermal cycling, but the long term stability of the Cernox™ sensors was degraded. Catastrophic failure occurred at higher levels of ESD exposure.

  3. Computer Modeling of Planetary Surface Temperatures in Introductory Astronomy Courses

    NASA Astrophysics Data System (ADS)

    Barker, Timothy; Goodman, J.

    2013-01-01

    Barker, T., and Goodman, J. C., Wheaton College, Norton, MA. Computer modeling is an essential part of astronomical research, and so it is important that students be exposed to its powers and limitations in the first (and, perhaps, only) astronomy course they take in college. Building on the ideas of Walter Robinson (“Modeling Dynamic Systems,” Springer, 2002) we have found that STELLA software (ISEE Systems) allows introductory astronomy students to do sophisticated modeling by the end of two classes of instruction, with no previous experience in computer programming or calculus. STELLA’s graphical interface allows students to visualize systems in terms of “flows” in and out of “stocks,” avoiding the need to invoke differential equations. Linking flows and stocks allows feedback systems to be constructed. Students begin by building an easily understood system: a leaky bucket. This is a simple negative feedback system in which the volume in the bucket (a “stock”) depends on a fixed inflow rate and an outflow that increases in proportion to the volume in the bucket. Students explore how changing inflow rate and feedback parameters affect the steady-state volume and equilibration time of the system. This model is completed within a 50-minute class meeting. In the next class, students are given an analogous but more sophisticated problem: modeling a planetary surface temperature (“stock”) that depends on the “flow” of energy from the Sun, the planetary albedo, the outgoing flow of infrared radiation from the planet’s surface, and the infrared return from the atmosphere. Students then compare their STELLA model equilibrium temperatures to observed planetary temperatures, which agree with model ones for worlds without atmospheres, but give underestimates for planets with atmospheres, thus introducing students to the concept of greenhouse warming. We find that if we give the students part of this model at the start of a 50-minute class they are
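    The radiative-balance calculation the students build in STELLA condenses to a few lines. The crude greenhouse factor g (fraction of surface infrared returned by the atmosphere) is this sketch's assumption, not a description of the STELLA model's internals, and the 0.39 value is illustrative:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp(solar_flux, albedo, greenhouse=0.0):
    """Radiative balance: absorbed sunlight S * (1 - A) / 4 equals
    emitted sigma * T**4, reduced by the fraction g of infrared
    returned by the atmosphere, so
    T = (S * (1 - A) / (4 * sigma * (1 - g))) ** 0.25."""
    absorbed = solar_flux * (1.0 - albedo) / 4.0
    return (absorbed / (SIGMA * (1.0 - greenhouse))) ** 0.25

# Earth with no atmosphere: about 255 K, well below the observed
# 288 K; a crude greenhouse term closes the gap, mirroring the
# underestimate the students discover for planets with atmospheres.
t_earth_bare = equilibrium_temp(1361.0, 0.30)
t_earth_gh = equilibrium_temp(1361.0, 0.30, greenhouse=0.39)
```

    Replacing the closed-form fourth root with a time-stepped stock-and-flow loop reproduces the feedback behaviour (equilibration toward the balance temperature) that STELLA animates for the students.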

  4. Temperature dependence of full set tensor properties of KTiOPO4 single crystal measured from one sample

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Tang, Liguo; Ji, Nianjing; Liu, Gang; Wang, Jiyang; Jiang, Huaidong; Cao, Wenwu

    2016-03-01

    The temperature dependence of the complete set of elastic, dielectric, and piezoelectric constants of KTiOPO4 single crystal has been measured from 20 °C to 150 °C. All 17 independent constants for the mm2 symmetry piezoelectric crystal were measured from one sample using extended resonance ultrasound spectroscopy (RUS), which guaranteed the self-consistency of the matrix data. The unique characteristics of the RUS method made this challenging task possible; it could not be accomplished by any other existing method. It was found that the elastic constants (c11^E, c13^E, c22^E, and c33^E) and piezoelectric constants (d15, d24, and d32) strongly depend on temperature, while the other constants are only weakly temperature dependent in this temperature range. These as-grown single-domain data allowed us to calculate the orientation dependence of the elastic, dielectric, and piezoelectric properties of KTiOPO4, which is useful for finding the optimum cut for particular applications.

  5. A Physical Method for Generating the Surface Temperature from Passive Microwave Observations by Addressing the Thermal Sampling Depth for Barren Land

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Zhou, J.; Dai, F.

    2015-12-01

    The land surface temperature (LST) is an important parameter in studying global and regional climate change. Passive microwave (PMW) remote sensing is less influenced by the atmosphere and has a unique advantage in cloudy regions compared to satellite thermal infrared (TIR) remote sensing. However, the accuracy of LST estimation of many PMW remote sensing models, especially over barren land, is unsatisfactory due to the neglected discrepancy in thermal sampling depth between PMW and TIR. Here, a physical method for PMW remote sensing is proposed to generate the surface temperature, which has the same physical meaning as the TIR surface temperature, by addressing the thermal sampling depth over barren land surfaces. The method was applied to the Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) data. Validation with synchronous Moderate Resolution Imaging Spectroradiometer (MODIS) LSTs demonstrates that the method performs better in estimating LSTs than two other methods that neglect the thermal sampling depth. In Northwest China and part of Mongolia, the root mean squared errors (RMSEs) of the physical method were 3.9 K and 3.7 K for daytime and nighttime cases, respectively. In the region of western Namibia, the corresponding RMSEs were 3.8 K and 4.5 K. Further comparison with in-situ measured LSTs at a ground station confirmed the better performance of the proposed method compared with the other two methods. The proposed method will be beneficial for improving the accuracy of LSTs estimated from PMW observations and for integrating the LST products generated from both TIR and PMW remote sensing.
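
The RMSE figures quoted above follow from the standard definition. A minimal sketch, with made-up LST values standing in for the AMSR-E estimates and MODIS references (not data from the study):

```python
import math

def rmse(estimated, reference):
    """Root mean squared error between estimated and reference LSTs (K)."""
    n = len(estimated)
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimated, reference)) / n)

# Illustrative values only, not the AMSR-E/MODIS data from the study.
modis = [290.1, 295.4, 288.7, 301.2]
pmw_est = [293.5, 291.8, 292.0, 304.9]
print(round(rmse(pmw_est, modis), 2))  # prints 3.5
```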

  6. Determination of plasma temperature and electron density of iron in iron slag samples using laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Hussain, T.; Gondal, M. A.; Shamraiz, M.

    2016-08-01

    The plasma temperature and electron density of iron in iron slag samples taken from a local plant were studied. Optimal experimental conditions were evaluated using an Nd:YAG laser at 1064 nm. Some toxic elements were identified and quantitative measurements were also made. Plasma temperature and electron density were estimated using standard equations and well resolved iron spectral lines in the 229.06-358.11 nm region at 10, 20, 30 and 40 mJ laser pulse energy with a 4.5 μs delay time. Both parameters were found to increase with laser pulse energy. The Boltzmann distribution and the experimentally measured line intensities support the assumption that the laser-induced plasma was in local thermal equilibrium. It is worth mentioning that the iron and steel sector generates tons of solid waste and residues annually, containing a variety of contaminants that can be harmful to the environment; proper analysis and investigation of such iron slag is therefore important.
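
Estimating plasma temperature from relative line intensities, as described above, is commonly done with a Boltzmann plot: ln(Iλ/gA) is regressed against the upper-level energy, and the slope gives −1/(k_B·T). The sketch below uses illustrative placeholder line data, not the Fe lines or intensities from the study:

```python
import math

# Boltzmann-plot sketch: ln(I*lambda/(g*A)) = -E_upper/(k_B*T) + const.
K_B = 8.617e-5  # Boltzmann constant, eV/K

lines = [  # (intensity, wavelength_nm, g*A (s^-1), E_upper (eV)) -- illustrative
    (1200.0, 371.99, 4.0e8, 3.33),
    (800.0, 385.99, 3.0e8, 3.21),
    (300.0, 404.58, 8.0e8, 4.55),
]

x = [e for (_, _, _, e) in lines]
y = [math.log(i * w / ga) for (i, w, ga, _) in lines]

# Least-squares slope of y versus x; then T = -1/(k_B * slope)
n = len(x)
xm, ym = sum(x) / n, sum(y) / n
slope = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) \
        / sum((xi - xm) ** 2 for xi in x)
T = -1.0 / (K_B * slope)
print(f"Excitation temperature ~ {T:.0f} K")
```

With real data, the linearity of the plot itself is one check on the local-thermal-equilibrium assumption mentioned in the abstract.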

  7. Rheological modelling of physiological variables during temperature variations at rest

    NASA Astrophysics Data System (ADS)

    Vogelaere, P.; de Meyer, F.

    1990-06-01

    The evolution with time of cardio-respiratory variables, blood pressure and body temperature has been studied in six males, resting in semi-nude conditions during short (30 min) cold stress exposure (0°C) and during passive recovery (60 min) at 20°C. Passive cold exposure does not induce a change in HR but increases VO2, VCO2, VE and core temperature Tre, whereas peripheral temperature is significantly lowered. The kinetic evolution of the studied variables was investigated using a Kelvin-Voigt rheological model. The results suggest that the human body, and by extension the measured physiological variables of its functioning, does not react as a perfect viscoelastic system. Cold exposure induces a more rapid adaptation of heart rate, blood pressure and skin temperatures than that observed during the rewarming period (20°C), whereas respiratory adjustments show the opposite evolution. During the cooling period of the experiment the adaptive mechanisms, taking effect to preserve core homeothermy and to obtain a higher oxygen supply, increase the energy loss of the body.
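
For reference, the step response of a Kelvin-Voigt element (a spring of modulus E in parallel with a dashpot of viscosity η) under constant stress is ε(t) = (σ0/E)(1 − e^(−t/τ)) with retardation time τ = η/E; the differing equilibration speeds during cooling versus rewarming correspond to different effective τ values. A minimal sketch with illustrative parameters (not fitted to the physiological data):

```python
import math

# Step response of a Kelvin-Voigt element: spring E parallel to dashpot eta.
# Under constant stress sigma0 the strain relaxes toward sigma0/E with
# retardation time tau = eta/E.  Values are illustrative only.
def kelvin_voigt_strain(t, sigma0=1.0, E=2.0, eta=4.0):
    tau = eta / E
    return (sigma0 / E) * (1.0 - math.exp(-t / tau))

# After several retardation times the response is near equilibrium sigma0/E.
print(round(kelvin_voigt_strain(10.0), 3))
```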

  8. Low reheating temperatures in monomial and binomial inflationary models

    NASA Astrophysics Data System (ADS)

    Rehagen, Thomas; Gelmini, Graciela B.

    2015-06-01

    We investigate the allowed range of reheating temperature values in light of the Planck 2015 results and the recent joint analysis of Cosmic Microwave Background (CMB) data from the BICEP2/Keck Array and Planck experiments, using monomial and binomial inflationary potentials. While the well studied φ^2 inflationary potential is no longer favored by current CMB data, as well as φ^p with p > 2, a φ^1 potential and canonical reheating (w_re = 0) provide a good fit to the CMB measurements. In this last case, we find that the Planck 2015 68% confidence limit upper bound on the spectral index, n_s, implies an upper bound on the reheating temperature of T_re ≲ 6×10^10 GeV, and excludes instantaneous reheating. The low reheating temperatures allowed by this model open the possibility that dark matter could be produced during the reheating period instead of when the Universe is radiation dominated, which could lead to very different predictions for the relic density and momentum distribution of WIMPs, sterile neutrinos, and axions. We also study binomial inflationary potentials and show the effects of a small departure from a φ^1 potential. We find that as a subdominant φ^2 term in the potential increases, first instantaneous reheating becomes allowed, and then the lowest possible reheating temperature of T_re = 4 MeV is excluded by the Planck 2015 68% confidence limit.

  9. Tantalum strength model incorporating temperature, strain rate and pressure

    NASA Astrophysics Data System (ADS)

    Lim, Hojun; Battaile, Corbett; Brown, Justin; Lane, Matt

    Tantalum is a body-centered-cubic (BCC) refractory metal that is widely used in many applications in high-temperature, high-strain-rate and high-pressure environments. In this work, we propose a physically-based strength model for tantalum that incorporates the effects of temperature, strain rate and pressure. A constitutive model for single crystal tantalum is developed based on dislocation kink-pair theory, and calibrated to measurements on single crystal specimens. The model is then used to predict deformations of single- and polycrystalline tantalum. In addition, the proposed strength model is implemented into Sandia's ALEGRA solid dynamics code to predict plastic deformations of tantalum in engineering-scale applications at extreme conditions, e.g. Taylor impact tests and the Z machine's high pressure ramp compression tests, and the results are compared with available experimental data. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  10. Modeling problem behaviors in a nationally representative sample of adolescents.

    PubMed

    O'Connor, Kate L; Dolphin, Louise; Fitzgerald, Amanda; Dooley, Barbara

    2016-07-01

    Research on multiple problem behaviors has focused on the concept of Problem Behavior Syndrome (PBS). Problem Behavior Theory (PBT) is a complex and comprehensive social-psychological framework designed to explain the development of a range of problem behaviors. This study examines the structure of PBS and the applicability of PBT in adolescents. Participants were 6062 adolescents aged 12-19 (51.3% female) who took part in the My World Survey-Second Level (MWS-SL). Regarding PBS, Confirmatory Factor Analysis established that problem behaviors, such as alcohol and drug use, loaded significantly onto a single latent construct for males and females. Using Structural Equation Modeling, the PBT framework was found to be a good fit for males and females. The socio-demographic, perceived environment, and personality systems accounted for over 40% of the variance in problem behaviors for males and females. Our findings have important implications for understanding how differences in engaging in problem behaviors vary by gender. PMID:27161989

  11. Degradation of free tryptophan in a cookie model system and its application in commercial samples.

    PubMed

    Morales, Francisco J; Açar, Ozge C; Serpen, Arda; Arribas-Lorenzo, Gema; Gökmen, Vural

    2007-08-01

    The stability of free tryptophan (Trp) was examined in five cookie-resembling models at varying baking temperatures and durations. Trp was measured by HPLC coupled with a fluorescent detector. Trp degradation was significantly greater in cookies formulated with glucose compared with sucrose, regardless of the temperatures and durations of baking. A lag period was clearly observed in cookies formulated with sucrose. The type of sugar used in the dough formulation affected not only the thermal destruction kinetics but also the degree of degradation of free Trp. However, the type of leavening agent (ammonium bicarbonate versus sodium bicarbonate) did not affect the rate of Trp destruction as happens in Maillard-driven reactions. In addition, the free Trp content was analyzed in nine different flours and sixty-two commercial cookies, and it was found that free Trp varied from 0.4 to 1287.9 mg/kg for rice and wheat bran, respectively. It was found that free Trp was significantly higher in dietetic commercial samples formulated with wheat bran compared with other flours.

  12. Infrasound in Mesopause Temperatures: Modelling, Observations and Analyses

    NASA Astrophysics Data System (ADS)

    Pilger, Christoph; Schmidt, Carsten; Bittner, Michael

    2010-05-01

    Infrasound is typically observed in surface level measurements of the ambient air-pressure. A novel approach performed at the German Remote Sensing Data Center of the German Aerospace Center (DLR-DFD) is the detection of infrasonic signals in temperature time series of the mesopause altitude region (at about 80-100 km). The infrasonic pressure fluctuations correspond to temperature fluctuations in the atmosphere via ideal gas law assumptions. The development and magnitude of these fluctuations can be modelled regarding propagation, attenuation and amplification processes in the atmosphere. The modelling results are quantified in order to compare them to instrumental observations of mesopause temperatures. The observations are performed at DLR-DFD using the airglow measurement technique and the GRIPS instruments (GRound-based Infrared P-branch Spectrometers). Their temporal resolution of 15 seconds permits the observation of signals within the infrasound period range. Spectral intensities are estimated by applying wavelet analysis to the complete data set of more than one year of routine measurements in order to derive a statistical distribution of wave activity in the period range from 0.5 to 5 minutes. Selected events are discussed with respect to the origin of the observed structures.
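
The spectral step described above, isolating oscillation power in the 0.5-5 minute period band of a 15-second-resolution temperature series, can be sketched with a plain discrete Fourier transform (the study itself uses wavelet analysis); the synthetic series below is an assumption standing in for GRIPS data:

```python
import math
import cmath

DT = 15.0       # sampling interval in seconds, matching GRIPS resolution
N = 512
PERIOD = 120.0  # injected infrasound-like 2-minute oscillation (synthetic)

series = [200.0 + 0.5 * math.sin(2 * math.pi * DT * i / PERIOD) for i in range(N)]

mean = sum(series) / N
# Naive O(N^2) discrete Fourier transform -- fine for a sketch.
power = []
for k in range(1, N // 2):
    s = sum((series[i] - mean) * cmath.exp(-2j * math.pi * k * i / N)
            for i in range(N))
    power.append((N * DT / k, abs(s) ** 2))   # (period in seconds, power)

# Strongest peak inside the 0.5-5 minute period band (30-300 s)
band = [(p, pw) for p, pw in power if 30.0 <= p <= 300.0]
peak_period, _ = max(band, key=lambda t: t[1])
print(f"dominant period ~ {peak_period / 60:.1f} min")
```

A wavelet transform would additionally localize such events in time, which is why it is used for the year-long statistical analysis.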

  13. Impact of short-term storage temperature on determination of microbial community composition and abundance in aerated forest soil and anoxic pond sediment samples.

    PubMed

    Brandt, Franziska B; Breidenbach, Björn; Brenzinger, Kristof; Conrad, Ralf

    2014-12-01

    Sampling strategy is important for unbiased analysis of the characteristics of microbial communities in the environment. During field work it is not always possible to analyze fresh samples immediately or to store them frozen. Therefore, the effect of short-term storage temperature on the abundance and composition of bacterial, archaeal and denitrifying communities was investigated in environmental samples from two different sampling sites. Oxic forest soil and anoxic pond sediment were investigated by measuring microbial abundance (DNA) and transcriptional activity (RNA). Prior to investigating the effect of storage temperature, samples were analyzed immediately, in order to represent the original situation in the habitat. The effect of storage temperature was then determined after 11 days at different low temperatures (room temperature, 4 °C, −22 °C and −80 °C). Community profiling using terminal restriction fragment length polymorphism (T-RFLP) showed no significant differences between the immediately analyzed reference sample and the samples stored at the different temperatures, both for DNA and RNA extracts. The abundance of microbial communities was determined using quantitative PCR, which also revealed a stable community size at all temperatures tested. By contrast, incubation at an elevated temperature (37 °C) resulted in a changed bacterial community composition. In conclusion, short-term storage, even at room temperature, did not affect microbial community composition, abundance or transcriptional activity in aerated forest soil and anoxic pond sediment.

  14. How does observation uncertainty influence which stream water samples are most informative for model calibration?

    NASA Astrophysics Data System (ADS)

    Wang, Ling; van Meerveld, Ilja; Seibert, Jan

    2016-04-01

    Streamflow isotope samples taken during rainfall-runoff events are very useful for multi-criteria model calibration because they can help decrease parameter uncertainty and improve internal model consistency. However, the number of samples that can be collected and analysed is often restricted by practical and financial constraints. It is, therefore, important to choose an appropriate sampling strategy and to obtain samples that have the highest information content for model calibration. We used the Birkenes hydrochemical model and synthetic rainfall, streamflow and isotope data to explore which samples are most informative for model calibration. Starting with error-free observations, we investigated how many samples are needed to obtain a certain model fit. Based on different parameter sets, representing different catchments, and different rainfall events, we also determined which sampling times provide the most informative data for model calibration. Our results show that simulation performance for models calibrated with the isotopic data from two intelligently selected samples was comparable to simulations based on isotopic data for all 100 time steps. The models calibrated with the intelligently selected samples also performed better than the model calibrations with two benchmark sampling strategies (random selection and selection based on hydrologic information). Surprisingly, samples on the rising limb and at the peak were less informative than expected and, generally, samples taken at the end of the event were most informative. The timing of the most informative samples depends on the proportion of different flow components (baseflow, slow response flow, fast response flow and overflow). For events dominated by baseflow and slow response flow, samples taken at the end of the event after the fast response flow has ended were most informative; when the fast response flow was dominant, samples taken near the peak were most informative. However when overflow

  15. Baryon number dissipation at finite temperature in the standard model

    SciTech Connect

    Mottola, E. ); Raby, S. . Dept. of Physics); Starkman, G. . Dept. of Astronomy)

    1990-01-01

    We analyze the phenomenon of baryon number violation at finite temperature in the standard model, and derive the relaxation rate for the baryon density in the high temperature electroweak plasma. The relaxation rate, γ, is given in terms of real time correlation functions of the operator E·B, and is directly proportional to the sphaleron transition rate Γ: γ ∝ n_f Γ/T^3. Hence it is not instanton suppressed, as claimed by Cohen, Dugan and Manohar (CDM). We show explicitly how this result is consistent with the methods of CDM, once it is recognized that a new anomalous commutator is required in their approach. 19 refs., 2 figs.

  16. Comparison of ET estimations by the three-temperature model, SEBAL model and eddy covariance observations

    NASA Astrophysics Data System (ADS)

    Zhou, Xinyao; Bi, Shaojie; Yang, Yonghui; Tian, Fei; Ren, Dandan

    2014-11-01

    The three-temperature (3T) model is a simple model which estimates plant transpiration from only temperature data. In-situ field experimental results have shown that 3T is a reliable evapotranspiration (ET) estimation model. Despite encouraging results from recent efforts extending the 3T model to remote sensing applications, the literature shows limited comparisons of the 3T model with other remote sensing driven ET models. This research used ET obtained from eddy covariance to evaluate the 3T model and in turn compared the model-simulated ET with that of the more traditional SEBAL (Surface Energy Balance Algorithm for Land) model. A field experiment was conducted in the cotton fields of the Taklamakan desert oasis in Xinjiang, Northwest China. Radiation and surface temperature were obtained from hyperspectral and thermal infrared images for clear days in 2013. The images covered the time period of 0900-1800 h at four different phenological stages of cotton. Meteorological data were automatically recorded in a station located at the center of the cotton field. Results showed that the 3T model accurately captured daily and seasonal variations in ET. As low dry soil surface temperatures induced significant errors in the 3T model, it was unsuitable for estimating ET in the early morning and late afternoon periods. The model-simulated ET was relatively more accurate for the squaring, bolling and boll-opening stages than for the seedling stage of cotton, when ET was generally low. Wind speed was apparently not a limiting factor of ET in the 3T model. This was attributed to the fact that surface temperature, a vital input of the model, indirectly accounted for the effect of wind speed on ET. Although the 3T model slightly overestimated ET compared with SEBAL and eddy covariance, it was generally reliable for estimating daytime ET during 0900-1600 h.

  17. Fluid sampling and chemical modeling of geopressured brines containing methane. Final report, March 1980-February 1981

    SciTech Connect

    Dudak, B.; Galbraith, R.; Hansen, L.; Sverjensky, D.; Weres, O.

    1982-07-01

    The development of a flowthrough sampler capable of obtaining fluid samples from geopressured wells at temperatures up to 400 °F and pressures up to 20,000 psi is described. The sampler has been designed, fabricated from MP35N alloy, laboratory tested, and used to obtain fluid samples from a geothermal well at The Geysers, California. However, it has not yet been used in a geopressured well. The design features, test results, and operation of this device are described. Alternative sampler designs are also discussed. Another activity was to review the chemistry and geochemistry of geopressured brines and reservoirs, and to evaluate the utility of available computer codes for modeling the chemistry of geopressured brines. The thermodynamic data bases for such codes are usually the limiting factor in their application to geopressured systems, but it was concluded that existing codes can be updated with reasonable effort and can usefully explain and predict the chemical characteristics of geopressured systems, given suitable input data.

  18. Airfoil sampling of a pulsed Laval beam with tunable vacuum ultraviolet synchrotron ionization quadrupole mass spectrometry: Application to low-temperature kinetics and product detection

    NASA Astrophysics Data System (ADS)

    Soorkia, Satchin; Liu, Chen-Lin; Savee, John D.; Ferrell, Sarah J.; Leone, Stephen R.; Wilson, Kevin R.

    2011-12-01

    A new pulsed Laval nozzle apparatus with vacuum ultraviolet (VUV) synchrotron photoionization quadrupole mass spectrometry is constructed to study low-temperature radical-neutral chemical reactions of importance for modeling the atmosphere of Titan and the outer planets. A design for the sampling geometry of a pulsed Laval nozzle expansion has been developed that operates successfully for the determination of rate coefficients by time-resolved mass spectrometry. The new concept employs airfoil sampling of the collimated expansion with excellent sampling throughput. Time-resolved profiles of the high Mach number gas flow obtained by photoionization signals show that perturbation of the collimated expansion by the airfoil is negligible. The reaction of C2H with C2H2 is studied at 70 K as a proof-of-principle result for both low-temperature rate coefficient measurements and product identification based on the photoionization spectrum of the reaction product versus VUV photon energy. This approach can be used to provide new insights into reaction mechanisms occurring at kinetic rates close to the collision-determined limit.

  19. Airfoil sampling of a pulsed Laval beam with tunable vacuum ultraviolet synchrotron ionization quadrupole mass spectrometry: application to low-temperature kinetics and product detection.

    PubMed

    Soorkia, Satchin; Liu, Chen-Lin; Savee, John D; Ferrell, Sarah J; Leone, Stephen R; Wilson, Kevin R

    2011-12-01

    A new pulsed Laval nozzle apparatus with vacuum ultraviolet (VUV) synchrotron photoionization quadrupole mass spectrometry is constructed to study low-temperature radical-neutral chemical reactions of importance for modeling the atmosphere of Titan and the outer planets. A design for the sampling geometry of a pulsed Laval nozzle expansion has been developed that operates successfully for the determination of rate coefficients by time-resolved mass spectrometry. The new concept employs airfoil sampling of the collimated expansion with excellent sampling throughput. Time-resolved profiles of the high Mach number gas flow obtained by photoionization signals show that perturbation of the collimated expansion by the airfoil is negligible. The reaction of C(2)H with C(2)H(2) is studied at 70 K as a proof-of-principle result for both low-temperature rate coefficient measurements and product identification based on the photoionization spectrum of the reaction product versus VUV photon energy. This approach can be used to provide new insights into reaction mechanisms occurring at kinetic rates close to the collision-determined limit.

  20. Airfoil sampling of a pulsed Laval beam with tunable vacuum ultraviolet (VUV) synchrotron ionization quadrupole mass spectrometry: Application to low-temperature kinetics and product detection

    SciTech Connect

    Soorkia, Satchin; Liu, Chen-Lin; Savee, John D; Ferrell, Sarah J; Leone, Stephen R; Wilson, Kevin R

    2011-10-12

    A new pulsed Laval nozzle apparatus with vacuum ultraviolet (VUV) synchrotron photoionization quadrupole mass spectrometry is constructed to study low-temperature radical-neutral chemical reactions of importance for modeling the atmosphere of Titan and the outer planets. A design for the sampling geometry of a pulsed Laval nozzle expansion has been developed that operates successfully for the determination of rate coefficients by time-resolved mass spectrometry. The new concept employs airfoil sampling of the collimated expansion with excellent sampling throughput. Time-resolved profiles of the high Mach number gas flow obtained by photoionization signals show that perturbation of the collimated expansion by the airfoil is negligible. The reaction of C2H with C2H2 is studied at 70 K as a proof-of-principle result for both low-temperature rate coefficient measurements and product identification based on the photoionization spectrum of the reaction product versus VUV photon energy. This approach can be used to provide new insights into reaction mechanisms occurring at kinetic rates close to the collision-determined limit.

  1. Temperature Effect on Micelle Formation: Molecular Thermodynamic Model Revisited.

    PubMed

    Khoshnood, Atefeh; Lukanov, Boris; Firoozabadi, Abbas

    2016-03-01

    Temperature affects the aggregation of macromolecules such as surfactants, polymers, and proteins in aqueous solutions. The effect on the critical micelle concentration (CMC) is often nonmonotonic. In this work, the effect of temperature on the micellization of ionic and nonionic surfactants in aqueous solutions is studied using a molecular thermodynamic model. Previous studies based on this technique have predicted monotonic behavior for ionic surfactants. Our investigation shows that the choice of tail transfer energy to describe the hydrophobic effect between the surfactant tails and the polar solvent molecules plays a key role in the predicted CMC. We modify the tail transfer energy by taking into account the effect of the surfactant head on the neighboring methylene group. The modification improves the description of the CMC and the predicted micellar size for aqueous solutions of sodium n-alkyl sulfate, dodecyl trimethylammonium bromide (DTAB), and n-alkyl polyoxyethylene. The new tail transfer energy describes the nonmonotonic behavior of CMC versus temperature. In the DTAB-water system, we redefine the head size by including the methylene group, next to the nitrogen, in the head. The change in the head size along with our modified tail transfer energy improves the CMC and aggregation size prediction significantly. Tail transfer is a dominant energy contribution in micellar and microemulsion systems. It also promotes the adsorption of surfactants at fluid-fluid interfaces and affects the formation of adsorbed layers at fluid-solid interfaces. Our proposed modifications have direct applications in the thermodynamic modeling of the effect of temperature on molecular aggregation, both in the bulk and at the interfaces.

  2. Sampling through time and phylodynamic inference with coalescent and birth-death models.

    PubMed

    Volz, Erik M; Frost, Simon D W

    2014-12-01

    Many population genetic models have been developed for the purpose of inferring population size and growth rates from random samples of genetic data. We examine two popular approaches to this problem, the coalescent and the birth–death-sampling model (BDM), in the context of estimating population size and birth rates in a population growing exponentially according to the birth–death branching process. For sequences sampled at a single time, we found the coalescent and the BDM gave virtually indistinguishable results in terms of the growth rates and fraction of the population sampled, even when sampling from a small population. For sequences sampled at multiple time points, we find that the birth–death model estimators are subject to large bias if the sampling process is misspecified. Since BDMs incorporate a model of the sampling process, we show how much of the statistical power of BDMs arises from the sequence of sample times and not from the genealogical tree. This motivates the development of a new coalescent estimator, which is augmented with a model of the known sampling process and is potentially more precise than the coalescent that does not use sample time information.

  3. Sampling through time and phylodynamic inference with coalescent and birth-death models.

    PubMed

    Volz, Erik M; Frost, Simon D W

    2014-12-01

    Many population genetic models have been developed for the purpose of inferring population size and growth rates from random samples of genetic data. We examine two popular approaches to this problem, the coalescent and the birth–death-sampling model (BDM), in the context of estimating population size and birth rates in a population growing exponentially according to the birth–death branching process. For sequences sampled at a single time, we found the coalescent and the BDM gave virtually indistinguishable results in terms of the growth rates and fraction of the population sampled, even when sampling from a small population. For sequences sampled at multiple time points, we find that the birth–death model estimators are subject to large bias if the sampling process is misspecified. Since BDMs incorporate a model of the sampling process, we show how much of the statistical power of BDMs arises from the sequence of sample times and not from the genealogical tree. This motivates the development of a new coalescent estimator, which is augmented with a model of the known sampling process and is potentially more precise than the coalescent that does not use sample time information. PMID:25401173

  4. 12 CFR Appendix B to Part 1030 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 8 2012-01-01 2012-01-01 false Model Clauses and Sample Forms B Appendix B to.... 1030, App. B Appendix B to Part 1030—Model Clauses and Sample Forms Table of Contents B-1—Model Clauses for Account Disclosures (Section 1030.4(b)) B-2—Model Clauses for Change in Terms (Section...

  5. 12 CFR Appendix B to Part 1030 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Model Clauses and Sample Forms B Appendix B to.... 1030, App. B Appendix B to Part 1030—Model Clauses and Sample Forms Table of Contents B-1—Model Clauses for Account Disclosures (Section 1030.4(b)) B-2—Model Clauses for Change in Terms (Section...

  6. Utility of Oral Swab Sampling for Ebola Virus Detection in Guinea Pig Model.

    PubMed

    Spengler, Jessica R; Chakrabarti, Ayan K; Coleman-McCray, JoAnn D; Martin, Brock E; Nichol, Stuart T; Spiropoulou, Christina F; Bird, Brian H

    2015-10-01

    To determine the utility of oral swabs for diagnosing infection with Ebola virus, we used a guinea pig model and obtained daily antemortem and postmortem swab samples. According to quantitative reverse transcription PCR analysis, the diagnostic value was poor for antemortem swab samples but excellent for postmortem samples.

  7. Utility of Oral Swab Sampling for Ebola Virus Detection in Guinea Pig Model.

    PubMed

    Spengler, Jessica R; Chakrabarti, Ayan K; Coleman-McCray, JoAnn D; Martin, Brock E; Nichol, Stuart T; Spiropoulou, Christina F; Bird, Brian H

    2015-10-01

    To determine the utility of oral swabs for diagnosing infection with Ebola virus, we used a guinea pig model and obtained daily antemortem and postmortem swab samples. According to quantitative reverse transcription PCR analysis, the diagnostic value was poor for antemortem swab samples but excellent for postmortem samples. PMID:26401603

  8. A simplified physically-based model to calculate surface water temperature of lakes from air temperature in climate change scenarios

    NASA Astrophysics Data System (ADS)

    Piccolroaz, S.; Toffolon, M.

    2012-12-01

Modifications of water temperature are crucial for the ecology of lakes, but long-term analyses are not usually able to provide reliable estimations. This is particularly true for climate change studies based on Global Circulation Models, whose mesh size is normally too coarse to explicitly include even some of the biggest lakes on Earth. On the other hand, modeled predictions of air temperature changes are more reliable, and long-term, high-resolution air temperature observational datasets are more widely available than water temperature measurements. For these reasons, air temperature series are often used to infer the surface temperature of water bodies. A common approach is to use regression models, but these are questionable, especially when current trends must be extrapolated beyond maximum (or minimum) measured temperatures. Moreover, water temperature is influenced by a variety of heat-exchange processes across the lake surface and by the thermal inertia of the water mass, which also causes an annual hysteresis cycle between air and water temperatures that is hard to capture in regressions. In this work we propose a simplified, physically based model for the estimation of the epilimnetic temperature of lakes. Starting from the zero-dimensional heat budget, we derive a simplified first-order differential equation for water temperature, forced primarily by a seasonally varying external term (mainly related to solar radiation) and an exchange term explicitly depending on the difference between air and water temperatures. Assuming annual sinusoidal cycles of the main heat flux components at the atmosphere-lake interface, eight parameters are identified (though some can be disregarded), which can be calibrated if paired time series of air and water temperature are available. We note that such a calibration is supported by the physical interpretation of the parameters, which provide good initial
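The first-order equation described in this abstract lends itself to a compact numerical sketch. Below is a forward-Euler integration of a zero-dimensional heat budget in which water temperature relaxes toward air temperature plus a sinusoidal term standing in for solar forcing; the relaxation time, forcing amplitude, and initial temperature are hypothetical illustrative values, not the paper's calibrated parameters.

```python
import math

def simulate_lake_temperature(t_air, tau=20.0, amp=5.0, t0=10.0):
    """Forward-Euler integration (daily steps) of the simplified budget
        dTw/dt = [(Ta - Tw) + amp * sin(2*pi*t/365)] / tau
    tau (days), amp (deg C), and t0 (deg C) are hypothetical values."""
    tw = t0
    out = []
    for day, ta in enumerate(t_air):
        seasonal = amp * math.sin(2.0 * math.pi * day / 365.0)
        tw += ((ta - tw) + seasonal) / tau   # one daily Euler step
        out.append(tw)
    return out

# Synthetic two-year sinusoidal air-temperature series (deg C)
air = [10.0 + 8.0 * math.sin(2.0 * math.pi * d / 365.0) for d in range(730)]
water = simulate_lake_temperature(air)
```

Because of the relaxation time tau, the simulated water temperature lags and damps the air-temperature cycle, reproducing the annual hysteresis the abstract mentions.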

  9. Model for temperature-dependent magnetization of nanocrystalline materials

    SciTech Connect

    Bian, Q.; Niewczas, M.

    2015-01-07

A magnetization model of nanocrystalline materials incorporating intragrain anisotropies, intergrain interactions, and texture effects has been extended to include thermal fluctuations. The method relies on the stochastic Landau–Lifshitz–Gilbert theory of magnetization dynamics and permits study of the magnetic properties of nanocrystalline materials at arbitrary temperatures below the Curie temperature. The model has been used to determine the intergrain exchange constant and grain boundary anisotropy constant of nanocrystalline Ni at 100 K and 298 K. It is found that thermal fluctuations suppress the strength of the intergrain exchange coupling and also reduce the grain boundary anisotropy. In comparison with its value at 2 K, the interparticle exchange constant decreases by 16% and 42%, and the grain boundary anisotropy constant decreases by 28% and 40%, at 100 K and 298 K, respectively. An application of the model to grain size-dependent magnetization indicates that when the thermal activation energy is comparable to the free energy of the grains, a decrease in grain size leads to a decrease in magnetic permeability and saturation magnetization. The mechanism by which grain size influences the magnetic properties of nc–Ni is discussed.

  10. Model for temperature-dependent magnetization of nanocrystalline materials

    NASA Astrophysics Data System (ADS)

    Bian, Q.; Niewczas, M.

    2015-01-01

A magnetization model of nanocrystalline materials incorporating intragrain anisotropies, intergrain interactions, and texture effects has been extended to include thermal fluctuations. The method relies on the stochastic Landau-Lifshitz-Gilbert theory of magnetization dynamics and permits study of the magnetic properties of nanocrystalline materials at arbitrary temperatures below the Curie temperature. The model has been used to determine the intergrain exchange constant and grain boundary anisotropy constant of nanocrystalline Ni at 100 K and 298 K. It is found that thermal fluctuations suppress the strength of the intergrain exchange coupling and also reduce the grain boundary anisotropy. In comparison with its value at 2 K, the interparticle exchange constant decreases by 16% and 42%, and the grain boundary anisotropy constant decreases by 28% and 40%, at 100 K and 298 K, respectively. An application of the model to grain size-dependent magnetization indicates that when the thermal activation energy is comparable to the free energy of the grains, a decrease in grain size leads to a decrease in magnetic permeability and saturation magnetization. The mechanism by which grain size influences the magnetic properties of nc-Ni is discussed.

  11. Modelling mass balance and temperature sensitivity on Shallap glacier, Peru

    NASA Astrophysics Data System (ADS)

    Gurgiser, W.; Marzeion, B.; Nicholson, L. I.; Ortner, M.; Kaser, G.

    2013-12-01

Because of the pronounced dry seasons in the tropical Andes of Peru, glacier melt water is an important factor in year-round water availability for the local population. Andean glaciers have been shrinking during recent decades, but present-day magnitudes of glacier mass balance and sensitivities to changes in atmospheric drivers are not well known. We have therefore calculated the spatially distributed glacier mass and energy balance of Shallap glacier (4700 m - 5700 m, 9°S), Cordillera Blanca, Peru, at hourly time steps for the period Sept. 2006 to Aug. 2008, with records from an AWS close to the glacier as model input. Evaluation against measured surface height change in the ablation zone indicates that the model results are reasonable and within an acceptable error range. For the mass balance characteristics we found similar vertical gradients and accumulation area ratios but marked differences in specific mass balance from year to year. The differences were mainly caused by large differences in annual ablation in the glacier area below 5000 m. Comparing the meteorological conditions in the two years, we found that in the year with the more negative mass balance, total precipitation was only slightly lower but mean annual temperature was higher, and consequently so were the fraction of liquid precipitation and the snow line altitude. As net shortwave energy turned out to be the key driver of ablation in all seasons, the deviations in snow line altitude and surface albedo explain most of the deviations in available melt energy. Hence, the mass balance of tropical Shallap glacier was sensitive not only to precipitation but also to temperature, which had not previously been expected for glaciers in the Peruvian Andes. We have furthermore investigated the impacts of increasing temperature through its multiple effects on glacier mass and energy balance (fraction of liquid precipitation, longwave incoming radiation, sensible and latent heat flux). 
Presenting these results should allow for better
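The study itself uses a full energy-balance model; as a much simpler illustration of why melt responds to temperature through both positive degree-days and the rain/snow partition, here is a classical degree-day sketch. The degree-day factor and temperature thresholds are hypothetical values, not from the paper.

```python
def degree_day_melt(daily_temp_c, ddf=4.0, threshold=0.0):
    """Classical degree-day melt: mm w.e. = DDF * positive degree-days.
    ddf in mm w.e. per deg C per day (hypothetical value)."""
    return sum(ddf * max(t - threshold, 0.0) for t in daily_temp_c)

def liquid_fraction(temp_c, t_snow=0.0, t_rain=2.0):
    """Fraction of precipitation falling as rain, ramped linearly
    between all-snow and all-rain thresholds (assumed values)."""
    if temp_c <= t_snow:
        return 0.0
    if temp_c >= t_rain:
        return 1.0
    return (temp_c - t_snow) / (t_rain - t_snow)

temps = [-2.0, 0.5, 1.0, 3.0]          # daily means, deg C
melt = degree_day_melt(temps)          # only positive degree-days melt
warmer = degree_day_melt([t + 1.0 for t in temps])  # +1 deg C scenario
```

Even a uniform 1 deg C warming raises both the melt total and the liquid fraction, which is the double sensitivity the abstract describes.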

  12. Multi-water-bag models of ion temperature gradient instability in cylindrical geometry

    SciTech Connect

    Coulette, David; Besse, Nicolas

    2013-05-15

Ion temperature gradient instabilities play a major role in the understanding of anomalous transport in core fusion plasmas. In the considered cylindrical geometry, ion dynamics is described using a drift-kinetic multi-water-bag model for the parallel velocity dependency of the ion distribution function. In the first stage, global linear stability analysis is performed. From the obtained normal modes, parametric dependencies of the main spectral characteristics of the instability are then examined. Comparison of the multi-water-bag results with a reference continuous Maxwellian case allows us to evaluate the effects of discrete parallel velocity sampling induced by the multi-water-bag model. Differences between the global model and local models considered in previous works are discussed. Using results from linear, quasilinear, and nonlinear numerical simulations, an analysis of the first-stage saturation dynamics of the instability is proposed, where the divergence between the three models is examined.

  13. Techniques for Down-Sampling a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

This software allows one to down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. The software tool implementing the two new techniques can be used in all optical model validation processes involving large space optical surfaces.
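The summary does not spell out the down-sampling algorithm; a generic block-averaging sketch (not necessarily the tool's actual method) shows how averaging non-overlapping blocks coarsens a height map while suppressing uncorrelated measurement noise.

```python
def block_downsample(height_map, factor):
    """Down-sample a 2-D surface height map by averaging non-overlapping
    factor x factor blocks. Averaging also attenuates uncorrelated
    per-pixel measurement noise (generic illustration only)."""
    rows, cols = len(height_map), len(height_map[0])
    assert rows % factor == 0 and cols % factor == 0, "map must tile evenly"
    out = []
    for r in range(0, rows, factor):
        out_row = []
        for c in range(0, cols, factor):
            block = [height_map[r + i][c + j]
                     for i in range(factor) for j in range(factor)]
            out_row.append(sum(block) / len(block))
        out.append(out_row)
    return out

# Tiny synthetic surface: height rises along both axes
surface = [[float(r + c) for c in range(4)] for r in range(4)]
coarse = block_downsample(surface, 2)
```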

  14. Application of the Tripartite Model to a Complicated Sample of Residential Youth with Externalizing Problems

    ERIC Educational Resources Information Center

    Chin, Eu Gene; Ebesutani, Chad; Young, John

    2013-01-01

    The tripartite model of anxiety and depression has received strong support among child and adolescent populations. Clinical samples of children and adolescents in these studies, however, have usually been referred for treatment of anxiety and depression. This study investigated the fit of the tripartite model with a complicated sample of…

  15. GY SAMPLING THEORY AND GEOSTATISTICS: ALTERNATE MODELS OF VARIABILITY IN CONTINUOUS MEDIA

    EPA Science Inventory



In the sampling theory developed by Pierre Gy, sample variability is modeled as the sum of a set of seven discrete error components. The variogram used in geostatistics provides an alternate model in which several of Gy's error components are combined in a continuous mode...
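As a sketch of the geostatistical side of this comparison, the empirical semivariogram can be computed directly from its definition, gamma(h) = mean of 0.5*(z_i - z_j)^2 over sample pairs separated by lag h. The 1-D transect data below are synthetic and purely illustrative.

```python
def empirical_semivariogram(samples, lag, tol=0.5):
    """Empirical semivariance at a given lag for (position, value) pairs:
    gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs with |x_i - x_j| ~ h."""
    pts = list(samples)
    halved_sq_diffs = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            (xi, zi), (xj, zj) = pts[i], pts[j]
            if abs(abs(xi - xj) - lag) <= tol:
                halved_sq_diffs.append(0.5 * (zi - zj) ** 2)
    return sum(halved_sq_diffs) / len(halved_sq_diffs) if halved_sq_diffs else None

# Synthetic transect: drift plus a short-range repeating component
data = [(x, 0.1 * x + (x % 2)) for x in range(20)]
gamma1 = empirical_semivariogram(data, lag=1, tol=0.1)
gamma5 = empirical_semivariogram(data, lag=5, tol=0.1)
```

With a spatial drift present, semivariance grows with lag, which is the continuous analogue of Gy's distance-dependent error components.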

  16. Assessment of precipitation and temperature data from CMIP3 global climate models for hydrologic simulation

    NASA Astrophysics Data System (ADS)

    McMahon, T. A.; Peel, M. C.; Karoly, D. J.

    2015-01-01

    The objective of this paper is to identify better performing Coupled Model Intercomparison Project phase 3 (CMIP3) global climate models (GCMs) that reproduce grid-scale climatological statistics of observed precipitation and temperature for input to hydrologic simulation over global land regions. Current assessments are aimed mainly at examining the performance of GCMs from a climatology perspective and not from a hydrology standpoint. The performance of each GCM in reproducing the precipitation and temperature statistics was ranked and better performing GCMs identified for later analyses. Observed global land surface precipitation and temperature data were drawn from the Climatic Research Unit (CRU) 3.10 gridded data set and re-sampled to the resolution of each GCM for comparison. Observed and GCM-based estimates of mean and standard deviation of annual precipitation, mean annual temperature, mean monthly precipitation and temperature and Köppen-Geiger climate type were compared. The main metrics for assessing GCM performance were the Nash-Sutcliffe efficiency (NSE) index and root mean square error (RMSE) between modelled and observed long-term statistics. This information combined with a literature review of the performance of the CMIP3 models identified the following better performing GCMs from a hydrologic perspective: HadCM3 (Hadley Centre for Climate Prediction and Research), MIROCm (Model for Interdisciplinary Research on Climate) (Center for Climate System Research (The University of Tokyo), National Institute for Environmental Studies, and Frontier Research Center for Global Change), MIUB (Meteorological Institute of the University of Bonn, Meteorological Research Institute of KMA, and Model and Data group), MPI (Max Planck Institute for Meteorology) and MRI (Japan Meteorological Research Institute). The future response of these GCMs was found to be representative of the 44 GCM ensemble members which confirms that the selected GCMs are reasonably
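The two ranking metrics named in this abstract are simple to compute from paired long-term statistics. A self-contained sketch with made-up observed/modelled values (NSE = 1 indicates a perfect match; values can be negative for poor models):

```python
def nse(obs, mod):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance about the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - m) ** 2 for o, m in zip(obs, mod))
    svar = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / svar

def rmse(obs, mod):
    """Root-mean-square error between modelled and observed statistics."""
    return (sum((o - m) ** 2 for o, m in zip(obs, mod)) / len(obs)) ** 0.5

# Hypothetical long-term mean annual precipitation (mm) per grid cell
observed = [100.0, 150.0, 200.0, 250.0]
modelled = [110.0, 140.0, 210.0, 240.0]
score = nse(observed, modelled)
err = rmse(observed, modelled)
```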

  17. Real-Time Infrared Overtone Laser Control of Temperature in Picoliter H(2)O Samples: "Nanobathtubs" for Single Molecule Microscopy.

    PubMed

    Holmstrom, Erik D; Nesbitt, David J

    2010-01-01

An approach for high spatiotemporal control of aqueous sample temperatures in confocal microscopy is reported. This technique exploits near-IR diode-laser illumination to locally heat picoliter volumes of water via first-overtone excitation in the OH-stretch manifold. A thin water cell after the objective resonantly removes any residual IR light from the detection system, allowing for continuous observation of single-molecule fluorescence throughout the heating event. This technique is tested quantitatively by reproducing single-molecule RNA folding results obtained from "bulk" stage heating measurements. Calibration of sample temperatures is obtained from time-correlated single-photon counting studies of Rhodamine B fluorescence decay. We obtain an upper limit to the heating response time (τ(heat) < 20 ms) consistent with even faster estimates (τ(heat) ≈ 0.25 ms) based on laser spot size, H(2)O heat capacity, and absorption cross section. This combination of fast, noncontact heating of picoliter volumes provides new opportunities for real-time thermodynamic/kinetic studies at the single-molecule level.

  18. Finite temperature corrections in 2d integrable models

    NASA Astrophysics Data System (ADS)

    Caselle, M.; Hasenbusch, M.

    2002-09-01

    We study the finite size corrections for the magnetization and the internal energy of the 2d Ising model in a magnetic field by using transfer matrix techniques. We compare these corrections with the functional form recently proposed by Delfino and LeClair-Mussardo for the finite temperature behaviour of one-point functions in integrable 2d quantum field theories. We find a perfect agreement between theoretical expectations and numerical results. Assuming the proposed functional form as an input in our analysis we obtain a relevant improvement in the precision of the continuum limit estimates of both quantities.

  19. A Unified Approach to Power Calculation and Sample Size Determination for Random Regression Models

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2007-01-01

    The underlying statistical models for multiple regression analysis are typically attributed to two types of modeling: fixed and random. The procedures for calculating power and sample size under the fixed regression models are well known. However, the literature on random regression models is limited and has been confined to the case of all…
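For the fixed-predictor case that this abstract describes as well known, the power of a single-slope test can be sketched with a normal approximation. This is a simplification of the exact noncentral-distribution calculation, and all numbers below (effect size, error SD, predictor spread) are illustrative assumptions.

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def slope_power(beta, sigma, sd_x, n):
    """Approximate two-sided power (alpha = 0.05) for testing one
    regression slope with fixed predictors, using the normal
    approximation SE(beta_hat) ~= sigma / (sd_x * sqrt(n))."""
    z_crit = 1.959963984540054           # Phi^-1(0.975) for alpha = 0.05
    se = sigma / (sd_x * math.sqrt(n))
    return normal_cdf(abs(beta) / se - z_crit)

def required_n(beta, sigma, sd_x, target=0.80):
    """Smallest n whose approximate power reaches the target."""
    n = 3
    while slope_power(beta, sigma, sd_x, n) < target:
        n += 1
    return n

n80 = required_n(beta=0.5, sigma=1.0, sd_x=1.0)
```

In the random-predictor setting the abstract addresses, sd_x is itself random, which is exactly why the fixed-model formula above is only approximate there.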

  20. Control and diagnosis of temperature, density, and uniformity in x-ray heated iron/magnesium samples for opacity measurements

    SciTech Connect

    Nagayama, T.; Bailey, J. E.; Loisel, G.; Hansen, S. B.; Rochau, G. A.; Mancini, R. C.; MacFarlane, J. J.; Golovkin, I.

    2014-05-15

Experimental tests are in progress to evaluate the accuracy of the modeled iron opacity at solar interior conditions, in particular to better constrain the solar abundance problem [S. Basu and H. M. Antia, Phys. Rep. 457, 217 (2008)]. Here, we describe measurements addressing three of the key requirements for reliable opacity experiments: control of sample conditions, independent sample condition diagnostics, and verification of sample condition uniformity. The opacity samples consist of iron/magnesium layers tamped by plastic. By changing the plastic thicknesses, we have controlled the iron plasma conditions to reach (1) Te = 167 ± 3 eV and ne = (7.1 ± 1.5) × 10^21 cm^-3, (2) Te = 170 ± 2 eV and ne = (2.0 ± 0.2) × 10^22 cm^-3, and (3) Te = 196 ± 6 eV and ne = (3.8 ± 0.8) × 10^22 cm^-3, which were measured by magnesium tracer K-shell spectroscopy. The opacity sample non-uniformity was directly measured by a separate experiment in which Al is mixed into the side of the sample facing the radiation source and Mg into the other side. The iron condition was confirmed to be uniform within the measurement uncertainties by Al and Mg K-shell spectroscopy. The conditions are suitable for testing opacity calculations needed for modeling the solar interior, other stars, and high energy density plasmas.

  1. An intermediate temperature modeling study of the combustion of neopentane

    SciTech Connect

    Curran, H.J.; Pitz, W.J.; Westbrook, C.K.

    1995-10-01

Low temperature hydrocarbon fuel oxidation proceeds via straight and branched chain reactions involving alkyl and alkyl peroxy radicals. These reactions play a critical role in the chemistry leading to knock or autoignition in spark ignition engines. As part of an on-going study of the low temperature oxidation of hydrocarbon fuels, the authors have investigated neopentane oxidation. A detailed chemical kinetic reaction mechanism is used to study the oxidation of neopentane in a closed reactor at 500 Torr pressure and a temperature of 753 K, when small amounts of neopentane are added to slowly reacting mixtures of H2 + O2 + N2. The major primary products formed in the experiments included isobutene, 3,3-dimethyloxetan, acetone, methane, and formaldehyde. The major secondary products were 2,2-dimethyloxiran, propene, isobutyraldehyde, methacrolein, and 2-methylprop-2-en-1-ol. It was found that the current model was able to explain both primary and secondary product formation with a high degree of accuracy. Furthermore, it was found that almost all secondary product formation could be explained through the oxidation of isobutene, a major primary product.

  2. Brillouin scattering study of equation of state of multicomponent liquids: Model oil samples

    SciTech Connect

    Bohidar, H.B.

    1988-10-01

The Brillouin scattering technique has been used to measure the pressure dependence of sound velocity v_s(P) in model oil samples (each containing five hydrocarbon liquids) at room temperature, T = 20 °C. Experimental results are reported for two blends of model oil, each containing a mixture of n-heptane, n-tetradecane, cyclohexane, benzene, and toluene in different volume fractions; the compositions of the two differed in their cyclohexane and toluene contents. The pressure dependence of v_s(P) has been measured up to 815 bars, and the results could be least-squares fitted to v_s(P) = A0 + A1 P + A2 P^2 within the limits of experimental error (±1%). The modified Tait equation and a linear pressure dependence of the bulk moduli have been used, consistent with earlier work, to interpret the parabolic pressure dependence of v_s(P). This yields the values of the Tait parameters (B and C) and hence allows the explicit pressure dependence of the density and compressibility of these multicomponent liquids to be evaluated.
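The quadratic least-squares fit v_s(P) = A0 + A1 P + A2 P^2 reported above can be sketched without numerical libraries by solving the 3x3 normal equations directly. The data below are synthetic points on a known parabola (pressures in kbar to keep the system well conditioned); the coefficient values are illustrative, not the paper's.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit y = A0 + A1*x + A2*x^2 by solving the 3x3
    normal equations with Gaussian elimination (partial pivoting)."""
    s = [sum(x ** k for x in xs) for k in range(5)]          # moment sums
    a = [[s[i + j] for j in range(3)] for i in range(3)]     # normal matrix
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for c in range(col, 3):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                                      # back-substitute
        coef[r] = (b[r] - sum(a[r][c] * coef[c]
                              for c in range(r + 1, 3))) / a[r][r]
    return coef                                              # [A0, A1, A2]

# Synthetic sound-speed data on a known parabola (P in kbar, v in m/s)
pressures = [0.0, 0.2, 0.4, 0.6, 0.815]
speeds = [1400.0 + 100.0 * p + 20.0 * p * p for p in pressures]
a0, a1, a2 = fit_quadratic(pressures, speeds)
```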

  3. Spring Fluids from a Low-temperature Hydrothermal System at Dorado Outcrop: The First Samples of a Massive Global Flux

    NASA Astrophysics Data System (ADS)

    Wheat, C. G.; Fisher, A. T.; McManus, J.; Hulme, S.; Orcutt, B.

    2015-12-01

Hydrothermal circulation through the volcanic ocean crust extracts about one fourth of Earth's lithospheric heat. Most of this advective heat loss occurs through ridge flanks, areas far from the magmatic influence of seafloor spreading, at relatively low temperatures (2-25 degrees Celsius). This process results in a flux of seawater through the oceanic crust that is commensurate with that delivered to the ocean from rivers. Given this large flow, even a modest (1-5 percent) change in concentration during circulation would impact geochemical cycles for many ions. Despite the importance of this process, until recently the fluids that embody it had not been collected or quantified, mainly because no site of focused, low-temperature discharge had been found. In 2013 we used Sentry (an AUV) and Jason II (an ROV) to generate a bathymetric map and locate springs within a geologic context on Dorado Outcrop, a ridge flank hydrothermal system that typifies such hydrothermal processes in the Pacific. Dorado Outcrop is located on 23 M.y. old seafloor of the Cocos Plate, where 70-90 percent of the lithospheric heat is removed. Spring fluids collected in 2013 confirmed small chemical anomalies relative to seawater, requiring new methods to collect, analyze, and interpret samples and data. In 2014 the submersible Alvin utilized these methods to recover the first high-quality spring samples from this system and year-long experiments. These unique data and samples represent the first of their type. For example, the presence of dissolved oxygen is the first evidence of an oxic ridge flank hydrothermal fluid, even though such fluids have been postulated to exist throughout a vast portion of the oceanic crust. Furthermore, chemical data confirm modest anomalies relative to seawater for some elements. Such anomalies, if characteristic throughout the global ocean, impact global geochemical cycles, crustal evolution, and subsurface microbial activity.

  4. Order-parameter-aided temperature-accelerated sampling for the exploration of crystal polymorphism and solid-liquid phase transitions

    SciTech Connect

    Yu, Tang-Qing Vanden-Eijnden, Eric; Chen, Pei-Yang; Chen, Ming; Samanta, Amit; Tuckerman, Mark

    2014-06-07

    The problem of predicting polymorphism in atomic and molecular crystals constitutes a significant challenge both experimentally and theoretically. From the theoretical viewpoint, polymorphism prediction falls into the general class of problems characterized by an underlying rough energy landscape, and consequently, free energy based enhanced sampling approaches can be brought to bear on the problem. In this paper, we build on a scheme previously introduced by two of the authors in which the lengths and angles of the supercell are targeted for enhanced sampling via temperature accelerated adiabatic free energy dynamics [T. Q. Yu and M. E. Tuckerman, Phys. Rev. Lett. 107, 015701 (2011)]. Here, that framework is expanded to include general order parameters that distinguish different crystalline arrangements as target collective variables for enhanced sampling. The resulting free energy surface, being of quite high dimension, is nontrivial to reconstruct, and we discuss one particular strategy for performing the free energy analysis. The method is applied to the study of polymorphism in xenon crystals at high pressure and temperature using the Steinhardt order parameters without and with the supercell included in the set of collective variables. The expected fcc and bcc structures are obtained, and when the supercell parameters are included as collective variables, we also find several new structures, including fcc states with hcp stacking faults. We also apply the new method to the solid-liquid phase transition in copper at 1300 K using the same Steinhardt order parameters. Our method is able to melt and refreeze the system repeatedly, and the free energy profile can be obtained with high efficiency.

  5. Order-parameter-aided temperature-accelerated sampling for the exploration of crystal polymorphism and solid-liquid phase transitions

    NASA Astrophysics Data System (ADS)

    Yu, Tang-Qing; Chen, Pei-Yang; Chen, Ming; Samanta, Amit; Vanden-Eijnden, Eric; Tuckerman, Mark

    2014-06-01

    The problem of predicting polymorphism in atomic and molecular crystals constitutes a significant challenge both experimentally and theoretically. From the theoretical viewpoint, polymorphism prediction falls into the general class of problems characterized by an underlying rough energy landscape, and consequently, free energy based enhanced sampling approaches can be brought to bear on the problem. In this paper, we build on a scheme previously introduced by two of the authors in which the lengths and angles of the supercell are targeted for enhanced sampling via temperature accelerated adiabatic free energy dynamics [T. Q. Yu and M. E. Tuckerman, Phys. Rev. Lett. 107, 015701 (2011)]. Here, that framework is expanded to include general order parameters that distinguish different crystalline arrangements as target collective variables for enhanced sampling. The resulting free energy surface, being of quite high dimension, is nontrivial to reconstruct, and we discuss one particular strategy for performing the free energy analysis. The method is applied to the study of polymorphism in xenon crystals at high pressure and temperature using the Steinhardt order parameters without and with the supercell included in the set of collective variables. The expected fcc and bcc structures are obtained, and when the supercell parameters are included as collective variables, we also find several new structures, including fcc states with hcp stacking faults. We also apply the new method to the solid-liquid phase transition in copper at 1300 K using the same Steinhardt order parameters. Our method is able to melt and refreeze the system repeatedly, and the free energy profile can be obtained with high efficiency.

  6. High Temperature Chemical Kinetic Combustion Modeling of Lightly Methylated Alkanes

    SciTech Connect

    Sarathy, S M; Westbrook, C K; Pitz, W J; Mehl, M

    2011-03-01

Conventional petroleum jet and diesel fuels, as well as alternative Fischer-Tropsch (FT) fuels and hydrotreated renewable jet (HRJ) fuels, contain high molecular weight lightly branched alkanes (i.e., methylalkanes) and straight chain alkanes (n-alkanes). Improving the combustion of these fuels in practical applications requires a fundamental understanding of large hydrocarbon combustion chemistry. This research project presents a detailed high temperature chemical kinetic mechanism for n-octane and three lightly branched octane isomers (i.e., 2-methylheptane, 3-methylheptane, and 2,5-dimethylhexane). The model is validated against experimental data from a variety of fundamental combustion devices. This new model is used to show how the location and number of methyl branches affects fuel reactivity, including laminar flame speed and species formation.

  7. Enthalpy balance methods versus temperature models in ice sheets

    NASA Astrophysics Data System (ADS)

    Calvo, Natividad; Durany, José; Vázquez, Carlos

    2015-05-01

    In this paper we propose and numerically solve an original enthalpy formulation for the problem governing the thermal behaviour of polythermal ice sheets. Although the modelling follows some ideas introduced in Aschwanden and Blatter (2009), nonlinear basal boundary conditions in both cold and temperate regions are also considered, thus including the sliding effects in the frame of a fully coupled shallow ice approximation (SIA) model. One of the main novelties of this work comes from the introduction of the Heaviside multivalued operator to take into account the discontinuity of the thermal diffusion function at the cold-temperate transition surface (CTS) free boundary. Moreover, we propose a duality method for maximal monotone operators to solve simultaneously the nonlinear diffusive term and the free boundary. Some numerical simulation examples with real data from Antarctica are presented and illustrate the small differences between the computed results from the enthalpy formulation here proposed and the alternative formulation in terms of the temperature (Calvo et al., 2001).
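The core of such an enthalpy formulation is the two-branch mapping between specific enthalpy and the pair (temperature, liquid water fraction), with the cold/temperate transition surface (CTS) at the melting-point enthalpy. A minimal sketch of that mapping, using typical textbook constants rather than the paper's values:

```python
def enthalpy_from_state(temp_k, water_frac, t_melt=273.15,
                        c_ice=2009.0, latent=3.34e5, t_ref=223.15):
    """Specific enthalpy (J/kg): cold ice carries sensible heat only;
    temperate ice sits at the melting point plus latent heat of its
    liquid water fraction. Constants are typical values (assumed)."""
    if water_frac > 0.0:                          # temperate ice
        return c_ice * (t_melt - t_ref) + latent * water_frac
    return c_ice * (temp_k - t_ref)               # cold ice

def state_from_enthalpy(h, t_melt=273.15, c_ice=2009.0,
                        latent=3.34e5, t_ref=223.15):
    """Inverse map: below the CTS enthalpy the ice is cold (zero water
    fraction); above it, the ice is temperate at the melting point."""
    h_cts = c_ice * (t_melt - t_ref)              # enthalpy at the CTS
    if h < h_cts:
        return t_ref + h / c_ice, 0.0
    return t_melt, (h - h_cts) / latent

# Round trips through both branches
t_cold, w_cold = state_from_enthalpy(enthalpy_from_state(260.0, 0.0))
t_temp, w_temp = state_from_enthalpy(enthalpy_from_state(273.15, 0.01))
```

The jump in how enthalpy converts back to temperature versus water content at h_cts is precisely the discontinuity that the Heaviside multivalued operator in the paper formalizes.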

  8. A Test of Model Validation from Observed Temperature Trends

    NASA Astrophysics Data System (ADS)

    Singer, S. F.

    2006-12-01

    How much of current warming is due to natural causes and how much is manmade? This requires a comparison of the patterns of observed warming with the best available models that incorporate both anthropogenic (greenhouse gases and aerosols) as well as natural climate forcings (solar and volcanic). Fortunately, we have the just published U.S.-Climate Change Science Program (CCSP) report (www.climatescience.gov/Library/sap/sap1-1/finalreport/default.htm), based on best current information. As seen in Fig. 1.3F of the report, modeled surface temperature trends change little with latitude, except for a stronger warming in the Arctic. The observations, however, show a strong surface warming in the northern hemisphere but not in the southern hemisphere (see Fig. 3.5C and 3.6D). The Antarctic is found to be cooling and Arctic temperatures, while currently rising, were higher in the 1930s than today. Although the Executive Summary of the CCSP report claims "clear evidence" for anthropogenic warming, based on comparing tropospheric and surface temperature trends, the report itself does not confirm this. Greenhouse models indicate that the tropics should provide the most sensitive location for their validation; trends there should increase by 200-300 percent with altitude, peaking at around 10 kilometers. The observations, however, show the opposite: flat or even decreasing tropospheric trend values (see Fig. 3.7 and also Fig. 5.7E). This disparity is demonstrated most strikingly in Fig. 5.4G, which shows the difference between surface and troposphere trends for a collection of models (displayed as a histogram) and for balloon and satellite data. [The disparities are less apparent in the Summary, which displays model results in terms of "range" rather than as histograms.] There may be several possible reasons for the disparity: Instrumental and other effects that exaggerate or otherwise distort observed temperature trends. 
Or, more likely: Shortcomings in models that result

  9. Improving the Performance of Temperature Index Snowmelt Model of SWAT by Using MODIS Land Surface Temperature Data

    PubMed Central

    Yang, Yan; Onishi, Takeo; Hiramatsu, Ken

    2014-01-01

    Simulation results of the widely used temperature index snowmelt model are greatly influenced by input air temperature data. Spatially sparse air temperature data remain the main factor inducing uncertainties and errors in that model, which limits its applications. Thus, to solve this problem, we created new air temperature data using linear regression relationships that can be formulated based on MODIS land surface temperature data. The Soil Water Assessment Tool model, which includes an improved temperature index snowmelt module, was chosen to test the newly created data. By evaluating simulation performance for daily snowmelt in three test basins of the Amur River, performance of the newly created data was assessed. The coefficient of determination (R2) and Nash-Sutcliffe efficiency (NSE) were used for evaluation. The results indicate that MODIS land surface temperature data can be used as a new source for air temperature data creation. This will improve snow simulation using the temperature index model in an area with sparse air temperature observations. PMID:25165746
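The air-temperature creation step rests on an ordinary least-squares relationship between MODIS LST and station air temperature. A self-contained sketch with hypothetical paired values (the real study would fit such relationships per station or region):

```python
def linear_regression(x, y):
    """Ordinary least-squares intercept a and slope b for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    return my - b * mx, b

# Hypothetical paired samples: MODIS LST (x) vs station air temperature (y)
lst = [-10.0, -5.0, 0.0, 5.0, 10.0, 15.0]
air = [-8.5, -4.0, 0.5, 4.5, 9.0, 13.0]
a, b = linear_regression(lst, air)

# The fitted relation then converts gridded LST into a new air-temperature
# series wherever station observations are sparse
estimated_air = [a + b * t for t in lst]
```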

  10. Improving the performance of temperature index snowmelt model of SWAT by using MODIS land surface temperature data.

    PubMed

    Yang, Yan; Onishi, Takeo; Hiramatsu, Ken

    2014-01-01

Simulation results of the widely used temperature index snowmelt model are greatly influenced by input air temperature data. Spatially sparse air temperature data remain the main factor inducing uncertainties and errors in that model, which limits its applications. Thus, to solve this problem, we created new air temperature data using linear regression relationships that can be formulated based on MODIS land surface temperature data. The Soil Water Assessment Tool model, which includes an improved temperature index snowmelt module, was chosen to test the newly created data. By evaluating simulation performance for daily snowmelt in three test basins of the Amur River, performance of the newly created data was assessed. The coefficient of determination (R2) and Nash-Sutcliffe efficiency (NSE) were used for evaluation. The results indicate that MODIS land surface temperature data can be used as a new source for air temperature data creation. This will improve snow simulation using the temperature index model in an area with sparse air temperature observations. PMID:25165746

  11. Low reheating temperatures in monomial and binomial inflationary models

    SciTech Connect

    Rehagen, Thomas; Gelmini, Graciela B.

    2015-06-23

    We investigate the allowed range of reheating temperature values in light of the Planck 2015 results and the recent joint analysis of Cosmic Microwave Background (CMB) data from the BICEP2/Keck Array and Planck experiments, using monomial and binomial inflationary potentials. While the well studied ϕ² inflationary potential is no longer favored by current CMB data, as is the case for ϕᵖ with p > 2, a ϕ¹ potential with canonical reheating (w_re = 0) provides a good fit to the CMB measurements. In this last case, we find that the Planck 2015 68% confidence limit upper bound on the spectral index, n_s, implies an upper bound on the reheating temperature of T_re ≲ 6×10¹⁰ GeV, and excludes instantaneous reheating. The low reheating temperatures allowed by this model open the possibility that dark matter could be produced during the reheating period instead of when the Universe is radiation dominated, which could lead to very different predictions for the relic density and momentum distribution of WIMPs, sterile neutrinos, and axions. We also study binomial inflationary potentials and show the effects of a small departure from a ϕ¹ potential. We find that as a subdominant ϕ² term in the potential increases, instantaneous reheating first becomes allowed, and then the lowest possible reheating temperature of T_re = 4 MeV is excluded by the Planck 2015 68% confidence limit.

  12. Model-based estimation of changes in air temperature seasonality

    NASA Astrophysics Data System (ADS)

    Barbosa, Susana; Trigo, Ricardo

    2010-05-01

    Seasonality is a ubiquitous feature of climate time series. Climate change is expected to involve not only changes in the mean of climate parameters but also changes in the characteristics of the corresponding seasonal cycle. The identification and quantification of changes in seasonality is therefore a highly relevant topic in climate analysis, particularly in a global warming context. However, the analysis of seasonality is far from trivial. A key challenge is discriminating between long-term changes in the mean and long-term changes in the seasonal pattern itself, which requires appropriate statistical approaches to distinguish overall trends in the mean from trends in the seasons. Model-based approaches are particularly suitable for the analysis of seasonality, enabling assessment of uncertainties in the amplitude and phase of seasonal patterns within a well-defined statistical framework. This work addresses changes in the seasonality of air temperature over the 20th century. The analysed data are global air temperature values near the surface (2 m above ground) and in the mid-troposphere (500 hPa geopotential height) from the recently developed 20th Century Reanalysis. This new 3-D reanalysis dataset is available since 1891, considerably extending all other reanalyses currently in use (e.g. NCAR, ECMWF), and was obtained with the Ensemble Filter (Compo et al., 2006) by assimilation of pressure observations into a state-of-the-art atmospheric general circulation model that includes the radiative effects of historical time-varying CO2 concentrations, volcanic aerosol emissions and solar output variations. A modeling approach based on autoregression (Barbosa et al., 2008; Barbosa, 2009) is applied within a Bayesian framework to estimate a time-varying seasonal pattern and quantify changes in the amplitude and phase of air temperature over the 20th century. Barbosa, SM, Silva, ME, Fernandes, MJ

  13. Comparison of eruptive and intrusive samples from Unzen Volcano, Japan: Effects of contrasting pressure-temperature-time paths

    NASA Astrophysics Data System (ADS)

    Almberg, L. D.; Larsen, J. F.; Eichelberger, J. C.; Vogel, T. A.; Patino, L. C.

    2008-07-01

    Core samples from the conduit of Unzen Volcano, obtained only 9 years after cessation of the 1991-1995 eruption, exhibit important differences in physical characteristics and mineralogy, and subtle differences in bulk chemistry from erupted samples. These differences in the conduit samples reflect emplacement under a confining pressure where about half of the original magmatic water was retained in the melt phase, maintenance at hypersolidus temperature for some unknown but significant time span, and subsequent subsolidus hydrothermal alteration. In contrast, magma that extruded as lava underwent decompression to 1 atm with nearly complete loss of magmatic water and cooling at a sufficiently rapid rate to produce glass. The resulting hypabyssal texture of the conduit samples, while clearly distinct from eruptive rocks, is also distinct from plutonic suites. Given the already low temperature of the conduit (less than 200 °C, [Nakada, S., Uto, K., Yoshimoto, M., Eichelberger, J.C., Shimizu, H., 2005. Scientific Results of Conduit Drilling in the Unzen Scientific Drilling Project (USDP), Sci. Drill., 1, 18-22]) when it was sampled by drilling, this texture must have developed within a decade, and perhaps within a much shorter time, after emplacement. The fact that all trace-element concentrations of the conduit and the last-emplaced lava of the spine, 1300 m above it, are identical to within analytical uncertainty provides strong evidence that both were produced during the same eruption sequence. Changes in conduit magma that occurred between emplacement and cooling to the solidus were collapse of vesicles from less than or equal to the equilibrium value of about 50 vol.% to about 0.1 vol.%; continued resorption of quartz and reaction of biotite phenocrysts due to heating of magma prior to ascent by intruding mafic magma; breakdown of hornblende; and micro-crystallization of rhyolitic melt to feldspar and quartz. Subsolidus changes were deposition of calcite and

  14. Estimating species - area relationships by modeling abundance and frequency subject to incomplete sampling.

    PubMed

    Yamaura, Yuichi; Connor, Edward F; Royle, J Andrew; Itoh, Katsuo; Sato, Kiyoshi; Taki, Hisatomo; Mishima, Yoshio

    2016-07-01

    Models and data used to describe species-area relationships confound sampling with ecological process as they fail to acknowledge that estimates of species richness arise due to sampling. This compromises our ability to make ecological inferences from and about species-area relationships. We develop and illustrate hierarchical community models of abundance and frequency to estimate species richness. The models we propose separate sampling from ecological processes by explicitly accounting for the fact that sampled patches are seldom completely covered by sampling plots and that individuals present in the sampling plots are imperfectly detected. We propose a multispecies abundance model in which community assembly is treated as the summation of an ensemble of species-level Poisson processes and estimate patch-level species richness as a derived parameter. We use sampling process models appropriate for specific survey methods. We propose a multispecies frequency model that treats the number of plots in which a species occurs as a binomial process. We illustrate these models using data collected in surveys of early-successional bird species and plants in young forest plantation patches. Results indicate that only mature forest plant species deviated from the constant density hypothesis, but the null model suggested that the deviations were too small to alter the form of species-area relationships. Nevertheless, results from simulations clearly show that the aggregate pattern of individual species density-area relationships and occurrence probability-area relationships can alter the form of species-area relationships. The plant community model estimated that only half of the species present in the regional species pool were encountered during the survey. The modeling framework we propose explicitly accounts for sampling processes so that ecological processes can be examined free of sampling artefacts. Our modeling approach is extensible and could be applied to a
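A minimal simulation of the core idea, assuming a hypothetical species pool, density distribution, and detection probability (none of these numbers come from the paper), shows how patch-level richness arises as a derived parameter of species-level Poisson abundances and how imperfect detection depresses observed richness:

```python
import numpy as np

rng = np.random.default_rng(1)
S, area, p_detect = 50, 2.0, 0.5   # species pool size, patch area (ha), detection prob.

# Species-level densities (individuals per ha); a lognormal ensemble is a
# common convenience choice here, not the paper's fitted distribution.
lam = rng.lognormal(mean=0.0, sigma=1.0, size=S)

# Community assembly as a summation of species-level Poisson processes:
# abundance of each species in the patch ~ Poisson(density * area).
N = rng.poisson(lam * area)

# Derived parameter: true patch-level richness = number of species present.
true_richness = int((N > 0).sum())

# Sampling process: each individual is detected independently with p_detect,
# so detected counts are a binomial thinning of the true abundances.
counts = rng.binomial(N, p_detect)
observed_richness = int((counts > 0).sum())

print(true_richness, observed_richness)
```

The hierarchical models in the paper invert this generative logic, estimating the latent abundances (and hence richness, including never-detected species) from the thinned counts.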

  16. Estimating species – area relationships by modeling abundance and frequency subject to incomplete sampling

    USGS Publications Warehouse

    Yamaura, Yuichi; Connor, Edward F.; Royle, Andy; Itoh, Katsuo; Sato, Kiyoshi; Taki, Hisatomo; Mishima, Yoshio

    2016-01-01

    Models and data used to describe species–area relationships confound sampling with ecological process as they fail to acknowledge that estimates of species richness arise due to sampling. This compromises our ability to make ecological inferences from and about species–area relationships. We develop and illustrate hierarchical community models of abundance and frequency to estimate species richness. The models we propose separate sampling from ecological processes by explicitly accounting for the fact that sampled patches are seldom completely covered by sampling plots and that individuals present in the sampling plots are imperfectly detected. We propose a multispecies abundance model in which community assembly is treated as the summation of an ensemble of species-level Poisson processes and estimate patch-level species richness as a derived parameter. We use sampling process models appropriate for specific survey methods. We propose a multispecies frequency model that treats the number of plots in which a species occurs as a binomial process. We illustrate these models using data collected in surveys of early-successional bird species and plants in young forest plantation patches. Results indicate that only mature forest plant species deviated from the constant density hypothesis, but the null model suggested that the deviations were too small to alter the form of species–area relationships. Nevertheless, results from simulations clearly show that the aggregate pattern of individual species density–area relationships and occurrence probability–area relationships can alter the form of species–area relationships. The plant community model estimated that only half of the species present in the regional species pool were encountered during the survey. The modeling framework we propose explicitly accounts for sampling processes so that ecological processes can be examined free of sampling artefacts. Our modeling approach is extensible and could be applied

  17. Effects of Sample Size on Estimates of Population Growth Rates Calculated with Matrix Models

    PubMed Central

    Fiske, Ian J.; Bruna, Emilio M.; Bolker, Benjamin M.

    2008-01-01

    Background Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (λ) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of λ: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of λ due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of λ. Methodology/Principal Findings Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating λ for increasingly large samples drawn from a total population of 3842 plants. We then compared these estimates of λ with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models for studies of plant demography. Conclusions/Significance We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample size or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities. PMID:18769483
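The λ-from-matrix calculation and the sampling-variance experiment can be sketched as follows; the projection matrix, sample size, and vital rates are illustrative stand-ins, not the study's data:

```python
import numpy as np

def lam(A):
    """Asymptotic growth rate: dominant eigenvalue of a projection matrix."""
    return float(np.max(np.real(np.linalg.eigvals(A))))

# Hypothetical 3-stage projection matrix: fecundities on the top row,
# survival/transition probabilities below (illustrative values only).
F2, F3 = 1.5, 3.0
A = np.array([[0.0, F2, F3],
              [0.5, 0.0, 0.0],
              [0.0, 0.4, 0.8]])

rng = np.random.default_rng(2)
n = 25  # individuals followed per transition; small n inflates sampling variance
lams = []
for _ in range(2000):
    # Re-estimate each survival rate as a binomial proportion from n individuals.
    s1 = rng.binomial(n, 0.5) / n
    s2 = rng.binomial(n, 0.4) / n
    s3 = rng.binomial(n, 0.8) / n
    B = np.array([[0.0, F2, F3],
                  [s1, 0.0, 0.0],
                  [0.0, s2, s3]])
    lams.append(lam(B))

bias = float(np.mean(lams)) - lam(A)   # Jensen-type bias from sampling variance
print(f"lambda = {lam(A):.3f}, bias at n={n}: {bias:+.4f}")
```

Repeating this at several values of n traces out how the bias shrinks as sample size grows, which is the pattern the study quantified.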

  18. Detection of high molecular weight organic tracers in vegetation smoke samples by high-temperature gas chromatography-mass spectrometry

    SciTech Connect

    Elias, V.O.; Simoneit, B.R.T.; Pereira, A.S.; Cardoso, J.N.; Cabral, J.A.

    1999-07-15

    High-temperature high-resolution gas chromatography (HTGC) is an established technique for the separation of complex mixtures of high molecular weight (HMW) compounds that do not elute when analyzed on conventional GC columns. The combination of this technique with mass spectrometry is less common, and its application to aerosols is novel. The HTGC and HTGC-MS analyses of smoke samples, taken by particle filtration from the combustion of different plant species, provided the characterization of various classes of HMW compounds reported here to occur for the first time in emissions from biomass burning. Among these components are a series of wax esters with up to 58 carbons, aliphatic hydrocarbons, triglycerides, long chain methyl ketones, alkanols, and a series of triterpenyl fatty acid esters which have been characterized as novel natural products. Long chain fatty acids with more than 32 carbons are not present in the smoke samples analyzed. The HMW compounds in smoke samples from the burning of plants from Amazonia indicate the input of natural products volatilized directly from the original plants during combustion. However, the major organic compounds extracted from smoke consist of a series of lower molecular weight polar components, which are not natural products but the result of the thermal breakdown of cellulose and lignin. In contrast, the HMW natural products may be suitable tracers for specific sources of vegetation combustion because they are emitted as particles without thermal alteration in the smoke and can thus be related directly to the original plant material.

  19. Effect of UV irradiation, sample thickness and storage temperature on storability, bacterial activity and functional properties of liquid egg.

    PubMed

    Abdanan Mehdizadeh, S; Minaei, S; Karimi Torshizi, M A; Mohajerani, E

    2015-07-01

    The effects of sample thickness, ultraviolet irradiation and storage temperature on bacterial activity, storability and functional properties (foamability and stability) of liquid egg were investigated. Eggs were contaminated with a prepared Salmonella suspension (10⁸/mL). Separated albumen and yolk samples were poured at three thicknesses (1, 2 and 3 mm), irradiated with ultraviolet radiation for 3, 5, 10 or 15 min, and stored at 5, 15, 25 or 37 °C for up to 8 days. Observations indicated that all ultraviolet irradiation times reduced the total count of Salmonella bacteria in egg samples. Although functional properties were improved, protein oxidation in both albumen and yolk increased. After the first 2 days of storage, total counts of Salmonella and protein oxidation of eggs decreased solely in the 5 °C treatment. It is concluded that irradiation treatment can be used to decrease bacterial contamination of liquid egg, albeit not below the safe level for raw consumption. Furthermore, the best irradiation times to improve foamability and stability were 10 and 5 min, respectively.

  20. An improved temperature model of the Antarctic uppermost mantle for the benefit of GIA modelling

    NASA Astrophysics Data System (ADS)

    Stolk, Ward; Kaban, Mikhail; van der Wal, Wouter; Wiens, Doug

    2014-05-01

    Mass changes in Antarctica's ice cap influence the underlying lithosphere and upper mantle. The dynamics of the solid earth are in turn coupled back to the surface and ice dynamics. Furthermore, mass changes due to lithosphere and uppermost mantle dynamics contaminate measurements of ice mass change in Antarctica. Thus an improved understanding of the temperature, composition and rheology of the Antarctic lithosphere is required, not only to improve geodynamic modelling of the Antarctic continent (e.g. glacial isostatic adjustment (GIA) modelling), but also to improve climate monitoring and research. Recent field studies in Antarctica have generated much new data. These data, especially an improved assessment of crustal thickness and seismic tomography of the upper mantle, now allow the construction of an improved regional temperature model of the Antarctic uppermost mantle. Even a small improvement in the temperature model of the uppermost mantle could have a significant effect on GIA modelling in Antarctica. Our regional temperature model is based on a joint analysis of a high-resolution seismic tomography model (Heeszel et al., forthcoming) and a recent global gravity model (Foerste et al., 2011). The model will be further constrained by additional local data where available. Starting from an initial general mantle composition, the temperature and density in the uppermost mantle are modelled, elaborating on the methodology of Goes et al. (2000) and Cammarano et al. (2003). The gravity signal of the constructed model is obtained by forward gravity modelling. This signal is compared with the observed gravity signal, and the differences form the basis for the compositional model in the next iteration. The first preliminary results of this study, presented here, focus on the cratonic areas of East Antarctica, for which the modelling converges after a few iterations. Cammarano, F. and Goes, S. and Vacher, P. and Giardini, D. (2003) Inferring upper-mantle temperatures from

  1. On surface temperature, greenhouse gases, and aerosols: models and observations

    SciTech Connect

    Mitchell, J.F.B.; Davis, R.A.; Ingram, W.J.; Senior, C.A.

    1995-10-01

    The effect of changes in atmospheric carbon dioxide concentrations and sulphate aerosols on near-surface temperature is investigated using a version of the Hadley Centre atmospheric model coupled to a mixed layer ocean. The scattering of sunlight by sulphate aerosols is represented by appropriately enhancing the surface albedo. On doubling atmospheric carbon dioxide concentrations, the global mean temperature increases by 5.2 K. An integration with a 39% increase in CO₂, giving the estimated change in radiative heating due to increases in greenhouse gases since 1900, produced an equilibrium warming of 2.3 K, which, even allowing for oceanic inertia, is significantly higher than the observed warming over the same period. Furthermore, the simulation suggests a substantial warming everywhere, whereas the observations indicate isolated regions of cooling, including parts of the northern midlatitude continents. The addition of an estimate of the effect of scattering by current industrial aerosols (uncertain by a factor of at least 3) leads to improved agreement with the observed pattern of changes over the northern continents and reduces the global mean warming by about 30%. Doubling the aerosol forcing produces patterns that are still compatible with the observations, but further increases lead to unrealistically extensive cooling in the midlatitudes. The diurnal range of surface temperature decreases over most of the northern extratropics on increasing CO₂, in agreement with recent observations. The addition of the current industrial aerosol had little detectable effect on the diurnal range in the model because the direct effect of reduced solar heating at the surface is approximately balanced by the indirect effects of cooling. Thus, the ratio of the reduction in diurnal range to the mean warming is increased, in closer agreement with observations. Results from further sensitivity experiments with larger increases in aerosol and CO₂ are presented.

  2. Continuous Measurements of Electrical Conductivity and Viscosity of Lherzolite Analogue Samples during Slow Increases and Decreases in Temperature: Melting and Pre-melting Effects

    NASA Astrophysics Data System (ADS)

    Sueyoshi, K.; Hiraga, T.

    2014-12-01

    Transport properties of the mantle (e.g. electrical conductivity, viscosity, seismic attenuation) are considered to change dramatically during ascent of the mantle, especially around the mantle solidus. To understand the mechanism of such changes, we measured the electrical conductivity and viscosity of lherzolite analogues during slow increases and decreases in temperature, reproducing mantle conditions across the solidus. Two types of samples, one of forsterite plus 20% diopside and the other of 50% forsterite, 40% enstatite and 10% diopside with 0.5% added spinel, were synthesized from Mg(OH)2, SiO2, CaCO3 and MgAl2O4 (spinel) powders with particle sizes of <50 nm. The samples were expected to differ in how partial melting initiates and in melt fraction during the temperature change. We continuously measured the electrical conductivity of these samples during gradual temperature changes crossing the sample solidus (~1380 °C and ~1230 °C for the forsterite + diopside and spinel-added samples, respectively). Sample viscosity was also measured under constant loads of 0.5-50 MPa. Well below the sample solidus (by >150 °C), the electrical conductivity and viscosity exhibited linear Arrhenius behavior, indicating that a single mechanism controls each transport property within the experimental temperature range. This linear relationship, especially in the electrical conductivity, was no longer observed at higher temperatures, where the conductivity increased exponentially as the temperature approached the sample solidus. No such dramatic change with temperature was detected for the sample viscosity. Above the sample solidus, the electrical conductivity increased monotonically with increasing melt fraction.
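The linear sub-solidus Arrhenius behavior corresponds to σ = σ₀ exp(−Ea/RT); a small sketch (with assumed, illustrative values of Ea and σ₀, not measurements from this study) shows how the activation energy is recovered from the slope of ln σ versus 1/T:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Assumed illustrative parameters for sigma = sigma0 * exp(-Ea / (R*T));
# these are not values measured in the study.
Ea_true, sigma0 = 1.5e5, 1.0e4                # J/mol, S/m
T = np.linspace(900.0, 1200.0, 20) + 273.15   # sub-solidus temperatures, K
sigma = sigma0 * np.exp(-Ea_true / (R * T))

# In Arrhenius coordinates ln(sigma) vs 1/T the data fall on a straight line
# with slope -Ea/R, so a linear fit recovers the activation energy.
slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
Ea_fit = -slope * R
print(f"recovered Ea = {Ea_fit / 1e3:.1f} kJ/mol")
```

Departure of measured points above this fitted line at high temperature is exactly the pre-melting signature the abstract describes.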

  3. Network Model-Assisted Inference from Respondent-Driven Sampling Data

    PubMed Central

    Gile, Krista J.; Handcock, Mark S.

    2015-01-01

    Respondent-Driven Sampling is a widely used method for sampling hard-to-reach human populations by link-tracing over their social networks. Inference from such data requires specialized techniques because the sampling process is both partially beyond the control of the researcher and partially implicitly defined. Therefore, it is not generally possible to directly compute the sampling weights for traditional design-based inference, and likelihood inference requires modeling the complex sampling process. As an alternative, we introduce a model-assisted approach, resulting in a design-based estimator leveraging a working network model. We derive a new class of estimators for population means and a corresponding bootstrap standard error estimator. We demonstrate improved performance compared to existing estimators, including adjustment for an initial convenience sample. We also apply the method and an extension to the estimation of HIV prevalence in a high-risk population. PMID:26640328

  4. Modification of an RBF ANN-Based Temperature Compensation Model of Interferometric Fiber Optical Gyroscopes

    PubMed Central

    Cheng, Jianhua; Qi, Bing; Chen, Daidai; Landry, René Jr.

    2015-01-01

    This paper presents a modification of Radial Basis Function Artificial Neural Network (RBF ANN)-based temperature compensation models for Interferometric Fiber Optical Gyroscopes (IFOGs). Based on the mathematical expression of the IFOG output, three temperature-relevant terms are extracted: (1) the temperature of the fiber loops; (2) the temperature variation of the fiber loops; (3) the temperature product term of the fiber loops. An input-modified RBF ANN-based temperature compensation scheme is then established, in which the temperature-relevant terms are used to train the RBF ANN. Experimental temperature tests are conducted and sufficient data are collected and post-processed to form the novel RBF ANN. Finally, we apply the modified RBF ANN-based temperature compensation model to two IFOGs. The experimental results show that the proposed temperature compensation model can efficiently reduce the influence of environment temperature on the output of the IFOG, and exhibits better temperature compensation performance than the conventional scheme without the proposed improvements. PMID:25985163
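As a rough sketch of RBF-based temperature compensation in general (not the paper's specific input-modified scheme): fit a fixed-center Gaussian RBF layer to a bias-versus-temperature curve by least squares, then subtract the fitted drift. The drift model and every parameter below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic IFOG bias drift versus fiber-loop temperature; the quadratic-plus-
# sinusoid shape and all coefficients are invented for illustration.
T = np.linspace(-20.0, 60.0, 80)
bias = 0.0002 * (T - 20.0) ** 2 + 0.05 * np.sin(T / 8.0) + rng.normal(0.0, 0.005, 80)

# Gaussian RBF layer with fixed centers; the linear output weights are found
# by least squares, a common shortcut for training an RBF network.
centers = np.linspace(-20.0, 60.0, 12)
width = 10.0
Phi = np.exp(-((T[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))
w, *_ = np.linalg.lstsq(Phi, bias, rcond=None)

compensated = bias - Phi @ w   # residual drift after temperature compensation
print(f"drift std before: {np.std(bias):.4f}, after: {np.std(compensated):.4f}")
```

The paper's modification amounts to enriching the network inputs (temperature, its rate of change, and their product) rather than using temperature alone as here.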

  5. Development and application of a thermophysical property model for cane fiberboard subjected to high temperatures

    SciTech Connect

    Hensel, S.J.; Gromada, R.J.

    1994-06-01

    A thermophysical property model has been developed to analytically determine the thermal response of cane fiberboard when exposed to the temperatures and heat fluxes associated with the 10 CFR 71 hypothetical accident condition (HAC) and the associated post-fire cooling. The complete model was developed from high-temperature cane fiberboard 1-D test results and consists of heating and cooling sub-models. The heating property model accounts for the enhanced heat transfer of the hot gases in the fiberboard, the loss of energy via venting, and the loss of mass from venting during the heating portion of the test. The cooling property model accounts for the degraded material effects and the continued heat transfer associated with the hot gases after removal of the external heating source. Agreement between test results for a four-inch-thick fiberboard sample and the analytical application of the complete property model is quite good and will be presented. A comparison of analysis results and furnace test data for the 9966 package suggests that the property model sufficiently accounts for the heat transfer in an actual package.

  6. Comparison of Three Models for Snow Microwave Brightness Temperature Simulation

    NASA Astrophysics Data System (ADS)

    Royer, A.; Roy, A.; Montpetit, B.; Picard, G.; Brucker, L.; Langlois, A.

    2015-12-01

    This presentation compares three microwave radiative transfer models commonly used for snow brightness temperature (TB) simulations: the Dense Media Radiative Transfer - Multi Layers (DMRT-ML) model, the Microwave Emission Model of Layered Snowpacks (MEMLS), and the Helsinki University of Technology n-layers (HUT n-layers) model. Using the same new comprehensive sets of measured detailed snowpack physical properties (input data), we compared simulated TBs at 11, 19 and 37 GHz from these three models, which are based on different electromagnetic approaches and use three different snow grain metrics: measured specific surface area (SSA), correlation length calculated using the Debye relationship, and measured maximum extent, respectively. Comparison with surface-based radiometric measurements for different types of snow (in southern Québec, and in subarctic and arctic areas) shows similar averaged root mean square errors, in the range of 10 K or less, between measured and simulated TBs when simulations are optimized using scaling factors applied to these metrics. This means that, in practice, the different approaches of these models (physical to empirical) converge to similar results when driven with appropriately scaled in-situ measurements. We discuss the results relative to the uncertainties in snow microstructure measurements. In particular, we show that the scaling factor applied to the SSA measurements to minimize the difference between DMRT-ML simulated and measured TBs is not due to uncertainty in the SSA measurements.
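The scaling-factor optimization mentioned above can be illustrated with a toy stand-in for a snow emission model; the functional form and all numbers below are illustrative assumptions, not any of the three real models:

```python
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

# Toy stand-in for a snow emission model: TB decreases with the scaled grain
# metric. The functional form and constants are illustrative assumptions.
def simulate_tb(metric, scale):
    return 260.0 - 30.0 * np.sqrt(scale * metric)

rng = np.random.default_rng(4)
metric = rng.uniform(0.2, 1.0, 30)                             # e.g. an SSA-derived metric
tb_meas = simulate_tb(metric, 1.3) + rng.normal(0.0, 2.0, 30)  # "truth" uses scale 1.3

# Grid search for the scaling factor minimizing RMSE against the measurements.
scales = np.linspace(0.5, 2.0, 151)
errors = [rmse(simulate_tb(metric, s), tb_meas) for s in scales]
best = float(scales[int(np.argmin(errors))])
print(f"best scale = {best:.2f}, RMSE = {min(errors):.2f} K")
```

The residual RMSE at the optimum is bounded below by the measurement noise, which is why the three real models all bottom out near 10 K once their metrics are scaled.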

  7. [Outlier sample discriminating methods for building calibration model in melons quality detecting using NIR spectra].

    PubMed

    Tian, Hai-Qing; Wang, Chun-Guang; Zhang, Hai-Jun; Yu, Zhi-Hong; Li, Jian-Kang

    2012-11-01

    Outlier samples strongly influence the precision of the calibration model in soluble solids content measurement of melons using NIR spectra. According to the possible sources of outlier samples, three methods (predicted concentration residual test; Chauvenet test; leverage and studentized residual test) were used to discriminate these outliers. Nine suspicious outliers were detected in the calibration set, which included 85 fruit samples. Because the 9 suspicious outliers might contain some non-outlier samples, they were returned to the model one by one to see whether they influenced the model and its prediction precision. In this way, 5 samples that were helpful to the model rejoined the calibration set, and a new model was developed with a correlation coefficient (r) of 0.889 and a root mean square error of calibration (RMSEC) of 0.601 °Brix. For 35 unknown samples, the root mean square error of prediction (RMSEP) was 0.854 °Brix. This model performed better than the one developed with no outliers eliminated from the calibration set (r = 0.797, RMSEC = 0.849 °Brix, RMSEP = 1.19 °Brix), and was more representative and stable than the one with all 9 suspicious samples eliminated (r = 0.892, RMSEC = 0.605 °Brix, RMSEP = 0.862 °Brix).
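The leverage and studentized residual test mentioned above can be sketched for a simple linear calibration; the data, the planted outlier, and the 2.5 cutoff are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic calibration set: one predictor (e.g. a spectral score) versus
# soluble solids content, with one planted outlier in the reference values.
x = rng.uniform(8.0, 14.0, 40)
y = 0.9 * x + 1.0 + rng.normal(0.0, 0.3, 40)
y[5] += 3.0                                 # the planted outlier

X = np.column_stack([np.ones_like(x), x])   # design matrix with intercept
H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix; its diagonal = leverages
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

n, p = X.shape
s2 = float(resid @ resid) / (n - p)         # residual variance estimate
h = np.diag(H)                              # leverage of each sample
t = resid / np.sqrt(s2 * (1.0 - h))         # internally studentized residuals

flagged = np.where(np.abs(t) > 2.5)[0]      # common rule-of-thumb cutoff
print("flagged sample indices:", flagged)
```

In a full NIR calibration the predictor would be multivariate (e.g. PLS scores), but the hat-matrix and studentized-residual machinery is identical.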

  8. Stability of spironolactone in rat plasma: strict temperature control of blood and plasma samples is required in rat pharmacokinetic studies.

    PubMed

    Tokumura, Tadakazu; Muraoka, Atsushi; Masutomi, Takashi; Machida, Yoshiharu

    2005-06-01

    The stability of spironolactone (SPN) in rat plasma was studied, and its degradation was found to follow an apparent first-order reaction. The apparent first-order rate constants (k(obs)) at 37, 23.5 and 0 degrees C were 3.543+/-0.261 h(-1), (6.278+/-0.045) x 10(-1) h(-1), and (7.336+/-0.843) x 10(-2) h(-1) (mean+/-S.D., n=3), respectively; the corresponding half-lives were 0.20 h, 1.10 h, and 9.53 h. The degradation rate of SPN in rat plasma was markedly decreased when NaF, an esterase inhibitor, was added to the plasma, indicating that the degradation is catalyzed by esterase in the plasma. These results indicate that not only plasma but also blood and serum samples in rat pharmacokinetic studies should be cooled to and kept at 0 degrees C and processed as soon as possible. In previously reported pharmacokinetic studies, the temperature control of plasma, blood, and serum samples was not described. A pharmacokinetic study in rats after intravenous administration of SPN at 20 mg/kg was therefore performed with strict temperature control of plasma and blood samples. The AUC, MRT, CL and Vd(ss) values (mean+/-S.E. of 4 rats) for SPN were 4100.8+/-212.9 ng h/ml, 0.29+/-0.01 h, 4915.7+/-248.0 ml/h/kg, and 1435.4+/-48.4 ml/kg, respectively. The AUC value was much larger than that previously reported. The AUC, MRT, Cmax and Tmax values (mean+/-S.E. of 4 rats) of canrenone, an active metabolite of SPN, after administration of SPN were 4196.1+/-787.5 ng h/ml, 1.99+/-0.13 h, 1546.3+/-436.4 ng/ml and 1.0+/-0.0 h, respectively. This AUC value was almost identical to the previously reported value. PMID:15930762
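
    For a first-order degradation the half-life follows directly from the rate constant as t1/2 = ln(2) / k(obs). Applied to the rate constants above, this closely reproduces the reported half-lives (small differences at 0 degrees C reflect rounding of the reported constants):

```python
import math

def half_life(k_per_hour):
    """Half-life (h) of a first-order process with rate constant k (h^-1)."""
    return math.log(2) / k_per_hour

# Rate constants reported above for spironolactone in rat plasma
k_37c = 3.543        # h^-1 at 37 degrees C
k_23c = 6.278e-1     # h^-1 at 23.5 degrees C
k_0c  = 7.336e-2     # h^-1 at 0 degrees C

for label, k in [("37 C", k_37c), ("23.5 C", k_23c), ("0 C", k_0c)]:
    print(f"{label}: t1/2 = {half_life(k):.2f} h")
```

    The roughly 50-fold increase in half-life from 37 to 0 degrees C is the quantitative basis for the cooling recommendation.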

  9. Relationship between fire temperature and changes in chemical soil properties: a conceptual model of nutrient release

    NASA Astrophysics Data System (ADS)

    Thomaz, Edivaldo L.; Doerr, Stefan H.

    2014-05-01

    The purpose of this study was to evaluate the effects of fire temperature (i.e., soil heating) on nutrient release and physical changes in soil aggregates. A preliminary conceptual model of nutrient release was established based on results from a controlled burn in a slash-and-burn agricultural system in Brazil. The study was carried out on a clayey subtropical soil (humic Cambisol) in a plot that had been fallow for 8 years. A set of three thermocouples was placed in each of four trenches at the following depths: 0 cm (top of the mineral horizon), and 1.0 cm and 2.0 cm within the mineral horizon. Three soil samples (truly independent samples) were collected approximately 12 hours post-fire at depths of 0-2.5 cm. Soil chemical properties were more sensitive to fire temperature than the physical characteristics of soil aggregates. Most nutrient responses to soil heating were nonlinear. The results demonstrate that moderate temperatures (< 400°C) had the greatest effect on nutrient release (i.e., the optimum effect), whereas high temperatures (> 500°C) decreased soil fertility.

  10. Modeling Tree Shade Effect on Urban Ground Surface Temperature.

    PubMed

    Napoli, Marco; Massetti, Luciano; Brandani, Giada; Petralli, Martina; Orlandini, Simone

    2016-01-01

    There is growing interest in the role that urban forests can play as urban microclimate modifiers. Tree shade and evapotranspiration affect energy fluxes and mitigate microclimate conditions, with beneficial effects on human health and outdoor comfort. The aim of this study was to investigate the variability of surface temperature under the shade of different tree species and to test the ability of a proposed heat transfer model to predict it. Surface temperature data on asphalt and grass under different shading conditions were collected in the Cascine park, Florence, Italy, and were used to test the performance of a one-dimensional heat transfer model integrated with a routine for estimating the effect of plant canopies on surface heat transfer. The shading effects of 10 tree species commonly used in Italian urban settings were determined by considering the infrared radiation and the leaf area index (LAI) of the tree canopy. The results indicate that, on asphalt, surface temperature was negatively related to the LAI of the trees (temperature reductions ranging from 13.8 to 22.8°C). On grass, this relationship was weaker, probably because of the combined effect of shade and grass evapotranspiration on surface temperature (reductions ranged from 6.9 to 9.4°C). A sensitivity analysis confirmed that other factors linked to soil water content play an important role in the surface temperature reduction of grassed areas. Our findings suggest that the energy balance model can be effectively used to estimate the surface temperature of urban pavement under different shading conditions and can be applied to the analysis of microclimate conditions in urban green spaces. PMID:26828170
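
    A common way to represent the LAI effect in such energy-balance models is Beer-Lambert attenuation of shortwave radiation through the canopy. The sketch below uses that standard form with an assumed extinction coefficient and irradiance; it is not the paper's exact canopy routine.

```python
import math

def transmitted_shortwave(sw_in, lai, k_ext=0.5):
    """Shortwave radiation (W/m^2) reaching the ground under a canopy,
    using Beer-Lambert attenuation: SW_in * exp(-k_ext * LAI)."""
    return sw_in * math.exp(-k_ext * lai)

sw_in = 900.0  # clear-sky midday shortwave, W/m^2 (illustrative)
for lai in (0.0, 1.0, 3.0, 5.0):
    print(f"LAI {lai}: {transmitted_shortwave(sw_in, lai):.0f} W/m^2 at ground")
```

    Denser canopies transmit exponentially less radiation to the surface, consistent with the larger surface-temperature reductions observed under high-LAI species.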

  11. Modeling temperature and stress in rocks exposed to the sun

    NASA Astrophysics Data System (ADS)

    Hallet, B.; Mackenzie, P.; Shi, J.; Eppes, M. C.

    2012-12-01

    The potential contribution of solar-driven thermal cycling to the progressive breakdown of surface rocks on the Earth and other planets is recognized but understudied. To shed light on this contribution we have launched a collaborative study integrating modern instrumental and numerical approaches to define surface temperatures, stresses, strains, and microfracture activity in exposed boulders, and to characterize the thermo-mechanical response of boulders to diurnal solar exposure. The instrumental portion of our study is conducted by M. Eppes and coworkers, who have monitored the surface and environmental conditions of two ~30 cm diameter granite boulders (one in North Carolina, one in New Mexico) in the field for one and two years, respectively. Each boulder is instrumented with 8 thermocouples, 8 strain gauges, a surface moisture sensor and 6 acoustic emission (AE) sensors to monitor microfracture activity continuously and to locate it to within 2.5 cm. Here, we focus on the numerical modeling. Using a commercially available finite element program, MSC.Marc®2008r1, we have developed an adaptable, realistic thermo-mechanical model to investigate quantitatively the temporal and spatial distributions of both temperature and stress throughout a boulder. The model accounts for the effects of latitude and season (length of day and the sun's path relative to the object), atmospheric damping (reduction of solar radiation as it travels through the Earth's atmosphere), radiative interaction between the boulder and the surrounding soil, secondary heat exchange between the rock and the air, and transient heat conduction in both rock and soil. Using representative thermal and elastic rock properties, as well as realistic representations of the size, shape and orientation of a boulder instrumented in the field in North Carolina, the model is validated by comparison with direct measurements of temperature and strain on the surface of one boulder exposed to the sun.
Using the validated
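
    A drastically simplified one-dimensional analogue of such a model is an explicit finite-difference solution of transient heat conduction driven by a sinusoidal diurnal surface temperature. The study itself used a 3-D finite element model (MSC.Marc) with full radiative boundary conditions, so everything below (geometry, forcing, diffusivity) is an illustrative reduction.

```python
import math

# Thermal diffusivity typical of granite (m^2/s); illustrative value.
ALPHA = 1.1e-6

def diurnal_conduction(depth_m=0.3, nodes=31, days=3.0):
    """Explicit FTCS solution of dT/dt = alpha * d2T/dz2 in a rock slab,
    driven by a sinusoidal diurnal surface temperature; insulated base."""
    dz = depth_m / (nodes - 1)
    dt = 0.4 * dz * dz / ALPHA            # satisfies stability (r <= 0.5)
    r = ALPHA * dt / (dz * dz)
    T = [15.0] * nodes                    # initial temperature, deg C
    steps = int(days * 86400 / dt)
    for n in range(steps):
        t = n * dt
        # Surface node follows the diurnal forcing: 15 +/- 20 deg C.
        T[0] = 15.0 + 20.0 * math.sin(2 * math.pi * t / 86400)
        new = T[:]
        for i in range(1, nodes - 1):
            new[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
        new[-1] = new[-2]                 # zero-flux (insulated) base
        T = new
    return T

T = diurnal_conduction()
# The diurnal amplitude decays with depth: the base of a ~30 cm block
# stays much closer to the 15 C mean than the surface does.
print(f"surface {T[0]:.1f} C, base {T[-1]:.1f} C")
```

    Even this sketch reproduces the key feature exploited in the study: strong temperature gradients near the surface, which are what generate the thermal stresses recorded by the strain gauges and AE sensors.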

  12. [Influence of sample surface roughness on mathematical model of NIR quantitative analysis of wood density].

    PubMed

    Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun

    2007-09-01

    Near-infrared spectroscopy is widely used as a quantitative method, with multivariate regression techniques used to build prediction models; however, the accuracy of the results is affected by many factors. In this paper, the influence of sample surface roughness on the mathematical model for NIR quantitative analysis of wood density was studied. The experiments showed that when the roughness of the predicted samples was consistent with that of the calibration samples, the results were good; otherwise the error was much higher. The roughness-mixed model was more flexible and adaptable to different sample roughnesses, and its prediction ability was much better than that of the single-roughness model.
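
    The benefit of a roughness-mixed calibration set can be illustrated with a toy linear model in which surface roughness shifts the spectral response by a constant offset; the data, offsets, and single-feature model are invented for illustration and stand in for the paper's full-spectrum calibration.

```python
import numpy as np

# Toy "spectral feature" x vs density y, with roughness adding an offset.
rough_offset = {"smooth": 0.0, "rough": 0.15}

def make_set(roughness, n=20, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n)
    y = 0.6 * x + 0.3 + rough_offset[roughness]
    return x, y

def fit(x, y):
    """Ordinary least squares line y = a*x + b."""
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def rmsep(coef, x, y):
    pred = coef[0] * x + coef[1]
    return float(np.sqrt(np.mean((pred - y) ** 2)))

xs, ys = make_set("smooth", seed=1)
xr, yr = make_set("rough", seed=2)

single = fit(xs, ys)                              # smooth-only calibration
mixed = fit(np.concatenate([xs, xr]), np.concatenate([ys, yr]))

# Predicting a rough-surface test set: the mixed calibration halves the
# systematic offset error that the single-roughness model carries.
xt, yt = make_set("rough", seed=3)
print(rmsep(single, xt, yt), rmsep(mixed, xt, yt))
```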

  13. New high temperature plasmas and sample introduction systems for analytical atomic emission and mass spectrometry. Progress report: January 1, 1993--December 31, 1993

    SciTech Connect

    Montaser, A.

    1993-12-31

    In this research, new high-temperature plasmas and new sample introduction systems are explored for rapid elemental and isotopic analysis of gases, solutions, and solids using mass spec