Science.gov

Sample records for sample temperature modeling

  1. Modeling the temperature-dependent peptide vibrational spectra based on an implicit-solvent model and an enhanced sampling technique

    NASA Astrophysics Data System (ADS)

    Tianmin, Wu; Tianjun, Wang; Xian, Chen; Bin, Fang; Ruiting, Zhang; Wei, Zhuang

    2016-01-01

    We herein review our studies on simulating the thermal unfolding Fourier transform infrared and two-dimensional infrared spectra of peptides. The peptide-water configuration ensembles, required for spectrum modeling, are generated at a series of temperatures using the GBOBC implicit solvent model and the integrated tempering sampling technique. The fluctuating vibrational Hamiltonians of the amide I vibrational band are constructed using the Frenkel exciton model. The signals are calculated using nonlinear exciton propagation. The simulated spectral features, such as the intensity and ellipticity, are consistent with the experimental observations. Comparing the signals for two beta-hairpin polypeptides with similar structures suggests that this technique is sensitive to peptide folding landscapes. Project supported by the National Natural Science Foundation of China (Grant Nos. 21203178, 21373201, and 21433014), the Science and Technological Ministry of China (Grant No. 2011YQ09000505), and the “Strategic Priority Research Program” of the Chinese Academy of Sciences (Grant Nos. XDB10040304 and XDB100202002).
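
    As a rough illustration of the spectral-modeling step described above, the sketch below builds a single static Frenkel exciton Hamiltonian for the amide I band, diagonalizes it, and broadens the resulting stick spectrum. The site energies, coupling, and linewidth are hypothetical placeholders, and the record's actual workflow uses fluctuating Hamiltonians from sampled configurations and nonlinear exciton propagation rather than this static snapshot.

```python
import numpy as np

# Hypothetical parameters (cm^-1); the real model derives these from
# peptide-water configurations generated by enhanced sampling.
n_sites = 10          # number of amide I local modes
e_site = 1650.0       # mean local-mode frequency
sigma_site = 10.0     # static disorder of site energies
j_nn = -8.0           # nearest-neighbour coupling
gamma = 12.0          # Lorentzian half-width for broadening

rng = np.random.default_rng(0)

# One-exciton Frenkel Hamiltonian for a single "snapshot".
H = np.diag(e_site + sigma_site * rng.standard_normal(n_sites))
for i in range(n_sites - 1):
    H[i, i + 1] = H[i + 1, i] = j_nn

# Diagonalize; transition dipoles are taken as identical unit vectors,
# so the intensity of eigenstate k is |sum_i c_ik|^2.
evals, evecs = np.linalg.eigh(H)
intensity = np.abs(evecs.sum(axis=0)) ** 2

# Broaden the stick spectrum into a linear absorption (FTIR-like) profile.
freq = np.linspace(1550, 1750, 2000)
spectrum = sum(I * gamma / ((freq - w) ** 2 + gamma ** 2)
               for w, I in zip(evals, intensity))

print("peak position (cm^-1):", freq[np.argmax(spectrum)])
```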

  2. Temperature Control Diagnostics for Sample Environments

    SciTech Connect

    Santodonato, Louis J; Walker, Lakeisha MH; Church, Andrew J; Redmon, Christopher Mckenzie

    2010-01-01

    In a scientific laboratory setting, standard equipment such as cryocoolers is often used as part of a custom sample environment system designed to regulate temperature over a wide range. The end user may be more concerned with precise sample temperature control than with base temperature, but cryogenic systems tend to be specified mainly in terms of cooling capacity and base temperature. Technical staff at scientific user facilities (and perhaps elsewhere) often wonder how best to specify and evaluate temperature control capabilities. Here we describe test methods and give results obtained at a user facility that operates a large sample environment inventory. Although this inventory includes a wide variety of temperature, pressure, and magnetic field devices, the present work focuses on cryocooler-based systems.

  3. Recommended Maximum Temperature For Mars Returned Samples

    NASA Technical Reports Server (NTRS)

    Beaty, D. W.; McSween, H. Y.; Czaja, A. D.; Goreva, Y. S.; Hausrath, E.; Herd, C. D. K.; Humayun, M.; McCubbin, F. M.; McLennan, S. M.; Hays, L. E.

    2016-01-01

    The Returned Sample Science Board (RSSB) was established in 2015 by NASA to provide expertise from the planetary sample community to the Mars 2020 Project. The RSSB's first task was to address the effect of heating during acquisition and storage of samples on scientific investigations that could be expected to be conducted if the samples are returned to Earth. Sample heating may cause changes that could adversely affect scientific investigations. Previous studies of temperature requirements for returned martian samples fall within a wide range (-73 to 50 degrees Centigrade) and, for mission concepts that have a life detection component, the recommended threshold was less than or equal to -20 degrees Centigrade. The RSSB was asked by the Mars 2020 project to determine whether or not a temperature requirement was needed within the range of 30 to 70 degrees Centigrade. There are eight expected temperature regimes to which the samples could be exposed, from the moment that they are drilled until they are placed into a temperature-controlled environment on Earth. Two of those - heating during sample acquisition (drilling) and heating while cached on the Martian surface - potentially subject samples to the highest temperatures. The RSSB focused on the upper temperature limit that Mars samples should be allowed to reach. We considered 11 scientific investigations where thermal excursions may have an adverse effect on the science outcome. Those are: (T-1) organic geochemistry, (T-2) stable isotope geochemistry, (T-3) prevention of mineral hydration/dehydration and phase transformation, (T-4) retention of water, (T-5) characterization of amorphous materials, (T-6) putative Martian organisms, (T-7) oxidation/reduction reactions, (T-8) 4He thermochronometry, (T-9) radiometric dating using fission, cosmic-ray or solar-flare tracks, (T-10) analyses of trapped gases, and (T-11) magnetic studies.

  4. Finite sample effect in temperature gradient focusing.

    PubMed

    Lin, Hao; Shackman, Jonathan G; Ross, David

    2008-06-01

    Temperature gradient focusing (TGF) is a new and promising equilibrium gradient focusing method which can provide high concentration factors for improved detection limits in combination with high-resolution separation. In this technique, temperature-dependent buffer chemistry is employed to generate a gradient in the analyte electrophoretic velocity. By the application of a convective counter-flow, a zero-velocity point is created within a microchannel, at which location the ionic analytes accumulate or focus. In general, the analyte concentration is small when compared with buffer ion concentrations, such that the focusing mechanism works in the ideal, linearized regime. However, this presumption may at times be violated due to significant sample concentration growth or the use of a low-concentration buffer. Under these situations the sample concentration becomes non-negligible and can induce strong nonlinear interactions with buffer ions, which eventually lead to peak shifting and distortion, and the loss of detectability and resolution. In this work we combine theory, simulation, and experimental data to present a detailed study on nonlinear sample-buffer interactions in TGF. One of the key results is the derivation of a generalized Kohlrausch regulating function (KRF) that is valid for systems in which the electrophoretic mobilities are not constant but vary spatially. This generalized KRF greatly facilitates analysis, allowing reduction of the problem to a single equation describing sample concentration evolution, and is applicable to other problems with heterogeneous electrophoretic mobilities. Using this sample evolution equation we have derived an understanding of the nonlinear peak deformation phenomenon observed experimentally in TGF. We have used numerical simulations to validate our theory and to quantitatively predict TGF. Our simulation results demonstrate excellent agreement with experimental data, and also indicate that the proper inclusion of
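
    The linear-regime focusing mechanism described above can be sketched with a one-dimensional advection-diffusion model in which the net analyte velocity changes sign at a single point, so the analyte accumulates there. All quantities below are nondimensional placeholders, and the nonlinear sample-buffer coupling analyzed in the record is not included.

```python
import numpy as np

# Nondimensional grid and hypothetical parameters.
nx = 201
x = np.linspace(-1.0, 1.0, nx)
dx = x[1] - x[0]
D = 5e-3                          # diffusivity
alpha = 1.0                       # velocity gradient; u = -alpha*x vanishes at x = 0
u = -alpha * x                    # net analyte velocity (electrophoresis + counter-flow)
dt = 0.2 * min(dx / np.abs(u).max(), dx**2 / (2 * D))   # stable explicit time step

c = np.exp(-((x + 0.5) ** 2) / (2 * 0.2 ** 2))          # initial analyte plug

def step(c):
    """One explicit step of dc/dt = -d(u c)/dx + D d2c/dx2 (flux form, central)."""
    flux = u * c
    dcdt = np.zeros_like(c)
    dcdt[1:-1] = (-(flux[2:] - flux[:-2]) / (2 * dx)
                  + D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2)
    return c + dt * dcdt

for _ in range(int(10.0 / dt)):
    c = step(c)

print("peak location:", x[np.argmax(c)])         # expected near x = 0
print("peak enhancement over initial maximum:", round(c.max(), 2))
```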

  5. Multiphoton cryo microscope with sample temperature control

    NASA Astrophysics Data System (ADS)

    Breunig, H. G.; Uchugonova, A.; König, K.

    2013-02-01

    We present a multiphoton microscope system which combines the advantages of multiphoton imaging with precise control of the sample temperature. The microscope provides online insight into temperature-induced changes and effects in plant tissue and animal cells with subcellular resolution during cooling and thawing processes. Image contrast is based on multiphoton fluorescence intensity or fluorescence lifetime in the range from liquid nitrogen temperature up to +600 °C. In addition, microspectra from the imaged regions can be recorded. We present measurement results from plant leaf samples as well as Chinese hamster ovary cells.

  6. Temperature effects: methane generation from landfill samples

    SciTech Connect

    Hartz, K.E.; Ham, R.K.; Klink, R.E.

    1982-08-01

    The objective of this investigation was to study the impact of temperature variations on the rate of methane generation from solid waste. The temperatures investigated ranged from 21 °C to 48 °C. Two approaches were applied: short term residence at seven different temperatures and intermediate term residence at two different temperatures. For the short term studies, samples were obtained from the Freshkills landfill (N.Y.) and the Operating Industries landfill (Calif.). Three samples were used in the intermediate term studies, and were from the Palos Verdes landfill and the Menlo Park landfill, both in California. From the short term results, energy of activation values of 22.4 to 23.7 kilocalories per mole were calculated. The intermediate term results produced values ranging from 18.7 to 21.8 kilocalories per mole. From the results it was concluded that some minor population shifts occurred with minor temperature changes, but all of the energy of activation values were higher than any previously reported. In addition, the temperature of 41 °C was found to be the optimum for methane generation on a short term basis.
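
    The activation energies above follow from an Arrhenius analysis of rates measured at different temperatures; a minimal worked example of that calculation, using hypothetical rate values rather than the paper's data:

```python
import math

R = 8.314  # J/(mol K)

def activation_energy(k1, T1, k2, T2):
    """Arrhenius analysis: ln(k2/k1) = (Ea/R) * (1/T1 - 1/T2)."""
    return R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)

# Hypothetical methane-generation rates (arbitrary units) at 21 C and 41 C.
T1, T2 = 294.15, 314.15
k1, k2 = 1.0, 11.5
Ea = activation_energy(k1, T1, k2, T2)
print(f"Ea = {Ea / 4184:.1f} kcal/mol")   # about 22 kcal/mol for this rate ratio
```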

  7. Coercivity maxima at low temperatures. [of lunar samples

    NASA Technical Reports Server (NTRS)

    Schwerer, F. C.; Nagata, T.

    1974-01-01

    Recent measurements have shown that the magnetic coercive forces of some Apollo lunar samples exhibit an unexpected decrease with decreasing temperature at cryogenic temperatures. This behavior can be explained quantitatively in terms of a model which considers additive contributions from a soft, reversible magnetic phase and from a harder, hysteretic magnetic phase.

  8. Temperature effects: methane generation from landfill samples

    SciTech Connect

    Hartz, K.E.; Klink, R.E.; Ham, R.K.

    1982-08-01

    The objective of the investigation described was to study the impact of temperature variations on the rate of methane generation from solid waste. The temperatures investigated ranged from 21 °C to 46 °C. Two approaches were applied: short term residence at seven different temperatures and intermediate term residence at two different temperatures. From the short term results, energy of activation values of 22.4 to 23.7 kilocalories per mole were calculated. The temperature of 41 °C was found to be the optimum for methane generation on a short term basis. 8 refs.

  9. Apparatus for low temperature thermal desorption spectroscopy of portable samples

    NASA Astrophysics Data System (ADS)

    Stuckenholz, S.; Büchner, C.; Ronneburg, H.; Thielsch, G.; Heyde, M.; Freund, H.-J.

    2016-04-01

    An experimental setup for low temperature thermal desorption spectroscopy (TDS) integrated in an ultrahigh vacuum chamber housing a high-end scanning probe microscope for comprehensive multi-tool surface science analysis is described. This setup enables the characterization with TDS at low temperatures (T > 22 K) of portable sample designs, as is the case for scanning probe optimized setups or high-throughput experiments. This combination of techniques allows a direct correlation between surface morphology, local spectroscopy, and reactivity of model catalysts. The performance of the multi-tool setup is illustrated by measurements of a model catalyst. TDS measurements of CO desorption from Mo(001) and from Mo(001)-supported MgO thin films were carried out and combined with scanning tunneling microscopy measurements.

  10. Apparatus for low temperature thermal desorption spectroscopy of portable samples.

    PubMed

    Stuckenholz, S; Büchner, C; Ronneburg, H; Thielsch, G; Heyde, M; Freund, H-J

    2016-04-01

    An experimental setup for low temperature thermal desorption spectroscopy (TDS) integrated in an ultrahigh vacuum chamber housing a high-end scanning probe microscope for comprehensive multi-tool surface science analysis is described. This setup enables the characterization with TDS at low temperatures (T > 22 K) of portable sample designs, as is the case for scanning probe optimized setups or high-throughput experiments. This combination of techniques allows a direct correlation between surface morphology, local spectroscopy, and reactivity of model catalysts. The performance of the multi-tool setup is illustrated by measurements of a model catalyst. TDS measurements of CO desorption from Mo(001) and from Mo(001)-supported MgO thin films were carried out and combined with scanning tunneling microscopy measurements. PMID:27131703

  11. Spin models and boson sampling

    NASA Astrophysics Data System (ADS)

    Garcia Ripoll, Juan Jose; Peropadre, Borja; Aspuru-Guzik, Alan

    Aaronson & Arkhipov showed that predicting the measurement statistics of random linear optics circuits (i.e. boson sampling) is a classically hard problem for highly non-classical input states. A typical boson-sampling circuit requires N single photon emitters and M photodetectors, and it is a natural idea to rely on few-level systems for both tasks. Indeed, we show that 2M two-level emitters at the input and output ports of a general M-port interferometer interact via an XY-model with collective dissipation and a large number of dark states that could be used for quantum information storage. More important is the fact that, when we neglect dissipation, the resulting long-range XY spin-spin interaction is equivalent to boson sampling under the same conditions that make boson sampling efficient. This allows efficient implementations of boson sampling using quantum simulators & quantum computers. We acknowledge support from Spanish Mineco Project FIS2012-33022, CAM Research Network QUITEMAD+ and EU FP7 FET-Open Project PROMISCE.

  12. Experiment 2030. EE-2 Temperature Log and Downhole Water Sample

    SciTech Connect

    Grigsby, Charles O.

    1983-07-29

    A temperature log and downhole water sample run were conducted in EE-2 on July 13, 1983. The temperature log was taken to show any changes which had occurred in the fracture-to-wellbore intersections as a result of the Experiment 2020 pumping and to locate fluid entries for taking the water sample. The water sample was requested primarily to determine the arsenic concentration in EE-2 fluids (see the memo from C. Grigsby, June 28, 1983, concerning arsenic in EE-3 samples). The temperature log was run using the thermistor in the ESS-6 water sampler.

  13. Proximity effect thermometer for local temperature measurements on mesoscopic samples.

    SciTech Connect

    Aumentado, J.; Eom, J.; Chandrasekhar, V.; Baldo, P. M.; Rehn, L. E.; Materials Science Division; Northwestern Univ; Univ. of Chicago

    1999-11-29

    Using the strongly temperature-dependent resistance of a normal metal wire in proximity to a superconductor, we have been able to measure the local temperature of electrons heated by flowing a direct current (dc) through a metallic wire to within a few tens of millikelvin at low temperatures. By placing two such thermometers at different parts of a sample, we have been able to measure the temperature difference induced by a dc flowing in the samples. This technique may provide a flexible means of making quantitative thermal and thermoelectric measurements on mesoscopic metallic samples.

  14. Axonal model for temperature stimulation.

    PubMed

    Fribance, Sarah; Wang, Jicheng; Roppolo, James R; de Groat, William C; Tai, Changfeng

    2016-10-01

    Recent studies indicate that a rapid increase in local temperature plays an important role in nerve stimulation by laser. To analyze the temperature effect, our study modified the classical HH axonal model by incorporating a membrane capacitance-temperature relationship. The modified model successfully simulated the generation and propagation of action potentials induced by a rapid increase in local temperature when the Curie temperature of membrane capacitance is below 40 °C, while the classical model failed to simulate the axonal excitation by temperature stimulation. The new model predicts that a rapid increase in local temperature produces a rapid increase in membrane capacitance, which causes an inward membrane current across the membrane capacitor strong enough to depolarize the membrane and generate an action potential. If the Curie temperature of membrane capacitance is 31 °C, a temperature increase of 6.6-11.2 °C within 0.1-2.6 ms is required for axonal excitation and the required increase is smaller for a faster increase. The model also predicts that: (1) the temperature increase could be smaller if the global axon temperature is higher; (2) axons of small diameter require a smaller temperature increase than axons of large diameter. Our study indicates that the axonal membrane capacitance-temperature relationship plays a critical role in inducing the transient membrane depolarization by a rapidly increasing temperature, while the effects of temperature on ion channel kinetics cannot induce depolarization. The axonal model developed in this study will be very useful for analyzing the axonal response to local heating induced by pulsed infrared laser. PMID:27342462
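
    A minimal sketch of the depolarizing mechanism the model predicts: in a passive single-compartment membrane the capacitive current is d(CV)/dt, so a rapid rise in C contributes a term proportional to -V dC/dt that depolarizes the cell. The leak-only membrane and all parameter values below are hypothetical simplifications; the actual model retains the Hodgkin-Huxley channels.

```python
import numpy as np

# Hypothetical passive-membrane parameters.
g_L, E_L = 0.3, -70.0       # leak conductance (mS/cm^2), leak reversal (mV)
C0 = 1.0                    # baseline capacitance (uF/cm^2)
dC = 0.3                    # capacitance increase driven by the heat pulse
t_rise = 1.0                # duration of the capacitance ramp (ms)
dt, t_end = 0.001, 5.0      # time step and simulated window (ms)

def C_of_t(t):
    """Membrane capacitance ramping up during 1 <= t <= 1 + t_rise (ms)."""
    return C0 + dC * np.clip((t - 1.0) / t_rise, 0.0, 1.0)

t = np.arange(0.0, t_end, dt)
V = np.empty_like(t)
V[0] = E_L
for i in range(1, t.size):
    C = C_of_t(t[i - 1])
    dCdt = (C_of_t(t[i - 1] + dt) - C) / dt
    # Charge conservation: d(C V)/dt + g_L (V - E_L) = 0
    dVdt = (-g_L * (V[i - 1] - E_L) - V[i - 1] * dCdt) / C
    V[i] = V[i - 1] + dt * dVdt

print(f"peak depolarization: {V.max() - E_L:.1f} mV above rest")
```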

  15. Calibration of tip and sample temperature of a scanning tunneling microscope using a superconductive sample

    SciTech Connect

    Stocker, Matthias; Pfeifer, Holger; Koslowski, Berndt

    2014-05-15

    The temperature of the electrodes is a crucial parameter in virtually all tunneling experiments. The temperature not only controls the thermodynamic state of the electrodes but also causes thermal broadening, which limits the energy resolution. Unfortunately, the construction of many scanning tunneling microscopes involves a weak thermal link between tip and sample in order to keep one side movable, so the temperature of that electrode is poorly defined. Here, the authors present a procedure to calibrate the tip temperature by very simple means. The authors use a superconducting sample (Nb) and a standard tip made from W. Due to the asymmetry in the density of states of the superconductor (SC)-normal metal (NM) tunneling junction, the SC temperature predominantly controls the density of states while the NM temperature controls the thermal smearing. By numerically simulating the I-V curves and numerically optimizing the tip temperature and the SC gap width, the tip temperature can be accurately deduced if the sample temperature is known or measurable. In our case, the temperature dependence of the SC gap may serve as a temperature sensor, leading to an accurate NM temperature even if the SC temperature is unknown.
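
    The calibration relies on the standard forward model for a normal-metal-insulator-superconductor junction, where the superconducting gap shapes the I-V curve and the normal-metal temperature sets the thermal smearing. A sketch of that forward model is below; the small Dynes broadening, the gap value, and the temperatures are illustrative assumptions, and the actual fitting loop over tip temperature and gap width is omitted.

```python
import numpy as np

kB = 8.617e-5          # eV/K

def bcs_dos(E, delta, gamma=1e-5):
    """BCS density of states with a tiny Dynes broadening to regularize the gap edge."""
    Ec = E + 1j * gamma
    return np.abs(np.real(Ec / np.sqrt(Ec**2 - delta**2)))

def fermi(E, T):
    return 1.0 / (1.0 + np.exp(E / (kB * T)))

def nis_current(V, delta, T_sc, T_tip):
    """Tunneling current (arbitrary units) of an N(tip)-I-S(sample) junction at bias V."""
    E = np.linspace(-10 * delta, 10 * delta, 4001)
    dE = E[1] - E[0]
    integrand = bcs_dos(E, delta) * (fermi(E - V, T_tip) - fermi(E, T_sc))
    return integrand.sum() * dE

# Hypothetical Nb gap and temperatures; fitting T_tip to measured I-V data is the
# step the record's procedure actually performs.
delta = 1.4e-3                      # eV
V = np.linspace(-4e-3, 4e-3, 161)   # bias (V)
I_cold = [nis_current(v, delta, T_sc=4.2, T_tip=4.2) for v in V]
I_warm = [nis_current(v, delta, T_sc=4.2, T_tip=8.0) for v in V]
print("current at V = 1 mV, cold vs warm tip:",
      I_cold[len(V) // 2 + 20], I_warm[len(V) // 2 + 20])
```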

  16. Helium POT System for Maintaining Sample Temperature after Cryocooler Deactivation

    NASA Astrophysics Data System (ADS)

    Haid, B. J.

    2006-04-01

    A system for maintaining a sample at a constant temperature below 10 K after deactivating the cooling source is demonstrated. In this system, the cooling source is a 4 K GM cryocooler that is joined with the sample through an extension that consists of a helium pot and a thermal resistance. Upon stopping the cryocooler, the power applied to a heater located on the sample side of the thermal resistance is decreased gradually to maintain an appropriate temperature rise across the thermal resistance as the helium pot warms. The sample temperature is held constant in this manner without the use of solid or liquid cryogens and without mechanically disconnecting the sample from the cooler. Shutting off the cryocooler significantly reduces sample motion that results from vibration and expansion/contraction of the cold-head housing. The reduction in motion permits certain procedures that are very sensitive to sample position stability, but are performed with limited duration. A proof-of-concept system was built and operated with the helium pot pressurized to the cryocooler's charge pressure. A sample with 200 mW of continuous heat dissipation was maintained at 7 K while the cryocooler operated intermittently with a duty cycle of 9.5 minutes off and 20 minutes on.

  17. CORRECTING LAND SURFACE MODEL PREDICTIONS FOR THE IMPACT OF SPARSELY SAMPLED RAINFALL RATE RETRIEVALS USING AN ENSEMBLE KALMAN FILTER AND REMOTE SURFACE BRIGHTNESS TEMPERATURE OBSERVATIONS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Current attempts to measure short-term (< 1 month) rainfall accumulations using spaceborne radiometers are characterized by large sampling errors associated with relatively infrequent observation rates (2 to 8 measurements per day). This degrades the value of spaceborne rainfall retrievals for the ...

  18. Noncontact true temperature measurement. [of levitated sample using laser pyrometer

    NASA Technical Reports Server (NTRS)

    Lee, Mark C.; Allen, James L.

    1987-01-01

    A laser pyrometer has been developed for acquiring the true temperature of a levitated sample. The laser beam is first expanded to cover the entire cross-sectional surface of the target. For calibration of such a system, the reflectivity signal of an ideal 0.95 cm diameter gold-coated sphere (reflectivity = 0.99) is used as the reference for any other real targets. The emissivity of the real target can then be calculated. The overall system constant is obtained by passively measuring the radiance of a blackbody furnace (emissivity = 1.0) at a known, arbitrary temperature. Since the photo sensor used is highly linear over the entire operating temperature range, the true temperature of the target can then be computed. Preliminary results indicate that true temperatures thus obtained are in excellent correlation with thermocouple measured temperatures.
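
    The radiometric step can be sketched compactly: take the emissivity as one minus the measured reflectivity of the opaque target, then invert Planck's law at the pyrometer wavelength to convert measured radiance into true temperature. The wavelength, reflectivity, and radiance values below are illustrative only.

```python
import math

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
c1 = 2.0 * h * c**2            # first radiation constant for radiance (W m^2 / sr)
c2 = h * c / kB                # second radiation constant (m K)

def planck_radiance(lam, T):
    """Blackbody spectral radiance at wavelength lam (m) and temperature T (K)."""
    return c1 / (lam**5 * (math.exp(c2 / (lam * T)) - 1.0))

def true_temperature(lam, L_meas, emissivity):
    """Invert L_meas = emissivity * planck_radiance(lam, T) for T."""
    return c2 / (lam * math.log(1.0 + emissivity * c1 / (lam**5 * L_meas)))

# Hypothetical case: 0.9 um pyrometer wavelength, target reflectivity 0.70.
lam = 0.9e-6
emissivity = 1.0 - 0.70                 # opaque target: emissivity = 1 - reflectivity
T_actual = 1800.0                       # K, the "unknown" sample temperature
L_meas = emissivity * planck_radiance(lam, T_actual)   # simulated measurement
print("recovered true temperature:",
      round(true_temperature(lam, L_meas, emissivity), 1), "K")
```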

  19. Simple microcalorimeter for measuring microgram samples at low temperatures

    NASA Astrophysics Data System (ADS)

    Doettinger-Zech, S. G.; Uhl, M.; Sisson, D. L.; Kapitulnik, A.

    2001-05-01

    An innovative microcalorimeter has been developed for measuring specific heat of very small microgram samples in the temperature range from 1.5 to 50 K and in magnetic fields up to 11 T. The device is built from a commercial sapphire temperature chip (Cernox), which is modified by means of standard microfabrication techniques and which is used as a sample holder, temperature sensor, and sample heater. Compared to existing microcalorimeters the simple design of our instrument allows a fabrication of the device in a few process steps by using facilities present in a standard laboratory clean room. As an illustrative example for the performance of our device, the specific heat of an underdoped (La1-xSrx)2CuO4 and CaRuO3 single crystal has been measured by means of the relaxation time method as well as the ac method.

  20. Rotating sample magnetometer for cryogenic temperatures and high magnetic fields

    NASA Astrophysics Data System (ADS)

    Eisterer, M.; Hengstberger, F.; Voutsinas, C. S.; Hörhager, N.; Sorta, S.; Hecher, J.; Weber, H. W.

    2011-06-01

    We report on the design and implementation of a rotating sample magnetometer (RSM) operating in the variable temperature insert (VTI) of a cryostat equipped with a high-field magnet. The limited space and the cryogenic temperatures impose the most critical design parameters: the small bore size of the magnet requires a very compact pick-up coil system and the low temperatures demand a very careful design of the bearings. Despite these difficulties the RSM achieves excellent resolution at high magnetic field sweep rates, exceeding that of a typical vibrating sample magnetometer by about a factor of ten. In addition the gas-flow cryostat and the high-field superconducting magnet provide a temperature and magnetic field range unprecedented for this type of magnetometer.

  1. Response of TGS ferroelectric samples to rapid temperature impulses

    NASA Astrophysics Data System (ADS)

    Trybus, M.; Proszak, W.; Woś, B.

    2013-11-01

    Triglycine sulphate (TGS) is one of the most extensively studied ferroelectric materials; it undergoes a second order phase transition and shows the pyroelectric effect. In our present experiments we study the electric properties of TGS in relation to domain switching, observing the samples' response to controlled temperature pulses. The charge released in the processes of domain switching was previously studied under constant temperature growth. Our method allows us to observe the released pyroelectric charge in both the ferroelectric and paraelectric phases. To perform our experiment we designed new measurement software and constructed a novel thermostatic sample holder containing Peltier cells as heating/cooling elements.

  2. Parametric models for samples of random functions

    SciTech Connect

    Grigoriu, M.

    2015-09-15

    A new class of parametric models, referred to as sample parametric models, is developed for random elements; these models match samples rather than the first two moments and/or other global properties of the elements. The models can be used to characterize, e.g., material properties at small scale, in which case their samples represent microstructures of material specimens selected at random from a population. The samples of the proposed models are elements of finite-dimensional vector spaces spanned by samples, eigenfunctions of Karhunen–Loève (KL) representations, or modes of singular value decompositions (SVDs). The implementation of sample parametric models requires knowledge of the probability laws of target random elements. Numerical examples including stochastic processes and random fields are used to demonstrate the construction of sample parametric models, assess their accuracy, and illustrate how these models can be used to solve stochastic equations efficiently.

  3. Ultrasound absorption measurements in rock samples at low temperatures

    NASA Technical Reports Server (NTRS)

    Herminghaus, C.; Berckhemer, H.

    1974-01-01

    A new technique, comparable with the reverberation method in room acoustics, is described. It allows Q-measurements on rock samples of arbitrary shape in the frequency range of 50 to 600 kHz in vacuum (0.1 mTorr) and at low temperatures (+20 to -180 °C). The method was developed in particular to investigate rock samples under lunar conditions. Ultrasound absorption has been measured on volcanics, breccia, gabbros, feldspar and quartz of different grain size and texture, yielding the following results: evacuation raises Q mainly through lowering the humidity in the rock. In a dry compact rock, the effect of evacuation is small. With decreasing temperature, Q generally increases. Between +20 and -30 °C, Q does not change much. With further decrease of temperature, in many cases distinct anomalies appear, where Q becomes frequency dependent.

  4. Modeling uncertainty: quicksand for water temperature modeling

    USGS Publications Warehouse

    Bartholow, John M.

    2003-01-01

    Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper will address important components of uncertainty in modeling water temperatures, and discuss several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the reference, are meant to supplement the presentation given at this conference.

  5. Fast temperature spectrometer for samples under extreme conditions

    NASA Astrophysics Data System (ADS)

    Zhang, Dongzhou; Jackson, Jennifer M.; Zhao, Jiyong; Sturhahn, Wolfgang; Alp, E. Ercan; Toellner, Thomas S.; Hu, Michael Y.

    2015-01-01

    We have developed a multi-wavelength Fast Temperature Readout (FasTeR) spectrometer to capture a sample's transient temperature fluctuations, and reduce uncertainties in melting temperature determination. Without sacrificing accuracy, FasTeR features a fast readout rate (about 100 Hz), high sensitivity, large dynamic range, and a well-constrained focus. Complementing a charge-coupled device spectrometer, FasTeR consists of an array of photomultiplier tubes and optical dichroic filters. The temperatures determined by FasTeR outside of the vicinity of melting are, generally, in good agreement with results from the charge-coupled device spectrometer. Near melting, FasTeR is capable of capturing transient temperature fluctuations, at least on the order of 300 K/s. A software tool, SIMFaster, is described and has been developed to simulate FasTeR and assess design configurations. FasTeR is especially suitable for temperature determinations that utilize ultra-fast techniques under extreme conditions. Working in parallel with the laser-heated diamond-anvil cell, synchrotron Mössbauer spectroscopy, and X-ray diffraction, we have applied the FasTeR spectrometer to measure the melting temperature of 57Fe0.9Ni0.1 at high pressure.

  6. Fast temperature spectrometer for samples under extreme conditions.

    PubMed

    Zhang, Dongzhou; Jackson, Jennifer M; Zhao, Jiyong; Sturhahn, Wolfgang; Alp, E Ercan; Toellner, Thomas S; Hu, Michael Y

    2015-01-01

    We have developed a multi-wavelength Fast Temperature Readout (FasTeR) spectrometer to capture a sample's transient temperature fluctuations, and reduce uncertainties in melting temperature determination. Without sacrificing accuracy, FasTeR features a fast readout rate (about 100 Hz), high sensitivity, large dynamic range, and a well-constrained focus. Complementing a charge-coupled device spectrometer, FasTeR consists of an array of photomultiplier tubes and optical dichroic filters. The temperatures determined by FasTeR outside of the vicinity of melting are, generally, in good agreement with results from the charge-coupled device spectrometer. Near melting, FasTeR is capable of capturing transient temperature fluctuations, at least on the order of 300 K/s. A software tool, SIMFaster, is described and has been developed to simulate FasTeR and assess design configurations. FasTeR is especially suitable for temperature determinations that utilize ultra-fast techniques under extreme conditions. Working in parallel with the laser-heated diamond-anvil cell, synchrotron Mössbauer spectroscopy, and X-ray diffraction, we have applied the FasTeR spectrometer to measure the melting temperature of 57Fe0.9Ni0.1 at high pressure. PMID:25638070

  7. Temperature data for phenological models.

    PubMed

    Snyder, R L; Spano, D; Duce, P; Cesaraccio, C

    2001-11-01

    In an arid environment, the effect of evaporation on energy balance can affect air temperature recordings and greatly impact on degree-day calculations. This is an important consideration when choosing a site or climate data for phenological models. To our knowledge, there is no literature showing the effect of the underlying surface and its fetch around a weather station on degree-day accumulations. In this paper, we present data to show that this is a serious consideration, and it can lead to dubious models. Microscale measurements of temperature and energy balance are presented to explain why the differences occur. For example, the effect of fetch of irrigated grass and wetting of bare soil around a weather station on diurnal temperature are reported. A 43-day experiment showed that temperature measured on the upwind edge of an irrigated grass area averaged 4% higher than temperatures recorded 200 m inside the grass field. When the single-triangle method was used with a 10 degrees C threshold and starting on May 19, the station on the upwind edge recorded 900 degree-days on June 28, whereas the interior station recorded 900 degree-days on July 1. Clearly, a difference in fetch can lead to big errors for large degree-day accumulations. Immediately after wetting, the temperature over a wet soil surface was similar to that measured over grass. However, the temperature over the soil increased more than that over the grass as the soil surface dried. Therefore, the observed difference between temperatures measured over bare soil and those over grass increases with longer periods between wettings. In most arid locations, measuring temperature over irrigated grass gives a lower mean annual temperature, resulting in lower annual cumulative degree-day values. This was verified by comparing measurements over grass with those over bare soil at several weather stations in a range of climates. To eliminate the effect of rainfall frequency, using temperature data collected
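
    The degree-day figures quoted above come from the single-triangle method; the sketch below uses one common formulation of that calculation (upper thresholds ignored) with the 10 degrees C threshold from the record and made-up daily minima and maxima.

```python
def single_triangle_dd(t_min, t_max, t_base):
    """Degree-days for one day from a triangle drawn between t_min and t_max
    (one common formulation; no upper threshold is applied here)."""
    if t_max <= t_base:
        return 0.0
    if t_min >= t_base:
        return (t_max + t_min) / 2.0 - t_base
    # Triangle only partially above the threshold.
    return (t_max - t_base) ** 2 / (2.0 * (t_max - t_min))

# Hypothetical daily minima/maxima (deg C) for two nearby stations.
edge_station     = [(12, 27), (13, 29), (11, 26), (14, 30)]
interior_station = [(12, 26), (13, 28), (11, 25), (14, 29)]

for name, days in [("edge", edge_station), ("interior", interior_station)]:
    total = sum(single_triangle_dd(lo, hi, 10.0) for lo, hi in days)
    print(f"{name}: {total:.1f} degree-days")
```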

  8. Thermal modeling of core sampling in flammable gas waste tanks. Part 1: Push-mode sampling

    SciTech Connect

    Unal, C.; Stroh, K.; Pasamehmetoglu, K.O.

    1997-08-01

    The radioactive waste stored in underground storage tanks at the Hanford site is routinely sampled for waste characterization purposes. Push- and rotary-mode core sampling are among the sampling methods employed. The waste includes mixtures of sodium nitrate and sodium nitrite with organic compounds that can produce violent exothermic reactions if heated above 160 °C during core sampling. A self-propagating waste reaction would produce very high temperatures that eventually result in failure of the tank and radioactive material releases to the environment. A two-dimensional thermal model based on a lumped finite volume analysis method is developed. The enthalpy of each node is calculated from the first law of thermodynamics. A flash temperature and an effective contact area concept were introduced to account for the interface temperature rise. No maximum temperature rise exceeding the critical value of 60 °C was found in the cases studied for normal operating conditions. Several accident conditions are also examined. In these cases it was found that the maximum drill bit temperature remained below the critical reaction temperature as long as a 30 scfm purge flow was provided to the drill bit during sampling in rotary mode. Failure to provide purge flow resulted in exceeding the limiting temperatures in a relatively short time.
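
    A minimal single-node version of the kind of lumped energy balance used in such an analysis: frictional heating at the drill bit balanced against convective loss to the purge gas, integrated explicitly in time. Every parameter value below is a hypothetical placeholder, not a number from the report.

```python
import numpy as np

# Hypothetical lumped drill-bit parameters.
m_cp = 50.0        # thermal mass of the bit, J/K
Q_frict = 120.0    # frictional heating rate, W
hA = 3.0           # purge-gas convective conductance, W/K
T_waste = 35.0     # surrounding waste temperature, deg C
T_limit = 160.0    # critical exothermic-reaction temperature, deg C

dt, t_end = 0.5, 3600.0
T = T_waste
history = []
for _ in np.arange(0.0, t_end, dt):
    dTdt = (Q_frict - hA * (T - T_waste)) / m_cp   # lumped energy balance
    T += dt * dTdt
    history.append(T)

print(f"steady-state bit temperature ~ {T_waste + Q_frict / hA:.0f} C")
print(f"max simulated temperature: {max(history):.0f} C "
      f"({'below' if max(history) < T_limit else 'above'} the {T_limit:.0f} C limit)")
```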

  9. A low temperature scanning force microscope for biological samples

    SciTech Connect

    Gustafsson, M. G.L.

    1993-05-01

    An SFM has been constructed capable of operating at 143 K. Two contributions to SFM technology are described: a new method of fabricating tips, and new designs of SFM springs that significantly lower the noise level. The SFM has been used to image several biological samples (including collagen, ferritin, RNA, purple membrane) at 143 K and room temperature. No improvement in resolution resulted from 143 K operation; several possible reasons for this are discussed. Possibly sharper tips may help. The 143 K SFM will allow the study of new categories of samples, such as those prepared by freeze-frame, single molecules (temperature dependence of mechanical properties), etc. The SFM was used to cut single collagen molecules into segments with a precision of ≤ 10 nm.

  10. Sampling Errors in Satellite-derived Infrared Sea Surface Temperatures

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Minnett, P. J.

    2014-12-01

    Sea Surface Temperature (SST) measured from satellites plays a crucial role in understanding geophysical phenomena. Generating SST Climate Data Records (CDRs) is considered to impose the most stringent requirements on data accuracy. For infrared SSTs, sampling uncertainties caused by cloud presence and persistence generate errors. In addition, for sensors with narrow swaths, the swath gap acts as another sampling error source. This study is concerned with quantifying and understanding such sampling errors, which are important for SST CDR generation and for a wide range of satellite SST users. In order to quantify these errors, a reference Level 4 SST field (Multi-scale Ultra-high Resolution SST) is sampled by using realistic swath and cloud masks of the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Advanced Along Track Scanning Radiometer (AATSR). Global and regional SST uncertainties are studied by assessing the sampling error at different temporal and spatial resolutions (7 spatial resolutions from 4 kilometers to 5.0° at the equator and 5 temporal resolutions from daily to monthly). Global annual and seasonal mean sampling errors are large in the high latitude regions, especially the Arctic, and have geographical distributions that are most likely related to stratus cloud occurrence and persistence. The region between 30°N and 30°S has smaller errors compared to higher latitudes, except for the Tropical Instability Wave area, where persistent negative errors are found. Important differences in sampling errors are also found between the broad and narrow swath scan patterns and between day and night fields. This is the first time that realistic magnitudes of the sampling errors have been quantified. Future improvement in the accuracy of SST products will benefit from this quantification.
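
    The core of the sampling-error calculation can be sketched as follows: sample a reference SST field through a cloud-and-swath mask, average what survives, and compare with the average of the full field. The synthetic field and latitude-dependent mask below are stand-ins for the Level 4 reference and the real MODIS/AATSR masks used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "true" SST field (deg C): warm tropics, cold high latitudes, plus noise.
ny, nx = 180, 360
lat = np.linspace(-60, 60, ny)[:, None]
sst_true = 28.0 - 0.25 * np.abs(lat) + rng.normal(0.0, 0.3, (ny, nx))

# Synthetic observation mask: clouds more likely at high latitudes, plus a swath gap.
p_clear = 0.7 - 0.5 * np.abs(lat) / 60.0
mask = rng.random((ny, nx)) < p_clear
mask[:, 100:140] = False                         # unobserved swath gap

sampled_mean = sst_true[mask].mean()
true_mean = sst_true.mean()
print(f"coverage: {mask.mean():.0%}")
print(f"sampling error of the mean: {sampled_mean - true_mean:+.2f} K")
```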

  11. Advances in downhole sampling of high temperature solutions

    SciTech Connect

    Bayhurst, G.K.; Janecky, D.R.

    1991-01-01

    A fluid sampler capable of sampling hot and/or deep wells has been developed at Los Alamos National Laboratory. In collaboration with Leutert Instruments, an off-the-shelf sampler design was modified to meet gas-tight and minimal chemical reactivity/contamination specifications for use in geothermal wells and deep ocean drillholes. This downhole sampler has been routinely used at temperatures up to 300 °C and hole depths of greater than 5 km. We have tested this sampler in various continental wells, including Valles Caldera VC-2a and VC-2b, German KTB, Cajon Pass, and Yellowstone Y-10. Both the standard commercial and enhanced samplers have also been used to obtain samples from a range of depths in the Ocean Drilling Project's hole 504B and during recent mid-ocean ridge drilling efforts. The sampler has made it possible to collect samples at temperatures and conditions beyond the limits of other tools with the added advantage of chemical corrosion resistance.

  12. A Simple Model for Solidification of Undercooled Metallic Samples

    NASA Astrophysics Data System (ADS)

    Saleh, Abdala M.; Clemente, Roberto A.

    2004-06-01

    A simple model for reproducing the temperature recalescence behaviour of spherical undercooled liquid metallic samples undergoing crystallization is presented. The model assumes a constant heat extraction rate, a uniform but time-dependent temperature distribution inside the sample (even after the start of crystallization), and a classical temperature-dependent nucleation rate (including contributions from the different specific heats of the phases and a catalytic factor to model the possibility of heterogeneously distributed impurities); the solidified grain interface velocity is taken to be proportional to the undercooling. Different assumptions are considered for the sample transformed fraction as a function of the extended volume of nuclei: the classical Kolmogoroff-Johnson-Mehl-Avrami one (corresponding to a random distribution of nuclei), the Austin-Rickett one (corresponding to some kind of clustered distribution), and an empirical one corresponding to some ordering in the distribution of nuclei. As an example of application, a published experimental temperature curve for a zirconium sample in the electromagnetic containerless facility TEMPUS, recorded during the 2nd International Microgravity Laboratory Mission in 1994, is modeled. Some thermo-physical parameters of interest for Zr are discussed.
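
    The transformed-fraction relations mentioned above can be compared directly. With a constant nucleation rate I and growth velocity v, the extended volume fraction grows as X_ext = (pi/3) I v^3 t^4, and the classical relations map X_ext to the real transformed fraction differently. The constants below are arbitrary placeholders; the actual model couples these relations to the sample's heat balance and an undercooling-dependent interface velocity.

```python
import numpy as np

# Hypothetical constant nucleation rate and growth velocity (nondimensional).
I_nuc, v = 1.0, 1.0
t = np.linspace(0.0, 2.0, 201)

# Extended volume fraction for continuous nucleation and 3D spherical growth.
x_ext = (np.pi / 3.0) * I_nuc * v**3 * t**4

x_kjma = 1.0 - np.exp(-x_ext)          # Kolmogoroff-Johnson-Mehl-Avrami (random nuclei)
x_ar = x_ext / (1.0 + x_ext)           # Austin-Rickett relation, X/(1-X) = X_ext

for ti in (0.5, 1.0, 1.5):
    i = np.argmin(np.abs(t - ti))
    print(f"t = {ti}:  X_ext = {x_ext[i]:.2f}  KJMA = {x_kjma[i]:.2f}  AR = {x_ar[i]:.2f}")
```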

  13. Functional Error Models to Accelerate Nested Sampling

    NASA Astrophysics Data System (ADS)

    Josset, L.; Elsheikh, A. H.; Demyanov, V.; Lunati, I.

    2014-12-01

    The main challenge in groundwater problems is the reliance on large numbers of unknown parameters with a wide range of associated uncertainties. To translate this uncertainty to quantities of interest (for instance the concentration of pollutant in a drinking well), a large number of forward flow simulations is required. To make the problem computationally tractable, Josset et al. (2013, 2014) introduced the concept of functional error models. It consists of two elements: a proxy model that is cheaper to evaluate than the full physics flow solver, and an error model to account for the missing physics. The coupling of the proxy model and the error model provides reliable predictions that approximate the full physics model's responses. The error model is tailored to the problem at hand by building it for the question of interest. It follows a typical approach in machine learning where both the full physics and proxy models are evaluated for a training set (a subset of realizations) and the set of responses is used to construct the error model using functional data analysis. Once the error model is devised, a prediction of the full physics response for a new geostatistical realization can be obtained by computing the proxy response and applying the error model. We propose the use of functional error models in a Bayesian inference context by combining them with Nested Sampling (Skilling 2006; El Sheikh et al. 2013, 2014). Nested Sampling offers a means to compute the Bayesian Evidence by transforming the multidimensional integral into a 1D integral. The algorithm is simple: starting with an active set of samples, at each iteration, the sample with the lowest likelihood is kept aside and replaced by a sample of higher likelihood. The main challenge is to find this sample of higher likelihood. We suggest a new approach: first the active set is sampled, both proxy and full physics models are run and the functional error model is built. Then, at each iteration of the Nested
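
    A toy version of the nested sampling loop described above, without the functional error model or a flow solver: the likelihood is a cheap analytic stand-in, new live points are found by simple rejection from the prior, and the evidence is accumulated with the usual exponential prior-volume shrinkage. This is illustrative scaffolding, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(theta):
    """Cheap analytic stand-in for the (proxy + error model) forward simulation."""
    return -0.5 * np.sum((theta - 0.3) ** 2) / 0.05**2

def sample_prior(n=1):
    return rng.uniform(0.0, 1.0, size=(n, 2))      # uniform prior on [0,1]^2

n_live, n_iter = 100, 600
live = sample_prior(n_live)
live_logl = np.array([log_likelihood(t) for t in live])

log_z = -np.inf                                    # running log-evidence estimate
for i in range(n_iter):
    worst = np.argmin(live_logl)
    logl_min = live_logl[worst]
    # Prior volume shrinks geometrically: X_i ~ exp(-i / n_live).
    log_weight = logl_min - i / n_live + np.log(1.0 / n_live)
    log_z = np.logaddexp(log_z, log_weight)
    # Replace the worst point with a new prior draw above the likelihood floor
    # (simple rejection; real implementations use smarter constrained samplers).
    while True:
        cand = sample_prior(1)[0]
        cand_logl = log_likelihood(cand)
        if cand_logl > logl_min:
            break
    live[worst], live_logl[worst] = cand, cand_logl

# Add the remaining contribution of the live points at the final prior volume.
log_mean_live = live_logl.max() + np.log(np.mean(np.exp(live_logl - live_logl.max())))
log_z = np.logaddexp(log_z, log_mean_live - n_iter / n_live)

# Analytic value for this toy problem is log(2*pi*0.05**2) ~ -4.15.
print("log-evidence estimate:", round(log_z, 2))
```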

  14. Tissue Sampling Guides for Porcine Biomedical Models.

    PubMed

    Albl, Barbara; Haesner, Serena; Braun-Reichhart, Christina; Streckel, Elisabeth; Renner, Simone; Seeliger, Frank; Wolf, Eckhard; Wanke, Rüdiger; Blutke, Andreas

    2016-04-01

    This article provides guidelines for organ and tissue sampling adapted to porcine animal models in translational medical research. Detailed protocols for the determination of sampling locations and numbers as well as recommendations on the orientation, size, and trimming direction of samples from ∼50 different porcine organs and tissues are provided in the Supplementary Material. The proposed sampling protocols include the generation of samples suitable for subsequent qualitative and quantitative analyses, including cryohistology, paraffin, and plastic histology; immunohistochemistry; in situ hybridization; electron microscopy; and quantitative stereology as well as molecular analyses of DNA, RNA, proteins, metabolites, and electrolytes. With regard to the planned extent of sampling efforts, time, and personnel expenses, and dependent upon the scheduled analyses, different protocols are provided. These protocols are adjusted for (I) routine screenings, as used in general toxicity studies or in analyses of gene expression patterns or histopathological organ alterations, (II) advanced analyses of single organs/tissues, and (III) large-scale sampling procedures to be applied in biobank projects. Providing a robust reference for studies of porcine models, the described protocols will ensure the efficiency of sampling, the systematic recovery of high-quality samples representing the entire organ or tissue as well as the intra-/interstudy comparability and reproducibility of results. PMID:26883152

  15. High temperature furnace modeling and performance verifications

    NASA Technical Reports Server (NTRS)

    Smith, James E., Jr.

    1992-01-01

    Analytical, numerical, and experimental studies were performed on two classes of high temperature materials processing sources for their potential use as directional solidification furnaces. The research concentrated on a commercially available high temperature furnace using a zirconia ceramic tube as the heating element and an Arc Furnace based on a tube welder. The first objective was to assemble the zirconia furnace and construct the parts needed to successfully perform experiments. The second objective was to evaluate the zirconia furnace performance as a directional solidification furnace element. The third objective was to establish a data base on the materials used in the furnace construction, with particular emphasis on emissivities, transmissivities, and absorptivities as functions of wavelength and temperature. One-dimensional and two-dimensional spectral radiation heat transfer models were developed for comparison with standard modeling techniques and were used to predict wall and crucible temperatures. The fourth objective addressed the development of a SINDA model for the Arc Furnace, which was used to design sample holders and to estimate cooling media temperatures for steady state operation of the furnace. The fifth objective addressed the initial performance evaluation of the Arc Furnace and associated equipment for directional solidification. Results of these objectives are presented.

  16. Monte Carlo Sampling of Negative-temperature Plasma States

    SciTech Connect

    John A. Krommes; Sharadini Rath

    2002-07-19

    A Monte Carlo procedure is used to generate N-particle configurations compatible with two-temperature canonical equilibria in two dimensions, with particular attention to nonlinear plasma gyrokinetics. An unusual feature of the problem is the importance of a nontrivial probability density function R0(Φ), the probability of realizing a set Φ of Fourier amplitudes associated with an ensemble of uniformly distributed, independent particles. This quantity arises because the equilibrium distribution is specified in terms of Φ, whereas the sampling procedure naturally produces particle states Γ; Φ and Γ are related via a gyrokinetic Poisson equation, highly nonlinear in its dependence on Γ. Expansion and asymptotic methods are used to calculate R0(Φ) analytically; excellent agreement is found between the large-N asymptotic result and a direct numerical calculation. The algorithm is tested by successfully generating a variety of states of both positive and negative temperature, including ones in which either the longest- or shortest-wavelength modes are excited to relatively very large amplitudes.

  17. New high temperature plasmas and sample introduction systems for analytical atomic emission and mass spectrometry

    SciTech Connect

    Montaser, A.

    1992-01-01

    New high temperature plasmas and new sample introduction systems are explored for rapid elemental and isotopic analysis of gases, solutions, and solids using mass spectrometry and atomic emission spectrometry. Emphasis was placed on atmospheric pressure He inductively coupled plasmas (ICP) suitable for atomization, excitation, and ionization of elements; simulation and computer modeling of plasma sources with potential for use in spectrochemical analysis; spectroscopic imaging and diagnostic studies of high temperature plasmas, particularly He ICP discharges; and development of new, low-cost sample introduction systems, and examination of techniques for probing the aerosols over a wide range. Refs., 14 figs. (DLC)

  18. Beam Heating of Samples: Modeling and Verification. Part 2

    NASA Technical Reports Server (NTRS)

    Kazmierczak, Michael; Gopalakrishnan, Pradeep; Kumar, Raghav; Banerjee, Rupak; Snell, Edward; Bellamy, Henry; Rosenbaum, Gerd; vanderWoerd, Mark

    2006-01-01

    Energy absorbed from the X-ray beam by the sample requires cooling by forced convection (i.e. a cryostream) to minimize the temperature increase and the damage caused to the sample by the X-ray heating. In this presentation we will first review the current theoretical models and recent studies in the literature, which predict the sample temperature rise for a given set of beam parameters. A common weakness of these previous studies is that none of them provide actual experimental confirmation. This situation is now remedied in our investigation, where the problem of X-ray sample heating is taken up once more. We have investigated the problem theoretically and, in addition to the numerical computations, performed experiments to validate the predictions. We have modeled, analyzed and experimentally tested the temperature rise of a 1 mm diameter glass sphere (sample surrogate) exposed to an intense synchrotron X-ray beam while it is being cooled in a uniform flow of nitrogen gas. The heat transfer, including external convection and internal heat conduction, was theoretically modeled using CFD to predict the temperature variation in the sphere during cooling and while it was subjected to an undulator (ID sector 19) X-ray beam at the APS. The surface temperature of the sphere during the X-ray beam heating was measured using the infrared camera measurement technique described in a previous talk. The temperatures from the numerical predictions and experimental measurements are compared and discussed. Additional results are reported for two different sphere sizes and for two different supporting pin orientations.

  19. Current Sharing Temperature Test and Simulation with GANDALF Code for ITER PF2 Conductor Sample

    NASA Astrophysics Data System (ADS)

    Li, Shaolei; Wu, Yu; Liu, Bo; Weng, Peide

    2011-10-01

    A cable-in-conduit conductor (CICC) sample of the PF2 coil for ITER was tested in the SULTAN facility. According to the test results, the CICC sample exhibited a stable performance regarding the current sharing temperature. Under the typical operational conditions for PF2 of a current of 45 kA, a magnetic field of 4 T and a temperature of 5 K, the test result for the conductor current sharing temperature is 6.71 K, giving a temperature margin of 1.71 K. For comparison, a thermal-hydraulic analysis of the PF2 conductor was carried out using the GANDALF code in a 1-D model, and the result is consistent with the test.

  20. COAL SAMPLING AND ANALYSIS: METHODS AND MODELS

    EPA Science Inventory

    The report provides information on coal sampling and analysis (CSD) techniques and procedures and presents a statistical model for estimating SO2 emissions. (New Source Performance Standards for large coal-fired boilers and certain State Implementation Plans require operators to ...

  1. Mixture models for distance sampling detection functions.

    PubMed

    Miller, David L; Thomas, Len

    2015-01-01

    We present a new class of models for the detection function in distance sampling surveys of wildlife populations, based on finite mixtures of simple parametric key functions such as the half-normal. The models share many of the features of the widely-used "key function plus series adjustment" (K+A) formulation: they are flexible, produce plausible shapes with a small number of parameters, allow incorporation of covariates in addition to distance and can be fitted using maximum likelihood. One important advantage over the K+A approach is that the mixtures are automatically monotonic non-increasing and non-negative, so constrained optimization is not required to ensure distance sampling assumptions are honoured. We compare the mixture formulation to the K+A approach using simulations to evaluate its applicability in a wide set of challenging situations. We also re-analyze four previously problematic real-world case studies. We find mixtures outperform K+A methods in many cases, particularly spiked line transect data (i.e., where detectability drops rapidly at small distances) and larger sample sizes. We recommend that current standard model selection methods for distance sampling detection functions are extended to include mixture models in the candidate set. PMID:25793744
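
    A bare-bones version of a half-normal mixture detection function and its maximum-likelihood fit to simulated line-transect distances is sketched below; the monotonic non-increasing shape comes for free because each component is a half-normal. The logistic/log parameterization and all numerical values are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import quad

rng = np.random.default_rng(42)
w_trunc = 1.0                                     # truncation distance

def g(x, par):
    """Two-component half-normal mixture detection function."""
    p = 1.0 / (1.0 + np.exp(-par[0]))             # mixture weight in (0,1)
    s1, s2 = np.exp(par[1]), np.exp(par[2])       # component scales > 0
    return p * np.exp(-x**2 / (2 * s1**2)) + (1 - p) * np.exp(-x**2 / (2 * s2**2))

def negloglik(par, x):
    norm = quad(lambda u: g(u, par), 0.0, w_trunc)[0]
    return -np.sum(np.log(g(x, par) / norm))

# Simulate detections: uniform true distances thinned by a known detection function.
true_par = np.array([np.log(0.6 / 0.4), np.log(0.15), np.log(0.45)])
cand = rng.uniform(0.0, w_trunc, 20000)
detected = cand[rng.random(cand.size) < g(cand, true_par)]

fit = minimize(negloglik, x0=np.zeros(3), args=(detected,), method="Nelder-Mead")
print("fitted component scales:", np.exp(fit.x[1]), np.exp(fit.x[2]))
print("estimated average detection probability:",
      quad(lambda u: g(u, fit.x), 0.0, w_trunc)[0] / w_trunc)
```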

  2. Statistical analysis of temperature data sampled at Station-M in the Norwegian Sea

    NASA Astrophysics Data System (ADS)

    Lorentzen, Torbjørn

    2014-02-01

    The paper analyzes sea temperature data sampled at Station-M in the Norwegian Sea. The data cover the period 1948-2010. The following questions are addressed: What type of stochastic process characterizes the temperature series? Are there any changes or patterns which indicate climate change? Are there any characteristics in the data which can be linked to the shrinking sea-ice in the Arctic area? Can the series be modeled consistently and applied in forecasting of the future sea temperature? The paper applies the following methods: augmented Dickey-Fuller tests for testing unit roots and stationarity, ARIMA models for univariate modeling, cointegration and error-correction models for estimating short- and long-term dynamics of non-stationary series, Granger-causality tests for analyzing the interaction pattern between the deep and upper layer temperatures, and simultaneous equation systems for forecasting future temperature. The paper shows that temperature at 2000 m Granger-causes temperature at 150 m, and that the 2000 m series can represent an important information carrier of the long-term development of the sea temperature in the geographical area. Descriptive statistics show that the temperature level has been on a positive trend since the beginning of the 1980s, which is also measured in most of the oceans in the North Atlantic. The analysis shows that the temperature series are cointegrated, which means they share the same long-term stochastic trend and do not diverge too far from each other. The measured long-term temperature increase is one of the factors that can explain the shrinking summer sea-ice in the Arctic region. The analysis shows that there is a significant negative correlation between the shrinking sea ice and the sea temperature at Station-M. The paper shows that the temperature forecasts are conditioned on the properties of the stochastic processes, the causality pattern between the variables and the specification of model
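
    The unit-root and Granger-causality steps can be reproduced on synthetic data with statsmodels; the snippet below uses two coupled AR(1) series as stand-ins for the 2000 m and 150 m temperature series, while the ARIMA, cointegration, and error-correction parts of the analysis are omitted.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

rng = np.random.default_rng(7)
n = 600   # synthetic monthly samples

# Synthetic "deep" series and an "upper" series partly driven by the lagged deep series.
deep = np.zeros(n)
upper = np.zeros(n)
for t in range(1, n):
    deep[t] = 0.8 * deep[t - 1] + rng.normal(0, 0.1)
    upper[t] = 0.5 * upper[t - 1] + 0.4 * deep[t - 1] + rng.normal(0, 0.2)

# Augmented Dickey-Fuller test: a small p-value rejects the unit-root null.
for name, series in [("deep", deep), ("upper", upper)]:
    stat, pval = adfuller(series)[:2]
    print(f"ADF {name}: statistic = {stat:.2f}, p = {pval:.3f}")

# Does the deep series Granger-cause the upper series? Column order is
# [effect, candidate cause] for grangercausalitytests.
res = grangercausalitytests(np.column_stack([upper, deep]), maxlag=2)
p_lag1 = res[1][0]["ssr_ftest"][1]
print(f"Granger test (lag 1) p-value: {p_lag1:.4f}")
```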

  3. Annealed Importance Sampling for Neural Mass Models

    PubMed Central

    Penny, Will; Sengupta, Biswa

    2016-01-01

    Neural Mass Models provide a compact description of the dynamical activity of cell populations in neocortical regions. Moreover, models of regional activity can be connected together into networks, and inferences made about the strength of connections, using M/EEG data and Bayesian inference. To date, however, Bayesian methods have been largely restricted to the Variational Laplace (VL) algorithm which assumes that the posterior distribution is Gaussian and finds model parameters that are only locally optimal. This paper explores the use of Annealed Importance Sampling (AIS) to address these restrictions. We implement AIS using proposals derived from Langevin Monte Carlo (LMC) which uses local gradient and curvature information for efficient exploration of parameter space. In terms of the estimation of Bayes factors, VL and AIS agree about which model is best but report different degrees of belief. Additionally, AIS finds better model parameters and we find evidence of non-Gaussianity in their posterior distribution. PMID:26942606

  4. Annealed Importance Sampling for Neural Mass Models.

    PubMed

    Penny, Will; Sengupta, Biswa

    2016-03-01

    Neural Mass Models provide a compact description of the dynamical activity of cell populations in neocortical regions. Moreover, models of regional activity can be connected together into networks, and inferences made about the strength of connections, using M/EEG data and Bayesian inference. To date, however, Bayesian methods have been largely restricted to the Variational Laplace (VL) algorithm which assumes that the posterior distribution is Gaussian and finds model parameters that are only locally optimal. This paper explores the use of Annealed Importance Sampling (AIS) to address these restrictions. We implement AIS using proposals derived from Langevin Monte Carlo (LMC) which uses local gradient and curvature information for efficient exploration of parameter space. In terms of the estimation of Bayes factors, VL and AIS agree about which model is best but report different degrees of belief. Additionally, AIS finds better model parameters and we find evidence of non-Gaussianity in their posterior distribution. PMID:26942606
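
    A scalar-parameter sketch of Annealed Importance Sampling for model evidence, using plain random-walk Metropolis transitions in place of the Langevin Monte Carlo proposals used in the paper. The conjugate Gaussian prior and likelihood make the true log-evidence available in closed form as a check; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Conjugate toy problem: theta ~ N(0, 1), y | theta ~ N(theta, 0.5^2), observed y = 1.2.
y, sigma_y = 1.2, 0.5

def log_prior(th):
    return -0.5 * th**2 - 0.5 * np.log(2 * np.pi)

def log_like(th):
    return -0.5 * (y - th)**2 / sigma_y**2 - 0.5 * np.log(2 * np.pi * sigma_y**2)

betas = np.linspace(0.0, 1.0, 51)      # annealing schedule from prior to posterior
n_runs, n_mcmc, step = 300, 5, 0.5

log_w = np.zeros(n_runs)
for r in range(n_runs):
    th = rng.normal()                  # draw from the prior (beta = 0)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        log_w[r] += (b - b_prev) * log_like(th)     # importance-weight increment
        # A few Metropolis steps targeting prior * likelihood^b (LMC in the paper).
        for _ in range(n_mcmc):
            prop = th + step * rng.normal()
            log_acc = (log_prior(prop) + b * log_like(prop)
                       - log_prior(th) - b * log_like(th))
            if np.log(rng.random()) < log_acc:
                th = prop

# The evidence estimate is the mean of the importance weights.
log_z_hat = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()

# Closed-form marginal likelihood: y ~ N(0, 1 + sigma_y^2).
log_z_true = -0.5 * y**2 / (1 + sigma_y**2) - 0.5 * np.log(2 * np.pi * (1 + sigma_y**2))
print(f"AIS log-evidence: {log_z_hat:.3f}   exact: {log_z_true:.3f}")
```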

  5. Latin hypercube sampling with the SESOIL model

    SciTech Connect

    Hetrick, D.M.; Luxmoore, R.J.; Tharp, M.L.

    1994-09-01

    The seasonal soil compartment model SESOIL, a one-dimensional vertical transport code for chemicals in the unsaturated soil zone, has been coupled with the Monte Carlo computer code PRISM, which utilizes a Latin hypercube sampling method. Frequency distributions are assigned to each of 64 soil, chemical, and climate input variables for the SESOIL model, and these distributions are randomly sampled to generate N (200, for example) input data sets. The SESOIL model is run by PRISM for each set of input values, and the combined set of model variables and predictions are evaluated statistically by PRISM to summarize the relative influence of input variables on model results. Output frequency distributions for selected SESOIL components are produced. As an initial analysis and to illustrate the PRISM/SESOIL approach, input data were compiled for the model for three sites at different regions of the country (Oak Ridge, Tenn.; Fresno, Calif.; Fargo, N.D.). The chemical chosen for the analysis was trichloroethylene (TCE), which was initially loaded in the soil column at a 60- to 90-cm depth. The soil type at each site was assumed to be identical to the cherty silt loam at Oak Ridge; the only difference in the three data sets was the climatic data. Output distributions for TCE mass flux volatilized, TCE mass flux to groundwater, and residual TCE concentration in the lowest soil layer are vastly different for the three sites.
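
    The sampling side of the PRISM/SESOIL coupling can be sketched with SciPy's Latin hypercube generator: draw N stratified parameter sets, run a model for each, and summarize how the inputs influence the output. The three-parameter toy "model" and the ranges below are invented stand-ins, not SESOIL or its actual inputs.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical ranges for three input variables (stand-ins for the 64 used).
names = ["hydraulic_conductivity", "organic_carbon_fraction", "annual_rainfall_m"]
lower = np.array([1e-6, 0.001, 0.3])
upper = np.array([1e-4, 0.050, 1.5])

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=200)                      # 200 stratified samples in [0,1]^3
params = qmc.scale(unit, lower, upper)

def toy_model(k, foc, rain):
    """Placeholder for a SESOIL run: fraction of TCE mass reaching groundwater."""
    return np.clip(rain * k * 1e4 / (1.0 + 50.0 * foc), 0.0, 1.0)

flux = toy_model(params[:, 0], params[:, 1], params[:, 2])

print("mean / 5th / 95th percentile of predicted flux fraction:",
      np.round([flux.mean(), *np.percentile(flux, [5, 95])], 3))
for i, name in enumerate(names):
    r = np.corrcoef(params[:, i], flux)[0, 1]
    print(f"correlation of output with {name}: {r:+.2f}")
```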

  6. EVALUATION OF STATIONARY SOURCE PARTICULATE MEASUREMENT METHODS. VOLUME III. GAS TEMPERATURE CONTROL DURING METHOD 5 SAMPLING

    EPA Science Inventory

    A study was conducted to measure changes in gas temperature along the length of a Method 5 sampling train due to variations in stack gas temperature, sampling rate, filter box temperature and method for controlling the probe heating element. For each run condition, temperatures w...

  7. Far infrared reflectance of sintered nickel manganite samples for negative temperature coefficient thermistors

    SciTech Connect

    Nikolic, M.V. . E-mail: maria@mi.sanu.ac.yu; Paraskevopoulos, K.M.; Aleksic, O.S.; Zorba, T.T.; Savic, S.M.; Lukovic, D.T.

    2007-08-07

    Single phase complex spinel (Mn, Ni, Co, Fe)₃O₄ samples were sintered at 1050, 1200 and 1300 °C for 30 min and at 1200 °C for 120 min. Morphological changes of the obtained samples with the sintering temperature and time were analyzed by X-ray diffraction and scanning electron microscopy (SEM). Room temperature far infrared reflectivity spectra for all samples were measured in the frequency range between 50 and 1200 cm⁻¹. The obtained spectra for all samples showed the presence of the same oscillators, but their intensities increased with the sintering temperature and time in correlation with the increase in sample density and microstructure changes during sintering. The measured spectra were numerically analyzed using the Kramers-Kronig method and the four-parameter model of coupled oscillators. Optical modes were calculated for six observed ionic oscillators belonging to the spinel structure of (Mn, Ni, Co, Fe)₃O₄, of which four were strong and two were weak.
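
    The four-parameter coupled-oscillator analysis mentioned above can be sketched as a factorised dielectric function with one (ω_TO, γ_TO, ω_LO, γ_LO) quadruplet per mode, from which the normal-incidence reflectivity follows. The mode parameters below are arbitrary placeholders for illustration, not the fitted values for (Mn, Ni, Co, Fe)₃O₄.

```python
# Four-parameter (factorised) oscillator model of far-infrared reflectivity.
import numpy as np

def dielectric(omega, eps_inf, modes):
    """modes: iterable of (omega_TO, gamma_TO, omega_LO, gamma_LO) in cm^-1."""
    eps = np.full_like(omega, eps_inf, dtype=complex)
    for w_to, g_to, w_lo, g_lo in modes:
        eps *= (w_lo**2 - omega**2 - 1j * g_lo * omega) / \
               (w_to**2 - omega**2 - 1j * g_to * omega)
    return eps

def reflectivity(omega, eps_inf, modes):
    n = np.sqrt(dielectric(omega, eps_inf, modes))      # complex refractive index
    return np.abs((n - 1) / (n + 1)) ** 2               # normal-incidence Fresnel formula

omega = np.linspace(50, 1200, 600)                      # cm^-1, as in the measured range
modes = [(170, 20, 185, 22), (350, 30, 390, 35),
         (560, 40, 620, 45), (640, 50, 700, 55)]        # placeholder mode quadruplets
R = reflectivity(omega, eps_inf=4.0, modes=modes)
print("peak reflectivity in the modelled range:", round(float(R.max()), 3))
```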

  8. Bayesian nonparametric models for ranked set sampling.

    PubMed

    Gemayel, Nader; Stasny, Elizabeth A; Wolfe, Douglas A

    2015-04-01

    Ranked set sampling (RSS) is a data collection technique that combines measurement with judgment ranking for statistical inference. This paper lays out a formal and natural Bayesian framework for RSS that is analogous to its frequentist justification, and that does not require the assumption of perfect ranking or use of any imperfect ranking models. Prior beliefs about the judgment order statistic distributions and their interdependence are embodied by a nonparametric prior distribution. Posterior inference is carried out by means of Markov chain Monte Carlo techniques, and yields estimators of the judgment order statistic distributions (and of functionals of those distributions). PMID:25326663

  9. Fast temperature spectrometer for samples under extreme conditions

    SciTech Connect

    Zhang, Dongzhou; Jackson, Jennifer M.; Sturhahn, Wolfgang; Zhao, Jiyong; Alp, E. Ercan; Toellner, Thomas S.; Hu, Michael Y.

    2015-01-15

    We have developed a multi-wavelength Fast Temperature Readout (FasTeR) spectrometer to capture a sample’s transient temperature fluctuations, and reduce uncertainties in melting temperature determination. Without sacrificing accuracy, FasTeR features a fast readout rate (about 100 Hz), high sensitivity, large dynamic range, and a well-constrained focus. Complementing a charge-coupled device spectrometer, FasTeR consists of an array of photomultiplier tubes and optical dichroic filters. The temperatures determined by FasTeR outside of the vicinity of melting are, generally, in good agreement with results from the charge-coupled device spectrometer. Near melting, FasTeR is capable of capturing transient temperature fluctuations, at least on the order of 300 K/s. A software tool, SIMFaster, is described and has been developed to simulate FasTeR and assess design configurations. FasTeR is especially suitable for temperature determinations that utilize ultra-fast techniques under extreme conditions. Working in parallel with the laser-heated diamond-anvil cell, synchrotron Mössbauer spectroscopy, and X-ray diffraction, we have applied the FasTeR spectrometer to measure the melting temperature of ⁵⁷Fe₀.₉Ni₀.₁ at high pressure.

  10. Finite-size Scaling Considerations on the Ground State Microcanonical Temperature in Entropic Sampling Simulations

    NASA Astrophysics Data System (ADS)

    Caparica, A. A.; DaSilva, Cláudio J.

    2015-12-01

    In this work, we discuss the behavior of the microcanonical temperature ∂S(E)/∂E obtained by means of numerical entropic sampling studies. It is observed that in almost all cases the slope of the logarithm of the density of states S(E) is not infinite in the ground state, although it should be, since it is directly related to the inverse temperature 1/T. Here, we show that these finite slopes are in fact due to finite-size effects and we propose an analytic expression a ln(bL) for the behavior of ΔS/ΔE when L → ∞. To test this idea, we use three distinct two-dimensional square lattice models presenting second-order phase transitions. We calculated by exact means the parameters a and b for the two-state Ising model and for the q = 3 and 4 state Potts models and compared with the results obtained by entropic sampling simulations. We found an excellent agreement between exact and numerical values. We argue that this new set of parameters a and b represents an interesting novel issue of investigation in entropic sampling studies for different models.
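
    The proposed a ln(bL) form can be fitted to ground-state slopes from simulations at several lattice sizes with a standard nonlinear least-squares call; a minimal sketch follows, using made-up slope values rather than the paper's data.

```python
# Fitting the finite-size form (Delta S / Delta E)_ground ~ a * ln(b * L).
import numpy as np
from scipy.optimize import curve_fit

L = np.array([16, 24, 32, 48, 64, 96, 128], dtype=float)
slope = np.array([2.91, 3.32, 3.61, 4.02, 4.31, 4.72, 5.01])   # placeholder slopes

def model(L, a, b):
    return a * np.log(b * L)

(a, b), cov = curve_fit(model, L, slope, p0=(1.0, 1.0))
print(f"a = {a:.3f} +/- {np.sqrt(cov[0, 0]):.3f}")
print(f"b = {b:.3f} +/- {np.sqrt(cov[1, 1]):.3f}")
```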

  11. Thermal modeling of core sampling in flammable gas waste tanks. Part 2: Rotary-mode sampling

    SciTech Connect

    Unal, C.; Poston, D.; Pasamehmetoglu, K.O.; Witwer, K.S.

    1997-08-01

    The radioactive waste stored in underground storage tanks at the Hanford site includes mixtures of sodium nitrate and sodium nitrite with organic compounds. The waste can produce undesired violent exothermic reactions when heated locally during rotary-mode sampling. Experiments were performed varying the downward force at a maximum rotational speed of 55 rpm and a minimum nitrogen purge flow of 30 scfm. The rotary drill bit teeth-face temperatures were measured. The waste was simulated with a hard material of low thermal conductivity, pumice blocks. A torque meter was used to determine the energy provided to the drill string. The exhaust air-chip temperature as well as the drill string and drill bit temperatures and other key operating parameters were recorded. A two-dimensional thermal model was developed. Safe operating conditions were determined for normal operations. A downward force of 750 at 55 rpm and 30 scfm nitrogen purge flow was found to yield acceptable substrate temperatures. The model predicted the experimental results reasonably well and could therefore be used to simulate abnormal conditions and to develop procedures for safe operations.

  12. Determining Curie temperature of (Ga,Mn)As samples based on electrical transport measurements: Low Curie temperature case

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Adam; Gryglas-Borysiewicz, Marta; Juszyński, Piotr; Przybytek, Jacek; Sawicki, Maciej; Sadowski, Janusz; Wasik, Dariusz; Baj, Michał

    2016-06-01

    In this paper, we show that the widely accepted method for determining the Curie temperature (TC) of (Ga,Mn)As samples, based on the position of the peak in the temperature derivative of the resistivity, completely fails in the case of non-metallic, low-TC unannealed samples. For such samples, we propose an alternative method, also based on electrical transport measurements, which exploits the temperature dependence of the second derivative of the resistivity with respect to magnetic field.

  13. Modeling maximum daily temperature using a varying coefficient regression model

    NASA Astrophysics Data System (ADS)

    Li, Han; Deng, Xinwei; Kim, Dong-Yun; Smith, Eric P.

    2014-04-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature. A good predictive model for daily maximum temperature is required because daily maximum temperature is an important measure for predicting survival of temperature sensitive fish. To appropriately model the strong relationship between water and air temperatures at a daily time step, it is important to incorporate information related to the time of the year into the modeling. In this work, a time-varying coefficient model is used to study the relationship between air temperature and water temperature. The time-varying coefficient model enables dynamic modeling of the relationship, and can be used to understand how the air-water temperature relationship varies over time. The proposed model is applied to 10 streams in Maryland, West Virginia, Virginia, North Carolina, and Georgia using daily maximum temperatures. It provides a better fit and better predictions than those produced by a simple linear regression model or a nonlinear logistic model.
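
    One simple way to realise a varying-coefficient regression of this kind is to expand the intercept and the air-temperature slope in a low-order harmonic basis over day of year, which keeps the fit linear in the unknowns. The sketch below uses synthetic data and an assumed annual-cycle basis; it is not the authors' exact formulation.

```python
# Time-varying coefficient regression: water_t = beta0(doy) + beta1(doy) * air_t.
import numpy as np

rng = np.random.default_rng(2)
doy = np.arange(1, 366)                                          # day of year
air = 15 + 12 * np.sin(2 * np.pi * (doy - 100) / 365) + rng.normal(0, 3, doy.size)
beta1_true = 0.5 + 0.2 * np.sin(2 * np.pi * (doy - 100) / 365)   # synthetic seasonal slope
water = 5 + beta1_true * air + rng.normal(0, 1, doy.size)

# Harmonic basis for the coefficients: [1, sin, cos] of the annual cycle.
phase = 2 * np.pi * doy / 365
B = np.column_stack([np.ones_like(phase), np.sin(phase), np.cos(phase)])

# Each coefficient beta_j(t) = B(t) @ c_j, so the design matrix stacks the basis
# and its element-wise product with the air temperature.
X = np.hstack([B, B * air[:, None]])
coef, *_ = np.linalg.lstsq(X, water, rcond=None)

beta1_hat = B @ coef[3:]
print("estimated slope over the year: min %.2f, max %.2f"
      % (beta1_hat.min(), beta1_hat.max()))
```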

  14. Estimation of Surface Heat Flux and Surface Temperature during Inverse Heat Conduction under Varying Spray Parameters and Sample Initial Temperature

    PubMed Central

    Aamir, Muhammad; Liao, Qiang; Zhu, Xun; Aqeel-ur-Rehman; Wang, Hong

    2014-01-01

    An experimental study was carried out to investigate the effects of inlet pressure, sample thickness, initial sample temperature, and temperature sensor location on the surface heat flux, surface temperature, and surface ultrafast cooling rate using stainless steel samples 27 mm in diameter with thicknesses of 8.5, 13, 17.5, and 22 mm. Inlet pressure was varied from 0.2 MPa to 1.8 MPa, while sample initial temperature varied from 600°C to 900°C. Beck's sequential function specification method was utilized to estimate surface heat flux and surface temperature. Inlet pressure has a positive effect on surface heat flux (SHF) up to a critical value of pressure. Sample thickness negatively affects the maximum achieved SHF. A surface heat flux as high as 0.4024 MW/m² was estimated for a thickness of 8.5 mm. Insulation effects of the vapor film become apparent at sample initial temperatures around 900°C, causing a reduction in the surface heat flux and cooling rate of the sample. A sensor location near the quenched surface is found to be a better choice for visualizing the effects of spray parameters on surface heat flux and surface temperature. The cooling rate showed a profound increase for an inlet pressure of 0.8 MPa. PMID:24977219

  15. Estimation of surface heat flux and surface temperature during inverse heat conduction under varying spray parameters and sample initial temperature.

    PubMed

    Aamir, Muhammad; Liao, Qiang; Zhu, Xun; Aqeel-ur-Rehman; Wang, Hong; Zubair, Muhammad

    2014-01-01

    An experimental study was carried out to investigate the effects of inlet pressure, sample thickness, initial sample temperature, and temperature sensor location on the surface heat flux, surface temperature, and surface ultrafast cooling rate using stainless steel samples 27 mm in diameter with thicknesses of 8.5, 13, 17.5, and 22 mm. Inlet pressure was varied from 0.2 MPa to 1.8 MPa, while sample initial temperature varied from 600°C to 900°C. Beck's sequential function specification method was utilized to estimate surface heat flux and surface temperature. Inlet pressure has a positive effect on surface heat flux (SHF) up to a critical value of pressure. Sample thickness negatively affects the maximum achieved SHF. A surface heat flux as high as 0.4024 MW/m² was estimated for a thickness of 8.5 mm. Insulation effects of the vapor film become apparent at sample initial temperatures around 900°C, causing a reduction in the surface heat flux and cooling rate of the sample. A sensor location near the quenched surface is found to be a better choice for visualizing the effects of spray parameters on surface heat flux and surface temperature. The cooling rate showed a profound increase for an inlet pressure of 0.8 MPa. PMID:24977219

  16. LAKE WATER TEMPERATURE SIMULATION MODEL

    EPA Science Inventory

    Functional relationships to describe surface wind mixing, vertical turbulent diffusion, convective heat transfer, and radiation penetration based on data from lakes in Minnesota have been developed. These relationships have been introduced by regressing model parameters found eith...

  17. Modeling monthly mean air temperature for Brazil

    NASA Astrophysics Data System (ADS)

    Alvares, Clayton Alcarde; Stape, José Luiz; Sentelhas, Paulo Cesar; de Moraes Gonçalves, José Leonardo

    2013-08-01

    Air temperature is one of the main weather variables influencing agriculture around the world. Its availability, however, is a concern, mainly in Brazil where the weather stations are more concentrated on the coastal regions of the country. The present study therefore aimed to develop models for estimating monthly and annual mean air temperature for the Brazilian territory using multiple regression and geographic information system techniques. Temperature data from 2,400 stations distributed across the Brazilian territory were used, 1,800 to develop the equations and 600 for validating them, with their geographical coordinates and altitude used as independent variables for the models. A total of 39 models were developed, relating the dependent variables maximum, mean, and minimum air temperatures (monthly and annual) to the independent variables latitude, longitude, altitude, and their combinations. All regression models were statistically significant (α ≤ 0.01). The monthly and annual temperature models presented determination coefficients between 0.54 and 0.96. We obtained an overall spatial correlation higher than 0.9 between the models proposed and the 16 major models already published for some Brazilian regions, considering a total of 3.67 × 10⁸ pixels evaluated. Our national temperature models are recommended for predicting air temperature across all Brazilian territories.
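
    The core of such a model is an ordinary multiple regression of station temperatures on latitude, longitude, altitude, and their combinations. The sketch below uses synthetic station data and a small set of candidate terms purely for illustration; the 39 published equations are fitted to the real station network.

```python
# Multiple regression of monthly mean temperature on geographic predictors.
import numpy as np

rng = np.random.default_rng(3)
n = 300
lat = rng.uniform(-33.0, 5.0, n)      # deg, roughly the latitude span of Brazil
lon = rng.uniform(-74.0, -35.0, n)    # deg
alt = rng.uniform(0.0, 1500.0, n)     # m
t_obs = 27 + 0.45 * lat - 0.0055 * alt + rng.normal(0, 0.8, n)   # synthetic station means

X = np.column_stack([np.ones(n), lat, lon, alt, lat * lon, lat * alt, lon * alt])
coef, *_ = np.linalg.lstsq(X, t_obs, rcond=None)

t_hat = X @ coef
r2 = 1 - np.sum((t_obs - t_hat) ** 2) / np.sum((t_obs - t_obs.mean()) ** 2)
print("coefficient of determination R^2 =", round(r2, 3))
```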

  18. Effects of High-frequency Wind Sampling on Simulated Mixed Layer Depth and Upper Ocean Temperature

    NASA Technical Reports Server (NTRS)

    Lee, Tong; Liu, W. Timothy

    2005-01-01

    Effects of high-frequency wind sampling on a near-global ocean model are studied by forcing the model with a 12 hourly averaged wind product and its 24 hourly subsamples in separate experiments. The differences in mixed layer depth and sea surface temperature resulting from these experiments are examined, and the underlying physical processes are investigated. The 24 hourly subsampling not only reduces the high-frequency variability of the wind but also affects the annual mean wind because of aliasing. While the former effect largely impacts mid- to high-latitude oceans, the latter primarily affects tropical and coastal oceans. At mid- to high-latitude regions the subsampled wind results in a shallower mixed layer and higher sea surface temperature because of reduced vertical mixing associated with weaker high-frequency wind. In tropical and coastal regions, however, the change in upper ocean structure due to the wind subsampling is primarily caused by the difference in advection resulting from aliased annual mean wind, which varies with the subsampling time. The results of the study indicate a need for more frequent sampling of satellite wind measurement and have implications for data assimilation in terms of identifying the nature of model errors.

  19. On high-resolution sampling of short ice cores: Dating and temperature information recovery from Antarctic Peninsula virtual cores

    NASA Astrophysics Data System (ADS)

    Sime, Louise C.; Lang, Nicola; Thomas, Elizabeth R.; Benton, Ailsa K.; Mulvaney, Robert

    2011-10-01

    Recent developments in ice melter systems and continuous flow analysis (CFA) techniques now allow higher-resolution ice core analysis. Here, we present a new method to aid interpretation of high-resolution ice core stable water isotope records. Using a set of simple isotopic recording and postdepositional assumptions, the European Centre for Medium-Range Weather Forecasts' 40 year reanalysis time series of temperature and precipitation are converted to "virtual core" depth series across the Antarctic Peninsula, helping us to understand what information can be gleaned from the CFA high-resolution observations. Virtual core temperatures are transferred onto time using three different depth-age transfer assumptions: (1) a perfect depth-age model, (2) a depth-age model constructed from single or dual annual photochemical tie points, and (3) a cross-dated depth-age model. Comparing the sampled temperatures on the various depth-age models with the original time series allows quantification of the effect of ice core sample resolution and dating. We show that accurate annual layer count depth-age models should allow some subseasonal temperature anomalies to be recovered using a sample resolution of around 40 mm, or 10-20 samples per year. Seasonal temperature anomalies may be recovered using sample lengths closer to 60 mm, or about 7-14 samples per year. These results tend to confirm the value of current CFA ice core sampling strategies and indicate that it should be possible to recover about a third of subannual (but not synoptic) temperature anomaly information from annually "layer-counted" peninsula ice cores.

  20. Modeling daily average stream temperature from air temperature and watershed area

    NASA Astrophysics Data System (ADS)

    Butler, N. L.; Hunt, J. R.

    2012-12-01

    Habitat restoration efforts within watersheds require spatial and temporal estimates of water temperature for aquatic species especially species that migrate within watersheds at different life stages. Monitoring programs are not able to fully sample all aquatic environments within watersheds under the extreme conditions that determine long-term habitat viability. Under these circumstances a combination of selective monitoring and modeling are required for predicting future geospatial and temporal conditions. This study describes a model that is broadly applicable to different watersheds while using readily available regional air temperature data. Daily water temperature data from thirty-eight gauges with drainage areas from 2 km2 to 2000 km2 in the Sonoma Valley, Napa Valley, and Russian River Valley in California were used to develop, calibrate, and test a stream temperature model. Air temperature data from seven NOAA gauges provided the daily maximum and minimum air temperatures. The model was developed and calibrated using five years of data from the Sonoma Valley at ten water temperature gauges and a NOAA air temperature gauge. The daily average stream temperatures within this watershed were bounded by the preceding maximum and minimum air temperatures with smaller upstream watersheds being more dependent on the minimum air temperature than maximum air temperature. The model assumed a linear dependence on maximum and minimum air temperature with a weighting factor dependent on upstream area determined by error minimization using observed data. Fitted minimum air temperature weighting factors were consistent over all five years of data for each gauge, and they ranged from 0.75 for upstream drainage areas less than 2 km2 to 0.45 for upstream drainage areas greater than 100 km2. For the calibration data sets within the Sonoma Valley, the average error between the model estimated daily water temperature and the observed water temperature data ranged from 0.7
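
    The weighting scheme described above reduces to a one-parameter fit per gauge: the daily average water temperature is modelled as a weighted combination of the preceding minimum and maximum air temperatures, with the weight found by error minimization. A sketch with synthetic data for a single gauge follows; the value 0.6 is only a stand-in for a fitted weight.

```python
# Fit the T_min weight w in  T_water = w * T_min_air + (1 - w) * T_max_air.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
t_min = 10 + 5 * rng.standard_normal(365)
t_max = t_min + np.abs(6 + 2 * rng.standard_normal(365))
w_true = 0.6                                            # e.g. a small upstream catchment
t_water_obs = w_true * t_min + (1 - w_true) * t_max + rng.normal(0, 0.5, 365)

def rmse(w):
    pred = w * t_min + (1 - w) * t_max
    return np.sqrt(np.mean((pred - t_water_obs) ** 2))

res = minimize_scalar(rmse, bounds=(0.0, 1.0), method="bounded")
print("fitted T_min weight:", round(res.x, 2), "  RMSE (deg C):", round(res.fun, 2))
```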

  1. Effect of the Target Motion Sampling Temperature Treatment Method on the Statistics and Performance

    NASA Astrophysics Data System (ADS)

    Viitanen, Tuomas; Leppänen, Jaakko

    2014-06-01

    Target Motion Sampling (TMS) is a stochastic on-the-fly temperature treatment technique that is being developed as a part of the Monte Carlo reactor physics code Serpent. The method provides for modeling of arbitrary temperatures in continuous-energy Monte Carlo tracking routines with only one set of cross sections stored in the computer memory. Previously, only the performance of the TMS method in terms of CPU time per transported neutron has been discussed. Since the effective cross sections are not calculated at any point of a transport simulation with TMS, reaction rate estimators must be scored using sampled cross sections, which is expected to increase the variances and, consequently, to decrease the figures-of-merit. This paper examines the effects of TMS on the statistics and performance in practical calculations involving reaction rate estimation with collision estimators. Against all expectations, it turned out that the use of sampled response values has no practical effect on the performance of reaction rate estimators when using TMS with elevated basis cross section temperatures (EBT), i.e. the usual way. With 0 Kelvin cross sections a significant increase in the variances of capture rate estimators was observed right below the energy region of unresolved resonances, but at these energies the figures-of-merit could be increased using a simple resampling technique to decrease the variances of the responses. It was, however, noticed that the use of the TMS method increases the statistical deviations of all estimators, including the flux estimator, by tens of percent in the vicinity of very strong resonances. This effect is not related to the use of sampled responses, but is instead an inherent property of the TMS tracking method and concerns both EBT and 0 K calculations.

  2. The X-ray luminosity temperature relation of a complete sample of low mass galaxy clusters

    NASA Astrophysics Data System (ADS)

    Zou, S.; Maughan, B. J.; Giles, P. A.; Vikhlinin, A.; Pacaud, F.; Burenin, R.; Hornstrup, A.

    2016-08-01

    We present Chandra observations of 23 galaxy groups and low-mass galaxy clusters at 0.03 < z < 0.15 with a median temperature of ~2 keV. The sample is a statistically complete flux-limited subset of the 400 deg² survey. We investigated the scaling relation between X-ray luminosity (L) and temperature (T), taking selection biases fully into account. The logarithmic slope of the bolometric L - T relation was found to be 3.29 ± 0.33, consistent with values typically found for samples of more massive clusters. In combination with other recent studies of the L - T relation we show that there is no evidence for the slope, normalisation, or scatter of the L - T relation of galaxy groups being different from that of massive clusters. The exception to this is that in the special case of the most relaxed systems, the slope of the core-excised L - T relation appears to steepen from the self-similar value found for massive clusters to a steeper slope for the lower mass sample studied here. Thanks to our rigorous treatment of selection biases, these measurements provide a robust reference against which to compare predictions of models of the impact of feedback on the X-ray properties of galaxy groups.
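
    A stripped-down version of such a scaling-relation fit is a straight line in log-log space, L ∝ T^b. The sketch below ignores measurement errors and, crucially, the selection biases that the study corrects for, and runs on synthetic data; it only illustrates the basic regression.

```python
# Power-law fit of an X-ray luminosity-temperature relation in log space.
import numpy as np

rng = np.random.default_rng(5)
T = rng.uniform(1.0, 5.0, 23)                                 # keV, toy group sample
b_true, log_norm_true = 3.0, 43.5
logL = log_norm_true + b_true * np.log10(T) + rng.normal(0, 0.25, T.size)

A = np.column_stack([np.ones_like(T), np.log10(T)])
(log_norm, b), *_ = np.linalg.lstsq(A, logL, rcond=None)
print(f"slope b = {b:.2f}, log10 L at 1 keV = {log_norm:.2f}")
```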

  3. Method and apparatus for transport, introduction, atomization and excitation of emission spectrum for quantitative analysis of high temperature gas sample streams containing vapor and particulates without degradation of sample stream temperature

    DOEpatents

    Eckels, David E.; Hass, William J.

    1989-05-30

    A sample transport, sample introduction, and flame excitation system for spectrometric analysis of high temperature gas streams which eliminates degradation of the sample stream by condensation losses.

  4. 40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (40 CFR part 50, appendix L, figure L-30) or equivalent adaptor to facilitate measurement of sampler... recommended. (6) Sample filter or filters, as specified in section 6 of 40 CFR part 50, appendix L. (d... 40 Protection of Environment 6 2014-07-01 2014-07-01 false Test for filter temperature...

  5. 40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (40 CFR part 50, appendix L, figure L-30) or equivalent adaptor to facilitate measurement of sampler... recommended. (6) Sample filter or filters, as specified in section 6 of 40 CFR part 50, appendix L. (d... 40 Protection of Environment 6 2013-07-01 2013-07-01 false Test for filter temperature...

  6. AN EVALUATION OF PERSONAL SAMPLING PUMPS IN SUB-ZERO TEMPERATURES

    EPA Science Inventory

    Personal sampling pumps suitable for industrial hygiene surveys were evaluated to discover their characteristics as a function of temperature for temperatures between 25 and -50°C. The pumps evaluated were significantly influenced by low temperatures. In general, most provided a s...

  7. Modulated photothermal radiometry applied to semitransparent samples: Models and experiments

    NASA Astrophysics Data System (ADS)

    André, S.; Rémy, B.; Maillet, D.; Degiovanni, A.; Serra, J.-J.

    2004-09-01

    Mathematical modeling is presented of the combined conductive and radiative heat transfer occurring in a semitransparent material (STM) subjected to a periodic heat flux. The models rely on the quadrupole method, which is a very powerful tool to obtain analytical solutions in the Fourier or Laplace domain. Photoacoustic or photothermal radiometry techniques are reviewed. Two groups of methods are discussed depending on whether the sample has natural or opaque interfaces to simulate radiative exchanges with the surroundings. The metrological problem of measuring the phonic thermal diffusivity of semitransparent materials is investigated. Theoretical simulations are given. They explain some typical features of the phase-lag signal of temperature responses. Experimental measurements on pure silica validate the results and prove that these methods are efficient for the thermal characterization of STM.

  8. SAMPLING SYSTEM EVALUATION FOR HIGH-TEMPERATURE, HIGH-PRESSURE PROCESSES

    EPA Science Inventory

    The report describes a sampling system designed for the high temperatures and high pressures found in pressurized fluidized-bed combustors (PFBC). The system uses an extractive sampling approach, withdrawing samples from the process stream for complete analysis of particulate siz...

  9. Measurement of temperature and temperature gradient in millimeter samples by chlorine NQR

    NASA Astrophysics Data System (ADS)

    Lužnik, Janko; Pirnat, Janez; Trontelj, Zvonko

    2009-09-01

    A mini-thermometer based on the ³⁵Cl nuclear quadrupole resonance (NQR) frequency temperature dependence in the chlorates KClO₃ and NaClO₃ was built and successfully tested by measuring temperature and temperature gradient at 77 K and higher in about 100 mm³ active volume of a mini Joule-Thomson refrigerator. In the design of the tank-circuit coil, an array of small coils connected in series enabled us (a) to achieve a suitable ratio of inductance to capacity in the NQR spectrometer input tank circuit, (b) to use a single crystal of KClO₃ or NaClO₃ (of 1-2 mm³ size) in one coil as a mini-thermometer with a resolution of 0.03 K and (c) to construct a system for measuring temperature gradients when the spatial coordinates of each chlorate single crystal within an individual coil are known.

  10. Factors affecting quality of temperature models for the pre-appearance interval of forensically useful insects.

    PubMed

    Matuszewski, Szymon; Mądra, Anna

    2015-02-01

    In the case of many forensically important insects an interval preceding appearance of an insect stage on a corpse (called the pre-appearance interval or PAI) is strongly temperature-dependent. Accordingly, it was proposed to estimate PAI from temperature by using temperature models for PAI of particular insect species and temperature data specific for a given case. The quality of temperature models for PAI depends on the protocols for PAI field studies. In this article we analyze effects of sampling frequency and techniques, temperature data, as well as the size of a sample on the quality of PAI models. Models were created by using data from a largely replicated PAI field study, and their performance in estimation was tested with external body of PAI data. It was found that low frequency of insect sampling distinctly deteriorated temperature models for PAI. The effect of sampling techniques was clearly smaller. Temperature data from local weather station gave models of poor quality, however their retrospective correction clearly improved the models. Most importantly, current results demonstrate that sample size in PAI field studies may be substantially reduced, with no model deterioration. Samples consisting of 11-14 carcasses gave models of high quality, as long as the whole range of relevant temperatures was studied. Moreover, it was found that carcasses exposed in forests and carcasses exposed in early spring are particularly important, as they ensure that PAI data is collected at low temperatures. A preliminary best practice model for PAI field studies is given. PMID:25541074

  11. Temperature dependence of standard model CP violation.

    PubMed

    Brauner, Tomáš; Taanila, Olli; Tranberg, Anders; Vuorinen, Aleksi

    2012-01-27

    We analyze the temperature dependence of CP violation effects in the standard model by determining the effective action of its bosonic fields, obtained after integrating out the fermions from the theory and performing a covariant gradient expansion. We find nonvanishing CP violating terms starting at the sixth order of the expansion, albeit only in the C-odd-P-even sector, with coefficients that depend on quark masses, Cabibbo-Kobayashi-Maskawa matrix elements, temperature and the magnitude of the Higgs field. The CP violating effects are observed to decrease rapidly with temperature, which has important implications for the generation of a matter-antimatter asymmetry in the early Universe. Our results suggest that the cold electroweak baryogenesis scenario may be viable within the standard model, provided the electroweak transition temperature is at most of order 1 GeV. PMID:22400822

  12. Model for superconductivity at any temperature

    NASA Astrophysics Data System (ADS)

    Anber, Mohamed M.; Burnier, Yannis; Sabancilar, Eray; Shaposhnikov, Mikhail

    2016-01-01

    We construct a 2+1 dimensional model that sustains superconductivity at all temperatures. This is achieved by introducing a Chern-Simons mixing term between two Abelian gauge fields A and Z. The superfluid is described by a complex scalar charged under Z, whereas a sufficiently strong magnetic field of A forces the superconducting condensate to form at all temperatures. In fact, at finite temperature, the theory exhibits a Berezinsky-Kosterlitz-Thouless phase transition due to proliferation of topological vortices admitted by our construction. However, the critical temperature is proportional to the magnetic field of A, and thus the phase transition can be postponed to high temperatures by increasing the strength of the magnetic field.

  13. Pre-analytical sample quality: metabolite ratios as an intrinsic marker for prolonged room temperature exposure of serum samples.

    PubMed

    Anton, Gabriele; Wilson, Rory; Yu, Zhong-Hao; Prehn, Cornelia; Zukunft, Sven; Adamski, Jerzy; Heier, Margit; Meisinger, Christa; Römisch-Margl, Werner; Wang-Sattler, Rui; Hveem, Kristian; Wolfenbuttel, Bruce; Peters, Annette; Kastenmüller, Gabi; Waldenberger, Melanie

    2015-01-01

    Advances in the "omics" field bring about the need for a high number of good quality samples. Many omics studies take advantage of biobanked samples to meet this need. Most of the laboratory errors occur in the pre-analytical phase. Therefore evidence-based standard operating procedures for the pre-analytical phase as well as markers to distinguish between 'good' and 'bad' quality samples taking into account the desired downstream analysis are urgently needed. We studied concentration changes of metabolites in serum samples due to pre-storage handling conditions as well as due to repeated freeze-thaw cycles. We collected fasting serum samples and subjected aliquots to up to four freeze-thaw cycles and to pre-storage handling delays of 12, 24 and 36 hours at room temperature (RT) and on wet and dry ice. For each treated aliquot, we quantified 127 metabolites through a targeted metabolomics approach. We found a clear signature of degradation in samples kept at RT. Storage on wet ice led to less pronounced concentration changes. 24 metabolites showed significant concentration changes at RT. In 22 of these, changes were already visible after only 12 hours of storage delay. Especially pronounced were increases in lysophosphatidylcholines and decreases in phosphatidylcholines. We showed that the ratio between the concentrations of these molecule classes could serve as a measure to distinguish between 'good' and 'bad' quality samples in our study. In contrast, we found quite stable metabolite concentrations during up to four freeze-thaw cycles. We concluded that pre-analytical RT handling of serum samples should be strictly avoided and serum samples should always be handled on wet ice or in cooling devices after centrifugation. Moreover, serum samples should be frozen at or below -80°C as soon as possible after centrifugation. PMID:25823017

  14. Pre-Analytical Sample Quality: Metabolite Ratios as an Intrinsic Marker for Prolonged Room Temperature Exposure of Serum Samples

    PubMed Central

    Anton, Gabriele; Wilson, Rory; Yu, Zhong-hao; Prehn, Cornelia; Zukunft, Sven; Adamski, Jerzy; Heier, Margit; Meisinger, Christa; Römisch-Margl, Werner; Wang-Sattler, Rui; Hveem, Kristian; Wolfenbuttel, Bruce; Peters, Annette; Kastenmüller, Gabi; Waldenberger, Melanie

    2015-01-01

    Advances in the “omics” field bring about the need for a high number of good quality samples. Many omics studies take advantage of biobanked samples to meet this need. Most of the laboratory errors occur in the pre-analytical phase. Therefore evidence-based standard operating procedures for the pre-analytical phase as well as markers to distinguish between ‘good’ and ‘bad’ quality samples taking into account the desired downstream analysis are urgently needed. We studied concentration changes of metabolites in serum samples due to pre-storage handling conditions as well as due to repeated freeze-thaw cycles. We collected fasting serum samples and subjected aliquots to up to four freeze-thaw cycles and to pre-storage handling delays of 12, 24 and 36 hours at room temperature (RT) and on wet and dry ice. For each treated aliquot, we quantified 127 metabolites through a targeted metabolomics approach. We found a clear signature of degradation in samples kept at RT. Storage on wet ice led to less pronounced concentration changes. 24 metabolites showed significant concentration changes at RT. In 22 of these, changes were already visible after only 12 hours of storage delay. Especially pronounced were increases in lysophosphatidylcholines and decreases in phosphatidylcholines. We showed that the ratio between the concentrations of these molecule classes could serve as a measure to distinguish between ‘good’ and ‘bad’ quality samples in our study. In contrast, we found quite stable metabolite concentrations during up to four freeze-thaw cycles. We concluded that pre-analytical RT handling of serum samples should be strictly avoided and serum samples should always be handled on wet ice or in cooling devices after centrifugation. Moreover, serum samples should be frozen at or below -80°C as soon as possible after centrifugation. PMID:25823017

  15. Temperature dependence on the pesticide sampling rate of polar organic chemical integrative samplers (POCIS).

    PubMed

    Yabuki, Yoshinori; Nagai, Takashi; Inao, Keiya; Ono, Junko; Aiko, Nobuyuki; Ohtsuka, Nobutoshi; Tanaka, Hitoshi; Tanimori, Shinji

    2016-10-01

    Laboratory experiments were performed to determine the sampling rates of pesticides for the polar organic chemical integrative samplers (POCIS) used in Japan. The concentrations of pesticides in aquatic environments were estimated from the amounts of pesticide accumulated on the POCIS, and the effect of water temperature on the pesticide sampling rates was evaluated. The sampling rates of 48 pesticides at 18, 24, and 30 °C were obtained, and the study confirmed an increasing trend in sampling rates with increasing water temperature for many pesticides. PMID:27305429
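
    The calculation implied above is the usual passive-sampler relation: the time-weighted average water concentration follows from the mass accumulated on the sorbent, the deployment time, and a sampling rate appropriate to the water temperature. The rates and numbers in this sketch are illustrative, not the values measured in the study.

```python
# Time-weighted average concentration from a POCIS deployment: C_w = M / (R_s * t).
import numpy as np

temps = np.array([18.0, 24.0, 30.0])        # deg C, temperatures at which R_s was measured
rates = np.array([0.12, 0.16, 0.21])        # L/day, assumed sampling rates for one pesticide

def concentration_ng_per_l(mass_ng, days, water_temp_c):
    r_s = np.interp(water_temp_c, temps, rates)   # interpolate R_s to the field temperature
    return mass_ng / (r_s * days)

print("estimated C_w:",
      round(concentration_ng_per_l(mass_ng=45.0, days=14, water_temp_c=21.0), 2), "ng/L")
```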

  16. Global modeling of fresh surface water temperature

    NASA Astrophysics Data System (ADS)

    Bierkens, M. F.; Eikelboom, T.; van Vliet, M. T.; Van Beek, L. P.

    2011-12-01

    Temperature determines a range of water physical properties, the solubility of oxygen and other gases and acts as a strong control on fresh water biogeochemistry, influencing chemical reaction rates, phytoplankton and zooplankton composition and the presence or absence of pathogens. Thus, in freshwater ecosystems the thermal regime affects the geographical distribution of aquatic species through their growth and metabolism, tolerance to parasites, diseases and pollution and life history. Compared to statistical approaches, physically-based models of surface water temperature have the advantage that they are robust in light of changes in flow regime, river morphology, radiation balance and upstream hydrology. Such models are therefore better suited for projecting the effects of global change on water temperature. Till now, physically-based models have only been applied to well-defined fresh water bodies of limited size (e.g., lakes or stream segments), where the numerous parameters can be measured or otherwise established, whereas attempts to model water temperature over larger scales has thus far been limited to regression type of models. Here, we present a first attempt to apply a physically-based model of global fresh surface water temperature. The model adds a surface water energy balance to river discharge modelled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by short and long-wave radiation and sensible and latent heat fluxes. Also included are ice-formation and its effect on heat storage and river hydraulics. We used the coupled surface water and energy balance model to simulate global fresh surface water temperature at daily time steps on a 0.5x0.5 degree grid for the period 1970-2000. Meteorological forcing was obtained from the CRU data set, downscaled to daily values with ECMWF

  17. High temperature furnace modeling and performance verifications

    NASA Technical Reports Server (NTRS)

    Smith, James E., Jr.

    1991-01-01

    A two dimensional conduction/radiation problem for an alumina crucible in a zirconia heater/muffle tube enclosing a liquid iron sample was solved numerically. Variations in the crucible wall thickness were numerically examined. The results showed that the temperature profiles within the liquid iron sample were significantly affected by the crucible wall thicknesses. New zirconia heating elements are under development that will permit continued experimental investigations of the zirconia furnace. These elements have been designed to work with the existing furnace and have been shown to have longer lifetimes than commercially available zirconia heating elements. The first element has been constructed and tested successfully.

  18. Automated sample plan selection for OPC modeling

    NASA Astrophysics Data System (ADS)

    Casati, Nathalie; Gabrani, Maria; Viswanathan, Ramya; Bayraktar, Zikri; Jaiswal, Om; DeMaris, David; Abdo, Amr Y.; Oberschmidt, James; Krause, Andreas

    2014-03-01

    It is desired to reduce the time required to produce metrology data for calibration of Optical Proximity Correction (OPC) models and also to maintain or improve the quality of the data collected with regard to how well that data represents the types of patterns that occur in real circuit designs. Previous work based on clustering in geometry and/or image parameter space has shown some benefit over strictly manual or intuitive selection, but leads to arbitrary pattern exclusion or selection which may not be the best representation of the product. Framing the pattern selection as an optimization problem, which co-optimizes a number of objective functions reflecting modelers' insight and expertise, has been shown to produce models of quality equivalent to the traditional plan of record (POR) set, but in less time.

  19. Determination of Thermal-Diffusivity Dependence on Temperature of Transparent Samples by Thermal Wave Method

    NASA Astrophysics Data System (ADS)

    Kaźmierczak-Bałata, Anna; Bodzenta, Jerzy; Trefon-Radziejewska, Dominika

    2010-01-01

    The use of a typical measuring cryostat with a standard temperature controller was proposed for investigation of the temperature dependence of the thermal diffusivity of transparent samples. The basic idea is to use the cryostat heater to control the mean sample temperature and to generate the thermal wave in it, simultaneously. Because of the relatively high thermal inertia of the system, the measurements are carried out at frequencies not exceeding 50 mHz. The periodic temperature disturbance in the sample was detected optically by the use of the mirage effect. The proposed method was used for determination of the thermal diffusivity of yttrium aluminum garnet single crystals in a temperature range from 20 °C to 200 °C.

  20. Sample Size Determination for Rasch Model Tests

    ERIC Educational Resources Information Center

    Draxler, Clemens

    2010-01-01

    This paper is concerned with supplementing statistical tests for the Rasch model so that, in addition to the probability of the error of the first kind (Type I probability), the probability of the error of the second kind (Type II probability) can be controlled at a predetermined level by basing the test on the appropriate number of observations.…

  1. Modeling complexometric titrations of natural water samples.

    PubMed

    Hudson, Robert J M; Rue, Eden L; Bruland, Kenneth W

    2003-04-15

    Complexometric titrations are the primary source of metal speciation data for aquatic systems, yet their interpretation in waters containing humic and fulvic acids remains problematic. In particular, the accuracy of inferred ambient free metal ion concentrations and parameters quantifying metal complexation by natural ligands has been challenged because of the difficulties inherent in calibrating common analytical methods and in modeling the diverse array of ligands present. This work tests and applies a new method of modeling titration data that combines calibration of analytical sensitivity (S) and estimation of concentrations and stability constants for discrete natural ligand classes ([Li]T and Ki) into a single step using nonlinear regression and a new analytical solution to the one-metal/two-ligand equilibrium problem. When applied to jointly model data from multiple titrations conducted at different analytical windows, it yields accurate estimates of S, [Li]T, Ki, and [Cu2+] plus Monte Carlo-based estimates of the uncertainty in [Cu2+]. Jointly modeling titration data at low- and high-analytical windows leads to an efficient adaptation of the recently proposed "overload" approach to calibrating ACSV/CLE measurements. Application of the method to published data sets yields model results with greater accuracy and precision than originally obtained. The discrete ligand-class model is also re-parametrized, using humic and fulvic acids, the L1 class (K1 = 10^13 M^-1), and strong ligands (LS) with KS > K1 as "natural components". This approach suggests that Cu complexation in NW Mediterranean Sea water can be well represented as 0.8 +/- 0.3/0.2 mg humic equiv/L, 13 +/- 1 nM L1, and 2.5 +/- 0.1 nM LS with [Cu]T = 3 nM. In coastal seawater from Narragansett Bay, RI, Cu speciation can be modeled as 0.6 +/- 0.1 mg humic equiv/L and 22 +/- 1 nM L1, or approximately 12 nM L1 and approximately 9 nM LS, with [Cu]T = 13 nM. In both waters, the large excess
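
    The one-metal/two-ligand equilibrium at the heart of the model can be solved numerically for the free metal concentration given total metal, the two ligand-class totals, and their conditional stability constants. The constants below are illustrative rather than the fitted values reported above.

```python
# Free [Cu2+] from a one-metal / two-ligand-class equilibrium.
from scipy.optimize import brentq

K1, K2 = 1e13, 1e11          # 1/M, conditional stability constants (assumed)
L1T, L2T = 13e-9, 2.5e-9     # M, total ligand-class concentrations (assumed)
CuT = 3e-9                   # M, total dissolved Cu

def mass_balance(log_m):
    m = 10.0 ** log_m
    bound = K1 * m * L1T / (1 + K1 * m) + K2 * m * L2T / (1 + K2 * m)
    return m + bound - CuT    # zero when free + complexed Cu equals the total

log_free = brentq(mass_balance, -20.0, -3.0)
print(f"free [Cu2+] ~ 10^{log_free:.1f} M")
```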

  2. The XXL Survey . IV. Mass-temperature relation of the bright cluster sample

    NASA Astrophysics Data System (ADS)

    Lieu, M.; Smith, G. P.; Giles, P. A.; Ziparo, F.; Maughan, B. J.; Démoclès, J.; Pacaud, F.; Pierre, M.; Adami, C.; Bahé, Y. M.; Clerc, N.; Chiappetti, L.; Eckert, D.; Ettori, S.; Lavoie, S.; Le Fevre, J. P.; McCarthy, I. G.; Kilbinger, M.; Ponman, T. J.; Sadibekova, T.; Willis, J. P.

    2016-06-01

    Context. The XXL Survey is the largest survey carried out by XMM-Newton. Covering an area of 50 deg², the survey contains ~450 galaxy clusters out to a redshift ~2 and to an X-ray flux limit of ~5 × 10^-15 erg s^-1 cm^-2. This paper is part of the first release of XXL results focussed on the bright cluster sample. Aims: We investigate the scaling relation between weak-lensing mass and X-ray temperature for the brightest clusters in XXL. The scaling relation discussed in this article is used to estimate the mass of all 100 clusters in XXL-100-GC. Methods: Based on a subsample of 38 objects that lie within the intersection of the northern XXL field and the publicly available CFHTLenS shear catalog, we derive the weak-lensing mass of each system with careful consideration of the systematics. The clusters lie at redshifts above 0.1 and span a temperature range of T ≃ 1-5 keV. We combine our sample with an additional 58 clusters from the literature, increasing the range to T ≃ 1-10 keV. To date, this is the largest sample of clusters with weak-lensing mass measurements that has been used to study the mass-temperature relation. Results: The mass-temperature relation fit (M ∝ T^b) to the XXL clusters returns a slope and intrinsic scatter σ_ln M|T ≃ 0.53; the scatter is dominated by disturbed clusters. The fit to the combined sample of 96 clusters is in tension with self-similarity, b = 1.67 ± 0.12 and σ_ln M|T ≃ 0.41. Conclusions: Overall our results demonstrate the feasibility of ground-based weak-lensing scaling relation studies down to cool systems of ~1 keV temperature and highlight that the current data and samples limit our statistical precision. As such we are unable to determine whether the validity of hydrostatic equilibrium is a function of halo mass. An enlarged sample of cool systems, deeper weak-lensing data, and robust modelling of the selection function will help to explore these issues further. Based on observations obtained with XMM-Newton, an ESA

  3. Simple method for highlighting the temperature distribution into a liquid sample heated by microwave power field

    SciTech Connect

    Surducan, V.; Surducan, E.; Dadarlat, D.

    2013-11-13

    Microwave induced heating is widely used in medical treatments, scientific and industrial applications. The temperature field inside a microwave heated sample is often inhomogeneous, therefore multiple temperature sensors are required for an accurate result. Nowadays, non-contact (infrared thermography or microwave radiometry) or direct contact temperature measurement methods (expensive and sophisticated fiber optic temperature sensors transparent to microwave radiation) are mainly used. IR thermography gives only the surface temperature and cannot be used for measuring temperature distributions in cross sections of a sample. In this paper we present a very simple experimental method for highlighting the temperature distribution inside a cross section of a liquid sample heated by microwave radiation through a coaxial applicator. The proposed method is able to offer qualitative information about the heating distribution, using a temperature sensitive liquid crystal sheet. Inhomogeneities as small as 1°-2°C produced by the symmetry irregularities of the microwave applicator can be easily detected by visual inspection or by computer assisted color to temperature conversion. The microwave applicator can therefore be tuned and verified with the described method until the temperature inhomogeneities are resolved.

  4. Apparatus Measures Thermal Conductance Through a Thin Sample from Cryogenic to Room Temperature

    NASA Technical Reports Server (NTRS)

    Tuttle, James G.

    2009-01-01

    An apparatus allows the measurement of the thermal conductance across a thin sample clamped between metal plates, including thermal boundary resistances. It allows in-situ variation of the clamping force from zero to 30 lb (133.4 N), and variation of the sample temperature between 40 and 300 K. It has a special design feature that minimizes the effect of thermal radiation on this measurement. The apparatus includes a heater plate sandwiched between two identical thin samples. On the side of each sample opposite the heater plate is a cold plate. In order to take data, the heater plate is controlled at a slightly higher temperature than the two cold plates, which are controlled at a single lower temperature. The steady-state controlling power supplied to the hot plate, the area and thickness of samples, and the temperature drop across the samples are then used in a simple calculation of the thermal conductance. The conductance measurements can be taken at arbitrary temperatures down to about 40 K, as the entire setup is cooled by a mechanical cryocooler. The specific geometry combined with the pneumatic clamping force control system and the steady-state temperature control approach make this a unique apparatus.
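
    The steady-state reduction described above amounts to a short calculation: the heater power splits between the two identical samples, the conductance is power over temperature drop, and an effective conductivity follows from the sample geometry. The numbers in the sketch are arbitrary illustrative values.

```python
# Steady-state conductance and effective conductivity of a clamped thin sample.
def sample_conductance(heater_power_w, delta_t_k, n_samples=2):
    """Conductance per sample (W/K); the heater plate feeds two identical samples."""
    return heater_power_w / (n_samples * delta_t_k)

def effective_conductivity(conductance_w_per_k, thickness_m, area_m2):
    """Effective conductivity (W/(m K)), boundary resistances included."""
    return conductance_w_per_k * thickness_m / area_m2

g = sample_conductance(heater_power_w=0.050, delta_t_k=0.5)
k = effective_conductivity(g, thickness_m=0.5e-3, area_m2=1.0e-4)
print(f"conductance = {g:.3f} W/K, effective conductivity = {k:.3f} W/(m K)")
```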

  5. The ice nucleation temperature determines the primary drying rate of lyophilization for samples frozen on a temperature-controlled shelf.

    PubMed

    Searles, J A; Carpenter, J F; Randolph, T W

    2001-07-01

    The objective of this study was to determine the influence of ice nucleation temperature on the primary drying rate during lyophilization for samples in vials that were frozen on a lyophilizer shelf. Aqueous solutions of 10% (w/v) hydroxyethyl starch were frozen in vials with externally mounted thermocouples and then partially lyophilized to determine the primary drying rate. Low- and high-particulate-containing samples, ice-nucleating additives silver iodide and Pseudomonas syringae, and other methods were used to obtain a wide range of nucleation temperatures. In cases where the supercooling exceeded 5 degrees C, freezing took place in the following three steps: (1) primary nucleation, (2) secondary nucleation encompassing the entire liquid volume, and (3) final solidification. The primary drying rate was dependent on the ice nucleation temperature, which is stochastic in nature but is affected by particulate content and the presence of ice nucleators. Sample cooling rates of 0.05 to 1 degrees C/min had no effect on nucleation temperatures and drying rate. We found that the ice nucleation temperature is the primary determinant of the primary drying rate. However, the nucleation temperature is not under direct control, and its stochastic nature and sensitivity to difficult-to-control parameters result in drying rate heterogeneity. Nucleation temperature heterogeneity may also result in variation in other morphology-related parameters such as surface area and secondary drying rate. Overall, these results document that factors such as particulate content and vial condition, which influence ice nucleation temperature, must be carefully controlled to avoid, for example, lot-to-lot variability during cGMP production. In addition, if these factors are not controlled and/or are inadvertently changed during process development and scaleup, a lyophilization cycle that was successful on the research scale may fail during large-scale production. PMID:11458335

  6. Surface Temperature Assimilation in Land Surface Models

    NASA Technical Reports Server (NTRS)

    Lakshmi, Venkataraman

    1999-01-01

    This paper examines the utilization of surface temperature as a variable to be assimilated in offline land surface hydrological models. Comparisons between the model computed and satellite observed surface temperatures have been carried out. The assimilation of surface temperature is carried out twice a day (corresponding to the AM and PM overpass of the NOAA10) over the Red-Arkansas basin in the Southwestern United States (31 deg 50 min N - 36 deg N, 94 deg 30 min W - 104 deg 30 min W) for a period of one year (August 1987 to July 1988). The effect of assimilation is to reduce the difference between the surface soil moisture computed for the precipitation and/or shortwave radiation perturbed case and the unperturbed case compared to no assimilation.

  7. Surface Temperature Assimilation in Land Surface Models

    NASA Technical Reports Server (NTRS)

    Lakshmi, Venkataraman

    1997-01-01

    This paper examines the utilization of surface temperature as a variable to be assimilated in offline land surface hydrological models. Comparisons between the model computed and satellite observed surface temperatures have been carried out. The assimilation of surface temperature is carried out twice a day (corresponding to the AM and PM overpass of the NOAA10) over the Red- Arkansas basin in the Southwestern United States (31 deg 50 min N - 36 deg N, 94 deg 30 min W - 104 deg 30 min W) for a period of one year (August 1987 to July 1988). The effect of assimilation is to reduce the difference between the surface soil moisture computed for the precipitation and/or shortwave radiation perturbed case and the unperturbed case compared to no assimilation.
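
    A very reduced picture of twice-daily surface temperature assimilation is a nudging update applied at the two satellite overpass times. The gain, the toy model physics, and the observation values in the sketch below are placeholder assumptions, not the scheme used in the study.

```python
# Nudging-style assimilation of surface temperature at two overpasses per day.
import numpy as np

def assimilate(model_ts, observed_ts, gain=0.5):
    """Relax the model surface temperature toward the observation."""
    return model_ts + gain * (observed_ts - model_ts)

ts = np.full(24, 290.0)                      # hourly surface temperature (K), toy model
overpass_obs = {7: 288.5, 19: 293.0}         # AM / PM satellite observations (K), assumed

for hour in range(1, 24):
    ts[hour] = ts[hour - 1] + 0.2 * np.sin(2 * np.pi * hour / 24)   # fake diurnal forcing
    if hour in overpass_obs:
        ts[hour] = assimilate(ts[hour], overpass_obs[hour])

print("surface temperature at the end of the day:", round(ts[-1], 2), "K")
```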

  8. Automated sample exchange and tracking system for neutron research at cryogenic temperatures.

    PubMed

    Rix, J E; Weber, J K R; Santodonato, L J; Hill, B; Walker, L M; McPherson, R; Wenzel, J; Hammons, S E; Hodges, J; Rennich, M; Volin, K J

    2007-01-01

    An automated system for sample exchange and tracking in a cryogenic environment and under remote computer control was developed. Up to 24 sample "cans" per cycle can be inserted and retrieved in a programmed sequence. A video camera acquires a unique identification marked on the sample can to provide a record of the sequence. All operations are coordinated via a LabVIEW program that can be operated locally or over a network. The samples are contained in vanadium cans of 6-10 mm in diameter and equipped with a hermetically sealed lid that interfaces with the sample handler. The system uses a closed-cycle refrigerator (CCR) for cooling. The sample was delivered to a precooling location at a temperature of approximately 25 K; after several minutes, it was moved onto a "landing pad" at approximately 10 K that locates the sample in the probe beam. After the sample was released onto the landing pad, the sample handler was retracted. Reading the sample identification and the exchange operation takes approximately 2 min. The time to cool the sample from ambient temperature to approximately 10 K was approximately 7 min including precooling time. The cooling time increases to approximately 12 min if precooling is not used. Small differences in cooling rate were observed between sample materials and for different sample can sizes. Filling the sample well and the sample can with low pressure helium is essential to provide heat transfer and to achieve useful cooling rates. A resistive heating coil can be used to offset the refrigeration so that temperatures up to approximately 350 K can be accessed and controlled using a proportional-integral-derivative control loop. The time for the landing pad to cool to approximately 10 K after it has been heated to approximately 240 K was approximately 20 min. PMID:17503933

  9. Evaluation of the Validity of Crystallization Temperature Measurements Using Thermography with Different Sample Configurations

    NASA Astrophysics Data System (ADS)

    Aono, Yuko; Sakurai, Junpei; Shimokohbe, Akira; Hata, Seiichi

    2010-07-01

    We describe further progress of a previously reported novel crystallization temperature (Tx) measurement method applicable for small sample sizes. The method uses thermography and detects Tx as a change in emissivity of thin film amorphous alloy samples. We applied this method to various sample configurations of Pd-Cu-Si thin film metallic glass (TFMG). The validity of the detected Tx was determined by electrical resistivity monitoring and differential scanning calorimetry (DSC). Crystallization temperature can be detected in all sample configurations; however, it was found that the magnitude of the detected change of emissivity at Tx depended on the sample configuration. This emissivity change was clear in the absence of a higher emissivity material. The results suggest that this method can achieve high-throughput characterization of Tx for integrated small samples such as in a thin film library.

  10. Metamorphism during temperature gradient with undersaturated advective airflow in a snow sample

    NASA Astrophysics Data System (ADS)

    Ebner, Pirmin Philipp; Schneebeli, Martin; Steinfeld, Aldo

    2016-04-01

    Snow at or close to the surface commonly undergoes temperature gradient metamorphism under advective flow, which alters its microstructure and physical properties. Time-lapse X-ray microtomography is applied to investigate the structural dynamics of temperature gradient snow metamorphism exposed to an advective airflow in controlled laboratory conditions. Cold saturated air at the inlet was blown into the snow samples and warmed up while flowing across the sample with a temperature gradient of around 50 K m-1. Changes of the porous ice structure were observed at mid-height of the snow sample. Sublimation occurred due to the slight undersaturation of the incoming air into the warmer ice matrix. Diffusion of water vapor opposite to the direction of the temperature gradient counteracted the mass transport of advection. Therefore, the total net ice change was negligible leading to a constant porosity profile. However, the strong recrystallization of water molecules in snow may impact its isotopic or chemical content.

  11. Two-temperature models for nitrogen dissociation

    NASA Astrophysics Data System (ADS)

    da Silva, M. Lino; Guerra, V.; Loureiro, J.

    2007-12-01

    Accurate sets of nitrogen state-resolved dissociation rates have been reduced to two-temperature (translational T and vibrational Tv) dissociation rates. The analysis of such two-temperature dissociation rates shows evidence of two different dissociation behaviors. For Tv < 0.3 T dissociation proceeds predominantly from the lower-lying vibrational levels, whereas for Tv > 0.3 T dissociation proceeds predominantly from the near-dissociative vibrational levels, with an abrupt change of behavior at Tv = 0.3 T. These two-temperature sets have then been utilized as a benchmark for the comparison against popular multitemperature dissociation models (Park, Hansen, Marrone-Treanor, Hammerling, Losev-Shatalov, Gordiets, Kuznetsov, and Macheret-Fridman). This allowed the accuracy of each theoretical model to be verified, and adequate values to be proposed for the semi-empirical parameters present in the different theories. The Macheret-Fridman model, which acknowledges the existence of the two aforementioned dissociation regimes, has been found to provide significantly more accurate results than the other models. Although these different theoretical approaches have been tested and validated solely for nitrogen dissociation processes, it is reasonable to expect that the general conclusions of this work, regarding the adequacy of the different dissociation models, could be extended to the description of arbitrary diatomic dissociation processes.
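
    For orientation, perhaps the most widely used of the multitemperature models compared above, the Park model, evaluates a modified-Arrhenius rate at a geometric-average temperature. A schematic of its standard form (with the averaging exponent q commonly taken between 0.5 and 0.7) is

        k(T, T_v) = A \, T_a^{\eta} \exp\!\left(-\frac{\theta_d}{T_a}\right), \qquad T_a = T^{\,q} \, T_v^{\,1-q},

    where \theta_d is the characteristic dissociation temperature. The constants A, \eta and q here are notation of this sketch, not the fitted parameters reported in the paper.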

  12. Flight summaries and temperature climatology at airliner cruise altitudes from GASP (Global Atmospheric Sampling Program) data

    NASA Technical Reports Server (NTRS)

    Nastrom, G. D.; Jasperson, W. H.

    1983-01-01

    Temperature data obtained by the Global Atmospheric Sampling Program (GASP) during the period March 1975 to July 1979 are compiled to form flight summaries of static air temperature and a geographic temperature climatology. The flight summaries include the height and location of the coldest observed temperature and the mean flight level, temperature, and the standard deviation of temperature for each flight as well as for flight segments. These summaries are ordered by route and month. The temperature climatology was computed from all statistically independent temperature data for each flight. The grid used consists of 5 deg latitude, 30 deg longitude, and 2000 feet vertical resolution from FL270 to FL430 for each month of the year. The number of statistically independent observations, their mean, standard deviation, and the empirical 98%, 50%, 16%, 2%, and 0.3% percentiles are presented.
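
    A minimal sketch of the gridding described above (5 deg latitude by 30 deg longitude by 2000 ft flight-level bins, with empirical percentiles per cell) might look like the following; the input arrays and the screening for statistically independent observations are assumptions of this sketch, not the original GASP processing code.

    # Illustrative binning of flight-level temperatures into a
    # 5 deg lat x 30 deg lon x 2000 ft grid with empirical percentiles.
    import numpy as np

    def gasp_climatology(lat_deg, lon_deg, flight_level_ft, temp_k):
        """Return per-cell count, mean, standard deviation and percentiles."""
        lat_bin = np.floor(np.asarray(lat_deg) / 5.0).astype(int)
        lon_bin = np.floor(np.asarray(lon_deg) / 30.0).astype(int)
        fl_bin = np.floor(np.asarray(flight_level_ft) / 2000.0).astype(int)
        temp_k = np.asarray(temp_k)
        cells = {}
        for key in set(zip(lat_bin, lon_bin, fl_bin)):
            mask = (lat_bin == key[0]) & (lon_bin == key[1]) & (fl_bin == key[2])
            t = temp_k[mask]
            cells[key] = {
                "n": int(t.size),
                "mean": float(t.mean()),
                "std": float(t.std(ddof=1)) if t.size > 1 else float("nan"),
                "percentiles": np.percentile(t, [98, 50, 16, 2, 0.3]),
            }
        return cells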

  13. Measurement of Crystallization Temperature Using Thermography for Thin Film Amorphous Alloy Samples

    NASA Astrophysics Data System (ADS)

    Hata, Seiichi; Aono, Yuko; Sakurai, Junpei; Shimokohbe, Akira

    2009-03-01

    This report describes a new non-contact measurement method for the crystallization temperature (Tx) of a thin film amorphous alloy. The thermal emissivity of the amorphous alloy sample is predicted to change when it crystallizes, and this change is related to a shift in the apparent temperature recorded by thermography. Thin film amorphous alloys of Pt67Si33 and Pt73Si27 were sputtered onto an Al2O3 substrate and then heated at 20 K/min in vacuum, and the film temperature was monitored by thermography. The Tx indicated by the proposed method coincided with the temperature measured by conventional differential scanning calorimetry to within 8 K.

  14. The XXL Survey. III. Luminosity-temperature relation of the bright cluster sample

    NASA Astrophysics Data System (ADS)

    Giles, P. A.; Maughan, B. J.; Pacaud, F.; Lieu, M.; Clerc, N.; Pierre, M.; Adami, C.; Chiappetti, L.; Démoclés, J.; Ettori, S.; Le Févre, J. P.; Ponman, T.; Sadibekova, T.; Smith, G. P.; Willis, J. P.; Ziparo, F.

    2016-06-01

    Context. The XXL Survey is the largest homogeneous survey carried out with XMM-Newton. Covering an area of 50 deg², the survey contains several hundred galaxy clusters out to a redshift of ~2 above an X-ray flux limit of ~5 × 10⁻¹⁵ erg cm⁻² s⁻¹. This paper belongs to the first series of XXL papers focusing on the bright cluster sample. Aims: We investigate the luminosity-temperature (LT) relation for the brightest clusters detected in the XXL Survey, taking fully into account the selection biases. We investigate the form of the LT relation, placing constraints on its evolution. Methods: We have classified the 100 brightest clusters in the XXL Survey based on their measured X-ray flux. These 100 clusters have been analysed to determine their luminosity and temperature to evaluate the LT relation. We used three methods to fit the form of the LT relation, with two of these methods providing a prescription to fully take into account the selection effects of the survey. We measure the evolution of the LT relation internally using the broad redshift range of the sample. Results: Taking fully into account selection effects, we find a slope of the bolometric LT relation of BLT = 3.08 ± 0.15, steeper than the self-similar expectation (BLT = 2). Our best-fit result for the evolution factor is E(z)^(1.64 ± 0.77), fully consistent with "strong self-similar" evolution where clusters scale self-similarly with both mass and redshift. However, this result is marginally stronger than "weak self-similar" evolution, where clusters scale with redshift alone. We investigate the sensitivity of our results to the assumptions made in our fitting model, finding that using an external LT relation as a low-z baseline can have a profound effect on the measured evolution. However, more clusters are needed in order to break the degeneracy between the choice of likelihood model and mass-temperature relation on the derived evolution. Based on observations obtained with XMM-Newton, an ESA science
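
    Written out, the scaling relation being fitted has the schematic form

        L = L_0 \, E(z)^{\gamma_{LT}} \left( \frac{T}{T_0} \right)^{B_{LT}},

    with the fits quoted above giving B_{LT} = 3.08 ± 0.15 and \gamma_{LT} = 1.64 ± 0.77. The normalisation L_0 and pivot temperature T_0 are notation of this sketch rather than values taken from the paper.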

  15. Preferential sampling and Bayesian geostatistics: Statistical modeling and examples.

    PubMed

    Cecconi, Lorenzo; Grisotto, Laura; Catelan, Dolores; Lagazio, Corrado; Berrocal, Veronica; Biggeri, Annibale

    2016-08-01

    Preferential sampling refers to any situation in which the spatial process and the sampling locations are not stochastically independent. In this paper, we present two examples of geostatistical analysis in which the usual assumption of stochastic independence between the point process and the measurement process is violated. To account for preferential sampling, we specify a flexible and general Bayesian geostatistical model that includes a shared spatial random component. We apply the proposed model to two different case studies that allow us to highlight three different modeling and inferential aspects of geostatistical modeling under preferential sampling: (1) continuous or finite spatial sampling frame; (2) underlying causal model and relevant covariates; and (3) inferential goals related to mean prediction surface or prediction uncertainty. PMID:27566774
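
    Schematically, a shared-component formulation of the kind described couples the measurement model and the sampling-location process through a single latent Gaussian field S(·); the linear predictors and links below are illustrative assumptions, not the exact specification of the paper:

        Y(s_i) \mid S \sim N\big(\mu + S(s_i), \, \tau^2\big), \qquad
        X \mid S \sim \text{Poisson process with intensity } \lambda(s) = \exp\{\alpha + \beta \, S(s)\},

    where \beta \neq 0 encodes preferential sampling, i.e. the sampling locations depend on the same spatial field that drives the measurements.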

  16. Modeling quantum fluid dynamics at nonzero temperatures

    PubMed Central

    Berloff, Natalia G.; Brachet, Marc; Proukakis, Nick P.

    2014-01-01

    The detailed understanding of the intricate dynamics of quantum fluids, in particular in the rapidly growing subfield of quantum turbulence which elucidates the evolution of a vortex tangle in a superfluid, requires an in-depth understanding of the role of finite temperature in such systems. The Landau two-fluid model is the most successful hydrodynamical theory of superfluid helium, but by the nature of the scale separations it cannot give an adequate description of the processes involving vortex dynamics and interactions. In our contribution we introduce a framework based on a nonlinear classical-field equation that is mathematically identical to the Landau model and provides a mechanism for severing and coalescence of vortex lines, so that the questions related to the behavior of quantized vortices can be addressed self-consistently. The correct equation of state as well as nonlocality of interactions that leads to the existence of the roton minimum can also be introduced in such description. We review and apply the ideas developed for finite-temperature description of weakly interacting Bose gases as possible extensions and numerical refinements of the proposed method. We apply this method to elucidate the behavior of the vortices during expansion and contraction following the change in applied pressure. We show that at low temperatures, during the contraction of the vortex core as the negative pressure grows back to positive values, the vortex line density grows through a mechanism of vortex multiplication. This mechanism is suppressed at high temperatures. PMID:24704874
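
    The simplest local form of such a nonlinear classical-field (Gross-Pitaevskii-type) equation is, schematically,

        i\hbar \, \partial_t \psi = -\frac{\hbar^2}{2m} \nabla^2 \psi + g \, |\psi|^2 \psi,

    with the nonlocal, roton-capturing variant replacing g|\psi|^2 by a convolution \int V(|\mathbf{r}-\mathbf{r}'|) \, |\psi(\mathbf{r}')|^2 \, d\mathbf{r}'. This generic form is given for orientation only; the specific equation of state and interaction kernel used in the paper are not reproduced here.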

  17. An environmental sampling model for combining judgment and randomly placed samples

    SciTech Connect

    Sego, Landon H.; Anderson, Kevin K.; Matzke, Brett D.; Sieber, Karl; Shulman, Stanley; Bennett, James; Gillen, M.; Wilson, John E.; Pulsipher, Brent A.

    2007-08-23

    In the event of the release of a lethal agent (such as anthrax) inside a building, law enforcement and public health responders take samples to identify and characterize the contamination. Sample locations may be rapidly chosen based on available incident details and professional judgment. To achieve greater confidence of whether or not a room or zone was contaminated, or to certify that detectable contamination is not present after decontamination, we consider a Bayesian model for combining the information gained from both judgment and randomly placed samples. We investigate the sensitivity of the model to the parameter inputs and make recommendations for its practical use.
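
    As a loose illustration of how a judgment-based prior and negative random samples can be combined, consider the Beta-binomial sketch below; it is a deliberate simplification for illustration, not the model specified in the report, and the prior parameters and threshold are arbitrary placeholders.

    # Illustrative Beta-binomial update: combine a judgment-based prior on the
    # fraction of contaminated locations with n random samples, all negative.
    # This is a simplified stand-in, not the model described in the report.
    from scipy.stats import beta

    def posterior_clean_confidence(prior_a, prior_b, n_negative, threshold=0.01):
        """P(contaminated fraction < threshold | data), with Beta(prior_a, prior_b)
        encoding professional judgment and n_negative clean random samples."""
        post = beta(prior_a, prior_b + n_negative)
        return post.cdf(threshold)

    # Example: weakly pessimistic judgment prior, 30 clean random samples.
    print(posterior_clean_confidence(prior_a=1, prior_b=9, n_negative=30, threshold=0.05))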

  18. Thermal Response Modeling System for a Mars Sample Return Vehicle

    NASA Technical Reports Server (NTRS)

    Chen, Y.-K.; Milos, F. S.

    2002-01-01

    A multi-dimensional, coupled thermal response modeling system for analysis of hypersonic entry vehicles is presented. The system consists of a high fidelity Navier-Stokes equation solver (GIANTS), a two-dimensional implicit thermal response, pyrolysis and ablation program (TITAN), and a commercial finite element thermal and mechanical analysis code (MARC). The simulations performed by this integrated system include hypersonic flowfield, fluid and solid interaction, ablation, shape change, pyrolysis gas generation and flow, and thermal response of heatshield and structure. The thermal response of the heatshield is simulated using TITAN, and that of the underlying structure is simulated using MARC. The ablating heatshield is treated as an outer boundary condition of the structure, and continuity conditions of temperature and heat flux are imposed at the interface between TITAN and MARC. Aerothermal environments with fluid and solid interaction are predicted by coupling TITAN and GIANTS through surface energy balance equations. With this integrated system, the aerothermal environments for an entry vehicle and the thermal response of the entire vehicle can be obtained simultaneously. Representative computations for a flat-faced arc-jet test model and a proposed Mars sample return capsule are presented and discussed.

  19. Thermal Response Modeling System for a Mars Sample Return Vehicle

    NASA Technical Reports Server (NTRS)

    Chen, Y.-K.; Miles, Frank S.; Arnold, Jim (Technical Monitor)

    2001-01-01

    A multi-dimensional, coupled thermal response modeling system for analysis of hypersonic entry vehicles is presented. The system consists of a high fidelity Navier-Stokes equation solver (GIANTS), a two-dimensional implicit thermal response, pyrolysis and ablation program (TITAN), and a commercial finite-element thermal and mechanical analysis code (MARC). The simulations performed by this integrated system include hypersonic flowfield, fluid and solid interaction, ablation, shape change, pyrolysis gas generation and flow, and thermal response of heatshield and structure. The thermal response of the heatshield is simulated using TITAN, and that of the underlying structure is simulated using MARC. The ablating heatshield is treated as an outer boundary condition of the structure, and continuity conditions of temperature and heat flux are imposed at the interface between TITAN and MARC. Aerothermal environments with fluid and solid interaction are predicted by coupling TITAN and GIANTS through surface energy balance equations. With this integrated system, the aerothermal environments for an entry vehicle and the thermal response of the entire vehicle can be obtained simultaneously. Representative computations for a flat-faced arc-jet test model and a proposed Mars sample return capsule are presented and discussed.

  20. Modeling forces in high-temperature superconductors

    SciTech Connect

    Turner, L. R.; Foster, M. W.

    1997-11-18

    We have developed a simple model that uses computed shielding currents to determine the forces acting on a high-temperature superconductor (HTS). The model has been applied to measurements of the force between HTS and permanent magnets (PM). Results show the expected hysteretic variation of force as the HTS moves first toward and then away from a permanent magnet, including the reversal of the sign of the force. Optimization of the shielding currents is carried out through a simulated annealing algorithm in a C++ program that repeatedly calls a commercial electromagnetic software code. Agreement with measured forces is encouraging.
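
    A minimal simulated-annealing skeleton of the kind described might look as follows; evaluate_force, which would stand in for the call to the electromagnetic solver, and all move and cooling parameters are placeholders, not the authors' C++ implementation.

    # Skeleton of simulated-annealing optimisation of shielding currents.
    # evaluate_force() is a placeholder for the field-solver-based cost;
    # all parameters are illustrative, not those of the published code.
    import math
    import random

    def evaluate_force(currents):
        """Placeholder: force-mismatch cost from a field solution for these currents."""
        raise NotImplementedError

    def anneal(initial_currents, t_start=1.0, t_end=1e-3, cooling=0.95, steps_per_t=50):
        currents = list(initial_currents)
        cost = evaluate_force(currents)
        temperature = t_start
        while temperature > t_end:
            for _ in range(steps_per_t):
                trial = [c + random.gauss(0.0, 0.1) for c in currents]  # random perturbation
                trial_cost = evaluate_force(trial)
                accept = trial_cost < cost or random.random() < math.exp((cost - trial_cost) / temperature)
                if accept:
                    currents, cost = trial, trial_cost
            temperature *= cooling  # geometric cooling schedule
        return currents, cost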

  1. Method and apparatus for transport, introduction, atomization and excitation of emission spectrum for quantitative analysis of high temperature gas sample streams containing vapor and particulates without degradation of sample stream temperature

    SciTech Connect

    Eckels, D.E.; Hass, W.J.

    1989-05-30

    A sample transport, sample introduction, and flame excitation system is described for spectrometric analysis of high temperature gas streams which eliminates degradation of the sample stream by condensation losses. 4 figs.

  2. Comparison of Single-Point and Continuous Sampling Methods for Estimating Residential Indoor Temperature and Humidity.

    PubMed

    Johnston, James D; Magnusson, Brianna M; Eggett, Dennis; Collingwood, Scott C; Bernhardt, Scott A

    2015-01-01

    Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hr, and 12-days) in 9 northern Utah homes, from March-June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions. PMID:26030088
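
    The correlation and slope statistics quoted above can be reproduced for paired data with a few lines; the variable names and the pairing of time-centered single-point readings with data-logger window means are assumptions of this sketch, not the authors' analysis code.

    # Compare time-centered single-point readings with data-logger window means.
    # Arrays are assumed to be aligned pairs of equal length.
    import numpy as np
    from scipy import stats

    def compare(single_point, logger_window_mean, deviation_cutoff=5.0):
        single_point = np.asarray(single_point)
        logger_window_mean = np.asarray(logger_window_mean)
        r, _ = stats.pearsonr(single_point, logger_window_mean)
        slope, intercept, *_ = stats.linregress(single_point, logger_window_mean)
        deviation = single_point - logger_window_mean
        frac_large_dev = np.mean(np.abs(deviation) > deviation_cutoff)  # e.g., RH points off by >5%
        return {"r": r, "beta": slope, "intercept": intercept, "frac_large_dev": frac_large_dev}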

  3. The use of ESR technique for assessment of heating temperatures of archaeological lentil samples

    NASA Astrophysics Data System (ADS)

    Aydaş, Canan; Engin, Birol; Dönmez, Emel Oybak; Belli, Oktay

    2010-01-01

    Heat-induced paramagnetic centers in modern and archaeological lentils (Lens culinaris Medik.) were studied by the X-band (9.3 GHz) electron spin resonance (ESR) technique. The modern red lentil samples were heated in an electrical furnace at increasing temperatures in the range 70-500 °C. The ESR spectral parameters (the intensity, g-value and peak-to-peak line width) of the heat-induced organic radicals were investigated for modern red lentil (Lens culinaris Medik.) samples. The obtained ESR spectra indicate that the relative number of heat-induced paramagnetic species and the peak-to-peak line widths depend on the temperature and heating time of the modern lentil. The g-values also depend on the heating temperature but not on the heating time. Heated modern red lentils produced a range of organic radicals with g-values from g = 2.0062 to 2.0035. ESR signals of carbonised archaeological lentil samples from two archaeological deposits of the Van province in Turkey were studied, and g-values, peak-to-peak line widths, intensities and elemental compositions were compared with those obtained for modern samples in order to assess at which temperature these archaeological lentils were heated in prehistoric sites. The maximum temperatures of the previous heating of the carbonised UA5 and Y11 lentil seeds are estimated to be about 500 °C and above 500 °C, respectively.

  4. The use of variable temperature and magic-angle sample spinning in studies of fulvic acids

    USGS Publications Warehouse

    Earl, W.L.; Wershaw, R. L.; Thorn, K.A.

    1987-01-01

    Intensity distortions and poor signal to noise in the cross-polarization magic-angle sample spinning NMR of fulvic acids were investigated and attributed to molecular mobility in these ostensibly "solid" materials. We have shown that inefficiencies in cross polarization can be overcome by lowering the sample temperature to about -60 °C. These difficulties can be generalized to many other synthetic and natural products. The use of variable temperature and cross-polarization intensity as a function of contact time can yield valuable qualitative information which can aid in the characterization of many materials. © 1987.

  5. Temperature-controlled neutron reflectometry sample cell suitable for study of photoactive thin films

    SciTech Connect

    Yager, Kevin G.; Tanchak, Oleh M.; Barrett, Christopher J.; Watson, Mike J.; Fritzsche, Helmut

    2006-04-15

    We describe a novel cell design intended for the study of photoactive materials using neutron reflectometry. The cell can maintain sample temperature and control of ambient atmospheric environment. Critically, the cell is built with an optical port, enabling light irradiation or light probing of the sample, simultaneous with neutron reflectivity measurements. The ability to measure neutron reflectivity with simultaneous temperature ramping and/or light illumination presents unique opportunities for measuring photoactive materials. To validate the cell design, we present preliminary results measuring the photoexpansion of thin films of azobenzene polymer.

  6. Geothermal fluid equilibrium modeling: a comparison of wellhead fluid samples to deep samples in the Reykjanes system Iceland

    NASA Astrophysics Data System (ADS)

    Seward, R. J.; Reed, M. H.; Fridriksson, T.

    2013-12-01

    Single phase geothermal fluids sampled at depth (Hardardottir et al. 2007) from the Reykjanes geothermal system in Iceland show large differences in dissolved copper, zinc, and iron concentrations when compared with fluid sampled from the same well at the surface. Equilibrium modeling of the samples taken at depth indicate that the fluid was supersaturated in sulfide minerals even at moderately acidic pH values, suggesting that the deep samples, as collected, are out of equilibrium. One possibility for this discrepancy is that the down-well mechanical sampler trapped suspended particles of sulfide minerals that were treated as part of the dissolved constituents of the fluid when it was analyzed, thus inflating the concentrations of Cu, Zn and Fe. In addition to possible entrained solids, techniques used to take in-situ fluid samples at depth in these wells do not provide a complete picture of dissolved species within the fluid because gases are lost when samples are brought to the surface. This precludes meaningful pH measurements and therefore requires chemical modeling of surface samples to understand the state of fluids at depth. In this study geothermal fluids are modeled from surface sample analyses and compared with results from models of fluids collected at depth in the same geothermal wells by calculating a full chemical speciation of geothermal fluids as they boil with decreasing pressure and temperature using programs SOLVEQ-xpt and CHIM-xpt. One of the wells examined for this study was well RN-12. In-situ down-well samples were collected at 1500m, within the single phase region as indicated by pre-sampling pressure and temperature logging in the well which showed that boiling starts at 1300m, and 295 degrees C. Fluid and gas samples which were collected at the well head are recomputed as a single phase fluid to be compared with the down-well sampler. These surface fluids reached a maximum temperature of 300 to 320 degrees C, determined by computing the

  7. Effects of different temperature treatments on biological ice nuclei in snow samples

    NASA Astrophysics Data System (ADS)

    Hara, Kazutaka; Maki, Teruya; Kakikawa, Makiko; Kobayashi, Fumihisa; Matsuki, Atsushi

    2016-09-01

    The heat tolerance of biological ice nucleation activity (INA) depends on the type of ice nucleus. Different temperature treatments may cause varying degrees of inactivation of biological ice nuclei (IN) in precipitation samples. In this study, we measured IN concentration and bacterial INA in snow samples using a drop freezing assay, and compared the results for unheated snow and snow treated at 40 °C and 90 °C. At a measured temperature of -7 °C, the concentration of IN in untreated snow was 100-570 L⁻¹, whereas the concentration in snow treated at 40 °C and 90 °C was 31-270 L⁻¹ and 2.5-14 L⁻¹, respectively. In the present study, heat-sensitive IN inactivated by heating at 40 °C were predominant, accounting for 23-78% of the IN detected at -7 °C in untreated samples. Ice nucleation active Pseudomonas strains were also isolated from the snow samples, and heating at 40 °C and 90 °C inactivated these microorganisms. Consequently, different temperature treatments induced varying degrees of inactivation of IN in snow samples. Differences in the concentration of IN across a range of treatment temperatures might reflect the abundance of different heat-sensitive biological IN components.

  8. Estimation of sampling error uncertainties in observed surface air temperature change in China

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun

    2016-06-01

    This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with the maximum value exceeding 2.0 K², while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K². In general, negative temperature anomalies were present in each month prior to the 1980s, and warming began thereafter, accelerating in the early and mid-1990s. The increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of the persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.

  9. Temperature influences in receiver clock modelling

    NASA Astrophysics Data System (ADS)

    Wang, Kan; Meindl, Michael; Rothacher, Markus; Schoenemann, Erik; Enderle, Werner

    2016-04-01

    In Precise Point Positioning (PPP), hardware delays at the receiver site (receiver, cables, antenna, …) are difficult to separate from the estimated receiver clock parameters. As a result, they are partially or fully contained in the estimated "apparent" clocks and will influence the deterministic and stochastic modelling of the receiver clock behaviour. In this contribution, using three years of data, the receiver clock corrections of a set of high-precision Hydrogen Masers (H-Masers) connected to stations of the ESA/ESOC network and the International GNSS Service (IGS) are firstly characterized concerning clock offsets, drifts, modified Allan deviations and stochastic parameters. In a second step, the apparent behaviour of the clocks is modelled with the help of a low-order polynomial and a known temperature coefficient (Weinbach, 2013). The correlations between the temperature and the hardware delays generated by different types of antennae are then analysed looking at daily, 3-day and weekly time intervals. The outcome of these analyses is crucial if we intend to model the receiver clocks in the ground station network to improve the estimation of station-related parameters like coordinates, troposphere zenith delays and ambiguities. References: Weinbach, U. (2013) Feasibility and impact of receiver clock modeling in precise GPS data analysis. Dissertation, Leibniz Universität Hannover, Germany.
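
    The deterministic part of such an apparent-clock model (a low-order polynomial plus a known temperature coefficient) can be written schematically as

        x(t) = a_0 + a_1 (t - t_0) + a_2 (t - t_0)^2 + k_T \big(T(t) - T_{\mathrm{ref}}\big) + \epsilon(t),

    where k_T relates the ambient (or hardware) temperature to the apparent clock correction and \epsilon(t) is the residual stochastic clock behaviour. The symbols are this sketch's notation, not the paper's.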

  10. Electric transport measurements on bulk, polycrystalline MgB2 samples prepared at various reaction temperatures

    NASA Astrophysics Data System (ADS)

    Wiederhold, A.; Koblischka, M. R.; Inoue, K.; Muralidhar, M.; Murakami, M.; Hartmann, U.

    2016-03-01

    A series of disk-shaped, bulk MgB2 superconductors (sample diameter up to 4 cm) was prepared in order to improve the performance for superconducting super-magnets. Several samples were fabricated using a solid state reaction in pure Ar atmosphere from 750 to 950 °C in order to determine the optimum processing parameters to obtain the highest critical current density as well as large trapped field values. Additional samples were prepared with silver (up to 10 wt.%) added to the Mg and B powder. Magneto-resistance data and I/V-characteristics were recorded using an Oxford Instruments Teslatron system. From Arrhenius plots, we determine the TAFF pinning potential, U0. The I/V-characteristics yield detailed information on the current flow through the polycrystalline samples. The current flow is influenced by the presence of pores in the samples. Our analysis of the achieved critical currents together with a thorough microstructure investigation reveals that the samples prepared at temperatures between 775 °C and 805 °C exhibit the smallest grains and the best connectivity between them, while the samples fabricated at higher reaction temperatures show a reduced connectivity and lower pinning potential. Doping the samples with silver leads to a considerable increase of the pinning potential and hence the critical current densities.
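
    The pinning potential U0 quoted above follows from the thermally activated flux-flow (TAFF) form of the resistivity; in the standard Arrhenius analysis (sketched here, with k_B the Boltzmann constant),

        \rho(T, B) = \rho_0 \exp\!\left(-\frac{U_0(B)}{k_B T}\right) \;\;\Rightarrow\;\; \ln\rho = \ln\rho_0 - \frac{U_0(B)}{k_B} \cdot \frac{1}{T},

    so U_0(B) is obtained from the slope of ln ρ versus 1/T at fixed magnetic field.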

  11. Stratospheric Temperature Changes: Observations and Model Simulations

    NASA Technical Reports Server (NTRS)

    Ramaswamy, V.; Chanin, M.-L.; Angell, J.; Barnett, J.; Gaffen, D.; Gelman, M.; Keckhut, P.; Koshelkov, Y.; Labitzke, K.; Lin, J.-J. R.

    1999-01-01

    This paper reviews observations of stratospheric temperatures that have been made over a period of several decades. Those observed temperatures have been used to assess variations and trends in stratospheric temperatures. A wide range of observation datasets have been used, comprising measurements by radiosonde (1940s to the present), satellite (1979 - present), lidar (1979 - present) and rocketsonde (periods varying with location, but most terminating by about the mid-1990s). In addition, trends have also been assessed from meteorological analyses, based on radiosonde and/or satellite data, and products based on assimilating observations into a general circulation model. Radiosonde and satellite data indicate a cooling trend of the annual-mean lower stratosphere since about 1980. Over the period 1979-1994, the trend is 0.6 K/decade. For the period prior to 1980, the radiosonde data exhibit a substantially weaker long-term cooling trend. In the northern hemisphere, the cooling trend is about 0.75 K/decade in the lower stratosphere, with a reduction in the cooling in mid-stratosphere (near 35 km), and increased cooling in the upper stratosphere (approximately 2 K per decade at 50 km). Model simulations indicate that the depletion of lower stratospheric ozone is the dominant factor in the observed lower stratospheric cooling. In the middle and upper stratosphere both the well-mixed greenhouse gases (such as CO2) and ozone changes contribute in an important manner to the cooling.

  12. TEMPERATURE HISTORY AND DYNAMICAL EVOLUTION OF (101955) 1999 RQ 36: A POTENTIAL TARGET FOR SAMPLE RETURN FROM A PRIMITIVE ASTEROID

    SciTech Connect

    Delbo, Marco; Michel, Patrick

    2011-02-20

    It has been recently shown that near-Earth objects (NEOs) have a temperature history, due to radiative heating by the Sun, that is non-trivially correlated to their present orbits. This is because the perihelion distance of NEOs varies as a consequence of dynamical mechanisms, such as resonances and close encounters with planets. Thus, it is worth investigating the temperature history of NEOs that are potential targets of space missions devoted to return samples of prebiotic organic compounds. Some of these compounds, expected to be found on NEOs of primitive composition, break up at moderate temperatures, e.g., 300-670 K. Using a model of the orbital evolution of NEOs and thermal models, we studied the temperature history of (101955) 1999 RQ36 (the primary target of the mission OSIRIS-REx, proposed in the program New Frontiers of NASA). Assuming that the same material always lies on the surface (i.e., there is no regolith turnover), our results suggest that the temperatures reached during its past evolution affected the stability of some organic compounds at the surface (e.g., there is 50% probability that the surface of 1999 RQ36 was heated to temperatures ≥500 K). However, the temperature drops rapidly with depth: the regolith at a depth of 3-5 cm, which is not considered difficult to reach with the current designs of sampling devices, has experienced temperatures about 100 K below those at the surface. This is sufficient to protect some subsurface organics from thermal breakup.

  13. Modelling nanofluidic field amplified sample stacking with inhomogeneous surface charge

    NASA Astrophysics Data System (ADS)

    McCallum, Christopher; Pennathur, Sumita

    2015-11-01

    Nanofluidic technology has exceptional applications as a platform for biological sample preconcentration, which will allow for an effective electronic detection method of low concentration analytes. One such preconcentration method is field amplified sample stacking, a capillary electrophoresis technique that utilizes large concentration differences to generate high electric field gradients, causing the sample of interest to form a narrow, concentrated band. Field amplified sample stacking has been shown to work well at the microscale, with models and experiments confirming expected behavior. However, nanofluidics allows for further concentration enhancement due to focusing of the sample ions toward the channel center by the electric double layer. We have developed a two-dimensional model that can be used for both micro- and nanofluidics, fully accounting for the electric double layer. This model has been used to investigate even more complex physics such as the role of inhomogeneous surface charge.
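
    The coupled ion transport and electrostatics that such a two-dimensional model must resolve are, in generic form, the Poisson and Nernst-Planck equations with advection (species valence z_i, diffusivity D_i, concentration c_i, fluid velocity u); this is a standard statement of the governing equations, not the authors' exact formulation:

        \nabla\cdot(\varepsilon \nabla \phi) = -\rho_e = -F \sum_i z_i c_i, \qquad
        \frac{\partial c_i}{\partial t} = -\nabla\cdot\left(-D_i \nabla c_i - \frac{D_i z_i F}{R T} c_i \nabla\phi + c_i \mathbf{u}\right).

    Resolving the electric double layer means these equations must be solved without the thin-double-layer approximation commonly used at the microscale.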

  14. Estimating peak and solidification temperatures for anatectic pelitic migmatites using phase diagrams: sampling heterogeneous migmatites and confronting melt loss

    NASA Astrophysics Data System (ADS)

    Hamilton, Brett M.; Pattison, David R. M.

    2016-04-01

    Calculating a pressure-temperature phase diagram relevant to an anatectic pelitic migmatite sampled in outcrop is challenging because it is unclear what constitutes a meaningful bulk composition. Melt loss during metamorphism may have changed the bulk composition. The heterogeneous nature of migmatites, with light and dark coloured domains (leucosome and melanosome), means a choice must be made regarding how a migmatitic outcrop should be sampled. To address these issues, migmatites were simulated using thermodynamic modelling techniques for different melting and crystallization scenarios and bulk compositions. Using phase diagrams calculated for varying proportions of simulated melanosome and leucosome, temperatures of interest were estimated and compared with known values. Our modelling suggests: (1) It is generally possible to constrain the peak temperature using phase diagrams calculated with the composition of the melanosome; the more leucosome that is incorporated, the more inaccurate the estimate. For phase diagrams calculated using a combination of leucosome and melanosome material, peak temperature estimates differ from actual peak conditions by -25 to +50 °C. In certain of these cases, such as those involving high proportions of leucosome to melanosome, or in which solid K-feldspar was absent at peak conditions, but is now present in the rock due to later crystallization from melt, it is not possible to estimate peak temperature. (2) The solidification temperature, whether due to crystallization of the last melt or physical loss of the melt during crystallization, will fall between the peak temperature and the water-saturated solidus (~660 °C) if the melt and solids chemically interacted during cooling. This temperature can be accurately constrained from the phase diagram. If the melt crystallized in chemical isolation from the melanosome, the solidification temperature is the water-saturated solidus (625-645 °C); however, physical melt loss during

  15. Slice sampling technique in Bayesian extreme of gold price modelling

    NASA Astrophysics Data System (ADS)

    Rostami, Mohammad; Adam, Mohd Bakri; Ibrahim, Noor Akma; Yahya, Mohamed Hisham

    2013-09-01

    In this paper, a simulation study of Bayesian extreme values using Markov chain Monte Carlo via the slice sampling algorithm is implemented. We compared the accuracy of slice sampling with other methods for a Gumbel model. This study revealed that the slice sampling algorithm offers more accurate and closer estimates, with lower RMSE, than the other methods. Finally, we successfully employed this procedure to estimate the parameters of extreme Malaysian gold prices from 2000 to 2011.
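
    For reference, a minimal univariate slice-sampling step of the kind used in such MCMC schemes (stepping-out and shrinkage, in the spirit of Neal 2003) is sketched below; log_target is a generic placeholder for an unnormalised log posterior, not the Gumbel posterior of the paper.

    # One-dimensional slice sampler with stepping-out and shrinkage.
    import math
    import random

    def slice_sample(log_target, x0, n_samples=1000, w=1.0, max_steps=50):
        """log_target: unnormalised log density; x0: starting point; w: step width."""
        samples, x = [], x0
        for _ in range(n_samples):
            # Draw the auxiliary "height" defining the horizontal slice.
            log_y = log_target(x) + math.log(random.random())
            # Step out an interval [left, right] that brackets the slice.
            left = x - w * random.random()
            right = left + w
            j, k = max_steps, max_steps
            while j > 0 and log_target(left) > log_y:
                left -= w
                j -= 1
            while k > 0 and log_target(right) > log_y:
                right += w
                k -= 1
            # Sample uniformly from the interval, shrinking it on rejection.
            while True:
                x_new = random.uniform(left, right)
                if log_target(x_new) > log_y:
                    x = x_new
                    break
                if x_new < x:
                    left = x_new
                else:
                    right = x_new
            samples.append(x)
        return samples

    A call such as slice_sample(log_posterior, x0=initial_value) then returns draws for a single parameter; log_posterior here stands in for whatever unnormalised Gumbel log-posterior the user supplies.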

  16. Long-term storage of salivary cortisol samples at room temperature

    NASA Technical Reports Server (NTRS)

    Chen, Yu-Ming; Cintron, Nitza M.; Whitson, Peggy A.

    1992-01-01

    Collection of saliva samples for the measurement of cortisol during space flights provides a simple technique for studying changes in adrenal function due to microgravity. In the present work, several methods for preserving saliva cortisol at room temperature were investigated using radioimmunoassays for determining cortisol in saliva samples collected on a saliva-collection device called Salivettes. It was found that a pretreatment of Salivettes with citric acid resulted in preserving more than 85 percent of the salivary cortisol for as long as six weeks. The results correlated well with those for a sample stored in a freezer on an untreated Salivette.

  17. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  18. Graphite sample preparation for AMS in a high pressure and temperature press

    USGS Publications Warehouse

    Rubin, M.; Mysen, B.O.; Polach, H.

    1984-01-01

    A high pressure-high temperature press is used to make target material for accelerator mass spectrometry. Graphite was produced from typical 14C samples including oxalic acid and carbonates. Beam strength of 12C was generally adequate, but random radioactive contamination by 14C made age measurements impractical. © 1984.

  19. Graphite sample preparation for AMS in a high pressure and temperature press

    USGS Publications Warehouse

    Rubin, Meyer; Mysen, Bjorn O.; Polach, Henry

    1984-01-01

    A high pressure-temperature press is used to make target material for accelerator mass spectrometry. Graphite was produced from typical 14C samples including oxalic acid and carbonates. Beam strength of 12C was generally adequate, but random radioactive contamination by 14C made age measurements impractical.

  20. Slow and rapid response: The temperature memory of smectite in JFAST's Tohoku earthquake core samples (Japan)

    NASA Astrophysics Data System (ADS)

    Schleicher, Anja M.; Boles, Austin; van der Pluijm, Ben A.

    2014-05-01

    The ability of clay minerals to absorb and retain interlayer water during large slip events can be limited because of locally high frictional heating temperatures. Core samples from JFAST (Japan Trench Fast Drilling Project) Expedition 343 provide a unique opportunity to characterize smectitic clay minerals in fault rocks of an active plate-boundary fault that produced a displacement of ~50 meters during the Tohoku earthquake of 2011. Smectite is abundant in the fault zone identified at 820 mbsf. Chemical compositions analyzed by ICP-OES show a significant amount of Fe, and lesser Mg and K. In order to analyze the swelling capacity of smectite during slow and rapid temperature changes, we heated and cooled samples in steps of 25 °C from 25 to 225 °C at different rates, using a temperature stage and humidity chamber attached to an X-ray diffractometer. Rapid heating is represented by 5 min and slow heating by 5 hours for each sample. Cooling back to ~25 °C was achieved within 15-40 min, depending on the maximum heating temperature. X-ray analyses of randomly-oriented and oriented samples were conducted on air-dried and glycolated samples. Illite and smectite are the most abundant clay mineral types detected. Mineralogic characterization of illite shows 1Md and 2M1 polytypes, with authigenically formed 1Md slightly more abundant in the finer grained material. Clay size fractions 0.05 - 0.5 microns show pure smectite with a characteristic interlayer distance of 1.2 nm that increases to 1.7 nm after ethylene glycolization, indicating up to 3 water layers. Based on slow and rapid heating and cooling experiments of these samples, we observe that (i) both slow and fast heating causes similar reduction of water layers in smectite, (ii) smectite recovers faster to the original hydration state after quick heating than slow heating, and (iii) non-recoverable collapse of all smectite occurs at temperatures > 200 °C, regardless of the heating rate. Based on these results

  1. Calorimeters for Precision Power Dissipation Measurements on Controlled-Temperature Superconducting Radiofrequency Samples

    SciTech Connect

    Xiao, Binping P.; Kelley, Michael J.; Reece, Charles E.; Phillips, H. L.

    2012-12-01

    Two calorimeters, with stainless steel and Cu as the thermal path material for high precision and high power versions, respectively, have been designed and commissioned for the surface impedance characterization (SIC) system at Jefferson Lab to provide low temperature control and measurement for CW power up to 22 W on a 5 cm dia. disk sample which is thermally isolated from the RF portion of the system. A power compensation method has been developed to measure the RF induced power on the sample. Simulation and experimental results show that with these two calorimeters, the whole thermal range of interest for superconducting radiofrequency (SRF) materials has been covered. The power measurement error in the interested power range is within 1.2% and 2.7% for the high precision and high power versions, respectively. Temperature distributions on the sample surface for both versions have been simulated and the accuracy of sample temperature measurements have been analysed. Both versions have the ability to accept bulk superconductors and thin film superconducting samples with a variety of substrate materials such as Al, Al2O3, Cu, MgO, Nb and Si.

  2. Calorimeters for precision power dissipation measurements on controlled-temperature superconducting radiofrequency samples.

    PubMed

    Xiao, B P; Reece, C E; Phillips, H L; Kelley, M J

    2012-12-01

    Two calorimeters, with stainless steel and Cu as the thermal path material for high precision and high power versions, respectively, have been designed and commissioned for the 7.5 GHz surface impedance characterization system at Jefferson Lab to provide low temperature control and measurement for CW power up to 22 W on a 5 cm diameter disk sample which is thermally isolated from the radiofrequency (RF) portion of the system. A power compensation method has been developed to measure the RF induced power on the sample. Simulation and experimental results show that with these two calorimeters, the whole thermal range of interest for superconducting radiofrequency materials has been covered. The power measurement error in the interested power range is within 1.2% and 2.7% for the high precision and high power versions, respectively. Temperature distributions on the sample surface for both versions have been simulated and the accuracy of sample temperature measurements have been analyzed. Both versions have the ability to accept bulk superconductors and thin film superconducting samples with a variety of substrate materials such as Al, Al2O3, Cu, MgO, Nb, and Si. PMID:23278016

  3. Flexible sample environment for high resolution neutron imaging at high temperatures in controlled atmosphere

    NASA Astrophysics Data System (ADS)

    Makowska, Małgorzata G.; Theil Kuhn, Luise; Cleemann, Lars N.; Lauridsen, Erik M.; Bilheux, Hassina Z.; Molaison, Jamie J.; Santodonato, Louis J.; Tremsin, Anton S.; Grosse, Mirco; Morgano, Manuel; Kabra, Saurabh; Strobl, Markus

    2015-12-01

    High material penetration by neutrons allows for experiments using sophisticated sample environments providing complex conditions. Thus, neutron imaging holds potential for performing in situ nondestructive measurements on large samples or even full technological systems, which are not possible with any other technique. This paper presents a new sample environment for in situ high resolution neutron imaging experiments at temperatures from room temperature up to 1100 °C and/or using controllable flow of reactive atmospheres. The design also offers the possibility to directly combine imaging with diffraction measurements. Design, special features, and specification of the furnace are described. In addition, examples of experiments successfully performed at various neutron facilities with the furnace, as well as examples of possible applications are presented. This covers a broad field of research from fundamental to technological investigations of various types of materials and components.

  4. Flexible sample environment for high resolution neutron imaging at high temperatures in controlled atmosphere

    SciTech Connect

    Makowska, Małgorzata G.; Theil Kuhn, Luise; Cleemann, Lars N.; Lauridsen, Erik M.; Bilheux, Hassina Z.; Molaison, Jamie J.; Santodonato, Louis J.; Tremsin, Anton S.; Grosse, Mirco; Morgano, Manuel; Kabra, Saurabh; Strobl, Markus

    2015-12-15

    High material penetration by neutrons allows for experiments using sophisticated sample environments providing complex conditions. Thus, neutron imaging holds potential for performing in situ nondestructive measurements on large samples or even full technological systems, which are not possible with any other technique. This paper presents a new sample environment for in situ high resolution neutron imaging experiments at temperatures from room temperature up to 1100 °C and/or using controllable flow of reactive atmospheres. The design also offers the possibility to directly combine imaging with diffraction measurements. Design, special features, and specification of the furnace are described. In addition, examples of experiments successfully performed at various neutron facilities with the furnace, as well as examples of possible applications are presented. This covers a broad field of research from fundamental to technological investigations of various types of materials and components.

  5. Flexible sample environment for high resolution neutron imaging at high temperatures in controlled atmosphere

    SciTech Connect

    Makowska, Małgorzata G.; Theil Kuhn, Luise; Cleemann, Lars N.; Lauridsen, Erik M.; Bilheux, Hassina Z.; Molaison, Jamie J.; Santodonato, Louis J.; Tremsin, Anton S.; Grosse, Mirco; Morgano, Manuel; Kabra, Saurabh; Strobl, Markus

    2015-12-17

    High material penetration by neutrons allows for experiments using sophisticated sample environments providing complex conditions. Thus, neutron imaging holds potential for performing in situ nondestructive measurements on large samples or even full technological systems, which are not possible with any other technique. Our paper presents a new sample environment for in situ high resolution neutron imaging experiments at temperatures from room temperature up to 1100 °C and/or using controllable flow of reactive atmospheres. The design also offers the possibility to directly combine imaging with diffraction measurements. Design, special features, and specification of the furnace are described. In addition, examples of experiments successfully performed at various neutron facilities with the furnace, as well as examples of possible applications are presented. Our work covers a broad field of research from fundamental to technological investigations of various types of materials and components.

  6. Temperature programmed desorption studies of water interactions with Apollo lunar samples 12001 and 72501

    NASA Astrophysics Data System (ADS)

    Poston, Michael J.; Grieves, Gregory A.; Aleksandrov, Alexandr B.; Hibbitts, Charles A.; Dyar, M. Darby; Orlando, Thomas M.

    2015-07-01

    The desorption activation energies for water molecules chemisorbed on Apollo lunar samples 72501 (highlands soil) and 12001 (mare soil) were determined by temperature programmed desorption experiments in ultra-high vacuum. A significant difference in both the energies and abundance of chemisorption sites was observed, with 72501 retaining up to 40 times more water (by mass) and with much stronger adsorption interactions, possibly approaching 1.5 eV. The dramatic difference between the samples may be due to differences in mineralogy and surface exposure age. The distribution function of water desorption activation energies for sample 72501 was used as an initial condition to simulate water persistence through a temperature profile matching the lunar day.
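
    Desorption activation energies in temperature programmed desorption are conventionally extracted from a first-order Polanyi-Wigner rate expression (sketched here; the specific inversion method used in the paper is not stated in this abstract):

        -\frac{d\theta}{dt} = \nu \, \theta \, \exp\!\left(-\frac{E_a}{k_B T}\right), \qquad T(t) = T_0 + \beta t,

    with coverage \theta, pre-exponential factor \nu and linear heating rate \beta. Fitting the desorption traces over a distribution of E_a yields the activation-energy spectrum that can then be propagated through a lunar-day temperature profile to estimate water persistence.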

  7. Flexible sample environment for high resolution neutron imaging at high temperatures in controlled atmosphere

    DOE PAGES

    Makowska, Małgorzata G.; Theil Kuhn, Luise; Cleemann, Lars N.; Lauridsen, Erik M.; Bilheux, Hassina Z.; Molaison, Jamie J.; Santodonato, Louis J.; Tremsin, Anton S.; Grosse, Mirco; Morgano, Manuel; et al

    2015-12-17

    High material penetration by neutrons allows for experiments using sophisticated sample environments providing complex conditions. Thus, neutron imaging holds potential for performing in situ nondestructive measurements on large samples or even full technological systems, which are not possible with any other technique. Our paper presents a new sample environment for in situ high resolution neutron imaging experiments at temperatures from room temperature up to 1100 °C and/or using controllable flow of reactive atmospheres. The design also offers the possibility to directly combine imaging with diffraction measurements. Design, special features, and specification of the furnace are described. In addition, examples of experiments successfully performed at various neutron facilities with the furnace, as well as examples of possible applications are presented. Our work covers a broad field of research from fundamental to technological investigations of various types of materials and components.

  8. Flexible sample environment for high resolution neutron imaging at high temperatures in controlled atmosphere.

    PubMed

    Makowska, Małgorzata G; Theil Kuhn, Luise; Cleemann, Lars N; Lauridsen, Erik M; Bilheux, Hassina Z; Molaison, Jamie J; Santodonato, Louis J; Tremsin, Anton S; Grosse, Mirco; Morgano, Manuel; Kabra, Saurabh; Strobl, Markus

    2015-12-01

    High material penetration by neutrons allows for experiments using sophisticated sample environments providing complex conditions. Thus, neutron imaging holds potential for performing in situ nondestructive measurements on large samples or even full technological systems, which are not possible with any other technique. This paper presents a new sample environment for in situ high resolution neutron imaging experiments at temperatures from room temperature up to 1100 °C and/or using controllable flow of reactive atmospheres. The design also offers the possibility to directly combine imaging with diffraction measurements. Design, special features, and specification of the furnace are described. In addition, examples of experiments successfully performed at various neutron facilities with the furnace, as well as examples of possible applications are presented. This covers a broad field of research from fundamental to technological investigations of various types of materials and components. PMID:26724075

  9. Thermal mapping and trends of Mars analog materials in sample acquisition operations using experimentation and models

    NASA Astrophysics Data System (ADS)

    Szwarc, Timothy; Hubbard, Scott

    2014-09-01

    The effects of atmosphere, ambient temperature, and geologic material were studied experimentally and using a computer model to predict the heating undergone by Mars rocks during rover sampling operations. Tests were performed on five well-characterized and/or Mars analog materials: Indiana limestone, Saddleback basalt, kaolinite, travertine, and water ice. Eighteen tests were conducted to 55 mm depth using a Mars Sample Return prototype coring drill, with each sample containing six thermal sensors. A thermal simulation was written to predict the complete thermal profile within each sample during coring and this model was shown to be capable of predicting temperature increases with an average error of about 7%. This model may be used to schedule power levels and periods of rest during actual sample acquisition processes to avoid damaging samples or freezing the bit into icy formations. Maximum rock temperature increase is found to be modeled by a power law incorporating rock and operational parameters. Energy transmission efficiency in coring is found to increase linearly with rock hardness and decrease by 31% at Mars pressure.

  10. Effects and Mitigation of Clear Sky Sampling on Recorded Trends in Land Surface Temperature

    NASA Astrophysics Data System (ADS)

    Holmes, T. R.; Hain, C.; de Jeu, R.; Anderson, M. C.; Crow, W. T.

    2015-12-01

    Land surface temperature (LST) is a key input for physically-based retrieval algorithms of hydrological states and fluxes. Yet, it remains a poorly constrained parameter for global scale studies. The two main observational methods for remotely measuring LST are based on thermal infrared (TIR) observations and passive microwave (MW) observations. TIR is the most commonly used approach and the method of choice to provide standard LST products for various satellite missions. MW-based LST retrievals, on the other hand, are not as widely adopted for land applications; currently their principal use is in soil moisture retrieval algorithms. MW and TIR technologies present two highly complementary and independent means of measuring LST. MW observations have a high tolerance to clouds but a low spatial resolution, and TIR has a high spatial resolution with temporal sampling restricted to clear skies. This paper builds on recent progress in characterizing the main structural differences between TIR LST and MW Ka-band observations, the MW frequency that is most suitable for LST sensing. By accounting for differences in diurnal timing (phase lag with solar noon), amplitude, and emissivity we construct a MW-based LST dataset that matches the diurnal characteristics of the TIR-based LSA SAF LST record. This new global dataset of MW-based LST currently spans the period of 2003-2013. In this paper we will present results of a validation of MW LST with in situ data with special emphasis on the effect of cloudiness on the performance. The ability to remotely sense the temperature of cloud-covered land is what sets this MW-LST dataset apart from existing (much higher resolution) TIR-based products. As an example of this we will therefore explore how MW LST can mitigate the effect of clear-sky sampling in the context of trend and anomaly detection. We do this by contrasting monthly means of TIR-LST with its clear-sky and all-sky equivalent from an MW-LST and an NWP model.

  11. A Unimodal Model for Double Observer Distance Sampling Surveys

    PubMed Central

    Becker, Earl F.; Christ, Aaron M.

    2015-01-01

    Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, where the two observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied. PMID:26317984
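
    Schematically, a two-piece (split) normal detection function has the form

        g(x) \propto
        \begin{cases}
        \exp\!\left[-\dfrac{(x-\mu)^2}{2\sigma_1^2}\right], & x \le \mu,\\
        \exp\!\left[-\dfrac{(x-\mu)^2}{2\sigma_2^2}\right], & x > \mu,
        \end{cases}

    with apex at distance \mu and separate left and right scale parameters. In this sketch, covariates entering through \sigma_1 and \sigma_2 leave the apex at \mu, consistent with the abstract's point-independence argument; the exact parameterisation used in the paper may differ.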

  12. Characterization of Wafer-Level Au-In-Bonded Samples at Elevated Temperatures

    NASA Astrophysics Data System (ADS)

    Luu, Thi-Thuy; Hoivik, Nils; Wang, Kaiying; Aasmundtveit, Knut E.; Vardøy, Astrid-Sofie B.

    2015-06-01

    Wafer-level bonding using Au-In solid liquid interdiffusion (SLID) bonding is a promising approach to enable low-temperature assembly and MEMS packaging/encapsulation. Due to the low-melting point of In, wafer-level bonding can be performed at considerably lower temperatures than Sn-based bonding; this work treats bonds performed at 453 K (180 °C). Following bonding, the die shear strength at elevated temperatures was investigated from room temperature to 573 K (300 °C), revealing excellent mechanical integrity at these temperatures well above the bonding temperature. For shear test temperatures from room temperature to 473 K (200 °C), the measured shear strength was stable at 30 MPa, whereas it increased to 40 MPa at shear test temperature of 573 K (300 °C). The fracture surfaces of Au-In-bonded samples revealed brittle fracture modes (at the original bond interface and at the adhesion layers) for shear test temperatures up to 473 K (200 °C), but ductile fracture mode for shear test temperature of 573 K (300 °C). The as-bonded samples have a layered structure consisting of the two intermetallic phases AuIn and γ', as shown by cross section microscopy and predicted from the phase diagram. The change in behavior for the tests at 573 K (300 °C) is attributed to a solid-state phase transition occurring at 497 K (224 °C), where the phase diagram predicts a AuIn/ψ structure and a phase boundary moving across the initial bond interface. The associated interdiffusion of Au and In will strengthen the initial bond interface and, as a consequence, the measured shear strength. This work provides experimental evidence for the high-temperature stability of wafer-level, low-temperature bonded, Au-In SLID bonds. The high bond strength obtained is limited by the strength at the initial bond interface and at the adhesion layers, showing that the Au-In SLID system itself is capable of even higher bond strength.

  13. Dynamical models of a sample of Population II stars

    NASA Astrophysics Data System (ADS)

    Levison, H. F.; Richstone, D. O.

    1986-09-01

    Dynamical models are constructed in order to investigate the implications of recent kinematic data of distant Population II stars on the emissivity distribution of those stars. Models are constructed using a modified Schwarzschild method in two extreme scale-free potentials, spherical and E6 elliptical. Both potentials produce flat rotation curves and velocity dispersion profiles. In all models, the distribution of stars in this sample is flat. Moreover, it is not possible to construct a model with a strictly spheroidal emissivity distribution. Most models have dimples at the poles. The dynamics of the models indicate that the system is supported by both the third integral and z angular momentum.

  14. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    ERIC Educational Resources Information Center

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  15. Bayesian Estimation of the DINA Model with Gibbs Sampling

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew

    2015-01-01

    A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…
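
    For readers unfamiliar with the likelihood that the Gibbs sampler targets, here is a minimal Python sketch of the DINA item response probability (illustrative only, not the paper's sampler): a respondent answers item j correctly with probability 1 - s_j if they possess every attribute the Q-matrix requires for that item, and with probability g_j otherwise.

      import numpy as np

      def dina_correct_prob(alpha, q, guess, slip):
          """P(correct response) under the DINA model.

          alpha : (K,) binary attribute profile of one respondent
          q     : (J, K) binary Q-matrix (which attributes each item requires)
          guess : (J,) guessing parameters g_j
          slip  : (J,) slipping parameters s_j
          """
          eta = np.all(alpha >= q, axis=1).astype(float)   # 1 if all required attributes are mastered
          return (1.0 - slip) ** eta * guess ** (1.0 - eta)

      # example: 2 attributes, 3 items
      alpha = np.array([1, 0])
      q = np.array([[1, 0], [0, 1], [1, 1]])
      print(dina_correct_prob(alpha, q, guess=np.full(3, 0.2), slip=np.full(3, 0.1)))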

  16. Sample size calculation for the proportional hazards cure model.

    PubMed

    Wang, Songfeng; Zhang, Jiajia; Lu, Wenbin

    2012-12-20

    In clinical trials with time-to-event endpoints, it is not uncommon to see a significant proportion of patients being cured (or long-term survivors), such as in trials for non-Hodgkin's lymphoma. The popularly used sample size formula derived under the proportional hazards (PH) model may not be appropriate for designing a survival trial with a cure fraction, because the PH model assumption may be violated. To account for a cure fraction, the PH cure model is widely used in practice, where a PH model is used for the survival times of uncured patients and a logistic model is used for the probability of patients being cured. In this paper, we develop a sample size formula on the basis of the PH cure model by investigating the asymptotic distributions of the standard weighted log-rank statistics under the null and local alternative hypotheses. The derived sample size formula under the PH cure model is more flexible because it can be used to test differences in the short-term survival and/or the cure fraction. Furthermore, we investigate, as numerical examples, the impacts of accrual methods and of the durations of the accrual and follow-up periods on sample size calculation. The results show that ignoring the cure rate in sample size calculation can lead to either underpowered or overpowered studies. We evaluate the performance of the proposed formula by simulation studies and provide an example to illustrate its application with the use of data from a melanoma trial. PMID:22786805

  17. Headspace-programmed temperature vaporizer-mass spectrometry and pattern recognition techniques for the analysis of volatiles in saliva samples.

    PubMed

    Pérez Antón, Ana; Del Nogal Sánchez, Miguel; Crisolino Pozas, Ángel Pedro; Pérez Pavón, José Luis; Moreno Cordero, Bernardo

    2016-11-01

    A rapid method for the analysis of volatiles in saliva samples is proposed. The method is based on direct coupling of three components: a headspace sampler (HS), a programmable temperature vaporizer (PTV) and a quadrupole mass spectrometer (qMS). Several applications in the biomedical field have been proposed with electronic noses based on different sensors; however, few contributions to date have used a mass spectrometry-based electronic nose in this field. Samples from 23 patients with some type of cancer and 32 healthy volunteers were analyzed with HS-PTV-MS, and the profile signals obtained were subjected to pattern recognition techniques with the aim of studying the ability of the methodology to differentiate patients with cancer from healthy controls. An initial inspection of the information contained in the data by means of principal component analysis (PCA) revealed a complex situation where an overlapping distribution of samples was seen in the score plot instead of two separated groups. Models using k-nearest neighbors (KNN) and soft independent modeling of class analogy (SIMCA) showed poor discrimination, especially with SIMCA, where a small distance between classes was obtained and no satisfactory classification of the external validation samples was achieved. Good results were obtained when Mahalanobis discriminant analysis (DA) and support vector machines (SVM) were used, with 2 (false positives) and 0 samples misclassified in the external validation set, respectively. No false negatives were found using these techniques. PMID:27591583
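
    A minimal Python sketch of this kind of chemometric workflow (illustrative only; random placeholder data stand in for the HS-PTV-MS profiles, and the scikit-learn models and settings are assumptions rather than the authors' exact ones):

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      # X: one row per saliva sample (m/z intensity profile); y: 1 = cancer, 0 = control
      rng = np.random.default_rng(0)
      X = rng.normal(size=(55, 120))                 # placeholder for 23 + 32 measured profiles
      y = np.r_[np.ones(23), np.zeros(32)]

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

      # unsupervised inspection: scores on the first two principal components
      # (these would be plotted to visualise the overlap between classes)
      scores = make_pipeline(StandardScaler(), PCA(n_components=2)).fit_transform(X_tr)

      # two of the supervised classifiers compared in the study (KNN vs. SVM)
      for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=3)),
                        ("SVM", SVC(kernel="rbf", C=1.0))]:
          model = make_pipeline(StandardScaler(), clf).fit(X_tr, y_tr)
          print(name, "external validation accuracy:", model.score(X_te, y_te))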

  18. Importance of sample form and surface temperature for analysis by ambient plasma mass spectrometry (PADI).

    PubMed

    Salter, Tara La Roche; Bunch, Josephine; Gilmore, Ian S

    2014-09-16

    Many different types of samples have been analyzed in the literature using plasma-based ambient mass spectrometry sources; however, comprehensive studies of the important parameters for analysis are only just beginning. Here, we investigate the effect of the sample form and surface temperature on the signal intensities in plasma-assisted desorption ionization (PADI). The form of the sample is very important, with powders of all volatilities effectively analyzed. However, for the analysis of thin films at room temperature and using a low plasma power, a vapor pressure of greater than 10^-4 Pa is required to achieve a sufficiently good quality spectrum. Using thermal desorption, we are able to increase the signal intensity of less volatile materials with vapor pressures less than 10^-4 Pa, in thin film form, by between 4 and 7 orders of magnitude. This is achieved by increasing the temperature of the sample up to a maximum of 200 °C. Thermal desorption can also increase the signal intensity for the analysis of powders. PMID:25137443

  19. Development of an Integrated Thermocouple for the Accurate Sample Temperature Measurement During High Temperature Environmental Scanning Electron Microscopy (HT-ESEM) Experiments.

    PubMed

    Podor, Renaud; Pailhon, Damien; Ravaux, Johann; Brau, Henri-Pierre

    2015-04-01

    We have developed two integrated thermocouple (TC) crucible systems that allow precise measurement of sample temperature when using a furnace associated with an environmental scanning electron microscope (ESEM). Sample temperatures measured with these systems are precise (±5°C) and reliable. The TC crucible systems allow working with solids and liquids (silicate melts or ionic liquids), independent of the gas composition and pressure. These sample holder designs will allow end users to perform experiments at high temperature in the ESEM chamber with high precision control of the sample temperature. PMID:25898837

  20. Sampling artifact in volume weighted velocity measurement. I. Theoretical modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Pengjie; Zheng, Yi; Jing, Yipeng

    2015-02-01

    Cosmology based on large-scale peculiar velocity prefers volume-weighted velocity statistics. However, measuring volume-weighted velocity statistics from inhomogeneously distributed galaxies (simulation particles/halos) suffers from an inevitable and significant sampling artifact. We study this sampling artifact in the velocity power spectrum measured by the nearest-particle velocity assignment method of Zheng et al. [Phys. Rev. D 88, 103510 (2013)]. We derive the analytical expression of the leading and higher order terms. We find that the sampling artifact suppresses the z = 0 E-mode velocity power spectrum by ~10% at k = 0.1 h/Mpc, for samples with number density 10^-3 (Mpc/h)^-3. This suppression becomes larger for larger k and for sparser samples. We argue that this source of systematic error in peculiar velocity cosmology, albeit severe, can be self-calibrated in the framework of our theoretical modelling. We also work out the sampling artifact in the density-velocity cross power spectrum measurement. A more robust evaluation of the related statistics through simulations will be presented in a companion paper by Zheng et al. [Sampling artifact in volume weighted velocity measurement. II. Detection in simulations and comparison with theoretical modelling, arXiv:1409.6809]. We also argue that a similar sampling artifact exists in other velocity assignment methods and hence must be carefully corrected to avoid systematic bias in peculiar velocity cosmology.

  1. Sampling theory applied to measurement and analysis of temperature for climate studies

    NASA Technical Reports Server (NTRS)

    Edwards, Howard B.

    1987-01-01

    Of all the errors discussed in climatology literature, aliasing errors caused by undersampling of unsmoothed or improperly smoothed temperature data seem to be completely overlooked. This is a serious oversight in view of long-term trends of 1 K or less. Adequate sampling of properly smoothed data is demonstrated with a Hamming digital filter. It is also demonstrated that hourly temperatures, daily averages, and annual averages free of aliasing errors can be obtained by use of a microprocessor added to standard weather sensors and recorders.
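
    A minimal Python sketch of the anti-aliasing idea (assumed sampling rates and filter length, not Edwards' original design): minute-resolution temperatures are low-pass filtered with a Hamming-window FIR whose cutoff lies below the Nyquist frequency of the decimated series, and only then thinned to hourly values, so fast fluctuations cannot fold back as aliases.

      import numpy as np
      from scipy.signal import firwin, filtfilt

      fs = 1.0 / 60.0               # raw sampling: one reading per minute (Hz)
      decim = 60                    # keep one sample per hour after smoothing
      nyq_new = fs / (2 * decim)    # Nyquist frequency of the hourly series

      # Hamming-window FIR low-pass with cutoff safely below the new Nyquist frequency
      taps = firwin(numtaps=301, cutoff=0.8 * nyq_new, window="hamming", fs=fs)

      t = np.arange(0, 30 * 24 * 3600, 60.0)                     # 30 days of minute data
      temp = 15 + 8 * np.sin(2 * np.pi * t / 86400) \
               + 0.5 * np.sin(2 * np.pi * t / 600)               # fast fluctuation to suppress
      smoothed = filtfilt(taps, [1.0], temp)                      # zero-phase smoothing
      hourly = smoothed[::decim]                                  # alias-free hourly series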

  2. Integrated research in constitutive modelling at elevated temperatures, part 1

    NASA Technical Reports Server (NTRS)

    Haisler, W. E.; Allen, D. H.

    1986-01-01

    Topics covered include: numerical integration techniques; thermodynamics and internal state variables; experimental lab development; comparison of models at room temperature; comparison of models at elevated temperature; and integrated software development.

  3. Optimizing the Operating Temperature for an array of MOX Sensors on an Open Sampling System

    NASA Astrophysics Data System (ADS)

    Trincavelli, M.; Vergara, A.; Rulkov, N.; Murguia, J. S.; Lilienthal, A.; Huerta, R.

    2011-09-01

    Chemo-resistive transduction is essential for capturing the spatio-temporal structure of chemical compounds dispersed in different environments. Due to gas dispersion mechanisms, namely diffusion, turbulence and advection, the sensors in an open sampling system, i.e. directly exposed to the environment to be monitored, encounter low gas concentrations with many fluctuations, which makes the identification and monitoring of the gases even more complicated and challenging than in a controlled laboratory setting. Therefore, tuning the operating temperature becomes crucial for successfully identifying and monitoring pollutant gases, particularly in applications such as exploration of hazardous areas, air pollution monitoring, and search and rescue. In this study we demonstrate the benefit of optimizing the sensors' operating temperature when they are deployed in an open sampling system, i.e. directly exposed to the environment to be monitored.

  4. Characterization of Decommissioned PWR Vessel Internals Materials Samples: Material Certification, Fluence, and Temperature (Nonproprietary Version)

    SciTech Connect

    M. Krug; R. Shogan; A. Fero; M. Snyder

    2004-11-01

    Pressurized water reactor (PWR) cores operate under extreme environmental conditions due to coolant chemistry, operating temperature, and neutron exposure. Extending the life of PWRs requires detailed knowledge of the changes in mechanical and corrosion properties of the structural austenitic stainless steel components adjacent to the fuel. This report contains basic material characterization information for the as-installed samples of reactor internals material which were harvested from a decommissioned PWR.

  5. Impact of multicollinearity on small sample hydrologic regression models

    NASA Astrophysics Data System (ADS)

    Kroll, Charles N.; Song, Peter

    2013-06-01

    Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R^2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed, since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions, with varying levels of multicollinearity, that are as good as biased regression techniques such as PCR and PLS.
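
    A compressed Python sketch of this kind of Monte Carlo comparison (illustrative assumptions: two explanatory variables with correlation 0.95 and unit true coefficients; PCR is built as PCA followed by OLS on the retained component):

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(1)
      n, rho, n_rep = 20, 0.95, 1000
      beta = np.array([1.0, 1.0])
      cov = [[1.0, rho], [rho, 1.0]]

      ols_b, pcr_b = [], []
      for _ in range(n_rep):
          X = rng.multivariate_normal([0.0, 0.0], cov, size=n)
          y = X @ beta + rng.normal(size=n)
          # ordinary least squares on the correlated explanatory variables
          ols_b.append(LinearRegression().fit(X, y).coef_)
          # principal component regression: regress on the first PC, map back
          pca = PCA(n_components=1).fit(X)
          lr = LinearRegression().fit(pca.transform(X), y)
          pcr_b.append(pca.components_.T @ lr.coef_)

      print("OLS coefficient std:", np.std(ols_b, axis=0))   # inflated by collinearity
      print("PCR coefficient std:", np.std(pcr_b, axis=0))   # smaller variance, some bias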

  6. An open-population hierarchical distance sampling model

    USGS Publications Warehouse

    Sollmann, Rachel; Beth Gardner; Richard B Chandler; Royle, J. Andrew; T Scott Sillett

    2015-01-01

    Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for direct estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for island scrub-jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data-generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying numbers of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.
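
    A highly simplified Python sketch of the data-generating side of such a simulation study (parameter values are illustrative, not the authors'): abundance at each survey point evolves as a Markovian process with a fixed rate of change, and each survey thins the local population through a half-normal distance detection function.

      import numpy as np

      rng = np.random.default_rng(2)
      n_points, n_surveys = 100, 6
      lam0, rate = 20.0, 0.95           # initial expected abundance and yearly rate of change
      sigma, w = 50.0, 150.0            # half-normal detection scale and strip half-width (m)

      counts = np.zeros((n_surveys, n_points), dtype=int)
      N = rng.poisson(lam0, size=n_points)                     # abundance at first survey
      for t in range(n_surveys):
          for j in range(n_points):
              d = rng.uniform(0, w, size=N[j])                 # distances of available animals
              detected = rng.random(N[j]) < np.exp(-d**2 / (2 * sigma**2))
              counts[t, j] = detected.sum()
          N = rng.poisson(rate * N)                            # Markovian population dynamics

      print("mean count per survey:", counts.mean(axis=1))     # declining trend to be detected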

  7. Colour matching of isoluminant samples and backgrounds: a model.

    PubMed

    Stanikunas, Rytis; Vaitkevicius, Henrikas; Kulikowski, Janus J; Murray, Ian J; Daugirdiene, Avsra

    2005-01-01

    A cone-opponent-based vector model is used to derive the activity in the red-green, yellow-blue, and achromatic channels during a sequential asymmetric colour-matching experiment. Forty Munsell samples, simulated under illuminant C, were matched with their appearance under eight test illuminants. The test samples and backgrounds were photometrically isoluminant with each other. According to the model, the orthogonality of the channels is revealed when test illuminants lie along either the red-green or the yellow-blue cardinal axis. The red-green and yellow-blue outputs of the channels are described in terms of the hue of the sample. The fact that the three-channel model explains the data in a colour-matching experiment indicates that an early form of colour processing is mediated at a site where the three channels converge, probably the input layer of V1. PMID:16178154

  8. Operating parameters of liquid helium transfer lines used with continuous flow cryostats at low sample temperatures

    NASA Astrophysics Data System (ADS)

    Dittmar, N.; Welker, D.; Haberstroh, Ch; Hesse, U.; Krzyzowski, M.

    2015-12-01

    Continuous flow cryostats are used to cool samples to a variable temperature level by evaporating a cryogen, e.g. liquid helium (LHe). For this purpose LHe is usually stored outside the cryostat in a mobile dewar and supplied through a transfer line. In general, the complete setup has to be characterised by the lowest possible consumption of LHe. Additionally, a minimum sample temperature can be favourable from an experimental point of view. The achievement of both requirements is determined by the respective cryostat design as well as by the transfer line. In the presented work, operating data such as the LHe consumption during cooldown and steady state, the minimum sample temperature, and the outlet quality are analysed to characterise the performance of a reference transfer line. In addition, an experimental transfer line with built-in pressure sensors has been commissioned to examine the pressure drop along the transfer line. During the tests LHe impurities occurred which restricted steady operation.

  9. Effect of vacuum packing and temperature on survival and hatching of strongyle eggs in faecal samples.

    PubMed

    Sengupta, Mita E; Thapa, Sundar; Thamsborg, Stig M; Mejer, Helena

    2016-02-15

    Strongyle eggs of helminths of livestock usually hatch within a few hours or days after deposition with faeces. This poses a problem when faecal sampling is performed in the field. As oxygen is needed for embryonic development, it is recommended to reduce air supply during transport and to refrigerate. The present study therefore investigated the combined effect of vacuum packing and temperature on survival of strongyle eggs and their subsequent ability to hatch and develop into L3. Fresh faecal samples were collected from calves infected with Cooperia oncophora, pigs infected with Oesophagostomum dentatum, and horses infected with Strongylus vulgaris and cyathostomins. The samples were allocated into four treatments: vacuum packing and storage at 5 °C or 20 °C (5 V and 20 V); normal packing in plastic gloves closed with a loose knot and storage at 5 °C or 20 °C (5 N and 20 N). The number of eggs per gram faeces (EPG) was estimated every fourth day until day 28 post set up (p.s.) by a concentration McMaster-method. Larval cultures were prepared on day 0, 12 and 28 p.s. and the larval yield determined. For C. oncophora, the EPG was significantly higher in vacuum packed samples after 28 days as compared to normal storage, regardless of temperature. However, O. dentatum EPG was significantly higher in samples kept at 5 °C as compared to 20 °C, irrespective of packing. For the horse strongyles, vacuum packed samples at 5 °C had a significantly higher EPG compared to the other treatments after 28 days. The highest larval yields of O. dentatum and the horse strongyles were obtained from fresh faecal samples; however, if storage is necessary prior to setting up larval cultures, O. dentatum should be kept at room temperature (aerobic or anaerobic), whereas horse strongyle coprocultures should ideally be set up on the day of collection to ensure maximum yield. Eggs of C. oncophora should be kept vacuum packed at room temperature for the highest larval yield. PMID:26827855

  10. The effect of low-temperature demagnetization on paleointensity determinations from samples with different domain states

    NASA Astrophysics Data System (ADS)

    Kulakov, E.; Smirnov, A. V.

    2013-05-01

    It has been recently proposed that incorporation of low-temperature demagnetization (LTD) into the Thellier double-heating method increases the accuracy and success rate of paleointensity experiments by reducing the effects of magnetic remanence carried by large pseudo-single-domain (PSD) and multidomain (MD) grains (e.g., Celino et al., Geophysical Research Letters, 34, L12306, 2007). However, it has been unclear to what degree LTD affects the remanence carried by single-domain (SD) and small PSD grains. To investigate this problem, we carried out paleointensity experiments on synthetic magnetite-bearing samples containing nearly SD, PSD, and MD grains as well as mixtures of MD and SD grains. Before the experiments, a thermal remanent magnetization was imparted to the samples in a known laboratory field. Paleointensities were determined using both the LTD-Thellier and multi-specimen parallel pTRM methods. The samples were subjected to a series of three LTD treatments in liquid nitrogen after each heating. LTD significantly improved the quality of paleointensity determinations from the samples containing large PSD and MD magnetite as well as SD-MD mixtures. In particular, LTD resulted in a significant increase of the paleointensity quality factor, producing more linear Arai plots and reducing data scatter. In addition, field intensities calculated after LTD fell within 2-4% of the known laboratory field. On the other hand, the effect of LTD on paleointensity determinations from samples with nearly SD magnetite is negligible. Paleointensity values based on both pre- and post-LTD data were statistically indistinguishable from the laboratory field. LTD treatment significantly reduced the systematic paleofield overestimation of the multi-specimen method for samples containing PSD and MD grains, as well as SD-MD mixtures. The results of multi-specimen paleointensity experiments performed on the PSD and MD samples using different heating temperatures suggest

  11. Fluorescence of commercial Pluronic F127 samples: Temperature-dependent micellization.

    PubMed

    Perry, Christopher C; Sabir, Theodore S; Livingston, Wesley J; Milligan, Jamie R; Chen, Qiao; Maskiewicz, Victoria; Boskovic, Danilo S

    2011-02-15

    We present a novel approach of using the butylated hydroxytoluene (BHT) antioxidant found in commercial Pluronic F127 samples as a marker of polymer aggregation. The BHT marker was compared to the pyrene dye and static light scattering methods as a way to measure the critical micelle concentration (CMC) and critical micelle temperature (CMT). The n→π* transitions of BHT are sensitive to the microenvironment, as demonstrated by plotting the fractional intensities of its excitation (≈280 nm) and emission (≈325 nm) peaks. BHT is more sensitive to changes in temperature than concentration. The partition coefficient increases ≈40-fold for pyrene compared to ≈2-fold for BHT when the temperature is increased from 25 to 37 °C. CMT values determined using the BHT fluorescence decrease with increasing F127 concentration. Our results show that BHT can be used as a reliable marker of changes in the microenvironment of Pluronic F127. PMID:21087773

  12. Accelerating the Convergence of Replica Exchange Simulations Using Gibbs Sampling and Adaptive Temperature Sets

    DOE PAGESBeta

    Vogel, Thomas; Perez, Danny

    2015-08-28

    We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T(U) or, equivalently, of the density of states g(U) over a wide range of energies.

  13. Accelerating the Convergence of Replica Exchange Simulations Using Gibbs Sampling and Adaptive Temperature Sets

    SciTech Connect

    Vogel, Thomas; Perez, Danny

    2015-08-28

    We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T (U) or, equivalently, of the density of states g(U) over a wide range of energies.

  14. Analytic Models of High-Temperature Hohlraums

    SciTech Connect

    Stygar, W.A.; Olson, R.E.; Spielman, R.B.; Leeper, R.J.

    2000-11-29

    A unified set of high-temperature-hohlraum models has been developed. For a simple hohlraum, P_S = [A_S + (1 - α_W)A_W + A_H] σ T_R^4 + (4Vσ/c)(dT_R^4/dt), where P_S is the total power radiated by the source, A_S is the source area, A_W is the area of the cavity wall excluding the source and holes in the wall, A_H is the area of the holes, σ is the Stefan-Boltzmann constant, T_R is the radiation brightness temperature, V is the hohlraum volume, and c is the speed of light. The wall albedo α_W ≡ (T_W/T_R)^4, where T_W is the brightness temperature of area A_W. The net power radiated by the source P_N = P_S - A_S σ T_R^4, which suggests that for laser-driven hohlraums the conversion efficiency η_CE be defined as P_N/P_LASER. The characteristic time required to change T_R^4 in response to a change in P_N is 4V/{c[(1 - α_W)A_W + A_H]}. Using this model, T_R, α_W, and η_CE can be expressed in terms of quantities directly measurable in a hohlraum experiment. For a steady-state hohlraum that encloses a convex capsule, P_N = {(1 - α_W)A_W + A_H + [(1 - α_C)(A_S + A_W α_W)A_C/A_T]} σ T_RC^4, where α_C is the capsule albedo, A_C is the capsule area, A_T ≡ (A_S + A_W + A_H), and T_RC is the brightness temperature of the radiation that drives the capsule. According to this relation, the capsule-coupling efficiency of the baseline National-Ignition-Facility (NIF) hohlraum is 15% higher than predicted by previous analytic expressions. A model of a hohlraum that encloses a z pinch is also presented.
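
    The steady-state limit of the first relation can be turned into a small Python sketch (the geometry, source power and wall albedo below are illustrative placeholders, not NIF design values): dropping the dT_R^4/dt term and solving for T_R gives the brightness temperature sustained by a given source power.

      SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

      def hohlraum_tr(p_source, a_source, a_wall, a_holes, albedo_wall):
          """Steady-state radiation brightness temperature of a simple hohlraum.

          Solves P_S = [A_S + (1 - alpha_W) A_W + A_H] * sigma * T_R^4 for T_R.
          """
          loss_area = a_source + (1.0 - albedo_wall) * a_wall + a_holes
          return (p_source / (SIGMA * loss_area)) ** 0.25

      # illustrative numbers only: 100 TW source, areas in m^2, wall albedo 0.8
      print(hohlraum_tr(p_source=1.0e14, a_source=5e-5,
                        a_wall=3e-5, a_holes=1e-5, albedo_wall=0.8))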

  15. Helium flow and temperatures in a heated sample of a final ITER TF cable-in-conduit conductor

    NASA Astrophysics Data System (ADS)

    Herzog, Robert; Lewandowska, Monika; Calvi, Marco; Bessette, Denis

    2010-06-01

    The quest for a detailed understanding of the thermo-hydraulic behaviour of the helium flow in the dual-channel cable-in-conduit conductor (CICC) for the ITER toroidal-field coils led to a series of experiments in the SULTAN test facility on a dedicated sample made according to the final conductor design. With helium flowing through the conductor as expected during ITER operation, the sample was heated by eddy-current losses induced in the strands by an applied AC magnetic field as well as by foil heaters mounted on the outside of the conductor jacket. Temperature sensors mounted on the jacket surface, in the central channel and at different radii in the sub-cable region showed the longitudinal as well as radial temperature distribution at different mass flow rates and heat loads. Spot heaters in the bundle and the central channel created small heated helium regions, which were detected downstream by a series of temperature sensors. With a time-of-flight method the helium velocity could thus be determined independently of any flow model. The temperature and velocity distributions in bundle and central channel under different mass-flow and heat load conditions thus led to a detailed picture of the helium flow in the final ITER TF CICCs.
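
    The time-of-flight evaluation mentioned above can be sketched in Python as follows (hypothetical sensor spacing and synthetic traces, not the SULTAN data): the transit time of the small heated helium region is taken as the lag that maximises the cross-correlation between the upstream and downstream temperature traces, and the flow velocity follows as distance over delay.

      import numpy as np

      def tof_velocity(t, sig_upstream, sig_downstream, separation):
          """Helium flow velocity from the delay between two temperature traces.

          t              : time vector (s), uniform sampling
          sig_upstream   : temperature (or heater current) at the spot heater
          sig_downstream : temperature at a sensor `separation` metres downstream
          """
          dt = t[1] - t[0]
          a = sig_upstream - sig_upstream.mean()
          b = sig_downstream - sig_downstream.mean()
          cc = np.correlate(b, a, mode="full")            # correlation of b against shifted a
          lag = (np.argmax(cc) - (len(a) - 1)) * dt       # seconds; positive = downstream later
          return separation / lag

      # synthetic example: a heat pulse arriving 2 s later at a sensor 1 m downstream
      t = np.arange(0, 20, 0.01)
      pulse = np.exp(-0.5 * ((t - 5.0) / 0.5) ** 2)
      delayed = np.exp(-0.5 * ((t - 7.0) / 0.6) ** 2)
      print(tof_velocity(t, pulse, delayed, separation=1.0))   # ~0.5 m/s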

  16. On species sampling sequences induced by residual allocation models.

    PubMed

    Rodríguez, Abel; Quintana, Fernando A

    2015-02-01

    We discuss fully Bayesian inference in a class of species sampling models that are induced by residual allocation (sometimes called stick-breaking) priors on almost surely discrete random measures. This class provides a generalization of the well-known Ewens sampling formula that allows for additional flexibility while retaining computational tractability. In particular, the procedure is used to derive the exchangeable predictive probability functions associated with the generalized Dirichlet process of Hjort (2000) and the probit stick-breaking prior of Chung and Dunson (2009) and Rodriguez and Dunson (2011). The procedure is illustrated with applications to genetics and nonparametric mixture modeling. PMID:25477705
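
    To make the stick-breaking (residual allocation) construction concrete, here is a minimal Python sketch for the plain Dirichlet-process case (not the generalized priors treated in the paper): weights are carved from a unit stick by Beta(1, α) breaks, and a species sampling sequence is then drawn from the resulting discrete random measure.

      import numpy as np

      rng = np.random.default_rng(3)

      def stick_breaking_weights(alpha, n_atoms):
          """Truncated stick-breaking weights of a Dirichlet process with concentration alpha."""
          v = rng.beta(1.0, alpha, size=n_atoms)
          remaining = np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
          return v * remaining

      alpha, n_atoms, n_draws = 2.0, 200, 50
      w = stick_breaking_weights(alpha, n_atoms)
      w = w / w.sum()                                   # renormalise away the truncation error
      labels = rng.choice(n_atoms, size=n_draws, p=w)   # species sampling sequence
      print("distinct species in", n_draws, "draws:", len(np.unique(labels)))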

  17. On species sampling sequences induced by residual allocation models

    PubMed Central

    Rodríguez, Abel; Quintana, Fernando A.

    2014-01-01

    We discuss fully Bayesian inference in a class of species sampling models that are induced by residual allocation (sometimes called stick-breaking) priors on almost surely discrete random measures. This class provides a generalization of the well-known Ewens sampling formula that allows for additional flexibility while retaining computational tractability. In particular, the procedure is used to derive the exchangeable predictive probability functions associated with the generalized Dirichlet process of Hjort (2000) and the probit stick-breaking prior of Chung and Dunson (2009) and Rodriguez and Dunson (2011). The procedure is illustrated with applications to genetics and nonparametric mixture modeling. PMID:25477705

  18. Geostatistical modeling of riparian forest microclimate and its implications for sampling

    USGS Publications Warehouse

    Eskelson, B.N.I.; Anderson, P.D.; Hagar, J.C.; Temesgen, H.

    2011-01-01

    Predictive models of microclimate under various site conditions in forested headwater stream - riparian areas are poorly developed, and sampling designs for characterizing underlying riparian microclimate gradients are sparse. We used riparian microclimate data collected at eight headwater streams in the Oregon Coast Range to compare ordinary kriging (OK), universal kriging (UK), and kriging with external drift (KED) for point prediction of mean maximum air temperature (Tair). Several topographic and forest structure characteristics were considered as site-specific parameters. Height above stream and distance to stream were the most important covariates in the KED models, which outperformed OK and UK in terms of root mean square error. Sample patterns were optimized based on the kriging variance and the weighted means of shortest distance criterion using the simulated annealing algorithm. The optimized sample patterns outperformed systematic sample patterns in terms of mean kriging variance mainly for small sample sizes. These findings suggest methods for increasing efficiency of microclimate monitoring in riparian areas.
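
    A compact Python sketch of the ordinary kriging (OK) variant compared above, with an assumed exponential covariance model and made-up coordinates; kriging with external drift (KED) would add the drift covariates (height above stream, distance to stream) as extra rows in the same linear system.

      import numpy as np

      def exp_cov(h, sill=1.0, range_=50.0, nugget=0.05):
          """Exponential covariance model assumed for this sketch."""
          return np.where(h == 0, sill + nugget, sill * np.exp(-h / range_))

      def ordinary_kriging(xy_obs, z_obs, xy_new):
          """Point prediction (e.g. of Tair) by ordinary kriging, with kriging variance."""
          n = len(z_obs)
          d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
          K = np.empty((n + 1, n + 1))
          K[:n, :n] = exp_cov(d_obs)
          K[n, :], K[:, n], K[n, n] = 1.0, 1.0, 0.0          # unbiasedness constraint
          preds, variances = [], []
          for p in np.atleast_2d(xy_new):
              d0 = np.linalg.norm(xy_obs - p, axis=1)
              k = np.append(exp_cov(d0), 1.0)
              w = np.linalg.solve(K, k)
              preds.append(w[:n] @ z_obs)
              variances.append(exp_cov(0.0) - w @ k)          # kriging variance
          return np.array(preds), np.array(variances)

      # made-up microclimate observations (coordinates in m, Tair in deg C)
      xy = np.array([[0.0, 0.0], [30.0, 10.0], [60.0, 40.0], [90.0, 80.0]])
      tair = np.array([24.5, 23.9, 22.8, 21.7])
      print(ordinary_kriging(xy, tair, np.array([[45.0, 25.0]])))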

  19. Compact low temperature scanning tunneling microscope with in-situ sample preparation capability

    NASA Astrophysics Data System (ADS)

    Kim, Jungdae; Nam, Hyoungdo; Qin, Shengyong; Kim, Sang-ui; Schroeder, Allan; Eom, Daejin; Shih, Chih-Kang

    2015-09-01

    We report on the design of a compact low temperature scanning tunneling microscope (STM) having in-situ sample preparation capability. The in-situ sample preparation chamber was designed to be compact allowing quick transfer of samples to the STM stage, which is ideal for preparing temperature sensitive samples such as ultra-thin metal films on semiconductor substrates. Conventional spring suspensions on the STM head often cause mechanical issues. To address this problem, we developed a simple vibration damper consisting of welded metal bellows and rubber pads. In addition, we developed a novel technique to ensure an ultra-high-vacuum (UHV) seal between the copper and stainless steel, which provides excellent reliability for cryostats operating in UHV. The performance of the STM was tested from 2 K to 77 K by using epitaxial thin Pb films on Si. Very high mechanical stability was achieved with clear atomic resolution even when using cryostats operating at 77 K. At 2 K, a clean superconducting gap was observed, and the spectrum was easily fit using the BCS density of states with negligible broadening.

  20. Effect of short-term room temperature storage on the microbial community in infant fecal samples

    PubMed Central

    Guo, Yong; Li, Sheng-Hui; Kuang, Ya-Shu; He, Jian-Rong; Lu, Jin-Hua; Luo, Bei-Jun; Jiang, Feng-Ju; Liu, Yao-Zhong; Papasian, Christopher J.; Xia, Hui-Min; Deng, Hong-Wen; Qiu, Xiu

    2016-01-01

    Sample storage conditions are important for unbiased analysis of microbial communities in metagenomic studies. Specifically, for infant gut microbiota studies, stool specimens are often exposed to room temperature (RT) conditions prior to analysis. This could lead to variations in structural and quantitative assessment of bacterial communities. To estimate such effects of RT storage, we collected feces from 29 healthy infants (0–3 months) and partitioned each sample into 5 portions to be stored for different lengths of time at RT before freezing at −80 °C. Alpha diversity did not differ between samples with storage time from 0 to 2 hours. The UniFrac distances and microbial composition analysis showed significant differences by testing among individuals, but not by testing between different time points at RT. Changes in the relative abundance of some specific (less common, minor) taxa were still found during storage at room temperature. Our results support previous studies in children and adults, and provided useful information for accurate characterization of infant gut microbiomes. In particular, our study furnished a solid foundation and justification for using fecal samples exposed to RT for less than 2 hours for comparative analyses between various medical conditions. PMID:27226242

  1. Compact low temperature scanning tunneling microscope with in-situ sample preparation capability.

    PubMed

    Kim, Jungdae; Nam, Hyoungdo; Qin, Shengyong; Kim, Sang-ui; Schroeder, Allan; Eom, Daejin; Shih, Chih-Kang

    2015-09-01

    We report on the design of a compact low temperature scanning tunneling microscope (STM) having in-situ sample preparation capability. The in-situ sample preparation chamber was designed to be compact allowing quick transfer of samples to the STM stage, which is ideal for preparing temperature sensitive samples such as ultra-thin metal films on semiconductor substrates. Conventional spring suspensions on the STM head often cause mechanical issues. To address this problem, we developed a simple vibration damper consisting of welded metal bellows and rubber pads. In addition, we developed a novel technique to ensure an ultra-high-vacuum (UHV) seal between the copper and stainless steel, which provides excellent reliability for cryostats operating in UHV. The performance of the STM was tested from 2 K to 77 K by using epitaxial thin Pb films on Si. Very high mechanical stability was achieved with clear atomic resolution even when using cryostats operating at 77 K. At 2 K, a clean superconducting gap was observed, and the spectrum was easily fit using the BCS density of states with negligible broadening. PMID:26429448

  2. Effect of short-term room temperature storage on the microbial community in infant fecal samples.

    PubMed

    Guo, Yong; Li, Sheng-Hui; Kuang, Ya-Shu; He, Jian-Rong; Lu, Jin-Hua; Luo, Bei-Jun; Jiang, Feng-Ju; Liu, Yao-Zhong; Papasian, Christopher J; Xia, Hui-Min; Deng, Hong-Wen; Qiu, Xiu

    2016-01-01

    Sample storage conditions are important for unbiased analysis of microbial communities in metagenomic studies. Specifically, for infant gut microbiota studies, stool specimens are often exposed to room temperature (RT) conditions prior to analysis. This could lead to variations in structural and quantitative assessment of bacterial communities. To estimate such effects of RT storage, we collected feces from 29 healthy infants (0-3 months) and partitioned each sample into 5 portions to be stored for different lengths of time at RT before freezing at -80 °C. Alpha diversity did not differ between samples with storage time from 0 to 2 hours. The UniFrac distances and microbial composition analysis showed significant differences by testing among individuals, but not by testing between different time points at RT. Changes in the relative abundance of some specific (less common, minor) taxa were still found during storage at room temperature. Our results support previous studies in children and adults, and provided useful information for accurate characterization of infant gut microbiomes. In particular, our study furnished a solid foundation and justification for using fecal samples exposed to RT for less than 2 hours for comparative analyses between various medical conditions. PMID:27226242

  3. Compact low temperature scanning tunneling microscope with in-situ sample preparation capability

    SciTech Connect

    Kim, Jungdae; Nam, Hyoungdo; Schroeder, Allan; Shih, Chih-Kang; Qin, Shengyong; Kim, Sang-ui; Eom, Daejin

    2015-09-15

    We report on the design of a compact low temperature scanning tunneling microscope (STM) having in-situ sample preparation capability. The in-situ sample preparation chamber was designed to be compact allowing quick transfer of samples to the STM stage, which is ideal for preparing temperature sensitive samples such as ultra-thin metal films on semiconductor substrates. Conventional spring suspensions on the STM head often cause mechanical issues. To address this problem, we developed a simple vibration damper consisting of welded metal bellows and rubber pads. In addition, we developed a novel technique to ensure an ultra-high-vacuum (UHV) seal between the copper and stainless steel, which provides excellent reliability for cryostats operating in UHV. The performance of the STM was tested from 2 K to 77 K by using epitaxial thin Pb films on Si. Very high mechanical stability was achieved with clear atomic resolution even when using cryostats operating at 77 K. At 2 K, a clean superconducting gap was observed, and the spectrum was easily fit using the BCS density of states with negligible broadening.

  4. New high temperature plasmas and sample introduction systems for analytical atomic emission and mass spectrometry

    SciTech Connect

    Montaser, A.

    1990-01-01

    In this project, new high temperature plasmas and new sample introduction systems are developed for rapid elemental and isotopic analysis of gases, solutions, and solids using atomic emission spectrometry (AES) and mass spectrometry (MS). These devices offer promise of solving singularly difficult analytical problems that either exist now or are likely to arise in the future in the various fields of energy generation, environmental pollution, biomedicine and nutrition. Emphasis is being placed on: generation of annular, helium inductively coupled plasmas (He ICPs) that are suitable for atomization, excitation, and ionization of elements possessing high excitation and ionization energies, with the intent of enhancing the detection powers of a number of elements; diagnostic studies of high-temperature plasmas to quantify their fundamental properties, with the ultimate aim of improving the analytical performance of atomic spectrometry; and development and characterization of new sample introduction systems that consume microliter or microgram quantities of samples, together with investigation of new membrane separators for stripping solvent from the sample aerosol to reduce various interferences and to enhance sensitivity in plasma spectrometry.

  5. Multiple sample characterization of coals and other substances by controlled-atmosphere programmed temperature oxidation

    DOEpatents

    LaCount, Robert B.

    1993-01-01

    A furnace with two hot zones holds multiple analysis tubes. Each tube has a separable sample-packing section positioned in the first hot zone and a catalyst-packing section positioned in the second hot zone. A mass flow controller is connected to an inlet of each sample tube, and gas is supplied to the mass flow controller. Oxygen is supplied through a mass flow controller to each tube, either at the inlet of the first tube section, at an intermediate portion between the tube sections, or at both, to intermingle with and oxidize the entrained gases evolved from the sample. Oxidation of those gases is completed in the catalyst in each second tube section. A thermocouple within a sample reduces furnace temperature when an exothermic condition is sensed within the sample. Oxidized gases flow from outlets of the tubes to individual gas cells. The cells are sequentially aligned with an infrared detector, which senses the composition and quantities of the gas components. Each elongated cell is tapered inward toward the center from cell windows at the ends. Volume is reduced relative to a conventional cell, while permitting maximum interaction of gas with the light beam. The reduced volume and the angulation of the cell inlets provide rapid purging of the cell, allowing shorter cycles between detections. For coal and other high molecular weight samples, from 50% to 100% oxygen is introduced to the tubes.

  6. Modelling LARES temperature distribution and thermal drag

    NASA Astrophysics Data System (ADS)

    Nguyen, Phuc H.; Matzner, Richard

    2015-10-01

    The LARES satellite, a laser-ranged space experiment to contribute to geophysics observation, and to measure the general relativistic Lense-Thirring effect, has been observed to undergo an anomalous along-track orbital acceleration of -0.4 pm/s^2 (pm := picometer). This thermal "drag" is not surprising; along-track thermal drag has previously been observed with the related LAGEOS satellites (-3.4 pm/s^2). It is hypothesized that the thermal drag is principally due to anisotropic thermal radiation from the satellite's exterior. We report the results of numerical computations of the along-track orbital decay of the LARES satellite during the first 126 days after launch. The results depend to a significant degree on the visual and IR absorbance α and emissivity ε of the fused silica Cube Corner Reflectors (CCRs). We present results for two values of α_IR = ε_IR: 0.82, a standard number for "clean" fused silica; and 0.60, a possible value for silica with slight surface contamination subjected to the space environment. The heating and the resultant along-track acceleration depend on the plane of the orbit, the sun position, and, in particular, on the occurrence of eclipses, all of which are functions of time. Thus we compute the thermal drag for specific days. We compare our model to observational data, available for a 120 day period starting with the 7th day after launch, which shows the average acceleration of -0.4 pm/s^2. With our model the average along-track thermal drag over this 120 day period for CCR α_IR = ε_IR = 0.82 was computed to be -0.59 pm/s^2. For CCR α_IR = ε_IR = 0.60 we compute -0.36 pm/s^2. LARES consists of a solid tungsten sphere, into which the CCRs are set in colatitude circles. Our calculation models the satellite as 93 isothermal elements: the tungsten part, and each of the 92 Cube Corner Reflectors. The satellite is heated from two sources: sunlight and Earth's infrared (IR) radiation. We work in the fast-spin regime, where CCRs with

  7. Canopy temperature depression sampling to assess grain yield and genotypic differentiation in winter wheat

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Canopy temperature depression (CTD = Ta - Tc) has been used to model crop yield, heat, and drought tolerance; but when to measure CTD for breeding selection has seldom been addressed. Our objective was to determine optimal measurement times in relation to growth stage, time of day, and environment. ...

  8. A Nonlinear Viscoelastic Model for Ceramics at High Temperatures

    NASA Technical Reports Server (NTRS)

    Powers, Lynn M.; Panoskaltsis, Vassilis P.; Gasparini, Dario A.; Choi, Sung R.

    2002-01-01

    High-temperature creep behavior of ceramics is characterized by nonlinear time-dependent responses, asymmetric behavior in tension and compression, and nucleation and coalescence of voids leading to creep rupture. Moreover, creep rupture experiments show considerable scatter or randomness in fatigue lives of nominally equal specimens. To capture the nonlinear, asymmetric time-dependent behavior, the standard linear viscoelastic solid model is modified. Nonlinearity and asymmetry are introduced in the volumetric components by using a nonlinear function similar to a hyperbolic sine function but modified to model asymmetry. The nonlinear viscoelastic model is implemented in an ABAQUS user material subroutine. To model the random formation and coalescence of voids, each element is assigned a failure strain sampled from a lognormal distribution. An element is deleted when its volumetric strain exceeds its failure strain. Element deletion has been implemented within ABAQUS. Temporal increases in strains produce a sequential loss of elements (a model for void nucleation and growth), which in turn leads to failure. Nonlinear viscoelastic model parameters are determined from uniaxial tensile and compressive creep experiments on silicon nitride. The model is then used to predict the deformation of four-point bending and ball-on-ring specimens. Simulation is used to predict statistical moments of creep rupture lives. Numerical simulation results compare well with results of experiments of four-point bending specimens. The analytical model is intended to be used to predict the creep rupture lives of ceramic parts in arbitrary stress conditions.
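
    The random failure-strain idea can be sketched outside of a finite-element code as follows (Python; the distribution parameters and strain history are illustrative): each element draws a failure strain from a lognormal distribution and is removed once the imposed volumetric strain exceeds it, so repeated runs with different random draws rupture at scattered times.

      import numpy as np

      rng = np.random.default_rng(4)
      n_elements = 1000
      # lognormal failure strains (median ~1% volumetric strain, illustrative scatter)
      fail_strain = rng.lognormal(mean=np.log(0.01), sigma=0.3, size=n_elements)

      time = np.linspace(0.0, 1e4, 500)                     # creep time (s)
      strain = 2.0e-6 * time                                # imposed volumetric strain history
      alive = np.ones(n_elements, dtype=bool)
      rupture_fraction = []
      for eps in strain:
          alive &= fail_strain > eps                        # delete elements past their failure strain
          rupture_fraction.append(1.0 - alive.mean())

      # call the run "ruptured" once, say, 50% of elements have been deleted
      idx = int(np.searchsorted(rupture_fraction, 0.5))
      print("creep rupture time ~", time[min(idx, len(time) - 1)], "s")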

  9. Modeling 3D faces from samplings via compressive sensing

    NASA Astrophysics Data System (ADS)

    Sun, Qi; Tang, Yanlong; Hu, Ping

    2013-07-01

    3D data is easier to acquire for family entertainment purposes today because of the mass production, cheapness and portability of domestic RGBD sensors, e.g., Microsoft Kinect. However, the accuracy of facial modeling is affected by the roughness and instability of the raw input data from such sensors. To overcome this problem, we introduce a compressive sensing (CS) method to build a novel 3D super-resolution scheme that reconstructs high-resolution facial models from rough samples captured by Kinect. Unlike simple frame-fusion super-resolution methods, this approach aims to acquire compressed samples for storage before a high-resolution image is produced. In this scheme, depth frames are first captured and then each of them is measured into compressed samples using sparse coding. Next, the samples are fused to produce an optimal one, and finally a high-resolution image is recovered from the fused sample. This framework is able to recover the 3D facial model of a given user from compressed samples, which can reduce storage space as well as measurement cost in future devices, e.g., single-pixel depth cameras. Hence, this work can potentially be applied in future applications, such as access control systems using face recognition, and smart phones with depth cameras, which require high resolution and short measurement times.
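
    A toy Python sketch of the measure-then-recover loop underlying compressive sensing (a generic 1-D example with assumed sizes, not the facial-depth pipeline): a sparse signal is measured with a random matrix, only the compressed samples are kept, and the signal is later recovered with orthogonal matching pursuit.

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(5)
      n, m, k = 256, 64, 8                         # signal length, measurements, sparsity

      x = np.zeros(n)
      x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse "depth" signal
      A = rng.normal(size=(m, n)) / np.sqrt(m)                  # random measurement matrix
      y = A @ x                                                 # compressed samples (what is stored)

      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, y)
      x_hat = omp.coef_
      print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))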

  10. Modeling Background Attenuation by Sample Matrix in Gamma Spectrometric Analyses

    SciTech Connect

    Bastos, Rodrigo O.; Appoloni, Carlos R.

    2008-08-07

    In laboratory gamma spectrometric analyses, the procedures for estimating background usually overestimate it. If an empty container similar to that used to hold samples is measured, it does not account for the background attenuation by the sample matrix. If a 'blank' sample is measured, the hypothesis that this sample will be free of radionuclides is generally not true. The activity of this 'blank' sample is frequently sufficient to mask or to overwhelm the effect of attenuation, so that the background remains overestimated. In order to overcome this problem, a model was developed to obtain the attenuated background from the spectrum acquired with the empty container. Beyond reasonable hypotheses, the model presumes knowledge of the linear attenuation coefficient of the samples and its dependence on photon energy and sample density. An evaluation of the effects of this model on the Lowest Limit of Detection (LLD) is presented for geological samples placed in cylindrical containers that completely cover the top of an HPGe detector that has a 66% relative efficiency. The results are presented for energies in the range of 63 to 2614 keV, for sample densities varying from 1.5 to 2.5 g·cm^-3, and for heights of the material on the detector of 2 cm and 5 cm. For a sample density of 2.0 g·cm^-3 and a 2 cm height, the method allowed for a lowering of the LLD by 3.4% for the 1460 keV energy of ^40K, 3.9% for the 911 keV energy of ^228Ac, 4.5% for the 609 keV energy of ^214Bi, and 8.3% for the 92 keV energy of ^234Th. For a sample density of 1.75 g·cm^-3 and a 5 cm height, the method indicates a lowering of 6.5%, 7.4%, 8.3% and 12.9% of the LLD for the same respective energies.
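
    The correction described above can be sketched in Python as follows (simplified geometry: background photons are assumed to traverse one effective path length through the sample, and the attenuation values are placeholders, not NIST data): the background spectrum measured with the empty container is scaled by the energy-dependent transmission through the sample matrix before it is used.

      import numpy as np

      def attenuated_background(bg_counts, mu_mass, density, thickness_cm):
          """Scale an empty-container background spectrum by sample-matrix transmission.

          bg_counts    : background counts per energy bin (empty container)
          mu_mass      : mass attenuation coefficient per bin, cm^2/g, at the bin energy
          density      : sample bulk density, g/cm^3
          thickness_cm : effective path length of background photons through the sample
          """
          transmission = np.exp(-mu_mass * density * thickness_cm)
          return bg_counts * transmission

      # illustrative bins at 92, 609, 911 and 1460 keV (all values assumed, for demonstration)
      bg = np.array([120.0, 40.0, 25.0, 18.0])
      mu = np.array([0.17, 0.08, 0.065, 0.053])     # cm^2/g, assumed
      print(attenuated_background(bg, mu, density=2.0, thickness_cm=2.0))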

  11. Long-term room temperature preservation of corpse soft tissue: an approach for tissue sample storage

    PubMed Central

    2011-01-01

    Background Disaster victim identification (DVI) represents one of the most difficult challenges in forensic sciences, and subsequent DNA typing is essential. Collected samples for DNA-based human identification are usually stored at low temperature to halt the degradation processes of human remains. We have developed a simple and reliable procedure for soft tissue storage and preservation for DNA extraction. It ensures high quality DNA suitable for PCR-based DNA typing after at least 1 year of room temperature storage. Methods Fragments of human psoas muscle were exposed to three different environmental conditions for diverse time periods at room temperature. Storage conditions included: (a) a preserving medium consisting of solid sodium chloride (salt), (b) no additional substances and (c) garden soil. DNA was extracted with proteinase K/SDS followed by organic solvent treatment and concentration by centrifugal filter devices. Quantification was carried out by real-time PCR using commercial kits. Short tandem repeat (STR) typing profiles were analysed with 'expert software'. Results DNA quantities recovered from samples stored in salt were similar up to the complete storage time and underscored the effectiveness of the preservation method. It was possible to reliably and accurately type different genetic systems including autosomal STRs and mitochondrial and Y-chromosome haplogroups. Autosomal STR typing quality was evaluated by expert software, denoting high quality profiles from DNA samples obtained from corpse tissue stored in salt for up to 365 days. Conclusions The procedure proposed herein is a cost efficient alternative for storage of human remains in challenging environmental areas, such as mass disaster locations, mass graves and exhumations. This technique should be considered as an additional method for sample storage when preservation of DNA integrity is required for PCR-based DNA typing. PMID:21846338

  12. Accelerated failure time model under general biased sampling scheme.

    PubMed

    Kim, Jane Paik; Sit, Tony; Ying, Zhiliang

    2016-07-01

    Right-censored time-to-event data are sometimes observed from a (sub)cohort of patients whose survival times can be subject to outcome-dependent sampling schemes. In this paper, we propose a unified estimation method for semiparametric accelerated failure time models under general biased sampling schemes. The proposed estimator of the regression covariates is developed upon a bias-offsetting weighting scheme and is proved to be consistent and asymptotically normally distributed. Large sample properties of the estimator are also derived. Using rank-based monotone estimating functions for the regression parameters, we find that the estimating equations can be easily solved via convex optimization. The methods are confirmed through simulations and illustrated by application to real datasets on various sampling schemes, including length-biased sampling, the case-cohort design and its variants. PMID:26941240

  13. A three stage sampling model for remote sensing applications

    NASA Technical Reports Server (NTRS)

    Eisgruber, L. M.

    1972-01-01

    A conceptual model and an empirical application are reported for the relationship between the manner of selecting observations and the precision of estimates obtained from remote sensing. This three stage sampling scheme considers flightlines, segments within flightlines, and units within these segments. The error of estimate depends on the number of observations in each of the stages.

  14. Automated biowaste sampling system urine subsystem operating model, part 1

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.; Mangialardi, J. K.; Rosen, F.

    1973-01-01

    The urine subsystem automatically provides for the collection, volume sensing, and sampling of urine from six subjects during space flight. Verification of the subsystem design was a primary objective of the current effort which was accomplished thru the detail design, fabrication, and verification testing of an operating model of the subsystem.

  15. Language Arts Curriculum Framework: Sample Curriculum Model, Grade 1.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    Based on the 1998 Arkansas State Language Arts Framework, this sample curriculum model for grade one language arts is divided into sections focusing on writing; listening, speaking, and viewing; and reading. Each section lists standards; benchmarks; assessments; and strategies/activities. The reading section itself is divided into print awareness;…

  16. Language Arts Curriculum Framework: Sample Curriculum Model, Grade K.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    Based on the 1998 Arkansas State Language Arts Framework, this sample curriculum model for kindergarten language arts is divided into sections focusing on writing; listening, speaking, and viewing; and reading. Each section lists standards; benchmarks; assessments; and strategies/activities. The reading section itself is divided into print…

  17. NERVE AS MODEL TEMPERATURE END ORGAN

    PubMed Central

    Bernhard, C. G.; Granit, Ragnar

    1946-01-01

    Rapid local cooling of mammalian nerve sets up a discharge which is preceded by a local temperature potential, the cooled region being electronegative relative to a normal portion of the nerve. Heating the nerve locally above its normal temperature similarly makes the heated region electronegative relative to a region at normal temperature, and again a discharge is set up from the heated region. These local temperature potentials, set up by the nerve itself, are held to serve as "generator potentials" and the mechanism found is regarded as the prototype for temperature end organs. PMID:19873460

  18. Water adsorption at high temperature on core samples from The Geysers geothermal field

    SciTech Connect

    Gruszkiewicz, M.S.; Horita, J.; Simonson, J.M.; Mesmer, R.E.

    1998-06-01

    The quantity of water retained by rock samples taken from three wells located in The Geysers geothermal field, California, was measured at 150, 200, and 250 °C as a function of steam pressure in the range 0.00 ≤ p/p₀ ≤ 0.98, where p₀ is the saturated water vapor pressure. Both adsorption and desorption runs were made in order to investigate the extent of the hysteresis. Additionally, low temperature gas adsorption analyses were made on the same rock samples. Mercury intrusion porosimetry was also used to obtain similar information extending to very large pores (macropores). A qualitative correlation was found between the surface properties obtained from nitrogen adsorption and the mineralogical and petrological characteristics of the solids. However, there was no direct correlation between BET specific surface areas and the capacity of the rocks for water adsorption at high temperatures. The hysteresis decreased significantly at 250 °C. The results indicate that multilayer adsorption, rather than capillary condensation, is the dominant water storage mechanism at high temperatures.

  19. Water adsorption at high temperature on core samples from The Geysers geothermal field

    SciTech Connect

    Gruszkiewicz, M.S.; Horita, J.; Simonson, J.M.; Mesmer, R.E.

    1998-06-01

    The quantity of water retained by rock samples taken from three wells located in The Geysers geothermal reservoir, California, was measured at 150, 200, and 250 °C as a function of pressure in the range 0.00 ≤ p/p₀ ≤ 0.98, where p₀ is the saturated water vapor pressure. Both adsorption (increasing pressure) and desorption (decreasing pressure) runs were made in order to investigate the nature and the extent of the hysteresis. Additionally, low temperature gas adsorption analyses were performed on the same rock samples. Nitrogen or krypton adsorption and desorption isotherms at 77 K were used to obtain BET specific surface areas, pore volumes and their distributions with respect to pore sizes. Mercury intrusion porosimetry was also used to obtain similar information extending to very large pores (macropores). A qualitative correlation was found between the surface properties obtained from nitrogen adsorption and the mineralogical and petrological characteristics of the solids. However, there is in general no proportionality between BET specific surface areas and the capacity of the rocks for water adsorption at high temperatures. The results indicate that multilayer adsorption rather than capillary condensation is the dominant water storage mechanism at high temperatures.

  20. A temperature dependent SPICE macro-model for power MOSFETs

    SciTech Connect

    Pierce, D.G.

    1991-01-01

    The power MOSFET SPICE macro-model has been developed to be suitable for use over the temperature range −55 to 125 °C. The model comprises a single parameter set, with temperature dependence accessed through the SPICE .TEMP card. SPICE parameter extraction techniques for the model and the model's predictive accuracy are discussed. 7 refs., 8 figs., 1 tab.

  1. Data augmentation for models based on rejection sampling

    PubMed Central

    Rao, Vinayak; Lin, Lizhen; Dunson, David B.

    2016-01-01

    We present a data augmentation scheme to perform Markov chain Monte Carlo inference for models where data generation involves a rejection sampling algorithm. Our idea is a simple scheme to instantiate the rejected proposals preceding each data point. The resulting joint probability over observed and rejected variables can be much simpler than the marginal distribution over the observed variables, which often involves intractable integrals. We consider three problems: modelling flow-cytometry measurements subject to truncation; the Bayesian analysis of the matrix Langevin distribution on the Stiefel manifold; and Bayesian inference for a nonparametric Gaussian process density model. The latter two are instances of doubly-intractable Markov chain Monte Carlo problems, where evaluating the likelihood is intractable. Our experiments demonstrate superior performance over state-of-the-art sampling algorithms for such problems. PMID:27279660
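
    A minimal sketch of the augmentation idea, not the authors' code: the data are assumed to come from a toy rejection sampler that draws from N(mu, 1) and keeps only positive values, so imputing the rejected draws restores a plain Gaussian complete-data likelihood and a simple Gibbs update for mu.

```python
# Toy example of instantiating the rejected proposals preceding each accepted
# observation (an assumed truncated-Gaussian setting, not the paper's models).
import numpy as np
from scipy.stats import norm, truncnorm

rng = np.random.default_rng(0)
true_mu = 0.8
data = truncnorm.rvs(a=-true_mu, b=np.inf, loc=true_mu, scale=1.0,
                     size=200, random_state=rng)   # "accepted" observations

mu = 0.0                                   # initial value, flat prior on mu
for sweep in range(2000):
    # 1) impute the rejected proposals preceding each accepted point
    p_accept = norm.cdf(mu)                # P(x > 0 | mu)
    n_rej = rng.geometric(p_accept, size=data.size) - 1
    rejected = truncnorm.rvs(a=-np.inf, b=(0.0 - mu), loc=mu, scale=1.0,
                             size=int(n_rej.sum()), random_state=rng)
    # 2) with the rejections filled in, the joint likelihood is Gaussian,
    #    so mu has a simple conjugate update under a flat prior
    all_x = np.concatenate([data, rejected])
    mu = rng.normal(all_x.mean(), 1.0 / np.sqrt(all_x.size))

print("posterior draw of mu after burn-in:", mu)
```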

  2. Decision Models for Determining the Optimal Life Test Sampling Plans

    NASA Astrophysics Data System (ADS)

    Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.

    2010-11-01

    A life test sampling plan is a technique consisting of sampling, inspection, and decision making to determine the acceptance or rejection of a batch of products by experiments examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential, and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan can not only help producers save testing time and reduce testing cost, but can also positively affect the image of the product and thus attract more consumers to buy it. This paper develops frequentist (non-Bayesian) decision models for determining the optimal life test sampling plans with the aim of cost minimization, by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions with two unknown parameters are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.
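
    As a rough, hedged illustration of the cost-minimization idea (not the authors' decision model, which treats both Weibull parameters as unknown), the following Monte Carlo sketch scores candidate failure-count thresholds using made-up testing, rejection, and warranty costs.

```python
# Illustrative sketch only; all parameter values and costs are invented.
# Choose the failure-count threshold c that minimises expected cost when n
# items are put on a life test of fixed duration, lifetimes being Weibull.
import numpy as np

rng = np.random.default_rng(1)
n, t_test = 20, 500.0                  # sample size and test duration (hours)
shape = 1.5                            # Weibull shape, assumed known here
scale_good, scale_bad = 2000.0, 600.0  # characteristic life of good/bad batches
p_bad = 0.2                            # assumed fraction of bad batches submitted
c_test, c_reject, c_warranty = 50.0, 400.0, 2000.0

def expected_cost(c, n_sim=20000):
    bad = rng.random(n_sim) < p_bad
    scale = np.where(bad, scale_bad, scale_good)
    # failures within the test window for each simulated batch
    lifetimes = scale[:, None] * rng.weibull(shape, size=(n_sim, n))
    failures = (lifetimes <= t_test).sum(axis=1)
    reject = failures > c
    cost = np.full(n_sim, c_test)
    cost += np.where(reject & ~bad, c_reject, 0.0)    # good batch rejected
    cost += np.where(~reject & bad, c_warranty, 0.0)  # bad batch accepted
    return cost.mean()

costs = {c: expected_cost(c) for c in range(0, 11)}
print("optimal acceptance number c =", min(costs, key=costs.get))
```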

  3. Simulating canopy temperature for modelling heat stress in cereals

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Crop models must be improved to account for the large effects of heat stress effects on crop yields. To date, most approaches in crop models use air temperature despite evidence that crop canopy temperature better explains yield reductions associated with high temperature events. This study presents...

  4. Modeling of temperature sensor built on GaN nanostructures

    NASA Astrophysics Data System (ADS)

    Asgari, A.; Taheri, S.

    2011-03-01

    A GaN nanostructure based temperature sensor has been modeled using the minority-carrier exclusion theory. The model takes into account the effects of temperature, carrier concentrations and electric field on carrier mobilities. The model also consists of different carrier scattering mechanisms such as phonon and natural ionized scattering. The calculation results show that the resistance of modeled GaN nanostructure based temperature sensor is strongly dependent on the sensor structural parameters such as doping density and device size.

  5. Ellipsoidal nested sampling, expression of the model uncertainty and measurement

    NASA Astrophysics Data System (ADS)

    Palmisano, C.; Mana, G.; Gervino, G.

    2015-07-01

    The measurand value, the conclusions, and the decisions inferred from measurements may depend on the models used to explain and to analyze the results. In this paper, the problems of identifying the most appropriate model and of assessing the model contribution to the uncertainty are formulated and solved in terms of Bayesian model selection and model averaging. As computational cost of this approach increases with the dimensionality of the problem, a numerical strategy, based on multimodal ellipsoidal nested sampling, to integrate over the nuisance parameters and to compute the measurand post-data distribution is outlined. In order to illustrate the numerical strategy, by use of MATHEMATICA an elementary example concerning a bimodal, two-dimensional distribution has also been studied.

  6. Study of Low Temperature Baking Effect on Field Emission on Nb Samples Treated by BEP, EP, and BCP

    SciTech Connect

    Andy Wu, Song Jin, Robert Rimmer, Xiang Yang Lu, K. Zhao, Laura MacIntyre, Robert Ike

    2010-05-01

    Field emission is still one of the major obstacles facing the Nb superconducting radio frequency (SRF) community in allowing Nb SRF cavities to routinely reach the accelerating gradient of 35 MV/m required for the International Linear Collider. Nowadays, the well-known low temperature baking at 120 °C for 48 hours is a common procedure used in the SRF community to improve the high field Q slope. However, some cavity production data have shown that the low temperature baking may induce field emission in cavities treated by EP. On the other hand, an earlier study of field emission on Nb flat samples treated by BCP reached the opposite conclusion. In this presentation, preliminary measurements of Nb flat samples treated by BEP, EP, and BCP, made with our unique home-made scanning field emission microscope before and after the low temperature baking, are reported. Some correlations between surface smoothness and the number of observed field emitters were found. The observed experimental results can be understood, at least partially, by a simple model that involves the change in thickness of the pentoxide layer on Nb surfaces.

  7. Temperature distributions in the laser-heated diamond anvil cell from 3-D numerical modeling

    SciTech Connect

    Rainey, E. S. G.; Kavner, A.; Hernlund, J. W.

    2013-11-28

    We present TempDAC, a 3-D numerical model for calculating the steady-state temperature distribution for continuous wave laser-heated experiments in the diamond anvil cell. TempDAC solves the steady heat conduction equation in three dimensions over the sample chamber, gasket, and diamond anvils and includes material-, temperature-, and direction-dependent thermal conductivity, while allowing for flexible sample geometries, laser beam intensity profile, and laser absorption properties. The model has been validated against an axisymmetric analytic solution for the temperature distribution within a laser-heated sample. Example calculations illustrate the importance of considering heat flow in three dimensions for the laser-heated diamond anvil cell. In particular, we show that a “flat top” input laser beam profile does not lead to a more uniform temperature distribution or flatter temperature gradients than a wide Gaussian laser beam.
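
    A much-reduced, one-dimensional analogue of this kind of calculation is sketched below, assuming constant conductivity and a Gaussian volumetric heating term; the real TempDAC model is three-dimensional with temperature- and direction-dependent conductivity, and all numbers here are placeholders.

```python
# Steady heat conduction with a Gaussian "laser" heating term, solved by
# finite differences on a 1-D grid. Placeholder values, not the paper's.
import numpy as np

N, L = 201, 100e-6              # grid points, domain length (m)
k = 10.0                        # thermal conductivity (W/m/K), held constant
T_edge = 300.0                  # diamond/gasket boundary temperature (K)
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]

# Gaussian volumetric heat source centred on the sample (W/m^3)
q = 2e13 * np.exp(-((x - L / 2) / 10e-6) ** 2)

# assemble -k * d2T/dx2 = q with Dirichlet boundaries
A = np.zeros((N, N))
b = np.empty(N)
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = T_edge
for i in range(1, N - 1):
    A[i, i - 1] = A[i, i + 1] = -k / dx**2
    A[i, i] = 2.0 * k / dx**2
    b[i] = q[i]

T = np.linalg.solve(A, b)
print("peak temperature: %.0f K" % T.max())
```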

  8. Sampling and specimens: potential application of a general model in geoscience sample registration

    NASA Astrophysics Data System (ADS)

    Cox, S. J.; Habermann, T.; Duclaux, G.

    2011-12-01

    Sampling is a key element of observational science. Specimens are a particular class of sample, in which material is retrieved from its original location and used for ex-situ observations and analysis. Specimens retrieved from difficult locations (e.g. deep ocean sampling, extra-terrestrial sampling) or of rare phenomena, have special scientific value. Material from these may be distributed to multiple laboratories for observation. For maximum utility, reports from the different studies must be recognized and compared. This has been a challenge as the original specimens are often not clearly identified or existing ids are not reported. To mitigate this, the International Geologic Specimen Number (IGSN) provides universal, project-neutral identifiers for geoscience specimens, and SESAR a system for registering those identifiers. Standard descriptive information required for specimen registration was proposed during a SESAR meeting held in February 2011. The standard ISO 19156 'Observations and Measurements' (O&M) includes an information model for basic description of specimens. The specimen model was designed to accommodate a variety of scenarios in chemistry, geochemistry, field geology, and life-sciences, and is believed to be applicable to a wide variety of application domains. O&M is implemented in XML (as a GML Schema) for OGC services and we have recently developed a complementary semantic-web compatible RDF/OWL representation. The GML form is used in several services deployed through AuScope, and for water quality information in WIRADA. The model has underpinned the redevelopment of a large geochemistry database in CSIRO. Capturing the preparation chain is the particular challenge in (geo-) chemistry, so the flexible and scalable model provided by the specimen model in O&M has been critical to its success in this context. This standard model for specimen metadata appears to satisfy all SESAR requirements, so might serve as the basic schema in the SESAR

  9. Far-infrared Dust Temperatures and Column Densities of the MALT90 Molecular Clump Sample

    NASA Astrophysics Data System (ADS)

    Guzmán, Andrés E.; Sanhueza, Patricio; Contreras, Yanett; Smith, Howard A.; Jackson, James M.; Hoq, Sadia; Rathborne, Jill M.

    2015-12-01

    We present dust column densities and dust temperatures for ˜3000 young, high-mass molecular clumps from the Millimeter Astronomy Legacy Team 90 GHz survey, derived from adjusting single-temperature dust emission models to the far-infrared intensity maps measured between 160 and 870 μm from the Herschel/Herschel Infrared Galactic Plane Survey (Hi-Gal) and APEX/APEX Telescope Large Area Survey of the Galaxy (ATLASGAL) surveys. We discuss the methodology employed in analyzing the data, calculating physical parameters, and estimating their uncertainties. The population average dust temperature of the clumps are 16.8 ± 0.2 K for the clumps that do not exhibit mid-infrared signatures of star formation (quiescent clumps), 18.6 ± 0.2 K for the clumps that display mid-infrared signatures of ongoing star formation but have not yet developed an H ii region (protostellar clumps), and 23.7 ± 0.2 and 28.1 ± 0.3 K for clumps associated with H ii and photo-dissociation regions, respectively. These four groups exhibit large overlaps in their temperature distributions, with dispersions ranging between 4 and 6 K. The median of the peak column densities of the protostellar clump population is 0.20 ± 0.02 g cm-2, which is about 50% higher compared to the median of the peak column densities associated with clumps in the other evolutionary stages. We compare the dust temperatures and column densities measured toward the center of the clumps with the mean values of each clump. We find that in the quiescent clumps, the dust temperature increases toward the outer regions and that these clumps are associated with the shallowest column density profiles. In contrast, molecular clumps in the protostellar or H ii region phase have dust temperature gradients more consistent with internal heating and are associated with steeper column density profiles compared with the quiescent clumps.
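
    A hedged sketch of single-temperature modified-blackbody (greybody) fitting of far-infrared fluxes is given below; the survey's own pipeline differs in detail, the emissivity index is simply held fixed, and the flux values are invented for illustration.

```python
# Fit an optically thin, single-temperature greybody to far-infrared fluxes.
# beta and the input intensities are assumptions for illustration only.
import numpy as np
from scipy.constants import h, c, k
from scipy.optimize import curve_fit

beta = 1.75                                    # assumed dust emissivity index

def greybody(nu, T, logA):
    """Optically thin modified blackbody: A * nu^beta * B_nu(T)."""
    planck = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))
    return 10**logA * nu**beta * planck

wavelengths_um = np.array([160.0, 250.0, 350.0, 500.0, 870.0])
nu = c / (wavelengths_um * 1e-6)               # Hz
flux = np.array([45.0, 30.0, 16.0, 7.0, 1.2])  # hypothetical fluxes (Jy)

popt, pcov = curve_fit(greybody, nu, flux, p0=(20.0, -5.0),
                       bounds=((5.0, -20.0), (100.0, 10.0)))
print("fitted dust temperature: %.1f K" % popt[0])
# the fitted amplitude is proportional to the dust column density once a
# dust opacity law and gas-to-dust ratio are adopted
```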

  10. Optimizing the implementation of the target motion sampling temperature treatment technique - How fast can it get?

    SciTech Connect

    Tuomas, V.; Jaakko, L.

    2013-07-01

    This article discusses the optimization of the target motion sampling (TMS) temperature treatment method, previously implemented in the Monte Carlo reactor physics code Serpent 2. The TMS method was introduced in [1] and the first practical results were presented at the PHYSOR 2012 conference [2]. The method is a stochastic method for taking the effect of thermal motion into account on-the-fly in a Monte Carlo neutron transport calculation. It is based on sampling the target velocities at collision sites and then utilizing the 0 K cross sections in the target-at-rest frame for reaction sampling. The fact that the total cross section becomes a distributed quantity is handled using rejection sampling techniques. The original implementation of the TMS requires 2.0 times more CPU time in a PWR pin-cell case than a conventional Monte Carlo calculation relying on pre-broadened effective cross sections. In an HTGR case examined in this paper the overhead factor is as high as 3.6. By first changing from a multi-group to a continuous-energy implementation and then fine-tuning a parameter affecting the conservativity of the majorant cross section, it is possible to decrease the overhead factors to 1.4 and 2.3, respectively. Preliminary calculations are also made using a new and yet incomplete optimization method in which the temperature of the basis cross section is increased above 0 K. It seems that with the new approach it may be possible to decrease the factors to as low as 1.06 and 1.33, respectively, but its functionality has not yet been proven. Therefore, these performance measures should be considered preliminary. (authors)
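
    The accept/reject step at the heart of the method can be sketched schematically as follows; this is not Serpent's implementation, and the cross section, Maxwellian width, and majorant below are toy values.

```python
# Schematic TMS-style tracking: free paths are sampled with a constant
# majorant cross section; at each tentative collision a target velocity is
# drawn from a Maxwellian so the 0 K cross section can be evaluated in the
# target-at-rest frame, and the collision is accepted with probability
# sigma_0(v_rel) * (v_rel / v) / sigma_maj (toy values throughout).
import numpy as np

rng = np.random.default_rng(2)
kT_over_m = 2.5e5      # (m/s)^2, sets the Maxwellian width (made-up value)
v_neutron = 2.2e3      # neutron speed (m/s)
sigma_maj = 5.0        # majorant macroscopic cross section (1/m)

def sigma_0(v_rel):
    """Toy 0 K cross section with a mild velocity dependence (1/m)."""
    return 3.0 * (2.2e3 / v_rel) ** 0.5

def sample_real_collision():
    """Walk until a tentative collision is accepted; return the path length."""
    x = 0.0
    while True:
        x += rng.exponential(1.0 / sigma_maj)          # majorant free path
        v_target = rng.normal(0.0, np.sqrt(kT_over_m), size=3)
        v_rel = np.linalg.norm(np.array([v_neutron, 0.0, 0.0]) - v_target)
        p_accept = sigma_0(v_rel) * (v_rel / v_neutron) / sigma_maj
        if rng.random() < p_accept:                     # real collision
            return x
        # otherwise a virtual collision: keep flying

paths = np.array([sample_real_collision() for _ in range(20000)])
print("effective mean free path ~ %.3f m" % paths.mean())
```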

  11. The Genealogy of Samples in Models with Selection

    PubMed Central

    Neuhauser, C.; Krone, S. M.

    1997-01-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case. PMID:9071604

  12. Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2016-01-01

    Building accurate predictive models of clinical multivariate time series is crucial for understanding of the patient condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive to reflect well patient-specific temporal behaviors and this also in the case when the available patient-specific data are sparse and short span. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on the prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both the population based and patient-specific time series prediction models in terms of prediction accuracy. PMID:27525189

  13. Exposure Of NIF Relevant Polymeric Samples To Deuterium-Tritium Gas At Elevated Temperature And Pressure

    SciTech Connect

    Ebey, P S; Dole, J M; Nobile, A; Schoonover, J R; Burmann, J; Cook, B; Letts, S; Sanchez, J; Nikroo, A

    2005-06-24

    The purpose of the experiments described in this paper was to expose samples of polymeric materials to a mixture of deuterium-tritium (DT) gas at elevated temperature and pressure to investigate the effects (i.e. damage) on the materials. The materials and exposure parameters were chosen to be relevant to proposed uses of similar materials in inertial fusion ignition experiments at the National Ignition Facility. Two types of samples were exposed and tested. The first type consisted of 10 4-lead ribbon cables of fine manganin wire insulated with polyimide. Wires of this type are proposed for use in thermal shimming of hohlraums, and the goal of this experiment was to measure the change in electrical resistance of the insulation due to tritium exposure. The second type of sample consisted of 20 planar polymer samples that may be used as ignition capsule materials. The exposure was at 34.5 MPa (5010 psia) and 70 °C for 48 hours. The change in electrical resistance of the wire insulation will be presented. The results for capsule materials will be presented in a separate paper in this issue.

  14. Modeling the Freezing of SN in High Temperature Furnaces

    NASA Technical Reports Server (NTRS)

    Brush, Lucien

    1999-01-01

    Presently, crystal growth furnaces are being designed that will be used to monitor the crystal melt interface shape and the solutal and thermal fields in its vicinity during the directional freezing of dilute binary alloys, To monitor the thermal field within the solidifying materials, thermocouple arrays (AMITA) are inserted into the sample. Intrusive thermocouple monitoring devices can affect the experimental data being measured. Therefore, one objective of this work is to minimize the effect of the thermocouples on the data generated. To aid in accomplishing this objective, two models of solidification have been developed. Model A is a fully transient, one dimensional model for the freezing of a dilute binary alloy that is used to compute temperature profiles for comparison with measurements taken from the thermocouples. Model B is a fully transient two dimensional model of the solidification of a pure metal. It will be used to uncover the manner in which thermocouple placement and orientation within the ampoule breaks the longitudinal axis of symmetry of the thermal field and the crystal-melt interface. Results and conclusions are based on the comparison of the models with experimental results taken during the freezing of pure Sn.

  15. Model Based Unsupervised Learning Guided by Abundant Background Samples

    PubMed Central

    Mahdi, Rami N.; Rouchka, Eric C.

    2010-01-01

    Many data sets contain an abundance of background data or samples belonging to classes not currently under consideration. We present a new unsupervised learning method based on Fuzzy C-Means to learn sub models of a class using background samples to guide cluster split and merge operations. The proposed method demonstrates how background samples can be used to guide and improve the clustering process. The proposed method results in more accurate clusters and helps to escape locally minimum solutions. In addition, the number of clusters is determined for the class under consideration. The method demonstrates remarkable performance on both synthetic 2D and real world data from the MNIST dataset of hand written digits. PMID:20436793

  16. Sample temperature profile during the excimer laser annealing of silicon nanoparticles

    NASA Astrophysics Data System (ADS)

    Caninenberg, M.; Verheyen, E.; Kiesler, D.; Stoib, B.; Brandt, M. S.; Benson, N.; Schmechel, R.

    2015-11-01

    Based on the heat diffusion equation we describe the temperature profile of a silicon nanoparticle thin film on silicon during excimer laser annealing using COMSOL Multiphysics. For this purpose system specific material parameters are determined such as the silicon nanoparticle melting point at 1683 K, the surface reflectivity at 248 nm of 20% and the nanoparticle thermal conductivity between 0.3 and 1.2 W/m K. To validate our model, the simulation results are compared to experimental data obtained by Raman spectroscopy, SEM microscopy and electrochemical capacitance-voltage measurements (ECV). The experimental data are in good agreement with our theoretical findings and support the validity of the model.
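
    As a hedged, one-dimensional analogue of the COMSOL model described above (single material, constant properties, surface heating only), an explicit finite-difference integration of the heat diffusion equation might look like the sketch below; all numbers are illustrative rather than the paper's.

```python
# 1-D explicit finite-difference heat diffusion with a short surface heating
# pulse. Material and laser parameters are placeholders.
import numpy as np

rho, cp, k = 2330.0, 700.0, 1.0         # density (kg/m^3), heat capacity, conductivity
alpha = k / (rho * cp)                  # thermal diffusivity (m^2/s)
fluence, pulse, R = 600.0, 25e-9, 0.20  # J/m^2, pulse length (s), reflectivity

L, N = 2e-6, 401                        # modelled depth (m), grid points
dx = L / (N - 1)
dt = 0.4 * dx**2 / alpha                # below the explicit stability limit
T = np.full(N, 300.0)                   # initial temperature (K)

t, T_peak = 0.0, 300.0
for _ in range(int(100e-9 / dt)):       # integrate for 100 ns
    lap = np.zeros(N)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T += alpha * dt * lap
    if t < pulse:                       # absorbed laser power deposited at the surface
        q_surf = (1.0 - R) * fluence / pulse            # W/m^2
        T[0] += q_surf * dt / (rho * cp * dx)
    T[-1] = 300.0                       # far boundary (substrate) held at 300 K
    T_peak = max(T_peak, T[0])
    t += dt

print("peak surface temperature during the pulse: %.0f K" % T_peak)
```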

  17. Automation of sample plan creation for process model calibration

    NASA Astrophysics Data System (ADS)

    Oberschmidt, James; Abdo, Amr; Desouky, Tamer; Al-Imam, Mohamed; Krasnoperova, Azalia; Viswanathan, Ramya

    2010-04-01

    The process of preparing a sample plan for optical and resist model calibration has always been tedious, not only because the plan must accurately represent full-chip designs with countless combinations of widths, spaces and environments, but also because of constraints imposed by metrology, which may limit the number of structures that can be measured. There are also limits on the types of these structures, mainly due to the variation in measurement accuracy across different types of geometries. For instance, pitch measurements are normally more accurate than corner rounding measurements; thus, only certain geometrical shapes are usually considered when creating a sample plan. In addition, the time factor is becoming crucial as we migrate from one technology node to another, due to the increase in the number of development and production nodes, and the process becomes more complicated if process-window-aware models are to be developed in a reasonable time frame; there is therefore a need for reliable methods of choosing sample plans that also help reduce cycle time. In this context, an automated flow is proposed for sample plan creation. Once the illumination and film stack are defined, all the errors in the input data are fixed and sites are centered. Then, bad sites are excluded. Afterwards, the clean data are reduced based on geometrical resemblance. Also, an editable database of measurement-reliable and critical structures is provided, and their percentage in the final sample plan, as well as the total number of 1D/2D samples, can be predefined. The flow has the advantage of eliminating manual selection or filtering techniques, it provides powerful tools for customizing the final plan, and the time needed to generate these plans is greatly reduced.

  18. Imputation for semiparametric transformation models with biased-sampling data

    PubMed Central

    Liu, Hao; Qin, Jing; Shen, Yu

    2012-01-01

    Widely recognized in many fields including economics, engineering, epidemiology, health sciences, technology and wildlife management, length-biased sampling generates biased and right-censored data but often provides the best information available for statistical inference. Different from traditional right-censored data, length-biased data have unique aspects resulting from their sampling procedures. We exploit these unique aspects and propose a general imputation-based estimation method for analyzing length-biased data under a class of flexible semiparametric transformation models. We present new computational algorithms that can jointly estimate the regression coefficients and the baseline function semiparametrically. The imputation-based method under the transformation model provides an unbiased estimator regardless of whether or not the censoring depends on the covariates. We establish large-sample properties using the empirical processes method. Simulation studies show that under small to moderate sample sizes, the proposed procedure has smaller mean square errors than two existing estimation procedures. Finally, we demonstrate the estimation procedure with a real data example. PMID:22903245

  19. Temperature and electron density distributions of laser-induced plasmas generated with an iron sample at different ambient gas pressures

    NASA Astrophysics Data System (ADS)

    Aguilera, J. A.; Aragón, C.

    2002-09-01

    Intensity, temperature and electron density distributions of laser-induced plasmas (LIPs) have been measured by emission spectroscopy with two-dimensional spatial resolution and temporal resolution. The plasmas have been generated with an iron sample at different pressures of air, in the range 10-1000 mbar. An experimental system based on an imaging spectrometer equipped with an intensified CCD detector has been used to obtain the spectra with two-dimensional spatial resolution. The evolution of the intensity distributions is described by the blast wave model only at initial times. The temperature distributions are shown to correspond to a slight difference between the intensity distributions of two Fe I emission lines that have a large difference in their upper energy levels (3.38 eV). The electron density distributions have features similar to those of the temperature distributions. The features of the intensity and temperature distributions show a significant change with the ambient gas pressure: they have separated maxima in the plasmas generated at pressures below 100 mbar, whereas at higher pressures the maxima of the two distributions coincide.

  20. Simple determination of the herbicide napropamide in water and soil samples by room temperature phosphorescence.

    PubMed

    Salinas-Castillo, Alfonso; Fernández-Sanchez, Jorge Fernando; Segura-Carretero, Antonio; Fernández-Gutiérrez, Alberto

    2005-08-01

    A new, simple, rapid and selective phosphorimetric method for determining napropamide is proposed which demonstrates the applicability of heavy-atom-induced room-temperature phosphorescence for analyzing pesticides in real samples. The phosphorescence signals are a consequence of intermolecular protection and are found exclusively with analytes in the presence of heavy atom salts. Sodium sulfite was used as an oxygen scavenger to minimize room-temperature phosphorescence quenching. The determination was performed in 1 M potassium iodide and 6 mM sodium sulfite at 20 degrees C. The phosphorescence intensity was measured at 520 nm with excitation at 290 nm. Phosphorescence was easily developed, with a linear relation to concentration between 3.2 and 600.0 ng ml(-1) and a detection limit of 3.2 ng ml(-1). The method has been successfully applied to the analysis of napropamide in water and soil samples and an exhaustive interference study was also carried out to display the selectivity of the proposed method. PMID:15838936

  1. NASTRAN thermal analyzer: Theory and application including a guide to modeling engineering problems, volume 2. [sample problem library guide

    NASA Technical Reports Server (NTRS)

    Jackson, C. E., Jr.

    1977-01-01

    A sample problem library containing 20 problems covering most facets of Nastran Thermal Analyzer modeling is presented. Areas discussed include radiative interchange, arbitrary nonlinear loads, transient temperature and steady-state structural plots, temperature-dependent conductivities, simulated multi-layer insulation, and constraint techniques. The use of the major control options and important DMAP alters is demonstrated.

  2. Ambient temperature modelling with soft computing techniques

    SciTech Connect

    Bertini, Ilaria; Ceravolo, Francesco; Citterio, Marco; Di Pietra, Biagio; Margiotta, Francesca; Pizzuti, Stefano; Puglisi, Giovanni; De Felice, Matteo

    2010-07-15

    This paper proposes a hybrid approach based on soft computing techniques in order to estimate monthly and daily ambient temperature. Indeed, we combine the back-propagation (BP) algorithm and the simple Genetic Algorithm (GA) in order to effectively train artificial neural networks (ANN) in such a way that the BP algorithm initialises a few individuals of the GA's population. Experiments concerned monthly temperature estimation of unknown places and daily temperature estimation for thermal load computation. Results have shown remarkable improvements in accuracy compared to traditional methods. (author)

  3. A NEW SAMPLE CELL DESIGN FOR STUDYING SOLID-MATRIX ROOM TEMPERATURE PHOSPHORESCENCE MOISTURE QUENCHING. (R824100)

    EPA Science Inventory

    A new sample chamber was developed that can be used in the measurement of the effects of moisture on the room-temperature solid-matrix phosphorescence of phosphors adsorbed onto filter paper. The sample chamber consists of a sealed quartz cell that contains a special teflon sampl...

  4. Physical Models of Seismic-Attenuation Measurements on Lab Samples

    NASA Astrophysics Data System (ADS)

    Coulman, T. J.; Morozov, I. B.

    2012-12-01

    Seismic attenuation in Earth materials is often measured in the lab by using low-frequency forced oscillations or static creep experiments. The usual assumption in interpreting and even designing such experiments is the "viscoelastic" behavior of materials, i.e., their description by the notions of a Q-factor and material memory. However, this is not the only theoretical approach to internal friction, and it also involves several contradictions with conventional mechanics. From the viewpoint of mechanics, the frequency-dependent Q becomes a particularly enigmatic property attributed to the material. At the same time, the behavior of rock samples in seismic-attenuation experiments can be explained by a strictly mechanical approach. We use this approach to simulate such experiments analytically and numerically for a system of two cylinders consisting of a rock sample and elastic standard undergoing forced oscillations, and also for a single rock sample cylinder undergoing static creep. The system is subject to oscillatory compression or torsion, and the phase-lag between the sample and standard is measured. Unlike in the viscoelastic approach, a full Lagrangian formulation is considered, in which material anelasticity is described by parameters of "solid viscosity" and a dissipation function from which the constitutive equation is derived. Results show that this physical model of anelasticity predicts creep results very close to those obtained by using empirical Burgers bodies or Andrade laws. With nonlinear (non-Newtonian) solid viscosity, the system shows an almost instantaneous initial deformation followed by slow creep towards an equilibrium. For Aheim Dunite, the "rheologic" parameters of nonlinear viscosity are υ=0.79 and η=2.4 GPa-s. Phase-lag results for nonlinear viscosity show Q's slowly decreasing with frequency. To explain a Q increasing with frequency (which is often observed in the lab and in the field), one has to consider nonlinear viscosity with

  5. Field portable low temperature porous layer open tubular cryoadsorption headspace sampling and analysis part II: Applications.

    PubMed

    Harries, Megan; Bukovsky-Reyes, Santiago; Bruno, Thomas J

    2016-01-15

    This paper details the sampling methods used with the field portable porous layer open tubular cryoadsorption (PLOT-cryo) approach, described in Part I of this two-part series, applied to several analytes of interest. We conducted tests with coumarin and 2,4,6-trinitrotoluene (two solutes that were used in initial development of PLOT-cryo technology), naphthalene, aviation turbine kerosene, and diesel fuel, on a variety of matrices and test beds. We demonstrated that these analytes can be easily detected and reliably identified using the portable unit for analyte collection. By leveraging efficiency-boosting temperature control and the high flow rate multiple capillary wafer, very short collection times (as low as 3s) yielded accurate detection. For diesel fuel spiked on glass beads, we determined a method detection limit below 1 ppm. We observed greater variability among separate samples analyzed with the portable unit than previously documented in work using the laboratory-based PLOT-cryo technology. We identify three likely sources that may help explain the additional variation: the use of a compressed air source to generate suction, matrix geometry, and variability in the local vapor concentration around the sampling probe as solute depletion occurs both locally around the probe and in the test bed as a whole. This field-portable adaptation of the PLOT-cryo approach has numerous and diverse potential applications. PMID:26726934

  6. Two-Temperature Model of Nonequilibrium Electron Relaxation:. a Review

    NASA Astrophysics Data System (ADS)

    Singh, Navinder

    The present paper is a review of the phenomena related to nonequilibrium electron relaxation in bulk and nano-scale metallic samples. The workable Two-Temperature Model (TTM) based on the Boltzmann-Bloch-Peierls kinetic equation has been applied to study the ultra-fast (femtosecond) electronic relaxation in various metallic systems. The advent of new ultra-fast (femtosecond) laser technology and pump-probe spectroscopy has produced a wealth of new results for micro- and nano-scale electronic technology. The aim of this paper is to clarify the TTM, the conditions of its validity and nonvalidity, and its modifications for nano-systems, to sum up the progress, and to point out open problems in this field. We also give a phenomenological integro-differential equation for the kinetics of nondegenerate electrons that goes beyond the TTM.
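
    A minimal numerical sketch of the TTM equations reviewed here is given below: electron and lattice heat baths coupled by a constant G, with a femtosecond pump heating the electrons. The parameter values are order-of-magnitude placeholders for a generic metal, not results from the review.

```python
# Two-temperature model: C_e(T_e) dT_e/dt = -G (T_e - T_l) + S(t),
#                        C_l     dT_l/dt =  G (T_e - T_l).
# Placeholder parameters of roughly metallic magnitude.
import numpy as np
from scipy.integrate import solve_ivp

gamma = 70.0              # electronic heat capacity coefficient, C_e = gamma*T_e (J/m^3/K^2)
C_l = 2.5e6               # lattice heat capacity (J/m^3/K)
G = 3e16                  # electron-phonon coupling (W/m^3/K)
S0, tau = 2e21, 100e-15   # pump peak power density (W/m^3) and pulse width (s)

def pump(t):
    return S0 * np.exp(-((t - 3 * tau) / tau) ** 2)

def ttm(t, y):
    Te, Tl = y
    dTe = (-G * (Te - Tl) + pump(t)) / (gamma * Te)
    dTl = G * (Te - Tl) / C_l
    return [dTe, dTl]

sol = solve_ivp(ttm, (0.0, 5e-12), [300.0, 300.0], max_step=1e-15)
print("peak electron temperature: %.0f K" % sol.y[0].max())
print("final lattice temperature: %.0f K" % sol.y[1][-1])
```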

  7. Volcanic Aerosol Evolution: Model vs. In Situ Sampling

    NASA Astrophysics Data System (ADS)

    Pfeffer, M. A.; Rietmeijer, F. J.; Brearley, A. J.; Fischer, T. P.

    2002-12-01

    Volcanoes are the most significant non-anthropogenic source of tropospheric aerosols. Aerosol samples were collected at different distances from 92°C fumarolic source at Poás Volcano. Aerosols were captured on TEM grids coated by a thin C-film using a specially designed collector. In the sampling, grids were exposed to the plume for 30-second intervals then sealed and frozen to prevent reaction before ATEM analysis to determine aerosol size and chemistry. Gas composition was established using gas chromatography, wet chemistry techniques, AAS and Ion Chromatography on samples collected directly from a fumarolic vent. SO2 flux was measured remotely by COSPEC. A Gaussian plume dispersion model was used to model concentrations of the gases at different distances down-wind. Calculated mixing ratios of air and the initial gas species were used as input to the thermo-chemical model GASWORKS (Symonds and Reed, Am. Jour. Sci., 1993). Modeled products were compared with measured aerosol compositions. Aerosols predicted to precipitate out of the plume one meter above the fumarole are [CaSO4, Fe2.3SO4, H2SO4, MgF2. Na2SO4, silica, water]. Where the plume leaves the confines of the crater, 380 meters distant, the predicted aerosols are the same, excepting FeF3 replacing Fe2.3SO4. Collected aerosols show considerable compositional differences between the sampling locations and are more complex than those predicted. Aerosols from the fumarole consist of [Fe +/- Si,S,Cl], [S +/- O] and [Si +/- O]. Aerosols collected on the crater rim consist of the same plus [O,Na,Mg,Ca], [O,Si,Cl +/- Fe], [Fe,O,F] and [S,O +/- Mg,Ca]. The comparison between results obtained by the equilibrium gas model and the actual aerosol compositions shows that an assumption of chemical and thermal equilibrium evolution is invalid. The complex aerosols collected contrast the simple formulae predicted. These findings show that complex, non-equilibrium chemical reactions take place immediately upon volcanic
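
    For reference, a generic textbook form of the Gaussian plume concentration used in such dispersion calculations is sketched below; the dispersion coefficients, source strength, and wind speed are placeholders, not the values used in this study.

```python
# Generic ground-reflected Gaussian plume formula (illustrative inputs only).
import numpy as np

def plume_concentration(x, y, z, Q, u, H, a=0.08, b=0.06):
    """Concentration (kg/m^3) at downwind/crosswind/vertical position (x, y, z).

    Q : emission rate (kg/s);  u : wind speed (m/s);  H : source height (m)
    a, b : crude power-law dispersion coefficients, sigma = coeff * x**0.9
    """
    sig_y = a * x**0.9
    sig_z = b * x**0.9
    lateral = np.exp(-y**2 / (2 * sig_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sig_z**2))
                + np.exp(-(z + H)**2 / (2 * sig_z**2)))   # ground reflection
    return Q / (2 * np.pi * u * sig_y * sig_z) * lateral * vertical

# e.g. concentration 380 m downwind on the plume centre line at 2 m height
print(plume_concentration(x=380.0, y=0.0, z=2.0, Q=0.5, u=3.0, H=10.0))
```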

  8. New application of temperature-dependent modelling of high temperature superconductors: Quench propagation and pulse magnetization

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Matsuda, Koichi; Coombs, T. A.

    2012-08-01

    We present temperature-dependent modeling of high-temperature superconductors (HTS) to understand HTS electromagnetic phenomena where temperature fluctuation plays a nontrivial role. Thermal physics is introduced into the well-developed H-formulation model, and the effect of temperature-dependent parameters is considered. Based on the model, we perform extensive studies on two important HTS applications: quench propagation and pulse magnetization. A micrometer-scale quench model of HTS coil is developed, which can be used to estimate minimum quench energy and normal zone propagation velocity inside the coil. In addition, we study the influence of inhomogeneity of HTS bulk during pulse magnetization. We demonstrate how the inhomogeneous distribution of critical current inside the bulk results in varying degrees of heat dissipation and uniformity of final trapped field. The temperature-dependent model is proven to be a powerful tool to study the thermally coupled electromagnetic phenomena of HTS.

  9. De novo protein conformational sampling using a probabilistic graphical model

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Debswapna; Cheng, Jianlin

    2015-11-01

    Efficient exploration of protein conformational space remains challenging especially for large proteins when assembling discretized structural fragments extracted from a protein structure database. We propose a fragment-free probabilistic graphical model, FUSION, for conformational sampling in continuous space and assess its accuracy using ‘blind' protein targets with a length up to 250 residues from the CASP11 structure prediction exercise. The method reduces sampling bottlenecks, exhibits strong convergence, and demonstrates better performance than the popular fragment assembly method, ROSETTA, on relatively larger proteins with a length of more than 150 residues in our benchmark set. FUSION is freely available through a web server at http://protein.rnet.missouri.edu/FUSION/.

  10. Approaches to retrospective sampling for longitudinal transition regression models

    PubMed Central

    Hunsberger, Sally; Albert, Paul S.; Thoma, Marie

    2016-01-01

    For binary diseases that relapse and remit, it is often of interest to estimate the effect of covariates on the transition process between disease states over time. The transition process can be characterized by modeling the probability of the binary event given the individual’s history. Designing studies that examine the impact of time varying covariates over time can lead to collection of extensive amounts of data. Sometimes it may be possible to collect and store tissue, blood or images and retrospectively analyze this covariate information. In this paper we consider efficient sampling designs that do not require biomarker measurements on all subjects. We describe appropriate estimation methods for transition probabilities and functions of these probabilities, and evaluate efficiency of the estimates from the proposed sampling designs. These new methods are illustrated with data from a longitudinal study of bacterial vaginosis, a common relapsing-remitting vaginal infection of women of child bearing age.

  11. De novo protein conformational sampling using a probabilistic graphical model

    PubMed Central

    Bhattacharya, Debswapna; Cheng, Jianlin

    2015-01-01

    Efficient exploration of protein conformational space remains challenging especially for large proteins when assembling discretized structural fragments extracted from a protein structure database. We propose a fragment-free probabilistic graphical model, FUSION, for conformational sampling in continuous space and assess its accuracy using ‘blind' protein targets with a length up to 250 residues from the CASP11 structure prediction exercise. The method reduces sampling bottlenecks, exhibits strong convergence, and demonstrates better performance than the popular fragment assembly method, ROSETTA, on relatively larger proteins with a length of more than 150 residues in our benchmark set. FUSION is freely available through a web server at http://protein.rnet.missouri.edu/FUSION/. PMID:26541939

  12. THE TWO-LEVEL MODEL AT FINITE-TEMPERATURE

    SciTech Connect

    Goodman, A.L.

    1980-07-01

    The finite-temperature HFB cranking equations are solved for the two-level model. The pair gap, moment of inertia and internal energy are determined as functions of spin and temperature. Thermal excitations and rotations collaborate to destroy the pair correlations. Raising the temperature eliminates the backbending effect and improves the HFB approximation.

  13. Research on Temperature Modeling of Strapdown Inertial Navigation System

    NASA Astrophysics Data System (ADS)

    Huang, XiaoJuan; Zhao, LiJian; Xu, RuXiang; Yang, Heng

    2016-02-01

    Strapdown inertial navigation systems with laser gyros have been deployed in space tracking ships and, compared with conventional platform inertial navigation systems, have substantial advantages in performance, accuracy and stability. Environmental and internal temperatures significantly affect the gyro, accelerometer, electrical circuits and mechanical structure, but the existing temperature compensation model is not accurate enough, especially when there is a large temperature change.

  14. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    SciTech Connect

    Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
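
    A compact sketch of the EM approach for Gaussian-kernel BMA weights and a common variance is given below (the DREAM/MCMC alternative is not reproduced); the forecast and observation arrays are synthetic placeholders.

```python
# EM for BMA weights and a shared forecast variance with Gaussian kernels.
# Synthetic data: three ensemble members with different error levels.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
T_len, K = 500, 3
truth = 15 + 5 * np.sin(np.linspace(0, 6, T_len))            # "observations"
forecasts = truth[:, None] + rng.normal(0, [1.0, 2.0, 3.0], size=(T_len, K))

w = np.full(K, 1.0 / K)      # initial weights
var = 1.0                    # initial common variance

for _ in range(200):         # EM iterations
    # E-step: responsibility of each ensemble member for each observation
    dens = norm.pdf(truth[:, None], loc=forecasts, scale=np.sqrt(var))
    z = w * dens
    z /= z.sum(axis=1, keepdims=True)
    # M-step: update the weights and the shared forecast variance
    w = z.mean(axis=0)
    var = np.sum(z * (truth[:, None] - forecasts) ** 2) / T_len

print("BMA weights:", np.round(w, 3), " sigma:", round(float(np.sqrt(var)), 2))
```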

  15. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positive biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading

  16. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positive biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading

  17. Determining permeability of tight rock samples using inverse modeling

    NASA Astrophysics Data System (ADS)

    Finsterle, Stefan; Persoff, Peter

    1997-08-01

    Data from gas-pressure-pulse-decay experiments have been analyzed by means of numerical simulation in combination with automatic model calibration techniques to determine hydrologic properties of low-permeability, low-porosity rock samples. Porosity, permeability, and the Klinkenberg slip factor have been estimated for a core plug from The Geysers geothermal field, California. The experiments were conducted using a specially designed permeameter with small gas reservoirs. Pressure changes were measured as gas flowed from the pressurized upstream reservoir through the sample to the downstream reservoir. A simultaneous inversion of data from three experiments performed at different pressure levels allows for independent estimation of absolute permeability and gas permeability, which is pressure-dependent due to enhanced slip flow. With this measurement and analysis technique we can determine matrix properties with permeabilities as low as 10^-21 m^2. In this paper we discuss the procedure of parameter estimation by inverse modeling. We focus on the error analysis, which reveals estimation uncertainty and parameter correlations. This information can also be used to evaluate and optimize the design of an experiment. The impact of systematic errors due to potential leakage and of uncertainty in the initial conditions is also addressed. The case studies clearly illustrate the need for a thorough error analysis of inverse modeling results.
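
    The study inverts the full pressure-transient data with a numerical simulator; as a hedged illustration of the Klinkenberg relation it estimates, k_gas(p) = k_inf (1 + b/p), the sketch below fits that relation to hypothetical apparent permeabilities at three mean pressures.

```python
# Fit the Klinkenberg relation k_app = k_inf * (1 + b/p) to made-up data.
import numpy as np

p_mean = np.array([0.5e6, 1.5e6, 3.0e6])          # mean gas pressures (Pa)
k_app = np.array([8.0e-21, 4.5e-21, 3.4e-21])     # apparent permeabilities (m^2)

# k_app = k_inf + (k_inf * b) / p is linear in 1/p, so use a least-squares line
A = np.vstack([np.ones_like(p_mean), 1.0 / p_mean]).T
(k_inf, k_inf_b), *_ = np.linalg.lstsq(A, k_app, rcond=None)
b = k_inf_b / k_inf

print("Klinkenberg-corrected permeability k_inf = %.2e m^2" % k_inf)
print("slip factor b = %.2e Pa" % b)
```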

  18. Modeling and Simulation of a Tethered Harpoon for Comet Sampling

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco B.

    2014-01-01

    This paper describes the development of a dynamic model and simulation results of a tethered harpoon for comet sampling. This model and simulation was done in order to carry out an initial sensitivity analysis for key design parameters of the tethered system. The harpoon would contain a canister which would collect a sample of soil from a cometary surface. Both a spring ejected canister and a tethered canister are considered. To arrive in close proximity of the spacecraft at the end of its trajectory so it could be captured, the free-flying canister would need to be ejected at the right time and with the proper impulse, while the tethered canister must be recovered by properly retrieving the tether at a rate that would avoid an excessive amplitude of oscillatory behavior during the retrieval. The paper describes the model of the tether dynamics and harpoon penetration physics. The simulations indicate that, without the tether, the canister would still reach the spacecraft for collection, that the tether retrieval of the canister would be achievable with reasonable fuel consumption, and that the canister amplitude upon retrieval would be insensitive to variations in vertical velocity dispersion.

  19. Modelling of tandem cell temperature coefficients

    SciTech Connect

    Friedman, D.J.

    1996-05-01

    This paper discusses the temperature dependence of the basic solar-cell operating parameters for a GaInP/GaAs series-connected two-terminal tandem cell. The effects of series resistance and of different incident solar spectra are also discussed.

  20. 40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... energy distribution and permitted tolerances specified in table E-2 of this subpart. The solar radiation... (40 CFR part 50, appendix L, figure L-30) or equivalent adaptor to facilitate measurement of sampler... recommended. (6) Sample filter or filters, as specified in section 6 of 40 CFR part 50, appendix L....

  1. 40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... energy distribution and permitted tolerances specified in table E-2 of this subpart. The solar radiation... (40 CFR part 50, appendix L, figure L-30) or equivalent adaptor to facilitate measurement of sampler... recommended. (6) Sample filter or filters, as specified in section 6 of 40 CFR part 50, appendix L....

  2. 40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (40 CFR part 50, appendix L, figure L-30) or equivalent adaptor to facilitate measurement of sampler... recommended. (6) Sample filter or filters, as specified in section 6 of 40 CFR part 50, appendix L. (d... calibration, certification of calibration accuracy, and NIST-traceability (if required) of all...

  3. Monitoring, Modeling, and Diagnosis of Alkali-Silica Reaction in Small Concrete Samples

    SciTech Connect

    Agarwal, Vivek; Cai, Guowei; Gribok, Andrei V.; Mahadevan, Sankaran

    2015-09-01

    Assessment and management of aging concrete structures in nuclear power plants require a more systematic approach than simple reliance on existing code margins of safety. Structural health monitoring of concrete structures aims to understand the current health condition of a structure based on heterogeneous measurements to produce high-confidence actionable information regarding structural integrity that supports operational and maintenance decisions. This report describes alkali-silica reaction (ASR) degradation mechanisms and factors influencing ASR. A fully coupled thermo-hydro-mechanical-chemical model developed by Saouma and Perotti, which takes into consideration the effects of stress on the reaction kinetics and anisotropic volumetric expansion, is presented in this report. This model is implemented in the GRIZZLY code based on the Multiphysics Object Oriented Simulation Environment. The model implemented in GRIZZLY is used to randomly initiate ASR in 2D and 3D lattices to study the percolation aspects of concrete. The percolation aspects help determine the transport properties of the material and therefore the durability and service life of concrete. This report summarizes the effort to develop small-size concrete samples with embedded glass to mimic ASR. The concrete samples were treated in water and sodium hydroxide solution at elevated temperature to study how ingress of sodium and hydroxide ions at elevated temperature impacts concrete samples embedded with glass. A thermal camera was used to monitor changes in the concrete samples, and the results are summarized.

  4. Activated sampling in complex materials at finite temperature: The properly obeying probability activation-relaxation technique

    NASA Astrophysics Data System (ADS)

    Vocks, Henk; Chubynsky, M. V.; Barkema, G. T.; Mousseau, Normand

    2005-12-01

    While the dynamics of many complex systems is dominated by activated events, there are very few simulation methods that take advantage of this fact. Most of these procedures are restricted to relatively simple systems or, as with the activation-relaxation technique (ART), sample the conformation space efficiently at the cost of a correct thermodynamical description. We present here an extension of ART, the properly obeying probability ART (POP-ART), that obeys detailed balance and samples correctly the thermodynamic ensemble. Testing POP-ART on two model systems, a vacancy and an interstitial in crystalline silicon, we show that this method recovers the proper thermodynamical weights associated with the various accessible states and is significantly faster than molecular dynamics in the simulations of a vacancy below 700 K.

  5. Activated sampling in complex materials at finite temperature: the properly obeying probability activation-relaxation technique.

    PubMed

    Vocks, Henk; Chubynsky, M V; Barkema, G T; Mousseau, Normand

    2005-12-22

    While the dynamics of many complex systems is dominated by activated events, there are very few simulation methods that take advantage of this fact. Most of these procedures are restricted to relatively simple systems or, as with the activation-relaxation technique (ART), sample the conformation space efficiently at the cost of a correct thermodynamical description. We present here an extension of ART, the properly obeying probability ART (POP-ART), that obeys detailed balance and samples correctly the thermodynamic ensemble. Testing POP-ART on two model systems, a vacancy and an interstitial in crystalline silicon, we show that this method recovers the proper thermodynamical weights associated with the various accessible states and is significantly faster than molecular dynamics in the simulations of a vacancy below 700 K. PMID:16396563

  6. New high temperature plasmas and sample introduction systems for analytical atomic emission and mass spectrometry. Progress report, January 1, 1990--December 31, 1992

    SciTech Connect

    Montaser, A.

    1992-09-01

    New high temperature plasmas and new sample introduction systems are explored for rapid elemental and isotopic analysis of gases, solutions, and solids using mass spectrometry and atomic emission spectrometry. Emphasis was placed on atmospheric pressure He inductively coupled plasmas (ICP) suitable for atomization, excitation, and ionization of elements; simulation and computer modeling of plasma sources with potential for use in spectrochemical analysis; spectroscopic imaging and diagnostic studies of high temperature plasmas, particularly He ICP discharges; and development of new, low-cost sample introduction systems, and examination of techniques for probing the aerosols over a wide range. Refs., 14 figs. (DLC)

  7. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    NASA Technical Reports Server (NTRS)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform self-consistent atmospheric corrections necessary to retrieve cap emissivities from Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  8. Automated, low-temperature dielectric relaxation apparatus for measurement of air-sensitive, corrosive, hygroscopic, powdered samples

    NASA Astrophysics Data System (ADS)

    Bessonette, Paul W. R.; White, Mary Anne

    1999-07-01

    An automated apparatus for dielectric determinations on solid samples was designed to allow cryogenic measurements on air-sensitive, corrosive, hygroscopic, powdered samples, without determination of sample thickness, provided that it is uniform. A three-terminal design enabled measurements that were not affected by errors due to dimensional changes of the sample or the electrodes with changes in temperature. Meaningful dielectric data could be taken over the frequency range from 20 Hz to 1 MHz and the temperature range from 12 to 360 K. Tests with Teflon and with powdered NH4Cl gave results that were accurate within a few percent when compared with literature values.

  9. Spatiotemporal model or time series model for assessing city-wide temperature effects on mortality?

    PubMed

    Guo, Yuming; Barnett, Adrian G; Tong, Shilu

    2013-01-01

    Most studies examining the temperature-mortality association in a city used temperatures from one site or the average from a network of sites. This may cause measurement error as temperature varies across a city due to effects such as urban heat islands. We examined whether spatiotemporal models using spatially resolved temperatures produced different associations between temperature and mortality compared with time series models that used non-spatial temperatures. We obtained daily mortality data in 163 areas across Brisbane city, Australia from 2000 to 2004. We used ordinary kriging to interpolate spatial temperature variation across the city based on 19 monitoring sites. We used a spatiotemporal model to examine the impact of spatially resolved temperatures on mortality. Also, we used a time series model to examine non-spatial temperatures using a single site and the average temperature from three sites. We used squared Pearson scaled residuals to compare model fit. We found that kriged temperatures were consistent with observed temperatures. Spatiotemporal models using kriged temperature data yielded slightly better model fit than time series models using a single site or the average of three sites' data. Despite this better fit, spatiotemporal and time series models produced similar associations between temperature and mortality. In conclusion, time series models using non-spatial temperatures were equally good at estimating the city-wide association between temperature and mortality as spatiotemporal models. PMID:23026801
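
    The spatial interpolation step described above can be sketched in a few lines; Gaussian process regression stands in here for ordinary kriging (the two are closely related formulations), and the site coordinates and temperatures are synthetic placeholders rather than the Brisbane monitoring data.

      # Illustrative sketch only: interpolate one day's temperature across a city
      # from a handful of monitoring sites. Coordinates and temperatures are
      # made-up placeholders; kernel choices are arbitrary.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)
      site_xy = rng.uniform(0, 30, size=(19, 2))                       # 19 monitoring sites (km)
      site_temp = 24 + 0.1 * site_xy[:, 0] + rng.normal(0, 0.3, 19)    # one day's temperatures

      kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.1)
      gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(site_xy, site_temp)

      # predict temperature at the centroid of each small area (placeholder locations)
      area_xy = rng.uniform(0, 30, size=(163, 2))
      area_temp, area_sd = gp.predict(area_xy, return_std=True)
      print(area_temp[:5], area_sd[:5])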

  10. High temperature furnace modeling and performance verifications

    NASA Technical Reports Server (NTRS)

    Smith, James E., Jr.

    1988-01-01

    Analytical, numerical and experimental studies were performed on two classes of high temperature materials processing furnaces. The research concentrates on a commercially available high temperature furnace using zirconia as the heating element and an arc furnace based on a ST International tube welder. The zirconia furnace was delivered and work is progressing on schedule. The work on the arc furnace was initially stalled due to the unavailability of the NASA prototype, which is actively being tested aboard the KC-135 experimental aircraft. A proposal was written and funded to purchase an additional arc welder to alleviate this problem. The ST International weld head and power supply were received and testing will begin in early November. The first 6 months of the grant are covered.

  11. Temperature-variable high-frequency dynamic modeling of PIN diode

    NASA Astrophysics Data System (ADS)

    Shangbin, Ye; Jiajia, Zhang; Yicheng, Zhang; Yongtao, Yao

    2016-04-01

    The PIN diode model for high-frequency dynamic transient characteristic simulation is important in conducted EMI analysis. The model should take junction temperature into consideration, since equipment usually operates over a wide range of temperatures. In this paper, a temperature-variable high-frequency dynamic model for the PIN diode is built, based on the Laplace-transform analytical model at constant temperature. The relationship between model parameters and temperature is expressed as temperature functions derived by analyzing the physical principles underlying these parameters. A fast recovery power diode MUR1560 is chosen as the test sample and its dynamic performance is tested under inductive load in a temperature chamber experiment, which is used for model parameter extraction and model verification. Results show that the model proposed in this paper is accurate for reverse recovery simulation, with relatively small errors over the temperature range from 25 to 120 °C. Project supported by the National High Technology and Development Program of China (No. 2011AA11A265).

  12. Modeling Climate Change Effects on Stream Temperatures in Regulated Rivers

    NASA Astrophysics Data System (ADS)

    Null, S. E.; Akhbari, M.; Ligare, S. T.; Rheinheimer, D. E.; Peek, R.; Yarnell, S. M.; Viers, J. H.

    2013-12-01

    We provide a method for examining mesoscale stream temperature objectives downstream of dams with anticipated climate change using an integrated multi-model approach. Changing hydroclimatic conditions will likely impact stream temperatures within reservoirs and below dams, and affect downstream ecology. We model hydrology and water temperature using a series of linked models that includes a hydrology model to predict natural unimpaired flows in upstream reaches, a reservoir temperature simulation model, an operations model to simulate reservoir releases, and a stream temperature simulation model to simulate downstream conditions. All models are 1-dimensional and operate on either a weekly or daily timestep. First, we model reservoir thermal dynamics and release operations of hypothetical reservoirs of different sizes, elevations, and latitudes with climate-forced inflow hydrologies to examine the potential to manage stream temperatures for coldwater habitat. Results are presented as stream temperature change from the historical time period and indicate that reservoir releases are cooler than upstream conditions, although the absolute temperatures of reaches below dams warm with climate change. We also apply our method to a case study in California's Yuba River watershed to evaluate water regulation and hydropower operation effects on stream temperatures with climate change. Catchments of the upper Yuba River are highly engineered, with multiple, interconnected infrastructure to provide hydropower, water supply, flood control, environmental flows, and recreation. Results illustrate climate-driven versus operations-driven changes to stream temperatures. This work highlights the need for methods to consider reservoir regulation effects on stream temperatures with climate change, particularly for hydropower relicensing (which currently ignores climate change), such that impacts to other beneficial uses like coldwater habitat and instream ecosystems can be evaluated.

  13. An Accurate Temperature Correction Model for Thermocouple Hygrometers 1

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241

  14. An accurate temperature correction model for thermocouple hygrometers.

    PubMed

    Savage, M J; Cass, A; de Jager, J M

    1982-02-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38 degrees C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25 degrees C, if the calibration slopes are corrected for temperature. PMID:16662241

  15. Modeling the Orbital Sampling Effect of Extrasolar Moons

    NASA Astrophysics Data System (ADS)

    Heller, René; Hippke, Michael; Jackson, Brian

    2016-04-01

    The orbital sampling effect (OSE) appears in phase-folded transit light curves of extrasolar planets with moons. Analytical OSE models have hitherto neglected stellar limb darkening and non-zero transit impact parameters and assumed that the moon is on a circular, co-planar orbit around the planet. Here, we present an analytical OSE model for eccentric moon orbits, which we implement in a numerical simulator with stellar limb darkening that allows for arbitrary transit impact parameters. We also describe and publicly release a fully numerical OSE simulator (PyOSE) that can model arbitrary inclinations of the transiting moon orbit. Both our analytical solution for the OSE and PyOSE can be used to search for exomoons in long-term stellar light curves such as those by Kepler and the upcoming PLATO mission. Our updated OSE model offers an independent method for the verification of possible future exomoon claims via transit timing variations and transit duration variations. Photometrically quiet K and M dwarf stars are particularly promising targets for an exomoon discovery using the OSE.

  16. A Simple Dewar/Cryostat for Thermally Equilibrating Samples at Known Temperatures for Accurate Cryogenic Luminescence Measurements.

    PubMed

    Weaver, Phoebe G; Jagow, Devin M; Portune, Cameron M; Kenney, John W

    2016-01-01

    The design and operation of a simple liquid nitrogen Dewar/cryostat apparatus based upon a small fused silica optical Dewar, a thermocouple assembly, and a CCD spectrograph are described. The experiments for which this Dewar/cryostat is designed require fast sample loading, fast sample freezing, fast alignment of the sample, accurate and stable sample temperatures, and small size and portability of the Dewar/cryostat cryogenic unit. When coupled with the fast data acquisition rates of the CCD spectrograph, this Dewar/cryostat is capable of supporting cryogenic luminescence spectroscopic measurements on luminescent samples at a series of known, stable temperatures in the 77-300 K range. A temperature-dependent study of the oxygen quenching of luminescence in a rhodium(III) transition metal complex is presented as an example of the type of investigation possible with this Dewar/cryostat. In the context of this apparatus, a stable temperature for cryogenic spectroscopy means a luminescent sample that is thermally equilibrated with either liquid nitrogen or gaseous nitrogen at a known, measurable temperature that does not vary (ΔT < 0.1 K) during the short time scale (~1-10 sec) of the spectroscopic measurement by the CCD. The Dewar/cryostat works by taking advantage of the positive thermal gradient dT/dh that develops above the liquid nitrogen level in the Dewar, where h is the height of the sample above the liquid nitrogen level. The slow evaporation of the liquid nitrogen results in a slow increase in h over several hours and a consequent slow increase in the sample temperature T over this time period. A quickly acquired luminescence spectrum effectively catches the sample at a constant, thermally equilibrated temperature. PMID:27501355

  17. Wang-Landau sampling in face-centered-cubic hydrophobic-hydrophilic lattice model proteins

    NASA Astrophysics Data System (ADS)

    Liu, Jingfa; Song, Beibei; Yao, Yonglei; Xue, Yu; Liu, Wenjie; Liu, Zhaoxia

    2014-10-01

    Finding the global minimum-energy structure is one of the main problems of protein structure prediction. The face-centered-cubic (fcc) hydrophobic-hydrophilic (HP) lattice model can reach high approximation ratios of real protein structures, so the fcc lattice model is a good choice for predicting protein structures. The lack of an effective global optimization method is the key obstacle in solving this problem. The Wang-Landau sampling method is especially useful for complex systems with a rough energy landscape and has been successfully applied to many optimization problems. We apply the improved Wang-Landau (IWL) sampling method, which incorporates greedy-strategy generation of an initial conformation and a pull-move neighborhood strategy into the Wang-Landau sampling method, to predict protein structures on the fcc HP lattice model. Unlike conventional Monte Carlo simulations that generate a probability distribution at a given temperature, the Wang-Landau sampling method can estimate the density of states accurately via a random walk that produces a flat histogram in energy space. We test 12 general benchmark instances on both two-dimensional and three-dimensional (3D) fcc HP lattice models. The lowest energies found by the IWL sampling method are as good as or better than those of other methods in the literature for all instances. We then test five sets of larger-scale instances, denoted by the S, R, F90, F180, and CASP target instances, on the 3D fcc HP lattice model. The numerical results show that our algorithm performs better than the other five methods in the literature on both the lowest energies and the average lowest energies over all runs. The IWL sampling method turns out to be a powerful tool for studying structure prediction of fcc HP lattice model proteins.
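
    The core Wang-Landau idea referenced above can be shown on a much simpler system than the fcc HP lattice model. The sketch below (a toy 2D Ising lattice, not the protein model, and without the greedy/pull-move extensions of IWL) performs the flat-histogram random walk that iteratively refines an estimate of the density of states g(E); the lattice size, sweep count, and flatness criterion are arbitrary illustrative choices.

      # Toy Wang-Landau sketch on a small periodic 2D Ising lattice: a random walk
      # in energy space biased by 1/g(E), with g(E) updated on the fly until the
      # visit histogram is roughly flat, then the modification factor is reduced.
      import numpy as np

      L = 8
      spins = np.random.choice([-1, 1], size=(L, L))

      def total_energy(s):
          return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

      energies = np.arange(-2 * L * L, 2 * L * L + 1, 4)   # attainable energies (even L)
      index = {int(e): i for i, e in enumerate(energies)}
      ln_g = np.zeros(energies.size)                       # log density of states
      hist = np.zeros(energies.size)

      ln_f = 1.0                                           # modification factor
      E = int(total_energy(spins))
      while ln_f > 1e-3:
          for _ in range(10000):
              i, j = np.random.randint(L, size=2)
              nb = spins[(i+1) % L, j] + spins[(i-1) % L, j] + spins[i, (j+1) % L] + spins[i, (j-1) % L]
              E_new = E + int(2 * spins[i, j] * nb)
              # accept with probability min(1, g(E)/g(E_new))
              if np.log(np.random.rand()) < ln_g[index[E]] - ln_g[index[E_new]]:
                  spins[i, j] *= -1
                  E = E_new
              ln_g[index[E]] += ln_f                       # update g at the current energy
              hist[index[E]] += 1
          visited = hist > 0
          if hist[visited].min() > 0.8 * hist[visited].mean():   # loose flatness check
              hist[:] = 0
              ln_f /= 2.0
      print("relative ln g(E) over visited energies estimated")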

  18. Laser-induced breakdown spectroscopy on metallic samples at very low temperature in different ambient gas pressures

    NASA Astrophysics Data System (ADS)

    El-Saeid, R. H.; Abdelhamid, M.; Harith, M. A.

    2016-02-01

    Analysis of metals at very low temperature adopting laser-induced breakdown spectroscopy (LIBS) is greatly beneficial in space exploration expeditions and in some important industrial applications. In the present work, the effect of very low sample temperature on the spectral emission intensity of laser-induced plasma under both atmospheric pressure and vacuum has been studied for different bronze alloy samples. The sample was cooled down to the liquid nitrogen (LN) temperature of 77 K in a special vacuum chamber. Laser-induced plasma was produced on the sample surface using the fundamental wavelength of a Nd:YAG laser. The optical emission from the plasma was collected by an optical fiber and analyzed by an echelle spectrometer combined with an intensified CCD camera. The integrated intensities of certain spectral emission lines of Cu, Pb, Sn, and Zn were estimated from the obtained LIBS spectra and compared with those measured at room temperature. The laser-induced plasma parameters (electron number density Ne and electron temperature Te) were investigated at room and liquid nitrogen temperatures for both atmospheric pressure and vacuum ambient conditions. The results suggest that reducing the sample temperature leads to a decrease in the emission line intensities under both environments. Plasma parameters were found to decrease at atmospheric pressure but to increase under vacuum conditions.

  19. A thermocouple-based remote temperature controller of an electrically floated sample to study plasma CVD growth of carbon nanotube

    NASA Astrophysics Data System (ADS)

    Miura, Takuya; Xie, Wei; Yanase, Takashi; Nagahama, Taro; Shimada, Toshihiro

    2015-09-01

    Plasma chemical vapor deposition (CVD) is now gathering attention from a novel viewpoint, because it is easy to combine plasma processes and electrochemistry by applying a bias voltage to the sample. In order to explore electrochemistry during plasma CVD, the temperature of the sample must be controlled precisely. In traditional equipment, the sample temperature is measured by a radiation thermometer. Since the emissivity of the sample surface changes in the course of CVD growth, it is difficult to measure the exact temperature using the radiation thermometer. In this work, we developed new equipment to control the temperature of electrically floated samples by a thermocouple with Wi-Fi transmission. The growth of carbon nanotubes (CNTs) was investigated using our plasma CVD equipment. We examined the temperature accuracy and stability controlled by the thermocouple while monitoring the radiation thermometer. We noticed that the thermocouple readings were stable, whereas the readings of the radiation thermometer changed significantly (by about 20 °C) during plasma CVD. This result clearly shows that the sample temperature should be measured with a direct connection. In the CVD experiments, different carbon structures, including CNTs, were obtained by changing the bias voltage.

  20. Temperature Dependent Constitutive Modeling for Magnesium Alloy Sheet

    SciTech Connect

    Lee, Jong K.; Lee, June K.; Kim, Hyung S.; Kim, Heon Y.

    2010-06-15

    Magnesium alloys have been increasingly used in automotive and electronic industries because of their excellent strength-to-weight ratio and EMI shielding properties. However, magnesium alloys have low formability at room temperature due to their unique mechanical behavior (twinning and untwinning), prompting forming at elevated temperature. In this study, a temperature-dependent constitutive model for magnesium alloy (AZ31B) sheet is developed. A hardening law based on a nonlinear kinematic hardening model is used to properly account for the Bauschinger effect. Material parameters are determined from a series of uniaxial cyclic experiments (T-C-T or C-T-C) with temperatures ranging from 150 to 250 °C. The influence of temperature on the constitutive equation is introduced through material parameters assumed to be functions of temperature. The fitting process of the assumed model to measured data is presented and the results are compared.

  1. Temperature Dependent Constitutive Modeling for Magnesium Alloy Sheet

    NASA Astrophysics Data System (ADS)

    Lee, Jong K.; Lee, June K.; Kim, Hyung S.; Kim, Heon Y.

    2010-06-01

    Magnesium alloys have been increasingly used in automotive and electronic industries because of their excellent strength-to-weight ratio and EMI shielding properties. However, magnesium alloys have low formability at room temperature due to their unique mechanical behavior (twinning and untwinning), prompting forming at elevated temperature. In this study, a temperature-dependent constitutive model for magnesium alloy (AZ31B) sheet is developed. A hardening law based on a nonlinear kinematic hardening model is used to properly account for the Bauschinger effect. Material parameters are determined from a series of uniaxial cyclic experiments (T-C-T or C-T-C) with temperatures ranging from 150 to 250 °C. The influence of temperature on the constitutive equation is introduced through material parameters assumed to be functions of temperature. The fitting process of the assumed model to measured data is presented and the results are compared.
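
    A minimal sketch of the kind of model described above is given below: a uniaxial nonlinear kinematic hardening (Armstrong-Frederick-type) law whose yield stress, hardening modulus, and recovery coefficient are written as functions of temperature, integrated over a tension-compression-tension strain cycle. The parameter values, their linear temperature dependence, and the linearized return step are illustrative assumptions, not the calibrated AZ31B model.

      # Illustrative uniaxial Armstrong-Frederick kinematic hardening with
      # temperature-dependent parameters; explicit, linearized return mapping.
      import numpy as np

      def params(T):
          """Hypothetical linear temperature dependence (T in deg C)."""
          sigma_y = 180.0 - 0.4 * (T - 150.0)    # initial yield stress, MPa
          C       = 9000.0 - 20.0 * (T - 150.0)  # kinematic hardening modulus, MPa
          gamma   = 60.0 + 0.1 * (T - 150.0)     # dynamic recovery coefficient
          return sigma_y, C, gamma

      def cyclic_response(strain_path, T, E=45000.0):
          """Stress history for a prescribed total-strain path at temperature T."""
          sigma_y, C, gamma = params(T)
          sigma, alpha = 0.0, 0.0
          out = []
          for i in range(1, len(strain_path)):
              d_eps = strain_path[i] - strain_path[i - 1]
              sigma_trial = sigma + E * d_eps
              f = abs(sigma_trial - alpha) - sigma_y
              if f <= 0.0:                        # elastic step
                  sigma = sigma_trial
              else:                               # plastic step, AF backstress evolution
                  n = np.sign(sigma_trial - alpha)
                  d_lam = f / (E + C - gamma * n * alpha)   # linearized consistency
                  alpha += (C * n - gamma * alpha) * d_lam
                  sigma = sigma_trial - E * d_lam * n
              out.append(sigma)
          return np.array(out)

      # tension-compression-tension cycle at 200 deg C
      path = np.concatenate([np.linspace(0, 0.02, 100),
                             np.linspace(0.02, -0.02, 200),
                             np.linspace(-0.02, 0.02, 200)])
      stress = cyclic_response(path, T=200.0)
      print(stress.min(), stress.max())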

  2. Simulation of soil temperature dynamics with models using different concepts.

    PubMed

    Sándor, Renáta; Fodor, Nándor

    2012-01-01

    This paper presents two soil temperature models with empirical and mechanistic concepts. At the test site (calcaric arenosol), meteorological parameters as well as soil moisture content and temperature at 5 different depths were measured in an experiment with 8 parcels realizing the combinations of the fertilized, nonfertilized, irrigated, nonirrigated treatments in two replicates. Leaf area dynamics was also monitored. Soil temperature was calculated with the original and a modified version of CERES as well as with the HYDRUS-1D model. The simulated soil temperature values were compared to the observed ones. The vegetation reduced both the average soil temperature and its diurnal amplitude; therefore, considering the leaf area dynamics is important in modeling. The models underestimated the actual soil temperature and overestimated the temperature oscillation within the winter period. All models failed to account for the insulation effect of snow cover. The modified CERES provided explicitly more accurate soil temperature values than the original one. Though HYDRUS-1D provided more accurate soil temperature estimations, its superiority to CERES is not unequivocal as it requires more detailed inputs. PMID:22792047

  3. Simulation of Soil Temperature Dynamics with Models Using Different Concepts

    PubMed Central

    Sándor, Renáta; Fodor, Nándor

    2012-01-01

    This paper presents two soil temperature models with empirical and mechanistic concepts. At the test site (calcaric arenosol), meteorological parameters as well as soil moisture content and temperature at 5 different depths were measured in an experiment with 8 parcels realizing the combinations of the fertilized, nonfertilized, irrigated, nonirrigated treatments in two replicates. Leaf area dynamics was also monitored. Soil temperature was calculated with the original and a modified version of CERES as well as with the HYDRUS-1D model. The simulated soil temperature values were compared to the observed ones. The vegetation reduced both the average soil temperature and its diurnal amplitude; therefore, considering the leaf area dynamics is important in modeling. The models underestimated the actual soil temperature and overestimated the temperature oscillation within the winter period. All models failed to account for the insulation effect of snow cover. The modified CERES provided explicitly more accurate soil temperature values than the original one. Though HYDRUS-1D provided more accurate soil temperature estimations, its superiority to CERES is not unequivocal as it requires more detailed inputs. PMID:22792047
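
    Not the CERES or HYDRUS-1D schemes themselves, but a minimal analytical sketch of the conduction solution that empirical soil temperature models build on: a surface sinusoid damped and phase-shifted with depth according to the soil's thermal diffusivity. The mean temperature, amplitude, and diffusivity below are assumed placeholder values.

      # Classic damped-sinusoid soil temperature profile T(z, t).
      import numpy as np

      def soil_temperature(z, t_days, T_mean=11.0, A0=9.0,
                           D=0.07e-6 * 86400.0, period=365.0):
          """
          Depth z in m, time t in days. D is thermal diffusivity in m^2/day
          (0.07e-6 m^2/s assumed); T_mean and A0 are annual mean and amplitude, deg C.
          """
          d = np.sqrt(D * period / np.pi)          # damping depth
          omega = 2.0 * np.pi / period
          return T_mean + A0 * np.exp(-z / d) * np.sin(omega * t_days - z / d)

      # annual course at 5 cm and 50 cm depth: the deeper level is damped and delayed
      t = np.arange(365.0)
      print(soil_temperature(0.05, t).max(), soil_temperature(0.50, t).max())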

  4. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km²) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km² cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
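
    The iterative "pick the most dissimilar remaining site" logic can be illustrated without the Maxent software itself. The simplified sketch below greedily selects eight sites whose standardized environmental factors are farthest from those already chosen; the candidate grid and factor values are synthetic placeholders, and Euclidean dissimilarity stands in for the model-based dissimilarity used in the study.

      # Greedy maximum-dissimilarity site selection on standardized environmental factors.
      import numpy as np

      rng = np.random.default_rng(1)
      # candidate sites described by 4 factors: mean annual temperature,
      # precipitation, elevation, vegetation class (coded) -- synthetic values
      env = rng.normal(size=(1000, 4))
      env = (env - env.mean(axis=0)) / env.std(axis=0)

      # start near the environmental centroid, then add the most dissimilar site each round
      selected = [int(np.argmin(np.linalg.norm(env - env.mean(axis=0), axis=1)))]
      while len(selected) < 8:
          d_to_sel = np.min(
              np.linalg.norm(env[:, None, :] - env[selected][None, :, :], axis=2), axis=1
          )
          d_to_sel[selected] = -np.inf            # never re-pick a site
          selected.append(int(np.argmax(d_to_sel)))
      print("selected site indices:", selected)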

  5. Advanced flight design systems subsystem performance models. Sample model: Environmental analysis routine library

    NASA Technical Reports Server (NTRS)

    Parker, K. C.; Torian, J. G.

    1980-01-01

    A sample environmental control and life support model performance analysis using the environmental analysis routines library is presented. An example of a complete model set up and execution is provided. The particular model was synthesized to utilize all of the component performance routines and most of the program options.

  6. Modelling temperature and concentration dependent solid/liquid interfacial energies

    NASA Astrophysics Data System (ADS)

    Lippmann, Stephanie; Jung, In-Ho; Paliwal, Manas; Rettenmayr, Markus

    2016-01-01

    Models for the prediction of the solid/liquid interfacial energy in pure substances and binary alloys, respectively, are reviewed and extended regarding the temperature and concentration dependence of the required thermodynamic entities. A CALPHAD-type thermodynamic database is used to introduce temperature and concentration dependent melting enthalpies and entropies for multicomponent alloys in the temperature range between liquidus and solidus. Several suitable models are extended and employed to calculate the temperature and concentration dependent interfacial energy for Al-FCC with their respective liquids and compared with experimental data.
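
    As a hedged, concrete illustration of what such models compute, the sketch below evaluates the classic Turnbull-type proportionality between the solid/liquid interfacial energy and the melting enthalpy, with the enthalpy and molar volume supplied as temperature-dependent functions of the sort a CALPHAD database would provide. The coefficient and property functions are placeholders, not the extended models or Al data of the paper.

      # sigma = alpha * dH_m / (N_A^(1/3) * V_m^(2/3)), with dH_m and V_m as
      # temperature-dependent inputs (here simple assumed functions).
      N_A = 6.02214076e23  # 1/mol

      def interfacial_energy(dH_m, V_m, alpha=0.45):
          """dH_m in J/mol, V_m in m^3/mol; returns sigma in J/m^2."""
          return alpha * dH_m / (N_A ** (1.0 / 3.0) * V_m ** (2.0 / 3.0))

      # hypothetical temperature dependence between solidus and liquidus
      def dH_m_of_T(T):  return 10.7e3 * (0.8 + 0.2 * (T - 800.0) / 130.0)  # J/mol (assumed)
      def V_m_of_T(T):   return 1.0e-5 * (1.0 + 1.0e-4 * (T - 800.0))       # m^3/mol (assumed)

      for T in (820.0, 870.0, 920.0):
          print(T, interfacial_energy(dH_m_of_T(T), V_m_of_T(T)))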

  7. Sample Collection from Small Airless Bodies: Examination of Temperature Constraints for the TGIP Sample Collector for the Hera Near-Earth Asteroid Sample Return Mission

    NASA Technical Reports Server (NTRS)

    Franzen, M. A.; Roe, L. A.; Buffington, J. A.; Sears, D. W. G.

    2005-01-01

    There have been a number of missions that have explored the solar system with cameras and other instruments, but profound questions remain that can only be addressed through the analysis of returned samples. However, due to lack of appropriate technology, high cost, and high risk, sample return has only recently become a feasible part of robotic solar system exploration. One specific objective of the President's new vision is that robotic exploration of the solar system should enhance human exploration as it discovers and understands the solar system and searches for life and resources [1]. Missions to small bodies, asteroids and comets, will partially fill the huge technological void between missions to the Moon and missions to Mars. However, such missions must be low cost and inherently simple, so they can be applied routinely to many missions. Sample return from asteroids, comets, Mars, and Jupiter's moons will be an important and natural part of the human exploration of space effort. Here we describe the collector designed for the Hera Near-Earth Asteroid Sample Return Mission. We have built a small prototype for preliminary evaluation, but expect the final collector to gather approximately 100 g of sample, ranging from dust grains to centimeter-sized clasts, on each application to the surface of the asteroid.

  8. A time series model for influent temperature estimation: application to dynamic temperature modelling of an aerated lagoon.

    PubMed

    Escalas-Cañellas, Antoni; Abrego-Góngora, Carlos J; Barajas-López, María Guadalupe; Houweling, Dwight; Comeau, Yves

    2008-05-01

    Thirty-nine linear regression and time series models were built and calibrated for influent temperature (Ti) estimation at the primary aerated facultative lagoon of a municipal wastewater treatment plant. The models were based on mean daily ambient air temperature (Ta) and/or daily rainfall (P), and, optionally, wastewater temperature autoregression. The best fits were achieved with some time series models involving Ta and P, and Ti autoregression. The best-fit model was able to estimate influent temperature with a root-mean-square error of 0.5 degrees C and an R² of 0.925 for the calibration period of 10.5 months. In addition, a dynamic lagoon-temperature (Tw) model from the literature was modified in its terms of solar radiation and aeration latent heat, and applied to the primary lagoon. The model was fed with the estimated influent temperature, and five model parameters were identified by calibration against 10.5 months of Tw data. Dynamic lagoon-temperature estimation results were comparable to or better than other results of long-term simulations found in the literature. Sensitivity analyses were run on both models. Further validation with independent sets of data is needed for verification of the predictive capability of the models. PMID:18342909
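
    One plausible form of the time series models described above, sketched on synthetic data, is an autoregressive regression of influent temperature on same-day air temperature, rainfall, and the previous day's influent temperature. The coefficients, noise levels, and series below are made-up illustrations, not the paper's calibrated model.

      # ARX-style regression: Ti(t) ~ const + Ta(t) + P(t) + Ti(t-1), fit by OLS.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n = 320
      Ta = 20 + 8 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n)  # air temp
      P = rng.exponential(2.0, n) * (rng.random(n) < 0.3)                          # daily rainfall
      Ti = np.empty(n)
      Ti[0] = 22.0
      for t in range(1, n):   # synthetic "observed" influent temperature with autoregression
          Ti[t] = 0.6 * Ti[t - 1] + 0.35 * Ta[t] - 0.15 * P[t] + 1.5 + rng.normal(0, 0.3)

      X = sm.add_constant(np.column_stack([Ta[1:], P[1:], Ti[:-1]]))
      model = sm.OLS(Ti[1:], X).fit()
      print(model.params)                 # estimated coefficients
      print(np.sqrt(model.mse_resid))     # root-mean-square error of the fit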

  9. West Flank Coso, CA FORGE 3D temperature model

    DOE Data Explorer

    Doug Blankenship

    2016-03-01

    x,y,z data of the 3D temperature model for the West Flank Coso FORGE site. Model grid spacing is 250m. The temperature model for the Coso geothermal field used over 100 geothermal production sized wells and intermediate-depth temperature holes. At the near surface of this model, two boundary temperatures were assumed: (1) areas with surface manifestations, including fumaroles along the northeast striking normal faults and northwest striking dextral faults with the hydrothermal field, a temperature of ~104˚C was applied to datum at +1066 meters above sea level elevation, and (2) a near-surface temperature at about 10 meters depth, of 20˚C was applied below the diurnal and annual conductive temperature perturbations. These assumptions were based on heat flow studies conducted at the CVF and for the Mojave Desert. On the edges of the hydrothermal system, a 73˚C/km (4˚F/100’) temperature gradient contour was established using conductive gradient data from shallow and intermediate-depth temperature holes. This contour was continued to all elevation datums between the 20˚C surface and -1520 meters below mean sea level. Because the West Flank is outside of the geothermal field footprint, during Phase 1, the three wells inside the FORGE site were incorporated into the preexisting temperature model. To ensure a complete model was built based on all the available data sets, measured bottom-hole temperature gradients in certain wells were downward extrapolated to the next deepest elevation datum (or a maximum of about 25% of the well depth where conductive gradients are evident in the lower portions of the wells). After assuring that the margins of the geothermal field were going to be adequately modelled, the data was contoured using the Kriging method algorithm. Although the extrapolated temperatures and boundary conditions are not rigorous, the calculated temperatures are anticipated to be within ~6˚C (20˚F), or one contour interval, of the

  10. Multi-Relaxation Temperature-Dependent Dielectric Model of the Arctic Soil at Positive Temperatures

    NASA Astrophysics Data System (ADS)

    Savin, I. V.; Mironov, V. L.

    2014-11-01

    Frequency spectra of the dielectric permittivity of the Arctic soil of Alaska are investigated with allowance for the dipole and ionic relaxation of molecules of the soil moisture at frequencies from 40 MHz to 16 GHz and temperatures from -5 to +25 °C. A generalized temperature-dependent multi-relaxation refraction dielectric model of the humid Arctic soil is suggested.
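
    A generic sketch of the multi-relaxation form such a dielectric model takes is given below: a high-frequency limit plus two Debye relaxation terms (e.g. free and bound soil water) and a conductivity loss term, evaluated across the 40 MHz-16 GHz band. The parameter values are placeholders, not the fitted Arctic-soil coefficients.

      # Two-relaxation Debye-type complex permittivity with dc conductivity loss.
      import numpy as np

      EPS0 = 8.854e-12  # vacuum permittivity, F/m

      def permittivity(f_hz, eps_inf=4.9,
                       relaxations=((18.0, 9.0e-12), (6.0, 1.2e-9)), sigma=0.05):
          """Complex relative permittivity at frequency f_hz.
          relaxations: (delta_eps, tau_seconds) pairs; sigma: dc conductivity, S/m."""
          w = 2.0 * np.pi * np.asarray(f_hz)
          eps = eps_inf + 0j
          for d_eps, tau in relaxations:
              eps = eps + d_eps / (1.0 + 1j * w * tau)
          eps = eps - 1j * sigma / (w * EPS0)     # conductivity (ionic) loss
          return eps

      freqs = np.logspace(np.log10(40e6), np.log10(16e9), 200)
      eps = permittivity(freqs)
      print(eps.real[:3], (-eps.imag)[:3])        # real part and dielectric loss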

  11. Experiments and modeling of variably permeable carbonate reservoir samples in contact with CO₂-acidified brines

    SciTech Connect

    Smith, Megan M.; Hao, Yue; Mason, Harris E.; Carroll, Susan A.

    2014-12-31

    Reactive experiments were performed to expose sample cores from the Arbuckle carbonate reservoir to CO₂-acidified brine under reservoir temperature and pressure conditions. The samples consisted of dolomite with varying quantities of calcite and silica/chert. The timescales of monitored pressure decline across each sample in response to CO₂ exposure, as well as the amount and nature of dissolution features, varied widely among these three experiments. For all sample cores, the experimentally measured initial permeability was at least one order of magnitude lower than the values estimated from downhole methods. Nondestructive X-ray computed tomography (XRCT) imaging revealed dissolution features including “wormholes,” removal of fracture-filling crystals, and widening of pre-existing pore spaces. In the injection zone sample, multiple fractures may have contributed to the high initial permeability of this core and restricted the distribution of CO₂-induced mineral dissolution. In contrast, the pre-existing porosity of the baffle zone sample was much lower and less connected, leading to a lower initial permeability and contributing to the development of a single dissolution channel. While calcite may make up only a small percentage of the overall sample composition, its location and the effects of its dissolution have an outsized effect on permeability responses to CO₂ exposure. The XRCT data presented here are informative for building the model domain for numerical simulations of these experiments but require calibration by higher resolution means to confidently evaluate different porosity-permeability relationships.

  12. ACTINIDE REMOVAL PROCESS SAMPLE ANALYSIS, CHEMICAL MODELING, AND FILTRATION EVALUATION

    SciTech Connect

    Martino, C.; Herman, D.; Pike, J.; Peters, T.

    2014-06-05

    Filtration within the Actinide Removal Process (ARP) currently limits the throughput in interim salt processing at the Savannah River Site. In this process, batches of salt solution with Monosodium Titanate (MST) sorbent are concentrated by crossflow filtration. The filtrate is subsequently processed to remove cesium in the Modular Caustic Side Solvent Extraction Unit (MCU) followed by disposal in saltstone grout. The concentrated MST slurry is washed and sent to the Defense Waste Processing Facility (DWPF) for vitrification. During recent ARP processing, there has been a degradation of filter performance manifested as the inability to maintain high filtrate flux throughout a multi-batch cycle. The objectives of this effort were to characterize the feed streams, to determine if solids (in addition to MST) are precipitating and causing the degraded performance of the filters, and to assess the particle size and rheological data to address potential filtration impacts. Equilibrium modelling with OLI Analyzer™ and OLI ESP™ was performed to determine chemical components at risk of precipitation and to simulate the ARP process. The performance of ARP filtration was evaluated to review potential causes of the observed filter behavior. Task activities for this study included extensive physical and chemical analysis of samples from the Late Wash Pump Tank (LWPT) and the Late Wash Hold Tank (LWHT) within ARP as well as samples of the tank farm feed from Tank 49H. The samples from the LWPT and LWHT were obtained from several stages of processing of Salt Batch 6D, Cycle 6, Batch 16.

  13. Models for predicting temperature dependence of material properties of aluminum

    NASA Astrophysics Data System (ADS)

    Marla, Deepak; Bhandarkar, Upendra V.; Joshi, Suhas S.

    2014-03-01

    A number of processes, such as laser ablation, laser welding, and electric discharge machining, involve high temperatures. Most of these processes involve temperatures much higher than the target's melting point and normal boiling point. Such large variation in target temperature causes a significant variation in its material properties. Due to the unavailability of experimental data on material properties at elevated temperatures, data at lower temperatures are often erroneously extrapolated during modelling of these processes. Therefore, this paper attempts to evaluate the variation in material properties with temperature using some general and empirical theories, along with the available experimental data for aluminum. The evaluated properties of Al using the proposed models show a significant variation with temperature. Between room temperature and the near-critical temperature (0.9Tc), the surface reflectivity of Al varies from more than 90% to less than 50%, the absorption coefficient decreases by a factor of 7, thermal conductivity decreases by a factor of 5, density decreases by a factor of 4, and the specific heat and latent heat of vapourization vary by a factor between 1.5 and 2. Applying these temperature-dependent material properties to modelling laser ablation suggests that optical properties have a greater influence on the process than thermophysical properties. The numerical predictions of the phase explosion threshold in laser ablation are within 5% of the experimental values.

  14. Estimating Sampling Biases and Measurement Uncertainties of AIRS-AMSU-A Temperature and Water Vapor Observations Using MERRA Reanalysis

    NASA Technical Reports Server (NTRS)

    Hearty, Thomas J.; Savtchenko, Andrey K.; Tian, Baijun; Fetzer, Eric; Yung, Yuk L.; Theobald, Michael; Vollmer, Bruce; Fishbein, Evan; Won, Young-In

    2014-01-01

    We use MERRA (Modern Era Retrospective-Analysis for Research Applications) temperature and water vapor data to estimate the sampling biases of climatologies derived from the AIRS/AMSU-A (Atmospheric Infrared Sounder/Advanced Microwave Sounding Unit-A) suite of instruments. We separate the total sampling bias into temporal and instrumental components. The temporal component is caused by the AIRS/AMSU-A orbit and swath, which cannot sample all times and locations. The instrumental component is caused by scenes that prevent successful retrievals. The temporal sampling biases are generally smaller than the instrumental sampling biases except in regions with large diurnal variations, such as the boundary layer, where the temporal sampling biases of temperature can be ±2 K and water vapor can be 10% wet. The instrumental sampling biases are the main contributor to the total sampling biases and are mainly caused by clouds. They are up to 2 K cold and greater than 30% dry over mid-latitude storm tracks and tropical deep convective cloudy regions, and up to 20% wet over stratus regions. However, other factors such as surface emissivity and temperature can also influence the instrumental sampling bias over deserts, where the biases can be up to 1 K cold and 10% wet. Some instrumental sampling biases can vary seasonally and/or diurnally. We also estimate the combined measurement uncertainties of temperature and water vapor from AIRS/AMSU-A and MERRA by comparing similarly sampled climatologies from both data sets. The measurement differences are often larger than the sampling biases and have longitudinal variations.

  15. A stochastic model for the analysis of maximum daily temperature

    NASA Astrophysics Data System (ADS)

    Sirangelo, B.; Caloiero, T.; Coscarelli, R.; Ferrari, E.

    2016-08-01

    In this paper, a stochastic model for the analysis of daily maximum temperature is proposed. First, a deseasonalization procedure based on a truncated Fourier expansion is adopted. Then, Johnson transformation functions are applied for data normalization. Finally, a fractionally integrated autoregressive moving average model is used to reproduce both the short- and long-memory behavior of the temperature series. The model was applied to data from the Cosenza gauge (Calabria region) and verified on four other gauges of southern Italy. Through a Monte Carlo simulation procedure based on the proposed model, 10⁵ years of daily maximum temperature have been generated. Among the possible applications of the model, the occurrence probabilities of the annual maximum values have been evaluated. Moreover, the procedure was applied to the estimation of the return periods of long sequences of days with maximum temperature above prefixed thresholds.
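
    The first step of the procedure, deseasonalization by a truncated Fourier expansion, can be sketched in a few lines; the harmonic count, the synthetic series, and the least-squares fit below are illustrative choices, not the paper's calibrated Cosenza model.

      # Deseasonalize daily maximum temperature with a truncated Fourier expansion
      # of the annual cycle fitted by least squares.
      import numpy as np

      rng = np.random.default_rng(3)
      days = np.arange(3650)
      doy = days % 365
      tmax = 22 + 9 * np.sin(2 * np.pi * (doy - 110) / 365) + rng.normal(0, 2.5, days.size)

      def fourier_design(doy, n_harmonics=2):
          cols = [np.ones_like(doy, dtype=float)]
          for k in range(1, n_harmonics + 1):
              cols.append(np.cos(2 * np.pi * k * doy / 365.0))
              cols.append(np.sin(2 * np.pi * k * doy / 365.0))
          return np.column_stack(cols)

      X = fourier_design(doy)
      coef, *_ = np.linalg.lstsq(X, tmax, rcond=None)
      seasonal_mean = X @ coef
      residuals = tmax - seasonal_mean   # deseasonalized series, ready for normalization
                                         # and long-memory (ARFIMA-type) modeling
      print(residuals.std())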

  16. Ozone and temperature: A test of the consistency of models and observations in the middle atmosphere

    NASA Astrophysics Data System (ADS)

    Orris, Rebecca Lyn

    1997-08-01

    Several stratospheric monthly-, zonally-averaged satellite ozone and temperature datasets have been created, merged with other observational datasets, and extrapolated to form ozone climatologies, with coverage from the surface to 80 km and from 90°S to 90°N. Equilibrium temperatures in the stratosphere for each ozone dataset are calculated using a fixed dynamical heating (FDH) model and are compared with measured temperatures. An extensive study is conducted of the sensitivity of the modeled temperatures to uncertainties of inputs, with emphasis on the accuracy of the radiative transfer models, the uncertainty of the ozone mixing ratios, and inter-annual variability. We examine the long-term variability of the temperature with 25 years of data from the 3° resolution SKYHI GCM and find evidence of low frequency variation of the 3° model temperatures with a time scale of about 10 years. This long-term variability creates a significant source of uncertainty in our study, since dynamical heating rates derived from only 1 year of 1° SKYHI data are used. Most measured datasets are only available for a few years, which is an inadequate sample for averaging purposes. The uncertainty introduced into the comparison of FDH-modeled temperatures and measurements near 1 mb in the tropics due to interannual variability has a maximum of approximately ±8 K. Global-mean calculations on isobaric surfaces are shown to eliminate most of the interannual variability of the modeled and measured temperatures. Multiple years of global-mean UARS MLS temperatures, as well as MLS and LIMS temperatures at pressures of 1 mb and greater, agree to within ±2 K. For most months studied, global-mean Barnett and Corney (BC) temperatures are found to be significantly warmer (3.5-5 K) than either the MLS or LIMS temperatures between 2-10 mb. Comparisons of global-mean FDH-modeled temperatures with measured LIMS and MLS temperatures show the model is colder than measurements by 3-7 K. Consistency between

  17. Model of the magnetization of nanocrystalline materials at low temperatures

    NASA Astrophysics Data System (ADS)

    Bian, Q.; Niewczas, M.

    2014-07-01

    A theoretical model incorporating the material texture has been developed to simulate the magnetic properties of nanocrystalline materials at low temperatures, where the effect of thermal energy on magnetization is neglected. The method is based on Landau-Lifshitz-Gilbert (LLG) theory and describes the magnetization dynamics of individual grains in the effective field. The modified LLG equation incorporates the intrinsic fields from the intragrain magnetocrystalline and grain boundary anisotropies and the interacting fields from intergrain dipolar and exchange couplings between the neighbouring grains. The model is applied to study the magnetic properties of textured nanocrystalline Ni samples at 2 K and closely reproduces the hysteresis loop behaviour at different orientations of the applied magnetic field. Nanocrystalline Ni shows a grain boundary anisotropy constant K1s = -6.0 × 10⁴ J/m³ and an intergrain exchange coupling denoted by the effective exchange constant Ap = 2.16 × 10⁻¹¹ J/m. Analytical expressions to estimate the intergrain exchange energy density and the effective exchange constant have been formulated.

  18. A temperature dependent SPICE macro-model for power MOSFETs

    SciTech Connect

    Pierce, D.G.

    1992-05-01

    A power MOSFET macro-model for use with the circuit simulator SPICE has been developed suitable for use over the temperature range of −55 to 125 °C. The model is comprised of a single parameter set with the temperature dependence accessed through the SPICE TEMP card. This report describes in detail the development of the model and the extraction algorithms used to obtain model parameters. The extraction algorithms are described in sufficient detail to allow for automated measurements which in turn allows for rapid and cost effective development of an accurate SPICE model for any power MOSFET. 22 refs.

  19. A physically based model of global freshwater surface temperature

    NASA Astrophysics Data System (ADS)

    van Beek, Ludovicus P. H.; Eikelboom, Tessa; van Vliet, Michelle T. H.; Bierkens, Marc F. P.

    2012-09-01

    Temperature determines a range of physical properties of water and exerts a strong control on surface water biogeochemistry. Thus, in freshwater ecosystems the thermal regime directly affects the geographical distribution of aquatic species through their growth and metabolism and indirectly through their tolerance to parasites and diseases. Models used to predict surface water temperature range between physically based deterministic models and statistical approaches. Here we present the initial results of a physically based deterministic model of global freshwater surface temperature. The model adds a surface water energy balance to river discharge modeled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff, and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by shortwave and longwave radiation and sensible and latent heat fluxes. Also included are ice formation and its effect on heat storage and river hydraulics. We use the coupled surface water and energy balance model to simulate global freshwater surface temperature at daily time steps with a spatial resolution of 0.5° on a regular grid for the period 1976-2000. We opt to parameterize the model with globally available data and apply it without calibration in order to preserve its physical basis with the outlook of evaluating the effects of atmospheric warming on freshwater surface temperature. We validate our simulation results with daily temperature data from rivers and lakes (U.S. Geological Survey (USGS), limited to the USA) and compare mean monthly temperatures with those recorded in the Global Environment Monitoring System (GEMS) data set. Results show that the model is able to capture the mean monthly surface temperature for the majority of the GEMS stations, while the interannual variability as derived from the USGS and NOAA data was captured reasonably well. Results are poorest for
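
    A drastically simplified sketch of the kind of surface-water energy balance such a model solves at each time step is given below, for a single well-mixed water column: net shortwave and longwave radiation plus bulk sensible and latent heat fluxes drive the temperature change. The flux formulations, transfer coefficients, and forcing values are generic textbook choices, not the PCR-GLOBWB parameterization.

      # One explicit time step of a well-mixed water column energy balance.
      import numpy as np

      RHO_W, CP_W, SIGMA = 1000.0, 4186.0, 5.67e-8   # water density, heat capacity, Stefan-Boltzmann

      def step_temperature(T_w, depth, dt, sw_down, lw_down, T_air, wind, rh):
          """Advance water temperature T_w (deg C) over dt seconds."""
          T_w_K = T_w + 273.15
          sw_net = (1.0 - 0.06) * sw_down                       # water albedo ~0.06
          lw_net = 0.97 * (lw_down - SIGMA * T_w_K ** 4)        # emissivity ~0.97
          sensible = -1.2 * 1005.0 * 1.5e-3 * wind * (T_w - T_air)            # bulk formula
          e_sat = 611.0 * np.exp(17.27 * T_w / (T_w + 237.3))                 # Pa, at water temp
          e_air = rh * 611.0 * np.exp(17.27 * T_air / (T_air + 237.3))        # Pa, in the air
          latent = -1.2 * 2.45e6 * 1.5e-3 * wind * 0.622 * (e_sat - e_air) / 101325.0
          net = sw_net + lw_net + sensible + latent             # W/m^2
          return T_w + net * dt / (RHO_W * CP_W * depth)

      # a single synthetic day of hourly steps for a 2 m deep water column
      T = 12.0
      for hour in range(24):
          sw = max(0.0, 600.0 * np.sin(np.pi * (hour - 6) / 12.0)) if 6 <= hour <= 18 else 0.0
          T = step_temperature(T, depth=2.0, dt=3600.0, sw_down=sw,
                               lw_down=320.0, T_air=15.0, wind=3.0, rh=0.7)
      print(T)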

  20. Statistical Modeling of Daily Stream Temperature for Mitigating Fish Mortality

    NASA Astrophysics Data System (ADS)

    Caldwell, R. J.; Rajagopalan, B.

    2011-12-01

    Water allocations in the Central Valley Project (CVP) of California require the consideration of short- and long-term needs of many socioeconomic factors including, but not limited to, agriculture, urban use, flood mitigation/control, and environmental concerns. The Endangered Species Act (ESA) ensures that the decision-making process provides sufficient water to limit the impact on protected species, such as salmon, in the Sacramento River Valley. Current decision support tools in the CVP were deemed inadequate by the National Marine Fisheries Service due to the limited temporal resolution of forecasts for monthly stream temperature and fish mortality. Finer scale temporal resolution is necessary to account for the stream temperature variations critical to salmon survival and reproduction. In addition, complementary, long-range tools are needed for monthly and seasonal management of water resources. We will present a Generalized Linear Model (GLM) framework of maximum daily stream temperatures and related attributes, such as: daily stream temperature range, exceedance/non-exceedance of critical threshold temperatures, and the number of hours of exceedance. A suite of predictors that impact stream temperatures are included in the models, including current and prior day values of streamflow, water temperatures of upstream releases from Shasta Dam, air temperature, and precipitation. Monthly models are developed for each stream temperature attribute at the Balls Ferry gauge, an EPA compliance point for meeting temperature criteria. The statistical framework is also coupled with seasonal climate forecasts using a stochastic weather generator to provide ensembles of stream temperature scenarios that can be used for seasonal scale water allocation planning and decisions. Short-term weather forecasts can also be used in the framework to provide near-term scenarios useful for making water release decisions on a daily basis. The framework can be easily translated to other
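
    A sketch of what such a GLM framework can look like, on synthetic data, is given below: a Gaussian GLM for daily maximum stream temperature and a binomial (logistic) GLM for exceedance of a critical threshold, both driven by current- and prior-day predictors. The predictor set, coefficients, and the 18 degree exceedance threshold are assumptions for illustration, not the calibrated Balls Ferry models.

      # Gaussian GLM for daily max stream temperature and binomial GLM for exceedance.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      n = 400
      air = 18 + 8 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n)   # air temp
      flow = rng.gamma(3.0, 40.0, n)                      # streamflow (placeholder units)
      release = 10 + 0.2 * air + rng.normal(0, 0.5, n)    # dam release water temperature
      tmax = 0.45 * air + 0.3 * release - 0.004 * flow + 5 + rng.normal(0, 0.8, n)

      # predictors: current-day air temp, prior-day air temp, streamflow, release temp
      X = sm.add_constant(np.column_stack([air[1:], air[:-1], flow[1:], release[1:]]))
      gauss = sm.GLM(tmax[1:], X, family=sm.families.Gaussian()).fit()

      exceed = (tmax[1:] > 18.0).astype(float)            # exceedance of an assumed threshold
      logit = sm.GLM(exceed, X, family=sm.families.Binomial()).fit()
      print(gauss.params)
      print(logit.params)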

  1. Assessing Model Structural Uncertainty Using a Split Sample Approach for a Distributed Water Quality Model

    NASA Astrophysics Data System (ADS)

    Meixner, T.; van Griensven, A.

    2003-12-01

    A method for assessing model structural uncertainty as opposed to the more commonly investigated parameter uncertainty is presented that should aid in the development of improved water quality models. Elsewhere (see van Griensven and Meixner abstract, this session) we have developed a methodology (ParaSol) to estimate model parameter uncertainty. Uncertainty is typically estimated with a specific time period of data. However, from experience with model calibration problems we know that we need to employ split sample and other evaluation tests to estimate the confidence we should have in our models and our methods. Evaluation tests generally give us qualitative data about confidence in our models. Here we propose a method that uses the split sample approach to generate a quantitative estimate of model structural uncertainty. The Sources of Uncertainty Global Assessment using Split SamplES (SUNGLASSES) method is designed to assess predictive uncertainty that is not captured by parameter or physical input uncertainty. We assume that this additional uncertainty represents model structural error in how the model represents the physical, chemical, and biological processes incorporated into water quality models. This method operates by selecting a threshold for a sample statistic (bias in our case); when the sample statistic for a model simulation is below the threshold, the simulation is acceptable. Where this methodology differs from others is that the threshold is determined by evaluating whether the chosen threshold will capture simulations during an evaluation time period (hence split sample) that was not used to initially calibrate the model and generate parameter estimates. Most existing methods rely solely on sample statistics during a calibration period. The new method thus captures an element of predictive error that originates in the structural conception of the processes controlling water quality. The described method is applied to a Soil and Water Assessment Tool
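
    An illustrative sketch of a split-sample acceptance test on simulation bias, loosely following the idea described above; the arrays, ensemble size, and threshold value are hypothetical and not taken from the original application.

        # Accept simulations whose absolute bias stays below a threshold in both
        # the calibration and the evaluation (split-sample) periods.
        import numpy as np

        def bias(sim, obs):
            """Mean bias of a simulation relative to observations."""
            return np.mean(sim - obs)

        rng = np.random.default_rng(1)
        obs_cal = rng.normal(10.0, 2.0, 200)    # calibration-period observations
        obs_eval = rng.normal(10.5, 2.0, 100)   # evaluation-period observations

        # Ensemble of model simulations (e.g. different parameter sets)
        sims_cal = obs_cal + rng.normal(0.0, 1.0, (50, 200))
        sims_eval = obs_eval + rng.normal(0.3, 1.0, (50, 100))

        threshold = 0.5  # acceptability threshold on |bias|, chosen for illustration
        accepted_cal = np.abs([bias(s, obs_cal) for s in sims_cal]) < threshold
        accepted_eval = np.abs([bias(s, obs_eval) for s in sims_eval]) < threshold

        # Simulations acceptable in calibration but rejected in evaluation point to
        # structural (rather than parameter) uncertainty.
        print(f"accepted in calibration: {accepted_cal.sum()}/50, "
              f"in evaluation: {accepted_eval.sum()}/50")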

  2. Assimilation of Surface Temperature in Land Surface Models

    NASA Technical Reports Server (NTRS)

    Lakshmi, Venkataraman

    1998-01-01

    Hydrological models have been calibrated and validated using catchment streamflows. However, using a point measurement does not guarantee correct spatial distribution of model computed heat fluxes, soil moisture and surface temperatures. With the advent of satellites in the late 70s, surface temperature is being measured two to four times a day from various satellite sensors and different platforms. The purpose of this paper is to demonstrate the use of satellite surface temperature in (a) validation of model computed surface temperatures and (b) assimilation of satellite surface temperatures into a hydrological model in order to improve the prediction accuracy of soil moistures and heat fluxes. The assimilation is carried out by comparing the satellite and the model produced surface temperatures and setting the "true" temperature midway between the two values. Based on this "true" surface temperature, the physical relationships of water and energy balance are used to reset the other variables. This is a case of nudging the water and energy balance variables so that they are consistent with each other and the "true" surface temperature. The potential of this assimilation scheme is demonstrated in the form of various experiments that highlight different aspects. This study is carried out over the Red-Arkansas basin in the southern United States (a 5 deg X 10 deg area) over a time period of a year (August 1987 - July 1988). The land surface hydrological model is run on an hourly time step. The results show that satellite surface temperature assimilation improves the accuracy of the computed surface soil moisture remarkably.
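
    A minimal sketch of the "midway" nudging described above: the analysis temperature is set halfway between the model and satellite values, after which the balance terms are reset for consistency. All numbers and the simple bulk-flux adjustment shown are illustrative assumptions.

        import numpy as np

        t_model = np.array([295.0, 301.5, 288.2])   # model surface temperature (K)
        t_sat = np.array([297.0, 299.0, 290.0])     # satellite surface temperature (K)

        t_true = 0.5 * (t_model + t_sat)            # "true" temperature, midway

        # The other balance terms would then be reset for consistency, e.g. a crude
        # bulk sensible heat flux H = rho * cp * C_h * U * (Ts - Ta).
        rho, cp, c_h, wind = 1.2, 1004.0, 0.01, 3.0
        t_air = np.array([293.0, 296.0, 287.0])
        sensible = rho * cp * c_h * wind * (t_true - t_air)
        print(t_true, sensible)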

  3. A Model for Temperature Fluctuations in a Buoyant Plume

    NASA Astrophysics Data System (ADS)

    Bisignano, A.; Devenish, B. J.

    2015-11-01

    We present a hybrid Lagrangian stochastic model for buoyant plume rise from an isolated source that includes the effects of temperature fluctuations. The model is based on that of Webster and Thomson (Atmos Environ 36:5031-5042, 2002) in that it is a coupling of a classical plume model in a crossflow with stochastic differential equations for the vertical velocity and temperature (which are themselves coupled). The novelty lies in the addition of the latter stochastic differential equation. Parametrizations of the plume turbulence are presented that are used as inputs to the model. The root-mean-square temperature is assumed to be proportional to the difference between the centreline temperature of the plume and the ambient temperature. The constant of proportionality is tuned by comparison with equivalent statistics from large-eddy simulations (LES) of buoyant plumes in a uniform crossflow and linear stratification. We compare plume trajectories for a wide range of crossflow velocities and find that the model generally compares well with the equivalent LES results particularly when added mass is included in the model. The exception occurs when the crossflow velocity component becomes very small. Comparison of the scalar concentration, both in terms of the height of the maximum concentration and its vertical spread, shows similar behaviour. The model is extended to allow for realistic profiles of ambient wind and temperature and the results are compared with LES of the plume that emanated from the explosion and fire at the Buncefield oil depot in 2005.

  4. A physically based analytical spatial air temperature and humidity model

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Endreny, Theodore A.; Nowak, David J.

    2013-09-01

    Spatial variation of urban surface air temperature and humidity influences human thermal comfort, the settling rate of atmospheric pollutants, and plant physiology and growth. Given the lack of observations, we developed a Physically based Analytical Spatial Air Temperature and Humidity (PASATH) model. The PASATH model calculates spatial solar radiation and heat storage based on semiempirical functions and generates spatially distributed estimates based on inputs of topography, land cover, and the weather data measured at a reference site. The model assumes that for all grids under the same mesoscale climate, grid air temperature and humidity are modified by local variation in absorbed solar radiation and the partitioning of sensible and latent heat. The model uses a reference grid site for time series meteorological data and the air temperature and humidity of any other grid can be obtained by solving the heat flux network equations. PASATH was coupled with the USDA iTree-Hydro water balance model to obtain evapotranspiration terms and run from 20 to 29 August 2010 at a 360 m by 360 m grid scale and hourly time step across a 285 km² watershed including the urban area of Syracuse, NY. PASATH predictions were tested at nine urban weather stations representing variability in urban topography and land cover. The PASATH model predictive efficiency R2 ranged from 0.81 to 0.99 for air temperature and 0.77 to 0.97 for dew point temperature. PASATH is expected to have broad applications in environmental and ecological modeling.

  5. Temperature sensitivity of a numerical pollen forecast model

    NASA Astrophysics Data System (ADS)

    Scheifinger, Helfried; Meran, Ingrid; Szabo, Barbara; Gallaun, Heinz; Natali, Stefano; Mantovani, Simone

    2016-04-01

    Allergic rhinitis has become a global health problem, especially affecting children and adolescents. Timely and reliable warnings before an increase in the atmospheric pollen concentration provide substantial support for physicians and allergy sufferers. Recently developed numerical pollen forecast models have become a means of supporting the pollen forecast service, but they still require refinement. One of the problem areas concerns the correct timing of the beginning and end of the flowering period of the species under consideration, which is identical to the period of possible pollen emission. Both are governed essentially by the temperature accumulated before and during flowering. Phenological models are sensitive to a bias in the temperature. A mean bias of -1°C in the input temperature can shift the entry date of a phenological phase by about a week into the future. A bias of this order of magnitude is still possible in numerical weather forecast models. If the assimilation of additional temperature information (e.g. ground measurements as well as satellite-retrieved air/surface temperature fields) is able to reduce such systematic temperature deviations, the precision of the timing of phenological entry dates might be enhanced. With a number of sensitivity experiments, the effect of a possible temperature bias on the modelled phenology and the pollen concentration in the atmosphere is determined. The actual bias of the ECMWF IFS 2 m temperature will also be calculated and its effect on the numerical pollen forecast procedure presented.

  6. Modeling the Effect of Temperature on Ozone-Related Mortality.

    EPA Science Inventory

    Modeling the Effect of Temperature on Ozone-Related Mortality. Wilson, Ander, Reich, Brian J, Neas, Lucas M., Rappold, Ana G. Background: Previous studies show ozone and temperature are associated with increased mortality; however, the joint effect is not well explored. Underst...

  7. A generalized conditional heteroscedastic model for temperature downscaling

    NASA Astrophysics Data System (ADS)

    Modarres, R.; Ouarda, T. B. M. J.

    2014-11-01

    This study describes a method for deriving the time-varying second-order moment, or heteroscedasticity, of local daily temperature and its association with large-scale Canadian Coupled General Circulation Model predictors. This is carried out by applying a multivariate generalized autoregressive conditional heteroscedasticity (MGARCH) approach to construct the conditional variance-covariance structure between General Circulation Model (GCM) predictors and maximum and minimum temperature time series during 1980-2000. Two MGARCH specifications, namely diagonal VECH and dynamic conditional correlation (DCC), are applied, and 25 GCM predictors are selected for bivariate temperature heteroscedastic modeling. It is observed that the conditional covariance between predictors and temperature is not very strong and mostly depends on the interaction between the random processes governing the temporal variation of predictors and predictands. The DCC model reveals a time-varying conditional correlation between GCM predictors and temperature time series. No remarkable increasing or decreasing change is observed for correlation coefficients between GCM predictors and observed temperature during 1980-2000, while weak winter-summer seasonality is clear for both conditional covariance and correlation. Furthermore, the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) stationarity test and the Brock-Dechert-Scheinkman (BDS) nonlinearity test showed that the GCM predictors, temperature, and their conditional correlation time series are nonlinear but stationary during 1980-2000. However, the degree of nonlinearity of the temperature time series is higher than that of most of the GCM predictors.

  8. Note: A sample holder design for sensitive magnetic measurements at high temperatures in a magnetic properties measurement system

    SciTech Connect

    Arauzo, A.; Guerrero, E.; Urtizberea, A.; Stankiewicz, J.; Rillo, C.

    2012-06-15

    A sample holder design for high temperature measurements in a commercial MPMS SQUID magnetometer from Quantum Design is presented. It fulfills the requirements for the simultaneous use of the oven and reciprocating sample option (RSO), thus allowing sensitive magnetic measurements up to 800 K. Alternating current susceptibility can also be measured, since the holder does not induce any phase shift relative to the ac driven field. It is easily fabricated by twisting Constantan© wires into a braid nesting the sample inside. This design ensures that the sample is placed tightly into a tough holder with its orientation fixed, and prevents any sample displacement during the fast movements of the RSO transport, up to high temperatures.

  9. Statistical Modeling of Methane Production from Landfill Samples

    PubMed Central

    Gurijala, K. R.; Sa, P.; Robinson, J. A.

    1997-01-01

    Multiple-regression analysis was conducted to evaluate the simultaneous effects of 10 environmental factors on the rate of methane production (MR) from 38 municipal solid-waste (MSW) samples collected from the Fresh Kills landfill, which is the world's largest landfill. The analyses showed that volatile solids (VS), moisture content (MO), sulfate (SO₄²⁻), and the cellulose-to-lignin ratio (CLR) were significantly associated with MR from refuse. The remaining six factors did not show any significant effect on MR in the presence of the four significant factors. With the consideration of all possible linear, square, and cross-product terms of the four significant variables, a second-order statistical model was developed. This model incorporated linear terms of MO, VS, SO₄²⁻, and CLR, a square term of VS (VS²), and two cross-product terms, MO x CLR and VS x CLR. This model explained 95.85% of the total variability in MR as indicated by the coefficient of determination (R² value) and predicted 87% of the observed MR. Furthermore, the t statistics and their P values of least-squares parameter estimates and the coefficients of partial determination (R values) indicated that MO contributed the most (R = 0.7832, t = 7.60, and P = 0.0001), followed by VS, SO₄²⁻, VS², MO x CLR, and VS x CLR in that order, and that CLR contributed the least (R = 0.4050, t = -3.30, and P = 0.0045) to MR. The SO₄²⁻, VS², MO x CLR, and CLR showed an inhibitory effect on MR. The final fitted model captured the trends in the data by explaining the vast majority of variation in MR and successfully predicted most of the observed MR. However, more analyses with data from other landfills around the world are needed to develop a generalized model to accurately predict MSW methanogenesis. PMID:16535704
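
    A sketch of a second-order regression with the terms named above (linear MO, VS, SO₄²⁻, CLR; VS squared; MO x CLR and VS x CLR interactions), fitted with statsmodels to a synthetic data frame; the column values and coefficients are assumptions, not the Fresh Kills data.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(42)
        n = 38
        df = pd.DataFrame({
            "MO": rng.uniform(30, 70, n),      # moisture content (%)
            "VS": rng.uniform(40, 90, n),      # volatile solids (%)
            "SO4": rng.uniform(0, 5, n),       # sulfate
            "CLR": rng.uniform(0.5, 4, n),     # cellulose-to-lignin ratio
        })
        # Synthetic methane production rate (MR)
        df["MR"] = (0.05 * df["MO"] + 0.04 * df["VS"] - 0.3 * df["SO4"]
                    - 0.001 * df["VS"] ** 2 - 0.01 * df["MO"] * df["CLR"]
                    + rng.normal(0, 0.2, n))

        model = smf.ols("MR ~ MO + VS + SO4 + CLR + I(VS**2) + MO:CLR + VS:CLR",
                        data=df).fit()
        print(model.rsquared)
        print(model.pvalues)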

  10. Evaluation of CIRA temperature model with lidar and future perspectives

    NASA Astrophysics Data System (ADS)

    Keckhut, Philippe; Hauchecorne, Alain

    The CIRA model is widely used for many atmospheric applications. Many comparisons with temperature lidars have revealed similar biases and will be presented. The mean temperature alone is today not sufficient, and future models will require additional functionality. The use of statistical mean temperature fields requires some information about the variability in order to estimate the significance of comparisons with other sources. Some tentative estimates of such variability will be presented. Another crucial issue for temperature comparisons concerns the tidal variability; how this effect can be considered in a model will be discussed. Finally, the pertinence of statistical models in a changing atmosphere is also an issue that needs specific consideration.

  11. TEMPERATURE AND CONDUCTIVITY MODELING FOR THE BUFFALO RIVER

    EPA Science Inventory

    A hydrodynamic and water quality transport study of the Buffalo River has been conducted. Using a two-dimensional (laterally averaged) model and incorporating appropriate specification of boundary conditions, we simulated the transport of river water temperature and conductivity f...

  12. Application of a temperature-dependent fluorescent dye (Rhodamine B) to the measurement of radiofrequency radiation-induced temperature changes in biological samples.

    PubMed

    Chen, Yuen Y; Wood, Andrew W

    2009-10-01

    We have applied a non-contact method for studying the temperature changes produced by radiofrequency (RF) radiation specifically to small biological samples. A temperature-dependent fluorescent dye, Rhodamine B, as imaged by laser scanning confocal microscopy (LSCM) was used to do this. The results were calibrated against real-time temperature measurements from fiber optic probes, with a calibration factor of 3.4% intensity change per degree C and a reproducibility of +/-6%. This non-contact method provided two-dimensional and three-dimensional images of temperature change and distributions in biological samples, at a spatial resolution of a few micrometers and with an estimated absolute precision of around 1.5 degrees C, with a differential precision of 0.4 degree C. Temperature rise within tissue was found to be non-uniform. Estimates of specific absorption rate (SAR) from absorbed power measurements were greater than those estimated from rate of temperature rise, measured at 1 min intervals, probably because this interval is too long to permit accurate estimation of initial temperature rise following start of RF exposure. Future experiments will aim to explore this. PMID:19507188

  13. Experimental design applied to the optimization of pyrolysis and atomization temperatures for As measurement in water samples by GFAAS

    NASA Astrophysics Data System (ADS)

    Ávila, Akie K.; Araujo, Thiago O.; Couto, Paulo R. G.; Borges, Renata M. H.

    2005-10-01

    In general, research experimentation is used mainly when new methodologies are being developed or existing ones are being improved. The characteristics of any method depend on its factors or components. The planning and analysis of experiments are basically used to improve the analytical conditions of methods, to reduce experimental labour with a minimum of tests, and to optimize the use of resources (reagents, time of analysis, availability of the equipment, operator time, etc.). These techniques are applied by identifying the variables (control factors) of a process that have the most influence on the response of the parameters of interest, by assigning values to the influential variables so that the variability of the response is minimized or the obtained value (quality parameter) is very close to the nominal value, and by assigning values to the influential variables so that the effects of uncontrollable variables are reduced. In this central composite design (CCD), four permanent modifiers (Pd, Ir, W and Rh) and one combined permanent modifier W+Ir were studied. The study selected two factors, pyrolysis and atomization temperatures, at five different levels for all the possible combinations. The pyrolysis temperatures with different permanent modifiers varied from 600 °C to 1600 °C with hold times of 25 s, while atomization temperatures ranged between 1900 °C and 2280 °C. The characteristic masses for As were in the range of 31 pg to 81 pg. Assuming the best conditions obtained from the CCD, it was possible to estimate the measurement uncertainty of As determination in water samples. The results showed that, considering the main uncertainty sources such as the repeatability of measurement inherent in the equipment, the calibration curve which evaluates the adjustment of the mathematical model to the results, and the calibration standards concentrations, the values obtained were similar to international
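
    A small sketch of the two-factor, five-level temperature grid described above; the intermediate level values are placeholders interpolated within the quoted ranges, not the levels used in the study.

        import itertools

        pyrolysis_levels = [600, 850, 1100, 1350, 1600]      # deg C, assumed levels
        atomization_levels = [1900, 1995, 2090, 2185, 2280]  # deg C, assumed levels

        runs = list(itertools.product(pyrolysis_levels, atomization_levels))
        for i, (t_pyr, t_atom) in enumerate(runs, start=1):
            print(f"run {i:2d}: pyrolysis {t_pyr} C, atomization {t_atom} C")
        print(f"total combinations: {len(runs)}")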

  14. The influence of model resolution on temperature variability

    NASA Astrophysics Data System (ADS)

    Klavans, Jeremy M.; Poppick, Andrew; Sun, Shanshan; Moyer, Elisabeth J.

    2016-08-01

    Understanding future changes in climate variability, which can impact human activities, is a current research priority. It is often assumed that a key part of this effort involves improving the spatial resolution of climate models; however, few previous studies comprehensively evaluate the effects of model resolution on variability. In this study, we systematically examine the sensitivity of temperature variability to horizontal atmospheric resolution in a single model (CCSM3, the Community Climate System Model 3) at three different resolutions (T85, T42, and T31), using spectral analysis to describe the frequency dependence of differences. We find that in these runs, increased model resolution is associated with reduced temperature variability at all but the highest frequencies (2-5 day periods), though with strong regional differences. (In the tropics, where temperature fluctuations are smallest, increased resolution is associated with increased variability.) At all resolutions, temperature fluctuations in CCSM3 are highly spatially correlated, implying that the changes in variability with model resolution are driven by alterations in large-scale phenomena. Because CCSM3 generally overestimates temperature variability relative to reanalysis output, the reductions in variability associated with increased resolution tend to improve model fidelity. However, the resolution-related variability differences are relatively uniform with frequency, whereas the sign of model bias changes at interannual frequencies. This discrepancy raises questions about the mechanisms underlying the improvement at subannual frequencies. The consistent response across frequencies also implies that the atmosphere plays a significant role in interannual variability.
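
    A minimal sketch of the kind of spectral comparison described above: Welch power spectra of two synthetic daily temperature series standing in for model output at two resolutions. The data are random and for illustration only.

        import numpy as np
        from scipy.signal import welch

        rng = np.random.default_rng(0)
        n_days = 3650                        # ten years of daily values
        t = np.arange(n_days)
        seasonal = 10.0 * np.sin(2 * np.pi * t / 365.25)
        coarse = seasonal + rng.normal(0.0, 2.0, n_days)   # larger high-frequency variance
        fine = seasonal + rng.normal(0.0, 1.5, n_days)     # smaller high-frequency variance

        for label, series in [("coarse", coarse), ("fine", fine)]:
            freqs, psd = welch(series, fs=1.0, nperseg=365)
            # 2-5 day periods correspond to 0.2-0.5 cycles per day
            print(label, "mean PSD at 2-5 day periods:",
                  psd[(freqs > 0.2) & (freqs < 0.5)].mean())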

  15. Estimation of effective temperatures in quantum annealers for sampling applications: A case study with possible applications in deep learning

    NASA Astrophysics Data System (ADS)

    Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro

    2016-08-01

    An increase in the efficiency of sampling from Boltzmann distributions would have a significant impact on deep learning and other machine-learning applications. Recently, quantum annealers have been proposed as a potential candidate to speed up this task, but several limitations still bar these state-of-the-art technologies from being used effectively. One of the main limitations is that, while the device may indeed sample from a Boltzmann-like distribution, quantum dynamical arguments suggest it will do so with an instance-dependent effective temperature, different from its physical temperature. Unless this unknown temperature can be unveiled, it might not be possible to effectively use a quantum annealer for Boltzmann sampling. In this work, we propose a strategy to overcome this challenge with a simple effective-temperature estimation algorithm. We provide a systematic study assessing the impact of the effective temperatures in the learning of a special class of a restricted Boltzmann machine embedded on quantum hardware, which can serve as a building block for deep-learning architectures. We also provide a comparison to k-step contrastive divergence (CD-k) with k up to 100. Although assuming a suitable fixed effective temperature also allows us to outperform one-step contrastive divergence (CD-1), only when using an instance-dependent effective temperature do we find a performance close to that of CD-100 for the case studied here.
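
    A compact numpy sketch of one CD-1 update for a small binary restricted Boltzmann machine, the classical building block referred to above; sizes and the learning rate are arbitrary, and the effective-temperature estimation itself is not shown (it would amount to rescaling the parameters seen by the sampler).

        import numpy as np

        rng = np.random.default_rng(0)
        n_visible, n_hidden, lr = 6, 4, 0.05
        W = 0.1 * rng.standard_normal((n_visible, n_hidden))
        b_v = np.zeros(n_visible)
        b_h = np.zeros(n_hidden)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        v0 = rng.integers(0, 2, n_visible).astype(float)   # one training vector

        # Positive phase
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)

        # Negative phase (one Gibbs step)
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_visible) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + b_h)

        # CD-1 parameter update
        W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)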

  16. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    USGS Publications Warehouse

    Edwards, T.C., Jr.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, G.G.

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
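
    A sketch contrasting resubstitution accuracy with cross-validation accuracy for a classification tree, using scikit-learn on synthetic presence/absence data; the lichen data themselves are not reproduced here.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=300, n_features=8, n_informative=4,
                                   random_state=0)
        tree = DecisionTreeClassifier(random_state=0).fit(X, y)

        resub = tree.score(X, y)                                   # resubstitution accuracy
        cv = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10)

        print(f"resubstitution accuracy: {resub:.2f}")
        print(f"10-fold CV accuracy:     {cv.mean():.2f} +/- {cv.std():.2f}")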

  17. Measurement Model Quality, Sample Size, and Solution Propriety in Confirmatory Factor Models

    ERIC Educational Resources Information Center

    Gagne, Phill; Hancock, Gregory R.

    2006-01-01

    Sample size recommendations in confirmatory factor analysis (CFA) have recently shifted away from observations per variable or per parameter toward consideration of model quality. Extending research by Marsh, Hau, Balla, and Grayson (1998), simulations were conducted to determine the extent to which CFA model convergence and parameter estimation…

  18. High temperature spice modeling of partially depleted SOI MOSFETs

    SciTech Connect

    Osman, M.A.; Osman, A.A.

    1996-03-01

    Several partially depleted SOI N- and P-MOSFETs with dimensions ranging from W/L=30/10 to 15/3 were characterized from room temperature up to 300 °C. The devices exhibited a well-defined and sharp zero temperature coefficient (ZTC) biasing point up to 573 K in both the linear and saturation regions. Simulations of the I-V characteristics using a temperature-dependent SOI SPICE model were in excellent agreement with measurements. Additionally, measured ZTC points agreed favorably with those predicted using expressions derived from the temperature-dependent SOI model. © 1996 American Institute of Physics.

  19. Land and ocean surface temperature: data development and modeling

    NASA Astrophysics Data System (ADS)

    Zeng, X.; Wang, A.; Brunke, M.

    2014-12-01

    Surface temperature (ST) plays a critical role in land-atmosphere-ocean interactions, and is one of the fundamental variables for Earth system research. ST includes surface air temperature (SAT), surface skin temperature (Ts), and subsurface water or soil temperature at a given depth [T(z)]. In this presentation, we will review our recent work on land and ocean ST. Over land, we have developed the first global 0.5 deg hourly SAT datasets from 1948-2009 by merging in situ CRU data with reanalysis data. Using these datasets, over high latitudes in winter the monthly averaged diurnal temperature range is found to be much larger than the range of monthly averaged hourly temperature diurnal cycle. The former primarily reflects the movement of synoptic weather systems, while the latter is primarily affected by the diurnal radiative forcing. We have also compared Ts from satellite remote sensing (MODIS) and land modeling (CLM) with in situ measurements. For instance, we have identified five factors contributing to the Ts differences between the model and MODIS. Over ocean, we have developed a prognostic Ts parameterization for modeling and data analysis. For instance, the inclusion of the Ts diurnal cycle affects atmospheric processes at diurnal, intraseasonal, and longer time scales. Furthermore, our parameterization provides the relationship between water temperature T(z) at different depths and Ts, and hence helps to merge temperature data from satellite infrared and microwave sensors and in situ buoy and ship measurements.

  20. Modeling of Temperature Conditions Between Temperature Artifact and Black Test Corner

    NASA Astrophysics Data System (ADS)

    Beges, G.

    2011-12-01

    The case study in this article is temperature condition modeling between a temperature artifact and a black test corner measuring instrument. The black test corner is an instrument which consists of two wooden walls and a floor, with built-in thermocouples fixed on the back side of the copper disks. The front of the disk is flush with the surface of the board. The black test corner is used for measuring how the temperature of a household appliance influences the surroundings in the real environment, e.g., in the kitchen, the living room, etc. The temperature artifact as presented in this article is a specially developed heating plate which is very stable and can be set to different temperatures. Technical standards for conformity assessment usually describe only what should be measured, in some cases also how accurate the measurement should be, but not what kind of measuring instrument should be used. Therefore, it sometimes happens that measurements are performed with improper equipment or in an improper way. For the same level of appliance conformance testing, laboratories shall use the same testing procedures and comparable measuring instruments. This article deals with the analysis of influencing parameters when measuring the temperature rise using the black test corner. Modeling of temperature conditions between a temperature artifact and a black test corner, using commercial modeling software, was performed to find out whether this modeling can be used for detailed evaluation of all possible influencing parameters of the mentioned testing procedure. A scheme and a list of influencing parameters to be modeled in subsequent research are prepared to arrange an optimal experiment.

  1. Modeling Near-Surface Temperatures at Martian Landing Sites

    NASA Technical Reports Server (NTRS)

    Martin, T. Z.; Bridges, N. T.; Murphy, J. R.

    2003-01-01

    We have developed a process for deriving near-surface (approx. 1 m) temperatures for potential landing sites, based on observational parameters from MGS TES, Odyssey THEMIS, and a boundary layer model developed by Murphy for fitting Pathfinder meteorological measurements. Minimum nighttime temperatures at the MER landing sites can limit power available, and thus mission lifetime. Temperatures are derived based on thermal inertia, albedo, and opacity estimated for the Hematite site in Sinus Meridiani, using predictions of 1-m air temperatures from a one-dimensional atmospheric model. The Hematite site shows a 9% probability of landing at a location with nighttime temperatures below the -97 °C value considered to be a practical limit for operations.

  2. Modeling the wet bulb globe temperature using standard meteorological measurements.

    SciTech Connect

    Liljegren, J. C.; Carhart, R. A.; Lawday, P.; Tschopp, S.; Sharp, R.; Decision and Information Sciences

    2008-10-01

    The U.S. Army has a need for continuous, accurate estimates of the wet bulb globe temperature to protect soldiers and civilian workers from heat-related injuries, including those involved in the storage and destruction of aging chemical munitions at depots across the United States. At these depots, workers must don protective clothing that increases their risk of heat-related injury. Because of the difficulty in making continuous, accurate measurements of wet bulb globe temperature outdoors, the authors have developed a model of the wet bulb globe temperature that relies only on standard meteorological data available at each storage depot for input. The model is composed of separate submodels of the natural wet bulb and globe temperatures that are based on fundamental principles of heat and mass transfer, has no site-dependent parameters, and achieves an accuracy of better than 1 °C based on comparisons with wet bulb globe temperature measurements at all depots.
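
    For orientation, the submodel outputs are commonly combined with the standard outdoor WBGT weighting (ISO 7243); the snippet below shows that weighting with illustrative input values, and does not reproduce the submodels that estimate the natural wet bulb and globe temperatures from meteorological data.

        def wbgt_outdoor(t_nwb, t_globe, t_air):
            """Outdoor wet bulb globe temperature (deg C), standard 0.7/0.2/0.1 weighting."""
            return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_air

        # Example values (deg C) for natural wet bulb, globe, and dry bulb temperatures
        print(wbgt_outdoor(t_nwb=24.0, t_globe=40.0, t_air=32.0))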

  3. Space Weathering of Olivine: Samples, Experiments and Modeling

    NASA Technical Reports Server (NTRS)

    Keller, L. P.; Berger, E. L.; Christoffersen, R.

    2016-01-01

    Olivine is a major constituent of chondritic bodies and its response to space weathering processes likely dominates the optical properties of asteroid regoliths (e.g. S- and many C-type asteroids). Analyses of olivine in returned samples and laboratory experiments provide details and insights regarding the mechanisms and rates of space weathering. Analyses of olivine grains from lunar soils and asteroid Itokawa reveal that they display solar wind damaged rims that are typically not amorphized despite long surface exposure ages, which are inferred from solar flare track densities (up to 10⁷ y). The olivine damaged rim width rapidly approaches approximately 120 nm in approximately 10⁶ y and then reaches steady-state with longer exposure times. The damaged rims are nanocrystalline with high dislocation densities, but crystalline order exists up to the outermost exposed surface. Sparse nanophase Fe metal inclusions occur in the damaged rims and are believed to be produced during irradiation through preferential sputtering of oxygen from the rims. The observed space weathering effects in lunar and Itokawa olivine grains are difficult to reconcile with laboratory irradiation studies and our numerical models that indicate that olivine surfaces should readily blister and amorphize on relatively short time scales (less than 10³ y). These results suggest that it is not the ion fluence alone but another variable, the ion flux, that controls the type and extent of irradiation damage that develops in olivine. This flux dependence argues for caution in extrapolating between high flux laboratory experiments and the natural case. Additional measurements, experiments, and modeling are required to resolve the discrepancies among the observations and calculations involving solar wind processing of olivine.

  4. Modeling the formation of some polycyclic aromatic hydrocarbons during the roasting of Arabica coffee samples.

    PubMed

    Houessou, Justin Koffi; Goujot, Daniel; Heyd, Bertrand; Camel, Valerie

    2008-05-28

    Roasting is a critical process in coffee production, as it enables the development of flavor and aroma. At the same time, roasting may lead to the formation of nondesirable compounds, such as polycyclic aromatic hydrocarbons (PAHs). In this study, Arabica green coffee beans from Cuba were roasted under controlled conditions to monitor PAH formation during the roasting process. Roasting was performed in a pilot-spouted bed roaster, with the inlet air temperature varying from 180 to 260 degrees C, for roasting times ranging from 5 to 20 min. Several PAHs were determined in both roasted coffee samples and green coffee samples. Different models were tested, with more or fewer assumptions about the chemical phenomena, with a view to predicting the system's global behavior. Two kinds of models were used and compared: kinetic models (based on Arrhenius law) and statistical models (neural networks). The numbers of parameters to adjust differed for the tested models, varying from three to nine for the kinetic models and from five to 13 for the neural networks. Interesting results are presented, with satisfactory correlations between experimental and predicted concentrations for some PAHs, such as pyrene, benz[a]anthracene, chrysene, and anthracene. PMID:18433138
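
    A hedged sketch of fitting a first-order Arrhenius-type formation rate of the kind used in such kinetic models; the temperatures, times, concentrations, and fitted parameters here are synthetic and purely illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        R = 8.314  # J mol-1 K-1

        def pah_formed(X, k0, Ea):
            """First-order formation: C = k0 * exp(-Ea / (R*T)) * t."""
            T, t = X
            return k0 * np.exp(-Ea / (R * T)) * t

        temps = np.array([453.0, 483.0, 513.0, 533.0])     # inlet air temperature (K)
        times = np.array([300.0, 600.0, 900.0, 1200.0])    # roasting time (s)
        T_grid, t_grid = np.meshgrid(temps, times)
        X = (T_grid.ravel(), t_grid.ravel())
        rng = np.random.default_rng(0)
        y = pah_formed(X, 5.0e3, 6.0e4) * (1 + 0.05 * rng.standard_normal(X[0].size))

        popt, _ = curve_fit(pah_formed, X, y, p0=[1.0e3, 5.0e4], maxfev=10000)
        print("fitted k0, Ea:", popt)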

  5. The stability of hydrogen ion and specific conductance in filtered wet-deposition samples stored at ambient temperatures

    USGS Publications Warehouse

    Gordon, J.D.; Schroder, L.J.; Morden-Moore, A. L.; Bowersox, V.C.

    1995-01-01

    Separate experiments by the U.S. Geological Survey (USGS) and the Illinois State Water Survey Central Analytical Laboratory (CAL) independently assessed the stability of hydrogen ion and specific conductance in filtered wet-deposition samples stored at ambient temperatures. The USGS experiment represented a test of sample stability under a diverse range of conditions, whereas the CAL experiment was a controlled test of sample stability. In the experiment by the USGS, a statistically significant (α = 0.05) relation between [H+] and time was found for the composited filtered, natural, wet-deposition solution when all reported values are included in the analysis. However, if two outlying pH values most likely representing measurement error are excluded from the analysis, the change in [H+] over time was not statistically significant. In the experiment by the CAL, randomly selected samples were reanalyzed between July 1984 and February 1991. The original analysis and reanalysis pairs revealed that [H+] differences, although very small, were statistically different from zero, whereas specific-conductance differences were not. Nevertheless, the results of the CAL reanalysis project indicate there appears to be no consistent, chemically significant degradation in sample integrity with regard to [H+] and specific conductance while samples are stored at room temperature at the CAL. Based on the results of the CAL and USGS studies, short-term (45-60 day) stability of [H+] and specific conductance in natural filtered wet-deposition samples that are shipped and stored unchilled at ambient temperatures was satisfactory.

  6. Simple and compact optode for real-time in-situ temperature detection in very small samples

    PubMed Central

    Long, Feng; Shi, Hanchang

    2014-01-01

    Real-time in-situ temperature detection is essential in many applications. In this paper, a simple and robust optode, which uses Ruthenium (II) complex as a temperature indicator, has been developed for rapid and sensitive temperature detection in small volume samples (<5 μL). Transmission of excitation light and collection and transmission of fluorescence are performed by a homemade single-multi mode fiber coupler, which provides the entire system with a simple and robust structure. The photoluminescence intensity of Ruthenium (II) complex diminishes monotonically from 0°C to 80°C, and the response to temperature is rapid and completely reversible. When temperature is less than (or higher than) 50°C, a linear correlation exists between the fluorescence intensity and the temperature. Excellent agreement was also observed between the continuous and in situ measurements obtained by the presented optode and the discrete temperature values measured by a conventional thermometer. The proposed optode has high sensitivity, high photostability and chemical stability, a wide detection range, and thermal reversibility, and can be applied to real-time in-situ temperature detection of a very small volume biological, environmental, and chemical sample. PMID:24875420

  7. Simple and compact optode for real-time in-situ temperature detection in very small samples

    NASA Astrophysics Data System (ADS)

    Long, Feng; Shi, Hanchang

    2014-05-01

    Real-time in-situ temperature detection is essential in many applications. In this paper, a simple and robust optode, which uses Ruthenium (II) complex as a temperature indicator, has been developed for rapid and sensitive temperature detection in small volume samples (<5 μL). Transmission of excitation light and collection and transmission of fluorescence are performed by a homemade single-multi mode fiber coupler, which provides the entire system with a simple and robust structure. The photoluminescence intensity of Ruthenium (II) complex diminishes monotonically from 0°C to 80°C, and the response to temperature is rapid and completely reversible. When temperature is less than (or higher than) 50°C, a linear correlation exists between the fluorescence intensity and the temperature. Excellent agreement was also observed between the continuous and in situ measurements obtained by the presented optode and the discrete temperature values measured by a conventional thermometer. The proposed optode has high sensitivity, high photostability and chemical stability, a wide detection range, and thermal reversibility, and can be applied to real-time in-situ temperature detection of a very small volume biological, environmental, and chemical sample.

  8. Modelling the evolution of temperature in avalanche flow

    NASA Astrophysics Data System (ADS)

    Vera, Cesar; Christen, Marc; Funk, Martin; Bartelt, Perry

    2013-04-01

    Because the mechanical properties of snow are temperature dependent, snow temperature has a strong influence on avalanche flow behaviour. In fact, snow avalanche classification schemes implicitly account for the below-zero temperature regime, i.e. wet snow avalanches contain warm moist snow, whereas dry flowing or powder avalanches consist of colder snow. Although thermal effects are an important feature of avalanche flow behaviour, the temperature field is usually not considered in avalanche dynamics calculations. In this presentation we explicitly model the temperature evolution of avalanches by extending the basic set of depth-averaged differential equations of mass, momentum and fluctuation energy to include a depth-averaged internal energy equation. Two dissipative processes contribute to the irreversible rise in internal energy: the shear work and the dissipation of fluctuation energy due to random granular interactions. Snow entrainment is also an important source of thermal energy. As the temperature of the snow can vary between the release area and runout zone, we model the effect of snowcover temperature elevation gradients. Additionally we introduce a physical constraint on the temperature field to account for phase changes: when the temperature of the avalanche flow surpasses the melting point of ice, the surplus rise in internal energy is used to produce meltwater. We do not consider heat losses due to sensible heat exchanges between the atmosphere and the avalanche. Using numerical simulations we demonstrate how the temperature of the snow in the release area in relation to the temperature of the snowcover encountered by the avalanche at lower elevations can modify avalanche velocity and runout behaviour. We show how the production of turbulent fluctuation energy, which separates dense and dilute, fluidized flow regimes, can be controlled by temperature, creating a wide-range of avalanche deposition patterns. Finally, we investigate under what thermal

  9. Manufacturing and STA-investigation of witness-samples for the temperature monitoring of structural steels under irradiation

    NASA Astrophysics Data System (ADS)

    Sevryukov, O. N.; Fedotov, V. T.; Polyansky, A. A.; Pokrovski, S. A.; Kuzmin, R. S.

    2016-04-01

    The objects of investigation were lead- and cadmium-based alloys used as fuse temperature monitors (FTM) to record the maximum irradiation temperature of structural-steel samples irradiated in the IR-8 research reactor. As a result of this work, initial materials for the production of the alloys were selected and tested. A technological scheme for the production of alloys for FTM has been developed, and experimental studies of the properties of these alloys have been carried out.

  10. Event-based stormwater management pond runoff temperature model

    NASA Astrophysics Data System (ADS)

    Sabouri, F.; Gharabaghi, B.; Sattar, A. M. A.; Thompson, A. M.

    2016-09-01

    Stormwater management wet ponds are generally very shallow and hence can significantly increase (about 5.4 °C on average in this study) runoff temperatures in summer months, which adversely affects receiving urban stream ecosystems. This study uses gene expression programming (GEP) and artificial neural networks (ANN) modeling techniques to advance our knowledge of the key factors governing thermal enrichment effects of stormwater ponds. The models developed in this study build upon and complement the ANN model developed by Sabouri et al. (2013) that predicts the catchment event mean runoff temperature entering the pond as a function of event climatic and catchment characteristic parameters. The key factors that control pond outlet runoff temperature, include: (1) Upland Catchment Parameters (catchment drainage area and event mean runoff temperature inflow to the pond); (2) Climatic Parameters (rainfall depth, event mean air temperature, and pond initial water temperature); and (3) Pond Design Parameters (pond length-to-width ratio, pond surface area, pond average depth, and pond outlet depth). We used monitoring data for three summers from 2009 to 2011 in four stormwater management ponds, located in the cities of Guelph and Kitchener, Ontario, Canada to develop the models. The prediction uncertainties of the developed ANN and GEP models for the case study sites are around 0.4% and 1.7% of the median value. Sensitivity analysis of the trained models indicates that the thermal enrichment of the pond outlet runoff is inversely proportional to pond length-to-width ratio, pond outlet depth, and directly proportional to event runoff volume, event mean pond inflow runoff temperature, and pond initial water temperature.
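
    A minimal sketch of an ANN regression on predictors like those listed above, using scikit-learn on synthetic data; the predictor ranges, coefficients, and network size are assumptions, not the monitored Ontario events or the published model.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n = 200
        X = np.column_stack([
            rng.uniform(5, 50, n),      # catchment drainage area (ha)
            rng.uniform(15, 28, n),     # event mean inflow runoff temperature (C)
            rng.uniform(2, 40, n),      # rainfall depth (mm)
            rng.uniform(10, 30, n),     # event mean air temperature (C)
            rng.uniform(15, 27, n),     # pond initial water temperature (C)
            rng.uniform(1, 6, n),       # pond length-to-width ratio
            rng.uniform(0.5, 3, n),     # pond outlet depth (m)
        ])
        # Synthetic pond outlet runoff temperature (C)
        y = (0.5 * X[:, 1] + 0.4 * X[:, 4] - 0.3 * X[:, 5] - 0.5 * X[:, 6]
             + rng.normal(0, 0.5, n))

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                           random_state=0))
        model.fit(X, y)
        print("R^2 on training data:", model.score(X, y))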

  11. Sampling Biases in Datasets of Historical Mean Air Temperature over Land

    NASA Astrophysics Data System (ADS)

    Wang, K.

    2014-12-01

    Global mean surface air temperature has risen by 0.74 °C over the last 100 years. However, the definition of mean surface air temperature is still a subject of debate. The most defensible definition might be the integral of the continuous temperature measurements over a day (Td0). However, for technological and historical reasons, mean temperatures (Td1) over land have been taken to be the average of the daily maximum and minimum temperature measurements. All existing principal global temperature analyses over land are primarily based on Td1. Here, I make a first quantitative assessment of the bias in the use of Td1 to estimate trends of mean air temperature using hourly air temperature observations at 5600 globally distributed weather stations from the 1970s to 2013. I find that the use of Td1 has a negligible impact on the global mean warming rate. However, the trend of Td1 has a substantial bias at regional and local scales, with a root mean square error of over 25% at 5°×5° grids. Therefore, caution should be taken when using mean air temperature datasets based on Td1 to examine spatial patterns of global warming.
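
    A sketch of the two daily-mean definitions compared above, computed from a synthetic hourly temperature record: Td0 averages all hourly values, Td1 averages the daily maximum and minimum. The series is random and illustrative only.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        hours = pd.date_range("2013-01-01", periods=24 * 365, freq="h")
        diurnal = 5.0 * np.sin(2 * np.pi * (hours.hour - 8) / 24)
        temps = 10.0 + diurnal + rng.normal(0, 1.0, hours.size)
        series = pd.Series(temps, index=hours)

        daily = series.resample("D")
        td0 = daily.mean()                           # mean of all hourly values
        td1 = 0.5 * (daily.max() + daily.min())      # (Tmax + Tmin) / 2

        print("mean bias Td1 - Td0:", (td1 - td0).mean())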

  12. Variable temperature, relative humidity (0%-100%), and liquid neutron reflectometry sample cell suitable for polymeric and biomimetic materials

    NASA Astrophysics Data System (ADS)

    Harroun, T. A.; Fritzsche, H.; Watson, M. J.; Yager, K. G.; Tanchak, O. M.; Barrett, C. J.; Katsaras, J.

    2005-06-01

    We describe a variable temperature, relative humidity (0%-100% RH), and bulk liquid neutron reflectometry sample cell suitable for the study of polymeric and biomimetic materials (e.g., lipid bilayers). Compared to previous reflectometry cells, one of the advantages of the present sample environment is that it can accommodate ovens capable of handling either vapor or bulk liquid hydration media. Moreover, the design of the sample cell is such that temperature gradients are minimal over a large area (˜80cm2) allowing for the nontrivial 100% RH condition to be attained. This permits the study, by neutron reflectometry, of samples that are intrinsically unstable in bulk water conditions, and is demonstrated by the lamellar repeat spacing of lipid bilayers at 100% RH being indistinguishable from those same bilayers hydrated in liquid water.

  13. Estimating transient climate response using consistent temperature reconstruction methods in models and observations

    NASA Astrophysics Data System (ADS)

    Richardson, M.; Cowtan, K.; Hawkins, E.; Stolpe, M.

    2015-12-01

    Observational temperature records such as HadCRUT4 typically have incomplete geographical coverage and blend air temperature over land with sea surface temperatures over ocean, in contrast to model output which is commonly reported as global air temperature. This complicates estimation of properties such as the transient climate response (TCR). Observation-based estimates of TCR have been made using energy-budget constraints applied to time series of historical radiative forcing and surface temperature changes, while model TCR is formally derived from simulations where CO2 increases at 1% per year. We perform a like-with-like comparison using three published energy-budget methods to derive modelled TCR from historical CMIP5 temperature series sampled in a manner consistent with HadCRUT4. Observation-based TCR estimates agree to within 0.12 K of the multi-model mean in each case and for 2 of the 3 energy-budget methods the observation-based TCR is higher than the multi-model mean. For one energy-budget method, using the HadCRUT4 blending method leads to a TCR underestimate of 0.3±0.1 K, relative to that estimated using global near-surface air temperatures.
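
    For reference, the energy-budget estimators referred to above take the general form TCR ≈ F_2x ΔT / ΔF; the snippet below shows that relation with purely illustrative numbers, not the values used in the study.

        F_2X = 3.7  # radiative forcing from doubled CO2 (W m-2), conventional value

        def tcr_energy_budget(delta_t, delta_f):
            """Transient climate response (K) from changes in temperature and forcing."""
            return F_2X * delta_t / delta_f

        # Example: 0.75 K warming against a 1.9 W m-2 forcing change gives ~1.5 K
        print(tcr_energy_budget(delta_t=0.75, delta_f=1.9))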

  14. Critical Behavior of the Spin-1/2 Baxter-Wu Model: Entropic Sampling Simulations

    NASA Astrophysics Data System (ADS)

    Jorge, L. N.; Ferreira, L. S.; Leão, S. A.; Caparica, A. A.

    2016-08-01

    In this work, we use a refined entropic sampling technique based on the Wang-Landau method to study the spin-1/2 Baxter-Wu model. We adopt the total magnetization as the order parameter and, as a result, do not divide the system into three sub-lattices. The static critical exponents were determined as α = 0.6697(54), β = 0.0813(67), γ = 1.1772(33), and ν = 0.6574(61). The estimate for the critical temperature was T c = 2.26924(2). We compare the present results with those obtained from other well-established approaches, and we find a very good closeness with the exact values, besides the high precision reached for the critical temperature.
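
    A compact Wang-Landau sketch illustrating the entropic-sampling idea on a small 4x4 Ising model with periodic boundaries; the Baxter-Wu three-spin interaction and the refinements of the study are not implemented here.

        import numpy as np

        rng = np.random.default_rng(0)
        L = 4
        N = L * L
        spins = rng.choice([-1, 1], size=(L, L))

        def energy(s):
            # Nearest-neighbour Ising energy with periodic boundaries
            return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

        n_bins = N + 1                       # energies -2N..2N in steps of 4
        log_g = np.zeros(n_bins)             # running estimate of ln g(E)
        hist = np.zeros(n_bins)
        log_f = 1.0                          # ln of the modification factor

        def idx(e):
            return int((e + 2 * N) // 4)

        E = energy(spins)
        while log_f > 1e-4:
            for _ in range(10000):
                i, j = rng.integers(L, size=2)
                dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                        + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                # Accept with probability min(1, g(E_old) / g(E_new))
                if rng.random() < np.exp(log_g[idx(E)] - log_g[idx(E + dE)]):
                    spins[i, j] *= -1
                    E += dE
                log_g[idx(E)] += log_f
                hist[idx(E)] += 1
            visited = hist > 0
            if hist[visited].min() > 0.8 * hist[visited].mean():   # flatness check
                hist[:] = 0
                log_f /= 2.0

        print("relative ln g(E):", log_g[log_g > 0] - log_g[log_g > 0].min())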

  15. Modeling sugarcane growth in response to age, insolation, and temperature

    SciTech Connect

    How, K.T.S.

    1986-01-01

    Modeling sugarcane growth in response to age of cane, insolation and air temperature using first-order multiple regression analysis and a nonlinear approach is investigated. Data are restricted to one variety from irrigated fields to eliminate the impact of varietal response and rainfall. Ten first-order models are investigated. The predictand is cane yield from 600 field tests. The predictors are cumulative values of insolation, maximum temperature, and minimum temperature for 3, 6, 12, and 18 months, or for each crop period derived from weather observations near the test plots. The low R-square values indicate that the selected predictor variables could not account for a substantial proportion of the variations of cane yield and the models have limited predictive value. The nonlinear model is based on known functional relationships between growth and age, growth and insolation, and growth and maximum temperature. A mathematical expression that integrates the effect of age, insolation and maximum temperature is developed. The constant terms and coefficients of the equation are determined from the requirement that the model must produce results that are reasonable when compared with observed monthly elongation data. The nonlinear model is validated and tested using another set of data.

  16. Corn blight review: Sampling model and ground data measurements program

    NASA Technical Reports Server (NTRS)

    Allen, R. D.

    1972-01-01

    The sampling plan involved the selection of the study area, determination of the flightline and segment sample design within the study area, and determination of a field sample design. Initial interview survey data consisting of crop species acreage and land use were collected. On all corn fields, additional information such as seed type, row direction, population, planting date, etc. were also collected. From this information, sample corn fields were selected to be observed through the growing season on a biweekly basis by county extension personnel.

  17. A model of the diurnal variation in lake surface temperature

    NASA Astrophysics Data System (ADS)

    Hodges, Jonathan L.

    Satellite measurements of water surface temperature can benefit several environmental applications such as predictions of lake evaporation, meteorological forecasts, and predictions of lake overturning events, among others. However, limitations on the temporal resolution of satellite measurements restrict these improvements. A model of the diurnal variation in lake surface temperature could potentially increase the effective temporal resolution of satellite measurements of surface temperature, thereby enhancing the utility of these measurements in the above applications. Herein, a one-dimensional transient thermal model of a lake is used in combination with surface temperature measurements from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the Aqua and Terra satellites, along with ambient atmospheric conditions from local weather stations, and bulk temperature measurements to calculate the diurnal surface temperature variation for the five major lakes in the Savannah River Basin in South Carolina: Lakes Jocassee, Keowee, Hartwell, Russell, and Thurmond. The calculated solutions are used to obtain a functional form for the diurnal surface temperature variation of these lakes. Differences in diurnal variation in surface temperature between each of these lakes are identified and potential explanations for these differences are presented.

  18. An analysis of the impact of pre‐analytical factors on the urine proteome: Sample processing time, temperature, and proteolysis

    PubMed Central

    Hepburn, Sophie; Cairns, David A.; Jackson, David; Craven, Rachel A.; Riley, Beverley; Hutchinson, Michelle; Wood, Steven; Smith, Matthew Welberry; Thompson, Douglas

    2015-01-01

    Purpose We have examined the impact of sample processing time delay, temperature, and the addition of protease inhibitors (PIs) on the urinary proteome and peptidome, an important aspect of biomarker studies. Experimental design Ten urine samples from patients with varying pathologies were each divided and PIs added to one‐half, with aliquots of each then processed and frozen immediately, or after a delay of 6 h at 4°C or room temperature (20–22°C), effectively yielding 60 samples in total. Samples were then analyzed by 2D‐PAGE, SELDI‐TOF‐MS, and immunoassay. Results Interindividual variability in profiles was the dominant feature in all analyses. Minimal changes were observed by 2D‐PAGE as a result of delay in processing, temperature, or PIs and no changes were seen in IgG, albumin, β2‐microglobulin, or α1‐microglobulin measured by immunoassay. Analysis of peptides showed clustering of some samples by presence/absence of PIs but the extent was very patient‐dependent with most samples showing minimal effects. Conclusions and clinical relevance The extent of processing‐induced changes and the benefit of PI addition are patient‐ and sample‐dependent. A consistent processing methodology is essential within a study to avoid any confounding of the results. PMID:25400092

  19. LOW TEMPERATURE X-RAY DIFFRACTION STUDIES OF NATURAL GAS HYDRATE SAMPLES FROM THE GULF OF MEXICO

    SciTech Connect

    Rawn, Claudia J; Sassen, Roger; Ulrich, Shannon M; Phelps, Tommy Joe; Chakoumakos, Bryan C; Payzant, E Andrew

    2008-01-01

    Clathrate hydrates of methane and other small alkanes occur widespread on Earth, in marine sediments of the continental margins and in permafrost sediments of the Arctic. Quantitative study of natural clathrate hydrates is hampered by the difficulty in obtaining pristine samples, particularly from submarine environments. Bringing samples of clathrate hydrate from the seafloor at depth without compromising their integrity is not trivial. Most physical property measurements are based on studies of laboratory-synthesized samples. Here we report X-ray powder diffraction measurements of a natural gas hydrate sample from the Green Canyon, Gulf of Mexico. The first data were collected in 2002 and revealed ice and structure II gas hydrate. Since then, the sample has been stored in liquid nitrogen. More recent X-ray powder diffraction data have been collected as functions of temperature and time. These new data indicate that the larger sample is heterogeneous in ice content and show that the amount of sII hydrate decreases with increasing temperature and time, as expected. However, the dissociation rate is higher at lower temperatures and earlier in the experiment.

  20. Forecasting Groundwater Temperature with Linear Regression Models Using Historical Data.

    PubMed

    Figura, Simon; Livingstone, David M; Kipfer, Rolf

    2015-01-01

    Although temperature is an important determinant of many biogeochemical processes in groundwater, very few studies have attempted to forecast the response of groundwater temperature to future climate warming. Using a composite linear regression model based on the lagged relationship between historical groundwater and regional air temperature data, empirical forecasts were made of groundwater temperature in several aquifers in Switzerland up to the end of the current century. The model was fed with regional air temperature projections calculated for greenhouse-gas emissions scenarios A2, A1B, and RCP3PD. Model evaluation revealed that the approach taken is adequate only when the data used to calibrate the models are sufficiently long and contain sufficient variability. These conditions were satisfied for three aquifers, all fed by riverbank infiltration. The forecasts suggest that with respect to the reference period 1980 to 2009, groundwater temperature in these aquifers will most likely increase by 1.1 to 3.8 K by the end of the current century, depending on the greenhouse-gas emissions scenario employed. PMID:25412761
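
    A minimal sketch of the lagged-regression idea described in this record, using synthetic annual series; the one-year lag, the variable names, and the projected air-temperature series are illustrative assumptions, not values from the study.

    ```python
    # Minimal sketch of a lagged linear-regression forecast of groundwater
    # temperature from regional air temperature.  The one-year lag and the
    # synthetic series are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1980, 2010)
    air_temp = 9.0 + 0.03 * (years - 1980) + rng.normal(0, 0.4, years.size)   # regional air T, degC
    gw_temp = 10.0 + 0.8 * (air_temp - 9.0) + rng.normal(0, 0.2, years.size)  # synthetic groundwater T

    # Calibrate: regress groundwater temperature on air temperature lagged by one year.
    lag = 1
    slope, intercept = np.polyfit(air_temp[:-lag], gw_temp[lag:], 1)

    # Forecast: apply the fit to a projected air-temperature series (stand-in for a scenario).
    proj_years = np.arange(2010, 2100)
    air_proj = 9.0 + 0.035 * (proj_years - 1980)
    gw_forecast = intercept + slope * air_proj
    print(f"forecast groundwater warming by 2099: {gw_forecast[-1] - gw_temp.mean():.1f} K")
    ```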

  1. An Analytic Function of Lunar Surface Temperature for Exospheric Modeling

    NASA Technical Reports Server (NTRS)

    Hurley, Dana M.; Sarantos, Menelaos; Grava, Cesare; Williams, Jean-Pierre; Retherford, Kurt D.; Siegler, Matthew; Greenhagen, Benjamin; Paige, David

    2014-01-01

    We present an analytic expression to represent the lunar surface temperature as a function of Sun-state latitude and local time. The approximation represents neither topographical features nor compositional effects and therefore does not change as a function of selenographic latitude and longitude. The function reproduces the surface temperature measured by Diviner to within +/-10 K at 72% of grid points for dayside solar zenith angles of less than 80°, and at 98% of grid points for nightside solar zenith angles greater than 100°. The analytic function is least accurate at the terminator, where there is a strong gradient in the temperature, and in the polar regions. Topographic features have a larger effect on the actual temperature near the terminator than at other solar zenith angles. For exospheric modeling the effects of topography on the thermal model can be approximated by using an effective longitude for determining the temperature. This effective longitude is randomly redistributed with 1 sigma of 4.5°. The resulting "roughened" analytical model well represents the statistical dispersion in the Diviner data and is expected to be generally useful for future models of lunar surface temperature, especially those implemented within exospheric simulations that address questions of volatile transport.
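
    The sketch below illustrates the two ideas in this record, an analytic surface temperature as a function of solar zenith angle and a "roughened" effective longitude drawn with a 4.5° spread, but it is not the published Diviner-fitted expression; the cos^(1/4) dayside law and the subsolar and nightside temperatures are placeholder assumptions.

    ```python
    # Illustrative sketch (not the published function): an analytic lunar surface
    # temperature versus solar zenith angle, plus the "roughening" idea of drawing
    # an effective longitude with a 4.5 deg spread.  Constants are placeholders.
    import numpy as np

    T_SUBSOLAR = 390.0   # K, assumed subsolar temperature
    T_NIGHT = 100.0      # K, assumed nightside temperature

    def surface_temperature(sza_deg):
        """Crude dayside cos^(1/4) law with a constant nightside floor."""
        sza = np.radians(np.asarray(sza_deg, dtype=float))
        dayside = T_SUBSOLAR * np.clip(np.cos(sza), 0.0, None) ** 0.25
        return np.maximum(dayside, T_NIGHT)

    def roughened_temperature(lat_deg, local_time_h, rng=np.random.default_rng()):
        """Perturb the longitude (local time) by sigma = 4.5 deg to mimic topographic scatter."""
        lon_deg = (local_time_h - 12.0) * 15.0 + rng.normal(0.0, 4.5)
        cos_sza = np.cos(np.radians(lat_deg)) * np.cos(np.radians(lon_deg))
        return surface_temperature(np.degrees(np.arccos(np.clip(cos_sza, -1.0, 1.0))))

    print(roughened_temperature(10.0, 14.0))
    ```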

  2. Phasic temperature control appraised with the Ceres-Wheat model.

    PubMed

    Volk, T; Bugbee, B; Tubiello, F

    1997-01-01

    Phasic control refers to the specification of a series of different environmental conditions during a crop's life cycle, with the goal of optimizing some aspect of productivity. Because of the enormous number of possible scenarios, phasic control is an ideal situation for modeling to provide guidance prior to experiments. Here we use the Ceres-Wheat model, modified for hydroponic growth chambers, to examine temperature effects. We first establish a baseline by running the model at constant temperatures from 10 degrees C to 30 degrees C. Grain yield per day peaks at 15 degrees C at a value that is 25% higher than the yield at the commonly used 23 degrees C. We then show results for phasic control limited to a single shift in temperature and, finally, we examine scenarios that allow each of the five phases of the life cycle to have a different temperature. Results indicate that grain yield might be increased by 15-20% over the best yield at constant temperature, primarily from a boosted harvest index, which has the additional advantage of less waste biomass. Such gains, if achievable, would help optimize food production for life support systems. Experimental work should first verify the relationship between yield and temperature, and then move to selected scenarios of phasic control, based on model predictions. PMID:11540452

  3. Measuring the mechanical efficiency of a working cardiac muscle sample at body temperature using a flow-through calorimeter.

    PubMed

    Taberner, Andrew J; Johnston, Callum M; Pham, Toan; June-Chiew Han; Ruddy, Bryan P; Loiselle, Denis S; Nielsen, Poul M F

    2015-08-01

    We have developed a new 'work-loop calorimeter' that is capable of measuring, simultaneously, the work done and heat production of isolated cardiac muscle samples at body temperature. Through the innovative use of thermoelectric modules as temperature sensors, the development of a low-noise fluid-flow system, and implementation of precise temperature control, the heat resolution of this device is 10 nW, an improvement by a factor of ten over previous designs. These advances have allowed us to conduct the first flow-through measurements of work output and heat dissipation from cardiac tissue at body temperature. The mechanical efficiency is found to vary with peak stress, and reaches a peak value of approximately 15%, a figure similar to that observed in cardiac muscle at lower temperatures. PMID:26738140

  4. River water temperature and fish growth forecasting models

    NASA Astrophysics Data System (ADS)

    Danner, E.; Pike, A.; Lindley, S.; Mendelssohn, R.; Dewitt, L.; Melton, F. S.; Nemani, R. R.; Hashimoto, H.

    2010-12-01

    Water is a valuable, limited, and highly regulated resource throughout the United States. When making decisions about water allocations, state and federal water project managers must consider the short-term and long-term needs of agriculture, urban users, hydroelectric production, flood control, and the ecosystems downstream. In the Central Valley of California, river water temperature is a critical indicator of habitat quality for endangered salmonid species and affects re-licensing of major water projects and dam operations worth billions of dollars. There is consequently strong interest in modeling water temperature dynamics and the subsequent impacts on fish growth in such regulated rivers. However, the accuracy of current stream temperature models is limited by the lack of spatially detailed meteorological forecasts. To address these issues, we developed a high-resolution deterministic 1-dimensional stream temperature model (sub-hourly time step, sub-kilometer spatial resolution) in a state-space framework, and applied this model to Upper Sacramento River. We then adapted salmon bioenergetics models to incorporate the temperature data at sub-hourly time steps to provide more realistic estimates of salmon growth. The temperature model uses physically-based heat budgets to calculate the rate of heat transfer to/from the river. We use variables provided by the TOPS-WRF (Terrestrial Observation and Prediction System - Weather Research and Forecasting) model—a high-resolution assimilation of satellite-derived meteorological observations and numerical weather simulations—as inputs. The TOPS-WRF framework allows us to improve the spatial and temporal resolution of stream temperature predictions. The salmon growth models are adapted from the Wisconsin bioenergetics model. We have made the output from both models available on an interactive website so that water and fisheries managers can determine the past, current and three day forecasted water temperatures at

  5. Ignition and temperature behavior of a single-wall carbon nanotube sample.

    PubMed

    Volotskova, O; Shashurin, A; Keidar, M; Raitses, Y; Demidov, V; Adams, S

    2010-03-01

    The electrical resistance of mats of single-wall carbon nanotubes (SWNTs) is measured as a function of mat temperature under various helium pressures, in vacuum and in atmospheric air. The objective of this paper is to study the thermal stability of SWNTs produced in a helium arc discharge in the experimental conditions close to natural conditions of SWNT growth in an arc, using a furnace instead of an arc discharge. For each tested condition, there is a temperature threshold at which the mat's resistance reaches its minimum. The threshold value depends on the helium pressure. An increase of the temperature above the temperature threshold leads to the destruction of SWNT bundles at a certain critical temperature. For instance, the critical temperature is about 1100 K in the case of helium background at a pressure of about 500 Torr. Based on experimental data on critical temperature it is suggested that SWNTs produced by an anodic arc discharge and collected in the web area outside the arc plasma most likely originate from the arc discharge peripheral region. PMID:20130346

  6. Modeling the melting temperature of nanoscaled bimetallic alloys.

    PubMed

    Li, Ming; Zhu, Tian-Shu

    2016-06-22

    The effect of size, composition and dimension on the melting temperature of nanoscaled bimetallic alloys was investigated by considering the interatomic interaction. The established thermodynamic model, which contains no arbitrarily adjustable parameters, can be used to predict the melting temperature of nanoscaled bimetallic alloys. It is found that the melting temperature and interatomic interaction of nanoscaled bimetallic alloys decrease with decreasing size and with increasing content of the metal with the lower surface energy. Moreover, for nanoscaled bimetallic alloys with the same size and composition, the dependence of the melting temperature on the dimension can be sequenced as follows: nanoparticles > nanowires > thin films. The accuracy of the developed model is verified by recent experimental and computer simulation results. PMID:27292044

  7. Two-temperature channel model of a direct current arc

    NASA Astrophysics Data System (ADS)

    Kirpichnikov, A. P.

    1990-07-01

    A relatively simple method is proposed for computing the gas and electron temperatures in an arc plasmotron channel within the framework of the self-consistent two-temperature channel model of an arc discharge. This method makes it possible to obtain the gas and electron temperature distributions with sufficient accuracy for given discharge parameters (current intensity in the discharge, power input to the discharge, etc.) as a function of the radial coordinate in both the nonequilibrium (Te ≠ Tai) and quasi-equilibrium (Te = Tai within the current-conducting channel) cases. The results obtained can be utilized in model computations to estimate the gas and electron temperatures, and possibly also in a number of engineering computations.

  8. Design and evaluation of a new Peltier-cooled laser ablation cell with on-sample temperature control.

    PubMed

    Konz, Ioana; Fernández, Beatriz; Fernández, M Luisa; Pereiro, Rosario; Sanz-Medel, Alfredo

    2014-01-27

    A new custom-built Peltier-cooled laser ablation cell is described. The proposed cryogenic cell combines a small internal volume (20 cm(3)) with a unique and reliable on-sample temperature control. The use of a flexible temperature sensor, directly located on the sample surface, ensures a rigorous sample temperature control throughout the entire analysis time and allows instant response to any possible fluctuation. In this way sample integrity and, therefore, reproducibility can be guaranteed during the ablation. The refrigeration of the proposed cryogenic cell combines an internal refrigeration system, controlled by a sensitive thermocouple, with an external refrigeration system. Cooling of the sample is directly carried out by 8 small (1 cm×1 cm) Peltier elements placed in a circular arrangement in the base of the cell. These Peltier elements are located below a copper plate where the sample is placed. Due to the small size of the cooling electronics and their circular allocation it was possible to maintain a peephole under the sample for illumination allowing a much better visualization of the sample, a factor especially important when working with structurally complex tissue sections. The analytical performance of the cryogenic cell was studied using a glass reference material (SRM NIST 612) at room temperature and at -20°C. The proposed cell design shows a reasonable signal washout (signal decay within less than 10 s to background level), high sensitivity and good signal stability (in the range 6.6-11.7%). Furthermore, high precision (0.4-2.6%) and accuracy (0.3-3.9%) in the isotope ratio measurements were also observed operating the cell both at room temperature and at -20°C. Finally, experimental results obtained for the cell application to qualitative elemental imaging of structurally complex tissue samples (e.g. eye sections from a native frozen porcine eye and fresh flower leaves) demonstrate that working in cryogenic conditions is critical in such

  9. Heat Transfer Modeling for Rigid High-Temperature Fibrous Insulation

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran; Cunnington, George R.; Knutson, Jeffrey R.

    2012-01-01

    Combined radiation and conduction heat transfer through a high-temperature, high-porosity, rigid multiple-fiber fibrous insulation was modeled using a thermal model previously used to model heat transfer in flexible single-fiber fibrous insulation. The rigid insulation studied was alumina enhanced thermal barrier (AETB) at densities between 130 and 260 kilograms per cubic meter. The model consists of using the diffusion approximation for radiation heat transfer, a semi-empirical solid conduction model, and a standard gas conduction model. The relevant parameters needed for the heat transfer model were estimated from steady-state thermal measurements in nitrogen gas at various temperatures and environmental pressures. The heat transfer modeling methodology was evaluated by comparison with standard thermal conductivity measurements, and steady-state thermal measurements in helium and carbon dioxide gases. The heat transfer model is applicable over the temperature range of 300 to 1360 K, pressure range of 0.133 to 101.3 x 10(exp 3) Pa, and over the insulation density range of 130 to 260 kilograms per cubic meter in various gaseous environments.
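
    A sketch of the three-component conductivity described above (radiation via the diffusion approximation, plus semi-empirical solid conduction, plus gas conduction); all coefficients are placeholders rather than the calibrated AETB parameters.

    ```python
    # Sketch of the combined heat-transfer model described above: effective
    # conductivity = radiation (diffusion approximation) + solid conduction +
    # gas conduction.  All coefficients below are placeholders, not the
    # calibrated AETB values from the paper.
    SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

    def k_radiation(T, extinction_coeff=5.0e3, n_index=1.0):
        """Diffusion (Rosseland) approximation: k_r = 16 n^2 sigma T^3 / (3 beta)."""
        return 16.0 * n_index**2 * SIGMA * T**3 / (3.0 * extinction_coeff)

    def k_solid(T, a=1.0e-2, b=1.0e-5):
        """Semi-empirical solid-conduction term, assumed linear in T for illustration."""
        return a + b * T

    def k_gas(T, p, k_continuum=0.026, c=1.5e-5):
        """Gas conduction with a simple pressure-dependent (rarefaction) correction."""
        return k_continuum * (T / 300.0) ** 0.8 / (1.0 + c * T / p)

    def k_effective(T, p):
        return k_radiation(T) + k_solid(T) + k_gas(T, p)

    for T in (300.0, 800.0, 1360.0):
        print(f"T = {T:6.0f} K  k_eff ~ {k_effective(T, 101325.0):.3f} W/m/K")
    ```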

  10. A Hierarchy of Snowmelt Models for Canadian Prairies: Temperature-Index, Modified Temperature Index and Energy-Balance Models

    NASA Astrophysics Data System (ADS)

    Gan, T. Y.

    2009-04-01

    Three semi-distributed snowmelt models were developed and applied to the Paddle River Basin (PRB) in the Canadian Prairies: (1) A physics-based, energy balance model (SDSM-EBM) that considers vertical energy exchange processes in open and forested areas, and snowmelt processes that include liquid and ice phases separately; (2) A modified temperature index model (SDSM-MTI) that uses both near surface soil temperature (Tg) and air temperature (Ta), and (3) A standard temperature index (SDSM-TI) method using Ta only. Other than the "regulatory" effects of beaver dams that affected the validation results on simulated runoff, both SDSM-MTI and SDSM-EBM simulated reasonably accurate snowmelt runoff, snow water equivalent and snow depth. For the PRB, where snowpack is shallow to moderately deep, and winter is relatively severe, the advantage of using both Ta and Tg is partly attributed to Tg showing a stronger correlation with solar radiation than Ta during the spring snowmelt season, and partly to the onset of major snowmelt which usually happens when Tg approaches 0°C. After re-setting model parameters so that SDSM-MTI degenerated to SDSM-TI (effect of Tg is completely removed), the model performance worsened, even after re-calibrating the melt factors using Ta alone. It seems that if reliable Tg data are available, they should be utilized to model the snowmelt processes in a Prairie environment particularly if the temperature-index approach is adopted.
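
    A sketch contrasting a standard temperature-index melt rule with a modified rule that also uses near-surface soil temperature, in the spirit of SDSM-TI and SDSM-MTI; the melt factors, base temperature, and the Tg onset condition are illustrative assumptions, not the calibrated values.

    ```python
    # Sketch of the temperature-index family described above.  Melt factors,
    # thresholds, and the Tg weighting are placeholders, not the calibrated
    # SDSM-TI / SDSM-MTI parameters.
    def melt_ti(Ta, melt_factor=3.0, T_base=0.0):
        """Standard temperature-index melt (mm/day) from air temperature alone."""
        return max(0.0, melt_factor * (Ta - T_base))

    def melt_mti(Ta, Tg, mf_air=2.0, mf_soil=1.5, T_base=0.0):
        """Modified index: melt begins only once near-surface soil temperature approaches 0 degC."""
        if Tg < -0.5:           # assumed onset condition tied to Tg
            return 0.0
        return max(0.0, mf_air * (Ta - T_base) + mf_soil * (Tg - T_base))

    print(melt_ti(4.0), melt_mti(4.0, -2.0), melt_mti(4.0, 0.5))
    ```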

  11. Low-temperature dynamic nuclear polarization with helium-cooled samples and nitrogen-driven magic-angle spinning

    NASA Astrophysics Data System (ADS)

    Thurber, Kent; Tycko, Robert

    2016-03-01

    We describe novel instrumentation for low-temperature solid state nuclear magnetic resonance (NMR) with dynamic nuclear polarization (DNP) and magic-angle spinning (MAS), focusing on aspects of this instrumentation that have not been described in detail in previous publications. We characterize the performance of an extended interaction oscillator (EIO) microwave source, operating near 264 GHz with 1.5 W output power, which we use in conjunction with a quasi-optical microwave polarizing system and a MAS NMR probe that employs liquid helium for sample cooling and nitrogen gas for sample spinning. Enhancement factors for cross-polarized 13C NMR signals in the 100-200 range are demonstrated with DNP at 25 K. The dependences of signal amplitudes on sample temperature, as well as microwave power, polarization, and frequency, are presented. We show that sample temperatures below 30 K can be achieved with helium consumption rates below 1.3 l/h. To illustrate potential applications of this instrumentation in structural studies of biochemical systems, we compare results from low-temperature DNP experiments on a calmodulin-binding peptide in its free and bound states.

  12. Low-temperature dynamic nuclear polarization with helium-cooled samples and nitrogen-driven magic-angle spinning.

    PubMed

    Thurber, Kent; Tycko, Robert

    2016-03-01

    We describe novel instrumentation for low-temperature solid state nuclear magnetic resonance (NMR) with dynamic nuclear polarization (DNP) and magic-angle spinning (MAS), focusing on aspects of this instrumentation that have not been described in detail in previous publications. We characterize the performance of an extended interaction oscillator (EIO) microwave source, operating near 264 GHz with 1.5 W output power, which we use in conjunction with a quasi-optical microwave polarizing system and a MAS NMR probe that employs liquid helium for sample cooling and nitrogen gas for sample spinning. Enhancement factors for cross-polarized (13)C NMR signals in the 100-200 range are demonstrated with DNP at 25K. The dependences of signal amplitudes on sample temperature, as well as microwave power, polarization, and frequency, are presented. We show that sample temperatures below 30K can be achieved with helium consumption rates below 1.3 l/h. To illustrate potential applications of this instrumentation in structural studies of biochemical systems, we compare results from low-temperature DNP experiments on a calmodulin-binding peptide in its free and bound states. PMID:26920835

  13. Stream temperature response to three riparian vegetation scenarios by use of a distributed temperature validated model.

    PubMed

    Roth, T R; Westhoff, M C; Huwald, H; Huff, J A; Rubin, J F; Barrenetxea, G; Vetterli, M; Parriaux, A; Selker, J S; Parlange, M B

    2010-03-15

    Elevated in-stream temperature has led to a surge in the occurrence of proliferative kidney disease, a parasitic infection, and has resulted in fish kills throughout Switzerland's waterways. Data from distributed temperature sensing (DTS) in-stream measurements for three cloud-free days in August 2007 over a 1260 m stretch of the Boiron de Merges River in southwest Switzerland were used to calibrate and validate a physically based one-dimensional stream temperature model. Stream temperature responses to three distinct riparian conditions were then modeled: open, in-stream reeds, and forest cover. Simulation predicted a mean peak stream temperature increase of 0.7 °C if current vegetation was removed, an increase of 0.1 °C if dense reeds covered the entire stream reach, and a decrease of 1.2 °C if a mature riparian forest covered the entire reach. While full vegetation canopy cover is the optimal riparian management option for limiting stream temperature, in-stream reeds, which require no riparian set-aside and grow very quickly, appear to provide substantial thermal control, potentially useful for land-use management. PMID:20131784

  14. On the temperature model of CO{sub 2} lasers

    SciTech Connect

    Nevdakh, Vladimir V; Ganjali, Monireh; Arshinov, K I

    2007-03-31

    A refined temperature model of CO{sub 2} lasers is presented, which takes into account the fact that vibrational modes of the CO{sub 2} molecule have the common ground vibrational level. New formulas for the occupation numbers and the vibrational energy storage in individual modes are obtained as well as expressions relating the vibrational temperatures of the CO{sub 2} molecules with the excitation and relaxation rates of lower vibrational levels of modes upon excitation of the CO{sub 2}-N{sub 2}-He mixture in an electric discharge. The character of dependences of the vibrational temperatures on the discharge current is discussed. (active media)

  15. Modelling of rock temperatures for deep alpine tunnel projects

    NASA Astrophysics Data System (ADS)

    Goy, L.; Fabre, D.; Menard, G.

    1996-01-01

    The construction of deep railway tunnels requires the prediction of natural temperatures at depth. Geothermal data for the Alps are presented and principles of previously employed methods to predict temperatures, using Andreae's analytical approach, are discussed. We then use a finite element numerical model based on pure conduction to calculate temperatures at depth. This method allows rock heterogeneity and anisotropy to be taken into account. This model is applied to the Maurienne-Ambin tunnel project, a 55 km long tunnel between St-Jean-de-Maurienne (France) and Susa (Italy), which will be the longest tunnel for the planned TGV (high speed train) Lyon-Torino link. Data from several deep boreholes (10 in total, 3 of them deeper than 1000 m) are used to provide the essential parameters for the model, i.e. geological structure, geothermal gradients, rock conductivities from cores, and geothermal deep heat flow. Modelling is done in two dimensions, but the effect of surface topography (3D) is considered. Results are given in the form of a geothermal cross-section along the tunnel axis that provides maximum temperatures and lengths of zones of high temperature encountered (for instance, zones where θ is ≥40°C). In general, differences between calculated and measured temperatures are less than 1°C at great depth. At shallow depth, differences are sometimes higher and probably best explained by water circulation connected to the surface. The modelling of temperatures, in relation to the geological structure, rock properties, and geothermal data for this area, appears to be a very useful tool for comparing alternative routes for deep tunnel projects and, during construction, for predicting potential local geological or hydrological anomalies.

  16. Sampling Schemes and the Selection of Log-Linear Models for Longitudinal Data.

    ERIC Educational Resources Information Center

    von Eye, Alexander; Schuster, Christof; Kreppner, Kurt

    2001-01-01

    Discusses the effects of sampling scheme selection on the admissibility of log-linear models for multinomial and product multinomial sampling schemes for prospective and retrospective sampling. Notes that in multinomial sampling, marginal frequencies are not fixed, whereas for product multinomial sampling, uni- or multidimensional frequencies are…

  17. Experiments and modeling of variably permeable carbonate reservoir samples in contact with CO₂-acidified brines

    DOE PAGES Beta

    Smith, Megan M.; Hao, Yue; Mason, Harris E.; Carroll, Susan A.

    2014-12-31

    Reactive experiments were performed to expose sample cores from the Arbuckle carbonate reservoir to CO₂-acidified brine under reservoir temperature and pressure conditions. The samples consisted of dolomite with varying quantities of calcite and silica/chert. The timescales of monitored pressure decline across each sample in response to CO₂ exposure, as well as the amount and nature of dissolution features, varied widely among these three experiments. For all sample cores, the experimentally measured initial permeability was at least one order of magnitude lower than the values estimated from downhole methods. Nondestructive X-ray computed tomography (XRCT) imaging revealed dissolution features including "wormholes," removal of fracture-filling crystals, and widening of pre-existing pore spaces. In the injection zone sample, multiple fractures may have contributed to the high initial permeability of this core and restricted the distribution of CO₂-induced mineral dissolution. In contrast, the pre-existing porosity of the baffle zone sample was much lower and less connected, leading to a lower initial permeability and contributing to the development of a single dissolution channel. While calcite may make up only a small percentage of the overall sample composition, its location and the effects of its dissolution have an outsized effect on permeability responses to CO₂ exposure. The XRCT data presented here are informative for building the model domain for numerical simulations of these experiments but require calibration by higher resolution means to confidently evaluate different porosity-permeability relationships.

  18. An uncoupled viscoplastic constitutive model for metals at elevated temperature

    NASA Technical Reports Server (NTRS)

    Haisler, W. E.; Cronenworth, J.

    1983-01-01

    An uncoupled constitutive model for predicting the transient response of thermal and rate dependent, inelastic material behavior is presented. The uncoupled model assumes that there is a temperature below which the total strain consists essentially of elastic and rate insensitive inelastic strains only. Above this temperature, the rate dependent inelastic strain (creep) dominates. The rate insensitive inelastic strain component is modeled in an incremental form with a yield function, flow rule and hardening law. Revisions to the hardening rule permit the model to predict temperature-dependent kinematic-isotropic hardening behavior, cyclic saturation, asymmetric stress-strain response upon stress reversal, and variable Bauschinger effect. The rate dependent inelastic strain component is modeled using a rate equation in terms of back stress, drag stress and exponent n as functions of temperature and strain. A sequence of hysteresis loops and relaxation tests are utilized to define the rate dependent inelastic strain rate. Evaluation of the model is performed by comparison with experiments involving various thermal and mechanical load histories on 5086 aluminum alloy, 304 stainless steel and Hastelloy-X.
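
    A generic sketch of a rate equation of the kind named above, with the inelastic strain rate driven by back stress, drag stress, and an exponent n; the power-law form, the constants, and the temperature dependence of the drag stress are illustrative assumptions, not the paper's fitted model.

    ```python
    # Generic sketch of a rate equation of the kind named above (inelastic strain
    # rate driven by back stress X, drag stress D and exponent n).  The power-law
    # form and all constants are illustrative assumptions, not the paper's fit.
    import numpy as np

    def inelastic_strain_rate(stress, back_stress, drag_stress, n, A=1.0e-6):
        """d(eps_in)/dt = A * (|sigma - X| / D)^n * sign(sigma - X)."""
        overstress = stress - back_stress
        return A * (abs(overstress) / drag_stress) ** n * np.sign(overstress)

    # Temperature enters through the drag stress; here a crude linear drop with T.
    def drag_stress(T, D0=300.0, dDdT=-0.2, T_ref=293.0):
        return max(D0 + dDdT * (T - T_ref), 50.0)

    for T in (293.0, 600.0, 900.0):
        print(T, inelastic_strain_rate(200.0, 50.0, drag_stress(T), n=5))
    ```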

  19. Automated biowaste sampling system, solids subsystem operating model, part 2

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.; Mangialardi, J. K.; Stauffer, R. E.

    1973-01-01

    The detail design and fabrication of the Solids Subsystem were implemented. The system's capacity for the collection, storage or sampling of feces and vomitus from six subjects was tested and verified.

  20. Understanding and quantifying foliar temperature acclimation for Earth System Models

    NASA Astrophysics Data System (ADS)

    Smith, N. G.; Dukes, J.

    2015-12-01

    Photosynthesis and respiration on land are the two largest carbon fluxes between the atmosphere and Earth's surface. The parameterization of these processes represents a major uncertainty in the terrestrial component of the Earth System Models used to project future climate change. Research has shown that much of this uncertainty is due to the parameterization of the temperature responses of leaf photosynthesis and autotrophic respiration, which are typically based on short-term empirical responses. Here, we show that including longer-term responses to temperature, such as temperature acclimation, can help to reduce this uncertainty and improve model performance, leading to drastic changes in future land-atmosphere carbon feedbacks across multiple models. However, these acclimation formulations have many flaws, including an underrepresentation of many important global flora. In addition, these parameterizations were done using multiple studies that employed differing methodology. As such, we used a consistent methodology to quantify the short- and long-term temperature responses of maximum Rubisco carboxylation (Vcmax), maximum rate of ribulose-1,5-bisphosphate regeneration (Jmax), and dark respiration (Rd) in multiple species representing each of the plant functional types used in global-scale land surface models. Short-term temperature responses of each process were measured in individuals acclimated for 7 days at one of 5 temperatures (15-35°C). The comparison of short-term curves in plants acclimated to different temperatures was used to evaluate long-term responses. Our analyses indicated that the instantaneous response of each parameter was highly sensitive to the temperature at which they were acclimated. However, we found that this sensitivity was larger in species whose leaves typically experience a greater range of temperatures over the course of their lifespan. These data indicate that models using previous acclimation formulations are likely incorrectly

  1. Evaluating Small Sample Approaches for Model Test Statistics in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Nevitt, Jonathan

    Structural equation modeling (SEM) attempts to remove the negative influence of measurement error and allows for investigation of relationships at the level of the underlying constructs of interest. SEM has been regarded as a "large sample" technique since its inception. Recent developments in SEM, some of which are currently available in popular…

  2. A Hierarchy of Snowmelt Models for Canadian Prairies: Temperature-Index, Modified Temperature Index and Energy-Balance Models

    NASA Astrophysics Data System (ADS)

    Yew Gan, Thian; Singh, Purushottam; Gobena, Adam

    2010-05-01

    Three semi-distributed snowmelt models were developed and applied to the Paddle River Basin (PRB) in the Canadian Prairies: (1) A physics-based, energy balance model (SDSM-EBM) that considers vertical energy exchange processes in open and forested areas, and snowmelt processes that include liquid and ice phases separately; (2) A modified temperature index model (SDSM-MTI) that uses both near surface soil temperature (Tg) and air temperature (Ta), and (3) A standard temperature index (SDSM-TI) method using Ta only. Other than the "regulatory" effects of beaver dams that affected the validation results on simulated runoff, both SDSM-MTI and SDSM-EBM simulated reasonably accurate snowmelt runoff, snow water equivalent and snow depth. For the PRB, where snowpack is shallow to moderately deep, and winter is relatively severe, the advantage of using both Ta and Tg is partly attributed to Tg showing a stronger correlation with solar radiation than Ta during the spring snowmelt season, and partly to the onset of major snowmelt which usually happens when Tg approaches 0°C. After re-setting model parameters so that SDSM-MTI degenerated to SDSM-TI (effect of Tg is completely removed), the latter performed poorly, even after re-calibrating the melt factors using Ta alone. It seems that if reliable Tg data are available, they should be utilized to model the snowmelt processes in a Prairie environment particularly if the temperature-index approach is adopted.

  3. Temperature dependent stability model for graphene nanoribbon interconnects

    NASA Astrophysics Data System (ADS)

    Chanu, Waikhom Mona; Das, Debaprasad

    2016-04-01

    In this paper, a temperature dependent equivalent circuit model for graphene nanoribbon (GNR) interconnects is proposed. The stability analysis of GNR interconnects is performed using this proposed model and its performance is compared with that of copper based interconnects. The analysis is performed for different interconnect systems at the 16 nm ITRS technology node. The relative stability increases with interconnect length. GNR interconnects show a smaller increase in resistance with increasing temperature than Cu interconnects.

  4. Measuring and modeling hemoglobin aggregation below the freezing temperature.

    PubMed

    Rosa, Mónica; Lopes, Carlos; Melo, Eduardo P; Singh, Satish K; Geraldes, Vitor; Rodrigues, Miguel A

    2013-08-01

    Freezing of protein solutions is required for many applications such as storage, transport, or lyophilization; however, freezing has inherent risks for protein integrity. It is difficult to study protein stability below the freezing temperature because phase separation constrains solute concentration in solution. In this work, we developed an isochoric method to study protein aggregation in solutions at -5, -10, -15, and -20 °C. Lowering the temperature below the freezing point in a fixed volume prevents the aqueous solution from freezing, as pressure rises until equilibrium (P,T) is reached. Aggregation rates of bovine hemoglobin (BHb) increased at lower temperature (-20 °C) and higher BHb concentration. However, the addition of sucrose substantially decreased the aggregation rate and prevented aggregation when the concentration reached 300 g/L. The unfolding thermodynamics of BHb was studied using fluorescence, and the fraction of unfolded protein as a function of temperature was determined. A mathematical model was applied to describe BHb aggregation below the freezing temperature. This model was able to predict the aggregation curves for various storage temperatures and initial concentrations of BHb. The aggregation mechanism was revealed to be mediated by an unfolded state, followed by a fast growth of aggregates that readily precipitate. The aggregation kinetics increased for lower temperature because of the higher fraction of unfolded BHb closer to the cold denaturation temperature. Overall, the results obtained herein suggest that the isochoric method could provide a relatively simple approach to obtain fundamental thermodynamic information about the protein and the aggregation mechanism, thus providing a new approach to developing accelerated formulation studies below the freezing temperature. PMID:23808610
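
    A minimal sketch of the mechanism described in this record: a two-state unfolded fraction that grows toward the cold-denaturation temperature feeds a first-order aggregation step. The thermodynamic constants and rate coefficient are placeholders, and the effect of the isochoric pressure rise is ignored.

    ```python
    # Minimal sketch of the mechanism described above: a two-state unfolded
    # fraction f_U(T), rising toward the cold-denaturation temperature, feeding a
    # first-order aggregation step.  All constants are placeholder assumptions.
    import numpy as np

    R = 8.314  # J/mol/K

    def fraction_unfolded(T, dH=300e3, dCp=8e3, T_m=320.0):
        """Gibbs-Helmholtz two-state model; cold denaturation appears at low T."""
        dG = dH * (1 - T / T_m) + dCp * (T - T_m - T * np.log(T / T_m))
        return 1.0 / (1.0 + np.exp(dG / (R * T)))

    def aggregation_rate(conc_g_per_L, T, k_agg=1e-3):
        """Assumed first-order loss of native protein via the unfolded state."""
        return k_agg * fraction_unfolded(T) * conc_g_per_L

    for T in (253.0, 263.0, 273.0):
        print(f"{T - 273.15:+5.1f} degC  f_U = {fraction_unfolded(T):.3e}  "
              f"rate ~ {aggregation_rate(100.0, T):.3e} g/L per unit time")
    ```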

  5. Study of Aerothermodynamic Modeling Issues Relevant to High-Speed Sample Return Vehicles

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.

    2014-01-01

    This paper examines the application of state-of-the-art coupled ablation and radiation simulations to high-speed sample return vehicles, such as those returning from Mars or an asteroid. A defining characteristic of these entries is that the surface recession rates and temperatures are driven by nonequilibrium convective and radiative heating through a boundary layer with significant surface blowing and ablation products. Measurements relevant to validating the simulation of these phenomena are reviewed and the Stardust entry is identified as providing the best relevant measurements. A coupled ablation and radiation flowfield analysis is presented that implements a finite-rate surface chemistry model. Comparisons between this finite-rate model and an equilibrium ablation model show that, while good agreement is seen for diffusion-limited oxidation cases, the finite-rate model predicts up to 50% lower char rates than the equilibrium model at sublimation conditions. Both the equilibrium and finite-rate models predict significant negative mass flux at the surface due to sublimation of atomic carbon. A sensitivity analysis to flowfield and surface chemistry rates shows that, for a sample return capsule at 10, 12, and 14 km/s, the sublimation rates for C and C3 provide the largest changes to the convective flux, radiative flux, and char rate. A parametric uncertainty analysis of the radiative heating due to radiation modeling parameters indicates uncertainties ranging from 27% at 10 km/s to 36% at 14 km/s. Applying the developed coupled analysis to the Stardust entry results in temperatures within 10% of those inferred from observations, and final recession values within 20% of measurements, which improves upon the 60% over-prediction at the stagnation point obtained through an uncoupled analysis. Emission from CN Violet is shown to be over-predicted by nearly an order of magnitude, which is consistent with the results of previous independent analyses. Finally, the

  6. Field portable low temperature porous layer open tubular cryoadsorption headspace sampling and analysis part I: Instrumentation.

    PubMed

    Bruno, Thomas J

    2016-01-15

    Building on the successful application in the laboratory of PLOT-cryoadsorption as a means of collecting vapor (or headspace) samples for chromatographic analysis, in this paper a field portable apparatus is introduced. This device fits inside of a briefcase (aluminum tool carrier), and can be easily transported by vehicle or by air. The portable apparatus functions entirely on compressed air, making it suitable for use in locations lacking electrical power, and for use in flammable and explosive environments. The apparatus consists of four aspects: a field capable PLOT-capillary platform, the supporting equipment platform, the service interface between the PLOT-capillary and the supporting equipment, and the necessary peripherals. Vapor sampling can be done with either a hand piece (containing the PLOT capillary) or with a custom fabricated standoff module. Both the hand piece and the standoff module can be heated and cooled to facilitate vapor collection and subsequent vapor sample removal. The service interface between the support platform and the sampling units makes use of a unique counter current approach that minimizes loss of cooling and heating due to heat transfer with the surroundings (recuperative thermostatting). Several types of PLOT-capillary elements and sampling probes are described in this report. Applications to a variety of samples relevant to forensic and environmental analysis are discussed in a companion paper. PMID:26687166

  7. Micro-electro-mechanical systems/near-infrared validation of different sampling modes and sample sets coupled with multiple models.

    PubMed

    Wu, Zhisheng; Shi, Xinyuan; Wan, Guang; Xu, Manfei; Zhan, Xueyan; Qiao, Yanjiang

    2015-01-01

    The aim of the present study was to demonstrate the reliability of micro-electro-mechanical systems/near-infrared technology by investigating analytical models of two modes of sampling (integrating sphere and fiber optic probe modes) and different sample sets. Baicalin in Yinhuang tablets was used as an example, and the experimental procedure included the optimization of spectral pretreatments, selection of wavelength regions using interval partial least squares, moving window partial least squares, and validation of the method using an accuracy profile. The results demonstrated that models that use the integrating sphere mode are better than those that use fiber optic probe modes. Spectra that use fiber optic probe modes tend to be more susceptible to interference information because the intensity of the incident light on a fiber optic probe mode is significantly weaker than that on an integrating sphere mode. According to the test set validation result of the method parameters, such as accuracy, precision, risk, and linearity, the selection of variables was found to make no significant difference to the performance of the full spectral model. The performance of the models whose sample sets ranged widely in concentration (i.e., 1-4 %) was found to be better than that of models whose samples had relatively narrow ranges (i.e., 1-2 %). The establishment and validation of this method can be used to clarify the analytical guideline in Chinese herbal medicine about two sampling modes and different sample sets in the micro-electro-mechanical systems/near-infrared technique. PMID:25626144

  8. Modeling apple surface temperature dynamics based on weather data.

    PubMed

    Li, Lei; Peters, Troy; Zhang, Qin; Zhang, Jingjin; Huang, Danfeng

    2014-01-01

    The exposure of fruit surfaces to direct sunlight during the summer months can result in sunburn damage. Losses due to sunburn damage are a major economic problem when marketing fresh apples. The objective of this study was to develop and validate a model for simulating fruit surface temperature (FST) dynamics based on energy balance and measured weather data. A series of weather data (air temperature, humidity, solar radiation, and wind speed) was recorded for seven hours between 11:00-18:00 for two months at fifteen minute intervals. To validate the model, the FSTs of "Fuji" apples were monitored using an infrared camera in a natural orchard environment. The FST dynamics were measured using a series of thermal images. For the apples that were completely exposed to the sun, the RMSE of the model for estimating FST was less than 2.0 °C. A sensitivity analysis of the emissivity of the apple surface and the conductance of the fruit surface to water vapour showed that accurate estimations of the apple surface emissivity were important for the model. The validation results showed that the model was capable of accurately describing the thermal performances of apples under different solar radiation intensities. Thus, this model could be used to more accurately estimate the FST relative to estimates that only consider the air temperature. In addition, this model provides useful information for sunburn protection management. PMID:25350507
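
    A sketch of an energy-balance estimate of fruit surface temperature from weather inputs, in the spirit of the model above: absorbed shortwave plus net longwave exchange is balanced against convective and latent losses and solved for the surface temperature. The absorptivity, emissivity, convective coefficient, and latent term are placeholder assumptions, not the paper's parameterization.

    ```python
    # Sketch of an energy-balance estimate of fruit surface temperature (FST) from
    # weather inputs.  Absorptivity, emissivity, the convective coefficient, and
    # the latent term are assumed placeholder values, not the paper's model.
    from scipy.optimize import brentq

    SIGMA = 5.670374419e-8

    def fst_energy_balance(T_air_C, solar_W_m2, wind_m_s, rh,
                           absorptivity=0.6, emissivity=0.95):
        T_air = T_air_C + 273.15

        def residual(T_s):
            shortwave = absorptivity * solar_W_m2
            longwave = emissivity * SIGMA * (T_air**4 - T_s**4)   # net exchange with surroundings
            h = 6.0 + 4.0 * wind_m_s                              # crude convective coefficient
            convection = h * (T_s - T_air)
            latent = 20.0 * (1.0 - rh)                            # crude transpiration term, W/m^2
            return shortwave + longwave - convection - latent

        # Solve the balance for the surface temperature within a physical bracket.
        return brentq(residual, T_air - 5.0, T_air + 40.0) - 273.15

    print(f"FST ~ {fst_energy_balance(30.0, 900.0, 1.0, 0.4):.1f} degC")
    ```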

  9. Modeling Apple Surface Temperature Dynamics Based on Weather Data

    PubMed Central

    Li, Lei; Peters, Troy; Zhang, Qin; Zhang, Jingjin; Huang, Danfeng

    2014-01-01

    The exposure of fruit surfaces to direct sunlight during the summer months can result in sunburn damage. Losses due to sunburn damage are a major economic problem when marketing fresh apples. The objective of this study was to develop and validate a model for simulating fruit surface temperature (FST) dynamics based on energy balance and measured weather data. A series of weather data (air temperature, humidity, solar radiation, and wind speed) was recorded for seven hours between 11:00–18:00 for two months at fifteen minute intervals. To validate the model, the FSTs of “Fuji” apples were monitored using an infrared camera in a natural orchard environment. The FST dynamics were measured using a series of thermal images. For the apples that were completely exposed to the sun, the RMSE of the model for estimating FST was less than 2.0 °C. A sensitivity analysis of the emissivity of the apple surface and the conductance of the fruit surface to water vapour showed that accurate estimations of the apple surface emissivity were important for the model. The validation results showed that the model was capable of accurately describing the thermal performances of apples under different solar radiation intensities. Thus, this model could be used to more accurately estimate the FST relative to estimates that only consider the air temperature. In addition, this model provides useful information for sunburn protection management. PMID:25350507

  10. Cloud Impacts on Pavement Temperature in Energy Balance Models

    NASA Astrophysics Data System (ADS)

    Walker, C. L.

    2013-12-01

    Forecast systems provide decision support for end-users ranging from the solar energy industry to municipalities concerned with road safety. Pavement temperature is an important variable when considering vehicle response to various weather conditions. A complex, yet direct relationship exists between tire and pavement temperatures. Literature has shown that as tire temperature increases, friction decreases which affects vehicle performance. Many forecast systems suffer from inaccurate radiation forecasts resulting in part from the inability to model different types of clouds and their influence on radiation. This research focused on forecast improvement by determining how cloud type impacts the amount of shortwave radiation reaching the surface and subsequent pavement temperatures. The study region was the Great Plains where surface solar radiation data were obtained from the High Plains Regional Climate Center's Automated Weather Data Network stations. Road pavement temperature data were obtained from the Meteorological Assimilation Data Ingest System. Cloud properties and radiative transfer quantities were obtained from the Clouds and Earth's Radiant Energy System mission via Aqua and Terra Moderate Resolution Imaging Spectroradiometer satellite products. An additional cloud data set was incorporated from the Naval Research Laboratory Cloud Classification algorithm. Statistical analyses using a modified nearest neighbor approach were first performed relating shortwave radiation variability with road pavement temperature fluctuations. Then statistical associations were determined between the shortwave radiation and cloud property data sets. Preliminary results suggest that substantial pavement forecasting improvement is possible with the inclusion of cloud-specific information. Future model sensitivity testing seeks to quantify the magnitude of forecast improvement.

  11. Modelling Brain Temperature and Perfusion for Cerebral Cooling

    NASA Astrophysics Data System (ADS)

    Blowers, Stephen; Valluri, Prashant; Marshall, Ian; Andrews, Peter; Harris, Bridget; Thrippleton, Michael

    2015-11-01

    Brain temperature relies heavily on two aspects: i) blood perfusion and porous heat transport through tissue and ii) blood flow and heat transfer through the embedded arterial and venous vasculature. Moreover, brain temperature cannot be measured directly unless highly invasive surgical procedures are used. A 3D two-phase fluid-porous model for mapping flow and temperature in the brain is presented, with arterial and venous vessels extracted from MRI scans. Heat generation through metabolism is also included. The model is robust and reveals flow and temperature maps in unprecedented 3D detail. However, the Carman-Kozeny parameters of the porous (tissue) phase need to be optimised for expected perfusion profiles. In order to optimise these parameters, a reduced-order two-phase model is developed in which 1D vessels are created with a tree generation algorithm embedded inside a 3D porous domain. Results reveal that blood perfusion is a strong function of the porosity distribution in the tissue. We present a qualitative comparison between the simulated perfusion maps and those obtained clinically. We also present results studying the effect of scalp cooling on core brain temperature, and preliminary results agree with those observed clinically.

  12. 3.5 D temperature model of a coal stockpile

    SciTech Connect

    Ozdeniz, A.H.; Corumluoglu, O.; Kalayci, I.; Sensogut, C.

    2008-07-01

    Coal that is produced in excess of sales must remain in stockpiles at stock sites. If these coal stockpiles remain at the stock yards over a certain period of time, spontaneous combustion can begin. Stockpiles under threat of combustion can be very costly to coal companies, so it is important to take precautions to protect stockpiles from spontaneous combustion. In this research, a coal stockpile 5 m wide, 10 m long, and 3 m high, weighing 120 tons, was monitored to observe internal temperature changes with respect to time under normal atmospheric conditions. Internal temperature measurements were obtained at 20 points distributed over two layers in the stockpile. Temperatures measured by a specially designed mechanism were stored on a computer every 3 h for a period of 3 months. This dataset was then used to construct 3.5D temporal temperature distribution models for the two levels, which were analyzed and interpreted to draw conclusions. The 3.5D models created for this research clearly showed internal temperatures in the stockpile rising to 31°C.

  13. Spatiotemporal modeling of monthly soil temperature using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Tang, Xiao-Ping; Guo, Nai-Jia; Yang, Chao; Liu, Hong-Bin; Shang, Yue-Feng

    2013-08-01

    Soil temperature data are critical for understanding land-atmosphere interactions. However, in many cases, they are limited at both spatial and temporal scales. In the current study, an attempt was made to predict monthly mean soil temperature at a depth of 10 cm using artificial neural networks (ANNs) over a large region with complex terrain. Gridded independent variables, including latitude, longitude, elevation, topographic wetness index, and normalized difference vegetation index, were derived from a digital elevation model and remote sensing images with a resolution of 1 km. The good performance and robustness of the proposed ANNs were demonstrated by comparisons with multiple linear regressions. On average, the developed ANNs presented a relative improvement of about 44 % in root mean square error, 70 % in mean absolute percentage error, and 18 % in coefficient of determination over classical linear models. The proposed ANN models were then applied to predict soil temperatures at unsampled locations across the study area. Spatiotemporal variability of soil temperature was investigated based on the obtained database. Future work will be needed to test the applicability of ANNs for estimating soil temperature at finer scales.
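
    A sketch of the approach above: a small multilayer perceptron mapping the five gridded predictors (latitude, longitude, elevation, topographic wetness index, NDVI) to monthly mean soil temperature. The training data are synthetic and the network size is an assumption, not the authors' architecture.

    ```python
    # Sketch of the ANN approach above: a small MLP mapping (latitude, longitude,
    # elevation, topographic wetness index, NDVI) to monthly mean soil temperature.
    # Synthetic training data and the network size are assumptions for illustration.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)
    n = 2000
    X = np.column_stack([
        rng.uniform(28, 33, n),      # latitude
        rng.uniform(105, 110, n),    # longitude
        rng.uniform(100, 2500, n),   # elevation, m
        rng.uniform(2, 20, n),       # topographic wetness index
        rng.uniform(0.1, 0.9, n),    # NDVI
    ])
    # Synthetic "truth": soil temperature dominated by an elevation lapse and latitude.
    y = 22.0 - 0.005 * X[:, 2] - 0.4 * (X[:, 0] - 28) + 2.0 * X[:, 4] + rng.normal(0, 0.5, n)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0))
    model.fit(X[:1500], y[:1500])
    print("R^2 on held-out grid cells:", round(model.score(X[1500:], y[1500:]), 3))
    ```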

  14. Land-surface temperature measurement from space - Physical principles and inverse modeling

    NASA Technical Reports Server (NTRS)

    Wan, Zhengming; Dozier, Jeff

    1989-01-01

    To apply the multiple-wavelength (split-window) method used for satellite measurement of sea-surface temperature from thermal-infrared data to land-surface temperatures, the authors statistically analyze simulations using an atmospheric radiative transfer model. The range of atmospheric conditions and surface temperatures simulated is wide enough to cover variations in clear atmospheric properties and surface temperatures, both of which are larger over land than over sea. Surface elevation is also included in the simulation as the most important topographic effect. Land covers characterized by measured or modeled spectral emissivities include snow, clay, sands, and tree leaf samples. The empirical inverse model can estimate the surface temperature with a standard deviation less than 0.3 K and a maximum error less than 1 K, for viewing angles up to 40 degrees from nadir under cloud-free conditions, given satellite measurements in three infrared channels. A band in the region from 10.2 to 11.0 microns will usually give the most reliable single-band estimate of surface temperature. In addition, a band in either the 3.5-4.0-micron region or in the 11.5-12.6-micron region must be included for accurate atmospheric correction, and a band below the ozone absorption feature at 9.6 microns (e.g., 8.2-8.8 microns) will increase the accuracy of the estimate of surface temperature.
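
    A generic split-window form for land-surface temperature, illustrating the regression-style inverse model described above; the coefficients here are hypothetical, whereas the study derives them from radiative-transfer simulations spanning many atmospheres, emissivities, elevations, and view angles.

    ```python
    # Generic split-window form for land-surface temperature.  The coefficients are
    # hypothetical; in the study they come from fitting radiative-transfer
    # simulations over many atmospheres, emissivities and view angles.
    def split_window_lst(T11, T12, emissivity=0.97, a0=1.0, a1=1.01, a2=2.0, a3=50.0):
        """T11, T12: brightness temperatures (K) near 11 and 12 micrometres."""
        return a0 + a1 * T11 + a2 * (T11 - T12) + a3 * (1.0 - emissivity)

    print(split_window_lst(295.0, 293.5))   # rough surface-temperature estimate in K
    ```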

  15. High-temperature transport in the Hubbard Model

    NASA Astrophysics Data System (ADS)

    Shastry, B. Sriram; Perepelitsky, Edward; Galatas, Andrew; Khatami, Ehsan; Mravlje, Jernej; Georges, Antoine

    We examine the general behavior of the frequency and momentum dependent single-particle scattering rate and the transport coefficients of many-body systems in the high-temperature limit. We find that the single-particle scattering rate always saturates in temperature, while the transport coefficients always decay like 1/T, where T is the temperature. A consequence of this is a resistivity which is ubiquitously linear in T at high temperatures. For the Hubbard model, by using the high-temperature series, we are able to calculate the first few moments of the single particle scattering rate Σ(k, ω) and the conductivity σ(k, ω) in the high-temperature regime in d spatial dimensions. Further, in the case of d → ∞, we are able to calculate a large number of moments using symbolic computation. We make a direct comparison between these moments and those obtained through Dynamical Mean Field Theory (DMFT). Finally, we use the moments to reconstruct the ω-dependent optical conductivity σ(ω) = lim_{k→0} σ(k, ω) in the high-temperature regime. The work at UCSC was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES) under Award # FG02-06ER46319.

  16. Sampling Biases in Datasets of Historical Mean Air Temperature over Land

    NASA Astrophysics Data System (ADS)

    Wang, Kaicun

    2014-04-01

    Global mean surface air temperature (Ta) has been reported to have risen by 0.74°C over the last 100 years. However, the definition of mean Ta is still a subject of debate. The most defensible definition might be the integral of the continuous temperature measurements over a day (Td0). However, for technological and historical reasons, mean Ta over land has been taken to be the average of the daily maximum and minimum temperature measurements (Td1). All existing principal global temperature analyses over land rely heavily on Td1. Here, I make a first quantitative assessment of the bias in the use of Td1 to estimate trends of mean Ta using hourly Ta observations at 5600 globally distributed weather stations from the 1970s to 2013. I find that the use of Td1 has a negligible impact on the global mean warming rate. However, the trend of Td1 has a substantial bias at regional and local scales, with a root mean square error of over 25% on 5° × 5° grids. Therefore, caution should be taken when using mean Ta datasets based on Td1 to examine high resolution details of warming trends.
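
    A sketch of the Td0 versus Td1 comparison described above: Td0 is the mean of all hourly readings in a day, Td1 the average of the daily maximum and minimum, and the two definitions can yield different trends. The hourly series below is synthetic; the actual analysis uses hourly observations from 5600 stations.

    ```python
    # Sketch of the Td0 vs Td1 comparison above: Td0 is the mean of all hourly
    # readings in a day, Td1 the average of the daily max and min.  The hourly
    # series here is synthetic; real analyses use station observations.
    import numpy as np

    rng = np.random.default_rng(2)
    n_days = 40 * 365
    hours = np.arange(24)
    trend = 0.00005 * np.arange(n_days)[:, None]                     # slow warming
    diurnal = 5.0 * np.sin(2 * np.pi * (hours - 9) / 24)[None, :]    # diurnal cycle
    hourly = 15.0 + trend + diurnal + rng.normal(0, 1.5, (n_days, 24))

    td0 = hourly.mean(axis=1)                               # integral-style daily mean
    td1 = 0.5 * (hourly.max(axis=1) + hourly.min(axis=1))   # (Tmax + Tmin) / 2

    days = np.arange(n_days)
    trend_td0 = np.polyfit(days, td0, 1)[0] * 365 * 100     # degC per century
    trend_td1 = np.polyfit(days, td1, 1)[0] * 365 * 100
    print(f"Td0 trend {trend_td0:.2f}, Td1 trend {trend_td1:.2f} degC/century")
    ```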

  17. Sampling biases in datasets of historical mean air temperature over land.

    PubMed

    Wang, Kaicun

    2014-01-01

    Global mean surface air temperature (Ta) has been reported to have risen by 0.74°C over the last 100 years. However, the definition of mean Ta is still a subject of debate. The most defensible definition might be the integral of the continuous temperature measurements over a day (Td0). However, for technological and historical reasons, mean Ta over land has been taken to be the average of the daily maximum and minimum temperature measurements (Td1). All existing principal global temperature analyses over land rely heavily on Td1. Here, I make a first quantitative assessment of the bias in the use of Td1 to estimate trends of mean Ta using hourly Ta observations at 5600 globally distributed weather stations from the 1970s to 2013. I find that the use of Td1 has a negligible impact on the global mean warming rate. However, the trend of Td1 has a substantial bias at regional and local scales, with a root mean square error of over 25% on 5° × 5° grids. Therefore, caution should be taken when using mean Ta datasets based on Td1 to examine high resolution details of warming trends. PMID:24717688

  18. IMPROVED TEMPERATURE STABILITY OF SULFUR DIOXIDE SAMPLES COLLECTED BY THE FEDERAL REFERENCE METHOD

    EPA Science Inventory

    This report describes an examination of the reagents present in the SO2 Federal Reference Method (FRM) to determine if any change in reagent concentration or condition could bring about substantial, if not complete, retardation of the effect of temperature on the stability of col...

  19. HIGH TEMPERATURE HIGH PRESSURE THERMODYNAMIC MEASUREMENTS FOR COAL MODEL COMPOUNDS

    SciTech Connect

    Vinayak N. Kabadi

    2000-05-01

    The vapor-liquid equilibrium (VLE) measurement setup used in this work was first established several years ago. It is a flow-type high-temperature, high-pressure apparatus designed to operate below 500°C and 2000 psia. Compared with the static method, this method has three major advantages: first, a large quantity of sample can be withdrawn from the system without disturbing the previously established equilibrium state; second, the residence time of the sample in the equilibrium cell is greatly reduced, so decomposition or contamination of the sample can be effectively prevented; third, the flow system allows the sample to degas as it heats up, since any non-condensable gas will exit in the vapor stream, accumulate in the vapor condenser, and not be recirculated. The first few runs were made with the quinoline-tetralin system, and the results were in fair agreement with literature data. A former graduate student, Amad, used the same apparatus to acquire VLE data for the benzene-ethylbenzene system. The present work used essentially the same setup (with several modifications) to obtain VLE data for the ethylbenzene-quinoline system.

  20. A computer model of global thermospheric winds and temperatures

    NASA Technical Reports Server (NTRS)

    Killeen, T. L.; Roble, R. G.; Spencer, N. W.

    1987-01-01

    Output data from the NCAR Thermospheric GCM and a vector-spherical-harmonic (VSH) representation of the wind field are used in constructing a computer model of time-dependent global horizontal vector neutral wind and temperature fields at altitudes of 130-300 km. The formulation of the VSH model is explained in detail, and some typical results obtained with a preliminary version (applicable to December solstice at solar maximum) are presented graphically. Good agreement with DE-2 satellite measurements is demonstrated.

  1. The topomer-sampling model of protein folding

    PubMed Central

    Debe, Derek A.; Carlson, Matt J.; Goddard, William A.

    1999-01-01

    Clearly, a protein cannot sample all of its conformations (e.g., ≈3^100 ≈ 10^48 for a 100-residue protein) on an in vivo folding timescale (<1 s). To investigate how the conformational dynamics of a protein can accommodate subsecond folding time scales, we introduce the concept of the native topomer, which is the set of all structures similar to the native structure (obtainable from the native structure through local backbone coordinate transformations that do not disrupt the covalent bonding of the peptide backbone). We have developed a computational procedure for estimating the number of distinct topomers required to span all conformations (compact and semicompact) for a polypeptide of a given length. For 100 residues, we find ≈3 × 10^7 distinct topomers. Based on the distance calculated between different topomers, we estimate that a 100-residue polypeptide diffusively samples one topomer every ≈3 ns. Hence, a 100-residue protein can find its native topomer by random sampling in just ≈100 ms. These results suggest that subsecond folding of modest-sized, single-domain proteins can be accomplished by a two-stage process of (i) topomer diffusion: random, diffusive sampling of the 3 × 10^7 distinct topomers to find the native topomer (≈0.1 s), followed by (ii) intratopomer ordering: nonrandom, local conformational rearrangements within the native topomer to settle into the precise native state. PMID:10077555
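
    A back-of-the-envelope check of the numbers quoted in the abstract (a sketch; the constants are those stated above):

      # Rough arithmetic behind the topomer-sampling estimate quoted above.
      n_conformations = 3 ** 100          # ~5e47 backbone conformations for 100 residues
      n_topomers = 3e7                    # distinct topomers estimated for 100 residues
      t_per_topomer = 3e-9                # seconds to diffusively sample one topomer
      t_search = n_topomers * t_per_topomer
      print(f"{n_conformations:.1e} conformations; "
            f"random topomer search takes ~{t_search * 1e3:.0f} ms")   # ~0.1 s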

  2. Determination of cross-grain properties of clearwood samples under kiln-drying conditions at temperature up to 140 C

    SciTech Connect

    Keep, L.B.; Keey, R.B.

    2000-07-01

    Small specimens of Pinus radiata have been tested to determine the creep strain that occurs during the kiln drying of boards. The samples have been tested over a range of temperatures from 20 °C to 140 °C. The samples, measuring 150 × 50 × 5 mm, were conditioned at various relative humidities in a pilot-plant kiln, in which the experiments at constant moisture content (MC) in the range of 5-20% MC were undertaken to eliminate mechano-sorptive strains. To determine the creep strain, the samples were brought to their equilibrium moisture content (EMC), then mechanically loaded under tension in the direction perpendicular to the grain. The strain was measured using small linear position sensors (LPS) which detect any elongation or shrinkage in the sample. The instantaneous compliance was measured within 60 sec of the application of the load (stress). The subsequent creep was monitored by the continued logging of strain data from the LPS units. The results of these experiments are consistent with previous studies of Wu and Milota (1995) on Douglas-fir (Pseudotsuga menziesii). An increase in temperature or moisture content causes a rise in the creep strain while the sample is under tension. Values for the instantaneous compliance range from 1.7 × 10^-3 to 1.28 × 10^-2 M/Pa at temperatures between 20 °C and 140 °C and moisture content in the range of 5-20%. The rates of change of the creep strains are of the order of 10^-7 to 10^-8 s^-1 for these temperatures and moisture contents. The experimental data have been fitted to the constitutive equations of Wu and Milota (1996) for Douglas-fir to give material parameters for the instantaneous and creep strain components for Pinus radiata.

  3. A minimal model for finite temperature superfluid dynamics

    NASA Astrophysics Data System (ADS)

    Andersson, N.; Krüger, C.; Comer, G. L.; Samuelsson, L.

    2013-12-01

    Building on a recently improved understanding of the problem of heat flow in general relativity, we develop a hydrodynamical model for coupled finite temperature superfluids. The formalism is designed with the dynamics of the outer core of a mature neutron star (where superfluid neutrons are coupled to a conglomerate of protons and electrons) in mind, but the main ingredients are relevant for a range of analogous problems. The entrainment between material fluid components (the condensates) and the entropy (the thermal excitations) plays a central role in the development. We compare and contrast the new model to previous results in the literature, and provide estimates for the relevant entrainment coefficients that should prove useful in future applications. Finally, we consider the sound-wave propagation in the system in two simple limits, demonstrating the presence of second sound if the temperature is sub-critical, but absence of this phenomenon above the critical temperature for superfluidity.

  4. Temperature Driven Annealing of Perforations in Bicellar Model Membranes

    SciTech Connect

    Nieh, Mu-Ping; Raghunathan, V.A.; Pabst, Georg; Harroun, Thad; Nagashima, K; Morales, H; Katsaras, John; Macdonald, P

    2011-01-01

    Bicellar model membranes composed of 1,2-dimyristoylphosphatidylcholine (DMPC) and 1,2-dihexanoylphosphatidylcholine (DHPC), with a DMPC/DHPC molar ratio of 5, and doped with the negatively charged lipid 1,2-dimyristoylphosphatidylglycerol (DMPG), at DMPG/DMPC molar ratios of 0.02 or 0.1, were examined using small angle neutron scattering (SANS), ³¹P NMR, and ¹H pulsed field gradient (PFG) diffusion NMR with the goal of understanding temperature effects on the DHPC-dependent perforations in these self-assembled membrane mimetics. Over the temperature range studied via SANS (300-330 K), these bicellar lipid mixtures exhibited a well-ordered lamellar phase. The interlamellar spacing d increased with increasing temperature, in direct contrast to the decrease in d observed upon increasing temperature with otherwise identical lipid mixtures lacking DHPC. ³¹P NMR measurements on magnetically aligned bicellar mixtures of identical composition indicated a progressive migration of DHPC from regions of high curvature into planar regions with increasing temperature, and in accord with the 'mixed bicelle model' (Triba, M. N.; Warschawski, D. E.; Devaux, P. E. Biophys. J. 2005, 88, 1887-1901). Parallel PFG diffusion NMR measurements of transbilayer water diffusion, where the observed diffusion is dependent on the fractional surface area of lamellar perforations, showed that transbilayer water diffusion decreased with increasing temperature. A model is proposed consistent with the SANS, ³¹P NMR, and PFG diffusion NMR data, wherein increasing temperature drives the progressive migration of DHPC out of high-curvature regions, consequently decreasing the fractional volume of lamellar perforations, so that water occupying these perforations redistributes into the interlamellar volume, thereby increasing the interlamellar spacing.

  5. In-sample and out-of-sample model selection and error estimation for support vector machines.

    PubMed

    Anguita, Davide; Ghio, Alessandro; Oneto, Luca; Ridella, Sandro

    2012-09-01

    In-sample approaches to model selection and error estimation of support vector machines (SVMs) are not as widespread as out-of-sample methods, where part of the data is removed from the training set for validation and testing purposes, mainly because their practical application is not straightforward and the latter provide, in many cases, satisfactory results. In this paper, we survey some recent and not-so-recent results of the data-dependent structural risk minimization framework and propose a proper reformulation of the SVM learning algorithm, so that the in-sample approach can be effectively applied. The experiments, performed both on simulated and real-world datasets, show that our in-sample approach compares favorably with out-of-sample methods, especially in cases where the latter provide questionable results. In particular, when the number of samples is small compared to their dimensionality, as in classification of microarray data, our proposal can outperform conventional out-of-sample approaches such as cross-validation, leave-one-out, or bootstrap methods. PMID:24807923

  6. Activation energy for a model ferrous-ferric half reaction from transition path sampling

    NASA Astrophysics Data System (ADS)

    Drechsel-Grau, Christof; Sprik, Michiel

    2012-01-01

    Activation parameters for the model oxidation half reaction of the classical aqueous ferrous ion are compared for different molecular simulation techniques. In particular, activation free energies are obtained from umbrella integration and Marcus theory based thermodynamic integration, which rely on the diabatic gap as the reaction coordinate. The latter method also assumes linear response, and both methods obtain the activation entropy and the activation energy from the temperature dependence of the activation free energy. In contrast, transition path sampling does not require knowledge of the reaction coordinate and directly yields the activation energy [C. Dellago and P. G. Bolhuis, Mol. Simul. 30, 795 (2004), 10.1080/08927020412331294869]. Benchmark activation energies from transition path sampling agree within statistical uncertainty with activation energies obtained from standard techniques requiring knowledge of the reaction coordinate. In addition, it is found that the activation energy for this model system is significantly smaller than the activation free energy for the Marcus model, approximately half the value, implying an equally large entropy contribution.
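
    For orientation (a sketch, not a result from the paper), the Marcus/linear-response activation free energy referred to above follows from the reorganization free energy and the reaction free energy; the numerical values below are placeholders.

      # Marcus (linear-response) activation free energy from the reorganization free
      # energy lam and the reaction free energy dA: dA_act = (lam + dA)^2 / (4 lam).
      # For a symmetric exchange (dA = 0) this reduces to lam / 4.
      def marcus_activation(lam, dA=0.0):
          return (lam + dA) ** 2 / (4.0 * lam)

      print(marcus_activation(lam=2.0))    # placeholder lam = 2.0 eV -> 0.5 eV barrier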

  7. Models of Solar Irradiance Variability and the Instrumental Temperature Record

    NASA Technical Reports Server (NTRS)

    Marcus, S. L.; Ghil, M.; Ide, K.

    1998-01-01

    The effects of decade-to-century (Dec-Cen) variations in total solar irradiance (TSI) on global mean surface temperature Ts during the pre-Pinatubo instrumental era (1854-1991) are studied by using two different proxies for TSI and a simplified version of the IPCC climate model.

  8. Estimating a Noncompensatory IRT Model Using Metropolis within Gibbs Sampling

    ERIC Educational Resources Information Center

    Babcock, Ben

    2011-01-01

    Relatively little research has been conducted with the noncompensatory class of multidimensional item response theory (MIRT) models. A Monte Carlo simulation study was conducted exploring the estimation of a two-parameter noncompensatory item response theory (IRT) model. The estimation method used was a Metropolis-Hastings within Gibbs algorithm…
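
    To make the estimation method named in the title concrete, here is a minimal, hypothetical sketch of a Metropolis-within-Gibbs update of person abilities under a two-dimensional noncompensatory 2PL model; the standard-normal prior, proposal scale, and toy data are assumptions, and item-parameter updates are omitted.

      # Sketch: Metropolis-within-Gibbs update of abilities (theta) for a noncompensatory
      # 2PL MIRT model, P(X=1 | theta) = prod_d logistic(a_d * (theta_d - b_d)).
      import numpy as np

      rng = np.random.default_rng(0)

      def loglik(theta, a, b, x):
          # theta: (D,) abilities; a, b: (I, D) item slopes/locations; x: (I,) 0/1 responses
          p_dim = 1.0 / (1.0 + np.exp(-a * (theta - b)))        # per-dimension probabilities
          p = np.clip(p_dim.prod(axis=1), 1e-12, 1 - 1e-12)     # noncompensatory: product over dims
          return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

      def update_theta(theta, a, b, x, step=0.3):
          # Random-walk Metropolis step with a standard-normal prior on theta (assumed).
          prop = theta + rng.normal(0.0, step, size=theta.shape)
          log_ratio = (loglik(prop, a, b, x) - 0.5 * prop @ prop) \
                    - (loglik(theta, a, b, x) - 0.5 * theta @ theta)
          return prop if np.log(rng.uniform()) < log_ratio else theta

      # Toy usage: 10 two-dimensional items, one examinee, 200 sweeps over theta only.
      a = rng.uniform(0.8, 1.6, size=(10, 2))
      b = rng.normal(0.0, 1.0, size=(10, 2))
      x = rng.integers(0, 2, size=10)
      theta = np.zeros(2)
      for _ in range(200):
          theta = update_theta(theta, a, b, x)
      print("posterior draw of theta:", theta)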

  9. An Importance Sampling EM Algorithm for Latent Regression Models

    ERIC Educational Resources Information Center

    von Davier, Matthias; Sinharay, Sandip

    2007-01-01

    Reporting methods used in large-scale assessments such as the National Assessment of Educational Progress (NAEP) rely on latent regression models. To fit the latent regression model using the maximum likelihood estimation technique, multivariate integrals must be evaluated. In the computer program MGROUP used by the Educational Testing Service for…
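
    As a generic illustration of the importance-sampling idea named in the title (not the MGROUP implementation), a posterior expectation of the kind an E-step requires can be approximated with self-normalized weighted draws from a proposal density; the toy posterior below is an assumption.

      # Sketch: importance-sampling approximation of E[g(theta) | data].
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      def is_expectation(g, log_post, mu, sd, n=5000):
          draws = rng.normal(mu, sd, size=n)                         # proposal draws
          logw = log_post(draws) - stats.norm.logpdf(draws, mu, sd)  # unnormalized log-weights
          w = np.exp(logw - logw.max())
          w /= w.sum()                                               # self-normalized weights
          return float(np.sum(w * g(draws)))

      # Toy check: if the "posterior" is N(1, 0.5^2), the posterior mean should come out near 1.
      log_post = lambda t: stats.norm.logpdf(t, 1.0, 0.5)
      print(is_expectation(lambda t: t, log_post, mu=0.0, sd=2.0))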

  10. Dynamic mechanical response and a constitutive model of Fe-based high temperature alloy at high temperatures and strain rates.

    PubMed

    Su, Xiang; Wang, Gang; Li, Jianfeng; Rong, Yiming

    2016-01-01

    The effects of strain rate and temperature on the dynamic behavior of a Fe-based high-temperature alloy were studied. The strain rates were 0.001-12,000 s^-1, at temperatures ranging from room temperature to 800 °C. A phenomenological constitutive model (Power-Law constitutive model) was proposed, considering adiabatic temperature rise and accurate material thermophysical properties. In particular, the effect of the specific heat capacity on the adiabatic temperature rise was studied. The constitutive model was verified to be accurate by comparison between predicted and experimental results. PMID:27186468

  11. Daily indoor-to-outdoor temperature and humidity relationships: a sample across seasons and diverse climatic regions.

    PubMed

    Nguyen, Jennifer L; Dockery, Douglas W

    2016-02-01

    The health consequences of heat and cold are usually evaluated based on associations with outdoor measurements collected at a nearby weather reporting station. However, people in the developed world spend little time outdoors, especially during extreme temperature events. We examined the association between indoor and outdoor temperature and humidity in a range of climates. We measured indoor temperature, apparent temperature, relative humidity, dew point, and specific humidity (a measure of moisture content in air) for one calendar year (2012) in a convenience sample of eight diverse locations ranging from the equatorial region (10 °N) to the Arctic (64 °N). We then compared the indoor conditions to outdoor values recorded at the nearest airport weather station. We found that the shape of the indoor-to-outdoor temperature and humidity relationships varied across seasons and locations. Indoor temperatures showed little variation across season and location. There was large variation in indoor relative humidity between seasons and between locations which was independent of outdoor airport measurements. On the other hand, indoor specific humidity, and to a lesser extent dew point, tracked with outdoor, airport measurements both seasonally and between climates, across a wide range of outdoor temperatures. These results suggest that, in general, outdoor measures of actual moisture content in air better capture indoor conditions than outdoor temperature and relative humidity. Therefore, in studies where water vapor is among the parameters of interest for examining weather-related health effects, outdoor measurements of actual moisture content can be more reliably used as a proxy for indoor exposure than the more commonly examined variables of temperature and relative humidity. PMID:26054827
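
    For readers unfamiliar with the humidity variables compared above, specific humidity can be derived from dew point and station pressure; the sketch below uses the Magnus approximation for saturation vapor pressure, which is an assumption of this illustration rather than the authors' procedure.

      # Sketch: specific humidity (kg water vapor per kg moist air) from dew point
      # and pressure, using the Magnus approximation for the vapor pressure.
      import math

      def vapor_pressure_hpa(dew_point_c):
          # Magnus formula over water, result in hPa
          return 6.112 * math.exp(17.62 * dew_point_c / (243.12 + dew_point_c))

      def specific_humidity(dew_point_c, pressure_hpa=1013.25):
          e = vapor_pressure_hpa(dew_point_c)
          return 0.622 * e / (pressure_hpa - 0.378 * e)   # kg/kg

      print(specific_humidity(15.0))   # ~0.0105 kg/kg for a 15 C dew point at sea level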

  12. Daily indoor-to-outdoor temperature and humidity relationships: a sample across seasons and diverse climatic regions

    NASA Astrophysics Data System (ADS)

    Nguyen, Jennifer L.; Dockery, Douglas W.

    2016-02-01

    The health consequences of heat and cold are usually evaluated based on associations with outdoor measurements collected at a nearby weather reporting station. However, people in the developed world spend little time outdoors, especially during extreme temperature events. We examined the association between indoor and outdoor temperature and humidity in a range of climates. We measured indoor temperature, apparent temperature, relative humidity, dew point, and specific humidity (a measure of moisture content in air) for one calendar year (2012) in a convenience sample of eight diverse locations ranging from the equatorial region (10 °N) to the Arctic (64 °N). We then compared the indoor conditions to outdoor values recorded at the nearest airport weather station. We found that the shape of the indoor-to-outdoor temperature and humidity relationships varied across seasons and locations. Indoor temperatures showed little variation across season and location. There was large variation in indoor relative humidity between seasons and between locations which was independent of outdoor airport measurements. On the other hand, indoor specific humidity, and to a lesser extent dew point, tracked with outdoor, airport measurements both seasonally and between climates, across a wide range of outdoor temperatures. These results suggest that, in general, outdoor measures of actual moisture content in air better capture indoor conditions than outdoor temperature and relative humidity. Therefore, in studies where water vapor is among the parameters of interest for examining weather-related health effects, outdoor measurements of actual moisture content can be more reliably used as a proxy for indoor exposure than the more commonly examined variables of temperature and relative humidity.

  13. Visual Sample Plan (VSP) Models and Code Verification

    SciTech Connect

    Gilbert, Richard O.; Davidson, James R.; Wilson, John E.; Pulsipher, Brent A.

    2001-03-06

    VSP is an easy to use, visual and graphic software tool being developed to select the right number and location of environmental samples so that the results of statistical tests performed to provide input to environmental decisions have the required confidence and performance. It is a significant help for implementing the 6th and 7th steps of the Data Quality Objectives (DQO) planning process ("Specify Tolerable Limits on Decision Errors" and "Optimize the Design for Obtaining Data," respectively).

  14. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    SciTech Connect

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.
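
    A minimal nested-sampling loop, stripped of the HMC constrained step and SEM gradient estimation described above (plain rejection sampling is used instead), might look as follows; the one-dimensional likelihood and uniform prior are toy placeholders.

      # Minimal nested-sampling sketch: estimate the Bayesian evidence Z for a toy 1-D problem.
      import numpy as np
      from scipy.special import logsumexp

      rng = np.random.default_rng(2)

      def loglike(theta):                      # toy likelihood: Gaussian bump on [0, 1]
          return -0.5 * ((theta - 0.5) / 0.1) ** 2

      n_live, n_iter = 100, 600
      live = rng.uniform(0.0, 1.0, n_live)     # draws from a uniform prior on [0, 1]
      logL = loglike(live)
      logZ, X_prev = -np.inf, 1.0

      for i in range(1, n_iter + 1):
          worst = int(np.argmin(logL))
          X = np.exp(-i / n_live)              # expected remaining prior volume
          logZ = np.logaddexp(logZ, np.log(X_prev - X) + logL[worst])
          X_prev = X
          while True:                          # constrained prior draw by simple rejection;
              cand = rng.uniform(0.0, 1.0)     # the paper replaces this step with HMC + SEM gradients
              if loglike(cand) > logL[worst]:
                  live[worst], logL[worst] = cand, loglike(cand)
                  break

      logZ = np.logaddexp(logZ, np.log(X_prev / n_live) + logsumexp(logL))  # remaining live points
      print("log-evidence estimate:", logZ)    # analytic value is about -1.38 for this toy problem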

  15. Investigations into the low temperature behavior of jet fuels: Visualization, modeling, and viscosity studies

    NASA Astrophysics Data System (ADS)

    Atkins, Daniel L.

    Aircraft operation in arctic regions or at high altitudes exposes jet fuel to temperatures below freeze point temperature specifications. Fuel constituents may solidify and remain within tanks or block fuel system components. Military and scientific requirements have been met with costly, low freeze point specialty jet fuels. Commercial airline interest in polar routes and the use of high altitude unmanned aerial vehicles (UAVs) has spurred interest in the effects of low temperatures and low-temperature additives on jet fuel. The solidification of jet fuel due to freezing is not well understood and limited visualization of fuel freezing existed prior to the research presented in this dissertation. Consequently, computational fluid dynamics (CFD) modeling that simulates jet fuel freezing and model validation were incomplete prior to the present work. The ability to simulate jet fuel freezing is a necessary tool for fuel system designers. An additional impediment to the understanding and simulation of jet fuel freezing has been the absence of published low-temperature thermo-physical properties, including viscosity, which the present work addresses. The dissertation is subdivided into three major segments covering visualization, modeling and validation, and viscosity studies. In the first segment samples of jet fuel, JPTS, kerosene, Jet A and Jet A containing additives, were cooled below their freeze point temperatures in a rectangular, optical cell. Images and temperature data recorded during the solidification process provided information on crystal habit, crystallization behavior, and the influence of the buoyancy-driven flow on freezing. N-alkane composition of the samples was determined. The Jet A sample contained the least n-alkane mass. The cooling of JPTS resulted in the least wax formation while the cooling of kerosene yielded the greatest wax formation. The JPTS and kerosene samples exhibited similar crystallization behavior and crystal habits during

  16. Modelling of temperature and perfusion during scalp cooling

    NASA Astrophysics Data System (ADS)

    Janssen, F. E. M.; Van Leeuwen, G. M. J.; Van Steenhoven, A. A.

    2005-09-01

    Hair loss is a feared side effect of chemotherapy treatment. It may be prevented by cooling the scalp during administration of cytostatics. The supposed mechanism is that by cooling the scalp, both temperature and perfusion are diminished, affecting drug supply and drug uptake in the hair follicle. However, the effect of scalp cooling varies strongly. To gain more insight into the effect of cooling, a computer model has been developed that describes heat transfer in the human head during scalp cooling. Of main interest in this study are the mutual influences of scalp temperature and perfusion during cooling. Results of the standard head model show that the temperature of the scalp skin is reduced from 34.4 °C to 18.3 °C, reducing tissue blood flow to 25%. Based upon variations in both thermal properties and head anatomies found in the literature, a parameter study was performed. The results of this parameter study show that the most important parameters affecting both temperature and perfusion are the perfusion coefficient Q10 and the thermal resistances of both the fat and the hair layer. The variations in the parameter study led to skin temperature ranging from 10.1 °C to 21.8 °C, which in turn reduced relative perfusion to 13% and 33%, respectively.
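
    The perfusion coefficient Q10 mentioned above typically enters such models through a relation of the form w(T) = w_ref * Q10^((T - T_ref)/10); with an assumed Q10 of about 2.4 (a placeholder value, not taken from the paper), this reproduces the reported drop to roughly a quarter of baseline flow when the scalp skin cools from 34.4 °C to 18.3 °C.

      # Sketch: Q10 temperature dependence of relative skin perfusion,
      # w / w_ref = Q10 ** ((T - T_ref) / 10).  Q10 = 2.4 is an assumed value.
      def relative_perfusion(T, T_ref=34.4, Q10=2.4):
          return Q10 ** ((T - T_ref) / 10.0)

      print(f"{relative_perfusion(18.3):.0%}")   # ~24%, close to the 25% quoted above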

  17. Comparison of climate model simulated and observed borehole temperature profiles

    NASA Astrophysics Data System (ADS)

    Gonzalez-Rouco, J. F.; Stevens, M. B.; Beltrami, H.; Goosse, H.; Rath, V.; Zorita, E.; Smerdon, J.

    2009-04-01

    Advances in understanding climate variability through the last millennium lean on simulation and reconstruction efforts. Progress in the integration of both approaches can potentially provide new means of assessing confidence on model projections of future climate change, of constraining the range of climate sensitivity and/or attributing past changes found in proxy evidence to external forcing. This work specifically addresses possible strategies for comparison of paleoclimate model simulations and the information recorded in borehole temperature profiles (BTPs). First efforts have allowed us to design means of comparing model-simulated and observed BTPs in the context of the climate of the last millennium. This can be done by diffusing the simulated temperatures into the ground in order to produce synthetic BTPs that can in turn be assigned to collocated, real BTPs. Results suggest that borehole temperatures at large and regional scales are sensitive to changes in external forcing over the last centuries. The comparison between borehole climate reconstructions and model simulations may also be subjected to non-negligible uncertainties produced by the influence of past glacial and Holocene changes. While the thermal climate influence of the last deglaciation can be found well below 1000 m depth, such changes can potentially exert an influence on our understanding of subsurface climate in the top ca. 500 m. This issue is illustrated in control and externally forced climate simulations of the last millennium with the ECHO-G and LOVECLIM models, respectively.

  18. Geostationary Operational Environmental Satellite (GOES) Gyro Temperature Model

    NASA Technical Reports Server (NTRS)

    Rowe, J. N.; Noonan, C. H.; Garrick, J.

    1996-01-01

    The Geostationary Operational Environmental Satellite (GOES) I/M series of spacecraft are geostationary weather satellites that use the latest in weather imaging technology. The inertial reference unit package onboard consists of three gyroscopes measuring angular velocity along each of the spacecraft's body axes. This digital integrating rate assembly (DIRA) is calibrated and used to maintain spacecraft attitude during orbital delta-V maneuvers. During the early orbit support of GOES-8 (April 1994), the gyro drift rate biases exhibited a large dependency on gyro temperature. This complicated the calibration and introduced errors into the attitude during delta-V maneuvers. Following GOES-8, a model of the DIRA temperature and drift rate bias variation was developed for GOES-9 (May 1995). This model was used to project a value of the DIRA bias to use during the orbital delta-V maneuvers based on the bias change observed as the DIRA warmed up during the calibration. The model also optimizes the yaw reorientation necessary to achieve the correct delta-V pointing attitude. As a result, a higher accuracy was achieved on GOES-9, leading to more efficient delta-V maneuvers and a propellant savings. This paper summarizes: the data observed on GOES-8 and the complications it caused in calibration; the DIRA temperature/drift rate model; and the application and results of the model on GOES-9 support.

  19. Modelling Arctic Ozone Loss: Effects of Temperature on Interannual Variability

    NASA Astrophysics Data System (ADS)

    Podolske, J. R.; Drdla, K.

    2005-12-01

    Observations have shown that Arctic ozone loss is strongly correlated with the extent of cold temperatures over the course of the winter. In order to understand this relationship, a model has been used to simulate Arctic stratospheric evolution and ozone loss for each of the last 14 winters. The model includes detailed polar stratospheric cloud microphysics, allowing realistic calculations of denitrification, and comprehensive chemistry. Using the model, it is possible to quantify the relationships between specific factors (i.e., chlorine activation, denitrification, aerosol loading, and exposure to sunlight), ozone loss, and cold temperatures. An understanding of the mechanisms causing the observed ozone loss trends will allow more reliable estimates of the potential for future ozone loss.

  20. Hall Thruster Modeling with a Given Temperature Profile

    SciTech Connect

    L. Dorf; V. Semenov; Y. Raitses; N.J. Fisch

    2002-06-12

    A quasi one-dimensional steady-state model of the Hall thruster is presented. For a given mass flow rate, magnetic field profile, and discharge voltage, a unique solution can be constructed, assuming that the thruster operates in one of the two regimes: with or without the anode sheath. It is shown that for a given temperature profile, the applied discharge voltage uniquely determines the operating regime; for discharge voltages greater than a certain value, the sheath disappears. That result is obtained over a wide range of incoming neutral velocities, channel lengths and widths, and cathode plane locations. A good correlation between the quasi one-dimensional model and experimental results can be achieved by selecting an appropriate temperature profile. We also show how the presented model can be used to obtain a two-dimensional potential distribution.

  1. Unified constitutive models for high-temperature structural applications

    NASA Technical Reports Server (NTRS)

    Lindholm, U. S.; Chan, K. S.; Bodner, S. R.; Weber, R. M.; Walker, K. P.

    1988-01-01

    Unified constitutive models are characterized by the use of a single inelastic strain rate term for treating all aspects of inelastic deformation, including plasticity, creep, and stress relaxation under monotonic or cyclic loading. The structure of this class of constitutive theory pertinent for high temperature structural applications is first outlined and discussed. The effectiveness of the unified approach for representing high temperature deformation of Ni-base alloys is then evaluated by extensive comparison of experimental data and predictions of the Bodner-Partom and the Walker models. The use of the unified approach for hot section structural component analyses is demonstrated by applying the Walker model in finite element analyses of a benchmark notch problem and a turbine blade problem.

  2. Temperature-Corrected Model of Turbulence in Hot Jet Flows

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Pao, S. Paul; Massey, Steven J.; Elmiligui, Alaa

    2007-01-01

    An improved correction has been developed to increase the accuracy with which certain formulations of computational fluid dynamics predict mixing in shear layers of hot jet flows. The CFD formulations in question are those derived from the Reynolds-averaged Navier-Stokes equations closed by means of a two-equation model of turbulence, known as the k-epsilon model, wherein effects of turbulence are summarized by means of an eddy viscosity. The need for a correction arises because it is well known among specialists in CFD that two-equation turbulence models, which were developed and calibrated for room-temperature, low Mach-number, plane-mixing-layer flows, underpredict mixing in shear layers of hot jet flows. The present correction represents an attempt to account for increased mixing that takes place in jet flows characterized by high gradients of total temperature. This correction also incorporates a commonly accepted, previously developed correction for the effect of compressibility on mixing.
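
    For context (a sketch, not the correction developed in the paper): in the baseline k-epsilon closure the eddy viscosity is built from the turbulent kinetic energy k and its dissipation rate epsilon; the temperature-corrected model modifies this relation for hot jets, and that correction is not reproduced here.

      # Baseline k-epsilon eddy viscosity, mu_t = rho * C_mu * k^2 / eps (C_mu ~ 0.09).
      # The temperature-corrected model of the abstract modifies this relation; the
      # correction itself is not reproduced here. Input values are placeholders.
      def eddy_viscosity(rho, k, eps, c_mu=0.09):
          return rho * c_mu * k * k / eps      # kg/(m s)

      print(eddy_viscosity(rho=1.2, k=10.0, eps=500.0))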

  3. A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models

    ERIC Educational Resources Information Center

    Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.

    2013-01-01

    Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…

  4. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
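
    The entry above concerns an R-based workflow; purely as an illustration of the same Monte Carlo idea (here sketched in Python, with an assumed effect size and alpha level), the sample size can be grown until the simulated power reaches a target.

      # Sketch: Monte Carlo sample-size determination for a simple linear regression.
      # Simulate y = beta*x + noise at a candidate n, record how often the slope is
      # detected (p < .05), and grow n until the empirical power reaches 0.80.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)

      def power(n, beta=0.3, n_sim=2000):
          hits = 0
          for _ in range(n_sim):
              x = rng.normal(size=n)
              y = beta * x + rng.normal(size=n)
              hits += stats.linregress(x, y).pvalue < 0.05
          return hits / n_sim

      n = 20
      while power(n) < 0.80:
          n += 10
      print("approximate required n:", n)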

  5. Skin temperature oscillation model for assessing vasomotion of microcirculation

    NASA Astrophysics Data System (ADS)

    Tang, Yuan-Liang; He, Ying; Shao, Hong-Wei; Mizeva, Irina

    2015-02-01

    It has been shown that there exists a certain correlation between fingertip temperature oscillations and blood flow oscillations. In this work, a porous-media model of the human hand is presented to investigate how the blood flow oscillation in the endothelial frequency band influences fingertip skin temperature oscillations. The porosity, which represents the density of microvessels, is assumed to vary periodically and is a function of the skin temperature. Finite element analysis of skin temperature for a contralateral hand under a cooling test was conducted. Subsequently, wavelet analysis was carried out to extract the temperature oscillations from both the numerical results and the experimental measurements. Furthermore, the oscillations extracted from both numerical analyses and experiments were statistically analyzed to compare the amplitude. The simulation and experimental results show that for subjects in cardiovascular health, the skin temperature fluctuations in the endothelial frequency band decrease during the cooling test and increase gradually after cooling, implying that the assumed porosity variation can represent the vasomotion in the endothelial frequency band.

  6. Numerical modeling of high temperature fracture of metallic composites

    NASA Astrophysics Data System (ADS)

    Cendales, E. D.; García, A.

    2016-02-01

    Mechanical properties of materials are strongly affected by increasing temperature, showing behaviors such as creep that can lead to failure. This article provides a brief theoretical description of the fracture of materials, with emphasis on creep and intergranular creep. Parameters such as creep strain, strain rate, time to failure, and crack-tip displacement were studied for a selected metallic glass at high temperature. The paper presents a numerical model that establishes the mechanical behavior of a metal composite material, the bulk metallic glass Zr52.5Cu18Ni14.5Al10Ti5, in the presence of cracking when the material is subjected to temperatures exceeding 30% of its melting temperature. The results obtained by computer simulation correlate with the behavior of the material observed in creep tests. From the results we conclude that the mechanical properties of the material generally do not undergo major changes at high temperatures. However, at temperatures greater than 650 °C, the application of stress during creep leads to failure in this kind of material.

  7. Modelling spoilage of fresh turbot and evaluation of a time-temperature integrator (TTI) label under fluctuating temperature.

    PubMed

    Nuin, Maider; Alfaro, Begoña; Cruz, Ziortza; Argarate, Nerea; George, Susie; Le Marc, Yvan; Olley, June; Pin, Carmen

    2008-10-31

    Kinetic models were developed to predict the microbial spoilage and the sensory quality of fresh fish and to evaluate the efficiency of a commercial time-temperature integrator (TTI) label, Fresh Check®, to monitor shelf life. Farmed turbot (Psetta maxima) samples were packaged in PVC film and stored at 0, 5, 10 and 15 degrees C. Microbial growth and sensory attributes were monitored at regular time intervals. The response of the Fresh Check device was measured at the same temperatures during the storage period. The sensory perception was quantified according to a global sensory indicator obtained by principal component analysis as well as to the Quality Index Method, QIM, as described by Rahman and Olley [Rahman, H.A., Olley, J., 1984. Assessment of sensory techniques for quality assessment of Australian fish. CSIRO Tasmanian Regional Laboratory. Occasional paper n. 8. Available from the Australian Maritime College library. Newnham. Tasmania]. Both methods were found equally valid to monitor the loss of sensory quality. The maximum specific growth rate of spoilage bacteria, the rate of change of the sensory indicators and the rate of change of the colour measurements of the TTI label were modelled as a function of temperature. The temperature had a similar effect on the bacteria, sensory and Fresh Check kinetics. At the time of sensory rejection, the bacterial load was ca. 10^5-10^6 cfu/g. The end of shelf life indicated by the Fresh Check label was close to the sensory rejection time. The performance of the models was validated under fluctuating temperature conditions by comparing the predicted and measured values for all microbial, sensory and TTI responses. The models have been implemented in a Visual Basic add-in for Excel called "Fish Shelf Life Prediction (FSLP)". This program predicts sensory acceptability and growth of spoilage bacteria in fish and the response of the TTI at constant and fluctuating temperature conditions. The program is freely
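
    The abstract does not state which secondary model was fitted to the growth rates; purely for illustration, a common choice in predictive food microbiology is the Ratkowsky square-root model, sketched below with placeholder parameter values.

      # Illustration only (not the authors' model): Ratkowsky square-root secondary model
      # for the maximum specific growth rate, sqrt(mu_max) = b * (T - T_min).
      def mu_max(T, b=0.03, T_min=-8.0):    # b, T_min are placeholder values
          return (b * (T - T_min)) ** 2     # 1/h

      for T in (0, 5, 10, 15):
          print(T, "C:", round(mu_max(T), 3), "1/h")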

  8. On effective temperature in network models of collective behavior.

    PubMed

    Porfiri, Maurizio; Ariel, Gil

    2016-04-01

    Collective behavior of self-propelled units is studied analytically within the Vectorial Network Model (VNM), a mean-field approximation of the well-known Vicsek model. We propose a dynamical systems framework to study the stochastic dynamics of the VNM in the presence of general additive noise. We establish that a single parameter, which is a linear function of the circular mean of the noise, controls the macroscopic phase of the system-ordered or disordered. By establishing a fluctuation-dissipation relation, we posit that this parameter can be regarded as an effective temperature of collective behavior. The exact critical temperature is obtained analytically for systems with small connectivity, equivalent to low-density ensembles of self-propelled units. Numerical simulations are conducted to demonstrate the applicability of this new notion of effective temperature to the Vicsek model. The identification of an effective temperature of collective behavior is an important step toward understanding order-disorder phase transitions, informing consistent coarse-graining techniques and explaining the physics underlying the emergence of collective phenomena. PMID:27131488

  9. Shear modeling: thermoelasticity at high temperature and pressure for tantalum

    SciTech Connect

    Orlikowski, D; Soderlind, P; Moriarty, J A

    2004-12-06

    For large-scale constitutive strength models the shear modulus is typically assumed to be linearly dependent on temperature. However, for materials compressed beyond the Hugoniot or in regimes where there is very little experimental data, accurate and validated models must be used. To this end, we present here a new methodology that fully accounts for electron- and ion-thermal contributions to the elastic moduli over broad ranges of temperature (<20,000 K) and pressure (<10 Mbar). In this approach, the full potential linear muffin-tin orbital (FP-LMTO) method for the cold and electron-thermal contributions is closely coupled with ion-thermal contributions. For the latter two separate approaches are used. In one approach, the quasi-harmonic, ion-thermal contribution is obtained through a Brillouin zone sum of strain derivatives of the phonons, and in the other a full anharmonic ion-thermal contribution is obtained directly through Monte Carlo (MC) canonical distribution averages of strain derivatives on the multi-ion potential itself. Both approaches use quantum-based interatomic potentials derived from model generalized pseudopotential theory (MGPT). For tantalum, the resulting elastic moduli are compared to available ultrasonic measurements and diamond-anvil-cell compression experiments. Over the range of temperature and pressure considered, the results are then used in a polycrystalline averaging for the shear modulus to assess the linear temperature dependence for Ta.

  10. Numerical Modeling of High-Temperature Corrosion Processes

    NASA Technical Reports Server (NTRS)

    Nesbitt, James A.

    1995-01-01

    Numerical modeling of the diffusional transport associated with high-temperature corrosion processes is reviewed. These corrosion processes include external scale formation and internal subscale formation during oxidation, coating degradation by oxidation and substrate interdiffusion, carburization, sulfidation and nitridation. The studies that are reviewed cover such complexities as concentration-dependent diffusivities, cross-term effects in ternary alloys, and internal precipitation where several compounds of the same element form (e.g., carbides of Cr) or several compounds exist simultaneously (e.g., carbides containing varying amounts of Ni, Cr, Fe or Mo). In addition, the studies involve a variety of boundary conditions that vary with time and temperature. Finite-difference (F-D) techniques have been applied almost exclusively to model either the solute or corrodant transport in each of these studies. Hence, the paper first reviews the use of F-D techniques to develop solutions to the diffusion equations with various boundary conditions appropriate to high-temperature corrosion processes. The bulk of the paper then reviews various F-D modeling studies of diffusional transport associated with high-temperature corrosion.
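
    The building block behind the finite-difference treatments surveyed above is an explicit update of Fick's second law; the minimal constant-diffusivity sketch below (with placeholder values, and none of the reviewed complexities such as concentration-dependent diffusivities or multiphase precipitation) illustrates the scheme and its stability constraint.

      # Sketch: explicit finite-difference solution of Fick's second law,
      # dC/dt = D * d2C/dx2, with a fixed surface concentration (e.g. a corrodant
      # held at the surface) and an initially clean substrate. Constant D only.
      import numpy as np

      D = 1.0e-14             # m^2/s, placeholder diffusivity
      dx = 1.0e-6             # m
      dt = 0.4 * dx * dx / D  # respects the explicit stability limit dt <= dx^2 / (2D)
      nx, nt = 200, 5000

      C = np.zeros(nx)
      C[0] = 1.0              # fixed (normalized) surface concentration

      for _ in range(nt):
          C[1:-1] += D * dt / dx**2 * (C[2:] - 2 * C[1:-1] + C[:-2])
          C[0], C[-1] = 1.0, 0.0       # boundary conditions

      print("grid index where C drops below 0.01:", np.argmax(C < 0.01))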

  11. On effective temperature in network models of collective behavior

    NASA Astrophysics Data System (ADS)

    Porfiri, Maurizio; Ariel, Gil

    2016-04-01

    Collective behavior of self-propelled units is studied analytically within the Vectorial Network Model (VNM), a mean-field approximation of the well-known Vicsek model. We propose a dynamical systems framework to study the stochastic dynamics of the VNM in the presence of general additive noise. We establish that a single parameter, which is a linear function of the circular mean of the noise, controls the macroscopic phase of the system—ordered or disordered. By establishing a fluctuation-dissipation relation, we posit that this parameter can be regarded as an effective temperature of collective behavior. The exact critical temperature is obtained analytically for systems with small connectivity, equivalent to low-density ensembles of self-propelled units. Numerical simulations are conducted to demonstrate the applicability of this new notion of effective temperature to the Vicsek model. The identification of an effective temperature of collective behavior is an important step toward understanding order-disorder phase transitions, informing consistent coarse-graining techniques and explaining the physics underlying the emergence of collective phenomena.

  12. Thermal modeling and temperature measurements in optoelectronic waveguide devices

    NASA Astrophysics Data System (ADS)

    Allard, M.; Boudreau, Marcel G.; Masut, Remo A.

    1998-12-01

    Optoelectronic devices are particularly sensitive to temperature changes induced by light absorption and current flow. In order to study the thermal issues arising in the Mach-Zehnder optical modulator manufactured by Nortel, a non-linear finite-element thermal model of the device was constructed, which computes the internal temperature as a function of the applied voltage and optical power in the waveguide. An experimental technique was also developed, in which liquid crystals are used to measure the temperature on the device surface. The model predictions and the experimental results were found to agree well over wide ranges of optical power and voltage. The model and the technique have produced evidence of a thermal cross-talk between an integrated laser and the modulator: the peak internal temperature inside the modulator is higher in integrated devices than in the stand-alone configuration for identical voltage and optical power. Because of the desire to integrate multiple devices on a common substrate and the continuous increase of the optical powers in optical fiber systems, thermal issues will only become more important in future systems.

  13. Shock structure and temperature overshoot in macroscopic multi-temperature model of mixtures

    NASA Astrophysics Data System (ADS)

    Madjarević, Damir; Ruggeri, Tommaso; Simić, Srboljub

    2014-10-01

    The paper discusses the shock structure in a macroscopic multi-temperature model of gaseous mixtures, recently established within the framework of extended thermodynamics. The study is restricted to weak and moderate shocks in a binary mixture of ideal gases with negligible viscosity and heat conductivity. The model predicts the existence of a temperature overshoot of the heavier constituent, like more sophisticated approaches, but also highlights its non-monotonic behavior, not documented in other studies. This phenomenon is explained as a consequence of weak energy exchange between the constituents, due either to a large mass difference or to a large rarefaction of the mixture. In the range of small Mach numbers it is also shown that the shock thickness (or equivalently, the inverse of the Knudsen number) decreases with increasing Mach number, as well as when the mixture tends to behave like a single-component gas (small mass difference and/or presence of one constituent in traces).

  14. Shock structure and temperature overshoot in macroscopic multi-temperature model of mixtures

    SciTech Connect

    Madjarević, Damir Simić, Srboljub; Ruggeri, Tommaso

    2014-10-15

    The paper discusses the shock structure in a macroscopic multi-temperature model of gaseous mixtures, recently established within the framework of extended thermodynamics. The study is restricted to weak and moderate shocks in a binary mixture of ideal gases with negligible viscosity and heat conductivity. The model predicts the existence of a temperature overshoot of the heavier constituent, like more sophisticated approaches, but also highlights its non-monotonic behavior, not documented in other studies. This phenomenon is explained as a consequence of weak energy exchange between the constituents, due either to a large mass difference or to a large rarefaction of the mixture. In the range of small Mach numbers it is also shown that the shock thickness (or equivalently, the inverse of the Knudsen number) decreases with increasing Mach number, as well as when the mixture tends to behave like a single-component gas (small mass difference and/or presence of one constituent in traces).

  15. Directional infrared temperature and emissivity of vegetation: Measurements and models

    NASA Technical Reports Server (NTRS)

    Norman, J. M.; Castello, S.; Balick, L. K.

    1994-01-01

    Directional thermal radiance from vegetation depends on many factors, including the architecture of the plant canopy, thermal irradiance, emissivity of the foliage and soil, view angle, slope, and the kinetic temperature distribution within the vegetation-soil system. A one-dimensional model, which includes the influence of topography, indicates that thermal emissivity of vegetation canopies may remain constant with view angle, or emissivity may increase or decrease as view angle from nadir increases. Typically, variations of emissivity with view angle are less than 0.01. As view angle increases away from nadir, directional infrared canopy temperature usually decreases but may remain nearly constant or even increase. Variations in directional temperature with view angle may be 5 °C or more. Model predictions of directional emissivity are compared with field measurements in corn canopies and over a bare soil using a method that requires two infrared thermometers, one sensitive to the 8 to 14 micrometer wavelength band and a second to the 14 to 22 micrometer band. After correction for CO2 absorption by the atmosphere, a directional canopy emissivity can be obtained as a function of view angle in the 8 to 14 micrometer band to an accuracy of about 0.005. Modeled and measured canopy emissivities for corn varied slightly with view angle (0.990 at nadir and 0.982 at 75 deg view zenith angle) and did not appear to vary significantly with view angle for the bare soil. Canopy emissivity is generally nearer to unity than leaf emissivity. At high spectral resolution, canopy thermal emissivity may vary by 0.02 with wavelength even though leaf emissivity may vary by 0.07. The one-dimensional model provides reasonably accurate predictions of infrared temperature and can be used to study the dependence of infrared temperature on various plant, soil, and environmental factors.

  16. Effects of high temperature on different restorations in forensic identification: Dental samples and mandible

    PubMed Central

    Patidar, Kalpana A; Parwani, Rajkumar; Wanjari, Sangeeta

    2010-01-01

    Introduction: The forensic odontologist strives to utilize the charred human dentition throughout each stage of dental evaluation; restorations are as unique as fingerprints, and their radiographic morphology as well as the types of filling materials are often the main features for identification. The ability to detect residual restorative material and the composition of an unrecovered adjacent restoration is a valuable tool-mark in the presumptive identification of the dentition of a burned victim. Gold, silver amalgam, silicate restorations, and so on have different resistances to prolonged high temperature; therefore, the identification of burned bodies can be correlated with adequate qualities and quantities of the traces. Most of the dental examination relies heavily on the presence of the restoration as well as the relationship of one dental structure to another. This greatly narrows the search for the final identification that is based on postmortem data. Aim: The purpose of this study is to examine the resistance of teeth and different restorative materials, and the mandible, to variable temperature and duration, for the purpose of identification. Materials and Methods: The study was conducted on 72 extracted teeth which were divided into six groups of 12 teeth each based on the type of restorative material (Group 1 - unrestored teeth, group 2 - teeth restored with Zn3(PO4)2, group 3 - with silver amalgam, group 4 - with glass ionomer cement, group 5 - Ni-Cr metal crown, group 6 - metal ceramic crown) and two specimens of the mandible. The effect of incineration at 400°C (5 mins, 15 mins, 30 mins) and 1100°C (15 mins) was studied. Results: Damage to the teeth subjected to variable temperatures and time can be categorized as intact (no damage), scorched (superficially parched and discolored), charred (reduced to carbon by incomplete combustion) and incinerated (burned to ashes). PMID:21189989

  17. Large Sample Hydrology : Building an international sample of watersheds to improve consistency and robustness of model evaluation

    NASA Astrophysics Data System (ADS)

    Mathevet, Thibault; Kumar, Rohini; Gupta, Hoshin; Vaze, Jai; Andréassian, Vazken

    2015-04-01

    This poster introduces the aims of the Large Sample Hydrology working group (LSH-WG) of the new IAHS Panta Rhei decade (2013-2022). The aim of the LSH-WG is to promote large sample hydrology, as discussed by Gupta et al. (2014) and to invite the community to collaborate on building and sharing a comprehensive and representative world-wide sample of watershed datasets. By doing so, LSH will allow the community to work towards 'hydrological consistency' (Martinez and Gupta, 2011) as a basis for hydrologic model development and evaluation, thereby increasing robustness of the model evaluation process. Classical model evaluation metrics based on 'robust statistics' are needed, but clearly not sufficient: multi-criteria assessments based on multiple hydrological signatures can help to better characterize hydrological functioning. Further, large-sample data sets can greatly facilitate: (i) improved understanding through rigorous testing and comparison of competing model hypothesis and structures, (ii) improved robustness of generalizations through statistical analyses that minimize the influence of outliers and case-specific studies, (iii) classification, regionalization and model transfer across a broad diversity of hydrometeorological contexts, and (iv) estimation of predictive uncertainties at a location and across locations (Mathevet et al., 2006; Andréassian et al., 2009; Gupta et al., 2014) References Andréassian, V., Perrin, C., Berthet, L., Le Moine, N., Lerat, J., Loumagne, C., Oudin, L., Mathevet, T., Ramos, M. H., and Valéry, A.: Crash tests for a standardized evaluation of hydrological models, Hydrology and Earth System Sciences, 1757-1764, 2009. Gupta, H. V., Perrin, C., Blöschl, G., Montanari, A., Kumar, R., Clark, M., and Andréassian, V.: Large-sample hydrology: a need to balance depth with breadth, Hydrol. Earth Syst. Sci., 18, 463-477, doi:10.5194/hess-18-463-2014, 2014. Martinez, G. F., and H. V.Gupta (2011), Hydrologic consistency as a basis for

  18. Effects of Low-Temperature Plasma-Sterilization on Mars Analog Soil Samples Mixed with Deinococcus radiodurans.

    PubMed

    Schirmack, Janosch; Fiebrandt, Marcel; Stapelmann, Katharina; Schulze-Makuch, Dirk

    2016-01-01

    We used Ar plasma-sterilization at a temperature below 80 °C to examine its effects on the viability of microorganisms when intermixed with tested soil. Due to a relatively low temperature, this method is not thought to affect the properties of a soil, particularly its organic component, to a significant degree. The method has previously been shown to work well on spacecraft parts. The selected microorganism for this test was Deinococcus radiodurans R1, which is known for its remarkable resistance to radiation effects. Our results showed a reduction in microbial counts after applying a low temperature plasma, but not to a degree suitable for a sterilization of the soil. Even an increase of the treatment duration from 1.5 to 45 min did not achieve satisfying results, but only resulted in a mean cell reduction rate of 75% compared to the untreated control samples. PMID:27240407

  19. Effects of Low-Temperature Plasma-Sterilization on Mars Analog Soil Samples Mixed with Deinococcus radiodurans

    PubMed Central

    Schirmack, Janosch; Fiebrandt, Marcel; Stapelmann, Katharina; Schulze-Makuch, Dirk

    2016-01-01

    We used Ar plasma-sterilization at a temperature below 80 °C to examine its effects on the viability of microorganisms when intermixed with tested soil. Due to a relatively low temperature, this method is not thought to affect the properties of a soil, particularly its organic component, to a significant degree. The method has previously been shown to work well on spacecraft parts. The selected microorganism for this test was Deinococcus radiodurans R1, which is known for its remarkable resistance to radiation effects. Our results showed a reduction in microbial counts after applying a low temperature plasma, but not to a degree suitable for a sterilization of the soil. Even an increase of the treatment duration from 1.5 to 45 min did not achieve satisfying results, but only resulted in a mean cell reduction rate of 75% compared to the untreated control samples. PMID:27240407

  20. Nonparametric Spatial Models for Extremes: Application to Extreme Temperature Data.

    PubMed

    Fuentes, Montserrat; Henry, John; Reich, Brian

    2013-03-01

    Estimating the probability of extreme temperature events is difficult because of limited records across time and the need to extrapolate the distributions of these events, as opposed to just the mean, to locations where observations are not available. Another related issue is the need to characterize the uncertainty in the estimated probability of extreme events at different locations. Although the tools for statistical modeling of univariate extremes are well-developed, extending these tools to model spatial extreme data is an active area of research. In this paper, in order to make inference about spatial extreme events, we introduce a new nonparametric model for extremes. We present a Dirichlet-based copula model that is a flexible alternative to parametric copula models such as the normal and t-copula. The proposed modelling approach is fitted using a Bayesian framework that allows us to take into account different sources of uncertainty in the data and models. We apply our methods to annual maximum temperature values in the east-south-central United States. PMID:24058280

  1. Modeling Regolith Temperatures and Volatile Ice Processes (Invited)

    NASA Astrophysics Data System (ADS)

    Mellon, M. T.

    2013-12-01

    Surface and subsurface temperatures are an important tool for exploring the distribution and dynamics of volatile ices on and within planetary regoliths. I will review thermal-analysis approaches and recent applications in the studies of volatile ice processes. Numerical models of regolith temperatures allow us to examine the response of ices to periodic and secular changes in heat sources such as insolation. Used in conjunction with spatially and temporally distributed remotely-sensed temperatures, numerical models can: 1) constrain the stability and dynamics of volatile ices; 2) define the partitioning between phases of ice, gas, liquid, and adsorbate; and 3) in some instances be used to probe the distribution of ice hidden from view beneath the surface. The vapor pressure of volatile ices (such as water, carbon dioxide, and methane) depends exponentially on temperature. Small changes in temperature can result in transitions between stable phases. Cyclic temperatures and the propagation of thermal waves into the subsurface can produce a strong hysteresis in the population and partitioning of various phases (such as between ice, vapor, and adsorbate) and result in bulk transport. Condensation of ice will also have a pronounced effect on the thermal properties of otherwise loose particulate regolith. Cementing grains at their contacts through ice deposition will increase the thermal conductivity, and may enhance the stability of additional ice. Likewise sintering of grains within a predominantly icy regolith will increase the thermal conductivity. Subsurface layers that result from ice redistribution can be discriminated by remote sensing when combined with numerical modeling. Applications of these techniques include modeling of seasonal carbon dioxide frosts on Mars, predicting and interpreting the subsurface ice distribution on Mars and in Antarctica, and estimating the current depth of ice-rich permafrost on Mars. Additionally, understanding cold trapping ices
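
    The exponential dependence of vapor pressure on temperature noted above is commonly captured by a Clausius-Clapeyron expression; a sketch for water ice, referenced to the triple point and assuming a constant sublimation enthalpy (an approximation), is given below.

      # Sketch: Clausius-Clapeyron estimate of the sublimation vapor pressure of
      # water ice, p(T) = p_tp * exp(-(L/R) * (1/T - 1/T_tp)), with constant L.
      import math

      L = 51000.0                   # J/mol, approximate sublimation enthalpy (assumed constant)
      R = 8.314                     # J/(mol K)
      T_tp, p_tp = 273.16, 611.0    # triple point of water (K, Pa)

      def vapor_pressure(T):
          return p_tp * math.exp(-(L / R) * (1.0 / T - 1.0 / T_tp))

      for T in (150, 170, 190, 210):
          print(T, "K:", f"{vapor_pressure(T):.2e} Pa")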

  2. Elevated temperature alters carbon cycling in a model microbial community

    NASA Astrophysics Data System (ADS)

    Mosier, A.; Li, Z.; Thomas, B. C.; Hettich, R. L.; Pan, C.; Banfield, J. F.

    2013-12-01

    Earth's climate is regulated by biogeochemical carbon exchanges between the land, oceans and atmosphere that are chiefly driven by microorganisms. Microbial communities are therefore indispensable to the study of carbon cycling and its impacts on the global climate system. In spite of the critical role of microbial communities in carbon cycling processes, microbial activity is currently minimally represented or altogether absent from most Earth System Models. Method development and hypothesis-driven experimentation on tractable model ecosystems of reduced complexity, as presented here, are essential for building molecularly resolved, benchmarked carbon-climate models. Here, we use chemoautotrophic acid mine drainage biofilms as a model community to determine how elevated temperature, a key parameter of global climate change, regulates the flow of carbon through microbial-based ecosystems. This study represents the first community proteomics analysis using tandem mass tags (TMT), which enable accurate, precise, and reproducible quantification of proteins. We compare protein expression levels of biofilms growing over a narrow temperature range expected to occur with predicted climate changes. We show that elevated temperature leads to up-regulation of proteins involved in amino acid metabolism and protein modification, and down-regulation of proteins involved in growth and reproduction. Closely related bacterial genotypes differ in their response to temperature: Elevated temperature represses carbon fixation by two Leptospirillum genotypes, whereas carbon fixation is significantly up-regulated at higher temperature by a third closely related genotypic group. Leptospirillum group III bacteria are more susceptible to viral stress at elevated temperature, which may lead to greater carbon turnover in the microbial food web through the release of viral lysate. Overall, this proteogenomics approach revealed the effects of climate change on carbon cycling pathways and other

  3. Modeling Lunar Borehole Temperature in order to Reconstruct Historical Total Solar Irradiance and Estimate Surface Temperature in Permanently Shadowed Regions

    NASA Astrophysics Data System (ADS)

    Wen, G.; Cahalan, R. F.; Miyahara, H.; Ohmura, A.

    2007-12-01

    The Moon is an ideal place to reconstruct historical total solar irradiance (TSI). With undisturbed lunar surface albedo and the very low thermal diffusivity of lunar regolith, changes in solar input lead to changes in lunar surface temperature that diffuse downward to be recorded in the temperature profile in the near-surface layer. Using regolith thermal properties from Apollo, we model the heat transfer in the regolith layer, and compare modeled surface temperature to Apollo observations to check model performance. Using as alternative input scenarios two reconstructed TSI time series from 1610 to 2000 (Lean, 2000; Wang, Lean, and Sheeley 2005), we conclude that the two scenarios can be distinguished by detectable differences in regolith temperature, with a peak difference of about 10 mK occurring at a depth of about 10 m (Miyahara et al., 2007). The possibility that water ice exists in permanently shadowed areas near the lunar poles (Nozette et al., 1997; Spudis et al., 1998) makes it of interest to estimate surface temperature in such dark regions. "Turning off" the Sun in our time-dependent model, we found it would take several hundred years for the surface temperature to drop from ~100 K immediately after sunset down to a nearly constant equilibrium temperature of about 24-38 K, with the range determined by the range of possible input from Earth, from 0 W/m2 without Earth visible, up to about 0.1 W/m2 at maximum Earth phase. A simple equilibrium model (e.g., Huang 2007) is inappropriate to relate the Apollo-observed nighttime temperature to Earth's radiation budget, given the long multi-centennial time scale needed for equilibration of the lunar surface layer after sunset. Although our results provide the key mechanisms for reconstructing historical TSI, further research is required to account for topography of lunar surfaces, and new measurements of regolith thermal properties will also be needed once a new base of operations is
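
    To make the diffusion argument concrete, here is a minimal one-dimensional heat-conduction sketch for a regolith column with a periodically forced surface temperature. The diffusivity, grid, and forcing amplitude are illustrative assumptions, not the Apollo-derived regolith properties used in the study.

```python
# Explicit finite-difference solution of dT/dt = kappa * d2T/dz2 in a regolith
# column, with a sinusoidal surface temperature and an insulated lower boundary.
import numpy as np

kappa = 1e-8                      # thermal diffusivity, m^2/s (assumed)
dz, nz = 0.01, 200                # grid spacing (m) and number of nodes (2 m column)
dt = 0.4 * dz**2 / kappa          # time step within the explicit stability limit
period = 29.5 * 86400.0           # one lunar day, s

T = np.full(nz, 250.0)            # initial temperature, K
for step in range(int(5 * period / dt)):
    t = step * dt
    T[0] = 250.0 + 100.0 * np.sin(2.0 * np.pi * t / period)   # forced surface
    T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                 # zero-flux lower boundary
print(f"temperature at 0.1 m depth after 5 cycles: {T[10]:.1f} K")
```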

  4. Modeling of Boehmite Leaching from Actual Hanford High-Level Waste Samples

    SciTech Connect

    Peterson, Reid A.; Lumetta, Gregg J.; Rapko, Brian M.; Poloski, Adam P.

    2007-06-27

    The Department of Energy plans to vitrify approximately 60,000 metric tons of high level waste sludge from underground storage tanks at the Hanford Nuclear Reservation. To reduce the volume of high level waste requiring treatment, a goal has been set to remove about 90 percent of the aluminum, which comprises nearly 70 percent of the sludge. Aluminum in the form of gibbsite and sodium aluminate can be easily dissolved by washing the waste stream with caustic, but boehmite, which comprises nearly half of the total aluminum, is more resistant to caustic dissolution and requires higher treatment temperatures and hydroxide concentrations. In this work, the dissolution kinetics of aluminum species during caustic leaching of actual Hanford high level waste samples is examined. The experimental results are used to develop a shrinking core model that provides a basis for prediction of dissolution dynamics from known process temperature and hydroxide concentration. This model is further developed to include the effects of particle size polydispersity, which is found to strongly influence the rate of dissolution.
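
    The sketch below illustrates the shrinking-core idea in its simplest, surface-reaction-controlled form for a single spherical particle; the initial radius and rate constant are illustrative assumptions, not the fitted parameters (or the temperature and hydroxide dependence, or the particle-size polydispersity) of the report.

```python
# Shrinking-core dissolution of one spherical particle: if the surface reaction
# controls, the radius shrinks linearly in time and the dissolved fraction is
# 1 - (r/r0)^3.
import numpy as np

def fraction_dissolved(t_hours, r0_um=5.0, k_um_per_h=0.05):
    """Dissolved fraction for r(t) = max(r0 - k*t, 0)."""
    r = np.maximum(r0_um - k_um_per_h * t_hours, 0.0)
    return 1.0 - (r / r0_um) ** 3

for t in (0, 8, 24, 72):
    print(f"t = {t:3d} h   dissolved fraction = {fraction_dissolved(t):.2f}")
```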

  5. Modeling stream temperature in the Anthropocene: An earth system modeling approach

    NASA Astrophysics Data System (ADS)

    Li, Hong-Yi; Ruby Leung, L.; Tesfa, Teklu; Voisin, Nathalie; Hejazi, Mohamad; Liu, Lu; Liu, Ying; Rice, Jennie; Wu, Huan; Yang, Xiaofan

    2015-12-01

    A new large-scale stream temperature model has been developed within the Community Earth System Model (CESM) framework. The model is coupled with the Model for Scale Adaptive River Transport (MOSART) that represents river routing and a water management model (WM) that represents the effects of reservoir operations and water withdrawals on flow regulation. The coupled models allow the impacts of reservoir operations and withdrawals on stream temperature to be explicitly represented in a physically based and consistent way. The models have been applied to the Contiguous United States driven by observed meteorological forcing. Including water management in the models improves the agreement between the simulated and observed streamflow at a large number of stream gauge stations. It is then shown that the model is capable of reproducing stream temperature spatiotemporal variation satisfactorily by comparing against the observed data from over 320 USGS stations. Both climate and water management are found to have important influence on the spatiotemporal patterns of stream temperature. Furthermore, it is quantitatively estimated that reservoir operation could cool down stream temperature in the summer low-flow season (August-October) by as much as 1-2 °C due to enhanced low-flow conditions, which has important implications for aquatic ecosystems. Sensitivity of the simulated stream temperature to input data and reservoir operation rules used in the WM model motivates future directions to address some limitations in the current modeling framework.

  6. Modeling stream temperature in the Anthropocene: An earth system modeling approach

    SciTech Connect

    Li, Hongyi; Leung, Lai-Yung R.; Tesfa, Teklu K.; Voisin, Nathalie; Hejazi, Mohamad I.; Liu, Lu; Liu, Ying; Rice, Jennie S.; Wu, Huan; Yang, Xiaofan

    2015-10-29

    A new large-scale stream temperature model has been developed within the Community Earth System Model (CESM) framework. The model is coupled with the Model for Scale Adaptive River Transport (MOSART) that represents river routing and a water management model (WM) that represents the effects of reservoir operations and water withdrawals on flow regulation. The coupled models allow the impacts of reservoir operations and withdrawals on stream temperature to be explicitly represented in a physically based and consistent way. The models have been applied to the Contiguous United States driven by observed meteorological forcing. It is shown that the model is capable of reproducing stream temperature spatiotemporal variation satisfactorily by comparison against the observed streamflow from over 320 USGS stations. Including water management in the models improves the agreement between the simulated and observed streamflow at a large number of stream gauge stations. Both climate and water management are found to have important influence on the spatiotemporal patterns of stream temperature. More interestingly, it is quantitatively estimated that reservoir operation could cool down stream temperature in the summer low-flow season (August-October) by as much as 1-2 °C over many places, as water management generally mitigates low flow, which has important implications for aquatic ecosystems. Sensitivity of the simulated stream temperature to input data and reservoir operation rules used in the WM model motivates future directions to address some limitations in the current modeling framework.

  7. Modeling and Compensating Temperature-Dependent Non-Uniformity Noise in IR Microbolometer Cameras.

    PubMed

    Wolf, Alejandro; Pezoa, Jorge E; Figueroa, Miguel

    2016-01-01

    Images rendered by uncooled microbolometer-based infrared (IR) cameras are severely degraded by the spatial non-uniformity (NU) noise. The NU noise imposes a fixed-pattern over the true images, and the intensity of the pattern changes with time due to the temperature instability of such cameras. In this paper, we present a novel model and a compensation algorithm for the spatial NU noise and its temperature-dependent variations. The model separates the NU noise into two components: a constant term, which corresponds to a set of NU parameters determining the spatial structure of the noise, and a dynamic term, which scales linearly with the fluctuations of the temperature surrounding the array of microbolometers. We use a black-body radiator and samples of the temperature surrounding the IR array to offline characterize both the constant and the temperature-dependent NU noise parameters. Next, the temperature-dependent variations are estimated online using both a spatially uniform Hammerstein-Wiener estimator and a pixelwise least mean squares (LMS) estimator. We compensate for the NU noise in IR images from two long-wave IR cameras. Results show an excellent NU correction performance and a root mean square error of less than 0.25 °C, when the array's temperature varies by approximately 15 °C. PMID:27447637
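
    The pixelwise least-mean-squares idea can be sketched in a few lines: for each pixel, a scaling coefficient is adapted so that the coefficient times the measured temperature fluctuation tracks the residual fixed-pattern noise. The array size, step size, and noise statistics below are illustrative assumptions, and the constant NU term is taken as already calibrated offline, as described in the abstract.

```python
# Pixelwise LMS estimation of the temperature-dependent gain of the NU noise.
import numpy as np

rng = np.random.default_rng(1)
h, w, n_frames = 64, 64, 500
true_const = rng.normal(0.0, 1.0, (h, w))   # constant NU pattern (counts), assumed known
true_gain = rng.normal(0.5, 0.1, (h, w))    # per-pixel temperature-dependent gain
a_hat = np.zeros((h, w))                    # LMS estimate of that gain
mu = 0.05                                   # LMS step size

for k in range(n_frames):
    dT = 5.0 * np.sin(2.0 * np.pi * k / 200.0)          # FPA temperature drift (K)
    scene = 100.0                                        # flat black-body scene
    frame = scene + true_const + true_gain * dT + rng.normal(0.0, 0.1, (h, w))
    residual = frame - scene - true_const                # constant term removed offline
    error = residual - a_hat * dT
    a_hat += mu * error * dT                             # per-pixel LMS update

print(f"mean absolute gain error: {np.abs(a_hat - true_gain).mean():.3f}")
```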

  8. Modeling and Compensating Temperature-Dependent Non-Uniformity Noise in IR Microbolometer Cameras

    PubMed Central

    Wolf, Alejandro; Pezoa, Jorge E.; Figueroa, Miguel

    2016-01-01

    Images rendered by uncooled microbolometer-based infrared (IR) cameras are severely degraded by the spatial non-uniformity (NU) noise. The NU noise imposes a fixed-pattern over the true images, and the intensity of the pattern changes with time due to the temperature instability of such cameras. In this paper, we present a novel model and a compensation algorithm for the spatial NU noise and its temperature-dependent variations. The model separates the NU noise into two components: a constant term, which corresponds to a set of NU parameters determining the spatial structure of the noise, and a dynamic term, which scales linearly with the fluctuations of the temperature surrounding the array of microbolometers. We use a black-body radiator and samples of the temperature surrounding the IR array to offline characterize both the constant and the temperature-dependent NU noise parameters. Next, the temperature-dependent variations are estimated online using both a spatially uniform Hammerstein-Wiener estimator and a pixelwise least mean squares (LMS) estimator. We compensate for the NU noise in IR images from two long-wave IR cameras. Results show an excellent NU correction performance and a root mean square error of less than 0.25 °C, when the array's temperature varies by approximately 15 °C. PMID:27447637

  9. Quasi-steady model for predicting temperature of aqueous foams circulating in geothermal wellbores

    SciTech Connect

    Blackwell, B.F.; Ortega, A.

    1983-01-01

    A quasi-steady model has been developed for predicting the temperature profiles of aqueous foams circulating in geothermal wellbores. The model assumes steady one-dimensional incompressible flow in the wellbore; heat transfer by conduction from the geologic formation to the foam is one-dimensional radially and time-dependent. The vertical temperature distribution in the undisturbed geologic formation is assumed to be composed of two linear segments. For constant values of the convective heat-transfer coefficient, a closed-form analytical solution is obtained. It is demonstrated that the Prandtl number of aqueous foams is large (1000 to 5000); hence, a fully developed temperature profile may not exist for representative drilling applications. Existing convective heat-transfer-coefficient solutions are adapted to aqueous foams. The simplified quasi-steady model is successfully compared with a more-sophisticated finite-difference computer code. Sample temperature-profile calculations are presented for representative values of the primary parameters. For a 5000-ft wellbore with a bottom hole temperature of 375 °F, the maximum foam temperature can be as high as 300 °F.

  10. Paramagnetic Meissner effect of high-temperature granular superconductors: Interpretation by anisotropic and isotropic models

    SciTech Connect

    Chen, F.H.; Horng, W.C.; Hsu, H.T.; Tseng, T.Y.

    1995-02-01

    The field-cooled magnetization of high-Tc superconducting ceramics measured in low magnetic field exhibits the paramagnetic Meissner effect (PME), i.e., the diamagnetic signal initially increases with decreasing temperature but reaches a maximum at a temperature T_d and then decreases on further cooling. In some samples the signal even ultimately transforms into a paramagnetic one once the sample is cooled below a temperature T_p, provided the applied field is sufficiently small. This PME has been observed in various high-Tc cuprates and has been explained from disparate viewpoints. An anisotropic model, in which the granular superconductors are assumed to be ideally anisotropic, is first proposed in the present work to account for this effect theoretically. In addition, an isotropic model, suitable for granular superconductors with randomly oriented grains, is proposed to deal with samples prepared by a conventional solid-state reaction method. The anomalous magnetization behavior in the present model is shown to be the superposition of the diamagnetic signal, which arises from the intragranular shielding currents, and the paramagnetic one due to the intergranular component induced by these currents, where the intergranular component behaves as the effective pinning centers. The PME is thus shown by this model to exist parasitically in granular superconductors. This intergranular effect should therefore be kept in mind when evaluating the volume fraction of superconductivity of samples from the Meissner signal, in particular at low magnetic field.

  11. The room temperature preservation of filtered environmental DNA samples and assimilation into a phenol-chloroform-isoamyl alcohol DNA extraction.

    PubMed

    Renshaw, Mark A; Olds, Brett P; Jerde, Christopher L; McVeigh, Margaret M; Lodge, David M

    2015-01-01

    Current research targeting filtered macrobial environmental DNA (eDNA) often relies upon cold ambient temperatures at various stages, including the transport of water samples from the field to the laboratory and the storage of water and/or filtered samples in the laboratory. This poses practical limitations for field collections in locations where refrigeration and frozen storage are difficult or where samples must be transported long distances for further processing and screening. This study demonstrates the successful preservation of eDNA at room temperature (20 °C) in two lysis buffers, CTAB and Longmire's, over a 2-week period of time. Moreover, the preserved eDNA samples were seamlessly integrated into a phenol-chloroform-isoamyl alcohol (PCI) DNA extraction protocol. The successful application of the eDNA extraction to multiple filter membrane types suggests the methods evaluated here may be broadly applied in future eDNA research. Our results also suggest that for many kinds of studies recently reported on macrobial eDNA, detection probabilities could have been increased, and at a lower cost, by utilizing the Longmire's preservation buffer with a PCI DNA extraction. PMID:24834966

  12. The room temperature preservation of filtered environmental DNA samples and assimilation into a phenol–chloroform–isoamyl alcohol DNA extraction

    PubMed Central

    Renshaw, Mark A; Olds, Brett P; Jerde, Christopher L; McVeigh, Margaret M; Lodge, David M

    2015-01-01

    Current research targeting filtered macrobial environmental DNA (eDNA) often relies upon cold ambient temperatures at various stages, including the transport of water samples from the field to the laboratory and the storage of water and/or filtered samples in the laboratory. This poses practical limitations for field collections in locations where refrigeration and frozen storage are difficult or where samples must be transported long distances for further processing and screening. This study demonstrates the successful preservation of eDNA at room temperature (20 °C) in two lysis buffers, CTAB and Longmire's, over a 2-week period of time. Moreover, the preserved eDNA samples were seamlessly integrated into a phenol–chloroform–isoamyl alcohol (PCI) DNA extraction protocol. The successful application of the eDNA extraction to multiple filter membrane types suggests the methods evaluated here may be broadly applied in future eDNA research. Our results also suggest that for many kinds of studies recently reported on macrobial eDNA, detection probabilities could have been increased, and at a lower cost, by utilizing the Longmire's preservation buffer with a PCI DNA extraction. PMID:24834966

  13. Low-noise rotating sample holder for ultrafast transient spectroscopy at cryogenic temperatures.

    PubMed

    Fanciulli, R; Cerjak, I; Herek, J L

    2007-05-01

    We present the design and testing of a rotating device that fits within a commercial helium cryostat and is capable of providing at 4 K a fresh sample surface for subsequent shots of a 1-10 kHz amplified pulsed laser. We benchmark this rotator in a transient-absorption experiment on molecular switches. After showing that the device introduces only a small amount of additional noise, we demonstrate how the effect of signal degradation due to high fluence is completely resolved. PMID:17552807

  14. Low-noise rotating sample holder for ultrafast transient spectroscopy at cryogenic temperatures

    NASA Astrophysics Data System (ADS)

    Fanciulli, R.; Cerjak, I.; Herek, J. L.

    2007-05-01

    We present the design and testing of a rotating device that fits within a commercial helium cryostat and is capable of providing at 4 K a fresh sample surface for subsequent shots of a 1-10 kHz amplified pulsed laser. We benchmark this rotator in a transient-absorption experiment on molecular switches. After showing that the device introduces only a small amount of additional noise, we demonstrate how the effect of signal degradation due to high fluence is completely resolved.

  15. Melting Temperature Mapping Method: A Novel Method for Rapid Identification of Unknown Pathogenic Microorganisms within Three Hours of Sample Collection

    PubMed Central

    Niimi, Hideki; Ueno, Tomohiro; Hayashi, Shirou; Abe, Akihito; Tsurue, Takahiro; Mori, Masashi; Tabata, Homare; Minami, Hiroshi; Goto, Michihiko; Akiyama, Makoto; Yamamoto, Yoshihiro; Saito, Shigeru; Kitajima, Isao

    2015-01-01

    Acquiring the earliest possible identification of pathogenic microorganisms is critical for selecting the appropriate antimicrobial therapy in infected patients. We herein report the novel “melting temperature (Tm) mapping method” for rapidly identifying the dominant bacteria in a clinical sample from sterile sites. Employing only seven primer sets, the method can identify more than 100 bacterial species. In particular, using the Difference Value, it is possible to identify samples suitable for Tm mapping identification. Moreover, this method can be used to rapidly diagnose the absence of bacteria in clinical samples. We tested the Tm mapping method using 200 whole blood samples obtained from patients with suspected sepsis, 85% (171/200) of which matched the culture results based on the detection level. A total of 130 samples were negative according to the Tm mapping method, 98% (128/130) of which were also negative based on the culture method. Meanwhile, 70 samples were positive according to the Tm mapping method, and of the 59 suitable for identification, 100% (59/59) exhibited a “match” or “broad match” with the culture or sequencing results. These findings were obtained within three hours of whole blood collection. The Tm mapping method is therefore useful for identifying infectious diseases requiring prompt treatment. PMID:26218169

  16. Modeling the Surface Temperature of Earth-like Planets

    NASA Astrophysics Data System (ADS)

    Vladilo, Giovanni; Silva, Laura; Murante, Giuseppe; Filippi, Luca; Provenzale, Antonello

    2015-05-01

    We introduce a novel Earth-like planet surface temperature model (ESTM) for habitability studies based on the spatial-temporal distribution of planetary surface temperatures. The ESTM adopts a surface energy balance model (EBM) complemented by: radiative-convective atmospheric column calculations, a set of physically based parameterizations of meridional transport, and descriptions of surface and cloud properties more refined than in standard EBMs. The parameterization is valid for rotating terrestrial planets with shallow atmospheres and moderate values of axis obliquity (ε ≲ 45°). Comparison with a 3D model of atmospheric dynamics from the literature shows that the equator-to-pole temperature differences predicted by the two models agree within ≈5 K when the rotation rate, insolation, surface pressure and planet radius are varied in the intervals 0.5 ≲ Ω/Ω⊕ ≲ 2, 0.75 ≲ S/S₀ ≲ 1.25, 0.3 ≲ p/(1 bar) ≲ 10, and 0.5 ≲ R/R⊕ ≲ 2, respectively. The ESTM has an extremely low computational cost and can be used when the planetary parameters are scarcely known (as for most exoplanets) and/or whenever many runs for different parameter configurations are needed. Model simulations of a test-case exoplanet (Kepler-62e) indicate that an uncertainty in surface pressure within the range expected for terrestrial planets may impact the mean temperature by ~60 K. Within the limits of validity of the ESTM, the impact of surface pressure is larger than that predicted by uncertainties in rotation rate, axis obliquity, and ocean fractions. We discuss the possibility of performing a statistical ranking of planetary habitability taking advantage of the flexibility of the ESTM.
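
    For orientation, the zero-dimensional limit of an energy balance model reduces to a single algebraic equation, S0/4 * (1 - albedo) = eps * sigma * T^4, solved below for an Earth-like parameter set. This is far simpler than the ESTM (no meridional transport, no column calculations); the albedo and effective emissivity are illustrative assumptions.

```python
# Zero-dimensional energy balance: solve for the equilibrium surface temperature.
from scipy.optimize import brentq

sigma = 5.670e-8                         # Stefan-Boltzmann constant, W m^-2 K^-4
S0, albedo, eps = 1361.0, 0.30, 0.61     # insolation, Bond albedo, effective emissivity

def energy_imbalance(T):
    return S0 / 4.0 * (1.0 - albedo) - eps * sigma * T**4

T_eq = brentq(energy_imbalance, 150.0, 400.0)
print(f"equilibrium surface temperature: {T_eq:.1f} K")   # about 288 K here
```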

  17. Mathematical model of the metal mould surface temperature optimization

    SciTech Connect

    Mlynek, Jaroslav; Knobloch, Roman; Srb, Radek

    2015-11-30

    The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used e.g. in the automotive industry for artificial leather production. To produce artificial leather with uniform surface structure and colour shade, the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of the infrared heaters over the mould so that an approximately uniform heat radiation intensity is generated. A version of the differential evolution algorithm programmed in the Matlab development environment was created by the authors for the optimization process. For the temperature calculations the software system ANSYS was used. A practical example of optimization of heater locations and calculation of the temperature of the mould is included at the end of the article.
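
    A minimal sketch of the optimization step described above: point-like heaters are placed above a one-dimensional strip and differential evolution searches for positions that make the irradiance on the strip as uniform as possible. The geometry, heater model, and heater count are illustrative assumptions, not the authors' mould and heater description, and SciPy's differential evolution stands in for their Matlab implementation.

```python
# Heater placement by differential evolution: minimize the relative spread of
# the irradiance produced on a 1-D surface by point-like heaters at fixed height.
import numpy as np
from scipy.optimize import differential_evolution

surface_x = np.linspace(0.0, 1.0, 200)   # sample points on the surface (m)
height = 0.15                            # heater height above the surface (m)
n_heaters = 4

def irradiance(heater_x):
    # inverse-square law with a cosine factor: height / d^3 per heater
    d2 = (surface_x[None, :] - np.asarray(heater_x)[:, None]) ** 2 + height**2
    return (height / d2**1.5).sum(axis=0)

def non_uniformity(heater_x):
    e = irradiance(heater_x)
    return e.std() / e.mean()

bounds = [(0.0, 1.0)] * n_heaters
result = differential_evolution(non_uniformity, bounds, seed=0, tol=1e-6)
print("heater positions (m):", np.sort(result.x).round(3))
print("relative non-uniformity:", round(result.fun, 4))
```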

  18. Modelling of monovacancy diffusion in W over wide temperature range

    SciTech Connect

    Bukonte, L.; Ahlgren, T.; Heinola, K.

    2014-03-28

    The diffusion of monovacancies in tungsten is studied computationally over a wide temperature range from 1300 K up to the melting point of the material. Our modelling is based on the Molecular Dynamics technique and Density Functional Theory. The monovacancy migration barriers are calculated using the nudged elastic band method for nearest and next-nearest neighbour monovacancy jumps. The diffusion pre-exponential factor for monovacancy diffusion is found to be two to three orders of magnitude higher than commonly used in computational studies, resulting in an attempt frequency of the order of 10^15 Hz. Multiple nearest-neighbour jumps of the monovacancy are found to play an important role in the contribution to the total diffusion coefficient, especially at temperatures above 2/3 of T_m, resulting in an upward curvature of the Arrhenius diagram. The probabilities of different nearest-neighbour jumps for the monovacancy in W are calculated at different temperatures.
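
    The Arrhenius form underlying the discussion is easy to state explicitly: D(T) = D0 * exp(-Em / (kB * T)). The sketch below evaluates it over the studied temperature range; D0 and Em are illustrative values, not the paper's fitted parameters, and the single-exponential form omits the multiple-jump contribution that produces the upward curvature noted above.

```python
# Arrhenius estimate of the monovacancy diffusion coefficient in W.
import numpy as np

kB = 8.617e-5        # Boltzmann constant, eV/K
D0 = 1.0e-5          # pre-exponential factor, m^2/s (assumed)
Em = 1.7             # migration barrier, eV (assumed)

for T in (1300, 2000, 3000, 3695):   # 3695 K is about the melting point of W
    D = D0 * np.exp(-Em / (kB * T))
    print(f"T = {T:4d} K   D = {D:.3e} m^2/s")
```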

  19. Mathematical model of the metal mould surface temperature optimization

    NASA Astrophysics Data System (ADS)

    Mlynek, Jaroslav; Knobloch, Roman; Srb, Radek

    2015-11-01

    The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used e.g. in the automotive industry for artificial leather production. To produce artificial leather with uniform surface structure and colour shade, the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of the infrared heaters over the mould so that an approximately uniform heat radiation intensity is generated. A version of the differential evolution algorithm programmed in the Matlab development environment was created by the authors for the optimization process. For the temperature calculations the software system ANSYS was used. A practical example of optimization of heater locations and calculation of the temperature of the mould is included at the end of the article.

  20. HIGH TEMPERATURE HIGH PRESSURE THERMODYNAMIC MEASUREMENTS FOR COAL MODEL COMPOUNDS

    SciTech Connect

    Vinayak N. Kabadi

    2000-05-01

    The flow VLE apparatus designed and built for a previous project was upgraded and recalibrated for data measurements for this project. The modifications include a better and more accurate sampling technique, the addition of a digital recorder to monitor temperature and pressure inside the VLE cell, and a new technique for remote sensing of the liquid level in the cell. VLE data measurements for three binary systems, tetralin-quinoline, benzene-ethylbenzene and ethylbenzene-quinoline, have been completed. The temperature ranges of data measurements were 325 °C to 370 °C for the first system, 180 °C to 300 °C for the second system, and 225 °C to 380 °C for the third system. The smoothed data were found to be fairly well behaved when subjected to thermodynamic consistency tests. A SETARAM C-80 calorimeter was used for incremental enthalpy and heat capacity measurements for benzene-ethylbenzene binary liquid mixtures. Data were measured from 30 °C to 285 °C for liquid mixtures covering the entire composition range. An apparatus has been designed for simultaneous measurement of excess volume and incremental enthalpy of liquid mixtures at temperatures from 30 °C to 300 °C. The apparatus has been tested and is ready for data measurements. A flow apparatus for measurement of heat of mixing of liquid mixtures at high temperatures has also been designed, and is currently being tested and calibrated.

  1. Near infrared spectroscopy to estimate the temperature reached on burned soils: strategies to develop robust models.

    NASA Astrophysics Data System (ADS)

    Guerrero, César; Pedrosa, Elisabete T.; Pérez-Bejarano, Andrea; Keizer, Jan Jacob

    2014-05-01

    The temperature reached on soils is an important parameter needed to describe wildfire effects. However, the methods for measuring the temperature reached on burned soils have been poorly developed. Recently, near-infrared (NIR) spectroscopy has been identified as a valuable tool for this purpose. The NIR spectrum of a soil sample contains information on the organic matter (quantity and quality), clay (quantity and quality), minerals (such as carbonates and iron oxides) and water contents. Some of these components are modified by heat, and each temperature causes a particular group of changes, leaving a characteristic fingerprint on the NIR spectrum. This technique requires a model (or calibration) in which the changes in the NIR spectra are related to the temperature reached. To develop the model, several aliquots are heated at known temperatures and used as standards in the calibration set. This model makes it possible to estimate the temperature reached on a burned sample from its NIR spectrum. However, the estimation of the temperature reached using NIR spectroscopy is due to changes in several components, and cannot be attributed to changes in a single soil component. Thus, we estimate the temperature reached through the interaction between temperature and the thermo-sensitive soil components. In addition, we cannot expect a uniform distribution of these components, even at small scales. Consequently, the proportion of these soil components can vary spatially across the site. This variation will be present in the samples used to construct the model and also in the samples affected by the wildfire. Therefore, the strategies followed to develop robust models should focus on managing this expected variation. In this work we compared the prediction accuracy of models constructed with different approaches. These approaches were designed to provide insights about how to distribute the efforts needed for the development of robust
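
    A common way to build such a calibration is a multivariate regression from spectra to heating temperature; the sketch below uses partial least squares on simulated spectra merely to illustrate the workflow. The spectral model, temperatures, and noise level are all invented for the example; in practice the calibration set would consist of NIR spectra of laboratory-heated soil aliquots.

```python
# PLS calibration linking (simulated) NIR spectra to the temperature reached.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
temps = np.repeat(np.arange(100, 601, 50), 5).astype(float)      # heating temps, degC
wavelengths = np.linspace(1100, 2500, 300)                       # nm
# toy spectra: one absorption band that weakens with heating, plus noise
band = np.exp(-((wavelengths - 1900.0) / 80.0) ** 2)
spectra = (band[None, :] * (600.0 - temps)[:, None] / 500.0
           + rng.normal(0.0, 0.02, (temps.size, wavelengths.size)))

model = PLSRegression(n_components=5)
predicted = cross_val_predict(model, spectra, temps, cv=5)
rmse = np.sqrt(np.mean((predicted.ravel() - temps) ** 2))
print(f"cross-validated RMSE: {rmse:.1f} degC")
```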

  2. A novel powder sample holder for the determination of glass transition temperatures by DMA.

    PubMed

    Mahlin, Denny; Wood, John; Hawkins, Nicholas; Mahey, Jas; Royall, Paul G

    2009-04-17

    The use of a new sample holder for dynamic mechanical analysis (DMA) as a means to characterise the Tg of powdered hydroxypropyl methyl cellulose (HPMC) has been investigated. A sample holder was constructed consisting of a rectangular stainless steel container and a lid engineered to fit exactly within the walls of the container when clamped within a TA instruments Q800 DMA in dual cantilever configuration. Physical mixtures of HPMC (E4M) and aluminium oxide powders were placed in the holder and subjected to oscillating strains (1 Hz, 10 Hz and 100 Hz) whilst heated at 3 degrees C/min. The storage and loss modulus signals showed a large reduction in the mechanical strength above 150 degrees C which was attributed to a glass transition. Optimal experimental parameters were determined using a design of experiment procedure and by analysing the frequency dependence of Tg in Arrhenius plots. The parameters were a clamping pressure of 62 kPa, a mass ratio of 0.2 HPMC in aluminium oxide, and a loading mass of either 120 mg or 180 mg. At 1 Hz, a Tg of 177+/-1.2 degrees C (n=6) for powdered HPMC was obtained. In conclusion, the new powder holder was capable of measuring the Tg of pharmaceutical powders and a simple optimization protocol was established, useful in further applications of the DMA powder holder. PMID:19167475
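
    The frequency dependence of Tg mentioned above is typically analysed with an Arrhenius plot of ln(frequency) against 1/Tg, whose slope gives an apparent activation energy. The sketch below shows the arithmetic; only the 1 Hz value corresponds to the reported measurement, while the 10 Hz and 100 Hz values are assumed for illustration.

```python
# Apparent activation energy from the frequency dependence of Tg.
import numpy as np

R = 8.314                                   # gas constant, J/(mol K)
freq = np.array([1.0, 10.0, 100.0])         # oscillation frequencies, Hz
Tg_C = np.array([177.0, 183.0, 189.0])      # apparent Tg per frequency (10 and 100 Hz assumed)
Tg_K = Tg_C + 273.15

slope, _ = np.polyfit(1.0 / Tg_K, np.log(freq), 1)   # slope = -Ea/R
Ea_kJ_per_mol = -slope * R / 1000.0
print(f"apparent activation energy: {Ea_kJ_per_mol:.0f} kJ/mol")
```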

  3. Improving Shade Modelling in a Regional River Temperature Model Using Fine-Scale LIDAR Data

    NASA Astrophysics Data System (ADS)

    Hannah, D. M.; Loicq, P.; Moatar, F.; Beaufort, A.; Melin, E.; Jullian, Y.

    2015-12-01

    Air temperature is often considered a proxy for stream temperature when modelling the distribution areas of aquatic species, because water temperature is not available at a regional scale. To simulate water temperature at a regional scale (10^5 km²), a physically based model using the equilibrium temperature concept and including upstream-downstream propagation of the thermal signal was developed and applied to the entire Loire basin (Beaufort et al., submitted). This model, called T-NET (Temperature-NETwork), is based on a hydrographical network topology. Computations are made hourly on 52,000 reaches which average 1.7 km in length in the Loire drainage basin. The model gives a median Root Mean Square Error of 1.8°C at an hourly time step on the basis of 128 water temperature stations (2008-2012). In that version of the model, tree shading is modelled by a constant factor proportional to the vegetation cover within 10 meters on either side of the river reaches. According to a sensitivity analysis, improving the shade representation would enhance T-NET accuracy, especially for the maximum daily temperatures, which are currently not modelled very well. This study evaluates the most efficient way (accuracy/computing time) to improve the shade model using 1-m resolution LIDAR data available for a tributary of the Loire River (317 km long, with an area of 8280 km²). Two methods are tested and compared: the first is a spatially explicit computation of the cast shadow for every LIDAR pixel. The second is based on averaged vegetation cover characteristics of buffers and reaches of variable size. Validation of the water temperature model is made against 4 temperature sensors well spread along the stream, as well as two airborne thermal infrared images acquired in summer 2014 and winter 2015 over an 80 km reach. The poster will present the optimal lengthwise and crosswise scales for characterizing the vegetation from LIDAR data.

  4. Modeling compressive reaction and estimating model uncertainty in shock loaded porous samples of Hexanitrostilbene (HNS)

    NASA Astrophysics Data System (ADS)

    Brundage, Aaron; Gump, Jared

    2011-06-01

    Neat pressings of HNS powders have been used in many explosive applications for over 50 years. However, characterization of its crystalline properties has lagged that of other explosives, and the solid stress has been inferred from impact experiments or estimated from mercury porosimetry. This lack of knowledge of the precise crystalline isotherm can contribute to large model uncertainty in the reacted response of pellets to shock impact. At high impact stresses, deflagration-to-detonation transition (DDT) processes initiated by compressive reaction have been interpreted from velocity interferometry at the surface of distended HNS-FP pellets. In particular, the Baer-Nunziato multiphase model in CTH, Sandia's Eulerian, finite volume shock propagation code, was used to predict compressive waves in pellets having approximately a 60% theoretical maximum density (TMD). These calculations were repeated with newly acquired isothermal compression measurements of fine-particle HNS using diamond anvil cells to compress the sample and powder x-ray diffraction to obtain the sample volume at each pressure point. Hence, estimating the model uncertainty provides a simple method for conveying the impact of future model improvements based upon new experimental data.

  5. Modeling compressive reaction and estimating model uncertainty in shock loaded porous samples of hexanitrostilbene (HNS)

    NASA Astrophysics Data System (ADS)

    Brundage, Aaron L.; Gump, Jared C.

    2012-03-01

    Neat pressings of HNS powders have been used in many explosive applications for over 50 years. However, characterization of its crystalline properties has lagged that of other explosives, and the solid stress has been inferred from impact experiments or estimated from mercury porosimetry. This lack of knowledge of the precise crystalline isotherm can contribute to large model uncertainty in the reacted response of pellets to shock impact. At high impact stresses, deflagration-to-detonation transition (DDT) processes initiated by compressive reaction have been interpreted from velocity interferometry at the surface of distended HNS-FP pellets. In particular, the Baer-Nunziato multiphase model in CTH, Sandia's Eulerian, finite volume shock propagation code, was used to predict compressive waves in pellets having approximately a 60% theoretical maximum density (TMD). These calculations were repeated with newly acquired isothermal compression measurements of fine-particle HNS using diamond anvil cells to compress the sample and powder x-ray diffraction to obtain the sample volume at each pressure point. Hence, estimating the model uncertainty provides a simple method for conveying the impact of future model improvements based upon new experimental data.

  6. 12 CFR Appendix B to Part 1030 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Model Clauses and Sample Forms B Appendix B to.... 1030, App. B Appendix B to Part 1030—Model Clauses and Sample Forms 1. Modifications. Institutions that modify the model clauses will be deemed in compliance as long as they do not delete required...

  7. 12 CFR Appendix B to Part 1030 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 9 2014-01-01 2014-01-01 false Model Clauses and Sample Forms B Appendix B to.... 1030, App. B Appendix B to Part 1030—Model Clauses and Sample Forms 1. Modifications. Institutions that modify the model clauses will be deemed in compliance as long as they do not delete required...

  8. 12 CFR Appendix B to Part 230 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 4 2014-01-01 2014-01-01 false Model Clauses and Sample Forms B Appendix B to... SYSTEM (CONTINUED) TRUTH IN SAVINGS (REGULATION DD) Pt. 230, App. B Appendix B to Part 230—Model Clauses and Sample Forms Table of contents B-1—Model Clauses for Account Disclosures (Section 230.4(b))...

  9. 12 CFR Appendix B to Part 230 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 4 2012-01-01 2012-01-01 false Model Clauses and Sample Forms B Appendix B to... SYSTEM (CONTINUED) TRUTH IN SAVINGS (REGULATION DD) Pt. 230, App. B Appendix B to Part 230—Model Clauses and Sample Forms Table of contents B-1—Model Clauses for Account Disclosures (Section 230.4(b))...

  10. 12 CFR Appendix B to Part 230 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 4 2013-01-01 2013-01-01 false Model Clauses and Sample Forms B Appendix B to... SYSTEM (CONTINUED) TRUTH IN SAVINGS (REGULATION DD) Pt. 230, App. B Appendix B to Part 230—Model Clauses and Sample Forms Table of contents B-1—Model Clauses for Account Disclosures (Section 230.4(b))...

  11. On the fate of the Standard Model at finite temperature

    NASA Astrophysics Data System (ADS)

    Rose, Luigi Delle; Marzo, Carlo; Urbano, Alfredo

    2016-05-01

    In this paper we revisit and update the computation of thermal corrections to the stability of the electroweak vacuum in the Standard Model. At zero temperature, we make use of the full two-loop effective potential, improved by three-loop beta functions with two-loop matching conditions. At finite temperature, we include one-loop thermal corrections together with resummation of daisy diagrams. We solve the bounce equation numerically, both at zero and finite temperature, thus providing an accurate description of the thermal tunneling. Assuming a maximum temperature in the early Universe of the order of 10^18 GeV, we find that the instability bound excludes values of the top mass M_t ≳ 173.6 GeV, with M_h ≃ 125 GeV and including uncertainties on the strong coupling. We discuss the validity and temperature-dependence of this bound in the early Universe, with a special focus on the reheating phase after inflation.

  12. Systems Modeling for Crew Core Body Temperature Prediction Postlanding

    NASA Technical Reports Server (NTRS)

    Cross, Cynthia; Ochoa, Dustin

    2010-01-01

    The Orion Crew Exploration Vehicle, NASA's latest crewed spacecraft project, presents many challenges to its designers, including ensuring crew survivability during nominal and off-nominal landing conditions. With a nominal water landing planned off the coast of San Clemente, California, off-nominal water landings could range from the far North Atlantic Ocean to the middle of the equatorial Pacific Ocean. For all of these conditions, the vehicle must provide sufficient life support resources to ensure that the crew members' core body temperatures are maintained at a safe level prior to crew rescue. This paper will examine the natural environments, environments created inside the cabin, and constraints associated with post-landing operations that affect the temperature of the crew member. Models of the capsule and the crew members are examined and analysis results are compared to the requirement for safe human exposure. Further, recommendations for updated modeling techniques and operational limits are included.

  13. Chaos in Temperature in Generic 2p-Spin Models

    NASA Astrophysics Data System (ADS)

    Panchenko, Dmitry

    2016-02-01

    We prove chaos in temperature for even p-spin models which include sufficiently many p-spin interaction terms. Our approach is based on a new invariance property for coupled asymptotic Gibbs measures, similar in spirit to the invariance property that appeared in the proof of ultrametricity in Panchenko (Ann Math (2) 177(1):383-393, 2013), used in combination with Talagrand's analogue of Guerra's replica symmetry breaking bound for coupled systems.

  14. Solid state convection models of lunar internal temperature

    NASA Technical Reports Server (NTRS)

    Schubert, G.; Young, R. E.; Cassen, P.

    1975-01-01

    Thermal models of the Moon were made which include cooling by subsolidus creep and consideration of the creep behavior of geologic material. Measurements from the Apollo program on seismic velocities, electrical conductivity of the Moon's interior, and heat flux at two locations were used in the calculations. Estimates of 1500 to 1600 K were calculated for the temperature, and 10^21 to 10^22 cm²/s for the viscosity, of the deep lunar interior.

  15. An Empirical Temperature Variance Source Model in Heated Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations are divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational result from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  16. Modeling sample variables with an Experimental Factor Ontology

    PubMed Central

    Malone, James; Holloway, Ele; Adamusiak, Tomasz; Kapushesky, Misha; Zheng, Jie; Kolesnikov, Nikolay; Zhukova, Anna; Brazma, Alvis; Parkinson, Helen

    2010-01-01

    Motivation: Describing biological sample variables with ontologies is complex due to the cross-domain nature of experiments. Ontologies provide annotation solutions; however, for cross-domain investigations, multiple ontologies are needed to represent the data. These are subject to rapid change, are often not interoperable and present complexities that are a barrier to biological resource users. Results: We present the Experimental Factor Ontology, designed to meet cross-domain, application focused use cases for gene expression data. We describe our methodology and open source tools used to create the ontology. These include tools for creating ontology mappings, ontology views, detecting ontology changes and using ontologies in interfaces to enhance querying. The application of reference ontologies to data is a key problem, and this work presents guidelines on how community ontologies can be presented in an application ontology in a data-driven way. Availability: http://www.ebi.ac.uk/efo Contact: malone@ebi.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20200009

  17. A regional neural network model for predicting mean daily river water temperature

    USGS Publications Warehouse

    Wagner, Tyler; DeWeber, Jefferson Tyrell

    2014-01-01

    Water temperature is a fundamental property of river habitat and often a key aspect of river resource management, but measurements to characterize thermal regimes are not available for most streams and rivers. As such, we developed an artificial neural network (ANN) ensemble model to predict mean daily water temperature in 197,402 individual stream reaches during the warm season (May–October) throughout the native range of brook trout Salvelinus fontinalis in the eastern U.S. We compared four models with different groups of predictors to determine how well water temperature could be predicted by climatic, landform, and land cover attributes, and used the median prediction from an ensemble of 100 ANNs as our final prediction for each model. The final model included air temperature, landform attributes and forested land cover and predicted mean daily water temperatures with moderate accuracy as determined by root mean squared error (RMSE) at 886 training sites with data from 1980 to 2009 (RMSE = 1.91 °C). Based on validation at 96 sites (RMSE = 1.82) and separately for data from 2010 (RMSE = 1.93), a year with relatively warmer conditions, the model was able to generalize to new stream reaches and years. The most important predictors were mean daily air temperature, prior 7 day mean air temperature, and network catchment area according to sensitivity analyses. Forest land cover at both riparian and catchment extents had relatively weak but clear negative effects. Predicted daily water temperature averaged for the month of July matched expected spatial trends with cooler temperatures in headwaters and at higher elevations and latitudes. Our ANN ensemble is unique in predicting daily temperatures throughout a large region, while other regional efforts have predicted at relatively coarse time steps. The model may prove a useful tool for predicting water temperatures in sampled and unsampled rivers under current conditions and future projections of climate
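
    The ensemble-median idea is straightforward to reproduce in miniature: train several small neural networks on the same predictors and take the median of their predictions. The sketch below uses synthetic air/water temperature pairs and only two predictors; the data, network size, and ensemble size (10 rather than 100) are illustrative assumptions.

```python
# Median prediction from a small ensemble of neural networks (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
air_t = rng.uniform(0.0, 35.0, 400)                      # mean daily air temperature, degC
prior7 = air_t + rng.normal(0.0, 2.0, air_t.size)        # prior 7-day mean air temperature
water_t = 0.8 * air_t + 0.1 * prior7 + 2.0 + rng.normal(0.0, 1.0, air_t.size)
X = np.column_stack([air_t, prior7])

ensemble = [
    make_pipeline(StandardScaler(),
                  MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=seed)
                  ).fit(X, water_t)
    for seed in range(10)
]

X_new = np.array([[25.0, 24.0]])
predictions = np.array([net.predict(X_new)[0] for net in ensemble])
print(f"median predicted water temperature: {np.median(predictions):.1f} degC")
```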

  18. Space resection model calculation based on Random Sample Consensus algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Xinzhu; Kang, Zhizhong

    2016-03-01

    Resection has long been one of the most important topics in photogrammetry. It aims to recover the position and attitude of the camera at the shooting point. However, in some cases the observed values used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with the DLT model, which effectively avoids the difficulty of determining initial values when using the collinearity equations. The results also show that our strategy can exclude gross errors and leads to an accurate and efficient way to obtain the elements of exterior orientation.
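
    The RANSAC principle itself can be shown on a deliberately simple model, fitting a 2-D line in the presence of gross errors rather than solving the full DLT space-resection problem: repeatedly fit to a minimal random sample, count inliers, and keep the parameters with the largest consensus set. All data and thresholds below are illustrative.

```python
# RANSAC on a toy line-fitting problem with gross errors.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 100)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, x.size)
y[:20] += rng.uniform(5.0, 20.0, 20)            # 20 observations with gross errors

best_inliers, best_params = 0, None
for _ in range(200):
    i, j = rng.choice(x.size, size=2, replace=False)   # minimal sample of two points
    slope = (y[j] - y[i]) / (x[j] - x[i])
    intercept = y[i] - slope * x[i]
    residuals = np.abs(y - (slope * x + intercept))
    inliers = int((residuals < 0.5).sum())
    if inliers > best_inliers:
        best_inliers, best_params = inliers, (slope, intercept)

print(f"inliers: {best_inliers}/100, slope = {best_params[0]:.2f}, "
      f"intercept = {best_params[1]:.2f}")
```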

  19. Precipitates/Salts Model Calculations for Various Drift Temperature Environments

    SciTech Connect

    P. Marnier

    2001-12-20

    The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation within a repository drift. This work is developed and documented using procedure AP-3.12Q, Calculations, in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The primary objective of this calculation is to predict the effects of evaporation on the abstracted water compositions established in ''EBS Incoming Water and Gas Composition Abstraction Calculations for Different Drift Temperature Environments'' (BSC 2001c). A secondary objective is to predict evaporation effects on observed Yucca Mountain waters for subsequent cement interaction calculations (BSC 2001d). The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b).

  20. Chemical vapor deposition modeling for high temperature materials

    NASA Technical Reports Server (NTRS)

    Gokoglu, Suleyman A.

    1992-01-01

    The formalism for the accurate modeling of chemical vapor deposition (CVD) processes has matured based on the well established principles of transport phenomena and chemical kinetics in the gas phase and on surfaces. The utility and limitations of such models are discussed in practical applications for high temperature structural materials. Attention is drawn to the complexities and uncertainties in chemical kinetics. Traditional approaches based on only equilibrium thermochemistry and/or transport phenomena are defended as useful tools, within their validity, for engineering purposes. The role of modeling is discussed within the context of establishing the link between CVD process parameters and material microstructures/properties. It is argued that CVD modeling is an essential part of designing CVD equipment and controlling/optimizing CVD processes for the production and/or coating of high performance structural materials.

  1. Determination of filbertone in spiked olive oil samples using headspace-programmed temperature vaporization-gas chromatography-mass spectrometry.

    PubMed

    Pérez Pavón, José Luis; del Nogal Sánchez, Miguel; Fernández Laespada, María Esther; Moreno Cordero, Bernardo

    2009-07-01

    A sensitive method for the fast analysis of filbertone in spiked olive oil samples is presented. The applicability of a headspace (HS) autosampler in combination with a gas chromatograph (GC) equipped with a programmable temperature vaporizer (PTV) and a mass spectrometric (MS) detector is explored. A modular accelerated column heater (MACH) was used to control the temperature of the capillary gas chromatography column. This module can be heated and cooled very rapidly, shortening total analysis cycle times to a considerable extent. The proposed method does not require any previous analyte extraction, filtration and preconcentration step, as in most methods described to date. Sample preparation is reduced to placing the olive oil sample in the vial. This reduces the analysis time and the experimental errors associated with this step of the analytical process. By using headspace generation, the volatiles of the sample are analysed without interference by the non-volatile matrix, and by using injection in solvent-vent mode at the PTV inlet, most of the compounds that are more volatile than filbertone are purged and the matrix effect is minimised. Use of a liner packed with Tenax-TA allowed the compound of interest to be retained during the venting process. The limits of detection and quantification were as low as 0.27 and 0.83 microg/L, respectively, and precision (measured as the relative standard deviation) was 5.7%. The method was applied to the determination of filbertone in spiked olive oil samples and the results revealed the good accuracy obtained with the method. PMID:19396589

  2. COMPUTER MODEL OF TEMPERATURE DISTRIBUTION IN OPTICALLY PUMPED LASER RODS

    NASA Technical Reports Server (NTRS)

    Farrukh, U. O.

    1994-01-01

    Managing the thermal energy that accumulates within a solid-state laser material under active pumping is of critical importance in the design of laser systems. Earlier models that calculated the temperature distribution in laser rods were single dimensional and assumed laser rods of infinite length. This program presents a new model which solves the temperature distribution problem for finite dimensional laser rods and calculates both the radial and axial components of temperature distribution in these rods. The modeled rod is either side-pumped or end-pumped by a continuous or a single pulse pump beam. (At the present time, the model cannot handle a multiple pulsed pump source.) The optical axis is assumed to be along the axis of the rod. The program also assumes that it is possible to cool different surfaces of the rod at different rates. The user defines the laser rod material characteristics, determines the types of cooling and pumping to be modeled, and selects the time frame desired via the input file. The program contains several self checking schemes to prevent overwriting memory blocks and to provide simple tracing of information in case of trouble. Output for the program consists of 1) an echo of the input file, 2) diffusion properties, radius and length, and time for each data block, 3) the radial increments from the center of the laser rod to the outer edge of the laser rod, and 4) the axial increments from the front of the laser rod to the other end of the rod. This program was written in Microsoft FORTRAN77 and implemented on a Tandon AT with a 287 math coprocessor. The program can also run on a VAX 750 mini-computer. It has a memory requirement of about 147 KB and was developed in 1989.

  3. High-resolution room-temperature sample scanning superconducting quantum interference device microscope configurable for geological and biomagnetic applications

    SciTech Connect

    Fong, L.E.; Holzer, J.R.; McBride, K.K.; Lima, E.A.; Baudenbacher, F.; Radparvar, M.

    2005-05-15

    We have developed a scanning superconducting quantum interference device (SQUID) microscope system with interchangeable sensor configurations for imaging magnetic fields of room-temperature (RT) samples with submillimeter resolution. The low-critical-temperature (Tc) niobium-based monolithic SQUID sensors are mounted on the tip of a sapphire and thermally anchored to the helium reservoir. A 25 µm sapphire window separates the vacuum space from the RT sample. A positioning mechanism allows us to adjust the sample-to-sensor spacing from the top of the Dewar. We achieved a sensor-to-sample spacing of 100 µm, which could be maintained for periods of up to four weeks. Different SQUID sensor designs are necessary to achieve the best combination of spatial resolution and field sensitivity for a given source configuration. For imaging thin sections of geological samples, we used a custom-designed monolithic low-Tc niobium bare SQUID sensor, with an effective diameter of 80 µm, and achieved a field sensitivity of 1.5 pT/Hz^(1/2) and a magnetic moment sensitivity of 5.4x10^-18 A m^2/Hz^(1/2) at a sensor-to-sample spacing of 100 µm in the white noise region for frequencies above 100 Hz. Imaging action currents in cardiac tissue requires a higher field sensitivity, which can only be achieved by compromising spatial resolution. We developed a monolithic low-Tc niobium multiloop SQUID sensor, with sensor sizes ranging from 250 µm to 1 mm, and achieved sensitivities of 480-180 fT/Hz^(1/2) in the white noise region for frequencies above 100 Hz, respectively. For all sensor configurations, the spatial resolution was comparable to the effective diameter and limited by the sensor-to-sample spacing. Spatial registration allowed us to compare high-resolution images of magnetic fields associated with action currents and optical recordings of transmembrane potentials to study the bidomain nature of cardiac tissue or to match petrography

  4. Reducing temperature uncertainties by stochastic geothermal reservoir modelling

    NASA Astrophysics Data System (ADS)

    Vogt, C.; Mottaghy, D.; Wolf, A.; Rath, V.; Pechnig, R.; Clauser, C.

    2010-04-01

    Quantifying and minimizing uncertainty is vital for simulating technically and economically successful geothermal reservoirs. To this end, we apply a stochastic modelling sequence, a Monte Carlo study, based on (i) creating an ensemble of possible realizations of a reservoir model, (ii) forward simulation of fluid flow and heat transport, and (iii) constraining post-processing using observed state variables. To generate the ensemble, we use the stochastic algorithm of Sequential Gaussian Simulation and test its potential for fitting rock properties, such as thermal conductivity and permeability, of a synthetic reference model and, by performing a corresponding forward simulation, state variables such as temperature. The ensemble yields probability distributions of rock properties and state variables at any location inside the reservoir. In addition, we perform a constraining post-processing in order to minimize the uncertainty of the obtained distributions by conditioning the ensemble to observed state variables, in this case temperature. This constraining post-processing works particularly well on systems dominated by fluid flow. The stochastic modelling sequence is applied to a large, steady-state 3-D heat flow model of a reservoir in The Hague, Netherlands. The spatial thermal conductivity distribution is simulated stochastically based on available logging data. Errors of bottom-hole temperatures provide thresholds for the constraining technique performed afterwards. This reduces the temperature uncertainty for the proposed target location significantly, from 25 to 12 K (full distribution width) at a depth of 2300 m. Assuming a Gaussian shape of the temperature distribution, the standard deviation is 1.8 K. To allow a more comprehensive approach to quantifying uncertainty, we also implement the stochastic simulation of boundary conditions and demonstrate this for the basal specific heat flow in the reservoir of The Hague. As expected, this results in a larger distribution width
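
    The three-step sequence (ensemble generation, forward simulation, constraining post-processing) can be illustrated with a deliberately simplified stand-in; the sketch below replaces Sequential Gaussian Simulation with independent log-normal conductivity profiles and uses a 1-D steady-state conduction model, so every number is an assumption rather than the Hague reservoir setup:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for the Monte Carlo sequence: (i) generate an ensemble of
    # thermal-conductivity profiles, (ii) forward-simulate steady-state conductive
    # temperatures, (iii) constrain the ensemble with an observed bottom-hole
    # temperature.  All values are illustrative assumptions.

    nz, dz = 100, 25.0                 # 100 cells of 25 m -> 2.5 km column
    q_base = 0.065                     # basal specific heat flow [W/m^2]
    T_surf = 10.0                      # surface temperature [degC]

    def forward(k_profile):
        """Steady-state 1-D conduction: T(z) = T_surf + q * cumulative sum of dz / k."""
        return T_surf + q_base * np.cumsum(dz / k_profile)

    # (i) ensemble of log-normal conductivity realizations (stand-in for SGSim)
    n_real = 2000
    k_ens = np.exp(rng.normal(np.log(2.5), 0.25, size=(n_real, nz)))   # W/(m K)

    # (ii) forward simulation
    T_ens = np.array([forward(k) for k in k_ens])

    # synthetic "observation": bottom-hole temperature at 2.3 km depth with +/- 1 degC error
    z_obs = int(2300 / dz) - 1
    T_obs, sigma_obs = 71.5, 1.0

    # (iii) constraining post-processing: keep realizations consistent with the data
    keep = np.abs(T_ens[:, z_obs] - T_obs) <= 2 * sigma_obs
    print("prior width at 2.3 km:    ", np.ptp(T_ens[:, z_obs]).round(1), "degC")
    print("posterior width at 2.3 km:", np.ptp(T_ens[keep, z_obs]).round(1), "degC",
          f"({keep.sum()} of {n_real} realizations kept)")
    ```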

  5. Effects of electrostatic discharge on three cryogenic temperature sensor models

    NASA Astrophysics Data System (ADS)

    Courts, S. Scott; Mott, Thomas B.

    2014-01-01

    Cryogenic temperature sensors are not usually thought of as electrostatic discharge (ESD) sensitive devices. However, the most common cryogenic thermometers in use today are thermally sensitive diodes or resistors - both electronic devices in their base form. As such, they are sensitive to ESD at some level above which either catastrophic or latent damage can occur. Instituting an ESD program for safe handling and installation of the sensor is costly and it is desirable to balance the risk of ESD damage against this cost. However, this risk cannot be evaluated without specific knowledge of the ESD vulnerability of the devices in question. This work examines three types of cryogenic temperature sensors for ESD sensitivity - silicon diodes, Cernox{trade mark, serif} resistors, and wire wound platinum resistors, all manufactured by Lake Shore Cryotronics, Inc. Testing was performed per TIA/EIA FOTP129 (Human Body Model). Damage was found to occur in the silicon diode sensors at discharge levels of 1,500 V. For Cernox{trade mark, serif} temperature sensors, damage was observed at 3,500 V. The platinum temperature sensors were not damaged by ESD exposure levels of 9,900 V. At the lower damage limit, both the silicon diode and the Cernox{trade mark, serif} temperature sensors showed relatively small calibration shifts of 1 to 3 K at room temperature. The diode sensors were stable with time and thermal cycling, but the long term stability of the Cernox{trade mark, serif} sensors was degraded. Catastrophic failure occurred at higher levels of ESD exposure.

  6. Effects of electrostatic discharge on three cryogenic temperature sensor models

    SciTech Connect

    Courts, S. Scott; Mott, Thomas B.

    2014-01-29

    Cryogenic temperature sensors are not usually thought of as electrostatic discharge (ESD) sensitive devices. However, the most common cryogenic thermometers in use today are thermally sensitive diodes or resistors - both electronic devices in their base form. As such, they are sensitive to ESD at some level above which either catastrophic or latent damage can occur. Instituting an ESD program for safe handling and installation of the sensor is costly and it is desirable to balance the risk of ESD damage against this cost. However, this risk cannot be evaluated without specific knowledge of the ESD vulnerability of the devices in question. This work examines three types of cryogenic temperature sensors for ESD sensitivity - silicon diodes, Cernox(trade mark, serif) resistors, and wire wound platinum resistors, all manufactured by Lake Shore Cryotronics, Inc. Testing was performed per TIA/EIA FOTP129 (Human Body Model). Damage was found to occur in the silicon diode sensors at discharge levels of 1,500 V. For Cernox(trade mark, serif) temperature sensors, damage was observed at 3,500 V. The platinum temperature sensors were not damaged by ESD exposure levels of 9,900 V. At the lower damage limit, both the silicon diode and the Cernox(trade mark, serif) temperature sensors showed relatively small calibration shifts of 1 to 3 K at room temperature. The diode sensors were stable with time and thermal cycling, but the long term stability of the Cernox(trade mark, serif) sensors was degraded. Catastrophic failure occurred at higher levels of ESD exposure.

  7. A Two-temperature Model of Magnetized Protostellar Outflows

    NASA Astrophysics Data System (ADS)

    Wang, Liang-Yao; Shang, Hsien; Krasnopolsky, Ruben; Chiang, Tzu-Yang

    2015-12-01

    We explore kinematics and morphologies of molecular outflows driven by young protostars using magnetohydrodynamic simulations in the context of the unified wind model of Shang et al. The model explains the observed high-velocity jet and low-velocity shell features. In this work we investigate how these characteristics are affected by the underlying temperature and magnetic field strength. We study the problem of a warm wind running into a cold ambient toroid by using a tracer field that keeps track of the wind material. While an isothermal equation of state is adopted, the effective temperature is determined locally based on the wind mass fraction. In the unified wind model, the density of the wind is cylindrically stratified and highly concentrated toward the outflow axis. Our simulations show that for a sufficiently magnetized wind, the jet identity can be well maintained even at high temperatures. However, for a high temperature wind with low magnetization, the thermal pressure of the wind gas can drive material away from the axis, making the jet less collimated as it propagates. We also study the role of the poloidal magnetic field of the toroid. It is shown that the wind-ambient interface becomes more resistant to corrugation when the poloidal field is present, and the poloidal field that bunches up within the toroid prevents the swept-up material from being compressed into a thin layer. This suggests that the ambient poloidal field may play a role in producing a smoother and thicker swept-up shell structure in the molecular outflow.
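
    The abstract does not spell out how the tracer sets the local effective temperature; one plausible (assumed) realization is a wind-mass-fraction-weighted blend of the two isothermal sound speeds, sketched below. The actual prescription used in the unified wind model may differ:

    ```python
    import numpy as np

    # Hypothetical illustration of a two-temperature (two-sound-speed) closure driven
    # by a wind tracer field.  The weighting is an assumed linear blend in the squared
    # isothermal sound speed, not necessarily the prescription of the paper.

    k_B, m_H, mu = 1.380649e-23, 1.6735575e-27, 2.3   # SI constants, assumed mean molecular weight

    def sound_speed_sq(T):
        return k_B * T / (mu * m_H)

    T_wind, T_ambient = 1.0e4, 1.0e2      # warm wind vs. cold toroid [K] (illustrative)

    def effective_cs2(f_wind):
        """f_wind: wind mass fraction (tracer); 0 = pure ambient gas, 1 = pure wind."""
        f = np.clip(f_wind, 0.0, 1.0)
        return f * sound_speed_sq(T_wind) + (1.0 - f) * sound_speed_sq(T_ambient)

    # pressure for the locally isothermal equation of state: P = rho * c_s^2(f)
    rho, f = 1.0e-18, 0.3                  # kg/m^3, 30% wind material in this cell
    print("effective sound speed [km/s]:", np.sqrt(effective_cs2(f)) / 1e3)
    ```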

  8. A Physical Method for Generating the Surface Temperature from Passive Microwave Observations by Addressing the Thermal Sampling Depth for Barren Land

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Zhou, J.; Dai, F.

    2015-12-01

    The land surface temperature (LST) is an important parameter in studying the global and regional climate change. Passive microwave (PMW) remote sensing is less influenced by the atmosphere and has a unique advantage in cloudy regions compared to satellite thermal infrared (TIR) remote sensing. However, the accuracy of LST estimation of many PMW remote sensing models, especially in barren land, is unsatisfactory due to the neglected discrepancy of thermal sampling depth between PMW and TIR. Here, a physical method for PMW remote sensing is proposed to generate the surface temperature, which has the same physically meaning as the TIR surface temperature, by addressing the thermal sampling depth over barren land surface. The method was applied to the Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) data. Validation with the synchronous Moderate Resolution Imaging Spectroradiometer (MODIS) LSTs demonstrates that the method has better performances in estimating LSTs than another two methods that neglect the thermal sampling depth. In Northwest China and a part of Mongolia, the root mean squared errors (RMSEs) the physical method were 3.9 K and 3.7K for daytime and nighttime cases, respectively. In the region of western Namibia, the corresponding RMSEs were 3.8 K and 4.5 K. Further comparison with the in-situ measured LST temperatures at a ground station confirmed the better performance of the proposed method, compared with another two methods. The proposed method will be beneficial for improving the accuracies of the LSTs estimated from PMW observations and integrating the LST products generated from both the TIR and PMW remote sensing.

  9. Modeling problem behaviors in a nationally representative sample of adolescents.

    PubMed

    O'Connor, Kate L; Dolphin, Louise; Fitzgerald, Amanda; Dooley, Barbara

    2016-07-01

    Research on multiple problem behaviors has focused on the concept of Problem Behavior Syndrome (PBS). Problem Behavior Theory (PBT) is a complex and comprehensive social-psychological framework designed to explain the development of a range of problem behaviors. This study examines the structure of PBS and the applicability of PBT in adolescents. Participants were 6062 adolescents; aged 12-19 (51.3% female) who took part in the My World Survey-Second Level (MWS-SL). Regarding PBS, Confirmatory Factor Analysis established that problem behaviors, such as alcohol and drug use loaded significantly onto a single, latent construct for males and females. Using Structural Equation Modeling, the PBT framework was found to be a good fit for males and females. Socio-demographic, perceived environment system and personality accounted for over 40% of the variance in problem behaviors for males and females. Our findings have important implications for understanding how differences in engaging in problem behaviors vary by gender. PMID:27161989

  10. How does observation uncertainty influence which stream water samples are most informative for model calibration?

    NASA Astrophysics Data System (ADS)

    Wang, Ling; van Meerveld, Ilja; Seibert, Jan

    2016-04-01

    Streamflow isotope samples taken during rainfall-runoff events are very useful for multi-criteria model calibration because they can help decrease parameter uncertainty and improve internal model consistency. However, the number of samples that can be collected and analysed is often restricted by practical and financial constraints. It is, therefore, important to choose an appropriate sampling strategy and to obtain samples that have the highest information content for model calibration. We used the Birkenes hydrochemical model and synthetic rainfall, streamflow and isotope data to explore which samples are most informative for model calibration. Starting with error-free observations, we investigated how many samples are needed to obtain a certain model fit. Based on different parameter sets, representing different catchments, and different rainfall events, we also determined which sampling times provide the most informative data for model calibration. Our results show that simulation performance for models calibrated with the isotopic data from two intelligently selected samples was comparable to simulations based on isotopic data for all 100 time steps. The models calibrated with the intelligently selected samples also performed better than the model calibrations with two benchmark sampling strategies (random selection and selection based on hydrologic information). Surprisingly, samples on the rising limb and at the peak were less informative than expected and, generally, samples taken at the end of the event were most informative. The timing of the most informative samples depends on the proportion of different flow components (baseflow, slow response flow, fast response flow and overflow). For events dominated by baseflow and slow response flow, samples taken at the end of the event after the fast response flow has ended were most informative; when the fast response flow was dominant, samples taken near the peak were most informative. However when overflow

  11. Tantalum strength model incorporating temperature, strain rate and pressure

    NASA Astrophysics Data System (ADS)

    Lim, Hojun; Battaile, Corbett; Brown, Justin; Lane, Matt

    Tantalum is a body-centered-cubic (BCC) refractory metal that is widely used in applications involving high-temperature, high-strain-rate and high-pressure environments. In this work, we propose a physically based strength model for tantalum that incorporates the effects of temperature, strain rate and pressure. A constitutive model for single crystal tantalum is developed based on dislocation kink-pair theory and calibrated to measurements on single crystal specimens. The model is then used to predict deformations of single- and polycrystalline tantalum. In addition, the proposed strength model is implemented into Sandia's ALEGRA solid dynamics code to predict plastic deformations of tantalum in engineering-scale applications at extreme conditions, e.g. Taylor impact tests and the Z machine's high-pressure ramp compression tests, and the results are compared with available experimental data. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
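
    The abstract does not give the functional form of the model. As a rough illustration of the kind of thermally activated (kink-pair-type) flow-stress law commonly used for BCC metals, the sketch below uses a generic Kocks-Mecking expression; it is not the Sandia tantalum model, and every parameter value is a placeholder:

    ```python
    import numpy as np

    # Generic thermally activated flow-stress form for BCC metals (Kocks-Mecking type).
    # NOT the calibrated Sandia tantalum model; parameters are illustrative placeholders.

    k_B = 8.617e-5          # Boltzmann constant [eV/K]

    def flow_stress(T, strain_rate, P,
                    sigma_a=50e6,        # athermal component [Pa]
                    sigma_0=1.1e9,       # Peierls-controlled threshold at 0 K [Pa]
                    dG0=0.84,            # activation energy for kink-pair nucleation [eV]
                    eps0=1e7,            # reference strain rate [1/s]
                    p=0.5, q=1.5,        # obstacle-profile exponents
                    dG_dP=1.0e-11):      # crude pressure dependence of dG0 [eV/Pa]
        dG = dG0 + dG_dP * P
        x = (k_B * T / dG) * np.log(eps0 / strain_rate)
        x = np.clip(x, 0.0, 1.0)         # thermal part vanishes above the critical temperature
        return sigma_a + sigma_0 * (1.0 - x**(1.0 / q))**(1.0 / p)

    for T in (300.0, 700.0, 1100.0):
        print(T, "K ->", flow_stress(T, strain_rate=1e3, P=0.0) / 1e6, "MPa")
    ```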

  12. Modeling of Boehmite Leaching from Actual Hanford High-Level Waste Samples

    SciTech Connect

    Snow, L.A.; Rapko, B.M.; Poloski, A.P.; Peterson, R.A.

    2007-07-01

    The U.S. Department of Energy plans to vitrify approximately 60,000 metric tons of high-level waste (HLW) sludge from underground storage tanks at the Hanford Site in southeastern Washington State. To reduce the volume of HLW requiring treatment, a goal has been set to remove a significant quantity of the aluminum, which comprises nearly 70 percent of the sludge. Aluminum is found in the form of gibbsite and sodium aluminate, which can be easily dissolved by washing the waste stream with caustic, and boehmite, which comprises nearly half of the total aluminum but is more resistant to caustic dissolution and requires higher treatment temperatures and hydroxide concentrations. Chromium, which makes up a much smaller fraction ({approx}3%) of the sludge, must also be removed because there is a low tolerance for chromium in the HLW immobilization process. In this work, the coupled dissolution kinetics of aluminum and chromium species during caustic leaching of actual Hanford HLW samples is examined. The experimental results are used to develop a model that provides a basis for predicting dissolution dynamics from known process temperature and hydroxide concentration. (authors)
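
    As a purely illustrative sketch of the kind of rate law such a dissolution model might take, the snippet below combines first-order leaching with an Arrhenius temperature dependence and a power-law hydroxide dependence; the rate constant, activation energy and reaction order are placeholders, not values fitted to the Hanford samples:

    ```python
    import numpy as np

    # Illustrative first-order leaching kinetics with Arrhenius temperature dependence
    # and a power-law dependence on hydroxide concentration.  All parameter values are
    # placeholders, not the fitted Hanford model.

    R = 8.314                      # gas constant [J/(mol K)]

    def boehmite_remaining(t_hours, T_kelvin, oh_molar,
                           k0=5.0e11, Ea=90e3, n=1.0):
        k = k0 * np.exp(-Ea / (R * T_kelvin)) * oh_molar**n    # effective rate [1/h]
        return np.exp(-k * t_hours)                            # fraction of boehmite remaining

    t = np.linspace(0.0, 24.0, 7)                              # leach time [h]
    for T_C, oh in [(85.0, 1.0), (85.0, 3.0), (100.0, 3.0)]:
        frac = boehmite_remaining(t, T_C + 273.15, oh)
        print(f"T={T_C} C, [OH-]={oh} M ->", np.round(frac, 2))
    ```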

  13. Temperature dependence of full set tensor properties of KTiOPO4 single crystal measured from one sample

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Tang, Liguo; Ji, Nianjing; Liu, Gang; Wang, Jiyang; Jiang, Huaidong; Cao, Wenwu

    2016-03-01

    The temperature dependence of the complete set of elastic, dielectric, and piezoelectric constants of KTiOPO4 single crystal has been measured from 20 °C to 150 °C. All 17 independent constants for the mm2 symmetry piezoelectric crystal were measured from one sample using extended resonance ultrasound spectroscopy (RUS), which guaranteed the self-consistency of the matrix data. The unique characteristics of the RUS method allowed the accomplishment of such a challenging task, which could not be done by any other existing method. It was found that the elastic constants (c11^E, c13^E, c22^E, and c33^E) and piezoelectric constants (d15, d24, and d32) strongly depend on temperature, while the other constants are only weakly temperature dependent in this temperature range. These as-grown single-domain data allowed us to calculate the orientation dependence of the elastic, dielectric, and piezoelectric properties of KTiOPO4, which is useful for finding the optimum cut for particular applications.

  14. Rheological modelling of physiological variables during temperature variations at rest.

    PubMed

    Vogelaere, P; De Meyer, F

    1990-08-01

    The evolution with time of cardio-respiratory variables, blood pressure and body temperature has been studied in six males, resting in semi-nude conditions during short (30 min) cold stress exposure (0 degrees C) and during passive recovery (60 min) at 20 degrees C. Passive cold exposure does not induce a change in HR but increases VO2, VCO2, Ve and core temperature Tre, whereas peripheral temperature is significantly lowered. The kinetic evolution of the studied variables was investigated using a Kelvin-Voigt rheological model. The results suggest that the human body, and by extension the measured physiological variables of its functioning, does not react as a perfect viscoelastic system. Cold exposure induces a more rapid adaptation for heart rate, blood pressure and skin temperatures than that observed during the rewarming period (20 degrees C), whereas respiratory adjustments show the opposite evolution. During the cooling period of the experiment the adaptive mechanisms, taking effect to preserve core homeothermy and to obtain a higher oxygen supply, increase the energy loss of the body. PMID:2228298
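
    For readers unfamiliar with the rheological analogy: a Kelvin-Voigt element responds to a step load with an exponential approach to a new equilibrium, which is the behaviour being fitted to the physiological time series. A minimal sketch with made-up parameter values (not the paper's fitted constants):

    ```python
    import numpy as np

    # Kelvin-Voigt element: sigma = E*eps + eta*deps/dt.  Under a step load sigma0, the
    # response (here standing in for a physiological variable such as skin temperature
    # change) relaxes exponentially towards sigma0/E with time constant tau = eta/E.
    # Parameter values are illustrative only.

    def kelvin_voigt_step(t, sigma0=1.0, E=0.5, eta=5.0):
        tau = eta / E
        return (sigma0 / E) * (1.0 - np.exp(-t / tau))

    t = np.linspace(0.0, 30.0, 7)          # minutes of cold exposure
    print(np.round(kelvin_voigt_step(t), 3))
    ```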

  15. Low reheating temperatures in monomial and binomial inflationary models

    NASA Astrophysics Data System (ADS)

    Rehagen, Thomas; Gelmini, Graciela B.

    2015-06-01

    We investigate the allowed range of reheating temperature values in light of the Planck 2015 results and the recent joint analysis of Cosmic Microwave Background (CMB) data from the BICEP2/Keck Array and Planck experiments, using monomial and binomial inflationary potentials. While the well-studied phi^2 inflationary potential, as well as phi^p potentials with p > 2, is no longer favored by current CMB data, a phi^1 potential and canonical reheating (w_re = 0) provide a good fit to the CMB measurements. In this last case, we find that the Planck 2015 68% confidence limit upper bound on the spectral index, n_s, implies an upper bound on the reheating temperature of T_re ≲ 6 × 10^10 GeV, and excludes instantaneous reheating. The low reheating temperatures allowed by this model open the possibility that dark matter could be produced during the reheating period instead of when the Universe is radiation dominated, which could lead to very different predictions for the relic density and momentum distribution of WIMPs, sterile neutrinos, and axions. We also study binomial inflationary potentials and show the effects of a small departure from a phi^1 potential. We find that as a subdominant phi^2 term in the potential increases, first instantaneous reheating becomes allowed, and then the lowest possible reheating temperature of T_re = 4 MeV is excluded by the Planck 2015 68% confidence limit.

  16. An error model for GCM precipitation and temperature simulations

    NASA Astrophysics Data System (ADS)

    Sharma, A.; Woldemeskel, F.; Mehrotra, R.; Sivakumar, B.

    2012-04-01

    Water resources assessments for future climates require meaningful simulations of likely precipitation and evaporation for simulation of flow and derived quantities of interest. The current approach for making such assessments involves using simulations from one or a handful of General Circulation Models (GCMs), for usually one assumed future greenhouse gas emission scenario, deriving associated flows and the planning or design attributes required, and using these as the basis of any planning or design that is needed. An assumption implicit in this approach is that the single or multiple simulations being considered are representative of what is likely to occur in the future. Is this a reasonable assumption to make and use in designing future water resources infrastructure? Is the uncertainty in the simulations captured through this process a real reflection of the likely uncertainty, even though only a handful of GCMs are considered? Can one, instead, develop a measure of this uncertainty for a given GCM simulation for all variables in space and time, and use this information as the basis of water resources planning (similar to using "input uncertainty" in rainfall-runoff modelling)? These are some of the questions we address in the course of this presentation. We present here a new basis for assigning a measure of uncertainty to GCM simulations of precipitation and temperature. Unlike other alternatives which assess overall GCM uncertainty, our approach leads to a unique measure of uncertainty in the variable of interest for each simulated value in space and time. We refer to this as an error model of GCM precipitation and temperature simulations, to allow a complete assessment of the merits or demerits associated with future infrastructure options being considered, or mitigation plans being devised. The presented error model quantifies the error variance of GCM monthly precipitation and temperature, and reports it as the Square Root Error Variance (SREV

  17. Accuracy of Parameter Estimation in Gibbs Sampling under the Two-Parameter Logistic Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho; Cohen, Allan S.

    The accuracy of Gibbs sampling, a Markov chain Monte Carlo procedure, was considered for estimation of item and ability parameters under the two-parameter logistic model. Memory test data were analyzed to illustrate the Gibbs sampling procedure. Simulated data sets were analyzed using Gibbs sampling and the marginal Bayesian method. The marginal…

  18. 12 CFR Appendix B to Part 230 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 3 2011-01-01 2011-01-01 false Model Clauses and Sample Forms B Appendix B to... SYSTEM TRUTH IN SAVINGS (REGULATION DD) Pt. 230, App. B Appendix B to Part 230—Model Clauses and Sample Forms Table of contents B-1—Model Clauses for Account Disclosures (Section 230.4(b)) B-2—Model...

  19. 12 CFR Appendix B to Part 1030 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 9 2014-01-01 2014-01-01 false Model Clauses and Sample Forms B Appendix B to.... 1030, App. B Appendix B to Part 1030—Model Clauses and Sample Forms Table of Contents B-1—Model Clauses for Account Disclosures (Section 1030.4(b)) B-2—Model Clauses for Change in Terms (Section...

  20. 12 CFR Appendix B to Part 1030 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 8 2012-01-01 2012-01-01 false Model Clauses and Sample Forms B Appendix B to.... 1030, App. B Appendix B to Part 1030—Model Clauses and Sample Forms Table of Contents B-1—Model Clauses for Account Disclosures (Section 1030.4(b)) B-2—Model Clauses for Change in Terms (Section...

  1. 12 CFR Appendix B to Part 1030 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Model Clauses and Sample Forms B Appendix B to.... 1030, App. B Appendix B to Part 1030—Model Clauses and Sample Forms Table of Contents B-1—Model Clauses for Account Disclosures (Section 1030.4(b)) B-2—Model Clauses for Change in Terms (Section...

  2. Room temperature ionic liquid-mediated molecularly imprinted polymer monolith for the selective recognition of quinolones in pork samples.

    PubMed

    Sun, Xiangli; He, Jia; Cai, Guorui; Lin, Anqing; Zheng, Wenjie; Liu, Xuan; Chen, Langxing; He, Xiwen; Zhang, Yukui

    2010-12-01

    A novel molecularly imprinted polymer monolith was prepared by a room temperature ionic liquid-mediated in situ molecular imprinting technique, using norfloxacin (NOR) as the template, methacrylic acid as the functional monomer, and ethylene glycol dimethacrylate as the cross-linker. The optimal synthesis conditions and recognition properties of the NOR-imprinted monolithic column were investigated. The results indicated that the imprinted monoliths exhibited good selective recognition of the template and its structural analogs. Using the fabricated material as a solid-phase extraction sorbent, a sample pre-treatment procedure of molecularly imprinted solid-phase extraction coupled with HPLC was developed for the determination of trace quinolone residues in animal tissue samples. Recoveries ranging from 78.16 to 93.50% were obtained for eight quinolone antibiotics: marbofloxacin, NOR, ciprofloxacin, danofloxacin, difloxacin, oxolinic acid, flumequine and enrofloxacin. PMID:21082676

  3. Fluid sampling and chemical modeling of geopressured brines containing methane. Final report, March 1980-February 1981

    SciTech Connect

    Dudak, B.; Galbraith, R.; Hansen, L.; Sverjensky, D.; Weres, O.

    1982-07-01

    The development of a flow-through sampler capable of obtaining fluid samples from geopressured wells at temperatures up to 400 °F and pressures up to 20,000 psi is described. The sampler has been designed, fabricated from MP35N alloy, laboratory tested, and used to obtain fluid samples from a geothermal well at The Geysers, California. However, it has not yet been used in a geopressured well. The design features, test results, and operation of this device are described. Alternative sampler designs are also discussed. Another activity was to review the chemistry and geochemistry of geopressured brines and reservoirs, and to evaluate the utility of available computer codes for modeling the chemistry of geopressured brines. The thermodynamic data bases for such codes are usually the limiting factor in their application to geopressured systems, but it was concluded that existing codes can be updated with reasonable effort and can usefully explain and predict the chemical characteristics of geopressured systems, given suitable input data.

  4. MgO melting curve constraints from shock temperature and rarefaction overtake measurements in samples preheated to 2300 K

    NASA Astrophysics Data System (ADS)

    Fat'yanov, O. V.; Asimow, P. D.

    2014-05-01

    Continuing our effort to obtain experimental constraints on the melting curve of MgO at 100-200 GPa, we extended our target preheating capability to 2300 K. Our new Mo capsule design holds a long MgO crystal in a controlled thermal gradient until impact by a Ta flyer launched at up to 7.5 km/s on the Caltech two-stage light-gas gun. Radiative shock temperatures and rarefaction overtake times were measured simultaneously by a 6-channel VIS/NIR pyrometer with 3 ns time resolution. The majority of our experiments showed smooth monotonic increases in MgO sound speed and shock temperature with pressure from 197 to 243 GPa. The measured temperatures as well as the slopes of the pressure dependences for both temperature and sound speed were in good agreement with those calculated numerically for the solid phase at our peak shock compression conditions. Most observed sound speeds, however, were ~800 m/s higher than those predicted by the model. A single unconfirmed data point at 239 GPa showed anomalously low temperature and sound speed, which could both be explained by partial melting in this experiment and could suggest that the Hugoniot of MgO preheated to 2300 K crosses its melting line just slightly above 240 GPa.

  5. Comparison of ET estimations by the three-temperature model, SEBAL model and eddy covariance observations

    NASA Astrophysics Data System (ADS)

    Zhou, Xinyao; Bi, Shaojie; Yang, Yonghui; Tian, Fei; Ren, Dandan

    2014-11-01

    The three-temperature (3T) model is a simple model which estimates plant transpiration from only temperature data. In-situ field experimental results have shown that 3T is a reliable evapotranspiration (ET) estimation model. Despite encouraging results from recent efforts extending the 3T model to remote sensing applications, the literature shows limited comparisons of the 3T model with other remote-sensing-driven ET models. This research used ET obtained from eddy covariance to evaluate the 3T model and in turn compared the model-simulated ET with that of the more traditional SEBAL (Surface Energy Balance Algorithm for Land) model. A field experiment was conducted in the cotton fields of the Taklamakan desert oasis in Xinjiang, Northwest China. Radiation and surface temperature were obtained from hyperspectral and thermal infrared images for clear days in 2013. The images covered the period of 0900-1800 h at four different phenological stages of cotton. Meteorological data were automatically recorded at a station located at the center of the cotton field. Results showed that the 3T model accurately captured daily and seasonal variations in ET. As low dry-soil surface temperatures induced significant errors in the 3T model, it was unsuitable for estimating ET in the early morning and late afternoon periods. The model-simulated ET was relatively more accurate for the squaring, bolling and boll-opening stages than for the seedling stage of cotton, when ET was generally low. Wind speed was apparently not a limiting factor of ET in the 3T model. This was attributed to the fact that surface temperature, a vital input of the model, indirectly accounted for the effect of wind speed on ET. Although the 3T model slightly overestimated ET compared with SEBAL and eddy covariance, it was generally reliable for estimating daytime ET during 0900-1600 h.

  6. Airfoil sampling of a pulsed Laval beam with tunable vacuum ultraviolet (VUV) synchrotron ionization quadrupole mass spectrometry: Application to low-temperature kinetics and product detection

    SciTech Connect

    Soorkia, Satchin; Liu, Chen-Lin; Savee, John D; Ferrell, Sarah J; Leone, Stephen R; Wilson, Kevin R

    2011-10-12

    A new pulsed Laval nozzle apparatus with vacuum ultraviolet (VUV) synchrotron photoionization quadrupole mass spectrometry is constructed to study low-temperature radical-neutral chemical reactions of importance for modeling the atmosphere of Titan and the outer planets. A design for the sampling geometry of a pulsed Laval nozzle expansion has been developed that operates successfully for the determination of rate coefficients by time-resolved mass spectrometry. The new concept employs airfoil sampling of the collimated expansion with excellent sampling throughput. Time-resolved profiles of the high Mach number gas flow obtained by photoionization signals show that perturbation of the collimated expansion by the airfoil is negligible. The reaction of C2H with C2H2 is studied at 70 K as a proof-of-principle result for both low-temperature rate coefficient measurements and product identification based on the photoionization spectrum of the reaction product versus VUV photon energy. This approach can be used to provide new insights into reaction mechanisms occurring at kinetic rates close to the collision-determined limit.

  7. Airfoil sampling of a pulsed Laval beam with tunable vacuum ultraviolet synchrotron ionization quadrupole mass spectrometry: application to low-temperature kinetics and product detection.

    PubMed

    Soorkia, Satchin; Liu, Chen-Lin; Savee, John D; Ferrell, Sarah J; Leone, Stephen R; Wilson, Kevin R

    2011-12-01

    A new pulsed Laval nozzle apparatus with vacuum ultraviolet (VUV) synchrotron photoionization quadrupole mass spectrometry is constructed to study low-temperature radical-neutral chemical reactions of importance for modeling the atmosphere of Titan and the outer planets. A design for the sampling geometry of a pulsed Laval nozzle expansion has been developed that operates successfully for the determination of rate coefficients by time-resolved mass spectrometry. The new concept employs airfoil sampling of the collimated expansion with excellent sampling throughput. Time-resolved profiles of the high Mach number gas flow obtained by photoionization signals show that perturbation of the collimated expansion by the airfoil is negligible. The reaction of C(2)H with C(2)H(2) is studied at 70 K as a proof-of-principle result for both low-temperature rate coefficient measurements and product identification based on the photoionization spectrum of the reaction product versus VUV photon energy. This approach can be used to provide new insights into reaction mechanisms occurring at kinetic rates close to the collision-determined limit. PMID:22225233

  8. Airfoil sampling of a pulsed Laval beam with tunable vacuum ultraviolet synchrotron ionization quadrupole mass spectrometry: Application to low-temperature kinetics and product detection

    NASA Astrophysics Data System (ADS)

    Soorkia, Satchin; Liu, Chen-Lin; Savee, John D.; Ferrell, Sarah J.; Leone, Stephen R.; Wilson, Kevin R.

    2011-12-01

    A new pulsed Laval nozzle apparatus with vacuum ultraviolet (VUV) synchrotron photoionization quadrupole mass spectrometry is constructed to study low-temperature radical-neutral chemical reactions of importance for modeling the atmosphere of Titan and the outer planets. A design for the sampling geometry of a pulsed Laval nozzle expansion has been developed that operates successfully for the determination of rate coefficients by time-resolved mass spectrometry. The new concept employs airfoil sampling of the collimated expansion with excellent sampling throughput. Time-resolved profiles of the high Mach number gas flow obtained by photoionization signals show that perturbation of the collimated expansion by the airfoil is negligible. The reaction of C2H with C2H2 is studied at 70 K as a proof-of-principle result for both low-temperature rate coefficient measurements and product identification based on the photoionization spectrum of the reaction product versus VUV photon energy. This approach can be used to provide new insights into reaction mechanisms occurring at kinetic rates close to the collision-determined limit.

  9. Determination of trace elements in biological samples by inductively coupled plasma mass spectrometry with tetramethylammonium hydroxide solubilization at room temperature.

    PubMed

    Batista, Bruno Lemos; Grotto, Denise; Rodrigues, Jairo Lisboa; Souza, Vanessa Cristina de Oliveira; Barbosa, Fernando

    2009-07-30

    A simple method for the preparation of biological samples for trace element determination by inductively coupled plasma mass spectrometry (ICP-MS) is described. Prior to analysis, 75 mg of the biological samples were accurately weighed into 15 mL conical tubes. Then, 1 mL of 50% (v/v) tetramethylammonium hydroxide (TMAH) solution was added to the samples, which were incubated at room temperature for 12 h, and the volume was made up to 10 mL with a solution containing 0.5% (v/v) HNO(3), 0.01% (v/v) Triton X-100 and 10 microg L(-1) of Rh. After preparation, samples may be stored at -20 degrees C for 3 days until analysis by ICP-MS. Under these conditions, the use of the dynamic reaction cell was only mandatory for chromium determination. Method detection limits were 0.2145, 0.0020, 0.0051, 0.0017, 0.0027, 0.0189, 0.02, 0.5, 0.1, 0.0030, 0.0043, 0.0066, 0.0009, 0.020, 0.0043, 0.1794, 0.1 microg(-1) for Al, As, Ba, Cd, Co, Cr, Cu, Fe, Mg, Mn, Mo, Pb, Sb, Se, Sr, V and Zn, respectively. Validation data are provided based on the analysis of six certified reference materials (CRMs) purchased from the National Institute of Standards and Technology (NIST) and the National Research Council Canada (NRCC). Additional validation was provided by the analysis of brain, kidney, liver and heart samples collected from rats and analyzed by the proposed method and by microwave digestion. PMID:19523552

  10. Baryon number dissipation at finite temperature in the standard model

    SciTech Connect

    Mottola, E.; Raby, S.; Starkman, G.

    1990-01-01

    We analyze the phenomenon of baryon number violation at finite temperature in the standard model, and derive the relaxation rate for the baryon density in the high temperature electroweak plasma. The relaxation rate, {gamma} is given in terms of real time correlation functions of the operator E{center dot}B, and is directly proportional to the sphaleron transition rate, {Gamma}: {gamma} {preceq} n{sub f}{Gamma}/T{sup 3}. Hence it is not instanton suppressed, as claimed by Cohen, Dugan and Manohar (CDM). We show explicitly how this result is consistent with the methods of CDM, once it is recognized that a new anomalous commutator is required in their approach. 19 refs., 2 figs.

  11. Temperature Effect on Micelle Formation: Molecular Thermodynamic Model Revisited.

    PubMed

    Khoshnood, Atefeh; Lukanov, Boris; Firoozabadi, Abbas

    2016-03-01

    Temperature affects the aggregation of macromolecules such as surfactants, polymers, and proteins in aqueous solutions. The effect on the critical micelle concentration (CMC) is often nonmonotonic. In this work, the effect of temperature on the micellization of ionic and nonionic surfactants in aqueous solutions is studied using a molecular thermodynamic model. Previous studies based on this technique have predicted monotonic behavior for ionic surfactants. Our investigation shows that the choice of tail transfer energy to describe the hydrophobic effect between the surfactant tails and the polar solvent molecules plays a key role in the predicted CMC. We modify the tail transfer energy by taking into account the effect of the surfactant head on the neighboring methylene group. The modification improves the description of the CMC and the predicted micellar size for aqueous solutions of sodium n-alkyl sulfate, dodecyl trimethylammonium bromide (DTAB), and n-alkyl polyoxyethylene. The new tail transfer energy describes the nonmonotonic behavior of CMC versus temperature. In the DTAB-water system, we redefine the head size by including the methylene group, next to the nitrogen, in the head. The change in the head size along with our modified tail transfer energy improves the CMC and aggregation size prediction significantly. Tail transfer is a dominant energy contribution in micellar and microemulsion systems. It also promotes the adsorption of surfactants at fluid-fluid interfaces and affects the formation of adsorbed layer at fluid-solid interfaces. Our proposed modifications have direct applications in the thermodynamic modeling of the effect of temperature on molecular aggregation, both in the bulk and at the interfaces. PMID:26854650

  12. GY SAMPLING THEORY AND GEOSTATISTICS: ALTERNATE MODELS OF VARIABILITY IN CONTINUOUS MEDIA

    EPA Science Inventory



    In the sampling theory developed by Pierre Gy, sample variability is modeled as the sum of a set of seven discrete error components. The variogram used in geostatistics provides an alternate model in which several of Gy's error components are combined in a continuous mode...

  13. The Effects of Sample Size, Estimation Methods, and Model Specification on SEM Indices.

    ERIC Educational Resources Information Center

    Fan, Xitao; And Others

    A Monte Carlo simulation study was conducted to investigate the effects of sample size, estimation method, and model specification on structural equation modeling (SEM) fit indices. Based on a balanced 3x2x5 design, a total of 6,000 samples were generated from a prespecified population covariance matrix, and eight popular SEM fit indices were…

  14. Application of the Tripartite Model to a Complicated Sample of Residential Youth with Externalizing Problems

    ERIC Educational Resources Information Center

    Chin, Eu Gene; Ebesutani, Chad; Young, John

    2013-01-01

    The tripartite model of anxiety and depression has received strong support among child and adolescent populations. Clinical samples of children and adolescents in these studies, however, have usually been referred for treatment of anxiety and depression. This study investigated the fit of the tripartite model with a complicated sample of…

  15. Techniques for Down-Sampling a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to down-sample a measured surface map for model validation not only without introducing any re-sampling errors, but also while eliminating the existing measurement noise and measurement errors. The software tool implementing the two new techniques can be used in all optical model validation processes involving large space optical surfaces.

  16. Temperature effects on pitfall catches of epigeal arthropods: a model and method for bias correction.

    PubMed

    Saska, Pavel; van der Werf, Wopke; Hemerik, Lia; Luff, Martin L; Hatten, Timothy D; Honek, Alois; Pocock, Michael

    2013-02-01

    Carabids and other epigeal arthropods make important contributions to biodiversity, food webs and biocontrol of invertebrate pests and weeds. Pitfall trapping is widely used for sampling carabid populations, but this technique yields biased estimates of abundance ('activity-density') because individual activity - which is affected by climatic factors - affects the rate of catch. To date, the impact of temperature on pitfall catches, while suspected to be large, has not been quantified, and no method is available to account for it. This lack of knowledge and the unavailability of a method for bias correction affect the confidence that can be placed on results of ecological field studies based on pitfall data. Here, we develop a simple model for the effect of temperature, assuming a constant proportional change in the rate of catch per °C change in temperature, r, consistent with an exponential Q10 response to temperature. We fit this model to 38 time series of pitfall catches and accompanying temperature records from the literature, using first differences and other detrending methods to account for seasonality. We use meta-analysis to assess consistency of the estimated parameter r among studies. The mean rate of increase in total catch across data sets was 0.0863 ± 0.0058 per °C of maximum temperature and 0.0497 ± 0.0107 per °C of minimum temperature. Multiple regression analyses of 19 data sets showed that temperature is the key climatic variable affecting total catch. Relationships between temperature and catch were also identified at species level. Correction for temperature bias had substantial effects on seasonal trends of carabid catches. Synthesis and applications: The effect of temperature on pitfall catches is shown here to be substantial and worthy of consideration when interpreting results of pitfall trapping. The exponential model can be used both for effect estimation and for bias correction of observed data. Correcting for temperature
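
    A minimal sketch of the correction idea described above: assume catches scale as exp(r*T), estimate r from first differences of log catches against first differences of temperature (the detrending step), then rescale the observed catches to a reference temperature. The data and the true r below are synthetic, for illustration only:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic weekly pitfall time series: true (unobserved) activity-density with a
    # seasonal trend, modulated by an exponential temperature response exp(r*T).
    weeks = np.arange(30)
    temperature = 15 + 8 * np.sin(2 * np.pi * weeks / 30)          # deg C
    true_density = 50 + 20 * np.sin(2 * np.pi * (weeks - 5) / 30)
    r_true = 0.08                                                   # per deg C
    catch = rng.poisson(true_density * np.exp(r_true * (temperature - temperature.mean())))

    # Estimate r from first differences of log(catch) vs. first differences of T,
    # which removes the smooth seasonal trend (the detrending idea used in the paper).
    dlog_catch = np.diff(np.log(catch + 0.5))
    dT = np.diff(temperature)
    r_hat = np.polyfit(dT, dlog_catch, 1)[0]

    # Bias correction: rescale catches to a common reference temperature.
    corrected = catch / np.exp(r_hat * (temperature - temperature.mean()))

    print("estimated r per deg C:", round(float(r_hat), 3))
    print("raw vs corrected catch (first 5 weeks):", catch[:5], np.round(corrected[:5], 1))
    ```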

  17. Model for temperature-dependent magnetization of nanocrystalline materials

    SciTech Connect

    Bian, Q.; Niewczas, M.

    2015-01-07

    A magnetization model of nanocrystalline materials incorporating intragrain anisotropies, intergrain interactions, and texture effects has been extended to include thermal fluctuations. The method relies on the stochastic Landau–Lifshitz–Gilbert theory of magnetization dynamics and permits the study of the magnetic properties of nanocrystalline materials at arbitrary temperature below the Curie temperature. The model has been used to determine the intergrain exchange constant and grain boundary anisotropy constant of nanocrystalline Ni at 100 K and 298 K. It is found that the thermal fluctuations suppress the strength of the intergrain exchange coupling and also reduce the grain boundary anisotropy. In comparison with its value at 2 K, the interparticle exchange constant decreases by 16% and 42%, and the grain boundary anisotropy constant decreases by 28% and 40%, at 100 K and 298 K, respectively. An application of the model to study grain-size-dependent magnetization indicates that when the thermal activation energy is comparable to the free energy of grains, a decrease in the grain size leads to a decrease in the magnetic permeability and saturation magnetization. The mechanism by which the grain size influences the magnetic properties of nc–Ni is discussed.

  18. Model for temperature-dependent magnetization of nanocrystalline materials

    NASA Astrophysics Data System (ADS)

    Bian, Q.; Niewczas, M.

    2015-01-01

    A magnetization model of nanocrystalline materials incorporating intragrain anisotropies, intergrain interactions, and texture effects has been extended to include thermal fluctuations. The method relies on the stochastic Landau-Lifshitz-Gilbert theory of magnetization dynamics and permits the study of the magnetic properties of nanocrystalline materials at arbitrary temperature below the Curie temperature. The model has been used to determine the intergrain exchange constant and grain boundary anisotropy constant of nanocrystalline Ni at 100 K and 298 K. It is found that the thermal fluctuations suppress the strength of the intergrain exchange coupling and also reduce the grain boundary anisotropy. In comparison with its value at 2 K, the interparticle exchange constant decreases by 16% and 42%, and the grain boundary anisotropy constant decreases by 28% and 40%, at 100 K and 298 K, respectively. An application of the model to study grain-size-dependent magnetization indicates that when the thermal activation energy is comparable to the free energy of grains, a decrease in the grain size leads to a decrease in the magnetic permeability and saturation magnetization. The mechanism by which the grain size influences the magnetic properties of nc-Ni is discussed.

  19. A simplified physically-based model to calculate surface water temperature of lakes from air temperature in climate change scenarios

    NASA Astrophysics Data System (ADS)

    Piccolroaz, S.; Toffolon, M.

    2012-12-01

    Modifications of water temperature are crucial for the ecology of lakes, but long-term analyses are not usually able to provide reliable estimations. This is particularly true for climate change studies based on Global Circulation Models, whose mesh size is normally too coarse for explicitly including even some of the biggest lakes on Earth. On the other hand, modeled predictions of air temperature changes are more reliable, and long-term, high-resolution air temperature observational datasets are more available than water temperature measurements. For these reasons, air temperature series are often used to obtain some information about the surface temperature of water bodies. In order to do that, it is common to exploit regression models, but they are questionable especially when it is necessary to extrapolate current trends beyond maximum (or minimum) measured temperatures. Moreover, water temperature is influenced by a variety of processes of heat exchange across the lake surface and by the thermal inertia of the water mass, which also causes an annual hysteresis cycle between air and water temperatures that is hard to consider in regressions. In this work we propose a simplified, physically-based model for the estimation of the epilimnetic temperature in lakes. Starting from the zero-dimensional heat budget, we derive a simplified first-order differential equation for water temperature, primarily forced by a seasonally varying external term (mainly related to solar radiation) and an exchange term explicitly depending on the difference between air and water temperatures. Assuming annual sinusoidal cycles of the main heat flux components at the atmosphere-lake interface, eight parameters (some of them can be disregarded, though) are identified, which can be calibrated if two temporal series of air and water temperature are available. We note that such a calibration is supported by the physical interpretation of the parameters, which provide good initial
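
    A toy version of such a lumped model is sketched below: the epilimnion temperature follows a first-order equation driven by a sinusoidal seasonal term plus relaxation toward air temperature, integrated with forward Euler. The structure is simplified to four parameters and all values are illustrative assumptions, not the calibrated eight-parameter model of the paper:

    ```python
    import numpy as np

    # Toy zero-dimensional lake surface temperature model:
    #   dTw/dt = a1 + a2*cos(2*pi*(t - a3)/365) + a4*(Ta - Tw)
    # integrated with forward Euler at daily steps.  All parameters are illustrative.

    def simulate(Ta_series, a1=0.0, a2=0.06, a3=200.0, a4=0.02, Tw0=5.0):
        Tw = np.empty_like(Ta_series)
        Tw[0] = Tw0
        for t in range(1, len(Ta_series)):
            forcing = a1 + a2 * np.cos(2 * np.pi * (t - a3) / 365.0)
            Tw[t] = max(Tw[t - 1] + forcing + a4 * (Ta_series[t - 1] - Tw[t - 1]), 0.0)
        return Tw

    days = np.arange(3 * 365)
    Ta = 10 + 12 * np.sin(2 * np.pi * (days - 120) / 365)    # synthetic air temperature [degC]
    Tw = simulate(Ta)
    lag = np.argmax(Tw[365:730]) - np.argmax(Ta[365:730])    # annual hysteresis between air and water
    print("water temperature peaks about", int(lag), "days after air temperature")
    ```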

  20. Fast and accurate Monte Carlo sampling of first-passage times from Wiener diffusion models

    PubMed Central

    Drugowitsch, Jan

    2016-01-01

    We present a new, fast approach for drawing boundary-crossing samples from Wiener diffusion models. Diffusion models are widely applied to model choices and reaction times in two-choice decisions. Samples from these models can be used to simulate the choices and reaction times they predict. These samples, in turn, can be utilized to adjust the models’ parameters to match observed behavior from humans and other animals. Usually, such samples are drawn by simulating a stochastic differential equation in discrete time steps, which is slow and leads to biases in the reaction time estimates. Our method, instead, uses known expressions for first-passage time densities, which results in unbiased, exact samples and a hundred- to thousand-fold speed increase in typical situations. In its most basic form it is restricted to diffusion models with symmetric boundaries and non-leaky accumulation, but our approach can be extended to also handle asymmetric boundaries or to approximate leaky accumulation. PMID:26864391
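
    For contrast, the snippet below sketches the naive baseline the paper improves upon: discrete-time Euler simulation of a two-boundary Wiener diffusion, which yields choices and reaction times but is slow and carries a step-size-dependent bias. Parameter values are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Naive discrete-time simulation of a Wiener diffusion model with two absorbing
    # boundaries -- the slow, biased baseline that exact first-passage-time samplers
    # replace.  Parameters are illustrative.

    def simulate_trial(drift=0.5, bound=1.0, start=0.0, dt=1e-3, max_t=10.0):
        x, t = start, 0.0
        sd = np.sqrt(dt)
        while t < max_t:
            x += drift * dt + sd * rng.standard_normal()
            t += dt
            if x >= bound:
                return t, 1          # upper-boundary choice
            if x <= -bound:
                return t, 0          # lower-boundary choice
        return max_t, -1             # no decision within max_t

    samples = [simulate_trial() for _ in range(1000)]
    rts = np.array([s[0] for s in samples if s[1] >= 0])
    choices = np.array([s[1] for s in samples if s[1] >= 0])
    print("P(upper) =", choices.mean().round(3), " mean RT =", rts.mean().round(3), "s")
    # Note the dt-dependent bias: RTs are systematically overestimated for coarse dt,
    # which is one motivation for exact sampling approaches.
    ```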

  1. A Unified Approach to Power Calculation and Sample Size Determination for Random Regression Models

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2007-01-01

    The underlying statistical models for multiple regression analysis are typically attributed to two types of modeling: fixed and random. The procedures for calculating power and sample size under the fixed regression models are well known. However, the literature on random regression models is limited and has been confined to the case of all…

  2. Modelling mass balance and temperature sensitivity on Shallap glacier, Peru

    NASA Astrophysics Data System (ADS)

    Gurgiser, W.; Marzeion, B.; Nicholson, L. I.; Ortner, M.; Kaser, G.

    2013-12-01

    Due to pronounced dry seasons in the tropical Andes of Peru, glacier melt water is an important factor for year-round water availability for the local society. Andean glaciers have been shrinking during the last decades, but present-day magnitudes of glacier mass balance and sensitivities to changes in atmospheric drivers are not well known. We have therefore calculated the spatially distributed glacier mass and energy balance of Shallap glacier (4700 m - 5700 m, 9°S), Cordillera Blanca, Peru, at hourly time steps for the period Sept. 2006 to Aug. 2008, with records from an AWS close to the glacier as model input. Our model evaluation against measured surface height change in the ablation zone of the glacier shows the model results to be reasonable and within an expected error range. For the mass balance characteristics we found similar vertical gradients and accumulation area ratios, but marked differences in specific mass balance from year to year. The differences were mainly caused by large differences in annual ablation in the glacier area below 5000 m. By comparing the meteorological conditions in both years, we found for the year with the more negative mass balance that total precipitation was only slightly lower but mean annual temperature was higher, and thus so were the fraction of liquid precipitation and the snow line altitude. As shortwave net energy turned out to be the key driver of ablation in all seasons, the deviations in snow line altitude and surface albedo explain most of the deviations in available melt energy. Hence, the mass balance of tropical Shallap glacier was sensitive not only to precipitation but also to temperature, which had not been expected for glaciers in the Peruvian Andes before. We have furthermore investigated the impacts of increasing temperature due to its multiple effects on glacier mass and energy balance (fraction of liquid precipitation, longwave incoming radiation, sensible and latent heat flux). Presenting these results should allow for better

  3. Assessment of precipitation and temperature data from CMIP3 global climate models for hydrologic simulation

    NASA Astrophysics Data System (ADS)

    McMahon, T. A.; Peel, M. C.; Karoly, D. J.

    2015-01-01

    The objective of this paper is to identify better performing Coupled Model Intercomparison Project phase 3 (CMIP3) global climate models (GCMs) that reproduce grid-scale climatological statistics of observed precipitation and temperature for input to hydrologic simulation over global land regions. Current assessments are aimed mainly at examining the performance of GCMs from a climatology perspective and not from a hydrology standpoint. The performance of each GCM in reproducing the precipitation and temperature statistics was ranked and better performing GCMs identified for later analyses. Observed global land surface precipitation and temperature data were drawn from the Climatic Research Unit (CRU) 3.10 gridded data set and re-sampled to the resolution of each GCM for comparison. Observed and GCM-based estimates of mean and standard deviation of annual precipitation, mean annual temperature, mean monthly precipitation and temperature and Köppen-Geiger climate type were compared. The main metrics for assessing GCM performance were the Nash-Sutcliffe efficiency (NSE) index and root mean square error (RMSE) between modelled and observed long-term statistics. This information combined with a literature review of the performance of the CMIP3 models identified the following better performing GCMs from a hydrologic perspective: HadCM3 (Hadley Centre for Climate Prediction and Research), MIROCm (Model for Interdisciplinary Research on Climate) (Center for Climate System Research (The University of Tokyo), National Institute for Environmental Studies, and Frontier Research Center for Global Change), MIUB (Meteorological Institute of the University of Bonn, Meteorological Research Institute of KMA, and Model and Data group), MPI (Max Planck Institute for Meteorology) and MRI (Japan Meteorological Research Institute). The future response of these GCMs was found to be representative of the 44 GCM ensemble members which confirms that the selected GCMs are reasonably
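
    The two ranking metrics named above are standard and easy to reproduce; the sketch below computes NSE and RMSE between an observed and a modelled climatological field, with random arrays standing in for the re-gridded CRU and GCM data:

    ```python
    import numpy as np

    # Sketch of the two skill metrics used to rank GCMs against CRU observations:
    # Nash-Sutcliffe efficiency (NSE) and root mean square error (RMSE), applied to a
    # grid of long-term mean annual precipitation.  The arrays are random placeholders.

    def nse(obs, sim):
        obs, sim = np.ravel(obs), np.ravel(sim)
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def rmse(obs, sim):
        return float(np.sqrt(np.mean((np.ravel(sim) - np.ravel(obs)) ** 2)))

    rng = np.random.default_rng(3)
    obs_precip = rng.gamma(2.0, 400.0, size=(72, 144))                      # mm/yr, placeholder "CRU" field
    gcm_precip = obs_precip * rng.normal(1.0, 0.2, size=obs_precip.shape)   # placeholder GCM field

    print("NSE :", round(float(nse(obs_precip, gcm_precip)), 3))
    print("RMSE:", round(rmse(obs_precip, gcm_precip), 1), "mm/yr")
    ```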

  4. Control and diagnosis of temperature, density, and uniformity in x-ray heated iron/magnesium samples for opacity measurements

    SciTech Connect

    Nagayama, T.; Bailey, J. E.; Loisel, G.; Hansen, S. B.; Rochau, G. A.; Mancini, R. C.; MacFarlane, J. J.; Golovkin, I.

    2014-05-15

    Experimental tests are in progress to evaluate the accuracy of the modeled iron opacity at solar interior conditions, in particular to better constrain the solar abundance problem [S. Basu and H. M. Antia, Phys. Rep. 457, 217 (2008)]. Here, we describe measurements addressing three of the key requirements for reliable opacity experiments: control of sample conditions, independent sample condition diagnostics, and verification of sample condition uniformity. The opacity samples consist of iron/magnesium layers tamped by plastic. By changing the plastic thicknesses, we have controlled the iron plasma conditions to reach (1) T{sub e} = 167 ± 3 eV and n{sub e} = (7.1 ± 1.5)× 10{sup 21} cm{sup −3}, (2) T{sub e} = 170 ± 2 eV and n{sub e} = (2.0 ± 0.2) × 10{sup 22} cm{sup −3}, and (3) T{sub e} = 196 ± 6 eV and n{sub e} = (3.8 ± 0.8) × 10{sup 22} cm{sup −3}, which were measured by magnesium tracer K-shell spectroscopy. The opacity sample non-uniformity was directly measured by a separate experiment where Al is mixed into the side of the sample facing the radiation source and Mg into the other side. The iron condition was confirmed to be uniform within their measurement uncertainties by Al and Mg K-shell spectroscopy. The conditions are suitable for testing opacity calculations needed for modeling the solar interior, other stars, and high energy density plasmas.

  5. Effects of Sample Size on Estimates of Population Growth Rates Calculated with Matrix Models

    PubMed Central

    Fiske, Ian J.; Bruna, Emilio M.; Bolker, Benjamin M.

    2008-01-01

    Background Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (λ) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of λ; Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of λ due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of λ. Methodology/Principal Findings Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating λ for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of λ with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. Conclusions/Significance We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse-J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities. PMID:18769483
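
    The sampling-variance effect described above can be reproduced with a few lines of simulation: draw vital rates from finite samples, rebuild the projection matrix, and compare the dominant eigenvalue with the value from the true rates. The two-stage life cycle and vital rates below are invented for the demonstration, not the study's plant data:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Illustration of sampling-variance bias in lambda (dominant eigenvalue of a
    # projection matrix).  The 2-stage "true" vital rates are invented placeholders.

    true_survival = np.array([0.5, 0.8])      # juvenile, adult survival
    growth = 0.4                              # P(juvenile -> adult | survives)
    fecundity = 1.2                           # offspring per adult

    def build_matrix(s_juv, s_adult):
        return np.array([[s_juv * (1 - growth), fecundity],
                         [s_juv * growth,       s_adult]])

    def lam(A):
        return np.max(np.real(np.linalg.eigvals(A)))

    true_lambda = lam(build_matrix(*true_survival))

    for n in (10, 50, 250, 1000):             # individuals sampled per stage
        estimates = []
        for _ in range(2000):
            s_hat = rng.binomial(n, true_survival) / n    # sampled survival rates
            estimates.append(lam(build_matrix(*s_hat)))
        bias = np.mean(estimates) - true_lambda
        print(f"n = {n:4d}: mean bias in lambda = {bias:+.4f}")
    ```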

  6. Estimating species - area relationships by modeling abundance and frequency subject to incomplete sampling.

    PubMed

    Yamaura, Yuichi; Connor, Edward F; Royle, J Andrew; Itoh, Katsuo; Sato, Kiyoshi; Taki, Hisatomo; Mishima, Yoshio

    2016-07-01

    Models and data used to describe species-area relationships confound sampling with ecological process as they fail to acknowledge that estimates of species richness arise due to sampling. This compromises our ability to make ecological inferences from and about species-area relationships. We develop and illustrate hierarchical community models of abundance and frequency to estimate species richness. The models we propose separate sampling from ecological processes by explicitly accounting for the fact that sampled patches are seldom completely covered by sampling plots and that individuals present in the sampling plots are imperfectly detected. We propose a multispecies abundance model in which community assembly is treated as the summation of an ensemble of species-level Poisson processes and estimate patch-level species richness as a derived parameter. We use sampling process models appropriate for specific survey methods. We propose a multispecies frequency model that treats the number of plots in which a species occurs as a binomial process. We illustrate these models using data collected in surveys of early-successional bird species and plants in young forest plantation patches. Results indicate that only mature forest plant species deviated from the constant density hypothesis, but the null model suggested that the deviations were too small to alter the form of species-area relationships. Nevertheless, results from simulations clearly show that the aggregate pattern of individual species density-area relationships and occurrence probability-area relationships can alter the form of species-area relationships. The plant community model estimated that only half of the species present in the regional species pool were encountered during the survey. The modeling framework we propose explicitly accounts for sampling processes so that ecological processes can be examined free of sampling artefacts. Our modeling approach is extensible and could be applied to a
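
    A rough data-generating sketch in Python of the sampling problem these hierarchical models address (all parameter values are assumptions for illustration, not the authors' fitted model): patch-level abundances arise from species-level Poisson processes, only part of each patch is covered by survey plots, and individuals are detected imperfectly, so the naive count of detected species understates patch richness:

```python
import numpy as np

rng = np.random.default_rng(1)

S_pool = 60                 # regional species pool size (assumed)
patch_area = 10.0           # ha
plot_area = 0.5             # ha actually surveyed (incomplete coverage)
p_detect = 0.6              # per-individual detection probability (assumed)

# Species-level density parameters drawn from a lognormal "community" distribution.
density = rng.lognormal(mean=-1.0, sigma=1.0, size=S_pool)   # individuals per ha

# True abundance in the whole patch: species-level Poisson processes.
N_patch = rng.poisson(density * patch_area)
true_richness = np.sum(N_patch > 0)

# Observation: only plot_area is covered, and individuals are detected imperfectly.
N_plot = rng.binomial(N_patch, plot_area / patch_area)
counts = rng.binomial(N_plot, p_detect)
observed_richness = np.sum(counts > 0)

print("true patch richness:", true_richness)
print("observed richness  :", observed_richness)
```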

  7. Estimating species – area relationships by modeling abundance and frequency subject to incomplete sampling

    USGS Publications Warehouse

    Yamaura, Yuichi; Connor, Edward F.; Royle, Andy; Itoh, Katsuo; Sato, Kiyoshi; Taki, Hisatomo; Mishima, Yoshio

    2016-01-01

    Models and data used to describe species–area relationships confound sampling with ecological process as they fail to acknowledge that estimates of species richness arise due to sampling. This compromises our ability to make ecological inferences from and about species–area relationships. We develop and illustrate hierarchical community models of abundance and frequency to estimate species richness. The models we propose separate sampling from ecological processes by explicitly accounting for the fact that sampled patches are seldom completely covered by sampling plots and that individuals present in the sampling plots are imperfectly detected. We propose a multispecies abundance model in which community assembly is treated as the summation of an ensemble of species-level Poisson processes and estimate patch-level species richness as a derived parameter. We use sampling process models appropriate for specific survey methods. We propose a multispecies frequency model that treats the number of plots in which a species occurs as a binomial process. We illustrate these models using data collected in surveys of early-successional bird species and plants in young forest plantation patches. Results indicate that only mature forest plant species deviated from the constant density hypothesis, but the null model suggested that the deviations were too small to alter the form of species–area relationships. Nevertheless, results from simulations clearly show that the aggregate pattern of individual species density–area relationships and occurrence probability–area relationships can alter the form of species–area relationships. The plant community model estimated that only half of the species present in the regional species pool were encountered during the survey. The modeling framework we propose explicitly accounts for sampling processes so that ecological processes can be examined free of sampling artefacts. Our modeling approach is extensible and could be applied

  8. Modelling electron transport in magnetized low-temperature discharge plasmas

    NASA Astrophysics Data System (ADS)

    Hagelaar, G. J. M.

    2007-02-01

    Magnetic fields are sometimes used to confine the plasma in low-pressure low-temperature gas discharges, for example in magnetron discharges, Hall-effect-thruster discharges, electron-cyclotron-resonance discharges and helicon discharges. We discuss how these magnetized discharges can be modelled by two-dimensional self-consistent models based on electron fluid equations. The magnetized electron flux is described by an anisotropic drift diffusion equation, where the electron mobility is much smaller perpendicular to the magnetic field than parallel to it. The electric potential is calculated either from Poisson's equation or from the electron equations, assuming quasineutrality. Although these models involve many assumptions, they are appropriate to study the main effects of the magnetic field on the charged particle transport and space charge electric fields in realistic two-dimensional discharge configurations. We demonstrate by new results that these models reproduce known phenomena such as the establishment of the Boltzmann relation along magnetic field lines, the penetration of perpendicular applied electric fields into the plasma bulk and the decrease in magnetic confinement by short-circuit wall currents. We also present an original method to prevent numerical errors arising from the extreme anisotropy of the electron mobility, which tend to invalidate model results from standard numerical methods.
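
    For context, the strong anisotropy mentioned above follows from the classical magnetized-mobility expressions, sketched below in Python (textbook formulas with made-up collision frequency and field strength, not values from the paper): the perpendicular mobility is reduced by a factor 1 + (ω_c/ν_m)² relative to the parallel mobility.

```python
import numpy as np

E_CHARGE = 1.602e-19   # elementary charge [C]
M_E = 9.109e-31        # electron mass [kg]

def electron_mobility(nu_m, B):
    """Classical parallel/perpendicular electron mobilities (SI units).

    nu_m : electron momentum-transfer collision frequency [1/s]
    B    : magnetic field magnitude [T]
    """
    mu_par = E_CHARGE / (M_E * nu_m)                      # along the magnetic field
    omega_c = E_CHARGE * B / M_E                          # electron cyclotron frequency
    mu_perp = mu_par / (1.0 + (omega_c / nu_m) ** 2)      # across the field
    return mu_par, mu_perp

# Illustrative numbers (assumed): nu_m = 1e8 1/s, B = 0.02 T
mu_par, mu_perp = electron_mobility(1e8, 0.02)
print(f"mu_parallel = {mu_par:.2e} m^2/V/s, mu_perp = {mu_perp:.2e} m^2/V/s, "
      f"ratio = {mu_par / mu_perp:.1f}")
```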

  9. Satellite Derived Land Surface Temperature for Model Assimilation

    NASA Technical Reports Server (NTRS)

    Suggs, Ronnie J.; Jedlovec, Gary J.; Lapenta, William

    1999-01-01

    Studies have shown that land surface temperature (LST) tendencies are sensitive to the surface moisture availability, which is a function of soil moisture and vegetation. The assimilation of satellite derived LST tendencies into the surface energy budget of mesoscale models has shown promise in improving the representation of the complex effects of both soil moisture and vegetation within the models for short term simulations. LST derived from geostationary satellites has the potential of providing the temporal and spatial resolution needed for an LST assimilation process. This paper presents an analysis comparing the LST derived from GOES-8 infrared measurements with LST calculated by the MM5 numerical model. The satellite derived LSTs are calculated using a physical split window approach using channels 4 and 5 of GOES-8. The differences in the LST data sets, especially the tendencies, are presented and examined. Quantifying the differences between the data sets provides insight into possible weaknesses in the model parameterizations affecting the surface energy budget calculations and an indication of the potential effectiveness of assimilating LST into the models.
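
    For orientation, a generic split-window retrieval has the form LST = T4 + a(T4 - T5) + b, where T4 and T5 are the brightness temperatures of the two infrared window channels. The sketch below is illustrative only; the coefficients are placeholders, not the operational GOES-8 values, which depend on surface emissivity and atmospheric water vapour:

```python
def split_window_lst(t4_k, t5_k, a=2.5, b=1.0):
    """Generic split-window estimate of land surface temperature [K].

    t4_k, t5_k : brightness temperatures of the two infrared window channels [K]
    a, b       : retrieval coefficients (placeholders; real coefficients depend on
                 emissivity and atmospheric water vapour and differ from these)
    """
    return t4_k + a * (t4_k - t5_k) + b

print(split_window_lst(295.0, 292.5))   # -> 302.25 K for these made-up inputs
```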

  10. The impact of orbital sampling, monthly averaging and vertical resolution on climate chemistry model evaluation with satellite observations

    NASA Astrophysics Data System (ADS)

    Aghedo, A. M.; Bowman, K. W.; Shindell, D. T.; Faluvegi, G.

    2011-03-01

    Ensemble climate model simulations used for the Intergovernmental Panel on Climate Change (IPCC) assessments have become important tools for exploring the response of the Earth System to changes in anthropogenic and natural forcings. The systematic evaluation of these models through global satellite observations is a critical step in assessing the uncertainty of climate change projections. This paper presents the technical steps required for using nadir sun-synchronous infrared satellite observations for multi-model evaluation and the uncertainties associated with each step. This is motivated by the need to use satellite observations to evaluate climate models. We quantified the implications of the effect of satellite orbit and spatial coverage, the effect of variations in vertical sensitivity as quantified by the observation operator and the impact of averaging the operators for use with monthly-mean model output. We calculated these biases in ozone, carbon monoxide, atmospheric temperature and water vapour by using the output from two global chemistry climate models (ECHAM5-MOZ and GISS-PUCCINI) and the observations from the Tropospheric Emission Spectrometer (TES) satellite from January 2005 to December 2008. The results show that sampling and monthly averaging of the observation operators produce biases of less than ±3% for ozone and carbon monoxide throughout the entire troposphere in both models. Water vapour sampling biases were also within the insignificant range of ±3% (that is ±0.14 g kg-1) in both models. Sampling led to a temperature bias of ±0.3 K over the tropical and mid-latitudes in both models, and up to -1.4 K over the boundary layer in the higher latitudes. Using the monthly average of temperature and water vapour operators leads to large biases over the boundary layer in the southern-hemispheric higher latitudes and in the upper troposphere, respectively. Up to 8% bias was calculated in the upper troposphere water vapour due to monthly
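
    A minimal sketch, in Python with made-up profiles and averaging kernels, of the comparison behind the monthly-averaging bias: applying each day's observation operator to that day's model profile and then averaging is compared against applying the monthly-mean operator to the monthly-mean profile. The linear retrieval form x_hat = x_a + A (x - x_a) is the standard observation-operator template; the numbers below are not from TES or from either model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_lev, n_days = 5, 30

# Hypothetical daily "true" ozone profiles and daily averaging kernels (made up).
x_true = 50.0 + 5.0 * rng.standard_normal((n_days, n_lev))     # ppb
x_apriori = np.full(n_lev, 45.0)
kernels = np.stack([np.diag(rng.uniform(0.3, 0.9, n_lev)) for _ in range(n_days)])

# (1) Apply each day's operator to that day's profile, then average.
daily = np.array([x_apriori + kernels[d] @ (x_true[d] - x_apriori) for d in range(n_days)])
mean_of_daily = daily.mean(axis=0)

# (2) Apply the monthly-mean operator to the monthly-mean profile.
monthly = x_apriori + kernels.mean(axis=0) @ (x_true.mean(axis=0) - x_apriori)

print("bias from monthly averaging (%):",
      100.0 * (monthly - mean_of_daily) / mean_of_daily)
```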

  11. The impact of orbital sampling, monthly averaging and vertical resolution on climate chemistry model evaluation with satellite observations

    NASA Astrophysics Data System (ADS)

    Aghedo, A. M.; Bowman, K. W.; Shindell, D. T.; Faluvegi, G.

    2011-07-01

    Ensemble climate model simulations used for the Intergovernmental Panel on Climate Change (IPCC) assessments have become important tools for exploring the response of the Earth System to changes in anthropogenic and natural forcings. The systematic evaluation of these models through global satellite observations is a critical step in assessing the uncertainty of climate change projections. This paper presents the technical steps required for using nadir sun-synchronous infrared satellite observations for multi-model evaluation and the uncertainties associated with each step. This is motivated by the need to use satellite observations to evaluate climate models. We quantified the implications of the effect of satellite orbit and spatial coverage, the effect of variations in vertical sensitivity as quantified by the observation operator and the impact of averaging the operators for use with monthly-mean model output. We calculated these biases in ozone, carbon monoxide, atmospheric temperature and water vapour by using the output from two global chemistry climate models (ECHAM5-MOZ and GISS-PUCCINI) and the observations from the Tropospheric Emission Spectrometer (TES) instrument on board the NASA-Aura satellite from January 2005 to December 2008. The results show that sampling and monthly averaging of the observation operators produce zonal-mean biases of less than ±3 % for ozone and carbon monoxide throughout the entire troposphere in both models. Water vapour sampling zonal-mean biases were also within the insignificant range of ±3 % (that is ±0.14 g kg-1) in both models. Sampling led to a temperature zonal-mean bias of ±0.3 K over the tropical and mid-latitudes in both models, and up to -1.4 K over the boundary layer in the higher latitudes. Using the monthly average of temperature and water vapour operators leads to large biases over the boundary layer in the southern-hemispheric higher latitudes and in the upper troposphere, respectively. Up to 8 % bias was

  12. Order-parameter-aided temperature-accelerated sampling for the exploration of crystal polymorphism and solid-liquid phase transitions

    PubMed Central

    Yu, Tang-Qing; Chen, Pei-Yang; Chen, Ming; Samanta, Amit; Vanden-Eijnden, Eric; Tuckerman, Mark

    2014-01-01

    The problem of predicting polymorphism in atomic and molecular crystals constitutes a significant challenge both experimentally and theoretically. From the theoretical viewpoint, polymorphism prediction falls into the general class of problems characterized by an underlying rough energy landscape, and consequently, free energy based enhanced sampling approaches can be brought to bear on the problem. In this paper, we build on a scheme previously introduced by two of the authors in which the lengths and angles of the supercell are targeted for enhanced sampling via temperature accelerated adiabatic free energy dynamics [T. Q. Yu and M. E. Tuckerman, Phys. Rev. Lett. 107, 015701 (2011)]. Here, that framework is expanded to include general order parameters that distinguish different crystalline arrangements as target collective variables for enhanced sampling. The resulting free energy surface, being of quite high dimension, is nontrivial to reconstruct, and we discuss one particular strategy for performing the free energy analysis. The method is applied to the study of polymorphism in xenon crystals at high pressure and temperature using the Steinhardt order parameters without and with the supercell included in the set of collective variables. The expected fcc and bcc structures are obtained, and when the supercell parameters are included as collective variables, we also find several new structures, including fcc states with hcp stacking faults. We also apply the new method to the solid-liquid phase transition in copper at 1300 K using the same Steinhardt order parameters. Our method is able to melt and refreeze the system repeatedly, and the free energy profile can be obtained with high efficiency. PMID:24907992
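
    For readers unfamiliar with the collective variables used here, the sketch below computes a global Steinhardt order parameter Q_l from a set of neighbour bond vectors using SciPy's spherical harmonics; the fcc example bonds and the choice l = 6 are illustrative, and the code is not part of the authors' enhanced-sampling implementation:

```python
import numpy as np
from scipy.special import sph_harm

def steinhardt_q(l, bonds):
    """Global Steinhardt order parameter Q_l from an array of bond vectors (N, 3)."""
    bonds = np.asarray(bonds, dtype=float)
    r = np.linalg.norm(bonds, axis=1)
    theta = np.arccos(np.clip(bonds[:, 2] / r, -1.0, 1.0))   # polar angle
    phi = np.arctan2(bonds[:, 1], bonds[:, 0])                # azimuthal angle
    q = 0.0
    for m in range(-l, l + 1):
        # scipy convention: sph_harm(m, l, azimuthal, polar)
        ylm = sph_harm(m, l, phi, theta)
        q += np.abs(ylm.mean()) ** 2
    return np.sqrt(4.0 * np.pi / (2 * l + 1) * q)

# Example: the 12 nearest-neighbour bonds of a perfect fcc lattice give Q6 of about 0.575.
fcc_bonds = np.array([[1, 1, 0], [1, -1, 0], [-1, 1, 0], [-1, -1, 0],
                      [1, 0, 1], [1, 0, -1], [-1, 0, 1], [-1, 0, -1],
                      [0, 1, 1], [0, 1, -1], [0, -1, 1], [0, -1, -1]])
print(f"Q6(fcc) = {steinhardt_q(6, fcc_bonds):.3f}")
```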

  13. Order-parameter-aided temperature-accelerated sampling for the exploration of crystal polymorphism and solid-liquid phase transitions

    SciTech Connect

    Yu, Tang-Qing; Vanden-Eijnden, Eric; Chen, Pei-Yang; Chen, Ming; Samanta, Amit; Tuckerman, Mark

    2014-06-07

    The problem of predicting polymorphism in atomic and molecular crystals constitutes a significant challenge both experimentally and theoretically. From the theoretical viewpoint, polymorphism prediction falls into the general class of problems characterized by an underlying rough energy landscape, and consequently, free energy based enhanced sampling approaches can be brought to bear on the problem. In this paper, we build on a scheme previously introduced by two of the authors in which the lengths and angles of the supercell are targeted for enhanced sampling via temperature accelerated adiabatic free energy dynamics [T. Q. Yu and M. E. Tuckerman, Phys. Rev. Lett. 107, 015701 (2011)]. Here, that framework is expanded to include general order parameters that distinguish different crystalline arrangements as target collective variables for enhanced sampling. The resulting free energy surface, being of quite high dimension, is nontrivial to reconstruct, and we discuss one particular strategy for performing the free energy analysis. The method is applied to the study of polymorphism in xenon crystals at high pressure and temperature using the Steinhardt order parameters without and with the supercell included in the set of collective variables. The expected fcc and bcc structures are obtained, and when the supercell parameters are included as collective variables, we also find several new structures, including fcc states with hcp stacking faults. We also apply the new method to the solid-liquid phase transition in copper at 1300 K using the same Steinhardt order parameters. Our method is able to melt and refreeze the system repeatedly, and the free energy profile can be obtained with high efficiency.

  14. Spring Fluids from a Low-temperature Hydrothermal System at Dorado Outcrop: The First Samples of a Massive Global Flux

    NASA Astrophysics Data System (ADS)

    Wheat, C. G.; Fisher, A. T.; McManus, J.; Hulme, S.; Orcutt, B.

    2015-12-01

    Hydrothermal circulation through the volcanic ocean crust extracts about one fourth of Earth's lithospheric heat. Most of this advective heat loss occurs through ridge flanks, areas far from the magmatic influence of seafloor spreading, at relatively low temperatures (2-25 degrees Celsius). This process results in a flux of seawater through the oceanic crust that is commensurate with the flux delivered to the ocean from rivers. Given this large flow, even a modest (1-5 percent) change in concentration during circulation would impact geochemical cycles for many ions. Despite the importance of this process, the fluids that embody it had until recently not been collected or quantified, mainly because no site of focused, low-temperature discharge had been found. In 2013 we used Sentry (an AUV) and Jason II (an ROV) to generate a bathymetric map and locate springs within a geologic context on Dorado Outcrop, a ridge flank hydrothermal system that typifies such hydrothermal processes in the Pacific. Dorado Outcrop is located on 23 M.y. old seafloor of the Cocos Plate, where 70-90 percent of the lithospheric heat is removed. Spring fluids collected in 2013 confirmed small chemical anomalies relative to seawater, requiring new methods to collect, analyze, and interpret samples and data. In 2014 the submersible Alvin utilized these methods to recover the first high-quality spring samples from this system, as well as year-long experiments. These unique data and samples represent the first of their type. For example, the presence of dissolved oxygen is the first evidence of an oxic ridge flank hydrothermal fluid, even though such fluids have been postulated to exist throughout a vast portion of the oceanic crust. Furthermore, chemical data confirm modest anomalies relative to seawater for some elements. Such anomalies, if characteristic throughout the global ocean, impact global geochemical cycles, crustal evolution, and subsurface microbial activity.

  15. High Temperature Chemical Kinetic Combustion Modeling of Lightly Methylated Alkanes

    SciTech Connect

    Sarathy, S M; Westbrook, C K; Pitz, W J; Mehl, M

    2011-03-01

    Conventional petroleum jet and diesel fuels, as well as alternative Fischer-Tropsch (FT) fuels and hydrotreated renewable jet (HRJ) fuels, contain high molecular weight lightly branched alkanes (i.e., methylalkanes) and straight chain alkanes (n-alkanes). Improving the combustion of these fuels in practical applications requires a fundamental understanding of large hydrocarbon combustion chemistry. This research project presents a detailed high temperature chemical kinetic mechanism for n-octane and three lightly branched isomers of octane (i.e., 2-methylheptane, 3-methylheptane, and 2,5-dimethylhexane). The model is validated against experimental data from a variety of fundamental combustion devices. This new model is used to show how the location and number of methyl branches affect fuel reactivity, including laminar flame speed and species formation.
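
    Rate coefficients in mechanisms of this kind are typically stored in modified Arrhenius form, k = A T^n exp(-Ea/RT); the Python snippet below evaluates that expression with invented parameters (not values from the mechanism described above) purely to show the temperature dependence:

```python
import numpy as np

R_CAL = 1.98720425864083  # gas constant [cal/(mol K)]

def arrhenius_rate(T, A, n, Ea):
    """Modified Arrhenius rate constant k = A * T^n * exp(-Ea / (R*T)).

    T  : temperature [K]
    A  : pre-exponential factor (units depend on reaction order)
    n  : temperature exponent
    Ea : activation energy [cal/mol]
    """
    return A * T**n * np.exp(-Ea / (R_CAL * T))

# Illustrative H-abstraction-like parameters (made up, not from the published mechanism):
for T in (800.0, 1200.0, 1600.0):
    print(f"T = {T:.0f} K, k = {arrhenius_rate(T, A=1.0e6, n=2.0, Ea=7000.0):.3e}")
```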

  16. Probing temperature during laser spot welding from vapor composition and modeling

    NASA Astrophysics Data System (ADS)

    He, X.; DebRoy, T.; Fürschbach, P. W.

    2003-11-01

    Measurement of weld pool temperature during laser spot welding is a difficult task because of the short pulse duration, often lasting only a few milliseconds, highly transient nature of the process, and the presence of a metal vapor plume near the weld pool. This article describes recent research to estimate weld pool temperatures experimentally and theoretically. Composition of the metal vapor from the weld pool was determined by condensing a portion of the vapor on the inner surface of an open ended quartz tube which was mounted perpendicular to the sample surface and coaxial with the laser beam. It was found that iron, chromium, and manganese were the main metallic species in the vapor phase. The concentrations of Fe and Cr in the vapor increased slightly while the concentration of Mn in the vapor decreased somewhat with the increase in power density. The vapor composition was used to determine an effective temperature of the weld pool. A transient, three-dimensional numerical heat transfer and fluid flow model based on the solution of the equations of conservation of mass, momentum and energy was used to calculate the temperature and velocity fields in the weld pool as a function of time. The experimentally determined geometry of the spot welds agreed well with that determined from the computed temperature field. The effective temperature determined from the vapor composition was found to be close to the numerically computed peak temperature at the weld pool surface. Because of the short process duration and other serious problems in the direct measurement of temperature during laser spot welding, estimating approximate values of peak temperature from metal vapor composition is particularly valuable.

  17. 12 CFR Appendix B to Part 230 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    12 CFR, Banks and Banking, vol. 3 (2010-01-01): FEDERAL RESERVE SYSTEM (CONTINUED), BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM, TRUTH IN SAVINGS (REGULATION DD), Pt. 230, App. B. Appendix B to Part 230—Model Clauses and Sample Forms. Table of contents: B-1—Model...

  18. Low reheating temperatures in monomial and binomial inflationary models

    SciTech Connect

    Rehagen, Thomas; Gelmini, Graciela B.

    2015-06-23

    We investigate the allowed range of reheating temperature values in light of the Planck 2015 results and the recent joint analysis of Cosmic Microwave Background (CMB) data from the BICEP2/Keck Array and Planck experiments, using monomial and binomial inflationary potentials. While the well studied ϕ^2 inflationary potential is no longer favored by current CMB data, as well as ϕ^p with p>2, a ϕ^1 potential and canonical reheating (w_re = 0) provide a good fit to the CMB measurements. In this last case, we find that the Planck 2015 68% confidence limit upper bound on the spectral index, n_s, implies an upper bound on the reheating temperature of T_re ≲ 6×10^10 GeV, and excludes instantaneous reheating. The low reheating temperatures allowed by this model open the possibility that dark matter could be produced during the reheating period instead of when the Universe is radiation dominated, which could lead to very different predictions for the relic density and momentum distribution of WIMPs, sterile neutrinos, and axions. We also study binomial inflationary potentials and show the effects of a small departure from a ϕ^1 potential. We find that as a subdominant ϕ^2 term in the potential increases, first instantaneous reheating becomes allowed, and then the lowest possible reheating temperature of T_re = 4 MeV is excluded by the Planck 2015 68% confidence limit.

  19. Improving the performance of temperature index snowmelt model of SWAT by using MODIS land surface temperature data.

    PubMed

    Yang, Yan; Onishi, Takeo; Hiramatsu, Ken

    2014-01-01

    Simulation results of the widely used temperature index snowmelt model are greatly influenced by input air temperature data. Spatially sparse air temperature data remain the main factor inducing uncertainties and errors in that model, which limits its applications. Thus, to solve this problem, we created new air temperature data using linear regression relationships that can be formulated based on MODIS land surface temperature data. The Soil Water Assessment Tool model, which includes an improved temperature index snowmelt module, was chosen to test the newly created data. By evaluating simulation performance for daily snowmelt in three test basins of the Amur River, performance of the newly created data was assessed. The coefficient of determination (R^2) and Nash-Sutcliffe efficiency (NSE) were used for evaluation. The results indicate that MODIS land surface temperature data can be used as a new source for air temperature data creation. This will improve snow simulation using the temperature index model in an area with sparse air temperature observations. PMID:25165746
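
    A minimal Python sketch of the two ingredients described above, using synthetic numbers rather than the MODIS or SWAT data: a linear regression that maps land surface temperature to air temperature, and the Nash-Sutcliffe efficiency used for evaluation:

```python
import numpy as np

def fit_linear(lst, t_air):
    """Least-squares fit of t_air ~ a * lst + b (a generic linear regression of this form)."""
    a, b = np.polyfit(lst, t_air, 1)
    return a, b

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Synthetic example (made-up numbers): station air temperature vs land surface temperature [deg C].
rng = np.random.default_rng(3)
lst = rng.uniform(-25, 20, 200)
t_air = 0.85 * lst - 1.5 + rng.normal(0, 1.5, 200)

a, b = fit_linear(lst, t_air)
t_est = a * lst + b
print(f"T_air ~= {a:.2f} * LST + {b:.2f},  R^2 = {np.corrcoef(t_air, t_est)[0, 1] ** 2:.2f},"
      f"  NSE = {nash_sutcliffe(t_air, t_est):.2f}")
```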

  20. Improving the Performance of Temperature Index Snowmelt Model of SWAT by Using MODIS Land Surface Temperature Data

    PubMed Central

    Yang, Yan; Onishi, Takeo; Hiramatsu, Ken

    2014-01-01

    Simulation results of the widely used temperature index snowmelt model are greatly influenced by input air temperature data. Spatially sparse air temperature data remain the main factor inducing uncertainties and errors in that model, which limits its applications. Thus, to solve this problem, we created new air temperature data using linear regression relationships that can be formulated based on MODIS land surface temperature data. The Soil Water Assessment Tool model, which includes an improved temperature index snowmelt module, was chosen to test the newly created data. By evaluating simulation performance for daily snowmelt in three test basins of the Amur River, performance of the newly created data was assessed. The coefficient of determination (R^2) and Nash-Sutcliffe efficiency (NSE) were used for evaluation. The results indicate that MODIS land surface temperature data can be used as a new source for air temperature data creation. This will improve snow simulation using the temperature index model in an area with sparse air temperature observations. PMID:25165746

  1. Detection of high molecular weight organic tracers in vegetation smoke samples by high-temperature gas chromatography-mass spectrometry

    SciTech Connect

    Elias, V.O.; Simoneit, B.R.T.; Pereira, A.S.; Cardoso, J.N.; Cabral, J.A.

    1999-07-15

    High-temperature high-resolution gas chromatography (HTGC) is an established technique for the separation of complex mixtures of high molecular weight (HMW) compounds which do not elute when analyzed on conventional GC columns. The combination of this technique with mass spectrometry is not so common, and its application to aerosols is novel. The HTGC and HTGC-MS analyses of smoke samples taken by particle filtration from combustion of different species of plants provided the characterization of various classes of HMW compounds reported to occur for the first time in emissions from biomass burning. Among these components are a series of wax esters with up to 58 carbon numbers, aliphatic hydrocarbons, triglycerides, long chain methyl ketones, alkanols and a series of triterpenyl fatty acid esters which have been characterized as novel natural products. Long chain fatty acids with more than 32 carbon numbers are not present in the smoke samples analyzed. The HMW compounds in smoke samples from the burning of plants from Amazonia indicate the input of natural products volatilized directly from the original plants during their combustion. However, the major organic compounds extracted from smoke consist of a series of lower molecular weight polar components, which are not natural products but the result of the thermal breakdown of cellulose and lignin. In contrast, the HMW natural products may be suitable tracers for specific sources of vegetation combustion because they are emitted as particles without thermal alteration in the smoke and can thus be related directly to the original plant material.

  2. Continuous Measurements of Electrical Conductivity and Viscosity of Lherzolite Analogue Samples during Slow Increases and Decreases in Temperature: Melting and Pre-melting Effects

    NASA Astrophysics Data System (ADS)

    Sueyoshi, K.; Hiraga, T.

    2014-12-01

    Transport properties of the mantle (e.g., electrical conductivity, viscosity, seismic attenuation) are thought to change dramatically during ascent of the mantle, especially around the mantle solidus. To understand the mechanism of such changes, we measured the electrical conductivity and viscosity of lherzolite analogues during slow increases and decreases in temperature, reproducing mantle crossing its solidus. Two types of samples, one consisting of forsterite plus 20% diopside and the other of 50% forsterite, 40% enstatite and 10% diopside with an addition of 0.5% spinel, were synthesized from Mg(OH)2, SiO2, CaCO3 and MgAl2O4 (spinel) powders with particle sizes of <50 nm. The samples were expected to differ in the onset of partial melting and in melt fraction during the temperature changes. We continuously measured the electrical conductivity of these samples at every temperature during gradual temperature changes crossing the sample solidus (~1380°C and ~1230°C for the forsterite + diopside and spinel-added samples, respectively). Sample viscosity was also measured under constant loads of 0.5-50 MPa. At temperatures well below the sample solidus (more than 150°C below it), the electrical conductivity and viscosity followed linear trends in their Arrhenius plots, indicating that a single mechanism controls each transport property within the experimental temperature range. This linearity, especially in the electrical conductivity, was lost in the higher-temperature regime, where the conductivity increased exponentially until the temperature reached the sample solidus. No such dramatic change with temperature was detected for the sample viscosity. Above the sample solidus, the electrical conductivity increased monotonically with increasing melt fraction.
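
    The "linear Arrhenius plot" criterion used above can be made concrete with a small Python sketch (synthetic data, not the measured conductivities): fitting ln σ against 1/T yields the activation energy, and curvature in the plot near the solidus would signal an additional conduction mechanism:

```python
import numpy as np

def arrhenius_fit(T_kelvin, sigma):
    """Fit ln(sigma) = ln(sigma0) - Ea/(k_B T); returns (sigma0, Ea in eV).

    A straight line in the Arrhenius plot (ln sigma vs 1/T) indicates a single
    conduction mechanism; departures from it signal an extra contribution.
    """
    k_B = 8.617e-5                           # Boltzmann constant [eV/K]
    slope, intercept = np.polyfit(1.0 / np.asarray(T_kelvin), np.log(sigma), 1)
    return np.exp(intercept), -slope * k_B

# Made-up sub-solidus data constructed to be consistent with Ea ~ 1.5 eV:
T = np.array([1000.0, 1050.0, 1100.0, 1150.0, 1200.0])
sigma = 1.0e2 * np.exp(-1.5 / (8.617e-5 * T))
print(arrhenius_fit(T, sigma))   # approximately (100.0, 1.5)
```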

  3. Development and application of a thermophysical property model for cane fiberboard subjected to high temperatures

    SciTech Connect

    Hensel, S.J.; Gromada, R.J.

    1994-06-01

    A thermophysical property model has been developed to analytically determine the thermal response of cane fiberboard when exposed to temperatures and heat fluxes associated with the 10 CFR 71 hypothetical accident condition (HAC) and the associated post-fire cooling. The complete model was developed from high temperature cane fiberboard 1-D test results and consists of heating and cooling sub-models. The heating property model accounts for the enhanced heat transfer of the hot gases in the fiberboard, the loss of energy via venting, and the loss of mass from venting during the heating portion of the test. The cooling property model accounts for the degraded material effects and the continued heat transfer associated with the hot gases after removal of the external heating source. Agreement between the test results for a four-inch-thick fiberboard sample and the analytical predictions of the complete property model is quite good and will be presented. A comparison of analysis results and furnace test data for the 9966 package suggests that the property model sufficiently accounts for the heat transfer in an actual package.
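
    A minimal sketch, assuming generic fiberboard-like property values rather than the qualified property model described above, of how such a thermal response can be computed with an explicit 1-D finite-difference scheme whose conductivity depends on temperature:

```python
import numpy as np

def heat_1d(T0, t_end, dx, dt, k_of_T, rho, cp, T_left, T_right):
    """Explicit 1-D conduction with temperature-dependent conductivity (illustrative only)."""
    T = np.array(T0, dtype=float)
    T[0], T[-1] = T_left, T_right              # fixed-temperature boundaries
    for _ in range(int(t_end / dt)):
        k = k_of_T(T)
        k_face = 0.5 * (k[:-1] + k[1:])        # arithmetic average of conductivity on cell faces
        flux = -k_face * np.diff(T) / dx       # heat flux on faces [W/m^2]
        T[1:-1] -= dt / (rho * cp * dx) * np.diff(flux)
    return T

# Fiberboard-like numbers (assumed here, not the qualified property model of the paper):
k_of_T = lambda T: 0.05 + 1.0e-4 * (T - 300.0)   # W/m/K, rises with temperature
T0 = np.full(41, 300.0)                           # 10 cm slab on a 2.5 mm grid, initially 300 K
T_end = heat_1d(T0, t_end=600.0, dx=2.5e-3, dt=0.05, k_of_T=k_of_T,
                rho=230.0, cp=1300.0, T_left=1073.0, T_right=300.0)
print(T_end[:5])   # temperatures near the heated face after 10 minutes
```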

  4. Clausius-Clapeyron temperature-precipitation scaling over the UK in high-resolution climate models

    NASA Astrophysics Data System (ADS)

    Chan, Steven; Fowler, Hayley; Kendon, Elizabeth; Roberts, Malcolm; Roberts, Nigel; Ferro, Christopher; Blenkinsop, Stephen

    2014-05-01

    Clausius-Clapeyron (C-C) temperature-precipitation scaling relationships for extreme hourly precipitation (99th quantile) are examined in observations and a set of 12-km parameterized-convection and 1.5-km convection-permitting regional climate model (RCM) simulations, over a domain covering England and Wales for the summer months (JJA). RCM simulations have been carried out driven by ERA-Interim reanalysis, and also for control (1996-2009) and future (~2100) runs driven by a 60-km resolution Met Office Unified Model using the Global Atmosphere GA3.0 configuration. Radar observations are found to give at least a 1xC-C scaling for UK hourly extreme precipitation at temperatures above 10°C. Despite sharing the same large-scale conditions, the 1.5-km explicit-convection model shows very different C-C scaling relationships to the 12-km model, whose C-C scaling is shown to be highly sensitive to the lateral boundary conditions, suggesting that the model physics play an important role in the scaling. In contrast, the 1.5-km model shows consistent C-C scaling relationships for all present-day (ERA-Interim and control) simulations and these are generally in line with observed C-C scaling relationships, which sample temperatures mainly between 10°C and 20°C. The future simulations indicate the fallacy of extrapolating present-day scaling relationships to infer extreme precipitation in a future warmer climate. All future climate simulations show a sharp decline in the scaling relationship at high temperatures (above ~20°C), which are not well sampled in the current climate. This is consistent with observational studies in other regions which have also found declines in the scaling relationship at high temperatures. This suggests that there may be an upper temperature limit to super-Clausius-Clapeyron scaling of short-duration extreme precipitation which differs dependent on ambient climate conditions in the study location.
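
    The C-C scaling analysis itself is straightforward to sketch: wet-hour precipitation is binned by coincident temperature, a high quantile is taken per bin, and an exponential fit gives the scaling rate to compare against the roughly 7% per kelvin Clausius-Clapeyron rate. The Python example below uses synthetic data constructed to follow about 7%/K; the bin width, quantile and sample-count threshold are arbitrary choices, not those of the study:

```python
import numpy as np

def cc_scaling(temp_c, precip_mm_h, q=0.99, bin_width=2.0):
    """Quantile precipitation per temperature bin and the fitted scaling rate (%/K)."""
    temp_c, precip_mm_h = np.asarray(temp_c), np.asarray(precip_mm_h)
    edges = np.arange(temp_c.min(), temp_c.max() + bin_width, bin_width)
    centres, extremes = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (temp_c >= lo) & (temp_c < hi)
        if sel.sum() > 200:                      # require enough wet hours per bin
            centres.append(0.5 * (lo + hi))
            extremes.append(np.quantile(precip_mm_h[sel], q))
    slope, _ = np.polyfit(centres, np.log(extremes), 1)
    return np.array(centres), np.array(extremes), 100.0 * (np.exp(slope) - 1.0)

# Synthetic wet-hour data constructed to follow roughly 7 %/K:
rng = np.random.default_rng(4)
T = rng.uniform(5, 22, 50_000)
P = rng.gamma(0.5, 1.0, T.size) * 1.07 ** T
_, _, rate = cc_scaling(T, P)
print(f"fitted scaling rate: {rate:.1f} % per degree C (C-C is about 7 %/K)")
```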

  5. Sample size determination for the non-randomised triangular model for sensitive questions in a survey.

    PubMed

    Tian, Guo-Liang; Tang, Man-Lai; Liu, Zhenqiu; Tan, Ming; Tang, Nian-Sheng

    2011-06-01

    Sample size determination is an essential component in public health survey designs on sensitive topics (e.g. drug abuse, homosexuality, induced abortions and pre- or extramarital sex). Recently, non-randomised models have been shown to be an efficient and cost-effective design when compared with randomised response models. However, sample size formulae for such non-randomised designs are not yet available. In this article, we derive sample size formulae for the non-randomised triangular design based on the power analysis approach. We first consider the one-sample problem. Power functions and their corresponding sample size formulae for the one- and two-sided tests based on the large-sample normal approximation are derived. The performance of the sample size formulae is evaluated in terms of (i) the accuracy of the power values based on the estimated sample sizes and (ii) the sample size ratio of the non-randomised triangular design and the design of direct questioning (DDQ). We also numerically compare the sample sizes required for the randomised Warner design with those required for the DDQ and the non-randomised triangular design. Theoretical justification is provided. Furthermore, we extend the one-sample problem to the two-sample problem. An example based on an induced abortion study in Taiwan is presented to illustrate the proposed methods. PMID:19221169
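
    As a rough illustration of the power-analysis approach (a generic normal-approximation template, not the paper's exact formulae for the triangular design), the sample size follows from the usual z-based relation; for the triangular design the per-observation estimator variances would be inflated by the probability of the non-sensitive category:

```python
from math import ceil
from scipy.stats import norm

def sample_size_one_sided(p0, p1, var0, var1, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a one-sided one-sample test.

    p0, p1     : proportion under the null and under the alternative
    var0, var1 : per-observation variance of the estimator under H0 and H1
                 (direct questioning has p(1-p); a non-randomised triangular
                  design would have a larger, design-specific variance)
    """
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    n = ((z_a * var0 ** 0.5 + z_b * var1 ** 0.5) / abs(p1 - p0)) ** 2
    return ceil(n)

# Direct-questioning baseline with made-up proportions:
print(sample_size_one_sided(0.10, 0.15, 0.10 * 0.90, 0.15 * 0.85))
```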

  6. Impact of caramelization on the glass transition temperature of several caramelized sugars. Part II: Mathematical modeling.

    PubMed

    Jiang, Bin; Liu, Yeting; Bhandari, Bhesh; Zhou, Weibiao

    2008-07-01

    Further to part I of this study, this paper discusses mathematical modeling of the relationship between caramelization of several sugars, including fructose, glucose, and sucrose, and their glass transition temperatures (Tg). Differential scanning calorimetry (DSC) was used for creating caramelized sugar samples and determining their glass transition temperatures (Tg). UV-vis absorbance measurement and high-performance liquid chromatography (HPLC) analysis were used for quantifying the extent of caramelization. Specifically, absorbances at 284 and 420 nm were obtained from UV-vis measurement, and the contents of sucrose, glucose, fructose, and 5-hydroxymethyl-furfural (HMF) in the caramelized sugars were obtained from HPLC measurements. Results from the UV and HPLC measurements were correlated with the Tg values measured by DSC. By using both linear and nonlinear regressions, two sets of mathematical models were developed for the prediction of Tg values of sugar caramels. The first set utilized information obtained from both UV-vis measurement and HPLC analysis, while the second set utilized only information from the UV-vis measurement, which is much easier to perform in practice. As a caramelization process is typically characterized by two stages, separate models were developed for each of the stages within a set. Furthermore, a third set of nonlinear equations was developed, serving as criteria to decide at which stage a caramelized sample is. The models were evaluated through a validation process. PMID:18553880

  7. Relationship between fire temperature and changes in chemical soil properties: a conceptual model of nutrient release

    NASA Astrophysics Data System (ADS)

    Thomaz, Edivaldo L.; Doerr, Stefan H.

    2014-05-01

    The purpose of this study was to evaluate the effects of fire temperatures (i.e., soil heating) on nutrient release and aggregate physical changes in soil. A preliminary conceptual model of nutrient release was established based on results obtained from a controlled burn in a slash-and-burn agricultural system located in Brazil. The study was carried out in a clayey subtropical soil (humic Cambisol) from a plot that had been fallow for 8 years. A set of three thermocouples was placed in each of four trenches at the following depths: 0 cm on the top of the mineral horizon, 1.0 cm within the mineral horizon, and 2 cm within the mineral horizon. Three soil samples (true independent samples) were collected approximately 12 hours post-fire at depths of 0-2.5 cm. Soil chemical changes were more sensitive to fire temperatures than aggregate physical soil characteristics. Most of the nutrient response to soil heating was not linear. The results demonstrated that moderate temperatures (< 400°C) had a major effect on nutrient release (i.e., the optimum effect), whereas high temperatures (> 500°C) decreased soil fertility.

  8. Modification of an RBF ANN-Based Temperature Compensation Model of Interferometric Fiber Optical Gyroscopes

    PubMed Central

    Cheng, Jianhua; Qi, Bing; Chen, Daidai; Landry, René Jr.

    2015-01-01

    This paper presents a modification of Radial Basis Function Artificial Neural Network (RBF ANN)-based temperature compensation models for Interferometric Fiber Optical Gyroscopes (IFOGs). Based on the mathematical expression of the IFOG output, three temperature-relevant terms are extracted: (1) the temperature of the fiber loops; (2) the temperature variation of the fiber loops; (3) the temperature product term of the fiber loops. Then, the input-modified RBF ANN-based temperature compensation scheme is established, in which the temperature-relevant terms are used to train the RBF ANN. Experimental temperature tests are conducted and sufficient data are collected and post-processed to form the novel RBF ANN. Finally, we apply the modified RBF ANN-based temperature compensation model to two IFOGs with temperature compensation capabilities. The experimental results show that the proposed temperature compensation model can efficiently reduce the influence of environmental temperature on the IFOG output, and exhibits better temperature compensation performance than the conventional scheme without the proposed improvements. PMID:25985163
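
    A compact Python sketch of the compensation idea (synthetic drift and arbitrary coefficients, not the authors' IFOG data or trained network): the three temperature-relevant terms are formed as inputs, a Gaussian RBF design matrix is built, and output weights are solved by least squares so the predicted drift can be subtracted:

```python
import numpy as np

rng = np.random.default_rng(5)

def rbf_design(X, centres, width):
    """Gaussian RBF design matrix for inputs X (n, d) and centres (k, d)."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Synthetic temperature history and a made-up drift depending on the three input terms
# named in the paper: T, dT/dt, and T * dT/dt (units and coefficients are arbitrary).
t = np.linspace(0, 3600, 2000)
T = 25 + 20 * np.sin(2 * np.pi * t / 3600)
dT = np.gradient(T, t)
X = np.column_stack([T, dT, T * dT])
drift = 0.02 * T + 3.0 * dT + 0.5 * T * dT + rng.normal(0, 0.01, t.size)

# Normalise inputs, pick centres from the data, solve for output weights.
Xn = (X - X.mean(0)) / X.std(0)
centres = Xn[rng.choice(len(Xn), 20, replace=False)]
Phi = rbf_design(Xn, centres, width=0.5)
w, *_ = np.linalg.lstsq(Phi, drift, rcond=None)

residual = drift - Phi @ w
print("drift std before/after compensation:", drift.std(), residual.std())
```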

  9. Data-Model Comparison of Pliocene Sea Surface Temperature

    NASA Astrophysics Data System (ADS)

    Dowsett, H. J.; Foley, K.; Robinson, M. M.; Bloemers, J. T.

    2013-12-01

    The mid-Piacenzian (late Pliocene) climate represents the most geologically recent interval of long-term average warmth and shares similarities with the climate projected for the end of the 21st century. As such, its fossil and sedimentary record represents a natural experiment from which we can gain insight into potential climate change impacts, enabling more informed policy decisions for mitigation and adaptation. We present the first systematic comparison of Pliocene sea surface temperatures (SST) between an ensemble of eight climate model simulations produced as part of PlioMIP (Pliocene Model Intercomparison Project) and the PRISM (Pliocene Research, Interpretation and Synoptic Mapping) Project mean annual SST field. Our results highlight key regional (mid- to high latitude North Atlantic and tropics) and dynamic (upwelling) situations where there is discord between reconstructed SST and the PlioMIP simulations. These differences can lead to improved strategies for both experimental design and temporal refinement of the palaeoenvironmental reconstruction. [Figure caption: Scatter plot of multi-model-mean anomalies (squares) and PRISM3 data anomalies (large blue circles) by latitude. Vertical bars on the data anomalies show the variability of the warm climate phase within the time-slab at each locality; small colored circles show individual model anomalies and their spread about the multi-model mean. While not directly comparable in how the means and variability are derived, the plot provides a first-order comparison of the anomalies. Encircled areas: a, PRISM low latitude sites outside of upwelling areas; b, North Atlantic coastal sequences and Mediterranean sites; c, large-anomaly PRISM sites from the northern hemisphere. Numbers identify Ocean Drilling Program sites.]

  10. Modeling Tree Shade Effect on Urban Ground Surface Temperature.

    PubMed

    Napoli, Marco; Massetti, Luciano; Brandani, Giada; Petralli, Martina; Orlandini, Simone

    2016-01-01

    There is growing interest in the role that urban forests can play as urban microclimate modifiers. Tree shade and evapotranspiration affect energy fluxes and mitigate microclimate conditions, with beneficial effects on human health and outdoor comfort. The aim of this study was to investigate surface temperature variability under the shade of different tree species and to test the capability of a proposed heat transfer model to predict surface temperature. Surface temperature data on asphalt and grass under different shading conditions were collected in the Cascine park, Florence, Italy, and were used to test the performance of a one-dimensional heat transfer model integrated with a routine for estimating the effect of plant canopies on surface heat transfer. Shading effects of 10 tree species commonly used in Italian urban settings were determined by considering the infrared radiation and the tree canopy leaf area index (LAI). The results indicate that, on asphalt, surface temperature was negatively related to the LAI of trees (reduction ranging from 13.8 to 22.8°C). On grass, this relationship was weaker, probably because of the combined effect of shade and grass evapotranspiration on surface temperature (reduction ranged from 6.9 to 9.4°C). A sensitivity analysis confirmed that other factors linked to soil water content play an important role in the surface temperature reduction of grassed areas. Our findings suggest that the energy balance model can be effectively used to estimate the surface temperature of urban pavement under different shading conditions and can be applied to the analysis of microclimate conditions of urban green spaces. PMID:26828170
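
    A minimal sketch of a surface energy balance with canopy shading, assuming a Beer-Lambert attenuation of shortwave with LAI and a linear sensible-heat term (coefficients and forcing values are invented; this is not the model used in the study):

```python
import numpy as np
from scipy.optimize import brentq

SIGMA = 5.67e-8   # Stefan-Boltzmann constant [W/m^2/K^4]

def surface_temperature(sw_open, lai, t_air_k, albedo=0.1, emiss=0.95,
                        h_conv=15.0, k_ext=0.6, lw_down=350.0):
    """Steady-state surface temperature under a tree canopy (illustrative sketch).

    Shortwave reaching the ground is attenuated by exp(-k_ext * LAI); the balance
    with downwelling/emitted longwave and a linear sensible-heat term is then
    solved for the surface temperature. lw_down is held fixed for simplicity.
    """
    sw_ground = sw_open * np.exp(-k_ext * lai)

    def residual(ts):
        return ((1 - albedo) * sw_ground + emiss * lw_down
                - emiss * SIGMA * ts ** 4 - h_conv * (ts - t_air_k))

    return brentq(residual, 250.0, 400.0)

for lai in (0.0, 2.0, 5.0):     # bare sun vs. increasingly dense canopies (assumed values)
    ts = surface_temperature(sw_open=850.0, lai=lai, t_air_k=303.0)
    print(f"LAI={lai:.1f}  T_surface = {ts - 273.15:.1f} deg C")
```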

  11. Modeling temperature and stress in rocks exposed to the sun

    NASA Astrophysics Data System (ADS)

    Hallet, B.; Mackenzie, P.; Shi, J.; Eppes, M. C.

    2012-12-01

    The potential contribution of solar-driven thermal cycling to the progressive breakdown of surface rocks on the Earth and other planets is recognized but understudied. To shed light on this contribution we have launched a collaborative study integrating modern instrumental and numerical approaches to define surface temperatures, stresses, strains, and microfracture activity in exposed boulders, and to characterize the thermo-mechanical response of boulders to diurnal solar exposure. The instrumental portion of our study is conducted by M. Eppes and coworkers, who have monitored the surface and environmental conditions of two ~30 cm dia. granite boulders (one in North Carolina, one in New Mexico) in the field for one and two years, respectively. Each boulder is instrumented with 8 thermocouples, 8 strain gauges, a surface moisture sensor and 6 acoustic emission (AE) sensors to monitor microfracture activity continuously and to locate it within 2.5 cm. Herein, we focus on the numerical modeling. Using a commercially available finite element program, MSC.Marc®2008r1, we have developed an adaptable, realistic thermo-mechanical model to investigate quantitatively the temporal and spatial distributions of both temperature and stress throughout a boulder. The model accounts for the effects of latitude and season (length of day and the sun's path relative to the object), atmospheric damping (reduction of solar radiation when traveling through the Earth's atmosphere), radiative interaction between the boulder and its surrounding soil, secondary heat exchange of the rock with air, and transient heat conduction in both rock and soil. Using representative thermal and elastic rock properties, as well as realistic representations of the size, shape and orientation of a boulder instrumented in the field in North Carolina, the model is validated by comparison with direct measurements of temperature and strain on the surface of one boulder exposed to the sun. Using the validated

  12. The primary target model of energetic ions penetration in thin botanic samples

    NASA Astrophysics Data System (ADS)

    Wang, Yugang; Du, Guanghua; Xue, Jianming; Liu, Feng; Wang, Sixue; Yan, Sha; Zhao, Weijiang

    2002-08-01

    Transmission spectra of very low current MeV H+ ions through two kinds of botanic samples, kidney bean slices and onion endocuticle, were measured. The experimental spectra confirmed that these botanic samples are inhomogeneous in mass density. A target model with a local density approximation was proposed to describe the penetration of energetic ions in this kind of material. From fits to the proton transmission spectra at two energies, this target model received a preliminary verification. By including the influence of surface roughness and irradiation damage, this target model could be improved to predict the penetration depth profile and range distribution of energetic ions in botanic samples.
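
    The local-density-approximation idea can be sketched numerically: if each ion traverses a locally varying areal density and loses energy in proportion to it, the transmitted-energy spectrum broadens with the density spread. The Python example below assumes a constant stopping power and a lognormal areal-density distribution purely for illustration; none of the numbers come from the measurements:

```python
import numpy as np

rng = np.random.default_rng(6)

def transmitted_energies(e0_mev, mean_rho_t, rho_sigma, stopping, n_ions=100_000):
    """Transmitted proton energies through a sample with fluctuating local density.

    mean_rho_t : mean areal density of the slice [mg/cm^2]
    rho_sigma  : relative spread of the local areal density (inhomogeneity)
    stopping   : assumed constant mass stopping power [MeV cm^2/mg]
    """
    rho_t = mean_rho_t * rng.lognormal(0.0, rho_sigma, n_ions)   # local areal density
    e_out = e0_mev - stopping * rho_t
    return np.clip(e_out, 0.0, None)   # ions stopped in the sample exit with E = 0

# Made-up numbers for a ~2 MeV proton beam and a thin botanic slice:
e_out = transmitted_energies(e0_mev=2.0, mean_rho_t=3.0, rho_sigma=0.3, stopping=0.15)
print(f"mean exit energy {e_out.mean():.2f} MeV, spread {e_out.std():.2f} MeV")
```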

  13. Models to estimate the minimum ignition temperature of dusts and hybrid mixtures.

    PubMed

    Addai, Emmanuel Kwasi; Gabel, Dieter; Krause, Ulrich

    2016-03-01

    The minimum ignition temperatures (MIT) of hybrid mixtures have been investigated by performing several series of tests in a modified Godbert-Greenwald furnace. Five dusts as well as three perfect gases and three real were used in different combinations as test samples. Further, seven mathematical models for prediction of the MIT of dust/air mixtures were presented, of which three were chosen for deeper study and comparison with the experimental results, based on the availability of the required input quantities and their applicability. Additionally, two alternative models were proposed to calculate the MIT of hybrid mixtures and were validated against the experimental results. A significant decrease of the minimum ignition temperature of either the gas or the vapor, as well as an increase in the explosion likelihood, could be observed when a small amount of dust which was either below its minimum explosible concentration or not ignitable itself at that particular temper