NASA Astrophysics Data System (ADS)
Wittmer, J. P.; Xu, H.; Polińska, P.; Weysser, F.; Baschnagel, J.
2013-03-01
The shear modulus G of two glass-forming colloidal model systems in d = 3 and d = 2 dimensions is investigated by means of, respectively, molecular dynamics and Monte Carlo simulations. Comparing ensembles where either the shear strain γ or the conjugated (mean) shear stress τ are imposed, we compute G from the respective stress and strain fluctuations as a function of temperature T while keeping a constant normal pressure P. The choice of the ensemble is seen to be highly relevant for the shear stress fluctuations μF(T) which at constant τ decay monotonously with T following the affine shear elasticity μA(T), i.e., a simple two-point correlation function. At variance, non-monotonous behavior with a maximum at the glass transition temperature Tg is demonstrated for μF(T) at constant γ. The increase of G below Tg is reasonably fitted for both models by a continuous cusp singularity, G(T)∝(1 - T/Tg)1/2, in qualitative agreement with recent theoretical predictions. It is argued, however, that longer sampling times may lead to a sharper transition.
Aerosol Sampling Models Survey
1993-03-01
Particle Sizes ) for Inlet Region of Aerosol Sampling Train .......... ................ .. 49 8 Model Efficiency Calculations (Polydispersed Particle ...0 As the MMAD particle size increases, the sampling efficiency decreases. As the flow rate increases, the sampling efficiency decreases. However, the...70 93.9 88.0 N/A 7. 5 900 130 99.5 99.2 N/A 8 . 15 900 130 94.4 93.2 N/A Table 10. Model Efficiency Calculations (Polydispersed Particle Sizes ) for
Recommended Maximum Temperature For Mars Returned Samples
NASA Technical Reports Server (NTRS)
Beaty, D. W.; McSween, H. Y.; Czaja, A. D.; Goreva, Y. S.; Hausrath, E.; Herd, C. D. K.; Humayun, M.; McCubbin, F. M.; McLennan, S. M.; Hays, L. E.
2016-01-01
The Returned Sample Science Board (RSSB) was established in 2015 by NASA to provide expertise from the planetary sample community to the Mars 2020 Project. The RSSB's first task was to address the effect of heating during acquisition and storage of samples on scientific investigations that could be expected to be conducted if the samples are returned to Earth. Sample heating may cause changes that could ad-versely affect scientific investigations. Previous studies of temperature requirements for returned mar-tian samples fall within a wide range (-73 to 50 degrees Centigrade) and, for mission concepts that have a life detection component, the recommended threshold was less than or equal to -20 degrees Centigrade. The RSSB was asked by the Mars 2020 project to determine whether or not a temperature requirement was needed within the range of 30 to 70 degrees Centigrade. There are eight expected temperature regimes to which the samples could be exposed, from the moment that they are drilled until they are placed into a temperature-controlled environment on Earth. Two of those - heating during sample acquisition (drilling) and heating while cached on the Martian surface - potentially subject samples to the highest temperatures. The RSSB focused on the upper temperature limit that Mars samples should be allowed to reach. We considered 11 scientific investigations where thermal excursions may have an adverse effect on the science outcome. Those are: (T-1) organic geochemistry, (T-2) stable isotope geochemistry, (T-3) prevention of mineral hydration/dehydration and phase transformation, (T-4) retention of water, (T-5) characterization of amorphous materials, (T-6) putative Martian organisms, (T-7) oxidation/reduction reactions, (T-8) (sup 4) He thermochronometry, (T-9) radiometric dating using fission, cosmic-ray or solar-flare tracks, (T-10) analyses of trapped gasses, and (T-11) magnetic studies.
Temperature Calibration for Sample Heating in Ultrahigh Vacuum
NASA Astrophysics Data System (ADS)
Wheelwright, Heidi; Shen, T. C.
2006-10-01
Precision temperature measurement is a challenge for ultrahigh vacuum sample preparations. Thermocouples and pyrometers can be used to measure the temperature of samples, but these two techniques need calibration. We have made a mathematical model to calibrate the thermocouple readings with the pyrometer readings. This model is based on equations considering the input power and the heat loss by conduction and radiation. The heat conduction constant is determined from pyrometer temperature measurements at various power inputs. Given any input power, this model will return a temperature value that agrees very closely to the thermocouple readings which have been calibrated with the pyrometer.
Multiphoton cryo microscope with sample temperature control
NASA Astrophysics Data System (ADS)
Breunig, H. G.; Uchugonova, A.; König, K.
2013-02-01
We present a multiphoton microscope system which combines the advantages of multiphoton imaging with precise control of the sample temperature. The microscope provides online insight in temperature-induced changes and effects in plant tissue and animal cells with subcellular resolution during cooling and thawing processes. Image contrast is based on multiphoton fluorescence intensity or fluorescence lifetime in the range from liquid nitrogen temperature up to +600°C. In addition, micro spectra from the imaged regions can be recorded. We present measurement results from plant leaf samples as well as Chinese hamster ovary cells.
Coercivity maxima at low temperatures. [of lunar samples
NASA Technical Reports Server (NTRS)
Schwerer, F. C.; Nagata, T.
1974-01-01
Recent measurements have shown that the magnetic coercive forces of some Apollo lunar samples show an unexpected decrease with decreasing temperature at cryogenic temperatures. This behavior can be explained quantitatively in terms of a model which considers additive contributions from a soft, reversible magnetic phase and from a harder, hysteretic magnetic phase.
Estimation of river and stream temperature trends under haphazard sampling
Gray, Brian R.; Lyubchich, Vyacheslav; Gel, Yulia R.; Rogala, James T.; Robertson, Dale M.; Wei, Xiaoqiao
2015-01-01
Long-term temporal trends in water temperature in rivers and streams are typically estimated under the assumption of evenly-spaced space-time measurements. However, sampling times and dates associated with historical water temperature datasets and some sampling designs may be haphazard. As a result, trends in temperature may be confounded with trends in time or space of sampling which, in turn, may yield biased trend estimators and thus unreliable conclusions. We address this concern using multilevel (hierarchical) linear models, where time effects are allowed to vary randomly by day and date effects by year. We evaluate the proposed approach by Monte Carlo simulations with imbalance, sparse data and confounding by trend in time and date of sampling. Simulation results indicate unbiased trend estimators while results from a case study of temperature data from the Illinois River, USA conform to river thermal assumptions. We also propose a new nonparametric bootstrap inference on multilevel models that allows for a relatively flexible and distribution-free quantification of uncertainties. The proposed multilevel modeling approach may be elaborated to accommodate nonlinearities within days and years when sampling times or dates typically span temperature extremes.
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Poirson, Allen
1987-01-01
A model is described for positioning cones in the retina. Each cone has a circular disk of influence, and the disks are tightly packed outward from the center. This model has three parameters that can vary with eccentricity: the mean radius of the cone disk, the standard deviation of the cone disk radius, and the standard deviation of postpacking jitter. Estimates for these parameters out to 1.6 deg are found by using measurements reported by Hirsch and Hylton (1985) and Hirsch and Miller (1987) of the positions of the cone inner segments of an adult macaque. The estimation is based on fitting measures of variation in local intercone distances, and the fit to these measures is good.
Temperature effects: methane generation from landfill samples
Hartz, K.E.; Klink, R.E.; Ham, R.K.
1982-08-01
The objective of the investigation described to study the impact of temperature variations on the rate of methane generation from solid waste. The temperatures investigated ranged from 21/sup 0/C to 46/sup 0/C. Two approaches were applied. These were short term residence at seven different temperatures and intermediate term residence at two different temperatures. From the short term results, energy of activation values of 22.4 kilo calories per mole to 23.7 kilo calories per mole were calculated. The temperature of 41/sup 0/C was found to be the optimum for methane generation on a short term basis. 8 refs.
Temperature effects: methane generation from landfill samples
Hartz, K.E.; Klink, R.E.; Ham, R.K.
1983-08-01
An understanding of the breakdown of municipal solid wastes into gaseous products, especially methane, is important. Landfills act as batch anaerobic digestors. Temperature, one of the variables that affect digestion is examined. Since very minor temperature changes can be accomplished with rather substantial changes in methane generation, the generation rate can be modified to match the capacity of the gas recovery system. The effects of moisture content, oxygen supply, liquid solid ratios and bacterial acclimation are mentioned.
Apparatus for low temperature thermal desorption spectroscopy of portable samples.
Stuckenholz, S; Büchner, C; Ronneburg, H; Thielsch, G; Heyde, M; Freund, H-J
2016-04-01
An experimental setup for low temperature thermal desorption spectroscopy (TDS) integrated in an ultrahigh vacuum-chamber housing a high-end scanning probe microscope for comprehensive multi-tool surface science analysis is described. This setup enables the characterization with TDS at low temperatures (T > 22 K) of portable sample designs, as is the case for scanning probe optimized setups or high-throughput experiments. This combination of techniques allows a direct correlation between surface morphology, local spectroscopy, and reactivity of model catalysts. The performance of the multi-tool setup is illustrated by measurements of a model catalyst. TDS of CO from Mo(001) and from Mo(001) supported MgO thin films were carried out and combined with scanning tunneling microscopy measurements.
Apparatus for low temperature thermal desorption spectroscopy of portable samples
NASA Astrophysics Data System (ADS)
Stuckenholz, S.; Büchner, C.; Ronneburg, H.; Thielsch, G.; Heyde, M.; Freund, H.-J.
2016-04-01
An experimental setup for low temperature thermal desorption spectroscopy (TDS) integrated in an ultrahigh vacuum-chamber housing a high-end scanning probe microscope for comprehensive multi-tool surface science analysis is described. This setup enables the characterization with TDS at low temperatures (T > 22 K) of portable sample designs, as is the case for scanning probe optimized setups or high-throughput experiments. This combination of techniques allows a direct correlation between surface morphology, local spectroscopy, and reactivity of model catalysts. The performance of the multi-tool setup is illustrated by measurements of a model catalyst. TDS of CO from Mo(001) and from Mo(001) supported MgO thin films were carried out and combined with scanning tunneling microscopy measurements.
Proximity effect thermometer for local temperature measurements on mesoscopic samples.
Aumentado, J.; Eom, J.; Chandrasekhar, V.; Baldo, P. M.; Rehn, L. E.; Materials Science Division; Northwestern Univ; Univ. of Chicago
1999-11-29
Using the strong temperature-dependent resistance of a normal metal wire in proximity to a superconductor, we have been able to measure the local temperature of electrons heated by flowing a direct-current (dc) in a metallic wire to within a few tens of millikelvin at low temperatures. By placing two such thermometers at different parts of a sample, we have been able to measure the temperature difference induced by a dc flowing in the samples. This technique may provide a flexible means of making quantitative thermal and thermoelectric measurements on mesoscopic metallic samples.
Sample - Based Material Structure Modeling
NASA Astrophysics Data System (ADS)
Liu, Xingchen
The paradigm of Sample-based Material Structure Modeling is proposed to facilitate the design and manufacturing of material structures towards desired mechanical properties. By modeling material structure samples via a Markov random field, the proposed paradigm views material structure as a collection of neighborhoods. The abstraction facilitates the reconstruction of both periodic and stochastic material structures and extends to the reconstruction and design of spatially varying material structures, a principal mechanism for creating and controlling spatially varying material properties in nature and engineering. The spatially varying material properties are represented and controlled using the notion of material descriptors which include common geometric, statistical, and topological measures such as correlation functions and Minkowski functionals. The proposed method is of particular advantage in preserving the microscopic geometry and related properties of the material structure sample while achieving target macroscopic property distributions during the design of material structures. For material structures that exhibit anisotropy, properly oriented neighborhoods could greatly enhance the efficiency of the material. The expansion of the design space to include the rotation of neighborhoods is appropriate when the properties that need to be preserved can be safely regarded as rotation invariant. With the assumption of orthotropic symmetry, an automatic way to determine the principal axes of neighborhoods for material structure samples with stochastic orientations is proposed. A Green's function based homogenization method is investigated for the efficient evaluation of the mechanical properties of neighborhoods. The formulated integral equation is converted into a system of linear equations which is shown to be symmetric and positive definite with the appropriate reference material properties and can be solved efficiently using the conjugate gradient method
Stocker, Matthias; Pfeifer, Holger; Koslowski, Berndt
2014-05-15
The temperature of the electrodes is a crucial parameter in virtually all tunneling experiments. The temperature not only controls the thermodynamic state of the electrodes but also causes thermal broadening, which limits the energy resolution. Unfortunately, the construction of many scanning tunneling microscopes inherits a weak thermal link between tip and sample in order to make one side movable. Such, the temperature of that electrode is badly defined. Here, the authors present a procedure to calibrate the tip temperature by very simple means. The authors use a superconducting sample (Nb) and a standard tip made from W. Due to the asymmetry in the density of states of the superconductor (SC)—normal metal (NM) tunneling junction, the SC temperature controls predominantly the density of states while the NM controls the thermal smearing. By numerically simulating the I-V curves and numerically optimizing the tip temperature and the SC gap width, the tip temperature can be accurately deduced if the sample temperature is known or measureable. In our case, the temperature dependence of the SC gap may serve as a temperature sensor, leading to an accurate NM temperature even if the SC temperature is unknown.
Helium Pot System for Maintaining Sample Temperature after Cryocooler Deactivation
Haid, B J
2005-01-26
A system for maintaining a sample at a constant temperature below 10K after deactivating the cooling source is demonstrated. In this system, the cooling source is a GM cryocooler that is joined with the sample through an adaptor that consists of a helium pot and a resistive medium. Upon deactivating the cryocooler, the power applied to a heater located on the sample side of the resistive medium is decreased gradually to maintain an appropriate temperature rise across the resistive medium as the helium pot warms. The temperature is held constant in this manner without the use of solid or liquid cryogens and without mechanically disconnecting the sample from the cooler. Shutting off the cryocooler significantly reduces sample motion that results from vibration and expansion/contraction of the cold head housing. The reduction in motion permits certain processes that are very sensitive to sample position stability, but are not performed throughout the duration that the sample is at low-temperature. An apparatus was constructed to demonstrate this technique using a 4K GM cryocooler. Experimental and theoretical predictions indicate that when the helium pot is pressurized to the working pressure of the cryocooler's helium supply, a sample with continuous heat dissipation of several-hundred milliwatts can be maintained at 7K for several minutes when using an extension that increases the cold head length by less than 50%.
Radiation and temperature effects on LDEF fiber optic samples
NASA Technical Reports Server (NTRS)
Johnston, A. R.; Hartmayer, R.; Bergman, L. A.
1993-01-01
Results obtained from the JPL Fiber Optics Long Duration Exposure Facility (LDEF) Experiment since the June 1991 Experimenters' Workshop are addressed. Radiation darkening of laboratory control samples and the subsequent annealing was measured in the laboratory for the control samples. The long-time residual loss was compared to the LDEF flight samples and found to be in agreement. The results of laboratory temperature tests on the flight samples, extending over a period of about nine years, including the pre-flight and post-flight analysis periods, are described. The temperature response of the different cable samples varies widely, and appears in two samples to be affected by polymer aging. Conclusions to date are summarized.
Huh, Joonsuk; Yung, Man-Hong
2017-08-07
Molecular vibroic spectroscopy, where the transitions involve non-trivial Bosonic correlation due to the Duschinsky Rotation, is strongly believed to be in a similar complexity class as Boson Sampling. At finite temperature, the problem is represented as a Boson Sampling experiment with correlated Gaussian input states. This molecular problem with temperature effect is intimately related to the various versions of Boson Sampling sharing the similar computational complexity. Here we provide a full description to this relation in the context of Gaussian Boson Sampling. We find a hierarchical structure, which illustrates the relationship among various Boson Sampling schemes. Specifically, we show that every instance of Gaussian Boson Sampling with an initial correlation can be simulated by an instance of Gaussian Boson Sampling without initial correlation, with only a polynomial overhead. Since every Gaussian state is associated with a thermal state, our result implies that every sampling problem in molecular vibronic transitions, at any temperature, can be simulated by Gaussian Boson Sampling associated with a product of vacuum modes. We refer such a generalized Gaussian Boson Sampling motivated by the molecular sampling problem as Vibronic Boson Sampling.
Rotating sample magnetometer for cryogenic temperatures and high magnetic fields.
Eisterer, M; Hengstberger, F; Voutsinas, C S; Hörhager, N; Sorta, S; Hecher, J; Weber, H W
2011-06-01
We report on the design and implementation of a rotating sample magnetometer (RSM) operating in the variable temperature insert (VTI) of a cryostat equipped with a high-field magnet. The limited space and the cryogenic temperatures impose the most critical design parameters: the small bore size of the magnet requires a very compact pick-up coil system and the low temperatures demand a very careful design of the bearings. Despite these difficulties the RSM achieves excellent resolution at high magnetic field sweep rates, exceeding that of a typical vibrating sample magnetometer by about a factor of ten. In addition the gas-flow cryostat and the high-field superconducting magnet provide a temperature and magnetic field range unprecedented for this type of magnetometer. © 2011 American Institute of Physics
High-temperature constitutive modeling
NASA Technical Reports Server (NTRS)
Robinson, D. N.; Ellis, J. R.
1984-01-01
Thermomechanical service conditions for high-temperature levels, thermal transients, and mechanical loads severe enough to cause measurable inelastic deformation are studied. Structural analysis in support of the design of high-temperature components depends strongly on accurate mathematical representations of the nonlinear, hereditary, inelastic behavior of structural alloys at high temperature, particularly in the relatively small strain range. Progress is discussed in the following areas: multiaxial experimentation to provide a basis for high-temperature multiaxial constitutive relationships; nonisothermal testing and theoretical development toward a complete thermomechanically path dependent formulation of viscoplasticity; and development of viscoplastic constitutive model accounting for initial anisotropy.
The effects of spatial sampling choices on MR temperature measurements.
Todd, Nick; Vyas, Urvi; de Bever, Josh; Payne, Allison; Parker, Dennis L
2011-02-01
The purpose of this article is to quantify the effects that spatial sampling parameters have on the accuracy of magnetic resonance temperature measurements during high intensity focused ultrasound treatments. Spatial resolution and position of the sampling grid were considered using experimental and simulated data for two different types of high intensity focused ultrasound heating trajectories (a single point and a 4-mm circle) with maximum measured temperature and thermal dose volume as the metrics. It is demonstrated that measurement accuracy is related to the curvature of the temperature distribution, where regions with larger spatial second derivatives require higher resolution. The location of the sampling grid relative temperature distribution has a significant effect on the measured values. When imaging at 1.0 × 1.0 × 3.0 mm(3) resolution, the measured values for maximum temperature and volume dosed to 240 cumulative equivalent minutes (CEM) or greater varied by 17% and 33%, respectively, for the single-point heating case, and by 5% and 18%, respectively, for the 4-mm circle heating case. Accurate measurement of the maximum temperature required imaging at 1.0 × 1.0 × 3.0 mm(3) resolution for the single-point heating case and 2.0 × 2.0 × 5.0 mm(3) resolution for the 4-mm circle heating case.
Temperature data for phenological models.
Snyder, R L; Spano, D; Duce, P; Cesaraccio, C
2001-11-01
In an arid environment, the effect of evaporation on energy balance can affect air temperature recordings and greatly impact on degree-day calculations. This is an important consideration when choosing a site or climate data for phenological models. To our knowledge, there is no literature showing the effect of the underlying surface and its fetch around a weather station on degree-day accumulations. In this paper, we present data to show that this is a serious consideration, and it can lead to dubious models. Microscale measurements of temperature and energy balance are presented to explain why the differences occur. For example, the effect of fetch of irrigated grass and wetting of bare soil around a weather station on diurnal temperature are reported. A 43-day experiment showed that temperature measured on the upwind edge of an irrigated grass area averaged 4% higher than temperatures recorded 200 m inside the grass field. When the single-triangle method was used with a 10 degrees C threshold and starting on May 19, the station on the upwind edge recorded 900 degree-days on June 28, whereas the interior station recorded 900 degree-days on July 1. Clearly, a difference in fetch can lead to big errors for large degree-day accumulations. Immediately after wetting, the temperature over a wet soil surface was similar to that measured over grass. However, the temperature over the soil increased more than that over the grass as the soil surface dried. Therefore, the observed difference between temperatures measured over bare soil and those over grass increases with longer periods between wettings. In most arid locations, measuring temperature over irrigated grass gives a lower mean annual temperature, resulting in lower annual cumulative degree-day values. This was verified by comparing measurements over grass with those over bare soil at several weather stations in a range of climates. To eliminate the effect of rainfall frequency, using temperature data collected
Ultra sound absorption measurements in rock samples at low temperatures
NASA Technical Reports Server (NTRS)
Herminghaus, C.; Berckhemer, H.
1974-01-01
A new technique, comparable with the reverberation method in room acoustics, is described. It allows Q-measurements at rock samples of arbitrary shape in the frequency range of 50 to 600 kHz in vacuum (.1 mtorr) and at low temperatures (+20 to -180 C). The method was developed in particular to investigate rock samples under lunar conditions. Ultrasound absorption has been measured at volcanics, breccia, gabbros, feldspar and quartz of different grain size and texture yielding the following results: evacuation raises Q mainly through lowering the humidity in the rock. In a dry compact rock, the effect of evacuation is small. With decreasing temperature, Q generally increases. Between +20 and -30 C, Q does not change much. With further decrease of temperature in many cases distinct anomalies appear, where Q becomes frequency dependent.
Ultra sound absorption measurements in rock samples at low temperatures
NASA Technical Reports Server (NTRS)
Herminghaus, C.; Berckhemer, H.
1974-01-01
A new technique, comparable with the reverberation method in room acoustics, is described. It allows Q-measurements at rock samples of arbitrary shape in the frequency range of 50 to 600 kHz in vacuum (.1 mtorr) and at low temperatures (+20 to -180 C). The method was developed in particular to investigate rock samples under lunar conditions. Ultrasound absorption has been measured at volcanics, breccia, gabbros, feldspar and quartz of different grain size and texture yielding the following results: evacuation raises Q mainly through lowering the humidity in the rock. In a dry compact rock, the effect of evacuation is small. With decreasing temperature, Q generally increases. Between +20 and -30 C, Q does not change much. With further decrease of temperature in many cases distinct anomalies appear, where Q becomes frequency dependent.
Sample container temperature gradient influence on the BET specific surface area.
Badalyan, Alexander; Pendleton, Phillip
2005-03-15
Differences between BET specific surface area (BET SSA) values exist due to data collected in stainless steel and less thermally conductive sample holders. Not accounting for the temperature gradient along stainless steel sample holders during manometric gas adsorption measurements at cryogenic temperatures leads to errors of up to 3.2% in the BET SSA values with a relative combined standard uncertainty (RCSU) of 0.63%. A unidimensional heat flow model accurately accounts for the temperature gradient, leading to an agreement of 0.16% between the BET SSA values for both sample holder units.
Yamamori, Yu; Kitao, Akio
2013-10-14
A new and efficient conformational sampling method, MuSTAR MD (Multi-scale Sampling using Temperature Accelerated and Replica exchange Molecular Dynamics), is proposed to calculate the free energy landscape on a space spanned by a set of collective variables. This method is an extension of temperature accelerated molecular dynamics and can also be considered as a variation of replica-exchange umbrella sampling. In the MuSTAR MD, each replica contains an all-atom fine-grained model, at least one coarse-grained model, and a model defined by the collective variables that interacts with the other models in the same replica through coupling energy terms. The coarse-grained model is introduced to drive efficient sampling of large conformational space and the fine-grained model can serve to conduct more accurate conformational sampling. The collective variable model serves not only to mediate the coarse- and fine-grained models, but also to enhance sampling efficiency by temperature acceleration. We have applied this method to Ala-dipeptide and examined the sampling efficiency of MuSTAR MD in the free energy landscape calculation compared to that for replica exchange molecular dynamics, replica exchange umbrella sampling, temperature accelerated molecular dynamics, and conventional MD. The results clearly indicate the advantage of sampling a relatively high energy conformational space, which is not sufficiently sampled with other methods. This feature is important in the investigation of transition pathways that go across energy barriers. MuSTAR MD was also applied to Met-enkephalin as a test case in which two Gō-like models were employed as the coarse-grained model.
Dual-temperature acoustic levitation and sample transport apparatus
NASA Technical Reports Server (NTRS)
Trinh, E.; Robey, J.; Jacobi, N.; Wang, T.
1986-01-01
The properties of a dual-temperature resonant chamber to be used for acoustical levitation and positioning have been theoretically and experimentally studied. The predictions of a first-order dissipationless treatment of the generalized wave equation for an inhomogeneous medium are in close agreement with experimental results for the temperature dependence of the resonant mode spectrum and the acoustic pressure distribution, although the measured magnitude of the pressure variations does not correlate well with the calculated one. Ground-based levitation of low-density samples has been demonstrated at 800 C, where steady-state forces up to 700 dyn were generated.
Dual-temperature acoustic levitation and sample transport apparatus
NASA Technical Reports Server (NTRS)
Trinh, E.; Robey, J.; Jacobi, N.; Wang, T.
1986-01-01
The properties of a dual-temperature resonant chamber to be used for acoustical levitation and positioning have been theoretically and experimentally studied. The predictions of a first-order dissipationless treatment of the generalized wave equation for an inhomogeneous medium are in close agreement with experimental results for the temperature dependence of the resonant mode spectrum and the acoustic pressure distribution, although the measured magnitude of the pressure variations does not correlate well with the calculated one. Ground-based levitation of low-density samples has been demonstrated at 800 C, where steady-state forces up to 700 dyn were generated.
Optimal temperature sampling with SPOTS to improve acoustic predictions
NASA Astrophysics Data System (ADS)
Rike, Erik R.; Delbalzo, Donald R.; Samuels, Brian C.
2003-10-01
The Modular Ocean Data Assimilation System (MODAS) uses optimal interpolation to assimilate data (e.g., XBTs), and to create temperature nowcasts and associated uncertainties. When XBTs are dropped in a uniform grid (during surveys) or in random patterns and spaced according to resources available their assimilation can lead to nowcast errors in complex, littoral regions, especially when only a few measurements are available. To mitigate, Sensor Placement for Optimal Temperature Sampling (SPOTS) [Rike and DelBalzo, Proc. IEEE Oceans (2003)] was developed to rapidly optimize placement of a few XBTs and to maximize MODAS accuracy. This work involves high-density, in situ data assimilation into MODAS to create a ground-truth temperature field from which a ground-truth transmission loss field was computed. Optimal XBT location sets were chosen by SPOTS, based on original MODAS uncertainties, and additional sets were chosen, based on subjective choices by an oceanographer. For each XBT set, a MODAS temperature nowcast and associated transmission losses were computed. This work discusses the relationship between temperature uncertainty, temperature error, and acoustic error for the objective SPOTS approach and the subjective oceanographer approach. The SPOTS approach allowed significantly more accurate acoustic calculations, especially when few XBTS were used. [Work sponsored by NAVAIR.
Fluorescence temperature sensing on rotating samples in the cryogenic range
NASA Astrophysics Data System (ADS)
Bresson, F.; Devillers, R.
1999-07-01
A surface temperature measurement technique for rotating samples is proposed. It is based on the concept of fluorescence thermometry. The fluorescent and phosphorescent phenomena have been applied in thermometry for ambient and high-temperature measurement but not the cryogenic domain, which is explored using thermocouple- or platinum resistor-based thermometers. However, thermal behavior of Yb2+ ions in fluoride matrices seems to be interesting for thermometry in the range 20-120 K. We present here a remote sensing method which uses fluorescence behavior of Yb2+ ion-doped fluoride crystals. The fluorescence decay time of such crystals is related to its temperature. Since we developed a specific sol-gel process (OrMoSils) to make strongly adherent fluorescent layers, we applied the fluorescence thermometry method for rotating object surface temperature measurement. The main application is the monitoring of surface temperature of the ball bearing or turbopump axis in liquid propulsion rocket engines. Our method is presented and discussed, and we give some experimental results. An accurate calibration of the decay time of CaF2:Yb2+ versus temperature is also given.
Accurate sampling of PCDD/F in high temperature flue-gas using cooled sampling probes.
Phan, Duong Ngoc Chau; Weidemann, Eva; Lundin, Lisa; Marklund, Stellan; Jansson, Stina
2012-08-01
In a laboratory-scale combustion reactor, flue-gas samples were collected at two temperatures in the post-combustion zone, 700°C and 400°C, using two different water-cooled sampling probes. The probes were the cooled probe described in the European Standard method EN-1948:1, referred to as the original probe, and a modified probe that contained a salt/ice mixture to assist the cooling, referred to as the sub-zero probe. To determine the efficiency of the cooling probes, internal temperature measurements were recorded at 5cm intervals inside the probes. Flue-gas samples were analyzed for polychlorinated dibenzo-p-dioxin and dibenzofurans (PCDD/Fs). Samples collected at 700°C using the original cooling probe showed higher concentrations of PCDD/Fs compared to samples collected using the sub-zero probe. No significant differences were observed between samples collected at 400°C. The results indicated that artifact formation of PCDD/Fs readily occurs during flue-gas sampling at high temperatures if the cooling within the probe is insufficient, as found for the original probe at 700°C. It was also shown that this problem could be alleviated by using probes with an enhanced cooling capacity, such as the sub-zero probe. Although this may not affect samples collected for regulatory purposes in exit gases, it is of great importance for research conducted in the high-temperature region of the post-combustion zone. Copyright © 2012 Elsevier Ltd. All rights reserved.
A low temperature scanning force microscope for biological samples
Gustafsson, Mats Gustaf Lennart
1993-05-01
An SFM has been constructed capable of operating at 143 K. Two contributions to SFM technology are described: a new method of fabricating tips, and new designs of SFM springs that significantly lower the noise level. The SFM has been used to image several biological samples (including collagen, ferritin, RNA, purple membrane) at 143 K and room temperature. No improvement in resolution resulted from 143 K operation; several possible reasons for this are discussed. Possibly sharper tips may help. The 143 K SFM will allow the study of new categories of samples, such as those prepared by freeze-frame, single molecules (temperature dependence of mechanical properties), etc. The SFM was used to cut single collagen molecules into segments with a precision of {le} 10 nm.
NASA Astrophysics Data System (ADS)
Kaźmierczak-Bałata, Anna; Juszczyk, Justyna; Trefon-Radziejewska, Dominika; Bodzenta, Jerzy
2017-03-01
The purpose of this work is to investigate the influence of a temperature difference through a probe-sample contact on thermal contrast in Scanning Thermal Microscopy imaging. A variety of combinations of temperature differences in the probe-sample system were first analyzed based on an electro-thermal finite element model. The numerical analysis included cooling the sample, as well as heating the sample and the probe. Due to the simplicity in the implementation, experimental verification involved modifying the standard imaging technique by heating the sample. Experiments were carried out in the temperature range between 298 K and 328 K. Contrast in thermal mapping was improved for a low probe current with a heated sample.
Microcalorimetry: Wide Temperature Range, High Field, Small Sample Measurements
NASA Astrophysics Data System (ADS)
Hellman, Frances
2000-03-01
We have used Si micromachining techniques to fabricate devices for measuring specific heat or other calorimetric signals from microgram-quantity samples over a temperature range from 1 to 900K in magnetic fields to date up to 8T. The devices are based on a relatively robust silicon nitride membrane with thin film heaters and thermometers. Different types of thermometers are used for different purposes and in different temperature ranges. These devices are particularly useful for thin film samples (typically 200-400 nm thick at present) deposited directly onto the membrane through a Si micromachined evaporation mask. They have also been used for small single crystal samples attached by conducting grease or solder, and for powder samples dissolved in a solvent and dropped onto devices. The measurement technique used (relaxation method) is particularly suited to high field measurements because the thermal conductance can be measured once in zero field and is field independent, while the time constant of the relaxation does not depend on thermometer calibration. Present development efforts include designs which show promise for time-resolved calorimetry measurements of biological samples in small amounts of water. Samples measured to date include amorphous magnetic thin films (a-TbFe2 and giant negative magnetoresistance a-Gd-Si alloys), empty and filled fullerenes (C_60, K_3C_60, C_82, La@C_82, C_84, and Sc_2@C_84), single crystal manganites (La_1-xSr_xMnO_3), antiferromagnetic multilayers (NiO/CoO, NiO/MgO, and CoO/MgO), and nanoparticle magnetic materials (CoO in a Ag matrix).
Sampling Errors in Satellite-derived Infrared Sea Surface Temperatures
NASA Astrophysics Data System (ADS)
Liu, Y.; Minnett, P. J.
2014-12-01
Sea Surface Temperature (SST) measured from satellites has been playing a crucial role in understanding geophysical phenomena. Generating SST Climate Data Records (CDRs) is considered to be the one that imposes the most stringent requirements on data accuracy. For infrared SSTs, sampling uncertainties caused by cloud presence and persistence generate errors. In addition, for sensors with narrow swaths, the swath gap will act as another sampling error source. This study is concerned with quantifying and understanding such sampling errors, which are important for SST CDR generation and for a wide range of satellite SST users. In order to quantify these errors, a reference Level 4 SST field (Multi-scale Ultra-high Resolution SST) is sampled by using realistic swath and cloud masks of Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Along Track Scanning Radiometer (AATSR). Global and regional SST uncertainties are studied by assessing the sampling error at different temporal and spatial resolutions (7 spatial resolutions from 4 kilometers to 5.0° at the equator and 5 temporal resolutions from daily to monthly). Global annual and seasonal mean sampling errors are large in the high latitude regions, especially the Arctic, and have geographical distributions that are most likely related to stratus clouds occurrence and persistence. The region between 30°N and 30°S has smaller errors compared to higher latitudes, except for the Tropical Instability Wave area, where persistent negative errors are found. Important differences in sampling errors are also found between the broad and narrow swath scan patterns and between day and night fields. This is the first time that realistic magnitudes of the sampling errors are quantified. Future improvement in the accuracy of SST products will benefit from this quantification.
Thermal modeling of core sampling in flammable gas waste tanks. Part 1: Push-mode sampling
Unal, C.; Stroh, K.; Pasamehmetoglu, K.O.
1997-08-01
The radioactive waste stored in underground storage tanks at Hanford site is routinely being sampled for waste characterization purposes. The push- and rotary-mode core sampling is one of the sampling methods employed. The waste includes mixtures of sodium nitrate and sodium nitrite with organic compounds that can produce violent exothermic reactions if heated above 160 C during core sampling. A self-propagating waste reaction would produce very high temperatures that eventually result in failure of the tank and radioactive material releases to environment. A two-dimensional thermal model based on a lumped finite volume analysis method is developed. The enthalpy of each node is calculated from the first law of thermodynamics. A flash temperature and effective contact area concept were introduced to account the interface temperature rise. No maximum temperature rise exceeding the critical value of 60 C was found in the cases studied for normal operating conditions. Several accident conditions are also examined. In these cases it was found that the maximum drill bit temperature remained below the critical reaction temperature as long as a 30 scfm purge flow is provided the push-mode drill bit during sampling in rotary mode. The failure to provide purge flow resulted in exceeding the limiting temperatures in a relatively short time.
Temperature field in Graphite-Silicon-Graphite samples heated in monoellipsoidal mirror furnaces
NASA Astrophysics Data System (ADS)
Rivas, Damián; Haya, Rodrigo
1999-01-01
The heating of cylindrical compound samples in monoellipsoidal mirror furnaces is analyzed by means of a conduction-radiation model that includes the radiative exchange between the sample and the mirror, and that takes into account the temperature dependence of the physical properties of the materials that form the sample. Graphite-Silicon-Graphite samples are considered. The melting of the Silicon part, and the temperature difference between the two Graphite rods that hold the Silicon melt zone are analyzed. The relative position of the Silicon part in the compound sample turns out to be a very sensitive parameter: it affects (1) the power needed to melt the Silicon zone, and (2) the temperature difference between the solid Graphite rods.
Advances in downhole sampling of high temperature solutions
Bayhurst, G.K.; Janecky, D.R.
1991-01-01
A fluid sampler capable of sampling hot and/or deep wells has been developed at Los Alamos National Laboratory. In collaboration with Leutert Instruments, an off-the-shelf sampler design was modified to meet gas-tight and minimal chemical reactivity/contamination specifications for use in geothermal wells and deep ocean drillholes. This downhole sampler has been routinely used at temperatures up to 300{degrees}C and hole depths of greater than 5 km. We have tested this sampler in various continental wells, including Valles Caldera VC-2a and VC-2b, German KTB, Cajon Pass, and Yellowstone Y-10. Both the standard commercial and enhanced samplers have also been used to obtain samples from a range of depths in the Ocean Drilling Project's hole 504B and during recent mid-ocean ridge drilling efforts. The sampler has made it possible to collect samples at temperatures and conditions beyond the limits of other tools with the added advantage of chemical corrosion resistance.
High temperature furnace modeling and performance verifications
NASA Technical Reports Server (NTRS)
Smith, James E., Jr.
1992-01-01
Analytical, numerical, and experimental studies were performed on two classes of high temperature materials processing sources for their potential use as directional solidification furnaces. The research concentrated on a commercially available high temperature furnace using a zirconia ceramic tube as the heating element and an Arc Furnace based on a tube welder. The first objective was to assemble the zirconia furnace and construct parts needed to successfully perform experiments. The 2nd objective was to evaluate the zirconia furnace performance as a directional solidification furnace element. The 3rd objective was to establish a data base on materials used in the furnace construction, with particular emphasis on emissivities, transmissivities, and absorptivities as functions of wavelength and temperature. A 1-D and 2-D spectral radiation heat transfer model was developed for comparison with standard modeling techniques, and were used to predict wall and crucible temperatures. The 4th objective addressed the development of a SINDA model for the Arc Furnace and was used to design sample holders and to estimate cooling media temperatures for the steady state operation of the furnace. And, the 5th objective addressed the initial performance evaluation of the Arc Furnace and associated equipment for directional solidification. Results of these objectives are presented.
Temperature and flow fields in samples heated in monoellipsoidal mirror furnaces
NASA Astrophysics Data System (ADS)
Rivas, D.; Haya, R.
The temperature field in samples heated in monoellipsoidal mirror furnaces will be analyzed. The radiation heat exchange between the sample and the mirror is formulated analytically, taking into account multiple reflections at the mirror. It will be shown that the effect of these multiple reflections in the heating process is quite important, and, as a consequence, the effect of the mirror reflectance in the temperature field is quite strong. The conduction-radiation model will be used to simulate the heating process in the floating-zone technique in microgravity conditions; important parameters like the Marangoni number (that drives the thermocapillary flow in the melt), and the temperature gradient at the melt-crystal interface will be estimated. The model will be validated comparing with experimental data. The case of samples mounted in a wall-free configuration (as in the MAXUS-4 programme) will be also considered. Application to the case of compound samples (graphite-silicon-graphite) will be made; the melting of the silicon part and the surface temperature distribution in the melt will be analyzed. Of special interest is the temperature difference between the two graphite rods that hold the silicon part, since it drives the thermocapillary flow in the melt. This thermocapillary flow will be studied, after coupling the previous model with the convective effects. The possibility of counterbalancing this flow by the controlled vibration of the graphite rods will be studied as well. Numerical results show that suppressing the thermocapillary flow can be accomplished quite effectively.
Wang-Landau sampling with logarithmic windows for continuous models.
Xie, Y L; Chu, P; Wang, Y L; Chen, J P; Yan, Z B; Liu, J-M
2014-01-01
We present a modified Wang-Landau sampling (MWLS) for continuous statistical models by partitioning the energy space into a set of windows with logarithmically shrinking width. To demonstrate its necessity and advantages, we apply this sampling to several continuous models, including the two-dimensional square XY spin model, triangular J1-J2 spin model, and Lennard-Jones cluster model. Given a finite number of bins for partitioning the energy space, the conventional Wang-Landau sampling may not generate sufficiently accurate density of states (DOS) around the energy boundaries. However, it is demonstrated that much more accurate DOS can be obtained by this MWLS, and thus a precise evaluation of the thermodynamic behaviors of the continuous models at extreme low temperature (kBT<0.1) becomes accessible. The present algorithm also allows efficient computation besides the highly reliable data sampling.
Modeling abundance effects in distance sampling
Royle, J. Andrew; Dawson, D.K.; Bates, S.
2004-01-01
Distance-sampling methods are commonly used in studies of animal populations to estimate population density. A common objective of such studies is to evaluate the relationship between abundance or density and covariates that describe animal habitat or other environmental influences. However, little attention has been focused on methods of modeling abundance covariate effects in conventional distance-sampling models. In this paper we propose a distance-sampling model that accommodates covariate effects on abundance. The model is based on specification of the distance-sampling likelihood at the level of the sample unit in terms of local abundance (for each sampling unit). This model is augmented with a Poisson regression model for local abundance that is parameterized in terms of available covariates. Maximum-likelihood estimation of detection and density parameters is based on the integrated likelihood, wherein local abundance is removed from the likelihood by integration. We provide an example using avian point-transect data of Ovenbirds (Seiurus aurocapillus) collected using a distance-sampling protocol and two measures of habitat structure (understory cover and basal area of overstory trees). The model yields a sensible description (positive effect of understory cover, negative effect on basal area) of the relationship between habitat and Ovenbird density that can be used to evaluate the effects of habitat management on Ovenbird populations.
SQUID Microscopy: Magnetic Images of Room Temperature Samples
NASA Astrophysics Data System (ADS)
Grossman, Helene
1998-10-01
We use a microscope based on a high-Tc Superconducting Quantum Interference Device (SQUID) to study room temperature samples. The SQUID, which measures magnetic flux, is mounted on a sapphire rod and maintained at 77 K inside a vacuum chamber. A sample, separated from the vacuum chamber by a window, is placed above the SQUID, and the entire microscope is enclosed within a magnetic shield. The sample can be scanned over the SQUID to obtain a magnetic image. We have used the microscope to study magnetotactic bacteria, which have a permanent magnetic dipole moment of about 1.5 x 10-16 Am^2. The bacteria, suspended in an aqueous medium, are placed in a cell which is separated from the vacuum chamber by a 3 micron thick SiN membrane. The sample is brought as close as 15 micron to the SQUID, and the magnetic flux noise from the motion of the bacteria is measured. Data from non-motile cells, which undergo Brownian motion, give us information about the distribution of lengths of the bacteria. By applying a magnetic field, we can determine the average dipole moment. Noise measurements of the live bacteria give us the rates of flagellar rotation and body-roll, as well as the amplitudes of the vibrational and precessional motions. Another application of the microscope is non-destructive evaluation of steel. We have investigated the effects of both thermal and mechanical stresses on the remnant magnetization of steel. A third application of the microscope is in studying the properties of ferromagnetic nanocrystals of Co and Fe_3O_4.
Sampling Weights in Latent Variable Modeling
ERIC Educational Resources Information Center
Asparouhov, Tihomir
2005-01-01
This article reviews several basic statistical tools needed for modeling data with sampling weights that are implemented in Mplus Version 3. These tools are illustrated in simulation studies for several latent variable models including factor analysis with continuous and categorical indicators, latent class analysis, and growth models. The…
Monte Carlo Sampling of Negative-temperature Plasma States
John A. Krommes; Sharadini Rath
2002-07-19
A Monte Carlo procedure is used to generate N-particle configurations compatible with two-temperature canonical equilibria in two dimensions, with particular attention to nonlinear plasma gyrokinetics. An unusual feature of the problem is the importance of a nontrivial probability density function R0(PHI), the probability of realizing a set {Phi} of Fourier amplitudes associated with an ensemble of uniformly distributed, independent particles. This quantity arises because the equilibrium distribution is specified in terms of {Phi}, whereas the sampling procedure naturally produces particles states gamma; {Phi} and gamma are related via a gyrokinetic Poisson equation, highly nonlinear in its dependence on gamma. Expansion and asymptotic methods are used to calculate R0(PHI) analytically; excellent agreement is found between the large-N asymptotic result and a direct numerical calculation. The algorithm is tested by successfully generating a variety of states of both positive and negative temperature, including ones in which either the longest- or shortest-wavelength modes are excited to relatively very large amplitudes.
Montaser, A.
1992-01-01
New high temperature plasmas and new sample introduction systems are explored for rapid elemental and isotopic analysis of gases, solutions, and solids using mass spectrometry and atomic emission spectrometry. Emphasis was placed on atmospheric pressure He inductively coupled plasmas (ICP) suitable for atomization, excitation, and ionization of elements; simulation and computer modeling of plasma sources with potential for use in spectrochemical analysis; spectroscopic imaging and diagnostic studies of high temperature plasmas, particularly He ICP discharges; and development of new, low-cost sample introduction systems, and examination of techniques for probing the aerosols over a wide range. Refs., 14 figs. (DLC)
Sample sizes and model comparison metrics for species distribution models
B.B. Hanberry; H.S. He; D.C. Dey
2012-01-01
Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....
Functional Error Models to Accelerate Nested Sampling
NASA Astrophysics Data System (ADS)
Josset, L.; Elsheikh, A. H.; Demyanov, V.; Lunati, I.
2014-12-01
The main challenge in groundwater problems is the reliance on large numbers of unknown parameters with wide rage of associated uncertainties. To translate this uncertainty to quantities of interest (for instance the concentration of pollutant in a drinking well), a large number of forward flow simulations is required. To make the problem computationally tractable, Josset et al. (2013, 2014) introduced the concept of functional error models. It consists in two elements: a proxy model that is cheaper to evaluate than the full physics flow solver and an error model to account for the missing physics. The coupling of the proxy model and the error models provides reliable predictions that approximate the full physics model's responses. The error model is tailored to the problem at hand by building it for the question of interest. It follows a typical approach in machine learning where both the full physics and proxy models are evaluated for a training set (subset of realizations) and the set of responses is used to construct the error model using functional data analysis. Once the error model is devised, a prediction of the full physics response for a new geostatistical realization can be obtained by computing the proxy response and applying the error model. We propose the use of functional error models in a Bayesian inference context by combining it to the Nested Sampling (Skilling 2006; El Sheikh et al. 2013, 2014). Nested Sampling offers a mean to compute the Bayesian Evidence by transforming the multidimensional integral into a 1D integral. The algorithm is simple: starting with an active set of samples, at each iteration, the sample with the lowest likelihood is kept aside and replaced by a sample of higher likelihood. The main challenge is to find this sample of higher likelihood. We suggest a new approach: first the active set is sampled, both proxy and full physics models are run and the functional error model is build. Then, at each iteration of the Nested
NASA Astrophysics Data System (ADS)
Raicich, F.; Rampazzo, A.
2003-01-01
For the first time in the Mediterranean Sea various temperature sampling strategies are studied and compared to each other by means of the Observing System Simulation Experiment technique. Their usefulness in the framework of the Mediterranean Forecasting System (MFS) is assessed by quantifying their impact in a Mediterranean General Circulation Model in numerical twin experiments via univariate data assimilation of temperature profiles in summer and winter conditions. Data assimilation is performed by means of the optimal interpolation algorithm implemented in the SOFA (System for Ocean Forecasting and Analysis) code. The sampling strategies studied here include various combinations of eXpendable BathyThermograph (XBT) profiles collected along Volunteer Observing Ship (VOS) tracks, Airborne XBTs (AXBTs) and sea surface temperatures. The actual sampling strategy adopted in the MFS Pilot Project during the Targeted Operational Period (TOP, winter-spring 2000) is also studied.
Sample size planning for classification models.
Beleites, Claudia; Neugebauer, Ute; Bocklitz, Thomas; Krafft, Christoph; Popp, Jürgen
2013-01-14
In biospectroscopy, suitably annotated and statistically independent samples (e.g. patients, batches, etc.) for classifier training and testing are scarce and costly. Learning curves show the model performance as function of the training sample size and can help to determine the sample size needed to train good classifiers. However, building a good model is actually not enough: the performance must also be proven. We discuss learning curves for typical small sample size situations with 5-25 independent samples per class. Although the classification models achieve acceptable performance, the learning curve can be completely masked by the random testing uncertainty due to the equally limited test sample size. In consequence, we determine test sample sizes necessary to achieve reasonable precision in the validation and find that 75-100 samples will usually be needed to test a good but not perfect classifier. Such a data set will then allow refined sample size planning on the basis of the achieved performance. We also demonstrate how to calculate necessary sample sizes in order to show the superiority of one classifier over another: this often requires hundreds of statistically independent test samples or is even theoretically impossible. We demonstrate our findings with a data set of ca. 2550 Raman spectra of single cells (five classes: erythrocytes, leukocytes and three tumour cell lines BT-20, MCF-7 and OCI-AML3) as well as by an extensive simulation that allows precise determination of the actual performance of the models in question. Copyright © 2012 Elsevier B.V. All rights reserved.
Meents, Alke; Gutmann, Sascha; Wagner, Armin; Schulze-Briese, Clemens
2010-01-19
Radiation damage is the major impediment for obtaining structural information from biological samples by using ionizing radiation such as x-rays or electrons. The knowledge of underlying processes especially at cryogenic temperatures is still fragmentary, and a consistent mechanism has not been found yet. By using a combination of single-crystal x-ray diffraction, small-angle scattering, and qualitative and quantitative radiolysis experiments, we show that hydrogen gas, formed inside the sample during irradiation, rather than intramolecular bond cleavage between non-hydrogen atoms, is mainly responsible for the loss of high-resolution information and contrast in diffraction experiments and microscopy. The experiments that are presented in this paper cover a temperature range between 5 and 160 K and reveal that the commonly used temperature in x-ray crystallography of 100 K is not optimal in terms of minimizing radiation damage and thereby increasing the structural information obtainable in a single experiment. At 50 K, specific radiation damage to disulfide bridges is reduced by a factor of 4 compared to 100 K, and samples can tolerate a factor of 2.6 and 3.9 higher dose, as judged by the increase of R(free) values of elastase and cubic insulin crystals, respectively.
Wang, Hongxin; Yoda, Yoshitaka; Kamali, Saeed; Zhou, Zhao Hui; Cramer, Stephen P
2012-03-01
There are several practical and intertangled issues which make the experiments of nuclear resonant vibrational spectroscopy (NRVS) on biological samples difficult to perform. The sample temperature is one of the most important issues. In NRVS the real sample temperatures can be very different from the readings on the temperature sensors. In this study the following have been performed: (i) citing and analyzing various existing NRVS data to assess the real sample temperatures during the NRVS measurements and to understand their trends with the samples' loading conditions; (ii) designing several NRVS measurements with (Et(4)N)[FeCl(4)] to verify these trends; and (iii) proposing a new sample-loading procedure to achieve significantly lower real sample temperatures and to balance among the intertangled experimental issues in biological NRVS measurements.
Statistical analysis of temperature data sampled at Station-M in the Norwegian Sea
NASA Astrophysics Data System (ADS)
Lorentzen, Torbjørn
2014-02-01
The paper analyzes sea temperature data sampled at Station-M in the Norwegian Sea. The data cover the period 1948-2010. The following questions are addressed: What type of stochastic process characterizes the temperature series? Are there any changes or patterns which indicate climate change? Are there any characteristics in the data which can be linked to the shrinking sea-ice in the Arctic area? Can the series be modeled consistently and applied in forecasting of the future sea temperature? The paper applies the following methods: Augmented Dickey-Fuller tests for testing of unit-root and stationarity, ARIMA-models in univariate modeling, cointegration and error-correcting models are applied in estimating short- and long-term dynamics of non-stationary series, Granger-causality tests in analyzing the interaction pattern between the deep and upper layer temperatures, and simultaneous equation systems are applied in forecasting future temperature. The paper shows that temperature at 2000 m Granger-causes temperature at 150 m, and that the 2000 m series can represent an important information carrier of the long-term development of the sea temperature in the geographical area. Descriptive statistics shows that the temperature level has been on a positive trend since the beginning of the 1980s which is also measured in most of the oceans in the North Atlantic. The analysis shows that the temperature series are cointegrated which means they share the same long-term stochastic trend and they do not diverge too far from each other. The measured long-term temperature increase is one of the factors that can explain the shrinking summer sea-ice in the Arctic region. The analysis shows that there is a significant negative correlation between the shrinking sea ice and the sea temperature at Station-M. The paper shows that the temperature forecasts are conditioned on the properties of the stochastic processes, causality pattern between the variables and specification of model
Tissue Sampling Guides for Porcine Biomedical Models.
Albl, Barbara; Haesner, Serena; Braun-Reichhart, Christina; Streckel, Elisabeth; Renner, Simone; Seeliger, Frank; Wolf, Eckhard; Wanke, Rüdiger; Blutke, Andreas
2016-04-01
This article provides guidelines for organ and tissue sampling adapted to porcine animal models in translational medical research. Detailed protocols for the determination of sampling locations and numbers as well as recommendations on the orientation, size, and trimming direction of samples from ∼50 different porcine organs and tissues are provided in the Supplementary Material. The proposed sampling protocols include the generation of samples suitable for subsequent qualitative and quantitative analyses, including cryohistology, paraffin, and plastic histology; immunohistochemistry;in situhybridization; electron microscopy; and quantitative stereology as well as molecular analyses of DNA, RNA, proteins, metabolites, and electrolytes. With regard to the planned extent of sampling efforts, time, and personnel expenses, and dependent upon the scheduled analyses, different protocols are provided. These protocols are adjusted for (I) routine screenings, as used in general toxicity studies or in analyses of gene expression patterns or histopathological organ alterations, (II) advanced analyses of single organs/tissues, and (III) large-scale sampling procedures to be applied in biobank projects. Providing a robust reference for studies of porcine models, the described protocols will ensure the efficiency of sampling, the systematic recovery of high-quality samples representing the entire organ or tissue as well as the intra-/interstudy comparability and reproducibility of results. © The Author(s) 2016.
Magnetic microscopy based on high-Tc SQUIDs for room temperature samples
NASA Astrophysics Data System (ADS)
Wang, H. W.; Kong, X. Y.; Ren, Y. F.; Yu, H. W.; Ding, H. S.; Zhao, S. P.; Chen, G. H.; Zhang, L. H.; Zhou, Y. L.; Yang, Q. S.
2003-11-01
The SQUID microscope is the most suitable instrument for imaging magnetic fields above sample surfaces if one is mainly interested in field sensitivity. In this paper, both the magnetic moment sensitivity and spatial resolution of the SQUID microscope are analysed with a simple point moment model. The result shows that the ratio of SQUID sensor size to sensor-sample distance effectively influences the sensitivity and spatial resolution. In comparison with some experimental results of magnetic images for room temperature samples from our high-Tc SQUID microscope in an unshielded environment, a brief discussion for further improvement is presented.
Adaptive importance sampling for network growth models
Holmes, Susan P.
2016-01-01
Network Growth Models such as Preferential Attachment and Duplication/Divergence are popular generative models with which to study complex networks in biology, sociology, and computer science. However, analyzing them within the framework of model selection and statistical inference is often complicated and computationally difficult, particularly when comparing models that are not directly related or nested. In practice, ad hoc methods are often used with uncertain results. If possible, the use of standard likelihood-based statistical model selection techniques is desirable. With this in mind, we develop an Adaptive Importance Sampling algorithm for estimating likelihoods of Network Growth Models. We introduce the use of the classic Plackett-Luce model of rankings as a family of importance distributions. Updates to importance distributions are performed iteratively via the Cross-Entropy Method with an additional correction for degeneracy/over-fitting inspired by the Minimum Description Length principle. This correction can be applied to other estimation problems using the Cross-Entropy method for integration/approximate counting, and it provides an interpretation of Adaptive Importance Sampling as iterative model selection. Empirical results for the Preferential Attachment model are given, along with a comparison to an alternative established technique, Annealed Importance Sampling. PMID:27182098
Beam Heating of Samples: Modeling and Verification. Part 2
NASA Technical Reports Server (NTRS)
Kazmierczak, Michael; Gopalakrishnan, Pradeep; Kumar, Raghav; Banerjee Rupak; Snell, Edward; Bellamy, Henry; Rosenbaum, Gerd; vanderWoerd, Mark
2006-01-01
Energy absorbed from the X-ray beam by the sample requires cooling by forced convection (i.e. cryostream) to minimize temperature increase and the damage caused to the sample by the X-ray heating. In this presentation we will first review the current theoretical models and recent studies in the literature, which predict the sample temperature rise for a given set of beam parameters. It should be noted that a common weakness of these previous studies is that none of them provide actual experimental confirmation. This situation is now remedied in our investigation where the problem of x-ray sample heating is taken up once more. We have theoretically investigated, and at the same time, in addition to the numerical computations, performed experiments to validate the predictions. We have modeled, analyzed and experimentally tested the temperature rise of a 1 mm diameter glass sphere (sample surrogate) exposed to an intense synchrotron X-ray beam, while it is being cooled in a uniform flow of nitrogen gas. The heat transfer, including external convection and internal heat conduction was theoretically modeled using CFD to predict the temperature variation in the sphere during cooling and while it was subjected to an undulator (ID sector 19) X-ray beam at the APS. The surface temperature of the sphere during the X-ray beam heating was measured using the infrared camera measurement technique described in a previous talk. The temperatures from the numerical predictions and experimental measurements are compared and discussed. Additional results are reported for the two different sphere sizes and for two different supporting pin orientations.
Beam Heating of Samples: Modeling and Verification. Part 2
NASA Technical Reports Server (NTRS)
Kazmierczak, Michael; Gopalakrishnan, Pradeep; Kumar, Raghav; Banerjee Rupak; Snell, Edward; Bellamy, Henry; Rosenbaum, Gerd; vanderWoerd, Mark
2006-01-01
Energy absorbed from the X-ray beam by the sample requires cooling by forced convection (i.e. cryostream) to minimize temperature increase and the damage caused to the sample by the X-ray heating. In this presentation we will first review the current theoretical models and recent studies in the literature, which predict the sample temperature rise for a given set of beam parameters. It should be noted that a common weakness of these previous studies is that none of them provide actual experimental confirmation. This situation is now remedied in our investigation where the problem of x-ray sample heating is taken up once more. We have theoretically investigated, and at the same time, in addition to the numerical computations, performed experiments to validate the predictions. We have modeled, analyzed and experimentally tested the temperature rise of a 1 mm diameter glass sphere (sample surrogate) exposed to an intense synchrotron X-ray beam, while it is being cooled in a uniform flow of nitrogen gas. The heat transfer, including external convection and internal heat conduction was theoretically modeled using CFD to predict the temperature variation in the sphere during cooling and while it was subjected to an undulator (ID sector 19) X-ray beam at the APS. The surface temperature of the sphere during the X-ray beam heating was measured using the infrared camera measurement technique described in a previous talk. The temperatures from the numerical predictions and experimental measurements are compared and discussed. Additional results are reported for the two different sphere sizes and for two different supporting pin orientations.
40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 6 2013-07-01 2013-07-01 false Test for filter temperature control... Class I and Class II Equivalent Methods for PM 2.5 or PM 10-2,5 § 53.57 Test for filter temperature... temperature during a 4-hour period of active sampling as well as during a subsequent 4-hour non-sampling time...
Modeling abundance using hierarchical distance sampling
Royle, Andy; Kery, Marc
2016-01-01
In this chapter, we provide an introduction to classical distance sampling ideas for point and line transect data, and for continuous and binned distance data. We introduce the conditional and the full likelihood, and we discuss Bayesian analysis of these models in BUGS using the idea of data augmentation, which we discussed in Chapter 7. We then extend the basic ideas to the problem of hierarchical distance sampling (HDS), where we have multiple point or transect sample units in space (or possibly in time). The benefit of HDS in practice is that it allows us to directly model spatial variation in population size among these sample units. This is a preeminent concern of most field studies that use distance sampling methods, but it is not a problem that has received much attention in the literature. We show how to analyze HDS models in both the unmarked package and in the BUGS language for point and line transects, and for continuous and binned distance data. We provide a case study of HDS applied to a survey of the island scrub-jay on Santa Cruz Island, California.
Mixture models for distance sampling detection functions.
Miller, David L; Thomas, Len
2015-01-01
We present a new class of models for the detection function in distance sampling surveys of wildlife populations, based on finite mixtures of simple parametric key functions such as the half-normal. The models share many of the features of the widely-used "key function plus series adjustment" (K+A) formulation: they are flexible, produce plausible shapes with a small number of parameters, allow incorporation of covariates in addition to distance and can be fitted using maximum likelihood. One important advantage over the K+A approach is that the mixtures are automatically monotonic non-increasing and non-negative, so constrained optimization is not required to ensure distance sampling assumptions are honoured. We compare the mixture formulation to the K+A approach using simulations to evaluate its applicability in a wide set of challenging situations. We also re-analyze four previously problematic real-world case studies. We find mixtures outperform K+A methods in many cases, particularly spiked line transect data (i.e., where detectability drops rapidly at small distances) and larger sample sizes. We recommend that current standard model selection methods for distance sampling detection functions are extended to include mixture models in the candidate set.
Exact sampling hardness of Ising spin models
NASA Astrophysics Data System (ADS)
Fefferman, B.; Foss-Feig, M.; Gorshkov, A. V.
2017-09-01
We study the complexity of classically sampling from the output distribution of an Ising spin model, which can be implemented naturally in a variety of atomic, molecular, and optical systems. In particular, we construct a specific example of an Ising Hamiltonian that, after time evolution starting from a trivial initial state, produces a particular output configuration with probability very nearly proportional to the square of the permanent of a matrix with arbitrary integer entries. In a similar spirit to boson sampling, the ability to sample classically from the probability distribution induced by time evolution under this Hamiltonian would imply unlikely complexity theoretic consequences, suggesting that the dynamics of such a spin model cannot be efficiently simulated with a classical computer. Physical Ising spin systems capable of achieving problem-size instances (i.e., qubit numbers) large enough so that classical sampling of the output distribution is classically difficult in practice may be achievable in the near future. Unlike boson sampling, our current results only imply hardness of exact classical sampling, leaving open the important question of whether a much stronger approximate-sampling hardness result holds in this context. The latter is most likely necessary to enable a convincing experimental demonstration of quantum supremacy. As referenced in a recent paper [A. Bouland, L. Mancinska, and X. Zhang, in Proceedings of the 31st Conference on Computational Complexity (CCC 2016), Leibniz International Proceedings in Informatics (Schloss Dagstuhl-Leibniz-Zentrum für Informatik, Dagstuhl, 2016)], our result completes the sampling hardness classification of two-qubit commuting Hamiltonians.
Fast temperature spectrometer for samples under extreme conditions
Zhang, Dongzhou; Jackson, Jennifer M.; Sturhahn, Wolfgang; Zhao, Jiyong; Alp, E. Ercan; Toellner, Thomas S.; Hu, Michael Y.
2015-01-15
We have developed a multi-wavelength Fast Temperature Readout (FasTeR) spectrometer to capture a sample’s transient temperature fluctuations, and reduce uncertainties in melting temperature determination. Without sacrificing accuracy, FasTeR features a fast readout rate (about 100 Hz), high sensitivity, large dynamic range, and a well-constrained focus. Complimenting a charge-coupled device spectrometer, FasTeR consists of an array of photomultiplier tubes and optical dichroic filters. The temperatures determined by FasTeR outside of the vicinity of melting are, generally, in good agreement with results from the charge-coupled device spectrometer. Near melting, FasTeR is capable of capturing transient temperature fluctuations, at least on the order of 300 K/s. A software tool, SIMFaster, is described and has been developed to simulate FasTeR and assess design configurations. FasTeR is especially suitable for temperature determinations that utilize ultra-fast techniques under extreme conditions. Working in parallel with the laser-heated diamond-anvil cell, synchrotron Mössbauer spectroscopy, and X-ray diffraction, we have applied the FasTeR spectrometer to measure the melting temperature of {sup 57}Fe{sub 0.9}Ni{sub 0.1} at high pressure.
Nikolic, M.V. . E-mail: maria@mi.sanu.ac.yu; Paraskevopoulos, K.M.; Aleksic, O.S.; Zorba, T.T.; Savic, S.M.; Lukovic, D.T.
2007-08-07
Single phase complex spinel (Mn, Ni, Co, Fe){sub 3}O{sub 4} samples were sintered at 1050, 1200 and 1300 deg. C for 30 min and at 1200 deg. C for 120 min. Morphological changes of the obtained samples with the sintering temperature and time were analyzed by X-ray diffraction and scanning electron microscope (SEM). Room temperature far infrared reflectivity spectra for all samples were measured in the frequency range between 50 and 1200 cm{sup -1}. The obtained spectra for all samples showed the presence of the same oscillators, but their intensities increased with the sintering temperature and time in correlation with the increase in sample density and microstructure changes during sintering. The measured spectra were numerically analyzed using the Kramers-Kroenig method and the four-parameter model of coupled oscillators. Optical modes were calculated for six observed ionic oscillators belonging to the spinel structure of (Mn, Ni, Co, Fe){sub 3}O{sub 4} of which four were strong and two were weak.
Thermospheric temperature, density, and composition: New models
NASA Technical Reports Server (NTRS)
Jacchia, L. G.
1977-01-01
The models essentially consist of two parts: the basic static models, which give temperature and density profiles for the relevant atmospheric constituents for any specified exospheric temperature, and a set of formulae to compute the exospheric temperature and the expected deviations from the static models as a result of all the recognized types of thermospheric variation. For the basic static models, tables are given for heights from 90 to 2,500 km and for exospheric temperatures from 500 to 2600 K. In the formulae for the variations, an attempt has been made to represent the changes in composition observed by mass spectrometers on the OGO 6 and ESRO 4 satellites.
Fast temperature relaxation model in dense plasmas
NASA Astrophysics Data System (ADS)
Faussurier, Gérald; Blancard, Christophe
2017-01-01
We present a fast model to calculate the temperature-relaxation rates in dense plasmas. The electron-ion interaction-potential is calculated by combining a Yukawa approach and a finite-temperature Thomas-Fermi model. We include the internal energy as well as the excess energy of ions using the QEOS model. Comparisons with molecular dynamics simulations and calculations based on an average-atom model are presented. This approach allows the study of the temperature relaxation in a two-temperature electron-ion system in warm and hot dense matter.
Fitting models to correlated data (large samples)
NASA Astrophysics Data System (ADS)
Féménias, Jean-Louis
2004-03-01
The study of the ordered series of residuals of a fit proved to be useful in evaluating separately the pure experimental error and the model bias leading to a possible improvement of the modeling [J. Mol. Spectrosc. 217 (2003) 32]. In the present work this procedure is extended to homogeneous correlated data. This new method allows a separate estimation of pure experimental error, model bias, and data correlation; furthermore, it brings a new insight into the difference between goodness of fit and model relevance. It can be considered either as a study of 'random systematic errors' or as an extended approach of the Durbin-Watson problem [Biometrika 37 (1950) 409] taking into account the model error. In the present work an empirical approach is proposed for large samples ( n⩾500) where numerical tests are done showing the accuracy and the limits of the method.
Annealed Importance Sampling for Neural Mass Models
Penny, Will; Sengupta, Biswa
2016-01-01
Neural Mass Models provide a compact description of the dynamical activity of cell populations in neocortical regions. Moreover, models of regional activity can be connected together into networks, and inferences made about the strength of connections, using M/EEG data and Bayesian inference. To date, however, Bayesian methods have been largely restricted to the Variational Laplace (VL) algorithm which assumes that the posterior distribution is Gaussian and finds model parameters that are only locally optimal. This paper explores the use of Annealed Importance Sampling (AIS) to address these restrictions. We implement AIS using proposals derived from Langevin Monte Carlo (LMC) which uses local gradient and curvature information for efficient exploration of parameter space. In terms of the estimation of Bayes factors, VL and AIS agree about which model is best but report different degrees of belief. Additionally, AIS finds better model parameters and we find evidence of non-Gaussianity in their posterior distribution. PMID:26942606
[Study on temperature correctional models of quantitative analysis with near infrared spectroscopy].
Zhang, Jun; Chen, Hua-cai; Chen, Xing-dan
2005-06-01
Effect of enviroment temperature on near infrared spectroscopic quantitative analysis was studied. The temperature correction model was calibrated with 45 wheat samples at different environment temperaturs and with the temperature as an external variable. The constant temperature model was calibated with 45 wheat samples at the same temperature. The predicted results of two models for the protein contents of wheat samples at different temperatures were compared. The results showed that the mean standard error of prediction (SEP) of the temperature correction model was 0.333, but the SEP of constant temperature (22 degrees C) model increased as the temperature difference enlarged, and the SEP is up to 0.602 when using this model at 4 degrees C. It was suggested that the temperature correctional model improves the analysis precision.
Latin hypercube sampling with the SESOIL model
Hetrick, D.M.; Luxmoore, R.J.; Tharp, M.L.
1994-09-01
The seasonal soil compartment model SESOIL, a one-dimensional vertical transport code for chemicals in the unsaturated soil zone, has been coupled with the Monte Carlo computer code PRISM, which utilizes a Latin hypercube sampling method. Frequency distributions are assigned to each of 64 soil, chemical, and climate input variables for the SESOIL model, and these distributions are randomly sampled to generate N (200, for example) input data sets. The SESOIL model is run by PRISM for each set of input values, and the combined set of model variables and predictions are evaluated statistically by PRISM to summarize the relative influence of input variables on model results. Output frequency distributions for selected SESOIL components are produced. As an initial analysis and to illustrate the PRISM/SESOIL approach, input data were compiled for the model for three sites at different regions of the country (Oak Ridge, Tenn.; Fresno, Calif.; Fargo, N.D.). The chemical chosen for the analysis was trichloroethylene (TCE), which was initially loaded in the soil column at a 60- to 90-cm depth. The soil type at each site was assumed to be identical to the cherty silt loam at Oak Ridge; the only difference in the three data sets was the climatic data. Output distributions for TCE mass flux volatilized, TCE mass flux to groundwater, and residual TCE concentration in the lowest soil layer are vastly different for the three sites.
Multiple temperatures sampled using only one reference junction
NASA Technical Reports Server (NTRS)
Cope, G. W.
1966-01-01
In a multitemperature sampling system where the reference thermocouples are a distance from the test thermocouples, an intermediate thermal junction block is placed between the sets of thermocouples permitting switching between a single reference and the test thermocouples. This reduces the amount of cabling, reference thermocouples, and cost of the sampling system.
Modeling maximum daily temperature using a varying coefficient regression model
Han Li; Xinwei Deng; Dong-Yum Kim; Eric P. Smith
2014-01-01
Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature...
NASA Astrophysics Data System (ADS)
Kwiatkowski, Adam; Gryglas-Borysiewicz, Marta; Juszyński, Piotr; Przybytek, Jacek; Sawicki, Maciej; Sadowski, Janusz; Wasik, Dariusz; Baj, Michał
2016-06-01
In this paper, we show that the widely accepted method of the determination of Curie temperature (TC) in (Ga,Mn)As samples, based on the position of the peak in the temperature derivative of the resistivity, completely fails in the case of non-metallic and low-TC unannealed samples. In this case, we propose an alternative method, also based on electric transport measurements, which exploits temperature dependence of the second derivative of the resistivity upon magnetic field.
Limited sampling model for vinblastine pharmacokinetics.
Ratain, M J; Vogelzang, N J
1987-10-01
A limited sampling model was developed for vinblastine to estimate the total area under the concentration time curve (AUC) using only two timepoints. Detailed pharmacokinetic analysis (16 timepoints) was performed in 30 patients treated with a small bolus dose (3 mg/m2) of vinblastine. A model for the total AUC was developed by multiple linear regression, using the first 15 patients as the training data set: AUC = 38.0 C10 + 73.8 C36 - 12.9, where C10 and C36 represent the serum vinblastine concentration at 10 hours and 36 hours, respectively (r = 0.99, P less than 0.0001). The model was validated on the other 15 patients, the test data set (r = 0.94, P less than 0.0001), with a mean predictive error of 13%. Limited sampling models may facilitate large-scale pharmacodynamic studies of new anticancer drugs, in order to relate the estimated AUC to toxicity and/or response without the need for detailed pharmacokinetic analysis.
Modeling maximum daily temperature using a varying coefficient regression model
NASA Astrophysics Data System (ADS)
Li, Han; Deng, Xinwei; Kim, Dong-Yun; Smith, Eric P.
2014-04-01
Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature. A good predictive model for daily maximum temperature is required because daily maximum temperature is an important measure for predicting survival of temperature sensitive fish. To appropriately model the strong relationship between water and air temperatures at a daily time step, it is important to incorporate information related to the time of the year into the modeling. In this work, a time-varying coefficient model is used to study the relationship between air temperature and water temperature. The time-varying coefficient model enables dynamic modeling of the relationship, and can be used to understand how the air-water temperature relationship varies over time. The proposed model is applied to 10 streams in Maryland, West Virginia, Virginia, North Carolina, and Georgia using daily maximum temperatures. It provides a better fit and better predictions than those produced by a simple linear regression model or a nonlinear logistic model.
Problems with neuronal models in temperature regulation.
Jessen, C.
1986-01-01
Neuronal models in temperature regulation are primarily considered explicit statements of assumptions and premises used in design of experiments and development of descriptive equations concerning the relationships between thermal inputs and control actions. Some of the premises of current multiplicative models are discussed in relation to presently available experimental evidence. The results of these experiments suggest that there is no skin temperature compatible with life which completely suppresses a rise of heat production in response to low internal temperature. The slope of heat production versus internal temperature at a given skin temperature is not constant but depends on internal temperature and the level of heat production. Therefore, a concept involving additive interaction of central and peripheral temperature signals appears more flexible in accepting data obtained even under extreme conditions. PMID:3751140
From samples to populations in retinex models
NASA Astrophysics Data System (ADS)
Gianini, Gabriele
2017-05-01
Some spatial color algorithms, such as Brownian Milano retinex (MI-retinex) and random spray retinex (RSR), are based on sampling. In Brownian MI-retinex, memoryless random walks (MRWs) explore the neighborhood of a pixel and are then used to compute its output. Considering the relative redundancy and inefficiency of MRW exploration, the algorithm RSR replaced the walks by samples of points (the sprays). Recent works point to the fact that a mapping from the sampling formulation to the probabilistic formulation of the corresponding sampling process can offer useful insights into the models, at the same time featuring intrinsically noise-free outputs. The paper continues the development of this concept and shows that the population-based versions of RSR and Brownian MI-retinex can be used to obtain analytical expressions for the outputs of some test images. The comparison of the two analytic expressions from RSR and from Brownian MI-retinex demonstrates not only that the two outputs are, in general, different but also that they depend in a qualitatively different way upon the features of the image.
Modeling monthly mean air temperature for Brazil
NASA Astrophysics Data System (ADS)
Alvares, Clayton Alcarde; Stape, José Luiz; Sentelhas, Paulo Cesar; de Moraes Gonçalves, José Leonardo
2013-08-01
Air temperature is one of the main weather variables influencing agriculture around the world. Its availability, however, is a concern, mainly in Brazil where the weather stations are more concentrated on the coastal regions of the country. Therefore, the present study had as an objective to develop models for estimating monthly and annual mean air temperature for the Brazilian territory using multiple regression and geographic information system techniques. Temperature data from 2,400 stations distributed across the Brazilian territory were used, 1,800 to develop the equations and 600 for validating them, as well as their geographical coordinates and altitude as independent variables for the models. A total of 39 models were developed, relating the dependent variables maximum, mean, and minimum air temperatures (monthly and annual) to the independent variables latitude, longitude, altitude, and their combinations. All regression models were statistically significant ( α ≤ 0.01). The monthly and annual temperature models presented determination coefficients between 0.54 and 0.96. We obtained an overall spatial correlation higher than 0.9 between the models proposed and the 16 major models already published for some Brazilian regions, considering a total of 3.67 × 108 pixels evaluated. Our national temperature models are recommended to predict air temperature in all Brazilian territories.
Aamir, Muhammad; Liao, Qiang; Zhu, Xun; Aqeel-ur-Rehman; Wang, Hong
2014-01-01
An experimental study was carried out to investigate the effects of inlet pressure, sample thickness, initial sample temperature, and temperature sensor location on the surface heat flux, surface temperature, and surface ultrafast cooling rate using stainless steel samples of diameter 27 mm and thickness (mm) 8.5, 13, 17.5, and 22, respectively. Inlet pressure was varied from 0.2 MPa to 1.8 MPa, while sample initial temperature varied from 600°C to 900°C. Beck's sequential function specification method was utilized to estimate surface heat flux and surface temperature. Inlet pressure has a positive effect on surface heat flux (SHF) within a critical value of pressure. Thickness of the sample affects the maximum achieved SHF negatively. Surface heat flux as high as 0.4024 MW/m2 was estimated for a thickness of 8.5 mm. Insulation effects of vapor film become apparent in the sample initial temperature range of 900°C causing reduction in surface heat flux and cooling rate of the sample. A sensor location near to quenched surface is found to be a better choice to visualize the effects of spray parameters on surface heat flux and surface temperature. Cooling rate showed a profound increase for an inlet pressure of 0.8 MPa. PMID:24977219
Aamir, Muhammad; Liao, Qiang; Zhu, Xun; Aqeel-ur-Rehman; Wang, Hong; Zubair, Muhammad
2014-01-01
An experimental study was carried out to investigate the effects of inlet pressure, sample thickness, initial sample temperature, and temperature sensor location on the surface heat flux, surface temperature, and surface ultrafast cooling rate using stainless steel samples of diameter 27 mm and thickness (mm) 8.5, 13, 17.5, and 22, respectively. Inlet pressure was varied from 0.2 MPa to 1.8 MPa, while sample initial temperature varied from 600°C to 900°C. Beck's sequential function specification method was utilized to estimate surface heat flux and surface temperature. Inlet pressure has a positive effect on surface heat flux (SHF) within a critical value of pressure. Thickness of the sample affects the maximum achieved SHF negatively. Surface heat flux as high as 0.4024 MW/m(2) was estimated for a thickness of 8.5 mm. Insulation effects of vapor film become apparent in the sample initial temperature range of 900°C causing reduction in surface heat flux and cooling rate of the sample. A sensor location near to quenched surface is found to be a better choice to visualize the effects of spray parameters on surface heat flux and surface temperature. Cooling rate showed a profound increase for an inlet pressure of 0.8 MPa.
Hussels, Martin; Konrad, Alexander; Brecht, Marc
2012-12-01
The construction of a microscope with fast sample transfer system for single-molecule spectroscopy and microscopy at low temperatures using 2D/3D sample-scanning is reported. The presented construction enables the insertion of a sample from the outside (room temperature) into the cooled (4.2 K) cryostat within seconds. We describe the mechanical and optical design and present data from individual Photosystem I complexes. With the described setup numerous samples can be investigated within one cooling cycle. It opens the possibility to investigate biological samples (i) without artifacts introduced by prolonged cooling procedures and (ii) samples that require preparation steps like plunge-freezing or specific illumination procedures prior to the insertion into the cryostat.
Thermal modeling of core sampling in flammable gas waste tanks. Part 2: Rotary-mode sampling
Unal, C.; Poston, D.; Pasamehmetoglu, K.O.; Witwer, K.S.
1997-08-01
The radioactive waste stored in underground storage tanks at Hanford site includes mixtures of sodium nitrate and sodium nitrite with organic compounds. The waste can produce undesired violent exothermic reactions when heated locally during the rotary-mode sampling. Experiments are performed varying the downward force at a maximum rotational speed of 55 rpm and minimum nitrogen purge flow of 30 scfm. The rotary drill bit teeth-face temperatures are measured. The waste is simulated with a low thermal conductivity hard material, pumice blocks. A torque meter is used to determine the energy provided to the drill string. The exhaust air-chip temperature as well as drill string and drill bit temperatures and other key operating parameters were recorded. A two-dimensional thermal model is developed. The safe operating conditions were determined for normal operating conditions. A downward force of 750 at 55 rpm and 30 scfm nitrogen purge flow was found to yield acceptable substrate temperatures. The model predicted experimental results reasonably well. Therefore, it could be used to simulate abnormal conditions to develop procedures for safe operations.
Modeling daily average stream temperature from air temperature and watershed area
NASA Astrophysics Data System (ADS)
Butler, N. L.; Hunt, J. R.
2012-12-01
Habitat restoration efforts within watersheds require spatial and temporal estimates of water temperature for aquatic species especially species that migrate within watersheds at different life stages. Monitoring programs are not able to fully sample all aquatic environments within watersheds under the extreme conditions that determine long-term habitat viability. Under these circumstances a combination of selective monitoring and modeling are required for predicting future geospatial and temporal conditions. This study describes a model that is broadly applicable to different watersheds while using readily available regional air temperature data. Daily water temperature data from thirty-eight gauges with drainage areas from 2 km2 to 2000 km2 in the Sonoma Valley, Napa Valley, and Russian River Valley in California were used to develop, calibrate, and test a stream temperature model. Air temperature data from seven NOAA gauges provided the daily maximum and minimum air temperatures. The model was developed and calibrated using five years of data from the Sonoma Valley at ten water temperature gauges and a NOAA air temperature gauge. The daily average stream temperatures within this watershed were bounded by the preceding maximum and minimum air temperatures with smaller upstream watersheds being more dependent on the minimum air temperature than maximum air temperature. The model assumed a linear dependence on maximum and minimum air temperature with a weighting factor dependent on upstream area determined by error minimization using observed data. Fitted minimum air temperature weighting factors were consistent over all five years of data for each gauge, and they ranged from 0.75 for upstream drainage areas less than 2 km2 to 0.45 for upstream drainage areas greater than 100 km2. For the calibration data sets within the Sonoma Valley, the average error between the model estimated daily water temperature and the observed water temperature data ranged from 0.7
Pretto, J. J.; Rochford, P. D.
1994-01-01
BACKGROUND--Although plastic arterial sampling syringes are now commonly used, the effects of sample storage time and temperature on blood gas tensions are poorly described for samples with a high oxygen partial pressure (PaO2) taken with these high density polypropylene syringes. METHODS--Two ml samples of tonometered whole blood (PaO2 86.7 kPa, PaCO2 4.27 kPa) were placed in glass syringes and in three brands of plastic blood gas syringes. The syringes were placed either at room temperature or in iced water and blood gas analysis was performed at baseline and after 5, 10, 20, 40, 60, 90, and 120 minutes. RESULTS--In the first 10 minutes measured PaO2 in plastic syringes at room temperature fell by an average of 1.21 kPa/min; placing the sample on ice reduced the rate of PaO2 decline to 0.19 kPa/min. The rate of fall of PaO2 in glass at room temperature was 0.49 kPa/min. The changes in PaCO2 were less dramatic and at room temperature averaged increases of 0.47 kPa for plastic syringes and 0.71 kPa for glass syringes over the entire two hour period. These changes in gas tension for plastic syringes would lead to an overestimation of pulmonary shunt measured by the 100% oxygen technique of 0.6% for each minute left at room temperature before analysis. CONCLUSIONS--Glass syringes are superior to plastic syringes in preserving samples with a high PaO2, and prompt and adequate cooling of such samples is essential for accurate blood gas analysis. PMID:8016801
Effects of High-frequency Wind Sampling on Simulated Mixed Layer Depth and Upper Ocean Temperature
NASA Technical Reports Server (NTRS)
Lee, Tong; Liu, W. Timothy
2005-01-01
Effects of high-frequency wind sampling on a near-global ocean model are studied by forcing the model with a 12 hourly averaged wind product and its 24 hourly subsamples in separate experiments. The differences in mixed layer depth and sea surface temperature resulting from these experiments are examined, and the underlying physical processes are investigated. The 24 hourly subsampling not only reduces the high-frequency variability of the wind but also affects the annual mean wind because of aliasing. While the former effect largely impacts mid- to high-latitude oceans, the latter primarily affects tropical and coastal oceans. At mid- to high-latitude regions the subsampled wind results in a shallower mixed layer and higher sea surface temperature because of reduced vertical mixing associated with weaker high-frequency wind. In tropical and coastal regions, however, the change in upper ocean structure due to the wind subsampling is primarily caused by the difference in advection resulting from aliased annual mean wind, which varies with the subsampling time. The results of the study indicate a need for more frequent sampling of satellite wind measurement and have implications for data assimilation in terms of identifying the nature of model errors.
Effects of High-frequency Wind Sampling on Simulated Mixed Layer Depth and Upper Ocean Temperature
NASA Technical Reports Server (NTRS)
Lee, Tong; Liu, W. Timothy
2005-01-01
Effects of high-frequency wind sampling on a near-global ocean model are studied by forcing the model with a 12 hourly averaged wind product and its 24 hourly subsamples in separate experiments. The differences in mixed layer depth and sea surface temperature resulting from these experiments are examined, and the underlying physical processes are investigated. The 24 hourly subsampling not only reduces the high-frequency variability of the wind but also affects the annual mean wind because of aliasing. While the former effect largely impacts mid- to high-latitude oceans, the latter primarily affects tropical and coastal oceans. At mid- to high-latitude regions the subsampled wind results in a shallower mixed layer and higher sea surface temperature because of reduced vertical mixing associated with weaker high-frequency wind. In tropical and coastal regions, however, the change in upper ocean structure due to the wind subsampling is primarily caused by the difference in advection resulting from aliased annual mean wind, which varies with the subsampling time. The results of the study indicate a need for more frequent sampling of satellite wind measurement and have implications for data assimilation in terms of identifying the nature of model errors.
A numerical model for ground temperature determination
NASA Astrophysics Data System (ADS)
Jaszczur, M.; Polepszyc, I.; Biernacka, B.; Sapińska-Śliwa, A.
2016-09-01
The ground surface temperature and the temperature with respect to depth are one of the most important issues for geotechnical and environmental applications as well as for plants and other living organisms. In geothermal systems, temperature is directly related to the energy resources in the ground and it influences the efficiency of the ground source system. The ground temperature depends on a very large number of parameters, but it often needs to be evaluated with good accuracy. In the present work, models for the prediction of the ground temperature with a focus on the surface temperature at which all or selected important ground and environmental phenomena are taken into account have been analysed. It has been found that the simplest models and the most complex model may result in a similar temperature variation, yet at a very low depth and for specific cases only. A detailed analysis shows that taking into account different types of pavement or a greater depth requires more complex and advanced models.
Modeling of global surface air temperature
NASA Astrophysics Data System (ADS)
Gusakova, M. A.; Karlin, L. N.
2012-04-01
A model to assess a number of factors, such as total solar irradiance, albedo, greenhouse gases and water vapor, affecting climate change has been developed on the basis of Earth's radiation balance principle. To develop the model solar energy transformation in the atmosphere was investigated. It's a common knowledge, that part of the incoming radiation is reflected into space from the atmosphere, land and water surfaces, and another part is absorbed by the Earth's surface. Some part of outdoing terrestrial radiation is retained in the atmosphere by greenhouse gases (carbon dioxide, methane, nitrous oxide) and water vapor. Making use of the regression analysis a correlation between concentration of greenhouse gases, water vapor and global surface air temperature was obtained which, it is turn, made it possible to develop the proposed model. The model showed that even smallest fluctuations of total solar irradiance intensify both positive and negative feedback which give rise to considerable changes in global surface air temperature. The model was used both to reconstruct the global surface air temperature for the 1981-2005 period and to predict global surface air temperature until 2030. The reconstructions of global surface air temperature for 1981-2005 showed the models validity. The model makes it possible to assess contribution of the factors listed above in climate change.
Eckels, David E.; Hass, William J.
1989-05-30
A sample transport, sample introduction, and flame excitation system for spectrometric analysis of high temperature gas streams which eliminates degradation of the sample stream by condensation losses.
Measurement of temperature and temperature gradient in millimeter samples by chlorine NQR
NASA Astrophysics Data System (ADS)
Lužnik, Janko; Pirnat, Janez; Trontelj, Zvonko
2009-09-01
A mini-thermometer based on the 35Cl nuclear quadrupole resonance (NQR) frequency temperature dependence in the chlorates KClO3 and NaClO3 was built and successfully tested by measuring temperature and temperature gradient at 77 K and higher in about 100 mm3 active volume of a mini Joule-Thomson refrigerator. In the design of the tank-circuit coil, an array of small coils connected in series enabled us (a) to achieve a suitable ratio of inductance to capacity in the NQR spectrometer input tank circuit, (b) to use a single crystal of KClO3 or NaClO3 (of 1-2 mm3 size) in one coil as a mini-thermometer with a resolution of 0.03 K and (c) to construct a system for measuring temperature gradients when the spatial coordinates of each chlorate single crystal within an individual coil are known.
40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.
Code of Federal Regulations, 2011 CFR
2011-07-01
... energy distribution and permitted tolerances specified in table E-2 of this subpart. The solar radiation... sequential sample operation. (3) The solar radiant energy source shall be installed in the test chamber such... temperature control system or by the radiant energy from the solar radiation source that may be present inside...
40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.
Code of Federal Regulations, 2014 CFR
2014-07-01
... energy distribution and permitted tolerances specified in table E-2 of this subpart. The solar radiation... sequential sample operation. (3) The solar radiant energy source shall be installed in the test chamber such... temperature control system or by the radiant energy from the solar radiation source that may be present inside...
40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.
Code of Federal Regulations, 2012 CFR
2012-07-01
... energy distribution and permitted tolerances specified in table E-2 of this subpart. The solar radiation... sequential sample operation. (3) The solar radiant energy source shall be installed in the test chamber such... temperature control system or by the radiant energy from the solar radiation source that may be present inside...
40 CFR 53.57 - Test for filter temperature control during sampling and post-sampling periods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... energy distribution and permitted tolerances specified in table E-2 of this subpart. The solar radiation... sequential sample operation. (3) The solar radiant energy source shall be installed in the test chamber such... temperature control system or by the radiant energy from the solar radiation source that may be present inside...
Temperature dependence of standard model CP violation.
Brauner, Tomáš; Taanila, Olli; Tranberg, Anders; Vuorinen, Aleksi
2012-01-27
We analyze the temperature dependence of CP violation effects in the standard model by determining the effective action of its bosonic fields, obtained after integrating out the fermions from the theory and performing a covariant gradient expansion. We find nonvanishing CP violating terms starting at the sixth order of the expansion, albeit only in the C-odd-P-even sector, with coefficients that depend on quark masses, Cabibbo-Kobayashi-Maskawa matrix elements, temperature and the magnitude of the Higgs field. The CP violating effects are observed to decrease rapidly with temperature, which has important implications for the generation of a matter-antimatter asymmetry in the early Universe. Our results suggest that the cold electroweak baryogenesis scenario may be viable within the standard model, provided the electroweak transition temperature is at most of order 1 GeV.
NASA Astrophysics Data System (ADS)
Effertz, Timo; Pernpeintner, Johannes; Schiricke, Björn
2017-06-01
At DLR's QUARZ Center a test bench has been established to measure, using steady state calorimetric method, the total hemispherical emittance of cylindrical solar thermal absorber samples at temperatures up to 450 °C. Emittance measurement of solar absorber surfaces is commonly performed by direct-hemispherical reflectance measurements with spectrophotometers. However, the measurement of cylindrical samples with spectrophotometers can be considered still a challenge as integrating spheres, reference samples and calibration services by national metrology institutions are optimized for flat sample measurement. Additionally samples are typically measured at room temperature. The steady state calorimetric method does not rely on reference samples and the measurement is performed at operating temperature. In the steady state calorimetric method electrical power input used to heat the sample is equated to the radiative heat loss from a heated sample to the environment. The total emittance can be calculated using the Stefan-Boltzmann equation from radiative heat loss power, the defined sample surface area and measured surface temperature. The expanded uncertainty (k=2) of the total hemispherical emittance has been determined to ±13 % for a typical parabolic trough absorber sample at a temperature of 300 °C and a heating power of 100 W. The test bench was validated by the measurement of three samples with the spectrophotometer and the steady state calorimetric method.
Modeling of concrete response at high temperature
Pfeiffer, P.; Marchertas, A.
1984-01-01
A rate-type creep law is implemented into the computer code TEMP-STRESS for high temperature concrete analysis. The disposition of temperature, pore pressure and moisture for the particular structure in question is provided as input for the thermo-mechanical code. The loss of moisture from concrete also induces material shrinkage which is accounted for in the analytical model. Examples are given to illustrate the numerical results.
High Temperature High Pressure Thermodynamic Measurements for Coal Model Compounds
John C. Chen; Vinayak N. Kabadi
1998-11-12
The overall objective of this project is to develop a better thermodynamic model for predicting properties of high-boiling coal derived liquids, especially the phase equilibria of different fractions at elevated temperatures and pressures. The development of such a model requires data on vapor-liquid equilibria (VLE), enthalpy, and heat capacity which would be experimentally determined for binary systems of coal model compounds and compiled into a database. The data will be used to refine existing models such as UNIQUAC and UNIFAC. The flow VLE apparatus designed and built for a previous project was upgraded and recalibrated for data measurements for thk project. The modifications include better and more accurate sampling technique and addition of a digital recorder to monitor temperature, pressure and liquid level inside the VLE cell. VLE data measurements for system benzene-ethylbenzene have been completed. The vapor and liquid samples were analysed using the Perkin-Elmer Autosystem gas chromatography.
NASA Astrophysics Data System (ADS)
Wang, Ruzhuan; Li, Weiguo
2017-08-01
The strength of SiC-depleted layer of ultra-high-temperature ceramics on high temperature oxidation degrades seriously. The research for residual stresses developed within the SiC-depleted layer is important and necessary. In this work, the residual stress evolutions in the SiC-depleted layer and the unoxidized substrate in various stages of oxidation are studied by using the characterization models. The temperature and oxidation time dependent mechanical/thermal properties of each phase in SiC-depleted layer are considered in the models. The study shows that the SiC-depleted layer would suffer from large tensile stresses due to the great temperature changes and the formation of pores on high temperature oxidation. The stresses may lead to the cracking and even the delamination of the oxidation layer.
NASA Astrophysics Data System (ADS)
Wang, Ruzhuan; Li, Weiguo
2016-11-01
The strength of SiC-depleted layer of ultra-high-temperature ceramics on high temperature oxidation degrades seriously. The research for residual stresses developed within the SiC-depleted layer is important and necessary. In this work, the residual stress evolutions in the SiC-depleted layer and the unoxidized substrate in various stages of oxidation are studied by using the characterization models. The temperature and oxidation time dependent mechanical/thermal properties of each phase in SiC-depleted layer are considered in the models. The study shows that the SiC-depleted layer would suffer from large tensile stresses due to the great temperature changes and the formation of pores on high temperature oxidation. The stresses may lead to the cracking and even the delamination of the oxidation layer.
Nahorniak, Matthew
2015-01-01
In ecology, as in other research fields, efficient sampling for population estimation often drives sample designs toward unequal probability sampling, such as in stratified sampling. Design based statistical analysis tools are appropriate for seamless integration of sample design into the statistical analysis. However, it is also common and necessary, after a sampling design has been implemented, to use datasets to address questions that, in many cases, were not considered during the sampling design phase. Questions may arise requiring the use of model based statistical tools such as multiple regression, quantile regression, or regression tree analysis. However, such model based tools may require, for ensuring unbiased estimation, data from simple random samples, which can be problematic when analyzing data from unequal probability designs. Despite numerous method specific tools available to properly account for sampling design, too often in the analysis of ecological data, sample design is ignored and consequences are not properly considered. We demonstrate here that violation of this assumption can lead to biased parameter estimates in ecological research. In addition, to the set of tools available for researchers to properly account for sampling design in model based analysis, we introduce inverse probability bootstrapping (IPB). Inverse probability bootstrapping is an easily implemented method for obtaining equal probability re-samples from a probability sample, from which unbiased model based estimates can be made. We demonstrate the potential for bias in model-based analyses that ignore sample inclusion probabilities, and the effectiveness of IPB sampling in eliminating this bias, using both simulated and actual ecological data. For illustration, we considered three model based analysis tools—linear regression, quantile regression, and boosted regression tree analysis. In all models, using both simulated and actual ecological data, we found inferences to be
Nahorniak, Matthew; Larsen, David P; Volk, Carol; Jordan, Chris E
2015-01-01
In ecology, as in other research fields, efficient sampling for population estimation often drives sample designs toward unequal probability sampling, such as in stratified sampling. Design based statistical analysis tools are appropriate for seamless integration of sample design into the statistical analysis. However, it is also common and necessary, after a sampling design has been implemented, to use datasets to address questions that, in many cases, were not considered during the sampling design phase. Questions may arise requiring the use of model based statistical tools such as multiple regression, quantile regression, or regression tree analysis. However, such model based tools may require, for ensuring unbiased estimation, data from simple random samples, which can be problematic when analyzing data from unequal probability designs. Despite numerous method specific tools available to properly account for sampling design, too often in the analysis of ecological data, sample design is ignored and consequences are not properly considered. We demonstrate here that violation of this assumption can lead to biased parameter estimates in ecological research. In addition, to the set of tools available for researchers to properly account for sampling design in model based analysis, we introduce inverse probability bootstrapping (IPB). Inverse probability bootstrapping is an easily implemented method for obtaining equal probability re-samples from a probability sample, from which unbiased model based estimates can be made. We demonstrate the potential for bias in model-based analyses that ignore sample inclusion probabilities, and the effectiveness of IPB sampling in eliminating this bias, using both simulated and actual ecological data. For illustration, we considered three model based analysis tools--linear regression, quantile regression, and boosted regression tree analysis. In all models, using both simulated and actual ecological data, we found inferences to be
NASA Technical Reports Server (NTRS)
Taylor, L. A.
1979-01-01
A technique has been developed for the encapsulation of rock samples in order to prevent the chemical alterations which commonly accompany paleointensity measurements at elevated temperatures. The technique involves vacuum pumping at about 100 C of the sample as placed in a silica tube. The tube containing the sample and a Ti 'getter' are sealed under vacuum. Measurements can be made at 200 and 300 C. Immediately after this, the sample is sealed-off from the getter. The sample is now ready for measurements at higher temperatures.
Global modeling of fresh surface water temperature
NASA Astrophysics Data System (ADS)
Bierkens, M. F.; Eikelboom, T.; van Vliet, M. T.; Van Beek, L. P.
2011-12-01
Temperature determines a range of water physical properties, the solubility of oxygen and other gases and acts as a strong control on fresh water biogeochemistry, influencing chemical reaction rates, phytoplankton and zooplankton composition and the presence or absence of pathogens. Thus, in freshwater ecosystems the thermal regime affects the geographical distribution of aquatic species through their growth and metabolism, tolerance to parasites, diseases and pollution and life history. Compared to statistical approaches, physically-based models of surface water temperature have the advantage that they are robust in light of changes in flow regime, river morphology, radiation balance and upstream hydrology. Such models are therefore better suited for projecting the effects of global change on water temperature. Till now, physically-based models have only been applied to well-defined fresh water bodies of limited size (e.g., lakes or stream segments), where the numerous parameters can be measured or otherwise established, whereas attempts to model water temperature over larger scales has thus far been limited to regression type of models. Here, we present a first attempt to apply a physically-based model of global fresh surface water temperature. The model adds a surface water energy balance to river discharge modelled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by short and long-wave radiation and sensible and latent heat fluxes. Also included are ice-formation and its effect on heat storage and river hydraulics. We used the coupled surface water and energy balance model to simulate global fresh surface water temperature at daily time steps on a 0.5x0.5 degree grid for the period 1970-2000. Meteorological forcing was obtained from the CRU data set, downscaled to daily values with ECMWF
High temperature furnace modeling and performance verifications
NASA Technical Reports Server (NTRS)
Smith, James E., Jr.
1991-01-01
A two dimensional conduction/radiation problem for an alumina crucible in a zirconia heater/muffle tube enclosing a liquid iron sample was solved numerically. Variations in the crucible wall thickness were numerically examined. The results showed that the temperature profiles within the liquid iron sample were significantly affected by the crucible wall thicknesses. New zirconia heating elements are under development that will permit continued experimental investigations of the zirconia furnace. These elements have been designed to work with the existing furnace and have been shown to have longer lifetimes than commercially available zirconia heating elements. The first element has been constructed and tested successfully.
The effectiveness of cooling conditions on temperature of canine EDTA whole blood samples
Sun, Xiaocun; Flatland, Bente
2016-01-01
Background Preanalytic factors such as time and temperature can have significant effects on laboratory test results. For example, ammonium concentration will increase 31% in blood samples stored at room temperature for 30 min before centrifugation. To reduce preanalytic error, blood samples may be placed in precooled tubes and chilled on ice or in ice water baths; however, the effectiveness of these modalities in cooling blood samples has not been formally evaluated. The purpose of this study was to evaluate the effectiveness of various cooling modalities on reducing temperature of EDTA whole blood samples. Methods Pooled samples of canine EDTA whole blood were divided into two aliquots. Saline was added to one aliquot to produce a packed cell volume (PCV) of 40% and to the second aliquot to produce a PCV of 20% (simulated anemia). Thirty samples from each aliquot were warmed to 37.7 °C and cooled in 2 ml allotments under one of three conditions: in ice, in ice after transfer to a precooled tube, or in an ice water bath. Temperature of each sample was recorded at one minute intervals for 15 min. Results Within treatment conditions, sample PCV had no significant effect on cooling. Cooling in ice water was significantly faster than cooling in ice only or transferring the sample to a precooled tube and cooling it on ice. Mean temperature of samples cooled in ice water was significantly lower at 15 min than mean temperatures of those cooled in ice, whether or not the tube was precooled. By 4 min, samples cooled in an ice water bath had reached mean temperatures less than 4 °C (refrigeration temperature), while samples cooled in other conditions remained above 4.0 °C for at least 11 min. For samples with a PCV of 40%, precooling the tube had no significant effect on rate of cooling on ice. For samples with a PCV of 20%, transfer to a precooled tube resulted in a significantly faster rate of cooling than direct placement of the warmed tube onto ice. Discussion Canine
The effectiveness of cooling conditions on temperature of canine EDTA whole blood samples.
Tobias, Karen M; Serrano, Leslie; Sun, Xiaocun; Flatland, Bente
2016-01-01
Preanalytic factors such as time and temperature can have significant effects on laboratory test results. For example, ammonium concentration will increase 31% in blood samples stored at room temperature for 30 min before centrifugation. To reduce preanalytic error, blood samples may be placed in precooled tubes and chilled on ice or in ice water baths; however, the effectiveness of these modalities in cooling blood samples has not been formally evaluated. The purpose of this study was to evaluate the effectiveness of various cooling modalities on reducing temperature of EDTA whole blood samples. Pooled samples of canine EDTA whole blood were divided into two aliquots. Saline was added to one aliquot to produce a packed cell volume (PCV) of 40% and to the second aliquot to produce a PCV of 20% (simulated anemia). Thirty samples from each aliquot were warmed to 37.7 °C and cooled in 2 ml allotments under one of three conditions: in ice, in ice after transfer to a precooled tube, or in an ice water bath. Temperature of each sample was recorded at one minute intervals for 15 min. Within treatment conditions, sample PCV had no significant effect on cooling. Cooling in ice water was significantly faster than cooling in ice only or transferring the sample to a precooled tube and cooling it on ice. Mean temperature of samples cooled in ice water was significantly lower at 15 min than mean temperatures of those cooled in ice, whether or not the tube was precooled. By 4 min, samples cooled in an ice water bath had reached mean temperatures less than 4 °C (refrigeration temperature), while samples cooled in other conditions remained above 4.0 °C for at least 11 min. For samples with a PCV of 40%, precooling the tube had no significant effect on rate of cooling on ice. For samples with a PCV of 20%, transfer to a precooled tube resulted in a significantly faster rate of cooling than direct placement of the warmed tube onto ice. Canine EDTA whole blood samples cool most
A study of sampling phenomena in a probe nozzle for high temperature MBMS
Smirnov, V.I.
1995-03-01
One important and difficult problem of MBMS is sampling from high temperature systems containing condensable high temperature molecules, for example oxides and hydroxides of d- and f-elements. Condensation in the orifice of a probe nozzle is one of the main difficulties in a practical mass sectrometric, analysis of environments which contain highly volatile compounds. Due to condensation the concentrations of compounds in a probe may be changed, therefore the composition in a source may significantly differ from the composition in the mass spectrometric detector. The orifice of a nozzle is plugged in a short time if condensation on the surface of a nozzle is very intensive. Plugging significantly reduces the signal registration time and the sensitivity. There are many classes of important compounds which cannot be analyzed by MBMS due to these limitations. For example, oxides and hydroxides of d- and f-elements, play the main role in many application systems. To develop a nozzle with a low level of condensation in it, it is essential to understand the quantitative picture of the sampling phenomena - gas dynamic structure, chemical and energetic relaxation, kinetics of homogeneous and heterogeneous condensation etc. To define the state and the composition of a probe in a mass spectrometer ion source it is necessary to build a quantitative model which describes all prehistory of the gas probe from the starting point of acceleration in front of the nozzle to the range of a free molecular flow.
Energy based model for temperature dependent behavior of ferromagnetic materials
NASA Astrophysics Data System (ADS)
Sah, Sanjay; Atulasimha, Jayasimha
2017-03-01
An energy based model for temperature dependent anhysteretic magnetization curves of ferromagnetic materials is proposed and benchmarked against experimental data. This is based on the calculation of macroscopic magnetic properties by performing an energy weighted average over all possible orientations of the magnetization vector. Most prior approaches that employ this method are unable to independently account for the effect of both inhomogeneity and temperature in performing the averaging necessary to model experimental data. Here we propose a way to account for both effects simultaneously and benchmark the model against experimental data from 5 K to 300 K for two different materials in both annealed (fewer inhomogeneities) and deformed (more inhomogeneities) samples. This demonstrates that this framework is well suited to simulate temperature dependent experimental magnetic behavior.
Surducan, V.; Surducan, E.; Dadarlat, D.
2013-11-13
Microwave induced heating is widely used in medical treatments, scientific and industrial applications. The temperature field inside a microwave heated sample is often inhomogenous, therefore multiple temperature sensors are required for an accurate result. Nowadays, non-contact (Infra Red thermography or microwave radiometry) or direct contact temperature measurement methods (expensive and sophisticated fiber optic temperature sensors transparent to microwave radiation) are mainly used. IR thermography gives only the surface temperature and can not be used for measuring temperature distributions in cross sections of a sample. In this paper we present a very simple experimental method for temperature distribution highlighting inside a cross section of a liquid sample, heated by a microwave radiation through a coaxial applicator. The method proposed is able to offer qualitative information about the heating distribution, using a temperature sensitive liquid crystal sheet. Inhomogeneities as smaller as 1°-2°C produced by the symmetry irregularities of the microwave applicator can be easily detected by visual inspection or by computer assisted color to temperature conversion. Therefore, the microwave applicator is tuned and verified with described method until the temperature inhomogeneities are solved.
Temperature recommendation to preserve a chemical status quo of cometary samples.
NASA Astrophysics Data System (ADS)
Roessler, K.; Nebeling, B.
1986-12-01
The preservation of a "chemical status quo" is an important postulate for a comet nucleus sample return mission. On the basis of experiments on radiation induced reduction and oxidation of organics in ice and frozen NH3, a sampling and return temperature of T ≈ 100K is recommended. The temperature should at least not exceed that of the surface near interior (some meters depth), i.e. about 130K.
The XXL Survey . IV. Mass-temperature relation of the bright cluster sample
NASA Astrophysics Data System (ADS)
Lieu, M.; Smith, G. P.; Giles, P. A.; Ziparo, F.; Maughan, B. J.; Démoclès, J.; Pacaud, F.; Pierre, M.; Adami, C.; Bahé, Y. M.; Clerc, N.; Chiappetti, L.; Eckert, D.; Ettori, S.; Lavoie, S.; Le Fevre, J. P.; McCarthy, I. G.; Kilbinger, M.; Ponman, T. J.; Sadibekova, T.; Willis, J. P.
2016-06-01
Context. The XXL Survey is the largest survey carried out by XMM-Newton. Covering an area of 50 deg2, the survey contains ~450 galaxy clusters out to a redshift ~2 and to an X-ray flux limit of ~ 5 × 10-15 erg s-1 cm-2. This paper is part of the first release of XXL results focussed on the bright cluster sample. Aims: We investigate the scaling relation between weak-lensing mass and X-ray temperature for the brightest clusters in XXL. The scaling relation discussed in this article is used to estimate the mass of all 100 clusters in XXL-100-GC. Methods: Based on a subsample of 38 objects that lie within the intersection of the northern XXL field and the publicly available CFHTLenS shear catalog, we derive the weak-lensing mass of each system with careful considerations of the systematics. The clusters lie at 0.1
Anton, Gabriele; Wilson, Rory; Yu, Zhong-Hao; Prehn, Cornelia; Zukunft, Sven; Adamski, Jerzy; Heier, Margit; Meisinger, Christa; Römisch-Margl, Werner; Wang-Sattler, Rui; Hveem, Kristian; Wolfenbuttel, Bruce; Peters, Annette; Kastenmüller, Gabi; Waldenberger, Melanie
2015-01-01
Advances in the "omics" field bring about the need for a high number of good quality samples. Many omics studies take advantage of biobanked samples to meet this need. Most of the laboratory errors occur in the pre-analytical phase. Therefore evidence-based standard operating procedures for the pre-analytical phase as well as markers to distinguish between 'good' and 'bad' quality samples taking into account the desired downstream analysis are urgently needed. We studied concentration changes of metabolites in serum samples due to pre-storage handling conditions as well as due to repeated freeze-thaw cycles. We collected fasting serum samples and subjected aliquots to up to four freeze-thaw cycles and to pre-storage handling delays of 12, 24 and 36 hours at room temperature (RT) and on wet and dry ice. For each treated aliquot, we quantified 127 metabolites through a targeted metabolomics approach. We found a clear signature of degradation in samples kept at RT. Storage on wet ice led to less pronounced concentration changes. 24 metabolites showed significant concentration changes at RT. In 22 of these, changes were already visible after only 12 hours of storage delay. Especially pronounced were increases in lysophosphatidylcholines and decreases in phosphatidylcholines. We showed that the ratio between the concentrations of these molecule classes could serve as a measure to distinguish between 'good' and 'bad' quality samples in our study. In contrast, we found quite stable metabolite concentrations during up to four freeze-thaw cycles. We concluded that pre-analytical RT handling of serum samples should be strictly avoided and serum samples should always be handled on wet ice or in cooling devices after centrifugation. Moreover, serum samples should be frozen at or below -80°C as soon as possible after centrifugation.
Tycko, Robert
2014-01-01
Knowledge of sample temperatures during nuclear magnetic resonance (NMR) measurements is important for acquisition of optimal NMR data and proper interpretation of the data. Sample temperatures can be difficult to measure accurately for a variety of reasons, especially because it is generally not possible to make direct contact to the NMR sample during the measurements. Here I show that sample temperatures during magic-angle spinning (MAS) NMR measurements can be determined from temperature-dependent photoluminescence signals of semiconductor quantum dots that are deposited in a thin film on the outer surface of the MAS rotor, using a simple optical fiber-based setup to excite and collect photoluminescence. The accuracy and precision of such temperature measurements can be better than ±5 K over a temperature range that extends from approximately 50 K (−223° C) to well above 310 K (37° C). Importantly, quantum dot photoluminescence can be monitored continuously while NMR measurements are in progress. While this technique is likely to be particularly valuable in low-temperature MAS NMR experiments, including experiments involving dynamic nuclear polarization, it may also be useful in high-temperature MAS NMR and other forms of magnetic resonance. PMID:24859817
Tycko, Robert
2014-07-01
Knowledge of sample temperatures during nuclear magnetic resonance (NMR) measurements is important for acquisition of optimal NMR data and proper interpretation of the data. Sample temperatures can be difficult to measure accurately for a variety of reasons, especially because it is generally not possible to make direct contact to the NMR sample during the measurements. Here I show that sample temperatures during magic-angle spinning (MAS) NMR measurements can be determined from temperature-dependent photoluminescence signals of semiconductor quantum dots that are deposited in a thin film on the outer surface of the MAS rotor, using a simple optical fiber-based setup to excite and collect photoluminescence. The accuracy and precision of such temperature measurements can be better than ±5K over a temperature range that extends from approximately 50K (-223°C) to well above 310K (37°C). Importantly, quantum dot photoluminescence can be monitored continuously while NMR measurements are in progress. While this technique is likely to be particularly valuable in low-temperature MAS NMR experiments, including experiments involving dynamic nuclear polarization, it may also be useful in high-temperature MAS NMR and other forms of magnetic resonance. Published by Elsevier Inc.
Modelling Brain Temperature and Cerebral Cooling Methods
NASA Astrophysics Data System (ADS)
Blowers, Stephen; Valluri, Prashant; Marshall, Ian; Andrews, Peter; Harris, Bridget; Thrippleton, Michael
2014-11-01
Direct measurement of cerebral temperature is invasive and impractical meaning treatments for reduction of core brain temperature rely on predictive mathematical models. Current models rely on continuum equations which heavily simplify thermal interactions between blood and tissue. A novel two-phase 3D porous-fluid model is developed to address these limitations. The model solves porous flow equations in 3D along with energy transport equation in both the blood and tissue phases including metabolic generation. By incorporating geometry data extracted from MRI scans, 3D vasculature can be inserted into a porous brain structure to realistically represent blood distribution within the brain. Therefore, thermal transport and convective heat transfer of blood are solved by means of direct numerical simulations. In application, results show that external scalp cooling has a higher impact on both maximum and average core brain temperatures than previously predicted. Additionally, the extent of alternative treatment methods such as pharyngeal cooling and carotid infusion can be investigated using this model. Acknowledgement: EPSRC DTA.
Apparatus Measures Thermal Conductance Through a Thin Sample from Cryogenic to Room Temperature
NASA Technical Reports Server (NTRS)
Tuttle, James G.
2009-01-01
An apparatus allows the measurement of the thermal conductance across a thin sample clamped between metal plates, including thermal boundary resistances. It allows in-situ variation of the clamping force from zero to 30 lb (133.4 N), and variation of the sample temperature between 40 and 300 K. It has a special design feature that minimizes the effect of thermal radiation on this measurement. The apparatus includes a heater plate sandwiched between two identical thin samples. On the side of each sample opposite the heater plate is a cold plate. In order to take data, the heater plate is controlled at a slightly higher temperature than the two cold plates, which are controlled at a single lower temperature. The steady-state controlling power supplied to the hot plate, the area and thickness of samples, and the temperature drop across the samples are then used in a simple calculation of the thermal conductance. The conductance measurements can be taken at arbitrary temperatures down to about 40 K, as the entire setup is cooled by a mechanical cryocooler. The specific geometry combined with the pneumatic clamping force control system and the steady-state temperature control approach make this a unique apparatus.
Spatiotemporal modeling of node temperatures in supercomputers
Storlie, Curtis Byron; Reich, Brian James; Rust, William Newton; ...
2016-06-10
Los Alamos National Laboratory (LANL) is home to many large supercomputing clusters. These clusters require an enormous amount of power (~500-2000 kW each), and most of this energy is converted into heat. Thus, cooling the components of the supercomputer becomes a critical and expensive endeavor. Recently a project was initiated to investigate the effect that changes to the cooling system in a machine room had on three large machines that were housed there. Coupled with this goal was the aim to develop a general good-practice for characterizing the effect of cooling changes and monitoring machine node temperatures in this andmore » other machine rooms. This paper focuses on the statistical approach used to quantify the effect that several cooling changes to the room had on the temperatures of the individual nodes of the computers. The largest cluster in the room has 1,600 nodes that run a variety of jobs during general use. Since extremes temperatures are important, a Normal distribution plus generalized Pareto distribution for the upper tail is used to model the marginal distribution, along with a Gaussian process copula to account for spatio-temporal dependence. A Gaussian Markov random field (GMRF) model is used to model the spatial effects on the node temperatures as the cooling changes take place. This model is then used to assess the condition of the node temperatures after each change to the room. The analysis approach was used to uncover the cause of a problematic episode of overheating nodes on one of the supercomputing clusters. Lastly, this same approach can easily be applied to monitor and investigate cooling systems at other data centers, as well.« less
Meth math: modeling temperature responses to methamphetamine.
Molkov, Yaroslav I; Zaretskaia, Maria V; Zaretsky, Dmitry V
2014-04-15
Methamphetamine (Meth) can evoke extreme hyperthermia, which correlates with neurotoxicity and death in laboratory animals and humans. The objective of this study was to uncover the mechanisms of a complex dose dependence of temperature responses to Meth by mathematical modeling of the neuronal circuitry. On the basis of previous studies, we composed an artificial neural network with the core comprising three sequentially connected nodes: excitatory, medullary, and sympathetic preganglionic neuronal (SPN). Meth directly stimulated the excitatory node, an inhibitory drive targeted the medullary node, and, in high doses, an additional excitatory drive affected the SPN node. All model parameters (weights of connections, sensitivities, and time constants) were subject to fitting experimental time series of temperature responses to 1, 3, 5, and 10 mg/kg Meth. Modeling suggested that the temperature response to the lowest dose of Meth, which caused an immediate and short hyperthermia, involves neuronal excitation at a supramedullary level. The delay in response after the intermediate doses of Meth is a result of neuronal inhibition at the medullary level. Finally, the rapid and robust increase in body temperature induced by the highest dose of Meth involves activation of high-dose excitatory drive. The impairment in the inhibitory mechanism can provoke a life-threatening temperature rise and makes it a plausible cause of fatal hyperthermia in Meth users. We expect that studying putative neuronal sites of Meth action and the neuromediators involved in a detailed model of this system may lead to more effective strategies for prevention and treatment of hyperthermia induced by amphetamine-like stimulants.
Spatiotemporal modeling of node temperatures in supercomputers
Storlie, Curtis Byron; Reich, Brian James; Rust, William Newton; Ticknor, Lawrence O.; Bonnie, Amanda Marie; Montoya, Andrew J.; Michalak, Sarah E.
2016-06-10
Los Alamos National Laboratory (LANL) is home to many large supercomputing clusters. These clusters require an enormous amount of power (~500-2000 kW each), and most of this energy is converted into heat. Thus, cooling the components of the supercomputer becomes a critical and expensive endeavor. Recently a project was initiated to investigate the effect that changes to the cooling system in a machine room had on three large machines that were housed there. Coupled with this goal was the aim to develop a general good-practice for characterizing the effect of cooling changes and monitoring machine node temperatures in this and other machine rooms. This paper focuses on the statistical approach used to quantify the effect that several cooling changes to the room had on the temperatures of the individual nodes of the computers. The largest cluster in the room has 1,600 nodes that run a variety of jobs during general use. Since extremes temperatures are important, a Normal distribution plus generalized Pareto distribution for the upper tail is used to model the marginal distribution, along with a Gaussian process copula to account for spatio-temporal dependence. A Gaussian Markov random field (GMRF) model is used to model the spatial effects on the node temperatures as the cooling changes take place. This model is then used to assess the condition of the node temperatures after each change to the room. The analysis approach was used to uncover the cause of a problematic episode of overheating nodes on one of the supercomputing clusters. Lastly, this same approach can easily be applied to monitor and investigate cooling systems at other data centers, as well.
Spatiotemporal modeling of node temperatures in supercomputers
Storlie, Curtis Byron; Reich, Brian James; Rust, William Newton; Ticknor, Lawrence O.; Bonnie, Amanda Marie; Montoya, Andrew J.; Michalak, Sarah E.
2016-06-10
Los Alamos National Laboratory (LANL) is home to many large supercomputing clusters. These clusters require an enormous amount of power (~500-2000 kW each), and most of this energy is converted into heat. Thus, cooling the components of the supercomputer becomes a critical and expensive endeavor. Recently a project was initiated to investigate the effect that changes to the cooling system in a machine room had on three large machines that were housed there. Coupled with this goal was the aim to develop a general good-practice for characterizing the effect of cooling changes and monitoring machine node temperatures in this and other machine rooms. This paper focuses on the statistical approach used to quantify the effect that several cooling changes to the room had on the temperatures of the individual nodes of the computers. The largest cluster in the room has 1,600 nodes that run a variety of jobs during general use. Since extremes temperatures are important, a Normal distribution plus generalized Pareto distribution for the upper tail is used to model the marginal distribution, along with a Gaussian process copula to account for spatio-temporal dependence. A Gaussian Markov random field (GMRF) model is used to model the spatial effects on the node temperatures as the cooling changes take place. This model is then used to assess the condition of the node temperatures after each change to the room. The analysis approach was used to uncover the cause of a problematic episode of overheating nodes on one of the supercomputing clusters. Lastly, this same approach can easily be applied to monitor and investigate cooling systems at other data centers, as well.
Meth math: modeling temperature responses to methamphetamine
Molkov, Yaroslav I.; Zaretskaia, Maria V.
2014-01-01
Methamphetamine (Meth) can evoke extreme hyperthermia, which correlates with neurotoxicity and death in laboratory animals and humans. The objective of this study was to uncover the mechanisms of a complex dose dependence of temperature responses to Meth by mathematical modeling of the neuronal circuitry. On the basis of previous studies, we composed an artificial neural network with the core comprising three sequentially connected nodes: excitatory, medullary, and sympathetic preganglionic neuronal (SPN). Meth directly stimulated the excitatory node, an inhibitory drive targeted the medullary node, and, in high doses, an additional excitatory drive affected the SPN node. All model parameters (weights of connections, sensitivities, and time constants) were subject to fitting experimental time series of temperature responses to 1, 3, 5, and 10 mg/kg Meth. Modeling suggested that the temperature response to the lowest dose of Meth, which caused an immediate and short hyperthermia, involves neuronal excitation at a supramedullary level. The delay in response after the intermediate doses of Meth is a result of neuronal inhibition at the medullary level. Finally, the rapid and robust increase in body temperature induced by the highest dose of Meth involves activation of high-dose excitatory drive. The impairment in the inhibitory mechanism can provoke a life-threatening temperature rise and makes it a plausible cause of fatal hyperthermia in Meth users. We expect that studying putative neuronal sites of Meth action and the neuromediators involved in a detailed model of this system may lead to more effective strategies for prevention and treatment of hyperthermia induced by amphetamine-like stimulants. PMID:24500434
NASA Technical Reports Server (NTRS)
Nastrom, G. D.; Jasperson, W. H.
1983-01-01
Temperature data obtained by the Global Atmospheric Sampling Program (GASP) during the period March 1975 to July 1979 are compiled to form flight summaries of static air temperature and a geographic temperature climatology. The flight summaries include the height and location of the coldest observed temperature and the mean flight level, temperature and the standard deviation of temperature for each flight as well as for flight segments. These summaries are ordered by route and month. The temperature climatology was computed for all statistically independent temperture data for each flight. The grid used consists of 5 deg latitude, 30 deg longitude and 2000 feet vertical resolution from FL270 to FL430 for each month of the year. The number of statistically independent observations, their mean, standard deviation and the empirical 98, 50, 16, 2 and .3 probability percentiles are presented.
Automated sample exchange and tracking system for neutron research at cryogenic temperatures.
Rix, J E; Weber, J K R; Santodonato, L J; Hill, B; Walker, L M; McPherson, R; Wenzel, J; Hammons, S E; Hodges, J; Rennich, M; Volin, K J
2007-01-01
An automated system for sample exchange and tracking in a cryogenic environment and under remote computer control was developed. Up to 24 sample "cans" per cycle can be inserted and retrieved in a programed sequence. A video camera acquires a unique identification marked on the sample can to provide a record of the sequence. All operations are coordinated via a LABVIEW program that can be operated locally or over a network. The samples are contained in vanadium cans of 6-10 mm in diameter and equipped with a hermetically sealed lid that interfaces with the sample handler. The system uses a closed-cycle refrigerator (CCR) for cooling. The sample was delivered to a precooling location that was at a temperature of approximately 25 K, after several minutes, it was moved onto a "landing pad" at approximately 10 K that locates the sample in the probe beam. After the sample was released onto the landing pad, the sample handler was retracted. Reading the sample identification and the exchange operation takes approximately 2 min. The time to cool the sample from ambient temperature to approximately 10 K was approximately 7 min including precooling time. The cooling time increases to approximately 12 min if precooling is not used. Small differences in cooling rate were observed between sample materials and for different sample can sizes. Filling the sample well and the sample can with low pressure helium is essential to provide heat transfer and to achieve useful cooling rates. A resistive heating coil can be used to offset the refrigeration so that temperatures up to approximately 350 K can be accessed and controlled using a proportional-integral-derivative control loop. The time for the landing pad to cool to approximately 10 K after it has been heated to approximately 240 K was approximately 20 min.
Modeling quantum fluid dynamics at nonzero temperatures
Berloff, Natalia G.; Brachet, Marc; Proukakis, Nick P.
2014-01-01
The detailed understanding of the intricate dynamics of quantum fluids, in particular in the rapidly growing subfield of quantum turbulence which elucidates the evolution of a vortex tangle in a superfluid, requires an in-depth understanding of the role of finite temperature in such systems. The Landau two-fluid model is the most successful hydrodynamical theory of superfluid helium, but by the nature of the scale separations it cannot give an adequate description of the processes involving vortex dynamics and interactions. In our contribution we introduce a framework based on a nonlinear classical-field equation that is mathematically identical to the Landau model and provides a mechanism for severing and coalescence of vortex lines, so that the questions related to the behavior of quantized vortices can be addressed self-consistently. The correct equation of state as well as nonlocality of interactions that leads to the existence of the roton minimum can also be introduced in such description. We review and apply the ideas developed for finite-temperature description of weakly interacting Bose gases as possible extensions and numerical refinements of the proposed method. We apply this method to elucidate the behavior of the vortices during expansion and contraction following the change in applied pressure. We show that at low temperatures, during the contraction of the vortex core as the negative pressure grows back to positive values, the vortex line density grows through a mechanism of vortex multiplication. This mechanism is suppressed at high temperatures. PMID:24704874
Modeling quantum fluid dynamics at nonzero temperatures.
Berloff, Natalia G; Brachet, Marc; Proukakis, Nick P
2014-03-25
The detailed understanding of the intricate dynamics of quantum fluids, in particular in the rapidly growing subfield of quantum turbulence which elucidates the evolution of a vortex tangle in a superfluid, requires an in-depth understanding of the role of finite temperature in such systems. The Landau two-fluid model is the most successful hydrodynamical theory of superfluid helium, but by the nature of the scale separations it cannot give an adequate description of the processes involving vortex dynamics and interactions. In our contribution we introduce a framework based on a nonlinear classical-field equation that is mathematically identical to the Landau model and provides a mechanism for severing and coalescence of vortex lines, so that the questions related to the behavior of quantized vortices can be addressed self-consistently. The correct equation of state as well as nonlocality of interactions that leads to the existence of the roton minimum can also be introduced in such description. We review and apply the ideas developed for finite-temperature description of weakly interacting Bose gases as possible extensions and numerical refinements of the proposed method. We apply this method to elucidate the behavior of the vortices during expansion and contraction following the change in applied pressure. We show that at low temperatures, during the contraction of the vortex core as the negative pressure grows back to positive values, the vortex line density grows through a mechanism of vortex multiplication. This mechanism is suppressed at high temperatures.
NASA Technical Reports Server (NTRS)
Johnston, Alan R.; Hartmayer, Ron; Bergman, Larry A.
1992-01-01
This paper will concentrate on results obtained from the Jet Propulsion Lab (JPL) Fiber Optics Long Duration Exposure Facility (LDEF) Experiment since the June 1991 Experimenters Workshop. Radiation darkening of the laboratory control samples will be compared with the LDEF flight samples. The results of laboratory temperature tests on the flight samples extending over a period of about nine years including the preflight and postflight analysis periods will be described.
NASA Technical Reports Server (NTRS)
Thorpe, A. N.; Sullivan, S.; Alexander, C. C.; Senftle, F. E.; Dwornik, E. J.
1972-01-01
Magnetic susceptibility of 11 glass spherules from the Apollo 14 lunar fines have been measured from room temperature to 4 K. Data taken at room temperature, 77 K, and 4.2 K, show that the soft saturation magnetization was temperature independent. In the temperature range 300 to 77 K the temperature-dependent component of the magnetic susceptibility obeys the Curie law. Susceptibility measurements on these same specimens and in addition 14 similar spherules from the Apollo 11 and 12 mission show a Curie-Weiss relation at temperatures less than 77 K with a Weiss temperature of 3-7 degrees in contrast to 2-3 degrees found for tektites and synthetic glasses of tektite composition. A proposed model and a theoretical expression closely predict the variation of the susceptibility of the glass spherules with temperature.
Kury, P.; Zahl, P.; Horn-von Hoegen, M.; Voges, C.; Frischat, H.; Guenter, H.-L.; Pfnuer, H.; Henzler, M.
2004-11-01
Spot profile analysis low energy electron diffraction (SPA-LEED) is one of the most versatile and powerful methods for the determination of the structure and morphology of surfaces even at elevated temperatures. In setups where the sample is heated directly by an electric current, the resolution of the diffraction images at higher temperatures can be heavily degraded due to the inhomogeneous electric and magnetic fields around the sample. Here we present an easily applicable modification of the common data acquisition hardware of the SPA-LEED, which enables the system to work in a pulsed heating mode: Instead of heating the sample with a constant current, a square wave is used and electron counting is only performed when the current through the sample vanishes. Thus, undistorted diffration images can be acquired at high temperatures.
Modeling Low-temperature Geochemical Processes
NASA Astrophysics Data System (ADS)
Nordstrom, D. K.
2003-12-01
Geochemical modeling has become a popular and useful tool for a wide number of applications from research on the fundamental processes of water-rock interactions to regulatory requirements and decisions regarding permits for industrial and hazardous wastes. In low-temperature environments, generally thought of as those in the temperature range of 0-100 °C and close to atmospheric pressure (1 atm=1.01325 bar=101,325 Pa), complex hydrobiogeochemical reactions participate in an array of interconnected processes that affect us, and that, in turn, we affect. Understanding these complex processes often requires tools that are sufficiently sophisticated to portray multicomponent, multiphase chemical reactions yet transparent enough to reveal the main driving forces. Geochemical models are such tools. The major processes that they are required to model include mineral dissolution and precipitation; aqueous inorganic speciation and complexation; solute adsorption and desorption; ion exchange; oxidation-reduction; or redox; transformations; gas uptake or production; organic matter speciation and complexation; evaporation; dilution; water mixing; reaction during fluid flow; reaction involving biotic interactions; and photoreaction. These processes occur in rain, snow, fog, dry atmosphere, soils, bedrock weathering, streams, rivers, lakes, groundwaters, estuaries, brines, and diagenetic environments. Geochemical modeling attempts to understand the redistribution of elements and compounds, through anthropogenic and natural means, for a large range of scale from nanometer to global. "Aqueous geochemistry" and "environmental geochemistry" are often used interchangeably with "low-temperature geochemistry" to emphasize hydrologic or environmental objectives.Recognition of the strategy or philosophy behind the use of geochemical modeling is not often discussed or explicitly described. Plummer (1984, 1992) and Parkhurst and Plummer (1993) compare and contrast two approaches for
Improved Estimation Model of Lunar Surface Temperature
NASA Astrophysics Data System (ADS)
Zheng, Y.
2015-12-01
Lunar surface temperature (LST) is of great scientific interest both uncovering the thermal properties and designing the lunar robotic or manned landing missions. In this paper, we proposed the improved LST estimation model based on the one-dimensional partial differential equation (PDE). The shadow and surface tilts effects were combined into the model. Using the Chang'E (CE-1) DEM data from the Laser Altimeter (LA), the topographic effect can be estimated with an improved effective solar irradiance (ESI) model. In Fig. 1, the highest LST of the global Moon has been estimated with the spatial resolution of 1 degree /pixel, applying the solar albedo data derived from Clementine UV-750nm in solving the PDE function. The topographic effect is significant in the LST map. It can be identified clearly the maria, highland, and craters. The maximum daytime LST presents at the regions with low albedo, i.g. mare Procellarum, mare Serenitatis and mare Imbrium. The results are consistent with the Diviner's measurements of the LRO mission. Fig. 2 shows the temperature variations at the center of the disk in one year, assuming the Moon to be standard spherical. The seasonal variation of LST at the equator is about 10K. The highest LST occurs in early May. Fig.1. Estimated maximum surface temperatures of the global Moon in spatial resolution of 1 degree /pixel
The XXL Survey. III. Luminosity-temperature relation of the bright cluster sample
NASA Astrophysics Data System (ADS)
Giles, P. A.; Maughan, B. J.; Pacaud, F.; Lieu, M.; Clerc, N.; Pierre, M.; Adami, C.; Chiappetti, L.; Démoclés, J.; Ettori, S.; Le Févre, J. P.; Ponman, T.; Sadibekova, T.; Smith, G. P.; Willis, J. P.; Ziparo, F.
2016-06-01
Context. The XXL Survey is the largest homogeneous survey carried out with XMM-Newton. Covering an area of 50 deg2, the survey contains several hundred galaxy clusters out to a redshift of ~2 above an X-ray flux limit of ~5 × 10-15 erg cm-2 s-1. This paper belongs to the first series of XXL papers focusing on the bright cluster sample. Aims: We investigate the luminosity-temperature (LT) relation for the brightest clusters detected in the XXL Survey, taking fully into account the selection biases. We investigate the form of the LT relation, placing constraints on its evolution. Methods: We have classified the 100 brightest clusters in the XXL Survey based on their measured X-ray flux. These 100 clusters have been analysed to determine their luminosity and temperature to evaluate the LT relation. We used three methods to fit the form of the LT relation, with two of these methods providing a prescription to fully take into account the selection effects of the survey. We measure the evolution of the LT relation internally using the broad redshift range of the sample. Results: Taking fully into account selection effects, we find a slope of the bolometric LT relation of BLT = 3.08 ± 0.15, steeper than the self-similar expectation (BLT = 2). Our best-fit result for the evolution factor is E(z)1.64 ± 0.77, fully consistent with "strong self-similar" evolution where clusters scale self-similarly with both mass and redshift. However, this result is marginally stronger than "weak self-similar" evolution, where clusters scale with redshift alone. We investigate the sensitivity of our results to the assumptions made in our fitting model, finding that using an external LT relation as a low-z baseline can have a profound effect on the measured evolution. However, more clusters are needed in order to break the degeneracy between the choice of likelihood model and mass-temperature relation on the derived evolution. Based on observations obtained with XMM-Newton, an ESA science
Exploring HP protein models using Wang-Landau sampling
NASA Astrophysics Data System (ADS)
Wuest, Thomas; Landau, David P.
2008-03-01
The hydrophobic-polar (HP) protein model has become a standard in assessing the efficiency of computational methods for protein structure prediction as well as for exploring the statistical physics of protein folding in general. Numerous methods have been proposed to address the challenges of finding minimal energy conformations within the rough energy landscape of this lattice heteropolymer model. However, only a few studies have been dedicated to the more revealing - but also more demanding - problem of estimating the density of states which allows access to thermodynamic properties of a system at any temperature. Here, we show that Wang-Landau sampling, in connection with a suitable move set (``pull moves''), provides a powerful route for the ground state search and the precise determination of the density of states for HP sequences (with up to 100 monomers) in both, two and three dimensions. Our procedure possesses an intrinsic simplicity and overcomes the inevitable limitations inherent in other more tailored approaches. The main advantage lies in its general applicability to a broad range of lattice protein models that go beyond the scope of the HP model.
Teaching ANOVA Models via Miniature Numerical Samples
ERIC Educational Resources Information Center
Bolton, Brian
1975-01-01
On the premise that the more formal algebraic presentation of statistics must be placed in a concrete context to facilitate student understanding, the author presents a pedagogical device involving the construction of miniature numerical examples that illustrate how the statistical model imposes structure on empirical data. (JT)
Symonds, Erin L; Cole, Stephen R; Bastin, Dawn; Fraser, Robert Jl; Young, Graeme P
2017-01-01
Objectives Faecal immunochemical test accuracy may be adversely affected when samples are exposed to high temperatures. This study evaluated the effect of two sample collection buffer formulations (OC-Sensor, Eiken) and storage temperatures on faecal haemoglobin readings. Methods Faecal immunochemical test samples returned in a screening programme and with ≥10 µg Hb/g faeces in either the original or new formulation haemoglobin stabilizing buffer were stored in the freezer, refrigerator, or at room temperature (22℃-24℃), and reanalysed after 1-14 days. Samples in the new buffer were also reanalysed after storage at 35℃ and 50℃. Results were expressed as percentage of the initial concentration, and the number of days that levels were maintained to at least 80% was calculated. Results Haemoglobin concentrations were maintained above 80% of their initial concentration with both freezer and refrigerator storage, regardless of buffer formulation or storage duration. Stability at room temperature was significantly better in the new buffer, with haemoglobin remaining above 80% for 20 days compared with six days in the original buffer. Storage at 35℃ or 50℃ in the new buffer maintained haemoglobin above 80% for eight and two days, respectively. Conclusion The new formulation buffer has enhanced haemoglobin stabilizing properties when samples are exposed to temperatures greater than 22℃.
NASA Astrophysics Data System (ADS)
Portner, H.; Bugmann, H.; Wolf, A.
2010-11-01
Models of carbon cycling in terrestrial ecosystems contain formulations for the dependence of respiration on temperature, but the sensitivity of predicted carbon pools and fluxes to these formulations and their parameterization is not well understood. Thus, we performed an uncertainty analysis of soil organic matter decomposition with respect to its temperature dependency using the ecosystem model LPJ-GUESS. We used five temperature response functions (Exponential, Arrhenius, Lloyd-Taylor, Gaussian, Van't Hoff). We determined the parameter confidence ranges of the formulations by nonlinear regression analysis based on eight experimental datasets from Northern Hemisphere ecosystems. We sampled over the confidence ranges of the parameters and ran simulations for each pair of temperature response function and calibration site. We analyzed both the long-term and the short-term heterotrophic soil carbon dynamics over a virtual elevation gradient in southern Switzerland. The temperature relationship of Lloyd-Taylor fitted the overall data set best as the other functions either resulted in poor fits (Exponential, Arrhenius) or were not applicable for all datasets (Gaussian, Van't Hoff). There were two main sources of uncertainty for model simulations: (1) the lack of confidence in the parameter estimates of the temperature response, which increased with increasing temperature, and (2) the size of the simulated soil carbon pools, which increased with elevation, as slower turn-over times lead to higher carbon stocks and higher associated uncertainties. Our results therefore indicate that such projections are more uncertain for higher elevations and hence also higher latitudes, which are of key importance for the global terrestrial carbon budget.
Temperature influences in receiver clock modelling
NASA Astrophysics Data System (ADS)
Wang, Kan; Meindl, Michael; Rothacher, Markus; Schoenemann, Erik; Enderle, Werner
2016-04-01
In Precise Point Positioning (PPP), hardware delays at the receiver site (receiver, cables, antenna, …) are always difficult to be separated from the estimated receiver clock parameters. As a result, they are partially or fully contained in the estimated "apparent" clocks and will influence the deterministic and stochastic modelling of the receiver clock behaviour. In this contribution, using three years of data, the receiver clock corrections of a set of high-precision Hydrogen Masers (H-Masers) connected to stations of the ESA/ESOC network and the International GNSS Service (IGS) are firstly characterized concerning clock offsets, drifts, modified Allan deviations and stochastic parameters. In a second step, the apparent behaviour of the clocks is modelled with the help of a low-order polynomial and a known temperature coefficient (Weinbach, 2013). The correlations between the temperature and the hardware delays generated by different types of antennae are then analysed looking at daily, 3-day and weekly time intervals. The outcome of these analyses is crucial, if we intend to model the receiver clocks in the ground station network to improve the estimation of station-related parameters like coordinates, troposphere zenith delays and ambiguities. References: Weinbach, U. (2013) Feasibility and impact of receiver clock modeling in precise GPS data analysis. Dissertation, Leibniz Universität Hannover, Germany.
Stratospheric Temperature Changes: Observations and Model Simulations
NASA Technical Reports Server (NTRS)
Ramaswamy, V.; Chanin, M.-L.; Angell, J.; Barnett, J.; Gaffen, D.; Gelman, M.; Keckhut, P.; Koshelkov, Y.; Labitzke, K.; Lin, J.-J. R.
1999-01-01
This paper reviews observations of stratospheric temperatures that have been made over a period of several decades. Those observed temperatures have been used to assess variations and trends in stratospheric temperatures. A wide range of observation datasets have been used, comprising measurements by radiosonde (1940s to the present), satellite (1979 - present), lidar (1979 - present) and rocketsonde (periods varying with location, but most terminating by about the mid-1990s). In addition, trends have also been assessed from meteorological analyses, based on radiosonde and/or satellite data, and products based on assimilating observations into a general circulation model. Radiosonde and satellite data indicate a cooling trend of the annual-mean lower stratosphere since about 1980. Over the period 1979-1994, the trend is 0.6K/decade. For the period prior to 1980, the radiosonde data exhibit a substantially weaker long-term cooling trend. In the northern hemisphere, the cooling trend is about 0.75K/decade in the lower stratosphere, with a reduction in the cooling in mid-stratosphere (near 35 km), and increased cooling in the upper stratosphere (approximately 2 K per decade at 50 km). Model simulations indicate that the depletion of lower stratospheric ozone is the dominant factor in the observed lower stratospheric cooling. In the middle and upper stratosphere both the well-mixed greenhouse gases (such as CO) and ozone changes contribute in an important manner to the cooling.
NASA Astrophysics Data System (ADS)
Portner, H.; Bugmann, H.; Wolf, A.
2009-08-01
Models of carbon cycling in terrestrial ecosystems contain formulations for the dependence of respiration on temperature, but the sensitivity of predicted carbon pools and fluxes to these formulations and their parameterization is not understood. Thus, we made an uncertainty analysis of soil organic matter decomposition with respect to its temperature dependency using the ecosystem model LPJ-GUESS. We used five temperature response functions (Exponential, Arrhenius, Lloyd-Taylor, Gaussian, Van't Hoff). We determined the parameter uncertainty ranges of the functions by nonlinear regression analysis based on eight experimental datasets from northern hemisphere ecosystems. We sampled over the uncertainty bounds of the parameters and run simulations for each pair of temperature response function and calibration site. The uncertainty in both long-term and short-term soil carbon dynamics was analyzed over an elevation gradient in southern Switzerland. The function of Lloyd-Taylor turned out to be adequate for modelling the temperature dependency of soil organic matter decomposition, whereas the other functions either resulted in poor fits (Exponential, Arrhenius) or were not applicable for all datasets (Gaussian, Van't Hoff). There were two main sources of uncertainty for model simulations: (1) the uncertainty in the parameter estimates of the response functions, which increased with increasing temperature and (2) the uncertainty in the simulated size of carbon pools, which increased with elevation, as slower turn-over times lead to higher carbon stocks and higher associated uncertainties. The higher uncertainty in carbon pools with slow turn-over rates has important implications for the uncertainty in the projection of the change of soil carbon stocks driven by climate change, which turned out to be more uncertain for higher elevations and hence higher latitudes, which are of key importance for the global terrestrial carbon budget.
Sesé, J; Bartolomé, J; Rillo, C
2007-04-01
A sample holder for high temperature (300 K
Gagner, R.V.; Hrudey, S.E.
1997-12-31
An evaluation was made of the performance of the 3M Organic Vapor Monitor No. 3500 through experiments conducted under permeation tube generated atmospheres in a controlled chamber environment. A range of typical ambient benzene and toluene concentrations were produced in the chamber to test the consistency of the sampling rate under different exposure levels. All tests were repeated at room temperature, and under subzero Celsius conditions to determine the effect of lowered temperatures on the performance of the badge. As expected, relatively low concentrations of benzene and toluene produced small incremental increases in analyte above the background levels inherent to the badge and analytical methods resulting in a loss of method precision. The badge sampling rate was not significantly affected by decreases in temperature to minus fifteen degrees Celsius. This finding was not consistent with the theoretically-based temperature correction factors identified in the product literature.
Johnston, James D.; Magnusson, Brianna M.; Eggett, Dennis; Collingwood, Scott C.; Bernhardt, Scott A.
2016-01-01
Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hrs, and 12-days) in 9 northern Utah homes, from March – June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions. PMID:26030088
Johnston, James D; Magnusson, Brianna M; Eggett, Dennis; Collingwood, Scott C; Bernhardt, Scott A
2015-01-01
Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hr, and 12-days) in 9 northern Utah homes, from March-June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions.
Frequency sampling in microhistological studies: An alternative model
Williams, B.K.
1987-01-01
Frequency sampling in microhistological studies is discussed in terms of sampling procedures, statistical properties, and biological inferences. Two sampling approaches are described and con-trasted, and some standard methods for improving the stability of density estimators are discussed. Possible sources of difficulty are highlighted in terms of sampling design and statistical analysis. An alternative model is proposed that accounts for 2-stage sampling, and yields reasonable, we!!-behaved estimates of relative densities.
The effects of storage temperature and duration of blood samples on DNA and RNA qualities.
Huang, Lien-Hung; Lin, Pei-Hsien; Tsai, Kuo-Wang; Wang, Liang-Jen; Huang, Ying-Hsien; Kuo, Ho-Chang; Li, Sung-Chou
2017-01-01
DNA and RNA samples from blood are the common examination target for non-invasive physical tests and/or biomedical studies. Since high-quality DNA and RNA samples guarantee the correctness of these tests and/or studies, we investigated the effects of storage temperature and storage duration of whole blood on DNA and RNA qualities. Subjects were enrolled to donate blood samples which were stored for different durations and at different temperatures, followed by the examinations on RNA quality, qPCR, DNA quality and DNA methylation. For RNA, we observed obvious quality decline with storage duration longer than 24 hours. Storage at low temperature does not keep RNA samples from degradation. And, storing whole blood samples in freezer dramatically damage RNA. For DNA, quality decline was not observed even with storage duration for 15 days. However, DNA methylation significantly altered with storage duration longer than three days. Storage duration within 24 hours is critical for collecting high-quality RNA samples for next-generation sequencing (NGS) assays (RIN≧8). If microarray assays are expected (RIN≧7), storage duration within 32 hours is acceptable. Although DNA is resistant within 15 days when kept in whole blood, DNA quantity dramatically decreases owing to WBC lysis. In addition, duration for more than three days significantly alter DNA methylation status, globally and locally. Our result provides a reference for dealing with blood samples.
Determination of the thermal desorption kinetic parameters for samples with a temperature gradient
NASA Astrophysics Data System (ADS)
Kurenyova, T. Y.; Ryskin, M. E.; Shub, B. R.
1981-08-01
An application of the thermal desorption technique to the study of desorption from the samples with a temperature gradient is discussed. The kinetics of first- and second-order desorption from linearly and exponentially heated samples with a parabolic temperature profile is considered. It is shown that the low-temperature part of the thermal desorption curve is described by the same equations as those for the desorption from the nongradient surface with the less effective area and with the temperature equal to that at the center of the nonuniformly heated sample. The approximate analytical expressions for the amount of adsorbed surface species as a function of time are derived. These expressions enable to find the kinetics order, the activation energy E and the preexponential factor k0 for the desorption process from thermal desorption spectra. In a first approximation the corrections for the nonuniformity of the sample temperature do not substantially change the value of E but slightly increase the value of k0. The correction procedure for k0 is described in detail. The possible application of the proposed method to various experimental conditions is discussed.
The use of ESR technique for assessment of heating temperatures of archaeological lentil samples.
Aydaş, Canan; Engin, Birol; Dönmez, Emel Oybak; Belli, Oktay
2010-01-01
Heat-induced paramagnetic centers in modern and archaeological lentils (Lens culinaris, Medik.) were studied by X-band (9.3GHz) electron spin resonance (ESR) technique. The modern red lentil samples were heated in an electrical furnace at increasing temperatures in the range 70-500 degrees C. The ESR spectral parameters (the intensity, g-value and peak-to-peak line width) of the heat-induced organic radicals were investigated for modern red lentil (Lens culinaris, Medik.) samples. The obtained ESR spectra indicate that the relative number of heat-induced paramagnetic species and peak-to-peak line widths depends on the temperature and heating time of the modern lentil. The g-values also depend on the heating temperature but not heating time. Heated modern red lentils produced a range of organic radicals with g-values from g=2.0062 to 2.0035. ESR signals of carbonised archaeological lentil samples from two archaeological deposits of the Van province in Turkey were studied and g-values, peak-to-peak line widths, intensities and elemental compositions were compared with those obtained for modern samples in order to assess at which temperature these archaeological lentils were heated in prehistoric sites. The maximum temperatures of the previous heating of carbonised UA5 and Y11 lentil seeds are as follows about 500 degrees C and above 500 degrees C, respectively. Copyright 2009 Elsevier B.V. All rights reserved.
The use of ESR technique for assessment of heating temperatures of archaeological lentil samples
NASA Astrophysics Data System (ADS)
Aydaş, Canan; Engin, Birol; Dönmez, Emel Oybak; Belli, Oktay
2010-01-01
Heat-induced paramagnetic centers in modern and archaeological lentils ( Lens culinaris, Medik.) were studied by X-band (9.3 GHz) electron spin resonance (ESR) technique. The modern red lentil samples were heated in an electrical furnace at increasing temperatures in the range 70-500 °C. The ESR spectral parameters (the intensity, g-value and peak-to-peak line width) of the heat-induced organic radicals were investigated for modern red lentil ( Lens culinaris, Medik.) samples. The obtained ESR spectra indicate that the relative number of heat-induced paramagnetic species and peak-to-peak line widths depends on the temperature and heating time of the modern lentil. The g-values also depend on the heating temperature but not heating time. Heated modern red lentils produced a range of organic radicals with g-values from g = 2.0062 to 2.0035. ESR signals of carbonised archaeological lentil samples from two archaeological deposits of the Van province in Turkey were studied and g-values, peak-to-peak line widths, intensities and elemental compositions were compared with those obtained for modern samples in order to assess at which temperature these archaeological lentils were heated in prehistoric sites. The maximum temperatures of the previous heating of carbonised UA5 and Y11 lentil seeds are as follows about 500 °C and above 500 °C, respectively.
NASA Astrophysics Data System (ADS)
Yildirim, N.; Dogan, H.; Korkut, H.; Turut, A.
We have prepared the sputtered Ni/n-GaAs Schottky diodes which consist of as-deposited, and diodes annealed at 200 and 400°C for 2 min. The effect of thermal annealing on the temperature-dependent current-voltage (I-V) characteristics of the diodes has been experimentally investigated. Their I-V characteristics have been measured in the temperature range of 60-320 K with steps of 20 K. It has been seen that the barrier height (BH) slightly increased from 0.84 (as-deposited sample) to 0.88 eV at 300 K when the contact has been annealed at 400°C. The SBH increased whereas the ideality factor decreased with increasing annealing temperature for each sample temperature. The I-V measurements showed a dependence of ideality factor n and BH on the measuring temperature that cannot be explained by the classical thermionic emission theory. The experimental data are consistent with the presence of an inhomogeneity of the SBHs. Therefore, the temperature dependent I-V characteristics of the diodes have been discussed in terms of the multi-Gaussian distribution model. The experimental data good have agree with the fitting curves over whole measurement temperature range indicating that the SBH inhomogeneity of our as-deposited and annealed Ni/n-GaAs SBDs can be well-described by a double-Gaussian distribution. The slope of the nT versus T plot for the samples has approached to unity with increasing annealing temperature and becomes parallel to that of the ideal Schottky contact behavior for the 400°C annealed diode. Thus, it has been concluded that the thermal annealing process translates the metal-semiconductor contacts into thermally stable Schottky contacts.
Method for determining temperatures and heat transfer coefficients with a superconductive sample
Gentile, D.; Hassenzahl, W.; Polak, M.
1980-05-01
The method that is described here uses the current-sharing characteristic of a copper-stabilized, superconductive NbTi wire to determine the temperature. The measurements were made for magnetic fields up to 6 T and the precision actually attained with this method is about 0.1 K. It is an improvement over one that has been used at 4.2 K to measure transient heat transfer in that all the parameters of the sample are well known and the current in the sample is measured directly. The response time of the probe is less than 5 ..mu..s and it has been used to measure temperatures during heat pulses as short as 20 ..mu..s. Temperature measurements between 1.6 and 8.5 K are described. An accurate formula based on the current and electric field along the sample has been developed for temperatures between 2.5 K and the critical temperature of the conductor, which, of course, depends on the applied field. Also described is a graphical method that must be used below 2.5 K, where the critical current is not a linear function of temperature.
Yager, Kevin G.; Tanchak, Oleh M.; Barrett, Christopher J.; Watson, Mike J.; Fritzsche, Helmut
2006-04-15
We describe a novel cell design intended for the study of photoactive materials using neutron reflectometry. The cell can maintain sample temperature and control of ambient atmospheric environment. Critically, the cell is built with an optical port, enabling light irradiation or light probing of the sample, simultaneous with neutron reflectivity measurements. The ability to measure neutron reflectivity with simultaneous temperature ramping and/or light illumination presents unique opportunities for measuring photoactive materials. To validate the cell design, we present preliminary results measuring the photoexpansion of thin films of azobenzene polymer.
The use of variable temperature and magic-angle sample spinning in studies of fulvic acids
Earl, W.L.; Wershaw, R. L.; Thorn, K.A.
1987-01-01
Intensity distortions and poor signal to noise in the cross-polarization magic-angle sample spinning NMR of fulvic acids were investigated and attributed to molecular mobility in these ostensibly "solid" materials. We have shown that inefficiencies in cross polarization can be overcome by lowering the sample temperature to about -60??C. These difficulties can be generalized to many other synthetic and natural products. The use of variable temperature and cross-polarization intensity as a function of contact time can yield valuable qualitative information which can aid in the characterization of many materials. ?? 1987.
Hu, Yue; Hong, Wei; Shi, Yunyu; Liu, Haiyan
2012-10-09
In molecular simulations, accelerated sampling can be achieved efficiently by raising the temperature of a small number of coordinates. For collective coordinates, the temperature-accelerated molecular dynamics method or TAMD has been previously proposed, in which the system is extended by introducing virtual variables that are coupled to these coordinates and simulated at higher temperatures (Maragliano, L.; Vanden-Eijnden, E. Chem. Phys. Lett.2005, 426, 168-175). In such accelerated simulations, steady state or equilibrium distributions may exist but deviate from the canonical Boltzmann one. We show that by assuming adiabatic decoupling between the subsystems simulated at different temperatures, correct canonical distributions and ensemble averages can be obtained through reweighting. The method makes use of the low-dimensional free energy surfaces that are estimated as Gaussian mixture probability densities through maximum likelihood and expectation maximization. Previously, we proposed the amplified collective motion method or ACM. The method employs the coarse-grained elastic network model or ANM to extract collective coordinates for accelerated sampling. Here, we combine the ideas of ACM and of TAMD to develop a general technique that can achieve canonical sampling through reweighting under the adiabatic approximation. To test the validity and accuracy of adiabatic reweighting, first we consider a single n-butane molecule in a canonical stochastic heat bath. Then, we use explicitly solvated alanine dipeptide and GB1 peptide as model systems to demonstrate the proposed approaches. With alanine dipeptide, it is shown that sampling can be accelerated by more than an order of magnitude with TAMD while correct distributions and canonical ensemble averages can be recovered, necessarily through adiabatic reweighting. For the GB1 peptide, the conformational distribution sampled by ACM-TAMD, after adiabatic reweighting, suggested that a normal simulation suffered
Effects of different temperature treatments on biological ice nuclei in snow samples
NASA Astrophysics Data System (ADS)
Hara, Kazutaka; Maki, Teruya; Kakikawa, Makiko; Kobayashi, Fumihisa; Matsuki, Atsushi
2016-09-01
The heat tolerance of biological ice nucleation activity (INA) depends on their types. Different temperature treatments may cause varying degrees of inactivation on biological ice nuclei (IN) in precipitation samples. In this study, we measured IN concentration and bacterial INA in snow samples using a drop freezing assay, and compared the results for unheated snow and snow treated at 40 °C and 90 °C. At a measured temperature of -7 °C, the concentration of IN in untreated snow was 100-570 L-1, whereas the concentration in snow treated at 40 °C and 90 °C was 31-270 L-1 and 2.5-14 L-1, respectively. In the present study, heat sensitive IN inactivated by heating at 40 °C were predominant, and ranged 23-78% of IN at -7 °C compared with untreated samples. Ice nucleation active Pseudomonas strains were also isolated from the snow samples, and heating at 40 °C and 90 °C inactivated these microorganisms. Consequently, different temperature treatments induced varying degrees of inactivation on IN in snow samples. Differences in the concentration of IN across a range of treatment temperatures might reflect the abundance of different heat sensitive biological IN components.
Estimation of sampling error uncertainties in observed surface air temperature change in China
NASA Astrophysics Data System (ADS)
Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun
2017-08-01
This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear at the station-sparse area of northern and western China with the maximum value exceeding 2.0 K2 while small sampling error variances are found at the station-dense area of southern and eastern China with most grid values being less than 0.05 K2. In general, the negative temperature existed in each month prior to the 1980s, and a warming in temperature began thereafter, which accelerated in the early and mid-1990s. The increasing trend in the SAT series was observed for each month of the year with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)-1 occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)-1 in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of the persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.
Estimation of sampling error uncertainties in observed surface air temperature change in China
NASA Astrophysics Data System (ADS)
Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun
2016-06-01
This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear at the station-sparse area of northern and western China with the maximum value exceeding 2.0 K2 while small sampling error variances are found at the station-dense area of southern and eastern China with most grid values being less than 0.05 K2. In general, the negative temperature existed in each month prior to the 1980s, and a warming in temperature began thereafter, which accelerated in the early and mid-1990s. The increasing trend in the SAT series was observed for each month of the year with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)-1 occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)-1 in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of the persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.
Delbo, Marco; Michel, Patrick
2011-02-20
It has been recently shown that near-Earth objects (NEOs) have a temperature history-due to the radiative heating by the Sun-non-trivially correlated to their present orbits. This is because the perihelion distance of NEOs varies as a consequence of dynamical mechanisms, such as resonances and close encounters with planets. Thus, it is worth investigating the temperature history of NEOs that are potential targets of space missions devoted to return samples of prebiotic organic compounds. Some of these compounds, expected to be found on NEOs of primitive composition, break up at moderate temperatures, e.g., 300-670 K. Using a model of the orbital evolution of NEOs and thermal models, we studied the temperature history of (101955) 1999 RQ{sub 36} (the primary target of the mission OSIRIS-REx, proposed in the program New Frontiers of NASA). Assuming that the same material always lies on the surface (i.e., there is no regolith turnover), our results suggest that the temperatures reached during its past evolution affected the stability of some organic compounds at the surface (e.g., there is 50% probability that the surface of 1999 RQ{sub 36} was heated at temperatures {>=}500 K). However, the temperature drops rapidly with depth: the regolith at a depth of 3-5 cm, which is not considered difficult to reach with the current designs of sampling devices, has experienced temperatures about 100 K below those at the surface. This is sufficient to protect some subsurface organics from thermal breakup.
Note: Heated sample platform for in situ temperature-programmed XPS.
Samokhvalov, Alexander; Tatarchuk, Bruce J
2011-07-01
We present the design, fabrication, and performance of the multi-specimen heated platform for linear in situ heating during the Temperature-Programmed XPS (TPXPS). The platform is versatile, compatible with high vacuum (HV) and bakeout. The heater platform is tested under in situ linear heating of typical high surface area sorbent∕catalyst support--nanoporous TiO(2). The platform allows the TPXPS of multiple samples located on specimen disk that can be transferred in and out of the TPXPS chamber. Electric characteristics, temperature and pressure curves are provided. Heating power supply, PID temperature controller, data-logging hardware and software are described.
Eckels, D.E.; Hass, W.J.
1989-05-30
A sample transport, sample introduction, and flame excitation system is described for spectrometric analysis of high temperature gas streams which eliminates degradation of the sample stream by condensation losses. 4 figs.
Lorenzo, R A; Carro, A; Rubí, E; Casais, C; Cela, R
1993-01-01
A programmed temperature gas chromatographic method is presented by which it is possible to carry out routine analysis of methyl mercury in biological samples prepared according to the AOAC official first action recommendations without the need for preliminary treatment of the columns. This method greatly extends the life of the columns as well as the useful time for analysis; it has good linearity and repeatability. With the proposed method a total of 36 samples can be analyzed daily.
NASA Astrophysics Data System (ADS)
Wiederhold, A.; Koblischka, M. R.; Inoue, K.; Muralidhar, M.; Murakami, M.; Hartmann, U.
2016-03-01
A series of disk-shaped, bulk MgB2 superconductors (sample diameter up to 4 cm) was prepared in order to improve the performance for superconducting super-magnets. Several samples were fabricated using a solid state reaction in pure Ar atmosphere from 750 to 950oC in order to determine the optimum processing parameters to obtain the highest critical current density as well as large trapped field values. Additional samples were prepared with added silver (up to 10 wt.-%) to the Mg and B powder. Magneto-resistance data and I/V-characteristics were recorded using an Oxford Instruments Teslatron system. From Arrhenius plots, we determine the TAFF pinning potential, U 0. The I/V-characteristics yield detailed information on the current flow through the polycrystalline samples. The current flow is influenced by the presence of pores in the samples. Our analysis of the achieved critical currents together with a thorough microstructure investigation reveals that the samples prepared at temperatures between 775°C and 805°C exhibit the smallest grains and the best connectivity between them, while the samples fabricated at higher reaction temperatures show a reduced connectivity and lower pinning potential. Doping the samples with silver leads to a considerable increase of the pinning potential and hence, the critical current densities.
NASA Astrophysics Data System (ADS)
Dhiman, Indu; Ebrahimi, O.; Karakas, N.; Höppner, H.; Ziesche, R.; Treimer, Wolfgang
The evolution of flux trap behavior at low temperature (the intermediate state) in high purity Lead samples, both in single crystal with < 100 > orientation and polycrystalline form, is investigated using field cooled (FC) neutron tomography measurements. Reported measurements are carried out for 0∘ and 90∘ sample axis orientation with respect to the external magnetic field. For both < 100 > Pb single crystal as well as polycrystalline sample development of fringe pattern below T
Estimation of Lunar Surface Temperatures: a Numerical Model
NASA Astrophysics Data System (ADS)
Bauch, K.; Hiesinger, H.; Helbert, J.
2009-04-01
About 40 years after the Apollo and other lunar missions, several nations return to the Moon. Indian, Chinese, Japanese and American missions are already in orbit or will soon be launched, and the possibility of a "Made in Germany" mission (Lunar Exploration Orbiter - LEO) looms on the horizon [1]. In preparation of this mission, which will include a thermal infrared spectrometer (SERTIS - SElenological Radiometer and Thermal infrared Imaging Spectrometer), accurate temperature maps of the lunar surface are required. Because the orbiter will be imaging the Moon's surface at different times of the lunar day, an accurate estimation of the thermal variations of the surface with time is necessary to optimize signal-to-noise ratios and define optimal measurement areas. In this study we present new global temperature estimates for sunrise, noontime and sunset. This work provides new and updated research on the temperature variations of the lunar surface, by taking into account the surface and subsurface bulk thermophysical properties, namely their bulk density, heat capacity, thermal conductivity, emissivity and albedo. These properties have been derived from previous spacecraft-based observations, in-situ measurements and returned samples [e.g. 2-4]. In order to determine surface and subsurface temperatures, the one-dimensional heat conduction equation is solved for a resolution of about 0.4°, which is better by a factor of 2 compared to the Clementine measurement and temperature modeling described in [2]. Our work expands on the work of Lawson et al. [2], who calculated global brightness temperatures of subsolar points from the instantaneous energy balance equation assuming the Moon to be a spherical object [2]. Surface daytime temperatures are mainly controlled by their surface albedo and angle of incidence. On the other hand nighttime temperatures are affected by the thermal inertia of the observed surface. Topographic effects are expected to cause earlier or later
Effects of sample size on estimation of rainfall extremes at high temperatures
NASA Astrophysics Data System (ADS)
Boessenkool, Berry; Bürger, Gerd; Heistermann, Maik
2017-09-01
High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
A slab model for computing ground temperature in climate models
NASA Technical Reports Server (NTRS)
Lebedeff, S.; Crane, G.; Russell, G.
1979-01-01
A method is developed for computing the ground temperature accurately over both the diurnal and annual cycles. The ground is divided vertically into only two or three slabs, resulting in very efficient computation. Seasonal storage and release of heat is incorporated, and thus the method is well suited for use in climate models.
Graphite sample preparation for AMS in a high pressure and temperature press
Rubin, Meyer; Mysen, Bjorn O.; Polach, Henry
1984-01-01
A high pressure-temperature press is used to make target material for accelerator mass spectrometry. Graphite was produced from typical **1**4C samples including oxalic acid and carbonates. Beam strength of **1**2C was generally adequate, but random radioactive contamination by **1**4C made age measurements impractical.
Graphite sample preparation for AMS in a high pressure and temperature press
Rubin, M.; Mysen, B.O.; Polach, H.
1984-01-01
A high pressure-high temperature press is used to make target material for accelerator mass spectrometry. Graphite was produced from typical 14C samples including oxalic acid and carbonates. Beam strength of 12C was generally adequate, but random radioactive contamination by 14C made age measurements impractical. ?? 1984.
Long-term storage of salivary cortisol samples at room temperature
NASA Technical Reports Server (NTRS)
Chen, Yu-Ming; Cintron, Nitza M.; Whitson, Peggy A.
1992-01-01
Collection of saliva samples for the measurement of cortisol during space flights provides a simple technique for studying changes in adrenal function due microgravity. In the present work, several methods for preserving saliva cortisol at room temperature were investigated using radioimmunoassays for determining cortisol in saliva samples collected on a saliva-collection device called Salivettes. It was found that a pretreatment of Salivettes with citric acid resulted in preserving more than 85 percent of the salivary cortisol for as long as six weeks. The results correlated well with those for a sample stored in a freezer on an untreated Salivette.
Long-term storage of salivary cortisol samples at room temperature
NASA Technical Reports Server (NTRS)
Chen, Yu-Ming; Cintron, Nitza M.; Whitson, Peggy A.
1992-01-01
Collection of saliva samples for the measurement of cortisol during space flights provides a simple technique for studying changes in adrenal function due microgravity. In the present work, several methods for preserving saliva cortisol at room temperature were investigated using radioimmunoassays for determining cortisol in saliva samples collected on a saliva-collection device called Salivettes. It was found that a pretreatment of Salivettes with citric acid resulted in preserving more than 85 percent of the salivary cortisol for as long as six weeks. The results correlated well with those for a sample stored in a freezer on an untreated Salivette.
Thermal Response Modeling System for a Mars Sample Return Vehicle
NASA Technical Reports Server (NTRS)
Chen, Y.-K.; Milos, F. S.
2002-01-01
A multi-dimensional, coupled thermal response modeling system for analysis of hypersonic entry vehicles is presented. The system consists of a high fidelity Navier-Stokes equation solver (GIANTS), a two-dimensional implicit thermal response, pyrolysis and ablation program (TITAN), and a commercial finite element thermal and mechanical analysis code (MARC). The simulations performed by this integrated system include hypersonic flowfield, fluid and solid interaction, ablation, shape change, pyrolysis gas generation and flow, and thermal response of heatshield and structure. The thermal response of the heatshield is simulated using TITAN, and that of the underlying structural is simulated using MARC. The ablating heatshield is treated as an outer boundary condition of the structure, and continuity conditions of temperature and heat flux are imposed at the interface between TITAN and MARC. Aerothermal environments with fluid and solid interaction are predicted by coupling TITAN and GIANTS through surface energy balance equations. With this integrated system, the aerothermal environments for an entry vehicle and the thermal response of the entire vehicle can be obtained simultaneously. Representative computations for a flat-faced arc-jet test model and a proposed Mars sample return capsule are presented and discussed.
Thermal Response Modeling System for a Mars Sample Return Vehicle
NASA Technical Reports Server (NTRS)
Chen, Y.-K.; Miles, Frank S.; Arnold, Jim (Technical Monitor)
2001-01-01
A multi-dimensional, coupled thermal response modeling system for analysis of hypersonic entry vehicles is presented. The system consists of a high fidelity Navier-Stokes equation solver (GIANTS), a two-dimensional implicit thermal response, pyrolysis and ablation program (TITAN), and a commercial finite-element thermal and mechanical analysis code (MARC). The simulations performed by this integrated system include hypersonic flowfield, fluid and solid interaction, ablation, shape change, pyrolysis gas eneration and flow, and thermal response of heatshield and structure. The thermal response of the heatshield is simulated using TITAN, and that of the underlying structural is simulated using MARC. The ablating heatshield is treated as an outer boundary condition of the structure, and continuity conditions of temperature and heat flux are imposed at the interface between TITAN and MARC. Aerothermal environments with fluid and solid interaction are predicted by coupling TITAN and GIANTS through surface energy balance equations. With this integrated system, the aerothermal environments for an entry vehicle and the thermal response of the entire vehicle can be obtained simultaneously. Representative computations for a flat-faced arc-jet test model and a proposed Mars sample return capsule are presented and discussed.
Fast sweep-rate plastic Faraday force magnetometer with simultaneous sample temperature measurement.
Slobinsky, D; Borzi, R A; Mackenzie, A P; Grigera, S A
2012-12-01
We present a design for a magnetometer capable of operating at temperatures down to 50 mK and magnetic fields up to 15 T with integrated sample temperature measurement. Our design is based on the concept of a Faraday force magnetometer with a load-sensing variable capacitor. A plastic body allows for fast sweep rates and sample temperature measurement, and the possibility of regulating the initial capacitance simplifies the initial bridge balancing. Under moderate gradient fields of ~1 T/m our prototype performed with a resolution better than 1 × 10(-5) emu. The magnetometer can be operated either in a dc mode, or in an oscillatory mode which allows the determination of the magnetic susceptibility. We present measurements on Dy(2)Ti(2)O(7) and Sr(3)Ru(2)O(7) as an example of its performance.
Flow Through a Laboratory Sediment Sample by Computer Simulation Modeling
2006-09-07
Flow through a laboratory sediment sample by computer simulation modeling R.B. Pandeya’b*, Allen H. Reeda, Edward Braithwaitea, Ray Seyfarth0, J.F...through a laboratory sediment sample by computer simulation modeling 5a. CONTRACT NUMBER 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER 6. AUTHOR(S
Preliminary Proactive Sample Size Determination for Confirmatory Factor Analysis Models
ERIC Educational Resources Information Center
Koran, Jennifer
2016-01-01
Proactive preliminary minimum sample size determination can be useful for the early planning stages of a latent variable modeling study to set a realistic scope, long before the model and population are finalized. This study examined existing methods and proposed a new method for proactive preliminary minimum sample size determination.
NASA Astrophysics Data System (ADS)
Heckmann, Tobias; Gegg, Katharina; Becht, Michael
2013-04-01
Statistical approaches to landslide susceptibility modelling on the catchment and regional scale are used very frequently compared to heuristic and physically based approaches. In the present study, we deal with the problem of the optimal sample size for a logistic regression model. More specifically, a stepwise approach has been chosen in order to select those independent variables (from a number of derivatives of a digital elevation model and landcover data) that explain best the spatial distribution of debris flow initiation zones in two neighbouring central alpine catchments in Austria (used mutually for model calculation and validation). In order to minimise problems arising from spatial autocorrelation, we sample a single raster cell from each debris flow initiation zone within an inventory. In addition, as suggested by previous work using the "rare events logistic regression" approach, we take a sample of the remaining "non-event" raster cells. The recommendations given in the literature on the size of this sample appear to be motivated by practical considerations, e.g. the time and cost of acquiring data for non-event cases, which do not apply to the case of spatial data. In our study, we aim at finding empirically an "optimal" sample size in order to avoid two problems: First, a sample too large will violate the independent sample assumption as the independent variables are spatially autocorrelated; hence, a variogram analysis leads to a sample size threshold above which the average distance between sampled cells falls below the autocorrelation range of the independent variables. Second, if the sample is too small, repeated sampling will lead to very different results, i.e. the independent variables and hence the result of a single model calculation will be extremely dependent on the choice of non-event cells. Using a Monte-Carlo analysis with stepwise logistic regression, 1000 models are calculated for a wide range of sample sizes. For each sample size
Correcting for Microbial Blooms in Fecal Samples during Room-Temperature Shipping
Amir, Amnon; McDonald, Daniel; Navas-Molina, Jose A.; Debelius, Justine; Morton, James T.; Hyde, Embriette; Robbins-Pianka, Adam
2017-01-01
ABSTRACT The use of sterile swabs is a convenient and common way to collect microbiome samples, and many studies have shown that the effects of room-temperature storage are smaller than physiologically relevant differences between subjects. However, several bacterial taxa, notably members of the class Gammaproteobacteria, grow at room temperature, sometimes confusing microbiome results, particularly when stability is assumed. Although comparative benchmarking has shown that several preservation methods, including the use of 95% ethanol, fecal occult blood test (FOBT) and FTA cards, and Omnigene-GUT kits, reduce changes in taxon abundance during room-temperature storage, these techniques all have drawbacks and cannot be applied retrospectively to samples that have already been collected. Here we performed a meta-analysis using several different microbiome sample storage condition studies, showing consistent trends in which specific bacteria grew (i.e., “bloomed”) at room temperature, and introduce a procedure for removing the sequences that most distort analyses. In contrast to similarity-based clustering using operational taxonomic units (OTUs), we use a new technique called “Deblur” to identify the exact sequences corresponding to blooming taxa, greatly reducing false positives and also dramatically decreasing runtime. We show that applying this technique to samples collected for the American Gut Project (AGP), for which participants simply mail samples back without the use of ice packs or other preservatives, yields results consistent with published microbiome studies performed with frozen or otherwise preserved samples. IMPORTANCE In many microbiome studies, the necessity to store samples at room temperature (i.e., remote fieldwork) and the ability to ship samples without hazardous materials that require special handling training, such as ethanol (i.e., citizen science efforts), is paramount. However, although room-temperature storage for a few days has
Correcting for Microbial Blooms in Fecal Samples during Room-Temperature Shipping.
Amir, Amnon; McDonald, Daniel; Navas-Molina, Jose A; Debelius, Justine; Morton, James T; Hyde, Embriette; Robbins-Pianka, Adam; Knight, Rob
2017-01-01
The use of sterile swabs is a convenient and common way to collect microbiome samples, and many studies have shown that the effects of room-temperature storage are smaller than physiologically relevant differences between subjects. However, several bacterial taxa, notably members of the class Gammaproteobacteria, grow at room temperature, sometimes confusing microbiome results, particularly when stability is assumed. Although comparative benchmarking has shown that several preservation methods, including the use of 95% ethanol, fecal occult blood test (FOBT) and FTA cards, and Omnigene-GUT kits, reduce changes in taxon abundance during room-temperature storage, these techniques all have drawbacks and cannot be applied retrospectively to samples that have already been collected. Here we performed a meta-analysis using several different microbiome sample storage condition studies, showing consistent trends in which specific bacteria grew (i.e., "bloomed") at room temperature, and introduce a procedure for removing the sequences that most distort analyses. In contrast to similarity-based clustering using operational taxonomic units (OTUs), we use a new technique called "Deblur" to identify the exact sequences corresponding to blooming taxa, greatly reducing false positives and also dramatically decreasing runtime. We show that applying this technique to samples collected for the American Gut Project (AGP), for which participants simply mail samples back without the use of ice packs or other preservatives, yields results consistent with published microbiome studies performed with frozen or otherwise preserved samples. IMPORTANCE In many microbiome studies, the necessity to store samples at room temperature (i.e., remote fieldwork) and the ability to ship samples without hazardous materials that require special handling training, such as ethanol (i.e., citizen science efforts), is paramount. However, although room-temperature storage for a few days has been shown not to
Preferential sampling and Bayesian geostatistics: Statistical modeling and examples.
Cecconi, Lorenzo; Grisotto, Laura; Catelan, Dolores; Lagazio, Corrado; Berrocal, Veronica; Biggeri, Annibale
2016-08-01
Preferential sampling refers to any situation in which the spatial process and the sampling locations are not stochastically independent. In this paper, we present two examples of geostatistical analysis in which the usual assumption of stochastic independence between the point process and the measurement process is violated. To account for preferential sampling, we specify a flexible and general Bayesian geostatistical model that includes a shared spatial random component. We apply the proposed model to two different case studies that allow us to highlight three different modeling and inferential aspects of geostatistical modeling under preferential sampling: (1) continuous or finite spatial sampling frame; (2) underlying causal model and relevant covariates; and (3) inferential goals related to mean prediction surface or prediction uncertainty.
Xiao, Binping P.; Kelley, Michael J.; Reece, Charles E.; Phillips, H. L.
2012-12-01
Two calorimeters, with stainless steel and Cu as the thermal path material for high precision and high power versions, respectively, have been designed and commissioned for the surface impedance characterization (SIC) system at Jefferson Lab to provide low temperature control and measurement for CW power up to 22 W on a 5 cm dia. disk sample which is thermally isolated from the RF portion of the system. A power compensation method has been developed to measure the RF induced power on the sample. Simulation and experimental results show that with these two calorimeters, the whole thermal range of interest for superconducting radiofrequency (SRF) materials has been covered. The power measurement error in the interested power range is within 1.2% and 2.7% for the high precision and high power versions, respectively. Temperature distributions on the sample surface for both versions have been simulated and the accuracy of sample temperature measurements have been analysed. Both versions have the ability to accept bulk superconductors and thin film superconducting samples with a variety of substrate materials such as Al, Al{sub 2}O{sub 3}, Cu, MgO, Nb and Si.
Xiao, B P; Reece, C E; Phillips, H L; Kelley, M J
2012-12-01
Two calorimeters, with stainless steel and Cu as the thermal path material for high precision and high power versions, respectively, have been designed and commissioned for the 7.5 GHz surface impedance characterization system at Jefferson Lab to provide low temperature control and measurement for CW power up to 22 W on a 5 cm diameter disk sample which is thermally isolated from the radiofrequency (RF) portion of the system. A power compensation method has been developed to measure the RF induced power on the sample. Simulation and experimental results show that with these two calorimeters, the whole thermal range of interest for superconducting radiofrequency materials has been covered. The power measurement error in the interested power range is within 1.2% and 2.7% for the high precision and high power versions, respectively. Temperature distributions on the sample surface for both versions have been simulated and the accuracy of sample temperature measurements have been analyzed. Both versions have the ability to accept bulk superconductors and thin film superconducting samples with a variety of substrate materials such as Al, Al(2)O(3), Cu, MgO, Nb, and Si.
Makowska, Małgorzata G.; Theil Kuhn, Luise; Cleemann, Lars N.; Lauridsen, Erik M.; Bilheux, Hassina Z.; Molaison, Jamie J.; Santodonato, Louis J.; Tremsin, Anton S.; Grosse, Mirco; Morgano, Manuel; Kabra, Saurabh; Strobl, Markus
2015-12-15
High material penetration by neutrons allows for experiments using sophisticated sample environments providing complex conditions. Thus, neutron imaging holds potential for performing in situ nondestructive measurements on large samples or even full technological systems, which are not possible with any other technique. This paper presents a new sample environment for in situ high resolution neutron imaging experiments at temperatures from room temperature up to 1100 °C and/or using controllable flow of reactive atmospheres. The design also offers the possibility to directly combine imaging with diffraction measurements. Design, special features, and specification of the furnace are described. In addition, examples of experiments successfully performed at various neutron facilities with the furnace, as well as examples of possible applications are presented. This covers a broad field of research from fundamental to technological investigations of various types of materials and components.
An environmental sampling model for combining judgment and randomly placed samples
Sego, Landon H.; Anderson, Kevin K.; Matzke, Brett D.; Sieber, Karl; Shulman, Stanley; Bennett, James; Gillen, M.; Wilson, John E.; Pulsipher, Brent A.
2007-08-23
In the event of the release of a lethal agent (such as anthrax) inside a building, law enforcement and public health responders take samples to identify and characterize the contamination. Sample locations may be rapidly chosen based on available incident details and professional judgment. To achieve greater confidence of whether or not a room or zone was contaminated, or to certify that detectable contamination is not present after decontamination, we consider a Bayesian model for combining the information gained from both judgment and randomly placed samples. We investigate the sensitivity of the model to the parameter inputs and make recommendations for its practical use.
Integrated research in constitutive modelling at elevated temperatures, part 1
NASA Technical Reports Server (NTRS)
Haisler, W. E.; Allen, D. H.
1986-01-01
Topics covered include: numerical integration techniques; thermodynamics and internal state variables; experimental lab development; comparison of models at room temperature; comparison of models at elevated temperature; and integrated software development.
NASA Astrophysics Data System (ADS)
Rhim, Won-Kyu
1998-01-01
Investigation of deeply undercooled melts will open up the possibilities of studying basic phenomena of thermodynamics, nucleation and solidification processes. Such research work is particularly important to understand the processes of metastable solids which are formed from the non-equilibrium state of undercooled melt. There are a wide variety of metastable states, ranging from metastable crystalline phases (supersaturated and grain-refined alloys) to amorphous metals. A detailed understanding of the thermodynamics, the nucleation and crystal growth conditions can lead to comprehensive understanding of the criteria for the formation of such metastable states. However, at the present time, only limited information is available about the thermophysical parameters as a function of undercooling. The demands for accurate thermophysical property values have also been strong in the electronics industry which constantly demands high quality semiconductor materials for high density integrated circuit devices. In order to simulate the crystal growth for optimization of the growth process, the accurate thermophysical properties of molten semiconductors are essential input parameters. Thermophysical properties of high temperature molten materials are difficult to determine accurately because of the experimental problems associated with taking measurements at high temperatures in the presence of gravity. In the presence of gravity, convective flows are generated if there exist density gradients in a melt. In the high temperature materials processing, for reasons of maintaining purity of sample materials and attaining deeply undercooled states of melts, the sample has to be isolated from the container walls using some kind of levitators. However, the levitation of a high density melt against the gravity requires strong levitation forces which in turn induce undesirable flows in the melt. Such flows in melts would make measurements of certain thermophysical properties either
NASA Astrophysics Data System (ADS)
Ireland, M. J.; Scholz, M.; Wood, P. R.
2008-12-01
We describe the Cool Opacity-sampling Dynamic EXtended (CODEX) atmosphere models of Mira variable stars, and examine in detail the physical and numerical approximations that go in-to the model creation. The CODEX atmospheric models are obtained by computing the temperature and the chemical and radiative states of the atmospheric layers, assuming gas pressure and velocity profiles from Mira pulsation models, which extend from near the H-burning shell to the outer layers of the atmosphere. Although the code uses the approximation of Local Thermodynamic Equilibrium (LTE) and a grey approximation in the dynamical atmosphere code, many key observable quantities, such as infrared diameters and low-resolution spectra, are predicted robustly in spite of these approximations. We show that in visible light, radiation from Mira variables is dominated by fluorescence scattering processes, and that the LTE approximation likely underpredicts visible-band fluxes by a factor of 2.
Bokowa, A H
2012-01-01
Odours present in new Tedlar bags can impact the assessment of emissions from sewer collection systems and wastewater treatment plants. Conditioning protocols are needed to minimise the impact of background materials emissions on the sampling and assessment of odourous emissions. Olfactometry analysis has shown that background odour concentrations for new Tedlar bags can be as high as 130 OU(E)/m(3). Experimental studies were undertaken to investigate the impact of different conditioning temperatures in order to determine the optimum temperature for cleaning new Tedlar bags to a level when no detectable odours were present in the sampling bags via dilution olfactometry. For the purpose of this study, new Tedlar bags were cleaned in a temperature-controlled oven that had a constant filtered air flow-rate. From the analysis of odour and volatile organic compounds (VOCs) concentrations found in new Tedlar bags during the cleaning process, it was observed that odour and VOCs concentrations decreased with time. It was also found that the temperature setting plays a significant role in the cleaning of the Tedlar bags as large concentrations of phenols and acetamide, N,N-dimethyl were found in new Tedlar bags and their concentrations decreased following the temperature pre-conditioning.
A new two-temperature dissociation model for reacting flows
NASA Technical Reports Server (NTRS)
Olynick, David R.; Hassan, H. A.
1992-01-01
A new two-temperature dissociation model for flows undergoing compression is derived from kinetic theory. The model minimizes uncertainties associated with the two-temperature model of Park. The effects of the model on AOTV type flowfields are examined and compared with the Park model. Calculations are carried out for flows with and without ionization. When considering flows with ionization, a four temperature model is employed. For Fire II conditions, the assumption of equilibrium between the vibrational and electron-electronic temperatures is somewhat poor. A similar statement holds for the translational and rotational temperatures. These trends are consistent with results obtained using the direct simulation Monte Carlo method.
Salter, Tara La Roche; Bunch, Josephine; Gilmore, Ian S
2014-09-16
Many different types of samples have been analyzed in the literature using plasma-based ambient mass spectrometry sources; however, comprehensive studies of the important parameters for analysis are only just beginning. Here, we investigate the effect of the sample form and surface temperature on the signal intensities in plasma-assisted desorption ionization (PADI). The form of the sample is very important, with powders of all volatilities effectively analyzed. However, for the analysis of thin films at room temperature and using a low plasma power, a vapor pressure of greater than 10(-4) Pa is required to achieve a sufficiently good quality spectrum. Using thermal desorption, we are able to increase the signal intensity of less volatile materials with vapor pressures less than 10(-4) Pa, in thin film form, by between 4 and 7 orders of magnitude. This is achieved by increasing the temperature of the sample up to a maximum of 200 °C. Thermal desorption can also increase the signal intensity for the analysis of powders.
Temperature effect on femtosecond laser-induced breakdown spectroscopy of glass sample
NASA Astrophysics Data System (ADS)
Wang, Ying; Chen, Anmin; Jiang, Yuanfei; Sui, Laizhi; Wang, Xiaowei; Zhang, Dan; Tian, Dan; Li, Suyu; Jin, Mingxing
2017-01-01
In this study, we observed the evolution of the spectral emission intensity of a glass sample with the increase of sample temperature, laser energy, and delay time in femtosecond laser-induced breakdown spectroscopy (fs-LIBS). In the experiment, the sample was uniformly heated from 22 °C to 200 °C, the laser energy was changed from 0.3 mJ to 1.8 mJ, and the delay time was adjusted from 0.6 μs to 3.0 μs. The results indicated that increasing the sample temperature could enhance the emission intensity and reduce the limits of detection, which is attributed to the increase in the ablated mass and the plasma temperature. And the spectral intensity increases with the increase of the laser energy and the delay time, however, the spectral line intensity no longer increases when the laser pulse energy and delay time reach a certain value. This study will lead to a further improvement in the applications of fs-LIBS.
Sampling theory applied to measurement and analysis of temperature for climate studies
NASA Technical Reports Server (NTRS)
Edwards, Howard B.
1987-01-01
Of all the errors discussed in climatology literature, aliasing errors caused by undersampling of unsmoothed or improperly smoothed temperature data seem to be completely overlooked. This is a serious oversight in view of long-term trends of 1 K or less. Adequate sampling of properly smoothed data is demonstrated with a Hamming digital filter. It is also demonstrated that hourly temperatures, daily averages, and annual averages free of aliasing errors can be obtained by use of a microprocessor added to standard weather sensors and recorders.
Sampling theory applied to measurement and analysis of temperature for climate studies
NASA Technical Reports Server (NTRS)
Edwards, Howard B.
1987-01-01
Of all the errors discussed in climatology literature, aliasing errors caused by undersampling of unsmoothed or improperly smoothed temperature data seem to be completely overlooked. This is a serious oversight in view of long-term trends of 1 K or less. Adequate sampling of properly smoothed data is demonstrated with a Hamming digital filter. It is also demonstrated that hourly temperatures, daily averages, and annual averages free of aliasing errors can be obtained by use of a microprocessor added to standard weather sensors and recorders.
NASA Astrophysics Data System (ADS)
Qi, Shengqi; Hou, Deyi; Luo, Jian
2017-09-01
This study presents a numerical model based on field data to simulate groundwater flow in both the aquifer and the well-bore for the low-flow sampling method and the well-volume sampling method. The numerical model was calibrated to match well with field drawdown, and calculated flow regime in the well was used to predict the variation of dissolved oxygen (DO) concentration during the purging period. The model was then used to analyze sampling representativeness and sampling time. Site characteristics, such as aquifer hydraulic conductivity, and sampling choices, such as purging rate and screen length, were found to be significant determinants of sampling representativeness and required sampling time. Results demonstrated that: (1) DO was the most useful water quality indicator in ensuring groundwater sampling representativeness in comparison with turbidity, pH, specific conductance, oxidation reduction potential (ORP) and temperature; (2) it is not necessary to maintain a drawdown of less than 0.1 m when conducting low flow purging. However, a high purging rate in a low permeability aquifer may result in a dramatic decrease in sampling representativeness after an initial peak; (3) the presence of a short screen length may result in greater drawdown and a longer sampling time for low-flow purging. Overall, the present study suggests that this new numerical model is suitable for describing groundwater flow during the sampling process, and can be used to optimize sampling strategies under various hydrogeological conditions.
Effect of Flux Adjustments on Temperature Variability in Climate Models
Duffy, P.; Bell, J.; Covey, C.; Sloan, L.
1999-12-27
It has been suggested that ''flux adjustments'' in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore the conclusion that at least some of observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability.
Effect of flux adjustments on temperature variability in climate models
NASA Astrophysics Data System (ADS)
CMIP investigators; Duffy, P. B.; Bell, J.; Covey, C.; Sloan, L.
2000-03-01
It has been suggested that “flux adjustments” in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore the conclusion that at least some of observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability.
Temperature Models for the Mexican Subduction Zone
NASA Astrophysics Data System (ADS)
Manea, V. C.; Kostoglodov, V.; Currie, C.; Manea, M.; Wang, K.
2002-12-01
It is well known that the temperature is one of the major factors which controls the seismogenic zone. The Mexican subduction zone is characterized by a very shallow flat subducting interplate in its central part (Acapulco, Oaxaca), and deeper subduction slabs northern (Jalisco) and southern (Chiapas). It has been proposed that the seismogenic zone is controlled, among other factors, by a temperature. Therefore, we have developed four two-dimensional steady state thermal models for Jalisco, Guerrero, Oaxaca and Chiapas. The updip limit of the seismogenic zone is taken between 100 §C and 150 §C, while the downdip limit is thought to be at 350 §C because of the transition from stick-slip to stable-sliding. The shape of the subducting plate is inferred from gravity and seismicity. The convergence velocity between oceanic and continental lithospheric plates is taken as the following: 5 cm/yr for Jalisco profile, 5.5 for Guerrero profile, 5.8 for Oaxaca profile, and 7.8 for Chiapas profile. The age of the subducting plates, since they are young, and provides the primary control on the forearc thermal structure, are as the following: 11 My for Jalisco profile, 14.5 My for Guerrero profile, 15 My for Oaxaca profile, and 28 My for Chiapas profile. We also introduced in the models a small quantity of frictional heating (pore pressure ratio 0.98). The value of 0.98 for pore pressure ratio was obtained for the Guerrero profile, in order to fit the intersection between the 350 §C isotherm and the subducting plate at 200 Km from trench. The value of 200 km coupling zone from trench is inferred from GPS data for the steady interseismic period and also for the last slow aseismic slip that occurred in Guerrero in 2002. We have used this value of pore pressure ratio (0.98) for all the other profiles. For the others three profiles we obtained the following coupling extents: Jalisco - 100 km, Oaxaca - 170 km and Chiapas - 125 km (from the trench). Independent constrains of the
Modelling LARES temperature distribution and thermal drag
NASA Astrophysics Data System (ADS)
Nguyen, Phuc H.; Matzner, Richard
2015-10-01
The LARES satellite, a laser-ranged space experiment to contribute to geophysics observation, and to measure the general relativistic Lense-Thirring effect, has been observed to undergo an anomalous along-track orbital acceleration of -0.4 pm/s2 (pm : = picometer). This thermal "drag" is not surprising; along-track thermal drag has previously been observed with the related LAGEOS satellites (-3.4 pm/s2). It is hypothesized that the thermal drag is principally due to anisotropic thermal radiation from the satellite's exterior. We report the results of numerical computations of the along-track orbital decay of the LARES satellite during the first 126 days after launch. The results depend to a significant degree on the visual and IR absorbance α and emissivity ɛ of the fused silica Cube Corner Reflectors. We present results for two values of α IR = ɛ IR : 0.82, a standard number for "clean" fused silica; and 0.60, a possible value for silica with slight surface contamination subjected to the space environment. The heating and the resultant along-track acceleration depend on the plane of the orbit, the sun position, and, in particular, on the occurrence of eclipses, all of which are functions of time. Thus we compute the thermal drag for specific days. We compare our model to observational data, available for a 120 day period starting with the 7th day after launch, which shows the average acceleration of -0.4 pm/s2. With our model the average along-track thermal drag over this 120 day period for CCR α IR = ɛ IR = 0.82 was computed to be -0.59 pm/s2. For CCR α IR = ɛ IR = 0.60 we compute -0.36 pm/s2. LARES consists of a solid spherical tungsten sphere, into which the CCRs are set in colatitude circles. Our calculation models the satellite as 93 isothermal elements: the tungsten part, and each of the 92 Cube Corner Reflectors. The satellite is heated from two sources: sunlight and Earth's infrared (IR) radiation. We work in the fast-spin regime, where CCRs with
Sample weight and digestion temperature as critical factors in mercury determination in fish
Sadiq, M.; Zaidi, T.H.; Al-Mohana, H. )
1991-09-01
The concern about mercury (Hg) pollution of the marine environment started with the well publicized case of Minimata (Japan) where in the 1950s several persons died or became seriously ill after consuming fish or shellfish containing high levels of methylmercury. It is now accepted that Hg contaminated seafoods constitute a hazard to human health. To safeguard humans, accurate determination of Hg in marine biota is, therefore, very important. Two steps are involved in the determination of total Hg in biological materials: (a) decomposition of organic matrix (sample preparation), and (b) determination of Hg in aliquot samples. Although the procedures for determining Hg using the cold vapor technique are well established, sample preparation procedures have not been standardized. In general, samples of marine biota have been prepared by digesting different weights at different temperatures, by using mixtures of different chemicals and of varying quantities, and by digesting for variable durations. The objectives of the present paper were to evaluate the effects of sample weights and digestion temperatures on Hg determination in fish.
NASA Astrophysics Data System (ADS)
Saito, H.; Saito, T.; Hamamoto, S.; Komatsu, T.
2015-12-01
In our previous study, we have observed trace element concentrations in groundwater increased when groundwater temperature was increased with constant thermal loading using a 50-m long vertical heat exchanger installed at Saitama University, Japan. During the field experiment, 38 degree C fluid was circulated in the heat exchanger resulting 2.8 kW thermal loading over 295 days. Groundwater samples were collected regularly from 17-m and 40-m deep aquifers at four observation wells located 1, 2, 5, and 10 m, respectively, from the heat exchange well and were analyzed with ICP-MS. As a result, concentrations of some trace elements such as boron increased with temperature especially at the 17-m deep aquifer that is known as marine sediment. It has been also observed that the increased concentrations have decreased after the thermal loading was terminated indicating that this phenomenon may be reversible. Although the mechanism is not fully understood, changes in the liquid phase concentration should be associated with dissolution and/or desorption from the solid phase. We therefore attempt to model this phenomenon by introducing temperature dependence in equilibrium linear adsorption isotherms. We assumed that distribution coefficients decrease with temperature so that the liquid phase concentration of a given element becomes higher as the temperature increases under the condition that the total mass stays constant. A shape function was developed to model the temperature dependence of the distribution coefficient. By solving the mass balance equation between the liquid phase and the solid phase for a given element, a new term describing changes in the concentration was implemented in a source/sink term of a standard convection dispersion equation (CDE). The CDE was then solved under a constant ground water flow using FlexPDE. By calibrating parameters in the newly developed shape function, the changes in element concentrations observed were quite well predicted. The
Stream Water Temperature Model for Upper Mississippi River Basin
NASA Astrophysics Data System (ADS)
Mahat, V.; Yan, E.
2016-12-01
Relationship of air temperature with stream water temperature is nonlinear. Equilibrium temperature, which is obtained by setting up the sum of all heat fluxes through the water surface equal to zero, shows a linear relationship with stream water temperature, and can be used to project the stream water temperature at large time scale under different climate scenarios . But, for a small time scale, stream water temperature deviates largely from the equilibrium temperature, and linear relationship between equilibrium and stream water temperatures does not hold. This deviation of stream water temperature from equilibrium temperature (deviation) is related to upstream temperature and many other factors that are not accounted for, when equilibrium temperature is calculated. In this paper, we quantified the deviation using an empirical, multiple linear regression model, utilizing readily available physical parameters: air temperature, flow and the equilibrium temperature. The empirical model results showed a strong relationship, with correlation ranging from 0.95 to 1, between the deviation and these variables for 58 USGS gaging stations of Upper Mississippi River basin. This deviation when added to the equilibrium temperature provides stream water temperature. Comparisons of simulated daily stream water temperatures with recorded temperatures for 58 USGS gaging stations showed correlation values in the range of 0.98 to 1 and RMSE values in the range of 0.51 to 1.43. Reasonable results were also obtained when regression model parameters were transferred from one station to another located up to about 100 km far.
Vogel, Thomas; Perez, Danny
2015-08-28
We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. Furthermore, the method is particularly useful for the fast and reliable estimation of the microcanonical temperature T(U) or, equivalently, of the density of states g(U) over a wide range of energies.
Vogel, Thomas; Perez, Danny
2015-08-28
We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. Furthermore, themore » method is particularly useful for the fast and reliable estimation of the microcanonical temperature T(U) or, equivalently, of the density of states g(U) over a wide range of energies.« less
M. Krug; R. Shogan; A. Fero; M. Snyder
2004-11-01
Pressurized water reactor (PWR) cores, operate under extreme environmental conditions due to coolant chemistry, operating temperature, and neutron exposure. Extending the life of PWRs require detailed knowledge of the changes in mechanical and corrosion properties of the structural austenitic stainless steel components adjacent to the fuel. This report contains basic material characterization information of the as-installed samples of reactor internals material which were harvested from a decommissioned PWR.
Slice sampling technique in Bayesian extreme of gold price modelling
NASA Astrophysics Data System (ADS)
Rostami, Mohammad; Adam, Mohd Bakri; Ibrahim, Noor Akma; Yahya, Mohamed Hisham
2013-09-01
In this paper, a simulation study of Bayesian extreme values by using Markov Chain Monte Carlo via slice sampling algorithm is implemented. We compared the accuracy of slice sampling with other methods for a Gumbel model. This study revealed that slice sampling algorithm offers more accurate and closer estimates with less RMSE than other methods . Finally we successfully employed this procedure to estimate the parameters of Malaysia extreme gold price from 2000 to 2011.
NERVE AS MODEL TEMPERATURE END ORGAN
Bernhard, C. G.; Granit, Ragnar
1946-01-01
Rapid local cooling of mammalian nerve sets up a discharge which is preceded by a local temperature potential, the cooled region being electronegative relative to a normal portion of the nerve. Heating the nerve locally above its normal temperature similarly makes the heated region electronegative relative to a region at normal temperature, and again a discharge is set up from the heated region. These local temperature potentials, set up by the nerve itself, are held to serve as "generator potentials" and the mechanism found is regarded as the prototype for temperature end organs. PMID:19873460
NASA Astrophysics Data System (ADS)
Szwarc, Timothy; Hubbard, Scott
2014-09-01
The effects of atmosphere, ambient temperature, and geologic material were studied experimentally and using a computer model to predict the heating undergone by Mars rocks during rover sampling operations. Tests were performed on five well-characterized and/or Mars analog materials: Indiana limestone, Saddleback basalt, kaolinite, travertine, and water ice. Eighteen tests were conducted to 55 mm depth using a Mars Sample Return prototype coring drill, with each sample containing six thermal sensors. A thermal simulation was written to predict the complete thermal profile within each sample during coring and this model was shown to be capable of predicting temperature increases with an average error of about 7%. This model may be used to schedule power levels and periods of rest during actual sample acquisition processes to avoid damaging samples or freezing the bit into icy formations. Maximum rock temperature increase is found to be modeled by a power law incorporating rock and operational parameters. Energy transmission efficiency in coring is found to increase linearly with rock hardness and decrease by 31% at Mars pressure.
Performance of Random Effects Model Estimators under Complex Sampling Designs
ERIC Educational Resources Information Center
Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan
2011-01-01
In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…
Performance of Random Effects Model Estimators under Complex Sampling Designs
ERIC Educational Resources Information Center
Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan
2011-01-01
In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…
Errors of five-day mean surface wind and temperature conditions due to inadequate sampling
NASA Technical Reports Server (NTRS)
Legler, David M.
1991-01-01
Surface meteorological reports of wind components, wind speed, air temperature, and sea-surface temperature from buoys located in equatorial and midlatitude regions are used in a simulation of random sampling to determine errors of the calculated means due to inadequate sampling. Subsampling the data with several different sample sizes leads to estimates of the accuracy of the subsampled means. The number N of random observations needed to compute mean winds with chosen accuracies of 0.5 (N sub 0.5) and 1.0 (N sub 1,0) m/s and mean air and sea surface temperatures with chosen accuracies of 0.1 (N sub 0.1) and 0.2 (N sub 0.2) C were calculated for each 5-day and 30-day period in the buoy datasets. Mean values of N for the various accuracies and datasets are given. A second-order polynomial relation is established between N and the variability of the data record. This relationship demonstrates that for the same accuracy, N increases as the variability of the data record increases. The relationship is also independent of the data source. Volunteer-observing ship data do not satisfy the recommended minimum number of observations for obtaining 0.5 m/s and 0.2 C accuracy for most locations. The effect of having remotely sensed data is discussed.
Errors of five-day mean surface wind and temperature conditions due to inadequate sampling
NASA Technical Reports Server (NTRS)
Legler, David M.
1991-01-01
Surface meteorological reports of wind components, wind speed, air temperature, and sea-surface temperature from buoys located in equatorial and midlatitude regions are used in a simulation of random sampling to determine errors of the calculated means due to inadequate sampling. Subsampling the data with several different sample sizes leads to estimates of the accuracy of the subsampled means. The number N of random observations needed to compute mean winds with chosen accuracies of 0.5 (N sub 0.5) and 1.0 (N sub 1,0) m/s and mean air and sea surface temperatures with chosen accuracies of 0.1 (N sub 0.1) and 0.2 (N sub 0.2) C were calculated for each 5-day and 30-day period in the buoy datasets. Mean values of N for the various accuracies and datasets are given. A second-order polynomial relation is established between N and the variability of the data record. This relationship demonstrates that for the same accuracy, N increases as the variability of the data record increases. The relationship is also independent of the data source. Volunteer-observing ship data do not satisfy the recommended minimum number of observations for obtaining 0.5 m/s and 0.2 C accuracy for most locations. The effect of having remotely sensed data is discussed.
Sengupta, Mita E; Thapa, Sundar; Thamsborg, Stig M; Mejer, Helena
2016-02-15
Strongyle eggs of helminths of livestock usually hatch within a few hours or days after deposition with faeces. This poses a problem when faecal sampling is performed in the field. As oxygen is needed for embryonic development, it is recommended to reduce air supply during transport and refrigerate. The present study therefore investigated the combined effect of vacuum packing and temperature on survival of strongyle eggs and their subsequent ability to hatch and develop into L3. Fresh faecal samples were collected from calves infected with Cooperia oncophora, pigs infected with Oesophagostomum dentatum, and horses infected with Strongylus vulgaris and cyathostomins. The samples were allocated into four treatments: vacuum packing and storage at 5 °C or 20 °C (5 V and 20 V); normal packing in plastic gloves closed with a loose knot and storage at 5 °C or 20 °C (5 N and 20 N). The number of eggs per gram faeces (EPG) was estimated every fourth day until day 28 post set up (p.s.) by a concentration McMaster-method. Larval cultures were prepared on day 0, 12 and 28 p.s. and the larval yield determined. For C. oncophora, the EPG was significantly higher in vacuum packed samples after 28 days as compared to normal storage, regardless of temperature. However, O. dentatum EPG was significantly higher in samples kept at 5 °C as compared to 20 °C, irrespective of packing. For the horse strongyles, vacuum packed samples at 5 °C had a significantly higher EPG compared to the other treatments after 28 days. The highest larval yield of O. dentatum and horse strongyles were obtained from fresh faecal samples, however, if storage is necessary prior to setting up larval cultures O. dentatum should be kept at room temperature (aerobic or anaerobic). However, horse strongyle coprocultures should ideally be set up on the day of collection to ensure maximum yield. Eggs of C. oncophora should be kept vacuum packed at room temperature for the highest larval yield.
NASA Astrophysics Data System (ADS)
Bodzenta, Jerzy; Kaźmierczak-Bałata, Anna; Bukowski, Roman; Nowak, Marian; Solecka, Barbara
2017-06-01
Modeling of the probe beam deflection caused by temperature gradients for layered sample was realized in COMSOL Multiphysics, which utilizes finite element method to analyze heat transport. The sample consisted of a 100-nm-thick layer on a 500-\\upmu m-thick substrate. It was also assumed that the sample was illuminated with either a Gaussian or a flat top beam of harmonically modulated intensity. To obtain the probe beam deflection signal, the normal and tangential components of the temperature gradient in the air above the sample were integrated over the probe beam path. The numerical model of the experiment gave insight into the various parameter dependencies, e.g., the thermal and optical properties of the substrate and the layer, and the geometry of the experiment. These insights are used in the analysis of experimental data and in the planning of future measurements.
A Nonlinear Viscoelastic Model for Ceramics at High Temperatures
NASA Technical Reports Server (NTRS)
Powers, Lynn M.; Panoskaltsis, Vassilis P.; Gasparini, Dario A.; Choi, Sung R.
2002-01-01
High-temperature creep behavior of ceramics is characterized by nonlinear time-dependent responses, asymmetric behavior in tension and compression, and nucleation and coalescence of voids leading to creep rupture. Moreover, creep rupture experiments show considerable scatter or randomness in fatigue lives of nominally equal specimens. To capture the nonlinear, asymmetric time-dependent behavior, the standard linear viscoelastic solid model is modified. Nonlinearity and asymmetry are introduced in the volumetric components by using a nonlinear function similar to a hyperbolic sine function but modified to model asymmetry. The nonlinear viscoelastic model is implemented in an ABAQUS user material subroutine. To model the random formation and coalescence of voids, each element is assigned a failure strain sampled from a lognormal distribution. An element is deleted when its volumetric strain exceeds its failure strain. Element deletion has been implemented within ABAQUS. Temporal increases in strains produce a sequential loss of elements (a model for void nucleation and growth), which in turn leads to failure. Nonlinear viscoelastic model parameters are determined from uniaxial tensile and compressive creep experiments on silicon nitride. The model is then used to predict the deformation of four-point bending and ball-on-ring specimens. Simulation is used to predict statistical moments of creep rupture lives. Numerical simulation results compare well with results of experiments of four-point bending specimens. The analytical model is intended to be used to predict the creep rupture lives of ceramic parts in arbitrary stress conditions.
Montaser, A.
1990-01-01
In this project, new high temperature plasmas and new sample introduction systems are developed for rapid elemental and isotopic analysis of gases, solutions, and solids using atomic emission spectrometry (AES) and mass spectrometry (MS). These devices offer promise of solving singularly difficult analytical problems that either exist now or are likely to arise in the future in the various fields of energy generation, environmental pollution, biomedicine and nutrition. Emphasis is being placed on: generation of annular, helium inductively coupled plasmas (He ICPs) that are suitable for atomization, excitation, and ionization of elements possessing high excitation and ionization energies, with the intent of enhancing the detecting powers of a number of elements; diagnostic studies of high-temperature plasmas to quantify their fundamental properties, with the ultimate aim to improve analytical performance of atomic spectrometry; development and characterization of new sample introduction systems that consume microliter or microgram quantities of samples, and investigation of new membrane separators for striping solvent from sample aerosol to reduce various interferences and to enhance sensitivity in plasma spectrometry.
Compact low temperature scanning tunneling microscope with in-situ sample preparation capability
NASA Astrophysics Data System (ADS)
Kim, Jungdae; Nam, Hyoungdo; Qin, Shengyong; Kim, Sang-ui; Schroeder, Allan; Eom, Daejin; Shih, Chih-Kang
2015-09-01
We report on the design of a compact low temperature scanning tunneling microscope (STM) having in-situ sample preparation capability. The in-situ sample preparation chamber was designed to be compact allowing quick transfer of samples to the STM stage, which is ideal for preparing temperature sensitive samples such as ultra-thin metal films on semiconductor substrates. Conventional spring suspensions on the STM head often cause mechanical issues. To address this problem, we developed a simple vibration damper consisting of welded metal bellows and rubber pads. In addition, we developed a novel technique to ensure an ultra-high-vacuum (UHV) seal between the copper and stainless steel, which provides excellent reliability for cryostats operating in UHV. The performance of the STM was tested from 2 K to 77 K by using epitaxial thin Pb films on Si. Very high mechanical stability was achieved with clear atomic resolution even when using cryostats operating at 77 K. At 2 K, a clean superconducting gap was observed, and the spectrum was easily fit using the BCS density of states with negligible broadening.
Compact low temperature scanning tunneling microscope with in-situ sample preparation capability
Kim, Jungdae; Nam, Hyoungdo; Schroeder, Allan; Shih, Chih-Kang; Qin, Shengyong; Kim, Sang-ui; Eom, Daejin
2015-09-15
We report on the design of a compact low temperature scanning tunneling microscope (STM) having in-situ sample preparation capability. The in-situ sample preparation chamber was designed to be compact allowing quick transfer of samples to the STM stage, which is ideal for preparing temperature sensitive samples such as ultra-thin metal films on semiconductor substrates. Conventional spring suspensions on the STM head often cause mechanical issues. To address this problem, we developed a simple vibration damper consisting of welded metal bellows and rubber pads. In addition, we developed a novel technique to ensure an ultra-high-vacuum (UHV) seal between the copper and stainless steel, which provides excellent reliability for cryostats operating in UHV. The performance of the STM was tested from 2 K to 77 K by using epitaxial thin Pb films on Si. Very high mechanical stability was achieved with clear atomic resolution even when using cryostats operating at 77 K. At 2 K, a clean superconducting gap was observed, and the spectrum was easily fit using the BCS density of states with negligible broadening.
Effect of short-term room temperature storage on the microbial community in infant fecal samples
Guo, Yong; Li, Sheng-Hui; Kuang, Ya-Shu; He, Jian-Rong; Lu, Jin-Hua; Luo, Bei-Jun; Jiang, Feng-Ju; Liu, Yao-Zhong; Papasian, Christopher J.; Xia, Hui-Min; Deng, Hong-Wen; Qiu, Xiu
2016-01-01
Sample storage conditions are important for unbiased analysis of microbial communities in metagenomic studies. Specifically, for infant gut microbiota studies, stool specimens are often exposed to room temperature (RT) conditions prior to analysis. This could lead to variations in structural and quantitative assessment of bacterial communities. To estimate such effects of RT storage, we collected feces from 29 healthy infants (0–3 months) and partitioned each sample into 5 portions to be stored for different lengths of time at RT before freezing at −80 °C. Alpha diversity did not differ between samples with storage time from 0 to 2 hours. The UniFrac distances and microbial composition analysis showed significant differences by testing among individuals, but not by testing between different time points at RT. Changes in the relative abundance of some specific (less common, minor) taxa were still found during storage at room temperature. Our results support previous studies in children and adults, and provided useful information for accurate characterization of infant gut microbiomes. In particular, our study furnished a solid foundation and justification for using fecal samples exposed to RT for less than 2 hours for comparative analyses between various medical conditions. PMID:27226242
Compact low temperature scanning tunneling microscope with in-situ sample preparation capability.
Kim, Jungdae; Nam, Hyoungdo; Qin, Shengyong; Kim, Sang-ui; Schroeder, Allan; Eom, Daejin; Shih, Chih-Kang
2015-09-01
We report on the design of a compact low temperature scanning tunneling microscope (STM) having in-situ sample preparation capability. The in-situ sample preparation chamber was designed to be compact allowing quick transfer of samples to the STM stage, which is ideal for preparing temperature sensitive samples such as ultra-thin metal films on semiconductor substrates. Conventional spring suspensions on the STM head often cause mechanical issues. To address this problem, we developed a simple vibration damper consisting of welded metal bellows and rubber pads. In addition, we developed a novel technique to ensure an ultra-high-vacuum (UHV) seal between the copper and stainless steel, which provides excellent reliability for cryostats operating in UHV. The performance of the STM was tested from 2 K to 77 K by using epitaxial thin Pb films on Si. Very high mechanical stability was achieved with clear atomic resolution even when using cryostats operating at 77 K. At 2 K, a clean superconducting gap was observed, and the spectrum was easily fit using the BCS density of states with negligible broadening.
A temperature dependent SPICE macro-model for power MOSFETs
Pierce, D.G.
1991-01-01
The power MOSFET SPICE Macro-Model has been developed suitable for use over the temperature range {minus}55 to 125 {degrees}C. The model is comprised of a single parameter set with temperature dependence accessed through the SPICE .TEMP card. SPICE parameter extraction techniques for the model and model predictive accuracy are discussed. 7 refs., 8 figs., 1 tab.
Temperature calibration of lacustrine alkenones using in-situ sampling and growth cultures
NASA Astrophysics Data System (ADS)
Huang, Y.; Toney, J. L.; Andersen, R.; Fritz, S. C.; Baker, P. A.; Grimm, E. C.; Theroux, S.; Amaral Zettler, L.; Nyren, P. E.
2010-12-01
Sedimentary alkenones have been found in an increasing number of lakes around the globe. Studies using molecular biological tools, however, indicate that haptophyte species that produce lacustrine alkenones differ from the oceanic species. In order to convert alkenone unsaturation ratios measured in sediments into temperature, it is necessary to obtain an accurate calibration for individual lakes. Using Lake George, North Dakota, U.S. as an example, we have carried out temperature calibrations by both in-situ water column sampling and culture growth experiments. In-situ measured lake water temperatures in the lake show a strong correlation with the alkenone unsaturation indices (r-squared = 0.82), indicating a rapid equilibrium of alkenone distributions with the lake water temperature in the water column. We applied the in-situ calibration to down-core measurements for Lake George and generated realistic temperature estimates for the past 8 kyr. Algal isolation and culture growth, on the other hand, reveal the presence of two different types of alkenone producing haptophytes. The species making a predominant C37:4 alkenone (species A) produced much greater concentrations of alkenones per unit volume than the species that produced a predominant C37:3 alkenone (species B). It is the first time that a haptophyte species (species A) making a predominant C37:4 alkenone is cultured successfully and now replicated at four different growth temperatures. The distribution of alkeones in Lake George sediments matches extremely well with the alkenones produced by species A, indicating species A is likely the producer for the alkenones in the sediments. The alkenone unsaturation ratio of alkenones produced by species A haptophyte shows a primary dependence on growth temperature as expected, but the slope of change appears to vary depending on the growth stages. The implications of our findings for paleoclimate reconstructions using lacustrine alkenones will be discussed.
A Unimodal Model for Double Observer Distance Sampling Surveys
Becker, Earl F.; Christ, Aaron M.
2015-01-01
Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption which assumes there is one distance, the apex of the detection function, where the 2 observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrate how this method can be applied. PMID:26317984
Simulating canopy temperature for modelling heat stress in cereals
USDA-ARS?s Scientific Manuscript database
Crop models must be improved to account for the large effects of heat stress effects on crop yields. To date, most approaches in crop models use air temperature despite evidence that crop canopy temperature better explains yield reductions associated with high temperature events. This study presents...
Wang, Fuyixue; Dong, Zijing; Chen, Shuo; Chen, Bingyao; Yang, Jiafei; Wei, Xing; Wang, Shi; Ying, Kui
2016-10-01
This study aims to accelerate MR temperature imaging using the proton resonance frequency (PRF) shift method for real time temperature monitoring during thermal ablation. The proposed method estimates temperature changes from undersampled k-space with a fully sampled center. This proposed algorithm is based on the hybrid multi-baseline and referenceless treatment image model and can be seen as an extension of the conventional k-space-based hybrid thermometry. The parameters of hybrid model are acquired by utilizing information from low resolution images which are obtained from fully-sampled centers of k-space. Registration is used to correct temperature errors due to the displacement of the subject. Phantom heating simulations, motion simulations, phantom heating and in-vivo experiments were performed to investigate the efficiency of the proposed method. SPIRiT and the conventional k-space estimation reconstruction thermometry were implemented for comparison using the same sampling pattern. The phantom heating simulations showed that the proposed method results in lower RMSEs than the conventional k-space hybrid thermometry and SPIRiT at various reduction factors tested. The motion simulations indicated the robustness of the proposed method to displacement of the subject. Phantom heating experiment further demonstrated the ability of the method to reconstruct temperature maps with less computation time and higher accuracy (RMSEs lower than 0.4°C) at a net reduction factor of 3.5 in the presence of large noise caused by a microwave needle. In-vivo experiments validated the feasibility of the proposed method to estimate temperature changes from undersampled k-space (net reduction factor 4.3) in presence of respiratory motion and complicated anatomical structure, while reducing computation time as much as 10-fold compared with the conventional k-space method. The proposed method accelerates the PRF-shift MR thermometry and provides more accurate temperature maps in
LaCount, Robert B.
1993-01-01
A furnace with two hot zones holds multiple analysis tubes. Each tube has a separable sample-packing section positioned in the first hot zone and a catalyst-packing section positioned in the second hot zone. A mass flow controller is connected to an inlet of each sample tube, and gas is supplied to the mass flow controller. Oxygen is supplied through a mass flow controller to each tube to either or both of an inlet of the first tube and an intermediate portion between the tube sections to intermingle with and oxidize the entrained gases evolved from the sample. Oxidation of those gases is completed in the catalyst in each second tube section. A thermocouple within a sample reduces furnace temperature when an exothermic condition is sensed within the sample. Oxidized gases flow from outlets of the tubes to individual gas cells. The cells are sequentially aligned with an infrared detector, which senses the composition and quantities of the gas components. Each elongated cell is tapered inward toward the center from cell windows at the ends. Volume is reduced from a conventional cell, while permitting maximum interaction of gas with the light beam. Reduced volume and angulation of the cell inlets provide rapid purgings of the cell, providing shorter cycles between detections. For coal and other high molecular weight samples, from 50% to 100% oxygen is introduced to the tubes.
Temperature Chaos in Some Spherical Mixed p-Spin Models
NASA Astrophysics Data System (ADS)
Chen, Wei-Kuo; Panchenko, Dmitry
2017-03-01
We give two types of examples of the spherical mixed even- p-spin models for which chaos in temperature holds. These complement some known results for the spherical pure p-spin models and for models with Ising spins. For example, in contrast to a recent result of Subag who showed absence of chaos in temperature in the spherical pure p-spin models for p≥3, we show that even a smaller order perturbation induces temperature chaos.
Impact of spatial variability and sampling design on model performance
NASA Astrophysics Data System (ADS)
Schrape, Charlotte; Schneider, Anne-Kathrin; Schröder, Boris; van Schaik, Loes
2017-04-01
Many environmental physical and chemical parameters as well as species distributions display a spatial variability at different scales. In case measurements are very costly in labour time or money a choice has to be made between a high sampling resolution at small scales and a low spatial cover of the study area or a lower sampling resolution at the small scales resulting in local data uncertainties with a better spatial cover of the whole area. This dilemma is often faced in the design of field sampling campaigns for large scale studies. When the gathered field data are subsequently used for modelling purposes the choice of sampling design and resulting data quality influence the model performance criteria. We studied this influence with a virtual model study based on a large dataset of field information on spatial variation of earthworms at different scales. Therefore we built a virtual map of anecic earthworm distributions over the Weiherbach catchment (Baden-Württemberg in Germany). First of all the field scale abundance of earthworms was estimated using a catchment scale model based on 65 field measurements. Subsequently the high small scale variability was added using semi-variograms, based on five fields with a total of 430 measurements divided in a spatially nested sampling design over these fields, to estimate the nugget, range and standard deviation of measurements within the fields. With the produced maps, we performed virtual samplings of one up to 50 random points per field. We then used these data to rebuild the catchment scale models of anecic earthworm abundance with the same model parameters as in the work by Palm et al. (2013). The results of the models show clearly that a large part of the non-explained deviance of the models is due to the very high small scale variability in earthworm abundance: the models based on single virtual sampling points on average obtain an explained deviance of 0.20 and a correlation coefficient of 0.64. With
NASA Astrophysics Data System (ADS)
Potamias, Dimitrios; Alxneit, Ivo; Wokaun, Alexander
2017-09-01
The design, implementation, calibration, and assessment of double modulation pyrometry to measure surface temperatures of radiatively heated samples in our 1 kW imaging furnace is presented. The method requires that the intensity of the external radiation can be modulated. This was achieved by a rotating blade mounted parallel to the optical axis of the imaging furnace. Double modulation pyrometry independently measures the external radiation reflected by the sample as well as the sum of thermal and reflected radiation and extracts the thermal emission as the difference of these signals. Thus a two-step calibration is required: First, the relative gains of the measured signals are equalized and then a temperature calibration is performed. For the latter, we transfer the calibration from a calibrated solar blind pyrometer that operates at a different wavelength. We demonstrate that the worst case systematic error associated with this procedure is about 300 K but becomes negligible if a reasonable estimate of the sample's emissivity is used. An analysis of the influence of the uncertainties in the calibration coefficients reveals that one (out of the five) coefficient contributes almost 50% to the final temperature error. On a low emission sample like platinum, the lower detection limit is around 1700 K and the accuracy typically about 20 K. Note that these moderate specifications are specific for the use of double modulation pyrometry at the imaging furnace. It is mainly caused by the difficulty to achieve and maintain good overlap of the hot zone with a diameter of about 3 mm Full Width at Half Height and the measurement spot both of which are of similar size.
Long-term room temperature preservation of corpse soft tissue: an approach for tissue sample storage
2011-01-01
Background Disaster victim identification (DVI) represents one of the most difficult challenges in forensic sciences, and subsequent DNA typing is essential. Collected samples for DNA-based human identification are usually stored at low temperature to halt the degradation processes of human remains. We have developed a simple and reliable procedure for soft tissue storage and preservation for DNA extraction. It ensures high quality DNA suitable for PCR-based DNA typing after at least 1 year of room temperature storage. Methods Fragments of human psoas muscle were exposed to three different environmental conditions for diverse time periods at room temperature. Storage conditions included: (a) a preserving medium consisting of solid sodium chloride (salt), (b) no additional substances and (c) garden soil. DNA was extracted with proteinase K/SDS followed by organic solvent treatment and concentration by centrifugal filter devices. Quantification was carried out by real-time PCR using commercial kits. Short tandem repeat (STR) typing profiles were analysed with 'expert software'. Results DNA quantities recovered from samples stored in salt were similar up to the complete storage time and underscored the effectiveness of the preservation method. It was possible to reliably and accurately type different genetic systems including autosomal STRs and mitochondrial and Y-chromosome haplogroups. Autosomal STR typing quality was evaluated by expert software, denoting high quality profiles from DNA samples obtained from corpse tissue stored in salt for up to 365 days. Conclusions The procedure proposed herein is a cost efficient alternative for storage of human remains in challenging environmental areas, such as mass disaster locations, mass graves and exhumations. This technique should be considered as an additional method for sample storage when preservation of DNA integrity is required for PCR-based DNA typing. PMID:21846338
Potamias, Dimitrios; Alxneit, Ivo; Wokaun, Alexander
2017-09-01
The design, implementation, calibration, and assessment of double modulation pyrometry to measure surface temperatures of radiatively heated samples in our 1 kW imaging furnace is presented. The method requires that the intensity of the external radiation can be modulated. This was achieved by a rotating blade mounted parallel to the optical axis of the imaging furnace. Double modulation pyrometry independently measures the external radiation reflected by the sample as well as the sum of thermal and reflected radiation and extracts the thermal emission as the difference of these signals. Thus a two-step calibration is required: First, the relative gains of the measured signals are equalized and then a temperature calibration is performed. For the latter, we transfer the calibration from a calibrated solar blind pyrometer that operates at a different wavelength. We demonstrate that the worst case systematic error associated with this procedure is about 300 K but becomes negligible if a reasonable estimate of the sample's emissivity is used. An analysis of the influence of the uncertainties in the calibration coefficients reveals that one (out of the five) coefficient contributes almost 50% to the final temperature error. On a low emission sample like platinum, the lower detection limit is around 1700 K and the accuracy typically about 20 K. Note that these moderate specifications are specific for the use of double modulation pyrometry at the imaging furnace. It is mainly caused by the difficulty to achieve and maintain good overlap of the hot zone with a diameter of about 3 mm Full Width at Half Height and the measurement spot both of which are of similar size.
Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models
ERIC Educational Resources Information Center
Price, Larry R.
2012-01-01
The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…
Latent spatial models and sampling design for landscape genetics
Ephraim M. Hanks; Melvin B. Hooten; Steven T. Knick; Sara J. Oyler-McCance; Jennifer A. Fike; Todd B. Cross; Michael K. Schwartz
2016-01-01
We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial...
Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models
ERIC Educational Resources Information Center
Price, Larry R.
2012-01-01
The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…
Bayesian Estimation of the DINA Model with Gibbs Sampling
ERIC Educational Resources Information Center
Culpepper, Steven Andrew
2015-01-01
A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…
Bayesian Estimation of the DINA Model with Gibbs Sampling
ERIC Educational Resources Information Center
Culpepper, Steven Andrew
2015-01-01
A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…
Optimization of sampled imaging system with baseband response squeeze model
NASA Astrophysics Data System (ADS)
Yang, Huaidong; Chen, Kexin; Huang, Xingyue; He, Qingsheng; Jin, Guofan
2008-03-01
When evaluating or designing a sampled imager, a comprehensive analysis is necessary and a trade-off among optics, photoelectric detector and display technique is inevitable. A new method for sampled imaging system evaluation and optimization is developed in this paper. By extension of MTF in sampled imaging system, inseparable parameters of a detector are taken into account and relations among optics, detector and display are revealed. To measure the artifacts of sampling, the Baseband Response Squeeze model, which will impose a penalty for undersampling, is clarified. Taken the squeezed baseband response and its cutoff frequency for favorable criterion, the method is competent not only for evaluating but also for optimizing sampled imaging system oriented either to single task or to multi-task. The method is applied to optimize a typical sampled imaging system. a sensitivity analysis of various detector parameters is performed and the resulted guidelines are given.
Scaled tests and modeling of effluent stack sampling location mixing.
Recknagle, Kurtis P; Yokuda, Satoru T; Ballinger, Marcel Y; Barnett, J Matthew
2009-02-01
A three-dimensional computational fluid dynamics computer model was used to evaluate the mixing at a sampling system for radioactive air emissions. Researchers sought to determine whether the location would meet the criteria for uniform air velocity and contaminant concentration as prescribed in the American National Standards Institute standard, Sampling and Monitoring Releases of Airborne Radioactive Substances from the Stacks and Ducts of Nuclear Facilities. This standard requires that the sampling location be well-mixed and stipulates specific tests to verify the extent of mixing. The exhaust system for the Radiochemical Processing Laboratory was modeled with a computational fluid dynamics code to better understand the flow and contaminant mixing and to predict mixing test results. The modeled results were compared to actual measurements made at a scale-model stack and to the limited data set for the full-scale facility stack. Results indicated that the computational fluid dynamics code provides reasonable predictions for velocity, cyclonic flow, gas, and aerosol uniformity, although the code predicts greater improvement in mixing as the injection point is moved farther away from the sampling location than is actually observed by measurements. In expanding from small to full scale, the modeled predictions for full-scale measurements show similar uniformity values as in the scale model. This work indicated that a computational fluid dynamics code can be a cost-effective aid in designing or retrofitting a facility's stack sampling location that will be required to meet standard ANSI/HPS N13.1-1999.
Latent spatial models and sampling design for landscape genetics
Hanks, Ephraim M.; Hooten, Mevin B.; Knick, Steven T.; Oyler-McCance, Sara J.; Fike, Jennifer A.; Cross, Todd B.; Schwartz, Michael K.
2016-01-01
We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial random effect to allow for spatial correlation between genetic observations. We illustrate how modern dimension reduction approaches to spatial statistics can allow for efficient computation in landscape genetic statistical models covering large spatial domains. We apply our approach to propose a retrospective spatial sampling design for greater sage-grouse (Centrocercus urophasianus) population genetics in the western United States.
Water adsorption at high temperature on core samples from The Geysers geothermal field
Gruszkiewicz, M.S.; Horita, J.; Simonson, J.M.; Mesmer, R.E.
1998-06-01
The quantity of water retained by rock samples taken from three wells located in The Geysers geothermal field, California, was measured at 150, 200, and 250 C as a function of steam pressure in the range 0.00 {le} p/p{sub 0} {le} 0.98, where p{sub 0} is the saturated water vapor pressure. Both adsorption and desorption runs were made in order to investigate the extent of the hysteresis. Additionally, low temperature gas adsorption analyses were made on the same rock samples. Mercury intrusion porosimetry was also used to obtain similar information extending to very large pores (macropores). A qualitative correlation was found between the surface properties obtained from nitrogen adsorption and the mineralogical and petrological characteristics of the solids. However, there was no direct correlation between BET specific surface areas and the capacity of the rocks for water adsorption at high temperatures. The hysteresis decreased significantly at 250 C. The results indicate that multilayer adsorption, rather than capillary condensation, is the dominant water storage mechanism at high temperatures.
Water adsorption at high temperature on core samples from The Geysers geothermal field
Gruszkiewicz, M.S.; Horita, J.; Simonson, J.M.; Mesmer, R.E.
1998-06-01
The quantity of water retained by rock samples taken from three wells located in The Geysers geothermal reservoir, California, was measured at 150, 200, and 250 C as a function of pressure in the range 0.00 {le} p/p{sub 0} {le} 0.98, where p{sub 0} is the saturated water vapor pressure. Both adsorption (increasing pressure) and desorption (decreasing pressure) runs were made in order to investigate the nature and the extent of the hysteresis. Additionally, low temperature gas adsorption analyses were performed on the same rock samples. Nitrogen or krypton adsorption and desorption isotherms at 77 K were used to obtain BET specific surface areas, pore volumes and their distributions with respect to pore sizes. Mercury intrusion porosimetry was also used to obtain similar information extending to very large pores (macropores). A qualitative correlation was found between the surface properties obtained from nitrogen adsorption and the mineralogical and petrological characteristics of the solids. However, there is in general no proportionality between BET specific surface areas and the capacity of the rocks for water adsorption at high temperatures. The results indicate that multilayer adsorption rather than capillary condensation is the dominant water storage mechanism at high temperatures.
RNA modeling using Gibbs sampling and stochastic context free grammars
Grate, L.; Herbster, M.; Rughey, R.; Haussler, D.
1994-12-31
A new method of discovering the common secondary structure of a family of homologous RNA sequences using Gibbs sampling and stochastic context-free grammars is proposed. Given an unaligned set of sequences, a Gibbs sampling step simultaneously estimates the secondary structure of each sequence and a set of statistical parameters describing the common secondary structure of the set as a whole. These parameters describe a statistical model of the family. After the Gibbs sampling has produced a crude statistical model for the family, this model is translated into a stochastic context-free grammar, which is then refined by an Expectation Maximization (EM) procedure to produce a more complete model. A prototype implementation of the method is tested on tRNA, pieces of 16S rRNA and on U5 snRNA with good results.
Modeling air temperature changes in Northern Asia
NASA Astrophysics Data System (ADS)
Onuchin, A.; Korets, M.; Shvidenko, A.; Burenina, T.; Musokhranova, A.
2014-11-01
Based on time series (1950-2005) of monthly temperatures from 73 weather stations in Northern Asia (limited by 70-180° EL and 48-75° NL), it is shown that there are statistically significant spatial differences in character and intensity of the monthly and yearly temperature trends. These differences are defined by geomorphological and geographical parameters of the area including exposure of the territory to Arctic and Pacific air mass, geographic coordinates, elevation, and distances to Arctic and Pacific oceans. Study area has been divided into six domains with unique groupings of the temperature trends based on cluster analysis. An original methodology for mapping of temperature trends has been developed and applied to the region. The assessment of spatial patterns of temperature trends at the regional level requires consideration of specific regional features in the complex of factors operating in the atmosphere-hydrosphere-lithosphere-biosphere system.
Cusp Catastrophe Polynomial Model: Power and Sample Size Estimation
Chen, Ding-Geng(Din); Chen, Xinguang(Jim); Lin, Feng; Tang, Wan; Lio, Y. L.; Guo, (Tammy) Yuanyuan
2016-01-01
Guastello’s polynomial regression method for solving cusp catastrophe model has been widely applied to analyze nonlinear behavior outcomes. However, no statistical power analysis for this modeling approach has been reported probably due to the complex nature of the cusp catastrophe model. Since statistical power analysis is essential for research design, we propose a novel method in this paper to fill in the gap. The method is simulation-based and can be used to calculate statistical power and sample size when Guastello’s polynomial regression method is used to cusp catastrophe modeling analysis. With this novel approach, a power curve is produced first to depict the relationship between statistical power and samples size under different model specifications. This power curve is then used to determine sample size required for specified statistical power. We verify the method first through four scenarios generated through Monte Carlo simulations, and followed by an application of the method with real published data in modeling early sexual initiation among young adolescents. Findings of our study suggest that this simulation-based power analysis method can be used to estimate sample size and statistical power for Guastello’s polynomial regression method in cusp catastrophe modeling. PMID:27158562
Cusp Catastrophe Polynomial Model: Power and Sample Size Estimation.
Chen, Ding-Geng Din; Chen, Xinguang Jim; Lin, Feng; Tang, Wan; Lio, Y L; Guo, Tammy Yuanyuan
2014-12-01
Guastello's polynomial regression method for solving cusp catastrophe model has been widely applied to analyze nonlinear behavior outcomes. However, no statistical power analysis for this modeling approach has been reported probably due to the complex nature of the cusp catastrophe model. Since statistical power analysis is essential for research design, we propose a novel method in this paper to fill in the gap. The method is simulation-based and can be used to calculate statistical power and sample size when Guastello's polynomial regression method is used to cusp catastrophe modeling analysis. With this novel approach, a power curve is produced first to depict the relationship between statistical power and samples size under different model specifications. This power curve is then used to determine sample size required for specified statistical power. We verify the method first through four scenarios generated through Monte Carlo simulations, and followed by an application of the method with real published data in modeling early sexual initiation among young adolescents. Findings of our study suggest that this simulation-based power analysis method can be used to estimate sample size and statistical power for Guastello's polynomial regression method in cusp catastrophe modeling.
Temperature distributions in the laser-heated diamond anvil cell from 3-D numerical modeling
Rainey, E. S. G.; Kavner, A.; Hernlund, J. W.
2013-11-28
We present TempDAC, a 3-D numerical model for calculating the steady-state temperature distribution for continuous wave laser-heated experiments in the diamond anvil cell. TempDAC solves the steady heat conduction equation in three dimensions over the sample chamber, gasket, and diamond anvils and includes material-, temperature-, and direction-dependent thermal conductivity, while allowing for flexible sample geometries, laser beam intensity profile, and laser absorption properties. The model has been validated against an axisymmetric analytic solution for the temperature distribution within a laser-heated sample. Example calculations illustrate the importance of considering heat flow in three dimensions for the laser-heated diamond anvil cell. In particular, we show that a “flat top” input laser beam profile does not lead to a more uniform temperature distribution or flatter temperature gradients than a wide Gaussian laser beam.
NASA Technical Reports Server (NTRS)
Jovanovic, S.; Reed, G. W., Jr.
1979-01-01
The concentrations of Hg released at at the most 130 C increase with depth in near-surface samples from cores. This is in response to a daytime thermal gradient with temperatures of approximately 400 K at the surface decreasing to approximately 250 K at greater than 10 cm depth (Keihm and Langseth, 1973). The steepness of the slopes and the depths to which the concentration gradients extend appear to be determined by the color, density and possibly the grain size of the soils. Earlier surface layers can be identified and, in general, are in agreement with other indicators of such layers. Low temperature volatilized Br exhibits trends that parallel those of Hg in a number of cases. This is also true of Br and Hg fractions released in stepwise heating experiments at higher temperatures. The coherence, especially in higher temperature fractions, between these chemically dissimilar elements implies a common physical process of entrapment; possibly one related to the presence of vapor deposits on surfaces and to opening and closing of microcracks and pores.
Far-infrared Dust Temperatures and Column Densities of the MALT90 Molecular Clump Sample
NASA Astrophysics Data System (ADS)
Guzmán, Andrés E.; Sanhueza, Patricio; Contreras, Yanett; Smith, Howard A.; Jackson, James M.; Hoq, Sadia; Rathborne, Jill M.
2015-12-01
We present dust column densities and dust temperatures for ˜3000 young, high-mass molecular clumps from the Millimeter Astronomy Legacy Team 90 GHz survey, derived from adjusting single-temperature dust emission models to the far-infrared intensity maps measured between 160 and 870 μm from the Herschel/Herschel Infrared Galactic Plane Survey (Hi-Gal) and APEX/APEX Telescope Large Area Survey of the Galaxy (ATLASGAL) surveys. We discuss the methodology employed in analyzing the data, calculating physical parameters, and estimating their uncertainties. The population average dust temperature of the clumps are 16.8 ± 0.2 K for the clumps that do not exhibit mid-infrared signatures of star formation (quiescent clumps), 18.6 ± 0.2 K for the clumps that display mid-infrared signatures of ongoing star formation but have not yet developed an H ii region (protostellar clumps), and 23.7 ± 0.2 and 28.1 ± 0.3 K for clumps associated with H ii and photo-dissociation regions, respectively. These four groups exhibit large overlaps in their temperature distributions, with dispersions ranging between 4 and 6 K. The median of the peak column densities of the protostellar clump population is 0.20 ± 0.02 g cm-2, which is about 50% higher compared to the median of the peak column densities associated with clumps in the other evolutionary stages. We compare the dust temperatures and column densities measured toward the center of the clumps with the mean values of each clump. We find that in the quiescent clumps, the dust temperature increases toward the outer regions and that these clumps are associated with the shallowest column density profiles. In contrast, molecular clumps in the protostellar or H ii region phase have dust temperature gradients more consistent with internal heating and are associated with steeper column density profiles compared with the quiescent clumps.
Ambient temperature modelling with soft computing techniques
Bertini, Ilaria; Ceravolo, Francesco; Citterio, Marco; Di Pietra, Biagio; Margiotta, Francesca; Pizzuti, Stefano; Puglisi, Giovanni; De Felice, Matteo
2010-07-15
This paper proposes a hybrid approach based on soft computing techniques in order to estimate monthly and daily ambient temperature. Indeed, we combine the back-propagation (BP) algorithm and the simple Genetic Algorithm (GA) in order to effectively train artificial neural networks (ANN) in such a way that the BP algorithm initialises a few individuals of the GA's population. Experiments concerned monthly temperature estimation of unknown places and daily temperature estimation for thermal load computation. Results have shown remarkable improvements in accuracy compared to traditional methods. (author)
Modeling the Freezing of SN in High Temperature Furnaces
NASA Technical Reports Server (NTRS)
Brush, Lucien
1999-01-01
Presently, crystal growth furnaces are being designed that will be used to monitor the crystal melt interface shape and the solutal and thermal fields in its vicinity during the directional freezing of dilute binary alloys, To monitor the thermal field within the solidifying materials, thermocouple arrays (AMITA) are inserted into the sample. Intrusive thermocouple monitoring devices can affect the experimental data being measured. Therefore, one objective of this work is to minimize the effect of the thermocouples on the data generated. To aid in accomplishing this objective, two models of solidification have been developed. Model A is a fully transient, one dimensional model for the freezing of a dilute binary alloy that is used to compute temperature profiles for comparison with measurements taken from the thermocouples. Model B is a fully transient two dimensional model of the solidification of a pure metal. It will be used to uncover the manner in which thermocouple placement and orientation within the ampoule breaks the longitudinal axis of symmetry of the thermal field and the crystal-melt interface. Results and conclusions are based on the comparison of the models with experimental results taken during the freezing of pure Sn.
Modeling the Freezing of SN in High Temperature Furnaces
NASA Technical Reports Server (NTRS)
Brush, Lucien
1999-01-01
Presently, crystal growth furnaces are being designed that will be used to monitor the crystal melt interface shape and the solutal and thermal fields in its vicinity during the directional freezing of dilute binary alloys, To monitor the thermal field within the solidifying materials, thermocouple arrays (AMITA) are inserted into the sample. Intrusive thermocouple monitoring devices can affect the experimental data being measured. Therefore, one objective of this work is to minimize the effect of the thermocouples on the data generated. To aid in accomplishing this objective, two models of solidification have been developed. Model A is a fully transient, one dimensional model for the freezing of a dilute binary alloy that is used to compute temperature profiles for comparison with measurements taken from the thermocouples. Model B is a fully transient two dimensional model of the solidification of a pure metal. It will be used to uncover the manner in which thermocouple placement and orientation within the ampoule breaks the longitudinal axis of symmetry of the thermal field and the crystal-melt interface. Results and conclusions are based on the comparison of the models with experimental results taken during the freezing of pure Sn.
Temperature-dependent rate models of vascular cambium cell mortality
Matthew B. Dickinson; Edward A. Johnson
2004-01-01
We use two rate-process models to describe cell mortality at elevated temperatures as a means of understanding vascular cambium cell death during surface fires. In the models, cell death is caused by irreversible damage to cellular molecules that occurs at rates that increase exponentially with temperature. The models differ in whether cells show cumulative effects of...
Study of Low Temperature Baking Effect on Field Emission on Nb Samples Treated by BEP, EP, and BCP
Andy Wu, Song Jin, Robert Rimmer, Xiang Yang Lu, K. Zhao, Laura MacIntyre, Robert Ike
2010-05-01
Field emission is still one of the major obstacles facing Nb superconducting radio frequency (SRF) community for allowing Nb SRF cavities to reach routinely accelerating gradient of 35 MV/m that is required for the international linear collider. Nowadays, the well know low temperature backing at 120 oC for 48 hours is a common procedure used in the SRF community to improve the high field Q slope. However, some cavity production data have showed that the low temperature baking may induce field emission for cavities treated by EP. On the other hand, an earlier study of field emission on Nb flat samples treated by BCP showed an opposite conclusion. In this presentation, the preliminary measurements of Nb flat samples treated by BEP, EP, and BCP via our unique home-made scanning field emission microscope before and after the low temperature baking are reported. Some correlations between surface smoothness and the number of the observed field emitters were found. The observed experimental results can be understood, at least partially, by a simple model that involves the change of the thickness of the pent-oxide layer on Nb surfaces.
Generation of high-purity low-temperature samples of 39K for applications in metrology
NASA Astrophysics Data System (ADS)
Antoni-Micollier, L.; Barrett, B.; Chichet, L.; Condon, G.; Battelier, B.; Landragin, A.; Bouyer, P.
2017-08-01
We present an all-optical technique to prepare a sample of 39K in a magnetically insensitive state with 95% purity while maintaining a temperature of 6 μ K . This versatile preparation scheme is particularly well suited to performing matter-wave interferometry with species exhibiting closely separated hyperfine levels, such as the isotopes of lithium and potassium, and opens new possibilities for metrology with these atoms. We demonstrate the feasibility of such measurements by realizing an atomic gravimeter and a Ramsey-type spectrometer, both of which exhibit a state-of-the-art sensitivity for cold potassium.
An open-population hierarchical distance sampling model
Sollmann, Rachel; Beth Gardner,; Richard B Chandler,; Royle, J. Andrew; T Scott Sillett,
2015-01-01
Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for direct estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for island scrub-jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying number of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.
Saurina, Javier; Hlabangana, Leah; Garcia-Milla, Daniel; Hernandez-Cassou, Santiago
2004-05-01
This paper describes a flow-injection (FI) method for the simultaneous determination of aniline and cyclohexylamine impurities in cyclamate products. The method consists of the derivatization of amines with 1,2-naphthoquinone-4-sulfonate under selective and non-selective conditions. Here, the selectivity is achieved by working at 20 degree C, at which only aniline reacts, whilst higher temperatures (80 degree C) lead to a non-selective reaction of the two analytes. The FI manifold is composed of two flow cells for the spectrophotometric detection of derivatives at 480 nm. Experimental conditions have been optimized by factorial design and multicriteria making approach. Quantification is accomplished by differential analysis of the analyte contributions in the double peaks generated when the sample reaches cell 1 and cell 2. Results obtained with the proposed method are in satisfactory agreement with those provided by the standard method for the analysis of cyclamate samples.
Sample temperature profile during the excimer laser annealing of silicon nanoparticles
NASA Astrophysics Data System (ADS)
Caninenberg, M.; Verheyen, E.; Kiesler, D.; Stoib, B.; Brandt, M. S.; Benson, N.; Schmechel, R.
2015-11-01
Based on the heat diffusion equation we describe the temperature profile of a silicon nanoparticle thin film on silicon during excimer laser annealing using COMSOL Multiphysics. For this purpose system specific material parameters are determined such as the silicon nanoparticle melting point at 1683 K, the surface reflectivity at 248 nm of 20% and the nanoparticle thermal conductivity between 0.3 and 1.2 W/m K. To validate our model, the simulation results are compared to experimental data obtained by Raman spectroscopy, SEM microscopy and electrochemical capacitance-voltage measurements (ECV). The experimental data are in good agreement with our theoretical findings and support the validity of the model.
Tuomas, V.; Jaakko, L.
2013-07-01
This article discusses the optimization of the target motion sampling (TMS) temperature treatment method, previously implemented in the Monte Carlo reactor physics code Serpent 2. The TMS method was introduced in [1] and first practical results were presented at the PHYSOR 2012 conference [2]. The method is a stochastic method for taking the effect of thermal motion into account on-the-fly in a Monte Carlo neutron transport calculation. It is based on sampling the target velocities at collision sites and then utilizing the 0 K cross sections at target-at-rest frame for reaction sampling. The fact that the total cross section becomes a distributed quantity is handled using rejection sampling techniques. The original implementation of the TMS requires 2.0 times more CPU time in a PWR pin-cell case than a conventional Monte Carlo calculation relying on pre-broadened effective cross sections. In a HTGR case examined in this paper the overhead factor is as high as 3.6. By first changing from a multi-group to a continuous-energy implementation and then fine-tuning a parameter affecting the conservativity of the majorant cross section, it is possible to decrease the overhead factors to 1.4 and 2.3, respectively. Preliminary calculations are also made using a new and yet incomplete optimization method in which the temperature of the basis cross section is increased above 0 K. It seems that with the new approach it may be possible to decrease the factors even as low as 1.06 and 1.33, respectively, but its functionality has not yet been proven. Therefore, these performance measures should be considered preliminary. (authors)
Geostatistical modeling of riparian forest microclimate and its implications for sampling
Eskelson, B.N.I.; Anderson, P.D.; Hagar, J.C.; Temesgen, H.
2011-01-01
Predictive models of microclimate under various site conditions in forested headwater stream - riparian areas are poorly developed, and sampling designs for characterizing underlying riparian microclimate gradients are sparse. We used riparian microclimate data collected at eight headwater streams in the Oregon Coast Range to compare ordinary kriging (OK), universal kriging (UK), and kriging with external drift (KED) for point prediction of mean maximum air temperature (Tair). Several topographic and forest structure characteristics were considered as site-specific parameters. Height above stream and distance to stream were the most important covariates in the KED models, which outperformed OK and UK in terms of root mean square error. Sample patterns were optimized based on the kriging variance and the weighted means of shortest distance criterion using the simulated annealing algorithm. The optimized sample patterns outperformed systematic sample patterns in terms of mean kriging variance mainly for small sample sizes. These findings suggest methods for increasing efficiency of microclimate monitoring in riparian areas.
Hu, Yun; Zhang, Ying; Li, Boyan; Ozaki, Yukihiro
2007-01-01
The glass transition temperatures (Tg) of poly(ethylene terephthalate) (PET) thin films with different thicknesses are determined by analyzing their in situ reflection-absorption infrared (RAIR) spectra measured over a temperature range of 28 to 84 degrees C. The criterion of standard deviation of the covariance matrices is used as a graphical indicator for the determination of the Tg present in the sample-sample two-dimensional (2D) correlation spectra calculated from the temperature-dependent RAIR spectra. After two data pretreatments of the first derivative of the spectral absorbance versus temperature and the mean normalization over the wavenumbers are sequentially carried out on the RAIR spectra, an abrupt change of the first-derivative correlation spectra with respect to temperature is quickly obtained. It reflects the temperature at which the apparent intensity changes in pertinent absorption bands of PET thin films take place due to the dramatic segmental motion of PET chain conformation. The Tg of the thin PET films is accordingly determined. The results reveal that it decreases with a great dependence on the film thickness and that sample-sample 2D correlation spectroscopy enables one to determine the transition temperature of polymer thin films in an easy and valid way.
The X-ray luminosity-temperature relation of a complete sample of low-mass galaxy clusters
NASA Astrophysics Data System (ADS)
Zou, S.; Maughan, B. J.; Giles, P. A.; Vikhlinin, A.; Pacaud, F.; Burenin, R.; Hornstrup, A.
2016-11-01
We present Chandra observations of 23 galaxy groups and low-mass galaxy clusters at 0.03 < z < 0.15 with a median temperature of {˜ }2{keV}. The sample is a statistically complete flux-limited subset of the 400 deg2 survey. We investigated the scaling relation between X-ray luminosity (L) and temperature (T), taking selection biases fully into account. The logarithmic slope of the bolometric L-T relation was found to be 3.29 ± 0.33, consistent with values typically found for samples of more massive clusters. In combination with other recent studies of the L-T relation, we show that there is no evidence for the slope, normalization, or scatter of the L-T relation of galaxy groups being different than that of massive clusters. The exception to this is that in the special case of the most relaxed systems, the slope of the core-excised L-T relation appears to steepen from the self-similar value found for massive clusters to a steeper slope for the lower mass sample studied here. Thanks to our rigorous treatment of selection biases, these measurements provide a robust reference against which to compare predictions of models of the impact of feedback on the X-ray properties of galaxy groups.
Ebey, P S; Dole, J M; Nobile, A; Schoonover, J R; Burmann, J; Cook, B; Letts, S; Sanchez, J; Nikroo, A
2005-06-24
The purpose of the experiments described in this paper was to expose samples of polymeric materials to a mixture of deuterium-tritium (DT) gas at elevated temperature and pressure to investigate the effects (i.e. damage) on the materials. The materials and exposure parameters were chosen with to be relevant to proposed uses of similar materials in inertial fusion ignition experiments at the National Ignition Facility. Two types of samples were exposed and tested. The first type consisted of 10 4-lead ribbon cables of fine manganin wire insulated with polyimide. Wires of this type are proposed for use in thermal shimming of hohlraums and the goal of this experiment was to measure the change in electrical resistance of the insulation due to tritium exposure. The second type of sample consisted of 20 planar polymer samples that may be used as ignition capsule materials. The exposure was at 34.5 GPa (5010 psia) and 70 C for 48 hours. The change in electrical resistance of the wire insulation will be presented. The results for capsule materials will be presented in a separate paper in this issue.
Description of a sample holder for ion channeling near liquid-helium temperature
NASA Astrophysics Data System (ADS)
Daudin, B.; Dubus, M.; Viargues, F.
1990-01-01
Ion channeling is sensitive to very small shifts (10 -2 nm) of the atomic equilibrium positions. As a consequence, this technique appears to be suitable to study lattice dynamics, in particular when a displacive phase transition occurs. As many phase transitions of interest are observed at low temperature, we developed a three-axis goniometer in order to perform channeling experiments between 5 and 30 K. As no thermal screen could be placed between the sample and the ion beam, the quantity of heat radiated onto the sample holder was very large. The technical solutions which were chosen to overcome this difficulty and ensure both an efficient cooling and a good rotational mobility of the sample are described in detail. A liquid-helium flow of ˜ 6.5 1/h was found to be necessary to achieve a continuous refrigeration of the sample at 5 K. To conclude, proton channeling experiments in the blue bronze, K 0.3MoO 3, are presented as an illustration of the device possibilities.
On species sampling sequences induced by residual allocation models
Rodríguez, Abel; Quintana, Fernando A.
2014-01-01
We discuss fully Bayesian inference in a class of species sampling models that are induced by residual allocation (sometimes called stick-breaking) priors on almost surely discrete random measures. This class provides a generalization of the well-known Ewens sampling formula that allows for additional flexibility while retaining computational tractability. In particular, the procedure is used to derive the exchangeable predictive probability functions associated with the generalized Dirichlet process of Hjort (2000) and the probit stick-breaking prior of Chung and Dunson (2009) and Rodriguez and Dunson (2011). The procedure is illustrated with applications to genetics and nonparametric mixture modeling. PMID:25477705
On species sampling sequences induced by residual allocation models.
Rodríguez, Abel; Quintana, Fernando A
2015-02-01
We discuss fully Bayesian inference in a class of species sampling models that are induced by residual allocation (sometimes called stick-breaking) priors on almost surely discrete random measures. This class provides a generalization of the well-known Ewens sampling formula that allows for additional flexibility while retaining computational tractability. In particular, the procedure is used to derive the exchangeable predictive probability functions associated with the generalized Dirichlet process of Hjort (2000) and the probit stick-breaking prior of Chung and Dunson (2009) and Rodriguez and Dunson (2011). The procedure is illustrated with applications to genetics and nonparametric mixture modeling.
Abstract: Sample Size Planning for Latent Curve Models.
Lai, Keke
2011-11-30
When designing a study that uses structural equation modeling (SEM), an important task is to decide an appropriate sample size. Historically, this task is approached from the power analytic perspective, where the goal is to obtain sufficient power to reject a false null hypothesis. However, hypothesis testing only tells if a population effect is zero and fails to address the question about the population effect size. Moreover, significance tests in the SEM context often reject the null hypothesis too easily, and therefore the problem in practice is having too much power instead of not enough power. An alternative means to infer the population effect is forming confidence intervals (CIs). A CI is more informative than hypothesis testing because a CI provides a range of plausible values for the population effect size of interest. Given the close relationship between CI and sample size, the sample size for an SEM study can be planned with the goal to obtain sufficiently narrow CIs for the population model parameters of interest. Latent curve models (LCMs) is an application of SEM with mean structure to studying change over time. The sample size planning method for LCM from the CI perspective is based on maximum likelihood and expected information matrix. Given a sample, to form a CI for the model parameter of interest in LCM, it requires the sample covariance matrix S, sample mean vector [Formula: see text], and sample size N. Therefore, the width (w) of the resulting CI can be considered a function of S, [Formula: see text], and N. Inverting the CI formation process gives the sample size planning process. The inverted process requires a proxy for the population covariance matrix Σ, population mean vector μ, and the desired width ω as input, and it returns N as output. The specification of the input information for sample size planning needs to be performed based on a systematic literature review. In the context of covariance structure analysis, Lai and Kelley
THE TWO-LEVEL MODEL AT FINITE-TEMPERATURE
Goodman, A.L.
1980-07-01
The finite-temperature HFB cranking equations are solved for the two-level model. The pair gap, moment of inertia and internal energy are determined as functions of spin and temperature. Thermal excitations and rotations collaborate to destroy the pair correlations. Raising the temperature eliminates the backbending effect and improves the HFB approximation.
Horner, T W; Dunn, M L; Eggett, D L; Ogden, L V
2011-07-01
Many consumers are unable to enjoy the benefits of milk due to lactose intolerance. Lactose-free milk is available but at about 2 times the cost of regular milk or greater, it may be difficult for consumers to afford. The high cost of lactose-free milk is due in part to the added cost of the lactose hydrolysis process. Hydrolysis at refrigerated temperatures, possibly in the bulk tank or package, could increase the flexibility of the process and potentially reduce the cost. A rapid β-galactosidase assay was used to determine the relative activity of commercially available lactase samples at different temperatures. Four enzymes exhibited low-temperature activity and were added to refrigerated raw and pasteurized milk at various concentrations and allowed to react for various lengths of time. The degree of lactose hydrolysis by each of the enzymes as a function of time and enzyme concentration was determined by HPLC. The 2 most active enzymes, as determined by the β-galactosidase assay, hydrolyzed over 98% of the lactose in 24h at 2°C using the supplier's recommended dosage. The other 2 enzymes hydrolyzed over 95% of the lactose in 24h at twice the supplier's recommended dosage at 2°C. Results were consistent in all milk types tested. The results show that it is feasible to hydrolyze lactose during refrigerated storage of milk using currently available enzymes. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Modeling 3D faces from samplings via compressive sensing
NASA Astrophysics Data System (ADS)
Sun, Qi; Tang, Yanlong; Hu, Ping
2013-07-01
3D data is easier to acquire for family entertainment purpose today because of the mass-production, cheapness and portability of domestic RGBD sensors, e.g., Microsoft Kinect. However, the accuracy of facial modeling is affected by the roughness and instability of the raw input data from such sensors. To overcome this problem, we introduce compressive sensing (CS) method to build a novel 3D super-resolution scheme to reconstruct high-resolution facial models from rough samples captured by Kinect. Unlike the simple frame fusion super-resolution method, this approach aims to acquire compressed samples for storage before a high-resolution image is produced. In this scheme, depth frames are firstly captured and then each of them is measured into compressed samples using sparse coding. Next, the samples are fused to produce an optimal one and finally a high-resolution image is recovered from the fused sample. This framework is able to recover 3D facial model of a given user from compressed simples and this can reducing storage space as well as measurement cost in future devices e.g., single-pixel depth cameras. Hence, this work can potentially be applied into future applications, such as access control system using face recognition, and smart phones with depth cameras, which need high resolution and little measure time.
Sample size calculation for the proportional hazards cure model.
Wang, Songfeng; Zhang, Jiajia; Lu, Wenbin
2012-12-20
In clinical trials with time-to-event endpoints, it is not uncommon to see a significant proportion of patients being cured (or long-term survivors), such as trials for the non-Hodgkins lymphoma disease. The popularly used sample size formula derived under the proportional hazards (PH) model may not be proper to design a survival trial with a cure fraction, because the PH model assumption may be violated. To account for a cure fraction, the PH cure model is widely used in practice, where a PH model is used for survival times of uncured patients and a logistic distribution is used for the probability of patients being cured. In this paper, we develop a sample size formula on the basis of the PH cure model by investigating the asymptotic distributions of the standard weighted log-rank statistics under the null and local alternative hypotheses. The derived sample size formula under the PH cure model is more flexible because it can be used to test the differences in the short-term survival and/or cure fraction. Furthermore, we also investigate as numerical examples the impacts of accrual methods and durations of accrual and follow-up periods on sample size calculation. The results show that ignoring the cure rate in sample size calculation can lead to either underpowered or overpowered studies. We evaluate the performance of the proposed formula by simulation studies and provide an example to illustrate its application with the use of data from a melanoma trial. Copyright © 2012 John Wiley & Sons, Ltd.
Accelerated rare event sampling: Refinement and Ising model analysis
NASA Astrophysics Data System (ADS)
Yevick, David; Lee, Yong Hwan
In this paper, a recently introduced accelerated sampling technique [D. Yevick, Int. J. Mod. Phys. C 27, 1650041 (2016)] for constructing transition matrices is further developed and applied to a two-dimensional 32×32 Ising spin system. By permitting backward displacements up to a certain limit for each forward step while evolving the system to first higher and then lower energies within a restricted interval that is steadily displaced toward zero temperature as the computation proceeds, accuracy can be greatly enhanced. Simultaneously, the elements obtained from numerous independent calculations are collected in a single transition matrix. The relative accuracy of this novel method is established through a comparison to a transition matrix procedure based on the Metropolis algorithm in which the temperature is appropriately varied during the calculation and the results interpreted in terms of the distribution of realizations over both energy and magnetization.
NASA Astrophysics Data System (ADS)
Yu, Y.; Hewins, R. H.; Clayton, R. N.; Mayeda, T. K.
1993-07-01
Chondrules in carbonaceous and ordinary chondrites show slope-1 mixing lines on the oxygen three-isotope diagram, suggestive of a gas-melt exchange process during chondrule formation. In order to test this conjecture and to extend our existing knowledge of chondrule thermal history and the kinetics of reaction of interstellar dust with solar nebula gas, an experiment involving high- temperature oxygen isotope exchange between a 16O-rich sample (meteorite) and water vapor (terrestrial) has been designed. The experiment was conducted with a DELTECH vertical tube furnace with ceramic parts shielded with metal foil. The starting meteorite powder (one of two C3 carbonaceous chondrites--bulk Allende and Ornans) was pressed into a pellet and suspended at the hot spot inside the furnace. The furnace gas was a mixture of H2O vapor and H2 (1 atm total pressure, fO2 = IW-0.5) [1]. The preliminary experiments were performed at 1400 degrees C for durations from 5 minutes to 36 hours, and were terminated by quenching the samples into liquid nitrogen. The meteorite charges and the water samples collected were later analyzed for their oxygen isotope compositions. The experimental results (Fig.1) show that the exchange process has greatly modified delta-18O and delta-17O for both meteorites, which move towards the projected equilibrium point as the heating time increases. For Allende samples, the exchange proceeds quickly in the first 5 minutes, which accounts for most of the isotope exchange (~84% of total change in delta-18O(sub)A-W, and ~57% of total change in delta-17O). Then the exchange is dramatically slowed down, and takes at least 12 hours to finally reach equilibrium with the ambient water vapor. The approach to equilibrium is not a straight line on the three-isotope graph, possibly due to the presence of residual 16O-rich solids in the molten sample. A similar exchange profile is observed for Ornans samples. However, it takes longer for the Ornans sample to reach
Reconciling Physical and Seismic Reference Mantle Models: Geographical Sampling Biases
NASA Astrophysics Data System (ADS)
Lau, H. C.; Goes, S. D.; Davies, R.
2012-12-01
Earth's internal structure is, to a very good first approximation, spherically symmetric. Seismic imaging of lateral anomalies, which are the expression of mantle dynamics, relies on a well-constrained 1-D reference structure. Similarly, interpretation of 3-D seismic structure relies on understanding the physical nature of the 1-D reference. However, plausible physical models of average mantle structure, once converted to seismic velocity, fail to explain the observed 1-D reference velocities. In particular, relative to seismic reference models, the average physical structure for a thermally and chemically well-mixed (i.e., pyrolitic) mantle is: (i) consistently slower in the upper mantle; and (ii) has a higher velocity gradient in the lower mantle. Here, we investigate whether or not geographically biased sampling by the seismic waves used to construct seismic reference models, plays a role in this mismatch. This is done by calculating P-wave travel times through an Earth-like synthetic mantle structure and comparing sampled average synthetic travel times with model average travel times. Our synthetic structure is generated from a global spherical mantle circulation model, in which the geographic distribution of heterogeneity is constrained by 300 million years of plate motion history. Results indicate that geographical biasing is of the same magnitude as the mismatch between our physical model and a seismic reference model. We find that preferential propagation of rays along subduction zones in the upper mantle, together with more uniform sampling by rays in the lower mantle, would lead to a recovered reference model that shows: (i) increased upper mantle velocities; and (ii) a decreased velocity gradient in the lower mantle, when compared to the actual model reference. These tests with P travel times imply that a thermally and chemically well-mixed mantle may actually be consistent with seismic reference mantle models, but further tests with other wave types
Modelling of tandem cell temperature coefficients
Friedman, D.J.
1996-05-01
This paper discusses the temperature dependence of the basic solar-cell operating parameters for a GaInP/GaAs series-connected two-terminal tandem cell. The effects of series resistance and of different incident solar spectra are also discussed.
Modeling Background Attenuation by Sample Matrix in Gamma Spectrometric Analyses
Bastos, Rodrigo O.; Appoloni, Carlos R.
2008-08-07
In laboratory gamma spectrometric analyses, the procedures for estimating background usually overestimate it. If an empty container similar to that used to hold samples is measured, it does not consider the background attenuation by sample matrix. If a 'blank' sample is measured, the hypothesis that this sample will be free of radionuclides is generally not true. The activity of this 'blank' sample is frequently sufficient to mask or to overwhelm the effect of attenuation so that the background remains overestimated. In order to overcome this problem, a model was developed to obtain the attenuated background from the spectrum acquired with the empty container. Beyond reasonable hypotheses, the model presumes the knowledge of the linear attenuation coefficient of the samples and its dependence on photon energy and samples densities. An evaluation of the effects of this model on the Lowest Limit of Detection (LLD) is presented for geological samples placed in cylindrical containers that completely cover the top of an HPGe detector that has a 66% relative efficiency. The results are presented for energies in the range of 63 to 2614keV, for sample densities varying from 1.5 to 2.5 g{center_dot}cm{sup -3}, and for the height of the material on the detector of 2 cm and 5 cm. For a sample density of 2.0 g{center_dot}cm{sup -3} and with a 2cm height, the method allowed for a lowering of 3.4% of the LLD for the energy of 1460keV, from {sup 40}K, 3.9% for the energy of 911keV from {sup 228}Ac, 4.5% for the energy of 609keV from {sup 214}Bi, and8.3% for the energy of 92keV from {sup 234}Th. For a sample density of 1.75 g{center_dot}cm{sup -3} and a 5cm height, the method indicates a lowering of 6.5%, 7.4%, 8.3% and 12.9% of the LLD for the same respective energies.
Sampling Kinetic Protein Folding Pathways using All-Atom Models
NASA Astrophysics Data System (ADS)
Bolhuis, P. G.
This chapter summarizes several computational strategies to study the kinetics of two-state protein folding using all atom models. After explaining the background of two state folding using energy landscapes I introduce common protein models and computational tools to study folding thermodynamics and kinetics. Free energy landscapes are able to capture the thermodynamics of two-state protein folding, and several methods for efficient sampling of these landscapes are presented. An accurate estimate of folding kinetics, the main topic of this chapter, is more difficult to achieve. I argue that path sampling methods are well suited to overcome the problems connected to the sampling of folding kinetics. Some of the major issues are illustrated in the case study on the folding of the GB1 hairpin.
Mohammadhoseini, Elham; Safavi, Enayat; Seifi, Sepideh; Seifirad, Soroush; Firoozbakhsh, Shahram; Peiman, Soheil
2015-01-01
Background: Results of arterial blood gas analysis can be biased by pre-analytical factors, such as time interval before analysis, temperature during storage and syringe type. Objectives: To investigate the effects of samples storage temperature and time delay on blood gases, bicarbonate and PH results in human arterial blood samples. Patients and Methods: 2.5 mL arterial blood samples were drawn from 45 patients via an indwelling Intraarterial catheter. Each sample was divided into five equal samples and stored in multipurpose tuberculin plastic syringes. Blood gas analysis was performed on one of five samples as soon as possible. Four other samples were divided into two groups stored at 22°C and 0°C. Blood gas analyses were repeated at 30 and 60 minutes after sampling. Results: PaO2 of the samples stored at 0°C was increased significantly after 60 minutes (P = 0.007). The PaCO2 of the samples kept for 30 and 60 minutes at 22°C was significantly higher than primary result (P = 0.04, P < 0.001). In samples stored at 22°C, pH decreased significantly after 30 and 60 minutes (P = 0.017, P = 0.001). There were no significant differences in other results of samples stored at 0°C or 22°C after 30 or 60 minutes. Conclusions: In samples stored in plastic syringes, overestimation of PaO2 levels should be noted if samples cooled before analysis. In samples stored in plastic syringes, it is not necessary to store samples in iced water when analysis delayed up to one hour. PMID:26019892
Automated biowaste sampling system urine subsystem operating model, part 1
NASA Technical Reports Server (NTRS)
Fogal, G. L.; Mangialardi, J. K.; Rosen, F.
1973-01-01
The urine subsystem automatically provides for the collection, volume sensing, and sampling of urine from six subjects during space flight. Verification of the subsystem design was a primary objective of the current effort which was accomplished thru the detail design, fabrication, and verification testing of an operating model of the subsystem.
Modeling evaporative loss of oil mist collected by sampling filters.
Raynor, P C; Volckens, J; Leith, D
2000-01-01
Oil mists can cause respiratory distress and have been linked to skin and gastrointestinal cancers in workers. Standard concentration assessment methods call for sampling these mists with fibrous or membrane filters. Previous experimental studies using glass fiber (GF) filters and polyvinyl chloride and polytetrafluoroethylene membrane filters indicate that mist sampled onto filters may volatilize. A model has been developed to predict the evaporation of mist collected on a fibrous sampling filter. Evaporation of retained fluid from membrane filters can be modeled by treating the filter as though it is a fibrous filter. Predictions from the model exhibit good agreement with experimental results. At low mist concentrations, the model indicates that evaporation of retained mineral oil occurs readily. At high mist concentrations, significant evaporation from the filters is not expected because the vapor accompanying the airborne mist is already saturated with the compounds in the oil. The findings from this study indicate that sampling mineral oil mist with filters in accordance with standard methods can lead to estimates of worker exposure to oil mist that are too low.
A three stage sampling model for remote sensing applications
NASA Technical Reports Server (NTRS)
Eisgruber, L. M.
1972-01-01
A conceptual model and an empirical application of the relationship between the manner of selecting observations and its effect on the precision of estimates from remote sensing are reported. This three stage sampling scheme considers flightlines, segments within flightlines, and units within these segments. The error of estimate is dependent on the number of observations in each of the stages.
Sampling and modeling riparian forest structure and riparian microclimate
Bianca N.I. Eskelson; Paul D. Anderson; Hailemariam. Temesgen
2013-01-01
Riparian areas are extremely variable and dynamic, and represent some of the most complex terrestrial ecosystems in the world. The high variability within and among riparian areas poses challenges in developing efficient sampling and modeling approaches that accurately quantify riparian forest structure and riparian microclimate. Data from eight stream reaches that are...
Language Arts Curriculum Framework: Sample Curriculum Model, Grade 8.
ERIC Educational Resources Information Center
Arkansas State Dept. of Education, Little Rock.
Based on the 1998 Arkansas English Language Arts Curriculum Frameworks, this sample curriculum model for grade eight language arts is divided into sections focusing on writing; reading; and listening, speaking, and viewing. The writing section's stated goals are to help students employ a wide range of strategies as they write; use different…
Language Arts Curriculum Framework: Sample Curriculum Model, Grade 5.
ERIC Educational Resources Information Center
Arkansas State Dept. of Education, Little Rock.
Based on the 1998 Arkansas English Language Arts Curriculum Frameworks, this sample curriculum model for grade five language arts is divided into sections focusing on writing; reading; and listening, speaking, and viewing. The writing section's stated goals are to help students employ a wide range of strategies as they write; use different writing…
Language Arts Curriculum Framework: Sample Curriculum Model, Grade 7.
ERIC Educational Resources Information Center
Arkansas State Dept. of Education, Little Rock.
Based on the 1998 Arkansas English Language Arts Curriculum Frameworks, this sample curriculum model for grade seven language arts is divided into sections focusing on writing; reading; and listening, speaking, and viewing. The writing section's stated goals are to help students employ a wide range of strategies as they write; use different…
Language Arts Curriculum Framework: Sample Curriculum Model, Grade 6.
ERIC Educational Resources Information Center
Arkansas State Dept. of Education, Little Rock.
Based on the 1998 Arkansas English Language Arts Curriculum Frameworks, this sample curriculum model for grade six language arts is divided into sections focusing on writing; reading; and listening, speaking, and viewing. The writing section's stated goals are to help students employ a wide range of strategies as they write; use different writing…
Accelerated failure time model under general biased sampling scheme.
Kim, Jane Paik; Sit, Tony; Ying, Zhiliang
2016-07-01
Right-censored time-to-event data are sometimes observed from a (sub)cohort of patients whose survival times can be subject to outcome-dependent sampling schemes. In this paper, we propose a unified estimation method for semiparametric accelerated failure time models under general biased estimating schemes. The proposed estimator of the regression covariates is developed upon a bias-offsetting weighting scheme and is proved to be consistent and asymptotically normally distributed. Large sample properties for the estimator are also derived. Using rank-based monotone estimating functions for the regression parameters, we find that the estimating equations can be easily solved via convex optimization. The methods are confirmed through simulations and illustrated by application to real datasets on various sampling schemes including length-bias sampling, the case-cohort design and its variants. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
The redshift distribution of cosmological samples: a forward modeling approach
NASA Astrophysics Data System (ADS)
Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina
2017-08-01
Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using \\textsc{UFig} (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
Two-Temperature Model of Nonequilibrium Electron Relaxation:. a Review
NASA Astrophysics Data System (ADS)
Singh, Navinder
The present paper is a review of the phenomena related to nonequilibrium electron relaxation in bulk and nano-scale metallic samples. The workable Two-Temperature Model (TTM) based on Boltzmann-Bloch-Peierls kinetic equation has been applied to study the ultra-fast (femto-second) electronic relaxation in various metallic systems. The advent of new ultra-fast (femto-second) laser technology and pump-probe spectroscopy has produced wealth of new results for micro- and nano-scale electronic technology. The aim of this paper is to clarify the TTM, conditions of its validity and nonvalidity, its modifications for nano-systems, to sum-up the progress, and to point out open problems in this field. We also give a phenomenological integro-differential equation for the kinetics of nondegenerate electrons that goes beyond the TTM.
Simple model for temperature control of glycolytic oscillations
NASA Astrophysics Data System (ADS)
Postnikov, E. B.; Verveyko, D. V.; Verisokin, A. Yu.
2011-06-01
We introduce the temperature-dependent autocatalytic coefficient into the Merkin-Needham-Scott version of the Selkov system and consider the resulting equations as a model for temperature-controlled, self-sustained glycolytic oscillations in a closed reactor. It has been shown that this simple model reproduces key features observed in the experiments with temperature growth: (i) exponentially decreasing period of oscillations; (ii) reversal of relative duration leading and tail fronts. The applied model also reproduces the modulations of oscillations induced by the periodic temperature change.
Harries, Megan; Bukovsky-Reyes, Santiago; Bruno, Thomas J.
2016-01-01
This paper details the sampling methods used with the field portable porous layer open tubular cryoadsorption (PLOT-cryo) approach, described in Part I of this two-part series, applied to several analytes of interest. We conducted tests with coumarin and 2,4,6-trinitrotoluene (two solutes that were used in initial development of PLOT-cryo technology), naphthalene, aviation turbine kerosene, and diesel fuel, on a variety of matrices and test beds. We demonstrated that these analytes can be easily detected and reliably identified using the portable unit for analyte collection. By leveraging efficiency-boosting temperature control and the high flow rate multiple capillary wafer, very short collection times (as low as 3 s) yielded accurate detection. For diesel fuel spiked on glass beads, we determined a method detection limit below 1 ppm. We observed greater variability among separate samples analyzed with the portable unit than previously documented in work using the laboratory-based PLOT-cryo technology. We identify three likely sources that may help explain the additional variation: the use of a compressed air source to generate suction, matrix geometry, and variability in the local vapor concentration around the sampling probe as solute depletion occurs both locally around the probe and in the test bed as a whole. This field-portable adaptation of the PLOT-cryo approach has numerous and diverse potential applications. PMID:26726934
Harries, Megan; Bukovsky-Reyes, Santiago; Bruno, Thomas J
2016-01-15
This paper details the sampling methods used with the field portable porous layer open tubular cryoadsorption (PLOT-cryo) approach, described in Part I of this two-part series, applied to several analytes of interest. We conducted tests with coumarin and 2,4,6-trinitrotoluene (two solutes that were used in initial development of PLOT-cryo technology), naphthalene, aviation turbine kerosene, and diesel fuel, on a variety of matrices and test beds. We demonstrated that these analytes can be easily detected and reliably identified using the portable unit for analyte collection. By leveraging efficiency-boosting temperature control and the high flow rate multiple capillary wafer, very short collection times (as low as 3s) yielded accurate detection. For diesel fuel spiked on glass beads, we determined a method detection limit below 1 ppm. We observed greater variability among separate samples analyzed with the portable unit than previously documented in work using the laboratory-based PLOT-cryo technology. We identify three likely sources that may help explain the additional variation: the use of a compressed air source to generate suction, matrix geometry, and variability in the local vapor concentration around the sampling probe as solute depletion occurs both locally around the probe and in the test bed as a whole. This field-portable adaptation of the PLOT-cryo approach has numerous and diverse potential applications. Published by Elsevier B.V.
THOMPSON, J.F.
1999-06-02
The Standard Hydrogen Monitoring Systems have been experiencing high temperature/moisture problems with gas samples from the Aging Waste Tanks. These moist hot gas samples have stopped the operation of the SHMS units on tanks AZ-101, AZ-102, and AY-102. This study looks at alternatives for gas sample conditioners for the Aging Waste Facility.
FAR-INFRARED DUST TEMPERATURES AND COLUMN DENSITIES OF THE MALT90 MOLECULAR CLUMP SAMPLE
Guzmán, Andrés E.; Smith, Howard A.; Sanhueza, Patricio; Contreras, Yanett; Rathborne, Jill M.; Jackson, James M.; Hoq, Sadia
2015-12-20
We present dust column densities and dust temperatures for ∼3000 young, high-mass molecular clumps from the Millimeter Astronomy Legacy Team 90 GHz survey, derived from adjusting single-temperature dust emission models to the far-infrared intensity maps measured between 160 and 870 μm from the Herschel/Herschel Infrared Galactic Plane Survey (Hi-Gal) and APEX/APEX Telescope Large Area Survey of the Galaxy (ATLASGAL) surveys. We discuss the methodology employed in analyzing the data, calculating physical parameters, and estimating their uncertainties. The population average dust temperature of the clumps are 16.8 ± 0.2 K for the clumps that do not exhibit mid-infrared signatures of star formation (quiescent clumps), 18.6 ± 0.2 K for the clumps that display mid-infrared signatures of ongoing star formation but have not yet developed an H ii region (protostellar clumps), and 23.7 ± 0.2 and 28.1 ± 0.3 K for clumps associated with H ii and photo-dissociation regions, respectively. These four groups exhibit large overlaps in their temperature distributions, with dispersions ranging between 4 and 6 K. The median of the peak column densities of the protostellar clump population is 0.20 ± 0.02 g cm{sup −2}, which is about 50% higher compared to the median of the peak column densities associated with clumps in the other evolutionary stages. We compare the dust temperatures and column densities measured toward the center of the clumps with the mean values of each clump. We find that in the quiescent clumps, the dust temperature increases toward the outer regions and that these clumps are associated with the shallowest column density profiles. In contrast, molecular clumps in the protostellar or H ii region phase have dust temperature gradients more consistent with internal heating and are associated with steeper column density profiles compared with the quiescent clumps.
Sequential Sampling Models in Cognitive Neuroscience: Advantages, Applications, and Extensions
Forstmann, B.U.; Ratcliff, R.; Wagenmakers, E.-J.
2016-01-01
Sequential sampling models assume that people make speeded decisions by gradually accumulating noisy information until a threshold of evidence is reached. In cognitive science, one such model—the diffusion decision model—is now regularly used to decompose task performance into underlying processes such as the quality of information processing, response caution, and a priori bias. In the cognitive neurosciences, the diffusion decision model has recently been adopted as a quantitative tool to study the neural basis of decision making under time pressure. We present a selective overview of several recent applications and extensions of the diffusion decision model in the cognitive neurosciences. PMID:26393872
A physically based analytical spatial air temperature and humidity model
Yang Yang; Theodore A. Endreny; David J. Nowak
2013-01-01
Spatial variation of urban surface air temperature and humidity influences human thermal comfort, the settling rate of atmospheric pollutants, and plant physiology and growth. Given the lack of observations, we developed a Physically based Analytical Spatial Air Temperature and Humidity (PASATH) model. The PASATH model calculates spatial solar radiation and heat...
Green Granary Temperature Control System Modeling and Simulation
NASA Astrophysics Data System (ADS)
Shi, Qingsheng
As an important link of food production and distribution process, Granary's temperature control performance seriously affects the food quality and storage costs. Based on the analysis of granary components, granary temperature control model is established. The simulation results show the validity of established model.
Data augmentation for models based on rejection sampling
Rao, Vinayak; Lin, Lizhen; Dunson, David B.
2016-01-01
We present a data augmentation scheme to perform Markov chain Monte Carlo inference for models where data generation involves a rejection sampling algorithm. Our idea is a simple scheme to instantiate the rejected proposals preceding each data point. The resulting joint probability over observed and rejected variables can be much simpler than the marginal distribution over the observed variables, which often involves intractable integrals. We consider three problems: modelling flow-cytometry measurements subject to truncation; the Bayesian analysis of the matrix Langevin distribution on the Stiefel manifold; and Bayesian inference for a nonparametric Gaussian process density model. The latter two are instances of doubly-intractable Markov chain Monte Carlo problems, where evaluating the likelihood is intractable. Our experiments demonstrate superior performance over state-of-the-art sampling algorithms for such problems. PMID:27279660
High temperature furnace modeling and performance verifications
NASA Technical Reports Server (NTRS)
Smith, James E., Jr.
1988-01-01
Analytical, numerical and experimental studies were performed on two classes of high temperature materials processing furnaces. The research concentrates on a commercially available high temperature furnace using zirconia as the heating element and an arc furnace based on a ST International tube welder. The zirconia furnace was delivered and work is progressing on schedule. The work on the arc furnace was initially stalled due to the unavailability of the NASA prototype, which is actively being tested aboard the KC-135 experimental aircraft. A proposal was written and funded to purchase an additional arc welder to alleviate this problem. The ST International weld head and power supply were received and testing will begin in early November. The first 6 months of the grant are covered.
Temperature Calculations in the Coastal Modeling System
2017-04-01
sediment can be controlled by the density-driven flow and mixing. Temperature can alter the water physical environment that impacts marine organisms ...survey station locations. In application of the CMS to the Corrotoman River, a quadtree grid system was developed to discretize the computational...601-634-2840; fax: 601-634-3080) of the U.S. Army Engineer Research and Development Center (ERDC), Coastal and Hydraulics Laboratory (CHL). The CIRP
Tigers on trails: occupancy modeling for cluster sampling.
Hines, J E; Nichols, J D; Royle, J A; MacKenzie, D I; Gopalaswamy, A M; Kumar, N Samba; Karanth, K U
2010-07-01
Occupancy modeling focuses on inference about the distribution of organisms over space, using temporal or spatial replication to allow inference about the detection process. Inference based on spatial replication strictly requires that replicates be selected randomly and with replacement, but the importance of these design requirements is not well understood. This paper focuses on an increasingly popular sampling design based on spatial replicates that are not selected randomly and that are expected to exhibit Markovian dependence. We develop two new occupancy models for data collected under this sort of design, one based on an underlying Markov model for spatial dependence and the other based on a trap response model with Markovian detections. We then simulated data under the model for Markovian spatial dependence and fit the data to standard occupancy models and to the two new models. Bias of occupancy estimates was substantial for the standard models, smaller for the new trap response model, and negligible for the new spatial process model. We also fit these models to data from a large-scale tiger occupancy survey recently conducted in Karnataka State, southwestern India. In addition to providing evidence of a positive relationship between tiger occupancy and habitat, model selection statistics and estimates strongly supported the use of the model with Markovian spatial dependence. This new model provides another tool for the decomposition of the detection process, which is sometimes needed for proper estimation and which may also permit interesting biological inferences. In addition to designs employing spatial replication, we note the likely existence of temporal Markovian dependence in many designs using temporal replication. The models developed here will be useful either directly, or with minor extensions, for these designs as well. We believe that these new models represent important additions to the suite of modeling tools now available for occupancy
An Accurate Temperature Correction Model for Thermocouple Hygrometers 1
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
An accurate temperature correction model for thermocouple hygrometers.
Savage, M J; Cass, A; de Jager, J M
1982-02-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques.In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38 degrees C). The model based on calibration at two temperatures is superior to that based on only one calibration.The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25 degrees C, if the calibration slopes are corrected for temperature.
Modeling Climate Change Effects on Stream Temperatures in Regulated Rivers
NASA Astrophysics Data System (ADS)
Null, S. E.; Akhbari, M.; Ligare, S. T.; Rheinheimer, D. E.; Peek, R.; Yarnell, S. M.; Viers, J. H.
2013-12-01
We provide a method for examining mesoscale stream temperature objectives downstream of dams with anticipated climate change using an integrated multi-model approach. Changing hydroclimatic conditions will likely impact stream temperatures within reservoirs and below dams, and affect downstream ecology. We model hydrology and water temperature using a series of linked models that includes a hydrology model to predict natural unimpaired flows in upstream reaches, a reservoir temperature simulation model , an operations model to simulate reservoir releases, and a stream temperature simulation model to simulate downstream conditions . All models are 1-dimensional and operate on either a weekly or daily timestep. First, we model reservoir thermal dynamics and release operations of hypothetical reservoirs of different sizes, elevations, and latitudes with climate-forced inflow hydrologies to examine the potential to manage stream temperatures for coldwater habitat. Results are presented as stream temperature change from the historical time period and indicate that reservoir releases are cooler than upstream conditions, although the absolute temperatures of reaches below dams warm with climate change. We also apply our method to a case study in California's Yuba River watershed to evaluate water regulation and hydropower operation effects on stream temperatures with climate change. Catchments of the upper Yuba River are highly-engineered, with multiple, interconnected infrastructure to provide hydropower, water supply, flood control, environmental flows, and recreation. Results illustrate climate-driven versus operations-driven changes to stream temperatures. This work highlights the need for methods to consider reservoir regulation effects on stream temperatures with climate change, particularly for hydropower relicensing (which currently ignores climate change) such that impacts to other beneficial uses like coldwater habitat and instream ecosystems can be
NASA Technical Reports Server (NTRS)
Jackson, C. E., Jr.
1977-01-01
A sample problem library containing 20 problems covering most facets of Nastran Thermal Analyzer modeling is presented. Areas discussed include radiative interchange, arbitrary nonlinear loads, transient temperature and steady-state structural plots, temperature-dependent conductivities, simulated multi-layer insulation, and constraint techniques. The use of the major control options and important DMAP alters is demonstrated.
Temperature-variable high-frequency dynamic modeling of PIN diode
NASA Astrophysics Data System (ADS)
Shangbin, Ye; Jiajia, Zhang; Yicheng, Zhang; Yongtao, Yao
2016-04-01
The PIN diode model for high frequency dynamic transient characteristic simulation is important in conducted EMI analysis. The model should take junction temperature into consideration since equipment usually works at a wide range of temperature. In this paper, a temperature-variable high frequency dynamic model for the PIN diode is built, which is based on the Laplace-transform analytical model at constant temperature. The relationship between model parameters and temperature is expressed as temperature functions by analyzing the physical principle of these parameters. A fast recovery power diode MUR1560 is chosen as the test sample and its dynamic performance is tested under inductive load by a temperature chamber experiment, which is used for model parameter extraction and model verification. Results show that the model proposed in this paper is accurate for reverse recovery simulation with relatively small errors at the temperature range from 25 to 120 °C. Project supported by the National High Technology and Development Program of China (No. 2011AA11A265).
Decision Models for Determining the Optimal Life Test Sampling Plans
NASA Astrophysics Data System (ADS)
Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.
2010-11-01
Life test sampling plan is a technique, which consists of sampling, inspection, and decision making in determining the acceptance or rejection of a batch of products by experiments for examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution, or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential, and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan not only can help producers save testing time, and reduce testing cost; but it also can positively affect the image of the product, and thus attract more consumers to buy it. This paper develops the frequentist (non-Bayesian) decision models for determining the optimal life test sampling plans with an aim of cost minimization by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions with two unknown parameters are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.
NASA Astrophysics Data System (ADS)
Sala, Juan E.; Pisoni, Juan P.; Quintana, Flavio
2017-04-01
Temperature is a primary determinant of biogeographic patterns and ecosystem processes. Standard techniques to study the ocean temperature in situ are, however, particularly limited by their time and spatial coverage, problems which might be partially mitigated by using marine top predators as biological platforms for oceanographic sampling. We used small archival tags deployed on 33 Magellanic penguins (Spheniscus magellanicus), and obtained 21,070 geo-localized profiles of water temperature, during late spring of 2008, 2011, 2012 and 2013; in a region of the North Patagonian Sea with limited oceanographic records in situ. We compared our in situ data of sea surface temperature (SST) with those available from satellite remote sensing; to describe the three-dimensional temperature fields around the area of influence of two important tidal frontal systems; and to study the inter-annual variation in the three-dimensional temperature fields. There was a strong positive relationship between satellite- and animal-derived SST data although there was an overestimation by remote-sensing by a maximum difference of +2 °C. Little inter-annual variability in the 3-dimensional temperature fields was found, with the exception of 2012 (and to a lesser extent in 2013) where the SST was significantly higher. In 2013, we found weak stratification in a region which was unexpected. In addition, during the same year, a warm small-scale vortex is indicated by the animal-derived temperature data. This allowed us to describe and better understand the dynamics of the water masses, which, so far, have been mainly studied by remote sensors and numerical models. Our results highlight again the potential of using marine top predators as biological platforms to collect oceanographic data, which will enhance and accelerate studies on the Southwest Atlantic Ocean. In a changing world, threatened by climate change, it is urgent to fill information gaps on the coupled ocean-atmosphere system
Manipulation of Samples at Extreme Temperatures for Fast in-situ Synchrotron Measurements
Weber, Richard
2016-04-22
An aerodynamic sample levitation system with laser beam heating was integrated with the APS beamlines 6 ID-D, 11 ID-C and 20 BM-B. The new capability enables in-situ measurements of structure and XANES at extreme temperatures (300-3500 °C) and in conditions that completely avoid contact with container surfaces. In addition to maintaining a high degree of sample purity, the use of aerodynamic levitation enables deep supercooling and greatly enhanced glass formation from a wide variety of melts and liquids. Development and integration of controlled extreme sample environments and new measurement techniques is an important aspect of beamline operations and user support. Processing and solidifying liquids is a critical value-adding step in manufacturing semiconductors, optical materials, metals and in the operation of many energy conversion devices. Understanding structural evolution is of fundamental importance in condensed materials, geology, and biology. The new capability provides unique possibilities for materials research and helps to develop and maintain a competitive materials manufacturing and energy utilization industry. Test samples were used to demonstrate key features of the capability including experiments on hot crystalline materials, liquids at temperatures from about 500 to 3500 °C. The use of controlled atmospheres using redox gas mixtures enabled in-situ changes in the oxidation states of cations in melts. Significant innovations in this work were: (i) Use of redox gas mixtures to adjust the oxidation state of cations in-situ (ii) Operation with a fully enclosed system suitable for work with nuclear fuel materials (iii) Making high quality high energy in-situ x-ray diffraction measurements (iv) Making high quality in-situ XANES measurements (v) Publishing high impact results (vi) Developing independent funding for the research on nuclear materials This SBIR project work led to a commercial instrument product for the niche market of processing and
Stability of volatile sulfur compounds (VSCs) in sampling bags - impact of temperature.
Le, H; Sivret, E C; Parcsi, G; Stuetz, R M
2013-01-01
Volatile sulfur compounds (VSCs) are a major component of odorous emissions that can cause annoyance to local populations surrounding wastewater, waste management and agricultural practices. Odour collection and storage using sample bags can result in VSC losses due to sorption and leakage. Stability within 72 hour storage of VSC samples in three sampling bag materials (Tedlar, Mylar, Nalophan) was studied at three temperatures: 5, 20, and 30 °C. The VSC samples consisted of hydrogen sulfide (H2S), methanethiol (MeSH), ethanethiol (EtSH), dimethyl sulfide (DMS), tert-butanethiol (t-BuSH), ethylmethyl sulfide (EMS), 1-butanethiol (1-BuSH), dimethyl disulfide (DMDS), diethyl disulfide (DEDS), and dimethyl trisulfide (DMTS). The results for H2S showed that higher loss trend was clearly observed (46-50% at 24 hours) at 30 °C compared to the loss at 5 °C or 20 °C (of up to 27% at 24 hours) in all three bag materials. The same phenomenon was obtained for other thiols with the relative recoveries after a 24 hour period of 76-78% at 30 °C and 80-93% at 5 and 20 °C for MeSH; 77-80% at 30 °C and 79-95% at 5 and 20 °C for EtSH; 87-89% at 30 °C and 82-98% at 5 and 20 °C for t-BuSH; 61-73% at 30 °C and 76-98% at 5 and 20 °C for 1-BuSH. Results for other sulfides and disulfides (DMS, EMS, DMDS, DEDS) indicated stable relative recoveries with little dependency on temperature (83-103% after 24 hours). DMTS had clear loss trends (with relative recoveries of 74-87% in the three bag types after 24 hours) but showed minor differences in relative recoveries at 5, 20, and 30 °C.
Reiter, David A; Peacock, Andrew; Spencer, Richard G
2011-05-01
Multiexponential transverse relaxation in tissue has been interpreted as a marker of water compartmentation. Articular cartilage has been reported to exhibit such relaxation in several studies, with the relative contributions of tissue heterogeneity and tissue microstructure remaining unspecified. In bovine nasal cartilage, conflicting data regarding the existence of multiexponential relaxation have been reported. Imaging and analysis artifacts as well as rapid chemical exchange between tissue compartments have been identified as potential causes for this discrepancy. Here, we find that disruption of cartilage microstructure by freeze-thawing can greatly alter the character of transverse relaxation in this tissue. We conclude that fresh cartilage exhibits multiexponential relaxation based upon its microstructural water compartments, but that multiexponentiality can be lost or rendered undetectable by freeze-thawing. In addition, we find that increasing chemical exchange by raising sample temperature from 4°C to 37°C does not substantially limit the ability to detect multiexponential relaxation. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Miettinen, L.; Kekäläinen, P.; Merikoski, J.; Myllys, M.; Timonen, J.
2008-08-01
A method for determining the in-plane thermal diffusivity of planar samples was constructed. The time-dependent temperature field of the sample heated at one edge was measured with an infrared camera. The temperature fields were averaged for different times over a narrow strip around the center line of the sample, and the temperature profiles for varying time were fitted by a solution to a corresponding one-dimensional heat equation. Heat losses by convective and radiative heat transfer were both included in the model. Two fitting parameters, the thermal diffusivity and the effective heat-loss term, were obtained from time-dependent temperature data by optimization. The ratio of these two parameters was also extracted from the steady-state temperature profile. The method was found to give good and consistent results when tested on copper and aluminum samples.
HIGH TEMPERATURE HIGH PRESSURE THERMODYNAMIC MEASUREMENTS FOR COAL MODEL COMPOUNDS
Vinayak N. Kabadi
1999-02-20
It is well known that the fluid phase equilibria can be represented by a number of {gamma}-models , but unfortunately most of them do not function well under high temperature. In this calculation, we mainly investigate the performance of UNIQUAC and NRTL models under high temperature, using temperature dependent parameters rather than using the original formulas. the other feature of this calculation is that we try to relate the excess Gibbs energy G{sup E}and enthalpy of mixing H{sup E}simultaneously. In other words, we will use the high temperature and pressure G{sup E} and H{sup E}data to regress the temperature dependant parameters to find out which model and what kind of temperature dependant parameters should be used.
Temperature Dependent Constitutive Modeling for Magnesium Alloy Sheet
Lee, Jong K.; Lee, June K.; Kim, Hyung S.; Kim, Heon Y.
2010-06-15
Magnesium alloys have been increasingly used in automotive and electronic industries because of their excellent strength to weight ratio and EMI shielding properties. However, magnesium alloys have low formability at room temperature due to their unique mechanical behavior (twinning and untwining), prompting for forming at an elevated temperature. In this study, a temperature dependent constitutive model for magnesium alloy (AZ31B) sheet is developed. A hardening law based on non linear kinematic hardening model is used to consider Bauschinger effect properly. Material parameters are determined from a series of uni-axial cyclic experiments (T-C-T or C-T-C) with the temperature ranging 150-250 deg. C. The influence of temperature on the constitutive equation is introduced by the material parameters assumed to be functions of temperature. Fitting process of the assumed model to measured data is presented and the results are compared.
The Genealogy of Samples in Models with Selection
Neuhauser, C.; Krone, S. M.
1997-01-01
We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case. PMID:9071604
CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.
Shalizi, Cosma Rohilla; Rinaldo, Alessandro
2013-04-01
The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.
Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data
Liu, Zitao; Hauskrecht, Milos
2016-01-01
Building accurate predictive models of clinical multivariate time series is crucial for understanding of the patient condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive to reflect well patient-specific temporal behaviors and this also in the case when the available patient-specific data are sparse and short span. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on the prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both the population based and patient-specific time series prediction models in terms of prediction accuracy. PMID:27525189
The genealogy of samples in models with selection.
Neuhauser, C; Krone, S M
1997-02-01
We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models. DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.
Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data.
Liu, Zitao; Hauskrecht, Milos
2016-02-01
Building accurate predictive models of clinical multivariate time series is crucial for understanding of the patient condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive to reflect well patient-specific temporal behaviors and this also in the case when the available patient-specific data are sparse and short span. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on the prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both the population based and patient-specific time series prediction models in terms of prediction accuracy.
Simulation of soil temperature dynamics with models using different concepts.
Sándor, Renáta; Fodor, Nándor
2012-01-01
This paper presents two soil temperature models with empirical and mechanistic concepts. At the test site (calcaric arenosol), meteorological parameters as well as soil moisture content and temperature at 5 different depths were measured in an experiment with 8 parcels realizing the combinations of the fertilized, nonfertilized, irrigated, nonirrigated treatments in two replicates. Leaf area dynamics was also monitored. Soil temperature was calculated with the original and a modified version of CERES as well as with the HYDRUS-1D model. The simulated soil temperature values were compared to the observed ones. The vegetation reduced both the average soil temperature and its diurnal amplitude; therefore, considering the leaf area dynamics is important in modeling. The models underestimated the actual soil temperature and overestimated the temperature oscillation within the winter period. All models failed to account for the insulation effect of snow cover. The modified CERES provided explicitly more accurate soil temperature values than the original one. Though HYDRUS-1D provided more accurate soil temperature estimations, its superiority to CERES is not unequivocal as it requires more detailed inputs.
Sanz, C; Ansorena, D; Bello, J; Cid, C
2001-03-01
Equilibration time and temperature were the factors studied to choose the best conditions for analyzing volatiles in roasted ground Arabica coffee by a static headspace sampling extraction method. Three temperatures of equilibration were studied: 60, 80, and 90 degrees C. A larger quantity of volatile compounds was extracted at 90 degrees C than at 80 or 60 degrees C, although the same qualitative profile was found for each. The extraction of the volatile compounds was studied at seven different equilibration times: 30, 45, 60, 80, 100, 120, and 150 min. The best time of equilibration for headspace analysis of roasted ground Arabica coffee should be selected depending on the chemical class or compound studied. One hundred and twenty-two volatile compounds were identified, including 26 furans, 20 ketones, 20 pyrazines, 9 alcohols, 9 aldehydes, 8 esters, 6 pyrroles, 6 thiophenes, 4 sulfur compounds, 3 benzenic compounds, 2 phenolic compounds, 2 pyridines, 2 thiazoles, 1 oxazole, 1 lactone, 1 alkane, 1 alkene, and 1 acid.
FPL roof temperature and moisture model : description and verification
A. TenWolde
This paper describes a mathematical model developed by the Forest Products Laboratory to predict attic temperatures, relative humidities, and roof sheathing moisture content. Comparison of data from model simulation and measured data provided limited validation of the model and led to the following conclusions: (1) the model can...
Multi-Relaxation Temperature-Dependent Dielectric Model of the Arctic Soil at Positive Temperatures
NASA Astrophysics Data System (ADS)
Savin, I. V.; Mironov, V. L.
2014-11-01
Frequency spectra of the dielectric permittivity of the Arctic soil of Alaska are investigated with allowance for the dipole and ionic relaxation of molecules of the soil moisture at frequencies from 40 MHz to 16 GHz and temperatures from -5 to +25°С. A generalized temperature-dependent multi-relaxation refraction dielectric model of the humid Arctic soil is suggested.
do Nascimento, Cássio; dos Santos, Janine Navarro; Pedrazzi, Vinícius; Pita, Murillo Sucena; Monesi, Nadia; Ribeiro, Ricardo Faria; de Albuquerque, Rubens Ferreira
2014-01-01
Molecular diagnosis methods have been largely used in epidemiological or clinical studies to detect and quantify microbial species that may colonize the oral cavity in healthy or disease. The preservation of genetic material from samples remains the major challenge to ensure the feasibility of these methodologies. Long-term storage may compromise the final result. The aim of this study was to evaluate the effect of temperature and time storage on the microbial detection of oral samples by Checkerboard DNA-DNA hybridization. Saliva and supragingival biofilm were taken from 10 healthy subjects, aliquoted (n=364) and processed according to proposed protocols: immediate processing and processed after 2 or 4 weeks, and 6 or 12 months of storage at 4°C, -20°C and -80°C. Either total or individual microbial counts were recorded in lower values for samples processed after 12 months of storage, irrespective of temperatures tested. Samples stored up to 6 months at cold temperatures showed similar counts to those immediately processed. The microbial incidence was also significantly reduced in samples stored during 12 months in all temperatures. Temperature and time of oral samples storage have relevant impact in the detection and quantification of bacterial and fungal species by Checkerboard DNA-DNA hybridization method. Samples should be processed immediately after collection or up to 6 months if conserved at cold temperatures to avoid false-negative results. Copyright © 2013 Elsevier Ltd. All rights reserved.
Numerical modeling of temperature distributions within the neonatal head.
Van Leeuwen, G M; Hand, J W; Lagendijk, J J; Azzopardi, D V; Edwards, A D
2000-09-01
Introduction of hypothermia therapy as a neuroprotection therapy after hypoxia-ischemia in newborn infants requires appraisal of cooling methods. In this numerical study thermal simulations were performed to test the hypothesis that cooling of the surface of the cranium by the application of a cooling bonnet significantly reduces deep brain temperature and produces a temperature differential between the deep brain and the body core. A realistic three-dimensional (3-D) computer model of infant head anatomy was used, derived from magnetic resonance data from a newborn infant. Temperature distributions were calculated using the Pennes heatsink model. The cooling bonnet was at a constant temperature of 10 degrees C. When modeling head cooling only, a constant body core temperature of 37 degrees C was imposed. The computed result showed no significant cooling of the deep brain regions, only the very superficial regions of the brain are cooled to temperatures of 33-34 degrees C. Poor efficacy of head cooling was still found after a considerable increase in the modeled thermal conductivities of the skin and skull, or after a decrease in perfusion. The results for the heatsink thermal model of the infant head were confirmed by comparison of results computed for a scaled down adult head, using both the heatsink description and a discrete vessel thermal model with both anatomy and vasculature obtained from MR data. The results indicate that significant reduction in brain temperature will only be achieved if the infant's core temperature is lowered.
Montaser, A.
1992-09-01
New high temperature plasmas and new sample introduction systems are explored for rapid elemental and isotopic analysis of gases, solutions, and solids using mass spectrometry and atomic emission spectrometry. Emphasis was placed on atmospheric pressure He inductively coupled plasmas (ICP) suitable for atomization, excitation, and ionization of elements; simulation and computer modeling of plasma sources with potential for use in spectrochemical analysis; spectroscopic imaging and diagnostic studies of high temperature plasmas, particularly He ICP discharges; and development of new, low-cost sample introduction systems, and examination of techniques for probing the aerosols over a wide range. Refs., 14 figs. (DLC)
Mathematical modeling of high and low temperature heat pipes
NASA Technical Reports Server (NTRS)
Chi, S. W.
1971-01-01
Following a review of heat and mass transfer theory relevant to heat pipe performance, math models are developed for calculating heat-transfer limitations of high-temperature heat pipes and heat-transfer limitations and temperature gradient of low temperature heat pipes. Calculated results are compared with the available experimental data from various sources to increase confidence in the present math models. Complete listings of two computer programs for high- and low-temperature heat pipes respectively are included. These programs enable the performance to be predicted of heat pipes with wrapped-screen, rectangular-groove, or screen-covered rectangular-groove wick.
Mathematical modeling of high and low temperature heat pipes
NASA Technical Reports Server (NTRS)
Chi, S. W.
1971-01-01
Mathematical models are developed for calculating heat-transfer limitations of high-temperature heat pipes and heat-transfer limitations and temperature gradient of low temperature heat pipes. Calculated results are compared with the available experimental data from various sources to increase confidence in the present math models. Complete listings of two computer programs for high- and low-temperature heat pipes respectively are appended. These programs enable the performance of heat pipes with wrapped-screen, rectangular-groove or screen-covered rectangular-groove wick to be predicted.
Imputation for semiparametric transformation models with biased-sampling data
Liu, Hao; Qin, Jing; Shen, Yu
2012-01-01
Widely recognized in many fields including economics, engineering, epidemiology, health sciences, technology and wildlife management, length-biased sampling generates biased and right-censored data but often provide the best information available for statistical inference. Different from traditional right-censored data, length-biased data have unique aspects resulting from their sampling procedures. We exploit these unique aspects and propose a general imputation-based estimation method for analyzing length-biased data under a class of flexible semiparametric transformation models. We present new computational algorithms that can jointly estimate the regression coefficients and the baseline function semiparametrically. The imputation-based method under the transformation model provides an unbiased estimator regardless whether the censoring is independent or not on the covariates. We establish large-sample properties using the empirical processes method. Simulation studies show that under small to moderate sample sizes, the proposed procedure has smaller mean square errors than two existing estimation procedures. Finally, we demonstrate the estimation procedure by a real data example. PMID:22903245
West Flank Coso, CA FORGE 3D temperature model
Doug Blankenship
2016-03-01
x,y,z data of the 3D temperature model for the West Flank Coso FORGE site. Model grid spacing is 250m. The temperature model for the Coso geothermal field used over 100 geothermal production sized wells and intermediate-depth temperature holes. At the near surface of this model, two boundary temperatures were assumed: (1) areas with surface manifestations, including fumaroles along the northeast striking normal faults and northwest striking dextral faults with the hydrothermal field, a temperature of ~104ËšC was applied to datum at +1066 meters above sea level elevation, and (2) a near-surface temperature at about 10 meters depth, of 20ËšC was applied below the diurnal and annual conductive temperature perturbations. These assumptions were based on heat flow studies conducted at the CVF and for the Mojave Desert. On the edges of the hydrothermal system, a 73ËšC/km (4ËšF/100â€™) temperature gradient contour was established using conductive gradient data from shallow and intermediate-depth temperature holes. This contour was continued to all elevation datums between the 20ËšC surface and -1520 meters below mean sea level. Because the West Flank is outside of the geothermal field footprint, during Phase 1, the three wells inside the FORGE site were incorporated into the preexisting temperature model. To ensure a complete model was built based on all the available data sets, measured bottom-hole temperature gradients in certain wells were downward extrapolated to the next deepest elevation datum (or a maximum of about 25% of the well depth where conductive gradients are evident in the lower portions of the wells). After assuring that the margins of the geothermal field were going to be adequately modelled, the data was contoured using the Kriging method algorithm. Although the extrapolated temperatures and boundary conditions are not rigorous, the calculated temperatures are anticipated to be within ~6ËšC (20ËšF), or one contour interval, of the
Modeling HIV Prevention Strategies among Two Puerto Rican Samples
Santiago-Rivas, Marimer; Pérez-Jiménez, David
2012-01-01
The Information-Motivation-Behavioral Skills model examines factors that are used to initiate and maintain sexual and reproductive health promotion behaviors. The present study evaluated the association among these constructs as it is applied to sexually active heterosexual adults with steady partners, using a Structural Equation Modeling approach. At the same time, it was analyzed if the same model structure could be generalized to two samples of participants that produced the results following two different formats for data collection. Two-hundred ninety one participants completed the Information-Motivation-Behavioral Skills Questionnaire (Spanish version), and 756 participants completed an Internet version on the instrument. The proposed model fits the data for both groups, supporting a predictive and positive relationship among all of the latent variables, with Information predicting Motivation, and Motivation therefore predicting Behavioral Skills. The findings support the notion that there are important issues that need to be addressed when promoting HIV prevention. PMID:23243320
OPC model sampling evaluation and weakpoint "in-situ" improvement
NASA Astrophysics Data System (ADS)
Fu, Nan; Elshafie, Shady; Ning, Guoxiang; Roling, Stefan
2016-10-01
One of the major challenges of optical proximity correction (OPC) models is to maximize the coverage of real design features using sampling pattern. Normally, OPC model building is based on 1-D and 2-D test patterns with systematically changing pitches alignment with design rules. However, those features with different optical and geometric properties will generate weak-points where OPC simulation cannot precisely predict resist contours on wafer due to the nature of infinite IC designs and limited number of model test patterns. In this paper, optical property data of real design features were collected from full chips and classified to compare with the same kind of data from OPC test patterns. Therefore sample coverage could be visually mapped according to different optical properties. Design features, which are out of OPC capability, were distinguished by their optical properties and marked as weak-points. New patterns with similar optical properties would be added into model build site-list. Further, an alternative and more efficient method was created in this paper to improve the treatment of issue features and remove weak-points without rebuilding models. Since certain classification of optical properties will generate weak-points, an OPC-integrated repair algorithm was developed and implemented to scan full chip for optical properties, locate those features and then optimize OPC treatment or apply precise sizing on site. This is a named "in-situ" weak-point improvement flow which includes issue feature definition, allocation in full chip and real-time improvement.
Integrated flow and temperature modeling at the catchment scale
NASA Astrophysics Data System (ADS)
Loinaz, Maria C.; Davidsen, Hasse Kampp; Butts, Michael; Bauer-Gottwein, Peter
2013-07-01
Changes in natural stream temperature levels can be detrimental to the health of aquatic ecosystems. Water use and land management directly affect the distribution of diffuse heat sources and thermal loads to streams, while riparian vegetation and geomorphology play a critical role in how thermal loads are buffered. In many areas, groundwater flow is a significant contribution to river flow, particularly during low flows and therefore has a strong influence on stream temperature levels and dynamics. However, previous stream temperature models do not properly simulate how surface water-groundwater dynamics affect stream temperature. A coupled surface water-groundwater and temperature model has therefore been developed to quantify the impacts of land management and water use on stream flow and temperatures. The model is applied to the simulation of stream temperature levels in a spring-fed stream, the Silver Creek Basin in Idaho, where stream temperature affects the populations of fish and other aquatic organisms. The model calibration highlights the importance of spatially distributed flow dynamics in the catchment to accurately predict stream temperatures. The results also show the value of including temperature data in an integrated flow model calibration because temperature data provide additional constraints on the flow sources and volumes. Simulations show that a reduction of 10% in the groundwater flow to the Silver Creek Basin can cause average and maximum temperature increases in Silver Creek over 0.3 °C and 1.5 °C, respectively. In spring-fed systems like Silver Creek, it is clearly not feasible to separate river habitat restoration from upstream catchment and groundwater management.
A stochastic model for the analysis of maximum daily temperature
NASA Astrophysics Data System (ADS)
Sirangelo, B.; Caloiero, T.; Coscarelli, R.; Ferrari, E.
2016-08-01
In this paper, a stochastic model for the analysis of the daily maximum temperature is proposed. First, a deseasonalization procedure based on the truncated Fourier expansion is adopted. Then, the Johnson transformation functions were applied for the data normalization. Finally, the fractionally autoregressive integrated moving average model was used to reproduce both short- and long-memory behavior of the temperature series. The model was applied to the data of the Cosenza gauge (Calabria region) and verified on other four gauges of southern Italy. Through a Monte Carlo simulation procedure based on the proposed model, 105 years of daily maximum temperature have been generated. Among the possible applications of the model, the occurrence probabilities of the annual maximum values have been evaluated. Moreover, the procedure was applied for the estimation of the return periods of long sequences of days with maximum temperature above prefixed thresholds.
Ozone and temperature: A test of the consistency of models and observations in the middle atmosphere
NASA Astrophysics Data System (ADS)
Orris, Rebecca Lyn
1997-08-01
Several stratospheric monthly-, zonally-averaged satellite ozone and temperature datasets have been created, merged with other observational datasets, and extrapolated to form ozone climatologies, with coverage from the surface to 80km and from 90oS to 90oN. Equilibrium temperatures in the stratosphere for each ozone dataset are calculated using a fixed dynamical heating (FDH) model and are compared with measured temperatures. An extensive study is conducted of the sensitivity of the modeled temperatures to uncertainties of inputs, with emphasis on the accuracy of the radiative transfer models, the uncertainty of the ozone mixing ratios, and inter-annual variability. We examine the long-term variability of the temperature with 25 years of data from the 3o resolution SKYHI GCM and find evidence of low frequency variation of the 3o model temperatures with a time scale of about 10 years. This long-term variability creates a significant source of uncertainty in our study, since dynamical heating rates derived from only 1 year of 1o SKYHI data are used. Most measured datasets are only available for a few years, which is an inadequate sample for averaging purposes. The uncertainty introduced into the comparison of FDH-modeled temperatures and measurements near 1mb in the tropics due to interannual variability has a maximum of approximately ±8K. Global-mean calculations on isobaric surfaces are shown to eliminate most of the interannual variability of the modeled and measured temperatures. Multiple years of global-mean UARS MLS temperatures, as well as MLS and LIMS temperatures at pressures of 1mb and greater, agree to within ±2K. For most months studied, global-mean Barnett and Corney (BC) temperatures are found to be significantly warmer (3.5-5K) than either the MLS or LIMS temperatures between 2-10mb. Comparisons of global-mean FDH-modeled temperatures with measured LIMS and MLS temperatures show the model is colder than measurements by 3-7K. Consistency between
Ignition temperature of magnesium powder clouds: a theoretical model.
Chunmiao, Yuan; Chang, Li; Gang, Li; Peihong, Zhang
2012-11-15
Minimum ignition temperature of dust clouds (MIT-DC) is an important consideration when adopting explosion prevention measures. This paper presents a model for determining minimum ignition temperature for a magnesium powder cloud under conditions simulating a Godbert-Greenwald (GG) furnace. The model is based on heterogeneous oxidation of metal particles and Newton's law of motion, while correlating particle size, dust concentration, and dust dispersion pressure with MIT-DC. The model predicted values in close agreement with experimental data and is especially useful in predicting temperature and velocity change as particles pass through the furnace tube.
Weaver, Phoebe G; Jagow, Devin M; Portune, Cameron M; Kenney, John W
2016-07-19
The design and operation of a simple liquid nitrogen Dewar/cryostat apparatus based upon a small fused silica optical Dewar, a thermocouple assembly, and a CCD spectrograph are described. The experiments for which this Dewar/cryostat is designed require fast sample loading, fast sample freezing, fast alignment of the sample, accurate and stable sample temperatures, and small size and portability of the Dewar/cryostat cryogenic unit. When coupled with the fast data acquisition rates of the CCD spectrograph, this Dewar/cryostat is capable of supporting cryogenic luminescence spectroscopic measurements on luminescent samples at a series of known, stable temperatures in the 77-300 K range. A temperature-dependent study of the oxygen quenching of luminescence in a rhodium(III) transition metal complex is presented as an example of the type of investigation possible with this Dewar/cryostat. In the context of this apparatus, a stable temperature for cryogenic spectroscopy means a luminescent sample that is thermally equilibrated with either liquid nitrogen or gaseous nitrogen at a known measureable temperature that does not vary (ΔT < 0.1 K) during the short time scale (~1-10 sec) of the spectroscopic measurement by the CCD. The Dewar/cryostat works by taking advantage of the positive thermal gradient dT/dh that develops above liquid nitrogen level in the Dewar where h is the height of the sample above the liquid nitrogen level. The slow evaporation of the liquid nitrogen results in a slow increase in h over several hours and a consequent slow increase in the sample temperature T over this time period. A quickly acquired luminescence spectrum effectively catches the sample at a constant, thermally equilibrated temperature.
Selwa, Edithe; Huynh, Tru; Ciccotti, Giovanni; Maragliano, Luca; Malliavin, Thérèse E
2014-10-01
The catalytic domain of the adenyl cyclase (AC) toxin from Bordetella pertussis is activated by interaction with calmodulin (CaM), resulting in cAMP overproduction in the infected cell. In the X-ray crystallographic structure of the complex between AC and the C terminal lobe of CaM, the toxin displays a markedly elongated shape. As for the structure of the isolated protein, experimental results support the hypothesis that more globular conformations are sampled, but information at atomic resolution is still lacking. Here, we use temperature-accelerated molecular dynamics (TAMD) simulations to generate putative all-atom models of globular conformations sampled by CaM-free AC. As collective variables, we use centers of mass coordinates of groups of residues selected from the analysis of standard molecular dynamics (MD) simulations. Results show that TAMD allows extended conformational sampling and generates AC conformations that are more globular than in the complexed state. These structures are then refined via energy minimization and further unrestrained MD simulations to optimize inter-domain packing interactions, thus resulting in the identification of a set of hydrogen bonds present in the globular conformations.
Physical Models of Seismic-Attenuation Measurements on Lab Samples
NASA Astrophysics Data System (ADS)
Coulman, T. J.; Morozov, I. B.
2012-12-01
Seismic attenuation in Earth materials is often measured in the lab by using low-frequency forced oscillations or static creep experiments. The usual assumption in interpreting and even designing such experiments is the "viscoelastic" behavior of materials, i.e., their description by the notions of a Q-factor and material memory. However, this is not the only theoretical approach to internal friction, and it also involves several contradictions with conventional mechanics. From the viewpoint of mechanics, the frequency-dependent Q becomes a particularly enigmatic property attributed to the material. At the same time, the behavior of rock samples in seismic-attenuation experiments can be explained by a strictly mechanical approach. We use this approach to simulate such experiments analytically and numerically for a system of two cylinders consisting of a rock sample and elastic standard undergoing forced oscillations, and also for a single rock sample cylinder undergoing static creep. The system is subject to oscillatory compression or torsion, and the phase-lag between the sample and standard is measured. Unlike in the viscoelastic approach, a full Lagrangian formulation is considered, in which material anelasticity is described by parameters of "solid viscosity" and a dissipation function from which the constitutive equation is derived. Results show that this physical model of anelasticity predicts creep results very close to those obtained by using empirical Burger's bodies or Andrade laws. With nonlinear (non-Newtonian) solid viscosity, the system shows an almost instantaneous initial deformation followed by slow creep towards an equilibrium. For Aheim Dunite, the "rheologic" parameters of nonlinear viscosity are υ=0.79 and η=2.4 GPa-s. Phase-lag results for nonlinear viscosity show Q's slowly decreasing with frequency. To explain a Q increasing with frequency (which is often observed in the lab and in the field), one has to consider nonlinear viscosity with
NASA Astrophysics Data System (ADS)
Miura, Takuya; Xie, Wei; Yanase, Takashi; Nagahama, Taro; Shimada, Toshihiro
2015-09-01
Plasma chemical vapor deposition (CVD) is now gathering attention from a novel viewpoint, because it is easy to combine plasma processes and electrochemistry by applying a bias voltage to the sample. In order to explore electrochemistry during the plasma CVD, the temperature of the sample must be controlled precisely. In traditional equipment, the sample temperature is measured by a radiation thermometer. Since emissivity of the sample surface changes in the course of the CVD growth, it is difficult to measure the exact temperature using the radiation thermometer. In this work, we developed new equipment to control the temperature of electrically floated samples by thermocouple with Wi-Fi transmission. The growth of the CNT was investigated using our plasma CVD equipment. We examined the temperature accuracy and stability controlled by the thermocouple with monitoring the radiation thermometer. We noticed that the thermocouple readings were stable, whereas the readings of the radiation thermometer changes significantly (20 °C) during plasma CVD. This result clearly shows that the sample temperature should be measured with direct connection. On the result of CVD experiment, different structures of carbon including CNT were obtained by changing the bias voltages.
Sampling the NCAR TIEGCM, TIME-GCM, and GSWM models for CEDAR and TIMED related studies
NASA Astrophysics Data System (ADS)
Oberheide, J.; Hagan, M. E.; Roble, R. G.; Lu, G.
2003-04-01
The instruments on the TIMED satellite and a complement of ground based CEDAR instruments will provide invaluable diagnostics of mesosphere, lower thermosphere, and E-region ionosphere (MLTI, ca. 60-180 km) forcings, dynamics, and energetics. The interpretation of these diagnostics and elucidation of the impact of the associated processes on the MLTI requires complementary modeling initiatives. We make samples of the NCAR/HAO TIME-GCM, TIEGCM, and GSWM model outputs available to the community via the web. The model results are sampled in a way to provide winds, temperatures, and trace constituents that would be measured by the TIMED instruments if the satellite flew through the model atmosphere. We also provide an analogous product for the CEDAR ground-based component of TIMED.
A physically based model of global freshwater surface temperature
NASA Astrophysics Data System (ADS)
Beek, Ludovicus P. H.; Eikelboom, Tessa; Vliet, Michelle T. H.; Bierkens, Marc F. P.
2012-09-01
Temperature determines a range of physical properties of water and exerts a strong control on surface water biogeochemistry. Thus, in freshwater ecosystems the thermal regime directly affects the geographical distribution of aquatic species through their growth and metabolism and indirectly through their tolerance to parasites and diseases. Models used to predict surface water temperature range between physically based deterministic models and statistical approaches. Here we present the initial results of a physically based deterministic model of global freshwater surface temperature. The model adds a surface water energy balance to river discharge modeled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff, and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by shortwave and longwave radiation and sensible and latent heat fluxes. Also included are ice formation and its effect on heat storage and river hydraulics. We use the coupled surface water and energy balance model to simulate global freshwater surface temperature at daily time steps with a spatial resolution of 0.5° on a regular grid for the period 1976-2000. We opt to parameterize the model with globally available data and apply it without calibration in order to preserve its physical basis with the outlook of evaluating the effects of atmospheric warming on freshwater surface temperature. We validate our simulation results with daily temperature data from rivers and lakes (U.S. Geological Survey (USGS), limited to the USA) and compare mean monthly temperatures with those recorded in the Global Environment Monitoring System (GEMS) data set. Results show that the model is able to capture the mean monthly surface temperature for the majority of the GEMS stations, while the interannual variability as derived from the USGS and NOAA data was captured reasonably well. Results are poorest for
Modeling of bacterial growth as a function of temperature.
Zwietering, M H; de Koos, J T; Hasenack, B E; de Witt, J C; van't Riet, K
1991-01-01
The temperature of chilled foods is a very important variable for microbial safety in a production and distribution chain. To predict the number of organisms as a function of temperature and time, it is essential to model the lag time, specific growth rate, and asymptote (growth yield) as a function of temperature. The objective of this research was to determine the suitability and usefulness of different models, either available from the literature or newly developed. The models were compared by using an F test, by which the lack of fit of the models was compared with the measuring error. From the results, a hyperbolic model was selected for the description of the lag time as a function of temperature. Modified forms of the Ratkowsky model were selected as the most suitable model for both the growth rate and the asymptote as a function of temperature. The selected models could be used to predict experimentally determined numbers of organisms as a function of temperature and time. PMID:2059034
Monte Carlo grain growth modeling with local temperature gradients
NASA Astrophysics Data System (ADS)
Tan, Y.; Maniatty, A. M.; Zheng, C.; Wen, J. T.
2017-09-01
This work investigated the development of a Monte Carlo (MC) simulation approach to modeling grain growth in the presence of non-uniform temperature field that may vary with time. We first scale the MC model to physical growth processes by fitting experimental data. Based on the scaling relationship, we derive a grid site selection probability (SSP) function to consider the effect of a spatially varying temperature field. The SSP function is based on the differential MC step, which allows it to naturally consider time varying temperature fields too. We verify the model and compare the predictions to other existing formulations (Godfrey and Martin 1995 Phil. Mag. A 72 737-49 Radhakrishnan and Zacharia 1995 Metall. Mater. Trans. A 26 2123-30) in simple two-dimensional cases with only spatially varying temperature fields, where the predicted grain growth in regions of constant temperature are expected to be the same as for the isothermal case. We also test the model in a more realistic three-dimensional case with a temperature field varying in both space and time, modeling grain growth in the heat affected zone of a weld. We believe the newly proposed approach is promising for modeling grain growth in material manufacturing processes that involves time-dependent local temperature gradient.
Dynamic modeling of temperature change in outdoor operated tubular photobioreactors.
Androga, Dominic Deo; Uyar, Basar; Koku, Harun; Eroglu, Inci
2017-04-06
In this study, a one-dimensional transient model was developed to analyze the temperature variation of tubular photobioreactors operated outdoors and the validity of the model was tested by comparing the predictions of the model with the experimental data. The model included the effects of convection and radiative heat exchange on the reactor temperature throughout the day. The temperatures in the reactors increased with increasing solar radiation and air temperatures, and the predicted reactor temperatures corresponded well to the measured experimental values. The heat transferred to the reactor was mainly through radiation: the radiative heat absorbed by the reactor medium, ground radiation, air radiation, and solar (direct and diffuse) radiation, while heat loss was mainly through the heat transfer to the cooling water and forced convection. The amount of heat transferred by reflected radiation and metabolic activities of the bacteria and pump work was negligible. Counter-current cooling was more effective in controlling reactor temperature than co-current cooling. The model developed identifies major heat transfer mechanisms in outdoor operated tubular photobioreactors, and accurately predicts temperature changes in these systems. This is useful in determining cooling duty under transient conditions and scaling up photobioreactors. The photobioreactor design and the thermal modeling were carried out and experimental results obtained for the case study of photofermentative hydrogen production by Rhodobacter capsulatus, but the approach is applicable to photobiological systems that are to be operated under outdoor conditions with significant cooling demands.
Statistical Modeling of Daily Stream Temperature for Mitigating Fish Mortality
NASA Astrophysics Data System (ADS)
Caldwell, R. J.; Rajagopalan, B.
2011-12-01
Water allocations in the Central Valley Project (CVP) of California require the consideration of short- and long-term needs of many socioeconomic factors including, but not limited to, agriculture, urban use, flood mitigation/control, and environmental concerns. The Endangered Species Act (ESA) ensures that the decision-making process provides sufficient water to limit the impact on protected species, such as salmon, in the Sacramento River Valley. Current decision support tools in the CVP were deemed inadequate by the National Marine Fisheries Service due to the limited temporal resolution of forecasts for monthly stream temperature and fish mortality. Finer scale temporal resolution is necessary to account for the stream temperature variations critical to salmon survival and reproduction. In addition, complementary, long-range tools are needed for monthly and seasonal management of water resources. We will present a Generalized Linear Model (GLM) framework of maximum daily stream temperatures and related attributes, such as: daily stream temperature range, exceedance/non-exceedance of critical threshold temperatures, and the number of hours of exceedance. A suite of predictors that impact stream temperatures are included in the models, including current and prior day values of streamflow, water temperatures of upstream releases from Shasta Dam, air temperature, and precipitation. Monthly models are developed for each stream temperature attribute at the Balls Ferry gauge, an EPA compliance point for meeting temperature criteria. The statistical framework is also coupled with seasonal climate forecasts using a stochastic weather generator to provide ensembles of stream temperature scenarios that can be used for seasonal scale water allocation planning and decisions. Short-term weather forecasts can also be used in the framework to provide near-term scenarios useful for making water release decisions on a daily basis. The framework can be easily translated to other
Heat propagation models for superconducting nanobridges at millikelvin temperatures
NASA Astrophysics Data System (ADS)
Blois, A.; Rozhko, S.; Hao, L.; Gallop, J. C.; Romans, E. J.
2017-01-01
Nanoscale superconducting quantum interference devices (nanoSQUIDs) most commonly use Dayem bridges as Josephson elements to reduce the loop size and achieve high spin sensitivity. Except at temperatures close to the critical temperature T c, the electrical characteristics of these bridges exhibit undesirable thermal hysteresis which complicates device operation. This makes proper thermal analysis an essential design consideration for optimising nanoSQUID performance at ultralow temperatures. However the existing theoretical models for this hysteresis were developed for micron-scale devices operating close to liquid helium temperatures, and are not fully applicable to a new generation of much smaller devices operating at significantly lower temperatures. We have therefore developed a new analytic heat model which enables a more accurate prediction of the thermal behaviour in such circumstances. We demonstrate that this model is in good agreement with experimental results measured down to 100 mK and discuss its validity for different nanoSQUID geometries.
A temperature dependent SPICE macro-model for power MOSFETs
Pierce, D.G.
1992-05-01
A power MOSFET macro-model for use with the circuit simulator SPICE has been developed suitable for use over the temperature range of {minus}55 to 125{degrees}C. The model is comprised of a single parameter set with the temperature dependence accessed through the SPICE TEMP card. This report describes in detail the development of the model and the extraction algorithms used to obtain model parameters. The extraction algorithms are described in sufficient detail to allow for automated measurements which in turn allows for rapid and cost effective development of an accurate SPICE model for any power MOSFET. 22 refs.
Comparative modelling by restraint-based conformational sampling.
Furnham, Nicholas; de Bakker, Paul Iw; Gore, Swanand; Burke, David F; Blundell, Tom L
2008-01-31
Although comparative modelling is routinely used to produce three-dimensional models of proteins, very few automated approaches are formulated in a way that allows inclusion of restraints derived from experimental data as well as those from the structures of homologues. Furthermore, proteins are usually described as a single conformer, rather than an ensemble that represents the heterogeneity and inaccuracy of experimentally determined protein structures. Here we address these issues by exploring the application of the restraint-based conformational space search engine, RAPPER, which has previously been developed for rebuilding experimentally defined protein structures and for fitting models to electron density derived from X-ray diffraction analyses. A new application of RAPPER for comparative modelling uses positional restraints and knowledge-based sampling to generate models with accuracies comparable to other leading modelling tools. Knowledge-based predictions are based on geometrical features of the homologous templates and rules concerning main-chain and side-chain conformations. By directly changing the restraints derived from available templates we estimate the accuracy limits of the method in comparative modelling. The application of RAPPER to comparative modelling provides an effective means of exploring the conformational space available to a target sequence. Enhanced methods for generating positional restraints can greatly improve structure prediction. Generation of an ensemble of solutions that are consistent with both target sequence and knowledge derived from the template structures provides a more appropriate representation of a structural prediction than a single model. By formulating homologous structural information as sets of restraints we can begin to consider how comparative models might be used to inform conformer generation from sparse experimental data.
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery etal. Mon Weather Rev 133: 1155-1174, 2(05)) has recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
Assimilation of Surface Temperature in Land Surface Models
NASA Technical Reports Server (NTRS)
Lakshmi, Venkataraman
1998-01-01
Hydrological models have been calibrated and validated using catchment streamflows. However, using a point measurement does not guarantee correct spatial distribution of model computed heat fluxes, soil moisture and surface temperatures. With the advent of satellites in the late 70s, surface temperature is being measured two to four times a day from various satellite sensors and different platforms. The purpose of this paper is to demonstrate use of satellite surface temperature in (a) validation of model computed surface temperatures and (b) assimilation of satellite surface temperatures into a hydrological model in order to improve the prediction accuracy of soil moistures and heat fluxes. The assimilation is carried out by comparing the satellite and the model produced surface temperatures and setting the "true"temperature midway between the two values. Based on this "true" surface temperature, the physical relationships of water and energy balance are used to reset the other variables. This is a case of nudging the water and energy balance variables so that they are consistent with each other and the true" surface temperature. The potential of this assimilation scheme is demonstrated in the form of various experiments that highlight the various aspects. This study is carried over the Red-Arkansas basin in the southern United States (a 5 deg X 10 deg area) over a time period of a year (August 1987 - July 1988). The land surface hydrological model is run on an hourly time step. The results show that satellite surface temperature assimilation improves the accuracy of the computed surface soil moisture remarkably.
Temperature sensitivity of a numerical pollen forecast model
NASA Astrophysics Data System (ADS)
Scheifinger, Helfried; Meran, Ingrid; Szabo, Barbara; Gallaun, Heinz; Natali, Stefano; Mantovani, Simone
2016-04-01
Allergic rhinitis has become a global health problem especially affecting children and adolescence. Timely and reliable warning before an increase of the atmospheric pollen concentration means a substantial support for physicians and allergy suffers. Recently developed numerical pollen forecast models have become means to support the pollen forecast service, which however still require refinement. One of the problem areas concerns the correct timing of the beginning and end of the flowering period of the species under consideration, which is identical with the period of possible pollen emission. Both are governed essentially by the temperature accumulated before the entry of flowering and during flowering. Phenological models are sensitive to a bias of the temperature. A mean bias of -1°C of the input temperature can shift the entry date of a phenological phase for about a week into the future. A bias of such an order of magnitude is still possible in case of numerical weather forecast models. If the assimilation of additional temperature information (e.g. ground measurements as well as satellite-retrieved air / surface temperature fields) is able to reduce such systematic temperature deviations, the precision of the timing of phenological entry dates might be enhanced. With a number of sensitivity experiments the effect of a possible temperature bias on the modelled phenology and the pollen concentration in the atmosphere is determined. The actual bias of the ECMWF IFS 2 m temperature will also be calculated and its effect on the numerical pollen forecast procedure presented.
Modeling the Effect of Temperature on Ozone-Related Mortality.
Modeling the Effect of Temperature on Ozone-Related Mortality. Wilson, Ander, Reich, Brian J, Neas, Lucas M., Rappold, Ana G. Background: Previous studies show ozone and temperature are associated with increased mortality; however, the joint effect is not well explored. Underst...
Modeling the Effect of Temperature on Ozone-Related Mortality.
Modeling the Effect of Temperature on Ozone-Related Mortality. Wilson, Ander, Reich, Brian J, Neas, Lucas M., Rappold, Ana G. Background: Previous studies show ozone and temperature are associated with increased mortality; however, the joint effect is not well explored. Underst...
A generalized conditional heteroscedastic model for temperature downscaling
NASA Astrophysics Data System (ADS)
Modarres, R.; Ouarda, T. B. M. J.
2014-11-01
This study describes a method for deriving the time varying second order moment, or heteroscedasticity, of local daily temperature and its association to large Coupled Canadian General Circulation Models predictors. This is carried out by applying a multivariate generalized autoregressive conditional heteroscedasticity (MGARCH) approach to construct the conditional variance-covariance structure between General Circulation Models (GCMs) predictors and maximum and minimum temperature time series during 1980-2000. Two MGARCH specifications namely diagonal VECH and dynamic conditional correlation (DCC) are applied and 25 GCM predictors were selected for a bivariate temperature heteroscedastic modeling. It is observed that the conditional covariance between predictors and temperature is not very strong and mostly depends on the interaction between the random process governing temporal variation of predictors and predictants. The DCC model reveals a time varying conditional correlation between GCM predictors and temperature time series. No remarkable increasing or decreasing change is observed for correlation coefficients between GCM predictors and observed temperature during 1980-2000 while weak winter-summer seasonality is clear for both conditional covariance and correlation. Furthermore, the stationarity and nonlinearity Kwiatkowski-Phillips-Schmidt-Shin (KPSS) and Brock-Dechert-Scheinkman (BDS) tests showed that GCM predictors, temperature and their conditional correlation time series are nonlinear but stationary during 1980-2000 according to BDS and KPSS test results. However, the degree of nonlinearity of temperature time series is higher than most of the GCM predictors.
Monitoring, Modeling, and Diagnosis of Alkali-Silica Reaction in Small Concrete Samples
Agarwal, Vivek; Cai, Guowei; Gribok, Andrei V.; Mahadevan, Sankaran
2015-09-01
Assessment and management of aging concrete structures in nuclear power plants require a more systematic approach than simple reliance on existing code margins of safety. Structural health monitoring of concrete structures aims to understand the current health condition of a structure based on heterogeneous measurements to produce high-confidence actionable information regarding structural integrity that supports operational and maintenance decisions. This report describes alkali-silica reaction (ASR) degradation mechanisms and factors influencing the ASR. A fully coupled thermo-hydro-mechanical-chemical model developed by Saouma and Perotti by taking into consideration the effects of stress on the reaction kinetics and anisotropic volumetric expansion is presented in this report. This model is implemented in the GRIZZLY code based on the Multiphysics Object Oriented Simulation Environment. The implemented model in the GRIZZLY code is randomly used to initiate ASR in a 2D and 3D lattice to study the percolation aspects of concrete. The percolation aspects help determine the transport properties of the material and therefore the durability and service life of concrete. This report summarizes the effort to develop small-size concrete samples with embedded glass to mimic ASR. The concrete samples were treated in water and sodium hydroxide solution at elevated temperature to study how ingress of sodium ions and hydroxide ions at elevated temperature impacts concrete samples embedded with glass. Thermal camera was used to monitor the changes in the concrete sample and results are summarized.
Motif Yggdrasil: sampling sequence motifs from a tree mixture model.
Andersson, Samuel A; Lagergren, Jens
2007-06-01
In phylogenetic foot-printing, putative regulatory elements are found in upstream regions of orthologous genes by searching for common motifs. Motifs in different upstream sequences are subject to mutations along the edges of the corresponding phylogenetic tree, consequently taking advantage of the tree in the motif search is an appealing idea. We describe the Motif Yggdrasil sampler; the first Gibbs sampler based on a general tree that uses unaligned sequences. Previous tree-based Gibbs samplers have assumed a star-shaped tree or partially aligned upstream regions. We give a probabilistic model (MY model) describing upstream sequences with regulatory elements and build a Gibbs sampler with respect to this model. The model allows toggling, i.e., the restriction of a position to a subset of nucleotides, but does not require aligned sequences nor edge lengths, which may be difficult to come by. We apply the collapsing technique to eliminate the need to sample nuisance parameters, and give a derivation of the predictive update formula. We show that the MY model improves the modeling of difficult motif instances and that the use of the tree achieves a substantial increase in nucleotide level correlation coefficient both for synthetic data and 37 bacterial lexA genes. We investigate the sensitivity to errors in the tree and show that using random trees MY sampler still has a performance similar to the original version.
Volcanic Aerosol Evolution: Model vs. In Situ Sampling
NASA Astrophysics Data System (ADS)
Pfeffer, M. A.; Rietmeijer, F. J.; Brearley, A. J.; Fischer, T. P.
2002-12-01
Volcanoes are the most significant non-anthropogenic source of tropospheric aerosols. Aerosol samples were collected at different distances from 92°C fumarolic source at Poás Volcano. Aerosols were captured on TEM grids coated by a thin C-film using a specially designed collector. In the sampling, grids were exposed to the plume for 30-second intervals then sealed and frozen to prevent reaction before ATEM analysis to determine aerosol size and chemistry. Gas composition was established using gas chromatography, wet chemistry techniques, AAS and Ion Chromatography on samples collected directly from a fumarolic vent. SO2 flux was measured remotely by COSPEC. A Gaussian plume dispersion model was used to model concentrations of the gases at different distances down-wind. Calculated mixing ratios of air and the initial gas species were used as input to the thermo-chemical model GASWORKS (Symonds and Reed, Am. Jour. Sci., 1993). Modeled products were compared with measured aerosol compositions. Aerosols predicted to precipitate out of the plume one meter above the fumarole are [CaSO4, Fe2.3SO4, H2SO4, MgF2. Na2SO4, silica, water]. Where the plume leaves the confines of the crater, 380 meters distant, the predicted aerosols are the same, excepting FeF3 replacing Fe2.3SO4. Collected aerosols show considerable compositional differences between the sampling locations and are more complex than those predicted. Aerosols from the fumarole consist of [Fe +/- Si,S,Cl], [S +/- O] and [Si +/- O]. Aerosols collected on the crater rim consist of the same plus [O,Na,Mg,Ca], [O,Si,Cl +/- Fe], [Fe,O,F] and [S,O +/- Mg,Ca]. The comparison between results obtained by the equilibrium gas model and the actual aerosol compositions shows that an assumption of chemical and thermal equilibrium evolution is invalid. The complex aerosols collected contrast the simple formulae predicted. These findings show that complex, non-equilibrium chemical reactions take place immediately upon volcanic
Evaluation of CIRA temperature model with lidar and future perspectives
NASA Astrophysics Data System (ADS)
Keckhut, Philippe; Hauchecorne, Alain
The CIRA model is widely used for many atmospheric applications. Many comparisons with temperature lidar have all revealed similar bias and will be presented. The mean tempera-ture is today not sufficient and future models will require additional functionalities. The use of statistical temperature mean fields requires some information about the variability to esti-mate the significance of the comparisons with other sources. Some tentative to estimate such variability will be presented. Another crucial issue for temperature comparisons concerns the tidal variability. How this effect can be considered in a model will be discussed. Finally, the pertinence of statistical models in a changing atmosphere is also an issue that needs specific considerations.
A New Empirical Model of the Temperature Humidity Index.
NASA Astrophysics Data System (ADS)
Schoen, Carl
2005-09-01
A simplified scale of apparent temperature, considering only dry-bulb temperature and humidity, has become known as the temperature humidity index (THI). The index was empirically constructed and was presented in the form of a table. It is often useful to have a formula instead for use in interpolation or for programming calculators or computers. The National Weather Service uses a polynomial multiple regression formula, but it is in some ways unsatisfactory. A new model of the THI is presented that is much simpler—having only 3 parameters as compared with 16 for the NWS model. The new model also more closely fits the tabulated values and has the advantage that it allows extrapolation outside of the temperature range of the table. Temperature humidity pairs above the effective range of the NWS model are occasionally encountered, and the ability to extrapolate into colder temperature ranges allows the new model to be more effectively contained as part of a more general apparent temperature index.
NASA Astrophysics Data System (ADS)
Heckmann, T.; Gegg, K.; Gegg, A.; Becht, M.
2014-02-01
Predictive spatial modelling is an important task in natural hazard assessment and regionalisation of geomorphic processes or landforms. Logistic regression is a multivariate statistical approach frequently used in predictive modelling; it can be conducted stepwise in order to select from a number of candidate independent variables those that lead to the best model. In our case study on a debris flow susceptibility model, we investigate the sensitivity of model selection and quality to different sample sizes in light of the following problem: on the one hand, a sample has to be large enough to cover the variability of geofactors within the study area, and to yield stable and reproducible results; on the other hand, the sample must not be too large, because a large sample is likely to violate the assumption of independent observations due to spatial autocorrelation. Using stepwise model selection with 1000 random samples for a number of sample sizes between n = 50 and n = 5000, we investigate the inclusion and exclusion of geofactors and the diversity of the resulting models as a function of sample size; the multiplicity of different models is assessed using numerical indices borrowed from information theory and biodiversity research. Model diversity decreases with increasing sample size and reaches either a local minimum or a plateau; even larger sample sizes do not further reduce it, and they approach the upper limit of sample size given, in this study, by the autocorrelation range of the spatial data sets. In this way, an optimised sample size can be derived from an exploratory analysis. Model uncertainty due to sampling and model selection, and its predictive ability, are explored statistically and spatially through the example of 100 models estimated in one study area and validated in a neighbouring area: depending on the study area and on sample size, the predicted probabilities for debris flow release differed, on average, by 7 to 23 percentage points. In
NASA Astrophysics Data System (ADS)
Heckmann, T.; Gegg, K.; Gegg, A.; Becht, M.
2013-06-01
Predictive spatial modelling is an important task in natural hazard assessment and regionalisation of geomorphic processes or landforms. Logistic regression is a multivariate statistical approach frequently used in predictive modelling; it can be conducted stepwise in order to select from a number of candidate independent variables those that lead to the best model. In our case study on a debris flow susceptibility model, we investigate the sensitivity of model selection and quality to different sample sizes in light of the following problem: on the one hand, a sample has to be large enough to cover the variability of geofactors within the study area, and to yield stable results; on the other hand, the sample must not be too large, because a large sample is likely to violate the assumption of independent observations due to spatial autocorrelation. Using stepwise model selection with 1000 random samples for a number of sample sizes between n = 50 and n = 5000, we investigate the inclusion and exclusion of geofactors and the diversity of the resulting models as a function of sample size; the multiplicity of different models is assessed using numerical indices borrowed from information theory and biodiversity research. Model diversity decreases with increasing sample size and reaches either a local minimum or a plateau; even larger sample sizes do not further reduce it, and approach the upper limit of sample size given, in this study, by the autocorrelation range of the spatial datasets. In this way, an optimised sample size can be derived from an exploratory analysis. Model uncertainty due to sampling and model selection, and its predictive ability, are explored statistically and spatially through the example of 100 models estimated in one study area and validated in a neighbouring area: depending on the study area and on sample size, the predicted probabilities for debris flow release differed, on average, by 7 to 23 percentage points. In view of these results, we
Chironomids as indicators of climate change: a temperature inference model for Greenland
NASA Astrophysics Data System (ADS)
Maddison, Eleanor J.; Long, Antony J.; Woodroffe, Sarah A.; Ranner, P. Helen; Huntley, Brian
2014-05-01
Current climate warming is predicted to accelerate melting of the Greenland Ice Sheet and cause global sea level to rise, but there is uncertainty about whether changes will be abrupt or more gradual, and whether the key forcing will be air or ocean temperatures. Examining past ice sheet response to climate change is therefore important, yet only a few quantitative temperature reconstructions exist from the Greenland Ice Sheet margin. Subfossil chironomids are a widely used biological proxy, with modern calibration data-sets used to construct past temperature. Many chironomid-inferred temperature models exist in the northern hemisphere high latitudes, however, no model currently exists for Greenland. Here we present a new model from south-west Greenland which utilises 22 lakes from the Nuup Kangerlua area (samples collected in summer 2011) and 19 lakes from the Kangerlussuaq fjord area (part of a dataset reported in Brodersen and Anderson (2002)). Monthly mean air temperatures were modelled for each lake site from air temperature logger data, collected in 2011-2012 from the Nuup Kangerlua area, and meteorological station temperature data. In the field, lake physical parameters and environmental variables were measured. Collected lake water and sediment samples were analysed in the laboratory. Statistical analysis of air temperature, geographical information, lake water chemistry and contemporary chironomid assemblage data was subsequently undertaken on the 41 lake training set. Mean June air temperature was found to be the main environmental control on the chironomid community, although other factors, including sample depth, conductivity and total nitrogen water content, were also found to be important. Weighted averaging partial least squares (WA-PLS) analysis was used to develop a new mean June air temperature inference model. Analysis indicated that the best model was a two component WA-PLS with r2=0.77, r2boot=0.56 and root mean square error of prediction = 1
De novo protein conformational sampling using a probabilistic graphical model
NASA Astrophysics Data System (ADS)
Bhattacharya, Debswapna; Cheng, Jianlin
2015-11-01
Efficient exploration of protein conformational space remains challenging especially for large proteins when assembling discretized structural fragments extracted from a protein structure data database. We propose a fragment-free probabilistic graphical model, FUSION, for conformational sampling in continuous space and assess its accuracy using ‘blind’ protein targets with a length up to 250 residues from the CASP11 structure prediction exercise. The method reduces sampling bottlenecks, exhibits strong convergence, and demonstrates better performance than the popular fragment assembly method, ROSETTA, on relatively larger proteins with a length of more than 150 residues in our benchmark set. FUSION is freely available through a web server at http://protein.rnet.missouri.edu/FUSION/.
Comparing interval estimates for small sample ordinal CFA models.
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positive biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading
Comparing interval estimates for small sample ordinal CFA models
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positive biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading
Modeling and Simulation of a Tethered Harpoon for Comet Sampling
NASA Technical Reports Server (NTRS)
Quadrelli, Marco B.
2014-01-01
This paper describes the development of a dynamic model and simulation results of a tethered harpoon for comet sampling. This model and simulation was done in order to carry out an initial sensitivity analysis for key design parameters of the tethered system. The harpoon would contain a canister which would collect a sample of soil from a cometary surface. Both a spring ejected canister and a tethered canister are considered. To arrive in close proximity of the spacecraft at the end of its trajectory so it could be captured, the free-flying canister would need to be ejected at the right time and with the proper impulse, while the tethered canister must be recovered by properly retrieving the tether at a rate that would avoid an excessive amplitude of oscillatory behavior during the retrieval. The paper describes the model of the tether dynamics and harpoon penetration physics. The simulations indicate that, without the tether, the canister would still reach the spacecraft for collection, that the tether retrieval of the canister would be achievable with reasonable fuel consumption, and that the canister amplitude upon retrieval would be insensitive to variations in vertical velocity dispersion.
Modeling and Simulation of a Tethered Harpoon for Comet Sampling
NASA Technical Reports Server (NTRS)
Quadrelli, Marco B.
2014-01-01
This paper describes the development of a dynamic model and simulation results of a tethered harpoon for comet sampling. This model and simulation was done in order to carry out an initial sensitivity analysis for key design parameters of the tethered system. The harpoon would contain a canister which would collect a sample of soil from a cometary surface. Both a spring ejected canister and a tethered canister are considered. To arrive in close proximity of the spacecraft at the end of its trajectory so it could be captured, the free-flying canister would need to be ejected at the right time and with the proper impulse, while the tethered canister must be recovered by properly retrieving the tether at a rate that would avoid an excessive amplitude of oscillatory behavior during the retrieval. The paper describes the model of the tether dynamics and harpoon penetration physics. The simulations indicate that, without the tether, the canister would still reach the spacecraft for collection, that the tether retrieval of the canister would be achievable with reasonable fuel consumption, and that the canister amplitude upon retrieval would be insensitive to variations in vertical velocity dispersion.
Testing fault growth models with low-temperature thermochronology
NASA Astrophysics Data System (ADS)
Curry, Magdalena; Barnes, Jason; Colgan, Joseph
2017-04-01
Common fault-growth models diverge in predicting how faults accumulate displacement and lengthen through time. A paucity of field-based data documenting the lateral component of fault growth hinders our ability to test these models and fully understand how natural fault systems evolve. We outline a framework for using apatite (U-Th)/He thermochronology (AHe) to quantify the along-strike growth of faults. We test our framework in the normal-fault bounded Pine Forest Range from the U.S. Basin and Range Province. We combine new and existing cross-sections with 18 new and 16 existing AHe cooling ages to determine the spatiotemporal variability in footwall exhumation and evaluate models for fault growth. Three age-elevation transects in the Pine Forest Range show rapid exhumation began along the range-front fault between ca. 15-11 Ma at rates of 0.2-0.4 km/m.y., ultimately exhuming ca. 1.5-5 km. The ages of onset of rapid exhumation identified at each sample transect lie within data uncertainty, indicating concomitant onset of faulting along strike. We show that even in the case of growth by fault-segment linkage, the fault would achieve its modern >40 km length within 3-4 m.y. of onset. A constant fault-length growth model is the best explanation for our thermochronology results. We advocate that low-temperature thermochronology can be further utilized to better understand and quantify fault growth with broader implications for seismic hazard assessments and the coevolution of faulting and topography.
Role of temperature-dependent spin model parameters in ultra-fast magnetization dynamics
NASA Astrophysics Data System (ADS)
Deák, A.; Hinzke, D.; Szunyogh, L.; Nowak, U.
2017-08-01
In the spirit of multi-scale modelling magnetization dynamics at elevated temperature is often simulated in terms of a spin model where the model parameters are derived from first principles. While these parameters are mostly assumed temperature-independent and thermal properties arise from spin fluctuations only, other scenarios are also possible. Choosing bcc Fe as an example, we investigate the influence of different kinds of model assumptions on ultra-fast spin dynamics, where following a femtosecond laser pulse, a sample is demagnetized due to a sudden rise of the electron temperature. While different model assumptions do not affect the simulational results qualitatively, their details do depend on the nature of the modelling.
High temperature spice modeling of partially depleted SOI MOSFETs
Osman, M.A.; Osman, A.A.
1996-03-01
Several partially depleted SOI N- and P-mosfets with dimensions ranging from W/L=30/10 to 15/3 were characterized from room temperature up to 300 C. The devices exhibited a well defined and sharp zero temperature coefficient biasing point up to 573 K in both linear and saturation regions. Simulation of the I-V characteristics using a temperature dependent SOI SPICE were in excellent agreement with measurements. Additionally, measured ZTC points agreed favorably with the predicted ZTC points using expressions derived from the temperature dependent SOI model for the ZTC {copyright} {ital 1996 American Institute of Physics.}
NASA Astrophysics Data System (ADS)
Hearty, Thomas J.; Savtchenko, Andrey; Tian, Baijun; Fetzer, Eric; Yung, Yuk L.; Theobald, Michael; Vollmer, Bruce; Fishbein, Evan; Won, Young-In
2014-03-01
We use MERRA (Modern Era Retrospective-Analysis for Research Applications) temperature and water vapor data to estimate the sampling biases of climatologies derived from the AIRS/AMSU-A (Atmospheric Infrared Sounder/Advanced Microwave Sounding Unit-A) suite of instruments. We separate the total sampling bias into temporal and instrumental components. The temporal component is caused by the AIRS/AMSU-A orbit and swath that are not able to sample all of time and space. The instrumental component is caused by scenes that prevent successful retrievals. The temporal sampling biases are generally smaller than the instrumental sampling biases except in regions with large diurnal variations, such as the boundary layer, where the temporal sampling biases of temperature can be ± 2 K and water vapor can be 10% wet. The instrumental sampling biases are the main contributor to the total sampling biases and are mainly caused by clouds. They are up to 2 K cold and > 30% dry over midlatitude storm tracks and tropical deep convective cloudy regions and up to 20% wet over stratus regions. However, other factors such as surface emissivity and temperature can also influence the instrumental sampling bias over deserts where the biases can be up to 1 K cold and 10% wet. Some instrumental sampling biases can vary seasonally and/or diurnally. We also estimate the combined measurement uncertainties of temperature and water vapor from AIRS/AMSU-A and MERRA by comparing similarly sampled climatologies from both data sets. The measurement differences are often larger than the sampling biases and have longitudinal variations.
NASA Technical Reports Server (NTRS)
Hearty, Thomas J.; Savtchenko, Andrey K.; Tian, Baijun; Fetzer, Eric; Yung, Yuk L.; Theobald, Michael; Vollmer, Bruce; Fishbein, Evan; Won, Young-In
2014-01-01
We use MERRA (Modern Era Retrospective-Analysis for Research Applications) temperature and water vapor data to estimate the sampling biases of climatologies derived from the AIRS/AMSU-A (Atmospheric Infrared Sounder/Advanced Microwave Sounding Unit-A) suite of instruments. We separate the total sampling bias into temporal and instrumental components. The temporal component is caused by the AIRS/AMSU-A orbit and swath that are not able to sample all of time and space. The instrumental component is caused by scenes that prevent successful retrievals. The temporal sampling biases are generally smaller than the instrumental sampling biases except in regions with large diurnal variations, such as the boundary layer, where the temporal sampling biases of temperature can be +/- 2 K and water vapor can be 10% wet. The instrumental sampling biases are the main contributor to the total sampling biases and are mainly caused by clouds. They are up to 2 K cold and greater than 30% dry over mid-latitude storm tracks and tropical deep convective cloudy regions and up to 20% wet over stratus regions. However, other factors such as surface emissivity and temperature can also influence the instrumental sampling bias over deserts where the biases can be up to 1 K cold and 10% wet. Some instrumental sampling biases can vary seasonally and/or diurnally. We also estimate the combined measurement uncertainties of temperature and water vapor from AIRS/AMSU-A and MERRA by comparing similarly sampled climatologies from both data sets. The measurement differences are often larger than the sampling biases and have longitudinal variations.
Konz, Ioana; Fernández, Beatriz; Fernández, M Luisa; Pereiro, Rosario; Sanz-Medel, Alfredo
2014-01-27
A new custom-built Peltier-cooled laser ablation cell is described. The proposed cryogenic cell combines a small internal volume (20 cm(3)) with a unique and reliable on-sample temperature control. The use of a flexible temperature sensor, directly located on the sample surface, ensures a rigorous sample temperature control throughout the entire analysis time and allows instant response to any possible fluctuation. In this way sample integrity and, therefore, reproducibility can be guaranteed during the ablation. The refrigeration of the proposed cryogenic cell combines an internal refrigeration system, controlled by a sensitive thermocouple, with an external refrigeration system. Cooling of the sample is directly carried out by 8 small (1 cm×1 cm) Peltier elements placed in a circular arrangement in the base of the cell. These Peltier elements are located below a copper plate where the sample is placed. Due to the small size of the cooling electronics and their circular allocation it was possible to maintain a peephole under the sample for illumination allowing a much better visualization of the sample, a factor especially important when working with structurally complex tissue sections. The analytical performance of the cryogenic cell was studied using a glass reference material (SRM NIST 612) at room temperature and at -20°C. The proposed cell design shows a reasonable signal washout (signal decay within less than 10 s to background level), high sensitivity and good signal stability (in the range 6.6-11.7%). Furthermore, high precision (0.4-2.6%) and accuracy (0.3-3.9%) in the isotope ratio measurements were also observed operating the cell both at room temperature and at -20°C. Finally, experimental results obtained for the cell application to qualitative elemental imaging of structurally complex tissue samples (e.g. eye sections from a native frozen porcine eye and fresh flower leaves) demonstrate that working in cryogenic conditions is critical in such
Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method
NASA Technical Reports Server (NTRS)
Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.
2005-01-01
The large volume of existing and planned infrared observations of Mars have prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polarorbiting Operational Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform self-consistent atmospheric corrections necessary to retrieve caps emissivity from the Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.
Mechanistic modeling of broth temperature in outdoor photobioreactors.
Béchet, Quentin; Shilton, Andy; Fringer, Oliver B; Muñoz, Raul; Guieysse, Benoit
2010-03-15
This study presents the first mechanistic model describing broth temperature in column photobioreactors as a function of static (location, reactor geometry) and dynamic (light irradiance, air temperature, wind velocity) parameters. Based on a heat balance on the liquid phase the model predicted temperature in a pneumatically agitated column photobioreactor (1 m(2) illuminated area, 0.19 m internal diameter, 50 L gas-free cultivation broth) operated outdoor in Singapore to an accuracy of 2.4 °C at the 95% confidence interval over the entire data set used (104 measurements from 7 different batches). Solar radiation (0 to 200 W) and air convection (-30 to 50 W)were the main contributors to broth temperature change. The model predicted broth temperature above 40 °C will be reached during summer months in the same photobioreactor operated in California, a value well over the maximum temperature tolerated by most commercial algae species. Accordingly, 18,000 and 5500 GJ year(-1) ha(-1) of heat energy must be removed to maintain broth temperature at or below 25 and 35 °C, respectively, assuming a reactor density of one reactor per square meter. Clearly, the significant issue of temperature control must be addressed when evaluating the technical feasibility, costs, and sustainability of large-scale algae production.
NASA Technical Reports Server (NTRS)
Franzen, M. A.; Roe, L. A.; Buffington, J. A.; Sears, D. W. G.
2005-01-01
There have been a number of missions that have explored the solar system with cameras and other instruments but profound questions remain that can only be addressed through the analysis of returned samples. However, due to lack of appropriate technology, high cost, and high risk, sample return has only recently become a feasible part of robotic solar system exploration. One specific objective of the President s new vision is that robotic exploration of the solar system should enhance human exploration as it discovers and understands the the solar system, and searches for life and resources [1]. Missions to small bodies, asteroids and comets, will partially fill the huge technological void between missions to the Moon and missions to Mars. However, such missions must be low cost and inherently simple, so they can be applied routinely to many missions. Sample return from asteroids, comets, Mars, and Jupiter s moons will be an important and natural part of the human exploration of space effort. Here we describe the collector designed for the Hera Near-Earth Asteroid Sample Return Mission. We have built a small prototype for preliminary evaluation, but expect the final collector to gather approx.100 g of sample of dust grains to centimeter sized clasts on each application to the surface of the asteroid.
Modeling the wet bulb globe temperature using standard meteorological measurements.
Liljegren, James C; Carhart, Richard A; Lawday, Philip; Tschopp, Stephen; Sharp, Robert
2008-10-01
The U.S. Army has a need for continuous, accurate estimates of the wet bulb globe temperature to protect soldiers and civilian workers from heat-related injuries, including those involved in the storage and destruction of aging chemical munitions at depots across the United States. At these depots, workers must don protective clothing that increases their risk of heat-related injury. Because of the difficulty in making continuous, accurate measurements of wet bulb globe temperature outdoors, the authors have developed a model of the wet bulb globe temperature that relies only on standard meteorological data available at each storage depot for input. The model is composed of separate submodels of the natural wet bulb and globe temperatures that are based on fundamental principles of heat and mass transfer, has no site-dependent parameters, and achieves an accuracy of better than 1 degree C based on comparisons with wet bulb globe temperature measurements at all depots.
Modeling the wet bulb globe temperature using standard meteorological measurements.
Liljegren, J. C.; Carhart, R. A.; Lawday, P.; Tschopp, S.; Sharp, R.; Decision and Information Sciences
2008-10-01
The U.S. Army has a need for continuous, accurate estimates of the wet bulb globe temperature to protect soldiers and civilian workers from heat-related injuries, including those involved in the storage and destruction of aging chemical munitions at depots across the United States. At these depots, workers must don protective clothing that increases their risk of heat-related injury. Because of the difficulty in making continuous, accurate measurements of wet bulb globe temperature outdoors, the authors have developed a model of the wet bulb globe temperature that relies only on standard meteorological data available at each storage depot for input. The model is composed of separate submodels of the natural wet bulb and globe temperatures that are based on fundamental principles of heat and mass transfer, has no site-dependent parameters, and achieves an accuracy of better than 1 C based on comparisons with wet bulb globe temperature measurements at all depots.
A model function for ocean microwave brightness temperatures
NASA Technical Reports Server (NTRS)
Wentz, F. J.
1983-01-01
A relatively simple, yet accurate, relationship between the microwave brightness temperature of the ocean and conventional oceanographic and meteorological parameters is derived. The equation for the brightness temperature upwelling from the sea surface through the intervening atmosphere is obtained, considering radiative emission and scattering by the sea surface along with radiative absorption and emission by the atmosphere. A number of approximations are applied to the integral brightness temperature equation and its supporting equations in order to obtain a simple equation for the brightness temperature that does not contain integrals. Values for a number of atmospheric parameters are determined, including temperature sensitivities, oxygen opacity, water vapor and liquid water normalized absorption coefficients, and effective columnar height. The sea surface emissivity model is then considered, modelling the sea surface as a composite of foam-free rough water and foam patches.
Stapleton, Mary; Daly, Niamh; O'Kelly, Ruth; Turner, Michael J
2017-01-01
Background The inhibition of glycolysis prior to glucose measurement is an important consideration when interpreting glucose tolerance tests. This is particularly important in gestational diabetes mellitus where prompt diagnosis and treatment is essential. A study was planned to investigate the effect of preservatives and temperature on glycolysis. Methods Blood samples for glucose were obtained from consented females. Lithium heparin and fluoride-EDTA samples transported rapidly in ice slurry to the laboratory were analysed for glucose concentration and then held either in ice slurry or at room temperature for varying time intervals. Paired fluoride-citrate samples were received at room temperature and held at room temperature, with analysis at similar time intervals. Results No significant difference was noted between mean glucose concentrations when comparing different sample types received in ice slurry. The mean glucose concentrations decreased significantly for both sets of samples when held at room temperature (0.4 mmol/L) and in ice slurry (0.2 mmol/L). A review of patient glucose tolerance tests reported in our hospital indicated that 17.8% exceeded the recommended diagnostic criteria for gestational diabetes mellitus. It was predicted that if the results of fasting samples were revised to reflect the effect of glycolysis at room temperature, the adjusted diagnostic rate could increase to 35.3%. Conclusion Preanalytical handling of blood samples for glucose analysis is vital. Fluoride-EDTA is an imperfect antiglycolytic, even when the samples are transported and analysed rapidly provides such optimal conditions. The use of fluoride-citrate tubes may offer a viable alternative in the diagnosis of diabetes mellitus.
Hierarchical Bayesian Modeling, Estimation, and Sampling for Multigroup Shape Analysis
Yu, Yen-Yun; Fletcher, P. Thomas; Awate, Suyash P.
2016-01-01
This paper proposes a novel method for the analysis of anatomical shapes present in biomedical image data. Motivated by the natural organization of population data into multiple groups, this paper presents a novel hierarchical generative statistical model on shapes. The proposed method represents shapes using pointsets and defines a joint distribution on the population’s (i) shape variables and (ii) object-boundary data. The proposed method solves for optimal (i) point locations, (ii) correspondences, and (iii) model-parameter values as a single optimization problem. The optimization uses expectation maximization relying on a novel Markov-chain Monte-Carlo algorithm for sampling in Kendall shape space. Results on clinical brain images demonstrate advantages over the state of the art. PMID:25320776
Sparse model selection in the highly under-sampled regime
NASA Astrophysics Data System (ADS)
Bulso, Nicola; Marsili, Matteo; Roudi, Yasser
2016-09-01
We propose a method for recovering the structure of a sparse undirected graphical model when very few samples are available. The method decides about the presence or absence of bonds between pairs of variable by considering one pair at a time and using a closed form formula, analytically derived by calculating the posterior probability for every possible model explaining a two body system using Jeffreys prior. The approach does not rely on the optimization of any cost functions and consequently is much faster than existing algorithms. Despite this time and computational advantage, numerical results show that for several sparse topologies the algorithm is comparable to the best existing algorithms, and is more accurate in the presence of hidden variables. We apply this approach to the analysis of US stock market data and to neural data, in order to show its efficiency in recovering robust statistical dependencies in real data with non-stationary correlations in time and/or space.
The effects of sampling frequency on the climate statistics of the ECMWF general circulation model
Phillips, T.J.; Gates, W.L.; Arpe, K.
1992-09-01
The effects of sampling frequency on the first- and second-moment statistics of selected EC model variables are investigated in a simulation of ``perpetual July`` with a diurnal cycle included and with surface and atmospheric fields saved at hourly intervals. The shortest characteristic time scales (as determined by the enfolding time of lagged autocorrelation functions) are those of ground heat fluxes and temperatures, precipitation and run-off, convective processes, cloud properties, and atmospheric vertical motion, while the longest time scales are exhibited by soil temperature and moisture, surface pressure, and atmospheric specific humidity, temperature and wind. The time scales of surface heat and momentum fluxes and of convective processes are substantially shorter over land than over the oceans.
The effects of sampling frequency on the climate statistics of the ECMWF general circulation model
Phillips, T.J.; Gates, W.L. ); Arpe, K. )
1992-09-01
The effects of sampling frequency on the first- and second-moment statistics of selected EC model variables are investigated in a simulation of perpetual July'' with a diurnal cycle included and with surface and atmospheric fields saved at hourly intervals. The shortest characteristic time scales (as determined by the enfolding time of lagged autocorrelation functions) are those of ground heat fluxes and temperatures, precipitation and run-off, convective processes, cloud properties, and atmospheric vertical motion, while the longest time scales are exhibited by soil temperature and moisture, surface pressure, and atmospheric specific humidity, temperature and wind. The time scales of surface heat and momentum fluxes and of convective processes are substantially shorter over land than over the oceans.
Modelling proteins: conformational sampling and reconstruction of folding kinetics.
Klenin, Konstantin; Strodel, Birgit; Wales, David J; Wenzel, Wolfgang
2011-08-01
In the last decades biomolecular simulation has made tremendous inroads to help elucidate biomolecular processes in-silico. Despite enormous advances in molecular dynamics techniques and the available computational power, many problems involve long time scales and large-scale molecular rearrangements that are still difficult to sample adequately. In this review we therefore summarise recent efforts to fundamentally improve this situation by decoupling the sampling of the energy landscape from the description of the kinetics of the process. Recent years have seen the emergence of many advanced sampling techniques, which permit efficient characterisation of the relevant family of molecular conformations by dispensing with the details of the short-term kinetics of the process. Because these methods generate thermodynamic information at best, they must be complemented by techniques to reconstruct the kinetics of the process using the ensemble of relevant conformations. Here we review recent advances for both types of methods and discuss their perspectives to permit efficient and accurate modelling of large-scale conformational changes in biomolecules. This article is part of a Special Issue entitled: Protein Dynamics: Experimental and Computational Approaches. Copyright © 2010 Elsevier B.V. All rights reserved.
Constitutive modelling of aluminium alloy sheet at warm forming temperatures
NASA Astrophysics Data System (ADS)
Kurukuri, S.; Worswick, M. J.; Winkler, S.
2016-08-01
The formability of aluminium alloy sheet can be greatly improved by warm forming. However predicting constitutive behaviour under warm forming conditions is a challenge for aluminium alloys due to strong, coupled temperature- and rate-sensitivity. In this work, uniaxial tensile characterization of 0.5 mm thick fully annealed aluminium alloy brazing sheet, widely used in the fabrication of automotive heat exchanger components, is performed at various temperatures (25 to 250 °C) and strain rates (0.002 and 0.02 s-1). In order to capture the observed rate- and temperature-dependent work hardening behaviour, a phenomenological extended-Nadai model and the physically based (i) Bergstrom and (ii) Nes models are considered and compared. It is demonstrated that the Nes model is able to accurately describe the flow stress of AA3003 sheet at different temperatures, strain rates and instantaneous strain rate jumps.
Lee-Wick standard model at finite temperature
NASA Astrophysics Data System (ADS)
Lebed, Richard F.; Long, Andrew J.; TerBeek, Russell H.
2013-10-01
The Lee-Wick Standard Model at temperatures near the electroweak scale is considered, with the aim of studying the electroweak phase transition. While Lee-Wick theories possess states of negative norm, they are not pathological but instead are treated by imposing particular boundary conditions and using particular integration contours in the calculation of S-matrix elements. It is not immediately clear how to extend this prescription to formulate the theory at finite temperature; we explore two different pictures of finite-temperature Lee-Wick theories, and calculate the thermodynamic variables and the (one-loop) thermal effective potential. We apply these results to study the Lee-Wick Standard Model and find that the electroweak phase transition is a continuous crossover, much like in the Standard Model. However, the high-temperature behavior is modified due to cancellations between thermal corrections arising from the negative- and positive-norm states.
One-Dimensional Temperature Modeling Techniques. Review and Recommendations
1990-08-01
1981 b) has an int’-grated simple low. canopy model with more compiete linkage with tile ground. Two other models applicable to forests have been...temperature Complete 3-D modeling of backgrounds must take and convective heat fl,-:es and can affect local tempera- into account many factors not...remotely Hutchison (1989) Modeling directional thermal radi - sensedcropsurfacetemperatures. Remote Sens;,zg ofthe ance from a forest canopy. Remote
A Temperature-Dependent Battery Model for Wireless Sensor Networks.
Rodrigues, Leonardo M; Montez, Carlos; Moraes, Ricardo; Portugal, Paulo; Vasques, Francisco
2017-02-22
Energy consumption is a major issue in Wireless Sensor Networks (WSNs), as nodes are powered by chemical batteries with an upper bounded lifetime. Estimating the lifetime of batteries is a difficult task, as it depends on several factors, such as operating temperatures and discharge rates. Analytical battery models can be used for estimating both the battery lifetime and the voltage behavior over time. Still, available models usually do not consider the impact of operating temperatures on the battery behavior. The target of this work is to extend the widely-used Kinetic Battery Model (KiBaM) to include the effect of temperature on the battery behavior. The proposed Temperature-Dependent KiBaM (T-KiBaM) is able to handle operating temperatures, providing better estimates for the battery lifetime and voltage behavior. The performed experimental validation shows that T-KiBaM achieves an average accuracy error smaller than 0.33%, when estimating the lifetime of Ni-MH batteries for different temperature conditions. In addition, T-KiBaM significantly improves the original KiBaM voltage model. The proposed model can be easily adapted to handle other battery technologies, enabling the consideration of different WSN deployments.
A Temperature-Dependent Battery Model for Wireless Sensor Networks
Rodrigues, Leonardo M.; Montez, Carlos; Moraes, Ricardo; Portugal, Paulo; Vasques, Francisco
2017-01-01
Energy consumption is a major issue in Wireless Sensor Networks (WSNs), as nodes are powered by chemical batteries with an upper bounded lifetime. Estimating the lifetime of batteries is a difficult task, as it depends on several factors, such as operating temperatures and discharge rates. Analytical battery models can be used for estimating both the battery lifetime and the voltage behavior over time. Still, available models usually do not consider the impact of operating temperatures on the battery behavior. The target of this work is to extend the widely-used Kinetic Battery Model (KiBaM) to include the effect of temperature on the battery behavior. The proposed Temperature-Dependent KiBaM (T-KiBaM) is able to handle operating temperatures, providing better estimates for the battery lifetime and voltage behavior. The performed experimental validation shows that T-KiBaM achieves an average accuracy error smaller than 0.33%, when estimating the lifetime of Ni-MH batteries for different temperature conditions. In addition, T-KiBaM significantly improves the original KiBaM voltage model. The proposed model can be easily adapted to handle other battery technologies, enabling the consideration of different WSN deployments. PMID:28241444
Event-based stormwater management pond runoff temperature model
NASA Astrophysics Data System (ADS)
Sabouri, F.; Gharabaghi, B.; Sattar, A. M. A.; Thompson, A. M.
2016-09-01
Stormwater management wet ponds are generally very shallow and hence can significantly increase (about 5.4 °C on average in this study) runoff temperatures in summer months, which adversely affects receiving urban stream ecosystems. This study uses gene expression programming (GEP) and artificial neural networks (ANN) modeling techniques to advance our knowledge of the key factors governing thermal enrichment effects of stormwater ponds. The models developed in this study build upon and compliment the ANN model developed by Sabouri et al. (2013) that predicts the catchment event mean runoff temperature entering the pond as a function of event climatic and catchment characteristic parameters. The key factors that control pond outlet runoff temperature, include: (1) Upland Catchment Parameters (catchment drainage area and event mean runoff temperature inflow to the pond); (2) Climatic Parameters (rainfall depth, event mean air temperature, and pond initial water temperature); and (3) Pond Design Parameters (pond length-to-width ratio, pond surface area, pond average depth, and pond outlet depth). We used monitoring data for three summers from 2009 to 2011 in four stormwater management ponds, located in the cities of Guelph and Kitchener, Ontario, Canada to develop the models. The prediction uncertainties of the developed ANN and GEP models for the case study sites are around 0.4% and 1.7% of the median value. Sensitivity analysis of the trained models indicates that the thermal enrichment of the pond outlet runoff is inversely proportional to pond length-to-width ratio, pond outlet depth, and directly proportional to event runoff volume, event mean pond inflow runoff temperature, and pond initial water temperature.
Modeling the Orbital Sampling Effect of Extrasolar Moons
NASA Astrophysics Data System (ADS)
Heller, René; Hippke, Michael; Jackson, Brian
2016-04-01
The orbital sampling effect (OSE) appears in phase-folded transit light curves of extrasolar planets with moons. Analytical OSE models have hitherto neglected stellar limb darkening and non-zero transit impact parameters and assumed that the moon is on a circular, co-planar orbit around the planet. Here, we present an analytical OSE model for eccentric moon orbits, which we implement in a numerical simulator with stellar limb darkening that allows for arbitrary transit impact parameters. We also describe and publicly release a fully numerical OSE simulator (PyOSE) that can model arbitrary inclinations of the transiting moon orbit. Both our analytical solution for the OSE and PyOSE can be used to search for exomoons in long-term stellar light curves such as those by Kepler and the upcoming PLATO mission. Our updated OSE model offers an independent method for the verification of possible future exomoon claims via transit timing variations and transit duration variations. Photometrically quiet K and M dwarf stars are particularly promising targets for an exomoon discovery using the OSE.
NASA Astrophysics Data System (ADS)
Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro
2016-08-01
An increase in the efficiency of sampling from Boltzmann distributions would have a significant impact on deep learning and other machine-learning applications. Recently, quantum annealers have been proposed as a potential candidate to speed up this task, but several limitations still bar these state-of-the-art technologies from being used effectively. One of the main limitations is that, while the device may indeed sample from a Boltzmann-like distribution, quantum dynamical arguments suggest it will do so with an instance-dependent effective temperature, different from its physical temperature. Unless this unknown temperature can be unveiled, it might not be possible to effectively use a quantum annealer for Boltzmann sampling. In this work, we propose a strategy to overcome this challenge with a simple effective-temperature estimation algorithm. We provide a systematic study assessing the impact of the effective temperatures in the learning of a special class of a restricted Boltzmann machine embedded on quantum hardware, which can serve as a building block for deep-learning architectures. We also provide a comparison to k -step contrastive divergence (CD-k ) with k up to 100. Although assuming a suitable fixed effective temperature also allows us to outperform one-step contrastive divergence (CD-1), only when using an instance-dependent effective temperature do we find a performance close to that of CD-100 for the case studied here.
Leavitt, V M; De Meo, E; Riccitelli, G; Rocca, M A; Comi, G; Filippi, M; Sumowski, J F
2015-11-01
Elevated body temperature was recently reported for the first time in patients with relapsing-remitting multiple sclerosis (RRMS) relative to healthy controls. In addition, warmer body temperature was associated with worse fatigue. These findings are highly novel, may indicate a novel pathophysiology for MS fatigue, and therefore warrant replication in a geographically separate sample. Here, we investigated body temperature and its association to fatigue in an Italian sample of 44 RRMS patients and 44 age- and sex-matched healthy controls. Consistent with our original report, we found elevated body temperature in the RRMS sample compared to healthy controls. Warmer body temperature was associated with worse fatigue, thereby supporting the notion of endogenous temperature elevations in patients with RRMS as a novel pathophysiological factor underlying fatigue. Our findings highlight a paradigm shift in our understanding of the effect of heat in RRMS, from exogenous (i.e., Uhthoff's phenomenon) to endogenous. Although randomized controlled trials of cooling treatments (i.e., aspirin, cooling garments) to reduce fatigue in RRMS have been successful, consideration of endogenously elevated body temperature as the underlying target will enhance our development of novel treatments.
Arauzo, A.; Guerrero, E.; Urtizberea, A.; Stankiewicz, J.; Rillo, C.
2012-06-15
A sample holder design for high temperature measurements in a commercial MPMS SQUID magnetometer from Quantum Design is presented. It fulfills the requirements for the simultaneous use of the oven and reciprocating sample option (RSO) options, thus allowing sensitive magnetic measurements up to 800 K. Alternating current susceptibility can also be measured, since the holder does not induce any phase shift relative to the ac driven field. It is easily fabricated by twisting Constantan Copyright-Sign wires into a braid nesting the sample inside. This design ensures that the sample be placed tightly into a tough holder with its orientation fixed, and prevents any sample displacement during the fast movements of the RSO transport, up to high temperatures.
Wang-Landau sampling in face-centered-cubic hydrophobic-hydrophilic lattice model proteins.
Liu, Jingfa; Song, Beibei; Yao, Yonglei; Xue, Yu; Liu, Wenjie; Liu, Zhaoxia
2014-10-01
Finding the global minimum-energy structure is one of the main problems of protein structure prediction. The face-centered-cubic (fcc) hydrophobic-hydrophilic (HP) lattice model can reach high approximation ratios of real protein structures, so the fcc lattice model is a good choice to predict the protein structures. The lacking of an effective global optimization method is the key obstacle in solving this problem. The Wang-Landau sampling method is especially useful for complex systems with a rough energy landscape and has been successfully applied to solving many optimization problems. We apply the improved Wang-Landau (IWL) sampling method, which incorporates the generation of an initial conformation based on the greedy strategy and the neighborhood strategy based on pull moves into the Wang-Landau sampling method to predict the protein structures on the fcc HP lattice model. Unlike conventional Monte Carlo simulations that generate a probability distribution at a given temperature, the Wang-Landau sampling method can estimate the density of states accurately via a random walk, which produces a flat histogram in energy space. We test 12 general benchmark instances on both two-dimensional and three-dimensional (3D) fcc HP lattice models. The lowest energies by the IWL sampling method are as good as or better than those of other methods in the literature for all instances. We then test five sets of larger-scale instances, denoted by the S, R, F90, F180, and CASP target instances on the 3D fcc HP lattice model. The numerical results show that our algorithm performs better than the other five methods in the literature on both the lowest energies and the average lowest energies in all runs. The IWL sampling method turns out to be a powerful tool to study the structure prediction of the fcc HP lattice model proteins.
Assessment of two-temperature kinetic model for ionizing air
NASA Technical Reports Server (NTRS)
Park, Chul
1987-01-01
A two-temperature chemical-kinetic model for air is assessed by comparing theoretical results with existing experimental data obtained in shock-tubes, ballistic ranges, and flight experiments. In the model, named the TTv model, one temperature (T) is assumed to characterize the heavy-particle translational and molecular rotational energies, and another temperature (Tv) to characterize the molecular vibrational, electron translational, and electronic excitation energies. The theoretical results for nonequilibrium air flow in shock tubes are obtained using the computer code STRAP (Shock-Tube Radiation Program), and for flow along the stagnation streamline in the shock layer over spherical bodies using the newly developed code STRAP (Stagnation-Point Radiation Program). Substantial agreement is shown between the theoretical and experimental results for relaxation times and radiative heat fluxes. At very high temperatures the spectral calculations need further improvement. The present agreement provides strong evidence that the two-temperature model characterizes principal features of nonequilibrium air flow. New theoretical results using the model are presented for the radiative heat fluxes at the stagnation point of a 6-m-radius sphere, representing an aeroassisted orbital transfer vehicle, over a range of free-stream conditions. Assumptions, approximations, and limitations of the model are discussed.
Temperature dependence of heterogeneous nucleation: Extension of the Fletcher model
NASA Astrophysics Data System (ADS)
McGraw, Robert; Winkler, Paul; Wagner, Paul
2015-04-01
Recently there have been several cases reported where the critical saturation ratio for onset of heterogeneous nucleation increases with nucleation temperature (positive slope dependence). This behavior contrasts with the behavior observed in homogeneous nucleation, where a decreasing critical saturation ratio with increasing nucleation temperature (negative slope dependence) seems universal. For this reason the positive slope dependence is referred to as anomalous. Negative slope dependence is found in heterogeneous nucleation as well, but because so few temperature-dependent measurements have been reported, it is not presently clear which slope condition (positive or negative) will become more frequent. Especially interesting is the case of water vapor condensation on silver nanoparticles [Kupc et al., AS&T 47: i-iv, 2013] where the critical saturation ratio for heterogeneous nucleation onset passes through a maximum, at about 278K, with higher (lower) temperatures showing the usual (anomalous) temperature dependence. In the present study we develop an extension of Fletcher's classical, capillarity-based, model of heterogeneous nucleation that explicitly resolves the roles of surface energy and surface entropy in determining temperature dependence. Application of the second nucleation theorem, which relates temperature dependence of nucleation rate to cluster energy, yields both necessary and sufficient conditions for anomalous temperature behavior in the extended Fletcher model. In particular it is found that an increasing contact angle with temperature is a necessary, but not sufficient, condition for anomalous temperature dependence to occur. Methods for inferring microscopic contact angle and its temperature dependence from heterogeneous nucleation probability measurements are discussed in light of the new theory.
Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks
Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.
2011-01-01
Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
A Temperature-Dependent Hysteresis Model for Relaxor Ferroelectric Compounds
2004-01-01
PMN-driven flextensional sonar transducer submersed in water experiences a temperature increase of approximately 40oC before equilibrium is reached [9...nonlinearities materials 1. Introduction Transducers employing relaxor ferroelectric materials are increasingly considered for applications ranging from...facilitates subsequent transducer design and model-based control implementation. A number of the initial models for the relaxor ferroelectric
NASA Astrophysics Data System (ADS)
Richardson, M.; Cowtan, K.; Hawkins, E.; Stolpe, M.
2015-12-01
Observational temperature records such as HadCRUT4 typically have incomplete geographical coverage and blend air temperature over land with sea surface temperatures over ocean, in contrast to model output which is commonly reported as global air temperature. This complicates estimation of properties such as the transient climate response (TCR). Observation-based estimates of TCR have been made using energy-budget constraints applied to time series of historical radiative forcing and surface temperature changes, while model TCR is formally derived from simulations where CO2 increases at 1% per year. We perform a like-with-like comparison using three published energy-budget methods to derive modelled TCR from historical CMIP5 temperature series sampled in a manner consistent with HadCRUT4. Observation-based TCR estimates agree to within 0.12 K of the multi-model mean in each case and for 2 of the 3 energy-budget methods the observation-based TCR is higher than the multi-model mean. For one energy-budget method, using the HadCRUT4 blending method leads to a TCR underestimate of 0.3±0.1 K, relative to that estimated using global near-surface air temperatures.
Molecular Modeling of High-Temperature Oxidation of Refractory Borides
2008-02-01
04-Feb 08 Final Technical 4. TITLE AND SUBTITLE 5a. CONTRACT NUMBER MOLECULAR MODELING OF HIGH-TEMPERATURE OXIDATION OF REFRACTORY BORIDES 5b. GRANT...Prescribed by ANSI Std. Z39.18 MOLECULAR MODELING OF HIGH-TEMPERATURE OXIDATION OF REFRACTORY BORIDES FA9550-05-1-0026 Final Report (11/15/2004-02/14...deficient centers, instead of molecular 02 as in the Deal-Grove model . These network defects will lead to sub-linear dependence of the oxidation rate with
Forecasting Groundwater Temperature with Linear Regression Models Using Historical Data.
Figura, Simon; Livingstone, David M; Kipfer, Rolf
2015-01-01
Although temperature is an important determinant of many biogeochemical processes in groundwater, very few studies have attempted to forecast the response of groundwater temperature to future climate warming. Using a composite linear regression model based on the lagged relationship between historical groundwater and regional air temperature data, empirical forecasts were made of groundwater temperature in several aquifers in Switzerland up to the end of the current century. The model was fed with regional air temperature projections calculated for greenhouse-gas emissions scenarios A2, A1B, and RCP3PD. Model evaluation revealed that the approach taken is adequate only when the data used to calibrate the models are sufficiently long and contain sufficient variability. These conditions were satisfied for three aquifers, all fed by riverbank infiltration. The forecasts suggest that with respect to the reference period 1980 to 2009, groundwater temperature in these aquifers will most likely increase by 1.1 to 3.8 K by the end of the current century, depending on the greenhouse-gas emissions scenario employed.
An Analytic Function of Lunar Surface Temperature for Exospheric Modeling
NASA Technical Reports Server (NTRS)
Hurley, Dana M.; Sarantos, Menelaos; Grava, Cesare; Williams, Jean-Pierre; Retherford, Kurt D.; Siegler, Matthew; Greenhagen, Benjamin; Paige, David
2014-01-01
We present an analytic expression to represent the lunar surface temperature as a function of Sun-state latitude and local time. The approximation represents neither topographical features nor compositional effects and therefore does not change as a function of selenographic latitude and longitude. The function reproduces the surface temperature measured by Diviner to within +/-10 K at 72% of grid points for dayside solar zenith angles of less than 80, and at 98% of grid points for nightside solar zenith angles greater than 100. The analytic function is least accurate at the terminator, where there is a strong gradient in the temperature, and the polar regions. Topographic features have a larger effect on the actual temperature near the terminator than at other solar zenith angles. For exospheric modeling the effects of topography on the thermal model can be approximated by using an effective longitude for determining the temperature. This effective longitude is randomly redistributed with 1 sigma of 4.5deg. The resulting ''roughened'' analytical model well represents the statistical dispersion in the Diviner data and is expected to be generally useful for future models of lunar surface temperature, especially those implemented within exospheric simulations that address questions of volatile transport.
Lattice Boltzmann model for simulating temperature-sensitive ferrofluids.
Niu, Xiao-Dong; Yamaguchi, Hiroshi; Yoshikawa, Keisuke
2009-04-01
In this paper, a lattice Boltzmann model for simulating temperature-sensitive ferrofluids is presented. The lattice Boltzmann equation for modeling the magnetic field is formulated using a scalar magnetic potential. Introducing a time derivative into the original elliptic equation for the scalar potential leads to an advection-diffusion equation, with an effective velocity determined by the temperature gradient. The time derivative is multiplied by an adjustable preconditioning parameter to ensure that the lattice Boltzmann solution remain close to a solution of the original elliptic equation for the scalar potential. To test the present lattice Boltzmann model, numerical simulations for the thermomagnetic nature convection of the ferrofluids in a cubic cavity are carried out. Good agreement between the obtained results and experimental data shows that the present lattice Boltzmann model is promising for studying temperature-sensitive ferrofluid flows.
River water temperature and fish growth forecasting models
NASA Astrophysics Data System (ADS)
Danner, E.; Pike, A.; Lindley, S.; Mendelssohn, R.; Dewitt, L.; Melton, F. S.; Nemani, R. R.; Hashimoto, H.
2010-12-01
Water is a valuable, limited, and highly regulated resource throughout the United States. When making decisions about water allocations, state and federal water project managers must consider the short-term and long-term needs of agriculture, urban users, hydroelectric production, flood control, and the ecosystems downstream. In the Central Valley of California, river water temperature is a critical indicator of habitat quality for endangered salmonid species and affects re-licensing of major water projects and dam operations worth billions of dollars. There is consequently strong interest in modeling water temperature dynamics and the subsequent impacts on fish growth in such regulated rivers. However, the accuracy of current stream temperature models is limited by the lack of spatially detailed meteorological forecasts. To address these issues, we developed a high-resolution deterministic 1-dimensional stream temperature model (sub-hourly time step, sub-kilometer spatial resolution) in a state-space framework, and applied this model to Upper Sacramento River. We then adapted salmon bioenergetics models to incorporate the temperature data at sub-hourly time steps to provide more realistic estimates of salmon growth. The temperature model uses physically-based heat budgets to calculate the rate of heat transfer to/from the river. We use variables provided by the TOPS-WRF (Terrestrial Observation and Prediction System - Weather Research and Forecasting) model—a high-resolution assimilation of satellite-derived meteorological observations and numerical weather simulations—as inputs. The TOPS-WRF framework allows us to improve the spatial and temporal resolution of stream temperature predictions. The salmon growth models are adapted from the Wisconsin bioenergetics model. We have made the output from both models available on an interactive website so that water and fisheries managers can determine the past, current and three day forecasted water temperatures at
ERIC Educational Resources Information Center
Fan, Xitao; Wang, Lin; Thompson, Bruce
1999-01-01
A Monte Carlo simulation study investigated the effects on 10 structural equation modeling fit indexes of sample size, estimation method, and model specification. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)
NASA Technical Reports Server (NTRS)
Parker, K. C.; Torian, J. G.
1980-01-01
A sample environmental control and life support model performance analysis using the environmental analysis routines library is presented. An example of a complete model set up and execution is provided. The particular model was synthesized to utilize all of the component performance routines and most of the program options.
Quantum coherence of spin-boson model at finite temperature
NASA Astrophysics Data System (ADS)
Wu, Wei; Xu, Jing-Bo
2017-02-01
We investigate the dynamical behavior of quantum coherence in spin-boson model, which consists of a qubit coupled to a finite-temperature bosonic bath with power-law spectral density beyond rotating wave approximation, by employing l1-norm as well as quantum relative entropy. It is shown that the temperature of bosonic bath and counter-rotating terms significantly affect the decoherence rate in sub-Ohmic, Ohmic and super-Ohmic baths. At high temperature, we find the counter-rotating terms of spin-boson model are able to increase the decoherence rate for sub-Ohmic baths, however, for Ohmic and super-Ohmic baths, the counter-rotating terms tend to decrease the value of decoherence rate. At low temperature, we find the counter-rotating terms always play a positive role in preserving the qubit's quantum coherence regardless of sub-Ohmic, Ohmic and super-Ohmic baths.
A model of the ground surface temperature for micrometeorological analysis
NASA Astrophysics Data System (ADS)
Leaf, Julian S.; Erell, Evyatar
2017-07-01
Micrometeorological models at various scales require ground surface temperature, which may not always be measured in sufficient spatial or temporal detail. There is thus a need for a model that can calculate the surface temperature using only widely available weather data, thermal properties of the ground, and surface properties. The vegetated/permeable surface energy balance (VP-SEB) model introduced here requires no a priori knowledge of soil temperature or moisture at any depth. It combines a two-layer characterization of the soil column following the heat conservation law with a sinusoidal function to estimate deep soil temperature, and a simplified procedure for calculating moisture content. A physically based solution is used for each of the energy balance components allowing VP-SEB to be highly portable. VP-SEB was tested using field data measuring bare loess desert soil in dry weather and following rain events. Modeled hourly surface temperature correlated well with the measured data (r 2 = 0.95 for a whole year), with a root-mean-square error of 2.77 K. The model was used to generate input for a pedestrian thermal comfort study using the Index of Thermal Stress (ITS). The simulation shows that the thermal stress on a pedestrian standing in the sun on a fully paved surface, which may be over 500 W on a warm summer day, may be as much as 100 W lower on a grass surface exposed to the same meteorological conditions.
Applications of a New England stream temperature model to ...
We have applied a statistical stream network (SSN) model to predict stream thermal metrics (summer monthly medians, growing season maximum magnitude and timing, and daily rates of change) across New England nontidal streams and rivers, excluding northern Maine watersheds that extend into Canada (Detenbeck et al., in review). We excluded stream temperature observations within one kilometer downstream of dams from our model development, so our predictions for those reaches represent potential thermal regimes in the absence of dam effects. We used stream thermal thresholds for mean July temperatures delineating transitions between coldwater, transitional coolwater, and warmwater fish communities derived by Beauchene et al. (2014) to classify expected stream and river thermal regimes across New England. Within the model domain and based on 2006 land-use and air temperatures, the model predicts that 21.8% of stream + river kilometers would support coldwater fish communities (mean July water temperatures 22.3 degrees C mean July temperatures). Application of the model allows us to assess potential condition given full riparian zone restoration as well as potential loss of cold or coolwater habitat given loss of riparian shading. Given restoration of all ripa
Predicting wastewater temperatures in sewer pipes using abductive network models.
Abdel-Aal, M; Mohamed, M; Smits, R; Abdel-Aal, R E; De Gussem, K; Schellart, A; Tait, S
2015-01-01
A predictive modelling technique was employed to estimate wastewater temperatures in sewer pipes. The simplicity of abductive predictive models attracts large numbers of users due to their minimal computation time and limited number of measurable input parameters. Data measured from five sewer pipes over a period of 12 months provide 33,900 training entries and 39,000 evaluation entries to support the models' development. Two simple predictive models for urban upstream combined sewers and large downstream collector sewers were developed. They delivered good correlation between measured and predicted wastewater temperatures proven by their R(2) values of up to 0.98 and root mean square error (RMSE) of the temperature change along the sewer pipe ranging from 0.15 °C to 0.33 °C. Analysis of a number of potential input parameters indicated that upstream wastewater temperature and downstream in-sewer air temperature were the only input parameters that are needed in the developed models to deliver this level of accuracy.
Smith, Megan M.; Hao, Yue; Mason, Harris E.; Carroll, Susan A.
2014-12-31
Reactive experiments were performed to expose sample cores from the Arbuckle carbonate reservoir to CO₂-acidified brine under reservoir temperature and pressure conditions. The samples consisted of dolomite with varying quantities of calcite and silica/chert. The timescales of monitored pressure decline across each sample in response to CO₂ exposure, as well as the amount of and nature of dissolution features, varied widely among these three experiments. For all samples cores, the experimentally measured initial permeability was at least one order of magnitude or more lower than the values estimated from downhole methods. Nondestructive X-ray computed tomography (XRCT) imaging revealed dissolution features including “wormholes,” removal of fracture-filling crystals, and widening of pre-existing pore spaces. In the injection zone sample, multiple fractures may have contributed to the high initial permeability of this core and restricted the distribution of CO₂-induced mineral dissolution. In contrast, the pre-existing porosity of the baffle zone sample was much lower and less connected, leading to a lower initial permeability and contributing to the development of a single dissolution channel. While calcite may make up only a small percentage of the overall sample composition, its location and the effects of its dissolution have an outsized effect on permeability responses to CO₂ exposure. The XRCT data presented here are informative for building the model domain for numerical simulations of these experiments but require calibration by higher resolution means to confidently evaluate different porosity-permeability relationships.
Gordon, J.D.; Schroder, L.J.; Morden-Moore, A. L.; Bowersox, V.C.
1995-01-01
Separate experiments by the U.S. Geological Survey (USGS) and the Illinois State Water Survey Central Analytical Laboratory (CAL) independently assessed the stability of hydrogen ion and specific conductance in filtered wet-deposition samples stored at ambient temperatures. The USGS experiment represented a test of sample stability under a diverse range of conditions, whereas the CAL experiment was a controlled test of sample stability. In the experiment by the USGS, a statistically significant (?? = 0.05) relation between [H+] and time was found for the composited filtered, natural, wet-deposition solution when all reported values are included in the analysis. However, if two outlying pH values most likely representing measurement error are excluded from the analysis, the change in [H+] over time was not statistically significant. In the experiment by the CAL, randomly selected samples were reanalyzed between July 1984 and February 1991. The original analysis and reanalysis pairs revealed that [H+] differences, although very small, were statistically different from zero, whereas specific-conductance differences were not. Nevertheless, the results of the CAL reanalysis project indicate there appears to be no consistent, chemically significant degradation in sample integrity with regard to [H+] and specific conductance while samples are stored at room temperature at the CAL. Based on the results of the CAL and USGS studies, short-term (45-60 day) stability of [H+] and specific conductance in natural filtered wet-deposition samples that are shipped and stored unchilled at ambient temperatures was satisfactory.
A constitutive model with damage for high temperature superalloys
NASA Technical Reports Server (NTRS)
Sherwood, J. A.; Stouffer, D. C.
1988-01-01
A unified constitutive model is searched for that is applicable for high temperature superalloys used in modern gas turbines. Two unified inelastic state variable constitutive models were evaluated for use with the damage parameter proposed by Kachanov. The first is a model (Bodner, Partom) in which hardening is modeled through the use of a single state variable that is similar to drag stress. The other (Ramaswamy) employs both a drag stress and back stress. The extension was successful for predicting the tensile, creep, fatigue, torsional and nonproportional response of Rene' 80 at several temperatures. In both formulations, a cumulative damage parameter is introduced to model the changes in material properties due to the formation of microcracks and microvoids that ultimately produce a macroscopic crack. A back stress/drag stress/damage model was evaluated for Rene' 95 at 1200 F and is shown to predict the tensile, creep, and cyclic loading responses reasonably well.
ACTINIDE REMOVAL PROCESS SAMPLE ANALYSIS, CHEMICAL MODELING, AND FILTRATION EVALUATION
Martino, C.; Herman, D.; Pike, J.; Peters, T.
2014-06-05
Filtration within the Actinide Removal Process (ARP) currently limits the throughput in interim salt processing at the Savannah River Site. In this process, batches of salt solution with Monosodium Titanate (MST) sorbent are concentrated by crossflow filtration. The filtrate is subsequently processed to remove cesium in the Modular Caustic Side Solvent Extraction Unit (MCU) followed by disposal in saltstone grout. The concentrated MST slurry is washed and sent to the Defense Waste Processing Facility (DWPF) for vitrification. During recent ARP processing, there has been a degradation of filter performance manifested as the inability to maintain high filtrate flux throughout a multi-batch cycle. The objectives of this effort were to characterize the feed streams, to determine if solids (in addition to MST) are precipitating and causing the degraded performance of the filters, and to assess the particle size and rheological data to address potential filtration impacts. Equilibrium modelling with OLI Analyzer{sup TM} and OLI ESP{sup TM} was performed to determine chemical components at risk of precipitation and to simulate the ARP process. The performance of ARP filtration was evaluated to review potential causes of the observed filter behavior. Task activities for this study included extensive physical and chemical analysis of samples from the Late Wash Pump Tank (LWPT) and the Late Wash Hold Tank (LWHT) within ARP as well as samples of the tank farm feed from Tank 49H. The samples from the LWPT and LWHT were obtained from several stages of processing of Salt Batch 6D, Cycle 6, Batch 16.
Defining Predictive Probability Functions for Species Sampling Models.
Lee, Jaeyong; Quintana, Fernando A; Müller, Peter; Trippa, Lorenzo
2013-01-01
We review the class of species sampling models (SSM). In particular, we investigate the relation between the exchangeable partition probability function (EPPF) and the predictive probability function (PPF). It is straightforward to define a PPF from an EPPF, but the converse is not necessarily true. In this paper we introduce the notion of putative PPFs and show novel conditions for a putative PPF to define an EPPF. We show that all possible PPFs in a certain class have to define (unnormalized) probabilities for cluster membership that are linear in cluster size. We give a new necessary and sufficient condition for arbitrary putative PPFs to define an EPPF. Finally, we show posterior inference for a large class of SSMs with a PPF that is not linear in cluster size and discuss a numerical method to derive its PPF.
Modeling low-temperature geochemical processes: Chapter 2
Nordstrom, D. Kirk; Campbell, Kate M.
2014-01-01
This chapter provides an overview of geochemical modeling that applies to water–rock interactions under ambient conditions of temperature and pressure. Topics include modeling definitions, historical background, issues of activity coefficients, popular codes and databases, examples of modeling common types of water–rock interactions, and issues of model reliability. Examples include speciation, microbial redox kinetics and ferrous iron oxidation, calcite dissolution, pyrite oxidation, combined pyrite and calcite dissolution, dedolomitization, seawater–carbonate groundwater mixing, reactive-transport modeling in streams, modeling catchments, and evaporation of seawater. The chapter emphasizes limitations to geochemical modeling: that a proper understanding and ability to communicate model results well are as important as completing a set of useful modeling computations and that greater sophistication in model and code development is not necessarily an advancement. If the goal is to understand how a particular geochemical system behaves, it is better to collect more field data than rely on computer codes.
On the temperature model of CO{sub 2} lasers
Nevdakh, Vladimir V; Ganjali, Monireh; Arshinov, K I
2007-03-31
A refined temperature model of CO{sub 2} lasers is presented, which takes into account the fact that vibrational modes of the CO{sub 2} molecule have the common ground vibrational level. New formulas for the occupation numbers and the vibrational energy storage in individual modes are obtained as well as expressions relating the vibrational temperatures of the CO{sub 2} molecules with the excitation and relaxation rates of lower vibrational levels of modes upon excitation of the CO{sub 2}-N{sub 2}-He mixture in an electric discharge. The character of dependences of the vibrational temperatures on the discharge current is discussed. (active media)
Temperature-dependent bursting pattern analysis by modified Plant model
2014-01-01
Many electrophysiological properties of neuron including firing rates and rhythmical oscillation change in response to a temperature variation, but the mechanism underlying these correlations remains unverified. In this study, we analyzed various action potential (AP) parameters of bursting pacemaker neurons in the abdominal ganglion of Aplysia juliana to examine whether or not bursting patterns are altered in response to temperature change. Here we found that the inter-burst interval, burst duration, and number of spike during burst decreased as temperature increased. On the other hand, the numbers of bursts per minute and numbers of spikes per minute increased and then decreased, but interspike interval during burst firstly decreased and then increased. We also tested the reproducibility of temperature-dependent changes in bursting patterns and AP parameters. Finally we performed computational simulations of these phenomena by using a modified Plant model composed of equations with temperature-dependent scaling factors to mathematically clarify the temperature-dependent changes of bursting patterns in burst-firing neurons. Taken together, we found that the modified Plant model could trace the ionic mechanism underlying the temperature-dependent change in bursting pattern from experiments with bursting pacemaker neurons in the abdominal ganglia of Aplysia juliana. PMID:25051923
Integrated Modeling of Spacecraft Touch-and-Go Sampling
NASA Technical Reports Server (NTRS)
Quadrelli, Marco
2009-01-01
An integrated modeling tool has been developed to include multi-body dynamics, orbital dynamics, and touch-and-go dynamics for spacecraft covering three types of end-effectors: a sticky pad, a brush-wheel sampler, and a pellet gun. Several multi-body models of a free-flying spacecraft with a multi-link manipulator driving these end-effectors have been tested with typical contact conditions arising when the manipulator arm is to sample the surface of an asteroidal body. The test data have been infused directly into the dynamics formulation including such information as the mass collected as a function of end-effector longitudinal speed for the brush-wheel and sticky-pad samplers, and the mass collected as a function of projectile speed for the pellet gun sampler. These data represent the realistic behavior of the end effector while in contact with a surface, and represent a low-order model of more complex contact conditions that otherwise would have to be simulated. Numerical results demonstrate the adequacy of these multibody models for spacecraft and manipulator- arm control design. The work contributes to the development of a touch-and-go testbed for small body exploration, denoted as the GREX Testbed (GN&C for Rendezvous-based EXploration). The GREX testbed addresses the key issues involved in landing on an asteroidal body or comet; namely, a complex, low-gravity field; partially known terrain properties; possible comet outgassing; dust ejection; and navigating to a safe and scientifically desirable zone.
Friedberg-Lee model at finite temperature and density
NASA Astrophysics Data System (ADS)
Mao, Hong; Yao, Minjie; Zhao, Wei-Qin
2008-06-01
The Friedberg-Lee model is studied at finite temperature and density. By using the finite temperature field theory, the effective potential of the Friedberg-Lee model and the bag constant B(T) and B(T,μ) have been calculated at different temperatures and densities. It is shown that there is a critical temperature TC≃106.6 MeV when μ=0 MeV and a critical chemical potential μ≃223.1 MeV for fixing the temperature at T=50 MeV. We also calculate the soliton solutions of the Friedberg-Lee model at finite temperature and density. It turns out that when T⩽TC (or μ⩽μC), there is a bag constant B(T) [or B(T,μ)] and the soliton solutions are stable. However, when T>TC (or μ>μC) the bag constant B(T)=0 MeV [or B(T,μ)=0 MeV] and there is no soliton solution anymore, therefore, the confinement of quarks disappears quickly.
Heat Transfer Modeling for Rigid High-Temperature Fibrous Insulation
NASA Technical Reports Server (NTRS)
Daryabeigi, Kamran; Cunnington, George R.; Knutson, Jeffrey R.
2012-01-01
Combined radiation and conduction heat transfer through a high-temperature, high-porosity, rigid multiple-fiber fibrous insulation was modeled using a thermal model previously used to model heat transfer in flexible single-fiber fibrous insulation. The rigid insulation studied was alumina enhanced thermal barrier (AETB) at densities between 130 and 260 kilograms per cubic meter. The model consists of using the diffusion approximation for radiation heat transfer, a semi-empirical solid conduction model, and a standard gas conduction model. The relevant parameters needed for the heat transfer model were estimated from steady-state thermal measurements in nitrogen gas at various temperatures and environmental pressures. The heat transfer modeling methodology was evaluated by comparison with standard thermal conductivity measurements, and steady-state thermal measurements in helium and carbon dioxide gases. The heat transfer model is applicable over the temperature range of 300 to 1360 K, pressure range of 0.133 to 101.3 x 10(exp 3) Pa, and over the insulation density range of 130 to 260 kilograms per cubic meter in various gaseous environments.
NASA Astrophysics Data System (ADS)
Frances, Colleen Elizabeth
Fires are responsible for the loss of thousands of lives and billions of dollars in property damage each year in the United States. Flame retardants can assist in the prevention of fires through mechanisms which either prevent or greatly inhibit flame spread and development. In this study samples of both brominated and non-brominated polystyrene were tested in the Milligram-scale Flaming Calorimeter and images captured with two DSL-R cameras were analyzed to determine flame temperatures through use of a non-intrusive method. Based on the flame temperature measurement results, a better understanding of the gas phase mechanisms of flame retardants may result, as temperature is an important diagnostic in the study of fire and combustion. Measurements taken at 70% of the total flame height resulted in average maximum temperatures of about 1656 K for polystyrene and about 1614 K for brominated polystyrene, suggesting that the polymer flame retardant may reduce flame temperatures.
Phase behaviors and membrane properties of model liposomes: temperature effect.
Wu, Hsing-Lun; Sheng, Yu-Jane; Tsao, Heng-Kwong
2014-09-28
The phase behaviors and membrane properties of small unilamellar vesicles have been explored at different temperatures by dissipative particle dynamics simulations. The vesicles spontaneously formed by model lipids exhibit pre-transition from gel to ripple phase and main transition from ripple to liquid phase. The vesicle shape exhibits the faceted feature at low temperature, becomes more sphere-like with increasing temperature, but loses its sphericity at high temperature. As the temperature rises, the vesicle size grows but the membrane thickness declines. The main transition (Tm) can be identified by the inflection point. The membrane structural characteristics are analyzed. The inner and outer leaflets are asymmetric. The length of the lipid tail and area density of the lipid head in both leaflets decrease with increasing temperature. However, the mean lipid volume grows at low temperature but declines at high temperature. The membrane mechanical properties are also investigated. The water permeability grows exponentially with increasing T but the membrane tension peaks at Tm. Both the bending and stretching moduli have their minima near Tm. Those results are consistent with the experimental observations, indicating that the main signatures associated with phase transition are clearly observed in small unilamellar vesicles.
Phase behaviors and membrane properties of model liposomes: Temperature effect
NASA Astrophysics Data System (ADS)
Wu, Hsing-Lun; Sheng, Yu-Jane; Tsao, Heng-Kwong
2014-09-01
The phase behaviors and membrane properties of small unilamellar vesicles have been explored at different temperatures by dissipative particle dynamics simulations. The vesicles spontaneously formed by model lipids exhibit pre-transition from gel to ripple phase and main transition from ripple to liquid phase. The vesicle shape exhibits the faceted feature at low temperature, becomes more sphere-like with increasing temperature, but loses its sphericity at high temperature. As the temperature rises, the vesicle size grows but the membrane thickness declines. The main transition (Tm) can be identified by the inflection point. The membrane structural characteristics are analyzed. The inner and outer leaflets are asymmetric. The length of the lipid tail and area density of the lipid head in both leaflets decrease with increasing temperature. However, the mean lipid volume grows at low temperature but declines at high temperature. The membrane mechanical properties are also investigated. The water permeability grows exponentially with increasing T but the membrane tension peaks at Tm. Both the bending and stretching moduli have their minima near Tm. Those results are consistent with the experimental observations, indicating that the main signatures associated with phase transition are clearly observed in small unilamellar vesicles.
An exospheric temperature model from CHAMP thermospheric density
NASA Astrophysics Data System (ADS)
Weng, Libin; Lei, Jiuhou; Sutton, Eric; Dou, Xiankang; Fang, Hanxian
2017-02-01
In this study, the effective exospheric temperature, named as T∞, derived from thermospheric densities measured by the CHAMP satellite during 2002-2010 was utilized to develop an exospheric temperature model (ETM) with the aid of the NRLMSISE-00 model. In the ETM, the temperature variations are characterized as a function of latitude, local time, season, and solar and geomagnetic activities. The ETM is validated by the independent GRACE measurements, and it is found that T∞ and thermospheric densities from the ETM are in better agreement with the GRACE data than those from the NRLMSISE-00 model. In addition, the ETM captures well the thermospheric equatorial anomaly feature, seasonal variation, and the hemispheric asymmetry in the thermosphere.
Understanding and quantifying foliar temperature acclimation for Earth System Models
NASA Astrophysics Data System (ADS)
Smith, N. G.; Dukes, J.
2015-12-01
Photosynthesis and respiration on land are the two largest carbon fluxes between the atmosphere and Earth's surface. The parameterization of these processes represent major uncertainties in the terrestrial component of the Earth System Models used to project future climate change. Research has shown that much of this uncertainty is due to the parameterization of the temperature responses of leaf photosynthesis and autotrophic respiration, which are typically based on short-term empirical responses. Here, we show that including longer-term responses to temperature, such as temperature acclimation, can help to reduce this uncertainty and improve model performance, leading to drastic changes in future land-atmosphere carbon feedbacks across multiple models. However, these acclimation formulations have many flaws, including an underrepresentation of many important global flora. In addition, these parameterizations were done using multiple studies that employed differing methodology. As such, we used a consistent methodology to quantify the short- and long-term temperature responses of maximum Rubisco carboxylation (Vcmax), maximum rate of Ribulos-1,5-bisphosphate regeneration (Jmax), and dark respiration (Rd) in multiple species representing each of the plant functional types used in global-scale land surface models. Short-term temperature responses of each process were measured in individuals acclimated for 7 days at one of 5 temperatures (15-35°C). The comparison of short-term curves in plants acclimated to different temperatures were used to evaluate long-term responses. Our analyses indicated that the instantaneous response of each parameter was highly sensitive to the temperature at which they were acclimated. However, we found that this sensitivity was larger in species whose leaves typically experience a greater range of temperatures over the course of their lifespan. These data indicate that models using previous acclimation formulations are likely incorrectly
Elmouttie, David; Flinn, Paul; Kiermeier, Andreas; Subramanyam, Bhadriraju; Hagstrum, David; Hamilton, Grant
2013-09-01
Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult owing to species biology and behavioural characteristics. The design of robust sampling programmes should be based on an underlying statistical distribution that is sufficiently flexible to capture variations in the spatial distribution of the target species. Comparisons are made of the accuracy of four probability-of-detection sampling models - the negative binomial model,(1) the Poisson model,(1) the double logarithmic model(2) and the compound model(3) - for detection of insects over a broad range of insect densities. Although the double log and negative binomial models performed well under specific conditions, it is shown that, of the four models examined, the compound model performed the best over a broad range of insect spatial distributions and densities. In particular, this model predicted well the number of samples required when insect density was high and clumped within experimental storages. This paper reinforces the need for effective sampling programs designed to detect insects over a broad range of spatial distributions. The compound model is robust over a broad range of insect densities and leads to substantial improvement in detection probabilities within highly variable systems such as grain storage. © 2013 Society of Chemical Industry.
Ignition and temperature behavior of a single-wall carbon nanotube sample.
Volotskova, O; Shashurin, A; Keidar, M; Raitses, Y; Demidov, V; Adams, S
2010-03-05
The electrical resistance of mats of single-wall carbon nanotubes (SWNTs) is measured as a function of mat temperature under various helium pressures, in vacuum and in atmospheric air. The objective of this paper is to study the thermal stability of SWNTs produced in a helium arc discharge in the experimental conditions close to natural conditions of SWNT growth in an arc, using a furnace instead of an arc discharge. For each tested condition, there is a temperature threshold at which the mat's resistance reaches its minimum. The threshold value depends on the helium pressure. An increase of the temperature above the temperature threshold leads to the destruction of SWNT bundles at a certain critical temperature. For instance, the critical temperature is about 1100 K in the case of helium background at a pressure of about 500 Torr. Based on experimental data on critical temperature it is suggested that SWNTs produced by an anodic arc discharge and collected in the web area outside the arc plasma most likely originate from the arc discharge peripheral region.
LOW TEMPERATURE X-RAY DIFFRACTION STUDIES OF NATURAL GAS HYDRATE SAMPLES FROM THE GULF OF MEXICO
Rawn, Claudia J; Sassen, Roger; Ulrich, Shannon M; Phelps, Tommy Joe; Chakoumakos, Bryan C; Payzant, E Andrew
2008-01-01
Clathrate hydrates of methane and other small alkanes occur widespread terrestrially in marine sediments of the continental margins and in permafrost sediments of the arctic. Quantitative study of natural clathrate hydrates is hampered by the difficulty in obtaining pristine samples, particularly from submarine environments. Bringing samples of clathrate hydrate from the seafloor at depths without compromising their integrity is not trivial. Most physical property measurements are based on studies of laboratory-synthesized samples. Here we report X-ray powder diffraction measurements of a natural gas hydrate sample from the Green Canyon, Gulf of Mexico. The first data were collected in 2002 and revealed ice and structure II gas hydrate. In the subsequent time the sample has been stored in liquid nitrogen. More recent X-ray powder diffraction data have been collected as functions of temperature and time. This new data indicates that the larger sample is heterogeneous in ice content and shows that the amount of sII hydrate decreases with increasing temperature and time as expected. However, the dissociation rate is higher at lower temperatures and earlier in the experiment.
NASA Astrophysics Data System (ADS)
Beardsley, Christine; Moss, Shaun M.; Azam, Farooq
2008-06-01
Marine, pelagic prokaryotes commonly are visualized and enumerated by epifluorescence microscopy after staining with fluorescent, DNA-binding dyes and sample preparation and storage has a major influence on obtaining reliable estimates. However, sampling often takes place in remote locations and the recommended continuous sample storage at -20°C until further sample evaluation is often logistically challenging or infeasible. We investigated the effect of storage temperature on fixed and filtered seawater samples for subsequent enumeration of total prokaryotic cells and community composition analysis by fluorescence in situ hybridization (FISH). Prokaryotic abundance in surface seawater was not significantly different after 99 days when filters were stored either at room temperature (RT) or at -20°C. Furthermore, there was no loss in detection rates of phylotypes by FISH from filters stored at RT or -20°C for 28-30 days. We conclude that fixed and filtered seawater samples intended for total prokaryote counts or for FISH may be maintained long-term at room temperature, and this should logistically facilitate diverse studies of prokaryote ecology, biogeography, and the occurrence of human and fish/shellfish pathogens.
Measuring and modeling hemoglobin aggregation below the freezing temperature.
Rosa, Mónica; Lopes, Carlos; Melo, Eduardo P; Singh, Satish K; Geraldes, Vitor; Rodrigues, Miguel A
2013-08-01
Freezing of protein solutions is required for many applications such as storage, transport, or lyophilization; however, freezing has inherent risks for protein integrity. It is difficult to study protein stability below the freezing temperature because phase separation constrains solute concentration in solution. In this work, we developed an isochoric method to study protein aggregation in solutions at -5, -10, -15, and -20 °C. Lowering the temperature below the freezing point in a fixed volume prevents the aqueous solution from freezing, as pressure rises until equilibrium (P,T) is reached. Aggregation rates of bovine hemoglobin (BHb) increased at lower temperature (-20 °C) and higher BHb concentration. However, the addition of sucrose substantially decreased the aggregation rate and prevented aggregation when the concentration reached 300 g/L. The unfolding thermodynamics of BHb was studied using fluorescence, and the fraction of unfolded protein as a function of temperature was determined. A mathematical model was applied to describe BHb aggregation below the freezing temperature. This model was able to predict the aggregation curves for various storage temperatures and initial concentrations of BHb. The aggregation mechanism was revealed to be mediated by an unfolded state, followed by a fast growth of aggregates that readily precipitate. The aggregation kinetics increased for lower temperature because of the higher fraction of unfolded BHb closer to the cold denaturation temperature. Overall, the results obtained herein suggest that the isochoric method could provide a relatively simple approach to obtain fundamental thermodynamic information about the protein and the aggregation mechanism, thus providing a new approach to developing accelerated formulation studies below the freezing temperature.
Modeling Intertidal Species Body Temperatures Using A Modified land Surface Model
NASA Astrophysics Data System (ADS)
Wethey, D.; Chintalapati, S.; Lakshmi, V.
2008-12-01
The species in the coastal intertidal zone are exposed to both marine conditions (high tide) and terrestrial conditions (low tide). Modeling the body temperature of the species is critical to understand its physiologic response to climate variability along the coastline. We model the species body temperature and skin temperature using a modified biophysical model. The temperatures are predicted using a 1-D model of heat transport through three1-cm think layers representing the mussel bed, overlaying 20 1-cm think layers of bedrock. The biophysics of mass and heat transport is based on the NOAH land surface model. During high tide immersions, the outer layers of the mussel bed and the rock were fixed at satellite based MODIS observed sea surface temperature (SST). During low tide emersions, the temperatures are predicted using the heat transport from NOAH model. For each study location modeled, we use the North American Regional Reanalysis data to force the model. Compared to multiple years of field observations across various locations along the US west coast, the model was found to reproduce the monthly average temperatures within one percent accuracy. 75% of the model predicted daily maximum body temperatures fall within one standard deviation of observations made by replicate temperature loggers.
High-Temperature Expansions for Frenkel-Kontorova Model
NASA Astrophysics Data System (ADS)
Takahashi, K.; Mannari, I.; Ishii, T.
1995-02-01
Two high-temperature series expansions of the Frenkel-Kontorova (FK) model are investigated: the high-temperature approximation of Schneider-Stoll is extended to the FK model having the density ρ ≠ 1, and an alternative series expansion in terms of the modified Bessel function is examined. The first six-order terms for both expansions in free energy are explicitly obtained and compared with Ishii's approximation of the transfer-integral method. The specific heat based on the expansions is discussed by comparing with those of the transfer-integral method and Monte Carlo simulation.
Multicanonical sampling of the space of states of ℋ(2, n)-vector models
NASA Astrophysics Data System (ADS)
Shevchenko, Yu. A.; Makarov, A. G.; Andriushchenko, P. D.; Nefedev, K. V.
2017-06-01
Problems of temperature behavior of specific heat are solved by the entropy simulation method for Ising models on a simple square lattice and a square spin ice (SSI) lattice with nearest neighbor interaction, models of hexagonal lattices with short-range (SR) dipole interaction, as well as with long-range (LR) dipole interaction and free boundary conditions, and models of spin quasilattices with finite interaction radius. It is established that systems of a finite number of Ising spins with LR dipole interaction can have unusual thermodynamic properties characterized by several specific-heat peaks in the absence of an external magnetic field. For a parallel multicanonical sampling method, optimal schemes are found empirically for partitioning the space of states into energy bands for Ising and SSI models, methods of concatenation and renormalization of histograms are discussed, and a flatness criterion of histograms is proposed. It is established that there is no phase transition in a model with nearest neighbor interaction on a hexagonal lattice, while the temperature behavior of specific heat exhibits singularity in the same model, in case of LR interaction. A spin quasilattice is found that exhibits a nonzero value of residual entropy.
Statistical Modeling of Methane Production from Landfill Samples
Gurijala, K. R.; Sa, P.; Robinson, J. A.
1997-01-01
Multiple-regression analysis was conducted to evaluate the simultaneous effects of 10 environmental factors on the rate of methane production (MR) from 38 municipal solid-waste (MSW) samples collected from the Fresh Kills landfill, which is the world's largest landfill. The analyses showed that volatile solids (VS), moisture content (MO), sulfate (SO(inf4)(sup2-)), and the cellulose-to-lignin ratio (CLR) were significantly associated with MR from refuse. The remaining six factors did not show any significant effect on MR in the presence of the four significant factors. With the consideration of all possible linear, square, and cross-product terms of the four significant variables, a second-order statistical model was developed. This model incorporated linear terms of MO, VS, SO(inf4)(sup2-), and CLR, a square term of VS (VS(sup2)), and two cross-product terms, MO x CLR and VS x CLR. This model explained 95.85% of the total variability in MR as indicated by the coefficient of determination (R(sup2) value) and predicted 87% of the observed MR. Furthermore, the t statistics and their P values of least-squares parameter estimates and the coefficients of partial determination (R values) indicated that MO contributed the most (R = 0.7832, t = 7.60, and P = 0.0001), followed by VS, SO(inf4)(sup2-), VS(sup2), MO x CLR, and VS x CLR in that order, and that CLR contributed the least (R = 0.4050, t = -3.30, and P = 0.0045) to MR. The SO(inf4)(sup2-), VS(sup2), MO x CLR, and CLR showed an inhibitory effect on MR. The final fitted model captured the trends in the data by explaining vast majority of variation in MR and successfully predicted most of the observed MR. However, more analyses with data from other landfills around the world are needed to develop a generalized model to accurately predict MSW methanogenesis. PMID:16535704
Modeling acclimation of photosynthesis to temperature in evergreen conifer forests.
Gea-Izquierdo, Guillermo; Mäkelä, Annikki; Margolis, Hank; Bergeron, Yves; Black, T Andrew; Dunn, Allison; Hadley, Julian; Kyaw Tha Paw U; Falk, Matthias; Wharton, Sonia; Monson, Russell; Hollinger, David Y; Laurila, Tuomas; Aurela, Mika; McCaughey, Harry; Bourque, Charles; Vesala, Timo; Berninger, Frank
2010-10-01
• In this study, we used a canopy photosynthesis model which describes changes in photosynthetic capacity with slow temperature-dependent acclimations. • A flux-partitioning algorithm was applied to fit the photosynthesis model to net ecosystem exchange data for 12 evergreen coniferous forests from northern temperate and boreal regions. • The model accounted for much of the variation in photosynthetic production, with modeling efficiencies (mean > 67%) similar to those of more complex models. The parameter describing the rate of acclimation was larger at the northern sites, leading to a slower acclimation of photosynthesis to temperature. The response of the rates of photosynthesis to air temperature in spring was delayed up to several days at the coldest sites. Overall photosynthesis acclimation processes were slower at colder, northern locations than at warmer, more southern, and more maritime sites. • Consequently, slow changes in photosynthetic capacity were essential to explaining variations of photosynthesis for colder boreal forests (i.e. where acclimation of photosynthesis to temperature was slower), whereas the importance of these processes was minor in warmer conifer evergreen forests.
Modeling temperature inversion in southeastern Yellow Sea during winter 2016
NASA Astrophysics Data System (ADS)
Pang, Ig-Chan; Moon, Jae-Hong; Lee, Joon-Ho; Hong, Ji-Seok; Pang, Sung-Jun
2017-05-01
A significant temperature inversion with temperature differences larger than 3°C was observed in the southeastern Yellow Sea (YS) during February 2016. By analyzing in situ hydrographic profiles and results from a regional ocean model for the YS, this study examines the spatiotemporal evolution of the temperature inversion and its connection with wind-induced currents in winter. Observations reveal that in winter, when the northwesterly wind prevails over the YS, the temperature inversion occurs largely at the frontal zone southwest of Korea where warm/saline water of a Kuroshio origin meets cold/fresh coastal water. Our model successfully captures the temperature inversion observed in the winter of 2016 and suggests a close relation between northwesterly wind bursts and the occurrence of the large inversion. In this respect, the strong northwesterly wind drove cold coastal water southward in the upper layer via Ekman transport, which pushed the water mass southward and increased the sea level slope in the frontal zone in southeastern YS. The intensified sea level slope propagated northward away from the frontal zone as a shelf wave, causing a northward upwind flow response along the YS trough in the lower layer, thereby resulting in the large temperature inversion. Diagnostic analysis of the momentum balance shows that the westward pressure gradient, which developed with shelf wave propagation along the YS trough, was balanced with the Coriolis force in accordance with the northward upwind current in and around the inversion area.
NASA Astrophysics Data System (ADS)
Wüst, T.; Li, Y. W.; Landau, D. P.
2011-08-01
We describe a class of "bare bones" models of homopolymers which undergo coil-globule collapse and proteins which fold into their native states in free space or into denatured states when captured by an attractive substrate as the temperature is lowered. We then show how, with the use of a properly chosen trial move set, Wang-Landau Monte Carlo sampling can be used to study the rough free energy landscape and ground (native) states of these intriguingly simple systems and thus elucidate their thermodynamic complexity.
Madsen, Nis Dam; Kjelstrup-Hansen, Jakob
2017-01-01
We present a new method for measuring the piezoresistive gauge factor of a thin-film resistor based on three-point bending. A ceramic fixture has been designed and manufactured to fit a state-of-the-art mechanical testing apparatus (TA Instruments Q800). The method has been developed to test thin-film samples deposited on silicon substrates with an insulating layer of SiO2. The electrical connections to the resistor are achieved through contacts in the support points. This insures that the influence of the electrical contacts is reduced to a minimum and eliminates wire-bonding or connectors attached to the sample. During measurement, both force and deflection of the sample are recorded simultaneously with the electrical data. The data analysis extracts a precise measurement of the sample thickness (<1% error) in addition to the gauge factor and the temperature coefficient of resistivity. The sample thickness is a critical parameter for an accurate calculation of the strain in the thin-film resistor. This method provides a faster sample evaluation by eliminating an additional sample thickness measurement or alternatively an option for cross checking data. Furthermore, the method implements a full compensation of thermoelectrical effects, which could otherwise lead to significant errors at high temperature. We also discuss the magnitude of the error sources in the setup. The performance of the setup is demonstrated using a titanium nitride thin-film, which is tested up to 400 °C revealing the gauge factor behavior in this temperature span and the temperature coefficient of resistivity.
NASA Astrophysics Data System (ADS)
Madsen, Nis Dam; Kjelstrup-Hansen, Jakob
2017-01-01
We present a new method for measuring the piezoresistive gauge factor of a thin-film resistor based on three-point bending. A ceramic fixture has been designed and manufactured to fit a state-of-the-art mechanical testing apparatus (TA Instruments Q800). The method has been developed to test thin-film samples deposited on silicon substrates with an insulating layer of SiO2. The electrical connections to the resistor are achieved through contacts in the support points. This insures that the influence of the electrical contacts is reduced to a minimum and eliminates wire-bonding or connectors attached to the sample. During measurement, both force and deflection of the sample are recorded simultaneously with the electrical data. The data analysis extracts a precise measurement of the sample thickness (<1% error) in addition to the gauge factor and the temperature coefficient of resistivity. The sample thickness is a critical parameter for an accurate calculation of the strain in the thin-film resistor. This method provides a faster sample evaluation by eliminating an additional sample thickness measurement or alternatively an option for cross checking data. Furthermore, the method implements a full compensation of thermoelectrical effects, which could otherwise lead to significant errors at high temperature. We also discuss the magnitude of the error sources in the setup. The performance of the setup is demonstrated using a titanium nitride thin-film, which is tested up to 400 °C revealing the gauge factor behavior in this temperature span and the temperature coefficient of resistivity.
Thurber, Kent; Tycko, Robert
2016-03-01
We describe novel instrumentation for low-temperature solid state nuclear magnetic resonance (NMR) with dynamic nuclear polarization (DNP) and magic-angle spinning (MAS), focusing on aspects of this instrumentation that have not been described in detail in previous publications. We characterize the performance of an extended interaction oscillator (EIO) microwave source, operating near 264 GHz with 1.5 W output power, which we use in conjunction with a quasi-optical microwave polarizing system and a MAS NMR probe that employs liquid helium for sample cooling and nitrogen gas for sample spinning. Enhancement factors for cross-polarized (13)C NMR signals in the 100-200 range are demonstrated with DNP at 25K. The dependences of signal amplitudes on sample temperature, as well as microwave power, polarization, and frequency, are presented. We show that sample temperatures below 30K can be achieved with helium consumption rates below 1.3 l/h. To illustrate potential applications of this instrumentation in structural studies of biochemical systems, we compare results from low-temperature DNP experiments on a calmodulin-binding peptide in its free and bound states.
Modeling apple surface temperature dynamics based on weather data.
Li, Lei; Peters, Troy; Zhang, Qin; Zhang, Jingjin; Huang, Danfeng
2014-10-27
The exposure of fruit surfaces to direct sunlight during the summer months can result in sunburn damage. Losses due to sunburn damage are a major economic problem when marketing fresh apples. The objective of this study was to develop and validate a model for simulating fruit surface temperature (FST) dynamics based on energy balance and measured weather data. A series of weather data (air temperature, humidity, solar radiation, and wind speed) was recorded for seven hours between 11:00-18:00 for two months at fifteen minute intervals. To validate the model, the FSTs of "Fuji" apples were monitored using an infrared camera in a natural orchard environment. The FST dynamics were measured using a series of thermal images. For the apples that were completely exposed to the sun, the RMSE of the model for estimating FST was less than 2.0 °C. A sensitivity analysis of the emissivity of the apple surface and the conductance of the fruit surface to water vapour showed that accurate estimations of the apple surface emissivity were important for the model. The validation results showed that the model was capable of accurately describing the thermal performances of apples under different solar radiation intensities. Thus, this model could be used to more accurately estimate the FST relative to estimates that only consider the air temperature. In addition, this model provides useful information for sunburn protection management.
Modeling Apple Surface Temperature Dynamics Based on Weather Data
Li, Lei; Peters, Troy; Zhang, Qin; Zhang, Jingjin; Huang, Danfeng
2014-01-01
The exposure of fruit surfaces to direct sunlight during the summer months can result in sunburn damage. Losses due to sunburn damage are a major economic problem when marketing fresh apples. The objective of this study was to develop and validate a model for simulating fruit surface temperature (FST) dynamics based on energy balance and measured weather data. A series of weather data (air temperature, humidity, solar radiation, and wind speed) was recorded for seven hours between 11:00–18:00 for two months at fifteen minute intervals. To validate the model, the FSTs of “Fuji” apples were monitored using an infrared camera in a natural orchard environment. The FST dynamics were measured using a series of thermal images. For the apples that were completely exposed to the sun, the RMSE of the model for estimating FST was less than 2.0 °C. A sensitivity analysis of the emissivity of the apple surface and the conductance of the fruit surface to water vapour showed that accurate estimations of the apple surface emissivity were important for the model. The validation results showed that the model was capable of accurately describing the thermal performances of apples under different solar radiation intensities. Thus, this model could be used to more accurately estimate the FST relative to estimates that only consider the air temperature. In addition, this model provides useful information for sunburn protection management. PMID:25350507
Modeling the effects of storage temperature excursions on shelf life.
Socarras, Sandy; Magari, Robert T
2009-02-20
Excursions from storage condition requirements may affect product performance and stability. The effects of temperature excursion on stability depend on the amount of time that a product is subjected to these conditions, temperature level, and activation energy. Both time at elevated temperature and the temperature level can be directly measured, while activation energy needs to be estimated from the accelerated stability tests. Coulter Clenz reagent degradation information is used to demonstrate the effects of temperature excursions. The stability of the product is affected by any excursion, but Coulter Clenz will not lose all of its stability for excursion of up to 30 days at 35 degrees C and 20 days at 40 degrees C. Temperature excursion for up to 20 days at 40 degrees C will reduce the stability of a product that has activation energy in the range of 26-30kcalmol(-1) approximately by 5-7 months. Products with lower activation energy will have a significantly lower reduction in stability. The effects of excursions on shelf life performance are less severe when lower level of risk is implemented to establish the claimed shelf life. The proposed model can effectively predict temperature excursion if used within the scope of a product performance and its characteristics.
Modeling Impacts of Climate Change on Stream Temperature
NASA Astrophysics Data System (ADS)
Tesfa, T. K.; Wigmosta, M. S.; Coleman, A. M.; Richmond, M. C.; Perkins, W. A.
2010-12-01
Understanding the impacts of climate change on stream temperature is essential to planning and future management of water resources to satisfy competing water uses without compromising the sustainability of riverine ecosystems. This requires specification of spatially distributed meteorological input data such as air temperature and solar radiation under current and future climatic scenarios. In this work, we simulate stream temperature in the Dworshak watershed located in Idaho State, which is part of the Columbia River Basin. The watershed drains to Dworshak Dam, which provides flood control, irrigation supply, recreation, and is also used to help regulate summer-time stream temperatures below the dam. Stream temperature is simulated by coupling the Distributed Hydrology Soil Vegetation Model (DHSVM) with the Modular Aquatic Simulation System 1D (MASS1). DHSVM is used to provide spatially distributed inflows to MASS1 along with meteorological data corrected for topography and canopy cover. MASS1 is used to simulate one-dimensional unsteady flow and stream temperature. In this presentation, we report preliminary results comparing stream temperature under current and future climate scenarios and discuss its implications on the riverine ecosystem and future management of water resources.
Empirical model of temperature structure, Anadarko basin, Oklahoma
Gallardo, J.D.; Blackwell, D.D. )
1989-08-01
Attempts at mapping the thermal structure of sedimentary basins most often are based on bottom-hole temperature (BHT) data. Aside from the inaccuracy of the BHT data itself, this approach uses a straight-line geothermal gradient, which is an unrealistic representation of the thermal structure. In fact, the temperature gradient is dependent upon the lithology of the rocks because each rock type has a different thermal conductivity. The mean gradient through a given sedimentary section is a composite of the gradients through the individual sedimentary units. Thus, a more accurate representation of the temperature variations within a basin can be obtained by calculating the temperature gradient through each layer of contrasting conductivity. In this study, synthetic temperature profiles are calculated from lithologic data interpreted from well logs, and these profiles are used to build a three-dimensional model of the temperature structure of the Anadarko basin. The lithologies that control the temperature in the Anadarko basin include very high-conductivity evaporites in the Permian, low-conductivity shales dominating the thick Pennsylvanian section, and relatively intermediate conductivity carbonates throughout the lower Paleozoic. Shale is the primary controlling factor because it is the most abundant lithology in the basin and has a low thermal conductivity. This is unfortunate because shale thermal conductivity is the factor least well constrained by laboratory measurements.
Can spatial statistical river temperature models be transferred between catchments?
NASA Astrophysics Data System (ADS)
Jackson, Faye L.; Fryer, Robert J.; Hannah, David M.; Malcolm, Iain A.
2017-09-01
There has been increasing use of spatial statistical models to understand and predict river temperature (Tw) from landscape covariates. However, it is not financially or logistically feasible to monitor all rivers and the transferability of such models has not been explored. This paper uses Tw data from four river catchments collected in August 2015 to assess how well spatial regression models predict the maximum 7-day rolling mean of daily maximum Tw (Twmax) within and between catchments. Models were fitted for each catchment separately using (1) landscape covariates only (LS models) and (2) landscape covariates and an air temperature (Ta) metric (LS_Ta models). All the LS models included upstream catchment area and three included a river network smoother (RNS) that accounted for unexplained spatial structure. The LS models transferred reasonably to other catchments, at least when predicting relative levels of Twmax. However, the predictions were biased when mean Twmax differed between catchments. The RNS was needed to characterise and predict finer-scale spatially correlated variation. Because the RNS was unique to each catchment and thus non-transferable, predictions were better within catchments than between catchments. A single model fitted to all catchments found no interactions between the landscape covariates and catchment, suggesting that the landscape relationships were transferable. The LS_Ta models transferred less well, with particularly poor performance when the relationship with the Ta metric was physically implausible or required extrapolation outside the range of the data. A single model fitted to all catchments found catchment-specific relationships between Twmax and the Ta metric, indicating that the Ta metric was not transferable. These findings improve our understanding of the transferability of spatial statistical river temperature models and provide a foundation for developing new approaches for predicting Tw at unmonitored locations across
Modelling Brain Temperature and Perfusion for Cerebral Cooling
NASA Astrophysics Data System (ADS)
Blowers, Stephen; Valluri, Prashant; Marshall, Ian; Andrews, Peter; Harris, Bridget; Thrippleton, Michael
2015-11-01
Brain temperature relies heavily on two aspects: i) blood perfusion and porous heat transport through tissue and ii) blood flow and heat transfer through embedded arterial and venous vasculature. Moreover brain temperature cannot be measured directly unless highly invasive surgical procedures are used. A 3D two-phase fluid-porous model for mapping flow and temperature in brain is presented with arterial and venous vessels extracted from MRI scans. Heat generation through metabolism is also included. The model is robust and reveals flow and temperature maps in unprecedented 3D detail. However, the Karmen-Kozeny parameters of the porous (tissue) phase need to be optimised for expected perfusion profiles. In order to optimise the K-K parameters a reduced order two-phase model is developed where 1D vessels are created with a tree generation algorithm embedded inside a 3D porous domain. Results reveal that blood perfusion is a strong function of the porosity distribution in the tissue. We present a qualitative comparison between the simulated perfusion maps and those obtained clinically. We also present results studying the effect of scalp cooling on core brain temperature and preliminary results agree with those observed clinically.
3.5 D temperature model of a coal stockpile
Ozdeniz, A.H.; Corumluoglu, O.; Kalayci, I.; Sensogut, C.
2008-07-01
Overproduced coal mines that are not sold should remain in coal stock sites. If these coal stockpiles remain at the stock yards over a certain period of time, a spontaneous combustion can be started. Coal stocks under combustion threat can cost too much economically to coal companies. Therefore, it is important to take some precautions for saving the stockpiles from the spontaneous combustion. In this research, a coal stock which was 5 m wide, 10 m long, and 3 m in height, with a weight of 120 tons, was monitored to observe internal temperature changes with respect to time under normal atmospheric conditions. Internal temperature measurements were obtained at 20 points distributed all over the two layers in the stockpile. Temperatures measured by a specially designed mechanism were then stored into a computer every 3 h for a period of 3 months. Afterward, this dataset was used to delineate 3.5 D temporal temperature distribution models for these two levels, and they were used to analyze and interpret what was seen in these models to derive some conclusions. It was openly seen, followed, and analyzed that internal temperature changes in the stockpile went up to 31{sup o}C by 3.5 D models created for this research.
Finite-temperature phase transitions in the ionic Hubbard model
NASA Astrophysics Data System (ADS)
Kim, Aaram J.; Choi, M. Y.; Jeon, Gun Sang
2014-04-01
We investigate paramagnetic metal-insulator transitions in the infinite-dimensional ionic Hubbard model at finite temperatures. By means of the dynamical mean-field theory with an impurity solver of the continuous-time quantum Monte Carlo method, we show that an increase in the interaction strength brings about a crossover from a band insulating phase to a metallic one, followed by a first-order transition to a Mott insulating phase. The first-order transition turns into a crossover above a certain critical temperature, which becomes higher as the staggered lattice potential is increased. Further, analysis of the temperature dependence of the energy density discloses that the intermediate metallic phase is a Fermi liquid. It is also found that the metallic phase is stable against strong staggered potentials even at very low temperatures.
A simple model for electron temperature in dilute plasma flows
NASA Astrophysics Data System (ADS)
Cai, Chunpei; Cooke, David L.
2016-10-01
In this short note, we present some work on investigating electron temperatures and potentials in steady dilute plasma flows. The analysis is based on the detailed fluid model for electrons. Ionizations, normalized electron number density gradients, and magnetic fields are neglected. The transport properties are assumed as local constants. With these treatments, the partial differential equation for electron temperature degenerates as an ordinary differential equation. Along an electron streamline, two simple formulas for electron temperature and plasma potential are obtained. These formulas offer some insights, e.g., the electron temperature and plasma potential distributions along an electron streamline include two exponential functions, and the one for plasma potential includes an extra linear distribution function.
Boughariou, A; Damamme, G; Kallel, A
2015-04-01
This paper focuses on the effect of sample annealing temperature and crystallographic orientation on the secondary electron yield of MgO during charging by a defocused electron beam irradiation. The experimental results show that there are two regimes during the charging process that are better identified by plotting the logarithm of the secondary electron emission yield, lnσ, as function of the total trapped charge in the material QT. The impact of the annealing temperature and crystallographic orientation on the evolution of lnσ is presented here. The slope of the asymptotic regime of the curve lnσ as function of QT, expressed in cm(2) per trapped charge, is probably linked to the elementary cross section of electron-hole recombination, σhole, which controls the trapping evolution in the reach of the stationary flow regime.
Houessou, Justin Koffi; Goujot, Daniel; Heyd, Bertrand; Camel, Valerie
2008-05-28
Roasting is a critical process in coffee production, as it enables the development of flavor and aroma. At the same time, roasting may lead to the formation of nondesirable compounds, such as polycyclic aromatic hydrocarbons (PAHs). In this study, Arabica green coffee beans from Cuba were roasted under controlled conditions to monitor PAH formation during the roasting process. Roasting was performed in a pilot-spouted bed roaster, with the inlet air temperature varying from 180 to 260 degrees C, for roasting conditions ranging from 5 to 20 min. Several PAHs were determined in both roasted coffee samples and green coffee samples. Different models were tested, with more or less assumptions on the chemical phenomena, with a view to predict the system global behavior. Two kinds of models were used and compared: kinetic models (based on Arrhenius law) and statistical models (neural networks). The numbers of parameters to adjust differed for the tested models, varying from three to nine for the kinetic models and from five to 13 for the neural networks. Interesting results are presented, with satisfactory correlations between experimental and predicted concentrations for some PAHs, such as pyrene, benz[a]anthracene, chrysene, and anthracene.
NASA Astrophysics Data System (ADS)
Back, Pär-Erik
2007-04-01
A model is presented for estimating the value of information of sampling programs for contaminated soil. The purpose is to calculate the optimal number of samples when the objective is to estimate the mean concentration. A Bayesian risk-cost-benefit decision analysis framework is applied and the approach is design-based. The model explicitly includes sample uncertainty at a complexity level that can be applied to practical contaminated land problems with limited amount of data. Prior information about the contamination level is modelled by probability density functions. The value of information is expressed in monetary terms. The most cost-effective sampling program is the one with the highest expected net value. The model was applied to a contaminated scrap yard in Göteborg, Sweden, contaminated by metals. The optimal number of samples was determined to be in the range of 16-18 for a remediation unit of 100 m2. Sensitivity analysis indicates that the perspective of the decision-maker is important, and that the cost of failure and the future land use are the most important factors to consider. The model can also be applied for other sampling problems, for example, sampling and testing of wastes to meet landfill waste acceptance procedures.
Temperature driven annealing of perforations in bicellar model membranes.
Nieh, Mu-Ping; Raghunathan, V A; Pabst, Georg; Harroun, Thad; Nagashima, Kazuomi; Morales, Hannah; Katsaras, John; Macdonald, Peter
2011-04-19
Bicellar model membranes composed of 1,2-dimyristoylphosphatidylcholine (DMPC) and 1,2-dihexanoylphosphatidylcholine (DHPC), with a DMPC/DHPC molar ratio of 5, and doped with the negatively charged lipid 1,2-dimyristoylphosphatidylglycerol (DMPG), at DMPG/DMPC molar ratios of 0.02 or 0.1, were examined using small angle neutron scattering (SANS), (31)P NMR, and (1)H pulsed field gradient (PFG) diffusion NMR with the goal of understanding temperature effects on the DHPC-dependent perforations in these self-assembled membrane mimetics. Over the temperature range studied via SANS (300-330 K), these bicellar lipid mixtures exhibited a well-ordered lamellar phase. The interlamellar spacing d increased with increasing temperature, in direct contrast to the decrease in d observed upon increasing temperature with otherwise identical lipid mixtures lacking DHPC. (31)P NMR measurements on magnetically aligned bicellar mixtures of identical composition indicated a progressive migration of DHPC from regions of high curvature into planar regions with increasing temperature, and in accord with the "mixed bicelle model" (Triba, M. N.; Warschawski, D. E.; Devaux, P. E. Biophys. J.2005, 88, 1887-1901). Parallel PFG diffusion NMR measurements of transbilayer water diffusion, where the observed diffusion is dependent on the fractional surface area of lamellar perforations, showed that transbilayer water diffusion decreased with increasing temperature. A model is proposed consistent with the SANS, (31)P NMR, and PFG diffusion NMR data, wherein increasing temperature drives the progressive migration of DHPC out of high-curvature regions, consequently decreasing the fractional volume of lamellar perforations, so that water occupying these perforations redistributes into the interlamellar volume, thereby increasing the interlamellar spacing. © 2011 American Chemical Society
Temperature Driven Annealing of Perforations in Bicellar Model Membranes
Nieh, Mu-Ping; Raghunathan, V.A.; Pabst, Georg; Harroun, Thad; Nagashima, K; Morales, H; Katsaras, John; Macdonald, P
2011-01-01
Bicellar model membranes composed of 1,2-dimyristoylphosphatidylcholine (DMPC) and 1,2-dihexanoylphosphatidylcholine (DHPC), with a DMPC/DHPC molar ratio of 5, and doped with the negatively charged lipid 1,2-dimyristoylphosphatidylglycerol (DMPG), at DMPG/DMPC molar ratios of 0.02 or 0.1, were examined using small angle neutron scattering (SANS), {sup 31}P NMR, and {sup 1}H pulsed field gradient (PFG) diffusion NMR with the goal of understanding temperature effects on the DHPC-dependent perforations in these self-assembled membrane mimetics. Over the temperature range studied via SANS (300-330 K), these bicellar lipid mixtures exhibited a well-ordered lamellar phase. The interlamellar spacing d increased with increasing temperature, in direct contrast to the decrease in d observed upon increasing temperature with otherwise identical lipid mixtures lacking DHPC. {sup 31}P NMR measurements on magnetically aligned bicellar mixtures of identical composition indicated a progressive migration of DHPC from regions of high curvature into planar regions with increasing temperature, and in accord with the 'mixed bicelle model' (Triba, M. N.; Warschawski, D. E.; Devaux, P. E. Biophys. J.2005, 88, 1887-1901). Parallel PFG diffusion NMR measurements of transbilayer water diffusion, where the observed diffusion is dependent on the fractional surface area of lamellar perforations, showed that transbilayer water diffusion decreased with increasing temperature. A model is proposed consistent with the SANS, {sup 31}P NMR, and PFG diffusion NMR data, wherein increasing temperature drives the progressive migration of DHPC out of high-curvature regions, consequently decreasing the fractional volume of lamellar perforations, so that water occupying these perforations redistributes into the interlamellar volume, thereby increasing the interlamellar spacing.
Baatrup, G; Sturfelt, G; Junker, A; Svehag, S E
1992-01-01
Blood samples from 15 patients with systemic lupus erythematosus (SLE) and 15 healthy blood donors were allowed to coagulate for one hour at room temperature, followed by one hour at 4 or 37 degrees C. The complement activity of the serum samples was assessed by three different functional assays. Serum samples from patients with SLE obtained by coagulation at 37 degrees C had a lower complement activity than serum samples from blood coagulated at 4 degrees C when the capacity of the serum samples to solubilise precipitable immune complexes and to support the attachment of complement factors to solid phase immune complexes was determined. Haemolytic complement activity was not affected by the coagulation temperature. The content of C1q binding immune complexes in paired serum samples obtained after coagulation at 4 and 37 degrees C was similar and the size distribution of the immune complexes, determined by high performance gel permeation chromatography, was also similar. This study shows that the results of functional complement assays, applied to serum samples from patients with SLE cannot be compared unless the conditions for blood coagulation and serum handling are defined and are the same. The data also indicate that assays measuring complement mediated solubilisation of immune complexes and the fixation of complement factors to solid phase immune complexes are more sensitive indicators of complement activity than the haemolytic assay. PMID:1632665
Su, Xiang; Wang, Gang; Li, Jianfeng; Rong, Yiming
2016-01-01
The effects of strain rate and temperature on the dynamic behavior of Fe-based high temperature alloy was studied. The strain rates were 0.001-12,000 s(-1), at temperatures ranging from room temperature to 800 °C. A phenomenological constitutive model (Power-Law constitutive model) was proposed considering adiabatic temperature rise and accurate material thermal physical properties. During which, the effects of the specific heat capacity on the adiabatic temperature rise was studied. The constitutive model was verified to be accurate by comparison between predicted and experimental results.
Land-surface temperature measurement from space - Physical principles and inverse modeling
NASA Technical Reports Server (NTRS)
Wan, Zhengming; Dozier, Jeff
1989-01-01
To apply the multiple-wavelength (split-window) method used for satellite measurement of sea-surface temperature from thermal-infrared data to land-surface temperatures, the authors statistically analyze simulations using an atmospheric radiative transfer model. The range of atmospheric conditions and surface temperatures simulated is wide enough to cover variations in clear atmospheric properties and surface temperatures, both of which are larger over land than over sea. Surface elevation is also included in the simulation as the most important topographic effect. Land covers characterized by measured or modeled spectral emissivities include snow, clay, sands, and tree leaf samples. The empirical inverse model can estimate the surface temperature with a standard deviation less than 0.3 K and a maximum error less than 1 K, for viewing angles up to 40 degrees from nadir under cloud-free conditions, given satellite measurements in three infrared channels. A band in the region from 10.2 to 11.0 microns will usually give the most reliable single-band estimate of surface temperature. In addition, a band in either the 3.5-4.0-micron region or in the 11.5-12.6-micron region must be included for accurate atmospheric correction, and a band below the ozone absorption feature at 9.6 microns (e.g., 8.2-8.8 microns) will increase the accuracy of the estimate of surface temperature.
Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, G.G.
2006-01-01
We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. ?? 2006 Elsevier B.V. All rights reserved.
Monitoring temperature for gas turbine blade: correction of reflection model
NASA Astrophysics Data System (ADS)
Gao, Shan; Wang, Lixin; Feng, Chi; Xiao, Yihan; Daniel, Ketui
2015-06-01
For a gas turbine blade working in a narrow space, the accuracy of blade temperature measurements is greatly impacted by environmental irradiation. A reflection model is established by using discrete irregular surfaces to calculate the angle factor between the blade surface and the hot adjacent parts. The model is based on the rotational angles and positions of the blades, and can correct for measurement error caused by background radiation when the blade is located at different rotational positions. This method reduces the impact of reflected radiation on the basis of the turbine's known geometry and the physical properties of the material. The experimental results show that when the blade temperature is 911.2±5 K and the vane temperature ranges from 1011.3 to 1065.8 K, the error decreases from 4.21 to 0.75%.
NASA Astrophysics Data System (ADS)
Zhou, Lejun; Wang, Wanlin; Liu, Rui; Thomas, Brian G.
2013-10-01
A three-dimensional finite-difference model has been developed to study heat transfer, fluid flow, and isothermal crystallization of mold slag during double hot thermocouple technique (DHTT) experiments. During the preheating stage, temperature in the middle of the mold slag sample was found to be significantly [~350 K (~77 °C)] lower than near the two thermocouples. During the quenching stage, the mold slag temperature decreases with the cooled thermocouple. The temperature across the mold slag achieves a steady, nonlinear temperature profile during the holding stage; the insulating effect of the crystallizing layer in the middle of the slag sample causes the high temperature region to become hotter, while the lower temperature mold slag becomes cooler. Fluid flow is driven by Marangoni forces along the mold slag surface from the hotter region to the cooler region, and then recirculates back through the interior. Slag velocities reach 7 mm/s. Crystallization is predicted to start in the middle of the slag sample first and then grows toward both thermocouples, which matches well with observations of the DHTT experiment.
Thomas C. Edwards; D. Richard Cutler; Niklaus E. Zimmermann; Linda Geiser; Gretchen G. Moisen
2006-01-01
We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by...
Evaluating Small Sample Approaches for Model Test Statistics in Structural Equation Modeling
ERIC Educational Resources Information Center
Nevitt, Jonathan; Hancock, Gregory R.
2004-01-01
Through Monte Carlo simulation, small sample methods for evaluating overall data-model fit in structural equation modeling were explored. Type I error behavior and power were examined using maximum likelihood (ML), Satorra-Bentler scaled and adjusted (SB; Satorra & Bentler, 1988, 1994), residual-based (Browne, 1984), and asymptotically…
Space Weathering of Olivine: Samples, Experiments and Modeling
NASA Technical Reports Server (NTRS)
Keller, L. P.; Berger, E. L.; Christoffersen, R.
2016-01-01
Olivine is a major constituent of chondritic bodies and its response to space weathering processes likely dominates the optical properties of asteroid regoliths (e.g. S- and many C-type asteroids). Analyses of olivine in returned samples and laboratory experiments provide details and insights regarding the mechanisms and rates of space weathering. Analyses of olivine grains from lunar soils and asteroid Itokawa reveal that they display solar wind damaged rims that are typically not amorphized despite long surface exposure ages, which are inferred from solar flare track densities (up to 10 (sup 7 y)). The olivine damaged rim width rapidly approaches approximately 120 nm in approximately 10 (sup 6 y) and then reaches steady-state with longer exposure times. The damaged rims are nanocrystalline with high dislocation densities, but crystalline order exists up to the outermost exposed surface. Sparse nanophase Fe metal inclusions occur in the damaged rims and are believed to be produced during irradiation through preferential sputtering of oxygen from the rims. The observed space weathering effects in lunar and Itokawa olivine grains are difficult to reconcile with laboratory irradiation studies and our numerical models that indicate that olivine surfaces should readily blister and amorphize on relatively short time scales (less than 10 (sup 3 y)). These results suggest that it is not just the ion fluence alone, but other variable, the ion flux that controls the type and extent of irradiation damage that develops in olivine. This flux dependence argues for caution in extrapolating between high flux laboratory experiments and the natural case. Additional measurements, experiments, and modeling are required to resolve the discrepancies among the observations and calculations involving solar wind processing of olivine.
Wang, Jing; Zhang, Zhengfeng; Zhao, Weijing; Wang, Liying; Yang, Jun
2016-05-09
The MAS solid-state NMR has been a powerful technique for studying membrane proteins within the native-like lipid bilayer environment. In general, RF irradiation in MAS NMR experiments can heat and potentially destroy expensive membrane protein samples. However, under practical MAS NMR experimental conditions, detailed characterization of RF heating effect of lipid bilayer samples is still lacking. Herein, using (1) H chemical shift of water for temperature calibration, we systematically study the dependence of RF heating on hydration levels and salt concentrations of three lipids in MAS NMR experiments. Under practical (1) H decoupling conditions used in biological MAS NMR experiments, three lipids show different dependence of RF heating on hydration levels as well as salt concentrations, which are closely associated with the properties of lipids. The maximum temperature elevation of about 10 °C is similar for the three lipids containing 200% hydration, which is much lower than that in static solid-state NMR experiments. The RF heating due to salt is observed to be less than that due to hydration, with a maximum temperature elevation of less than 4 °C in the hydrated samples containing 120 mmol l(-1) of salt. Upon RF irradiation, the temperature gradient across the sample is observed to be greatly increased up to 20 °C, as demonstrated by the remarkable broadening of (1) H signal of water. Based on detailed characterization of RF heating effect, we demonstrate that RF heating and temperature gradient can be significantly reduced by decreasing the hydration levels of lipid bilayer samples from 200% to 30%. Copyright © 2016 John Wiley & Sons, Ltd.
On Modeling and Measuring the Temperature of the z ~ 5 Intergalactic Medium
NASA Astrophysics Data System (ADS)
Lidz, Adam; Malloy, Matthew
2014-06-01
The temperature of the low-density intergalactic medium (IGM) at high redshift is sensitive to the timing and nature of hydrogen and He II reionization, and can be measured from Lyman-alpha (Lyα) forest absorption spectra. Since the memory of intergalactic gas to heating during reionization gradually fades, measurements as close as possible to reionization are desirable. In addition, measuring the IGM temperature at sufficiently high redshifts should help to isolate the effects of hydrogen reionization since He II reionization starts later, at lower redshift. Motivated by this, we model the IGM temperature at z >~ 5 using semi-numeric models of patchy reionization. We construct mock Lyα forest spectra from these models and consider their observable implications. We find that the small-scale structure in the Lyα forest is sensitive to the temperature of the IGM even at redshifts where the average absorption in the forest is as high as 90%. We forecast the accuracy at which the z >~ 5 IGM temperature can be measured using existing samples of high resolution quasar spectra, and find that interesting constraints are possible. For example, an early reionization model in which reionization ends at z ~ 10 should be distinguishable—at high statistical significance—from a lower redshift model where reionization completes at z ~ 6. We discuss improvements to our modeling that may be required to robustly interpret future measurements.
Modeling Silicate Weathering for Elevated CO2 and Temperature
NASA Astrophysics Data System (ADS)
Bolton, E. W.
2016-12-01
A reactive transport model (RTM) is used to assess CO2 drawdown by silicate weathering over a wide range of temperature, pCO2, and infiltration rates for basalts and granites. Although RTM's have been used extensively to model weathering of basalts and granites for present-day conditions, we extend such modeling to higher CO2 that could have existed during the Archean and Proterozoic. We also consider a wide range of surface temperatures and infiltration rates. We consider several model basalt and granite compositions. We normally impose CO2 in equilibrium with the various atmospheric ranges modeled and CO2 is delivered to the weathering zone by aqueous transport. We also consider models with fixed CO2 (aq) throughout the weathering zone as could occur in soils with partial water saturation or with plant respiration, which can strongly influence pH and mineral dissolution rates. For the modeling, we use Kinflow: a model developed at Yale that includes mineral dissolution and precipitation under kinetic control, aqueous speciation, surface erosion, dynamic porosity, permeability, and mineral surface areas via sub-grid-scale grain models, and exchange of volatiles at the surface. Most of the modeling is done in 1D, but some comparisons to 2D domains with heterogeneous permeability are made. We find that when CO2 is fixed only at the surface, the pH tends toward higher values for basalts than granites, in large part due to the presence of more divalent than monovalent cations in the primary minerals, tending to decrease rates of mineral dissolution. Weathering rates increase (as expected) with increasing CO2 and temperature. This modeling is done with the support of the Virtual Planetary Laboratory.
Midnight Temperature Maximum (MTM) in Whole Atmosphere Model (WAM) Simulations
2016-04-14
Midnight temperature maximum (MTM) in Whole Atmosphere Model (WAM) simulations R. A. Akmaev,1 F. Wu,2 T. J. Fuller-Rowell,2 and H. Wang2 Received 13...been unsuccessful. First long-term simulations with the Whole Atmosphere Model (WAM) reveal the presence of a realistically prominent MTM and reproduce...involve nonlinear interactions between other tidal harmonics originating in the middle and lower atmosphere . Our results thus suggest that the MTM is
A computer model of global thermospheric winds and temperatures
NASA Technical Reports Server (NTRS)
Killeen, T. L.; Roble, R. G.; Spencer, N. W.
1987-01-01
Output data from the NCAR Thermospheric GCM and a vector-spherical-harmonic (VSH) representation of the wind field are used in constructing a computer model of time-dependent global horizontal vector neutral wind and temperature fields at altitude 130-300 km. The formulation of the VSH model is explained in detail, and some typical results obtained with a preliminary version (applicable to December solstice at solar maximum) are presented graphically. Good agreement with DE-2 satellite measurements is demonstrated.
Field and sample history dependence of the compensation temperature in Sm 0.97Gd 0.03Al 2
NASA Astrophysics Data System (ADS)
Vaidya, U. V.; Rakhecha, V. C.; Sumithra, S.; Ramakrishnan, S.; Grover, A. K.
2007-03-01
We present magnetization data on three polycrystalline specimens of Sm 0.97Gd 0.03Al 2: (1) as-cast (grainy texture), (2) powder, and (3) re-melted fast-quenched (plate). The data are presented for nominally zero- (ZFC) and high-field-cooling (HFC) histories. A zero cross-over in magnetization curve at some temperature T= T0 was seen in ZFC data on grainy and powder samples, but not in the plate sample. At fields surpassing magnetocrystalline anisotropy, a 4f magnetic moment flip was still evidenced by HFC data in all samples at a compensation temperature Tcomp, which must necessarily be treated as distinct from T0 ( T0 may not even exist). Proper understanding of Tcomp should take account of thermomagnetic history effects.
Modeling temperature variations in a pilot plant thermophilic anaerobic digester.
Valle-Guadarrama, Salvador; Espinosa-Solares, Teodoro; López-Cruz, Irineo L; Domaschko, Max
2011-05-01
A model that predicts temperature changes in a pilot plant thermophilic anaerobic digester was developed based on fundamental thermodynamic laws. The methodology utilized two simulation strategies. In the first, model equations were solved through a searching routine based on a minimal square optimization criterion, from which the overall heat transfer coefficient values, for both biodigester and heat exchanger, were determined. In the second, the simulation was performed with variable values of these overall coefficients. The prediction with both strategies allowed reproducing experimental data within 5% of the temperature span permitted in the equipment by the system control, which validated the model. The temperature variation was affected by the heterogeneity of the feeding and extraction processes, by the heterogeneity of the digestate recirculation through the heating system and by the lack of a perfect mixing inside the biodigester tank. The use of variable overall heat transfer coefficients improved the temperature change prediction and reduced the effect of a non-ideal performance of the pilot plant modeled.
Temperature dependence of bag pressure from quasiparticle model
NASA Astrophysics Data System (ADS)
Prasad, N.; Singh, C. P.
2001-03-01
A quasiparticle model with effective thermal gluon and quark masses is used to derive a temperature /T- and baryon chemical potential /μ-dependent bag constant /B(μ,T). Consequences of such a bag constant are obtained on the equation of state (EOS) for a deconfined quark-gluon plasma (QGP).
Apply a hydrological model to estimate local temperature trends
NASA Astrophysics Data System (ADS)
Igarashi, Masao; Shinozawa, Tatsuya
2014-03-01
Continuous times series {f(x)} such as a depth of water is written f(x) = T(x)+P(x)+S(x)+C(x) in hydrological science where T(x),P(x),S(x) and C(x) are called the trend, periodic, stochastic and catastrophic components respectively. We simplify this model and apply it to the local temperature data such as given E. Halley (1693), the UK (1853-2010), Germany (1880-2010), Japan (1876-2010). We also apply the model to CO2 data. The model coefficients are evaluated by a symbolic computation by using a standard personal computer. The accuracy of obtained nonlinear curve is evaluated by the arithmetic mean of relative errors between the data and estimations. E. Halley estimated the temperature of Gresham College from 11/1692 to 11/1693. The simplified model shows that the temperature at the time rather cold compared with the recent of London. The UK and Germany data sets show that the maximum and minimum temperatures increased slowly from the 1890s to 1940s, increased rapidly from the 1940s to 1980s and have been decreasing since the 1980s with the exception of a few local stations. The trend of Japan is similar to these results.
Models of Solar Irradiance Variability and the Instrumental Temperature Record
NASA Technical Reports Server (NTRS)
Marcus, S. L.; Ghil, M.; Ide, K.
1998-01-01
The effects of decade-to-century (Dec-Cen) variations in total solar irradiance (TSI) on global mean surface temperature Ts during the pre-Pinatubo instrumental era (1854-1991) are studied by using two different proxies for TSI and a simplified version of the IPCC climate model.
Forecasting alpine vegetation change using repeat sampling and a novel modeling approach.
Johnson, David R; Ebert-May, Diane; Webber, Patrick J; Tweedie, Craig E
2011-09-01
Global change affects alpine ecosystems by, among many effects, by altering plant distributions and community composition. However, forecasting alpine vegetation change is challenged by a scarcity of studies observing change in fixed plots spanning decadal-time scales. We present in this article a probabilistic modeling approach that forecasts vegetation change on Niwot Ridge, CO using plant abundance data collected from marked plots established in 1971 and resampled in 1991 and 2001. Assuming future change can be inferred from past change, we extrapolate change for 100 years from 1971 and correlate trends for each plant community with time series environmental data (1971-2001). Models predict a decreased extent of Snowbed vegetation and an increased extent of Shrub Tundra by 2071. Mean annual maximum temperature and nitrogen deposition were the primary a posteriori correlates of plant community change. This modeling effort is useful for generating hypotheses of future vegetation change that can be tested with future sampling efforts.
Modeling the effect of temperature on survival rate of Salmonella Enteritidis in yogurt.
Szczawiński, J; Szczawińska, M E; Łobacz, A; Jackowska-Tracz, A
2014-01-01
The aim of the study was to determine the inactivation rates of Salmonella Enteritidis in commercially produced yogurt and to generate primary and secondary mathematical models to predict the behaviour of these bacteria during storage at different temperatures. The samples were inoculated with the mixture of three S. Enteritidis strains and stored at 5 degrees C, 10 degrees C, 15 degrees C, 20 degrees C and 25 degrees C for 24 h. The number of salmonellae was determined every two hours. It was found that the number of bacteria decreased linearly with storage time in all samples. Storage temperature and pH of yogurt significantly influenced survival rate of S. Enteritidis (p < 0.05). In samples kept at 5 degrees C the number of salmonellae decreased at the lowest rate, whereas at 25 degrees C the reduction in number of bacteria was the most dynamic. The natural logarithm of mean inactivation rates of Salmonella calculated from primary model was fitted to two secondary models: linear and polynomial. Equations obtained from both secondary models can be applied as a tool for prediction of inactivation rate of Salmonella in yogurt stored under temperature range from 5 to 25 degrees C; however, polynomial model gave the better fit to the experimental data.
NASA Astrophysics Data System (ADS)
Wallace, W. E.; Blair, W. R.
2007-05-01
A pre-charged, low molecular mass, low polydispersity linear polyethylene was analyzed with matrix-assisted laser desorption/ionization (MALDI) mass spectrometry as a function of sample temperature between 25 °C and 150 °C. This temperature range crosses the polyethylene melting temperature. Buckminsterfullerene (C60) was used as MALDI matrix due to the high volatility of typical MALDI matrices making them unsuitable for heating in vacuum. Starting at 90 °C there is an increase in polyethylene ion intensity at fixed laser energy. By 150 °C the integrated total ion intensity had grown by six-fold indicating that melting did indeed increase ion yield. At 150 °C the threshold laser intensity to produce intact polyethylene ions decreased by about 25%. Nevertheless, significant fragmentation accompanied the intact polyethylene ions even at the highest temperatures and the lowest laser energies.
Tian, Tian; Wu, Lingtong; Henke, Michael; Ali, Basharat; Zhou, Weijun; Buck-Sorlin, Gerhard
2017-01-01
Functional–structural plant modeling (FSPM) is a fast and dynamic method to predict plant growth under varying environmental conditions. Temperature is a primary factor affecting the rate of plant development. In the present study, we used three different temperature treatments (10/14°C, 18/22°C, and 26/30°C) to test the effect of temperature on growth and development of rapeseed (Brassica napus L.) seedlings. Plants were sampled at regular intervals (every 3 days) to obtain growth data during the length of the experiment (1 month in total). Total leaf dry mass, leaf area, leaf mass per area (LMA), width-length ratio, and the ratio of petiole length to leaf blade length (PBR), were determined and statistically analyzed, and contributed to a morphometric database. LMA under high temperature was significantly smaller than LMA under medium and low temperature, while leaves at high temperature were significantly broader. An FSPM of rapeseed seedlings featuring a growth function used for leaf extension and biomass accumulation was implemented by combining measurement with literature data. The model delivered new insights into growth and development dynamics of winter oilseed rape seedlings. The present version of the model mainly focuses on the growth of plant leaves. However, future extensions of the model could be used in practice to better predict plant growth in spring and potential cold damage of the crop. PMID:28377775
Tian, Tian; Wu, Lingtong; Henke, Michael; Ali, Basharat; Zhou, Weijun; Buck-Sorlin, Gerhard
2017-01-01
Functional-structural plant modeling (FSPM) is a fast and dynamic method to predict plant growth under varying environmental conditions. Temperature is a primary factor affecting the rate of plant development. In the present study, we used three different temperature treatments (10/14°C, 18/22°C, and 26/30°C) to test the effect of temperature on growth and development of rapeseed (Brassica napus L.) seedlings. Plants were sampled at regular intervals (every 3 days) to obtain growth data during the length of the experiment (1 month in total). Total leaf dry mass, leaf area, leaf mass per area (LMA), width-length ratio, and the ratio of petiole length to leaf blade length (PBR), were determined and statistically analyzed, and contributed to a morphometric database. LMA under high temperature was significantly smaller than LMA under medium and low temperature, while leaves at high temperature were significantly broader. An FSPM of rapeseed seedlings featuring a growth function used for leaf extension and biomass accumulation was implemented by combining measurement with literature data. The model delivered new insights into growth and development dynamics of winter oilseed rape seedlings. The present version of the model mainly focuses on the growth of plant leaves. However, future extensions of the model could be used in practice to better predict plant growth in spring and potential cold damage of the crop.
HIGH TEMPERATURE HIGH PRESSURE THERMODYNAMIC MEASUREMENTS FOR COAL MODEL COMPOUNDS
Vinayak N. Kabadi
2000-05-01
The Vapor Liquid Equilibrium measurement setup of this work was first established several years ago. It is a flow type high temperature high pressure apparatus which was designed to operate below 500 C temperature and 2000 psia pressure. Compared with the static method, this method has three major advantages: the first is that large quantity of sample can be obtained from the system without disturbing the equilibrium state which was established before; the second is that the residence time of the sample in the equilibrium cell is greatly reduced, thus decomposition or contamination of the sample can be effectively prevented; the third is that the flow system allows the sample to degas as it heats up since any non condensable gas will exit in the vapor stream, accumulate in the vapor condenser, and not be recirculated. The first few runs were made with Quinoline-Tetralin system, the results were fairly in agreement with the literature data . The former graduate student Amad used the same apparatus acquired the Benzene-Ethylbenzene system VLE data. This work used basically the same setup (several modifications had been made) to get the VLE data of Ethylbenzene-Quinoline system.
Modeling the effect of temperature on survival rate of Listeria monocytogenes in yogurt.
Szczawiński, J; Szczawińska, M E; Łobacz, A; Jackowska-Tracz, A
2016-01-01
The aim of the study was to (i) evaluate the behavior of Listeria monocytogenes in a commercially produced yogurt, (ii) determine the survival/inactivation rates of L. monocytogenes during cold storage of yogurt and (iii) to generate primary and secondary mathematical models to predict the behavior of these bacteria during storage at different temperatures. The samples of yogurt were inoculated with the mixture of three L. monocytogenes strains and stored at 3, 6, 9, 12 and 15°C for 16 days. The number of listeriae was determined after 0, 1, 2, 3, 5, 7, 9, 12, 14 and 16 days of storage. From each sample a series of decimal dilutions were prepared and plated onto ALOA agar (agar for Listeria according to Ottaviani and Agosti). It was found that applied temperature and storage time significantly influenced the survival rate of listeriae (p<0.01). The number of L. monocytogenes in all the samples decreased linearly with storage time. The slowest decrease in the number of the bacteria was found in the samples stored at 6°C (D-10 value = 243.9 h), whereas the highest reduction in the number of the bacteria was observed in the samples stored at 15°C (D-10 value = 87.0 h). The number of L. monocytogenes was correlated with the pH value of the samples (p<0.01). The natural logarithm of the mean survival/inactivation rates of L. monocytogenes calculated from the primary model was fitted to two secondary models, namely linear and polynomial. Mathematical equations obtained from both secondary models can be applied as a tool for the prediction of the survival/inactivation rate of L. monocytogenes in yogurt stored under temperature range from 3 to 15°C, however, the polynomial model gave a better fit to the experimental data.
Modelling of temperature and perfusion during scalp cooling.
Janssen, F E M; Van Leeuwen, G M J; Van Steenhoven, A A
2005-09-07
Hair loss is a feared side effect of chemotherapy treatment. It may be prevented by cooling the scalp during administration of cytostatics. The supposed mechanism is that by cooling the scalp, both temperature and perfusion are diminished, affecting drug supply and drug uptake in the hair follicle. However, the effect of scalp cooling varies strongly. To gain more insight into the effect of cooling, a computer model has been developed that describes heat transfer in the human head during scalp cooling. Of main interest in this study are the mutual influences of scalp temperature and perfusion during cooling. Results of the standard head model show that the temperature of the scalp skin is reduced from 34.4 degrees C to 18.3 degrees C, reducing tissue blood flow to 25%. Based upon variations in both thermal properties and head anatomies found in the literature, a parameter study was performed. The results of this parameter study show that the most important parameters affecting both temperature and perfusion are the perfusion coefficient Q10 and the thermal resistances of both the fat and the hair layer. The variations in the parameter study led to skin temperature ranging from 10.1 degrees C to 21.8 degrees C, which in turn reduced relative perfusion to 13% and 33%, respectively.
Modelling of temperature and perfusion during scalp cooling
NASA Astrophysics Data System (ADS)
Janssen, F. E. M.; Van Leeuwen, G. M. J.; Van Steenhoven, A. A.
2005-09-01
Hair loss is a feared side effect of chemotherapy treatment. It may be prevented by cooling the scalp during administration of cytostatics. The supposed mechanism is that by cooling the scalp, both temperature and perfusion are diminished, affecting drug supply and drug uptake in the hair follicle. However, the effect of scalp cooling varies strongly. To gain more insight into the effect of cooling, a computer model has been developed that describes heat transfer in the human head during scalp cooling. Of main interest i