Sample records for thermal error model

  1. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    NASA Astrophysics Data System (ADS)

    Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo

    2017-05-01

    To address the low machining accuracy and poorly controlled thermal errors of NC machine tools, spindle thermal error measurement, modeling, and compensation are investigated for a two-turntable five-axis machine tool. Measurement experiments on heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced to select the temperature variables used for thermal error modeling. To analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train its link weights, yielding a new ABC-NN (artificial bee colony-based neural network) modeling method that is used to predict spindle thermal errors. To test the prediction performance of the ABC-NN model, an experimental system is developed, and the prediction results of LSR (least squares regression), ANN, and ABC-NN are compared with the measured spindle thermal errors. The experiments show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, with residual errors smaller than 3 μm, confirming that the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.
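
    As a concrete illustration of the temperature-variable selection step described above, the following Python sketch ranks candidate temperature sensors by their grey relational grade against a measured thermal error series; the distinguishing coefficient rho = 0.5 and the min-max normalization are conventional choices, not details taken from the paper.

    ```python
    import numpy as np

    def grey_relational_grade(error, temps, rho=0.5):
        """Rank temperature sensors by grey relational grade against the thermal error.

        error : (n,) thermal error series (reference sequence)
        temps : (n, m) temperature series for m candidate sensors
        Assumes every series has a nonzero range.
        """
        def norm(x):
            # min-max normalize each column to [0, 1]
            return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

        x0 = norm(error.reshape(-1, 1))
        xi = norm(temps)

        delta = np.abs(xi - x0)                  # pointwise differences to the reference
        d_min, d_max = delta.min(), delta.max()
        coeff = (d_min + rho * d_max) / (delta + rho * d_max)
        return coeff.mean(axis=0)                # one grade per sensor

    # Hypothetical usage: keep the 4 sensors most related to the spindle drift
    # grades = grey_relational_grade(z_drift, T_sensors)
    # selected = np.argsort(grades)[::-1][:4]
    ```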

  2. Thermal Error Test and Intelligent Modeling Research on the Spindle of High Speed CNC Machine Tools

    NASA Astrophysics Data System (ADS)

    Luo, Zhonghui; Peng, Bin; Xiao, Qijun; Bai, Lu

    2018-03-01

    Thermal error is the main factor affecting the accuracy of precision machining. In line with the current research focus on machine tool thermal error, this paper experimentally studies thermal error testing and intelligent modeling for the spindle of a vertical high-speed CNC machine tool. Several thermal error testing devices are designed, in which 7 temperature sensors measure the temperature of the machine tool spindle system and 2 displacement sensors detect the thermal error displacement. A thermal error compensation model with good inversion-prediction capability is established by applying principal component analysis to optimize the temperature measuring points, extracting the characteristic values closely associated with the thermal error displacement, and using artificial neural network techniques.
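
    A minimal sketch of the PCA-based optimization of temperature measuring points mentioned above: the principal components of the standardized temperature histories are computed and one representative sensor is kept per retained component. The 95% variance threshold and the largest-loading selection rule are assumptions for illustration, not the paper's exact procedure.

    ```python
    import numpy as np

    def select_temperature_points(T, var_threshold=0.95):
        """Pick representative temperature channels via PCA loadings.

        T : (n_samples, n_sensors) temperature history for the candidate sensors.
        Returns indices of one representative sensor per retained component.
        """
        Tc = (T - T.mean(axis=0)) / T.std(axis=0)        # standardize channels
        U, s, Vt = np.linalg.svd(Tc, full_matrices=False)
        var_ratio = s**2 / np.sum(s**2)
        k = np.searchsorted(np.cumsum(var_ratio), var_threshold) + 1
        # for each retained component, keep the sensor with the largest loading
        reps = [int(np.argmax(np.abs(Vt[i]))) for i in range(k)]
        return sorted(set(reps))
    ```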

  3. Predicting the thermal/structural performance of the atmospheric trace molecules spectroscopy /ATMOS/ Fourier transform spectrometer

    NASA Technical Reports Server (NTRS)

    Miller, J. M.

    1980-01-01

    ATMOS is a Fourier transform spectrometer for measuring atmospheric trace molecules over a spectral range of 2-16 microns. Assessing the system performance of ATMOS includes evaluating optical system errors induced by thermal and structural effects. To do so, error budgets are assembled during system engineering tasks, and line-of-sight and wavefront deformation predictions (based on operational thermal and vibration environments and computer models) are subsequently compared to those budgets. This paper discusses the thermal/structural error budgets, the modelling and analysis methods used to predict thermally and structurally induced errors, and the comparisons showing that the predictions are within the error budgets.

  4. Numerical modeling of the divided bar measurements

    NASA Astrophysics Data System (ADS)

    LEE, Y.; Keehm, Y.

    2011-12-01

    The divided-bar technique has been used to measure the thermal conductivity of rocks and fragments in heat flow studies. Though widely used, divided-bar measurements can have errors that have not yet been systematically quantified. We used the finite element method (FEM) in a series of numerical studies to evaluate various errors in divided-bar measurements and to suggest more reliable measurement techniques. A divided-bar measurement should be corrected for lateral heat loss from the sides of the rock sample and for the thermal resistance at the contacts between the rock sample and the bar. We first investigated through numerical modeling how the size of these corrections changes with the thickness and thermal conductivity of the rock sample. When we fixed the sample thickness at 10 mm and varied the thermal conductivity, errors in the measured thermal conductivity ranged from 2.02% for 1.0 W/m/K to 7.95% for 4.0 W/m/K. When we fixed the thermal conductivity at 1.38 W/m/K and varied the sample thickness, the error ranged from 2.03% for the 30 mm-thick sample to 11.43% for the 5 mm-thick sample. After applying these corrections, a variety of error analyses for divided-bar measurements were conducted numerically. The thermal conductivity of the two thin standard disks (2 mm in thickness) located at the top and bottom of the rock sample slightly affects the accuracy of thermal conductivity measurements. When the thermal conductivity of a sample is 3.0 W/m/K and that of the two standard disks is 0.2 W/m/K, the relative error in measured thermal conductivity is very small (~0.01%). However, the relative error reaches up to -2.29% for the same sample when the thermal conductivity of the two disks is 0.5 W/m/K. The accuracy of thermal conductivity measurements depends strongly on the thermal conductivity and thickness of the thermal compound applied to reduce thermal resistance at the contacts between the rock sample and the bar. When the thickness of the thermal compound (0.29 W/m/K) is 0.03 mm, the relative error in measured thermal conductivity is 4.01%, while the relative error can be very significant (~12.2%) if the thickness increases to 0.1 mm. We then fixed the thickness (0.03 mm) and varied the thermal conductivity of the thermal compound: the relative error with a 1.0 W/m/K compound is 1.28%, and with a 0.29 W/m/K compound it is 4.06%. Repeating this test with a thicker thermal compound (0.1 mm), the relative error with a 1.0 W/m/K compound is 3.93%, and with a 0.29 W/m/K compound it is 12.2%. In addition, the cell technique of Sass et al. (1971), which is widely used to measure the thermal conductivity of rock fragments, was evaluated using FEM modeling. A total of 483 isotropic and homogeneous spherical rock fragments in the sample holder were used to test the accuracy of the cell technique numerically. The result shows a relative error of -9.61% for rock fragments with a thermal conductivity of 2.5 W/m/K. In conclusion, we report quantified errors in the divided-bar and cell techniques for thermal conductivity measurements of rocks and fragments. We find that FEM modeling can accurately mimic these measurement techniques and can help estimate measurement errors quantitatively.
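
    For orientation, the sketch below shows the idealized one-dimensional series-resistance reduction that underlies a divided-bar measurement, including a simple contact-resistance correction; the standard-disk and compound values are placeholders, and the lateral heat loss treated by the paper's FEM analysis is not represented.

    ```python
    def sample_conductivity(dT_sample, dT_std, d_sample,
                            d_std=0.002, k_std=1.0,
                            d_compound=0.03e-3, k_compound=0.29):
        """Idealized 1-D series-resistance reduction of a divided-bar measurement.

        dT_* are temperature drops (K), d_* thicknesses (m), k_* conductivities (W/m/K).
        The standard-disk and compound parameters are illustrative defaults only.
        """
        q = k_std * dT_std / d_std                  # heat flux density from a standard disk (W/m^2)
        R_apparent = dT_sample / q                  # apparent resistance of the sample section (m^2 K/W)
        R_contact = 2.0 * d_compound / k_compound   # two compound-filled contacts in series
        return d_sample / (R_apparent - R_contact)
    ```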

  5. The Neural-fuzzy Thermal Error Compensation Controller on CNC Machining Center

    NASA Astrophysics Data System (ADS)

    Tseng, Pai-Chung; Chen, Shen-Len

    Geometric errors and structural thermal deformation are factors that influence the machining accuracy of a Computer Numerical Control (CNC) machining center. Researchers have therefore paid attention to thermal error compensation technologies for CNC machine tools. Some real-time error compensation techniques have been successfully demonstrated in both laboratories and industrial sites, but the compensation results still need to be improved. In this research, neural-fuzzy theory is used to derive a thermal prediction model. An IC-type thermometer detects the temperature variation of the heat sources, and the thermal drifts are measured online by a touch-trigger probe with a standard bar. A thermal prediction model is then derived by neural-fuzzy theory from the temperature variation and the thermal drifts. A Graphic User Interface (GUI) system is also built with Inprise C++ Builder to provide a user-friendly operation interface. The experimental results show that the thermal prediction model developed with the neural-fuzzy methodology can improve machining accuracy from 80 µm to 3 µm. Compared with multi-variable linear regression analysis, the compensation accuracy is improved from ±10 µm to ±3 µm.

  6. Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, A. F.; Jacobs, C. S.

    2011-01-01

    The standard VLBI analysis models measurement noise as purely thermal errors following uncorrelated Gaussian distributions. As the price of recording bits steadily decreases, thermal errors will soon no longer dominate; troposphere and instrumentation/clock errors are therefore expected to become increasingly dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become more relevant for optimal analysis. This paper discusses the advantages of including the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow model pioneered by Treuhaft and Lanyi. We show examples of applying these correlated noise spectra to the weighting of VLBI data analysis.

  7. Corrigendum to “Thermophysical properties of U3Si2 to 1773 K”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Joshua Taylor; Nelson, Andrew Thomas; Dunwoody, John Tyler

    2016-12-01

    An error was discovered by the authors in the calculation of thermal diffusivity in “Thermophysical properties of U3Si2 to 1773 K”. The error was caused by an operator mistake in entering the parameters used to fit the temperature-rise-versus-time model needed to calculate the thermal diffusivity. This error propagated to the calculation of thermal conductivity, leading to values that were 18%-28% too large, along with the corresponding calculated Lorenz values.

  8. Asteroid thermal modeling in the presence of reflected sunlight

    NASA Astrophysics Data System (ADS)

    Myhrvold, Nathan

    2018-03-01

    A new derivation of simple asteroid thermal models is presented, investigating the need to account correctly for Kirchhoff's law of thermal radiation when IR observations contain substantial reflected sunlight. The framework applies to both the NEATM and related thermal models. A new parameterization of these models eliminates the dependence of thermal modeling on the visible absolute magnitude H, which is not always available. Monte Carlo simulations are used to assess the potential impact of violating Kirchhoff's law on estimates of physical parameters such as diameter and IR albedo, with an emphasis on NEOWISE results. The NEOWISE papers use ten different models, applied to 12 different combinations of WISE data bands, in 47 different combinations. The most prevalent combinations are simulated, and the accuracy of diameter estimates is found to depend critically on the model and data-band combination. In the best case, full thermal modeling of all four bands with an idealized model yields a 1σ (68.27%) confidence interval of -5% to +6%, but this combination represents just 1.9% of NEOWISE results. Other combinations, representing 42% of the NEOWISE results, have roughly twice that confidence interval, -10% to +12%, before accounting for errors due to irregular shape or other real-world effects that are not simulated. The model and data-band combinations found for the majority of NEOWISE results have much larger systematic and random errors. Kirchhoff's law violation by NEOWISE models leads to estimation-accuracy errors that are strongest for asteroids with W1, W2 band emissivity ε12 in the lowest decile (0.605 ≤ ε12 ≤ 0.780) and the highest decile (0.969 ≤ ε12 ≤ 0.988), corresponding to the highest and lowest deciles of near-IR albedo pIR. The systematic accuracy error between deciles ranges from 5% to as much as 45%, and there are also differences in the random errors. Kirchhoff's law effects also produce large errors in NEOWISE estimates of pIR, particularly for high values. IR observations of asteroids in bands that contain substantial reflected sunlight can largely avoid these problems by adopting the Kirchhoff-law-compliant modeling framework presented here, which is conceptually straightforward and comes without computational cost.

  9. Effects of Correlated Errors on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, Andres; Jacobs, C. S.

    2011-01-01

    As thermal errors are reduced, correlated instrumental and troposphere errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects at higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.

  10. Evaluation of algorithms for geological thermal-inertia mapping

    NASA Technical Reports Server (NTRS)

    Miller, S. H.; Watson, K.

    1977-01-01

    The errors incurred in producing a thermal inertia map are of three general types: measurement, analysis, and model simplification. To emphasize the geophysical relevance of these errors, they were expressed in terms of uncertainty in thermal inertia and compared with the thermal inertia values of geologic materials. Thus the applications and practical limitations of the technique were illustrated. All errors were calculated using the parameter values appropriate to a site at the Raft River, Id. Although these error values serve to illustrate the magnitudes that can be expected from the three general types of errors, extrapolation to other sites should be done using parameter values particular to the area. Three surface temperature algorithms were evaluated: linear Fourier series, finite difference, and Laplace transform. In terms of resulting errors in thermal inertia, the Laplace transform method is the most accurate (260 TIU), the forward finite difference method is intermediate (300 TIU), and the linear Fourier series method the least accurate (460 TIU).

  11. Solar dynamic heat receiver thermal characteristics in low earth orbit

    NASA Technical Reports Server (NTRS)

    Wu, Y. C.; Roschke, E. J.; Birur, G. C.

    1988-01-01

    A simplified system model is under development for evaluating the thermal characteristics and thermal performance of the heat receiver of a solar dynamic spacecraft energy system. Results based on the baseline orbit, power system configuration, and operational conditions are generated for three basic receiver concepts and three concentrator surface slope errors. Receiver thermal characteristics and thermal behavior under LEO conditions are presented. The configuration in which heat is transferred directly to the working fluid is noted to produce the best system and thermal characteristics, as well as the lowest performance degradation with increasing slope error.

  12. Measurement uncertainty and feasibility study of a flush airdata system for a hypersonic flight experiment

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.

    1994-01-01

    Presented is a feasibility and error analysis for a flush airdata system on a hypersonic flight experiment (HYFLITE). HYFLITE heating loads make intrusive airdata measurement impractical. Although this analysis is specific to the HYFLITE vehicle and trajectory, the problems analyzed apply generally to hypersonic vehicles. A layout of the flush-port matrix is shown. Surface pressures are related to airdata parameters using a simple aerodynamic model. The model is linearized using small perturbations and inverted using nonlinear least squares. The effects of various error sources on the overall uncertainty are evaluated using an error simulation. Error sources modeled include boundary-layer/viscous interactions, pneumatic lag, thermal transpiration in the sensor pressure tubing, misalignment in the matrix layout, thermal warping of the vehicle nose, sampling resolution, and transducer error. Using simulated pressure data as input to the estimation algorithm, the effects of the various error sources are analyzed by comparing estimator outputs with the original trajectory. To obtain ensemble averages, the simulation is run repeatedly and output statistics are compiled. Output errors resulting from the various error sources are presented as a function of Mach number, as are the final uncertainties with all modeled error sources included.
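
    A hedged sketch of the kind of nonlinear least-squares inversion described above, using a commonly cited blended flush-port pressure model as a stand-in for the paper's aerodynamic model; the port angles, the blending parameter eps, and the starting guess are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def port_incidence(alpha, beta, lam, phi):
        """cos(theta) between the flow and the local surface normal for ports at cone angle lam, clock angle phi."""
        return (np.cos(alpha) * np.cos(beta) * np.cos(lam)
                + np.sin(lam) * (np.sin(phi) * np.sin(beta)
                                 + np.cos(phi) * np.sin(alpha) * np.cos(beta)))

    def estimate_airdata(p_meas, lam, phi, eps=0.1):
        """Invert flush-port pressures for (alpha, beta, qc, pinf) by nonlinear least squares.

        Uses the blended model p_i = qc*(cos^2(theta_i) + eps*sin^2(theta_i)) + pinf,
        a common simplification rather than the paper's exact model.
        """
        def resid(x):
            alpha, beta, qc, pinf = x
            c = port_incidence(alpha, beta, lam, phi)
            return qc * (c**2 + eps * (1.0 - c**2)) + pinf - p_meas

        x0 = np.array([0.0, 0.0, np.ptp(p_meas), np.min(p_meas)])  # crude initial guess
        return least_squares(resid, x0).x
    ```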

  13. Error compensation for thermally induced errors on a machine tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krulewich, D.A.

    1996-11-08

    Heat flow from internal and external sources and from the environment creates machine deformations, resulting in positioning errors between the tool and the workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The main difficulty is deciding where to place the temperature sensors and how many are required. This research develops a method to determine the number and location of temperature measurements.
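
    A minimal sketch of the linear model described above, together with a hypothetical greedy search over candidate sensors to illustrate how the number and placement of temperature measurements might be chosen; the selection criterion used here (residual RMS of the least-squares fit) is an assumption, not the report's method.

    ```python
    import numpy as np

    def fit_linear_thermal_model(T, delta):
        """Fit delta ≈ c0 + sum_i c_i * T_i by ordinary least squares.

        T     : (n_samples, n_sensors) discrete temperature measurements
        delta : (n_samples,) measured tool-workpiece deflection
        """
        A = np.hstack([np.ones((T.shape[0], 1)), T])
        coeffs, *_ = np.linalg.lstsq(A, delta, rcond=None)
        return coeffs                       # [c0, c1, ..., cm]

    def _residual_rms(T_sub, delta):
        A = np.hstack([np.ones((T_sub.shape[0], 1)), T_sub])
        c, *_ = np.linalg.lstsq(A, delta, rcond=None)
        return float(np.sqrt(np.mean((A @ c - delta) ** 2)))

    def greedy_sensor_subset(T, delta, n_keep):
        """Hypothetical greedy selection: add the sensor that most reduces the fit residual."""
        chosen, remaining = [], list(range(T.shape[1]))
        while len(chosen) < n_keep:
            best = min(remaining, key=lambda j: _residual_rms(T[:, chosen + [j]], delta))
            chosen.append(best)
            remaining.remove(best)
        return chosen
    ```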

  14. Method for automated building of spindle thermal model with use of CAE system

    NASA Astrophysics Data System (ADS)

    Kamenev, S. V.

    2018-03-01

    The spindle is one of the most important units of a metal-cutting machine tool. Its performance is critical to minimizing machining error, especially thermal error. Various methods are applied to improve the thermal behaviour of spindle units. One of the most important is mathematical modelling based on finite element analysis, most commonly realized with CAE systems. This approach, however, cannot address a number of important effects that need to be taken into consideration for proper simulation. In the present article, the authors propose a solution that overcomes these disadvantages through automated thermal model building for the spindle unit using the CAE system ANSYS.

  15. Thermal stability analysis and modelling of advanced perpendicular magnetic tunnel junctions

    NASA Astrophysics Data System (ADS)

    Van Beek, Simon; Martens, Koen; Roussel, Philippe; Wu, Yueh Chang; Kim, Woojin; Rao, Siddharth; Swerts, Johan; Crotti, Davide; Linten, Dimitri; Kar, Gouri Sankar; Groeseneken, Guido

    2018-05-01

    STT-MRAM is a promising non-volatile memory for high-speed applications. The thermal stability factor (Δ = Eb/kT) is a measure of the information retention time, and an accurate determination of the thermal stability is crucial. Recent studies show that a significant error is made when using the conventional methods for Δ extraction. We investigate the origin of this low accuracy. To reduce the error down to 5%, 1000 cycles or multiple ramp rates are necessary. Furthermore, the thermal stabilities extracted from current switching and magnetic field switching appear to be uncorrelated, which cannot be explained by a macrospin model. Measurements at different temperatures show that self-heating together with a domain wall model can explain these uncorrelated Δ. Characterizing self-heating properties is therefore crucial to correctly determine the thermal stability.

  16. Prediction of thermal conductivity of polyvinylpyrrolidone (PVP) electrospun nanocomposite fibers using artificial neural network and prey-predator algorithm.

    PubMed

    Khan, Waseem S; Hamadneh, Nawaf N; Khan, Waqar A

    2017-01-01

    In this study, a multilayer perceptron neural network (MLPNN) was employed to predict the thermal conductivity of PVP electrospun nanocomposite fibers containing multiwalled carbon nanotubes (MWCNTs) and nickel zinc ferrites [(Ni0.6Zn0.4)Fe2O4]. This is the second attempt to apply an MLPNN with the prey-predator algorithm to the prediction of the thermal conductivity of PVP electrospun nanocomposite fibers. The prey-predator algorithm was used to train the neural networks to find the best models, namely those with the minimal sum of squared errors between the experimental testing data and the corresponding model results. The minimal error was found to be 0.0028 for the MWCNTs model and 0.00199 for the Ni-Zn ferrites model. The predicted artificial neural network (ANN) responses were analyzed statistically using the z-test, the correlation coefficient, and the error functions for both inclusions. The predicted ANN responses for the PVP electrospun nanocomposite fibers were compared with the experimental data and found to be in good agreement.

  17. The effect of errors in the assignment of the transmission functions on the accuracy of the thermal sounding of the atmosphere

    NASA Technical Reports Server (NTRS)

    Timofeyev, Y. M.

    1979-01-01

    To test the error introduced by the assumed values of the transmission function for Soviet and American radiometers that sound the atmosphere thermally from orbiting satellites, the assumptions of the transmission calculation are varied with respect to atmospheric CO2 content, transmission frequency, and atmospheric absorption. The error arising from departures of these assumptions from the standard basic model is calculated.

  18. Error and uncertainty in Raman thermal conductivity measurements

    DOE PAGES

    Beechem, Thomas Edwin; Yates, Luke; Graham, Samuel

    2015-04-22

    We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  19. Projection-Based Reduced Order Modeling for Spacecraft Thermal Analysis

    NASA Technical Reports Server (NTRS)

    Qian, Jing; Wang, Yi; Song, Hongjun; Pant, Kapil; Peabody, Hume; Ku, Jentung; Butler, Charles D.

    2015-01-01

    This paper presents a mathematically rigorous, subspace projection-based reduced order modeling (ROM) methodology and an integrated framework to automatically generate reduced order models for spacecraft thermal analysis. Two key steps in the reduced order modeling procedure are described: (1) the acquisition of a full-scale spacecraft model in the ordinary differential equation (ODE) and differential algebraic equation (DAE) form to resolve its dynamic thermal behavior; and (2) the ROM to markedly reduce the dimension of the full-scale model. Specifically, proper orthogonal decomposition (POD) in conjunction with discrete empirical interpolation method (DEIM) and trajectory piece-wise linear (TPWL) methods are developed to address the strong nonlinear thermal effects due to coupled conductive and radiative heat transfer in the spacecraft environment. Case studies using NASA-relevant satellite models are undertaken to verify the capability and to assess the computational performance of the ROM technique in terms of speed-up and error relative to the full-scale model. ROM exhibits excellent agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) along with salient computational acceleration (up to two orders of magnitude speed-up) over the full-scale analysis. These findings establish the feasibility of ROM to perform rational and computationally affordable thermal analysis, develop reliable thermal control strategies for spacecraft, and greatly reduce the development cycle times and costs.
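
    The following sketch shows only the POD/Galerkin projection step for a linear conduction model; the DEIM and TPWL treatment of the nonlinear radiative terms described in the paper is not reproduced here, and the snapshot data and operators are assumed inputs.

    ```python
    import numpy as np

    def pod_basis(snapshots, r):
        """Proper orthogonal decomposition basis from a snapshot matrix.

        snapshots : (n_dof, n_snap) temperature fields sampled from the full-scale model
        r         : number of retained modes
        """
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        return U[:, :r]

    def project_linear_thermal(C, K, q, Phi):
        """Galerkin projection of C dT/dt = -K T + q onto the POD subspace."""
        Cr = Phi.T @ C @ Phi
        Kr = Phi.T @ K @ Phi
        qr = Phi.T @ q
        # integrate the r-dimensional ODE in the reduced coordinates, then reconstruct T ≈ Phi @ Tr
        return Cr, Kr, qr
    ```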

  20. Experimental Verification of Modeled Thermal Distribution Produced by a Piston Source in Physiotherapy Ultrasound

    PubMed Central

    Lopez-Haro, S. A.; Leija, L.

    2016-01-01

    Objectives. To present a quantitative comparison of the thermal patterns produced by the piston-in-a-baffle approach with those generated by a physiotherapy ultrasonic device, and to show the dependence of the thermal patterns on the acoustic intensity distributions. Methods. The finite element (FE) method was used to model an ideal acoustic field and the resulting thermal pattern, which were compared with the experimental acoustic and temperature distributions produced by a real ultrasonic applicator. A thermal model using the measured acoustic profile as input is also presented for comparison. Temperature measurements were carried out with thermocouples inserted in a muscle phantom, and the insertion position of the thermocouples was monitored with ultrasound imaging. Results. Modeled and measured thermal profiles were compared within the first 10 cm of depth. The ideal acoustic field did not adequately represent the measured field, yielding different temperature profiles (errors of 10% to 20%). The experimental field was concentrated near the transducer, producing a region of higher temperatures, while the modeled ideal temperature was distributed linearly along the depth. The error was reduced to 7% when the measured acoustic field was introduced as the input variable in the FE temperature modeling. Conclusions. Temperature distributions are strongly related to the acoustic field distributions. PMID:27999801

  1. Performance modeling of the effects of aperture phase error, turbulence, and thermal blooming on tiled subaperture systems

    NASA Astrophysics Data System (ADS)

    Leakeas, Charles L.; Capehart, Shay R.; Bartell, Richard J.; Cusumano, Salvatore J.; Whiteley, Matthew R.

    2011-06-01

    Laser weapon systems comprised of tiled subapertures are rapidly growing in importance in the directed energy community. Performance models of these laser weapon systems have been developed from numerical simulations with WaveTrain, a high-fidelity wave-optics code developed by MZA Associates. System characteristics such as mutual coherence, differential jitter, and beam-quality rms wavefront error are defined for a focused beam on the target. Engagement scenarios are defined for various platform and target altitudes, speeds, headings, and slant ranges, along with the natural wind speed and heading. Inputs to the performance model include platform and target height and velocities, Fried coherence length, Rytov number, isoplanatic angle, thermal blooming distortion number, Greenwood and Tyler frequencies, and atmospheric transmission. The performance model is fitted to power-in-the-bucket (PIB) values from the simulation results, using the vacuum diffraction-limited spot size as the bucket. The goal is to develop robust performance models for aperture phase error, turbulence, and thermal blooming effects in tiled subaperture systems.

  2. Report of the 1988 2-D Intercomparison Workshop, chapter 3

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Brasseur, Guy; Soloman, Susan; Guthrie, Paul D.; Garcia, Rolando; Yung, Yuk L.; Gray, Lesley J.; Tung, K. K.; Ko, Malcolm K. W.; Isaken, Ivar

    1989-01-01

    Several factors contribute to the errors encountered. With the exception of the line-by-line model, all of the models employ simplifying assumptions that place fundamental limits on their accuracy and range of validity. For example, all 2-D modeling groups use the diffusivity factor approximation. This approximation produces little error in tropospheric H2O and CO2 cooling rates, but can produce significant errors in CO2 and O3 cooling rates at the stratopause. All models suffer from fundamental uncertainties in the shapes and strengths of spectral lines. Thermal flux algorithms used in 2-D tracer transport models produce cooling rates that differ by as much as 40 percent for the same input model atmosphere. Disagreements of this magnitude are important, since the thermal cooling rates must be subtracted from the almost-equal solar heating rates to derive the net radiative heating rates and the 2-D model diabatic circulation. For much of the annual cycle, the net radiative heating rates are comparable in magnitude to the cooling rate differences described. Many of the models underestimate the cooling rates in the middle and lower stratosphere. The consequences of these errors for the net heating rates and the diabatic circulation will depend on their meridional structure, which was not tested here. Other models underestimate the cooling near 1 mbar. Such errors pose potential problems for future interactive ozone assessment studies, since they could produce artificially high temperatures and increased O3 destruction at these levels. These concerns suggest that a great deal of work is needed to improve the performance of the thermal cooling rate algorithms used in 2-D tracer transport models.

  3. A new adaptive estimation method of spacecraft thermal mathematical model with an ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Akita, T.; Takaki, R.; Shima, E.

    2012-04-01

    An adaptive estimation method for spacecraft thermal mathematical models is presented. The method is based on the ensemble Kalman filter, which can effectively handle the nonlinearities contained in the thermal model. The state-space equations of the thermal mathematical model are derived, with both the temperatures and the uncertain thermal characteristic parameters treated as state variables. In this method, the thermal characteristic parameters are estimated automatically as outputs of the filtered state, whereas in the usual thermal model correlation they are identified manually by experienced engineers using a trial-and-error approach. A numerical experiment on a simple small satellite is provided to verify the effectiveness of the presented method.
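
    A minimal sketch of one stochastic ensemble Kalman filter update with an augmented state (node temperatures plus uncertain thermal parameters), assuming hypothetical `propagate` and `h` functions that stand in for the nonlinear thermal model and the sensor mapping; this is an illustration of the filtering idea, not the paper's implementation.

    ```python
    import numpy as np

    def enkf_step(ensemble, y_obs, h, propagate, R, rng):
        """One EnKF update for an augmented state [temperatures; thermal parameters].

        ensemble  : (n_state, n_ens) columns of temperatures and parameters
        y_obs     : (n_obs,) measured sensor temperatures
        h         : observation operator mapping a state column to predicted measurements
        propagate : nonlinear thermal model advancing one state column over one time step
        R         : (n_obs, n_obs) measurement noise covariance
        """
        # forecast: run the nonlinear thermal model for every ensemble member
        Xf = np.column_stack([propagate(x) for x in ensemble.T])
        Yf = np.column_stack([h(x) for x in Xf.T])

        Xp = Xf - Xf.mean(axis=1, keepdims=True)
        Yp = Yf - Yf.mean(axis=1, keepdims=True)
        n_ens = ensemble.shape[1]
        Pxy = Xp @ Yp.T / (n_ens - 1)
        Pyy = Yp @ Yp.T / (n_ens - 1) + R
        K = Pxy @ np.linalg.inv(Pyy)                 # Kalman gain

        # analysis with perturbed observations; parameters update along with temperatures
        Y_pert = y_obs[:, None] + rng.multivariate_normal(np.zeros(len(y_obs)), R, n_ens).T
        return Xf + K @ (Y_pert - Yf)
    ```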

  4. Determination of Vertical Borehole and Geological Formation Properties using the Crossed Contour Method.

    PubMed

    Leyde, Brian P; Klein, Sanford A; Nellis, Gregory F; Skye, Harrison

    2017-03-01

    This paper presents a new method called the Crossed Contour Method for determining the effective properties (borehole radius and ground thermal conductivity) of a vertical ground-coupled heat exchanger. The borehole radius is used as a proxy for the overall borehole thermal resistance. The method has been applied to both simulated and experimental borehole Thermal Response Test (TRT) data using the Duct Storage vertical ground heat exchanger model implemented in the TRansient SYstems Simulation software (TRNSYS). The Crossed Contour Method generates a parametric grid of simulated TRT data for different combinations of borehole radius and ground thermal conductivity in a series of time windows. The error between the average of the simulated and experimental bore field inlet and outlet temperatures is calculated for each set of borehole properties within each time window. Using these data, contours of the minimum error are constructed in the parameter space of borehole radius and ground thermal conductivity. When all of the minimum error contours for each time window are superimposed, the point where the contours cross (intersect) identifies the effective borehole properties for the model that most closely represents the experimental data in every time window and thus over the entire length of the experimental data set. The computed borehole properties are compared with results from existing model inversion methods including the Ground Property Measurement (GPM) software developed by Oak Ridge National Laboratory, and the Line Source Model.
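
    The sketch below is a numerical stand-in for the graphical superposition described above: for each time window it extracts the borehole-radius value that minimizes the error at every ground-conductivity value, and then looks for the conductivity at which those per-window curves nearly coincide. The grid shapes, the error measure, and the coincidence criterion are assumptions for illustration, not the published procedure.

    ```python
    import numpy as np

    def valley_curve(error_grid, rb_values):
        """For each ground-conductivity column, the borehole radius minimizing the window error."""
        return rb_values[np.argmin(error_grid, axis=0)]      # error_grid shaped (n_rb, n_k)

    def crossed_contour(error_windows, rb_values, k_values):
        """Locate the (k_ground, r_b) pair where the per-window minimum-error curves agree best.

        error_windows : list of (n_rb, n_k) RMS-error grids, one per time window
        """
        curves = np.array([valley_curve(g, rb_values) for g in error_windows])  # (n_win, n_k)
        spread = curves.std(axis=0)          # the curves "cross" where their spread is smallest
        j = int(np.argmin(spread))
        return k_values[j], float(curves[:, j].mean())
    ```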

  5. Improved thermal lattice Boltzmann model for simulation of liquid-vapor phase change

    NASA Astrophysics Data System (ADS)

    Li, Qing; Zhou, P.; Yan, H. J.

    2017-12-01

    In this paper, an improved thermal lattice Boltzmann (LB) model is proposed for simulating liquid-vapor phase change, which is aimed at improving an existing thermal LB model for liquid-vapor phase change [S. Gong and P. Cheng, Int. J. Heat Mass Transfer 55, 4923 (2012), 10.1016/j.ijheatmasstransfer.2012.04.037]. First, we emphasize that the replacement of ∇·(λ∇T)/(ρc_V) with ∇·(χ∇T) is an inappropriate treatment for diffuse-interface modeling of liquid-vapor phase change. Furthermore, the error terms ∂_{t0}(Tv) + ∇·(Tvv), which exist in the macroscopic temperature equation recovered from the previous model, are eliminated in the present model in a way that is consistent with the philosophy of the LB method. Moreover, the discrete effect of the source term is also eliminated in the present model. Numerical simulations are performed for droplet evaporation and bubble nucleation to validate the capability of the model for simulating liquid-vapor phase change. It is shown that the numerical results of the improved model agree well with those of a finite-difference scheme. Meanwhile, it is found that the replacement of ∇·(λ∇T)/(ρc_V) with ∇·(χ∇T) leads to significant numerical errors, and the error terms in the recovered macroscopic temperature equation also result in considerable errors.
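
    To make the distinction concrete, the sketch below evaluates both diffusion terms on a 1-D grid with spatially varying λ and ρc_V; the two agree only when ρc_V is uniform, which is the point the abstract makes. The arithmetic face-averaging of the coefficient is an illustrative choice, not the paper's discretization.

    ```python
    import numpy as np

    def diffusion_terms(T, lam, rho_cv, dx):
        """Compare div(lambda grad T)/(rho c_V) with div(chi grad T), chi = lambda/(rho c_V), on a 1-D grid."""
        def div_grad(coeff, T):
            c_face = 0.5 * (coeff[1:] + coeff[:-1])       # coefficient at cell faces (arithmetic average)
            flux = c_face * (T[1:] - T[:-1]) / dx
            out = np.zeros_like(T)
            out[1:-1] = (flux[1:] - flux[:-1]) / dx       # divergence at interior cells
            return out

        term_correct = div_grad(lam, T) / rho_cv          # div(lambda grad T) / (rho c_V)
        term_replaced = div_grad(lam / rho_cv, T)         # div(chi grad T)
        return term_correct, term_replaced                # equal only if rho c_V is spatially uniform
    ```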

  6. A Starshade Petal Error Budget for Exo-Earth Detection and Characterization

    NASA Technical Reports Server (NTRS)

    Shaklan, Stuart B.; Marchen, Luis; Lisman, P. Douglas; Cady, Eric; Martin, Stefan; Thomson, Mark; Dumont, Philip; Kasdin, N. Jeremy

    2011-01-01

    We present a starshade error budget with engineering requirements that are well within the current manufacturing and metrology capabilities. The error budget is based on an observational scenario in which the starshade spins about its axis on timescales short relative to the zodi-limited integration time, typically several hours. The scatter from localized petal errors is smoothed into annuli around the center of the image plane, resulting in a large reduction in the background flux variation while reducing thermal gradients caused by structural shadowing. Having identified the performance sensitivity to petal shape errors with spatial periods of 3-4 cycles/petal as the most challenging aspect of the design, we have adopted and modeled a manufacturing approach that mitigates these perturbations with 1-meter-long precision edge segments positioned using commercial metrology that readily meets assembly requirements. We have performed detailed thermal modeling and show that the expected thermal deformations are well within the requirements as well. We compare the requirements for four cases: a 32 meter diameter starshade with a 1.5 meter telescope, analyzed at 75 and 90 milliarcseconds, and a 40 meter diameter starshade with a 4 meter telescope, analyzed at 60 and 75 milliarcseconds.

  7. Estimating top-of-atmosphere thermal infrared radiance using MERRA-2 atmospheric data

    NASA Astrophysics Data System (ADS)

    Kleynhans, Tania; Montanaro, Matthew; Gerace, Aaron; Kanan, Christopher

    2017-05-01

    Thermal infrared satellite images have been widely used in environmental studies. However, satellites have limited temporal resolution, e.g., 16 days for Landsat or 1 to 2 days for Terra MODIS. This paper investigates the use of the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) reanalysis data product, produced by NASA's Global Modeling and Assimilation Office (GMAO), to predict global top-of-atmosphere (TOA) thermal infrared radiance. The high temporal resolution of the MERRA-2 data product presents opportunities for novel research and applications. Various methods were applied to estimate TOA radiance from MERRA-2 variables, namely (1) a parameterized physics-based method, (2) linear regression models, and (3) non-linear support vector regression. Model prediction accuracy was evaluated using temporally and spatially coincident Moderate Resolution Imaging Spectroradiometer (MODIS) thermal infrared data as reference data. This research found that support vector regression with a radial basis function kernel produced the lowest error rates. Sources of error are discussed and defined. Further research is currently being conducted to train deep learning models to predict TOA thermal radiance.
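
    A minimal sketch of the best-performing approach named above (support vector regression with an RBF kernel), using scikit-learn; the predictor columns, hyperparameters, and train/test split are placeholders rather than the values used in the study.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # X: MERRA-2 predictor columns (e.g., skin temperature, column water vapor, ...);
    # y: coincident MODIS TOA thermal-band radiance. Both arrays are hypothetical here.
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    # model.fit(X_train, y_train)
    # rmse = np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2))
    ```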

  8. Experiments and simulation of thermal behaviors of the dual-drive servo feed system

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Mei, Xuesong; Feng, Bin; Zhao, Liang; Ma, Chi; Shi, Hu

    2015-01-01

    A machine tool equipped with a dual-drive servo feed system can achieve high feed speed as well as high precision. Currently, there are no reports on the thermal behaviors of dual-drive machines, and research on the thermal characteristics of machine tools mainly focuses on steady-state simulation. To explore the influence of thermal characteristics on the precision of a jib boring machine fitted with a dual-drive feed system, thermal equilibrium tests and research on transient thermal-mechanical behaviors are carried out. A laser interferometer, infrared thermography, and a temperature-displacement acquisition system are applied to measure the temperature distribution and thermal deformation at different feed speeds. Subsequently, the finite element method (FEM) is used to analyze the transient thermal behaviors of the boring machine. The complex boundary conditions, such as heat sources and convective heat transfer coefficients, are calculated. Finally, transient variations in temperatures and deformations are compared with the measured values; the errors between measurement and simulation are 2 °C for temperature and 2.5 μm for thermal error. The results demonstrate that the FEM model predicts the thermal error and temperature distribution very well under the specified operating conditions. Moreover, the uneven temperature gradient caused by the asynchronous dual-drive structure results in thermal deformation. Additionally, the positioning accuracy decreases as the measured point moves further from the motor, and both the thermal error and the equilibrium period increase with feed speed. The research proposes a systematic method to measure and simulate the transient thermal behaviors of the boring machine.

  9. Determination of Vertical Borehole and Geological Formation Properties using the Crossed Contour Method

    PubMed Central

    Leyde, Brian P.; Klein, Sanford A; Nellis, Gregory F.; Skye, Harrison

    2017-01-01

    This paper presents a new method called the Crossed Contour Method for determining the effective properties (borehole radius and ground thermal conductivity) of a vertical ground-coupled heat exchanger. The borehole radius is used as a proxy for the overall borehole thermal resistance. The method has been applied to both simulated and experimental borehole Thermal Response Test (TRT) data using the Duct Storage vertical ground heat exchanger model implemented in the TRansient SYstems Simulation software (TRNSYS). The Crossed Contour Method generates a parametric grid of simulated TRT data for different combinations of borehole radius and ground thermal conductivity in a series of time windows. The error between the average of the simulated and experimental bore field inlet and outlet temperatures is calculated for each set of borehole properties within each time window. Using these data, contours of the minimum error are constructed in the parameter space of borehole radius and ground thermal conductivity. When all of the minimum error contours for each time window are superimposed, the point where the contours cross (intersect) identifies the effective borehole properties for the model that most closely represents the experimental data in every time window and thus over the entire length of the experimental data set. The computed borehole properties are compared with results from existing model inversion methods including the Ground Property Measurement (GPM) software developed by Oak Ridge National Laboratory, and the Line Source Model. PMID:28785125

  10. Actualities and Development of Heavy-Duty CNC Machine Tool Thermal Error Monitoring Technology

    NASA Astrophysics Data System (ADS)

    Zhou, Zu-De; Gui, Lin; Tan, Yue-Gang; Liu, Ming-Yao; Liu, Yi; Li, Rui-Ya

    2017-09-01

    Thermal error monitoring technology is the key technological support for solving the thermal error problem of heavy-duty CNC (computer numerical control) machine tools. There are many review articles introducing thermal error research on CNC machine tools, but they mainly focus on thermal issues in small and medium-sized CNC machine tools and seldom cover thermal error monitoring technologies. This paper gives an overview of research on the thermal error of CNC machine tools and emphasizes the study of thermal error in heavy-duty CNC machine tools in three areas: the causes of thermal error in heavy-duty CNC machine tools, temperature monitoring technology, and thermal deformation monitoring technology. A new optical measurement technology for heavy-duty CNC machine tools, fiber Bragg grating (FBG) distributed sensing, is introduced in detail; it forms an intelligent sensing and monitoring system for heavy-duty CNC machine tools. This paper fills the gap left by existing reviews, guides the development of this industrial field, and opens up new areas of research on heavy-duty CNC machine tool thermal error.

  11. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.

    1987-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.

  12. Combustion Device Failures During Space Shuttle Main Engine Development

    NASA Technical Reports Server (NTRS)

    Goetz, Otto K.; Monk, Jan C.

    2005-01-01

    Major causes: limited initial materials properties; limited structural models, especially fatigue; limited thermal models; limited aerodynamic models; human errors; limited component testing; high pressure; complicated control.

  13. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.; ,

    1985-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation. This error evaluation can be used to develop highly accurate CVBEM models of the heat transport process, and the resulting model can be used as a test case for evaluating the precision of domain models based on finite elements or finite differences.

  14. An improved model for soil surface temperature from air temperature in permafrost regions of Qinghai-Xizang (Tibet) Plateau of China

    NASA Astrophysics Data System (ADS)

    Hu, Guojie; Wu, Xiaodong; Zhao, Lin; Li, Ren; Wu, Tonghua; Xie, Changwei; Pang, Qiangqiang; Cheng, Guodong

    2017-08-01

    Soil temperature plays a key role in environmental hydro-thermal processes and is a critical variable linking surface structure to soil processes. More accurate temperature simulation models are needed, particularly for the Qinghai-Xizang (Tibet) Plateau (QXP). In this study, a model was developed to simulate hourly soil surface temperatures from air temperatures. The model incorporates the thermal properties of the soil, vegetation cover, solar radiation, and water flux density, and uses field data collected from the QXP. The model was used to simulate the thermal regime at soil depths of 5 cm, 10 cm, and 20 cm, and the results were compared with those from previous models and with experimental measurements of ground temperature at two different locations. The analysis showed that the newly developed model provided better estimates of the observed field temperatures, with mean absolute error (MAE), root mean square error (RMSE), and normalized standard error (NSEE) of 1.17 °C, 1.30 °C, and 13.84% at 5 cm depth; 0.41 °C, 0.49 °C, and 5.45% at 10 cm depth; and 0.13 °C, 0.18 °C, and 2.23% at 20 cm depth. These findings provide a useful reference for simulating soil temperature and may be incorporated into other ecosystem models requiring soil temperature as an input variable for modeling permafrost changes under global warming.
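
    For reference, a small helper computing the three scores quoted above; the exact normalization used for NSEE is not given in the abstract, so the version below (RMSE divided by the mean absolute observation, in percent) is an assumption.

    ```python
    import numpy as np

    def error_metrics(obs, sim):
        """MAE, RMSE and an assumed normalized standard error for a simulated temperature series."""
        obs, sim = np.asarray(obs, dtype=float), np.asarray(sim, dtype=float)
        resid = sim - obs
        mae = np.mean(np.abs(resid))
        rmse = np.sqrt(np.mean(resid**2))
        nsee = 100.0 * rmse / np.mean(np.abs(obs))   # assumed definition, in percent
        return mae, rmse, nsee
    ```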

  15. Nonlinear dynamic modeling of a V-shaped metal based thermally driven MEMS actuator for RF switches

    NASA Astrophysics Data System (ADS)

    Bakri-Kassem, Maher; Dhaouadi, Rached; Arabi, Mohamed; Estahbanati, Shahabeddin V.; Abdel-Rahman, Eihab

    2018-05-01

    In this paper, we propose a new dynamic model to describe the nonlinear characteristics of a V-shaped (chevron) metal-based thermally driven MEMS actuator. We developed models for two configurations of the thermal actuator. The first MEMS configuration has a small tip connected to the shuttle, while the second has a folded spring and a wide beam attached to the shuttle. A detailed finite element model (FEM) and a lumped element model (LEM) are proposed for each configuration to completely characterize the electro-thermal and thermo-mechanical behaviors. The nonlinear resistivity of the polysilicon layer is extracted from the measured current-voltage (I-V) characteristics of the actuator and the corresponding temperatures simulated in the FEM model, using the room-temperature resistivity of the polysilicon from the manufacturer's handbook. Both models include the nonlinear temperature-dependent material properties. Numerical simulations compared with experimental data from a dedicated MEMS test apparatus verify the accuracy of the proposed LEM in representing the complex dynamics of the thermal MEMS actuator. The LEM and FEM simulation results show errors ranging from a maximum of 13% down to a minimum of 1.4%. The actuator with the lower thermal load to air, which includes a folded spring (FS) and is also known as the high-surface-area actuator, is compared with the actuator without the FS, also known as the low-surface-area actuator, in terms of the I-V characteristics, power consumption, and experimental static and dynamic responses of the tip displacement.

  16. Design and study of water supply system for supercritical unit boiler in thermal power station

    NASA Astrophysics Data System (ADS)

    Du, Zenghui

    2018-04-01

    To design and optimize the boiler feed water system of a supercritical unit, establishing a highly accurate model of the controlled object and its dynamic characteristics is a prerequisite for developing a sound thermal control system. Mechanism-based modeling methods often lead to large systematic errors. Drawing on the information contained in the historical operation data of typical boiler thermal systems, a modern intelligent identification method is used to establish a high-precision quantitative model. This method avoids the difficulties caused by disturbance-experiment modeling on the actual system in the field, and provides a strong reference for the design and optimization of thermal automation control systems in thermal power plants.

  17. Fuel thermal conductivity (FTHCON). Status report. [PWR; BWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagrman, D. L.

    1979-02-01

    An improvement to the fuel thermal conductivity subcode is described, part of the fuel rod behavior modeling task performed at EG and G Idaho, Inc. The original version was published in the Materials Properties (MATPRO) Handbook, Section A-2 (Fuel Thermal Conductivity). The improved version incorporates data that were not included in the previous work and omits some previously used data believed to come from cracked specimens. The models for the effect of porosity on thermal conductivity and for the electronic contribution to thermal conductivity have been completely revised in order to place them on a more mechanistic basis. As a result of these modeling improvements, the standard error of the model with respect to its data base has been significantly reduced.

  18. Local air gap thickness and contact area models for realistic simulation of human thermo-physiological response

    NASA Astrophysics Data System (ADS)

    Psikuta, Agnes; Mert, Emel; Annaheim, Simon; Rossi, René M.

    2018-02-01

    To evaluate the quality of new energy-saving and performance-supporting building and urban settings, thermal sensation and comfort models are often used. The accuracy of these models depends on accurate prediction of the human thermo-physiological response, which in turn is highly sensitive to the local effect of clothing. This study aimed to develop an empirical regression model of the air gap thickness and the contact area in clothing to accurately simulate human thermal and perceptual response. The statistical model reliably predicted both parameters for 14 body regions based on the clothing ease allowances. The effect of the standard error in air gap prediction on the thermo-physiological response was smaller than the differences between healthy humans. It was demonstrated that currently used assumptions and methods for determining the air gap thickness can produce a substantial error in all global, mean, and local physiological parameters and hence lead to false estimation of the resultant physiological state of the human body, thermal sensation, and comfort. Thus, this model may help researchers improve human thermal comfort, health, productivity, safety, and overall sense of well-being while reducing energy consumption and costs in the built environment.

  19. Spacecraft Thermal and Optical Modeling Impacts on Estimation of the GRAIL Lunar Gravity Field

    NASA Technical Reports Server (NTRS)

    Fahnestock, Eugene G.; Park, Ryan S.; Yuan, Dah-Ning; Konopliv, Alex S.

    2012-01-01

    We summarize work on thermo-optical modeling of the two Gravity Recovery And Interior Laboratory (GRAIL) spacecraft. We derived several reconciled spacecraft thermo-optical models of varying detail. The simplest was used to calculate the solar radiation pressure (SRP) acceleration, and the most detailed was used to calculate the acceleration due to thermal re-radiation. For the latter, we used both the output of pre-launch finite-element-based thermal simulations and downlinked temperature sensor telemetry. The estimation process that recovers the lunar gravity field utilizes both a nominal thermal re-radiation acceleration history and an a priori error model derived from it together with an off-nominal history, which bounds parameter uncertainties as informed by sensitivity studies.
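
    A hedged sketch of the flat-plate recoil sum commonly used to approximate thermal re-radiation accelerations; the (2/3) Lambertian emission factor is standard, but the plate list, temperatures, and spacecraft mass are placeholders, and this is not the GRAIL team's actual implementation.

    ```python
    import numpy as np

    SIGMA = 5.670374419e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
    C_LIGHT = 299792458.0      # speed of light, m/s

    def reradiation_acceleration(areas, normals, temps, emissivities, mass):
        """Sum Lambertian re-emission recoil over flat plates (a common approximation).

        Each plate contributes a force of magnitude (2/3) * eps * sigma * T^4 * A / c
        directed opposite its outward unit normal; all inputs are placeholder arrays.
        """
        magnitudes = (2.0 / 3.0) * emissivities * SIGMA * temps**4 * areas / C_LIGHT
        forces = -magnitudes[:, None] * normals        # recoil opposes the emitting face
        return forces.sum(axis=0) / mass
    ```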

  20. Modelling thermal comfort of visitors at urban squares in hot and arid climate using NN-ARX soft computing method

    NASA Astrophysics Data System (ADS)

    Kariminia, Shahab; Motamedi, Shervin; Shamshirband, Shahaboddin; Piri, Jamshid; Mohammadi, Kasra; Hashim, Roslan; Roy, Chandrabhushan; Petković, Dalibor; Bonakdari, Hossein

    2016-05-01

    Visitors use urban space according to their thermal perception and the thermal environment. Thermal adaptation engages the user's behavioural, physiological, and psychological aspects, which play critical roles in the user's ability to assess thermal environments. Previous studies have rarely addressed the effects of factors such as gender, age, and locality on outdoor thermal comfort, particularly in a hot, dry climate. This study investigated the thermal comfort of visitors at two city squares in Iran based on their demographics as well as the role of the thermal environment. Assessing thermal comfort required physical measurements and a questionnaire survey. A non-linear model known as the neural network autoregressive with exogenous input (NN-ARX) was employed. Five indices, physiological equivalent temperature (PET), predicted mean vote (PMV), standard effective temperature (SET), thermal sensation votes (TSV), and mean radiant temperature (Tmrt), were trained and tested using the NN-ARX. The results were then compared to an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). The findings showed the superiority of the NN-ARX over the ANN and the ANFIS. For the NN-ARX model, the root mean square error (RMSE) and mean absolute error (MAE) were 0.53 and 0.36 for PET, 1.28 and 0.71 for PMV, 2.59 and 1.99 for SET, 0.29 and 0.08 for TSV, and 0.19 and 0.04 for Tmrt.
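
    A minimal sketch of an NN-ARX-style regression: lagged values of the target index and of exogenous meteorological inputs are fed to a small MLP. The lag orders, network size, and variable names are illustrative assumptions, not the study's configuration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def make_arx_features(y, u, na=2, nb=2):
        """Build NN-ARX regressors from lagged outputs y and exogenous inputs u.

        y : (n,) target index (e.g., PET); u : (n, m) meteorological inputs.
        """
        rows, targets = [], []
        for t in range(max(na, nb), len(y)):
            rows.append(np.hstack([y[t - na:t], u[t - nb:t].ravel()]))
            targets.append(y[t])
        return np.array(rows), np.array(targets)

    # Hypothetical usage with placeholder series:
    # X, t = make_arx_features(pet_series, meteo_inputs)
    # model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000).fit(X, t)
    ```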

  1. Space Suit Thermal Dynamics

    NASA Technical Reports Server (NTRS)

    Campbell, Anthony B.; Nair, Satish S.; Miles, John B.; Iovine, John V.; Lin, Chin H.

    1998-01-01

    The present NASA space suit (the Shuttle EMU) is a self-contained environmental control system, providing life support, environmental protection, earth-like mobility, and communications. This study considers the thermal dynamics of the space suit as they relate to astronaut thermal comfort control. A detailed dynamic lumped capacitance thermal model of the present space suit is used to analyze the thermal dynamics of the suit with observations verified using experimental and flight data. Prior to using the model to define performance characteristics and limitations for the space suit, the model is first evaluated and improved. This evaluation includes determining the effect of various model parameters on model performance and quantifying various temperature prediction errors in terms of heat transfer and heat storage. The observations from this study are being utilized in two future design efforts, automatic thermal comfort control design for the present space suit and design of future space suit systems for Space Station, Lunar, and Martian missions.

  2. Stability Evaluation of Buildings in Urban Area Using Persistent Scatterer Interferometry - Focused on Thermal Expansion Effect

    NASA Astrophysics Data System (ADS)

    Choi, J. H.; Kim, S. W.; Won, J. S.

    2017-12-01

    The objective of this study is to monitor and evaluate the stability of buildings in Seoul, Korea. The study includes both algorithm development and application to a case study. The development focuses on improving the PSI approach for discriminating among various geophysical phase components and separating them from the target displacement phase. Thermal expansion is one of the key components that hinders precise displacement measurement. The core idea is to optimize the thermal expansion factor using air temperature data and to model the corresponding phase by fitting the residual phase. We used TerraSAR-X SAR data acquired over two years, from 2011 to 2013, in Seoul, Korea, where the seasonal temperature fluctuation is considerable. Another problem is the dense concentration of high-rise buildings in Seoul, which contribute significantly to DEM errors. To avoid a high computational burden and unstable solutions of the nonlinear equation in the unknown parameters (a thermal expansion parameter in addition to the two conventional parameters, linear velocity and DEM error), we separate the phase model into two main steps. First, multi-baseline pairs with very short time intervals, in which the deformation and thermal expansion components are negligible, are used to estimate the DEM errors. Second, single-baseline pairs are used to estimate the two remaining parameters, the linear deformation rate and the thermal expansion. The thermal expansion of buildings correlates closely with the seasonal temperature fluctuation. Figure 1 shows the deformation patterns of two selected buildings in Seoul. In the left-column figures, it is difficult to observe the true ground subsidence because of the large cyclic pattern caused by thermal dilation of the buildings; this dilation often misleads the results into wrong conclusions. After correction by the proposed method, the true ground subsidence could be measured precisely, as shown in the bottom-right figure of Figure 1. The results demonstrate how the thermal expansion phase obscures the time-series measurement of ground motion and how well the proposed approach removes the noise phases caused by thermal expansion and DEM errors. Some of the detected displacements matched well with previously reported events, such as ground subsidence and sinkholes.
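
    A hedged sketch of the second estimation step described above: once the DEM-error term has been removed, the residual phase of a persistent scatterer is split by linear least squares into a linear-motion term and a temperature-dependent thermal-dilation term. The TerraSAR-X wavelength default and the variable scaling are assumptions for illustration, not the authors' exact formulation.

    ```python
    import numpy as np

    def fit_velocity_and_thermal(phase, dt_years, d_temp, wavelength=0.031):
        """Least-squares split of a PS residual phase into linear motion and thermal dilation.

        phase     : (n_ifg,) unwrapped residual phase per interferogram (DEM term already removed)
        dt_years  : (n_ifg,) temporal baselines in years
        d_temp    : (n_ifg,) air-temperature differences between acquisitions (K)
        Model assumed: phase ≈ (4*pi/wavelength) * v * dt + k_thermal * d_temp
        """
        A = np.column_stack([(4.0 * np.pi / wavelength) * dt_years, d_temp])
        (v, k_thermal), *_ = np.linalg.lstsq(A, phase, rcond=None)
        return v, k_thermal    # v in m/yr for wavelength in m; k_thermal in rad/K
    ```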

  3. Simple Forest Canopy Thermal Exitance Model

    NASA Technical Reports Server (NTRS)

    Smith, J. A.; Goltz, S. M.

    1999-01-01

    We describe a model to calculate brightness temperature and surface energy balance for a forest canopy system. The model is an extension of an earlier vegetation only model by inclusion of a simple soil layer. The root mean square error in brightness temperature for a dense forest canopy was 2.5 C. Surface energy balance predictions were also in good agreement. The corresponding root mean square errors for net radiation, latent, and sensible heat were 38.9, 30.7, and 41.4 W/sq m respectively.

  4. A Novel Error Model of Optical Systems and an On-Orbit Calibration Method for Star Sensors.

    PubMed

    Wang, Shuang; Geng, Yunhai; Jin, Rongyu

    2015-12-12

    In order to improve the on-orbit measurement accuracy of star sensors, the effects of image-plane rotary error, image-plane tilt error, and distortions of the optical system resulting from the on-orbit thermal environment were studied. Since these effects degrade the precision of star image point positions, a novel measurement error model building on the traditional error model is explored. Because of the orthonormal characteristics of the image-plane rotary-tilt errors and the strong nonlinearity among these error parameters, it is difficult to calibrate all the parameters simultaneously. To overcome this difficulty, a modified two-step calibration method for the new error model, based on the Extended Kalman Filter (EKF) and the Least Squares Method (LSM), is presented. The former is used to calibrate the principal point drift, focal length error, and optical distortions, while the latter estimates the image-plane rotary-tilt errors. With this calibration method, the star image point position error caused by the above effects is greatly reduced, from 15.42% to 1.389%. Finally, the simulation results demonstrate that the presented measurement error model for star sensors has higher precision. Moreover, the proposed two-step method can effectively calibrate the model error parameters, and the on-orbit calibration precision of star sensors is also noticeably improved.

  5. A simulation technique for predicting thickness of thermal sprayed coatings

    NASA Technical Reports Server (NTRS)

    Goedjen, John G.; Miller, Robert A.; Brindley, William J.; Leissler, George W.

    1995-01-01

    The complexity of many of the components being coated today using the thermal spray process makes the trial and error approach traditionally followed in depositing a uniform coating inadequate, thereby necessitating a more analytical approach to developing robotic trajectories. A two dimensional finite difference simulation model has been developed to predict the thickness of coatings deposited using the thermal spray process. The model couples robotic and component trajectories and thermal spraying parameters to predict coating thickness. Simulations and experimental verification were performed on a rotating disk to evaluate the predictive capabilities of the approach.
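    A toy version of the coupled trajectory/deposition idea is sketched below: a Gaussian spray footprint is accumulated over a rotating disk as the gun traverses it. The footprint shape, deposition rate, and trajectory parameters are illustrative assumptions, not values from the published model.

```python
import numpy as np

# Toy thickness accumulation for a spray gun traversing a rotating disk:
# each time step deposits a Gaussian footprint at the gun's current position
# expressed in the disk's rotating frame.
n, R = 200, 0.1                      # grid size, disk radius (m)
xs = np.linspace(-R, R, n)
X, Y = np.meshgrid(xs, xs)
thickness = np.zeros((n, n))

rate, sigma, dt = 5e-6, 0.01, 0.01   # peak deposition (m/s), footprint (m), step (s)
omega, v_gun = 2 * np.pi, 0.02       # disk speed (rad/s), gun traverse speed (m/s)
for k in range(1000):
    t = k * dt
    r_gun = -R + v_gun * t                               # radial gun position
    gx, gy = r_gun * np.cos(-omega * t), r_gun * np.sin(-omega * t)
    thickness += dt * rate * np.exp(-((X - gx)**2 + (Y - gy)**2) / (2 * sigma**2))
thickness[X**2 + Y**2 > R**2] = 0.0                      # keep only the disk
print(f"max predicted thickness: {thickness.max()*1e6:.1f} um")
```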

  6. Integrated Modeling Activities for the James Webb Space Telescope: Structural-Thermal-Optical Analysis

    NASA Technical Reports Server (NTRS)

    Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.; Parrish, Keith A.; McGinnis, Mark A.; Bluth, Marcel; Kim, Kevin; Ha, Kong Q.

    2004-01-01

    The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2011. This is a continuation of a series of papers on modeling activities for JWST. The structural-thermal-optical, often referred to as STOP, analysis process is used to predict the effect of thermal distortion on optical performance. The benchmark STOP analysis for JWST assesses the effect of an observatory slew on wavefront error. Temperatures predicted using geometric and thermal math models are mapped to a structural finite element model in order to predict thermally induced deformations. Motions and deformations at optical surfaces are then input to optical models, and optical performance is predicted using either an optical ray trace or a linear optical analysis tool. In addition to baseline performance predictions, a process for performing sensitivity studies to assess modeling uncertainties is described.

  7. Thermal Model Development for an X-Ray Mirror Assembly

    NASA Technical Reports Server (NTRS)

    Bonafede, Joseph A.

    2015-01-01

    Space-based x-ray optics require stringent thermal environmental control to achieve the desired image quality. Future x-ray telescopes will employ hundreds of nearly cylindrical, thin mirror shells to maximize effective area, with each shell built from small azimuthal segment pairs for manufacturability. Thermal issues with these thin optics are inevitable because the mirrors must have a nearly unobstructed view of space while maintaining a near-uniform 20 C temperature to avoid thermal deformations. NASA Goddard has been investigating the thermal characteristics of a future x-ray telescope with an image requirement of 5 arc-seconds and only 1 arc-second of focusing error allocated for thermal distortion. The telescope employs 135 effective mirror shells formed from 7320 individual mirror segments mounted in three rings of 18, 30, and 36 modules each. The thermal requirements demand a complex thermal control system and detailed thermal modeling to verify performance. This presentation introduces innovative modeling efforts used for the conceptual design of the mirror assembly and presents results demonstrating the potential feasibility of the thermal requirements.

  8. Improved tests of extra-dimensional physics and thermal quantum field theory from new Casimir force measurements

    NASA Astrophysics Data System (ADS)

    Decca, R. S.; Fischbach, E.; Klimchitskaya, G. L.; Krause, D. E.; López, D.; Mostepanenko, V. M.

    2003-12-01

    We report new constraints on extra-dimensional models and other physics beyond the standard model based on measurements of the Casimir force between two dissimilar metals for separations in the range 0.2–1.2 μm. The Casimir force between a Au-coated sphere and a Cu-coated plate of a microelectromechanical torsional oscillator was measured statically with an absolute error of 0.3 pN. In addition, the Casimir pressure between two parallel plates was determined dynamically with an absolute error of ≈0.6 mPa. Within the limits of experimental and theoretical errors, the results are in agreement with a theory that takes into account the finite conductivity and roughness of the two metals. The level of agreement between experiment and theory was then used to set limits on the predictions of extra-dimensional physics and thermal quantum field theory. It is shown that two theoretical approaches to the thermal Casimir force which predict effects linear in temperature are ruled out by these experiments. Finally, constraints on Yukawa corrections to Newton's law of gravity are strengthened by more than an order of magnitude in the range 56–330 nm.

  9. Using Laser Scanners to Augment the Systematic Error Pointing Model

    NASA Astrophysics Data System (ADS)

    Wernicke, D. R.

    2016-08-01

    The antennas of the Deep Space Network (DSN) rely on precise pointing algorithms to communicate with spacecraft that are billions of miles away. Although the existing systematic error pointing model is effective at reducing blind pointing errors due to static misalignments, several of its terms have a strong dependence on seasonal and even daily thermal variation and are thus not easily modeled. Changes in the thermal state of the structure create a separation from the model and introduce a varying pointing offset. Compensating for this varying offset is possible by augmenting the pointing model with laser scanners. In this approach, laser scanners mounted to the alidade measure structural displacements while a series of transformations generate correction angles. Two sets of experiments were conducted in August 2015 using commercially available laser scanners. When compared with historical monopulse corrections under similar conditions, the computed corrections are within 3 mdeg of the mean. However, although the results show promise, several key challenges relating to the sensitivity of the optical equipment to sunlight render an implementation of this approach impractical. Other measurement devices such as inclinometers may be implementable at a significantly lower cost.

  10. Thermal error analysis and compensation for digital image/volume correlation

    NASA Astrophysics Data System (ADS)

    Pan, Bing

    2018-02-01

    Digital image/volume correlation (DIC/DVC) methods rely on the digital images acquired by digital cameras and x-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating effects or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermally induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high-accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy, and in-situ error correction capability. Most of the work has appeared in our previously published papers, thus its originality is not claimed. Instead, this paper aims to give a comprehensive overview and more insights into our work on thermal error analysis and compensation for DIC/DVC measurements.

  11. Comparing Thermal Process Validation Methods for Salmonella Inactivation on Almond Kernels.

    PubMed

    Jeong, Sanghyup; Marks, Bradley P; James, Michael K

    2017-01-01

    Ongoing regulatory changes are increasing the need for reliable process validation methods for pathogen reduction processes involving low-moisture products; however, the reliability of various validation methods has not been evaluated. Therefore, the objective was to quantify accuracy and repeatability of four validation methods (two biologically based and two based on time-temperature models) for thermal pasteurization of almonds. Almond kernels were inoculated with Salmonella Enteritidis phage type 30 or Enterococcus faecium (NRRL B-2354) at ~10^8 CFU/g, equilibrated to 0.24, 0.45, 0.58, or 0.78 water activity (a_w), and then heated in a pilot-scale, moist-air impingement oven (dry bulb 121, 149, or 177°C; dew point <33.0, 69.4, 81.6, or 90.6°C; v_air = 2.7 m/s) to a target lethality of ~4 log. Almond surface temperatures were measured in two ways, and those temperatures were used to calculate Salmonella inactivation using a traditional (D, z) model and a modified model accounting for process humidity. Among the process validation methods, both methods based on time-temperature models had better repeatability, with replication errors approximately half those of the surrogate (E. faecium). Additionally, the modified model yielded the lowest root mean squared error in predicting Salmonella inactivation (1.1 to 1.5 log CFU/g); in contrast, E. faecium yielded a root mean squared error of 1.2 to 1.6 log CFU/g, and the traditional model yielded an unacceptably high error (3.4 to 4.4 log CFU/g). Importantly, the surrogate and modified model both yielded lethality predictions that were statistically equivalent (α = 0.05) to actual Salmonella lethality. The results demonstrate the importance of methodology, a_w, and process humidity when validating thermal pasteurization processes for low-moisture foods, which should help processors select and interpret validation methods to ensure product safety.
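    The traditional (D, z) calculation referenced above can be sketched as a simple time-temperature integration; the D-value, z-value, and surface temperature history below are placeholders, and the humidity-dependent modification described in the record is not included.

```python
import numpy as np

def log_reduction(times_s, temps_C, D_ref_min, T_ref_C, z_C):
    """Integrate the traditional first-order (D, z) model:
    log-reduction = sum(dt / D(T)), with D(T) = D_ref * 10**((T_ref - T)/z)."""
    D_s = D_ref_min * 10.0 ** ((T_ref_C - np.asarray(temps_C)) / z_C) * 60.0
    dt = np.diff(times_s, prepend=times_s[0])
    return np.sum(dt / D_s)

# Placeholder almond-surface temperature history during moist-air heating
t = np.linspace(0, 600, 61)                    # s
T = 25 + (115 - 25) * (1 - np.exp(-t / 120))   # degC, simple first-order rise
print(f"predicted reduction: "
      f"{log_reduction(t, T, D_ref_min=10.0, T_ref_C=100.0, z_C=15.0):.1f} log")
```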

  12. A Reduced Order Model for Whole-Chip Thermal Analysis of Microfluidic Lab-on-a-Chip Systems

    PubMed Central

    Wang, Yi; Song, Hongjun; Pant, Kapil

    2013-01-01

    This paper presents a Krylov subspace projection-based Reduced Order Model (ROM) for whole microfluidic chip thermal analysis, including conjugate heat transfer. Two key steps in the reduced order modeling procedure are described in detail: (1) the acquisition of a 3D full-scale computational model in state-space form to capture the dynamic thermal behavior of the entire microfluidic chip; and (2) the model order reduction using the Block Arnoldi algorithm to markedly lower the dimension of the full-scale model. Case studies using a practically relevant thermal microfluidic chip are undertaken to establish the capability and to evaluate the computational performance of the reduced order modeling technique. The ROM is compared against the full-scale model and exhibits good agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) and over three orders-of-magnitude acceleration in computational speed. The salient model reusability and real-time simulation capability render it amenable for operational optimization and in-line thermal control and management of microfluidic systems and devices. PMID:24443647
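    The Krylov projection idea can be illustrated with a single-input Arnoldi sketch (the paper uses the Block Arnoldi algorithm for multiple inputs); the matrices below are random-stencil stand-ins for a thermal state-space model, not the authors' chip model.

```python
import numpy as np

def arnoldi_rom(A, B, C, m):
    """Moment-matching reduction at s = 0 using an orthonormal basis V of the
    Krylov subspace span{A^-1 B, A^-2 B, ...} (single input; Block Arnoldi
    generalizes this to multiple inputs). Returns the reduced (Ar, Br, Cr)."""
    n = A.shape[0]
    V = np.zeros((n, m))
    Ainv = np.linalg.inv(A)        # explicit inverse for clarity; use a sparse solve in practice
    v = Ainv @ B.ravel()
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(1, m):
        w = Ainv @ V[:, j - 1]
        for i in range(j):                     # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V.T @ A @ V, V.T @ B, C @ V

# Stand-in stable thermal system: A from a 1-D conduction stencil
n = 200
A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
B = np.zeros((n, 1)); B[0] = 1.0               # heat input at one end
C = np.zeros((1, n)); C[0, -1] = 1.0           # temperature output at the other end
Ar, Br, Cr = arnoldi_rom(A, B, C, m=10)
print(Ar.shape, Br.shape, Cr.shape)            # (10, 10) (10, 1) (1, 10)
```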

  13. Reduction in the write error rate of voltage-induced dynamic magnetization switching using the reverse bias method

    NASA Astrophysics Data System (ADS)

    Ikeura, Takuro; Nozaki, Takayuki; Shiota, Yoichi; Yamamoto, Tatsuya; Imamura, Hiroshi; Kubota, Hitoshi; Fukushima, Akio; Suzuki, Yoshishige; Yuasa, Shinji

    2018-04-01

    Using macro-spin modeling, we studied the reduction in the write error rate (WER) of voltage-induced dynamic magnetization switching by enhancing the effective thermal stability of the free layer using a voltage-controlled magnetic anisotropy change. Marked reductions in WER can be achieved by introducing reverse bias voltage pulses both before and after the write pulse. This procedure suppresses the thermal fluctuations of magnetization in the initial and final states. The proposed reverse bias method can offer a new way of improving the writing stability of voltage-driven spintronic devices.

  14. Dynamic thermal characteristics of heat pipe via segmented thermal resistance model for electric vehicle battery cooling

    NASA Astrophysics Data System (ADS)

    Liu, Feifei; Lan, Fengchong; Chen, Jiqing

    2016-07-01

    Heat pipe cooling for battery thermal management systems (BTMSs) in electric vehicles (EVs) is growing due to its advantages of high cooling efficiency, compact structure and flexible geometry. Considering the transient conduction, phase change and uncertain thermal conditions in a heat pipe, it is challenging to obtain the dynamic thermal characteristics accurately in such a complex heat and mass transfer process. In this paper, a "segmented" thermal resistance model of a heat pipe is proposed based on the thermal circuit method. The equivalent conductivities of the different segments, viz. the evaporator and condenser of the pipe, are used to determine their own thermal parameters and conditions, which are integrated into the thermal model of the battery for a complete three-dimensional (3D) computational fluid dynamics (CFD) simulation. The proposed "segmented" model is shown to be more precise than the "non-segmented" model by comparison of the simulated and experimental temperature distribution and variation of an ultra-thin micro heat pipe (UMHP) battery pack, and has less calculation error in capturing the dynamic thermal behavior needed for exact thermal design, management and control of heat pipe BTMSs. Using the "segmented" model, the cooling effect of the UMHP pack with different natural/forced convection and arrangements is predicted, and the results correspond well to the tests.
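    A minimal thermal-circuit sketch of the "segmented" idea is given below, with separate evaporator, vapor-core, and condenser resistances in series; the resistance values and boundary temperatures are illustrative, not taken from the paper.

```python
# Toy steady-state thermal circuit for a heat pipe split into evaporator,
# vapor-core, and condenser segments (values are illustrative placeholders).
def heat_pipe_temperatures(Q, T_cold, R_evap, R_vapor, R_cond):
    """Series resistance chain: heat source -> evaporator -> vapor -> condenser -> coolant.
    Returns the evaporator-side and condenser-side wall temperatures."""
    R_total = R_evap + R_vapor + R_cond
    T_evap_wall = T_cold + Q * R_total       # hottest point, at the heat source
    T_cond_wall = T_cold + Q * R_cond        # just above the coolant temperature
    return T_evap_wall, T_cond_wall

T_e, T_c = heat_pipe_temperatures(Q=20.0, T_cold=25.0,
                                  R_evap=0.30, R_vapor=0.05, R_cond=0.25)  # W, C, K/W
print(f"evaporator wall ~ {T_e:.1f} C, condenser wall ~ {T_c:.1f} C")
```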

  15. An investigation into multi-dimensional prediction models to estimate the pose error of a quadcopter in a CSP plant setting

    NASA Astrophysics Data System (ADS)

    Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann

    2016-05-01

    The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.

  16. Ellipsoidal geometry in asteroid thermal models - The standard radiometric model

    NASA Technical Reports Server (NTRS)

    Brown, R. H.

    1985-01-01

    The major consequences of ellipsoidal geometry in an otherwise standard radiometric model for asteroids are explored. It is shown that for small deviations from spherical shape a spherical model of the same projected area gives a reasonable approximation to the thermal flux from an ellipsoidal body. It is suggested that large departures from spherical shape require that some correction be made for geometry. Systematic differences in the radii of asteroids derived radiometrically at 10 and 20 microns may result partly from nonspherical geometry. It is also suggested that extrapolations of the rotational variation of thermal flux from a nonspherical body based solely on the change in cross-sectional area are in error.

  17. Performance limitations of temperature-emissivity separation techniques in long-wave infrared hyperspectral imaging applications

    NASA Astrophysics Data System (ADS)

    Pieper, Michael; Manolakis, Dimitris; Truslow, Eric; Cooley, Thomas; Brueggeman, Michael; Jacobson, John; Weisner, Andrew

    2017-08-01

    Accurate estimation or retrieval of surface emissivity from long-wave infrared or thermal infrared (TIR) hyperspectral imaging data acquired by airborne or spaceborne sensors is necessary for many scientific and defense applications. This process consists of two interwoven steps: atmospheric compensation and temperature-emissivity separation (TES). The most widely used TES algorithms for hyperspectral imaging data assume that the emissivity spectra for solids are smooth compared to the atmospheric transmission function. We develop a model to explain and evaluate the performance of TES algorithms using a smoothing approach. Based on this model, we identify three sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using the incorrect temperature, and the errors caused by sensor noise. For each TES smoothing technique, we analyze the bias and variability of the temperature errors, which translate to emissivity errors. The performance model explains how the errors interact to generate temperature errors. Since we assume exact knowledge of the atmosphere, the presented results provide an upper bound on the performance of TES algorithms based on the smoothness assumption.
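    The smoothness assumption can be illustrated with a small synthetic sketch: for each candidate temperature the emissivity is inverted from the measured radiance and the candidate giving the smoothest emissivity spectrum is kept. The Planck-function constants are standard; the emissivity shape, downwelling radiance, and scene temperature are synthetic assumptions, not data from the paper.

```python
import numpy as np

C1 = 1.191042e8   # W um^4 m^-2 sr^-1
C2 = 1.4387752e4  # um K

def planck(wl_um, T):
    """Spectral radiance of a blackbody, W m^-2 sr^-1 um^-1."""
    return C1 / (wl_um**5 * (np.exp(C2 / (wl_um * T)) - 1.0))

def tes_smoothness(wl, L_meas, L_down, T_grid):
    """Smoothness-based TES: invert eps = (L - L_down) / (B(T) - L_down) for each
    candidate T and keep the T whose emissivity has the smallest mean squared
    second difference (i.e., is smoothest)."""
    best_T, best_rough, best_eps = None, np.inf, None
    for T in T_grid:
        eps = (L_meas - L_down) / (planck(wl, T) - L_down)
        rough = np.mean(np.diff(eps, n=2) ** 2)
        if rough < best_rough:
            best_T, best_rough, best_eps = T, rough, eps
    return best_T, best_eps

# Synthetic scene: smooth emissivity, spiky reflected downwelling radiance
wl = np.linspace(8.0, 12.0, 200)
eps_true = 0.96 - 0.02 * np.sin(wl)
L_down = 0.3 * planck(wl, 280.0) * (1 + 0.2 * np.sin(40 * wl))   # sharp "lines"
L_meas = eps_true * planck(wl, 300.0) + (1 - eps_true) * L_down
T_hat, eps_hat = tes_smoothness(wl, L_meas, L_down, np.arange(290.0, 310.0, 0.05))
print(f"retrieved surface temperature: {T_hat:.2f} K (scene generated at 300 K)")
```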

  18. Adaptive Photothermal Emission Analysis Techniques for Robust Thermal Property Measurements of Thermal Barrier Coatings

    NASA Astrophysics Data System (ADS)

    Valdes, Raymond

    The characterization of thermal barrier coating (TBC) systems is increasingly important because they enable gas turbine engines to operate at high temperatures and efficiency. Phase of photothermal emission analysis (PopTea) has been developed to analyze the thermal behavior of the ceramic top-coat of TBCs, as a nondestructive and noncontact method for measuring thermal diffusivity and thermal conductivity. Most TBC applications are on actively cooled, high-temperature turbine blades, which makes it difficult to precisely model heat transfer in the metallic subsystem. This reduces the ability of rote thermal modeling to reflect the actual physical conditions of the system and can lead to higher uncertainty in measured thermal properties. This dissertation investigates fundamental issues underpinning robust thermal property measurements that are adaptive to non-specific, complex, and evolving system characteristics using the PopTea method. A generic and adaptive subsystem PopTea thermal model was developed to account for complex geometry beyond a well-defined coating and substrate system. Without a priori knowledge of the subsystem characteristics, two different measurement techniques were implemented using the subsystem model. In the first technique, the properties of the subsystem were resolved as part of the PopTea parameter estimation algorithm, while the second technique independently resolved the subsystem properties using a differential "bare" subsystem. The confidence in thermal properties measured using the generic subsystem model is similar to that from a standard PopTea measurement on a "well-defined" TBC system. Non-systematic bias error on experimental observations in PopTea measurements due to generic thermal model discrepancies was also mitigated using a regression-based sensitivity analysis. The sensitivity analysis reported measurement uncertainty and was developed into a data reduction method to filter out these "erroneous" observations. It was found that the adverse impact of bias error can be greatly reduced, leaving measurement observations with only random Gaussian noise in PopTea thermal property measurements. Quantifying the influence of the coating-substrate interface in PopTea measurements is important for resolving the thermal conductivity of the coating. However, the reduced significance of this interface in thicker coating systems can give rise to large uncertainties in thermal conductivity measurements. A first step towards improving PopTea measurements for such circumstances has been taken by implementing absolute temperature measurements using harmonically-sustained two-color pyrometry. Although promising, even small uncertainties in thermal emission observations were found to lead to significant noise in temperature measurements. However, PopTea analysis of bulk graphite samples was able to resolve their thermal conductivity to the expected literature values.

  19. Estimating Top-of-Atmosphere Thermal Infrared Radiance Using MERRA-2 Atmospheric Data

    NASA Astrophysics Data System (ADS)

    Kleynhans, Tania

    Space borne thermal infrared sensors have been extensively used for environmental research as well as for cross-calibration of other thermal sensing systems. Thermal infrared data from satellites such as Landsat and Terra/MODIS have limited temporal resolution (with a repeat cycle of 1 to 2 days for Terra/MODIS, and 16 days for Landsat). Thermal instruments with finer temporal resolution on geostationary satellites have limited utility for cross-calibration due to their large view angles. Reanalysis atmospheric data are available on a global spatial grid at three-hour intervals, making them a potential alternative to existing satellite image data. This research explores using the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) reanalysis data product to predict top-of-atmosphere (TOA) thermal infrared radiance globally at time scales finer than available satellite data. The MERRA-2 data product provides global atmospheric data every three hours from 1980 to the present. Due to the high temporal resolution of the MERRA-2 data product, opportunities for novel research and applications are presented. While MERRA-2 has been used in renewable energy and hydrological studies, this work seeks to leverage the model to predict TOA thermal radiance. Two approaches have been followed, namely a physics-based approach and a supervised learning approach, using Terra/MODIS band 31 thermal infrared data as reference. The first physics-based model uses forward modeling to predict TOA thermal radiance. The second model infers the presence of clouds from the MERRA-2 atmospheric data before applying an atmospheric radiative transfer model. The last physics-based model parameterizes the previous model to minimize computation time. The second approach applied four different supervised learning algorithms to the atmospheric data: a linear least squares regression model, a non-linear support vector regression (SVR) model, a multi-layer perceptron (MLP), and a convolutional neural network (CNN). This research found that the multi-layer perceptron model produced the lowest error rates overall, with an RMSE of 1.22 W/(m^2 sr μm) when compared to actual Terra/MODIS band 31 image data. This research further aimed to characterize the errors associated with each method so that any potential user will have the best information available should they wish to apply these methods to their own application.
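    A sketch of the supervised-learning path is given below using scikit-learn's MLPRegressor; the feature layout (profile variables plus a surface term) and the synthetic training data are placeholders for the MERRA-2/MODIS band 31 pairs used in the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Placeholder design matrix: e.g. temperature and humidity at a few pressure
# levels plus a surface term, one row per grid cell / time step.
n_samples, n_levels = 5000, 10
X = rng.normal(size=(n_samples, 2 * n_levels + 1))
# Placeholder target: band-31-like TOA radiance (W m^-2 sr^-1 um^-1)
y = 8.0 + X @ rng.normal(scale=0.2, size=X.shape[1]) + 0.1 * rng.normal(size=n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, mlp.predict(X_te)) ** 0.5
print(f"test RMSE: {rmse:.3f} W m^-2 sr^-1 um^-1")
```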

  20. Evaluation of thermal data for geologic applications

    NASA Technical Reports Server (NTRS)

    Kahle, A. B.; Palluconi, F. D.; Levine, C. J.; Abrams, M. J.; Nash, D. B.; Alley, R. E.; Schieldge, J. P.

    1982-01-01

    Sensitivity studies using thermal models indicated sources of errors in the determination of thermal inertia from HCMM data. Apparent thermal inertia, with only simple atmospheric radiance corrections to the measured surface temperature, would be sufficient for most operational requirements for surface thermal inertia. Thermal data does have additional information about the nature of surface material that is not available in visible and near infrared reflectance data. Color composites of daytime temperature, nighttime temperature, and albedo were often more useful than thermal inertia images alone for discrimination of lithologic boundaries. A modeling study, using the annual heating cycle, indicated the feasibility of looking for geologic features buried under as much as a meter of alluvial material. The spatial resolution of HCMM data is a major limiting factor in the usefulness of the data for geologic applications. Future thermal infrared satellite sensors should provide spatial resolution comparable to that of the LANDSAT data.
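    The apparent thermal inertia mentioned above is commonly computed from a day/night temperature pair and an albedo image; a minimal sketch follows, with an arbitrary scale factor standing in for the solar-geometry constant.

```python
def apparent_thermal_inertia(albedo, T_day_K, T_night_K, scale=1.0):
    """Apparent thermal inertia from a day/night image pair:
    ATI = scale * (1 - albedo) / (T_day - T_night).
    Only simple radiance/albedo corrections are assumed, as in the record above;
    the scale factor is a placeholder for the solar-geometry constant."""
    return scale * (1.0 - albedo) / (T_day_K - T_night_K)

# Example pixel: moderate albedo, 25 K day-night temperature swing
print(apparent_thermal_inertia(albedo=0.25, T_day_K=310.0, T_night_K=285.0))
```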

  1. Equivalent model optimization with cyclic correction approximation method considering parasitic effect for thermoelectric coolers.

    PubMed

    Wang, Ning; Chen, Jiajun; Zhang, Kun; Chen, Mingming; Jia, Hongzhi

    2017-11-21

    As thermoelectric coolers (TECs) have become highly integrated in high-heat-flux chips and high-power devices, the parasitic effect between component layers has become increasingly obvious. In this paper, a cyclic correction method for the TEC model is proposed using the equivalent parameters of the proposed simplified model, which were refined from the intrinsic parameters and parasitic thermal conductance. The results show that the simplified model agrees well with the data of a commercial TEC under different heat loads. Furthermore, the temperature difference of the simplified model is closer to the experimental data than the conventional model and the model containing parasitic thermal conductance at large heat loads. The average errors in the temperature difference between the proposed simplified model and the experimental data are no more than 1.6 K, and the error is only 0.13 K when the absorbed heat power Q_c is equal to 80% of the maximum achievable absorbed heat power Q_max. The proposed method and model provide a more accurate solution for integrated TECs that are small in size.

  2. Prediction of skin temperature and thermal comfort under two-way transient environments.

    PubMed

    Zhou, Xin; Xiong, Jing; Lian, Zhiwei

    2017-12-01

    In this study, three transient environmental conditions, each consisting of one high-temperature phase between two low-temperature phases, were developed, thus creating a temperature rise followed by a temperature fall. Twenty-four subjects (12 males and 12 females) were recruited, and each underwent all three test scenarios. Skin temperatures at seven body parts were measured throughout the experiment. In addition, thermal sensation was surveyed at specific moments using questionnaires. Thermal sensation models, including the PMV model, the Fiala model, and a Chinese model, were applied to predict the subjects' thermal sensation, and their predictions were compared. Results show that most thermal sensations predicted by the Chinese model lie within 0.5 scale units of the observed sensation votes, and that the Chinese model agrees with the observed thermal sensation in transient thermal environments better than the PMV and Fiala models. Further studies should be carried out to improve the performance of the Chinese model for temperature alterations between "very hot" and "hot" environments, since the prediction error in the temperature-fall situation of C5 (37-32°C) was over 0.5 scale units. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Modeling validation and control analysis for controlled temperature and humidity of air conditioning system.

    PubMed

    Lee, Jing-Nang; Lin, Tsung-Min; Chen, Chien-Chih

    2014-01-01

    This study constructs an energy-based model of the thermal system for a controlled-temperature-and-humidity air conditioning system, and introduces the influence of the mass flow rate, heater, and humidifier into the proposed control criteria to achieve controlled temperature and humidity. Then, the reliability of the proposed thermal system model is established by both MATLAB dynamic simulation and validation against the literature. Finally, a PID control strategy is applied to control the air mass flow rate, humidifying capacity, and heating capacity. The simulation results show that the temperature and humidity stabilize after 541 s, the temperature disturbance is only 0.14 °C, the steady-state error of the humidity ratio is 0.0006 kg(w)/kg(da), and the error rate is only 7.5%. The results prove that the proposed system provides effective control of temperature and humidity in an air conditioning system.

  4. Modeling Validation and Control Analysis for Controlled Temperature and Humidity of Air Conditioning System

    PubMed Central

    Lee, Jing-Nang; Lin, Tsung-Min

    2014-01-01

    This study constructs an energy-based model of the thermal system for a controlled-temperature-and-humidity air conditioning system, and introduces the influence of the mass flow rate, heater, and humidifier into the proposed control criteria to achieve controlled temperature and humidity. Then, the reliability of the proposed thermal system model is established by both MATLAB dynamic simulation and validation against the literature. Finally, a PID control strategy is applied to control the air mass flow rate, humidifying capacity, and heating capacity. The simulation results show that the temperature and humidity stabilize after 541 s, the temperature disturbance is only 0.14°C, the steady-state error of the humidity ratio is 0.0006 kgw/kgda, and the error rate is only 7.5%. The results prove that the proposed system provides effective control of temperature and humidity in an air conditioning system. PMID:25250390

  5. Thermal Testing and Model Correlation of the Magnetospheric Multiscale (MMS) Observatories

    NASA Technical Reports Server (NTRS)

    Kim, Jong S.; Teti, Nicholas M.

    2015-01-01

    International Conference on Environmental Systems (ICES), Seattle, WA, NCTS 20964-15. The Magnetospheric Multiscale (MMS) mission is a Solar Terrestrial Probes mission comprising four identically instrumented spacecraft that will use Earth's magnetosphere as a laboratory to study the microphysics of three fundamental plasma processes: magnetic reconnection, energetic particle acceleration, and turbulence. This paper presents the complete thermal balance (TB) test performed on the first of four observatories to go through thermal vacuum (TV) and the minibalance testing that was performed on the subsequent observatories to provide a comparison of all four. The TV and TB tests were conducted in a thermal vacuum chamber at the Naval Research Laboratory (NRL) in Washington, D.C., with the vacuum level higher than 1.3 x 10^-4 Pa (10^-6 torr) and the surrounding temperature reaching -180 C. Three TB test cases were performed: hot operational science, cold operational science, and a cold survival case. In addition to the three balance cases, a two-hour eclipse and a four-hour eclipse simulation were performed during the TV test to provide additional transient data points that represent the orbit in eclipse (Earth's shadow). The goal was to perform testing such that the flight orbital environments could be simulated as closely as possible. A thermal model correlation between the thermal analysis and the test results was completed. Over 400 1-Wire temperature sensors, 200 thermocouples, and 125 flight thermistor temperature sensors recorded data during TV and TB testing. These temperature-versus-time profiles and their agreement with the analytical results obtained using Thermal Desktop and SINDA/FLUINT are discussed. The model correlation for the thermal mathematical model (TMM) is conducted based on the numerical analysis results and the test data. The philosophy of model correlation was to correlate the model to within 3 C of the test data using the standard deviation and mean deviation error calculation. The individual temperature error goal is to be within 5 C and the heater power goal is to be within 5 percent of the test data. The results of the model correlation are discussed, and the effect of some material and interface parameters on the temperature profiles is presented.

  6. Thermal Testing and Model Correlation of the Magnetospheric Multiscale (MMS) Observatories

    NASA Technical Reports Server (NTRS)

    Kim, Jong S.; Teti, Nicholas M.

    2015-01-01

    The Magnetospheric Multiscale (MMS) mission is a Solar Terrestrial Probes mission comprising four identically instrumented spacecraft that will use Earth's magnetosphere as a laboratory to study the microphysics of three fundamental plasma processes: magnetic reconnection, energetic particle acceleration, and turbulence. This paper presents the complete thermal balance (TB) test performed on the first of four observatories to go through thermal vacuum (TV) and the minibalance testing that was performed on the subsequent observatories to provide a comparison of all four. The TV and TB tests were conducted in a thermal vacuum chamber at the Naval Research Laboratory (NRL) in Washington, D.C., with the vacuum level higher than 1.3 x 10^-4 pascals (10^-6 torr) and the surrounding temperature achieving -180 degrees Centigrade. Three TB test cases were performed that included hot operational science, cold operational science and a cold survival case. In addition to the three balance cases, a two-hour eclipse and a four-hour eclipse simulation were performed during the TV test to provide additional transient data points that represent the orbit in eclipse (or Earth's shadow). The goal was to perform testing such that the flight orbital environments could be simulated as closely as possible. A thermal model correlation between the thermal analysis and the test results was completed. Over 400 1-Wire temperature sensors, 200 thermocouples and 125 flight thermistor temperature sensors recorded data during TV and TB testing. These temperature versus time profiles and their agreement with the analytical results obtained using Thermal Desktop and SINDA/FLUINT are discussed. The model correlation for the thermal mathematical model (TMM) is conducted based on the numerical analysis results and the test data. The philosophy of model correlation was to correlate the model to within 3 degrees Centigrade of the test data using the standard deviation and mean deviation error calculation. The individual temperature error goal is to be within 5 degrees Centigrade and the heater power goal is to be within 5 percent of test data. The results of the model correlation are discussed and the effect of some material and interface parameters on the temperature profiles is presented.

  7. System Error Compensation Methodology Based on a Neural Network for a Micromachined Inertial Measurement Unit

    PubMed Central

    Liu, Shi Qiang; Zhu, Rong

    2016-01-01

    Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using neural-network-based identification for MIMU, which capably solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors can be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which capably measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm³) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, due to using a gas medium instead of a mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of the uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively. PMID:26840314

  8. The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates

    PubMed Central

    Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin

    2011-01-01

    A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). Using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of the SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K, and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is also reduced from 0.66 K to 0.44 K, and the MAPE is 1.3%. PMID:22164030

  9. Shocked plagioclase signatures in Thermal Emission Spectrometer data of Mars

    USGS Publications Warehouse

    Johnson, J. R.; Staid, M.I.; Titus, T.N.; Becker, K.

    2006-01-01

    The extensive impact cratering record on Mars combined with evidence from SNC meteorites suggests that a significant fraction of the surface is composed of materials subjected to variable shock pressures. Pressure-induced structural changes in minerals during high-pressure shock events alter their thermal infrared spectral emission features, particularly for feldspars, in a predictable fashion. To understand the degree to which the distribution and magnitude of shock effects influence martian surface mineralogy, we used standard spectral mineral libraries supplemented by laboratory spectra of experimentally shocked bytownite feldspar [Johnson, J.R., Hörz, F., Christensen, P., Lucey, P.G., 2002b. J. Geophys. Res. 107 (E10), doi:10.1029/2001JE001517] to deconvolve Thermal Emission Spectrometer (TES) data from six relatively large (>50 km) impact craters on Mars. We used both TES orbital data and TES mosaics (emission phase function sequences) to study local and regional areas near the craters, and compared the differences between models using single TES detector data and 3 × 2 detector-averaged data. Inclusion of shocked feldspar spectra in the deconvolution models consistently improved the rms errors compared to models in which the spectra were not used, and resulted in modeled shocked feldspar abundances of >15% in some regions. However, the magnitudes of model rms error improvements were within the noise equivalent rms errors for the TES instrument [Hamilton V., personal communication]. This suggests that while shocked feldspars may be a component of the regions studied, their presence cannot be conclusively demonstrated in the TES data analyzed here. If the distributions of shocked feldspars suggested by the models are real, the lack of spatial correlation to crater materials may reflect extensive aeolian mixing of martian regolith materials composed of variably shocked impact ejecta from both local and distant sources. © 2005 Elsevier Inc. All rights reserved.

  10. Performance evaluation of four directional emissivity analytical models with thermal SAIL model and airborne images.

    PubMed

    Ren, Huazhong; Liu, Rongyuan; Yan, Guangjian; Li, Zhao-Liang; Qin, Qiming; Liu, Qiang; Nerry, Françoise

    2015-04-06

    Land surface emissivity is a crucial parameter in the surface status monitoring. This study aims at the evaluation of four directional emissivity models, including two bi-directional reflectance distribution function (BRDF) models and two gap-frequency-based models. Results showed that the kernel-driven BRDF model could well represent directional emissivity with an error less than 0.002, and was consequently used to retrieve emissivity with an accuracy of about 0.012 from an airborne multi-angular thermal infrared data set. Furthermore, we updated the cavity effect factor relating to multiple scattering inside canopy, which improved the performance of the gap-frequency-based models.

  11. Inverse problem to constrain the controlling parameters of large-scale heat transport processes: The Tiberias Basin example

    NASA Astrophysics Data System (ADS)

    Goretzki, Nora; Inbar, Nimrod; Siebert, Christian; Möller, Peter; Rosenthal, Eliyahu; Schneider, Michael; Magri, Fabien

    2015-04-01

    Salty and thermal springs exist along the lakeshore of the Sea of Galilee, which covers most of the Tiberias Basin (TB) in the northern Jordan-Dead Sea Transform, Israel/Jordan. As it is the only freshwater reservoir of the entire area, it is important to study the salinisation processes that pollute the lake. Simulations of thermohaline flow along a 35 km NW-SE profile show that meteoric and relic brines are flushed by the regional flow from the surrounding heights and by thermally induced groundwater flow within the faults (Magri et al., 2015). Several trial-and-error model runs were necessary to calibrate the hydraulic conductivity of both the faults and the major aquifers in order to fit temperature logs and spring salinity. It turned out that the hydraulic conductivity of the faults ranges between 30 and 140 m/yr, whereas the hydraulic conductivity of the Upper Cenomanian aquifer is as high as 200 m/yr. However, large-scale transport processes also depend on other physical parameters, such as thermal conductivity, porosity and the fluid thermal expansion coefficient, which are hardly known. Here, inverse problems (IP) are solved along the NW-SE profile to better constrain the physical parameters (a) hydraulic conductivity, (b) thermal conductivity and (c) thermal expansion coefficient. The PEST code (Doherty, 2010) is applied via the graphical interface FePEST in FEFLOW (Diersch, 2014). The results show that both the thermal and hydraulic conductivities are consistent with the values determined by the trial-and-error calibrations. Besides being an automatic approach that speeds up the calibration process, the IP covers a wide range of parameter values, providing additional solutions not found with the trial-and-error method. Our study shows that geothermal systems like the TB are more comprehensively understood when inverse models are applied to constrain coupled fluid flow processes over large spatial scales. References: Diersch, H.-J.G., 2014. FEFLOW: Finite Element Modeling of Flow, Mass and Heat Transport in Porous and Fractured Media. Springer-Verlag, Berlin Heidelberg, 996 pp. Doherty, J., 2010. PEST: Model-Independent Parameter Estimation, User Manual, 5th Edition. Watermark, Brisbane, Australia. Magri, F., Inbar, N., Siebert, C., Rosenthal, E., Guttman, J., Möller, P., 2015. Transient simulations of large-scale hydrogeological processes causing temperature and salinity anomalies in the Tiberias Basin. Journal of Hydrology, 520, 342-355.

  12. Optical analysis and thermal management of 2-cell strings linear concentrating photovoltaic system

    NASA Astrophysics Data System (ADS)

    Reddy, K. S.; Kamnapure, Nikhilesh R.

    2015-09-01

    This paper presents the optical and thermal analyses of a linear concentrating photovoltaic/thermal collector under different operating conditions. The linear concentrating photovoltaic (CPV) system consists of a highly reflective mirror, a receiver and a semi-dual axis tracking mechanism. The CPV receiver embodies two strings of triple-junction cells (100 cells in each string) adhered to a mild steel circular tube mounted at the focal length of the trough. This system provides 560 W of electricity and 1580 W of heat, which needs to be dissipated by active cooling. An Al2O3/water nanofluid is used as the heat transfer fluid (HTF) flowing through the circular receiver for CPV cell cooling. Optical analysis of the linear CPV system with a 3.35 m2 aperture and a geometric concentration ratio (CR) of 35 is carried out using the Advanced System Analysis Program (ASAP), an optical simulation tool. A non-uniform intensity distribution model of the solar disk is used to model the sun in ASAP. The impact of random errors, including slope error (σ_slope), tracking error (σ_track) and the apparent change in the sun's width (σ_sun), on the optical performance of the collector is shown. The results from the optical simulations show an optical efficiency (η_o) of 88.32% for the 2-cell string CPV concentrator. Thermal analysis of the CPV receiver is carried out with conjugate heat transfer modeling in ANSYS FLUENT-14. Numerical simulations of Al2O3/water nanofluid turbulent forced convection are performed for various parameters, such as the nanoparticle volume fraction (φ) and Reynolds number (Re). The addition of nanoparticles to water enhances the heat transfer by 3.28% - 35.6% for φ = 1% - 6%. The numerical results are compared with literature data and show reasonable agreement.

  13. Sharpening method of satellite thermal image based on the geographical statistical model

    NASA Astrophysics Data System (ADS)

    Qi, Pengcheng; Hu, Shixiong; Zhang, Haijun; Guo, Guangmeng

    2016-04-01

    To improve the effectiveness of thermal sharpening in mountainous regions, paying more attention to the laws of land surface energy balance, a thermal sharpening method based on the geographical statistical model (GSM) is proposed. Explanatory variables were selected from the processes of land surface energy budget and thermal infrared electromagnetic radiation transmission, then high spatial resolution (57 m) raster layers were generated for these variables through spatially simulating or using other raster data as proxies. Based on this, the local adaptation statistical relationship between brightness temperature (BT) and the explanatory variables, i.e., the GSM, was built at 1026-m resolution using the method of multivariate adaptive regression splines. Finally, the GSM was applied to the high-resolution (57-m) explanatory variables; thus, the high-resolution (57-m) BT image was obtained. This method produced a sharpening result with low error and good visual effect. The method can avoid the blind choice of explanatory variables and remove the dependence on synchronous imagery at visible and near-infrared bands. The influences of the explanatory variable combination, sampling method, and the residual error correction on sharpening results were analyzed deliberately, and their influence mechanisms are reported herein.
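    The general sharpening workflow can be sketched as: fit a statistical model between coarse-resolution BT and aggregated explanatory variables, apply it to the fine-resolution variables, and add back the upscaled coarse residual. In the sketch below a gradient-boosting regressor stands in for the multivariate adaptive regression splines used in the paper, and all rasters are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def block_mean(img, f):
    """Aggregate a fine-resolution raster to coarse resolution by block averaging."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

rng = np.random.default_rng(1)
f = 18                                         # ~1026 m / 57 m aggregation factor
fine_vars = rng.normal(size=(3, 90, 90))       # stand-in explanatory rasters (57 m)
fine_bt_true = 290 + 2 * fine_vars[0] - 1.5 * fine_vars[1] + 0.5 * fine_vars[2]

coarse_vars = np.stack([block_mean(v, f) for v in fine_vars])
coarse_bt = block_mean(fine_bt_true, f)        # the "observed" 1026-m BT image

# 1) fit the statistical model at coarse resolution
X_c = coarse_vars.reshape(3, -1).T
model = GradientBoostingRegressor().fit(X_c, coarse_bt.ravel())

# 2) apply it at fine resolution, 3) add the upscaled coarse residual back
X_f = fine_vars.reshape(3, -1).T
bt_sharp = model.predict(X_f).reshape(90, 90)
residual = coarse_bt - model.predict(X_c).reshape(coarse_bt.shape)
bt_sharp += np.kron(residual, np.ones((f, f)))
print(f"sharpening RMSE: {np.sqrt(np.mean((bt_sharp - fine_bt_true)**2)):.2f} K")
```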

  14. Thermal imbalance force modelling for a GPS satellite using the finite element method

    NASA Technical Reports Server (NTRS)

    Vigue, Yvonne; Schutz, Bob E.

    1991-01-01

    Methods of analyzing the perturbation due to thermal radiation and determining its effects on the orbits of GPS satellites are presented, with emphasis on the FEM technique used to calculate the satellite solar panel temperatures that determine the magnitude and direction of the thermal imbalance force. Although this force may not be responsible for all of the force mismodeling, other conditions may act in combination with the thermal imbalance force to produce accelerations on the order of 1e-9 m/s^2. If submeter-accurate orbits and centimeter-level accuracy for geophysical applications are desired, a time-dependent model of the thermal imbalance force should be used, especially when satellites are eclipsing, where the observed errors are larger than for satellites in noneclipsing orbits.

  15. Research on simulation of supercritical steam turbine system in large thermal power station

    NASA Astrophysics Data System (ADS)

    Zhou, Qiongyang

    2018-04-01

    In order to improve the stability and safety of supercritical steam turbine system operation in large thermal power stations, the steam turbine body is modeled in this paper. In accordance with the hierarchical modeling idea, the steam turbine body model, condensing system model, deaeration system model, and regenerative system model are combined to build a simulation model of the steam turbine system according to the connection relationships of its subsystems. Finally, the correctness of the model is verified against design and operation data of a 600 MW supercritical unit. The results show that the maximum simulation error of the model is 2.15%, which meets engineering requirements. This research provides a platform for studying the variable operating conditions of the turbine system and lays a foundation for building a whole-plant model of the thermal power plant.

  16. Development of thermal models of footwear using finite element analysis.

    PubMed

    Covill, D; Guan, Z W; Bailey, M; Raval, H

    2011-03-01

    Thermal comfort is increasingly becoming a crucial factor to be considered in footwear design. The climate inside a shoe is controlled by thermal and moisture conditions and is crucial to attain comfort. Research undertaken has shown that thermal conditions play a dominant role in shoe climate. Development of thermal models that are capable of predicting in-shoe temperature distributions is an effective way forward to undertake extensive parametric studies to assist optimized design. In this paper, two-dimensional and three-dimensional thermal models of in-shoe climate were developed using finite element analysis through commercial code Abaqus. The thermal material properties of the upper shoe, sole, and air were considered. Dry heat flux from the foot was calculated on the basis of typical blood flow in the arteries on the foot. Using the thermal models developed, in-shoe temperatures were predicted to cover various locations for controlled ambient temperatures of 15, 25, and 35 degrees C respectively. The predicted temperatures were compared with multipoint measured temperatures through microsensor technology. Reasonably good correlation was obtained, with averaged errors of 6, 2, and 1.5 per cent, based on the averaged in-shoe temperature for the above three ambient temperatures. The models can be further used to help design shoes with optimized thermal comfort.

  17. Estimation of Thermal Sensation Based on Wrist Skin Temperatures.

    PubMed

    Sim, Soo Young; Koh, Myung Jun; Joo, Kwang Min; Noh, Seungwoo; Park, Sangyun; Kim, Youn Ho; Park, Kwang Suk

    2016-03-23

    Thermal comfort is an essential environmental factor related to quality of life and work effectiveness. We assessed the feasibility of wrist skin temperature monitoring for estimating subjective thermal sensation. We invented a wrist band that simultaneously monitors skin temperatures from the wrist (i.e., the radial artery and ulnar artery regions, and upper wrist) and the fingertip. Skin temperatures from eight healthy subjects were acquired while thermal sensation varied. To develop a thermal sensation estimation model, the mean skin temperature, temperature gradient, time differential of the temperatures, and average power of frequency band were calculated. A thermal sensation estimation model using temperatures of the fingertip and wrist showed the highest accuracy (mean root mean square error [RMSE]: 1.26 ± 0.31). An estimation model based on the three wrist skin temperatures showed a slightly better result to the model that used a single fingertip skin temperature (mean RMSE: 1.39 ± 0.18). When a personalized thermal sensation estimation model based on three wrist skin temperatures was used, the mean RMSE was 1.06 ± 0.29, and the correlation coefficient was 0.89. Thermal sensation estimation technology based on wrist skin temperatures, and combined with wearable devices may facilitate intelligent control of one's thermal environment.
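    A minimal sketch of the estimation step follows: features of the kind listed above (mean skin temperature, a fingertip-wrist gradient, and a time differential) are regressed against sensation votes with ordinary least squares. The data are synthetic and the linear form is an assumption; the published models may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder time series of wrist (radial, ulnar, upper) and fingertip skin
# temperatures (degC) and thermal sensation votes for one subject.
n = 120
T_wrist = 33 + rng.normal(0, 0.5, size=(n, 3))
T_finger = 32 + rng.normal(0, 1.0, size=n)
votes = 0.8 * (T_finger - 32) + rng.normal(0, 0.4, size=n)

# Features of the kind listed in the record: mean skin temperature,
# a fingertip-wrist gradient, and a time differential of the temperatures.
mean_T = np.column_stack([T_wrist, T_finger]).mean(axis=1)
gradient = T_finger - T_wrist.mean(axis=1)
dTdt = np.gradient(mean_T)
X = np.column_stack([mean_T, gradient, dTdt, np.ones(n)])

coef, *_ = np.linalg.lstsq(X, votes, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - votes) ** 2))
print(f"in-sample RMSE of the linear sensation model: {rmse:.2f} scale units")
```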

  18. Estimation of Thermal Sensation Based on Wrist Skin Temperatures

    PubMed Central

    Sim, Soo Young; Koh, Myung Jun; Joo, Kwang Min; Noh, Seungwoo; Park, Sangyun; Kim, Youn Ho; Park, Kwang Suk

    2016-01-01

    Thermal comfort is an essential environmental factor related to quality of life and work effectiveness. We assessed the feasibility of wrist skin temperature monitoring for estimating subjective thermal sensation. We invented a wrist band that simultaneously monitors skin temperatures from the wrist (i.e., the radial artery and ulnar artery regions, and upper wrist) and the fingertip. Skin temperatures from eight healthy subjects were acquired while thermal sensation varied. To develop a thermal sensation estimation model, the mean skin temperature, temperature gradient, time differential of the temperatures, and average power of frequency band were calculated. A thermal sensation estimation model using temperatures of the fingertip and wrist showed the highest accuracy (mean root mean square error [RMSE]: 1.26 ± 0.31). An estimation model based on the three wrist skin temperatures showed a slightly better result to the model that used a single fingertip skin temperature (mean RMSE: 1.39 ± 0.18). When a personalized thermal sensation estimation model based on three wrist skin temperatures was used, the mean RMSE was 1.06 ± 0.29, and the correlation coefficient was 0.89. Thermal sensation estimation technology based on wrist skin temperatures, and combined with wearable devices may facilitate intelligent control of one’s thermal environment. PMID:27023538

  19. Development and evaluation of thermal model reduction algorithms for spacecraft

    NASA Astrophysics Data System (ADS)

    Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus

    2015-05-01

    This paper is concerned with the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming and manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model for the calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For the simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is of major concern and restricts the useful application of these methods. Additional model reduction methods have been developed which account for these constraints. The Matrix Reduction method allows the approximation of the differential equation to reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for the model reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.

  20. Kalman filtered MR temperature imaging for laser induced thermal therapies.

    PubMed

    Fuentes, D; Yung, J; Hazle, J D; Weinberg, J S; Stafford, R J

    2012-04-01

    The feasibility of using a stochastic form of Pennes bioheat model within a 3-D finite element based Kalman filter (KF) algorithm is critically evaluated for the ability to provide temperature field estimates in the event of magnetic resonance temperature imaging (MRTI) data loss during laser induced thermal therapy (LITT). The ability to recover missing MRTI data was analyzed by systematically removing spatiotemporal information from a clinical MR-guided LITT procedure in human brain and comparing predictions in these regions to the original measurements. Performance was quantitatively evaluated in terms of a dimensionless L(2) (RMS) norm of the temperature error weighted by acquisition uncertainty. During periods of no data corruption, observed error histories demonstrate that the Kalman algorithm does not alter the high quality temperature measurement provided by MR thermal imaging. The KF-MRTI implementation considered is seen to predict the bioheat transfer with RMS error < 4 for a short period of time, ∆t < 10 s, until the data corruption subsides. In its present form, the KF-MRTI method currently fails to compensate for consecutive time periods of data loss, ∆t > 10 s.
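
    A minimal sketch of the Kalman-filtering idea applied per voxel: predict with a process model, update only when an MRTI measurement is available, and coast through dropouts on the prediction alone. A random-walk temperature model stands in here for the finite element Pennes bioheat model of the paper, and all noise levels are assumed.

      # Minimal per-voxel Kalman filter sketch for temperature imaging with data
      # dropout.  A random-walk state model stands in here for the finite element
      # Pennes bioheat model used in the paper; all noise levels are assumptions.
      import numpy as np

      def kalman_track(measurements, q=0.05, r=0.25, x0=37.0, p0=1.0):
          """measurements: list of floats (degC) or None when MRTI data are lost."""
          x, p = x0, p0
          estimates = []
          for z in measurements:
              # predict (random-walk model: temperature persists, uncertainty grows)
              p = p + q
              # update only when a measurement is available
              if z is not None:
                  k_gain = p / (p + r)
                  x = x + k_gain * (z - x)
                  p = (1.0 - k_gain) * p
              estimates.append(x)
          return estimates

      # example: a heating curve with a 5-sample dropout
      truth = 37.0 + 0.4 * np.arange(20)
      meas = [t + np.random.normal(0, 0.5) for t in truth]
      for i in range(8, 13):
          meas[i] = None
      est = kalman_track(meas)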

  1. Calibration and temperature correction of heat dissipation matric potential sensors

    USGS Publications Warehouse

    Flint, A.L.; Campbell, G.S.; Ellett, K.M.; Calissendorff, C.

    2002-01-01

    This paper describes how heat dissipation sensors, used to measure soil water matric potential, were analyzed to develop a normalized calibration equation and a temperature correction method. Inference of soil matric potential depends on a correlation between the variable thermal conductance of the sensor's porous ceramic and matric potential. Although this correlation varies among sensors, we demonstrate a normalizing procedure that produces a single calibration relationship. Using sensors from three sources and different calibration methods, the normalized calibration resulted in a mean absolute error of 23% over a matric potential range of -0.01 to -35 MPa. Because the thermal conductivity of variably saturated porous media is temperature dependent, a temperature correction is required for application of heat dissipation sensors in field soils. A temperature correction procedure is outlined that reduces temperature-dependent errors by 10 times, which reduces the matric potential measurement errors by more than 30%. The temperature dependence is well described by a thermal conductivity model that allows for the correction of measurements at any temperature to measurements at the calibration temperature.
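
    A minimal sketch of how a normalized heat-dissipation-sensor calibration can be applied: form a dimensionless temperature rise from dry and wet reference responses, then map it to matric potential through a single calibration curve. The log-linear curve and its coefficients below are placeholders, not the normalized calibration fitted in the paper.

      # Sketch of a normalized heat-dissipation sensor calibration.  The dimensionless
      # temperature rise T* is a common normalization; the log-linear calibration
      # curve and its coefficients are placeholders, not the values fitted in the paper.
      import math

      def normalized_rise(dT, dT_dry, dT_wet):
          """dT: measured temperature rise of the sensor after the heat pulse [K]."""
          return (dT_dry - dT) / (dT_dry - dT_wet)   # 0 = dry, 1 = saturated

      def matric_potential(t_star, a=-0.01, b=8.0):
          """Illustrative calibration: psi = a * exp(b * (1 - T*))  [MPa]."""
          return a * math.exp(b * (1.0 - t_star))

      # example reading
      psi = matric_potential(normalized_rise(dT=2.6, dT_dry=3.2, dT_wet=1.4))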

  2. Optimum employment of satellite indirect soundings as numerical model input

    NASA Technical Reports Server (NTRS)

    Horn, L. H.; Derber, J. C.; Koehler, T. L.; Schmidt, B. D.

    1981-01-01

    The characteristics of satellite-derived temperature soundings that would significantly affect their use as input for numerical weather prediction models were examined. Independent evaluations of satellite soundings were emphasized to better define error characteristics. Results of a Nimbus-6 sounding study reveal an underestimation of the strength of synoptic scale troughs and ridges, and associated gradients in isobaric height and temperature fields. The most significant errors occurred near the Earth's surface and the tropopause. Soundings from the TIROS-N and NOAA-6 satellites were also evaluated. Results again showed an underestimation of upper level trough amplitudes leading to weaker thermal gradient depictions in satellite-only fields. These errors show a definite correlation to the synoptic flow patterns. In a satellite-only analysis used to initialize a numerical model forecast, it was found that these synoptically correlated errors were retained in the forecast sequence.

  3. Optical Modeling Activities for the James Webb Space Telescope (JWST) Project. II; Determining Image Motion and Wavefront Error Over an Extended Field of View with a Segmented Optical System

    NASA Technical Reports Server (NTRS)

    Howard, Joseph M.; Ha, Kong Q.

    2004-01-01

    This is part two of a series on the optical modeling activities for JWST. Starting with the linear optical model discussed in part one, we develop centroid and wavefront error sensitivities for the special case of a segmented optical system such as JWST, where the primary mirror consists of 18 individual segments. Our approach extends standard sensitivity matrix methods used for systems consisting of monolithic optics, where the image motion is approximated by averaging ray coordinates at the image and residual wavefront error is determined with global tip/tilt removed. We develop an exact formulation using the linear optical model, and extend it to cover multiple field points for performance prediction at each instrument aboard JWST. This optical model is then driven by thermal and dynamic structural perturbations in an integrated modeling environment. Results are presented.

  4. Investigation of approximate models of experimental temperature characteristics of machines

    NASA Astrophysics Data System (ADS)

    Parfenov, I. V.; Polyakov, A. N.

    2018-05-01

    This work is devoted to the investigation of various approaches to the approximation of experimental data and the creation of simulation mathematical models of thermal processes in machines with the aim of finding ways to reduce the time of their field tests and reducing the temperature error of the treatments. The main methods of research which the authors used in this work are: the full-scale thermal testing of machines; realization of various approaches at approximation of experimental temperature characteristics of machine tools by polynomial models; analysis and evaluation of modelling results (model quality) of the temperature characteristics of machines and their derivatives up to the third order in time. As a result of the performed researches, rational methods, type, parameters and complexity of simulation mathematical models of thermal processes in machine tools are proposed.

  5. Analytical model for thermal lensing and spherical aberration in diode side-pumped Nd:YAG laser rod having Gaussian pump profile

    NASA Astrophysics Data System (ADS)

    Moghtader Dindarlu, M. H.; Kavosh Tehrani, M.; Saghafifar, H.; Maleki, A.

    2015-12-01

    In this paper, according to the temperature and strain distributions obtained by considering the Gaussian pump profile and the dependence of physical properties on temperature, we derive an analytical model for the refractive index variations of the diode side-pumped Nd:YAG laser rod. We then evaluate this model against a numerical solution; the maximum relative errors are 5% and 10% for variations caused by thermo-optical and thermo-mechanical effects, respectively. Finally, we present an analytical model for calculating the focal length of the thermal lens and the spherical aberration. This model is evaluated against experimental results.

  6. Development of analysis technique to predict the material behavior of blowing agent

    NASA Astrophysics Data System (ADS)

    Hwang, Ji Hoon; Lee, Seonggi; Hwang, So Young; Kim, Naksoo

    2014-11-01

    In order to numerically simulate the foaming behavior of a mastic sealer containing a blowing agent, a foaming model and a driving force model are needed that incorporate the foaming characteristics. An elastic stress model is also required to represent the material behavior of the co-existing phases of the liquid state and the cured polymer. It is important to determine thermal properties such as thermal conductivity and specific heat because the foaming behavior is heavily influenced by temperature change. In this study, three models are proposed to explain the foaming process and the material behavior during and after the process. To obtain the material parameters in each model, the following experiments and numerical simulations are performed: a thermal test, a simple shear test, and a foaming test. The error functions are defined as the differences between the experimental measurements and the numerical simulation results, and the parameters are then determined by minimizing the error functions, as sketched below. To ensure the validity of the obtained parameters, a confirmation simulation for each model is conducted by applying the determined parameters. Cross-verification is performed by measuring the foaming/shrinkage force, and its results tended to follow the experimental results. It was also possible to estimate the micro-deformation occurring in the automobile roof surface by applying the proposed model to the oven process analysis. The application of the developed analysis technique will contribute to designs with minimized micro-deformation.
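
    A minimal sketch of the parameter-identification step described above: define an error function as the squared difference between measurements and model output and minimize it over the model parameters. The exponential expansion model and the synthetic data are assumptions for illustration only.

      # Sketch of identifying model parameters by minimizing an error function
      # between measurements and model output, as described above.  The exponential
      # foaming-expansion model and the data below are illustrative assumptions.
      import numpy as np
      from scipy.optimize import minimize

      t = np.linspace(0.0, 300.0, 31)                 # time [s]
      measured = 4.0 * (1.0 - np.exp(-t / 80.0))      # e.g. measured expansion ratio

      def model(params, t):
          amplitude, tau = params
          return amplitude * (1.0 - np.exp(-t / tau))

      def error_function(params):
          return np.sum((model(params, t) - measured) ** 2)

      result = minimize(error_function, x0=[1.0, 30.0], method="Nelder-Mead")
      amplitude_fit, tau_fit = result.x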

  7. Measuring the thermal boundary resistance of van der Waals contacts using an individual carbon nanotube.

    PubMed

    Hirotani, Jun; Ikuta, Tatsuya; Nishiyama, Takashi; Takahashi, Koji

    2013-01-16

    Interfacial thermal transport via van der Waals interaction is quantitatively evaluated using an individual multi-walled carbon nanotube bonded on a platinum hot-film sensor. The thermal boundary resistance per unit contact area was obtained at the interface between the closed end or sidewall of the nanotube and platinum, gold, or a silicon dioxide surface. When taking into consideration the surface roughness, the thermal boundary resistance at the sidewall is found to coincide with that at the closed end. A new finding is that the thermal boundary resistance between a carbon nanotube and a solid surface is independent of the materials within the experimental errors, which is inconsistent with a traditional phonon mismatch model, which shows a clear material dependence of the thermal boundary resistance. Our data indicate the inapplicability of existing phonon models when weak van der Waals forces are dominant at the interfaces.

  8. Modeling and Error Analysis of a Superconducting Gravity Gradiometer.

    DTIC Science & Technology

    1979-08-01

    ...fundamental limit to instrument sensitivity is the thermal noise of the sensor. For the gradiometer design outlined above, the best sensitivity ... Mapoles at Stanford. Chapter IV determines the relation between dynamic range, the sensor Q, and the thermal noise of the cryogenic accelerometer. ... C.1 Accelerometer Optimization: (1) development and optimization of the loaded diaphragm sensor; (2) determination of the optimal values of the ...

  9. Regional-scale estimates of surface moisture availability and thermal inertia using remote thermal measurements

    NASA Technical Reports Server (NTRS)

    Carlson, T. N.

    1986-01-01

    A review is presented of numerical models which were developed to interpret thermal IR data and to identify the governing parameters and surface energy fluxes recorded in the images. Analytic, predictive, diagnostic and empirical models are described. The limitations of each type of modeling approach are explored in terms of the error sources and inherent constraints due to theoretical or measurement limitations. Sample results of regional-scale soil moisture or evaporation patterns derived from the Heat Capacity Mapping Mission and GOES satellite data through application of the predictive model devised by Carlson (1981) are discussed. The analysis indicates that pattern recognition will probably be highest when data are collected over flat, arid, sparsely vegetated terrain. The soil moisture data then obtained may be accurate to within 10-20 percent.

  10. Location precision analysis of stereo thermal anti-sniper detection system

    NASA Astrophysics Data System (ADS)

    He, Yuqing; Lu, Ya; Zhang, Xiaoyan; Jin, Weiqi

    2012-06-01

    Anti-sniper detection devices are an urgent requirement in modern warfare, and the precision of the anti-sniper detection system is especially important. This paper presents a location precision analysis of an anti-sniper detection system based on a dual thermal imaging system. It mainly discusses two error sources: the digital quantization effects of the cameras, and the estimation of the bullet trajectory coordinates from the infrared images during image matching. The error analysis formula is derived from the stereovision model and the quantization effects of the cameras. From this, we obtain the relationship between the detection accuracy and the system parameters. The analysis in this paper provides the theoretical basis for error compensation algorithms that are put forward to improve the accuracy of the 3D reconstruction of the bullet trajectory in anti-sniper detection devices.

  11. Advancing Technology for Starlight Suppression via an External Occulter

    NASA Technical Reports Server (NTRS)

    Kasdin, N. J.; Spergel, D. N.; Vanderbei, R. J.; Lisman, D.; Shaklan, S.; Thomson, M.; Walkemeyer, P.; Bach, V.; Oakes, E.; Cady, E.; et al.

    2011-01-01

    External occulters provide the starlight suppression needed for detecting and characterizing exoplanets with a much simpler telescope and instrument than is required for the equivalent performing coronagraph. In this paper we describe progress on our Technology Development for Exoplanet Missions project to design, manufacture, and measure a prototype occulter petal. We focus on the key requirement of manufacturing a precision petal while controlling its shape within precise tolerances. The required tolerances are established by modeling the effect that various mechanical and thermal errors have on scatter in the telescope image plane and by suballocating the allowable contrast degradation between these error sources. We discuss the deployable starshade design, representative error budget, thermal analysis, and prototype manufacturing. We also present our metrology system and methodology for verifying that the petal shape meets the contrast requirement. Finally, we summarize the progress to date building the prototype petal.

  12. Robust optimization of a tandem grating solar thermal absorber

    NASA Astrophysics Data System (ADS)

    Choi, Jongin; Kim, Mingeon; Kang, Kyeonghwan; Lee, Ikjin; Lee, Bong Jae

    2018-04-01

    Ideal solar thermal absorbers need to have a high spectral absorptance across the broad solar spectrum to utilize solar radiation effectively. The majority of recent studies on solar thermal absorbers focus on achieving nearly perfect absorption using nanostructures whose characteristic dimension is smaller than the wavelength of sunlight. However, precise fabrication of such nanostructures is not easy in reality; that is, unavoidable errors always occur to some extent in the dimensions of fabricated nanostructures, causing an undesirable deviation in absorption performance between the designed structure and the actually fabricated one. In order to minimize the variation in the solar absorptance due to fabrication errors, robust optimization can be performed during the design process. However, optimization of a solar thermal absorber considering all design variables often requires tremendous computational cost to find a combination of design variables that offers robustness as well as high performance. To achieve this goal, we apply robust optimization using the Kriging method and a genetic algorithm to design a tandem grating solar absorber. By constructing a surrogate model through the Kriging method, the computational cost can be substantially reduced because exact calculation of the performance for every combination of variables is not necessary. Using the surrogate model and the genetic algorithm, we successfully design an effective solar thermal absorber exhibiting a low level of performance degradation due to the fabrication uncertainty of the design variables.
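
    A compact sketch of the surrogate-plus-robustness idea: fit a Gaussian-process (Kriging-type) surrogate to sampled performance values, then score candidate designs by their average surrogate-predicted performance under assumed fabrication perturbations. The toy absorptance function, the perturbation magnitude, and the random search used in place of a full genetic algorithm are all assumptions.

      # Sketch of Kriging-surrogate robust design.  A Gaussian-process surrogate is
      # fitted to sampled performance values; candidates are then scored by their
      # average surrogate-predicted performance under assumed fabrication errors.
      # The toy absorptance function, perturbation size, and the random search used
      # in place of a genetic algorithm are all assumptions.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor

      rng = np.random.default_rng(0)

      def true_absorptance(x):        # stand-in for the expensive EM simulation
          w, h = x
          return np.exp(-((w - 0.6) ** 2 + (h - 0.4) ** 2) / 0.05)

      # 1. sample the design space and fit the surrogate
      X = rng.uniform(0.0, 1.0, size=(40, 2))
      y = np.array([true_absorptance(x) for x in X])
      surrogate = GaussianProcessRegressor(normalize_y=True).fit(X, y)

      # 2. robust score: mean predicted absorptance under fabrication perturbations
      def robust_score(x, sigma=0.03, n=64):
          perturbed = x + rng.normal(0.0, sigma, size=(n, 2))
          return surrogate.predict(perturbed).mean()

      # 3. simple random search standing in for the genetic algorithm
      candidates = rng.uniform(0.0, 1.0, size=(500, 2))
      best = max(candidates, key=robust_score)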

  13. A Nonlinear Adaptive Filter for Gyro Thermal Bias Error Cancellation

    NASA Technical Reports Server (NTRS)

    Galante, Joseph M.; Sanner, Robert M.

    2012-01-01

    Deterministic errors in angular rate gyros, such as thermal biases, can have a significant impact on spacecraft attitude knowledge. In particular, thermal biases are often the dominant error source in MEMS gyros after calibration. Filters, such as MEKFs, are commonly used to mitigate the impact of gyro errors and gyro noise on spacecraft closed loop pointing accuracy, but often have difficulty in rapidly changing thermal environments and can be computationally expensive. In this report an existing nonlinear adaptive filter is used as the basis for a new nonlinear adaptive filter designed to estimate and cancel thermal bias effects. A description of the filter is presented along with an implementation suitable for discrete-time applications. A simulation analysis demonstrates the performance of the filter in the presence of noisy measurements and provides a comparison with existing techniques.
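
    A minimal discrete-time sketch of online estimation of a slowly varying gyro bias against a known reference rate. The first-order adaptation law, its gain, and the temperature-driven bias profile are assumptions for illustration; this is not the adaptive filter derived in the report.

      # Discrete-time sketch of online gyro bias estimation.  The reference rate,
      # the temperature-driven bias profile, and the first-order adaptation law with
      # gain `gamma` are assumptions for illustration; they are not the adaptive
      # filter derived in the report.
      import numpy as np

      dt = 0.01                                   # sample period [s]
      n = 5000
      t = np.arange(n) * dt
      true_rate = 0.02 * np.sin(0.5 * t)          # rad/s, assumed known reference
      thermal_bias = 0.001 * (1.0 + t / t[-1])    # slowly growing bias [rad/s]
      gyro = true_rate + thermal_bias + np.random.normal(0.0, 5e-4, n)

      gamma = 2.0                                 # adaptation gain (assumed)
      bias_hat = 0.0
      bias_history = np.empty(n)
      for k in range(n):
          rate_error = (gyro[k] - bias_hat) - true_rate[k]
          bias_hat += gamma * rate_error * dt     # drive the residual rate error to zero
          bias_history[k] = bias_hat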

  14. Fiber Bragg grating temperature sensors in a 6.5-MW generator exciter bridge and the development and simulation of its thermal model.

    PubMed

    de Morais Sousa, Kleiton; Probst, Werner; Bortolotti, Fernando; Martelli, Cicero; da Silva, Jean Carlos Cardozo

    2014-09-05

    This work reports the thermal modeling and characterization of a thyristor. The thyristor is used in a 6.5-MW generator excitation bridge. Temperature measurements are performed using fiber Bragg grating (FBG) sensors. These sensors have the benefits of being totally passive and immune to electromagnetic interference and also multiplexed in a single fiber. The thyristor thermal model consists of a second order equivalent electric circuit, and its power losses lead to an increase in temperature, while the losses are calculated on the basis of the excitation current in the generator. Six multiplexed FBGs are used to measure temperature and are embedded to avoid the effect of the strain sensitivity. The presented results show a relationship between field current and temperature oscillation and prove that this current can be used to determine the thermal model of a thyristor. The thermal model simulation presents an error of 1.5 °C, while the FBG used allows for the determination of the thermal behavior and the field current dependence. Since the temperature is a function of the field current, the corresponding simulation can be used to estimate the temperature in the thyristors.
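
    A minimal sketch of a second-order (two-RC, Foster-type) thermal network driven by losses computed from the field current, which is the structure the abstract describes. The network parameters and the conduction-loss model below are assumed values, not the identified thyristor model.

      # Sketch of a second-order (two-RC) thermal network for a thyristor, driven by
      # losses derived from the excitation (field) current.  The Foster-network
      # parameters and the loss model below are assumptions, not the identified values.
      import numpy as np

      def losses(i_field, v_on=1.2, r_on=0.5e-3):
          return v_on * i_field + r_on * i_field ** 2        # conduction losses [W]

      def junction_temperature(i_field, dt=1.0, t_amb=40.0,
                               r_th=(0.02, 0.03), c_th=(400.0, 1500.0)):
          """Explicit-Euler integration of two series RC stages (Foster model)."""
          dT = np.zeros(2)                                    # temperature rise per stage
          history = []
          for i in i_field:
              p = losses(i)
              for k in range(2):
                  dT[k] += dt * (p - dT[k] / r_th[k]) / c_th[k]
              history.append(t_amb + dT.sum())
          return np.array(history)

      # example: a field-current step
      i_field = np.concatenate([np.full(600, 800.0), np.full(600, 1200.0)])
      t_j = junction_temperature(i_field)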

  15. Fiber Bragg Grating Temperature Sensors in a 6.5-MW Generator Exciter Bridge and the Development and Simulation of Its Thermal Model

    PubMed Central

    de Morais Sousa, Kleiton; Probst, Werner; Bortolotti, Fernando; Martelli, Cicero; da Silva, Jean Carlos Cardozo

    2014-01-01

    This work reports the thermal modeling and characterization of a thyristor. The thyristor is used in a 6.5-MW generator excitation bridge. Temperature measurements are performed using fiber Bragg grating (FBG) sensors. These sensors have the benefits of being totally passive and immune to electromagnetic interference and also multiplexed in a single fiber. The thyristor thermal model consists of a second order equivalent electric circuit, and its power losses lead to an increase in temperature, while the losses are calculated on the basis of the excitation current in the generator. Six multiplexed FBGs are used to measure temperature and are embedded to avoid the effect of the strain sensitivity. The presented results show a relationship between field current and temperature oscillation and prove that this current can be used to determine the thermal model of a thyristor. The thermal model simulation presents an error of 1.5 °C, while the FBG used allows for the determination of the thermal behavior and the field current dependence. Since the temperature is a function of the field current, the corresponding simulation can be used to estimate the temperature in the thyristors. PMID:25198007

  16. Correlation of spacecraft thermal mathematical models to reference data

    NASA Astrophysics Data System (ADS)

    Torralbo, Ignacio; Perez-Grande, Isabel; Sanz-Andres, Angel; Piqueras, Javier

    2018-03-01

    Model-to-test correlation is a frequent problem in spacecraft thermal control design. The idea is to determine the values of the parameters of the thermal mathematical model (TMM) that allow a good fit between the TMM results and test data to be reached, in order to reduce the uncertainty of the mathematical model. Quite often this task is performed manually, mainly because good engineering knowledge and experience are needed to reach a successful compromise, but the use of a mathematical tool could facilitate this work. The correlation process can be considered as the minimization of the error of the model results with regard to the reference data. In this paper, a simple method is presented suitable to solve the TMM-to-test correlation problem, using a Jacobian matrix formulation and the Moore-Penrose pseudo-inverse, generalized to include several load cases. In addition, in simple cases, this method also allows analytical solutions to be obtained, which helps to analyze some problems that appear when the Jacobian matrix is singular. To show the implementation of the method, two problems have been considered, one more academic, and the other the TMM of an electronic box of the PHI instrument of the ESA Solar Orbiter mission, to be flown in 2019. The use of singular value decomposition of the Jacobian matrix to analyze and reduce these models is also shown. The error in parameter space is used to assess the quality of the correlation results in both models.
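
    A minimal sketch of the correlation idea: iteratively update the model parameters with the Moore-Penrose pseudo-inverse of a finite-difference Jacobian so that the predicted temperatures approach the reference data. The two-node steady-state model and the reference temperatures are illustrative assumptions, not the paper's formulation for multiple load cases.

      # Minimal sketch of TMM-to-test correlation: iteratively update model parameters
      # with the Moore-Penrose pseudo-inverse of a finite-difference Jacobian so that
      # predicted temperatures approach the reference data.  The two-node steady-state
      # model and the reference values are illustrative assumptions.
      import numpy as np

      def model_temperatures(p, q=(5.0, 2.0), t_sink=20.0):
          """p = (gl1, gl2): linear conductances node->sink; returns node temperatures."""
          gl1, gl2 = p
          return np.array([t_sink + q[0] / gl1, t_sink + q[1] / gl2])

      reference = np.array([70.0, 45.0])          # e.g. thermal balance test data
      p = np.array([0.15, 0.15])                  # initial parameter guess [W/K]

      for _ in range(20):
          residual = reference - model_temperatures(p)
          if np.linalg.norm(residual) < 1e-6:
              break
          # finite-difference Jacobian d(temperature)/d(parameter)
          J = np.zeros((2, 2))
          for j in range(2):
              dp = np.zeros(2); dp[j] = 1e-6
              J[:, j] = (model_temperatures(p + dp) - model_temperatures(p)) / 1e-6
          p = p + np.linalg.pinv(J) @ residual    # Moore-Penrose update step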

  17. Thermal conductivity of molten salt mixtures: Theoretical model supported by equilibrium molecular dynamics simulations.

    PubMed

    Gheribi, Aïmen E; Chartrand, Patrice

    2016-02-28

    A theoretical model for the description of thermal conductivity of molten salt mixtures as a function of composition and temperature is presented. The model is derived by considering the classical kinetic theory and requires, for its parametrization, only information on thermal conductivity of pure compounds. In this sense, the model is predictive. For most molten salt mixtures, no experimental data on thermal conductivity are available in the literature. This is a hindrance for many industrial applications (in particular for thermal energy storage technologies) as well as an obvious barrier for the validation of the theoretical model. To alleviate this lack of data, a series of equilibrium molecular dynamics (EMD) simulations has been performed on several molten chloride systems in order to determine their thermal conductivity in the entire range of composition at two different temperatures: 1200 K and 1300 K. The EMD simulations are first principles type, as the potentials used to describe the interactions have been parametrized on the basis of first principle electronic structure calculations. In addition to the molten chlorides system, the model predictions are also compared to a recent similar EMD study on molten fluorides and with the few reliable experimental data available in the literature. The accuracy of the proposed model is within the reported numerical and/or experimental errors.

  18. The effects of nuclear data library processing on Geant4 and MCNP simulations of the thermal neutron scattering law

    NASA Astrophysics Data System (ADS)

    Hartling, K.; Ciungu, B.; Li, G.; Bentoumi, G.; Sur, B.

    2018-05-01

    Monte Carlo codes such as MCNP and Geant4 rely on a combination of physics models and evaluated nuclear data files (ENDF) to simulate the transport of neutrons through various materials and geometries. The grid representation used for the final-state scattering energies and angles associated with neutron scattering interactions can significantly affect the predictions of these codes. In particular, the default thermal scattering libraries used by MCNP6.1 and Geant4.10.3 do not accurately reproduce the ENDF/B-VII.1 model in simulations of the double-differential cross section for thermal neutrons interacting with hydrogen nuclei in a thin layer of water. However, agreement between model and simulation can be achieved within the statistical error by re-processing ENDF/B-VII.1 thermal scattering libraries with the NJOY code. The structure of the thermal scattering libraries and sampling algorithms in MCNP and Geant4 are also reviewed.

  19. Step - wise transient method - Influence of heat source inertia

    NASA Astrophysics Data System (ADS)

    Malinarič, Svetozár; Dieška, Peter

    2016-07-01

    The step-wise transient (SWT) method is an experimental technique for measuring the thermal diffusivity and conductivity of materials. The theoretical model and experimental apparatus are presented, and the influence of the heat source capacity is investigated using experiment simulation. Specimens of low-density polyethylene (LDPE) were measured, yielding a thermal diffusivity of 0.165 mm2/s and a thermal conductivity of 0.351 W/mK, with a coefficient of variation of less than 1.4%. The heat source capacity caused a systematic error in the results of less than 1%.

  20. Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, A.; Jacobs, C. S.; Ratcliff, J. T.

    2012-01-01

    The standard VLBI analysis models the distribution of measurement noise as Gaussian. Because the price of recording bits is steadily decreasing, thermal errors will soon no longer dominate. As a result, it is expected that troposphere and instrumentation/clock errors will increasingly become more dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become increasingly relevant for optimal analysis. We discuss the advantages of modeling the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow assumption pioneered by Treuhaft and Lanyi. We then apply these correlated noise spectra to the weighting of VLBI data analysis for two case studies: X/Ka-band global astrometry and Earth orientation. In both cases we see improved results when the analyses are weighted with correlated noise models vs. the standard uncorrelated models. The X/Ka astrometric scatter improved by approx. 10% and the systematic Δδ vs. δ slope decreased by approx. 50%. The TEMPO Earth orientation results improved by 17% in baseline transverse and 27% in baseline vertical.
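
    A minimal sketch of weighting a fit with a correlated-noise covariance rather than a diagonal one: build a covariance from a power-law (Kolmogorov-like) temporal structure function, add a white thermal-noise floor, and solve the generalized least-squares problem. The exponent scaling, correlation time, epochs, and toy delay model are assumptions, not the Treuhaft-Lanyi formulation itself.

      # Sketch of weighting a fit with a correlated-noise covariance instead of a
      # diagonal one.  The covariance is built from a power-law (Kolmogorov-like)
      # temporal structure function; the scaling, correlation time, observation
      # times, and the toy linear delay model are illustrative assumptions.
      import numpy as np

      t = np.linspace(0.0, 3600.0, 40)                     # observation epochs [s]
      A = np.column_stack([np.ones_like(t), t / 3600.0])   # toy model: offset + rate

      def tropo_covariance(t, sigma2=25.0, t_corr=900.0, exponent=5.0 / 3.0):
          dt = np.abs(t[:, None] - t[None, :])
          structure = (dt / t_corr) ** exponent            # D(tau) ~ tau^(5/3)
          return sigma2 * np.exp(-0.5 * structure)         # positive-definite proxy

      C = tropo_covariance(t) + 1.0 * np.eye(len(t))       # add thermal (white) noise

      # generalized least squares: x = (A^T C^-1 A)^-1 A^T C^-1 y
      rng = np.random.default_rng(1)
      y = A @ np.array([10.0, 2.0]) + rng.multivariate_normal(np.zeros(len(t)), C)
      Cinv_A = np.linalg.solve(C, A)
      Cinv_y = np.linalg.solve(C, y)
      x_gls = np.linalg.solve(A.T @ Cinv_A, A.T @ Cinv_y)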

  1. Limitations Of The Current State Space Modelling Approach In Multistage Machining Processes Due To Operation Variations

    NASA Astrophysics Data System (ADS)

    Abellán-Nebot, J. V.; Liu, J.; Romero, F.

    2009-11-01

    The State Space modelling approach has been recently proposed as an engineering-driven technique for part quality prediction in Multistage Machining Processes (MMP). Current State Space models incorporate fixture and datum variations in the multi-stage variation propagation, without explicitly considering common operation variations such as machine-tool thermal distortions, cutting-tool wear, cutting-tool deflections, etc. This paper shows the limitations of the current State Space model through an experimental case study where the effect of the spindle thermal expansion, cutting-tool flank wear and locator errors are introduced. The paper also discusses the extension of the current State Space model to include operation variations and its potential benefits.
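
    A minimal sketch of the stream-of-variation state space form x_k = A_k x_{k-1} + B_k u_k + w_k, with an extra additive term standing in for the operation-induced deviations (spindle thermal growth, tool wear) that the paper argues are missing from current models. All matrices and magnitudes are illustrative assumptions.

      # Sketch of the stream-of-variation state space form used for multistage
      # machining, x_k = A_k x_{k-1} + B_k u_k + w_k, with an extra additive term
      # standing in for operation-induced deviations (spindle thermal growth, tool
      # wear).  All matrices and magnitudes are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(2)
      n_feat = 3                                   # part deviation features [mm]
      stages = 4

      x = np.zeros(n_feat)                         # incoming stock deviation
      for k in range(stages):
          A = np.eye(n_feat) * 0.9                 # re-orientation / datum transfer
          B = np.eye(n_feat)                       # fixture/locator error influence
          u = rng.normal(0.0, 0.01, n_feat)        # fixture and datum errors [mm]
          thermal_growth = np.array([0.005, 0.0, 0.002]) * (k + 1)  # operation term
          w = rng.normal(0.0, 0.002, n_feat)       # unmodelled noise
          x = A @ x + B @ u + thermal_growth + w   # propagate to next stage
      final_deviation = x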

  2. On the collaborative design and simulation of space camera: STOP (structural/thermal/optical) analysis

    NASA Astrophysics Data System (ADS)

    Duan, Pengfei; Lei, Wenping

    2017-11-01

    A number of disciplines (mechanics, structures, thermal, and optics) are needed to design and build a space camera. Separate design models are normally constructed with each discipline's CAD/CAE tools. Design and analysis are conducted largely in parallel, subject to the requirements levied on each discipline, and technical interaction between the different disciplines is limited and infrequent. As a result, a unified view of the space camera design across discipline boundaries is not directly possible in this approach, and generating one would require a large, manual, and error-prone process. A collaborative environment built on an abstract model and performance templates allows engineering data and CAD/CAE results to be shared across these discipline boundaries within a common interface, which helps attain rapid multivariate design and directly evaluate optical performance under environmental loading. A small interdisciplinary engineering team from the Beijing Institute of Space Mechanics and Electricity has recently conducted a structural/thermal/optical (STOP) analysis of a space camera with this collaborative environment. STOP analysis evaluates the changes in image quality that arise from structural deformations as the thermal environment of the camera changes throughout its orbit. STOP analyses were conducted for four different test conditions applied during final thermal vacuum (TVAC) testing of the payload on the ground. The STOP simulation process begins with importing an integrated CAD model of the camera geometry into the collaborative environment, within which (1) independent thermal and structural meshes are generated; (2) the thermal mesh and relevant engineering data for material properties and thermal boundary conditions are used to compute temperature distributions at nodal points in both the thermal and structural meshes with Thermal Desktop, a COTS thermal design and analysis code; (3) thermally induced structural deformations of the camera are then evaluated in Nastran, an industry-standard code for structural design and analysis; (4) thermal and structural results are next imported into SigFit, another COTS tool that computes deformation and best-fit rigid body displacements for the optical surfaces; and (5) SigFit creates a modified optical prescription that is imported into CODE V for evaluation of optical performance impacts. The integrated STOP analysis was validated using TVAC test data. For the four TVAC tests, the relative errors between simulated and measured temperatures at the measuring points were around 5%, and in some test conditions as low as 1%. For image quality (MTF), the relative error between simulation and test was 8.3% in the worst condition; the others were all below 5%. The validation demonstrated that the collaborative design and simulation environment can carry out the integrated STOP analysis of a space camera efficiently. Furthermore, the collaborative environment allows an interdisciplinary analysis that formerly might take several months to be completed in two or three weeks, which is well suited to concept demonstration in the early stages of a project.

  3. Kalman Filtered MR Temperature Imaging for Laser Induced Thermal Therapies

    PubMed Central

    Fuentes, D.; Yung, J.; Hazle, J. D.; Weinberg, J. S.; Stafford, R. J.

    2013-01-01

    The feasibility of using a stochastic form of Pennes bioheat model within a 3D finite element based Kalman filter (KF) algorithm is critically evaluated for the ability to provide temperature field estimates in the event of magnetic resonance temperature imaging (MRTI) data loss during laser induced thermal therapy (LITT). The ability to recover missing MRTI data was analyzed by systematically removing spatiotemporal information from a clinical MR-guided LITT procedure in human brain and comparing predictions in these regions to the original measurements. Performance was quantitatively evaluated in terms of a dimensionless L2 (RMS) norm of the temperature error weighted by acquisition uncertainty. During periods of no data corruption, observed error histories demonstrate that the Kalman algorithm does not alter the high quality temperature measurement provided by MR thermal imaging. The KF-MRTI implementation considered is seen to predict the bioheat transfer with RMS error < 4 for a short period of time, Δt < 10 sec, until the data corruption subsides. In its present form, the KF-MRTI method currently fails to compensate for consecutive time periods of data loss, Δt > 10 sec. PMID:22203706

  4. Study of the variation of thermal conductivity with water saturation using nuclear magnetic resonance

    NASA Astrophysics Data System (ADS)

    Jorand, Rachel; Fehr, Annick; Koch, Andreas; Clauser, Christoph

    2011-08-01

    In this paper, we present a method that allows one to correct thermal conductivity measurements for the effect of water loss when extrapolating laboratory data to in situ conditions. The water loss in shales and unconsolidated rocks is a serious problem that can introduce errors in the characterization of reservoirs. For this study, we measure the thermal conductivity of four sandstones with and without clay minerals according to different water saturation levels using an optical scanner. Thermal conductivity does not decrease linearly with water saturation. At high saturation and very low saturation, thermal conductivity decreases more quickly because of spontaneous liquid displacement and capillarity effects. Apart from these two effects, thermal conductivity decreases quasi-linearly. We also notice that the samples containing clay minerals are not completely drained, and thermal conductivity reaches a minimum value. In order to fit the variation of thermal conductivity with the water saturation as a whole, we used modified models commonly presented in thermal conductivity studies: harmonic and arithmetic mean and geometric models. These models take into account different types of porosity, especially those attributable to the abundance of clay, using measurements obtained from nuclear magnetic resonance (NMR). For argillaceous sandstones, a modified arithmetic-harmonic model fits the data best. For clean quartz sandstones under low water saturation, the closest fit to the data is obtained with the modified arithmetic-harmonic model, while for high water saturation, a modified geometric mean model proves to be the best.

  5. Landsat-8 TIRS thermal radiometric calibration status

    USGS Publications Warehouse

    Barsi, Julia A.; Markham, Brian L.; Montanaro, Matthew; Gerace, Aaron; Hook, Simon; Schott, John R.; Raqueno, Nina G.; Morfitt, Ron

    2017-01-01

    The Thermal Infrared Sensor (TIRS) instrument is the thermal-band imager on the Landsat-8 platform. The initial on-orbit calibration estimates of the two TIRS spectral bands indicated large average radiometric calibration errors, -0.29 and -0.51 W/m2 sr μm or -2.1K and -4.4K at 300K in Bands 10 and 11, respectively, as well as high variability in the errors, 0.87K and 1.67K (1-σ), respectively. The average error was corrected in operational processing in January 2014, though this adjustment did not improve the variability. The source of the variability was determined to be stray light from far outside the field of view of the telescope. An algorithm for modeling the stray light effect was developed and implemented in the Landsat-8 processing system in February 2017. The new process has improved the overall calibration of the two TIRS bands, reducing the residual variability in the calibration from 0.87K to 0.51K at 300K for Band 10 and from 1.67K to 0.84K at 300K for Band 11. There are residual average lifetime bias errors in each band: 0.04 W/m2 sr μm (0.30K) and -0.04 W/m2 sr μm (-0.29K), for Bands 10 and 11, respectively.

  6. Modeling Pumped Thermal Energy Storage with Waste Heat Harvesting

    NASA Astrophysics Data System (ADS)

    Abarr, Miles L. Lindsey

    This work introduces a new concept for a utility scale combined energy storage and generation system. The proposed design utilizes a pumped thermal energy storage (PTES) system, which also utilizes waste heat leaving a natural gas peaker plant. This system creates a low cost utility-scale energy storage system by leveraging this dual-functionality. This dissertation first presents a review of previous work in PTES as well as the details of the proposed integrated bottoming and energy storage system. A time-domain system model was developed in Mathworks R2016a Simscape and Simulink software to analyze this system. Validation of both the fluid state model and the thermal energy storage model is provided. The experimental results showed that the average error in cumulative fluid energy between simulation and measurement was +/- 0.3% per hour. Comparison to a Finite Element Analysis (FEA) model showed <1% error for bottoming mode heat transfer. The system model was used to conduct sensitivity, baseline performance, and levelized cost of energy analyses of a recently proposed Pumped Thermal Energy Storage and Bottoming System (Bot-PTES) that uses ammonia as the working fluid. This analysis focused on the effects of hot thermal storage utilization, system pressure, and evaporator/condenser size on the system performance. This work presents the estimated performance for a proposed baseline Bot-PTES. Results of this analysis showed that all selected parameters had significant effects on efficiency, with the evaporator/condenser size having the largest effect over the selected ranges. Results for the baseline case showed stand-alone energy storage efficiencies between 51 and 66% for varying power levels and charge states, and a stand-alone bottoming efficiency of 24%. The resulting efficiencies for this case were low compared to competing technologies; however, the dual-functionality of the Bot-PTES enables it to have a higher capacity factor, leading to a levelized cost of energy of $91-197/MWh compared to $262-284/MWh for batteries and $172-254/MWh for Compressed Air Energy Storage.

  7. Recent progress on air-bearing slumping of segmented thin-shell mirrors for x-ray telescopes: experiments and numerical analysis

    NASA Astrophysics Data System (ADS)

    Zuo, Heng E.; Yao, Youwei; Chalifoux, Brandon D.; DeTienne, Michael D.; Heilmann, Ralf K.; Schattenburg, Mark L.

    2017-08-01

    Slumping (or thermal-shaping) of thin glass sheets onto high precision mandrels was used successfully by NASA Goddard Space Flight Center to fabricate the NuSTAR telescope. But this process requires long thermal cycles and produces mid-range spatial frequency errors due to the anti-stick mandrel coatings. Over the last few years, we have designed and tested non-contact horizontal slumping of round flat glass sheets floating on thin layers of nitrogen between porous air-bearings using fast position control algorithms and precise fiber sensing techniques during short thermal cycles. We recently built a finite element model with ADINA to simulate the viscoelastic behavior of glass during the slumping process. The model utilizes fluid-structure interaction (FSI) to understand the deformation and motion of glass under the influence of air flow. We showed that for the 2D axisymmetric model, experimental and numerical approaches have comparable results. We also investigated the impact of bearing permeability on the resulting shape of the wafers. A novel vertical slumping set-up is also under development to eliminate the undesirable influence of gravity. Progress towards generating mirrors for good angular resolution and low mid-range spatial frequency errors is reported.

  8. Integrated Modeling Activities for the James Webb Space Telescope (JWST): Structural-Thermal-Optical Analysis

    NASA Technical Reports Server (NTRS)

    Johnston, John D.; Parrish, Keith; Howard, Joseph M.; Mosier, Gary E.; McGinnis, Mark; Bluth, Marcel; Kim, Kevin; Ha, Hong Q.

    2004-01-01

    This is a continuation of a series of papers on modeling activities for JWST. The structural-thermal-optical ("STOP") analysis process is used to predict the effect of thermal distortion on optical performance. The benchmark STOP analysis for JWST assesses the effect of an observatory slew on wavefront error. The paper begins with an overview of multi-disciplinary engineering analysis, or integrated modeling, which is a critical element of the JWST mission. The STOP analysis process is then described. This process consists of the following steps: thermal analysis, structural analysis, and optical analysis. Temperatures predicted using geometric and thermal math models are mapped to the structural finite element model in order to predict thermally-induced deformations. Motions and deformations at optical surfaces are input to optical models and optical performance is predicted using either an optical ray trace or WFE estimation techniques based on prior ray traces or first order optics. Following the discussion of the analysis process, results are presented based on models representing the design at the time of the System Requirements Review. In addition to baseline performance predictions, sensitivity studies are performed to assess modeling uncertainties. Of particular interest is the sensitivity of optical performance to uncertainties in temperature predictions and variations in metal properties. The paper concludes with a discussion of modeling uncertainty as it pertains to STOP analysis.

  9. 3D-modelling of the thermal circumstances of a lake under artificial aeration

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoqing; Pan, Huachen; Köngäs, Petrina; Horppila, Jukka

    2017-12-01

    A 3D-model was developed to study the effects of hypolimnetic aeration on the temperature profile of the thermally stratified Lake Vesijärvi (southern Finland). Aeration was conducted by pumping epilimnetic water through the thermocline to the hypolimnion without breaking the thermal stratification. The model used a time-transient formulation based on the Navier-Stokes equations. The model was fitted to the vertical temperature distribution and environmental parameters (wind, air temperature, and solar radiation) before the onset of aeration, and the model was used to predict the vertical temperature distribution 3 and 15 days after the onset of aeration (1 August and 22 August). The difference between the modelled and observed temperature was on average 0.6 °C. The average percentage model error was 4.0% on 1 August and 3.7% on 22 August. In the epilimnion, model accuracy depended on the difference between the observed temperature and the boundary conditions. In the hypolimnion, the model residual decreased with increasing depth. On 1 August, the model predicted a homogenous temperature profile in the hypolimnion, while the observed temperature decreased moderately from the thermocline to the bottom. This was because the effect of sediment was not included in the model. On 22 August, the modelled and observed temperatures near the bottom were identical, demonstrating that the heat transfer by the aerator masked the effect of sediment and that exclusion of sediment heat from the model does not cause considerable error unless very short-term effects of aeration are studied. In all, the model successfully described the effects of the aerator on the lake's temperature profile. The results confirmed the validity of the applied computational fluid dynamics in artificial aeration; based on the simulated results, the effect of aeration can be predicted.

  10. Thermal infrared spectroscopy and modeling of experimentally shocked plagioclase feldspars

    USGS Publications Warehouse

    Johnson, J. R.; Horz, F.; Staid, M.I.

    2003-01-01

    Thermal infrared emission and reflectance spectra (250-1400 cm-1; ~7-40 μm) of experimentally shocked albite- and anorthite-rich rocks (17-56 GPa) demonstrate that plagioclase feldspars exhibit characteristic degradations in spectral features with increasing pressure. New measurements of albite (Ab98) presented here display major spectral absorptions between 1000-1250 cm-1 (8-10 μm) (due to Si-O antisymmetric stretch motions of the silica tetrahedra) and weaker absorptions between 350-700 cm-1 (14-29 μm) (due to Si-O-Si octahedral bending vibrations). Many of these features persist to higher pressures compared to similar features in measurements of shocked anorthite, consistent with previous thermal infrared absorption studies of shocked feldspars. A transparency feature at 855 cm-1 (11.7 μm) observed in powdered albite spectra also degrades with increasing pressure, similar to the 830 cm-1 (12.0 μm) transparency feature in spectra of powders of shocked anorthite. Linear deconvolution models demonstrate that combinations of common mineral and glass spectra can replicate the spectra of shocked anorthite relatively well until shock pressures of 20-25 GPa, above which model errors increase substantially, coincident with the onset of diaplectic glass formation. Albite deconvolutions exhibit higher errors overall but do not change significantly with pressure, likely because certain clay minerals selected by the model exhibit absorption features similar to those in highly shocked albite. The implication for deconvolution of thermal infrared spectra of planetary surfaces (or laboratory spectra of samples) is that the use of highly shocked anorthite spectra in end-member libraries could be helpful in identifying highly shocked calcic plagioclase feldspars.
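
    A minimal sketch of linear deconvolution: solve for non-negative end-member abundances that best reproduce a measured spectrum, here with non-negative least squares. The synthetic Gaussian "end-member" spectra are placeholders for a real spectral library.

      # Sketch of linear deconvolution of a thermal-infrared spectrum against an
      # end-member library using non-negative least squares.  The synthetic Gaussian
      # "end-member" spectra below are placeholders for real library spectra.
      import numpy as np
      from scipy.optimize import nnls

      wn = np.linspace(250.0, 1400.0, 300)                       # wavenumber grid [cm-1]

      def fake_endmember(center, width):
          return 1.0 - 0.5 * np.exp(-((wn - center) / width) ** 2)

      library = np.column_stack([fake_endmember(1050.0, 80.0),   # "feldspar-like"
                                 fake_endmember(465.0, 60.0),    # "glass-like"
                                 fake_endmember(855.0, 40.0)])   # "transparency-like"

      true_abundances = np.array([0.6, 0.3, 0.1])
      measured = library @ true_abundances + np.random.normal(0.0, 0.002, wn.size)

      abundances, model_rms = nnls(library, measured)
      abundances = abundances / abundances.sum()                 # normalize to unity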

  11. Thermal performances of vertical hybrid PV/T air collector

    NASA Astrophysics Data System (ADS)

    Tabet, I.; Touafek, K.; Bellel, N.; Khelifa, A.

    2016-11-01

    In this work, numerical analyses and the experimental validation of the thermal behavior of a vertical photovoltaic thermal air collector are investigated. The thermal model is developed using the energy balance equations of the PV/T air collector. Experimental tests are conducted to validate our mathematical model. The tests are performed in the southern Algerian region (Ghardaïa) under clear sky conditions. The prototype of the PV/T air collector is vertically erected and south oriented. The absorber upper plate temperature, glass cover temperature, air temperature in the inlet and outlet of the collector, ambient temperature, wind speed, and solar radiation are measured. The efficiency of the collector increases with increase in mass flow of air, but the increase in mass flow of air reduces the temperature of the system. The increase in efficiency of the PV/T air collector is due to the increase in the number of fins added. In the experiments, the air temperature difference between the inlet and the outlet of the PV/T air collector reaches 10 °C on November 21, 2014, the interval time is between 10:00 and 14:00, and the temperature of the upper plate reaches 45 °C at noon. The mathematical model describing the dynamic behavior of the typical PV/T air collector is evaluated by calculating the root mean square error and mean absolute percentage error. A good agreement between the experiment and the simulation results is obtained.
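
    The two model-evaluation metrics named above, computed as they are commonly defined; the sample temperature arrays are placeholders rather than measured PV/T data.

      # The two model-evaluation metrics named above, as commonly defined; the sample
      # temperature arrays are placeholders, not measured PV/T data.
      import numpy as np

      def rmse(measured, simulated):
          measured, simulated = np.asarray(measured), np.asarray(simulated)
          return float(np.sqrt(np.mean((measured - simulated) ** 2)))

      def mape(measured, simulated):
          measured, simulated = np.asarray(measured), np.asarray(simulated)
          return float(np.mean(np.abs((measured - simulated) / measured)) * 100.0)

      measured = [32.1, 35.4, 38.9, 41.2, 43.7]     # e.g. outlet air temperature [degC]
      simulated = [31.5, 36.0, 38.2, 42.0, 44.5]
      print(rmse(measured, simulated), mape(measured, simulated))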

  12. Twenty-Five Years of Landsat Thermal Band Calibration

    NASA Technical Reports Server (NTRS)

    Barsi, Julia A.; Markham, Brian L.; Schott, John R.; Hook, Simon J.; Raqueno, Nina G.

    2010-01-01

    Landsat-7 Enhanced Thematic Mapper+ (ETM+), launched in April 1999, and Landsat-5 Thematic Mapper (TM), launched in 1984, both have a single thermal band. Both instruments' thermal band calibrations have been updated previously: ETM+ in 2001 for a pre-launch calibration error and TM in 2007 for data acquired since the current era of vicarious calibration has been in place (1999). Vicarious calibration teams at Rochester Institute of Technology (RIT) and NASA/Jet Propulsion Laboratory (JPL) have been working to validate the instrument calibration since 1999. Recent developments in their techniques and sites have expanded the temperature and temporal range of the validation. The new data indicate that the calibration of both instruments had errors: the ETM+ calibration contained a gain error of 5.8% since launch; the TM calibration contained a gain error of 5% and an additional offset error between 1997 and 1999. Both instruments required adjustments in their thermal calibration coefficients in order to correct for the errors. The new coefficients were calculated and added to the Landsat operational processing system in early 2010. With the corrections, both instruments are calibrated to within +/-0.7K.

  13. Nonconservative force model parameter estimation strategy for TOPEX/Poseidon precision orbit determination

    NASA Technical Reports Server (NTRS)

    Luthcke, S. B.; Marshall, J. A.

    1992-01-01

    The TOPEX/Poseidon spacecraft was launched on August 10, 1992 to study the Earth's oceans. To achieve maximum benefit from the altimetric data it is to collect, mission requirements dictate that TOPEX/Poseidon's orbit must be computed at an unprecedented level of accuracy. To reach our pre-launch radial orbit accuracy goals, the mismodeling of the radiative nonconservative forces of solar radiation, Earth albedo and infrared re-radiation, and spacecraft thermal imbalances cannot produce in combination more than a 6 cm rms error over a 10 day period. Similarly, the 10-day drag modeling error cannot exceed 3 cm rms. In order to satisfy these requirements, a 'box-wing' representation of the satellite has been developed in which the satellite is modelled as the combination of flat plates arranged in the shape of a box and a connected solar array. The radiative/thermal nonconservative forces acting on each of the eight surfaces are computed independently, yielding vector accelerations which are summed to compute the total aggregate effect on the satellite center-of-mass. Select parameters associated with the flat plates are adjusted to obtain a better representation of the satellite acceleration history. This study analyzes the estimation of these parameters from simulated TOPEX/Poseidon laser data in the presence of both nonconservative and gravity model errors. A 'best choice' of estimated parameters is derived and the ability to meet mission requirements with the 'box-wing' model evaluated.
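
    A minimal sketch of a 'box-wing'-style radiative force model: the spacecraft is represented by flat plates, each contributing the standard specular/diffuse radiation-pressure acceleration, and the contributions are summed. Plate areas, normals, optical coefficients, and mass are illustrative assumptions, not TOPEX/Poseidon values.

      # Sketch of a "box-wing" radiative force model: the spacecraft is a set of flat
      # plates, each contributing a standard specular/diffuse radiation-pressure
      # acceleration, and the contributions are summed.  Plate areas, normals, and
      # optical coefficients are illustrative assumptions, not TOPEX/Poseidon values.
      import numpy as np

      SOLAR_FLUX = 1361.0      # W/m^2 at 1 AU
      C_LIGHT = 299792458.0    # m/s

      def plate_acceleration(sun_dir, normal, area, spec, diff, mass):
          """Acceleration of one flat plate; sun_dir and normal are unit vectors."""
          cos_theta = float(np.dot(sun_dir, normal))
          if cos_theta <= 0.0:                       # plate not illuminated
              return np.zeros(3)
          pressure = SOLAR_FLUX / C_LIGHT
          term = (1.0 - spec) * sun_dir + 2.0 * (spec * cos_theta + diff / 3.0) * normal
          return -(pressure * area * cos_theta / mass) * term

      # toy box-wing: +x face, -x face, and a roughly sun-pointed solar array
      plates = [
          dict(normal=np.array([1.0, 0.0, 0.0]), area=3.0, spec=0.2, diff=0.3),
          dict(normal=np.array([-1.0, 0.0, 0.0]), area=3.0, spec=0.2, diff=0.3),
          dict(normal=np.array([0.9, 0.1, 0.0]) / np.linalg.norm([0.9, 0.1, 0.0]),
               area=25.0, spec=0.05, diff=0.1),
      ]
      sun_dir = np.array([1.0, 0.0, 0.0])
      total = sum(plate_acceleration(sun_dir, p["normal"], p["area"],
                                     p["spec"], p["diff"], mass=2500.0) for p in plates)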

  14. Coupled thermal-fluid analysis with flowpath-cavity interaction in a gas turbine engine

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, John Nathan

    This study seeks to improve the understanding of inlet conditions of a large rotor-stator cavity in a turbofan engine, often referred to as the drive cone cavity (DCC). The inlet flow is better understood through higher fidelity computational fluid dynamics (CFD) modeling of the inlet to the cavity, and a coupled finite element (FE) thermal to CFD fluid analysis of the cavity in order to accurately predict engine component temperatures. Accurately predicting the temperature distribution in the cavity is important because temperatures directly affect the material properties, including Young's modulus, yield strength, fatigue strength, and creep properties. All of these properties directly affect the life of critical engine components. In addition, temperatures cause thermal expansion, which changes clearances and in turn affects engine efficiency. The DCC is fed from the last stage of the high pressure compressor. One of its primary functions is to purge the air over the rotor wall to prevent it from overheating. Aero-thermal conditions within the DCC are particularly challenging to predict due to the complex air flow and high heat transfer in the rotating component. Thus, in order to accurately predict metal temperatures, a two-way coupled CFD-FE analysis is needed. Historically, when the cavity airflow is modeled for engine design purposes, the inlet condition has been over-simplified for the CFD analysis, which impacts the results, particularly in the region around the compressor disc rim. The inlet is typically simplified by circumferentially averaging the velocity field at the inlet to the cavity, which removes the effect of pressure wakes from the upstream rotor blades. The way in which these non-axisymmetric flow characteristics affect metal temperatures is not well understood. In addition, a constant air temperature scaled from a previous analysis is used as the simplified cavity inlet air temperature. Therefore, the objectives of this study are: (a) model the DCC with a more physically representative inlet condition while coupling the solid thermal analysis and compressible air flow analysis that includes the fluid velocity, pressure, and temperature fields; (b) run a coupled analysis whose boundary conditions come from computational models, rather than thermocouple data; (c) validate the model using available experimental data; and (d) based on the validation, determine if the model can be used to predict air inlet and metal temperatures for new engine geometries. Verification with experimental results showed that the coupled analysis with the 3D no-bolt CFD model with predictive boundary conditions over-predicted the HP6 offtake temperature by 16 K. The maximum error was an over-prediction of 50 K, while the average error was 17 K. The predictive model with 3D bolts also predicted cavity temperatures with an average error of 17 K. For the two CFD models with predicted boundary conditions, the case without bolts performed better than the case with bolts. This is due to the flow errors caused by placing stationary bolts in a rotating reference frame. Therefore it is recommended that this type of analysis only be attempted for drive cone cavities with no bolts or shielded bolts.

  15. A quantitative model to evaluate solubility relationship of polymorphs from their thermal properties.

    PubMed

    Mao, Chen; Pinal, Rodolfo; Morris, Kenneth R

    2005-07-01

    The objective of the study is to develop a model to estimate the solubility ratio of two polymorphic forms based on the calculation of the free energy difference of two forms at any temperature. This model can be used for compounds with low solubility (a few mole percent) in which infinite dilution can be approximated. The model is derived using the melting temperature and heat of fusion for apparent monotropic systems, and the solid-solid transition temperature and heat of transition for apparent enantiotropic systems. A rigorous derivation also requires heat capacity (Cp) measurement of liquid and two solid forms. This model is validated by collecting thermal properties of polymorphs for several drugs using conventional or modulated differential scanning calorimetry. From these properties the solubility ratio of two polymorphs is evaluated using the model and compared with the experimental value at different temperatures. The predicted values using the full model agree well with the experimental ones. For the purpose of easy measurement, working equations without Cp terms are also applied. Ignoring Cp may result in an error of 10% or less, suggesting that the working equation is applicable in practice. Additional error may be generated for the apparent enantiotropic systems due to the inconsistency between the observed solid-solid transition temperature and the true thermodynamic transition temperature. This inconsistency allows the predicted solubility ratios (low melt/high melt) to be smaller. Therefore, a correction factor of 1.1 is recommended to reduce the error when the working equation is used to estimate the solubility ratio of an enantiotropic system. The study of the free energy changes of two crystalline forms of a drug allows for the development of a model that successfully predicts the solubility ratio at any temperature from their thermal properties. This model provides a thermodynamic foundation as to how the free energy difference of two polymorphs is reflected by their equilibrium solubilities. It also provides a quick and practical way of evaluating the relative solubility of two polymorphs from single differential scanning calorimetry runs.
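
    A minimal sketch of the working-equation idea for a monotropic pair: approximate each form's free energy of fusion from its heat of fusion and melting temperature (heat-capacity terms ignored) and take the ideal-solution solubility ratio. The ΔG ≈ ΔHf(Tm - T)/Tm approximation and the example thermal data are assumptions, not values or equations taken from the study.

      # Sketch of the working-equation idea for a monotropic pair: approximate each
      # form's free energy of fusion from its melting temperature and heat of fusion
      # (heat-capacity terms ignored) and take the ideal-solubility ratio.  The
      # dG ~ dHf*(Tm - T)/Tm approximation and the example thermal data are
      # illustrative assumptions, not values from the study.
      import math

      R = 8.314  # J/(mol K)

      def gibbs_of_fusion(dh_fus, t_melt, t):
          """Approximate free energy of fusion at temperature t [J/mol]."""
          return dh_fus * (t_melt - t) / t_melt

      def solubility_ratio(form_a, form_b, t=298.15):
          """S_a / S_b for two polymorphs, assuming ideal, dilute solutions."""
          dg_a = gibbs_of_fusion(*form_a, t)
          dg_b = gibbs_of_fusion(*form_b, t)
          return math.exp((dg_b - dg_a) / (R * t))

      # (heat of fusion [J/mol], melting temperature [K]) -- placeholder values
      stable_form = (31000.0, 445.0)
      metastable_form = (27000.0, 430.0)
      print(solubility_ratio(metastable_form, stable_form))   # > 1 expected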

  16. Accuracy of non-resonant laser-induced thermal acoustics (LITA) in a convergent-divergent nozzle flow

    NASA Astrophysics Data System (ADS)

    Richter, J.; Mayer, J.; Weigand, B.

    2018-02-01

    Non-resonant laser-induced thermal acoustics (LITA) was applied to measure Mach number, temperature and turbulence level along the centerline of a transonic nozzle flow. The accuracy of the measurement results was systematically studied regarding misalignment of the interrogation beam and frequency analysis of the LITA signals. 2D steady-state Reynolds-averaged Navier-Stokes (RANS) simulations were performed for reference. The simulations were conducted using ANSYS CFX 18 employing the shear-stress transport turbulence model. Post-processing of the LITA signals is performed by applying a discrete Fourier transformation (DFT) to determine the beat frequencies. It is shown that the systematical error of the DFT, which depends on the number of oscillations, signal chirp, and damping rate, is less than 1.5% for our experiments resulting in an average error of 1.9% for Mach number. Further, the maximum calibration error is investigated for a worst-case scenario involving maximum in situ readjustment of the interrogation beam within the limits of constructive interference. It is shown that the signal intensity becomes zero if the interrogation angle is altered by 2%. This, together with the accuracy of frequency analysis, results in an error of about 5.4% for temperature throughout the nozzle. Comparison with numerical results shows good agreement within the error bars.
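
    A minimal sketch of beat-frequency extraction with a discrete Fourier transform, in the spirit of the post-processing described above: remove the DC level, zero-pad, and locate the dominant spectral peak of a damped oscillatory signal. The synthetic signal, sampling rate, and zero-padding factor are assumptions.

      # Sketch of beat-frequency extraction from a LITA-like signal with a discrete
      # Fourier transform.  The synthetic damped-oscillation signal, sampling rate,
      # and zero-padding factor are assumptions for illustration.
      import numpy as np

      fs = 2.0e9                                    # sample rate [Hz]
      t = np.arange(0, 2.0e-6, 1.0 / fs)            # 2 microseconds of signal
      f_beat_true = 48.0e6                          # Hz
      signal = np.exp(-t / 4.0e-7) * np.cos(2.0 * np.pi * f_beat_true * t)

      # remove the DC level, zero-pad, and locate the dominant spectral peak
      spectrum = np.abs(np.fft.rfft(signal - signal.mean(), n=8 * signal.size))
      freqs = np.fft.rfftfreq(8 * signal.size, d=1.0 / fs)
      f_beat = freqs[np.argmax(spectrum)]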

  17. End-to-end Coronagraphic Modeling Including a Low-order Wavefront Sensor

    NASA Technical Reports Server (NTRS)

    Krist, John E.; Trauger, John T.; Unwin, Stephen C.; Traub, Wesley A.

    2012-01-01

    To evaluate space-based coronagraphic techniques, end-to-end modeling is necessary to simulate realistic fields containing speckles caused by wavefront errors. Real systems will suffer from pointing errors and thermal and motion-induced mechanical stresses that introduce time-variable wavefront aberrations that can reduce the field contrast. A low-order wavefront sensor (LOWFS) is needed to measure these changes at a sufficiently high rate to maintain the contrast level during observations. We implement here a LOWFS and corresponding low-order wavefront control subsystem (LOWFCS) in end-to-end models of a space-based coronagraph. Our goal is to be able to accurately duplicate the effect of the LOWFS+LOWFCS without explicitly evaluating the end-to-end model at numerous time steps.

  18. A Model of Thermal Conductivity for Planetary Soils: 1. Theory for Unconsolidated Soils

    NASA Technical Reports Server (NTRS)

    Piqueux, S.; Christensen, P. R.

    2009-01-01

    We present a model of heat conduction for mono-sized spherical particulate media under stagnant gases based on the kinetic theory of gases, numerical modeling of Fourier's law of heat conduction, theoretical constraints on the gas thermal conductivity at various Knudsen regimes, and laboratory measurements. Incorporating the effect of the temperature allows for the derivation of the pore-filling gas conductivity and bulk thermal conductivity of samples using additional parameters (pressure, gas composition, grain size, and porosity). The radiative and solid-to-solid conductivities are also accounted for. Our thermal model reproduces the well-established bulk thermal conductivity dependency of a sample with the grain size and pressure and also confirms laboratory measurements finding that higher porosities generally lead to lower conductivities. It predicts the existence of the plateau conductivity at high pressure, where the bulk conductivity does not depend on the grain size. The good agreement between the model predictions and published laboratory measurements under a variety of pressures, temperatures, gas compositions, and grain sizes provides additional confidence in our results. On Venus, Earth, and Titan, the pressure and temperature combinations are too high to observe a soil thermal conductivity dependency on the grain size, but each planet has a unique thermal inertia due to their different surface temperatures. On Mars, the temperature and pressure combination is ideal to observe the soil thermal conductivity dependency on the average grain size. Thermal conductivity models that do not take the temperature and the pore-filling gas composition into account may yield significant errors.
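    The reduction of the pore-filling gas conductivity at high Knudsen number can be illustrated with a simple interpolation between the continuum and free-molecular limits, sketched below. The interpolation form, molecular diameter, and pore size are generic textbook-style assumptions, not the calibrated relations of the paper.

    ```python
    import math

    K_B = 1.380649e-23  # Boltzmann constant, J/K

    def mean_free_path(T, P, d_molecule):
        """Kinetic-theory mean free path of the gas [m]."""
        return K_B * T / (math.sqrt(2.0) * math.pi * d_molecule ** 2 * P)

    def gas_conductivity(k_continuum, T, P, pore_size, d_molecule=4.0e-10):
        """Simple continuum-to-free-molecular interpolation: k = k0 / (1 + 2 Kn)."""
        Kn = mean_free_path(T, P, d_molecule) / pore_size
        return k_continuum / (1.0 + 2.0 * Kn)

    # CO2-like gas between ~100-micrometer grains at a Mars-like surface pressure
    print(gas_conductivity(k_continuum=0.011, T=210.0, P=600.0, pore_size=1.0e-4))
    ```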

  19. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    NASA Astrophysics Data System (ADS)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
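    The inexact solve of the global error problem can be pictured as a few symmetric Gauss-Seidel sweeps (a forward pass followed by a backward pass); the minimal sketch below applies such sweeps to an arbitrary small symmetric positive definite system, which only stands in for the hierarchical error problem.

    ```python
    import numpy as np

    def sym_gauss_seidel(A, b, x, sweeps=3):
        """A few symmetric Gauss-Seidel sweeps for A x = b (updates x in place)."""
        n = len(b)
        for _ in range(sweeps):
            for i in list(range(n)) + list(range(n - 1, -1, -1)):
                sigma = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
                x[i] = (b[i] - sigma) / A[i, i]
        return x

    A = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])
    x_gs = sym_gauss_seidel(A, b, np.zeros(3))
    print(x_gs, np.linalg.solve(A, b))   # a few sweeps already come close
    ```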

  20. Towards a realistic simulation of boreal summer tropical rainfall climatology in state-of-the-art coupled models: role of the background snow-free land albedo

    NASA Astrophysics Data System (ADS)

    Terray, P.; Sooraj, K. P.; Masson, S.; Krishna, R. P. M.; Samson, G.; Prajeesh, A. G.

    2017-07-01

    State-of-the-art global coupled models used in seasonal prediction systems and climate projections still have important deficiencies in representing the boreal summer tropical rainfall climatology. These errors include prominently a severe dry bias over all the Northern Hemisphere monsoon regions, excessive rainfall over the ocean and an unrealistic double inter-tropical convergence zone (ITCZ) structure in the tropical Pacific. While these systematic errors can be partly reduced by increasing the horizontal atmospheric resolution of the models, they also illustrate our incomplete understanding of the key mechanisms controlling the position of the ITCZ during boreal summer. Using a large collection of coupled models and dedicated coupled experiments, we show that these tropical rainfall errors are partly associated with insufficient surface thermal forcing and incorrect representation of the surface albedo over the Northern Hemisphere continents. Improving the parameterization of the land albedo in two global coupled models leads to a large reduction of these systematic errors and further demonstrates that the Northern Hemisphere subtropical deserts play a seminal role in these improvements through a heat low mechanism.

  1. Use of advanced modeling techniques to optimize thermal packaging designs.

    PubMed

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed during its validation. Thermal packaging is routinely used by the pharmaceutical industry to provide passive and active temperature control of their thermally sensitive products from manufacture through end use (termed the cold chain). In this study, the authors focus on passive temperature control (passive control does not require any external energy source and is entirely based on specific and/or latent heat of shipper components). As temperature-sensitive pharmaceuticals are being transported over longer distances, cold chain reliability is essential. To achieve reliability, a significant amount of time and resources must be invested in design, test, and production of optimized temperature-controlled packaging solutions. To shorten the cumbersome trial and error approach (design/test/design/test …), computer simulation (virtual prototyping and testing of thermal shippers) is a promising method. Although several companies have attempted to develop such a tool, there has been limited success to date. Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a coupled conductive/convective-based thermal shipper. A modeling technique capable of correctly capturing shipper thermal behavior can be used to develop packaging designs more quickly, reducing up-front costs while also improving shipper performance.

  2. Metainference: A Bayesian inference method for heterogeneous systems.

    PubMed

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.

  3. H2-norm for mesh optimization with application to electro-thermal modeling of an electric wire in automotive context

    NASA Astrophysics Data System (ADS)

    Chevrié, Mathieu; Farges, Christophe; Sabatier, Jocelyn; Guillemard, Franck; Pradere, Laetitia

    2017-04-01

    In the automotive field, reducing electric conductor dimensions is important for decreasing the embedded mass and the manufacturing costs. It is thus essential to develop tools to optimize the wire diameter according to thermal constraints, and protection algorithms to maintain a high level of safety. In order to develop such tools and algorithms, accurate electro-thermal models of electric wires are required. However, thermal equation solutions lead to implicit fractional transfer functions involving an exponential that cannot be embedded in an on-board automotive computer. This paper thus proposes an integer-order transfer function approximation methodology based on a spatial discretization for this class of fractional transfer functions. Moreover, the H2-norm is used to minimize the approximation error. The accuracy of the proposed approach is confirmed with measured data on a 1.5 mm² wire implemented in a dedicated test bench.

  4. Noise-induced errors in geophysical parameter estimation from retarding potential analyzers in low Earth orbit

    NASA Astrophysics Data System (ADS)

    Debchoudhury, Shantanab; Earle, Gregory

    2017-04-01

    Retarding Potential Analyzers (RPA) have a rich flight heritage. Standard curve-fitting analysis techniques exist that can infer state variables in the ionospheric plasma environment from RPA data, but the estimation process is prone to errors arising from a number of sources. Previous work has focused on the effects of grid geometry on uncertainties in estimation; however, no prior study has quantified the estimation errors due to additive noise. In this study, we characterize the errors in estimation of thermal plasma parameters by adding noise to the simulated data derived from the existing ionospheric models. We concentrate on low-altitude, mid-inclination orbits since a number of nano-satellite missions are focused on this region of the ionosphere. The errors are quantified and cross-correlated for varying geomagnetic conditions.
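    A toy version of such an error study is sketched below: a simplified retarding-potential characteristic (a plain Maxwellian cutoff, not the instrument and plasma models used in the study) is perturbed with additive Gaussian noise and repeatedly fit, and the scatter of the fitted parameters quantifies the noise-induced estimation errors.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erfc

    # Toy retarding-potential characteristic: collected current falls off as the
    # retarding voltage sweeps through a Maxwellian ion energy distribution.
    def iv_curve(v, i0, v0, kT):
        return 0.5 * i0 * erfc((v - v0) / kT)

    rng = np.random.default_rng(0)
    v = np.linspace(0.0, 5.0, 200)
    truth = (1.0e-6, 2.0, 0.3)    # amplitude [A], mean energy [eV], temperature [eV]
    clean = iv_curve(v, *truth)

    # Repeated fits with additive Gaussian noise quantify the parameter scatter
    errors = []
    for _ in range(500):
        noisy = clean + rng.normal(0.0, 0.02e-6, v.size)
        popt, _ = curve_fit(iv_curve, v, noisy, p0=(1e-6, 1.5, 0.5))
        errors.append(popt - np.array(truth))
    print("1-sigma parameter errors:", np.std(errors, axis=0))
    ```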

  5. Model-Based Angular Scan Error Correction of an Electrothermally-Actuated MEMS Mirror

    PubMed Central

    Zhang, Hao; Xu, Dacheng; Zhang, Xiaoyang; Chen, Qiao; Xie, Huikai; Li, Suiqiong

    2015-01-01

    In this paper, the actuation behavior of a two-axis electrothermal MEMS (Microelectromechanical Systems) mirror typically used in miniature optical scanning probes and optical switches is investigated. The MEMS mirror consists of four thermal bimorph actuators symmetrically located at the four sides of a central mirror plate. Experiments show that an actuation characteristics difference of as much as 4.0% exists among the four actuators due to process variations, which leads to an average angular scan error of 0.03°. A mathematical model between the actuator input voltage and the mirror-plate position has been developed to predict the actuation behavior of the mirror. It is a four-input, four-output model that takes into account the thermal-mechanical coupling and the differences among the four actuators; the vertical positions of the ends of the four actuators are also monitored. Based on this model, an open-loop control method is established to achieve accurate angular scanning. This model-based open loop control has been experimentally verified and is useful for the accurate control of the mirror. With this control method, the precise actuation of the mirror solely depends on the model prediction and does not need the real-time mirror position monitoring and feedback, greatly simplifying the MEMS control system. PMID:26690432
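    The model-based open-loop idea can be reduced to a minimal linear sketch: an assumed four-input, four-output gain matrix (with off-diagonal terms mimicking thermal-mechanical coupling and small gain differences mimicking the few-percent actuator variation) is inverted once to compute the drive voltages for a target set of actuator tip positions, with no position feedback. The numbers are illustrative, not the identified model of the paper.

    ```python
    import numpy as np

    # Hypothetical linearized actuation model: tip positions z [um] = A @ voltages [V]
    A = np.array([[10.0,  0.6,  0.3,  0.6],
                  [ 0.6, 10.2,  0.6,  0.3],
                  [ 0.3,  0.6,  9.8,  0.6],
                  [ 0.6,  0.3,  0.6, 10.4]])   # um per volt (assumed values)

    z_target = np.array([20.0, 10.0, 20.0, 10.0])   # desired tip positions, um

    # Open-loop control: invert the model once, then drive without feedback
    v_cmd = np.linalg.solve(A, z_target)
    print("command voltages [V]:", np.round(v_cmd, 3))
    print("predicted positions [um]:", A @ v_cmd)
    ```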

  6. Coupled thermal-fluid-mechanics analysis of twin roll casting of A7075 aluminum alloy

    NASA Astrophysics Data System (ADS)

    Lee, Yun-Soo; Kim, Hyoung-Wook; Cho, Jae-Hyung; Chun, Se-Hwan

    2017-09-01

    A better understanding of the temperature distribution and roll separation force during twin roll casting of aluminum alloys is critical to successfully fabricating good-quality aluminum strips. Therefore, simulation techniques are widely applied to understand the twin roll casting process in a comprehensive way and to reduce the experimental time and cost of trial and error. However, most conventional approaches consider only thermally coupled flow or thermally coupled mechanical behavior. In this study, a fully coupled thermal-fluid-mechanical analysis of twin roll casting of A7075 aluminum strips was carried out using the finite element method. The temperature profile, liquid fraction and metal flow of aluminum strips with different thicknesses were predicted. Roll separation force and roll temperatures were experimentally obtained from a pilot-scale twin roll caster, and those results were compared with model predictions. Coupling the fluid flow of the liquid melt to the thermal and mechanical modeling reasonably predicted the roll temperature distribution and roll separation force during twin roll casting.

  7. A Model for Hydrogen Thermal Conductivity and Viscosity Including the Critical Point

    NASA Technical Reports Server (NTRS)

    Wagner, Howard A.; Tunc, Gokturk; Bayazitoglu, Yildiz

    2001-01-01

    In order to conduct a thermal analysis of heat transfer to liquid hydrogen near the critical point, an accurate understanding of the thermal transport properties is required. A review of the available literature on hydrogen transport properties identified a lack of useful equations to predict the thermal conductivity and viscosity of liquid hydrogen. The tables published by the National Bureau of Standards were used to perform a series of curve fits to generate the needed correlation equations. These equations give the thermal conductivity and viscosity of hydrogen below 100 K. They agree with the published NBS tables, with less than a 1.5 percent error for temperatures below 100 K and pressures from the triple point to 1000 kPa. These equations also capture the divergence in the thermal conductivity at the critical point.

  8. Graded meshes in bio-thermal problems with transmission-line modeling method.

    PubMed

    Milan, Hugo F M; Carvalho, Carlos A T; Maia, Alex S C; Gebremedhin, Kifle G

    2014-10-01

    In this study, the transmission-line modeling (TLM) method applied to bio-thermal problems was improved by incorporating several novel computational techniques, including the application of graded meshes, which made the computations nine times faster and used only a fraction (16%) of the computational resources required by regular meshes when analyzing heat flow through heterogeneous media. Graded meshes, unlike regular meshes, allow heat sources to be modeled in all segments of the mesh. A new boundary condition that considers thermal properties, and thus results in more realistic modeling of complex problems, is introduced. Also, a new way of calculating an error parameter is introduced. The calculated temperatures between nodes were compared against results obtained from the literature and agreed within less than 1% difference. It is reasonable, therefore, to conclude that the improved TLM model described herein has great potential for modeling heat transfer in biological systems. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Incorporating neurophysiological concepts in mathematical thermoregulation models

    NASA Astrophysics Data System (ADS)

    Kingma, Boris R. M.; Vosselman, M. J.; Frijns, A. J. H.; van Steenhoven, A. A.; van Marken Lichtenbelt, W. D.

    2014-01-01

    Skin blood flow (SBF) is a key player in human thermoregulation during mild thermal challenges. Various numerical models of SBF regulation exist. However, none explicitly incorporates the neurophysiology of thermal reception. This study tested a new SBF model that is in line with experimental data on thermal reception and the neurophysiological pathways involved in thermoregulatory SBF control. Additionally, a numerical thermoregulation model was used as a platform to test the function of the neurophysiological SBF model for skin temperature simulation. The prediction error of the SBF model was quantified by the root-mean-squared residual (RMSR) between simulations and experimental measurement data. Measurement data consisted of SBF (abdomen, forearm, hand), core and skin temperature recordings of young males during three transient thermal challenges (one for development and two for validation). Additionally, ThermoSEM, a thermoregulation model, was used to simulate body temperatures using the new neurophysiological SBF model. The RMSR between simulated and measured mean skin temperature was used to validate the model. The neurophysiological model predicted SBF with an accuracy of RMSR < 0.27. Tskin simulation results were within 0.37 °C of the measured mean skin temperature. This study shows that (1) thermal reception and neurophysiological pathways involved in thermoregulatory SBF control can be captured in a mathematical model, and (2) human thermoregulation models can be equipped with SBF control functions that are based on neurophysiology without loss of performance. The neurophysiological approach in modelling thermoregulation is favourable over engineering approaches because it is more in line with the underlying physiology.

  10. Fundamental limits in heat-assisted magnetic recording and methods to overcome it with exchange spring structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suess, D.; Abert, C.; Bruckner, F.

    2015-04-28

    The switching probability of magnetic elements for heat-assisted recording with pulsed laser heating was investigated. It was found that FePt elements with a diameter of 5 nm and a height of 10 nm show, at a field of 0.5 T, thermally written-in errors of 12%, which is significantly too large for bit-patterned magnetic recording. Thermally written-in errors can be decreased if larger head fields are applied. However, larger fields lead to an increase in the fundamental thermal jitter. This leads to a dilemma between thermally written-in errors and fundamental thermal jitter. This dilemma can be partly relaxed by increasing the thickness of the FePt film up to 30 nm. For realistic head fields, it is found that the fundamental thermal jitter is of the same order of magnitude as the fundamental thermal jitter in conventional recording, which is about 0.5–0.8 nm. Composite structures consisting of a high-Curie-temperature top layer and FePt as a hard magnetic storage layer can reduce the thermally written-in errors to below 10⁻⁴ if the damping constant is increased in the soft layer. Large damping may be realized by doping with rare earth elements. As with single FePt grains, in the composite structure an increase in switching probability comes at the cost of an increase in thermal jitter. Structures utilizing first-order phase transitions, which break the thermal jitter and writability dilemma, are discussed.

  11. Importance of interpolation and coincidence errors in data fusion

    NASA Astrophysics Data System (ADS)

    Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana

    2018-02-01

    The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on the number of degrees of freedom and on the errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.
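    A heavily simplified stand-in for such a fusion step is inverse-covariance weighting of two profiles of the same quantity, sketched below; it omits the averaging kernels and the interpolation and coincidence terms that the CDF method actually handles, and the numbers are made up.

    ```python
    import numpy as np

    def fuse(x1, S1, x2, S2):
        """Inverse-covariance-weighted combination of two estimates."""
        W1, W2 = np.linalg.inv(S1), np.linalg.inv(S2)
        S_f = np.linalg.inv(W1 + W2)
        return S_f @ (W1 @ x1 + W2 @ x2), S_f

    x1 = np.array([300.0, 280.0, 260.0])   # e.g. ozone at three levels (made up)
    x2 = np.array([305.0, 275.0, 258.0])
    S1 = np.diag([25.0, 16.0, 9.0])        # error covariances (made up)
    S2 = np.diag([16.0, 25.0, 16.0])
    x_f, S_f = fuse(x1, S1, x2, S2)
    print(x_f, np.sqrt(np.diag(S_f)))
    ```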

  12. Temperature induced distortions in space telescope mirrors

    NASA Technical Reports Server (NTRS)

    Nied, H. F.; Rudmann, A. A.

    1993-01-01

    In this paper, it is illustrated how measured instantaneous coefficients of thermal expansion (CTE) can be accurately taken into account when modeling the structural behavior of space based optical systems. In particular, the importance of including CTE spatial variations in the analysis of optical elements is emphasized. A comparison is made between the CTE's of three optical materials commonly used in the construction of space mirrors (ULE, Zerodur, and beryllium). The overall impact that selection of any one of these materials has on thermal distortions is briefly discussed. As an example of how temperature dependent spatial variations in thermal strain can be accurately incorporated in the thermo-structural analysis of a precision optical system, a finite element model is developed, which is used to estimate the thermally induced distortions in the Hubble Space Telescope's (HST) primary mirror. In addition to the structural analysis, the optical aberrations due to thermally induced distortions are also examined. These calculations indicate that thermal distortions in HST's primary mirror contribute mainly to defocus error with a relatively small contribution to spherical aberration.

  13. Development and application of artificial neural network models to estimate values of a complex human thermal comfort index associated with urban heat and cool island patterns using air temperature data from a standard meteorological station

    NASA Astrophysics Data System (ADS)

    Moustris, Konstantinos; Tsiros, Ioannis X.; Tseliou, Areti; Nastos, Panagiotis

    2018-04-01

    The present study deals with the development and application of artificial neural network models (ANNs) to estimate the values of a complex human thermal comfort-discomfort index associated with urban heat and cool island conditions inside various urban clusters using as only inputs air temperature data from a standard meteorological station. The index used in the study is the Physiologically Equivalent Temperature (PET) index which requires as inputs, among others, air temperature, relative humidity, wind speed, and radiation (short- and long-wave components). For the estimation of PET hourly values, ANN models were developed, appropriately trained, and tested. Model results are compared to values calculated by the PET index based on field monitoring data for various urban clusters (street, square, park, courtyard, and gallery) in the city of Athens (Greece) during an extreme hot weather summer period. For the evaluation of the predictive ability of the developed ANN models, several statistical evaluation indices were applied: the mean bias error, the root mean square error, the index of agreement, the coefficient of determination, the true predictive rate, the false alarm rate, and the Success Index. According to the results, it seems that ANNs present a remarkable ability to estimate hourly PET values within various urban clusters using only hourly values of air temperature. This is very important in cases where the human thermal comfort-discomfort conditions have to be analyzed and the only available parameter is air temperature.
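    A minimal sketch of the workflow is given below using scikit-learn's MLPRegressor on synthetic data, where station air temperature and hour of day stand in for the inputs and a fabricated PET-like target replaces the field-derived values; it only illustrates the train-and-evaluate approach, not the networks or data of the study.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    # Synthetic stand-in data: hourly air temperature plus hour of day as inputs,
    # a fabricated PET-like quantity as target
    rng = np.random.default_rng(1)
    hours = rng.integers(0, 24, 2000)
    t_air = 25 + 8 * np.sin(2 * np.pi * (hours - 9) / 24) + rng.normal(0, 1, 2000)
    pet = 0.9 * t_air + 4 * np.sin(2 * np.pi * (hours - 12) / 24) + rng.normal(0, 1.5, 2000)

    X = np.column_stack([t_air, hours])
    X_tr, X_te, y_tr, y_te = train_test_split(X, pet, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)
    print("RMSE:", mean_squared_error(y_te, model.predict(X_te)) ** 0.5)
    ```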

  14. Development and application of artificial neural network models to estimate values of a complex human thermal comfort index associated with urban heat and cool island patterns using air temperature data from a standard meteorological station.

    PubMed

    Moustris, Konstantinos; Tsiros, Ioannis X; Tseliou, Areti; Nastos, Panagiotis

    2018-04-11

    The present study deals with the development and application of artificial neural network models (ANNs) to estimate the values of a complex human thermal comfort-discomfort index associated with urban heat and cool island conditions inside various urban clusters using as only inputs air temperature data from a standard meteorological station. The index used in the study is the Physiologically Equivalent Temperature (PET) index which requires as inputs, among others, air temperature, relative humidity, wind speed, and radiation (short- and long-wave components). For the estimation of PET hourly values, ANN models were developed, appropriately trained, and tested. Model results are compared to values calculated by the PET index based on field monitoring data for various urban clusters (street, square, park, courtyard, and gallery) in the city of Athens (Greece) during an extreme hot weather summer period. For the evaluation of the predictive ability of the developed ANN models, several statistical evaluation indices were applied: the mean bias error, the root mean square error, the index of agreement, the coefficient of determination, the true predictive rate, the false alarm rate, and the Success Index. According to the results, it seems that ANNs present a remarkable ability to estimate hourly PET values within various urban clusters using only hourly values of air temperature. This is very important in cases where the human thermal comfort-discomfort conditions have to be analyzed and the only available parameter is air temperature.

  15. Effect of the forcing term in the pseudopotential lattice Boltzmann modeling of thermal flows

    NASA Astrophysics Data System (ADS)

    Li, Qing; Luo, K. H.

    2014-05-01

    The pseudopotential lattice Boltzmann (LB) model is a popular model in the LB community for simulating multiphase flows. Recently, several thermal LB models, which are based on the pseudopotential LB model and constructed within the framework of the double-distribution-function LB method, were proposed to simulate thermal multiphase flows [G. Házi and A. Márkus, Phys. Rev. E 77, 026305 (2008), 10.1103/PhysRevE.77.026305; L. Biferale, P. Perlekar, M. Sbragaglia, and F. Toschi, Phys. Rev. Lett. 108, 104502 (2012), 10.1103/PhysRevLett.108.104502; S. Gong and P. Cheng, Int. J. Heat Mass Transfer 55, 4923 (2012), 10.1016/j.ijheatmasstransfer.2012.04.037; M. R. Kamali et al., Phys. Rev. E 88, 033302 (2013), 10.1103/PhysRevE.88.033302]. The objective of the present paper is to show that the effect of the forcing term on the temperature equation must be eliminated in the pseudopotential LB modeling of thermal flows. First, the effect of the forcing term on the temperature equation is shown via the Chapman-Enskog analysis. For comparison, alternative treatments that are free from the forcing-term effect are provided. Subsequently, numerical investigations are performed for two benchmark tests. The numerical results clearly show that the existence of the forcing-term effect will lead to significant numerical errors in the pseudopotential LB modeling of thermal flows.

  16. Demonstration of spectral calibration for stellar interferometry

    NASA Technical Reports Server (NTRS)

    Demers, Richard T.; An, Xin; Tang, Hong; Rud, Mayer; Wayne, Leonard; Kissil, Andrew; Kwack, Eug-Yun

    2006-01-01

    A breadboard is under development to demonstrate the calibration of spectral errors in microarcsecond stellar interferometers. Analysis shows that thermally and mechanically stable hardware in addition to careful optical design can reduce the wavelength dependent error to tens of nanometers. Calibration of the hardware can further reduce the error to the level of picometers. The results of thermal, mechanical and optical analysis supporting the breadboard design will be shown.

  17. Integrated modeling environment for systems-level performance analysis of the Next-Generation Space Telescope

    NASA Astrophysics Data System (ADS)

    Mosier, Gary E.; Femiano, Michael; Ha, Kong; Bely, Pierre Y.; Burg, Richard; Redding, David C.; Kissil, Andrew; Rakoczy, John; Craig, Larry

    1998-08-01

    All current concepts for the NGST are innovative designs which present unique systems-level challenges. The goals are to outperform existing observatories at a fraction of the current price/performance ratio. Standard practices for developing systems error budgets, such as the 'root-sum-of-squares' error tree, are insufficient for designs of this complexity. Simulation and optimization are the tools needed for this project; in particular tools that integrate controls, optics, thermal and structural analysis, and design optimization. This paper describes such an environment which allows sub-system performance specifications to be analyzed parametrically, and includes optimizing metrics that capture the science requirements. The resulting systems-level design trades are greatly facilitated, and significant cost savings can be realized. This modeling environment, built around a tightly integrated combination of commercial off-the-shelf and in-house-developed codes, provides the foundation for linear and non-linear analysis in both the time and frequency domains, statistical analysis, and design optimization. It features an interactive user interface and integrated graphics that allow highly effective, real-time work to be done by multidisciplinary design teams. For the NGST, it has been applied to issues such as pointing control, dynamic isolation of spacecraft disturbances, wavefront sensing and control, on-orbit thermal stability of the optics, and development of systems-level error budgets. In this paper, results are presented from parametric trade studies that assess requirements for pointing control, structural dynamics, reaction wheel dynamic disturbances, and vibration isolation. These studies attempt to define requirements bounds such that the resulting design is optimized at the systems level, without attempting to optimize each subsystem individually. The performance metrics are defined in terms of image quality, specifically centroiding error and RMS wavefront error, which link directly to the science requirements.

  18. A landscape-scale wildland fire study using coupled weather-wildland fire model and airborne remote sensing

    Treesearch

    J.L. Coen; Philip Riggan

    2011-01-01

    We examine the Esperanza fire, a Santa Ana-driven wildland fire that occurred in complex terrain in spatially heterogeneous chaparral fuels, using airborne remote sensing imagery from the FireMapper thermal-imaging radiometer and a coupled weather-wildland fire model. The radiometer data maps fire intensity and is used to evaluate the error in the extent of the...

  19. Ion beam figuring approach for thermally sensitive space optics.

    PubMed

    Yin, Xiaolin; Deng, Weijie; Tang, Wa; Zhang, Binzhi; Xue, Donglin; Zhang, Feng; Zhang, Xuejun

    2016-10-01

    During the ion beam figuring (IBF) of a space mirror, thermal radiation of the neutral filament and particle collisions will heat the mirror. The adhesive layer used to bond the metal parts and the mirror is very sensitive to temperature rise. When the temperature exceeds the designed value, the mirror surface shape will change markedly because of the thermal deformation and stress release of the adhesive layer, thereby reducing the IBF accuracy. To suppress the thermal effect, we analyzed the heat generation mechanism. By using thermal radiation theory, we established a thermal radiation model of the neutral filament. Additionally, we acquired a surface-type Gaussian heat source model of the ion beam sputtering based on the removal function and Faraday scan result. Using the finite-element-method software ABAQUS, we developed a method that can simulate the thermal effect of the IBF for the full path and all dwell times. Based on the thermal model, which was experimentally confirmed, we simulated the thermal effects for a 675 mm × 374 mm rectangular SiC space mirror. By optimizing the dwell time distribution, the peak temperature of the adhesive layer during the figuring process was kept below the designed value. After one round of figuring, the RMS value of the surface error changed from 0.094 to 0.015λ (λ = 632.8 nm), which proved the effectiveness of the thermal analysis and suppression method.

  20. Monitoring Method of Cutting Force by Using Additional Spindle Sensors

    NASA Astrophysics Data System (ADS)

    Sarhan, Ahmed Aly Diaa; Matsubara, Atsushi; Sugihara, Motoyuki; Saraie, Hidenori; Ibaraki, Soichi; Kakino, Yoshiaki

    This paper describes a monitoring method of cutting forces for the end milling process using displacement sensors. Four eddy-current displacement sensors are installed on the spindle housing of a machining center so that they can detect the radial motion of the rotating spindle. Thermocouples are also attached to the spindle structure in order to examine the thermal effect in the displacement sensing. The change in the spindle stiffness due to the spindle temperature and the speed is investigated as well. Finally, the estimation performance of cutting forces using the spindle displacement sensors is experimentally investigated by machining tests on carbon steel in end milling operations under different cutting conditions. It is found that the monitoring errors are attributable to the thermal displacement of the spindle, the time lag of the sensing system, and the modeling error of the spindle stiffness. It is also shown that the root mean square errors between estimated and measured amplitudes of cutting forces are reduced to less than 20 N with proper selection of the linear stiffness.
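    A simplified version of the displacement-based force estimate is sketched below: an assumed radial spindle stiffness converts the measured displacement into force after the slowly varying thermal drift is removed with a moving average. The stiffness value and the synthetic signals are hypothetical.

    ```python
    import numpy as np

    k_spindle = 35.0e6                      # N/m, assumed radial stiffness
    fs = 10_000                             # Hz, sampling rate
    t = np.arange(0, 1.0, 1.0 / fs)
    drift = 5e-6 * t                        # slow thermal growth of the spindle
    cutting = 2e-6 * np.sin(2 * np.pi * 200 * t)   # tooth-passing vibration
    x = drift + cutting                     # measured radial displacement, m

    window = int(0.05 * fs)                 # 50 ms moving average tracks the drift
    baseline = np.convolve(x, np.ones(window) / window, mode="same")
    force_est = k_spindle * (x - baseline)  # force from stiffness times deflection
    print(f"estimated force amplitude ~ {force_est.max():.0f} N")
    ```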

  1. Sensitivity analysis of hydraulic and thermal parameters inducing anomalous heat flow in the Lower Yarmouk Gorge

    NASA Astrophysics Data System (ADS)

    Goretzki, Nora; Inbar, Nimrod; Kühn, Michael; Möller, Peter; Rosenthal, Eliyahu; Schneider, Michael; Siebert, Christian; Magri, Fabien

    2016-04-01

    The Lower Yarmouk Gorge, at the border between Israel and Jordan, is characterized by an anomalous temperature gradient of 46 °C/km. Numerical simulations of thermally-driven flow show that ascending thermal waters are the result of mixed convection, i.e. the interaction between the regional flow from the surrounding heights and buoyant flow within permeable faults [1]. Those models were calibrated against available temperature logs by running several forward problems (FP), with a classic "trial and error" method. In the present study, inverse problems (IP) are applied to find alternative parameter distributions that also lead to the observed thermal anomalies. The investigated physical parameters are hydraulic conductivity and thermal conductivity. To solve the IP, the PEST® code [2] is applied via the graphical interface FEPEST® in FEFLOW® [3]. The results show that both hydraulic and thermal conductivity are consistent with the values determined with the trial and error calibrations, which preceded this study. However, the IP indicates that the hydraulic conductivity of the Senonian Paleocene aquitard can be 8.54×10⁻³ m/d, which is three times lower than the value originally estimated in [1]. Moreover, the IP suggests that the hydraulic conductivity in the faults can increase locally up to 0.17 m/d. These highly permeable areas can be interpreted as local damage zones at the fault/unit intersections. They can act as lateral pathways in the deep aquifers that allow deep outflow of thermal water. This presentation provides an example of the application of FP and IP to infer a wide range of parameter values that reproduce observed environmental issues. [1] Magri F, Inbar N, Siebert C, Rosenthal E, Guttman J, Möller P (2015) Transient simulations of large-scale hydrogeological processes causing temperature and salinity anomalies in the Tiberias Basin. Journal of Hydrology, 520, 342-355 [2] Doherty J (2010) PEST: Model-Independent Parameter Estimation. User manual, 5th Edition. Watermark, Brisbane, Australia [3] Diersch H.-J.G. (2014) FEFLOW Finite Element Modeling of Flow, Mass and Heat Transport in Porous and Fractured Media. Springer-Verlag Berlin Heidelberg, 996p

  2. Implications of Thermal Diffusivity being Inversely Proportional to Temperature Times Thermal Expansivity on Lower Mantle Heat Transport

    NASA Astrophysics Data System (ADS)

    Hofmeister, A.

    2010-12-01

    Many measurements and models of heat transport in lower mantle candidate phases contain systematic errors: (1) conventional methods for insulators involve thermal losses that are pressure (P) and temperature (T) dependent due to physical contact with metal thermocouples, (2) measurements frequently contain unwanted ballistic radiative transfer which hugely increases with T, (3) spectroscopic measurements of dense samples in diamond anvil cells involve strong refraction, which has not been accounted for in analyzing transmission data, (4) the role of grain boundary scattering in impeding heat and light transfer has largely been overlooked, and (5) essentially harmonic physical properties have been used to predict anharmonic behavior. Improving our understanding of the physics of heat transport requires accurate data, especially as a function of temperature, where anharmonicity is the key factor. My laboratory provides thermal diffusivity (D) at T from laser flash analysis, which lacks the above experimental errors. Measuring a plethora of chemical compositions in diverse dense structures (most recently, perovskites, B1, B2, and glasses) as a function of temperature provides a firm basis for understanding microscopic behavior. Given accurate measurements for all quantities: (1) D is inversely proportional to [T × alpha(T)] from ~0 K to melting, where alpha is thermal expansivity, and (2) the damped harmonic oscillator model matches measured D(T), using only two parameters (average infrared dielectric peak width and compressional velocity), both acquired at temperature. These discoveries pertain to the anharmonic aspects of heat transport. I have previously discussed the easily understood quasi-harmonic pressure dependence of D. Universal behavior makes application to the Earth straightforward: due to the stiffness and slow motions of the plates and interior, and present-day, slow planetary cooling rates, Earth can be approximated as being in quasi-steady-state. Because cooling conditions are not transient and pressures are high, vibrational mechanisms overshadow radiative diffusion. On this basis, lower mantle thermal conductivity and temperatures are modeled from seismic data, using available experimental constraints on T for the melted core. A steep thermal gradient existing just above the core is unlikely.

  3. Three-Dimensional Blood Vessel Model with Temperature-Indicating Function for Evaluation of Thermal Damage during Surgery

    PubMed Central

    Watanabe, Takafumi; Arai, Fumihito

    2018-01-01

    Surgical simulators have recently attracted attention because they enable the evaluation of the surgical skills of medical doctors and the performance of medical devices. However, thermal damage to the human body during surgery is difficult to evaluate using conventional surgical simulators. In this study, we propose a functional surgical model with a temperature-indicating function for the evaluation of thermal damage during surgery. The simulator is made of a composite material of polydimethylsiloxane and a thermochromic dye, which produces an irreversible color change as the temperature increases. Using this material, we fabricated a three-dimensional blood vessel model using the lost-wax process. We succeeded in fabricating a renal vessel model for simulation of catheter ablation. Increases in the temperature of the materials can be measured by image analysis of their color change. The maximum measurement error of the temperature was approximately −1.6 °C/+2.4 °C within the range of 60 °C to 100 °C. PMID:29370139

  4. Fast and Accurate Prediction of Stratified Steel Temperature During Holding Period of Ladle

    NASA Astrophysics Data System (ADS)

    Deodhar, Anirudh; Singh, Umesh; Shukla, Rishabh; Gautham, B. P.; Singh, Amarendra K.

    2017-04-01

    Thermal stratification of liquid steel in a ladle during the holding period and the teeming operation has a direct bearing on the superheat available at the caster and hence on the caster set points such as casting speed and cooling rates. The changes in the caster set points are typically carried out based on temperature measurements at the tundish outlet. Thermal prediction models provide advance knowledge of the influence of process and design parameters on the steel temperature at various stages. Therefore, they can be used in making accurate decisions about the caster set points in real time. However, this requires both fast and accurate thermal prediction models. In this work, we develop a surrogate model for the prediction of thermal stratification using data extracted from a set of computational fluid dynamics (CFD) simulations, pre-determined using a design-of-experiments technique. A regression method is used to train the predictor. The model predicts the stratified temperature profile instantaneously, for a given set of process parameters such as initial steel temperature, refractory heat content, slag thickness, and holding time. More than 96 pct of the predicted values are within an error range of ±5 K (±5 °C), when compared against corresponding CFD results. Considering its accuracy and computational efficiency, the model can be extended for thermal control of casting operations. This work also sets a benchmark for developing similar thermal models for downstream processes such as the tundish and the caster.
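    The surrogate idea can be sketched as a small regression model trained on a design-of-experiments table and then queried instantaneously. The table below is fabricated for illustration only; in the paper the samples come from CFD runs and the parameter set and sample count are larger.

    ```python
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    # Made-up table: initial steel temperature [K], slag thickness [mm],
    # holding time [min] -> top-to-bottom stratification [K]
    X = np.array([[1873, 50, 20], [1873, 100, 40], [1853, 50, 40],
                  [1853, 100, 20], [1863, 75, 30], [1883, 75, 50],
                  [1883, 50, 30], [1863, 100, 50]], dtype=float)
    y = np.array([6.0, 9.5, 8.8, 5.7, 7.1, 11.2, 8.0, 11.9])

    # With so few fabricated samples this only illustrates the workflow
    surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    surrogate.fit(X, y)
    print(surrogate.predict([[1870.0, 80.0, 35.0]]))   # instantaneous prediction
    ```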

  5. Piezocomposite Actuator Arrays for Correcting and Controlling Wavefront Error in Reflectors

    NASA Technical Reports Server (NTRS)

    Bradford, Samuel Case; Peterson, Lee D.; Ohara, Catherine M.; Shi, Fang; Agnes, Greg S.; Hoffman, Samuel M.; Wilkie, William Keats

    2012-01-01

    Three reflectors have been developed and tested to assess the performance of a distributed network of piezocomposite actuators for correcting thermal deformations and total wave-front error. The primary testbed article is an active composite reflector, composed of a spherically curved panel with a graphite face sheet and aluminum honeycomb core composite, and then augmented with a network of 90 distributed piezoelectric composite actuators. The piezoelectric actuator system may be used for correcting as-built residual shape errors, and for controlling low-order, thermally-induced quasi-static distortions of the panel. In this study, thermally-induced surface deformations of 1 to 5 microns were deliberately introduced onto the reflector, then measured using a speckle holography interferometer system. The reflector surface figure was subsequently corrected to a tolerance of 50 nm using the actuators embedded in the reflector's back face sheet. Two additional test articles were constructed: a borosilicate flat window of 150 mm diameter with 18 actuators bonded to the back surface; and a direct metal laser sintered reflector with spherical curvature, 230 mm diameter, and 12 actuators bonded to the back surface. In the case of the glass reflector, absolute measurements were performed with an interferometer and the absolute surface was corrected. These test articles were evaluated to determine their absolute surface control capabilities, as well as to assess a multiphysics modeling effort developed under this program for the prediction of active reflector response. This paper will describe the design, construction, and testing of active reflector systems under thermal loads, and subsequent correction of surface shape via distributed piezoelectric actuation.

  6. Using field observations to inform thermal hydrology models of permafrost dynamics with ATS (v0.83)

    DOE PAGES

    Atchley, A. L.; Painter, S. L.; Harp, D. R.; ...

    2015-04-14

    Climate change is profoundly transforming the carbon-rich Arctic tundra landscape, potentially moving it from a carbon sink to a carbon source by increasing the thickness of soil that thaws on a seasonal basis. However, the modeling capability and precise parameterizations of the physical characteristics needed to estimate projected active layer thickness (ALT) are limited in Earth System Models (ESMs). In particular, discrepancies in spatial scale between field measurements and Earth System Models challenge validation and parameterization of hydrothermal models. A recently developed surface/subsurface model for permafrost thermal hydrology, the Advanced Terrestrial Simulator (ATS), is used in combination with field measurements to calibrate and identify fine-scale controls of ALT in ice wedge polygon tundra in Barrow, Alaska. An iterative model refinement procedure that cycles between borehole temperature and snow cover measurements and simulations is used to evaluate and parameterize the different model processes necessary to simulate freeze/thaw processes and ALT formation. After model refinement and calibration, reasonable matches between simulated and measured soil temperatures are obtained, with the largest errors occurring during early summer above ice wedges (e.g. troughs). The results suggest that properly constructed and calibrated one-dimensional thermal hydrology models have the potential to provide reasonable representation of the subsurface thermal response and can be used to infer model input parameters and process representations. The models for soil thermal conductivity and snow distribution were found to be the most sensitive process representations. However, information on lateral flow and snowpack evolution might be needed to constrain model representations of surface hydrology and snow depth.

  7. Metainference: A Bayesian inference method for heterogeneous systems

    PubMed Central

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called “metainference,” that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors. PMID:26844300

  8. Thermal conductivity measurements of proton-heated warm dense aluminum

    NASA Astrophysics Data System (ADS)

    McKelvey, A.; Kemp, G.; Sterne, P.; Fernandez, A.; Shepherd, R.; Marinak, M.; Link, A.; Collins, G.; Sio, H.; King, J.; Freeman, R.; Hua, R.; McGuffey, C.; Kim, J.; Beg, F.; Ping, Y.

    2017-10-01

    We present the first thermal conductivity measurements of warm dense aluminum at 0.5-2.7 g/cc and 2-10 eV, using a recently developed platform of differential heating. A temperature gradient is induced in a Au/Al dual-layer target by proton heating, and subsequent heat flow from the hotter Au to the Al rear surface is detected by two simultaneous time-resolved diagnostics. A systematic data set allows for constraining both thermal conductivity and equation-of-state models. Simulations using the Purgatorio model or Sesame S27314 for Al thermal conductivity and LEOS for the Au/Al release equation-of-state show good agreement with data after 15 ps. Predictions by other models, such as Lee-More, Sesame 27311 and 29373, are outside of the experimental error bars. A discrepancy still exists at early times (0-15 ps), likely due to non-equilibrium conditions. (Y. Ping et al., Phys. Plasmas, 2015; A. McKelvey et al., Sci. Reports, 2017). This work was performed under the auspices of the DOE by LLNL under contract DE-AC52-07NA27344 with support from the DOE OFES Early Career program and the LLNL LDRD program.

  9. Sensitivity of thermal inertia calculations to variations in environmental factors. [in mapping of Earth's surface by remote sensing

    NASA Technical Reports Server (NTRS)

    Kahle, A. B.; Alley, R. E.; Schieldge, J. P.

    1984-01-01

    The sensitivity of thermal inertia (TI) calculations to errors in the measurement or parameterization of a number of environmental factors is considered here. The factors include effects of radiative transfer in the atmosphere, surface albedo and emissivity, variations in surface turbulent heat flux density, cloud cover, vegetative cover, and topography. The error analysis is based upon data from the Heat Capacity Mapping Mission (HCMM) satellite for July 1978 at three separate test sites in the deserts of the western United States. Results show that typical errors in atmospheric radiative transfer, cloud cover, and vegetative cover can individually cause root-mean-square (RMS) errors of about 10 percent (with atmospheric effects sometimes as large as 30-40 percent) in HCMM-derived thermal inertia images of 20,000-200,000 pixels.

  10. Two-phase adiabatic pressure drop experiments and modeling under micro-gravity conditions

    NASA Astrophysics Data System (ADS)

    Longeot, Matthieu J.; Best, Frederick R.

    1995-01-01

    Thermal systems for space applications based on two phase flow have several advantages over single phase systems. Two phase thermal energy management and dynamic power conversion systems have the capability of achieving high specific power levels. However, before two phase systems for space applications can be designed effectively, knowledge of the flow behavior in a "0-g" acceleration environment is necessary. To meet this need, two phase flow experiments were conducted by the Interphase Transport Phenomena Laboratory Group (ITP) aboard the National Aeronautics and Space Administration's (NASA) KC-135, using R12 as the working fluid. The present work is concerned with modeling of two-phase pressure drop under 0-g conditions, for bubbly and slug flow regimes. The set of data from the ITP group includes 3 bubbly points, 9 bubbly/slug points and 6 slug points. These two phase pressure drop data were collected in 1991 and 1992. A methodology to correct and validate the data was developed to achieve high levels of confidence. A homogeneous model was developed to predict the pressure drop for particular flow conditions. This model, which uses the Blasius Correlation, was found to be accurate for bubbly and bubbly/slug flows, with errors not larger than 28%. For slug flows, however, the errors are greater, attaining values up to 66%.
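    For reference, a homogeneous-model frictional pressure gradient with the Blasius friction factor can be sketched as below; the property values are rough R12-like numbers chosen only for illustration, and the choice of mixture viscosity (here simply the liquid value) differs between formulations.

    ```python
    def dp_dz_homogeneous(G, x, rho_l, rho_g, mu_l, D):
        """Frictional pressure gradient [Pa/m] of the homogeneous model for mass
        flux G [kg/(m^2 s)], quality x, phase densities [kg/m^3], liquid
        viscosity [Pa s] and tube diameter D [m]."""
        rho_h = 1.0 / (x / rho_g + (1.0 - x) / rho_l)   # homogeneous density
        Re = G * D / mu_l                               # Reynolds number (liquid viscosity)
        f = 0.079 * Re ** -0.25                         # Blasius (Fanning) friction factor
        return 2.0 * f * G ** 2 / (rho_h * D)

    print(dp_dz_homogeneous(G=500.0, x=0.1, rho_l=1300.0, rho_g=40.0,
                            mu_l=2.0e-4, D=0.011))
    ```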

  11. Stillwater Hybrid Geo-Solar Power Plant Optimization Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Daniel S.; Mines, Gregory L.; Turchi, Craig S.

    2015-09-02

    The Stillwater Power Plant is the first hybrid plant in the world able to bring together a medium-enthalpy geothermal unit with solar thermal and solar photovoltaic systems. Solar field and power plant models have been developed to predict the performance of the Stillwater geothermal / solar-thermal hybrid power plant. The models have been validated using operational data from the Stillwater plant. A preliminary effort to optimize performance of the Stillwater hybrid plant using optical characterization of the solar field has been completed. The Stillwater solar field optical characterization involved measurement of mirror reflectance, mirror slope error, and receiver position error. The measurements indicate that the solar field may generate 9% less energy than the design value if an appropriate tracking offset is not employed. A perfect tracking offset algorithm may be able to boost the solar field performance by about 15%. The validated Stillwater hybrid plant models were used to evaluate hybrid plant operating strategies including turbine IGV position optimization, ACC fan speed and turbine IGV position optimization, turbine inlet entropy control using optimization of multiple process variables, and mixed working fluid substitution. The hybrid plant models predict that each of these operating strategies could increase net power generation relative to the baseline Stillwater hybrid plant operations.

  12. Design verification of large time constant thermal shields for optical reference cavities.

    PubMed

    Zhang, J; Wu, W; Shi, X H; Zeng, X Y; Deng, K; Lu, Z H

    2016-02-01

    In order to achieve high frequency stability in ultra-stable lasers, the Fabry-Pérot reference cavities must be placed inside vacuum chambers with large thermal time constants to reduce the sensitivity to external temperature fluctuations. Currently, the determination of thermal time constants of vacuum chambers is based either on theoretical calculation or on time-consuming experiments. The first method applies only to simple systems, while the second requires a lot of time to try out different designs. To overcome these limitations, we present thermal time constant simulation using finite element analysis (FEA) based on complete vacuum chamber models and verify the results with measured time constants. We measure the thermal time constants using ultrastable laser systems and a frequency comb. The thermal expansion coefficients of optical reference cavities are precisely measured to reduce the measurement error of the time constants. The simulation results and the experimental results agree very well. With this knowledge, we simulate several simplified design models using FEA to obtain larger vacuum thermal time constants at room temperature, taking into account vacuum pressure, shielding layers, and support structure. We adopt the Taguchi method for shielding layer optimization and demonstrate that layer material and layer number dominate the contributions to the thermal time constant, compared with layer thickness and layer spacing.
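    In practice a thermal time constant is often extracted by fitting a single exponential to a measured step response; the sketch below does this on synthetic data, and the time scale and noise level are assumed values rather than those of the reported measurements.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def step_response(t, dT, tau, T_inf):
        """Single-exponential relaxation after an external temperature step."""
        return T_inf + dT * np.exp(-t / tau)

    rng = np.random.default_rng(2)
    t = np.linspace(0, 10, 200)                            # days
    data = step_response(t, 1.0, 2.5, 0.0) + rng.normal(0, 0.02, t.size)

    popt, pcov = curve_fit(step_response, t, data, p0=(1.0, 1.0, 0.0))
    print(f"tau = {popt[1]:.2f} +/- {np.sqrt(pcov[1, 1]):.2f} days")
    ```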

  13. Modified Laser Flash Method for Thermal Properties Measurements and the Influence of Heat Convection

    NASA Technical Reports Server (NTRS)

    Lin, Bochuan; Zhu, Shen; Ban, Heng; Li, Chao; Scripa, Rosalia N.; Su, Ching-Hua; Lehoczky, Sandor L.

    2003-01-01

    The study examined the effect of natural convection when applying the modified laser flash method to measure thermal properties of semiconductor melts. The common laser flash method uses a laser pulse to heat one side of a thin circular sample and measures the temperature response of the other side. Thermal diffusivity can then be calculated based on a heat conduction analysis. For a semiconductor melt, the sample is contained in a specially designed quartz cell with optical windows on both sides. When the laser heats the vertical melt surface, the resulting natural convection can introduce errors into calculations based on the heat conduction model alone. The effect of natural convection was studied by CFD simulations with experimental verification by temperature measurement. The CFD results indicated that natural convection would decrease the time needed for the rear side to reach its peak temperature, and also decrease the peak temperature slightly in our experimental configuration. Using the experimental data, the calculation using only the heat conduction model yielded a thermal diffusivity value about 7.7% lower than that from the model with natural convection. The specific heat capacity was about the same for both heat transfer models, with a difference within 1.6%.
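
    The conduction-only evaluation step mentioned above is commonly based on the classic half-rise-time relation for a flash-heated slab; the sketch below applies it with an assumed sample thickness and half-rise time, purely for illustration.

```python
# Conduction-only ("Parker") laser flash evaluation: thermal diffusivity from
# the rear-face half-rise time. Thickness and t_half below are assumed values.

def parker_diffusivity(thickness_m, t_half_s):
    """Thermal diffusivity [m^2/s] from the 1-D conduction solution."""
    return 0.1388 * thickness_m ** 2 / t_half_s   # 0.1388 = 1.37 / pi^2

alpha = parker_diffusivity(thickness_m=2.0e-3, t_half_s=0.35)
print(f"conduction-only diffusivity: {alpha:.3e} m^2/s")
# If natural convection shortens the half-rise time, this estimate is biased
# low, as reported (~7.7 %) for the configuration studied above.
```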

  14. Studying the Transient Thermal Contact Conductance Between the Exhaust Valve and Its Seat Using the Inverse Method

    NASA Astrophysics Data System (ADS)

    Nezhad, Mohsen Motahari; Shojaeefard, Mohammad Hassan; Shahraki, Saeid

    2016-02-01

    In this study, experiments were aimed at thermally analyzing the exhaust valve of an air-cooled internal combustion engine and estimating the thermal contact conductance in fixed and periodic contacts. Due to the nature of internal combustion engines, the duration of contact between the valve and its seat is very short, and much time is needed to reach the quasi-steady state in the periodic contact between the exhaust valve and its seat. Using the methods of linear extrapolation and the inverse solution, the surface contact temperatures and the fixed and periodic thermal contact conductance were calculated. The results of the linear extrapolation and inverse methods have similar trends and, based on the error analysis, are accurate enough to estimate the thermal contact conductance. Moreover, based on the error analysis, the linear extrapolation method using the inverse ratio is preferred. The effects of pressure, contact frequency, heat flux, and cooling air speed on thermal contact conductance have been investigated. The results show that increasing the contact pressure increases the thermal contact conductance substantially, while increasing the engine speed decreases it. On the other hand, boosting the air speed increases the thermal contact conductance, and raising the heat flux reduces it. The average calculated error equals 12.9%.

  15. Implicit Monte Carlo with a linear discontinuous finite element material solution and piecewise non-constant opacity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaeger, Ryan T.; Wollaber, Allan B.; Urbatsch, Todd J.

    2016-02-23

    Here, the non-linear thermal radiative-transfer equations can be solved in various ways. One popular way is the Fleck and Cummings Implicit Monte Carlo (IMC) method. The IMC method was originally formulated with piecewise-constant material properties. For domains with a coarse spatial grid and large temperature gradients, an error known as numerical teleportation may cause artificially non-causal energy propagation and consequently an inaccurate material temperature. Source tilting is a technique to reduce teleportation error by constructing sub-spatial-cell (or sub-cell) emission profiles from which IMC particles are sampled. Several source tilting schemes exist, but some allow teleportation error to persist. We examine the effect of source tilting in problems with a temperature-dependent opacity. Within each cell, the opacity is evaluated continuously from a temperature profile implied by the source tilt. For IMC, this is a new approach to modeling the opacity. We find that applying source tilting along with a source tilt-dependent opacity can introduce another dominant error that overly inhibits thermal wavefronts. We show that we can mitigate both teleportation and under-propagation errors if we discretize the temperature equation with a linear discontinuous (LD) trial space. Our method is for opacities ~ 1/T^3, but we formulate and test a slight extension for opacities ~ 1/T^3.5, where T is temperature. We find our method avoids errors that can be incurred by IMC with continuous source tilt constructions and piecewise-constant material temperature updates.

  16. Dendritic solidification. I - Analysis of current theories and models. II - A model for dendritic growth under an imposed thermal gradient

    NASA Technical Reports Server (NTRS)

    Laxmanan, V.

    1985-01-01

    A critical review of present dendritic growth theories and models is presented. Mathematically rigorous solutions to dendritic growth are found to rely on an ad hoc assumption that dendrites grow at the maximum possible growth rate. This hypothesis is found to be in error and is replaced by stability criteria which consider the conditions under which a dendrite tip advances in a stable fashion in a liquid. The important elements of a satisfactory model for dendritic solidification are summarized, and a theoretically consistent model for dendritic growth under an imposed thermal gradient is proposed and described. The model is based on a modification of an analysis due to Burden and Hunt (1974) and correctly predicts, in all respects, the transition from a dendritic to a planar interface at both very low and very large growth rates.

  17. Simultaneous Measurement of Thermal Conductivity and Specific Heat in a Single TDTR Experiment

    NASA Astrophysics Data System (ADS)

    Sun, Fangyuan; Wang, Xinwei; Yang, Ming; Chen, Zhe; Zhang, Hang; Tang, Dawei

    2018-01-01

    The time-domain thermoreflectance (TDTR) technique is a powerful thermal property measurement method, especially for nano-structures and material interfaces. Thermal properties can be obtained by fitting TDTR experimental data with a proper thermal transport model. In a single TDTR experiment, thermal properties with different sensitivity trends can be extracted simultaneously. However, thermal conductivity and volumetric heat capacity usually have similar sensitivity trends for most materials, which makes it difficult to measure them simultaneously. In this work, we present a two-step data fitting method to measure the thermal conductivity and volumetric heat capacity simultaneously from a set of TDTR experimental data at a single modulation frequency. This method takes full advantage of the information carried by both amplitude and phase signals, and it is a more convenient and effective solution compared with the frequency-domain thermoreflectance method. The relative error is lower than 5% in most cases. A silicon wafer sample was measured by the TDTR method to verify the two-step fitting method.

  18. Calculations of thermal radiation transfer of C2H2 and C2H4 together with H2O, CO2, and CO in a one-dimensional enclosure using LBL and SNB models

    NASA Astrophysics Data System (ADS)

    Qi, Chaobo; Zheng, Shu; Zhou, Huaichun

    2017-08-01

    Generally, the involvement of hydrocarbons such as C2H4 and its derivative C2H2 in thermal radiation has not been accounted for in numerical simulations of their flames, which may cause serious errors in the estimation of temperature in the early stage of combustion. First, the Statistical Narrow-Band (SNB) model parameters for C2H2 and C2H4 are generated from line-by-line (LBL) calculations. The distributions of the concentrations of radiating gases such as H2O, CO2, CO, C2H2 and C2H4, and the temperature along the centerline of a laminar ethylene/air diffusion flame, were chosen to form a one-dimensional, planar enclosure to be tested in this study. Thermal radiation transfer in such an enclosure was calculated using the LBL approach and the SNB model; most of the relative errors are less than 8%, and the results of the two models show excellent agreement. Below the height of 20 mm, which is the early stage of the flame, the average fraction contributed by C2H2 and C2H4 to the radiative heat source is 33.8%, while that by CO is only 5.8%. This result indicates that the involvement of C2H2 and C2H4 in radiative heat transfer needs to be taken into account in numerical modeling of the ethylene/air diffusion flame, especially in the early stage of combustion.

  19. Multisensor systems today and tomorrow: Machine control, diagnosis and thermal compensation

    NASA Astrophysics Data System (ADS)

    Nunzio, D'Addea

    2000-05-01

    Multisensor techniques that deal with control of a tribology test rig and with diagnosis and thermal error compensation of machine tools are the starting point for some considerations about the use of these techniques in fuzzy and neural net systems. The author concludes that anticipatory systems and multisensor techniques will see great improvement and development in the near future, mainly in the thermal error compensation of machine tools.

  20. A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors.

    PubMed

    El-Diasty, Mohammed; Pagiatakis, Spiros

    2009-01-01

    In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models, are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using an Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at -40 °C, -20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain an optimal navigation solution for MEMS-based INS/GPS integration.
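
    As a hedged sketch of the kind of stochastic model involved, the code below simulates a first-order Gauss-Markov (AR(1)) error process; the correlation time and noise strength are placeholders, not the temperature-dependent ADIS16364 parameters identified in the paper.

```python
# First-order Gauss-Markov (AR(1)) sensor-error simulation with assumed values.

import numpy as np

def simulate_gm1(n, dt, tau, sigma, seed=0):
    """Sequence x_k = beta * x_{k-1} + w_k with stationary std sigma."""
    rng = np.random.default_rng(seed)
    beta = np.exp(-dt / tau)                   # discrete-time correlation coefficient
    q = sigma * np.sqrt(1.0 - beta ** 2)       # driving-noise std for stationary variance sigma^2
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = beta * x[k - 1] + q * rng.standard_normal()
    return x

bias = simulate_gm1(n=10_000, dt=0.01, tau=50.0, sigma=0.002)  # e.g. a gyro bias in rad/s
print(bias.std())
```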

  1. Effects of meteorological models on the solution of the surface energy balance and soil temperature variations in bare soils

    NASA Astrophysics Data System (ADS)

    Saito, Hirotaka; Šimůnek, Jiri

    2009-07-01

    A complete evaluation of the soil thermal regime can be obtained by evaluating the movement of liquid water, water vapor, and thermal energy in the subsurface. Such an evaluation requires the simultaneous solution of the system of equations for the surface water and energy balance, and for subsurface heat transport and water flow. When only daily climatic data are available, one needs not only to estimate diurnal cycles of climatic data, but also to calculate the continuous values of various components in the energy balance equation using different parameterization methods. The objective of this study is to quantify the impact of the choice of different estimation and parameterization methods, referred to collectively as meteorological models in this paper, on soil temperature predictions in bare soils. A variety of widely accepted meteorological models were tested on a dataset collected at a proposed low-level radioactive-waste disposal site in the Chihuahua Desert in West Texas. As the soil surface was kept bare during the study, no vegetation effects were evaluated. A coupled liquid water, water vapor, and heat transport model, implemented in the HYDRUS-1D program, was used to simulate diurnal and seasonal soil temperature changes in the engineered cover installed at the site. The modified version of HYDRUS provides a flexible means for using various types of information and different models to evaluate the surface mass and energy balance. Different meteorological models were compared in terms of their prediction errors for soil temperatures at seven observation depths. The results obtained indicate that although many available meteorological models can be used to solve the energy balance equation at the soil-atmosphere interface in coupled water, vapor, and heat transport models, their impact on overall simulation results varies. For example, using daily average climatic data led to greater prediction errors, while relatively simple meteorological models could significantly improve soil temperature predictions. On the other hand, while the models for albedo and soil emissivity had little impact on soil temperature predictions, the choice of the atmospheric emissivity model had a greater impact. A comparison of all the different models indicates that the error introduced at the soil-atmosphere interface propagates to deeper layers. Therefore, attention needs to be paid not only to the precise determination of the soil hydraulic and thermal properties, but also to the selection of proper meteorological models for the components involved in the surface energy balance calculations.

  2. UNDERSTANDING SYSTEMATIC MEASUREMENT ERROR IN THERMAL-OPTICAL ANALYSIS FOR PM BLACK CARBON USING RESPONSE SURFACES AND SURFACE CONFIDENCE INTERVALS

    EPA Science Inventory

    Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...

  3. Extension of similarity test procedures to cooled engine components with insulating ceramic coatings

    NASA Technical Reports Server (NTRS)

    Gladden, H. J.

    1980-01-01

    Material thermal conductivity was analyzed for its effect on the thermal performance of air cooled gas turbine components, both with and without a ceramic thermal-barrier material, tested at reduced temperatures and pressures. The analysis shows that neglecting the material thermal conductivity can contribute significant errors when metal-wall-temperature test data taken on a turbine vane are extrapolated to engine conditions. This error in metal temperature for an uncoated vane is of opposite sign from that for a ceramic-coated vane. A correction technique is developed for both ceramic-coated and uncoated components.

  4. Accuracy control in Monte Carlo radiative calculations

    NASA Technical Reports Server (NTRS)

    Almazan, P. Planas

    1993-01-01

    The general accuracy law governing the Monte Carlo ray-tracing algorithms commonly used for the calculation of radiative entities in the thermal analysis of spacecraft is presented. These entities involve the transfer of radiative energy either from a single source to a target (e.g., the configuration factors) or from several sources to a target (e.g., the absorbed heat fluxes); in fact, the former is just a particular case of the latter. The accuracy model is later applied to the calculation of some specific radiative entities. Furthermore, some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered throughout the discussion, similar results can be derived for the absolute error.
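
    As a hedged illustration of the statistics behind such an accuracy law, the sketch below estimates a configuration factor by Monte Carlo ray tracing together with its relative standard error; the geometry (a differential emitter facing a coaxial parallel disk, with analytic F = R^2/(R^2 + h^2)) is a toy case, not one from the paper.

```python
# Monte Carlo view-factor estimate with its relative statistical error.

import numpy as np

def mc_view_factor(n_rays, R=1.0, h=1.0, seed=0):
    rng = np.random.default_rng(seed)
    u1, u2 = rng.random(n_rays), rng.random(n_rays)
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2             # cosine-weighted hemisphere sampling
    dx, dy, dz = r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)
    t = h / dz                                         # ray parameter at the disk plane z = h
    hits = (t * dx) ** 2 + (t * dy) ** 2 <= R ** 2     # ray lands on the disk
    p = hits.mean()                                    # estimated configuration factor
    rel_err = np.sqrt((1.0 - p) / (p * n_rays))        # relative standard error of the estimate
    return p, rel_err

F, err = mc_view_factor(100_000)
print(F, err, 1.0 / (1.0 + 1.0))                       # analytic F = 0.5 for R = h = 1
```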

  5. Novel conformal technique to reduce staircasing artifacts at material boundaries for FDTD modeling of the bioheat equation.

    PubMed

    Neufeld, E; Chavannes, N; Samaras, T; Kuster, N

    2007-08-07

    The modeling of thermal effects, often based on the Pennes Bioheat Equation, is becoming increasingly popular. The FDTD technique commonly used in this context suffers considerably from staircasing errors at boundaries. A new conformal technique is proposed that can easily be integrated into existing implementations without requiring a special update scheme; it scales fluxes at material interfaces with factors derived from the local surface normal. The new scheme is validated using an analytical solution, and an error analysis is performed to understand its behavior. The new scheme behaves considerably better than the standard scheme. Furthermore, in contrast to the standard scheme, more accurate solutions can be obtained with it by increasing the grid resolution.
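
    For context, the sketch below is a generic 1-D explicit finite-difference update of the Pennes bioheat equation (the standard scheme, not the conformal correction proposed in the paper); the tissue properties and heat source are placeholder values.

```python
# Explicit 1-D finite-difference solution of the Pennes bioheat equation with
# assumed, generic tissue parameters (illustration only).

import numpy as np

rho, c, k = 1050.0, 3600.0, 0.5           # tissue density, specific heat, conductivity
wb, cb, Ta = 0.5, 3600.0, 37.0            # perfusion [kg/(m^3 s)], blood heat capacity, arterial temp
Q = 5.0e4                                 # volumetric heat source [W/m^3]
dx, nx = 1.0e-3, 101
dt = 0.4 * rho * c * dx**2 / k            # below the explicit stability limit (0.5)

T = np.full(nx, 37.0)                     # start at body temperature
for _ in range(2000):
    lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
    T_new = T + dt / (rho * c) * (k * lap + wb * cb * (Ta - T) + Q)
    T_new[0] = T_new[-1] = 37.0           # fixed-temperature boundaries
    T = T_new
print(T.max())
```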

  6. Using field observations to inform thermal hydrology models of permafrost dynamics with ATS (v0.83)

    DOE PAGES

    Atchley, Adam L.; Painter, Scott L.; Harp, Dylan R.; ...

    2015-09-01

    Climate change is profoundly transforming the carbon-rich Arctic tundra landscape, potentially moving it from a carbon sink to a carbon source by increasing the thickness of soil that thaws on a seasonal basis. However, the modeling capability and precise parameterizations of the physical characteristics needed to estimate projected active layer thickness (ALT) are limited in Earth system models (ESMs). In particular, discrepancies in spatial scale between field measurements and Earth system models challenge validation and parameterization of hydrothermal models. A recently developed surface–subsurface model for permafrost thermal hydrology, the Advanced Terrestrial Simulator (ATS), is used in combination with field measurements to achieve the goals of constructing a process-rich model based on plausible parameters and to identify fine-scale controls of ALT in ice-wedge polygon tundra in Barrow, Alaska. An iterative model refinement procedure that cycles between borehole temperature and snow cover measurements and simulations functions to evaluate and parameterize different model processes necessary to simulate freeze–thaw processes and ALT formation. After model refinement and calibration, reasonable matches between simulated and measured soil temperatures are obtained, with the largest errors occurring during early summer above ice wedges (e.g., troughs). The results suggest that properly constructed and calibrated one-dimensional thermal hydrology models have the potential to provide reasonable representation of the subsurface thermal response and can be used to infer model input parameters and process representations. The models for soil thermal conductivity and snow distribution were found to be the most sensitive process representations. However, information on lateral flow and snowpack evolution might be needed to constrain model representations of surface hydrology and snow depth.

  7. Optimization of fixture layouts of glass laser optics using multiple kernel regression.

    PubMed

    Su, Jianhua; Cao, Enhua; Qiao, Hong

    2014-05-10

    We aim to build an integrated fixturing model to describe the structural properties and thermal properties of the support frame of glass laser optics. Therefore, (a) a near global optimal set of clamps can be computed to minimize the surface shape error of the glass laser optic based on the proposed model, and (b) a desired surface shape error can be obtained by adjusting the clamping forces under various environmental temperatures based on the model. To construct the model, we develop a new multiple kernel learning method and call it multiple kernel support vector functional regression. The proposed method uses two layer regressions to group and order the data sources by the weights of the kernels and the factors of the layers. Because of that, the influences of the clamps and the temperature can be evaluated by grouping them into different layers.

  8. Optical system components for navigation grade fiber optic gyroscopes

    NASA Astrophysics Data System (ADS)

    Heimann, Marcus; Liesegang, Maximilian; Arndt-Staufenbiel, Norbert; Schröder, Henning; Lang, Klaus-Dieter

    2013-10-01

    Interferometric fiber optic gyroscopes belong to the class of inertial sensors. Due to their high accuracy they are used for absolute position and rotation measurement in manned/unmanned vehicles, e.g. submarines, ground vehicles, aircraft or satellites. The important system components are the light source, the electro-optical phase modulator, the optical fiber coil and the photodetector. This paper is focused on approaches to realize a stable light source and fiber coil. A superluminescent diode and an erbium-doped fiber laser were studied to realize an accurate and stable light source; the influence of the polarization grade of the source and the effects of back reflections into the source were examined. During operation, thermal working conditions severely affect the accuracy and stability of the optical fiber coil, which is the sensor element. Thermal gradients applied to the fiber coil have large negative effects on the achievable system accuracy of the fiber optic gyroscope. Therefore a way of calculating and compensating the rotation rate error of a fiber coil due to thermal change is introduced. A simplified 3-dimensional FEM of a quadrupole-wound fiber coil is used to determine the build-up of thermal fields in the polarization-maintaining fiber due to outside heating sources. The rotation rate error due to these sources is then calculated and compared to measurement data. A simple regression model is used to compensate the rotation rate error with temperature measurements at the outside of the fiber coil. To realize a compact and robust optical package for some of the relevant optical system components, an approach based on ion-exchanged waveguides in thin glass was developed. These waveguides are used to realize 1x2 and 1x4 splitters with a fiber coupling interface or direct photodiode coupling.

  9. Thermal Properties of West Siberian Sediments in Application to Basin and Petroleum Systems Modeling

    NASA Astrophysics Data System (ADS)

    Romushkevich, Raisa; Popov, Evgeny; Popov, Yury; Chekhonin, Evgeny; Myasnikov, Artem; Kazak, Andrey; Belenkaya, Irina; Zagranovskaya, Dzhuliya

    2016-04-01

    Quality of heat flow and rock thermal property data is the crucial question in basin and petroleum system modeling. A number of significant deviations in thermal conductivity values were observed during our integrated geothermal study of the West Siberian platform, indicating that corrections should be carried out in basin models. The experimental data, including thermal anisotropy and heterogeneity measurements, were obtained on more than 15,000 core samples and about 4,500 core plugs. The measurements were performed in 1993-2015 with the optical scanning technique within the Continental Super-Deep Drilling Program (Russia) for the scientific super-deep well Tyumenskaya SG-6, the parametric super-deep well Yen-Yakhinskaya, and the deep well Yarudeyskaya-38, as well as for 13 oil and gas fields in West Siberia. Variations of the thermal conductivity tensor components parallel and perpendicular to the layer stratification (assessed for a 2D anisotropy model of the rocks studied), volumetric heat capacity and thermal anisotropy coefficient values, and average values of the thermal properties were the subject of statistical analysis for the uppermost deposits dated T3-J2 (200-165 Ma); J2-J3 (165-150 Ma); J3 (150-145 Ma); K1 (145-136 Ma); K1 (136-125 Ma); K1-K2 (125-94 Ma); and K2-Pg+Ng+Q (94-0 Ma). Uncertainties caused by deviations of thermal conductivity data from their average values were found to be as high as 45%, leading to unexpected errors in basin heat flow determinations. The essential spatial-temporal variations in rock thermal properties in the study area are also proposed to be taken into account in thermo-hydrodynamic modeling of hydrocarbon recovery with thermal methods. The research work was done with financial support of the Russian Ministry of Education and Science (unique identification number RFMEFI58114X0008).

  10. Towards self-correcting quantum memories

    NASA Astrophysics Data System (ADS)

    Michnicki, Kamil

    This thesis presents a model of self-correcting quantum memories where quantum states are encoded using topological stabilizer codes and error correction is done using local measurements and local dynamics. Quantum noise poses a practical barrier to developing quantum memories. This thesis explores two types of models for suppressing noise. One model suppresses thermalizing noise energetically by engineering a Hamiltonian with a high energy barrier between code states. Thermalizing dynamics are modeled phenomenologically as a Markovian quantum master equation with only local generators. The second model suppresses stochastic noise with a cellular automaton that performs error correction using syndrome measurements and a local update rule. Several ways of visualizing and thinking about stabilizer codes are presented in order to design ones that have a high energy barrier: the non-local Ising model, the quasi-particle graph, and the theory of welded stabilizer codes. I develop the theory of welded stabilizer codes and use it to construct a code with the highest known energy barrier in 3-d for spin Hamiltonians: the welded solid code. Although the welded solid code is not fully self-correcting, it has some self-correcting properties: its memory lifetime increases with system size up to a temperature-dependent maximum. One strategy for increasing the energy barrier is mediating an interaction with an external system. I prove a no-go theorem for a class of Hamiltonians where the interaction terms are local, of bounded strength, and commute with the stabilizer group; under these conditions the energy barrier can only be increased by a multiplicative constant. I develop a cellular automaton to perform error correction on a state encoded using the toric code. The numerical evidence indicates that while there is no threshold, the model can extend the memory lifetime significantly. While of less theoretical importance, this could be practical for real implementations of quantum memories. Numerical evidence also suggests that the cellular automaton could function as a decoder with a soft threshold.

  11. Predicting the effects of magnesium oxide nanoparticles and temperature on the thermal conductivity of water using artificial neural network and experimental data

    NASA Astrophysics Data System (ADS)

    Afrand, Masoud; Hemmat Esfe, Mohammad; Abedini, Ehsan; Teimouri, Hamid

    2017-03-01

    The current paper first presents an empirical correlation based on experimental results for estimating the thermal conductivity enhancement of MgO-water nanofluid using a curve-fitting method. Then, artificial neural networks (ANNs) with various numbers of neurons were assessed, considering temperature and MgO volume fraction as the input variables and thermal conductivity enhancement as the output variable, to select the most appropriate and optimized network. Results indicated that the network with 7 neurons had the minimum error. Eventually, the output of the artificial neural network was compared with the results of the proposed empirical correlation and those of the experiments. Comparisons revealed that ANN modeling was more accurate than the curve-fitting method in predicting the thermal conductivity enhancement of the nanofluid.
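
    A minimal sketch of this kind of two-input ANN regression is shown below, using scikit-learn's MLPRegressor with a single hidden layer of 7 neurons; the training data are synthetic placeholders, not the measured MgO-water dataset.

```python
# Two-input neural-network regression sketch (temperature, volume fraction ->
# thermal conductivity enhancement) trained on synthetic placeholder data.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
T = rng.uniform(25.0, 55.0, 200)               # temperature [deg C]
phi = rng.uniform(0.0, 0.01, 200)              # MgO volume fraction [-]
enhancement = 1.0 + 8.0 * phi + 0.002 * (T - 25.0) + 0.002 * rng.standard_normal(200)

X = np.column_stack([T, phi])
model = MLPRegressor(hidden_layer_sizes=(7,), max_iter=5000, random_state=0)
model.fit(X, enhancement)
print(model.predict([[40.0, 0.005]]))          # predicted k_nf / k_base at 40 deg C, 0.5 vol%
```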

  12. Influence of defects on the thermal conductivity of compressed LiF

    DOE PAGES

    Jones, R. E.; Ward, D. K.

    2018-02-08

    We report that defect formation in LiF, which is used as an observation window in ramp and shock experiments, has significant effects on its transmission properties. Given the extreme conditions of the experiments, it is hard to measure the change in transmission directly. Using molecular dynamics, we estimate the change in conductivity as a function of the concentration of likely point and extended defects using a Green-Kubo technique with careful treatment of size effects. With these data, we form a model of the mean behavior and its estimated error; we then use this model to predict the conductivity of a large sample of defective LiF resulting from a direct simulation of ramp compression as a demonstration of the accuracy of its predictions. Given estimates of defect densities in a LiF window used in an experiment, the model can be used to correct the observations of thermal energy through the window. Also, the methodology we develop is extensible to modeling, with quantified uncertainty, the effects of a variety of defects on the thermal conductivity of solid materials.

  13. Influence of defects on the thermal conductivity of compressed LiF

    NASA Astrophysics Data System (ADS)

    Jones, R. E.; Ward, D. K.

    2018-02-01

    Defect formation in LiF, which is used as an observation window in ramp and shock experiments, has significant effects on its transmission properties. Given the extreme conditions of the experiments it is hard to measure the change in transmission directly. Using molecular dynamics, we estimate the change in conductivity as a function of the concentration of likely point and extended defects using a Green-Kubo technique with careful treatment of size effects. With this data, we form a model of the mean behavior and its estimated error; then, we use this model to predict the conductivity of a large sample of defective LiF resulting from a direct simulation of ramp compression as a demonstration of the accuracy of its predictions. Given estimates of defect densities in a LiF window used in an experiment, the model can be used to correct the observations of thermal energy through the window. In addition, the methodology we develop is extensible to modeling, with quantified uncertainty, the effects of a variety of defects on the thermal conductivity of solid materials.

  14. Influence of defects on the thermal conductivity of compressed LiF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, R. E.; Ward, D. K.

    We report that defect formation in LiF, which is used as an observation window in ramp and shock experiments, has significant effects on its transmission properties. Given the extreme conditions of the experiments, it is hard to measure the change in transmission directly. Using molecular dynamics, we estimate the change in conductivity as a function of the concentration of likely point and extended defects using a Green-Kubo technique with careful treatment of size effects. With these data, we form a model of the mean behavior and its estimated error; we then use this model to predict the conductivity of a large sample of defective LiF resulting from a direct simulation of ramp compression as a demonstration of the accuracy of its predictions. Given estimates of defect densities in a LiF window used in an experiment, the model can be used to correct the observations of thermal energy through the window. Also, the methodology we develop is extensible to modeling, with quantified uncertainty, the effects of a variety of defects on the thermal conductivity of solid materials.

  15. Thermocouple Errors when Mounted on Cylindrical Surfaces in Abnormal Thermal Environments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakos, James T.; Suo-Anttila, Jill M.; Zepper, Ethan T.

    Mineral-insulated, metal-sheathed, Type-K thermocouples are used to measure the temperature of various items in high-temperature environments, often exceeding 1000 °C (1273 K). The thermocouple wires (chromel and alumel) are protected from the harsh environments by an Inconel sheath and magnesium oxide (MgO) insulation. The sheath and insulation are required for reliable measurements. Due to the sheath and MgO insulation, however, the temperature registered by the thermocouple is not the temperature of the surface of interest. In some cases, the error incurred is large enough to be of concern because these data are used for model validation, and thus the uncertainties of the data need to be well documented. This report documents the error using 0.062" and 0.040" diameter Inconel-sheathed, Type-K thermocouples mounted on cylindrical surfaces (the inside of a shroud, and the outside and inside of a mock test unit). After an initial transient, the thermocouple bias errors typically range only about ±1-2% of the reading in K. After all of the uncertainty sources have been included, the total uncertainty to 95% confidence, for shroud or test unit TCs in abnormal thermal environments, is about ±2% of the reading in K, lower than the ±3% typically used for flat shrouds. Recommendations are provided in Section 6 to facilitate interpretation and use of the results.

  16. A quantitative comparison of soil moisture inversion algorithms

    NASA Technical Reports Server (NTRS)

    Zyl, J. J. van; Kim, Y.

    2001-01-01

    This paper compares the performance of four bare surface radar soil moisture inversion algorithms in the presence of measurement errors. The particular errors considered include calibration errors, system thermal noise, local topography and vegetation cover.

  17. Minimal entropy reconstructions of thermal images for emissivity correction

    NASA Astrophysics Data System (ADS)

    Allred, Lloyd G.

    1999-03-01

    Low emissivity, with correspondingly low thermal emission, is a problem which has long afflicted infrared thermography. The problem is aggravated by reflected thermal energy, which increases as the emissivity decreases, reducing the net signal-to-noise ratio and degrading the resulting temperature reconstructions. Additional errors are introduced by the traditional emissivity-correction approaches, in which one attempts to correct for emissivity either using thermocouples or using one or more baseline images collected at known temperatures. These corrections are numerically equivalent to image differencing; errors in the baseline images are therefore additive, causing the resulting measurement error to double or triple. The practical application of thermal imagery usually entails coating the objective surface to increase the emissivity to a uniform and repeatable value. While the author recommends that the thermographer still adhere to this practice, he has devised a minimal entropy reconstruction which not only corrects for emissivity variations but also corrects for variations in sensor response, using the baseline images at known temperatures to correct for these values. The minimal entropy reconstruction is based on a modified Hopfield neural network which finds the image that best explains the observed data and baseline data while having minimal entropy change between adjacent pixels. The autocorrelation of temperatures between adjacent pixels is a feature of most close-up thermal images. A surprising result from transient heating data indicates that the resulting corrected thermal images have less measurement error and are closer to the situational truth than the original data.

  18. Compensation method for the influence of angle of view on animal temperature measurement using thermal imaging camera combined with depth image.

    PubMed

    Jiao, Leizi; Dong, Daming; Zhao, Xiande; Han, Pengcheng

    2016-12-01

    In this study, we propose an animal surface temperature measurement method based on a Kinect sensor and an infrared thermal imager to facilitate the screening of animals with febrile diseases. Due to the random motion and small surface temperature variation of animals, the influence of the angle of view on temperature measurement is significant. The method proposed here can compensate for the temperature measurement error caused by the angle of view. First, we analyzed the relationship between measured temperature and angle of view and established a mathematical model for compensating the influence of the angle of view, with a correlation coefficient above 0.99. Second, a fusion method for depth and infrared thermal images was established for synchronous image capture with the Kinect sensor and the infrared thermal imager, and the angle of view of each pixel was calculated. According to the experimental results, without compensation, the temperature image measured at an angle of view of 74° to 76° differed by more than 2°C from that measured at an angle of view of 0°; after compensation, the temperature difference was only 0.03-1.2°C. This method is applicable for real-time compensation of errors caused by the angle of view during temperature measurement with an infrared thermal imager.

  19. Thermal effects in photomask engineering and nano-thermometry

    NASA Astrophysics Data System (ADS)

    Chu, Dachen

    Electron Beam Lithography (EBL) in photomask fabrication results in heating of the resist films. The local heating can change the chemical properties of the resist, leading to placement errors. This heating-induced error is believed to become increasingly significant as the transistor minimum feature size approaches the sub-100 nm regime. A Green's function approach has been developed to calculate four-dimensional temperature profiles in complex structures such as the multi-layer work-pieces exposed in EBL. The model is being used to characterize different e-beam writing strategies to find the optimum. To provide the parameters for the model, two independent techniques have been employed: a thin film electrode method and a laser thermal-reflectance method. Unlike earlier results from polyimide films, no appreciable anisotropy was observed in the thermal conductivities of the polymeric resists tested. Gold/nickel thin film thermocouples with a minimum junction area of 100 nm by 100 nm were fabricated and calibrated. These thermocouples demonstrated a 400 ns response time when heated by a 10 ns laser pulse. Using these nano-thermocouples, transient resist heating temperature profiles were measured at room temperature for the first time. Experimental results showed good agreement with the Green's function model. We also observed a tradeoff in the scaling of thermocouple sensors: smaller thermocouples may provide higher spatial and temporal resolution but have poorer temperature resolution. In conclusion, we both modeled and measured resist heating in EBL. For short exposure times (~1 μs or less) the resist heating is nearly adiabatic, while at longer times the heating is dominated by the substrate. Nanoscale metallic thermocouples were explored, and a tradeoff was observed in dimension scaling.
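
    As a hedged illustration of the building block behind such a Green's-function heating calculation, the sketch below evaluates the temperature rise from an instantaneous point source in an infinite homogeneous medium; the layered work-piece treated in the thesis requires a far more elaborate kernel, and the material values here are generic placeholders.

```python
# Instantaneous point-source Green's function in an infinite homogeneous medium
# (illustrative material values; not the multi-layer kernel of the thesis).

import math

def point_source_rise(Q, r, t, rho=2200.0, c=700.0, alpha=8.0e-7):
    """Temperature rise [K] at radius r [m], time t [s] after depositing Q [J]."""
    return Q / (rho * c * (4.0 * math.pi * alpha * t) ** 1.5) * math.exp(-r * r / (4.0 * alpha * t))

print(point_source_rise(Q=1.0e-9, r=50.0e-9, t=1.0e-7))
```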

  20. Statistical modeling of an integrated boiler for coal fired thermal power plant.

    PubMed

    Chandrasekharan, Sreepradha; Panda, Rames Chandra; Swaminathan, Bhuvaneswari Natrajan

    2017-06-01

    Coal-fired thermal power plants play a major role in power production worldwide because coal is available in abundance. Many of the existing power plants are based on subcritical technology, which can produce power with an efficiency of around 33%, whereas newer plants are built on either supercritical or ultra-supercritical technology, whose efficiency can be up to 50%. The main objective of this work is to enhance the efficiency of the existing subcritical power plants to compensate for the increasing demand. To achieve this objective, statistical models of the boiler units, namely the economizer, drum and superheater, are first developed. The effectiveness of the developed models is tested using analysis methods such as R² analysis and ANOVA (analysis of variance). The dependence of the process variable (temperature) on different manipulated variables is analyzed in the paper. Validations of the models are provided with their error analysis. Response surface methodology (RSM) supported by DOE (design of experiments) is implemented to optimize the operating parameters. Individual models along with the integrated model are used to study and design predictive control of the coal-fired thermal power plant.

  1. Locality of Temperature

    NASA Astrophysics Data System (ADS)

    Kliesch, M.; Gogolin, C.; Kastoryano, M. J.; Riera, A.; Eisert, J.

    2014-07-01

    This work is concerned with thermal quantum states of Hamiltonians on spin- and fermionic-lattice systems with short-range interactions. We provide results leading to a local definition of temperature, thereby extending the notion of "intensivity of temperature" to interacting quantum models. More precisely, we derive a perturbation formula for thermal states. The influence of the perturbation is exactly given in terms of a generalized covariance. For this covariance, we prove exponential clustering of correlations above a universal critical temperature that upper bounds physical critical temperatures such as the Curie temperature. As a corollary, we obtain that above the critical temperature, thermal states are stable against distant Hamiltonian perturbations. Moreover, our results imply that above the critical temperature, local expectation values can be approximated efficiently in the error and the system size.

  2. Evaluation of the 29-km Eta Model for Weather Support to the United States Space Program

    NASA Technical Reports Server (NTRS)

    Manobianco, John; Nutter, Paul

    1997-01-01

    The Applied Meteorology Unit (AMU) conducted a year-long evaluation of NCEP's 29-km mesoscale Eta (meso-eta) weather prediction model in order to identify added value to forecast operations in support of the United States space program. The evaluation was stratified over warm and cool seasons and considered both objective and subjective verification methodologies. Objective verification results generally indicate that meso-eta model point forecasts at selected stations exhibit minimal error growth in terms of RMS errors and are reasonably unbiased. Conversely, results from the subjective verification demonstrate that model forecasts of developing weather events, such as thunderstorms, sea breezes, and cold fronts, are not always as accurate as implied by the seasonal error statistics. Sea-breeze case studies reveal that the model generates a dynamically consistent, thermally direct circulation over the Florida peninsula, although at a larger scale than observed. Thunderstorm verification reveals that the meso-eta model is capable of predicting areas of organized convection, particularly during the late afternoon hours, but is not capable of forecasting individual thunderstorms. Verification of cold fronts during the cool season reveals that the model is capable of forecasting a majority of cold frontal passages through east central Florida to within ±1 h of observed frontal passage.

  3. Remote Sensing of Soil Moisture: A Comparison of Optical and Thermal Methods

    NASA Astrophysics Data System (ADS)

    Foroughi, H.; Naseri, A. A.; Boroomandnasab, S.; Sadeghi, M.; Jones, S. B.; Tuller, M.; Babaeian, E.

    2017-12-01

    Recent technological advances in satellite and airborne remote sensing have provided new means for large-scale soil moisture monitoring. Traditional methods for soil moisture retrieval require both thermal and optical RS observations. In this study we compared the traditional trapezoid model, parameterized from the land surface temperature - normalized difference vegetation index (LST-NDVI) space, with the recently developed optical trapezoid model OPTRAM, parameterized from the shortwave infrared transformed reflectance (STR)-NDVI space, for an extensive sugarcane field located in southwestern Iran. Twelve Landsat-8 satellite images were acquired during the sugarcane growth season (April to October 2016). Reference in situ soil moisture data were obtained at 22 locations at different depths via core sampling and oven-drying. The obtained results indicate that the thermal/optical and optical prediction methods are comparable, both with volumetric moisture content estimation errors of about 0.04 cm³ cm⁻³. However, the OPTRAM model is more efficient because it does not require thermal data and can be universally parameterized for a specific location, because unlike the LST-soil moisture relationship, the reflectance-soil moisture relationship does not significantly vary with environmental variables (e.g., air temperature, wind speed, etc.).
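
    A minimal sketch of the trapezoid-model normalization used by OPTRAM-type approaches is given below: relative soil moisture is computed from a pixel's position between linear dry and wet edges in the STR-NDVI space. The edge coefficients are hypothetical, not the values fitted for the sugarcane site in the study.

```python
# OPTRAM-style trapezoid normalization with hypothetical edge coefficients.

import numpy as np

def optram_moisture(STR, NDVI, i_d, s_d, i_w, s_w):
    """Normalized moisture W in [0, 1], assuming linear dry and wet edges."""
    STR_dry = i_d + s_d * NDVI            # dry edge of the STR-NDVI space
    STR_wet = i_w + s_w * NDVI            # wet edge of the STR-NDVI space
    W = (STR - STR_dry) / (STR_wet - STR_dry)
    return np.clip(W, 0.0, 1.0)

# Illustrative edge coefficients and one pixel
print(optram_moisture(STR=4.0, NDVI=0.6, i_d=0.5, s_d=2.0, i_w=4.5, s_w=6.0))
```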

  4. Sunrise/sunset thermal shock disturbance analysis and simulation for the TOPEX satellite

    NASA Technical Reports Server (NTRS)

    Dennehy, C. J.; Welch, R. V.; Zimbelman, D. F.

    1990-01-01

    It is shown here that during normal on-orbit operations the TOPEX low-earth orbiting satellite is subjected to an impulsive disturbance torque caused by rapid heating of its solar array when entering and exiting the earth's shadow. Error budgets and simulation results are used to demonstrate that this sunrise/sunset torque disturbance is the dominant Normal Mission Mode (NMM) attitude error source. The detailed thermomechanical modeling, analysis, and simulation of this torque is described, and the predicted on-orbit performance of the NMM attitude control system in the face of the sunrise/sunset disturbance is presented. The disturbance results in temporary attitude perturbations that exceed NMM pointing requirements. However, they are below the maximum allowable pointing error which would cause the radar altimeter to break lock.

  5. Assessment of Mars Atmospheric Temperature Retrievals from the Thermal Emission Spectrometer Radiances

    NASA Technical Reports Server (NTRS)

    Hoffman, Matthew J.; Eluszkiewicz, Janusz; Weisenstein, Deborah; Uymin, Gennady; Moncet, Jean-Luc

    2012-01-01

    Motivated by the needs of Mars data assimilation, particularly quantification of measurement errors and generation of averaging kernels, we have evaluated atmospheric temperature retrievals from Mars Global Surveyor (MGS) Thermal Emission Spectrometer (TES) radiances. Multiple sets of retrievals have been considered in this study: (1) retrievals available from the Planetary Data System (PDS), (2) retrievals based on variants of the retrieval algorithm used to generate the PDS retrievals, and (3) retrievals produced using the Mars 1-Dimensional Retrieval (M1R) algorithm based on the Optimal Spectral Sampling (OSS) forward model. The retrieved temperature profiles are compared to the MGS Radio Science (RS) temperature profiles. For the samples tested, the M1R temperature profiles can be made to agree within 2 K with the RS temperature profiles, but only after tuning the prior and error statistics. Use of a global prior that does not take into account the seasonal dependence leads to errors of up to 6 K. In polar samples, errors relative to the RS temperature profiles are even larger. In these samples, the PDS temperature profiles also exhibit a poor fit with RS temperatures. This fit is worse than reported in previous studies, indicating that the lack of fit is due to a bias correction to TES radiances implemented after 2004. To explain the differences between the PDS and M1R temperatures, the algorithms are compared directly, with the OSS forward model inserted into the PDS algorithm. Factors such as the filtering parameter, the use of linear versus nonlinear constrained inversion, and the choice of the forward model are found to contribute heavily to the differences in the temperature profiles retrieved in the polar regions, resulting in uncertainties of up to 6 K. Even outside the poles, changes in the a priori statistics result in different profile shapes which all fit the radiances within the specified error. The importance of the a priori statistics prevents reliable global retrievals based on a single a priori and strongly implies that a robust science analysis must instead rely on retrievals employing localized a priori information, for example from an ensemble-based data assimilation system such as the Local Ensemble Transform Kalman Filter (LETKF).

  6. High frequency observations of Iapetus on the Green Bank Telescope aided by improvements in understanding the telescope response to wind

    NASA Astrophysics Data System (ADS)

    Ries, Paul A.

    2012-05-01

    The Green Bank Telescope is a 100 m, fully steerable, single-dish radio telescope located in Green Bank, West Virginia, capable of making observations from meter wavelengths to 3 mm. However, observations at wavelengths shorter than 2 cm pose significant observational challenges due to pointing and surface errors. The first part of this thesis details efforts to combat wind-induced pointing errors, which cut in half the amount of time available for high-frequency work on the telescope. The primary tool used for understanding these errors was an optical quadrant detector that monitored the motion of the telescope's feed arm. In this work, a calibration was developed that tied quadrant detector readings directly to telescope pointing error. These readings can be used for single-beam observations to determine whether the telescope was blown off-source at some point due to wind. For observations with the 3 mm MUSTANG bolometer array, pointing errors due to wind can mostly be removed (> 2/3) during data reduction. Iapetus is a moon known for its stark albedo dichotomy, with the leading hemisphere only a tenth as bright as the trailing. In order to investigate this dichotomy, Iapetus was observed repeatedly with the GBT at wavelengths between 3 and 11 mm, the original intention being to use the data to determine a thermal light-curve. Instead, the data showed a striking wavelength-dependent deviation from a black-body curve, with an emissivity as low as 0.3 at 9 mm. Numerous techniques were used to demonstrate that this low emissivity is a physical phenomenon rather than an observational one, including some using the quadrant detector to make sure the low emissivities are not due to being blown off source. This emissivity is among the lowest ever detected in the solar system, but it can be reproduced using physically realistic ice models that are also used to model microwave emission from snowpacks and glaciers on Earth. These models indicate that the trailing hemisphere contains a scattering layer of depth 100 cm and grain size of 1-2 mm. The leading hemisphere is shown to exhibit a thermal depth effect.

  7. Steady-state low thermal resistance characterization apparatus: The bulk thermal tester

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burg, Brian R.; Kolly, Manuel; Blasakis, Nicolas

    The reliability of microelectronic devices is largely dependent on electronic packaging, which includes heat removal. An appropriate packaging design therefore necessitates precise knowledge of the relevant material properties, including thermal resistance and thermal conductivity. Thin materials and high-conductivity layers make their thermal characterization challenging. A steady-state measurement technique is presented and evaluated with the purpose of characterizing samples with a thermal resistance below 100 mm² K/W. It is based on the heat flow meter bar approach, made up of two copper blocks, and relies exclusively on temperature measurements from thermocouples. The importance of thermocouple calibration is emphasized in order to obtain accurate temperature readings. An in-depth error analysis, based on Gaussian error propagation, is carried out. An error sensitivity analysis highlights the importance of precise knowledge of the thermal interface materials required for the measurements. Reference measurements on Mo samples reveal a measurement uncertainty in the range of 5%, and the most accurate measurements are obtained at high heat fluxes. Measurement techniques for homogeneous bulk samples, layered materials, and protruding cavity samples are discussed. Ultimately, a comprehensive overview of a steady-state thermal characterization technique is provided, evaluating the accuracy of sample measurements with thermal resistances well below state-of-the-art setups. Accurate characterization of materials used in heat removal applications, such as electronic packaging, will enable more efficient designs and ultimately contribute to energy savings.
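
    A hedged sketch of the meter-bar evaluation and the Gaussian error propagation mentioned above is shown below; the copper conductivity, geometry, and uncertainty values are illustrative assumptions, not the calibrated values of the apparatus. With these assumed inputs the propagated relative uncertainty happens to land near 5%.

```python
# Heat-flow-meter-bar evaluation of an area-specific thermal resistance with a
# simple Gaussian error propagation (all numeric inputs are assumed values).

import math

def meter_bar_resistance(dT_bar, dx_bar, dT_sample, k_cu=390.0, u_dT=0.1, u_dx=1.0e-4):
    """Return (R, u_R) in mm^2 K/W for a sample clamped between two meter bars."""
    q = k_cu * dT_bar / dx_bar             # heat flux [W/m^2] inferred from the copper gradient
    R = dT_sample / q                      # area-specific thermal resistance [m^2 K/W]
    # Gaussian propagation of the dominant thermocouple / position uncertainties
    rel = math.sqrt((u_dT / dT_bar) ** 2 + (u_dx / dx_bar) ** 2 + (u_dT / dT_sample) ** 2)
    return R * 1.0e6, R * 1.0e6 * rel      # convert to mm^2 K/W

R, uR = meter_bar_resistance(dT_bar=5.0, dx_bar=0.02, dT_sample=2.0)
print(f"R = {R:.1f} +/- {uR:.1f} mm^2 K/W")
```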

  8. On-Line, Self-Learning, Predictive Tool for Determining Payload Thermal Response

    NASA Technical Reports Server (NTRS)

    Jen, Chian-Li; Tilwick, Leon

    2000-01-01

    This paper will present the results of a joint ManTech / Goddard R&D effort, currently under way, to develop and test a computer-based, on-line, predictive simulation model for use by facility operators to predict the thermal response of a payload during thermal vacuum testing. Thermal response was identified as an area that could benefit from the algorithms developed by Dr. Jen for complex computer simulations. Most thermal vacuum test setups are unique since no two payloads have the same thermal properties. This requires the operators to depend on their past experience to conduct the test, which takes time as they learn how the payload responds while limiting any risk of exceeding hot or cold temperature limits. The predictive tool being developed is intended to be used with the new Thermal Vacuum Data System (TVDS) developed at Goddard for the Thermal Vacuum Test Operations group. This model can learn the thermal response of the payload by reading a few data points from the TVDS, accepting the payload's current temperature as the initial condition for prediction. The model can then be used as a predictive tool to estimate future payload temperatures according to a predetermined shroud temperature profile. If the prediction error is too large, the model can be asked to re-learn the new situation on-line in real time and give a new prediction. Based on some preliminary tests, we feel this predictive model can forecast the payload temperature over the entire test cycle to within 5 degrees Celsius after it has learned 3 times during the beginning of the test. The tool will allow the operator to run "what-if" experiments to decide on the best shroud temperature set-point control strategy. This tool will save money by minimizing guesswork and optimizing transitions, as well as making the testing process safer and easier to conduct.

  9. Deformation measurements by ESPI of the surface of a heated mirror and comparison with numerical model

    NASA Astrophysics Data System (ADS)

    Languy, Fabian; Vandenrijt, Jean-François; Saint-Georges, Philippe; Georges, Marc P.

    2017-06-01

    The manufacture of mirrors for space applications is expensive, and the requirements on optical performance increase over the years. To achieve higher performance, larger mirrors are manufactured, but the larger the mirror, the higher the sensitivity to temperature variation and therefore the greater the degradation of optical performance. To avoid the use of expensive thermal regulation, we need to develop tools able to predict how optics behave under thermal constraints. This paper presents the comparison between experimental mirror surface deformation and theoretical results from a multiphysics model. The local displacements of the mirror surface have been measured using electronic speckle pattern interferometry (ESPI), and the deformation itself has been calculated by subtracting the rigid body motion. After validation of the mechanical model, experimental and numerical wavefront errors are compared.

  10. The detection error of thermal test low-frequency cable based on M sequence correlation algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Dongliang; Ge, Zheyang; Tong, Xin; Du, Chunlin

    2018-04-01

    The problems of low accuracy and low efficiency in off-line detection of faults in thermal test low-frequency cables can be addressed by designing a cable fault detection system that uses an FPGA-generated M-sequence code (linear feedback shift register sequence) as the pulse signal source. The design principle of the SSTDR (spread spectrum time-domain reflectometry) reflection method and the hardware on-line monitoring setup are discussed in this paper. Test data show that the detection error increases with the fault distance along the thermal test low-frequency cable.
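
    To illustrate the SSTDR principle referred to above, the sketch below generates an M-sequence with a linear feedback shift register, simulates a delayed and attenuated reflection, and locates the fault from the cross-correlation peak; the LFSR taps, delay, and attenuation are illustrative, not the FPGA design values.

```python
# M-sequence generation (Fibonacci LFSR) and cross-correlation fault location.

import numpy as np

def lfsr_msequence(nbits=7, taps=(7, 6), seed=1):
    """+/-1 maximal-length sequence of length 2**nbits - 1."""
    state = [(seed >> i) & 1 for i in range(nbits)]
    out = []
    for _ in range(2 ** nbits - 1):
        out.append(state[-1])                 # output bit
        fb = 0
        for t in taps:                        # feedback = XOR of tapped bits
            fb ^= state[t - 1]
        state = [fb] + state[:-1]             # shift register update
    return np.array(out) * 2 - 1

m = lfsr_msequence()
delay, attenuation = 23, 0.4                  # assumed fault-reflection parameters
echo = attenuation * np.roll(m, delay)        # delayed, attenuated copy of the probe
corr = np.array([np.dot(echo, np.roll(m, k)) for k in range(len(m))])
print("estimated fault delay (samples):", int(np.argmax(corr)))   # -> 23
```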

  11. Note: Focus error detection device for thermal expansion-recovery microscopy (ThERM).

    PubMed

    Domené, E A; Martínez, O E

    2013-01-01

    An innovative focus error detection method is presented that is sensitive only to surface curvature variations, canceling both thermoreflectance and photodeflection effects. The detection scheme consists of an astigmatic probe laser and a four-quadrant detector. Nonlinear curve fitting of the defocusing signal allows the retrieval of a cutoff frequency, which depends only on the thermal diffusivity of the sample and the pump beam size. Therefore, a straightforward retrieval of the thermal diffusivity of the sample is possible with microscopic lateral resolution and high axial resolution (~100 pm).

  12. Thermal, Structural, and Optical Analysis of a Balloon-Based Imaging System

    NASA Astrophysics Data System (ADS)

    Borden, Michael; Lewis, Derek; Ochoa, Hared; Jones-Wilson, Laura; Susca, Sara; Porter, Michael; Massey, Richard; Clark, Paul; Netterfield, Barth

    2017-03-01

    The Subarcsecond Telescope And BaLloon Experiment, STABLE, is the fine stage of a guidance system for a high-altitude ballooning platform designed to demonstrate subarcsecond pointing stability over one minute using relatively dim guide stars in the visible spectrum. The STABLE system uses an attitude rate sensor and the motion of the guide star on a detector to control a Fast Steering Mirror to stabilize the image. The characteristics of the thermal-optical-mechanical elements in the system directly affect the quality of the point-spread function of the guide star on the detector, so a series of thermal, structural, and optical models were built to simulate system performance and ultimately inform the final pointing stability predictions. This paper describes the modeling techniques employed in each of these subsystems. The results from those models are discussed in detail, highlighting the development of the worst-case cold and hot cases, the optical metrics generated from the finite element model, and the expected STABLE residual wavefront error and decenter. Finally, the paper concludes with the predicted sensitivities in the STABLE system, which show that thermal deadbanding, structural pre-loading, self-deflection under different loading conditions, and the speed of individual optical elements were particularly important to the resulting STABLE optical performance.

  13. Statistical modelling of thermal annealing of fission tracks in apatite

    NASA Astrophysics Data System (ADS)

    Laslett, G. M.; Galbraith, R. F.

    1996-12-01

    We develop an improved methodology for modelling the relationship between mean track length, temperature, and time in fission track annealing experiments. We consider "fanning Arrhenius" models, in which contours of constant mean length on an Arrhenius plot are straight lines meeting at a common point. Features of our approach are explicit use of subject matter knowledge, treating mean length as the response variable, modelling of the mean-variance relationship with two components of variance, improved modelling of the control sample, and using information from experiments in which no tracks are seen. This approach overcomes several weaknesses in previous models and provides a robust six parameter model that is widely applicable. Estimation is via direct maximum likelihood which can be implemented using a standard numerical optimisation package. Because the model is highly nonlinear, some reparameterisations are needed to achieve stable estimation and calculation of precisions. Experience suggests that precisions are more convincingly estimated from profile log-likelihood functions than from the information matrix. We apply our method to the B-5 and Sr fluorapatite data of Crowley et al. (1991) and obtain well-fitting models in both cases. For the B-5 fluorapatite, our model exhibits less fanning than that of Crowley et al. (1991), although fitted mean values above 12 μm are fairly similar. However, predictions can be different, particularly for heavy annealing at geological time scales, where our model is less retentive. In addition, the refined error structure of our model results in tighter prediction errors, and has components of error that are easier to verify or modify. For the Sr fluorapatite, our fitted model for mean lengths does not differ greatly from that of Crowley et al. (1991), but our error structure is quite different.

  14. Performance Evaluation of Dual-axis Tracking System of Parabolic Trough Solar Collector

    NASA Astrophysics Data System (ADS)

    Ullah, Fahim; Min, Kang

    2018-01-01

    A parabolic trough solar collector with a concentration ratio of 24 was developed at the College of Engineering, Nanjing Agricultural University, China, and an optical model was built using the TracePro software. The effects of single-axis and dual-axis tracking modes and of azimuth and elevation angle tracking errors on the optical performance were investigated, and the thermal performance of the solar collector was experimentally measured. The results showed that the optical efficiency of the dual-axis tracking was 0.813%, and its yearly average value was 14.3% and 40.9% higher than that of the east-west tracking mode and the north-south tracking mode, respectively. Further, from the results of the experiment, it was concluded that the optical efficiency was affected significantly by the elevation angle tracking errors, which should be kept below 0.6°. High optical efficiency could be attained by using the dual-tracking mode even though the tracking precision of one axis was degraded. The real-time instantaneous thermal efficiency of the collector reached 0.775%. In addition, the linearity of the normalized efficiency was favorable. The curve of the calculated thermal efficiency agreed well with the normalized instantaneous efficiency curve derived from the experimental data, and the maximum difference between them was 10.3%. This type of solar collector should be applied in middle-scale thermal collection systems.

  15. Thermal transport at the nanoscale: A Fourier's law vs. phonon Boltzmann equation study

    NASA Astrophysics Data System (ADS)

    Kaiser, J.; Feng, T.; Maassen, J.; Wang, X.; Ruan, X.; Lundstrom, M.

    2017-01-01

    Steady-state thermal transport in nanostructures with dimensions comparable to the phonon mean-free-path is examined. Both the case of contacts at different temperatures with no internal heat generation and contacts at the same temperature with internal heat generation are considered. Fourier's law results are compared to finite volume method solutions of the phonon Boltzmann equation in the gray approximation. When the boundary conditions are properly specified, results obtained using Fourier's law without modifying the bulk thermal conductivity are in essentially exact quantitative agreement with the phonon Boltzmann equation in the ballistic and diffusive limits. The errors between these two limits are examined in this paper. For the four cases examined, the error in the apparent thermal conductivity as deduced from a correct application of Fourier's law is less than 6%. We also find that the Fourier's law results presented here are nearly identical to those obtained from a widely used ballistic-diffusive approach but analytically much simpler. Although limited to steady-state conditions with spatial variations in one dimension and to a gray model of phonon transport, the results show that Fourier's law can be used for linear transport from the diffusive to the ballistic limit. The results also contribute to an understanding of how heat transport at the nanoscale can be understood in terms of the conceptual framework that has been established for electron transport at the nanoscale.

  16. Design and performance of the ALMA-J prototype antenna

    NASA Astrophysics Data System (ADS)

    Ukita, Nobuharu; Saito, Masao; Ezawa, Hajime; Ikenoue, Bungo; Ishizaki, Hideharu; Iwashita, Hiroyuki; Yamaguchi, Nobuyuki; Hayakawa, Takahiro

    2004-10-01

    The National Astronomical Observatory of Japan has constructed a prototype 12-m antenna of the Atacama Compact Array to evaluate its performance at the ALMA Test Facility in the NRAO VLA observatory in New Mexico, the United States. The antenna has a CFRP tube backup structure (BUS) with CFRP boards to support 205 machined aluminum surface panels. Their accuracies were measured to be 5.9 μm rms on average. A chemical treatment technique for the surface panels has been successfully applied to scatter the solar radiation, which resulted in a subreflector temperature increase of about 25 degrees relative to ambient temperature during direct solar observations. Holography measurements and panel adjustments led to a final surface accuracy of 20 μm rms (weighted by a 12 dB edge taper) after three rounds of panel adjustments. Based on long-term temperature monitoring of the BUS and thermal deformation FEM calculations, the BUS thermal deformation was estimated to be less than 3.1 μm rms. We have employed a gear drive mechanism both for a fast position-switching capability and for smooth drive at low velocities. Servo errors measured with the angle encoders were found to be less than 0.1 arcseconds rms at rotational velocities below 0.1 degrees s-1 and to increase to 0.7 arcseconds rms at the maximum speed of the 'on-the-fly' scan as a single dish, 0.5 deg s-1, induced by the irregularity of individual gear tooth profiles. Simultaneous measurements of the antenna motion with the angle encoders and with seismic accelerometers mounted at the primary reflector mirror edges and at the subreflector showed the same amplitude and phase of oscillation, indicating that they are rigid and suggesting that it is possible to estimate where the antenna is actually pointing from the encoder readout. Continuous tracking measurements of Polaris during day and night have revealed a large pointing drift due to thermal distortion of the yoke structure. We have applied retrospective thermal corrections to tracking data over two hours, with a preliminary thermal deformation model of the yoke, and have found the tracking accuracy improved to 0.1 - 0.3 arcseconds rms over a 15-minute period. The whole-sky absolute pointing error under no wind and during night was measured to be 1.17 arcseconds rms. We need both an elaborated model of the thermal deformation of the structure and systematic searches for significant correlations between pointing errors and metrology sensor outputs to achieve the stable tracking performance requested by ALMA.

  17. Topological quantum error correction in the Kitaev honeycomb model

    NASA Astrophysics Data System (ADS)

    Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.

    2017-08-01

    The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.

  18. An Innovative Strategy for Accurate Thermal Compensation of Gyro Bias in Inertial Units by Exploiting a Novel Augmented Kalman Filter

    PubMed Central

    Angrisani, Leopoldo; Simone, Domenico De

    2018-01-01

    This paper presents an innovative model for integrating thermal compensation of gyro bias error into an augmented-state Kalman filter. The developed model is applied in the Zero Velocity Update filter for inertial units manufactured with Micro Electro-Mechanical System (MEMS) gyros. It is used to remove residual bias at startup. It is a more effective alternative to the traditional approach, which cascades bias thermal correction by calibration with traditional Kalman filtering for bias tracking. This function is very useful when the adopted gyros are manufactured using MEMS technology. These systems have significant limitations in terms of sensitivity to environmental conditions, and they are characterized by a strong correlation of the systematic error with temperature variations. The traditional process is divided into two separate algorithms, i.e., calibration and filtering, and this aspect reduces system accuracy, reliability, and maintainability. This paper proposes an innovative Zero Velocity Update filter that requires only raw, uncalibrated gyro data as input. It unifies in a single algorithm the two steps of the traditional approach. Therefore, it saves time and economic resources, simplifying the management of the thermal correction process. In the paper, the traditional and innovative Zero Velocity Update filters are described in detail, as well as the experimental data set used to test both methods. The performance of the two filters is compared both in nominal conditions and in the typical case of a residual initial alignment bias. In this last condition, the innovative solution shows significant improvements with respect to the traditional approach. This is the typical case of an aircraft or a car in parking conditions under solar input. PMID:29735956

  19. An Innovative Strategy for Accurate Thermal Compensation of Gyro Bias in Inertial Units by Exploiting a Novel Augmented Kalman Filter.

    PubMed

    Fontanella, Rita; Accardo, Domenico; Moriello, Rosario Schiano Lo; Angrisani, Leopoldo; Simone, Domenico De

    2018-05-07

    This paper presents an innovative model for integrating thermal compensation of gyro bias error into an augmented-state Kalman filter. The developed model is applied in the Zero Velocity Update filter for inertial units manufactured with Micro Electro-Mechanical System (MEMS) gyros. It is used to remove residual bias at startup. It is a more effective alternative to the traditional approach, which cascades bias thermal correction by calibration with traditional Kalman filtering for bias tracking. This function is very useful when the adopted gyros are manufactured using MEMS technology. These systems have significant limitations in terms of sensitivity to environmental conditions, and they are characterized by a strong correlation of the systematic error with temperature variations. The traditional process is divided into two separate algorithms, i.e., calibration and filtering, and this aspect reduces system accuracy, reliability, and maintainability. This paper proposes an innovative Zero Velocity Update filter that requires only raw, uncalibrated gyro data as input. It unifies in a single algorithm the two steps of the traditional approach. Therefore, it saves time and economic resources, simplifying the management of the thermal correction process. In the paper, the traditional and innovative Zero Velocity Update filters are described in detail, as well as the experimental data set used to test both methods. The performance of the two filters is compared both in nominal conditions and in the typical case of a residual initial alignment bias. In this last condition, the innovative solution shows significant improvements with respect to the traditional approach. This is the typical case of an aircraft or a car in parking conditions under solar input.
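
    The abstract describes carrying thermally dependent bias terms as augmented Kalman filter states, but the exact state vector and models are not given, so the sketch below is a minimal one-dimensional illustration under assumed values. During a zero-velocity interval the true rate is taken as zero, so the raw gyro output is modeled as b0 + k_T (T - T_ref) plus noise, and both b0 and k_T are estimated as augmented states.

```python
import numpy as np

# 1-D illustration: during zero-velocity intervals the true angular rate is
# ~0, so the raw gyro output is (approximately) its bias.  The bias is modeled
# as b(T) = b0 + k_T * (T - T_ref); both b0 and k_T are augmented states.
rng = np.random.default_rng(1)
T_ref = 25.0
b0_true, kT_true = 0.02, 0.004        # deg/s and deg/s per degC (made-up values)

x = np.zeros(2)                       # state: [b0, k_T]
P = np.diag([1.0, 1.0])               # initial covariance
Q = np.diag([1e-10, 1e-10])           # parameters treated as nearly constant
R = np.array([[0.01 ** 2]])           # gyro white-noise variance

for step in range(500):
    temp = 25.0 + 0.05 * step         # slow warm-up during the stationary period
    z = b0_true + kT_true * (temp - T_ref) + rng.normal(0.0, 0.01)

    # Prediction (parameters constant -> state transition is the identity)
    P = P + Q

    # Measurement update: z = H x + v, with H = [1, (T - T_ref)]
    H = np.array([[1.0, temp - T_ref]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (np.array([z]) - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated b0 = {x[0]:.4f} deg/s, k_T = {x[1]:.5f} deg/s/degC")
```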

  20. ATLAST ULE mirror segment performance analytical predictions based on thermally induced distortions

    NASA Astrophysics Data System (ADS)

    Eisenhower, Michael J.; Cohen, Lester M.; Feinberg, Lee D.; Matthews, Gary W.; Nissen, Joel A.; Park, Sang C.; Peabody, Hume L.

    2015-09-01

    The Advanced Technology Large-Aperture Space Telescope (ATLAST) is a concept for a 9.2 m aperture space-borne observatory operating across the UV/Optical/NIR spectra. The primary mirror for ATLAST is a segmented architecture with picometer-class wavefront stability. Due to its extraordinarily low coefficient of thermal expansion, a leading candidate for the primary mirror substrate is Corning's ULE® titania-silicate glass. The ATLAST ULE® mirror substrates will be maintained at 'room temperature' during on-orbit flight operations, minimizing the need for compensation of mirror deformation between the manufacturing temperature and the operational temperatures. This approach requires active thermal management to maintain operational temperature while on orbit. Furthermore, the active thermal control must be sufficiently stable to prevent time-varying thermally induced distortions in the mirror substrates. This paper describes a conceptual thermal management system for the ATLAST 9.2 m segmented mirror architecture that maintains the wavefront stability to less than 10 picometers/10 minutes RMS. Thermal and finite element models, analytical techniques, accuracies involved in solving the mirror figure errors, and early findings from the thermal and thermal-distortion analyses are presented.

  1. Assessment of the Sensitivity to the Thermal Roughness Length in Noah and Noah-MP Land Surface Model Using WRF in an Arid Region

    NASA Astrophysics Data System (ADS)

    Weston, Michael; Chaouch, Naira; Valappil, Vineeth; Temimi, Marouane; Ek, Michael; Zheng, Weizhong

    2018-06-01

    Atmospheric models are known to underestimate land surface temperature and, by association, 2 m air temperature over dry arid regions during the day due to the treatment of the thermal roughness length, also known as the roughness length for heat. The thermal roughness length can be controlled by the Zilitinkevich parameter, known as Czil, which is a tunable parameter within the models. Three different scenarios with the WRF model are run to test the impact of the Czil parameter on the simulations using two land surface models: the Noah and Noah-MP models. In this study, a modified version of the Noah-MP model is tested, in which the Czil parameter, and therefore the thermal roughness length, varies depending on the land cover and vegetation height. The model domain is over the United Arab Emirates (UAE), where the major land cover type is desert. The following configurations are tested: the Noah model with Czil = 0.1, the Noah model with Czil = 0.5, and the Noah-MP model with Czil = 0.5 over desert. Results for 2 m air temperature are verified against three stations in the UAE. The mean gross error of the diurnal 2 m temperature was reduced by up to 1.48 and 1.54 °C in the 24 and 48 h forecasts, respectively. This reduced the cold bias in the model. The improvement in air temperature was also found to improve the diurnal cycle of relative humidity at the three monitoring stations, as well as the duration of the sea breeze in some cases.
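
    For context, a commonly cited Zilitinkevich-type relation links the thermal roughness length to the momentum roughness length through Czil; the sketch below evaluates that relation for two Czil values. The functional form and all parameter values here are assumptions for illustration and may differ from the exact formulation implemented in WRF/Noah.

```python
import numpy as np

def thermal_roughness_length(z0m, u_star, czil, nu=1.5e-5, karman=0.4):
    """Thermal roughness length from a Zilitinkevich-type relation,
    z0h = z0m * exp(-k * Czil * sqrt(Re*)),  with Re* = u* z0m / nu.
    This is the commonly cited form used in Noah-like schemes; the exact
    constants in WRF may differ."""
    re_star = u_star * z0m / nu
    return z0m * np.exp(-karman * czil * np.sqrt(re_star))

z0m = 0.01          # momentum roughness length for desert, m (assumed)
u_star = 0.35       # friction velocity, m/s (assumed)
for czil in (0.1, 0.5):
    z0h = thermal_roughness_length(z0m, u_star, czil)
    print(f"Czil = {czil}: z0h = {z0h:.2e} m")
```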

  2. A comparative study on improved Arrhenius-type and artificial neural network models to predict high-temperature flow behaviors in 20MnNiMo alloy.

    PubMed

    Quan, Guo-zheng; Yu, Chun-tang; Liu, Ying-ying; Xia, Yu-feng

    2014-01-01

    The stress-strain data of 20MnNiMo alloy were collected from a series of hot compressions on a Gleeble-1500 thermal-mechanical simulator in the temperature range of 1173-1473 K and the strain rate range of 0.01-10 s^-1. Based on the experimental data, an improved Arrhenius-type constitutive model and an artificial neural network (ANN) model were established to predict the high-temperature flow stress of as-cast 20MnNiMo alloy. The accuracy and reliability of the improved Arrhenius-type model and the trained ANN model were further evaluated in terms of the correlation coefficient (R), the average absolute relative error (AARE), and the relative error (η). For the former, R and AARE were found to be 0.9954 and 5.26%, respectively, while, for the latter, they were 0.9997 and 1.02%, respectively. The relative errors (η) of the improved Arrhenius-type model and the ANN model were, respectively, in the range of -39.99% to 35.05% and -3.77% to 16.74%. For the former, only 16.3% of the test data set has η-values within ±1%, while for the latter more than 79% does. The results indicate that the ANN model presents a higher predictive ability than the improved Arrhenius-type constitutive model.
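
    The comparison metrics quoted in the abstract (R, AARE, and η) follow standard definitions; the sketch below implements them so the numbers can be reproduced for any pair of measured and predicted flow-stress values. The toy stress data are illustrative only.

```python
import numpy as np

def correlation_coefficient(y_meas, y_pred):
    """Pearson correlation coefficient R between measured and predicted values."""
    ym, yp = y_meas - y_meas.mean(), y_pred - y_pred.mean()
    return np.sum(ym * yp) / np.sqrt(np.sum(ym ** 2) * np.sum(yp ** 2))

def aare(y_meas, y_pred):
    """Average absolute relative error (AARE), in percent."""
    return 100.0 * np.mean(np.abs((y_meas - y_pred) / y_meas))

def relative_error(y_meas, y_pred):
    """Per-point relative error eta, in percent."""
    return 100.0 * (y_pred - y_meas) / y_meas

# Toy flow-stress data (MPa): measured vs. model predictions
sigma_meas = np.array([120.0, 150.0, 180.0, 210.0])
sigma_pred = np.array([118.0, 155.0, 176.0, 214.0])
eta = relative_error(sigma_meas, sigma_pred)
print(f"R = {correlation_coefficient(sigma_meas, sigma_pred):.4f}, "
      f"AARE = {aare(sigma_meas, sigma_pred):.2f}%")
print(f"eta range: {eta.min():+.2f}% to {eta.max():+.2f}%")
```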

  3. Life Prediction Model for Grid-Connected Li-ion Battery Energy Storage System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kandler A; Saxon, Aron R; Keyser, Matthew A

    Lithium-ion (Li-ion) batteries are being deployed on the electrical grid for a variety of purposes, such as to smooth fluctuations in solar renewable power generation. The lifetime of these batteries will vary depending on their thermal environment and how they are charged and discharged. Optimal utilization of a battery over its lifetime requires characterization of its performance degradation under different storage and cycling conditions. Aging tests were conducted on commercial graphite/nickel-manganese-cobalt (NMC) Li-ion cells. A general lifetime prognostic model framework is applied to model changes in capacity and resistance as the battery degrades. Across 9 aging test conditions from 0°C to 55°C, the model predicts capacity fade with 1.4% RMS error and resistance growth with 15% RMS error. The model, recast in state variable form with 8 states representing separate fade mechanisms, is used to extrapolate lifetime for example applications of the energy storage system integrated with renewable photovoltaic (PV) power generation.

  4. Analysis tool and methodology design for electronic vibration stress understanding and prediction

    NASA Astrophysics Data System (ADS)

    Hsieh, Sheng-Jen; Crane, Robert L.; Sathish, Shamachary

    2005-03-01

    The objectives of this research were to (1) understand the impact of vibration on electronic components under ultrasound excitation; (2) model the thermal profile presented under vibration stress; and (3) predict stress level given a thermal profile of an electronic component. Research tasks included: (1) retrofit of current ultrasonic/infrared nondestructive testing system with sensory devices for temperature readings; (2) design of software tool to process images acquired from the ultrasonic/infrared system; (3) developing hypotheses and conducting experiments; and (4) modeling and evaluation of electronic vibration stress levels using a neural network model. Results suggest that (1) an ultrasonic/infrared system can be used to mimic short burst high vibration loads for electronics components; (2) temperature readings for electronic components under vibration stress are consistent and repeatable; (3) as stress load and excitation time increase, temperature differences also increase; (4) components that are subjected to a relatively high pre-stress load, followed by a normal operating load, have a higher heating rate and lower cooling rate. These findings are based on grayscale changes in images captured during experimentation. Discriminating variables and a neural network model were designed to predict stress levels given temperature and/or grayscale readings. Preliminary results suggest a 15.3% error when using grayscale change rate and 12.8% error when using average heating rate within the neural network model. Data were obtained from a high stress point (the corner) of the chip.

  5. Multi-spectral pyrometer for gas turbine blade temperature measurement

    NASA Astrophysics Data System (ADS)

    Gao, Shan; Wang, Lixin; Feng, Chi

    2014-09-01

    To achieve the highest possible turbine inlet temperature requires accurate measurement of the turbine blade temperature. If the blade temperature frequently exceeds the design limits, the service life is seriously reduced. The accuracy of the temperature measurement is limited by the unknown value of the target surface emissivity, the variability of the emissivity model, and the thermal radiation of the high-temperature environment. In this paper, a multi-spectral pyrometer designed mainly for the range 500-1000° is presented, together with a model that corrects for the error due to the reflected radiation, based only on the turbine geometry and the physical properties of the material. Under different working conditions, the method can reduce the measurement error caused by radiation reflected from the vanes, bringing the measurement closer to the actual temperature of the blade, with the corresponding model parameters calculated through a genetic algorithm. Experiments show that this method yields more accurate measurements.

  6. Trends in MODIS Geolocation Error Analysis

    NASA Technical Reports Server (NTRS)

    Wolfe, R. E.; Nishihama, Masahiro

    2009-01-01

    Data from the two MODIS instruments have been accurately geolocated (Earth located) to enable retrieval of global geophysical parameters. The authors describe the approach used to geolocate with sub-pixel accuracy over nine years of data from MODIS on NASA's EOS Terra spacecraft and seven years of data from MODIS on the Aqua spacecraft. The approach uses a geometric model of the MODIS instruments, accurate navigation (orbit and attitude) data, and an accurate Earth terrain model to compute the location of each MODIS pixel. The error analysis approach automatically matches MODIS imagery with a global set of over 1,000 ground control points from the finer-resolution Landsat satellite to measure static biases and trends in the MODIS geometric model parameters. Both within-orbit and yearly thermally induced cyclic variations in the pointing have been found, as well as a general long-term trend.

  7. Numerical homogenization of elastic and thermal material properties for metal matrix composites (MMC)

    NASA Astrophysics Data System (ADS)

    Schindler, Stefan; Mergheim, Julia; Zimmermann, Marco; Aurich, Jan C.; Steinmann, Paul

    2017-01-01

    A two-scale material modeling approach is adopted in order to determine macroscopic thermal and elastic constitutive laws and the respective parameters for metal matrix composite (MMC). Since the common homogenization framework violates the thermodynamical consistency for non-constant temperature fields, i.e., the dissipation is not conserved through the scale transition, the respective error is calculated numerically in order to prove the applicability of the homogenization method. The thermomechanical homogenization is applied to compute the macroscopic mass density, thermal expansion, elasticity, heat capacity and thermal conductivity for two specific MMCs, i.e., aluminum alloy Al2024 reinforced with 17 or 30 % silicon carbide particles. The temperature dependency of the material properties has been considered in the range from 0 to 500°C, the melting temperature of the alloy. The numerically determined material properties are validated with experimental data from the literature as far as possible.

  8. Quantifying data retention of perpendicular spin-transfer-torque magnetic random access memory chips using an effective thermal stability factor method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Luc, E-mail: luc.thomas@headway.com; Jan, Guenole; Le, Son

    The thermal stability of perpendicular Spin-Transfer-Torque Magnetic Random Access Memory (STT-MRAM) devices is investigated at chip level. Experimental data are analyzed in the framework of the Néel-Brown model including distributions of the thermal stability factor Δ. We show that in the low error rate regime important for applications, the effect of distributions of Δ can be described by a single quantity, the effective thermal stability factor Δ_eff, which encompasses both the median and the standard deviation of the distributions. Data retention of memory chips can be assessed accurately by measuring Δ_eff as a function of device diameter and temperature. We apply this method to show that 54 nm devices based on our perpendicular STT-MRAM design meet our 10 year data retention target up to 120 °C.
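
    As a rough sketch of how an effective thermal stability factor can be defined (my reading of the abstract, not necessarily the authors' formulation), the code below averages the Néel-Brown switching probability over an assumed normal distribution of Δ and then inverts the single-barrier expression to obtain the Δ that reproduces the same chip-level error rate. The attempt time, median Δ, and standard deviation are assumed values.

```python
import numpy as np

TAU0 = 1e-9  # attempt time in seconds (a commonly assumed value)

def fail_prob(delta, t):
    """Neel-Brown probability that a single bit has flipped after time t."""
    return -np.expm1(-(t / TAU0) * np.exp(-delta))

def chip_error_rate(delta_median, delta_sigma, t):
    """Bit error rate averaged over a normal distribution of Delta
    (simple numerical integration over +/- 8 sigma)."""
    d = np.linspace(delta_median - 8 * delta_sigma,
                    delta_median + 8 * delta_sigma, 8001)
    pdf = (np.exp(-0.5 * ((d - delta_median) / delta_sigma) ** 2)
           / (delta_sigma * np.sqrt(2.0 * np.pi)))
    return np.sum(fail_prob(d, t) * pdf) * (d[1] - d[0])

def delta_eff(delta_median, delta_sigma, t):
    """Single 'effective' Delta that reproduces the distributed error rate
    when plugged back into the single-barrier Neel-Brown expression."""
    ber = chip_error_rate(delta_median, delta_sigma, t)
    return -np.log(-np.log1p(-ber) * TAU0 / t)

t_10yr = 10.0 * 365.25 * 24.0 * 3600.0
print(f"Delta_eff over 10 years ~ {delta_eff(80.0, 4.0, t_10yr):.1f} "
      f"(median Delta = 80, sigma = 4, assumed values)")
```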

  9. Parameters on reconstructions of geohistory, thermal history, and hydrocarbon generation history in a sedimentary basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, S.; Lerche, I.

    1988-01-01

    Geological processes related to petroleum generation, migration, and accumulation are very complicated in terms of the time and variables involved, and are very difficult to simulate by laboratory experiments. For this reason, many mathematical/computer models have been developed to simulate these geological processes based on geological, geophysical, and geochemical principles. Unfortunately, none of these models can exactly simulate these processes because of the assumptions and simplifications made in the models and the errors in their input. The sensitivity analysis is a comprehensive examination of how geological, geophysical, and geochemical parameters affect the reconstructions of geohistory, thermal history, and hydrocarbon generation history. In this study, a one-dimensional fluid flow/compaction model has been used to run the sensitivity analysis. The authors will show the effects of some commonly used parameters, such as depth, age, lithology, porosity, permeability, unconformity (time and eroded thickness), temperature at the sediment surface, bottom hole temperature, present-day heat flow, thermal gradient, thermal conductivity, and kerogen type and content, on the evolution of formation thickness, porosity, permeability, pressure with time and depth, heat flow with time, temperature with time and depth, vitrinite reflectance (R_0) and TTI with time and depth, the oil window in terms of time and depth, and the amount of hydrocarbon generated with time and depth.

  10. Augmented Method to Improve Thermal Data for the Figure Drift Thermal Distortion Predictions of the JWST OTIS Cryogenic Vacuum Test

    NASA Technical Reports Server (NTRS)

    Park, Sang C.; Carnahan, Timothy M.; Cohen, Lester M.; Congedo, Cherie B.; Eisenhower, Michael J.; Ousley, Wes; Weaver, Andrew; Yang, Kan

    2017-01-01

    The JWST Optical Telescope Element (OTE) assembly is the largest optically stable infrared-optimized telescope currently being manufactured and assembled, and is scheduled for launch in 2018. The JWST OTE, including the 18-segment primary mirror, the secondary mirror, and the Aft Optics Subsystem (AOS), is designed to be passively cooled and operate near 45 K. These optical elements are supported by a complex composite backplane structure. As a part of the structural distortion model validation efforts, a series of tests is planned during the cryogenic vacuum test of the fully integrated flight hardware at NASA JSC Chamber A. The success of the thermal-distortion phases is heavily dependent on accurate temperature knowledge of the OTE structural members. However, the current temperature sensor allocations during the cryo-vac test may not have sufficient fidelity to provide accurate knowledge of the temperature distributions within the composite structure. A method based on an inverse distance relationship among the sensors and thermal model nodes was developed to improve the thermal data provided for the nanometer-scale WaveFront Error (WFE) predictions. The Linear Distance Weighted Interpolation (LDWI) method was developed to augment the thermal model predictions based on the sparse sensor information. This paper encompasses the development of the LDWI method using the test data from the earlier pathfinder cryo-vac tests, and the results of the notional and as-tested WFE predictions from the structural finite element model cases to characterize the accuracies of this LDWI method.
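
    The abstract names an inverse-distance relationship between sensors and thermal-model nodes but does not give the exact weighting, so the sketch below is a generic inverse-distance-weighted interpolation of a node temperature from nearby sensor readings; it captures the flavor of LDWI without claiming to reproduce it. Sensor positions, readings, and the weighting exponent are hypothetical.

```python
import numpy as np

def idw_temperature(node_xyz, sensor_xyz, sensor_T, power=1.0, eps=1e-6):
    """Estimate the temperature at a structural-model node as an
    inverse-distance-weighted average of nearby sensor readings.
    power=1 gives a linear distance weighting; the flight method (LDWI)
    may differ in detail."""
    d = np.linalg.norm(sensor_xyz - node_xyz, axis=1)
    w = 1.0 / (d ** power + eps)
    return np.sum(w * sensor_T) / np.sum(w)

# Hypothetical sensor locations (m) and readings (K) on a backplane section
sensors = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [1.0, 1.0, 0.2]])
readings = np.array([44.8, 45.1, 45.3, 44.9])

node = np.array([0.4, 0.6, 0.1])      # uninstrumented FEM node
print(f"interpolated node temperature: {idw_temperature(node, sensors, readings):.2f} K")
```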

  11. Quantification of improvements in an operational global-scale ocean thermal analysis system. (Reannouncement with new availability information)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clancy, R.M.; Harding, J.M.; Pollak, K.D.

    1992-02-01

    Global-scale analyses of ocean thermal structure produced operationally at the U.S. Navy's Fleet Numerical Oceanography Center are verified, along with an ocean thermal climatology, against unassimilated bathythermograph (bathy), satellite multichannel sea surface temperature (MCSST), and ship sea surface temperature (SST) data. Verification statistics are calculated from the three types of data for February-April of 1988 and February-April of 1990 in nine verification areas covering most of the open ocean in the Northern Hemisphere. The analyzed thermal fields were produced by version 1.0 of the Optimum Thermal Interpolation System (OTIS 1.0) in 1988, but by an upgraded version of this model, referred to as OTIS 1.1, in 1990. OTIS 1.1 employs exactly the same analysis methodology as OTIS 1.0. The principal difference is that OTIS 1.1 has twice the spatial resolution of OTIS 1.0 and consequently uses smaller spatial decorrelation scales and noise-to-signal ratios. As a result, OTIS 1.1 is able to represent more horizontal detail in the ocean thermal fields than its predecessor. Verification statistics for the SST fields derived from bathy and MCSST data are consistent with each other, showing similar trends and error levels. These data indicate that the analyzed SST fields are more accurate in 1990 than in 1988, and generally more accurate than climatology for both years. Verification statistics for the SST fields derived from ship data are inconsistent with those derived from the bathy and MCSST data, and show much higher error levels indicative of observational noise.

  12. Estimated Viscosities and Thermal Conductivities of Gases at High Temperatures

    NASA Technical Reports Server (NTRS)

    Svehla, Roger A.

    1962-01-01

    Viscosities and thermal conductivities, suitable for heat-transfer calculations, were estimated for about 200 gases in the ground state from 100 to 5000 K and 1-atmosphere pressure. Free radicals were included, but excited states and ions were not. Calculations for the transport coefficients were based upon the Lennard-Jones (12-6) potential for all gases. This potential was selected because: (1) It is one of the most realistic models available and (2) intermolecular force constants can be estimated from physical properties or by other techniques when experimental data are not available; such methods for estimating force constants are not as readily available for other potentials. When experimental viscosity data were available, they were used to obtain the force constants; otherwise the constants were estimated. These constants were then used to calculate both the viscosities and thermal conductivities tabulated in this report. For thermal conductivities of polyatomic gases an Eucken-type correction was made to correct for exchange between internal and translational energies. Though this correction may be rather poor at low temperatures, it becomes more satisfactory with increasing temperature. It was not possible to obtain force constants from experimental thermal conductivity data except for the inert atoms, because most conductivity data are available at low temperatures only (200 to 400 K), the temperature range where the Eucken correction is probably most in error. However, if the same set of force constants is used for both viscosity and thermal conductivity, there is a large degree of cancellation of error when these properties are used in heat-transfer equations such as the Dittus-Boelter equation. It is therefore concluded that the properties tabulated in this report are suitable for heat-transfer calculations of gaseous systems.
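
    The report's estimation procedure reduces to standard Chapman-Enskog expressions for the Lennard-Jones (12-6) potential plus an Eucken-type correction; the sketch below shows those textbook formulas (with the Neufeld fit for the collision integral) rather than the report's own tabulated coefficients. The N2 Lennard-Jones parameters and heat capacity used in the example are commonly tabulated values, assumed here for illustration.

```python
import numpy as np

R = 8.314462  # gas constant, J/(mol K)

def omega_mu(t_star):
    """Collision integral for viscosity, Neufeld et al. (1972) fit for the
    Lennard-Jones (12-6) potential; t_star = k*T / epsilon."""
    return (1.16145 / t_star ** 0.14874
            + 0.52487 * np.exp(-0.77320 * t_star)
            + 2.16178 * np.exp(-2.43787 * t_star))

def viscosity(T, M, sigma, eps_over_k):
    """Chapman-Enskog viscosity in Pa*s.  M in g/mol, sigma in Angstrom,
    eps_over_k (well depth over Boltzmann constant) in K."""
    mu_poise = 2.6693e-5 * np.sqrt(M * T) / (sigma ** 2 * omega_mu(T / eps_over_k))
    return mu_poise * 0.1          # poise -> Pa*s

def eucken_conductivity(mu, M, cp_molar):
    """Eucken-corrected thermal conductivity, W/(m K); cp_molar in J/(mol K)."""
    return mu * (cp_molar + 1.25 * R) / (M * 1e-3)

# Example: N2 at 1000 K with commonly tabulated LJ parameters (assumed values)
M, sigma, eps_over_k = 28.013, 3.798, 71.4
mu = viscosity(1000.0, M, sigma, eps_over_k)
k = eucken_conductivity(mu, M, cp_molar=32.7)   # ~Cp of N2 near 1000 K
print(f"mu ~ {mu*1e6:.1f} uPa*s, k ~ {k:.4f} W/(m K)")
```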

  13. Thermal insulation and clothing area factors of typical Arabian Gulf clothing ensembles for males and females: measurements using thermal manikins.

    PubMed

    Al-ajmi, F F; Loveday, D L; Bedwell, K H; Havenith, G

    2008-05-01

    The thermal insulation of clothing is one of the most important parameters used in the thermal comfort model adopted by the International Standards Organisation (ISO) [BS EN ISO 7730, 2005. Ergonomics of the thermal environment. Analytical determination and interpretation of thermal comfort using calculation of the PMV and PPD indices and local thermal comfort criteria. International Standardisation Organisation, Geneva.] and by ASHRAE [ASHRAE Handbook, 2005. Fundamentals. Chapter 8. American Society of Heating Refrigeration and Air-conditioning Engineers, Inc., 1791 Tullie Circle N.E., Atlanta, GA.]. To date, thermal insulation values of mainly Western clothing have been published, with only minimal data available for non-Western clothing. Thus, the objective of the present study is to measure and present the thermal insulation (clo) values of a number of Arabian Gulf garments as worn by males and females. The clothing ensembles and garments of Arabian Gulf males and females presented in this study are representative of those typically worn in the region during both summer and winter seasons. Measurements of total thermal insulation values (clo) were obtained using male- and female-shaped thermal manikins in accordance with the definition of insulation as given in ISO 9920. In addition, the clothing area factors (f_cl) determined in two different ways were compared. The first method used a photographic technique and the second a regression equation as proposed in ISO 9920, based on the insulation values of Arabian Gulf male and female garments and ensembles as they were determined in this study. In addition, the fibre content, descriptions, and weights of Arabian Gulf clothing have been recorded and tabulated in this study. The findings of this study are presented as additions to the existing knowledge base of clothing insulation, and provide for the first time data for Arabian Gulf clothing. The analysis showed that for these non-Western clothing designs, the most widely used regression calculation of f_cl is not valid. However, despite the very large errors in f_cl made with the regression method, the errors this causes in the intrinsic clothing insulation value, I_cl, are limited.

  14. Thermal Imaging Performance of TIR Onboard the Hayabusa2 Spacecraft

    NASA Astrophysics Data System (ADS)

    Arai, Takehiko; Nakamura, Tomoki; Tanaka, Satoshi; Demura, Hirohide; Ogawa, Yoshiko; Sakatani, Naoya; Horikawa, Yamato; Senshu, Hiroki; Fukuhara, Tetsuya; Okada, Tatsuaki

    2017-07-01

    The thermal infrared imager (TIR) is a thermal infrared camera onboard the Hayabusa2 spacecraft. TIR will perform thermography of a C-type asteroid, 162173 Ryugu (1999 JU3), and estimate its surface physical properties, such as surface thermal emissivity ɛ, surface roughness, and thermal inertia Γ, through remote in-situ observations in 2018 and 2019. In prelaunch tests of TIR, detector calibrations and evaluations, along with imaging demonstrations, were performed. The present paper introduces the experimental results of a prelaunch test conducted using a large-aperture collimator in conjunction with TIR under atmospheric conditions. A blackbody source, controlled at constant temperature, was measured using TIR in order to construct a calibration curve for obtaining temperatures from observed digital data. As a known thermal emissivity target, a sandblasted black almite plate warmed from the back using a flexible heater was measured by TIR in order to evaluate the accuracy of the calibration curve. As an analog target of a C-type asteroid, carbonaceous chondrites (50 mm × 2 mm in thickness) were also warmed from the back and measured using TIR in order to clarify the imaging performance of TIR. The calibration curve, which was fitted by a specific model of the Planck function, allowed for conversion to the target temperature within an error of 1°C (3σ standard deviation) for the temperature range of 30 to 100°C. The observed temperature of the black almite plate was consistent with the temperature measured using K-type thermocouples, within the accuracy of temperature conversion using the calibration curve, when the temperature variation exhibited a random error of 0.3°C (1σ) for each pixel at a target temperature of 50°C. TIR can resolve the fine surface structure of meteorites, including cracks and pits, with the specified field of view of 0.051° (328 × 248 pixels). There were spatial distributions with a temperature variation of 3°C at the setting temperature of 50°C in the thermal images obtained by TIR. If the spatial distribution of the temperature is caused by the variation of the thermal emissivity, including the effects of the surface roughness, the difference of the thermal emissivity Δɛ is estimated to be approximately 0.08, as calculated by the Stefan-Boltzmann law. Otherwise, if the distribution of temperature is caused by the variation of the thermal inertia, the difference of the thermal inertia ΔΓ is calculated to be approximately 150 J m^-2 s^0.5 K^-1, based on a simulation using a 20-layer model of the heat balance equation. The imaging performance of TIR based on the results of the meteorite experiments indicates that TIR can resolve the spatial distribution of thermal emissivity and thermal inertia of the asteroid surface within accuracies of Δɛ ≅ 0.02 and ΔΓ ≅ 20 J m^-2 s^0.5 K^-1, respectively. However, the effects of the thermal emissivity and thermal inertia will be degenerate in the thermal images of TIR. Therefore, TIR will observe the same areas of the asteroid surface numerous times (>10 times, in order to ensure statistical significance), which allows us to determine both the surface thermal emissivity and the thermal inertia by least-squares fitting to a thermal model of Ryugu.

  15. Geomagnetic Secular Variation Prediction with Thermal Heterogeneous Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Kuang, W.; Tangborn, A.; Jiang, W.

    2011-12-01

    It has long been conjectured that thermal heterogeneity at the core-mantle boundary (CMB) affects the geodynamo substantially. The observed two pairs of steady and strong magnetic flux lobes near the polar regions and the low secular variation in the Pacific over the past 400 years (and perhaps longer) are likely consequences of this CMB thermal heterogeneity. There have been several studies of the impact of the thermal heterogeneity using numerical geodynamo simulations. However, direct correlation between the numerical results and the observations has proven very difficult, apart from qualitative comparisons of certain features in the radial component of the magnetic field at the CMB. This makes it difficult to assess accurately the impact of thermal heterogeneity on the geodynamo and the geomagnetic secular variation. We revisit this problem with our MoSST_DAS system, in which geomagnetic data are assimilated with our geodynamo model to predict geomagnetic secular variations. In this study, we implement a heterogeneous heat flux across the CMB that is chosen based on the seismic tomography of the lowermost mantle. The amplitude of the heat flux (relative to the mean heat flux across the CMB) varies in the simulation. With these assimilation studies, we examine the influences of the heterogeneity on the forecast accuracies, e.g., the accuracies as functions of the heterogeneity amplitude. With these, we should be able to assess the model errors relative to the true core state, and thus the thermal heterogeneity in geodynamo modeling.

  16. Parameter Estimation of the Thermal Network Model of a Machine Tool Spindle by Self-made Bluetooth Temperature Sensor Module

    PubMed Central

    Lo, Yuan-Chieh; Hu, Yuh-Chung; Chang, Pei-Zen

    2018-01-01

    Thermal characteristic analysis is essential for machine tool spindles because sudden failures may occur due to unexpected thermal issues. This article presents a lumped-parameter Thermal Network Model (TNM) and its parameter estimation scheme, including hardware and software, in order to characterize both the steady-state and transient thermal behavior of machine tool spindles. For the hardware, the authors develop a Bluetooth Temperature Sensor Module (BTSM) accompanied by three types of temperature-sensing probes (magnetic, screw, and probe). Its specifications, verified through experimental tests, achieve a precision of ±(0.1 + 0.0029|t|) °C, a resolution of 0.00489 °C, a power consumption of 7 mW, and a size of Ø40 mm × 27 mm. For the software, the heat transfer characteristics of the machine tool spindle as functions of rotating speed are derived based on heat transfer theory and empirical formulas. The predictive TNM of the spindle was developed by grey-box estimation and experimental results. Even under such complicated operating conditions as various speeds and different initial conditions, the experiments validate that the present modeling methodology provides a robust and reliable tool for temperature prediction, with a normalized mean square error of 99.5% agreement, and the present approach is transferable to other spindles with a similar structure. For realizing edge computing in smart manufacturing, a reduced-order TNM is constructed by a Model Order Reduction (MOR) technique and implemented in a real-time embedded system. PMID:29473877

  17. Parameter Estimation of the Thermal Network Model of a Machine Tool Spindle by Self-made Bluetooth Temperature Sensor Module.

    PubMed

    Lo, Yuan-Chieh; Hu, Yuh-Chung; Chang, Pei-Zen

    2018-02-23

    Thermal characteristic analysis is essential for machine tool spindles because sudden failures may occur due to unexpected thermal issues. This article presents a lumped-parameter Thermal Network Model (TNM) and its parameter estimation scheme, including hardware and software, in order to characterize both the steady-state and transient thermal behavior of machine tool spindles. For the hardware, the authors develop a Bluetooth Temperature Sensor Module (BTSM) accompanied by three types of temperature-sensing probes (magnetic, screw, and probe). Its specifications, verified through experimental tests, achieve a precision of ±(0.1 + 0.0029|t|) °C, a resolution of 0.00489 °C, a power consumption of 7 mW, and a size of Ø40 mm × 27 mm. For the software, the heat transfer characteristics of the machine tool spindle as functions of rotating speed are derived based on heat transfer theory and empirical formulas. The predictive TNM of the spindle was developed by grey-box estimation and experimental results. Even under such complicated operating conditions as various speeds and different initial conditions, the experiments validate that the present modeling methodology provides a robust and reliable tool for temperature prediction, with a normalized mean square error of 99.5% agreement, and the present approach is transferable to other spindles with a similar structure. For realizing edge computing in smart manufacturing, a reduced-order TNM is constructed by a Model Order Reduction (MOR) technique and implemented in a real-time embedded system.
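
    The abstract describes a lumped-parameter thermal network identified by grey-box estimation but does not give the network topology or parameters, so the sketch below is only a minimal two-node illustration of the modeling idea: a heat-source node and a housing node, each with a capacitance, exchanging heat with each other and with ambient. All capacitances, conductances, and the speed-dependent heat input are assumed placeholder values.

```python
import numpy as np

# Minimal two-node lumped thermal network for a spindle:
# node 0 = bearing/heat-source region, node 1 = housing.
# C dT/dt = (heat input) + (exchange with neighbor) + (loss to ambient).
C = np.array([800.0, 2500.0])        # heat capacities, J/K (assumed)
G12 = 6.0                            # conductance between nodes, W/K (assumed)
G_amb = np.array([1.5, 8.0])         # conductances to ambient, W/K (assumed)
T_amb = 25.0                         # ambient temperature, C

def heat_input(rpm):
    """Speed-dependent bearing heat generation (empirical placeholder)."""
    return np.array([1.5e-4 * rpm ** 1.2, 0.0])   # W

def simulate(rpm, t_end=3600.0, dt=1.0):
    T = np.array([T_amb, T_amb])
    Q = heat_input(rpm)
    for _ in range(int(t_end / dt)):
        flow = np.array([G12 * (T[1] - T[0]), G12 * (T[0] - T[1])])
        dTdt = (Q + flow + G_amb * (T_amb - T)) / C
        T = T + dt * dTdt
    return T

for rpm in (3000, 6000, 9000):
    T1, T2 = simulate(rpm)
    print(f"{rpm} rpm: bearing node {T1:.1f} C, housing node {T2:.1f} C after 1 h")
```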

  18. Calibration Matters: Advances in Strapdown Airborne Gravimetry

    NASA Astrophysics Data System (ADS)

    Becker, D.

    2015-12-01

    Using a commercial navigation-grade strapdown inertial measurement unit (IMU) for airborne gravimetry can be advantageous in terms of cost, handling, and space consumption compared to the classical stable-platform spring gravimeters. Up to now, however, large sensor errors have made it impossible to reach the mGal level using IMUs of this type, as they are not designed or optimized for this kind of application. Apart from proper error modeling in the filtering process, specific calibration methods that are tailored to the application of aerogravity may help to bridge this gap and to improve their performance. Based on simulations, a quantitative analysis is presented of how much IMU sensor errors, such as biases, scale factors, cross-couplings, and thermal drifts, distort the determination of gravity and the deflection of the vertical (DOV). Several lab and in-field calibration methods are briefly discussed, and calibration results are shown for an iMAR RQH unit. In particular, a thermal lab calibration of its QA2000 accelerometers greatly improved the long-term drift behavior. The latest results from four recent airborne gravimetry campaigns confirm the effectiveness of the calibrations applied, with cross-over accuracies reaching 1.0 mGal (0.6 mGal after cross-over adjustment) and DOV accuracies reaching 1.1 arc seconds after cross-over adjustment.

  19. A New Global Core Plasma Model of the Plasmasphere

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L.; Comfort, R. H.; Craven, P. D.

    2014-01-01

    The Global Core Plasma Model (GCPM) is the first empirical model for thermal inner magnetospheric plasma designed to integrate previous models and observations into a representation of typical total densities that is continuous in value and gradient. New information about the plasmasphere, in particular, makes significant improvement possible. The IMAGE Mission Radio Plasma Imager (RPI) has obtained the first observations of total plasma densities along magnetic field lines in the plasmasphere and polar cap. The Dynamics Explorer 1 Retarding Ion Mass Spectrometer (RIMS) has provided densities and temperatures in the plasmasphere for 5 ion species. These and other works enable a new, more detailed empirical model of thermal plasma in the inner magnetosphere that will be presented. Specifically shown here are the inner-plasmasphere RIMS measurements, radial fits to densities and temperatures for H(+), He(+), He(++), O(+), and O(+), and the error associated with these initial simple fits. Also shown are more subtle dependencies on the f10.7 P-value (see Richards et al. [1994]).

  20. Modeling the field of a passive scalar in a nonisothermal turbulent plane gas jet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abrashin, V.N.; Barykin, V.N.; Martynenko, O.G.

    The problem of the distribution of thermal characteristics in a plane nonisothermal turbulent gas jet in the case of large Reynolds numbers and a small temperature difference, allowing heat to be regarded as a passive impurity, is solved in the range of jet cross sections of 20-100 calibers by a second-order correlational model of turbulence and an effective numerical algorithm. Analysis of the results shows that the model allows computational data in good agreement with experiment to be obtained in the range of jet cross sections of 20-100 diameters. The relative error in determining the maximum values of the functions is 3-10% for the dynamic characteristics, while the mean temperature and its mean square pulsations are determined with an accuracy of 5-10%; the corresponding figures for the thermal characteristics are 5-15% and 5-10%.

  1. Quantitative analysis of the radiation error for aerial coiled-fiber-optic distributed temperature sensing deployments using reinforcing fabric as support structure

    NASA Astrophysics Data System (ADS)

    Sigmund, Armin; Pfister, Lena; Sayde, Chadi; Thomas, Christoph K.

    2017-06-01

    In recent years, the spatial resolution of fiber-optic distributed temperature sensing (DTS) has been enhanced in various studies by helically coiling the fiber around a support structure. While solid polyvinyl chloride tubes are an appropriate support structure under water, they can produce considerable errors in aerial deployments due to the radiative heating or cooling. We used meshed reinforcing fabric as a novel support structure to measure high-resolution vertical temperature profiles with a height of several meters above a meadow and within and above a small lake. This study aimed at quantifying the radiation error for the coiled DTS system and the contribution caused by the novel support structure via heat conduction. A quantitative and comprehensive energy balance model is proposed and tested, which includes the shortwave radiative, longwave radiative, convective, and conductive heat transfers and allows for modeling fiber temperatures as well as quantifying the radiation error. The sensitivity of the energy balance model to the conduction error caused by the reinforcing fabric is discussed in terms of its albedo, emissivity, and thermal conductivity. Modeled radiation errors amounted to -1.0 and 1.3 K at 2 m height but ranged up to 2.8 K for very high incoming shortwave radiation (1000 J s-1 m-2) and very weak winds (0.1 m s-1). After correcting for the radiation error by means of the presented energy balance, the root mean square error between DTS and reference air temperatures from an aspirated resistance thermometer or an ultrasonic anemometer was 0.42 and 0.26 K above the meadow and the lake, respectively. Conduction between reinforcing fabric and fiber cable had a small effect on fiber temperatures (< 0.18 K). Only for locations where the plastic rings that supported the reinforcing fabric touched the fiber-optic cable were significant temperature artifacts of up to 2.5 K observed. Overall, the reinforcing fabric offers several advantages over conventional support structures published to date in the literature as it minimizes both radiation and conduction errors.
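
    The paper's full energy balance is not reproduced here; the sketch below is a much-simplified steady-state balance (absorbed shortwave and longwave versus emitted longwave and convection) solved for the fiber temperature by Newton iteration, just to show how a radiation-error estimate of this kind can be computed. The absorptivity, emissivity, cable diameter, convective correlation, and sky longwave assumption are all illustrative, and cylinder geometry factors are omitted, so the magnitudes will differ from the paper's values.

```python
import numpy as np

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m-2 K-4

def fiber_temperature(T_air, S_down, L_down, wind, d=1.6e-3,
                      alpha_sw=0.3, eps_lw=0.9):
    """Solve a steady-state energy balance for a thin fiber-optic cable:
    absorbed shortwave + absorbed longwave = emitted longwave + convection.
    The convective coefficient uses a crude forced-convection scaling for a
    thin cylinder; all coefficients are assumptions for illustration."""
    k_air = 0.026                                   # W m-1 K-1
    nu_air = 1.5e-5                                 # m2 s-1
    Re = max(wind, 0.05) * d / nu_air
    h = k_air / d * 0.6 * Re ** 0.5                 # rough Nu ~ 0.6 Re^0.5

    T_f = T_air
    for _ in range(50):                             # Newton iteration
        f = (alpha_sw * S_down + eps_lw * L_down
             - eps_lw * SIGMA * T_f ** 4 - h * (T_f - T_air))
        dfdT = -4.0 * eps_lw * SIGMA * T_f ** 3 - h
        T_f -= f / dfdT
    return T_f

T_air = 293.15
L_down = 0.75 * SIGMA * T_air ** 4                  # clear-sky-ish longwave
for S, wind in [(1000.0, 0.1), (1000.0, 3.0), (0.0, 0.1)]:
    err = fiber_temperature(T_air, S, L_down, wind) - T_air
    print(f"S={S:6.0f} W/m2, wind={wind:3.1f} m/s -> radiation error {err:+.2f} K")
```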

  2. In-Situ monitoring and modeling of metal additive manufacturing powder bed fusion

    NASA Astrophysics Data System (ADS)

    Alldredge, Jacob; Slotwinski, John; Storck, Steven; Kim, Sam; Goldberg, Arnold; Montalbano, Timothy

    2018-04-01

    One of the major challenges in metal additive manufacturing is developing in-situ sensing and feedback control capabilities to eliminate build errors and allow qualified part creation without the need for costly and destructive external testing. Previously, many groups have focused on high fidelity numerical modeling and true temperature thermal imaging systems. These approaches require large computational resources or costly hardware that requires complex calibration and are difficult to integrate into commercial systems. In addition, due to the rapid change in the state of the material as well as its surface properties, getting true temperature is complicated and difficult. Here, we describe a different approach where we implement a low cost thermal imaging solution allowing for relative temperature measurements sufficient for detecting unwanted process variability. We match this with a faster than real time qualitative model that allows the process to be rapidly modeled during the build. The hope is to combine these two, allowing for the detection of anomalies in real time, enabling corrective action to potentially be taken, or parts to be stopped immediately after the error, saving material and time. Here we describe our sensor setup, its costs and abilities. We also show the ability to detect in real time unwanted process deviations. We also show that the output of our high speed model agrees qualitatively with experimental results. These results lay the groundwork for our vision of an integrated feedback and control scheme that combines low cost, easy to use sensors and fast modeling for process deviation monitoring.

  3. Analytical estimation of ultrasound properties, thermal diffusivity, and perfusion using magnetic resonance-guided focused ultrasound temperature data

    PubMed Central

    Dillon, C R; Borasi, G; Payne, A

    2016-01-01

    For thermal modeling to play a significant role in treatment planning, monitoring, and control of magnetic resonance-guided focused ultrasound (MRgFUS) thermal therapies, accurate knowledge of ultrasound and thermal properties is essential. This study develops a new analytical solution for the temperature change observed in MRgFUS which can be used with experimental MR temperature data to provide estimates of the ultrasound initial heating rate, Gaussian beam variance, tissue thermal diffusivity, and Pennes perfusion parameter. Simulations demonstrate that this technique provides accurate and robust property estimates that are independent of the beam size, thermal diffusivity, and perfusion levels in the presence of realistic MR noise. The technique is also demonstrated in vivo using MRgFUS heating data in rabbit back muscle. Errors in property estimates are kept less than 5% by applying a third order Taylor series approximation of the perfusion term and ensuring the ratio of the fitting time (the duration of experimental data utilized for optimization) to the perfusion time constant remains less than one. PMID:26741344

  4. Advanced Mirror Technology Development (AMTD) Thermal Trade Studies

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas

    2015-01-01

    Advanced Mirror Technology Development (AMTD) is being done at Marshall Space Flight Center (MSFC) in preparation for the next large aperture UVOIR space observatory. A key science mission of that observatory is the detection and characterization of 'Earth-like' exoplanets. Direct exoplanet observation requires a telescope to see a planet which will be 10^-10 times dimmer than its host star. To accomplish this using an internal coronagraph requires a telescope with an ultra-stable wavefront error (WFE). This paper investigates parametric relationships between primary mirror physical parameters and thermal WFE stability. Candidate mirrors are designed as a mesh and placed into a thermal analysis model to determine the temperature distribution in the mirror when it is placed inside of an actively controlled cylindrical shroud at Lagrange point 2. Thermal strains resulting from the temperature distribution are found and an estimation of WFE is found to characterize the effect that thermal inputs have on the optical quality of the mirror. This process is repeated for several mirror material properties, material types, and mirror designs to determine how to design a mirror for thermal stability.

  5. A model of the ground surface temperature for micrometeorological analysis

    NASA Astrophysics Data System (ADS)

    Leaf, Julian S.; Erell, Evyatar

    2017-07-01

    Micrometeorological models at various scales require ground surface temperature, which may not always be measured in sufficient spatial or temporal detail. There is thus a need for a model that can calculate the surface temperature using only widely available weather data, thermal properties of the ground, and surface properties. The vegetated/permeable surface energy balance (VP-SEB) model introduced here requires no a priori knowledge of soil temperature or moisture at any depth. It combines a two-layer characterization of the soil column following the heat conservation law with a sinusoidal function to estimate deep soil temperature, and a simplified procedure for calculating moisture content. A physically based solution is used for each of the energy balance components allowing VP-SEB to be highly portable. VP-SEB was tested using field data measuring bare loess desert soil in dry weather and following rain events. Modeled hourly surface temperature correlated well with the measured data (r 2 = 0.95 for a whole year), with a root-mean-square error of 2.77 K. The model was used to generate input for a pedestrian thermal comfort study using the Index of Thermal Stress (ITS). The simulation shows that the thermal stress on a pedestrian standing in the sun on a fully paved surface, which may be over 500 W on a warm summer day, may be as much as 100 W lower on a grass surface exposed to the same meteorological conditions.
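    The deep-soil boundary term mentioned above is commonly represented as an annual sinusoid damped and lagged with depth; the sketch below shows that standard heat-conduction form with illustrative parameter values (mean temperature, amplitude, phase and damping depth are assumptions, not the VP-SEB calibration).

```python
import numpy as np

def deep_soil_temperature(day_of_year, depth_m=1.0, t_mean=291.0,
                          amplitude=8.0, phase_day=200, damping_depth_m=2.0):
    """Soil temperature (K) at depth for a sinusoidal annual surface
    forcing: the amplitude decays as exp(-z/d) and the phase lags by z/d
    radians, the classical solution of the 1-D heat equation."""
    omega = 2.0 * np.pi / 365.0
    damping = np.exp(-depth_m / damping_depth_m)
    lag = depth_m / damping_depth_m
    return t_mean + amplitude * damping * np.cos(
        omega * (day_of_year - phase_day) - lag)

print(deep_soil_temperature(30))    # mid-winter value
print(deep_soil_temperature(210))   # mid-summer value
```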

  6. Interferometer for Measuring Displacement to Within 20 pm

    NASA Technical Reports Server (NTRS)

    Zhao, Feng

    2003-01-01

    An optical heterodyne interferometer that can be used to measure linear displacements with an error <=20 pm has been developed. The remarkable accuracy of this interferometer is achieved through a design that includes (1) a wavefront split that reduces (relative to amplitude splits used in other interferometers) self-interference and (2) a common-optical-path configuration that affords common-mode cancellation of the interference effects of thermal-expansion changes in optical-path lengths. The most popular method of displacement-measuring interferometry involves two beams, the polarizations of which are meant to be kept orthogonal upstream of the final interference location, where the difference between the phases of the two beams is measured. Polarization leakages (deviations from the desired perfect orthogonality) contaminate the phase measurement with periodic nonlinear errors. In commercial interferometers, these phase-measurement errors result in displacement errors in the approximate range of 1 to 10 nm. Moreover, because prior interferometers lack compensation for thermal-expansion changes in optical-path lengths, they are subject to additional displacement errors characterized by a temperature sensitivity of about 100 nm/K. Because the present interferometer does not utilize polarization in the separation and combination of the two interfering beams and because of the common-mode cancellation of thermal-expansion effects, the periodic nonlinear errors and the sensitivity to temperature changes are much smaller than in other interferometers.

  7. Solar Field Optical Characterization at Stillwater Geothermal/Solar Hybrid Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Guangdong; Turchi, Craig

    Concentrating solar power (CSP) can provide additional thermal energy to boost geothermal plant power generation. For a newly constructed solar field at a geothermal power plant site, it is critical to properly characterize its performance so that the prediction of thermal power generation can be derived to develop an optimum operating strategy for a hybrid system. In the past, laboratory characterization of a solar collector has often extended into the solar field performance model and has been used to predict the actual solar field performance, disregarding realistic impacting factors. In this work, an extensive measurement on mirror slope error and receiver position error has been performed in the field by using the optical characterization tool called Distant Observer (DO). Combining a solar reflectance sampling procedure, a newly developed solar characterization program called FirstOPTIC and public software for annual performance modeling called System Advisor Model (SAM), a comprehensive solar field optical characterization has been conducted, thus allowing for an informed prediction of solar field annual performance. The paper illustrates this detailed solar field optical characterization procedure and demonstrates how the results help to quantify an appropriate tracking-correction strategy to improve solar field performance. In particular, it is found that an appropriate tracking-offset algorithm can improve the solar field performance by about 15%. The work here provides a valuable reference for the growing CSP industry.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajagopal, K. R.; Rao, I. J.

    The procedures in place for producing materials in order to optimize their performance with respect to creep characteristics, oxidation resistance, elevation of melting point, thermal and electrical conductivity, and other thermal and electrical properties are essentially trial-and-error experimentation that tends to be tremendously time consuming and expensive. A computational approach has been developed that can replace these trial-and-error procedures so that one can efficiently design and engineer materials based on the application in question, leading to enhanced performance of the material, a significant decrease in costs, and a reduction in the time necessary to produce such materials. The work has relevance to the design and manufacture of turbine blades operating at high operating temperatures, the development of armor and missile heads, corrosion-resistant tanks and containers, better conductors of electricity, and the numerous other applications that are envisaged for specially structured nanocrystalline solids. A robust thermodynamic framework is developed within which the computational approach is developed. The procedure takes into account microstructural features such as the dislocation density, lattice mismatch, stacking faults, volume fractions of inclusions, interfacial area, etc. A robust model for single crystal superalloys that takes into account the microstructure of the alloy within the context of a continuum model is developed. Having developed the model, we then implement it in a computational scheme using the software ABAQUS/STANDARD. The results of the simulation are compared against experimental data in realistic geometries.

  9. Solar Field Optical Characterization at Stillwater Geothermal/Solar Hybrid Plant

    DOE PAGES

    Zhu, Guangdong; Turchi, Craig

    2017-01-27

    Concentrating solar power (CSP) can provide additional thermal energy to boost geothermal plant power generation. For a newly constructed solar field at a geothermal power plant site, it is critical to properly characterize its performance so that the prediction of thermal power generation can be derived to develop an optimum operating strategy for a hybrid system. In the past, laboratory characterization of a solar collector has often extended into the solar field performance model and has been used to predict the actual solar field performance, disregarding realistic impacting factors. In this work, an extensive measurement on mirror slope error and receiver position error has been performed in the field by using the optical characterization tool called Distant Observer (DO). Combining a solar reflectance sampling procedure, a newly developed solar characterization program called FirstOPTIC and public software for annual performance modeling called System Advisor Model (SAM), a comprehensive solar field optical characterization has been conducted, thus allowing for an informed prediction of solar field annual performance. The paper illustrates this detailed solar field optical characterization procedure and demonstrates how the results help to quantify an appropriate tracking-correction strategy to improve solar field performance. In particular, it is found that an appropriate tracking-offset algorithm can improve the solar field performance by about 15%. The work here provides a valuable reference for the growing CSP industry.

  10. Radiative transfer model for aerosols at infrared wavelengths for passive remote sensing applications: revisited.

    PubMed

    Ben-David, Avishai; Davidson, Charles E; Embury, Janon F

    2008-11-01

    We introduced a two-dimensional radiative transfer model for aerosols in the thermal infrared [Appl. Opt. 45, 6860-6875 (2006), doi:10.1364/AO.45.006860]. In that paper we superimposed two orthogonal plane-parallel layers to compute the radiance due to a two-dimensional (2D) rectangular aerosol cloud. In this paper we revisit the model and correct an error in the interaction of the two layers. We derive new expressions relating to the signal content of the radiance from an aerosol cloud based on the concept of five directional thermal contrasts: four for the 2D diffuse radiance and one for direct radiance along the line of sight. The new expressions give additional insight on the radiative transfer processes within the cloud. Simulations for Bacillus subtilis var. niger (BG) bioaerosol and dustlike kaolin aerosol clouds are compared and contrasted for two geometries: an airborne sensor looking down and a ground-based sensor looking up. Simulation results suggest that aerosol cloud detection from an airborne platform may be more challenging than for a ground-based sensor and that the detection of an aerosol cloud in emission mode (negative direct thermal contrast) is not the same as the detection of an aerosol cloud in absorption mode (positive direct thermal contrast).

  11. Development of a Response Surface Thermal Model for Orion Mated to the International Space Station

    NASA Technical Reports Server (NTRS)

    Miller, Stephen W.; Meier, Eric J.

    2010-01-01

    A study was performed to determine if a Design of Experiments (DOE)/Response Surface Methodology could be applied to on-orbit thermal analysis and produce a set of Response Surface Equations (RSE) that accurately predict vehicle temperatures. The study used an integrated thermal model of the International Space Station and the Orion outer mold line model. Five separate factors were identified for study: yaw, pitch, roll, beta angle, and the environmental parameters. Twenty external Orion temperatures were selected as the responses. A DOE case matrix of 110 runs was developed. The data from these cases were analyzed to produce an RSE for each of the temperature responses. The initial agreement between the engineering data and the RSE predictions was encouraging, although many RSEs had large uncertainties on their predictions. Fourteen verification cases were developed to test the predictive powers of the RSEs. The verification showed mixed results, with some RSEs predicting temperatures matching the engineering data within the uncertainty bands, while others had very large errors. While this study does not irrefutably prove that the DOE/RSM approach can be applied to on-orbit thermal analysis, it does demonstrate that the technique has the potential to predict temperatures. Additional work is needed to better identify the cases needed to produce the RSEs.

  12. System Measures Thermal Noise In A Microphone

    NASA Technical Reports Server (NTRS)

    Zuckerwar, Allan J.; Ngo, Kim Chi T.

    1994-01-01

    A vacuum provides acoustic isolation from the environment. The system for measuring the thermal noise of a microphone and its preamplifier eliminates some sources of error found in older systems. It includes an isolation vessel and exterior suspension that, acting together, enable measurement of thermal noise under realistic conditions while providing superior vibrational and acoustical isolation. The system yields more accurate measurements of thermal noise.

  13. Improved accuracy of ultrasound-guided therapies using electromagnetic tracking: in-vivo speed of sound measurements

    NASA Astrophysics Data System (ADS)

    Samboju, Vishal; Adams, Matthew; Salgaonkar, Vasant; Diederich, Chris J.; Cunha, J. Adam M.

    2017-02-01

    The speed of sound (SOS) for ultrasound devices used for imaging soft tissue is often calibrated to water, 1540 m/s [1], despite in-vivo soft tissue SOS varying from 1450 to 1613 m/s [2]. Images acquired with 1540 m/s and used in conjunction with stereotactic external coordinate systems can thus result in displacement errors of several millimeters. Ultrasound imaging systems are routinely used to guide interventional thermal ablation and cryoablation devices, or radiation sources for brachytherapy [3]. Brachytherapy uses small radioactive pellets, inserted interstitially with needles under ultrasound guidance, to eradicate cancerous tissue [4]. Since the radiation dose diminishes with distance from the pellet as 1/r^2, imaging uncertainty of a few millimeters can result in significant erroneous dose delivery [5,6]. Likewise, modeling of power deposition and thermal dose accumulations from ablative sources is also prone to errors due to placement offsets from SOS errors [7]. This work presents a method of mitigating needle placement error due to SOS variances without the need of ionizing radiation [2,8]. We demonstrate the effects of changes in dosimetry in a prostate brachytherapy environment due to patient-specific SOS variances and the ability to mitigate dose delivery uncertainty. Electromagnetic (EM) sensors embedded in the brachytherapy ultrasound system provide information regarding 3D position and orientation of the ultrasound array. Algorithms using data from these two modalities are used to correct B-mode images to account for SOS errors. While ultrasound localization resulted in >3 mm displacements, EM resolution was verified to <1 mm precision using custom-built phantoms with various SOS, showing 1% accuracy in SOS measurement.
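    The displacement error quoted above follows from the pulse-echo range equation: if the scanner forms the image at an assumed 1540 m/s but the tissue's true speed of sound differs, axial distances scale by the ratio of the two speeds. The sketch below shows that simple rescaling; the example speeds and depth are illustrative, not the paper's measured values.

```python
def sos_depth_correction(apparent_depth_mm, true_sos=1600.0, assumed_sos=1540.0):
    """Rescale an apparent (image) axial depth to the true depth assuming a
    simple pulse-echo model, depth = sos * time_of_flight / 2, so
    true_depth = apparent_depth * true_sos / assumed_sos."""
    true_depth = apparent_depth_mm * true_sos / assumed_sos
    return true_depth, true_depth - apparent_depth_mm

depth, error = sos_depth_correction(80.0)  # an 80 mm apparent depth
print(f"corrected depth {depth:.1f} mm, displacement error {error:.1f} mm")
```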

  14. 3D Surface Generation from Aerial Thermal Imagery

    NASA Astrophysics Data System (ADS)

    Khodaei, B.; Samadzadegan, F.; Dadras Javan, F.; Hasani, H.

    2015-12-01

    Aerial thermal imagery has recently been applied to quantitative analysis of several scenes. For mapping purposes based on aerial thermal imagery, a high-accuracy photogrammetric process is necessary. However, due to the low geometric resolution and low contrast of thermal imaging sensors, there are some challenges in precise 3D measurement of objects. In this paper the potential of thermal video in 3D surface generation is evaluated. In the pre-processing step, the thermal camera is geometrically calibrated using a calibration grid based on emissivity differences between the background and the targets. Then, Digital Surface Model (DSM) generation from thermal video imagery is performed in four steps. Initially, frames are extracted from the video, then tie points are generated by the Scale-Invariant Feature Transform (SIFT) algorithm. Bundle adjustment is then applied and the camera position and orientation parameters are determined. Finally, a multi-resolution dense image matching algorithm is used to create a 3D point cloud of the scene. The potential of the proposed method is evaluated based on thermal imagery covering an industrial area. The thermal camera has a 640×480 Uncooled Focal Plane Array (UFPA) sensor, equipped with a 25 mm lens, and is mounted on an Unmanned Aerial Vehicle (UAV). The obtained results show comparable accuracy of the 3D model generated from thermal images with respect to the DSM generated from visible images; however, the thermal-based DSM is somewhat smoother with a lower level of texture. Comparing the generated DSM with the 9 measured GCPs in the area shows that the Root Mean Square Error (RMSE) value is smaller than 5 decimetres in both the X and Y directions and 1.6 meters for the Z direction.

  15. Life Prediction Model for Grid-Connected Li-ion Battery Energy Storage System: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kandler A; Saxon, Aron R; Keyser, Matthew A

    Lithium-ion (Li-ion) batteries are being deployed on the electrical grid for a variety of purposes, such as to smooth fluctuations in solar renewable power generation. The lifetime of these batteries will vary depending on their thermal environment and how they are charged and discharged. Optimal utilization of a battery over its lifetime requires characterization of its performance degradation under different storage and cycling conditions. Aging tests were conducted on commercial graphite/nickel-manganese-cobalt (NMC) Li-ion cells. A general lifetime prognostic model framework is applied to model changes in capacity and resistance as the battery degrades. Across 9 aging test conditions from 0 °C to 55 °C, the model predicts capacity fade with 1.4 percent RMS error and resistance growth with 15 percent RMS error. The model, recast in state variable form with 8 states representing separate fade mechanisms, is used to extrapolate lifetime for example applications of the energy storage system integrated with renewable photovoltaic (PV) power generation.
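    The NREL model itself is not reproduced here, but a generic two-term fade sketch (square-root-of-time calendar fade with Arrhenius temperature acceleration plus linear cycling fade) illustrates the kind of state extrapolation such a prognostic framework performs; every parameter value below is an illustrative assumption, not a fitted coefficient from the paper.

```python
import numpy as np

def capacity_fade(days, cycles_per_day, temp_c,
                  a_ref=2.5e-3, ea_over_r=4000.0, t_ref_k=298.15,
                  b_per_cycle=1.0e-5):
    """Relative remaining capacity after storage/cycling, using a generic
    calendar + cycling fade model with assumed parameters."""
    temp_k = temp_c + 273.15
    a = a_ref * np.exp(-ea_over_r * (1.0 / temp_k - 1.0 / t_ref_k))
    calendar_loss = a * np.sqrt(days)          # diffusion-like SEI growth
    cycling_loss = b_per_cycle * cycles_per_day * days
    return 1.0 - calendar_loss - cycling_loss

# Extrapolate ten years of one cycle per day at two temperatures
for temp in (25.0, 45.0):
    print(temp, round(capacity_fade(days=3650, cycles_per_day=1.0,
                                    temp_c=temp), 3))
```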

  16. Development of steady-state model for MSPT and detailed analyses of receiver

    NASA Astrophysics Data System (ADS)

    Yuasa, Minoru; Sonoda, Masanori; Hino, Koichi

    2016-05-01

    The molten salt parabolic trough system (MSPT) uses molten salt as the heat transfer fluid (HTF) instead of synthetic oil. A demonstration plant of the MSPT was constructed by Chiyoda Corporation and Archimede Solar Energy in Italy in 2013. Chiyoda Corporation developed a steady-state model for predicting the theoretical behavior of the demonstration plant. The model was designed to calculate the concentrated solar power and heat loss using ray tracing of incident solar light and finite element modeling of thermal energy transferred into the medium. This report describes verification of the model using test data from the demonstration plant, together with detailed analyses of the relation between flow rate and the temperature difference on the metal tube of the receiver and of the effect of defocus angle on the concentrated power rate, for solar collector assembly (SCA) development. The model is accurate to within 2.0% systematic error and 4.2% random error. The relationships between flow rate and the temperature difference on the metal tube, and the effect of defocus angle on the concentrated power rate, are shown.

  17. Thermal analysis in the rat glioma model during directly multipoint injection hyperthermia incorporating magnetic nanoparticles.

    PubMed

    Liu, Lianke; Ni, Fang; Zhang, Jianchao; Wang, Chunyu; Lu, Xiang; Guo, Zhirui; Yao, Shaowei; Shu, Yongqian; Xu, Ruizhi

    2011-12-01

    Hyperthermia incorporating magnetic nanoparticles (MNPs) is a promising cancer therapy that is currently entering clinical trials. However, the clinical planning of MNP deposition in tumors, especially for direct multipoint injection hyperthermia (DMIH), and the resulting temperature rise in tumors have been little studied. In this paper, we mainly discuss the thermal distributions induced by MNPs in rat brain tumors during DMIH. Because experimental measurement of the thermal dose in tumors is limited, and in order to obtain the optimized temperature distributions needed clinically, we designed a thermal model in which three types of MNP injection for hyperthermia treatment were simulated. The simulated results showed that the MNP injection plan, as well as the overall dose of MNPs injected, played an important role in determining the thermal distribution. We found that as the number of injection points increased, the temperature differences across the whole tumor volume decreased. Moreover, from temperature data recorded by Fiber Optic Temperature Sensors (FOTSs) in glioma-bearing rats during MNP hyperthermia, we found that the temperature errors measured by the FOTSs decreased as the number of injection points increased. Finally, the results showed that the simulations are useful and that optimized planning of the number and spatial positions of MNP injection points is essential during direct injection hyperthermia.

  18. Evaluation of high temperature superconductive thermal bridges for space borne cryogenic detectors

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.

    1996-01-01

    Infrared sensor satellites are used to monitor the conditions in the earth's upper atmosphere. In these systems, the electronic links connecting the cryogenically cooled infrared detectors to the significantly warmer amplification electronics act as thermal bridges and, consequently, the mission lifetimes of the satellites are limited due to cryogenic evaporation. High-temperature superconductor (HTS) materials have been proposed by researchers at the National Aeronautics and Space Administration's Langley Research Center (NASA-LaRC) as an alternative to the currently used manganin wires for electrical connection. The potential for using HTS films as thermal bridges has provided the motivation for the design and the analysis of a spaceflight experiment to evaluate the performance of this superconductive technology in the space environment. The initial efforts were focused on the preliminary design of the experimental system, which allows for the quantitative comparison of superconductive leads with manganin leads, and on the thermal conduction modeling of the proposed system. Most of the HTS materials were indicated to be potential replacements for the manganin wires. In the continuation of this multi-year research, the objectives of this study were to evaluate the sources of heat transfer on the thermal bridges that had been neglected in the preliminary conductive model and then to develop a methodology for the estimation of the thermal conductivities of the HTS thermal bridges in space. The Joule heating created by the electrical current through the manganin wires was incorporated as a volumetric heat source into the manganin conductive model. The radiative heat source on the HTS thermal bridges was determined by performing a separate radiant interchange analysis within a high-Tc superconductor housing area. Both heat sources indicated no significant contribution to the cryogenic heat load, which validates the results obtained in the preliminary conduction model. A methodology was presented for the estimation of the thermal conductivities of the individual HTS thermal bridge materials and the effective thermal conductivities of the composite HTS thermal bridges as functions of temperature. This methodology included a sensitivity analysis and the demonstration of the estimation procedure using simulated data with added random errors. The thermal conductivities could not be estimated as functions of temperature; thus the effective thermal conductivities of the HTS thermal bridges were analyzed as constants.
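    The conductive contribution of a lead of constant cross-section is the standard integral Q = (A/L) ∫ k(T) dT between the two end temperatures, which is also how temperature-dependent conductivities enter the kind of conduction model described above. The sketch below evaluates that integral numerically; the conductivity function and geometry are illustrative assumptions.

```python
import numpy as np

def conductive_heat_load(k_of_t, area_m2, length_m, t_cold, t_hot, n=500):
    """Steady conduction through a lead of constant cross-section:
    Q = (A / L) * integral of k(T) dT from t_cold to t_hot (midpoint rule)."""
    temps = np.linspace(t_cold, t_hot, n)
    dt = temps[1] - temps[0]
    k_mid = k_of_t(0.5 * (temps[:-1] + temps[1:]))
    return area_m2 / length_m * float(np.sum(k_mid) * dt)

# Rough, assumed linear conductivity for a manganin-like wire (W m-1 K-1)
manganin_k = lambda t: 6.0 + 0.04 * t

print(conductive_heat_load(manganin_k, area_m2=1e-8, length_m=0.1,
                           t_cold=10.0, t_hot=80.0))  # watts per lead
```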

  19. Estimating envelope thermal characteristics from single point in time thermal images

    NASA Astrophysics Data System (ADS)

    Alshatshati, Salahaldin Faraj

    Energy efficiency programs implemented nationally in the U.S. by utilities have rendered savings which have cost on average $0.03/kWh. This cost is still well below generation costs. However, as the lowest-cost energy efficiency measures are adopted, the cost effectiveness of further investment declines. Thus there is a need to more effectively find the best opportunities for savings regionally and nationally, so that the greatest cost effectiveness in implementing energy efficiency can be achieved. Integral to this process are at-scale energy audits. However, on-site building energy audits are expensive, in the range of US$1.29/m2 to $5.37/m2, and there are an insufficient number of professionals to perform the audits. Energy audits that can be conducted at scale and at low cost are needed. Research is presented that addresses community-wide-scale characterization of building envelope thermal characteristics via drive-by and fly-over GPS-linked thermal imaging. A central question drives this research: Can single point-in-time thermal images be used to infer U-values and thermal capacitances of walls and roofs? Previous efforts to use thermal images to estimate U-values have been limited to rare steady exterior weather conditions. The approaches posed here are based upon the development of two models. The first is a dynamic model of a building envelope component with unknown U-value and thermal capacitance. The weather conditions prior to the thermal image are used as inputs to the model. The model is solved to determine the exterior surface temperature, ultimately predicting the temperature at the thermal measurement time. The model U-value and thermal capacitance are tuned in order to force the error between the predicted surface temperature and the measured surface temperature from thermal imaging to be near zero. This model is developed simply to show that such a model cannot be relied upon to accurately estimate the U-value. The second is a data-based methodology. This approach integrates the exterior surface temperature measurements, historical utility data, and easily accessible or potentially easily accessible housing data. A Random Forest model is developed from a training subset of residences for which the envelope U-value is known. This model is used to predict the envelope U-value for a validation set of houses with unknown U-value. Demonstrated is an ability to estimate the wall and roof U-values with R-squared values of 0.97 and 0.96, respectively, using as few as 9 and 24 training houses for wall and ceiling U-value estimation, respectively. The implication of this research is significant, offering the possibility of auditing residences remotely at scale via aerial and drive-by thermal imaging.
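    A minimal sketch of the data-based approach, using scikit-learn's RandomForestRegressor on synthetic stand-in features (imaged surface temperature, outdoor and indoor air temperatures, monthly utility energy), is shown below; the feature set, the synthetic relation generating the data, and all numbers are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; the actual study used measured houses.
rng = np.random.default_rng(1)
n = 200
surface_t = rng.normal(5.0, 3.0, n)        # degC, from the thermal image
outdoor_t = rng.normal(0.0, 4.0, n)        # degC
indoor_t = rng.normal(21.0, 1.0, n)        # degC
energy_kwh = rng.normal(900.0, 150.0, n)   # monthly utility data
u_value = 0.3 + 0.08 * (surface_t - outdoor_t) + rng.normal(0.0, 0.02, n)

X = np.column_stack([surface_t, outdoor_t, indoor_t, energy_kwh])
X_tr, X_te, y_tr, y_te = train_test_split(X, u_value, test_size=0.5,
                                          random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("R-squared on held-out houses:", r2_score(y_te, model.predict(X_te)))
```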

  20. Aluminum nitride coatings using response surface methodology to optimize the thermal dissipated performance of light-emitting diode modules

    NASA Astrophysics Data System (ADS)

    Jean, Ming-Der; Lei, Peng-Da; Kong, Ling-Hua; Liu, Cheng-Wu

    2018-05-01

    This study optimizes the thermal dissipation ability of aluminum nitride (AlN) ceramics to increase the thermal performance of light-emitting diode (LED) modules. AlN powders are deposited on a heat sink as a thermal interface material using an electrostatic spraying process. The junction temperature of the heat sink is modeled by response surface methodology (RSM) based on Taguchi methods. In addition, the structure and properties of the AlN coating are examined using X-ray photoelectron spectroscopy (XPS). In the XPS analysis, the AlN sub-peaks are observed at 72.79 eV for Al2p and 398.88 eV for N1s, and an N1s sub-peak is assigned to N-O bonding at 398.60 eV and Al-N bonding at 395.95 eV, which allows good thermal properties. The results have shown that the use of AlN ceramic material on a heat sink can enhance the thermal performance of LED modules. In addition, the percentage error between the predicted and experimental results for the quadratic model, compared with the linear and interaction models, was found to be within 7.89%, indicating that it was a good predictor. Accordingly, RSM can effectively enhance the thermal performance of an LED, and the beneficial heat dissipation effects of AlN are improved by electrostatic spraying.
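    For readers unfamiliar with response surface methodology, the core step is an ordinary least-squares fit of a quadratic surface to the measured response (here, a junction temperature) over the process factors. The sketch below fits such a surface to made-up data; the factor names (spray voltage, spray distance) and the generating relation are assumptions, not the study's actual design.

```python
import numpy as np

def fit_quadratic_rsm(x1, x2, y):
    """Fit y = b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1**2 + b5*x2**2 by
    ordinary least squares and return the coefficient vector."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

# Made-up process factors and junction temperatures, for illustration only
rng = np.random.default_rng(3)
voltage = rng.uniform(20.0, 60.0, 30)    # assumed factor, kV
distance = rng.uniform(10.0, 30.0, 30)   # assumed factor, cm
t_junction = (80.0 - 0.3 * voltage + 0.5 * distance
              + 0.01 * voltage * distance + rng.normal(0.0, 0.5, 30))

print("fitted coefficients:", np.round(fit_quadratic_rsm(voltage, distance,
                                                         t_junction), 3))
```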

  1. Implementation of Active Thermal Control (ATC) for the Soil Moisture Active and Passive (SMAP) Radiometer

    NASA Technical Reports Server (NTRS)

    Mikhaylov, Rebecca; Kwack, Eug; French, Richard; Dawson, Douglas; Hoffman, Pamela

    2014-01-01

    NASA's Earth Observing Soil Moisture Active and Passive (SMAP) Mission is scheduled to launch in November 2014 into a 685 kilometer near-polar, sun-synchronous orbit. SMAP will provide comprehensive global mapping measurements of soil moisture and freeze/thaw state in order to enhance understanding of the processes that link the water, energy, and carbon cycles. The primary objectives of SMAP are to improve worldwide weather and flood forecasting, enhance climate prediction, and refine drought and agriculture monitoring during its three year mission. The SMAP instrument architecture incorporates an L-band radar and an L-band radiometer which share a common feed horn and parabolic mesh reflector. The instrument rotates about the nadir axis at approximately 15 revolutions per minute, thereby providing a conically scanning wide swath antenna beam that is capable of achieving global coverage within three days. In order to make the necessary precise surface emission measurements from space, the electronics and hardware associated with the radiometer must meet tight short-term (instantaneous and orbital) and long-term (monthly and mission) thermal stabilities. Maintaining these tight thermal stabilities is quite challenging because the sensitive electronics are located on a fast spinning platform that can either be in full sunlight or total eclipse, thus exposing them to a highly transient environment. A passive design approach was first adopted early in the design cycle as a low-cost solution. With careful thermal design efforts to cocoon and protect all sensitive components, all stability requirements were met passively. Active thermal control (ATC) was later added after the instrument Preliminary Design Review (PDR) to mitigate the threat of undetected gain glitches, not for thermal-stability reasons. Gain glitches are common problems with radiometers during missions, and one simple way to avoid gain glitches is to use the in-flight set point programmability that ATC affords to operate the radiometer component away from the problematic temperature zone. A simple ThermXL model (10 nodes) was developed to exercise quick trade studies among various proposed control algorithms: Modified P control vs. PI control. The ThermXL results were then compared with the detailed Thermal Desktop (TD) model for corroboration. Once done, the simple ThermXL model was used to evaluate parameter effects such as temperature digitization, heater size and gain margin, time step, and voltage variation of power supply on the ATC performance. A Modified P control algorithm was implemented into the instrument flight electronics based on the ThermXL results. The thermal short-term stability margin decreased by 10 percent with ATC and a wide temperature error band (plus or minus 0.1 degrees Centigrade) compared to the original passive thermal design. However, a tighter temperature error band (plus or minus 0.1 degrees Centigrade) increased the thermal short-term stability margin by a factor of three over the passive thermal design. The current ATC design provides robust thermal control, tighter stability, and greater in-flight flexibility even though its implementation was prompted by non-thermal performance concerns.

  2. A Comparative Study on Improved Arrhenius-Type and Artificial Neural Network Models to Predict High-Temperature Flow Behaviors in 20MnNiMo Alloy

    PubMed Central

    Yu, Chun-tang; Liu, Ying-ying; Xia, Yu-feng

    2014-01-01

    The stress-strain data of 20MnNiMo alloy were collected from a series of hot compressions on Gleeble-1500 thermal-mechanical simulator in the temperature range of 1173∼1473 K and strain rate range of 0.01∼10 s−1. Based on the experimental data, the improved Arrhenius-type constitutive model and the artificial neural network (ANN) model were established to predict the high temperature flow stress of as-cast 20MnNiMo alloy. The accuracy and reliability of the improved Arrhenius-type model and the trained ANN model were further evaluated in terms of the correlation coefficient (R), the average absolute relative error (AARE), and the relative error (η). For the former, R and AARE were found to be 0.9954 and 5.26%, respectively, while, for the latter, 0.9997 and 1.02%, respectively. The relative errors (η) of the improved Arrhenius-type model and the ANN model were, respectively, in the range of −39.99%∼35.05% and −3.77%∼16.74%. As for the former, only 16.3% of the test data set possesses η-values within ±1%, while, as for the latter, more than 79% possesses. The results indicate that the ANN model presents a higher predictable ability than the improved Arrhenius-type constitutive model. PMID:24688358
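    The three evaluation measures named above are simple to compute; the sketch below implements them for a tiny made-up data set (the stress values are illustrative, not the paper's measurements).

```python
import numpy as np

def flow_stress_metrics(measured, predicted):
    """Correlation coefficient R, average absolute relative error (AARE, %)
    and per-point relative error eta (%), as commonly defined when
    evaluating constitutive models against measured flow stress."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    r = np.corrcoef(measured, predicted)[0, 1]
    eta = (predicted - measured) / measured * 100.0
    aare = np.mean(np.abs(eta))
    return r, aare, eta

sigma_meas = [120.0, 95.0, 150.0, 80.0]   # MPa, made-up values
sigma_pred = [118.0, 99.0, 146.0, 83.0]
r, aare, eta = flow_stress_metrics(sigma_meas, sigma_pred)
print(f"R = {r:.4f}, AARE = {aare:.2f}%, eta = {np.round(eta, 2)}")
```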

  3. Thermal hydraulic simulations, error estimation and parameter sensitivity studies in Drekar::CFD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Thomas Michael; Shadid, John N.; Pawlowski, Roger P.

    2014-01-01

    This report describes work directed towards completion of the Thermal Hydraulics Methods (THM) CFD Level 3 Milestone THM.CFD.P7.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL) Nuclear Hub effort. The focus of this milestone was to demonstrate the thermal hydraulics and adjoint-based error estimation and parameter sensitivity capabilities in the CFD code called Drekar::CFD. This milestone builds upon the capabilities demonstrated in three earlier milestones: THM.CFD.P4.02 [12], completed March 31, 2012; THM.CFD.P5.01 [15], completed June 30, 2012; and THM.CFD.P5.01 [11], completed October 31, 2012.

  4. A Hybrid Demand Response Simulator Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-05-02

    A hybrid demand response simulator (HDRS) is developed to test different control algorithms for centralized and distributed demand response (DR) programs in a small distribution power grid. The HDRS is designed to model a wide variety of DR services such as peak shaving, load shifting, arbitrage, spinning reserves, load following, regulation, emergency load shedding, etc. The HDRS does not model the dynamic behaviors of the loads; rather, it simulates the load scheduling and dispatch process. The load models include TCAs (thermostatically controlled appliances: water heaters, air conditioners, refrigerators, freezers, etc.) and non-TCAs (lighting, washers, dishwashers, etc.). The ambient temperature changes, thermal resistance, capacitance, and the unit control logic can be modeled for TCA loads. The use patterns of the non-TCA loads can be modeled by probability of use and probabilistic durations. Some of the communication network characteristics, such as delays and errors, can also be modeled. Most importantly, because the simulator is modular and greatly simplifies the thermal models for TCA loads, it can be used quickly and easily to test and validate different control algorithms in a simulated environment.
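    A minimal sketch of the kind of simplified TCA representation such a simulator can use (a first-order resistance-capacitance tank with hysteresis thermostat control) is given below; every parameter value is an illustrative default, not taken from the HDRS itself.

```python
def simulate_water_heater(hours=24.0, dt_h=1.0 / 60.0,
                          t_amb=20.0, t_set=55.0, deadband=2.0,
                          r_c_per_kw=120.0, c_kwh_per_c=0.4, p_kw=4.5):
    """First-order RC model of a thermostatically controlled load:
        C * dT/dt = (T_amb - T) / R + P * on
    with on/off hysteresis around the setpoint (all values assumed)."""
    t_tank, on = t_set, False
    temps, states = [], []
    for _ in range(int(hours / dt_h)):
        if t_tank < t_set - deadband:
            on = True
        elif t_tank > t_set + deadband:
            on = False
        dT = ((t_amb - t_tank) / r_c_per_kw + p_kw * on) / c_kwh_per_c * dt_h
        t_tank += dT
        temps.append(t_tank)
        states.append(on)
    return temps, states

temps, states = simulate_water_heater()
print(f"heater duty cycle over one day: {sum(states) / len(states):.2%}")
```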

  5. Improving the thermal efficiency of a jaggery production module using a fire-tube heat exchanger.

    PubMed

    La Madrid, Raul; Orbegoso, Elder Mendoza; Saavedra, Rafael; Marcelo, Daniel

    2017-12-15

    Jaggery is a product obtained after heating and evaporation processes have been applied to sugar cane juice via the addition of thermal energy, followed by a crystallisation process through mechanical agitation. At present, jaggery production uses furnaces and pans that are designed empirically based on trial-and-error procedures, which results in operation at low thermal efficiency. To rectify these deficiencies, this study proposes the use of fire-tube pans to increase heat transfer from the flue gases to the sugar cane juice. With the aim of increasing the thermal efficiency of a jaggery installation, a computational fluid dynamics (CFD)-based model was used as a numerical tool to design a fire-tube pan that would replace the existing finned flat pan. For this purpose, the original configuration of the jaggery furnace was simulated via a pre-validated CFD model in order to calculate its current thermal performance. Then, the newly-designed fire-tube pan was virtually substituted into the jaggery furnace with the aim of numerically estimating the thermal performance at the same operating conditions. A comparison of both simulations highlighted an increase in the heat transfer rate of around 105% in the heating/evaporation processes when the fire-tube pan replaced the original finned flat pan. This enhancement impacted the jaggery production installation, whereby the thermal efficiency of the installation increased from 31.4% to 42.8%. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Electrothermal DC characterization of GaN on Si MOS-HEMTs

    NASA Astrophysics Data System (ADS)

    Rodríguez, R.; González, B.; García, J.; Núñez, A.

    2017-11-01

    DC characteristics of AlGaN/GaN on Si single finger MOS-HEMTs, for different gate geometries, have been measured and numerically simulated with substrate temperatures up to 150 °C. Defect density, depending on gate width, and thermal resistance, depending additionally on temperature, are extracted from transfer characteristics displacement and the AC output conductance method, respectively, and modeled for numerical simulations with Atlas. The thermal conductivity degradation in thin films is also included for accurate simulation of the heating response. With an appropriate methodology, the internal model parameters for temperature dependencies have been established. The numerical simulations show a relative error lower than 4.6% overall, for drain current and channel temperature behavior, and account for the measured device temperature decrease with the channel length increase as well as with the channel width reduction, for a set bias.

  7. LEWICE 2.2 Capabilities and Thermal Validation

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    2002-01-01

    Computational models of bleed air anti-icing and electrothermal de-icing have been added to the LEWICE 2.0 software by integrating the capabilities of two previous programs, ANTICE and LEWICE/Thermal. This combined model has been released as LEWICE version 2.2. Several advancements have also been added to the previous capabilities of each module. This report will present the capabilities of the software package and provide results for both bleed air and electrothermal cases. A comprehensive validation effort has also been performed to compare the predictions to an existing electrothermal database. A quantitative comparison shows that for de-icing cases the average difference is 9.4 F (26%), compared to 3 F for the experimental data, while for evaporative cases the average difference is 2 F (32%), compared to an experimental error of 4 F.

  8. An Analytical Model for the Performance Analysis of Concurrent Transmission in IEEE 802.15.4

    PubMed Central

    Gezer, Cengiz; Zanella, Alberto; Verdone, Roberto

    2014-01-01

    Interference is a serious cause of performance degradation for IEEE 802.15.4 devices. The effect of concurrent transmissions in IEEE 802.15.4 has generally been investigated by means of simulation or experimental activities. In this paper, a mathematical framework for the derivation of the chip, symbol and packet error probability of a typical IEEE 802.15.4 receiver in the presence of interference is proposed. Both non-coherent and coherent demodulation schemes are considered by our model under the assumption of the absence of thermal noise. Simulation results are also added to assess the validity of the mathematical framework when the effect of thermal noise cannot be neglected. Numerical results show that the proposed analysis is in agreement with measurement results in the literature under realistic working conditions. PMID:24658624

  9. An analytical model for the performance analysis of concurrent transmission in IEEE 802.15.4.

    PubMed

    Gezer, Cengiz; Zanella, Alberto; Verdone, Roberto

    2014-03-20

    Interference is a serious cause of performance degradation for IEEE 802.15.4 devices. The effect of concurrent transmissions in IEEE 802.15.4 has generally been investigated by means of simulation or experimental activities. In this paper, a mathematical framework for the derivation of the chip, symbol and packet error probability of a typical IEEE 802.15.4 receiver in the presence of interference is proposed. Both non-coherent and coherent demodulation schemes are considered by our model under the assumption of the absence of thermal noise. Simulation results are also added to assess the validity of the mathematical framework when the effect of thermal noise cannot be neglected. Numerical results show that the proposed analysis is in agreement with measurement results in the literature under realistic working conditions.

  10. Solving local structure around dopants in metal nanoparticles with ab initio modeling of X-ray absorption near edge structure

    DOE PAGES

    Timoshenko, J.; Shivhare, A.; Scott, R. W.; ...

    2016-06-30

    We adopted ab-initio X-ray Absorption Near Edge Structure (XANES) modelling for structural refinement of local environments around metal impurities in a large variety of materials. Our method enables both direct modelling, where the candidate structures are known, and the inverse modelling, where the unknown structural motifs are deciphered from the experimental spectra. We present also estimates of systematic errors, and their influence on the stability and accuracy of the obtained results. We illustrate our approach by following the evolution of local environment of palladium atoms in palladium-doped gold thiolate clusters upon chemical and thermal treatments.

  11. Computational solution verification and validation applied to a thermal model of a ruggedized instrumentation package

    DOE PAGES

    Scott, Sarah Nicole; Templeton, Jeremy Alan; Hough, Patricia Diane; ...

    2014-01-01

    This study details a methodology for quantification of errors and uncertainties of a finite element heat transfer model applied to a Ruggedized Instrumentation Package (RIP). The proposed verification and validation (V&V) process includes solution verification to examine errors associated with the code's solution techniques, and model validation to assess the model's predictive capability for quantities of interest. The model was subjected to mesh resolution and numerical parameters sensitivity studies to determine reasonable parameter values and to understand how they change the overall model response and performance criteria. To facilitate quantification of the uncertainty associated with the mesh, automatic meshing and mesh refining/coarsening algorithms were created and implemented on the complex geometry of the RIP. Automated software to vary model inputs was also developed to determine the solution’s sensitivity to numerical and physical parameters. The model was compared with an experiment to demonstrate its accuracy and determine the importance of both modelled and unmodelled physics in quantifying the results' uncertainty. An emphasis is placed on automating the V&V process to enable uncertainty quantification within tight development schedules.

  12. Error-growth dynamics and predictability of surface thermally induced atmospheric flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, X.; Pielke, R.A.

    1993-09-01

    Using the CSU Regional Atmospheric Modeling System (RAMS) in its nonhydrostatic and compressible configuration, over 200 two-dimensional simulations with Δx = 2 km and Δx = 100 m are performed to study in detail the initial adjustment process and the error-growth dynamics of surface thermally induced circulation, including the sensitivity to initial conditions, boundary conditions, and model parameters, and to study the predictability as a function of the size of surface heat patches under a calm mean wind. It is found that the error growth is not sensitive to the characteristics of the initial perturbations. The numerical smoothing has a strong impact on the initial adjustment process and on the error-growth dynamics. Regarding the predictability and flow structures, it is found that the vertical velocity field is strongly affected by the mean wind, and the flow structures are quite sensitive to the initial soil water content. The transition from organized flow to the situation in which fluxes are dominated by noncoherent turbulent eddies under a calm mean wind is quantitatively evaluated, and this transition is different for different variables. The relationship between the predictability of a realization and of an ensemble average is discussed. The predictability and the coherent circulations modulated by the surface inhomogeneities are also studied by computing the autocorrelations and the power spectra. Three-dimensional mesoscale and large-eddy simulations are performed to verify the above results. It is found that the two-dimensional mesoscale (or fine resolution) simulation yields very close or similar results regarding the predictability as those from the three-dimensional mesoscale (or large eddy) simulation. The horizontally averaged quantities based on two-dimensional fine-resolution simulations are insensitive to initial perturbations and agree with those based on three-dimensional large-eddy simulations. 87 refs., 25 figs.

  13. The Swift BAT Perspective on Non-Thermal Emission in HIFLUGCS Galaxy Clusters

    NASA Technical Reports Server (NTRS)

    Wik, Daniel R.

    2011-01-01

    The search for diffuse non-thermal, inverse Compton (IC) emission from galaxy clusters at hard X-ray energies has been underway for many years, with most detections being either of low significance or controversial. Until recently, comprehensive surveys of hard X-ray emission from clusters were not possible; instead, individually proposed-for, long observations would be collated from the archive. With the advent of the Swift BAT all-sky survey, any cluster's emission above 14 keV can be probed with nearly uniform sensitivity, which is comparable to that of RXTE, Beppo-SAX, and Suzaku with the 58-month version of the survey. In this work, we search for non-thermal excess emission above the exponentially decreasing, high energy thermal emission in the flux-limited HIFLUGCS sample. The BAT emission from many of the detected clusters is marginally extended; we are able to extract the total flux for these clusters using fiducial models for their spatial extent. To account for thermal emission at BAT energies, XMM-Newton EPIC spectra are extracted from coincident spatial regions so that both the thermal and non-thermal spectral components can be determined simultaneously in joint fits. We find marginally significant IC components in 6 clusters, though after closer inspection and consideration of systematic errors we are unable to claim a clear detection in any of them. The spectra of all clusters are also summed to enhance a cumulative non-thermal signal not quite detectable in individual clusters. After constructing a model based on single temperature

  14. Effects of thermal blooming on systems comprised of tiled subapertures

    NASA Astrophysics Data System (ADS)

    Leakeas, Charles L.; Bartell, Richard J.; Krizo, Matthew J.; Fiorino, Steven T.; Cusumano, Salvatore J.; Whiteley, Matthew R.

    2010-04-01

    Laser weapon systems comprised of tiled subapertures are rapidly emerging in the directed energy community. The Air Force Institute of Technology Center for Directed Energy (AFIT/CDE), under sponsorship of the HEL Joint Technology Office, has developed performance models of such laser weapon system configurations consisting of tiled arrays of both slab and fiber subapertures. These performance models are based on results of detailed wave-optics analyses conducted using WaveTrain. Previous performance model versions developed in this effort represent system characteristics such as subaperture shape, aperture fill factor, subaperture intensity profile, subaperture placement in the primary aperture, subaperture mutual coherence (piston), subaperture differential jitter (tilt), and beam quality wave-front error associated with each subaperture. The current work is a prerequisite for the development of robust performance models for turbulence and thermal blooming effects for tiled systems. Emphasis is placed on low altitude tactical scenarios. The enhanced performance model developed will be added to AFIT/CDE's HELEEOS parametric one-on-one engagement level model via the Scaling for High Energy Laser and Relay Engagement (SHaRE) toolbox.

  15. Method for solving the problem of nonlinear heating a cylindrical body with unknown initial temperature

    NASA Astrophysics Data System (ADS)

    Yaparova, N.

    2017-10-01

    We consider the problem of heating a cylindrical body with an internal thermal source when the main characteristics of the material such as specific heat, thermal conductivity and material density depend on the temperature at each point of the body. We can control the surface temperature and the heat flow from the surface inside the cylinder, but it is impossible to measure the temperature on axis and the initial temperature in the entire body. This problem is associated with the temperature measurement challenge and appears in non-destructive testing, in thermal monitoring of heat treatment and technical diagnostics of operating equipment. The mathematical model of heating is represented as nonlinear parabolic PDE with the unknown initial condition. In this problem, both the Dirichlet and Neumann boundary conditions are given and it is required to calculate the temperature values at the internal points of the body. To solve this problem, we propose the numerical method based on using of finite-difference equations and a regularization technique. The computational scheme involves solving the problem at each spatial step. As a result, we obtain the temperature function at each internal point of the cylinder beginning from the surface down to the axis. The application of the regularization technique ensures the stability of the scheme and allows us to significantly simplify the computational procedure. We investigate the stability of the computational scheme and prove the dependence of the stability on the discretization steps and error level of the measurement results. To obtain the experimental temperature error estimates, computational experiments were carried out. The computational results are consistent with the theoretical error estimates and confirm the efficiency and reliability of the proposed computational scheme.
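    The inverse marching scheme itself depends on the paper's regularization details, but the underlying discretization is that of the nonlinear radial heat equation. The sketch below solves only the forward problem (known initial and surface temperatures) with an explicit finite-difference scheme and temperature-dependent properties; the property functions and numbers are assumptions, and no regularization is involved.

```python
import numpy as np

def heat_cylinder_forward(nr=51, radius=0.02, t_end=60.0, dt=1e-3,
                          t_init=300.0, t_surface=500.0,
                          k=lambda T: 20.0 + 0.01 * T,          # W m-1 K-1
                          rho_c=lambda T: 3.5e6 + 500.0 * T):   # J m-3 K-1
    """Explicit finite differences for rho*c(T) dT/dt = (1/r) d/dr(r k(T) dT/dr)
    with a fixed surface temperature; forward problem only."""
    dr = radius / (nr - 1)
    r = np.linspace(0.0, radius, nr)
    T = np.full(nr, float(t_init))
    for _ in range(int(t_end / dt)):
        T[-1] = t_surface                             # Dirichlet at surface
        k_face = k(0.5 * (T[1:] + T[:-1]))            # conductivity at faces
        r_face = 0.5 * (r[1:] + r[:-1])
        flux = r_face * k_face * (T[1:] - T[:-1]) / dr
        dTdt = np.zeros(nr)
        dTdt[1:-1] = (flux[1:] - flux[:-1]) / (r[1:-1] * dr) / rho_c(T[1:-1])
        dTdt[0] = 4.0 * k(T[0]) * (T[1] - T[0]) / dr**2 / rho_c(T[0])  # axis
        T = T + dt * dTdt
    return r, T

r, T = heat_cylinder_forward()
print(f"axis temperature after 60 s: {T[0]:.1f} K")
```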

  16. Influence of thermodynamic properties of a thermo-acoustic emitter on the efficiency of thermal airborne ultrasound generation.

    PubMed

    Daschewski, M; Kreutzbruck, M; Prager, J

    2015-12-01

    In this work we experimentally verify the theoretical prediction of the recently published Energy Density Fluctuation Model (EDF-model) of thermo-acoustic sound generation. Particularly, we investigate experimentally the influence of thermal inertia of an electrically conductive film on the efficiency of thermal airborne ultrasound generation predicted by the EDF-model. Unlike widely used theories, the EDF-model predicts that the thermal inertia of the electrically conductive film is a frequency-dependent parameter. Its influence grows non-linearly with the increase of excitation frequency and reduces the efficiency of the ultrasound generation. Thus, this parameter is the major limiting factor for the efficient thermal airborne ultrasound generation in the MHz-range. To verify this theoretical prediction experimentally, five thermo-acoustic emitter samples consisting of Indium-Tin-Oxide (ITO) coatings of different thicknesses (from 65 nm to 1.44 μm) on quartz glass substrates were tested for airborne ultrasound generation in a frequency range from 10 kHz to 800 kHz. For the measurement of thermally generated sound pressures a laser Doppler vibrometer combined with a 12 μm thin polyethylene foil was used as the sound pressure detector. All tested thermo-acoustic emitter samples showed a resonance-free frequency response in the entire tested frequency range. The thermal inertia of the heat producing film acts as a low-pass filter and reduces the generated sound pressure with the increasing excitation frequency and the ITO film thickness. The difference of generated sound pressure levels for samples with 65 nm and 1.44 μm thickness is in the order of about 6 dB at 50 kHz and of about 12 dB at 500 kHz. A comparison of sound pressure levels measured experimentally and those predicted by the EDF-model shows for all tested emitter samples a relative error of less than ±6%. Thus, experimental results confirm the prediction of the EDF-model and show that the model can be applied for design and optimization of thermo-acoustic airborne ultrasound emitters. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Experimental validation and model development for thermal transmittances of porous window screens and horizontal louvred blind systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Robert; Goudey, Howdy; Curcija, D. Charlie

    Virtually every home in the US has some form of shades, blinds, drapes, or other window attachment, but few have been designed for energy savings. In order to provide a common basis of comparison for thermal performance it is important to have validated simulation tools. This study outlines a review and validation of the ISO 15099 centre-of-glass thermal transmittance correlations for naturally ventilated cavities through measurement and detailed simulations. The focus is on the impacts of room-side ventilated cavities, such as those found with solar screens and horizontal louvred blinds. The thermal transmittance of these systems is measured experimentally, simulated using computational fluid dynamics analysis, and simulated utilizing simplified correlations from ISO 15099. Finally, correlation coefficients are proposed for the ISO 15099 algorithm that reduce the mean error between measured and simulated heat flux for typical solar screens from 16% to 3.5% and from 13% to 1% for horizontal blinds.

  18. Experimental validation and model development for thermal transmittances of porous window screens and horizontal louvred blind systems

    DOE PAGES

    Hart, Robert; Goudey, Howdy; Curcija, D. Charlie

    2017-05-16

    Virtually every home in the US has some form of shades, blinds, drapes, or other window attachment, but few have been designed for energy savings. In order to provide a common basis of comparison for thermal performance it is important to have validated simulation tools. This study outlines a review and validation of the ISO 15099 centre-of-glass thermal transmittance correlations for naturally ventilated cavities through measurement and detailed simulations. The focus is on the impacts of room-side ventilated cavities, such as those found with solar screens and horizontal louvred blinds. The thermal transmittance of these systems is measured experimentally, simulated using computational fluid dynamics analysis, and simulated utilizing simplified correlations from ISO 15099. Finally, correlation coefficients are proposed for the ISO 15099 algorithm that reduce the mean error between measured and simulated heat flux for typical solar screens from 16% to 3.5% and from 13% to 1% for horizontal blinds.

  19. A High-Resolution Measurement of Ball IR Black Paint's Low-Temperature Emissivity

    NASA Technical Reports Server (NTRS)

    Tuttle, Jim; Canavan, Ed; DiPirro, Mike; Li, Xiaoyi; Franck, Randy; Green, Dan

    2011-01-01

    High-emissivity paints are commonly used on thermal control system components. The total hemispheric emissivity values of such paints are typically high (nearly 1) at temperatures above about 100 Kelvin, but they drop off steeply at lower temperatures. A precise knowledge of this temperature-dependence is critical to designing passively-cooled components with low operating temperatures. Notable examples are the coatings on thermal radiators used to cool space-flight instruments to temperatures below 40 Kelvin. Past measurements of low-temperature paint emissivity have been challenging, often requiring large thermal chambers and typically producing data with high uncertainties below about 100 Kelvin. We describe a relatively inexpensive method of performing high-resolution emissivity measurements in a small cryostat. We present the results of such a measurement on Ball InfraRed Black™ (BIRB™), a proprietary surface coating produced by Ball Aerospace and Technologies Corp (BATC), which is used in spaceflight applications. We also describe a thermal model used in the error analysis.

  20. Power Measurement Errors on a Utility Aircraft

    NASA Technical Reports Server (NTRS)

    Bousman, William G.

    2002-01-01

    Extensive flight test data obtained from two recent performance tests of a UH-60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH-60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.

  1. Effects of modeled tropical sea surface temperature variability on coral reef bleaching predictions

    NASA Astrophysics Data System (ADS)

    Van Hooidonk, R. J.

    2011-12-01

    Future widespread coral bleaching and subsequent mortality have been projected with sea surface temperature (SST) data from global, coupled ocean-atmosphere general circulation models (GCMs). While these models possess fidelity in reproducing many aspects of climate, they vary in their ability to correctly capture such parameters as the tropical ocean seasonal cycle and El Niño Southern Oscillation (ENSO) variability. These model weaknesses likely reduce the skill of coral bleaching predictions, but little attention has been paid to the important issue of understanding potential errors and biases, the interaction of these biases with trends, and their propagation in predictions. To analyze the relative importance of various types of model errors and biases on coral reef bleaching predictive skill, various intra- and inter-annual frequency bands of observed SSTs were replaced with those frequencies from the GCM 20th century simulations included in the Intergovernmental Panel on Climate Change (IPCC) 5th assessment report. Subsequent thermal stress was calculated and predictions of bleaching were made. These predictions were compared with observations of coral bleaching in the period 1982-2007 to calculate skill using an objective measure of forecast quality, the Peirce Skill Score (PSS). This methodology will identify frequency bands that are important to predicting coral bleaching and will highlight deficiencies in these bands in models. The methodology we describe can be used to improve future climate-model-derived predictions of coral reef bleaching and to better characterize the errors and uncertainty in predictions.

  2. Erratum: The Effects of Thermal Energetics on Three-dimensional Hydrodynamic Instabilities in Massive Protostellar Disks. II. High-Resolution and Adiabatic Evolutions

    NASA Astrophysics Data System (ADS)

    Pickett, Brian K.; Cassen, Patrick; Durisen, Richard H.; Link, Robert

    2000-02-01

    In the paper ``The Effects of Thermal Energetics on Three-dimensional Hydrodynamic Instabilities in Massive Protostellar Disks. II. High-Resolution and Adiabatic Evolutions'' by Brian K. Pickett, Patrick Cassen, Richard H. Durisen, and Robert Link (ApJ, 529, 1034 [2000]), the wrong version of Figure 10 was published as a result of an error at the Press. The correct version of Figure 10 appears below. The Press sincerely regrets this error.

  3. Thermal Conductance of Pressed Bimetal Contact Pairs at Liquid Nitrogen Temperatures

    NASA Technical Reports Server (NTRS)

    Kittle, Peter; Salerno, Louis J.; Spivak, Alan L.

    1994-01-01

    Large Dewars often use aluminum radiation shields and stainless steel vent lines. A simple, low-cost method of making thermal contact between the shield and the line is to deform the shield around the line. A knowledge of the thermal conductance of such a joint is needed to thermally analyze the system. The thermal conductance of pressed metal contacts consisting of one aluminum and one stainless steel contact has been measured at 77 K, with applied forces from 8.9 N to 267 N. Both 5052 and 5083 aluminum were used as the upper contact. The lower contact was 304L stainless steel. The thermal conductance was found to be linear in temperature over the narrow temperature range of measurement. As the force was increased, the thermal conductance ranged from roughly 9 to 21 mW/K, within a range of errors from 3% to 8%. Within the range of error, no difference could be found between using either of the aluminum alloys as the upper contact. Extrapolating the data to zero applied force does not result in zero thermal conductance. Possible causes of this anomalous effect are discussed.

  4. THEMIS high-resolution digital terrain: Topographic and thermophysical mapping of Gusev Crater, Mars

    USGS Publications Warehouse

    Cushing, G.E.; Titus, T.N.; Soderblom, L.A.; Kirk, R.L.

    2009-01-01

    We discuss a new technique to generate high-resolution digital terrain models (DTMs) and to quantitatively derive and map slope-corrected thermophysical properties such as albedo, thermal inertia, and surface temperatures. This investigation is a continuation of work started by Kirk et al. (2005), who empirically deconvolved Thermal Emission Imaging System (THEMIS) visible and thermal infrared data of this area, isolating topographic information that produced an accurate DTM. Surface temperatures change as a function of many variables such as slope, albedo, thermal inertia, time, season, and atmospheric opacity. We constrain each of these variables to construct a DTM and maps of slope-corrected albedo, slope- and albedo-corrected thermal inertia, and surface temperatures across the scene for any time of day or year and at any atmospheric opacity. DTMs greatly facilitate analyses of the Martian surface, and the MOLA global data set is not finely scaled enough (128 pixels per degree, ~0.5 km per pixel near the equator) to be combined with newer data sets (e.g., High Resolution Imaging Science Experiment, Context Camera, and Compact Reconnaissance Imaging Spectrometer for Mars at ~0.25, ~6, and ~20 m per pixel, respectively), so new techniques to derive high-resolution DTMs are always being explored. This paper discusses our technique of combining a set of THEMIS visible and thermal infrared observations such that albedo and thermal inertia variations within the scene are eliminated and only topographic variations remain. This enables us to produce a high-resolution DTM via photoclinometry techniques that are largely free of albedo-induced errors. With this DTM, THEMIS observations, and a subsurface thermal diffusion model, we generate slope-corrected maps of albedo, thermal inertia, and surface temperatures. In addition to greater accuracy, these products allow thermophysical properties to be directly compared with topography.

  5. Regolith thermal property inversion in the LUNAR-A heat-flow experiment

    NASA Astrophysics Data System (ADS)

    Hagermann, A.; Tanaka, S.; Yoshida, S.; Fujimura, A.; Mizutani, H.

    2001-11-01

    In 2003, two penetrators of the LUNAR-A mission of ISAS will investigate the internal structure of the Moon by conducting seismic and heat-flow experiments. Heat flow is the product of the thermal gradient ∂T/∂z and the thermal conductivity λ of the lunar regolith. For measuring the thermal conductivity (or diffusivity), each penetrator will carry five thermal property sensors consisting of small disc heaters. The thermal response Ts(t) of the heater itself to a constant known power supply of approx. 50 mW serves as the data for the subsequent interpretation. Horai et al. (1991) found a forward analytical solution to the problem of determining the thermal inertia λρc of the regolith for constant thermal properties and a simplified geometry. In the inversion, the problem of deriving the unknown thermal properties of a medium from known heat sources and temperatures is an identification heat conduction problem (IDHCP), an ill-posed inverse problem. Assuming that the thermal conductivity λ and heat capacity ρc are linear functions of temperature (which is reasonable in most cases), one can apply a Kirchhoff transformation to linearize the heat conduction equation, which minimizes computing time. Then the error functional, i.e. the difference between the measured temperature response of the heater and the predicted temperature response, can be minimized, thus solving for the thermal diffusivity κ = λ/(ρc), which will complete the set of parameters needed for a detailed description of the thermal properties of the lunar regolith. Results of model calculations will be presented, in which synthetic data and calibration data are used to invert the unknown thermal diffusivity of the medium by means of a modified Newton method. Due to the ill-posedness of the problem, the number of parameters to be solved for should be limited. As the model calculations reveal, a homogeneous regolith allows for a fast and accurate inversion.
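
    The relations this record relies on can be written compactly. The forms below are standard textbook expressions, with λ0 an arbitrary reference conductivity; they are an illustrative sketch of the Kirchhoff linearization, not the mission team's exact formulation:

    \[
    q = -\lambda(T)\,\frac{\partial T}{\partial z}, \qquad
    \theta(T) = \frac{1}{\lambda_{0}}\int_{T_{0}}^{T}\lambda(T')\,\mathrm{d}T',
    \]
    \[
    \rho c\,\frac{\partial T}{\partial t} = \nabla\cdot\big(\lambda\,\nabla T\big)
    \quad\Longrightarrow\quad
    \frac{1}{\kappa}\,\frac{\partial \theta}{\partial t} = \nabla^{2}\theta,
    \qquad \kappa = \frac{\lambda}{\rho c},
    \]

    so that when κ varies only weakly with temperature the transformed equation is effectively linear, which is what makes the iterative inversion for κ fast.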

  6. Detection of thermal gradients through fiber-optic Chirped Fiber Bragg Grating (CFBG): Medical thermal ablation scenario

    NASA Astrophysics Data System (ADS)

    Korganbayev, Sanzhar; Orazayev, Yerzhan; Sovetov, Sultan; Bazyl, Ali; Schena, Emiliano; Massaroni, Carlo; Gassino, Riccardo; Vallan, Alberto; Perrone, Guido; Saccomandi, Paola; Arturo Caponero, Michele; Palumbo, Giovanna; Campopiano, Stefania; Iadicicco, Agostino; Tosi, Daniele

    2018-03-01

    In this paper, we describe a novel method for spatially distributed temperature measurement with Chirped Fiber Bragg Grating (CFBG) fiber-optic sensors. The proposed method determines the thermal profile in the CFBG region from demodulation of the CFBG optical spectrum. The method is based on an iterative optimization that aims at minimizing the mismatch between the measured CFBG spectrum and a CFBG model based on coupled-mode theory (CMT), perturbed by a temperature gradient. In the demodulation part, we simulate different temperature distribution patterns with a Monte Carlo approach on simulated CFBG spectra. A cost function quantifying the difference between measured and simulated spectra is then minimized, yielding the final temperature profile. Experiments and simulations have been carried out first with a linear gradient, demonstrating correct operation (error 2.9 °C); then, a setup was arranged to measure the temperature pattern on a 5-cm-long section exposed to medical laser thermal ablation. Overall, the proposed method can operate as a real-time detection technique for thermal gradients over 1.5-5 cm regions, and serves as a key asset for the estimation of thermal gradients at the micro-scale in biomedical applications.
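
    As a rough, self-contained illustration of the demodulation idea (iteratively minimizing the mismatch between a measured and a modelled spectrum), the sketch below fits a low-order temperature profile to a toy Gaussian-peak forward model with scipy. The real method uses a coupled-mode-theory CFBG model, which is not reproduced here; every function name, wavelength value, and coefficient in the sketch is an assumption for illustration only.

      # Illustrative sketch (not the authors' code): fit a low-order temperature
      # profile along a chirped FBG by minimizing the mismatch between a measured
      # reflection spectrum and a simple forward model. The forward model below is
      # a placeholder for the coupled-mode-theory simulation described above.
      import numpy as np
      from scipy.optimize import minimize

      wl = np.linspace(1545.0, 1555.0, 500)            # wavelength grid [nm]

      def forward_spectrum(temp_profile, wl):
          """Toy forward model: each grating segment reflects around a Bragg
          wavelength that shifts ~10 pm/K with its local temperature."""
          z = np.linspace(0.0, 1.0, temp_profile.size)  # normalized position
          bragg = 1546.0 + 8.0 * z + 0.010 * (temp_profile - 20.0)
          return sum(np.exp(-((wl - b) / 0.15) ** 2) for b in bragg) / temp_profile.size

      def cost(coeffs, wl, measured):
          """Temperature profile parameterized as a quadratic in position."""
          z = np.linspace(0.0, 1.0, 40)
          profile = coeffs[0] + coeffs[1] * z + coeffs[2] * z ** 2
          return np.sum((forward_spectrum(profile, wl) - measured) ** 2)

      # Synthetic "measurement": a linear 20 -> 60 degC gradient along the grating
      true_profile = np.linspace(20.0, 60.0, 40)
      measured = forward_spectrum(true_profile, wl)

      res = minimize(cost, x0=[25.0, 0.0, 0.0], args=(wl, measured), method="Nelder-Mead")
      z = np.linspace(0.0, 1.0, 40)
      recovered = res.x[0] + res.x[1] * z + res.x[2] * z ** 2
      print("recovered end-point temperatures [degC]:", recovered[0], recovered[-1])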

  7. Heat transfer enhancement in triplex-tube latent thermal energy storage system with selected arrangements of fins

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Xing, Yuming; Liu, Xin; Rui, Zhoufeng

    2018-01-01

    The use of thermal energy storage systems can effectively reduce energy consumption and improve system performance. One of the promising approaches for thermal energy storage is the application of phase change materials (PCMs). In this study, a two-dimensional numerical model is presented to investigate heat transfer enhancement during the melting/solidification process in a triplex tube heat exchanger (TTHX) using the Fluent software. Both thermal conduction and natural convection are taken into account in the simulation of the melting/solidification process. With the volume fraction of the fins kept constant, the influence of the proposed fin arrangement on the temporal profile of liquid fraction over the melting process is studied and reported. By rotating the unit through different angles, the simulations show that the melting time varies only slightly, which means that installation error can be tolerated by the selected fin arrangement. Investigation of the solidification process shows that the proposed fin arrangement can also effectively reduce the solidification time of the PCM. To summarize, this work presents a shape optimization for the improvement of thermal energy storage systems that considers both the thermal energy charging and discharging processes.

  8. Inverting multiple suites of thermal indicator data to constrain the heat flow history: A case study from east Kalimantan, Indonesia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudford, B.S.

    1996-12-31

    The determination of an appropriate thermal history in an exploration area is of fundamental importance when attempting to understand the evolution of the petroleum system. In this talk we present the results of a single-well modelling study in which bottom hole temperature data, vitrinite reflectance data and three different biomarker ratio datasets were available to constrain the modelling. Previous modelling studies using biomarker ratios have been hampered by the wide variety of published kinetic parameters for biomarker evolution. Generally, these parameters have been determined either from measurements in the laboratory and extrapolation to the geological setting, or from downhole measurements where the heat flow history is assumed to be known. In the first case serious errors can arise because the heating rate is being extrapolated over many orders of magnitude, while in the second case errors can arise if the assumed heat flow history is incorrect. To circumvent these problems we carried out a parameter optimization in which the heat flow history was treated as an unknown in addition to the biomarker ratio kinetic parameters. This method enabled the heat flow history for the area to be determined together with appropriate kinetic parameters for the three measured biomarker ratios. Within the resolution of the data, the heat flow since the early Miocene has been relatively constant at levels required to yield good agreement between predicted and measured subsurface temperatures.

  9. Inverting multiple suites of thermal indicator data to constrain the heat flow history: A case study from east Kalimantan, Indonesia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudford, B.S.

    1996-01-01

    The determination of an appropriate thermal history in an exploration area is of fundamental importance when attempting to understand the evolution of the petroleum system. In this talk we present the results of a single-well modelling study in which bottom hole temperature data, vitrinite reflectance data and three different biomarker ratio datasets were available to constrain the modelling. Previous modelling studies using biomarker ratios have been hampered by the wide variety of published kinetic parameters for biomarker evolution. Generally, these parameters have been determined either from measurements in the laboratory and extrapolation to the geological setting, or from downhole measurements where the heat flow history is assumed to be known. In the first case serious errors can arise because the heating rate is being extrapolated over many orders of magnitude, while in the second case errors can arise if the assumed heat flow history is incorrect. To circumvent these problems we carried out a parameter optimization in which the heat flow history was treated as an unknown in addition to the biomarker ratio kinetic parameters. This method enabled the heat flow history for the area to be determined together with appropriate kinetic parameters for the three measured biomarker ratios. Within the resolution of the data, the heat flow since the early Miocene has been relatively constant at levels required to yield good agreement between predicted and measured subsurface temperatures.

  10. Automatic location of disruption times in JET

    NASA Astrophysics Data System (ADS)

    Moreno, R.; Vega, J.; Murari, A.

    2014-11-01

    The loss of stability and confinement in tokamak plasmas can induce critical events known as disruptions. Disruptions produce strong electromagnetic forces and thermal loads which can damage fundamental components of the devices. Determining the disruption time is extremely important for various disruption studies: theoretical models, physics-driven models, or disruption predictors. In JET, during the experimental campaigns with the JET-C (Carbon Fiber Composite) wall, a common criterion to determine the disruption time consisted of locating the time of the thermal quench. However, with the metallic ITER-like wall (JET-ILW), this criterion is usually not valid: several thermal quenches may occur prior to the current quench, but the temperature recovers. Therefore, a new criterion has to be defined. A possibility is to use the start of the current quench as the disruption time. This work describes the implementation of an automatic data processing method to estimate the disruption time according to this new definition. This automatic determination both reduces the human effort needed to locate disruption times and standardizes the estimates (with the benefit of being less vulnerable to human error).

  11. Heat transfer models for predicting Salmonella enteritidis in shell eggs through supply chain distribution.

    PubMed

    Almonacid, S; Simpson, R; Teixeira, A

    2007-11-01

    Egg and egg preparations are important vehicles for Salmonella enteritidis infections. The influence of time-temperature history becomes important when this organism is present in commercial shell eggs. A computer-aided mathematical model was validated to estimate the surface and interior temperature of shell eggs under variable ambient and refrigerated storage temperatures. A risk assessment of S. enteritidis based on the use of this model, coupled with S. enteritidis kinetics, has already been reported in a companion paper published earlier in JFS. The model considered the actual geometry and composition of shell eggs and was solved by numerical techniques (finite differences and finite elements). Parameters of interest such as the local (h) and global (U) heat transfer coefficients, thermal conductivity, and apparent volumetric specific heat were estimated by an inverse procedure from experimental temperature measurements. In order to assess the error in predicting microbial population growth, theoretical and experimental temperatures were applied to an S. enteritidis growth model taken from the literature. Errors between microbial population growth values calculated from model-predicted temperatures and those calculated from experimentally measured temperatures were satisfactorily low: 1.1% and 0.8% for the finite difference and finite element models, respectively.
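
    To make the numerical side concrete, the following is a minimal sketch of transient conduction in a sphere with a convective surface, standing in loosely for the finite-difference egg model described above. The geometry, thermal properties, and heat transfer coefficient are assumed placeholder values, not the validated parameters of the study, and the real model's composite shell/albumen/yolk geometry is ignored.

      # Minimal sketch (not the validated model from the record): explicit
      # finite-difference solution of 1-D radial heat conduction in a sphere,
      # used as a rough stand-in for a shell egg cooling in refrigerated air.
      # All property values below are assumptions chosen only for illustration.
      import numpy as np

      R = 0.022          # equivalent radius [m]
      k = 0.45           # thermal conductivity [W/m/K]
      rho_cp = 3.3e6     # volumetric heat capacity [J/m^3/K]
      h = 15.0           # surface heat transfer coefficient [W/m^2/K]
      T_amb, T0 = 4.0, 25.0   # ambient and initial temperatures [degC]

      alpha = k / rho_cp
      N = 60
      dr = R / N
      dt = 0.2 * dr**2 / alpha          # well inside the explicit stability limit
      r = np.linspace(0.0, R, N + 1)
      T = np.full(N + 1, T0)

      t, t_end = 0.0, 3600.0            # simulate one hour of cooling
      while t < t_end:
          Tn = T.copy()
          # interior nodes: dT/dt = alpha * (T'' + (2/r) T')
          lap = (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2]) / dr**2
          grad = (Tn[2:] - Tn[:-2]) / (2 * dr)
          T[1:-1] = Tn[1:-1] + dt * alpha * (lap + 2.0 * grad / r[1:-1])
          # centre node: symmetry condition
          T[0] = Tn[0] + dt * alpha * 6.0 * (Tn[1] - Tn[0]) / dr**2
          # surface node: convective boundary handled via a ghost point
          ghost = Tn[-2] - 2 * dr * h / k * (Tn[-1] - T_amb)
          T[-1] = Tn[-1] + dt * alpha * ((ghost - 2 * Tn[-1] + Tn[-2]) / dr**2
                                         + 2.0 * (ghost - Tn[-2]) / (2 * dr * R))
          t += dt

      print(f"after 1 h: centre {T[0]:.1f} degC, surface {T[-1]:.1f} degC")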

  12. Dynamic gas temperature measurement system

    NASA Technical Reports Server (NTRS)

    Elmore, D. L.; Robinson, W. W.; Watkins, W. B.

    1983-01-01

    A gas temperature measurement system with a compensated frequency response of 1 kHz and the capability to operate in the exhaust of a gas turbine combustor was developed. Environmental guidelines for this measurement are presented, followed by a preliminary design of the selected measurement method. Transient thermal conduction effects were identified as important; a preliminary finite-element conduction model quantified the errors expected if conduction is neglected. A compensation method was developed to account for the effects of conduction and convection. This method was verified in analog electrical simulations, and used to compensate dynamic temperature data from a laboratory combustor and a gas turbine engine. Detailed data compensations are presented. An analysis of error sources in the method was performed to derive confidence levels for the compensated data.

  13. Thin film absorption characterization by focus error thermal lensing

    NASA Astrophysics Data System (ADS)

    Domené, Esteban A.; Schiltz, Drew; Patel, Dinesh; Day, Travis; Jankowska, E.; Martínez, Oscar E.; Rocca, Jorge J.; Menoni, Carmen S.

    2017-12-01

    A simple, highly sensitive technique for measuring absorbed power in thin film dielectrics based on thermal lensing is demonstrated. Absorption of an amplitude modulated or pulsed incident pump beam by a thin film acts as a heat source that induces thermal lensing in the substrate. A second continuous wave collimated probe beam defocuses after passing through the sample. Determination of absorption is achieved by quantifying the change of the probe beam profile at the focal plane using a four-quadrant detector and cylindrical lenses to generate a focus error signal. This signal is inherently insensitive to deflection, which removes the noise contribution from beam pointing instability. A linear dependence of the focus error signal on the absorbed power is shown for a dynamic range of over 10^5. This technique was used to measure absorption loss in dielectric thin films deposited on fused silica substrates. In pulsed configuration, a single-shot sensitivity of about 20 ppm is demonstrated, providing a unique technique for the characterization of moving targets as found in thin film growth instrumentation.

  14. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    NASA Astrophysics Data System (ADS)

    Shadid, J. N.; Smith, T. M.; Cyr, E. C.; Wildey, T. M.; Pawlowski, R. P.

    2016-09-01

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. In this respect the understanding of numerical error, the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In this study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier-Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. Initial results are presented that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.

  15. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, J.N., E-mail: jnshadi@sandia.gov; Department of Mathematics and Statistics, University of New Mexico; Smith, T.M.

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. In this respect the understanding of numerical error, the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In this study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier–Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. Initial results are presented that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.

  16. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, J. N.; Smith, T. M.; Cyr, E. C.

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. The understanding of numerical error, the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In our study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier–Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. We present the initial results that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.

  17. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    DOE PAGES

    Shadid, J. N.; Smith, T. M.; Cyr, E. C.; ...

    2016-05-20

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. The understanding of numerical error, the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In our study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier–Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. We present the initial results that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.

  18. Achievable flatness in a large microwave power transmitting antenna

    NASA Technical Reports Server (NTRS)

    Ried, R. C.

    1980-01-01

    A dual reference SPS system with pseudoisotropic graphite composite as a representative dimensionally stable composite was studied. The loads, accelerations, thermal environments, temperatures and distortions were calculated for a variety of operational SPS conditions, along with statistical considerations of material properties, manufacturing tolerances, measurement accuracy and the resulting line of sight (LOS) and local slope distributions. A LOS error and a subarray rms slope error of two arc minutes can be achieved with a passive system. Results show that existing materials, measurement, manufacturing, assembly and alignment techniques can be used to build the microwave power transmission system antenna structure. Manufacturing tolerance can be critical to the rms slope error. The slope error budget can be met with a passive system. Structural joints without free play are essential in the assembly of the large truss structure. Variation in material properties, particularly in the coefficient of thermal expansion from part to part, is more significant than the actual value.

  19. Measurement of thermal conductivity and thermal diffusivity using a thermoelectric module

    NASA Astrophysics Data System (ADS)

    Beltrán-Pitarch, Braulio; Márquez-García, Lourdes; Min, Gao; García-Cañadas, Jorge

    2017-04-01

    A proof of concept of using a thermoelectric module to measure both the thermal conductivity and thermal diffusivity of bulk disc samples at room temperature is demonstrated. The method involves the calculation of the integral area from an impedance spectrum, which empirically correlates with the thermal properties of the sample through an exponential relationship. This relationship was obtained employing different reference materials. The impedance spectroscopy measurements are performed in a very simple setup comprising a thermoelectric module, which is soldered at its bottom side to a Cu block (heat sink) and thermally connected to the sample at its top side employing thermal grease. Random and systematic errors of the method were calculated for the thermal conductivity (18.6% and 10.9%, respectively) and thermal diffusivity (14.2% and 14.7%, respectively) employing a BCR724 standard reference material. Although the errors are somewhat high, the technique could be useful in its current state for screening purposes or high-throughput measurements. This method establishes a new application of thermoelectric modules as thermal property sensors. It involves the use of a very simple setup in conjunction with a frequency response analyzer, which provides a low-cost alternative to most of the apparatus currently available on the market. In addition, impedance analyzers are reliable and widespread equipment, which facilitates the sometimes difficult access to thermal conductivity facilities.
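
    A hedged sketch of the calibration idea follows: integrate the area under an impedance arc and relate it to thermal conductivity through a fitted exponential, as the record describes qualitatively. The reference values, the toy semicircular spectrum, and all function names are assumptions for illustration; they are not the published calibration.

      # Illustrative sketch only: fit an empirical exponential relation between
      # the integral area of an impedance spectrum and thermal conductivity using
      # reference samples (made-up numbers), then invert it for an unknown sample.
      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.integrate import trapezoid

      def spectrum_area(z_real, z_imag):
          """Area under the -Z'' vs Z' impedance arc (trapezoidal rule)."""
          return trapezoid(-z_imag, z_real)

      def calib(k, a, b):
          return a * np.exp(b * k)

      # Reference thermal conductivities [W/m/K] and their measured integral areas
      k_ref = np.array([1.1, 2.3, 14.0, 40.0])
      area_ref = np.array([0.95, 0.80, 0.35, 0.12])      # placeholder values

      (a, b), _ = curve_fit(calib, k_ref, area_ref, p0=(1.0, -0.05))

      # Toy "measured" spectrum of the unknown sample: a semicircular arc
      zr = np.linspace(0.0, 1.0, 200)
      zi = -np.sqrt(np.clip(0.25 - (zr - 0.5) ** 2, 0.0, None))
      area_unknown = spectrum_area(zr, zi)

      # Invert the fitted calibration A = a*exp(b*k) for the unknown conductivity
      k_unknown = np.log(area_unknown / a) / b
      print(f"estimated thermal conductivity: {k_unknown:.2f} W/m/K")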

  20. Experimental validation for thermal transmittances of window shading systems with perimeter gaps

    DOE PAGES

    Hart, Robert; Goudey, Howdy; Curcija, D. Charlie

    2018-02-22

    Virtually all residential and commercial windows in the U.S. have some form of window attachment, but few have been designed for energy savings. ISO 15099 presents a simulation framework to determine thermal performance of window attachments, but the model has not been validated for these products. This paper outlines a review and validation of the ISO 15099 centre-of-glass heat transfer correlations for perimeter gaps (top, bottom, and side) in naturally ventilated cavities through measurement and simulation. The thermal transmittance impact due to dimensional variations of these gaps is measured experimentally, simulated using computational fluid dynamics, and simulated utilizing simplified correlations from ISO 15099. Results show that the ISO 15099 correlations produce a mean error between measured and simulated heat flux of 2.5 ± 7%. These tolerances are similar to those obtained from sealed cavity comparisons and are deemed acceptable within the ISO 15099 framework.

  1. Opto-thermal analysis of a lightweighted mirror for solar telescope.

    PubMed

    Banyal, Ravinder K; Ravindra, B; Chatterjee, S

    2013-03-25

    In this paper, an opto-thermal analysis of a moderately heated lightweighted solar telescope mirror is carried out using 3D finite element analysis (FEA). A physically realistic heat transfer model is developed to account for the radiative heating and energy exchange of the mirror with its surroundings. The numerical simulations show the non-uniform temperature distribution and associated thermo-elastic distortions of the mirror blank clearly mimicking the underlying discrete geometry of the lightweighted substrate. The computed mechanical deformation data is analyzed with surface polynomials and the optical quality of the mirror is evaluated with the help of ray-tracing software. The thermal print-through distortions are further shown to contribute to optical figure changes and mid-spatial-frequency errors of the mirror surface. A comparative study presented for three commonly used substrate materials, namely Zerodur, Pyrex and Silicon Carbide (SiC), is relevant to a wide range of large-optics requirements in ground and space applications.

  2. Evaluating the SSEBop approach for evapotranspiration mapping with landsat data using lysimetric observations in the semi-arid Texas High Plains

    USGS Publications Warehouse

    Senay, Gabriel; Gowda, Prasanna H.; Bohms, Stefanie; Howell, T.A.; Friedrichs, Mackenzie; Marek, T.H.; Verdin, James

    2014-01-01

    The operational Simplified Surface Energy Balance (SSEBop) approach was applied to 14 Landsat 5 thermal infrared images for mapping daily actual evapotranspiration (ETa) fluxes during the spring and summer seasons (March-October) in 2006 and 2007. Data from four large lysimeters, managed by the USDA-ARS Conservation and Production Research Laboratory, were used for evaluating the SSEBop-estimated ETa. The lysimeter fields are arranged in a 2 × 2 block pattern with two fields each managed under irrigated and dryland cropping systems. The modeled and observed daily ETa values were grouped as "irrigated" and "dryland" at four different aggregation periods (1-day, 2-day, 3-day and "seasonal") for evaluation. There was a strong linear relationship between observed and modeled ETa, with R² values ranging from 0.87 to 0.97. The root mean square errors (RMSE), as a percentage of their respective mean values, were progressively reduced to 28, 24, 16 and 12% at the 1-day, 2-day, 3-day, and seasonal aggregation periods, respectively. With a further correction of the underestimation bias (−11%), the seasonal RMSE was reduced from 12 to 6%. The random error contribution to the total error was reduced from 86 to 20%, while the bias contribution increased from 14 to 80%, when aggregating from the daily to the seasonal scale. This study shows the reliable performance of the SSEBop approach on the Landsat data stream with a transferable approach for use with the recently launched LDCM (Landsat Data Continuity Mission) Thermal InfraRed Sensor (TIRS) data. Thus, SSEBop can produce quick, reliable and useful ET estimates at various time scales, with higher seasonal accuracy, for use in regional water management decisions.
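
    The shift from random-error-dominated to bias-dominated totals quoted above follows from the standard decomposition MSE = bias² + error variance; a short sketch of that bookkeeping on synthetic data (not the lysimeter observations) is given below.

      # Short sketch (assumed helper, not from the study): decompose the mean
      # squared error between modeled and observed ET into a systematic (bias)
      # part and a random (unsystematic) part.
      import numpy as np

      def error_decomposition(modeled, observed):
          err = modeled - observed
          mse = np.mean(err ** 2)
          bias = np.mean(err)
          random_var = np.var(err)            # MSE = bias^2 + var(err)
          return {
              "rmse": np.sqrt(mse),
              "rmse_pct_of_mean": 100.0 * np.sqrt(mse) / np.mean(observed),
              "bias_share": bias ** 2 / mse,
              "random_share": random_var / mse,
          }

      # Example with synthetic daily ET values [mm/day]
      rng = np.random.default_rng(0)
      obs = rng.uniform(2.0, 8.0, 200)
      mod = 0.89 * obs + rng.normal(0.0, 0.8, 200)   # ~11% underestimation + noise
      print(error_decomposition(mod, obs))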

  3. Modeling the Extremely Lightweight Zerodur Mirror (ELZM) Thermal Soak Test

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas E.; Eng, Ron; Hull, Tony; Stahl, H. Philip

    2017-01-01

    Exoplanet science requires extreme wavefront stability (10 pm change/10 minutes), so every source of wavefront error (WFE) must be characterized in detail. This work illustrates the testing and characterization process that will be used to determine how much surface figure error (SFE) is produced by mirror substrate materials' CTE distributions. Schott's extremely lightweight Zerodur mirror (ELZM) was polished to a sphere, mounted, and tested at Marshall Space Flight Center (MSFC) in the X-Ray and Cryogenic Test Facility (XRCF). The test transitioned the mirror's temperature from an isothermal state at 292K to isothermal states at 275K, 250K and 230K to isolate the effects of the mirror's CTE distribution. The SFE was measured interferometrically at each temperature state and finite element analysis (FEA) has been completed to assess the predictability of the change in the mirror's surface due to a change in the mirror's temperature. The coefficient of thermal expansion (CTE) distribution in the ELZM is unknown, so the analysis has been correlated to the test data. The correlation process requires finding the sensitivity of SFE to a given CTE distribution in the mirror. A novel hand calculation is proposed to use these sensitivities to estimate thermally induced SFE. The correlation process was successful and is documented in this paper. The CTE map that produces the measured SFE is in line with the measured data of typical boules of Schott's Zerodur glass.

  4. Modeling the Extremely Lightweight Zerodur Mirror (ELZM) thermal soak test

    NASA Astrophysics Data System (ADS)

    Brooks, Thomas E.; Eng, Ron; Hull, Tony; Stahl, H. Philip

    2017-09-01

    Exoplanet science requires extreme wavefront stability (10 pm change/10 minutes), so every source of wavefront error (WFE) must be characterized in detail. This work illustrates the testing and characterization process that will be used to determine how much surface figure error (SFE) is produced by mirror substrate materials' CTE distributions. Schott's extremely lightweight Zerodur mirror (ELZM) was polished to a sphere, mounted, and tested at Marshall Space Flight Center (MSFC) in the X-Ray and Cryogenic Test Facility (XRCF). The test transitioned the mirror's temperature from an isothermal state at 292K to isothermal states at 275K, 250K and 230K to isolate the effects of the mirror's CTE distribution. The SFE was measured interferometrically at each temperature state and finite element analysis (FEA) has been completed to assess the predictability of the change in the mirror's surface due to a change in the mirror's temperature. The coefficient of thermal expansion (CTE) distribution in the ELZM is unknown, so the analysis has been correlated to the test data. The correlation process requires finding the sensitivity of SFE to a given CTE distribution in the mirror. A novel hand calculation is proposed to use these sensitivities to estimate thermally induced SFE. The correlation process was successful and is documented in this paper. The CTE map that produces the measured SFE is in line with the measured data of typical boules of Schott's Zerodur glass.

  5. Modeling ground thermal regime of an ancient buried ice body in Beacon Valley, Antarctica using a 1-D heat equation with latent heat effect

    NASA Astrophysics Data System (ADS)

    Liu, L.; Sletten, R. S.; Hallet, B.; Waddington, E. D.; Wood, S. E.

    2013-12-01

    An ancient massive ice body buried under several decimeters of debris in Beacon Valley, Antarctica is believed to be over one million years old, making it older than any known glacier or ice cap. It is fundamentally important as a reservoir of water, a proxy for climatic information, and an expression of the periglacial landscape. It is also one of Earth's closest analogs for the widespread, near-surface ice found in Martian soils and ice-cored landforms. We are interested in understanding the controls on how long this ice may persist, since our physical model of sublimation suggests it should not be stable. In these models, the soil temperatures and the temperature gradient are important because they determine the direction and magnitude of the vapor flux, and thus the sublimation rate. To better understand the heat transfer processes and constrain the rates of processes governing ground ice stability, a model of the thermal behavior of the permafrost is applied to Beacon Valley, Antarctica. It calculates soil temperatures based on a 1-D thermal diffusion equation using a fully implicit finite volume method (FVM). The model is constrained by soil physical properties and by boundary conditions consisting of in-situ ground surface temperature measurements (with an average of -23.6 °C, a maximum of 20.5 °C and a minimum of -54.3 °C) and an ice-core temperature record at ~30 m. Model results are compared to in-situ temperature measurements at depths of 0.10 m, 0.20 m, 0.30 m, and 0.45 m to assess the model's ability to reproduce the temperature profile for given thermal properties of the debris cover and ice. The model's sensitivity to the thermal diffusivity of the permafrost and the overlying debris is also examined. Furthermore, we incorporate the role of ice condensation/sublimation, calculated using our vapor diffusion model, in the 1-D thermal diffusion model to assess potential latent heat effects that in turn affect ground ice sublimation rates. In general, the model simulates the ground thermal regime well. A detailed temperature comparison suggests that the 1-D thermal diffusion model closely approximates the measured temperatures at all depths, with an average square root of the mean squared error (SRMSE) of 0.15 °C; a linear correlation between modeled and measured temperatures yields an average R² value of 0.9997. Prominent seasonal temperature variations diminish with depth, and the temperature equilibrates to the mean annual value at a depth of about 21.5 m. The amount of heat generated/consumed by ice condensation/sublimation is insufficient to significantly impact the thermal regime.
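
    For readers who want the numerical skeleton, below is a minimal fully implicit finite-volume/finite-difference sketch of the 1-D heat equation with prescribed surface and bottom temperatures. It omits the latent-heat and vapor-diffusion coupling of the actual model, and the depth, diffusivity, and surface forcing are assumptions chosen only to mirror the boundary values quoted in the record.

      # Minimal sketch, not the authors' model: fully implicit 1-D heat diffusion
      # dT/dt = kappa * d2T/dz2 with Dirichlet boundaries, solved with a
      # tridiagonal system each time step (unconditionally stable).
      import numpy as np
      from scipy.linalg import solve_banded

      L_dom = 30.0                 # domain depth [m]
      N = 300
      dz = L_dom / N
      kappa = 1.0e-6               # assumed thermal diffusivity [m^2/s]
      dt = 3600.0                  # 1-hour time step
      r = kappa * dt / dz**2

      T = np.full(N + 1, -23.6)    # start at the mean annual surface temperature [degC]

      def step(T, T_surf, T_bottom):
          """Advance one implicit step with prescribed surface/bottom temperatures."""
          n = T.size
          ab = np.zeros((3, n))          # banded storage for solve_banded
          ab[0, 2:] = -r                 # upper diagonal (interior rows)
          ab[1, :] = 1.0 + 2.0 * r       # main diagonal
          ab[2, :-2] = -r                # lower diagonal (interior rows)
          ab[1, 0] = ab[1, -1] = 1.0     # boundary rows reduce to identity
          ab[0, 1] = ab[2, -2] = 0.0     # their off-diagonal entries stay zero
          rhs = T.copy()
          rhs[0], rhs[-1] = T_surf, T_bottom
          return solve_banded((1, 1), ab, rhs)

      # Drive the surface with an idealized annual temperature cycle for 5 years
      for hour in range(5 * 365 * 24):
          T_surf = -23.6 + 20.0 * np.sin(2.0 * np.pi * hour / (365.0 * 24.0))
          T = step(T, T_surf, T_bottom=-23.6)

      idx = int(round(0.5 / dz))
      print(f"temperature at 0.5 m depth after spin-up: {T[idx]:.2f} degC")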

  6. Minimization of the hole overcut and cylindricity errors during rotary ultrasonic drilling of Ti-6Al-4V

    NASA Astrophysics Data System (ADS)

    Nasr, M.; Anwar, S.; El-Tamimi, A.; Pervaiz, S.

    2018-04-01

    Titanium and its alloys, e.g. Ti-6Al-4V, have widespread applications in the aerospace, automotive and medical industries. At the same time, titanium and its alloys are regarded as difficult-to-machine materials due to their high strength and low thermal conductivity. Significant effort has been devoted to improving the accuracy of machining processes for Ti-6Al-4V. The current study presents the use of the rotary ultrasonic drilling (RUD) process for machining high quality holes in Ti-6Al-4V. The study takes into account the effects of the main RUD input parameters, including spindle speed, ultrasonic power, feed rate and tool diameter, on the key output responses related to the accuracy of the drilled holes, namely the cylindricity and overcut errors. Analysis of variance (ANOVA) was employed to study the influence of the input parameters on the cylindricity and overcut errors. Regression models were then developed to find the optimal set of input parameters that minimizes the cylindricity and overcut errors.

  7. Measurement error associated with surveys of fish abundance in Lake Michigan

    USGS Publications Warehouse

    Krause, Ann E.; Hayes, Daniel B.; Bence, James R.; Madenjian, Charles P.; Stedman, Ralph M.

    2002-01-01

    In fisheries, imprecise measurements in catch data from surveys add uncertainty to the results of fishery stock assessments. The USGS Great Lakes Science Center (GLSC) began to survey the fall fish community of Lake Michigan in 1962 with bottom trawls. The measurement error was evaluated at the level of individual tows for nine fish species collected in this survey by applying a measurement-error regression model to replicated trawl data. It was found that the estimates of measurement-error variance ranged from 0.37 (deepwater sculpin, Myoxocephalus thompsoni) to 1.23 (alewife, Alosa pseudoharengus) on a logarithmic scale, corresponding to coefficients of variation of 66% to 156%. The estimates appeared to increase with the range of temperature occupied by the fish species. This association may be a result of the variability in the fall thermal structure of the lake. The estimates may also be influenced by other factors, such as pelagic behavior and schooling. Measurement error might be reduced by surveying the fish community during other seasons and/or by using additional technologies, such as acoustics. Measurement-error estimates should be considered when interpreting results of assessments that use abundance information from USGS-GLSC surveys of Lake Michigan and could be used if the survey design were altered. This study is the first to report estimates of measurement-error variance associated with this survey.
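
    Assuming the reported variances are on a natural-log scale and the measurement error is treated as lognormal, the quoted coefficients of variation follow from the standard relation

    \[
    \mathrm{CV}=\sqrt{e^{\sigma^{2}}-1},\qquad
    \sqrt{e^{0.37}-1}\approx 0.67,\qquad
    \sqrt{e^{1.23}-1}\approx 1.56,
    \]

    which reproduces the 66% and 156% figures given above.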

  8. Design of experiments-based monitoring of critical quality attributes for the spray-drying process of insulin by NIR spectroscopy.

    PubMed

    Maltesen, Morten Jonas; van de Weert, Marco; Grohganz, Holger

    2012-09-01

    Moisture content and aerodynamic particle size are critical quality attributes for spray-dried protein formulations. In this study, spray-dried insulin powders intended for pulmonary delivery were produced applying design of experiments methodology. Near infrared spectroscopy (NIR) in combination with preprocessing and multivariate analysis in the form of partial least squares projections to latent structures (PLS) was used to correlate the spectral data with the moisture content and the aerodynamic particle size measured by a time-of-flight principle. The PLS models predicting the moisture content were based on the chemical information of the water molecules in the NIR spectrum. The models yielded prediction errors (RMSEP) between 0.39% and 0.48%, with thermal gravimetric analysis used as the reference method. The PLS models predicting the aerodynamic particle size were based on the baseline offset in the NIR spectra and yielded prediction errors between 0.27 and 0.48 μm. The morphology of the spray-dried particles had a significant impact on the predictive ability of the models. Good predictive models could be obtained for spherical particles, with a calibration error (RMSECV) of 0.22 μm, whereas wrinkled particles resulted in much less robust models with a Q² of 0.69. Based on the results in this study, NIR is a suitable tool for process analysis of the spray-drying process and for control of moisture content and particle size, in particular for smooth and spherical particles.
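
    A minimal sketch of this kind of chemometric workflow (PLS calibration of a property against spectra, evaluated by a prediction error on held-out samples) is given below using scikit-learn and synthetic spectra. It is not the study's calibration: the spectral model and all numbers are assumptions, and preprocessing steps such as baseline correction or derivative filtering are omitted.

      # Hedged sketch (not the study's calibration): fit a PLS model that maps
      # NIR-like spectra to moisture content and report RMSEP on a held-out set.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(1)
      n_samples, n_wavelengths = 60, 300

      moisture = rng.uniform(1.0, 6.0, n_samples)            # % w/w (reference method)
      water_band = np.exp(-0.5 * ((np.arange(n_wavelengths) - 180) / 12.0) ** 2)
      spectra = (moisture[:, None] * water_band              # water absorption feature
                 + rng.normal(0.0, 0.05, (n_samples, n_wavelengths))   # noise
                 + rng.uniform(0.0, 0.3, (n_samples, 1)))    # baseline offset

      X_tr, X_te, y_tr, y_te = train_test_split(spectra, moisture, test_size=0.25,
                                                random_state=0)
      pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
      rmsep = np.sqrt(mean_squared_error(y_te, pls.predict(X_te).ravel()))
      print(f"RMSEP on held-out samples: {rmsep:.2f} % moisture")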

  9. Intelligent demand side management of residential building energy systems

    NASA Astrophysics Data System (ADS)

    Sinha, Maruti N.

    The advent of modern sensing technologies and data processing capabilities, together with the rising cost of energy, is driving the implementation of intelligent systems in buildings and houses, which constitute 41% of total energy consumption. The primary motivation has been to provide a framework for demand-side management and to improve overall reliability. The entire formulation is to be implemented on a NILM (Non-Intrusive Load Monitoring) system, a smart meter, which is going to play a vital role in the future of demand-side management. Utilities have started deploying smart meters throughout the world, which will essentially help to establish communication between utilities and consumers. This research is focused on the investigation of a suitable thermal model of a residential house, building up a control system, and developing diagnostic and energy usage forecast tools. The present work adopts a measurement-based approach. Identification of the building thermal parameters is the very first step towards developing performance measurement and controls. The proposed identification technique is a PEM (Prediction Error Method) based, discrete state-space model. Two different models have been devised. The first model is aimed at energy usage forecasting and diagnostics. Here a novel idea is investigated, which uses the integral of thermal capacity to identify the thermal model of the house. The purpose of the second identification is to build up a model for the control strategy. The controller should be able to take into account weather forecast information, deal with the operating point constraints, and at the same time minimize energy consumption. To design an optimal controller, an MPC (Model Predictive Control) scheme has been implemented instead of the present thermostatic/hysteretic control. This is a receding horizon approach. The capability of the proposed schemes has also been investigated.

  10. Maps of Jovian radio emission

    NASA Technical Reports Server (NTRS)

    Depater, I.

    1977-01-01

    Observations of Jupiter were made with the Westerbork telescope at all three available frequencies: 610 MHz, 1415 MHz, and 4995 MHz. The raw measurements were corrected for position errors, atmospheric extinction, Faraday rotation, clock, frequency, and baseline errors, and errors due to a shadowing effect. The data were then converted into a brightness distribution of the sky by Fourier transformation. Maps of both the thermal and nonthermal radiation were developed. The results indicate that the thermal disk of Jupiter measured at a wavelength of 6 cm has a temperature of 236 ± 15 K. The radiation belts have an overall structure governed by the trapping of electrons in the dipolar field of the planet, with significant beaming of the synchrotron radiation into the plane of the magnetic equator.

  11. Effect of Material Inhomogeneity on Thermal Performance of a Rheocast Aluminum Heatsink for Electronics Cooling

    NASA Astrophysics Data System (ADS)

    Payandeh, M.; Belov, I.; Jarfors, A. E. W.; Wessén, M.

    2016-06-01

    The relation between microstructural inhomogeneity and thermal conductivity of a rheocast component manufactured from two different aluminum alloys was investigated. The formation of two different types of primary α-Al particles was observed and related to the multistage solidification process during slurry preparation and die cavity filling. The microstructural inhomogeneity of the component was quantified as the fraction of α1-Al particles in the primary Al phase. A high fraction of coarse solute-lean α1-Al particles in the primary Al phase caused a higher thermal conductivity of the component in the near-to-gate region. A variation in thermal conductivity through the rheocast component of 10% was discovered. The effect of an inhomogeneous, temperature-dependent thermal conductivity on the thermal performance of a large rheocast heatsink for electronics cooling in an operating environment was studied by means of simulation. Design guidelines were developed to account for the thermal performance of heatsinks with inhomogeneous thermal conductivity, as caused by the rheocasting process. Under the modeling assumptions, the simulation results showed over 2.5% improvement in heatsink thermal resistance when the higher-conductivity near-to-gate region was located at the top of the heatsink. Assuming homogeneous thermo-physical properties in a rheocast heatsink may lead to greater than 3.5% error in the estimation of the maximum thermal resistance of the heatsink. The variation in thermal conductivity within a large rheocast heatsink was found to be important for obtaining a robust component design.

  12. Dancing the tight rope on the nanoscale—Calibrating a heat flux sensor of a scanning thermal microscope

    NASA Astrophysics Data System (ADS)

    Kloppstech, K.; Könne, N.; Worbes, L.; Hellmann, D.; Kittel, A.

    2015-11-01

    We report on a precise in situ procedure to calibrate the heat flux sensor of a near-field scanning thermal microscope. This sensitive thermal measurement is based on a 1ω modulation technique and utilizes a hot-wire method to build an accessible and controllable heat reservoir. This reservoir is coupled thermally by near-field interactions to our probe. Thus, the sensor's conversion relation V_th(Q*_GS) can be precisely determined, where V_th is the thermopower generated in the sensor's coaxial thermocouple and Q*_GS is the thermal flux from the reservoir through the sensor. We analyze our method with Gaussian error calculus, with an error estimate on all involved quantities. The overall relative uncertainty of the calibration procedure is evaluated to be about 8% for the measured conversion constant, i.e., (2.40 ± 0.19) μV/μW. Furthermore, we determine the sensor's thermal resistance to be about 0.21 K/μW and find the thermal resistance of the near-field mediated coupling, at a distance of about 250 pm between the calibration standard and the sensor, to be 53 K/μW.
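
    Assuming the conversion relation is linear in the flux, as the use of a single conversion constant S implies, the quoted ~8% relative uncertainty follows directly:

    \[
    V_{\mathrm{th}} = S\,Q^{*}_{GS}, \qquad
    \frac{\Delta S}{S}\approx\frac{0.19\ \mu\mathrm{V}/\mu\mathrm{W}}{2.40\ \mu\mathrm{V}/\mu\mathrm{W}}\approx 8\%.
    \]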

  13. A soft-computing methodology for noninvasive time-spatial temperature estimation.

    PubMed

    Teixeira, César A; Ruano, Maria Graça; Ruano, António E; Pereira, Wagner C A

    2008-02-01

    The safe and effective application of thermal therapies is restricted by the lack of reliable noninvasive temperature estimators. In this paper, the temporal echo-shifts of backscattered ultrasound signals, collected from a gel-based phantom, were tracked and, together with past temperature values, used as input information for radial basis function neural networks. The phantom was heated using a piston-like therapeutic ultrasound transducer. The neural models were used to estimate the temperature at different intensities and at points arranged along the therapeutic transducer radial line (60 mm from the transducer face). The model inputs, as well as the number of neurons, were selected using the multiobjective genetic algorithm (MOGA). The best attained models present, on average, a maximum absolute error of less than 0.5 degrees C, which is regarded as the borderline between a reliable and an unreliable estimator in hyperthermia/diathermia. In order to test the spatial generalization capacity, the best models were tested using spatial points not yet assessed, and some of them presented a maximum absolute error inferior to 0.5 degrees C, being "elected" as the best models. It should also be stressed that these best models have low implementation complexity, as desired for real-time applications.

  14. Regional surface soil heat flux estimate from multiple remote sensing data in a temperate and semiarid basin

    NASA Astrophysics Data System (ADS)

    Li, Nana; Jia, Li; Lu, Jing; Menenti, Massimo; Zhou, Jie

    2017-01-01

    The estimation of regional surface soil heat flux (G0) is very important for large-scale land surface process modeling. However, most regional G0 estimation methods are based on an empirical relationship between G0 and the net radiation flux. A physical model based on harmonic analysis was improved (referred to as the "HM model") and applied over the Heihe River Basin in northwest China with multiple remote sensing data, e.g., FY-2C, AMSR-E, and MODIS, and soil map data. A sensitivity analysis of the model was carried out as well. The results show that the improved model describes the variation of G0 well. Land surface temperature (LST) and thermal inertia (Γ) are the two key input variables of the HM model. Compared with in situ G0, there are some differences, mainly due to the differences between remotely sensed LST and in situ LST. The sensitivity analysis shows that errors from -7 to -0.5 K in LST amplitude and from -300 to 300 J m⁻² K⁻¹ s⁻¹/² in Γ will cause about 20% errors, which are acceptable for G0 estimation.

  15. Coupled heat transfer model and experiment study of semitransparent barrier materials in aerothermal environment

    NASA Astrophysics Data System (ADS)

    Wang, Da-Lin; Qi, Hong

    Semi-transparent materials (such as IR optical windows) are widely used for heat protection or transfer, temperature and image measurement, and safety in energy, space, military, and information technology applications. They are used, for instance, as ceramic coatings for thermal barriers of spacecraft or gas turbine blades, and for thermal image observation in extreme or hazardous environments. In this paper, a coupled conduction and radiation heat transfer model is established to describe the temperature distribution of a semitransparent thermal barrier medium within an aerothermal environment. In order to investigate this numerical model, one semi-transparent sample with a black coating was considered, and its photothermal properties were measured. The finite volume method (FVM) was used to solve the coupled model, and the temperature responses of the sample surfaces were obtained. An experimental study was also carried out. In the experiment, the aerodynamic heat flux was simulated by an electrical heater, and two experimental cases were designed in terms of the duration of aerodynamic heating. In the first case, the heater irradiates one surface of the sample continuously until the temperature of the other surface becomes constant; in the second case, the heater operates for only 130 s. The surface temperature responses of these two cases were recorded. Finally, the FVM model of the coupled conduction-radiation heat transfer was validated against the experimental study, with a relative error of less than 5%.

  16. Design and testing of a liquid cooled garment for hot environments.

    PubMed

    Guo, Tinghui; Shang, Bofeng; Duan, Bin; Luo, Xiaobing

    2015-01-01

    Liquid cooled garments (LCGs) are considered a viable method to protect individuals from hyperthermia and heat-related illness when working in thermally stressful environments. While the concept of LCGs was proposed over 50 years ago, the design and testing of these systems remain underdeveloped and in need of further study. In this study, a detailed heat transfer model of an LCG in a hot environment was built to analyze the effects of different factors on LCG performance and to identify the main limitations to achieving maximum performance. An LCG prototype was designed and fabricated. A series of tests was carried out with a modified thermal manikin method to validate the heat transfer model and to evaluate the thermal properties. Both experimental and predicted results show that the heat flux components satisfy the heat balance equation with an error of less than 10% at different flow rates. Thermal resistance analysis also shows that the thermal resistance between the cooling water and the ambient (R2) is more sensitive to the flow rate than the resistance between the skin surface and the cooling water (R1). When the flow rate increased from 225 to 544 mL/min, R2 decreased from 0.5 to 0.3 °C m²/W while R1 remained almost constant. A specific duration time was proposed to assess durability, and an optimized value of 1.68 h/kg was found using the heat transfer model. The present heat transfer model and the specific duration time concept could be used to optimize and evaluate this kind of LCG, respectively. Copyright © 2015 Elsevier Ltd. All rights reserved.
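
    The thermal-resistance bookkeeping behind R1 and R2 can be sketched as below; the variable names and sample numbers are illustrative, not the garment test data.

        # Thermal-resistance bookkeeping for an LCG test (illustrative numbers only).
        def resistance(t_hot, t_cold, heat_flux):
            """Area-specific thermal resistance, degC*m^2/W, for a heat flux in W/m^2."""
            return (t_hot - t_cold) / heat_flux

        t_skin, t_water, t_ambient = 34.0, 22.0, 40.0   # degC
        q_skin_to_water = 60.0                          # W/m^2 absorbed from the skin
        q_ambient_to_water = 45.0                       # W/m^2 gained from the hot environment

        R1 = resistance(t_skin, t_water, q_skin_to_water)        # skin -> cooling water
        R2 = resistance(t_ambient, t_water, q_ambient_to_water)  # ambient -> cooling water
        print(f"R1 = {R1:.2f}, R2 = {R2:.2f} degC*m^2/W")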

  17. Thermal Conductivities of Some Polymers and Composites

    DTIC Science & Technology

    2018-02-01

    ... volume fraction of glass and fabric style. The experimental results are compared to modeled results for Kt in composites. ... entities in a polymer above TG increases, so Cp will increase at TG. For Kt to remain constant, there would have to be a comparable decrease in α due to ... scanning calorimetry (DSC) method, and have error bars as large as the claimed effect. Their Kt values for their carbon fiber samples are comparable to ...

  18. Groundwater flux estimation in streams: A thermal equilibrium approach

    USGS Publications Warehouse

    Zhou, Yan; Fox, Garey A.; Miller, Ron B.; Mollenhauer, Robert; Brewer, Shannon K.

    2018-01-01

    Stream and groundwater interactions play an essential role in regulating flow, temperature, and water quality for stream ecosystems. Temperature gradients have been used to quantify vertical water movement in the streambed since the 1960s, but advancements in thermal methods are still possible. Seepage runs are a method commonly used to quantify exchange rates through a series of streamflow measurements but can be labor and time intensive. The objective of this study was to develop and evaluate a thermal equilibrium method as a technique for quantifying groundwater flux using monitored stream water temperature at a single point and readily available hydrological and atmospheric data. Our primary assumption was that stream water temperature at the monitored point was at thermal equilibrium with the combination of all heat transfer processes, including mixing with groundwater. By expanding the monitored stream point into a hypothetical, horizontal one-dimensional thermal modeling domain, we were able to simulate the thermal equilibrium achieved with known atmospheric variables at the point and quantify unknown groundwater flux by calibrating the model to the resulting temperature signature. Stream water temperatures were monitored at single points at nine streams in the Ozark Highland ecoregion and five reaches of the Kiamichi River to estimate groundwater fluxes using the thermal equilibrium method. When validated by comparison with seepage runs performed at the same time and reach, estimates from the two methods agreed with each other with an R2 of 0.94, a root mean squared error (RMSE) of 0.08 (m/d) and a Nash–Sutcliffe efficiency (NSE) of 0.93. In conclusion, the thermal equilibrium method was a suitable technique for quantifying groundwater flux with minimal cost and simple field installation given that suitable atmospheric and hydrological data were readily available.

  19. Groundwater flux estimation in streams: A thermal equilibrium approach

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Fox, Garey A.; Miller, Ron B.; Mollenhauer, Robert; Brewer, Shannon

    2018-06-01

    Stream and groundwater interactions play an essential role in regulating flow, temperature, and water quality for stream ecosystems. Temperature gradients have been used to quantify vertical water movement in the streambed since the 1960s, but advancements in thermal methods are still possible. Seepage runs are a method commonly used to quantify exchange rates through a series of streamflow measurements but can be labor and time intensive. The objective of this study was to develop and evaluate a thermal equilibrium method as a technique for quantifying groundwater flux using monitored stream water temperature at a single point and readily available hydrological and atmospheric data. Our primary assumption was that stream water temperature at the monitored point was at thermal equilibrium with the combination of all heat transfer processes, including mixing with groundwater. By expanding the monitored stream point into a hypothetical, horizontal one-dimensional thermal modeling domain, we were able to simulate the thermal equilibrium achieved with known atmospheric variables at the point and quantify unknown groundwater flux by calibrating the model to the resulting temperature signature. Stream water temperatures were monitored at single points at nine streams in the Ozark Highland ecoregion and five reaches of the Kiamichi River to estimate groundwater fluxes using the thermal equilibrium method. When validated by comparison with seepage runs performed at the same time and reach, estimates from the two methods agreed with each other with an R2 of 0.94, a root mean squared error (RMSE) of 0.08 (m/d) and a Nash-Sutcliffe efficiency (NSE) of 0.93. In conclusion, the thermal equilibrium method was a suitable technique for quantifying groundwater flux with minimal cost and simple field installation given that suitable atmospheric and hydrological data were readily available.

  20. Evaluation of platinum resistance thermometers

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran; Dillon-Townes, Lawrence A.

    1988-01-01

    An evaluation procedure for the characterization of industrial platinum resistance thermometers (PRTs) for use in the temperature range -120 to 160 C was investigated. This evaluation procedure consisted of calibration, thermal stability and hysteresis testing of four surface measuring PRTs. Five different calibration schemes were investigated for these sensors. The IPTS-68 formulation produced the most accurate result, yielding an average sensor systematic error of 0.02 C and a random error of 0.1 C. The sensors were checked for thermal stability by successive thermal cycling between room temperature, 160 C, and the boiling point of nitrogen. All the PRTs suffered from instability and hysteresis. The applicability of the self-heating technique as an in situ method for checking the calibration of PRTs located inside wind tunnels was investigated.

  1. Robust design of microchannel cooler

    NASA Astrophysics Data System (ADS)

    He, Ye; Yang, Tao; Hu, Li; Li, Leimin

    2005-12-01

    Microchannel coolers offer a new method for cooling high power diode lasers, with the advantages of small volume, high thermal dissipation efficiency, and low cost when mass-produced. In order to reduce the sensitivity of the design to manufacturing errors and other disturbances, the Taguchi method, a robust design technique, was chosen to optimize three parameters important to the cooling performance of a roof-like microchannel cooler. The hydromechanical and thermal mathematical model of the varying-section microchannel was solved using the finite volume method in FLUENT. A special program was written to automate the design process and improve efficiency. The optimal design presented is a compromise between optimal cooling performance and its robustness. This design method proves to be practical.
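
    As a rough illustration of how a Taguchi-style robustness analysis ranks parameter levels, the sketch below evaluates a smaller-the-better signal-to-noise ratio on an L9 orthogonal array; the factor layout and the response values are illustrative placeholders, not the microchannel-cooler design data.

        import numpy as np

        # Taguchi-style ranking of three 3-level factors on an L9 array
        # (smaller-the-better response; all values illustrative).
        L9 = np.array([
            [0, 0, 0], [0, 1, 1], [0, 2, 2],
            [1, 0, 1], [1, 1, 2], [1, 2, 0],
            [2, 0, 2], [2, 1, 0], [2, 2, 1],
        ])
        y = np.array([0.42, 0.40, 0.45, 0.38, 0.41, 0.44, 0.37, 0.43, 0.39])  # thermal resistance, K/W

        sn = -10.0 * np.log10(y ** 2)      # smaller-the-better S/N ratio per run
        for f, name in enumerate("ABC"):
            level_means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
            print(name, ["%.2f" % m for m in level_means],
                  "-> preferred level:", int(np.argmax(level_means)))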

  2. Lifelong modelling of properties for materials with technological memory

    NASA Astrophysics Data System (ADS)

    Falaleev, AP; Meshkov, VV; Vetrogon, AA; Ogrizkov, SV; Shymchenko, AV

    2016-10-01

    An investigation of real automobile parts produced from dual phase steel during the standard periods of the life cycle is presented, covering processes such as stamping, operation, automobile accidents, and subsequent repair. The development of the phenomenological model of the mechanical properties of such parts was based on the two-surface plasticity theory of Chaboche. As a consequence of the composite structure of dual phase steel, it was shown that the local mechanical properties of parts produced from this material change significantly during their life cycle, depending on accumulated plastic deformations and thermal treatments. Such mechanical property changes have a considerable impact on the accuracy of computer modelling of automobile behaviour. The most significant modelling errors were obtained at critical operating conditions, such as crashes and accidents. The model developed takes into account kinematic hardening (Bauschinger effect), isotropic hardening, non-linear elastic steel behaviour, and changes caused by thermal treatment. Using finite element analysis, the model allows the evaluation of the passive safety of a repaired car body, and enables increased restoration accuracy following an accident. The model was confirmed experimentally for parts produced from dual phase steel DP780.

  3. Multi-time-scale heat transfer modeling of turbid tissues exposed to short-pulsed irradiations.

    PubMed

    Kim, Kyunghan; Guo, Zhixiong

    2007-05-01

    A combined hyperbolic radiation and conduction heat transfer model is developed to simulate multi-time-scale heat transfer in turbid tissues exposed to short-pulsed irradiations. An initial temperature response of a tissue to an ultrashort pulse irradiation is analyzed by the volume-average method in combination with the transient discrete ordinates method for modeling the ultrafast radiation heat transfer. This response is found to reach pseudo steady state within 1 ns for the considered tissues. The single pulse result is then utilized to obtain the temperature response to pulse train irradiation at the microsecond/millisecond time scales. After that, the temperature field is predicted by the hyperbolic heat conduction model, which is solved by MacCormack's scheme with error-term correction. Finally, the hyperbolic conduction is compared with the traditional parabolic heat diffusion model. It is found that the maximum local temperatures are larger in the hyperbolic prediction than in the parabolic prediction. In the modeled dermis tissue, a 7% non-dimensional temperature increase is found. After about 10 thermal relaxation times, thermal waves fade away and the predictions of the hyperbolic and parabolic models become consistent.
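
    A minimal 1-D sketch of hyperbolic (Cattaneo-type) heat conduction integrated with a MacCormack predictor-corrector scheme is shown below. The tissue-like property values, boundary conditions and grid are illustrative assumptions, and the radiation source term and error-term correction used in the paper are not included.

        import numpy as np

        # 1-D Cattaneo-type hyperbolic heat conduction, MacCormack predictor-corrector:
        #   rho*c*dT/dt = -dq/dx,   tau*dq/dt + q = -k*dT/dx   (illustrative properties).
        nx, L = 200, 1e-3
        dx = L / (nx - 1)
        k, rho, c, tau = 0.5, 1000.0, 4000.0, 1e-3           # W/m/K, kg/m^3, J/kg/K, s
        dt = 0.25 * dx * np.sqrt(rho * c * tau / k)          # below the wave-speed CFL limit

        T = np.full(nx, 37.0)                                # temperature [degC]
        q = np.zeros(nx)                                     # heat flux [W/m^2]
        for step in range(2000):
            T[0] = 45.0                                      # heated surface; far end: q = 0
            Tp, qp = T.copy(), q.copy()
            # predictor (forward differences)
            Tp[:-1] = T[:-1] - dt / (rho * c) * (q[1:] - q[:-1]) / dx
            qp[:-1] = q[:-1] - dt / tau * (q[:-1] + k * (T[1:] - T[:-1]) / dx)
            # corrector (backward differences on the predicted values)
            T[1:] = 0.5 * (T[1:] + Tp[1:] - dt / (rho * c) * (qp[1:] - qp[:-1]) / dx)
            q[1:] = 0.5 * (q[1:] + qp[1:] - dt / tau * (qp[1:] + k * (Tp[1:] - Tp[:-1]) / dx))
            q[-1] = 0.0
        print(T[:5])                                         # temperatures near the heated face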

  4. The Influence of Injection Molding Parameter on Properties of Thermally Conductive Plastic

    NASA Astrophysics Data System (ADS)

    Hafizah Azis, N.; Zulafif Rahim, M.; Sa'ude, Nasuha; Rafai, N.; Yusof, M. S.; Tobi, ALM; Sharif, ZM; Rasidi Ibrahim, M.; Ismail, A. E.

    2017-05-01

    Thermally conductive plastic is a metal-plastic composite that is becoming popular because of its special characteristics. Injection moulding is regarded as the best process for mass manufacturing of this plastic composite due to its low production cost. The objective of this research is to find the best combination of injection parameter settings and to identify the most significant factors that affect the strength and thermal conductivity of the composite. Several parameters, such as the volume percentage of copper powder, the nozzle temperature, and the injection pressure of the injection moulding machine, were investigated. The analysis was done using Design Expert software by applying the design of experiments method. From the analysis, the significant effects were determined and mathematical models including only the significant effects were established. In order to ensure the validity of the models, confirmation runs were done and percentage errors were calculated. It was found that the best combination of parameter settings to maximize the tensile strength is a copper powder volume percentage of 3.00%, a nozzle temperature of 195°C, and an injection pressure of 65%, and the best combination of parameter settings to maximize the thermal conductivity is a copper powder volume percentage of 7.00%, a nozzle temperature of 195°C, and an injection pressure of 65%.

  5. Majorana Braiding with Thermal Noise.

    PubMed

    Pedrocchi, Fabio L; DiVincenzo, David P

    2015-09-18

    We investigate the self-correcting properties of a network of Majorana wires, in the form of a trijunction, in contact with a parity-preserving thermal environment. As opposed to the case where Majorana bound states are immobile, braiding Majorana bound states within a trijunction introduces dangerous error processes that we identify. Such errors prevent the lifetime of the memory from increasing with the size of the system. We confirm our predictions with Monte Carlo simulations. Our findings put a restriction on the degree of self-correction of this specific quantum computing architecture.

  6. CFD Script for Rapid TPS Damage Assessment

    NASA Technical Reports Server (NTRS)

    McCloud, Peter

    2013-01-01

    This grid generation script creates unstructured CFD grids for rapid thermal protection system (TPS) damage aeroheating assessments. The existing manual solution is cumbersome, open to errors, and slow. The invention takes a large-scale geometry grid and its large-scale CFD solution, and creates an unstructured patch grid that models the TPS damage. The flow field boundary condition for the patch grid is then interpolated from the large-scale CFD solution. It speeds up the generation of CFD grids and solutions in the modeling of TPS damage and its aeroheating assessment. This process was successfully utilized during STS-134.

  7. A path integral methodology for obtaining thermodynamic properties of nonadiabatic systems using Gaussian mixture distributions

    NASA Astrophysics Data System (ADS)

    Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel

    2018-05-01

    We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.

  8. Modelling and simulation of cure in pultrusion processes

    NASA Astrophysics Data System (ADS)

    Tucci, F.; Rubino, F.; Paradiso, V.; Carlone, P.; Valente, R.

    2017-10-01

    A trial-and-error approach is not a suitable method to optimize the pultrusion process because of the long start-up times required and the wide range of possible combinations of matrix and reinforcement. On the other hand, numerical approaches can be a suitable solution to test different parameter configurations. One of the main tasks in pultrusion processes is to obtain a complete and homogeneous resin polymerization. The formation of cross-links between polymeric chains is thermally induced, but it leads to strong exothermic heat generation; hence the thermal and the chemical phenomena are mutually affected, and the two problems have to be modelled in a coupled way. The mathematical model used in this work considers the composite as a lumped material, whose thermal and mechanical properties are evaluated as functions of the resin and fiber properties. The numerical scheme is based on a quasi-static approach in a three-dimensional Eulerian domain, which describes both the thermal and the chemical phenomena. The data obtained are used in a simplified C.H.I.L.E. (Cure Hardening Instantaneous Linear Elastic) model to compute the mechanical properties of the resin fraction in the pultruded profile. The two combined approaches allow the formulation of a numerical model which takes into account the normal (no-penetration) and tangential (viscosity/friction) interactions between die and profile, the pulling force, and the hydrostatic pressure of the liquid resin to evaluate the stress and strain fields induced by the process within the pultruded profile. The numerical models were implemented in the ABAQUS finite element suite by means of several user subroutines (in Fortran) which extend the basic software capabilities.

  9. Effective thermal conductivity determination for low-density insulating materials

    NASA Technical Reports Server (NTRS)

    Williams, S. D.; Curry, D. M.

    1978-01-01

    It was demonstrated that nonlinear least squares can be used to determine effective thermal conductivity, and a method was provided for assessing the relative error associated with the predicted values. The differences between dynamic and static determination of the effective thermal conductivity of low-density materials that transfer heat by a combination of conduction, convection, and radiation were discussed.
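
    A minimal sketch of estimating an effective-conductivity law by least squares, with approximate standard errors for the fitted parameters, is given below; the assumed functional form k_eff(T) = a + b*T^3 and the synthetic "measurements" are illustrative, not the paper's model or data.

        import numpy as np
        from scipy.optimize import least_squares

        # Least-squares fit of an assumed conductivity law k_eff(T) = a + b*T**3,
        # with approximate parameter standard errors from the Jacobian (synthetic data).
        def k_eff(T, a, b):
            return a + b * T ** 3

        T_meas = np.linspace(300.0, 900.0, 10)                               # K
        rng = np.random.default_rng(1)
        k_meas = k_eff(T_meas, 0.05, 2.0e-11) * (1.0 + 0.03 * rng.standard_normal(T_meas.size))

        fit = least_squares(lambda p: k_eff(T_meas, *p) - k_meas,
                            x0=[0.1, 1.0e-11], x_scale='jac')
        dof = T_meas.size - fit.x.size
        cov = (2.0 * fit.cost / dof) * np.linalg.inv(fit.jac.T @ fit.jac)    # cost = 0.5*sum(r^2)
        print("a, b =", fit.x, "+/-", np.sqrt(np.diag(cov)))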

  10. Thermal infrared spectroscopy and modeling of experimentally shocked basalts

    USGS Publications Warehouse

    Johnson, J. R.; Staid, M.I.; Kraft, M.D.

    2007-01-01

    New measurements of thermal infrared emission spectra (250-1400 cm-1; ~7-40 μm) of experimentally shocked basalt and basaltic andesite (17-56 GPa) exhibit changes in spectral features with increasing pressure consistent with changes in the structure of plagioclase feldspars. Major spectral absorptions in unshocked rocks between 350-700 cm-1 (due to Si-O-Si octahedral bending vibrations) and between 1000-1250 cm-1 (due to Si-O antisymmetric stretch motions of the silica tetrahedra) transform at pressures >20-25 GPa to two broad spectral features centered near 950-1050 and 400-450 cm-1. Linear deconvolution models using spectral libraries composed of common mineral and glass spectra replicate the spectra of shocked basalt relatively well up to shock pressures of 20-25 GPa, above which model errors increase substantially, coincident with the onset of diaplectic glass formation in plagioclase. Inclusion of shocked feldspar spectra in the libraries improves fits for more highly shocked basalt. However, deconvolution models of the basaltic andesite select shocked feldspar end-members even for unshocked samples, likely caused by the higher primary glass content in the basaltic andesite sample.

  11. Evaluation of WRF model-derived direct irradiance for solar thermal resource assessment over South Korea

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Young; Yun, Chang-Yeol; Kim, Chang Ki; Kang, Yong-Heack; Kim, Hyun-Goo; Lee, Sang-Nam; Kim, Shin-Young

    2017-06-01

    The South Korean government has started monitoring and reassessing new and renewable resources as part of the greenhouse gas reduction effort related to the Paris climate agreement. This study investigated the characteristics of model-derived direct normal irradiance (DNI) using ten-minute data from the Weather Research and Forecasting (WRF) model with 1 km grid spacing. First, global horizontal irradiance (GHI) and direct normal irradiance (DNI) from the model were compared with those of ground stations throughout South Korea to evaluate the uncertainty of the GHI-derived DNI. Then the solar thermal resource potential was assessed using a DNI map. The uncertainty of the irradiances appeared highly dependent on sky conditions. The root mean square error in DNI (GHI) was 45.39% (18.06%) for all-sky conditions, ranging from 9.92 to 51.93% (14.49 to 51.47%) for clear to overcast sky. These results indicate that DNI is more sensitive to cloud conditions in Korea, where around 72% of days in a year are cloudy. Finally, the DNI maps showed high values over most areas except the southeastern areas and Jeju Island, which are humid regions of South Korea.

  12. Optical control of the Advanced Technology Solar Telescope.

    PubMed

    Upton, Robert

    2006-08-10

    The Advanced Technology Solar Telescope (ATST) is an off-axis Gregorian astronomical telescope design. The ATST is expected to be subject to thermal and gravitational effects that result in misalignments of its mirrors and warping of its primary mirror. These effects require active, closed-loop correction to maintain its as-designed diffraction-limited optical performance. The simulation and modeling of the ATST with a closed-loop correction strategy are presented. The correction strategy is derived from the linear mathematical properties of two Jacobian, or influence, matrices that map the ATST rigid-body (RB) misalignments and primary mirror figure errors to wavefront sensor (WFS) measurements. The two Jacobian matrices also quantify the sensitivities of the ATST to RB and primary mirror figure perturbations. The modeled active correction strategy results in a decrease of the rms wavefront error averaged over the field of view (FOV) from 500 to 19 nm, subject to 10 nm rms WFS noise. This result is obtained utilizing nine WFSs distributed in the FOV with a 300 nm rms astigmatism figure error on the primary mirror. Correction of the ATST RB perturbations is demonstrated for an optimum subset of three WFSs with corrections improving the ATST rms wavefront error from 340 to 17.8 nm. In addition to the active correction of the ATST, an analytically robust sensitivity analysis that can be generally extended to a wider class of optical systems is presented.
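
    The linear correction step behind this kind of active alignment can be sketched as follows: a Jacobian (influence) matrix J maps rigid-body and figure perturbations to wavefront-sensor readings, and the correction is obtained from a pseudoinverse solve. The dimensions and noise level below are illustrative, not the ATST values.

        import numpy as np

        # Linear active-optics correction: WFS readings s = J @ x + noise, correction = -pinv(J) @ s
        # (dimensions and noise are illustrative, not the telescope's values).
        rng = np.random.default_rng(0)
        n_wfs_modes, n_dof = 90, 12                 # e.g. several WFSs x modes, perturbation DOFs
        J = rng.standard_normal((n_wfs_modes, n_dof))

        x_true = rng.standard_normal(n_dof)                           # misalignments / figure terms
        s = J @ x_true + 0.01 * rng.standard_normal(n_wfs_modes)      # noisy WFS measurement

        correction = -np.linalg.pinv(J) @ s                           # command to the actuators
        residual = np.linalg.norm(J @ (x_true + correction)) / np.sqrt(n_wfs_modes)
        print(f"rms residual wavefront signal after correction: {residual:.3e}")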

  13. Continuous estimation of evapotranspiration and gross primary productivity from an Unmanned Aerial System

    NASA Astrophysics Data System (ADS)

    Wang, S.; Bandini, F.; Jakobsen, J.; J Zarco-Tejada, P.; Liu, X.; Haugård Olesen, D.; Ibrom, A.; Bauer-Gottwein, P.; Garcia, M.

    2017-12-01

    Model prediction of evapotranspiration (ET) and gross primary productivity (GPP) using optical and thermal satellite imagery is biased towards clear-sky conditions. Unmanned Aerial Systems (UAS) can collect optical and thermal signals at unprecedented very high spatial resolution (< 1 meter) under sunny and cloudy weather conditions. However, methods to obtain model outputs between image acquisitions are still needed. This study uses UAS-based optical and thermal observations to continuously estimate daily ET and GPP in a Danish willow forest for an entire growing season of 2016. A hexacopter equipped with multispectral and thermal infrared cameras and a real-time kinematic Global Navigation Satellite System was used. The Normalized Difference Vegetation Index (NDVI) and the Temperature Vegetation Dryness Index (TVDI) were used as proxies for leaf area index and soil moisture conditions, respectively. To obtain continuous daily records between UAS acquisitions, UAS surface temperature was assimilated by the ensemble Kalman filter into a prognostic land surface model (Noilhan and Planton, 1989), which relies on the force-restore method, to simulate the continuous land surface temperature. NDVI was interpolated into daily time steps by the cubic spline method. Using these continuous datasets, a joint ET and GPP model, which combines the Priestley-Taylor Jet Propulsion Laboratory ET model (Fisher et al., 2008; Garcia et al., 2013) and the Light Use Efficiency GPP model (Potter et al., 1993), was applied. The simulated ET and GPP were compared with the footprint of eddy covariance observations. The simulated daily ET has a root mean square error (RMSE) of 14.41 W•m-2 and a correlation coefficient of 0.83. The simulated daily GPP has an RMSE of 1.56 g•C•m-2•d-1 and a correlation coefficient of 0.87. This study demonstrates the potential of UAS-based multispectral and thermal mapping to continuously estimate ET and GPP for both sunny and cloudy weather conditions.
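
    A minimal sketch of the ensemble Kalman filter analysis step used to nudge a modelled surface temperature toward a UAS observation is given below; the ensemble size, error levels and two-component state are illustrative assumptions, and the actual study embeds this update in a force-restore land surface model.

        import numpy as np

        # Perturbed-observation EnKF analysis step for assimilating an observed surface
        # temperature into a two-component state (all numbers illustrative).
        rng = np.random.default_rng(42)
        n_ens = 30
        X = np.array([300.0, 295.0]) + rng.standard_normal((n_ens, 2))   # forecast: [T_surf, T_deep]

        H = np.array([[1.0, 0.0]])                   # observation operator: T_surf is observed
        y_obs, r_obs = 303.0, 1.0                    # observation [K] and its error variance

        A = X - X.mean(axis=0)                       # ensemble anomalies
        P = A.T @ A / (n_ens - 1)                    # sample covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + np.array([[r_obs]]))   # Kalman gain
        y_pert = y_obs + np.sqrt(r_obs) * rng.standard_normal((n_ens, 1))
        X_a = X + (y_pert - X @ H.T) @ K.T           # analysis ensemble

        print("forecast mean:", X.mean(axis=0), "-> analysis mean:", X_a.mean(axis=0))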

  14. Optimizing X-ray mirror thermal performance using matched profile cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lin; Cocco, Daniele; Kelez, Nicholas

    2015-08-07

    To cover a large photon energy range, the length of an X-ray mirror is often longer than the beam footprint length for much of the applicable energy range. To limit thermal deformation of such a water-cooled X-ray mirror, a technique using side cooling with a cooled length shorter than the beam footprint length is proposed. This cooling length can be optimized by using finite-element analysis. For the Kirkpatrick–Baez (KB) mirrors at LCLS-II, the thermal deformation can be reduced by a factor of up to 30, compared with full-length cooling. Furthermore, a second, alternative technique, based on a similar principle, is presented: using a long, single-length cooling block on each side of the mirror and adding electric heaters between the cooling blocks and the mirror substrate. The electric heaters consist of a number of cells, located along the mirror length. The total effective length of the electric heater can then be adjusted by choosing which cells to energize, using electric power supplies. The residual height error can be minimized to 0.02 nm RMS by using optimal heater parameters (length and power density). Compared with a case without heaters, this residual height error is reduced by a factor of up to 45. The residual height error in the LCLS-II KB mirrors, due to free-electron laser beam heat load, can be reduced by a factor of ~11 below the requirement. The proposed techniques are also effective in reducing thermal slope errors and are, therefore, applicable to white beam mirrors in synchrotron radiation beamlines.

  15. Thrust Vector Control for Nuclear Thermal Rockets

    NASA Technical Reports Server (NTRS)

    Ensworth, Clinton B. F.

    2013-01-01

    Future space missions may use Nuclear Thermal Rocket (NTR) stages for human and cargo missions to Mars and other destinations. The vehicles are likely to require engine thrust vector control (TVC) to maintain desired flight trajectories. This paper explores requirements and concepts for TVC systems for representative NTR missions. Requirements for TVC systems were derived using 6 degree-of-freedom models of NTR vehicles. Various flight scenarios were evaluated to determine vehicle attitude control needs and to determine the applicability of TVC. Outputs from the models yielded key characteristics including engine gimbal angles, gimbal rates and gimbal actuator power. Additional factors such as engine thrust variability and engine thrust alignment errors were examined for their impacts on gimbal requirements. Various technologies are surveyed for TVC systems for NTR applications. A key factor in technology selection is the unique radiation environment present in NTR stages. Other considerations including mission duration and thermal environments influence the selection of optimal TVC technologies. Candidate technologies are compared to see which technologies, or combinations of technologies, best fit the requirements for selected NTR missions. Representative TVC systems are proposed and key properties such as mass and power requirements are defined. The outputs from this effort can be used to refine NTR system sizing models, providing higher fidelity definition for TVC systems for future studies.

  16. Investigating the effect of suspensions nanostructure on the thermophysical properties of nanofluids

    NASA Astrophysics Data System (ADS)

    Tesfai, Waka; Singh, Pawan K.; Masharqa, Salim J. S.; Souier, Tewfik; Chiesa, Matteo; Shatilla, Youssef

    2012-12-01

    The effect of fractal dimensions and Feret diameter of aggregated nanoparticles on the prediction of the thermophysical properties of nanofluids is demonstrated. The fractal dimensions and Feret diameter distributions of particle agglomerates are quantified from scanning electron and probe microscope imaging of yttria nanofluids. The results are compared with the fractal dimensions calculated by fitting the rheological properties of yttria nanofluids against the modified Krieger-Dougherty model. Nanofluids of less than 1 vol. % particle loading are found to have fractal dimensions below 1.8, which is typical of diffusion-controlled cluster formation. By contrast, an increase in the particle loading increases the fractal dimension to 2.0-2.2. The fractal dimensions obtained from both methods are employed to predict the thermal conductivity of the nanofluids using the modified Maxwell-Garnet (M-G) model. The prediction from rheology is found to be inadequate and might lead to errors of up to 8% in thermal conductivity for an improper choice of aspect ratio. Nevertheless, the prediction of the modified M-G model from the imaging is found to agree well with the experimentally observed effective thermal conductivity of the nanofluids. In addition, this study opens a new window on the study of aggregate kinetics, which is critical in tuning the properties of multiphase systems.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebetrau, A.M.

    Work is underway at Pacific Northwest Laboratory (PNL) to improve the probabilistic analysis used to model pressurized thermal shock (PTS) incidents in reactor pressure vessels, and, further, to incorporate these improvements into the existing Vessel Integrity Simulation Analysis (VISA) code. Two topics related to work on input distributions in VISA are discussed in this paper. The first involves the treatment of flaw size distributions and the second concerns errors in the parameters in the (Guthrie) equation which is used to compute ΔRT_NDT, the shift in reference temperature for nil ductility transition.

  18. Modeling the small-scale dish-mounted solar thermal Brayton cycle

    NASA Astrophysics Data System (ADS)

    Le Roux, Willem G.; Meyer, Josua P.

    2016-05-01

    The small-scale dish-mounted solar thermal Brayton cycle (STBC) makes use of a sun-tracking dish reflector, solar receiver, recuperator and micro-turbine to generate power in the range of 1-20 kW. The modeling of such a system, using a turbocharger as micro-turbine, is required so that optimisation and further development of an experimental setup can be done. As a validation, an analytical model of the small-scale STBC in Matlab, where the net power output is determined from an exergy analysis, is compared with Flownex, an integrated systems CFD code. A 4.8 m diameter parabolic dish with open-cavity tubular receiver and plate-type counterflow recuperator is considered, based on previous work. A dish optical error of 10 mrad, a tracking error of 1° and a receiver aperture area of 0.25 m × 0.25 m are considered. Since the recuperator operates at a very high average temperature, the recuperator is modeled using an updated ɛ-NTU method which takes heat loss to the environment into consideration. Compressor and turbine maps from standard off-the-shelf Garrett turbochargers are used. The results show that for the calculation of the steady-state temperatures and pressures, there is good comparison between the Matlab and Flownex results (within 8%) except for the recuperator outlet temperature, which is due to the use of different ɛ-NTU methods. With the use of Matlab and Flownex, it is shown that the small-scale open STBC with an existing off-the-shelf turbocharger could generate a positive net power output with solar-to-mechanical efficiency of up to 12%, with much room for improvement.
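
    The recuperator calculation can be illustrated with the standard counterflow effectiveness-NTU relation, as in the sketch below (without the heat-loss correction added in the paper); the mass flow, specific heats and UA value are illustrative assumptions.

        import numpy as np

        # Counterflow effectiveness-NTU recuperator estimate (illustrative values only).
        def counterflow_effectiveness(ntu, cr):
            if abs(cr - 1.0) < 1e-9:
                return ntu / (1.0 + ntu)
            return (1.0 - np.exp(-ntu * (1.0 - cr))) / (1.0 - cr * np.exp(-ntu * (1.0 - cr)))

        m_dot, cp_cold, cp_hot = 0.06, 1005.0, 1080.0       # kg/s, J/kg/K
        C_cold, C_hot = m_dot * cp_cold, m_dot * cp_hot
        C_min, C_max = min(C_cold, C_hot), max(C_cold, C_hot)

        UA = 250.0                                          # overall conductance, W/K
        eps = counterflow_effectiveness(UA / C_min, C_min / C_max)

        T_hot_in, T_cold_in = 900.0, 450.0                  # K: turbine exit, compressor exit
        Q = eps * C_min * (T_hot_in - T_cold_in)            # recuperated heat, W
        print(f"effectiveness = {eps:.3f}, cold-side outlet = {T_cold_in + Q / C_cold:.1f} K")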

  19. Developments in FTIR spectroscopy of diamonds and better constraints on diamond thermal histories

    NASA Astrophysics Data System (ADS)

    Kohn, Simon; Speich, Laura; Smith, Christopher; Bulanova, Galina

    2017-04-01

    Fourier Transform Infrared (FTIR) spectroscopy is a commonly-used technique for investigating diamonds. It gives the most useful information if spatially-resolved measurements are used [1]. In this contribution we discuss the best way to acquire and present FTIR data from diamonds, using examples from Murowa (Zimbabwe), Argyle (Australia) and Machado River (Brazil). Examples of FTIR core-to-rim line scans, maps with high spatial resolution and maps with high spectral resolution that are fitted to extract the spatial variation of different nitrogen and hydrogen defects are presented. Model mantle residence temperatures are calculated from the concentration of A and B nitrogen-containing defects in the diamonds using known times of annealing in the mantle. A new, two-stage thermal annealing model is presented that better constrains the thermal history of the diamond and that of the mantle lithosphere in which the diamond resided. The effect of heterogeneity within the analysed FTIR volume is quantitatively assessed and errors in model temperatures that can be introduced by studying whole diamonds instead of thin plates are discussed. The kinetics of platelet growth and degradation will be discussed and the potential for two separate, kinetically-controlled defect reactions to be used to constrain a full thermal history of the diamond will be assessed. [1] Kohn, S.C., Speich, L., Smith, C.B. and Bulanova, G.P., 2016. FTIR thermochronometry of natural diamonds: A closer look. Lithos, 265, pp.148-158.

  20. Micromachined Fluid Inertial Sensors

    PubMed Central

    Liu, Shiqiang; Zhu, Rong

    2017-01-01

    Micromachined fluid inertial sensors are an important class of inertial sensors, mainly comprising thermal accelerometers and fluid gyroscopes, which have now been under development for about 20 years, since the end of the last century. Compared with conventional silicon or quartz inertial sensors, the fluid inertial sensors use a fluid instead of a solid proof mass as the moving and sensitive element, and thus offer advantages of simple structures, low cost, high shock resistance, and large measurement ranges, while the sensitivity and bandwidth are not competitive. Many studies and various designs have been reported in the past two decades. This review firstly introduces the working principles of fluid inertial sensors, followed by the relevant research developments. The micromachined thermal accelerometers based on thermal convection have matured and become commercialized. However, the micromachined fluid gyroscopes, which are based on jet flow or thermal flow, are less mature. The key issues and technologies of the thermal accelerometers, mainly including bandwidth, temperature compensation, monolithic integration of tri-axis accelerometers, and strategies for high production yields, are also summarized and discussed. For the micromachined fluid gyroscopes, improving integration and sensitivity and reducing thermal errors and cross-coupling errors are the issues of most concern. PMID:28216569

  1. USING TIME VARIANT VOLTAGE TO CALCULATE ENERGY CONSUMPTION AND POWER USE OF BUILDING SYSTEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makhmalbaf, Atefe; Augenbroe, Godfried

    2015-12-09

    Buildings are the main consumers of electricity across the world. However, in the research and studies related to building performance assessment, the focus has been on evaluating the energy efficiency of buildings whereas the instantaneous power efficiency has been overlooked as an important aspect of total energy consumption. As a result, we never developed adequate models that capture both thermal and electrical characteristics (e.g., voltage) of building systems to assess the impact of variations in the power system and emerging technologies of the smart grid on building energy and power performance and vice versa. This paper argues that the power performance of buildings as a function of electrical parameters should be evaluated in addition to systems’ mechanical and thermal behavior. The main advantage of capturing electrical behavior of building load is to better understand instantaneous power consumption and more importantly to control it. Voltage is one of the electrical parameters that can be used to describe load. Hence, voltage-dependent power models are constructed in this work and they are coupled with existing thermal energy models. Lack of models that describe electrical behavior of systems also adds to the uncertainty of energy consumption calculations carried out in building energy simulation tools such as EnergyPlus, a common building energy modeling and simulation tool. To integrate voltage-dependent power models with thermal models, the thermal cycle (operation mode) of each system was fed into the voltage-based electrical model. Energy consumption of the systems used in this study was simulated using EnergyPlus. Simulated results were then compared with estimated and measured power data. The mean square error (MSE) between simulated, estimated, and measured values was calculated. Results indicate that the estimated power has a lower MSE with respect to the measured data than the simulated results do. Results discussed in this paper will illustrate the significance of enhancing building energy models with electrical characteristics. This would support different studies such as those related to modernization of the power system that require micro-scale building-grid interaction, evaluating building energy efficiency with power efficiency considerations, and also design and control decisions that rely on accuracy of building energy simulation results.

  2. Process Optimization of Dual-Laser Beam Welding of Advanced Al-Li Alloys Through Hot Cracking Susceptibility Modeling

    NASA Astrophysics Data System (ADS)

    Tian, Yingtao; Robson, Joseph D.; Riekehr, Stefan; Kashaev, Nikolai; Wang, Li; Lowe, Tristan; Karanika, Alexandra

    2016-07-01

    Laser welding of advanced Al-Li alloys has been developed to meet the increasing demand for light-weight and high-strength aerospace structures. However, welding of high-strength Al-Li alloys can be problematic due to the tendency for hot cracking. Finding suitable welding parameters and filler material for this combination currently requires extensive and costly trial and error experimentation. The present work describes a novel coupled model to predict hot crack susceptibility (HCS) in Al-Li welds. Such a model can be used to shortcut the weld development process. The coupled model combines finite element process simulation with a two-level HCS model. The finite element process model predicts thermal field data for the subsequent HCS hot cracking prediction. The model can be used to predict the influences of filler wire composition and welding parameters on HCS. The modeling results have been validated by comparing predictions with results from fully instrumented laser welds performed under a range of process parameters and analyzed using high-resolution X-ray tomography to identify weld defects. It is shown that the model is capable of accurately predicting the thermal field around the weld and the trend of HCS as a function of process parameters.

  3. Thermal dye double indicator dilution measurement of lung water in man: comparison with gravimetric measurements.

    PubMed Central

    Mihm, F G; Feeley, T W; Jamieson, S W

    1987-01-01

    The thermal dye double indicator dilution technique for estimating lung water was compared with gravimetric analyses in nine human subjects who were organ donors. As observed in animal studies, the thermal dye measurement of extravascular thermal volume (EVTV) consistently overestimated gravimetric extravascular lung water (EVLW), the mean (SEM) difference being 3.43 (0.59) ml/kg. In eight of the nine subjects, EVTV minus 3.43 ml/kg would yield an estimate of EVLW that would be from 3.23 ml/kg under to 3.37 ml/kg over the actual EVLW value at the 95% confidence limits. Reproducibility, assessed with the standard error of the mean percentage, suggested that a 15% change in EVTV can be reliably detected with repeated measurements. One subject was excluded from analysis because the EVTV measurement grossly underestimated the actual EVLW. This error was associated with regional injury observed on gross examination of the lung. Experimental and clinical evidence suggest that the thermal dye measurement provides a reliable estimate of lung water in diffuse pulmonary oedema states. PMID:3616974

  4. Thermal-mechanical behavior of high precision composite mirrors

    NASA Technical Reports Server (NTRS)

    Kuo, C. P.; Lou, M. C.; Rapp, D.

    1993-01-01

    Composite mirror panels were designed, constructed, analyzed, and tested in the framework of a NASA precision segmented reflector task. The deformations of the reflector surface during exposure to space environments were predicted using a finite element model. The composite mirror panels have graphite-epoxy or graphite-cyanate facesheets, separated by an aluminum or a composite honeycomb core. It is pointed out that in order to carry out detailed modeling of composite mirrors with high accuracy, it is necessary to have temperature-dependent properties of the materials involved and the type and magnitude of manufacturing errors and material nonuniformities. The structural modeling and analysis efforts addressed the impact of key design and materials parameters on the performance of the mirrors.

  5. LANDSAT 4 band 6 data evaluation

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Multiple altitude TM thermal infrared images were analyzed and the observed radiance values were computed. The data obtained represent an experimental relation between perceived radiance and altitude. A LOWTRAN approach was tested which incorporates a modification to the path radiance model. This modification assumes that the scattering out of the optical path is equal in magnitude and direction to the scattering into the path. The radiance observed at altitude by an aircraft sensor was used as input to the model. Expected radiance as a function of altitude was then computed down to the ground. The results were not very satisfactory because of somewhat large errors in temperature and because of the difference in the shape of the modeled and experimental curves.

  6. Glass sample characterization

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees

    1990-01-01

    The development of an in-house integrated optical performance modelling capability at MSFC is described. This performance model will take into account the effects of structural and thermal distortions, as well as metrology errors in optical surfaces, to predict the performance of large and complex optical systems, such as the Advanced X-Ray Astrophysics Facility. The necessary hardware and software were identified to implement an integrated optical performance model. A number of design, development, and testing tasks were supported, including identification of the debonded mirror pad and rebuilding of the Technology Mirror Assembly. Over 300 samples of Zerodur were prepared in different sizes and shapes for acid etching, coating, and polishing experiments to characterize the subsurface damage and stresses produced by the grinding and polishing operations.

  7. Uncertainty of Passive Imager Cloud Optical Property Retrievals to Instrument Radiometry and Model Assumptions: Examples from MODIS

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Wind, Galina; Meyer, Kerry; Amarasinghe, Nandana; Arnold, G. Thomas; Zhang, Zhibo; King, Michael D.

    2013-01-01

    The optical and microphysical structure of clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS on the NASA EOS Terra and Aqua platforms, simultaneous global-daily 1 km retrievals of cloud optical thickness (COT) and effective particle radius (CER) are provided, as well as the derived water path (WP). The cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate retrieval datasets for various two-channel retrievals, typically a VIS/NIR channel paired with a 1.6, 2.1, and 3.7 μm spectral channel. The MOD06 forward model is derived from a homogeneous plane-parallel cloud. In Collection 5 processing (completed in 2007 with a modified Collection 5.1 completed in 2010), pixel-level retrieval uncertainties were calculated for the following non-3-D error sources: radiometry, surface spectral albedo, and atmospheric corrections associated with model analysis uncertainties (water vapor only). The latter error source includes error correlation across the retrieval spectral channels. Estimates of uncertainty in 1° aggregated (Level-3) means were also provided assuming unity correlation between error sources for all pixels in a grid for a single day, and zero correlation of error sources from one day to the next. In Collection 6 (expected to begin in late summer 2013) we expanded the uncertainty analysis to include: (a) scene-dependent calibration uncertainty that depends on new band and detector-specific Level 1B uncertainties, (b) new model error sources derived from the look-up tables which includes sensitivities associated with wind direction over the ocean and uncertainties in liquid water and ice effective variance, (c) thermal emission uncertainties in the 3.7 μm band associated with cloud and surface temperatures that are needed to extract reflected solar radiation from the total radiance signal, (d) uncertainty in the solar spectral irradiance at 3.7 μm, and (e) addition of stratospheric ozone uncertainty in visible atmospheric corrections. A summary of the approach and example Collection 6 results will be shown.

  8. Uncertainty of passive imager cloud retrievals to instrument radiometry and model assumptions: Examples from MODIS Collection 6

    NASA Astrophysics Data System (ADS)

    Platnick, S.; Wind, G.; Amarasinghe, N.; Arnold, G. T.; Zhang, Z.; Meyer, K.; King, M. D.

    2013-12-01

    The optical and microphysical structure of clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1km retrievals of cloud optical thickness (COT) and effective particle radius (CER) are provided, as well as the derived water path (WP). The cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate retrieval datasets for various two-channel retrievals, typically a VIS/NIR channel paired with a 1.6, 2.1, and 3.7 μm spectral channel. The MOD06 forward model is derived from a homogeneous plane-parallel cloud. In Collection 5 processing (completed in 2007 with a modified Collection 5.1 completed in 2010), pixel-level retrieval uncertainties were calculated for the following non-3-D error sources: radiometry, surface spectral albedo, and atmospheric corrections associated with model analysis uncertainties (water vapor only). The latter error source includes error correlation across the retrieval spectral channels. Estimates of uncertainty in 1° aggregated (Level-3) means were also provided assuming unity correlation between error sources for all pixels in a grid for a single day, and zero correlation of error sources from one day to the next. In Collection 6 (expected to begin in late summer 2013) we expanded the uncertainty analysis to include: (a) scene-dependent calibration uncertainty that depends on new band and detector-specific Level 1B uncertainties, (b) new model error sources derived from the look-up tables which includes sensitivities associated with wind direction over the ocean and uncertainties in liquid water and ice effective variance, (c) thermal emission uncertainties in the 3.7 μm band associated with cloud and surface temperatures that are needed to extract reflected solar radiation from the total radiance signal, (d) uncertainty in the solar spectral irradiance at 3.7 μm, and (e) addition of stratospheric ozone uncertainty in visible atmospheric corrections. A summary of the approach and example Collection 6 results will be shown.

  9. Diamond fly cutting of aluminum thermal infrared flat mirrors for the OSIRIS-REx Thermal Emission Spectrometer (OTES) instrument

    NASA Astrophysics Data System (ADS)

    Groppi, Christopher E.; Underhill, Matthew; Farkas, Zoltan; Pelham, Daniel

    2016-07-01

    We present the fabrication and measurement of monolithic aluminum flat mirrors designed to operate in the thermal infrared for the OSIRIS-Rex Thermal Emission Spectrometer (OTES) space instrument. The mirrors were cut using a conventional fly cutter with a large radius diamond cutting tool on a high precision Kern Evo 3-axis CNC milling machine. The mirrors were measured to have less than 150 angstroms RMS surface error.

  10. Time-Separating Heating and Sensor Functions of Thermistors in Precision Thermal Control Applications

    NASA Technical Reports Server (NTRS)

    Cho, Hyung J.; Sukhatme, Kalyani G.; Mahoney, John C.; Penanen, Konstantin Penanen; Vargas, Rudolph, Jr.

    2010-01-01

    A method allows combining the functions of a heater and a thermometer in a single device, a thermistor, with minimal temperature read errors. Because thermistors typically have a much smaller thermal mass than the objects they monitor, the thermal time to equilibrate the thermometer to the temperature of the object is typically much shorter than the thermal time of the object to change its temperature in response to an external perturbation.

  11. Superresolving Black Hole Images with Full-Closure Sparse Modeling

    NASA Astrophysics Data System (ADS)

    Crowley, Chelsea; Akiyama, Kazunori; Fish, Vincent

    2018-01-01

    It is believed that almost all galaxies have black holes at their centers. Imaging a black hole is a primary objective to answer scientific questions relating to relativistic accretion and jet formation. The Event Horizon Telescope (EHT) is set to capture images of two nearby black holes: Sagittarius A* at the center of the Milky Way galaxy, roughly 26,000 light years away, and the black hole at the center of M87 (Virgo A), a large elliptical galaxy about 50 million light years away. Sparse imaging techniques have shown great promise for reconstructing high-fidelity superresolved images of black holes from simulated data. Previous work has included the effects of atmospheric phase errors and thermal noise, but not systematic amplitude errors that arise due to miscalibration. We explore a full-closure imaging technique with sparse modeling that uses closure amplitudes and closure phases to improve the imaging process. This new technique can successfully handle data with systematic amplitude errors. Applying our technique to synthetic EHT data of M87, we find that full-closure sparse modeling can reconstruct images better than traditional methods and recover key structural information on the source, such as the shape and size of the predicted photon ring. These results suggest that our new approach will provide superior imaging performance for data from the EHT and other interferometric arrays.

  12. Science support for the Earth radiation budget experiment

    NASA Technical Reports Server (NTRS)

    Coakley, James A., Jr.

    1994-01-01

    The work undertaken as part of the Earth Radiation Budget Experiment (ERBE) included the following major components: The development and application of a new cloud retrieval scheme to assess errors in the radiative fluxes arising from errors in the ERBE identification of cloud conditions. The comparison of the anisotropy of reflected sunlight and emitted thermal radiation with the anisotropy predicted by the Angular Dependence Models (ADM's) used to obtain the radiative fluxes. Additional studies included the comparison of calculated longwave cloud-free radiances with those observed by the ERBE scanner and the use of ERBE scanner data to track the calibration of the shortwave channels of the Advanced Very High Resolution Radiometer (AVHRR). Major findings included: the misidentification of cloud conditions by the ERBE scene identification algorithm could cause 15 percent errors in the shortwave flux reflected by certain scene types. For regions containing mixtures of scene types, the errors were typically less than 5 percent, and the anisotropies of the shortwave and longwave radiances exhibited a spatial scale dependence which, because of the growth of the scanner field of view from nadir to limb, gave rise to a view zenith angle dependent bias in the radiative fluxes.

  13. General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets

    NASA Technical Reports Server (NTRS)

    Marchen, Luis F.

    2011-01-01

    The Coronagraph Performance Error Budget (CPEB) tool automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. The tool uses a Code V prescription of the optical train, and uses MATLAB programs to call ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled fine-steering mirrors (FSMs). The sensitivity matrices are imported by macros into Excel 2007, where the error budget is evaluated. The user specifies the particular optics of interest, and chooses the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions, and combines that with the sensitivity matrices to generate an error budget for the system. CPEB also contains a combination of form and ActiveX controls with Visual Basic for Applications code to allow for user interaction in which the user can perform trade studies such as changing engineering requirements, and identifying and isolating stringent requirements. It contains summary tables and graphics that can be instantly used for reporting results in view graphs. The entire process to obtain a coronagraphic telescope performance error budget has been automated into three stages: conversion of optical prescription from Zemax or Code V to MACOS (in-house optical modeling and analysis tool), a linear models process, and an error budget tool process. The first process was improved by developing a MATLAB package based on the Class Constructor Method with a number of user-defined functions that allow the user to modify the MACOS optical prescription. The second process was modified by creating a MATLAB package that contains user-defined functions that automate the process. The user interfaces with the process by utilizing an initialization file where the user defines the parameters of the linear model computations. Other than this, the process is fully automated. The third process was developed based on the Terrestrial Planet Finder coronagraph Error Budget Tool, but was fully automated by using VBA code, form, and ActiveX controls.

  14. Biomass Thermogravimetric Analysis: Uncertainty Determination Methodology and Sampling Maps Generation

    PubMed Central

    Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Eguía, Pablo; Collazo, Joaquín

    2010-01-01

    The objective of this study was to develop a methodology for the determination of the maximum sampling error and confidence intervals of thermal properties obtained from thermogravimetric analysis (TG), including moisture, volatile matter, fixed carbon and ash content. The sampling procedure of the TG analysis was of particular interest and was conducted with care. The results of the present study were compared to those of a prompt analysis, and a correlation between the mean values and maximum sampling errors of the methods was not observed. In general, low and acceptable levels of uncertainty and error were obtained, demonstrating that the properties evaluated by TG analysis were representative of the overall fuel composition. The accurate determination of the thermal properties of biomass with precise confidence intervals is of particular interest in energetic biomass applications. PMID:20717532
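
    A minimal sketch of the kind of sampling-uncertainty calculation this record discusses, i.e. a Student-t confidence interval and maximum sampling error for a TG-derived property measured on n sub-samples, is shown below; the replicate ash-content values are illustrative placeholders, not data from the paper.

        import numpy as np
        from scipy import stats

        # Student-t confidence interval / maximum sampling error for a TG-derived property
        # measured on n sub-samples (replicate values are illustrative).
        ash = np.array([1.92, 2.05, 1.98, 2.10, 1.95, 2.02])   # wt%, replicate TG analyses
        n, mean, s = ash.size, ash.mean(), ash.std(ddof=1)

        conf = 0.95
        t_crit = stats.t.ppf(0.5 + conf / 2.0, df=n - 1)
        max_sampling_error = t_crit * s / np.sqrt(n)           # half-width of the interval
        print(f"ash = {mean:.2f} +/- {max_sampling_error:.2f} wt% ({conf:.0%} confidence)")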

  15. Atmospheric Compensation and Surface Temperature and Emissivity Retrieval with LWIR Hyperspectral Imagery

    NASA Astrophysics Data System (ADS)

    Pieper, Michael

    Accurate estimation or retrieval of surface emissivity spectra from long-wave infrared (LWIR) or Thermal Infrared (TIR) hyperspectral imaging data acquired by airborne or space-borne sensors is necessary for many scientific and defense applications. The at-aperture radiance measured by the sensor is a function of the ground emissivity and temperature, modified by the atmosphere. Thus the emissivity retrieval process consists of two interwoven steps: atmospheric compensation (AC) to retrieve the ground radiance from the measured at-aperture radiance and temperature-emissivity separation (TES) to separate the temperature and emissivity from the ground radiance. In-scene AC (ISAC) algorithms use blackbody-like materials in the scene, which have a linear relationship between their ground radiances and at-aperture radiances determined by the atmospheric transmission and upwelling radiance. Using a clear reference channel to estimate the ground radiance, a linear fitting of the at-aperture radiance and estimated ground radiance is done to estimate the atmospheric parameters. TES algorithms for hyperspectral imaging data assume that the emissivity spectra for solids are smooth compared to the sharp features added by the atmosphere. The ground temperature and emissivity are found by finding the temperature that provides the smoothest emissivity estimate. In this thesis we develop models to investigate the sensitivity of AC and TES to the basic assumptions enabling their performance. ISAC assumes that there are perfect blackbody pixels in a scene and that there is a clear channel, which is never the case. The developed ISAC model explains how the quality of blackbody-like pixels affects the shape of the atmospheric estimates and how the clear channel assumption affects their magnitude. Emissivity spectra for solids usually have some roughness. The TES model identifies four sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using the incorrect temperature, and the errors caused by sensor noise and wavelength calibration. The ways these errors interact determine the overall TES performance. Since the AC and TES processes are interwoven, any errors in AC are transferred to TES and the final temperature and emissivity estimates. Combining the two models, shape errors caused by the blackbody assumption are transferred to the emissivity estimates, where magnitude errors from the clear channel assumption are compensated by TES temperature-induced emissivity errors. The ability of the temperature-induced error to compensate for such atmospheric errors makes it difficult to determine the correct atmospheric parameters for a scene. With these models we are able to determine the expected quality of estimated emissivity spectra based on the quality of blackbody-like materials on the ground, the emissivity of the materials being searched for, and the properties of the sensor. The quality of material emissivity spectra is a key factor in determining detection performance for a material in a scene.
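
    The smoothness-based TES step can be demonstrated numerically. The sketch below is a simplified stand-in for the thesis models: it builds a synthetic ground radiance from an assumed smooth emissivity, a 300 K surface, and a downwelling term with artificial sharp lines, then scans candidate temperatures and keeps the one whose emissivity estimate is smoothest. All spectra, the Planck-only forward model, and the roughness metric are illustrative assumptions.

    ```python
    import numpy as np

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

    def planck(wl_m, temp_k):
        """Planck spectral radiance B(lambda, T), W m^-3 sr^-1 (per metre of wavelength)."""
        return (2 * H * C**2 / wl_m**5) / np.expm1(H * C / (wl_m * KB * temp_k))

    wl = np.linspace(8e-6, 13e-6, 200)          # LWIR band, metres

    # Synthetic scene: smooth emissivity, 300 K surface, and a downwelling radiance
    # with sharp "atmospheric" lines added to a smooth background.
    rng = np.random.default_rng(1)
    true_eps = 0.96 + 0.02 * np.sin(2 * np.pi * (wl - wl[0]) / (wl[-1] - wl[0]))
    true_T = 300.0
    L_down = 0.3 * planck(wl, 280.0)
    L_down[rng.choice(wl.size, 25, replace=False)] *= 1.5      # sharp line features
    L_ground = true_eps * planck(wl, true_T) + (1.0 - true_eps) * L_down

    def emissivity_for(T):
        """Invert L_ground = eps*B(T) + (1-eps)*L_down for a candidate temperature."""
        return (L_ground - L_down) / (planck(wl, T) - L_down)

    def roughness(eps):
        """Sum of squared second differences; a wrong T leaves residual line structure."""
        return np.sum(np.diff(eps, n=2) ** 2)

    cand_T = np.arange(295.0, 305.0, 0.05)
    best_T = cand_T[int(np.argmin([roughness(emissivity_for(T)) for T in cand_T]))]
    print(f"retrieved T = {best_T:.2f} K (truth {true_T} K)")
    print(f"max emissivity error = {np.max(np.abs(emissivity_for(best_T) - true_eps)):.4f}")
    ```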

  16. Dual-wavelengths photoacoustic temperature measurement

    NASA Astrophysics Data System (ADS)

    Liao, Yu; Jian, Xiaohua; Dong, Fenglin; Cui, Yaoyao

    2017-02-01

    Thermal therapy is an approach applied in cancer treatment by heating local tissue to kill the tumor cells, which requires highly sensitive temperature monitoring during therapy. Current clinical methods for temperature measurement, such as fMRI, near-infrared, or ultrasound, still have limitations on penetration depth or sensitivity. Photoacoustic temperature sensing is a newly developed temperature sensing method that has the potential to be applied in thermal therapy, and it usually employs a single-wavelength laser for signal generation and temperature detection. Because of system disturbances, including laser intensity, ambient temperature, and the complexity of the target, accidental measurement errors are unavoidable. To address these problems, in this paper we propose a new method of photoacoustic temperature sensing that uses two wavelengths to reduce random error and increase measurement accuracy. First, a brief theoretical analysis is presented. Then, in the experiment, a temperature measurement resolution of about 1 °C in the range of 23-48 °C in ex vivo pig blood was achieved, and an obvious decrease of the absolute error was observed, averaging 1.7 °C in the single-wavelength mode versus nearly 1 °C in the dual-wavelength mode. The obtained results indicate that dual-wavelength photoacoustic sensing of temperature is able to reduce random error and improve measurement accuracy, and could be a more effective method for photoacoustic temperature sensing in the thermal therapy of tumors.

  17. Temperature Prediction Model for Bone Drilling Based on Density Distribution and In Vivo Experiments for Minimally Invasive Robotic Cochlear Implantation.

    PubMed

    Feldmann, Arne; Anso, Juan; Bell, Brett; Williamson, Tom; Gavaghan, Kate; Gerber, Nicolas; Rohrbach, Helene; Weber, Stefan; Zysset, Philippe

    2016-05-01

    Surgical robots have been proposed ex vivo to drill precise holes in the temporal bone for minimally invasive cochlear implantation. The main risk of the procedure is damage of the facial nerve due to mechanical interaction or due to temperature elevation during the drilling process. To evaluate the thermal risk of the drilling process, a simplified model is proposed which aims to enable an assessment of risk posed to the facial nerve for a given set of constant process parameters for different mastoid bone densities. The model uses the bone density distribution along the drilling trajectory in the mastoid bone to calculate a time dependent heat production function at the tip of the drill bit. Using a time dependent moving point source Green's function, the heat equation can be solved at a certain point in space so that the resulting temperatures can be calculated over time. The model was calibrated and initially verified with in vivo temperature data. The data were collected in minimally invasive robotic drilling of 12 holes in four different sheep. The sheep were anesthetized and the temperature elevations were measured with a thermocouple which was inserted in a previously drilled hole next to the planned drilling trajectory. Bone density distributions were extracted from pre-operative CT data by averaging Hounsfield values over the drill bit diameter. Post-operative μCT data was used to verify the drilling accuracy of the trajectories. The comparison of measured and calculated temperatures shows a very good match for both heating and cooling phases. The average prediction error of the maximum temperature was less than 0.7 °C and the average root mean square error was approximately 0.5 °C. To analyze potential thermal damage, the model was used to calculate temperature profiles and cumulative equivalent minutes at 43 °C at a minimal distance to the facial nerve. For the selected drilling parameters, temperature elevation profiles and cumulative equivalent minutes suggest that thermal elevation of this minimally invasive cochlear implantation surgery may pose a risk to the facial nerve, especially in sclerotic or high density mastoid bones. Optimized drilling parameters need to be evaluated and the model could be used for future risk evaluation.
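
    The core of the temperature prediction, superposing a moving point heat source through the infinite-medium Green's function and accumulating cumulative equivalent minutes at 43 °C (CEM43), can be sketched as follows. The thermal properties, feed rate, and heat production history are illustrative placeholders, not the calibrated values from the paper.

    ```python
    import numpy as np

    # Illustrative thermal properties for cortical-like bone (not the paper's values).
    rho, cp, k = 1900.0, 1300.0, 0.4           # kg/m^3, J/(kg K), W/(m K)
    alpha = k / (rho * cp)                      # thermal diffusivity, m^2/s

    feed = 0.5e-3                               # drill feed rate, m/s (assumed)
    dt = 0.05                                   # time step, s
    t = np.arange(0.0, 20.0, dt)

    # Assumed time-dependent heat production at the drill tip (W); in the paper this
    # is derived from the CT bone-density profile along the trajectory.
    q = 0.3 + 0.2 * np.sin(2 * np.pi * t / 10.0)

    # Observation point: fixed in tissue, 2 mm lateral to the drilling axis.
    obs = np.array([2e-3, 0.0, 0.0])
    T_base = 37.0

    def point_source_rise(r, tau, energy):
        """Temperature rise of an instantaneous point source after elapsed time tau."""
        return energy / (rho * cp * (4.0 * np.pi * alpha * tau) ** 1.5) \
            * np.exp(-r**2 / (4.0 * alpha * tau))

    temps = np.full(t.size, T_base)
    for j, tj in enumerate(t):
        for i in range(j):                      # superpose all earlier source pulses
            src = np.array([0.0, 0.0, feed * t[i]])    # moving tip position
            r = np.linalg.norm(obs - src)
            temps[j] += point_source_rise(r, tj - t[i], q[i] * dt)

    # Cumulative equivalent minutes at 43 C (CEM43), R = 0.5 above / 0.25 below 43 C.
    R = np.where(temps >= 43.0, 0.5, 0.25)
    cem43 = np.sum(R ** (43.0 - temps) * dt) / 60.0
    print(f"peak temperature: {temps.max():.2f} C, CEM43: {cem43:.3f} min")
    ```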

  18. HeatWave: the next generation of thermography devices

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman; Vidas, Stephen

    2014-05-01

    Energy sustainability is a major challenge of the 21st century. To reduce environmental impact, changes are required not only on the supply side of the energy chain by introducing renewable energy sources, but also on the demand side by reducing energy usage and improving energy efficiency. Currently, 2D thermal imaging is used for energy auditing, which measures the thermal radiation from the surfaces of objects and represents it as a set of color-mapped images that can be analysed for the purpose of energy efficiency monitoring. A limitation of such a method for energy auditing is that it lacks information on the geometry and location of objects with reference to each other, particularly across separate images. Such a limitation prevents any quantitative analysis from being done, for example, detecting any energy performance changes before and after retrofitting. To address these limitations, we have developed a next generation thermography device called HeatWave. HeatWave is a hand-held 3D thermography device that consists of a thermal camera, a range sensor, and a color camera, and can be used to generate precise 3D models of objects with augmented temperature and visible information. As an operator holding the device smoothly waves it around the objects of interest, HeatWave continuously tracks its own pose in space and integrates new information from the range, thermal, and color cameras into a single, precise 3D multi-modal model. Information from multiple viewpoints can be incorporated together to improve the accuracy, reliability and robustness of the global model. The approach also makes it possible to reduce any systematic errors associated with the estimation of surface temperature from the thermal images.

  19. Thermal and optical aspects of glob-top design for phosphor converted white LED light sources

    NASA Astrophysics Data System (ADS)

    Sommer, Christian; Fulmek, Paul; Nicolics, Johann; Schweitzer, Susanne; Nemitz, Wolfgang; Hartmann, Paul; Pachler, Peter; Hoschopf, Hans; Schrank, Franz; Langer, Gregor; Wenzl, Franz P.

    2013-09-01

    For a systematic approach to improve the white light quality of phosphor converted light-emitting diodes (LEDs) for general lighting applications it is imperative to get the individual sources of error for correlated color temperature (CCT) reproducibility and maintenance under control. In this regard, it is of essential importance to understand how geometrical, optical and thermal properties of the color conversion elements (CCE), which typically consist of phosphor particles embedded in a transparent matrix material, affect the constancy of a desired CCT value. In this contribution we use an LED assembly consisting of an LED die mounted on a printed circuit board by chip-on-board technology and a CCE with a glob-top configuration on the top of it as a model system and discuss the impact of the CCE shape and size on CCT constancy with respect to substrate reflectivity and thermal load of the CCEs. From these studies, some general conclusions for improved glob-top design can be drawn.

  20. Effect of temperature variations and thermal noise on the static and dynamic behavior of straintronics devices

    NASA Astrophysics Data System (ADS)

    Barangi, Mahmood; Mazumder, Pinaki

    2015-11-01

    A theoretical model quantifying the effect of temperature variations on the magnetic properties and static and dynamic behavior of the straintronics magnetic tunneling junction is presented. Four common magnetostrictive materials (Nickel, Cobalt, Terfenol-D, and Galfenol) are analyzed to determine their temperature sensitivity and to provide a comprehensive database for different applications. The variations of magnetic anisotropies are studied in detail for temperature levels up to the Curie temperature. The energy barrier of the free layer and the critical voltage required for flipping the magnetization vector are inspected as important metrics that dominate the energy requirements and noise immunity when the device is incorporated into large systems. To study the dynamic thermal noise, the effect of the Langevin thermal field on the free layer's magnetization vector is incorporated into the Landau-Lifshitz-Gilbert equation. The switching energy, flipping delay, and write and hold error probabilities are studied; these are key metrics for nonvolatile memories, an important application of straintronics magnetic tunneling junctions.

  1. Modeling radiation forces acting on TOPEX/Poseidon for precision orbit determination

    NASA Technical Reports Server (NTRS)

    Marshall, J. A.; Luthcke, S. B.; Antreasian, P. G.; Rosborough, G. W.

    1992-01-01

    Geodetic satellites such as GEOSAT, SPOT, ERS-1, and TOPEX/Poseidon require accurate orbital computations to support the scientific data they collect. Until recently, gravity field mismodeling was the major source of error in precise orbit definition. However, mission requirements now dictate that albedo and infrared re-radiation, and spacecraft thermal imbalances, produce in combination no more than a 6-cm radial root-mean-square (RMS) error over a 10-day period. This requires the development of nonconservative force models that take the satellite's complex geometry, attitude, and surface properties into account. For TOPEX/Poseidon, a 'box-wing' satellite form was investigated that models the satellite as a combination of flat plates arranged in a box shape with a connected solar array. The nonconservative forces acting on each of the eight surfaces are computed independently, yielding vector accelerations which are summed to compute the total aggregate effect on the satellite center-of-mass. In order to test the validity of this concept, 'micro-models' based on finite element analysis of TOPEX/Poseidon were used to generate acceleration histories in a wide variety of orbit orientations. These profiles were then compared to the box-wing model. The results of these simulations and their implication on the ability to precisely model the TOPEX/Poseidon orbit are discussed.
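
    The flat-plate force evaluation at the heart of a box-wing model can be sketched with the standard radiation-pressure expression for a single plate, summed over the surfaces. The plate areas, optical coefficients, spacecraft mass, and Sun direction below are placeholders and do not represent the TOPEX/Poseidon properties.

    ```python
    import numpy as np

    C_LIGHT = 299_792_458.0          # m/s
    FLUX = 1361.0                    # incident solar flux at 1 AU, W/m^2
    MASS = 2400.0                    # spacecraft mass, kg (placeholder)

    # Box-wing surfaces: outward unit normal, area (m^2), specular reflectivity,
    # diffuse reflectivity.  Values are placeholders, not TOPEX/Poseidon properties.
    plates = [
        (np.array([ 1.0, 0.0, 0.0]), 5.0, 0.2, 0.3),
        (np.array([-1.0, 0.0, 0.0]), 5.0, 0.2, 0.3),
        (np.array([ 0.0, 1.0, 0.0]), 8.0, 0.2, 0.3),
        (np.array([ 0.0,-1.0, 0.0]), 8.0, 0.2, 0.3),
        (np.array([ 0.0, 0.0, 1.0]), 6.0, 0.2, 0.3),
        (np.array([ 0.0, 0.0,-1.0]), 6.0, 0.2, 0.3),
        (np.array([ 0.0, 0.0, 1.0]), 25.0, 0.1, 0.1),   # solar array "wing"
    ]

    def plate_acceleration(sun_dir):
        """Sum the standard flat-plate radiation-pressure force over all surfaces.

        For a plate with normal n and incidence cosine c = s.n > 0, the force is
        -(FLUX*A*c/C) * [ (1-rs)*s + 2*(rs*c + rd/3)*n ], with s the unit vector
        from the spacecraft towards the Sun.
        """
        s = sun_dir / np.linalg.norm(sun_dir)
        force = np.zeros(3)
        for n, area, rs, rd in plates:
            c = float(np.dot(s, n))
            if c <= 0.0:                       # plate not illuminated
                continue
            force += -(FLUX * area * c / C_LIGHT) * ((1.0 - rs) * s
                                                     + 2.0 * (rs * c + rd / 3.0) * n)
        return force / MASS                    # acceleration, m/s^2

    acc = plate_acceleration(np.array([0.3, 0.9, 0.3]))
    print("radiation-pressure acceleration [m/s^2]:", acc)
    ```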

  2. Electro-thermal modelling of anode and cathode in micro-EDM

    NASA Astrophysics Data System (ADS)

    Yeo, S. H.; Kurnia, W.; Tan, P. C.

    2007-04-01

    Micro-electrical discharge machining is an evolution of conventional EDM used for fabricating three-dimensional complex micro-components and microstructures with high precision capabilities. However, due to its stochastic nature, the process is not yet fully understood. This paper proposes an analytical model based on electro-thermal theory to estimate the geometrical dimensions of the micro-crater. The model incorporates voltage, current and pulse-on-time during material removal to predict the temperature distribution on the workpiece as a result of single discharges in micro-EDM. It is assumed that the entire superheated area is ejected from the workpiece surface while only a small fraction of the molten area is expelled. For verification purposes, single discharge experiments using an RC pulse generator are performed with pure tungsten as the electrode and AISI 4140 alloy steel as the workpiece. For the pulse-on-time range up to 1000 ns, the experimental and theoretical results are found to be in close agreement, with average volume approximation errors of 2.7% and 6.6% for the anode and cathode, respectively.

  3. Summary of comparison and analysis of results from exercises 1 and 2 of the OECD PBMR coupled neutronics/thermal hydraulics transient benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkhabela, P.; Han, J.; Tyobeka, B.

    2006-07-01

    The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor 400 MW design (PBMR-400) coupled neutronics/thermal hydraulics transient benchmark problem as part of their official activities. The scope of the benchmark is to establish a well-defined problem, based on a common given library of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events through a set of multi-dimensional computational test problems. The benchmark includes three steady state exercises and six transient exercises. This paper describes the first two steady state exercises, their objectives and the international participation in terms of organization, country and computer code utilized. This description is followed by a comparison and analysis of the participants' results submitted for these two exercises. The comparison of results from different codes allows for an assessment of the sensitivity of a result to the method employed and can thus help to focus the development efforts on the most critical areas. The first two exercises also allow for the removal of user-related modeling errors and prepare the core neutronics and thermal-hydraulics models of the different codes for the rest of the exercises in the benchmark. (authors)

  4. Natural convection in binary gases driven by combined horizontal thermal and vertical solutal gradients

    NASA Technical Reports Server (NTRS)

    Weaver, J. A.; Viskanta, Raymond

    1992-01-01

    An investigation of natural convection is presented to examine the influence of a horizontal temperature gradient and a concentration gradient occurring from the bottom to the cold wall in a cavity. As the solutal buoyancy force changes from augmenting to opposing the thermal buoyancy force, the fluid motion switches from unicellular to multicellular flow (fluid motion is up the cold wall and down the hot wall for the bottom counterrotating flow cell). Qualitatively, the agreement between predicted streamlines and smoke flow patterns is generally good. In contrast, agreement between measured and predicted temperature and concentration distributions ranges from fair to poor. Part of the discrepancy can be attributed to experimental error. However, there remains considerable discrepancy between data and predictions due to the idealizations of the mathematical model, which examines only first-order physical effects. An unsteady flow, variable thermophysical properties, conjugate effects, species interdiffusion, and radiation were not accounted for in the model.

  5. Calibration of micro-capacitance measurement system for thermal barrier coating testing

    NASA Astrophysics Data System (ADS)

    Ren, Yuan; Chen, Dixiang; Wan, Chengbiao; Tian, Wugang; Pan, Mengchun

    2018-06-01

    In order to comprehensively evaluate the thermal barrier coating system of an engine blade, an integrated planar sensor combining electromagnetic coils with planar capacitors is designed, in which the capacitance measurement accuracy of the planar capacitor is a key factor. The micro-capacitance measurement system is built based on an impedance analyzer. Because of the influence of non-ideal factors on the measuring system, there is an obvious difference between the measured value and the actual value. It is necessary to calibrate the measured results and eliminate the difference. In this paper, the measurement model of a planar capacitive sensor is established, and the relationship between the measured value and the actual value of capacitance is deduced. The model parameters are estimated with the least square method, and the calibration accuracy is evaluated with experiments under different dielectric conditions. The capacitance measurement error is reduced from 29%-46.5% to around 1% after calibration, which verifies the feasibility of the calibration method.
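
    A minimal sketch of the calibration step, assuming the relationship between measured and actual capacitance reduces to a linear gain and a parasitic offset fitted by least squares on reference capacitors; the record's full measurement model is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Reference capacitances (pF) and simulated raw readings distorted by the
    # measurement chain: an assumed gain error and parasitic offset plus noise.
    c_ref = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])          # pF
    gain_true, offset_true = 1.35, 0.8                            # assumed distortion
    c_meas = gain_true * c_ref + offset_true + rng.normal(0, 0.02, c_ref.size)

    # Least-squares fit of the calibration model  c_meas = a * c_actual + b,
    # then invert it to map raw readings back to actual capacitance.
    A = np.column_stack([c_ref, np.ones_like(c_ref)])
    (a, b), *_ = np.linalg.lstsq(A, c_meas, rcond=None)

    def calibrate(raw):
        return (raw - b) / a

    err_before = np.abs(c_meas - c_ref) / c_ref * 100.0
    err_after = np.abs(calibrate(c_meas) - c_ref) / c_ref * 100.0
    print(f"relative error before: {err_before.max():.1f}%  after: {err_after.max():.2f}%")
    ```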

  6. Effects of modeled tropical sea surface temperature variability on coral reef bleaching predictions

    NASA Astrophysics Data System (ADS)

    van Hooidonk, R.; Huber, M.

    2012-03-01

    Future widespread coral bleaching and subsequent mortality have been projected using sea surface temperature (SST) data derived from global, coupled ocean-atmosphere general circulation models (GCMs). While these models possess fidelity in reproducing many aspects of climate, they vary in their ability to correctly capture such parameters as the tropical ocean seasonal cycle and El Niño Southern Oscillation (ENSO) variability. Such weaknesses most likely reduce the accuracy of predicting coral bleaching, but little attention has been paid to the important issue of understanding potential errors and biases, the interaction of these biases with trends, and their propagation in predictions. To analyze the relative importance of various types of model errors and biases in predicting coral bleaching, various intra- and inter-annual frequency bands of observed SSTs were replaced with those frequencies from the 20th century simulations of 24 GCMs included in the Intergovernmental Panel on Climate Change (IPCC) 4th assessment report. Subsequent thermal stress was calculated and predictions of bleaching were made. These predictions were compared with observations of coral bleaching in the period 1982-2007 to calculate accuracy using an objective measure of forecast quality, the Peirce skill score (PSS). Major findings are that: (1) predictions are most sensitive to the seasonal cycle and inter-annual variability in the ENSO 24-60 months frequency band and (2) because models tend to understate the seasonal cycle at reef locations, they systematically underestimate future bleaching. The methodology we describe can be used to improve the accuracy of bleaching predictions by characterizing the errors and uncertainties involved in the predictions.
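
    The forecast-verification step can be illustrated with the Peirce skill score computed from a 2x2 contingency table (hit rate minus false alarm rate). The bleaching "forecasts" and "observations" below are synthetic; the thermal-stress thresholding that would produce them in practice is not shown.

    ```python
    import numpy as np

    def peirce_skill_score(forecast, observed):
        """PSS = hit rate - false alarm rate from a 2x2 contingency table."""
        forecast = np.asarray(forecast, dtype=bool)
        observed = np.asarray(observed, dtype=bool)
        hits = np.sum(forecast & observed)
        misses = np.sum(~forecast & observed)
        false_alarms = np.sum(forecast & ~observed)
        correct_negatives = np.sum(~forecast & ~observed)
        hit_rate = hits / (hits + misses)
        false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
        return hit_rate - false_alarm_rate

    # Synthetic example: thermal-stress forecasts (e.g., degree heating weeks above
    # a bleaching threshold) versus observed bleaching at reef locations.
    rng = np.random.default_rng(2)
    observed = rng.random(500) < 0.3
    forecast = observed ^ (rng.random(500) < 0.2)    # forecasts wrong 20% of the time
    print(f"Peirce skill score: {peirce_skill_score(forecast, observed):.2f}")
    ```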

  7. Comparison of methods applied in photoinduced transient spectroscopy to determining the defect center parameters: The correlation procedure and the signal analysis based on inverse Laplace transformation

    NASA Astrophysics Data System (ADS)

    Suproniuk, M.; Pawłowski, M.; Wierzbowski, M.; Majda-Zdancewicz, E.; Pawłowski, Ma.

    2018-04-01

    The procedure for the determination of trap parameters by photo-induced transient spectroscopy is based on the Arrhenius plot, which illustrates the thermal dependence of the emission rate. In this paper, we show that the Arrhenius plot obtained by the correlation method is shifted toward lower temperatures as compared to the one obtained with the inverse Laplace transformation. This shift is caused by the model adequacy error of the correlation method and introduces errors into the calculation of the defect center parameters. The effect is exemplified by comparing the results of the determination of trap parameters with both methods, based on photocurrent transients for defect centers observed in tin-doped neutron-irradiated silicon crystals and in gallium arsenide grown with the Vertical Gradient Freeze method.
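
    A sketch of the standard Arrhenius analysis referred to above: the emission rate follows e_n = gamma * sigma * T^2 * exp(-Ea / (kB * T)), so ln(T^2 / e_n) is linear in 1/(kB * T) with slope Ea. The trap parameters and the prefactor gamma below are assumed values for a synthetic data set, not results from the paper.

    ```python
    import numpy as np

    KB = 8.617333e-5                      # Boltzmann constant, eV/K

    # Synthetic trap: activation energy and apparent capture cross-section.
    Ea_true = 0.45                        # eV
    sigma_true = 1e-15                    # cm^2
    gamma = 3.25e21                       # material-dependent prefactor (assumed), cm^-2 s^-1 K^-2

    temps = np.linspace(180.0, 320.0, 15)                       # K
    e_n = gamma * sigma_true * temps**2 * np.exp(-Ea_true / (KB * temps))

    # Arrhenius analysis: ln(T^2 / e_n) is linear in 1/(kB*T) with slope Ea and
    # intercept -ln(gamma * sigma).
    x = 1.0 / (KB * temps)
    y = np.log(temps**2 / e_n)
    slope, intercept = np.polyfit(x, y, 1)

    Ea_fit = slope
    sigma_fit = np.exp(-intercept) / gamma
    print(f"Ea = {Ea_fit:.3f} eV (truth {Ea_true}), sigma = {sigma_fit:.2e} cm^2")
    ```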

  8. TOPEX/POSEIDON microwave radiometer performance and in-flight calibration

    NASA Technical Reports Server (NTRS)

    Ruf, C. S.; Keihm, Stephen J.; Subramanya, B.; Janssen, Michael A.

    1994-01-01

    Results of the in-flight calibration and performance evaluation campaign for the TOPEX/POSEIDON microwave radiometer (TMR) are presented. Intercomparisons are made between TMR and various sources of ground truth, including ground-based microwave water vapor radiometers, radiosondes, global climatological models, special sensor microwave imager data over the Amazon rain forest, and models of clear, calm, subpolar ocean regions. After correction for preflight errors in the processing of thermal/vacuum data, relative channel offsets in the open ocean TMR brightness temperatures were noted at the approximately 1 K level for the three TMR frequencies. Larger absolute offsets of 6-9 K over the rain forest indicated an approximately 5% gain error in the three channel calibrations. This was corrected by adjusting the antenna pattern correction (APC) algorithm. A 10% scale error in the TMR path delay estimates, relative to coincident radiosondes, was corrected in part by the APC adjustment and in part by a 5% modification to the value assumed for the 22.235 GHz water vapor line strength in the path delay retrieval algorithm. After all in-flight corrections to the calibration, TMR global retrieval accuracy for the wet tropospheric range correction is estimated at 1.1 cm root mean square (RMS) with consistent performance under clear, cloudy, and windy conditions.

  9. Daniel K. Inouye Solar Telescope: computational fluid dynamic analyses and evaluation of the air knife model

    NASA Astrophysics Data System (ADS)

    McQuillen, Isaac; Phelps, LeEllen; Warner, Mark; Hubbard, Robert

    2016-08-01

    Implementation of an air curtain at the thermal boundary between conditioned and ambient spaces allows for observation over wavelength ranges not practical when using optical glass as a window. The air knife model of the Daniel K. Inouye Solar Telescope (DKIST) project, a 4-meter solar observatory that will be built on Haleakalā, Hawai'i, deploys such an air curtain while also supplying ventilation through the ceiling of the coudé laboratory. The findings of computational fluid dynamics (CFD) analysis and subsequent changes to the air knife model are presented. Major design constraints include adherence to the Interface Control Document (ICD), separation of ambient and conditioned air, unidirectional outflow into the coudé laboratory, integration of a deployable glass window, and maintenance and accessibility requirements. The optimized design of the air knife successfully holds the full 12 Pa backpressure under temperature gradients of up to 20°C while maintaining unidirectional outflow. This is a significant improvement upon the 0.25 Pa pressure differential that the initial configuration, tested by Linden and Phelps, indicated the curtain could hold. CFD post-processing, developed by Vogiatzis, is validated against interferometry results of the initial air knife seeing evaluation, performed by Hubbard and Schoening. This is done by developing a CFD simulation of the initial experiment and using Vogiatzis' method to calculate the error introduced along the optical path. Seeing errors, for both temperature differentials tested in the initial experiment, match well with seeing results obtained from the CFD analysis and thus validate the post-processing model. Application of this model to the realizable air knife assembly yields seeing errors that are well within the error budget under which the air knife interface falls, even with a temperature differential of 20°C between laboratory and ambient spaces. With the ambient temperature set to 0°C and the conditioned temperature set to 20°C, representing the worst-case temperature gradient, the spatial rms wavefront error in units of wavelength is 0.178 (88.69 nm at λ = 500 nm).

  10. Prediction of temperature and HAZ in thermal-based processes with Gaussian heat source by a hybrid GA-ANN model

    NASA Astrophysics Data System (ADS)

    Fazli Shahri, Hamid Reza; Mahdavinejad, Ramezanali

    2018-02-01

    Thermal-based processes with a Gaussian heat source often produce excessive temperatures which can impose thermally-affected layers in specimens. Therefore, the temperature distribution and Heat Affected Zone (HAZ) of materials are two critical factors which are influenced by different process parameters. Measurement of the HAZ thickness and temperature distribution within the processes is not only difficult but also expensive. This research aims at gaining valuable knowledge of these factors by predicting the process through a novel combinatory model. In this study, an integrated Artificial Neural Network (ANN) and genetic algorithm (GA) was used to predict the HAZ and temperature distribution of the specimens. To this end, a series of full factorial experiments was first conducted by applying a Gaussian heat flux on Ti-6Al-4V, and the temperature of the specimen was measured by infrared thermography. The HAZ width of each sample was investigated through measuring the microhardness. Secondly, the experimental data were used to create a GA-ANN model. The efficiency of the GA in designing and optimizing the ANN architecture was investigated. The GA was used to determine the optimal number of neurons in the hidden layer, the learning rate, and the momentum coefficient of both the output and hidden layers of the ANN. Finally, the reliability of the models was assessed according to the experimental results and statistical indicators. The results demonstrated that the combinatory model predicted the HAZ and temperature more effectively than a trial-and-error ANN model.
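
    A compact sketch of the GA-ANN idea using scikit-learn, in which a small genetic algorithm searches over the number of hidden neurons, the learning rate, and the momentum coefficient of an MLP. The synthetic data, gene encoding, population size, and GA operators are simplified assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the experiments: inputs (e.g., heat-flux parameters)
    # and a smooth response playing the role of peak temperature / HAZ width.
    X = rng.uniform(-1.0, 1.0, size=(300, 3))
    y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2 + 0.5 * X[:, 2] + rng.normal(0, 0.05, 300)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    def fitness(genes):
        """Negative test RMSE of an MLP built from one chromosome."""
        n_hidden, lr, momentum = int(genes[0]), genes[1], genes[2]
        net = MLPRegressor(hidden_layer_sizes=(n_hidden,), solver="sgd",
                           learning_rate_init=lr, momentum=momentum,
                           max_iter=500, random_state=0)
        net.fit(X_tr, y_tr)
        return -np.sqrt(np.mean((net.predict(X_te) - y_te) ** 2))

    def random_genes():
        return np.array([rng.integers(2, 30), rng.uniform(1e-3, 0.3), rng.uniform(0.0, 0.95)])

    def mutate(genes):
        child = genes.copy()
        child[0] = np.clip(child[0] + rng.integers(-3, 4), 2, 30)
        child[1] = np.clip(child[1] * rng.uniform(0.5, 1.5), 1e-3, 0.3)
        child[2] = np.clip(child[2] + rng.normal(0, 0.1), 0.0, 0.95)
        return child

    # Simple generational GA: truncation selection, uniform crossover, mutation.
    pop = [random_genes() for _ in range(10)]
    for gen in range(5):
        parents = sorted(pop, key=fitness, reverse=True)[:4]
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = parents[rng.integers(4)], parents[rng.integers(4)]
            mask = rng.random(3) < 0.5
            children.append(mutate(np.where(mask, a, b)))
        pop = parents + children

    best = max(pop, key=fitness)
    print(f"best genes: hidden={int(best[0])}, lr={best[1]:.3f}, momentum={best[2]:.2f}, "
          f"test RMSE={-fitness(best):.3f}")
    ```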

  11. Distribution and depth of bottom-simulating reflectors in the Nankai subduction margin.

    PubMed

    Ohde, Akihiro; Otsuka, Hironori; Kioka, Arata; Ashi, Juichiro

    2018-01-01

    Surface heat flow has been observed to be highly variable in the Nankai subduction margin. This study presents an investigation of local anomalies in surface heat flows on the undulating seafloor in the Nankai subduction margin. We estimate the heat flows from bottom-simulating reflectors (BSRs) marking the lower boundaries of the methane hydrate stability zone and evaluate topographic effects on heat flow via two-dimensional thermal modeling. BSRs have been used to estimate heat flows based on the known stability characteristics of methane hydrates under low-temperature and high-pressure conditions. First, we generate an extensive map of the distribution and subseafloor depths of the BSRs in the Nankai subduction margin. We confirm that BSRs exist at the toe of the accretionary prism and the trough floor of the offshore Tokai region, where BSRs had previously been thought to be absent. Second, we calculate the BSR-derived heat flow and evaluate the associated errors. We conclude that the total uncertainty of the BSR-derived heat flow should be within 25%, considering allowable ranges in the P-wave velocity, which influences the time-to-depth conversion of the BSR position in seismic images, the resultant geothermal gradient, and thermal resistance. Finally, we model a two-dimensional thermal structure by comparing the temperatures at the observed BSR depths with the calculated temperatures at the same depths. The thermal modeling reveals that most local variations in BSR depth over the undulating seafloor can be explained by topographic effects. Those areas that cannot be explained by topographic effects can be mainly attributed to advective fluid flow, regional rapid sedimentation, or erosion. Our spatial distribution of heat flow data provides indispensable basic data for numerical studies of subduction zone modeling to evaluate margin parallel age dependencies of subducting plates.
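
    The conversion from a picked BSR depth to heat flow can be sketched with a one-dimensional conductive (Fourier) relation, q = k * (T_BSR - T_seafloor) / z_BSR, where T_BSR comes from the hydrate phase boundary at the local pressure. The stability-curve parameterization, thermal conductivity, and bottom-water temperature below are placeholder assumptions; the paper's velocity model, error analysis, and two-dimensional topographic corrections are not reproduced.

    ```python
    import numpy as np

    # Placeholder inputs for one seismic trace; real values come from bathymetry,
    # the picked BSR two-way time, and a P-wave velocity model.
    water_depth = 2000.0          # m
    bsr_depth = 300.0             # m below seafloor (after time-to-depth conversion)
    seafloor_T = 2.0              # degC, bottom-water temperature (assumed)
    k_bulk = 1.1                  # W/(m K), bulk sediment thermal conductivity (assumed)

    RHO_W, G = 1030.0, 9.81       # seawater density, gravity

    # Assumed simplified methane-hydrate phase boundary T3(P); a proper stability
    # curve (pressure, salinity, gas composition) should replace this in practice.
    def hydrate_stability_T(p_mpa):
        return 1.0 / (3.79e-3 - 2.83e-4 * np.log10(p_mpa)) - 273.15   # degC

    # Hydrostatic pressure at the BSR and the corresponding phase-boundary temperature.
    p_bsr_mpa = RHO_W * G * (water_depth + bsr_depth) / 1e6
    T_bsr = hydrate_stability_T(p_bsr_mpa)

    # One-dimensional conductive heat flow: q = k * dT/dz.
    grad = (T_bsr - seafloor_T) / bsr_depth          # K/m
    q = k_bulk * grad * 1e3                          # mW/m^2
    print(f"T_BSR = {T_bsr:.1f} degC, gradient = {grad*1000:.1f} K/km, q = {q:.1f} mW/m^2")
    ```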

  12. Deterministic and reliability based optimization of integrated thermal protection system composite panel using adaptive sampling techniques

    NASA Astrophysics Data System (ADS)

    Ravishankar, Bharani

    Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential, but complicates the design of the ITPS that now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS subjected to thermal and mechanical loads through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expenses involved in the structural analysis, a finite element based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, it was found that homogenization was applicable only for panels that are much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately to find the optimum design. Instead of building global surrogate models using a large number of designs, the computational resources were directed towards target regions near constraint boundaries for accurate representation of constraints using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that using adaptive sampling, the number of designs required to find the optimum was reduced drastically, while improving the accuracy. The system reliability of the ITPS was estimated using a Monte Carlo simulation (MCS) based method. The separable Monte Carlo method was employed, which allowed separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties, and loading conditions of the panel and error in finite element modeling. These uncertainties further increased the computational cost of MCS techniques, which was also reduced by employing surrogate models. In order to estimate the error in the probability of failure estimate, the bootstrapping method was applied. This research work thus demonstrates optimization of the ITPS composite panel with multiple failure modes and a large number of uncertainties using adaptive sampling techniques.
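
    The last stage of the reliability analysis, estimating a probability of failure by Monte Carlo sampling of a cheap surrogate and attaching a bootstrap confidence interval to that estimate, can be sketched as below. The limit-state surrogate and input distributions are synthetic stand-ins, not the ITPS response surfaces.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic "surrogate" of a limit-state function g(x): failure when g < 0.
    # In the dissertation this would be a response surface fitted to FE results.
    def surrogate_g(x):
        thickness, load, conductivity = x[:, 0], x[:, 1], x[:, 2]
        return 3.0 * thickness - 0.8 * load + 0.5 * conductivity + 0.3

    # Monte Carlo sampling of the random inputs (units omitted, toy distributions).
    n = 50_000
    samples = np.column_stack([
        rng.normal(1.0, 0.1, n),      # panel thickness
        rng.normal(3.0, 0.5, n),      # thermal/mechanical load
        rng.normal(1.0, 0.2, n),      # material property
    ])
    failures = surrogate_g(samples) < 0.0
    pf = failures.mean()

    # Bootstrap the failure indicator to put a confidence interval on pf.
    boot = rng.choice(failures, size=(200, n), replace=True).mean(axis=1)
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"pf = {pf:.4e}  (95% bootstrap CI: {lo:.4e} to {hi:.4e})")
    ```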

  13. Variational bounds on the temperature distribution

    NASA Astrophysics Data System (ADS)

    Kalikstein, Kalman; Spruch, Larry; Baider, Alberto

    1984-02-01

    Upper and lower stationary or variational bounds are obtained for functions which satisfy parabolic linear differential equations. (The error in the bound, that is, the difference between the bound on the function and the function itself, is of second order in the error in the input function, and the error is of known sign.) The method is applicable to a range of functions associated with equalization processes, including heat conduction, mass diffusion, electric conduction, fluid friction, the slowing down of neutrons, and certain limiting forms of the random walk problem, under conditions which are not unduly restrictive: in heat conduction, for example, we do not allow the thermal coefficients or the boundary conditions to depend upon the temperature, but the thermal coefficients can be functions of space and time and the geometry is unrestricted. The variational bounds follow from a maximum principle obeyed by the solutions of these equations.

  14. Cross-Spectrum PM Noise Measurement, Thermal Energy, and Metamaterial Filters.

    PubMed

    Gruson, Yannick; Giordano, Vincent; Rohde, Ulrich L; Poddar, Ajay K; Rubiola, Enrico

    2017-03-01

    Virtually all commercial instruments for the measurement of oscillator PM noise make use of the cross-spectrum method (arXiv:1004.5539 [physics.ins-det], 2010). High sensitivity is achieved by correlation and averaging on two equal channels, which measure the same input, and reject the background of the instrument. We show that a systematic error is always present if the thermal energy of the input power splitter is not accounted for. Such an error can result in noise underestimation of up to a few decibels in the lowest-noise quartz oscillators, and in an invalid measurement in the case of cryogenic oscillators. As another alarming fact, the presence of metamaterial components in the oscillator results in unpredictable behavior and large errors, even in well controlled experimental conditions. We observed a spread of 40 dB in the phase noise spectra of an oscillator, simply by replacing the output filter.
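
    A toy numeric illustration of the effect described above, under the simplifying assumption that the splitter's thermal contribution appears as a background that is anti-correlated between the two channels: cross-spectrum averaging rejects the independent instrument background, but the correlated term survives averaging and biases the estimated noise low. Signal levels and record lengths are arbitrary; this is not the authors' analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, m = 4096, 1000           # samples per record, number of averaged records

    dut = 0.2                   # per-bin level of the device under test (both channels)
    common = 0.1                # background attributed to the shared input power
                                # splitter, assumed anti-correlated between channels
    indep = 1.0                 # independent background of each instrument channel

    acc = np.zeros(n // 2 + 1, dtype=complex)
    for _ in range(m):
        s = dut * rng.standard_normal(n)         # DUT noise, seen by both channels
        c = common * rng.standard_normal(n)      # correlated term (not rejected)
        a = s + c + indep * rng.standard_normal(n)
        b = s - c + indep * rng.standard_normal(n)
        acc += np.fft.rfft(a) * np.conj(np.fft.rfft(b)) / n   # cross-spectrum estimate

    avg = acc.real[1:-1] / m                     # averaged cross-spectrum, interior bins
    print(f"true DUT level      : {dut**2:.2e}")
    print(f"cross-spectrum level: {avg.mean():.2e}")
    print(f"systematic bias     : {avg.mean() - dut**2:.2e}  (~ -common^2 = {-common**2:.2e})")
    ```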

  15. New approaches in the indirect quantification of thermal rock properties in sedimentary basins: the well-log perspective

    NASA Astrophysics Data System (ADS)

    Fuchs, Sven; Balling, Niels; Förster, Andrea

    2016-04-01

    Numerical temperature models generated for geodynamic studies as well as for geothermal energy solutions heavily depend on rock thermal properties. Best practice for the determination of those parameters is the measurement of rock samples in the laboratory. Given the necessity to enlarge databases of subsurface rock parameters beyond drill core measurements, an approach for the indirect determination of these parameters is developed, for rocks as well as for geological formations. We present new and universally applicable prediction equations for thermal conductivity, thermal diffusivity and specific heat capacity in sedimentary rocks derived from data provided by standard geophysical well logs. The approach is based on a data set of synthetic sedimentary rocks (clastic rocks, carbonates and evaporites) composed of mineral assemblages with variable contents of 15 major rock-forming minerals and porosities varying between 0 and 30%. Petrophysical properties are assigned to both the rock-forming minerals and the pore-filling fluids. Using multivariate statistics, relationships were then explored between each thermal property and well-logged petrophysical parameters (density, sonic interval transit time, hydrogen index, volume fraction of shale and photoelectric absorption index) on a regression subset of the data (70% of data) (Fuchs et al., 2015). Prediction quality was quantified on the remaining test subset (30% of data). The combination of three to five well-log parameters results in prediction errors on the order of <15% for thermal conductivity and thermal diffusivity, and of <10% for specific heat capacity. Comparison of predicted and benchmark laboratory thermal conductivity from deep boreholes of the Norwegian-Danish Basin, the North German Basin, and the Molasse Basin results in 3 to 5% larger uncertainties with regard to the test data set. With regard to temperature models, the use of calculated TC borehole profiles approximates measured temperature logs with an error of <3 °C along a 4 km deep profile. A benchmark comparison for thermal diffusivity and specific heat capacity is pending. Fuchs, Sven; Balling, Niels; Förster, Andrea (2015): Calculation of thermal conductivity, thermal diffusivity and specific heat capacity of sedimentary rocks using petrophysical well logs, Geophysical Journal International 203, 1977-2000, doi: 10.1093/gji/ggv403

  16. General MACOS Interface for Modeling and Analysis for Controlled Optical Systems

    NASA Technical Reports Server (NTRS)

    Sigrist, Norbert; Basinger, Scott A.; Redding, David C.

    2012-01-01

    The General MACOS Interface (GMI) for Modeling and Analysis for Controlled Optical Systems (MACOS) enables the use of MATLAB as a front-end for JPL's critical optical modeling package, MACOS. MACOS is JPL's in-house optical modeling software, which has proven to be a superb tool for advanced systems engineering of optical systems. GMI, coupled with MACOS, allows for seamless interfacing with modeling tools from other disciplines to make possible integration of dynamics, structures, and thermal models with the addition of control systems for deformable optics and other actuated optics. This software package is designed as a tool for analysts to quickly and easily use MACOS without needing to be an expert at programming MACOS. The strength of MACOS is its ability to interface with various modeling/development platforms, allowing evaluation of system performance with thermal, mechanical, and optical modeling parameter variations. GMI provides an improved means for accessing selected key MACOS functionalities. The main objective of GMI is to marry the vast mathematical and graphical capabilities of MATLAB with the powerful optical analysis engine of MACOS, thereby providing a useful tool to anyone who can program in MATLAB. GMI also improves modeling efficiency by eliminating the need to write an interface function for each task/project, reducing error sources, speeding up user/modeling tasks, and making MACOS well suited for fast prototyping.

  17. Estimating spatially distributed soil texture using time series of thermal remote sensing - a case study in central Europe

    NASA Astrophysics Data System (ADS)

    Müller, Benjamin; Bernhardt, Matthias; Jackisch, Conrad; Schulz, Karsten

    2016-09-01

    For understanding water and solute transport processes, knowledge about the respective hydraulic properties is necessary. Commonly, hydraulic parameters are estimated via pedo-transfer functions using soil texture data to avoid cost-intensive measurements of hydraulic parameters in the laboratory. Therefore, current soil texture information is only available at a coarse spatial resolution of 250 to 1000 m. Here, a method is presented to derive high-resolution (15 m) spatial topsoil texture patterns for the meso-scale Attert catchment (Luxembourg, 288 km²) from 28 images of ASTER (advanced spaceborne thermal emission and reflection radiometer) thermal remote sensing. A principal component analysis of the images reveals the most dominant thermal patterns (principal components, PCs), which are related to 212 fractional soil texture samples. Within a multiple linear regression framework, distributed soil texture information is estimated and related uncertainties are assessed. An overall root mean squared error (RMSE) of 12.7 percentage points (pp) lies well within and even below the range of recent studies on soil texture estimation, while requiring sparser sample setups and a less diverse set of basic spatial input. This approach will improve the generation of spatially distributed topsoil maps, particularly for hydrologic modeling purposes, and will expand the usage of thermal remote sensing products.
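
    A sketch of the processing chain, PCA of a thermal image time series followed by multiple linear regression of texture fractions on the leading PCs at sparse sample locations, using synthetic data in place of the ASTER scenes and field samples.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)

    # Synthetic stand-in for 28 co-registered thermal scenes over 10,000 pixels:
    # each pixel's time series mixes a few latent spatial patterns plus noise.
    n_pixels, n_scenes, n_latent = 10_000, 28, 3
    latent = rng.standard_normal((n_pixels, n_latent))
    mixing = rng.standard_normal((n_latent, n_scenes))
    scenes = latent @ mixing + 0.3 * rng.standard_normal((n_pixels, n_scenes))

    # "Ground truth" sand fraction (percentage points) driven by the latent patterns.
    sand = 40.0 + 10.0 * latent[:, 0] - 5.0 * latent[:, 1] + 3.0 * rng.standard_normal(n_pixels)

    # Step 1: PCA of the scene time series to extract dominant thermal patterns.
    pcs = PCA(n_components=3).fit_transform(scenes)

    # Step 2: multiple linear regression of texture samples on the PCs, using only
    # a sparse set of "field samples" (here 212 pixels, as in the record).
    idx = rng.choice(n_pixels, size=212, replace=False)
    X_tr, X_te, y_tr, y_te = train_test_split(pcs[idx], sand[idx], test_size=0.3, random_state=0)
    model = LinearRegression().fit(X_tr, y_tr)
    rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
    print(f"test RMSE: {rmse:.1f} percentage points")

    # The fitted model can then be applied to every pixel to map texture patterns.
    sand_map = model.predict(pcs)
    ```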

  18. Thermal behavior of the Medicina 32-meter radio telescope

    NASA Astrophysics Data System (ADS)

    Pisanu, Tonino; Buffa, Franco; Morsiani, Marco; Pernechele, Claudio; Poppi, Sergio

    2010-07-01

    We studied the thermal effects on the 32 m diameter radio-telescope managed by the Institute of Radio Astronomy (IRA), Medicina, Bologna, Italy. The preliminary results show that thermal gradients deteriorate the pointing performance of the antenna. Data has been collected by using: a) two inclinometers mounted near the elevation bearing and on the central part of the alidade structure; b) a non contact laser alignment optical system capable of measuring the secondary mirror position; c) twenty thermal sensors mounted on the alidade trusses. Two series of measurements were made, the first series was performed by placing the antenna in stow position, the second series was performed while tracking a circumpolar astronomical source. When the antenna was in stow position we observed a strong correlation between the inclinometer measurements and the differential temperature. The latter was measured with the sensors located on the South and North sides of the alidade, thus indicating that the inclinometers track well the thermal deformation of the alidade. When the antenna pointed at the source we measured: pointing errors, the inclination of the alidade, the temperature of the alidade components and the subreflector position. The pointing errors measured on-source were 15-20 arcsec greater than those measured with the inclinometer.

  19. Performance analysis of next-generation lunar laser retroreflectors

    NASA Astrophysics Data System (ADS)

    Ciocci, Emanuele; Martini, Manuele; Contessa, Stefania; Porcelli, Luca; Mastrofini, Marco; Currie, Douglas; Delle Monache, Giovanni; Dell'Agnello, Simone

    2017-09-01

    Starting from 1969, Lunar Laser Ranging (LLR) to the Apollo and Lunokhod Cube Corner Retroreflectors (CCRs) provided several tests of General Relativity (GR). When deployed, the Apollo/Lunokhod CCR design contributed only a negligible fraction of the ranging error budget. Today the improvement over the years in the laser ground stations makes the lunar libration contribution relevant. So the libration now dominates the error budget, limiting the precision of the experimental tests of gravitational theories. The MoonLIGHT-2 project (Moon Laser Instrumentation for General relativity High-accuracy Tests - Phase 2) is a next-generation LLR payload developed by the Satellite/lunar/GNSS laser ranging/altimetry and Cube/microsat Characterization Facilities Laboratory (SCF_Lab) at the INFN-LNF in collaboration with the University of Maryland. With its unique design consisting of a single large CCR unaffected by librations, MoonLIGHT-2 can significantly reduce the error contribution of the reflectors to the measurement of the lunar geodetic precession and other GR tests compared to Apollo/Lunokhod CCRs. This paper treats only this specific next-generation lunar laser retroreflector (MoonLIGHT-2) and it is by no means intended to address other contributions to the global LLR error budget. MoonLIGHT-2 is approved to be launched with the Moon Express 1 (MEX-1) mission and will be deployed on the Moon surface in 2018. To validate/optimize MoonLIGHT-2, the SCF_Lab is carrying out a unique experimental test called SCF-Test: the concurrent measurement of the optical Far Field Diffraction Pattern (FFDP) and the temperature distribution of the CCR under thermal conditions produced with a close-match solar simulator and simulated space environment. The focus of this paper is to describe the SCF_Lab specialized characterization of the performance of our next-generation LLR payload. While this payload will improve the contribution of the error budget of the space segment (MoonLIGHT-2) to GR tests and to constraints on new gravitational theories (like non-minimally coupled gravity and spacetime torsion), the description of the associated physics analysis and global LLR error budget is outside of the chosen scope of the present paper. We note that, according to Reasenberg et al. (2016), software models used for LLR physics and lunar science cannot process residuals with an accuracy better than a few centimeters and that, in order to process millimeter ranging data (or better) coming from (not only) future reflectors, it is necessary to update and improve the respective models inside the software package. The work presented here on the results of the SCF-Test thermal and optical analysis shows that a good performance is expected from MoonLIGHT-2 after its deployment on the Moon. This in turn will stimulate improvements in LLR ground segment hardware and help refine the LLR software code and models. Without a significant improvement of the LLR space segment, the acquisition of improved ground LLR hardware and challenging LLR software refinements may languish for lack of motivation, since the librations of the old generation LLR payloads largely dominate the global LLR error budget.

  20. Including sheath effects in the interpretation of planar retarding potential analyzer's low-energy ion data

    NASA Astrophysics Data System (ADS)

    Fisher, L. E.; Lynch, K. A.; Fernandes, P. A.; Bekkeng, T. A.; Moen, J.; Zettergren, M.; Miceli, R. J.; Powell, S.; Lessard, M. R.; Horak, P.

    2016-04-01

    The interpretation of planar retarding potential analyzers (RPA) during ionospheric sounding rocket missions requires modeling the thick 3D plasma sheath. This paper overviews the theory of RPAs with an emphasis placed on the impact of the sheath on current-voltage (I-V) curves. It then describes the Petite Ion Probe (PIP) which has been designed to function in this difficult regime. The data analysis procedure for this instrument is discussed in detail. Data analysis begins by modeling the sheath with the Spacecraft Plasma Interaction System (SPIS), a particle-in-cell code. Test particles are traced through the sheath and detector to determine the detector's response. A training set is constructed from these simulated curves for a support vector regression analysis which relates the properties of the I-V curve to the properties of the plasma. The first in situ use of the PIPs occurred during the MICA sounding rocket mission which launched from Poker Flat, Alaska in February of 2012. These data are presented as a case study, providing valuable cross-instrument comparisons. A heritage top-hat thermal ion electrostatic analyzer, called the HT, and a multi-needle Langmuir probe have been used to validate both the PIPs and the data analysis method. Compared to the HT, the PIP ion temperature measurements agree with a root-mean-square error of 0.023 eV. These two instruments agree on the parallel-to-B plasma flow velocity with a root-mean-square error of 130 m/s. The PIP with its field of view aligned perpendicular-to-B provided a density measurement with an 11% error compared to the multi-needle Langmuir Probe. Higher error in the other PIP's density measurement is likely due to simplifications in the SPIS model geometry.
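
    The regression step can be sketched by replacing the SPIS sheath simulation with an idealized planar-RPA response for a drifting Maxwellian (an erfc-type curve) and training a support vector regression to map normalized I-V curves to ion temperature. The ion species, ram speed, voltage grid, and noise level are assumptions; real training curves would come from the particle-tracing simulations described in the record.

    ```python
    import numpy as np
    from scipy.special import erfc
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(6)

    E, MI = 1.602e-19, 16 * 1.66e-27        # elementary charge, O+ ion mass (kg)
    v_grid = np.linspace(0.0, 12.0, 40)     # retarding voltages, V
    u_ram = 1000.0                          # assumed ram (drift) speed along the normal, m/s

    def rpa_current(volts, T_ion, density=1e11):
        """Idealized planar-RPA collector current for a drifting Maxwellian."""
        v_t = np.sqrt(2.0 * 1.381e-23 * T_ion / MI)         # thermal speed
        v_min = np.sqrt(2.0 * E * volts / MI)               # minimum axial speed collected
        w = (v_min - u_ram) / v_t
        return density * E * (0.5 * u_ram * erfc(w)
                              + v_t / (2.0 * np.sqrt(np.pi)) * np.exp(-w**2))

    # Training set: simulated I-V curves (normalized) labelled with ion temperature.
    temps = rng.uniform(800.0, 3000.0, 400)                  # K
    curves = np.array([rpa_current(v_grid, T) for T in temps])
    curves /= curves[:, :1]                                  # normalize by the V=0 current
    curves += 0.01 * rng.standard_normal(curves.shape)       # instrument noise

    X_tr, X_te, y_tr, y_te = train_test_split(curves, temps, test_size=0.25, random_state=0)
    svr = SVR(kernel="rbf", C=1000.0, epsilon=10.0).fit(X_tr, y_tr)
    rmse = np.sqrt(np.mean((svr.predict(X_te) - y_te) ** 2))
    print(f"ion-temperature RMSE on held-out curves: {rmse:.0f} K")
    ```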

  1. The vertical variability of hyporheic fluxes inferred from riverbed temperature data

    NASA Astrophysics Data System (ADS)

    Cranswick, Roger H.; Cook, Peter G.; Shanafield, Margaret; Lamontagne, Sebastien

    2014-05-01

    We present detailed profiles of vertical water flux from the surface to 1.2 m beneath the Haughton River in the tropical northeast of Australia. A 1-D numerical model is used to estimate vertical flux based on raw temperature time series observations from within downwelling, upwelling, neutral, and convergent sections of the hyporheic zone. A Monte Carlo analysis is used to derive error bounds for the fluxes based on temperature measurement error and uncertainty in effective thermal diffusivity. Vertical fluxes ranged from 5.7 m d⁻¹ (downward) to -0.2 m d⁻¹ (upward) with the lowest relative errors for values between 0.3 and 6 m d⁻¹. Our 1-D approach provides a useful alternative to 1-D analytical and other solutions because it does not incorporate errors associated with simplified boundary conditions or assumptions of purely vertical flow, hydraulic parameter values, or hydraulic conditions. To validate the ability of this 1-D approach to represent the vertical fluxes of 2-D flow fields, we compare our model with two simple 2-D flow fields using a commercial numerical model. These comparisons showed that: (1) the 1-D vertical flux was equivalent to the mean vertical component of flux irrespective of a changing horizontal flux; and (2) the subsurface temperature data inherently have a "spatial footprint" when the vertical flux profiles vary spatially. Thus, the mean vertical flux within a 2-D flow field can be estimated accurately without requiring the flow to be purely vertical. The temperature-derived 1-D vertical flux represents the integrated vertical component of flux along the flow path intersecting the observation point.
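
    A minimal forward model of the kind inverted here: explicit finite-difference integration of one-dimensional conduction-advection of heat in saturated sediment, with a diel temperature signal imposed at the streambed. The diffusivity, flux, heat-capacity ratio, and boundary forcing are illustrative assumptions, not the study's calibrated values.

    ```python
    import numpy as np

    # 1-D conduction-advection of heat in saturated sediment:
    #   dT/dt = kappa * d2T/dz2 - v * dT/dz
    # where v is the thermal front velocity carried by the vertical water flux.
    kappa = 0.08 / 24.0        # effective thermal diffusivity, m^2/hr (illustrative)
    q = 1.0                    # vertical Darcy flux, m/day (downward positive, assumed)
    gamma = 0.6                # ratio of water to bulk volumetric heat capacity (assumed)
    v = gamma * q / 24.0       # thermal front velocity, m/hr

    dz, dt = 0.02, 0.04        # grid spacing (m) and time step (hr), explicit-stable
    z = np.arange(0.0, 1.2 + dz, dz)
    t = np.arange(0.0, 72.0, dt)

    T = np.full(z.size, 20.0)                 # initial condition, degC
    T_deep = 20.0                             # fixed lower boundary
    result = np.empty((t.size, z.size))

    for k, tk in enumerate(t):
        # Upper boundary: diel river-temperature signal (amplitude 3 degC).
        T[0] = 20.0 + 3.0 * np.sin(2.0 * np.pi * tk / 24.0)
        T[-1] = T_deep
        # Explicit update (FTCS diffusion + upwind advection).
        diff = kappa * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz**2
        adv = -v * (T[1:-1] - T[:-2]) / dz
        T[1:-1] += dt * (diff + adv)
        result[k] = T

    # Amplitude damping with depth carries the flux information used in inversion.
    amp = result[t > 48.0].max(axis=0) - result[t > 48.0].min(axis=0)
    for depth in (0.0, 0.2, 0.4, 0.8):
        print(f"z = {depth:.1f} m: diel peak-to-peak amplitude = {amp[int(round(depth/dz))]:.2f} degC")
    ```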

  2. The effect of thermal treatment on the enhancement of detection of adulteration in extra virgin olive oils by synchronous fluorescence spectroscopy and chemometric analysis.

    PubMed

    Mabood, F; Boqué, R; Folcarelli, R; Busto, O; Jabeen, F; Al-Harrasi, Ahmed; Hussain, J

    2016-05-15

    In this study, the effect of thermal treatment on the enhancement of a synchronous fluorescence spectroscopic method for the discrimination and quantification of pure extra virgin olive oil (EVOO) samples from EVOO samples adulterated with refined oil was investigated. Two groups of samples were used. One group was analyzed at room temperature (25 °C) and the other group was thermally treated in a thermostatic water bath at 75 °C for 8 h, in contact with air and with light exposure, to favor oxidation. All the samples were then measured with synchronous fluorescence spectroscopy. Synchronous fluorescence spectra were acquired by varying the wavelength in the region from 250 to 720 nm with a constant 20 nm interval between the excitation and emission wavelengths. Pure and adulterated olive oils were discriminated by using partial least-squares discriminant analysis (PLS-DA). It was found that the best PLS-DA models were those built with the difference spectra (75 °C-25 °C), which were able to discriminate pure from adulterated oils at a 2% level of adulteration of refined olive oils. Furthermore, PLS regression models were also built to quantify the level of adulteration. Again, the best model was the one built with the difference spectra, with a prediction error of 3.18% of adulteration.
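
    A sketch of PLS-DA as used above, implemented as PLS regression on a dummy-coded class variable with a 0.5 decision threshold, applied to synthetic spectra rather than the measured synchronous fluorescence data.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)

    # Synthetic "synchronous fluorescence spectra": 300 wavelengths, two classes
    # (pure vs adulterated) that differ by a small band plus noise.
    n_per_class, n_wl = 60, 300
    wl = np.linspace(0.0, 1.0, n_wl)
    band = np.exp(-((wl - 0.55) / 0.04) ** 2)        # band altered by adulteration
    base = np.exp(-((wl - 0.35) / 0.10) ** 2)

    pure = base + 0.02 * rng.standard_normal((n_per_class, n_wl))
    adulterated = base + 0.08 * band + 0.02 * rng.standard_normal((n_per_class, n_wl))
    X = np.vstack([pure, adulterated])
    y = np.r_[np.zeros(n_per_class), np.ones(n_per_class)]   # dummy-coded class

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

    # PLS-DA: PLS regression on the dummy variable, class assigned by thresholding
    # the predicted response at 0.5.
    pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
    pred = (pls.predict(X_te).ravel() > 0.5).astype(int)
    accuracy = np.mean(pred == y_te)
    print(f"classification accuracy on held-out spectra: {accuracy:.2%}")
    ```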

  3. The effect of thermal treatment on the enhancement of detection of adulteration in extra virgin olive oils by synchronous fluorescence spectroscopy and chemometric analysis

    NASA Astrophysics Data System (ADS)

    Mabood, F.; Boqué, R.; Folcarelli, R.; Busto, O.; Jabeen, F.; Al-Harrasi, Ahmed; Hussain, J.

    2016-05-01

    In this study, the effect of thermal treatment on the enhancement of a synchronous fluorescence spectroscopic method for the discrimination and quantification of pure extra virgin olive oil (EVOO) samples from EVOO samples adulterated with refined oil was investigated. Two groups of samples were used. One group was analyzed at room temperature (25 °C) and the other group was thermally treated in a thermostatic water bath at 75 °C for 8 h, in contact with air and with light exposure, to favor oxidation. All the samples were then measured with synchronous fluorescence spectroscopy. Synchronous fluorescence spectra were acquired by varying the wavelength in the region from 250 to 720 nm with a constant 20 nm interval between the excitation and emission wavelengths. Pure and adulterated olive oils were discriminated by using partial least-squares discriminant analysis (PLS-DA). It was found that the best PLS-DA models were those built with the difference spectra (75 °C-25 °C), which were able to discriminate pure from adulterated oils at a 2% level of adulteration of refined olive oils. Furthermore, PLS regression models were also built to quantify the level of adulteration. Again, the best model was the one built with the difference spectra, with a prediction error of 3.18% of adulteration.

  4. Expected orbit determination performance for the TOPEX/Poseidon mission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nerem, R.S.; Putney, B.H.; Marshall, J.A.

    1993-03-01

    The TOPEX/Poseidon (T/P) mission, launched during the summer of 1992, has the requirement that the radial component of its orbit must be computed to an accuracy of 13 cm root-mean-square (rms) or better, allowing measurements of the sea surface height to be computed to similar accuracy when the satellite height is differenced with the altimeter measurements. This will be done by combining precise satellite tracking measurements with precise models of the forces acting on the satellite. The Space Geodesy Branch at Goddard Space Flight Center (GSFC), as part of the T/P precision orbit determination (POD) Team, has the responsibility within NASA for the T/P precise orbit computations. The prelaunch activities of the T/P POD Team have been mainly directed towards developing improved models of the static and time-varying gravitational forces acting on T/P and precise models for the non-conservative forces perturbing the orbit of T/P such as atmospheric drag, solar and Earth radiation pressure, and thermal imbalances. The radial orbit error budget for T/P allows 10 cm rms error due to gravity field mismodeling, 3 cm due to solid Earth and ocean tides, 6 cm due to radiative forces, and 3 cm due to atmospheric drag. A prelaunch assessment of the current modeling accuracies for these forces indicates that the radial orbit error requirements can be achieved with the current models, and can probably be surpassed once T/P tracking data are used to fine tune the models. Provided that the performance of the T/P spacecraft is nominal, the precise orbits computed by the T/P POD Team should be accurate to 13 cm or better radially.

  5. Wearable Sweat Rate Sensors for Human Thermal Comfort Monitoring.

    PubMed

    Sim, Jai Kyoung; Yoon, Sunghyun; Cho, Young-Ho

    2018-01-19

    We propose watch-type sweat rate sensors capable of automatic natural ventilation by integrating miniaturized thermo-pneumatic actuators, and experimentally verify their performance and applicability. Previous sensors using natural ventilation require a manual ventilation process or high-power, bulky thermo-pneumatic actuators to lift the sweat rate detection chambers above the skin for continuous measurement. The proposed watch-type sweat rate sensors reduce operation power by minimizing the expansion fluid volume to 0.4 ml through heat circuit modeling. The proposed sensors reduce operation power to 12.8% and weight to 47.6% compared to previous portable sensors, operating for 4 hours on 6 V batteries. A human experiment for thermal comfort monitoring was performed using the proposed sensors, which have a sensitivity of 0.039 (pF/s)/(g/m²h) and a linearity of 97.9% in the human sweat rate range. The average sweat rate difference between thermal statuses, measured in three subjects, was (32.06 ± 27.19) g/m²h across the statuses 'comfortable', 'slightly warm', 'warm', and 'hot'. The proposed sensors can thereby discriminate and compare four stages of thermal status. The sweat rate measurement error of the proposed sensors is less than 10% under an air velocity of 1.5 m/s, corresponding to human walking speed. The proposed sensors are applicable for wearable and portable use, having potential for daily thermal comfort monitoring applications.

  6. Elastic modulus and thermal stress in coating during heat cycling with different substrate shapes

    NASA Astrophysics Data System (ADS)

    Gaona, Daniel; Valarezo, Alfredo

    2015-09-01

    The elastic modulus of a deposit (E_d) can be obtained by monitoring the temperature (ΔT) and curvature (Δk) of a one-side coated long plate, namely a one-dimensional (1D) deformation model. The aim of this research is to design an experimental setup that proves whether a 1D deformation model can be scaled to complex geometries. The setup includes a laser displacement sensor mounted on a robotic arm capable of scanning a specimen surface and measuring its deformation. The reproducibility of the results is verified by comparing the present results with Stony Brook University Laboratory's results. The Δk-ΔT slope error is less than 8%, and the E_d estimation error is close to 2%. These values reveal the repeatability of the experiments. Several samples fabricated with aluminum as the substrate and 100MXC nanowire (Fe and Cr alloy) as the deposit are analyzed and compared with finite element (FE) simulations. The linear elastic behavior of 1D (flat long plate) and 2D (square plate) specimens during heating/cooling cycles is demonstrated by the high linearity of all Δk-ΔT curves (over 97%). The E_d values are approximately equal for the 1D and 2D analyses, with a median of 96 GPa and a standard deviation of 2 GPa. The correspondence between the experimental and simulated results for the 1D and 2D specimens reveals that deformation and thermal stress in coated specimens can be predicted regardless of specimen geometry through FE modeling and by using the experimental value of E_d. An example of a turbine-blade-shaped substrate is presented to validate the approach.
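
    A minimal sketch of the Δk-ΔT slope extraction underlying the E_d estimate is shown below, assuming logged curvature and temperature data. The arrays and the reference slope are hypothetical, and the Stoney-type bilayer relation that converts the slope into a deposit modulus is not reproduced here.

```python
# Hedged sketch: fit the Delta-k vs Delta-T slope and compare to a reference value.
import numpy as np

rng = np.random.default_rng(1)
delta_T = np.linspace(0.0, 150.0, 30)                       # K, hypothetical heat cycle
delta_k = 2.4e-4 * delta_T + rng.normal(0.0, 2e-4, delta_T.size)  # 1/m, hypothetical curvature

slope, intercept = np.polyfit(delta_T, delta_k, 1)
linearity = np.corrcoef(delta_T, delta_k)[0, 1]             # expected to exceed 0.97

slope_ref = 2.5e-4                                          # 1/(m K), hypothetical reference lab value
slope_error = abs(slope - slope_ref) / slope_ref * 100.0
print(f"slope = {slope:.3e} 1/(m K), linearity = {linearity:.3f}, slope error = {slope_error:.1f} %")
```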

  7. Speckle temporal stability in XAO coronagraphic images. II. Refine model for quasi-static speckle temporal evolution for VLT/SPHERE

    NASA Astrophysics Data System (ADS)

    Martinez, P.; Kasper, M.; Costille, A.; Sauvage, J. F.; Dohlen, K.; Puget, P.; Beuzit, J. L.

    2013-06-01

    Context. Observing sequences have shown that the major noise source limitation in high-contrast imaging is the presence of quasi-static speckles. The timescale on which quasi-static speckles evolve is determined by various factors, mechanical or thermal deformations, among others. Aims: Understanding these time-variable instrumental speckles and, especially, their interaction with other aberrations, referred to as the pinning effect, is paramount for the search for faint stellar companions. The temporal evolution of quasi-static speckles is, for instance, required for quantifying the gain expected when using angular differential imaging (ADI) and for determining the interval on which speckle nulling techniques must be carried out. Methods: Following an early analysis of a time series of adaptively corrected, coronagraphic images obtained under laboratory conditions with the high-order test bench (HOT) at ESO Headquarters, we confirm our results with new measurements carried out with the SPHERE instrument during its final test phase in Europe. The analysis of the residual speckle pattern in both direct and differential coronagraphic images enables the characterization of the temporal stability of quasi-static speckles. Data were obtained in a thermally actively controlled environment reproducing realistic conditions encountered at the telescope. Results: The temporal evolution of the quasi-static wavefront error exhibits a linear power law, which can be used to model quasi-static speckle evolution in the context of forthcoming high-contrast imaging instruments, with implications for instrumentation (design, observing strategies, data reduction). Such a model can be used, for instance, to derive the timescale on which non-common path aberrations must be sensed and corrected. We found in our data that the quasi-static wavefront error increases at ~0.7 Å per minute.

  8. Thermal oxidation process accelerates degradation of the olive oil mixed with sunflower oil and enables its discrimination using synchronous fluorescence spectroscopy and chemometric analysis

    NASA Astrophysics Data System (ADS)

    Mabood, Fazal; Boqué, Ricard; Folcarelli, Rita; Busto, Olga; Al-Harrasi, Ahmed; Hussain, Javid

    2015-05-01

    We have investigated the effect of thermal treatment on the discrimination of pure extra virgin olive oil (EVOO) samples from EVOO samples adulterated with sunflower oil. Two groups of samples were used. One group was analyzed at room temperature (25 °C) and the other group was thermally treated in a thermostatic water bath at 75 °C for 8 h, in contact with air and with light exposure, to favor oxidation. All samples were then measured with synchronous fluorescence spectroscopy. Fluorescence spectra were acquired by varying the excitation wavelength in the region from 250 to 720 nm. In order to optimize the difference between excitation and emission wavelengths, four constant differential wavelengths, i.e., 20 nm, 40 nm, 60 nm and 80 nm, were tried. Partial least-squares discriminant analysis (PLS-DA) was used to discriminate between pure and adulterated oils. It was found that the 20 nm difference was optimal, giving the best discrimination results. The best PLS-DA models were those built with the difference spectra (75 °C - 25 °C), which were able to discriminate pure from adulterated oils at a 2% level of adulteration. Furthermore, PLS regression models were built to quantify the level of adulteration. Again, the best model was the one built with the difference spectra, with a prediction error of 1.75% of adulteration.

  9. Stagnation point flow of viscoelastic nanomaterial over a stretched surface

    NASA Astrophysics Data System (ADS)

    Hayat, T.; Kiyani, M. Z.; Ahmad, I.; Khan, M. Ijaz; Alsaedi, A.

    2018-06-01

    The present communication discusses magnetohydrodynamic (MHD) stagnation point flow of a Jeffrey nanofluid over a stretching cylinder. The modeling accounts for Brownian motion, thermophoresis, thermal radiation and heat generation. The problem is solved using the homotopy analysis method (HAM), and residual errors for the h-curves are plotted. Convergent solutions for velocity, temperature and concentration are obtained. The skin friction coefficient, local Nusselt number and Sherwood number are studied. It is found that the velocity field decays for larger values of the magnetic variable, while the temperature and concentration fields are enhanced.

  10. Quantum State Transfer via Noisy Photonic and Phononic Waveguides

    NASA Astrophysics Data System (ADS)

    Vermersch, B.; Guimond, P.-O.; Pichler, H.; Zoller, P.

    2017-03-01

    We describe a quantum state transfer protocol, where a quantum state of photons stored in a first cavity can be faithfully transferred to a second distant cavity via an infinite 1D waveguide, while being immune to arbitrary noise (e.g., thermal noise) injected into the waveguide. We extend the model and protocol to a cavity QED setup, where atomic ensembles, or single atoms representing quantum memory, are coupled to a cavity mode. We present a detailed study of sensitivity to imperfections, and apply a quantum error correction protocol to account for random losses (or additions) of photons in the waveguide. Our numerical analysis is enabled by matrix product state techniques to simulate the complete quantum circuit, which we generalize to include thermal input fields. Our discussion applies both to photonic and phononic quantum networks.

  11. Ulysses, one year after the launch

    NASA Astrophysics Data System (ADS)

    Petersen, H.

    1991-12-01

    Ulysses has now been underway for one year in a huge heliocentric orbit. A late change in some of the blankets' external material was required to prevent electrical charging due to contamination by nozzle outgassing products. Test results are shown, covering various ranges of plasma parameters and sample temperatures. Even clean materials show a few volts of charging due to imperfections in the conductive film. The thermal environment in the Shuttle cargo bay proved to be slightly different from prelaunch predictions: less warm with the doors closed, and less cold with the doors opened. Temperatures experienced in orbit are nominal. A problem was caused by the complex interaction between a Sun-induced thermal gradient in a sensitive boom and the dynamic stability of the spacecraft. A user interface program was an invaluable tool to ease computations with the mathematical models, reduce the risk of error and provide configuration control.

  12. High precision automated face localization in thermal images: oral cancer dataset as test case

    NASA Astrophysics Data System (ADS)

    Chakraborty, M.; Raman, S. K.; Mukhopadhyay, S.; Patsa, S.; Anjum, N.; Ray, J. G.

    2017-02-01

    Automated face detection is the pivotal step in computer vision aided facial medical diagnosis and biometrics. This paper presents an automatic, subject-adaptive framework for accurate face detection in the long infrared spectrum on our database for oral cancer detection, consisting of malignant, precancerous and normal subjects of varied age groups. Previous work on oral cancer detection using Digital Infrared Thermal Imaging (DITI) reveals that patients and normal subjects differ significantly in their facial thermal distribution. It is therefore a challenging task to formulate a completely adaptive framework to veraciously localize the face from such a subject-specific modality. Our model consists of first extracting the most probable facial regions by minimum error thresholding, followed by adaptive methods that leverage the horizontal and vertical projections of the segmented thermal image. Additionally, the model incorporates our domain knowledge of exploiting temperature differences between strategic locations of the face. To the best of our knowledge, this is the pioneering work on detecting faces in thermal facial images comprising both patients and normal subjects. Previous work on face detection has not specifically targeted automated medical diagnosis; the face bounding boxes returned by those algorithms are thus loose and not apt for further medical automation. Our algorithm significantly outperforms contemporary face detection algorithms in terms of commonly used metrics for evaluating face detection accuracy. Since our method has been tested on a challenging dataset consisting of both patients and normal subjects of diverse age groups, it can be seamlessly adapted to any DITI-guided facial healthcare or biometric application.

  13. Prediction of the mass gain during high temperature oxidation of aluminized nanostructured nickel using adaptive neuro-fuzzy inference system

    NASA Astrophysics Data System (ADS)

    Hayati, M.; Rashidi, A. M.; Rezaei, A.

    2012-10-01

    In this paper, the applicability of ANFIS as an accurate model for the prediction of the mass gain during high temperature oxidation, using experimental data obtained for aluminized nanostructured (NS) nickel, is presented. For developing the model, exposure time and temperature are taken as inputs and the mass gain as output. A hybrid learning algorithm consisting of back-propagation and least-squares estimation is used for training the network. We have compared the proposed ANFIS model with experimental data. The predicted data are found to be in good agreement with the experimental data, with a mean relative error of less than 1.1%. Therefore, the ANFIS model can be used to predict the performance of thermal systems in engineering applications, such as modeling the mass gain for NS materials.

  14. Thermal and heat flow instrumentation for the space shuttle Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Hartman, G. J.; Neuner, G. J.; Pavlosky, J.

    1974-01-01

    The 100 mission lifetime requirement for the space shuttle orbiter vehicle dictates a unique set of requirements for the Thermal Protection System (TPS) thermal and heat flow instrumentation. This paper describes the design and development of such instrumentation with emphasis on assessment of the accuracy of the measurements when the instrumentation is an integral part of the TPS. The temperature and heat flow sensors considered for this application are described and the optimum choices discussed. Installation techniques are explored and the resulting impact on the system error defined.

  15. Gaussian Hypothesis Testing and Quantum Illumination.

    PubMed

    Wilde, Mark M; Tomamichel, Marco; Lloyd, Seth; Berta, Mario

    2017-09-22

    Quantum hypothesis testing is one of the most basic tasks in quantum information theory and has fundamental links with quantum communication and estimation theory. In this paper, we establish a formula that characterizes the decay rate of the minimal type-II error probability in a quantum hypothesis test of two Gaussian states given a fixed constraint on the type-I error probability. This formula is a direct function of the mean vectors and covariance matrices of the quantum Gaussian states in question. We give an application to quantum illumination, which is the task of determining whether there is a low-reflectivity object embedded in a target region with a bright thermal-noise bath. For the asymmetric-error setting, we find that a quantum illumination transmitter can achieve an error probability exponent stronger than a coherent-state transmitter of the same mean photon number, and furthermore, that it requires far fewer trials to do so. This occurs when the background thermal noise is either low or bright, which means that a quantum advantage is even easier to witness than in the symmetric-error setting because it occurs for a larger range of parameters. Going forward from here, we expect our formula to have applications in settings well beyond those considered in this paper, especially to quantum communication tasks involving quantum Gaussian channels.

  16. Towards the 1-cm SARAL orbit

    NASA Astrophysics Data System (ADS)

    Zelensky, Nikita P.; Lemoine, Frank G.; Chinn, Douglas S.; Beckley, Brian D.; Bordyugov, Oleg; Yang, Xu; Wimert, Jesse; Pavlis, Despina

    2016-12-01

    We have investigated the quality of precise orbits for the SARAL altimeter satellite using Satellite Laser Ranging (SLR) and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) data from March 14, 2013 to August 10, 2014. We have identified a 4.31 ± 0.14 cm error in the Z (cross-track) direction that defines the center-of-mass of the SARAL satellite in the spacecraft coordinate system, and we have tuned the SLR and DORIS tracking point offsets. After these changes, we reduce the average RMS of the SLR residuals for seven-day arcs from 1.85 to 1.38 cm. We tuned the non-conservative force model for SARAL, reducing the amplitude of the daily adjusted empirical accelerations by eight percent. We find that the best dynamic orbits show altimeter crossover residuals of 5.524 cm over cycles 7-15. Our analysis offers a unique illustration that high-elevation SLR residuals will not necessarily provide an accurate estimate of radial error at the 1-cm level, and that other supporting orbit tests are necessary for a better estimate. Through the application of improved models for handling time-variable gravity, the use of reduced-dynamic orbits, and through an arc-by-arc estimation of the C22 and S22 coefficients, we find from analysis of independent SLR residuals and other tests that we achieve 1.1-1.2 cm radial orbit accuracies for SARAL. The limiting errors stem from the inadequacy of the DPOD2008 and SLRF2008 station complements, and inadequacies in radiation force modeling, especially with respect to spacecraft self-shadowing and modeling of thermal variations due to eclipses.

  17. An empirical examination of WISE/NEOWISE asteroid analysis and results

    NASA Astrophysics Data System (ADS)

    Myhrvold, Nathan

    2017-10-01

    Observations made by the WISE space telescope and subsequent analysis by the NEOWISE project represent the largest corpus of asteroid data to date, describing the diameter, albedo, and other properties of the ~164,000 asteroids in the collection. I present a critical reanalysis of the WISE observational data and of NEOWISE results published in numerous papers and in the JPL Planetary Data System (PDS). This analysis reveals shortcomings and a lack of clarity, both in the original analysis and in the presentation of results. The procedures used to generate NEOWISE results fall short of established thermal modelling standards. Rather than using a uniform protocol, 10 modelling methods were applied to 12 combinations of WISE band data. Over half the NEOWISE results are based on a single band of data. Most NEOWISE curve fits are of poor quality, frequently missing many or all of the data points. About 30% of the single-band results miss all the data; 43% of the results derived from the most common multiple-band combinations miss all the data in at least one band. The NEOWISE data processing procedures rely on inconsistent assumptions, and introduce bias by systematically discarding much of the original data. I show that the true uncertainties of the WISE observational data are ~1.2 to 1.9 times larger than previously described, and that the error estimates do not fit a normal distribution. These issues call into question the validity of the NEOWISE Monte Carlo error analysis. Comparing published NEOWISE diameters to published estimates using radar, occultation, or spacecraft measurements (ROS) reveals 150 asteroids for which the NEOWISE diameters were copied exactly from the ROS source. My findings show that the accuracy of diameter estimates in the NEOWISE results depends heavily on the choice of data bands and model. Systematic errors in the diameter estimates are much larger than previously described. Systematic errors for diameters in the PDS range from -3% to +27%. Random errors range from -14% to +19% when using all four WISE bands, and from -45% to +74% in cases using only the W2 band. The results presented here show that much work remains to be done towards understanding asteroid data from WISE/NEOWISE.

  18. Multiphysics Modeling of Microwave Heating of a Frozen Heterogeneous Meal Rotating on a Turntable.

    PubMed

    Pitchai, Krishnamoorthy; Chen, Jiajia; Birla, Sohan; Jones, David; Gonzalez, Ric; Subbiah, Jeyamkondan

    2015-12-01

    A 3-dimensional (3-D) multiphysics model was developed to understand the microwave heating process of a real heterogeneous food, a multilayered frozen lasagna. Near-perfect 3-D geometries of the food package and microwave oven were used. A multiphase porous media model combining the electromagnetic heat source with heat and mass transfer, and incorporating the phase changes of melting and evaporation, was included in the finite element model. Discrete rotation of the food on the turntable was incorporated. The model simulated 6 min of microwave cooking of a 450 g frozen lasagna kept at the center of the rotating turntable in a 1200 W domestic oven. Temperature-dependent dielectric and thermal properties of the lasagna ingredients were measured and provided as inputs to the model. Simulated temperature profiles were compared with experimental temperature profiles obtained using a thermal imaging camera and fiber-optic sensors. The total moisture loss in the lasagna was predicted and compared with the experimental moisture loss during cooking. The spatial temperature patterns predicted at the top layer were in good agreement with the corresponding patterns observed in thermal images. Predicted point temperature profiles at 6 different locations within the meal were compared with experimental temperature profiles, and root mean square error (RMSE) values ranged from 6.6 to 20.0 °C. The predicted total moisture loss matched well, with an RMSE value of 0.54 g. Different layers of food components showed considerably different heating performance. Food product developers can use this model for designing food products by understanding the effects of the thickness, order, and material properties of each layer, and of the packaging shape, on cooking performance. © 2015 Institute of Food Technologists®
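
    The point-temperature validation reduces to a root mean square error between simulated and measured traces, as in the short sketch below; both profiles are synthetic placeholders for the model output and the fiber-optic sensor data.

```python
# Hedged sketch: RMSE between a simulated and a measured temperature history.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 360.0, 73)                              # s, 6 min of cooking
T_measured = 20.0 * (1.0 - np.exp(-t / 120.0)) - 5.0         # degC, made-up sensor trace
T_simulated = T_measured + rng.normal(0.0, 5.0, t.size)      # degC, made-up model output

rmse = np.sqrt(np.mean((T_simulated - T_measured) ** 2))
print(f"RMSE = {rmse:.1f} degC")   # the paper reports 6.6-20.0 degC across its 6 locations
```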

  19. Interfacing a one-dimensional lake model with a single-column atmospheric model: 2. Thermal response of the deep Lake Geneva, Switzerland under a 2 × CO2 global climate change

    NASA Astrophysics Data System (ADS)

    Perroud, Marjorie; Goyette, StéPhane

    2012-06-01

    In the companion to the present paper, the one-dimensional k-ɛ lake model SIMSTRAT is coupled to a single-column atmospheric model, nicknamed FIZC, and an application of the coupled model to the deep Lake Geneva, Switzerland, is described. In this paper, the response of Lake Geneva to global warming caused by an increase in atmospheric carbon dioxide concentration (i.e., 2 × CO2) is investigated. Coupling the models allowed for feedbacks between the lake surface and the atmosphere and produced changes in atmospheric moisture and cloud cover that further modified the downward radiation fluxes. The time evolution of atmospheric variables as well as those of the lake's thermal profile could be reproduced realistically by devising a set of adjustable parameters. In a "control" 1 × CO2 climate experiment, the coupled FIZC-SIMSTRAT model demonstrated genuine skills in reproducing epilimnetic and hypolimnetic temperatures, with annual mean errors and standard deviations of 0.25°C ± 0.25°C and 0.3°C ± 0.15°C, respectively. Doubling the CO2 concentration induced an atmospheric warming that impacted the lake's thermal structure, increasing the stability of the water column and extending the stratified period by 3 weeks. Epilimnetic temperatures were seen to increase by 2.6°C to 4.2°C, while hypolimnion temperatures increased by 2.2°C. Climate change modified components of the surface energy budget through changes mainly in air temperature, moisture, and cloud cover. During summer, reduced cloud cover resulted in an increase in the annual net solar radiation budget. A larger water vapor deficit at the air-water interface induced a cooling effect in the lake.

  20. Thermal Evolution and Crystallisation Regimes of the Martian Core

    NASA Astrophysics Data System (ADS)

    Davies, C. J.; Pommier, A.

    2015-12-01

    Though it is accepted that Mars has a sulfur-rich metallic core, its chemical and physical state as well as its time evolution are still unconstrained and debated. Several lines of evidence indicate that an internal magnetic field was once generated on Mars and that this field decayed around 3.7-4.0 Gyr ago. The standard model assumes that this field was produced by a thermal (and perhaps chemical) dynamo operating in the Martian core. We use this information to construct parameterized models of the Martian dynamo in order to place constraints on the thermochemical evolution of the Martian core, with particular focus on its crystallization regime. The compositions considered are in the Fe-S system, with S content ranging from ~10 to 16 wt%. Core radius, density and CMB pressure are varied within the errors provided by recent internal structure models that satisfy the available geodetic constraints (planetary mass, moment of inertia and tidal Love number). We also vary the melting curve and adiabat, CMB heat flow and thermal conductivity. Successful models are those that match the dynamo cessation time and fall within the bounds on present-day CMB temperature. The resulting suite of over 500 models suggests three possible crystallization regimes: growth of a solid inner core starting at the center of the planet; freezing and precipitation of solid iron (Fe-snow) from the core-mantle boundary (CMB); and freezing that begins midway through the core. Our analysis focuses on the effects of core properties that are expected to be constrained during the forthcoming InSight mission.

  1. Finite-size scaling study of the two-dimensional Blume-Capel model

    NASA Astrophysics Data System (ADS)

    Beale, Paul D.

    1986-02-01

    The phase diagram of the two-dimensional Blume-Capel model is investigated by using the technique of phenomenological finite-size scaling. The location of the tricritical point and the values of the critical and tricritical exponents are determined. The location of the tricritical point (Tt=0.610+/-0.005, Dt=1.9655+/-0.0010) is well outside the error bars for the value quoted in previous Monte Carlo simulations but in excellent agreement with more recent Monte Carlo renormalization-group results. The values of the critical and tricritical exponents, with the exception of the leading thermal tricritical exponent, are in excellent agreement with previous calculations, conjectured values, and Monte Carlo renormalization-group studies.

  2. Multipath induced errors in meteorological Doppler/interferometer location systems

    NASA Technical Reports Server (NTRS)

    Wallace, R. G.

    1984-01-01

    One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.

  3. Optimization of thermal conductivity lightweight brick type AAC (Autoclaved Aerated Concrete) effect of Si & Ca composition by using Artificial Neural Network (ANN)

    NASA Astrophysics Data System (ADS)

    Zulkifli; Wiryawan, G. P.

    2018-03-01

    Lightweight brick is an important component of building construction; its thermal, mechanical and acoustic properties therefore need to meet the relevant standards. This paper focuses on the thermal conductivity of lightweight brick. Lightweight brick has the advantages of low density (500-650 kg/m3) and lower cost, and can reduce the load by 30-40% compared to conventional clay brick. In this research, an Artificial Neural Network (ANN) is used to predict the thermal conductivity of lightweight brick of the Autoclaved Aerated Concrete (AAC) type. Based on the training and evaluation of 10 ANN models with 1 to 10 hidden nodes, the ANN with 3 hidden nodes had the best performance, with a mean validation MSE (mean square error) of 0.003269 over three training runs. This ANN was then used to predict the thermal conductivity of four lightweight brick samples. The predicted values for the AAC1, AAC2, AAC3 and AAC4 samples were 0.243, 0.29, 0.32 and 0.32 W/m·K, respectively. Furthermore, the ANN was used to determine the effect of the silicon (Si) and calcium (Ca) compositions on the thermal conductivity of lightweight brick. The ANN simulation results show that the thermal conductivity increases with increasing Si composition. The Si content is allowed a maximum of 26.57%, while the Ca content lies in the range 20.32% - 30.35%.
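
    A hedged sketch of the hidden-node sweep is given below: single-hidden-layer networks with 1 to 10 nodes are trained repeatedly and the configuration with the lowest mean validation MSE is kept. The synthetic composition/conductivity data and the scikit-learn stand-in are assumptions, not the study's implementation.

```python
# Hedged sketch: select the hidden-node count by mean cross-validated MSE.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform([20.0, 20.0], [27.0, 31.0], size=(40, 2))          # % Si, % Ca (synthetic)
y = 0.01 * X[:, 0] + 0.002 * X[:, 1] + rng.normal(0.0, 0.01, 40)   # W/m K (synthetic)

results = {}
for n_hidden in range(1, 11):
    mse_runs = []
    for seed in range(3):                      # three training repetitions, as in the study
        net = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=5000, random_state=seed)
        scores = cross_val_score(net, X, y, cv=5, scoring="neg_mean_squared_error")
        mse_runs.append(-scores.mean())
    results[n_hidden] = np.mean(mse_runs)

best = min(results, key=results.get)
print(f"best hidden-node count: {best}, mean validation MSE = {results[best]:.6f}")
```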

  4. Porosity and Mineralogy Control on the Thermal Properties of Sediments in Off-Shimokita Deep-Water Coal Bed Basin

    NASA Astrophysics Data System (ADS)

    Tanikawa, W.; Tadai, O.; Morita, S.; Lin, W.; Yamada, Y.; Sanada, Y.; Moe, K.; Kubo, Y.; Inagaki, F.

    2014-12-01

    Heat transport properties such as thermal conductivity, heat capacity, and thermal diffusivity are significant parameters that influence geothermal processes in sedimentary basins at depth. We measured the thermal properties of sediment core samples from the off-Shimokita basin obtained during IODP Expedition 337 and Expedition CK06-06 of the D/V Chikyu shakedown cruise. Overall, thermal conductivity and thermal diffusivity increased with depth and heat capacity decreased with depth, although the data were highly scattered at approximately 2000 meters below sea floor, where coal layers occur. The increase of thermal conductivity is mainly explained by the porosity reduction of the sediment through consolidation during sedimentation. The high variability of thermal conductivity within a single core section is probably caused by the variety of lithologies present in that section. Coal shows the lowest thermal conductivity, 0.4 W m⁻¹ K⁻¹, and calcite-cemented sandstone/siltstone shows the highest, around 3 W m⁻¹ K⁻¹. Thermal diffusivity and heat capacity are influenced by porosity and lithological contrast as well. The relationship between thermal conductivity and porosity at this site is well explained by the Maxwell mixing model or the geometric mean model. A one-dimensional temperature-depth profile at Site C0020 in Expedition 337, estimated from the measured physical properties and radiative heat production data, shows a decrease in thermal gradient with depth. The surface heat flow was evaluated as 29-30 mW m⁻², consistent with heat flow data near this site. Our results suggest that the increase of thermal conductivity with depth significantly controls the temperature profile at depth in the basin. If constant thermal conductivity or a constant geothermal gradient were assumed, the temperature at depth might be overestimated, which could cause large errors in predicting heat transport or hydrocarbon formation in deepwater sedimentary basins.
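
    The geometric mean mixing model referred to above relates bulk conductivity to porosity as k_bulk = k_grain^(1-φ) · k_water^φ; the grain and pore-water conductivities in the sketch below are assumed illustrative values.

```python
# Sketch of the geometric-mean mixing model for porosity-dependent conductivity.
def k_geometric_mean(porosity, k_grain=3.0, k_water=0.6):
    """Bulk thermal conductivity (W/m K) from porosity via the geometric mean model."""
    return k_grain ** (1.0 - porosity) * k_water ** porosity

for phi in (0.1, 0.3, 0.5, 0.7):
    print(f"phi = {phi:.1f}  ->  k = {k_geometric_mean(phi):.2f} W/m K")
```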

  5. Genetic particle filter application to land surface temperature downscaling

    NASA Astrophysics Data System (ADS)

    Mechri, Rihab; Ottlé, Catherine; Pannekoucke, Olivier; Kallel, Abdelaziz

    2014-03-01

    Thermal infrared data are widely used for surface flux estimation, giving the possibility to assess water and energy budgets through land surface temperature (LST). Many applications require both high spatial resolution (HSR) and high temporal resolution (HTR), which are not presently available from space. It is therefore necessary to develop methodologies that use the coarse-spatial/high-temporal-resolution LST remote-sensing products for better monitoring of fluxes at appropriate scales. For that purpose, a data assimilation method based on particle filtering was developed to downscale LST. The basic tenet of our approach is to constrain LST dynamics simulated at both HSR and HTR through the optimization of aggregated temperatures at the coarse observation scale. Thus, a genetic particle filter (GPF) data assimilation scheme was implemented and applied to a land surface model which simulates prior subpixel temperatures. First, the GPF downscaling scheme was tested on pseudo-observations generated for the landscape of the study area (Crau-Camargue, France) and its climate for the year 2006. The GPF performance was evaluated against observation errors and temporal sampling. Results show that the GPF outperforms the prior model estimates. Finally, the GPF method was applied to Spinning Enhanced Visible and InfraRed Imager time series and evaluated against HSR data provided by an Advanced Spaceborne Thermal Emission and Reflection Radiometer image acquired on 26 July 2006. The temperatures of seven land cover classes present in the study area were estimated with root-mean-square errors of less than 2.4 K, which is a very promising result for downscaling LST satellite products.
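
    A highly simplified particle filter sketch of the coarse-scale constraint is given below: candidate subpixel temperature fields are weighted by how well their aggregate matches the coarse observation and then resampled. This is a generic sequential importance resampling loop, not the published genetic particle filter, and all numbers and the toy dynamics are assumptions.

```python
# Hedged sketch: SIR particle filter constrained by aggregated (coarse-pixel) temperature.
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_subpixels = 200, 16
sigma_obs = 1.0                                   # K, assumed coarse observation error

# each particle holds a candidate set of subpixel temperatures (K)
particles = rng.normal(300.0, 3.0, size=(n_particles, n_subpixels))

def assimilate(particles, coarse_obs):
    # propagate with a toy random-walk stand-in for the land surface model
    particles = particles + rng.normal(0.0, 0.5, particles.shape)
    # weight by likelihood of the aggregated (coarse-pixel) temperature
    aggregated = particles.mean(axis=1)
    w = np.exp(-0.5 * ((aggregated - coarse_obs) / sigma_obs) ** 2)
    w /= w.sum()
    # resample (multinomial); a genetic filter would add mutation/crossover here
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

for coarse_obs in (301.0, 302.5, 303.0):          # hypothetical coarse-scale LST series
    particles = assimilate(particles, coarse_obs)

print("posterior subpixel mean temperatures:", particles.mean(axis=0).round(2))
```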

  6. Analytical expressions for noise and crosstalk voltages of the High Energy Silicon Particle Detector

    NASA Astrophysics Data System (ADS)

    Yadav, I.; Shrimali, H.; Liberali, V.; Andreazza, A.

    2018-01-01

    The paper presents the design and implementation of a silicon particle detector array together with derived closed-form equations for the signal-to-noise ratio (SNR) and crosstalk voltages. The noise analysis demonstrates the effect of interpixel capacitances (IPC) between the center pixel (where the particle hits) and its neighbouring pixels, resulting in capacitive crosstalk. The pixel array has been designed and simulated in a 180 nm BCD technology of STMicroelectronics. The technology uses a supply voltage (VDD) of 1.8 V and a substrate potential of -50 V. The area of a unit pixel is 250×50 μm² with a substrate resistivity of 125 Ω·cm and a depletion depth of 30 μm. The mathematical model includes the effects of various types of noise, viz. shot noise, flicker noise, thermal noise, and the capacitive crosstalk. This work compares the noise and crosstalk results from the proposed mathematical model with circuit simulation results for a given simulation environment. The results show excellent agreement between the circuit simulations and the mathematical model. The average relative error (AVR) between the simulations and the model is 12% for the noise spectral densities, while the comparison gives errors of 3% and 11.5% for the crosstalk voltages and the SNR results, respectively.

  7. Comparison of three approaches to model grapevine organogenesis in conditions of fluctuating temperature, solar radiation and soil water content.

    PubMed

    Pallas, B; Loi, C; Christophe, A; Cournède, P H; Lecoeur, J

    2011-04-01

    There is increasing interest in the development of plant growth models representing the complex system of interactions between the different determinants of plant development. These approaches are particularly relevant for grapevine organogenesis, which is a highly plastic process dependent on temperature, solar radiation, soil water deficit and trophic competition. The extent to which three plant growth models were able to deal with the observed plasticity of axis organogenesis was assessed. In the first model, axis organogenesis was dependent solely on temperature, through thermal time. In the second model, axis organogenesis was modelled through functional relationships linking meristem activity and trophic competition. In the last model, the rate of phytomer appearance on each axis was modelled as a function of both the trophic status of the plant and the direct effect of soil water content on potential meristem activity. The model including relationships between trophic competition and meristem behaviour decreased the root mean squared error (RMSE) of the organogenesis simulations by a factor of nine compared with the thermal time-based model. Compared with the model in which axis organogenesis was driven only by trophic competition, implementing relationships between water deficit and meristem behaviour further improved the organogenesis simulations, reducing the RMSE by a factor of three. The resulting model can be seen as a first attempt to build a comprehensive plant growth model simulating the development of the whole plant under fluctuating conditions of temperature, solar radiation and soil water content. We propose a new hypothesis concerning the effects of the different determinants of axis organogenesis. The rate of phytomer appearance according to thermal time was strongly affected by the plant trophic status and soil water deficit. Furthermore, the decrease in meristem activity when soil water is depleted does not result from source/sink imbalances.

  8. [Spectral quantitative analysis by nonlinear partial least squares based on neural network internal model for flue gas of thermal power plant].

    PubMed

    Cao, Hui; Li, Yao-Jiang; Zhou, Yan; Wang, Yan-Xia

    2014-11-01

    To deal with the nonlinear characteristics of spectral data of thermal power plant flue gas, a nonlinear partial least squares (PLS) analysis method with an internal model based on a neural network is adopted in this paper. The latent variables of the independent variables and the dependent variables are first extracted by PLS regression, and then used as the inputs and outputs of a neural network, respectively, to build the nonlinear internal model through training. For spectral data of flue gases of a thermal power plant, PLS, nonlinear PLS with a back-propagation neural network internal model (BP-NPLS), nonlinear PLS with a radial basis function neural network internal model (RBF-NPLS) and nonlinear PLS with an adaptive fuzzy inference system internal model (ANFIS-NPLS) are compared. The root mean square error of prediction (RMSEP) for sulfur dioxide of BP-NPLS, RBF-NPLS and ANFIS-NPLS is reduced by 16.96%, 16.60% and 19.55% relative to PLS, respectively. The RMSEP for nitric oxide of BP-NPLS, RBF-NPLS and ANFIS-NPLS is reduced by 8.60%, 8.47% and 10.09% relative to PLS, respectively. The RMSEP for nitrogen dioxide of BP-NPLS, RBF-NPLS and ANFIS-NPLS is reduced by 2.11%, 3.91% and 3.97% relative to PLS, respectively. Experimental results show that nonlinear PLS is more suitable than PLS for the quantitative analysis of flue gas. Moreover, by using neural network functions that can closely approximate nonlinear characteristics, the nonlinear PLS method with an internal model described in this paper has good predictive capability and robustness, and to a certain extent overcomes the limitations of nonlinear PLS methods with other internal models such as polynomial and spline functions. ANFIS-NPLS performs best, as the adaptive fuzzy inference system internal model has a greater capacity to learn and reduces the residuals effectively. Hence, ANFIS-NPLS is an accurate and useful method for quantitative thermal power plant flue gas analysis.
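
    The following sketch illustrates the BP-NPLS idea in simplified form: PLS latent scores are extracted first and a small back-propagation network supplies the nonlinear inner mapping. The score-to-target shortcut, the synthetic spectra and the scikit-learn components are assumptions rather than the paper's implementation.

```python
# Hedged sketch: linear PLS outer projection plus a neural-network inner model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 200))                                  # flue-gas spectra (synthetic)
y = np.sin(X[:, :5].sum(axis=1)) + 0.05 * rng.normal(size=120)   # SO2 conc. (synthetic, nonlinear)

pls = PLSRegression(n_components=6).fit(X, y)
T_scores = pls.transform(X)                      # outer linear PLS projection

inner = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
inner.fit(T_scores, y)                           # nonlinear inner model (BP network)

rmse_linear = np.sqrt(mean_squared_error(y, pls.predict(X).ravel()))
rmse_npls = np.sqrt(mean_squared_error(y, inner.predict(T_scores)))
print(f"linear PLS RMSE = {rmse_linear:.3f}, NPLS (NN inner model) RMSE = {rmse_npls:.3f}")
```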

  9. Optimal simulations of ultrasonic fields produced by large thermal therapy arrays using the angular spectrum approach

    PubMed Central

    Zeng, Xiaozheng; McGough, Robert J.

    2009-01-01

    The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and the numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
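
    A minimal angular spectrum propagation sketch is shown below: the input pressure plane is Fourier transformed, each plane-wave component is multiplied by its propagation phase, and an inverse transform yields the field at the output plane. Grid size, frequency and the Gaussian source are illustrative assumptions.

```python
# Hedged sketch: angular spectrum propagation of a pressure plane.
import numpy as np

c, f = 1500.0, 1.0e6                 # sound speed (m/s) and frequency (Hz), assumed
k = 2.0 * np.pi * f / c              # wavenumber (rad/m)
n, dx = 256, 0.25e-3                 # grid points and spacing (m), assumed

x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
p0 = np.exp(-(X**2 + Y**2) / (2.0 * (2e-3) ** 2)).astype(complex)  # input pressure plane

def angular_spectrum_propagate(p_in, z):
    """Propagate a pressure plane by distance z (m) using the angular spectrum."""
    kx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx)
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))  # evanescent components decay
    H = np.exp(1j * kz * z)                               # plane-wave propagation phase
    return np.fft.ifft2(np.fft.fft2(p_in) * H)

p_z = angular_spectrum_propagate(p0, z=30e-3)
print("peak |p| at z = 30 mm:", np.abs(p_z).max())
```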

  10. A model of the CO2 exchanges between biosphere and atmosphere in the tundra

    NASA Technical Reports Server (NTRS)

    Labgaa, Rachid R.; Gautier, Catherine

    1992-01-01

    A physical model of the soil thermal regime in a permafrost terrain has been developed and validated with soil temperature measurements at Barrow, Alaska. The model calculates daily soil temperatures as a function of depth and average moisture contents of the organic and mineral layers using a set of five climatic variables, i.e., air temperature, precipitation, cloudiness, wind speed, and relative humidity. The model is not only designed to study the impact of climate change on the soil temperature and moisture regime, but also to provide the input to a decomposition and net primary production model. In this context, it is well known that CO2 exchanges between the terrestrial biosphere and the atmosphere are driven by soil temperature through decomposition of soil organic matter and root respiration. However, in tundra ecosystems, net CO2 exchange is extremely sensitive to soil moisture content; therefore it is necessary to predict variations in soil moisture in order to assess the impact of climate change on carbon fluxes. To this end, the present model includes the representation of the soil moisture response to changes in climatic conditions. The results presented in the foregoing demonstrate that large errors in soil temperature and permafrost depth estimates arise from neglecting the dependence of the soil thermal regime on soil moisture contents. Permafrost terrain is an example of a situation where soil moisture and temperature are particularly interrelated: drainage conditions improve when the depth of the permafrost increases; a decrease in soil moisture content leads to a decrease in the latent heat required for the phase transition so that the heat penetrates faster and deeper, and the maximum depth of thaw increases; and as expected, soil thermal coefficients increase with moisture.

  11. Online estimation of internal stack temperatures in solid oxide fuel cell power generating units

    NASA Astrophysics Data System (ADS)

    Dolenc, B.; Vrečko, D.; Juričić, Ɖ.; Pohjoranta, A.; Pianese, C.

    2016-12-01

    Thermal stress is one of the main factors affecting the degradation rate of solid oxide fuel cell (SOFC) stacks. In order to mitigate the possibility of fatal thermal stress, stack temperatures and the corresponding thermal gradients need to be continuously controlled during operation. Due to the fact that in future commercial applications the use of temperature sensors embedded within the stack is impractical, the use of estimators appears to be a viable option. In this paper we present an efficient and consistent approach to data-driven design of the estimator for maximum and minimum stack temperatures intended (i) to be of high precision, (ii) to be simple to implement on conventional platforms like programmable logic controllers, and (iii) to maintain reliability in spite of degradation processes. By careful application of subspace identification, supported by physical arguments, we derive a simple estimator structure capable of producing estimates with 3% error irrespective of the evolving stack degradation. The degradation drift is handled without any explicit modelling. The approach is experimentally validated on a 10 kW SOFC system.

  12. A goal-based angular adaptivity method for thermal radiation modelling in non grey media

    NASA Astrophysics Data System (ADS)

    Soucasse, Laurent; Dargaville, Steven; Buchan, Andrew G.; Pain, Christopher C.

    2017-10-01

    This paper investigates for the first time a goal-based angular adaptivity method for thermal radiation transport, suitable for non grey media when the radiation field is coupled with an unsteady flow field through an energy balance. Anisotropic angular adaptivity is achieved by using a Haar wavelet finite element expansion that forms a hierarchical angular basis with compact support and does not require any angular interpolation in space. The novelty of this work lies in (1) the definition of a target functional to compute the goal-based error measure equal to the radiative source term of the energy balance, which is the quantity of interest in the context of coupled flow-radiation calculations; (2) the use of different optimal angular resolutions for each absorption coefficient class, built from a global model of the radiative properties of the medium. The accuracy and efficiency of the goal-based angular adaptivity method is assessed in a coupled flow-radiation problem relevant for air pollution modelling in street canyons. Compared to a uniform Haar wavelet expansion, the adapted resolution uses 5 times fewer angular basis functions and is 6.5 times quicker, given the same accuracy in the radiative source term.

  13. Production of Engineered Fabrics Using Artificial Neural Network-Genetic Algorithm Hybrid Model

    NASA Astrophysics Data System (ADS)

    Mitra, Ashis; Majumdar, Prabal Kumar; Banerjee, Debamalya

    2015-10-01

    The process of fabric engineering generally practised in most textile mills is complicated, repetitive, tedious and time consuming. To eliminate this trial-and-error approach, a new approach to fabric engineering has been attempted in this work. Data sets of construction parameters [comprising ends per inch, picks per inch, warp count and weft count] and three fabric properties (namely drape coefficient, air permeability and thermal resistance) of 25 handloom cotton fabrics have been used. The weights and biases of three artificial neural network (ANN) models developed for the prediction of drape coefficient, air permeability and thermal resistance were used to formulate the fitness (objective) function and constraints of the optimization problem. The optimization problem was solved using a genetic algorithm (GA). For both fabrics attempted for engineering, the target and simulated fabric properties were very close. The GA was able to search for the optimum set of fabric construction parameters with reasonably good accuracy, except in the case of EPI. However, the overall result is encouraging and can be improved further by using larger data sets of handloom fabrics with the hybrid ANN-GA model.
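
    The reverse-engineering loop can be sketched as an evolutionary search over construction parameters against property targets, as below. The surrogate property predictor, bounds and targets are hypothetical, and SciPy's differential evolution stands in for the genetic algorithm used with the trained ANN weights.

```python
# Hedged sketch: evolutionary search for construction parameters that hit property targets.
import numpy as np
from scipy.optimize import differential_evolution

def predict_properties(params):
    """Hypothetical stand-in for the three trained ANN property models."""
    epi, ppi, warp, weft = params
    drape = 0.9 - 0.003 * epi - 0.002 * ppi
    air_perm = 300.0 - 2.0 * epi - 1.5 * ppi + 0.5 * warp
    thermal_res = 0.010 + 0.0001 * weft + 0.00005 * warp
    return np.array([drape, air_perm, thermal_res])

target = np.array([0.65, 120.0, 0.015])            # desired drape, air permeability, thermal resistance
scale = np.array([0.65, 120.0, 0.015])             # normalise each property in the fitness

def fitness(params):
    return np.sum(((predict_properties(params) - target) / scale) ** 2)

bounds = [(40, 80), (40, 80), (20, 60), (20, 60)]  # EPI, PPI, warp count, weft count (assumed ranges)
result = differential_evolution(fitness, bounds, seed=0)
print("suggested construction parameters:", np.round(result.x, 1))
```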

  14. Conductivity Cell Thermal Inertia Correction Revisited

    NASA Astrophysics Data System (ADS)

    Eriksen, C. C.

    2012-12-01

    Salinity measurements made with a CTD (conductivity-temperature-depth instrument) rely on accurate estimation of water temperature within their conductivity cell. Lueck (1990) developed a theoretical framework for heat transfer between the cell body and water passing through it. Based on this model, Lueck and Picklo (1990) introduced the practice of correcting for cell thermal inertia by filtering a temperature time series using two parameters, an amplitude α and a decay time constant τ, a practice now widely used. Typically these two parameters are chosen for a given cell configuration and internal flushing speed by a statistical method applied to a particular data set. Here, thermal inertia correction theory has been extended to apply to flow speeds spanning well over an order of magnitude, both within and outside a conductivity cell, to provide predictions of α and τ from cell geometry and composition. The extended model enables thermal inertia correction for the variable flows encountered by conductivity cells on autonomous gliders and floats, as well as tethered platforms. The length scale formed as the product of cell encounter speed of isotherms, α, and τ can be used to gauge the size of the temperature correction for a given thermal stratification. For cells flushed by dynamic pressure variation induced by platform motion, this length varies by less than a factor of 2 over more than a decade of speed variation. The magnitude of correction for free-flow flushed sensors is comparable to that of pumped cells, but at an order of magnitude in energy savings. Flow conditions around a cell's exterior are found to be of comparable importance to thermal inertia response as flushing speed. Simplification of cell thermal response to a single normal mode is most valid at slow speed. Error in thermal inertia estimation arises from both neglect of higher modes and numerical discretization of the correction scheme, both of which can be easily quantified. Consideration of thermal inertia correction enables assessment of various CTD sampling schemes. Spot sampling by pumping a cell intermittently provides particular challenges, and may lead to biases in inferred salinity that are comparable to climate signals reported from profiling float arrays.
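
    One commonly used discrete form of the two-parameter (α, τ) correction is a first-order recursive filter applied to the temperature series, as sketched below. The coefficient expressions follow a widely used discretization of the Lueck and Picklo correction, but the specific α, τ values, sample rate and sign convention here are illustrative assumptions.

```python
# Hedged sketch: recursive two-parameter cell thermal-inertia correction.
import numpy as np

def cell_thermal_mass_correction(temp, dt, alpha=0.03, tau=7.0):
    """Return the recursive correction term built from temperature differences."""
    fn = 1.0 / (2.0 * dt)                           # Nyquist frequency of the sampling
    a = 4.0 * fn * alpha * tau / (1.0 + 4.0 * fn * tau)
    b = 1.0 - 2.0 * a / alpha
    corr = np.zeros_like(temp)
    for n in range(1, len(temp)):
        corr[n] = -b * corr[n - 1] + a * (temp[n] - temp[n - 1])
    return corr

dt = 1.0                                            # s, assumed sample interval
t = np.arange(0, 120, dt)
temp = np.where(t < 30, 10.0, 8.0)                  # degC, synthetic thermocline crossing

correction = cell_thermal_mass_correction(temp, dt)
temp_in_cell = temp - correction                    # one convention; the sign/application varies
print("max correction magnitude: %.3f degC" % np.abs(correction).max())
```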

  15. Polarization modeling and predictions for DKIST part 3: focal ratio and thermal dependencies of spectral polarization fringes and optic retardance

    NASA Astrophysics Data System (ADS)

    Harrington, David M.; Sueoka, Stacey R.

    2018-01-01

    Data products from high spectral resolution astronomical polarimeters are often limited by fringes. Fringes can skew derived magnetic field properties from spectropolarimetric data. Fringe removal algorithms can also corrupt the data if the fringes and object signals are too similar. For some narrow-band imaging polarimeters, fringes change the calibration retarder properties and dominate the calibration errors. Systems-level engineering tools for polarimetric instrumentation require accurate predictions of fringe amplitudes, periods for transmission, diattenuation, and retardance. The relevant instabilities caused by environmental, thermal, and optical properties can be modeled and mitigation tools developed. We create spectral polarization fringe amplitude and temporal instability predictions by applying the Berreman calculus and simple interferometric calculations to optics in beams of varying F/number. We then apply the formalism to superachromatic six-crystal retarders in converging beams under beam thermal loading in outdoor environmental conditions for two of the world's largest observatories: the 10-m Keck telescope and the Daniel K. Inouye Solar Telescope (DKIST). DKIST will produce a 300-W optical beam, which has imposed stringent requirements on the large diameter six-crystal retarders, dichroic beamsplitters, and internal optics. DKIST retarders are used in a converging beam with F/ratios between 8 and 62. The fringe spectral periods, amplitudes, and thermal models of retarder behavior assisted DKIST optical designs and calibration plans with future application to many astronomical spectropolarimeters. The Low Resolution Imaging Spectrograph with polarimetry instrument at Keck also uses six-crystal retarders in a converging F/13 beam in a Cassegrain focus exposed to summit environmental conditions providing observational verification of our predictions.

  16. Comparison of the results of several heat transfer computer codes when applied to a hypothetical nuclear waste repository

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Claiborne, H.C.; Wagner, R.S.; Just, R.A.

    1979-12-01

    A direct comparison of transient thermal calculations was made with the heat transfer codes HEATING5, THAC-SIP-3D, ADINAT, SINDA, TRUMP, and TRANCO for a hypothetical nuclear waste repository. With the exception of TRUMP and SINDA (actually closer to the earlier CINDA3G version), the other codes agreed to within +-5% for the temperature rises as a function of time. The TRUMP results agreed within +-5% up to about 50 years, where the maximum temperature occurs, and then began an oscillatory behavior with up to 25% deviations at longer times. This could have resulted from time steps that were too large or from some unknown system problems. The available version of the SINDA code was not compatible with the IBM compiler without using an alternative method for handling a variable thermal conductivity. The results were about 40% low, but a reasonable agreement was obtained by assuming a uniform thermal conductivity; however, a programming error was later discovered in the alternative method. Some work is required on the IBM version to make it compatible with the system and still use the recommended method of handling variable thermal conductivity. TRANCO can only be run as a 2-D model, and TRUMP and CINDA apparently required longer running times and did not agree in the 2-D case; therefore, only HEATING5, THAC-SIP-3D, and ADINAT were used for the 3-D model calculations. The codes agreed within +-5%; at distances of about 1 ft from the waste canister edge, temperature rises were also close to that predicted by the 3-D model.

  17. Ejecta distribution patterns at Meteor Crater, Arizona: On the applicability of lithologic end-member deconvolution for spaceborne thermal infrared data of Earth and Mars

    NASA Astrophysics Data System (ADS)

    Ramsey, Michael S.

    2002-08-01

    A spectral deconvolution using a constrained least squares approach was applied to airborne thermal infrared multispectral scanner (TIMS) data of Meteor Crater, Arizona. The three principal sedimentary units sampled by the impact were chosen as end-members, and their spectra were derived from the emissivity images. To validate previous estimates of the erosion of the near-rim ejecta, the model was used to identify the areal extent of the reworked material. The outputs of the algorithm reveal subtle mixing patterns in the ejecta, identified larger ejecta blocks, and were used to further constrain the volume of Coconino Sandstone present in the vicinity of the crater. The availability of the multialtitude data set also provided a means to examine the effects of resolution degradation and quantify the subsequent errors on the model. These data served as a test case for the use of image-derived lithologic end-members at various scales, which is critical for examining thermal infrared data of planetary surfaces. The model results indicate that the Coconino Ss. reworked ejecta is detectable over 3 km from the crater. This was confirmed by field sampling within the primary ejecta field and wind streak. The areal distribution patterns of this unit imply past erosion and subsequent sediment transport that was low to moderate compared with early studies and therefore places further constraints on the ejecta degradation of Meteor Crater. It also provides an important example of the analysis that can be performed on thermal infrared data currently being returned from Earth orbit and expected from Mars in 2002.
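
    The unmixing step can be sketched as a bounded least-squares fit of end-member emissivity spectra to each pixel spectrum, with fractions constrained to be non-negative and pushed toward a unit sum. The end-member matrix and pixel spectrum below are synthetic placeholders, not TIMS-derived values.

```python
# Hedged sketch: constrained linear unmixing of a pixel spectrum into end-member fractions.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n_bands = 6
E = np.array([                                   # columns = three end-member emissivities (synthetic)
    [0.95, 0.90, 0.88],
    [0.93, 0.91, 0.90],
    [0.85, 0.92, 0.91],
    [0.80, 0.93, 0.92],
    [0.88, 0.94, 0.90],
    [0.94, 0.95, 0.89],
])
true_f = np.array([0.6, 0.3, 0.1])
pixel = E @ true_f + rng.normal(0.0, 0.002, n_bands)

# enforce 0 <= f <= 1; append a heavily weighted row to push sum(f) toward 1
w = 100.0
A = np.vstack([E, w * np.ones((1, 3))])
b = np.concatenate([pixel, [w]])
res = lsq_linear(A, b, bounds=(0.0, 1.0))

rms_resid = float(np.sqrt(np.mean((E @ res.x - pixel) ** 2)))
print("retrieved areal fractions:", np.round(res.x, 3), " RMS residual:", round(rms_resid, 4))
```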

  18. Physics Based Modeling and Rendering of Vegetation in the Thermal Infrared

    NASA Technical Reports Server (NTRS)

    Smith, J. A.; Ballard, J. R., Jr.

    1999-01-01

    We outline a procedure for rendering physically-based thermal infrared images of simple vegetation scenes. Our approach incorporates the biophysical processes that affect the temperature distribution of the elements within a scene. Computer graphics plays a key role in two respects. First, in computing the distribution of scene shaded and sunlit facets and, second, in the final image rendering once the temperatures of all the elements in the scene have been computed. We illustrate our approach for a simple corn scene where the three-dimensional geometry is constructed based on measured morphological attributes of the row crop. Statistical methods are used to construct a representation of the scene in agreement with the measured characteristics. Our results are quite good. The rendered images exhibit realistic behavior in directional properties as a function of view and sun angle. The root-mean-square error in measured versus predicted brightness temperatures for the scene was 2.1 deg C.

  19. Analysis of on-orbit thermal characteristics of the 15-meter hoop/column antenna

    NASA Technical Reports Server (NTRS)

    Andersen, Gregory C.; Farmer, Jeffery T.; Garrison, James

    1987-01-01

    In recent years, interest in large deployable space antennas has led to the development of the 15 meter hoop/column antenna. The thermal environment the antenna is expected to experience during orbit is examined and the temperature distributions leading to reflector surface distortion errors are determined. Two flight orientations are examined, corresponding to (1) normal operation and (2) use in a Shuttle-attached flight experiment. A reduced element model was used to determine element temperatures at 16 orbit points for both flight orientations. The temperatures ranged from a minimum of 188 K to a maximum of 326 K. Based on the element temperatures, orbit positions leading to possible worst-case surface distortions were determined, and the corresponding temperatures were used in a static finite element analysis to quantify surface control cord deflections. The predicted changes in the control cord lengths were in the submillimeter range.

  20. Evaluation of the OMI Cloud Pressures Derived from Rotational Raman Scattering by Comparisons with other Satellite Data and Radiative Transfer Simulations

    NASA Technical Reports Server (NTRS)

    Vasilkov, Alexander; Joiner, Joanna; Spurr, Robert; Bhartia, Pawan K.; Levelt, Pieternel; Stephens, Graeme

    2009-01-01

    In this paper we examine differences between cloud pressures retrieved from the Ozone Monitoring Instrument (OMI) using the ultraviolet rotational Raman scattering (RRS) algorithm and those from the thermal infrared (IR) Aqua/MODIS. Several cloud data sets are currently being used in OMI trace gas retrieval algorithms including climatologies based on IR measurements and simultaneous cloud parameters derived from OMI. From a validation perspective, it is important to understand the OMI retrieved cloud parameters and how they differ from those derived from the IR. To this end, we perform radiative transfer calculations to simulate the effects of different geophysical conditions on the OMI RRS cloud pressure retrievals. We also quantify errors related to the use of the Mixed Lambert-Equivalent Reflectivity (MLER) concept as currently implemented in the OMI algorithms. Using properties from the Cloudsat radar and MODIS, we show that radiative transfer calculations support the following: (1) The MLER model is adequate for single-layer optically thick, geometrically thin clouds, but can produce significant errors in estimated cloud pressure for optically thin clouds. (2) In a two-layer cloud, the RRS algorithm may retrieve a cloud pressure that is either between the two cloud decks or even beneath the top of the lower cloud deck because of scattering between the cloud layers; the retrieved pressure depends upon the viewing geometry and the optical depth of the upper cloud deck. (3) Absorbing aerosol in and above a cloud can produce significant errors in the retrieved cloud pressure. (4) The retrieved RRS effective pressure for a deep convective cloud will be significantly higher than the physical cloud top pressure derived with thermal IR.

  1. Analysis of Meteorological Satellite location and data collection system concepts

    NASA Technical Reports Server (NTRS)

    Wallace, R. G.; Reed, D. L.

    1981-01-01

    A satellite system that employs a spaceborne RF interferometer to determine the location and velocity of data collection platforms attached to meteorological balloons is proposed. This meteorological advanced location and data collection system (MALDCS) is intended to fly aboard a low polar orbiting satellite. The flight instrument configuration includes antennas supported on long deployable booms. The platform location and velocity estimation errors introduced by the dynamic and thermal behavior of the antenna booms, the effects of the booms on the performance of the spacecraft's attitude control system, and the control system design considerations critical to stable operation are examined. The physical parameters of the Astromast type of deployable boom were used in the dynamic and thermal boom analysis, and the TIROS N system was assumed for the attitude control analysis. Velocity estimation error versus boom length was determined, and an optimum (minimum-error) antenna separation distance was identified. A description of the proposed MALDCS system and a discussion of ambiguity resolution are included.

  2. An adaptive optics system for solid-state laser systems used in inertial confinement fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salmon, J.T.; Bliss, E.S.; Byrd, J.L.

    1995-09-17

    Using adaptive optics the authors have obtained nearly diffraction-limited 5 kJ, 3 nsec output pulses at 1.053 μm from the Beamlet demonstration system for the National Ignition Facility (NIF). The peak Strehl ratio was improved from 0.009 to 0.50, as estimated from measured wavefront errors. They have also measured the relaxation of the thermally induced aberrations in the main beam line over a period of 4.5 hours. Peak-to-valley aberrations range from 6.8 waves at 1.053 μm within 30 minutes after a full system shot to 3.9 waves after 4.5 hours. The adaptive optics system must have enough range to correct accumulated thermal aberrations from several shots in addition to the immediate shot-induced error. Accumulated wavefront errors in the beam line will affect both the design of the adaptive optics system for NIF and the performance of that system.
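
    The abstract reports Strehl ratios "estimated from measured wavefront errors" without stating how; the extended Maréchal approximation is one common way to make that estimate, and a sketch under that assumption reproduces numbers of the quoted magnitude:

```python
import numpy as np

def strehl_marechal(rms_wavefront_error_waves):
    """Extended Marechal approximation: S ~ exp(-(2*pi*sigma)^2),
    with sigma the RMS wavefront error expressed in waves."""
    return np.exp(-(2.0 * np.pi * rms_wavefront_error_waves) ** 2)

# Illustrative values only: ~0.35 waves RMS gives S ~ 0.008 and
# ~0.13 waves RMS gives S ~ 0.5, comparable to the quoted 0.009 -> 0.50.
for sigma in (0.35, 0.13, 0.05):
    print(f"sigma = {sigma:.2f} waves -> Strehl ~ {strehl_marechal(sigma):.3f}")
```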

  3. Correction of Thermal Gradient Errors in Stem Thermocouple Hygrometers

    PubMed Central

    Michel, Burlyn E.

    1979-01-01

    Stem thermocouple hygrometers were subjected to transient and stable thermal gradients while in contact with reference solutions of NaCl. Both dew point and psychrometric voltages were directly related to zero offset voltages, the latter reflecting the size of the thermal gradient. Although slopes were affected by absolute temperature, they were not affected by water potential. One hygrometer required a correction of 1.75 bars water potential per microvolt of zero offset, a value that was constant from 20 to 30 C. PMID:16660685

  4. Life cycle monitoring of lithium-ion polymer batteries using cost-effective thermal infrared sensors with applications for lifetime prediction

    NASA Astrophysics Data System (ADS)

    Zhou, Xunfei; Malik, Anav; Hsieh, Sheng-Jen

    2017-05-01

    Lithium-ion batteries have become indispensable parts of our lives for their high-energy density and long lifespan. However, failures due to abusive usage conditions, flawed manufacturing processes, and aging adversely affect battery performance and can even endanger people and property. Therefore, battery cells that are failing or reaching their end-of-life need to be replaced. Traditionally, battery lifetime prediction is achieved by analyzing data from current, voltage and impedance sensors. However, such a prognostic system is expensive to implement and requires direct contact. In this study, low-cost thermal infrared sensors were used to acquire thermographic images throughout the entire lifetime of small-scale lithium-ion polymer batteries (410 cycles). The infrared system (non-destructive) took temperature readings from multiple batteries during charging and discharging cycles of 1C. Thermal characteristics of the batteries were derived from the thermographic images. A time-dependent and spatially resolved temperature mapping was obtained and quantitatively analyzed. The developed model can predict cycle number using the first 10 minutes of surface temperature data acquired through infrared imaging at the beginning of the cycle, with an average error rate of less than 10%. This approach can be used to correlate thermal characteristics of the batteries with life cycles, and to propose cost-effective thermal infrared imaging applications in battery prognostic systems.

  5. Synchronized Electronic Shutter System (SESS) for Thermal Nondestructive Evaluation

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.

    2001-01-01

    The purpose of this paper is to describe a new method for thermal nondestructive evaluation. This method uses a synchronized electronic shutter system (SESS) to remove the heat lamp's influence on the thermal data during and after flash heating. There are two main concerns when using flash heating. The first concern is during the flash when the photons are reflected back into the camera. This tends to saturate the detectors and potentially introduces unknown and uncorrectable errors when curve fitting the data to a model. To address this, an electronically controlled shutter was placed over the infrared camera lens. Before firing the flash lamps, the shutter is opened to acquire the necessary background data for offset calibration. During flash heating, the shutter is closed to prevent the photons from the high intensity flash from saturating the camera's detectors. The second concern is after the flash heating where the lamps radiate heat after firing. This residual cooling introduces an unwanted transient thermal response into the data. To remove this residual effect, a shutter was placed over the flash lamps to block the infrared heat radiating from the flash head after heating. This helped to remove the transient contribution of the flash. The flash lamp shutters were synchronized electronically with the camera shutter. Results are given comparing the use of the thermal inspection with and without the shutter system.

  6. Optical device for thermal diffusivity determination in liquids by reflection of a thermal wave

    NASA Astrophysics Data System (ADS)

    Sánchez-Pérez, C.; De León-Hernández, A.; García-Cadena, C.

    2017-08-01

    In this work, we present a device for determination of the thermal diffusivity using the oblique reflection of a thermal wave within a solid slab that is in contact with the medium to be characterized. By using the reflection near a critical angle under the assumption that thermal waves obey Snell's law of refraction with the square root of the thermal diffusivities, the unknown thermal diffusivity is obtained by simple formulae. Experimentally, the sensor response is measured using the photothermal beam deflection technique within a slab that results in a compact device with no contact of the laser probing beam with the sample. We describe the theoretical basis and provide experimental results to validate the proposed method. We determine the thermal diffusivity of tridistilled water and glycerin solutions with an error of less than 0.5%.
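
    A sketch of the inversion implied by the abstract: if thermal waves refract according to sin θ1/sin θ2 = sqrt(α1/α2) (α being the thermal diffusivity), a measured critical angle at the slab-sample interface gives the unknown diffusivity directly. The slab diffusivity, the measured angle, and the direction of the inequality (which medium is the "faster" one) are assumptions here, not values or conventions taken from the paper:

```python
import numpy as np

def sample_diffusivity(alpha_slab_mm2_s, theta_c_deg):
    """Invert the assumed thermal-wave Snell relation sin(t1)/sin(t2) = sqrt(a1/a2):
    at the critical angle, sin(theta_c) = sqrt(alpha_slab / alpha_sample),
    so alpha_sample = alpha_slab / sin(theta_c)**2.
    (Assumes the slab is the lower-diffusivity medium; check the paper's
    convention before relying on this form.)"""
    return alpha_slab_mm2_s / np.sin(np.radians(theta_c_deg)) ** 2

alpha_slab = 0.11     # mm^2/s, hypothetical slab diffusivity
theta_c = 72.0        # deg, hypothetical measured critical angle
print(f"alpha_sample ~ {sample_diffusivity(alpha_slab, theta_c):.3f} mm^2/s")
```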

  7. Predictive Thermal Control Applied to HabEx

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas E.

    2017-01-01

    Exoplanet science can be accomplished with a telescope that has an internal coronagraph or with an external starshade. An internal coronagraph architecture requires extreme wavefront stability (10 pm change per 10 minutes for 10^-10 contrast), so every source of wavefront error (WFE) must be controlled. Analysis has been done to estimate the thermal stability required to meet the wavefront stability requirement. This paper illustrates the potential of a new thermal control method called predictive thermal control (PTC) to achieve the required thermal stability. A simple development test using PTC indicates that PTC may meet the thermal stability requirements. Further testing of the PTC method in flight-like environments will be conducted in the X-ray and Cryogenic Facility (XRCF) at Marshall Space Flight Center (MSFC).

  8. Predictive thermal control applied to HabEx

    NASA Astrophysics Data System (ADS)

    Brooks, Thomas E.

    2017-09-01

    Exoplanet science can be accomplished with a telescope that has an internal coronagraph or with an external starshade. An internal coronagraph architecture requires extreme wavefront stability (10 pm change per 10 minutes for 10^-10 contrast), so every source of wavefront error (WFE) must be controlled. Analysis has been done to estimate the thermal stability required to meet the wavefront stability requirement. This paper illustrates the potential of a new thermal control method called predictive thermal control (PTC) to achieve the required thermal stability. A simple development test using PTC indicates that PTC may meet the thermal stability requirements. Further testing of the PTC method in flight-like environments will be conducted in the X-ray and Cryogenic Facility (XRCF) at Marshall Space Flight Center (MSFC).

  9. Investigating Summer Thermal Stratification in Lake Ontario

    NASA Astrophysics Data System (ADS)

    James, S. C.; Arifin, R. R.; Craig, P. M.; Hamlet, A. F.

    2017-12-01

    Seasonal temperature variations establish strong vertical density gradients (thermoclines) between the epilimnion and hypolimnion. Accurate simulation of vertical mixing and seasonal stratification of large lakes is a crucial element of the thermodynamic coupling between lakes and the atmosphere in integrated models. Time-varying thermal stratification patterns can be accurately simulated with the versatile Environmental Fluid Dynamics Code (EFDC). Lake Ontario bathymetry was interpolated onto a 2-km-resolution curvilinear grid with vertical layering using a new approach in EFDC+, the so-called "sigma-zed" coordinate system which allows the number of vertical layers to be varied based on water depth. Inflow from the Niagara River and outflow to the St. Lawrence River in conjunction with hourly meteorological data from seven local weather stations plus three-hourly data from the North American Regional Reanalysis govern the hydrodynamic and thermodynamic responses of the Lake. EFDC+'s evaporation algorithm was updated to more accurately simulate net surface heat fluxes. A new vertical mixing scheme from Vinçon-Leite that implements different eddy diffusivity formulations above and below the thermocline was compared to results from the original Mellor-Yamada vertical mixing scheme. The model was calibrated by adjusting solar-radiation absorption coefficients in addition to background horizontal and vertical mixing parameters. Model skill was evaluated by comparing measured and simulated vertical temperature profiles at shallow (20 m) and deep (180 m) locations on the Lake. These model improvements, especially the new sigma-zed vertical discretization, accurately capture thermal-stratification patterns with low root-mean-squared errors when using the Vinçon-Leite vertical mixing scheme.

  10. A feasibility investigation for modeling and optimization of temperature in bone drilling using fuzzy logic and Taguchi optimization methodology.

    PubMed

    Pandey, Rupesh Kumar; Panda, Sudhansu Sekhar

    2014-11-01

    Drilling of bone is a common procedure in orthopedic surgery to produce holes for screw insertion to fixate fracture devices and implants. The increase in temperature during such a procedure increases the chances of thermal invasion of bone, which can cause thermal osteonecrosis resulting in an increase of healing time or a reduction in the stability and strength of the fixation. Therefore, drilling of bone with minimum temperature is a major challenge for orthopedic fracture treatment. This investigation discusses the use of fuzzy logic and Taguchi methodology for predicting and minimizing the temperature produced during bone drilling. The drilling experiments have been conducted on bovine bone using Taguchi's L25 experimental design. A fuzzy model is developed for predicting the temperature during orthopedic drilling as a function of the drilling process parameters (point angle, helix angle, feed rate and cutting speed). Optimum bone drilling process parameters for minimizing the temperature are determined using the Taguchi method. The effect of individual cutting parameters on the temperature produced is evaluated using analysis of variance. The fuzzy model using triangular and trapezoidal membership predicts the temperature within a maximum error of ±7%. Taguchi analysis of the obtained results determined the optimal drilling conditions for minimizing the temperature as A3B5C1. The developed system will simplify the tedious task of modeling and determination of the optimal process parameters to minimize the bone drilling temperature. It will reduce the risk of thermal osteonecrosis and can be very effective for the online condition monitoring of the process. © IMechE 2014.
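
    The fuzzy model above uses triangular and trapezoidal membership functions over the drilling parameters. A minimal sketch of a triangular membership function and the fuzzification of one input; the breakpoints and the cutting-speed value are hypothetical, not the paper's calibrated sets:

```python
def tri_mf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzification of a cutting speed of 1200 rpm into
# "low", "medium", and "high" sets.
speed = 1200.0
memberships = {
    "low":    tri_mf(speed, 0.0, 500.0, 1500.0),
    "medium": tri_mf(speed, 500.0, 1500.0, 2500.0),
    "high":   tri_mf(speed, 1500.0, 2500.0, 3500.0),
}
print(memberships)   # -> {'low': 0.3, 'medium': 0.7, 'high': 0.0}
```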

  11. Detailed finite element method modeling of evaporating multi-component droplets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diddens, Christian, E-mail: C.Diddens@tue.nl

    The evaporation of sessile multi-component droplets is modeled with an axisymmetric finite element method. The model comprises the coupled processes of mixture evaporation, multi-component flow with composition-dependent fluid properties and thermal effects. Based on representative examples of water–glycerol and water–ethanol droplets, regular and chaotic examples of solutal Marangoni flows are discussed. Furthermore, the relevance of the substrate thickness for the evaporative cooling of volatile binary mixture droplets is pointed out. It is shown how the evaporation of the more volatile component can drastically decrease the interface temperature, so that ambient vapor of the less volatile component condenses on the droplet. Finally, results of this model are compared with corresponding results of a lubrication theory model, showing that the application of lubrication theory can cause considerable errors even for moderate contact angles of 40°.

  12. MacBurn's cylinder test problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shestakov, Aleksei I.

    2016-02-29

    This note describes a test problem for MacBurn that illustrates its performance. The source is centered inside a cylinder whose axial-extent-to-radius ratio is chosen such that each end receives 1/4 of the thermal energy. The source (fireball) is modeled either as a point or as a disk of finite radius, as described by Marrs et al. For the latter, the disk is divided into 13 equal-area segments, each approximated as a point source, to model a partially occluded fireball. If the source is modeled as a single point, one obtains very nearly the expected deposition, e.g., 1/4 of the flux on each end, and energy is conserved. If the source is modeled as a disk, both energy conservation and the deposited energy fractions degrade. However, errors decrease as the ratio of source radius to domain size decreases. Modeling the source as a disk increases run-times.
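
    One way to back out the geometry implied above (my derivation, not taken from the MacBurn report): for an isotropic point source at the centre of a cylinder of radius R and half-length h, the fraction of emission intercepted by one end cap is (1 - cos θ)/2 with cos θ = h/sqrt(h² + R²); setting that fraction to 1/4 gives h = R/sqrt(3), i.e. an axial-extent-to-radius ratio 2h/R of about 1.155:

```python
import numpy as np

def end_cap_fraction(h_over_R):
    """Fraction of an isotropic point source's emission (source at the
    cylinder centre) that reaches ONE end cap, for half-length h and radius R."""
    cos_theta = h_over_R / np.sqrt(h_over_R**2 + 1.0)
    return 0.5 * (1.0 - cos_theta)

h_over_R = 1.0 / np.sqrt(3.0)                      # analytic solution of fraction = 1/4
print(end_cap_fraction(h_over_R))                  # -> 0.25
print("axial extent / radius =", 2.0 * h_over_R)   # -> ~1.1547
```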

  13. RenderView: physics-based multi- and hyperspectral rendering using measured background panoramics

    NASA Astrophysics Data System (ADS)

    Talcott, Denise M.; Brown, Wade W.; Thomas, David J.

    2003-09-01

    As part of the survivability engineering process it is necessary to accurately model and visualize the vehicle signatures in multi- or hyperspectral bands of interest. The signature at a given wavelength is a function of the surface optical properties, reflection of the background and, in the thermal region, the emission of thermal radiation. Currently, it is difficult to obtain and utilize background models that are of sufficient fidelity when compared with the vehicle models. In addition, the background models create an additional layer of uncertainty in estimating the vehicle's signature. Therefore, to meet exacting rendering requirements we have developed RenderView, which incorporates the full bidirectional reflectance distribution function (BRDF). Instead of using a modeled background we have incorporated a measured calibrated background panoramic image to provide the high fidelity background interaction. Uncertainty in the background signature is reduced to the error in the measurement, which is considerably smaller than the uncertainty inherent in a modeled background. RenderView utilizes a number of different descriptions of the BRDF, including the Sandford-Robertson. In addition, it provides complete conservation of energy with off axis sampling. A description of RenderView will be presented along with a methodology developed for collecting background panoramics. Examples of the RenderView output and the background panoramics will be presented along with our approach to handling the solar irradiance problem.

  14. Analytical thermal resistance model for high power double-clad fiber on rectangular plate with convective cooling at upper and lower surfaces

    NASA Astrophysics Data System (ADS)

    Lv, Yi; Zheng, Huai; Liu, Sheng

    2018-07-01

    Thermal resistance network models of an optical fiber embedded in a substrate are established for the cases with and without convective heat transfer on the upper surface of the substrate. These models are applied to calculate the heat dissipation in a high-power ytterbium-doped double-clad fiber (YDCF) power amplifier. First, the temperatures at two points on the fiber are measured when there is no convective heat transfer on the upper surface. Then, numerical simulation is used to verify the temperature change of the fiber as the effective convective heat transfer coefficient of the lower surface, heff, increases, with the upper surface subjected to three loading conditions of hu = 1, 5 and 15 W/(m² K). The axial temperature distribution of the optical fiber is also presented at four different values of hu when heff is 30 W/(m² K). Absolute values of the relative errors are less than 7.08%. The results show that the analytical models can accurately calculate the temperature distribution of the optical fiber when the fiber is encapsulated in the substrate. The corresponding relationship is helpful for further optimizing the packaging design of the fiber cooling system.

  15. Predicting temperature drop rate of mass concrete during an initial cooling period using genetic programming

    NASA Astrophysics Data System (ADS)

    Bhattarai, Santosh; Zhou, Yihong; Zhao, Chunju; Zhou, Huawei

    2018-02-01

    Thermal cracking of concrete dams depends upon the rate at which the concrete is cooled (temperature drop rate per day) within an initial cooling period during the construction phase. Thus, in order to control thermal cracking of such structures, the temperature rise due to the heat of hydration of cement should be brought down at a suitable rate. In this study, an attempt has been made to formulate the relation between the cooling rate of mass concrete, the age of the concrete, and the water cooling parameters: flow rate and inlet temperature of the cooling water. Data measured during the summer season (April-August, from 2009 to 2012) from a recently constructed high concrete dam were used to derive a prediction model with the help of the Genetic Programming (GP) software "Eureqa". The coefficient of determination (R) and mean square error (MSE) were used to evaluate the performance of the model; their values are 0.8855 and 0.002961, respectively. Sensitivity analysis was performed to evaluate the relative impact of the input parameters on the target parameter. Further, when the proposed model was tested with an independent dataset not included in the analysis, the results obtained from the GP model were close to the real field data.
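
    The two evaluation metrics quoted above (R = 0.8855, MSE = 0.002961) are typically computed as follows; the measured and predicted cooling-rate arrays below are placeholders, not the study's data:

```python
import numpy as np

measured  = np.array([0.42, 0.38, 0.35, 0.31, 0.29, 0.27])  # placeholder cooling rates (deg C/day)
predicted = np.array([0.40, 0.39, 0.33, 0.32, 0.28, 0.26])  # placeholder GP-model predictions

r   = np.corrcoef(measured, predicted)[0, 1]        # correlation coefficient R
mse = np.mean((measured - predicted) ** 2)          # mean square error

print(f"R = {r:.4f}, MSE = {mse:.6f}")
```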

  16. Differential multi-MOSFET nuclear radiation sensor

    NASA Technical Reports Server (NTRS)

    Deoliveira, W. A.

    1977-01-01

    The circuit allows minimization of thermal-drift errors, low power consumption, operation over a wide dynamic range, and improved sensitivity and stability with metal-oxide-semiconductor field-effect transistor sensors.

  17. Infrared identification of internal overheating components inside an electric control cabinet by inverse heat transfer problem

    NASA Astrophysics Data System (ADS)

    Yang, Li; Wang, Ye; Liu, Huikai; Yan, Guanghui; Kou, Wei

    2014-11-01

    Overheating components inside an enclosure, such as an electric control cabinet, a moving object, or a running machine, can easily lead to equipment failure or fire. In recent years, infrared remote sensing has been used to inspect the surface temperature of an object in order to identify overheating components inside it, and using infrared thermal imaging of surface temperatures to identify internal overheating elements inside an electric control cabinet has important practical applications. In this paper, a test bench of an electric control cabinet was established and an experimental study was conducted on the inverse identification of internal overheating components using infrared thermal imaging. A heat transfer model of the electric control cabinet was built, and the temperature distribution of the cabinet with an internal overheating element was simulated using the finite volume method (FVM). The outer surface temperature of the cabinet was measured using an infrared thermal imager. Combining computer image processing and infrared temperature measurement, the surface temperature distribution of the cabinet was extracted, and the position and temperature of the internal overheating element were identified using an inverse heat transfer problem (IHTP) identification algorithm. The results show that, for a single overheating element inside the electric control cabinet, the identification errors of the temperature and position were 2.11% and 5.32%; for multiple overheating elements, the identification errors of the temperature and positions were 3.28% and 15.63%. The feasibility and effectiveness of the IHTP method and the correctness of the FVM-based identification algorithm were validated.

  18. Accelerated Aging in Electrolytic Capacitors for Prognostics

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Kulkarni, Chetan; Saha, Sankalita; Biswas, Gautam; Goebel, Kai Frank

    2012-01-01

    The focus of this work is the analysis of different degradation phenomena based on thermal overstress and electrical overstress accelerated aging systems and the use of accelerated aging techniques for prognostics algorithm development. Results on thermal overstress and electrical overstress experiments are presented. In addition, preliminary results toward the development of physics-based degradation models are presented focusing on the electrolyte evaporation failure mechanism. An empirical degradation model based on percentage capacitance loss under electrical overstress is presented and used in: (i) a Bayesian-based implementation of model-based prognostics using a discrete Kalman filter for health state estimation, and (ii) a dynamic system representation of the degradation model for forecasting and remaining useful life (RUL) estimation. A leave-one-out validation methodology is used to assess the validity of the methodology under the small-sample-size constraint. The results observed on the RUL estimation are consistent through the validation tests comparing relative accuracy and prediction error. It has been observed that the inaccuracy of the model in representing the change in degradation behavior observed at the end of the test data is consistent throughout the validation tests, indicating the need for a more detailed degradation model or the use of an algorithm that could estimate model parameters on-line. Based on the observed degradation process under different stress intensities with rest periods, the need for more sophisticated degradation models is further supported. The current degradation model does not represent the capacitance recovery over rest periods following an accelerated aging stress period.
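
    The prognostic step above pairs an empirical capacitance-loss model with a discrete Kalman filter for health-state estimation. A minimal scalar sketch of that idea; the linear degradation rate, noise variances, and measurements are hypothetical, not the paper's fitted model:

```python
# Scalar discrete Kalman filter tracking percentage capacitance loss.
rate = 0.08        # assumed % loss per aging cycle (process model x_k = x_{k-1} + rate)
Q, R = 0.01, 0.25  # assumed process and measurement noise variances

x, P = 0.0, 1.0    # initial health state (% capacitance loss) and covariance
measurements = [0.1, 0.3, 0.2, 0.5, 0.4, 0.7]   # hypothetical measured % loss

for z in measurements:
    # Predict
    x_pred = x + rate
    P_pred = P + Q
    # Update
    K = P_pred / (P_pred + R)          # Kalman gain
    x = x_pred + K * (z - x_pred)
    P = (1.0 - K) * P_pred

print(f"estimated capacitance loss: {x:.2f} %  (variance {P:.3f})")
```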

  19. A comparison of reduced-order modelling techniques for application in hyperthermia control and estimation.

    PubMed

    Bailey, E A; Dutton, A W; Mattingly, M; Devasia, S; Roemer, R B

    1998-01-01

    Reduced-order modelling techniques can make important contributions in the control and state estimation of large systems. In hyperthermia, reduced-order modelling can provide a useful tool by which a large thermal model can be reduced to the most significant subset of its full-order modes, making real-time control and estimation possible. Two such reduction methods, one based on modal decomposition and the other on balanced realization, are compared in the context of simulated hyperthermia heat transfer problems. The results show that the modal decomposition reduction method has three significant advantages over that of balanced realization. First, modal decomposition reduced models result in less error, when compared to the full-order model, than balanced realization reduced models of similar order in problems with low or moderate advective heat transfer. Second, because the balanced realization based methods require a priori knowledge of the sensor and actuator placements, the reduced-order model is not robust to changes in sensor or actuator locations, a limitation not present in modal decomposition. Third, the modal decomposition transformation is less demanding computationally. On the other hand, in thermal problems dominated by advective heat transfer, numerical instabilities make modal decomposition based reduction problematic. Modal decomposition methods are therefore recommended for reduction of models in which advection is not dominant and research continues into methods to render balanced realization based reduction more suitable for real-time clinical hyperthermia control and estimation.
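
    As an illustration of the modal-decomposition reduction discussed above, applied to a generic linear thermal system dT/dt = A·T + B·u: keep only the slowest-decaying eigenmodes. The 1-D conduction system built here is a stand-in, not one of the paper's hyperthermia models:

```python
import numpy as np

# Illustrative full-order model: 1-D conduction on n nodes (tridiagonal Laplacian).
n = 50
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # discrete Laplacian, unit spacing
B = np.zeros((n, 1)); B[n // 2, 0] = 1.0                   # heat input at the centre node

# Modal decomposition: A = V diag(lam) V^{-1}; keep the k slowest modes.
lam, V = np.linalg.eig(A)
order = np.argsort(np.abs(lam.real))       # slowest (least negative) eigenvalues first
k = 6
idx = order[:k]

Vk = V[:, idx]
Ar = np.diag(lam[idx])                     # reduced (diagonal) state matrix
Br = np.linalg.pinv(Vk) @ B                # reduced input matrix

print("full order:", n, "-> reduced order:", k)
print("retained eigenvalues:", np.round(lam[idx].real, 4))
```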

  20. Reusable bi-directional 3ω sensor to measure thermal conductivity of 100-μm thick biological tissues

    NASA Astrophysics Data System (ADS)

    Lubner, Sean D.; Choi, Jeunghwan; Wehmeyer, Geoff; Waag, Bastian; Mishra, Vivek; Natesan, Harishankar; Bischof, John C.; Dames, Chris

    2015-01-01

    Accurate knowledge of the thermal conductivity (k) of biological tissues is important for cryopreservation, thermal ablation, and cryosurgery. Here, we adapt the 3ω method—widely used for rigid, inorganic solids—as a reusable sensor to measure k of soft biological samples two orders of magnitude thinner than required by conventional tissue characterization methods. Analytical and numerical studies quantify the error of the commonly used "boundary mismatch approximation" of the bi-directional 3ω geometry, confirm that the generalized slope method is exact in the low-frequency limit, and bound its error for finite frequencies. The bi-directional 3ω measurement device is validated using control experiments to within ±2% (liquid water, standard deviation) and ±5% (ice). Measurements of mouse liver cover temperatures ranging from -69 °C to +33 °C. The liver results are independent of sample thicknesses from 3 mm down to 100 μm and agree with available literature for non-mouse liver to within the measurement scatter.
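
    For context on the "generalized slope method" mentioned above: the textbook 3ω slope formula for a line heater of length L on a semi-infinite sample relates k to the slope of the in-phase temperature oscillation versus ln(2ω), k = -P / (2πL · dΔT/d ln 2ω). The sketch below uses synthetic data and the single-sided formula; the paper's bi-directional geometry generalizes this, so treat the numbers as illustrative:

```python
import numpy as np

P, L = 0.025, 1.0e-3           # heater power (W) and heater length (m), hypothetical
k_true = 0.6                   # W/m/K, value used to synthesize the data

# Synthetic in-phase amplitudes: dT = -P/(2*pi*L*k) * ln(2*omega) + const
omega = np.logspace(2, 4, 20)                       # rad/s
dT = -P / (2 * np.pi * L * k_true) * np.log(2 * omega) + 70.0

slope = np.polyfit(np.log(2 * omega), dT, 1)[0]     # dDeltaT / d ln(2*omega)
k_est = -P / (2 * np.pi * L * slope)
print(f"recovered k = {k_est:.3f} W/m/K")
```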

  1. Reusable bi-directional 3ω sensor to measure thermal conductivity of 100-μm thick biological tissues.

    PubMed

    Lubner, Sean D; Choi, Jeunghwan; Wehmeyer, Geoff; Waag, Bastian; Mishra, Vivek; Natesan, Harishankar; Bischof, John C; Dames, Chris

    2015-01-01

    Accurate knowledge of the thermal conductivity (k) of biological tissues is important for cryopreservation, thermal ablation, and cryosurgery. Here, we adapt the 3ω method (widely used for rigid, inorganic solids) as a reusable sensor to measure k of soft biological samples two orders of magnitude thinner than required by conventional tissue characterization methods. Analytical and numerical studies quantify the error of the commonly used "boundary mismatch approximation" of the bi-directional 3ω geometry, confirm that the generalized slope method is exact in the low-frequency limit, and bound its error for finite frequencies. The bi-directional 3ω measurement device is validated using control experiments to within ±2% (liquid water, standard deviation) and ±5% (ice). Measurements of mouse liver cover temperatures ranging from -69 °C to +33 °C. The liver results are independent of sample thicknesses from 3 mm down to 100 μm and agree with available literature for non-mouse liver to within the measurement scatter.

  2. On Combining Thermal-Infrared and Radio-Occultation Data of Saturn's Atmosphere

    NASA Technical Reports Server (NTRS)

    Flasar, F. M.; Schinder, P. J.; Conrath, B. J.

    2008-01-01

    Radio-occultation and thermal-infrared measurements are complementary investigations for sounding planetary atmospheres. The vertical resolution afforded by radio occultations is typically approximately 1 km or better, whereas that from infrared sounding is often comparable to a scale height. On the other hand, an instrument like CIRS can easily generate global maps of temperature and composition, whereas occultation soundings are usually distributed more sparsely. The starting point for radio-occultation inversions is determining the residual Doppler-shifted frequency, that is the shift in frequency from what it would be in the absence of the atmosphere. Hence the positions and relative velocities of the spacecraft, target atmosphere, and DSN receiving station must be known to high accuracy. It is not surprising that the inversions can be susceptible to sources of systematic errors. Stratospheric temperature profiles on Titan retrieved from Cassini radio occultations were found to be very susceptible to errors in the reconstructed spacecraft velocities (approximately equal to 1 mm/s). Here the ability to adjust the spacecraft ephemeris so that the profiles matched those retrieved from CIRS limb sounding proved to be critical in mitigating this error. A similar procedure can be used for Saturn, although the sensitivity of its retrieved profiles to this type of error seems to be smaller. One issue that has appeared in inverting the Cassini occultations by Saturn is the uncertainty in its equatorial bulge, that is, the shape in its iso-density surfaces at low latitudes. Typically one approximates that surface as a geopotential surface by assuming a barotropic atmosphere. However, the recent controversy in the equatorial winds, i.e., whether they changed between the Voyager (1981) era and later (after 1996) epochs of Cassini and some Hubble observations, has made it difficult to know the exact shape of the surface, and it leads to uncertainties in the retrieved temperature profiles of one to a few kelvins. This propagates into errors in the retrieved helium abundance, which makes use of thermal-infrared spectra and synthetic spectra computed with retrieved radio-occultation temperature profiles. The highest abundances are retrieved with the faster Voyager-era winds, but even these abundances are somewhat smaller than those retrieved from the thermal-infrared data alone (albeit with larger formal errors). The helium abundance determination is most sensitive to temperatures in the upper troposphere. Further progress may include matching the radio-occultation profiles with those from CIRS limb sounding in the upper stratosphere.

  3. Robust simulation of buckled structures using reduced order modeling

    NASA Astrophysics Data System (ADS)

    Wiebe, R.; Perez, R. A.; Spottswood, S. M.

    2016-09-01

    Lightweight metallic structures are a mainstay in aerospace engineering. For these structures, stability, rather than strength, is often the critical limit state in design. For example, buckling of panels and stiffeners may occur during emergency high-g maneuvers, while in supersonic and hypersonic aircraft, it may be induced by thermal stresses. The longstanding solution to such challenges was to increase the sizing of the structural members, which is counter to the ever present need to minimize weight for reasons of efficiency and performance. In this work we present some recent results in the area of reduced order modeling of post-buckled thin beams. A thorough parametric study of the response of a beam to changing harmonic loading parameters, which is useful in exposing complex phenomena and exercising numerical models, is presented. Two error metrics that use, but require no time stepping of, a (computationally expensive) truth model are also introduced. The error metrics are applied to several interesting forcing parameter cases identified from the parametric study and are shown to yield useful information about the quality of a candidate reduced order model. Parametric studies, especially when considering forcing and structural geometry parameters, coupled environments, and uncertainties would be computationally intractable with finite element models. The goal is to make rapid simulation of complex nonlinear dynamic behavior possible for distributed systems via fast and accurate reduced order models. This ability is crucial in allowing designers to rigorously probe the robustness of their designs to account for variations in loading, structural imperfections, and other uncertainties.

  4. Modeling and validation of spectral BRDF on material surface of space target

    NASA Astrophysics Data System (ADS)

    Hou, Qingyu; Zhi, Xiyang; Zhang, Huili; Zhang, Wei

    2014-11-01

    The modeling and validation methods for the spectral BRDF of the material surface of a space target are presented. First, the microscopic characteristics of the space target's material surface were analyzed, and a fiber-optic spectrometer was used to measure the directional reflectivity of typical material surfaces. To determine whether the material surface of the space target is isotropic, atomic force microscopy was used to measure the surface structure and obtain a Gaussian distribution model of the microscopic surface element heights. Then, a spectral BRDF model was constructed based on the assumptions that the material surface is isotropic and that the surface micro-facets follow the measured Gaussian height distribution; the model characterizes both smooth and rough surfaces well and describes the material surface of the space target appropriately. Finally, a spectral BRDF measurement platform was set up in a laboratory, comprising a tungsten halogen lamp lighting system, a fiber-optic spectrometer detection system, and a mechanical measurement system, with the entire experimental measurement controlled and the data collected automatically by computer. The spectral BRDF of a yellow thermal control material and a solar cell was measured, showing the relationship between the reflection angle and BRDF values at three wavelengths (380 nm, 550 nm, and 780 nm), and the difference between the theoretical model values and the measured data was evaluated by the relative RMS error. Data analysis shows that the relative RMS error is less than 6%, which verifies the correctness of the spectral BRDF model.

  5. Planck 2015 results. X. Diffuse component separation: Foreground maps

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Adam, R.; Ade, P. A. R.; Aghanim, N.; Alves, M. I. R.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Jeune, M.; Leahy, J. P.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Orlando, E.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Strong, A. W.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, F.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Wilkinson, A.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    Planck has mapped the microwave sky in temperature over nine frequency bands between 30 and 857 GHz and in polarization over seven frequency bands between 30 and 353 GHz. In this paper we consider the problem of diffuse astrophysical component separation, and process these maps within a Bayesian framework to derive an internally consistent set of full-sky astrophysical component maps. Component separation dedicated to cosmic microwave background (CMB) reconstruction is described in a companion paper. For the temperature analysis, we combine the Planck observations with the 9-yr Wilkinson Microwave Anisotropy Probe (WMAP) sky maps and the Haslam et al. 408 MHz map, to derive a joint model of CMB, synchrotron, free-free, spinning dust, CO, line emission in the 94 and 100 GHz channels, and thermal dust emission. Full-sky maps are provided for each component, with an angular resolution varying between 7.5 arcmin and 1 deg. Global parameters (monopoles, dipoles, relative calibration, and bandpass errors) are fitted jointly with the sky model, and best-fit values are tabulated. For polarization, the model includes CMB, synchrotron, and thermal dust emission. These models provide excellent fits to the observed data, with rms temperature residuals smaller than 4 μK over 93% of the sky for all Planck frequencies up to 353 GHz, and fractional errors smaller than 1% in the remaining 7% of the sky. The main limitations of the temperature model at the lower frequencies are internal degeneracies among the spinning dust, free-free, and synchrotron components; additional observations from external low-frequency experiments will be essential to break these degeneracies. The main limitations of the temperature model at the higher frequencies are uncertainties in the 545 and 857 GHz calibration and zero-points. For polarization, the main outstanding issues are instrumental systematics in the 100-353 GHz bands on large angular scales in the form of temperature-to-polarization leakage, uncertainties in the analogue-to-digital conversion, and corrections for the very long time constant of the bolometer detectors, all of which are expected to improve in the near future.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adam, R.; Ade, P. A. R.; Aghanim, N.

    We report that Planck has mapped the microwave sky in temperature over nine frequency bands between 30 and 857 GHz and in polarization over seven frequency bands between 30 and 353 GHz. In this paper we consider the problem of diffuse astrophysical component separation, and process these maps within a Bayesian framework to derive an internally consistent set of full-sky astrophysical component maps. Component separation dedicated to cosmic microwave background (CMB) reconstruction is described in a companion paper. For the temperature analysis, we combine the Planck observations with the 9-yr Wilkinson Microwave Anisotropy Probe (WMAP) sky maps and the Haslam et al. 408 MHz map, to derive a joint model of CMB, synchrotron, free-free, spinning dust, CO, line emission in the 94 and 100 GHz channels, and thermal dust emission. Full-sky maps are provided for each component, with an angular resolution varying between 7.5 arcmin and 1 deg. Global parameters (monopoles, dipoles, relative calibration, and bandpass errors) are fitted jointly with the sky model, and best-fit values are tabulated. For polarization, the model includes CMB, synchrotron, and thermal dust emission. These models provide excellent fits to the observed data, with rms temperature residuals smaller than 4 μK over 93% of the sky for all Planck frequencies up to 353 GHz, and fractional errors smaller than 1% in the remaining 7% of the sky. The main limitations of the temperature model at the lower frequencies are internal degeneracies among the spinning dust, free-free, and synchrotron components; additional observations from external low-frequency experiments will be essential to break these degeneracies. The main limitations of the temperature model at the higher frequencies are uncertainties in the 545 and 857 GHz calibration and zero-points. For polarization, the main outstanding issues are instrumental systematics in the 100–353 GHz bands on large angular scales in the form of temperature-to-polarization leakage, uncertainties in the analogue-to-digital conversion, and corrections for the very long time constant of the bolometer detectors, all of which are expected to improve in the near future.

  7. Planck 2015 results: X. Diffuse component separation: Foreground maps

    DOE PAGES

    Adam, R.; Ade, P. A. R.; Aghanim, N.; ...

    2016-09-20

    We report that Planck has mapped the microwave sky in temperature over nine frequency bands between 30 and 857 GHz and in polarization over seven frequency bands between 30 and 353 GHz. In this paper we consider the problem of diffuse astrophysical component separation, and process these maps within a Bayesian framework to derive an internally consistent set of full-sky astrophysical component maps. Component separation dedicated to cosmic microwave background (CMB) reconstruction is described in a companion paper. For the temperature analysis, we combine the Planck observations with the 9-yr Wilkinson Microwave Anisotropy Probe (WMAP) sky maps and the Haslam et al. 408 MHz map, to derive a joint model of CMB, synchrotron, free-free, spinning dust, CO, line emission in the 94 and 100 GHz channels, and thermal dust emission. Full-sky maps are provided for each component, with an angular resolution varying between 7.5 arcmin and 1 deg. Global parameters (monopoles, dipoles, relative calibration, and bandpass errors) are fitted jointly with the sky model, and best-fit values are tabulated. For polarization, the model includes CMB, synchrotron, and thermal dust emission. These models provide excellent fits to the observed data, with rms temperature residuals smaller than 4 μK over 93% of the sky for all Planck frequencies up to 353 GHz, and fractional errors smaller than 1% in the remaining 7% of the sky. The main limitations of the temperature model at the lower frequencies are internal degeneracies among the spinning dust, free-free, and synchrotron components; additional observations from external low-frequency experiments will be essential to break these degeneracies. The main limitations of the temperature model at the higher frequencies are uncertainties in the 545 and 857 GHz calibration and zero-points. For polarization, the main outstanding issues are instrumental systematics in the 100–353 GHz bands on large angular scales in the form of temperature-to-polarization leakage, uncertainties in the analogue-to-digital conversion, and corrections for the very long time constant of the bolometer detectors, all of which are expected to improve in the near future.

  8. Impacts of updated spectroscopy on thermal infrared retrievals of methane evaluated with HIPPO data

    NASA Astrophysics Data System (ADS)

    Alvarado, M. J.; Payne, V. H.; Cady-Pereira, K. E.; Hegarty, J. D.; Kulawik, S. S.; Wecht, K. J.; Worden, J. R.; Pittman, J. V.; Wofsy, S. C.

    2014-09-01

    Errors in the spectroscopic parameters used in the forward radiative transfer model can introduce altitude-, spatially-, and temporally-dependent biases in trace gas retrievals. For well-mixed trace gases such as methane, where the variability of tropospheric mixing ratios is relatively small, reducing such biases is particularly important. We use aircraft observations from all five missions of the HIAPER Pole-to-Pole Observations (HIPPO) of the Carbon Cycle and Greenhouse Gases Study to evaluate the impact of updates to spectroscopic parameters for methane (CH4), water vapor (H2O), and nitrous oxide (N2O) on thermal infrared retrievals of methane from the NASA Aura Tropospheric Emission Spectrometer (TES). We find that updates to the spectroscopic parameters for CH4 result in a substantially smaller mean bias in the retrieved CH4 when compared with HIPPO observations. After an N2O-based correction, the bias in TES methane upper tropospheric representative values for measurements between 50° S and 50° N decreases from 56.9 to 25.7 ppbv, while the bias in the lower tropospheric representative value increases only slightly (from 27.3 to 28.4 ppbv). For retrievals with less than 1.6 DOFS, the bias is reduced from 26.8 to 4.8 ppbv. We also find that updates to the spectroscopic parameters for N2O reduce the errors in the retrieved N2O profile.

  9. VizieR Online Data Catalog: WISE/NEOWISE Mars-crossing asteroids (Ali-Lagoa+, 2017)

    NASA Astrophysics Data System (ADS)

    Ali-Lagoa, V.; Delbo, M.

    2017-07-01

    We fitted the near-Earth asteroid thermal model of Harris (1998, Icarus, 131, 29) to WISE/NEOWISE thermal infrared data (see, e.g., Mainzer et al. 2011ApJ...736..100M, and Masiero et al. 2014, Cat. J/ApJ/791/121). The table contains the best-fitting values of size and beaming parameter. We note that the beaming parameter is a strictly positive quantity, but a negative sign is given to indicate whenever we could not fit it and had to assume a default value. We also provide the visible geometric albedos computed from the diameter and the tabulated absolute magnitudes. Minimum relative errors of 10, 15, and 20 percent should be considered for size, beaming parameter and albedo in those cases for which the beaming parameter could be fitted. Otherwise, the minimum relative errors in size and albedo increase to 20 and 40 percent (see, e.g., Mainzer et al. 2011ApJ...736..100M). The asteroid absolute magnitudes and slope parameters retrieved from the Minor Planet Center (MPC) are included, as well as the number of observations used in each WISE band (nW2, nW3, nW4) and the corresponding average values of heliocentric and geocentric distances and phase angle of the observations. The ephemerides were retrieved from the MIRIADE service (http://vo.imcce.fr/webservices/miriade/?ephemph). (1 data file).
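
    The geometric albedos above are "computed from the diameter and the tabulated absolute magnitudes"; the standard relation is D[km] = 1329·10^(-H/5)/sqrt(p_V), i.e. p_V = (1329·10^(-H/5)/D)². A sketch (the H and D values below are placeholders, not catalogue entries):

```python
def geometric_albedo(D_km, H_mag):
    """Visible geometric albedo from diameter (km) and absolute magnitude H,
    using the standard relation D = 1329 * 10**(-H/5) / sqrt(p_V)."""
    return (1329.0 * 10.0 ** (-H_mag / 5.0) / D_km) ** 2

# Placeholder values for illustration only.
print(f"p_V ~ {geometric_albedo(D_km=2.4, H_mag=16.3):.3f}")
```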

  10. Venus' center of figure-center of mass offset

    NASA Technical Reports Server (NTRS)

    Bindschadler, Duane L.; Schubert, Gerald; Ford, Peter G.

    1994-01-01

    Magellan altimetry data reveal that the center of figure (CF) of Venus is displaced approximately 280 m from its center of mass (CM) toward 4.4 deg S, 135.8 deg E, a location in Aphrodite Terra. This offset is smaller than those of other terrestrial planets but larger than the estimated error, which is no more than a few tens of meters. We examine the possibility that the CF-CM offset is related to specific geologic provinces on Venus by deriving three simple models for the offset: a thick-crust model, a hotspot model, and a thick-lithosphere model. The offset caused by a region of thick crust depends upon the region's extent, the crust-mantle density contrast, and the thickness of excess crust. A hotspot-related offset depends on the extent of the thermally anomalous region and the magnitude of the thermal anomaly. Offset due to a region of thick lithosphere depends upon the extent of the region, the average temperature contrast across the lithosphere, and the amount of excess lithosphere. We apply the three models to Venus plateau-shaped highlands, volcanic rises, and lowlands, respectively, in an attempt to match the observed CF-CM offset location and magnitude. The influence of most volcanic rises and of Ishtar Terra on the CF-CM offset must be quite small if we are to explain the direction of the observed offset. The lack of influence of volcanic rises can be explained if the related thermal anomalies are limited to a few hundred degrees or less and are plume-shaped (i.e., characterized by a flattened sublithospheric `head' with a narrow cylindrical feeder `tail'). The unimportance of Ishtar Terra is most easily explained if it lies atop a significant mantle downwelling.

  11. DAC-3 Pointing Stability Analysis Results for SAGE 3 and Other Users of the International Space Station (ISS) Payload Attachment Sites (PAS)

    NASA Technical Reports Server (NTRS)

    Woods-Vedeler, Jessica A.; Rombado, Gabriel

    1997-01-01

    The purpose of this paper is to provide final results of a pointing stability analysis for external payload attachment sites (PAS) on the International Space Station (ISS). As a specific example, the pointing stability requirement of the SAGE III atmospheric science instrument was examined in this paper. The instrument requires 10 arcsec stability over 2 second periods. SAGE 3 will be mounted on the ISS starboard side at the lower, outboard FIAS. In this engineering analysis, an open-loop DAC-3 finite element model of ISS was used by the Microgravity Group at Johnson Space Flight Center to generate transient responses at PAS to a limited number of disturbances. The model included dynamics up to 50 Hz. Disturbance models considered included operation of the solar array rotary joints, thermal radiator rotary joints, and control moment gyros. Responses were filtered to model the anticipated vibration attenuation effects of active control systems on the solar and thermal radiator rotary joints. A pointing stability analysis was conducted by double integrating acceleration transients over a 2 second period. Results of the analysis are tabulated for ISS X-, Y-, and Z-axis rotations. These results indicate that the largest excursions in rotation during pointing occurred due to rapid slewing of the thermal radiator. Even without attenuation at the rotary joints, the resulting pointing error was limited to less than 1.6 arcsec. With vibration control at the joints, the error was reduced to a maximum of 0.5 arcsec over a 2 second period. Based on this current level of model definition, it was concluded that between 0 - 50 Hz, the pointing stability requirement for SAGE 3 will not be exceeded by the disturbances evaluated in this study.
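
    The pointing figure above comes from double integrating acceleration transients over a 2 second window. A minimal sketch of that bookkeeping with a synthetic angular-acceleration trace; the signal, sample rate, and window are placeholders for the DAC-3 model output:

```python
import numpy as np

dt = 0.01                                        # s, assumed sample interval
t = np.arange(0.0, 2.0, dt)                      # 2-second evaluation window
alpha = 2.0e-5 * np.sin(2 * np.pi * 1.5 * t)     # rad/s^2, synthetic angular acceleration

def cumtrapz(y, dx):
    """Cumulative trapezoidal integral starting at zero."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dx)))

rate  = cumtrapz(alpha, dt)      # rad/s
angle = cumtrapz(rate, dt)       # rad

excursion_arcsec = (angle.max() - angle.min()) * 180.0 / np.pi * 3600.0
print(f"peak-to-peak pointing excursion over 2 s: {excursion_arcsec:.3f} arcsec")
```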

  12. Thermal consequences of thrust faulting: simultaneous versus successive fault activation and exhumation

    NASA Astrophysics Data System (ADS)

    ter Voorde, M.; de Bruijne, C. H.; Cloetingh, S. A. P. L.; Andriessen, P. A. M.

    2004-07-01

    When converting temperature-time curves obtained from geochronology into the denudation history of an area, variations in the isotherm geometry should not be neglected. The geothermal gradient changes with depth due to heat production and evolves with time due to heat advection, if the deformation rate is high. Furthermore, lateral variations arise due to topographic effects. Ignoring these aspects can result in significant errors when estimating denudation rates. We present a numerical model for the thermal response to thrust faulting, which takes these features into account. This kinematic two-dimensional model is fully time-dependent, and includes the effects of alternating fault activation in the upper crust. Furthermore, any denudation history can be imposed, implying that erosion and rock uplift can be studied independently of each other. The model is used to investigate the difference in thermal response between scenarios with simultaneous compressional faulting and erosion, and scenarios with a time lag between rock uplift and denudation. Hereby, we aim to contribute to the analysis of the mutual interaction between mountain growth and surface processes. We show that rock uplift occurring before the onset of erosion might cause 10% to more than 50% of the total amount of cooling. We applied the model to study the Cenozoic development of the Sierra de Guadarrama in the Spanish Central System, aiming to find the source of a cooling event in the Pliocene in this region. As shown by our modeling, this temperature drop cannot be caused by erosion of a previously uplifted mountain chain: the only scenarios giving results compatible with the observations are those incorporating active compressional deformation during the Pliocene, which is consistent with the ongoing NW-SE oriented convergence between Africa and Iberia.

  13. A simple differential steady-state method to measure the thermal conductivity of solid bulk materials with high accuracy.

    PubMed

    Kraemer, D; Chen, G

    2014-02-01

    Accurate measurements of thermal conductivity are of great importance for materials research and development. Steady-state methods determine thermal conductivity directly from the proportionality between heat flow and an applied temperature difference (Fourier Law). Although theoretically simple, in practice, achieving high accuracies with steady-state methods is challenging and requires rather complex experimental setups due to temperature sensor uncertainties and parasitic heat loss. We developed a simple differential steady-state method in which the sample is mounted between an electric heater and a temperature-controlled heat sink. Our method calibrates for parasitic heat losses from the electric heater during the measurement by maintaining a constant heater temperature close to the environmental temperature while varying the heat sink temperature. This enables a large signal-to-noise ratio which permits accurate measurements of samples with small thermal conductance values without an additional heater calibration measurement or sophisticated heater guards to eliminate parasitic heater losses. Additionally, the differential nature of the method largely eliminates the uncertainties of the temperature sensors, permitting measurements with small temperature differences, which is advantageous for samples with high thermal conductance values and/or with strongly temperature-dependent thermal conductivities. In order to accelerate measurements of more than one sample, the proposed method allows for measuring several samples consecutively at each temperature measurement point without adding significant error. We demonstrate the method by performing thermal conductivity measurements on commercial bulk thermoelectric Bi2Te3 samples in the temperature range of 30-150 °C with an error below 3%.
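
    As a minimal illustration of the Fourier-law evaluation at the heart of the method, the sketch below differences two steady states (heater power and temperature drop across the sample) so that a constant parasitic heater loss cancels. All symbols and numbers are illustrative stand-ins, not the authors' calibration.

```python
def thermal_conductivity(q1, q2, dT1, dT2, thickness, area):
    """Differential steady-state estimate: k = (dQ * L) / (A * d(dT)).
    Differencing the two states cancels a (constant) parasitic heater loss."""
    return (q1 - q2) * thickness / (area * (dT1 - dT2))

# Illustrative numbers for a 2 mm thick, 1 cm^2 Bi2Te3-like sample
k = thermal_conductivity(q1=0.30, q2=0.10,      # heater powers, W
                         dT1=4.00, dT2=1.33,    # temperature drops, K
                         thickness=2e-3, area=1e-4)
print(f"k ≈ {k:.2f} W/(m·K)")
```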

  14. Amended Results for Hard X-Ray Emission by Non-thermal Thick Target Recombination in Solar Flares

    NASA Astrophysics Data System (ADS)

    Reep, J. W.; Brown, J. C.

    2016-06-01

    Brown & Mallik and the corresponding corrigendum Brown et al. presented expressions for non-thermal recombination (NTR) in the collisionally thin- and thick-target regimes, claiming that the process could account for a substantial part of the hard X-ray continuum in solar flares usually attributed entirely to thermal and non-thermal bremsstrahlung (NTB). However, we have found the thick-target expression to become unphysical for low cut-offs in the injected electron energy spectrum. We trace this to an error in the derivation, derive a corrected version that is real-valued and continuous for all photon energies and cut-offs, and show that, for thick targets, Brown et al. overestimated NTR emission at small photon energies. The regime of small cut-offs and large spectral indices involves large (reducing) correction factors, but in some other thick-target parameter regimes NTR/NTB can still be of the order of unity. We comment on the importance of these results to flare and microflare modeling and spectral fitting. An empirical fit to our results shows that the peak NTR contribution comprises over half of the hard X-ray signal if δ ≳ 6 (E_{0c}/4 keV)^{0.4}.

  15. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections

    PubMed Central

    Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.

    2018-01-01

    Background: Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods: We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results: We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions: Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID: 29570737

  16. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    PubMed

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited.
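
    A minimal sketch of the parallel-version comparison described in these two records, assuming each spreadsheet version's output has been exported to a Python dictionary keyed by an output label. The ±5% materiality threshold follows the abstract; the version names and numbers are illustrative placeholders.

```python
def flag_discordant(outputs, threshold=0.05):
    """outputs: dict mapping version name -> {output_label: value}.
    Returns labels whose values differ across versions by more than `threshold`
    relative to the median version value (candidate unintentional errors)."""
    labels = set.intersection(*(set(v) for v in outputs.values()))
    flagged = {}
    for label in labels:
        values = [outputs[v][label] for v in outputs]
        ref = sorted(values)[len(values) // 2]  # median version as reference
        if ref != 0 and max(abs(x - ref) / abs(ref) for x in values) > threshold:
            flagged[label] = dict(zip(outputs, values))
    return flagged

versions = {
    "named_single_cells": {"on_treatment_yr5": 1050.0},
    "column_row_refs":    {"on_treatment_yr5": 1330.0},
    "named_matrices":     {"on_treatment_yr5": 1050.0},
}
print(flag_discordant(versions))  # flags the column/row-reference discrepancy
```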

  17. Outdoor surface temperature measurement: ground truth or lie?

    NASA Astrophysics Data System (ADS)

    Skauli, Torbjorn

    2004-08-01

    Contact surface temperature measurement in the field is essential in trials of thermal imaging systems and camouflage, as well as for scene modeling studies. The accuracy of such measurements is challenged by environmental factors such as sun and wind, which induce temperature gradients around a surface sensor and lead to incorrect temperature readings. In this work, a simple method is used to test temperature sensors under conditions representative of a surface whose temperature is determined by heat exchange with the environment. The tested sensors are different types of thermocouples and platinum thermistors typically used in field trials, as well as digital temperature sensors. The results illustrate that the actual measurement errors can be much larger than the specified accuracy of the sensors. The measurement error typically scales with the difference between surface temperature and ambient air temperature. Unless proper care is taken, systematic errors can easily reach 10% of this temperature difference, which is often unacceptable. Reasonably accurate readings are obtained using a miniature platinum thermistor. Thermocouples can perform well on bare metal surfaces if the connection to the surface is highly conductive. It is pointed out that digital temperature sensors have many advantages for use in field trials.

  18. Space based optical staring sensor LOS determination and calibration using GCPs observation

    NASA Astrophysics Data System (ADS)

    Chen, Jun; An, Wei; Deng, Xinpu; Yang, Jungang; Sha, Zhichao

    2016-10-01

    Line of sight (LOS) attitude determination and calibration is a key prerequisite for tracking and locating targets in space based infrared (IR) surveillance systems (SBIRS), and the LOS determination and calibration of a staring sensor is one of the main difficulties. This paper provides a novel methodology for removing staring sensor bias through the use of Ground Control Points (GCPs) detected in the background field of the sensor. Based on a study of the imaging model and characteristics of the staring sensor in the geostationary earth orbit (GEO) part of SBIRS, a real-time LOS attitude determination and calibration algorithm using ground control points is proposed. The factors contributing to staring sensor LOS attitude error (including thermal distortion error, assembly error, and so on) are treated as an equivalent bias angle of the LOS attitude. By establishing the observation equation of the GCPs and the state transition equation of the bias angle, and using an extended Kalman filter (EKF), real-time estimation of the bias angle and high-precision sensor LOS attitude determination and calibration are achieved. The simulation results show that the precision and timeliness of the proposed algorithm meet the requirements of the target tracking and location process in a space based infrared surveillance system.
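
    The bias-angle estimation can be pictured as a Kalman predict/update cycle over GCP residuals. The sketch below is a linearized, two-angle toy version with assumed state, measurement, and noise models; the paper's actual formulation is a full EKF built on its imaging and observation equations.

```python
import numpy as np

def bias_kf_step(b, P, z, H, R, Q):
    """One predict/update cycle for a slowly varying LOS bias-angle state.
    b: bias estimate (2,), P: covariance, z: stacked GCP residuals,
    H: linearized sensitivity of residuals to the bias angles, R/Q: noise."""
    P = P + Q                                   # predict (random-walk bias)
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    b = b + K @ (z - H @ b)                     # update state
    P = (np.eye(len(b)) - K @ H) @ P            # update covariance
    return b, P

# Illustrative use: two residual components from one GCP observation (radians)
b, P = np.zeros(2), np.eye(2) * 1e-4
z = np.array([3e-5, -1e-5])
b, P = bias_kf_step(b, P, z, H=np.eye(2), R=np.eye(2) * 1e-10, Q=np.eye(2) * 1e-12)
print(b)
```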

  19. Geometrical Model of Solar Radiation Pressure Based on High-Performing Galileo Clocks - First Geometrical Mapping of the Yarkovsky effect

    NASA Astrophysics Data System (ADS)

    Svehla, Drazen; Rothacher, Markus; Hugentobler, Urs; Steigenberger, Peter; Ziebart, Marek

    2014-05-01

    Solar radiation pressure is the main source of errors in the precise orbit determination of GNSS satellites. All deficiencies in the modeling of Solar radiation pressure map into estimated terrestrial reference frame parameters as well as into derived gravity field coefficients and altimetry results when LEO orbits are determined using GPS. Here we introduce a new approach to geometrically map radial orbit perturbations of GNSS satellites using highly performing clocks on board the first Galileo satellites. Only a linear model (time bias and time drift) needs to be removed from the estimated clock parameters, and the remaining clock residuals map all radial orbit perturbations along the orbit. With independent SLR measurements, we show that a Galileo clock is stable enough to map radial orbit perturbations continuously along the orbit, with a negative sign in comparison to SLR residuals. Agreement between the SLR residuals and the clock residuals is at the 1 cm RMS level for an orbit arc of 24 h. Looking at the clock parameters determined along one orbit revolution over a period of one year, we show that the so-called SLR bias in Galileo and GPS orbits can be explained by the translation of the determined orbit in the orbital plane towards the Sun. This orbit translation is due to thermal re-radiation and to not accounting for the Sun elevation in the parameterization of the estimated Solar radiation pressure parameters. SLR ranging to GNSS satellites takes place typically at night, e.g. between 6 pm and 6 am local time, when the Sun is in opposition to the satellite. Therefore, SLR observes only the part of the GNSS orbit with a negative radial orbit error, which is mapped as an artificial bias in the SLR observables. The Galileo clocks clearly show orbit translation for all Sun elevations: the radial orbit error is positive when the Sun is in conjunction (orbit noon) and negative when the Sun is in opposition (orbit midnight). The magnitude of this artificial negative SLR bias depends on the orbit quality and should rather be called a GNSS orbit bias instead of an SLR bias. When LEO satellite orbits are estimated using GPS, this GPS orbit bias is mapped into the antenna phase center. All LEO satellites, such as CHAMP, GRACE and JASON-1/2, need an adjustment of the radial antenna phase center offset. GNSS orbit translations towards the Sun in the orbital plane propagate not only into the estimated LEO orbits, but also into derived gravity field and altimetry products. Geometrical mapping of orbit perturbations using an on-board GNSS clock is a new technique to monitor orbit perturbations along the orbit and was successfully applied in the modeling of Solar radiation pressure. We show that the CODE Solar radiation pressure parameterization lacks a dependency on the Sun's elevation, i.e. the elongation angle (rotation of the Solar arrays), especially at low Sun elevations (eclipses). Parameterization with the Sun elongation angle is used in the so-called T30 model (ROCK model) that includes thermal re-radiation. A preliminary version of the Solar radiation pressure model for the first five Galileo satellites and the GPS-36 satellite is based on 2×180 days of the MGEX Campaign. We show that Galileo clocks map the Yarkovsky effect along the orbit, i.e. the lag between the Sun's illumination and thermal re-radiation. We present the first geometrical mapping of the anisotropic thermal emission of absorbed sunlight from an illuminated satellite. In this way, the effects of Solar radiation pressure can be modelled with only two parameters for all Sun elevations.

  20. Understanding seasonal variability of uncertainty in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Li, M.; Wang, Q. J.

    2012-04-01

    Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with a Bayesian joint probability framework and alternative error models to investigate the seasonal dependence of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for the model error and accounts for no seasonal variation. In contrast, a seasonally variant error model uses a different set of parameters for bias, variance and autocorrelation for each individual calendar month. Potential connections amongst model parameters from similar months are not considered within the seasonally variant model, which could result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on the model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model shows better reliability than the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonally variant error model are very sensitive to each cross-validation period, while the hierarchical error model produces much more robust and reliable model parameters. Furthermore, the results of the hierarchical error model show that most model parameters are not seasonally variant, except for the error bias. The seasonally variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. The model flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.
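
    The contrast between the three parameterizations can be sketched for the bias term alone: one pooled bias (invariant), one independent bias per calendar month (variant), or a partial-pooling estimate that shrinks each monthly bias toward the pooled value, in the spirit of the hierarchical model. The shrinkage weighting and names below are illustrative assumptions, not the paper's MAP algorithm.

```python
import numpy as np

def monthly_bias(errors, months, scheme="hierarchical", tau2=0.05):
    """errors: model-minus-observed residuals; months: calendar month (1-12),
    assumed to cover every month at least once. Returns 12 bias estimates."""
    overall = errors.mean()
    if scheme == "invariant":                 # one bias shared by all months
        return np.full(12, overall)
    per_month = np.array([errors[months == m].mean() for m in range(1, 13)])
    if scheme == "variant":                   # independent bias per month
        return per_month
    # Partial pooling: shrink each monthly mean toward the overall mean, with
    # more shrinkage for months whose sampling variance is large.
    n_m = np.array([(months == m).sum() for m in range(1, 13)])
    s2 = errors.var() / np.maximum(n_m, 1)
    w = tau2 / (tau2 + s2)
    return w * per_month + (1.0 - w) * overall
```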

  1. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and a lack of prediction capability. Therefore, the multiplicative error model is the better choice.
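
    A hedged sketch of what the two error models look like when fitted to paired daily rain amounts: the additive model is a straight-line fit in linear space, while the multiplicative model becomes linear after a log transform. The synthetic data and plain least-squares fits are placeholders, not the letter's estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.gamma(shape=0.5, scale=10.0, size=2000) + 0.1            # "reference" rain, mm/day
Y = np.exp(0.2 + 0.9 * np.log(X) + rng.normal(0, 0.4, X.size))   # synthetic satellite estimate

# Additive model:  Y = a + b*X + eps
b_add, a_add = np.polyfit(X, Y, 1)

# Multiplicative model:  Y = alpha * X**beta * exp(eps)  (linear in log space)
beta, log_alpha = np.polyfit(np.log(X), np.log(Y), 1)

print(f"additive:        Y ≈ {a_add:.2f} + {b_add:.2f} X")
print(f"multiplicative:  Y ≈ {np.exp(log_alpha):.2f} * X^{beta:.2f}")
```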

  2. Assessment of the Appalachian Basin Geothermal Field: Combining Risk Factors to Inform Development of Low Temperature Projects

    NASA Astrophysics Data System (ADS)

    Smith, J. D.; Whealton, C.; Camp, E. R.; Horowitz, F.; Frone, Z. S.; Jordan, T. E.; Stedinger, J. R.

    2015-12-01

    Exploration methods for deep geothermal energy projects must primarily consider whether or not a location has favorable thermal resources. Even where the thermal field is favorable, other factors may impede project development and success. A combined analysis of these factors and their uncertainty is a strategy for moving geothermal energy proposals forward from the exploration phase at the scale of a basin to the scale of a project, and further to design of geothermal systems. For a Department of Energy Geothermal Play Fairway Analysis we assessed quality metrics, which we call risk factors, in the Appalachian Basin of New York, Pennsylvania, and West Virginia. These included 1) thermal field variability, 2) productivity of natural reservoirs from which to extract heat, 3) potential for induced seismicity, and 4) presence of thermal utilization centers. The thermal field was determined using a 1D heat flow model for 13,400 bottomhole temperatures (BHT) from oil and gas wells. Steps included the development of i) a set of corrections to BHT data and ii) depth models of conductivity stratigraphy at each borehole based on generalized stratigraphy that was verified for a select set of wells. Wells are control points in a spatial statistical analysis that resulted in maps of the predicted mean thermal field properties and of the standard error of the predicted mean. Seismic risk was analyzed by comparing earthquakes and stress orientations in the basin to gravity and magnetic potential field edges at depth. Major edges in the potential fields served as interpolation boundaries for the thermal maps (Figure 1). Natural reservoirs were identified from published studies, and productivity was determined based on the expected permeability and dimensions of each reservoir. Visualizing the natural reservoirs and population centers on a map of the thermal field communicates options for viable pilot sites and project designs (Figure 1). Furthermore, combining the four risk factors at favorable sites enables an evaluation of project feasibility across sites based on tradeoffs in the risk factors. Uncertainties in each risk factor can also be considered to determine if the tradeoffs in risk factors between sites are meaningful.

  3. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

    A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
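
    The connection described here can be made concrete for the additive linear error model y = a + b·x + ε with ε ~ N(0, σ²): bias, mean square error, and correlation follow directly from (a, b, σ) and the first two moments of the reference data. The algebra below is the standard result for that model, written out as a sketch rather than copied from the paper.

```python
import numpy as np

def metrics_from_error_model(a, b, sigma_eps, mu_x, sigma_x):
    """Metrics implied by y = a + b*x + eps for reference data with mean mu_x, std sigma_x."""
    bias = a + (b - 1.0) * mu_x                                   # E[y - x]
    mse = bias**2 + (b - 1.0)**2 * sigma_x**2 + sigma_eps**2      # E[(y - x)^2]
    corr = b * sigma_x / np.sqrt(b**2 * sigma_x**2 + sigma_eps**2)
    return bias, mse, corr

print(metrics_from_error_model(a=0.5, b=0.9, sigma_eps=1.0, mu_x=5.0, sigma_x=2.0))
```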

  4. Feedback control of thermal lensing in a high optical power cavity.

    PubMed

    Fan, Y; Zhao, C; Degallaix, J; Ju, L; Blair, D G; Slagmolen, B J J; Hosken, D J; Brooks, A F; Veitch, P J; Munch, J

    2008-10-01

    This paper reports automatic compensation of strong thermal lensing in a suspended 80 m optical cavity with sapphire test mass mirrors. Variation of the transmitted beam spot size is used to obtain an error signal to control the heating power applied to the cylindrical surface of an intracavity compensation plate. The negative thermal lens created in the compensation plate compensates the positive thermal lens in the sapphire test mass, which was caused by the absorption of the high intracavity optical power. The results show that feedback control is feasible to compensate the strong thermal lensing expected to occur in advanced laser interferometric gravitational wave detectors. Compensation allows the cavity resonance to be maintained at the fundamental mode, but the long thermal time constant for thermal lensing control in fused silica could cause difficulties with the control of parametric instabilities.

  5. NEOSURVEY 1: INITIAL RESULTS FROM THE WARM SPITZER EXPLORATION SCIENCE SURVEY OF NEAR-EARTH OBJECT PROPERTIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trilling, David E.; Mommert, Michael; Hora, Joseph

    Near-Earth objects (NEOs) are small solar system bodies whose orbits bring them close to the Earth’s orbit. We are carrying out a Warm Spitzer Cycle 11 Exploration Science program entitled NEOSurvey—a fast and efficient flux-limited survey of 597 known NEOs in which we derive a diameter and albedo for each target. The vast majority of our targets are too faint to be observed by NEOWISE, though a small sample has been or will be observed by both observatories, which allows for a cross-check of our mutual results. Our primary goal is to create a large and uniform catalog of NEO properties. We present here the first results from this new program: fluxes and derived diameters and albedos for 80 NEOs, together with a description of the overall program and approach, including several updates to our thermal model. The largest source of error in our diameter and albedo solutions, which derive from our single-band thermal emission measurements, is uncertainty in η, the beaming parameter used in our thermal modeling; for albedos, improvements in solar system absolute magnitudes would also help significantly. All data and derived diameters and albedos from this entire program are being posted on a publicly accessible Web page at nearearthobjects.nau.edu.

  6. Ocean haline skin layer and turbulent surface convections

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Zhang, X.

    2012-04-01

    The ocean haline skin layer is of great interest for oceanographic applications, but its properties are still subject to considerable uncertainty due to observational difficulties. By introducing the Batchelor micro-scale, a turbulent surface convection model is developed to determine the depths of the various ocean skin layers with the same model parameters. These parameters are derived by matching cool skin layer observations. Global distributions of the salinity difference across the ocean haline layer are then simulated, using surface forcing data mainly from the OAFlux project and ISCCP. It is found that, even though both the thickness of the haline layer and the salinity increment across it are greater than in earlier global simulations, the microwave remote sensing error caused by the haline microlayer effect is still smaller than that from other geophysical error sources. It is shown that forced convection due to sea surface wind stress is dominant over free convection driven by surface cooling in most regions of the oceans. The free convection instability is largely controlled by the cool skin effect, because the thermal microlayer is much thicker and becomes unstable much earlier than the haline microlayer. The similarity of the global distributions of the temperature difference and salinity difference across the cool and haline skin layers is investigated by comparing their forcing fields of heat fluxes. The turbulent convection model is also found applicable to formulating gas transfer velocity at low wind.

  7. Monitoring of deep brain temperature in infants using multi-frequency microwave radiometry and thermal modelling.

    PubMed

    Han, J W; Van Leeuwen, G M; Mizushina, S; Van de Kamer, J B; Maruyama, K; Sugiura, T; Azzopardi, D V; Edwards, A D

    2001-07-01

    In this study we present a design for a multi-frequency microwave radiometer aimed at prolonged monitoring of deep brain temperature in newborn infants and suitable for use during hypothermic neural rescue therapy. We identify appropriate hardware to measure brightness temperature and evaluate the accuracy of the measurements. We describe a method to estimate the tissue temperature distribution from measured brightness temperatures which uses the results of numerical simulations of the tissue temperature as well as the propagation of the microwaves in a realistic detailed three-dimensional infant head model. The temperature retrieval method is then used to evaluate how the statistical fluctuations in the measured brightness temperatures limit the confidence interval for the estimated temperature: for an 18 degrees C temperature differential between cooled surface and deep brain we found a standard error in the estimated central brain temperature of 0.75 degrees C. Evaluation of the systematic errors arising from inaccuracies in model parameters showed that realistic deviations in tissue parameters have little impact compared to uncertainty in the thickness of the bolus between the receiving antenna and the infant's head or in the skull thickness. This highlights the need to pay particular attention to these latter parameters in future practical implementation of the technique.

  8. Bathymetric mapping of submarine sand waves using multiangle sun glitter imagery: a case of the Taiwan Banks with ASTER stereo imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Hua-guo; Yang, Kang; Lou, Xiu-lin; Li, Dong-ling; Shi, Ai-qin; Fu, Bin

    2015-01-01

    Submarine sand waves are visible in optical sun glitter remote sensing images and multiangle observations can provide valuable information. We present a method for bathymetric mapping of submarine sand waves using multiangle sun glitter information from Advanced Spaceborne Thermal Emission and Reflection Radiometer stereo imagery. Based on a multiangle image geometry model and a sun glitter radiance transfer model, sea surface roughness is derived using multiangle sun glitter images. These results are then used for water depth inversions based on the Alpers-Hennings model, supported by a few true depth data points (sounding data). Case study results show that the inversion and true depths match well, with high-correlation coefficients and root-mean-square errors from 1.45 to 2.46 m, and relative errors from 5.48% to 8.12%. The proposed method has some advantages over previous methods in that it requires fewer true depth data points, it does not require environmental parameters or knowledge of sand-wave morphology, and it is relatively simple to operate. On this basis, we conclude that this method is effective in mapping submarine sand waves and we anticipate that it will also be applicable to other similar topography types.

  9. Distribution and depth of bottom-simulating reflectors in the Nankai subduction margin

    NASA Astrophysics Data System (ADS)

    Ohde, Akihiro; Otsuka, Hironori; Kioka, Arata; Ashi, Juichiro

    2018-04-01

    Surface heat flow has been observed to be highly variable in the Nankai subduction margin. This study presents an investigation of local anomalies in surface heat flows on the undulating seafloor in the Nankai subduction margin. We estimate the heat flows from bottom-simulating reflectors (BSRs) marking the lower boundaries of the methane hydrate stability zone and evaluate topographic effects on heat flow via two-dimensional thermal modeling. BSRs have been used to estimate heat flows based on the known stability characteristics of methane hydrates under low-temperature and high-pressure conditions. First, we generate an extensive map of the distribution and subseafloor depths of the BSRs in the Nankai subduction margin. We confirm that BSRs exist at the toe of the accretionary prism and the trough floor of the offshore Tokai region, where BSRs had previously been thought to be absent. Second, we calculate the BSR-derived heat flow and evaluate the associated errors. We conclude that the total uncertainty of the BSR-derived heat flow should be within 25%, considering allowable ranges in the P-wave velocity, which influences the time-to-depth conversion of the BSR position in seismic images, the resultant geothermal gradient, and thermal resistance. Finally, we model a two-dimensional thermal structure by comparing the temperatures at the observed BSR depths with the calculated temperatures at the same depths. The thermal modeling reveals that most local variations in BSR depth over the undulating seafloor can be explained by topographic effects. Those areas that cannot be explained by topographic effects can be mainly attributed to advective fluid flow, regional rapid sedimentation, or erosion. Our spatial distribution of heat flow data provides indispensable basic data for numerical studies of subduction zone modeling to evaluate margin-parallel age dependencies of subducting plates.
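
    A schematic of the BSR-to-heat-flow conversion, assuming hydrostatic pressure at the BSR, a user-supplied methane-hydrate stability curve (not reproduced here), and an effective thermal conductivity; the parameter values are placeholders rather than this study's calibration.

```python
def bsr_heat_flow(z_bsr, water_depth, t_seafloor, t_stability,
                  k=1.0, rho=1030.0, g=9.81):
    """z_bsr: sub-seafloor BSR depth (m); t_stability: callable P[MPa] -> T[degC]
    taken from a published methane-hydrate stability curve.
    Returns conductive heat flow in mW/m^2."""
    pressure_mpa = rho * g * (water_depth + z_bsr) / 1e6   # hydrostatic pressure at the BSR
    t_bsr = t_stability(pressure_mpa)                      # temperature at the BSR
    gradient = (t_bsr - t_seafloor) / z_bsr                # geothermal gradient, degC/m
    return k * gradient * 1000.0                           # Fourier's law, W/m^2 -> mW/m^2
```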

  10. Electrosurgical vessel sealing tissue temperature: experimental measurement and finite element modeling.

    PubMed

    Chen, Roland K; Chastagner, Matthew W; Dodde, Robert E; Shih, Albert J

    2013-02-01

    The temporal and spatial tissue temperature profile in electrosurgical vessel sealing was experimentally measured and modeled using finite element modeling (FEM). Vessel sealing procedures are often performed near the neurovascular bundle and may cause collateral neural thermal damage. Therefore, the heat generated during electrosurgical vessel sealing is of concern among surgeons. Tissue temperature in an in vivo porcine femoral artery sealed using a bipolar electrosurgical device was studied. Three FEM techniques were incorporated to model the tissue evaporation, water loss, and fusion by manipulating the specific heat, electrical conductivity, and electrical contact resistance, respectively. These three techniques enable the FEM to accurately predict the vessel sealing tissue temperature profile. The average discrepancy between the experimentally measured temperatures and the FEM-predicted temperatures at the three thermistor locations is less than 7%. The maximum error is 23.9%. The effects of the three FEM techniques are also quantified.

  11. Increasing the Thermal Conductivity and Thermal Diffusivity of Asbestos-Reinforced Laminates Through Modification of their Polymer Matrix with Carbon Nanomaterials

    NASA Astrophysics Data System (ADS)

    Danilova-Tret'yak, S. M.; Evseeva, L. E.; Tanaeva, S. A.

    2014-11-01

    Experimental investigations of the thermophysical properties of traditional and modified asbestos-reinforced laminates depending on the type of their carbon nanofiller have been carried out in the range of temperatures from -150 to 150°C. It has been shown that the largest (nearly twofold) increase in the thermal-conductivity and thermal-diffusivity coefficients of the indicated materials is observed when they are modified with a small-scale fraction of a nanofiller (carbon nanotubes). The specific heats of the modified and traditional asbestos-reinforced laminates turned out to be identical, in practice, within the measurement error.

  12. Testing and extension of a sea lamprey feeding model

    USGS Publications Warehouse

    Cochran, Philip A.; Swink, William D.; Kinziger, Andrew P.

    1999-01-01

    A previous model of feeding by sea lamprey Petromyzon marinus predicted energy intake and growth by lampreys as a function of lamprey size, host size, and duration of feeding attachments, but it was applicable only to lampreys feeding at 10°C and it was tested against only a single small data set of limited scope. We extended the model to other temperatures and tested it against an extensive data set (more than 700 feeding bouts) accumulated during experiments with captive sea lampreys. Model predictions of instantaneous growth were highly correlated with observed growth, and a partitioning of mean squared error between model predictions and observed results showed that 88.5% of the variance was due to random variation rather than to systematic errors. However, deviations between observed and predicted values varied substantially, especially for short feeding bouts. Predicted and observed growth trajectories of individual lampreys during multiple feeding bouts during the summer tended to correspond closely, but predicted growth was generally much higher than observed growth late in the year. This suggests the possibility that large overwintering lampreys reduce their feeding rates while attached to hosts. Seasonal or size-related shifts in the fate of consumed energy may provide an alternative explanation. The lamprey feeding model offers great flexibility in assessing growth of captive lampreys within various experimental protocols (e.g., different host species or thermal regimes) because it controls for individual differences in feeding history.

  13. Visual servoing for a US-guided therapeutic HIFU system by coagulated lesion tracking: a phantom study.

    PubMed

    Seo, Joonho; Koizumi, Norihiro; Funamoto, Takakazu; Sugita, Naohiko; Yoshinaka, Kiyoshi; Nomiya, Akira; Homma, Yukio; Matsumoto, Yoichiro; Mitsuishi, Mamoru

    2011-06-01

    Applying ultrasound (US)-guided high-intensity focused ultrasound (HIFU) therapy to kidney tumours is currently very difficult, because the tumour area is not clearly visible in US images and because of the renal motion induced by respiration. In this research, we propose new methods to track the indistinct tumour area and to compensate for the respiratory tumour motion during US-guided HIFU treatment. For tracking indistinct tumour areas, we detect the US speckle change created by HIFU irradiation. In other words, HIFU thermal ablation can coagulate tissue in the tumour area, and the intraoperatively created coagulated lesion (CL) is used as a spatial landmark for US visual tracking. Specifically, the condensation algorithm was applied to robust, real-time CL speckle pattern tracking in the sequence of US images. Moreover, biplanar US imaging was used to locate the three-dimensional position of the CL, and a three-actuator system drives the end-effector to compensate for the motion. Finally, we tested the proposed method using a newly devised phantom model that enables both visual tracking and a thermal response to HIFU irradiation. In the experiment, after generation of the CL in the phantom kidney, the end-effector successfully synchronized with the phantom motion, which was modelled on captured motion data for the human kidney. The accuracy of the motion compensation was evaluated by the error between the end-effector and the respiratory motion, the RMS error of which was approximately 2 mm. This research shows that a HIFU-induced CL provides a very good landmark for target motion tracking. By using the CL tracking method, target motion compensation can be realized in the US-guided robotic HIFU system. Copyright © 2011 John Wiley & Sons, Ltd.

  14. Microgravity

    NASA Image and Video Library

    1997-06-27

    This is a computer-generated model of a ground-based casting. The objective of the thermophysical properties program is to measure the thermophysical properties of commercial casting alloys for use in computer programs that predict solidification behavior. This could reduce trial and error in casting design and promote less scrap, sounder castings, and less weight. In order for the computer models to reliably simulate the details of industrial alloy solidification, the input thermophysical property data must be absolutely reliable. Recently Auburn University and TPRL Inc. formed a teaming relationship to establish reliable measurement techniques for the most critical properties of commercially important alloys: transformation temperatures, thermal conductivity, electrical conductivity, specific heat, latent heat, density, solid fraction evolution, surface tension, and viscosity. A new initiative with the American Foundrymen's Society has been started to measure the thermophysical properties of commercial ferrous and non-ferrous casting alloys and make the thermophysical property data widely available. Development of casting processes for the new gamma titanium aluminide alloys as well as existing titanium alloys will remain a trial-and-error procedure until accurate thermophysical properties can be obtained. These molten alloys react with their containers on Earth and change their composition, invalidating the measurements even while the data are being acquired in terrestrial laboratories. However, measurements on the molten alloys can be accomplished in space using freely floating droplets which are completely untouched by any container. These data are expected to be exceptionally precise because of the absence of impurity contamination and buoyancy convection effects. Although long duration orbital experiments will be required for the large scale industrial alloy measurement program that results from this research, short duration experiments on NASA's KC-135 low-g aircraft are already providing preliminary data and experience.

  15. Radiometric calibration status of Landsat-7 and Landsat-5

    USGS Publications Warehouse

    Barsi, J.A.; Markham, B.L.; Helder, D.L.; Chander, G.

    2007-01-01

    Launched in April 1999, Landsat-7 ETM+ continues to acquire data globally. The Scan Line Corrector failure in 2003 has affected ground coverage and the recent switch to Bumper Mode operations in April 2007 has degraded the internal geometric accuracy of the data, but the radiometry has been unaffected. The best of the three on-board calibrators for the reflective bands, the Full Aperture Solar Calibrator, has indicated slow changes in the ETM+, but this is believed to be due to contamination on the panel rather than instrument degradation. The Internal Calibrator lamp 2, though it has not been used regularly throughout the whole mission, indicates smaller changes than the FASC since 2003. The changes indicated by lamp 2 are only statistically significant in band 1, circa 0.3% per year, and may reflect lamp rather than instrument degradation. Regular observations of desert targets in the Saharan and Arabian deserts indicate no change in the ETM+ reflective band response, though the uncertainty is larger and does not preclude the small changes indicated by lamp 2. The thermal band continues to be stable and well-calibrated since an offset error was corrected in late 2000. Launched in 1984, Landsat-5 TM also continues to acquire global data; though without the benefit of an on-board recorder, data can only be acquired where a ground station is within range. Historically, the calibration of the TM reflective bands has used an onboard calibration system with multiple lamps. The calibration procedure for the TM reflective bands was updated in 2003 based on the best estimate at the time, using only one of the three lamps and a cross-calibration with Landsat-7 ETM+. Since then, the Saharan desert sites have been used to validate this calibration model. Problems were found with the lamp-based model of up to 13% in band 1. Using the Saharan data, a new model was developed and implemented in the US processing system in April 2007. The TM thermal band was found to have a calibration offset error of 0.092 W/(m² sr µm) (0.68 K at 300 K) based on vicarious calibration data between 1999 and 2006. The offset error was corrected in the US processing system in April 2007 for all data acquired since April 1999.

  16. Observation and simulation of net primary productivity in Qilian Mountain, western China.

    PubMed

    Zhou, Y; Zhu, Q; Chen, J M; Wang, Y Q; Liu, J; Sun, R; Tang, S

    2007-11-01

    We modeled net primary productivity (NPP) at high spatial resolution from an Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) image of a Qilian Mountain study area using the boreal ecosystem productivity simulator (BEPS). Two key driving variables of the model, leaf area index (LAI) and land cover type, were derived from ASTER and moderate resolution imaging spectroradiometer (MODIS) data. Other spatially explicit inputs included daily meteorological data (radiation, precipitation, temperature, humidity), available soil water holding capacity (AWC), and forest biomass. NPP was estimated for coniferous forests and other land cover types in the study area. The result showed that the NPP of coniferous forests in the study area was about 4.4 t C ha(-1) y(-1). The correlation coefficient between the modeled NPP and ground measurements was 0.84, with a mean relative error of about 13.9%.

  17. Uncertainty analysis of thermocouple measurements used in normal and abnormal thermal environment experiments at Sandia's Radiant Heat Facility and Lurance Canyon Burn Site.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakos, James Thomas

    2004-04-01

    It would not be possible to confidently qualify weapon systems performance or validate computer codes without knowing the uncertainty of the experimental data used. This report provides uncertainty estimates associated with thermocouple data for temperature measurements from two of Sandia's large-scale thermal facilities. These two facilities (the Radiant Heat Facility (RHF) and the Lurance Canyon Burn Site (LCBS)) routinely gather data from normal and abnormal thermal environment experiments. They are managed by Fire Science & Technology Department 09132. Uncertainty analyses were performed for several thermocouple (TC) data acquisition systems (DASs) used at the RHF and LCBS. These analyses apply to Type K, chromel-alumel thermocouples of various types: fiberglass sheathed TC wire, mineral-insulated, metal-sheathed (MIMS) TC assemblies, and are easily extended to other TC materials (e.g., copper-constantan). Several DASs were analyzed: (1) A Hewlett-Packard (HP) 3852A system, and (2) several National Instrument (NI) systems. The uncertainty analyses were performed on the entire system from the TC to the DAS output file. Uncertainty sources include TC mounting errors, ANSI standard calibration uncertainty for Type K TC wire, potential errors due to temperature gradients inside connectors, extension wire uncertainty, DAS hardware uncertainties including noise, common mode rejection ratio, digital voltmeter accuracy, mV to temperature conversion, analog to digital conversion, and other possible sources. Typical results for 'normal' environments (e.g., maximum of 300-400 K) showed the total uncertainty to be about ±1% of the reading in absolute temperature. In high temperature or high heat flux ('abnormal') thermal environments, total uncertainties range up to ±2-3% of the reading (maximum of 1300 K). The higher uncertainties in abnormal thermal environments are caused by increased errors due to the effects of imperfect TC attachment to the test item. 'Best practices' are provided in Section 9 to help the user to obtain the best measurements possible.
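
    Independent elemental uncertainties of the kind listed in the report are commonly combined by root-sum-of-squares. The sketch below shows that combination with purely illustrative magnitudes; it is not the report's actual budget.

```python
import math

def combined_uncertainty(terms):
    """Root-sum-square combination of independent, same-confidence uncertainty terms (K)."""
    return math.sqrt(sum(t * t for t in terms))

budget = {                                   # illustrative 1-sigma contributions, K
    "wire calibration (ANSI Type K)": 2.2,
    "extension wire":                 1.0,
    "DAS voltage measurement":        0.5,
    "mV-to-temperature conversion":   0.3,
    "mounting/attachment":            1.5,
}
u = combined_uncertainty(budget.values())
print(f"combined uncertainty ≈ ±{u:.1f} K (≈ ±{100 * u / 350:.1f}% of reading at 350 K)")
```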

  18. Error modeling and sensitivity analysis of a parallel robot with SCARA (selective compliance assembly robot arm) motions

    NASA Astrophysics Data System (ADS)

    Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua

    2014-07-01

    Parallel robots with SCARA (selective compliance assembly robot arm) motions are utilized widely in the field of high speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures included in the robots as a single link. As the established error model fails to reflect the error features of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on such a model is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including the joints of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and some sensitivity indices defined in the sense of statistics, sensitivity analysis is carried out. Accordingly, some atlases are depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have greater impact on the accuracy of the moving platform are identified, and some sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also identified. By taking into account the error factors which are generally neglected in existing modeling methods, the proposed modeling method can thoroughly disclose the process of error transmission and enhance the efficacy of accuracy design and calibration.

  19. Vibrational spectra from atomic fluctuations in dynamics simulations. I. Theory, limitations, and a sample application

    NASA Astrophysics Data System (ADS)

    Schmitz, Matthias; Tavan, Paul

    2004-12-01

    Hybrid molecular dynamics (MD) simulations, which combine density functional theory (DFT) descriptions of a molecule with a molecular mechanics (MM) modeling of its solvent environment, have opened the way towards accurate computations of solvation effects in the vibrational spectra of molecules. Recently, Wheeler et al. [ChemPhysChem 4, 382 (2002)] have suggested to compute these spectra from DFT/MM-MD trajectories by diagonalizing the covariance matrix of atomic fluctuations. This so-called principal mode analysis (PMA) allegedly can replace the well-established approaches, which are based on Fourier transform methods or on conventional normal mode analyses. By scrutinizing and revising the PMA approach we identify five conditions, which must be guaranteed if PMA is supposed to render exact vibrational frequencies. Besides specific choices of (a) coordinates and (b) coordinate systems, these conditions cover (c) a harmonic intramolecular potential, (d) a complete thermal equilibrium within the molecule, and (e) a molecular Hamiltonian independent of time. However, the PMA conditions [(c)-(d)] and [(c)-(e)] are generally violated in gas phase DFT-MD and liquid phase DFT/MM-MD trajectories, respectively. Based on a series of simple analytical model calculations and on the analysis of MD trajectories calculated for the formaldehyde molecule in the gas phase (DFT) and in liquid water (DFT/MM) we show that in both phases the violation of condition (d) can cause huge errors in PMA frequency computations, whereas the inevitable violations of conditions (c) and (e), the latter being generic to the liquid phase, imply systematic and sizable underestimates of the vibrational frequencies by PMA. We demonstrate that the huge errors, which are caused by an incomplete thermal equilibrium violating (d), can be avoided if one introduces mode-specific temperatures Tj and calculates the frequencies from a "generalized virial" (GV) expression instead from PMA. Concerning ways to additionally remove the remaining errors, which GV still shares with PMA, we refer to Paper II of this work [M. Schmitz and P. Tavan, J. Chem. Phys. 121, 12247 (2004)].
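
    A minimal sketch of the PMA frequency estimate that the paper scrutinizes: for a harmonic, fully equilibrated system, equipartition gives omega_j = sqrt(kB*T/lambda_j), with lambda_j the eigenvalues of the mass-weighted covariance matrix of atomic fluctuations. The trajectory array, the prior alignment, and the null-mode cutoff are assumptions of this sketch, and the caveats discussed in the abstract apply.

```python
import numpy as np

KB = 1.380649e-23  # J/K

def pma_frequencies(xyz, masses, temperature):
    """xyz: (n_frames, n_atoms, 3) positions in metres, assumed already aligned
    (overall translation/rotation removed); masses in kg.
    Returns angular frequencies (rad/s), largest-amplitude mode first."""
    n_frames = xyz.shape[0]
    w = np.sqrt(np.repeat(masses, 3))        # mass weight per Cartesian coordinate
    q = xyz.reshape(n_frames, -1) * w        # mass-weighted coordinates
    q -= q.mean(axis=0)                      # fluctuations about the mean structure
    cov = q.T @ q / n_frames                 # mass-weighted covariance matrix
    lam = np.linalg.eigvalsh(cov)[::-1]      # eigenvalues, descending
    lam = lam[lam > lam[0] * 1e-12]          # drop numerically null modes (ad hoc cut)
    return np.sqrt(KB * temperature / lam)
```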

  20. Comment on "Experimental determination of the thermal conductivity of molten CaMgSi2O6 and the transport of heat through magmas" by Don Snyder, Elizabeth Gier, and Ian Carmichael

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrigan, C.; McBirney, A.

    1997-07-01

    A comparative evaluation is made of various experiments and techniques for measuring the thermal properties of molten silicates. Sources of errors for measurements of Snyder et al. are discussed. (AIP) © 1997 American Geophysical Union.

  1. Development of an air flow thermal balance calorimeter

    NASA Technical Reports Server (NTRS)

    Sherfey, J. M.

    1972-01-01

    An air flow calorimeter, based on the idea of balancing an unknown rate of heat evolution with a known rate of heat evolution, was developed. Under restricted conditions, the prototype system is capable of measuring thermal wattages from 10 milliwatts to 1 watt, with an error no greater than 1 percent. Data were obtained which reveal system weaknesses and point to modifications which would effect significant improvements.

  2. Engineering evaluations and studies. Report for IUS studies

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The reviews, investigations, and analyses of the Inertial Upper Stage (IUS) Spacecraft Tracking and Data Network (STDN) transponder are summarized. Carrier lock detector performance for Tracking and Data Relay Satellite System (TDRSS) dual-mode operation is discussed, as is the problem of predicting instantaneous frequency error in the carrier loop. Costas loop performance analysis is critiqued and the static tracking phase error induced by thermal noise biases is discussed.

  3. Experimental and artificial neural network based prediction of performance and emission characteristics of DI diesel engine using Calophyllum inophyllum methyl ester at different nozzle opening pressure

    NASA Astrophysics Data System (ADS)

    Vairamuthu, G.; Thangagiri, B.; Sundarapandian, S.

    2018-01-01

    The present work investigates the effect of varying Nozzle Opening Pressures (NOP) from 220 bar to 250 bar on performance, emissions and combustion characteristics of Calophyllum inophyllum Methyl Ester (CIME) in a constant speed, Direct Injection (DI) diesel engine using Artificial Neural Network (ANN) approach. An ANN model has been developed to predict a correlation between specific fuel consumption (SFC), brake thermal efficiency (BTE), exhaust gas temperature (EGT), Unburnt hydrocarbon (UBHC), CO, CO2, NOx and smoke density using load, blend (B0 and B100) and NOP as input data. A standard Back-Propagation Algorithm (BPA) for the engine is used in this model. A Multi Layer Perceptron network (MLP) is used for nonlinear mapping between the input and the output parameters. An ANN model can predict the performance of diesel engine and the exhaust emissions with correlation coefficient (R2) in the range of 0.98-1. Mean Relative Errors (MRE) values are in the range of 0.46-5.8%, while the Mean Square Errors (MSE) are found to be very low. It is evident that the ANN models are reliable tools for the prediction of DI diesel engine performance and emissions. The test results show that the optimum NOP is 250 bar with B100.
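
    As a sketch of the kind of input-output mapping described (load, blend and NOP to a performance variable such as BTE), the snippet below uses scikit-learn's MLPRegressor rather than the authors' own back-propagation code; the four training rows are placeholders only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X = np.array([[25, 0, 220],     # load %, blend %, NOP (bar)
              [50, 0, 230],
              [75, 100, 240],
              [100, 100, 250]], dtype=float)
y = np.array([24.1, 27.5, 29.8, 31.2])        # e.g., brake thermal efficiency, %

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                                   max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict([[60.0, 100.0, 250.0]]))  # predicted BTE for an unseen condition
```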

  4. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast errors at two lead times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.

  5. Spatial-temporal features of thermal images for Carpal Tunnel Syndrome detection

    NASA Astrophysics Data System (ADS)

    Estupinan Roldan, Kevin; Ortega Piedrahita, Marco A.; Benitez, Hernan D.

    2014-02-01

    Disorders associated with repeated trauma account for about 60% of all occupational illnesses, with Carpal Tunnel Syndrome (CTS) being the condition most frequently seen in consultations today. Infrared Thermography (IT) has come to play an important role in the field of medicine. IT is non-invasive and detects diseases based on measuring temperature variations. IT represents a possible alternative to the prevalent methods for diagnosis of CTS (i.e., nerve conduction studies and electromyography). This work presents a set of spatial-temporal features extracted from thermal images taken from healthy and ill patients. Support Vector Machine (SVM) classifiers test this feature space with Leave One Out (LOO) validation error. The results of the proposed approach show linear separability and lower validation errors when compared to features used in previous works that do not account for spatial variability of temperature.
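
    A minimal sketch of SVM classification with leave-one-out validation on a placeholder feature matrix (the actual spatial-temporal thermal descriptors are not reproduced here), using scikit-learn rather than the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 6))          # 20 subjects x 6 spatial-temporal features
y = np.array([0] * 10 + [1] * 10)     # 0 = healthy, 1 = CTS
X[y == 1] += 1.5                      # make the classes roughly linearly separable

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut())
print(f"LOO validation error: {1.0 - scores.mean():.2f}")
```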

  6. Predicting thermal reference conditions for USA streams and rivers

    USGS Publications Warehouse

    Hill, Ryan A.; Hawkins, Charles P.; Carlisle, Daren M.

    2013-01-01

    Temperature is a primary driver of the structure and function of stream ecosystems. However, the lack of stream temperature (ST) data for the vast majority of streams and rivers severely compromises our ability to describe patterns of thermal variation among streams, test hypotheses regarding the effects of temperature on macroecological patterns, and assess the effects of altered STs on ecological resources. Our goal was to develop empirical models that could: 1) quantify the effects of stream and watershed alteration (SWA) on STs, and 2) accurately and precisely predict natural (i.e., reference condition) STs in conterminous USA streams and rivers. We modeled 3 ecologically important elements of the thermal regime: mean summer, mean winter, and mean annual ST. To build reference-condition models (RCMs), we used daily mean ST data obtained from several thousand US Geological Survey temperature sites distributed across the conterminous USA and iteratively modeled ST with Random Forests to identify sites in reference condition. We first created a set of dirty models (DMs) that related STs to both natural factors (e.g., climate, watershed area, topography) and measures of SWA, i.e., reservoirs, urbanization, and agriculture. The 3 models performed well (r2 = 0.84–0.94, root mean square error [RMSE] = 1.2–2.0°C). For each DM, we used partial dependence plots to identify SWA thresholds below which response in ST was minimal. We then used data from only the sites with upstream SWA below these thresholds to build RCMs with only natural factors as predictors (r2 = 0.87–0.95, RMSE = 1.1–1.9°C). Use of only reference-quality sites caused RCMs to suffer modest loss of predictor space and spatial coverage, but this loss was associated with parts of ST response curves that were flat and, therefore, not responsive to further variation in predictor space. We then compared predictions made with the RCMs to predictions made with the DMs with SWA set to 0. For most DMs, setting SWAs to 0 resulted in biased estimates of thermal reference condition.
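
    A schematic of the Random Forests regression step (natural predictors mapped to a mean summer stream temperature), with synthetic predictors and response standing in for the USGS data; the out-of-bag R² is printed as a quick internal check.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = np.column_stack([
    rng.uniform(5, 30, 500),      # mean summer air temperature (degC)
    rng.uniform(0, 3000, 500),    # watershed elevation (m)
    rng.uniform(1, 1e4, 500),     # watershed area (km^2)
])
y = 0.8 * X[:, 0] - 0.002 * X[:, 1] + rng.normal(0, 1.0, 500)   # synthetic summer ST

rf = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0)
rf.fit(X, y)
print(f"out-of-bag R^2 ≈ {rf.oob_score_:.2f}")
```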

  7. A predictive model of nuclear power plant crew decision-making and performance in a dynamic simulation environment

    NASA Astrophysics Data System (ADS)

    Coyne, Kevin Anthony

    The safe operation of complex systems such as nuclear power plants requires close coordination between the human operators and plant systems. In order to maintain an adequate level of safety following an accident or other off-normal event, the operators often are called upon to perform complex tasks during dynamic situations with incomplete information. The safety of such complex systems can be greatly improved if the conditions that could lead operators to make poor decisions and commit erroneous actions during these situations can be predicted and mitigated. The primary goal of this research project was the development and validation of a cognitive model capable of simulating nuclear plant operator decision-making during accident conditions. Dynamic probabilistic risk assessment methods can improve the prediction of human error events by providing rich contextual information and an explicit consideration of feedback arising from man-machine interactions. The Accident Dynamics Simulator paired with the Information, Decision, and Action in a Crew context cognitive model (ADS-IDAC) shows promise for predicting situational contexts that might lead to human error events, particularly knowledge driven errors of commission. ADS-IDAC generates a discrete dynamic event tree (DDET) by applying simple branching rules that reflect variations in crew responses to plant events and system status changes. Branches can be generated to simulate slow or fast procedure execution speed, skipping of procedure steps, reliance on memorized information, activation of mental beliefs, variations in control inputs, and equipment failures. Complex operator mental models of plant behavior that guide crew actions can be represented within the ADS-IDAC mental belief framework and used to identify situational contexts that may lead to human error events. This research increased the capabilities of ADS-IDAC in several key areas. The ADS-IDAC computer code was improved to support additional branching events and provide a better representation of the IDAC cognitive model. An operator decision-making engine capable of responding to dynamic changes in situational context was implemented. The IDAC human performance model was fully integrated with a detailed nuclear plant model in order to realistically simulate plant accident scenarios. Finally, the improved ADS-IDAC model was calibrated, validated, and updated using actual nuclear plant crew performance data. This research led to the following general conclusions: (1) A relatively small number of branching rules are capable of efficiently capturing a wide spectrum of crew-to-crew variabilities. (2) Compared to traditional static risk assessment methods, ADS-IDAC can provide a more realistic and integrated assessment of human error events by directly determining the effect of operator behaviors on plant thermal hydraulic parameters. (3) The ADS-IDAC approach provides an efficient framework for capturing actual operator performance data such as timing of operator actions, mental models, and decision-making activities.

  8. Effect of grid transparency and finite collector size on determining ion temperature and density by the retarding potential analyzer

    NASA Technical Reports Server (NTRS)

    Troy, B. E., Jr.; Maier, E. J.

    1975-01-01

    The effects of the grid transparency and finite collector size on the values of thermal ion density and temperature determined by the standard RPA (retarding potential analyzer) analysis method are investigated. The current-voltage curves calculated for varying RPA parameters and a given ion mass, temperature, and density are analyzed by the standard RPA method. It is found that only small errors in temperature and density are introduced for an RPA with typical dimensions, and that even when the density error is substantial for nontypical dimensions, the temperature error remains minimal.
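    As a much simplified illustration of the standard RPA reduction (a stationary, singly charged Maxwellian with ideal grids; the full analysis behind this record also accounts for ion drift, grid transparency, and finite collector size), the retarded current can be fitted to an exponential in the retarding potential:

    ```python
    # Hedged sketch: fit a simplified I(V) model to a retarding-potential curve
    # to recover ion temperature; density would follow from I0, aperture area,
    # and ram speed in a real analysis.
    import numpy as np
    from scipy.optimize import curve_fit

    Q = 1.602e-19   # elementary charge, C
    K = 1.381e-23   # Boltzmann constant, J/K

    def rpa_current(V, I0, T):
        return I0 * np.exp(-Q * V / (K * T))

    V = np.linspace(0.0, 1.0, 40)                        # retarding voltage sweep (V), assumed
    I_meas = rpa_current(V, 1.0e-9, 1500.0)              # synthetic "measurement"
    I_meas *= 1 + 0.02 * np.random.default_rng(2).normal(size=V.size)

    (I0_fit, T_fit), _ = curve_fit(rpa_current, V, I_meas, p0=(1e-9, 1000.0))
    print(f"fitted ion temperature = {T_fit:.0f} K")
    ```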

  9. Basin-scale geothermal model calibration: experience from the Perth Basin, Australia

    NASA Astrophysics Data System (ADS)

    Wellmann, Florian; Reid, Lynn

    2014-05-01

    The calibration of large-scale geothermal models for entire sedimentary basins is challenging as direct measurements of rock properties and subsurface temperatures are commonly scarce and the basal boundary conditions poorly constrained. Instead of the often applied "trial-and-error" manual model calibration, we examine here if we can gain additional insight into parameter sensitivities and model uncertainty with a model analysis and calibration study. Our geothermal model is based on a high-resolution full 3-D geological model, covering an area of more than 100,000 square kilometers and extending to a depth of 55 kilometers. The model contains all major faults (>80) and geological units (13) for the entire basin. This geological model is discretised into a rectilinear mesh with a lateral resolution of 500 x 500 m, and a variable resolution at depth. The highest resolution of 25 m is applied to a depth range of 1000-3000 m where most temperature measurements are available. The entire discretised model consists of approximately 50 million cells. The top thermal boundary condition is derived from surface temperature measurements on land and the ocean floor. The base of the model extends below the Moho, and we apply the heat flux over the Moho as a basal heat flux boundary condition. Rock properties (thermal conductivity, porosity, and heat production) have been compiled from several existing data sets. The conductive geothermal forward simulation is performed with SHEMAT, and we then use the stand-alone capabilities of iTOUGH2 for sensitivity analysis and model calibration. Simulated temperatures are compared to 130 quality-weighted bottom hole temperature measurements. The sensitivity analysis provided a clear insight into the most sensitive parameters and parameter correlations. This proved to be of value as strong correlations, for example between basal heat flux and heat production in deep geological units, can significantly influence the model calibration procedure. The calibration resulted in a better determination of subsurface temperatures, and, in addition, provided an insight into model quality. Furthermore, a detailed analysis of the measurements used for calibration highlighted potential outliers, and limitations of the model assumptions. Extending the previously existing large-scale geothermal simulation with iTOUGH2 provided us with a valuable insight into the sensitive parameters and data in the model, which would clearly not be possible with a simple trial-and-error calibration method. Using the gained knowledge, future work will include more detailed studies on the influence of advection and convection.
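    For orientation, the forward problem at a single location reduces, in the purely conductive steady-state case, to a standard analytic geotherm; the sketch below (with assumed property values, not the Perth Basin data or the SHEMAT/iTOUGH2 workflow) shows how surface temperature, basal heat flux, conductivity, and heat production combine:

    ```python
    # Sketch: 1-D steady-state conductive geotherm for a layer with uniform
    # heat production; illustrates why basal heat flux and heat production
    # trade off against each other in calibration.
    import numpy as np

    def conductive_geotherm(z, T0, q_b, k, A, L):
        """Temperature at depth z (m) for a layer of thickness L (m) with uniform
        heat production A (W/m^3), conductivity k (W/m/K), surface temperature T0
        and basal heat flux q_b (W/m^2) entering at its base."""
        q_surface = q_b + A * L                       # surface heat flow
        return T0 + q_surface * z / k - A * z**2 / (2.0 * k)

    z = np.linspace(0, 5000, 6)                       # depths to 5 km
    print(conductive_geotherm(z, T0=20.0, q_b=0.03, k=2.5, A=1.0e-6, L=5000.0))
    ```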

  10. Method calibration of the model 13145 infrared target projectors

    NASA Astrophysics Data System (ADS)

    Huang, Jianxia; Gao, Yuan; Han, Ying

    2014-11-01

    The SBIR Model 13145 Infrared Target Projector (hereafter the Evaluation Unit) is used to characterize the performance of infrared imaging systems; its test items include SiTF, MTF, NETD, MRTD, MDTD and NPS. The projector comprises two area blackbodies, a 12-position target wheel and an all-reflective collimator. It provides high-spatial-frequency precision differential targets, which are imaged by the infrared imaging system under test and converted photoelectrically into analog or digital signals; the application software (IR Windows TM 2001) then evaluates the performance of the imaging system. To calibrate the unit as a whole, its distributed components are first calibrated separately: the area blackbodies are calibrated according to their calibration specification, the error factors of the all-reflective collimator are corrected, the radiance of the infrared target projector is calibrated using the SR5000 spectral radiometer, and the systematic errors are analyzed. For the parameters of the infrared imaging system, an integrated evaluation method is required. Following GJB2340-1995, General specification for military thermal imaging sets, the tested parameters of the infrared imaging system are compared with results from the Optical Calibration Testing Laboratory, with the goal of establishing the true calibration performance of the Evaluation Unit.

  11. Exchange-Hole Dipole Dispersion Model for Accurate Energy Ranking in Molecular Crystal Structure Prediction.

    PubMed

    Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R

    2017-02-14

    Accurate energy ranking is a key facet to the problem of first-principles crystal-structure prediction (CSP) of molecular crystals. This work presents a systematic assessment of B86bPBE-XDM, a semilocal density functional combined with the exchange-hole dipole moment (XDM) dispersion model, for energy ranking using 14 compounds from the first five CSP blind tests. Specifically, the set of crystals studied comprises 11 rigid, planar compounds and 3 co-crystals. The experimental structure was correctly identified as the lowest in lattice energy for 12 of the 14 total crystals. One of the exceptions is 4-hydroxythiophene-2-carbonitrile, for which the experimental structure was correctly identified once a quasi-harmonic estimate of the vibrational free-energy contribution was included, evidencing the occasional importance of thermal corrections for accurate energy ranking. The other exception is an organic salt, where charge-transfer error (also called delocalization error) is expected to cause the base density functional to be unreliable. Provided the choice of base density functional is appropriate and an estimate of temperature effects is used, XDM-corrected density-functional theory is highly reliable for the energetic ranking of competing crystal structures.

  12. Chiral pathways in DNA dinucleotides using gradient optimized refinement along metastable borders

    NASA Astrophysics Data System (ADS)

    Romano, Pablo; Guenza, Marina

    We present a study of DNA breathing fluctuations using Markov state models (MSM) with our novel refinement procedure. MSM have become a favored method of building kinetic models; however, their accuracy has always depended on using a significant number of microstates, which makes the method costly. We present a method which optimizes macrostates by refining their borders with respect to the gradient along the free energy surface. Because the borders between macrostates carry the largest discretization errors, this refinement corrects for errors produced by limited microstate sampling. Using our refined MSM methods, we investigate DNA breathing fluctuations, thermally induced conformational changes in native B-form DNA. We ran several microsecond MD simulations of DNA dinucleotides of varying sequences, to include sequence and polarity effects, and analyzed them with the refined MSM to investigate the conformational pathways inherent in the unstacking of DNA bases. Our kinetic analysis has shown preferential chirality in unstacking pathways that may be critical in how proteins interact with single-stranded regions of DNA. These breathing dynamics can help elucidate the connection between conformational changes and key mechanisms within protein-DNA recognition. NSF Chemistry Division (Theoretical Chemistry), the Division of Physics (Condensed Matter: Material Theory), XSEDE.
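    The basic MSM estimation step that the refinement builds on can be sketched as follows (a generic count-matrix construction with synthetic state labels; the authors' gradient-based border refinement is not reproduced here):

    ```python
    # Sketch: row-stochastic transition matrix and implied timescales from a
    # discretized trajectory at a chosen lag time.
    import numpy as np

    def msm_transition_matrix(labels, n_states, lag):
        """Estimate a row-stochastic transition matrix from state labels."""
        counts = np.zeros((n_states, n_states))
        for i, j in zip(labels[:-lag], labels[lag:]):
            counts[i, j] += 1.0
        counts += counts.T                       # crude symmetrization toward reversibility
        return counts / counts.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(0)
    labels = rng.integers(0, 4, size=50_000)     # placeholder macrostate assignments
    T = msm_transition_matrix(labels, n_states=4, lag=10)
    eigvals = np.sort(np.linalg.eigvals(T).real)[::-1]
    timescales = -10.0 / np.log(np.clip(eigvals[1:], 1e-12, 1 - 1e-12))   # in frames
    print(timescales)
    ```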

  13. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  14. Impact of nonzero boresight pointing error on ergodic capacity of MIMO FSO communication systems.

    PubMed

    Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Beatriz; Castillo-Vázquez, Carmen

    2016-02-22

    A thorough investigation of the impact of nonzero boresight pointing errors on the ergodic capacity of multiple-input/multiple-output (MIMO) free-space optical (FSO) systems with equal gain combining (EGC) reception under different turbulence models, which are modeled as statistically independent, but not necessarily identically distributed (i.n.i.d.), is addressed in this paper. Novel closed-form asymptotic expressions at high signal-to-noise ratio (SNR) for the ergodic capacity of MIMO FSO systems are derived when different geometric arrangements of the receive apertures at the receiver are considered in order to reduce the effect of nonzero inherent boresight displacement, which is inevitably present when more than one receive aperture is considered. As a result, the asymptotic ergodic capacity of MIMO FSO systems is evaluated over log-normal (LN), gamma-gamma (GG) and exponentiated Weibull (EW) atmospheric turbulence in order to study different turbulence conditions, different sizes of receive apertures as well as different aperture averaging conditions. It is concluded that the use of single-input/multiple-output (SIMO) and MIMO techniques can significantly increase the ergodic capacity with respect to the direct path link when the inherent boresight displacement takes small values, i.e. when the spacing among receive apertures is not too large. The effect of nonzero additional boresight errors, which is due to the thermal expansion of the building, is evaluated in multiple-input/single-output (MISO) and single-input/single-output (SISO) FSO systems. Simulation results are further included to confirm the analytical results.

  15. A model based on Rock-Eval thermal analysis to quantify the size of the centennially persistent organic carbon pool in temperate soils

    NASA Astrophysics Data System (ADS)

    Cécillon, Lauric; Baudin, François; Chenu, Claire; Houot, Sabine; Jolivet, Romain; Kätterer, Thomas; Lutfalla, Suzanne; Macdonald, Andy; van Oort, Folkert; Plante, Alain F.; Savignac, Florence; Soucémarianadin, Laure N.; Barré, Pierre

    2018-05-01

    Changes in global soil carbon stocks have considerable potential to influence the course of future climate change. However, a portion of soil organic carbon (SOC) has a very long residence time ( > 100 years) and may not contribute significantly to terrestrial greenhouse gas emissions during the next century. The size of this persistent SOC reservoir is presumed to be large. Consequently, it is a key parameter required for the initialization of SOC dynamics in ecosystem and Earth system models, but there is considerable uncertainty in the methods used to quantify it. Thermal analysis methods provide cost-effective information on SOC thermal stability that has been shown to be qualitatively related to SOC biogeochemical stability. The objective of this work was to build the first quantitative model of the size of the centennially persistent SOC pool based on thermal analysis. We used a unique set of 118 archived soil samples from four agronomic experiments in northwestern Europe with long-term bare fallow and non-bare fallow treatments (e.g., manure amendment, cropland and grassland) as a sample set for which estimating the size of the centennially persistent SOC pool is relatively straightforward. At each experimental site, we estimated the average concentration of centennially persistent SOC and its uncertainty by applying a Bayesian curve-fitting method to the observed declining SOC concentration over the duration of the long-term bare fallow treatment. Overall, the estimated concentrations of centennially persistent SOC ranged from 5 to 11 g C kg-1 of soil (lowest and highest boundaries of four 95 % confidence intervals). Then, by dividing the site-specific concentrations of persistent SOC by the total SOC concentration, we could estimate the proportion of centennially persistent SOC in the 118 archived soil samples and the associated uncertainty. The proportion of centennially persistent SOC ranged from 0.14 (standard deviation of 0.01) to 1 (standard deviation of 0.15). Samples were subjected to thermal analysis by Rock-Eval 6 that generated a series of 30 parameters reflecting their SOC thermal stability and bulk chemistry. We trained a nonparametric machine-learning algorithm (random forests multivariate regression model) to predict the proportion of centennially persistent SOC in new soils using Rock-Eval 6 thermal parameters as predictors. We evaluated the model predictive performance with two different strategies. We first used a calibration set (n = 88) and a validation set (n = 30) with soils from all sites. Second, to test the sensitivity of the model to pedoclimate, we built a calibration set with soil samples from three out of the four sites (n = 84). The multivariate regression model accurately predicted the proportion of centennially persistent SOC in the validation set composed of soils from all sites (R2 = 0.92, RMSEP = 0.07, n = 30). The uncertainty of the model predictions was quantified by a Monte Carlo approach that produced conservative 95 % prediction intervals across the validation set. The predictive performance of the model decreased when predicting the proportion of centennially persistent SOC in soils from one fully independent site with a different pedoclimate, yet the mean error of prediction only slightly increased (R2 = 0.53, RMSEP = 0.10, n = 34). 
This model based on Rock-Eval 6 thermal analysis can thus be used to predict the proportion of centennially persistent SOC with known uncertainty in new soil samples from different pedoclimates, at least for sites that have similar Rock-Eval 6 thermal characteristics to those included in the calibration set. Our study reinforces the evidence that there is a link between the thermal and biogeochemical stability of soil organic matter and demonstrates that Rock-Eval 6 thermal analysis can be used to quantify the size of the centennially persistent organic carbon pool in temperate soils.
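    The regression step itself can be sketched generically (a random forest on thermal parameters with a held-out validation split; the arrays below are synthetic stand-ins for the Rock-Eval 6 parameters and measured proportions, not the study data):

    ```python
    # Sketch: random-forest regression of the persistent-SOC proportion on
    # 30 thermal predictors, scored with R2 and RMSEP on a validation set.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score, mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.normal(size=(118, 30))                                        # 30 Rock-Eval-like parameters
    y = np.clip(0.5 + 0.1 * X[:, 0] + rng.normal(0, 0.05, 118), 0, 1)     # toy persistent-SOC proportion

    X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=30, random_state=0)
    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_cal, y_cal)
    pred = rf.predict(X_val)
    print("R2:", r2_score(y_val, pred), "RMSEP:", mean_squared_error(y_val, pred) ** 0.5)
    ```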

  16. Simultaneous treatment of unspecified heteroskedastic model error distribution and mismeasured covariates for restricted moment models.

    PubMed

    Garcia, Tanya P; Ma, Yanyuan

    2017-10-01

    We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.

  17. A function space approach to smoothing with applications to model error estimation for flexible spacecraft control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1981-01-01

    A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.

  18. Experimental Verification of Sparse Aperture Mask for Low Order Wavefront Sensing

    NASA Astrophysics Data System (ADS)

    Subedi, Hari; Kasdin, N. Jeremy

    2017-01-01

    To directly image exoplanets, future space-based missions are equipped with coronagraphs which manipulate the diffraction of starlight and create regions of high contrast called dark holes. Theoretically, coronagraphs can be designed to achieve the high level of contrast required to image exoplanets, which are billions of times dimmer than their host stars; however, the aberrations caused by optical imperfections and thermal fluctuations degrade the contrast in the dark holes. Focal plane wavefront control (FPWC) algorithms using deformable mirrors (DMs) are used to mitigate the quasi-static aberrations caused by optical imperfections. Although the FPWC methods correct the quasi-static aberrations, they are blind to dynamic errors caused by telescope jitter and thermal fluctuations. At Princeton's High Contrast Imaging Lab we have developed a new technique that integrates a sparse aperture mask (SAM) with the coronagraph to estimate these low-order dynamic wavefront errors. This poster shows the effectiveness of a SAM low-order wavefront sensor in estimating and correcting these errors via simulation and experiment and compares the results to other methods, such as the Zernike Wavefront Sensor planned for WFIRST.

  19. Thermally stratified squeezed flow between two vertical Riga plates with no slip conditions

    NASA Astrophysics Data System (ADS)

    Farooq, M.; Mansoor, Zahira; Ijaz Khan, M.; Hayat, T.; Anjum, A.; Mir, N. A.

    2018-04-01

    This paper demonstrates the mixed convective squeezing flow of nanomaterials between two vertical plates, one of which is a Riga plate embedded in a thermally stratified medium subject to convective boundary conditions. Heat transfer features are elaborated with viscous dissipation. Single-wall and multi-wall carbon nanotubes are taken as nanoparticles to form a homogeneous solution in water. A non-linear system of differential equations is obtained for the considered flow by using suitable transformations. Convergence analysis for velocity and temperature is computed and discussed explicitly through BVPh 2.0. Residual errors are also computed by BVPh 2.0 for the dimensionless governing equations. We introduce two undetermined convergence control parameters, i.e. ℏ_θ and ℏ_f, chosen to minimize the total error. The average residual error for the k-th-order approximation is given in a table. The effects of different flow variables on temperature and velocity distributions are sketched graphically and discussed comprehensively. Furthermore, the skin friction coefficient and the Nusselt number are also analyzed through graphical data.

  20. Assessment of longwave radiation effects on air quality modelling in street canyons

    NASA Astrophysics Data System (ADS)

    Soucasse, L.; Buchan, A.; Pain, C.

    2016-12-01

    Computational Fluid Dynamics is widely used as a predictive tool to evaluate people's exposure to pollutants in urban street canyons. However, in low-wind conditions, flow and pollutant dispersion in the canyons are driven by thermal effects and may be affected by longwave (infrared) radiation due to the absorption and emission of water vapor contained in the air. These effects are mostly ignored in the literature dedicated to air quality modelling at this scale. This study aims at quantifying the uncertainties due to neglecting thermal radiation in air quality models. The large-eddy simulation of air flow in a single 2D canyon with a heat source on the ground is considered for Rayleigh and Reynolds numbers in the ranges 10^8-10^10 and 5x10^3-5x10^4, respectively. The dispersion of a tracer is monitored once the statistically steady regime is reached. Incoming radiation is computed for a mid-latitude summer atmosphere and canyon surfaces are assumed to be black. Water vapour is the only radiating molecule considered and a global model is used to treat the spectral dependency of its absorption coefficient. Flow and radiation fields are solved in a coupled way using the finite element solvers Fluidity and Fetch, which have the capability of adapting their spatial and angular resolution according to an estimate of the solution error. Results show significant effects of thermal radiation on flow patterns and tracer dispersion. When radiation is taken into account, the air is heated far from the heat source, leading to a stronger natural convection flow. The tracer is then dispersed faster out of the canyon, potentially decreasing people's exposure to pollution within the street canyon.

  1. Thermal infrared data of active lava surfaces using a newly-developed camera system

    NASA Astrophysics Data System (ADS)

    Thompson, J. O.; Ramsey, M. S.

    2017-12-01

    Our ability to acquire accurate data during lava flow emplacement greatly improves models designed to predict their dynamics and down-flow hazard potential. For example, better constraint on the physical property of emissivity as a lava cools improves the accuracy of the derived temperature, a critical parameter for flow models that estimate at-vent eruption rate, flow length, and distribution. Thermal infrared (TIR) data are increasingly used as a tool to determine eruption styles and cooling regimes by measuring temperatures at high temporal resolutions. Factors that control the accurate measurement of surface temperatures include both material properties (e.g., emissivity and surface texture) as well as external factors (e.g., camera geometry and the intervening atmosphere). We present a newly-developed, field-portable miniature multispectral thermal infrared camera (MMT-Cam) to measure both temperature and emissivity of basaltic lava surfaces at up to 7 Hz. The MMT-Cam acquires emitted radiance in six wavelength channels in addition to the broadband temperature. The instrument was laboratory calibrated for systematic errors and fully field tested at the Overlook Crater lava lake (Kilauea, HI) in January 2017. The data show that the major emissivity absorption feature (around 8.5 to 9.0 µm) transitions to higher wavelengths and the depth of the feature decreases as a lava surface cools, forming a progressively thicker crust. This transition occurs over a temperature range of 758 to 518 K. Constraining the relationship between this spectral change and temperature derived from these data will provide more accurate temperatures and, therefore, more accurate modeling results. This is the first time that emissivity and its link to temperature have been measured in situ on active lava surfaces, which will improve input parameters of flow propagation models and possibly improve flow forecasting.
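    To illustrate why constraining emissivity matters for TIR-derived temperatures, a short sketch inverts the Planck function for brightness temperature with and without an emissivity correction (illustrative numbers only, not MMT-Cam measurements):

    ```python
    # Sketch: brightness temperature from spectral radiance via the inverse
    # Planck function; ignoring emissivity biases the retrieved temperature low.
    import numpy as np

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

    def planck_radiance(lam, T):
        """Spectral radiance (W m^-2 sr^-1 m^-1) at wavelength lam (m), temperature T (K)."""
        return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

    def brightness_temperature(lam, L):
        """Temperature of a blackbody that would emit spectral radiance L at lam."""
        return (H * C / (lam * KB)) / np.log1p(2 * H * C**2 / (lam**5 * L))

    lam = 9.0e-6                                      # ~9 micron channel (assumed)
    L_meas = 0.92 * planck_radiance(lam, 800.0)       # 800 K surface with emissivity 0.92 (assumed)
    print(brightness_temperature(lam, L_meas))        # biased low if emissivity is ignored
    print(brightness_temperature(lam, L_meas / 0.92)) # recovered kinetic temperature
    ```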

  2. Thermal Property Measurement of Semiconductor Melt using Modified Laser Flash Method

    NASA Technical Reports Server (NTRS)

    Lin, Bochuan; Zhu, Shen; Ban, Heng; Li, Chao; Scripa, Rosalla N.; Su, Ching-Hua; Lehoczky, Sandor L.

    2003-01-01

    This study further developed the standard laser flash method to measure multiple thermal properties of semiconductor melts. The modified method can determine the thermal diffusivity, thermal conductivity, and specific heat capacity of the melt simultaneously. The transient heat transfer process in the melt and its quartz container was numerically studied in detail. A fitting procedure based on numerical simulation results and a least root-mean-square-error fit to the experimental data was used to extract the values of specific heat capacity, thermal conductivity and thermal diffusivity. This modified method is a step forward from the standard laser flash method, which is usually used to measure the thermal diffusivity of solids. The results for tellurium (Te) at 873 K (specific heat capacity 300.2 J/(kg K), thermal conductivity 3.50 W/(m K), thermal diffusivity 2.04 x 10^-6 m^2/s) are within the range reported in the literature. The uncertainty analysis showed the quantitative effect of sample geometry, the measured transient temperature, and the energy of the laser pulse.
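    For context, the standard (unmodified) laser-flash reduction rests on the half-rise-time relation for an adiabatic slab; a sketch with assumed inputs (not the tellurium measurement, which used the fitting procedure described above):

    ```python
    # Sketch: classical laser-flash diffusivity and the conversion to conductivity.
    def laser_flash_diffusivity(thickness_m, t_half_s):
        """Parker relation: alpha = 0.1388 L^2 / t_(1/2) for an adiabatic slab."""
        return 0.1388 * thickness_m**2 / t_half_s

    def conductivity(alpha, density, cp):
        """k = alpha * rho * c_p, converting diffusivity to conductivity."""
        return alpha * density * cp

    alpha = laser_flash_diffusivity(thickness_m=2.0e-3, t_half_s=0.27)   # assumed sample values
    print(alpha, conductivity(alpha, density=5700.0, cp=300.0))          # illustrative properties
    ```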

  3. Designing, building, and testing a solar thermoelectric generation, STEG, for energy delivery to remote residential areas in developing regions

    NASA Astrophysics Data System (ADS)

    Moumouni, Yacouba

    New alternatives and inventive renewable energy techniques which encompass both generation and power management solutions are fundamental for meeting remote residential energy supply and demand today, especially if the grid is virtually nonexistent. Solar thermoelectric generators can be a cost-effective alternative to photovoltaics for a remote residential household power supply. A complete solar thermoelectric energy harvesting system is presented for energy delivery to remote residential areas in developing regions. To this end, the entire system was built, modeled, and then validated with LTspice simulator software via thermal-to-electrical analogy schemes. Valuable data in conjunction with two novel LTspice circuits were obtained, showing that transient heat transfer can be analyzed with the Spice simulator. Hence, the proposed study begins with a comprehensive method of extracting thermal parameters that appear in thermoelectric modules. A step-by-step procedure was developed and followed to succinctly extract parameters, such as the Seebeck coefficient, electrical conductivity, thermal resistance, and thermal conductivity, needed to model the system. Data extracted from the datasheet, material properties, and geometries were successfully utilized to compute the thermal capacities and resistances necessary to perform the analogy. In addition, temperature variations of the intrinsic internal parameters were accounted for in this process for accuracy purposes. The steps that it takes to simulate any thermo-electrical system with the LTspice simulator are thoroughly explained in this work. As a consequence, an improved Spice model for a thermoelectric generator is proposed. Experimental results were compiled in the form of a lookup table and then fed into the Spice simulator using the piecewise linear (PWL) command in order to validate the model. Experimental results show that a temperature differential of 13.43°C was achievable whereas the simulation indicates a temperature gap of 9.86°C, with the higher error being associated with the hot side. Also, since the analytical method of transient heat transfer analysis is cumbersome, an LTspice model of a real-world solar thermoelectric generation system was investigated. All the physical parameters were converted into their electrical equivalents through the thermal-to-electrical analogy. Real site direct normal insolation was fed into the Spice model via PWL in order to capture the true system's thermal behavior. Interestingly, two distinct analogies result from this study: 1) an RC analogy and 2) another analogy similar to an N-type doped semiconductor material's carrier density dependence on temperature. The RC analogy is derived in order to demonstrate how thermoelectric generation systems respond to square wave-like solar radiation. This analogy is utilized to measure temperature variations on the cold side of the Spice model; it shows 80% accuracy. The N-type analogy is intended to help analyze the actual performance of an LTC3105 converter. However, a few problems remain to be solved at the practical level. Despite the unusual operation of the thermoelectric modules under solar radiation, the measurements and simulation were in good agreement, thus validating the new thermal modeling strategy.
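    The RC analogy itself is straightforward to reproduce outside LTspice; a minimal Python sketch of a single lumped thermal node driven by a square-wave solar input (placeholder resistance, capacitance, and input values, not the thesis parameters):

    ```python
    # Sketch of the thermal-to-electrical analogy: temperature <-> voltage,
    # heat flow <-> current, thermal resistance <-> R, thermal capacitance <-> C.
    from scipy.integrate import solve_ivp

    R_th = 2.0      # K/W, lumped thermal resistance to ambient (assumed)
    C_th = 50.0     # J/K, lumped thermal capacitance (assumed)
    T_amb = 25.0    # ambient temperature, degC

    def q_solar(t):                         # square-wave-like absorbed solar input, W (assumed)
        return 20.0 if (t % 600.0) < 300.0 else 0.0

    def dTdt(t, T):                         # node energy balance: C dT/dt = q_in - (T - T_amb)/R
        return [(q_solar(t) - (T[0] - T_amb) / R_th) / C_th]

    sol = solve_ivp(dTdt, (0.0, 3600.0), [T_amb], max_step=1.0)
    print("peak node temperature:", sol.y[0].max(), "degC")
    ```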

  4. Study of Convection Heat Transfer in a Very High Temperature Reactor Flow Channel: Numerical and Experimental Results

    DOE PAGES

    Valentin, Francisco I.; Artoun, Narbeh; Anderson, Ryan; ...

    2016-12-01

    Very High Temperature Reactors (VHTRs) are one of the Generation IV gas-cooled reactor models proposed for implementation in next generation nuclear power plants. A high temperature/pressure test facility for forced and natural circulation experiments has been constructed. This test facility consists of a single flow channel in a 2.7 m (9') long graphite column equipped with four 2.3 kW heaters. Extensive 3D numerical modeling provides a detailed analysis of the thermal-hydraulic behavior under steady-state, transient, and accident scenarios. In addition, forced/mixed convection experiments with air, nitrogen and helium were conducted for inlet Reynolds numbers from 500 to 70,000. Our numerical results were validated against forced convection data, displaying maximum percentage errors under 15%, using the commercial finite element package COMSOL Multiphysics. Based on this agreement, important information can be extracted from the model with regard to the modified radial velocity and gas property profiles. Our work also examines flow laminarization for a full range of Reynolds numbers, including laminar, transitional and turbulent flow under forced convection, and its impact on heat transfer under various scenarios to examine the thermal-hydraulic phenomena that could occur during both normal operation and accident conditions.
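    As a rough companion to such simulations, the bulk flow regime and forced-convection heat transfer in a single channel can be characterized with textbook relations; the inputs below are illustrative, not the facility's data or the COMSOL model:

    ```python
    # Sketch: Reynolds number for internal flow and the Dittus-Boelter Nusselt
    # correlation for turbulent forced convection (heating).
    import math

    def reynolds(mdot, D, mu):
        """Re from mass flow rate (kg/s), channel diameter (m), viscosity (Pa s)."""
        return 4.0 * mdot / (math.pi * D * mu)

    def nusselt_dittus_boelter(Re, Pr):
        return 0.023 * Re**0.8 * Pr**0.4

    Re = reynolds(mdot=5e-3, D=0.016, mu=3.0e-5)       # helium-like values, assumed
    print(Re, nusselt_dittus_boelter(Re, Pr=0.66))
    ```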

  5. A comparison of performance of several artificial intelligence methods for predicting the dynamic viscosity of TiO2/SAE 50 nano-lubricant

    NASA Astrophysics Data System (ADS)

    Hemmat Esfe, Mohammad; Tatar, Afshin; Ahangar, Mohammad Reza Hassani; Rostamian, Hossein

    2018-02-01

    Since conventional thermal fluids such as water, oil, and ethylene glycol have poor thermal properties, tiny solid particles are added to these fluids to improve their heat transfer. As viscosity determines the rheological behavior of a fluid, studying the parameters affecting the viscosity is crucial. Since the experimental measurement of viscosity is expensive and time consuming, predicting this parameter is an attractive alternative. In this work, three artificial intelligence methods, Genetic Algorithm-Radial Basis Function Neural Networks (GA-RBF), Least Square Support Vector Machine (LS-SVM) and Gene Expression Programming (GEP), were applied to predict the viscosity of TiO2/SAE 50 nano-lubricant with non-Newtonian power-law behavior using experimental data. The coefficient of determination (R2), Average Absolute Relative Deviation (AARD), Root Mean Square Error (RMSE), and Margin of Deviation were employed to investigate the accuracy of the proposed models. RMSE values of 0.58, 1.28, and 6.59 and R2 values of 0.99998, 0.99991, and 0.99777 reveal the accuracy of the proposed models for the GA-RBF, CSA-LSSVM, and GEP methods, respectively. Among the developed models, GA-RBF shows the best accuracy.
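    The quoted evaluation metrics are simple to compute; a sketch of one common set of definitions (the paper's exact AARD convention may differ slightly):

    ```python
    # Sketch: R^2, AARD (%), and RMSE for comparing predicted vs measured viscosity.
    import numpy as np

    def metrics(y_true, y_pred):
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        ss_res = np.sum((y_true - y_pred) ** 2)
        ss_tot = np.sum((y_true - y_true.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        aard = 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))
        rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
        return r2, aard, rmse

    print(metrics([10.0, 12.0, 15.0, 20.0], [10.2, 11.7, 15.4, 19.8]))   # illustrative values
    ```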

  6. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  7. Applicability study of classical and contemporary models for effective complex permittivity of metal powders.

    PubMed

    Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien

    2012-01-01

    Microwave thermal processing of metal powders has recently been a topic of substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of complex permittivity of mixtures and discuss the use of these models for determining the effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through original measurements), and for a tungsten/Teflon mixture (from the literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powder in the full temperature range, input data on effective complex permittivity obtained from direct measurement has, up to now, no substitute.
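    Two of the classical rules compared in the paper are easy to state; the sketch below evaluates the Maxwell Garnett formula for spherical inclusions and the Lichtenecker (logarithmic) rule, with illustrative permittivities rather than the measured mixture data:

    ```python
    # Sketch: effective complex permittivity from two classical mixing rules.
    import numpy as np

    def maxwell_garnett(eps_m, eps_i, f):
        """Spherical inclusions eps_i at volume fraction f in a matrix eps_m."""
        num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
        den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
        return eps_m * num / den

    def lichtenecker(eps_m, eps_i, f):
        """Logarithmic mixing rule."""
        return np.exp(f * np.log(eps_i) + (1 - f) * np.log(eps_m))

    eps_matrix = 2.6 - 0.01j      # stearic-acid-like dielectric (assumed)
    eps_metal = 1.0 - 1.0e4j      # crude stand-in for a conductive inclusion (assumed)
    for f in (0.1, 0.2, 0.3):
        print(f, maxwell_garnett(eps_matrix, eps_metal, f), lichtenecker(eps_matrix, eps_metal, f))
    ```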

  8. Atomistic modeling of metallic thin films by modified embedded atom method

    NASA Astrophysics Data System (ADS)

    Hao, Huali; Lau, Denvid

    2017-11-01

    Molecular dynamics simulation is applied to investigate the deposition process of metallic thin films. Eight metals, titanium, vanadium, iron, cobalt, nickel, copper, tungsten, and gold, are chosen to be deposited on an aluminum substrate. The second nearest-neighbor modified embedded atom method potential is adopted to predict their thermal and mechanical properties. When quantifying the screening parameters of the potential, the error in Young's modulus and the coefficient of thermal expansion between the simulated results and the experimental measurements is less than 15%, demonstrating the reliability of the potential to predict metallic behaviors related to thermal and mechanical properties. A set of potential parameters which governs the interactions between aluminum and other metals in a binary system is also generated from ab initio calculations. The details of interfacial structures between the chosen films and substrate are successfully simulated with the help of these parameters. Our results indicate that the preferred orientation of film growth depends on the film crystal structure, and the inter-diffusion at the interface is correlated with the cohesive energy parameter of the potential for the binary system. This finding provides an important basis for further understanding the interfacial science, which contributes to improving the mechanical properties, reliability and durability of films.

  9. Thermal properties of nonstoichiometric uranium dioxide

    NASA Astrophysics Data System (ADS)

    Kavazauri, R.; Pokrovskiy, S. A.; Baranov, V. G.; Tenishev, A. V.

    2016-04-01

    In this paper, a method was developed for oxidizing pure uranium dioxide to a predetermined deviation from stoichiometry. Oxidation was carried out using the thermogravimetric method on a NETZSCH STA 409 CD with a solid-electrolyte galvanic cell for controlling the oxygen potential of the environment. Four uranium oxide samples were obtained with different oxygen-to-metal ratios: O/U = 2.002, 2.005, 2.015, and 2.033. The basic thermal characteristics (heat capacity, thermal diffusivity, and thermal conductivity) were determined for the obtained samples. The error of the heat capacity determination is 5%. The thermal diffusivity and thermal conductivity of the samples decreased with increasing deviation from stoichiometry. For the sample with O/M = 2.033, both values differ from those of stoichiometric uranium dioxide by close to 50%.

  10. Model error estimation for distributed systems described by elliptic equations

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.

  11. Hysteresis of thin film IPRTs in the range 100 °C to 600 °C

    NASA Astrophysics Data System (ADS)

    Zvizdić, D.; Šestan, D.

    2013-09-01

    Unlike SPRTs, IPRTs exhibit hysteresis when subjected to changes of temperature. This uncertainty component, although acknowledged as omnipresent in many other types of sensors (pressure, electrical, magnetic, humidity, etc.), has often been disregarded in calibration certificates' uncertainty budgets in the past, its determination being costly, time-consuming and not appreciated by customers and manufacturers. In general, hysteresis is a phenomenon that results in a difference in an item's behavior when a given state is approached from different paths. Thermal hysteresis results in a difference in resistance at a given temperature based on the thermal history to which the PRTs were exposed. The most prominent factor that contributes to the hysteresis error in an IPRT is strain within the sensing element caused by thermal expansion and contraction. The strains that cause hysteresis error are closely related to the strains that cause repeatability error. Therefore, it is typical that PRTs that exhibit small hysteresis also exhibit small repeatability error, and PRTs that exhibit large hysteresis have poor repeatability. The aim of this paper is to provide a hysteresis characterization of a batch of IPRTs using the same type of thin-film sensor, encapsulated by the same procedure and the same company, and to estimate to what extent the thermal hysteresis obtained by testing one single thermometer (or a few thermometers) can serve as representative of other thermometers of the same type and manufacturer. This investigation should also indicate the range of hysteresis departure between IPRTs of the same type. Hysteresis was determined by cycling the IPRTs' temperature from 100 °C through intermediate points up to 600 °C and subsequently back to 100 °C. Within that range, several typical sub-ranges were investigated: 100 °C to 400 °C, 100 °C to 500 °C, 100 °C to 600 °C, 300 °C to 500 °C and 300 °C to 600 °C. The hysteresis was determined at various temperatures by comparison calibration with an SPRT. The results of the investigation are presented in graphical form for all IPRTs, ranges and calibration points.

  12. Numerical modeling of the thermoelectric cooler with a complementary equation for heat circulation in air gaps

    NASA Astrophysics Data System (ADS)

    Fang, En; Wu, Xiaojie; Yu, Yuesen; Xiu, Junrui

    2017-03-01

    In this paper, a numerical model is developed by combining thermodynamics with heat transfer theory. Taking internal and external multi-irreversibility into account, it includes a complementary equation for the heat circulation in the air gaps of a steady cooling system with commercial thermoelectric modules operating in refrigeration mode. With the two modes concerned, the equation describes the heat flowing through the air gaps, which forms heat circulations between the two sides of the thermoelectric coolers (TECs). In the numerical model, a TEC is represented as two temperature-controlled constant-heat-flux reservoirs in a thermal resistance network. In order to obtain the parameter values, an experimental apparatus with a commercial thermoelectric cooler was built to characterize the performance of a TEC with a heat source and sink assembly. At constant power dissipation, the steady temperatures of the heat source and of both sides of the thermoelectric cooler were compared with those from a standard numerical model. The results showed that the relationship between Φ_f and the ratio Φ_c'/Φ_c was linear, as expected. Then, to verify the accuracy of the proposed numerical model, data from another system were recorded. The experimental results are in good agreement with the simulation (proposed model) data at different heat transfer rates. The error is small and results mainly from the variation of the thermal resistances with temperature and heat flux, heat loss from the device's vertical surfaces, and measurement uncertainty.
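    For reference, the textbook thermoelectric-cooler balance that such models extend can be sketched as follows; the complementary air-gap heat-circulation term proposed in the paper is not included, and the module parameters are assumed:

    ```python
    # Sketch: standard TEC energy balance (Seebeck pumping, Joule heating,
    # conductive back-flow) and the resulting COP.
    def tec_heat_flows(S, R, K, I, T_c, T_h):
        """Cold-side heat pumped, hot-side heat rejected, and electrical power for
        Seebeck coefficient S (V/K), resistance R (ohm), conductance K (W/K),
        current I (A), and junction temperatures T_c, T_h (K)."""
        dT = T_h - T_c
        Q_c = S * I * T_c - 0.5 * I**2 * R - K * dT
        Q_h = S * I * T_h + 0.5 * I**2 * R - K * dT
        return Q_c, Q_h, Q_h - Q_c

    # Illustrative module parameters (assumed, not from the paper)
    Q_c, Q_h, P = tec_heat_flows(S=0.05, R=2.0, K=0.5, I=3.0, T_c=290.0, T_h=310.0)
    print(f"Q_c = {Q_c:.1f} W, COP = {Q_c / P:.2f}")
    ```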

  13. Prediction of microcracking in composite laminates under thermomechanical loading

    NASA Technical Reports Server (NTRS)

    Maddocks, Jason R.; Mcmanus, Hugh L.

    1995-01-01

    Composite laminates used in space structures are exposed to both thermal and mechanical loads. Cracks in the matrix form, changing the laminate thermoelastic properties. An analytical methodology is developed to predict microcrack density in a general laminate exposed to an arbitrary thermomechanical load history. The analysis uses a shear lag stress solution in conjunction with an energy-based cracking criterion. Experimental investigation was used to verify the analysis. Correlation between analysis and experiment is generally excellent. The analysis does not capture machining-induced cracking, or observed delayed crack initiation in a few ply groups, but these errors do not prevent the model from being a useful preliminary design tool.

  14. Comparison between the land surface response of the ECMWF model and the FIFE-1987 data

    NASA Technical Reports Server (NTRS)

    Betts, Alan K.; Ball, John H.; Beljaars, Anton C. M.

    1993-01-01

    An averaged time series for the surface data for the 15 x 15 km FIFE site was prepared for the summer of 1987. Comparisons with 48-hr forecasts from the ECMWF model for extended periods in July, August, and October 1987 identified model errors in the incoming SW radiation in clear skies, the ground heat flux, the formulation of surface evaporation, the soil-moisture model, and the entrainment at boundary-layer top. The model clear-sky SW flux is too high at the surface by 5-10 percent. The ground heat flux is too large by a factor of 2 to 3 because of the large thermal capacity of the first soil layer (which is 7 cm thick), and a time truncation error. The surface evaporation was near zero in October 1987, rather than of order 70 W/sq m at noon. The surface evaporation falls too rapidly after rainfall, with a time-scale of a few days rather than the 7-10 days (or more) of the observations. On time-scales of more than a few days the specified 'climate layer' soil moisture, rather than the storage of precipitation, exerts a large control on the evapotranspiration. The boundary-layer-top entrainment is too low. This results in a moist bias in the boundary-layer mixing ratio of order 2 g/kg in forecasts from an experimental analysis with nearly realistic surface fluxes; this is because there is insufficient downward mixing of dry air.

  15. Theoretical and experimental errors for in situ measurements of plant water potential.

    PubMed

    Shackel, K A

    1984-07-01

    Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (-0.6 to -1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design.

  16. Theoretical and Experimental Errors for In Situ Measurements of Plant Water Potential 1

    PubMed Central

    Shackel, Kenneth A.

    1984-01-01

    Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (−0.6 to −1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design. PMID:16663701

  17. The Precise Orbit and the Challenge of Long Term Stability

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Cerri, Luca; Otten, Michiel; Bertiger, William; Zelensky, Nikita; Willis, Pascal

    2012-01-01

    The computation of a precise orbit reference is a fundamental component of the altimetric measurement. Since the dawn of the modern altimeter age, orbit accuracy has been determined by the quality of the GPS, SLR, and DORIS tracking systems, the fidelity of the measurement and force models, the choice of parameterization for the orbit solutions, and whether a dynamic or a reduced-dynamic strategy is used to calculate the orbits. At the start of the TOPEX mission, the inaccuracies in the modeling of static gravity, dynamic ocean tides, and the nonconservative forces dominated the orbit error budget. Much of the error due to dynamic mismodeling can be compensated by reduced-dynamic tracking techniques depending on the measurement system strength. In the last decade, the launch of the GRACE mission has eliminated the static gravity field as a concern, and the background force models and the terrestrial reference frame have been systematically refined. GPS systems have realized many improvements, including better modeling of the forces on the GPS spacecraft, large increases in the ground tracking network, and improved modeling of the GPS measurements. DORIS systems have achieved improvements through the use of new antennae, more stable monumentation, and satellite receivers that can track multiple beacons, as well as through improved modeling of the nonconservative forces. Many of these improvements have been applied in the new reprocessed time series of orbits produced for the ERS satellites, Envisat, TOPEX/Poseidon and the Jason satellites, as well as for the most recent Cryosat-2 and HY2A. We now face the challenge of maintaining a stable orbit reference for these altimetric satellites. Changes in the time-variable gravity field of the Earth and how these are modelled have been shown to affect the orbit evolution, and the calibration of the altimetric data with tide gauges. The accuracy of the reference frame realizations, and their projection into the future, remains a source of error. Other sources of omission error include the geocenter, for which no consensus model has yet been applied. Although progress has been made in nonconservative force modeling through the use of detailed satellite-specific models, radiation pressure modeling and atmospheric density modeling remain a potential source of orbit error. The longer term influence of variations in the solar and terrestrial radiation fields over annual and solar cycles remains principally untested. Also, the long-term variation in the optical and thermal properties of the space vehicle surfaces would contribute to biases in the orbital frame if ignored. We review the status of altimetric precision orbit determination as exemplified by the recent computations undertaken by the different analysis centers for ERS, Envisat, TOPEX/Poseidon, Jason, Cryosat-2 and HY2A, and we provide a perspective on the challenges for future missions such as Jason-3, SENTINEL-3 and SWOT.

  18. Uncooled Thermal Camera Calibration and Optimization of the Photogrammetry Process for UAV Applications in Agriculture

    PubMed Central

    Ballesteros, Rocío

    2017-01-01

    The acquisition, processing, and interpretation of thermal images from unmanned aerial vehicles (UAVs) are becoming a useful source of information for agronomic applications because of the higher temporal and spatial resolution of these products compared with those obtained from satellites. However, due to the low payload capacity of UAVs, they need to mount light, uncooled thermal cameras, in which the microbolometer is not stabilized to a constant temperature. This makes the camera precision low for many applications. Additionally, the low contrast of the thermal images makes the photogrammetry process inaccurate, which results in large errors in the generation of orthoimages. In this research, we propose the use of new calibration algorithms, based on neural networks, which consider the sensor temperature and the digital response of the microbolometer as input data. In addition, we evaluate the use of the Wallis filter for improving the quality of the photogrammetry process using structure from motion software. With the proposed calibration algorithm, the measurement accuracy increased from 3.55 °C with the original camera configuration to 1.37 °C. The implementation of the Wallis filter increases the number of tie-points from 58,000 to 110,000 and decreases the total positioning error from 7.1 m to 1.3 m. PMID:28946606

  19. Uncooled Thermal Camera Calibration and Optimization of the Photogrammetry Process for UAV Applications in Agriculture.

    PubMed

    Ribeiro-Gomes, Krishna; Hernández-López, David; Ortega, José F; Ballesteros, Rocío; Poblete, Tomás; Moreno, Miguel A

    2017-09-23

    The acquisition, processing, and interpretation of thermal images from unmanned aerial vehicles (UAVs) are becoming a useful source of information for agronomic applications because of the higher temporal and spatial resolution of these products compared with those obtained from satellites. However, due to the low payload capacity of UAVs, they need to mount light, uncooled thermal cameras, in which the microbolometer is not stabilized to a constant temperature. This makes the camera precision low for many applications. Additionally, the low contrast of the thermal images makes the photogrammetry process inaccurate, which results in large errors in the generation of orthoimages. In this research, we propose the use of new calibration algorithms, based on neural networks, which consider the sensor temperature and the digital response of the microbolometer as input data. In addition, we evaluate the use of the Wallis filter for improving the quality of the photogrammetry process using structure from motion software. With the proposed calibration algorithm, the measurement accuracy increased from 3.55 °C with the original camera configuration to 1.37 °C. The implementation of the Wallis filter increases the number of tie-points from 58,000 to 110,000 and decreases the total positioning error from 7.1 m to 1.3 m.
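    A minimal sketch of this style of calibration, with a small neural network mapping microbolometer counts and camera sensor temperature to target temperature; the training data are synthetic and a generic MLP stands in for the authors' algorithms:

    ```python
    # Sketch: NN calibration of an uncooled thermal camera against blackbody references.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(4)
    dn = rng.uniform(7000, 9000, 1000)            # microbolometer digital response (synthetic)
    t_sensor = rng.uniform(15, 45, 1000)          # camera sensor temperature, degC (synthetic)
    t_target = 0.02 * dn - 0.5 * t_sensor + rng.normal(0, 0.5, 1000)   # toy reference temperature

    X = np.column_stack([dn, t_sensor])
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0))
    model.fit(X, t_target)
    pred = model.predict(X)
    print("RMSE (degC):", np.sqrt(np.mean((pred - t_target) ** 2)))
    ```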

  20. Comparison of a single-view and a double-view aerosol optical depth retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Henderson, Bradley G.; Chylek, Petr

    2003-11-01

    We compare the results of a single-view and a double-view aerosol optical depth (AOD) retrieval algorithm applied to image pairs acquired over NASA Stennis Space Center, Mississippi. The image data were acquired by the Department of Energy's (DOE) Multispectral Thermal Imager (MTI), a pushbroom satellite imager with 15 bands from the visible to the thermal infrared. MTI has the ability to acquire imagery in pairs in which the first image is a near-nadir view and the second image is off-nadir with a zenith angle of approximately 60°. A total of 15 image pairs were used in the analysis. For a given image pair, AOD retrieval is performed twice---once using a single-view algorithm applied to the near-nadir image, then again using a double-view algorithm. Errors for both retrievals are computed by comparing the results to AERONET AOD measurements obtained at the same time and place. The single-view algorithm showed an RMS error about the mean of 0.076 in AOD units, whereas the double-view algorithm showed a modest improvement with an RMS error of 0.06. The single-view errors show a positive bias which is presumed to be a result of the empirical relationship used to determine ground reflectance in the visible. A plot of AOD error of the double-view algorithm versus time shows a noticeable trend which is interpreted to be a calibration drift. When this trend is removed, the RMS error of the double-view algorithm drops to 0.030. The single-view algorithm qualitatively appears to perform better during the spring and summer whereas the double-view algorithm seems to be less sensitive to season.
