Actualities and Development of Heavy-Duty CNC Machine Tool Thermal Error Monitoring Technology
NASA Astrophysics Data System (ADS)
Zhou, Zu-De; Gui, Lin; Tan, Yue-Gang; Liu, Ming-Yao; Liu, Yi; Li, Rui-Ya
2017-09-01
Thermal error monitoring technology is the key technological support for solving the thermal error problem of heavy-duty CNC (computer numerical control) machine tools. There are many review articles on the thermal error research of CNC machine tools, but they focus mainly on thermal issues in small and medium-sized CNC machine tools and seldom cover thermal error monitoring technologies. This paper gives an overview of research on the thermal error of CNC machine tools and emphasizes the study of thermal error in heavy-duty CNC machine tools in three areas: the causes of thermal error in heavy-duty CNC machine tools, temperature monitoring technology, and thermal deformation monitoring technology. A new optical measurement technology for heavy-duty CNC machine tools, called "fiber Bragg grating (FBG) distributed sensing technology", is introduced in detail; it forms the basis of an intelligent sensing and monitoring system for heavy-duty CNC machine tools. This paper fills a gap in the review literature, provides guidance for the development of this industrial field, and opens up new areas of research on heavy-duty CNC machine tool thermal error.
Thermal Error Test and Intelligent Modeling Research on the Spindle of High Speed CNC Machine Tools
NASA Astrophysics Data System (ADS)
Luo, Zhonghui; Peng, Bin; Xiao, Qijun; Bai, Lu
2018-03-01
Thermal error is the main factor affecting the accuracy of precision machining. In line with current research focuses on machine tool thermal error, this paper studies thermal error testing and intelligent modeling for the spindle of vertical high-speed CNC machine tools through experiments. Several thermal error test setups are designed, in which 7 temperature sensors measure the temperature of the machine tool spindle system and 2 displacement sensors detect the thermal error displacement. A thermal error compensation model with good inverse-prediction ability is established by applying principal component analysis to optimize the temperature measuring points, extracting the characteristic values closely associated with the thermal error displacement, and using artificial neural network techniques.
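To make the pipeline concrete, the following is a minimal illustrative sketch (not the authors' code) of the same idea in Python: principal component analysis compresses correlated temperature channels, and a small neural network maps the scores to spindle displacement. All data, sensor counts, time constants and coefficients below are synthetic assumptions.

```python
# Illustrative sketch: PCA-compressed temperature channels -> neural-network thermal error model.
# Synthetic data stand in for the 7 thermocouples and 2 displacement probes described above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 400)                                   # hours of spindle running
temps = np.stack([20 + a * (1 - np.exp(-t / tau)) + rng.normal(0, 0.05, t.size)
                  for a, tau in zip([8, 6, 5, 4, 3, 2, 1.5],
                                    [0.5, 0.6, 0.8, 1.0, 1.2, 1.5, 2.0])],
                 axis=1)                                      # 7 temperature channels [degC]
drift = 0.004 * (temps[:, 0] - 20) + 0.002 * (temps[:, 2] - 20)   # synthetic axial drift [mm]

pca = PCA(n_components=3).fit(temps)                          # compress correlated sensors
scores = pca.transform(temps)

X_tr, X_te, y_tr, y_te = train_test_split(scores, drift, test_size=0.3, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("residual RMS [mm]:", np.sqrt(np.mean((pred - y_te) ** 2)))
```

In practice the PCA scores (or the sensors that load most heavily on them) would be derived from real thermocouple data and the network validated against displacement probes, as in the experiment described above.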
Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool
NASA Astrophysics Data System (ADS)
Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo
2017-05-01
Aiming at the problems of low machining accuracy and uncontrolled thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation of a two-turntable five-axis machine tool are researched. Measurement experiments on heat sources and thermal errors are carried out, and the grey relational analysis (GRA) method is introduced for the selection of temperature variables used in thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an artificial neural network (ANN) model is presented, and the artificial bee colony (ABC) algorithm is introduced to train the link weights of the ANN; the resulting ABC-NN (artificial bee colony-based neural network) modeling method is used to predict spindle thermal errors. To test the prediction performance of the ABC-NN model, an experimental system is developed, and the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measured spindle thermal errors. The results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, with residual errors smaller than 3 μm, so the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.
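The grey relational analysis step can be illustrated with a short, hedged sketch. The normalization and the distinguishing coefficient rho = 0.5 follow the common textbook formulation applied pair-wise (a simplification of the multi-sequence form), and the temperature and error series below are synthetic.

```python
# Simplified pair-wise grey relational analysis for ranking temperature variables
# against a spindle thermal-error series; data are synthetic.
import numpy as np

def grey_relational_grade(x, y, rho=0.5):
    """x: candidate temperature series; y: reference thermal-error series (same length)."""
    xn = (x - x.min()) / (x.max() - x.min())          # min-max normalize both series
    yn = (y - y.min()) / (y.max() - y.min())
    delta = np.abs(yn - xn)
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coeff.mean()                                # grey relational grade

rng = np.random.default_rng(1)
t = np.linspace(0, 2, 200)
error = 25 * (1 - np.exp(-t))                          # spindle thermal error [um], synthetic
temps = {"T_bearing": 30 * (1 - np.exp(-t)) + rng.normal(0, 0.3, t.size),
         "T_motor":   20 * (1 - np.exp(-1.5 * t)) + rng.normal(0, 0.3, t.size),
         "T_ambient": 0.5 * t + rng.normal(0, 0.3, t.size)}

grades = {name: grey_relational_grade(series, error) for name, series in temps.items()}
print(sorted(grades.items(), key=lambda kv: -kv[1]))   # highest grade = most relevant sensor
```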
Error compensation for thermally induced errors on a machine tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krulewich, D.A.
1996-11-08
Heat flow from internal and external sources and from the environment creates machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The main difficulty is deciding where to place the temperature sensors and how many are required. This research develops a method to determine the number and location of temperature measurements.
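A minimal sketch of this kind of approach (an illustration under stated assumptions, not the report's specific algorithm): fit the linear temperature-to-deflection model by least squares and grow the sensor set greedily until adding sensors no longer reduces the residual. The data, the number of candidate sensors and the stopping tolerance are all assumptions.

```python
# Greedy selection of temperature sensors for a linear thermal-error model (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
n, n_sensors = 300, 10
T = rng.normal(0, 1, (n, n_sensors)).cumsum(axis=0) * 0.02 + 20     # slow thermal drifts [degC]
true_c = np.zeros(n_sensors); true_c[[1, 4, 7]] = [0.8, -0.5, 0.3]  # only 3 sensors matter
defl = T @ true_c + rng.normal(0, 0.05, n)                          # tool-point deflection [um]

chosen, remaining, prev_rms = [], list(range(n_sensors)), np.inf
while remaining:
    # pick the sensor that, added to the chosen set, gives the lowest residual
    trials = []
    for j in remaining:
        A = T[:, chosen + [j]]
        c, *_ = np.linalg.lstsq(A, defl, rcond=None)
        trials.append((np.sqrt(np.mean((A @ c - defl) ** 2)), j))
    rms, best = min(trials)
    if prev_rms - rms < 0.01:           # stop when the improvement is negligible
        break
    chosen.append(best); remaining.remove(best); prev_rms = rms

print("selected sensors:", chosen, "residual RMS:", round(prev_rms, 3))
```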
Multisensor systems today and tomorrow: Machine control, diagnosis and thermal compensation
NASA Astrophysics Data System (ADS)
Nunzio, D'Addea
2000-05-01
Multisensor techniques for the control of a tribology test rig and for the diagnosis and thermal error compensation of machine tools are the starting point for some considerations on the use of these techniques in fuzzy and neural-network systems. The author concludes that anticipatory systems and multisensor techniques will see substantial improvement and development in the near future, mainly in the thermal error compensation of machine tools.
The Neural-fuzzy Thermal Error Compensation Controller on CNC Machining Center
NASA Astrophysics Data System (ADS)
Tseng, Pai-Chung; Chen, Shen-Len
The geometric errors and structural thermal deformation are factors that influence the machining accuracy of a Computer Numerical Control (CNC) machining center. Therefore, researchers pay attention to thermal error compensation technologies for CNC machine tools. Some real-time error compensation techniques have been successfully demonstrated in both laboratories and industrial sites, but the compensation results still need to be improved. In this research, neural-fuzzy theory is used to derive a thermal prediction model. An IC-type thermometer detects the temperature variation of the heat sources, and the thermal drifts are measured online by a touch-triggered probe with a standard bar. A thermal prediction model is then derived by neural-fuzzy theory based on the temperature variation and the thermal drifts. A Graphic User Interface (GUI) system is also built with Inprise C++ Builder to provide a user-friendly operation interface. The experimental results show that the thermal prediction model developed with the neural-fuzzy methodology can improve machining accuracy from 80 µm to 3 µm. Compared with multi-variable linear regression analysis, the compensation accuracy is improved from ±10 µm to ±3 µm.
Experiments and simulation of thermal behaviors of the dual-drive servo feed system
NASA Astrophysics Data System (ADS)
Yang, Jun; Mei, Xuesong; Feng, Bin; Zhao, Liang; Ma, Chi; Shi, Hu
2015-01-01
A machine tool equipped with a dual-drive servo feed system can realize high feed speed as well as high precision. Currently, there is no report on the thermal behaviors of dual-drive machines, and research on the thermal characteristics of machines mainly focuses on steady-state simulation. To explore the influence of thermal characteristics on the precision of a jib boring machine fitted with a dual-drive feed system, thermal equilibrium tests and research on transient thermo-mechanical behaviors are carried out. A laser interferometer, infrared thermography and a temperature-displacement acquisition system are applied to measure the temperature distribution and thermal deformation at different feed speeds. Subsequently, the finite element method (FEM) is used to analyze the transient thermal behaviors of the boring machine. The complex boundary conditions, such as heat sources and convective heat transfer coefficients, are calculated. Finally, transient variations in temperatures and deformations are compared with the measured values; the errors between measurement and simulation of the temperature and the thermal error are 2 °C and 2.5 μm, respectively. The results demonstrate that the FEM model can predict the thermal error and temperature distribution very well under the specified operating conditions. Moreover, the uneven temperature gradient caused by the asynchronous dual-drive structure results in thermal deformation. Additionally, the positioning accuracy decreases as the measured point moves further away from the motor, and the thermal error and equilibrium period both increase with feed speed. The research proposes a systematic method to measure and simulate the transient thermal behaviors of the boring machine.
Method for automated building of spindle thermal model with use of CAE system
NASA Astrophysics Data System (ADS)
Kamenev, S. V.
2018-03-01
The spindle is one of the most important units of a metal-cutting machine tool. Its performance is critical to minimizing machining error, especially thermal error. Various methods are applied to improve the thermal behaviour of spindle units. One of the most important is mathematical modelling based on finite element analysis, most commonly realized with CAE systems. This approach, however, is not capable of addressing a number of important effects that need to be taken into consideration for proper simulation. In the present article, the authors propose a solution that overcomes these disadvantages by automating the building of the spindle unit thermal model in the CAE system ANSYS.
NASA Astrophysics Data System (ADS)
Groppi, Christopher E.; Underhill, Matthew; Farkas, Zoltan; Pelham, Daniel
2016-07-01
We present the fabrication and measurement of monolithic aluminum flat mirrors designed to operate in the thermal infrared for the OSIRIS-REx Thermal Emission Spectrometer (OTES) space instrument. The mirrors were cut using a conventional fly cutter with a large-radius diamond cutting tool on a high-precision Kern Evo 3-axis CNC milling machine, and were measured to have less than 150 angstroms RMS surface error.
NASA Astrophysics Data System (ADS)
Abellán-Nebot, J. V.; Liu, J.; Romero, F.
2009-11-01
The State Space modelling approach has been recently proposed as an engineering-driven technique for part quality prediction in Multistage Machining Processes (MMP). Current State Space models incorporate fixture and datum variations in the multi-stage variation propagation, without explicitly considering common operation variations such as machine-tool thermal distortions, cutting-tool wear, cutting-tool deflections, etc. This paper shows the limitations of the current State Space model through an experimental case study where the effects of spindle thermal expansion, cutting-tool flank wear and locator errors are introduced. The paper also discusses the extension of the current State Space model to include operation variations and its potential benefits.
NASA Astrophysics Data System (ADS)
Chevrié, Mathieu; Farges, Christophe; Sabatier, Jocelyn; Guillemard, Franck; Pradere, Laetitia
2017-04-01
In the automotive field, reducing the dimensions of electric conductors is important to decrease embedded mass and manufacturing costs. It is thus essential to develop tools to optimize wire diameter according to thermal constraints, together with protection algorithms that maintain a high level of safety. Developing such tools and algorithms requires accurate electro-thermal models of electric wires. However, the solutions of the thermal equation lead to implicit fractional transfer functions involving an exponential that cannot be embedded in an automotive electronic control unit. This paper therefore proposes an integer-order transfer function approximation methodology, based on a spatial discretization, for this class of fractional transfer functions; the H2 norm is used to minimize the approximation error. The accuracy of the proposed approach is confirmed with data measured on a 1.5 mm2 wire installed in a dedicated test bench.
Miller, Donald M.
1978-01-01
A micromachining tool system with X- and omega-axes is used to machine spherical, aspherical, and irregular surfaces with a maximum contour error of 100 nanometers (nm) and surface waviness of no more than 0.8 nm RMS. The omega axis, named for the angular measurement of the rotation of an eccentric mechanism supporting one end of a tool bar, enables the pulse increments of the tool toward the workpiece to be as little as 0 to 4.4 nm. A dedicated computer coordinates motion in the two axes to produce the workpiece contour. Inertia is reduced by reducing the mass pulsed toward the workpiece to about one-fifth of its former value. The tool system includes calibration instruments. Backlash is reduced and flexing decreased by using a rotary table and servomotor to pulse the tool in the omega-axis instead of a ball screw mechanism. A thermally-stabilized spindle rotates the workpiece and is driven, through a torque-smoothing pulley and vibrationless rotary coupling, by a motor not mounted on the micromachining tool base. Abbe offset errors are almost eliminated by tool setting and calibration at spindle center height. Tool contour and workpiece contour are gaged on the machine; this enables the source of machining errors to be determined more readily, because the workpiece is gaged before its shape can be changed by removal from the machine.
Investigation of approximate models of experimental temperature characteristics of machines
NASA Astrophysics Data System (ADS)
Parfenov, I. V.; Polyakov, A. N.
2018-05-01
This work investigates various approaches to approximating experimental data and creating simulation mathematical models of thermal processes in machines, with the aim of reducing the duration of field tests and the thermal error of machining operations. The main research methods used in this work are: full-scale thermal testing of machines; approximation of the experimental temperature characteristics of machine tools by polynomial models using various approaches; and analysis and evaluation of the modelling results (model quality) for the temperature characteristics of machines and their time derivatives up to the third order. As a result of the performed research, rational methods, types, parameters and complexity of simulation mathematical models of thermal processes in machine tools are proposed.
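As a simple illustration of polynomial approximation of a temperature characteristic and of inspecting its time derivatives, the sketch below fits polynomials of several degrees to a synthetic heating curve; the curve, degrees and figures of merit are assumptions, not the authors' data.

```python
# Polynomial approximation of a (synthetic) temperature characteristic and its third time derivative.
import numpy as np

t = np.linspace(0, 5, 60)                        # hours
temp = 22 + 12 * (1 - np.exp(-t / 1.4))          # synthetic exponential heating curve [degC]

for deg in (2, 3, 4, 5):
    p = np.polynomial.Polynomial.fit(t, temp, deg)
    rms = np.sqrt(np.mean((p(t) - temp) ** 2))   # fit quality
    d3 = p.deriv(3)                              # third time derivative of the model
    print(f"degree {deg}: RMS error {rms:.4f} degC, max |d3T/dt3| {np.max(np.abs(d3(t))):.3f}")
```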
Lo, Yuan-Chieh; Hu, Yuh-Chung; Chang, Pei-Zen
2018-02-23
Thermal characteristic analysis is essential for machine tool spindles because sudden failures may occur due to unexpected thermal issues. This article presents a lumped-parameter Thermal Network Model (TNM) and its parameter estimation scheme, including hardware and software, to characterize both the steady-state and transient thermal behavior of machine tool spindles. For the hardware, the authors develop a Bluetooth Temperature Sensor Module (BTSM) accompanied by three types of temperature-sensing probes (magnetic, screw, and probe). Experimental tests show that it achieves a precision of ±(0.1 + 0.0029|t|) °C, a resolution of 0.00489 °C, a power consumption of 7 mW, and a size of Ø40 mm × 27 mm. For the software, the heat transfer characteristics of the machine tool spindle as a function of rotating speed are derived from heat transfer theory and empirical formulae. The predictive TNM of the spindle was developed by grey-box estimation and experimental results. Even under complicated operating conditions such as various speeds and different initial conditions, the experiments validate that the present modeling methodology provides a robust and reliable tool for temperature prediction, with a normalized-mean-square-error agreement of 99.5%, and the approach is transferable to other spindles with a similar structure. To realize edge computing in smart manufacturing, a reduced-order TNM is constructed by a Model Order Reduction (MOR) technique and implemented in a real-time embedded system.
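A lumped-parameter thermal network of the kind described can be sketched in a few lines; the two-node structure, resistances, capacitances and the speed-dependent heat-generation law below are illustrative assumptions rather than parameters identified from a real spindle.

```python
# Two-node lumped thermal network sketch: bearing and housing nodes with
# speed-dependent heat generation, coupled to each other and to ambient.
import numpy as np
from scipy.integrate import solve_ivp

C = np.array([350.0, 900.0])        # heat capacities [J/K]: bearing, housing (assumed)
R_bh, R_ha = 0.8, 1.5               # thermal resistances [K/W]: bearing-housing, housing-ambient
T_amb = 22.0

def heat_gen(speed_rpm):            # crude speed-dependent bearing friction loss [W]
    return 3.0 + 0.0025 * speed_rpm

def rhs(t, T, speed_rpm):
    q_bh = (T[0] - T[1]) / R_bh     # bearing -> housing heat flow
    q_ha = (T[1] - T_amb) / R_ha    # housing -> ambient heat flow
    return [(heat_gen(speed_rpm) - q_bh) / C[0],
            (q_bh - q_ha) / C[1]]

sol = solve_ivp(rhs, (0, 3600 * 3), [T_amb, T_amb], args=(6000,), max_step=30.0)
print("temperatures after 3 h at 6000 rpm [degC]:", sol.y[:, -1].round(2))
```

In a grey-box setting such as the one described above, the resistances, capacitances and loss coefficients would be fitted to measured temperature histories rather than assumed.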
On-Line, Self-Learning, Predictive Tool for Determining Payload Thermal Response
NASA Technical Reports Server (NTRS)
Jen, Chian-Li; Tilwick, Leon
2000-01-01
This paper presents the results of a joint ManTech/Goddard R&D effort, currently under way, to develop and test a computer-based, on-line, predictive simulation model for use by facility operators to predict the thermal response of a payload during thermal vacuum testing. Thermal response was identified as an area that could benefit from the algorithms developed by Dr. Jen for complex computer simulations. Most thermal vacuum test setups are unique, since no two payloads have the same thermal properties. Operators therefore depend on past experience to conduct a test, which requires time to learn how the payload responds while limiting any risk of exceeding hot or cold temperature limits. The predictive tool being developed is intended to be used with the new Thermal Vacuum Data System (TVDS) developed at Goddard for the Thermal Vacuum Test Operations group. The model can learn the thermal response of the payload by reading a few data points from the TVDS, accepting the payload's current temperature as the initial condition for prediction. The model can then be used to estimate future payload temperatures for a predetermined shroud temperature profile. If the prediction error is too large, the model can be asked to re-learn the new situation on-line in real time and give a new prediction. Based on preliminary tests, this predictive model can forecast the payload temperature over an entire test cycle to within 5 degrees Celsius after it has learned 3 times during the beginning of the test. The tool will allow the operator to run "what-if" experiments to decide on the best shroud temperature set-point control strategy. This will save money by minimizing guesswork and optimizing transitions, as well as making the testing process safer and easier to conduct.
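A hedged sketch of the underlying idea (not Goddard's actual algorithm): learn a first-order lag between shroud and payload temperatures from the data observed so far, then integrate that model forward along a planned shroud profile. The time constant, profiles and learning rule are assumptions for illustration only.

```python
# Learn a first-order payload/shroud time constant from observed data, then predict forward.
import numpy as np

def learn_tau(t_obs, T_payload, T_shroud):
    """Least-squares estimate of tau in dT/dt = (T_shroud - T)/tau."""
    dTdt = np.gradient(T_payload, t_obs)
    drive = T_shroud - T_payload
    return float(np.sum(drive * drive) / np.sum(drive * dTdt))

def predict(T0, t_future, shroud_profile, tau):
    T, out = T0, []
    for i in range(1, len(t_future)):                 # simple forward-Euler integration
        dt = t_future[i] - t_future[i - 1]
        T += dt * (shroud_profile[i - 1] - T) / tau
        out.append(T)
    return np.array(out)

# synthetic "observed" start of a thermal-vacuum cold soak
t_obs = np.linspace(0, 2, 50)                         # hours
T_shroud_obs = np.full_like(t_obs, -60.0)
T_payload_obs = 20 + (-60 - 20) * (1 - np.exp(-t_obs / 3.0))
tau = learn_tau(t_obs, T_payload_obs, T_shroud_obs)

t_fut = np.linspace(2, 12, 200)
pred = predict(T_payload_obs[-1], t_fut, np.full_like(t_fut, -60.0), tau)
print(f"learned tau = {tau:.2f} h, predicted payload temp at t = 12 h: {pred[-1]:.1f} degC")
```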
NASA Astrophysics Data System (ADS)
Mosier, Gary E.; Femiano, Michael; Ha, Kong; Bely, Pierre Y.; Burg, Richard; Redding, David C.; Kissil, Andrew; Rakoczy, John; Craig, Larry
1998-08-01
All current concepts for the NGST are innovative designs which present unique systems-level challenges. The goals are to outperform existing observatories at a fraction of the current price/performance ratio. Standard practices for developing systems error budgets, such as the 'root-sum-of-squares' error tree, are insufficient for designs of this complexity. Simulation and optimization are the tools needed for this project; in particular, tools that integrate controls, optics, thermal and structural analysis, and design optimization. This paper describes such an environment, which allows sub-system performance specifications to be analyzed parametrically and includes optimizing metrics that capture the science requirements. The resulting systems-level design trades are greatly facilitated, and significant cost savings can be realized. This modeling environment, built around a tightly integrated combination of commercial off-the-shelf and in-house-developed codes, provides the foundation for linear and non-linear analysis in both the time and frequency domains, statistical analysis, and design optimization. It features an interactive user interface and integrated graphics that allow highly effective, real-time work to be done by multidisciplinary design teams. For the NGST, it has been applied to issues such as pointing control, dynamic isolation of spacecraft disturbances, wavefront sensing and control, on-orbit thermal stability of the optics, and development of systems-level error budgets. In this paper, results are presented from parametric trade studies that assess requirements for pointing control, structural dynamics, reaction wheel dynamic disturbances, and vibration isolation. These studies attempt to define requirements bounds such that the resulting design is optimized at the systems level, without attempting to optimize each subsystem individually. The performance metrics are defined in terms of image quality, specifically centroiding error and RMS wavefront error, which link directly to science requirements.
A General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets
NASA Technical Reports Server (NTRS)
Marchen, Luis F.; Shaklan, Stuart B.
2009-01-01
This paper describes a general purpose Coronagraph Performance Error Budget (CPEB) tool that we have developed under the NASA Exoplanet Exploration Program. The CPEB automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. It operates in 3 steps: first, a CodeV or Zemax prescription is converted into a MACOS optical prescription. Second, a Matlab program calls ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled coarse and fine-steering mirrors. Third, the sensitivity matrices are imported by macros into Excel 2007 where the error budget is created. Once created, the user specifies the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions and combines them with the sensitivity matrices to generate an error budget for the system. The user can easily modify the motion allocations to perform trade studies.
General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets
NASA Technical Reports Server (NTRS)
Marchen, Luis F.
2011-01-01
The Coronagraph Performance Error Budget (CPEB) tool automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. The tool uses a Code V prescription of the optical train, and uses MATLAB programs to call ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled fine-steering mirrors (FSMs). The sensitivity matrices are imported by macros into Excel 2007, where the error budget is evaluated. The user specifies the particular optics of interest, and chooses the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions, and combines that with the sensitivity matrices to generate an error budget for the system. CPEB also contains a combination of form and ActiveX controls with Visual Basic for Applications code to allow for user interaction in which the user can perform trade studies such as changing engineering requirements, and identifying and isolating stringent requirements. It contains summary tables and graphics that can be instantly used for reporting results in view graphs. The entire process to obtain a coronagraphic telescope performance error budget has been automated into three stages: conversion of optical prescription from Zemax or Code V to MACOS (in-house optical modeling and analysis tool), a linear models process, and an error budget tool process. The first process was improved by developing a MATLAB package based on the Class Constructor Method with a number of user-defined functions that allow the user to modify the MACOS optical prescription. The second process was modified by creating a MATLAB package that contains user-defined functions that automate the process. The user interfaces with the process by utilizing an initialization file where the user defines the parameters of the linear model computations. Other than this, the process is fully automated. The third process was developed based on the Terrestrial Planet Finder coronagraph Error Budget Tool, but was fully automated by using VBA code, form, and ActiveX controls.
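The error-budget arithmetic that such tools automate reduces, in its simplest form, to multiplying sensitivity matrices by allocated motions and combining the contributions root-sum-square. The sketch below illustrates only that arithmetic; the matrix entries, optic names and allocations are invented numbers, not CPEB data.

```python
# Toy error-budget roll-up: sensitivities x allocated motions, combined root-sum-square.
import numpy as np

# rows: optics (primary, secondary, fold); columns: dx, dy, dz, tip, tilt
sensitivity = np.array([[2e-10, 2e-10, 5e-11, 8e-10, 8e-10],
                        [1e-10, 1e-10, 3e-11, 4e-10, 4e-10],
                        [5e-11, 5e-11, 1e-11, 2e-10, 2e-10]])   # contrast per unit motion (invented)

allocations = np.array([0.5, 0.5, 1.0, 0.2, 0.2])               # allowed motions (nm, nrad; invented)

per_optic = sensitivity * allocations             # contrast contribution of each term
total_contrast = np.sqrt(np.sum(per_optic ** 2))  # root-sum-square combination
print("per-optic contributions:\n", per_optic)
print("RSS contrast estimate:", total_contrast)
```

A trade study then amounts to editing the allocation vector and re-evaluating the roll-up, which is the loop the spreadsheet-based tool automates.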
The FLIR ONE thermal imager for the assessment of burn wounds: Reliability and validity study.
Jaspers, M E H; Carrière, M E; Meij-de Vries, A; Klaessens, J H G M; van Zuijlen, P P M
2017-11-01
Objective measurement tools may be of great value for early and reliable burn wound assessment. Thermal imaging is an easy, accessible and objective technique that measures skin temperature as an indicator of tissue perfusion, and these thermal images might be helpful in the assessment of burn wounds. However, before implementation of a novel measurement tool into clinical practice is considered, it is appropriate to test its clinimetric properties (i.e. reliability and validity). The objective of this study was to assess the reliability and validity of the recently introduced FLIR ONE thermal imager. Two observers obtained thermal images of burn wounds in adult patients at days 1-3, 4-7 and 8-10 after burn. Subsequently, temperature differences between the burn wound and healthy skin (ΔT) were calculated on an iPad mini using the FLIR Tools app. To assess reliability, ΔT values of both observers were compared by calculating the intraclass correlation coefficient (ICC) and measurement error parameters. To assess validity, the ΔT values of the first observer were compared to the registered healing time of the burn wounds, which was specified into three categories: (I) ≤14 days, (II) 15-21 days and (III) >21 days. The ability of the FLIR ONE to discriminate between healing ≤21 days and >21 days was evaluated by means of a receiver operating characteristic curve and an optimal ΔT cut-off value. Reliability: ICCs were 0.99 for each time point, indicating excellent reliability up to 10 days after burn; the standard error of measurement varied between 0.17-0.22°C. Validity: the area under the curve was calculated at 0.69 (95% CI 0.54-0.84); a cut-off value of -1.15°C shows moderate discrimination between burn wound healing ≤21 days and >21 days (46% sensitivity; 82% specificity). Our results show that the FLIR ONE thermal imager is highly reliable, but the moderate validity calls for additional research. However, the FLIR ONE is eminently feasible, allowing easy and fast measurements in clinical burn practice. Copyright © 2017 Elsevier Ltd and ISBI. All rights reserved.
NASA Technical Reports Server (NTRS)
Miller, J. M.
1980-01-01
ATMOS is a Fourier transform spectrometer to measure atmospheric trace molecules over a spectral range of 2-16 microns. Assessment of the system performance of ATMOS includes evaluation of optical system errors induced by thermal and structural effects. To assess these errors, error budgets are assembled during system engineering, and line-of-sight and wavefront deformation predictions (using operational thermal and vibration environments and computer models) are subsequently compared to the error budgets. This paper discusses the thermal/structural error budgets, the modelling and analysis methods used to predict thermally and structurally induced errors, and the comparisons showing that the predictions are within the error budgets.
NASA Astrophysics Data System (ADS)
Duan, Pengfei; Lei, Wenping
2017-11-01
A number of disciplines (mechanics, structures, thermal, and optics) are needed to design and build a space camera. Separate design models are normally constructed by each discipline's CAD/CAE tools; design and analysis are conducted largely in parallel, subject to requirements levied on each discipline, and technical interaction between the disciplines is limited and infrequent. As a result, a unified view of the space camera design across discipline boundaries is not directly possible in this approach, and generating one would require a large, manual, error-prone process. A collaborative environment built on an abstract model and performance templates allows engineering data and CAD/CAE results to be shared across these discipline boundaries within a common interface, supporting rapid multivariate design and direct evaluation of optical performance under environmental loads. A small interdisciplinary engineering team from the Beijing Institute of Space Mechanics and Electricity has recently conducted a Structural/Thermal/Optical (STOP) analysis of a space camera with this collaborative environment. STOP analysis evaluates the changes in image quality that arise from structural deformations as the thermal environment of the camera changes throughout its orbit. STOP analyses were conducted for four different test conditions applied during final thermal vacuum (TVAC) testing of the payload on the ground. The STOP simulation process begins with importing an integrated CAD model of the camera geometry into the collaborative environment, within which: (1) independent thermal and structural meshes are generated; (2) the thermal mesh and relevant engineering data for material properties and thermal boundary conditions are used to compute temperature distributions at nodal points of both the thermal and structural meshes with Thermal Desktop, a COTS thermal design and analysis code; (3) thermally induced structural deformations of the camera are evaluated in Nastran, an industry-standard code for structural design and analysis; (4) thermal and structural results are imported into SigFit, another COTS tool that computes deformation and best-fit rigid-body displacements for the optical surfaces; and (5) SigFit creates a modified optical prescription that is imported into CODE V for evaluation of the optical performance impacts. The integrated STOP analysis was validated using TVAC test data. For the four TVAC tests, the relative errors between simulated and measured temperatures at the measuring points were around 5%, and in some test conditions as low as 1%. For image quality (MTF), the relative error between simulation and test was 8.3% in the worst condition and below 5% in all others. The validation demonstrates that the collaborative design and simulation environment can perform integrated STOP analysis of a space camera efficiently. Furthermore, the collaborative environment allows an interdisciplinary analysis that formerly might take several months to be completed in two or three weeks, which is well suited to concept studies in the early stages of a project.
Numerical modeling of the divided bar measurements
NASA Astrophysics Data System (ADS)
LEE, Y.; Keehm, Y.
2011-12-01
The divided-bar technique has been used to measure the thermal conductivity of rocks and fragments in heat flow studies. Though widely used, divided-bar measurements can have errors that have not yet been systematically quantified. We used an FEM and performed a series of numerical studies to evaluate various errors in divided-bar measurements and to suggest more reliable measurement techniques. A divided-bar measurement should be corrected for lateral heat loss on the sides of rock samples and for the thermal resistance at the contacts between the rock sample and the bar. We first investigated, through numerical modeling, how the size of these corrections changes with the thickness and thermal conductivity of the rock sample. When we fixed the sample thickness at 10 mm and varied thermal conductivity, the error in the measured thermal conductivity ranged from 2.02% for 1.0 W/m/K to 7.95% for 4.0 W/m/K. When we fixed thermal conductivity at 1.38 W/m/K and varied the sample thickness, the error ranged from 2.03% for the 30 mm-thick sample to 11.43% for the 5 mm-thick sample. After corrections, a variety of error analyses for divided-bar measurements were conducted numerically. The thermal conductivity of the two thin standard disks (2 mm in thickness) located at the top and bottom of the rock sample slightly affects the accuracy of thermal conductivity measurements. When the thermal conductivity of a sample is 3.0 W/m/K and that of the two standard disks is 0.2 W/m/K, the relative error in measured thermal conductivity is very small (~0.01%); however, the relative error can reach -2.29% for the same sample when the thermal conductivity of the two disks is 0.5 W/m/K. The accuracy of thermal conductivity measurements strongly depends on the thermal conductivity and thickness of the thermal compound applied to reduce thermal resistance at the contacts between the rock sample and the bar. When the thickness of the thermal compound (0.29 W/m/K) is 0.03 mm, the relative error in measured thermal conductivity is 4.01%, while the relative error can be very significant (~12.2%) if the thickness increases to 0.1 mm. We then fixed the thickness (0.03 mm) and varied the thermal conductivity of the thermal compound: the relative error with a 1.0 W/m/K compound is 1.28%, and with a 0.29 W/m/K compound it is 4.06%. Repeating this test with a different compound thickness (0.1 mm), the relative error with a 1.0 W/m/K compound is 3.93%, and with a 0.29 W/m/K compound it is 12.2%. In addition, the cell technique of Sass et al. (1971), which is widely used to measure the thermal conductivity of rock fragments, was evaluated using FEM modeling. A total of 483 isotropic and homogeneous spherical rock fragments in the sample holder were used to test numerically the accuracy of the cell technique; the result shows a relative error of -9.61% for rock fragments with a thermal conductivity of 2.5 W/m/K. In conclusion, we report quantified errors in the divided-bar and cell techniques for thermal conductivity measurements of rocks and fragments. We found that FEM modeling can accurately mimic these measurement techniques and can help us estimate measurement errors quantitatively.
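The effect of the contact/compound layers can be illustrated with a back-of-envelope series-resistance estimate; the numbers differ from the FEM results quoted above because this sketch ignores the bar geometry, lateral losses and the corrections applied in practice.

```python
# Series thermal resistance sketch: apparent sample conductivity when the
# two contact-compound layers are ignored in the interpretation.
def apparent_conductivity(k_sample, L_sample, k_layer, L_layer):
    R_total = L_sample / k_sample + 2 * L_layer / k_layer   # resistance per unit area [m^2 K/W]
    return L_sample / R_total                                # conductivity inferred from total drop

for L_layer_mm in (0.03, 0.1):
    k_app = apparent_conductivity(k_sample=3.0, L_sample=10e-3,
                                  k_layer=0.29, L_layer=L_layer_mm * 1e-3)
    print(f"compound {L_layer_mm} mm: apparent k = {k_app:.3f} W/m/K "
          f"({100 * (k_app - 3.0) / 3.0:+.1f}% error)")
```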
Matsuki, Kosuke; Narumi, Ryuta; Azuma, Takashi; Yoshinaka, Kiyoshi; Sasaki, Akira; Okita, Kohei; Takagi, Shu; Matsumoto, Yoichiro
2013-01-01
To improve the throughput of high intensity focused ultrasound (HIFU) treatment, we have considered a method of switching the focus between two points. For this method, it is necessary to evaluate the thermal distribution under exposure to ultrasound. The thermal distribution was measured using a prototype thin-film thermocouple array, which has the advantage of minimizing the influence of the thermocouple on the acoustic and temperature fields. Focus switching was employed to enlarge the area of temperature increase and to evaluate the proposed parameters with respect to safety and uniformity. The results indicate that focus switching can effectively expand the thermal lesion while maintaining a steep thermal boundary. In addition, the influence caused by the thin-film thermocouple array was estimated experimentally. This thermocouple was demonstrated to be an effective tool for the measurement of temperature distributions induced by HIFU.
NASA Technical Reports Server (NTRS)
Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.; Parrish, Keith A.; McGinnis, Mark A.; Bluth, Marcel; Kim, Kevin; Ha, Kong Q.
2004-01-01
The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2011. This is a continuation of a series of papers on modeling activities for JWST. The structural-thermal-optical, often referred to as STOP, analysis process is used to predict the effect of thermal distortion on optical performance. The benchmark STOP analysis for JWST assesses the effect of an observatory slew on wavefront error. Temperatures predicted using geometric and thermal math models are mapped to a structural finite element model in order to predict thermally induced deformations. Motions and deformations at optical surfaces are then input to optical models, and optical performance is predicted using either an optical ray trace or a linear optical analysis tool. In addition to baseline performance predictions, a process for performing sensitivity studies to assess modeling uncertainties is described.
One-step random mutagenesis by error-prone rolling circle amplification
Fujii, Ryota; Kitaoka, Motomitsu; Hayashi, Kiyoshi
2004-01-01
In vitro random mutagenesis is a powerful tool for altering properties of enzymes. We describe here a novel random mutagenesis method using rolling circle amplification, named error-prone RCA. This method consists of only one DNA amplification step followed by transformation of the host strain, without treatment with any restriction enzymes or DNA ligases, and results in a randomly mutated plasmid library with 3–4 mutations per kilobase. Specific primers or special equipment, such as a thermal-cycler, are not required. This method permits rapid preparation of randomly mutated plasmid libraries, enabling random mutagenesis to become a more commonly used technique. PMID:15507684
Thermal error analysis and compensation for digital image/volume correlation
NASA Astrophysics Data System (ADS)
Pan, Bing
2018-02-01
Digital image/volume correlation (DIC/DVC) rely on the digital images acquired by digital cameras and x-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems, whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating effect or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermal-induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, thus its originality is not claimed. Instead, this paper aims to give a comprehensive overview and more insights of our work on thermal error analysis and compensation for DIC/DVC measurements.
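The reference-sample compensation idea can be sketched simply: displacements measured on a stationary, nominally strain-free reference in the field of view are attributed to the imaging system's thermal drift and subtracted from the specimen measurement. The drift model and arrays below are synthetic stand-ins for DIC output.

```python
# Reference-sample compensation of thermal drift in a (synthetic) DIC u-displacement field.
import numpy as np

rng = np.random.default_rng(3)
shape = (50, 50)
thermal_drift = 0.15 + 0.002 * np.arange(shape[1])        # px: shift + expansion across the image
true_deformation = 0.05 * rng.random(shape)               # px: real specimen motion

u_specimen = true_deformation + thermal_drift + rng.normal(0, 0.01, shape)        # DIC on specimen
u_reference = np.broadcast_to(thermal_drift, shape) + rng.normal(0, 0.01, shape)  # DIC on reference

u_corrected = u_specimen - u_reference.mean(axis=0)       # remove the common thermal error
print("residual error RMS [px]:",
      np.sqrt(np.mean((u_corrected - true_deformation) ** 2)))
```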
Optical analysis and thermal management of 2-cell strings linear concentrating photovoltaic system
NASA Astrophysics Data System (ADS)
Reddy, K. S.; Kamnapure, Nikhilesh R.
2015-09-01
This paper presents the optical and thermal analyses of a linear concentrating photovoltaic/thermal collector under different operating conditions. The linear concentrating photovoltaic (CPV) system consists of a highly reflective mirror, a receiver and a semi-dual-axis tracking mechanism. The CPV receiver carries two strings of triple-junction cells (100 cells in each string) adhered to a mild steel circular tube mounted at the focus of the trough. The system provides 560 W of electricity and 1580 W of heat, which needs to be dissipated by active cooling; an Al2O3/water nanofluid flowing through the circular receiver is used as the heat transfer fluid (HTF) for CPV cell cooling. Optical analysis of the linear CPV system with a 3.35 m2 aperture and a geometric concentration ratio (CR) of 35 is carried out using the Advanced System Analysis Program (ASAP), an optical simulation tool. A non-uniform intensity distribution model of the solar disk is used to model the sun in ASAP. The impact of random errors, including slope error (σslope), tracking error (σtrack) and the apparent change in the sun's width (σsun), on the optical performance of the collector is shown. The optical simulations give an optical efficiency (ηo) of 88.32% for the 2-cell-string CPV concentrator. Thermal analysis of the CPV receiver is carried out with conjugate heat transfer modeling in ANSYS FLUENT-14. Numerical simulations of Al2O3/water nanofluid turbulent forced convection are performed for various parameters such as nanoparticle volume fraction (φ) and Reynolds number (Re). The addition of nanoparticles to water enhances heat transfer by 3.28%-35.6% for φ = 1%-6%. The numerical results are compared with literature data and show reasonable agreement.
Evaluation of algorithms for geological thermal-inertia mapping
NASA Technical Reports Server (NTRS)
Miller, S. H.; Watson, K.
1977-01-01
The errors incurred in producing a thermal inertia map are of three general types: measurement, analysis, and model simplification. To emphasize the geophysical relevance of these errors, they were expressed in terms of uncertainty in thermal inertia and compared with the thermal inertia values of geologic materials. Thus the applications and practical limitations of the technique were illustrated. All errors were calculated using the parameter values appropriate to a site at the Raft River, Id. Although these error values serve to illustrate the magnitudes that can be expected from the three general types of errors, extrapolation to other sites should be done using parameter values particular to the area. Three surface temperature algorithms were evaluated: linear Fourier series, finite difference, and Laplace transform. In terms of resulting errors in thermal inertia, the Laplace transform method is the most accurate (260 TIU), the forward finite difference method is intermediate (300 TIU), and the linear Fourier series method the least accurate (460 TIU).
Corrigendum to "Thermophysical properties of U3Si2 to 1773 K"
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Joshua Taylor; Nelson, Andrew Thomas; Dunwoody, John Tyler
2016-12-01
An error was discovered by the authors in the calculation of thermal diffusivity in "Thermophysical properties of U3Si2 to 1773 K". The error was caused by incorrect entry of the parameters used to fit the temperature-rise-versus-time model needed to calculate the thermal diffusivity. This error propagated to the calculation of thermal conductivity, leading to values that were 18%-28% larger, along with the corresponding calculated Lorenz values.
Accuracy control in Monte Carlo radiative calculations
NASA Technical Reports Server (NTRS)
Almazan, P. Planas
1993-01-01
The general accuracy law governing the Monte Carlo ray-tracing algorithms commonly used to calculate radiative entities in the thermal analysis of spacecraft is presented. These entities involve the transfer of radiative energy either from a single source to a target (e.g., configuration factors) or from several sources to a target (e.g., absorbed heat fluxes); in fact, the former is just a particular case of the latter. The accuracy model is then applied to the calculation of some specific radiative entities, and some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered in the discussion, similar results can be derived for the absolute error.
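The 1/sqrt(N) behaviour that underlies such accuracy laws can be demonstrated with a toy estimate: a Monte Carlo configuration factor from a diffuse point emitter to a coaxial disc, together with the relative standard error of the estimate. The geometry and ray counts are illustrative assumptions.

```python
# Monte Carlo configuration-factor estimate and its relative standard error vs. ray count.
import numpy as np

rng = np.random.default_rng(4)

def hit_fraction(n_rays, half_angle_deg=30.0):
    # diffuse (cosine-weighted) ray directions over the hemisphere
    u1 = rng.random(n_rays)
    theta = np.arcsin(np.sqrt(u1))                  # cosine-weighted polar angle
    hits = theta < np.radians(half_angle_deg)       # target disc subtends a cone of this half-angle
    p = hits.mean()
    rel_err = np.sqrt((1 - p) / (p * n_rays))       # relative standard error of a Bernoulli mean
    return p, rel_err

for n in (1_000, 10_000, 100_000):
    p, e = hit_fraction(n)
    print(f"N={n:>7}: configuration factor ~ {p:.4f}, relative error ~ {100 * e:.2f}%")
```

For this geometry the analytic value is sin^2(30 deg) = 0.25, so the printout also shows the estimate converging as the ray count grows.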
Hart, Robert; Goudey, Howdy; Curcija, D. Charlie
2017-05-16
Virtually every home in the US has some form of shades, blinds, drapes, or other window attachment, but few have been designed for energy savings. In order to provide a common basis of comparison for thermal performance it is important to have validated simulation tools. This study outlines a review and validation of the ISO 15099 centre-of-glass thermal transmittance correlations for naturally ventilated cavities through measurement and detailed simulations. The focus is on the impacts of room-side ventilated cavities, such as those found with solar screens and horizontal louvred blinds. The thermal transmittance of these systems is measured experimentally, simulated using computational fluid dynamics analysis, and simulated utilizing simplified correlations from ISO 15099. Finally, correlation coefficients are proposed for the ISO 15099 algorithm that reduce the mean error between measured and simulated heat flux for typical solar screens from 16% to 3.5% and from 13% to 1% for horizontal blinds.
A Nonlinear Adaptive Filter for Gyro Thermal Bias Error Cancellation
NASA Technical Reports Server (NTRS)
Galante, Joseph M.; Sanner, Robert M.
2012-01-01
Deterministic errors in angular rate gyros, such as thermal biases, can have a significant impact on spacecraft attitude knowledge. In particular, thermal biases are often the dominant error source in MEMS gyros after calibration. Filters such as MEKFs are commonly used to mitigate the impact of gyro errors and gyro noise on spacecraft closed-loop pointing accuracy, but they often have difficulty in rapidly changing thermal environments and can be computationally expensive. In this report an existing nonlinear adaptive filter is used as the basis for a new nonlinear adaptive filter designed to estimate and cancel thermal bias effects. A description of the filter is presented along with an implementation suitable for discrete-time applications. A simulation analysis demonstrates the performance of the filter in the presence of noisy measurements and provides a comparison with existing techniques.
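As a much simpler stand-in for the paper's nonlinear adaptive filter, the sketch below illustrates the basic idea of estimating and cancelling a temperature-dependent bias: the difference between the gyro rate and a rate derived from an external attitude reference is treated as a noisy bias observation and fitted recursively to an affine-in-temperature model. All signals, noise levels and coefficients are synthetic assumptions.

```python
# Recursive least-squares fit of an affine-in-temperature gyro bias model (single axis, synthetic).
import numpy as np

rng = np.random.default_rng(5)
dt, n = 0.1, 6000
T = 20 + 15 * np.sin(2 * np.pi * np.arange(n) * dt / 300)     # thermal cycling [degC]
true_bias = 0.002 + 0.0004 * (T - 20)                          # rad/s, temperature dependent
omega_true = 0.01 * np.sin(0.05 * np.arange(n) * dt)
omega_meas = omega_true + true_bias + rng.normal(0, 2e-4, n)   # gyro output
omega_ref = omega_true + rng.normal(0, 5e-4, n)                # rate from external attitude reference

a, P = np.zeros(2), np.eye(2) * 1e3                            # RLS state: coefficients and covariance
for k in range(n):
    phi = np.array([1.0, T[k] - 20])                           # regressor for bias = a0 + a1*(T-20)
    y = omega_meas[k] - omega_ref[k]                           # noisy bias observation
    K = P @ phi / (1.0 + phi @ P @ phi)
    a += K * (y - phi @ a)
    P -= np.outer(K, phi) @ P

bias_hat = a[0] + a[1] * (T - 20)
print("estimated [a0, a1]:", a.round(5), " true: [0.002, 0.0004]")
print("residual bias RMS [rad/s]:", np.sqrt(np.mean((bias_hat - true_bias) ** 2)).round(6))
```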
Analysis tool and methodology design for electronic vibration stress understanding and prediction
NASA Astrophysics Data System (ADS)
Hsieh, Sheng-Jen; Crane, Robert L.; Sathish, Shamachary
2005-03-01
The objectives of this research were to (1) understand the impact of vibration on electronic components under ultrasound excitation; (2) model the thermal profile presented under vibration stress; and (3) predict stress level given a thermal profile of an electronic component. Research tasks included: (1) retrofit of current ultrasonic/infrared nondestructive testing system with sensory devices for temperature readings; (2) design of software tool to process images acquired from the ultrasonic/infrared system; (3) developing hypotheses and conducting experiments; and (4) modeling and evaluation of electronic vibration stress levels using a neural network model. Results suggest that (1) an ultrasonic/infrared system can be used to mimic short burst high vibration loads for electronics components; (2) temperature readings for electronic components under vibration stress are consistent and repeatable; (3) as stress load and excitation time increase, temperature differences also increase; (4) components that are subjected to a relatively high pre-stress load, followed by a normal operating load, have a higher heating rate and lower cooling rate. These findings are based on grayscale changes in images captured during experimentation. Discriminating variables and a neural network model were designed to predict stress levels given temperature and/or grayscale readings. Preliminary results suggest a 15.3% error when using grayscale change rate and 12.8% error when using average heating rate within the neural network model. Data were obtained from a high stress point (the corner) of the chip.
NASA Astrophysics Data System (ADS)
Nasr, M.; Anwar, S.; El-Tamimi, A.; Pervaiz, S.
2018-04-01
Titanium and its alloys, e.g. Ti6Al4V, have widespread applications in the aerospace, automotive and medical industries. At the same time, titanium and its alloys are regarded as difficult-to-machine materials due to their high strength and low thermal conductivity. Significant effort has been expended to improve the accuracy of machining processes for Ti6Al4V. The current study presents the use of the rotary ultrasonic drilling (RUD) process for machining high-quality holes in Ti6Al4V. The study considers the effects of the main RUD input parameters, including spindle speed, ultrasonic power, feed rate and tool diameter, on the key output responses related to the accuracy of the drilled holes, namely cylindricity and overcut errors. Analysis of variance (ANOVA) was employed to study the influence of the input parameters on cylindricity and overcut error. Regression models were then developed to find the optimal set of input parameters that minimize the cylindricity and overcut errors.
Analysis of the thermo-mechanical deformations in a hot forging tool by numerical simulation
NASA Astrophysics Data System (ADS)
L-Cancelos, R.; Varas, F.; Martín, E.; Viéitez, I.
2016-03-01
Although programs have been developed for the design of hot forging tools, their design is still largely based on the experience of the tool maker. This makes it necessary to build test matrices and correct their errors to minimize distortions in the forged piece; this phase prior to mass production consumes time and material resources, which makes the final product more expensive. Forging tools are usually made up of several parts in different grades of steel, which in turn have different mechanical properties and therefore suffer different degrees of strain. Furthermore, the tools used in hot forging are exposed to a thermal field that also induces strain or stress depending on the degree of confinement of the piece; the mechanical behaviour of the assembly is therefore determined by the contact between the different pieces. Numerical simulation makes it possible to analyse different configurations and anticipate possible defects before tool making, thus reducing the costs of this preliminary phase. In order to improve the dimensional quality of the manufactured parts, the work presented here applies a numerical model to a hot forging manufacturing process to predict the areas of the forging die subjected to large deformations. The thermo-mechanical model, developed and implemented with free software (Code-Aster), includes strains of thermal origin, strains during the forging impact and contact effects. The numerical results are validated with experimental measurements on a tooling set that produces forged crankshafts for the automotive industry and show good agreement with the experimental tests. A very useful tool for the design of hot forging tooling sets is thereby achieved.
Error and uncertainty in Raman thermal conductivity measurements
Thomas Edwin Beechem; Yates, Luke; Graham, Samuel
2015-04-22
We investigated error and uncertainty in Raman thermal conductivity measurements via finite-element-based numerical simulation of two geometries often employed: Joule heating of a wire and laser heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature-dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrently with the heating. A dimensionless parameter, termed the Raman stress factor, is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman-based conductivity measurements relative to more established techniques while identifying situations where its use is most efficacious.
Twenty-Five Years of Landsat Thermal Band Calibration
NASA Technical Reports Server (NTRS)
Barsi, Julia A.; Markham, Brian L.; Schoff, John R.; Hook, Simon J.; Raqueno, Nina G.
2010-01-01
Landsat-7 Enhanced Thematic Mapper+ (ETM+), launched in April 1999, and Landsat-5 Thematic Mapper (TM), launched in 1984, each have a single thermal band. Both instruments' thermal band calibrations have been updated previously: ETM+ in 2001 for a pre-launch calibration error, and TM in 2007 for data acquired since the current era of vicarious calibration began (1999). Vicarious calibration teams at the Rochester Institute of Technology (RIT) and NASA/Jet Propulsion Laboratory (JPL) have been working to validate the instrument calibration since 1999. Recent developments in their techniques and sites have expanded the temperature and temporal range of the validation. The new data indicate that the calibration of both instruments had errors: the ETM+ calibration contained a gain error of 5.8% since launch; the TM calibration contained a gain error of 5% and an additional offset error between 1997 and 1999. Both instruments required adjustments to their thermal calibration coefficients to correct for the errors. The new coefficients were calculated and added to the Landsat operational processing system in early 2010. With the corrections, both instruments are calibrated to within ±0.7 K.
General MACOS Interface for Modeling and Analysis for Controlled Optical Systems
NASA Technical Reports Server (NTRS)
Sigrist, Norbert; Basinger, Scott A.; Redding, David C.
2012-01-01
The General MACOS Interface (GMI) for Modeling and Analysis for Controlled Optical Systems (MACOS) enables the use of MATLAB as a front-end for JPL's critical optical modeling package, MACOS. MACOS is JPL's in-house optical modeling software, which has proven to be a superb tool for advanced systems engineering of optical systems. GMI, coupled with MACOS, allows seamless interfacing with modeling tools from other disciplines, making it possible to integrate dynamics, structures, and thermal models with control systems for deformable and other actuated optics. This software package is designed as a tool for analysts to quickly and easily use MACOS without needing to be expert MACOS programmers. The strength of MACOS is its ability to interface with various modeling/development platforms, allowing evaluation of system performance under thermal, mechanical, and optical modeling parameter variations. GMI provides an improved means of accessing selected key MACOS functionalities. The main objective of GMI is to marry the vast mathematical and graphical capabilities of MATLAB with the powerful optical analysis engine of MACOS, thereby providing a useful tool to anyone who can program in MATLAB. GMI also improves modeling efficiency by eliminating the need to write an interface function for each task/project, reducing error sources, speeding up user and modeling tasks, and making MACOS well suited for fast prototyping.
Solar dynamic heat receiver thermal characteristics in low earth orbit
NASA Technical Reports Server (NTRS)
Wu, Y. C.; Roschke, E. J.; Birur, G. C.
1988-01-01
A simplified system model is under development for evaluating the thermal characteristics and thermal performance of a solar dynamic spacecraft energy system's heat receiver. Results based on baseline orbit, power system configuration, and operational conditions are generated for three basic receiver concepts and three concentrator surface slope errors. Receiver thermal characteristics and thermal behavior in LEO conditions are presented. The configuration in which heat is directly transferred to the working fluid is noted to generate the best system and thermal characteristics, as well as the lowest performance degradation with increasing slope error.
NASA Astrophysics Data System (ADS)
Laib dit Leksir, Y.; Mansour, M.; Moussaoui, A.
2018-03-01
Analysis and processing of databases obtained from infrared thermal inspections made on electrical installations require the development of new tools that can extract more information than visual inspections alone. Consequently, methods based on the capture of thermal images show great potential and are increasingly employed in this field. However, there is a need for the development of effective techniques to analyse these databases in order to extract significant information relating to the state of the infrastructures. This paper presents a technique explaining how this approach can be implemented and proposes a system that can help to detect faults in thermal images of electrical installations. The proposed method classifies and identifies the region of interest (ROI). The identification is conducted using the support vector machine (SVM) algorithm. The aim here is to capture the faults that exist in electrical equipment during an inspection of some machines using an A40 FLIR camera. After that, binarization techniques are employed to select the region of interest. Finally, a comparative analysis of the misclassification errors obtained with the proposed method, fuzzy c-means, and Otsu thresholding is also presented.
A micromanipulation cell including a tool changer
NASA Astrophysics Data System (ADS)
Clévy, Cédric; Hubert, Arnaud; Agnus, Joël; Chaillet, Nicolas
2005-10-01
This paper deals with the design, fabrication and characterization of a tool changer for micromanipulation cells. This tool changer is part of a manipulation cell including a three linear axes robot and a piezoelectric microgripper. All these parts are designed to perform micromanipulation tasks in confined spaces such as a microfactory or in the chamber of a scanning electron microscope (SEM). The tool changer principle is to fix a pair of tools (i.e. the gripper tips) either on the tips of the microgripper actuator (piezoceramic bulk) or on a tool magazine. The temperature control of a thermal glue enables one to fix or release this pair of tools. Liquefaction and solidification are generated by surface mounted device (SMD) resistances fixed on the surface of the actuator or magazine. Based on this principle, the tool changer can be adapted to other kinds of micromanipulation cells. Hundreds of automatic tool exchanges were performed with a maximum positioning error between two consecutive tool exchanges of 3.2 µm, 2.3 µm and 2.8 µm on the X, Y and Z axes respectively (Z refers to the vertical axis). Finally, temperature measurements achieved under atmospheric pressure and in a vacuum environment and pressure measurements confirm the possibility of using this device in the air as well as in a SEM.
Ulysses, one year after the launch
NASA Astrophysics Data System (ADS)
Petersen, H.
1991-12-01
Ulysses has now been underway for one year in a large heliocentric orbit. A late change in some of the blankets' external material was required to prevent electrical charging due to contamination by nozzle outgassing products. Test results are shown covering various ranges of plasma parameters and sample temperatures. Even clean materials show a few volts of charging due to imperfections in the conductive film. The thermal environment in the Shuttle cargo bay proved to be slightly different from prelaunch predictions: less warm with the doors closed, and less cold with the doors open. Temperatures experienced in orbit are nominal. A problem was caused by the complex interaction between a Sun-induced thermal gradient in a sensitive boom and the dynamic stability of the spacecraft. A user interface program was an invaluable tool for easing computations with the mathematical models, eliminating error risk and providing configuration control.
NASA Astrophysics Data System (ADS)
Jacobsen, M. K.; Liu, W.; Li, B.
2012-09-01
In this paper, a high pressure setup is presented for performing simultaneous measurements of Seebeck coefficient and thermal diffusivity in a multianvil apparatus for the purpose of enhancing the study of transport phenomena. Procedures for the derivation of Seebeck coefficient and thermal diffusivity/conductivity, as well as their associated sources of errors, are presented in detail, using results obtained on the filled skutterudite, Ce0.8Fe3CoSb12, up to 12 GPa at ambient temperature. Together with recent resistivity and sound velocity measurements in the same apparatus, these developments not only provide the necessary data for a self-consistent and complete characterization of the figure of merit of thermoelectric materials under pressure, but also serve as an important tool for furthering our knowledge of the dynamics and interplay between these transport phenomena.
NASA Astrophysics Data System (ADS)
Languy, Fabian; Vandenrijt, Jean-François; Saint-Georges, Philippe; Georges, Marc P.
2017-06-01
The manufacture of mirrors for space applications is expensive, and the requirements on optical performance increase over the years. To achieve higher performance, larger mirrors are manufactured, but the larger the mirror, the higher the sensitivity to temperature variation and therefore the higher the degradation of optical performance. To avoid the use of expensive thermal regulation, we need to develop tools able to predict how optics behave under thermal constraints. This paper presents the comparison between experimental surface mirror deformation and theoretical results from a multiphysics model. The local displacements of the mirror surface have been measured with the use of electronic speckle pattern interferometry (ESPI), and the deformation itself has been calculated by subtracting the rigid body motion. After validation of the mechanical model, experimental and numerical wavefront errors are compared.
Advanced repair solution of clear defects on HTPSM by using nanomachining tool
NASA Astrophysics Data System (ADS)
Lee, Hyemi; Kim, Munsik; Jung, Hoyong; Kim, Sangpyo; Yim, Donggyu
2015-10-01
As the mask specifications become tighter for low k1 lithography, more aggressive repair accuracy is required below the sub-20 nm technology node. To meet tight defect specifications, many maskshops select effective repair tools according to defect types. Normally, pattern defects are repaired by the e-beam repair tool and soft defects such as particles are repaired by the nanomachining tool. It is difficult for an e-beam repair tool to remove particle defects because it uses a chemical reaction between gas and electrons, and a nanomachining tool, which uses a physical reaction between a nano-tip and defects, cannot be applied for repairing clear defects. Generally, a film deposition process is widely used for repairing clear defects. However, the deposited film has weak cleaning durability, so it is easily removed by the accumulated cleaning process. Although the deposited film is strongly attached to the MoSiN (or Qz) film, the adhesive strength between the deposited Cr film and the MoSiN (or Qz) film becomes weaker and weaker with the energy accumulated when masks are exposed in a scanner tool, due to the different coefficients of thermal expansion of the materials. Therefore, whenever a re-pellicle process is needed for a mask, every deposited repair point has to be checked to confirm whether the deposited film is damaged, and if a deposition point is damaged, the repair process must be performed again, making the overall process longer and more complex. In this paper, the basic theory and principle of recovering clear defects using a nanomachining tool are introduced, and the evaluated results are reviewed for dense line (L/S) patterns and contact hole (C/H) patterns. Also, the results using a nanomachining tool were compared with those using an e-beam repair tool, including the cleaning durability evaluated by the accumulated cleaning process. Besides, we discuss the phase shift issue and the solution to the image placement error caused by phase error.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suess, D.; Abert, C.; Bruckner, F.
2015-04-28
The switching probability of magnetic elements for heat-assisted recording with pulsed laser heating was investigated. It was found that FePt elements with a diameter of 5 nm and a height of 10 nm show, at a field of 0.5 T, thermally written-in errors of 12%, which is significantly too large for bit-patterned magnetic recording. Thermally written-in errors can be decreased if larger head fields are applied. However, larger fields lead to an increase in the fundamental thermal jitter. This leads to a dilemma between thermally written-in errors and fundamental thermal jitter. This dilemma can be partly relaxed by increasing the thickness of the FePt film up to 30 nm. For realistic head fields, it is found that the fundamental thermal jitter is of the same order of magnitude as the fundamental thermal jitter in conventional recording, which is about 0.5–0.8 nm. Composite structures consisting of a high-Curie-temperature top layer and FePt as a hard magnetic storage layer can reduce the thermally written-in errors to smaller than 10^-4 if the damping constant is increased in the soft layer. Large damping may be realized by doping with rare earth elements. Similar to single FePt grains, in the composite structure an increase in switching probability comes at the cost of an increase in thermal jitter. Structures utilizing first-order phase transitions that break the thermal jitter and writability dilemma are discussed.
Correlation of spacecraft thermal mathematical models to reference data
NASA Astrophysics Data System (ADS)
Torralbo, Ignacio; Perez-Grande, Isabel; Sanz-Andres, Angel; Piqueras, Javier
2018-03-01
Model-to-test correlation is a frequent problem in spacecraft thermal control design. The idea is to determine the values of the parameters of the thermal mathematical model (TMM) that allow reaching a good fit between the TMM results and test data, in order to reduce the uncertainty of the mathematical model. Quite often, this task is performed manually, mainly because good engineering knowledge and experience are needed to reach a successful compromise, but the use of a mathematical tool could facilitate this work. The correlation process can be considered as the minimization of the error of the model results with regard to the reference data. In this paper, a simple method suitable for solving the TMM-to-test correlation problem is presented, using a Jacobian matrix formulation and the Moore-Penrose pseudo-inverse, generalized to include several load cases. In addition, in simple cases this method allows analytical solutions to be obtained, which helps to analyze some problems that appear when the Jacobian matrix is singular. To show the implementation of the method, two problems have been considered, one more academic, and the other the TMM of an electronic box of the PHI instrument of the ESA Solar Orbiter mission, to be flown in 2019. The use of singular value decomposition of the Jacobian matrix to analyze and reduce these models is also shown. The error in parameter space is used to assess the quality of the correlation results in both models.
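A minimal numerical sketch of this kind of correlation step is given below, assuming a finite-difference Jacobian and a Moore-Penrose pseudo-inverse update; the two-node model, parameter names and convergence settings are illustrative and not taken from the paper.

import numpy as np

def correlate_tmm(params, model, t_test, n_iter=10, eps=1e-3):
    """Least-squares TMM-to-test correlation via the Jacobian pseudo-inverse.

    params : initial guess for model parameters (e.g. couplings, W/K)
    model  : callable mapping parameters -> predicted node temperatures
    t_test : measured node temperatures from the thermal balance test
    """
    p = np.asarray(params, dtype=float)
    for _ in range(n_iter):
        residual = t_test - model(p)              # temperature error per node
        # Finite-difference Jacobian dT_i/dp_j
        jac = np.empty((t_test.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p); dp[j] = eps
            jac[:, j] = (model(p + dp) - model(p)) / eps
        p = p + np.linalg.pinv(jac) @ residual    # Moore-Penrose update
    return p

# Hypothetical two-node example: a conductive coupling and a radiator area as free parameters
def model(p):
    g, area = p
    return np.array([300.0 + 5.0 / g, 280.0 - 20.0 * (area - 1.0)])

print(correlate_tmm([1.0, 1.0], model, np.array([304.0, 275.0])))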
Demonstration of spectral calibration for stellar interferometry
NASA Technical Reports Server (NTRS)
Demers, Richard T.; An, Xin; Tang, Hong; Rud, Mayer; Wayne, Leonard; Kissil, Andrew; Kwack, Eug-Yun
2006-01-01
A breadboard is under development to demonstrate the calibration of spectral errors in microarcsecond stellar interferometers. Analysis shows that thermally and mechanically stable hardware in addition to careful optical design can reduce the wavelength dependent error to tens of nanometers. Calibration of the hardware can further reduce the error to the level of picometers. The results of thermal, mechanical and optical analysis supporting the breadboard design will be shown.
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Cwik, Tom; Fu, Chuigang; Imbriale, William A.; Jamnejad, Vahraz; Springer, Paul L.; Borgioli, Andrea
2000-01-01
The process of designing and analyzing a multiple-reflector system has traditionally been time-intensive, requiring large amounts of both computational and human time. At many frequencies, a discrete approximation of the radiation integral may be used to model the system. The code which implements this physical optics (PO) algorithm was developed at the Jet Propulsion Laboratory. It analyzes systems of antennas in pairs, and for each pair, the analysis can be computationally time-consuming. Additionally, the antennas must be described using a local coordinate system for each antenna, which makes it difficult to integrate the design into a multi-disciplinary framework in which there is traditionally one global coordinate system, even before considering deforming the antenna as prescribed by external structural and/or thermal factors. Finally, setting up the code to correctly analyze all the antenna pairs in the system can take a fair amount of time, and introduces possible human error. The use of parallel computing to reduce the computational time required for the analysis of a given pair of antennas has been previously discussed. This paper focuses on the other problems mentioned above. It will present a methodology and examples of use of an automated tool that performs the analysis of a complete multiple-reflector system in an integrated multi-disciplinary environment (including CAD modeling, and structural and thermal analysis) at the click of a button. This tool, named MOD Tool (Millimeter-wave Optics Design Tool), has been designed and implemented as a distributed tool, with a client that runs almost identically on Unix, Mac, and Windows platforms, and a server that runs primarily on a Unix workstation and can interact with parallel supercomputers with simple instruction from the user interacting with the client.
Interferometric correction system for a numerically controlled machine
Burleson, Robert R.
1978-01-01
An interferometric correction system for a numerically controlled machine is provided to improve the positioning accuracy of a machine tool, for example, for a high-precision numerically controlled machine. A laser interferometer feedback system is used to monitor the positioning of the machine tool which is being moved by command pulses to a positioning system to position the tool. The correction system compares the commanded position as indicated by a command pulse train applied to the positioning system with the actual position of the tool as monitored by the laser interferometer. If the tool position lags the commanded position by a preselected error, additional pulses are added to the pulse train applied to the positioning system to advance the tool closer to the commanded position, thereby reducing the lag error. If the actual tool position is leading in comparison to the commanded position, pulses are deleted from the pulse train where the advance error exceeds the preselected error magnitude to correct the position error of the tool relative to the commanded position.
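The lag/lead correction logic described above can be illustrated with a short sketch; the step size, error threshold and function name below are assumptions chosen for illustration, not the patented implementation.

def correct_pulse_train(commanded_steps, measured_position, step_size=1.0e-3,
                        error_threshold=2.0e-3):
    """Add or delete command pulses when the interferometer-measured position
    lags or leads the commanded position by more than a preselected error.

    commanded_steps : signed pulses sent to the positioning system so far
    measured_position : tool position reported by the laser interferometer (mm)
    step_size : displacement per pulse (mm), assumed value
    error_threshold : preselected error magnitude (mm), assumed value
    """
    commanded_position = commanded_steps * step_size
    error = commanded_position - measured_position
    if error > error_threshold:        # tool lags the command: add pulses
        return int(round(error / step_size))
    if error < -error_threshold:       # tool leads the command: delete pulses
        return int(round(error / step_size))   # negative count = pulses removed
    return 0

# Example: 1000 pulses commanded, tool has only reached 0.995 mm
print(correct_pulse_train(1000, 0.995))   # positive -> pulses added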
Development of a direct push based in-situ thermal conductivity measurement system
NASA Astrophysics Data System (ADS)
Chirla, Marian Andrei; Vienken, Thomas; Dietrich, Peter; Bumberger, Jan
2016-04-01
Heat pump systems are commonly utilized in Europe for the exploitation of the shallow geothermal potential. To guarantee a sustainable use of geothermal heat pump systems by saving resources and minimizing potential negative impacts induced by temperature changes within soil and groundwater, new geothermal exploration methods and tools are required. Knowledge of the underground thermal properties is a necessity for a correct and optimum design of borehole heat exchangers. The most important parameter that indicates the performance of the systems is the thermal conductivity of the ground. Mapping the spatial variability of thermal conductivity with high resolution in the shallow subsurface for geothermal purposes requires a high degree of technical effort to procure adequate samples for thermal analysis. Collecting such samples from the soil can disturb the sample structure, so great care must be taken during collection to avoid this. Factors such as transportation and sample storage can also influence measurement results. The use of technologies like the Thermal Response Test (TRT) requires complex mechanical and electrical systems for convective heat transport in the subsurface and long monitoring times, often three days. Finally, by using thermal response tests, often only one integral value is obtained for the entire subsurface coupled with the borehole heat exchanger. The common thermal conductivity measurement systems (thermal analyzers) can perform vertical thermal conductivity logs only with the aid of sample procurement, or by integration into a drilling system. However, thermal conductivity measurements using direct push with this type of probe are not possible, due to physical and mechanical limitations. Applying vertical forces using direct push technology, in order to penetrate the shallow subsurface, can damage the probe and the sensor systems. The aim of this study is to develop a new, robust thermal conductivity measurement probe for direct push based approaches, called the Thermal Conductivity Profiler (TCP), which operates on the principle of a hollow cylindrical heat source. To determine thermal conductivity in situ, the transient temperature at the middle of the probe and the electrical power dissipation are measured. At the same time, this work presents laboratory results obtained when this novel hollow cylindrical probe system was tested on different materials for calibration. By using the hollow cylindrical probe, the thermal conductivity results have an error of less than 2.5% for solid samples (Teflon, Agar jelly, and Nylatron). These findings are useful for achieving a proper thermal energy balance in the shallow subsurface using direct push technology and the TCP. Information on layers with high thermal conductivity, suitable for thermal storage, can be used to determine the borehole heat exchanger design and, therefore, the geothermal heat pump architecture.
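The abstract does not state the evaluation formula; a common reduction for heated-probe transients is the late-time line-source approximation, sketched below with purely illustrative heating power and timing values, which is not necessarily the exact TCP procedure.

import numpy as np

def conductivity_from_transient(time_s, temp_rise_K, power_per_length_W_m):
    """Estimate thermal conductivity from a heated-probe transient using the
    late-time line-source approximation  dT = (q / (4*pi*k)) * ln(t) + C.
    Generic needle/cylindrical-probe reduction, assumed for illustration only."""
    slope, _ = np.polyfit(np.log(time_s), temp_rise_K, 1)   # K per unit ln(s)
    return power_per_length_W_m / (4.0 * np.pi * slope)

# Synthetic example: q = 10 W/m, true k = 2 W/(m K)
t = np.linspace(30.0, 300.0, 50)
dT = 10.0 / (4.0 * np.pi * 2.0) * np.log(t) + 0.5
print(conductivity_from_transient(t, dT, 10.0))   # recovers ~2.0 W/(m K)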
Development and evaluation of thermal model reduction algorithms for spacecraft
NASA Astrophysics Data System (ADS)
Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus
2015-05-01
This paper is concerned with the topic of the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming, manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Thereby the compatibility with the thermal analysis tool ESATAN-TMS is of major concern, which restricts the useful application of these methods. Additional model reduction methods have been developed which account for these constraints. The Matrix Reduction method allows the approximation of the differential equation to reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for model reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.
NASA Technical Reports Server (NTRS)
Timofeyev, Y. M.
1979-01-01
In order to test the error of calculation in the assumed values of the transmission function for Soviet and American radiometers sounding the atmosphere thermally from orbiting satellites, the assumptions of the transmission calculation are varied with respect to atmospheric CO2 content, transmission frequency, and atmospheric absorption. The error arising from variations of these assumptions from the standard basic model is calculated.
NASA Astrophysics Data System (ADS)
Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna
2018-03-01
The purpose of this study was to improve the accuracy of three-axis vertical CNC milling machines through a general approach based on mathematical modeling of machine tool geometric errors. The inaccuracy of CNC machines can be caused by geometric errors, which are an important factor during the manufacturing process and the assembly phase, and which must be controlled in order to build machines with high accuracy. The accuracy of the three-axis vertical milling machine is improved by knowing the geometric errors and identifying the error position parameters of the machine tool through mathematical modeling. The geometric error of the machine tool consists of twenty-one error parameters: nine linear error parameters, nine angular error parameters and three perpendicularity error parameters. The mathematical modeling approach to geometric error uses the alignment and angular errors calculated for the supporting components of the machine motion, namely the linear guideway and linear motion elements. The purpose of using this mathematical modeling approach is the identification of geometric errors, which can serve as a reference during the design, assembly and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling geometric errors in CNC machine tools can illustrate the relationship between alignment error, position and angle on a linear guideway of three-axis vertical milling machines.
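A hedged sketch of how such error parameters can be combined through homogeneous transformation matrices is given below; the small-angle form, stacking order and numerical values are illustrative assumptions, not the model of the paper.

import numpy as np

def axis_error_matrix(dx, dy, dz, ea, eb, ec):
    """Small-angle homogeneous transformation for one axis: three positioning
    errors (dx, dy, dz) and three angular errors (ea, eb, ec) about X, Y, Z."""
    return np.array([[1.0, -ec,  eb, dx],
                     [ ec, 1.0, -ea, dy],
                     [-eb,  ea, 1.0, dz],
                     [0.0, 0.0, 0.0, 1.0]])

def volumetric_error(tool_point, err_x, err_y, err_z, squareness=(0.0, 0.0, 0.0)):
    """Combine per-axis error matrices (9 linear + 9 angular parameters) and
    three squareness errors into the tool-point volumetric error. A simplified
    serial-stacking convention is assumed here for illustration."""
    sxy, sxz, syz = squareness
    squareness_matrix = np.array([[1.0, -sxy, -sxz, 0.0],
                                  [0.0,  1.0, -syz, 0.0],
                                  [0.0,  0.0,  1.0, 0.0],
                                  [0.0,  0.0,  0.0, 1.0]])
    chain = axis_error_matrix(*err_x) @ axis_error_matrix(*err_y) \
            @ axis_error_matrix(*err_z) @ squareness_matrix
    p = np.append(np.asarray(tool_point, dtype=float), 1.0)
    return (chain @ p - p)[:3]          # deviation of the actual from the nominal point

# Illustrative numbers: micrometre-level offsets, microradian-level angles (mm, rad)
print(volumetric_error([100.0, 50.0, 20.0],
                       err_x=(5e-3, 0, 0, 0, 10e-6, 0),
                       err_y=(0, 3e-3, 0, 5e-6, 0, 0),
                       err_z=(0, 0, 2e-3, 0, 0, 8e-6),
                       squareness=(12e-6, 0.0, 0.0)))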
Multidisciplinary Tool for Systems Analysis of Planetary Entry, Descent, and Landing
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2011-01-01
Systems analysis of planetary entry, descent, and landing (EDL) is by nature a multidisciplinary activity. SAPE, a tool for systems analysis of planetary EDL, improves the performance of the systems analysis team by automating and streamlining the process, and this improvement can reduce the errors that stem from manual data transfer among discipline experts. SAPE is a multidisciplinary tool for systems analysis of planetary EDL for Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Titan. It performs EDL systems analysis for any planet, operates cross-platform (i.e., Windows, Mac, and Linux operating systems), uses existing software components and open-source software to avoid software licensing issues, performs low-fidelity systems analysis in one hour on a computer that is comparable to an average laptop, and keeps discipline experts in the analysis loop. SAPE uses Python, a platform-independent, open-source language, for integration and for the user interface. Development has relied heavily on the object-oriented programming capabilities that are available in Python. Modules are provided to interface with commercial and government off-the-shelf software components (e.g., thermal protection systems and finite-element analysis). SAPE currently includes the following analysis modules: geometry, trajectory, aerodynamics, aerothermal, thermal protection system, and interface for structural sizing.
Final Report: Correctness Tools for Petascale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellor-Crummey, John
2014-10-27
In the course of developing parallel programs for leadership computing systems, subtle programming errors often arise that are extremely difficult to diagnose without tools. To meet this challenge, the University of Maryland, the University of Wisconsin-Madison, and Rice University worked to develop lightweight tools to help code developers pinpoint a variety of program correctness errors that plague parallel scientific codes. The aim of this project was to develop software tools that help diagnose program errors including memory leaks, memory access errors, round-off errors, and data races. Research at Rice University focused on developing algorithms and data structures to support efficient monitoring of multithreaded programs for memory access errors and data races. This is a final report about research and development work at Rice University as part of this project.
NASA Technical Reports Server (NTRS)
Kahle, A. B.; Alley, R. E.; Schieldge, J. P.
1984-01-01
The sensitivity of thermal inertia (TI) calculations to errors in the measurement or parameterization of a number of environmental factors is considered here. The factors include effects of radiative transfer in the atmosphere, surface albedo and emissivity, variations in surface turbulent heat flux density, cloud cover, vegetative cover, and topography. The error analysis is based upon data from the Heat Capacity Mapping Mission (HCMM) satellite for July 1978 at three separate test sites in the deserts of the western United States. Results show that typical errors in atmospheric radiative transfer, cloud cover, and vegetative cover can individually cause root-mean-square (RMS) errors of about 10 percent (with atmospheric effects sometimes as large as 30-40 percent) in HCMM-derived thermal inertia images of 20,000-200,000 pixels.
Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, A. F.; Jacobs, C. S.
2011-01-01
The standard VLBI analysis models measurement noise as purely thermal errors modeled according to uncorrelated Gaussian distributions. As the price of recording bits steadily decreases, thermal errors will soon no longer dominate. It is therefore expected that troposphere and instrumentation/clock errors will increasingly become more dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become more relevant for optimal analysis. This paper will discuss the advantages of including the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow model pioneered by Treuhaft and Lanyi. We will show examples of applying these correlated noise spectra to the weighting of VLBI data analysis.
Finite Element Simulations of Micro Turning of Ti-6Al-4V using PCD and Coated Carbide tools
NASA Astrophysics Data System (ADS)
Jagadesh, Thangavel; Samuel, G. L.
2017-02-01
The demand for manufacturing axi-symmetric Ti-6Al-4V implants is increasing in biomedical applications, and their production involves the micro turning process. To understand the micro turning process, in this work a 3D finite element model has been developed for predicting the tool-chip interface temperature and the cutting, thrust and axial forces. A strain gradient effect has been included in the Johnson-Cook material model to represent the flow stress of the work material. To verify the simulation results, experiments have been conducted at four different feed rates and three different cutting speeds. Since the titanium alloy has a low Young's modulus, the spring-back effect is predominant for the higher-edge-radius coated carbide tool, which leads to an increase in the forces. In contrast, the polycrystalline diamond (PCD) tool has a smaller edge radius, which leads to lower forces and a decrease in tool-chip interface temperature due to its high thermal conductivity. The tool-chip interface temperature increases with cutting speed; however, the increase is smaller for the PCD tool than for the coated carbide tool. When the uncut chip thickness decreases, there is an increase in specific cutting energy due to material strengthening effects. Surface roughness is higher for the coated carbide tool than for the PCD tool due to the ploughing effect. The average prediction errors of the finite element model for the cutting and thrust forces are 11.45% and 14.87%, respectively.
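For reference, the baseline Johnson-Cook flow stress named above can be sketched as follows; the material constants are only representative Ti-6Al-4V values, and the strain-gradient extension used in the paper is not included.

import math

def johnson_cook_stress(strain, strain_rate, temp_K,
                        A=880e6, B=680e6, n=0.45, C=0.035, m=0.7,
                        ref_rate=1.0, T_room=293.0, T_melt=1878.0):
    """Baseline Johnson-Cook flow stress (Pa). The constants here are only
    representative values for Ti-6Al-4V, not those used in the cited simulations,
    and the paper's strain-gradient term is omitted."""
    T_hom = (temp_K - T_room) / (T_melt - T_room)          # homologous temperature
    return ((A + B * strain ** n)
            * (1.0 + C * math.log(strain_rate / ref_rate))
            * (1.0 - T_hom ** m))

# Illustrative cutting-zone state: 30% strain, 1e4 1/s strain rate, 700 K
print(johnson_cook_stress(strain=0.3, strain_rate=1.0e4, temp_K=700.0) / 1e6, "MPa")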
Validation of Multiple Tools for Flat Plate Photovoltaic Modeling Against Measured Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, J.; Whitmore, J.; Blair, N.
2014-08-01
This report expands upon a previous work by the same authors, published in the 40th IEEE Photovoltaic Specialists conference. In this validation study, comprehensive analysis is performed on nine photovoltaic systems for which NREL could obtain detailed performance data and specifications, including three utility-scale systems and six commercial scale systems. Multiple photovoltaic performance modeling tools were used to model these nine systems, and the error of each tool was analyzed compared to quality-controlled measured performance data. This study shows that, excluding identified outliers, all tools achieve annual errors within +/-8% and hourly root mean squared errors less than 7% for all systems. It is further shown using SAM that module model and irradiance input choices can change the annual error with respect to measured data by as much as 6.6% for these nine systems, although all combinations examined still fall within an annual error range of +/-8.5%. Additionally, a seasonal variation in monthly error is shown for all tools. Finally, the effects of irradiance data uncertainty and the use of default loss assumptions on annual error are explored, and two approaches to reduce the error inherent in photovoltaic modeling are proposed.
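The error metrics quoted above can be reproduced with a short sketch; the normalization of the hourly RMSE and the synthetic data below are assumptions for illustration, not the report's exact conventions.

import numpy as np

def annual_error_percent(modeled_kwh, measured_kwh):
    """Annual energy error of the model relative to measured production."""
    return 100.0 * (np.sum(modeled_kwh) - np.sum(measured_kwh)) / np.sum(measured_kwh)

def hourly_nrmse_percent(modeled_kwh, measured_kwh):
    """Hourly root-mean-squared error, normalized here by mean measured
    production (an assumed normalization, for illustration only)."""
    rmse = np.sqrt(np.mean((modeled_kwh - measured_kwh) ** 2))
    return 100.0 * rmse / np.mean(measured_kwh)

# Synthetic year of hourly data for illustration only
rng = np.random.default_rng(0)
measured = np.clip(rng.normal(50.0, 20.0, 8760), 0.0, None)
modeled = measured * 1.03 + rng.normal(0.0, 2.0, 8760)
print(annual_error_percent(modeled, measured), hourly_nrmse_percent(modeled, measured))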
Vehicle Thermal Management Models and Tools
NREL Transportation Research
The National Renewable Energy Laboratory's (NREL's) vehicle thermal management modeling tools allow researchers to assess the trade-offs and calculate the potential benefits of thermal design options.
NASA Astrophysics Data System (ADS)
Nezhad, Mohsen Motahari; Shojaeefard, Mohammad Hassan; Shahraki, Saeid
2016-02-01
In this study, the experiments aimed at thermally analyzing the exhaust valve of an air-cooled internal combustion engine and estimating the thermal contact conductance in fixed and periodic contacts. Due to the nature of internal combustion engines, the duration of contact between the valve and its seat is very short, and much time is needed to reach the quasi-steady state in the periodic contact between the exhaust valve and its seat. Using the methods of linear extrapolation and the inverse solution, the surface contact temperatures and the fixed and periodic thermal contact conductance were calculated. The results of the linear extrapolation and inverse methods have similar trends and, based on the error analysis, they are accurate enough to estimate the thermal contact conductance. Moreover, based on the error analysis, the linear extrapolation method using the inverse ratio is preferred. The effects of pressure, contact frequency, heat flux, and cooling air speed on thermal contact conductance have been investigated. The results show that by increasing the contact pressure the thermal contact conductance increases substantially. In addition, by increasing the engine speed the thermal contact conductance decreases. On the other hand, by boosting the air speed the thermal contact conductance increases, and by raising the heat flux the thermal contact conductance decreases. The average calculated error equals 12.9%.
Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...
Improving the thermal efficiency of a jaggery production module using a fire-tube heat exchanger.
La Madrid, Raul; Orbegoso, Elder Mendoza; Saavedra, Rafael; Marcelo, Daniel
2017-12-15
Jaggery is a product obtained after heating and evaporation processes have been applied to sugar cane juice via the addition of thermal energy, followed by the crystallisation process through mechanical agitation. At present, jaggery production uses furnaces and pans that are designed empirically based on trial and error procedures, which results in operation at low thermal efficiency. To rectify these deficiencies, this study proposes the use of fire-tube pans to increase heat transfer from the flue gases to the sugar cane juice. With the aim of increasing the thermal efficiency of a jaggery installation, a computational fluid dynamic (CFD)-based model was used as a numerical tool to design a fire-tube pan that would replace the existing finned flat pan. For this purpose, the original configuration of the jaggery furnace was simulated via a pre-validated CFD model in order to calculate its current thermal performance. Then, the newly designed fire-tube pan was virtually substituted into the jaggery furnace with the aim of numerically estimating the thermal performance at the same operating conditions. A comparison of both simulations highlighted an increase in the heat transfer rate of around 105% in the heating/evaporation processes when the fire-tube pan replaced the original finned flat pan. This enhancement impacted the jaggery production installation, whereby the thermal efficiency of the installation increased from 31.4% to 42.8%.
Extension of similarity test procedures to cooled engine components with insulating ceramic coatings
NASA Technical Reports Server (NTRS)
Gladden, H. J.
1980-01-01
Material thermal conductivity was analyzed for its effect on the thermal performance of air cooled gas turbine components, both with and without a ceramic thermal-barrier material, tested at reduced temperatures and pressures. The analysis shows that neglecting the material thermal conductivity can contribute significant errors when metal-wall-temperature test data taken on a turbine vane are extrapolated to engine conditions. This error in metal temperature for an uncoated vane is of opposite sign from that for a ceramic-coated vane. A correction technique is developed for both ceramic-coated and uncoated components.
Content Validity of a Tool Measuring Medication Errors.
Tabassum, Nishat; Allana, Saleema; Saeed, Tanveer; Dias, Jacqueline Maria
2015-08-01
The objective of this study was to determine content and face validity of a tool measuring medication errors among nursing students in baccalaureate nursing education. Data were collected from the Aga Khan University School of Nursing and Midwifery (AKUSoNaM), Karachi, from March to August 2014. The tool was developed utilizing literature and the expertise of the team members, experts in different areas. The developed tool was then sent to five experts from all over Karachi for ensuring the content validity of the tool, which was measured on relevance and clarity of the questions. The Scale Content Validity Index (S-CVI) for clarity and relevance of the questions was found to be 0.94 and 0.98, respectively. The tool measuring medication errors has an excellent content validity. This tool should be used for future studies on medication errors, with different study populations such as medical students, doctors, and nurses.
Prediction of microcracking in composite laminates under thermomechanical loading
NASA Technical Reports Server (NTRS)
Maddocks, Jason R.; Mcmanus, Hugh L.
1995-01-01
Composite laminates used in space structures are exposed to both thermal and mechanical loads. Cracks in the matrix form, changing the laminate thermoelastic properties. An analytical methodology is developed to predict microcrack density in a general laminate exposed to an arbitrary thermomechanical load history. The analysis uses a shear lag stress solution in conjunction with an energy-based cracking criterion. Experimental investigation was used to verify the analysis. Correlation between analysis and experiment is generally excellent. The analysis does not capture machining-induced cracking, or observed delayed crack initiation in a few ply groups, but these errors do not prevent the model from being a useful preliminary design tool.
Effects of Correlated Errors on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, Andres; Jacobs, C. S.
2011-01-01
As thermal errors are reduced, instrumental and tropospheric correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.
A thermal sensation prediction tool for use by the profession
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fountain, M.E.; Huizenga, C.
1997-12-31
As part of a recent ASHRAE research project (781-RP), a thermal sensation prediction tool has been developed. This paper introduces the tool, describes the component thermal sensation models, and presents examples of how the tool can be used in practice. Since the main end product of the HVAC industry is the comfort of occupants indoors, tools for predicting occupant thermal response can be an important asset to designers of indoor climate control systems. The software tool presented in this paper incorporates several existing models for predicting occupant comfort.
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
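A minimal sketch of the Latin Hypercube error-propagation idea, outside ArcGIS and independent of REPTool's actual implementation, is given below; the toy raster model, error magnitudes, and normal-error assumption are illustrative.

import numpy as np
from scipy.special import erfinv   # used to map uniform LHS samples to normal errors

def latin_hypercube(n_samples, n_dims, rng):
    """Simple Latin Hypercube Sampling on the unit hypercube."""
    samples = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):
        samples[:, d] = rng.permutation(samples[:, d])   # shuffle strata independently per dimension
    return samples

def propagate_raster_error(raster, raster_sd, coef_mean, coef_sd, n_samples=500, seed=1):
    """Per-cell output distribution for a toy model  out = coef * raster,
    assuming spatially invariant Gaussian raster error and an uncertain model
    coefficient. This mimics the REPTool concept, not its implementation."""
    rng = np.random.default_rng(seed)
    u = latin_hypercube(n_samples, 2, rng)
    to_normal = lambda p: np.sqrt(2.0) * erfinv(2.0 * p - 1.0)   # uniform -> standard normal
    raster_err = raster_sd * to_normal(u[:, 0])
    coef = coef_mean + coef_sd * to_normal(u[:, 1])
    outputs = coef[:, None, None] * (raster[None, :, :] + raster_err[:, None, None])
    return outputs.mean(axis=0), outputs.std(axis=0)     # per-cell mean and uncertainty

mean_out, sd_out = propagate_raster_error(np.array([[10.0, 12.0], [8.0, 15.0]]),
                                          raster_sd=0.5, coef_mean=2.0, coef_sd=0.1)
print(mean_out)
print(sd_out)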
NASA Astrophysics Data System (ADS)
Abdel-Aal, H. A.; Mansori, M. El
2012-12-01
Cutting tools are subject to extreme thermal and mechanical loads during operation. The state of loading is intensified in a dry cutting environment, especially when cutting the so-called hard-to-cut materials. Although the effect of mechanical loads on tool failure has been extensively studied, detailed studies on the effect of thermal dissipation on the deterioration of the cutting tool are rather scarce. In this paper we study the failure of coated carbide tools due to thermal loading. The study emphasizes the role assumed by the thermo-physical properties of the tool material in enhancing or preventing mass attrition of the cutting elements within the tool. It is shown that, within a comprehensive view of the nature of conduction in the tool zone, thermal conduction is not solely affected by temperature. Rather it is a function of the so-called thermodynamic forces. These are the stress, the strain, the strain rate, the rate of temperature rise, and the temperature gradient. Although the description of thermal conduction is non-linear within such a consideration, it is beneficial to employ this form because it facilitates a full mechanistic understanding of the thermal activation of tool wear.
NASA Astrophysics Data System (ADS)
Harrington, David M.; Sueoka, Stacey R.
2018-01-01
Data products from high spectral resolution astronomical polarimeters are often limited by fringes. Fringes can skew derived magnetic field properties from spectropolarimetric data. Fringe removal algorithms can also corrupt the data if the fringes and object signals are too similar. For some narrow-band imaging polarimeters, fringes change the calibration retarder properties and dominate the calibration errors. Systems-level engineering tools for polarimetric instrumentation require accurate predictions of fringe amplitudes and periods for transmission, diattenuation, and retardance. The relevant instabilities caused by environmental, thermal, and optical properties can be modeled and mitigation tools developed. We create spectral polarization fringe amplitude and temporal instability predictions by applying the Berreman calculus and simple interferometric calculations to optics in beams of varying F-number. We then apply the formalism to superachromatic six-crystal retarders in converging beams under beam thermal loading in outdoor environmental conditions for two of the world's largest observatories: the 10-m Keck telescope and the Daniel K. Inouye Solar Telescope (DKIST). DKIST will produce a 300-W optical beam, which has imposed stringent requirements on the large diameter six-crystal retarders, dichroic beamsplitters, and internal optics. DKIST retarders are used in a converging beam with F-ratios between 8 and 62. The fringe spectral periods, amplitudes, and thermal models of retarder behavior assisted DKIST optical designs and calibration plans, with future application to many astronomical spectropolarimeters. The Low Resolution Imaging Spectrograph with polarimetry instrument at Keck also uses six-crystal retarders in a converging F/13 beam at a Cassegrain focus exposed to summit environmental conditions, providing observational verification of our predictions.
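A back-of-the-envelope version of the "simple interferometric calculations" mentioned above is sketched below for a plane-parallel crystal at normal incidence; the thickness, index, and two-surface etalon picture are illustrative assumptions, not the Berreman treatment used in the paper.

def fringe_estimate(wavelength_nm, thickness_mm, refractive_index):
    """Rough spectral fringe estimate for a plane-parallel crystal at normal
    incidence, using the simple two-surface etalon picture (assumed model):
    period ~ lambda^2 / (2 n d) and fringe depth from the Fresnel reflectance."""
    n = refractive_index
    d_nm = thickness_mm * 1.0e6
    period_nm = wavelength_nm ** 2 / (2.0 * n * d_nm)          # spectral fringe period
    r_surface = ((n - 1.0) / (n + 1.0)) ** 2                   # Fresnel reflectance per surface
    peak_to_peak = 1.0 - ((1.0 - r_surface) / (1.0 + r_surface)) ** 2   # transmission fringe depth
    return period_nm, peak_to_peak

# Illustrative quartz-like crystal: 2 mm thick, n ~ 1.55, at 600 nm
period, depth = fringe_estimate(600.0, 2.0, 1.55)
print(f"fringe period ~ {period:.4f} nm, peak-to-peak ~ {depth:.3f}")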
A quantitative comparison of soil moisture inversion algorithms
NASA Technical Reports Server (NTRS)
Zyl, J. J. van; Kim, Y.
2001-01-01
This paper compares the performance of four bare surface radar soil moisture inversion algorithms in the presence of measurement errors. The particular errors considered include calibration errors, system thermal noise, local topography and vegetation cover.
Minimal entropy reconstructions of thermal images for emissivity correction
NASA Astrophysics Data System (ADS)
Allred, Lloyd G.
1999-03-01
Low emissivity with corresponding low thermal emission is a problem which has long afflicted infrared thermography. The problem is aggravated by reflected thermal energy, which increases as the emissivity decreases, thus reducing the net signal-to-noise ratio, which degrades the resulting temperature reconstructions. Additional errors are introduced by the traditional emissivity-correction approaches, wherein one attempts to correct for emissivity either using thermocouples or using one or more baseline images collected at known temperatures. These corrections are numerically equivalent to image differencing. Errors in the baseline images are therefore additive, causing the resulting measurement error to either double or triple. The practical application of thermal imagery usually entails coating the object surface to increase the emissivity to a uniform and repeatable value. While the author recommends that the thermographer still adhere to this practice, he has devised a minimal entropy reconstruction which corrects not only for emissivity variations but also for variations in sensor response, using the baseline images at known temperatures to correct for these values. The minimal entropy reconstruction is based on a modified Hopfield neural network which finds the image that best explains the observed and baseline data while having minimal entropy change between adjacent pixels. The autocorrelation of temperatures between adjacent pixels is a feature of most close-up thermal images. A surprising result from transient heating data indicates that the resulting corrected thermal images have less measurement error and are closer to the situational truth than the original data.
CE: Original Research: Exploring How Nursing Schools Handle Student Errors and Near Misses.
Disch, Joanne; Barnsteiner, Jane; Connor, Susan; Brogren, Fabiana
2017-10-01
Background: Little attention has been paid to how nursing students learn about quality and safety, and to the tools and policies that guide nursing schools in helping students respond to errors and near misses. This study sought to determine whether prelicensure nursing programs have a policy for reporting and following up on student clinical errors and near misses, a tool for such reporting, a tool or process (or both) for identifying trends, strategies for follow-up with students after errors and near misses, and strategies for follow-up with clinical agencies and individual faculty members. A national electronic survey of 1,667 schools of nursing with a prelicensure registered nursing program was conducted. Data from 494 responding schools (30%) were analyzed. Of the responding schools, 245 (50%) reported having no policy for managing students following a clinical error or near miss, and 272 (55%) reported having no tool for reporting student errors or near misses. Significant work is needed if the principles of a fair and just culture are to shape the response to nursing student errors and near misses. For nursing schools, some essential first steps are to understand the tools and policies a school has in place; the school's philosophy regarding errors and near misses; the resources needed to establish a fair and just culture; and how faculty can work together to create learning environments that eliminate or minimize the negative consequences of errors and near misses for patients, students, and faculty.
Improve homology search sensitivity of PacBio data by correcting frameshifts.
Du, Nan; Sun, Yanni
2016-09-01
Single-molecule, real-time sequencing (SMRT) developed by Pacific BioSciences produces longer reads than second-generation sequencing technologies such as Illumina. The long read length enables PacBio sequencing to close gaps in genome assembly, reveal structural variations, and identify gene isoforms with higher accuracy in transcriptomic sequencing. However, PacBio data has a high sequencing error rate, and most of the errors are insertion or deletion errors. During alignment-based homology search, insertion or deletion errors in genes will cause frameshifts and may only lead to marginal alignment scores and short alignments. As a result, it is hard to distinguish true alignments from random alignments, and the ambiguity will incur errors in structural and functional annotation. Existing frameshift correction tools are designed for data with much lower error rates and are not optimized for PacBio data. As an increasing number of groups are using SMRT, there is an urgent need for dedicated homology search tools for PacBio data. In this work, we introduce Frame-Pro, a profile homology search tool for PacBio reads. Our tool corrects sequencing errors and also outputs the profile alignments of the corrected sequences against characterized protein families. We applied our tool to both simulated and real PacBio data. The results showed that our method enables more sensitive homology search, especially for PacBio data sets of low sequencing coverage. In addition, we can correct more errors when compared with a popular error correction tool that does not rely on hybrid sequencing. The source code is freely available at https://sourceforge.net/projects/frame-pro/. Contact: yannisun@msu.edu.
Steady-state low thermal resistance characterization apparatus: The bulk thermal tester
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burg, Brian R.; Kolly, Manuel; Blasakis, Nicolas
The reliability of microelectronic devices is largely dependent on electronic packaging, which includes heat removal. The appropriate packaging design therefore necessitates precise knowledge of the relevant material properties, including thermal resistance and thermal conductivity. Thin materials and high conductivity layers make their thermal characterization challenging. A steady state measurement technique is presented and evaluated with the purpose of characterizing samples with a thermal resistance below 100 mm^2 K/W. It is based on the heat flow meter bar approach, made up of two copper blocks, and relies exclusively on temperature measurements from thermocouples. The importance of thermocouple calibration is emphasized in order to obtain accurate temperature readings. An in-depth error analysis, based on Gaussian error propagation, is carried out. An error sensitivity analysis highlights the importance of precise knowledge of the thermal interface materials required for the measurements. Reference measurements on Mo samples reveal a measurement uncertainty in the range of 5%, and the most accurate measurements are obtained at high heat fluxes. Measurement techniques for homogeneous bulk samples, layered materials, and protruding cavity samples are discussed. Ultimately, a comprehensive overview of a steady state thermal characterization technique is provided, evaluating the accuracy of sample measurements with thermal resistances well below state of the art setups. Accurate characterization of materials used in heat removal applications, such as electronic packaging, will enable more efficient designs and ultimately contribute to energy savings.
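A generic sketch of the Gaussian error propagation named above, applied to an area-specific thermal resistance deduced from temperature drop and heat flow, is shown below; the symbols and numerical values are assumptions, not the authors' exact formulation.

import math

def thermal_resistance_with_uncertainty(delta_T, sigma_T, area_mm2, sigma_A, heat_W, sigma_Q):
    """Area-specific thermal resistance R = dT * A / Q (mm^2 K / W) with
    first-order Gaussian error propagation over the three measured quantities.
    A generic illustration of the approach named in the abstract only."""
    R = delta_T * area_mm2 / heat_W
    rel_var = (sigma_T / delta_T) ** 2 + (sigma_A / area_mm2) ** 2 + (sigma_Q / heat_W) ** 2
    return R, R * math.sqrt(rel_var)

# Illustrative numbers: 0.8 K drop across a 100 mm^2 sample carrying 150 W
R, sigma_R = thermal_resistance_with_uncertainty(0.8, 0.03, 100.0, 1.0, 150.0, 3.0)
print(f"R = {R:.3f} +/- {sigma_R:.3f} mm^2 K/W  ({100 * sigma_R / R:.1f}%)")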
b matrix errors in echo planar diffusion tensor imaging
Boujraf, Saïd; Luypaert, Robert; Osteaux, Michel
2001-01-01
Diffusion-weighted magnetic resonance imaging (DW-MRI) is a recognized tool for early detection of infarction of the human brain. DW-MRI uses the signal loss associated with the random thermal motion of water molecules in the presence of magnetic field gradients to derive parameters that reflect the translational mobility of the water molecules in tissues. If diffusion-weighted images with different values of the b matrix are acquired during one individual investigation, it is possible to calculate apparent diffusion coefficient maps that are the elements of the diffusion tensor. The diffusion tensor elements represent the apparent diffusion coefficient of the protons of water molecules in each pixel of the corresponding sample. The relation between the signal intensity in the diffusion-weighted images, the diffusion tensor, and the b matrix is derived from the Bloch equations. Our goal is to establish the magnitude of the error made in the calculation of the elements of the diffusion tensor when the imaging gradients are ignored.
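The monoexponential reduction of the stated relation, ln(S/S0) = -sum_ij b_ij D_ij, can be sketched as follows; the b-value and diffusivity are illustrative, and the cross terms from the imaging gradients, which are the subject of the paper, are deliberately ignored here.

import numpy as np

def adc_from_two_measurements(signal_b0, signal_b, b_value_s_mm2):
    """Apparent diffusion coefficient from the single-direction reduction
    S = S0 * exp(-b * D) of ln(S/S0) = -sum_ij b_ij D_ij. Imaging-gradient
    cross terms are ignored, which is exactly the error source the paper
    quantifies."""
    return -np.log(signal_b / signal_b0) / b_value_s_mm2

# Illustrative white-matter-like values: b = 1000 s/mm^2, D ~ 0.7e-3 mm^2/s
s0, s = 1000.0, 1000.0 * np.exp(-1000.0 * 0.7e-3)
print(adc_from_two_measurements(s0, s, 1000.0))   # recovers ~0.7e-3 mm^2/s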
Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors
NASA Astrophysics Data System (ADS)
Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.
2018-04-01
The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of "control and observation" is used. A versatile multi-function laser interferometer is used as the observer to measure the machine's error functions. A systematic error map of the machine's workspace is produced based on the error function measurements. The error map is then turned into an error correction strategy. The article proposes a new method of forming this error correction strategy, based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
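One plausible way to turn such an error map into a postprocessor correction is to interpolate the stored volumetric error at each commanded point and subtract it, as sketched below; the grid layout, trilinear interpolation choice, and values are assumptions, not the method of the article.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

def build_postprocessor(x_nodes, y_nodes, z_nodes, error_map):
    """Return a function that corrects commanded coordinates by subtracting
    the interpolated volumetric error. error_map has shape (nx, ny, nz, 3)
    holding the measured (dx, dy, dz) at each grid node."""
    interps = [RegularGridInterpolator((x_nodes, y_nodes, z_nodes), error_map[..., k])
               for k in range(3)]
    def correct(point):
        p = np.asarray(point, dtype=float)
        err = np.array([f(p[None, :])[0] for f in interps])
        return p - err                      # command the compensated position
    return correct

# Tiny illustrative 2x2x2 error map (mm) over a 500 mm cube
x = y = z = np.array([0.0, 500.0])
emap = np.zeros((2, 2, 2, 3)); emap[1, 1, 1] = [0.010, -0.004, 0.002]
correct = build_postprocessor(x, y, z, emap)
print(correct([250.0, 250.0, 250.0]))

In practice such a correction would be embedded in the CNC-program postprocessor rather than applied point by point as shown here.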
The detection error of thermal test low-frequency cable based on M sequence correlation algorithm
NASA Astrophysics Data System (ADS)
Wu, Dongliang; Ge, Zheyang; Tong, Xin; Du, Chunlin
2018-04-01
The problem of low accuracy and low efficiency in off-line detection of thermal test low-frequency cable faults can be addressed by designing a cable fault detection system in which an FPGA exports an M sequence code (a linear feedback shift register sequence) as the pulse signal source. The design principle of the SSTDR (spread spectrum time-domain reflectometry) reflection method and the hardware on-line monitoring setup are discussed in this paper. Test data show that the detection error increases with the fault location distance along the thermal test low-frequency cable.
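A minimal sketch of the SSTDR idea, generating an M sequence with a linear feedback shift register and locating the reflection by cross-correlation, is given below; the register taps, chip rate, and propagation velocity are illustrative assumptions, not the paper's hardware parameters.

import numpy as np

def m_sequence(n_bits=7, taps=(7, 6), seed=1):
    """Maximal-length sequence from a Fibonacci LFSR, mapped to +/-1 chips.
    Register length and tap positions here are illustrative choices."""
    state = [(seed >> i) & 1 for i in range(n_bits)]
    chips = []
    for _ in range(2 ** n_bits - 1):
        chips.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]
    return 2.0 * np.array(chips) - 1.0

def locate_fault(reference, received, chip_rate_hz, velocity_m_s=2.0e8):
    """SSTDR-style fault location: cross-correlate the received signal with
    the injected M sequence and convert the lag of the reflection peak into a
    one-way distance. Chip rate and propagation velocity are assumed values."""
    corr = np.correlate(received, reference, mode="valid")   # lag 0 .. len(received)-len(reference)
    lag = int(np.argmax(np.abs(corr[1:]))) + 1               # skip the direct (zero-lag) peak
    round_trip_s = lag / chip_rate_hz
    return 0.5 * velocity_m_s * round_trip_s

# Synthetic example: reflection from a fault 20 chips away, 30% amplitude
ref = m_sequence()
rx = np.zeros(3 * len(ref))
rx[:len(ref)] += ref                    # direct coupling
rx[20:20 + len(ref)] += 0.3 * ref       # delayed, attenuated echo from the fault
print(locate_fault(ref, rx, chip_rate_hz=100e6), "m")   # ~20 m for 100 Mchip/s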
Note: Focus error detection device for thermal expansion-recovery microscopy (ThERM).
Domené, E A; Martínez, O E
2013-01-01
An innovative focus error detection method is presented that is only sensitive to surface curvature variations, canceling both thermoreflectance and photodeflection effects. The detection scheme consists of an astigmatic probe laser and a four-quadrant detector. Nonlinear curve fitting of the defocusing signal allows the retrieval of a cutoff frequency, which only depends on the thermal diffusivity of the sample and the pump beam size. Therefore, a straightforward retrieval of the thermal diffusivity of the sample is possible with microscopic lateral resolution and high axial resolution (~100 pm).
NASA Astrophysics Data System (ADS)
Ries, Paul A.
2012-05-01
The Green Bank Telescope is a 100m, fully steerable, single dish radio telescope located in Green Bank, West Virginia and capable of making observations from meter wavelengths to 3 mm. However, observations at wavelengths shorter than 2 cm pose significant observational challenges due to pointing and surface errors. The first part of this thesis details efforts to combat wind-induced pointing errors, which reduce by half the amount of time available for high-frequency work on the telescope. The primary tool used for understanding these errors was an optical quadrant detector that monitored the motion of the telescope's feed arm. In this work, a calibration was developed that tied quadrant detector readings directly to telescope pointing error. These readings can be used for single-beam observations in order to determine if the telescope was blown off-source at some point due to wind. With observations with the 3 mm MUSTANG bolometer array, pointing errors due to wind can mostly be removed (> ⅔) during data reduction. Iapetus is a moon known for its stark albedo dichotomy, with the leading hemisphere only a tenth as bright as the trailing. In order to investigate this dichotomy, Iapetus was observed repeatedly with the GBT at wavelengths between 3 and 11 mm, with the original intention being to use the data to determine a thermal light-curve. Instead, the data showed incredible wavelength-dependent deviation from a black-body curve, with an emissivity as low as 0.3 at 9 mm. Numerous techniques were used to demonstrate that this low emissivity is a physical phenomenon rather than an observational one, including some using the quadrant detector to make sure the low emissivities are not due to being blown off source. This emissivity is among the lowest ever detected in the solar system, but can be achieved using physically realistic ice models that are also used to model microwave emission from snowpacks and glaciers on Earth. These models indicate that the trailing hemisphere contains a scattering layer of depth 100 cm and grain size of 1-2 mm. The leading hemisphere is shown to exhibit a thermal depth effect.
Solar Field Optical Characterization at Stillwater Geothermal/Solar Hybrid Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Guangdong; Turchi, Craig
Concentrating solar power (CSP) can provide additional thermal energy to boost geothermal plant power generation. For a newly constructed solar field at a geothermal power plant site, it is critical to properly characterize its performance so that the prediction of thermal power generation can be derived to develop an optimum operating strategy for a hybrid system. In the past, laboratory characterization of a solar collector has often extended into the solar field performance model and has been used to predict the actual solar field performance, disregarding realistic impacting factors. In this work, an extensive measurement on mirror slope error and receiver position error has been performed in the field by using the optical characterization tool called Distant Observer (DO). Combining a solar reflectance sampling procedure, a newly developed solar characterization program called FirstOPTIC and public software for annual performance modeling called System Advisor Model (SAM), a comprehensive solar field optical characterization has been conducted, thus allowing for an informed prediction of solar field annual performance. The paper illustrates this detailed solar field optical characterization procedure and demonstrates how the results help to quantify an appropriate tracking-correction strategy to improve solar field performance. In particular, it is found that an appropriate tracking-offset algorithm can improve the solar field performance by about 15%. The work here provides a valuable reference for the growing CSP industry.
Smith, Kenneth J; Handler, Steven M; Kapoor, Wishwa N; Martich, G Daniel; Reddy, Vivek K; Clark, Sunday
2016-07-01
This study sought to determine the effects of automated primary care physician (PCP) communication and patient safety tools, including computerized discharge medication reconciliation, on discharge medication errors and posthospitalization patient outcomes, using a pre-post quasi-experimental study design, in hospitalized medical patients with ≥2 comorbidities and ≥5 chronic medications, at a single center. The primary outcome was discharge medication errors, compared before and after rollout of these tools. Secondary outcomes were 30-day rehospitalization, emergency department visit, and PCP follow-up visit rates. This study found that discharge medication errors were lower post intervention (odds ratio = 0.57; 95% confidence interval = 0.44-0.74; P < .001). Clinically important errors, with the potential for serious or life-threatening harm, and 30-day patient outcomes were not significantly different between study periods. Thus, automated health system-based communication and patient safety tools, including computerized discharge medication reconciliation, decreased hospital discharge medication errors in medically complex patients. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin
2016-12-01
This paper presents an online method for estimating cutting error by analyzing internal sensor readings. Internal sensors of the numerical control (NC) machine tool are used to avoid installation problems. A mathematical model for estimating the cutting error is proposed to compute the relative position of the cutting point and the tool center point (TCP) from internal sensor readings, based on gear cutting theory. In order to verify the effectiveness of the proposed model, it was tested in simulations and experiments on a gear generating grinding process. The cutting error of the gear was estimated and the factors that induce cutting error were analyzed. The simulations and experiments verify that the proposed approach is an efficient way to estimate the cutting error of the workpiece during the machining process.
Asteroid thermal modeling in the presence of reflected sunlight
NASA Astrophysics Data System (ADS)
Myhrvold, Nathan
2018-03-01
A new derivation of simple asteroid thermal models is presented, investigating the need to account correctly for Kirchhoff's law of thermal radiation when IR observations contain substantial reflected sunlight. The framework applies to both the NEATM and related thermal models. A new parameterization of these models eliminates the dependence of thermal modeling on visible absolute magnitude H, which is not always available. Monte Carlo simulations are used to assess the potential impact of violating Kirchhoff's law on estimates of physical parameters such as diameter and IR albedo, with an emphasis on NEOWISE results. The NEOWISE papers use ten different models, applied to 12 different combinations of WISE data bands, in 47 different combinations. The most prevalent combinations are simulated, and the accuracy of diameter estimates is found to depend critically on the model and data band combination. In the best case, full thermal modeling of all four bands with an idealized model, the 1σ (68.27%) confidence interval is -5% to +6%, but this combination applies to just 1.9% of NEOWISE results. Other combinations, representing 42% of the NEOWISE results, have about twice that confidence interval, -10% to +12%, before accounting for errors due to irregular shape or other real-world effects that are not simulated. The model and data band combinations found for the majority of NEOWISE results have much larger systematic and random errors. Kirchhoff's law violation by NEOWISE models leads to estimation errors that are strongest for asteroids with W1, W2 band emissivity ɛ12 in both the lowest (0.605 ≤ ɛ12 ≤ 0.780) and highest (0.969 ≤ ɛ12 ≤ 0.988) deciles, corresponding to the highest and lowest deciles of near-IR albedo pIR. Systematic accuracy error between deciles ranges from a low of 5% to as much as 45%, and there are also differences in the random errors. Kirchhoff's law effects also produce large errors in NEOWISE estimates of pIR, particularly for high values. IR observations of asteroids in bands that have substantial reflected sunlight can largely avoid these problems by adopting the Kirchhoff-law-compliant modeling framework presented here, which is conceptually straightforward and comes without computational cost.
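As background to the simple thermal models discussed above (and not the paper's reparameterized framework), a minimal sketch of two standard relations used by NEATM-type models, the H-albedo-diameter relation and the subsolar temperature, might look like this; the example numbers are illustrative only.

```python
# Minimal sketch (not the paper's reparameterized model): two standard relations
# used by simple asteroid thermal models such as the NEATM.
import numpy as np

SIGMA = 5.670374419e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0                # solar constant at 1 au, W m^-2

def diameter_km(H, p_v):
    """Effective diameter from absolute magnitude H and visible geometric albedo."""
    return 1329.0 / np.sqrt(p_v) * 10 ** (-H / 5.0)

def subsolar_temperature(A_bond, r_au, eta=1.0, emissivity=0.9):
    """NEATM subsolar temperature for Bond albedo A, heliocentric distance r (au),
    beaming parameter eta and bolometric emissivity."""
    return ((1.0 - A_bond) * S0 / (r_au**2 * eta * emissivity * SIGMA)) ** 0.25

# Example numbers (illustrative only)
print(diameter_km(H=18.0, p_v=0.15))                 # ~0.86 km for these inputs
print(subsolar_temperature(A_bond=0.06, r_au=1.2))   # ~363 K for these inputs
```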
Extraction and Analysis of Display Data
NASA Technical Reports Server (NTRS)
Land, Chris; Moye, Kathryn
2008-01-01
The Display Audit Suite is an integrated package of software tools that partly automates the detection of Portable Computer System (PCS) display errors. [PCS is a laptop computer used onboard the International Space Station (ISS).] The need for automation stems from the large quantity of PCS displays (6,000+, with 1,000,000+ lines of command and telemetry data). The Display Audit Suite includes data-extraction tools, automatic error detection tools, and database tools for generating analysis spreadsheets. These spreadsheets allow engineers to more easily identify many different kinds of possible errors. The Suite supports over 40 independent analyses and complements formal testing by being comprehensive (all displays can be checked) and by revealing errors that are difficult to detect via test. In addition, the Suite can be run early in the development cycle to find and correct errors in advance of testing.
A Starshade Petal Error Budget for Exo-Earth Detection and Characterization
NASA Technical Reports Server (NTRS)
Shaklan, Stuart B.; Marchen, Luis; Lisman, P. Douglas; Cady, Eric; Martin, Stefan; Thomson, Mark; Dumont, Philip; Kasdin, N. Jeremy
2011-01-01
We present a starshade error budget with engineering requirements that are well within the current manufacturing and metrology capabilities. The error budget is based on an observational scenario in which the starshade spins about its axis on timescales short relative to the zodi-limited integration time, typically several hours. The scatter from localized petal errors is smoothed into annuli around the center of the image plane, resulting in a large reduction in the background flux variation while reducing thermal gradients caused by structural shadowing. Having identified the performance sensitivity to petal shape errors with spatial periods of 3-4 cycles/petal as the most challenging aspect of the design, we have adopted and modeled a manufacturing approach that mitigates these perturbations with 1-meter-long precision edge segments positioned using commercial metrology that readily meets assembly requirements. We have performed detailed thermal modeling and show that the expected thermal deformations are well within the requirements as well. We compare the requirements for four cases: a 32 meter diameter starshade with a 1.5 meter telescope, analyzed at 75 and 90 milliarcseconds, and a 40 meter diameter starshade with a 4 meter telescope, analyzed at 60 and 75 milliarcseconds.
NASA Astrophysics Data System (ADS)
Mohyud Din, S. T.; Zubair, T.; Usman, M.; Hamid, M.; Rafiq, M.; Mohsin, S.
2018-04-01
This study is devoted to analyzing the influence of a variable diffusion coefficient and variable thermal conductivity on heat and mass transfer in Casson fluid flow. The behavior of the concentration and temperature profiles in the presence of Joule heating and viscous dissipation is also studied. The dimensionless conservation laws with suitable BCs are solved via the Modified Gegenbauer Wavelets Method (MGWM). It has been observed that an increase in the Casson fluid parameter (β) and the parameter ɛ enhances the Nusselt number. Moreover, the Nusselt number of a Newtonian fluid is less than that of the Casson fluid. Mass transport is enhanced for a solute with a variable diffusion coefficient compared with one of constant diffusion coefficient. A detailed analysis of the results is appropriately highlighted. The obtained results, error estimates, and convergence analysis reconfirm the credibility of the proposed algorithm. It is concluded that MGWM is an appropriate tool to tackle nonlinear physical models and hence may be extended to other nonlinear problems of diversified physical nature.
High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations
NASA Astrophysics Data System (ADS)
Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin
2014-06-01
Monte Carlo methods have been used as reference reactor physics calculation tools worldwide. Advances in computer technology allow the calculation of detailed flux distributions in both space and energy. In most cases, however, those calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house developed sub-channel code SUBCHANFLOW with the standard Monte Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor physics problems has been developed. Besides the problem of effective feedback data interchange between the codes, the treatment of the temperature dependence of the continuous-energy nuclear data has been investigated.
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.
1994-01-01
Presented is a feasibility and error analysis for a hypersonic flush airdata system on a hypersonic flight experiment (HYFLITE). HYFLITE heating loads make intrusive airdata measurement impractical. Although this analysis is specifically for the HYFLITE vehicle and trajectory, the problems analyzed are generally applicable to hypersonic vehicles. A layout of the flush-port matrix is shown. Surface pressures are related to airdata parameters using a simple aerodynamic model. The model is linearized using small perturbations and inverted using nonlinear least squares. Effects of various error sources on the overall uncertainty are evaluated using an error simulation. Error sources modeled include boundary-layer/viscous interactions, pneumatic lag, thermal transpiration in the sensor pressure tubing, misalignment in the matrix layout, thermal warping of the vehicle nose, sampling resolution, and transducer error. Using simulated pressure data as input to the estimation algorithm, effects caused by the various error sources are analyzed by comparing estimator outputs with the original trajectory. To obtain ensemble averages, the simulation is run repeatedly and output statistics are compiled. Output errors resulting from the various error sources are presented as a function of Mach number. Final uncertainties with all modeled error sources included are presented as a function of Mach number.
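A hedged sketch of the general estimation idea (not the actual HYFLITE model or port layout): flush-port pressures are related to airdata through a simple modified-Newtonian-type model and inverted by nonlinear least squares; the port angles, calibration constant, and noise level are assumptions.

```python
# Illustrative sketch only: recover airdata (q_c, p_inf, angle of attack, sideslip)
# from flush-port pressures with nonlinear least squares, using a simple
# modified-Newtonian-type model p_i = q_c*(cos^2(theta_i) + eps*sin^2(theta_i)) + p_inf.
import numpy as np
from scipy.optimize import least_squares

port_cone = np.radians([0.0, 20.0, 20.0, 20.0, 20.0])   # cone angles of 5 ports (assumed)
port_clock = np.radians([0.0, 0.0, 90.0, 180.0, 270.0]) # clock angles (assumed)
EPS = 0.1                                                # assumed calibration constant

def incidence(alpha, beta):
    """Angle between each port normal and the velocity vector (body axes)."""
    u = np.array([np.cos(alpha) * np.cos(beta), np.sin(beta), np.sin(alpha) * np.cos(beta)])
    n = np.stack([np.cos(port_cone),
                  np.sin(port_cone) * np.sin(port_clock),
                  np.sin(port_cone) * np.cos(port_clock)], axis=1)
    return np.arccos(np.clip(n @ u, -1.0, 1.0))

def model(x):
    qc, pinf, alpha, beta = x
    th = incidence(alpha, beta)
    return qc * (np.cos(th) ** 2 + EPS * np.sin(th) ** 2) + pinf

truth = np.array([12e3, 2e3, np.radians(4.0), np.radians(1.0)])
p_meas = model(truth) + 20.0 * np.random.randn(5)        # add transducer noise

fit = least_squares(lambda x: model(x) - p_meas,
                    x0=np.array([10e3, 1e3, 0.0, 0.0]))
print(fit.x)   # estimated [q_c, p_inf, alpha, beta]
```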
Thermal stability analysis and modelling of advanced perpendicular magnetic tunnel junctions
NASA Astrophysics Data System (ADS)
Van Beek, Simon; Martens, Koen; Roussel, Philippe; Wu, Yueh Chang; Kim, Woojin; Rao, Siddharth; Swerts, Johan; Crotti, Davide; Linten, Dimitri; Kar, Gouri Sankar; Groeseneken, Guido
2018-05-01
STT-MRAM is a promising non-volatile memory for high speed applications. The thermal stability factor (Δ = Eb/kT) is a measure for the information retention time, and an accurate determination of the thermal stability is crucial. Recent studies show that a significant error is made using the conventional methods for Δ extraction. We investigate the origin of the low accuracy. To reduce the error down to 5%, 1000 cycles or multiple ramp rates are necessary. Furthermore, the thermal stabilities extracted from current switching and magnetic field switching appear to be uncorrelated and this cannot be explained by a macrospin model. Measurements at different temperatures show that self-heating together with a domain wall model can explain these uncorrelated Δ. Characterizing self-heating properties is therefore crucial to correctly determine the thermal stability.
ERIC Educational Resources Information Center
Huprich, Julia; Green, Ravonne
2007-01-01
The Council on Public Liberal Arts Colleges (COPLAC) libraries websites were assessed for Section 508 errors using the online WebXACT tool. Only three of the twenty-one institutions (14%) had zero accessibility errors. Eighty-six percent of the COPLAC institutions had an average of 1.24 errors. Section 508 compliance is required for institutions…
Pourasghar, Faramarz; Tabrizi, Jafar Sadegh; Yarifard, Khadijeh
2016-01-01
Background: Patient safety is one of the most important elements of quality of healthcare. It means preventing any harm to the patients during the medical care process. Objective: This paper introduces a cost-effective tool in which Radio Frequency Identification (RFID) technology is used to identify medical errors in hospital. Methods: The proposed clinical error management system (CEMS) consists of a reader device, a transfer/receiver device, a database, and managing software. The reader device is wireless and works using radio waves. The reader sends and receives data to/from the database via the transfer/receiver device, which is connected to the computer via a USB port. The database contains data about patients' medication orders. Results: The CEMS has the ability to identify clinical errors before they occur and then warns the caregiver with voice and visual messages to prevent the error. This device reduces errors and thus improves patient safety. Conclusion: A new tool including software and hardware was developed in this study. Application of this tool in clinical settings can help nurses prevent medical errors. It can also be a useful tool for clinical risk management. Using this device can improve patient safety to a considerable extent and thus improve the quality of healthcare. PMID:27147802
Pourasghar, Faramarz; Tabrizi, Jafar Sadegh; Yarifard, Khadijeh
2016-04-01
Patient safety is one of the most important elements of quality of healthcare. It means preventing any harm to the patients during the medical care process. This paper introduces a cost-effective tool in which Radio Frequency Identification (RFID) technology is used to identify medical errors in hospital. The proposed clinical error management system (CEMS) consists of a reader device, a transfer/receiver device, a database, and managing software. The reader device is wireless and works using radio waves. The reader sends and receives data to/from the database via the transfer/receiver device, which is connected to the computer via a USB port. The database contains data about patients' medication orders. The CEMS has the ability to identify clinical errors before they occur and then warns the caregiver with voice and visual messages to prevent the error. This device reduces errors and thus improves patient safety. A new tool including software and hardware was developed in this study. Application of this tool in clinical settings can help nurses prevent medical errors. It can also be a useful tool for clinical risk management. Using this device can improve patient safety to a considerable extent and thus improve the quality of healthcare.
Interferometer for Measuring Displacement to Within 20 pm
NASA Technical Reports Server (NTRS)
Zhao, Feng
2003-01-01
An optical heterodyne interferometer that can be used to measure linear displacements with an error <=20 pm has been developed. The remarkable accuracy of this interferometer is achieved through a design that includes (1) a wavefront split that reduces self-interference (relative to the amplitude splits used in other interferometers) and (2) a common-optical-path configuration that affords common-mode cancellation of the interference effects of thermal-expansion changes in optical-path lengths. The most popular method of displacement-measuring interferometry involves two beams, the polarizations of which are meant to be kept orthogonal upstream of the final interference location, where the difference between the phases of the two beams is measured. Polarization leakages (deviations from the desired perfect orthogonality) contaminate the phase measurement with periodic nonlinear errors. In commercial interferometers, these phase-measurement errors result in displacement errors in the approximate range of 1 to 10 nm. Moreover, because prior interferometers lack compensation for thermal-expansion changes in optical-path lengths, they are subject to additional displacement errors characterized by a temperature sensitivity of about 100 nm/K. Because the present interferometer does not utilize polarization in the separation and combination of the two interfering beams, and because of the common-mode cancellation of thermal-expansion effects, the periodic nonlinear errors and the sensitivity to temperature changes are much smaller than in other interferometers.
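To illustrate the phase-to-displacement relation and the periodic nonlinearity mentioned above, the following sketch assumes a double-pass configuration and an artificial polarization-leakage term; the wavelength and leakage amplitude are assumed, and this is not the instrument's actual processing.

```python
# Illustrative sketch (not the flight instrument's processing): convert heterodyne
# phase to displacement assuming a double-pass configuration, and simulate the
# periodic nonlinear error that polarization leakage adds in conventional designs.
import numpy as np

WAVELENGTH = 1.55e-6                    # assumed laser wavelength, m

def displacement(phase):
    return WAVELENGTH * phase / (4.0 * np.pi)   # double-pass: 4*pi of phase per wavelength

true_phase = np.linspace(0.0, 8.0 * np.pi, 2000)
leakage = 0.01                                   # assumed polarization leakage amplitude
measured_phase = true_phase + leakage * np.sin(2.0 * true_phase)

error = displacement(measured_phase) - displacement(true_phase)
print(f"peak periodic error ~ {1e9 * np.abs(error).max():.2f} nm")
```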
NASA Technical Reports Server (NTRS)
LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.
2011-01-01
This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.
System Measures Thermal Noise In A Microphone
NASA Technical Reports Server (NTRS)
Zuckerwar, Allan J.; Ngo, Kim Chi T.
1994-01-01
Vacuum provides acoustic isolation from the environment. System for measuring thermal noise of a microphone and its preamplifier eliminates some sources of error found in older systems. Includes isolation vessel and exterior suspension that, acting together, enable measurement of thermal noise under realistic conditions while providing superior vibrational and acoustical isolation. System yields more accurate measurements of thermal noise.
SEU induced errors observed in microprocessor systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asenek, V.; Underwood, C.; Oldfield, M.
In this paper, the authors present software tools for predicting the rate and nature of observable SEU induced errors in microprocessor systems. These tools are built around a commercial microprocessor simulator and are used to analyze real satellite application systems. Results obtained from simulating the nature of SEU induced errors are shown to correlate with ground-based radiation test data.
Optical Testing of Retroreflectors for Cryogenic Applications
NASA Technical Reports Server (NTRS)
Ohl, Raymond G.; Frey, Bradley J.; Stock, Joseph M.; McMann, Joseph C.; Zukowiski, Tmitri J.
2010-01-01
A laser tracker (LT) is an important coordinate metrology tool that uses laser interferometry to determine precise distances to objects, points, or surfaces defined by an optical reference, such as a retroreflector. A retroreflector is a precision optic consisting of three orthogonal faces that returns an incident laser beam nearly exactly parallel to the incident beam. Commercial retroreflectors are designed for operation at room temperature and are specified by the divergence, or beam deviation, of the returning laser beam, usually a few arcseconds or less. When a retroreflector goes to extreme cold (~35 K), however, it could be anticipated that the precision alignment between the three faces and the surface figure of each face would be compromised, resulting in wavefront errors and beam divergence, degrading the accuracy of the LT position determination. Controlled tests must be done beforehand to determine survivability and these LT coordinate errors. Since conventional interferometer systems and laser trackers do not operate in vacuum or at cold temperatures, measurements must be done through a vacuum window, and care must be taken to ensure window-induced errors are negligible, or can be subtracted out. Retroreflector holders must be carefully designed to minimize thermally induced stresses. Changes in the path length and refractive index of the retroreflector have to be considered. Cryogenic vacuum testing was done on commercial solid glass retroreflectors for use on cryogenic metrology tasks. The capabilities to measure wavefront errors, measure beam deviations, and acquire laser tracker coordinate data were demonstrated. Measurable but relatively small increases in beam deviation were shown, and further tests are planned to make an accurate determination of coordinate errors.
NASA Astrophysics Data System (ADS)
Choi, J. H.; Kim, S. W.; Won, J. S.
2017-12-01
The objective of this study is to monitor and evaluate the stability of buildings in Seoul, Korea. This study includes both algorithm development and application to a case study. The development focuses on improving the PSI approach for discriminating various geophysical phase components and separating them from the target displacement phase. Thermal expansion is one of the key components that make precise displacement measurement difficult. The core idea is to optimize the thermal expansion factor using air temperature data and to model the corresponding phase by fitting the residual phase. We used TerraSAR-X SAR data acquired over two years, from 2011 to 2013, in Seoul, Korea, where the seasonal temperature fluctuation is considerably high. Another problem is the many high-rise buildings in Seoul, which contribute significantly to DEM errors. To avoid a high computational burden and an unstable solution of the nonlinear equation due to the unknown parameters (a thermal expansion parameter as well as the two conventional parameters, linear velocity and DEM error), we separate the phase model into two main steps. First, multi-baseline pairs with very short time intervals, in which the deformation and thermal expansion components are negligible, are used to estimate DEM errors. Second, single-baseline pairs are used to estimate the two remaining parameters, linear deformation rate and thermal expansion. The thermal expansion of buildings closely correlates with the seasonal temperature fluctuation. Figure 1 shows deformation patterns of two selected buildings in Seoul. In the left column of Figure 1, it is difficult to observe the true ground subsidence because of a large cyclic pattern caused by thermal dilation of the buildings, which often misleads interpretation. After correction by the proposed method, true ground subsidence could be precisely measured, as in the bottom right panel of Figure 1. The results demonstrate how the thermal expansion phase obscures the time-series measurement of ground motion and how well the proposed approach removes the noise phases caused by thermal expansion and DEM errors. Some of the detected displacements matched well with pre-reported events, such as ground subsidence and a sinkhole.
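A rough illustration of the two-step estimation described above, on synthetic data with an assumed phase model and geometry (phase wrapping, atmosphere, and network inversion are ignored); none of the numbers are the paper's values.

```python
# Hedged sketch: (1) estimate the DEM error from pairs with near-zero temporal
# baseline, where motion and thermal dilation are negligible; (2) with the DEM
# term removed, jointly estimate a linear rate and a thermal-expansion coefficient.
import numpy as np

lam = 0.031                    # TerraSAR-X wavelength, m
R, inc = 620e3, np.radians(35) # assumed slant range and incidence angle
k_geom = 4 * np.pi / (lam * R * np.sin(inc))   # DEM-error phase sensitivity

rng = np.random.default_rng(0)
# Step 1: short-temporal-baseline pairs -> only the DEM-error term matters
bperp_s = rng.uniform(-200, 200, 20)           # perpendicular baselines, m
dem_err_true = 12.0                            # m
phi_s = k_geom * bperp_s * dem_err_true + 0.3 * rng.standard_normal(20)
dem_err = np.linalg.lstsq(np.c_[k_geom * bperp_s], phi_s, rcond=None)[0][0]

# Step 2: single-master pairs -> linear rate + thermal expansion (DEM term removed)
dt = np.linspace(0.1, 2.0, 24)                 # temporal baselines, years
dT = 15 * np.sin(2 * np.pi * dt)               # temperature differences, deg C
bperp = rng.uniform(-200, 200, 24)
v_true, kT_true = -0.004, 0.0003               # m/yr subsidence, m/degC dilation
phi = (4*np.pi/lam) * (v_true*dt + kT_true*dT) + k_geom*bperp*dem_err_true \
      + 0.3 * rng.standard_normal(24)
phi_corr = phi - k_geom * bperp * dem_err      # remove estimated DEM-error phase
A = (4*np.pi/lam) * np.c_[dt, dT]
v_est, kT_est = np.linalg.lstsq(A, phi_corr, rcond=None)[0]
print(dem_err, v_est, kT_est)
```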
Integrating automated structured analysis and design with Ada programming support environments
NASA Technical Reports Server (NTRS)
Hecht, Alan; Simmons, Andy
1986-01-01
Ada Programming Support Environments (APSE) include many powerful tools that address the implementation of Ada code. These tools do not address the entire software development process. Structured analysis is a methodology that addresses the creation of complete and accurate system specifications. Structured design takes a specification, derives a plan to decompose the system into subcomponents, and provides heuristics to optimize the software design to minimize errors and maintenance; it can also lead to the creation of reusable modules. Studies have shown that most software errors result from poor system specifications, and that these errors become more expensive to fix as the development process continues. Structured analysis and design help to uncover errors in the early stages of development. The APSE tools help to ensure that the code produced is correct and aid in finding obscure coding errors. However, they do not have the capability to detect errors in specifications or to detect poor designs. An automated system for structured analysis and design, TEAMWORK, which can be integrated with an APSE to support software systems development from specification through implementation, is described. These tools complement each other to help developers improve quality and productivity, as well as to reduce development and maintenance costs. Complete system documentation and reusable code also result from the use of these tools. Integrating an APSE with automated tools for structured analysis and design provides capabilities and advantages beyond those realized with any of these systems used by themselves.
NASA Astrophysics Data System (ADS)
Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod
2015-10-01
In conventional tool positioning techniques, sensors embedded in the motion stages provide accurate tool position information. In this paper, a machine-vision-based system and image processing technique are described for measuring the motion of a lathe tool from two-dimensional sequential images captured using a charge-coupled device camera with a resolution of 250 microns. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of errors in lathe tool movement due to the machine vision system, calibration, environmental factors, etc., was carried out using two soft computing techniques, namely, artificial immune system (AIS) and particle swarm optimization (PSO). The results show better capability of AIS over PSO.
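As an illustration of the kind of soft-computing error optimization mentioned above, a minimal particle swarm optimization sketch that fits a scale and offset correcting the vision-measured travel is shown below; the data, bounds, and PSO settings are made up, and the authors' AIS/PSO formulations are not reproduced here.

```python
# Minimal PSO sketch: find a scale and offset that map vision-measured tool travel
# onto the reference travel by minimizing the mean squared error.
import numpy as np

rng = np.random.default_rng(1)
ref = np.linspace(0, 50, 20)                               # commanded travel, mm
obs = 1.03 * ref + 0.4 + 0.05 * rng.standard_normal(20)    # simulated vision measurement

def cost(params):                                          # params = [scale, offset]
    corrected = (obs - params[1]) / params[0]
    return np.mean((corrected - ref) ** 2)

n, dims, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
pos = rng.uniform([0.5, -2.0], [1.5, 2.0], size=(n, dims))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(200):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("estimated scale, offset:", gbest)
```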
An Approximate Ablative Thermal Protection System Sizing Tool for Entry System Design
NASA Technical Reports Server (NTRS)
Dec, John A.; Braun, Robert D.
2005-01-01
A computer tool to perform entry vehicle ablative thermal protection system sizing has been developed. Two options for calculating the thermal response are incorporated into the tool. First, an industry-standard, high-fidelity ablation and thermal response program was integrated into the tool, making use of simulated trajectory data to calculate its boundary conditions at the ablating surface. Second, an approximate method that uses heat of ablation data to estimate heat shield recession during entry has been coupled to a one-dimensional finite-difference calculation that calculates the in-depth thermal response. The in-depth solution accounts for material decomposition, but does not account for pyrolysis gas energy absorption through the material. Engineering correlations are used to estimate stagnation point convective and radiative heating as a function of time. The sizing tool calculates recovery enthalpy, wall enthalpy, surface pressure, and heat transfer coefficient. Verification of this tool is performed by comparison to past thermal protection system sizings for the Mars Pathfinder and Stardust entry systems, and calculations are performed for an Apollo capsule entering the atmosphere at lunar and Mars return speeds.
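A heavily simplified sketch of the approximate option described above (explicit 1-D in-depth conduction plus a heat-of-ablation recession estimate) is given below; the material properties, heating pulse, and boundary treatment are assumptions for illustration, and pyrolysis-gas energy absorption is neglected, as in the tool's approximate method.

```python
# Very simplified sketch: explicit 1-D finite-difference conduction with a
# heat-of-ablation estimate of surface recession. All values are assumed.
import numpy as np

rho, cp, k = 280.0, 1500.0, 0.4          # density, specific heat, conductivity (assumed)
Q_star = 30e6                            # effective heat of ablation, J/kg (assumed)
L, n = 0.05, 101                         # slab thickness (m), nodes
dx = L / (n - 1)
alpha = k / (rho * cp)
dt = 0.2 * dx**2 / alpha                 # explicit stability limit with extra margin

T = np.full(n, 300.0)                    # initial temperature, K
recession = 0.0
t, t_end = 0.0, 60.0
while t < t_end:
    q_conv = 1.2e6 * np.exp(-((t - 30.0) / 12.0) ** 2)    # assumed heating pulse, W/m^2
    q_rad_out = 0.85 * 5.67e-8 * T[0] ** 4                # re-radiation from the surface
    q_net = max(q_conv - q_rad_out, 0.0)
    recession += (q_net / Q_star) / rho * dt               # mdot = q/Q*, sdot = mdot/rho
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T_new[0] = T[0] + dt / (rho * cp * dx) * (q_net - k * (T[0] - T[1]) / dx)
    T_new[-1] = T_new[-2]                                  # adiabatic back face
    T = T_new
    t += dt

print(f"total recession ~ {1e3 * recession:.2f} mm, back-face T = {T[-1]:.0f} K")
```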
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mkhabela, P.; Han, J.; Tyobeka, B.
2006-07-01
The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor 400 MW design (PBMR-400) coupled neutronics/thermal hydraulics transient benchmark problem as part of their official activities. The scope of the benchmark is to establish a well-defined problem, based on a common given library of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events through a set of multi-dimensional computational test problems. The benchmark includes three steady state exercises and six transient exercises. This paper describes the first two steady state exercises, their objectives and the international participation in terms of organization, country and computer code utilized. This description is followed by a comparison and analysis of the participants' results submitted for these two exercises. The comparison of results from different codes allows for an assessment of the sensitivity of a result to the method employed and can thus help to focus the development efforts on the most critical areas. The first two exercises also allow for the removal of user-related modeling errors and prepare the core neutronics and thermal-hydraulics models of the different codes for the rest of the exercises in the benchmark. (authors)
NASA Astrophysics Data System (ADS)
Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan
2017-10-01
An optimized method to calculate the error correction capability of a tool influence function (TIF) under given polishing conditions is proposed, based on a smoothing spectral function. The basic mathematical model for this method is established in theory. A set of polishing experimental data with a rigid conformal tool is used to validate the optimized method. The calculated results can quantitatively indicate the error correction capability of the TIF for different spatial frequency errors under given polishing conditions. Comparative analysis with the previous method shows that the optimized method is simpler in form and achieves the same accuracy with less computation time.
NASA Astrophysics Data System (ADS)
Abdel-Aal, H. A.; El Mansori, M.
2011-05-01
In this paper we study the failure of coated carbide tools due to thermal loading. The study emphasizes the role of the thermo-physical properties of the tool material in enhancing or preventing mass attrition of the cutting elements within the tool. It is shown that, within a comprehensive view of the nature of conduction in the tool zone, thermal conduction is not solely affected by temperature. Rather, it is a function of the so-called thermodynamic forces: the stress, the strain, the strain rate, the rate of temperature rise, and the temperature gradient. Although the resulting description of thermal conduction is non-linear, it is beneficial to employ such a form because it facilitates a full mechanistic understanding of the thermal activation of tool wear.
Thermal hydraulic simulations, error estimation and parameter sensitivity studies in Drekar::CFD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Thomas Michael; Shadid, John N.; Pawlowski, Roger P.
2014-01-01
This report describes work directed towards completion of the Thermal Hydraulics Methods (THM) CFD Level 3 Milestone THM.CFD.P7.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL) Nuclear Hub effort. The focus of this milestone was to demonstrate the thermal hydraulics and adjoint-based error estimation and parameter sensitivity capabilities in the CFD code called Drekar::CFD. This milestone builds upon the capabilities demonstrated in three earlier milestones: THM.CFD.P4.02 [12], completed March 31, 2012; THM.CFD.P5.01 [15], completed June 30, 2012; and THM.CFD.P5.01 [11], completed October 31, 2012.
Stultz, Jeremy S; Nahata, Milap C
2015-07-01
Information technology (IT) has the potential to prevent medication errors. While many studies have analyzed specific IT technologies and preventable adverse drug events, no studies have identified risk factors for errors still occurring that are not preventable by IT. The objective of this study was to categorize reported or trigger tool-identified errors and adverse events (AEs) at a pediatric tertiary care institution. Also, we sought to identify medication errors preventable by IT, determine why IT-preventable errors occurred, and identify risk factors for errors that were not preventable by IT. This was a retrospective analysis of voluntarily reported or trigger tool-identified errors and AEs occurring from 1 July 2011 to 30 June 2012. Medication errors reaching the patients were categorized based on the origin, severity, and location of the error, the month in which they occurred, and the age of the patient involved. Error characteristics were included in a multivariable logistic regression model to determine independent risk factors for errors occurring that were not preventable by IT. A medication error was defined as a medication-related failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim. An IT-preventable error was defined as having an IT system in place to aid in prevention of the error at the phase and location of its origin. There were 936 medication errors (identified by voluntary reporting or a trigger tool system) included and analyzed. Drug administration errors were identified most frequently (53.4%), but prescribing errors most frequently caused harm (47.2% of harmful errors). There were 470 (50.2%) errors that were IT preventable at their origin, including 155 due to IT system bypasses, 103 due to insensitivity of IT alerting systems, and 47 with IT alert overrides. Dispensing, administration, and documentation errors had higher odds than prescribing errors of being not preventable by IT [odds ratio (OR) 8.0, 95% CI 4.4-14.6; OR 2.4, 95% CI 1.7-3.7; and OR 6.7, 95% CI 3.3-14.5, respectively; all p < 0.001]. Errors occurring in the operating room and in the outpatient setting had higher odds than those in intensive care units of being not preventable by IT (OR 10.4, 95% CI 4.0-27.2, and OR 2.6, 95% CI 1.3-5.0, respectively; all p ≤ 0.004). Despite extensive IT implementation at the studied institution, approximately one-half of the medication errors identified by voluntary reporting or a trigger tool system were not preventable by the utilized IT systems. Inappropriate use of IT systems was a common cause of errors. The identified risk factors represent areas where IT safety features were lacking.
Accuracy Analysis and Validation of the Mars Science Laboratory (MSL) Robotic Arm
NASA Technical Reports Server (NTRS)
Collins, Curtis L.; Robinson, Matthew L.
2013-01-01
The Mars Science Laboratory (MSL) Curiosity Rover is currently exploring the surface of Mars with a suite of tools and instruments mounted to the end of a five degree-of-freedom robotic arm. To verify and meet a set of end-to-end system level accuracy requirements, a detailed positioning uncertainty model of the arm was developed and exercised over the arm operational workspace. Error sources at each link in the arm kinematic chain were estimated and their effects propagated to the tool frames. A rigorous test and measurement program was developed and implemented to collect data to characterize and calibrate the kinematic and stiffness parameters of the arm. Numerous absolute and relative accuracy and repeatability requirements were validated with a combination of analysis and test data extrapolated to the Mars gravity and thermal environment. Initial results of arm accuracy and repeatability on Mars demonstrate the effectiveness of the modeling and test program as the rover continues to explore the foothills of Mount Sharp.
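A hedged sketch of this kind of uncertainty propagation: Monte Carlo perturbation of joint and link errors through a generic 5-DOF serial kinematic chain (standard DH parameters, not the MSL arm's actual kinematics or error budget).

```python
# Perturb per-joint angular errors and link lengths of an illustrative 5-DOF serial
# arm and propagate them to tool-frame position scatter (Monte Carlo).
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Standard Denavit-Hartenberg homogeneous transform."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# Nominal DH parameters (a, alpha, d, theta) for an illustrative 5-DOF arm
nominal = [(0.3, np.pi/2, 0.1, 0.2), (0.8, 0.0, 0.0, -0.5), (0.6, 0.0, 0.0, 0.9),
           (0.1, np.pi/2, 0.0, 0.3), (0.0, 0.0, 0.2, 0.1)]

def tool_position(params):
    T = np.eye(4)
    for p in params:
        T = T @ dh_transform(*p)
    return T[:3, 3]

rng = np.random.default_rng(2)
sigma_theta, sigma_len = np.radians(0.05), 0.5e-3     # assumed 1-sigma joint/link errors
samples = []
for _ in range(5000):
    perturbed = [(a + rng.normal(0, sigma_len), alpha, d, th + rng.normal(0, sigma_theta))
                 for a, alpha, d, th in nominal]
    samples.append(tool_position(perturbed))
err = np.asarray(samples) - tool_position(nominal)
print("1-sigma tool position error per axis [mm]:", 1e3 * err.std(axis=0))
```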
Liu, Shi Qiang; Zhu, Rong
2016-01-01
Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using neural-network-based identification for MIMU, which solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. By using a neural network to model a complex multivariate and nonlinear coupling system, the errors can be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm3) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, because a gas medium is used instead of a mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is therefore applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of the three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of the uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively. PMID:26840314
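The compensation idea can be sketched as follows, assuming synthetic cross-coupled data and a small scikit-learn network in place of the authors' neural network and calibration procedure:

```python
# Illustrative sketch: train a small neural network to map raw, cross-coupled
# 6-axis MIMU outputs back to the true inputs, using calibration data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
true = rng.uniform(-1.0, 1.0, size=(4000, 6))       # normalized [wx wy wz ax ay az]

# Synthetic sensor model: cross-coupling + mild nonlinearity + noise (assumed)
C = np.eye(6) + 0.05 * rng.standard_normal((6, 6))
raw = true @ C.T + 0.02 * true**3 + 0.005 * rng.standard_normal(true.shape)

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(raw[:3000], true[:3000])                      # calibration set

pred = net.predict(raw[3000:])
residual = np.abs(pred - true[3000:]).mean()
uncomp = np.abs(raw[3000:] - true[3000:]).mean()
print(f"mean error: uncompensated {uncomp:.4f} -> compensated {residual:.4f}")
```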
A new method for the analysis of fire spread modeling errors
Francis M. Fujioka
2002-01-01
Fire spread models have a long history, and their use will continue to grow as they evolve from a research tool to an operational tool. This paper describes a new method to analyse two-dimensional fire spread modeling errors, particularly to quantify the uncertainties of fire spread predictions. Measures of error are defined from the respective spread distances of...
Chew, Keng Sheng; Kueh, Yee Cheng; Abdul Aziz, Adlihafizi
2017-03-21
Despite their importance to diagnostic accuracy, there is a paucity of literature on questionnaire tools to assess clinicians' awareness of cognitive errors. A validation study was conducted to develop a questionnaire tool to evaluate the Clinician's Awareness Towards Cognitive Errors (CATChES) in clinical decision making. The questionnaire is divided into two parts: Part A evaluates clinicians' awareness of cognitive errors in clinical decision making, while Part B evaluates their perception of specific cognitive errors. Content validation for both parts was determined first, followed by construct validation for Part A. Construct validation for Part B was not determined because the responses were set in a dichotomous format. For content validation, all items in both Part A and Part B were rated as "excellent" in terms of their relevance in clinical settings. For construct validation of Part A using exploratory factor analysis (EFA), a two-factor model with total variance extraction of 60% was determined. Two items were deleted. The EFA was then repeated, showing that all factor loadings were above the cut-off value of 0.5. The Cronbach's alpha values for both factors are above 0.6. The CATChES questionnaire is a valid tool for evaluating clinicians' awareness of cognitive errors in clinical decision making.
Sensitivity in error detection of patient specific QA tools for IMRT plans
NASA Astrophysics Data System (ADS)
Lat, S. Z.; Suriyapee, S.; Sanghangthum, T.
2016-03-01
The high complexity of dose calculation in treatment planning and the accurate delivery of an IMRT plan require high-precision verification methods. The purpose of this study is to investigate the error detection capability of patient-specific QA tools for IMRT plans. Two H&N and two prostate IMRT plans were studied with the MapCHECK2 and portal dosimetry QA tools. Measurements were undertaken for the original plans and for modified plans with intentionally introduced errors. The intentional errors comprised prescribed dose errors (±2 to ±6%) and position shifts along the X-axis and Y-axis (±1 to ±5 mm). After measurement, gamma pass rates between the original and modified plans were compared. The average gamma pass rates for the original H&N and prostate plans were 98.3% and 100% for MapCHECK2, and 95.9% and 99.8% for portal dosimetry, respectively. In the H&N plan, MapCHECK2 can detect position shift errors starting from 3 mm, while portal dosimetry can detect errors starting from 2 mm. Both devices showed similar sensitivity in detecting position shift errors in the prostate plan. For the H&N plan, MapCHECK2 can detect dose errors starting at ±4%, whereas portal dosimetry can detect them from ±2%. For the prostate plan, both devices can identify dose errors starting from ±4%. The sensitivity of error detection depends on the type of error and the plan complexity.
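For context, a brute-force sketch of the global gamma-index test that underlies such pass-rate comparisons is shown below; the dose grids, criteria, and threshold are illustrative and unrelated to the study's plans.

```python
# Simple global gamma index (dose-difference / distance-to-agreement test) on
# synthetic 2-D dose grids; 3%/3 mm criteria and a 10% low-dose cutoff are assumed.
import numpy as np

def gamma_pass_rate(ref, evl, spacing_mm, dd_pct=3.0, dta_mm=3.0, cutoff_pct=10.0):
    ny, nx = ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    dd_abs = dd_pct / 100.0 * ref.max()
    search = int(np.ceil(2 * dta_mm / spacing_mm))          # local search window
    gammas = []
    for i in range(ny):
        for j in range(nx):
            if ref[i, j] < cutoff_pct / 100.0 * ref.max():
                continue
            i0, i1 = max(i - search, 0), min(i + search + 1, ny)
            j0, j1 = max(j - search, 0), min(j + search + 1, nx)
            r2 = ((yy[i0:i1, j0:j1] - i) ** 2 + (xx[i0:i1, j0:j1] - j) ** 2) * spacing_mm**2
            d2 = (evl[i0:i1, j0:j1] - ref[i, j]) ** 2
            gamma2 = d2 / dd_abs**2 + r2 / dta_mm**2
            gammas.append(np.sqrt(gamma2.min()))
    gammas = np.array(gammas)
    return 100.0 * np.mean(gammas <= 1.0)

# Toy example: a Gaussian "dose" and a 2 mm shifted copy on a 1 mm grid
y, x = np.mgrid[0:60, 0:60]
ref = 100 * np.exp(-((x - 30) ** 2 + (y - 30) ** 2) / 200.0)
evl = 100 * np.exp(-((x - 32) ** 2 + (y - 30) ** 2) / 200.0)
print(f"gamma pass rate: {gamma_pass_rate(ref, evl, spacing_mm=1.0):.1f} %")
```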
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tran, Anh Phuong; Dafflon, Baptiste; Hubbard, Susan
TOUGH2 and iTOUGH2 are powerful models that simulate heat and fluid flows in porous and fractured media, and perform parameter estimation, sensitivity analysis and uncertainty propagation analysis. However, setting up the input files is not only tedious but error prone, and processing output files is time consuming. Here, we present an open source Matlab-based tool (iMatTOUGH) that supports the generation of all necessary inputs for both TOUGH2 and iTOUGH2 and visualizes their outputs. The tool links the inputs of TOUGH2 and iTOUGH2, making sure the two input files are consistent. It supports the generation of rectangular computational meshes, i.e., it automatically generates the elements and connections as well as their properties as required by TOUGH2. The tool also allows the specification of initial and time-dependent boundary conditions for better subsurface heat and water flow simulations. The effectiveness of the tool is illustrated by an example that uses TOUGH2 and iTOUGH2 to estimate soil hydrological and thermal properties from soil temperature data and simulate the heat and water flows at the Rifle site in Colorado.
Theoretical Analysis of Pore Pressure Diffusion in Some Basic Rock Mechanics Experiments
NASA Astrophysics Data System (ADS)
Braun, Philipp; Ghabezloo, Siavash; Delage, Pierre; Sulem, Jean; Conil, Nathalie
2018-05-01
Non-homogeneity of the pore pressure field in a specimen is an issue for characterization of the thermo-poromechanical behaviour of low-permeability geomaterials, as in the case of the Callovo-Oxfordian claystone (k < 10⁻²⁰ m²), a possible host rock for deep radioactive waste disposal in France. In tests with drained boundary conditions, excess pore pressure can result in significant errors in the measurement of material parameters. Analytical solutions are presented for the change in time of the pore pressure field in a specimen submitted to various loading paths and different rates. The pore pressure field in mechanical and thermal undrained tests is simulated with a 1D finite difference model taking into account the dead volume of the drainage system of the triaxial cell connected to the specimen. These solutions provide a simple and efficient tool for the estimation of the conditions that must hold for reliable determination of material parameters and for optimization of various test conditions to minimize the experimental duration, while keeping the measurement errors at an acceptable level.
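A rough sketch of such a 1-D finite-difference pore-pressure model with a lumped dead-volume boundary is given below; all parameter values are assumptions for illustration, not the paper's.

```python
# 1-D finite-difference pore-pressure diffusion in a specimen with a lumped
# "dead volume" boundary standing in for the drainage system. Values are assumed.
import numpy as np

c_f = 1e-9          # hydraulic diffusivity of the specimen, m^2/s (assumed)
H, n = 0.04, 81     # drainage path length (m), nodes
dz = H / (n - 1)
dt = 0.4 * dz**2 / c_f          # explicit stability limit with margin

S_storage = 1e-10               # storage coefficient, 1/Pa (assumed)
V_dead = 5e-15                  # drainage-system compressibility * volume, m^3/Pa (assumed)
k_over_mu = c_f * S_storage     # Darcy mobility consistent with the diffusivity, m^2/(Pa s)
area = 1.2e-3                   # specimen cross-section, m^2

p = np.full(n, 1.0e5)           # initial excess pore pressure, Pa
p_dead = 0.0                    # drainage line initially at zero excess pressure

for _ in range(20000):
    q_out = k_over_mu * (p[1] - p[0]) / dz * area          # Darcy flow into the drainage line
    p_dead += q_out * dt / V_dead
    p_new = p.copy()
    p_new[1:-1] = p[1:-1] + c_f * dt / dz**2 * (p[2:] - 2 * p[1:-1] + p[:-2])
    p_new[0] = p_dead                                      # drained end follows line pressure
    p_new[-1] = p_new[-2]                                  # undrained (no-flow) far end
    p = p_new

print(f"remaining excess pressure at far end: {p[-1]:.0f} Pa, line: {p_dead:.0f} Pa")
```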
Kim, Haksoo; Park, Samuel B; Monroe, James I; Traughber, Bryan J; Zheng, Yiran; Lo, Simon S; Yao, Min; Mansur, David; Ellis, Rodney; Machtay, Mitchell; Sohn, Jason W
2015-08-01
This article proposes quantitative analysis tools and digital phantoms to quantify the intrinsic errors of deformable image registration (DIR) systems and to establish quality assurance (QA) procedures for clinical use of DIR systems, utilizing local and global error analysis methods with clinically realistic digital image phantoms. Landmark-based image registration verifications are suitable only for images with significant feature points. To address this shortfall, we adapted a deformation vector field (DVF) comparison approach with new analysis techniques to quantify the results. Digital image phantoms are derived from data sets of actual patient images (a reference image set, R, and a test image set, T). Image sets from the same patient taken at different times are registered with deformable methods, producing a reference DVFref. Applying DVFref to the original reference image deforms T into a new image R'. The data set R', T, and DVFref thus forms a realistic truth set and can therefore be used to analyze any DIR system and expose intrinsic errors by comparing DVFref and DVFtest. For quantitative error analysis, calculating and delineating differences between DVFs, two methods were used: (1) a local error analysis tool that displays deformation error magnitudes with color mapping on each image slice, and (2) a global error analysis tool that calculates a deformation error histogram, which describes a cumulative probability function of errors for each anatomical structure. Three digital image phantoms were generated from three patients with head and neck, lung, and liver cancers. The DIR QA was evaluated using the head and neck case. © The Author(s) 2014.
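The two analysis ideas (a local error-magnitude map and a global cumulative error histogram) can be sketched as follows on placeholder arrays standing in for DVFref and DVFtest:

```python
# Minimal sketch: per-voxel deformation error magnitude and a cumulative error
# histogram inside a structure mask. Arrays here are synthetic placeholders.
import numpy as np

shape = (40, 64, 64)                                   # (slices, rows, cols)
rng = np.random.default_rng(4)
dvf_ref = rng.normal(0.0, 2.0, size=shape + (3,))      # reference DVF, mm
dvf_test = dvf_ref + rng.normal(0.0, 0.8, size=shape + (3,))   # DIR system under test

# (1) local analysis: error magnitude per voxel, e.g. for color-mapped slice display
err_mag = np.linalg.norm(dvf_test - dvf_ref, axis=-1)
slice_map = err_mag[20]                                # one axial slice to display

# (2) global analysis: cumulative error histogram inside a structure mask
mask = np.zeros(shape, dtype=bool)
mask[15:25, 20:40, 20:40] = True                       # placeholder "structure"
errors = np.sort(err_mag[mask])
cdf = np.arange(1, errors.size + 1) / errors.size
p95 = errors[np.searchsorted(cdf, 0.95)]
print(f"95th-percentile deformation error in structure: {p95:.2f} mm")
```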
Tool Wear Monitoring Using Time Series Analysis
NASA Astrophysics Data System (ADS)
Song, Dong Yeul; Ohara, Yasuhiro; Tamaki, Haruo; Suga, Masanobu
A tool wear monitoring approach considering the nonlinear behavior of the cutting mechanism caused by tool wear and/or localized chipping is proposed, and its effectiveness is verified through cutting experiments and actual turning machining. Moreover, the variation in the surface roughness of the machined workpiece is also discussed using this approach. In this approach, the residual error between the actually measured vibration signal and the signal estimated from a time series model corresponding to the dynamic model of cutting is introduced as the diagnostic feature. Consequently, it is found that the early tool wear state (i.e., flank wear under 40 µm) can be monitored, and that the optimal tool exchange time and the tool wear state in actual turning machining can be judged from this change in the residual error. Moreover, the variation of surface roughness Pz in the range of 3 to 8 µm can be estimated by monitoring the residual error.
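A minimal sketch of the residual-based monitoring idea: fit an autoregressive model to vibration from a sharp tool and track the residual RMS of later signals against it. The AR order and synthetic signals are assumptions, not the authors' model of cutting dynamics.

```python
# Fit a low-order AR model to a "sharp tool" vibration signal, then compare the
# residual error of a "worn tool" signal against that reference model.
import numpy as np

def fit_ar(x, order=8):
    """Least-squares AR coefficients: x[t] ~ sum_k a[k] * x[t-k-1]."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    return np.linalg.lstsq(X, y, rcond=None)[0]

def residual_rms(x, a):
    order = len(a)
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    return np.sqrt(np.mean((x[order:] - X @ a) ** 2))

rng = np.random.default_rng(5)
t = np.arange(20000) / 20e3
sharp = np.sin(2 * np.pi * 1.2e3 * t) + 0.1 * rng.standard_normal(t.size)
worn = sharp + 0.4 * np.sin(2 * np.pi * 3.1e3 * t) ** 3       # extra nonlinear content

a = fit_ar(sharp)                       # reference dynamic model (sharp tool)
print("residual RMS, sharp tool:", residual_rms(sharp, a))
print("residual RMS, worn tool :", residual_rms(worn, a))
```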
Power Measurement Errors on a Utility Aircraft
NASA Technical Reports Server (NTRS)
Bousman, William G.
2002-01-01
Extensive flight test data obtained from two recent performance tests of a UH 60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH 60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.
Kurylyk, Barret L.; Irvine, Dylan J.; Carey, Sean K.; Briggs, Martin A.; Werkema, Dale D.; Bonham, Mariah
2017-01-01
Groundwater flow advects heat, and thus, the deviation of subsurface temperatures from an expected conduction‐dominated regime can be analysed to estimate vertical water fluxes. A number of analytical approaches have been proposed for using heat as a groundwater tracer, and these have typically assumed a homogeneous medium. However, heterogeneous thermal properties are ubiquitous in subsurface environments, both at the scale of geologic strata and at finer scales in streambeds. Herein, we apply the analytical solution of Shan and Bodvarsson (2004), developed for estimating vertical water fluxes in layered systems, in 2 new environments distinct from previous vadose zone applications. The utility of the solution for studying groundwater‐surface water exchange is demonstrated using temperature data collected from an upwelling streambed with sediment layers, and a simple sensitivity analysis using these data indicates the solution is relatively robust. Also, a deeper temperature profile recorded in a borehole in South Australia is analysed to estimate deeper water fluxes. The analytical solution is able to match observed thermal gradients, including the change in slope at sediment interfaces. Results indicate that not accounting for layering can yield errors in the magnitude and even direction of the inferred Darcy fluxes. A simple automated spreadsheet tool (Flux‐LM) is presented to allow users to input temperature and layer data and solve the inverse problem to estimate groundwater flux rates from shallow (e.g., <1 m) or deep (e.g., up to 100 m) profiles. The solution is not transient, and thus, it should be cautiously applied where diel signals propagate or in deeper zones where multi‐decadal surface signals have disturbed subsurface thermal regimes.
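For orientation, a sketch of the general heat-as-a-tracer idea using the classic single-layer steady-state solution of Bredehoeft and Papadopulos (1965), rather than the layered Shan and Bodvarsson (2004) solution applied in the paper; thermal properties and the synthetic profile are assumed.

```python
# Fit a vertical Darcy flux to a temperature-depth profile between fixed boundary
# temperatures (homogeneous, steady-state case; positive q is downward flow).
import numpy as np
from scipy.optimize import curve_fit

RHO_C_W = 4.18e6       # volumetric heat capacity of water, J m^-3 K^-1
K_BULK = 1.4           # bulk thermal conductivity of saturated sediment, W m^-1 K^-1 (assumed)
L = 1.0                # profile depth, m
T_TOP, T_BOT = 14.0, 9.0   # boundary temperatures, deg C (assumed)

def profile(z, q):
    """Steady temperature at depth z for vertical Darcy flux q (m/s)."""
    pe = RHO_C_W * q * L / K_BULK
    return T_TOP + (T_BOT - T_TOP) * np.expm1(pe * z / L) / np.expm1(pe)

z_obs = np.linspace(0.05, 0.95, 10)
q_true = -2e-7                                 # upwelling (negative = upward), m/s
T_obs = profile(z_obs, q_true) + 0.02 * np.random.randn(z_obs.size)

q_fit, _ = curve_fit(profile, z_obs, T_obs, p0=[1e-8])
print(f"estimated Darcy flux: {q_fit[0]:.2e} m/s (true {q_true:.1e})")
```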
NASA Astrophysics Data System (ADS)
Pickett, Brian K.; Cassen, Patrick; Durisen, Richard H.; Link, Robert
2000-02-01
In the paper ``The Effects of Thermal Energetics on Three-dimensional Hydrodynamic Instabilities in Massive Protostellar Disks. II. High-Resolution and Adiabatic Evolutions'' by Brian K. Pickett, Patrick Cassen, Richard H. Durisen, and Robert Link (ApJ, 529, 1034 [2000]), the wrong version of Figure 10 was published as a result of an error at the Press. The correct version of Figure 10 appears below. The Press sincerely regrets this error.
Thermal Conductance of Pressed Bimetal Contact Pairs at Liquid Nitrogen Temperatures
NASA Technical Reports Server (NTRS)
Kittle, Peter; Salerno, Louis J.; Spivak, Alan L.
1994-01-01
Large Dewars often use aluminum radiation shields and stainless steel vent lines. A simple, low cost method of making thermal contact between the shield and the line is to deform the shield around the line. A knowledge of the thermal conductance of such a joint is needed to thermally analyze the system. The thermal conductance of pressed metal contacts consisting of one aluminum and one stainless steel contact has been measured at 77 K, with applied forces from 8.9 N to 267 N. Either 5052 or 5083 aluminum was used as the upper contact; the lower contact was 304L stainless steel. The thermal conductance was found to be linear in temperature over the narrow temperature range of the measurements. As the force was increased, the thermal conductance ranged from roughly 9 to 21 mW/K, with measurement errors from 3% to 8%. Within the range of error, no difference could be found between using either of the aluminum alloys as the upper contact. Extrapolating the data to zero applied force does not result in zero thermal conductance. Possible causes of this anomalous effect are discussed.
A thermal scale modeling study for Apollo and Apollo applications, volume 1
NASA Technical Reports Server (NTRS)
Shannon, R. L.
1972-01-01
The program for developing and demonstrating the capabilities of thermal scale modeling as a thermal design and verification tool for Apollo and Apollo Applications Projects is reported. The work performed on thermal scale modeling of the STB; the cabin atmosphere/spacecraft cabin wall thermal interface; the closed-loop heat rejection radiator; and the docked module/spacecraft thermal interface is discussed, along with the test facility requirements for thermal scale model testing of AAP spacecraft. It is concluded that thermal scale modeling can be used as an effective thermal design and verification tool to provide data early in a spacecraft development program.
USING TIME VARIANT VOLTAGE TO CALCULATE ENERGY CONSUMPTION AND POWER USE OF BUILDING SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makhmalbaf, Atefe; Augenbroe, Godfried
2015-12-09
Buildings are the main consumers of electricity across the world. However, in research and studies related to building performance assessment, the focus has been on evaluating the energy efficiency of buildings, whereas instantaneous power efficiency has been overlooked as an important aspect of total energy consumption. As a result, we have not developed adequate models that capture both thermal and electrical characteristics (e.g., voltage) of building systems to assess the impact of variations in the power system and emerging smart-grid technologies on buildings' energy and power performance, and vice versa. This paper argues that the power performance of buildings as a function of electrical parameters should be evaluated in addition to systems' mechanical and thermal behavior. The main advantage of capturing the electrical behavior of building load is to better understand instantaneous power consumption and, more importantly, to control it. Voltage is one of the electrical parameters that can be used to describe load. Hence, voltage-dependent power models are constructed in this work and coupled with existing thermal energy models. The lack of models that describe the electrical behavior of systems also adds to the uncertainty of energy consumption calculations carried out in building energy simulation tools such as EnergyPlus, a common building energy modeling and simulation tool. To integrate voltage-dependent power models with thermal models, the thermal cycle (operation mode) of each system was fed into the voltage-based electrical model. Energy consumption of the systems used in this study was simulated using EnergyPlus. Simulated results were then compared with estimated and measured power data. The mean square error (MSE) between simulated, estimated, and measured values was calculated. Results indicate that estimated power has a lower MSE with respect to measured data than the simulated results. The results discussed in this paper illustrate the significance of enhancing building energy models with electrical characteristics. This supports studies related to modernization of the power system that require micro-scale building-grid interaction, evaluating building energy efficiency with power efficiency considerations, and design and control decisions that rely on the accuracy of building energy simulation results.
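As a concrete illustration of a voltage-dependent load description, the sketch below uses a generic ZIP (constant-impedance, constant-current, constant-power) form gated by a thermal operation schedule; this is a textbook stand-in with made-up coefficients, not the specific voltage-based models constructed in the paper.

```python
# Hypothetical ZIP load model: power drawn as a function of supply voltage,
# gated by a thermal cycle (on/off operation mode) from an energy model.
def zip_power(v, v0=230.0, p0=1500.0, a_z=0.4, a_i=0.3, a_p=0.3):
    """Real power (W) at voltage v for a load rated p0 W at nominal voltage v0."""
    r = v / v0
    return p0 * (a_z * r**2 + a_i * r + a_p)

def instantaneous_power(voltage_series, on_off_series):
    """Combine a voltage time series with a thermal-cycle schedule (1=on, 0=off)."""
    return [zip_power(v) * on for v, on in zip(voltage_series, on_off_series)]

# Example: a voltage sag reduces the constant-impedance and constant-current
# portions of the draw while the constant-power portion is unchanged.
print(zip_power(230.0), zip_power(225.4))
```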
Thin film absorption characterization by focus error thermal lensing
NASA Astrophysics Data System (ADS)
Domené, Esteban A.; Schiltz, Drew; Patel, Dinesh; Day, Travis; Jankowska, E.; Martínez, Oscar E.; Rocca, Jorge J.; Menoni, Carmen S.
2017-12-01
A simple, highly sensitive technique for measuring absorbed power in thin film dielectrics based on thermal lensing is demonstrated. Absorption of an amplitude modulated or pulsed incident pump beam by a thin film acts as a heat source that induces thermal lensing in the substrate. A second continuous wave collimated probe beam defocuses after passing through the sample. Determination of absorption is achieved by quantifying the change of the probe beam profile at the focal plane using a four-quadrant detector and cylindrical lenses to generate a focus error signal. This signal is inherently insensitive to deflection, which removes the noise contribution from beam pointing instability. A linear dependence of the focus error signal on the absorbed power is shown for a dynamic range of over 10^5. This technique was used to measure absorption loss in dielectric thin films deposited on fused silica substrates. In pulsed configuration, a single shot sensitivity of about 20 ppm is demonstrated, providing a unique technique for the characterization of moving targets as found in thin film growth instrumentation.
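For context, the astigmatic focus-error signal from a four-quadrant detector is conventionally formed as the difference of the diagonal quadrant sums normalized by the total power; the sketch below shows that standard definition with illustrative quadrant readings, not the specific electronics of this instrument.

```python
# Standard astigmatic focus-error signal (FES) from four quadrant photocurrents.
def focus_error_signal(a, b, c, d):
    """FES = ((A + C) - (B + D)) / (A + B + C + D); quadrants labeled clockwise."""
    total = a + b + c + d
    return ((a + c) - (b + d)) / total if total else 0.0

# Normalization by the total removes common-mode power fluctuations, and the
# diagonal combination makes the signal insensitive to pure beam deflection.
print(focus_error_signal(1.02, 0.98, 1.03, 0.97))   # slightly defocused probe
```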
Khan, Waseem S; Hamadneh, Nawaf N; Khan, Waqar A
2017-01-01
In this study, a multilayer perceptron neural network (MLPNN) was employed to predict the thermal conductivity of PVP electrospun nanocomposite fibers with multiwalled carbon nanotubes (MWCNTs) and Nickel Zinc ferrites [(Ni0.6Zn0.4)Fe2O4]. This is the second attempt at applying an MLPNN with the prey predator algorithm to predict the thermal conductivity of PVP electrospun nanocomposite fibers. The prey predator algorithm was used to train the neural networks and find the best models. The best models have the minimum sum of squared errors between the experimental testing data and the corresponding model results. The minimum error was found to be 0.0028 for the MWCNT model and 0.00199 for the Ni-Zn ferrite model. The predicted artificial neural network (ANN) responses were analyzed statistically using the z-test, the correlation coefficient, and the error functions for both inclusions. The predicted ANN responses for PVP electrospun nanocomposite fibers were compared with the experimental data and were found to be in good agreement.
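A minimal sketch of a perceptron-style regression of thermal conductivity, assuming scikit-learn's MLPRegressor as a stand-in trainer (the paper trains its network with a prey predator algorithm, which is not reproduced here) and random placeholder data in place of the measured filler loadings and conductivities.

```python
# Hypothetical MLP regression: predict thermal conductivity from composition
# features and score the model by a sum of squared errors.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 10.0, size=(120, 2))          # e.g. wt% MWCNT, wt% ferrite (placeholder)
y = 0.25 + 0.03 * X[:, 0] + 0.015 * X[:, 1] + rng.normal(0, 0.01, 120)  # synthetic k, W/m/K

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

sse = float(np.sum((model.predict(X_te) - y_te) ** 2))   # sum of squared errors on test data
print(f"test SSE: {sse:.4f}")
```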
Achievable flatness in a large microwave power transmitting antenna
NASA Technical Reports Server (NTRS)
Ried, R. C.
1980-01-01
A dual reference SPS system with pseudoisotropic graphite composite as a representative dimensionally stable composite was studied. The loads, accelerations, thermal environments, temperatures and distortions were calculated for a variety of operational SPS conditions, along with statistical considerations of material properties, manufacturing tolerances, measurement accuracy and the resulting line of sight (LOS) and local slope distributions. A LOS error and a subarray rms slope error of two arc minutes can be achieved with a passive system. Results show that existing materials, measurement, manufacturing, assembly and alignment techniques can be used to build the microwave power transmission system antenna structure. Manufacturing tolerance can be critical to rms slope error. The slope error budget can be met with a passive system. Structural joints without free play are essential in the assembly of the large truss structure. Variations in material properties, particularly the part-to-part variation in coefficient of thermal expansion, are more significant than the actual values.
Measurement of thermal conductivity and thermal diffusivity using a thermoelectric module
NASA Astrophysics Data System (ADS)
Beltrán-Pitarch, Braulio; Márquez-García, Lourdes; Min, Gao; García-Cañadas, Jorge
2017-04-01
A proof of concept of using a thermoelectric module to measure both the thermal conductivity and thermal diffusivity of bulk disc samples at room temperature is demonstrated. The method involves the calculation of the integral area of an impedance spectrum, which empirically correlates with the thermal properties of the sample through an exponential relationship. This relationship was obtained employing different reference materials. The impedance spectroscopy measurements are performed in a very simple setup, comprising a thermoelectric module that is soldered at its bottom side to a Cu block (heat sink) and thermally connected to the sample at its top side with thermal grease. Random and systematic errors of the method were calculated for the thermal conductivity (18.6% and 10.9%, respectively) and thermal diffusivity (14.2% and 14.7%, respectively) employing a BCR724 standard reference material. Although the errors are somewhat high, the technique could be useful in its current state for screening purposes or high-throughput measurements. This new method establishes a new application for thermoelectric modules as thermal property sensors. It involves a very simple setup in conjunction with a frequency response analyzer, which provides a low cost alternative to most of the currently available apparatus on the market. In addition, impedance analyzers are reliable and widely available equipment, which eases the sometimes difficult access to thermal conductivity facilities.
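The calibration step described above can be sketched as fitting an exponential relation between the integral area of the impedance spectrum and the known conductivity of reference samples, then inverting it for an unknown sample; the functional form, data, and coefficients below are illustrative assumptions only.

```python
# Hypothetical calibration: integral area S of the impedance spectrum vs. thermal
# conductivity k of reference materials, assumed to follow k = a * exp(b * S).
import numpy as np
from scipy.optimize import curve_fit

def k_of_area(s, a, b):
    return a * np.exp(b * s)

s_ref = np.array([1.2, 0.9, 0.6, 0.4])        # integral areas for references (made up)
k_ref = np.array([1.4, 2.5, 5.0, 9.0])        # known conductivities, W m^-1 K^-1 (made up)

(a, b), _ = curve_fit(k_of_area, s_ref, k_ref, p0=(10.0, -2.0))

s_sample = 0.75                                # measured area for an unknown sample
print(f"estimated k = {k_of_area(s_sample, a, b):.2f} W/m/K")
```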
Space Shuttle Debris Impact Tool Assessment Using the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Rayos, Elonsio M.; Campbell, Charles H.; Rickman, Steven L.; Larsen, Curtis E.
2007-01-01
Complex computer codes are used to estimate thermal and structural reentry loads on the Shuttle Orbiter induced by ice and foam debris impact during ascent. Such debris can create cavities in the Shuttle Thermal Protection System. The sizes and shapes of these cavities are approximated to accommodate a code limitation that requires simple "shoebox" geometries to describe the cavities -- rectangular areas and planar walls that are at constant angles with respect to vertical. These approximations induce uncertainty in the code results. The Modern Design of Experiments (MDOE) has recently been applied to develop a series of resource-minimal computational experiments designed to generate low-order polynomial graduating functions to approximate the more complex underlying codes. These polynomial functions were then used to propagate cavity geometry errors to estimate the uncertainty they induce in the reentry load calculations performed by the underlying code. This paper describes a methodological study focused on evaluating the application of MDOE to future operational codes in a rapid and low-cost way to assess the effects of cavity geometry uncertainty.
NASA Astrophysics Data System (ADS)
Shadid, J. N.; Smith, T. M.; Cyr, E. C.; Wildey, T. M.; Pawlowski, R. P.
2016-09-01
A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling, is the assessment of the predictive capability of specific proposed mathematical models. In this respect the understanding of numerical error, the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In this study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier-Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. Initial results are presented that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.
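To make the adjoint idea concrete, the sketch below computes the sensitivity of a scalar response of a small discretized linear system to a model parameter using the discrete adjoint relation dJ/dp = -lambda^T (dA/dp) u with A^T lambda = dJ/du; this generic toy problem is only an assumed illustration, not the stabilized finite-element formulation used in the study.

```python
# Toy discrete-adjoint sensitivity: J(u) = c^T u with A(p) u = b, A(p) = A0 + p*A1.
import numpy as np

A0 = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
A1 = np.eye(3)                      # dA/dp (assumed parameter dependence)
b = np.array([1.0, 2.0, 3.0])
c = np.array([0.0, 0.0, 1.0])       # response picks the last unknown
p = 0.5

A = A0 + p * A1
u = np.linalg.solve(A, b)           # forward solve
lam = np.linalg.solve(A.T, c)       # adjoint solve (dJ/du = c)
dJdp_adjoint = -lam @ (A1 @ u)      # one adjoint solve gives dJ/dp for any parameter

# Finite-difference check of the adjoint sensitivity.
eps = 1e-6
u_p = np.linalg.solve(A0 + (p + eps) * A1, b)
dJdp_fd = (c @ u_p - c @ u) / eps
print(dJdp_adjoint, dJdp_fd)
```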
Martin, Markus; Dressing, Andrea; Bormann, Tobias; Schmidt, Charlotte S M; Kümmerer, Dorothee; Beume, Lena; Saur, Dorothee; Mader, Irina; Rijntjes, Michel; Kaller, Christoph P; Weiller, Cornelius
2017-08-01
The study aimed to elucidate areas involved in recognizing tool-associated actions, and to characterize the relationship between recognition and active performance of tool use. We performed voxel-based lesion-symptom mapping in a prospective cohort of 98 acute left-hemisphere ischemic stroke patients (68 male; age mean ± standard deviation, 65 ± 13 years; examination 4.4 ± 2 days post-stroke). In a video-based test, patients distinguished correct tool-related actions from actions with spatio-temporal errors (incorrect grip, kinematics, or tool orientation) or conceptual errors (incorrect tool-recipient matching, e.g., spreading jam on toast with a paintbrush). Moreover, spatio-temporal and conceptual errors were determined during actual tool use. Deficient spatio-temporal error discrimination followed lesions within a dorsal network in which the inferior parietal lobule (IPL) and the lateral temporal cortex (LTC) were specifically relevant for assessing functional hand postures and kinematics, respectively. Conversely, impaired recognition of conceptual errors resulted from damage to ventral stream regions including the anterior temporal lobe. Furthermore, LTC and IPL lesions impacted differently on action recognition and active tool use, respectively. In summary, recognition of tool-associated actions relies on a componential network. Our study particularly highlights the dissociable roles of the LTC and IPL for the recognition of action kinematics and functional hand postures, respectively.
NASA Astrophysics Data System (ADS)
Zhang, P. P.; Guo, Y.; Wang, B.
2017-05-01
The main problems in milling difficult-to-machine materials are the high cutting temperature and rapid tool wear. However, it is difficult to observe tool wear directly during machining. Tool wear and cutting chip formation are two of the most important indicators of machining efficiency and quality. The purpose of this paper is to develop a model relating tool wear to cutting chip formation (width of chip and radian of chip) for difficult-to-machine materials, so that tool wear can be monitored through chip formation. A milling experiment on a machining centre with three sets of cutting parameters was performed to obtain chip formation and tool wear data. The experimental results show that tool wear increases gradually as cutting proceeds, while the width and radian of the chip decrease. The model is developed by fitting the experimental data and applying formula transformations. Most of the tool wear values monitored from chip formation have errors of less than 10%, and the smallest error is 0.2%. Overall, the errors obtained from the chip radian are smaller than those obtained from the chip width. This provides a new way to monitor and detect tool wear from cutting chip formation in milling difficult-to-machine materials.
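A minimal sketch of the monitoring idea, assuming a simple least-squares relation between measured chip width and flank wear (the paper's actual fitted model and data are not reproduced); the numbers below are placeholders.

```python
# Hypothetical indirect monitoring: fit tool wear VB against chip width, then
# invert the fit so a new chip-width measurement yields a wear estimate.
import numpy as np

chip_width = np.array([1.80, 1.72, 1.65, 1.60, 1.54])   # mm, placeholder measurements
wear_vb = np.array([0.05, 0.09, 0.13, 0.16, 0.20])      # mm, placeholder flank wear

slope, intercept = np.polyfit(chip_width, wear_vb, 1)   # linear fit VB = slope*w + intercept

def wear_from_chip(width_mm):
    """Estimate flank wear (mm) from a measured chip width (mm)."""
    return slope * width_mm + intercept

measured = 1.58
print(f"estimated wear at chip width {measured} mm: {wear_from_chip(measured):.3f} mm")
```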
Leyde, Brian P; Klein, Sanford A; Nellis, Gregory F; Skye, Harrison
2017-03-01
This paper presents a new method called the Crossed Contour Method for determining the effective properties (borehole radius and ground thermal conductivity) of a vertical ground-coupled heat exchanger. The borehole radius is used as a proxy for the overall borehole thermal resistance. The method has been applied to both simulated and experimental borehole Thermal Response Test (TRT) data using the Duct Storage vertical ground heat exchanger model implemented in the TRansient SYstems Simulation software (TRNSYS). The Crossed Contour Method generates a parametric grid of simulated TRT data for different combinations of borehole radius and ground thermal conductivity in a series of time windows. The error between the average of the simulated and experimental bore field inlet and outlet temperatures is calculated for each set of borehole properties within each time window. Using these data, contours of the minimum error are constructed in the parameter space of borehole radius and ground thermal conductivity. When all of the minimum error contours for each time window are superimposed, the point where the contours cross (intersect) identifies the effective borehole properties for the model that most closely represents the experimental data in every time window and thus over the entire length of the experimental data set. The computed borehole properties are compared with results from existing model inversion methods including the Ground Property Measurement (GPM) software developed by Oak Ridge National Laboratory, and the Line Source Model.
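The sketch below illustrates the crossing idea with an assumed, purely illustrative forward model in place of the TRNSYS duct-storage simulation: for each time window, the (radius, conductivity) pairs that minimize the fit error trace a contour, and the parameter pair where the contours from different windows intersect is taken as the effective property set.

```python
# Illustrative Crossed Contour Method with a stand-in forward model (not the DST/TRNSYS model).
import numpy as np

def forward_temp(t_hours, r_b, k_g):
    """Assumed surrogate for the simulated mean borehole fluid temperature."""
    return 20.0 + 2.5 / k_g * np.log(t_hours + 1.0) - 8.0 * r_b

t = np.linspace(1.0, 48.0, 200)                  # hours
true_r, true_k = 0.07, 2.3                       # "experimental" ground truth (synthetic)
t_meas = forward_temp(t, true_r, true_k) + np.random.default_rng(2).normal(0, 0.02, t.size)

r_grid = np.linspace(0.05, 0.10, 51)
k_grid = np.linspace(1.5, 3.5, 51)
windows = [(0, 50), (50, 100), (100, 150), (150, 200)]

# For each time window, the minimum-error contour: the best radius for every conductivity.
contours = []
for lo, hi in windows:
    best_r = []
    for k in k_grid:
        err = [np.sqrt(np.mean((forward_temp(t[lo:hi], r, k) - t_meas[lo:hi]) ** 2))
               for r in r_grid]
        best_r.append(r_grid[int(np.argmin(err))])
    contours.append(np.array(best_r))

# The crossing point: the conductivity where the windows' contours agree most closely.
spread = np.ptp(np.vstack(contours), axis=0)
i_star = int(np.argmin(spread))
print(f"effective k ~ {k_grid[i_star]:.2f} W/m/K, "
      f"r_b ~ {np.mean([c[i_star] for c in contours]):.3f} m")
```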
NASA Technical Reports Server (NTRS)
Depater, I.
1977-01-01
Observations were made of Jupiter with the Westerbork telescope at all three available frequencies: 610 MHz, 1415 MHz, and 4995 MHz. The raw measurements were corrected for position errors, atmospheric extinction, Faraday rotation, clock, frequency, and baseline errors, and errors due to a shadowing effect. The data were then converted into a brightness distribution on the sky by Fourier transformation. Maps of both thermal and nonthermal radiation were developed. Results indicate that the thermal disk of Jupiter measured at a wavelength of 6 cm has a temperature of 236 ± 15 K. The radiation belts have an overall structure governed by the trapping of electrons in the dipolar field of the planet, with significant beaming of the synchrotron radiation into the plane of the magnetic equator.
NASA Astrophysics Data System (ADS)
Kloppstech, K.; Könne, N.; Worbes, L.; Hellmann, D.; Kittel, A.
2015-11-01
We report on a precise in situ procedure to calibrate the heat flux sensor of a near-field scanning thermal microscope. This sensitive thermal measurement is based on a 1ω modulation technique and utilizes a hot wire method to build an accessible and controllable heat reservoir. This reservoir is coupled thermally by near-field interactions to our probe. Thus, the sensor's conversion relation V_th(Q*_GS) can be precisely determined, where V_th is the thermopower generated in the sensor's coaxial thermocouple and Q*_GS is the thermal flux from the reservoir through the sensor. We analyze our method with Gaussian error calculus, with an error estimate on all involved quantities. The overall relative uncertainty of the calibration procedure is evaluated to be about 8% for the measured conversion constant, i.e., (2.40 ± 0.19) μV/μW. Furthermore, we determine the sensor's thermal resistance to be about 0.21 K/μW and find the thermal resistance of the near-field mediated coupling, at a distance between calibration standard and sensor of about 250 pm, to be 53 K/μW.
Report on Automated Semantic Analysis of Scientific and Engineering Codes
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.; Follen, Greg (Technical Monitor)
2001-01-01
The loss of the Mars Climate Orbiter due to a software error reveals what insiders know: software development is difficult and risky because, in part, current practices do not readily handle the complex details of software. Yet, for scientific software development the MCO mishap represents the tip of the iceberg; few errors are so public, and many errors are avoided with a combination of expertise, care, and testing during development and modification. Further, this effort consumes valuable time and resources even when hardware costs and execution time continually decrease. Software development could use better tools! This lack of tools has motivated the semantic analysis work explained in this report. However, this work has a distinguishing emphasis; the tool focuses on automated recognition of the fundamental mathematical and physical meaning of scientific code. Further, its comprehension is measured by quantitatively evaluating overall recognition with practical codes. This emphasis is necessary if software errors, like the MCO error, are to be quickly and inexpensively avoided in the future. This report evaluates the progress made with this problem. It presents recommendations, describes the approach, the tool's status, the challenges, related research, and a development strategy.
NASA Astrophysics Data System (ADS)
Teodor, V. G.; Baroiu, N.; Susac, F.; Oancea, N.
2016-11-01
The modelling of a curl of surfaces associated with a pair of rolling centrodes, when the profile of the rack-gear's teeth is known by direct measurement as a coordinate matrix, has as its goal the determination of the generating quality for an imposed kinematics of the relative motion of the tool with respect to the blank. In this way, it is possible to determine the generating geometrical error, as a component of the total error. The generation modelling allows the potential errors of the generating tool to be highlighted, in order to correct its profile before the tool is used in the machining process. A method developed in CATIA is proposed, based on a new approach, namely the method of “relative generating trajectories”. The analytical foundation is presented, together with applications for known models of rack-gear type tools used on Maag gear-cutting machines.
NASA Astrophysics Data System (ADS)
Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu
2018-05-01
Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.
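To illustrate how geometric error parameters propagate into a volumetric error, the sketch below composes small-angle homogeneous error transforms for a simple three-axis stack; it is a generic textbook construction under assumed error values, not the screw-theory model or identification procedure of the paper.

```python
# Generic volumetric error sketch: compose ideal axis motions with small error
# transforms (3 translational + 3 angular errors per axis) and compare the tool
# point against the ideal position.
import numpy as np

def translation(v):
    t = np.eye(4)
    t[:3, 3] = v
    return t

def error_transform(dx, dy, dz, ax, ay, az):
    """Small-angle homogeneous error matrix (first-order rotations)."""
    e = np.eye(4)
    e[:3, :3] = [[1, -az, ay], [az, 1, -ax], [-ay, ax, 1]]
    e[:3, 3] = [dx, dy, dz]
    return e

# Commanded axis positions (mm) and assumed error parameters for each axis.
moves = [np.array([200.0, 0, 0]), np.array([0, 150.0, 0]), np.array([0, 0, -80.0])]
errors = [error_transform(0.004, 0.002, -0.001, 2e-5, -1e-5, 3e-5),
          error_transform(-0.003, 0.005, 0.002, 1e-5, 2e-5, -2e-5),
          error_transform(0.001, -0.002, 0.004, -3e-5, 1e-5, 1e-5)]

ideal, actual = np.eye(4), np.eye(4)
for move, err in zip(moves, errors):
    ideal = ideal @ translation(move)
    actual = actual @ translation(move) @ err

tool_point = np.array([0.0, 0.0, -120.0, 1.0])          # tool offset in the last frame
volumetric_error = (actual @ tool_point - ideal @ tool_point)[:3]
print(f"volumetric error vector (mm): {volumetric_error}")
```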
Evaluation of platinum resistance thermometers
NASA Technical Reports Server (NTRS)
Daryabeigi, Kamran; Dillon-Townes, Lawrence A.
1988-01-01
An evaluation procedure for the characterization of industrial platinum resistance thermometers (PRTs) for use in the temperature range -120 to 160 C was investigated. This evaluation procedure consisted of calibration, thermal stability and hysteresis testing of four surface-measuring PRTs. Five different calibration schemes were investigated for these sensors. The IPTS-68 formulation produced the most accurate result, yielding an average sensor systematic error of 0.02 C and a random error of 0.1 C. The sensors were checked for thermal stability by successive thermal cycling between room temperature, 160 C, and the boiling point of nitrogen. All the PRTs suffered from instability and hysteresis. The applicability of the self-heating technique as an in situ method for checking the calibration of PRTs located inside wind tunnels was investigated.
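For reference, a common industrial PRT characterization is the Callendar-Van Dusen equation, shown here with the standard IEC 60751 coefficients as a hedged illustration of converting resistance to temperature over a range like -120 to 160 C; it is not necessarily the IPTS-68 formulation the authors found most accurate.

```python
# Callendar-Van Dusen resistance-temperature relation for a Pt100 sensor,
# with IEC 60751 standard coefficients (illustrative, not the study's calibration).
from scipy.optimize import brentq

R0 = 100.0            # ohms at 0 C
A = 3.9083e-3
B = -5.775e-7
C = -4.183e-12        # only used below 0 C

def resistance(t_c):
    r = R0 * (1.0 + A * t_c + B * t_c**2)
    if t_c < 0.0:
        r += R0 * C * (t_c - 100.0) * t_c**3
    return r

def temperature(r_ohm):
    """Invert R(T) numerically over the -120 to 160 C range of interest."""
    return brentq(lambda t: resistance(t) - r_ohm, -120.0, 160.0)

print(f"{temperature(resistance(25.0)):.3f} C")   # round-trip check at 25 C
```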
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pesaran, Ahmad
This presentation describes the thermal design of battery packs at the National Renewable Energy Laboratory. A battery thermal management system is essential for xEVs, both for normal operation during daily driving (achieving life and performance) and for off-normal operation during abuse conditions (achieving safety). The battery thermal management system needs to be optimized with the right tools for the lowest cost. Experimental tools such as NREL's isothermal battery calorimeter, thermal imaging, and heat transfer setups are needed. Thermal models and computer-aided engineering tools are useful for robust designs. During abuse conditions, designs should prevent cell-to-cell propagation in a module/pack (i.e., keep the fire small and manageable). NREL's battery ISC device can be used for evaluating the robustness of a module/pack to cell-to-cell propagation.
Design of Friction Stir Spot Welding Tools by Using a Novel Thermal-Mechanical Approach
Su, Zheng-Ming; Qiu, Qi-Hong; Lin, Pai-Chen
2016-01-01
A simple thermal-mechanical model for friction stir spot welding (FSSW) was developed to obtain similar weld performance for different weld tools. Use of the thermal-mechanical model and a combined approach enabled the design of weld tools for various sizes but similar qualities. Three weld tools for weld radii of 4, 5, and 6 mm were made to join 6061-T6 aluminum sheets. Performance evaluations of the three weld tools compared fracture behavior, microstructure, micro-hardness distribution, and welding temperature of welds in lap-shear specimens. For welds made by the three weld tools under identical processing conditions, failure loads were approximately proportional to tool size. Failure modes, microstructures, and micro-hardness distributions were similar. Welding temperatures correlated with frictional heat generation rate densities. Because the three weld tools sufficiently met all design objectives, the proposed approach is considered a simple and feasible guideline for preliminary tool design. PMID:28773800
Majorana Braiding with Thermal Noise.
Pedrocchi, Fabio L; DiVincenzo, David P
2015-09-18
We investigate the self-correcting properties of a network of Majorana wires, in the form of a trijunction, in contact with a parity-preserving thermal environment. As opposed to the case where Majorana bound states are immobile, braiding Majorana bound states within a trijunction introduces dangerous error processes that we identify. Such errors prevent the lifetime of the memory from increasing with the size of the system. We confirm our predictions with Monte Carlo simulations. Our findings put a restriction on the degree of self-correction of this specific quantum computing architecture.
Correction of mid-spatial-frequency errors by smoothing in spin motion for CCOS
NASA Astrophysics Data System (ADS)
Zhang, Yizhong; Wei, Chaoyang; Shao, Jianda; Xu, Xueke; Liu, Shijie; Hu, Chen; Zhang, Haichao; Gu, Haojin
2015-08-01
Smoothing is a convenient and efficient way to correct mid-spatial-frequency errors. Quantifying the smoothing effect allows improvements in efficiency when finishing precision optics. A series of experiments in spin motion was performed to study the smoothing effect in correcting mid-spatial-frequency errors. Some experiments used the same pitch tool at different spinning speeds, and others used different tools at the same spinning speed. Shu's model was introduced and improved to describe and compare the smoothing efficiency for different spinning speeds and different tools. The experimental results show that the mid-spatial-frequency errors on the initial surface were nearly smoothed out after processing in spin motion, and the number of smoothing passes can be estimated by the model before the process. This method was also applied to smooth an aspherical component with an obvious mid-spatial-frequency error left by Magnetorheological Finishing. As a result, a high precision aspheric optical component was obtained with PV=0.1λ and RMS=0.01λ.
Traceability of On-Machine Tool Measurement: A Review.
Mutilba, Unai; Gomez-Acedo, Eneko; Kortaberria, Gorka; Olarra, Aitor; Yagüe-Fabra, Jose A
2017-07-11
Nowadays, errors during the manufacturing process of high value components are not acceptable in driving industries such as energy and transportation. Sectors such as aerospace, automotive, shipbuilding, nuclear power, large science facilities or wind power need complex and accurate components that demand close measurements and fast feedback into their manufacturing processes. New measuring technologies are already available in machine tools, including integrated touch probes and fast interface capabilities. They provide the possibility to measure the workpiece in-machine during or after its manufacture, maintaining the original setup of the workpiece and avoiding the manufacturing process from being interrupted to transport the workpiece to a measuring position. However, the traceability of the measurement process on a machine tool is not ensured yet and measurement data is still not fully reliable enough for process control or product validation. The scientific objective is to determine the uncertainty on a machine tool measurement and, therefore, convert it into a machine integrated traceable measuring process. For that purpose, an error budget should consider error sources such as the machine tools, components under measurement and the interactions between both of them. This paper reviews all those uncertainty sources, being mainly focused on those related to the machine tool, either on the process of geometric error assessment of the machine or on the technology employed to probe the measurand.
Design and thermal analysis of a mold used in the injection of elastomers
NASA Astrophysics Data System (ADS)
Fekiri, Nasser; Canto, Cécile; Madec, Yannick; Mousseau, Pierre; Plot, Christophe; Sarda, Alain
2017-10-01
In the process of injection molding of elastomers, improving the energy efficiency of the tools is a current challenge for industry in terms of energy consumption, productivity and product quality. In the rubber industry, 20% of the energy consumed by capital goods comes from heating processes, and more than 50% of heat losses are linked to insufficient control and thermal insulation of molds. The design of the tooling is evolving, in particular, towards the reduction of the heated mass and the thermal insulation of the molds. In this paper, we present a complex tool composed, on the one hand, of a multi-cavity mold designed by reducing the heated mass and equipped with independent control zones placed as close as possible to each molding cavity and, on the other hand, of a regulated channel block (RCB) which makes it possible to limit the waste of rubber during injection. The originality of this tool lies in thermally isolating the regulated channel block from the mold and the cavities from one another, in order to better control the temperature field in the material being transformed. We present the design and the instrumentation of the experimental set-up. Experimental measurements allow us to understand the thermal behaviour of the tool and to show the thermal heterogeneities on the surface of the mold and in the various cavities. Injection molding tests on rubber and a thermal balance of the energy consumption of the tool are carried out.
Thermal imaging as a lie detection tool at airports.
Warmelink, Lara; Vrij, Aldert; Mann, Samantha; Leal, Sharon; Forrester, Dave; Fisher, Ronald P
2011-02-01
We tested the accuracy of thermal imaging as a lie detection tool in airport screening. Fifty-one passengers in an international airport departure hall told the truth or lied about their forthcoming trip in an interview. Their skin temperature was recorded via a thermal imaging camera. Liars' skin temperature rose significantly during the interview, whereas truth tellers' skin temperature remained constant. On the basis of these different patterns, 64% of truth tellers and 69% of liars were classified correctly. The interviewers made veracity judgements independently from the thermal recordings. The interviewers outperformed the thermal recordings and classified 72% of truth tellers and 77% of liars correctly. Accuracy rates based on the combination of thermal imaging scores and interviewers' judgements were the same as accuracy rates based on interviewers' judgements alone. Implications of the findings for the suitability of thermal imaging as a lie detection tool in airports are discussed.
Cirrus cloud retrieval from MSG/SEVIRI during day and night using artificial neural networks
NASA Astrophysics Data System (ADS)
Strandgren, Johan; Bugliaro, Luca
2017-04-01
By covering a large part of the Earth, cirrus clouds play an important role in climate as they reflect incoming solar radiation and absorb outgoing thermal radiation. Nevertheless, cirrus clouds remain one of the largest uncertainties in atmospheric research, the physical processes that govern their life cycle are still poorly understood, and so is their representation in climate models. To monitor and better understand the properties and physical processes of cirrus clouds, it is essential that these tenuous clouds can be observed from geostationary spaceborne imagers like SEVIRI (Spinning Enhanced Visible and InfraRed Imager), which possess a high temporal resolution together with a large field of view and, besides in-situ observations, play an important role in the investigation of cirrus cloud processes. CiPS (Cirrus Properties from SEVIRI) is a new algorithm targeting thin cirrus clouds. CiPS is an artificial neural network trained with coincident SEVIRI and CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) observations in order to retrieve a cirrus cloud mask along with the cloud top height (CTH), ice optical thickness (IOT) and ice water path (IWP) from SEVIRI. By utilizing only the thermal/IR channels of SEVIRI, CiPS can be used during day and night, making it a powerful tool for cirrus life cycle analysis. Despite the great challenge of detecting thin cirrus clouds and retrieving their properties from a geostationary imager using only the thermal/IR wavelengths, CiPS performs well. Among the cirrus clouds detected by CALIOP, CiPS detects 70 and 95 % of the clouds with an optical thickness of 0.1 and 1.0, respectively. Among the cirrus-free pixels, CiPS classifies 96 % correctly. For the CTH retrieval, CiPS has a mean absolute percentage error of 10 % or less with respect to CALIOP for cirrus clouds with a CTH greater than 8 km. For the IOT retrieval, CiPS has a mean absolute percentage error of 100 % or less with respect to CALIOP for cirrus clouds with an optical thickness down to 0.07. For such thin cirrus clouds an error of 100 % should be regarded as low for a geostationary imager like SEVIRI. The IWP retrieved by CiPS shows a similar performance, but has larger deviations for the thinner cirrus clouds.
Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors
NASA Technical Reports Server (NTRS)
Boussalis, Dhemetrios; Bayard, David S.
2013-01-01
G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the desired fast turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run. This is in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, masscons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams). G-CAT is a standalone MATLAB-based tool intended to run on any engineer's desktop computer.
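The core of a covariance analysis tool like this is propagating statistics through linearized dynamics rather than sampling trajectories; the sketch below shows that idea for a toy 2-state system with P' = F P F^T + Q and a 3-sigma error-ellipse extraction, and is in no way the 120-state G-CAT formulation itself.

```python
# Toy linear covariance propagation: one run replaces many Monte Carlo samples.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])           # position/velocity transition
Q = np.diag([1e-4, 1e-5])                       # process noise (assumed)
P = np.diag([1.0, 0.01])                        # initial knowledge covariance

for _ in range(100):                            # propagate 100 steps
    P = F @ P @ F.T + Q

# 3-sigma error ellipse axes from the eigen-decomposition of the covariance.
eigvals, _ = np.linalg.eigh(P)
print("3-sigma semi-axes:", 3.0 * np.sqrt(eigvals))
```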
Effective thermal conductivity determination for low-density insulating materials
NASA Technical Reports Server (NTRS)
Williams, S. D.; Curry, D. M.
1978-01-01
It was demonstrated that nonlinear least squares can be used to determine effective thermal conductivity, and a method for assessing the relative error associated with these predicted values was provided. The differences between dynamic and static determination of the effective thermal conductivity of low-density materials that transfer heat by a combination of conduction, convection, and radiation were discussed.
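A hedged sketch of the general approach: fit an effective conductivity so a simple forward conduction model matches measured temperatures, using nonlinear least squares; the semi-infinite-solid model, property values, and data below are assumptions for illustration, not the insulation model used in the report.

```python
# Hypothetical effective-conductivity fit: semi-infinite solid with a step change
# in surface temperature, fitted to "measured" in-depth temperatures via curve_fit.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

RHO, CP = 50.0, 900.0          # low-density insulation, kg/m^3 and J/kg/K (assumed)
X = 0.02                       # thermocouple depth, m
T_I, T_S = 20.0, 400.0         # initial and surface temperatures, deg C (assumed)

def temp_model(t, k_eff):
    alpha = k_eff / (RHO * CP)
    return T_I + (T_S - T_I) * erfc(X / (2.0 * np.sqrt(alpha * t)))

t_obs = np.linspace(10.0, 600.0, 30)
t_meas = temp_model(t_obs, 0.08) + np.random.default_rng(3).normal(0, 0.5, t_obs.size)

k_fit, k_cov = curve_fit(temp_model, t_obs, t_meas, p0=[0.05])
print(f"effective k = {k_fit[0]:.3f} W/m/K (std. error {np.sqrt(k_cov[0, 0]):.3f})")
```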
Infrared Thermal Imaging as a Tool in University Physics Education
ERIC Educational Resources Information Center
Mollmann, Klaus-Peter; Vollmer, Michael
2007-01-01
Infrared thermal imaging is a valuable tool in physics education at the university level. It can help to visualize and thereby enhance understanding of physical phenomena from mechanics, thermal physics, electromagnetism, optics and radiation physics, qualitatively as well as quantitatively. We report on its use as lecture demonstrations, student…
Optimizing X-ray mirror thermal performance using matched profile cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Lin; Cocco, Daniele; Kelez, Nicholas
2015-08-07
To cover a large photon energy range, the length of an X-ray mirror is often longer than the beam footprint length for much of the applicable energy range. To limit thermal deformation of such a water-cooled X-ray mirror, a technique using side cooling with a cooled length shorter than the beam footprint length is proposed. This cooling length can be optimized by using finite-element analysis. For the Kirkpatrick–Baez (KB) mirrors at LCLS-II, the thermal deformation can be reduced by a factor of up to 30 compared with full-length cooling. Furthermore, a second, alternative technique, based on a similar principle, is presented: using a long, single-length cooling block on each side of the mirror and adding electric heaters between the cooling blocks and the mirror substrate. The electric heaters consist of a number of cells located along the mirror length. The total effective length of the electric heater can then be adjusted by choosing which cells to energize, using electric power supplies. The residual height error can be minimized to 0.02 nm RMS by using optimal heater parameters (length and power density). Compared with a case without heaters, this residual height error is reduced by a factor of up to 45. The residual height error in the LCLS-II KB mirrors, due to free-electron laser beam heat load, can be reduced to a factor of ~11 below the requirement. The proposed techniques are also effective in reducing thermal slope errors and are, therefore, applicable to white beam mirrors in synchrotron radiation beamlines.
Use of advanced modeling techniques to optimize thermal packaging designs.
Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar
2010-01-01
Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed during its validation. Thermal packaging is routinely used by the pharmaceutical industry to provide passive and active temperature control of their thermally sensitive products from manufacture through end use (termed the cold chain). In this study, the authors focus on passive temperature control (passive control does not require any external energy source and is entirely based on specific and/or latent heat of shipper components). As temperature-sensitive pharmaceuticals are being transported over longer distances, cold chain reliability is essential. To achieve reliability, a significant amount of time and resources must be invested in design, test, and production of optimized temperature-controlled packaging solutions. To shorten the cumbersome trial and error approach (design/test/design/test …), computer simulation (virtual prototyping and testing of thermal shippers) is a promising method. Although several companies have attempted to develop such a tool, there has been limited success to date. Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a coupled conductive/convective-based thermal shipper. A modeling technique capable of correctly capturing shipper thermal behavior can be used to develop packaging designs more quickly, reducing up-front costs while also improving shipper performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brevick, Jerald R.
2014-06-13
In the high pressure die casting process, molten metal is introduced into a die cavity at high pressure and velocity, enabling castings of thin wall section and complex geometry to be obtained. Traditional die materials have been hot work die steels, commonly H13. Manufacture of the dies involves machining the desired geometry from monolithic blocks of annealed tool steel, heat treating to the desired hardness and toughness, and final machining, grinding and polishing. The die is fabricated with internal water cooling passages created by drilling. These materials and fabrication methods have been used for many years; however, there are limitations. Tool steels have relatively low thermal conductivity, and as a result, it takes time to remove the heat from the tool steel via the drilled internal water cooling passages. Furthermore, the low thermal conductivity generates large thermal gradients at the die cavity surfaces, which ultimately leads to thermal fatigue cracking on the surfaces of the die steel. The high die surface temperatures also promote the metallurgical bonding of the aluminum casting alloy to the surface of the die steel (soldering). In terms of process efficiency, these tooling limitations reduce the number of die castings that can be made per unit time by increasing the cycle time required for cooling, and by increasing downtime and the cost to replace tooling which has failed either by soldering or by thermal fatigue cracking (heat checking). The objective of this research was to evaluate the feasibility of designing, fabricating, and testing high pressure die casting tooling having properties equivalent to H13 on the surface in contact with the molten casting alloy - for high temperature and high velocity molten metal erosion resistance - but with the ability to conduct heat rapidly to interior water cooling passages. A layered bimetallic tool design was selected, and the design was evaluated for thermal and mechanical performance via finite element analysis. H13 was retained as the exterior layer of the tooling, while commercially pure copper was chosen for the interior structure of the tooling. The tooling was fabricated by traditional machining of the copper substrate, and H13 powder was deposited on the copper via the Laser Engineered Net Shape (LENS) process. The H13 deposition layer was then final machined by traditional methods. Two tooling components were designed and fabricated: a thermal fatigue test specimen, and a core for a commercial aluminum high pressure die casting tool. The bimetallic thermal fatigue specimen demonstrated promising performance during testing, and the test results were used to improve the design and LENS deposition methods for subsequent manufacture of the commercial core. Results of the thermal finite element analysis for the thermal fatigue test specimen indicate that it has the ability to lose heat to the internal water cooling passages, and to external spray cooling, significantly faster than a monolithic H13 thermal fatigue sample. The commercial core is currently in the final stages of fabrication, and will be evaluated in an actual production environment at Shiloh Die Casting. In this research, the feasibility of designing and fabricating copper/H13 bimetallic die casting tooling via LENS processing, for the purpose of improving die casting process efficiency, is demonstrated.
Kalman filtered MR temperature imaging for laser induced thermal therapies.
Fuentes, D; Yung, J; Hazle, J D; Weinberg, J S; Stafford, R J
2012-04-01
The feasibility of using a stochastic form of the Pennes bioheat model within a 3-D finite element based Kalman filter (KF) algorithm is critically evaluated for the ability to provide temperature field estimates in the event of magnetic resonance temperature imaging (MRTI) data loss during laser induced thermal therapy (LITT). The ability to recover missing MRTI data was analyzed by systematically removing spatiotemporal information from a clinical MR-guided LITT procedure in human brain and comparing predictions in these regions to the original measurements. Performance was quantitatively evaluated in terms of a dimensionless L2 (RMS) norm of the temperature error weighted by acquisition uncertainty. During periods of no data corruption, observed error histories demonstrate that the Kalman algorithm does not alter the high quality temperature measurement provided by MR thermal imaging. The KF-MRTI implementation considered is seen to predict the bioheat transfer with RMS error < 4 for a short period of time, ∆t < 10 s, until the data corruption subsides. In its present form, the KF-MRTI method fails to compensate for consecutive time periods of data loss ∆t > 10 s.
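The failure mode described above (prediction drifting when measurements are missing for too long) can be illustrated with a scalar Kalman filter that simply skips the update step while data are lost; this toy random-walk temperature model is an illustrative assumption, not the 3-D finite-element Pennes bioheat implementation.

```python
# Scalar Kalman filter sketch: predict every step, update only when an MRTI
# measurement is available; uncertainty (and error) grows during data loss.
import numpy as np

rng = np.random.default_rng(4)
n, q, r = 60, 0.05, 0.25            # steps, process variance, measurement variance
truth = np.cumsum(rng.normal(0, np.sqrt(q), n)) + 37.0   # "true" temperature drift
meas = truth + rng.normal(0, np.sqrt(r), n)
available = np.ones(n, dtype=bool)
available[20:35] = False            # simulated MRTI data loss

x, p = 37.0, 1.0
estimate, variance = [], []
for k in range(n):
    p = p + q                        # predict (random-walk model, F = 1)
    if available[k]:                 # update only when data exist
        gain = p / (p + r)
        x = x + gain * (meas[k] - x)
        p = (1.0 - gain) * p
    estimate.append(x)
    variance.append(p)

print(f"peak 1-sigma during data loss: {np.sqrt(max(variance[20:35])):.2f} K")
```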
Calibration and temperature correction of heat dissipation matric potential sensors
Flint, A.L.; Campbell, G.S.; Ellett, K.M.; Calissendorff, C.
2002-01-01
This paper describes how heat dissipation sensors, used to measure soil water matric potential, were analyzed to develop a normalized calibration equation and a temperature correction method. Inference of soil matric potential depends on a correlation between the variable thermal conductance of the sensor's porous ceramic and matric potential. Although this correlation varies among sensors, we demonstrate a normalizing procedure that produces a single calibration relationship. Using sensors from three sources and different calibration methods, the normalized calibration resulted in a mean absolute error of 23% over a matric potential range of -0.01 to -35 MPa. Because the thermal conductivity of variably saturated porous media is temperature dependent, a temperature correction is required for application of heat dissipation sensors in field soils. A temperature correction procedure is outlined that reduces temperature dependent errors by 10 times, which reduces the matric potential measurement errors by more than 30%. The temperature dependence is well described by a thermal conductivity model that allows for the correction of measurements at any temperature to measurements at the calibration temperature.
NASA Astrophysics Data System (ADS)
Leakeas, Charles L.; Capehart, Shay R.; Bartell, Richard J.; Cusumano, Salvatore J.; Whiteley, Matthew R.
2011-06-01
Laser weapon systems comprised of tiled subapertures are rapidly emerging in importance in the directed energy community. Performance models of these laser weapon systems have been developed from numerical simulations of a high fidelity wave-optics code called WaveTrain which is developed by MZA Associates. System characteristics such as mutual coherence, differential jitter, and beam quality rms wavefront error are defined for a focused beam on the target. Engagement scenarios are defined for various platform and target altitudes, speeds, headings, and slant ranges along with the natural wind speed and heading. Inputs to the performance model include platform and target height and velocities, Fried coherence length, Rytov number, isoplanatic angle, thermal blooming distortion number, Greenwood and Tyler frequencies, and atmospheric transmission. The performance model fit is based on power-in-the-bucket (PIB) values against the PIB from the simulation results for the vacuum diffraction-limited spot size as the bucket. The goal is to develop robust performance models for aperture phase error, turbulence, and thermal blooming effects in tiled subaperture systems.
Method for forming an abrasive surface on a tool
Seals, Roland D.; White, Rickey L.; Swindeman, Catherine J.; Kahl, W. Keith
1999-01-01
A method for fabricating a tool used in cutting, grinding, and machining operations is provided. The method is used to deposit a mixture comprising an abrasive material and a bonding material on a tool surface. The materials are propelled toward the receiving surface of the tool substrate using a thermal spray process. The thermal spray process melts the bonding material portion of the mixture, but not the abrasive material. Upon impacting the tool surface, the mixture or composition solidifies to form a hard abrasive tool coating.
Fast scattering simulation tool for multi-energy x-ray imaging
NASA Astrophysics Data System (ADS)
Sossin, A.; Tabary, J.; Rebuffel, V.; Létang, J. M.; Freud, N.; Verger, L.
2015-12-01
A combination of Monte Carlo (MC) and deterministic approaches was employed as a means of creating a simulation tool capable of providing energy resolved x-ray primary and scatter images within a reasonable time interval. Libraries of Sindbad, a previously developed x-ray simulation software, were used in the development. The scatter simulation capabilities of the tool were validated through simulation with the aid of GATE and through experimentation by using a spectrometric CdTe detector. A simple cylindrical phantom with cavities and an aluminum insert was used. Cross-validation with GATE showed good agreement with a global spatial error of 1.5% and a maximum scatter spectrum error of around 6%. Experimental validation also supported the accuracy of the simulations obtained from the developed software with a global spatial error of 1.8% and a maximum error of around 8.5% in the scatter spectra.
Multidisciplinary Analysis of a Hypersonic Engine
NASA Technical Reports Server (NTRS)
Suresh, Ambady; Stewart, Mark
2003-01-01
The objective is to develop high-fidelity tools that can influence ISTAR design, in particular tools for coupled fluid-thermal-structural simulations. RBCC/TBCC designers carefully balance aerodynamic, thermal, weight, and structural considerations; consistent multidisciplinary solutions reveal these details at modest cost. At the scram-mode design point, simulations give details of inlet and combustor performance, thermal loads, and structural deflections.
Path planning and parameter optimization of uniform removal in active feed polishing
NASA Astrophysics Data System (ADS)
Liu, Jian; Wang, Shaozhi; Zhang, Chunlei; Zhang, Linghua; Chen, Huanan
2015-06-01
A high-quality ultrasmooth surface is demanded in short-wave optical systems. However, existing polishing methods have difficulty meeting this requirement on spherical or aspheric surfaces. As a new kind of small-tool polishing method, active feed polishing (AFP) can attain a surface roughness of less than 0.3 nm (RMS) on spherical elements, although AFP may magnify the residual figure error or mid-frequency error. The purpose of this work is to propose an effective algorithm to realize uniform removal of the surface during processing. First, the principle of the AFP and the mechanism of the polishing machine are introduced. In order to maintain the processed figure error, a variable pitch spiral path planning algorithm and a dwell time-solving model are proposed. To suppress possible mid-frequency error, the uniformity of the synthesis tool path, which is generated by an arbitrary point at the polishing tool bottom, is analyzed and evaluated, and the angular velocity ratio of the tool spinning motion to the revolution motion is optimized. Finally, an experiment is conducted on a convex spherical surface and an ultrasmooth surface is acquired. In conclusion, a high-quality ultrasmooth surface can be obtained by the algorithm with little degradation of the figure and mid-frequency errors.
Micromachined Fluid Inertial Sensors
Liu, Shiqiang; Zhu, Rong
2017-01-01
Micromachined fluid inertial sensors are an important class of inertial sensors, mainly comprising thermal accelerometers and fluid gyroscopes, which have been under development for about 20 years since the end of the last century. Compared with conventional silicon or quartz inertial sensors, fluid inertial sensors use a fluid instead of a solid proof mass as the moving and sensitive element, and thus offer the advantages of simple structures, low cost, high shock resistance, and large measurement ranges, although their sensitivity and bandwidth are not competitive. Many studies and various designs have been reported in the past two decades. This review first introduces the working principles of fluid inertial sensors, followed by the relevant research developments. Micromachined thermal accelerometers based on thermal convection have matured and become commercialized. However, micromachined fluid gyroscopes, which are based on jet flow or thermal flow, are less mature. The key issues and technologies of thermal accelerometers, mainly including bandwidth, temperature compensation, monolithic integration of tri-axis accelerometers and strategies for high production yields, are also summarized and discussed. For micromachined fluid gyroscopes, improving integration and sensitivity and reducing thermal errors and cross-coupling errors are the issues of most concern. PMID:28216569
Lopez-Haro, S. A.; Leija, L.
2016-01-01
Objectives. To present a quantitative comparison of thermal patterns produced by the piston-in-a-baffle approach with those generated by a physiotherapy ultrasonic device, and to show the dependency between thermal patterns and acoustic intensity distributions. Methods. The finite element (FE) method was used to model an ideal acoustic field and the resulting thermal pattern, which were compared with the experimental acoustic and temperature distributions produced by a real ultrasonic applicator. A thermal model using the measured acoustic profile as input is also presented for comparison. Temperature measurements were carried out with thermocouples inserted in a muscle phantom. The insertion place of the thermocouples was monitored with ultrasound imaging. Results. Modeled and measured thermal profiles were compared within the first 10 cm of depth. The ideal acoustic field did not adequately represent the measured field, yielding different temperature profiles (errors of 10% to 20%). The experimental field was concentrated near the transducer, producing a region with higher temperatures, while the modeled ideal temperature was linearly distributed along the depth. The error was reduced to 7% when the measured acoustic field was introduced as the input variable in the FE temperature modeling. Conclusions. Temperature distributions are strongly related to the acoustic field distributions. PMID:27999801
Formal Analysis of the Remote Agent Before and After Flight
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Lowry, Mike; Park, SeungJoon; Pecheur, Charles; Penix, John; Visser, Willem; White, Jon L.
2000-01-01
This paper describes two separate efforts that used the SPIN model checker to verify deep space autonomy flight software. The first effort occurred at the beginning of a spiral development process and found five concurrency errors early in the design cycle that the developers acknowledge would not have been found through testing. This effort required a substantial manual modeling effort involving both abstraction and translation from the prototype LISP code to the PROMELA language used by SPIN. This experience and others led to research to address the gap between formal method tools and the development cycle used by software developers. The Java PathFinder tool which directly translates from Java to PROMELA was developed as part of this research, as well as automatic abstraction tools. In 1999 the flight software flew on a space mission, and a deadlock occurred in a sibling subsystem to the one which was the focus of the first verification effort. A second quick-response "cleanroom" verification effort found the concurrency error in a short amount of time. The error was isomorphic to one of the concurrency errors found during the first verification effort. The paper demonstrates that formal methods tools can find concurrency errors that indeed lead to loss of spacecraft functions, even for the complex software required for autonomy. Second, it describes progress in automatic translation and abstraction that eventually will enable formal methods tools to be inserted directly into the aerospace software development cycle.
Computer aided manufacturing for complex freeform optics
NASA Astrophysics Data System (ADS)
Wolfs, Franciscus; Fess, Ed; Johns, Dustin; LePage, Gabriel; Matthews, Greg
2017-10-01
Recently, the desire to use freeform optics has been increasing. Freeform optics can expand the capabilities of optical systems and reduce the number of optics needed in an assembly. The traits that increase optical performance also present challenges in manufacturing. As tolerances on freeform optics become more stringent, it is necessary to continue to improve how the grinding and polishing processes interact with metrology. To create these complex shapes, OptiPro has developed a computer aided manufacturing package called PROSurf. PROSurf generates the tool paths required for grinding and polishing freeform optics with multiple axes of motion. It also uses metrology feedback for deterministic corrections. PROSurf handles two key aspects of the manufacturing process that most other CAM systems struggle with. The first is the ability to support several input types (equations, CAD models, point clouds) and still create a uniform high-density surface map usable for generating a smooth tool path. The second is improving the accuracy of mapping a metrology file to the part surface. To achieve this, OptiPro uses 3D error maps instead of traditional 2D maps. The metrology error map drives the tool path adjustment applied during processing. For grinding, the error map adjusts the tool position to compensate for repeatable system error. For polishing, the error map drives the relative dwell times of the tool across the part surface. This paper will present the challenges associated with these issues and the solutions that we have created.
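As a loose illustration of how a metrology error map can drive relative dwell times during polishing, the sketch below assigns more tool residence to cells where the measured surface sits above target; the map values, removal-rate constant and function name are assumptions for illustration, not PROSurf's implementation.

```python
# Hedged sketch: relative dwell time per map cell proportional to the
# positive surface error, under an assumed uniform removal rate.
import numpy as np

def relative_dwell_times(error_map_um, removal_rate_um_per_s=0.05):
    """error_map_um: 2D height-error map (micrometres above the target surface).
    Returns seconds of dwell per cell; low areas receive zero dwell."""
    excess = np.clip(error_map_um, 0.0, None)   # only remove high material
    return excess / removal_rate_um_per_s

dwell = relative_dwell_times(np.array([[0.2, -0.1], [0.05, 0.3]]))
```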
Estimating top-of-atmosphere thermal infrared radiance using MERRA-2 atmospheric data
NASA Astrophysics Data System (ADS)
Kleynhans, Tania; Montanaro, Matthew; Gerace, Aaron; Kanan, Christopher
2017-05-01
Thermal infrared satellite images have been widely used in environmental studies. However, satellites have limited temporal resolution, e.g., 16-day Landsat or 1- to 2-day Terra MODIS. This paper investigates the use of the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) reanalysis data product, produced by NASA's Global Modeling and Assimilation Office (GMAO), to predict global top-of-atmosphere (TOA) thermal infrared radiance. The high temporal resolution of the MERRA-2 data product presents opportunities for novel research and applications. Various methods were applied to estimate TOA radiance from MERRA-2 variables, namely (1) a parameterized physics-based method, (2) linear regression models, and (3) non-linear support vector regression. Model prediction accuracy was evaluated using temporally and spatially coincident Moderate Resolution Imaging Spectroradiometer (MODIS) thermal infrared data as reference data. This research found that support vector regression with a radial basis function kernel produced the lowest error rates. Sources of error are discussed and defined. Further research is currently being conducted to train deep learning models to predict TOA thermal radiance.
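A hedged sketch of the best-performing approach named above, support vector regression with an RBF kernel mapping MERRA-2 predictors to TOA thermal radiance, is given below; the feature count, hyperparameters and random placeholder data are assumptions, not the study's configuration.

```python
# Minimal SVR(RBF) sketch on assumed placeholder data; the real model would
# be trained on MERRA-2 variables with coincident MODIS radiance as reference.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((200, 6))              # placeholder MERRA-2 predictors
y = rng.random(200)                   # placeholder TOA radiance reference

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)
rmse = float(np.sqrt(np.mean((model.predict(X) - y) ** 2)))
```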
Mihm, F G; Feeley, T W; Jamieson, S W
1987-01-01
The thermal dye double indicator dilution technique for estimating lung water was compared with gravimetric analyses in nine human subjects who were organ donors. As observed in animal studies, the thermal dye measurement of extravascular thermal volume (EVTV) consistently overestimated gravimetric extravascular lung water (EVLW), the mean (SEM) difference being 3.43 (0.59) ml/kg. In eight of the nine subjects, EVTV minus 3.43 ml/kg would yield an estimate of EVLW ranging from 3.23 ml/kg below to 3.37 ml/kg above the actual EVLW at the 95% confidence limits. Reproducibility, assessed with the standard error of the mean percentage, suggested that a 15% change in EVTV can be reliably detected with repeated measurements. One subject was excluded from analysis because the EVTV measurement grossly underestimated the actual EVLW. This error was associated with regional injury observed on gross examination of the lung. Experimental and clinical evidence suggest that the thermal dye measurement provides a reliable estimate of lung water in diffuse pulmonary oedema states. PMID:3616974
Projection-Based Reduced Order Modeling for Spacecraft Thermal Analysis
NASA Technical Reports Server (NTRS)
Qian, Jing; Wang, Yi; Song, Hongjun; Pant, Kapil; Peabody, Hume; Ku, Jentung; Butler, Charles D.
2015-01-01
This paper presents a mathematically rigorous, subspace projection-based reduced order modeling (ROM) methodology and an integrated framework to automatically generate reduced order models for spacecraft thermal analysis. Two key steps in the reduced order modeling procedure are described: (1) the acquisition of a full-scale spacecraft model in the ordinary differential equation (ODE) and differential algebraic equation (DAE) form to resolve its dynamic thermal behavior; and (2) the ROM to markedly reduce the dimension of the full-scale model. Specifically, proper orthogonal decomposition (POD) in conjunction with discrete empirical interpolation method (DEIM) and trajectory piece-wise linear (TPWL) methods are developed to address the strong nonlinear thermal effects due to coupled conductive and radiative heat transfer in the spacecraft environment. Case studies using NASA-relevant satellite models are undertaken to verify the capability and to assess the computational performance of the ROM technique in terms of speed-up and error relative to the full-scale model. ROM exhibits excellent agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) along with salient computational acceleration (up to two orders of magnitude speed-up) over the full-scale analysis. These findings establish the feasibility of ROM to perform rational and computationally affordable thermal analysis, develop reliable thermal control strategies for spacecraft, and greatly reduce the development cycle times and costs.
Inspection of the Math Model Tools for On-Orbit Assessment of Impact Damage Report
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Raju, Ivatury S.; Piascik, Robert S.
2007-01-01
In Spring of 2005, the NASA Engineering Safety Center (NESC) was engaged by the Space Shuttle Program (SSP) to peer review the suite of analytical tools being developed to support the determination of impact and damage tolerance of the Orbiter Thermal Protection Systems (TPS). The NESC formed an independent review team with the core disciplines of materials, flight sciences, structures, mechanical analysis and thermal analysis. The Math Model Tools reviewed included damage prediction and stress analysis, aeroheating analysis, and thermal analysis tools. Some tools are physics-based and other tools are empirically-derived. Each tool was created for a specific use and timeframe, including certification, real-time pre-launch assessments, and real-time on-orbit assessments. In addition, the tools are used together in an integrated strategy for assessing the ramifications of impact damage to tile and RCC. The NESC teams conducted a peer review of the engineering data package for each Math Model Tool. This report contains the summary of the team observations and recommendations from these reviews.
Traceability of On-Machine Tool Measurement: A Review
Gomez-Acedo, Eneko; Kortaberria, Gorka; Olarra, Aitor
2017-01-01
Nowadays, errors during the manufacturing process of high value components are not acceptable in driving industries such as energy and transportation. Sectors such as aerospace, automotive, shipbuilding, nuclear power, large science facilities or wind power need complex and accurate components that demand close measurements and fast feedback into their manufacturing processes. New measuring technologies are already available in machine tools, including integrated touch probes and fast interface capabilities. They provide the possibility to measure the workpiece in-machine during or after its manufacture, maintaining the original setup of the workpiece and avoiding interruption of the manufacturing process to transport the workpiece to a measuring position. However, the traceability of the measurement process on a machine tool is not yet ensured, and measurement data are still not reliable enough for process control or product validation. The scientific objective is to determine the uncertainty of a machine tool measurement and, therefore, convert it into a machine-integrated traceable measuring process. For that purpose, an error budget should consider error sources such as the machine tool, the components under measurement and the interactions between them. This paper reviews all those uncertainty sources, focusing mainly on those related to the machine tool, either in the process of geometric error assessment of the machine or in the technology employed to probe the measurand. PMID:28696358
High Thermal Conductivity and High Wear Resistance Tool Steels for cost-effective Hot Stamping Tools
NASA Astrophysics Data System (ADS)
Valls, I.; Hamasaiid, A.; Padré, A.
2017-09-01
In hot stamping/press hardening, in addition to its shaping function, the tool controls the cycle time, the quality of the stamped components through determining the cooling rate of the stamped blank, the production costs and the feasibility frontier for stamping a given component. During the stamping, heat is extracted from the stamped blank and transported through the tool to the cooling medium in the cooling lines. Hence, the tools' thermal properties determine the cooling rate of the blank, the heat transport mechanism, stamping times and temperature distribution. The tool's surface resistance to adhesive and abrasive wear is also an important cost factor, as it determines the tool durability and maintenance costs. Wear is influenced by many tool material parameters, such as the microstructure, composition, hardness level and distribution of strengthening phases, as well as the tool's working temperature. A decade ago, Rovalma developed a hot work tool steel for hot stamping that features a thermal conductivity of more than double that of any conventional hot work tool steel. Since that time, many complementary grades have been developed in order to provide tailored material solutions as a function of the production volume, degree of blank cooling and wear resistance requirements, tool geometries, tool manufacturing method, type and thickness of the blank material, etc. Recently, Rovalma has developed a new generation of high thermal conductivity, high wear resistance tool steel grades that enable the manufacture of cost effective tools for hot stamping to increase process productivity and reduce tool manufacturing costs and lead times. Both of these novel grades feature high wear resistance and high thermal conductivity to enhance tool durability and cut cycle times in the production process of hot stamped components. Furthermore, one of these new grades reduces tool manufacturing costs through low tool material cost and hardening through readily available gas quenching, whereas the other new grade enables faster manufacturing of the tool at reduced cost by eliminating the time- and cost-consuming high-temperature hardening altogether. The latter newly developed grade can be hardened from a soft delivery state, for easy machining, to 52 HRC by way of a simple low-temperature precipitation hardening. In this work, these new grades and the role of the tool material's thermal, mechanical and tribological properties, as well as their processing features, will be discussed in light of enabling the manufacture of intelligent hot stamping tools.
NASA Astrophysics Data System (ADS)
Ikeura, Takuro; Nozaki, Takayuki; Shiota, Yoichi; Yamamoto, Tatsuya; Imamura, Hiroshi; Kubota, Hitoshi; Fukushima, Akio; Suzuki, Yoshishige; Yuasa, Shinji
2018-04-01
Using macro-spin modeling, we studied the reduction in the write error rate (WER) of voltage-induced dynamic magnetization switching by enhancing the effective thermal stability of the free layer using a voltage-controlled magnetic anisotropy change. Marked reductions in WER can be achieved by introducing reverse bias voltage pulses both before and after the write pulse. This procedure suppresses the thermal fluctuations of magnetization in the initial and final states. The proposed reverse bias method can offer a new way of improving the writing stability of voltage-driven spintronic devices.
A study and simulation of the impact of high-order aberrations to overlay error distribution
NASA Astrophysics Data System (ADS)
Sun, G.; Wang, F.; Zhou, C.
2011-03-01
With the reduction of design rules, a number of corresponding new technologies, such as i-HOPC, HOWA and DBO, have been proposed and applied to eliminate overlay error. When these technologies are in use, any high-order error distribution needs to be clearly distinguished in order to remove the underlying causes. Lens aberrations are normally thought to mainly impact the Matching Machine Overlay (MMO). However, when using Image-Based Overlay (IBO) measurement tools, aberrations become the dominant influence on Single Machine Overlay (SMO) and even on stage repeatability performance. In this paper, several measurements of the error distributions of the lens of the SMEE SSB600/10 prototype exposure tool are presented. Models that characterize the primary influence of lens magnification, high-order distortion, coma aberration and telecentricity are shown. The contribution to stage repeatability (as measured with IBO tools) from the above errors was predicted with a simulator and compared to experiments. Finally, the drift over several days of each lens distortion affecting SMO was monitored and matched with the measurement results.
NASA Technical Reports Server (NTRS)
Cho, Hyung J.; Sukhatme, Kalyani G.; Mahoney, John C.; Penanen, Konstantin; Vargas, Rudolph, Jr.
2010-01-01
A method allows combining the functions of a heater and a thermometer in a single device, a thermistor, with minimal temperature read errors. Because thermistors typically have a much smaller thermal mass than the objects they monitor, the thermal time to equilibrate the thermometer to the temperature of the object is typically much shorter than the thermal time of the object to change its temperature in response to an external perturbation.
A Smart Thermal Block Diagram Tool
NASA Technical Reports Server (NTRS)
Tsuyuki, Glenn; Miyake, Robert; Dodge, Kyle
2008-01-01
The presentation describes a Smart Thermal Block Diagram Tool. It is used by JPL's Team X in studying missions during Pre-Phase A. It helps generate cost and mass estimates using proprietary databases.
Friction Stir Welding of Tapered Thickness Welds Using an Adjustable Pin Tool
NASA Technical Reports Server (NTRS)
Adams, Glynn; Venable, Richard; Lawless, Kirby
2003-01-01
Friction stir welding (FSW) can be used for joining weld lands that vary in thickness along the length of the weld. An adjustable pin tool mechanism can be used to accomplish this in a single-pass, full-penetration weld by providing for precise changes in the pin length relative to the shoulder face during the weld process. The difficulty with this approach is in accurately adjusting the pin length to provide a consistent penetration ligament throughout the weld. The weld technique, control system, and instrumentation must account for mechanical and thermal compliances of the tooling system to conduct tapered welds successfully. In this study, a combination of static and in-situ measurements, as well as active control, is used to locate the pin accurately and maintain the desired penetration ligament. Frictional forces at the pin/shoulder interface were a source of error that affected accurate pin position. A traditional FSW pin tool design that requires a lead angle was used to join butt weld configurations that included both constant thickness and tapered sections. The pitch axis of the tooling was fixed throughout the weld; therefore, the effective lead angle in the tapered sections was restricted to within the tolerances allowed by the pin tool design. The sensitivity of the FSW process to factors such as thickness offset, joint gap, centerline offset, and taper transition offset was also studied. The joint gap and the thickness offset had the most adverse effects on weld quality. Two separate tooling configurations were used to conduct tapered thickness welds successfully. The weld configurations included sections in which the thickness decreased along the weld, as well as sections in which the thickness increased along the weld. The data presented here include weld metallography, strength data, and process load data.
Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Eguía, Pablo; Collazo, Joaquín
2010-01-01
The objective of this study was to develop a methodology for the determination of the maximum sampling error and confidence intervals of thermal properties obtained from thermogravimetric analysis (TG), including moisture, volatile matter, fixed carbon and ash content. The sampling procedure of the TG analysis was of particular interest and was conducted with care. The results of the present study were compared to those of a prompt analysis, and a correlation between the mean values and maximum sampling errors of the methods was not observed. In general, low and acceptable levels of uncertainty and error were obtained, demonstrating that the properties evaluated by TG analysis were representative of the overall fuel composition. The accurate determination of the thermal properties of biomass with precise confidence intervals is of particular interest in energetic biomass applications. PMID:20717532
Errata: Response Analysis and Error Diagnosis Tools.
ERIC Educational Resources Information Center
Hart, Robert S.
This guide to ERRATA, a set of HyperCard-based tools for response analysis and error diagnosis in language testing, is intended as a user manual and general reference and designed to be used with the software (not included here). It has three parts. The first is a brief survey of computational techniques available for dealing with student test…
NASA Astrophysics Data System (ADS)
Sousa, Andre R.; Schneider, Carlos A.
2001-09-01
A touch probe is used on a 3-axis vertical machining center to check against a hole plate calibrated on a coordinate measuring machine (CMM). By comparing the results obtained from the machine tool and the CMM, the main machine tool error components are measured, attesting to the machine's accuracy. The error values can also be used to update the error compensation table at the CNC, enhancing the machine accuracy. The method is easy to use, has a lower cost than classical test techniques, and preliminary results have shown that its uncertainty is comparable to well-established techniques. In this paper the method is compared with the laser interferometric system regarding reliability, cost and time efficiency.
Landsat-8 TIRS thermal radiometric calibration status
Barsi, Julia A.; Markham, Brian L.; Montanaro, Matthew; Gerace, Aaron; Hook, Simon; Schott, John R.; Raqueno, Nina G.; Morfitt, Ron
2017-01-01
The Thermal Infrared Sensor (TIRS) instrument is the thermal-band imager on the Landsat-8 platform. The initial on-orbit calibration estimates of the two TIRS spectral bands indicated large average radiometric calibration errors, -0.29 and -0.51 W/m2 sr μm or -2.1 K and -4.4 K at 300 K in Bands 10 and 11, respectively, as well as high variability in the errors, 0.87 K and 1.67 K (1-σ), respectively. The average error was corrected in operational processing in January 2014, though this adjustment did not improve the variability. The source of the variability was determined to be stray light from far outside the field of view of the telescope. An algorithm for modeling the stray light effect was developed and implemented in the Landsat-8 processing system in February 2017. The new process has improved the overall calibration of the two TIRS bands, reducing the residual variability in the calibration from 0.87 K to 0.51 K at 300 K for Band 10 and from 1.67 K to 0.84 K at 300 K for Band 11. There are residual average lifetime bias errors in each band: 0.04 W/m2 sr μm (0.30 K) and -0.04 W/m2 sr μm (-0.29 K) for Bands 10 and 11, respectively.
Dual-wavelengths photoacoustic temperature measurement
NASA Astrophysics Data System (ADS)
Liao, Yu; Jian, Xiaohua; Dong, Fenglin; Cui, Yaoyao
2017-02-01
Thermal therapy is an approach applied in cancer treatment that heats local tissue to kill tumor cells, and it requires highly sensitive temperature monitoring during therapy. Current clinical methods for temperature measurement, such as fMRI, near infrared, or ultrasound, still have limitations on penetration depth or sensitivity. Photoacoustic temperature sensing is a newly developed method with potential for application in thermal therapy; it usually employs a single-wavelength laser for signal generation and temperature detection. Because of system disturbances, including laser intensity, ambient temperature and the complexity of the target, accidental measurement errors are unavoidable. To address these problems, we propose a new method of photoacoustic temperature sensing that uses two wavelengths to reduce random error and increase measurement accuracy. First, a brief theoretical analysis is given. Then, in the experiment, a temperature measurement resolution of about 1° in the range of 23-48° in ex vivo pig blood was achieved, and an obvious decrease in absolute error was observed, averaging 1.7° in the single-wavelength mode and nearly 1° in the dual-wavelength mode. The obtained results indicate that dual-wavelength photoacoustic sensing of temperature can reduce random error and improve measurement accuracy, which could make it a more efficient method for photoacoustic temperature sensing in thermal therapy of tumors.
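The error-reduction idea above can be illustrated with a small simulation: averaging two independent temperature estimates with comparable noise shrinks the random error by roughly a factor of sqrt(2). The noise level and sample count below are placeholders, not the experiment's figures.

```python
# Toy illustration of random-error reduction by combining two wavelengths;
# noise magnitudes are assumptions, not measured values.
import numpy as np

rng = np.random.default_rng(1)
true_T = 37.0
est_w1 = true_T + rng.normal(0.0, 1.7, 10000)   # single-wavelength estimates
est_w2 = true_T + rng.normal(0.0, 1.7, 10000)
dual = 0.5 * (est_w1 + est_w2)                  # dual-wavelength combination

print(np.mean(np.abs(est_w1 - true_T)), np.mean(np.abs(dual - true_T)))
```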
1988-09-01
analysis phase of the software life cycle (16:1-1). While editing a SADT diagram, the tool should be able to check whether or not structured analysis... diagrams are valid for the SADT syntax, produce error messages, do error recovery, and offer editing suggestions. Thus, this tool must have the... directed editors are editors which use the syntax of the programming language while editing a program. While text editors treat programs as text, syntax
A Controlled-Phase Gate via Adiabatic Rydberg Dressing of Neutral Atoms
NASA Astrophysics Data System (ADS)
Keating, Tyler; Deutsch, Ivan; Cook, Robert; Biederman, Grant; Jau, Yuan-Yu
2014-05-01
The dipole blockade effect between Rydberg atoms is a promising tool for quantum information processing in neutral atoms. So far, most efforts to perform a quantum logic gate with this effect have used resonant laser pulses to excite the atoms, which makes the system particularly susceptible to decoherence through thermal motional effects. We explore an alternative scheme in which the atomic ground states are adiabatically "dressed" by turning on an off-resonant laser. We analyze the implementation of a CPHASE gate using this mechanism and find that fidelities of >99% should be possible with current technology, owing primarily to the suppression of motional errors. We also discuss how such a scheme could be generalized to perform more complicated, multi-qubit gates; in particular, a simple generalization would allow us to perform a Toffoli gate in a single step.
Russo, Paola; Piazza, Miriam; Leonardi, Giorgio; Roncoroni, Layla; Russo, Carlo; Spadaro, Salvatore; Quaglini, Silvana
2012-01-01
Blood transfusion is a complex activity subject to a high risk of potentially fatal errors. The development and application of computer-based systems could help reduce the error rate, playing a fundamental role in improving the quality of care. This poster presents an eLearning tool, currently under development, that formalizes the guidelines of the transfusion process. The system, implemented in YAWL (Yet Another Workflow Language), will be used to train personnel in order to improve the efficiency of care and to reduce errors.
AfterQC: automatic filtering, trimming, error removing and quality control for fastq data.
Chen, Shifu; Huang, Tanxiao; Zhou, Yanqing; Han, Yue; Xu, Mingyan; Gu, Jia
2017-03-14
Some applications, especially clinical applications requiring highly accurate sequencing data, have to cope with the problems caused by unavoidable sequencing errors. Several tools have been proposed to profile sequencing quality, but few of them can quantify or correct sequencing errors. This unmet requirement motivated us to develop AfterQC, a tool with functions to profile sequencing errors and correct most of them, plus highly automated quality control and data filtering features. Unlike most tools, AfterQC analyses the overlapping of paired sequences for pair-end sequencing data. Based on overlapping analysis, AfterQC can detect and cut adapters, and it furthermore provides a novel function to correct wrong bases in the overlapping regions. Another new feature is the detection and visualisation of sequencing bubbles, which are commonly found on flowcell lanes and may cause sequencing errors. Besides normal per-cycle quality and base content plotting, AfterQC also provides features such as polyX (a long sub-sequence of a same base X) filtering, automatic trimming and K-MER based strand bias profiling. For each single or pair of FastQ files, AfterQC filters out bad reads, detects and eliminates the sequencer's bubble effects, trims reads at the front and tail, detects sequencing errors and corrects part of them, and finally outputs clean data and generates HTML reports with interactive figures. AfterQC can run in batch mode with multiprocess support; it can run with a single FastQ file, a single pair of FastQ files (for pair-end sequencing), or a folder in which all included FastQ files are processed automatically. Based on overlapping analysis, AfterQC can estimate the sequencing error rate and profile the error transform distribution. The results of our error profiling tests show that the error distribution is highly platform dependent. Much more than just another quality control (QC) tool, AfterQC is able to perform quality control, data filtering, error profiling and base correction automatically. Experimental results show that AfterQC can help to eliminate sequencing errors in pair-end sequencing data to provide much cleaner output, and consequently help to reduce false-positive variants, especially for low-frequency somatic mutations. While providing rich configurable options, AfterQC can detect and set all the options automatically and requires no arguments in most cases.
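A simplified sketch of the overlap-based correction idea is shown below: the reverse complement of read 2 is compared with read 1 over an assumed known overlap, and disagreeing bases are replaced by the call with the higher quality score. This illustrates the principle only and is not AfterQC's actual implementation, which also infers the overlap offset itself.

```python
# Hedged sketch of overlap-based base correction for pair-end reads; the
# overlap offset is assumed known here, unlike AfterQC which infers it.
def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def correct_overlap(r1, q1, r2, q2, offset):
    """offset: position in read 1 where the reverse complement of read 2
    starts overlapping. Qualities are Phred+33 strings."""
    r2rc, q2rc = revcomp(r2), q2[::-1]
    r1, q1 = list(r1), list(q1)
    for i, j in zip(range(offset, len(r1)), range(len(r2rc))):
        if r1[i] != r2rc[j] and q2rc[j] > q1[i]:
            r1[i], q1[i] = r2rc[j], q2rc[j]   # keep the higher-quality call
    return "".join(r1), "".join(q1)

read1, qual1 = correct_overlap("ACGTACGTAA", "IIIIIII#II", "TTACGTACGT", "IIIIIIIIII", 0)
```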
Perceptual Bias in Speech Error Data Collection: Insights from Spanish Speech Errors
ERIC Educational Resources Information Center
Perez, Elvira; Santiago, Julio; Palma, Alfonso; O'Seaghdha, Padraig G.
2007-01-01
This paper studies the reliability and validity of naturalistic speech errors as a tool for language production research. Possible biases when collecting naturalistic speech errors are identified and specific predictions derived. These patterns are then contrasted with published reports from Germanic languages (English, German and Dutch) and one…
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stynes, J. K.; Ihas, B.
2012-04-01
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lara-Curzio, Edgar; Rios, Orlando; Marquez-Rossy, Andres Emilio
ORNL collaborated with Faurecia Interior Systems to investigate the feasibility of developing a thermomagnetic preventive maintenance program for nickel tooling used in powder slush molding. It was found that thermal treatments at temperatures greater than 500°C can anneal strain hardening in nickel tooling, and a range of temperatures and times for effective thermal annealing was identified. It was also observed that magnetic fields applied during thermal annealing do not alter the kinetics of strain-hardening annealing. The results obtained in this investigation provide a foundation for establishing a preventive maintenance program for nickel tooling.
Side-by-side ANFIS as a useful tool for estimating correlated thermophysical properties
NASA Astrophysics Data System (ADS)
Grieu, Stéphane; Faugeroux, Olivier; Traoré, Adama; Claudet, Bernard; Bodnar, Jean-Luc
2015-12-01
In the present paper, an artificial intelligence-based approach to the estimation of correlated thermophysical properties is designed and evaluated. This new "intelligent" approach makes use of photothermal responses obtained when homogeneous materials are subjected to a light flux. Commonly, gradient-based algorithms are used as parameter estimation techniques. Unfortunately, such algorithms show instabilities leading to non-convergence in the case of correlated properties estimated from a rebuilt impulse response. The main objective of the present work was therefore to simultaneously estimate both the thermal diffusivity and conductivity of homogeneous materials from front-face or rear-face photothermal responses to pseudo-random binary signals. To this end, we used side-by-side neuro-fuzzy systems (adaptive network-based fuzzy inference systems) trained with a hybrid algorithm. We focused on the impact on generalization of both the examples used during training and the fuzzification process. In addition, computation time was a key point to consider. The developed algorithm is therefore computationally tractable and allows both the thermal diffusivity and conductivity of homogeneous materials to be simultaneously estimated with very good accuracy (the generalization error ranges between 4.6% and 6.2%).
NASA Astrophysics Data System (ADS)
Haider, Shahid A.; Kazemzadeh, Farnoud; Wong, Alexander
2017-03-01
An ideal laser is a useful tool for the analysis of biological systems. In particular, the polarization property of lasers can allow the concentration of important organic molecules in the human body, such as proteins, amino acids, lipids, and carbohydrates, to be estimated. However, lasers do not always work as intended, and effects such as mode hopping and thermal drift can cause time-varying intensity fluctuations. These effects can arise from the surrounding environment, where either an unstable current source is used or the temperature of the surroundings is not temporally stable. Such intensity fluctuations can cause bias and error in typical techniques for estimating organic molecule concentrations. In a low-resource setting, where cost must be limited and environmental factors like unregulated power supplies and temperature cannot be controlled, the hardware required to correct for these intensity fluctuations can be prohibitive. We propose a method for computational laser intensity stabilisation that uses Bayesian state estimation to correct for the time-varying intensity fluctuations from electrical and thermal instabilities without the use of additional hardware. This method will allow for consistent intensities across all polarization measurements, enabling accurate estimates of organic molecule concentrations.
Vibration characteristics measurement of beam-like structures using infrared thermography
NASA Astrophysics Data System (ADS)
Talai, S. M.; Desai, D. A.; Heyns, P. S.
2016-11-01
Infrared thermography (IRT) has matured and is now widely accepted as a condition monitoring tool in which temperature is measured in a non-contact way. Since the late 1970s, it has been extensively used in the vibrothermography (Sonic IR) non-destructive technique for the evaluation of surface cracks through thermal imaging of the vibration-induced heat generated at the crack. However, it has received little research attention for the prediction of structural vibration behaviour, and the concept is therefore not yet well understood. This paper explores its ability to fill this knowledge gap. To achieve this, two cantilever beam-like structures coupled with a friction rod were subjected to forced excitation while infrared cameras captured thermal images at the friction interfaces. Analysing the frictional temperature evolution with the Matlab Fast Fourier Transform (FFT) algorithm, and using the heat conduction equation in conjunction with a finite difference approach, successfully identifies the structural vibration characteristics, with maximum errors of 0.28% and 20.71% for frequencies and displacements, respectively. These findings are particularly useful in overcoming many limitations inherent in some of the current vibration measuring techniques applied in structural integrity management, such as strain gauge failures due to fatigue.
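As a minimal illustration of the frequency-identification step above, the sketch below reads the dominant vibration frequency from the FFT of a synthetic interface-temperature history; the sampling rate and signal are placeholders, and the actual study additionally solved the heat conduction equation to recover displacements.

```python
# Toy FFT-based frequency identification from a temperature history;
# the 7.5 Hz signal and 100 Hz frame rate are assumptions for illustration.
import numpy as np

fs = 100.0                                   # infrared frame rate (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)
temp = 0.2 * np.sin(2 * np.pi * 7.5 * t) + np.random.normal(0.0, 0.02, t.size)

spectrum = np.abs(np.fft.rfft(temp - temp.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
dominant_hz = freqs[np.argmax(spectrum)]     # close to 7.5 Hz for this toy signal
```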
NASA Astrophysics Data System (ADS)
Byeon, J. H.; Ahmed, F.; Ko, T. J.; Lee, D. K.; Kim, J. S.
2018-03-01
As industry develops, miniaturization and refinement of products are important issues. Precise machining is required for cutting, which is a typical method of shaping a product. A key factor determining the workability of the cutting process is the tool material. Tool materials include carbon tool steel, alloy tool steel, high-speed steel, cemented carbide, and ceramics. In the case of a carbide material, the smaller the particle size, the better the mechanical properties, with higher hardness, strength and toughness. The specific heat, density, and thermal diffusivity also change with finer particle size. In this study, finite element analysis was performed to investigate the change in heat generation and cutting power depending on the physical properties (specific heat, density, thermal diffusivity) of the tool material. The thermal conductivity coefficient was obtained by measuring the thermal diffusivity, specific heat, and density of the material with the finer particle size (180 nm) and of the material with the conventional particle size (0.05 μm). The thermal conductivity coefficient was calculated as 61.33 for the 180 nm class material and 46.13 for the 0.05 μm class material. As a result of finite element analysis using these values, the average temperature generated in the micronized particle material (180 nm) was 532.75 °C and that in the existing material (0.05 μm) was 572.75 °C. Cutting power was also compared, but the difference was not significant. Therefore, if the thermal conductivity is increased through particle refinement, the surface quality can be improved and the tool life can be prolonged by lowering the temperature generated in the tool during machining, without greatly affecting the cutting power.
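The relation implied above between the measured quantities is the standard identity k = α·ρ·c_p, i.e. thermal conductivity equals thermal diffusivity times density times specific heat; the sketch below applies it with placeholder numbers rather than the paper's measured values.

```python
# k = alpha * rho * c_p, with illustrative (not measured) inputs.
def thermal_conductivity(alpha_m2_s, rho_kg_m3, cp_J_kgK):
    """Thermal conductivity [W/(m*K)] from diffusivity [m^2/s],
    density [kg/m^3] and specific heat [J/(kg*K)]."""
    return alpha_m2_s * rho_kg_m3 * cp_J_kgK

k_example = thermal_conductivity(2.2e-5, 14500.0, 200.0)   # roughly 64 W/(m*K)
```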
NASA Astrophysics Data System (ADS)
Lau Sheng, Annie; Ismail, Izwan; Nur Aqida, Syarifah
2018-03-01
This study presents the effects of laser parameters on the surface roughness of laser-modified tool steel after thermal cyclic loading. A pulse-mode Nd:YAG laser was used to perform the laser surface modification of AISI H13 tool steel samples. The samples were then subjected to thermal cyclic loading experiments, which involved alternate immersion in molten aluminium (800°C) and water (27°C) for 553 cycles. A full factorial design of experiments (DOE) was developed to perform the investigation. The factors for the DOE are the laser parameters, namely overlap rate (η), pulse repetition frequency (fPRF) and peak power (Ppeak), while the response is the surface roughness after thermal cyclic loading. The results indicate that the surface roughness of the laser-modified surface after thermal cyclic loading is significantly affected by the laser parameter settings.
Maltesen, Morten Jonas; van de Weert, Marco; Grohganz, Holger
2012-09-01
Moisture content and aerodynamic particle size are critical quality attributes for spray-dried protein formulations. In this study, spray-dried insulin powders intended for pulmonary delivery were produced applying design of experiments methodology. Near infrared spectroscopy (NIR) in combination with preprocessing and multivariate analysis in the form of partial least squares projections to latent structures (PLS) were used to correlate the spectral data with moisture content and aerodynamic particle size measured by a time-of-flight principle. The PLS models predicting the moisture content were based on the chemical information of the water molecules in the NIR spectrum. The models yielded prediction errors (RMSEP) between 0.39% and 0.48%, with thermogravimetric analysis used as the reference method. The PLS models predicting the aerodynamic particle size were based on baseline offset in the NIR spectra and yielded prediction errors between 0.27 and 0.48 μm. The morphology of the spray-dried particles had a significant impact on the predictive ability of the models. Good predictive models could be obtained for spherical particles, with a calibration error (RMSECV) of 0.22 μm, whereas wrinkled particles resulted in much less robust models with a Q2 of 0.69. Based on the results in this study, NIR is a suitable tool for process analysis of the spray-drying process and for control of moisture content and particle size, in particular for smooth and spherical particles.
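A hedged sketch of the chemometric step described above, partial least squares regression from preprocessed NIR spectra to moisture content with an RMSEP-style error, follows; the spectra, component count and data split are placeholders, not the study's data or model.

```python
# Minimal PLS regression sketch on placeholder "spectra"; in the study the
# X block would be preprocessed NIR spectra and y the reference moisture.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
X_train, y_train = rng.random((40, 600)), rng.random(40) * 3.0
X_test, y_test = rng.random((10, 600)), rng.random(10) * 3.0

pls = PLSRegression(n_components=4).fit(X_train, y_train)
rmsep = float(np.sqrt(np.mean((pls.predict(X_test).ravel() - y_test) ** 2)))
```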
NASA Astrophysics Data System (ADS)
Kariminia, Shahab; Motamedi, Shervin; Shamshirband, Shahaboddin; Piri, Jamshid; Mohammadi, Kasra; Hashim, Roslan; Roy, Chandrabhushan; Petković, Dalibor; Bonakdari, Hossein
2016-05-01
Visitors use urban spaces based on their thermal perception and thermal environment. Thermal adaptation engages the user's behavioural, physiological and psychological aspects, which play critical roles in the user's ability to assess thermal environments. Previous studies have rarely addressed the effects of factors such as gender, age and locality on outdoor thermal comfort, particularly in a hot, dry climate. This study investigated the thermal comfort of visitors at two city squares in Iran based on their demographics as well as the role of the thermal environment. Assessing thermal comfort required physical measurements and a questionnaire survey. In this study, a non-linear model known as the neural network autoregressive with exogenous input (NN-ARX) was employed. Five indices, physiological equivalent temperature (PET), predicted mean vote (PMV), standard effective temperature (SET), thermal sensation votes (TSV) and mean radiant temperature (Tmrt), were trained and tested using the NN-ARX. The results were then compared to an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). The findings showed the superiority of the NN-ARX over the ANN and the ANFIS. For the NN-ARX model, the root mean square error (RMSE) and the mean absolute error (MAE) were 0.53 and 0.36 for the PET, 1.28 and 0.71 for the PMV, 2.59 and 1.99 for the SET, 0.29 and 0.08 for the TSV and, finally, 0.19 and 0.04 for the Tmrt.
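For reference, the two error measures quoted above are computed as below for any model's predictions against the observed index values; the arrays are placeholders.

```python
# RMSE and MAE as used to compare the NN-ARX, ANN and ANFIS predictions.
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

print(rmse([20.1, 21.4, 19.8], [19.9, 21.9, 19.5]), mae([20.1, 21.4, 19.8], [19.9, 21.9, 19.5]))
```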
Calibration and Evaluation of Ultrasound Thermography using Infrared Imaging
Hsiao, Yi-Sing; Deng, Cheri X.
2015-01-01
Real-time monitoring of the spatiotemporal evolution of tissue temperature is important to ensure safe and effective treatment in thermal therapies including hyperthermia and thermal ablation. Ultrasound thermography has been proposed as a non-invasive technique for temperature measurement, and accurate calibration of the temperature-dependent ultrasound signal changes against temperature is required. Here we report a method that uses infrared (IR) thermography for calibration and validation of ultrasound thermography. Using phantoms and cardiac tissue specimens subjected to high-intensity focused ultrasound (HIFU) heating, we simultaneously acquired ultrasound and IR imaging data from the same surface plane of a sample. The commonly used echo time shift-based method was chosen to compute ultrasound thermometry. We first correlated the ultrasound echo time shifts with IR-measured temperatures for material-dependent calibration and found that the calibration coefficient was positive for fat-mimicking phantom (1.49 ± 0.27) but negative for tissue-mimicking phantom (− 0.59 ± 0.08) and cardiac tissue (− 0.69 ± 0.18 °C-mm/ns). We then obtained the estimation error of the ultrasound thermometry by comparing against the IR measured temperature and revealed that the error increased with decreased size of the heated region. Consistent with previous findings, the echo time shifts were no longer linearly dependent on temperature beyond 45 – 50 °C in cardiac tissues. Unlike previous studies where thermocouples or water-bath techniques were used to evaluate the performance of ultrasound thermography, our results show that high resolution IR thermography provides a useful tool that can be applied to evaluate and understand the limitations of ultrasound thermography methods. PMID:26547634
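A minimal sketch of the material-dependent calibration step described above fits a linear coefficient relating the ultrasound echo time shift to the infrared-measured temperature rise; the data points are placeholders, and the fitted slope plays the role of the per-material coefficient reported in the study.

```python
# Linear calibration of echo time shift against IR temperature rise;
# numbers are illustrative, not the measured phantom or tissue data.
import numpy as np

dT_ir = np.array([0.0, 2.1, 4.0, 6.2, 8.1])        # IR temperature rise, deg C
shift = np.array([0.0, -1.4, -2.8, -4.3, -5.6])    # echo time shift, ns/mm

slope, intercept = np.polyfit(shift, dT_ir, 1)     # calibration coefficient (sign depends on material)
dT_us = slope * shift + intercept                  # ultrasound thermometry estimate
rms_err = float(np.sqrt(np.mean((dT_us - dT_ir) ** 2)))
```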
Calibration and Evaluation of Ultrasound Thermography Using Infrared Imaging.
Hsiao, Yi-Sing; Deng, Cheri X
2016-02-01
Real-time monitoring of the spatiotemporal evolution of tissue temperature is important to ensure safe and effective treatment in thermal therapies including hyperthermia and thermal ablation. Ultrasound thermography has been proposed as a non-invasive technique for temperature measurement, and accurate calibration of the temperature-dependent ultrasound signal changes against temperature is required. Here we report a method that uses infrared thermography for calibration and validation of ultrasound thermography. Using phantoms and cardiac tissue specimens subjected to high-intensity focused ultrasound heating, we simultaneously acquired ultrasound and infrared imaging data from the same surface plane of a sample. The commonly used echo time shift-based method was chosen to compute ultrasound thermometry. We first correlated the ultrasound echo time shifts with infrared-measured temperatures for material-dependent calibration and found that the calibration coefficient was positive for fat-mimicking phantom (1.49 ± 0.27) but negative for tissue-mimicking phantom (-0.59 ± 0.08) and cardiac tissue (-0.69 ± 0.18°C-mm/ns). We then obtained the estimation error of the ultrasound thermometry by comparing against the infrared-measured temperature and revealed that the error increased with decreased size of the heated region. Consistent with previous findings, the echo time shifts were no longer linearly dependent on temperature beyond 45°C-50°C in cardiac tissues. Unlike previous studies in which thermocouples or water bath techniques were used to evaluate the performance of ultrasound thermography, our results indicate that high-resolution infrared thermography is a useful tool that can be applied to evaluate and understand the limitations of ultrasound thermography methods. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Algorithmic Classification of Five Characteristic Types of Paraphasias.
Fergadiotis, Gerasimos; Gorman, Kyle; Bedrick, Steven
2016-12-01
This study was intended to evaluate a series of algorithms developed to perform automatic classification of paraphasic errors (formal, semantic, mixed, neologistic, and unrelated errors). We analyzed 7,111 paraphasias from the Moss Aphasia Psycholinguistics Project Database (Mirman et al., 2010) and evaluated the classification accuracy of 3 automated tools. First, we used frequency norms from the SUBTLEXus database (Brysbaert & New, 2009) to differentiate nonword errors and real-word productions. Then we implemented a phonological-similarity algorithm to identify phonologically related real-word errors. Last, we assessed the performance of a semantic-similarity criterion that was based on word2vec (Mikolov, Yih, & Zweig, 2013). Overall, the algorithmic classification replicated human scoring for the major categories of paraphasias studied with high accuracy. The tool that was based on the SUBTLEXus frequency norms was more than 97% accurate in making lexicality judgments. The phonological-similarity criterion was approximately 91% accurate, and the overall classification accuracy of the semantic classifier ranged from 86% to 90%. Overall, the results highlight the potential of tools from the field of natural language processing for the development of highly reliable, cost-effective diagnostic tools suitable for collecting high-quality measurement data for research and clinical purposes.
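Two of the classification stages described above can be sketched as follows: a lexicality check against a frequency-norm lookup and a cosine-similarity test between word vectors for the target and the produced error. The tiny lexicon, vectors and cutoff are illustrative stand-ins, not SUBTLEXus or the study's word2vec model.

```python
# Hedged sketch of lexicality and semantic-similarity checks; real use would
# load SUBTLEXus norms and pretrained word2vec vectors instead.
import numpy as np

freq_norms = {"cat": 125.3, "dog": 98.7, "castle": 12.4}     # placeholder lexicon

def is_real_word(token):
    return token.lower() in freq_norms                        # else: nonword branch

def cosine_similarity(vec_a, vec_b):
    a, b = np.asarray(vec_a, float), np.asarray(vec_b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

semantically_related = cosine_similarity([0.2, 0.9, 0.1], [0.25, 0.8, 0.05]) > 0.5
```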
A Quasiphysics Intelligent Model for a Long Range Fast Tool Servo
Liu, Qiang; Zhou, Xiaoqin; Lin, Jieqiong; Xu, Pengzi; Zhu, Zhiwei
2013-01-01
Accurately modeling the dynamic behaviors of a fast tool servo (FTS) is one of the key issues in the ultraprecision positioning of the cutting tool. Herein, a quasiphysics intelligent model (QPIM) integrating a linear physics model (LPM) and a radial basis function (RBF) based neural model (NM) is developed to accurately describe the dynamic behaviors of a voice coil motor (VCM) actuated long range fast tool servo (LFTS). To identify the parameters of the LPM, a novel Opposition-based Self-adaptive Replacement Differential Evolution (OSaRDE) algorithm is proposed, which has been shown to converge faster without compromising solution quality and to outperform similar evolutionary algorithms considered for comparison. The modeling errors of the LPM and the QPIM are investigated by experiments. The modeling error of the LPM presents an obvious trend component of about ±1.15% of the full span range, verifying the efficiency of the proposed OSaRDE algorithm for system identification. For the QPIM, the trend component in the residual error of the LPM is well suppressed, and the error of the QPIM remains at the noise level. All the results verify the efficiency and superiority of the proposed modeling and identification approaches. PMID:24163627
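A hedged sketch of the hybrid structure described above follows: a linear model captures the dominant dynamics and an RBF regressor is trained on its residual, so the combined prediction is the physics output plus a learned correction. The synthetic data, model order and kernel settings are assumptions, not the identified VCM-actuated LFTS model.

```python
# Quasi-physics hybrid sketch: linear least-squares model plus RBF residual
# regressor on synthetic data (not the paper's identified model).
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(3)
u = rng.random((500, 1))                              # drive signal (placeholder)
y = 3.0 * u.ravel() + 0.4 * np.sin(8.0 * u.ravel())   # "measured" response

a_hat = np.linalg.lstsq(u, y, rcond=None)[0]          # linear physics-like model
y_lpm = u @ a_hat
residual_model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=10.0).fit(u, y - y_lpm)
y_qpim = y_lpm + residual_model.predict(u)            # combined prediction
```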
Inspection of the Math Model Tools for On-Orbit Assessment of Impact Damage Report. Version 1.0
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Raju, Ivatury S.; Piascik, Robert S.; Kramer White, Julie; Labbe, Steve G.; Rotter, Hank A.
2005-01-01
In Spring of 2005, the NASA Engineering Safety Center (NESC) was engaged by the Space Shuttle Program (SSP) to peer review the suite of analytical tools being developed to support the determination of impact and damage tolerance of the Orbiter Thermal Protection Systems (TPS). The NESC formed an independent review team with the core disciplines of materials, flight sciences, structures, mechanical analysis and thermal analysis. The Math Model Tools reviewed included damage prediction and stress analysis, aeroheating analysis, and thermal analysis tools. Some tools are physics-based and other tools are empirically-derived. Each tool was created for a specific use and timeframe, including certification, real-time pre-launch assessments, and real-time on-orbit assessments. The tools are used together in an integrated strategy for assessing the ramifications of impact damage to tile and RCC. The NESC teams conducted a peer review of the engineering data package for each Math Model Tool. This report contains the summary of the team observations and recommendations from these reviews.
Validation of a general practice audit and data extraction tool.
Peiris, David; Agaliotis, Maria; Patel, Bindu; Patel, Anushka
2013-11-01
We assessed how accurately a common general practitioner (GP) audit tool extracts data from two software systems. First, pathology test codes were audited at 33 practices covering nine companies. Second, a manual audit of chronic disease data from 200 random patient records at two practices was compared with audit tool data. Pathology review: all companies assigned correct codes for cholesterol, creatinine and glycated haemoglobin; four companies assigned incorrect codes for albuminuria tests, precluding accurate detection with the audit tool. Case record review: there was strong agreement between the manual audit and the tool for all variables except chronic kidney disease diagnoses, which was due to a tool-related programming error. The audit tool accurately detected most chronic disease data in two GP record systems. The one exception, however, highlights the importance of surveillance systems to promptly identify errors. This will maximise potential for audit tools to improve healthcare quality.
Kalman Filtered MR Temperature Imaging for Laser Induced Thermal Therapies
Fuentes, D.; Yung, J.; Hazle, J. D.; Weinberg, J. S.; Stafford, R. J.
2013-01-01
The feasibility of using a stochastic form of Pennes bioheat model within a 3D finite element based Kalman filter (KF) algorithm is critically evaluated for the ability to provide temperature field estimates in the event of magnetic resonance temperature imaging (MRTI) data loss during laser induced thermal therapy (LITT). The ability to recover missing MRTI data was analyzed by systematically removing spatiotemporal information from a clinical MR-guided LITT procedure in human brain and comparing predictions in these regions to the original measurements. Performance was quantitatively evaluated in terms of a dimensionless L2 (RMS) norm of the temperature error weighted by acquisition uncertainty. During periods of no data corruption, observed error histories demonstrate that the Kalman algorithm does not alter the high quality temperature measurement provided by MR thermal imaging. The KF-MRTI implementation considered is seen to predict the bioheat transfer with RMS error < 4 for a short period of time, Δt < 10 sec, until the data corruption subsides. In its present form, the KF-MRTI method fails to compensate for consecutive time periods of data loss Δt > 10 sec. PMID:22203706
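A minimal linear Kalman filter sketch illustrates the mechanism described: the discretized bioheat model propagates the temperature estimate, and the MRTI measurement updates it whenever data are available; when data are lost, only the model prediction is carried forward. The matrices and names are placeholders, not the paper's 3D finite element implementation.

```python
import numpy as np

def kf_step(x, P, A, Q, H, R, z=None):
    """One predict/update cycle of a linear Kalman filter.

    x, P : state estimate and covariance (flattened temperature field)
    A, Q : state transition (discretized bioheat model) and process noise
    H, R : measurement model and MRTI acquisition noise covariance
    z    : MRTI measurement; pass None when the data are corrupted/missing,
           in which case only the model prediction is propagated.
    """
    # Predict with the (linearized) bioheat transfer model.
    x = A @ x
    P = A @ P @ A.T + Q
    if z is not None:
        # Update with the MR temperature image where it is available.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(P.shape[0]) - K @ H) @ P
    return x, P
```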
NASA Astrophysics Data System (ADS)
Decca, R. S.; Fischbach, E.; Klimchitskaya, G. L.; Krause, D. E.; López, D.; Mostepanenko, V. M.
2003-12-01
We report new constraints on extra-dimensional models and other physics beyond the standard model based on measurements of the Casimir force between two dissimilar metals for separations in the range 0.2–1.2 μm. The Casimir force between a Au-coated sphere and a Cu-coated plate of a microelectromechanical torsional oscillator was measured statically with an absolute error of 0.3 pN. In addition, the Casimir pressure between two parallel plates was determined dynamically with an absolute error of ≈0.6 mPa. Within the limits of experimental and theoretical errors, the results are in agreement with a theory that takes into account the finite conductivity and roughness of the two metals. The level of agreement between experiment and theory was then used to set limits on the predictions of extra-dimensional physics and thermal quantum field theory. It is shown that two theoretical approaches to the thermal Casimir force which predict effects linear in temperature are ruled out by these experiments. Finally, constraints on Yukawa corrections to Newton's law of gravity are strengthened by more than an order of magnitude in the range 56–330 nm.
Tang, Xiaolin Charlie; Nail, Steven L; Pikal, Michael J
2006-01-01
The purpose of this work was to study the factors that may cause systematic errors in the manometric temperature measurement (MTM) procedure used to determine product dry-layer resistance to vapor flow. Product temperature and dry-layer resistance were obtained using MTM software installed on a laboratory freeze-dryer. The MTM resistance values were compared with the resistance values obtained using the "vial method." The product dry-layer resistances obtained by MTM, assuming fixed temperature difference (DeltaT; 2 degrees C), were lower than the actual values, especially when the product temperatures and sublimation rates were low, but with DeltaT determined from the pressure rise data, more accurate results were obtained. MTM resistance values were generally lower than the values obtained with the vial method, particularly whenever freeze-drying was conducted under conditions that produced large variations in product temperature (ie, low shelf temperature, low chamber pressure, and without thermal shields). In an experiment designed to magnify temperature heterogeneity, MTM resistance values were much lower than the simple average of the product resistances. However, in experiments where product temperatures were homogenous, good agreement between MTM and "vial-method" resistances was obtained. The reason for the low MTM resistance problem is the fast vapor pressure rise from a few "warm" edge vials or vials with low resistance. With proper use of thermal shields, and the evaluation of DeltaT from the data, MTM resistance data are accurate. Thus, the MTM method for determining dry-layer resistance is a useful tool for freeze-drying process analytical technology.
NASA Astrophysics Data System (ADS)
Meng, Chao; Zhou, Hong; Cong, Dalong; Wang, Chuanwei; Zhang, Peng; Zhang, Zhihui; Ren, Luquan
2012-06-01
The thermal fatigue behavior of hot-work tool steel processed by a biomimetic coupled laser remelting process shows a remarkable improvement compared to the untreated sample. The 'dowel pin effect', the 'dam effect' and the 'fence effect' of non-smooth units are the main reasons for the conspicuous improvement of the thermal fatigue behavior. In order to further enhance the 'dowel pin effect', the 'dam effect' and the 'fence effect', this study investigated the effect of different unit morphologies (including 'prolate', 'U' and 'V' morphology) and of the same unit morphology in different sizes on the thermal fatigue behavior of H13 hot-work tool steel. The results showed that the 'U' morphology unit had the best thermal fatigue behavior, followed by the 'V' morphology, which was in turn better than the 'prolate' morphology unit; when the unit morphology was identical, the thermal fatigue behavior of samples with large unit sizes was better than that of samples with small sizes.
Variational bounds on the temperature distribution
NASA Astrophysics Data System (ADS)
Kalikstein, Kalman; Spruch, Larry; Baider, Alberto
1984-02-01
Upper and lower stationary or variational bounds are obtained for functions which satisfy parabolic linear differential equations. (The error in the bound, that is, the difference between the bound on the function and the function itself, is of second order in the error in the input function, and the error is of known sign.) The method is applicable to a range of functions associated with equalization processes, including heat conduction, mass diffusion, electric conduction, fluid friction, the slowing down of neutrons, and certain limiting forms of the random walk problem, under conditions which are not unduly restrictive: in heat conduction, for example, we do not allow the thermal coefficients or the boundary conditions to depend upon the temperature, but the thermal coefficients can be functions of space and time and the geometry is unrestricted. The variational bounds follow from a maximum principle obeyed by the solutions of these equations.
Location precision analysis of stereo thermal anti-sniper detection system
NASA Astrophysics Data System (ADS)
He, Yuqing; Lu, Ya; Zhang, Xiaoyan; Jin, Weiqi
2012-06-01
Anti-sniper detection devices are an urgent requirement in modern warfare, and the precision of the anti-sniper detection system is especially important. This paper discusses the location precision analysis of an anti-sniper detection system based on a dual thermal imaging system. It mainly discusses the two aspects that produce the error: the digital quantization effects of the camera, and the error in estimating the coordinates of the bullet trajectory from the infrared images during image matching. The error-analysis formula is derived from the stereovision model and the camera's digital quantization effects. From this, the relationship between detection accuracy and the system's parameters is obtained. The analysis in this paper provides the theoretical basis for error compensation algorithms intended to improve the accuracy of 3D reconstruction of the bullet trajectory in anti-sniper detection devices.
Cross-Spectrum PM Noise Measurement, Thermal Energy, and Metamaterial Filters.
Gruson, Yannick; Giordano, Vincent; Rohde, Ulrich L; Poddar, Ajay K; Rubiola, Enrico
2017-03-01
Virtually all commercial instruments for the measurement of the oscillator PM noise make use of the cross-spectrum method (arXiv:1004.5539 [physics.ins-det], 2010). High sensitivity is achieved by correlation and averaging on two equal channels, which measure the same input, and reject the background of the instrument. We show that a systematic error is always present if the thermal energy of the input power splitter is not accounted for. Such error can result in noise underestimation up to a few decibels in the lowest-noise quartz oscillators, and in an invalid measurement in the case of cryogenic oscillators. As another alarming fact, the presence of metamaterial components in the oscillator results in unpredictable behavior and large errors, even in well controlled experimental conditions. We observed a spread of 40 dB in the phase noise spectra of an oscillator, just replacing the output filter.
Advancing Technology for Starlight Suppression via an External Occulter
NASA Technical Reports Server (NTRS)
Kasdin, N. J.; Spergel, D. N.; Vanderbei, R. J.; Lisman, D.; Shaklan, S.; Thomson, M.; Walkemeyer, P.; Bach, V.; Oakes, E.; Cady, E.;
2011-01-01
External occulters provide the starlight suppression needed for detecting and characterizing exoplanets with a much simpler telescope and instrument than is required for the equivalent performing coronagraph. In this paper we describe progress on our Technology Development for Exoplanet Missions project to design, manufacture, and measure a prototype occulter petal. We focus on the key requirement of manufacturing a precision petal while controlling its shape within precise tolerances. The required tolerances are established by modeling the effect that various mechanical and thermal errors have on scatter in the telescope image plane and by suballocating the allowable contrast degradation between these error sources. We discuss the deployable starshade design, representative error budget, thermal analysis, and prototype manufacturing. We also present our metrology system and methodology for verifying that the petal shape meets the contrast requirement. Finally, we summarize the progress to date building the prototype petal.
Machine tools error characterization and compensation by on-line measurement of artifact
NASA Astrophysics Data System (ADS)
Wahid Khan, Abdul; Chen, Wuyi; Wu, Lili
2009-11-01
Most manufacturing machine tools are utilized for mass production or batch production with high accuracy under a deterministic manufacturing principle. Volumetric accuracy of machine tools depends on the positional accuracy of the cutting tool, probe or end effector relative to the workpiece in the workspace volume. In this research paper, a methodology is presented for volumetric calibration of machine tools by on-line measurement of an artifact or an object of a similar type. The machine tool geometric error characterization was carried out through a standard or an artifact having a geometry similar to the mass production or batch production product. The artifact was measured at an arbitrary position in the volumetric workspace with a calibrated Renishaw touch trigger probe system. Positional errors were stored in a computer for compensation purposes, so that the manufacturing batch could subsequently be run through compensated codes. This methodology was found to be quite effective for manufacturing high-precision components with improved dimensional accuracy and reliability. Calibration by on-line measurement makes it possible to improve the manufacturing process through the deterministic manufacturing principle and was found efficient and economical, although it is limited to the workspace or envelope surface of the measured artifact's geometry or profile.
NASA Astrophysics Data System (ADS)
Akita, T.; Takaki, R.; Shima, E.
2012-04-01
An adaptive estimation method for the spacecraft thermal mathematical model is presented. The method is based on the ensemble Kalman filter, which can effectively handle the nonlinearities contained in the thermal model. The state space equations of the thermal mathematical model are derived, where both temperature and uncertain thermal characteristic parameters are considered as the state variables. In the method, the thermal characteristic parameters are automatically estimated as the outputs of the filtered state variables, whereas, in the usual thermal model correlation, they are manually identified by experienced engineers using a trial-and-error approach. A numerical experiment on a simple small satellite is provided to verify the effectiveness of the presented method.
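A compact sketch of the ensemble Kalman filter analysis step on an augmented state, in which uncertain thermal parameters are appended to the temperature vector so they are corrected through their sampled covariance with the sensed temperatures. The perturbed-observation variant and the function names are assumptions, not the authors' code.

```python
import numpy as np

def enkf_update(ensemble, observe, z, obs_noise_std, rng=None):
    """One EnKF analysis step on an augmented state [temperatures, parameters].

    ensemble : (n_members, n_state) array; parameters ride along with the
               temperatures and are corrected through the sampled covariance.
    observe  : function mapping a state vector to predicted sensor readings
    z        : measured temperatures at the sensor locations
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = ensemble.shape[0]
    hx = np.array([observe(member) for member in ensemble])
    x_mean, h_mean = ensemble.mean(axis=0), hx.mean(axis=0)
    X, Y = ensemble - x_mean, hx - h_mean
    # Sample cross-covariance and innovation covariance.
    Pxy = X.T @ Y / (n - 1)
    Pyy = Y.T @ Y / (n - 1) + np.eye(len(z)) * obs_noise_std ** 2
    K = Pxy @ np.linalg.inv(Pyy)
    # Perturbed observations keep the analysis ensemble spread consistent.
    perturbed = z + rng.normal(0.0, obs_noise_std, size=(n, len(z)))
    return ensemble + (perturbed - hx) @ K.T
```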
Thermal behavior of the Medicina 32-meter radio telescope
NASA Astrophysics Data System (ADS)
Pisanu, Tonino; Buffa, Franco; Morsiani, Marco; Pernechele, Claudio; Poppi, Sergio
2010-07-01
We studied the thermal effects on the 32 m diameter radio-telescope managed by the Institute of Radio Astronomy (IRA), Medicina, Bologna, Italy. The preliminary results show that thermal gradients deteriorate the pointing performance of the antenna. Data has been collected by using: a) two inclinometers mounted near the elevation bearing and on the central part of the alidade structure; b) a non contact laser alignment optical system capable of measuring the secondary mirror position; c) twenty thermal sensors mounted on the alidade trusses. Two series of measurements were made, the first series was performed by placing the antenna in stow position, the second series was performed while tracking a circumpolar astronomical source. When the antenna was in stow position we observed a strong correlation between the inclinometer measurements and the differential temperature. The latter was measured with the sensors located on the South and North sides of the alidade, thus indicating that the inclinometers track well the thermal deformation of the alidade. When the antenna pointed at the source we measured: pointing errors, the inclination of the alidade, the temperature of the alidade components and the subreflector position. The pointing errors measured on-source were 15-20 arcsec greater than those measured with the inclinometer.
Graphite fiber reinforced structure for supporting machine tools
Knight, Jr., Charles E.; Kovach, Louis; Hurst, John S.
1978-01-01
Machine tools utilized in precision machining operations require tool support structures which exhibit minimal deflection, thermal expansion and vibration characteristics. The tool support structure of the present invention is a graphite fiber reinforced composite in which layers of the graphite fibers or yarn are disposed in a 0/90° pattern and bonded together with an epoxy resin. The finished composite possesses a low coefficient of thermal expansion and a substantially greater elastic modulus, stiffness-to-weight ratio, and damping factor than a conventional steel tool support utilized in similar machining operations.
Liquid Medication Dosing Errors by Hispanic Parents: Role of Health Literacy and English Proficiency
Harris, Leslie M.; Dreyer, Benard; Mendelsohn, Alan; Bailey, Stacy C.; Sanders, Lee M.; Wolf, Michael S.; Parker, Ruth M.; Patel, Deesha A.; Kim, Kwang Youn A.; Jimenez, Jessica J.; Jacobson, Kara; Smith, Michelle; Yin, H. Shonna
2016-01-01
Objective: Hispanic parents in the US are disproportionately affected by low health literacy and limited English proficiency (LEP). We examined associations between health literacy, LEP, and liquid medication dosing errors in Hispanic parents. Methods: Cross-sectional analysis of data from a multisite randomized controlled experiment to identify best practices for the labeling/dosing of pediatric liquid medications (SAFE Rx for Kids study); 3 urban pediatric clinics. Analyses were limited to Hispanic parents of children <8 years, with health literacy and LEP data (n=1126). Parents were randomized to 5 groups that varied by pairing of units of measurement on the label/dosing tool. Each parent measured 9 doses [3 amounts (2.5, 5, 7.5 mL) using 3 tools (2 syringes (0.2, 0.5 mL increment), 1 cup)] in random order. Dependent variable: dosing error = >20% dose deviation. Predictor variables: health literacy (Newest Vital Sign) [limited=0–3; adequate=4–6], LEP (speaks English less than "very well"). Results: 83.1% made dosing errors (mean(SD) errors/parent=2.2(1.9)). Parents with limited health literacy and LEP had the greatest odds of making a dosing error compared to parents with adequate health literacy who were English proficient (% trials with errors/parent=28.8 vs. 12.9%; AOR=2.2[1.7–2.8]). Parents with limited health literacy who were English proficient were also more likely to make errors (% trials with errors/parent=18.8%; AOR=1.4[1.1–1.9]). Conclusion: Dosing errors are common among Hispanic parents; those with both LEP and limited health literacy are at particular risk. Further study is needed to examine how the redesign of medication labels and dosing tools could reduce literacy- and language-associated disparities in dosing errors. PMID:28477800
Report of the 1988 2-D Intercomparison Workshop, chapter 3
NASA Technical Reports Server (NTRS)
Jackman, Charles H.; Brasseur, Guy; Soloman, Susan; Guthrie, Paul D.; Garcia, Rolando; Yung, Yuk L.; Gray, Lesley J.; Tung, K. K.; Ko, Malcolm K. W.; Isaken, Ivar
1989-01-01
Several factors contribute to the errors encountered. With the exception of the line-by-line model, all of the models employ simplifying assumptions that place fundamental limits on their accuracy and range of validity. For example, all 2-D modeling groups use the diffusivity factor approximation. This approximation produces little error in tropospheric H2O and CO2 cooling rates, but can produce significant errors in CO2 and O3 cooling rates at the stratopause. All models suffer from fundamental uncertainties in shapes and strengths of spectral lines. Thermal flux algorithms being used in 2-D tracer transport models produce cooling rates that differ by as much as 40 percent for the same input model atmosphere. Disagreements of this magnitude are important since the thermal cooling rates must be subtracted from the almost-equal solar heating rates to derive the net radiative heating rates and the 2-D model diabatic circulation. For much of the annual cycle, the net radiative heating rates are comparable in magnitude to the cooling rate differences described. Many of the models underestimate the cooling rates in the middle and lower stratosphere. The consequences of these errors for the net heating rates and the diabatic circulation will depend on their meridional structure, which was not tested here. Other models underestimate the cooling near 1 mbar. Such errors pose potential problems for future interactive ozone assessment studies, since they could produce artificially high temperatures and increased O3 destruction at these levels. These concerns suggest that a great deal of work is needed to improve the performance of thermal cooling rate algorithms used in the 2-D tracer transport models.
Investigating System Dependability Modeling Using AADL
NASA Technical Reports Server (NTRS)
Hall, Brendan; Driscoll, Kevin R.; Madl, Gabor
2013-01-01
This report describes Architecture Analysis & Design Language (AADL) models for a diverse set of fault-tolerant, embedded data networks and describes the methods and tools used to create these models. It also includes error models per the AADL Error Annex. Some networks were modeled using Error Detection Isolation Containment Types (EDICT). This report gives a brief description of each of the networks, a description of its modeling, the model itself, and evaluations of the tools used for creating the models. The methodology includes a naming convention that supports a systematic way to enumerate all of the potential failure modes.
Simba, Kenneth Renny; Bui, Ba Dinh; Msukwa, Mathew Renny; Uchiyama, Naoki
2018-04-01
In feed drive systems, particularly machine tools, the contour error is more significant than the individual axial tracking errors from the viewpoint of enhancing precision in manufacturing and production systems. The contour error must be within the permissible tolerance of given products. In machining complex or sharp-corner products, large contour errors occur mainly owing to discontinuous trajectories and the existence of nonlinear uncertainties. Therefore, it is indispensable to design robust controllers that can enhance the tracking ability of feed drive systems. In this study, an iterative learning contouring controller consisting of a classical Proportional-Derivative (PD) controller and a disturbance observer is proposed. The proposed controller was evaluated experimentally by using a typical sharp-corner trajectory, and its performance was compared with that of conventional controllers. The results revealed that the maximum contour error can be reduced by about 37% on average. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
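A minimal sketch of the trial-to-trial learning mechanism: the feedforward command for the next trial is corrected by the time-advanced contour error recorded in the previous trial, while a classical PD term acts within each trial. The gains, the shift, and the separation into two helpers are illustrative assumptions, not the paper's tuned controller or disturbance observer.

```python
import numpy as np

def ilc_update(u_prev, contour_err_prev, learning_gain=0.5, shift=1):
    """One iterative-learning update of the feedforward command.

    u_prev           : feedforward input applied in the previous trial
    contour_err_prev : contour error recorded over that trial (same length)
    The next-trial input is corrected by the time-advanced previous error;
    gain and shift are illustrative tuning choices, not the paper's values.
    """
    advanced_err = np.roll(contour_err_prev, -shift)
    advanced_err[-shift:] = 0.0  # no error information beyond the trial end
    return u_prev + learning_gain * advanced_err

def pd_feedback(err, err_prev, dt, kp=80.0, kd=2.0):
    """Classical PD term applied within each trial on top of the ILC input."""
    return kp * err + kd * (err - err_prev) / dt
```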
Improved thermal lattice Boltzmann model for simulation of liquid-vapor phase change
NASA Astrophysics Data System (ADS)
Li, Qing; Zhou, P.; Yan, H. J.
2017-12-01
In this paper, an improved thermal lattice Boltzmann (LB) model is proposed for simulating liquid-vapor phase change, which is aimed at improving an existing thermal LB model for liquid-vapor phase change [S. Gong and P. Cheng, Int. J. Heat Mass Transfer 55, 4923 (2012), 10.1016/j.ijheatmasstransfer.2012.04.037]. First, we emphasize that the replacement of ∇·(λ∇T)/(ρc_V) with ∇·(χ∇T) is an inappropriate treatment for diffuse interface modeling of liquid-vapor phase change. Furthermore, the error terms ∂_t0(Tv) + ∇·(Tvv), which exist in the macroscopic temperature equation recovered from the previous model, are eliminated in the present model in a way that is consistent with the philosophy of the LB method. Moreover, the discrete effect of the source term is also eliminated in the present model. Numerical simulations are performed for droplet evaporation and bubble nucleation to validate the capability of the model for simulating liquid-vapor phase change. It is shown that the numerical results of the improved model agree well with those of a finite-difference scheme. Meanwhile, it is found that the replacement of ∇·(λ∇T)/(ρc_V) with ∇·(χ∇T) leads to significant numerical errors, and the error terms in the recovered macroscopic temperature equation also result in considerable errors.
Positional reference system for ultraprecision machining
Arnold, Jones B.; Burleson, Robert R.; Pardue, Robert M.
1982-01-01
A stable positional reference system for use in improving the cutting tool-to-part contour position in numerical controlled-multiaxis metal turning machines is provided. The reference system employs a plurality of interferometers referenced to orthogonally disposed metering bars which are substantially isolated from machine strain induced position errors for monitoring the part and tool positions relative to the metering bars. A microprocessor-based control system is employed in conjunction with the plurality of position interferometers and part contour description data inputs to calculate error components for each axis of movement and output them to corresponding axis drives with appropriate scaling and error compensation. Real-time position control, operating in combination with the reference system, makes possible the positioning of the cutting points of a tool along a part locus with a substantially greater degree of accuracy than has been attained previously in the art by referencing and then monitoring only the tool motion relative to a reference position located on the machine base.
Positional reference system for ultraprecision machining
Arnold, J.B.; Burleson, R.R.; Pardue, R.M.
1980-09-12
A stable positional reference system for use in improving the cutting tool-to-part contour position in numerical controlled-multiaxis metal turning machines is provided. The reference system employs a plurality of interferometers referenced to orthogonally disposed metering bars which are substantially isolated from machine strain induced position errors for monitoring the part and tool positions relative to the metering bars. A microprocessor-based control system is employed in conjunction with the plurality of position interferometers and part contour description data inputs to calculate error components for each axis of movement and output them to corresponding axis drives with appropriate scaling and error compensation. Real-time position control, operating in combination with the reference system, makes possible the positioning of the cutting points of a tool along a part locus with a substantially greater degree of accuracy than has been attained previously in the art by referencing and then monitoring only the tool motion relative to a reference position located on the machine base.
de Wet, C; Bowie, P
2009-04-01
A multi-method strategy has been proposed to understand and improve the safety of primary care. The trigger tool is a relatively new method that has shown promise in American and secondary healthcare settings. It involves the focused review of a random sample of patient records using a series of "triggers" that alert reviewers to potential errors and previously undetected adverse events. To develop and test a global trigger tool to detect errors and adverse events in primary-care records. Trigger tool development was informed by previous research and content validated by expert opinion. The tool was applied by trained reviewers who worked in pairs to conduct focused audits of 100 randomly selected electronic patient records in each of five urban general practices in central Scotland. Review of 500 records revealed 2251 consultations and 730 triggers. An adverse event was found in 47 records (9.4%), indicating that harm occurred at a rate of one event per 48 consultations. Of these, 27 were judged to be preventable (42%). A further 17 records (3.4%) contained evidence of a potential adverse event. Harm severity was low to moderate for most patients (82.9%). Error and harm rates were higher in those aged > or =60 years, and most were medication-related (59%). The trigger tool was successful in identifying undetected patient harm in primary-care records and may be the most reliable method for achieving this. However, the feasibility of its routine application is open to question. The tool may have greater utility as a research rather than an audit technique. Further testing in larger, representative study samples is required.
Self-sharpening-effect of nickel-diamond coatings sprayed by HVOF
NASA Astrophysics Data System (ADS)
Tillmann, W.; Brinkhoff, A.; Schaak, C.; Zajaczkowski, J.
2017-03-01
The durability of stone working and drilling tools is an increasingly significant requirement in industrial applications. These tools are mainly produced by brazing diamond metal matrix composite inserts to the tool body. The inserts are produced by sintering diamonds and metal powder (e.g. nickel). If the wear is too high, the diamonds break out of the metal matrix and other diamonds are uncovered; this effect is called self-sharpening. Diamonds, however, are difficult to handle because of their thermal sensitivity. Owing to the high thermal influence, the manufacturing costs, and the complicated manufacturing route (first sintering, then brazing), there is a great need for alternative production methods for such tools. One alternative for producing wear-resistant and self-sharpening coatings is thermal spraying, as examined in this paper. An advantage of thermal spray processes is their smaller thermal influence on the diamond, owing to the short dwell time in the flame. To reduce the thermal influence during spraying, nickel-coated diamonds were used in the HVOF process (high velocity oxygen fuel process). The wear resistance was subsequently investigated by means of a standardized ball-on-disc test. Furthermore, a SEM (scanning electron microscope) was used to gain information about the wear mechanism and the self-sharpening effect of the coating.
NASA Technical Reports Server (NTRS)
Presser, L.
1978-01-01
An integrated set of FORTRAN tools that are commercially available is described. The basic purpose of various tools is summarized and their economic impact highlighted. The areas addressed by these tools include: code auditing, error detection, program portability, program instrumentation, documentation, clerical aids, and quality assurance.
Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie
2018-01-01
As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station and time-sharing measurement with a laser tracker is introduced on the basis of the global positioning system (GPS) principle in this paper. For the proposed method, how to accurately determine the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of the motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model for detecting the motion error of a machine tool with this method, the analytical algorithm for base station calibration and measuring point determination is deduced without the need to select an initial iterative value in the calculation. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which produces a distorted result. In order to overcome the limitation of the original algorithm, an improved analytical algorithm is also derived. Meanwhile, the calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of some iterative algorithms, such as the Gauss-Newton algorithm and the Levenberg-Marquardt algorithm. The experiment further verifies the feasibility and effectiveness of the improved algorithm. In addition, the different motion areas of the machine tool have a certain influence on the calibration accuracy of the base station, and the corresponding influence of measurement error on the calibration result, which depends on the condition number of the coefficient matrix, is analyzed.
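The GPS-like point determination can be sketched as follows: subtracting one range equation from the others cancels the quadratic term in the unknown coordinates, so the measuring point follows from a linear least-squares solve with no initial iterative value, which is the spirit of the analytical algorithm described. The station count, coordinates, and the idealized absolute-range measurement are illustrative assumptions.

```python
import numpy as np

def locate_point(stations, ranges):
    """Closed-form multilateration of one measuring point (GPS-like principle).

    stations : (m, 3) calibrated base-station coordinates (m >= 4, not coplanar)
    ranges   : (m,) distances measured from each station to the point
    Subtracting the first range equation from the others removes the quadratic
    term, giving a linear least-squares problem with no initial guess needed.
    """
    b0, r0 = stations[0], ranges[0]
    A = 2.0 * (stations[1:] - b0)
    rhs = (r0 ** 2 - ranges[1:] ** 2
           + (stations[1:] ** 2).sum(axis=1) - (b0 ** 2).sum())
    point, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return point

# Illustrative check with four synthetic stations and a known point.
if __name__ == "__main__":
    stations = np.array([[0., 0., 0.], [2., 0., 0.], [0., 2., 0.], [0., 0., 2.]])
    true_point = np.array([0.4, 0.7, 0.3])
    ranges = np.linalg.norm(stations - true_point, axis=1)
    print(locate_point(stations, ranges))  # ~ [0.4, 0.7, 0.3]
```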
Design of a Pneumatic Tool for Manual Drilling Operations in Confined Spaces
NASA Astrophysics Data System (ADS)
Janicki, Benjamin
This master's thesis describes the design process and testing results for a pneumatically actuated, manually operated tool for confined-space drilling operations. The purpose of this device is to back-drill pilot holes inside a commercial airplane wing. It is lightweight, and a "locator pin" enables the operator to align the drill over a pilot hole. A suction pad stabilizes the system, and an air motor and flexible drive shaft power the drill. Two testing procedures were performed to determine the practicality of this prototype. The first was the "offset drill test", which quantified the exit hole position error due to an initial position error relative to the original pilot hole. The results displayed a linear relationship, and it was determined that position errors of less than .060" would prevent the need for rework, with errors of up to .030" considered acceptable. For the second test, a series of holes were drilled with the pneumatic tool and analyzed for position error, diameter range, and cycle time. The position errors and hole diameter range were within the allowed tolerances. The average cycle time was 45 seconds, 73 percent of which was for drilling the hole and 27 percent of which was for positioning the device. Recommended improvements are discussed in the conclusion, and include a more durable flexible drive shaft, a damper for drill feed control, and a more stable locator pin.
Thermal and heat flow instrumentation for the space shuttle Thermal Protection System
NASA Technical Reports Server (NTRS)
Hartman, G. J.; Neuner, G. J.; Pavlosky, J.
1974-01-01
The 100 mission lifetime requirement for the space shuttle orbiter vehicle dictates a unique set of requirements for the Thermal Protection System (TPS) thermal and heat flow instrumentation. This paper describes the design and development of such instrumentation with emphasis on assessment of the accuracy of the measurements when the instrumentation is an integral part of the TPS. The temperature and heat flow sensors considered for this application are described and the optimum choices discussed. Installation techniques are explored and the resulting impact on the system error defined.
NASA Astrophysics Data System (ADS)
Richter, J.; Mayer, J.; Weigand, B.
2018-02-01
Non-resonant laser-induced thermal acoustics (LITA) was applied to measure Mach number, temperature and turbulence level along the centerline of a transonic nozzle flow. The accuracy of the measurement results was systematically studied regarding misalignment of the interrogation beam and frequency analysis of the LITA signals. 2D steady-state Reynolds-averaged Navier-Stokes (RANS) simulations were performed for reference. The simulations were conducted using ANSYS CFX 18 employing the shear-stress transport turbulence model. Post-processing of the LITA signals is performed by applying a discrete Fourier transformation (DFT) to determine the beat frequencies. It is shown that the systematical error of the DFT, which depends on the number of oscillations, signal chirp, and damping rate, is less than 1.5% for our experiments resulting in an average error of 1.9% for Mach number. Further, the maximum calibration error is investigated for a worst-case scenario involving maximum in situ readjustment of the interrogation beam within the limits of constructive interference. It is shown that the signal intensity becomes zero if the interrogation angle is altered by 2%. This, together with the accuracy of frequency analysis, results in an error of about 5.4% for temperature throughout the nozzle. Comparison with numerical results shows good agreement within the error bars.
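A small sketch of the signal post-processing step: a zero-padded DFT locates the dominant beat frequency, and a parabolic fit around the peak bin reduces the discretization error that the paper quantifies. The padding factor and the interpolation are assumptions, not the authors' exact procedure.

```python
import numpy as np

def beat_frequency(signal, dt, zero_pad_factor=8):
    """Estimate the dominant beat frequency of a LITA-like trace with a DFT.

    signal : sampled detector signal (mean removed below to suppress DC)
    dt     : sampling interval in seconds
    Zero padding refines the frequency grid; a parabolic fit around the peak
    bin further reduces the discretization error of the DFT.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    n = zero_pad_factor * len(x)
    spectrum = np.abs(np.fft.rfft(x, n=n))
    freqs = np.fft.rfftfreq(n, d=dt)
    # Peak search excludes the DC bin and the last bin so interpolation is safe.
    k = int(np.argmax(spectrum[1:-1]) + 1)
    a, b, c = spectrum[k - 1], spectrum[k], spectrum[k + 1]
    denom = a - 2.0 * b + c
    delta = 0.5 * (a - c) / denom if denom != 0.0 else 0.0
    return (k + delta) * (freqs[1] - freqs[0])
```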
Gaussian Hypothesis Testing and Quantum Illumination.
Wilde, Mark M; Tomamichel, Marco; Lloyd, Seth; Berta, Mario
2017-09-22
Quantum hypothesis testing is one of the most basic tasks in quantum information theory and has fundamental links with quantum communication and estimation theory. In this paper, we establish a formula that characterizes the decay rate of the minimal type-II error probability in a quantum hypothesis test of two Gaussian states given a fixed constraint on the type-I error probability. This formula is a direct function of the mean vectors and covariance matrices of the quantum Gaussian states in question. We give an application to quantum illumination, which is the task of determining whether there is a low-reflectivity object embedded in a target region with a bright thermal-noise bath. For the asymmetric-error setting, we find that a quantum illumination transmitter can achieve an error probability exponent stronger than a coherent-state transmitter of the same mean photon number, and furthermore, that it requires far fewer trials to do so. This occurs when the background thermal noise is either low or bright, which means that a quantum advantage is even easier to witness than in the symmetric-error setting because it occurs for a larger range of parameters. Going forward from here, we expect our formula to have applications in settings well beyond those considered in this paper, especially to quantum communication tasks involving quantum Gaussian channels.
Influence of Process Parameters on the Process Efficiency in Laser Metal Deposition Welding
NASA Astrophysics Data System (ADS)
Güpner, Michael; Patschger, Andreas; Bliedtner, Jens
Conventionally manufactured tools are often completely constructed of a high-alloyed, expensive tool steel. An alternative way to manufacture tools is the combination of a cost-efficient, mild steel and a functional coating in the interaction zone of the tool. Thermal processing methods, like laser metal deposition, are always characterized by thermal distortion. The resistance against the thermal distortion decreases with the reduction of the material thickness. As a consequence, there is a necessity of a special process management for the laser based coating of thin parts or tools. The experimental approach in the present paper is to keep the energy and the mass per unit length constant by varying the laser power, the feed rate and the powder mass flow. The typical seam parameters are measured in order to characterize the cladding process, define process limits and evaluate the process efficiency. Ways to optimize dilution, angular distortion and clad height are presented.
Cost minimizing of cutting process for CNC thermal and water-jet machines
NASA Astrophysics Data System (ADS)
Tavaeva, Anastasia; Kurennov, Dmitry
2015-11-01
This paper deals with the optimization of the cutting process for CNC thermal and water-jet machines. The accuracy of calculating the objective function parameters for the optimization problem is investigated. This paper shows that the working tool path speed is not a constant value; it depends on several parameters that are described in this paper. The dependence of the working tool path speed on the number of NC program frames, the length of straight cuts, and the part configuration is presented. Based on the results obtained, correction coefficients for the working tool speed are defined. Additionally, the optimization problem may be solved by using a mathematical model. The model takes into account the additional restrictions of thermal cutting (choice of piercing point and tool output point, precedence conditions, thermal deformations). The second part of the paper considers non-standard cutting techniques, which may reduce cutting cost and time compared with standard cutting techniques, and examines the effectiveness of their application. At the end of the paper, future research directions are indicated.
Multipath induced errors in meteorological Doppler/interferometer location systems
NASA Technical Reports Server (NTRS)
Wallace, R. G.
1984-01-01
One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.
Senol Cali, Damla; Kim, Jeremie S; Ghose, Saugata; Alkan, Can; Mutlu, Onur
2018-04-02
Nanopore sequencing technology has the potential to render other sequencing technologies obsolete with its ability to generate long reads and provide portability. However, high error rates of the technology pose a challenge while generating accurate genome assemblies. The tools used for nanopore sequence analysis are of critical importance, as they should overcome the high error rates of the technology. Our goal in this work is to comprehensively analyze current publicly available tools for nanopore sequence analysis to understand their advantages, disadvantages and performance bottlenecks. It is important to understand where the current tools do not perform well to develop better tools. To this end, we (1) analyze the multiple steps and the associated tools in the genome assembly pipeline using nanopore sequence data, and (2) provide guidelines for determining the appropriate tools for each step. Based on our analyses, we make four key observations: (1) the choice of the tool for basecalling plays a critical role in overcoming the high error rates of nanopore sequencing technology. (2) Read-to-read overlap finding tools, GraphMap and Minimap, perform similarly in terms of accuracy. However, Minimap has a lower memory usage, and it is faster than GraphMap. (3) There is a trade-off between accuracy and performance when deciding on the appropriate tool for the assembly step. The fast but less accurate assembler Miniasm can be used for quick initial assembly, and further polishing can be applied on top of it to increase the accuracy, which leads to faster overall assembly. (4) The state-of-the-art polishing tool, Racon, generates high-quality consensus sequences while providing a significant speedup over another polishing tool, Nanopolish. We analyze various combinations of different tools and expose the trade-offs between accuracy, performance, memory usage and scalability. We conclude that our observations can guide researchers and practitioners in making conscious and effective choices for each step of the genome assembly pipeline using nanopore sequence data. Also, with the help of bottlenecks we have found, developers can improve the current tools or build new ones that are both accurate and fast, to overcome the high error rates of the nanopore sequencing technology.
The next organizational challenge: finding and addressing diagnostic error.
Graber, Mark L; Trowbridge, Robert; Myers, Jennifer S; Umscheid, Craig A; Strull, William; Kanter, Michael H
2014-03-01
Although health care organizations (HCOs) are intensely focused on improving the safety of health care, efforts to date have almost exclusively targeted treatment-related issues. The literature confirms that the approaches HCOs use to identify adverse medical events are not effective in finding diagnostic errors, so the initial challenge is to identify cases of diagnostic error. WHY HEALTH CARE ORGANIZATIONS NEED TO GET INVOLVED: HCOs are preoccupied with many quality- and safety-related operational and clinical issues, including performance measures. The case for paying attention to diagnostic errors, however, is based on the following four points: (1) diagnostic errors are common and harmful, (2) high-quality health care requires high-quality diagnosis, (3) diagnostic errors are costly, and (4) HCOs are well positioned to lead the way in reducing diagnostic error. FINDING DIAGNOSTIC ERRORS: Current approaches to identifying diagnostic errors, such as occurrence screens, incident reports, autopsy, and peer review, were not designed to detect diagnostic issues (or problems of omission in general) and/or rely on voluntary reporting. The realization that the existing tools are inadequate has spurred efforts to identify novel tools that could be used to discover diagnostic errors or breakdowns in the diagnostic process that are associated with errors. New approaches--Maine Medical Center's case-finding of diagnostic errors by facilitating direct reports from physicians and Kaiser Permanente's electronic health record--based reports that detect process breakdowns in the followup of abnormal findings--are described in case studies. By raising awareness and implementing targeted programs that address diagnostic error, HCOs may begin to play an important role in addressing the problem of diagnostic error.
ERIC Educational Resources Information Center
Quiroz, Waldo; Rubilar, Cristian Merino
2015-01-01
This study develops a tool to identify errors in the presentation of natural laws based on the epistemology and ontology of the Scientific Realism of Mario Bunge. The tool is able to identify errors of different types: (1) epistemological, in which the law is incorrectly presented as data correlation instead of as a pattern of causality; (2)…
Technique for diamond machining large ZnSe grisms for the Rapid Infrared/Imager Spectrograph (RIMAS)
NASA Astrophysics Data System (ADS)
Kuzmenko, Paul J.; Little, Steve L.; Kutyrev, Alexander S.; Capone, John I.
2016-07-01
The Rapid Infrared Imager/Spectrograph (RIMAS) is an instrument designed to observe gamma ray burst afterglows following initial detection by the SWIFT satellite. Operating in the near infrared between 0.9 and 2.4 μm, it has capabilities for both low resolution (R ~ 25) and moderate resolution (R ~ 4000) spectroscopy. Two zinc selenide (ZnSe) grisms provide dispersion in the moderate resolution mode: one covers the Y and J bands and the other covers the H and K. Each has a clear aperture of 44 mm. The YJ grism has a blaze angle of 49.9° with a 40 μm groove spacing. The HK grism is blazed at 43.1° with a 50 μm groove spacing. Previous fabrication of ZnSe grisms on the Precision Engineering Research Lathe (PERL II) at LLNL has demonstrated the importance of surface preparation, tool and fixture design, tight thermal control, and backup power sources for the machine. The biggest challenges in machining the RIMAS grisms are the large grooved area, which implies a long machining time, and the relatively steep blaze angle, which means that the grism wavefront error is much more sensitive to lathe metrology errors. Mitigating techniques are described.
Interferometric Techniques for Gravitational Wave Detection in Space
NASA Technical Reports Server (NTRS)
Stebbins, Robin T; Bender, Peter L.
2000-01-01
The Laser Interferometer Space Antenna (LISA) mission will detect gravitational waves from galactic and extragalactic sources, most importantly those involving supermassive black holes. The primary goal of this project is to investigate stability and robustness issues associated with LISA interferometry. We specifically propose to study systematic errors arising from: optical misalignments, optical surface errors, thermal effects and pointing tolerances. This report covers the first fiscal year of the grant, from January 1st to December 31st 1999. We have employed an optical modeling tool to evaluate the effect of misplaced and misaligned optical components. Preliminary results seem to indicate that positional tolerances of one micron and angular tolerances of 0.6 millirad produce no significant effect on the achievable contrast of the interference pattern. This report also outlines research plans for the second fiscal year of the grant, from January 1st to December 31st 2000. Since the work under NAG5-6880 has gone more rapidly than projected, our test bed interferometer is operational, and can be used for measurements of effects that cause beam motion. Hence, we will design, build and characterize a sensor for measuring beam motion, and then install it. We are also planning a differential wavefront sensor based on a quadrant photodiode as a first generation sensor.
An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates
Khan, Usman; Falconi, Christian
2014-01-01
Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation-tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hot-plates which takes advantage of modified Bessel functions, computationally efficient matrix-approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated to the undesired heating in the electrical contacts, are small (e.g., few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally-easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
Intelligent demand side management of residential building energy systems
NASA Astrophysics Data System (ADS)
Sinha, Maruti N.
The advent of modern sensing technologies and data processing capabilities, together with the rising cost of energy, is driving the implementation of intelligent systems in buildings and houses, which constitute 41% of total energy consumption. The primary motivation has been to provide a framework for demand-side management and to improve overall reliability. The entire formulation is to be implemented on a NILM (Non-Intrusive Load Monitoring) system, a smart meter, which is going to play a vital role in the future of demand-side management. Utilities have started deploying smart meters throughout the world, which will essentially help to establish communication between utility and consumers. This research is focused on identifying a suitable thermal model of a residential house, building up the control system, and developing diagnostic and energy-usage forecast tools. The present work has considered a measurement-based approach. Identification of the building thermal parameters is the very first step towards developing performance measurement and controls. The proposed identification technique is a PEM (Prediction Error Method) based, discrete state-space model. Two different models have been devised. The first model is aimed at energy-usage forecasting and diagnostics; here, a novel idea is investigated that uses the integral of the thermal capacity to identify the thermal model of the house. The purpose of the second identification is to build a model for the control strategy. The controller should be able to take into account weather forecast information, deal with the operating point constraints, and at the same time minimize energy consumption. To design an optimal controller, an MPC (Model Predictive Control) scheme, a receding-horizon approach, has been implemented in place of the present thermostatic/hysteretic control. The capability of the proposed schemes has also been investigated.
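The receding-horizon idea can be sketched with a deliberately simple first-order house model and a brute-force search over a short heater schedule; the first input of the best sequence is applied and the optimization repeats at the next step with fresh forecasts. The model form, comfort band, and cost weights are assumptions, not the identified model or MPC formulation of the thesis.

```python
import itertools
import numpy as np

def mpc_step(T_now, T_out_forecast, a, b, c, comfort=(20.0, 23.0),
             levels=(0.0, 0.5, 1.0), horizon=6, energy_weight=0.1):
    """Receding-horizon heater choice for a first-order house model.

    Assumed discrete-time model (not the thesis's identified model):
        T[k+1] = a*T[k] + b*u[k] + c*T_out[k]
    The first input of the best sequence over the horizon is applied, then
    the optimization is repeated at the next step with fresh forecasts.
    """
    lo, hi = comfort
    best_u0, best_cost = levels[0], np.inf
    for seq in itertools.product(levels, repeat=horizon):
        T, cost = T_now, 0.0
        for k, u in enumerate(seq):
            T = a * T + b * u + c * T_out_forecast[k]
            cost += energy_weight * u                               # energy use
            cost += max(0.0, lo - T) ** 2 + max(0.0, T - hi) ** 2   # comfort
        if cost < best_cost:
            best_u0, best_cost = seq[0], cost
    return best_u0
```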
NASA Astrophysics Data System (ADS)
Daffara, C.; Parisotto, S.; Mariotti, P. I.
2015-06-01
The cultural heritage field is discovering how valuable thermal analysis is as a tool to improve restoration, thanks to its ability to inspect hidden details. In this work, a novel dual-mode imaging approach, based on the integration of thermography and thermal quasi-reflectography (TQR) in the mid-IR, is demonstrated for effective mapping of surface materials and of sub-surface detachments in mural painting. The tool was validated through a unique application: the "Monocromo" by Leonardo da Vinci in Italy. The dual-mode acquisition provided two spatially aligned datasets: the TQR image and the thermal sequence. The main steps of the workflow included: 1) TQR analysis to map surface features and 2) to estimate the emissivity; 3) projection of the TQR frame onto the reference orthophoto and TQR mosaicking; 4) thermography analysis to map detachments; 5) use of TQR to solve spatial referencing and mosaicking for the thermally processed frames. Referencing of thermal images in the visible is a difficult aspect of the thermography technique that the dual-mode approach solves effectively. We finally obtained the TQR and thermal maps spatially referenced to the mural painting, thus providing the restorer with a valuable tool for the restoration of the detachments.
NASA Astrophysics Data System (ADS)
Mazzaracchio, Antonio; Marchetti, Mario
2010-03-01
Implicit ablation and thermal response software was developed to analyse and size charring ablative thermal protection systems for entry vehicles. A statistical monitor integrated into the tool, which uses the Monte Carlo technique, allows a simulation to be run over stochastic series. This performs an uncertainty and sensitivity analysis, which estimates the probability of maintaining the temperature of the underlying material within specified requirements. This approach and the associated software are primarily helpful during the preliminary design phases of spacecraft thermal protection systems. They are proposed as an alternative to traditional approaches, such as the Root-Sum-Square method. The developed tool was verified by comparing its results with those from previous work on thermal protection system probabilistic sizing methodologies, which are based on an industry-standard high-fidelity ablation and thermal response program. New case studies were analysed to establish thickness margins for sizing heat shields currently proposed for vehicles using rigid aeroshells for future aerocapture missions at Neptune, and to identify the major sources of uncertainty in the material response.
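A minimal sketch of the Monte Carlo layer around a thermal response solver: uncertain material and environment parameters are sampled, the solver is called for each sample, and the fraction of cases in which the bondline stays below its temperature limit estimates the probability of meeting the requirement. The normal-distribution assumption and all names are illustrative; the ablation solver itself is not reproduced here.

```python
import numpy as np

def probability_requirement_met(thickness, bondline_temp_model, param_means,
                                param_stds, temp_limit, n_samples=5000,
                                rng=None):
    """Monte Carlo estimate of the chance the bondline stays below its limit.

    bondline_temp_model : function (thickness, params) -> peak bondline
                          temperature; stands in for the ablation/thermal
                          response solver, which is not reproduced here.
    param_means/stds    : nominal values and 1-sigma spreads of the uncertain
                          material and environment parameters (assumed normal).
    """
    rng = np.random.default_rng(1) if rng is None else rng
    params = rng.normal(param_means, param_stds,
                        size=(n_samples, len(param_means)))
    temps = np.array([bondline_temp_model(thickness, p) for p in params])
    return np.mean(temps <= temp_limit)

# The heat-shield thickness can then be increased until this probability
# reaches the required confidence level, replacing a Root-Sum-Square margin.
```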
Effect of thematic map misclassification on landscape multi-metric assessment.
Kleindl, William J; Powell, Scott L; Hauer, F Richard
2015-06-01
Advancements in remote sensing and computational tools have increased our awareness of large-scale environmental problems, thereby creating a need for monitoring, assessment, and management at these scales. Over the last decade, several watershed and regional multi-metric indices have been developed to assist decision-makers with planning actions of these scales. However, these tools use remote-sensing products that are subject to land-cover misclassification, and these errors are rarely incorporated in the assessment results. Here, we examined the sensitivity of a landscape-scale multi-metric index (MMI) to error from thematic land-cover misclassification and the implications of this uncertainty for resource management decisions. Through a case study, we used a simplified floodplain MMI assessment tool, whose metrics were derived from Landsat thematic maps, to initially provide results that were naive to thematic misclassification error. Using a Monte Carlo simulation model, we then incorporated map misclassification error into our MMI, resulting in four important conclusions: (1) each metric had a different sensitivity to error; (2) within each metric, the bias between the error-naive metric scores and simulated scores that incorporate potential error varied in magnitude and direction depending on the underlying land cover at each assessment site; (3) collectively, when the metrics were combined into a multi-metric index, the effects were attenuated; and (4) the index bias indicated that our naive assessment model may overestimate floodplain condition of sites with limited human impacts and, to a lesser extent, either over- or underestimated floodplain condition of sites with mixed land use.
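The core of the simulation approach can be illustrated in a few lines. The sketch below is hypothetical (the confusion matrix, map, and metric are invented stand-ins, not the study's data): each pixel's class is redrawn from its row of a confusion matrix, the metric is recomputed, and the spread of the simulated scores quantifies the bias and uncertainty introduced by misclassification.

```python
# Minimal sketch (hypothetical confusion matrix and metric): propagate thematic
# land-cover misclassification error into a landscape metric by Monte Carlo
# resampling of each pixel's class from the map's confusion probabilities.
import numpy as np

rng = np.random.default_rng(2)
classes = ["forest", "agriculture", "urban"]
# Rows: mapped class; columns: probability of each true class (assumed values)
confusion = np.array([[0.90, 0.08, 0.02],
                      [0.10, 0.85, 0.05],
                      [0.05, 0.10, 0.85]])

mapped = rng.integers(0, 3, size=(100, 100))       # stand-in thematic map

def natural_cover_metric(class_map):
    # Example metric: fraction of the floodplain mapped as forest
    return np.mean(class_map == 0)

naive = natural_cover_metric(mapped)

sims = []
for _ in range(1000):
    # Redraw every pixel's class according to its row of the confusion matrix
    u = rng.random(mapped.shape)
    cdf = confusion.cumsum(axis=1)[mapped]          # per-pixel cumulative probabilities
    simulated = (u[..., None] > cdf).sum(axis=-1)   # inverse-CDF sampling
    sims.append(natural_cover_metric(simulated))

sims = np.array(sims)
print(f"naive score {naive:.3f}; simulated {sims.mean():.3f} +/- {sims.std():.3f}")
```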
Wiegmann, D A; Shappell, S A
2001-11-01
The Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based on Reason's (1990) model of latent and active failures, HFACS addresses human error at all levels of the system, including the condition of aircrew and organizational factors. The purpose of the present study was to assess the utility of the HFACS framework as an error analysis and classification tool outside the military. The HFACS framework was used to analyze human error data associated with aircrew-related commercial aviation accidents that occurred between January 1990 and December 1996 using database records maintained by the NTSB and the FAA. Investigators were able to reliably accommodate all the human causal factors associated with the commercial aviation accidents examined in this study using the HFACS system. In addition, the classification of data using HFACS highlighted several critical safety issues in need of intervention research. These results demonstrate that the HFACS framework can be a viable tool for use within the civil aviation arena. However, additional research is needed to examine its applicability to areas outside the flight deck, such as aircraft maintenance and air traffic control domains.
Shuttle Debris Impact Tool Assessment Using the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, R.; Rayos, E. M.; Campbell, C. H.; Rickman, S. L.
2006-01-01
Computational tools have been developed to estimate thermal and mechanical reentry loads experienced by the Space Shuttle Orbiter as the result of cavities in the Thermal Protection System (TPS). Such cavities can be caused by impact from ice or insulating foam debris shed from the External Tank (ET) on liftoff. The reentry loads depend on cavity geometry and certain Shuttle state variables, among other factors. Certain simplifying assumptions have been made in the tool development about the cavity geometry variables. For example, the cavities are all modeled as "shoeboxes", with rectangular cross-sections and planar walls. So an actual cavity is typically approximated with an idealized cavity described in terms of its length, width, and depth, as well as its entry angle, exit angle, and side angles (assumed to be the same for both sides). As part of a comprehensive assessment of the uncertainty in reentry loads estimated by the debris impact assessment tools, an effort has been initiated to quantify the component of the uncertainty that is due to imperfect geometry specifications for the debris impact cavities. The approach is to compute predicted loads for a set of geometry factor combinations sufficient to develop polynomial approximations to the complex, nonparametric underlying computational models. Such polynomial models are continuous and feature estimable, continuous derivatives, conditions that facilitate the propagation of independent variable errors. As an additional benefit, once the polynomial models have been developed, they require fewer computational resources to execute than the underlying finite element and computational fluid dynamics codes, and can generate reentry loads estimates in significantly less time. This provides a practical screening capability, in which a large number of debris impact cavities can be quickly classified either as harmless, or as subject to additional analysis with the more comprehensive underlying computational tools. The polynomial models also provide useful insights into the sensitivity of reentry loads to various cavity geometry variables, and reveal complex interactions among those variables that indicate how the sensitivity of one variable depends on the level of one or more other variables. For example, the effect of cavity length on certain reentry loads depends on the depth of the cavity. Such interactions are clearly displayed in the polynomial response models.
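A minimal sketch of the response-surface idea follows, using a cheap stand-in for the underlying CFD/finite-element tool and invented factor ranges: it fits a quadratic polynomial in the cavity geometry factors and then uses it for fast screening and propagation of geometry uncertainty.

```python
# Minimal sketch (synthetic data, assumed factor ranges): fit a quadratic
# response-surface model of a reentry load as a function of cavity geometry,
# then use it for fast screening and error propagation.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(3)

def expensive_load_model(length, width, depth):
    # Stand-in for the CFD/finite-element tool (illustrative only)
    return 1.0 + 0.8 * length * depth + 0.3 * width + 0.1 * depth**2

# Design points spanning the geometry factor ranges (a crude full factorial)
L, W, D = np.meshgrid(np.linspace(1, 5, 5), np.linspace(0.5, 2, 4),
                      np.linspace(0.1, 1, 5), indexing="ij")
X = np.column_stack([L.ravel(), W.ravel(), D.ravel()])
y = expensive_load_model(*X.T)

def quad_features(X):
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

# Fast screening: propagate geometry measurement error through the polynomial
geom = np.array([[3.0, 1.0, 0.5]]) + rng.normal(0, [0.1, 0.05, 0.05], (5000, 3))
loads = quad_features(geom) @ coef
print(f"predicted load {loads.mean():.3f} +/- {loads.std():.3f}")
```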
Zeng, Xiaozheng; McGough, Robert J.
2009-01-01
The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and the numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
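The angular spectrum calculation itself is compact. The sketch below (assumed frequency, grid, and source geometry; not the paper's code) propagates an input pressure plane to a parallel output plane by FFT, multiplication by the spectral propagator, and inverse FFT.

```python
# Minimal sketch of the angular spectrum approach (assumed source and grid):
# propagate a 2-D source-plane pressure field to a parallel plane at distance z.
import numpy as np

f = 1.0e6                 # frequency [Hz] (assumed)
c = 1500.0                # sound speed in water [m/s]
k = 2 * np.pi * f / c
dx = 0.2e-3               # grid spacing [m]
N = 512

x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
a = 10e-3                 # radius of an idealized circular piston source [m]
p0 = np.where(X**2 + Y**2 <= a**2, 1.0, 0.0)   # input pressure plane

kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)
KX, KY = np.meshgrid(kx, kx)
kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))  # evanescent parts become imaginary

z = 50e-3                 # propagation distance [m]
P0 = np.fft.fft2(p0)
p_z = np.fft.ifft2(P0 * np.exp(1j * kz * z))   # pressure field in the output plane

print("peak |p| at z:", np.abs(p_z).max())
```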
An interactive tool for processing sap flux data from thermal dissipation probes
Andrew C. Oishi; Chelcy F. Miniat
2016-01-01
Sap flux sensors are an important tool for estimating tree-level transpiration in forested and urban ecosystems around the world. Thermal dissipation (TD) or Granier-type sap flux probes are among the most commonly used due to their reliability, simplicity, and low cost.
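For orientation, the sketch below shows the conversion such a processing tool performs at its core, using the widely cited Granier calibration; the raw temperature differences, baseline, and sapwood area are example values, and the calibration coefficients should be checked against the original source before use.

```python
# Minimal sketch (example raw values; Granier-type calibration assumed): convert a
# thermal-dissipation probe temperature difference into sap flux density.
import numpy as np

dT = np.array([8.2, 6.5, 5.1, 4.8, 6.0])   # measured probe dT [deg C] (example values)
dT_max = 8.5                               # zero-flow baseline dT [deg C] (example)

K = (dT_max - dT) / dT                     # Granier's dimensionless flow index
Fd = 118.99e-6 * K**1.231                  # sap flux density [m3 m-2 s-1] (commonly cited fit)

sapwood_area = 50e-4                       # sapwood cross-section [m2] (example)
Q = Fd * sapwood_area * 1000 * 3600        # whole-tree sap flow [L per hour]
print(np.round(Q, 3))
```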
Modified Silicone-Rubber Tooling For Molding Composite Parts
NASA Technical Reports Server (NTRS)
Baucom, Robert M.; Snoha, John J.; Weiser, Erik S.
1995-01-01
Reduced-thermal-expansion, reduced-bulk-modulus silicone rubber for use in mold tooling made by incorporating silica powder into silicone rubber. Pressure exerted by thermal expansion reduced even further by allowing air bubbles to remain in silicone rubber instead of deaerating it. Bubbles reduce bulk modulus of material.
1985-10-01
Thermal Effects on the Accuracy of Numerically Controlled Machine Tools, Final Report 83K0385, Vol. 4. Prepared by Raghunath Venugopal and M. M. Barash, October 1985.
A Possible Tool for Checking Errors in the INAA Results, Based on Neutron Data and Method Validation
NASA Astrophysics Data System (ADS)
Cincu, Em.; Grigore, Ioana Manea; Barbos, D.; Cazan, I. L.; Manu, V.
2008-08-01
This work presents preliminary results of a new type of application in INAA elemental analysis experiments, useful for checking errors that occur during the investigation of unknown samples; it relies on the INAA method validation experiments and on the accuracy of the neutron data from the literature. The paper comprises two sections. The first briefly presents the steps of the experimental tests carried out for INAA method validation and for establishing the `ACTIVA-N' laboratory performance, which also illustrates the laboratory's progress toward that performance. Section 2 presents our recent INAA results on CRMs, whose interpretation opens a discussion of the usefulness of a tool for checking possible errors that differs from the usual statistical procedures. The questionable aspects and the requirements for developing a practical checking tool are discussed.
Cause-and-effect mapping of critical events.
Graves, Krisanne; Simmons, Debora; Galley, Mark D
2010-06-01
Health care errors are routinely reported in the scientific and public press and have become a major concern for most Americans. In learning to identify and analyze errors, health care can develop some of the skills of a learning organization, including the concept of systems thinking. Modern experts in improving quality have been working in other high-risk industries since the 1920s, making structured organizational changes through various frameworks for quality methods, including continuous quality improvement and total quality management. When using these tools, it is important to understand systems thinking and the concept of processes within organizations. Within these frameworks of improvement, several tools can be used in the analysis of errors. This article introduces a robust tool with a broad analytical view consistent with systems thinking, called CauseMapping (ThinkReliability, Houston, TX, USA), which can be used to systematically analyze the process and the problem at the same time. Copyright 2010 Elsevier Inc. All rights reserved.
Avoiding Human Error in Mission Operations: Cassini Flight Experience
NASA Technical Reports Server (NTRS)
Burk, Thomas A.
2012-01-01
Operating spacecraft is a never-ending challenge and the risk of human error is ever-present. Many missions have been significantly affected by human error on the part of ground controllers. The Cassini mission at Saturn has not been immune to human error, but Cassini operations engineers use tools and follow processes that find and correct most human errors before they reach the spacecraft. What is needed are skilled engineers with good technical knowledge, good interpersonal communications, quality ground software, regular peer reviews, up-to-date procedures, as well as careful attention to detail and the discipline to test and verify all commands that will be sent to the spacecraft. Two areas of special concern are changes to flight software and response to in-flight anomalies. The Cassini team has a lot of practical experience in all these areas and they have found that well-trained engineers with good tools who follow clear procedures can catch most errors before they get into command sequences to be sent to the spacecraft. Finally, having a robust and fault-tolerant spacecraft that allows ground controllers excellent visibility of its condition is the most important way to ensure human error does not compromise the mission.
NASA Astrophysics Data System (ADS)
Ravi, A. M.; Murigendrappa, S. M.
2018-04-01
In recent times, thermally enhanced machining (TEM) has slowly been gearing up to cut hard metals such as high-chrome white cast iron (HCWCI) that are impractical to machine by conventional procedures. Setting suitable cutting parameters and positioning the heat source against the workpiece appear to be critical for enhancing the machinability of the work material. In this research work, an oxy-LPG flame was used as the heat source and HCWCI as the workpiece. ANSYS-CFD-Flow software was used to develop a transient thermal model to analyze the thermal flux distribution on the work surface during TEM of HCWCI using cubic boron nitride (CBN) tools. A non-contact infrared thermal sensor was used to measure the surface temperature continuously at different positions, and the measurements were used to validate the thermal model results. The results confirm that the thermal model is a good predictive tool for thermal flux distribution analysis in the TEM process.
ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.
Hromadka, T.V.
1987-01-01
Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.
Acoustic holography as a metrological tool for characterizing medical ultrasound sources and fields
Sapozhnikov, Oleg A.; Tsysar, Sergey A.; Khokhlova, Vera A.; Kreider, Wayne
2015-01-01
Acoustic holography is a powerful technique for characterizing ultrasound sources and the fields they radiate, with the ability to quantify source vibrations and reduce the number of required measurements. These capabilities are increasingly appealing for meeting measurement standards in medical ultrasound; however, associated uncertainties have not been investigated systematically. Here errors associated with holographic representations of a linear, continuous-wave ultrasound field are studied. To facilitate the analysis, error metrics are defined explicitly, and a detailed description of a holography formulation based on the Rayleigh integral is provided. Errors are evaluated both for simulations of a typical therapeutic ultrasound source and for physical experiments with three different ultrasound sources. Simulated experiments explore sampling errors introduced by the use of a finite number of measurements, geometric uncertainties in the actual positions of acquired measurements, and uncertainties in the properties of the propagation medium. Results demonstrate the theoretical feasibility of keeping errors less than about 1%. Typical errors in physical experiments were somewhat larger, on the order of a few percent; comparison with simulations provides specific guidelines for improving the experimental implementation to reduce these errors. Overall, results suggest that holography can be implemented successfully as a metrological tool with small, quantifiable errors. PMID:26428789
Computer-Controlled Cylindrical Polishing Process for Large X-Ray Mirror Mandrels
NASA Technical Reports Server (NTRS)
Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian
2010-01-01
We are developing high-energy grazing incidence shell optics for hard-x-ray telescopes. The resolution of the mirror shells depends on the quality of the cylindrical mandrels from which they are replicated. Mid-spatial-frequency axial figure error is a dominant contributor to the error budget of the mandrel. This paper presents our efforts to develop a deterministic cylindrical polishing process in order to keep the mid-spatial-frequency axial figure errors to a minimum. Simulation software was developed to model the residual surface figure errors of a mandrel due to the polishing process parameters and the tools used, as well as to compute the optical performance of the optics. The study carried out using the developed software focused on establishing a relationship between the polishing process parameters and the mid-spatial-frequency error generation. The process parameters modeled are the speeds of the lap and the mandrel, the tool's influence function, the contour path (dwell) of the tools, their shape, and the distribution of the tools on the polishing lap. Using the inputs from the mathematical model, a mandrel having a conically approximated Wolter-1 geometry has been polished on a newly developed computer-controlled cylindrical polishing machine. The preliminary results of a series of polishing experiments demonstrate a qualitative agreement with the developed model. We report our first experimental results and discuss plans for further improvements in the polishing process. The ability to simulate the polishing process is critical to optimizing the polishing process, improving the mandrel quality, and significantly reducing the cost of mandrel production.
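The deterministic-polishing simulation can be illustrated in one dimension: material removal is the convolution of a tool influence function with the dwell time (a Preston-type model). The sketch below uses an assumed Gaussian influence function and a naive dwell map, and illustrates how a wide tool preferentially removes the long-period error while the short-period (mid-spatial-frequency) component largely persists.

```python
# Minimal sketch (assumed 1-D geometry and influence function): removal is the
# convolution of a tool influence function with the dwell time along the axis.
import numpy as np

dx = 0.5                                  # axial sample spacing [mm]
x = np.arange(0, 300, dx)                 # mandrel axial coordinate [mm]
initial_error = 50e-3 * np.sin(2 * np.pi * x / 60) \
              + 10e-3 * np.sin(2 * np.pi * x / 8)   # figure error [um]: long- and short-period terms

# Gaussian tool influence function: removal per unit dwell time [um/s per sample]
tool_halfwidth = 10.0                     # [mm] (assumed)
xi = np.arange(-3 * tool_halfwidth, 3 * tool_halfwidth + dx, dx)
tif = 1e-3 * np.exp(-(xi / tool_halfwidth) ** 2)

# Naive dwell map: dwell longer where more material must come off
dwell = np.clip(initial_error - initial_error.min(), 0, None) / tif.sum()
removal = np.convolve(dwell, tif, mode="same")
residual = initial_error - removal

print(f"initial RMS {initial_error.std()*1e3:.1f} nm -> residual RMS {residual.std()*1e3:.1f} nm")
```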
Detecting Spatial Patterns in Biological Array Experiments
ROOT, DAVID E.; KELLEY, BRIAN P.; STOCKWELL, BRENT R.
2005-01-01
Chemical genetic screening and DNA and protein microarrays are among a number of increasingly important and widely used biological research tools that involve large numbers of parallel experiments arranged in a spatial array. It is often difficult to ensure that uniform experimental conditions are present throughout the entire array, and as a result, one often observes systematic spatially correlated errors, especially when array experiments are performed using robots. Here, the authors apply techniques based on the discrete Fourier transform to identify and quantify spatially correlated errors superimposed on a spatially random background. They demonstrate that these techniques are effective in identifying common spatially systematic errors in high-throughput 384-well microplate assay data. In addition, the authors employ a statistical test to allow for automatic detection of such errors. Software tools for using this approach are provided. PMID:14567791
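As an illustration of the Fourier-based detection, the sketch below builds a synthetic 384-well plate with an alternating-row dispensing bias and flags the dominant periodic component in its 2-D power spectrum; the plate values, bias, and reporting logic are invented for the demo.

```python
# Minimal sketch (synthetic 384-well plate data): use the 2-D discrete Fourier
# transform to flag spatially periodic systematic errors, e.g. an every-other-row
# dispensing bias superimposed on random assay noise.
import numpy as np

rng = np.random.default_rng(4)
rows, cols = 16, 24                            # 384-well plate layout
plate = rng.normal(1.0, 0.05, (rows, cols))    # random background signal
plate[::2, :] *= 1.10                          # systematic +10% bias in even rows

spec = np.abs(np.fft.fft2(plate - plate.mean())) ** 2
spec[0, 0] = 0.0                               # ignore the DC term

# Any Fourier component carrying an outsized share of the variance is suspect
frac = spec / spec.sum()
iy, ix = np.unravel_index(np.argmax(frac), frac.shape)
print(f"dominant component at row freq {iy}/{rows}, col freq {ix}/{cols}, "
      f"fraction of variance {frac.max():.2f}")
```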
Analysis of Meteorological Satellite location and data collection system concepts
NASA Technical Reports Server (NTRS)
Wallace, R. G.; Reed, D. L.
1981-01-01
A satellite system that employs a spaceborne RF interferometer to determine the location and velocity of data collection platforms attached to meteorological balloons is proposed. This meteorological advanced location and data collection system (MALDCS) is intended to fly aboard a low polar orbiting satellite. The flight instrument configuration includes antennas supported on long deployable booms. The analysis examines the platform location and velocity estimation errors introduced by the dynamic and thermal behavior of the antenna booms, the effects of the booms on the performance of the spacecraft's attitude control system, and the control system design considerations critical to stable operation. The physical parameters of the Astromast type of deployable boom were used in the dynamic and thermal boom analysis, and the TIROS N system was assumed for the attitude control analysis. Velocity estimation error versus boom length was determined, and an optimum antenna separation distance that minimizes the error was found. A description of the proposed MALDCS system and a discussion of ambiguity resolution are included.
An adaptive optics system for solid-state laser systems used in inertial confinement fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salmon, J.T.; Bliss, E.S.; Byrd, J.L.
1995-09-17
Using adaptive optics the authors have obtained nearly diffraction-limited 5 kJ, 3 nsec output pulses at 1.053 µm from the Beamlet demonstration system for the National Ignition Facility (NIF). The peak Strehl ratio was improved from 0.009 to 0.50, as estimated from measured wavefront errors. They have also measured the relaxation of the thermally induced aberrations in the main beam line over a period of 4.5 hours. Peak-to-valley aberrations range from 6.8 waves at 1.053 µm within 30 minutes after a full system shot to 3.9 waves after 4.5 hours. The adaptive optics system must have enough range to correct accumulated thermal aberrations from several shots in addition to the immediate shot-induced error. Accumulated wavefront errors in the beam line will affect both the design of the adaptive optics system for NIF and the performance of that system.
A simulation technique for predicting thickness of thermal sprayed coatings
NASA Technical Reports Server (NTRS)
Goedjen, John G.; Miller, Robert A.; Brindley, William J.; Leissler, George W.
1995-01-01
The complexity of many of the components being coated today using the thermal spray process makes the trial and error approach traditionally followed in depositing a uniform coating inadequate, thereby necessitating a more analytical approach to developing robotic trajectories. A two dimensional finite difference simulation model has been developed to predict the thickness of coatings deposited using the thermal spray process. The model couples robotic and component trajectories and thermal spraying parameters to predict coating thickness. Simulations and experimental verification were performed on a rotating disk to evaluate the predictive capabilities of the approach.
A Thermal Management Systems Model for the NASA GTX RBCC Concept
NASA Technical Reports Server (NTRS)
Traci, Richard M.; Farr, John L., Jr.; Laganelli, Tony; Walker, James (Technical Monitor)
2002-01-01
The Vehicle Integrated Thermal Management Analysis Code (VITMAC) was further developed to aid the analysis, design, and optimization of propellant and thermal management concepts for advanced propulsion systems. The computational tool is based on engineering level principles and models. A graphical user interface (GUI) provides a simple and straightforward method to assess and evaluate multiple concepts before undertaking more rigorous analysis of candidate systems. The tool incorporates the Chemical Equilibrium and Applications (CEA) program and the RJPA code to permit heat transfer analysis of both rocket and air breathing propulsion systems. Key parts of the code have been validated with experimental data. The tool was specifically tailored to analyze rocket-based combined-cycle (RBCC) propulsion systems being considered for space transportation applications. This report describes the computational tool and its development and verification for NASA GTX RBCC propulsion system applications.
2017-01-01
Unique Molecular Identifiers (UMIs) are random oligonucleotide barcodes that are increasingly used in high-throughput sequencing experiments. Through a UMI, identical copies arising from distinct molecules can be distinguished from those arising through PCR amplification of the same molecule. However, bioinformatic methods to leverage the information from UMIs have yet to be formalized. In particular, sequencing errors in the UMI sequence are often ignored or else resolved in an ad hoc manner. We show that errors in the UMI sequence are common and introduce network-based methods to account for these errors when identifying PCR duplicates. Using these methods, we demonstrate improved quantification accuracy both under simulated conditions and real iCLIP and single-cell RNA-seq data sets. Reproducibility between iCLIP replicates and single-cell RNA-seq clustering are both improved using our proposed network-based method, demonstrating the value of properly accounting for errors in UMIs. These methods are implemented in the open source UMI-tools software package. PMID:28100584
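For concreteness, the sketch below re-implements, in simplified form, the directional network idea commonly associated with UMI-tools: UMIs one substitution apart are linked when the more abundant UMI has at least twice the count of the less abundant one (minus one), and each connected group is collapsed onto its most abundant member. The counts and details here are illustrative, not the package's exact implementation.

```python
# Minimal sketch (illustrative, not the UMI-tools implementation): collapse UMIs
# that likely arose from sequencing errors using a directional network -- connect
# UMI a -> b when they differ at one base and count(a) >= 2*count(b) - 1, then
# merge each group into its most abundant UMI.
from collections import Counter

def hamming1(a, b):
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def dedup_directional(umi_counts):
    umis = sorted(umi_counts, key=umi_counts.get, reverse=True)
    parent = {}
    for i, hi in enumerate(umis):
        for lo in umis[i + 1:]:
            if lo in parent:
                continue
            if hamming1(hi, lo) and umi_counts[hi] >= 2 * umi_counts[lo] - 1:
                parent[lo] = parent.get(hi, hi)   # absorb the likely error UMI
    merged = Counter()
    for umi, n in umi_counts.items():
        merged[parent.get(umi, umi)] += n
    return merged

counts = Counter({"ATTG": 456, "ATTA": 300, "ATTC": 2, "GGCG": 120})
print(dedup_directional(counts))
# ATTC (low count, one base off ATTG) is merged into ATTG;
# ATTA is too abundant to be an error of ATTG and is kept as a distinct molecule.
```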
Correction of Thermal Gradient Errors in Stem Thermocouple Hygrometers
Michel, Burlyn E.
1979-01-01
Stem thermocouple hygrometers were subjected to transient and stable thermal gradients while in contact with reference solutions of NaCl. Both dew point and psychrometric voltages were directly related to zero offset voltages, the latter reflecting the size of the thermal gradient. Although slopes were affected by absolute temperature, they were not affected by water potential. One hygrometer required a correction of 1.75 bars water potential per microvolt of zero offset, a value that was constant from 20 to 30 C. PMID:16660685
Numerical and experimental validation for the thermal transmittance of windows with cellular shades
Hart, Robert
2018-02-21
Some highly energy efficient window attachment products are available today, but more rapid market adoption would be facilitated by fair performance metrics. It is important to have validated simulation tools to provide a basis for this analysis. This paper outlines a review and validation of the ISO 15099 center-of-glass zero-solar-load heat transfer correlations for windows with cellular shades. Thermal transmittance was measured experimentally, simulated using computational fluid dynamics (CFD) analysis, and simulated utilizing correlations from ISO 15099 as implemented in Berkeley Lab WINDOW and THERM software. CFD analysis showed ISO 15099 underestimates heat flux of rectangular cavities by up to 60% when aspect ratio (AR) = 1 and overestimates heat flux up to 20% when AR = 0.5. CFD analysis also showed that wave-type surfaces of cellular shades have less than 2% impact on heat flux through the cavities and less than 5% for natural convection of room-side surface. WINDOW was shown to accurately represent heat flux of the measured configurations to a mean relative error of 0.5% and standard deviation of 3.8%. Finally, several shade parameters showed significant influence on correlation accuracy, including distance between shade and glass, inconsistency in cell stretch, size of perimeter gaps, and the mounting hardware.
Temperature dosimetry using MR relaxation characteristics of poly(vinyl alcohol) cryogel (PVA-C).
Lukas, L A; Surry, K J; Peters, T M
2001-11-01
Hyperthermic therapy is being used for a variety of medical treatments, such as tumor ablation and the enhancement of radiation therapy. Research in this area requires a tool to record the temperature distribution created by a heat source, similar to the dosimetry gels used in radiation therapy to record dose distribution. Poly(vinyl alcohol) cryogel (PVA-C) is presented as a material capable of recording temperature distributions between 45 and 70 degrees C, with less than a 1 degree C error. An approximately linear, positive relationship between MR relaxation times and applied temperature is demonstrated, with a maximum of 16.3 ms/degree C change in T(1) and 10.2 ms/degree C in T(2) for a typical PVA-C gel. Applied heat reduces the amount of cross-linking in PVA-C, which is responsible for a predictable change in T(1) and T(2) times. Temperature distributions in PVA-C volumes may be determined by matching MR relaxation times across the volumes to calibration values produced in samples subjected to known temperatures. Factors such as thermotolerance, perfusion effects, and thermal conductivity of PVA-C are addressed for potentially extending this method to modeling thermal doses in tissue. Copyright 2001 Wiley-Liss, Inc.
Optical device for thermal diffusivity determination in liquids by reflection of a thermal wave
NASA Astrophysics Data System (ADS)
Sánchez-Pérez, C.; De León-Hernández, A.; García-Cadena, C.
2017-08-01
In this work, we present a device for determination of the thermal diffusivity using the oblique reflection of a thermal wave within a solid slab that is in contact with the medium to be characterized. By using the reflection near a critical angle, under the assumption that thermal waves obey Snell's law of refraction with the square roots of the thermal diffusivities, the unknown thermal diffusivity is obtained by simple formulae. Experimentally, the sensor response is measured using the photothermal beam deflection technique within the slab, which results in a compact device in which the laser probe beam never contacts the sample. We describe the theoretical basis and provide experimental results to validate the proposed method. We determine the thermal diffusivity of tridistilled water and glycerin solutions with an error of less than 0.5%.
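For reference, the refraction relation the abstract invokes can be written down explicitly. The display below is our reconstruction from the stated assumption that thermal waves obey Snell's law with the square roots of the diffusivities (it is not quoted from the paper), with D_1 the known diffusivity of the slab and D_2 the unknown diffusivity of the sample.

```latex
% Hedged reconstruction, not the paper's exact formula: thermal-wave phase
% velocity scales as the square root of diffusivity, so refraction and the
% critical angle take the form
\[
  \frac{\sin\theta_1}{\sin\theta_2} = \sqrt{\frac{D_1}{D_2}},
  \qquad
  \sin\theta_c = \sqrt{\frac{D_1}{D_2}} \;\;(D_1 < D_2)
  \;\;\Longrightarrow\;\;
  D_2 = \frac{D_1}{\sin^2\theta_c}.
\]
```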
Predictive Thermal Control Applied to HabEx
NASA Technical Reports Server (NTRS)
Brooks, Thomas E.
2017-01-01
Exoplanet science can be accomplished with a telescope that has an internal coronagraph or with an external starshade. An internal coronagraph architecture requires extreme wavefront stability (10 pm change/10 minutes for 10(exp -10) contrast), so every source of wavefront error (WFE) must be controlled. Analysis has been done to estimate the thermal stability required to meet the wavefront stability requirement. This paper illustrates the potential of a new thermal control method called predictive thermal control (PTC) to achieve the required thermal stability. A simple development test using PTC indicates that PTC may meet the thermal stability requirements. Further testing of the PTC method in flight-like environments will be conducted in the X-ray and Cryogenic Facility (XRCF) at Marshall Space Flight Center (MSFC).
Piezocomposite Actuator Arrays for Correcting and Controlling Wavefront Error in Reflectors
NASA Technical Reports Server (NTRS)
Bradford, Samuel Case; Peterson, Lee D.; Ohara, Catherine M.; Shi, Fang; Agnes, Greg S.; Hoffman, Samuel M.; Wilkie, William Keats
2012-01-01
Three reflectors have been developed and tested to assess the performance of a distributed network of piezocomposite actuators for correcting thermal deformations and total wavefront error. The primary testbed article is an active composite reflector, composed of a spherically curved panel with a graphite face sheet and aluminum honeycomb core, augmented with a network of 90 distributed piezoelectric composite actuators. The piezoelectric actuator system may be used for correcting as-built residual shape errors and for controlling low-order, thermally induced quasi-static distortions of the panel. In this study, thermally induced surface deformations of 1 to 5 microns were deliberately introduced onto the reflector, then measured using a speckle holography interferometer system. The reflector surface figure was subsequently corrected to a tolerance of 50 nm using the actuators embedded in the reflector's back face sheet. Two additional test articles were constructed: a borosilicate flat window, 150 mm in diameter, with 18 actuators bonded to the back surface; and a direct metal laser sintered reflector with spherical curvature, 230 mm diameter, and 12 actuators bonded to the back surface. In the case of the glass reflector, absolute measurements were performed with an interferometer and the absolute surface was corrected. These test articles were evaluated to determine their absolute surface control capabilities, as well as to assess a multiphysics modeling effort developed under this program for the prediction of active reflector response. This paper describes the design, construction, and testing of active reflector systems under thermal loads, and the subsequent correction of surface shape via distributed piezoelectric actuation.
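The correction step maps naturally onto a linear least-squares problem. The sketch below is a generic illustration (random influence matrix, assumed voltage limits, synthetic deformation), not the testbed's model: given a matrix of surface responses per unit actuator voltage, it solves for the commands that best cancel a measured surface error.

```python
# Minimal sketch (random influence matrix, assumed limits): compute actuator
# commands that best cancel a measured surface error by linear least squares.
import numpy as np

rng = np.random.default_rng(5)
n_points, n_actuators = 500, 90            # surface sample points, embedded actuators

# Influence matrix: surface displacement per unit voltage at each sample point
A = rng.normal(0, 1e-7, (n_points, n_actuators))     # [m/V] (random stand-in)

# Synthetic thermally induced deformation plus figure noise the actuators cannot reach
true_cmd = rng.normal(0, 1.0, n_actuators)
w_measured = A @ true_cmd + rng.normal(0, 2e-8, n_points)   # [m]

# Least-squares actuator commands that best cancel the measured error
v, *_ = np.linalg.lstsq(A, -w_measured, rcond=None)
v = np.clip(v, -200.0, 200.0)                        # assumed drive-voltage limits [V]

w_corrected = w_measured + A @ v
print(f"surface RMS {w_measured.std()*1e9:.0f} nm -> {w_corrected.std()*1e9:.0f} nm")
```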
Differential multi-MOSFET nuclear radiation sensor
NASA Technical Reports Server (NTRS)
Deoliveira, W. A.
1977-01-01
Circuit allows minimization of thermal-drift errors, low power consumption, operation over wide dynamic range, and improved sensitivity and stability with metal-oxide-semiconductor field-effect transistor sensors.
Microstructure Evolution during Friction Stir Welding of Mill-Annealed Ti-6Al-4V (Preprint)
2011-05-01
... welding. One of the primary concerns regarding FSW of higher-temperature materials like titanium is the welding tool. High temperature materials... welds as compared to aluminum alloys. This is related to the low thermal conductivity of titanium alloys, which is typically lower than that of the... of the tools and workpieces in aluminum and titanium friction stir welds. Aluminum has a greater conductivity and thermal diffusivity than the tool...
Thermal modelling of cooling tool cutting when milling by electrical analogy
NASA Astrophysics Data System (ADS)
Benabid, F.; Arrouf, M.; Assas, M.; Benmoussa, H.
2010-06-01
Temperature measurements made with some devices are taken immediately after shut-down and may need to be corrected for the temperature drop that occurs in the interval between shut-down and measurement. This paper presents a new procedure for thermal modelling of the cutting tool just after machining, when the tool is out of the chip, in order to extrapolate the cutting temperature from the temperature measured when the tool is at standstill. A fin approximation is used to account for the enhanced heat loss (by conduction and convection) to the air stream. In the modelling, we introduce an equivalent thermal network to estimate the cutting temperature as a function of specific energy. In addition, a locally modified lumped-element conduction equation, with initial and boundary conditions, is used to predict the temperature evolution with time while the tool cools. These predictions provide a detailed view of the global heat transfer coefficient as a function of cutting speed, because the heat loss from the tool in an air stream is an order of magnitude larger than in a normal environment. Finally, we deduce the cutting temperature by an inverse method.
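The extrapolation at the heart of the procedure can be illustrated with a simple lumped-capacitance stand-in for the paper's thermal network. In the sketch below all temperatures, times, and the time constant are synthetic: a cooling curve is fitted to post-shutdown measurements and extrapolated back to the instant the tool left the chip.

```python
# Minimal sketch (assumed parameters, not the paper's thermal network): fit a
# lumped-capacitance cooling curve T(t) = T_air + (T_cut - T_air)*exp(-t/tau)
# to temperatures measured after shut-down, then extrapolate back to t = 0.
import numpy as np
from scipy.optimize import curve_fit

T_air = 25.0
def cooling(t, T_cut, tau):
    return T_air + (T_cut - T_air) * np.exp(-t / tau)

# "Measured" temperatures starting 2 s after the tool leaves the chip (synthetic)
rng = np.random.default_rng(6)
t_meas = np.linspace(2.0, 30.0, 15)
T_meas = cooling(t_meas, 430.0, 12.0) + rng.normal(0, 2.0, t_meas.size)

(T_cut_hat, tau_hat), _ = curve_fit(cooling, t_meas, T_meas, p0=(300.0, 10.0))
print(f"extrapolated cutting temperature: {T_cut_hat:.0f} C (tau = {tau_hat:.1f} s)")
```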
Schutyser, M A I; Straatsma, J; Keijzer, P M; Verschueren, M; De Jong, P
2008-11-30
In the framework of a cooperative EU research project (MILQ-QC-TOOL) a web-based modelling tool (WebSim-MILQ) was developed for optimisation of thermal treatments in the dairy industry. The web-based tool enables optimisation of thermal treatments with respect to product safety, quality and costs. It can be applied to existing products and processes but also to reduce time to market for new products. Important aspects of the tool are its user-friendliness and its specifications customised to the needs of small dairy companies. To challenge the web-based tool it was applied for optimisation of thermal treatments in 16 dairy companies producing yoghurt, fresh cream, chocolate milk and cheese. Optimisation with WebSim-MILQ resulted in concrete improvements with respect to risk of microbial contamination, cheese yield, fouling and production costs. In this paper we illustrate the use of WebSim-MILQ for optimisation of a cheese milk pasteurisation process where we could increase the cheese yield (1 extra cheese for each 100 produced cheeses from the same amount of milk) and reduce the risk of contamination of pasteurised cheese milk with thermoresistant streptococci from critical to negligible. In another case we demonstrate the advantage of changing from an indirect to a direct heating method for a UHT process, resulting in 80% less fouling while improving product quality and maintaining product safety.
Reusable bi-directional 3ω sensor to measure thermal conductivity of 100-μm thick biological tissues
NASA Astrophysics Data System (ADS)
Lubner, Sean D.; Choi, Jeunghwan; Wehmeyer, Geoff; Waag, Bastian; Mishra, Vivek; Natesan, Harishankar; Bischof, John C.; Dames, Chris
2015-01-01
Accurate knowledge of the thermal conductivity (k) of biological tissues is important for cryopreservation, thermal ablation, and cryosurgery. Here, we adapt the 3ω method—widely used for rigid, inorganic solids—as a reusable sensor to measure k of soft biological samples two orders of magnitude thinner than conventional tissue characterization methods. Analytical and numerical studies quantify the error of the commonly used "boundary mismatch approximation" of the bi-directional 3ω geometry, confirm that the generalized slope method is exact in the low-frequency limit, and bound its error for finite frequencies. The bi-directional 3ω measurement device is validated using control experiments to within ±2% (liquid water, standard deviation) and ±5% (ice). Measurements of mouse liver cover a temperature ranging from -69 °C to +33 °C. The liver results are independent of sample thicknesses from 3 mm down to 100 μm and agree with available literature for non-mouse liver to within the measurement scatter.
On Combining Thermal-Infrared and Radio-Occultation Data of Saturn's Atmosphere
NASA Technical Reports Server (NTRS)
Flasar, F. M.; Schinder, P. J.; Conrath, B. J.
2008-01-01
Radio-occultation and thermal-infrared measurements are complementary investigations for sounding planetary atmospheres. The vertical resolution afforded by radio occultations is typically approximately 1 km or better, whereas that from infrared sounding is often comparable to a scale height. On the other hand, an instrument like CIRS can easily generate global maps of temperature and composition, whereas occultation soundings are usually distributed more sparsely. The starting point for radio-occultation inversions is determining the residual Doppler-shifted frequency, that is the shift in frequency from what it would be in the absence of the atmosphere. Hence the positions and relative velocities of the spacecraft, target atmosphere, and DSN receiving station must be known to high accuracy. It is not surprising that the inversions can be susceptible to sources of systematic errors. Stratospheric temperature profiles on Titan retrieved from Cassini radio occultations were found to be very susceptible to errors in the reconstructed spacecraft velocities (approximately equal to 1 mm/s). Here the ability to adjust the spacecraft ephemeris so that the profiles matched those retrieved from CIRS limb sounding proved to be critical in mitigating this error. A similar procedure can be used for Saturn, although the sensitivity of its retrieved profiles to this type of error seems to be smaller. One issue that has appeared in inverting the Cassini occultations by Saturn is the uncertainty in its equatorial bulge, that is, the shape in its iso-density surfaces at low latitudes. Typically one approximates that surface as a geopotential surface by assuming a barotropic atmosphere. However, the recent controversy in the equatorial winds, i.e., whether they changed between the Voyager (1981) era and later (after 1996) epochs of Cassini and some Hubble observations, has made it difficult to know the exact shape of the surface, and it leads to uncertainties in the retrieved temperature profiles of one to a few kelvins. This propagates into errors in the retrieved helium abundance, which makes use of thermal-infrared spectra and synthetic spectra computed with retrieved radio-occultation temperature profiles. The highest abundances are retrieved with the faster Voyager-era winds, but even these abundances are somewhat smaller than those retrieved from the thermal-infrared data alone (albeit with larger formal errors). The helium abundance determination is most sensitive to temperatures in the upper troposphere. Further progress may include matching the radio-occultation profiles with those from CIRS limb sounding in the upper stratosphere.
Detecting genotyping errors and describing black bear movement in northern Idaho
Michael K. Schwartz; Samuel A. Cushman; Kevin S. McKelvey; Jim Hayden; Cory Engkjer
2006-01-01
Non-invasive genetic sampling has become a favored tool to enumerate wildlife. Genetic errors, caused by poor quality samples, can lead to substantial biases in numerical estimates of individuals. We demonstrate how the computer program DROPOUT can detect amplification errors (false alleles and allelic dropout) in a black bear (Ursus americanus) dataset collected in...
Ablative Thermal Response Analysis Using the Finite Element Method
NASA Technical Reports Server (NTRS)
Dec John A.; Braun, Robert D.
2009-01-01
A review of the classic techniques used to solve ablative thermal response problems is presented. The advantages and disadvantages of both the finite element and finite difference methods are described. As a first step in developing a three dimensional finite element based ablative thermal response capability, a one dimensional computer tool has been developed. The finite element method is used to discretize the governing differential equations and Galerkin's method of weighted residuals is used to derive the element equations. A code to code comparison between the current 1-D tool and the 1-D Fully Implicit Ablation and Thermal Response Program (FIAT) has been performed.
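To make the discretization concrete, the sketch below assembles a 1-D transient heat conduction problem with linear elements and Galerkin weighting and marches it with backward Euler. It deliberately omits ablation, pyrolysis, and surface recession, and all material properties and boundary values are assumed, so it is only a structural illustration of the finite element machinery, not a FIAT-class solver.

```python
# Minimal sketch (pure conduction, assumed properties): 1-D transient heat
# conduction with linear Galerkin finite elements and backward-Euler time stepping.
import numpy as np

L, n_el = 0.05, 50                 # slab thickness [m], number of elements
k, rho, cp = 0.5, 1400.0, 1200.0   # assumed constant material properties
dx = L / n_el
nodes = n_el + 1

K = np.zeros((nodes, nodes))       # conductance matrix
C = np.zeros((nodes, nodes))       # capacitance matrix
ke = k / dx * np.array([[1, -1], [-1, 1]])
ce = rho * cp * dx / 6 * np.array([[2, 1], [1, 2]])   # consistent mass matrix
for e in range(n_el):
    K[e:e+2, e:e+2] += ke
    C[e:e+2, e:e+2] += ce

T = np.full(nodes, 300.0)          # initial temperature [K]
q_surface = 5.0e4                  # applied surface heat flux [W/m^2] (assumed)
dt, steps = 0.05, 2000
A = C / dt + K                     # backward-Euler system matrix
for _ in range(steps):
    b = C @ T / dt
    b[0] += q_surface              # flux boundary at the heated face
    T = np.linalg.solve(A, b)      # back face left adiabatic (natural BC)

print(f"front face {T[0]:.1f} K, back face {T[-1]:.1f} K after {steps*dt:.0f} s")
```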
Sharpening method of satellite thermal image based on the geographical statistical model
NASA Astrophysics Data System (ADS)
Qi, Pengcheng; Hu, Shixiong; Zhang, Haijun; Guo, Guangmeng
2016-04-01
To improve the effectiveness of thermal sharpening in mountainous regions, paying more attention to the laws of land surface energy balance, a thermal sharpening method based on a geographical statistical model (GSM) is proposed. Explanatory variables were selected from the processes of the land surface energy budget and thermal infrared electromagnetic radiation transmission, and high spatial resolution (57 m) raster layers were then generated for these variables through spatial simulation or by using other raster data as proxies. Based on this, the locally adaptive statistical relationship between brightness temperature (BT) and the explanatory variables, i.e., the GSM, was built at 1026-m resolution using the method of multivariate adaptive regression splines. Finally, the GSM was applied to the high-resolution (57-m) explanatory variables; thus, the high-resolution (57-m) BT image was obtained. This method produced a sharpening result with low error and a good visual effect. The method can avoid the blind choice of explanatory variables and removes the dependence on synchronous imagery in the visible and near-infrared bands. The influences of the explanatory variable combination, the sampling method, and the residual error correction on the sharpening results were analyzed in detail, and their influence mechanisms are reported herein.
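The regression-downscaling workflow can be paraphrased in code. The sketch below substitutes ordinary least squares for the multivariate adaptive regression splines used in the paper and runs on synthetic rasters; the 18:1 aggregation factor (1026 m / 57 m) and the residual-correction step follow the description above, but the variables and every numerical value are invented.

```python
# Minimal sketch (OLS standing in for MARS; synthetic rasters): fit BT against
# explanatory variables at coarse resolution, apply the model at fine resolution,
# and add back the coarse-scale residual.
import numpy as np

rng = np.random.default_rng(7)
nf, block = 256, 18                    # fine grid size; one coarse cell = 18 x 18 fine pixels
nc = nf // block

elev = rng.normal(1500, 300, (nf, nf))           # explanatory variables (proxies)
ndvi = rng.uniform(0.1, 0.8, (nf, nf))
bt_fine_true = 310 - 0.006 * elev - 8 * ndvi + rng.normal(0, 0.3, (nf, nf))

def aggregate(a):                       # average fine pixels into coarse cells
    return a[:nc*block, :nc*block].reshape(nc, block, nc, block).mean(axis=(1, 3))

bt_c, elev_c, ndvi_c = aggregate(bt_fine_true), aggregate(elev), aggregate(ndvi)

# Fit the coarse-resolution statistical model BT = f(explanatory variables)
Xc = np.column_stack([np.ones(nc*nc), elev_c.ravel(), ndvi_c.ravel()])
coef, *_ = np.linalg.lstsq(Xc, bt_c.ravel(), rcond=None)

# Apply at fine resolution, then correct with the coarse residual (nearest cell)
Xf = np.column_stack([np.ones(nf*nf), elev.ravel(), ndvi.ravel()])
bt_sharp = (Xf @ coef).reshape(nf, nf)
resid_c = bt_c - (Xc @ coef).reshape(nc, nc)
bt_sharp[:nc*block, :nc*block] += np.kron(resid_c, np.ones((block, block)))

rmse = np.sqrt(np.mean((bt_sharp - bt_fine_true) ** 2))
print(f"sharpened-vs-true RMSE: {rmse:.2f} K")
```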
Kraemer, D; Chen, G
2014-02-01
Accurate measurements of thermal conductivity are of great importance for materials research and development. Steady-state methods determine thermal conductivity directly from the proportionality between heat flow and an applied temperature difference (Fourier Law). Although theoretically simple, in practice, achieving high accuracies with steady-state methods is challenging and requires rather complex experimental setups due to temperature sensor uncertainties and parasitic heat loss. We developed a simple differential steady-state method in which the sample is mounted between an electric heater and a temperature-controlled heat sink. Our method calibrates for parasitic heat losses from the electric heater during the measurement by maintaining a constant heater temperature close to the environmental temperature while varying the heat sink temperature. This enables a large signal-to-noise ratio which permits accurate measurements of samples with small thermal conductance values without an additional heater calibration measurement or sophisticated heater guards to eliminate parasitic heater losses. Additionally, the differential nature of the method largely eliminates the uncertainties of the temperature sensors, permitting measurements with small temperature differences, which is advantageous for samples with high thermal conductance values and/or with strongly temperature-dependent thermal conductivities. In order to accelerate measurements of more than one sample, the proposed method allows for measuring several samples consecutively at each temperature measurement point without adding significant error. We demonstrate the method by performing thermal conductivity measurements on commercial bulk thermoelectric Bi2Te3 samples in the temperature range of 30-150 °C with an error below 3%.
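A toy version of the differential idea is easy to state in code: because the parasitic heater loss is, by design, the same at every heat-sink setting, it falls into the intercept of heater power versus temperature difference, and only the slope carries the sample conductance. All numbers below are assumptions, not the authors' data.

```python
# Minimal sketch (assumed geometry and values): differential steady-state
# thermal conductivity from the slope of heater power vs temperature difference.
import numpy as np

L = 2.0e-3            # sample thickness [m] (assumed)
A = 4.0e-4            # sample cross-section [m^2] (assumed)
Q_parasitic = 5.0e-3  # unknown constant heater loss [W] in this scenario
k_true = 1.6          # W/m/K

dT = np.array([2.0, 4.0, 6.0, 8.0, 10.0])              # heater minus sink [K]
Q_heater = k_true * A / L * dT + Q_parasitic            # "measured" heater power [W]
Q_heater += np.random.default_rng(8).normal(0, 2e-4, dT.size)

slope, _ = np.polyfit(dT, Q_heater, 1)                  # parasitic loss drops into the intercept
k_est = slope * L / A
print(f"estimated k = {k_est:.2f} W/m/K (true {k_true})")
```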
Forensic surface metrology: tool mark evidence.
Gambino, Carol; McLaughlin, Patrick; Kuo, Loretta; Kammerman, Frani; Shenkin, Peter; Diaczuk, Peter; Petraco, Nicholas; Hamby, James; Petraco, Nicholas D K
2011-01-01
Over the last several decades, forensic examiners of impression evidence have come under scrutiny in the courtroom due to analysis methods that rely heavily on subjective morphological comparisons. Currently, there is no universally accepted system that generates numerical data to independently corroborate visual comparisons. Our research attempts to develop such a system for tool mark evidence, proposing a methodology that objectively evaluates the association of striated tool marks with the tools that generated them. In our study, 58 primer shear marks on 9 mm cartridge cases, fired from four Glock model 19 pistols, were collected using high-resolution white light confocal microscopy. The resulting three-dimensional surface topographies were filtered to extract all "waviness surfaces"-the essential "line" information that firearm and tool mark examiners view under a microscope. Extracted waviness profiles were processed with principal component analysis (PCA) for dimension reduction. Support vector machines (SVM) were used to make the profile-gun associations, and conformal prediction theory (CPT) for establishing confidence levels. At the 95% confidence level, CPT coupled with PCA-SVM yielded an empirical error rate of 3.5%. Complementary, bootstrap-based computations for estimated error rates were 0%, indicating that the error rate for the algorithmic procedure is likely to remain low on larger data sets. Finally, suggestions are made for practical courtroom application of CPT for assigning levels of confidence to SVM identifications of tool marks recorded with confocal microscopy. Copyright © 2011 Wiley Periodicals, Inc.
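A compact stand-in for the statistical pipeline is sketched below using scikit-learn (assumed available): synthetic striation profiles replace the confocal waviness data, PCA reduces their dimension, and a linear-kernel SVM associates profiles with tools under cross-validation. The conformal-prediction confidence step is omitted.

```python
# Minimal sketch (synthetic profiles): PCA dimension reduction followed by an
# SVM that associates striation profiles with the tool that made them.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n_tools, marks_per_tool, n_samples_per_profile = 4, 15, 600

signatures = rng.normal(0, 1, (n_tools, n_samples_per_profile))   # each tool's "true" waviness
X, y = [], []
for tool in range(n_tools):
    for _ in range(marks_per_tool):
        X.append(signatures[tool] + rng.normal(0, 0.6, n_samples_per_profile))
        y.append(tool)
X, y = np.array(X), np.array(y)

clf = make_pipeline(PCA(n_components=10), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2%}")
```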
[Optimization of end-tool parameters based on robot hand-eye calibration].
Zhang, Lilong; Cao, Tong; Liu, Da
2017-04-01
A new one-time registration method was developed in this research for hand-eye calibration of a surgical robot, to simplify the operation process and reduce the preparation time. A new and practical method is also introduced to optimize the end-tool parameters of the surgical robot, based on an analysis of the error sources in this registration method. In the one-time registration process, a marker on the end-tool of the robot was first recognized by a fixed binocular camera, and the orientation and position of the marker were then calculated from the joint parameters of the robot. The relationship between the camera coordinate system and the robot base coordinate system could then be established to complete the hand-eye calibration. Because of manufacturing and assembly errors of the robot end-tool, an error equation was established with the transformation matrix between the robot end coordinate system and the robot end-tool coordinate system as the variable. Numerical optimization was employed to optimize the end-tool parameters of the robot. The experimental results showed that the one-time registration method could significantly improve the efficiency of robot hand-eye calibration compared with existing methods, and that the parameter optimization method could significantly improve the absolute positioning accuracy of the one-time registration method. The absolute positioning accuracy of the one-time registration method can meet the requirements of clinical surgery.
[What Surgeons Should Know about Risk Management].
Strametz, R; Tannheimer, M; Rall, M
2017-02-01
Background: The fact that medical treatment is associated with errors has long been recognized. Based on the principle of "first do no harm", numerous efforts have since been made to prevent such errors or limit their impact. However, recent statistics show that these measures do not sufficiently prevent grave mistakes with serious consequences. Preventable mistakes such as wrong-patient or wrong-site surgery still appear frequently in error statistics. Methods: Based on insights from research on human error, and in due consideration of recent legislative regulations in Germany, the authors give an overview of the clinical risk management tools needed to identify risks in surgery, analyse their causes, and determine adequate measures to manage those risks depending on their relevance. The use and limitations of critical incident reporting systems (CIRS), safety checklists and crisis resource management (CRM) are highlighted, as is the rationale for IT systems to support the risk management process. Results/Conclusion: No single risk management tool is effective as a standalone instrument; it unfolds its effect only when embedded in a superordinate risk management system, which integrates tailor-made elements for increasing patient safety into the workflows of each organisation. Competence in choosing adequate tools, effective IT systems to support the risk management process, as well as leadership and commitment to constructive handling of human error, are crucial components for establishing a safety culture in surgery. Georg Thieme Verlag KG Stuttgart · New York.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.
2009-12-16
Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and MS/MS fragmentation have become widely available in recent years and have allowed for significantly better discrimination between true and false MS/MS peptide identifications by applying relatively narrow windows for maximum allowable deviations for parent ion mass measurements. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. The DtaRefinery software tool can correct systematic errors in parent ion masses by reading a set of fragmentation spectra, searching for MS/MS peptide identifications, then fitting a model that can estimate systematic errors, and removing them. This results in a new fragmentation spectrum file with updated parent ion masses.
76 FR 43808 - Designation of Biobased Items for Federal Procurement
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-22
... thermal shipping containers, including durable and non-durable thermal shipping containers as... able to utilize this Web site as one tool to determine the availability of qualifying biobased products... containers and the subcategories are (1) durable thermal shipping containers, and (2) non-durable thermal...
NASA Astrophysics Data System (ADS)
Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng
2017-06-01
A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, the RC-lap smoothing efficiency performance is poorer than expected, and existing smoothing models cannot explicitly specify the methods to improve this efficiency. We presented an explicit time-dependent smoothing evaluation model that contained specific smoothing parameters directly derived from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we proposed a strategy to improve the RC-lap smoothing efficiency, which incorporated the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were obtained.
Decision support tool for diagnosing the source of variation
NASA Astrophysics Data System (ADS)
Masood, Ibrahim; Azrul Azhad Haizan, Mohamad; Norbaya Jumali, Siti; Ghazali, Farah Najihah Mohd; Razali, Hazlin Syafinaz Md; Shahir Yahya, Mohd; Azlan, Mohd Azwir bin
2017-08-01
Identifying the source of unnatural variation (SOV) in a manufacturing process is essential for quality control. Shewhart control chart patterns (CCPs) are commonly used to monitor the SOV. However, properly interpreting CCPs and associating them with their SOV requires a highly skilled industrial practitioner, and a lack of process engineering knowledge can lead to erroneous corrective action. The objective of this study is to design the operating procedures of a computerized decision support tool (DST) for process diagnosis. The DST is embedded in a CCP recognition scheme. The design methodology involves analyzing the relationship between geometrical features, the manufacturing process, and CCPs. The DST contains information about CCPs and their possible root-cause errors, together with descriptions of SOV phenomena such as process deterioration due to tool bluntness, tool offset, loading error, and changes in material hardness. The DST will be useful to industrial practitioners for effective troubleshooting.
Design of a final approach spacing tool for TRACON air traffic control
NASA Technical Reports Server (NTRS)
Davis, Thomas J.; Erzberger, Heinz; Bergeron, Hugh
1989-01-01
This paper describes an automation tool that assists air traffic controllers in the Terminal Radar Approach Control (TRACON) Facilities in providing safe and efficient sequencing and spacing of arrival traffic. The automation tool, referred to as the Final Approach Spacing Tool (FAST), allows the controller to interactively choose various levels of automation and advisory information ranging from predicted time errors to speed and heading advisories for controlling time error. FAST also uses a timeline to display current scheduling and sequencing information for all aircraft in the TRACON airspace. FAST combines accurate predictive algorithms and state-of-the-art mouse and graphical interface technology to present advisory information to the controller. Furthermore, FAST exchanges various types of traffic information and communicates with automation tools being developed for the Air Route Traffic Control Center. Thus it is part of an integrated traffic management system for arrival traffic at major terminal areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaeger, Ryan T.; Wollaber, Allan B.; Urbatsch, Todd J.
2016-02-23
Here, the non-linear thermal radiative-transfer equations can be solved in various ways. One popular way is the Fleck and Cummings Implicit Monte Carlo (IMC) method. The IMC method was originally formulated with piecewise-constant material properties. For domains with a coarse spatial grid and large temperature gradients, an error known as numerical teleportation may cause artificially non-causal energy propagation and consequently an inaccurate material temperature. Source tilting is a technique to reduce teleportation error by constructing sub-spatial-cell (or sub-cell) emission profiles from which IMC particles are sampled. Several source tilting schemes exist, but some allow teleportation error to persist. We examine the effect of source tilting in problems with a temperature-dependent opacity. Within each cell, the opacity is evaluated continuously from a temperature profile implied by the source tilt. For IMC, this is a new approach to modeling the opacity. We find that applying both source tilting along with a source tilt-dependent opacity can introduce another dominant error that overly inhibits thermal wavefronts. We show that we can mitigate both teleportation and under-propagation errors if we discretize the temperature equation with a linear discontinuous (LD) trial space. Our method is for opacities ~ 1/T^3, but we formulate and test a slight extension for opacities ~ 1/T^3.5, where T is temperature. We find our method avoids errors that can be incurred by IMC with continuous source tilt constructions and piecewise-constant material temperature updates.
Feedback control of thermal lensing in a high optical power cavity.
Fan, Y; Zhao, C; Degallaix, J; Ju, L; Blair, D G; Slagmolen, B J J; Hosken, D J; Brooks, A F; Veitch, P J; Munch, J
2008-10-01
This paper reports automatic compensation of strong thermal lensing in a suspended 80 m optical cavity with sapphire test mass mirrors. Variation of the transmitted beam spot size is used to obtain an error signal to control the heating power applied to the cylindrical surface of an intracavity compensation plate. The negative thermal lens created in the compensation plate compensates the positive thermal lens in the sapphire test mass, which was caused by the absorption of the high intracavity optical power. The results show that feedback control is feasible to compensate the strong thermal lensing expected to occur in advanced laser interferometric gravitational wave detectors. Compensation allows the cavity resonance to be maintained at the fundamental mode, but the long thermal time constant for thermal lensing control in fused silica could cause difficulties with the control of parametric instabilities.
Welding Development: Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
Ding, Jeff
2007-01-01
This paper presents the basic understanding of the friction stir welding process. It covers process description, pin tool operation and materials, metal flow theory, mechanical properties, and materials welded using the process. It also discusses the thermal stir welding process and the differences between thermal stir and friction stir welding. MSFC weld tools used for development are also presented.
Evaluation of thermal data for geologic applications
NASA Technical Reports Server (NTRS)
Kahle, A. B.; Palluconi, F. D.; Levine, C. J.; Abrams, M. J.; Nash, D. B.; Alley, R. E.; Schieldge, J. P.
1982-01-01
Sensitivity studies using thermal models indicated sources of errors in the determination of thermal inertia from HCMM data. Apparent thermal inertia, with only simple atmospheric radiance corrections to the measured surface temperature, would be sufficient for most operational requirements for surface thermal inertia. Thermal data does have additional information about the nature of surface material that is not available in visible and near infrared reflectance data. Color composites of daytime temperature, nighttime temperature, and albedo were often more useful than thermal inertia images alone for discrimination of lithologic boundaries. A modeling study, using the annual heating cycle, indicated the feasibility of looking for geologic features buried under as much as a meter of alluvial material. The spatial resolution of HCMM data is a major limiting factor in the usefulness of the data for geologic applications. Future thermal infrared satellite sensors should provide spatial resolution comparable to that of the LANDSAT data.
Large Deployable Reflector (LDR) thermal characteristics
NASA Technical Reports Server (NTRS)
Miyake, R. N.; Wu, Y. C.
1988-01-01
The thermal support group, which is part of the lightweight composite reflector panel program, developed thermal test and analysis evaluation tools necessary to support the integrated interdisciplinary analysis (IIDA) capability. A detailed thermal mathematical model and a simplified spacecraft thermal math model were written. These models determine the orbital temperature level and variation, and the thermally induced gradients through and across a panel, for inclusion in the IIDA.
Evaluation of a new photomask CD metrology tool
NASA Astrophysics Data System (ADS)
Dubuque, Leonard F.; Doe, Nicholas G.; St. Cin, Patrick
1996-12-01
In the integrated circuit (IC) photomask industry today, dense IC patterns, sub-micron critical dimensions (CD), and narrow tolerances for 64 M technologies and beyond are driving increased demands to minimize and characterize all components of photomask CD variation. This places strict requirements on photomask CD metrology in order to accurately characterize the mask CD error distribution. According to the gauge-maker's rule, measurement error must not exceed 30% of the tolerance on the product dimension measured or the gauge is not considered capable. The traditional single point repeatability tests are a poor measure of overall measurement system error in a dynamic, leading-edge technology environment. In such an environment, measurements may be taken at different points in the field-of-view due to stage inaccuracy, pattern recognition requirements, and throughput considerations. With this in mind, a set of experiments was designed to thoroughly characterize the metrology tool's repeatability and systematic error. Original experiments provided inconclusive results and had to be extended to obtain a full characterization of the system. Tests demonstrated a performance of better than 15 nm total CD error. Using this test as a tool for further development, the authors were able to determine the effects of various system components and measure the improvement with changes in optics, electronics, and software. Optimization of the optical path, electronics, and system software has yielded a new instrument with a total system error of better than 8 nm. Good collaboration between the photomask manufacturer and the equipment supplier has led to a realistic test of system performance and an improved CD measurement instrument.
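The gauge-maker's rule cited above is a simple capability check: a measurement system is considered capable only if its total error stays below 30% of the product tolerance. A small sketch with hypothetical numbers (the 30% ratio comes from the abstract; the example tolerance values are assumptions):

    def gauge_capable(total_measurement_error_nm, product_tolerance_nm, max_ratio=0.30):
        """Gauge-maker's rule: measurement error must not exceed 30% of the tolerance."""
        return total_measurement_error_nm <= max_ratio * product_tolerance_nm

    # Against an assumed 40 nm CD tolerance, a 15 nm total-error system is not capable,
    # while the improved 8 nm system is.
    print(gauge_capable(15, 40))  # False (15 > 0.30 * 40 = 12)
    print(gauge_capable(8, 40))   # True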
NASA Technical Reports Server (NTRS)
Tsay, Si-Chee; Stamnes, Knut; Wiscombe, Warren; Laszlo, Istvan; Einaudi, Franco (Technical Monitor)
2000-01-01
This update reports a state-of-the-art discrete ordinate algorithm for monochromatic unpolarized radiative transfer in non-isothermal, vertically inhomogeneous, but horizontally homogeneous media. The physical processes included are Planckian thermal emission, scattering with arbitrary phase function, absorption, and surface bidirectional reflection. The system may be driven by parallel or isotropic diffuse radiation incident at the top boundary, as well as by internal thermal sources and thermal emission from the boundaries. Radiances, fluxes, and mean intensities are returned at user-specified angles and levels. DISORT has enjoyed considerable popularity in the atmospheric science and other communities since its introduction in 1988. Several new DISORT features are described in this update: intensity correction algorithms designed to compensate for the δ-M forward-peak scaling and obtain accurate intensities even in low orders of approximation; a more general surface bidirectional reflection option; and an exponential-linear approximation of the Planck function allowing more accurate solutions in the presence of large temperature gradients. DISORT has been designed to be an exemplar of good scientific software as well as a program of intrinsic utility. An extraordinary effort has been made to make it numerically well-conditioned, error-resistant, and user-friendly, and to take advantage of robust existing software tools. A thorough test suite is provided to verify the program both against published results, and for consistency where there are no published results. This careful attention to software design has been just as important in DISORT's popularity as its powerful algorithmic content.
The Watchdog Task: Concurrent error detection using assertions
NASA Technical Reports Server (NTRS)
Ersoz, A.; Andrews, D. M.; Mccluskey, E. J.
1985-01-01
The Watchdog Task, a software abstraction of the Watchdog-processor, is shown to be a powerful error detection tool with a great deal of flexibility and the advantages of watchdog techniques. A Watchdog Task system in Ada is presented; issues of recovery, latency, efficiency (communication) and preprocessing are discussed. Different applications, one of which is error detection on a single processor, are examined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Chen, J; Pouliot, J
2015-06-15
Purpose: Deformable image registration (DIR) is a powerful tool with the potential to deformably map dose from one computed-tomography (CT) image to another. Errors in the DIR, however, will produce errors in the transferred dose distribution. We have proposed a software tool, called AUTODIRECT (automated DIR evaluation of confidence tool), which predicts voxel-specific dose mapping errors on a patient-by-patient basis. This work validates the effectiveness of AUTODIRECT to predict dose mapping errors with virtual and physical phantom datasets. Methods: AUTODIRECT requires 4 inputs: moving and fixed CT images and two noise scans of a water phantom (for noise characterization). Then, AUTODIRECT uses algorithms to generate test deformations and applies them to the moving and fixed images (along with processing) to digitally create sets of test images, with known ground-truth deformations that are similar to the actual one. The clinical DIR algorithm is then applied to these test image sets (currently 4). From these tests, AUTODIRECT generates spatial and dose uncertainty estimates for each image voxel based on a Student's t distribution. This work compares these uncertainty estimates to the actual errors made by the Velocity Deformable Multi Pass algorithm on 11 virtual and 1 physical phantom datasets. Results: For 11 of the 12 tests, the predicted dose error distributions from AUTODIRECT are well matched to the actual error distributions within 1–6% for 10 virtual phantoms, and 9% for the physical phantom. For one of the cases though, the predictions underestimated the errors in the tail of the distribution. Conclusion: Overall, the AUTODIRECT algorithm performed well on the 12 phantom cases for Velocity and was shown to generate accurate estimates of dose warping uncertainty. AUTODIRECT is able to automatically generate patient-, organ-, and voxel-specific DIR uncertainty estimates. This ability would be useful for patient-specific DIR quality assurance.
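As a rough illustration of how a small number of test registrations can be turned into voxel-level uncertainty bounds with a Student's t distribution, consider the sketch below. This is not the AUTODIRECT algorithm itself; the array shapes, the confidence level, and the one-sided bound are assumptions for illustration.

    import numpy as np
    from scipy import stats

    def voxel_error_bound(test_errors, confidence=0.95):
        """Per-voxel error bound from a small set of test deformations.

        test_errors : array of shape (n_tests, n_voxels) holding the known errors
                      made by the clinical DIR algorithm on the test image sets.
        """
        test_errors = np.asarray(test_errors, dtype=float)
        n = test_errors.shape[0]                      # e.g., 4 test image sets
        mean = test_errors.mean(axis=0)
        sem = test_errors.std(axis=0, ddof=1) / np.sqrt(n)
        t_val = stats.t.ppf(confidence, df=n - 1)     # Student's t quantile
        return mean + t_val * sem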
Prediction of Thermal Fatigue in Tooling for Die-casting Copper via Finite Element Analysis
NASA Astrophysics Data System (ADS)
Sakhuja, Amit; Brevick, Jerald R.
2004-06-01
Recent research by the Copper Development Association (CDA) has demonstrated the feasibility of die-casting electric motor rotors using copper. Electric motors using copper rotors are significantly more energy efficient relative to motors using aluminum rotors. However, one of the challenges in copper rotor die-casting is low tool life. Experiments have shown that the higher molten metal temperature of copper (1085 °C), as compared to aluminum (660 °C) accelerates the onset of thermal fatigue or heat checking in traditional H-13 tool steel. This happens primarily because the mechanical properties of H-13 tool steel decrease significantly above 650 °C. Potential approaches to mitigate the heat checking problem include: 1) identification of potential tool materials having better high temperature mechanical properties than H-13, and 2) reduction of the magnitude of cyclic thermal excursions experienced by the tooling by increasing the bulk die temperature. A preliminary assessment of alternative tool materials has led to the selection of nickel-based alloys Haynes 230 and Inconel 617 as potential candidates. These alloys were selected based on their elevated temperature physical and mechanical properties. Therefore, the overall objective of this research work was to predict the number of copper rotor die-casting cycles to the onset of heat checking (tool life) as a function of bulk die temperature (up to 650 °C) for Haynes 230 and Inconel 617 alloys. To achieve these goals, a 2D thermo-mechanical FEA was performed to evaluate strain ranges on selected die surfaces. The method of Universal Slopes (Strain Life Method) was then employed for thermal fatigue life predictions.
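The method of Universal Slopes mentioned above (Manson's strain-life relation) ties the total strain range to cycles to failure through the material's ultimate strength, modulus, and ductility. A hedged sketch of that relation follows, with purely illustrative material numbers rather than the Haynes 230 or Inconel 617 data from the study.

    from scipy.optimize import brentq

    def universal_slopes_life(strain_range, sigma_u, E, eps_f):
        """Cycles to failure Nf from Manson's method of Universal Slopes:
            strain_range = 3.5*(sigma_u/E)*Nf**-0.12 + eps_f**0.6 * Nf**-0.6
        sigma_u : ultimate tensile strength (same units as E)
        E       : elastic modulus
        eps_f   : true fracture ductility
        """
        def residual(nf):
            return (3.5 * (sigma_u / E) * nf**-0.12
                    + eps_f**0.6 * nf**-0.6
                    - strain_range)
        return brentq(residual, 1.0, 1.0e9)   # bracketed root solve for Nf

    # Illustrative values only: a 1% total strain range on a hypothetical alloy.
    # print(universal_slopes_life(strain_range=0.01, sigma_u=900e6, E=200e9, eps_f=0.4))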
Thermal sensors to control polymer forming. Challenge and solutions
NASA Astrophysics Data System (ADS)
Lemeunier, F.; Boyard, N.; Sarda, A.; Plot, C.; Lefèvre, N.; Petit, I.; Colomines, G.; Allanic, N.; Bailleul, J. L.
2017-10-01
Many thermal sensors have been used for many years to better understand and control material forming processes, especially polymer processing. Due to technical constraints (high pressure, sealing, sensor dimensions…) the thermal measurement is often performed in the tool or close to its surface. Thus, it only gives partial and disturbed information. Having reliable information about the heat flux exchanged between the tool and the material during the process would be very helpful to improve the control of the process and to favor the development of new materials. In this work, we present several sensors developed in labs to study the molding steps in forming processes. The analysis of the obtained thermal measurements (temperature, heat flux) shows the sensitivity threshold required for thermal sensors to be able to detect the rate of thermal reaction on-line. Based on these data, we will present new sensor designs which have been patented.
Error management training and simulation education.
Gardner, Aimee; Rich, Michelle
2014-12-01
The integration of simulation into the training of health care professionals provides context for decision making and procedural skills in a high-fidelity environment, without risk to actual patients. It was hypothesised that a novel approach to simulation-based education - error management training - would produce higher performance ratings compared with traditional step-by-step instruction. Radiology technology students were randomly assigned to participate in traditional procedural-based instruction (n = 11) or vicarious error management training (n = 11). All watched an instructional video and discussed how well each incident was handled (traditional instruction group) or identified where the errors were made (vicarious error management training). Students then participated in a 30-minute case-based simulation. Simulations were videotaped for performance analysis. Blinded experts evaluated performance using a predefined evaluation tool created specifically for the scenario. The vicarious error management group scored higher on observer-rated performance (Mean = 9.49) than students in the traditional instruction group (Mean = 9.02; p < 0.01). These findings suggest that incorporating the discussion of errors and how to handle errors during the learning session will better equip students when performing hands-on procedures and skills. This pilot study provides preliminary evidence for integrating error management skills into medical curricula and for the design of learning goals in simulation-based education. © 2014 John Wiley & Sons Ltd.
A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates
NASA Astrophysics Data System (ADS)
Huang, Weizhang; Kamenski, Lennard; Lang, Jens
2010-03-01
A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
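To make the role of the iterative solver concrete, the sketch below applies a few symmetric Gauss-Seidel sweeps to a generic linear system. It illustrates the idea of replacing an exact global error solve with a handful of cheap sweeps; it is not the paper's finite element implementation, and the matrix and vector names are assumptions.

    import numpy as np

    def sym_gauss_seidel(A, b, sweeps=3):
        """A few symmetric Gauss-Seidel sweeps (forward then backward) for A x = b."""
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        n = len(b)
        x = np.zeros(n)
        for _ in range(sweeps):
            for i in list(range(n)) + list(range(n - 1, -1, -1)):
                # Update x[i] using the latest values of the other unknowns.
                s = A[i, :] @ x - A[i, i] * x[i]
                x[i] = (b[i] - s) / A[i, i]
        return x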
Combustion Device Failures During Space Shuttle Main Engine Development
NASA Technical Reports Server (NTRS)
Goetz, Otto K.; Monk, Jan C.
2005-01-01
Major Causes: Limited Initial Materials Properties. Limited Structural Models - especially fatigue. Limited Thermal Models. Limited Aerodynamic Models. Human Errors. Limited Component Test. High Pressure. Complicated Control.
NASA Astrophysics Data System (ADS)
Danilova-Tret'yak, S. M.; Evseeva, L. E.; Tanaeva, S. A.
2014-11-01
Experimental investigations of the thermophysical properties of traditional and modified asbestos-reinforced laminates depending on the type of their carbon nanofiller have been carried out in the range of temperatures from -150 to 150°C. It has been shown that the largest (nearly twofold) increase in the thermal-conductivity and thermal-diffusivity coefficients of the indicated materials is observed when they are modified with a small-scale fraction of a nanofiller (carbon nanotubes). The specific heats of the modified and traditional asbestos-reinforced laminates turned out to be identical, in practice, within the measurement error.
Tool use and mechanical problem solving in apraxia.
Goldenberg, G; Hagmann, S
1998-07-01
Moorlaas (1928) proposed that apraxic patients can identify objects and can remember the purpose they have been made for but do not know the way in which they must be used to achieve that purpose. Knowledge about the use of objects and tools can have two sources: it can be based on retrieval of instructions of use from semantic memory or on a direct inference of function from structure. The ability to infer function from structure enables subjects to use unfamiliar tools and to detect alternative uses of familiar tools. It is the basis of mechanical problem solving. The purpose of the present study was to analyze retrieval of instruction of use, mechanical problem solving, and actual tool use in patients with apraxia due to circumscribed lesions of the left hemisphere. For assessing mechanical problem solving, we developed a test of selection and application of novel tools. Access to instruction of use was tested by pantomime of tool use. Actual tool use was examined for the same familiar tools. Forty-two patients with left brain damage (LBD) and aphasia, 22 patients with right brain damage (RBD) and 22 controls were examined. Only LBD patients differed from controls on all tests. RBD patients had difficulties with the use but not with the selection of novel tools. In LBD patients there was a significant correlation between pantomime of tool use and novel tool selection, but there were single cases who scored in the defective range on one of these tests and normally on the other. Analysis of LBD patients' lesions suggested that frontal lobe damage does not disturb novel tool selection. Only LBD patients who failed on pantomime of object use and on novel tool selection committed errors in actual use of familiar tools. The finding that mechanical problem solving is invariably defective in apraxic patients who commit errors with familiar tools is in good accord with clinical observations, as the gravity of their errors goes beyond what one would expect as a mere sequel of loss of access to instruction of use.
Robust optimization of a tandem grating solar thermal absorber
NASA Astrophysics Data System (ADS)
Choi, Jongin; Kim, Mingeon; Kang, Kyeonghwan; Lee, Ikjin; Lee, Bong Jae
2018-04-01
Ideal solar thermal absorbers need to have a high value of the spectral absorptance in the broad solar spectrum to utilize the solar radiation effectively. The majority of recent studies on solar thermal absorbers focus on achieving nearly perfect absorption using nanostructures, whose characteristic dimension is smaller than the wavelength of sunlight. However, precise fabrication of such nanostructures is not easy in reality; that is, unavoidable errors always occur to some extent in the dimension of fabricated nanostructures, causing an undesirable deviation of the absorption performance between the designed structure and the actually fabricated one. In order to minimize the variation in the solar absorptance due to the fabrication error, robust optimization can be performed during the design process. However, the optimization of a solar thermal absorber considering all design variables often requires tremendous computational costs to find an optimum combination of design variables with the robustness as well as the high performance. To achieve this goal, we apply the robust optimization using the Kriging method and the genetic algorithm for designing a tandem grating solar absorber. By constructing a surrogate model through the Kriging method, computational cost can be substantially reduced because exact calculation of the performance for every combination of variables is not necessary. Using the surrogate model and the genetic algorithm, we successfully design an effective solar thermal absorber exhibiting a low level of performance degradation due to the fabrication uncertainty of design variables.
Porous tooling process for manufacture of graphite/polyimide composites
NASA Technical Reports Server (NTRS)
Smiser, L. W.; Orr, K. K.; Araujo, S. M.
1981-01-01
A porous tooling system was selected for the processing of Graphite/PMR-15 Polyimide laminates in thicknesses up to 3.2 mm (0.125 inch). This tool system must have reasonable strength, permeability, dimensional stability, and thermal conductivity to accomplish curing at 600 F and 200 psi autoclave temperature and pressure. A permeability measuring apparatus was constructed, and the permeability vs. casting water level relationship was determined in order to produce tools at three different permeability levels. On these tools, laminates of 5, 11, and 22 plies (0.027, 0.060, and 0.121 inch) were produced and evaluated by ultrasonic, mechanical, and thermal tests to determine the effect of the tool permeability on the cured laminates. All tools produced acceptable laminates at 5 and 11 plies, but only the highest permeability produced acceptable clear ultrasonic C-Scans. Recommendations are made for future investigations of design geometry and strengthening techniques for porous ceramic tooling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakos, James Thomas
2004-04-01
It would not be possible to confidently qualify weapon systems performance or validate computer codes without knowing the uncertainty of the experimental data used. This report provides uncertainty estimates associated with thermocouple data for temperature measurements from two of Sandia's large-scale thermal facilities. These two facilities (the Radiant Heat Facility (RHF) and the Lurance Canyon Burn Site (LCBS)) routinely gather data from normal and abnormal thermal environment experiments. They are managed by Fire Science & Technology Department 09132. Uncertainty analyses were performed for several thermocouple (TC) data acquisition systems (DASs) used at the RHF and LCBS. These analyses apply to Type K, chromel-alumel thermocouples of various types: fiberglass sheathed TC wire, mineral-insulated, metal-sheathed (MIMS) TC assemblies, and are easily extended to other TC materials (e.g., copper-constantan). Several DASs were analyzed: (1) A Hewlett-Packard (HP) 3852A system, and (2) several National Instrument (NI) systems. The uncertainty analyses were performed on the entire system from the TC to the DAS output file. Uncertainty sources include TC mounting errors, ANSI standard calibration uncertainty for Type K TC wire, potential errors due to temperature gradients inside connectors, extension wire uncertainty, DAS hardware uncertainties including noise, common mode rejection ratio, digital voltmeter accuracy, mV to temperature conversion, analog to digital conversion, and other possible sources. Typical results for 'normal' environments (e.g., maximum of 300-400 K) showed the total uncertainty to be about ±1% of the reading in absolute temperature. In high temperature or high heat flux ('abnormal') thermal environments, total uncertainties range up to ±2-3% of the reading (maximum of 1300 K). The higher uncertainties in abnormal thermal environments are caused by increased errors due to the effects of imperfect TC attachment to the test item. 'Best practices' are provided in Section 9 to help the user to obtain the best measurements possible.
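Where the individual error sources can be treated as independent, a total channel uncertainty of the kind quoted above is commonly combined as a root-sum-square. The snippet below is only a generic illustration with made-up component values; it is not the report's actual source-by-source analysis.

    import numpy as np

    def combined_uncertainty(component_uncertainties_K):
        """Root-sum-square combination of independent uncertainty sources (in kelvin)."""
        u = np.asarray(component_uncertainties_K, dtype=float)
        return float(np.sqrt(np.sum(u**2)))

    # Hypothetical per-source values for one Type K channel (wire calibration,
    # TC mounting, extension wire, DAS hardware, mV-to-temperature conversion):
    print(combined_uncertainty([2.2, 1.5, 0.5, 0.8, 0.3]))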
Thermal Model Development for an X-Ray Mirror Assembly
NASA Technical Reports Server (NTRS)
Bonafede, Joseph A.
2015-01-01
Space-based x-ray optics require stringent thermal environmental control to achieve the desired image quality. Future x-ray telescopes will employ hundreds of nearly cylindrical, thin mirror shells to maximize effective area, with each shell built from small azimuthal segment pairs for manufacturability. Thermal issues with these thin optics are inevitable because the mirrors must have a near unobstructed view of space while maintaining near uniform 20 C temperature to avoid thermal deformations. NASA Goddard has been investigating the thermal characteristics of a future x-ray telescope with an image requirement of 5 arc-seconds and only 1 arc-second focusing error allocated for thermal distortion. The telescope employs 135 effective mirror shells formed from 7320 individual mirror segments mounted in three rings of 18, 30, and 36 modules each. Thermal requirements demand a complex thermal control system and detailed thermal modeling to verify performance. This presentation introduces innovative modeling efforts used for the conceptual design of the mirror assembly and presents results demonstrating potential feasibility of the thermal requirements.
Notes on Accuracy of Finite-Volume Discretization Schemes on Irregular Grids
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2011-01-01
Truncation-error analysis is a reliable tool in predicting convergence rates of discretization errors on regular smooth grids. However, it is often misleading in application to finite-volume discretization schemes on irregular (e.g., unstructured) grids. Convergence of truncation errors severely degrades on general irregular grids; a design-order convergence can be achieved only on grids with a certain degree of geometric regularity. Such degradation of truncation-error convergence does not necessarily imply a lower-order convergence of discretization errors. In these notes, irregular-grid computations demonstrate that the design-order discretization-error convergence can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all.
Ly, Thomas; Pamer, Carol; Dang, Oanh; Brajovic, Sonja; Haider, Shahrukh; Botsis, Taxiarchis; Milward, David; Winter, Andrew; Lu, Susan; Ball, Robert
2018-05-31
The FDA Adverse Event Reporting System (FAERS) is a primary data source for identifying unlabeled adverse events (AEs) in a drug or biologic drug product's postmarketing phase. Many AE reports must be reviewed by drug safety experts to identify unlabeled AEs, even if the reported AEs are previously identified, labeled AEs. Integrating the labeling status of drug product AEs into FAERS could increase report triage and review efficiency. Medical Dictionary for Regulatory Activities (MedDRA) is the standard for coding AE terms in FAERS cases. However, drug manufacturers are not required to use MedDRA to describe AEs in product labels. We hypothesized that natural language processing (NLP) tools could assist in automating the extraction and MedDRA mapping of AE terms in drug product labels. We evaluated the performance of three NLP systems, (ETHER, I2E, MetaMap) for their ability to extract AE terms from drug labels and translate the terms to MedDRA Preferred Terms (PTs). Pharmacovigilance-based annotation guidelines for extracting AE terms from drug labels were developed for this study. We compared each system's output to MedDRA PT AE lists, manually mapped by FDA pharmacovigilance experts using the guidelines, for ten drug product labels known as the "gold standard AE list" (GSL) dataset. Strict time and configuration conditions were imposed in order to test each system's capabilities under conditions of no human intervention and minimal system configuration. Each NLP system's output was evaluated for precision, recall and F measure in comparison to the GSL. A qualitative error analysis (QEA) was conducted to categorize a random sample of each NLP system's false positive and false negative errors. A total of 417, 278, and 250 false positive errors occurred in the ETHER, I2E, and MetaMap outputs, respectively. A total of 100, 80, and 187 false negative errors occurred in ETHER, I2E, and MetaMap outputs, respectively. Precision ranged from 64% to 77%, recall from 64% to 83% and F measure from 67% to 79%. I2E had the highest precision (77%), recall (83%) and F measure (79%). ETHER had the lowest precision (64%). MetaMap had the lowest recall (64%). The QEA found that the most prevalent false positive errors were context errors such as "Context error/General term", "Context error/Instructions or monitoring parameters", "Context error/Medical history preexisting condition underlying condition risk factor or contraindication", and "Context error/AE manifestations or secondary complication". The most prevalent false negative errors were in the "Incomplete or missed extraction" error category. Missing AE terms were typically due to long terms, or terms containing non-contiguous words which do not correspond exactly to MedDRA synonyms. MedDRA mapping errors were a minority of errors for ETHER and I2E but were the most prevalent false positive errors for MetaMap. The results demonstrate that it may be feasible to use NLP tools to extract and map AE terms to MedDRA PTs. However, the NLP tools we tested would need to be modified or reconfigured to lower the error rates to support their use in a regulatory setting. Tools specific for extracting AE terms from drug labels and mapping the terms to MedDRA PTs may need to be developed to support pharmacovigilance. Conducting research using additional NLP systems on a larger, diverse GSL would also be informative. Copyright © 2018. Published by Elsevier Inc.
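The precision, recall, and F measure figures reported above follow the standard definitions over true positive, false positive, and false negative counts. A minimal sketch (the example counts are invented, not the study's totals):

    def precision_recall_f1(tp, fp, fn):
        """Precision, recall, and F1 from true/false positive and false negative counts."""
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        return precision, recall, f1

    # Assumed counts for illustration only:
    print(precision_recall_f1(tp=400, fp=120, fn=80))  # ~ (0.77, 0.83, 0.80)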
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrigan, C.; McBirney, A.
1997-07-01
A comparative evaluation is made of various experiments and techniques for measuring the thermal properties of molten silicates. Sources of errors for the measurements of Snyder et al. are discussed. (AIP) © 1997 American Geophysical Union.
Development of an air flow thermal balance calorimeter
NASA Technical Reports Server (NTRS)
Sherfey, J. M.
1972-01-01
An air flow calorimeter, based on the idea of balancing an unknown rate of heat evolution with a known rate of heat evolution, was developed. Under restricted conditions, the prototype system is capable of measuring thermal wattages from 10 milliwatts to 1 watt, with an error no greater than 1 percent. Data were obtained which reveal system weaknesses and point to modifications which would effect significant improvements.
Modeling and Error Analysis of a Superconducting Gravity Gradiometer.
1979-08-01
fundamental limit to instrument sensitivity is the thermal noise of the sensor. For the gradiometer design outlined above, the best sensitivity...Mapoles at Stanford. Chapter IV determines the relation between dynamic range, the sensor Q, and the thermal noise of the cryogenic accelerometer. An...C.1 Accelerometer Optimization (1) Development and optimization of the loaded diaphragm sensor. (2) Determination of the optimal values of the
Accurate Micro-Tool Manufacturing by Iterative Pulsed-Laser Ablation
NASA Astrophysics Data System (ADS)
Warhanek, Maximilian; Mayr, Josef; Dörig, Christian; Wegener, Konrad
2017-12-01
Iterative processing solutions, including multiple cycles of material removal and measurement, are capable of achieving higher geometric accuracy by compensating for most deviations manifesting directly on the workpiece. Remaining error sources are the measurement uncertainty and the repeatability of the material-removal process including clamping errors. Due to the lack of processing forces, process fluids and wear, pulsed-laser ablation has proven high repeatability and can be realized directly on a measuring machine. This work takes advantage of this possibility by implementing an iterative, laser-based correction process for profile deviations registered directly on an optical measurement machine. This way efficient iterative processing is enabled, which is precise, applicable for all tool materials including diamond and eliminates clamping errors. The concept is proven by a prototypical implementation on an industrial tool measurement machine and a nanosecond fibre laser. A number of measurements are performed on both the machine and the processed workpieces. Results show production deviations within 2 μm diameter tolerance.
Step - wise transient method - Influence of heat source inertia
NASA Astrophysics Data System (ADS)
Malinarič, Svetozár; Dieška, Peter
2016-07-01
The step-wise transient (SWT) method is an experimental technique for measuring the thermal diffusivity and conductivity of materials. Theoretical models and the experimental apparatus are presented, and the influence of the heat source capacity is investigated using simulation of the experiment. Specimens of low-density polyethylene (LDPE) were measured, yielding a thermal diffusivity of 0.165 mm2/s and a thermal conductivity of 0.351 W/mK, with a coefficient of variation of less than 1.4 %. The heat source capacity caused a systematic error in the results of less than 1 %.
Engineering evaluations and studies. Report for IUS studies
NASA Technical Reports Server (NTRS)
1981-01-01
Reviews, investigations, and analyses of the Inertial Upper Stage (IUS) Spacecraft Tracking and Data Network (STDN) transponder are presented. Carrier lock detector performance for Tracking and Data Relay Satellite System (TDRSS) dual-mode operation is discussed, as is the problem of predicting instantaneous frequency error in the carrier loop. Costas loop performance analysis is critiqued, and the static tracking phase error induced by thermal noise biases is discussed.
Wear behavior of carbide tool coated with Yttria-stabilized zirconia nano particles.
NASA Astrophysics Data System (ADS)
Jadhav, Pavandatta M.; Reddy, Narala Suresh Kumar
2018-04-01
Wear mechanisms play a predominant role in reducing tool life during machining of titanium alloys. Variations in chip formation, high pressure loads, and spring-back all contribute to tool wear. In addition, many tool materials are unsuitable for this machining because their low thermal conductivity and volume-specific heat result in high cutting temperatures. To confront this issue, an electrostatic spray coating (ESC) technique is utilized to enhance tool life to an acceptable level. Yttria-stabilized zirconia (YSZ) acts as a thermal barrier coating with a high thermal expansion coefficient and high thermal shock resistance. This investigation focuses on the influence of a YSZ nanocoating on tungsten carbide tool material to improve the machinability of Ti-6Al-4V alloy. YSZ nano powder was coated on a tungsten carbide pin using the ESC technique. The coatings were tested for wear and friction behavior using a pin-on-disc tribological tester. The dry sliding wear test was performed with a titanium alloy (Ti-6Al-4V) disc and a YSZ-coated tungsten carbide pin at ambient atmosphere. Performance parameters such as wear rate and temperature rise were considered in the dry sliding test on the Ti-6Al-4V alloy disc. These parameters were calculated from the coefficient of friction and frictional force values obtained in the pin-on-disc test. Substantial resistance to wear was achieved by the coating.
Thermal protection system (TPS) monitoring using acoustic emission
NASA Astrophysics Data System (ADS)
Hurley, D. A.; Huston, D. R.; Fletcher, D. G.; Owens, W. P.
2011-04-01
This project investigates acoustic emission (AE) as a tool for monitoring the degradation of thermal protection systems (TPS). The AE sensors are part of an array of instrumentation on an inductively coupled plasma (ICP) torch designed for testing advanced thermal protection aerospace materials used for hypervelocity vehicles. AE are generated by stresses within the material, propagate as elastic stress waves, and can be detected with sensitive instrumentation. Graphite (POCO DFP-2) is used to study gas-surface interaction during degradation of thermal protection materials. The plasma is produced by an RF magnetic field driven by a 30 kW power supply at 3.5 MHz, which creates a noisy environment with large spikes when powered on or off. AE are waveguided from source to sensor by a liquid-cooled copper probe used to position the graphite sample in the plasma stream. Preliminary testing was used to set filters and thresholds on the AE detection system (Physical Acoustics PCI-2) to minimize the impact of considerable operating noise. Testing results show good correlation between AE data and the testing environment, which dictates the physics and chemistry of the thermal breakdown of the sample. Current efforts for the project are expanding the dataset and developing statistical analysis tools. This study shows the potential of AE as a powerful tool for analysis of the thermal degradation of thermal protection materials, with the unique capability of real-time, in-situ monitoring.
Spatial-temporal features of thermal images for Carpal Tunnel Syndrome detection
NASA Astrophysics Data System (ADS)
Estupinan Roldan, Kevin; Ortega Piedrahita, Marco A.; Benitez, Hernan D.
2014-02-01
Disorders associated with repeated trauma account for about 60% of all occupational illnesses, Carpal Tunnel Syndrome (CTS) being the most consulted today. Infrared Thermography (IT) has come to play an important role in the field of medicine. IT is non-invasive and detects diseases based on measuring temperature variations. IT represents a possible alternative to prevalent methods for diagnosis of CTS (i.e. nerve conduction studies and electromyography). This work presents a set of spatial-temporal features extracted from thermal images taken of healthy and ill patients. Support Vector Machine (SVM) classifiers test this feature space with Leave One Out (LOO) validation error. The results of the proposed approach show linear separability and lower validation errors when compared to features used in previous works that do not account for temperature spatial variability.
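A leave-one-out evaluation of an SVM on such a feature space can be set up in a few lines. The sketch below uses scikit-learn and assumes a feature matrix with one row per patient and a binary healthy/ill label vector; the linear kernel is an assumption, not necessarily the kernel used in the study.

    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    def loo_validation_error(features, labels):
        """Leave-one-out (LOO) validation error of an SVM classifier.

        features : array of shape (n_patients, n_features) of spatial-temporal
                   thermal-image features
        labels   : array of shape (n_patients,) with 0 = healthy, 1 = CTS
        """
        clf = SVC(kernel="linear")
        accuracy_per_fold = cross_val_score(clf, features, labels, cv=LeaveOneOut())
        return 1.0 - accuracy_per_fold.mean()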
Design and study of water supply system for supercritical unit boiler in thermal power station
NASA Astrophysics Data System (ADS)
Du, Zenghui
2018-04-01
In order to design and optimize the boiler feed water system of a supercritical unit, the establishment of a highly accurate controlled-object model and knowledge of its dynamic characteristics are prerequisites for developing a perfect thermal control system. Mechanism-based modeling often leads to large systematic errors. In this paper, a modern intelligent identification method is used to establish a high-precision quantitative model from the information contained in the historical operation data of the boiler's typical thermal system. This method avoids the difficulties caused by disturbance-experiment modeling of the actual system in the field, and provides a strong reference for the design and optimization of the thermal automation control system in the thermal power plant.
NASA Astrophysics Data System (ADS)
Kovács, Attila; Unger, János; Gál, Csilla V.; Kántor, Noémi
2016-07-01
This study introduces new methodological concepts for integrating seasonal subjective thermal assessment patterns of people into the thermal components of two tourism climatological evaluation tools: the Tourism Climatic Index (TCI) and the Climate-Tourism/Transfer-Information-Scheme (CTIS). In the case of the TCI, we replaced the air temperature and relative humidity as the basis of the initial rating system with the physiologically equivalent temperature (PET)—a complex human biometeorological index. This modification improves the TCI's potential to evaluate the thermal aspects of climate. The major accomplishments of this study are (a) the development of a new, PET-based rating system and its integration into the thermal sub-indices of the TCI and (b) the regionalization of the thermal components of CTIS to reflect both the thermal sensation and preference patterns of people. A 2-year-long (2011-2012) thermal comfort survey conducted in Szeged, Hungary, from spring to autumn was utilized to demonstrate the implementation of the introduced concepts. We found considerable differences between the thermal perception and preference patterns of Hungarians, with additional variations across the evaluated seasons. This paper describes the proposed methodology for the integration of the new seasonal, perception-based, and preference-based PET rating systems into the TCI, and presents the incorporation of new PET thresholds into the CTIS. In order to demonstrate the utility of the modified evaluation tools, we performed case study climate analyses for three Hungarian tourist destinations. The additional adjustments introduced during the course of those analyses include the reduction of TCI's temporal resolution to 10-day intervals and the exclusion of nocturnal and winter periods from the investigation.
Optical system components for navigation grade fiber optic gyroscopes
NASA Astrophysics Data System (ADS)
Heimann, Marcus; Liesegang, Maximilian; Arndt-Staufenbiel, Norbert; Schröder, Henning; Lang, Klaus-Dieter
2013-10-01
Interferometric fiber optic gyroscopes belong to the class of inertial sensors. Due to their high accuracy they are used for absolute position and rotation measurement in manned/unmanned vehicles, e.g. submarines, ground vehicles, aircraft or satellites. The important system components are the light source, the electro optical phase modulator, the optical fiber coil and the photodetector. This paper is focused on approaches to realize a stable light source and fiber coil. A superluminescent diode and an erbium-doped fiber laser were studied to realize an accurate and stable light source. Therefore, the influence of the polarization grade of the source and the effects due to back reflections to the source were studied. During operation, thermal working conditions severely affect accuracy and stability of the optical fiber coil, which is the sensor element. Thermal gradients that are applied to the fiber coil have large negative effects on the achievable system accuracy of the optic gyroscope. Therefore, a way of calculating and compensating the rotation rate error of a fiber coil due to thermal change is introduced. A simplified 3-dimensional FEM of a quadrupole wound fiber coil is used to determine the build-up of thermal fields in the polarization maintaining fiber due to outside heating sources. The rotation rate error due to these sources is then calculated and compared to measurement data. A simple regression model is used to compensate the rotation rate error with temperature measurement at the outside of the fiber coil. To realize a compact and robust optical package for some of the relevant optical system components, an approach based on ion exchanged waveguides in thin glass was developed. These waveguides are used to realize 1x2 and 1x4 splitters with a fiber coupling interface or direct photodiode coupling.
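The regression-based compensation described above can be sketched in a few lines: fit the thermally induced rotation-rate error against the external coil temperature measurement, then subtract the model prediction during operation. The snippet below is a minimal, assumed linear model; the actual regressor, units, and variable names in the paper may differ.

    import numpy as np

    def fit_thermal_drift_model(coil_temperature_C, rate_error_deg_per_h):
        """Fit a linear regression of rotation-rate error vs. outside coil temperature."""
        slope, intercept = np.polyfit(coil_temperature_C, rate_error_deg_per_h, deg=1)
        return slope, intercept

    def compensate_rate(measured_rate_deg_per_h, coil_temperature_C, slope, intercept):
        """Subtract the predicted thermally induced error from the measured rate."""
        predicted_error = slope * coil_temperature_C + intercept
        return measured_rate_deg_per_h - predicted_error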
NASA Technical Reports Server (NTRS)
Troy, B. E., Jr.; Maier, E. J.
1975-01-01
The effects of the grid transparency and finite collector size on the values of thermal ion density and temperature determined by the standard RPA (retarding potential analyzer) analysis method are investigated. The current-voltage curves calculated for varying RPA parameters and a given ion mass, temperature, and density are analyzed by the standard RPA method. It is found that only small errors in temperature and density are introduced for an RPA with typical dimensions, and that even when the density error is substantial for nontypical dimensions, the temperature error remains minimum.
Tool Preloads Screw and Applies Locknut
NASA Technical Reports Server (NTRS)
Wood, K. E.
1982-01-01
Special tool reaches through structural members inside Space Shuttle to fasten nut on preloaded screw that holds thermal protection tile against outside skin of vehicle. Tool attaches tiles with accurately controlled tensile loading.
Automated Classification of Phonological Errors in Aphasic Language
Ahuja, Sanjeev B.; Reggia, James A.; Berndt, Rita S.
1984-01-01
Using heuristically-guided state space search, a prototype program has been developed to simulate and classify phonemic errors occurring in the speech of neurologically-impaired patients. Simulations are based on an interchangeable rule/operator set of elementary errors which represent a theory of phonemic processing faults. This work introduces and evaluates a novel approach to error simulation and classification, it provides a prototype simulation tool for neurolinguistic research, and it forms the initial phase of a larger research effort involving computer modelling of neurolinguistic processes.
Importance of interpolation and coincidence errors in data fusion
NASA Astrophysics Data System (ADS)
Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana
2018-02-01
The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on the number of degrees of freedom and the errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.
Integrated Modeling Tools for Thermal Analysis and Applications
NASA Technical Reports Server (NTRS)
Milman, Mark H.; Needels, Laura; Papalexandris, Miltiadis
1999-01-01
Integrated modeling of spacecraft systems is a rapidly evolving area in which multidisciplinary models are developed to design and analyze spacecraft configurations. These models are especially important in the early design stages where rapid trades between subsystems can substantially impact design decisions. Integrated modeling is one of the cornerstones of two of NASA's planned missions in the Origins Program -- the Next Generation Space Telescope (NGST) and the Space Interferometry Mission (SIM). Common modeling tools for control design and opto-mechanical analysis have recently emerged and are becoming increasingly widely used. A discipline that has been somewhat less integrated, but is nevertheless of critical concern for high precision optical instruments, is thermal analysis and design. A major factor contributing to this mild estrangement is that the modeling philosophies and objectives for structural and thermal systems typically do not coincide. Consequently, the tools that are used in these disciplines suffer a degree of incompatibility, each having developed along their own evolutionary path. Although standard thermal tools have worked relatively well in the past, integration with other disciplines requires revisiting modeling assumptions and solution methods. Over the past several years we have been developing a MATLAB-based integrated modeling tool called IMOS (Integrated Modeling of Optical Systems) which integrates many aspects of structural, optical, control and dynamical analysis disciplines. Recent efforts have included developing a thermal modeling and analysis capability, which is the subject of this article. Currently, the IMOS thermal suite contains steady state and transient heat equation solvers, and the ability to set up the linear conduction network from an IMOS finite element model. The IMOS code generates linear conduction elements associated with plates and beams/rods of the thermal network directly from the finite element structural model. Conductances for temperature varying materials are accommodated. This capability both streamlines the process of developing the thermal model from the finite element model, and also makes the structural and thermal models compatible in the sense that each structural node is associated with a thermal node. This is particularly useful when the purpose of the analysis is to predict structural deformations due to thermal loads. The steady state solver uses a restricted step size Newton method, and the transient solver is an adaptive step size implicit method applicable to general differential algebraic systems. Temperature dependent conductances and capacitances are accommodated by the solvers. In addition to discussing the modeling and solution methods, applications where the thermal modeling is "in the loop" with sensitivity analysis, optimization, and optical performance, drawn from our experiences with the Space Interferometry Mission (SIM) and the Next Generation Space Telescope (NGST), are presented.
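For the constant-conductance case, the steady-state part of such a thermal network reduces to a single linear solve of G T = Q with some nodes held at boundary temperatures; the temperature-dependent case wraps this in a Newton iteration, as the abstract describes. The sketch below is a generic conduction-network solve, not IMOS code, and the matrix conventions are assumptions.

    import numpy as np

    def steady_state_temperatures(G, Q, fixed_nodes, fixed_temps):
        """Solve a linear conduction network G*T = Q for steady-state temperatures.

        G           : (n, n) conduction matrix [W/K] assembled from the network
        Q           : (n,) applied heat loads [W]
        fixed_nodes : indices of nodes with prescribed (boundary) temperatures
        fixed_temps : prescribed temperatures at those nodes [K]
        """
        G = np.asarray(G, dtype=float)
        Q = np.asarray(Q, dtype=float)
        fixed_nodes = np.asarray(fixed_nodes, dtype=int)
        n = len(Q)
        free = np.setdiff1d(np.arange(n), fixed_nodes)
        T = np.zeros(n)
        T[fixed_nodes] = fixed_temps
        # Eliminate the prescribed nodes and solve for the free temperatures.
        rhs = Q[free] - G[np.ix_(free, fixed_nodes)] @ T[fixed_nodes]
        T[free] = np.linalg.solve(G[np.ix_(free, free)], rhs)
        return T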
NASA Astrophysics Data System (ADS)
Vairamuthu, G.; Thangagiri, B.; Sundarapandian, S.
2018-01-01
The present work investigates the effect of varying Nozzle Opening Pressures (NOP) from 220 bar to 250 bar on performance, emissions and combustion characteristics of Calophyllum inophyllum Methyl Ester (CIME) in a constant speed, Direct Injection (DI) diesel engine using Artificial Neural Network (ANN) approach. An ANN model has been developed to predict a correlation between specific fuel consumption (SFC), brake thermal efficiency (BTE), exhaust gas temperature (EGT), Unburnt hydrocarbon (UBHC), CO, CO2, NOx and smoke density using load, blend (B0 and B100) and NOP as input data. A standard Back-Propagation Algorithm (BPA) for the engine is used in this model. A Multi Layer Perceptron network (MLP) is used for nonlinear mapping between the input and the output parameters. An ANN model can predict the performance of diesel engine and the exhaust emissions with correlation coefficient (R2) in the range of 0.98-1. Mean Relative Errors (MRE) values are in the range of 0.46-5.8%, while the Mean Square Errors (MSE) are found to be very low. It is evident that the ANN models are reliable tools for the prediction of DI diesel engine performance and emissions. The test results show that the optimum NOP is 250 bar with B100.
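A back-propagation MLP of the kind described can be reproduced with standard tools. The sketch below uses scikit-learn's MLPRegressor with load, blend, and NOP as the three inputs and the eight measured quantities as outputs; the hidden-layer size, scaling, and solver settings are assumptions, not the network reported in the paper.

    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def build_engine_model():
        """MLP trained by back-propagation: [load, blend, NOP] ->
        [SFC, BTE, EGT, UBHC, CO, CO2, NOx, smoke density]."""
        return make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
        )

    # model = build_engine_model()
    # model.fit(X_train, y_train)     # X: (n_runs, 3), y: (n_runs, 8)
    # y_pred = model.predict(X_test)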
Acoustic evidence for phonologically mismatched speech errors.
Gormley, Andrea
2015-04-01
Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of speech errors that uncovers non-accommodated or mismatch errors. A mismatch error is a sub-phonemic error that results in an incorrect surface phonology. This type of error could arise during the processing of phonological rules, or it could be made at the motor level of implementation. The results of this work have important implications for both experimental and theoretical research. For experimentalists, it validates the tools used for error induction and the acoustic determination of errors free of perceptual bias. For theorists, this methodology can be used to test the nature of the processes proposed in language production.
MOD Tool (Microwave Optics Design Tool)
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Borgioli, Andrea; Cwik, Tom; Fu, Chuigang; Imbriale, William A.; Jamnejad, Vahraz; Springer, Paul L.
1999-01-01
The Jet Propulsion Laboratory (JPL) is currently designing and building a number of instruments that operate in the microwave and millimeter-wave bands. These include MIRO (Microwave Instrument for the Rosetta Orbiter), MLS (Microwave Limb Sounder), and IMAS (Integrated Multispectral Atmospheric Sounder). These instruments must be designed and built to meet key design criteria (e.g., beamwidth, gain, pointing) obtained from the scientific goals for the instrument. These criteria are frequently functions of the operating environment (both thermal and mechanical). To design and build instruments which meet these criteria, it is essential to be able to model the instrument in its environments. Currently, a number of modeling tools exist. Commonly used tools at JPL include: FEMAP (meshing), NASTRAN (structural modeling), TRASYS and SINDA (thermal modeling), MACOS/IMOS (optical modeling), and POPO (physical optics modeling). Each of these tools is used by an analyst, who models the instrument in one discipline. The analyst then provides the results of this modeling to another analyst, who continues the overall modeling in another discipline. There is a large reengineering task in place at JPL to automate and speed up the structural and thermal modeling disciplines, which does not include MOD Tool. The focus of MOD Tool (and of this paper) is in the fields unique to microwave and millimeter-wave instrument design. These include initial design and analysis of the instrument without thermal or structural loads, the automation of the transfer of this design to a high-end CAD tool, and the analysis of the structurally deformed instrument (due to structural and/or thermal loads). MOD Tool is a distributed tool, with a database of design information residing on a server, physical optics analysis being performed on a variety of supercomputer platforms, and a graphical user interface (GUI) residing on the user's desktop computer. The MOD Tool client is being developed using Tcl/Tk, which allows the user to work on a choice of platforms (PC, Mac, or Unix) after downloading the Tcl/Tk binary, which is readily available on the web. The MOD Tool server is written using Expect, and it resides on a Sun workstation. Client/server communications are performed over a socket, where upon a connection from a client to the server, the server spawns a child that is dedicated to communicating with that client. The server communicates with other machines, such as supercomputers, using Expect, with the username and password being provided by the user on the client.
NASA Astrophysics Data System (ADS)
Goretzki, Nora; Inbar, Nimrod; Siebert, Christian; Möller, Peter; Rosenthal, Eliyahu; Schneider, Michael; Magri, Fabien
2015-04-01
Salty and thermal springs exist along the lakeshore of the Sea of Galilee, which covers most of the Tiberias Basin (TB) in the northern Jordan-Dead Sea Transform, Israel/Jordan. As it is the only freshwater reservoir of the entire area, it is important to study the salinisation processes that pollute the lake. Simulations of thermohaline flow along a 35 km NW-SE profile show that meteoric and relic brines are flushed by the regional flow from the surrounding heights and thermally induced groundwater flow within the faults (Magri et al., 2015). Several model runs with trial and error were necessary to calibrate the hydraulic conductivity of both faults and major aquifers in order to fit temperature logs and spring salinity. It turned out that the hydraulic conductivity of the faults ranges between 30 and 140 m/yr, whereas the hydraulic conductivity of the Upper Cenomanian aquifer is as high as 200 m/yr. However, large-scale transport processes also depend on other physical parameters such as thermal conductivity, porosity and the fluid thermal expansion coefficient, which are hardly known. Here, inverse problems (IP) are solved along the NW-SE profile to better constrain the physical parameters (a) hydraulic conductivity, (b) thermal conductivity and (c) thermal expansion coefficient. The PEST code (Doherty, 2010) is applied via the graphical interface FePEST in FEFLOW (Diersch, 2014). The results show that both thermal and hydraulic conductivity are consistent with the values determined with the trial-and-error calibrations. Besides being an automatic approach that speeds up the calibration process, the IP allows a wide range of parameter values to be covered, providing additional solutions not found with the trial-and-error method. Our study shows that geothermal systems like the TB are more comprehensively understood when inverse models are applied to constrain coupled fluid flow processes over large spatial scales. References: Diersch, H.-J.G., 2014. FEFLOW: Finite Element Modeling of Flow, Mass and Heat Transport in Porous and Fractured Media. Springer-Verlag, Berlin Heidelberg, 996 p. Doherty, J., 2010. PEST: Model-Independent Parameter Estimation, User Manual, 5th Edition. Watermark, Brisbane, Australia. Magri, F., Inbar, N., Siebert, C., Rosenthal, E., Guttman, J., Möller, P., 2015. Transient simulations of large-scale hydrogeological processes causing temperature and salinity anomalies in the Tiberias Basin. Journal of Hydrology, 520(0), 342-355.
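The step that the inverse problem automates, adjusting poorly known parameters until the simulated observations match the measured temperature logs and salinities, can be illustrated with a generic least-squares sketch. The toy forward model, parameter names and bounds below are placeholders; the study itself couples PEST/FePEST to a full FEFLOW thermohaline model.

```python
# Hedged sketch of inverse parameter estimation: fit model parameters so that
# simulated observations match measurements (a toy forward model stands in for
# the coupled thermohaline simulation used in the study).
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, depths):
    """Toy stand-in: a temperature profile controlled by two 'conductivities'."""
    k_fault, k_aquifer = params
    return 20.0 + depths * (0.03 / k_aquifer) + 5.0 * np.exp(-depths / (50.0 * k_fault))

rng = np.random.default_rng(0)
depths = np.linspace(0.0, 500.0, 30)
true_params = np.array([1.2, 0.8])
observed = forward_model(true_params, depths) + rng.normal(0.0, 0.2, depths.size)

def residuals(params):
    # Misfit between the forward model and the synthetic 'temperature log'
    return forward_model(params, depths) - observed

fit = least_squares(residuals, x0=[1.0, 1.0], bounds=([0.1, 0.1], [10.0, 10.0]))
print("estimated parameters:", fit.x)
```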
Optimization of Thermal Neutron Converter in SiC Sensors for Spectral Radiation Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krolikowski, Igor; Cetnar, Jerzy; Issa, Fatima
2015-07-01
Optimization of the neutron converter in SiC sensors is presented. The sensors are used for spectral radiation measurements of thermal and fast neutrons, and optionally gamma rays, at elevated temperature in a harsh radiation environment. The neutron converter, which is based on 10B, allows thermal neutrons to be detected by means of the neutron capture reaction. Two constructions of the sensors were used to measure radiation in experiments. Sensor responses collected in the experiments have been reproduced by a computer tool created by the authors, which allows the tool to be validated. The tool creates the response matrix function describing the characteristics of the sensors, and it was used for detailed analyses of the sensor responses. The obtained results help to optimize the neutron converter in order to increase thermal neutron detection. Several enhanced constructions of the sensors, which include a neutron converter based on 10B or 6Li, were proposed. (authors)
Monitoring Method of Cutting Force by Using Additional Spindle Sensors
NASA Astrophysics Data System (ADS)
Sarhan, Ahmed Aly Diaa; Matsubara, Atsushi; Sugihara, Motoyuki; Saraie, Hidenori; Ibaraki, Soichi; Kakino, Yoshiaki
This paper describes a monitoring method of cutting forces for the end milling process by using displacement sensors. Four eddy-current displacement sensors are installed on the spindle housing of a machining center so that they can detect the radial motion of the rotating spindle. Thermocouples are also attached to the spindle structure in order to examine the thermal effect in the displacement sensing. The change in the spindle stiffness due to the spindle temperature and the speed is investigated as well. Finally, the estimation performance of cutting forces using the spindle displacement sensors is experimentally investigated by machining tests on carbon steel in end milling operations under different cutting conditions. It is found that the monitoring errors are attributable to the thermal displacement of the spindle, the time lag of the sensing system, and the modeling error of the spindle stiffness. It is also shown that the root mean square errors between estimated and measured amplitudes of cutting forces are reduced to less than 20 N with proper selection of the linear stiffness.
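The estimation principle described above amounts to multiplying the sensed radial spindle displacement by a stiffness that itself depends on spindle temperature and speed. The sketch below illustrates that idea with a hypothetical linear stiffness model; the coefficient values are invented for illustration and are not the calibrated stiffness of the paper.

```python
import numpy as np

def spindle_stiffness(temp_c, speed_rpm, k0=250.0, a_temp=-0.8, a_speed=-2.0e-3):
    """Hypothetical linear stiffness model [N/um]: stiffness drops with
    temperature rise and spindle speed, as observed in the paper."""
    return k0 + a_temp * (temp_c - 20.0) + a_speed * speed_rpm

def estimate_cutting_force(displacement_um, temp_c, speed_rpm):
    """Estimate the radial cutting force [N] from the sensed spindle displacement."""
    return spindle_stiffness(temp_c, speed_rpm) * displacement_um

# Example: 0.5 um radial displacement at 35 degC spindle temperature and 6000 rpm
print(estimate_cutting_force(0.5, 35.0, 6000.0))
```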
Experimental Verification of Sparse Aperture Mask for Low Order Wavefront Sensing
NASA Astrophysics Data System (ADS)
Subedi, Hari; Kasdin, N. Jeremy
2017-01-01
To directly image exoplanets, future space-based missions are equipped with coronagraphs which manipulate the diffraction of starlight and create regions of high contrast called dark holes. Theoretically, coronagraphs can be designed to achieve the high level of contrast required to image exoplanets, which are billions of times dimmer than their host stars; however, the aberrations caused by optical imperfections and thermal fluctuations degrade the contrast in the dark holes. Focal plane wavefront control (FPWC) algorithms using deformable mirrors (DMs) are used to mitigate the quasi-static aberrations caused by optical imperfections. Although the FPWC methods correct the quasi-static aberrations, they are blind to dynamic errors caused by telescope jitter and thermal fluctuations. At Princeton's High Contrast Imaging Lab we have developed a new technique that integrates a sparse aperture mask with the coronagraph to estimate these low-order dynamic wavefront errors. This poster shows the effectiveness of a SAM Low-Order Wavefront Sensor in estimating and correcting these errors via simulation and experiment and compares the results to other methods, such as the Zernike Wavefront Sensor planned for WFIRST.
Thermally stratified squeezed flow between two vertical Riga plates with no slip conditions
NASA Astrophysics Data System (ADS)
Farooq, M.; Mansoor, Zahira; Ijaz Khan, M.; Hayat, T.; Anjum, A.; Mir, N. A.
2018-04-01
This paper demonstrates the mixed convective squeezing nanomaterials flow between two vertical plates, one of which is a Riga plate embedded in a thermally stratified medium subject to convective boundary conditions. Heat transfer features are elaborated with viscous dissipation. Single-wall and multi-wall carbon nanotubes are taken as nanoparticles to form a homogeneous solution in the water. A non-linear system of differential equations is obtained for the considered flow by using suitable transformations. Convergence analysis for velocity and temperature is computed and discussed explicitly through BVPh 2.0. Residual errors are also computed by BVPh 2.0 for the dimensionless governing equations. We introduce two undetermined convergence control parameters, i.e. ℏ_θ and ℏ_f, to compute the lowest entire error. The average residual error for the k-th-order approximation is given in a table. The effects of different flow variables on temperature and velocity distributions are sketched graphically and discussed comprehensively. Furthermore, the coefficient of skin friction and the Nusselt number are also analyzed through graphical data.
The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates
Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin
2011-01-01
A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). By using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is also reduced, from 0.66 K to 0.44 K, and the MAPE is 1.3%. PMID:22164030
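A minimal sketch of the same idea, a small back-propagation regression network that corrects satellite SST using air temperature, relative humidity and wind speed as predictors, is given below. The data are synthetic and the network size is arbitrary; only the GOES matchup data of the study could reproduce the quoted RMSE figures.

```python
# Hedged sketch: correct satellite SST with a small back-propagation network whose
# inputs are the raw SST plus the three error-driving variables named in the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
air_t = rng.uniform(22.0, 32.0, n)          # degC
rel_h = rng.uniform(60.0, 95.0, n)          # %
wind  = rng.uniform(0.0, 12.0, n)           # m/s
true_sst = 0.8 * air_t + 4.0                # synthetic 'truth'
goes_sst = true_sst + 0.02 * (rel_h - 75.0) - 0.03 * wind + rng.normal(0.0, 0.3, n)

X = np.column_stack([goes_sst, air_t, rel_h, wind])
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, true_sst)

rmse_before = np.sqrt(np.mean((goes_sst - true_sst) ** 2))
rmse_after  = np.sqrt(np.mean((model.predict(X) - true_sst) ** 2))
print(f"RMSE before: {rmse_before:.2f} K, after: {rmse_after:.2f} K")
```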
NASA Astrophysics Data System (ADS)
Debchoudhury, Shantanab; Earle, Gregory
2017-04-01
Retarding potential analyzers (RPAs) have a rich flight heritage. Standard curve-fitting analysis techniques exist that can infer state variables of the ionospheric plasma environment from RPA data, but the estimation process is prone to errors arising from a number of sources. Previous work has focused on the effects of grid geometry on uncertainties in estimation; however, no prior study has quantified the estimation errors due to additive noise. In this study, we characterize the errors in estimation of thermal plasma parameters by adding noise to simulated data derived from existing ionospheric models. We concentrate on low-altitude, mid-inclination orbits, since a number of nano-satellite missions are focused on this region of the ionosphere. The errors are quantified and cross-correlated for varying geomagnetic conditions.
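The noise study can be mimicked with a small Monte Carlo experiment: add random noise to a synthetic retarding curve, re-fit it many times, and examine the spread of the recovered parameters. The erfc-shaped characteristic below is a simplified stand-in for the full RPA current model, and the noise level and parameter values are assumptions.

```python
# Hedged Monte Carlo sketch: add noise to a synthetic retarding-potential curve,
# re-fit it, and collect the spread of the recovered parameters.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def rpa_current(v, i0, v_mid, width):
    # Simplified stand-in for the collected-current characteristic
    return 0.5 * i0 * erfc((v - v_mid) / (np.sqrt(2.0) * width))

v = np.linspace(0.0, 10.0, 80)
truth = (1.0e-6, 4.0, 0.6)                  # amplitude [A], midpoint [V], width [V]
clean = rpa_current(v, *truth)

rng = np.random.default_rng(1)
errors = []
for _ in range(200):                         # Monte Carlo over additive noise
    noisy = clean + rng.normal(0.0, 0.02e-6, v.size)
    popt, _ = curve_fit(rpa_current, v, noisy, p0=(1.0e-6, 3.0, 1.0))
    errors.append(popt - np.array(truth))

print("parameter standard deviations:", np.std(errors, axis=0))
```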
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
2008-01-01
An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high performance space systems.
Thermal Property Measurement of Semiconductor Melt using Modified Laser Flash Method
NASA Technical Reports Server (NTRS)
Lin, Bochuan; Zhu, Shen; Ban, Heng; Li, Chao; Scripa, Rosalla N.; Su, Ching-Hua; Lehoczky, Sandor L.
2003-01-01
This study further developed the standard laser flash method to measure multiple thermal properties of semiconductor melts. The modified method can determine the thermal diffusivity, thermal conductivity, and specific heat capacity of the melt simultaneously. The transient heat transfer process in the melt and its quartz container was numerically studied in detail. A fitting procedure, based on numerical simulation results and least root-mean-square-error fitting to the experimental data, was used to extract the values of specific heat capacity, thermal conductivity and thermal diffusivity. This modified method is a step forward from the standard laser flash method, which is usually used to measure the thermal diffusivity of solids. The results for tellurium (Te) at 873 K (specific heat capacity 300.2 J/(kg·K), thermal conductivity 3.50 W/(m·K), thermal diffusivity 2.04 × 10⁻⁶ m²/s) are within the range reported in the literature. The uncertainty analysis showed the quantitative effect of the sample geometry, the transient temperature measured, and the energy of the laser pulse.
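For orientation, the classical flash method on which the modified technique builds can be sketched as a least-squares fit of the analytical rear-face temperature rise (the Parker series solution for a solid slab) to the recorded transient. The paper's modified method instead fits a full numerical model of the melt and its quartz container; the thickness, noise level and starting values below are illustrative.

```python
# Hedged sketch of the classical flash method (Parker model) for a solid slab.
import numpy as np
from scipy.optimize import curve_fit

L = 2.0e-3   # assumed sample thickness [m]

def rear_face_rise(t, alpha, dT_max, n_terms=20):
    """Normalized rear-face temperature rise after an instantaneous flash."""
    s = np.ones_like(t)
    for n in range(1, n_terms + 1):
        s += 2.0 * (-1.0) ** n * np.exp(-(n * np.pi) ** 2 * alpha * t / L ** 2)
    return dT_max * s

rng = np.random.default_rng(0)
t = np.linspace(0.05, 2.0, 200)                     # s
true_alpha = 2.0e-6                                 # m^2/s, near the Te value quoted
data = rear_face_rise(t, true_alpha, 1.0) + rng.normal(0.0, 0.01, t.size)

popt, _ = curve_fit(rear_face_rise, t, data, p0=(1.0e-6, 0.8),
                    bounds=([1e-8, 0.1], [1e-4, 2.0]))
print("fitted thermal diffusivity [m^2/s]:", popt[0])
```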
Numerical analysis of thermal drilling technique on titanium sheet metal
NASA Astrophysics Data System (ADS)
Kumar, R.; Hynes, N. Rajesh Jesudoss
2018-05-01
Thermal drilling is a technique used in the drilling of sheet metal for various applications. It involves rotating a conical tool at high speed in order to drill the sheet metal, forming a hole with a bushing below the surface of the sheet. This article investigates the finite element analysis of thermal drilling on Ti6Al4V alloy sheet metal. The analysis was carried out by means of the DEFORM-3D simulation software to simulate the performance characteristics of the thermal drilling technique. Because of the contribution of high-temperature deformation in this technique, output performance measures that are difficult to obtain by the experimental approach can be successfully predicted by the finite element method. Therefore, the modeling and simulation of thermal drilling is an essential tool to predict the strain rate, stress distribution and temperature of the workpiece.
Evaluation of Phantom-Based Education System for Acupuncture Manipulation
Lee, In-Seon; Lee, Ye-Seul; Park, Hi-Joon; Lee, Hyejung; Chae, Younbyoung
2015-01-01
Background: Although acupuncture manipulation has been regarded as one of the important factors in clinical outcome, it has been difficult to train novice students to become skillful experts due to a lack of adequate educational programs and tools. Objectives: In the present study, we investigated whether newly developed phantom acupoint tools would be useful to practice-naïve acupuncture students for practicing the three different types of acupuncture manipulation to enhance their skills. Methods: We recruited 12 novice students and had them practice acupuncture manipulations on the phantom acupoint (5% agarose gel). We used the Acusensor 2 and compared their acupuncture manipulation techniques, for which the target criteria were depth and time factors, at acupoint LI11 in the human body before and after 10 training sessions. The outcomes were depth of needle insertion, depth error from the target criterion, time of rotating, lifting, and thrusting, time error from the target criteria, and the time ratio. Results: After 10 training sessions, the students showed significantly improved outcomes in depth of needle, depth error (rotation, reducing lifting/thrusting), thumb-forward time error, thumb-backward time error (rotation), and lifting time (reinforcing lifting/thrusting). Conclusions: The phantom acupoint tool could be useful in a phantom-based education program for acupuncture-manipulation training for students. For advanced education programs for acupuncture manipulation, we will need to collect additional information, such as patient responses, acupoint-specific anatomical characteristics, delicate tissue-like modeling, haptic and visual feedback, and data from an acupuncture practice simulator. PMID:25689598
Evaluation of phantom-based education system for acupuncture manipulation.
Lee, In-Seon; Lee, Ye-Seul; Park, Hi-Joon; Lee, Hyejung; Chae, Younbyoung
2015-01-01
Although acupuncture manipulation has been regarded as one of the important factors in clinical outcome, it has been difficult to train novice students to become skillful experts due to a lack of adequate educational programs and tools. In the present study, we investigated whether newly developed phantom acupoint tools would be useful to practice-naïve acupuncture students for practicing the three different types of acupuncture manipulation to enhance their skills. We recruited 12 novice students and had them practice acupuncture manipulations on the phantom acupoint (5% agarose gel). We used the Acusensor 2 and compared their acupuncture manipulation techniques, for which the target criteria were depth and time factors, at acupoint LI11 in the human body before and after 10 training sessions. The outcomes were depth of needle insertion, depth error from the target criterion, time of rotating, lifting, and thrusting, time error from the target criteria, and the time ratio. After 10 training sessions, the students showed significantly improved outcomes in depth of needle, depth error (rotation, reducing lifting/thrusting), thumb-forward time error, thumb-backward time error (rotation), and lifting time (reinforcing lifting/thrusting). The phantom acupoint tool could be useful in a phantom-based education program for acupuncture-manipulation training for students. For advanced education programs for acupuncture manipulation, we will need to collect additional information, such as patient responses, acupoint-specific anatomical characteristics, delicate tissue-like modeling, haptic and visual feedback, and data from an acupuncture practice simulator.
A Model for Hydrogen Thermal Conductivity and Viscosity Including the Critical Point
NASA Technical Reports Server (NTRS)
Wagner, Howard A.; Tunc, Gokturk; Bayazitoglu, Yildiz
2001-01-01
In order to conduct a thermal analysis of heat transfer to liquid hydrogen near the critical point, an accurate understanding of the thermal transport properties is required. A review of the available literature on hydrogen transport properties identified a lack of useful equations to predict the thermal conductivity and viscosity of liquid hydrogen. The tables published by the National Bureau of Standards were used to perform a series of curve fits to generate the needed correlation equations. These equations give the thermal conductivity and viscosity of hydrogen below 100 K. They agree with the published NBS tables, with less than a 1.5 percent error for temperatures below 100 K and pressures from the triple point to 1000 kPa. These equations also capture the divergence in the thermal conductivity at the critical point.
Thermal properties of nonstoichiometric uranium dioxide
NASA Astrophysics Data System (ADS)
Kavazauri, R.; Pokrovskiy, S. A.; Baranov, V. G.; Tenishev, A. V.
2016-04-01
In this paper, a method was developed for oxidizing pure uranium dioxide to a predetermined deviation from stoichiometry. Oxidation was carried out using the thermogravimetric method on a NETZSCH STA 409 CD with a solid-electrolyte galvanic cell for controlling the oxygen potential of the environment. Four uranium oxide samples were obtained with different oxygen-to-metal ratios: O/U = 2.002, 2.005, 2.015 and 2.033. For the obtained samples, the basic thermal characteristics (heat capacity, thermal diffusivity and thermal conductivity) were determined. The error of the heat capacity determination is equal to 5%. The thermal diffusivity and thermal conductivity of the samples decreased with increasing deviation from stoichiometry. For the sample with O/M = 2.033, the difference of both values from those of stoichiometric uranium dioxide is close to 50%.
Hysteresis of thin film IPRTs in the range 100 °C to 600 °C
NASA Astrophysics Data System (ADS)
Zvizdić, D.; Šestan, D.
2013-09-01
As opposed to SPRTs, IPRTs succumb to hysteresis when subjected to changes of temperature. This uncertainty component, although acknowledged as omnipresent in many other types of sensors (pressure, electrical, magnetic, humidity, etc.), has often been disregarded in their calibration certificates' uncertainty budgets in the past, its determination being costly, time-consuming and not appreciated by customers and manufacturers. In general, hysteresis is a phenomenon that results in a difference in an item's behavior when approached from a different path. Thermal hysteresis results in a difference in resistance at a given temperature based on the thermal history to which the PRTs were exposed. The most prominent factor that contributes to the hysteresis error in an IPRT is strain within the sensing element caused by thermal expansion and contraction. The strains that cause hysteresis error are closely related to the strains that cause repeatability error. Therefore, it is typical that PRTs that exhibit small hysteresis also exhibit small repeatability error, and PRTs that exhibit large hysteresis have poor repeatability. The aim of this paper is to provide a hysteresis characterization of a batch of IPRTs using the same type of thin-film sensor, encapsulated by the same procedure and the same company, and to estimate to what extent the thermal hysteresis obtained by testing one single thermometer (or a few thermometers) can serve as representative of other thermometers of the same type and manufacturer. This investigation should also indicate the range of hysteresis departure between IPRTs of the same type. Hysteresis was determined by cycling the IPRTs' temperature from 100 °C through intermediate points up to 600 °C and subsequently back to 100 °C. Within that range several typical sub-ranges are investigated: 100 °C to 400 °C, 100 °C to 500 °C, 100 °C to 600 °C, 300 °C to 500 °C and 300 °C to 600 °C. The hysteresis was determined at various temperatures by comparison calibration with an SPRT. The results of the investigation are presented in graphical form for all IPRTs, ranges and calibration points.
Nonlinear dynamic modeling of a V-shaped metal based thermally driven MEMS actuator for RF switches
NASA Astrophysics Data System (ADS)
Bakri-Kassem, Maher; Dhaouadi, Rached; Arabi, Mohamed; Estahbanati, Shahabeddin V.; Abdel-Rahman, Eihab
2018-05-01
In this paper, we propose a new dynamic model to describe the nonlinear characteristics of a V-shaped (chevron) metallic-based thermally driven MEMS actuator. We developed two models for the thermal actuator with two configurations. The first MEMS configuration has a small tip connected to the shuttle, while the second configuration has a folded spring and a wide beam attached to the shuttle. A detailed finite element model (FEM) and a lumped element model (LEM) are proposed for each configuration to completely characterize the electro-thermal and thermo-mechanical behaviors. The nonlinear resistivity of the polysilicon layer is extracted from the measured current-voltage (I-V) characteristics of the actuator and the corresponding simulated temperatures in the FEM model, knowing the resistivity of the polysilicon at room temperature from the manufacturer's handbook. Both developed models include the nonlinear temperature-dependent material properties. Numerical simulations in comparison with experimental data using a dedicated MEMS test apparatus verify the accuracy of the proposed LEM model to represent the complex dynamics of the thermal MEMS actuator. The LEM and FEM simulation results show an accuracy ranging from a maximum of 13% error down to a minimum of 1.4% error. The actuator with the lower thermal load to air that includes a folded spring (FS), also known as the high surface area actuator, is compared to the actuator without FS, also known as the low surface area actuator, in terms of the I-V characteristics, power consumption, and experimental static and dynamic responses of the tip displacement.
Theoretical and experimental errors for in situ measurements of plant water potential.
Shackel, K A
1984-07-01
Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (-0.6 to -1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design.
Theoretical and Experimental Errors for In Situ Measurements of Plant Water Potential 1
Shackel, Kenneth A.
1984-01-01
Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (−0.6 to −1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design. PMID:16663701
NASA Astrophysics Data System (ADS)
Pieper, Michael; Manolakis, Dimitris; Truslow, Eric; Cooley, Thomas; Brueggeman, Michael; Jacobson, John; Weisner, Andrew
2017-08-01
Accurate estimation or retrieval of surface emissivity from long-wave infrared or thermal infrared (TIR) hyperspectral imaging data acquired by airborne or spaceborne sensors is necessary for many scientific and defense applications. This process consists of two interwoven steps: atmospheric compensation and temperature-emissivity separation (TES). The most widely used TES algorithms for hyperspectral imaging data assume that the emissivity spectra for solids are smooth compared to the atmospheric transmission function. We develop a model to explain and evaluate the performance of TES algorithms using a smoothing approach. Based on this model, we identify three sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using the incorrect temperature, and the errors caused by sensor noise. For each TES smoothing technique, we analyze the bias and variability of the temperature errors, which translate to emissivity errors. The performance model explains how the errors interact to generate temperature errors. Since we assume exact knowledge of the atmosphere, the presented results provide an upper bound on the performance of TES algorithms based on the smoothness assumption.
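The smoothness criterion analyzed above can be illustrated with a toy retrieval: for each candidate temperature, the implied emissivity is computed and the temperature whose emissivity spectrum shows the least residual structure is kept. The sketch below is a generic smoothness-based TES step under simplifying assumptions (atmosphere already compensated, synthetic downwelling radiance, arbitrary band and window sizes), not the specific performance model of the paper.

```python
# Hedged sketch of a smoothness-based temperature-emissivity separation step.
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl_um, temp_k):
    """Spectral radiance [W m^-2 sr^-1 um^-1] at wavelength wl_um [micron]."""
    wl = wl_um * 1e-6
    return 2.0 * H * C**2 / wl**5 / (np.exp(H * C / (wl * KB * temp_k)) - 1.0) * 1e-6

wl = np.linspace(8.0, 12.0, 200)                       # LWIR band [um]
rng = np.random.default_rng(2)
# Synthetic downwelling sky radiance with sharp 'atmospheric' structure
l_down = 0.3 * planck(wl, 260.0) * (1.0 + 0.2 * rng.standard_normal(wl.size))
true_T, true_eps = 300.0, 0.96 + 0.02 * np.sin(wl)      # smooth solid emissivity
l_surf = true_eps * planck(wl, true_T) + (1.0 - true_eps) * l_down

def roughness(T):
    # Emissivity implied by candidate T; wrong T lets sharp sky features leak in
    eps = (l_surf - l_down) / (planck(wl, T) - l_down)
    smooth = np.convolve(eps, np.ones(7) / 7.0, mode="same")
    return np.sum((eps - smooth) ** 2)

candidates = np.arange(295.0, 305.01, 0.05)
best_T = candidates[np.argmin([roughness(T) for T in candidates])]
print("retrieved temperature:", best_T, "K")
```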
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnold, Anthony, E-mail: anthony.arnold@sesiahs.health.nsw.gov.a; Delaney, Geoff P.; Cassapi, Lynette
Purpose: Radiotherapy is a common treatment for cancer patients. Although the incidence of error is low, errors can be severe or affect significant numbers of patients. In addition, errors will often not manifest until long periods after treatment. This study describes the development of an incident reporting tool that allows categorical analysis and time trend reporting, covering the first 3 years of use. Methods and Materials: A radiotherapy-specific incident analysis system was established. Staff members were encouraged to report actual errors and near-miss events detected at the prescription, simulation, planning, or treatment phases of radiotherapy delivery. Trend reporting was reviewed monthly. Results: Reports were analyzed for the first 3 years of operation (May 2004-2007). A total of 688 reports was received during the study period. The actual error rate was 0.2% per treatment episode. During the study period, the actual error rates reduced significantly from 1% per year to 0.3% per year (p < 0.001), as did the total event report rates (p < 0.0001). There were 3.5 times as many near misses reported compared with actual errors. Conclusions: This system has allowed real-time analysis of events within a radiation oncology department, leading to a reduced error rate through a focus on learning and prevention from the near-miss reports. Plans are underway to develop this reporting tool for Australia and New Zealand.
NASA Astrophysics Data System (ADS)
Psikuta, Agnes; Mert, Emel; Annaheim, Simon; Rossi, René M.
2018-02-01
To evaluate the quality of new energy-saving and performance-supporting building and urban settings, the thermal sensation and comfort models are often used. The accuracy of these models is related to accurate prediction of the human thermo-physiological response that, in turn, is highly sensitive to the local effect of clothing. This study aimed at the development of an empirical regression model of the air gap thickness and the contact area in clothing to accurately simulate human thermal and perceptual response. The statistical model predicted reliably both parameters for 14 body regions based on the clothing ease allowances. The effect of the standard error in air gap prediction on the thermo-physiological response was lower than the differences between healthy humans. It was demonstrated that currently used assumptions and methods for determination of the air gap thickness can produce a substantial error for all global, mean, and local physiological parameters, and hence, lead to false estimation of the resultant physiological state of the human body, thermal sensation, and comfort. Thus, this model may help researchers to strive for improvement of human thermal comfort, health, productivity, safety, and overall sense of well-being with simultaneous reduction of energy consumption and costs in built environment.
Ballesteros, Rocío
2017-01-01
The acquisition, processing, and interpretation of thermal images from unmanned aerial vehicles (UAVs) is becoming a useful source of information for agronomic applications because of the higher temporal and spatial resolution of these products compared with those obtained from satellites. However, due to the low load capacity of UAVs, they need to mount light, uncooled thermal cameras, where the microbolometer is not stabilized to a constant temperature. This makes the camera precision low for many applications. Additionally, the low contrast of the thermal images makes the photogrammetry process inaccurate, which results in large errors in the generation of orthoimages. In this research, we propose the use of new calibration algorithms, based on neural networks, which consider the sensor temperature and the digital response of the microbolometer as input data. In addition, we evaluate the use of the Wallis filter for improving the quality of the photogrammetry process using structure-from-motion software. With the proposed calibration algorithm, the measurement accuracy increased from 3.55 °C with the original camera configuration to 1.37 °C. The implementation of the Wallis filter increases the number of tie-points from 58,000 to 110,000 and decreases the total positioning error from 7.1 m to 1.3 m. PMID:28946606
Ribeiro-Gomes, Krishna; Hernández-López, David; Ortega, José F; Ballesteros, Rocío; Poblete, Tomás; Moreno, Miguel A
2017-09-23
The acquisition, processing, and interpretation of thermal images from unmanned aerial vehicles (UAVs) is becoming a useful source of information for agronomic applications because of the higher temporal and spatial resolution of these products compared with those obtained from satellites. However, due to the low load capacity of UAVs, they need to mount light, uncooled thermal cameras, where the microbolometer is not stabilized to a constant temperature. This makes the camera precision low for many applications. Additionally, the low contrast of the thermal images makes the photogrammetry process inaccurate, which results in large errors in the generation of orthoimages. In this research, we propose the use of new calibration algorithms, based on neural networks, which consider the sensor temperature and the digital response of the microbolometer as input data. In addition, we evaluate the use of the Wallis filter for improving the quality of the photogrammetry process using structure-from-motion software. With the proposed calibration algorithm, the measurement accuracy increased from 3.55 °C with the original camera configuration to 1.37 °C. The implementation of the Wallis filter increases the number of tie-points from 58,000 to 110,000 and decreases the total positioning error from 7.1 m to 1.3 m.
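A basic form of the Wallis filter mentioned above rescales each pixel so that the local mean and local standard deviation approach chosen target values, which boosts contrast in flat thermal imagery before tie-point extraction. The window size, target statistics and gain cap in this sketch are assumptions, not the settings used in the study.

```python
# Hedged sketch of a basic Wallis filter for boosting local contrast of a
# low-contrast thermal image prior to feature matching (parameter values assumed).
import numpy as np
from scipy.ndimage import uniform_filter

def wallis_filter(img, win=21, target_mean=127.0, target_std=40.0, max_gain=4.0):
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, win)
    local_sqr  = uniform_filter(img * img, win)
    local_std  = np.sqrt(np.maximum(local_sqr - local_mean**2, 1e-6))
    gain = np.minimum(target_std / local_std, max_gain)   # cap the gain in flat areas
    out = (img - local_mean) * gain + target_mean
    return np.clip(out, 0, 255).astype(np.uint8)

# Example on a synthetic low-contrast thermal frame
frame = (120 + 5 * np.random.default_rng(0).standard_normal((240, 320))).clip(0, 255)
enhanced = wallis_filter(frame)
```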
Comparison of a single-view and a double-view aerosol optical depth retrieval algorithm
NASA Astrophysics Data System (ADS)
Henderson, Bradley G.; Chylek, Petr
2003-11-01
We compare the results of a single-view and a double-view aerosol optical depth (AOD) retrieval algorithm applied to image pairs acquired over NASA Stennis Space Center, Mississippi. The image data were acquired by the Department of Energy's (DOE) Multispectral Thermal Imager (MTI), a pushbroom satellite imager with 15 bands from the visible to the thermal infrared. MTI has the ability to acquire imagery in pairs in which the first image is a near-nadir view and the second image is off-nadir with a zenith angle of approximately 60°. A total of 15 image pairs were used in the analysis. For a given image pair, AOD retrieval is performed twice: once using a single-view algorithm applied to the near-nadir image, then again using a double-view algorithm. Errors for both retrievals are computed by comparing the results to AERONET AOD measurements obtained at the same time and place. The single-view algorithm showed an RMS error about the mean of 0.076 in AOD units, whereas the double-view algorithm showed a modest improvement with an RMS error of 0.06. The single-view errors show a positive bias which is presumed to be a result of the empirical relationship used to determine ground reflectance in the visible. A plot of AOD error of the double-view algorithm versus time shows a noticeable trend which is interpreted to be a calibration drift. When this trend is removed, the RMS error of the double-view algorithm drops to 0.030. The single-view algorithm qualitatively appears to perform better during the spring and summer whereas the double-view algorithm seems to be less sensitive to season.
Using a Divided Bar Apparatus to Measure Thermal Conductivity of Samples of Odd Sizes and Shapes
NASA Astrophysics Data System (ADS)
Crowell, J.; Gosnold, W. D.
2012-12-01
Standard procedure for measuring thermal conductivity with a divided bar apparatus requires a sample that has the same surface dimensions as the heat sink/source surface in the divided bar. Heat flow is assumed to be constant throughout the column, and the thermal conductivity (K) is determined by measuring temperatures (T) across the sample and across standard layers and using the basic relationship Ksample=(Kstandard*(ΔT1+ΔT2)/2)/(ΔTsample). Sometimes samples are not large enough, or of the correct proportions, to match the surface of the heat sink/source; however, using the equations presented here, the thermal conductivity of such samples can still be measured with a divided bar. Measurements were done on the UND Geothermal Laboratories stationary divided bar apparatus (SDB). This SDB has been designed to mimic many in-situ conditions, with a temperature range of -20 °C to 150 °C and a pressure range of 0 to 10,000 psi for samples with parallel surfaces and 0 to 3000 psi for samples with non-parallel surfaces. The heat sink/source surfaces are copper disks and have a surface area of 1772 mm² (2.74 in²). Layers of polycarbonate 6 mm thick with the same surface area as the copper disks are located in the heat sink and in the heat source as standards. For this study, all samples were prepared from a single piece of 4 inch limestone core. Thermal conductivities were measured for each sample as it was cut successively smaller. The above equation was adjusted to include the thicknesses (Th) of the samples and the standards and the surface areas (A) of the heat sink/source and of the sample: Ksample=(Kstandard*Astandard*Thsample*(ΔT1+ΔT3))/(ΔTsample*Asample*2*Thstandard). Measuring the thermal conductivity of samples of multiple sizes, shapes, and thicknesses gave consistent values for samples with surfaces as small as 50% of the heat sink/source surface, regardless of the shape of the sample. Measuring samples with surfaces smaller than 50% of the heat sink/source surface resulted in thermal conductivity values which were too high. The cause of the error with the smaller samples is being examined, as is the relationship between the amount of error in the thermal conductivity and the difference in surface areas. As more measurements are made, an equation to mathematically correct for the error is being developed, in case a way to physically correct the problem cannot be determined.
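The two relations quoted in the abstract can be evaluated directly; the sketch below does so with illustrative input values (polycarbonate standards and a limestone sample), not measurements from the study.

```python
# Direct evaluation of the divided-bar relations quoted above (example numbers
# are illustrative, not data from the study).
def k_sample_same_area(k_std, dT1, dT2, dT_sample):
    """Standard case: the sample face matches the heat sink/source face."""
    return k_std * (dT1 + dT2) / 2.0 / dT_sample

def k_sample_small_area(k_std, a_std, a_sample, th_std, th_sample, dT1, dT3, dT_sample):
    """Adjusted relation for samples smaller than the heat sink/source face."""
    return (k_std * a_std * th_sample * (dT1 + dT3)) / (dT_sample * a_sample * 2.0 * th_std)

# Illustrative values: polycarbonate standards (k ~ 0.2 W/m/K), 6 mm thick,
# 1772 mm^2 faces; a 900 mm^2, 10 mm thick limestone sample.
print(k_sample_small_area(0.2, 1772e-6, 900e-6, 6e-3, 10e-3,
                          dT1=4.0, dT3=4.2, dT_sample=1.1))
```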
Commentary: Reducing diagnostic errors: another role for checklists?
Winters, Bradford D; Aswani, Monica S; Pronovost, Peter J
2011-03-01
Diagnostic errors are a widespread problem, although the true magnitude is unknown because they cannot currently be measured validly. These errors have received relatively little attention despite alarming estimates of associated harm and death. One promising intervention to reduce preventable harm is the checklist. This intervention has proven successful in aviation, in which situations are linear and deterministic (one alarm goes off and a checklist guides the flight crew to evaluate the cause). In health care, problems are multifactorial and complex. A checklist has been used to reduce central-line-associated bloodstream infections in intensive care units. Nevertheless, this checklist was incorporated in a culture-based safety program that engaged and changed behaviors and used robust measurement of infections to evaluate progress. In this issue, Ely and colleagues describe how three checklists could reduce the cognitive biases and mental shortcuts that underlie diagnostic errors, but point out that these tools still need to be tested. To be effective, they must reduce diagnostic errors (efficacy) and be routinely used in practice (effectiveness). Such tools must intuitively support how the human brain works, and under time pressures, clinicians rarely think in conditional probabilities when making decisions. To move forward, it is necessary to accurately measure diagnostic errors (which could come from mapping out the diagnostic process as the medication process has done and measuring errors at each step) and pilot test interventions such as these checklists to determine whether they work.
Denoising DNA deep sequencing data—high-throughput sequencing errors and their correction
Laehnemann, David; Borkhardt, Arndt
2016-01-01
Characterizing the errors generated by common high-throughput sequencing platforms and telling true genetic variation from technical artefacts are two interdependent steps, essential to many analyses such as single nucleotide variant calling, haplotype inference, sequence assembly and evolutionary studies. Both random and systematic errors can show a specific occurrence profile for each of the six prominent sequencing platforms surveyed here: 454 pyrosequencing, Complete Genomics DNA nanoball sequencing, Illumina sequencing by synthesis, Ion Torrent semiconductor sequencing, Pacific Biosciences single-molecule real-time sequencing and Oxford Nanopore sequencing. There is a large variety of programs available for error removal in sequencing read data, which differ in the error models and statistical techniques they use, the features of the data they analyse, the parameters they determine from them and the data structures and algorithms they use. We highlight the assumptions they make and for which data types these hold, providing guidance which tools to consider for benchmarking with regard to the data properties. While no benchmarking results are included here, such specific benchmarks would greatly inform tool choices and future software development. The development of stand-alone error correctors, as well as single nucleotide variant and haplotype callers, could also benefit from using more of the knowledge about error profiles and from (re)combining ideas from the existing approaches presented here. PMID:26026159
Bennett, Gloria A.; Elder, Michael G.; Kemme, Joseph E.
1985-01-01
An apparatus which thermally protects sensitive components in tools used in a geothermal borehole. The apparatus comprises a Dewar within a housing. The Dewar contains heat pipes such as brass heat pipes for thermally conducting heat from heat sensitive components to a heat sink such as ice.
Development of a Rolling Process Design Tool for Use in Improving Hot Roll Slab Recovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couch, R; Wang, P
2003-05-06
In this quarter, our primary effort has been focused on model verification, emphasizing consistency of results between parallel and serial simulation runs. Progress has been made in refining the parallel thermal algorithms and in diminishing discretization effects in the contact region between the rollers and the slab. We have received the metrology data of the ingot profile at the end of the fifth pass from Alcoa. Detailed comparisons between the data and the initial simulation result are being performed. Forthcoming from Alcoa are modifications to the fracture model based on additional experiments at lower strain rates. The original fracture model was implemented in the finite element code, but damage in the rolling simulation was not correct due to modeling errors at lower strain rates and high stress triaxiality. Validation simulations for the fracture model will continue when the experimentally-based adjustments to the parameter values become available.
Gran Telescopio Canarias Commissioning Instrument Optomechanics
NASA Astrophysics Data System (ADS)
Espejo, Carlos; Cuevas, Salvador; Sanchez, Beatriz; Flores, Ruben; Lara, Gerardo; Farah, Alejandro; Godoy, Javier; Bringas, Vicente; Chavoya, Armando; Dorantes, Ariel; Manuel Montoya, Juan; Rangel, Juan Carlos; Devaney, Nicholas; Castro, Javier; Cavaller, Luis
2003-02-01
Under a contract with the GRANTECAN, the Commissioning Instrument is a project developed by a team of Mexican scientists and engineers from the Instrumentation Department of the Astronomy Institute at the UNAM and the CIDESI Engineering Center. This paper will discuss in some detail the final Commissioning Instrument (CI) mechanical design and fabrication. We will also explain the error budget and the barrels' design as well as their thermal compensation. The optical design and the control system are discussed in other papers. The CI will just act as a diagnostic tool for image quality verification during the GTC Commissioning Phase. This phase is a quality control process for achieving, verifying, and documenting the performance of each GTC sub-system. This is a very important step in the telescope's life. It will begin on the starting day and will last for a year. The CI project started in December 2000. The critical design phase was reviewed in July 2001. The CI manufacturing is currently in progress and most parts are finished. We are now approaching the factory acceptance stage.
NASA Astrophysics Data System (ADS)
Braiek, A.; Adili, A.; Albouchi, F.; Karkri, M.; Ben Nasrallah, S.
2016-06-01
The aim of this work is to simultaneously identify the conductive and radiative parameters of a semitransparent sample using a photothermal method associated with an inverse problem. The identification of the conductive and radiative properties is performed by minimizing an objective function that represents the error between the calculated temperature and the measured signal. The calculated temperature is obtained from a theoretical model built with the thermal quadrupole formalism. The measurement is obtained on the rear face of the sample, whose front face is excited by a crenel (rectangular pulse) of heat flux. For the identification procedure, a genetic algorithm is developed and used. The genetic algorithm is a useful tool for the simultaneous estimation of correlated or nearly correlated parameters, which can be a limiting factor for gradient-based methods. The results of the identification procedure show the efficiency and the stability of the genetic algorithm in simultaneously estimating the conductive and radiative properties of clear glass.
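The estimation loop described above, a genetic algorithm driving a forward model until the computed rear-face temperature matches the measured signal, can be sketched as follows. The two-parameter toy model, population size and operator settings are illustrative stand-ins for the quadrupole model and tuning of the paper.

```python
# Minimal real-coded genetic algorithm minimising the misfit between a measured
# rear-face signal and a toy two-parameter model (illustrative only).
import numpy as np

rng = np.random.default_rng(3)

def model(params, t):
    k_cond, eps_rad = params                      # 'conductive' and 'radiative' knobs
    return (1.0 - np.exp(-k_cond * t)) * (1.0 - 0.3 * eps_rad * np.exp(-t))

t = np.linspace(0.0, 5.0, 100)
measured = model((1.5, 0.6), t) + rng.normal(0.0, 0.01, t.size)

def fitness(params):
    return -np.sum((model(params, t) - measured) ** 2)   # higher is better

bounds = np.array([[0.1, 5.0], [0.0, 1.0]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))

for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]           # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(0, 20, 2)]
        child = 0.5 * (a + b)                              # arithmetic crossover
        child += rng.normal(0.0, 0.05, 2)                  # Gaussian mutation
        children.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("estimated parameters:", best)
```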
3D shape measurements with a single interferometric sensor for in-situ lathe monitoring
NASA Astrophysics Data System (ADS)
Kuschmierz, R.; Huang, Y.; Czarske, J.; Metschke, S.; Löffler, F.; Fischer, A.
2015-05-01
Temperature drifts, tool deterioration, unknown vibrations as well as spindle play are major effects which decrease the achievable precision of computerized numerically controlled (CNC) lathes and lead to shape deviations between the processed work pieces. Since currently no measurement system exists for fast, precise and in-situ 3D shape monitoring with keyhole access, much effort has to be made to simulate and compensate these effects. Therefore, we introduce an optical interferometric sensor for absolute 3D shape measurements, which was integrated into a working lathe. According to the spindle rotational speed, a measurement rate of 2,500 Hz was achieved. In-situ absolute shape, surface profile and vibration measurements are presented. While thermal drifts of the sensor led to errors of several µm for the absolute shape, reference measurements with a coordinate machine show that the surface profile could be measured with an uncertainty below one micron. Additionally, the spindle play of 0.8 µm was measured with the sensor.
A hybrid model for river water temperature as a function of air temperature and discharge
NASA Astrophysics Data System (ADS)
Toffolon, Marco; Piccolroaz, Sebastiano
2015-11-01
Water temperature controls many biochemical and ecological processes in rivers, and theoretically depends on multiple factors. Here we formulate a model to predict daily averaged river water temperature as a function of air temperature and discharge, with the latter variable being more relevant in some specific cases (e.g., snowmelt-fed rivers, rivers impacted by hydropower production). The model uses a hybrid formulation characterized by a physically based structure associated with a stochastic calibration of the parameters. The interpretation of the parameter values allows for better understanding of river thermal dynamics and the identification of the most relevant factors affecting it. The satisfactory agreement of different versions of the model with measurements in three different rivers (root mean square error smaller than 1 °C, at a daily timescale) suggests that the proposed model can represent a useful tool to synthetically describe medium- and long-term behavior, and capture the changes induced by varying external conditions.
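The hybrid idea, a fixed model structure whose parameters are calibrated stochastically against observations, can be illustrated with a deliberately simplified relation between water temperature, air temperature and discharge. The structure, parameter ranges and random-search calibration below are not the authors' formulation; they only show the calibrate-by-minimising-RMSE workflow.

```python
# Hedged sketch: a simplified water-temperature relation driven by air temperature
# and discharge, calibrated by minimising the RMSE with a crude random search.
import numpy as np

rng = np.random.default_rng(4)
days = np.arange(365)
air_t = 10.0 + 8.0 * np.sin(2.0 * np.pi * (days - 100) / 365.0)
discharge = 20.0 + 15.0 * np.exp(-((days - 140) / 40.0) ** 2)       # snowmelt peak
water_t_obs = 0.6 * air_t + 4.0 - 0.05 * (discharge - 20.0) + rng.normal(0.0, 0.4, 365)

def simulate(params):
    a, b, c = params
    return a * air_t + b - c * (discharge - discharge.mean())

def rmse(params):
    return np.sqrt(np.mean((simulate(params) - water_t_obs) ** 2))

best, best_err = None, np.inf
for _ in range(20000):                        # stochastic calibration (random search)
    trial = rng.uniform([0.0, -5.0, -0.5], [1.5, 10.0, 0.5])
    err = rmse(trial)
    if err < best_err:
        best, best_err = trial, err
print("calibrated parameters:", best, "RMSE:", round(best_err, 2), "degC")
```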
Spectra as windows into exoplanet atmospheres
Burrows, Adam S.
2014-01-01
Understanding a planet’s atmosphere is a necessary condition for understanding not only the planet itself, but also its formation, structure, evolution, and habitability. This requirement puts a premium on obtaining spectra and developing credible interpretative tools with which to retrieve vital planetary information. However, for exoplanets, these twin goals are far from being realized. In this paper, I provide a personal perspective on exoplanet theory and remote sensing via photometry and low-resolution spectroscopy. Although not a review in any sense, this paper highlights the limitations in our knowledge of compositions, thermal profiles, and the effects of stellar irradiation, focusing on, but not restricted to, transiting giant planets. I suggest that the true function of the recent past of exoplanet atmospheric research has been not to constrain planet properties for all time, but to train a new generation of scientists who, by rapid trial and error, are fast establishing a solid future foundation for a robust science of exoplanets. PMID:24613929
Al-ajmi, F F; Loveday, D L; Bedwell, K H; Havenith, G
2008-05-01
The thermal insulation of clothing is one of the most important parameters used in the thermal comfort model adopted by the International Standards Organisation (ISO) [BS EN ISO 7730, 2005. Ergonomics of the thermal environment. Analytical determination and interpretation of thermal comfort using calculation of the PMV and PPD indices and local thermal comfort criteria. International Standardisation Organisation, Geneva.] and by ASHRAE [ASHRAE Handbook, 2005. Fundamentals. Chapter 8. American Society of Heating Refrigeration and Air-conditioning Engineers, Inc., 1791 Tullie Circle N.E., Atlanta, GA.]. To date, thermal insulation values of mainly Western clothing have been published with only minimal data being available for non-Western clothing. Thus, the objective of the present study is to measure and present the thermal insulation (clo) values of a number of Arabian Gulf garments as worn by males and females. The clothing ensembles and garments of Arabian Gulf males and females presented in this study are representative of those typically worn in the region during both summer and winter seasons. Measurements of total thermal insulation values (clo) were obtained using a male and a female shape thermal manikin in accordance with the definition of insulation as given in ISO 9920. In addition, the clothing area factors (f_cl) determined in two different ways were compared. The first method used a photographic technique and the second a regression equation as proposed in ISO 9920, based on the insulation values of Arabian Gulf male and female garments and ensembles as they were determined in this study. In addition, fibre content, descriptions and weights of Arabian Gulf clothing have been recorded and tabulated in this study. The findings of this study are presented as additions to the existing knowledge base of clothing insulation, and provide for the first time data for Arabian Gulf clothing. The analysis showed that for these non-Western clothing designs, the most widely used regression calculation of f_cl is not valid. However, despite the very large errors in f_cl made with the regression method, the errors this causes in the intrinsic clothing insulation value, I_cl, are limited.
Coupled Mechanical-Electrochemical-Thermal Modeling for Accelerated Design of EV Batteries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santhanagopalan, Shriram; Zhang, Chao; Kim, Gi-Heon
2015-05-03
This presentation provides an overview of the mechanical-electrochemical-thermal (M-ECT) modeling efforts. The physical phenomena occurring in a battery are many and complex and operate at different scales (particle, electrodes, cell, and pack). A better understanding of the interplay between the different physics occurring at different scales through modeling could provide insight to design improved batteries for electric vehicles. Work funded by the U.S. DOE has resulted in the development of computer-aided engineering (CAE) tools to accelerate the electrochemical and thermal design of batteries; mechanical modeling is under way. Three competitive CAE tools are now commercially available.
High Reliability Organizations--Medication Safety.
Yip, Luke; Farmer, Brenna
2015-06-01
High reliability organizations (HROs), such as the aviation industry, successfully engage in high-risk endeavors and have low incidence of adverse events. HROs have a preoccupation with failure and errors. They analyze each event to effect system wide change in an attempt to mitigate the occurrence of similar errors. The healthcare industry can adapt HRO practices, specifically with regard to teamwork and communication. Crew resource management concepts can be adapted to healthcare with the use of certain tools such as checklists and the sterile cockpit to reduce medication errors. HROs also use The Swiss Cheese Model to evaluate risk and look for vulnerabilities in multiple protective barriers, instead of focusing on one failure. This model can be used in medication safety to evaluate medication management in addition to using the teamwork and communication tools of HROs.
NASA Astrophysics Data System (ADS)
Bhattad, Srikanth; Escoto, Abelardo; Malthaner, Richard; Patel, Rajni
2015-03-01
Brachytherapy and thermal ablation are relatively new approaches in robot-assisted minimally invasive interventions for treating malignant tumors. Ultrasound remains the most favored choice for imaging feedback, the benefits being cost effectiveness, freedom from radiation, and easy access in an OR. However, it does not generally provide high-contrast, noise-free images. Distortion occurs when the sound waves pass through a medium that contains air and/or when the target organ is deep within the body. The distorted images make it quite difficult to recognize and localize tumors and surgical tools. Often a tool, such as a bevel-tipped needle, deflects from its path during insertion, making it difficult to detect the needle tip using a single perspective view. The shifting of the target due to cardiac and/or respiratory motion can add further errors in reaching the target. This paper describes a comprehensive system that uses robot dexterity to capture 2D ultrasound images in various pre-determined modes for generating 3D ultrasound images and assists in maneuvering a surgical tool. An interactive 3D virtual reality environment is developed that visualizes the various artifacts present in the surgical site in real time. The system helps to avoid image distortion by grabbing images from multiple positions and orientations to provide a 3D view. Using the methods developed for this application, an accuracy of 1.3 mm was achieved in target attainment in an in-vivo experiment subject to tissue motion. Accuracies of 1.36 mm and 0.93 mm, respectively, were achieved for the ex-vivo experiments with and without externally induced motion. An ablation monitor widget that visualizes the changes during the complete ablation process and enables evaluation of the process in its entirety is integrated.
Spacecraft Thermal and Optical Modeling Impacts on Estimation of the GRAIL Lunar Gravity Field
NASA Technical Reports Server (NTRS)
Fahnestock, Eugene G.; Park, Ryan S.; Yuan, Dah-Ning; Konopliv, Alex S.
2012-01-01
We summarize work performed involving thermo-optical modeling of the two Gravity Recovery And Interior Laboratory (GRAIL) spacecraft. We derived several reconciled spacecraft thermo-optical models of varying detail. We used the simplest in calculating the SRP acceleration, and used the most detailed to calculate the acceleration due to thermal re-radiation. For the latter, we used both the output of pre-launch finite-element-based thermal simulations and downlinked temperature sensor telemetry. The estimation process to recover the lunar gravity field utilizes both a nominal thermal re-radiation acceleration history and an a priori error model derived from that plus an off-nominal history, which bounds parameter uncertainties as informed by sensitivity studies.
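As an order-of-magnitude illustration of the thermal re-radiation acceleration being modeled, a flat Lambertian plate radiating from two faces at different temperatures produces a net recoil acceleration proportional to the flux imbalance. The numbers below are illustrative, not GRAIL panel properties, and the detailed finite-element thermo-optical model of the paper replaces this single-plate estimate.

```python
# Simplified flat-plate estimate of thermal re-radiation (recoil) acceleration from
# front/back panel temperatures; treat this only as an order-of-magnitude sketch.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant [W m^-2 K^-4]
C_LIGHT = 2.998e8     # speed of light [m/s]

def panel_recoil_accel(area_m2, mass_kg, t_front_k, t_back_k,
                       eps_front=0.85, eps_back=0.85):
    """Net recoil acceleration of a Lambertian flat plate, directed toward the cooler face."""
    net_flux = SIGMA * (eps_front * t_front_k**4 - eps_back * t_back_k**4)   # W/m^2
    return (2.0 / 3.0) * net_flux * area_m2 / (mass_kg * C_LIGHT)            # m/s^2

# Illustrative numbers (not GRAIL values): a 2 m^2 panel on a 200 kg spacecraft
print(panel_recoil_accel(2.0, 200.0, t_front_k=330.0, t_back_k=290.0))
```

With these assumed values the result is a few times 10⁻⁹ m/s², consistent with the magnitude quoted for thermally induced accelerations elsewhere in this collection.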
The thermal expansion of hard magnetic materials of the Nd-Fe-B system
NASA Astrophysics Data System (ADS)
Savchenko, Igor; Kozlovskii, Yurii; Samoshkin, Dmitriy; Yatsuk, Oleg
2017-10-01
The results of dilatometric measurements of the thermal expansion of the hard magnetic materials of grades N35M, N35H and N35SH, containing the crystalline phase of Nd2Fe14B type as the main component, are presented. The temperature range from 200 to 750 K has been investigated by the method of dilatometry with an error of (1.5 to 2)×10⁻⁷ K⁻¹. Approximating dependences of the linear thermal expansion coefficient have been obtained. The character of the changes of the thermal coefficient of linear expansion in the region of the Curie point has been specified, and its critical indices and critical amplitudes have been determined.
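The linear thermal expansion coefficient reported by a dilatometric run is essentially the temperature derivative of the relative length change; a hedged sketch of that reduction step on synthetic data is shown below (real measurements would also require baseline and instrument corrections).

```python
# Hedged sketch: estimate the linear thermal expansion coefficient from a
# dilatometric length-temperature record by numerical differentiation.
import numpy as np

temperature = np.linspace(200.0, 750.0, 111)                  # K
l0 = 10.0e-3                                                   # reference length [m]
length = l0 * (1.0 + 6.0e-6 * (temperature - 293.0)
               + 2.0e-9 * (temperature - 293.0) ** 2)          # synthetic dilation

alpha = np.gradient(length, temperature) / l0                  # 1/K
print("CTE at 300 K and 700 K:",
      alpha[np.argmin(abs(temperature - 300))],
      alpha[np.argmin(abs(temperature - 700))])
```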
SIMulation of Medication Error induced by Clinical Trial drug labeling: the SIMME-CT study.
Dollinger, Cecile; Schwiertz, Vérane; Sarfati, Laura; Gourc-Berthod, Chloé; Guédat, Marie-Gabrielle; Alloux, Céline; Vantard, Nicolas; Gauthier, Noémie; He, Sophie; Kiouris, Elena; Caffin, Anne-Gaelle; Bernard, Delphine; Ranchon, Florence; Rioufol, Catherine
2016-06-01
To assess the impact of investigational drug labels on the risk of medication error in drug dispensing, a simulation-based learning program focusing on investigational drug dispensing was conducted. The study was undertaken in an Investigational Drugs Dispensing Unit of a University Hospital of Lyon, France. Sixty-three pharmacy workers (pharmacists, residents, technicians or students) were enrolled. Ten risk factors were selected concerning label information or the risk of confusion with another clinical trial. Each risk factor was scored independently out of 5: the higher the score, the greater the risk of error. From the 400 labels analyzed, two groups were selected for the dispensing simulation: 27 labels with high risk (score ≥3) and 27 with low risk (score ≤2). Each question in the learning program was displayed as a simulated clinical trial prescription. Medication error was defined as at least one erroneous answer (i.e. error in drug dispensing). For each question, response times were collected. High-risk investigational drug labels correlated with medication error and slower response times. Error rates were significantly higher (5.5-fold) for the high-risk series. Error frequency was not significantly affected by occupational category or experience in clinical trials. SIMME-CT is the first simulation-based learning tool to focus on investigational drug labels as a risk factor for medication error. SIMME-CT was also used as a training tool for staff involved in clinical research, to develop medication error risk awareness and to validate competence in continuing medical education. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
GenomePeek—an online tool for prokaryotic genome and metagenome analysis
McNair, Katelyn; Edwards, Robert A.
2015-06-16
As increases in prokaryotic sequencing take place, a method to quickly and accurately analyze this data is needed. Previous tools are mainly designed for metagenomic analysis and have limitations, such as long runtimes and significant false positive error rates. The online tool GenomePeek (edwards.sdsu.edu/GenomePeek) was developed to analyze both single genome and metagenome sequencing files, quickly and with low error rates. GenomePeek uses a sequence assembly approach where reads to a set of conserved genes are extracted, assembled and then aligned against the highly specific reference database. GenomePeek was found to be faster than traditional approaches while still keeping error rates low, as well as offering unique data visualization options.
Thermal imbalance force modelling for a GPS satellite using the finite element method
NASA Technical Reports Server (NTRS)
Vigue, Yvonne; Schutz, Bob E.
1991-01-01
Methods of analyzing the perturbation due to thermal radiation and determining its effects on the orbits of GPS satellites are presented, with emphasis on the FEM technique to calculate satellite solar panel temperatures, which are used to determine the magnitude and direction of the thermal imbalance force. Although this force may not be responsible for all of the force mismodeling, conditions may work in combination with the thermal imbalance force to produce accelerations on the order of 1×10⁻⁹ m/s². If submeter-accurate orbits and centimeter-level accuracy for geophysical applications are desired, a time-dependent model of the thermal imbalance force should be used, especially when satellites are eclipsing, where the observed errors are larger than for satellites in noneclipsing orbits.
Fuel thermal conductivity (FTHCON). Status report. [PWR; BWR]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagrman, D. L.
1979-02-01
An improvement of the fuel thermal conductivity subcode is described, which is part of the fuel rod behavior modeling task performed at EG and G Idaho, Inc. The original version was published in the Materials Properties (MATPRO) Handbook, Section A-2 (Fuel Thermal Conductivity). The improved version incorporates data which were not included in the previous work and omits some previously used data which are believed to come from cracked specimens. The models for the effect of porosity on thermal conductivity and for the electronic contribution to thermal conductivity have been completely revised in order to place these models on a more mechanistic basis. As a result of the modeling improvements, the standard error of the model with respect to its data base has been significantly reduced.
Medical simulation: a tool for recognition of and response to risk.
Ruddy, Richard M; Patterson, Mary Deffner
2008-11-01
The use of simulation and team training has become an excellent tool for reducing errors in high-risk industries such as commercial airlines and the nuclear energy field. The health care industry has begun to use similar tools to improve the outcome of high-risk areas where events are relatively rare but where practice with a tactical team can significantly reduce the chance of a bad outcome. There are two parts to this review: first, we review the rationale of why simulation is a key element in improving our error rate, and second, we describe specific tools that have great use at the clinical bedside for improving the care of patients. These cross different (i.e. medical and surgical) specialties and practices within specialties in the health care setting. Tools described will include the pinch, brief/debriefing, read-backs, call-outs, dynamic skepticism, assertive statements, two-challenge rules, checklists and step back (hold points). Examples will assist the clinician in practical daily use to improve their bedside care of children.
NASA Technical Reports Server (NTRS)
Pronchick, Stephen W.
1998-01-01
Materials that pyrolyze at elevated temperature have been commonly used as thermal protection materials in hypersonic flight, and advanced pyrolyzing materials for this purpose continue to be developed. Because of the large temperature gradients that can arise in thermal protection materials, significant thermal stresses can develop. Advanced applications of pyrolytic materials are calling for more complex heatshield configurations, making accurate thermal stress analysis more important, and more challenging. For non-pyrolyzing materials, many finite element codes are available and capable of performing coupled thermal-mechanical analyses. These codes do not, however, have a built-in capability to perform analyses that include pyrolysis effects. When a pyrolyzing material is heated, one or more components of the original virgin material pyrolyze and create a gas. This gas flows away from the pyrolysis zone to the surface, resulting in a reduction in surface heating. A porous residue, referred to as char, remains in place of the virgin material. While the processes involved can be complex, it has been found that a simple physical model, in which virgin material reacts to form char and pyrolysis gas, will yield satisfactory analytical results. Specifically, the effects that must be modeled include: (1) Variation of thermal properties (density, specific heat, thermal conductivity) as the material composition changes; (2) Energy released or absorbed by the pyrolysis reactions; (3) Energy convected by the flow of pyrolysis gas from the interior to the surface; (4) The reduction in surface heating due to surface blowing; and (5) Chemical and mass diffusion effects at the surface between the pyrolysis gas and edge gas. Computational tools for the one-dimensional thermal analysis of these materials exist and have proven to be reliable design tools. The objective of the present work is to extend the analysis capabilities of pyrolyzing materials to axisymmetric configurations, and to couple thermal and mechanical analyses so that thermal stresses may be efficiently and accurately calculated.
NASA Astrophysics Data System (ADS)
Bunai, Tasya; Rokhmatuloh; Wibowo, Adi
2018-05-01
In this paper, two methods to retrieve the Land Surface Temperature (LST) from thermal infrared data supplied by bands 10 and 11 of the Thermal Infrared Sensor (TIRS) onboard Landsat 8 are compared. The first is the mono window algorithm developed by Qin et al. and the second is the split window algorithm by Rozenstein et al. The purpose of this study is to map the spatial distribution of land surface temperature, as well as to determine the more accurate algorithm for retrieving land surface temperature by calculating the root mean square error (RMSE). Finally, we compare the spatial distribution of land surface temperature obtained by both algorithms; the more accurate algorithm is the split window algorithm, with an RMSE of 7.69 °C.
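A minimal sketch of the comparison step follows: two LST estimates are scored against reference temperatures by RMSE. The split-window form used here is the generic LST ≈ T10 + c1·(T10 − T11) + c0; the coefficients and data are placeholders, not the Rozenstein et al. values.

```python
import numpy as np

def rmse(estimate, reference):
    """Root mean square error between an estimate and reference values."""
    e = np.asarray(estimate) - np.asarray(reference)
    return float(np.sqrt(np.mean(e ** 2)))

t10 = np.array([300.2, 298.7, 305.1])       # band-10 brightness temperature, K
t11 = np.array([299.0, 297.9, 303.6])       # band-11 brightness temperature, K
reference = np.array([301.0, 299.5, 306.0]) # reference LST, K

lst_split = t10 + 1.5 * (t10 - t11) + 0.5   # placeholder split-window coefficients
print("split-window RMSE:", rmse(lst_split, reference))
```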
Automatic Generation Control Study in Two Area Reheat Thermal Power System
NASA Astrophysics Data System (ADS)
Pritam, Anita; Sahu, Sibakanta; Rout, Sushil Dev; Ganthia, Sibani; Prasad Ganthia, Bibhu
2017-08-01
Industrial pollution degrades our living environment. An electric grid has many vital pieces of equipment, such as generators, motors, transformers and loads. There is always an imbalance between the sending-end and receiving-end systems, which makes the system unstable, so such errors and faults should be corrected as soon as possible before they reduce the efficiency of the whole power system. The main problem arising from such faults is frequency deviation, which causes instability in the power system and may cause permanent damage to the system. The mechanism studied in this paper therefore makes the system stable and balanced by regulating the frequency at both the sending and receiving ends using automatic generation control with various controllers, taking a two-area reheat thermal power system into account.
NASA Astrophysics Data System (ADS)
Meng, Chao; Zhou, Hong; Zhou, Ying; Gao, Ming; Tong, Xin; Cong, Dalong; Wang, Chuanwei; Chang, Fang; Ren, Luquan
2014-04-01
Three kinds of biomimetic non-smooth shapes (spot-shape, striation-shape and reticulation-shape) were fabricated on the surface of H13 hot-work tool steel by laser. We investigated the thermal fatigue behavior of biomimetic non-smooth samples with the three kinds of shapes at different thermal cycle temperatures. Moreover, the evolution of microstructure, as well as the variations of hardness of the laser-affected area and the matrix, were studied and compared. The results showed that biomimetic non-smooth samples had better thermal fatigue behavior than the untreated samples at different thermal cycle temperatures. For a given maximal temperature, the biomimetic non-smooth sample with the reticulation-shape had the best thermal fatigue behavior, followed by the striation-shape, which was better than the spot-shape. The microstructure observations indicated that at different thermal cycle temperatures the coarsening degrees of the microstructures of the laser-affected area were different, and the microstructures of the laser-affected area were still finer than those of the untreated samples. Although the resistance to thermal cycling softening of the laser-affected area was lower than that of the untreated sample, the laser-affected area had higher microhardness than the untreated sample at different thermal cycle temperatures.
NASA Astrophysics Data System (ADS)
Hu, Guojie; Wu, Xiaodong; Zhao, Lin; Li, Ren; Wu, Tonghua; Xie, Changwei; Pang, Qiangqiang; Cheng, Guodong
2017-08-01
Soil temperature plays a key role in hydro-thermal processes in environments and is a critical variable linking surface structure to soil processes. There is a need for more accurate temperature simulation models, particularly in Qinghai-Xizang (Tibet) Plateau (QXP). In this study, a model was developed for the simulation of hourly soil surface temperatures with air temperatures. The model incorporated the thermal properties of the soil, vegetation cover, solar radiation, and water flux density and utilized field data collected from Qinghai-Xizang (Tibet) Plateau (QXP). The model was used to simulate the thermal regime at soil depths of 5 cm, 10 cm and 20 cm and results were compared with those from previous models and with experimental measurements of ground temperature at two different locations. The analysis showed that the newly developed model provided better estimates of observed field temperatures, with an average mean absolute error (MAE), root mean square error (RMSE), and the normalized standard error (NSEE) of 1.17 °C, 1.30 °C and 13.84 %, 0.41 °C, 0.49 °C and 5.45 %, 0.13 °C, 0.18 °C and 2.23 % at 5 cm, 10 cm and 20 cm depths, respectively. These findings provide a useful reference for simulating soil temperature and may be incorporated into other ecosystem models requiring soil temperature as an input variable for modeling permafrost changes under global warming.
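For reference, a short sketch of the three accuracy metrics reported (MAE, RMSE, NSEE) follows; the NSEE definition used here (RMSE normalized by the mean observation, in percent) is an assumption, and the data are synthetic.

```python
import numpy as np

# Sketch of the reported accuracy metrics; NSEE definition is assumed.
def mae(sim, obs):
    return float(np.mean(np.abs(sim - obs)))

def rmse(sim, obs):
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def nsee(sim, obs):
    return 100.0 * rmse(sim, obs) / float(np.mean(obs))

obs = np.array([3.2, 4.1, 5.0, 4.4])   # observed soil temperature, deg C
sim = np.array([3.0, 4.3, 5.2, 4.1])   # simulated soil temperature, deg C
print(mae(sim, obs), rmse(sim, obs), nsee(sim, obs))
```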
Thermal Insulation System Analysis Tool (TISTool) User's Manual. Version 1.0.0
NASA Technical Reports Server (NTRS)
Johnson, Wesley; Fesmire, James; Leucht, Kurt; Demko, Jonathan
2010-01-01
The Thermal Insulation System Analysis Tool (TISTool) was developed starting in 2004 by Jonathan Demko and James Fesmire. The first edition was written in Excel and Visual Basic as macros. It included the basic shapes such as a flat plate, cylinder, dished head, and sphere. The data were from several KSC tests already in the public literature, as well as from NIST and other highly respected sources. More recently, the tool has been updated with more test data from the Cryogenics Test Laboratory, and the tank shape was added. Additionally, the tool was converted to FORTRAN 95 to allow for easier distribution of the material and tool. This document reviews the user instructions for the operation of this system.
NASA Astrophysics Data System (ADS)
Valdes, Raymond
The characterization of thermal barrier coating (TBC) systems is increasingly important because they enable gas turbine engines to operate at high temperatures and efficiency. Phase of photothermal emission analysis (PopTea) has been developed to analyze the thermal behavior of the ceramic top-coat of TBCs, as a nondestructive and noncontact method for measuring thermal diffusivity and thermal conductivity. Most TBC applications are on actively-cooled high temperature turbine blades, which makes it difficult to precisely model heat transfer in the metallic subsystem. This reduces the ability of rote thermal modeling to reflect the actual physical conditions of the system and can lead to higher uncertainty in measured thermal properties. This dissertation investigates fundamental issues underpinning robust thermal property measurements that are adaptive to non-specific, complex, and evolving system characteristics using the PopTea method. A generic and adaptive subsystem PopTea thermal model was developed to account for complex geometry beyond a well-defined coating and substrate system. Without a priori knowledge of the subsystem characteristics, two different measurement techniques were implemented using the subsystem model. In the first technique, the properties of the subsystem were resolved as part of the PopTea parameter estimation algorithm; and, the second technique independently resolved the subsystem properties using a differential "bare" subsystem. The confidence in thermal properties measured using the generic subsystem model is similar to that from a standard PopTea measurement on a "well-defined" TBC system. Non-systematic bias-error on experimental observations in PopTea measurements due to generic thermal model discrepancies was also mitigated using a regression-based sensitivity analysis. The sensitivity analysis reported measurement uncertainty and was developed into a data reduction method to filter out these "erroneous" observations. It was found that the adverse impact of bias-error can be greatly reduced, leaving measurement observations with only random Gaussian noise in PopTea thermal property measurements. Quantifying the influence of the coating-substrate interface in PopTea measurements is important to resolving the thermal conductivity of the coating. However, the reduced significance of this interface in thicker coating systems can give rise to large uncertainties in thermal conductivity measurements. A first step towards improving PopTea measurements for such circumstances has been taken by implementing absolute temperature measurements using harmonically-sustained two-color pyrometry. Although promising, even small uncertainties in thermal emission observations were found to lead to significant noise in temperature measurements. However, PopTea analysis of bulk graphite samples was able to resolve their thermal conductivity to the expected literature values.
Jiao, Leizi; Dong, Daming; Zhao, Xiande; Han, Pengcheng
2016-12-01
In this study, we proposed an animal surface temperature measurement method based on a Kinect sensor and an infrared thermal imager to facilitate the screening of animals with febrile diseases. Due to the random motion and small surface temperature variation of animals, the influence of the angle of view on temperature measurement is significant. The method proposed in the present study can compensate for the temperature measurement error caused by the angle of view. Firstly, we analyzed the relationship between measured temperature and angle of view and established a mathematical model for compensating for the influence of the angle of view, with a correlation coefficient above 0.99. Secondly, a fusion method of depth and infrared thermal images was established for synchronous image capture with the Kinect sensor and infrared thermal imager, and the angle of view of each pixel was calculated. According to the experimental results, without compensation, the temperature image measured at an angle of view of 74° to 76° showed a difference of more than 2°C compared with that measured at an angle of view of 0°. However, after compensation, the temperature difference range was only 0.03-1.2°C. This method is applicable for real-time compensation of errors caused by the angle of view during temperature measurement with an infrared thermal imager. Copyright © 2016 Elsevier Ltd. All rights reserved.
Continued Development of a Precision Cryogenic Dilatometer for the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Karlmann, Paul B.; Dudik, Matthew J.; Halverson, Peter G.; Levine, Marie; Marcin, Martin; Peters, Robert D.; Shaklan, Stuart; VanBuren, David
2004-01-01
As part of the James Webb Space Telescope (JWST) materials working group, a novel cryogenic dilatometer was designed and built at NASA Jet Propulsion Laboratory to help address stringent coefficient of thermal expansion (CTE) knowledge requirements. Previously reported results and error analysis have estimated a CTE measurement accuracy for ULE of 1.7 ppb/K with a 20K thermal load and 0.1 ppb/K with a 280K thermal load. Presented here is a further discussion of the cryogenic dilatometer system and a description of recent work including system modifications and investigations.
Use of failure mode effect analysis (FMEA) to improve medication management process.
Jain, Khushboo
2017-03-13
Purpose Medication management is a complex process, at high risk of error with life threatening consequences. The focus should be on devising strategies to avoid errors and make the process self-reliable by ensuring prevention of errors and/or error detection at subsequent stages. The purpose of this paper is to use failure mode effect analysis (FMEA), a systematic proactive tool, to identify the likelihood and the causes for the process to fail at various steps and prioritise them to devise risk reduction strategies to improve patient safety. Design/methodology/approach The study was designed as an observational analytical study of the medication management process in the inpatient area of a multi-speciality hospital in Gurgaon, Haryana, India. A team was formed to study the complex process of medication management in the hospital. The FMEA tool was used. Corrective actions were developed based on the prioritised failure modes, which were implemented and monitored. Findings The percentage distribution of medication errors, as observed by the team, was highest for transcription errors (37 per cent) followed by administration errors (29 per cent), indicating the need to identify the causes and effects of their occurrence. In all, 11 failure modes were identified, of which the major five were prioritised based on the risk priority number (RPN). The process was repeated after corrective actions were taken, which resulted in about 40 per cent (average) and around 60 per cent reduction in the RPN of the prioritised failure modes. Research limitations/implications FMEA is a time consuming process and requires a multidisciplinary team which has a good understanding of the process being analysed. FMEA only helps in identifying the possibilities of a process to fail; it does not eliminate them, and additional efforts are required to develop action plans and implement them. Frank discussion and agreement among the team members is required not only for successfully conducting FMEA but also for implementing the corrective actions. Practical implications FMEA is an effective proactive risk-assessment tool and is a continuous process which can be continued in phases. The corrective actions taken resulted in reduction in RPN, subject to further evaluation and usage by others depending on the facility type. Originality/value The application of the tool helped the hospital in identifying failures in the medication management process, thereby prioritising and correcting them, leading to improvement.
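A minimal sketch of the FMEA prioritization step is shown below: each failure mode receives a Risk Priority Number (RPN = severity × occurrence × detection), and the highest-RPN modes are addressed first. The failure modes and scores here are hypothetical, not the study's data.

```python
# Sketch of FMEA prioritization by RPN; scores are hypothetical.
failure_modes = [
    {"mode": "transcription error",  "severity": 8, "occurrence": 7, "detection": 5},
    {"mode": "administration error", "severity": 9, "occurrence": 6, "detection": 6},
    {"mode": "dispensing error",     "severity": 7, "occurrence": 4, "detection": 4},
]

for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

for fm in sorted(failure_modes, key=lambda fm: fm["rpn"], reverse=True):
    print(f'{fm["mode"]}: RPN={fm["rpn"]}')
```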
Forces associated with pneumatic power screwdriver operation: statics and dynamics.
Lin, Jia-Hua; Radwin, Robert G; Fronczak, Frank J; Richard, Terry G
2003-10-10
The statics and dynamics of pneumatic power screwdriver operation were investigated in the context of predicting forces acting against the human operator. A static force model is described in the paper, based on tool geometry, mass, orientation in space, feed force, torque build-up, and stall torque. Three common power hand tool shapes are considered, including pistol grip, right angle, and in-line. The static model estimates the handle force needed to support a power nutrunner when it acts against the tightened fastener with a constant torque. A system of equations for the static force and moment equilibrium conditions is established, and the resultant handle force (resolved in orthogonal directions) is calculated in matrix form. A dynamic model is formulated to describe pneumatic motor torque build-up characteristics dependent on threaded fastener joint hardness. Six pneumatic tools were tested to validate the deterministic model. The average torque prediction error was 6.6% (SD = 5.4%) and the average handle force prediction error was 6.7% (SD = 6.4%) for a medium-soft threaded fastener joint. The average torque prediction error was 5.2% (SD = 5.3%) and the average handle force prediction error was 3.6% (SD = 3.2%) for a hard threaded fastener joint. Use of these equations for estimating handle forces based on passive mechanical elements representing the human operator is also described. These models together should be useful for considering tool handle force in the selection and design of power screwdrivers, particularly for minimizing handle forces in the prevention of injuries and work-related musculoskeletal disorders.
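The sketch below shows, in a heavily simplified planar form, the idea of solving force and moment equilibrium in matrix form for the handle reactions; the geometry, loads, and two-hand hold are made-up assumptions, not the validated model from the paper.

```python
import numpy as np

# Simplified planar equilibrium: unknowns x = [Hx, Hy, N] are the gripping
# hand reactions (Hx, Hy) and a vertical support force N from a second hand.
# F = feed force, W = tool weight; d, L, h are made-up geometry values.
F, W = 40.0, 15.0            # N
d, L, h = 0.12, 0.30, 0.02   # COM offset, hand spacing, bit height offset, m

A = np.array([[1.0, 0.0, 0.0],    # sum Fx: Hx - F = 0
              [0.0, 1.0, 1.0],    # sum Fy: Hy + N - W = 0
              [0.0, 0.0, L  ]])   # sum Mz about H: N*L - W*d + F*h = 0
b = np.array([F, W, W * d - F * h])
Hx, Hy, N = np.linalg.solve(A, b)
print(f"handle force = ({Hx:.1f}, {Hy:.1f}) N, support force = {N:.2f} N")
```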
Nondimensional parameter for conformal grinding: combining machine and process parameters
NASA Astrophysics Data System (ADS)
Funkenbusch, Paul D.; Takahashi, Toshio; Gracewski, Sheryl M.; Ruckman, Jeffrey L.
1999-11-01
Conformal grinding of optical materials with CNC (Computer Numerical Control) machining equipment can be used to achieve precise control over complex part configurations. However, complications can arise due to the need to fabricate complex geometrical shapes at reasonable production rates. For example, high machine stiffness is essential, but the need to grind 'inside' small or highly concave surfaces may require use of tooling with less than ideal stiffness characteristics. If grinding generates loads sufficient for significant tool deflection, the programmed removal depth will not be achieved. Moreover, since grinding load is a function of the volumetric removal rate, the amount of load deflection can vary with location on the part, potentially producing complex figure errors. In addition to machine/tool stiffness and removal rate, load generation is a function of the process parameters. For example, by reducing the feed rate of the tool into the part, both the load and the resultant deflection/removal error can be decreased. However, this must be balanced against the need for part throughput. In this paper, a simple model which permits combination of machine stiffness and process parameters into a single non-dimensional parameter is adapted for a conformal grinding geometry. Errors in removal can be minimized by maintaining this parameter above a critical value. Moreover, since the value of this parameter depends on the local part geometry, it can be used to optimize process settings during grinding. For example, it may be used to guide adjustment of the feed rate as a function of location on the part to eliminate figure errors while minimizing the total grinding time required.
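The sketch below illustrates only the underlying load-deflection trade-off (load scales with removal rate, deflection is load over stiffness, deflection shows up as removal-depth error); it does not reproduce the paper's specific nondimensional parameter, and the load model and numbers are illustrative assumptions.

```python
# Illustrative load/deflection sketch; not the paper's nondimensional parameter.
def removal_depth_error(feed_rate_mm_s, width_mm, depth_mm,
                        specific_load_N_per_mm3_s, stiffness_N_per_mm):
    removal_rate = feed_rate_mm_s * width_mm * depth_mm      # mm^3/s
    load = specific_load_N_per_mm3_s * removal_rate          # N
    deflection_mm = load / stiffness_N_per_mm                # depth not removed
    return deflection_mm

# Halving the feed rate halves the load and hence the figure error,
# at the cost of throughput: the trade-off the paper's parameter captures.
print(removal_depth_error(2.0, 5.0, 0.02, 3.0, 20.0))
print(removal_depth_error(1.0, 5.0, 0.02, 3.0, 20.0))
```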
Li, Qi; Melton, Kristin; Lingren, Todd; Kirkendall, Eric S; Hall, Eric; Zhai, Haijun; Ni, Yizhao; Kaiser, Megan; Stoutenborough, Laura; Solti, Imre
2014-01-01
Although electronic health records (EHRs) have the potential to provide a foundation for quality and safety algorithms, few studies have measured their impact on automated adverse event (AE) and medical error (ME) detection within the neonatal intensive care unit (NICU) environment. This paper presents two phenotyping AE and ME detection algorithms (ie, IV infiltrations, narcotic medication oversedation and dosing errors) and describes manual annotation of airway management and medication/fluid AEs from NICU EHRs. From 753 NICU patient EHRs from 2011, we developed two automatic AE/ME detection algorithms, and manually annotated 11 classes of AEs in 3263 clinical notes. Performance of the automatic AE/ME detection algorithms was compared to trigger tool and voluntary incident reporting results. AEs in clinical notes were double annotated and consensus achieved under neonatologist supervision. Sensitivity, positive predictive value (PPV), and specificity are reported. Twelve severe IV infiltrates were detected. The algorithm identified one more infiltrate than the trigger tool and eight more than incident reporting. One narcotic oversedation was detected demonstrating 100% agreement with the trigger tool. Additionally, 17 narcotic medication MEs were detected, an increase of 16 cases over voluntary incident reporting. Automated AE/ME detection algorithms provide higher sensitivity and PPV than currently used trigger tools or voluntary incident-reporting systems, including identification of potential dosing and frequency errors that current methods are unequipped to detect. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.
Hromadka, T.V.; ,
1985-01-01
Besides providing an exact solution for steady-state heat conduction processes (Laplace and Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximative boundary generation. This error evaluation can be used to develop highly accurate CVBEM models of the heat transport process, and the resulting model can be used as a test case for evaluating the precision of domain models based on finite elements or finite differences.
NASA Astrophysics Data System (ADS)
Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann
2016-05-01
The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.
A Novel Error Model of Optical Systems and an On-Orbit Calibration Method for Star Sensors.
Wang, Shuang; Geng, Yunhai; Jin, Rongyu
2015-12-12
In order to improve the on-orbit measurement accuracy of star sensors, the effects of image-plane rotary error, image-plane tilt error and distortions of optical systems resulting from the on-orbit thermal environment were studied in this paper. Since these issues will affect the precision of star image point positions, in this paper, a novel measurement error model based on the traditional error model is explored. Due to the orthonormal characteristics of image-plane rotary-tilt errors and the strong nonlinearity among these error parameters, it is difficult to calibrate all the parameters simultaneously. To solve this difficulty, for the new error model, a modified two-step calibration method based on the Extended Kalman Filter (EKF) and Least Square Methods (LSM) is presented. The former one is used to calibrate the main point drift, focal length error and distortions of optical systems while the latter estimates the image-plane rotary-tilt errors. With this calibration method, the precision of star image point position influenced by the above errors is greatly improved from 15.42% to 1.389%. Finally, the simulation results demonstrate that the presented measurement error model for star sensors has higher precision. Moreover, the proposed two-step method can effectively calibrate model error parameters, and the calibration precision of on-orbit star sensors is also improved obviously.
DOT National Transportation Integrated Search
2001-02-01
The Human Factors Analysis and Classification System (HFACS) is a general human error framework : originally developed and tested within the U.S. military as a tool for investigating and analyzing the human : causes of aviation accidents. Based upon ...
Evaluation of a UMLS Auditing Process of Semantic Type Assignments
Gu, Huanying; Hripcsak, George; Chen, Yan; Morrey, C. Paul; Elhanan, Gai; Cimino, James J.; Geller, James; Perl, Yehoshua
2007-01-01
The UMLS is a terminological system that integrates many source terminologies. Each concept in the UMLS is assigned one or more semantic types from the Semantic Network, an upper level ontology for biomedicine. Due to the complexity of the UMLS, errors exist in the semantic type assignments. Finding assignment errors may unearth modeling errors. Even with sophisticated tools, discovering assignment errors requires manual review. In this paper we describe the evaluation of an auditing project of UMLS semantic type assignments. We studied the performance of the auditors who reviewed potential errors. We found that four auditors, interacting according to a multi-step protocol, identified a high rate of errors (one or more errors in 81% of concepts studied) and that results were sufficiently reliable (0.67 to 0.70) for the two most common types of errors. However, reliability was low for each individual auditor, suggesting that review of potential errors is resource-intensive. PMID:18693845
Overview of Boundary Layer Transition Research in Support of Orbiter Return To Flight
NASA Technical Reports Server (NTRS)
Berry, Scott A.; Horvath, Thomas J.; Greene, Francis A.; Kinder, Gerald R.; Wang, K. C.
2006-01-01
A predictive tool for estimating the onset of boundary layer transition resulting from damage to and/or repair of the thermal protection system was developed in support of Shuttle Return to Flight. The boundary layer transition tool is part of a suite of tools that analyze the aerothermodynamic environment to the local thermal protection system to allow informed disposition of damage for making recommendations to fly as is or to repair. Using mission specific trajectory information and details of each damage site or repair, the expected time (and thus Mach number) at transition onset is predicted to help define the aerothermodynamic environment to use in the subsequent thermal and stress analysis of the local thermal protection system and structure. The boundary layer transition criteria utilized for the tool was developed from ground-based measurements to account for the effect of both protuberances and cavities and has been calibrated against select flight data. Computed local boundary layer edge conditions were used to correlate the results, specifically the momentum thickness Reynolds number over the edge Mach number and the boundary layer thickness. For the initial Return to Flight mission, STS-114, empirical curve coefficients of 27, 100, and 900 were selected to predict transition onset for protuberances based on height, and cavities based on depth and length, respectively.
Impact of Measurement Error on Synchrophasor Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Lee, Hong-Tao
1989-01-01
A new approach for determination of machine-tool settings for spiral bevel gears is proposed. The proposed settings provide a predesigned parabolic function of transmission errors and the desired location and orientation of the bearing contact. The predesigned parabolic function of transmission errors is able to absorb piece-wise linear functions of transmission errors that are caused by the gear misalignment and reduce gear noise. The gears are face-milled by head cutters with conical surfaces or surfaces of revolution. A computer program for simulation of meshing, bearing contact and determination of transmission errors for misaligned gear has been developed.
NASA Astrophysics Data System (ADS)
Radziukynas, V.; Klementavičius, A.
2016-04-01
The paper analyses the performance results of the recently developed short-term forecasting suit for the Latvian power system. The system load and wind power are forecasted using ANN and ARIMA models, respectively, and the forecasting accuracy is evaluated in terms of errors, mean absolute errors and mean absolute percentage errors. The investigation of influence of additional input variables on load forecasting errors is performed. The interplay of hourly loads and wind power forecasting errors is also evaluated for the Latvian power system with historical loads (the year 2011) and planned wind power capacities (the year 2023).
Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis
NASA Technical Reports Server (NTRS)
Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.
2017-01-01
This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
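The sketch below is not PRECiSA itself; it only illustrates the textbook per-operation bound such analyses build on, namely that each floating-point operation introduces a relative error of at most the unit roundoff u, so |fl(x op y) − (x op y)| ≤ u·|x op y|, accumulated naively (to first order) through a tiny expression.

```python
import sys

U = sys.float_info.epsilon / 2.0   # unit roundoff for binary64

def add_bound(x, bx, y, by):
    """Value and first-order worst-case error bound of a floating-point add."""
    v = x + y
    return v, bx + by + U * abs(v)

def mul_bound(x, bx, y, by):
    """Value and first-order worst-case error bound of a floating-point multiply."""
    v = x * y
    return v, abs(x) * by + abs(y) * bx + U * abs(v)

# Error bound for the expression (a*b) + c with exact inputs
a, b, c = 0.1, 3.0, 7.0
t, bt = mul_bound(a, 0.0, b, 0.0)
r, br = add_bound(t, bt, c, 0.0)
print(r, br)
```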
Smart Infrared Inspection System Field Operational Test Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siekmann, Adam; Capps, Gary J; Franzese, Oscar
2011-06-01
The Smart InfraRed Inspection System (SIRIS) is a tool designed to assist inspectors in determining which vehicles passing through the SIRIS system are in need of further inspection by measuring the thermal data from the wheel components. As a vehicle enters the system, infrared cameras on the road measure temperatures of the brakes, tires, and wheel bearings on both wheel ends of commercial motor vehicles (CMVs) in motion. This thermal data is then presented to enforcement personnel inside the inspection station on a user-friendly interface. Vehicles that are suspected to have a violation are automatically flagged to the enforcement staff. The main goal of the SIRIS field operational test (FOT) was to collect data to evaluate the performance of the prototype system and determine the viability of such a system being used for commercial motor vehicle enforcement. From March 2010 to September 2010, ORNL facilitated the SIRIS FOT at the Greene County Inspection Station (IS) in Greeneville, Tennessee. During the course of the FOT, 413 CMVs were given a North American Standard (NAS) Level-1 inspection. Of those 413 CMVs, 384 were subjected to a SIRIS screening. A total of 36 (9.38%) of the vehicles were flagged by SIRIS as having one or more thermal issues, with brake issues making up 33 (91.67%) of those. Of the 36 vehicles flagged as having thermal issues, 31 (86.11%) were found to have a violation and 30 (83.33%) of those vehicles were placed out-of-service (OOS). Overall, the enforcement personnel who have used SIRIS for screening purposes have had positive feedback on the potential of SIRIS. With improvements in detection algorithms and stability, the system will be beneficial to the CMV enforcement community and increase overall trooper productivity by accurately identifying a higher percentage of CMVs to be placed OOS with minimal error. No future evaluation of SIRIS has been deemed necessary and specifications for a production system will soon be drafted.
Thermal vacuum test of space equipment: tests of SIR-2 instrument Chandrayaan-1 mission
NASA Astrophysics Data System (ADS)
Sitek, P.
2008-11-01
We describe the reasons for performing thermal-vacuum (TV) tests on space electronics. We answer the following questions: why teams perform TV tests, what kinds of phases should be simulated, which situations are the most critical during TV tests, what kinds of results should be expected, and which errors can be detected. As an example, the TV test of the SIR-2 instrument for the Chandrayaan-1 Moon mission is shown.
Hirotani, Jun; Ikuta, Tatsuya; Nishiyama, Takashi; Takahashi, Koji
2013-01-16
Interfacial thermal transport via van der Waals interaction is quantitatively evaluated using an individual multi-walled carbon nanotube bonded on a platinum hot-film sensor. The thermal boundary resistance per unit contact area was obtained at the interface between the closed end or sidewall of the nanotube and platinum, gold, or a silicon dioxide surface. When taking into consideration the surface roughness, the thermal boundary resistance at the sidewall is found to coincide with that at the closed end. A new finding is that the thermal boundary resistance between a carbon nanotube and a solid surface is independent of the materials within the experimental errors, which is inconsistent with a traditional phonon mismatch model, which shows a clear material dependence of the thermal boundary resistance. Our data indicate the inapplicability of existing phonon models when weak van der Waals forces are dominant at the interfaces.
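For clarity, the quantity reported above, thermal boundary resistance per unit contact area, can be written as R″ = ΔT·A_contact/Q; the sketch below uses illustrative numbers, not the measured nanotube/metal values.

```python
# Thermal boundary resistance per unit contact area, R" = dT * A / Q.
# Numbers are illustrative, not the measured values from the paper.
def boundary_resistance_per_area(delta_T_K, heat_flow_W, contact_area_m2):
    return delta_T_K * contact_area_m2 / heat_flow_W   # m^2*K/W

print(boundary_resistance_per_area(delta_T_K=2.0,
                                   heat_flow_W=1.0e-6,
                                   contact_area_m2=1.0e-14))
# ~2e-8 m^2*K/W
```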
NASA Technical Reports Server (NTRS)
Campbell, Anthony B.; Nair, Satish S.; Miles, John B.; Iovine, John V.; Lin, Chin H.
1998-01-01
The present NASA space suit (the Shuttle EMU) is a self-contained environmental control system, providing life support, environmental protection, earth-like mobility, and communications. This study considers the thermal dynamics of the space suit as they relate to astronaut thermal comfort control. A detailed dynamic lumped capacitance thermal model of the present space suit is used to analyze the thermal dynamics of the suit with observations verified using experimental and flight data. Prior to using the model to define performance characteristics and limitations for the space suit, the model is first evaluated and improved. This evaluation includes determining the effect of various model parameters on model performance and quantifying various temperature prediction errors in terms of heat transfer and heat storage. The observations from this study are being utilized in two future design efforts, automatic thermal comfort control design for the present space suit and design of future space suit systems for Space Station, Lunar, and Martian missions.
Bennett, Simon A
2017-01-01
The National Health Service (NHS) carries a contingent liability for medical error claims of over £26 billion. The aim was to evaluate the safety management and educational benefits of adapting aviation's Normal Operations Safety Audit (NOSA) to health care. In this in vivo research, a NOSA was performed by medical students at an English NHS Trust. After receiving training from the author, the students spent 6 days gathering data under his supervision. The data revealed a threat-rich environment, where errors - some consequential - were made (359 threats and 86 errors were recorded over 2 weeks). The students claimed that the exercise improved their observational, investigative, communication, teamworking and other nontechnical skills. NOSA is potentially an effective safety management and educational tool for health care. It is suggested that 1) the UK General Medical Council mandates that all medical students perform a NOSA in fulfillment of their degree; 2) the participating NHS Trusts be encouraged to act on students' findings; and 3) the UK Department of Health adopts NOSA as a cornerstone risk assessment and management tool.
NASA Technical Reports Server (NTRS)
Morey, Susan; Prevot, Thomas; Mercer, Joey; Martin, Lynne; Bienert, Nancy; Cabrall, Christopher; Hunt, Sarah; Homola, Jeffrey; Kraut, Joshua
2013-01-01
A human-in-the-loop simulation was conducted to examine the effects of varying levels of trajectory prediction uncertainty on air traffic controller workload and performance, as well as how strategies and the use of decision support tools change in response. This paper focuses on the strategies employed by two controllers from separate teams who worked in parallel but independently under identical conditions (airspace, arrival traffic, tools) with the goal of ensuring schedule conformance and safe separation for a dense arrival flow in en route airspace. Despite differences in strategy and methods, both controllers achieved high levels of schedule conformance and safe separation. Overall, results show that trajectory uncertainties introduced by wind and aircraft performance prediction errors do not affect the controllers' ability to manage traffic. Controller strategies were fairly robust to changes in error, though strategies were affected by the amount of delay to absorb (scheduled time of arrival minus estimated time of arrival). Using the results and observations, this paper proposes an ability to dynamically customize the display of information including delay time based on observed error to better accommodate different strategies and objectives.
Air and smear sample calculational tool for Fluor Hanford Radiological control
DOE Office of Scientific and Technical Information (OSTI.GOV)
BAUMANN, B.L.
2003-07-11
A spreadsheet calculation tool was developed to automate the calculations performed for determining the concentration of airborne radioactivity and smear counting as outlined in HNF-13536, Section 5.2.7, ''Analyzing Air and Smear Samples''. This document reports on the design and testing of the calculation tool. Radiological Control Technicians (RCTs) will save time and reduce handwriting and calculation errors by using an electronic form for documenting and calculating workplace air samples. Current expectations are that RCTs will perform an air sample and collect the filter, or perform a smear for surface contamination. RCTs will then survey the filter for gross alpha and beta/gamma radioactivity and, with the gross counts, use either a hand calculation method or a calculator to determine activity on the filter. The electronic form will allow the RCT, with a few keystrokes, to document the individual's name, payroll, gross counts, and instrument identifiers, and produce an error-free record. This productivity gain is realized by the enhanced ability to perform mathematical calculations electronically (reducing errors) and, at the same time, document the air sample.
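A generic version of the kind of calculation such a form automates is sketched below: converting gross counts on an air-sample filter into an airborne activity concentration. This is not the HNF-13536 procedure itself, and the efficiency, background, and sampling parameters are hypothetical.

```python
# Generic airborne activity concentration calculation (illustrative only).
def air_concentration_dpm_per_m3(gross_counts, count_time_min,
                                 background_cpm, efficiency,
                                 flow_rate_m3_per_min, sample_time_min):
    net_cpm = gross_counts / count_time_min - background_cpm
    activity_dpm = net_cpm / efficiency                  # disintegrations per minute
    volume_m3 = flow_rate_m3_per_min * sample_time_min   # air volume sampled
    return activity_dpm / volume_m3

print(air_concentration_dpm_per_m3(gross_counts=150, count_time_min=1.0,
                                   background_cpm=50.0, efficiency=0.3,
                                   flow_rate_m3_per_min=0.06,
                                   sample_time_min=480.0))
```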
Thermocouple Errors when Mounted on Cylindrical Surfaces in Abnormal Thermal Environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakos, James T.; Suo-Anttila, Jill M.; Zepper, Ethan T.
Mineral-insulated, metal-sheathed, Type-K thermocouples are used to measure the temperature of various items in high-temperature environments, often exceeding 1000 °C (1273 K). The thermocouple wires (chromel and alumel) are protected from the harsh environments by an Inconel sheath and magnesium oxide (MgO) insulation. The sheath and insulation are required for reliable measurements. Due to the sheath and MgO insulation, the temperature registered by the thermocouple is not the temperature of the surface of interest. In some cases, the error incurred is large enough to be of concern because these data are used for model validation, and thus the uncertainties of the data need to be well documented. This report documents the error using 0.062" and 0.040" diameter Inconel-sheathed, Type-K thermocouples mounted on cylindrical surfaces (inside of a shroud, outside and inside of a mock test unit). After an initial transient, the thermocouple bias errors typically range only about ±1-2% of the reading in K. After all of the uncertainty sources have been included, the total uncertainty to 95% confidence, for shroud or test unit TCs in abnormal thermal environments, is about ±2% of the reading in K, lower than the ±3% typically used for flat shrouds. Recommendations are provided in Section 6 to facilitate interpretation and use of the results.
Common errors in multidrug-resistant tuberculosis management.
Monedero, Ignacio; Caminero, Jose A
2014-02-01
Multidrug-resistant tuberculosis (MDR-TB), defined as being resistant to at least rifampicin and isoniazid, has an increasing burden and threatens TB control. Diagnosis is limited and usually delayed, while treatment is long-lasting, toxic and poorly effective. MDR-TB management in scarce-resource settings is demanding; however, it is feasible and extremely necessary. In these settings, cure rates do not usually exceed 60-70%, and MDR-TB management is novel for many TB programs. In this challenging scenario, both clinical and programmatic errors are likely to occur. The majority of these errors may be prevented or alleviated with appropriate and timely training, in addition to uninterrupted procurement of high-quality drugs, updated national guidelines and laws, and an overall improvement in management capacities. While new tools for diagnosis and shorter and less toxic treatments are not available in developing countries, MDR-TB management will remain complex in scarce-resource settings. Focusing special attention on the common errors in diagnosis, regimen design and especially treatment delivery may benefit patients and programs with current outdated tools. The present article is a compilation of typical errors repeatedly observed by the authors in a wide range of countries during technical assistance missions and trainings.
Designing and Developing Web-Based Administrative Tools for Program Management
NASA Technical Reports Server (NTRS)
Gutensohn, Michael
2017-01-01
The task assigned for this internship was to develop a new tool for tracking projects, their subsystems, the leads, backups, and other employees assigned to them, as well as all the relevant information related to the employee (WBS (time charge) codes, time distribution, certifications, and assignments). Currently, this data is tracked manually using a number of different spreadsheets and other tools simultaneously by a number of different people; some of these documents are then merged into one large document. This often leads to inconsistencies and loss in data due to human error. By simplifying the process of tracking this data and aggregating it into a single tool, it is possible to significantly decrease the potential for human error and time spent collecting and checking this information. II. Objective The main objective of this internship is to develop a web-based tool using Ruby on Rails to serve as a method of easily tracking projects, subsystems, and points of contact, along with employees, their assignments, time distribution, certifications, and contact information. Additionally, this tool must be capable of generating a number of different reports based on the data collected. It was important that this tool deliver all of this information using a readable and intuitive interface.
NASA Astrophysics Data System (ADS)
Fedonin, O. N.; Petreshin, D. I.; Ageenko, A. V.
2018-03-01
In the article, the issue of increasing a CNC lathe accuracy by compensating for the static and dynamic errors of the machine is investigated. An algorithm and a diagnostic system for a CNC machine tool are considered, which allows determining the errors of the machine for their compensation. The results of experimental studies on diagnosing and improving the accuracy of a CNC lathe are presented.
García-Molina Sáez, C; Urbieta Sanz, E; Madrigal de Torres, M; Vicente Vera, T; Pérez Cárceles, M D
2016-04-01
It is well known that medication reconciliation at discharge is a key strategy to ensure proper drug prescription and the effectiveness and safety of any treatment. Different types of interventions to reduce reconciliation errors at discharge have been tested, many of which are based on the use of electronic tools as they are useful to optimize the medication reconciliation process. However, not all countries are progressing at the same speed in this task and not all tools are equally effective. So it is important to collate updated country-specific data in order to identify possible strategies for improvement in each particular region. Our aim therefore was to analyse the effectiveness of a computerized pharmaceutical intervention to reduce reconciliation errors at discharge in Spain. A quasi-experimental interrupted time-series study was carried out in the cardio-pneumology unit of a general hospital from February to April 2013. The study consisted of three phases: pre-intervention, intervention and post-intervention, each involving 23 days of observations. At the intervention period, a pharmacist was included in the medical team and entered the patient's pre-admission medication in a computerized tool integrated into the electronic clinical history of the patient. The effectiveness was evaluated by the differences between the mean percentages of reconciliation errors in each period using a Mann-Whitney U test accompanied by Bonferroni correction, eliminating autocorrelation of the data by first using an ARIMA analysis. In addition, the types of error identified and their potential seriousness were analysed. A total of 321 patients (119, 105 and 97 in each phase, respectively) were included in the study. For the 3966 medicaments recorded, 1087 reconciliation errors were identified in 77·9% of the patients. The mean percentage of reconciliation errors per patient in the first period of the study was 42·18%, falling to 19·82% during the intervention period (P = 0·000). When the intervention was withdrawn, the mean percentage of reconciliation errors increased again to 27·72% (P = 0·008). The difference between the percentages of pre- and post-intervention periods was statistically significant (P = 0·000). Most reconciliation errors were due to omission (46·7%) or incomplete prescription (43·8%), and 35·3% of which could have caused harm to the patient. A computerized pharmaceutical intervention is shown to reduce reconciliation errors in the context of a high incidence of such errors. © 2016 John Wiley & Sons Ltd.
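A minimal sketch of the statistical comparison described above follows: pairwise Mann-Whitney U tests between the reconciliation-error percentages of the three study periods, with a Bonferroni-corrected significance level. The daily percentages below are synthetic placeholders, not the study data.

```python
from itertools import combinations
from scipy.stats import mannwhitneyu

# Pairwise Mann-Whitney U tests with Bonferroni correction (synthetic data).
periods = {
    "pre":          [45.0, 40.2, 43.1, 39.8, 42.5],
    "intervention": [20.1, 18.7, 21.3, 19.5, 19.9],
    "post":         [28.3, 26.9, 27.5, 29.1, 26.8],
}
alpha = 0.05 / 3   # Bonferroni correction for 3 pairwise comparisons

for (name_a, xa), (name_b, xb) in combinations(periods.items(), 2):
    stat, p = mannwhitneyu(xa, xb, alternative="two-sided")
    print(f"{name_a} vs {name_b}: U={stat:.1f}, p={p:.4f}, significant={p < alpha}")
```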
Assessment of longwave radiation effects on air quality modelling in street canyons
NASA Astrophysics Data System (ADS)
Soucasse, L.; Buchan, A.; Pain, C.
2016-12-01
Computational Fluid Dynamics is widely used as a predictive tool to evaluate people's exposure to pollutants in urban street canyons. However, in low-wind conditions, flow and pollutant dispersion in the canyons are driven by thermal effects and may be affected by longwave (infrared) radiation due to the absorption and emission of water vapor contained in the air. These effects are mostly ignored in the literature dedicated to air quality modelling at this scale. This study aims at quantifying the uncertainties due to neglecting thermal radiation in air quality models. The Large-Eddy-Simulation of air flow in a single 2D canyon with a heat source on the ground is considered for Rayleigh and Reynolds numbers in the ranges of 10⁸-10¹⁰ and 5×10³-5×10⁴, respectively. The dispersion of a tracer is monitored once the statistically steady regime is reached. Incoming radiation is computed for a mid-latitude summer atmosphere and canyon surfaces are assumed to be black. Water vapour is the only radiating molecule considered and a global model is used to treat the spectral dependency of its absorption coefficient. Flow and radiation fields are solved in a coupled way using the finite element solvers Fluidity and Fetch which have the capability of adapting their space and angular resolution according to an estimate of the solution error. Results show significant effects of thermal radiation on flow patterns and tracer dispersion. When radiation is taken into account, the air is heated far from the heat source leading to a stronger natural convection flow. The tracer is then dispersed faster out of the canyon potentially decreasing people's exposure to pollution within the street canyon.
Thermal infrared data of active lava surfaces using a newly-developed camera system
NASA Astrophysics Data System (ADS)
Thompson, J. O.; Ramsey, M. S.
2017-12-01
Our ability to acquire accurate data during lava flow emplacement greatly improves models designed to predict their dynamics and down-flow hazard potential. For example, better constraint on the physical property of emissivity as a lava cools improves the accuracy of the derived temperature, a critical parameter for flow models that estimate at-vent eruption rate, flow length, and distribution. Thermal infrared (TIR) data are increasingly used as a tool to determine eruption styles and cooling regimes by measuring temperatures at high temporal resolutions. Factors that control the accurate measurement of surface temperatures include both material properties (e.g., emissivity and surface texture) as well as external factors (e.g., camera geometry and the intervening atmosphere). We present a newly-developed, field-portable miniature multispectral thermal infrared camera (MMT-Cam) to measure both temperature and emissivity of basaltic lava surfaces at up to 7 Hz. The MMT-Cam acquires emitted radiance in six wavelength channels in addition to the broadband temperature. The instrument was laboratory calibrated for systematic errors and fully field tested at the Overlook Crater lava lake (Kilauea, HI) in January 2017. The data show that the major emissivity absorption feature (around 8.5 to 9.0 µm) transitions to higher wavelengths and the depth of the feature decreases as a lava surface cools, forming a progressively thicker crust. This transition occurs over a temperature range of 758 to 518 K. Constraining the relationship between this spectral change and temperature derived from this data will provide more accurate temperatures and therefore, more accurate modeling results. This is the first time that emissivity and its link to temperature has been measured in situ on active lava surfaces, which will improve input parameters of flow propagation models and possibly improve flow forecasting.
Status of Technology Development to enable Large Stable UVOIR Space Telescopes
NASA Astrophysics Data System (ADS)
Stahl, H. Philip; MSFC AMTD Team
2017-01-01
NASA MSFC has two funded Strategic Astrophysics Technology projects to develop technology for potential future large missions: AMTD and PTC. The Advanced Mirror Technology Development (AMTD) project is developing technology to make mechanically stable mirrors for a 4-meter or larger UVOIR space telescope. AMTD is demonstrating this technology by making a 1.5 meter diameter x 200 mm thick ULE(C) mirror that is 1/3rd scale of a full-size 4-m mirror. AMTD is characterizing the mechanical and thermal performance of this mirror and of a 1.2-meter Zerodur(R) mirror to validate integrated modeling tools. Additionally, AMTD has developed integrated modeling tools which are being used to evaluate primary mirror systems for a potential Habitable Exoplanet Mission and has analyzed the interaction between optical telescope wavefront stability and coronagraph contrast leakage. The Predictive Thermal Control (PTC) project is developing technology to enable high stability thermal wavefront performance by using integrated modeling tools to predict and actively control the thermal environment of a 4-m or larger UVOIR space telescope.
MPI Runtime Error Detection with MUST: Advances in Deadlock Detection
Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; ...
2013-01-01
The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST) that detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require O(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real world applications.
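As an illustration of the class of error such a runtime checker flags, the following mpi4py sketch (not taken from the MUST paper, and assuming mpi4py is installed) deadlocks because both ranks post a blocking receive before either one sends.

```python
# Minimal head-to-head deadlock of the kind a runtime checker such as MUST
# reports: both ranks block in recv() and neither ever reaches send().
# Run with: mpiexec -n 2 python deadlock.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank  # valid for exactly two ranks

data = comm.recv(source=peer)              # both ranks wait here forever
comm.send(f"hello from {rank}", dest=peer)
print(rank, data)
```

Reordering the calls on one rank (send first, then receive) removes the cyclic wait that a graph-based detector would report.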
NASA Technical Reports Server (NTRS)
1989-01-01
001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.
Information systems as a tool to improve legal metrology activities
NASA Astrophysics Data System (ADS)
Rodrigues Filho, B. A.; Soratto, A. N. R.; Gonçalves, R. F.
2016-07-01
This study explores the importance of information systems applied to legal metrology as a tool to improve the control of measuring instruments used in trade. The information system implemented in Brazil has also helped to understand and appraise the control of measurements through the behavior of the errors and deviations of instruments used in trade, allowing resources to be allocated wisely and leading to more effective planning and control in the legal metrology field. A case study analyzing the fuel sector is carried out in order to show the conformity of fuel dispensers with the maximum permissible errors. The statistics of measurement errors of 167,310 fuel dispensers of gasoline, ethanol and diesel used in the field were analyzed, demonstrating the conformity of the fuel market in Brazil with the legal requirements.
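Numerically, the conformity check described above amounts to comparing each instrument's measured error against the maximum permissible error (MPE) for its class. A minimal sketch follows; the error values and the assumed MPE of plus or minus 0.5% are hypothetical, not the Brazilian field data analyzed in the study.

```python
# Minimal legal-metrology conformity check. The error sample and the MPE are
# hypothetical placeholders, not the 167,310-dispenser field data set.
errors_percent = [0.12, -0.31, 0.48, -0.55, 0.07, 0.62, -0.18]  # assumed sample
MPE = 0.5  # maximum permissible error, percent (assumed)

conforming = [e for e in errors_percent if abs(e) <= MPE]
rate = 100.0 * len(conforming) / len(errors_percent)
print(f"{rate:.1f}% of dispensers within +/-{MPE}% MPE")
```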
Fault Injection Techniques and Tools
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen; Tsai, Timothy K.; Iyer, Ravishankar K.
1997-01-01
Dependability evaluation involves the study of failures and errors. The destructive nature of a crash and long error latency make it difficult to identify the causes of failures in the operational environment. It is particularly hard to recreate a failure scenario for a large, complex system. To identify and understand potential failures, we use an experiment-based approach for studying the dependability of a system. Such an approach is applied not only during the conception and design phases, but also during the prototype and operational phases. To take an experiment-based approach, we must first understand a system's architecture, structure, and behavior. Specifically, we need to know its tolerance for faults and failures, including its built-in detection and recovery mechanisms, and we need specific instruments and tools to inject faults, create failures or errors, and monitor their effects.
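To make the idea of software-implemented fault injection concrete, the toy sketch below flips a randomly chosen bit of a data value before it is consumed and checks whether a simple detection mechanism notices. It is a generic illustration of the experiment-based approach, not one of the specific tools surveyed in the report.

```python
import random

def inject_bit_flip(value, width=32):
    """Flip one randomly chosen bit of an integer, emulating a transient fault."""
    bit = random.randrange(width)
    return value ^ (1 << bit)

def checksum(values):
    """A deliberately simple detection mechanism for the experiment."""
    return sum(values) & 0xFFFFFFFF

data = list(range(100))
golden = checksum(data)

# Inject a fault into one element and observe whether the error is detected.
faulty = data[:]
faulty[7] = inject_bit_flip(faulty[7])
detected = checksum(faulty) != golden
print("fault detected by checksum:", detected)
```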
NASA Technical Reports Server (NTRS)
Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette
2009-01-01
Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account for and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors across increasing traffic densities on an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind prediction errors up to 40 knots at current-day air traffic density with no additional separation distance buffer, and at eight times current-day density with no more than a 60% increase in separation distance buffer.
A system dynamic simulation model for managing the human error in power tools industries
NASA Astrophysics Data System (ADS)
Jamil, Jastini Mohd; Shaharanee, Izwan Nizal Mohd
2017-10-01
In the era of modern and competitive life, every organization faces situations in which work does not proceed as planned and has to be delayed when problems occur. Human error is often cited as the culprit. Errors made by employees force them to spend additional time identifying and checking for the error, which in turn can affect the normal operations of the company as well as the company's reputation. Employees are a key element of the organization in running all of its activities; hence, the work performance of employees is a crucial factor in organizational success. The purpose of this study is to identify the factors that cause the increase in errors made by employees in the organization by using a system dynamics approach. The target respondents in this study are employees of the Regional Material Field team from the purchasing department in the power tools industry. Questionnaires were distributed to the respondents to obtain their perceptions of the root causes of errors made by employees in the company. A system dynamics model was developed to simulate the factors behind the increasing errors made by employees and their impact. The findings of this study show that the increase in errors made by employees is generally caused by workload, work capacity, job stress, motivation and employee performance. This problem could be alleviated by increasing the number of employees in the organization.
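A system dynamics model of this kind is typically simulated as a small set of stocks updated by Euler integration, with feedback loops linking workload, stress and error generation. The sketch below is a deliberately simplified, hypothetical workload-stress-error loop written only to show that structure; the equations and parameter values are not the calibrated model from the study.

```python
# Hypothetical, highly simplified stock-and-flow loop: workload raises stress,
# stress raises the error rate, and errors feed back into workload as rework.
# All parameter values are illustrative only.
dt, t_end = 0.25, 52.0           # time step and horizon, weeks
workload, stress, errors = 40.0, 0.2, 0.0
capacity = 45.0                  # tasks per week the team can absorb

t = 0.0
while t < t_end:
    pressure = max(workload / capacity - 1.0, 0.0)
    stress += dt * (0.5 * pressure - 0.1 * stress)     # stress builds and decays
    error_rate = 0.02 * workload * (1.0 + stress)      # errors generated per week
    errors += dt * error_rate
    rework = 0.5 * error_rate                          # each error adds rework
    workload += dt * (rework - 0.05 * max(workload - 40.0, 0.0))
    t += dt

print(f"cumulative errors over {t_end:.0f} weeks: {errors:.1f}")
```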
Active management of plant canopy temperature as a tool for modifying plant metabolic activity
USDA-ARS?s Scientific Manuscript database
The relationship between a plant and its thermal environment is a major determiner of its growth and development. Since plants grow and develop within continuously variable thermal environments, they are subjected to continuous thermal variation over their life cycle. Transpiration serves to uncoupl...
Bennett, G.A.; Elder, M.G.; Kemme, J.E.
1984-03-20
The disclosure is directed to an apparatus for thermally protecting sensitive components in tools used in a geothermal borehole. The apparatus comprises a Dewar within a housing. The Dewar contains heat pipes such as brass heat pipes for thermally conducting heat from heat sensitive components such as electronics to a heat sink such as ice.
Is pulsar timing a hopeful tool for detection of relic gravitational waves by using GW150914 data?
NASA Astrophysics Data System (ADS)
Ghayour, Basem
2018-04-01
The inflation stage behaves as a power-law expansion, S(η) ∝ η^{1+β}, where β is constrained by 1+β < 0. If inflation were preceded by a radiation era, then there would be a thermal spectrum of relic gravitational waves at the time of inflation. Based on this idea, we find a new upper bound on β by comparing the thermal spectrum with the strain sensitivity of single-pulsar timing. We also show that the sensitivity curve of single-pulsar timing may be a promising tool for detecting the spectrum in both the usual and the thermal case by using the GW150914 data.
Spalding, Steven J; Kwoh, C Kent; Boudreau, Robert; Enama, Joseph; Lunich, Julie; Huber, Daniel; Denes, Louis; Hirsch, Raphael
2008-01-01
Introduction The assessment of joints with active arthritis is a core component of widely used outcome measures. However, substantial variability exists within and across examiners in assessment of these active joint counts. Swelling and temperature changes, two qualities estimated during active joint counts, are amenable to quantification using noncontact digital imaging technologies. We sought to explore the ability of three-dimensional (3D) and thermal imaging to reliably measure joint shape and temperature. Methods A Minolta 910 Vivid non-contact 3D laser scanner and a Meditherm med2000 Pro Infrared camera were used to create digital representations of wrist and metacarpophalangeal (MCP) joints. Specialized software generated 3 quantitative measures for each joint region: 1) Volume; 2) Surface Distribution Index (SDI), a marker of joint shape representing the standard deviation of vertical distances from points on the skin surface to a fixed reference plane; 3) Heat Distribution Index (HDI), representing the standard error of temperatures. Seven wrists and 6 MCP regions from 5 subjects with arthritis were used to develop and validate 3D image acquisition and processing techniques. HDI values from 18 wrist and 9 MCP regions were obtained from 17 patients with active arthritis and compared to data from 10 wrist and MCP regions from 5 controls. Standard deviation (SD), coefficient of variation (CV), and intraclass correlation coefficients (ICC) were calculated for each quantitative measure to establish their reliability. Results CVs for volume and SDI were <1.3% and ICCs were greater than 0.99. Thermal measures were less reliable than 3D measures. However, significant differences were observed between control and arthritis HDI values. Two case studies of arthritic joints demonstrated quantifiable changes in swelling and temperature corresponding with changes in symptoms and physical exam findings. Conclusion 3D and thermal imaging provide reliable measures of joint volume, shape, and thermal patterns. Further refinement may lead to the use of these technologies to improve the assessment of disease activity in arthritis. PMID:18215307
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, R.M.; Harding, J.M.; Pollak, K.D.
1992-02-01
Global-scale analyses of ocean thermal structure produced operationally at the U.S. Navy's Fleet Numerical Oceanography Center are verified, along with an ocean thermal climatology, against unassimilated bathythermograph (bathy), satellite multichannel sea surface temperature (MCSST), and ship sea surface temperature (SST) data. Verification statistics are calculated from the three types of data for February-April of 1988 and February-April of 1990 in nine verification areas covering most of the open ocean in the Northern Hemisphere. The analyzed thermal fields were produced by version 1.0 of the Optimum Thermal Interpolation System (OTIS 1.0) in 1988, but by an upgraded version of this model, referred to as OTIS 1.1, in 1990. OTIS 1.1 employs exactly the same analysis methodology as OTIS 1.0. The principal difference is that OTIS 1.1 has twice the spatial resolution of OTIS 1.0 and consequently uses smaller spatial decorrelation scales and noise-to-signal ratios. As a result, OTIS 1.1 is able to represent more horizontal detail in the ocean thermal fields than its predecessor. Verification statistics for the SST fields derived from bathy and MCSST data are consistent with each other, showing similar trends and error levels. These data indicate that the analyzed SST fields are more accurate in 1990 than in 1988, and generally more accurate than climatology for both years. Verification statistics for the SST fields derived from ship data are inconsistent with those derived from the bathy and MCSST data, and show much higher error levels indicative of observational noise.
Estimated Viscosities and Thermal Conductivities of Gases at High Temperatures
NASA Technical Reports Server (NTRS)
Svehla, Roger A.
1962-01-01
Viscosities and thermal conductivities, suitable for heat-transfer calculations, were estimated for about 200 gases in the ground state from 100 to 5000 K and 1-atmosphere pressure. Free radicals were included, but excited states and ions were not. Calculations for the transport coefficients were based upon the Lennard-Jones (12-6) potential for all gases. This potential was selected because: (1) It is one of the most realistic models available and (2) intermolecular force constants can be estimated from physical properties or by other techniques when experimental data are not available; such methods for estimating force constants are not as readily available for other potentials. When experimental viscosity data were available, they were used to obtain the force constants; otherwise the constants were estimated. These constants were then used to calculate both the viscosities and thermal conductivities tabulated in this report. For thermal conductivities of polyatomic gases an Eucken-type correction was made to correct for exchange between internal and translational energies. Though this correction may be rather poor at low temperatures, it becomes more satisfactory with increasing temperature. It was not possible to obtain force constants from experimental thermal conductivity data except for the inert atoms, because most conductivity data are available at low temperatures only (200 to 400 K), the temperature range where the Eucken correction is probably most in error. However, if the same set of force constants is used for both viscosity and thermal conductivity, there is a large degree of cancellation of error when these properties are used in heat-transfer equations such as the Dittus-Boelter equation. It is therefore concluded that the properties tabulated in this report are suitable for heat-transfer calculations of gaseous systems.
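For context, Lennard-Jones-based estimates of this kind follow the standard Chapman-Enskog expressions, with an Eucken-type correction for the thermal conductivity of polyatomic gases. The sketch below uses the common Neufeld fit for the collision integral and textbook-level force constants for N2 as an illustration; these numbers are generic approximations, not values from the report's tables.

```python
import math

def lj_viscosity(T, M, sigma, eps_over_k):
    """Chapman-Enskog viscosity estimate in Pa*s.
    T in K, M in g/mol, sigma in angstrom, eps_over_k in K."""
    t_star = T / eps_over_k
    # Neufeld et al. fit to the (2,2) reduced collision integral
    omega = (1.16145 / t_star**0.14874
             + 0.52487 * math.exp(-0.77320 * t_star)
             + 2.16178 * math.exp(-2.43787 * t_star))
    return 2.6693e-6 * math.sqrt(M * T) / (sigma**2 * omega)

def eucken_conductivity(mu, cp_molar, M):
    """Eucken-type thermal conductivity in W/(m*K).
    mu in Pa*s, cp_molar in J/(mol*K), M in g/mol."""
    R = 8.314
    return (cp_molar + 1.25 * R) * mu / (M * 1e-3)

# Illustrative force constants for N2 (textbook values, not the report's tables)
mu = lj_viscosity(T=300.0, M=28.014, sigma=3.798, eps_over_k=71.4)
k = eucken_conductivity(mu, cp_molar=29.1, M=28.014)
print(f"mu ~ {mu:.2e} Pa*s, k ~ {k:.3f} W/(m*K)")  # close to measured N2 values at 300 K
```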
Neural bases of imitation and pantomime in acute stroke patients: distinct streams for praxis.
Hoeren, Markus; Kümmerer, Dorothee; Bormann, Tobias; Beume, Lena; Ludwig, Vera M; Vry, Magnus-Sebastian; Mader, Irina; Rijntjes, Michel; Kaller, Christoph P; Weiller, Cornelius
2014-10-01
Apraxia is a cognitive disorder of skilled movements that characteristically affects the ability to imitate meaningless gestures, or to pantomime the use of tools. Despite substantial research, the neural underpinnings of imitation and pantomime have remained debated. An influential model states that higher motor functions are supported by different processing streams. A dorso-dorsal stream may mediate movements based on physical object properties, like reaching or grasping, whereas skilled tool use or pantomime rely on action representations stored within a ventro-dorsal stream. However, given variable results of past studies, the role of the two streams for imitation of meaningless gestures has remained uncertain, and the importance of the ventro-dorsal stream for pantomime of tool use has been questioned. To clarify the involvement of ventral and dorsal streams in imitation and pantomime, we performed voxel-based lesion-symptom mapping in a sample of 96 consecutive left-hemisphere stroke patients (mean age ± SD, 63.4 ± 14.8 years, 56 male). Patients were examined in the acute phase after ischaemic stroke (after a mean of 5.3, maximum 10 days) to avoid interference of brain reorganization with a reliable lesion-symptom mapping as best as possible. Patients were asked to imitate 20 meaningless hand and finger postures, and to pantomime the use of 14 common tools depicted as line drawings. Following the distinction between movement engrams and action semantics, pantomime errors were characterized as either movement or content errors, respectively. Whereas movement errors referred to incorrect spatio-temporal features of overall recognizable movements, content errors reflected an inability to associate tools with their prototypical actions. Both imitation and pantomime deficits were associated with lesions within the lateral occipitotemporal cortex, posterior inferior parietal lobule, posterior intraparietal sulcus and superior parietal lobule. However, the areas specifically related to the dorso-dorsal stream, i.e. posterior intraparietal sulcus and superior parietal lobule, were more strongly associated with imitation. Conversely, in contrast to imitation, pantomime deficits were associated with ventro-dorsal regions such as the supramarginal gyrus, as well as brain structures counted to the ventral stream, such as the extreme capsule. Ventral stream involvement was especially clear for content errors which were related to anterior temporal damage. However, movement errors were not consistently associated with a specific lesion location. In summary, our results indicate that imitation mainly relies on the dorso-dorsal stream for visuo-motor conversion and on-line movement control. Conversely, pantomime additionally requires ventro-dorsal and ventral streams for access to stored action engrams and retrieval of tool-action relationships. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
owl-qa | Informatics Technology for Cancer Research (ITCR)
owl-qa is an OWL-based QA tool for cancer study CDEs. The tool uses the combination of the NCI Thesaurus and additional disjointness axioms to detect potential errors and duplications in the data element definitions. The tool comprises three modules: Data Integration and Services Module; Compositional Expression Transformation Module; and OWL-based Quality Assurance Module.
Computational homogenisation for thermoviscoplasticity: application to thermally sprayed coatings
NASA Astrophysics Data System (ADS)
Berthelsen, Rolf; Denzer, Ralf; Oppermann, Philip; Menzel, Andreas
2017-11-01
Metal forming processes require wear-resistant tool surfaces in order to ensure a long life cycle of the expensive tools together with a constant high quality of the produced components. Thermal spraying is a relatively widely applied coating technique for the deposit of wear protection coatings. During these coating processes, heterogeneous coatings are deployed at high temperatures followed by quenching where residual stresses occur which strongly influence the performance of the coated tools. The objective of this article is to discuss and apply a thermo-mechanically coupled simulation framework which captures the heterogeneity of the deposited coating material. Therefore, a two-scale finite element framework for the solution of nonlinear thermo-mechanically coupled problems is elaborated and applied to the simulation of thermoviscoplastic material behaviour including nonlinear thermal softening in a geometrically linearised setting. The finite element framework and material model is demonstrated by means of numerical examples.
Wachs, Juan P; Frenkel, Boaz; Dori, Dov
2014-11-01
Errors in the delivery of medical care are the principal cause of inpatient mortality and morbidity, accounting for around 98,000 deaths in the United States of America (USA) annually. Ineffective team communication, especially in the operation room (OR), is a major root of these errors. This miscommunication can be reduced by analyzing and constructing a conceptual model of communication and miscommunication in the OR. We introduce the principles underlying Object-Process Methodology (OPM)-based modeling of the intricate interactions between the surgeon and the surgical technician while handling surgical instruments in the OR. This model is a software- and hardware-independent description of the agents engaged in communication events, their physical activities, and their interactions. The model enables assessing whether the task-related objectives of the surgical procedure were achieved and completed successfully and what errors can occur during the communication. The facts used to construct the model were gathered from observations of various types of operations miscommunications in the operating room and its outcomes. The model takes advantage of the compact ontology of OPM, which is comprised of stateful objects - things that exist physically or informatically, and processes - things that transform objects by creating them, consuming them or changing their state. The modeled communication modalities are verbal and non-verbal, and errors are modeled as processes that deviate from the "sunny day" scenario. Using OPM refinement mechanism of in-zooming, key processes are drilled into and elaborated, along with the objects that are required as agents or instruments, or objects that these processes transform. The model was developed through an iterative process of observation, modeling, group discussions, and simplification. The model faithfully represents the processes related to tool handling that take place in an OR during an operation. The specification is at various levels of detail, each level is depicted in a separate diagram, and all the diagrams are "aware" of each other as part of the whole model. Providing ontology of verbal and non-verbal modalities of communication in the OR, the resulting conceptual model is a solid basis for analyzing and understanding the source of the large variety of errors occurring in the course of an operation, providing an opportunity to decrease the quantity and severity of mistakes related to the use and misuse of surgical instrumentations. Since the model is event driven, rather than person driven, the focus is on the factors causing the errors, rather than the specific person. This approach advocates searching for technological solutions to alleviate tool-related errors rather than finger-pointing. Concretely, the model was validated through a structured questionnaire and it was found that surgeons agreed that the conceptual model was flexible (3.8 of 5, std=0.69), accurate, and it generalizable (3.7 of 5, std=0.37 and 3.7 of 5, std=0.85, respectively). The detailed conceptual model of the tools handling subsystem of the operation performed in an OR focuses on the details of the communication and the interactions taking place between the surgeon and the surgical technician during an operation, with the objective of pinpointing the exact circumstances in which errors can happen. 
Exact and concise specification of the communication events in general and the surgical instrument requests in particular is a prerequisite for a methodical analysis of the various modes of errors and the circumstances under which they occur. This has significant potential value in both reduction in tool-handling-related errors during an operation and providing a solid formal basis for designing a cybernetic agent which can replace a surgical technician in routine tool handling activities during an operation, freeing the technician to focus on quality assurance, monitoring and control of the cybernetic agent activities. This is a critical step in designing the next generation of cybernetic OR assistants. Copyright © 2014 Elsevier B.V. All rights reserved.
Minimizing treatment planning errors in proton therapy using failure mode and effects analysis.
Zheng, Yuanshui; Johnson, Randall; Larson, Gary
2016-06-01
Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of FMEA framework and the implementation of an ongoing error tracking system at their clinic have proven to be useful in error reduction in proton treatment planning, thus improving the effectiveness and safety of proton therapy.
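Numerically, the scoring step described above reduces to multiplying the three FMEA ratings for each failure mode and ranking by the product. The failure modes and 1-10 ratings below are invented purely to illustrate that calculation; they are not the modes or scores from this study.

```python
# Minimal FMEA ranking sketch. Failure modes and 1-10 ratings are invented;
# RPN = occurrence x severity x detectability (higher means riskier).
failure_modes = [
    ("wrong CT-density curve selected", 3, 9, 4),
    ("contour transferred to wrong image set", 2, 8, 3),
    ("plan exported with stale beam parameters", 4, 7, 5),
]

ranked = sorted(
    ((name, o * s * d) for name, o, s, d in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)
for name, rpn in ranked:
    print(f"RPN {rpn:3d}  {name}")
```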
Simultaneous Measurement of Thermal Conductivity and Specific Heat in a Single TDTR Experiment
NASA Astrophysics Data System (ADS)
Sun, Fangyuan; Wang, Xinwei; Yang, Ming; Chen, Zhe; Zhang, Hang; Tang, Dawei
2018-01-01
Time-domain thermoreflectance (TDTR) technique is a powerful thermal property measurement method, especially for nano-structures and material interfaces. Thermal properties can be obtained by fitting TDTR experimental data with a proper thermal transport model. In a single TDTR experiment, thermal properties with different sensitivity trends can be extracted simultaneously. However, thermal conductivity and volumetric heat capacity usually have similar trends in sensitivity for most materials; it is difficult to measure them simultaneously. In this work, we present a two-step data fitting method to measure the thermal conductivity and volumetric heat capacity simultaneously from a set of TDTR experimental data at single modulation frequency. This method takes full advantage of the information carried by both amplitude and phase signals; it is a more convenient and effective solution compared with the frequency-domain thermoreflectance method. The relative error is lower than 5 % for most cases. A silicon wafer sample was measured by TDTR method to verify the two-step fitting method.
Using Laser Scanners to Augment the Systematic Error Pointing Model
NASA Astrophysics Data System (ADS)
Wernicke, D. R.
2016-08-01
The antennas of the Deep Space Network (DSN) rely on precise pointing algorithms to communicate with spacecraft that are billions of miles away. Although the existing systematic error pointing model is effective at reducing blind pointing errors due to static misalignments, several of its terms have a strong dependence on seasonal and even daily thermal variation and are thus not easily modeled. Changes in the thermal state of the structure create a separation from the model and introduce a varying pointing offset. Compensating for this varying offset is possible by augmenting the pointing model with laser scanners. In this approach, laser scanners mounted to the alidade measure structural displacements while a series of transformations generate correction angles. Two sets of experiments were conducted in August 2015 using commercially available laser scanners. When compared with historical monopulse corrections under similar conditions, the computed corrections are within 3 mdeg of the mean. However, although the results show promise, several key challenges relating to the sensitivity of the optical equipment to sunlight render an implementation of this approach impractical. Other measurement devices such as inclinometers may be implementable at a significantly lower cost.
MARSTHERM: A Web-based System Providing Thermophysical Analysis Tools for Mars Research
NASA Astrophysics Data System (ADS)
Putzig, N. E.; Barratt, E. M.; Mellon, M. T.; Michaels, T. I.
2013-12-01
We introduce MARSTHERM, a web-based system that will allow researchers access to a standard numerical thermal model of the Martian near-surface and atmosphere. In addition, the system will provide tools for the derivation, mapping, and analysis of apparent thermal inertia from temperature observations by the Mars Global Surveyor Thermal Emission Spectrometer (TES) and the Mars Odyssey Thermal Emission Imaging System (THEMIS). Adjustable parameters for the thermal model include thermal inertia, albedo, surface pressure, surface emissivity, atmospheric dust opacity, latitude, surface slope angle and azimuth, season (solar longitude), and time steps for calculations and output. The model computes diurnal surface and brightness temperatures for either a single day or a full Mars year. Output options include text files and plots of seasonal and diurnal surface, brightness, and atmospheric temperatures. The tools for the derivation and mapping of apparent thermal inertia from spacecraft data are project-based, wherein the user provides an area of interest (AOI) by specifying latitude and longitude ranges. The system will then extract results within the AOI from prior global mapping of elevation (from the Mars Orbiter Laser Altimeter, for calculating surface pressure), TES annual albedo, and TES seasonal and annual-mean 2AM and 2PM apparent thermal inertia (Putzig and Mellon, 2007, Icarus 191, 68-94). In addition, a history of TES dust opacity within the AOI is computed. For each project, users may then provide a list of THEMIS images to process for apparent thermal inertia, optionally overriding the TES-derived dust opacity with a fixed value. Output from the THEMIS derivation process includes thumbnail and context images, GeoTIFF raster data, and HDF5 files containing arrays of input and output data (radiance, brightness temperature, apparent thermal inertia, elevation, quality flag, latitude, and longitude) and ancillary information. As a demonstration of capabilities, we will present results from a thermophysical study of Gale Crater (Barratt and Putzig, 2013, EPSC abstract 613), for which TES and THEMIS mapping has been carried out during system development. Public access to the MARSTHERM system will be provided in conjunction with the 2013 AGU Fall Meeting and will feature the numerical thermal model and thermal-inertia derivation algorithm developed by Mellon et al. (2000, Icarus 148, 437-455) as modified by Putzig and Mellon (2007, Icarus 191, 68-94). Updates to the thermal model and derivation algorithm that include a more sophisticated representation of the atmosphere and a layered subsurface are presently in development, and these will be incorporated into the system when they are available. Other planned enhancements include tools for modeling temperatures from horizontal mixtures of materials and slope facets, for comparing heterogeneity modeling results to TES and THEMIS results, and for mosaicking THEMIS images.
The Sixth Annual Thermal and Fluids Analysis Workshop
NASA Technical Reports Server (NTRS)
1995-01-01
The Sixth Annual Thermal and Fluids Analysis Workshop consisted of classes, vendor demonstrations, and paper sessions. The classes and vendor demonstrations provided participants with the information on widely used tools for thermal and fluids analysis. The paper sessions provided a forum for the exchange of information and ideas among thermal and fluids analysis. Paper topics included advances an uses of established thermal and fluids computer codes (such as SINDA and TRASYS) as well as unique modeling techniques and applications.
Lee, Jing-Nang; Lin, Tsung-Min; Chen, Chien-Chih
2014-01-01
This study constructs an energy-based model of a thermal system for a controlled-temperature-and-humidity air conditioning system, and introduces the influence of the mass flow rate, heater and humidifier into the proposed control criteria to achieve controlled temperature and humidity in the air conditioning system. The reliability of the proposed thermal system model is then established by both MATLAB dynamic simulation and validation against the literature. Finally, a PID control strategy is applied to control the air mass flow rate, humidifying capacity, and heating capacity. The simulation results show that the temperature and humidity stabilize at 541 s, the disturbance of temperature is only 0.14 °C, the steady-state error of the humidity ratio is 0006 kg(w)/kg(da), and the error rate is only 7.5%. The results prove that the proposed system is an effective controlled-temperature-and-humidity air conditioning system.
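A discrete PID loop of the kind referred to above can be written in a few lines. The gains, actuator limits and first-order plant in the sketch below are hypothetical and only illustrate the control structure, not the paper's tuned temperature-and-humidity system.

```python
# Minimal discrete PID loop for a temperature setpoint. The first-order plant
# and the gains are hypothetical; only the control structure is illustrated.
kp, ki, kd = 2.0, 0.1, 0.5
dt = 1.0                      # s
setpoint = 24.0               # degC
temp = 30.0                   # initial air temperature, degC
integral, prev_error = 0.0, setpoint - temp

for step in range(600):
    error = setpoint - temp
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative   # heater/cooler command
    u = max(min(u, 5.0), -5.0)                         # actuator saturation
    # toy first-order thermal plant: drifts toward 30 degC, driven by u
    temp += dt * ((30.0 - temp) / 120.0 + 0.02 * u)
    prev_error = error

print(f"temperature after {600 * dt:.0f} s: {temp:.2f} degC")
```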
Temperature feedback control for long-term carrier-envelope phase locking
Chang, Zenghu [Manhattan, KS; Yun, Chenxia [Manhattan, KS; Chen, Shouyuan [Manhattan, KS; Wang, He [Manhattan, KS; Chini, Michael [Manhattan, KS
2012-07-24
A feedback control module for stabilizing a carrier-envelope phase of an output of a laser oscillator system comprises a first photodetector, a second photodetector, a phase stabilizer, an optical modulator, and a thermal control element. The first photodetector may generate a first feedback signal corresponding to a first portion of a laser beam from an oscillator. The second photodetector may generate a second feedback signal corresponding to a second portion of the laser beam filtered by a low-pass filter. The phase stabilizer may divide the frequency of the first feedback signal by a factor and generate an error signal corresponding to the difference between the frequency-divided first feedback signal and the second feedback signal. The optical modulator may modulate the laser beam within the oscillator corresponding to the error signal. The thermal control unit may change the temperature of the oscillator corresponding to a signal operable to control the optical modulator.
Zhang, Tangtang; Wen, Jun; van der Velde, Rogier; Meng, Xianhong; Li, Zhenchao; Liu, Yuanyong; Liu, Rong
2008-01-01
The total atmospheric water vapor content (TAWV) and land surface temperature (LST) play important roles in meteorology, hydrology, ecology and some other disciplines. In this paper, the ENVISAT/AATSR (Advanced Along-Track Scanning Radiometer) thermal data are used to estimate the TAWV and LST over the Loess Plateau in China by using a practical split-window algorithm. The distribution of the TAWV is consistent with that of the MODIS TAWV products, which indicates that the estimation of the total atmospheric water vapor content is reliable. Validation of the LST against ground measurements indicates that the maximum absolute deviation, the maximum relative error and the average relative error are 4.0 K, 11.8% and 5.0%, respectively, which shows that the retrievals are credible; this algorithm can provide a new way to estimate the LST from AATSR data. PMID:27879795
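A split-window retrieval of this general kind combines the brightness temperatures of the two thermal channels linearly (often with a quadratic term in their difference). The sketch below shows only that generic form; the coefficients are placeholders that in practice depend on water vapour and surface emissivity, not the fitted values of the AATSR algorithm used in the paper.

```python
def split_window_lst(t11, t12, a0=1.0, a1=2.0, a2=0.5):
    """Generic split-window form: LST = T11 + a1*(T11 - T12) + a2*(T11 - T12)**2 + a0.
    t11, t12: brightness temperatures (K) of the ~11 and ~12 um channels.
    The coefficients are placeholders, not the AATSR algorithm's values."""
    dT = t11 - t12
    return t11 + a1 * dT + a2 * dT**2 + a0

# Illustrative brightness temperatures in kelvin
print(split_window_lst(301.2, 299.8))
```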
Simple Forest Canopy Thermal Exitance Model
NASA Technical Reports Server (NTRS)
Smith J. A.; Goltz, S. M.
1999-01-01
We describe a model to calculate brightness temperature and surface energy balance for a forest canopy system. The model is an extension of an earlier vegetation-only model by inclusion of a simple soil layer. The root mean square error in brightness temperature for a dense forest canopy was 2.5 °C. Surface energy balance predictions were also in good agreement. The corresponding root mean square errors for net radiation, latent, and sensible heat were 38.9, 30.7, and 41.4 W/m², respectively.
NASA Astrophysics Data System (ADS)
Zheng, Yong; Chen, Yan
2013-10-01
Realizing a dynamic acquisition system for real-time detection of transmission chain errors is very important for improving the machining accuracy of machine tools. In this paper, a USB controller and an FPGA are used for the hardware platform design, LabVIEW is used to design the user application, and NI-VISA is used to develop the USB drivers, ultimately achieving the design of a dynamic acquisition system for transmission errors.
1988-01-01
AD-A199 117. Learning from Error. Colleen M. Seifert (UCSD and NPRDC), Edwin L. Hutchins (UCSD). Introduction: Most...always rely on learning on the job, and where there is the need for learning, there is potential for error. A naturally situated system of cooperative work...reorganized, change the things they do, and change the technology they utilize to do the job. Even if tasks and tools could be somehow frozen, changes in
Optimized method for manufacturing large aspheric surfaces
NASA Astrophysics Data System (ADS)
Zhou, Xusheng; Li, Shengyi; Dai, Yifan; Xie, Xuhui
2007-12-01
Aspheric optics are being used more and more widely in modern optical systems, due to their ability to correct aberrations, enhance image quality, enlarge the field of view and extend the range of effect, while reducing the weight and volume of the system. With the development of optical technology, there are ever more pressing requirements for large-aperture and high-precision aspheric surfaces. The original computer controlled optical surfacing (CCOS) technique cannot meet the challenge of precision and machining efficiency. This problem has received considerable attention from researchers. Aiming at the problems of the original polishing process, an optimized method for manufacturing large aspheric surfaces is put forward. Subsurface damage (SSD), full-aperture errors and the full band of frequency errors are all controlled by this method. A smaller SSD depth can be obtained by using low-hardness tools and small abrasive grains in the grinding process. For full-aperture error control, edge effects can be controlled by using smaller tools and an amended model of the material removal function. For control of the full band of frequency errors, low-frequency errors can be corrected with the optimized material removal function, while medium-high-frequency errors are reduced by using a uniform removal principle. With this optimized method, the accuracy of a K9 glass paraboloid mirror can reach rms 0.055 waves (where a wave is 0.6328 μm) in a short time. The results show that the optimized method can guide large aspheric surface manufacturing effectively.
NASA Astrophysics Data System (ADS)
Zhong, Xianyun; Fan, Bin; Wu, Fan
2017-08-01
The corrective calibration of the removal function plays an important role in the magnetorheological finishing (MRF) high-accuracy process. This paper mainly investigates the asymmetrical characteristic of the MRF removal function shape and further analyzes its influence on the surface residual error by means of an iteration algorithm and simulations. By comparing the ripple errors and convergence ratios based on the ideal MRF tool function and the deflected tool function, the mathematical models for calibrating the deviation of the horizontal and flowing directions are presented. Meanwhile, revised mathematical models for the coordinate transformation of an MRF machine are also established. Furthermore, a Ø140 mm fused silica plane and a Ø196 mm, f/1:1, fused silica concave sphere sample are used in the experiments. After two runs, the plane mirror final surface error reaches PV 17.7 nm, RMS 1.75 nm, and the polishing time is 16 min in total; after three runs, the sphere mirror final surface error reaches RMS 2.7 nm and the polishing time is 70 min in total. The convergence ratios are 96.2% and 93.5%, respectively. The spherical simulation error and the polishing result are almost consistent, which fully validates the efficiency and feasibility of the calibration method for the MRF removal function error used in high-accuracy subaperture optical manufacturing.
Performance Evaluation of Dual-axis Tracking System of Parabolic Trough Solar Collector
NASA Astrophysics Data System (ADS)
Ullah, Fahim; Min, Kang
2018-01-01
A parabolic trough solar collector with a concentration ratio of 24 was developed in the College of Engineering, Nanjing Agricultural University, China, and an optical model was built using the TracePro software. The effects of single-axis and dual-axis tracking modes and of azimuth and elevation angle tracking errors on the optical performance were investigated, and the thermal performance of the solar collector was experimentally measured. The results showed that the optical efficiency with dual-axis tracking was 0.813%, and its yearly average value was 14.3% and 40.9% higher than that of the east-west tracking mode and the north-south tracking mode, respectively. Further, from the results of the experiment, it was concluded that the optical efficiency is affected significantly by elevation angle tracking errors, which should be kept below 0.6°. High optical efficiency can be attained by using the dual-axis tracking mode even if the tracking precision of one axis is degraded. The real-time instantaneous thermal efficiency of the collector reached 0.775%. In addition, the linearity of the normalized efficiency was favorable. The curve of the calculated thermal efficiency agreed well with the normalized instantaneous efficiency curve derived from the experimental data, and the maximum difference between them was 10.3%. This type of solar collector could be applied in middle-scale thermal collection systems.
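In general, the instantaneous thermal efficiency quoted above is the ratio of the useful heat gained by the working fluid to the solar power intercepted by the collector aperture. The sketch below shows only that heat balance; all operating values are made up for illustration and are not the measurements from this experiment.

```python
# Instantaneous thermal efficiency of a trough collector from a heat balance.
# All operating values are illustrative placeholders.
m_dot = 0.12       # working-fluid mass flow rate, kg/s (assumed)
cp = 4186.0        # specific heat of water, J/(kg*K)
t_in, t_out = 45.0, 53.0   # inlet/outlet temperature, degC (assumed)
dni = 800.0        # direct normal irradiance, W/m^2 (assumed)
aperture = 9.0     # collector aperture area, m^2 (assumed)

q_useful = m_dot * cp * (t_out - t_in)
efficiency = q_useful / (dni * aperture)
print(f"eta = {efficiency:.3f}")
```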
Thermal transport at the nanoscale: A Fourier's law vs. phonon Boltzmann equation study
NASA Astrophysics Data System (ADS)
Kaiser, J.; Feng, T.; Maassen, J.; Wang, X.; Ruan, X.; Lundstrom, M.
2017-01-01
Steady-state thermal transport in nanostructures with dimensions comparable to the phonon mean-free-path is examined. Both the case of contacts at different temperatures with no internal heat generation and contacts at the same temperature with internal heat generation are considered. Fourier's law results are compared to finite volume method solutions of the phonon Boltzmann equation in the gray approximation. When the boundary conditions are properly specified, results obtained using Fourier's law without modifying the bulk thermal conductivity are in essentially exact quantitative agreement with the phonon Boltzmann equation in the ballistic and diffusive limits. The errors between these two limits are examined in this paper. For the four cases examined, the error in the apparent thermal conductivity as deduced from a correct application of Fourier's law is less than 6%. We also find that the Fourier's law results presented here are nearly identical to those obtained from a widely used ballistic-diffusive approach but analytically much simpler. Although limited to steady-state conditions with spatial variations in one dimension and to a gray model of phonon transport, the results show that Fourier's law can be used for linear transport from the diffusive to the ballistic limit. The results also contribute to an understanding of how heat transport at the nanoscale can be understood in terms of the conceptual framework that has been established for electron transport at the nanoscale.
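One compact way to see the ballistic-to-diffusive crossover discussed above is the suppression of the apparent conductivity with sample length in the gray, 1D model between ideal contacts. The closed-form expression used below is stated here as an assumption (it is the commonly quoted gray-model result), not a formula reproduced from this paper, and the material values are illustrative.

```python
# Gray-model, 1D apparent thermal conductivity between ideal contacts.
# Assumed form: kappa_app = kappa_bulk / (1 + 4*lam/(3*L)), shown only to
# illustrate the ballistic-to-diffusive crossover.
kappa_bulk = 150.0   # bulk conductivity, W/(m*K) (illustrative, Si-like)
lam = 100e-9         # gray phonon mean free path, m (illustrative)

for L in (10e-9, 100e-9, 1e-6, 10e-6):
    kappa_app = kappa_bulk / (1.0 + 4.0 * lam / (3.0 * L))
    print(f"L = {L:8.1e} m  kappa_app = {kappa_app:7.1f} W/(m*K)")
```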
Comparing Thermal Process Validation Methods for Salmonella Inactivation on Almond Kernels.
Jeong, Sanghyup; Marks, Bradley P; James, Michael K
2017-01-01
Ongoing regulatory changes are increasing the need for reliable process validation methods for pathogen reduction processes involving low-moisture products; however, the reliability of various validation methods has not been evaluated. Therefore, the objective was to quantify accuracy and repeatability of four validation methods (two biologically based and two based on time-temperature models) for thermal pasteurization of almonds. Almond kernels were inoculated with Salmonella Enteritidis phage type 30 or Enterococcus faecium (NRRL B-2354) at ~10^8 CFU/g, equilibrated to 0.24, 0.45, 0.58, or 0.78 water activity (a_w), and then heated in a pilot-scale, moist-air impingement oven (dry bulb 121, 149, or 177°C; dew point <33.0, 69.4, 81.6, or 90.6°C; v_air = 2.7 m/s) to a target lethality of ~4 log. Almond surface temperatures were measured in two ways, and those temperatures were used to calculate Salmonella inactivation using a traditional (D, z) model and a modified model accounting for process humidity. Among the process validation methods, both methods based on time-temperature models had better repeatability, with replication errors approximately half those of the surrogate (E. faecium). Additionally, the modified model yielded the lowest root mean squared error in predicting Salmonella inactivation (1.1 to 1.5 log CFU/g); in contrast, E. faecium yielded a root mean squared error of 1.2 to 1.6 log CFU/g, and the traditional model yielded an unacceptably high error (3.4 to 4.4 log CFU/g). Importantly, the surrogate and modified model both yielded lethality predictions that were statistically equivalent (α = 0.05) to actual Salmonella lethality. The results demonstrate the importance of methodology, a_w, and process humidity when validating thermal pasteurization processes for low-moisture foods, which should help processors select and interpret validation methods to ensure product safety.
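The traditional (D, z) model mentioned above accumulates lethality by integrating the instantaneous inactivation rate along the measured temperature history. The sketch below shows that calculation with invented D and z parameters and a made-up temperature profile; the humidity-dependent modification used in the study is not reproduced here.

```python
# Traditional D-z lethality integration (no process-humidity correction).
# D_ref, z and the temperature history are invented for illustration.
D_ref = 1.0      # min, decimal reduction time at T_ref (assumed)
T_ref = 90.0     # degC (assumed)
z = 15.0         # degC per tenfold change in D (assumed)

def d_value(temp_c):
    return D_ref * 10.0 ** ((T_ref - temp_c) / z)

# Made-up surface temperature history: (time in min, temperature in degC)
history = [(0.0, 25.0), (1.0, 65.0), (3.0, 85.0), (6.0, 93.0)]

log_reduction = 0.0
for (t0, T0), (t1, T1) in zip(history, history[1:]):
    T_mid = 0.5 * (T0 + T1)          # midpoint temperature of the interval
    log_reduction += (t1 - t0) / d_value(T_mid)

print(f"predicted log reduction: {log_reduction:.2f}")
```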
NASA Astrophysics Data System (ADS)
Hancock, S.; Armston, J.; Tang, H.; Patterson, P. L.; Healey, S. P.; Marselis, S.; Duncanson, L.; Hofton, M. A.; Kellner, J. R.; Luthcke, S. B.; Sun, X.; Blair, J. B.; Dubayah, R.
2017-12-01
NASA's Global Ecosystem Dynamics Investigation will mount a multi-track, full-waveform lidar on the International Space Station (ISS) that is optimised for the measurement of forest canopy height and structure. GEDI will use ten laser tracks, two 10 mJ "power beams" and eight 5 mJ "coverage beams", to produce global (51.5°S to 51.5°N) maps of above ground biomass (AGB), canopy height, vegetation structure and other biophysical parameters. The mission has a requirement to generate a 1 km AGB map with 80% of pixels with ≤ 20% standard error or 20 Mg·ha⁻¹, whichever is greater. To assess performance and compare to mission requirements, an end-to-end simulator has been developed. The simulator brings together tools to propagate the effects of measurement and sampling error on GEDI data products. The simulator allows us to evaluate the impact of instrument performance, ISS orbits, processing algorithms and losses of data that may occur due to clouds, snow, leaf-off conditions, and areas with an insufficient signal-to-noise ratio (SNR). By evaluating the consequences of operational decisions on GEDI data products, this tool provides a quantitative framework for decision-making and mission planning. Here we demonstrate the performance tool by using it to evaluate the trade-off between measurement and sampling error on the 1 km AGB data product. Results demonstrate that the use of coverage beams during the day (lowest GEDI SNR case) over very dense forests (>95% canopy cover) will result in some measurement bias. Omitting these low SNR cases increased the sampling error. Through this an SNR threshold for a given expected canopy cover can be set. The other applications of the performance tool are also discussed, such as assessing the impact of decisions made in the AGB modelling and signal processing stages on the accuracy of final data products.
Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.
2015-12-01
Our study describes complications introduced by angular direct ionization events on space error rate predictions. In particular, prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data provided from a modern 28 nm SRAM-based device.
Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris
2014-07-01
Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to the data plotted on the CEG and PEG produced risk estimates that were more granular and reflective of a continuously increasing risk scale. The SEG is a modern metric for clinical risk assessments of BG monitor errors that assigns a unique risk score to each monitor data point when compared to a reference value. The SEG allows the clinical accuracy of a BG monitor to be portrayed in many ways, including as the percentages of data points falling into custom-defined risk zones. For modeled data the SEG, compared with the CEG and PEG, allows greater precision for quantifying risk, especially when the risks are low. This tool will be useful to allow regulators and manufacturers to monitor and evaluate glucose monitor performance in their surveillance programs. © 2014 Diabetes Technology Society.
Boundary Layer Transition Results From STS-114
NASA Technical Reports Server (NTRS)
Berry, Scott A.; Horvath, Thomas J.; Cassady, Amy M.; Kirk, Benjamin S.; Wang, K. C.; Hyatt, Andrew J.
2006-01-01
The tool for predicting the onset of boundary layer transition from damage to and/or repair of the thermal protection system developed in support of Shuttle Return to Flight is compared to the STS-114 flight results. The Boundary Layer Transition (BLT) Tool is part of a suite of tools that analyze the aerothermodynamic environment of the local thermal protection system to allow informed disposition of damage for making recommendations to fly as is or to repair. Using mission specific trajectory information and details of each damage site or repair, the expected time of transition onset is predicted to help determine the proper aerothermodynamic environment to use in the subsequent thermal and stress analysis of the local structure. The boundary layer transition criteria utilized for the tool were developed from ground-based measurements to account for the effect of both protuberances and cavities and have been calibrated against flight data. Computed local boundary layer edge conditions provided the means to correlate the experimental results and then to extrapolate to flight. During STS-114, the BLT Tool was utilized and was part of the decision making process to perform an extravehicular activity to remove the large gap fillers. The role of the BLT Tool during this mission, along with the supporting information that was acquired for the on-orbit analysis, is reviewed. Once the large gap fillers were removed, all remaining damage sites were cleared for reentry as is. Post-flight analysis of the transition onset time revealed excellent agreement with BLT Tool predictions.
The Development of Dispatcher Training Simulator in a Thermal Energy Generation System
NASA Astrophysics Data System (ADS)
Hakim, D. L.; Abdullah, A. G.; Mulyadi, Y.; Hasan, B.
2018-01-01
A dispatcher training simulator (DTS) is a real-time Human Machine Interface (HMI)-based control tool that is able to visualize industrial control system processes. The present study was aimed at developing a simulator tool for boilers in a thermal power station. The DTS prototype was designed using technical data of thermal power station boilers in Indonesia and was then implemented in Wonderware Intouch 10. The resulting simulator came with component drawings, animation, control displays, an alarm system, real-time trends, and historical trends. The application used 26 tagnames and was equipped with a security system. Testing showed that the principles of real-time control worked well. It is expected that this research could significantly contribute to the development of thermal power stations, particularly in terms of its application as a training simulator for beginning dispatchers.
A thermal biosensor based on enzyme reaction.
Zheng, Yi-Hua; Hua, Tse-Chao; Xu, Fei
2005-01-01
Application of the thermal biosensor as an analytical tool is promising due to advantages such as universality, simplicity, and quick response. A novel thermal biosensor based on enzyme reaction has been developed. The biosensor is a flow injection analysis system and consists of two channels with an enzyme reaction column and a reference column. The reference column, which is included to eliminate unspecific heat, is inactive toward the specific enzyme reaction of the ingredient to be detected. The specific enzyme reaction takes place in the enzyme reaction column at a constant temperature maintained by a thermoelectric thermostat. A thermal sensor based on a thermoelectric module containing 127 serial BiTe thermocouples is used to monitor the temperature difference between the two streams from the enzyme reaction column and the reference column. The analytical example for dichlorvos shows that this biosensor can be used as an analytical tool in medicine and biology.
NASA Technical Reports Server (NTRS)
1974-01-01
Field measurements performed simultaneously with the Skylab overpass in order to provide comparative calibration and performance evaluation measurements for the EREP sensors are presented. Wavelength regions covered include solar radiation (400 to 1300 nanometers) and thermal radiation (8 to 14 micrometers). Measurements consisted of general conditions and near surface meteorology, atmospheric temperature and humidity vs altitude, the thermal brightness temperature, total and diffuse solar radiation, direct solar radiation (subsequently analyzed for optical depth/transmittance), and target reflectivity/radiance. The particular instruments used are discussed along with analyses performed. Detailed instrument operation, calibrations, techniques, and errors are given.
Feasibility of infrared Earth tracking for deep-space optical communications.
Chen, Yijiang; Hemmati, Hamid; Ortiz, Gerry G
2012-01-01
Infrared (IR) Earth thermal tracking is a viable option for optical communications to distant planet and outer-planetary missions. However, blurring due to finite receiver aperture size distorts IR Earth images in the presence of Earth's nonuniform thermal emission and limits its applicability. We demonstrate a deconvolution algorithm that can overcome this limitation and reduce the error from blurring to a negligible level. The algorithm is applied successfully to Earth thermal images taken by the Mars Odyssey spacecraft. With the solution to this critical issue, IR Earth tracking is established as a viable means for distant planet and outer-planetary optical communications. © 2012 Optical Society of America
Development of TPS flight test and operational instrumentation
NASA Technical Reports Server (NTRS)
Carnahan, K. R.; Hartman, G. J.; Neuner, G. J.
1975-01-01
Thermal and flow sensor instrumentation was developed for use as an integral part of the space shuttle orbiter reusable thermal protection system. The effort was performed in three tasks: a study to determine the optimum instruments and instrument installations for the space shuttle orbiter RSI and RCC TPS; tests and/or analysis to determine the instrument installations to minimize measurement errors; and analysis using data from the test program for comparison to analytical methods. A detailed review of existing state of the art instrumentation in industry was performed to determine the baseline for the departure of the research effort. From this information, detailed criteria for thermal protection system instrumentation were developed.
RCC Plug Repair Thermal Tools for Shuttle Mission Support
NASA Technical Reports Server (NTRS)
Rodriguez, Alvaro C.; Anderson, Brian P.
2010-01-01
A thermal math model for the Space Shuttle Reinforced Carbon-Carbon (RCC) Plug Repair was developed to increase the confidence in the repair entry performance and provide a real-time mission support tool. The thermal response of the plug cover plate, local RCC, and metallic attach hardware can be assessed with this model for any location on the wing leading edge. The geometry and spatial location of the thermal mesh also matches the structural mesh which allows for the direct mapping of temperature loads and computation of the thermoelastic stresses. The thermal model was correlated to a full scale plug repair radiant test. To utilize the thermal model for flight analyses, accurate predictions of protuberance heating were required. Wind tunnel testing was performed at CUBRC to characterize the heat flux in both the radial and angular directions. Due to the complexity of the implementation of the protuberance heating, an intermediate program was developed to output the heating per nodal location for all OML surfaces in SINDA format. Three Design Reference Cases (DRC) were evaluated with the correlated plug thermal math model to bound the environments which the plug repair would potentially be used.
Quasi-static shape adjustment of a 15 meter diameter space antenna
NASA Technical Reports Server (NTRS)
Belvin, W. Keith; Herstrom, Catherine L.; Edighoffer, Harold H.
1987-01-01
A 15 meter diameter Hoop-Column antenna has been analyzed and tested to study shape adjustment of the reflector surface. The Hoop-Column antenna concept employs pretensioned cables and mesh to produce a paraboloidal reflector surface. Fabrication errors and thermal distortions may significantly reduce surface accuracy and consequently degrade electromagnetic performance. Thus, the ability to adjust the surface shape is desirable. The shape adjustment algorithm consisted of finite element and least squares error analyses to minimize the surface distortions. Experimental results verified the analysis. Application of the procedure resulted in a reduction of surface error by 38 percent. Quasi-static shape adjustment has the potential for on-orbit compensation for a variety of surface shape distortions.
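A minimal sketch of the least-squares step described in the abstract above: given a sensitivity matrix relating control-cable adjustments to surface displacements (which in the paper would come from a finite element model), solve for the adjustments that best cancel the measured surface error. The matrix and measurements below are invented for illustration.

```python
import numpy as np

# S[i, j]: predicted normal displacement of surface target i per unit
# adjustment of control cable j (would come from a finite element model).
S = np.array([[ 0.8,  0.1, -0.2],
              [ 0.2,  0.9,  0.1],
              [-0.1,  0.3,  0.7],
              [ 0.4,  0.4,  0.2],
              [ 0.1, -0.2,  0.9]])

measured_error = np.array([1.5, -0.8, 0.6, 0.9, -1.1])   # mm, measured surface distortion

# Least-squares cable adjustments that best cancel the measured distortion
adjustment, *_ = np.linalg.lstsq(S, -measured_error, rcond=None)
residual = measured_error + S @ adjustment

rms_before = np.sqrt(np.mean(measured_error ** 2))
rms_after = np.sqrt(np.mean(residual ** 2))
print(adjustment, rms_before, rms_after)
```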
Effect of Slice Error of Glass on Zero Offset of Capacitive Accelerometer
NASA Astrophysics Data System (ADS)
Hao, R.; Yu, H. J.; Zhou, W.; Peng, B.; Guo, J.
2018-03-01
The packaging process of a capacitive accelerometer was studied. The silicon-glass bonding process was adopted for the sensor chip and glass, and the sensor chip and glass were adhered to a ceramic substrate. The three-layer structure curved due to thermal mismatch, and the slice error of the glass led to an asymmetrical curvature of the sensor chip. Thus, the sensitive mass of the accelerometer deviated along the sensitive direction, which caused a zero offset drift. It is therefore meaningful to quantify the influence of the slice error of the glass; the simulation results showed that the zero output drift was 12.3×10⁻³ m/s² when the deviation was 40 μm.
NASA Technical Reports Server (NTRS)
Balla, R. Jeffrey; Miller, Corey A.
2008-01-01
This study seeks a numerical algorithm that optimizes frequency precision for the damped sinusoids generated by the nonresonant LITA technique. It compares computed frequencies, frequency errors, and fit errors obtained using five primary signal analysis methods. Using variations on different algorithms within each primary method, results from 73 fits are presented. Best results are obtained using an AutoRegressive method. Compared to previous results using Prony's method, single-shot waveform frequencies are reduced by approx. 0.4% and frequency errors are reduced by a factor of approx. 20 at 303 K to approx. 0.1%. We explore the advantages of high waveform sample rates and the potential for measurements in low-density gases.
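As a hedged illustration of an autoregressive frequency estimate for a noisy damped sinusoid (one simple member of the method family compared above, not the authors' exact algorithm), a second-order linear-prediction fit recovers the oscillation frequency from the roots of the characteristic polynomial:

```python
import numpy as np

def ar2_frequency(samples, dt):
    """Estimate the frequency of a noisy damped sinusoid via a 2nd-order
    autoregressive (linear-prediction) least-squares fit."""
    y = np.asarray(samples, dtype=float)
    # Least-squares fit of y[n] = a1*y[n-1] + a2*y[n-2]
    A = np.column_stack((y[1:-1], y[:-2]))
    a1, a2 = np.linalg.lstsq(A, y[2:], rcond=None)[0]
    # Roots of z^2 - a1*z - a2 = 0 encode the damped complex exponential
    roots = np.roots([1.0, -a1, -a2])
    return abs(np.angle(roots[0])) / (2.0 * np.pi * dt)

# Synthetic check: 1.2 MHz damped sinusoid sampled at 50 MS/s with light noise
np.random.seed(1)
dt = 2e-8
t = np.arange(512) * dt
signal = np.exp(-2e5 * t) * np.cos(2 * np.pi * 1.2e6 * t) + 0.01 * np.random.randn(t.size)
print(ar2_frequency(signal, dt))   # should be close to 1.2e6 Hz
```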
Emissivity correction for interpreting thermal radiation from a terrestrial surface
NASA Technical Reports Server (NTRS)
Sutherland, R. A.; Bartholic, J. F.; Gerber, J. F.
1979-01-01
A general method of accounting for emissivity in making temperature determinations of graybody surfaces from radiometric data is presented. The method differs from previous treatments in that a simple blackbody calibration and graphical approach is used rather than numerical integrations which require detailed knowledge of an instrument's spectral characteristics. Also, errors caused by approximating instrumental response with the Stefan-Boltzmann law rather than with an appropriately weighted Planck integral are examined. In the 8-14 micron wavelength interval, it is shown that errors are at most on the order of 3 °C for the extremes of the earth's temperature and emissivity. For more practical limits, however, errors are less than 0.5 °C.
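A minimal sketch of the graybody correction discussed above, assuming the radiometer response follows the Stefan-Boltzmann law (the approximation whose residual errors the paper quantifies): the measured radiance is treated as emitted surface radiation plus reflected downwelling sky radiation, and the surface temperature is solved for.

```python
SIGMA = 5.670374e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def graybody_surface_temperature(t_brightness_k, emissivity, t_sky_k):
    """Correct a radiometric brightness temperature for graybody emissivity,
    assuming Stefan-Boltzmann (broadband) instrument response."""
    measured = SIGMA * t_brightness_k ** 4                    # apparent exitance
    reflected = (1.0 - emissivity) * SIGMA * t_sky_k ** 4     # reflected sky term
    emitted = (measured - reflected) / emissivity             # true surface exitance
    return (emitted / SIGMA) ** 0.25

# Example: apparent 295 K surface, emissivity 0.96, effective sky temperature 260 K
print(graybody_surface_temperature(295.0, 0.96, 260.0))
```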
Evaluation of Erosion Resistance of Advanced Turbine Thermal Barrier Coatings
NASA Technical Reports Server (NTRS)
Zhu, Dongming; Kuczmarski, Maria A.; Miller, Robert A.; Cuy, Michael D.
2007-01-01
The erosion resistant turbine thermal barrier coating system is critical to aircraft engine performance and durability. By demonstrating advanced turbine material testing capabilities, we will be able to facilitate the critical turbine coating and subcomponent development and help establish advanced erosion-resistant turbine airfoil thermal barrier coatings design tools. The objective of this work is to determine erosion resistance of advanced thermal barrier coating systems under simulated engine erosion and/or thermal gradient environments, validating advanced turbine airfoil thermal barrier coating systems based on nano-tetragonal phase toughening design approaches.
Consciousness-Raising, Error Correction and Proofreading
ERIC Educational Resources Information Center
O'Brien, Josephine
2015-01-01
The paper discusses the impact of developing a consciousness-raising approach in error correction at the sentence level to improve students' proofreading ability. Learners of English in a foreign language environment often rely on translation as a composing tool and while this may act as a scaffold and provide some support, it frequently leads to…
Reliability Estimation for Aggregated Data: Applications for Organizational Research.
ERIC Educational Resources Information Center
Hart, Roland J.; Bradshaw, Stephen C.
This report provides the statistical tools necessary to measure the extent of error that exists in organizational record data and group survey data. It is felt that traditional methods of measuring error are inappropriate or incomplete when applied to organizational groups, especially in studies of organizational change when the same variables are…
Methods as Tools: A Response to O'Keefe.
ERIC Educational Resources Information Center
Hewes, Dean E.
2003-01-01
Tries to distinguish the key insights from some distortions by clarifying the goals of experiment-wise error control that D. O'Keefe correctly identifies as vague and open to misuse. Concludes that a better understanding of the goal of experiment-wise error correction erases many of these "absurdities," but the clarifications necessary…
Addressing Misconceptions in Geometry through Written Error Analyses
ERIC Educational Resources Information Center
Kembitzky, Kimberle A.
2009-01-01
This study examined the improvement of students' comprehension of geometric concepts through analytical writing about their own misconceptions using a reflective tool called an ERNIe (acronym for ERror aNalysIs). The purpose of this study was to determine whether the ERNIe process could be used to correct geometric misconceptions, as well as how…
A Simplified Shuttle Payload Thermal Analyzer /SSPTA/ program
NASA Technical Reports Server (NTRS)
Bartoszek, J. T.; Huckins, B.; Coyle, M.
1979-01-01
A simple thermal analysis program for Space Shuttle payloads has been developed to accommodate the user who requires an easily understood but dependable analytical tool. The thermal analysis program includes several thermal subprograms traditionally employed in spacecraft thermal studies, a data management system for data generated by the subprograms, and a master program to coordinate the data files and thermal subprograms. The language and logic used to run the thermal analysis program are designed for the small user. In addition, analytical and storage techniques which conserve computer time and minimize core requirements are incorporated into the program.
Horton, Kyle G; Shriver, W Gregory; Buler, Jeffrey J
2015-03-01
There are several remote-sensing tools readily available for the study of nocturnally flying animals (e.g., migrating birds), each possessing unique measurement biases. We used three tools (weather surveillance radar, thermal infrared camera, and acoustic recorder) to measure temporal and spatial patterns of nocturnal traffic estimates of flying animals during the spring and fall of 2011 and 2012 in Lewes, Delaware, USA. Our objective was to compare measures among different technologies to better understand their animal detection biases. For radar and thermal imaging, the greatest observed traffic rate tended to occur at, or shortly after, evening twilight, whereas for the acoustic recorder, peak bird flight-calling activity was observed just prior to morning twilight. Comparing traffic rates during the night for all seasons, we found that mean nightly correlations between acoustics and the other two tools were weakly correlated (thermal infrared camera and acoustics, r = 0.004 ± 0.04 SE, n = 100 nights; radar and acoustics, r = 0.14 ± 0.04 SE, n = 101 nights), but highly variable on an individual nightly basis (range = -0.84 to 0.92, range = -0.73 to 0.94). The mean nightly correlations between traffic rates estimated by radar and by thermal infrared camera during the night were more strongly positively correlated (r = 0.39 ± 0.04 SE, n = 125 nights), but also were highly variable for individual nights (range = -0.76 to 0.98). Through comparison with radar data among numerous height intervals, we determined that flying animal height above the ground influenced thermal imaging positively and flight call detections negatively. Moreover, thermal imaging detections decreased with the presence of cloud cover and increased with mean ground flight speed of animals, whereas acoustic detections showed no relationship with cloud cover presence but did decrease with increased flight speed. We found sampling methods to be positively correlated when comparing mean nightly traffic rates across nights. The strength of these correlations generally increased throughout the night, peaking 2-3 hours before morning twilight. Given the convergence of measures by different tools at this time, we suggest that researchers consider sampling flight activity in the hours before morning twilight when differences due to detection biases among sampling tools appear to be minimized.
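The comparison metric used above, the mean of per-night correlations between traffic rates from two sensing tools, can be computed as in the following sketch; the input arrays are illustrative, not the study's data.

```python
import numpy as np

def mean_nightly_correlation(rates_a, rates_b):
    """Mean (and standard error) of per-night Pearson correlations between
    two sensing tools. `rates_a` and `rates_b` are lists of equal-length
    arrays, one pair of traffic-rate time series per night."""
    r = np.array([np.corrcoef(a, b)[0, 1] for a, b in zip(rates_a, rates_b)])
    return r.mean(), r.std(ddof=1) / np.sqrt(r.size), r

# Two made-up nights of hourly traffic rates
radar = [np.array([5., 40., 80., 60., 30.]), np.array([2., 10., 25., 20., 12.])]
acoustic = [np.array([0., 3., 5., 9., 14.]), np.array([1., 1., 4., 6., 9.])]
mean_r, se_r, nightly = mean_nightly_correlation(radar, acoustic)
print(mean_r, se_r, nightly)
```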
NASA Astrophysics Data System (ADS)
Kim, Younsu; Audigier, Chloé; Dillow, Austin; Cheng, Alexis; Boctor, Emad M.
2017-03-01
Thermal monitoring for ablation therapy has high demands for preserving healthy tissues while removing malignant ones completely. Various methods have been investigated. However, exposure to radiation, cost-effectiveness, and inconvenience hinder the use of X-ray or MRI methods. Due to the non-invasiveness and real-time capabilities of ultrasound, it is widely used in intraoperative procedures. Ultrasound thermal monitoring methods have been developed for affordable monitoring in real-time. We propose a new method for thermal monitoring using an ultrasound element. A lead zirconate titanate (PZT) element is inserted into the liver tissue to generate the ultrasound signal, and the single-travel time of flight from the PZT element to the ultrasound transducer is recorded. We detect the speed of sound change caused by the increase in temperature during ablation therapy. We performed an ex vivo experiment with liver tissues to verify the feasibility of our speed of sound estimation technique. The time of flight information is used in an optimization method to recover the speed of sound maps during the ablation, which are then converted into temperature maps. The result shows that the trend of temperature changes matches the temperature measured at a single point. The estimation error can be decreased by using a proper curve linking the speed of sound to the temperature. The average error over time was less than 3 degrees Celsius for a bovine liver. The speed of sound estimation using a single PZT element can be used for thermal monitoring.
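A minimal sketch of the time-of-flight idea described above: a one-way travel time between the embedded PZT element and the transducer is converted to a speed of sound and then, through a calibration curve, to temperature. The path length and the linear speed-of-sound/temperature relation below are assumed values, not the paper's calibration.

```python
def temperature_from_tof(tof_s, path_length_m, sos_to_temp):
    """Convert a one-way time of flight between the embedded PZT element and
    the imaging transducer into a temperature estimate.

    `sos_to_temp` is a calibration curve mapping speed of sound (m/s) to
    temperature (deg C); in practice it would be fitted to tissue-specific data.
    """
    speed_of_sound = path_length_m / tof_s
    return sos_to_temp(speed_of_sound)

# Hypothetical linear calibration around body temperature, for illustration only
sos_to_temp = lambda c: 37.0 + (c - 1540.0) / 1.6   # assumes ~1.6 m/s change per deg C
print(temperature_from_tof(tof_s=32.2e-6, path_length_m=0.05, sos_to_temp=sos_to_temp))
```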
[INVITED] Luminescent QR codes for smart labelling and sensing
NASA Astrophysics Data System (ADS)
Ramalho, João F. C. B.; António, L. C. F.; Correia, S. F. H.; Fu, L. S.; Pinho, A. S.; Brites, C. D. S.; Carlos, L. D.; André, P. S.; Ferreira, R. A. S.
2018-05-01
QR (Quick Response) codes are two-dimensional barcodes composed of special geometric patterns of black modules in a white square background that can encode different types of information with high density and robustness, correct errors and physical damages, thus keeping the stored information protected. Recently, these codes have gained increased attention as they offer a simple physical tool for quick access to Web sites for advertising and social interaction. Challenges include increasing the storage capacity limit, even though they can store approximately 350 times more information than common barcodes, and encoding different types of characters (e.g., numeric, alphanumeric, kanji and kana). In this work, we fabricate luminescent QR codes based on a poly(methyl methacrylate) substrate coated with organic-inorganic hybrid materials doped with trivalent terbium (Tb3+) and europium (Eu3+) ions, demonstrating an increase in storage capacity per unit area by a factor of two through colour multiplexing, when compared to conventional QR codes. A novel methodology to decode the multiplexed QR codes is developed based on a colour separation threshold, where a decision level is calculated through a maximum-likelihood criterion to minimize the error probability of the demultiplexed modules, maximizing the foreseen total storage capacity. Moreover, the thermal dependence of the emission colour coordinates of the Eu3+/Tb3+-based hybrids enables simultaneous QR code colour-multiplexing and may be used to sense temperature (reproducibility higher than 93%), opening new fields of application for QR codes as smart labels for sensing.
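The decision-level idea described above can be sketched as the intersection of two Gaussian class likelihoods for a module's colour-channel intensity; the class means and standard deviations below are invented for illustration, not fitted to the Eu3+/Tb3+ emission data.

```python
import numpy as np

def ml_threshold(mu0, sigma0, mu1, sigma1):
    """Decision level where two Gaussian class likelihoods intersect, which
    minimizes the error probability when demultiplexing a module's colour."""
    if np.isclose(sigma0, sigma1):
        return 0.5 * (mu0 + mu1)
    # Coefficients of the quadratic obtained by equating the two log-likelihoods
    a = 1.0 / (2 * sigma1 ** 2) - 1.0 / (2 * sigma0 ** 2)
    b = mu0 / sigma0 ** 2 - mu1 / sigma1 ** 2
    c = mu1 ** 2 / (2 * sigma1 ** 2) - mu0 ** 2 / (2 * sigma0 ** 2) + np.log(sigma1 / sigma0)
    roots = np.roots([a, b, c])
    # keep the root lying between the two class means
    return [float(r.real) for r in roots if min(mu0, mu1) <= r.real <= max(mu0, mu1)][0]

# Module intensities in one colour channel for the "off" vs "on" emission classes
thr = ml_threshold(mu0=40.0, sigma0=8.0, mu1=180.0, sigma1=20.0)
module_values = np.array([35.0, 150.0, 90.0, 200.0])
decoded_bits = (module_values > thr).astype(int)
print(thr, decoded_bits)
```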
A device for high-throughput monitoring of degradation in soft tissue samples.
Tzeranis, D S; Panagiotopoulos, I; Gkouma, S; Kanakaris, G; Georgiou, N; Vaindirlis, N; Vasileiou, G; Neidlin, M; Gkousioudi, A; Spitas, V; Macheras, G A; Alexopoulos, L G
2018-06-06
This work describes the design and validation of a novel device, the High-Throughput Degradation Monitoring Device (HDD), for monitoring the degradation of 24 soft tissue samples over incubation periods of several days inside a cell culture incubator. The device quantifies sample degradation by monitoring its deformation induced by a static gravity load. Initial instrument design and experimental protocol development focused on quantifying cartilage degeneration. Characterization of measurement errors, caused mainly by thermal transients and by translating the instrument sensor, demonstrated that HDD can quantify sample degradation with <6 μm precision and <10 μm temperature-induced errors. HDD capabilities were evaluated in a pilot study that monitored the degradation of fresh ex vivo human cartilage samples by collagenase solutions over three days. HDD could robustly resolve the effects of collagenase concentrations as small as 0.5 mg/ml. Careful sample preparation resulted in measurements that did not suffer from donor-to-donor variation (coefficient of variation <70%). Due to its unique combination of sample throughput, measurement precision, temporal sampling and experimental versatility, HDD provides a novel biomechanics-based experimental platform for quantifying the effects of proteins (cytokines, growth factors, enzymes, antibodies) or small molecules on the degradation of soft tissues or tissue engineering constructs. Thereby, HDD can complement established tools and in vitro models in important applications including drug screening and biomaterial development. Copyright © 2018 Elsevier Ltd. All rights reserved.
Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.
2016-01-01
We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287
Ground-based remote sensing of thin clouds in the Arctic
NASA Astrophysics Data System (ADS)
Garrett, T. J.; Zhao, C.
2012-11-01
This paper describes a method for using interferometer measurements of downwelling thermal radiation to retrieve the properties of single-layer clouds. Cloud phase is determined from ratios of thermal emission in three "micro-windows" where absorption by water vapor is particularly small. Cloud microphysical and optical properties are retrieved from thermal emission in two micro-windows, constrained by the transmission through clouds of stratospheric ozone emission. Assuming a cloud does not approximate a blackbody, the estimated 95% confidence retrieval errors in effective radius, visible optical depth, number concentration, and water path are, respectively, 10%, 20%, 38% (55% for ice crystals), and 16%. Applied to data from the Atmospheric Radiation Measurement program (ARM) North Slope of Alaska - Adjacent Arctic Ocean (NSA-AAO) site near Barrow, Alaska, retrievals show general agreement with ground-based microwave radiometer measurements of liquid water path. Compared to other retrieval methods, advantages of this technique include its ability to characterize thin clouds year round, that water vapor is not a primary source of retrieval error, and that the retrievals of microphysical properties are only weakly sensitive to retrieved cloud phase. The primary limitation is the inapplicability to thicker clouds that radiate as blackbodies.
NREL Multiphysics Modeling Tools and ISC Device for Designing Safer Li-Ion Batteries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pesaran, Ahmad A.; Yang, Chuanbo
2016-03-24
The National Renewable Energy Laboratory has developed a portfolio of multiphysics modeling tools to aid battery designers better understand the response of lithium ion batteries to abusive conditions. We will discuss this portfolio, which includes coupled electrical, thermal, chemical, electrochemical, and mechanical modeling. These models can simulate the response of a cell to overheating, overcharge, mechanical deformation, nail penetration, and internal short circuit. Cell-to-cell thermal propagation modeling will be discussed.
1991-01-01
degrade due to thermal cycling, multiple repairs, and/or corrosion. Depending on the service history and alloy, reduced properties result from carbide ...
GOCE and Future Gravity Missions for Geothermal Energy Exploitation
NASA Astrophysics Data System (ADS)
Pastorutti, Alberto; Braitenberg, Carla; Pivetta, Tommaso; Mariani, Patrizia
2016-08-01
Geothermal energy is a valuable renewable energy source whose exploitation contributes to the worldwide reduction in consumption of fossil fuels such as oil and gas. The exploitation of geothermal energy is facilitated where the thermal gradient is higher than average, leading to increased surface heat flow. Apart from the hydrologic circulation properties, which depend on rock fractures and are important because of the heat transported from the hotter layers to the surface, essential properties that increase the thermal gradient are crustal thinning and radiogenic heat-producing rocks. Crustal thickness and rock composition form the link to exploration with the satellite-derived gravity field, because both induce subsurface mass changes that generate observable gravity anomalies. The recognition of gravity as a useful investigation tool for geothermal energy led to a cooperation with ESA and the International Renewable Energy Agency (IRENA) that included the GOCE-derived gravity field in the online geothermal energy investigation tool of the IRENA database. The relation between the gravity field products, such as the free air gravity anomaly and the Bouguer and isostatic anomalies, and the heat flow values is, however, neither straightforward nor unique. It is complicated by the fact that it depends on the geodynamical context, the geologic context and the age of the crustal rocks. Globally, the geological context and geodynamical history of an area are known almost everywhere, so that a specific known relationship between gravity and geothermal potential can be applied. In this study we show the results of a systematic analysis of the problem, including some simulations of the key factors. The study relies on the data of GOCE and the resolution and accuracy of this satellite. We also give conclusions on the improved exploration power of a gravity mission with higher spatial resolution and reduced data error, as could be achieved in principle by flying an atom interferometer sensor on board a satellite.
Operational characteristics of the NNPB plunger in the glass container industry
NASA Astrophysics Data System (ADS)
Penlington, Roger
Although glass containers are an everyday item the process responsible for their production is not scientifically understood. Developments have occurred slowly over many years, mostly on a trial and error basis and in response to economic pressures. The narrow neck press and blow (NNPB) process has evolved in recent years as a result of attempts to reduce container weight. The fundamental component of the NNPB process is the plunger which is responsible for the initiation of the cavity and control of glass distribution within the container. The NNPB plunger functions as a form tool and as a heat exchanger, thus requiring a carefully selected range of properties. The Engineer responsible for tooling selection and operation has a limited resource of scientific knowledge to enable the performance of the process to be optimised. The current NNPB plunger is subject to high rates of wear and is directly responsible for product defects, thermal instability and limits process speed. The work presented here is a scientific study of current NNPB plunger technology. The plunger has been investigated in relation to the requirements of the glass container forming process. The materials used have been examined, before and after use and their wear modes explained. The thermal properties of the plunger have, as far as is possible, been examined during the forming cycle. When combined with results from the characterisation of transformations occurring in the material, during its service life, operational requirements have been explained. The ability of the NNPB plunger to remove heat from the glass has been investigated, and has illustrated significant deficiencies in the current arrangement. Details are given as to how these deficiencies may be overcome to enable the Engineer to regain control of the process. As a result of the study many phenomena exhibited by the NNPB plunger are now understood and may be related to the performance of the process.
Prediction of protein mutant stability using classification and regression tool.
Huang, Liang-Tsung; Saraboji, K; Ho, Shinn-Ying; Hwang, Shiow-Fen; Ponnuswamy, M N; Gromiha, M Michael
2007-02-01
Prediction of protein stability upon amino acid substitutions is an important problem in molecular biology, and solving it would help in designing stable mutants. In this work, we have analyzed the stability of protein mutants using two different datasets of 1396 and 2204 mutants obtained from the ProTherm database, respectively for free energy change due to thermal denaturation (DeltaDeltaG) and denaturant denaturation (DeltaDeltaG(H(2)O)). We have used a set of 48 physical, chemical, energetic and conformational properties of amino acid residues and computed the difference of amino acid properties for each mutant in both sets of data. These differences in amino acid properties have been related to protein stability (DeltaDeltaG and DeltaDeltaG(H(2)O)) and are used to train a classification and regression tool for predicting the stability of protein mutants. Further, we have tested the method with 4-fold, 5-fold and 10-fold cross-validation procedures. We found that the physical properties, shape and flexibility are important determinants of protein stability. The classification of mutants based on secondary structure (helix, strand, turn and coil) and solvent accessibility (buried, partially buried, partially exposed and exposed) distinguished the stabilizing/destabilizing mutants at an average accuracy of 81% and 80%, respectively for DeltaDeltaG and DeltaDeltaG(H(2)O). The correlation between the experimental and predicted stability change is 0.61 for DeltaDeltaG and 0.44 for DeltaDeltaG(H(2)O). Further, the free energy change due to the replacement of an amino acid residue has been predicted within an average error of 1.08 kcal/mol and 1.37 kcal/mol for thermal and chemical denaturation, respectively. The relative importance of secondary structure and solvent accessibility, and the influence of the dataset on prediction of protein mutant stability, are discussed.
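A hedged sketch of the classification-and-regression-tree idea: differences of amino-acid properties as features and the stability change as the regression target. The feature values, targets and the use of scikit-learn's DecisionTreeRegressor are illustrative assumptions, not the authors' tool or dataset.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

# Each row holds differences (mutant minus wild type) of a few amino-acid
# properties, e.g. hydrophobicity, volume and flexibility; y is DeltaDeltaG.
X = np.array([
    [-0.5,  12.3,  0.1],
    [ 1.2, -40.0, -0.3],
    [ 0.3,   5.1,  0.0],
    [-1.1,  60.2,  0.4],
    [ 0.8, -15.6, -0.2],
    [-0.2,  25.0,  0.2],
])
y = np.array([-0.8, 1.4, 0.2, -2.1, 0.9, -0.5])   # kcal/mol, illustrative values

tree = DecisionTreeRegressor(max_depth=3, random_state=0)
scores = cross_val_score(tree, X, y, cv=3, scoring="neg_mean_absolute_error")
print("CV mean absolute error (kcal/mol):", -scores.mean())

tree.fit(X, y)
print(tree.predict([[0.4, -10.0, -0.1]]))   # predicted DeltaDeltaG for a new mutant
```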
A semi-automatic annotation tool for cooking video
NASA Astrophysics Data System (ADS)
Bianco, Simone; Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo; Margherita, Roberto; Marini, Gianluca; Gianforme, Giorgio; Pantaleo, Giuseppe
2013-03-01
In order to create a cooking assistant application to guide users in the preparation of dishes relevant to their profile diets and food preferences, it is necessary to accurately annotate the video recipes, identifying and tracking the foods handled by the cook. These videos present particular annotation challenges such as frequent occlusions, food appearance changes, etc. Manually annotating the videos is a time-consuming, tedious and error-prone task. Fully automatic tools that integrate computer vision algorithms to extract and identify the elements of interest are not error free, and false positive and false negative detections need to be corrected in a post-processing stage. We present an interactive, semi-automatic tool for the annotation of cooking videos that integrates computer vision techniques under the supervision of the user. The annotation accuracy is increased with respect to completely automatic tools and the human effort is reduced with respect to completely manual ones. The performance and usability of the proposed tool are evaluated on the basis of the time and effort required to annotate the same video sequences.
Development of Bio-impedance Analyzer (BIA) for Body Fat Calculation
NASA Astrophysics Data System (ADS)
Riyadi, Munawar A.; Nugraha, A.; Santoso, M. B.; Septaditya, D.; Prakoso, T.
2017-04-01
Common weight scales cannot assess body composition or determine the fat mass and fat-free mass that make up body weight. This research proposes a bio-impedance analysis (BIA) tool capable of body composition assessment. The tool uses four electrodes, two of which pass a 50 kHz sine-wave current through the body while the other two measure the voltage produced by the body for impedance analysis. Parameters such as height, weight, age, and gender are provided individually. These parameters, together with the impedance measurements, are then processed to produce a body fat percentage. The experimental results show impressive repeatability for successive measurements (stdev ≤ 0.25% fat mass). Moreover, results for the hand-to-hand node scheme reveal an average absolute difference between the two analyzer tools of 0.48% (fat mass) across all subjects, with a maximum absolute discrepancy of 1.22% (fat mass). In addition, the relative error normalized to Omron's HBF-306 as a comparison tool is less than 2%. As a result, the system offers a good evaluation tool for fat mass in the body.
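A minimal sketch of a generic BIA pipeline of the kind described above: the measured impedance is combined with height, weight, age and gender through a regression equation to estimate fat-free mass and hence fat percentage. The regression coefficients are placeholders, not those of the proposed tool or of the Omron HBF-306.

```python
def body_fat_percentage(impedance_ohm, height_cm, weight_kg, age_yr, is_male):
    """Illustrative BIA pipeline: impedance index -> fat-free mass -> fat %.

    The coefficients below are placeholders standing in for whatever
    population-specific equation a real analyzer embeds; they are NOT the
    ones used by the device in the paper.
    """
    impedance_index = height_cm ** 2 / impedance_ohm          # cm^2 / ohm
    fat_free_mass = (0.36 * impedance_index + 0.16 * height_cm
                     + 0.29 * weight_kg - 0.07 * age_yr
                     + (4.0 if is_male else 0.0) - 17.0)       # kg, placeholder fit
    fat_mass = weight_kg - fat_free_mass
    return 100.0 * fat_mass / weight_kg

print(body_fat_percentage(impedance_ohm=500.0, height_cm=172.0, weight_kg=70.0,
                          age_yr=30.0, is_male=True))
```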
Aerodynamics and thermal physics of helicopter ice accretion
NASA Astrophysics Data System (ADS)
Han, Yiqiang
Ice accretion on aircraft introduces significant loss in airfoil performance. Reduced lift-to-drag ratio reduces the vehicle capability to maintain altitude and also limits its maneuverability. Current ice accretion performance degradation modeling approaches are calibrated only to a limited envelope of liquid water content, impact velocity, temperature, and water droplet size; consequently inaccurate aerodynamic performance degradations are estimated. The reduced ice accretion prediction capabilities in the glaze ice regime are primarily due to a lack of knowledge of surface roughness induced by ice accretion. A comprehensive understanding of the ice roughness effects on airfoil heat transfer, ice accretion shapes, and ultimately aerodynamics performance is critical for the design of ice protection systems. Surface roughness effects on both heat transfer and aerodynamic performance degradation on airfoils have been experimentally evaluated. Novel techniques, such as ice molding and casting methods and transient heat transfer measurement using non-intrusive thermal imaging methods, were developed at the Adverse Environment Rotor Test Stand (AERTS) facility at Penn State. A novel heat transfer scaling method specifically for the turbulent flow regime was also conceived. A heat transfer scaling parameter, labeled the Coefficient of Stanton and Reynolds Number (CSR = St_x/Re_x^-0.2), has been validated against reference data found in the literature for rough flat plates with Reynolds number (Re) up to 1×10⁷, for rough cylinders with Re ranging from 3×10⁴ to 4×10⁶, and for turbine blades with Re from 7.5×10⁵ to 7×10⁶. This is the first time that the effect of Reynolds number is shown to be successfully eliminated on heat transfer magnitudes measured on rough surfaces. Analytical models for ice roughness distribution, heat transfer prediction, and aerodynamics performance degradation due to ice accretion have also been developed. The ice roughness prediction model was developed based on a set of 82 experimental measurements and also compared to existing prediction tools. Two reference predictions found in the literature yielded 76% and 54% discrepancy with respect to experimental testing, whereas the proposed ice roughness prediction model resulted in a 31% minimum accuracy in prediction. It must be noted that the accuracy of the proposed model is within the ice shape reproduction uncertainty of icing facilities. Based on the new ice roughness prediction model and the CSR heat transfer scaling method, an icing heat transfer model was developed. The approach achieved high accuracy in heat transfer prediction compared to experiments conducted at the AERTS facility. The discrepancy between predictions and experimental results was within +/-15%, which was within the measurement uncertainty range of the facility. By combining both the ice roughness and heat transfer predictions, and incorporating the modules into an existing ice prediction tool (LEWICE), improved prediction capability was obtained, especially for the glaze regime. With the available ice shapes accreted at the AERTS facility and additional experiments found in the literature, 490 sets of experimental ice shapes and corresponding aerodynamics testing data were available. A physics-based performance degradation empirical tool was developed and achieved a mean absolute deviation of 33% when compared to the entire experimental dataset, whereas 60% to 243% discrepancies were observed using legacy drag penalty prediction tools.
Rotor torque predictions coupling Blade Element Momentum Theory and the proposed drag performance degradation tool were conducted on a total of 17 validation cases. The coupled prediction tool achieved a 10% prediction error for clean rotor conditions and a 16% error for iced rotor conditions. It was shown that additional roughness elements could affect the measured drag by up to 25% during experimental testing, emphasizing the need for realistic ice structures during aerodynamic modeling and testing for ice accretion.
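For reference, the scaling parameter named in the abstract, CSR = St_x/Re_x^-0.2, is a one-line computation; the Stanton and Reynolds numbers below are illustrative values, not measurements from the AERTS facility.

```python
def coefficient_of_stanton_and_reynolds(stanton, reynolds_x):
    """CSR = St_x / Re_x^-0.2, the scaling parameter reported above as
    collapsing rough-surface heat transfer data across Reynolds numbers."""
    return stanton / reynolds_x ** -0.2

# Two conditions with different Reynolds numbers; similar CSR values would
# indicate the Reynolds-number effect has been scaled out (illustrative numbers).
print(coefficient_of_stanton_and_reynolds(stanton=2.4e-3, reynolds_x=1.0e6))
print(coefficient_of_stanton_and_reynolds(stanton=1.9e-3, reynolds_x=3.0e6))
```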
A Digital Lock-In Amplifier for Use at Temperatures of up to 200 °C
Cheng, Jingjing; Xu, Yingjun; Wu, Lei; Wang, Guangwei
2016-01-01
Weak voltage signals cannot be reliably measured using currently available logging tools when these tools are subject to high-temperature (up to 200 °C) environments for prolonged periods. In this paper, we present a digital lock-in amplifier (DLIA) capable of operating at temperatures of up to 200 °C. The DLIA contains a low-noise instrument amplifier and signal acquisition and the corresponding signal processing electronics. The high-temperature stability of the DLIA is achieved by designing system-in-package (SiP) and multi-chip module (MCM) components with low thermal resistances. An effective look-up-table (LUT) method was developed for the lock-in amplifier algorithm, to decrease the complexity of the calculations and generate less heat than the traditional way. The performance of the design was tested by determining the linearity, gain, Q value, and frequency characteristic of the DLIA between 25 and 200 °C. The maximal nonlinear error in the linearity of the DLIA working at 200 °C was about 1.736% when the equivalent input was a sine wave signal with an amplitude of between 94.8 and 1896.0 nV and a frequency of 800 kHz. The tests showed that the DLIA proposed could work effectively in high-temperature environments up to 200 °C. PMID:27845710
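A minimal sketch of table-driven lock-in demodulation in the spirit of the LUT method described above (a generic illustration, not the DLIA firmware): reference sine and cosine values are precomputed once and indexed per sample, and the in-phase/quadrature averages yield amplitude and phase.

```python
import numpy as np

def lut_lock_in(samples, fs, f_ref, lut_size=256):
    """Digital lock-in demodulation using precomputed sine/cosine look-up
    tables instead of per-sample trigonometric calls."""
    sin_lut = np.sin(2 * np.pi * np.arange(lut_size) / lut_size)
    cos_lut = np.cos(2 * np.pi * np.arange(lut_size) / lut_size)
    n = np.arange(len(samples))
    idx = np.round(lut_size * f_ref * n / fs).astype(int) % lut_size
    i = np.mean(samples * cos_lut[idx])   # in-phase component
    q = np.mean(samples * sin_lut[idx])   # quadrature component
    amplitude = 2.0 * np.hypot(i, q)
    phase = np.arctan2(q, i)
    return amplitude, phase

# 800 kHz test tone of 1 uV amplitude sampled at 10 MS/s with broadband noise
np.random.seed(0)
fs, f_ref = 10e6, 800e3
n = np.arange(5000)
signal = 1e-6 * np.sin(2 * np.pi * f_ref * n / fs + 0.3) + 5e-6 * np.random.randn(n.size)
print(lut_lock_in(signal, fs, f_ref))   # recovered amplitude should be close to 1e-6
```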
ERIC Educational Resources Information Center
Carroll, Erin Ashley
2013-01-01
Creativity is understood intuitively, but it is not easily defined and therefore difficult to measure. This makes it challenging to evaluate the ability of a digital tool to support the creative process. When evaluating creativity support tools (CSTs), it is critical to look beyond traditional time, error, and other productivity measurements that…
NASA Astrophysics Data System (ADS)
Kharbouch, Yassine; Mimet, Abdelaziz; El Ganaoui, Mohammed; Ouhsaine, Lahoucine
2018-07-01
This study investigates the thermal energy potential and economic feasibility of an air-conditioned family household integrating phase change materials (PCM), considering different climate zones in Morocco. A simulation-based optimisation was carried out in order to define the optimal design of a PCM-enhanced household envelope for thermal energy effectiveness and cost-effectiveness of predefined candidate solutions. The optimisation methodology is based on coupling Energyplus® as a dynamic simulation tool and GenOpt® as an optimisation tool. Considering the obtained optimum design strategies, thermal energy and economic analyses are carried out to investigate the feasibility of integrating PCMs in Moroccan constructions. The results show that the PCM-integrated household envelope minimises the cooling/heating thermal energy demand relative to a reference household without PCM. For the cost-effectiveness optimisation, however, it was deduced that economic feasibility is still insufficient under current PCM market conditions. The optimal design parameter results are also analysed.
Automation Bias: Decision Making and Performance in High-Tech Cockpits
NASA Technical Reports Server (NTRS)
Mosier, Kathleen L.; Skitka, Linda J.; Heers, Susan; Burdick, Mark; Rosekind, Mark R. (Technical Monitor)
1997-01-01
Automated aids and decision support tools are rapidly becoming indispensable in high-technology cockpits, and are assuming increasing control of "cognitive" flight tasks, such as calculating fuel-efficient routes, navigating, or detecting and diagnosing system malfunctions and abnormalities. This study was designed to investigate "automation bias," a recently documented factor in the use of automated aids and decision support systems. The term refers to omission and commission errors resulting from the use of automated cues as a heuristic replacement for vigilant information seeking and processing. Glass-cockpit pilots flew flight scenarios involving automation "events," or opportunities for automation-related omission and commission errors. Pilots who perceived themselves as "accountable" for their performance and strategies of interaction with the automation were more likely to double-check automated functioning against other cues, and less likely to commit errors. Pilots were also likely to erroneously "remember" the presence of expected cues when describing their decision-making processes.
A Measuring System for Well Logging Attitude and a Method of Sensor Calibration
Ren, Yong; Wang, Yangdong; Wang, Mijian; Wu, Sheng; Wei, Biao
2014-01-01
This paper proposes an approach for measuring the azimuth angle and tilt angle of underground drilling tools with a MEMS three-axis accelerometer and a three-axis fluxgate sensor. A mathematical model of well logging attitude angle is deduced based on combining space coordinate transformations and algebraic equations. In addition, a system implementation plan of the inclinometer is given in this paper, which features low cost, small volume and integration. Aiming at the sensor and assembly errors, this paper analyses the sources of errors, and establishes two mathematical models of errors and calculates related parameters to achieve sensor calibration. The results show that this scheme can obtain a stable and high precision azimuth angle and tilt angle of drilling tools, with the deviation of the former less than ±1.4° and the deviation of the latter less than ±0.1°. PMID:24859028
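A hedged sketch of the attitude computation described above: tilt from the gravity vector and azimuth from the horizontal projections of the tool axis and the fluxgate vector. This is the generic projection-based formulation, not the paper's calibrated error model; the sensor readings are invented.

```python
import numpy as np

def tilt_and_azimuth(acc, mag):
    """Tilt and azimuth of a drilling tool from a 3-axis accelerometer and a
    3-axis fluxgate, both expressed in the tool frame (z along the tool axis)."""
    g = np.asarray(acc, dtype=float)
    g = g / np.linalg.norm(g)                 # unit gravity vector in the tool frame
    m = np.asarray(mag, dtype=float)
    m = m / np.linalg.norm(m)                 # unit magnetic field vector
    tool_axis = np.array([0.0, 0.0, 1.0])     # z axis assumed along the tool

    tilt = np.degrees(np.arccos(np.clip(abs(g[2]), 0.0, 1.0)))   # angle from vertical

    # horizontal projections (perpendicular to gravity) of the tool axis and the field
    axis_h = tool_axis - np.dot(tool_axis, g) * g
    mag_h = m - np.dot(m, g) * g
    azimuth = np.degrees(np.arctan2(np.dot(np.cross(mag_h, axis_h), g),
                                    np.dot(mag_h, axis_h))) % 360.0
    return tilt, azimuth

acc = np.array([0.17, -0.03, 0.985])   # accelerometer output in g, tool nearly vertical
mag = np.array([0.21, 0.05, -0.41])    # fluxgate output, arbitrary units
print(tilt_and_azimuth(acc, mag))
```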
Error-related negativity varies with the activation of gender stereotypes.
Ma, Qingguo; Shu, Liangchao; Wang, Xiaoyi; Dai, Shenyi; Che, Hongmin
2008-09-19
The error-related negativity (ERN) was suggested to reflect the response-performance monitoring process. The purpose of this study is to investigate how the activation of gender stereotypes influences the ERN. Twenty-eight male participants were asked to complete a tool or kitchenware identification task. The prime stimulus is a picture of a male or female face and the target stimulus is either a kitchen utensil or a hand tool. The ERN amplitude on male-kitchenware trials is significantly larger than that on female-kitchenware trials, which reveals the low-level, automatic activation of gender stereotypes. The ERN that was elicited in this task has two sources--operation errors and the conflict between the gender stereotype activation and the non-prejudice beliefs. And the gender stereotype activation may be the key factor leading to this difference of ERN. In other words, the stereotype activation in this experimental paradigm may be indexed by the ERN.
New Abstraction Networks and a New Visualization Tool in Support of Auditing the SNOMED CT Content
Geller, James; Ochs, Christopher; Perl, Yehoshua; Xu, Junchuan
2012-01-01
Medical terminologies are large and complex. Frequently, errors are hidden in this complexity. Our objective is to find such errors, which can be aided by deriving abstraction networks from a large terminology. Abstraction networks preserve important features but eliminate many minor details, which are often not useful for identifying errors. Providing visualizations for such abstraction networks aids auditors by allowing them to quickly focus on elements of interest within a terminology. Previously we introduced area taxonomies and partial area taxonomies for SNOMED CT. In this paper, two advanced, novel kinds of abstraction networks, the relationship-constrained partial area subtaxonomy and the root-constrained partial area subtaxonomy are defined and their benefits are demonstrated. We also describe BLUSNO, an innovative software tool for quickly generating and visualizing these SNOMED CT abstraction networks. BLUSNO is a dynamic, interactive system that provides quick access to well organized information about SNOMED CT. PMID:23304293