NASA Astrophysics Data System (ADS)
Aoki, Hirooki; Ichimura, Shiro; Fujiwara, Toyoki; Kiyooka, Satoru; Koshiji, Kohji; Tsuzuki, Keishi; Nakamura, Hidetoshi; Fujimoto, Hideo
We propose a method for calculating the ventilation threshold using non-contact respiration measurement with dot-matrix pattern light projection during pedaling exercise. The validity and effectiveness of the proposed method were examined by simultaneous measurement with an expiration gas analyzer. The experimental results showed a correlation between the quasi ventilation thresholds calculated by the proposed method and the ventilation thresholds calculated by the expiration gas analyzer. This result indicates the possibility of non-contact measurement of the ventilation threshold by the proposed method.
SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, M; Jiang, S; Lu, W
Purpose: To propose a hybrid method that combines the advantages of the model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone convolution/superposition (CCCS) or the Monte Carlo method, models dose deposition in the patient body accurately; however, due to a lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in case of hardware changes. On the contrary, the measurement-based method characterizes the beam property accurately but lacks the capability of dose deposition modeling in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions was also calculated using the CCCS for the same setup. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator, here a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: 1. calculate D_model using CCCS; 2. calculate D_ΔDRT using ΔDRT; 3. combine the results: D = D_model + D_ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and IMRT plans. The results were compared to doses calculated by the treatment planning system (TPS). The agreement between the hybrid method and the TPS was within 3%, 3 mm for over 98% of the volume for the phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple-commissioning criteria for an independent dose calculator.
Weighted Geometric Dilution of Precision Calculations with Matrix Multiplication
Chen, Chien-Sheng
2015-01-01
To enhance the performance of location estimation in wireless positioning systems, the geometric dilution of precision (GDOP) is widely used as a criterion for selecting measurement units. Since GDOP represents the geometric effect on the relationship between measurement error and positioning error, the measurement-unit subset with the smallest GDOP is usually chosen for positioning. The conventional GDOP calculation using matrix inversion requires many operations. Because more and more measurement units can be chosen nowadays, an efficient calculation should be designed to decrease the complexity. Since the performance of each measurement unit is different, the weighted GDOP (WGDOP), instead of GDOP, is used to select the measurement units to improve the accuracy of location. To calculate WGDOP effectively and efficiently, a closed-form solution for WGDOP is proposed for cases in which more than four measurements are available. In this paper, an efficient WGDOP calculation method applying matrix multiplication that is easy for hardware implementation is proposed; it can be used both when exactly four and when more than four measurements are available. Even when the all-in-view method is used for positioning, the proposed method can still reduce the computational overhead. The proposed WGDOP methods with less computation are compatible with the global positioning system (GPS), wireless sensor networks (WSN) and cellular communication systems. PMID:25569755
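As background to the closed-form result above, here is a minimal numerical sketch of the baseline WGDOP definition — the matrix-inversion form the paper improves on, not its matrix-multiplication closed form. The geometry matrix H and weight matrix W below are hypothetical examples.

```python
import numpy as np

def wgdop(H, W):
    """Weighted GDOP from a geometry matrix H (m x 4: unit line-of-sight
    vectors augmented with 1) and a diagonal weight matrix W (m x m).
    With W = I this reduces to the ordinary GDOP."""
    M = H.T @ W @ H                # 4x4 weighted normal matrix
    return float(np.sqrt(np.trace(np.linalg.inv(M))))

# Example: five measurement units; better units get larger weights.
H = np.array([[ 0.6,  0.8, 0.0, 1.0],
              [-0.8,  0.6, 0.0, 1.0],
              [ 0.0,  0.6, 0.8, 1.0],
              [ 0.7,  0.0, 0.7, 1.0],
              [-0.5, -0.5, 0.7, 1.0]])
W = np.diag([1.0, 0.8, 1.2, 0.9, 1.1])
print(wgdop(H, W))
```

The paper's contribution is avoiding the explicit 4x4 inverse above; this sketch only fixes the quantity being computed.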
NASA Technical Reports Server (NTRS)
Mayo, Alton P.
1959-01-01
Flapwise bending moments were calculated for a teetering rotor blade using a reasonably rapid theoretical method in which airloads obtained from wind-tunnel tests were employed. The calculated moments agreed reasonably well with those measured with strain gages under the same test conditions. The range of the tests included one hovering and two forward-flight conditions. The rotor speed for the test was very near blade resonance, and difficult-to-calculate resonance effects apparently were responsible for the largest differences between the calculated and measured harmonic components of blade bending moments. These differences, moreover, were largely nullified when the harmonic components were combined to give a comparison of the calculated and measured blade total-moment time histories. The degree of agreement shown is therefore considered adequate to warrant the use of the theoretical method in establishing and applying methods of prediction of rotor-blade fatigue loads. At the same time, the validity of the experimental methods of obtaining both airload and blade stress measurement is also indicated to be adequate for use in establishing improved methods for prediction of rotor-blade fatigue loads during the design stage. The blade stiffnesses and natural frequencies were measured and found to be in close agreement with calculated values; however, for a condition of blade resonance the use of the experimental stiffness values resulted in better agreement between calculated and measured blade stresses.
Ferreira, Tiago B; Ribeiro, Paulo; Ribeiro, Filomena J; O'Neill, João G
2017-12-01
To compare the prediction error in the calculation of toric intraocular lenses (IOLs) associated with methods that estimate the power of the posterior corneal surface (ie, Barrett toric calculator and Abulafia-Koch formula) with that of methods that consider real measures obtained using Scheimpflug imaging: a software that uses vectorial calculation (Panacea toric calculator: http://www.panaceaiolandtoriccalculator.com) and a ray tracing software (PhacoOptics, Aarhus Nord, Denmark). In 107 eyes of 107 patients undergoing cataract surgery with toric IOL implantation (Acrysof IQ Toric; Alcon Laboratories, Inc., Fort Worth, TX), predicted residual astigmatism by each calculation method was compared with manifest refractive astigmatism. Prediction error in residual astigmatism was calculated using vector analysis. All calculation methods resulted in overcorrection of with-the-rule astigmatism and undercorrection of against-the-rule astigmatism. Both estimation methods resulted in lower mean and centroid astigmatic prediction errors, and a larger number of eyes within 0.50 diopters (D) of absolute prediction error than methods considering real measures (P < .001). Centroid prediction error (CPE) was 0.07 D at 172° for the Barrett toric calculator and 0.13 D at 174° for the Abulafia-Koch formula (combined with Holladay calculator). For methods using real posterior corneal surface measurements, CPE was 0.25 D at 173° for the Panacea calculator and 0.29 D at 171° for the ray tracing software. The Barrett toric calculator and Abulafia-Koch formula yielded the lowest astigmatic prediction errors. Directly evaluating total corneal power for toric IOL calculation was not superior to estimating it. [J Refract Surg. 2017;33(12):794-800.]. Copyright 2017, SLACK Incorporated.
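Vector analysis of astigmatic prediction error, as reported above (centroid magnitude and axis), is conventionally done in double-angle space. The sketch below is a plausible implementation of that standard approach — the study's exact computation may differ — with made-up example values.

```python
import numpy as np

def to_xy(mag, axis_deg):
    """Map an astigmatism (magnitude in D, axis in degrees) to
    double-angle Cartesian coordinates."""
    a = np.deg2rad(2.0 * axis_deg)
    return mag * np.cos(a), mag * np.sin(a)

def centroid_error(pred, manifest):
    """Centroid of (predicted - manifest) residual astigmatism vectors.
    pred, manifest: lists of (magnitude, axis) tuples, one pair per eye."""
    dx, dy = [], []
    for (mp, ap), (mm, am) in zip(pred, manifest):
        xp, yp = to_xy(mp, ap)
        xm, ym = to_xy(mm, am)
        dx.append(xp - xm)
        dy.append(yp - ym)
    cx, cy = np.mean(dx), np.mean(dy)
    mag = float(np.hypot(cx, cy))
    axis = float(np.rad2deg(np.arctan2(cy, cx)) / 2.0) % 180.0
    return mag, axis

# Two hypothetical eyes: (magnitude D, axis deg)
print(centroid_error([(1.0, 90), (0.8, 85)], [(0.9, 95), (1.0, 80)]))
```

Averaging in double-angle space is what allows systematic with-the-rule versus against-the-rule biases, like those reported above, to survive the averaging instead of cancelling.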
Measurement of the Microwave Refractive Index of Materials Based on Parallel Plate Waveguides
NASA Astrophysics Data System (ADS)
Zhao, F.; Pei, J.; Kan, J. S.; Zhao, Q.
2017-12-01
An electric field scanning apparatus based on the parallel plate waveguide method is constructed, which collects the amplitude and phase matrices as a function of relative position. On the basis of such data, a method for calculating the refractive index of measured wedge samples is proposed in this paper. The measurement and calculation results for different PTFE samples reveal that the refractive index measured by the apparatus is substantially consistent with the refractive index inferred from the permittivity of the sample. The proposed refractive index calculation method is a competitive method for characterizing the refractive index of materials with a positive refractive index. Since the apparatus and method can measure and calculate in arbitrary directions of microwave propagation, it is believed that both can be applied to negative-refractive-index materials, such as metamaterials or "left-handed" materials.
Measuring Phytoplankton From Satellites
NASA Technical Reports Server (NTRS)
Davis, C. O.
1989-01-01
Present and future methods examined. Report reviews methods of calculating concentration of phytoplankton from satellite measurements of color of ocean and using such calculations to estimate productivity of phytoplankton.
NASA Astrophysics Data System (ADS)
Yu, Jun; Hao, Du; Li, Decai
2018-01-01
The phenomenon whereby an object denser than a magnetic fluid can be suspended stably in the fluid under a magnetic field is one of the peculiar properties of magnetic fluids. Applications based on this property include sensors, actuators, dampers, positioning systems and so on. Therefore, the calculation and measurement of the magnetic levitation force of magnetic fluid is of vital importance. This paper concerns the peculiar second-order buoyancy experienced by a magnet immersed in magnetic fluid. The expression for calculating the second-order buoyancy was derived, and a novel method for calculating and measuring the second-order buoyancy was proposed based on the expression. The second-order buoyancy was calculated by ANSYS and measured experimentally using the novel method. To verify the novel method, the second-order buoyancy was measured experimentally with a nonmagnetic rod stuck on the top surface of the magnet. The results of calculations and experiments show that the novel method for calculating the second-order buoyancy is accurate. In addition, the main causes of error were studied in this paper, including the magnetic shielding of the magnetic fluid and the movement of magnetic fluid in a nonuniform magnetic field.
Methods and Systems for Measurement and Estimation of Normalized Contrast in Infrared Thermography
NASA Technical Reports Server (NTRS)
Koshti, Ajay M. (Inventor)
2017-01-01
Methods and systems for converting an image contrast evolution of an object to a temperature contrast evolution and vice versa are disclosed, including methods for assessing an emissivity of the object; calculating an afterglow heat flux evolution; calculating a measurement region of interest temperature change; calculating a reference region of interest temperature change; calculating a reflection temperature change; calculating the image contrast evolution or the temperature contrast evolution; and converting the image contrast evolution to the temperature contrast evolution or vice versa, respectively.
Methods and Systems for Measurement and Estimation of Normalized Contrast in Infrared Thermography
NASA Technical Reports Server (NTRS)
Koshti, Ajay M. (Inventor)
2015-01-01
Methods and systems for converting an image contrast evolution of an object to a temperature contrast evolution and vice versa are disclosed, including methods for assessing an emissivity of the object; calculating an afterglow heat flux evolution; calculating a measurement region of interest temperature change; calculating a reference region of interest temperature change; calculating a reflection temperature change; calculating the image contrast evolution or the temperature contrast evolution; and converting the image contrast evolution to the temperature contrast evolution or vice versa, respectively.
A new shielding calculation method for X-ray computed tomography regarding scattered radiation.
Watanabe, Hiroshi; Noto, Kimiya; Shohji, Tomokazu; Ogawa, Yasuyoshi; Fujibuchi, Toshioh; Yamaguchi, Ichiro; Hiraki, Hitoshi; Kida, Tetsuo; Sasanuma, Kazutoshi; Katsunuma, Yasushi; Nakano, Takurou; Horitsugi, Genki; Hosono, Makoto
2017-06-01
The goal of this study is to develop a more appropriate shielding calculation method for computed tomography (CT) in comparison with the Japanese conventional (JC) method and the National Council on Radiation Protection and Measurements (NCRP)-dose length product (DLP) method. Scattered dose distributions were measured in CT rooms with 18 scanners (16 scanners in the case of the JC method) for one week during routine clinical use. The radiation doses were calculated for the same period using the JC and NCRP-DLP methods. The mean (NCRP-DLP-calculated dose)/(measured dose) ratios in each direction ranged from 1.7 ± 0.6 to 55 ± 24 (mean ± standard deviation). The NCRP-DLP method underestimated the dose in 3.4% of the directions with less shielding by the gantry and the subject, the minimum (NCRP-DLP-calculated dose)/(measured dose) ratio being 0.6. The reduction factors were 0.036 ± 0.014 and 0.24 ± 0.061 for the gantry and couch directions, respectively. The (JC-calculated dose)/(measured dose) ratios ranged from 11 ± 8.7 to 404 ± 340. The air kerma scatter factor κ is expected to be twice as high as that used in the NCRP-DLP method, and the reduction factors are expected to be 0.1 and 0.4 for the gantry and couch directions, respectively. We therefore propose a more appropriate method, the Japanese-DLP method, which resolves the issues of possible underestimation of the scattered radiation and overestimation of the reduction factors in the gantry and couch directions.
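To make the structure of a DLP-based shielding calculation concrete, here is a minimal sketch. It follows the generic NCRP-DLP form (scattered air kerma proportional to weekly DLP, inverse-square with distance, times a directional reduction factor); the κ and DLP values are placeholders, not the paper's data — NCRP Report 147 tabulates κ, and the abstract argues κ should be roughly twice the NCRP-DLP value.

```python
def scattered_dose_mGy(kappa_per_cm, weekly_dlp_mGy_cm, d_m, reduction=1.0):
    """Weekly unshielded scattered air kerma at distance d (metres):
    kappa * DLP gives the scattered kerma at 1 m; scale by inverse square
    and by a directional reduction factor (gantry/couch attenuation)."""
    return kappa_per_cm * weekly_dlp_mGy_cm * reduction / d_m**2

kappa_ncrp = 3.0e-4            # cm^-1, commonly quoted body-scan value (verify)
kappa_jdlp = 2.0 * kappa_ncrp  # abstract: "twice as high" for the new method
weekly_dlp = 1.2e5             # mGy*cm per week, hypothetical workload

print(scattered_dose_mGy(kappa_jdlp, weekly_dlp, 3.0))        # open direction
print(scattered_dose_mGy(kappa_jdlp, weekly_dlp, 3.0, 0.1))   # gantry, per abstract
print(scattered_dose_mGy(kappa_jdlp, weekly_dlp, 3.0, 0.4))   # couch, per abstract
```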
A comparison study of size-specific dose estimate calculation methods.
Parikh, Roshni A; Wien, Michael A; Novak, Ronald D; Jordan, David W; Klahr, Paul; Soriano, Stephanie; Ciancibello, Leslie; Berlin, Sheila C
2018-01-01
The size-specific dose estimate (SSDE) has emerged as an improved metric for use by medical physicists and radiologists for estimating individual patient dose. Several methods of calculating SSDE have been described, ranging from patient thickness or attenuation-based (automated and manual) measurements to weight-based techniques. The aim of this study was to compare the accuracy of thickness-based vs. weight-based measurements of body size for calculating SSDE in pediatric body CT. We retrospectively identified 109 pediatric body CT examinations for SSDE calculation. We examined two automated methods measuring a series of level-specific diameters of the patient's body: method A used the effective diameter and method B used the water-equivalent diameter. Two manual methods measured patient diameter at two predetermined levels: the superior endplate of L2, where the body is typically thinnest, and the superior femoral head or iliac crest (for scans that did not include the pelvis), where the body is typically thickest; method C averaged lateral measurements at these two levels from the CT projection scan, and method D averaged lateral and anteroposterior measurements at the same two levels from the axial CT images. Finally, we used body weight to characterize patient size, method E, and compared this with the various other measurement methods. Methods were compared across the entire population as well as by subgroup based on body width. Concordance correlation (ρc) between each of the SSDE calculation methods (methods A-E) was greater than 0.92 across the entire population, although the range was wider when analyzed by subgroup (0.42-0.99). When we compared each SSDE measurement method with CTDIvol, there was poor correlation, ρc < 0.77, with percentage differences between 20.8% and 51.0%. Automated computer algorithms are accurate and efficient in the calculation of SSDE. Manual methods based on patient thickness provide acceptable dose estimates for pediatric patients <30 cm in body width. Body weight provides a quick and practical method to identify conversion factors that can be used to estimate SSDE with reasonable accuracy in pediatric patients with body width ≥20 cm.
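For readers unfamiliar with the underlying arithmetic, SSDE is a size-dependent conversion factor applied to the scanner-reported CTDIvol. A minimal sketch using the effective diameter (method A above) and the exponential fit commonly cited from AAPM Report 204 for the 32-cm phantom — verify the coefficients against the report before any clinical use:

```python
import math

def effective_diameter_cm(ap_cm, lat_cm):
    """Effective diameter from anteroposterior and lateral dimensions."""
    return math.sqrt(ap_cm * lat_cm)

def ssde_mGy(ctdi_vol_mGy, eff_diam_cm):
    """SSDE = f_size * CTDIvol; f_size from the Report 204 exponential fit
    for the 32-cm reference phantom (coefficients as commonly cited)."""
    f_size = 3.704369 * math.exp(-0.03671937 * eff_diam_cm)
    return f_size * ctdi_vol_mGy

# Hypothetical pediatric patient: AP 18 cm, lateral 22 cm, CTDIvol 5 mGy
d = effective_diameter_cm(18.0, 22.0)
print(d, ssde_mGy(5.0, d))   # ~19.9 cm -> SSDE ~ 8.9 mGy
```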
Apparatus for in-situ calibration of instruments that measure fluid depth
Campbell, Melvin D.
1994-01-01
The present invention provides a method and apparatus for in-situ calibration of distance measuring equipment. The method comprises obtaining a first distance measurement in a first location, then obtaining at least one other distance measurement in at least one other location of a precisely known distance from the first location, and calculating a calibration constant. The method is applied specifically to calculating a calibration constant for obtaining fluid level and embodied in an apparatus using a pressure transducer and a spacer of precisely known length. The calibration constant is used to calculate the depth of a fluid from subsequent single pressure measurements at any submerged position.
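The calibration idea in this patent abstract reduces to simple arithmetic: two readings taken a precisely known vertical distance apart (the spacer) fix the scale factor between pressure and head of fluid. A hedged sketch, with hypothetical readings in kPa:

```python
def calibration_constant(p1, p2, spacer_length_m):
    """Calibration constant C (m per pressure unit) from two pressure
    readings taken a precisely known vertical distance apart."""
    return spacer_length_m / (p2 - p1)

def fluid_depth(p, p_surface, c):
    """Depth of the transducer below the fluid surface from one reading."""
    return c * (p - p_surface)

# Hypothetical: 0.5 m spacer; readings 101.9 and 106.8 kPa
c = calibration_constant(101.9, 106.8, 0.5)   # ~0.102 m/kPa (fresh water)
print(fluid_depth(111.7, 101.3, c))           # ~1.06 m
```

Because C is measured in place, it absorbs the fluid density and transducer gain, which is the point of calibrating in situ.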
Measurement and simulation of thermal neutron flux distribution in the RTP core
NASA Astrophysics Data System (ADS)
Rabir, Mohamad Hairie B.; Jalal Bayar, Abi Muttaqin B.; Hamzah, Na'im Syauqi B.; Mustafa, Muhammad Khairul Ariff B.; Karim, Julia Bt. Abdul; Zin, Muhammad Rawi B. Mohamed; Ismail, Yahya B.; Hussain, Mohd Huzair B.; Mat Husin, Mat Zin B.; Dan, Roslan B. Md; Ismail, Ahmad Razali B.; Husain, Nurfazila Bt.; Jalil Khan, Zareen Khan B. Abdul; Yakin, Shaiful Rizaide B. Mohd; Saad, Mohamad Fauzi B.; Masood, Zarina Bt.
2018-01-01
The in-core thermal neutron flux distribution was determined by measurement and simulation for the Malaysian PUSPATI TRIGA Reactor (RTP). In this work, online thermal neutron flux measurement using a Self-Powered Neutron Detector (SPND) was performed to verify and validate the computational methods for neutron flux calculation in RTP. The experimental results were used to validate the calculations performed with the Monte Carlo code MCNP. The detailed in-core neutron flux distributions were estimated using the MCNP mesh tally method. The neutron flux mapping obtained revealed the heterogeneous configuration of the core. Based on the measurement and simulation, the thermal flux profile peaked at the centre of the core and gradually decreased towards its outer side. The results show relatively good agreement between calculation and measurement, with both showing the same radial thermal flux profile inside the core; the MCNP model overestimates the flux, with a maximum discrepancy of around 20% relative to the SPND measurement. As the model also predicts the neutron flux distribution in the core well, it can be used for characterization of the full core, that is, neutron flux and spectrum calculations, dose rate calculations, reaction rate calculations, etc.
Gulati, Shelly; Stubblefield, Ashley A; Hanlon, Jeremy S; Spier, Chelsea L; Stringfellow, William T
2014-03-01
Measuring the discharge of diffuse pollution from agricultural watersheds presents unique challenges. Flows in agricultural watersheds, particularly in Mediterranean climates, can be predominantly irrigation runoff and exhibit large diurnal fluctuations in both volume and concentration. Flow and pollutant concentrations in these smaller watersheds dominated by human activity do not conform to a normal distribution, and it is not clear whether parametric methods are appropriate or accurate for load calculations. The objective of this study was to compare the accuracy of five load estimation methods for calculating pollutant loads from agricultural watersheds. Loads calculated from discrete (grab) samples were compared with the true load computed from in situ continuous monitoring measurements. A new method is introduced that uses a non-parametric measure of central tendency (the median) to calculate loads (median-load). The median-load method was compared to more commonly used parametric estimation methods that rely on the mean as a measure of central tendency (mean-load and daily-load), a method that utilizes the total flow volume (volume-load), and a method that uses the flow at the time of sampling (instantaneous-load). Using measurements from ten watersheds in the San Joaquin Valley of California, the average percent error compared to the true load for total dissolved solids (TDS) was 7.3% for the median-load, 6.9% for the mean-load, 6.9% for the volume-load, 16.9% for the instantaneous-load, and 18.7% for the daily-load methods of calculation. The results of this study show that parametric methods are surprisingly accurate, even for data that have starkly non-normal distributions and are highly skewed. Copyright © 2013 Elsevier Ltd. All rights reserved.
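To make the estimators concrete, here is a sketch of plausible implementations of three of the named methods against a continuous reference record. These are reasonable readings of the method names — the paper's exact definitions may differ — and the diurnal flow/concentration series is synthetic.

```python
import numpy as np

def true_load_kg(q_Ls, c_mgL, dt_s):
    """Reference load: integrate q*c over the continuous record (mg -> kg)."""
    return np.sum(q_Ls * c_mgL) * dt_s / 1e6

def mean_load_kg(q_Ls, c_grab_mgL, dt_s):
    """Mean grab-sample concentration times total flow volume."""
    return np.mean(c_grab_mgL) * np.sum(q_Ls) * dt_s / 1e6

def median_load_kg(q_Ls, c_grab_mgL, dt_s):
    """Non-parametric variant: median grab concentration times total volume."""
    return np.median(c_grab_mgL) * np.sum(q_Ls) * dt_s / 1e6

# One synthetic week at 15-min resolution with strong diurnal cycles
t = np.arange(0, 7 * 86400, 900)
q = 50 + 20 * np.sin(2 * np.pi * t / 86400)          # flow, L/s
c = 400 + 100 * np.sin(2 * np.pi * t / 86400 + 1.0)  # TDS, mg/L
grab = slice(None, None, 96)                          # one grab per day
print(true_load_kg(q, c, 900.0),
      mean_load_kg(q, c[grab], 900.0),
      median_load_kg(q, c[grab], 900.0))
```

The comparison logic mirrors the study: each grab-sample estimator is judged by its percent error against the continuously integrated true load.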
Kadji, Caroline; De Groof, Maxime; Camus, Margaux F; De Angelis, Riccardo; Fellas, Stéphanie; Klass, Magdalena; Cecotti, Vera; Dütemeyer, Vivien; Barakat, Elie; Cannie, Mieke M; Jani, Jacques C
2017-01-01
The aim of this study was to apply a semi-automated calculation method of fetal body volume and, thus, of magnetic resonance-estimated fetal weight (MR-EFW) prior to planned delivery and to evaluate whether the technique of measurement could be simplified while remaining accurate. MR-EFW was calculated using a semi-automated method at 38.6 weeks of gestation in 36 patients and compared to the picture archiving and communication system (PACS). Per patient, 8 sequences were acquired with a slice thickness of 4-8 mm and an intersection gap of 0, 4, 8, 12, 16, or 20 mm. The median absolute relative errors for MR-EFW and the time of planimetric measurements were calculated for all 8 sequences and for each method (assisted vs. PACS), and the difference between the methods was calculated. The median delivery weight was 3,280 g. The overall median relative error for all 288 MR-EFW calculations was 2.4% using the semi-automated method and 2.2% for the PACS method. Measurements did not differ between the 8 sequences using the assisted method (p = 0.313) or the PACS (p = 0.118), while the time of planimetric measurement decreased significantly with a larger gap (p < 0.001) and in the assisted method compared to the PACS method (p < 0.01). Our simplified MR-EFW measurement showed a dramatic decrease in time of planimetric measurement without a decrease in the accuracy of weight estimates. © 2017 S. Karger AG, Basel.
Influence of Individual Differences on the Calculation Method for FBG-Type Blood Pressure Sensors
Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun; Kobayashi, Yuka
2016-01-01
In this paper, we propose a blood pressure calculation and measurement method based on a fiber Bragg grating (FBG) sensor. There are several points at which the pulse can be measured on the surface of the human body, and when an FBG sensor is located at any of these points, the pulse wave signal can be measured. The measured waveform is similar to the acceleration pulse wave. The pulse wave signal changes depending on several factors, including whether or not the individual is healthy and/or elderly. The measured pulse wave signal can be used to calculate the blood pressure using a calibration curve, which is constructed by partial least squares (PLS) regression analysis using a reference blood pressure and the pulse wave signal. In this paper, we focus on the influence of individual differences on the blood pressure calculated from each calibration curve. In our study, the blood pressures calculated from the individual and overall calibration curves were compared, and our results show that the blood pressure calculated from the overall calibration curve had a lower measurement accuracy than that based on an individual calibration curve. We also found that the influence of individual differences on the calculated blood pressure when using the FBG sensor method was very low. Therefore, the FBG sensor method that we developed for measuring blood pressure was found to be suitable for use by many people. PMID:28036015
Influence of Individual Differences on the Calculation Method for FBG-Type Blood Pressure Sensors.
Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun; Kobayashi, Yuka
2016-12-28
In this paper, we propose a blood pressure calculation and measurement method based on a fiber Bragg grating (FBG) sensor. There are several points at which the pulse can be measured on the surface of the human body, and when an FBG sensor is located at any of these points, the pulse wave signal can be measured. The measured waveform is similar to the acceleration pulse wave. The pulse wave signal changes depending on several factors, including whether or not the individual is healthy and/or elderly. The measured pulse wave signal can be used to calculate the blood pressure using a calibration curve, which is constructed by partial least squares (PLS) regression analysis using a reference blood pressure and the pulse wave signal. In this paper, we focus on the influence of individual differences on the blood pressure calculated from each calibration curve. In our study, the blood pressures calculated from the individual and overall calibration curves were compared, and our results show that the blood pressure calculated from the overall calibration curve had a lower measurement accuracy than that based on an individual calibration curve. We also found that the influence of individual differences on the calculated blood pressure when using the FBG sensor method was very low. Therefore, the FBG sensor method that we developed for measuring blood pressure was found to be suitable for use by many people.
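The calibration-curve construction described above is a standard PLS regression: waveform samples as predictors, reference blood pressure as the response. A minimal sketch with synthetic stand-in data (the component count and array shapes are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# X: one row per measurement, columns = samples of the FBG pulse waveform;
# y: reference blood pressure (e.g., cuff readings). All values synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))     # 40 pulse waveforms, 200 samples each
y = 80.0 + 40.0 * rng.random(40)   # reference systolic pressures, mmHg

pls = PLSRegression(n_components=5)
pls.fit(X, y)                      # this is the "calibration curve"
print(pls.predict(X[:3]))          # pressure predicted from new waveforms
```

An individual calibration curve corresponds to fitting on one subject's data only; the overall curve pools all subjects into a single fit, which is the comparison the paper makes.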
Rendering the "Not-So-Simple" Pendulum Experimentally Accessible.
ERIC Educational Resources Information Center
Jackson, David P.
1996-01-01
Presents three methods for obtaining experimental data related to acceleration of a simple pendulum. Two of the methods involve angular position measurements and the subsequent calculation of the acceleration while the third method involves a direct measurement of the acceleration. Compares these results with theoretical calculations and…
Apparatus for in-situ calibration of instruments that measure fluid depth
Campbell, M.D.
1994-01-11
The present invention provides a method and apparatus for in-situ calibration of distance measuring equipment. The method comprises obtaining a first distance measurement in a first location, then obtaining at least one other distance measurement in at least one other location of a precisely known distance from the first location, and calculating a calibration constant. The method is applied specifically to calculating a calibration constant for obtaining fluid level and embodied in an apparatus using a pressure transducer and a spacer of precisely known length. The calibration constant is used to calculate the depth of a fluid from subsequent single pressure measurements at any submerged position. 8 figures.
[Quantitative evaluation of Gd-EOB-DTPA uptake in phantom study for liver MRI].
Hayashi, Norio; Miyati, Tosiaki; Koda, Wataru; Suzuki, Masayuki; Sanada, Shigeru; Ohno, Naoki; Hamaguchi, Takashi; Matsuura, Yukihiro; Kawahara, Kazuhiro; Yamamoto, Tomoyuki; Matsui, Osamu
2010-05-20
Gd-EOB-DTPA is a new liver-specific MRI contrast medium. In the hepatobiliary phase, the contrast medium is trapped in normal liver tissue, so normal liver shows high intensity, tumor/liver contrast becomes high, and diagnostic ability improves. To indicate the degree of uptake of the contrast medium, the enhancement ratio (ER) is calculated as ER = (SI after injection − SI before injection) / SI before injection, where SI is the signal intensity. However, because there is no linearity between contrast medium concentration and SI, the ER is not correctly estimated by this method. We discuss a method of measuring the ER based on SI and T1 values using a phantom. We used a column phantom, with an internal diameter of 3 cm, that was filled with diluted Gd-EOB-DTPA solution. Measurement of the T1 value by the inversion recovery (IR) method was also performed. The ER measuring method of this technique consists of the following three components: 1) measurement of ER based on differences in 1/T1 values using the variable flip angle (FA) method, 2) measurement of differences in SI, and 3) measurement of differences in 1/T1 values using the IR method. ER values calculated by these three methods were compared. In measurements made using the variable FA method and the IR method, linearity was found between contrast medium concentration and ER. On the other hand, linearity was not found between contrast medium concentration and SI. For calculation of the ER using Gd-EOB-DTPA, a more correct ER is obtained by measuring the T1 value using the variable FA method.
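The variable-flip-angle T1 estimate underlying method 1) is usually computed with the DESPOT1 linearization of the spoiled gradient-echo signal equation. A sketch under that assumption (the paper's exact fitting procedure is not stated in the abstract):

```python
import numpy as np

def t1_vfa(signals, flip_deg, tr_ms):
    """DESPOT1: S/sin(a) = E1 * S/tan(a) + const, so a linear fit over
    flip angles gives slope E1 = exp(-TR/T1), hence T1 = -TR/ln(E1)."""
    a = np.deg2rad(np.asarray(flip_deg, dtype=float))
    y = signals / np.sin(a)
    x = signals / np.tan(a)
    e1 = np.polyfit(x, y, 1)[0]
    return -tr_ms / np.log(e1)

def enhancement_ratio(t1_pre_ms, t1_post_ms):
    """ER from relaxation rates R1 = 1/T1, which (unlike SI) scale
    linearly with contrast-medium concentration."""
    r1_pre, r1_post = 1.0 / t1_pre_ms, 1.0 / t1_post_ms
    return (r1_post - r1_pre) / r1_pre

# Synthetic check: T1 = 500 ms, TR = 10 ms, flips 5 and 25 degrees
tr, t1 = 10.0, 500.0
a = np.deg2rad(np.array([5.0, 25.0]))
e1 = np.exp(-tr / t1)
s = np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))
print(t1_vfa(s, [5.0, 25.0], tr))        # recovers ~500 ms
print(enhancement_ratio(500.0, 250.0))   # ER = 1.0
```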
Measurement System Analyses - Gauge Repeatability and Reproducibility Methods
NASA Astrophysics Data System (ADS)
Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej
2018-02-01
The submitted article focuses on a detailed explanation of the average and range method (the Automotive Industry Action Group (AIAG) Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility (GRR) method (the Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods and their results were compared on the basis of numerical evaluation. The two methods were additionally compared and their advantages and disadvantages discussed. One difference between the methods is the calculation of variation components: the AIAG method calculates the variation components from standard deviations (so the sum of the variation components does not give 100%), while the honest GRR study calculates the variation components from variances, where the sum of all variation components (part-to-part variation, EV and AV) gives the total variation of 100%. Acceptance of both methods in the professional community and in the manufacturing industry, as well as their future use, is also discussed. Nowadays, the AIAG method is the leading method in the industry.
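The variance-based decomposition that makes the components sum to 100% can be sketched with a crossed two-way ANOVA (parts x operators with repeats). This is a generic expected-mean-squares implementation, not the article's exact worksheet; the data below are synthetic.

```python
import numpy as np

def grr_components_percent(x):
    """Variance components for a crossed gauge study, x[part, operator, rep],
    from two-way ANOVA expected mean squares; percentages sum to 100 %."""
    p, o, r = x.shape
    grand = x.mean()
    mp = x.mean(axis=(1, 2))          # part means
    mo = x.mean(axis=(0, 2))          # operator means
    mpo = x.mean(axis=2)              # part-operator cell means
    ms_p = o * r * np.sum((mp - grand) ** 2) / (p - 1)
    ms_o = p * r * np.sum((mo - grand) ** 2) / (o - 1)
    ms_po = (r * np.sum((mpo - mp[:, None] - mo[None, :] + grand) ** 2)
             / ((p - 1) * (o - 1)))
    ms_e = np.sum((x - mpo[:, :, None]) ** 2) / (p * o * (r - 1))
    v = {
        "EV (repeatability)": ms_e,
        "AV (reproducibility)": max((ms_o - ms_po) / (p * r), 0.0),
        "operator x part": max((ms_po - ms_e) / r, 0.0),
        "part-to-part": max((ms_p - ms_po) / (o * r), 0.0),
    }
    total = sum(v.values())
    return {k: 100.0 * val / total for k, val in v.items()}

x = (np.random.default_rng(3).normal(scale=0.05, size=(10, 3, 2))
     + np.linspace(0.0, 1.0, 10)[:, None, None])   # strong part-to-part signal
print(grr_components_percent(x))
```

Because the components here are variances, they add exactly to the total — the property the honest GRR study has and the standard-deviation-based AIAG tabulation lacks.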
NASA Astrophysics Data System (ADS)
Hu, Liang; Zhao, Nannan; Gao, Zhijian; Mao, Kai; Chen, Wenyu; Fu, Xin
2018-05-01
Determination of the distribution of a generated acoustic field is valuable for studying ultrasonic transducers, for example in providing guidance for transducer design and a basis for analyzing their performance. A method for calculating the acoustic field based on laser-measured vibration velocities on the ultrasonic transducer surface is proposed in this paper. Without knowing the inner structure of the transducer, the acoustic field outside it can be calculated by solving the governing partial differential equation (PDE) of the field based on the specified boundary conditions (BCs). In our study, the BC on the transducer surface, i.e. the distribution of the vibration velocity on the surface, is accurately determined by laser scanning measurement of discrete points followed by a data-fitting computation. In addition, to ensure the calculation accuracy for the whole field, even in an inhomogeneous medium, a finite element method is used to solve the governing PDE based on the mixed BCs, including the discretely measured velocity data and other specified BCs. The method is first validated on numerical piezoelectric transducer models. The acoustic pressure distributions generated by a transducer operating in a homogeneous and an inhomogeneous medium, respectively, are both calculated by the proposed method and compared with the results from other existing methods. Then, the method is further experimentally validated with two actual ultrasonic transducers used for flow measurement in our lab. The amplitude change of the output voltage signal from the receiver transducer due to changing the relative position of the two transducers is calculated by the proposed method and compared with the experimental data. This method can also provide the basis for complex multi-physics coupling computations where the effect of the acoustic field should be taken into account.
NASA Astrophysics Data System (ADS)
Hao, Huadong; Shi, Haolei; Yi, Pengju; Liu, Ying; Li, Cunjun; Li, Shuguang
2018-01-01
A volume metrology method based on the internal electro-optical distance-ranging method is established for large vertical energy-storage tanks. After analyzing the mathematical model for vertical-tank volume calculation, the key processing algorithms for the point cloud data, such as gross error elimination, filtering, streamlining, and radius calculation, are studied. The corresponding volume values at different liquid levels are calculated automatically by computing the cross-sectional area along the horizontal direction and integrating in the vertical direction. To design the comparison system, a vertical tank with a nominal capacity of 20,000 m3 was selected as the research object; the results show that the method has good repeatability and reproducibility. Using the conventional capacity measurement method as a reference, the relative deviation of the calculated volume is less than 0.1%, meeting the measurement requirements. The feasibility and effectiveness of the method are thus demonstrated.
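The area-then-integrate step reduces to a one-dimensional quadrature once the point cloud has been condensed to a radius per height slice. A minimal sketch, assuming circular cross-sections and a trapezoidal rule (the paper's fitting of radii from the raw point cloud is not reproduced here):

```python
import numpy as np

def tank_volume_m3(heights_m, radii_m, fill_height_m):
    """Integrate cross-sectional areas A(h) = pi * r(h)^2 from the tank
    bottom up to the liquid level."""
    h = np.asarray(heights_m, dtype=float)
    r = np.asarray(radii_m, dtype=float)
    mask = h <= fill_height_m
    return np.trapz(np.pi * r[mask] ** 2, h[mask])

# Hypothetical slice radii derived from the fitted point cloud
h = np.linspace(0.0, 20.0, 201)
r = 18.0 + 0.002 * np.sin(h)      # small wall deviations, metres
print(tank_volume_m3(h, r, 12.5))  # volume up to a 12.5 m liquid level
```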
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Y. S.; Joo, H. G.; Yoon, J. I.
The nTRACER direct whole-core transport code, employing a planar-MOC-solution-based 3-D calculation method, the subgroup method for resonance treatment, the Krylov matrix exponential method for depletion, and a subchannel thermal/hydraulic calculation solver, was developed for practical high-fidelity simulation of power reactors. Its accuracy and performance are verified by comparison with the measurement data obtained for three pressurized water reactor cores. It is demonstrated that accurate and detailed multi-physics simulation of power reactors is practically realizable without any prior calculations or adjustments. (authors)
NASA Astrophysics Data System (ADS)
Jiang, Chao; Qiao, Mingzhong; Zhu, Peng
2017-12-01
A permanent magnet synchronous motor with a radial magnetic circuit and built-in permanent magnets is designed for the electric vehicle. Finite element numerical calculation and experimental measurement are adopted to obtain the direct-axis and quadrature-axis inductance parameters of the motor, which are vitally important for motor control. The calculation method is simple, the measuring principle is clear, and the results of numerical calculation and experimental measurement confirm each other. A quick and effective method is provided to obtain the direct-axis and quadrature-axis inductance parameters of the motor, and then improve the design of the motor or adjust the control parameters of the motor controller.
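One common way the measurement side of such a study is done is to solve the steady-state dq voltage equations for Ld and Lq; the sketch below assumes that approach (the abstract does not state the authors' exact procedure), with hypothetical operating-point values.

```python
import math

def dq_inductances(vd, vq, i_d, i_q, rs, omega_e, psi_m):
    """Solve the steady-state PMSM voltage equations for Ld and Lq:
       vd = Rs*id - we*Lq*iq,   vq = Rs*iq + we*(Ld*id + psi_m).
    Requires nonzero id and iq at the measured operating point."""
    lq = (rs * i_d - vd) / (omega_e * i_q)
    ld = (vq - rs * i_q - omega_e * psi_m) / (omega_e * i_d)
    return ld, lq

# Hypothetical point: 3000 rpm, 4 pole pairs -> we = 2*pi*50*4 rad/s
we = 2.0 * math.pi * 3000.0 / 60.0 * 4.0
print(dq_inductances(vd=-40.0, vq=130.0, i_d=-20.0, i_q=60.0,
                     rs=0.05, omega_e=we, psi_m=0.11))  # ~0.45 mH, ~0.52 mH
```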
Computational efficiency for the surface renewal method
NASA Astrophysics Data System (ADS)
Kelley, Jason; Higgins, Chad
2018-04-01
Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and they were tested for sensitivity to the length of the flux-averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal-processing techniques and algebraic simplifications that demonstrate how simple modifications dramatically improve computational efficiency. The results complement efforts by other authors to standardize a robust and accurate computational SR method. The increased speed of computation grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
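The expensive step in most SR pipelines is evaluating high-order structure functions over many lags. A vectorized sketch of that core computation, paired with the Van Atta cubic often used to recover the ramp amplitude — presented as a generic SR building block, not the authors' specific algorithms; the temperature series is synthetic and root/sign conventions vary between implementations.

```python
import numpy as np

def structure_functions(series, lag, orders=(2, 3, 5)):
    """n-th order structure functions S_n(r) = <(T(t) - T(t-r))^n>,
    with the lag r in samples; one array subtraction serves all orders,
    so sweeping many lags over 10 Hz+ records stays cheap."""
    d = series[lag:] - series[:-lag]
    return {n: float(np.mean(d ** n)) for n in orders}

def ramp_amplitude(s2, s3, s5):
    """Van Atta's cubic for the ramp amplitude a:
    a^3 + (10*s2 - s5/s3)*a + 10*s3 = 0; take the dominant real root."""
    roots = np.roots([1.0, 0.0, 10.0 * s2 - s5 / s3, 10.0 * s3])
    real = roots[np.isreal(roots)].real
    return float(real[np.argmax(np.abs(real))])

# Two hours of synthetic 10 Hz temperature data
temp = np.cumsum(np.random.default_rng(4).normal(size=72000)) * 1e-3 + 20.0
s = structure_functions(temp, lag=5)
print(ramp_amplitude(s[2], s[3], s[5]))
```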
NASA Astrophysics Data System (ADS)
Tao, Jiangchuan; Zhao, Chunsheng; Kuang, Ye; Zhao, Gang; Shen, Chuanyang; Yu, Yingli; Bian, Yuxuan; Xu, Wanyun
2018-02-01
The number concentration of cloud condensation nuclei (CCN) plays a fundamental role in cloud physics. Instrumentations of direct measurements of CCN number concentration (NCCN) based on chamber technology are complex and costly; thus a simple way for measuring NCCN is needed. In this study, a new method for NCCN calculation based on measurements of a three-wavelength humidified nephelometer system is proposed. A three-wavelength humidified nephelometer system can measure the aerosol light-scattering coefficient (σsp) at three wavelengths and the light-scattering enhancement factor (fRH). The Ångström exponent (Å) inferred from σsp at three wavelengths provides information on mean predominate aerosol size, and hygroscopicity parameter (κ) can be calculated from the combination of fRH and Å. Given this, a lookup table that includes σsp, κ and Å is established to predict NCCN. Due to the precondition for the application, this new method is not suitable for externally mixed particles, large particles (e.g., dust and sea salt) or fresh aerosol particles. This method is validated with direct measurements of NCCN using a CCN counter on the North China Plain. Results show that relative deviations between calculated NCCN and measured NCCN are within 30 % and confirm the robustness of this method. This method enables simplerNCCN measurements because the humidified nephelometer system is easily operated and stable. Compared with the method using a CCN counter, another advantage of this newly proposed method is that it can obtain NCCN at lower supersaturations in the ambient atmosphere.
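For orientation, one published single-parameter form relates the scattering enhancement directly to a hygroscopicity parameter; the sketch below uses that form as an illustrative assumption — the paper's own κ retrieval also folds in the Ångström exponent and feeds a lookup table rather than this closed form.

```python
def kappa_from_frh(f_rh, rh_percent):
    """Invert the single-parameter fit f(RH) = 1 + k * RH / (100 - RH)
    for the scattering-derived hygroscopicity parameter k (illustrative
    parameterization; not the paper's full retrieval)."""
    return (f_rh - 1.0) * (100.0 - rh_percent) / rh_percent

print(kappa_from_frh(1.8, 80.0))   # f(80%) = 1.8  ->  k = 0.2
```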
Measurement of Angle Kappa Using Ultrasound Biomicroscopy and Corneal Topography
Yeo, Joon Hyung; Moon, Nam Ju
2017-01-01
Purpose: To introduce a new convenient and accurate method to measure the angle kappa using ultrasound biomicroscopy (UBM) and corneal topography. Methods: Data from 42 eyes (13 males and 29 females) were analyzed in this study. The angle kappa was measured using Orbscan II and calculated with UBM and corneal topography. The angle kappa of the dominant eye was compared with measurements by Orbscan II. Results: The mean patient age was 36.4 ± 13.8 years. The average angle kappa measured by Orbscan II was 3.98° ± 1.12°, while the average angle kappa calculated with UBM and corneal topography was 3.19° ± 1.15°. The difference in angle kappa measured by the two methods was statistically significant (p < 0.001). The two methods showed good reliability (intraclass correlation coefficient, 0.671; p < 0.001). Bland-Altman plots were used to demonstrate the agreement between the two methods. Conclusions: We designed a new method using UBM and corneal topography to calculate the angle kappa. This method is convenient to use and allows for measurement of the angle kappa without an expensive device. PMID:28471103
NASA Astrophysics Data System (ADS)
Kuang, Ye; Zhao, Chun Sheng; Zhao, Gang; Tao, Jiang Chuan; Xu, Wanyun; Ma, Nan; Bian, Yu Xuan
2018-05-01
Water condensed on ambient aerosol particles plays significant roles in the atmospheric environment, atmospheric chemistry and climate. Until now, no instruments have been available for real-time monitoring of ambient aerosol liquid water content (ALWC). In this paper, a novel method is proposed to calculate ambient ALWC based on measurements of a three-wavelength humidified nephelometer system, which measures aerosol light-scattering coefficients and backscattering coefficients at three wavelengths under the dry state and different relative humidity (RH) conditions, providing measurements of the light-scattering enhancement factor f(RH). The proposed ALWC calculation method includes two steps. The first step is the estimation of the dry-state total volume concentration of ambient aerosol particles, Va(dry), with a machine learning method called the random forest model, based on measurements of the dry nephelometer. The estimated Va(dry) agrees well with the measured one. The second step is the estimation of the volume growth factor Vg(RH) of ambient aerosol particles due to water uptake, using f(RH) and the Ångström exponent. The ALWC is calculated from the estimated Va(dry) and Vg(RH). To validate the new method, the ambient ALWC calculated from measurements of the humidified nephelometer system during the Gucheng campaign was compared with the ambient ALWC calculated with the ISORROPIA thermodynamic model using aerosol chemistry data. A good agreement was achieved, with a slope and intercept of 1.14 and -8.6 µm3 cm-3 (r2 = 0.92), respectively. The advantage of this new method is that the ambient ALWC can be obtained solely from measurements of a three-wavelength humidified nephelometer system, facilitating real-time monitoring of the ambient ALWC and promoting the study of aerosol liquid water and its role in atmospheric chemistry, secondary aerosol formation and climate change.
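The first step above is a regression problem. A minimal sketch of a random-forest fit of Va(dry) from dry-nephelometer optical coefficients, using synthetic stand-in data (feature layout and target are illustrative assumptions, not the paper's training set):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Features: scattering + backscattering coefficients at three wavelengths
rng = np.random.default_rng(1)
X = rng.random((500, 6))                         # 6 dry optical coefficients
v_dry = X @ np.array([30, 25, 20, 5, 4, 3.0])    # synthetic stand-in target

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, v_dry)
print(model.predict(X[:2]))   # estimated Va(dry), um^3 cm^-3

# Step two then gives ALWC = Va(dry) * (Vg(RH) - 1), with Vg(RH)
# derived from f(RH) and the Angstrom exponent.
```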
Cornforth, David J; Tarvainen, Mika P; Jelinek, Herbert F
2014-01-01
Cardiac autonomic neuropathy (CAN) is a disease that involves nerve damage leading to an abnormal control of heart rate. An open question is to what extent this condition is detectable from heart rate variability (HRV), which provides information only on successive intervals between heart beats, yet is non-invasive and easy to obtain from a three-lead ECG recording. A variety of measures may be extracted from HRV, including time domain, frequency domain, and more complex non-linear measures. Among the latter, Renyi entropy has been proposed as a suitable measure that can be used to discriminate CAN from controls. However, all entropy methods require estimation of probabilities, and there are a number of ways in which this estimation can be made. In this work, we calculate Renyi entropy using several variations of the histogram method and a density method based on sequences of RR intervals. In all, we calculate Renyi entropy using nine methods and compare their effectiveness in separating the different classes of participants. We found that the histogram method using single RR intervals yields an entropy measure that is either incapable of discriminating CAN from controls, or that it provides little information that could not be gained from the SD of the RR intervals. In contrast, probabilities calculated using a density method based on sequences of RR intervals yield an entropy measure that provides good separation between groups of participants and provides information not available from the SD. The main contribution of this work is that different approaches to calculating probability may affect the success of detecting disease. Our results bring new clarity to the methods used to calculate the Renyi entropy in general, and in particular, to the successful detection of CAN.
Cornforth, David J.; Tarvainen, Mika P.; Jelinek, Herbert F.
2014-01-01
Cardiac autonomic neuropathy (CAN) is a disease that involves nerve damage leading to an abnormal control of heart rate. An open question is to what extent this condition is detectable from heart rate variability (HRV), which provides information only on successive intervals between heart beats, yet is non-invasive and easy to obtain from a three-lead ECG recording. A variety of measures may be extracted from HRV, including time domain, frequency domain, and more complex non-linear measures. Among the latter, Renyi entropy has been proposed as a suitable measure that can be used to discriminate CAN from controls. However, all entropy methods require estimation of probabilities, and there are a number of ways in which this estimation can be made. In this work, we calculate Renyi entropy using several variations of the histogram method and a density method based on sequences of RR intervals. In all, we calculate Renyi entropy using nine methods and compare their effectiveness in separating the different classes of participants. We found that the histogram method using single RR intervals yields an entropy measure that is either incapable of discriminating CAN from controls, or that it provides little information that could not be gained from the SD of the RR intervals. In contrast, probabilities calculated using a density method based on sequences of RR intervals yield an entropy measure that provides good separation between groups of participants and provides information not available from the SD. The main contribution of this work is that different approaches to calculating probability may affect the success of detecting disease. Our results bring new clarity to the methods used to calculate the Renyi entropy in general, and in particular, to the successful detection of CAN. PMID:25250311
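The two probability-estimation strategies contrasted above differ only in what is counted before the entropy formula is applied. A sketch of the Renyi entropy plus a sequence-based probability estimate (the quantization bin width and sequence length here are illustrative choices, and the paper's density method is not exactly this histogram-of-sequences):

```python
import numpy as np
from collections import Counter

def renyi_entropy(p, alpha):
    """H_alpha = log(sum p_i^alpha) / (1 - alpha); alpha -> 1 recovers
    the Shannon entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

def sequence_probs(rr_ms, length=3, bin_ms=50):
    """Probabilities of quantized RR-interval sequences, in the spirit of
    the sequence-based estimation contrasted with single-interval
    histograms above."""
    q = np.floor(np.asarray(rr_ms) / bin_ms).astype(int)
    seqs = [tuple(q[i:i + length]) for i in range(len(q) - length + 1)]
    counts = Counter(seqs)
    total = sum(counts.values())
    return [c / total for c in counts.values()]

rr = 800 + 50 * np.random.default_rng(2).standard_normal(1000)  # synthetic
print(renyi_entropy(sequence_probs(rr), alpha=2.0))
```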
Numerical Calculation Method for Prediction of Ground-borne Vibration near Subway Tunnel
NASA Astrophysics Data System (ADS)
Tsuno, Kiwamu; Furuta, Masaru; Abe, Kazuhisa
This paper describes the development of a prediction method for ground-borne vibration from railway tunnels. Field measurements were carried out in a subway shield tunnel, in the ground and on the ground surface. The vibration generated in the tunnel was calculated by means of a train/track/tunnel interaction model and was compared with the measurement results. Wave propagation in the ground was calculated using an empirical model, proposed on the basis of the relationship between frequency and the material damping coefficient α, in order to predict the attenuation in the ground in consideration of its frequency characteristics. Numerical calculation using 2-dimensional FE analysis was also carried out in this research. The comparison between calculated and measured results shows that the prediction method, including the model for train/track/tunnel interaction and that for wave propagation, is applicable to the prediction of train-induced vibration propagated from railway tunnels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, A; Pasciak, A
Purpose: Skin dosimetry is important for fluoroscopically-guided interventions, as peak skin doses (PSD) that result in skin reactions can be reached during these procedures. The purpose of this study was to assess the accuracy of different indirect dose estimates and to determine whether PSD can be calculated within ±50% for embolization procedures. Methods: PSD were measured directly using radiochromic film for 41 consecutive embolization procedures. Indirect dose metrics from the procedures were collected, including reference air kerma (RAK). Four different estimates of PSD were calculated and compared, along with RAK, to the measured PSD. The indirect estimates included a standard method, use of detailed information from the RDSR, and two simplified calculation methods. Indirect dosimetry was compared with direct measurements, including an analysis of the uncertainty associated with film dosimetry. Factors affecting the accuracy of the indirect estimates were examined. Results: PSD calculated with the standard calculation method were within ±50% for all 41 procedures. This was also true for a simplified method using a single source-to-patient distance (SPD) for all calculations. RAK was within ±50% for all but one procedure. Cases for which RAK or calculated PSD exhibited large differences from the measured PSD were analyzed, and two causative factors were identified: 'extreme' SPD and large contributions to RAK from rotational angiography or runs acquired at large gantry angles. When calculated uncertainty limits [-12.8%, 10%] were applied to directly measured PSD, most indirect PSD estimates remained within ±50% of the measured PSD. Conclusions: Using indirect dose metrics, PSD can be determined within ±50% for embolization procedures, and usually to within ±35%. RAK can be used without modification to set notification limits and substantial radiation dose levels. These results can be extended to similar procedures, including vascular and interventional oncology. Film dosimetry is likely an unnecessary effort for these types of procedures.
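A skeletal version of the kind of RAK-to-PSD conversion such standard methods perform: inverse-square correction from the interventional reference point to the actual skin plane, a backscatter factor, and table/pad attenuation. Every parameter value below is an illustrative assumption, not a value from this study.

```python
def psd_estimate_mGy(rak_mGy, d_ref_cm=60.0, d_skin_cm=75.0,
                     bsf=1.35, table_factor=0.8):
    """Indirect peak-skin-dose estimate from reference air kerma:
    inverse-square from the reference plane (distance d_ref from the
    focal spot) to the skin plane (d_skin), times backscatter and
    table/pad transmission. All defaults are illustrative."""
    return rak_mGy * (d_ref_cm / d_skin_cm) ** 2 * bsf * table_factor

print(psd_estimate_mGy(3000.0))   # 3 Gy RAK -> ~2.1 Gy estimated PSD
```

The 'extreme SPD' failure mode reported above corresponds to d_skin deviating strongly from the assumed geometry, which this inverse-square term makes explicit.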
Measuring digit lengths with 3D digital stereophotogrammetry: A comparison across methods.
Gremba, Allison; Weinberg, Seth M
2018-05-09
We compared digital 3D stereophotogrammetry to more traditional measurement methods (direct anthropometry and 2D scanning) to capture digit lengths and ratios. The length of the second and fourth digits was measured by each method and the second-to-fourth ratio was calculated. For each digit measurement, intraobserver agreement was calculated for each of the three collection methods. Further, measurements from the three methods were compared directly to one another. Agreement statistics included the intraclass correlation coefficient (ICC) and technical error of measurement (TEM). Intraobserver agreement statistics for the digit length measurements were high for all three methods; ICC values exceeded 0.97 and TEM values were below 1 mm. For digit ratio, intraobserver agreement was also acceptable for all methods, with direct anthropometry exhibiting lower agreement (ICC = 0.87) compared to indirect methods. For the comparison across methods, the overall agreement was high for digit length measurements (ICC values ranging from 0.93 to 0.98; TEM values below 2 mm). For digit ratios, high agreement was observed between the two indirect methods (ICC = 0.93), whereas indirect methods showed lower agreement when compared to direct anthropometry (ICC < 0.75). Digit measurements and derived ratios from 3D stereophotogrammetry showed high intraobserver agreement (similar to more traditional methods) suggesting that landmarks could be placed reliably on 3D hand surface images. While digit length measurements were found to be comparable across all three methods, ratios derived from direct anthropometry tended to be higher than those calculated indirectly from 2D or 3D images. © 2018 Wiley Periodicals, Inc.
Xu, Shenghua; Liu, Jie; Sun, Zhiwei
2006-12-01
Turbidity measurement for the absolute coagulation rate constants of suspensions has been extensively adopted because of its simplicity and easy implementation. A key factor in deriving the rate constant from experimental data is how to theoretically evaluate the so-called optical factor involved in calculating the extinction cross section of doublets formed during aggregation. In a previous paper, we have shown that compared with other theoretical approaches, the T-matrix method provides a robust solution to this problem and is effective in extending the applicability range of the turbidity methodology, as well as increasing measurement accuracy. This paper will provide a more comprehensive discussion of the physical insight for using the T-matrix method in turbidity measurement and associated technical details. In particular, the importance of ensuring the correct value for the refractive indices for colloidal particles and the surrounding medium used in the calculation is addressed, because the indices generally vary with the wavelength of the incident light. The comparison of calculated results with experiments shows that the T-matrix method can correctly calculate optical factors even for large particles, whereas other existing theories cannot. In addition, the data of the optical factor calculated by the T-matrix method for a range of particle radii and incident light wavelengths are listed.
The Individual Virtual Eye: a Computer Model for Advanced Intraocular Lens Calculation
Einighammer, Jens; Oltrup, Theo; Bende, Thomas; Jean, Benedikt
2010-01-01
Purpose: To describe the individual virtual eye, a computer model of a human eye with respect to its optical properties. It is based on measurements of an individual person, and one of its major applications is calculating intraocular lenses (IOLs) for cataract surgery. Methods: The model is constructed from an eye's geometry, including the axial length and topographic measurements of the anterior corneal surface. All optical components of a pseudophakic eye are modeled with computer-scientific methods. A spline-based interpolation method efficiently includes data from corneal topographic measurements. The geometrical optical properties, such as the wavefront aberration, are simulated with real ray tracing using Snell's law. Optical components can be calculated using computer-scientific optimization procedures. The geometry of customized aspheric IOLs was calculated for 32 eyes and the resulting wavefront aberration was investigated. Results: The more complex the calculated IOL is, the lower the residual wavefront error is. Spherical IOLs are only able to correct for defocus, while toric IOLs also eliminate astigmatism. Spherical aberration is additionally reduced by aspheric and toric aspheric IOLs. The efficient implementation of time-critical numerical ray-tracing and optimization procedures allows for short calculation times, which may lead to a practicable method integrated into some device. Conclusions: The individual virtual eye allows for simulations and calculations regarding geometrical optics for individual persons. This leads to clinical applications such as IOL calculation, with the potential to overcome the limitations of current calculation methods based on paraxial optics, as exemplified here by the calculation of customized aspheric IOLs.
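The "real ray tracing using Snell's law" at the heart of such a model is, per surface, the standard vector refraction formula. A minimal sketch of that single step (the full model's surfaces, spline interpolation and optimization are not reproduced; the example indices are illustrative):

```python
import numpy as np

def refract(i, n, eta):
    """Refract unit direction i at a surface with unit normal n (oriented
    against i); eta = n1/n2. Returns None on total internal reflection."""
    cos_i = -np.dot(i, n)
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0.0:
        return None                          # total internal reflection
    return eta * i + (eta * cos_i - np.sqrt(k)) * n

ray = np.array([0.0, np.sin(0.3), -np.cos(0.3)])   # incoming unit vector
normal = np.array([0.0, 0.0, 1.0])                  # local surface normal
print(refract(ray, normal, 1.0 / 1.376))            # air into cornea (n~1.376)
```

Tracing a bundle of such rays through cornea, IOL and vitreous and evaluating the exit wavefront is what replaces the paraxial vergence formulas of conventional IOL calculation.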
Measurement of Angle Kappa Using Ultrasound Biomicroscopy and Corneal Topography.
Yeo, Joon Hyung; Moon, Nam Ju; Lee, Jeong Kyu
2017-06-01
To introduce a new convenient and accurate method to measure the angle kappa using ultrasound biomicroscopy (UBM) and corneal topography. Data from 42 eyes (13 males and 29 females) were analyzed in this study. The angle kappa was measured using Orbscan II and calculated with UBM and corneal topography. The angle kappa of the dominant eye was compared with measurements by Orbscan II. The mean patient age was 36.4 ± 13.8 years. The average angle kappa measured by Orbscan II was 3.98° ± 1.12°, while the average angle kappa calculated with UBM and corneal topography was 3.19° ± 1.15°. The difference in angle kappa measured by the two methods was statistically significant (p < 0.001). The two methods showed good reliability (intraclass correlation coefficient, 0.671; p < 0.001). Bland-Altman plots were used to demonstrate the agreement between the two methods. We designed a new method using UBM and corneal topography to calculate the angle kappa. This method is convenient to use and allows for measurement of the angle kappa without an expensive device. © 2017 The Korean Ophthalmological Society
GIS-based measurements that combine native raster and native vector data are commonly used to assess environmental quality. Most of these measurements can be calculated using either raster or vector data formats and processing methods. Raster processes are more commonly used beca...
A method for estimating mount isolations of powertrain mounting systems
NASA Astrophysics Data System (ADS)
Qin, Wu; Shangguan, Wen-Bin; Luo, Guohai; Xie, Zhengchao
2018-07-01
A method for calculating the isolation ratios of mounts in a powertrain mounting system (PMS) is proposed, assuming the powertrain is a rigid body and using the identified powertrain excitation forces and the measured IPI (input point inertance) of the mounting points at the body side. With the measured accelerations of mounts at the powertrain and body sides of one vehicle (Vehicle A), the excitation forces of the powertrain are first identified using a conventional method. Another vehicle (Vehicle B) has the same powertrain as Vehicle A, but a different body and mount configuration. The accelerations of mounts at the powertrain side of the PMS on Vehicle B are calculated using the powertrain excitation forces identified from Vehicle A. The identified powertrain forces are validated by comparing the calculated and measured accelerations of mounts at the powertrain side on Vehicle B. A method for calculating the acceleration of a mounting point at the body side of Vehicle B is presented using the identified powertrain excitation forces and the measured IPI at the connecting point between the car body and the mount. Using the calculated accelerations of mounts at the powertrain side and body side in different directions, the isolation ratios of a mount are then estimated. The isolation ratios are validated by experiment, which verifies the proposed methods for estimating the isolation ratios of mounts. The developed method is beneficial for optimizing mount stiffness to meet mount isolation requirements before a prototype is built.
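Once accelerations on both sides of a mount are available, the isolation ratio itself is a one-line computation. The dB form below is a common convention in the mount-isolation literature and is assumed here, since the abstract does not define the ratio explicitly:

```python
import numpy as np

def isolation_ratio_db(acc_powertrain, acc_body):
    """Vibration isolation of a mount in one direction, in dB, from the
    acceleration amplitudes on its powertrain (active) and body (passive)
    sides; larger values mean better isolation."""
    return 20.0 * np.log10(np.abs(acc_powertrain) / np.abs(acc_body))

print(isolation_ratio_db(2.4, 0.12))   # ~26 dB: typically regarded as good
```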
Thermodynamic evaluation of transonic compressor rotors using the finite volume approach
NASA Technical Reports Server (NTRS)
Nicholson, S.; Moore, J.
1986-01-01
A method was developed which calculates two-dimensional, transonic, viscous flow in ducts. The finite volume, time marching formulation is used to obtain steady flow solutions of the Reynolds-averaged form of the Navier-Stokes equations. The entire calculation is performed in the physical domain. The method is currently limited to the calculation of attached flows. The features of the current method can be summarized as follows. Control volumes are chosen so that smoothing of flow properties, typically required for stability, is not needed. Different time steps are used in the different governing equations to improve the convergence speed of the viscous calculations. A new pressure interpolation scheme is introduced which improves the shock capturing ability of the method. A multi-volume method for pressure changes in the boundary layer allows calculations which use very long and thin control volumes. A special discretization technique is also used to stabilize these calculations. A special formulation of the energy equation is used to provide improved transient behavior of solutions which use the full energy equation. The method is then compared with a wide variety of test cases. The freestream Mach numbers range from 0.075 to 2.8 in the calculations. Transonic viscous flow in a converging-diverging nozzle is calculated with the method; the Mach number upstream of the shock is approximately 1.25. The agreement between the calculated and measured shock strength and total pressure losses is good. Essentially incompressible turbulent boundary layer flow in an adverse pressure gradient is calculated, and the computed distributions of mean velocity and shear stress are in good agreement with the measurements. At the other end of the Mach number range, a flat plate turbulent boundary layer with a freestream Mach number of 2.8 is calculated using the full energy equation; the computed total temperature distribution and recovery factor agree well with the measurements when a variable Prandtl number is used through the boundary layer.
TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR.
Kurosawa, Masahiko
2005-01-01
For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the in-reactor pressure vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a BWR core in a three-dimensional geometry model, but has difficulty with fine geometrical modelling and requires huge computer resources. On the other hand, the MCNP code enables calculation of the neutron flux with a detailed geometry model, but requires a very long sampling time to obtain a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems in each code. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using that flux distribution. The coupling method will be used in the same way as the DOT-DOMINO-MORSE code system. This TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and the radioactivity calculations based on the resulting neutron flux were compared with the measured data.
How accurately can the peak skin dose in fluoroscopy be determined using indirect dose metrics?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, A. Kyle, E-mail: kyle.jones@mdanderson.org; Ensor, Joe E.; Pasciak, Alexander S.
Purpose: Skin dosimetry is important for fluoroscopically-guided interventions, as peak skin doses (PSD) that result in skin reactions can be reached during these procedures. There is no consensus as to whether or not indirect skin dosimetry is sufficiently accurate for fluoroscopically-guided interventions. However, measuring PSD with film is difficult and the decision to do so must be made a priori. The purpose of this study was to assess the accuracy of different types of indirect dose estimates and to determine if PSD can be calculated within ±50% using indirect dose metrics for embolization procedures. Methods: PSD were measured directly using radiochromic film for 41 consecutive embolization procedures at two sites. Indirect dose metrics from the procedures were collected, including reference air kerma. Four different estimates of PSD were calculated from the indirect dose metrics and compared along with reference air kerma to the measured PSD for each case. The four indirect estimates included a standard calculation method, the use of detailed information from the radiation dose structured report, and two simplified calculation methods based on the standard method. Indirect dosimetry results were compared with direct measurements, including an analysis of uncertainty associated with film dosimetry. Factors affecting the accuracy of the different indirect estimates were examined. Results: When using the standard calculation method, calculated PSD were within ±35% for all 41 procedures studied. Calculated PSD were within ±50% for a simplified method using a single source-to-patient distance for all calculations. Reference air kerma was within ±50% for all but one procedure. Cases for which reference air kerma or calculated PSD exhibited large (±35%) differences from the measured PSD were analyzed, and two main causative factors were identified: unusually small or large source-to-patient distances and large contributions to reference air kerma from cone beam computed tomography or acquisition runs acquired at large primary gantry angles. When calculated uncertainty limits [−12.8%, 10%] were applied to directly measured PSD, most indirect PSD estimates remained within ±50% of the measured PSD. Conclusions: Using indirect dose metrics, PSD can be determined within ±35% for embolization procedures. Reference air kerma can be used without modification to set notification limits and substantial radiation dose levels, provided the displayed reference air kerma is accurate. These results can reasonably be extended to similar procedures, including vascular and interventional oncology. Considering these results, film dosimetry is likely an unnecessary effort for these types of procedures when indirect dose metrics are available.
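For context, the "standard calculation" family of indirect estimates corrects the displayed reference air kerma for geometry and converts it to skin dose. A hedged sketch of that style of calculation; the correction-factor values here are placeholders, not those used in the study:

```python
def psd_estimate(k_ref, d_ref, d_skin,
                 backscatter=1.3, table_transmission=0.85, f_tissue=1.06):
    """Indirect peak-skin-dose estimate (Gy) from reference air kerma
    k_ref (Gy). d_ref: source-to-reference-point distance (cm);
    d_skin: actual source-to-skin distance (cm). The backscatter,
    table-transmission, and air-to-tissue factors are illustrative
    placeholders."""
    inverse_square = (d_ref / d_skin) ** 2
    return k_ref * inverse_square * backscatter * table_transmission * f_tissue

# Example: 2.5 Gy reference air kerma, reference point at 60 cm,
# skin actually at 55 cm from the focal spot.
print(round(psd_estimate(2.5, 60.0, 55.0), 2))
```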
Kurasawa, Shintaro; Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun
2017-11-23
This paper describes and verifies a non-invasive blood glucose measurement method using a fiber Bragg grating (FBG) sensor system. The FBG sensor is installed over the radial artery, and the strain (pulse wave) propagated from the heartbeat is measured. The measured pulse wave signal was used as a collection of feature vectors for multivariate analysis aiming to determine the blood glucose level. The time axis of the pulse wave signal was normalized by two signal processing methods: the shortest-time-cut process and the 1-s-normalization process. The measurement accuracy of the calculated blood glucose level was compared between these signal processing methods. It was impossible to calculate a blood glucose level exceeding 200 mg/dL with the calibration curve constructed by the shortest-time-cut process. With the 1-s-normalization process, the measurement accuracy of the blood glucose level was improved, and a blood glucose level exceeding 200 mg/dL could be calculated. By examining the loading vector of each calibration curve that calculated the blood glucose level with high measurement accuracy, we found that the gradient of the peak of the pulse wave in the acceleration plethysmogram had a strong effect.
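The 1-s-normalization step can be pictured as resampling every extracted pulse onto a fixed-length time base so that all pulses yield feature vectors of equal length. A minimal sketch, assuming linear interpolation and an illustrative sampling rate (the paper's exact procedure is not specified here):

```python
import numpy as np

def normalize_pulse_to_1s(pulse, fs=1000, n_out=1000):
    """Resample one pulse-wave segment (sampled at fs Hz) onto a fixed
    grid of n_out points (representing 1 s) by linear interpolation,
    so every pulse yields a feature vector of identical length."""
    t = np.arange(len(pulse)) / fs             # original time axis
    t_norm = np.linspace(0.0, t[-1], n_out)    # stretched to n_out points
    return np.interp(t_norm, t, pulse)

# Example: a 0.8 s pulse becomes a 1000-point feature vector.
fs = 1000
pulse = np.sin(np.linspace(0, np.pi, int(0.8 * fs)))
vec = normalize_pulse_to_1s(pulse, fs)
print(vec.shape)
```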
Free-Space Time-Domain Method for Measuring Thin Film Dielectric Properties
Li, Ming; Zhang, Xi-Cheng; Cho, Gyu Cheon
2000-05-02
A non-contact method for determining the index of refraction or dielectric constant of a thin film on a substrate at a desired frequency in the GHz to THz range having a corresponding wavelength larger than the thickness of the thin film (which may be only a few microns). The method comprises impinging the desired-frequency beam in free space upon the thin film on the substrate and measuring the phase change and field reflectance of the reflected beam for a plurality of incident angles over a range of angles that includes the Brewster's angle for the thin film. The index of refraction for the thin film is determined by applying the Fresnel equations to iteratively calculate a phase change and a field reflectance at each of the incident angles, and selecting the index of refraction that provides the best mathematical curve fit with both the dataset of measured phase changes and the dataset of measured field reflectances over all incident angles. The dielectric constant for the thin film can then be calculated as the square of the index of refraction.
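The fitting logic, computing Fresnel reflectance over a range of incidence angles spanning Brewster's angle and selecting the index that best fits the measured magnitude and phase, can be sketched as follows. For brevity this uses a single air-to-film interface rather than the patent's full thin-film-on-substrate Fresnel equations, so it illustrates the curve-fit idea only:

```python
import numpy as np

def rp(n1, n2, theta_i):
    """Fresnel amplitude reflectance for p-polarized light at a single
    n1 -> n2 interface, as a function of incidence angle (complex-safe)."""
    cos_i = np.cos(theta_i)
    sin_t = n1 / n2 * np.sin(theta_i)
    cos_t = np.sqrt(1 - sin_t**2 + 0j)
    return (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)

def fit_index(theta, r_meas, phase_meas, candidates):
    """Pick the index whose computed |r| and phase best fit both measured
    datasets simultaneously (least squares over all angles)."""
    best, best_err = None, np.inf
    for n in candidates:
        r = rp(1.0, n, theta)
        err = (np.sum((np.abs(r) - r_meas) ** 2) +
               np.sum((np.angle(r) - phase_meas) ** 2))
        if err < best_err:
            best, best_err = n, err
    return best

theta = np.radians(np.linspace(40, 80, 21))  # angles spanning Brewster's angle
r_true = rp(1.0, 1.95, theta)                # synthetic "measurement"
print(fit_index(theta, np.abs(r_true), np.angle(r_true),
                np.arange(1.5, 2.5, 0.01)))  # recovers ~1.95
```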
Study on peak shape fitting method in radon progeny measurement.
Yang, Jinmin; Zhang, Lei; Abdumomin, Kadir; Tang, Yushi; Guo, Qiuju
2015-11-01
Alpha spectrum measurement is one of the most important methods for measuring radon progeny concentrations in the environment. However, the accuracy of this method is affected by peak tailing due to the energy losses of alpha particles. This article presents a peak shape fitting method that can overcome the peak tailing problem in most situations. On a typical measured alpha spectrum, consecutive peaks overlap even when their energies are not close to each other, and it is difficult to calculate the exact count of each peak. The peak shape fitting method uses a combination of Gaussian and exponential functions, which can depict the features of those peaks, to fit the measured curve. It provides the net counts of each peak explicitly, which are used in the Kerr calculation procedure for radon progeny concentration measurement. The results show that the fitted curve agrees well with the measured curve, and the influence of the peak tailing is reduced. The method was further validated by the agreement between radon equilibrium equivalent concentrations based on this method and the measured values of some commercial radon monitors, such as EQF3220 and WLx. In addition, this method improves the accuracy of individual radon progeny concentration measurements. Especially for the (218)Po peak, after eliminating the peak tailing influence, the calculated (218)Po concentration was reduced by 21%. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
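One common way to realize a "combination of Gaussian and exponential functions" for alpha peaks is an exponentially modified Gaussian with the tail on the low-energy side. A sketch of fitting two overlapping peaks with scipy; the parameterization and the synthetic peak energies are assumptions, not the paper's exact model:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def emg_low_tail(x, area, mu, sigma, lam):
    """Exponentially modified Gaussian with the tail on the low-energy
    side, a common shape for alpha peaks degraded by energy loss."""
    arg = (lam / 2.0) * (lam * sigma**2 + 2.0 * (x - mu))
    return (area * lam / 2.0) * np.exp(arg) * \
        erfc((x - mu + lam * sigma**2) / (np.sqrt(2.0) * sigma))

def two_peaks(x, a1, mu1, a2, mu2, sigma, lam):
    """Sum of two overlapping peaks sharing one width and tail constant."""
    return emg_low_tail(x, a1, mu1, sigma, lam) + \
        emg_low_tail(x, a2, mu2, sigma, lam)

# Synthetic spectrum: 218Po (6.00 MeV) and 214Po (7.69 MeV) alpha peaks.
rng = np.random.default_rng(0)
x = np.linspace(4.0, 8.5, 450)
y = two_peaks(x, 100, 6.00, 80, 7.69, 0.05, 8.0)
y_noisy = y + rng.normal(0.0, 0.5, x.size)

popt, _ = curve_fit(two_peaks, x, y_noisy,
                    p0=[90, 6.0, 90, 7.7, 0.06, 5.0],
                    bounds=([0, 5.5, 0, 7.2, 0.01, 1],
                            [np.inf, 6.5, np.inf, 8.2, 0.2, 20]))
print(popt[:4])  # explicit net areas and positions of each peak
```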
ERIC Educational Resources Information Center
Hagedorn, Linda Serra
1998-01-01
A study explored two distinct methods of calculating a precise measure of gender-based wage differentials among college faculty. The first estimation considered wage differences using a formula based on human capital; the second included compensation for past discriminatory practices. Both measures were used to predict three specific aspects of…
On the extraction of pressure fields from PIV velocity measurements in turbines
NASA Astrophysics Data System (ADS)
Villegas, Arturo; Diez, Fancisco J.
2012-11-01
In this study, the pressure field for a water turbine is derived from particle image velocimetry (PIV) measurements. Measurements are performed in a recirculating water channel facility. The PIV measurements include calculating the tangential and axial forces applied to the turbine by solving the integral momentum equation around the airfoil. The results are compared with the forces obtained from Blade Element Momentum theory (BEMT). Forces are calculated using three different methods. In the first method, the pressure fields are obtained from PIV velocity fields by solving the Poisson equation, with boundary conditions obtained from the Navier-Stokes momentum equations. In the second method, the pressure at the boundaries is determined by spatial integration of the pressure gradients along the boundaries. In the third method, applicable only to incompressible, inviscid, irrotational, and steady flow, the pressure is calculated using the Bernoulli equation; this approximate pressure is known to be accurate far from the airfoil and outside of the wake for steady flows. Additionally, the pressure is used to solve for the force on the blade from the integral momentum equation. Of the three methods proposed for obtaining pressure and forces from PIV measurements, the first, based on solving the Poisson equation, provides the best match to the BEMT calculations.
Real-Time Stability Margin Measurements for X-38 Robustness Analysis
NASA Technical Reports Server (NTRS)
Bosworth, John T.; Stachowiak, Susan J.
2005-01-01
A method has been developed for real-time stability margin measurement calculations. The method relies on tailored forced excitation targeted at a specific frequency range. Computation of the frequency response is matched to the specific frequencies contained in the excitation. A recursive Fourier transformation is used to make the method compatible with real-time calculation. The method was incorporated into the X-38 nonlinear simulation and applied to an X-38 robustness test. X-38 stability margins were calculated for different variations in aerodynamic and mass properties over the vehicle flight trajectory. The new method showed results comparable to more traditional stability analysis techniques, while providing more complete coverage and increased efficiency.
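A recursive (sliding) discrete Fourier transform is what makes the frequency-response computation compatible with real-time operation: each new sample updates a single frequency bin in O(1). A generic sketch of that recurrence, not the X-38 flight code:

```python
import numpy as np

class SlidingDFT:
    """Recursively updated single-bin DFT over the most recent N samples.
    Each new sample costs one complex multiply-add instead of a full FFT."""
    def __init__(self, n, k):
        self.n = n
        self.twiddle = np.exp(2j * np.pi * k / n)
        self.buf = np.zeros(n)
        self.i = 0
        self.s = 0.0 + 0.0j

    def push(self, x):
        old = self.buf[self.i]          # sample leaving the window
        self.buf[self.i] = x
        self.i = (self.i + 1) % self.n
        # classic sliding-DFT recurrence: S <- (S + x_new - x_old) * W
        self.s = (self.s + x - old) * self.twiddle
        return self.s

# Track the response at bin k=8 of a 256-sample window in real time.
sdft = SlidingDFT(n=256, k=8)
t = np.arange(1024)
for sample in np.sin(2 * np.pi * 8 * t / 256):
    bin_value = sdft.push(sample)
print(abs(bin_value))  # ~128 = N/2 for a unit-amplitude sine at bin 8
```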
Wei, Guocui; Zhan, Tingting; Zhan, Xiancheng; Yu, Lan; Wang, Xiaolan; Tan, Xiaoying; Li, Chengrong
2016-09-01
The osmotic pressure of glucose solutions over a wide concentration range was calculated using the ASOG model and experimentally determined by our newly reported air humidity osmometry. The air humidity osmometry measurements were compared with well-established freezing point osmometry measurements and ASOG model calculations at low concentrations, and with ASOG model calculations alone at high concentrations, where no standard experimental method could serve as a reference. The results indicate that air humidity osmometry measurements are comparable to ASOG model calculations over a wide concentration range, while at low concentrations freezing point osmometry measurements show better agreement with ASOG model calculations.
NASA Astrophysics Data System (ADS)
Eason, Thomas J.; Bond, Leonard J.; Lozev, Mark G.
2016-02-01
The accuracy, precision, and reliability of ultrasonic thickness structural health monitoring systems are discussed, including the influence of systematic and environmental factors. To quantify some of these factors, a compression wave ultrasonic thickness structural health monitoring experiment is conducted on a flat calibration block at ambient temperature with forty four thin-film sol-gel transducers and various time-of-flight thickness calculation methods. As an initial calibration, the voltage response signals from each sensor are used to determine the common material velocity as well as the signal offset unique to each calculation method. Next, the measurement precision of the thickness error of each method is determined with a proposed weighted censored relative maximum likelihood analysis technique incorporating the propagation of asymmetric measurement uncertainty. The results are presented as upper and lower confidence limits analogous to the a90/95 terminology used in industry recognized Probability-of-Detection assessments. Future work is proposed to apply the statistical analysis technique to quantify measurement precision of various thickness calculation methods under different environmental conditions such as high temperature, rough back-wall surface, and system degradation with an intended application to monitor naphthenic acid corrosion in oil refineries.
Device and Method for Continuously Equalizing the Charge State of Lithium Ion Battery Cells
NASA Technical Reports Server (NTRS)
Schwartz, Paul D. (Inventor); Roufberg, Lewis M. (Inventor); Martin, Mark N. (Inventor)
2015-01-01
A method of equalizing charge states of individual cells in a battery includes measuring a previous cell voltage for each cell, measuring a previous shunt current for each cell, calculating, based on the previous cell voltage and the previous shunt current, an adjusted cell voltage for each cell, determining a lowest adjusted cell voltage from among the calculated adjusted cell voltages, and calculating a new shunt current for each cell.
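A hedged sketch of one equalization cycle as claimed. The form of the voltage adjustment and the shunt-current law are assumptions (modeled here as an IR correction and a proportional controller); the patent abstract does not specify them:

```python
def equalization_step(cell_voltages, shunt_currents, r_eff=0.05, gain=2.0):
    """One equalization cycle: estimate each cell's unloaded voltage from
    the previous voltage and shunt current, find the lowest adjusted
    voltage, and command new shunt currents (A) proportional to each
    cell's excess. r_eff and gain are illustrative constants."""
    adjusted = [v + i * r_eff                      # undo shunt loading (assumed IR term)
                for v, i in zip(cell_voltages, shunt_currents)]
    v_min = min(adjusted)                          # lowest adjusted cell voltage
    return [gain * (v - v_min) for v in adjusted]  # new shunt current per cell

print(equalization_step([4.10, 4.15, 4.08], [0.0, 0.2, 0.0]))
```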
NASA Astrophysics Data System (ADS)
Hartman, H.; Engström, L.; Lundberg, H.; Nilsson, H.; Quinet, P.; Fivet, V.; Palmeri, P.; Malcheva, G.; Blagoev, K.
2017-04-01
Aims: This work reports new experimental radiative lifetimes and calculated oscillator strengths for transitions from 3d84d levels of astrophysical interest in singly ionized nickel. Methods: Radiative lifetimes of seven high-lying levels of even parity in Ni II (98 400-100 600 cm-1) have been measured using the time-resolved laser-induced fluorescence method. Two-step photon excitation of ions produced by laser ablation has been utilized to populate the levels. Theoretical calculations of the radiative lifetimes of the measured levels and transition probabilities from these levels are reported. The calculations have been performed using a pseudo-relativistic Hartree-Fock method, taking into account core polarization effects. Results: A new set of transition probabilities and oscillator strengths has been deduced for 477 Ni II transitions of astrophysical interest in the spectral range 194-520 nm depopulating even parity 3d84d levels. The new calculated gf-values are, on the average, about 20% higher than a previous calculation and yield lifetimes within 5% of the experimental values.
Method for measuring anterior chamber volume by image analysis
NASA Astrophysics Data System (ADS)
Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli
2007-12-01
Anterior chamber volume (ACV) is very important for an oculist to make rational pathological diagnosis as to patients who have some optic diseases such as glaucoma and etc., yet it is always difficult to be measured accurately. In this paper, a method is devised to measure anterior chamber volumes based on JPEG-formatted image files that have been transformed from medical images using the anterior-chamber optical coherence tomographer (AC-OCT) and corresponding image-processing software. The corresponding algorithms for image analysis and ACV calculation are implemented in VC++ and a series of anterior chamber images of typical patients are analyzed, while anterior chamber volumes are calculated and are verified that they are in accord with clinical observation. It shows that the measurement method is effective and feasible and it has potential to improve accuracy of ACV calculation. Meanwhile, some measures should be taken to simplify the handcraft preprocess working as to images.
Computer-assisted uncertainty assessment of k0-NAA measurement results
NASA Astrophysics Data System (ADS)
Bučar, T.; Smodiš, B.
2008-10-01
In quantifying measurement uncertainty of measurement results obtained by the k0-based neutron activation analysis (k0-NAA), a number of parameters should be considered and appropriately combined in deriving the final budget. To facilitate this process, a program ERON (ERror propagatiON) was developed, which computes uncertainty propagation factors from the relevant formulae and calculates the combined uncertainty. The program calculates uncertainty of the final result—mass fraction of an element in the measured sample—taking into account the relevant neutron flux parameters such as α and f, including their uncertainties. Nuclear parameters and their uncertainties are taken from the IUPAC database (V.P. Kolotov and F. De Corte, Compilation of k0 and related data for NAA). Furthermore, the program allows for uncertainty calculations of the measured parameters needed in k0-NAA: α (determined with either the Cd-ratio or the Cd-covered multi-monitor method), f (using the Cd-ratio or the bare method), Q0 (using the Cd-ratio or internal comparator method) and k0 (using the Cd-ratio, internal comparator or the Cd subtraction method). The results of calculations can be printed or exported to text or MS Excel format for further analysis. Special care was taken to make the calculation engine portable by allowing its incorporation into other applications (e.g., DLL and WWW server). Theoretical basis and the program are described in detail, and typical results obtained under real measurement conditions are presented.
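The combined uncertainty such a program reports follows the usual propagation law for uncorrelated inputs, u_c²(y) = Σᵢ (∂y/∂xᵢ)² u²(xᵢ). A generic numerical sketch of that computation (not ERON's implementation), with partial derivatives approximated by central differences:

```python
import numpy as np

def combined_uncertainty(f, x, u):
    """Combined standard uncertainty of y = f(x) for uncorrelated inputs:
    u_c = sqrt(sum((df/dx_i * u_i)^2)), partials by central differences."""
    x = np.asarray(x, dtype=float)
    grads = np.empty_like(x)
    for i in range(x.size):
        h = 1e-6 * max(abs(x[i]), 1.0)
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        grads[i] = (f(xp) - f(xm)) / (2.0 * h)
    return float(np.sqrt(np.sum((grads * np.asarray(u)) ** 2)))

# Toy example: a result proportional to a count-rate ratio times a factor.
f = lambda p: p[0] / p[1] * p[2]
print(combined_uncertainty(f, x=[1500.0, 980.0, 0.87], u=[40.0, 25.0, 0.02]))
```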
SGFSC: speeding the gene functional similarity calculation based on hash tables.
Tian, Zhen; Wang, Chunyu; Guo, Maozu; Liu, Xiaoyan; Teng, Zhixia
2016-11-04
In recent years, many measures of gene functional similarity have been proposed and widely used in all kinds of essential research. These methods are mainly divided into two categories: pairwise approaches and group-wise approaches. However, a common problem with these methods is their time consumption, especially when measuring the gene functional similarities of a large number of gene pairs. The problem of computational efficiency for pairwise approaches is even more prominent because they depend on combining semantic similarities. Therefore, the efficient measurement of gene functional similarity remains a challenging problem. To speed up current gene functional similarity calculation methods, a novel two-step computing strategy is proposed: (1) establish a hash table for each method to store essential information obtained from the Gene Ontology (GO) graph and (2) measure gene functional similarity based on the corresponding hash table. With the help of the hash table, there is no need to traverse the GO graph repeatedly for each method. The analysis of time complexity shows that the computational efficiency of these methods is significantly improved. We also implement a novel Speeding Gene Functional Similarity Calculation tool, namely SGFSC, which is bundled with seven typical measures using our proposed strategy. Further experiments show the great advantage of SGFSC in measuring gene functional similarity on the whole genomic scale. The proposed strategy is successful in speeding up current gene functional similarity calculation methods. SGFSC is an efficient tool that is freely available at http://nclab.hit.edu.cn/SGFSC . The source code of SGFSC can be downloaded from http://pan.baidu.com/s/1dFFmvpZ .
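The two-step strategy can be pictured as: traverse the GO graph once, caching each term's ancestor set and information content in hash tables, then answer every pairwise query by lookup. A schematic sketch on a toy DAG, assuming a Resnik-style pairwise measure; the real tool covers seven measures and the full ontology:

```python
import math

# Toy GO-like DAG: term -> set of parents, plus per-term annotation counts.
parents = {"t4": {"t2", "t3"}, "t3": {"t1"}, "t2": {"t1"}, "t1": set()}
counts = {"t1": 100, "t2": 40, "t3": 30, "t4": 5}
total = counts["t1"]

# Step 1: one-off traversal, results cached in hash tables (dicts).
ancestors, ic = {}, {}
def get_ancestors(t):
    if t not in ancestors:               # computed once, then O(1) lookup
        acc = set(parents[t])
        for p in parents[t]:
            acc |= get_ancestors(p)
        ancestors[t] = acc
    return ancestors[t]

for t in parents:
    get_ancestors(t)
    ic[t] = -math.log(counts[t] / total)  # information content per term

# Step 2: pairwise similarity by pure table lookup (Resnik: information
# content of the most informative common ancestor).
def resnik(t1, t2):
    common = (ancestors[t1] | {t1}) & (ancestors[t2] | {t2})
    return max(ic[t] for t in common) if common else 0.0

print(resnik("t4", "t2"))
```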
Accurate calculation of the geometric measure of entanglement for multipartite quantum states
NASA Astrophysics Data System (ADS)
Teng, Peiyuan
2017-07-01
This article proposes an efficient way of calculating the geometric measure of entanglement using tensor decomposition methods. The connection between these two concepts is explored using the tensor representation of the wavefunction. Numerical examples are benchmarked and compared. Furthermore, we search for highly entangled qubit states to show the applicability of this method.
Timothy G. Wade; James D. Wickham; Maliha S. Nash; Anne C. Neale; Kurt H. Riitters; K. Bruce Jones
2003-01-01
GIS-based measurements that combine native raster and native vector data are commonly used in environmental assessments. Most of these measurements can be calculated using either raster or vector data formats and processing methods. Raster processes are more commonly used because they can be significantly faster computationally...
FOEHN: The critical experiment for the Franco-German High Flux Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scharmer, K.; Eckert, H. G.
1991-01-01
A critical experiment for the Franco-German High Flux Reactor was carried out in the French reactor EOLE (CEN Cadarache). The purpose of the experiment was to check the calculation methods in a realistic geometry and to measure effects that can only be calculated imprecisely (e.g. beam hole effects). The structure of the experiment and the measurement and calculation methods are described. A detailed comparison between theoretical and experimental results was performed. 30 refs., 105 figs.
Method for controlling gas metal arc welding
Smartt, Herschel B.; Einerson, Carolyn J.; Watkins, Arthur D.
1989-01-01
The heat input and mass input in a Gas Metal Arc welding process are controlled by a method that comprises calculating appropriate values for weld speed, filler wire feed rate and an expected value for the welding current by algorithmic function means, applying such values for weld speed and filler wire feed rate to the welding process, measuring the welding current, comparing the measured current to the calculated current, using said comparison to calculate corrections for the weld speed and filler wire feed rate, and applying corrections.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurzeja, R.; Werth, D.; Buckley, R.
The Atmospheric Technology Group at SRNL developed a new method to detect signals from Weapons of Mass Destruction (WMD) activities in a time series of chemical measurements at a downwind location. This method was tested with radioxenon measured in Russia and Japan after the 2013 underground test in North Korea. This LDRD calculated the uncertainty in the method with the measured data and also for a case with the signal reduced to 1/10 its measured value. The research showed that the uncertainty in the calculated probability of origin from the NK test site was small enough to confirm the test. The method was also well-behaved for small signal strengths.
Budanec, M; Knezević, Z; Bokulić, T; Mrcela, I; Vrtar, M; Vekić, B; Kusić, Z
2008-12-01
This work studied the percent depth doses of (60)Co photon beams in the buildup region of a plastic phantom by LiF TLD measurements and by Monte Carlo calculations. An agreement within +/-1.5% was found between PDDs measured by TLD and calculated by the Monte Carlo method with the TLD in a plastic phantom. The dose in the plastic phantom was scored in voxels, with thickness scaled by physical and electron density. PDDs calculated by electron density scaling showed a better match with PDD(TLD)(MC); the difference is within +/-1.5% in the buildup region for square and rectangular field sizes.
Calculated and measured [Ca(2+)] in buffers used to calibrate Ca(2+) macroelectrodes.
McGuigan, John A S; Stumpff, Friederike
2013-05-01
The ionized concentration of calcium in physiological buffers ([Ca(2+)]) is normally calculated using either tabulated constants or software programs. To investigate the accuracy of such calculations, the [Ca(2+)] in EGTA [ethylene glycol-bis(β-aminoethylether)-N,N,N′,N′-tetraacetic acid], BAPTA [1,2-bis(o-aminophenoxy) ethane-N,N,N′,N′-tetraacetic acid], HEDTA [N-(2-hydroxyethyl)-ethylenediamine-N,N′,N′-triacetic acid], and NTA [N,N-bis(carboxymethyl)glycine] buffers was estimated using the ligand optimization method, and these measured values were compared with calculated values. All measurements overlapped in the pCa range of 3.51 (NTA) to 8.12 (EGTA). In all four buffer solutions, there was no correlation between measured and calculated values; the calculated values differed among themselves by factors varying from 1.3 (NTA) to 6.9 (EGTA). Independent measurements of EGTA purity and the apparent dissociation constants for HEDTA and NTA were not significantly different from the values estimated by the ligand optimization method, further substantiating the method. Using two calibration solutions of pCa 2.0 and 3.01 and seven buffers in the pCa range of 4.0-7.5, calibration of a Ca(2+) electrode over the pCa range of 2.0-7.5 became a routine procedure. It is proposed that such Ca(2+) calibration/buffer solutions be internationally defined and made commercially available to allow the precise measurement of [Ca(2+)] in biology. Copyright © 2013 Elsevier Inc. All rights reserved.
[A New Distance Metric between Different Stellar Spectra: the Residual Distribution Distance].
Liu, Jie; Pan, Jing-chang; Luo, A-li; Wei, Peng; Liu, Meng
2015-12-01
The distance metric is an important issue in spectroscopic survey data processing: it defines how the distance between two different spectra is calculated, and on this basis the classification, clustering, parameter measurement, and outlier mining of spectral data are carried out. The distance metric therefore affects the performance of all of these tasks. With the development of large-scale stellar spectral sky surveys, how to define a more efficient distance metric on stellar spectra has become a very important issue in spectral data processing. Addressing this problem, and taking full account of the characteristics and data features of stellar spectra, a new distance metric for stellar spectra, named the Residual Distribution Distance, is proposed. Unlike traditional distance calculations for stellar spectra, this method first normalizes the two spectra to the same scale, then calculates the residual at each wavelength, and uses the standard deviation of the residual spectrum as the distance measure. The metric can be used for stellar classification, clustering, measurement of stellar atmospheric physical parameters, and so on. This paper takes stellar subclass classification as an example to test the metric. The results show that the distance defined by the proposed method describes the gap between different types of spectra in classification more effectively than other methods and can be well applied in other related applications. The paper also studies the effect of the signal-to-noise ratio (SNR) on the performance of the proposed method: the smaller the SNR, the greater its impact on the distance, while for SNR larger than 10 the effect on classification performance is small.
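A minimal sketch of the metric as described: bring one spectrum onto the other's scale, form the residual at each wavelength, and take the residual's standard deviation as the distance. The least-squares scaling below is an assumption, since the exact normalization is not given in the abstract:

```python
import numpy as np

def residual_distribution_distance(f1, f2):
    """Distance between two spectra on a common wavelength grid:
    scale f2 onto f1 by least squares, then return the standard
    deviation of the residual spectrum."""
    a = np.dot(f1, f2) / np.dot(f2, f2)   # least-squares scale factor
    residual = f1 - a * f2
    return float(np.std(residual))

wave = np.linspace(4000, 9000, 500)
s1 = 1.0 + 0.3 * np.exp(-((wave - 6563) / 40.0) ** 2)          # toy H-alpha feature
s2 = 2.0 * (1.0 + 0.3 * np.exp(-((wave - 6563) / 40.0) ** 2))  # same shape, rescaled
print(residual_distribution_distance(s1, s2))                  # ~0: identical shapes
```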
Improved CVD Techniques for Depositing Passivation Layers of ICs
1975-10-01
[Extraction residue from the report's table of contents and figure captions. Recoverable fragments: "Calculations", "Precision", "Optional Measurements of Dense Oxide and Aluminum"; a figure of typical measurements of phosphorus K net radiation intensity as a function of the calculated phosphorus concentrations; and a note that stress is calculated by measuring the deformation of a substrate, usually in the form of a beam or a circular disc (the beam bending method).]
NASA Astrophysics Data System (ADS)
Dittrich, Paul-Gerald; Grunert, Fred; Ehehalt, Jörg; Hofmann, Dietrich
2015-03-01
The aim of this paper is to show that the colorimetric characterization of optically clear colored liquids can be performed with different measurement methods and their application-specific multichannel spectral sensors. The possible measurement methods are differentiated by the type of multichannel spectral sensor applied and therefore by their spectral resolution, measurement speed, measurement accuracy, and measurement cost. The paper describes how different types of multichannel spectral sensors are calibrated with different calibration methods and how the measured values can be used for further colorimetric calculations. The different measurement methods and the different application-specific calibration methods are explained methodically and theoretically. The paper demonstrates that, and how, different multichannel spectral sensor modules with different calibration methods can be used with smartpads to calculate measurement results both in the laboratory and in the field. A practical example given is the application of different multichannel spectral sensors to the colorimetric characterization of petroleum oils and fuels using the Saybolt color scale.
Brittnacher, Mitchell J; Heltshe, Sonya L; Hayden, Hillary S; Radey, Matthew C; Weiss, Eli J; Damman, Christopher J; Zisman, Timothy L; Suskind, David L; Miller, Samuel I
2016-01-01
Comparative analyses of gut microbiomes in clinical studies of human diseases typically rely on the identification and quantification of species or genes. In addition to exploring specific functional characteristics of the microbiome and the potential significance of species diversity or expansion, microbiome similarity is also calculated to study change in response to therapies directed at altering the microbiome. Established ecological measures of similarity can be constructed from species abundances; however, methods for calculating these commonly used ecological measures of similarity directly from whole genome shotgun (WGS) metagenomic sequence are lacking. We present an alignment-free method for calculating the similarity of WGS metagenomic sequences that is analogous to the Bray-Curtis index for species, implemented by the General Utility for Testing Sequence Similarity (GUTSS) software application. This method was applied to the intestinal microbiomes of healthy young children to measure developmental changes toward an adult microbiome during the first 3 years of life. We also calculate the similarity of donor and recipient microbiomes to measure establishment, or engraftment, of donor microbiota in fecal microbiota transplantation (FMT) studies focused on mild to moderate Crohn's disease. We show how a relative index of similarity to donor can be calculated as a measure of change in a patient's microbiome toward that of the donor in response to FMT. Because the clinical efficacy of the transplant procedure cannot be fully evaluated without analysis methods to quantify actual FMT engraftment, we developed a method for detecting change in the gut microbiome that is independent of species identification and database bias, is sensitive to changes in the relative abundance of the microbial constituents, and can be formulated as an index for correlating engraftment success with clinical measures of disease. More generally, this method may be applied to the clinical evaluation of human microbiomes and may provide potential diagnostic determination of individuals who are candidates for specific therapies directed at alteration of the microbiome.
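GUTSS itself works alignment-free on raw WGS reads, but the species-level index it is analogous to, the Bray-Curtis similarity, is simple to state. A sketch of that reference computation from abundance vectors:

```python
import numpy as np

def bray_curtis_similarity(x, y):
    """Bray-Curtis similarity between two abundance vectors:
    1 - sum|x_i - y_i| / sum(x_i + y_i). 1 = identical, 0 = disjoint."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return 1.0 - np.abs(x - y).sum() / (x + y).sum()

# Example: donor vs. recipient species abundances before engraftment.
donor = [120, 30, 50, 0]
recipient = [80, 40, 0, 60]
print(bray_curtis_similarity(donor, recipient))  # ~0.58
```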
NASA Technical Reports Server (NTRS)
Pickett, G. F.; Wells, R. A.; Love, R. A.
1977-01-01
A computer user's manual describing the operation and the essential features of the Modal Calculation Program is presented. The Modal Calculation Program calculates the amplitude and phase of modal structures from acoustic pressure measurements obtained with microphones placed at selected locations within the fan inlet duct. In addition, the program calculates the first-order errors in the modal coefficients that are due to tolerances in microphone location coordinates and inaccuracies in the acoustic pressure measurements.
The purpose of this SOP is to describe the procedures undertaken for calculating ingestion exposure from Day 4 composite measurements from duplicate diet using the direct method of exposure estimation. This SOP uses data that have been properly coded and certified with appropria...
Method of accurate thickness measurement of boron carbide coating on copper foil
Lacy, Jeffrey L.; Regmi, Murari
2017-11-07
A method is disclosed for measuring the thickness of a thin coating on a substrate, comprising dissolving the coating and substrate in a reagent and using the post-dissolution concentration of the coating in the reagent to calculate an effective thickness of the coating. The preferred method addresses non-conducting films on flexible and rough substrates, but other kinds of thin films can be measured by choosing a reliable film-substrate dissolution technique. One preferred method determines the thickness of boron carbide films deposited on copper foil. It uses a standard technique known as inductively coupled plasma optical emission spectroscopy (ICPOES) to measure the boron concentration in a liquid sample prepared by dissolving the boron carbide film and the copper substrate, preferably using a chemical etch known as ceric ammonium nitrate (CAN). The effective thickness can then be calculated from the measured boron concentration values.
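Once ICP-OES yields a boron concentration in the dissolution liquid, the effective-thickness arithmetic is straightforward. A sketch assuming nominal B4C density and boron mass fraction; the patent's exact constants and units may differ:

```python
def b4c_thickness_um(conc_mg_per_l, volume_l, area_cm2,
                     rho=2.52, boron_fraction=0.783):
    """Effective B4C film thickness (micrometers) from the boron
    concentration measured by ICP-OES in the dissolution liquid.
    rho: nominal B4C density (g/cm^3); boron_fraction: boron mass
    fraction in B4C (~4*10.811/55.255). Both are assumed values."""
    boron_g = conc_mg_per_l * volume_l / 1000.0  # total boron in the sample
    b4c_g = boron_g / boron_fraction             # implied B4C mass
    thickness_cm = b4c_g / (rho * area_cm2)      # mass / (density * area)
    return thickness_cm * 1.0e4

# Example: 12.3 mg/L boron in 0.1 L of etchant from a 25 cm^2 coupon.
print(round(b4c_thickness_um(12.3, 0.1, 25.0), 3))  # ~0.249 um
```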
NASA Technical Reports Server (NTRS)
Quinn, Robert D.; Gong, Leslie
2000-01-01
This report describes a method that can calculate transient aerodynamic heating and transient surface temperatures at supersonic and hypersonic speeds. This method can rapidly calculate temperature and heating rate time-histories for complete flight trajectories. Semi-empirical theories are used to calculate laminar and turbulent heat transfer coefficients and a procedure for estimating boundary-layer transition is included. Results from this method are compared with flight data from the X-15 research vehicle, YF-12 airplane, and the Space Shuttle Orbiter. These comparisons show that the calculated values are in good agreement with the measured flight data.
Fiber-optical method of pyrometric measurement of melts temperature
NASA Astrophysics Data System (ADS)
Zakharenko, V. A.; Veprikova, Ya R.
2018-01-01
Non-contact measurement of the temperature of metal melts remains a scientific problem, related to the need to achieve specified measurement errors under uncertainty in the emissivity (blackness coefficient) of the radiating surfaces. The aim of this work is to substantiate a new measurement method in which the influence of the emissivity is eliminated. The task consisted in calculating the design and material of a special crucible placed in the molten metal, which acts as an emitter in the form of a blackbody (BB). The methods are based on the classical theory of thermal radiation and on calculations using the Planck function. To solve the problem, the geometry of the crucible was calculated on the basis of the Gouffé method, so that the crucible forms a blackbody emitter when immersed in the melt. The paper describes a pyrometric device, based on a fiber-optic pyrometer, for measuring melt temperatures that implements the proposed method using the special crucible: the melt in the crucible forms the emitter, and the temperature within it is measured by the fiber-optic pyrometer. Experimental studies give a radiation coefficient ε′ > 0.999, which confirms the theoretical and computational justification given in the article.
A rough set-based measurement model study on high-speed railway safety operation.
Hu, Qizhou; Tan, Minjia; Lu, Huapu; Zhu, Yun
2018-01-01
Aiming to solve the safety problems of high-speed railway operation and management, a new method is needed, constructed on the basis of rough set theory and uncertainty measurement theory. The method should carefully consider every factor of high-speed railway operation that enters the measurement indexes of safe operation. After analyzing the factors that influence high-speed railway operation safety in detail, a rough measurement model is constructed to describe the operation process. Based on these considerations, this paper organizes the safety influence factors of high-speed railway operation into 16 measurement indexes covering staff, vehicle, equipment, and environment, and thereby provides a reasonable and effective theoretical method for the multiple-attribute measurement problems of high-speed railway operation safety. Analyzing the operation data of 10 pivotal railway lines in China, the paper uses both the rough set-based measurement model and a value function model (a model for calculating the safety value) to compute operation safety values. The results show that the safety-value curve obtained with the proposed method has smaller error and greater stability than that of the value function method, which verifies the feasibility and effectiveness of the approach.
Incidence, prevalence, and hybrid approaches to calculating disability-adjusted life years
2012-01-01
When disability-adjusted life years are used to measure the burden of disease on a population in a time interval, they can be calculated in several different ways: from an incidence, pure prevalence, or hybrid perspective. I show that these calculation methods are not equivalent and discuss some of the formal difficulties each method faces. I show that if we don’t discount the value of future health, there is a sense in which the choice of calculation method is a mere question of accounting. Such questions can be important, but they don’t raise deep theoretical concerns. If we do discount, however, choice of calculation method can change the relative burden attributed to different conditions over time. I conclude by recommending that studies involving disability-adjusted life years be explicit in noting what calculation method is being employed and in explaining why that calculation method has been chosen. PMID:22967055
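For reference, the standard undiscounted building blocks behind the incidence and prevalence perspectives can be written as follows (general GBD-style definitions, not formulas quoted from this paper):

```latex
\mathrm{DALY} = \mathrm{YLL} + \mathrm{YLD}, \qquad
\mathrm{YLL} = N \cdot L, \qquad
\mathrm{YLD}_{\text{incidence}} = I \cdot DW \cdot L_d, \qquad
\mathrm{YLD}_{\text{prevalence}} = P \cdot DW
```

where N is the number of deaths in the interval, L the standard life expectancy at the age of death, I the incident cases, DW the disability weight, L_d the mean duration of disability, and P the prevalent cases; hybrid approaches mix the two YLD terms. Without discounting, the perspectives reallocate the same health loss across time, which is the accounting point made above.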
Modeling of Permeability Structure Using Pore Pressure and Borehole Strain Monitoring
NASA Astrophysics Data System (ADS)
Kano, Y.; Ito, H.
2011-12-01
Hydraulic and transport properties of rock, especially permeability, affect the behavior of a fault during earthquake rupture and also during the interseismic period. Permeability underground is determined by hydraulic tests using a borehole and packer, or by core measurements in the laboratory. Another way to estimate the permeability around a borehole is to examine the response of pore pressure to natural loading, such as barometric pressure changes at the surface or Earth tides; using the response to natural deformation is a conventional method in water-resource research. The scale of measurement differs among in-situ hydraulic tests, response methods, and core measurements, and the relationship between permeability values from each method is not clear for an inhomogeneous medium such as a fault zone. Assuming measurement of the response to natural loading, we made model calculations of the permeability structure around a fault zone. The model is two-dimensional, with a vertical high-permeability layer in a uniform low-permeability zone; the upper and lower boundaries are assigned drained and no-flow conditions, respectively. We calculated the flow and deformation of the model for step and cyclic loading by numerically solving a two-dimensional diffusion equation. The model calculations show that the width of the high-permeability zone and the permeability contrast between the high- and low-permeability zones control the contribution of the low-permeability zone. We ran calculations with combinations of permeability and fault width to evaluate the sensitivity of these parameters in in-situ permeability measurements. We applied the model calculations to field results from an in-situ packer test and from natural-response monitoring of water level and strain carried out in the Kamioka mine. The calculations show that knowledge of the host-rock permeability is also important for obtaining the permeability of the fault zone itself, and they help in designing long-term pore pressure monitoring, in-situ hydraulic tests, and core measurements using drill holes to better understand fault zone hydraulic properties.
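The kind of forward model described, pore-pressure diffusion with a vertical high-permeability layer, a drained top, and a no-flow bottom, can be sketched with an explicit finite-difference scheme. All values below are illustrative, not calibrated to the Kamioka case:

```python
import numpy as np

def pore_pressure_response(nx=60, nz=40, steps=4000, dt=0.01):
    """Explicit finite-difference solution of a 2-D pressure diffusion
    equation with a vertical high-diffusivity layer (the 'fault') in a
    low-diffusivity host; a step pressure load is applied at the drained
    top, no-flow conditions elsewhere. Grid spacing is 1 (dimensionless);
    dt*D_max = 0.05 keeps the explicit scheme stable."""
    D = np.full((nz, nx), 0.05)        # host hydraulic diffusivity
    D[:, 28:32] = 5.0                  # vertical high-permeability zone
    p = np.zeros((nz, nx))
    for _ in range(steps):
        p[0, :] = 1.0                  # drained top: step pressure load
        lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
               np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4 * p)
        p += dt * D * lap
        p[-1, :] = p[-2, :]            # no-flow bottom boundary
        p[:, 0], p[:, -1] = p[:, 1], p[:, -2]  # no-flow sides
    return p

p = pore_pressure_response()
# Pressure penetrates much deeper inside the fault than in the host rock.
print(p[20, 30], p[20, 5])
```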
The lifetime risk of maternal mortality: concept and measurement
2009-01-01
Abstract Objective The lifetime risk of maternal mortality, which describes the cumulative loss of life due to maternal deaths over the female life course, is an important summary measure of population health. However, despite its interpretive appeal, the lifetime risk of dying from maternal causes can be defined and calculated in various ways. A clear and concise discussion of both its underlying concept and methods of measurement is badly needed. Methods I define and compare a variety of procedures for calculating the lifetime risk of maternal mortality. I use detailed survey data from Bangladesh in 2001 to illustrate these calculations and compare the properties of the various risk measures. Using official UN estimates of maternal mortality for 2005, I document the differences in lifetime risk derived with the various measures. Findings Taking sub-Saharan Africa as an example, the range of estimates for the 2005 lifetime risk extends from 3.41% to 5.76%, or from 1 in 29 to 1 in 17. The highest value resulted from the method used for producing official UN estimates for the year 2000. The measure recommended here has an intermediate value of 4.47%, or 1 in 22. Conclusion There are strong reasons to consider the calculation method proposed here more accurate and appropriate than earlier procedures. Accordingly, it was adopted for use in producing the 2005 UN estimates of the lifetime risk of maternal mortality. By comparison, the method used for the 2000 UN estimates appears to overestimate this important measure of population health by around 20%. PMID:19551233
Evaluation of lung and chest wall mechanics during anaesthesia using the PEEP-step method.
Persson, P; Stenqvist, O; Lundin, S
2018-04-01
Postoperative pulmonary complications are common, and lung and chest wall mechanics differ between patients. Individualised mechanical ventilation based on measurement of transpulmonary pressures would be a step forward. A previously described method evaluates lung and chest wall mechanics from a change in PEEP (ΔPEEP) and the calculated change in end-expiratory lung volume (ΔEELV). The aim of the present study was to validate this PEEP-step method (PSM) during general anaesthesia by comparing it with the conventional method using oesophageal pressure (PES) measurements. In 24 lung-healthy subjects (BMI 18.5-32), three different sizes of PEEP steps were performed during general anaesthesia and the ΔEELVs were calculated. The transpulmonary driving pressure (ΔPL) for a tidal volume equal to each ΔEELV was measured using PES measurements and compared to ΔPEEP with limits of agreement and intraclass correlation coefficients (ICC). ΔPL calculated with both methods was compared with a Bland-Altman plot. Mean differences between ΔPEEP and ΔPL were <0.15 cmH2O, with 95% limits of agreement of -2.1 to 2.0 cmH2O and ICC 0.6-0.83. Mean differences between ΔPL calculated by the two methods were <0.2 cmH2O. The ratio of lung elastance to respiratory system elastance was 0.5-0.95. The large variation in mechanical properties among lung-healthy patients stresses the need for individualised ventilator settings based on measurements of lung and chest wall mechanics. The agreement between ΔPLs measured by the two methods during general anaesthesia supports the use of the non-invasive PSM in this patient population. NCT 02830516. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Belov, A. V.; Kurkov, Andrei S.; Chikolini, A. V.
1989-02-01
A method was developed for calculating the effective cutoff wavelength, the mode spot size, and the chromatic dispersion from the refractive-index profile (measured at the preform stage) of single-mode fiber waveguides with a depressed cladding. The results of such calculations are shown to agree with measurements of these quantities.
Evaluation of a multi-point method for determining acoustic impedance
NASA Technical Reports Server (NTRS)
Jones, Michael G.; Parrott, Tony L.
1988-01-01
An investigation was conducted to explore potential improvements provided by a Multi-Point Method (MPM) over the Standing Wave Method (SWM) and Two-Microphone Method (TMM) for determining acoustic impedance. A wave propagation model was developed to model the standing wave pattern in an impedance tube. The acoustic impedance of a test specimen was calculated from a best fit of this standing wave pattern to pressure measurements obtained along the impedance tube centerline. Three measurement spacing distributions were examined: uniform, random, and selective. Calculated standing wave patterns match the point pressure measurement distributions with good agreement for a reflection factor magnitude range of 0.004 to 0.999. Comparisons of results using 2, 3, 6, and 18 measurement points showed that the most consistent results are obtained when using at least 6 evenly spaced pressure measurements per half-wavelength. Also, data were acquired with broadband noise added to the discrete frequency noise and impedances were calculated using the MPM and TMM algorithms. The results indicate that the MPM will be superior to the TMM in the presence of significant broadband noise levels associated with mean flow.
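The core of the MPM, a least-squares fit of a two-wave standing-wave model to pressures at several axial stations, is linear in the incident and reflected wave amplitudes, so it reduces to a single linear solve. A sketch under assumed sign conventions (not the paper's code):

```python
import numpy as np

def impedance_from_pressures(x, p, k):
    """Multi-point estimate of the reflection factor and normalized
    surface impedance from complex pressures p measured at axial
    positions x (m) in a duct, with wavenumber k (rad/m). The model
    p(x) = B1*exp(-jkx) + B2*exp(+jkx) is fit by least squares;
    x is measured from the specimen face toward the source."""
    E = np.column_stack((np.exp(-1j * k * x), np.exp(1j * k * x)))
    (b1, b2), *_ = np.linalg.lstsq(E, p, rcond=None)
    R = b2 / b1                        # reflection factor at x = 0
    return R, (1 + R) / (1 - R)        # normalized specific impedance

# Synthetic check: 6 evenly spaced mics over one half-wavelength,
# true reflection factor R = 0.5 + 0.2j, 1 kHz tone in air.
k = 2 * np.pi * 1000 / 343.0
x = np.linspace(0.02, 0.02 + np.pi / k, 6)
p = np.exp(-1j * k * x) + (0.5 + 0.2j) * np.exp(1j * k * x)
print(impedance_from_pressures(x, p, k))  # recovers R exactly
```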
Method for controlling gas metal arc welding
Smartt, H.B.; Einerson, C.J.; Watkins, A.D.
1987-08-10
The heat input and mass input in a Gas Metal Arc welding process are controlled by a method that comprises calculating appropriate values for weld speed, filler wire feed rate and an expected value for the welding current by algorithmic function means, applying such values for weld speed and filler wire feed rate to the welding process, measuring the welding current, comparing the measured current to the calculated current, using said comparison to calculate corrections for the weld speed and filler wire feed rate, and applying corrections. 3 figs., 1 tab.
Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen
2012-01-01
In this study the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed, and the calculation accuracy of two methods available in the treatment planning system, i.e., collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR), was verified in tissue heterogeneities. For this purpose an inhomogeneous phantom (IMRT thorax phantom) was used, and dose curves obtained by the TPS (treatment planning system) were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed using EDR2 radiographic films within the phantom. The dose difference (DD) between the experimental results and the two calculation methods was obtained. The results indicate a maximum difference of 12% in the lung and 3% in the bone tissue of the phantom between the two methods, with the CCC algorithm showing more accurate depth dose curves in tissue heterogeneities. The simulation results show accurate dose estimation by MCNP4C in the soft-tissue region of the phantom, and better results than the ETAR method in bone and lung tissues. PMID:22973081
Method and system for measuring multiphase flow using multiple pressure differentials
Fincke, James R.
2001-01-01
An improved method and system for measuring a multiphase flow in a pressure flow meter. An extended throat venturi is used and pressure of the multiphase flow is measured at three or more positions in the venturi, which define two or more pressure differentials in the flow conduit. The differential pressures are then used to calculate the mass flow of the gas phase, the total mass flow, and the liquid phase. The method for determining the mass flow of the high void fraction fluid flow and the gas flow includes certain steps. The first step is calculating a gas density for the gas flow. The next two steps are finding a normalized gas mass flow rate through the venturi and computing a gas mass flow rate. The following step is estimating the gas velocity in the venturi tube throat. The next step is calculating the pressure drop experienced by the gas-phase due to work performed by the gas phase in accelerating the liquid phase between the upstream pressure measuring point and the pressure measuring point in the venturi throat. Another step is estimating the liquid velocity in the venturi throat using the calculated pressure drop experienced by the gas-phase due to work performed by the gas phase. Then the friction is computed between the liquid phase and a wall in the venturi tube. Finally, the total mass flow rate based on measured pressure in the venturi throat is calculated, and the mass flow rate of the liquid phase is calculated from the difference of the total mass flow rate and the gas mass flow rate.
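A heavily hedged skeleton of the claimed sequence. The ideal-gas density and the classical venturi relation stand in for the patent's correlations, and the slip, acceleration, and friction corrections between the two differential pressures are omitted:

```python
import math

R_GAS = 8.314  # J/(mol*K)

def gas_mass_flow(dp_contraction, p_abs, t_kelvin, m_molar=0.0179,
                  area_throat=2.0e-3, beta=0.5, c_d=0.98):
    """Gas-phase mass flow (kg/s) through a venturi from the upstream-to-
    throat differential pressure (Pa), using an ideal-gas density and the
    classical venturi equation. All constants are illustrative, not the
    patent's correlations."""
    rho_g = p_abs * m_molar / (R_GAS * t_kelvin)        # step 1: gas density
    return c_d * area_throat * math.sqrt(
        2.0 * rho_g * dp_contraction / (1.0 - beta**4)) # steps 2-3: gas flow

def total_and_liquid_mass_flow(m_gas, dp_throat, rho_liq, area_throat=2.0e-3):
    """Placeholder for the remaining steps: estimate the total mass flow
    from the extended-throat differential pressure, then obtain the
    liquid flow by difference (the patent's gas-velocity, acceleration,
    and wall-friction corrections are omitted here)."""
    m_total = area_throat * math.sqrt(2.0 * rho_liq * dp_throat)
    return m_total, m_total - m_gas

m_g = gas_mass_flow(dp_contraction=40e3, p_abs=5e6, t_kelvin=320.0)
print(m_g, total_and_liquid_mass_flow(m_g, dp_throat=15e3, rho_liq=850.0))
```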
Benchmark measurements and calculations of a 3-dimensional neutron streaming experiment
NASA Astrophysics Data System (ADS)
Barnett, D. A., Jr.
1991-02-01
An experimental assembly known as the Dog-Legged Void assembly was constructed to measure the effect of neutron streaming in iron and void regions. The primary purpose of the measurements was to provide benchmark data against which various neutron transport calculation tools could be compared. The measurements included neutron flux spectra at four places and integral measurements at two places in the iron streaming path, as well as integral measurements along several axial traverses. These data have been used in the verification of Oak Ridge National Laboratory's three-dimensional discrete ordinates code, TORT. For a base case calculation using one-half inch mesh spacing, finite difference spatial differencing, an S(sub 16) quadrature, and P(sub 1) cross sections in the MUFT multigroup structure, the calculated solution agreed to within 18 percent with the spectral measurements and to within 24 percent with the integral measurements. Variations on the base case using a fewgroup energy structure and P(sub 1) and P(sub 3) cross sections showed similar agreement. Calculations using a linear nodal spatial differencing scheme and fewgroup cross sections also showed similar agreement. For the same mesh size, the nodal method was seen to require 2.2 times as much CPU time as the finite difference method. A nodal calculation using a typical mesh spacing of 2 inches, which had approximately 32 times fewer mesh cells than the base case, agreed with the measurements to within 34 percent and yet required only 8 percent of the CPU time.
Using Clinical Data Standards to Measure Quality: A New Approach.
D'Amore, John D; Li, Chun; McCrary, Laura; Niloff, Jonathan M; Sittig, Dean F; McCoy, Allison B; Wright, Adam
2018-04-01
Value-based payment for care requires the consistent, objective calculation of care quality. Previous initiatives to calculate ambulatory quality measures have relied on billing data or individual electronic health records (EHRs) to calculate and report performance. New methods for quality measure calculation promoted by federal regulations allow qualified clinical data registries to report quality outcomes based on data aggregated across facilities and EHRs using interoperability standards. This research evaluates the use of clinical document interchange standards as the basis for quality measurement. Using data on 1,100 patients from 11 ambulatory care facilities and 5 different EHRs, challenges to quality measurement are identified and addressed for 17 certified quality measures. Iterative solutions were identified for 14 measures that improved patient inclusion and measure calculation accuracy. Findings validate this approach to improving measure accuracy while maintaining measure certification. Organizations that report care quality should be aware of how identified issues affect quality measure selection and calculation. Quality measure authors should consider increasing real-world validation and the consistency of measure logic in respect to issues identified in this research. Schattauer GmbH Stuttgart.
Calibration method and apparatus for measuring the concentration of components in a fluid
Durham, M.D.; Sagan, F.J.; Burkhardt, M.R.
1993-12-21
A calibration method and apparatus for use in measuring the concentrations of components of a fluid is provided. The measurements are determined from the intensity of radiation over a selected range of radiation wavelengths using peak-to-trough calculations. The peak-to-trough calculations are simplified by compensating for radiation absorption by the apparatus. The invention also allows absorption characteristics of an interfering fluid component to be accurately determined and negated thereby facilitating analysis of the fluid. 7 figures.
New method for estimation of fluence complexity in IMRT fields and correlation with gamma analysis
NASA Astrophysics Data System (ADS)
Hanušová, T.; Vondráček, V.; Badraoui-Čuprová, K.; Horáková, I.; Koniarová, I.
2015-01-01
A new method for estimation of fluence complexity in Intensity Modulated Radiation Therapy (IMRT) fields is proposed. Unlike other previously published works, it is based on portal images calculated by the Portal Dose Calculation algorithm in Eclipse (version 8.6, Varian Medical Systems) in the plane of the EPID aS500 detector (Varian Medical Systems). Fluence complexity is given by the number and the amplitudes of dose gradients in these matrices. Our method is validated using a set of clinical plans where fluence has been smoothed manually so that each plan has a different level of complexity. Fluence complexity calculated with our tool is in accordance with the different levels of smoothing as well as results of gamma analysis, when calculated and measured dose matrices are compared. Thus, it is possible to estimate plan complexity before carrying out the measurement. If appropriate thresholds are determined which would distinguish between acceptably and overly modulated plans, this might save time in the re-planning and re-measuring process.
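The abstract defines complexity through the number and amplitudes of dose gradients in the calculated portal-dose matrix. A toy scoring function under that definition might look like the following; the relative threshold and the exact aggregation are assumptions, not the authors' published metric.

```python
import numpy as np

def fluence_complexity(portal_dose, rel_threshold=0.05):
    """Score a calculated portal-dose matrix by the number and summed
    amplitude of its dose gradients; more and steeper gradients indicate a
    more heavily modulated (more complex) fluence."""
    gy, gx = np.gradient(portal_dose.astype(float))
    gmag = np.hypot(gx, gy)                         # gradient amplitude per pixel
    steep = gmag > rel_threshold * portal_dose.max()
    return int(steep.sum()), float(gmag[steep].sum())

# toy 100x100 "portal images": a flat field scores far lower than a noisy one
flat = np.ones((100, 100))
modulated = flat + 0.3 * np.random.default_rng(1).random((100, 100))
print(fluence_complexity(flat), fluence_complexity(modulated))
```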
Wang, L; Lovelock, M; Chui, C S
1999-12-01
To further validate the Monte Carlo dose-calculation method [Med. Phys. 25, 867-878 (1998)] developed at the Memorial Sloan-Kettering Cancer Center, we have performed experimental verification in various inhomogeneous phantoms. The phantom geometries included simple layered slabs, a simulated bone column, a simulated missing-tissue hemisphere, and an anthropomorphic head geometry (Alderson Rando Phantom). The densities of the inhomogeneities range from 0.14 to 1.86 g/cm3, simulating both clinically relevant lunglike and bonelike materials. The data are reported as central axis depth doses, dose profiles, and dose values at points of interest, such as points at the interface of two different media and in the "nasopharynx" region of the Rando head. The dosimeters used in the measurements included dosimetry film, TLD chips, and rods. The measured data were compared to Monte Carlo calculations for the same geometrical configurations. In the case of the Rando head phantom, a CT scan of the phantom was used to define the calculation geometry and to locate the points of interest. The agreement between calculation and measurement is generally within 2.5%. This work validates the accuracy of the Monte Carlo method. While Monte Carlo is, at present, still too slow for routine treatment planning, it can be used as a benchmark against which other dose calculation methods can be compared.
Assessment of radiant temperature in a closed incubator.
Décima, Pauline; Stéphan-Blanchard, Erwan; Pelletier, Amandine; Ghyselen, Laurent; Delanaud, Stéphane; Dégrugilliers, Loïc; Telliez, Frédéric; Bach, Véronique; Libert, Jean-Pierre
2012-08-01
In closed incubators, radiative heat loss (R), which is assessed from the mean radiant temperature (Tr), accounts for 40-60% of the neonate's total heat loss. In the absence of a benchmark method for calculating Tr (often assumed to equal the incubator air temperature), errors could have a considerable impact on the thermal management of neonates. We compared Tr using two conventional methods (measurement with a black-globe thermometer and a radiative "view factor" approach) and two methods based on nude thermal manikins (a simple, schematic design from Wheldon and a multisegment, anthropometric device developed in our laboratory). By taking the Tr estimations for each method, we calculated metabolic heat production values by partitional calorimetry and then compared them with the values calculated from V(O2) and V(CO2) measured in 13 preterm neonates. Comparisons between the calculated and measured metabolic heat production values showed that the two conventional methods and Wheldon's manikin underestimated R, whereas with the anthropomorphic thermal manikin the simulated versus clinical difference was not statistically significant. In conclusion, there is a need for a safety standard for measuring Tr in a closed incubator. This standard should also provide estimating equations for all avenues of the neonate's heat exchange, considering metabolic heat production and the modifying influence of the thermal insulation provided by the diaper and the mattress. Although thermal manikins appear to be particularly appropriate for measuring Tr, the current lack of standardized procedures limits their widespread use.
Measurements of UGR of LED light by a DSLR colorimeter
NASA Astrophysics Data System (ADS)
Hsu, Shau-Wei; Chen, Cheng-Hsien; Jiaan, Yuh-Der
2012-10-01
We have developed an image-based measurement method for the UGR (unified glare rating) of interior lighting environments. A calibrated DSLR (digital single-lens reflex camera) with an ultra-wide-angle lens was used to measure the luminance distribution, from which the corresponding parameters can be calculated automatically. An LED luminaire was placed in a room and measured at various positions and directions to study the properties of UGR. The test results are consistent with visual experience and UGR principles. To further examine the results, a spectroradiometer and an illuminance meter were used to measure the luminance and illuminance, respectively, at the same position and orientation as the DSLR. The UGR calculation by this image-based method may solve the problem of the non-uniform luminance distribution of LED lighting; segmentation of the luminance map for the calculations was also studied.
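The UGR itself is given by the CIE formula UGR = 8 log10[(0.25/Lb) Σ L²ω/p²], which an image-based system can evaluate once the luminance map has been segmented into glare sources and background. A minimal sketch follows; the numerical inputs are invented, not the paper's measurements.

```python
import math

def ugr(L_bg, sources):
    """CIE Unified Glare Rating.
    L_bg    : background luminance (cd/m^2), glare sources excluded
    sources : iterable of (L, omega, p) per glare source, where L is source
              luminance (cd/m^2), omega its solid angle (sr), and p the Guth
              position index for its position in the field of view."""
    s = sum(L ** 2 * omega / p ** 2 for L, omega, p in sources)
    return 8.0 * math.log10(0.25 / L_bg * s)

# two LED luminaires seen from one camera position (illustrative values)
print(ugr(50.0, [(20000.0, 1.0e-3, 1.2), (15000.0, 8.0e-4, 2.0)]))  # ~26
```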
NASA Astrophysics Data System (ADS)
Shevenell, Lisa
1999-03-01
Values of evapotranspiration are required for a variety of water planning activities in arid and semi-arid climates, yet data requirements are often large, and it is costly to obtain this information. This work presents a method in which only a few readily available data (temperature, elevation) are required to estimate potential evapotranspiration (PET). A method using measured temperature and the calculated ratio of total to vertical radiation (after the work of Behnke and Maxey, 1969) to estimate monthly PET was applied for the months of April-October and compared with pan evaporation measurements. The test area used in this work was Nevada, which has 124 weather stations that record sufficient amounts of temperature data. The calculated PET values were found to be well correlated (R2 = 0.940-0.983, slopes near 1.0) with mean monthly pan evaporation measurements at eight weather stations. In order to extrapolate these calculated PET values to areas without temperature measurements and to sites at differing elevations, the state was divided into five regions based on latitude, and linear regressions of PET versus elevation were calculated for each of these regions. These extrapolated PET values generally compare well with the pan evaporation measurements (R2 = 0.926-0.988, slopes near 1.0). The estimated values are generally somewhat lower than the pan measurements, in part because the effects of wind are not explicitly considered in the calculations, and near-freezing temperatures result in a calculated PET of zero at higher elevations in the spring months. The calculated PET values for April-October are 84-100% of the measured pan evaporation values. Using digital elevation models in a geographical information system, calculated values were adjusted for slope and aspect, and the data were used to construct a series of maps of monthly PET. The resultant maps show a realistic distribution of regional variations in PET throughout Nevada that inversely mimics topography. The general methods described here could be used to estimate regional PET in other arid western states (e.g. New Mexico, Arizona, Utah) and arid regions worldwide (e.g. parts of Africa).
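The extrapolation step is an ordinary least-squares line of PET against elevation within each latitude region. A minimal sketch, with invented station data rather than Shevenell's fitted regressions:

```python
import numpy as np

def fit_pet_vs_elevation(elev_m, pet_mm):
    """Least-squares line PET = a + b*elevation for one latitude region."""
    b, a = np.polyfit(elev_m, pet_mm, 1)     # slope first, then intercept
    return a, b

# invented July data for one region: station elevations (m) and monthly PET (mm)
elev = np.array([1200.0, 1500.0, 1800.0, 2100.0, 2400.0])
pet = np.array([210.0, 195.0, 178.0, 160.0, 147.0])
a, b = fit_pet_vs_elevation(elev, pet)
print(f"PET(mm) ~ {a:.1f} + {b:.4f}*elev; estimate at 2000 m: {a + b*2000:.0f} mm")
```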
Monte Carlo method for calculating the radiation skyshine produced by electron accelerators
NASA Astrophysics Data System (ADS)
Kong, Chaocheng; Li, Quanfeng; Chen, Huaibi; Du, Taibin; Cheng, Cheng; Tang, Chuanxiang; Zhu, Li; Zhang, Hui; Pei, Zhigang; Ming, Shenjin
2005-06-01
Using the MCNP4C Monte Carlo code, the X-ray skyshine produced by 9 MeV, 15 MeV, and 21 MeV electron linear accelerators was calculated with a new two-step method combined with the split-and-roulette variance reduction technique. Results of the Monte Carlo simulation, the empirical formulas used for skyshine calculation, and the dose measurements were analyzed and compared. In conclusion, the skyshine dose measurements agreed reasonably with the results computed by the Monte Carlo method, but deviated from the computational results given by the empirical formulas. The effect on the skyshine dose of different accelerator head structures is also discussed in this paper.
Method for determining waveguide temperature for acoustic transceiver used in a gas turbine engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeSilva, Upul P.; Claussen, Heiko; Ragunathan, Karthik
A method for determining waveguide temperature for at least one waveguide of a transceiver utilized for generating a temperature map. The transceiver generates an acoustic signal that travels through a measurement space in a hot gas flow path defined by a wall, such as in a combustor. The method includes calculating a total time of flight for the acoustic signal and subtracting a waveguide travel time from the total time of flight to obtain a measurement space travel time. A temperature map is calculated based on the measurement space travel time. An estimated wall temperature is obtained from the temperature map. An estimated waveguide temperature is then calculated based on the estimated wall temperature, wherein the estimated waveguide temperature is determined without the use of a temperature sensing device.
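The time-of-flight subtraction maps directly to a path-averaged temperature if an ideal-gas sound speed c = sqrt(γRT) is assumed. The sketch below shows that inversion only; the gas properties and timings are illustrative, and the patented method builds a full 2-D temperature map from many such paths.

```python
GAMMA, R_AIR = 1.4, 287.0    # ideal-gas values for air; combustion gas differs

def path_avg_temperature_K(total_tof_s, waveguide_time_s, path_len_m):
    """Path-averaged gas temperature from an acoustic time of flight, after
    subtracting the portion of the travel time spent inside the waveguides.
    Inverts c = sqrt(gamma * R * T) for the mean sound speed over the path."""
    t_space = total_tof_s - waveguide_time_s    # time in the measurement space
    c = path_len_m / t_space                    # mean sound speed, m/s
    return c ** 2 / (GAMMA * R_AIR)

# 1.5 m path, 3.2 ms total flight time of which 0.8 ms is inside the waveguides
print(path_avg_temperature_K(3.2e-3, 0.8e-3, 1.5))   # ~970 K
```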
Compensation of the sheath effects in cylindrical floating probes
NASA Astrophysics Data System (ADS)
Park, Ji-Hwan; Chung, Chin-Wook
2018-05-01
In cylindrical floating probe measurements, the plasma density and electron temperature are overestimated due to sheath expansion and oscillation. To reduce these sheath effects, a compensation method based on well-developed floating sheath theories is proposed and applied to the floating harmonic method. The iterative calculation of the Allen-Boyd-Reynolds equation can derive the floating sheath thickness, which can be used to calculate the effective ion collection area; in this way, an accurate ion density is obtained. The Child-Langmuir law is used to calculate the ion harmonic currents caused by sheath oscillation of the alternating-voltage-biased probe tip. Accurate plasma parameters can be obtained by subtracting these ion harmonic currents from the total measured harmonic currents. Herein, the measurement principles and compensation method are discussed in detail and an experimental demonstration is presented.
Estimation of blade airloads from rotor blade bending moments
NASA Technical Reports Server (NTRS)
Bousman, William G.
1987-01-01
A method is developed to estimate the blade normal airloads by using measured flap bending moments; that is, the rotor blade is used as a force balance. The blade's rotating mode shapes are calculated in vacuum, and the airloads are then expressed as an algebraic sum involving the mode shapes, modal amplitudes, mass distribution, and frequency properties. The modal amplitudes are identified from the blade bending moments using the Strain Pattern Analysis method. The application of the method is examined using simulated flap bending moment data calculated from measured airloads for a full-scale rotor in a wind tunnel. The estimated airloads are compared with the wind tunnel measurements. The effects of the number of measurements, the number of modes, and errors in the measurements and the blade properties are examined, and the method is shown to be robust.
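The identification step is a linear least-squares fit of the measured moments onto the modal bending-moment shapes. A minimal sketch of that step, with invented gauge and mode data (the airload reconstruction from the amplitudes, mass distribution, and frequencies is omitted):

```python
import numpy as np

def modal_amplitudes(M_meas, M_modes):
    """Strain Pattern Analysis step: fit measured flap bending moments at the
    strain-gauge stations as a linear combination of the modal bending-moment
    shapes, returning the modal amplitudes (least-squares solution).
    M_meas  : (n_gauges,) moments measured at one rotor azimuth
    M_modes : (n_gauges, n_modes) bending moment of each vacuum mode per gauge
    """
    q, residuals, rank, sv = np.linalg.lstsq(M_modes, M_meas, rcond=None)
    return q

# toy example: 8 gauges, 3 modes; recover known amplitudes from noisy moments
rng = np.random.default_rng(0)
shapes = rng.normal(size=(8, 3))
q_true = np.array([1.0, -0.4, 0.1])
m = shapes @ q_true + 0.01 * rng.normal(size=8)
print(modal_amplitudes(m, shapes))   # ~ [1.0, -0.4, 0.1]
```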
SU-F-T-142: An Analytical Model to Correct the Aperture Scattered Dose in Clinical Proton Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, B; Liu, S; Zhang, T
2016-06-15
Purpose: Apertures or collimators are used to laterally shape proton beams in double-scattering (DS) delivery and to sharpen the penumbra in pencil-beam (PB) delivery. However, aperture-scattered dose is not included in the current dose calculations of the treatment planning system (TPS). The purpose of this study is to provide a method to correct the aperture-scattered dose based on an analytical model. Methods: A DS beam with a non-divergent aperture was delivered using a single-room proton machine. Dose profiles were measured with a scanning ion chamber in water and a 2-D ion chamber matrix with solid-water buildup at various depths. The measured doses were considered as the sum of the non-contaminated dose and the aperture-scattered dose. The non-contaminated dose was calculated by the TPS and subtracted from the measured dose. The aperture-scattered dose was modeled as a 1D Gaussian distribution. For 2-D fields, to calculate the scattered dose from all the edges of the aperture, a sum weighted by the distance from the calculation point to the aperture edge was used in the model. The gamma index was calculated between the measured and calculated dose with and without scatter correction. Results: For a beam with a range of 23 cm and an aperture size of 20 cm, the contribution of the scatter horn was ∼8% of the total dose at 4 cm depth and diminished to 0 at 15 cm depth. The amplitude of the scattered dose decreased linearly with increasing depth. The 1D gamma index (2%/2 mm) between the calculated and measured profiles increased from 63% to 98% at 4 cm depth and from 83% to 98% at 13 cm depth. The 2D gamma index (2%/2 mm) at 4 cm depth improved from 78% to 94%. Conclusion: Using this simple analytical method, the discrepancy between the measured and calculated dose was significantly reduced.
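A sketch of the 1D Gaussian scatter-horn term is below. The amplitude is made to fall linearly with depth and vanish at 15 cm, echoing the ~8% at 4 cm behaviour reported above, but the amplitude, width, and linear law are illustrative fitting parameters rather than the authors' values.

```python
import numpy as np

def aperture_scatter_1d(x_mm, edge_mm, depth_cm,
                        amp0=0.11, d_zero_cm=15.0, sigma_mm=6.0):
    """Relative aperture-scatter dose modeled as a 1D Gaussian centred on the
    aperture edge, with amplitude decreasing linearly with depth."""
    amp = max(amp0 * (1.0 - depth_cm / d_zero_cm), 0.0)   # ~8% of dose at 4 cm
    return amp * np.exp(-0.5 * ((x_mm - edge_mm) / sigma_mm) ** 2)

# corrected profile = TPS (non-contaminated) profile + scatter horns at edges
x = np.linspace(-120.0, 120.0, 241)
tps_profile = np.where(np.abs(x) <= 100.0, 1.0, 0.0)     # idealised 20 cm field
corrected = (tps_profile + aperture_scatter_1d(x, 100.0, depth_cm=4.0)
                         + aperture_scatter_1d(x, -100.0, depth_cm=4.0))
```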
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H; Guerrero, M; Chen, S
Purpose: The TG-71 report was published in 2014 to present standardized methodologies for MU calculations and the determination of dosimetric quantities. This work explores the clinical implementation of a TG-71-based electron MU calculation algorithm and compares it with a recently released commercial secondary calculation program, Mobius3D (Mobius Medical System, LP). Methods: TG-71 electron dosimetry data were acquired, and MU calculations were performed based on the recently published TG-71 report. The formalism in the report for extended SSD using air-gap corrections was used. The dosimetric quantities, such as PDD, output factors, and f-air factors, were incorporated into an organized databook that facilitates data access and subsequent computation. The Mobius3D program utilizes a pencil beam redefinition algorithm. To verify the accuracy of the calculations, five customized rectangular cutouts of different sizes (6×12, 4×12, 6×8, 4×8, and 3×6 cm²) were made. Calculations were compared to each other and to point dose measurements for electron beams of energy 6, 9, 12, 16, and 20 MeV. Each calculation/measurement point was at the depth of maximum dose for each cutout in a 10×10 cm² or 15×15 cm² applicator with SSDs of 100 cm and 110 cm. Validation measurements were made with a CC04 ion chamber in a solid water phantom for electron beams of energy 9 and 16 MeV. Results: Differences between the TG-71 and the commercial system relative to measurements were within 3% for most combinations of electron energy, cutout size, and SSD. A 5.6% difference between the two calculation methods was found only for the 6 MeV electron beam with the 3×6 cm² cutout in the 10×10 cm² applicator at 110 cm SSD. Both the TG-71 and the commercial calculations show good consistency with chamber measurements: for the 5 cutouts, <1% difference at 100 cm SSD and 0.5-2.7% at 110 cm SSD. Conclusions: Based on comparisons with measurements, the TG-71-based computation method and the Mobius3D program produce reasonably accurate MU calculations for electron-beam therapy.
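In simplified form, the extended-SSD air-gap formalism divides the prescribed dose by the product of the reference output, the output factor, the PDD, and an effective-SSD inverse-square term. The sketch below follows that structure; the effective SSD, output factor, and residual air-gap factor f_air are placeholder commissioning values, not the beam data of this work.

```python
def electron_mu(dose_cGy, D0_cGy_per_MU, S_e, pdd_pct,
                ssd_eff_cm, d_m_cm, gap_cm, f_air=1.0):
    """Simplified TG-71-style electron MU calculation at extended SSD.
    S_e    : applicator/cutout output factor at the reference SSD
    inv_sq : inverse-square factor built on the effective SSD
    f_air  : residual air-gap correction (unity if inverse square suffices)."""
    inv_sq = ((ssd_eff_cm + d_m_cm) / (ssd_eff_cm + d_m_cm + gap_cm)) ** 2
    return dose_cGy / (D0_cGy_per_MU * S_e * (pdd_pct / 100.0) * inv_sq * f_air)

# 200 cGy at d_max with a 10 cm air gap (110 cm SSD), output factor 0.97
print(electron_mu(200.0, 1.0, 0.97, 100.0,
                  ssd_eff_cm=85.0, d_m_cm=1.4, gap_cm=10.0))   # ~257 MU
```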
Dynamic measurements of CO diffusing capacity using discrete samples of alveolar gas.
Graham, B L; Mink, J T; Cotton, D J
1983-01-01
It has been shown that measurements of the diffusing capacity of the lung for CO made during a slow exhalation [DLCO(exhaled)] yield information about the distribution of the diffusing capacity in the lung that is not available from the commonly measured single-breath diffusing capacity [DLCO(SB)]. Current techniques of measuring DLCO(exhaled) require the use of a rapid-responding (less than 240 ms, 10-90%) CO meter to measure the CO concentration in the exhaled gas continuously during exhalation. DLCO(exhaled) is then calculated using two sample points in the CO signal. Because DLCO(exhaled) calculations are highly affected by small amounts of noise in the CO signal, filtering techniques have been used to reduce noise. However, these techniques reduce the response time of the system and may introduce other errors into the signal. We have developed an alternate technique in which DLCO(exhaled) can be calculated using the concentration of CO in large discrete samples of the exhaled gas, thus eliminating the requirement of a rapid response time in the CO analyzer. We show theoretically that this method is as accurate as other DLCO(exhaled) methods but is less affected by noise. These findings are verified in comparisons of the discrete-sample method of calculating DLCO(exhaled) to point-sample methods in normal subjects, patients with emphysema, and patients with asthma.
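Between any two discrete alveolar samples, CO disappearance follows the usual Krogh single-exponential model, so a diffusing capacity can be computed from the two helium-normalised CO fractions and their sampling times. The sketch below shows only that two-sample idea with invented numbers; the authors' full method covers multiple samples and the associated error analysis.

```python
import math

def dlco_two_samples(VA_ml_stpd, t1_s, FA1, t2_s, FA2, PB_mmHg=760.0):
    """CO diffusing capacity (mL/min/mmHg) between two discrete alveolar gas
    samples taken at times t1 and t2 of exhalation. FA1, FA2 are the
    helium-normalised alveolar CO fractions; VA is the alveolar volume (mL,
    STPD) of the lung region the samples represent."""
    decay = math.log(FA1 / FA2)                     # Krogh exponential model
    return VA_ml_stpd * 60.0 / ((PB_mmHg - 47.0) * (t2_s - t1_s)) * decay

# invented samples taken 6 s apart during a slow exhalation
print(dlco_two_samples(5000.0, 2.0, 0.0028, 8.0, 0.0022))   # ~17 mL/min/mmHg
```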
Measured values of coal mine stopping resistance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oswald, N.; Prosser, B.; Ruckman, R.
2008-12-15
As coal mines become larger, the number of stoppings in the ventilation system increases. Each stopping represents a potential leakage path that must be adequately represented in the ventilation model. Stopping resistance can be calculated using two methods: the USBM method, used to determine the resistance of a single stopping, and the MVS technique, in which an average resistance is calculated for multiple stoppings. Using MVS data collected from ventilation surveys of different subsurface coal mines, average resistances were determined for stoppings in poor, average, good, and excellent condition. Average stopping resistances were calculated for concrete-block and Kennedy stoppings. Using the average stopping resistance, measured and calculated with the MVS method, provides a ventilation modeling tool that can be used to construct more accurate and useful ventilation models. 3 refs., 3 figs.
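Under the square law used in mine ventilation, a single stopping's resistance follows from the pressure differential across it and the leakage flow through it, and parallel leakage paths combine via reciprocal square roots. A minimal sketch with invented survey numbers (the USBM and MVS procedures differ in how these quantities are measured and averaged):

```python
def stopping_resistance(dp_Pa, Q_m3s):
    """Square-law (Atkinson) resistance of a single stopping from the measured
    pressure differential across it and the leakage airflow through it."""
    return dp_Pa / Q_m3s ** 2                 # N*s^2/m^8

def equivalent_resistance_parallel(resistances):
    """Equivalent resistance of stoppings leaking in parallel under the
    square law: 1/sqrt(Req) = sum(1/sqrt(Ri))."""
    return sum(r ** -0.5 for r in resistances) ** -2

row = [stopping_resistance(500.0, 0.05) for _ in range(20)]   # 20 like stoppings
print(stopping_resistance(500.0, 0.05), equivalent_resistance_parallel(row))
```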
NASA Astrophysics Data System (ADS)
Drozd, M.; Marchewka, M. K.
2006-05-01
The room-temperature X-ray studies of the L-lysine × tartaric acid complex are not unambiguous: disorder of three carbon atoms in the L-lysine molecule is observed. A theoretical geometry study performed with DFT methods resolves most of the doubts connected with the crystallographic measurements. The theoretical vibrational frequencies and potential energy distribution (PED) of L-lysine × tartaric acid were calculated by the B3LYP method. The calculated frequencies were compared with the experimentally measured IR spectra. A complete assignment of the bands has been made on the basis of the calculated PED. Restricted Hartree-Fock (RHF) methods were used to calculate the hyperpolarizability of the investigated compound. The theoretical results are compared with the experimental value of β.
Dosimetric evaluation of intrafractional tumor motion by means of a robot driven phantom
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richter, Anne; Wilbert, Juergen; Flentje, Michael
2011-10-15
Purpose: The aim of this work was to investigate the influence of intrafractional tumor motion on the accumulated (absorbed) dose. The accumulated dose was determined by means of calculations and measurements with a robot-driven motion phantom. Methods: Different motion scenarios and compensation techniques were realized in a phantom study to investigate the influence of motion on image acquisition, dose calculation, and dose measurement. The influence of motion on the accumulated dose was calculated with two methods (a model-based and a voxel-based method). Results: Tumor motion resulted in a blurring of steep dose gradients and a reduction of dose at the periphery of the target. A systematic variation of motion parameters allowed the determination of the main parameters influencing the accumulated dose. The key parameters with the greatest influence on dose were the mean amplitude and the pattern of motion. Investigations of the safety margins necessary to compensate for dose reduction showed that smaller safety margins are sufficient if the developed concept with optimized margins (OPT concept) is used instead of the standard internal target volume (ITV) concept. Both calculation methods were a reasonable approximation of the measured dose, with the voxel-based method being in better agreement with the measurements. Conclusions: Further evaluation of available systems and algorithms for dose accumulation is needed to create guidelines for the verification of the accumulated dose.
A new method of calculating electrical conductivity with applications to natural waters
McCleskey, R. Blaine; Nordstrom, D. Kirk; Ryan, Joseph N.; Ball, James W.
2012-01-01
A new method is presented for calculating the electrical conductivity of natural waters that is accurate over a large range of effective ionic strength (0.0004–0.7 mol kg-1), temperature (0–95 °C), pH (1–10), and conductivity (30–70,000 μS cm-1). The method incorporates a reliable set of equations to calculate the ionic molal conductivities of cations and anions (H+, Li+, Na+, K+, Cs+, NH4+, Mg2+, Ca2+, Sr2+, Ba2+, F-, Cl-, Br-, SO42-, HCO3-, CO32-, NO3-, and OH-), environmentally important trace metals (Al3+, Cu2+, Fe2+, Fe3+, Mn2+, and Zn2+), and ion pairs (HSO4-, NaSO4-, NaCO3-, and KSO4-). These equations are based on new electrical conductivity measurements for electrolytes found in a wide range of natural waters. In addition, the method is coupled to a geochemical speciation model that is used to calculate the speciated concentrations required for accurate conductivity calculations. The method was thoroughly tested by calculating the conductivities of 1593 natural water samples and the mean difference between the calculated and measured conductivities was -0.7 ± 5%. Many of the samples tested were selected to determine the limits of the method and include acid mine waters, geothermal waters, seawater, dilute mountain waters, and river water impacted by municipal waste water. Transport numbers were calculated and H+, Na+, Ca2+, Mg2+, NH4+, K+, Cl-, SO42-, HCO3-, CO32-, F-, Al3+, Fe2+, NO3-, and HSO4- substantially contributed (>10%) to the conductivity of at least one of the samples. Conductivity imbalance in conjunction with charge imbalance can be used to identify whether a cation or an anion measurement is likely in error, thereby providing an additional quality assurance/quality control constraint on water analyses.
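At its core the calculation sums, over all charged species, the product of an ionic molal conductivity and the speciated molality. The sketch below shows that skeleton with fixed illustrative 25 °C λ values; in the paper, λ is a fitted function of temperature and ionic strength, and the speciated molalities come from the coupled geochemical model.

```python
def conductivity_uS_cm(speciated_molality, lam):
    """Electrical conductivity (uS/cm) as the sum over charged species of
    ionic molal conductivity (S cm^2/mol) times molality (mol/kg, which is
    approximately mol/L in dilute solutions)."""
    return sum(lam[s] * m * 1000.0 for s, m in speciated_molality.items())

lam_25C = {"Na+": 50.1, "Cl-": 76.3}   # illustrative infinite-dilution values
print(conductivity_uS_cm({"Na+": 0.001, "Cl-": 0.001}, lam_25C))  # ~126 uS/cm
```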
SU-F-T-436: A Method to Evaluate Dosimetric Properties of SFGRT in Eclipse TPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, M; Tobias, R; Pankuch, M
Purpose: The objective was to develop a method for dose distribution calculation of spatially fractionated GRID radiotherapy (SFGRT) in the Eclipse treatment planning system (TPS). Methods: Patient treatment plans with SFGRT for bulky tumors were generated in Varian Eclipse version 11. A virtual structure based on the GRID pattern was created and registered to a patient CT image dataset. The virtual GRID structure was positioned at the isocenter level together with matching beam geometries to simulate a commercially available GRID block made of brass. This method overcame the difficulty in treatment planning and dose calculation due to the lack of an option to insert a GRID block add-on in the Eclipse TPS. The patient treatment planning displayed GRID effects on the target, critical structures, and dose distribution. The dose calculations were compared to measurement results in phantom. Results: The GRID block structure was created to follow the beam divergence to the patient CT images. The inserted virtual GRID block made it possible to calculate the dose distributions and profiles at various depths in Eclipse. The virtual GRID block was added as an option to the TPS. The 3D representation of the isodose distribution of the spatially fractionated beam was generated in axial, coronal, and sagittal planes. The physics of GRID can differ from that of fields shaped by regular blocks because charged-particle equilibrium cannot be guaranteed for small field openings. Output factor (OF) measurement was required to calculate the MU to deliver the prescribed dose. The calculated OF based on the virtual GRID agreed well with the measured OF in phantom. Conclusion: This method of creating a virtual GRID block is proposed for the first time in the Eclipse TPS. The dose distributions and the in-plane and cross-plane profiles in the PTV can be displayed in 3D space. The calculated OFs based on the virtual GRID model compare well with the measured OFs for SFGRT clinical use.
Chrystal, C; Burrell, K H; Grierson, B A; Groebner, R J; Kaplan, D H
2012-10-01
To improve poloidal rotation measurement capabilities on the DIII-D tokamak, new chords for the charge exchange recombination spectroscopy (CER) diagnostic have been installed. CER is a common method for measuring impurity rotation in tokamak plasmas. These new chords make measurements on the high-field side of the plasma. They are designed so that they can measure toroidal rotation without the need for the calculation of atomic physics corrections. Asymmetry between toroidal rotation on the high- and low-field sides of the plasma is used to calculate poloidal rotation. Results for the main impurity in the plasma are shown and compared with a neoclassical calculation of poloidal rotation.
Wang, Changguang; Williams, Noelle S
2013-03-05
The aim of this study is to further validate the use of ultrafiltration (UF) as a method for determining plasma protein binding (PPB) by demonstrating that non-specific binding (NSB) is not a limitation, even for highly lipophilic compounds, because NSB sites on the apparatus are passivated in the presence of plasma. Mass balance theory was used to calculate recovery of 20 commercial and seven investigational compounds during ultrafiltration in the presence and absence of plasma. PPB was also measured using this mass balance approach for comparison to PPB determined by rapid equilibrium dialysis (RED) and as found in the literature. Compound recovery during UF was dramatically different in the presence and absence of plasma for compounds with high NSB in PBS only. A comparison of PPB calculated by ultrafiltration with literature values or calculated by RED gave concordant results. Discrepancies could be explained by changes in pH, insufficient time to equilibrium, or compound instability during RED, problems which were circumvented by ultrafiltration. Therefore, NSB, as measured by the traditional incubation of compound in PBS, need not be an issue when choosing UF as a PPB assay method. It is more appropriate to calculate compound recovery from the device in plasma as measured by mass balance to determine the suitability of the method for an individual compound. The speed with which UF can be conducted additionally avoids changes in pH or compound loss that can occur with other methods. The mass balance approach to UF is thus a preferred method for rapid determination of PPB. Copyright © 2012 Elsevier B.V. All rights reserved.
Automatic lumbar spine measurement in CT images
NASA Astrophysics Data System (ADS)
Mao, Yunxiang; Zheng, Dong; Liao, Shu; Peng, Zhigang; Yan, Ruyi; Liu, Junhua; Dong, Zhongxing; Gong, Liyan; Zhou, Xiang Sean; Zhan, Yiqiang; Fei, Jun
2017-03-01
Accurate lumbar spine measurement in CT images provides an essential way to quantitatively analyze spinal diseases such as spondylolisthesis and scoliosis. In today's clinical workflow, the measurements are performed manually by radiologists and surgeons, which is time-consuming and irreproducible. Therefore, an automatic and accurate lumbar spine measurement algorithm is highly desirable. In this study, we propose a method to automatically calculate five different lumbar spine measurements in CT images. The proposed method has three main stages: First, a learning-based spine labeling method, which integrates both image appearance and spine geometry information, is used to detect lumbar and sacrum vertebrae in CT images. Then, a multi-atlas-based image segmentation method is used to segment each lumbar vertebra and the sacrum based on the detection result. Finally, measurements are derived from the segmentation result of each vertebra. Our method has been evaluated on 138 spinal CT scans to automatically calculate five widely used clinical spine measurements. Experimental results show that our method achieves success rates above 90% across all the measurements. Our method also significantly improves measurement efficiency compared to manual measurements. Besides benefiting the routine clinical diagnosis of spinal diseases, our method also enables large-scale data analytics for scientific and clinical research.
Quantifying the sensitivity of post-glacial sea level change to laterally varying viscosity
NASA Astrophysics Data System (ADS)
Crawford, Ophelia; Al-Attar, David; Tromp, Jeroen; Mitrovica, Jerry X.; Austermann, Jacqueline; Lau, Harriet C. P.
2018-05-01
We present a method for calculating the derivatives of measurements of glacial isostatic adjustment (GIA) with respect to the viscosity structure of the Earth and the ice sheet history. These derivatives, or kernels, quantify the linearised sensitivity of measurements to the underlying model parameters. The adjoint method is used to enable efficient calculation of theoretically exact sensitivity kernels within laterally heterogeneous earth models that can have a range of linear or non-linear viscoelastic rheologies. We first present a new approach to calculate GIA in the time domain, which, in contrast to the more usual formulation in the Laplace domain, is well suited to continuously varying earth models and to the use of the adjoint method. Benchmarking results show excellent agreement between our formulation and previous methods. We illustrate the potential applications of the kernels calculated in this way through a range of numerical calculations relative to a spherically symmetric background model. The complex spatial patterns of the sensitivities are not intuitive, and this is the first time that such effects are quantified in an efficient and accurate manner.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larriba, Carlos, E-mail: clarriba@umn.edu; Hogan, Christopher J.
2013-10-15
The structures of nanoparticles, macromolecules, and molecular clusters in gas phase environments are often studied via measurement of collision cross sections. To directly compare structure models to measurements, it is hence necessary to have computational techniques available to calculate the collision cross sections of structural models under conditions matching measurements. However, presently available collision cross section methods contain the underlying assumption that collisions between gas molecules and structures are completely elastic (conserving gas molecule translational energy) and specular, while experimental evidence suggests that in the most commonly used background gases for measurements, air and molecular nitrogen, gas molecule reemission is largely inelastic (with exchange of energy between vibrational, rotational, and translational modes) and should be treated as diffuse in computations with fixed structural models. In this work, we describe computational techniques to predict the free molecular collision cross sections for fixed structural models of gas phase entities in which inelastic and non-specular gas molecule reemission rules can be invoked, and the long-range ion-induced dipole (polarization) potential between gas molecules and a charged entity can be considered. Specifically, two calculation procedures are described in detail: a diffuse hard sphere scattering (DHSS) method, in which structures are modeled as hard spheres and collision cross sections are calculated for rectilinear trajectories of gas molecules, and a diffuse trajectory method (DTM), in which the assumption of rectilinear trajectories is relaxed and the ion-induced dipole potential is considered. Collision cross section calculations using the DHSS and DTM methods are performed on spheres, models of quasifractal aggregates of varying fractal dimension, and fullerene-like structures. Techniques to accelerate DTM calculations by assessing the contribution of grazing gas molecule collisions (gas molecules whose trajectories are altered by the potential interaction) without tracking grazing trajectories are further discussed. The presented calculation techniques should enable more accurate collision cross section predictions under experimentally relevant conditions than pre-existing approaches, and should enhance the ability of collision cross section measurement schemes to discern the structures of gas phase entities.
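To see the geometric Monte Carlo machinery on which such methods build, the sketch below implements only the elastic/specular projection approximation for a rigid multi-sphere model, i.e., the baseline whose physical assumptions the paper replaces with diffuse, inelastic reemission (DHSS/DTM). Gas radius, sphere coordinates, and sampling counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def projection_ccs(coords_nm, radii_nm, gas_radius_nm=0.15,
                   n_orient=32, n_mc=100_000):
    """Orientation-averaged projection approximation (PA) to the collision
    cross section (nm^2) of a rigid multi-sphere structure model."""
    ccs = 0.0
    for _ in range(n_orient):
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))      # random rotation
        xy = (coords_nm @ q.T)[:, :2]                     # project onto a plane
        r_eff = radii_nm + gas_radius_nm                  # gas-augmented radii
        lo = (xy - r_eff[:, None]).min(0)
        hi = (xy + r_eff[:, None]).max(0)
        pts = rng.uniform(lo, hi, size=(n_mc, 2))         # sample bounding box
        d2 = ((pts[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
        hit = (d2 <= (r_eff ** 2)[None, :]).any(1)        # hits any sphere?
        ccs += np.prod(hi - lo) * hit.mean()
    return ccs / n_orient

# two touching 0.5 nm spheres
coords = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(projection_ccs(coords, np.full(2, 0.5)))
```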
The effects of shared information on semantic calculations in the gene ontology.
Bible, Paul W; Sun, Hong-Wei; Morasso, Maria I; Loganantharaj, Rasiah; Wei, Lai
2017-01-01
The structured vocabulary that describes gene function, the gene ontology (GO), serves as a powerful tool in biological research. One application of GO in computational biology calculates semantic similarity between two concepts to make inferences about the functional similarity of genes. A class of term similarity algorithms explicitly calculates the shared information (SI) between concepts then substitutes this calculation into traditional term similarity measures such as Resnik, Lin, and Jiang-Conrath. Alternative SI approaches, when combined with ontology choice and term similarity type, lead to many gene-to-gene similarity measures. No thorough investigation has been made into the behavior, complexity, and performance of semantic methods derived from distinct SI approaches. We apply bootstrapping to compare the generalized performance of 57 gene-to-gene semantic measures across six benchmarks. Considering the number of measures, we additionally evaluate whether these methods can be leveraged through ensemble machine learning to improve prediction performance. Results showed that the choice of ontology type most strongly influenced performance across all evaluations. Combining measures into an ensemble classifier reduces cross-validation error beyond any individual measure for protein interaction prediction. This improvement resulted from information gained through the combination of ontology types as ensemble methods within each GO type offered no improvement. These results demonstrate that multiple SI measures can be leveraged for machine learning tasks such as automated gene function prediction by incorporating methods from across the ontologies. To facilitate future research in this area, we developed the GO Graph Tool Kit (GGTK), an open source C++ library with Python interface (github.com/paulbible/ggtk).
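The substitution the paper studies is easy to see in code: each term-similarity family takes the shared information SI(a,b) (classically the information content of the most informative common ancestor) and combines it with the term ICs. A minimal sketch, with the Jiang-Conrath distance turned into a similarity by one common transform:

```python
import math

def ic(p_term):
    """Information content of a GO term from its annotation probability."""
    return -math.log(p_term)

def resnik(si):
    return si

def lin(si, ic_a, ic_b):
    return 2.0 * si / (ic_a + ic_b) if (ic_a + ic_b) > 0 else 0.0

def jiang_conrath(si, ic_a, ic_b):
    # distance d = ic_a + ic_b - 2*si, mapped to a similarity via 1/(1+d)
    return 1.0 / (1.0 + ic_a + ic_b - 2.0 * si)

# any alternative shared-information estimate can be substituted for si here
ic_a, ic_b, si = ic(0.01), ic(0.02), ic(0.08)
print(resnik(si), lin(si, ic_a, ic_b), jiang_conrath(si, ic_a, ic_b))
```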
A holistic calibration method with iterative distortion compensation for stereo deflectometry
NASA Astrophysics Data System (ADS)
Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian
2018-07-01
This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error to the system due to an inaccurate imaging model and incomplete distortion elimination. The proposed calibration method compensates system distortion with an iterative algorithm instead of the conventional mathematical distortion model. The initial values of the system parameters are calculated from the fringe patterns displayed on the system's LCD screen through reflection off a markless flat mirror. An iterative algorithm is proposed to compensate system distortion and to optimize the camera imaging parameters and system geometrical relation parameters based on a cost function. Both simulation work and experimental results show that the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak value) of the measurement error of a flat mirror can be reduced from 282 nm with the conventional calibration approach to 69.7 nm with the proposed method.
McGuigan, John A S; Kay, James W; Elder, Hugh Y
2014-01-01
In Ca(2+)/Mg(2+) buffers the calculated ionised concentrations ([X(2+)]) can vary by up to a factor of seven. Since there are no defined standards it is impossible to check calculated [X(2+)], making measurement essential. The ligand optimisation method (LOM) is an accurate method to measure [X(2+)] in Ca(2+)/Mg(2+) buffers; independent estimation of ligand purity extends the method to pK(/) < 4. To simplify calculation, Excel programs ALE and AEC were compiled for the LOM and its extension. This paper demonstrates that the slope of the electrode in the pX range 2.000-3.301 deviates from Nernstian behaviour, as it depends on the value of the lumped interference, Σ. ALE was modified to include this effect; this modified program SALE, and the programs ALE and AEC, were used on simulated data for Ca(2+)-EGTA and Mg(2+)-ATP buffers to calculate electrode and buffer characteristics as a function of Σ. Ca(2+)-electrodes have a Σ < 10(-6) mol/l and there was no difference amongst the three methods. The Σ for Mg(2+)-electrodes lies between 10(-5) and 1.5 × 10(-5) mol/l, and [Mg(2+)] calculated with ALE were around 3% less than the true value. SALE and AEC correctly predicted [Mg(2+)]. SALE was used to recalculate K(/) and pK(/) on measured data for Ca(2+)-EGTA and Mg(2+)-EDTA buffers. These results demonstrated that it is pK(/) that is normally distributed. Until defined standards are available, [X(2+)] in Ca(2+)/Mg(2+) buffers have to be measured. The most appropriate method is to use Ca(2+)/Mg(2+) electrodes combined with the Excel programs SALE or AEC. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yuan, Weijia; Coombs, T. A.; Kim, Jae-Ho; Han Kim, Chul; Kvitkovic, Jozef; Pamidi, Sastry
2011-12-01
Theoretical and experimental AC loss data on a superconducting pancake coil wound using second generation (2G) conductors are presented. An anisotropic critical state model is used to calculate critical current and the AC losses of a superconducting pancake coil. In the coil there are two regions, the critical state region and the subcritical region. The model assumes that in the subcritical region the flux lines are parallel to the tape wide face. AC losses of the superconducting pancake coil are calculated using this model. Both calorimetric and electrical techniques were used to measure AC losses in the coil. The calorimetric method is based on measuring the boil-off rate of liquid nitrogen. The electric method used a compensation circuit to eliminate the inductive component to measure the loss voltage of the coil. The experimental results are consistent with the theoretical calculations thus validating the anisotropic critical state model for loss estimations in the superconducting pancake coil.
Sensor, method and system of monitoring transmission lines
Syracuse, Steven J.; Clark, Roy; Halverson, Peter G.; Tesche, Frederick M.; Barlow, Charles V.
2012-10-02
An apparatus, method, and system for measuring the magnetic field produced by phase conductors in multi-phase power lines. The magnetic field measurements are used to determine the current load on the conductors. The magnetic fields are sensed by coils placed sufficiently proximate the lines to measure the voltage induced in the coils by the field without touching the lines. The x and y components of the magnetic fields are used to calculate the conductor sag, and then the sag data, along with the field strength data, can be used to calculate the current load on the line and the phase of the current. The sag calculations of this invention are independent of line voltage and line current measurements. The system applies a computerized fitter routine to measured and sampled voltages on the coils to accurately determine the values of parameters associated with the overhead phase conductors.
Use of petroleum-based correlations and estimation methods for synthetic fuels
NASA Technical Reports Server (NTRS)
Antoine, A. C.
1980-01-01
Correlations of hydrogen content with aromatics content, heat of combustion, and smoke point are derived for some synthetic fuels prepared from oil shale and coal syncrudes. Comparing the results for aromatics content with correlations derived for petroleum fuels shows that the shale-derived fuels fit the petroleum-based correlations, but the coal-derived fuels do not. The correlations derived for heat of combustion and smoke point are comparable to some found for petroleum-based correlations. Calculated values of hydrogen content and of heat of combustion are obtained for the synthetic fuels by use of ASTM estimation methods. Comparisons of the measured and calculated values show biases in the equations that exceed the critical statistical values. Comparison of the hydrogen content measured by the standard ASTM combustion method with that measured by a nuclear magnetic resonance (NMR) method shows a decided bias. The comparison of the calculated and measured NMR hydrogen contents shows a difference similar to that found with petroleum fuels.
Watanabe, Takashi
2013-01-01
The wearable sensor system developed by our group, which measures lower limb angles using a Kalman-filtering-based method, was suggested to be useful in the evaluation of gait function for rehabilitation support. However, the variability of its measurement errors needed to be reduced. In this paper, a variable-Kalman-gain method based on the angle error calculated from acceleration signals is proposed to improve measurement accuracy. The proposed method was tested against a fixed-gain Kalman filter and a variable-Kalman-gain method based on acceleration magnitude that was used in previous studies. First, in angle measurement during treadmill walking, the proposed method measured lower limb angles with the highest accuracy; it significantly improved foot inclination angle measurement, while improving shank and thigh inclination angles slightly. The variable-gain method based on acceleration magnitude was not effective for our Kalman filter system. Then, in angle measurement of a rigid body model, the proposed method showed measurement accuracy similar to or higher than results seen in other studies that fixed markers of a camera-based motion measurement system on a rigid plate together with a sensor, or directly on the sensor. The proposed method was found to be effective for angle measurement with inertial sensors. PMID:24282442
Validation of cardiac accelerometer sensor measurements.
Remme, Espen W; Hoff, Lars; Halvorsen, Per Steinar; Naerum, Edvard; Skulstad, Helge; Fleischer, Lars A; Elle, Ole Jakob; Fosse, Erik
2009-12-01
In this study we have investigated the accuracy of an accelerometer sensor designed for the measurement of cardiac motion and automatic detection of motion abnormalities caused by myocardial ischaemia. The accelerometer, attached to the left ventricular wall, changed its orientation relative to the direction of gravity during the cardiac cycle. This caused a varying gravity component in the measured acceleration signal that introduced an error in the calculation of myocardial motion. Circumferential displacement, velocity and rotation of the left ventricular apical region were calculated from the measured acceleration signal. We developed a mathematical method to separate translational and gravitational acceleration components based on a priori assumptions of myocardial motion. The accuracy of the measured motion was investigated by comparison with known motion of a robot arm programmed to move like the heart wall. The accuracy was also investigated in an animal study. The sensor measurements were compared with simultaneously recorded motion from a robot arm attached next to the sensor on the heart and with measured motion by echocardiography and a video camera. The developed compensation method for the varying gravity component improved the accuracy of the calculated velocity and displacement traces, giving very good agreement with the reference methods.
Shi, Baoli; Wang, Yue; Jia, Lina
2011-02-11
Inverse gas chromatography (IGC) is an important technique for the characterization of the surface properties of solid materials. A standard approach to surface characterization is to first determine the surface dispersive free energy of the solid stationary phase using a series of linear alkane liquids as molecular probes, and then to calculate the acid-base parameters from the dispersive parameters. However, two different methods are generally used for the calculation of surface dispersive free energy: the Dorris-Gray method and the Schultz method. In this paper, the results calculated with the Dorris-Gray and Schultz methods are compared by computing their ratio from the basic equations and parameters of each. It can be concluded that the dispersive parameters calculated with the Dorris-Gray method will always be larger than those calculated with the Schultz method, and the ratio grows as the measuring temperature increases. Compared with the parameters in solvent handbooks, the traditional surface free energy parameters of n-alkanes listed in papers using the Schultz method appear insufficiently accurate, as supported by a published IGC experimental result. © 2010 Elsevier B.V. All rights reserved.
High accuracy diffuse horizontal irradiance measurements without a shadowband
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlemmer, J.A; Michalsky, J.J.
1995-12-31
The standard method for measuring diffuse horizontal irradiance uses a fixed shadowband to block direct solar radiation. This method requires a correction for the excess skylight blocked by the band, and this correction varies with sky conditions. Alternatively, diffuse horizontal irradiance may be calculated from total horizontal and direct normal irradiance. This method is in error because of the angular (cosine) response of the total horizontal pyranometer to direct beam irradiance. This paper describes an improved calculation of diffuse horizontal irradiance from total horizontal and direct normal irradiance using a predetermination of the angular response of the total horizontal pyranometer. We compare these diffuse horizontal irradiance calculations with measurements made with a shading-disk pyranometer that shields direct irradiance using a tracking disk. Results indicate significant improvement in most cases. Remaining disagreement most likely arises from undetected tracking errors and instrument leveling.
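In simplified form, the corrected subtraction is DHI = GHI_measured − DNI·cos(Z)·C(Z), where C(Z) is the pyranometer's measured angular response relative to an ideal cosine receiver. A sketch with a made-up quadratic response standing in for a real instrument characterisation:

```python
import math

def diffuse_horizontal(ghi_meas, dni, zenith_deg, cosine_response):
    """Diffuse horizontal irradiance (W/m^2) from measured total horizontal
    (GHI) and direct normal (DNI) irradiance, correcting the beam term for
    the pyranometer's angular response. cosine_response(z_deg) is the ratio
    of actual to ideal response at zenith angle z."""
    beam_horiz = dni * math.cos(math.radians(zenith_deg)) * cosine_response(zenith_deg)
    return ghi_meas - beam_horiz

resp = lambda z_deg: 1.0 - 2.0e-5 * z_deg ** 2       # invented characterisation
print(diffuse_horizontal(650.0, 800.0, 45.0, resp))  # ~107 W/m^2
```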
The purpose of this SOP is to describe the procedures undertaken to calculate the ingestion exposure using composite food chemical residue values from the day of direct measurements. The calculation is based on the probabilistic approach. This SOP uses data that have been proper...
NASA Astrophysics Data System (ADS)
Johnson, M. R.; Prager, M.; Grimm, H.; Neumann, M. A.; Kearley, G. J.; Wilson, C. C.
1999-06-01
Measurements of tunnelling and librational excitations for the methyl group in paracetamol and tunnelling excitations for the methyl group in acetanilide are reported. In both cases, the results are compared with molecular mechanics calculations, based on the measured low-temperature crystal structures, which follow an established recipe. Agreement between calculated and measured methyl group observables is not as good as expected, and this is attributed to the presence of comprehensive hydrogen bond networks formed by the peptide groups. Good agreement is obtained with a periodic quantum chemistry calculation using density functional methods; these calculations confirm the validity of the one-dimensional rotational model used and of the crystal structures. A correction to the Coulomb contribution to the rotational potential in the established recipe using semi-empirical quantum chemistry methods, which accommodates the modified charge distribution due to the hydrogen bonds, is investigated.
77 FR 21038 - Energy Conservation Program: Test Procedures for Light-Emitting Diode Lamps
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-09
Excerpt from the rule's table of contents: photometric measurements of solid-state lighting products (determining lumen output, input power, and CCT); test method; test calculations and rounding; proposed approach for rated lifetime measurements; test method to project rated lifetime; test conditions; test setup; test method and measurements.
NASA Technical Reports Server (NTRS)
1978-01-01
Various methods for calculating the transmission functions of the 15 micron CO2 band are described. The results of these methods are compared with laboratory measurements. It is found that program P4 provides the best agreement with experimental results on the average.
Estimation of plasma ion saturation current and reduced tip arcing using Langmuir probe harmonics.
Boedo, J A; Rudakov, D L
2017-03-01
We present a method to calculate the ion saturation current, I sat , for Langmuir probes at high frequency (>100 kHz) using the harmonics technique and we compare that to a direct measurement of I sat . It is noted that the I sat estimation can be made directly by the ratio of harmonic amplitudes, without explicitly calculating T e . We also demonstrate that since the probe tips using the harmonic method are oscillating near the floating potential, drawing little power, this method reduces tip heating and arcing and allows plasma density measurements at a plasma power flux that would cause continuously biased tips to arc. A multi-probe array is used, with two spatially separated tips employing the harmonics technique and measuring the amplitude of at least two harmonics per tip. A third tip, located between the other two, measures the ion saturation current directly. We compare the measured and calculated ion saturation currents for a variety of plasma conditions and demonstrate the validity of the technique and its use in reducing arcs.
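Assuming an exponential electron branch and a probe driven by V0·sin(ωt) about the floating potential, harmonic n of the current has amplitude 2·Isat·I_n(α)/I_0(α) with α = V0/Te (I_n the modified Bessel functions), so the ratio of the second to the first harmonic fixes Te and the first harmonic then fixes Isat. A sketch under those Maxwellian, sheath-effect-free assumptions (the sheath corrections are what work such as the floating-probe compensation above adds):

```python
from scipy.special import iv          # modified Bessel functions I_n
from scipy.optimize import brentq

def isat_te_from_harmonics(A1, A2, V0, Te_bounds=(0.2, 50.0)):
    """Ion saturation current (A) and electron temperature (eV) from the
    amplitudes A1, A2 of the first two current harmonics of a floating probe
    driven with voltage amplitude V0 (volts)."""
    ratio = A2 / A1                                # = I_2(a)/I_1(a), a = V0/Te
    f = lambda Te: iv(2, V0 / Te) / iv(1, V0 / Te) - ratio
    Te = brentq(f, *Te_bounds)                     # ratio is monotonic in Te
    a = V0 / Te
    Isat = A1 * iv(0, a) / (2.0 * iv(1, a))
    return Isat, Te

print(isat_te_from_harmonics(A1=2.0e-3, A2=0.6e-3, V0=10.0))
```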
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Liu, B; Liang, B
Purpose: The current CyberKnife treatment planning system (TPS) provides two dose calculation algorithms: ray-tracing and Monte Carlo. The ray-tracing algorithm is fast but less accurate, and it cannot handle the irregular fields produced by the multi-leaf collimator system recently introduced with the CyberKnife M6. The Monte Carlo method has well-known accuracy, but the current version still takes a long time to finish dose calculations. The purpose of this paper is to develop a GPU-based fast collapsed-cone convolution/superposition (C/S) dose engine for the CyberKnife system that achieves both accuracy and efficiency. Methods: The TERMA distribution from a poly-energetic source was calculated in the beam's-eye-view coordinate system, which is GPU friendly and has linear complexity. The dose distribution was then computed by inversely collecting the energy depositions from all TERMA points along 192 collapsed-cone directions. The EGSnrc user code was used to pre-calculate energy deposition kernels (EDKs) for a series of mono-energetic photons. The energy spectrum was reconstructed from the measured tissue maximum ratio (TMR) curve, and the TERMA-averaged cumulative kernels were then calculated. Beam-hardening parameters and intensity profiles were optimized based on measurement data from the CyberKnife system. Results: The differences between measured and calculated TMR were less than 1% for all collimators except in the build-up regions. The calculated profiles also showed good agreement with the measured doses, within 1% except in the penumbra regions. The developed C/S dose engine was also used to evaluate four clinical CyberKnife treatment plans; compared against the Monte Carlo method for heterogeneous cases, it showed better dose calculation accuracy than the ray-tracing algorithm. The dose calculation takes about several seconds per beam, depending on collimator size and the dose calculation grid. Conclusion: A GPU-based C/S dose engine has been developed for the CyberKnife system; it was shown to be efficient and accurate for clinical purposes and can be easily implemented in a TPS.
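As a toy illustration of the TERMA step described above (not the authors' GPU implementation), the sketch below attenuates a two-bin polyenergetic spectrum along a single beam's-eye-view ray; the spectrum weights are assumed and the attenuation coefficients are rough water-like values. The depth dependence of the effective attenuation exposes the beam-hardening effect that the optimized parameters compensate for:

```python
import numpy as np

# Hypothetical two-bin spectrum: energies, fluence weights, attenuation in water
E = np.array([2.0, 6.0])            # MeV (illustrative)
w = np.array([0.6, 0.4])            # fluence weights, sum to 1
mu = np.array([0.049, 0.0277])      # linear attenuation [1/cm], approx. water values
rho = 1.0                           # g/cm^3, water

depth = np.linspace(0, 30, 301)     # cm along a beam's-eye-view ray
# TERMA = total energy released per unit mass: sum_E w_E * E * (mu/rho)_E * exp(-mu_E d)
terma = sum(w_i * E_i * (mu_i / rho) * np.exp(-mu_i * depth)
            for w_i, mu_i, E_i in zip(w, mu, E))

# Beam hardening: the effective attenuation slope flattens with depth
eff_mu = -np.gradient(np.log(terma), depth)
print(terma[:3].round(4), eff_mu[[0, -1]].round(4))
```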
Power/Sample Size Calculations for Assessing Correlates of Risk in Clinical Efficacy Trials
Gilbert, Peter B.; Janes, Holly E.; Huang, Yunda
2016-01-01
In a randomized controlled clinical trial that assesses treatment efficacy, a common objective is to assess the association of a measured biomarker response endpoint with the primary study endpoint in the active treatment group, using a case-cohort, case-control, or two-phase sampling design. Methods for power and sample size calculations for such biomarker association analyses typically do not account for the level of treatment efficacy, precluding interpretation of the biomarker association results in terms of biomarker effect modification of treatment efficacy, with the detriment that the power calculations may tacitly and inadvertently assume that the treatment harms some study participants. We develop power and sample size methods that account for this issue; the methods also account for inter-individual variability of the biomarker that is not biologically relevant (e.g., due to technical measurement error). We focus on a binary study endpoint and on a biomarker subject to measurement error that is normally distributed or categorical with two or three levels. We illustrate the methods with preventive HIV vaccine efficacy trials, and include an R package implementing the methods. PMID:27037797
Langarika-Rocafort, Argia; Emparanza, José Ignacio; Aramendi, José F; Castellano, Julen; Calleja-González, Julio
2017-01-01
To examine the intra-observer reliability of, and agreement between, five methods of measuring dorsiflexion during the Weight Bearing Dorsiflexion Lunge Test (WBLT), and to assess the degree of agreement between three of the methods in female athletes. Repeated-measurements study design. Volleyball club. Twenty-five volleyball players. Dorsiflexion was evaluated using five methods: heel-wall distance, first toe-wall distance, inclinometer at the tibia, inclinometer at the Achilles tendon, and the dorsiflexion angle obtained by a simple trigonometric function. For the statistical analysis, agreement was studied using the Bland-Altman method, the Standard Error of Measurement (SEM), and the Minimum Detectable Change (MDC). Reliability analysis was performed using the Intraclass Correlation Coefficient (ICC). Measurement methods using the inclinometer had more than 6° of measurement error. The angle calculated by the trigonometric function had 3.28° of error. The inclinometer-based methods had ICC values < 0.90; the distance-based methods and the trigonometric angle measurement had ICC values > 0.90. Concerning the agreement between methods, bias ranged from 1.93° to 14.42°, and random error from 4.24° to 7.96°. To assess the dorsiflexion angle in the WBLT, the angle calculated by a trigonometric function is the most repeatable method. The methods of measurement cannot be used interchangeably. Copyright © 2016 Elsevier Ltd. All rights reserved.
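For readers unfamiliar with the agreement statistics used here, a compact sketch of the Bland-Altman bias and limits of agreement, SEM, and MDC for a test-retest design; the eight angle pairs are invented data, and the SEM shortcut SD(diff)/sqrt(2) assumes a simple two-trial design:

```python
import numpy as np

# Paired test-retest dorsiflexion angles in degrees (hypothetical data)
test = np.array([38.1, 41.3, 35.7, 44.2, 39.8, 42.5, 37.4, 40.9])
retest = np.array([37.5, 42.0, 36.8, 43.1, 40.6, 41.7, 38.2, 40.1])

diff = retest - test
bias = diff.mean()                                        # systematic difference
loa = bias + np.array([-1.96, 1.96]) * diff.std(ddof=1)   # Bland-Altman limits

# For two trials: SEM = SD(diff)/sqrt(2); MDC95 = 1.96*sqrt(2)*SEM
sem = diff.std(ddof=1) / np.sqrt(2)
mdc95 = 1.96 * np.sqrt(2) * sem
print(f"bias={bias:.2f} deg, LoA={loa.round(2)}, SEM={sem:.2f} deg, MDC95={mdc95:.2f} deg")
```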
NASA Astrophysics Data System (ADS)
Szyczewski, A.; Hołderna-Natkaniec, K.; Natkaniec, I.
2004-05-01
Inelastic incoherent neutron scattering spectra of progesterone and testosterone measured at 20 and 290 K were compared with the IR spectra measured at 290 K. The Phonon Density of States spectra display well-resolved peaks of low-frequency internal vibration modes up to 1200 cm⁻¹. Quantum chemistry calculations were performed by the semiempirical PM3 method and by the density functional theory method with different basis sets for the isolated molecule, as well as for the dimer system of testosterone. The proposed assignment of the internal vibrations of the normal modes enables us to draw conclusions about the sequence of the onset of the torsional movements of the CH3 groups. These conclusions were correlated with the results of proton molecular dynamics studies performed by the NMR method. The GAUSSIAN program was used for the calculations.
Quantum chemical calculations for polymers and organic compounds
NASA Technical Reports Server (NTRS)
Lopez, J.; Yang, C.
1982-01-01
The relativistic effects of the orbiting electrons on a model compound were calculated. The computational method used was based on 'Modified Neglect of Differential Overlap' (MNDO). The compound tetracyanoplatinate was used since empirical measurement and calculations along "classical" lines had yielded many known properties. The purpose was to show that for large molecules relativity effects could not be ignored and that these effects could be calculated and yield data in closer agreement to empirical measurements. Both the energy band structure and molecular orbitals are depicted.
Prediction of Quality Change During Thawing of Frozen Tuna Meat by Numerical Calculation I
NASA Astrophysics Data System (ADS)
Murakami, Natsumi; Watanabe, Manabu; Suzuki, Toru
A numerical calculation method has been developed to determine the optimum thawing method for minimizing the increase of metmyoglobin content (metMb%), an indicator of color change in frozen tuna meat during thawing. The calculation method comprises the following two steps: a) calculation of the temperature history in each part of the frozen tuna meat during thawing by the control volume method, under the assumption of one-dimensional heat transfer, and b) calculation of metMb% based on the combination of the calculated temperature history, the Arrhenius equation, and a first-order reaction equation for the rate of increase of metMb%. Thawing experiments measuring the temperature history of frozen tuna meat were carried out under rapid-thawing and slow-thawing conditions to compare the experimental data with the calculated temperature history as well as the increase of metMb%. The calculated results agreed with the experimental data. The proposed simulation method would be useful for predicting the optimum thawing conditions in terms of metMb%.
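A highly simplified sketch of the two-step calculation described above: explicit one-dimensional conduction for the temperature history, plus first-order Arrhenius kinetics for metMb%. All material properties, kinetic constants, and the crude convective boundary treatment are assumptions for illustration, and the latent heat of fusion (essential in the real control-volume model) is ignored:

```python
import numpy as np

# 1-D slab of tuna meat, explicit finite-difference conduction (assumed properties)
L, n = 0.04, 41                      # 4 cm slab, grid points
dx = L / (n - 1)
alpha = 1.2e-7                       # thermal diffusivity [m^2/s] (assumed)
T = np.full(n, -40.0)                # frozen initial temperature [deg C]
T_air = 20.0                         # thawing medium temperature
h_bc = 0.5                           # crude surface relaxation factor toward T_air

dt = 0.4 * dx**2 / alpha             # stable explicit time step
A_arr, Ea, R = 2.0e9, 7.0e4, 8.314   # Arrhenius prefactor [1/s], Ea [J/mol] (assumed)
m, m_eq = 5.0, 100.0                 # metMb% initial and asymptotic values (assumed)

for _ in range(int(3600 / dt)):      # one hour of thawing
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[0] += h_bc * (T_air - T[0]) * dt / 60.0   # simplistic convective surfaces
    T[-1] += h_bc * (T_air - T[-1]) * dt / 60.0
    k = A_arr * np.exp(-Ea / (R * (T.mean() + 273.15)))  # first-order rate at mean T
    m += k * (m_eq - m) * dt         # metMb% increase, first-order kinetics
print(f"centre T = {T[n // 2]:.1f} deg C, metMb ~ {m:.1f} %")
```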
Allometric method to estimate leaf area index for row crops
USDA-ARS?s Scientific Manuscript database
Leaf area index (LAI) is critical for predicting plant metabolism, biomass production, evapotranspiration, and greenhouse gas sequestration, but direct LAI measurements are difficult and labor intensive. Several methods are available to measure LAI indirectly or calculate LAI using allometric method...
A Fast Method for Measuring the Similarity Between 3D Model and 3D Point Cloud
NASA Astrophysics Data System (ADS)
Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng
2016-06-01
This paper proposes a fast method for measuring the partial Similarity between 3D Model and 3D point Cloud (SimMC). Measuring SimMC is crucial for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to avoid the huge computational burden incurred by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC-measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method both on synthetic data and on laser scanning data.
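A stripped-down sketch of the DistMC/SimMC idea under stated assumptions: the "model" is a unit square sampled uniformly, the "cloud" is a noisy copy of it, and uniform weights stand in for the paper's distance-weighting scheme:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# "Model": unit square in the z=0 plane, sampled uniformly; surface area = 1
model_pts = np.column_stack([rng.random((2000, 2)), np.zeros(2000)])
model_area = 1.0

# "Point cloud": noisy scan of the same plane
cloud = model_pts + rng.normal(0, 0.01, model_pts.shape)

# DistMC: mean distance from sampled model points to their nearest cloud points
dist_mc = cKDTree(cloud).query(model_pts)[0].mean()
sim_mc = model_area / dist_mc        # similarity as (weighted) area over DistMC
print(f"DistMC = {dist_mc:.4f}, SimMC = {sim_mc:.1f}")
```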
Freeman, Andrew L; Fahim, Mina S; Bechtold, Joan E
2012-10-01
Previous methods of pedicle screw strain measurement have utilized complex, time-consuming methods of strain gauge application, experienced high failure rates, did not effectively measure resultant bending moments, and could not predict moment orientation. The purpose of this biomechanical study was to validate an improved method of quantifying pedicle screw bending moment orientation and magnitude. Pedicle screws were instrumented to measure biplanar screw bending moments by positioning four strain gauges on flat, machined surfaces below the screw head. Screws were calibrated to measure bending moments by hanging certified weights a known distance from the strain gauges. Loads were applied in 30 deg increments at 12 different angles while recording data from two independent strain channels. The data were then analyzed to calculate the predicted orientation and magnitude of the resultant bending moment. Finally, flexibility tests were performed on a cadaveric motion segment implanted with the instrumented screws to demonstrate the implementation of this technique. The difference between the applied and calculated orientations of the bending moments averaged (±standard error of the mean (SEM)) 0.3 ± 0.1 deg across the four screws for all rotations and loading conditions. The calculated resultant bending moments deviated from the actual magnitudes by an average of 0.00 ± 0.00 Nm for all loading conditions. During cadaveric testing, the bending moment orientations were medial/lateral in flexion-extension, variable in lateral bending, and diagonal in axial torsion. The technique developed in this study provides an accurate method of calculating the orientation and magnitude of screw bending moments and can be utilized with any pedicle screw fixation system.
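Once each strain channel has been calibrated to a bending moment, the resultant-moment step reduces to plane vector arithmetic; the two channel values below are hypothetical:

```python
import numpy as np

# Two orthogonal, calibrated strain channels [Nm] (hypothetical readings)
M_ap, M_ml = 1.8, -0.9   # anterior-posterior and medial-lateral moments

M_res = np.hypot(M_ap, M_ml)                 # resultant bending moment magnitude
theta = np.degrees(np.arctan2(M_ml, M_ap))   # orientation in the screw cross-section
print(f"|M| = {M_res:.2f} Nm at {theta:.1f} deg from the AP axis")
```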
Apparatus and method for radioactive waste screening
Akers, Douglas W.; Roybal, Lyle G.; Salomon, Hopi; Williams, Charles Leroy
2012-09-04
An apparatus and method relating to screening radioactive waste are disclosed for ensuring that at least one calculated parameter for the measurement data of a sample falls within a range between an upper limit and a lower limit prior to the sample being packaged for disposal. The apparatus includes a radiation detector configured for detecting the radioactivity and radionuclide content of the sample of radioactive waste and generating measurement data in response thereto, and a collimator including at least one aperture to direct the field of view of the radiation detector. The method includes measuring the radioactive content of a sample and calculating one or more parameters from the radioactive content of the sample.
NASA Astrophysics Data System (ADS)
Korchemkina, E. N.; Latushkin, A. A.; Lee, M. E.
2017-11-01
Methods for determining the concentration of, and scattering by, suspended particles in seawater are compared. The methods considered include gravimetric measurements of the mass concentration of suspended matter, empirical and analytical calculations based on measurements of the light beam attenuation coefficient (BAC) in 4 spectral bands, and calculation of backscattering by particles using satellite measurements in the visible spectral range. The data were obtained in two cruises of the R/V "Professor Vodyanitsky" in the deep-water part of the Black Sea in July and October 2016. The spatial distribution of scattering by marine particles according to the satellite data is in good agreement with the contact measurements.
NASA Technical Reports Server (NTRS)
French, R. A.; Cohen, B. A.; Miller, J. S.
2014-01-01
The Potassium-Argon Laser Experiment (KArLE) is composed of two main instruments: a spectrometer, as part of the Laser-Induced Breakdown Spectroscopy (LIBS) method, and a Mass Spectrometer (MS). The LIBS laser ablates a sample and creates a plasma cloud, generating a pit in the sample. The LIBS plasma is measured for K abundance in weight percent, and the released gas is measured using the MS, which determines Ar abundance in moles. To relate the K and Ar measurements, the total mass of the ablated sample is needed but can be difficult to measure directly. Instead, density and volume are used to calculate mass, where density is calculated from the elemental composition of the rock (from the emission spectrum) and volume is determined from the pit morphology. This study aims to reduce the uncertainty for KArLE by analyzing pit volume relationships in several analog materials and comparing methods of pit volume measurement and their associated uncertainties.
Authenticating concealed private data while maintaining concealment
Thomas, Edward V [Albuquerque, NM; Draelos, Timothy J [Albuquerque, NM
2007-06-26
A method of and system for authenticating concealed and statistically varying multi-dimensional data comprising: acquiring an initial measurement of an item, wherein the initial measurement is subject to measurement error; applying a transformation to the initial measurement to generate reference template data; acquiring a subsequent measurement of an item, wherein the subsequent measurement is subject to measurement error; applying the transformation to the subsequent measurement; and calculating a Euclidean distance metric between the transformed measurements; wherein the calculated Euclidean distance metric is identical to the Euclidean distance metric between the measurements prior to transformation.
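One family of transformations with the distance-preserving property claimed above is the orthogonal maps. The sketch below (not necessarily the patented transformation) conceals measurements with a random orthogonal matrix and verifies that Euclidean distances computed on the concealed templates match those on the raw data:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16                                          # dimensionality of the measurement vector
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))    # random orthogonal "concealing" map

ref = rng.normal(size=d)                        # initial (reference) measurement
new = ref + rng.normal(scale=0.1, size=d)       # later measurement with error

# Distances computed on concealed templates equal distances on the raw data
d_raw = np.linalg.norm(new - ref)
d_hidden = np.linalg.norm(Q @ new - Q @ ref)
print(np.isclose(d_raw, d_hidden))              # True: authenticate without disclosure
```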
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashjaee, M.; Roomina, M.R.; Ghafouri-Azar, R.
1993-05-01
Two computational methods for calculating hourly, daily, and monthly average values of direct, diffuse, and global solar radiation on horizontal collectors are presented in this article for locations with different latitudes, altitudes, and atmospheric conditions in Iran. These methods were developed using two independent sets of measured data from the Iranian Meteorological Organization (IMO) for two cities in Iran (Tehran and Isfahan), spanning 14 years of measurement for Tehran and 4 years for Isfahan. Comparison of the calculated monthly average global solar radiation from the two models for Tehran and Isfahan with the measured IMO data indicated good agreement between them. The developed methods were then extended to another location (the city of Bandar-Abbas), where measured data are not available but the work of Daneshyar predicts the monthly global radiation. A maximum discrepancy of 7% between the developed models and the work of Daneshyar was observed.
NASA Astrophysics Data System (ADS)
Dementjev, Aleksandr S.; Jovaisa, A.; Silko, Galina; Ciegis, Raimondas
2005-11-01
Based on efficient numerical methods developed for calculating the propagation of light beams, the alternative methods for measuring the beam radius and propagation ratio proposed in the international standard ISO 11146 are analysed. Specific calculations of the alternative beam propagation ratios M_i^2 performed for a number of test beams with a complicated spatial structure showed that the correlation coefficients c_i used in the international standard do not establish a universal one-to-one relation between the alternative propagation ratios M_i^2 and the invariant propagation ratios M_σ^2 found by the method of moments.
Determining Normal-Distribution Tolerance Bounds Graphically
NASA Technical Reports Server (NTRS)
Mezzacappa, M. A.
1983-01-01
Graphical method requires only few calculations with simple equations and a table lookup. Distribution established from only three points: upper and lower confidence bounds of the mean, and lower confidence bound of the standard deviation. Graphical procedure establishes best-fit line for measured data and bounds for selected confidence level and any distribution percentile.
A new leakage measurement method for damaged seal material
NASA Astrophysics Data System (ADS)
Wang, Shen; Yao, Xue Feng; Yang, Heng; Yuan, Li; Dong, Yi Feng
2018-07-01
In this paper, a new leakage measurement method based on the temperature field and the temperature gradient field is proposed for detecting the leakage location and measuring the leakage rate in damaged seal material. First, a heat transfer leakage model is established, which can calculate the leakage rate from the temperature gradient field near the damaged zone. Second, a finite element model of an infinite plate with a damaged zone is built to calculate the leakage rate, which agrees well with the leakage rate from the heat transfer model. Finally, specimens of a tubular rubber seal with different damage shapes are used in leakage experiments, validating the correctness of the new measurement principle for the leakage rate and the leakage position. The results indicate the feasibility of the leakage measurement method for damaged seal material based on the temperature gradient field from infrared thermography.
A New Proposal to Redefine Kilogram by Measuring the Planck Constant Based on Inertial Mass
NASA Astrophysics Data System (ADS)
Liu, Yongmeng; Wang, Dawei
2018-04-01
A novel method to measure the Planck constant based on inertial mass is proposed here, distinguished from the conventional Kibble balance experiment, which is based on gravitational mass. The kilogram unit is linked to the Planck constant by calculating the differences of the parameters, i.e., resistance, voltage, velocity, and time, measured in a two-mode experiment: an unloaded-mass mode and a loaded-mass mode. In principle, all parameters measured in this experiment can reach a high accuracy, as in the Kibble balance experiment. This method has the advantage that some systematic errors are eliminated in the difference calculation of the measurements. In addition, this method is insensitive to air buoyancy, and the alignment work in this experiment is easy. Finally, the initial design of the apparatus is presented.
Air kerma strength characterization of a GZP6 Cobalt-60 brachytherapy source
Toossi, Mohammad Taghi Bahreyni; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Taheri, Mojtaba; Layegh, Mohsen; Makhdoumi, Yasha; Meigooni, Ali Soleimani
2010-01-01
Background: Task Group 40 (TG-40) of the American Association of Physicists in Medicine (AAPM) has recommended calibration of any brachytherapy source before its clinical use. The GZP6 afterloading brachytherapy unit is a 60Co high dose rate (HDR) system recently introduced in some Iranian radiotherapy centers. Aim: In this study, the air kerma strength (AKS) of 60Co source number three of this unit was estimated by Monte Carlo simulation and by in-air measurements. Materials and methods: The simulation was performed employing the MCNP-4C Monte Carlo code. Self-absorption in the source core and its capsule was taken into account when calculating the air kerma strength. In-air measurements were performed according to the multiple-distance method, using a specially designed jig and a 0.6 cm3 Farmer-type ionization chamber. The Monte Carlo simulation, in-air measurement, and GZP6 treatment planning results were compared for the primary air kerma strength (as of November 8, 2005). Results: The Monte Carlo calculated and in-air measured air kerma strengths were 17240.01 μGy m² h⁻¹ and 16991.83 μGy m² h⁻¹, respectively. The value provided by the GZP6 treatment planning system (TPS) was 15355 μGy m² h⁻¹. Conclusion: The calculated and measured AKS values are in good agreement. The calculated-TPS and measured-TPS AKS values also agree within the uncertainties related to our calculation, our measurements, and those certified by the GZP6 manufacturer. Considering the uncertainties, the TPS value for AKS is validated by our calculations and measurements; however, it carries a large uncertainty. PMID:24376948
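A sketch of the multiple-distance analysis under the usual inverse-square model with an effective positioning offset c; the kerma-rate readings below are synthetic numbers chosen to be consistent with an AKS near the values reported above:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical in-air kerma-rate readings [uGy/h] at several distances [m]
d = np.array([0.10, 0.15, 0.20, 0.30, 0.40])
K = np.array([1.68e6, 7.52e5, 4.25e5, 1.90e5, 1.07e5])

# Multiple-distance model: K(d) = S_K / (d + c)^2, with c an effective offset
def model(d, S_K, c):
    return S_K / (d + c) ** 2

(S_K, c), _ = curve_fit(model, d, K, p0=[1.7e4, 0.0])
print(f"S_K ~ {S_K:.0f} uGy*m^2/h, offset c ~ {c * 1000:.1f} mm")
```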
Multi-spectral temperature measurement method for gas turbine blade
NASA Astrophysics Data System (ADS)
Gao, Shan; Feng, Chi; Wang, Lixin; Li, Dong
2016-02-01
One of the basic methods to improve both the thermal efficiency and power output of a gas turbine is to increase the firing temperature. However, gas turbine blades are easily damaged in harsh high-temperature and high-pressure environments. Therefore, ensuring that the blade temperature remains within the design limits is very important. There are unsolved problems in blade temperature measurement, relating to the emissivity of the blade surface, influences of the combustion gases, and reflections of radiant energy from the surroundings. In this study, the emissivity of blade surfaces has been measured, with errors reduced by a fitting method, influences of the combustion gases have been calculated for different operational conditions, and a reflection model has been built. An iterative computing method is proposed for calculating blade temperatures, and the experimental results show that this method has high precision.
A novel dual-camera calibration method for 3D optical measurement
NASA Astrophysics Data System (ADS)
Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang
2018-05-01
A novel dual-camera calibration method is presented. In classic methods, the camera parameters are usually calculated and optimized using the reprojection error; however, for a system designed for 3D optical measurement, this error does not reflect the quality of the 3D reconstruction. The presented method uses a planar calibration plate. First, images of the calibration plate are captured from several orientations within the measurement range, and the initial parameters of the two cameras are obtained from these images. Then, the rotation and translation matrices that link the frames of the two cameras are calculated using the Centroid Distance Increment Matrix method, which reduces the degree of coupling between the parameters. Next, the 3D coordinates of the calibration points are reconstructed by the space intersection method, and the reconstruction error is calculated. This error is minimized to optimize the calibration parameters. Because the reconstruction error directly indicates the quality of the 3D reconstruction, it is more suitable for assessing dual-camera calibration. The experiments show that the proposed method is convenient and accurate: there is no strict requirement on the calibration plate position during the calibration process, and the accuracy is improved significantly by the proposed method.
Preliminary research on dual-energy X-ray phase-contrast imaging
NASA Astrophysics Data System (ADS)
Han, Hua-Jie; Wang, Sheng-Hao; Gao, Kun; Wang, Zhi-Li; Zhang, Can; Yang, Meng; Zhang, Kai; Zhu, Pei-Ping
2016-04-01
Dual-energy X-ray absorptiometry (DEXA) has been widely applied to measure the bone mineral density (BMD) and soft-tissue composition of the human body. However, the use of DEXA is greatly limited for low-Z materials such as soft tissues due to their weak absorption, while X-ray phase-contrast imaging (XPCI) shows significantly improved contrast in comparison with the conventional standard absorption-based X-ray imaging for soft tissues. In this paper, we propose a novel X-ray phase-contrast method to measure the area density of low-Z materials, including a single-energy method and a dual-energy method. The single-energy method is for the area density calculation of one low-Z material, while the dual-energy method aims to calculate the area densities of two low-Z materials simultaneously. Comparing the experimental and simulation results with the theoretical ones, the new method proves to have the potential to replace DEXA in area density measurement. The new method sets the prerequisites for a future precise and low-dose area density calculation method for low-Z materials. Supported by Major State Basic Research Development Program (2012CB825800), Science Fund for Creative Research Groups (11321503) and National Natural Science Foundation of China (11179004, 10979055, 11205189, 11205157)
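The dual-energy step amounts to solving a 2×2 linear system once the per-material response coefficients at the two energies are known; the coefficients and signals below are illustrative placeholders, not measured constants:

```python
import numpy as np

# Two-material, two-energy decomposition: s_E = mu1_E*t1 + mu2_E*t2
# (coefficients per unit area density are invented for illustration)
M = np.array([[0.40, 0.25],    # low energy:  [material 1, material 2]
              [0.22, 0.18]])   # high energy
s = np.array([1.10, 0.62])     # measured signals at the two energies

t = np.linalg.solve(M, s)      # area densities [g/cm^2] of the two materials
print(f"t1 = {t[0]:.3f}, t2 = {t[1]:.3f} g/cm^2")
```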
NASA Astrophysics Data System (ADS)
Pikálek, Tomáš; Šarbort, Martin; Číp, Ondřej; Pham, Minh Tuan; Lešundák, Adam; Pravdová, Lenka; Buchta, Zdeněk
2017-06-01
The air refractive index is an important parameter in interferometric length measurements, since it substantially affects the measurement accuracy. We present a method for measuring the refractive index of air based on monitoring the phase difference between the ambient air and the vacuum inside a permanently evacuated double-spaced cell. The cell is placed in one arm of a Michelson interferometer equipped with two light sources, a red LED and a HeNe laser, and the low-coherence and laser interference signals are measured separately. Both the phase and group refractive indices of air can be calculated from the measured signals. The method was experimentally verified by comparing the obtained refractive index values with two different techniques.
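For the laser channel, the phase-to-index conversion is a one-line calculation once the cell length is known; the double pass through the Michelson arm doubles the optical path. The cell length and phase value below are assumptions:

```python
import numpy as np

lam = 632.8e-9        # HeNe wavelength [m]
L = 0.10              # single-pass length of the evacuated cell chamber [m] (assumed)
dphi = 536.3          # measured air-vacuum phase difference [rad] (invented)

# The arm traverses the cell twice, so the optical path excess is 2*L*(n - 1)
n_air = 1.0 + dphi * lam / (2 * np.pi * 2 * L)
print(f"n_air ~ {n_air:.8f}")     # ~1.00027 for typical lab air
```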
Liu, Hsi-Ping; Boore, David M.; Joyner, William B.; Oppenheimer, David H.; Warrick, Richard E.; Zhang, Wenbo; Hamilton, John C.; Brown, Leo T.
2000-01-01
Shear-wave velocities (VS) are widely used for earthquake ground-motion site characterization. VS data are now largely obtained using borehole methods. Drilling holes, however, is expensive. Nonintrusive surface methods are inexpensive for obtaining VS information, but not many comparisons with direct borehole measurements have been published. Because different assumptions are used in the data interpretation of each surface method and public safety is involved in site characterization for engineering structures, it is important to validate the surface methods by additional comparisons with borehole measurements. We compare results obtained from a particular surface method (array measurement of surface waves associated with microtremor) with results obtained from borehole methods. Using a 10-element nested-triangular array of 100-m aperture, we measured surface-wave phase velocities at two California sites, Garner Valley near Hemet and Hollister Municipal Airport. The Garner Valley site is located at an ancient lake bed where water-saturated sediment overlies decomposed granite on top of granite bedrock. Our array was deployed at a location where seismic velocities had been determined to a depth of 500 m by borehole methods. At Hollister, where the near-surface sediment consists of clay, sand, and gravel, we determined phase velocities using an array located close to a 60-m-deep borehole where downhole velocity logs already exist. Because we want to assess the measurements uncomplicated by uncertainties introduced by the inversion process, we compare our phase-velocity results with the borehole VS depth profile by calculating fundamental-mode Rayleigh-wave phase velocities from an earth model constructed from the borehole data. For wavelengths less than ~2 times the array aperture at Garner Valley, phase-velocity results from array measurements agree with the calculated Rayleigh-wave velocities to better than 11%. Measurement errors become larger for wavelengths more than ~2 times the array aperture. At Hollister, the measured phase velocity at 3.9 Hz (near the upper edge of the microtremor frequency band) is within 20% of the calculated Rayleigh-wave velocity. Because shear-wave velocity is the predominant factor controlling Rayleigh-wave phase velocities, the comparisons suggest that this nonintrusive method can provide VS information adequate for ground-motion estimation.
NASA Technical Reports Server (NTRS)
Derrickson, J. H.; Dake, S.; Dong, B. L.; Eby, P. B.; Fountain, W. F.; Fuki, M.; Gregory, J. C.; Hayashi, T.; Iyono, A.; King, D. T.
1989-01-01
Recently, new calculations were made of the direct Coulomb pair cross section that rely less on arbitrary parameters. More accurate calculations of the cross section down to low pair energies were made. New measurements of the total direct electron pair yield, and of the energy and angular distributions of the electron pairs in emulsion, were made for O-16 at 60 and 200 GeV/amu and for S-32 at 200 GeV/amu; these give satisfactory agreement with the new calculations. These calculations and measurements are presented along with previous accelerator measurements of this effect made during the last 40 years. The microscope scanning criteria used to identify the direct electron pairs are described. Prospects for application of the pair method to cosmic ray energy measurements in the region 10^13 to 10^15 eV/amu are discussed.
Two-dimensional analytic weighting functions for limb scattering
NASA Astrophysics Data System (ADS)
Zawada, D. J.; Bourassa, A. E.; Degenstein, D. A.
2017-10-01
Through the inversion of limb scatter measurements it is possible to obtain vertical profiles of trace species in the atmosphere. Many of these inversion methods require what is often referred to as weighting functions, or derivatives of the radiance with respect to concentrations of trace species in the atmosphere. Several radiative transfer models have implemented analytic methods to calculate weighting functions, alleviating the computational burden of traditional numerical perturbation methods. Here we describe the implementation of analytic two-dimensional weighting functions, where derivatives are calculated relative to atmospheric constituents in a two-dimensional grid of altitude and angle along the line of sight direction, in the SASKTRAN-HR radiative transfer model. Two-dimensional weighting functions are required for two-dimensional inversions of limb scatter measurements. Examples are presented where the analytic two-dimensional weighting functions are calculated with an underlying one-dimensional atmosphere. It is shown that the analytic weighting functions are more accurate than ones calculated with a single scatter approximation, and are orders of magnitude faster than a typical perturbation method. Evidence is presented that weighting functions for stratospheric aerosols calculated under a single scatter approximation may not be suitable for use in retrieval algorithms under solar backscatter conditions.
Estimation of blade airloads from rotor blade bending moments
NASA Technical Reports Server (NTRS)
Bousman, William G.
1987-01-01
This paper presents a method for the estimation of blade airloads based on measurements of flap bending moments. In this procedure, the rotating-blade modes in a vacuum are calculated, and the airloads are expressed as an algebraic sum involving the mode shapes, modal amplitudes, mass distribution, and frequency properties. The method was validated by comparing the calculated airload distribution, obtained using ten modes and twenty measurement stations, with the original wind tunnel measurements. Good agreement between the predicted and measured airloads was found up to 0.90 R, but the agreement degraded toward the blade tip. The method is shown to be quite robust to the type of experimental problems that could be expected to occur in the testing of full-scale and model-scale rotors.
Zhao, Jing-Xin; Su, Xiu-Yun; Xiao, Ruo-Xiu; Zhao, Zhe; Zhang, Li-Hai; Zhang, Li-Cheng; Tang, Pei-Fu
2016-11-01
We established a mathematical method to precisely calculate the radiographic anteversion (RA) and radiographic inclination (RI) angles of the acetabular cup from anterior-posterior (AP) pelvic radiographs after total hip arthroplasty. Using Mathematica software, a mathematical model of an oblique cone was established to simulate how AP pelvic radiographs are obtained and to relate the two-dimensional and three-dimensional geometry of the opening circle of the cup. In this model, the vertex is the X-ray beam source, and the generatrix is the ellipse in the radiograph projected from the opening circle of the acetabular cup. Using this model, we established a series of mathematical formulas that reveal the differences between the true RA and RI cup angles and the measurement results obtained with traditional methods on AP pelvic radiographs, and that precisely calculate the RA and RI cup angles from post-operative AP pelvic radiographs. Statistical analysis indicated that traditional measurement methods should be used with caution when calculating the RA and RI cup angles from AP pelvic radiographs. The entire calculation process can be performed by an orthopedic surgeon with mathematical knowledge of basic matrix and vector equations. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
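For context, the traditional ellipse method that the paper cautions about treats the projected cup opening as an ellipse under parallel projection; the paper's oblique-cone model corrects the bias this assumption introduces on AP pelvic radiographs. A sketch of the traditional calculation, with invented ellipse measurements:

```python
import numpy as np

# Traditional parallel-projection method: the cup opening projects as an ellipse
# with long axis a and short axis b on the AP radiograph (hypothetical values).
a_axis, b_axis = 54.0, 22.0   # measured ellipse axes [mm]
long_axis_tilt = 47.0         # long-axis angle vs. the trans-teardrop line [deg]

RA = np.degrees(np.arcsin(b_axis / a_axis))  # radiographic anteversion
RI = long_axis_tilt                          # radiographic inclination
print(f"RA ~ {RA:.1f} deg, RI ~ {RI:.1f} deg")
```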
Development of a neural network technique for KSTAR Thomson scattering diagnostics.
Lee, Seung Hun; Lee, J H; Yamada, I; Park, Jae Sun
2016-11-01
Neural networks provide powerful approaches for dealing with nonlinear data and have been successfully applied to fusion plasma diagnostics and control systems. Controlling tokamak plasmas in real time requires measuring the plasma parameters in situ. However, the χ² method traditionally used in Thomson scattering diagnostics hampers real-time measurement due to the complexity of the calculations involved. In this study, we applied a neural network approach to Thomson scattering diagnostics in order to calculate the electron temperature, comparing the results to those obtained with the χ² method. The best results were obtained for 10³ training cycles and eight nodes in the hidden layer. Our neural network approach shows good agreement with the χ² method and performs the calculation twenty times faster.
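A toy version of the regression idea, with an eight-node hidden layer as in the abstract: synthetic five-channel spectral ratios are mapped to log-temperature. The signal model is invented and far simpler than real Thomson scattering spectra:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
# Toy training set: 5 fake spectral channels -> log10(Te), Te in keV (synthetic)
Te = rng.uniform(0.1, 5.0, 2000)
X = np.exp(-np.outer(1.0 / Te, np.linspace(0.2, 2.0, 5)))  # invented channel model
X += rng.normal(0, 0.01, X.shape)                          # measurement noise

# Eight hidden nodes, as reported above; may emit a convergence warning (a sketch)
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
net.fit(X, np.log10(Te))
Te_pred = 10 ** net.predict(X[:5])
print(np.round(Te_pred, 2), np.round(Te[:5], 2))
```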
NASA Technical Reports Server (NTRS)
Papazian, Peter B.; Perala, Rodney A.; Curry, John D.; Lankford, Alan B.; Keller, J. David
1988-01-01
Using three different current injection methods and a simple voltage probe, transfer impedances were measured for Solid Rocket Motor (SRM) joints, wire meshes, aluminum foil, Thorstrand, and a graphite composite motor case. In all cases, the surface current distribution for the particular current injection device was calculated analytically or by finite difference methods. The results of these calculations were used to generate a geometric factor, the ratio of total injected current to surface current density. The results were validated in several ways. For the wire mesh measurements, the results showed good agreement with calculated results for a 14 by 18 Al screen. The SRM joint impedances were independently verified. The filament-wound case measurement results were validated only to the extent that their curve shape agrees with the expected form of the transfer impedance for a homogeneous slab excited by a plane-wave source.
NASA Astrophysics Data System (ADS)
Baselt, Tobias; Popp, Tobias; Nelsen, Bryan; Lasagni, Andrés Fabián; Hartmann, Peter
2017-05-01
Endlessly single-mode fibers, which enable single-mode guidance over a wide spectral range, are indispensable in the field of fiber technology. A two-dimensional photonic crystal with a silica central core and a micrometer-spaced hexagonal array of air holes is an established way to achieve endlessly single-mode guidance. There are two possible ways to determine the dispersion: measurement and calculation. We calculate the group velocity dispersion (GVD) based on measurement of the fiber structure parameters, the hole diameter and the pitch of a presumed homogeneous hexagonal array, and compare the calculation with two methods of measuring the wavelength-dependent time delay. We measure the time delay in a three-hundred-meter test fiber with a homemade supercontinuum light source, a set of bandpass filters, and a fast detector, and compare the results with a white-light interferometric setup. To measure the dispersion of optical fibers with high accuracy, a time-frequency-domain setup based on a Mach-Zehnder interferometer is used. The experimental setup allows the determination of the wavelength-dependent differential group delay of light travelling through a thirty-centimeter piece of test fiber in the wavelength range from the VIS to the NIR. Determining the GVD using different methods enables an evaluation of the individual methods for characterizing the endlessly single-mode fiber.
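Given bandpass-filtered group delays through a test fibre, the dispersion parameter follows from the wavelength derivative of the delay per unit length, D(λ) = (1/L)·dτ/dλ; the delay curve below is fabricated but of a realistic magnitude:

```python
import numpy as np

L_fiber = 0.30                              # test-fibre length [m]
lam_nm = np.linspace(600, 1000, 9)          # bandpass-filter centre wavelengths [nm]
# Hypothetical relative group delays [s] through the fibre (fabricated, smooth)
tau = 1.5e-9 + 6e-12 * ((lam_nm - 800) / 200) ** 2

coeffs = np.polyfit(lam_nm, tau, 3)         # smooth fit of delay vs wavelength
D = np.polyval(np.polyder(coeffs), lam_nm) / L_fiber   # s/(nm*m)
print((D * 1e15).round(1), "ps/(nm km)")    # conventional dispersion units
```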
Optical Distance Measurement Device And Method Thereof
Bowers, Mark W.
2004-06-15
A system and method of efficiently obtaining distance measurements of a target by scanning the target. An optical beam is provided by a light source and modulated by a frequency source. The modulated optical beam is transmitted to an acousto-optical deflector capable of changing the angle of the optical beam in a predetermined manner to produce an output for scanning the target. In operation, reflected or diffused light from the target may be received by a detector and transmitted to a controller configured to calculate the distance to the target as well as the measurement uncertainty in calculating the distance to the target.
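A sketch of the phase-comparison range calculation that such intensity-modulated systems typically use (the patent text above does not spell out the formula); the modulation frequency and phase reading are invented:

```python
import numpy as np

c = 2.998e8               # speed of light [m/s]
f_mod = 50e6              # intensity-modulation frequency [Hz] (assumed)
dphi = 2.094              # measured phase shift of the return light [rad] (invented)

# The round trip adds a phase 2*pi*f*(2R/c); invert for range, modulo the
# ambiguity interval c/(2*f_mod), resolved in practice with a coarser frequency.
R = dphi * c / (4 * np.pi * f_mod)
print(f"R ~ {R:.3f} m, ambiguity interval = {c / (2 * f_mod):.2f} m")
```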
New approach to isometric transformations in oblique local coordinate systems of reference
NASA Astrophysics Data System (ADS)
Stępień, Grzegorz; Zalas, Ewa; Ziębka, Tomasz
2017-12-01
This research article describes a method of isometric transformation and determination of the exterior orientation of a measurement instrument. The method is based on designating a "virtual" translation of two relatively oblique orthogonal systems to a common point known in both systems. The relative angular orientation of the systems does not change as each system is moved along its own axes. The next step is the determination of the three rotation angles (e.g., Tait-Bryan or Euler angles), transformation of the system rotated by the calculated angles, and moving the system back to the initial position of the primary coordinate system. This approach eliminates the translations of the systems from the calculations and makes it possible to compute the mutual rotation angles of two orthogonal systems involved in the movement. The article covers laboratory calculations for simulated data. The accuracy of the results is 10⁻⁶ m (10⁻³ relative to the accuracy of the input data), which confirmed the correctness of the assumed calculation method. In the following step the method was verified under field conditions, where the accuracy of the method was 0.003 m. The proposed method enables measurements with an oblique and uncentered instrument, e.g., a total station set over an unknown point. This is the reason why the method was named by the authors Total Free Station (TFS). The method may also be used for isometric transformations for photogrammetric purposes.
Output calculation of electron therapy at extended SSD using an improved LBR method.
Alkhatib, Hassaan A; Gebreamlak, Wondesen T; Tedeschi, David J; Mihailidis, Dimitris; Wright, Ben W; Neglia, William J; Sobash, Philip T; Fontenot, Jonas D
2015-02-01
To calculate the output factor (OPF) of any irregularly shaped electron beam at extended SSD. Circular cutouts were prepared from 2.0 cm diameter to the maximum possible size for a 15 × 15 applicator cone. In addition, two irregular cutouts were prepared. For each cutout, the percentage depth dose (PDD) at the standard SSD and doses at different SSD values were measured using 6, 9, 12, and 16 MeV electron beams on a Varian 2100C LINAC, and the distance at which the central-axis electron fluence becomes independent of cutout size was determined. The measurements were repeated on an ELEKTA Synergy LINAC using a 14 × 14 applicator cone and electron beam energies of 6, 9, 12, and 15 MeV. The PDD measurements were performed using a scanning system and two diodes: one for the signal and the other a stationary reference outside the tank. The doses of the circular cutouts at different SSDs were measured using a PTW 0.125 cm(3) Semiflex ion chamber and EDR2 films. The electron fluence was measured using EDR2 films. For each circular cutout, the lateral buildup ratio (LBR) was calculated from the measured PDD curve using the open applicator cone as the reference field. The effective SSD (SSDeff) of each circular cutout was calculated from the measured doses at different SSD values. Using the LBR value and the radius of the circular cutout, the corresponding lateral spread parameter [σR(z)] was calculated. Taking the cutout-size dependence of σR(z) into account, the PDD curves of the irregularly shaped cutouts at the standard SSD were calculated. Using the calculated PDD curve of the irregularly shaped cutout along with the LBR and SSDeff values of the circular cutouts, the output factor of the irregularly shaped cutout at extended SSD was calculated. Finally, both the calculated PDD curves and the output factor values were compared with the measured values. The improved LBR method has been generalized to calculate the output factor of electron therapy at extended SSD. The percentage difference between the calculated and measured output factors of irregularly shaped cutouts in the clinically useful SSD region was within 2%. Similar results were obtained for all available electron energies on both the Varian 2100C and ELEKTA Synergy machines.
Comparison of methods for H*(10) calculation from measured LaBr3(Ce) detector spectra.
Vargas, A; Cornejo, N; Camp, A
2018-07-01
The Universitat Politecnica de Catalunya (UPC) and the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT) have evaluated methods based on stripping, conversion coefficients, and Maximum Likelihood Estimation using Expectation Maximization (ML-EM) for calculating H*(10) rates from photon pulse-height spectra acquired with a spectrometric LaBr3(Ce) (1.5″ × 1.5″) detector. There is good agreement between the results of the different H*(10) rate calculation methods using the spectra measured at the UPC secondary standard calibration laboratory in Barcelona. From the outdoor study at the ESMERALDA station in Madrid, it can be concluded that the analysed methods provide results quite similar to those obtained with the reference RSS ionization chamber. In addition, the spectrometric detectors can also facilitate radionuclide identification. Copyright © 2018 Elsevier Ltd. All rights reserved.
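After the pulse-height spectrum has been unfolded to a fluence spectrum, the conversion-coefficient method mentioned above reduces to a bin-wise weighted sum; the three-bin spectrum and the approximate ICRP-style coefficients below are illustrative only:

```python
import numpy as np

# Toy 3-bin photon spectrum: fluence-to-H*(10) conversion applied bin by bin
E = np.array([0.1, 0.662, 1.25])           # MeV bin centres
fluence = np.array([1.2e4, 3.5e3, 8.0e2])  # unfolded photon fluence per bin [1/cm^2]
h10 = np.array([0.51, 3.12, 5.19])         # approx. ICRP-style pSv*cm^2 coefficients

H10 = np.sum(fluence * h10)                # ambient dose equivalent [pSv]
print(f"H*(10) ~ {H10 / 1e3:.2f} nSv")
```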
Albuquerque, Maicon R.; Lopes, Mariana C.; de Paula, Jonas J.; Faria, Larissa O.; Pereira, Eveline T.; da Costa, Varley T.
2017-01-01
In order to understand the reasons that lead individuals to practice physical activity, researchers developed the Motives for Physical Activity Measure-Revised (MPAM-R) scale. In 2010, a translation of the MPAM-R to Portuguese and its validation were performed; however, the psychometric measures were not acceptable. In addition, factor scores in some sports psychology scales are calculated as the mean of the scores of the items of the factor. Nevertheless, it seems appropriate that items with higher factor loadings, extracted by Factor Analysis, have greater weight in the factor score, and items with lower factor loadings have less weight. The aims of the present study were to translate and validate a Portuguese version of the MPAM-R and to investigate the agreement between two methods used to calculate factor scores. Data were collected from three hundred volunteers who had been involved in physical activity programs for at least 6 months. Confirmatory Factor Analysis of the 30 items indicated that the version did not fit the model. After excluding four items, the final model with 26 items showed acceptable model-fit measures in Exploratory Factor Analysis and conceptually supported the five factors of the original proposal. When the two methods of calculating factor scores were compared, only the "Enjoyment" and "Appearance" factors showed agreement between the methods. Thus, the Portuguese version of the MPAM-R can be used in a Brazilian context, and the new proposal for calculating factor scores seems promising. PMID:28293203
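The two scoring rules compared in the study differ only in the weights applied to the items; a minimal sketch with hypothetical loadings and responses:

```python
import numpy as np

# One respondent's answers on a 4-item factor (1-7 Likert, hypothetical)
items = np.array([6, 5, 7, 4], dtype=float)
loadings = np.array([0.82, 0.74, 0.61, 0.45])   # EFA loadings (hypothetical)

simple_score = items.mean()                                  # conventional: plain mean
weighted_score = np.sum(loadings * items) / loadings.sum()   # loading-weighted mean
print(f"mean = {simple_score:.2f}, weighted = {weighted_score:.2f}")
```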
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raterman, G; Gauntt, D
2014-06-01
Purpose: To propose a method other than CTDI phantom measurements for routine CT dosimetry QA, consisting of taking a series of air exposure measurements and calculating a factor, based on DLP, for converting this exposure measurement to the protocol's associated head or body CTDI value. The data presented are the ratios of phantom DLP to air exposure for different scanners, as well as the error in the displayed CTDI. Methods: For each scanner, the CTDI is measured at all available tube voltages using both the head and body phantoms. Then, the exposure is measured using a pencil chamber in air at isocenter. A ratio of phantom DLP to exposure in air for a given protocol may be calculated and used for converting a simple air dose measurement to a head or body CTDI value. For our routine QA, the exposure in air for different collimations, mAs, and kVp is measured, and the displayed CTDI is recorded. The calculated ratio may therefore convert these exposures to CTDI values that can then be compared to the displayed CTDI for a large range of acquisition parameter combinations. Results: All scanners tend to have a ratio factor that increases slightly with kVp. Philips scanners appear to have less dependence on kVp, whereas GE scanners have a lower ratio at lower kVp. The use of the air exposure multiplied by the DLP conversion factor yielded CTDI values that were less than 10% different from the displayed CTDI on several scanners. Conclusion: This method may be used as a primary method for CT dosimetry QA. Because of the ease of measurement, a dosimetry metric specific to the scanner may be calculated for a wide variety of CT protocols, which could also be used to monitor the accuracy of the displayed CTDI values.
Analysis of a boron-carbide-drum-controlled critical reactor experiment
NASA Technical Reports Server (NTRS)
Mayo, W. T.
1972-01-01
In order to validate methods and cross sections used in the neutronic design of compact fast-spectrum reactors for generating electric power in space, an analysis of a boron-carbide-drum-controlled critical reactor was made. For this reactor the transport analysis gave generally satisfactory results. The calculated multiplication factor for the most detailed calculation was only 0.7-percent Delta k too high. Calculated reactivity worth of the control drums was $11.61 compared to measurements of $11.58 by the inverse kinetics methods and $11.98 by the inverse counting method. Calculated radial and axial power distributions were in good agreement with experiment.
Bacterial aerosol emission rates from municipal wastewater aeration tanks.
Sawyer, B; Elenbogen, G; Rao, K C; O'Brien, P; Zenz, D R; Lue-Hing, C
1993-01-01
In this report we describe the results of a study conducted to determine the rates of bacterial aerosol emission from the surfaces of the aeration tanks of the Metropolitan Water Reclamation District of Greater Chicago John E. Egan Water Reclamation Plant. This study was accomplished by conducting test runs in which Andersen six-stage viable samplers were used to collect bacterial aerosol samples inside a walled tower positioned above an aeration tank liquid surface at the John E. Egan Water Reclamation Plant. The samples were analyzed for standard plate counts (SPC), total coliforms (TC), fecal coliforms, and fecal streptococci. Two methods of calculation were used to estimate the bacterial emission rate. The first method was a conventional stack emission rate calculation method in which the measured air concentration of bacteria was multiplied by the air flow rate emanating from the aeration tanks. The second method was a more empirical method in which an attempt was made to measure all of the bacteria emanating from an isolated area (0.37 m2) of the aeration tank surface over time. The data from six test runs were used to determine bacterial emission rates by both calculation methods. As determined by the conventional calculation method, the average SPC emission rate was 1.61 SPC/m2/s (range, 0.66 to 2.65 SPC/m2/s). As determined by the empirical calculation method, the average SPC emission rate was 2.18 SPC/m2/s (range, 1.25 to 2.66 SPC/m2/s). For TC, the average emission rate was 0.20 TC/m2/s (range, 0.02 to 0.40 TC/m2/s) when the conventional calculation method was used and 0.27 TC/m2/s (range, 0.04 to 0.53 TC/m2/s) when the empirical calculation method was used.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:8250547
Calculation of the compounded uncertainty of 14C AMS measurements
NASA Astrophysics Data System (ADS)
Nadeau, Marie-Josée; Grootes, Pieter M.
2013-01-01
The correct method for calculating conventional 14C ages from carbon isotopic ratios was summarised 35 years ago by Stuiver and Polach (1977) and is now accepted as the only method to calculate 14C ages. There is, however, no consensus regarding the treatment of AMS data, mainly regarding the uncertainty of the final result. The estimation and treatment of machine background, process blank, and/or in situ contamination are not uniform between laboratories, leading to differences in 14C results, mainly for older ages. As Donahue (1987) and Currie (1994), among others, mentioned, some laboratories find it important to use the scatter of several measurements as the uncertainty, while others prefer to use Poisson statistics. The contribution of the scatter of the standards, the machine background, the process blank, and in situ contamination to the uncertainty of the final 14C result is also treated in different ways. In the early years of AMS, several laboratories found it important to describe their calculation process in detail; in recent years, this practice has declined. We present an overview of the calculation process for 14C AMS measurements, looking at calculation practices published from the beginning of AMS to the present.
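A sketch of one common calculation chain: blank subtraction, taking the larger of the Poisson and scatter uncertainties, and conversion to a conventional age with the 8033-yr Libby mean life. The numbers are invented, the blank correction is one simple form among several in use, and the error propagation is first-order only, glossing over exactly the laboratory-to-laboratory choices the paper discusses:

```python
import numpy as np

# Measured fraction modern with Poisson- and scatter-based uncertainties (invented)
F_m, u_pois, u_scat = 0.5130, 0.0021, 0.0028
u_m = max(u_pois, u_scat)            # one convention: use the larger of the two

F_b, u_b = 0.0045, 0.0015            # process blank and its uncertainty (invented)
F = (F_m - F_b) / (1.0 - F_b)        # simple blank-corrected fraction modern
u_F = np.hypot(u_m, u_b) / (1.0 - F_b)   # first-order propagation (approximate)

t = -8033.0 * np.log(F)              # conventional 14C age, Libby mean life
u_t = 8033.0 * u_F / F               # propagated age uncertainty
print(f"age = {t:.0f} +/- {u_t:.0f} 14C yr BP")
```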
Verification of Calculated Skin Doses in Postmastectomy Helical Tomotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ito, Shima; Parker, Brent C., E-mail: bcparker@marybird.com; Mary Bird Perkins Cancer Center, Baton Rouge, LA
2011-10-01
Purpose: To verify the accuracy of calculated skin doses in helical tomotherapy for postmastectomy radiation therapy (PMRT). Methods and Materials: In vivo thermoluminescent dosimeters (TLDs) were used to measure the skin dose at multiple points in each of 14 patients throughout the course of treatment on a TomoTherapy Hi-Art II system, for a total of 420 TLD measurements. Five patients were evaluated near the location of the mastectomy scar, whereas 9 patients were evaluated throughout the treatment volume. The measured dose at each location was compared with calculations from the treatment planning system. Results: The mean difference and standard error of the mean difference between measurement and calculation for the scar measurements was -1.8% ± 0.2% (standard deviation [SD], 4.3%; range, -11.1% to 10.6%). The mean difference and standard error of the mean difference between measurement and calculation for measurements throughout the treatment volume was -3.0% ± 0.4% (SD, 4.7%; range, -18.4% to 12.6%). The mean difference and standard error of the mean difference between measurement and calculation for all measurements was -2.1% ± 0.2% (SD, 4.5%; range, -18.4% to 12.6%). The mean difference between measured and calculated TLD doses was statistically significant at two standard deviations of the mean, but was not clinically significant (i.e., was <5%). However, 23% of the measured TLD doses differed from the calculated TLD doses by more than 5%. Conclusions: The mean of the measured TLD doses agreed with the TomoTherapy-calculated TLD doses within our clinical criterion of 5%.
Roy, Jean-Sébastien; Moffet, Hélène; Hébert, Luc J; St-Vincent, Guy; McFadyen, Bradford J
2007-01-01
Background: Abnormal scapular displacements during arm elevation have been observed in people with shoulder impingement syndrome. These abnormal scapular displacements were evaluated using different methods and instruments allowing a 3-dimensional representation of the scapular kinematics. The validity and the intrasession reliability have been shown for the majority of these methods in healthy people. However, the intersession reliability in healthy people and in people with impaired shoulders is not well documented, and this measurement property needs to be assessed before using such methods in longitudinal comparative studies. The objective of this study is to evaluate the intra- and intersession reliability of 3-dimensional scapular attitudes measured at different arm positions in healthy people, and to explore the same measurement properties in people with shoulder impingement syndrome, using the Optotrak Probing System. Methods: Three-dimensional scapular attitudes were measured twice (test and retest, separated by one week) on fifteen healthy subjects (mean age 37.3 years) and eight subjects with subacromial shoulder impingement syndrome (mean age 46.1 years) in three arm positions (arm at rest, 70° of humerothoracic flexion, and 90° of humerothoracic abduction) using the Optotrak Probing System. Two different methods of calculating 3-dimensional scapular attitudes were used: relative to the position of the scapula at rest, and relative to the trunk. The intraclass correlation coefficient (ICC) and the standard error of measurement (SEM) were used to estimate intra- and intersession reliability. Results: For both groups, the reliability of the three-dimensional scapular attitudes for elevation positions was very good during the same session (ICCs from 0.84 to 0.99; SEM from 0.6° to 1.9°) and good to very good between sessions (ICCs from 0.62 to 0.97; SEM from 1.2° to 4.2°) when using the method of calculation relative to the trunk. Higher levels of intersession reliability were found for the method of calculation relative to the trunk in anterior-posterior tilting at 70° of flexion, compared to the method of calculation relative to the scapula at rest. Conclusion: The estimation of three-dimensional scapular attitudes using the method of calculation relative to the trunk is reproducible in the three arm positions evaluated and can be used to document scapular behavior. PMID:17584933
Temperature-dependent infrared optical properties of 3C-, 4H- and 6H-SiC
NASA Astrophysics Data System (ADS)
Tong, Zhen; Liu, Linhua; Li, Liangsheng; Bao, Hua
2018-05-01
The temperature-dependent optical properties of cubic (3C) and hexagonal (4H and 6H) silicon carbide are investigated in the infrared range of 2-16 μm, both by experimental measurements and by numerical simulations. The temperature in the experimental measurements reaches up to 593 K, while the numerical method can predict the optical properties at higher temperatures. To investigate the temperature effect, the temperature-dependent damping parameter in the Lorentz model is calculated based on the anharmonic lattice dynamics method, in which the harmonic and anharmonic interatomic force constants are determined from first-principles calculations. The infrared phonon modes of silicon carbide are also determined from first-principles calculations. In this way, the Lorentz model is parameterized without any experimental fitting data while the temperature effect is taken into account. We find that increasing temperature induces a small reduction of the reflectivity in the range of 10-13 μm. More importantly, the results show that our first-principles calculations can effectively predict the infrared optical properties at high temperatures, which are not easy to obtain through experimental measurements.
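A minimal sketch of how a temperature-dependent damping parameter enters the reststrahlen reflectivity, using the factorized one-phonon Lorentz form with literature-like 3C-SiC parameters; the two damping values merely illustrate the trend of γ increasing with temperature:

```python
import numpy as np

# One-phonon Lorentz dielectric function for SiC with variable damping
eps_inf = 6.56                 # high-frequency permittivity (3C-SiC, literature-like)
w_TO, w_LO = 796.0, 973.0      # TO/LO phonon frequencies [cm^-1]

def reflectivity(w, gamma):
    """Normal-incidence reflectivity from eps(w) with damping gamma [cm^-1]."""
    eps = eps_inf * (w_LO**2 - w**2 - 1j * gamma * w) / (w_TO**2 - w**2 - 1j * gamma * w)
    n = np.sqrt(eps)
    return np.abs((n - 1) / (n + 1)) ** 2

w = np.linspace(625, 5000, 2000)           # 2-16 um expressed in wavenumbers
for gamma in (4.0, 12.0):                  # damping grows with temperature (illustrative)
    R = reflectivity(w, gamma)
    print(f"gamma={gamma:4.1f} cm^-1: max R = {R.max():.3f}")
```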
A Method of Calculating Motion Error in a Linear Motion Bearing Stage
Khim, Gyungho; Park, Chun Hong; Oh, Jeong Seok
2015-01-01
We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement. PMID:25705715
Multi-Scale Measures of Rugosity, Slope and Aspect from Benthic Stereo Image Reconstructions
Friedman, Ariell; Pizarro, Oscar; Williams, Stefan B.; Johnson-Roberson, Matthew
2012-01-01
This paper demonstrates how multi-scale measures of rugosity, slope and aspect can be derived from fine-scale bathymetric reconstructions created from geo-referenced stereo imagery. We generate three-dimensional reconstructions over large spatial scales using data collected by Autonomous Underwater Vehicles (AUVs), Remotely Operated Vehicles (ROVs), manned submersibles and diver-held imaging systems. We propose a new method for calculating rugosity in a Delaunay triangulated surface mesh by projecting areas onto the plane of best fit using Principal Component Analysis (PCA). Slope and aspect can be calculated with very little extra effort, and fitting a plane serves to decouple rugosity from slope. We compare the results of the virtual terrain complexity calculations with experimental results using conventional in-situ measurement methods. We show that performing calculations over a digital terrain reconstruction is more flexible, robust and easily repeatable. In addition, the method is non-contact and provides much less environmental impact compared to traditional survey techniques. For diver-based surveys, the time underwater needed to collect rugosity data is significantly reduced and, being a technique based on images, it is possible to use robotic platforms that can operate beyond diver depths. Measurements can be calculated exhaustively at multiple scales for surveys with tens of thousands of images covering thousands of square metres. The technique is demonstrated on data gathered by a diver-rig and an AUV, on small single-transect surveys and on a larger, dense survey that covers over . Stereo images provide 3D structure as well as visual appearance, which could potentially feed into automated classification techniques. Our multi-scale rugosity, slope and aspect measures have already been adopted in a number of marine science studies. This paper presents a detailed description of the method and thoroughly validates it against traditional in-situ measurements. PMID:23251370
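The core of the proposed rugosity measure is simple to state: triangulate the patch, fit a plane by PCA, and divide the true mesh area by the area projected onto that plane. The following is a minimal sketch under that reading of the method; the authors' mesh-handling pipeline is certainly more elaborate, and all names here are illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

def rugosity_pca(points):
    """Rugosity of a 3D point patch: surface area of the Delaunay-triangulated
    mesh divided by its area projected onto the PCA plane of best fit."""
    centred = points - points.mean(axis=0)
    # Plane of best fit: normal = direction of smallest variance (last SVD row).
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    basis, normal = vt[:2], vt[2]
    # Triangulate in the plane coordinates so the triangles are well formed.
    uv = centred @ basis.T
    tri = Delaunay(uv)
    area3d = area2d = 0.0
    for simplex in tri.simplices:
        a, b, c = centred[simplex]
        cross = np.cross(b - a, c - a)
        area3d += 0.5 * np.linalg.norm(cross)
        # Projected area = component of the triangle's area vector along the normal.
        area2d += 0.5 * abs(cross @ normal)
    return area3d / area2d

rng = np.random.default_rng(0)
pts = rng.random((500, 3))
pts[:, 2] *= 0.2          # a rough but roughly planar patch
print("rugosity ~", round(rugosity_pca(pts), 3))
```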
Automated Routines for Calculating Whole-Stream Metabolism: Theoretical Background and User's Guide
Bales, Jerad D.; Nardi, Mark R.
2007-01-01
In order to standardize methods and facilitate rapid calculation and archival of stream-metabolism variables, the Stream Metabolism Program was developed to calculate gross primary production, net ecosystem production, respiration, and selected other variables from continuous measurements of dissolved-oxygen concentration, water temperature, and other user-supplied information. Methods for calculating metabolism from continuous measurements of dissolved-oxygen concentration and water temperature are fairly well known, but a standard set of procedures and computation software for all aspects of the calculations were not available previously. The Stream Metabolism Program addresses this deficiency with a stand-alone executable computer program written in Visual Basic .NET, which runs in the Microsoft Windows environment. All equations and assumptions used in the development of the software are documented in this report. Detailed guidance on application of the software is presented, along with a summary of the data required to use the software. Data from either a single station or paired (upstream, downstream) stations can be used with the software to calculate metabolism variables.
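For orientation, a generic single-station diel-oxygen calculation of the kind such software automates is sketched below: the observed rate of change of dissolved oxygen is corrected for reaeration, respiration is taken from the night-time residual, and gross primary production is the daytime remainder. This is a textbook-style sketch with an assumed reaeration coefficient, not the Stream Metabolism Program's actual algorithm.

```python
import numpy as np

def diel_metabolism(do_mgL, do_sat_mgL, k_per_day, dt_days, is_night):
    """Generic single-station diel-oxygen metabolism sketch.

    Rates in mg O2 / L / day:
        dDO/dt = GPP - ER + k * (DOsat - DO)
    ER is taken as the mean night-time rate (no photosynthesis at night);
    GPP is the daytime residual after removing ER and reaeration."""
    ddo_dt = np.gradient(do_mgL, dt_days)              # observed rate of change
    reaeration = k_per_day * (do_sat_mgL - do_mgL)     # gas-exchange flux
    net = ddo_dt - reaeration                          # GPP - ER at each step
    er = -net[is_night].mean()                         # respiration (positive)
    gpp = (net[~is_night] + er).mean()                 # mean daytime production
    return gpp, er, gpp - er                           # GPP, ER, NEP

# Synthetic 24 h record at 15-minute intervals:
t = np.arange(0, 1, 1 / 96)
night = (t < 0.25) | (t > 0.75)
do_sat = np.full_like(t, 9.0)
do = 8.0 + 1.5 * np.where(night, 0.0, np.sin((t - 0.25) * 2 * np.pi))
print([round(x, 2) for x in diel_metabolism(do, do_sat, 5.0, t[1] - t[0], night)])
```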
NASA Astrophysics Data System (ADS)
Markovin, P. A.; Trepakov, V. A.; Tagantsev, A. K.; Deineka, A.; Andreev, D. A.
2016-01-01
The expressions for the spontaneous polar contribution δn_i^s to the principal values of the refractive index due to the quadratic electro-optic effect in ferroelectrics have been considered within the phenomenological approach taking into account the polarization fluctuations. A method has been proposed for calculating the magnitude and temperature dependence of the root-mean-square fluctuations of the polarization (short-range local polar order) P_sh = ⟨P_fl²⟩^{1/2} below the ferroelectric transition temperature T_c from temperature changes in the spontaneous polar contribution δn_i^s(T) if the average spontaneous polarization P_s = ⟨P⟩ characterizing the long-range order is determined from independent measurements (for example, from dielectric hysteresis loops). For the case of isotropic fluctuations, the proposed method has made it possible to calculate P_sh and P_s only from refractometric measurements. It has been shown that, upon interferometric measurements, the method developed in this work allows calculating P_sh and P_s directly from the measured temperature and electric-field changes in the relative optical path (the specific optical retardation) of the light.
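Assuming the usual quadratic electro-optic relation δn = -(1/2)·n₀³·g·⟨P²⟩ with ⟨P²⟩ = P_s² + P_sh² for isotropic fluctuations, the extraction of P_sh from refractometric data reduces to a one-line inversion. The sketch below is illustrative only; sign and normalization conventions for g vary, and the numbers are made up.

```python
import numpy as np

def p_sh_from_refraction(delta_n, n0, g, p_s):
    """Short-range polar order P_sh from the spontaneous polar contribution
    to the refractive index, assuming the quadratic electro-optic relation

        delta_n = -(1/2) * n0**3 * g * (P_s**2 + P_sh**2)

    (isotropic-fluctuation form; conventions for the sign of g vary).
    Units: delta_n dimensionless, g in m^4/C^2, P in C/m^2."""
    p2_total = -2.0 * delta_n / (n0**3 * g)   # P_s^2 + P_sh^2
    p2_fluct = p2_total - p_s**2
    if p2_fluct < 0:
        raise ValueError("inconsistent inputs: P_s^2 exceeds total polar response")
    return np.sqrt(p2_fluct)

# Illustrative numbers only (not the paper's data):
print(p_sh_from_refraction(delta_n=-4.0e-3, n0=2.4, g=0.02, p_s=0.15))
```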
NASA Astrophysics Data System (ADS)
Nakazawa, Haruna; Doi, Marika; Ogawa, Emiyu; Arai, Tsunenori
2018-02-01
To avoid the instability of optical coefficient measurements that rely on sliced tissue preparations, we proposed combining light intensity measurements through an optical fiber punctured into bulk tissue with a varying field of view (FOV) and ray-tracing calculations using a Monte Carlo method. The optical coefficients of myocardium, such as the absorption coefficient μa, the scattering coefficient μs, and the anisotropy parameter g, govern optical propagation in the myocardium. Because optical coefficients obtained from thin sliced tissue can be unstable, being affected by dehydration and by intracellular fluid effusion on the sample surface, a wide variety of coefficients has been reported beyond the individual optical differences of living samples. The proposed method, which combines the bulk-tissue experiment with ray-tracing calculation, was performed as follows: a 200 μm diameter high-NA silica fiber installed in a 21G needle was punctured up to the bottom of a myocardial bulk tissue over 3 cm in thickness to measure light intensity while changing the fiber-tip depth and FOV. We found that the measured attenuation coefficients decreased as the FOV increased. The ray-tracing calculation reproduced the same FOV dependence as the experimental result. We think our fiber-punctured measurement of bulk tissue with varying FOV, combined with the inverse Monte Carlo method, may be useful for obtaining optical coefficients while avoiding sample-preparation instabilities.
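Two ingredients of such a Monte Carlo ray trace are easy to show in isolation: exponential sampling of the free path from μt = μa + μs, and sampling of the scattering angle from the Henyey-Greenstein phase function with anisotropy g. The sketch below assumes HG scattering (a common choice for tissue; the abstract does not name the phase function) and illustrative myocardium-like coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_step(mu_a, mu_s):
    """Free path length (cm) from Beer-Lambert attenuation, mu_t = mu_a + mu_s."""
    return -np.log(rng.random()) / (mu_a + mu_s)

def sample_hg_cos_theta(g):
    """Cosine of the scattering angle from the Henyey-Greenstein phase function."""
    if abs(g) < 1e-6:
        return 2.0 * rng.random() - 1.0          # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - frac * frac) / (2.0 * g)

# Literature-style myocardium values (illustrative only):
mu_a, mu_s, g = 0.3, 200.0, 0.95                 # 1/cm, 1/cm, dimensionless
steps = [sample_step(mu_a, mu_s) for _ in range(100000)]
cosines = [sample_hg_cos_theta(g) for _ in range(100000)]
print("mean free path ~", round(np.mean(steps), 5), "cm (expect", round(1/(mu_a+mu_s), 5), ")")
print("mean cos(theta) ~", round(np.mean(cosines), 3), "(expect g =", g, ")")
```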
Experimental analysis and simulation calculation of the inductances of loosely coupled transformer
NASA Astrophysics Data System (ADS)
Kerui, Chen; Yang, Han; Yan, Zhang; Nannan, Gao; Ying, Pei; Hongbo, Li; Pei, Li; Liangfeng, Guo
2017-11-01
An iron-core wireless power transmission system is designed, and an experimental model of a loosely coupled transformer is built. The inductance parameters are measured with a 15 mm air gap on each side of the transformer. The feasibility of using the finite element method to calculate the coil inductance parameters of the loosely coupled transformer is analyzed. The system was modeled in ANSYS, the magnetic field was computed by the finite element method, and the inductance parameters were calculated from it. The finite-element inductance calculation for the loosely coupled transformer establishes the basis for accurate compensation of the capacitance of the wireless power transmission system.
Laitano, R F; Toni, M P; Pimpinella, M; Bovi, M
2002-07-21
The factor Kwall to correct for photon attenuation and scatter in the wall of ionization chambers for 60Co air-kerma measurement has been traditionally determined by a procedure based on a linear extrapolation of the chamber current to zero wall thickness. Monte Carlo calculations by Rogers and Bielajew (1990 Phys. Med. Biol. 35 1065-78) provided evidence, mostly for chambers of cylindrical and spherical geometry, of appreciable deviations between the calculated values of Kwall and those obtained by the traditional extrapolation procedure. In the present work an experimental method other than the traditional extrapolation procedure was used to determine the Kwall factor. In this method the dependence of the ionization current in a cylindrical chamber was analysed as a function of an effective wall thickness in place of the physical (radial) wall thickness traditionally considered in this type of measurement. To this end the chamber wall was ideally divided into distinct regions and for each region an effective thickness to which the chamber current correlates was determined. A Monte Carlo calculation of attenuation and scatter effects in the different regions of the chamber wall was also made to compare calculation to measurement results. The Kwall values experimentally determined in this work agree within 0.2% with the Monte Carlo calculation. The agreement between these independent methods and the appreciable deviation (up to about 1%) between the results of both these methods and those obtained by the traditional extrapolation procedure support the conclusion that the two independent methods providing comparable results are correct and the traditional extrapolation procedure is likely to be wrong. The numerical results of the present study refer to a cylindrical cavity chamber like that adopted as the Italian national air-kerma standard at INMRI-ENEA (Italy). The method used in this study applies, however, to any other chamber of the same type.
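The traditional extrapolation procedure that the paper calls into question is easy to state: measure the chamber current at several wall thicknesses, fit a straight line, and take Kwall as the ratio of the zero-thickness intercept to the current at the nominal wall. A minimal sketch with made-up currents:

```python
import numpy as np

# Traditional zero-wall-thickness extrapolation (the procedure under question):
thickness_mm = np.array([3.0, 4.0, 5.0, 6.0])      # total wall thickness
current_pA = np.array([100.0, 99.2, 98.4, 97.6])   # measured ionization current

slope, intercept = np.polyfit(thickness_mm, current_pA, 1)
i_zero = intercept                                  # extrapolated current at t = 0
i_nominal = np.polyval([slope, intercept], 3.0)     # current at the nominal wall
k_wall = i_zero / i_nominal
print(f"Kwall (linear extrapolation) = {k_wall:.4f}")
```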
Calculating regional tissue volume for hyperthermic isolated limb perfusion: Four methods compared.
Cecchin, D; Negri, A; Frigo, A C; Bui, F; Zucchetta, P; Bodanza, V; Gregianin, M; Campana, L G; Rossi, C R; Rastrelli, M
2016-12-01
Hyperthermic isolated limb perfusion (HILP) can be performed as an alternative to amputation for soft tissue sarcomas and melanomas of the extremities. Melphalan and tumor necrosis factor-alpha are used at a dosage that depends on the volume of the limb. Regional tissue volume is traditionally measured for the purposes of HILP using water displacement volumetry (WDV). Although this technique is considered the gold standard, it is time-consuming and complicated to implement, especially in obese and elderly patients. The aim of the present study was to compare the different methods described in the literature for calculating regional tissue volume in the HILP setting, and to validate an open source software. We reviewed the charts of 22 patients (11 males and 11 females) who had non-disseminated melanoma with in-transit metastases or sarcoma of the lower limb. We calculated the volume of the limb using four different methods: WDV, tape measurements, and segmentation of computed tomography images using the Osirix and Oncentra Masterplan software packages. The overall comparison provided a concordance correlation coefficient (CCC) of 0.92 for the calculations of whole limb volume. In particular, when Osirix was compared with Oncentra (validated for volume measures and used in radiotherapy), the concordance was near-perfect for the calculation of the whole limb volume (CCC = 0.99). With CT-based methods the user can choose a reliable plane for segmentation purposes. CT-based methods also provide the opportunity to separate the whole limb volume into defined tissue volumes (cortical bone, fat and water). Copyright © 2016 Elsevier Ltd. All rights reserved.
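Lin's concordance correlation coefficient used for these comparisons has a closed form, sketched below with made-up limb volumes (not the study's data):

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Illustrative whole-limb volumes (litres) from two methods:
wdv    = [9.8, 11.2, 8.9, 12.4, 10.5]
osirix = [9.6, 11.5, 9.1, 12.1, 10.8]
print(f"CCC = {concordance_ccc(wdv, osirix):.3f}")
```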
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klüter, Sebastian, E-mail: sebastian.klueter@med.uni-heidelberg.de; Schubert, Kai; Lissner, Steffen
Purpose: The dosimetric verification of treatment plans in helical tomotherapy is usually carried out via verification measurements. In this study, a method for independent dose calculation of tomotherapy treatment plans is presented that uses a conventional treatment planning system with a pencil kernel dose calculation algorithm to generate verification dose distributions based on patient CT data. Methods: A pencil beam algorithm that directly uses measured beam data was configured for dose calculation for a tomotherapy machine. Tomotherapy treatment plans were converted into a format readable by an in-house treatment planning system by assigning each projection to one static treatment field and shifting the calculation isocenter for each field in order to account for the couch movement. The modulation of the fluence for each projection is read out of the delivery sinogram, and with the kernel-based dose calculation, this information can directly be used for dose calculation without the need for decomposition of the sinogram. The sinogram values are only corrected for leaf output and leaf latency. Using the converted treatment plans, dose was recalculated with the independent treatment planning system. Multiple treatment plans ranging from simple static fields to real patient treatment plans were calculated using the new approach and either compared to actual measurements or to the 3D dose distribution calculated by the tomotherapy treatment planning system. In addition, dose-volume histograms were calculated for the patient plans. Results: Except for minor deviations at the maximum field size, the pencil beam dose calculation for static beams agreed with measurements in a water tank within 2%/2 mm. A mean deviation from point dose measurements in the cheese phantom of 0.89% ± 0.81% was found for unmodulated helical plans. A mean voxel-based deviation of −0.67% ± 1.11% for all voxels in the respective high dose region (dose values >80%), and a mean local voxel-based deviation of −2.41% ± 0.75% for all voxels with dose values >20% were found for 11 modulated plans in the cheese phantom. Averaged over nine patient plans, the deviations amounted to −0.14% ± 1.97% (voxels >80%) and −0.95% ± 2.27% (>20%, local deviations). For a lung case, mean voxel-based deviations of more than 4% were found, while for all other patient plans, all mean voxel-based deviations were within ±2.4%. Conclusions: The presented method is suitable for independent dose calculation for helical tomotherapy within the known limitations of the pencil beam algorithm. It can serve as verification of the primary dose calculation and thereby reduce the need for time-consuming measurements. By using the patient anatomy and generating full 3D dose data, and combined with measurements of additional machine parameters, it can substantially contribute to overall patient safety.
Comparative Study of the Volumetric Methods Calculation Using GNSS Measurements
NASA Astrophysics Data System (ADS)
Şmuleac, Adrian; Nemeş, Iacob; Alina Creţan, Ioana; Sorina Nemeş, Nicoleta; Şmuleac, Laura
2017-10-01
This paper aims to perform volumetric calculations for different mineral aggregates using different methods of analysis and to compare the results. For this comparative study, two licensed software packages were chosen, namely TopoLT 11.2 and Surfer 13. TopoLT is a program dedicated to the preparation of topographic and cadastral plans, offering 3D terrain models, level curves, calculation of cut and fill volumes, and georeferencing of images. Surfer 13, produced by Golden Software in 1983, is actively used in various fields such as agriculture, construction, geophysics, geotechnical engineering, GIS and water resources. It can also generate GRID terrain models, produce density maps using the isoline method, perform volumetric calculations and draw 3D maps, and it reads different file types, including SHP, DXF and XLSX. This paper presents a comparison of volumetric calculations made with TopoLT by two methods: one in which a 3D model is chosen both for the top surface and for the surface below it, and one in which a 3D terrain model is chosen for the bottom surface and another 3D model for the top surface. These two variants are compared with the volumetric calculations performed with Surfer 13 by generating a GRID terrain model. The topographic measurements were performed with Leica GPS 1200 Series equipment. Measurements were made using the Romanian position determination system ROMPOS, which ensures accurate positioning relative to the ETRS reference coordinates through the National Network of GNSS Permanent Stations. GPS data processing was performed with the Leica Geo Office Combined program. For the volumetric calculations, the GPS points are in the 1970 stereographic projection system, and altitudes are referenced to the 1975 Black Sea projection system.
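The GRID-based volume computation reduces to prism summation: each grid cell contributes its area times the height difference between the top and bottom surfaces. A minimal sketch with a synthetic mound (illustrative, not the surveyed aggregates):

```python
import numpy as np

def grid_volume(z_top, z_bottom, cell_size_m):
    """Volume between two gridded surfaces by prism summation:
    each grid cell contributes cell_area * (z_top - z_bottom)."""
    dz = np.asarray(z_top, float) - np.asarray(z_bottom, float)
    return float(np.sum(dz) * cell_size_m ** 2)

# Illustrative 1 m grid over a small stockpile (not survey data):
x, y = np.meshgrid(np.linspace(-10, 10, 21), np.linspace(-10, 10, 21))
bottom = np.zeros_like(x)                          # flat base surface
top = np.maximum(0.0, 5.0 - 0.05 * (x**2 + y**2))  # mound up to 5 m high
print(f"stockpile volume ~ {grid_volume(top, bottom, 1.0):.1f} m^3")
```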
Krüger, E L; Minella, F O; Matzarakis, A
2014-10-01
Correlations between outdoor thermal indices and the calculated or measured mean radiant temperature T(mrt) are in general of high importance because of the combined effect on human energy balance in outdoor spaces. The most accurate way to determine T(mrt) is by means of integral radiation measurements, i.e. measuring the short- and long-wave radiation from six directions using pyranometers and pyrgeometers, an expensive and not always easily available procedure. Some studies use globe thermometers combined with air temperature and wind speed sensors. An alternative way to determine T(mrt) is based on output from the RayMan model from measured data of incoming global radiation and morphological features of the monitoring site, in particular sky view factor (SVF) data. The purpose of this paper is to compare different methods of assessing the mean radiant temperature T(mrt) in terms of differences from a reference condition (T(mrt) calculated from field measurements) and of the resulting outdoor comfort levels expressed as PET and UTCI values. The T(mrt) obtained from field measurements is a combination of air temperature, wind speed and globe temperature data according to the forced ventilation formula of ISO 7726 for data collected in Glasgow, UK. Four different methods were used in the RayMan model for T(mrt) calculations: input data consisting exclusively of data measured at urban sites; urban data excluding solar radiation, with estimated SVF data and solar radiation data measured at a rural site; urban data excluding solar radiation, with SVF data for each site; and urban data excluding solar radiation and including solar radiation at the rural site, taking no account of SVF information. Results show that all methods overestimate T(mrt) when compared to the ISO calculations. Correlations were found to be significant for the first method and lower for the other three. Results in terms of comfort (PET, UTCI) suggest that reasonable estimates could be made based on global radiation data measured at the urban site or, as a surrogate for missing solar radiation or globe temperature data at the urban area, on global radiation data measured at a rural location.
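The reference combination mentioned above, the forced-ventilation formula of ISO 7726 for a standard 150 mm black globe, is compact enough to state directly; the sketch below applies it to illustrative readings.

```python
def t_mrt_forced(t_globe_c, t_air_c, v_air_ms):
    """Mean radiant temperature (deg C) from globe temperature using the
    ISO 7726 forced-convection formula for a standard 150 mm black globe:

        Tmrt = [(Tg + 273)^4 + 2.5e8 * v^0.6 * (Tg - Ta)]^0.25 - 273
    """
    tg_k4 = (t_globe_c + 273.0) ** 4
    return (tg_k4 + 2.5e8 * v_air_ms ** 0.6 * (t_globe_c - t_air_c)) ** 0.25 - 273.0

# Example reading: Tg = 24.5 C, Ta = 21.0 C, v = 1.5 m/s
print(f"Tmrt = {t_mrt_forced(24.5, 21.0, 1.5):.1f} C")
```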
Zhao, Wenguang; Qualls, Russell J; Berliner, Pedro R
2008-11-01
A two-concentric-loop iterative (TCLI) method is proposed to estimate the displacement height and roughness length for momentum and sensible heat by using the measurements of wind speed and air temperature at two heights, sensible heat flux above the crop canopy, and the surface temperature of the canopy. This method is deduced theoretically from existing formulae and equations. The main advantage of this method is that data measured not only under near neutral conditions, but also under unstable and slightly stable conditions can be used to calculate the scaling parameters. Based on the data measured above an Acacia Saligna agroforestry system, the displacement height (d0) calculated by the TCLI method and by a conventional method are compared. Under strict neutral conditions, the two methods give almost the same results. Under unstable conditions, d0 values calculated by the conventional method are systematically lower than those calculated by the TCLI method, with the latter exhibiting only slightly lower values than those seen under strictly neutral conditions. Computation of the average values of the scaling parameters for the agroforestry system showed that the displacement height and roughness length for momentum are 68% and 9.4% of the average height of the tree canopy, respectively, which are similar to percentages found in the literature. The calculated roughness length for sensible heat is 6.4% of the average height of the tree canopy, a little higher than the percentages documented in the literature. When wind direction was aligned within 5 degrees of the row direction of the trees, the average displacement height calculated was about 0.6 m lower than when the wind blew across the row direction. This difference was statistically significant at the 0.0005 probability level. This implies that when the wind blows parallel to the row direction, the logarithmic profile of wind speed is shifted lower to the ground, so that, at a given height, the wind speeds are faster than when the wind blows perpendicular to the row direction.
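For the neutral limit, the underlying log-profile algebra is short: given wind speeds at two heights and a displacement height d, the friction velocity and roughness length follow in closed form. The sketch below shows only this neutral special case with made-up numbers; the TCLI method of the paper additionally applies stability corrections and iterates over d using the sensible heat flux and surface temperature.

```python
import numpy as np

def neutral_z0_ustar(u1, u2, z1, z2, d, k=0.4):
    """Friction velocity and roughness length from wind speeds u1, u2 at two
    heights under NEUTRAL stratification, for a given displacement height d:

        u(z) = (u*/k) * ln((z - d) / z0)
    """
    u_star = k * (u2 - u1) / np.log((z2 - d) / (z1 - d))
    z0 = (z1 - d) * np.exp(-k * u1 / u_star)
    return u_star, z0

# Illustrative numbers for a ~4 m tall canopy (d ~ 0.68 * h per the paper):
u_star, z0 = neutral_z0_ustar(u1=2.1, u2=2.6, z1=5.0, z2=8.0, d=2.7)
print(f"u* ~ {u_star:.2f} m/s, z0 ~ {z0:.3f} m")
```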
Theoretical and experimental NMR study of protopine hydrochloride isomers.
Tousek, Jaromír; Malináková, Katerina; Dostál, Jirí; Marek, Radek
2005-07-01
The 1H and 13C NMR chemical shifts of cis- and trans-protopinium salts were measured and calculated. The calculations of the chemical shifts consisted of conformational analysis, geometry optimization (RHF/6-31G** method) and shielding constants calculations (B3LYP/6-31G** method). Based on the results of the quantum chemical calculations, two sets of experimental chemical shifts were assigned to the particular isomers. According to the experimental results, the trans-isomer is more stable and its population is approximately 68%. Copyright 2005 John Wiley & Sons, Ltd
NASA Astrophysics Data System (ADS)
Wang, Hongliang; Liu, Baohua; Ding, Zhongjun; Wang, Xiangxin
2017-02-01
Absorption-based optical sensors have been developed for the determination of water pH. In this paper, based on the preparation of a transparent sol-gel thin film with a phenol red (PR) indicator, several calculation methods, including simple linear regression analysis, quadratic regression analysis and dual-wavelength absorbance ratio analysis, were used to calculate water pH. Results of MSSRR show that dual-wavelength absorbance ratio analysis can improve the calculation accuracy of water pH in long-term measurement.
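A dual-wavelength absorbance ratio calculation can be sketched as a ratiometric Henderson-Hasselbalch calibration: the ratio of absorbances at the base- and acid-form peaks of phenol red cancels drifts common to both wavelengths. The calibration constants below (pKa and the limiting ratios) are made up for illustration; the paper's calibration will differ.

```python
import numpy as np

def ph_from_ratio(a_base_peak, a_acid_peak, pka, r_acid, r_base):
    """pH from the dual-wavelength absorbance ratio R = A_base / A_acid using a
    ratiometric Henderson-Hasselbalch calibration:

        pH = pKa + log10((R - R_acid) / (R_base - R))

    R_acid and R_base are the ratios for the fully protonated and fully
    deprotonated indicator (calibration constants; values here are made up)."""
    r = a_base_peak / a_acid_peak
    return pka + np.log10((r - r_acid) / (r_base - r))

# Phenol red absorbs near 430 nm (acid form) and 560 nm (base form).
print(f"pH = {ph_from_ratio(0.42, 0.35, pka=7.6, r_acid=0.05, r_base=3.0):.2f}")
```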
NASA Astrophysics Data System (ADS)
Ya, Min; Dai, Fulong; Xie, Huimin; Lü, Jian
2003-12-01
The hole-drilling method is one of the most convenient methods for engineering residual stress measurement. Combined with moiré interferometry to obtain the relaxed whole-field displacement data, the hole-drilling technique can be used to solve non-uniform residual stress problems, both in-depth and in-plane. In this paper, the theory of moiré interferometry and incremental hole-drilling (MIIHD) for non-uniform residual stress measurement is introduced. A three-dimensional finite element model is constructed in ABAQUS to obtain the coefficients for the residual stress calculation. An experimental system including real-time measurement, automatic data processing and residual stress calculation is established. Two applications are presented: non-uniform in-depth residual stress of a surface nanocrystalline material and non-uniform in-plane residual stress of a friction stir weld. Experimental results show that MIIHD is effective for both non-uniform in-depth and in-plane residual stress measurements.
Evaluating measurement uncertainty in fluid phase equilibrium calculations
NASA Astrophysics Data System (ADS)
van der Veen, Adriaan M. H.
2018-04-01
The evaluation of measurement uncertainty in accordance with the ‘Guide to the expression of uncertainty in measurement’ (GUM) has not yet become widespread in physical chemistry. With only the law of the propagation of uncertainty from the GUM, many of these uncertainty evaluations would be cumbersome, as models are often non-linear and require iterative calculations. The methods from GUM supplements 1 and 2 enable the propagation of uncertainties under most circumstances. Experimental data in physical chemistry are used, for example, to derive reference property data and support trade—all applications where measurement uncertainty plays an important role. This paper aims to outline how the methods for evaluating and propagating uncertainty can be applied to some specific cases with a wide impact: deriving reference data from vapour pressure data, a flash calculation, and the use of an equation-of-state to predict the properties of both phases in a vapour-liquid equilibrium. The three uncertainty evaluations demonstrate that the methods of GUM and its supplements are a versatile toolbox that enable us to evaluate the measurement uncertainty of physical chemical measurements, including the derivation of reference data, such as the equilibrium thermodynamical properties of fluids.
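The GUM Supplement 1 approach amounts to drawing from the input distributions and pushing every draw through the (possibly nonlinear, iterative) model. A minimal sketch for a vapour-pressure model, using the Antoine equation with made-up coefficients and uncertainties:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# Antoine equation: log10(p/kPa) = A - B / (T/K + C).  Coefficients and
# standard uncertainties below are illustrative, not reference data.
A = rng.normal(7.00, 0.02, N)
B = rng.normal(1650.0, 5.0, N)
C = rng.normal(-46.0, 0.5, N)
T = rng.normal(350.0, 0.05, N)          # temperature measurement, K

p = 10.0 ** (A - B / (T + C))           # propagate every draw through the model

mean, u = p.mean(), p.std(ddof=1)
lo, hi = np.percentile(p, [2.5, 97.5])  # 95 % coverage interval (GUM-S1 style)
print(f"p = {mean:.2f} kPa, u(p) = {u:.2f} kPa, 95 % CI [{lo:.2f}, {hi:.2f}]")
```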
Calawerts, William M; Lin, Liyu; Sprott, JC; Jiang, Jack J
2016-01-01
Objective/Hypothesis The purpose of this paper is to introduce rate of divergence as an objective measure to differentiate between the four voice types based on the amount of disorder present in a signal. We hypothesized that rate of divergence would provide an objective measure that can quantify all four voice types. Study Design 150 acoustic voice recordings were randomly selected and analyzed using traditional perturbation, nonlinear, and rate of divergence analysis methods. Methods We developed a new parameter, rate of divergence, which uses a modified version of Wolf's algorithm for calculating Lyapunov exponents of a system. The outcome of this calculation is not a Lyapunov exponent, but rather a description of the divergence of two nearby data points for the next three points in the time series, followed in three time delayed embedding dimensions. This measure was compared to currently existing perturbation and nonlinear dynamic methods of distinguishing between voice signals. Results There was a direct relationship between voice type and rate of divergence. This calculation is especially effective at differentiating between type 3 and type 4 voices (p<0.001), and is equally effective at differentiating type 1, type 2, and type 3 signals as currently existing methods. Conclusion The rate of divergence calculation introduced is an objective measure that can be used to distinguish between all four voice types based on amount of disorder present, leading to quicker and more accurate voice typing as well as an improved understanding of the nonlinear dynamics involved in phonation. PMID:26920858
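Reading the description literally, the calculation embeds the signal in three time-delayed dimensions, pairs each point with a close neighbour, and averages the logarithmic growth of their separation over the next three samples. The sketch below implements that reading; the published algorithm may differ in detail (neighbour selection, normalization).

```python
import numpy as np

def rate_of_divergence(x, delay=1, horizon=3):
    """Average log-rate at which initially close embedded points separate
    over the next `horizon` samples (3-dimensional delay embedding).
    A sketch of the idea only; the published algorithm may differ."""
    x = np.asarray(x, float)
    m = len(x) - 2 * delay
    emb = np.column_stack([x[:m], x[delay:m + delay], x[2 * delay:m + 2 * delay]])
    usable = m - horizon
    rates = []
    for i in range(usable):
        d0 = np.linalg.norm(emb[:usable] - emb[i], axis=1)
        d0[max(0, i - delay):i + delay + 1] = np.inf   # exclude self/near-in-time
        j = int(np.argmin(d0))
        num = np.linalg.norm(emb[i + horizon] - emb[j + horizon])
        den = d0[j]
        if den > 0 and num > 0:
            rates.append(np.log(num / den) / horizon)
    return float(np.mean(rates))

# More disordered signals diverge faster: a sine with and without jitter.
t = np.linspace(0, 20 * np.pi, 2000)
rng = np.random.default_rng(0)
print("clean :", round(rate_of_divergence(np.sin(t)), 3))
print("noisy :", round(rate_of_divergence(np.sin(t) + 0.2 * rng.standard_normal(t.size)), 3))
```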
Development of Uav Photogrammetry Method by Using Small Number of Vertical Images
NASA Astrophysics Data System (ADS)
Kunii, Y.
2018-05-01
This new and efficient photogrammetric method for unmanned aerial vehicles (UAVs) requires only a few images taken in the vertical direction at different altitudes. The method includes an original relative orientation procedure which can be applied to images captured along the vertical direction. The final orientation determines the absolute orientation for every parameter and is used for calculating the 3D coordinates of every measurement point. The measurement accuracy was checked at the UAV test site of the Japan Society for Photogrammetry and Remote Sensing. Five vertical images were taken at 70 to 90 m altitude. The 3D coordinates of the measurement points were calculated. The plane and height accuracies were ±0.093 m and ±0.166 m, respectively. These values are of higher accuracy than the results of the traditional photogrammetric method. The proposed method can measure 3D positions efficiently and would be a useful tool for construction and disaster sites and for other field surveying purposes.
A flexible new method for 3D measurement based on multi-view image sequences
NASA Astrophysics Data System (ADS)
Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu
2016-11-01
Three-dimensional measurement is the foundation of reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry theory. First, feature points are detected and matched with an improved SIFT algorithm; the Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, which makes the matching robust to weakly textured images. Then a new three-principle filter for the essential matrix calculation is designed, and the essential matrix is calculated using an improved a contrario RANSAC filter method. A single-view point cloud is constructed accurately from two view images; after this, the overlapping features are used to eliminate the accumulated errors introduced by newly added view images, which improves the precision of the camera positions. Finally, the method is verified in a dental restoration CAD/CAM application; experimental results show that the proposed method is fast, accurate and flexible for 3D tooth measurement.
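Hellinger-style comparison of SIFT descriptors is commonly implemented by L1-normalizing each descriptor and taking element-wise square roots, after which Euclidean distance corresponds to the Hellinger kernel (the "RootSIFT" trick). A sketch of that step, assuming this is what the improved matching amounts to:

```python
import numpy as np

def hellinger_transform(desc):
    """Map SIFT-like descriptors so that Euclidean distance corresponds to the
    Hellinger kernel: L1-normalise, then take element-wise square roots.
    desc: (n, 128) non-negative array."""
    desc = np.asarray(desc, float)
    desc = desc / np.maximum(desc.sum(axis=1, keepdims=True), 1e-12)
    return np.sqrt(desc)

def match_nn(d1, d2):
    """Nearest-neighbour matches from d1 to d2 after the Hellinger transform."""
    h1, h2 = hellinger_transform(d1), hellinger_transform(d2)
    dists = np.linalg.norm(h1[:, None, :] - h2[None, :, :], axis=2)
    return dists.argmin(axis=1)

rng = np.random.default_rng(3)
a = rng.random((5, 128))
b = a + 0.01 * rng.random((5, 128))     # slightly perturbed copies
print(match_nn(a, b))                   # expect [0 1 2 3 4]
```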
Ground-penetrating radar methods used in surface-water discharge measurements
Haeni, F.P.; Buursink, Marc L.; Costa, John E.; Melcher, Nick B.; Cheng, Ralph T.; Plant, William J.
2000-01-01
In 1999, an experiment was conducted to see if a combination of complementary radar methods could be used to calculate the discharge of a river without having any of the measuring equipment in the water. The cross-sectional area of the 183-meter wide Skagit River in Washington State was measured using a ground-penetrating radar (GPR) system with a single 100-MHz antenna. A van-mounted, side-looking pulsed-Doppler radar system was used to collect water-surface velocity data across the same section of the river. The combined radar data sets were used to calculate the river discharge and the results compared closely to the discharge measurement made by using the standard in-water measurement techniques.
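Combining the two radar products is conceptually a mid-section discharge computation: GPR supplies the depths, the Doppler radar supplies surface velocities, and a conventional factor (about 0.85) converts surface velocity to depth-averaged velocity. The sketch below uses made-up cross-section data and that assumed conversion factor, not the study's actual data reduction.

```python
import numpy as np

def radar_discharge(station_m, depth_m, surf_vel_ms, k=0.85):
    """Mid-section discharge estimate from non-contact radar data:
    GPR-derived depths and Doppler surface velocities at each station.
    k converts surface to depth-averaged velocity (a conventional ~0.85
    factor; the study's actual conversion may differ)."""
    station = np.asarray(station_m, float)
    widths = np.gradient(station)                 # subsection widths
    q = widths * np.asarray(depth_m) * k * np.asarray(surf_vel_ms)
    return float(q.sum())

# Illustrative cross-section (not the Skagit River data):
x = np.linspace(0, 183, 12)                       # stations across 183 m
d = 4.0 * np.sin(np.pi * x / 183)                 # depth profile, m
v = 1.2 * np.sin(np.pi * x / 183)                 # surface velocity, m/s
print(f"Q ~ {radar_discharge(x, d, v):.0f} m^3/s")
```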
Attempts at estimating mixed venous carbon dioxide tension by the single-breath method.
Ohta, H; Takatani, O; Matsuoka, T
1989-01-01
The single-breath method was originally proposed by Kim et al. [1] for estimating the blood carbon dioxide tension and cardiac output. Its reliability has not been proven. The present study was undertaken, using dogs, to compare the mixed venous carbon dioxide tension (PVCO2) calculated by the single-breath method with the PVCO2 measured in mixed venous blood, and to evaluate the influence of variations in the exhalation duration and the volume of expired air usually discarded from computations as the deadspace. Among the exhalation durations of 15, 30 and 45 s tested, the 15 s duration was found to be too short to obtain an analyzable O2-CO2 curve, but at either 30 or 45 s, the calculated values of PVCO2 were comparable to the measured PVCO2. A significant agreement between calculated and measured PVCO2 was obtained when the expired gas with PCO2 less than 22 Torr was considered as deadspace gas.
Finding the most accurate method to measure head circumference for fetal weight estimation.
Schmidt, Ulrike; Temerinac, Dunja; Bildstein, Katharina; Tuschy, Benjamin; Mayer, Jade; Sütterlin, Marc; Siemer, Jörn; Kehl, Sven
2014-07-01
Accurate measurement of fetal head biometry is important for fetal weight estimation (FWE) and is therefore an important prognostic parameter for neonatal morbidity and mortality and a valuable tool for determining the further obstetric management. Measurement of the head circumference (HC) in particular is employed in many commonly used weight equations. The aim of the present study was to find the most accurate method to measure head circumference for fetal weight estimation. This prospective study included 481 term pregnancies. Inclusion criteria were a singleton pregnancy and ultrasound examination with complete fetal biometric parameters within 3 days of delivery, and an absence of structural or chromosomal malformations. Different methods were used for ultrasound measurement of the HC (ellipse-traced, ellipse-calculated, and circle-calculated). As a reference method, HC was also determined using a measuring tape immediately after birth. FWE was carried out with Hadlock formulas, including either HC or biparietal diameter (BPD), and differences were compared using percentage error (PE), absolute percentage error (APE), limits of agreement (LOA), and cumulative distribution. The ellipse-traced method showed the best results for FWE among all of the ultrasound methods assessed. It had the lowest median APE and the narrowest LOA. With regard to the cumulative distribution, it included the largest number of cases at a discrepancy level of ±10%. The accuracy of BPD was similar to that of the ellipse-traced method when it was used instead of HC for weight estimation. Differences between the three techniques for calculating HC were small but significant. For clinical use, the ellipse-traced method should be recommended. However, when BPD is used instead of HC for FWE, the accuracy is similar to that of the ellipse-traced method. The BPD might therefore be a good alternative to head measurements in estimating fetal weight. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
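The circle- and ellipse-calculated HC variants are standard geometry: π·BPD for the circle and, for the ellipse built from the biparietal and occipitofrontal diameters, Ramanujan's perimeter approximation. Together with the absolute percentage error used to score FWE, a minimal sketch (illustrative inputs):

```python
import numpy as np

def hc_circle(bpd_mm):
    """Circle-calculated head circumference from the biparietal diameter."""
    return np.pi * bpd_mm

def hc_ellipse(bpd_mm, ofd_mm):
    """Ellipse-calculated HC from biparietal and occipitofrontal diameters,
    using Ramanujan's approximation for the ellipse perimeter."""
    a, b = bpd_mm / 2.0, ofd_mm / 2.0
    return np.pi * (3 * (a + b) - np.sqrt((3 * a + b) * (a + 3 * b)))

def abs_percentage_error(estimated, actual):
    """APE used to compare estimated fetal weight with actual birth weight."""
    return 100.0 * abs(estimated - actual) / actual

print(f"HC (circle)  = {hc_circle(95.0):.0f} mm")
print(f"HC (ellipse) = {hc_ellipse(95.0, 115.0):.0f} mm")
print(f"APE = {abs_percentage_error(3350.0, 3500.0):.1f} %")
```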
Computation of entropy and Lyapunov exponent by a shift transform.
Matsuoka, Chihiro; Hiraide, Koichi
2015-10-01
We present a novel computational method to estimate the topological entropy and Lyapunov exponent of nonlinear maps using a shift transform. Unlike the computation of periodic orbits or the symbolic dynamical approach via the Markov partition, the method presented here does not require any special computational or mathematical techniques to calculate these quantities. In spite of its simplicity, our method can accurately capture not only the chaotic region but also the non-chaotic region (the window region), which is physically important but has (Lebesgue) measure zero and is usually hard to calculate or observe. Furthermore, it is shown that the Kolmogorov-Sinai entropy of the Sinai-Ruelle-Bowen measure (the physical measure) coincides with the topological entropy.
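The abstract does not spell out the shift-transform algorithm, so as a point of reference the sketch below computes the Lyapunov exponent of the logistic map by the standard derivative method (the time average of log|f'(x)|), against which such a method would be benchmarked. At r = 4 the exact value is ln 2, and inside the period-3 window it is negative.

```python
import numpy as np

def lyapunov_logistic(r, x0=0.2, n=100_000, burn=1_000):
    """Lyapunov exponent of the logistic map x -> r*x*(1-x) as the time
    average of log|f'(x)| (standard derivative method; shown here as a
    reference, not the shift-transform technique of the paper)."""
    x, total = x0, 0.0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1.0 - x)
    for _ in range(n):
        x = r * x * (1.0 - x)
        total += np.log(abs(r * (1.0 - 2.0 * x)))
    return total / n

print(f"lambda(r=4.00) ~ {lyapunov_logistic(4.0):.4f} (exact: ln 2 = {np.log(2):.4f})")
print(f"lambda(r=3.83) ~ {lyapunov_logistic(3.83):.4f} (period-3 window: negative)")
```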
A study of methods to estimate debris flow velocity
Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.
2008-01-01
Debris flow velocities are commonly back-calculated from superelevation events which require subjective estimates of radii of curvature of bends in the debris flow channel or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated difficulties associated with the use of these conventional velocity estimation methods. Radii of curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among different investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to the channel slope and the flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities. © 2008 Springer-Verlag.
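The superelevation back-calculation the paper examines is the forced-vortex relation v = sqrt(g·Rc·Δh/(k·b)). The sketch below shows how strongly the estimate depends on the subjectively chosen radius of curvature Rc; k is an empirical correction factor, left at 1 here, and the inputs are made up.

```python
import numpy as np

def superelevation_velocity(radius_m, delta_h_m, width_m, k=1.0):
    """Debris-flow velocity back-calculated from a superelevation event via
    the forced-vortex equation v = sqrt(g * Rc * dh / (k * b)), where Rc is
    the bend's radius of curvature, dh the banking height difference across
    the flow, b the flow width and k an empirical correction factor."""
    g = 9.81
    return np.sqrt(g * radius_m * delta_h_m / (k * width_m))

# Radius estimates are subjective: show how the velocity shifts with Rc.
for rc in (20.0, 35.0, 50.0):
    v = superelevation_velocity(rc, delta_h_m=1.2, width_m=8.0)
    print(f"Rc = {rc:4.0f} m -> v ~ {v:.1f} m/s")
```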
Calculation of out-of-field dose distribution in carbon-ion radiotherapy by Monte Carlo simulation.
Yonai, Shunsuke; Matsufuji, Naruhiro; Namba, Masao
2012-08-01
Recent radiotherapy technologies including carbon-ion radiotherapy can improve the dose concentration in the target volume, thereby not only reducing side effects in organs at risk but also the secondary cancer risk within or near the irradiation field. However, secondary cancer risk in the low-dose region is considered to be non-negligible, especially for younger patients. To achieve a dose estimation of the whole body of each patient receiving carbon-ion radiotherapy, which is essential for risk assessment and epidemiological studies, Monte Carlo simulation plays an important role because the treatment planning system can provide dose distribution only in/near the irradiation field and the measured data are limited. However, validation of Monte Carlo simulations is necessary. The primary purpose of this study was to establish a calculation method using the Monte Carlo code to estimate the dose and quality factor in the body and to validate the proposed method by comparison with experimental data. Furthermore, we show the distributions of dose equivalent in a phantom and identify the partial contribution of each radiation type. We proposed a calculation method based on a Monte Carlo simulation using the PHITS code to estimate absorbed dose, dose equivalent, and dose-averaged quality factor by using the Q(L)-L relationship based on the ICRP 60 recommendation. The values obtained by this method in modeling the passive beam line at the Heavy-Ion Medical Accelerator in Chiba were compared with our previously measured data. It was shown that our calculation model can estimate the measured value within a factor of 2, which included not only the uncertainty of this calculation method but also those regarding the assumptions of the geometrical modeling and the PHITS code. Also, we showed the differences in the doses and the partial contributions of each radiation type between passive and active carbon-ion beams using this calculation method. These results indicated that it is essentially important to include the dose by secondary neutrons in the assessment of the secondary cancer risk of patients receiving carbon-ion radiotherapy with active as well as passive beams. We established a calculation method with a Monte Carlo simulation to estimate the distribution of dose equivalent in the body as a first step toward routine risk assessment and an epidemiological study of carbon-ion radiotherapy at NIRS. This method has the advantage of being verifiable by the measurement.
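The Q(L)-L relationship of ICRP Publication 60 used for the dose-averaged quality factor is piecewise and can be stated exactly:

```python
def q_icrp60(let_keV_um):
    """Quality factor Q(L) from ICRP Publication 60 as a function of
    unrestricted LET in water, L (keV/um):
        Q = 1                 for L < 10
        Q = 0.32*L - 2.2      for 10 <= L <= 100
        Q = 300 / sqrt(L)     for L > 100
    """
    L = let_keV_um
    if L < 10.0:
        return 1.0
    if L <= 100.0:
        return 0.32 * L - 2.2
    return 300.0 / L ** 0.5

for L in (1.0, 20.0, 100.0, 400.0):
    print(f"L = {L:6.1f} keV/um -> Q = {q_icrp60(L):.2f}")
```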
NASA Astrophysics Data System (ADS)
Rezaeian, P.; Ataenia, V.; Shafiei, S.
2017-12-01
In this paper, the flux of photons inside the irradiation cell of the Gammacell-220 is calculated using an analytical method based on a multipole moment expansion. The flux of the photons inside the irradiation cell is expressed as a function of monopole, dipole and quadrupole terms in the Cartesian coordinate system. For the source distribution of the Gammacell-220, the values of the multipole moments are obtained by direct integration. To validate the presented method, the flux distribution inside the irradiation cell was also determined by MCNP simulations as well as experimental measurements. To measure the flux inside the irradiation cell, Amber dosimeters were employed. The calculated values of the flux were in agreement with the values obtained by simulations and measurements, especially in the central zones of the irradiation cell. To show that the present method is a good approximation for determining the flux in the irradiation cell, the values of the multipole moments were also obtained by fitting the simulation and experimental data using the Levenberg-Marquardt algorithm. The present method yields reasonable results for any source distribution, even one without symmetry, which makes it a powerful tool for source load planning.
Numerical Calculation and Measurement of Nonlinear Acoustic Fields in Ultrasound Diagnosis
NASA Astrophysics Data System (ADS)
Kawagishi, Tetsuya; Saito, Shigemi; Mine, Yoshitaka
2002-05-01
In order to develop a tool for designing the ultrasonic probe and its peripheral devices for tissue-harmonic-imaging systems, a study is carried out to compare calculated and observed nonlinear acoustic fields for a diagnostic ultrasound system. Pulsed ultrasound with a center frequency of 2.5 MHz is emanated from a weakly focusing sector probe with a 6.5 mm aperture radius and a 50 mm focal length into an agar phantom with an attenuation coefficient of about 0.6 dB/cm/MHz or 1.2 dB/cm/MHz. The nonlinear acoustic field is measured using a needle-type hydrophone. The calculation is based on the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, which is modified so that the frequency dependence of the attenuation coefficient matches that of biological tissue. This equation is numerically solved with the implicit backward method employing iteration. The measured and calculated amplitude spectra show good agreement with each other.
Measurement of heat transfer coefficient using thermoanemometry methods
NASA Astrophysics Data System (ADS)
Dančová, P.; Sitek, P.; Vít, T.
2014-03-01
This work deals with the measurement of heat transfer from a heated flat plate on which a synthetic jet impinges perpendicularly. The heat transfer coefficient (HTC) is measured using the hot-wire anemometry method with a Dantec 55M47 glue-on film probe. The paper also presents the results of velocity profile measurements and turbulence intensity calculations.
NASA Astrophysics Data System (ADS)
Ullmann, J. L.; Kawano, T.; Bredeweg, T. A.; Couture, A.; Haight, R. C.; Jandel, M.; O'Donnell, J. M.; Rundberg, R. S.; Vieira, D. J.; Wilhelmy, J. B.; Becker, J. A.; Chyzh, A.; Wu, C. Y.; Baramsai, B.; Mitchell, G. E.; Krtička, M.
2014-03-01
Background: Accurate knowledge of the 238U(n,γ) cross section is important for developing theoretical nuclear reaction models and for applications. However, capture cross sections are difficult to calculate accurately and often must be measured. Purpose: We seek to confirm previous measurements and test cross-section calculations with an emphasis on the unresolved resonance region from 1 to 500 keV. Method: Cross sections were measured from 10 eV to 500 keV using the DANCE detector array at the LANSCE spallation neutron source. The measurements used a thin target, 48 mg/cm2 of depleted uranium. Gamma cascade spectra were also measured to provide an additional constraint on calculations. The data are compared to cross-section calculations using the code CoH3 and cascade spectra calculations made using the code dicebox. Results: This new cross-section measurement confirms the previous data. The measured gamma-ray spectra suggest the need for additional low-lying dipole strength in the radiative strength function. New Hauser-Feshbach calculations including this strength accurately predict the capture cross section without renormalization. Conclusions: The present cross-section data confirm previous measurements. Including additional low-lying dipole strength in the radiative strength function may lead to more accurate cross-section calculations in nuclei where <Γγ> has not been measured.
McGuigan, John A S; Kay, James W; Elder, Hugh Y
2016-09-01
In Ca(2+) and Mg(2+) buffer solutions the ionised concentrations ([X(2+)]) are either calculated or measured. Calculated values vary by up to a factor of seven due to the following four problems: 1) There is no agreement amongst the tabulated constants in the literature. These constants have usually to be corrected for ionic strength and temperature. 2) The ionic strength correction entails the calculation of the single ion activity coefficient, which involves non-thermodynamic assumptions; the data for temperature correction is not always available. 3) Measured pH is in terms of activity i.e. pHa. pHa measurements are complicated by the change in the liquid junction potentials at the reference electrode making an accurate conversion from H(+) activity to H(+) concentration uncertain. 4) Ligands such as EGTA bind water and are not 100% pure. Ligand purity has to be measured, even when the [X(2+)] are calculated. The calculated [X(2+)] in buffers are so inconsistent that calculation is not an option. Until standards are available, the [X(2+)] in the buffers must be measured. The Ligand Optimisation Method is an accurate and independently verified method of doing this (McGuigan & Stumpff, Anal. Biochem. 436, 29, 2013). Lack of standards means it is not possible to compare the published [Ca(2+)] in the nmolar range, and the apparent constant (K(/)) values for Ca(2+) and Mg(2+) binding to intracellular ligands amongst different laboratories. Standardisation of Ca(2+)/Mg(2+) buffers is now essential. The parameters to achieve this are proposed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Ortiz, Marco G.
1993-01-01
A method for modeling a conducting material sample or structure system, as an electrical network of resistances in which each resistance of the network is representative of a specific physical region of the system. The method encompasses measuring a resistance between two external leads and using this measurement in a series of equations describing the network to solve for the network resistances for a specified region and temperature. A calibration system is then developed using the calculated resistances at specified temperatures. This allows for the translation of the calculated resistances to a region temperature. The method can also be used to detect and quantify structural defects in the system.
NASA Astrophysics Data System (ADS)
Yamamura, Hideho; Sato, Ryohei; Iwata, Yoshiharu
Global efforts toward energy conservation, the growth of data centers, and the increasing use of IT equipment are creating demand for reduced equipment power consumption, making improved power-supply efficiency a necessity. MOSFETs are widely used for their low ON-resistances. Power efficiency is designed using time-domain circuit simulators, except for transformer copper loss, whose frequency dependence is calculated separately using methods based on the skin and proximity effects. As semiconductor technology reduces the ON-resistance of MOSFETs, frequency dependence due to the skin or proximity effect is anticipated there as well. In this study, the ON-resistances of MOSFETs are measured and their frequency dependence is confirmed. The power loss for a rectangular current pulse is then calculated by extending the transformer copper-loss calculation method to MOSFETs. A frequency function for the resistance model is newly developed, enabling parametric calculation, and the calculation is accelerated by eliminating summation terms. Using this method, it is shown that the frequency-dependent component of the measured MOSFETs increases the dissipation from 11% to 32% at a switching frequency of 100 kHz. This paper thus points out the importance of the frequency dependence of MOSFET ON-resistance, provides a means of calculating the resulting pulse losses, and improves the loss calculation accuracy of SMPSs.
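The frequency-domain loss calculation described above can be sketched as follows: decompose the rectangular current pulse into its Fourier harmonics and sum I²_rms·R(f) over them. The resistance model R(f) = R_dc·(1 + sqrt(f/f0)) below is an assumed skin-effect-like form, not the paper's fitted frequency function, and all numbers are illustrative.

```python
import numpy as np

def rect_pulse_harmonics(i_peak, duty, n_harmonics):
    """RMS Fourier components of a rectangular current pulse of amplitude
    i_peak and duty cycle `duty`: DC term plus the first n harmonics."""
    n = np.arange(1, n_harmonics + 1)
    i_dc = i_peak * duty
    i_n = i_peak * 2.0 / (n * np.pi) * np.sin(n * np.pi * duty)   # peak values
    return i_dc, i_n / np.sqrt(2.0)                                # -> RMS

def conduction_loss(i_peak, duty, f_sw, r_dc, f0, n_harmonics=200):
    """Conduction loss with frequency-dependent resistance, summed over the
    harmonics of the switching frequency f_sw."""
    i_dc, i_rms = rect_pulse_harmonics(i_peak, duty, n_harmonics)
    r_n = r_dc * (1.0 + np.sqrt(np.arange(1, n_harmonics + 1) * f_sw / f0))
    return i_dc**2 * r_dc + np.sum(i_rms**2 * r_n)

p_flat = conduction_loss(10.0, 0.5, 100e3, 5e-3, f0=1e12)  # huge f0: ~flat R(f)
p_freq = conduction_loss(10.0, 0.5, 100e3, 5e-3, f0=1e6)
print(f"flat-R loss ~ {p_flat*1e3:.1f} mW, frequency-dependent ~ {p_freq*1e3:.1f} mW")
```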
NASA Technical Reports Server (NTRS)
Seely, J. F.; Feldman, U.; Safronova, U. I.
1986-01-01
The wavelengths of inner-shell 1s-2p transitions in the ions Fe XVIII-XXIV have been measured in solar flare spectra recorded by the Naval Research Laboratory crystal spectrometer (SOLFLEX) on the Air Force P78-1 spacecraft. The measurements are compared with previous measurements and with recently calculated wavelengths. It is found that the measured wavelengths are systematically larger than the wavelengths calculated using the Z-expansion method by up to 0.65 mÅ. For the more highly charged ions, these differences can be attributed to the QED contributions to the transition energies that are not included in the Z-expansion calculations.
Comparison of Minimally and More Invasive Methods of Determining Mixed Venous Oxygen Saturation.
Smit, Marli; Levin, Andrew I; Coetzee, Johan F
2016-04-01
To investigate the accuracy of a minimally invasive, 2-step, lookup method for determining mixed venous oxygen saturation compared with conventional techniques. Single-center, prospective, nonrandomized, pilot study. Tertiary care hospital, university setting. Thirteen elective cardiac and vascular surgery patients. All participants received intra-arterial and pulmonary artery catheters. Minimally invasive oxygen consumption and cardiac output were measured using a metabolic module and lithium-calibrated arterial waveform analysis (LiDCO; LiDCO, London), respectively. For the minimally invasive method, Step 1 involved these minimally invasive measurements, and arterial oxygen content was entered into the Fick equation to calculate mixed venous oxygen content. Step 2 used an oxyhemoglobin curve spreadsheet to look up mixed venous oxygen saturation from the calculated mixed venous oxygen content. The conventional "invasive" technique used pulmonary artery intermittent thermodilution cardiac output, direct sampling of mixed venous and arterial blood, and the "reverse-Fick" method of calculating oxygen consumption. LiDCO overestimated thermodilution cardiac output by 26%. Pulmonary artery catheter-derived oxygen consumption underestimated metabolic module measurements by 27%. Mixed venous oxygen saturation differed between techniques; the calculated values underestimated the direct measurements by between 12% to 26.3%, this difference being statistically significant. The magnitude of the differences between the minimally invasive and invasive techniques was too great for the former to act as a surrogate of the latter and could adversely affect clinical decision making. Copyright © 2016 Elsevier Inc. All rights reserved.
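The 2-step method can be sketched directly from the abstract: Step 1 rearranges the Fick equation for mixed venous O2 content, and Step 2 "looks up" the saturation whose content matches it, here by numerically inverting the Severinghaus (1979) oxyhemoglobin curve fit. The constants (1.34 mL/g, 0.003 mL/dL/mmHg) are conventional; the authors' lookup spreadsheet may use slightly different ones.

```python
from scipy.optimize import brentq

def severinghaus_so2(po2_mmHg):
    """O2 saturation (fraction) from PO2 via the Severinghaus (1979) fit."""
    p = po2_mmHg
    return 1.0 / (23400.0 / (p**3 + 150.0 * p) + 1.0)

def o2_content(hb_gdl, so2_frac, po2_mmHg):
    """O2 content (mL O2 / dL): bound (1.34 mL/g Hb) plus dissolved O2."""
    return 1.34 * hb_gdl * so2_frac + 0.003 * po2_mmHg

def svo2_two_step(cao2, vo2_ml_min, co_l_min, hb_gdl):
    """Step 1: mixed venous O2 content from the Fick equation.
    Step 2: find the saturation whose content matches it."""
    cvo2 = cao2 - vo2_ml_min / (co_l_min * 10.0)         # mL/dL (10 dL per L)
    f = lambda p: o2_content(hb_gdl, severinghaus_so2(p), p) - cvo2
    pvo2 = brentq(f, 1.0, 150.0)                         # solve for venous PO2
    return severinghaus_so2(pvo2)

cao2 = o2_content(hb_gdl=14.0, so2_frac=0.98, po2_mmHg=95.0)  # arterial content
print(f"SvO2 ~ {100 * svo2_two_step(cao2, 250.0, 5.0, 14.0):.0f} %")
```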
Methods used to calculate doses resulting from inhalation of Capstone depleted uranium aerosols.
Miller, Guthrie; Cheng, Yung Sung; Traub, Richard J; Little, Tom T; Guilmette, Raymond A
2009-03-01
The methods used to calculate radiological and toxicological doses to hypothetical persons inside either a U.S. Army Abrams tank or Bradley Fighting Vehicle that has been perforated by depleted uranium munitions are described. Data from time- and particle-size-resolved measurements of depleted uranium aerosol as well as particle-size-resolved measurements of aerosol solubility in lung fluids for aerosol produced in the breathing zones of the hypothetical occupants were used. The aerosol was approximated as a mixture of nine monodisperse (single particle size) components corresponding to particle size increments measured by the eight stages plus the backup filter of the cascade impactors used. A Markov Chain Monte Carlo Bayesian analysis technique was employed, which straightforwardly calculates the uncertainties in doses. Extensive quality control checking of the various computer codes used is described.
NASA Astrophysics Data System (ADS)
Clay, J.; Kent, E. R.; Leinfelder-Miles, M.; Lambert, J. J.; Little, C.; Paw U, K. T.; Snyder, R. L.
2016-12-01
Eddy covariance and surface renewal measurements were used to estimate evapotranspiration (ET) over a variety of crop fields in the Sacramento-San Joaquin River Delta during the 2016 growing season; however, the comparison and evaluation of the multiple measurement systems and methods for determining ET focused on a single alfalfa site. The eddy covariance systems included two systems for direct measurement of latent heat flux: one using a separate sonic anemometer and an open-path infrared gas analyzer, and another using a combined system (Campbell Scientific IRGASON). For these methods, eddy covariance was used with measurements from the Campbell Scientific CSAT3, the LI-COR 7500a, the Campbell Scientific IRGASON, and an additional R.M. Young sonic anemometer. In addition to these direct measures, the surface renewal approach included several energy-balance residual methods in which net radiation, ground heat flux, and sensible heat flux (H) were measured. H was measured using several systems and different methods, including multiple fast-response thermocouple measurements and the temperatures measured by the sonic anemometers. The energy available for ET was then calculated as the residual of the surface energy balance equation. Differences in ET values were analyzed between the eddy covariance and surface renewal methods, using the IRGASON-derived values of ET as the standard for accuracy.
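The energy-balance residual step is a one-liner: LE = Rn - G - H, converted to a water-equivalent ET rate via the latent heat of vaporisation. A minimal sketch with illustrative midday fluxes:

```python
def residual_et(rn_wm2, g_wm2, h_wm2, t_air_c=20.0):
    """Latent heat flux as the energy-balance residual, LE = Rn - G - H,
    converted to an evapotranspiration rate in mm/hour.  lambda is the
    latent heat of vaporisation (weakly temperature dependent)."""
    le = rn_wm2 - g_wm2 - h_wm2                     # W/m^2
    lam = (2.501 - 0.00236 * t_air_c) * 1e6         # J/kg
    kg_per_m2_s = le / lam                          # = mm/s of water
    return kg_per_m2_s * 3600.0                     # mm/hour

# Midday example: Rn = 600, G = 80, H = 150 W/m^2
print(f"ET ~ {residual_et(600.0, 80.0, 150.0):.2f} mm/h")
```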
Roy, Jean-Sébastien; Moffet, Hélène; Hébert, Luc J; St-Vincent, Guy; McFadyen, Bradford J
2007-06-21
Abnormal scapular displacements during arm elevation have been observed in people with shoulder impingement syndrome. These abnormal scapular displacements were evaluated using different methods and instruments allowing a 3-dimensional representation of the scapular kinematics. The validity and the intrasession reliability have been shown for the majority of these methods for healthy people. However, the intersession reliability on healthy people and people with impaired shoulders is not well documented. This measurement property needs to be assessed before using such methods in longitudinal comparative studies. The objective of this study is to evaluate the intra and intersession reliability of 3-dimensional scapular attitudes measured at different arm positions in healthy people and to explore the same measurement properties in people with shoulder impingement syndrome using the Optotrak Probing System. Three-dimensional scapular attitudes were measured twice (test and retest interspaced by one week) on fifteen healthy subjects (mean age 37.3 years) and eight subjects with subacromial shoulder impingement syndrome (mean age 46.1 years) in three arm positions (arm at rest, 70 degrees of humerothoracic flexion and 90 degrees of humerothoracic abduction) using the Optotrak Probing System. Two different methods of calculation of 3-dimensional scapular attitudes were used: relative to the position of the scapula at rest and relative to the trunk. Intraclass correlation coefficient (ICC) and standard error of measure (SEM) were used to estimate intra and intersession reliability. For both groups, the reliability of the three-dimensional scapular attitudes for elevation positions was very good during the same session (ICCs from 0.84 to 0.99; SEM from 0.6 degrees to 1.9 degrees ) and good to very good between sessions (ICCs from 0.62 to 0.97; SEM from 1.2 degrees to 4.2 degrees ) when using the method of calculation relative to the trunk. Higher levels of intersession reliability were found for the method of calculation relative to the trunk in anterior-posterior tilting at 70 degrees of flexion compared to the method of calculation relative to the scapula at rest. The estimation of three-dimensional scapular attitudes using the method of calculation relative to the trunk is reproducible in the three arm positions evaluated and can be used to document the scapular behavior.
Wolever, Thomas M S
2004-02-01
To evaluate the suitability for glycaemic index (GI) calculations of using blood sampling schedules and methods of calculating the area under the curve (AUC) different from those recommended, the GI values of five foods were determined by the recommended method (capillary blood glucose measured seven times over 2.0 h) in forty-seven normal subjects, and different calculations were performed on the same data set. The AUC was calculated in four ways: incremental AUC (iAUC; recommended method), iAUC above the minimum blood glucose value (AUCmin), net AUC (netAUC), and iAUC including only the area before the glycaemic response curve cuts the baseline (AUCcut). In addition, iAUC was calculated using four different sets of fewer than seven blood samples. GI values were derived using each AUC calculation. The mean GI values of the foods varied significantly according to the method of calculating GI. The standard deviation of GI values calculated using iAUC (20.4) was lower than for six of the seven other methods, and significantly less (P<0.05) than that using netAUC (24.0). To be a valid index of food glycaemic response independent of subject characteristics, GI values in subjects should not be related to their AUC after oral glucose. However, calculating GI using AUCmin or fewer than seven blood samples resulted in significant (P<0.05) relationships between GI and mean AUC. It is concluded that, in subjects without diabetes, the recommended blood sampling schedule and method of AUC calculation yield more valid and/or more precise GI values than the seven other methods tested here. The only method whose results agreed reasonably well with the recommended method (i.e., within ±5%) was AUCcut.
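A sketch of the recommended iAUC computation, under the simplifying assumption that below-baseline increments contribute zero (the formal method treats baseline-crossing segments geometrically); the sample data are illustrative:

```python
import numpy as np

def incremental_auc(t_min, glucose, baseline=None):
    """Incremental AUC (iAUC): trapezoidal area above the fasting baseline,
    ignoring area below it (simplified version of the recommended GI method)."""
    t = np.asarray(t_min, dtype=float)
    g = np.asarray(glucose, dtype=float)
    base = g[0] if baseline is None else baseline
    inc = np.clip(g - base, 0.0, None)   # increments below baseline contribute 0
    return np.trapz(inc, t)

# Seven capillary samples over 2 h (times in minutes, glucose in mmol/L)
t = [0, 15, 30, 45, 60, 90, 120]
g = [4.8, 6.9, 8.1, 7.4, 6.3, 5.1, 4.6]
print(f"iAUC = {incremental_auc(t, g):.1f} mmol.min/L")
```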
Comparison of Measured and Calculated Stresses in Built-up Beams
NASA Technical Reports Server (NTRS)
Levin, L Ross; Nelson, David H
1946-01-01
Web stresses and flange stresses were measured in three built-up beams: one of constant depth with flanges of constant cross section, one linearly tapered in depth with flanges of constant cross section, and one linearly tapered in depth with tapered flanges. The measured stresses were compared with the calculated stresses obtained by the methods outlined in order to determine the degree of accuracy that may be expected from the stress-analysis formulas. These comparisons indicated that the average measured stresses for all points in the central section of the beams did not exceed the average calculated stresses by more than 5 percent. They also indicated that the difference between average measured flange stresses and average flange stresses calculated on the basis of the net area and a fully effective web did not exceed 6.1 percent.
Development of a neural network technique for KSTAR Thomson scattering diagnostics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Seung Hun, E-mail: leesh81@nfri.re.kr; Lee, J. H.; Yamada, I.
Neural networks provide powerful approaches for dealing with nonlinear data and have been successfully applied to fusion plasma diagnostics and control systems. Measuring plasma parameters in situ in real time is essential for controlling tokamak plasmas. However, the χ² method traditionally used in Thomson scattering diagnostics hampers real-time measurement due to the complexity of the calculations involved. In this study, we applied a neural network approach to Thomson scattering diagnostics in order to calculate the electron temperature, comparing the results to those obtained with the χ² method. The best results were obtained for 10³ training cycles and eight nodes in the hidden layer. Our neural network approach shows good agreement with the χ² method and performs the calculation twenty times faster.
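As a loose illustration only: a single-hidden-layer regressor with eight nodes, as reported above, fitted to synthetic stand-in data. The features, the mapping, and the scikit-learn model are our assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for Thomson scattering data: a few spectral-channel
# intensity ratios (features) mapped to electron temperature (target).
n = 2000
X = rng.uniform(0.1, 1.0, size=(n, 4))           # hypothetical channel ratios
Te = 0.5 + 3.0 * X[:, 0] - 1.2 * X[:, 1] ** 2    # hypothetical keV-scale mapping
Te += rng.normal(scale=0.02, size=n)             # measurement noise

# One hidden layer with eight nodes, as reported above.
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X[:1500], Te[:1500])
print("held-out R^2:", net.score(X[1500:], Te[1500:]))
```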
Determining the ventilation and aerosol deposition rates from routine indoor-air measurements.
Halios, Christos H; Helmis, Costas G; Deligianni, Katerina; Vratolis, Sterios; Eleftheriadis, Konstantinos
2014-01-01
Measurement of the air exchange rate provides critical information in energy and indoor-air quality studies. Continuous measurement of ventilation rates is a rather costly exercise and requires specific instrumentation. In this work, an alternative methodology is proposed and tested, in which the air exchange rate is calculated from routine indoor and outdoor measurements of a common pollutant such as SO2, and the uncertainties induced in the calculations are determined analytically. The application of this methodology is demonstrated for three residential microenvironments in Athens, Greece, and the results are compared against ventilation rates calculated from differential pressure measurements. The calculated time-resolved ventilation rates were applied to the mass balance equation to estimate the particle loss rate, which was found to agree with literature values at an average of 0.50 h⁻¹. The proposed method was further evaluated by applying a mass balance numerical model for the calculation of indoor aerosol number concentrations, using the previously calculated ventilation rate, the outdoor measured number concentrations, and the particle loss rates as input values. The model results for the indoor concentrations were found to compare well with the experimentally measured values.
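A hedged sketch of one way the air exchange rate can be extracted from routine indoor/outdoor tracer data, assuming a single-zone mass balance with no indoor SO2 sources and negligible deposition; the study's actual estimator and uncertainty analysis are more elaborate:

```python
import numpy as np

def air_exchange_rate(t_h, c_in, c_out):
    """Least-squares estimate of the air exchange rate lambda (h^-1) from the
    indoor mass balance dC_in/dt = lambda * (C_out - C_in), assuming no indoor
    sources and negligible deposition for the tracer pollutant (e.g. SO2)."""
    t = np.asarray(t_h, float)
    ci = np.asarray(c_in, float)
    co = np.asarray(c_out, float)
    dcdt = np.gradient(ci, t)                          # numerical time derivative
    drive = co - ci                                    # indoor-outdoor difference
    return np.sum(drive * dcdt) / np.sum(drive ** 2)   # 1-parameter LS fit

t = [0.0, 0.5, 1.0, 1.5, 2.0]       # h (illustrative data)
c_in = [2.0, 3.1, 3.9, 4.4, 4.8]    # ppb
c_out = [8.0, 8.0, 8.0, 8.0, 8.0]   # ppb (roughly steady outdoors)
print(f"lambda ~ {air_exchange_rate(t, c_in, c_out):.2f} per hour")
```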
Thermodynamics of the Si-O-H System
NASA Technical Reports Server (NTRS)
Jacobson, Nathan S.; Opila, Elizabeth J.; Myers, Dwight; Copland, Evan
2004-01-01
Thermodynamic functions for Si(OH)4(g) and SiO(OH)2(g) have been measured using the transpiration method. A second-law enthalpy of formation and entropy and a third-law enthalpy of formation have been calculated for Si(OH)4. The results are in very good agreement with previous experimental measurements, ab initio calculations, and estimates.
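For reference, the standard second- and third-law reductions take the following textbook forms; this is a sketch of the general relations, not the authors' exact data treatment:

```latex
% Second-law analysis: the slope of ln K_p versus 1/T gives the reaction
% enthalpy, and the intercept the entropy, at the mean temperature T_m:
\[
  \ln K_p = -\frac{\Delta_r H^\circ(T_m)}{R}\,\frac{1}{T}
            + \frac{\Delta_r S^\circ(T_m)}{R}.
\]
% Third-law analysis: each individual equilibrium measurement yields an
% enthalpy at 298 K through tabulated Gibbs energy functions:
\[
  \Delta_r H^\circ(298\,\mathrm{K})
    = -RT \ln K_p
      - T\,\Delta_r\!\left[\frac{G^\circ(T)-H^\circ(298\,\mathrm{K})}{T}\right].
\]
```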
The Determination of the Percent of Oxygen in Air Using a Gas Pressure Sensor
ERIC Educational Resources Information Center
Gordon, James; Chancey, Katherine
2005-01-01
The determination of the percent of oxygen in air is performed in a general chemistry laboratory experiment in which students compare results calculated from pressure measurements obtained with calculator-based systems to those obtained with a water-measurement method. This experiment allows students to explore a fundamental reaction…
Computer program for the calculation of grain size statistics by the method of moments
Sawyer, Michael B.
1977-01-01
A computer program is presented for a Hewlett-Packard Model 9830A desk-top calculator (1) which calculates statistics using weight or point count data from a grain-size analysis. The program uses the method of moments in contrast to the more commonly used but less inclusive graphic method of Folk and Ward (1957). The merits of the program are: (1) it is rapid; (2) it can accept data in either grouped or ungrouped format; (3) it allows direct comparison with grain-size data in the literature that have been calculated by the method of moments; (4) it utilizes all of the original data rather than percentiles from the cumulative curve as in the approximation technique used by the graphic method; (5) it is written in the computer language BASIC, which is easily modified and adapted to a wide variety of computers; and (6) when used in the HP-9830A, it does not require punching of data cards. The method of moments should be used only if the entire sample has been measured and the worker defines the measured grain-size range. (1) Use of brand names in this paper does not imply endorsement of these products by the U.S. Geological Survey.
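A compact sketch of the method of moments on grouped weight-percent data, in Python rather than the original BASIC; the class midpoints and percentages are illustrative:

```python
import numpy as np

def moment_statistics(phi_midpoints, weight_pct):
    """Grain-size statistics by the method of moments from grouped
    weight-percent data (phi scale), as in the program described above."""
    phi = np.asarray(phi_midpoints, float)
    f = np.asarray(weight_pct, float)
    w = f / f.sum()
    mean = np.sum(w * phi)                          # 1st moment
    var = np.sum(w * (phi - mean) ** 2)             # 2nd central moment
    sd = np.sqrt(var)                               # sorting
    skew = np.sum(w * (phi - mean) ** 3) / sd ** 3  # skewness
    kurt = np.sum(w * (phi - mean) ** 4) / sd ** 4  # kurtosis
    return mean, sd, skew, kurt

# Class midpoints (phi) and weight percents from a sieve analysis
phi_mid = [0.5, 1.5, 2.5, 3.5, 4.5]
wt_pct = [5.0, 25.0, 40.0, 22.0, 8.0]
print(moment_statistics(phi_mid, wt_pct))
```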
A Method to Improve Electron Density Measurement of Cone-Beam CT Using Dual Energy Technique
Men, Kuo; Dai, Jian-Rong; Li, Ming-Hui; Chen, Xin-Yuan; Zhang, Ke; Tian, Yuan; Huang, Peng; Xu, Ying-Jie
2015-01-01
Purpose. To develop a dual energy imaging method to improve the accuracy of electron density measurement with a cone-beam CT (CBCT) device. Materials and Methods. The imaging system is the XVI CBCT system on an Elekta Synergy linac. Projection data were acquired with high and low energy X-rays, respectively, to set up a basis material decomposition model. Virtual phantom simulations and phantom experiments were carried out for quantitative evaluation of the method. Phantoms were also scanned twice, with the high and low energy X-rays, respectively. The data were decomposed into projections of the two basis material coefficients according to the model set up earlier. The two sets of decomposed projections were used to reconstruct CBCT images of the basis material coefficients. Then, the images of electron densities were calculated from these CBCT images. Results. The difference between the calculated and theoretical values was within 2%, and their correlation coefficient was about 1.0. The dual energy imaging method obtained more accurate electron density values and noticeably reduced the beam hardening artifacts. Conclusion. A novel dual energy CBCT imaging method to calculate electron densities was developed. It can acquire more accurate values and potentially provides a platform for dose calculation. PMID:26346510
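A linearized sketch of the per-ray basis material decomposition step described above; the calibration matrix and per-basis electron densities are purely illustrative placeholders, and the actual model may be nonlinear:

```python
import numpy as np

# Linearized sketch of two-material decomposition: each measured attenuation
# line integral at the two tube energies is modelled as a weighted sum of the
# two basis-material coefficients (c1, c2). The 2x2 "model" matrix M would be
# established from calibration scans of known materials; values here are
# purely illustrative.
M = np.array([[0.28, 0.45],    # high-kV response to (basis1, basis2)
              [0.41, 0.98]])   # low-kV response

def decompose(p_high, p_low):
    """Solve M @ [c1, c2] = [p_high, p_low] for the basis coefficients."""
    return np.linalg.solve(M, np.array([p_high, p_low]))

# Electron density as a linear combination of the basis coefficients,
# with illustrative per-basis electron densities.
RHO_E = np.array([3.34e23, 5.03e23])   # electrons per cm^3 (hypothetical)

c = decompose(p_high=0.52, p_low=0.91)
print("basis coefficients:", c, " electron density:", c @ RHO_E)
```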
Code of Federal Regulations, 2010 CFR
2010-01-01
... Appendix G to Subpart B of Part 430—Uniform Test Method for Measuring the Energy Consumption of ... energy consumption for primary electric heaters. For primary electric heaters, calculate the annual...
Fascione, Jeanna M; Crews, Ryan T; Wrobel, James S
2012-01-01
Identifying the variability of footprint measurement collection techniques and the reliability of footprint measurements would assist with appropriate clinical foot posture appraisal. We sought to identify relationships between these measures in a healthy population. On 30 healthy participants, midgait dynamic footprint measurements were collected using an ink mat, paper pedography, and electronic pedography. The footprints were then digitized, and the following footprint indices were calculated with photo digital planimetry software: footprint index, arch index, truncated arch index, Chippaux-Smirak Index, and Staheli Index. Differences between techniques were identified with repeated-measures analysis of variance with Scheffé post hoc tests. In addition, to assess practical similarities between the different methods, intraclass correlation coefficients (ICCs) were calculated. To assess intrarater reliability, footprint indices were calculated twice on 10 randomly selected ink mat footprint measurements, and the ICC was calculated. Dynamic footprint measurements collected with an ink mat significantly differed from those collected with paper pedography (ICC, 0.85-0.96) and electronic pedography (ICC, 0.29-0.79), regardless of the practical similarities noted with ICC values (P = .00). Intrarater reliability for dynamic ink mat footprint measurements was high for the footprint index, arch index, truncated arch index, Chippaux-Smirak Index, and Staheli Index (ICC, 0.74-0.99). Footprint measurements collected with various techniques demonstrate differences. Interchangeable use of exact values without adjustment is not advised. Intrarater reliability of a single method (ink mat) was found to be high.
Arratibel-Imaz, Iñaki; Calleja-González, Julio; Emparanza, Jose Ignacio; Terrados, Nicolas; Mjaanes, Jeffrey M; Ostojic, Sergej M
2016-01-01
The exertion intensity at which a change occurs in the metabolic processes that provide the energy to maintain physical work has been defined as the anaerobic threshold (AT). The direct calculation of the maximal lactate steady state (MLSS) would require exertion at given intensities over long periods of time with sufficient rest periods, which would prove significantly difficult in daily practice. Many protocols have been used for the indirect calculation of MLSS. The aim of this study was to determine whether the results of measurements with 12 different AT calculation methods and calculation software [Keul, Simon, Stegmann, Bunc, Dickhuth (TKM and WLa), Dmax, Freiburg, Geiger-Hille, Log-Log, Lactate Minimum] can be used interchangeably, including the fixed-threshold method of Mader/OBLA at 4 mmol/l, and then to compare them with the direct measurement of MLSS. There were two parts to this research. Phase 1: results from 162 exertion tests chosen at random from the 1560 tests. Phase 2: sixteen athletes (n = 16) carried out different tests on five consecutive days. There was very high concordance among all the methods [intraclass correlation coefficient (ICC) > 0.90], except Log-Log in relation to Stegmann, Dmax, Dickhuth-WLa and Geiger-Hille. The Dickhuth-TKM showed a high tendency towards concordance with Dmax (2.2 W) and Dickhuth-WLa (0.1 W). The Dickhuth-TKM method also presented a high tendency to concordance with Dickhuth-WLa (0.5 W), Freiburg (7.4 W), MLSS (2.0 W), Bunc (8.9 W), and Dmax (0.1 W). The calculation of MLSS power showed a high tendency to concordance with Dickhuth-TKM (2 W), Dmax (2.1 W), and Dickhuth-WLa (1.5 W). The fixed threshold of 4 mmol/l (OBLA) produces slightly different and higher results than those obtained with all the other methods analyzed, including MLSS, implying an overestimation of power at the individual anaerobic threshold. The Dickhuth-TKM, Dmax and Dickhuth-WLa methods showed high concordance on a cycle ergometer, and Dickhuth-TKM, Dmax and Dickhuth-WLa showed high concordance with the power calculated for the MLSS.
NASA Astrophysics Data System (ADS)
Pazderin, A. V.; Sof'in, V. V.; Samoylenko, V. O.
2015-11-01
Efforts aimed at improving energy efficiency in all branches of the fuel and energy complex should begin with setting up a high-tech automated system for monitoring and accounting energy resources. Malfunctions and failures in the measurement and information parts of this system may distort commercial measurements of energy resources and lead to financial risks for power supplying organizations. In addition, measurement errors may be connected with intentional distortion of measurements to reduce payments for energy resources on the consumer's side, which leads to commercial loss of energy resources. The article presents a universal mathematical method, based on state estimation theory, for verifying the validity of measurement information in networks for transporting energy resources such as electricity and heat, petroleum, gas, etc. The energy resource transportation network is represented by a graph whose nodes correspond to producers and consumers and whose branches stand for transportation mains (power lines, pipelines, and heat network elements). The main idea of state estimation is to obtain calculated analogs of the energy resources for all available measurements. Unlike "raw" measurements, which contain inaccuracies, the calculated flows of energy resources, called estimates, fully satisfy the consistency conditions for all state equations describing the energy resource transportation network. The state equations written in terms of the calculated estimates are free from residuals. The difference between a measurement and its calculated analog (estimate) is called an estimation residual in estimation theory. Large values of the estimation residuals are an indicator of large errors in particular energy resource measurements. The presented method makes it possible to improve the validity of energy resource measurements, to assess the observability of the transportation network, to eliminate imbalances in measured energy resource flows, and to filter invalid measurements at the data acquisition and processing stage of an automated energy resource monitoring and accounting system.
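A toy illustration of the state estimation idea on a single-node network: constrained weighted least squares forces the estimates to satisfy conservation exactly, and the estimation residuals flag suspect meters. The network, weights, and values are invented for illustration:

```python
import numpy as np

# Toy energy-transport network: one node with a metered inflow z1 and two
# metered outflows z2, z3. Conservation requires x1 - x2 - x3 = 0. The state
# estimate minimizes the weighted squared deviation from the measurements
# subject to that constraint; large residuals z - x flag suspect meters.
z = np.array([100.0, 58.0, 49.0])   # raw measurements (imbalance = -7)
sigma = np.array([1.0, 1.0, 1.0])   # meter standard deviations
A = np.array([[1.0, -1.0, -1.0]])   # conservation equation A @ x = 0

W_inv = np.diag(sigma ** 2)         # inverse of the weight matrix diag(1/sigma^2)
# Constrained WLS has the closed form x = z - W^-1 A^T (A W^-1 A^T)^-1 A z
lam = np.linalg.solve(A @ W_inv @ A.T, A @ z)
x = z - W_inv @ A.T @ lam

print("estimates:", x)                  # satisfy x1 = x2 + x3 exactly
print("estimation residuals:", z - x)   # large entries -> suspect meters
```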
40 CFR Appendix A-3 to Part 60 - Test Methods 4 through 5I
Code of Federal Regulations, 2011 CFR
2011-07-01
... isokinetic sampling rates prior to a pollutant emission measurement run. The approximation method described... with a pollutant emission measurement run. When it is, calculation of percent isokinetic, pollutant emission rate, etc., for the run shall be based upon the results of the reference method or its equivalent...
Evaluation of the validity of the Bolton Index using cone-beam computed tomography (CBCT)
Llamas, José M.; Cibrián, Rosa; Gandía, José L.; Paredes, Vanessa
2012-01-01
Aims: To evaluate the reliability and reproducibility of calculating the Bolton Index using cone-beam computed tomography (CBCT), and to compare this with measurements obtained using the 2D Digital Method. Material and Methods: Traditional study models were obtained from 50 patients, which were then digitized in order to be able to measure them using the Digital Method. Likewise, CBCTs of those same patients were undertaken using the Dental Picasso Master 3D® and the images obtained were then analysed using the InVivoDental programme. Results: By determining the regression lines for both measurement methods, as well as the difference between both of their values, the two methods are shown to be comparable, despite the fact that the measurements analysed presented statistically significant differences. Conclusions: The three-dimensional models obtained from the CBCT are as accurate and reproducible as the digital models obtained from the plaster study casts for calculating the Bolton Index. The differences existing between both methods were clinically acceptable. Key words: Tooth-size, digital models, Bolton Index, CBCT. PMID:22549690
Tahmasebi Birgani, Mohamad J; Chegeni, Nahid; Zabihzadeh, Mansoor; Hamzian, Nima
2014-01-01
Equivalent field is frequently used for central axis depth-dose calculations of rectangular- and irregular-shaped photon beams. As most of the proposed models to calculate the equivalent square field are dosimetry-based, a simple physics-based method to calculate the equivalent square field size was used as the basis of this study. A table of the sides of the squares equivalent to rectangular fields was constructed and then compared with the well-known tables of BJR and Venselaar et al., with average relative error percentages of 2.5 ± 2.5% and 1.5 ± 1.5%, respectively. To evaluate the accuracy of this method, percentage depth doses (PDDs) were measured for several irregular symmetric and asymmetric treatment fields and their equivalent squares on a Siemens Primus Plus linear accelerator at both energies, 6 and 18 MV. The mean relative difference of the PDDs measured for these fields and their equivalent squares was approximately 1% or less. As a result, this method can be employed to calculate the equivalent field not only for rectangular fields but also for any irregular symmetric or asymmetric field. © 2013 American Association of Medical Dosimetrists. Published by American Association of Medical Dosimetrists. All rights reserved.
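For orientation, the classic area-to-perimeter (4A/P) rule gives the equivalent square of a rectangular field; this is a standard approximation, not necessarily the physical model used in the study:

```python
def equivalent_square(a_cm, b_cm):
    """Side of the square equivalent to an a x b rectangular field using the
    classic area-to-perimeter (4A/P) rule: s = 4*(a*b)/(2*(a+b)) = 2ab/(a+b)."""
    return 2.0 * a_cm * b_cm / (a_cm + b_cm)

print(f"5 x 20 cm field ~ {equivalent_square(5, 20):.1f} cm square")  # 8.0 cm
```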
Application of adjusted data in calculating fission-product decay energies and spectra
NASA Astrophysics Data System (ADS)
George, D. C.; Labauve, R. J.; England, T. R.
1982-06-01
The code ADENA, which approximately calculates fission-product beta and gamma decay energies and spectra in 19 or fewer energy groups from a mixture of U235 and Pu239 fuels, is described. The calculation uses aggregate, adjusted data derived from a combination of several experiments and summation results based on the ENDF/B-V fission product file. The method used to obtain these adjusted data and the method used by ADENA to calculate fission-product decay energy with an absorption correction are described, and an estimate of the uncertainty of the ADENA results is given. Comparisons of this approximate method are made to experimental measurements, to the ANSI/ANS 5.1-1979 standard, and to other calculational methods. A listing of the complete computer code (ADENA) is contained in an appendix. Included in the listing are data statements containing the adjusted data in the form of parameters to be used in simple analytic functions.
Feng, Yong; Chen, Aiqing
2017-01-01
This study aimed to quantify blood pressure (BP) measurement accuracy and variability with different techniques. Thirty video clips of BP recordings from the BHS training database were converted to Korotkoff sound waveforms. Ten observers who had not received medical training were asked to determine BPs using (a) the traditional manual auscultatory method and (b) a visual auscultation method based on visualizing the Korotkoff sound waveform, repeated three times on different days. The measurement error was calculated against the reference answers, and the measurement variability was calculated from the SD of the three repeats. Statistical analysis showed that, in comparison with the auscultatory method, the visual method significantly reduced overall variability from 2.2 to 1.1 mmHg for SBP and from 1.9 to 0.9 mmHg for DBP (both p < 0.001). It also showed that BP measurement errors were significant for both techniques (all p < 0.01, except DBP with the traditional method). Although significant, the overall mean errors were small (−1.5 and −1.2 mmHg for SBP and −0.7 and 2.6 mmHg for DBP, respectively, for the traditional auscultatory and visual auscultation methods). In conclusion, the visual auscultation method achieved an acceptable degree of BP measurement accuracy, with smaller variability than the traditional auscultatory method. PMID:29423405
An Experimental and Theoretical Study of Nitrogen-Broadened Acetylene Lines
NASA Technical Reports Server (NTRS)
Thibault, Franck; Martinez, Raul Z.; Bermejo, Dionisio; Ivanov, Sergey V.; Buzykin, Oleg G.; Ma, Qiancheng
2014-01-01
We present experimental nitrogen-broadening coefficients derived from Voigt profiles of isotropic Raman Q-lines measured in the ν2 band of acetylene (C2H2) at 150 K and 298 K, and compare them to theoretical values obtained through calculations that were carried out specifically for this work. Namely, full classical calculations based on Gordon's approach, two kinds of semi-classical calculations based on the Robert-Bonamy method, as well as full quantum dynamical calculations were performed. All the computations employed exactly the same ab initio potential energy surface for the C2H2-N2 system, which is, to our knowledge, the most realistic, accurate, and up-to-date one available. The resulting calculated collisional half-widths are in good agreement with the experimental ones only for the full classical and quantum dynamical methods. In addition, we have performed similar calculations for IR absorption lines and compared the results to bibliographic values. Results obtained with the full classical method are again in good agreement with the available room-temperature experimental data. The quantum dynamical close-coupling calculations are too time consuming to provide a complete set of values and therefore have been performed only for the R(0) line of C2H2. The broadening coefficient obtained for this line at 173 K and 297 K also compares quite well with the available experimental data. The traditional Robert-Bonamy semi-classical formalism, however, strongly overestimates the half-widths for both Q- and R-lines. The refined semi-classical Robert-Bonamy method, first proposed for the calculation of pressure-broadening coefficients of isotropic Raman lines, is also used for IR lines. By using this improved model, which takes into account line-coupling effects, the calculated semi-classical widths are significantly reduced and closer to the measured ones.
NASA Astrophysics Data System (ADS)
Klaessens, John H. G. M.; Hopman, Jeroen C. W.; Liem, K. Djien; de Roode, Rowland; Verdaasdonk, Rudolf M.; Thijssen, Johan M.
2008-02-01
Continuous-wave near-infrared spectroscopy is a well-known non-invasive technique for measuring changes in tissue oxygenation. Absorption changes (ΔO2Hb and ΔHHb) are calculated from the light attenuations using the modified Lambert-Beer equation. Generally, the concentration changes are calculated relative to the concentration at a starting point in time (delta-time method). It is also possible, under certain assumptions, to calculate the concentrations by subtracting the equations at different wavelengths (delta-wavelength method). We derived a new algorithm and show its possibilities and limitations. In the delta-wavelength method, the assumption is that the oxygen-independent attenuation term is eliminated from the formula even if its value changes in time; we verified the results against the classical delta-time method using extinction coefficients from different literature sources for the wavelengths 767 nm, 850 nm, and 905 nm. The different methods of calculating concentration changes were applied to data collected from animal experiments. The animals (lambs) were initially in a stable normoxic condition; stepwise they were made hypoxic and thereafter returned to a normoxic condition. The two algorithms were also applied to measuring two-dimensional blood oxygen saturation changes in human skin tissue. The different oxygen saturation levels were induced by alterations in respiration and by temporary arm clamping. The new delta-wavelength method yielded, in a steady-state measurement, the same changes in oxy- and deoxyhemoglobin as the classical delta-time method. The advantage of the new method is its independence of possible variations of the oxygen-independent attenuations in time.
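A sketch of the classical delta-time computation via the modified Lambert-Beer law, solved in the least-squares sense over three wavelengths; the extinction coefficients and pathlength factor below are placeholders, not the values used in the paper:

```python
import numpy as np

# Sketch of the delta-time computation: concentration changes are obtained
# from attenuation changes at three wavelengths via the modified Lambert-Beer
# law, dA(lambda) = (eps_HbO2*dO2Hb + eps_HHb*dHHb) * L * DPF, solved in the
# least-squares sense. Extinction coefficients below are rough illustrative
# values (mM^-1 cm^-1 scale), not those from the paper.
EPS = np.array([[0.60, 1.60],    # 767 nm: (HbO2, HHb)
                [1.06, 0.76],    # 850 nm
                [1.20, 0.75]])   # 905 nm
L_DPF = 1.0                      # pathlength x differential pathlength factor

def hb_changes(dA):
    """Least-squares [dO2Hb, dHHb] from attenuation changes at 3 wavelengths."""
    x, *_ = np.linalg.lstsq(EPS * L_DPF, np.asarray(dA, float), rcond=None)
    return x

print(hb_changes([0.010, 0.021, 0.024]))
```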
Comparison of RCS prediction techniques, computations and measurements
NASA Astrophysics Data System (ADS)
Brand, M. G. E.; Vanewijk, L. J.; Klinker, F.; Schippers, H.
1992-07-01
Three calculation methods for predicting the radar cross sections (RCS) of three-dimensional objects are evaluated by computing the radar cross section of a generic wing-inlet configuration. The following methods are applied: a three-dimensional high-frequency method, a three-dimensional boundary element method, and a two-dimensional finite difference time domain method. The results of the computations are compared with measurement data.
Polarization-resolved sensing with tilted fiber Bragg gratings: theory and limits of detection
NASA Astrophysics Data System (ADS)
Bialiayeu, Aliaksandr; Ianoul, Anatoli; Albert, Jacques
2015-08-01
Polarization-based sensing with tilted fiber Bragg grating (TFBG) sensors is analysed theoretically by two alternative approaches. The first method is based on tracking the grating transmission for two orthogonal states of linearly polarized light that are extracted from the measured Jones matrix or Stokes vectors of the TFBG transmission spectra. The second method is based on measurements along the system's principal axes and the polarization dependent loss (PDL) parameter, also calculated from measured data. It is shown that the frequent crossing of the Jones matrix eigenvalues as a function of wavelength leads to a non-physical interchange of the calculated principal axes; a method to remove this unwanted mathematical artefact and to restore the order of the system eigenvalues and the corresponding principal axes is provided. A comparison of the two approaches reveals that the PDL method provides a smaller standard deviation and therefore a lower limit of detection in refractometric sensing. Furthermore, the polarization analysis of the measured spectra allows for the identification of the principal states of polarization of the sensor system and consequently for the calculation of the transmission spectrum for any incident polarization state. The stability of the orientation of the system's principal axes is also investigated as a function of wavelength.
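The PDL computation from a measured Jones matrix follows a standard definition; a sketch, with an invented example matrix:

```python
import numpy as np

def pdl_db(J):
    """Polarization dependent loss of a device with Jones matrix J.
    Tmax/Tmin are the eigenvalues of J^H J (power transmissions along the
    principal states); PDL = 10 * log10(Tmax / Tmin)."""
    t = np.linalg.eigvalsh(J.conj().T @ J)   # real eigenvalues, ascending
    return 10.0 * np.log10(t[-1] / t[0])

# Illustrative Jones matrix with slight diattenuation
J = np.array([[0.95, 0.02 + 0.01j],
              [0.01 - 0.02j, 0.88]])
print(f"PDL = {pdl_db(J):.2f} dB")
```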
Multiple frequency method for operating electrochemical sensors
Martin, Louis P [San Ramon, CA
2012-05-15
A multiple frequency method for operating a sensor to measure a parameter of interest using calibration information, including the steps of: exciting the sensor at a first frequency to obtain a first sensor response; exciting the sensor at a second frequency to obtain a second sensor response; using the second sensor response and the calibration information to produce a calculated concentration of the interfering parameters; and using the first sensor response, the calculated concentration of the interfering parameters, and the calibration information to measure the parameter of interest.
Van Oostveldt, P; Boeken, G
1976-05-28
Factors influencing the calculation of the relative amount of chromophore and the chromophore area by the two-wavelength method are examined. The study was carried out with the help of models and further tested on Feulgen-stained preparations. Except for certain restrictions, the difference between the chromophore area as calculated from the two transmission measurements and the chromophore area obtained by planimetry can be used as a guide for determining the proper measuring conditions, including the choice of the two wavelengths.
A new tissue segmentation method to calculate 3D dose in small animal radiation therapy.
Noblet, C; Delpon, G; Supiot, S; Potiron, V; Paris, F; Chiavassa, S
2018-02-26
In pre-clinical animal experiments, radiation is usually delivered with kV photon beams, in contrast to the MV beams used in clinical irradiation, because of the small size of the animals. At this medium energy range, however, the contribution of the photoelectric effect to absorbed dose is significant. Accurate dose calculation therefore requires a more detailed tissue definition, because both density (ρ) and elemental composition (Zeff) affect the dose distribution. Moreover, when applied to cone beam CT (CBCT) acquisitions, the stoichiometric calibration of HU becomes inefficient, as it is designed for highly collimated fan-beam CT acquisitions. In this study, we propose an automatic tissue segmentation method for CBCT imaging that assigns both density (ρ) and elemental composition (Zeff) in small animal dose calculation. The method is based on the relationship found between CBCT number and the ρ·Zeff product computed from known materials. Monte Carlo calculations were performed to evaluate the impact of ρZeff variation on the absorbed dose in tissues. These results led to the creation of a tissue database composed of artificial tissues interpolated from tissue values published by the ICRU. The ρZeff method was validated by measuring transmitted doses through tissue substitute cylinders and a mouse with EBT3 film. Measurements were compared to the results of the Monte Carlo calculations. The study of the impact of ρZeff variation over the range of materials, from ρZeff = 2 g·cm⁻³ (lung) to 27 g·cm⁻³ (cortical bone), led to the creation of 125 artificial tissues. For the tissue substitute cylinders, the use of the ρZeff method led to maximal and average relative differences between the Monte Carlo results and the EBT3 measurements of 3.6% and 1.6%. The equivalent comparison for the mouse gave maximal and average relative differences of 4.4% and 1.2% inside the 80% isodose area. Gamma analysis led to a 94.9% success rate in the 10% isodose area with 4% and 0.3 mm criteria in dose and distance. Our new tissue segmentation method was developed for 40 kVp CBCT images. Both density and elemental composition are assigned to each voxel by using the relationship between HU and the product ρZeff. The method, validated by comparing measurements and calculations, enables more accurate small animal dose distributions to be calculated on low energy CBCT images.
Hilario, Eric C; Stern, Alan; Wang, Charlie H; Vargas, Yenny W; Morgan, Charles J; Swartz, Trevor E; Patapoff, Thomas W
2017-01-01
Concentration determination is an important method of protein characterization required in the development of protein therapeutics. There are many known methods for determining the concentration of a protein solution, but the easiest to implement in a manufacturing setting is absorption spectroscopy in the ultraviolet region. For typical proteins composed of the standard amino acids, absorption at wavelengths near 280 nm is due to the three amino acid chromophores tryptophan, tyrosine, and phenylalanine in addition to a contribution from disulfide bonds. According to the Beer-Lambert law, absorbance is proportional to concentration and path length, with the proportionality constant being the extinction coefficient. Typically the extinction coefficient of proteins is experimentally determined by measuring a solution absorbance then experimentally determining the concentration, a measurement with some inherent variability depending on the method used. In this study, extinction coefficients were calculated based on the measured absorbance of model compounds of the four amino acid chromophores. These calculated values for an unfolded protein were then compared with an experimental concentration determination based on enzymatic digestion of proteins. The experimentally determined extinction coefficient for the native proteins was consistently found to be 1.05 times the calculated value for the unfolded proteins for a wide range of proteins with good accuracy and precision under well-controlled experimental conditions. The value of 1.05 times the calculated value was termed the predicted extinction coefficient. Statistical analysis shows that the differences between predicted and experimentally determined coefficients are scattered randomly, indicating no systematic bias between the values among the proteins measured. The predicted extinction coefficient was found to be accurate and not subject to the inherent variability of experimental methods. We propose the use of a predicted extinction coefficient for determining the protein concentration of therapeutic proteins starting from early development through the lifecycle of the product. LAY ABSTRACT: Knowing the concentration of a protein in a pharmaceutical solution is important to the drug's development and posology. There are many ways to determine the concentration, but the easiest one to use in a testing lab employs absorption spectroscopy. Absorbance of ultraviolet light by a protein solution is proportional to its concentration and path length; the proportionality constant is the extinction coefficient. The extinction coefficient of a protein therapeutic is usually determined experimentally during early product development and has some inherent method variability. In this study, extinction coefficients of several proteins were calculated based on the measured absorbance of model compounds. These calculated values for an unfolded protein were then compared with experimental concentration determinations based on enzymatic digestion of the proteins. The experimentally determined extinction coefficient for the native protein was 1.05 times the calculated value for the unfolded protein with good accuracy and precision under controlled experimental conditions, so the value of 1.05 times the calculated coefficient was called the predicted extinction coefficient. Comparison of predicted and measured extinction coefficients indicated that the predicted value was very close to the experimentally determined values for the proteins. 
The predicted extinction coefficient was accurate and removed the variability inherent in experimental methods. © PDA, Inc. 2017.
Received power as a function of target range in optical short-range radar devices.
Riegl, J; Bernhard, M
1974-04-01
The dependence of the received optical power on range in optical short-distance radar range finders is calculated by the methods of geometrical optics. The calculations assume constant intensity across the transmitter-beam cross section and an ideal thin lens for the receiver optics. The results are confirmed by measurements. Even measurements using a non-ideal thick-lens system for the receiver optics are in reasonable agreement with the calculations.
Nakatsuka, Haruo; Chiba, Keiko; Watanabe, Takao; Sawatari, Hideyuki; Seki, Takako
2016-11-01
Iodine intake by adults in farming districts in northeastern Japan was evaluated by two methods: (1) calculation based on government-approved food composition tables and (2) instrumental measurement. The correlation between these two values and a regression model for the calibration of calculated values are presented. Iodine intake was calculated, using the values in the Japan Standard Tables of Food Composition (FCT), through the analysis of duplicate samples of complete 24-h food consumption for 90 adult subjects. In cases where the value for iodine content was not available in the FCT, it was assumed to be zero for that food item (calculated values). Iodine content was also measured by ICP-MS (measured values). Calculated and measured values yielded geometric means (GM) of 336 and 279 μg/day, respectively. There was no statistically significant (p > 0.05) difference between calculated and measured values. The correlation coefficient was 0.646 (p < 0.05). With this high correlation coefficient, a simple regression line can be applied to estimate the measured value from the calculated value. A survey of the literature suggests that the values in this study were similar to values that have been reported to date for Japan, and higher than those for other countries in Asia. Iodine intake of Japanese adults was 336 μg/day (GM, calculated) and 279 μg/day (GM, measured). The two values correlated so well, with a correlation coefficient of 0.646, that a regression model (Y = 130.8 + 1.9479X, where X and Y are measured and calculated values, respectively) could be used to calibrate calculated values.
Vibrational properties of TaW alloy using modified embedded atom method potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chand, Manesh, E-mail: maneshchand@gmail.com; Uniyal, Shweta; Joshi, Subodh
2016-05-06
Force-constants up to second neighbours of pure transition metal Ta and TaW alloy are determined using the modified embedded atom method (MEAM) potential. The obtained force-constants are used to calculate the phonon dispersion of pure Ta and TaW alloy. As a further application of MEAM potential, the force-constants are used to calculate the local vibrational density of states and mean square thermal displacements of pure Ta and W impurity atoms with Green’s function method. The calculated results are found to be in agreement with the experimental measurements.
SU-C-207-02: A Method to Estimate the Average Planar Dose From a C-Arm CBCT Acquisition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Supanich, MP
2015-06-15
Purpose: The planar average dose in a C-arm Cone Beam CT (CBCT) acquisition has been estimated in the past by averaging the four peripheral dose measurements in a CTDI phantom and then using the standard 2/3 peripheral and 1/3 central CTDIw method (hereafter referred to as Dw). The accuracy of this assumption has not been investigated, and the purpose of this work is to test the presumed relationship. Methods: Dose measurements were made in the central plane of two consecutively placed 16 cm CTDI phantoms using a 0.6 cc ionization chamber at each of the 4 peripheral dose bores and in the central dose bore for a C-arm CBCT protocol. The same setup was scanned with a circular cut-out of radiosensitive Gafchromic film positioned between the two phantoms to capture the planar dose distribution. Calibration curves for color pixel value after scanning were generated from film strips irradiated at different known dose levels. The planar average dose for red and green pixel values was calculated by summing the dose values in the irradiated circular film cut-out. Dw was calculated using the ionization chamber measurements and film dose values at the location of each of the dose bores. Results: The planar average doses obtained using both the red and green pixel color calibration curves were within 10% agreement of the planar average dose estimated using the Dw method applied to film dose values at the bore locations. Additionally, an average of the planar average doses calculated using the red and green calibration curves differed from the ionization chamber Dw estimate by only 5%. Conclusion: The method of calculating the planar average dose at the central plane of a C-arm CBCT non-360° rotation by calculating Dw from peripheral and central dose bore measurements is a reasonable approach to estimating the planar average dose. Research Grant, Siemens AG.
Non-contact measurement of pulse wave velocity using RGB cameras
NASA Astrophysics Data System (ADS)
Nakano, Kazuya; Aoki, Yuta; Satoh, Ryota; Hoshi, Akira; Suzuki, Hiroyuki; Nishidate, Izumi
2016-03-01
Non-contact measurement of pulse wave velocity (PWV) using red, green, and blue (RGB) digital color images is proposed. Generally, PWV is used as an index of arteriosclerosis. In our method, changes in blood volume are calculated from changes in the color information and estimated by combining multiple regression analysis (MRA) with a Monte Carlo simulation (MCS) model of light transport in human skin. Pulse waves were measured at two sites on human skin using RGB cameras, and the PWV was calculated from the difference in pulse transit time and the distance between the two measurement points. The measured forehead-finger PWV (ffPWV) was on the order of m/s and became faster as the values of vital signs rose. These results demonstrate the feasibility of the method.
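A minimal sketch of the transit-time step: the delay between the two pulse waveforms is estimated from the peak of their cross-correlation and divided into the path length. The signals, sampling rate, and path length are illustrative:

```python
import numpy as np

def pulse_transit_delay(sig_a, sig_b, fs):
    """Delay (s) of sig_b relative to sig_a from the peak of their
    cross-correlation; both signals sampled at fs (Hz)."""
    a = (sig_a - np.mean(sig_a)) / np.std(sig_a)
    b = (sig_b - np.mean(sig_b)) / np.std(sig_b)
    xc = np.correlate(b, a, mode="full")
    lag = np.argmax(xc) - (len(a) - 1)
    return lag / fs

# Illustrative forehead/finger pulse waveforms, 100 Hz, ~60 ms true delay
fs = 100.0
t = np.arange(0, 5, 1 / fs)
forehead = np.sin(2 * np.pi * 1.2 * t)
finger = np.sin(2 * np.pi * 1.2 * (t - 0.06))
dt = pulse_transit_delay(forehead, finger, fs)
path_m = 1.0   # assumed forehead-to-finger path length
print(f"ffPWV ~ {path_m / dt:.1f} m/s")
```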
Ground difference compensating system
Johnson, Kris W.; Akasam, Sivaprasad
2005-10-25
A method of ground level compensation includes measuring a voltage of at least one signal with respect to a primary ground potential and measuring, with respect to the primary ground potential, a voltage level associated with a secondary ground potential. A difference between the voltage level associated with the secondary ground potential and an expected value is calculated. The measured voltage of the at least one signal is adjusted by an amount corresponding to the calculated difference.
Xiao, Xiao; Hua, Xue-Ming; Wu, Yi-Xiong; Li, Fang
2012-09-01
Pulsed TIG welding is widely used in industry due to its superior properties, and measurement of the arc temperature is important for analysis of the welding process. The relationship between the particle densities of Ar and temperature was calculated based on spectral theory, the relationship between the emission coefficient of the spectral line at 794.8 nm and temperature was calculated, arc images at the 794.8 nm spectral line were captured by a high-speed camera, and both Abel inversion and the Fowler-Milne method were used to calculate the temperature distribution of pulsed TIG welding.
A new optical head tracing reflected light for nanoprofiler
NASA Astrophysics Data System (ADS)
Okuda, K.; Okita, K.; Tokuta, Y.; Kitayama, T.; Nakano, M.; Kudo, R.; Yamamura, K.; Endo, K.
2014-09-01
High-accuracy optical elements are applied in various fields. For example, ultraprecise aspherical mirrors are necessary for developing third-generation synchrotron radiation and XFEL (X-ray Free Electron Laser) sources. In order to make such high-accuracy optical elements, it is necessary to measure aspherical mirrors with high accuracy, but no measurement method has yet satisfied these demands simultaneously. We therefore developed a nanoprofiler that can directly measure arbitrary surface figures with high accuracy. The nanoprofiler obtains the normal vector and the coordinates of a measurement point using a laser and a QPD (Quadrant Photo Diode) as a detector. From the normal vectors and their coordinates, the three-dimensional figure is calculated. In order to measure the figure, the nanoprofiler numerically controls its five motion axes to make the reflected light enter at the QPD's center. The control is based on the sample's design formula. We measured a concave spherical mirror with a radius of curvature of 400 mm by the deflection method, which calculates the figure error from the QPD's output, and compared the results with those obtained using a Fizeau interferometer. The profiles were consistent within the range of system error. The deflection method cannot neglect the error caused by the spatial non-uniformity of the QPD's sensitivity. To improve on this, we devised the zero method, which moves the QPD with a piezoelectric motion stage and calculates the figure error from the displacement.
Bojmehrani, Azadeh; Bergeron-Duchesne, Maude; Bouchard, Carmelle; Simard, Serge; Bouchard, Pierre-Alexandre; Vanderschuren, Abel; L'Her, Erwan; Lellouche, François
2014-07-01
Protective ventilation implementation requires the calculation of predicted body weight (PBW), determined by a formula based on gender and height. Consequently, height inaccuracy may be a limiting factor to correctly set tidal volumes. The objective of this study was to evaluate the accuracy of different methods in measuring heights in mechanically ventilated patients. Before cardiac surgery, actual height was measured with a height gauge while subjects were standing upright (reference method); the height was also estimated by alternative methods based on lower leg and forearm measurements. After cardiac surgery, upon ICU admission, a subject's height was visually estimated by a clinician and then measured with a tape measure while the subject was supine and undergoing mechanical ventilation. One hundred subjects (75 men, 25 women) were prospectively included. Mean PBW was 61.0 ± 9.7 kg, and mean actual weight was 30.3% higher. In comparison with the reference method, estimating the height visually and using the tape measure were less accurate than both lower leg and forearm measurements. Errors above 10% in calculating the PBW were present in 25 and 40 subjects when the tape measure or visual estimation of height was used in the formula, respectively. With lower leg and forearm measurements, 15 subjects had errors above 10% (P < .001). Our results demonstrate that significant variability exists between the different methods used to measure height in bedridden patients on mechanical ventilation. Alternative methods based on lower leg and forearm measurements are potentially interesting solutions to facilitate the accurate application of protective ventilation. Copyright © 2014 by Daedalus Enterprises.
Statistical variability and confidence intervals for planar dose QA pass rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher
Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm² uniform grid, a 2 detectors/cm² uniform grid, and similar random detector grids. For each simulation, %Diff/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization techniques. Results: For the prostate and head/neck cases studied, the pass rates obtained with gamma analysis of high-density dose planes were 2%-5% higher than the respective %Diff/DTA composite analysis on average (ranging as high as 11%), depending on tolerances and normalization. Meanwhile, the pass rates obtained via local normalization were 2%-12% lower than with global maximum normalization on average (ranging as high as 27%), depending on tolerances and calculation method. Repositioning of the simulated low-density sampled grids leads to a distribution of possible pass rates for each measured/calculated dose plane pair. These distributions can be predicted using a binomial distribution in order to establish confidence intervals that depend largely on the sampling density and the observed pass rate (i.e., the degree of difference between measured and calculated dose). These results can be extended to apply to 3D arrays of detectors as well. Conclusions: Dose plane QA analysis can be greatly affected by the choice of calculation metric and user-defined parameters, and so all pass rates should be reported with a complete description of the calculation method. Pass rates for low-density arrays are subject to statistical uncertainty (vs. the high-density pass rate), but these sampling errors can be modeled using statistical confidence intervals derived from the sampled pass rate and detector density. Thus, pass rates for low-density array measurements should be accompanied by a confidence interval indicating the uncertainty of each pass rate.
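A sketch of a binomial (Clopper-Pearson) confidence interval for a sampled pass rate, in the spirit of the confidence intervals proposed above; the exact interval construction used by the authors may differ:

```python
from scipy.stats import beta

def pass_rate_ci(n_pass, n_total, conf=0.95):
    """Clopper-Pearson (exact binomial) confidence interval for a QA pass
    rate measured with n_total sampled detector points."""
    a = 1.0 - conf
    lo = beta.ppf(a / 2, n_pass, n_total - n_pass + 1) if n_pass > 0 else 0.0
    hi = beta.ppf(1 - a / 2, n_pass + 1, n_total - n_pass) if n_pass < n_total else 1.0
    return lo, hi

# 95% CI for a 95% pass rate observed on a low-density array with 400 points
print(pass_rate_ci(380, 400))   # roughly (0.92, 0.97)
```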
NASA Astrophysics Data System (ADS)
McElroy, Kenneth L., Jr.
1992-12-01
A method is presented for the determination of neutral gas densities in the ionosphere from rocket-borne measurements of UV atmospheric emissions. Computer models were used to calculate an initial guess for the neutral atmosphere. Using this neutral atmosphere, intensity profiles for the N2 (0,5) Vegard-Kaplan band, the N2 Lyman-Birge-Hopfield band system, and the OI 2972 Å line were calculated and compared with the March 1990 NPS MUSTANG data. The neutral atmospheric model was modified and the intensity profiles recalculated until a fit with the data was obtained. The neutral atmosphere corresponding to the intensity profile that fit the data was taken to be the atmospheric composition prevailing at the time of the observation. The ion densities were then calculated from the neutral atmosphere using a photochemical model. The electron density profile calculated by this model was compared with the electron density profile measured by the U.S. Air Force Geophysics Laboratory at a nearby site.
CALCULATION OF GAMMA SPECTRA IN A PLASTIC SCINTILLATOR FOR ENERGY CALIBRATION AND DOSE COMPUTATION.
Kim, Chankyu; Yoo, Hyunjun; Kim, Yewon; Moon, Myungkook; Kim, Jong Yul; Kang, Dong Uk; Lee, Daehee; Kim, Myung Soo; Cho, Minsik; Lee, Eunjoong; Cho, Gyuseong
2016-09-01
Plastic scintillation detectors have practical advantages in the field of dosimetry. Energy calibration of measured gamma spectra is important for dose computation, but it is not simple in plastic scintillators because of their distinctive characteristics and finite resolution. In this study, the gamma spectra in a polystyrene scintillator were calculated for energy calibration and dose computation. Based on the relationship between the energy resolution and the estimated energy-broadening effect in the calculated spectra, the gamma spectra were calculated simply, without many iterations. The calculated spectra were in agreement with calculations by an existing method and with measurements. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
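A generic sketch of applying an energy-dependent Gaussian broadening kernel to a calculated spectrum; the FWHM model and its coefficients are assumptions for illustration, not the paper's fitted resolution function:

```python
import numpy as np

def broaden(energies_keV, counts, a=0.0, b=2.0, c=0.0):
    """Apply a Gaussian energy-broadening kernel to a calculated spectrum.
    FWHM(E) = a + b*sqrt(E) + c*E (keV); the coefficients are illustrative
    and would be tuned to the scintillator's measured resolution."""
    e = np.asarray(energies_keV, float)
    out = np.zeros_like(counts, dtype=float)
    for ei, ci in zip(e, counts):
        fwhm = a + b * np.sqrt(ei) + c * ei
        sigma = fwhm / 2.3548
        kernel = np.exp(-0.5 * ((e - ei) / sigma) ** 2)
        out += ci * kernel / kernel.sum()   # redistribute counts, conserve area
    return out

e = np.arange(1.0, 800.0, 1.0)                           # keV bins
calc = np.where(np.abs(e - 477.0) < 1.0, 1000.0, 0.0)    # a single sharp feature
meas_like = broaden(e, calc)
print(meas_like.max(), meas_like.sum())                  # counts are conserved
```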
A method to calculate the volume of palatine tonsils.
Prim, M P; De Diego, J I; García-Bermúdez, C; Pérez-Fernández, E; Hardisson, D
2010-12-01
The purpose of this study was to obtain a mathematical formula to calculate tonsillar volume from dimensions measured on surgical specimens. Thirty consecutive surgical specimens of pediatric tonsils were studied. The maximum lengths ("a"), widths ("b"), and depths ("c") of the dissected specimens were measured in millimeters, and the volume of each tonsil was measured in milliliters. The one-sample Kolmogorov-Smirnov test was used to check the normality of the sample. To assess the reproducibility of the quantitative variables, intraclass correlation coefficients were used. Two formulas with high reproducibility (coefficient R between 0.75 and 1) were obtained: 1) [a*b*c*0.5236] with R = 0.8688; and 2) [a*b*b*0.3428] with R = 0.9073. It is possible to calculate the volume of the palatine tonsils in surgical specimens precisely enough based on their three measures, or on their two main measures (length and width).
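Formula 1 above in executable form; treating 0.5236 as the ellipsoid factor π/6 and converting mm³ to mL are our reading of the units:

```python
def tonsil_volume_ml(a_mm, b_mm, c_mm):
    """Formula 1 from the study: V = a*b*c*0.5236, i.e. the ellipsoid factor
    pi/6 applied to the three axes. With inputs in mm, dividing by 1000
    converts mm^3 to mL (a unit assumption on our part)."""
    return a_mm * b_mm * c_mm * 0.5236 / 1000.0

print(f"{tonsil_volume_ml(30, 20, 15):.1f} mL")   # ~4.7 mL
```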
NASA Astrophysics Data System (ADS)
Petrova, T. M.; Solodov, A. M.; Solodov, A. A.; Deichuli, V. M.; Starikov, V. I.
2018-05-01
The water vapour line broadening and shifting induced by hydrogen pressure were measured for 97 lines in the ν1 + ν2 + ν3 band with a Bruker IFS 125 HR FTIR spectrometer. The measurements were performed at room temperature, at a spectral resolution of 0.01 cm⁻¹, and over a wide range of H2 pressures. Calculations of the broadening (γ) and shift (δ) coefficients were performed in the semi-classical framework using an effective, vibrationally dependent interaction potential. Two potential parameters were optimised to improve the quality of the calculations. Good agreement with the measured broadening coefficients was achieved. The comparison of the calculated broadening coefficients γ with previous measurements is discussed. Analytical expressions that reproduce these coefficients for the rotational, ν2, ν1, and ν3 vibrational bands are presented.
NASA Astrophysics Data System (ADS)
Ichihara, Takashi; George, Richard T.; Silva, Caterina; Lima, Joao A. C.; Lardo, Albert C.
2011-02-01
The purpose of this study was to develop a quantitative method for myocardial blood flow (MBF) measurement that can derive accurate myocardial perfusion measurements from dynamic multidetector computed tomography (MDCT) images by using a compartment model to calculate the first-order transfer constant (K1) with correction for the capillary transit extraction fraction (E). Six canine models of left anterior descending (LAD) artery stenosis were prepared and underwent first-pass contrast-enhanced MDCT perfusion imaging during adenosine infusion (0.14-0.21 mg/kg/min). K1, the first-order transfer constant from left ventricular (LV) blood to myocardium, was measured using the Patlak plot method applied to time-attenuation curve data of the LV blood pool and myocardium. The results were compared against microsphere MBF measurements, and the extraction fraction of the contrast agent was calculated. K1 is related to the regional MBF as K1 = EF, where E = 1 − exp(−PS/F), PS is the permeability-surface area product, and F is myocardial flow. Based on this relationship, a look-up table from K1 to MBF can be generated, and Patlak-plot-derived K1 values can be converted to calculated MBF. The calculated MBF and microsphere MBF showed a strong linear association. The extraction fraction in dogs as a function of flow (F) was E = 1 − exp(−(0.2532F + 0.7871)/F). Regional MBF can be measured accurately using the Patlak plot method based on a compartment model and a look-up table with extraction fraction correction from K1 to MBF.
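A sketch of the two steps described above: estimating K1 as the Patlak-plot slope, and converting K1 to MBF by inverting K1 = E·F with the reported extraction fraction. The data handling details are assumptions:

```python
import numpy as np

def patlak_k1(t, c_lv, c_myo):
    """Slope of the Patlak plot, y = C_myo/C_lv versus x = int(C_lv dt)/C_lv;
    the slope is the transfer constant K1 (units follow the inputs)."""
    t = np.asarray(t, float)
    c_lv = np.asarray(c_lv, float)
    c_myo = np.asarray(c_myo, float)
    x = np.array([np.trapz(c_lv[: i + 1], t[: i + 1])
                  for i in range(len(t))]) / c_lv
    y = c_myo / c_lv
    k1, _ = np.polyfit(x, y, 1)   # slope = K1, intercept = distribution volume
    return k1

def k1_to_mbf(k1, f_grid=np.linspace(0.2, 6.0, 2000)):
    """Invert K1 = F * E(F), with the canine extraction fraction reported
    above, E = 1 - exp(-(0.2532*F + 0.7871)/F), via a look-up table."""
    e = 1.0 - np.exp(-(0.2532 * f_grid + 0.7871) / f_grid)
    return np.interp(k1, f_grid * e, f_grid)   # K1 is monotonic in F

print(k1_to_mbf(0.9))   # MBF (mL/g/min) corresponding to K1 = 0.9
```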
WWER-1000 core and reflector parameters investigation in the LR-0 reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zaritsky, S. M.; Alekseev, N. I.; Bolshagin, S. N.
2006-07-01
Measurements and calculations carried out in the core and reflector of a WWER-1000 mock-up are discussed: - the determination of the pin-to-pin power distribution in the core by means of gamma-scanning of fuel pins and pin-to-pin calculations with the Monte Carlo code MCU-REA and the diffusion codes MOBY-DICK (with WIMS-D4 cell constants preparation) and RADAR; - the fast neutron spectra measurements by the proton recoil method inside the experimental channel in the core and inside the channel in the baffle, and corresponding calculations in the P3S8 approximation of the discrete ordinates method with the code DORT and the BUGLE-96 library; - the neutron spectra evaluations (adjustment) in the same channels in the energy region 0.5 eV-18 MeV based on activation and solid state track detector measurements. (authors)
Cassette, Philippe; Altzitzoglou, Timotheos; Antohe, Andrei; Rossi, Mario; Arinc, Arzu; Capogni, Marco; Galea, Raphael; Gudelis, Arunas; Kossert, Karsten; Lee, K B; Liang, Juncheng; Nedjadi, Youcef; Oropesa Verdecia, Pilar; Shilnikova, Tanya; van Wyngaardt, Winifred; Ziemek, Tomasz; Zimmerman, Brian
2018-04-01
A comparison of calculations of the activity of a ³H₂O (tritiated water) liquid scintillation source using the same experimental data set, collected at the LNE-LNHB with a triple-to-double coincidence ratio (TDCR) counter, was completed. A total of 17 laboratories calculated the activity and standard uncertainty of the LS source using the files with experimental data provided by the LNE-LNHB. The results, as well as relevant information on the computation techniques, are presented and analysed in this paper. All results are compatible, although there is significant dispersion between the reported uncertainties. An output of this comparison is the estimation of the dispersion of TDCR measurement results when measurement conditions are well defined.
NASA Astrophysics Data System (ADS)
Yulkifli; Afandi, Zurian; Yohandri
2018-04-01
Development of a gravitational acceleration measurement using the simple harmonic motion pendulum method, digital technology, and a photogate sensor is described. Digital technology is more practical and optimizes experiment time. The pendulum method calculates the acceleration of gravity using a solid ball connected by a rope to a stative pole. The pendulum is swung at a small angle, producing simple harmonic motion. The measurement system consists of a power supply, a photogate sensor, an Arduino Pro Mini, and a seven-segment display. The Arduino Pro Mini receives digital data from the photogate sensor and processes it into the timing data of the pendulum oscillation. The calculated pendulum oscillation time is shown on the seven-segment display. Based on the measured data, the accuracy and precision of the experimental system are 98.76% and 99.81%, respectively. Based on the experimental data, the system can be operated in physics experiments, especially in the determination of the gravitational acceleration.
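The abstract does not spell out the working equation; presumably the timing data feed the standard small-angle pendulum relation T = 2π√(L/g). A minimal sketch of that final calculation step, with illustrative values:

```python
import math

def gravity_from_pendulum(length_m, period_s):
    """Small-angle pendulum: T = 2*pi*sqrt(L/g)  =>  g = 4*pi^2 * L / T^2."""
    return 4.0 * math.pi**2 * length_m / period_s**2

# Illustrative numbers: a 1.00 m pendulum timed at 2.006 s per oscillation.
print(gravity_from_pendulum(1.00, 2.006))  # ~9.81 m/s^2
```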
Brunori, Paola; Masi, Piergiorgio; Faggiani, Luigi; Villani, Luciano; Tronchin, Michele; Galli, Claudio; Laube, Clarissa; Leoni, Antonella; Demi, Maila; La Gioia, Antonio
2011-04-11
Neonatal jaundice might lead to severe clinical consequences. Measurement of bilirubin in samples is subject to interference from hemolysis. Above a method-dependent cut-off value of measured hemolysis, the bilirubin value is not accepted and a new sample is required for evaluation, although this is not always possible, especially with newborns and cachectic oncological patients. When the use of different methods, less prone to interference, is not feasible, an alternative method for recovering the analytical significance of rejected data might help clinicians to take appropriate decisions. We studied the effects of hemolysis on total bilirubin measurement, comparing the hemolysis-interfered bilirubin measurement with the non-interfered value. Interference curves were extrapolated over a wide range of bilirubin (0-30 mg/dL) and hemolysis (H index 0-1100). Interference "altitude" curves were calculated and plotted. A bimodal acceptance table was calculated. The non-interfered bilirubin of given samples was calculated by linear interpolation between the nearest lower and upper interference curves. Rejection of interference-sensitive data from hemolysed samples for every method should be based not upon the interferent concentration but upon a more complex algorithm based upon the concentration-dependent bimodal interaction between the interfered analyte and the measured interferent. The altitude-curve cartography approach to interfered assays may help laboratories to build up their own method-dependent algorithm and to improve the trueness of their data by choosing a cut-off value different from the one (-10% interference) proposed by manufacturers. When re-sampling or an alternative method is not available, the altitude-curve cartography approach might also represent an alternative method for recovering the analytical significance of rejected data.
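The interpolation step lends itself to a short sketch. Everything numeric below is hypothetical (the paper's actual altitude curves are method-dependent); the sketch only illustrates locating a sample between its nearest lower and upper curves and interpolating linearly:

```python
import numpy as np

# Hypothetical interference "altitude" curves: measured (interfered) bilirubin
# versus H index, one curve per true (non-interfered) bilirubin level in mg/dL.
h_index = np.array([0, 200, 400, 600, 800, 1100])
curves = {
    10.0: np.array([10.0, 9.6, 9.1, 8.5, 7.8, 6.9]),
    15.0: np.array([15.0, 14.5, 13.7, 12.8, 11.8, 10.4]),
}

def recover_bilirubin(measured_bili, sample_h):
    """Evaluate each curve at the sample's H index, then linearly
    interpolate the true level between the bracketing curves."""
    levels = sorted(curves)
    observed = [np.interp(sample_h, h_index, curves[lv]) for lv in levels]
    return float(np.interp(measured_bili, observed, levels))

print(recover_bilirubin(12.0, 600))  # falls between the 10 and 15 mg/dL curves
```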
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ehler, E; Higgins, P; Dusenbery, K
2014-06-15
Purpose: To validate a method to create per-patient phantoms for dosimetric verification measurements. Methods: Using a RANDO phantom as a substitute for an actual patient, a model of the external features of the head and neck region of the phantom was created. A phantom was used instead of a human for two reasons: to allow for dosimetric measurements that would not be possible in vivo and to avoid patient privacy issues. Using acrylonitrile butadiene styrene thermoplastic as the building material, a hollow replica was created using the 3D printer and filled with a custom tissue-equivalent mixture of paraffin wax, magnesium oxide, and calcium carbonate. A traditional parallel-opposed head and neck plan was constructed. Measurements were performed with thermoluminescent dosimeters in both the RANDO phantom and the 3D printed phantom. Calculated and measured doses were compared at 17 points in the phantoms, including high- and low-dose regions and the field edges. On-board cone beam CT was used to localize both phantoms within 1 mm and 1° prior to irradiation. Results: The maximum difference in calculated dose between the phantoms was 1.8% of the planned dose (180 cGy). The mean difference between calculated and measured dose in the anthropomorphic phantom and the 3D printed phantom was 1.9% ± 2.8% and −0.1% ± 4.9%, respectively. The difference between measured and calculated dose in each respective phantom was within 2% for 12 of 17 points. The overlap of the RANDO and 3D printed phantoms was 0.956 (Jaccard index). Conclusion: A custom phantom was created using a 3D printer. Dosimetric calculations and measurements showed good agreement between the dose in the RANDO phantom (patient substitute) and the 3D printed phantom.
Secondary Ion Mass Spectrometry for Mg Tracer Diffusion: Issues and Solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuggle, Jay; Giordani, Andrew; Kulkarni, Nagraj S
2014-01-01
A Secondary Ion Mass Spectrometry (SIMS) method has been developed to measure stable Mg isotope tracer diffusion. This SIMS method was then used to calculate Mg self-diffusivities, and the data were verified against historical data measured using radiotracers. The SIMS method has been validated as a reliable alternative to the radiotracer technique for the measurement of Mg self-diffusion coefficients and can be used as a routine method for determining diffusion coefficients.
Methods to Calculate the Heat Index as an Exposure Metric in Environmental Health Research
Bell, Michelle L.; Peng, Roger D.
2013-01-01
Background: Environmental health research employs a variety of metrics to measure heat exposure, both to directly study the health effects of outdoor temperature and to control for temperature in studies of other environmental exposures, including air pollution. To measure heat exposure, environmental health studies often use heat index, which incorporates both air temperature and moisture. However, the method of calculating heat index varies across environmental studies, which could mean that studies using different algorithms to calculate heat index may not be comparable. Objective and Methods: We investigated 21 separate heat index algorithms found in the literature to determine a) whether different algorithms generate heat index values that are consistent with the theoretical concepts of apparent temperature and b) whether different algorithms generate similar heat index values. Results: Although environmental studies differ in how they calculate heat index values, most studies’ heat index algorithms generate values consistent with apparent temperature. Additionally, most different algorithms generate closely correlated heat index values. However, a few algorithms are potentially problematic, especially in certain weather conditions (e.g., very low relative humidity, cold weather). To aid environmental health researchers, we have created open-source software in R to calculate the heat index using the U.S. National Weather Service’s algorithm. Conclusion: We identified 21 separate heat index algorithms used in environmental research. Our analysis demonstrated that methods to calculate heat index are inconsistent across studies. Careful choice of a heat index algorithm can help ensure reproducible and consistent environmental health research. Citation: Anderson GB, Bell ML, Peng RD. 2013. Methods to calculate the heat index as an exposure metric in environmental health research. Environ Health Perspect 121:1111–1119; http://dx.doi.org/10.1289/ehp.1206273 PMID:23934704
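The paper's software is in R; as a rough illustration, here is a hedged Python sketch of the main U.S. NWS (Rothfusz) regression that such software implements. The operational algorithm adds low-humidity and high-humidity adjustments and falls back to a simpler formula below about 80 °F, all omitted here:

```python
def heat_index_f(temp_f, rel_humidity):
    """Main Rothfusz regression for the U.S. NWS heat index.
    Inputs: air temperature in deg F, relative humidity in %.
    Valid roughly for temp >= 80 F; adjustments omitted in this sketch."""
    t, r = temp_f, rel_humidity
    return (-42.379 + 2.04901523*t + 10.14333127*r - 0.22475541*t*r
            - 6.83783e-3*t*t - 5.481717e-2*r*r + 1.22874e-3*t*t*r
            + 8.5282e-4*t*r*r - 1.99e-6*t*t*r*r)

print(round(heat_index_f(90.0, 70.0), 1))  # ~106 F
```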
Relationship of the actual thick intraocular lens optic to the thin lens equivalent.
Holladay, J T; Maverick, K J
1998-09-01
To theoretically derive and empirically validate the relationship between the actual thick intraocular lens and the thin lens equivalent. Included in the study were 12 consecutive adult patients ranging in age from 54 to 84 years (mean ± SD, 73.5 ± 9.4 years) with best-corrected visual acuity better than 20/40 in each eye. Each patient had bilateral intraocular lens implants of the same style, placed in the same location (bag or sulcus) by the same surgeon. Preoperatively, axial length, keratometry, refraction, and vertex distance were measured. Postoperatively, keratometry, refraction, vertex distance, and the distance from the vertex of the cornea to the anterior vertex of the intraocular lens (AV(PC1)) were measured. The distance (AV(PC1)) was also back-calculated from the vergence formula used for intraocular lens power calculations. The average (±SD) of the absolute difference between the two methods was 0.23 ± 0.18 mm, which would translate to approximately 0.46 diopters. There was no statistical difference between the measured and calculated values; the Pearson product-moment correlation coefficient from linear regression was 0.85 (r² = 0.72, F = 56). The average intereye difference was -0.030 mm (SD, 0.141 mm; SEM, 0.043 mm) using the measurement method and +0.124 mm (SD, 0.412 mm; SEM, 0.124 mm) using the calculation method. The relationship between the actual thick intraocular lens and the thin lens equivalent has been determined theoretically and demonstrated empirically. This validation provides the manufacturer and surgeon additional confidence and utility for lens constants used in intraocular lens power calculations.
Nurok, Michael; Lipsitz, Stuart; Satwicz, Paul; Kelly, Andrea; Frankel, Allan
2010-05-01
To create and test a reproducible method for measuring emotional climate, surgical team skills, and threats to patient outcome by conducting an observational study to assess the impact of a surgical team skills and communication improvement intervention on these measurements. Observational study. Operating rooms in a high-volume thoracic surgery center from September 5, 2007, through June 30, 2008. Thoracic surgery operating room teams. Two 90-minute team skills training sessions focused on findings from a standardized safety culture survey administered to all participants and highlighting positive and problematic aspects of team skills, communication, and leadership. The sessions created an interactive forum to educate team members on the importance of communication and to role-play optimal interactive and communication strategies. Calculated indices of emotional climate, team skills, and threat to patient outcome. The calculated communication and team skills score improved from the preintervention to the postintervention period, but the improvement was extinguished during the 3 months after the intervention (P < .001). The calculated threat-to-outcome score improved following the team training intervention and remained statistically improved 3 months later (P < .001). Using a new method for measuring emotional climate, teamwork, and threats to patient outcome, we were able to determine that a teamwork training intervention can improve a calculated score of team skills and communication and decrease a calculated score of threats to patient outcome. However, the effect is durable only for threats to patient outcome.
Bates, A.L.; Hatcher, P.G.
1992-01-01
Isolated lignin with a low carbohydrate content was spiked with increasing amounts of alpha-cellulose, and then analysed by solid-state 13C nuclear magnetic resonance (NMR) using cross-polarization with magic angle spinning (CPMAS) and dipolar dephasing methods in order to assess the quantitative reliability of CPMAS measurement of carbohydrate content and to determine how increasingly intense resonances for carbohydrate carbons affect calculations of the degree of lignin's aromatic ring substitution and methoxyl carbon content. Comparisons were made of the carbohydrate content calculated by NMR with carbohydrate concentrations obtained by phenol-sulfuric acid assay and by the calculation from the known amounts of cellulose added. The NMR methods used in this study yield overestimates for carbohydrate carbons due to resonance area overlap from the aliphatic side chain carbons of lignin. When corrections are made for these overlapping resonance areas, the NMR results agree very well with results obtained by other methods. Neither the calculated methoxyl carbon content nor the degree of aromatic ring substitution in lignin, both calculated from dipolar dephasing spectra, change with cellulose content. Likewise, lignin methoxyl content does not correlate with cellulose abundance when measured by integration of CPMAS spectra. © 1992.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amharrak, H.; Di Salvo, J.; Lyoussi, A.
2011-07-01
The objective of this study is to develop nuclear heating measurement methods in zero-power experimental reactors. This paper presents the analysis of Thermo-Luminescent Detector (TLD) and Optically Stimulated Luminescent Detector (OSLD) experiments in the UO2 core of the MINERVE research reactor at CEA Cadarache. The experimental sources of uncertainty on the gamma dose have been reduced by improving the conditions, as well as the repeatability, of the calibration step for each individual TLD. The interpretation of these measurements needs to take into account the calculation of cavity correction factors, related to the calibration and irradiation configurations, as well as neutron correction calculations. These calculations are based on Monte Carlo simulations of coupled neutron-gamma and gamma-electron transport. The TLD and OSLD are positioned inside aluminum pillboxes. The comparison between calculated and measured integral gamma-ray absorbed doses using TLD shows that calculation slightly overestimates measurement, with a C/E value equal to 1.05 ± 5.3% (k = 2). Using OSLD, the calculation slightly underestimates the measurement, with a C/E value equal to 0.96 ± 7.0% (k = 2). (authors)
NASA Astrophysics Data System (ADS)
Prorokova, M. V.; Bukhmirov, V. V.
2016-02-01
The article describes a method for evaluating the microclimate comfort of residential, public, and administrative buildings. The method is based on calculating a coefficient of thermal comfort for a person in the room. Corrections are then introduced for the asymmetry of thermal radiation, radiative cooling, and air quality. The method serves as the basis for a computer program.
Hortness, J.E.
2004-01-01
The U.S. Geological Survey (USGS) measures discharge in streams using several methods. However, measurement of peak discharges is often impossible or impractical due to difficult access, the inherent danger of making measurements during flood events, and the timing often associated with flood events. Thus, many peak discharge values are calculated after the fact by use of indirect methods. The most common indirect method for estimating peak discharges in streams is the slope-area method. This, like other indirect methods, requires measuring the flood profile through detailed surveys. Processing the survey data for efficient entry into computer streamflow models can be time-consuming; SAM 2.1 is a program designed to expedite that process. The SAM 2.1 computer program is designed to be run in the field on a portable computer. The program processes digital surveying data obtained from an electronic surveying instrument during slope-area measurements. After all measurements have been completed, the program generates files to be input into the SAC (Slope-Area Computation program; Fulford, 1994) or HEC-RAS (Hydrologic Engineering Center-River Analysis System; Brunner, 2001) computer streamflow models so that an estimate of the peak discharge can be calculated.
Method of Detecting Coliform Bacteria from Reflected Light
NASA Technical Reports Server (NTRS)
Vincent, Robert K. (Inventor)
2014-01-01
The present invention relates to a method of detecting coliform bacteria in water from reflected light, and also includes devices for the measurement, calculation and transmission of data relating to that method.
NASA Astrophysics Data System (ADS)
Beecken, B. P.; Fossum, E. R.
1996-07-01
Standard statistical theory is used to calculate how the accuracy of a conversion-gain measurement depends on the number of samples. During the development of a theoretical basis for this calculation, a model is developed that predicts how the noise levels from different elements of an ideal detector array are distributed. The model can also be used to determine what dependence the accuracy of measured noise has on the size of the sample. These features have been confirmed by experiment, thus enhancing the credibility of the method for calculating the uncertainty of a measured conversion gain. Keywords: detector-array uniformity, charge-coupled device, active pixel sensor.
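The abstract does not reproduce the estimator itself. For orientation, a minimal photon-transfer sketch of a conversion-gain measurement is given below; it assumes shot-noise-limited flat-field frames and ignores offset and read noise, so it is a stand-in rather than the paper's exact procedure:

```python
import numpy as np

def conversion_gain(frames):
    """Photon-transfer estimate: for shot-noise-limited pixels,
    var(DN) = mean(DN)/g, so g (e-/DN) ~ mean / variance."""
    frames = np.asarray(frames, dtype=float)
    pixel_mean = frames.mean(axis=0)
    pixel_var = frames.var(axis=0, ddof=1)
    return pixel_mean.mean() / pixel_var.mean()

# Synthetic check: Poisson photoelectrons divided by a known gain of 2 e-/DN.
rng = np.random.default_rng(0)
electrons = rng.poisson(10000, size=(64, 128, 128))
stack = electrons / 2.0              # digital numbers
print(conversion_gain(stack))        # ~2.0
```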
Validation of Calculations in a Digital Thermometer Firmware
NASA Astrophysics Data System (ADS)
Batagelj, V.; Miklavec, A.; Bojkovski, J.
2014-04-01
State-of-the-art digital thermometers are arguably remarkable measurement instruments, measuring outputs from resistance thermometers and/or thermocouples. Not only can they readily achieve measuring accuracies in the parts-per-million range, but they also incorporate sophisticated algorithms for transforming the measured resistance or voltage to temperature. These algorithms often include high-order polynomials, exponentials, and logarithms, and must be performed using both standard coefficients and particular calibration coefficients. The numerical accuracy of these calculations and the associated uncertainty component must be much better than the accuracy of the raw measurement in order to be negligible in the total measurement uncertainty. In order for the end-user to gain confidence in these calculations, as well as to conform to the formal requirements of ISO/IEC 17025 and other standards, a way of validating these numerical procedures performed in the firmware of the instrument is required. A software architecture which allows a simple validation of internal measuring instrument calculations is suggested. The digital thermometer should be able to expose all its internal calculation functions to the communication interface, so the end-user can compare the results of the internal measuring instrument calculation with reference results. The method can be regarded as a variation of black-box software validation. Validation results on a thermometer prototype with implemented validation ability show that the calculation error of basic arithmetic operations is within the expected rounding error. For conversion functions, the calculation error is at least ten times smaller than the thermometer's effective resolution for the particular probe type.
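The black-box comparison can be pictured with a short sketch. The example below assumes a Pt100 probe above 0 °C and uses the standard IEC 60751 Callendar-Van Dusen coefficients as the reference; `instrument_r_to_t` stands in for a call through the instrument's communication interface and is hypothetical:

```python
import math

# Reference Callendar-Van Dusen conversion for a Pt100 above 0 degC
# (IEC 60751 coefficients).
R0, A, B = 100.0, 3.9083e-3, -5.775e-7

def reference_r_to_t(resistance_ohm):
    """Invert R(T) = R0*(1 + A*T + B*T^2) for T >= 0 degC."""
    return (-A + math.sqrt(A*A - 4*B*(1 - resistance_ohm/R0))) / (2*B)

def validate(instrument_r_to_t, test_resistances, tolerance_mk=1.0):
    """Black-box check: feed resistances to the conversion function the
    instrument exposes and compare against the reference implementation."""
    for r in test_resistances:
        err_mk = abs(instrument_r_to_t(r) - reference_r_to_t(r)) * 1000.0
        print(f"R={r:8.3f} ohm  error={err_mk:6.3f} mK  "
              f"{'PASS' if err_mk < tolerance_mk else 'FAIL'}")

validate(reference_r_to_t, [100.0, 119.40, 138.51])  # trivially passes here
```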
SU-G-BRB-14: Uncertainty of Radiochromic Film Based Relative Dose Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devic, S; Tomic, N; DeBlois, F
2016-06-15
Purpose: Due to its inherently non-linear dose response, measurement of a relative dose distribution with radiochromic film requires measurement of absolute dose using a calibration curve, following a previously established reference dosimetry protocol. On the other hand, a functional form that converts the inherently non-linear dose response curve of the radiochromic film dosimetry system into a linear one has been proposed recently [Devic et al, Med. Phys. 39 4850-4857 (2012)]. However, the question remains what the uncertainty of such a measured relative dose would be. Methods: If the relative dose distribution is determined by going through the reference dosimetry system (conversion of the response into absolute dose using the calibration curve), the total uncertainty of the relative dose is calculated by summing in quadrature the total uncertainties of the doses measured at a given point and at the reference point. On the other hand, if the relative dose is determined using the linearization method, the new response variable is calculated as ζ = a·(netOD)^n/ln(netOD). In this case, the total uncertainty in relative dose is calculated by summing in quadrature the uncertainties of the new response function (σζ) at a given point and at the reference point. Results: Except at very low doses, where the measurement uncertainty dominates, the total relative dose uncertainty is less than 1% for the linear response method, as compared to an almost 2% uncertainty level for the reference dosimetry method. The result is not surprising, having in mind that the total uncertainty of the reference dose method is dominated by the fitting uncertainty, which is mitigated in the case of the linearization method. Conclusion: Linearization of the radiochromic film dose response provides a convenient and more precise method for relative dose measurements, as it does not require reference dosimetry and the creation of a calibration curve. However, the linearity of the newly introduced function must be verified. Dave Lewis is inventor and runs a consulting company for radiochromic films.
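The quadrature bookkeeping is one line of code. A minimal sketch, assuming uncorrelated relative uncertainties at the measurement point and at the reference point (the 1.4% inputs are illustrative):

```python
import math

def relative_dose_uncertainty(sigma_rel_point, sigma_rel_ref):
    """For a ratio D/D_ref of two independent measurements, relative
    uncertainties add in quadrature."""
    return math.hypot(sigma_rel_point, sigma_rel_ref)

# e.g. ~1.4% total relative uncertainty at each point gives ~2% on the ratio,
# consistent with the ~2% level quoted for the reference dosimetry route.
print(relative_dose_uncertainty(0.014, 0.014))
```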
Modified Laser Flash Method for Thermal Properties Measurements and the Influence of Heat Convection
NASA Technical Reports Server (NTRS)
Lin, Bochuan; Zhu, Shen; Ban, Heng; Li, Chao; Scripa, Rosalia N.; Su, Ching-Hua; Lehoczky, Sandor L.
2003-01-01
The study examined the effect of natural convection in applying the modified laser flash method to measure thermal properties of semiconductor melts. The common laser flash method uses a laser pulse to heat one side of a thin circular sample and measures the temperature response of the other side. Thermal diffusivity can then be calculated based on a heat conduction analysis. For a semiconductor melt, the sample is contained in a specially designed quartz cell with optical windows on both sides. When the laser heats the vertical melt surface, the resulting natural convection can introduce errors in a calculation based on the heat conduction model alone. The effect of natural convection was studied by CFD simulations, with experimental verification by temperature measurement. The CFD results indicated that natural convection would decrease the time needed for the rear side to reach its peak temperature, and also slightly decrease the peak temperature in our experimental configuration. Using the experimental data, the calculation using only the heat conduction model resulted in a thermal diffusivity value about 7.7% lower than that from the model with natural convection. Specific heat capacity was about the same regardless of the heat transfer model, with a difference within 1.6%.
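For reference, the conduction-only analysis mentioned above is classically summarized by Parker's half-rise-time relation; the sketch below assumes the ideal adiabatic, instantaneous-pulse case, which is a simplification of whatever full model the study used:

```python
def thermal_diffusivity_parker(thickness_m, t_half_s):
    """Classic Parker laser-flash relation (adiabatic sample, instantaneous
    pulse): alpha = 0.1388 * L^2 / t_half, where t_half is the time for the
    rear face to reach half of its peak temperature rise."""
    return 0.1388 * thickness_m**2 / t_half_s

# Illustrative numbers: 2 mm thick sample, rear-face half-rise time of 0.05 s.
print(thermal_diffusivity_parker(2e-3, 0.05))  # ~1.1e-5 m^2/s
```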
Carbon storage and sequestration by trees in VIT University campus
NASA Astrophysics Data System (ADS)
Saral, A. Mary; SteffySelcia, S.; Devi, Keerthana
2017-11-01
The present study addresses carbon storage and sequestration by trees grown on the VIT University campus, Vellore. Approximately twenty trees were selected from the Woodstock area. The above-ground and below-ground biomass were calculated. The above-ground biomass estimate includes non-destructive and destructive sampling. The non-destructive method includes measurement of the height and diameter of the tree. The height of the tree was measured using a Total Station instrument, and the diameter was measured using a measuring tape. In the destructive method, the weights of samples (leaves) and sub-samples (fruits, flowers) of the tree were considered. To calculate the below-ground biomass, soil samples were taken and analyzed. The results obtained were used to predict the carbon storage. It was found that, of the twenty tree samples, Millingtonia hortensis, commonly known as the Cork tree, possessed the maximum carbon storage (14.342 kg/tree) and carbon sequestration (52.583 kg/tree).
NASA Astrophysics Data System (ADS)
Czerny, J.; Schulz, K. G.; Ludwig, A.; Riebesell, U.
2013-03-01
Mesocosms as large experimental units provide the opportunity to perform elemental mass balance calculations, e.g. to derive net biological turnover rates. However, the system is in most cases not closed at the water surface, and gases exchange with the atmosphere. Previous attempts to budget carbon pools in mesocosms relied on educated guesses concerning the exchange of CO2 with the atmosphere. Here, we present a simple method for precise determination of air-sea gas exchange in mesocosms using N2O as a deliberate tracer. Besides the application to carbon budgeting, the transfer velocities can be used to calculate exchange rates of any gas of known concentration, e.g. to calculate aquatic production rates of climate-relevant trace gases. Using an Arctic KOSMOS (Kiel Off Shore Mesocosms for future Ocean Simulation) experiment as an exemplary dataset, it is shown that the presented method improves the accuracy of carbon budget estimates substantially. The methodology of manipulation, measurement, data processing, and conversion to CO2 fluxes is explained. A theoretical discussion of the prerequisites for precise gas exchange measurements provides a guideline for the applicability of the method under various experimental conditions.
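As an illustration of the tracer idea, a gas transfer velocity can be extracted from the decline of N2O excess in a well-mixed enclosure. The sketch below assumes a single mixed layer, negligible atmospheric back-flux, and made-up numbers; converting the resulting k to a CO2 flux additionally requires Schmidt-number scaling and carbonate chemistry, which are beyond this snippet:

```python
import numpy as np

def transfer_velocity(depth_m, times_d, n2o_excess):
    """For a well-mixed mesocosm, excess tracer decays as
    C(t) = C0 * exp(-(k/h) * t), so a log-linear fit gives k."""
    slope = np.polyfit(times_d, np.log(n2o_excess), 1)[0]
    return -slope * depth_m                    # k in m/day

# Illustrative: 15 m effective depth, excess roughly halving over 20 days.
t = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
c = 10.0 * np.exp(-0.035 * t)                  # synthetic N2O excess
print(transfer_velocity(15.0, t, c))           # ~0.53 m/day
```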
Ida, Midori; Hirata, Masakazu; Hosoda, Kiminori; Nakao, Kazuwa
2013-02-01
Two novel bioelectrical impedance analysis (BIA) methods have been developed recently for the evaluation of intra-abdominal fat accumulation. Both methods use electrodes that are placed on the abdominal wall and allow easy evaluation of the intra-abdominal fat area (IAFA) without radiation exposure. Of these, the "abdominal BIA" method measures the impedance distribution along the abdominal anterior-posterior axis, and IAFA by the BIA method (BIA-IAFA) is calculated from the waist circumference and the voltage occurring at the flank. The dual BIA method measures the impedance of the trunk and the body surface at the abdominal level and calculates BIA-IAFA from the transverse and antero-posterior diameters of the abdomen and the impedance of the trunk and abdominal surface. BIA-IAFA by these two BIA methods correlated well with IAFA measured by abdominal CT (CT-IAFA), with correlation coefficients of 0.88 (n = 91, p < 0.0001) for the former and 0.861 (n = 469, p < 0.01) for the latter. These new BIA methods are useful for evaluating abdominal adiposity in clinical studies and in the routine clinical practice of metabolic syndrome and obesity.
Verification of Internal Dose Calculations.
NASA Astrophysics Data System (ADS)
Aissi, Abdelmadjid
The MIRD internal dose calculations have been in use for more than 15 years, but their accuracy has always been questionable. There have been attempts to verify these calculations; however, these attempts had various shortcomings which kept the question of verification of the MIRD data still unanswered. The purpose of this research was to develop techniques and methods to verify the MIRD calculations in a more systematic and scientific manner. The research consisted of improving a volumetric dosimeter, developing molding techniques, and adapting the Monte Carlo computer code ALGAM to the experimental conditions and vice versa. The organic dosimetric system contained TLD-100 powder and could be shaped to represent human organs. The dosimeter possessed excellent characteristics for the measurement of internal absorbed doses, even in the case of the lungs. The molding techniques are inexpensive and were used in the fabrication of dosimetric and radioactive source organs. The adaptation of the computer program provided useful theoretical data with which the experimental measurements were compared. The experimental data and the theoretical calculations were compared for 6 source organ-7 target organ configurations. The results of the comparison indicated the existence of an agreement between measured and calculated absorbed doses, when taking into consideration the average uncertainty (16%) of the measurements, and the average coefficient of variation (10%) of the Monte Carlo calculations. However, analysis of the data gave also an indication that the Monte Carlo method might overestimate the internal absorbed doses. Even if the overestimate exists, at least it could be said that the use of the MIRD method in internal dosimetry was shown to lead to no unnecessary exposure to radiation that could be caused by underestimating the absorbed dose. The experimental and the theoretical data were also used to test the validity of the Reciprocity Theorem for heterogeneous phantoms, such as the MIRD phantom and its physical representation, Mr. ADAM. The results indicated that the Reciprocity Theorem is valid within an average range of uncertainty of 8%.
Schauberger, Günther; Piringer, Martin; Baumann-Stanzer, Kathrin; Knauder, Werner; Petz, Erwin
2013-12-15
The impact of ambient concentrations in the vicinity of a plant can only be assessed if the emission rate is known. In this study, based on measurements of ambient H2S concentrations and meteorological parameters, the a priori unknown emission rates of a tannery wastewater treatment plant are calculated by an inverse dispersion technique. The calculations use the Gaussian Austrian regulatory dispersion model. Following this method, emission data can be obtained, though only when the measurement station lies downwind of the plant. Using inverse transform sampling, a Monte Carlo technique, the dataset can also be completed for those wind directions for which no ambient concentration measurements are available. For the model validation, the measured ambient concentrations are compared with the calculated ambient concentrations obtained from the synthetic emission data of the Monte Carlo model. The cumulative frequency distribution of this new dataset agrees well with the empirical data. The inverse transform sampling method is thus a useful supplement for calculating emission rates using the inverse dispersion technique.
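Inverse transform sampling itself is compact: draw uniform quantiles and map them through the inverse of the empirical CDF. A minimal sketch with made-up emission rates (the paper's distributions are wind-direction specific):

```python
import numpy as np

def inverse_transform_sample(values, rng, n_draws):
    """Draw synthetic values from the empirical CDF of 'values'
    by inverting it at uniformly distributed quantiles."""
    sorted_vals = np.sort(np.asarray(values, dtype=float))
    cdf = np.arange(1, sorted_vals.size + 1) / sorted_vals.size
    u = rng.uniform(0.0, 1.0, n_draws)        # uniform quantiles
    return np.interp(u, cdf, sorted_vals)     # inverse empirical CDF

rng = np.random.default_rng(42)
observed_rates = [0.8, 1.1, 1.3, 1.7, 2.2, 2.9]   # hypothetical g/s values
print(inverse_transform_sample(observed_rates, rng, 5))
```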
Dyer, Karrie; Lanning, Craig; Das, Bibhuti; Lee, Po-Feng; Ivy, D. Dunbar; Valdes-Cruz, Lilliam; Shandas, Robin
2007-01-01
Background We have shown previously that input impedance of the pulmonary vasculature provides a comprehensive characterization of right ventricular afterload by including compliance. However, impedance-based compliance assessment requires invasive measurements. Here, we develop and validate a noninvasive method to measure pulmonary artery (PA) compliance using ultrasound color M-mode (CMM) Doppler tissue imaging (DTI). Methods Dynamic compliance (Cdyn) of the PA was obtained from CMM DTI and continuous wave Doppler measurement of the tricuspid regurgitant velocity. Cdyn was calculated as [(Ds − Dd)/(Dd × Ps)] × 10⁴, where Ds = systolic diameter, Dd = diastolic diameter, and Ps = systolic pressure. The method was validated both in vitro and in 13 patients in the catheterization laboratory, and then tested on 27 pediatric patients with pulmonary hypertension, with comparison with 10 age-matched control subjects. Cdyn was also measured in an additional 13 patients undergoing reactivity studies. Results Instantaneous diameter measured using CMM DTI agreed well with intravascular ultrasound measurements in the in vitro models. Clinically, Cdyn calculated by CMM DTI agreed with Cdyn calculated using invasive techniques (23.4 ± 16.8 vs 29.1 ± 20.6%/100 mm Hg; P = not significant). Patients with pulmonary hypertension had significantly lower peak wall velocity values and lower Cdyn values than control subjects (P < .01). Cdyn values followed an exponentially decaying relationship with PA pressure, indicating the nonlinear stress-strain behavior of these arteries. Reactivity in Cdyn agreed with reactivity measured using impedance techniques. Conclusion The Cdyn method provides a noninvasive means of assessing PA compliance and should be useful as an additional measure of vascular reactivity subsequent to pulmonary vascular resistance in patients with pulmonary hypertension. PMID:16581479
Compensation for positioning error of industrial robot for flexible vision measuring system
NASA Astrophysics Data System (ADS)
Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui
2013-01-01
Positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods for positioning error based on a kinematic model of the robot have a significant limitation: they are not effective in the whole measuring space. A new compensation method for the positioning error of the robot, based on vision measuring techniques, is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation relation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS of spatial positioning is 3.422 mm with the single camera and 0.031 mm with dual cameras. The conclusion is that the algorithm of the single-camera method needs to be improved for higher accuracy, while the accuracy of the dual-camera method is suitable for application.
Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin
2016-09-03
While color-coding methods have improved the measuring efficiency of structured light three-dimensional (3D) measurement systems, they decrease the measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate for the error caused by the LCA. First, based on the projective transformation, a 3D error map of LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green, and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the errors, which are the equivalent errors caused by the LCA of the camera and projector. Then, in measurements, error values of LCA are calculated and compensated to correct the projector image coordinates through the 3D error map and a tri-linear interpolation method. Finally, 3D coordinates with higher accuracy are re-calculated according to the compensated image coordinates. The effectiveness of the proposed method is verified in experiments.
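The correction step (look up the LCA error at a point's rough 3D position and subtract it from the image coordinate) maps naturally onto a regular-grid tri-linear interpolator. The grid, the random stand-in error values, and the helper names below are all hypothetical:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical 3D LCA error map on a regular grid spanning the measurement
# volume: correction (in pixels) to one projector image coordinate.
xs = ys = zs = np.linspace(-200.0, 200.0, 9)       # mm, sample points
rng = np.random.default_rng(1)
du_map = rng.normal(0.0, 0.3, (9, 9, 9))           # stand-in error values

interp_du = RegularGridInterpolator((xs, ys, zs), du_map)  # tri-linear

def compensate_u(u_measured, point_xyz):
    """Correct a projector image coordinate using the tri-linearly
    interpolated LCA error at the point's approximate 3D position."""
    err = interp_du(np.asarray(point_xyz).reshape(1, 3))[0]
    return u_measured - err

print(compensate_u(512.0, (10.0, -35.0, 120.0)))
```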
NASA Astrophysics Data System (ADS)
Brunke, Heinz-Peter; Matzka, Jürgen
2018-01-01
At geomagnetic observatories, absolute measurements are needed to determine the calibration parameters of the continuously recording vector magnetometer (variometer). Absolute measurements are indispensable for determining the vector of the geomagnetic field over long periods of time. A standard DI (declination, inclination) measuring scheme for absolute measurements is established routine in magnetic observatories. The traditional measuring scheme uses a fixed number of eight orientations (Jankowski et al., 1996).
We present a numerical method allowing for the evaluation of an arbitrary number of telescope orientations (a minimum of five, as there are five independent parameters). Our method provides D, I, and Z base values and calculated error bars for them.
A general approach has significant advantages. Additional measurements may be seamlessly incorporated for higher accuracy. Individual erroneous readings are identified and can be discarded without invalidating the entire data set. A priori information can be incorporated. We expect the general method to also ease requirements for automated DI-flux measurements. The method can reveal certain properties of the DI theodolite which are not captured by the conventional method.
Based on the alternative evaluation method, a new, faster, and less error-prone measuring scheme is presented. It avoids the need to calculate the magnetic meridian prior to the inclination measurements.
Measurements in the vicinity of the magnetic equator are possible with theodolites and without a zenith ocular.
The implementation of the method in MATLAB is available as source code at the GFZ Data Center (Brunke, 2017).
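The reference implementation is in MATLAB; purely to illustrate the statistical machinery (an overdetermined fit with parameter error bars and residual-based rejection of bad readings), here is a generic linearized least-squares sketch in Python. The toy design matrix stands in for the true, nonlinear DI-flux observation model:

```python
import numpy as np

def solve_with_errorbars(design, readings, sigma):
    """Weighted least squares for an overdetermined system: more
    orientations (rows) than the five unknowns. The parameter covariance
    gives error bars; large residuals flag individual bad readings."""
    w, y = design / sigma, readings / sigma
    params, *_ = np.linalg.lstsq(w, y, rcond=None)
    cov = np.linalg.inv(w.T @ w)
    residuals = readings - design @ params
    return params, np.sqrt(np.diag(cov)), residuals

# Toy system: 8 'orientations', 5 parameters, 1% noise.
rng = np.random.default_rng(3)
A = rng.normal(size=(8, 5))
truth = np.array([1.0, -0.5, 0.2, 0.0, 0.3])
y = A @ truth + rng.normal(0.0, 0.01, 8)
params, errbars, res = solve_with_errorbars(A, y, 0.01)
print(params, errbars)
```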
NASA Astrophysics Data System (ADS)
Kyllmar, K.; Mårtensson, K.; Johnsson, H.
2005-03-01
A method to calculate N leaching from arable fields using model-calculated N leaching coefficients (NLCs) was developed. Using the process-based modelling system SOILNDB, leaching of N was simulated for four leaching regions in southern Sweden with 20-year climate series and a large number of randomised crop sequences based on regional agricultural statistics. To obtain N leaching coefficients, mean values of annual N leaching were calculated for each combination of main crop, following crop and fertilisation regime for each leaching region and soil type. The field-NLC method developed could be useful for following up water quality goals in, e.g., small monitoring catchments, since it allows normal leaching from actual crop rotations and fertilisation to be determined regardless of the weather. The method was tested using field data from nine small intensively monitored agricultural catchments. The agreement between calculated field N leaching and measured N transport in catchment stream outlets, 19-47 and 8-38 kg ha⁻¹ yr⁻¹, respectively, was satisfactory in most catchments when contributions from land uses other than arable land and uncertainties in groundwater flows were considered. The possibility of calculating effects of crop combinations (crop and following crop) is of considerable value since changes in crop rotation constitute a large potential for reducing N leaching. When the effect of a number of potential measures to reduce N leaching (i.e. applying manure in spring instead of autumn; postponing ploughing-in of ley and green fallow in autumn; undersowing a catch crop in cereals and oilseeds; and increasing the area of catch crops by substituting winter cereals and winter oilseeds with corresponding spring crops) was calculated for the arable fields in the catchments using field-NLCs, N leaching was reduced by between 34 and 54% for the separate catchments when the best possible effect on the entire potential area was assumed.
A small-plane heat source method for measuring the thermal conductivities of anisotropic materials
NASA Astrophysics Data System (ADS)
Cheng, Liang; Yue, Kai; Wang, Jun; Zhang, Xinxin
2017-07-01
A new small-plane heat source method was proposed in this study to simultaneously measure the in-plane and cross-plane thermal conductivities of anisotropic insulating materials. In this method, the heat source element is smaller than the sample, and the boundary condition is thermal insulation, since there is no heat flux at the edge of the sample during the experiment. A three-dimensional model in a rectangular coordinate system was established to exactly describe the heat transfer process of the measurement system. Using the Laplace transform, separation of variables, and inverse Laplace transform methods, the analytical solution for the temperature rise of the sample was derived. The temperature rises calculated by the analytical solution agree well with the results of numerical calculation. The sensitivity analysis shows that the sensitivity coefficients of the estimated thermal conductivities are high and uncorrelated with each other. At room temperature and in a high-temperature environment, experimental measurements of anisotropic silica aerogel were carried out using the traditional one-dimensional plane heat source method and the proposed method, respectively. The results demonstrate that the measurement method developed in this study is effective and feasible for simultaneously obtaining the in-plane and cross-plane thermal conductivities of anisotropic materials.
Steponas Kolupaila's contribution to hydrological science development
NASA Astrophysics Data System (ADS)
Valiuškevičius, Gintaras
2017-08-01
Steponas Kolupaila (1892-1964) was an important figure in 20th century hydrology and one of the pioneers of scientific water gauging in Europe. His research on the reliability of hydrological data and measurement methods was particularly important and contributed to the development of empirical hydrological calculation methods. Kolupaila was one of the first who standardised water-gauging methods internationally. He created several original hydrological and hydraulic calculation methods (his discharge assessment method for winter period was particularly significant). His innate abilities and frequent travel made Kolupaila a universal specialist in various fields and an active public figure. He revealed his multilayered scientific and cultural experiences in his most famous book, Bibliography of Hydrometry. This book introduced the unique European hydrological-measurement and computation methods to the community of world hydrologists at that time and allowed the development and adaptation of these methods across the world.
Research on the Factors Influencing the Measurement Errors of the Discrete Rogowski Coil †
Xu, Mengyuan; Yan, Jing; Geng, Yingsan; Zhang, Kun; Sun, Chao
2018-01-01
An innovative array of magnetic coils (the discrete Rogowski coil, RC) with the advantages of flexible structure, miniaturization, and mass producibility is investigated. First, the mutual inductances between the discrete RC and circular and rectangular conductors are calculated using the magnetic vector potential (MVP) method. The results are found to be consistent with those calculated using the finite element method, but the MVP method is simpler and more practical. Then, the influence of conductor section parameters, inclination, and eccentricity on the accuracy of the discrete RC is calculated to provide a reference. Studying the influence of an external current on the discrete RC's interference error reveals optimal values for the length, winding density, and position arrangement of the solenoids. It is also found that eccentricity and interference errors decrease with an increasing number of solenoids. Finally, a discrete RC prototype is devised and manufactured. The experimental results show consistent output characteristics, with the calculated sensitivity and mutual inductance of the discrete RC very close to the measured values. The influence of an external conductor on the measurement of the discrete RC is analyzed experimentally, and the results show that interference from an external current decreases with increasing distance between the external and measured conductors. PMID:29534006
Thomas, Carole L.; Stewart, Amy E.; Constantz, Jim E.
2000-01-01
Two methods, one a surface-water method and the second a ground-water method, were used to determine infiltration and percolation rates along a 2.5-kilometer reach of the Santa Fe River near La Bajada, New Mexico. The surface-water method uses streamflow measurements and their differences along a stream reach, streamflow-loss rates, stream surface area, and evaporation rates to determine infiltration rates. The ground-water method uses heat as a tracer to monitor percolation through shallow streambed sediments. Data collection began in October 1996 and continued through December 1997. During that period the stream reach was instrumented with three streamflow gages, and temperature profiles were monitored from the stream-sediment interface to about 3 meters below the streambed at four sites along the reach. Infiltration is the downward flow of water through the stream-sediment interface. Infiltration rates ranged from 92 to 267 millimeters per day for an intense measurement period during June 26-28, 1997, and from 69 to 256 millimeters per day during September 27-October 6, 1997. Investigators calculated infiltration rates from streamflow loss, stream surface-area measurements, and evaporation-rate estimates. Infiltration rates may be affected by unmeasured irrigation-return flow in the study reach. Although the amount of irrigation-return flow was none to very small, it may result in underestimation of infiltration rates. The infiltration portion of streamflow loss was much greater than the evaporation portion. Infiltration accounted for about 92 to 98 percent of streamflow loss. Evaporation-rate estimates ranged from 3.4 to 7.6 millimeters per day based on pan-evaporation data collected at Cochiti Dam, New Mexico, and accounted for about 2 to 8 percent of streamflow loss. Percolation is the movement of water through saturated or unsaturated sediments below the stream-sediment interface. Percolation rates ranged from 40 to 109 millimeters per day during June 26-28, 1997. Percolation rates were not calculated for the September 27-October 6, 1997, period because a late summer flood removed the temperature sensors from the streambed. Investigators used a heat-and-water flow model, VS2DH (variably saturated, two-dimensional heat), to calculate near-surface streambed infiltration and percolation rates from temperatures measured in the stream and streambed. Near the stream-sediment interface, infiltration and percolation rates are comparable. Comparison of infiltration and percolation rates showed that infiltration rates were greater than percolation rates. The method used to calculate infiltration rates accounted for net loss or gain over the entire stream reach, whereas the method used to calculate percolation was dependent on point measurements and, as applied in this study, neglected the nonvertical component of heat and water fluxes. In general, using the ground-water method was less labor intensive than making a series of streamflow measurements and relied on temperature, an easily measured property. The ground-water method also eliminated the difficulty of measuring or estimating evaporation from the water surface and was therefore more direct. Both methods are difficult to use during periods of flood flow. The ground-water method has problems with the thermocouple-wire temperature sensors washing out during flood events. The surface-water method often cannot be used because of safety concerns for personnel making wading streamflow measurements.
Method for calibrating a Fourier transform ion cyclotron resonance mass spectrometer
Smith, Richard D.; Masselon, Christophe D.; Tolmachev, Aleksey
2003-08-19
A method for improving the calibration of a Fourier transform ion cyclotron resonance mass spectrometer wherein the frequency spectrum of a sample has been measured and the frequency (f) and intensity (I) of at least three species having known mass-to-charge (m/z) ratios and one species having an unknown (m/z) ratio have been identified. The method uses the known (m/z) ratios, frequencies, and intensities of the at least three species to calculate coefficients A, B, and C, wherein the mass-to-charge ratio of at least one of the three species is (m/z)_i = A/f_i + B/f_i^2 + C·G(I_i)/f_i^Q, wherein f_i is the detected frequency of the species, G(I_i) is a predetermined function of the intensity of the species, and Q is a predetermined exponent. Using the calculated values for A, B, and C, the mass-to-charge ratio of the unknown species is calculated as the sum (m/z)_ii = A/f_ii + B/f_ii^2 + C·G(I_ii)/f_ii^Q, wherein f_ii is the measured frequency of the unknown species and I_ii is the measured intensity of the unknown species.
Development and accuracy of a multipoint method for measuring visibility.
Tai, Hongda; Zhuang, Zibo; Sun, Dongsong
2017-10-01
Accurate measurements of visibility are of great importance in many fields. This paper reports a multipoint visibility measurement (MVM) method to measure and calculate the atmospheric transmittance, extinction coefficient, and meteorological optical range (MOR). The relative errors of atmospheric transmittance and MOR measured by the MVM method and the traditional transmissometer method are analyzed and compared. Experiments were conducted indoors, and the data were processed simultaneously. The results revealed that the MVM can effectively improve the accuracy under different visibility conditions. The greatest improvement in accuracy was 27%. The MVM can be used to calibrate and evaluate visibility meters.
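The chain from transmittance to MOR is standard Beer-Lambert plus the WMO 5% contrast threshold; a minimal sketch with an assumed 100 m baseline:

```python
import math

def extinction_from_transmittance(transmittance, baseline_m):
    """Beer-Lambert: T = exp(-sigma * L)  =>  sigma = -ln(T) / L."""
    return -math.log(transmittance) / baseline_m

def mor_from_extinction(sigma):
    """Meteorological optical range at the WMO 5% contrast threshold:
    MOR = -ln(0.05) / sigma (approximately 3/sigma)."""
    return -math.log(0.05) / sigma

sigma = extinction_from_transmittance(0.75, 100.0)  # 75% over a 100 m path
print(mor_from_extinction(sigma))                    # ~1040 m
```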
Zhang, Xu; Jin, Weiqi; Li, Jiakun; Wang, Xia; Li, Shuo
2017-04-01
Thermal imaging technology is an effective means of detecting hazardous gas leaks. Much attention has been paid to evaluating the performance of gas leak infrared imaging detection systems due to several potential applications. The minimum resolvable temperature difference (MRTD) and the minimum detectable temperature difference (MDTD) are commonly used as the main indicators of thermal imaging system performance. This paper establishes a minimum detectable gas concentration (MDGC) performance evaluation model based on the definition and derivation of MDTD. We propose direct-calculation and equivalent-calculation methods for MDGC based on the MDTD measurement system. We built an experimental MDGC measurement system, and the results indicate that the MDGC model can describe the detection performance of a thermal imaging system for typical gases. The direct calculation, equivalent calculation, and direct measurement results are consistent. The MDGC and minimum resolvable gas concentration (MRGC) models can effectively describe the "detection" and "spatial detail resolution" performance of thermal imaging systems for gas leaks, respectively, and constitute the main performance indicators of gas leak detection systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeda, T.; Shimazu, Y.; Hibi, K.
2012-07-01
Under the R&D project to improve the modeling accuracy for the design of fast breeder reactors, the authors are developing a neutronics calculation method for designing a large commercial-type sodium-cooled fast reactor. The calculation method is established by taking into account the special features of the reactor, such as the use of annular fuel pellets, the inner duct tube in large fuel assemblies, and the large core. The Verification and Validation and Uncertainty Quantification (V&V and UQ) of the calculation method is being performed using measured data from the prototype FBR Monju. The results of this project will be used in the design and analysis of the commercial-type demonstration FBR, known as the Japanese Sodium-cooled Fast Reactor (JSFR). (authors)
Pleil, Joachim D
2016-01-01
This commentary is the second in a series outlining one specific concept in interpreting biomarker data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step: the choice between the standard error of the mean and the calculated standard deviation for comparing or predicting measurement results.
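The distinction is easy to make concrete. A minimal sketch with illustrative replicate values:

```python
import numpy as np

values = np.array([9.6, 10.1, 9.8, 10.4, 9.9, 10.2])  # illustrative replicates

sd = values.std(ddof=1)           # spread of individual measurements:
                                  # use to predict where a new result may fall
sem = sd / np.sqrt(values.size)   # uncertainty of the mean itself:
                                  # use to compare group means
print(f"mean = {values.mean():.2f}, SD = {sd:.3f}, SEM = {sem:.3f}")
```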
A graph-based semantic similarity measure for the gene ontology.
Alvarez, Marco A; Yan, Changhui
2011-12-01
Existing methods for calculating semantic similarities between pairs of Gene Ontology (GO) terms and gene products often rely on external databases like the Gene Ontology Annotation (GOA) database, which annotates gene products using GO terms. This dependency leads to some limitations in real applications. Here, we present a semantic similarity algorithm (SSA) that relies exclusively on the GO. When calculating the semantic similarity between a pair of input GO terms, SSA takes into account the shortest path between them, the depth of their nearest common ancestor, and a novel similarity score calculated between the definitions of the involved GO terms. In our work, we use SSA to calculate semantic similarities between pairs of proteins by combining pairwise semantic similarities between the GO terms that annotate the involved proteins. The reliability of SSA was evaluated by comparing the resulting semantic similarities between proteins with the functional similarities between proteins derived from expert annotations or sequence similarity. Comparisons with existing state-of-the-art methods showed that SSA is highly competitive with the other methods. SSA provides a reliable measure for semantic similarity independent of external databases of functional-annotation observations.
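For flavor, here is a toy graph-based score on a five-term DAG combining the two structural ingredients the abstract names (shortest path and depth of the nearest common ancestor); SSA's actual formula, including its definition-based score, is in the paper and is not reproduced here:

```python
import networkx as nx

# Toy GO-like DAG; edges point from child term to parent term,
# so nodes reachable from a term are its GO ancestors.
go = nx.DiGraph([("t4", "t2"), ("t5", "t2"), ("t2", "t1"),
                 ("t3", "t1"), ("t5", "t3")])
root = "t1"

def depth(term):
    return nx.shortest_path_length(go, term, root)

def toy_similarity(a, b):
    """Deeper common ancestors and shorter paths -> higher similarity."""
    path_len = nx.shortest_path_length(go.to_undirected(), a, b)
    common = (nx.descendants(go, a) | {a}) & (nx.descendants(go, b) | {b})
    lca_depth = max(depth(t) for t in common)   # nearest common ancestor
    return lca_depth / (lca_depth + path_len)

print(toy_similarity("t4", "t5"))
```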
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siman, W; Kappadath, S
2014-06-01
Purpose: Some common methods to solve for deadtime are (1) the dual-source method, which assumes two equal activities; (2) model fitting, which requires multiple acquisitions as the source decays; and (3) the lossless model, which assumes no deadtime loss at low count rates. We propose a new analytic alternative solution to calculate the deadtime of a paralyzable gamma camera. Methods: The deadtime T can be calculated analytically from two distinct observed count rates M1 and M2 when the ratio of the true count rates, alpha = N2/N1, is known. Alpha can be measured as a ratio of two measured activities using dose calibrators or via radioactive decay. Knowledge of alpha creates a system with 2 equations and 2 unknowns, i.e., T and N1. To verify the validity of the proposed method, projections of a non-uniform phantom (4 GBq 99mTc) were acquired on a Siemens Symbia S multiple times over 48 hours. Each projection has >100 kcts. The deadtime for each projection was calculated by fitting the data to a paralyzable model and also by using the proposed 2-acquisition method. The two estimates of deadtime were compared using the Bland-Altman method. In addition, the dependency of the uncertainty in T on the uncertainty in alpha was investigated for several imaging conditions. Results: The results strongly suggest that the 2-acquisition method is equivalent to the fitting method. The Bland-Altman analysis yielded a mean difference in deadtime estimate of ∼0.076 us (95% CI: -0.049 us, 0.103 us) between the 2-acquisition and model fitting methods. The 95% limits of agreement were calculated to be -0.104 to 0.256 us. The uncertainty in deadtime calculated using the proposed method is highly dependent on the uncertainty in the ratio alpha. Conclusion: The 2-acquisition method was found to be equivalent to the parameter fitting method. The proposed method offers a simpler and more practical way to analytically solve for a paralyzable detector deadtime, especially during physics testing.
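The two-equation system is easy to state and, even without the authors' analytic solution, solves numerically in a few lines. The sketch below assumes the paralyzable model M = N·exp(−N·T) and synthetic numbers; it is a stand-in for the paper's closed-form result:

```python
import numpy as np
from scipy.optimize import fsolve

def solve_deadtime(m1, m2, alpha, guess=(200e3, 1e-6)):
    """Paralyzable model M = N*exp(-N*T). Given two observed rates m1, m2
    and the known true-rate ratio alpha = N2/N1, solve for N1 and T."""
    def equations(x):
        n1, tau = x
        return (n1 * np.exp(-n1 * tau) - m1,
                alpha * n1 * np.exp(-alpha * n1 * tau) - m2)
    return fsolve(equations, guess)

# Synthetic check: true N1 = 300 kcps, T = 0.5 us, alpha = 0.6.
n1, tau, alpha = 300e3, 0.5e-6, 0.6
m1 = n1 * np.exp(-n1 * tau)
m2 = alpha * n1 * np.exp(-alpha * n1 * tau)
print(solve_deadtime(m1, m2, alpha))  # ~ [3.0e5, 5.0e-7]
```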
Development of congestion performance measures using ITS information.
DOT National Transportation Integrated Search
2003-01-01
The objectives of this study were to define a performance measure(s) that could be used to show congestion levels on critical corridors throughout Virginia and to develop a method to select and calculate performance measures to quantify congestion in...
SU-E-T-757: TMRs Calculated From PDDs Versus the Direct Measurements for Small Field SRS Cones
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, H; Zhong, H; Song, K
2015-06-15
Purpose: To investigate the variation of TMR for SRS cones obtained by TMR scanning, calculation from PDDs, and point measurements. The obtained TMRs were also compared to the representative data from the vendor. Methods: TMRs for conical cones of 4, 5, 7.5, 10, 12.5, 15, and 17.5 mm diameter (jaws set to 5×5 cm) were obtained for 6X FFF and 10X FFF energies on a Varian Edge linac. TMR scanning was performed with a Sun Nuclear 3D scanner and Edge detector at 100 cm SDD. TMR point measurements were acquired with a Wellhofer tank and Edge detector, at multiple depths from 0.5 to 20 cm and 100 cm SDD. PDDs for converting to TMR were scanned with a Wellhofer system and SFD detector. The formalism for converting PDD to TMR, given in Khan's book (4th Edition, p. 161), was applied. Sp values at dmax were obtained by measuring Scp and Sc of the cones (jaws set to 5×5 cm) using the Edge detector, and normalized to the 10×10 cm field. Results: Along the central axis beyond dmax, the RMS and maximum percent difference of TMRs obtained with different methods were as follows: (a) 1.3% (max=3.5%) for the calculated TMRs from PDDs versus direct scanning; (b) 1.2% (max=3.3%) for direct scanning versus point measurement; (c) 1.8% (max=5.1%) for the calculated versus point measurements; (d) 1.0% (max=3.6%) for direct scanning versus vendor data; (e) 1.6% (max=7.2%) for the calculated versus vendor data. Conclusion: The overall accuracy of TMRs calculated from PDDs was comparable with that of direct scanning. However, the uncertainty at depths greater than 20 cm increased up to 5% when compared to point measurements. This issue must be considered when developing a beam model for small field SRS planning using cones.
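For readers without Khan's text at hand, the conversion has the familiar form below. This is a minimal sketch of the textbook relation; treating the phantom-scatter ratio as approximately 1 for very small cones is our simplifying assumption, not the authors', and the numbers in the usage line are invented.

```python
def tmr_from_pdd(pdd, d, d_max, ssd, sp_ratio=1.0):
    """
    Textbook PDD -> TMR conversion (Khan):
    TMR(d) = (PDD/100) * ((ssd + d)/(ssd + d_max))**2 * Sp(r_dmax)/Sp(r_d).
    sp_ratio is the phantom-scatter ratio; taking it ~1 for tiny SRS
    cones is an assumption of this sketch.
    """
    return (pdd / 100.0) * ((ssd + d) / (ssd + d_max)) ** 2 * sp_ratio

# e.g. a hypothetical cone with PDD = 55% at 10 cm depth, 100 cm SSD:
tmr_10cm = tmr_from_pdd(pdd=55.0, d=10.0, d_max=1.5, ssd=100.0)
```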
Brain Volume Estimation Enhancement by Morphological Image Processing Tools.
Zeinali, R; Keshtkar, A; Zamani, A; Gharehaghaji, N
2017-12-01
Volume estimation of the brain is important for many neurological applications. It is necessary for measuring brain growth and changes in the brain in normal/abnormal patients. Thus, accurate brain volume measurement is very important. Magnetic resonance imaging (MRI) is the method of choice for volume quantification due to excellent levels of image resolution and between-tissue contrast. The stereology method is a good estimator of volume, but it requires segmenting a sufficient number of MRI slices at adequate resolution. This study aims to enhance the stereology method so that brain volume can be estimated using fewer MRI slices at lower resolution. A program for calculating volume using the stereology method is introduced, in which morphological dilation is applied to enhance the estimate. For the evaluation of this method, we used T1-weighted MR images from a digital phantom in BrainWeb, which has a ground truth. The volumes of 20 normal brains extracted from BrainWeb were calculated. The volumes of white matter, gray matter, and cerebrospinal fluid with given dimensions were estimated correctly. Volumes were calculated with the stereology method in three cases, and the Root Mean Square Error (RMSE) was measured for each: Case I with T=5, d=5; Case II with T=10, d=10; and Case III with T=20, d=20 (T = slice thickness, d = resolution, as stereology parameters). Comparing the results of the two methods, the RMSE values for the proposed method are smaller than those for the plain stereology method. Morphological dilation thus enhances stereological volume estimation; with fewer MRI slices and fewer test points, the proposed method performs much better than the plain stereology method.
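The Cavalieri estimator that underlies this kind of stereology, plus the dilation step described above, can be sketched as follows. Isotropic 1 mm pixels, slices already sampled at spacing T, and the default structuring element are all assumptions of this illustration.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def cavalieri_volume(mask, t_mm, d_mm, dilate=False):
    """
    Cavalieri estimate from a stack of binary slices (z, y, x) assumed
    to be sampled every t_mm, with isotropic 1 mm pixels:
    V ~= T * d^2 * (grid points hitting the object), using a d-spaced
    point grid overlaid on every slice.  Dilation before counting mimics
    the enhancement in the abstract; the default structuring element is
    an assumption.
    """
    if dilate:
        mask = np.stack([binary_dilation(s) for s in mask])
    hits = mask[:, ::d_mm, ::d_mm].sum()
    return t_mm * d_mm ** 2 * hits  # mm^3
```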
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor); Ahumada, Albert J. (Inventor)
2014-01-01
A method of measuring motion blur is disclosed comprising obtaining a moving edge temporal profile r1(k) of an image of a high-contrast moving edge, calculating the masked local contrast m1(k) for r1(k) and the masked local contrast m2(k) for an ideal step edge waveform r2(k) with the same amplitude as r1(k), and calculating the measure of motion blur Psi as a difference function. The masked local contrasts are calculated using a set of convolution kernels scaled to simulate the performance of the human visual system, and Psi is measured in units of just-noticeable differences.
NASA Astrophysics Data System (ADS)
Kalb, Wolfgang L.; Batlogg, Bertram
2010-01-01
The spectral density of localized states in the band gap of pentacene (trap DOS) was determined with a pentacene-based thin-film transistor from measurements of the temperature dependence and gate-voltage dependence of the contact-corrected field-effect conductivity. Several analytical methods to calculate the trap DOS from the measured data were used to clarify whether the different methods lead to comparable results. We also used computer simulations to further test the results from the analytical methods. Most methods predict a trap DOS close to the valence-band edge that can be very well approximated by a single exponential function with a slope in the range of 50-60 meV and a trap density at the valence-band edge of ≈2×10²¹ eV⁻¹ cm⁻³. Interestingly, the trap DOS is always slightly steeper than exponential. An important finding is that the choice of the method to calculate the trap DOS from the measured data can have a considerable effect on the final result. We identify two specific simplifying assumptions that lead to significant errors in the trap DOS. The temperature dependence of the band mobility should generally not be neglected. Moreover, the assumption of a constant effective accumulation-layer thickness leads to a significant underestimation of the slope of the trap DOS.
Kimura, Yasuyuki; Siméon, Fabrice G; Zoghbi, Sami S; Zhang, Yi; Hatazawa, Jun; Pike, Victor W; Innis, Robert B; Fujita, Masahiro
2012-02-01
A new PET ligand, 3-fluoro-5-(2-(2-(18)F-(fluoromethyl)-thiazol-4-yl)ethynyl)benzonitrile (18F-SP203), can quantify metabotropic glutamate subtype 5 receptors (mGluR5) in human brain by a bolus injection and kinetic modeling. As an alternative approach to a bolus injection, binding can simply be measured as a ratio of tissue to metabolite-corrected plasma at a single time point under equilibrium conditions, achieved by administering the radioligand as a bolus injection followed by a constant infusion. The purpose of this study was to validate the equilibrium method as an alternative to the standard kinetic method for measuring 18F-SP203 binding in the brain. Nine healthy subjects were injected with 18F-SP203 using a bolus plus constant infusion for 300 min. A single bolus-to-infusion ratio (the bolus activity equaled that infused over 219 min) was applied to all subjects to achieve equilibrium in approximately 120 min. As a measure of ligand binding, we compared the total distribution volume (VT) calculated by the equilibrium and kinetic methods in each scan. The equilibrium method calculated VT as the ratio of radioactivity in the brain to the concentration of 18F-SP203 in arterial plasma at 120 min, and the kinetic method calculated VT with a two-tissue compartment model using brain and plasma dynamic data from 0 to 120 min. VT obtained via the equilibrium method was highly correlated with VT obtained via kinetic modeling. Inter-subject variability of VT obtained via the equilibrium method was slightly smaller than that of VT obtained via the kinetic method. VT obtained via the equilibrium method was ~10% higher than VT obtained via the kinetic method, indicating a small difference between the measurements. Taken together, the results of this study show that the equilibrium method is an acceptable alternative to the standard kinetic method when using 18F-SP203 to measure mGluR5. Although small differences exist in the measurements obtained via the equilibrium and kinetic methods, both methods consistently measured mGluR5, as indicated by the highly correlated VT values; the equilibrium method was slightly more precise, as indirectly measured by the smaller coefficient of variability across subjects. In addition, when using 18F-SP203, the equilibrium method is more efficient because it requires much less data. Copyright © 2011. Published by Elsevier Inc.
Wunderli, S; Fortunato, G; Reichmuth, A; Richard, Ph
2003-06-01
A new method to correct for the largest systematic influence in mass determination, air buoyancy, is outlined. A full description of the most relevant influence parameters is given and the combined measurement uncertainty is evaluated according to the ISO-GUM approach [1]. A new correction method for air buoyancy using an artefact is presented. This method has the advantage that only a mass artefact is used to correct for air buoyancy. The classical approach demands the determination of the air density and therefore suitable equipment to measure at least the air temperature, the air pressure, and the relative air humidity within the demanded uncertainties (i.e., three independent measurement tasks have to be performed simultaneously). The calculated uncertainty is lower for the classical method; however, a field laboratory may not always possess fully traceable measurement systems for these room climate parameters. A comparison of three approaches to calculating the combined uncertainty of mass values is presented: the classical determination of air buoyancy, the artefact method, and neglect of this systematic effect as proposed in the new EURACHEM/CITAC guide [2]. The artefact method is suitable for high-precision measurement in analytical chemistry and especially for the production of certified reference materials, reference values, and analytical chemical reference materials. The method could also be used either for volume determination of solids or for air density measurement by an independent method.
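To make the classical route concrete, the sketch below pairs the widely used CIPM approximation for moist-air density with the standard buoyancy correction for a balance calibrated against steel weights. The 8000 kg/m³ conventional weight density is the usual convention, and treating the sample density as known is an assumption of the example.

```python
import math

def air_density(p_hpa, t_celsius, rh_percent):
    """CIPM approximation formula for moist-air density (kg/m^3)."""
    return (0.34848 * p_hpa
            - 0.009 * rh_percent * math.exp(0.061 * t_celsius)) \
        / (273.15 + t_celsius)

def buoyancy_corrected_mass(m_indicated, rho_air, rho_sample,
                            rho_weights=8000.0):
    """Classical air-buoyancy correction for a balance calibrated with
    steel weights of conventional density 8000 kg/m^3."""
    return m_indicated * (1 - rho_air / rho_weights) \
        / (1 - rho_air / rho_sample)

rho_a = air_density(1013.25, 20.0, 50.0)          # ~1.20 kg/m^3
m = buoyancy_corrected_mass(100.0, rho_a, 1000.0)  # aqueous sample, grams
```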
Measuring signal-to-noise ratio in partially parallel imaging MRI
Goerner, Frank L.; Clarke, Geoffrey D.
2011-01-01
Purpose: To assess five different methods of signal-to-noise ratio (SNR) measurement for partially parallel imaging (PPI) acquisitions. Methods: Measurements were performed on a spherical phantom and three volunteers using a multichannel head coil on a clinical 3T MRI system to produce echo planar, fast spin echo, gradient echo, and balanced steady state free precession image acquisitions. Two different PPI acquisitions, the generalized autocalibrating partially parallel acquisition algorithm and modified sensitivity encoding, with acceleration factors (R) of 2-4, were evaluated and compared to nonaccelerated acquisitions. Five standard SNR measurement techniques were investigated, and Bland-Altman analysis was used to determine agreement between the various SNR methods. The estimated g-factor values, associated with each method of SNR calculation and PPI reconstruction method, were also subjected to assessments that considered the effects on SNR due to reconstruction method, phase encoding direction, and R-value. Results: Only two SNR measurement methods produced g-factors in agreement with theoretical expectations (g ≥ 1). Bland-Altman tests demonstrated that these two methods also gave the most similar results relative to the other three measurements. R-value was the only factor of the three we considered that showed significant influence on SNR changes. Conclusions: Non-signal methods used in SNR evaluation do not produce results consistent with expectations in the investigated PPI protocols. Two of the methods studied provided the most accurate and useful results. Of these two, it is recommended that the image subtraction method be used for SNR calculations when evaluating PPI protocols, due to its relative accuracy and ease of implementation. PMID:21978049
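The image subtraction method recommended above is commonly computed as in the sketch below, from two repeated acquisitions and an ROI mask; the paper's exact ROI handling is not specified, so this is a generic rendering.

```python
import numpy as np

def snr_subtraction(img1, img2, roi):
    """
    Difference-image SNR estimate from two identical acquisitions:
    signal is the ROI mean of the averaged image, noise is the ROI
    standard deviation of the difference image divided by sqrt(2)
    (the difference doubles the noise variance).
    roi is a boolean mask; img1/img2 are magnitude images.
    """
    signal = 0.5 * (img1[roi] + img2[roi]).mean()
    noise = (img1[roi] - img2[roi]).std(ddof=1) / np.sqrt(2.0)
    return signal / noise
```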
Pressure estimation from single-snapshot tomographic PIV in a turbulent boundary layer
NASA Astrophysics Data System (ADS)
Schneiders, Jan F. G.; Pröbsting, Stefan; Dwight, Richard P.; van Oudheusden, Bas W.; Scarano, Fulvio
2016-04-01
A method is proposed to determine the instantaneous pressure field from a single tomographic PIV velocity snapshot and is applied to a flat-plate turbulent boundary layer. The main concept behind the single-snapshot pressure evaluation method is to approximate the flow acceleration using the vorticity transport equation. The vorticity field calculated from the measured instantaneous velocity is advanced over a single integration time step using the vortex-in-cell (VIC) technique to update the vorticity field, after which the temporal derivative and material derivative of velocity are evaluated. The pressure in the measurement volume is subsequently evaluated by solving a Poisson equation. The procedure is validated using data from a turbulent boundary layer experiment, obtained with time-resolved tomographic PIV at 10 kHz, where an independent surface pressure fluctuation measurement is made by a microphone. The cross-correlation coefficient of the surface pressure fluctuations calculated by the single-snapshot pressure method with respect to the microphone measurements is calculated and compared to that obtained using time-resolved pressure-from-PIV, which is regarded as the benchmark. The single-snapshot procedure returns a cross-correlation comparable to the best result obtained by time-resolved PIV, which uses a nine-point time kernel. When the kernel of the time-resolved approach is reduced to three measurements, the single-snapshot method yields approximately 30% higher correlation. Use of the method should be treated with caution when the contributions to fluctuating pressure from outside the measurement volume are significant. The study illustrates the potential for simplifying the hardware configurations (e.g., high-speed PIV or dual PIV) required to determine instantaneous pressure from tomographic PIV.
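The final Poisson step is the generic part of such pipelines. A bare-bones Jacobi iteration on a 2-D grid is sketched below; the real method works in 3-D with measured boundary conditions, so the zero-Dirichlet boundary and the precomputed right-hand side here are simplifying assumptions.

```python
import numpy as np

def pressure_poisson(rhs, h, n_iter=5000):
    """
    Jacobi solution of  laplacian(p) = rhs  on a uniform 2-D grid with
    p = 0 on the boundary (illustrative boundary condition).  In the
    paper's pipeline rhs would come from the material derivative of the
    VIC-advanced velocity field; here it is just an input array.
    """
    p = np.zeros_like(rhs)
    for _ in range(n_iter):
        p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1]
                                + p[1:-1, 2:] + p[1:-1, :-2]
                                - h * h * rhs[1:-1, 1:-1])
    return p
```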
Sofia, C; Magno, C; Silipigni, S; Cantisani, V; Mucciardi, G; Sottile, F; Inferrera, A; Mazziotti, S; Ascenti, G
2017-01-01
To evaluate the precision of the centrality index (CI) measurement on three-dimensional (3D) volume rendering technique (VRT) images in patients with renal masses, compared to its standard measurement on axial images. Sixty-five patients with renal lesions underwent contrast-enhanced multidetector (MD) computed tomography (CT) for preoperative imaging. Two readers calculated the CI on two-dimensional axial images and on VRT images, measuring it in the plane in which the tumour and the centre of the kidney lay. Correlation and agreement of interobserver measurements and inter-method results were calculated using intraclass correlation (ICC) coefficients and the Bland-Altman method. Time saving was also calculated. The interobserver correlation coefficients were r=0.99 (p<0.05) for the CI on axial images and r=0.99 (p<0.05) on VRT images, with ICCs of 0.99 and 0.99, respectively. Correlation between the two methods of measuring the CI, on VRT and axial CT images, was r=0.99 (p<0.05). The two methods showed a mean difference of -0.03 (SD 0.13). Mean time saving per examination with VRT was 45.5%. The present study showed that VRT and axial images produce almost identical values of CI, with the advantages of greater ease of execution and a time saving of almost 50% for 3D VRT images. In addition, VRT provides an integrated perspective that can better assist surgeons in clinical decision making and in operative planning, suggesting this technique as a possible standard method for CI measurement. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Schuurmann, R C L; Kuster, L; Slump, C H; Vahl, A; van den Heuvel, D A F; Ouriel, K; de Vries, J-P P M
2016-02-01
Supra- and infrarenal aortic neck angulation have been associated with complications after endovascular aortic aneurysm repair. However, a uniform angulation measurement method is lacking, and the concept of angulation suggests a triangular oversimplification of the aortic anatomy. (Semi-)automated calculation of curvature along the center luminal line describes the actual trajectory of the aorta. This study proposes a methodology for calculating aortic (neck) curvature and suggests an additional method based on available tools in current workstations: curvature by digital calipers (CDC). Proprietary custom software was developed for automatic calculation of the severity and location of the largest supra- and infrarenal curvature over the center luminal line. Twenty-four patients with severe supra- or infrarenal angulations (≥45°) and 11 patients with small to moderate angulations (<45°) were included. Both CDC and angulation were measured by two independent observers on the pre- and postoperative computed tomographic angiography scans. The relationships between actual curvature and CDC and angulation were visualized and tested with Pearson's correlation coefficient. The CDC was also fully automatically calculated with proprietary custom software. The difference between manual and automatic determination of CDC was tested with a paired Student t test. p-values were considered significant at two-tailed α < .05. The correlation between actual curvature and manual CDC is strong (.586-.962) and even stronger for automatic CDC (.865-.961). The correlation between actual curvature and angulation is much lower (.410-.737). Flow direction angulation values overestimate CDC measurements by 60%, with larger variance. No significant difference was found between automatically calculated and manually measured CDC values. Curvature calculation of the aortic neck improves determination of the true aortic trajectory. Automatic calculation of the actual curvature is preferable, but measurement or calculation of the curvature by digital calipers is a valid alternative if actual curvature is not at hand. Copyright © 2015 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
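For a sense of what "actual curvature" along a center luminal line involves, the standard discrete formula is shown below. How the authors smooth and parameterize the line is proprietary, so this is only the textbook ingredient.

```python
import numpy as np

def curvature(points):
    """
    Discrete curvature kappa = |r' x r''| / |r'|^3 along a 3-D center
    luminal line sampled as an (n, 3) array, using finite differences.
    """
    r1 = np.gradient(points, axis=0)   # first derivative along the line
    r2 = np.gradient(r1, axis=0)       # second derivative
    cross = np.cross(r1, r2)
    return np.linalg.norm(cross, axis=1) / np.linalg.norm(r1, axis=1) ** 3
```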
Problems With Risk Reclassification Methods for Evaluating Prediction Models
Pepe, Margaret S.
2011-01-01
For comparing the performance of a baseline risk prediction model with one that includes an additional predictor, a risk reclassification analysis strategy has been proposed. The first step is to cross-classify risks calculated according to the 2 models for all study subjects. Summary measures including the percentage of reclassification and the percentage of correct reclassification are calculated, along with 2 reclassification calibration statistics. The author shows that interpretations of the proposed summary measures and P values are problematic. The author's recommendation is to display the reclassification table, because it shows interesting information, but to use alternative methods for summarizing and comparing model performance. The Net Reclassification Index has been suggested as one alternative method. The author argues for reporting components of the Net Reclassification Index because they are more clinically relevant than is the single numerical summary measure. PMID:21555714
Cho, Nathan; Tsiamas, Panagiotis; Velarde, Esteban; Tryggestad, Erik; Jacques, Robert; Berbeco, Ross; McNutt, Todd; Kazanzides, Peter; Wong, John
2018-05-01
The Small Animal Radiation Research Platform (SARRP) has been developed for conformal microirradiation with on-board cone beam CT (CBCT) guidance. The graphics processing unit (GPU)-accelerated Superposition-Convolution (SC) method for dose computation has been integrated into the treatment planning system (TPS) for SARRP. This paper describes the validation of the SC method for the kilovoltage energy range by comparison with EBT2 film measurements and Monte Carlo (MC) simulations. MC data were simulated by the EGSnrc code with 3×10⁸ to 1.5×10⁹ histories, while 21 photon energy bins were used to model the 220 kVp x-rays in the SC method. Various types of phantoms including plastic water, cork, graphite, and aluminum were used to encompass the range of densities of mouse organs. For the comparison, percentage depth dose (PDD) curves from SC, MC, and film measurements were analyzed. Cross-beam (x,y) dosimetric profiles of SC and film measurements are also presented. Correction factors (CFz) to convert SC to MC dose-to-medium were derived from the SC and MC simulations in homogeneous phantoms of aluminum and graphite to improve the estimation. The SC method produces dose values that are within 5% of film measurements and MC simulations in the flat regions of the profile. The dose is less accurate at the edges, due to factors such as geometric uncertainties of film placement and differences in dose calculation grids. The GPU-accelerated Superposition-Convolution dose computation method was successfully validated with EBT2 film measurements and MC calculations. The SC method offers much faster computation than MC and provides calculations of both dose-to-water in medium and dose-to-medium in medium. © 2018 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Ban, Yunyun; Chen, Tianqin; Yan, Jun; Lei, Tingwu
2017-04-01
The measurement of sediment concentration in water is of great importance in soil erosion research and soil and water loss monitoring systems. The traditional weighing method has long been the foundation of all other measuring methods and of instrument calibration. The development of a new method to replace the traditional oven-drying method is of interest in research and practice for the quick and efficient measurement of sediment concentration, especially in field measurements. A new method is advanced in this study for accurately measuring sediment concentration, based on the accurate measurement of the mass of the sediment-water mixture in a confined constant volume container (CVC). A sediment-laden water sample is put into the CVC to determine its mass before the CVC is filled with water and weighed again for the total mass of the water and sediments in the container. The known volume of the CVC, the mass of sediment-laden water, and the sediment particle density are used to calculate the mass of water that is replaced by sediments, from which the sediment concentration of the sample is calculated. The influence of water temperature was corrected by measuring water density to determine the temperature of the water before measurements were conducted. The CVC was used to eliminate the surface tension effect so as to obtain an accurate volume of the water and sediment mixture. Experimental results showed that the method was capable of measuring sediment concentrations from 0.5 up to 1200 kg m-3. A good linear relationship existed between the designed and measured sediment concentrations, with all coefficients of determination greater than 0.999 and an averaged relative error of less than 0.2%. All of this seems to indicate that the new method is capable of measuring the full range of sediment concentrations above 0.5 kg m-3 and can replace the traditional oven-drying method as a standard method for evaluating and calibrating other methods.
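One plausible reading of the displacement arithmetic is sketched below; the exact bookkeeping in the paper may differ, and the default densities (water near 20 °C, quartz-like sediment) stand in for the paper's temperature-corrected values.

```python
def sediment_concentration(m_sample, m_total, v_cvc,
                           rho_w=998.2, rho_s=2650.0):
    """
    Displacement bookkeeping (our reading of the abstract, SI units):
    after topping the CVC of volume v_cvc up with clear water,
        m_total = rho_w * v_cvc + m_sed * (1 - rho_w / rho_s),
    since each kg of sediment displaces rho_w/rho_s kg of water.
    The sample volume then follows from the sample's own mass balance.
    """
    m_sed = (m_total - rho_w * v_cvc) / (1.0 - rho_w / rho_s)
    v_sample = (m_sample - m_sed) / rho_w + m_sed / rho_s
    return m_sed / v_sample   # kg of sediment per m^3 of sample
```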
Echocardiographic measurements of left ventricular mass by a non-geometric method
NASA Technical Reports Server (NTRS)
Parra, Beatriz; Buckey, Jay; Degraff, David; Gaffney, F. Andrew; Blomqvist, C. Gunnar
1987-01-01
The accuracy of a new nongeometric method for calculating left ventricular myocardial volumes from two-dimensional echocardiographic images was assessed in vitro using 20 formalin-fixed normal human hearts. Serial oblique short-axis images were acquired from one point at 5-deg intervals, for a total of 10-12 cross sections. Echocardiographic myocardial volumes were calculated as the difference between the volumes defined by the epi- and endocardial surfaces. Actual myocardial volumes were determined by water displacement. Volumes ranged from 80 to 174 ml (mean 130.8 ml). Linear regression analysis demonstrated excellent agreement between the echocardiographic and direct measurements.
NASA Astrophysics Data System (ADS)
Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio
2015-12-01
This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The chest-compression gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. Waveforms obtained using the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to the compression-force waveforms, and the acceleration waveforms were measured at the same time. The weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated. An estimated compression-displacement (depth) waveform was obtained by multiplying the weight factor and the magnetic waveforms. Displacements of two large springs (with similar spring constants) within a thorax and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge to validate the accuracy of the calculated waveform. A laser-displacement detection system was used to compare the real displacement waveform and the estimated waveform. Intraclass correlation coefficients (ICCs) between the real displacement waveforms from the laser system and the estimated displacement waveforms were calculated. The estimated displacement error of the compression depth was within 2 mm (<1 standard deviation). All ICCs (two springs and a manikin) were above 0.85 (0.99 in the case of one of the springs). The developed simple chest-compression gauge, based on a new calculation method, provides an accurate compression depth (estimation error < 2 mm).
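The weight-factor step reduces to a one-parameter least-squares fit; a minimal sketch follows, with variable names of our choosing and assumed sensor sign conventions.

```python
import numpy as np

def estimate_depth(magnetic, accel, dt):
    """
    Sketch of the gauge's calculation: find the scalar w minimising
    || w * d2(magnetic)/dt2 - accel ||^2, then take the estimated
    displacement waveform as depth(t) ~ w * magnetic(t).
    magnetic and accel are equal-length 1-D samples at spacing dt.
    """
    d2m = np.gradient(np.gradient(magnetic, dt), dt)  # second derivative
    w = np.dot(d2m, accel) / np.dot(d2m, d2m)         # 1-D least squares
    return w * magnetic
```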
Scanning system, infrared noise equivalent temperature difference: Measurement procedure
NASA Technical Reports Server (NTRS)
Cannon, J. B., Jr.
1975-01-01
A procedure is described for determining the noise equivalent temperature difference for infrared electro-optical instruments. The instrumentation required, the proper measurements, and the methods of calculation are included.
Head repositioning accuracy to neutral: a comparative study of error calculation.
Hill, Robert; Jensen, Pål; Baardsen, Tor; Kulvik, Kristian; Jull, Gwendolen; Treleaven, Julia
2009-02-01
Deficits in cervical proprioception have been identified in subjects with neck pain through the measure of head repositioning accuracy (HRA). Nevertheless, there appears to be no general consensus regarding the construct of measurement error used for calculating HRA. This study investigated four different mathematical methods of measuring error to determine if there were any differences in their ability to discriminate between a control group and subjects with a whiplash-associated disorder. The four methods for measuring cervical joint position error were calculated using a previous data set consisting of 50 subjects with whiplash complaining of dizziness (WAD D), 50 subjects with whiplash not complaining of dizziness (WAD ND), and 50 control subjects. The results indicated that no one measure of HRA uniquely detected or defined the differences between the whiplash and control groups. Constant error (CE) was significantly different between the whiplash and control groups in trials from extension (p<0.05). Absolute errors (AEs) and root mean square errors (RMSEs) demonstrated differences between the two WAD groups in rotation trials (p<0.05). No differences were seen with variable error (VE). The results suggest that a combination of AE (or RMSE) and CE is probably the most suitable measure for analysis of HRA.
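The four error constructs are standard in the motor-control literature and are related by RMSE² = CE² + VE²; a compact rendering:

```python
import numpy as np

def hra_errors(errors):
    """
    The four repositioning-error summaries for a set of signed trial
    errors (e.g. degrees): constant, absolute, RMS and variable error.
    Note RMSE**2 == CE**2 + VE**2 with this (population) definition.
    """
    e = np.asarray(errors, dtype=float)
    ce = e.mean()                      # constant error (signed bias)
    ae = np.abs(e).mean()              # absolute error
    rmse = np.sqrt((e ** 2).mean())    # root mean square error
    ve = e.std(ddof=0)                 # variable error (spread about CE)
    return ce, ae, rmse, ve
```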
Comparison of methods to determine disk and heartwood areas
Michael C. Wiemann; John P. Brown; Neal D. Bennett
2002-01-01
The feasibility of using radius measurements on disks to determine cross-sectional areas of tree stems and the heartwood they contain was examined in sugar maple and red oak butt logs. Areas calculated from quadratic means of four stem radii and four heartwood radii were compared with areas measured with a planimeter. The lineal measurement method was less precise for...
De Henau, H; Mathijs, E; Hopping, W D
1986-01-01
Linear Alkylbenzenesulphonates (LAS), a major anionic surfactant used in laundry products, can be measured specifically in the environment by instrumental analysis. In addition to a desulphonation-gas chromatography approach, a method based on high performance liquid chromatography has been developed. The main features of the methods are outlined, and LAS concentrations measured in sewage sludge, sediments and sludge amended soils are reported. Knowledge of usage volumes, sewage treatment practices and environmental transport and transformation mechanisms has been used to predict concentrations of LAS. These calculated concentrations were found to agree well with those actually measured in the environment. Both measured and calculated ambient concentrations of LAS are below those which could produce potentially adverse effects in representative surface water, benthic and terrestrial organisms.
Wind turbine sound pressure level calculations at dwellings.
Keith, Stephen E; Feder, Katya; Voicescu, Sonia A; Soukhovtsev, Victor; Denning, Allison; Tsang, Jason; Broner, Norm; Leroux, Tony; Richarz, Werner; van den Berg, Frits
2016-03-01
This paper provides calculations of outdoor sound pressure levels (SPLs) at dwellings for 10 wind turbine models, to support Health Canada's Community Noise and Health Study. Manufacturer-supplied and measured wind turbine sound power levels were used to calculate outdoor SPLs at 1238 dwellings using the ISO 9613-2 method [ISO (1996), Acoustics] and a Swedish noise propagation method. Both methods yielded statistically equivalent results. The A- and C-weighted results were highly correlated over the 1238 dwellings (Pearson's linear correlation coefficient r > 0.8). Calculated wind turbine SPLs were compared to ambient SPLs from other sources, estimated using guidance documents from the United States and Alberta, Canada.
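At its core, outdoor propagation in methods of this family starts from spherical spreading off the sound power level. The sketch below shows only that skeleton and lumps all attenuation terms into one input; this is a simplification, since the standard computes atmospheric and ground terms per octave band.

```python
import math

def spl_at_distance(lw_db, d_m, a_extra_db=0.0):
    """
    Bare-bones point-source estimate underlying methods like ISO 9613-2:
    Lp = Lw - 20*log10(d) - 11 - A, with A lumping the standard's
    atmospheric/ground/barrier terms (0 here, an assumption of this
    sketch; the real method evaluates them per octave band).
    """
    return lw_db - 20.0 * math.log10(d_m) - 11.0 - a_extra_db

# e.g. a 105 dB sound power level heard at 500 m, no extra attenuation:
lp = spl_at_distance(105.0, 500.0)   # ~40 dB
```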
Methanol in its own gravy. A PCM study for simulation of vibrational spectra.
Billes, Ferenc; Mohammed-Ziegler, Ildikó; Mikosch, Hans
2011-05-07
To study both hydrogen bonding and dipole-dipole interactions between methanol molecules (self-association), the geometries of clusters of increasing numbers of methanol molecules (n = 1, 2, 3) were optimized and their vibrational frequencies calculated with quantum chemical methods. Besides these B3LYP/6-311G** calculations, PCM calculations were also performed for all systems at the same quantum chemical method and basis set, to account for the effect of the liquid continuum on the cluster properties. Comparing the results, the measured and calculated infrared spectra are in good agreement. This journal is © the Owner Societies 2011
Spacecraft angular velocity estimation algorithm for star tracker based on optical flow techniques
NASA Astrophysics Data System (ADS)
Tang, Yujie; Li, Jian; Wang, Gangyi
2018-02-01
An integrated navigation system often uses a traditional gyro and a star tracker for high-precision navigation, with the shortcomings of large volume, heavy weight, and high cost. With the development of autonomous navigation for deep space and small spacecraft, the star tracker has gradually been used for attitude calculation and direct angular velocity measurement. At the same time, given the dynamic imaging requirements of remote sensing and other imaging satellites, how to measure the angular velocity in dynamic situations to improve the accuracy of the star tracker is a hotspot of future research. We propose an approach to measure the angular rate without a gyro and improve the dynamic performance of the star tracker. First, a star extraction algorithm based on morphology is used to extract the star regions, and the stars in the two images are matched by angular distance voting. The displacement of the star image is then calculated by an improved optical flow method. Finally, the triaxial angular velocity of the star tracker is calculated from the star vectors using the least squares method. The method has the advantages of fast matching, strong noise immunity, and good dynamic performance, and the triaxial angular velocity of the star tracker can be obtained accurately. Thus, the star tracker can achieve better tracking performance and dynamic attitude accuracy, laying a good foundation for the wide application of various satellites and complex space missions.
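The final least-squares step can be written down directly from the kinematics of a star vector in a rotating frame (dv/dt = -ω × v). The sketch below stacks one skew-symmetric matrix per matched star; variable names are of our choosing, and a simple two-frame finite difference stands in for the paper's optical-flow displacements.

```python
import numpy as np

def angular_rate(v1, v2, dt):
    """
    Least-squares triaxial rate from matched star unit vectors in two
    successive frames: in the body frame dv/dt = -omega x v = [v]x omega,
    so stack the skew matrices [v]x and solve A @ omega = (v2 - v1)/dt.
    v1, v2: (n, 3) arrays of matched star vectors, n >= 2.
    """
    def skew(v):
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])
    a = np.vstack([skew(v) for v in v1])
    b = ((v2 - v1) / dt).reshape(-1)
    omega, *_ = np.linalg.lstsq(a, b, rcond=None)
    return omega
```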
Adjoint-Based Sensitivity Kernels for Glacial Isostatic Adjustment in a Laterally Varying Earth
NASA Astrophysics Data System (ADS)
Crawford, O.; Al-Attar, D.; Tromp, J.; Mitrovica, J. X.; Austermann, J.; Lau, H. C. P.
2017-12-01
We consider a new approach to both the forward and inverse problems in glacial isostatic adjustment. We present a method for forward modelling GIA in compressible and laterally heterogeneous earth models with a variety of linear and non-linear rheologies. Instead of using the so-called sea level equation, which must be solved iteratively, the forward theory we present consists of a number of coupled evolution equations that can be straightforwardly numerically integrated. We also apply the adjoint method to the inverse problem in order to calculate the derivatives of measurements of GIA with respect to the viscosity structure of the Earth. Such derivatives quantify the sensitivity of the measurements to the model. The adjoint method enables efficient calculation of continuous and laterally varying derivatives, allowing us to calculate the sensitivity of measurements of glacial isostatic adjustment to the Earth's three-dimensional viscosity structure. The derivatives have a number of applications within the inverse method. Firstly, they can be used within a gradient-based optimisation method to find a model which minimises some data misfit function. The derivatives can also be used to quantify the uncertainty in such a model and hence to provide understanding of which parts of the model are well constrained. Finally, they enable construction of measurements which provide sensitivity to a particular part of the model space. We illustrate both the forward and inverse aspects with numerical examples in a spherically symmetric earth model.
NASA Astrophysics Data System (ADS)
Nayek, C.; Manna, K.; Imam, A. A.; Alqasrawi, A. Y.; Obaidat, I. M.
2018-02-01
Understanding the size-dependent magnetic anisotropy of iron oxide nanoparticles is essential for the successful application of these nanoparticles in several technological and medical fields. PEG-coated iron oxide (Fe3O4) nanoparticles with core diameters of 12 nm, 15 nm, and 16 nm were synthesized by the usual co-precipitation method. The morphology and structure of the nanoparticles were investigated using transmission electron microscopy (TEM), high resolution transmission electron microscopy (HRTEM), selected area electron diffraction (SAED), and X-ray diffraction (XRD). Magnetic measurements were conducted using a SQUID. The effective magnetic anisotropy was calculated from the magnetization measurements using two methods. In the first method, the zero-field-cooled magnetization versus temperature measurements were used at several applied magnetic fields. In the second method, we used the temperature-dependent coercivity curves obtained from the zero-field-cooled magnetization versus magnetic field hysteresis loops. The role of the applied magnetic field on the effective magnetic anisotropy, calculated from the zero-field-cooled magnetization versus temperature measurements, was revealed. The size dependences of the effective magnetic anisotropy constant Keff obtained by the two methods are compared and discussed.
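For orientation, ZFC-peak analyses of this kind often rest on the Néel-Arrhenius relation Keff·V ≈ 25·kB·TB; the factor 25 assumes a ~100 s measurement window and is our illustrative choice, not necessarily the authors'.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def keff_from_blocking(t_b_kelvin, diameter_nm):
    """
    Neel-Arrhenius estimate commonly applied to ZFC peak temperatures:
    Keff * V ~= 25 * kB * T_B for ~100 s measurement times (the factor
    25 is an assumption of this sketch).  Returns Keff in J/m^3.
    """
    v = math.pi / 6.0 * (diameter_nm * 1e-9) ** 3   # particle volume, m^3
    return 25.0 * K_B * t_b_kelvin / v

# e.g. a hypothetical 12 nm particle blocking at 150 K:
keff = keff_from_blocking(150.0, 12.0)
```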
Ab initio simulation of diffractometer instrumental function for high-resolution X-ray diffraction
Mikhalychev, Alexander; Benediktovitch, Andrei; Ulyanenkova, Tatjana; Ulyanenkov, Alex
2015-01-01
Modeling of the X-ray diffractometer instrumental function for a given optics configuration is important both for planning experiments and for the analysis of measured data. A fast and universal method for instrumental function simulation, suitable for fully automated computer realization and describing both coplanar and noncoplanar measurement geometries for any combination of X-ray optical elements, is proposed. The method can be identified as semi-analytical backward ray tracing and is based on the calculation of a detected signal as an integral of X-ray intensities for all the rays reaching the detector. The high speed of calculation is provided by the expressions for analytical integration over the spatial coordinates that describe the detection point. Consideration of the three-dimensional propagation of rays without restriction to the diffraction plane provides the applicability of the method for noncoplanar geometry and the accuracy for characterization of the signal from a two-dimensional detector. The correctness of the simulation algorithm is checked in the following two ways: by verifying the consistency of the calculated data with the patterns expected for certain simple limiting cases and by comparing measured reciprocal-space maps with the corresponding maps simulated by the proposed method for the same diffractometer configurations. Both kinds of tests demonstrate the agreement of the simulated instrumental function shape with the measured data. PMID:26089760
Concentration Measurements in a Cold Flow Model Annular Combustor Using Laser Induced Fluorescence
NASA Technical Reports Server (NTRS)
Morgan, Douglas C.
1996-01-01
A nonintrusive concentration measurement method is developed for determining the concentration distribution in a complex flow field. The measurement method consists of marking a liquid flow with a water-soluble fluorescent dye. The dye is excited by a two-dimensional sheet of laser light. The fluorescent intensity is shown to be proportional to the relative concentration level. The fluorescent field is recorded on a video cassette recorder through a video camera. The recorded images are analyzed with image processing hardware and software to obtain intensity levels. Mean and root mean square (rms) values are calculated from these intensity levels. The method is tested on a single round turbulent jet because previous concentration measurements have been made on this configuration by other investigators. The previous results were used as a comparison to qualify the current method; these comparisons showed that the method provides satisfactory results. The concentration measurement system was then used to measure concentrations in the complex flow field of a model gas turbine annular combustor. The model annular combustor consists of opposing primary jets and an annular jet which discharges perpendicular to the primary jets. The mixing between the different jet flows can be visualized from the calculated mean and rms profiles. Concentration field visualization images obtained from the processing provide further qualitative information about the flow field.
NASA Astrophysics Data System (ADS)
Baselt, Tobias; Taudt, Christopher; Nelsen, Bryan; Lasagni, Andrés Fabián; Hartmann, Peter
2017-06-01
The optical properties of the guided modes in the core of photonic crystal fibers (PCFs) can be easily manipulated by changing the air-hole structure in the cladding. Special properties can be achieved in this way, such as endlessly single-mode operation. Endlessly single-mode fibers, which enable single-mode guidance over a wide spectral range, are indispensable in the field of fiber technology. A two-dimensional photonic crystal with a silica central core and a micrometer-spaced hexagonal array of air holes is an established way to achieve endlessly single-mode properties. In addition to the guidance of light in the core, different cladding modes occur. The coupling between the core and the cladding modes can affect the endlessly single-mode guidance. There are two possible ways to determine the dispersion: measurement and calculation. We calculate the group velocity dispersion (GVD) of different cladding modes based on measurement of the fiber structure parameters: the hole diameter and the pitch of a presumed homogeneous hexagonal array. Based on the scanning electron image, the optical guiding properties of the microstructured cladding were calculated. We compare the calculation with a method to measure the wavelength-dependent time delay. We measure the time delay of defined cladding modes with a homemade supercontinuum light source in a white-light interferometric setup. To measure the dispersion of cladding modes of optical fibers with high accuracy, a time-domain white-light interferometer based on a Mach-Zehnder interferometer is used. The experimental setup allows the determination of the wavelength-dependent differential group delay of light travelling through a thirty-centimeter piece of test fiber in the wavelength range from VIS to NIR. The determination of the GVD using different methods enables the evaluation of the individual methods for characterizing the cladding modes of an endlessly single-mode fiber.
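Turning a measured wavelength-dependent group delay into a dispersion curve is a plain numerical differentiation; a sketch, with fiber length and units chosen for illustration:

```python
import numpy as np

def dispersion_from_delay(wavelength_nm, group_delay_ps, fiber_length_m):
    """
    Chromatic dispersion D(lambda) = (1/L) * d(tau_g)/d(lambda) from an
    interferometrically measured wavelength-dependent group delay,
    returned in the customary ps/(nm*km).
    """
    d_ps_per_nm = np.gradient(group_delay_ps, wavelength_nm)  # ps/nm
    return d_ps_per_nm / (fiber_length_m / 1000.0)            # ps/(nm km)
```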
Development of a primary standard for absorbed dose from unsealed radionuclide solutions
NASA Astrophysics Data System (ADS)
Billas, I.; Shipley, D.; Galer, S.; Bass, G.; Sander, T.; Fenwick, A.; Smyth, V.
2016-12-01
Currently, the determination of the internal absorbed dose to tissue from an administered radionuclide solution relies on Monte Carlo (MC) calculations based on published nuclear decay data, such as emission probabilities and energies. In order to validate these methods with measurements, it is necessary to achieve the required traceability of the internal absorbed dose measurements of a radionuclide solution to a primary standard of absorbed dose. The purpose of this work was to develop a suitable primary standard. A comparison between measurements and calculations of absorbed dose allows the validation of the internal radiation dose assessment methods. The absorbed dose from an yttrium-90 chloride (90YCl) solution was measured with an extrapolation chamber. A phantom was developed at the National Physical Laboratory (NPL), the UK’s National Measurement Institute, to position the extrapolation chamber as closely as possible to the surface of the solution. The performance of the extrapolation chamber was characterised and a full uncertainty budget for the absorbed dose determination was obtained. Absorbed dose to air in the collecting volume of the chamber was converted to absorbed dose at the centre of the radionuclide solution by applying a MC calculated correction factor. This allowed a direct comparison of the analytically calculated and experimentally determined absorbed dose of an 90YCl solution. The relative standard uncertainty in the measurement of absorbed dose at the centre of an 90YCl solution with the extrapolation chamber was found to be 1.6% (k = 1). The calculated 90Y absorbed doses from published medical internal radiation dose (MIRD) and radiation dose assessment resource (RADAR) data agreed with measurements to within 1.5% and 1.4%, respectively. This study has shown that it is feasible to use an extrapolation chamber for performing primary standard absorbed dose measurements of an unsealed radionuclide solution. Internal radiation dose assessment methods based on MIRD and RADAR data for 90Y have been validated with experimental absorbed dose determination and they agree within the stated expanded uncertainty (k = 2).
Reflexion on linear regression trip production modelling method for ensuring good model quality
NASA Astrophysics Data System (ADS)
Suprayitno, Hitapriya; Ratnasari, Vita
2017-11-01
Transport modelling is important. For certain cases the conventional model still has to be used, for which having a good trip production model is essential. A good model can only be obtained from a good sample. Two basic principles of good sampling are a sample capable of representing the population characteristics and one capable of producing an acceptable error at a given confidence level. It seems that these principles are not yet well understood or used in trip production modelling. Therefore, investigating trip production modelling practice in Indonesia and trying to formulate a better modelling method for ensuring model quality is necessary. This research result is presented as follows. Statistics provides a method for calculating the span of a predicted value at a given confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R² as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample can already give an excellent R² value and that sample composition can significantly change the model. Hence, a good R² value does not always mean good model quality. This leads to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R² value and a good confidence interval of the predicted value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
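The proposed quality measure, the confidence interval of the mean predicted value, follows the standard simple-regression formula; a sketch:

```python
import numpy as np
from scipy import stats

def predicted_value_interval(x, y, x0, conf=0.95):
    """
    Confidence interval of the mean predicted value at x0 for simple
    linear regression y ~ b0 + b1*x:
    yhat(x0) +/- t * s * sqrt(1/n + (x0 - xbar)^2 / Sxx).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    b1, b0 = np.polyfit(x, y, 1)                   # slope, intercept
    resid = y - (b0 + b1 * x)
    s = np.sqrt((resid ** 2).sum() / (n - 2))      # residual std error
    sxx = ((x - x.mean()) ** 2).sum()
    se = s * np.sqrt(1.0 / n + (x0 - x.mean()) ** 2 / sxx)
    t = stats.t.ppf(0.5 + conf / 2.0, df=n - 2)
    yhat = b0 + b1 * x0
    return yhat - t * se, yhat + t * se
```

A narrow interval at the x-values of interest, not a high R² alone, is what certifies the model's predictions, which is the point the abstract makes.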
Tissue thickness calculation in ocular optical coherence tomography
Alonso-Caneiro, David; Read, Scott A.; Vincent, Stephen J.; Collins, Michael J.; Wojtkowski, Maciej
2016-01-01
Thickness measurements derived from optical coherence tomography (OCT) images of the eye are a fundamental clinical and research metric, since they provide valuable information regarding the eye’s anatomical and physiological characteristics, and can assist in the diagnosis and monitoring of numerous ocular conditions. Despite the importance of these measurements, limited attention has been given to the methods used to estimate thickness in OCT images of the eye. Most current studies employing OCT use an axial thickness metric, but there is evidence that axial thickness measures may be biased by tilt and curvature of the image. In this paper, standard axial thickness calculations are compared with a variety of alternative metrics for estimating tissue thickness. These methods were tested on a data set of wide-field chorio-retinal OCT scans (field of view (FOV) 60° x 25°) to examine their performance across a wide region of interest and to demonstrate the potential effect of curvature of the posterior segment of the eye on the thickness estimates. Similarly, the effect of image tilt was systematically examined with the same range of proposed metrics. The results demonstrate that image tilt and curvature of the posterior segment can affect axial tissue thickness calculations, while alternative metrics, which are not biased by these effects, should be considered. This study demonstrates the need to consider alternative methods to calculate tissue thickness in order to avoid measurement error due to image tilt and curvature. PMID:26977367
Riond, B; Steffen, F; Schmied, O; Hofmann-Lehmann, R; Lutz, H
2014-03-01
In veterinary clinical laboratories, qualitative tests for total protein measurement in canine cerebrospinal fluid (CSF) have been replaced by quantitative methods, which can be divided into dye-binding assays and turbidimetric methods. There is a lack of validation data and reference intervals (RIs) for these assays. The aim of the present study was to assess agreement between the turbidimetric benzethonium chloride method and 2 dye-binding methods (the Pyrogallol Red-Molybdate method [PRM] and the Coomassie Brilliant Blue [CBB] technique) for measurement of total protein concentration in canine CSF. Furthermore, RIs were determined for all 3 methods using an indirect a posteriori method. For assay comparison, a total of 118 canine CSF specimens were analyzed. For RI calculation, clinical records of 401 canine patients with normal CSF analysis were studied and classified according to their final diagnosis into pathologic and nonpathologic values. The turbidimetric assay showed excellent agreement with the PRM assay (mean bias 0.003 g/L [-0.26 to 0.27]). The CBB method generally showed higher total protein values than the turbidimetric and PRM assays (mean bias -0.14 g/L for both). From 90 of the 401 canine patients, nonparametric reference intervals (2.5% and 97.5% quantiles) were calculated (turbidimetric assay and PRM method: 0.08-0.35 g/L [90% CI: 0.07-0.08/0.33-0.39]; CBB method: 0.17-0.55 g/L [90% CI: 0.16-0.18/0.52-0.61]). Total protein concentration in canine CSF specimens remained stable for up to 6 months of storage at -80°C. Due to variations among methods, RIs for total protein concentration in canine CSF have to be calculated for each method. The a posteriori method of RI calculation described here should encourage other veterinary laboratories to establish RIs that are laboratory-specific. ©2014 American Society for Veterinary Clinical Pathology and European Society for Veterinary Clinical Pathology.
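The nonparametric interval itself is just a pair of percentiles over the nonpathologic subset; for example:

```python
import numpy as np

def reference_interval(values, lo=2.5, hi=97.5):
    """Nonparametric reference interval: the 2.5th and 97.5th
    percentiles of results from patients classified as nonpathologic."""
    v = np.asarray(values, dtype=float)
    return np.percentile(v, lo), np.percentile(v, hi)
```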
NASA Astrophysics Data System (ADS)
Mat Jafri, Mohd. Zubir; Abdulbaqi, Hayder Saad; Mutter, Kussay N.; Mustapha, Iskandar Shahrim; Omar, Ahmad Fairuz
2017-06-01
A brain tumour is an abnormal growth of tissue in the brain. Most tumour volume measurement is carried out manually by the radiographer and radiologist without relying on any automated program. This manual approach is a time-consuming task and may give inaccurate results. Treatment, diagnosis, and the signs and symptoms of brain tumours depend mainly on the tumour volume and its location. In this paper, an approach is proposed to improve volume measurement of brain tumours, together with a new method to determine the brain tumour location. The current study presents a hybrid of two methods. One is hidden Markov random field-expectation maximization (HMRF-EM), which provides a positive initial classification of the image. The other employs thresholding, which enables the final segmentation. In this approach, the tumour volume is calculated using voxel dimension measurements. The brain tumour location was determined accurately in T2-weighted MRI images using a new algorithm. According to the results, this process proved more useful than the manual method, providing the means to calculate the volume and determine the location of a brain tumour.
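The volume step itself is simple voxel bookkeeping once a binary segmentation exists; a sketch, where the voxel dimensions come from the scanner metadata:

```python
import numpy as np

def tumour_volume_ml(mask, voxel_mm):
    """Volume from a binary segmentation: count voxels and multiply by
    the voxel volume (dx*dy*dz in mm^3), reported in millilitres."""
    return mask.sum() * np.prod(voxel_mm) / 1000.0

# e.g. a boolean mask with 0.5 x 0.5 x 3.0 mm voxels:
# volume = tumour_volume_ml(mask, (0.5, 0.5, 3.0))
```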
NASA Technical Reports Server (NTRS)
Zehe, Michael J.; Jaffe, Richard L.
2010-01-01
High-level ab initio calculations have been performed on the exo and endo isomers of gas-phase tetrahydrodicyclopentadiene (THDCPD), a principal component of the jet fuel JP10, using the Gaussian Gx and Gx(MPx) composite methods, as well as the CBS-QB3 method, and using a variety of isodesmic and homodesmotic reaction schemes. The impetus for this work is to help resolve large discrepancies existing between literature measurements of the formation enthalpy ΔfH°(298) for exo-THDCPD. We find that use of the isodesmic bond separation reaction C10H16 + 14CH4 → 12C2H6 yields results for the exo isomer (JP10) in between the two experimentally accepted values, for the composite methods G3(MP2), G3(MP2)//B3LYP, and CBS-QB3. Application of this same isodesmic bond separation scheme to gas-phase adamantane yields a value for ΔfH°(298) within 5 kJ/mol of experiment. Isodesmic bond separation calculations for the endo isomer give a heat of formation in excellent agreement with the experimental measurement. Combining our calculated values for the gas-phase heat of formation with recent measurements of the heat of vaporization yields recommended values for ΔfH°(298) of the liquid of -126.4 and -114.7 kJ/mol for the exo and endo isomers, respectively.
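The Hess's-law step behind a bond separation scheme is simple arithmetic; a sketch using commonly tabulated gas-phase values for CH4 and C2H6, where dh_rxn stands for the composite-method reaction enthalpy:

```python
def dfh_from_bond_separation(dh_rxn, dfh_ch4=-74.6, dfh_c2h6=-84.0):
    """
    Hess's-law back-out of the target enthalpy from the computed
    reaction enthalpy of  C10H16 + 14 CH4 -> 12 C2H6  (all kJ/mol):
    dfH(C10H16) = 12*dfH(C2H6) - 14*dfH(CH4) - dH_rxn.
    The CH4/C2H6 defaults are standard tabulated values.
    """
    return 12.0 * dfh_c2h6 - 14.0 * dfh_ch4 - dh_rxn
```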
Research on the Calculation Method of Optical Path Difference of the Shanghai Tian Ma Telescope
NASA Astrophysics Data System (ADS)
Dong, J.; Fu, L.; Jiang, Y. B.; Liu, Q. H.; Gou, W.; Yan, F.
2016-03-01
Based on the Shanghai Tian Ma Telescope (TM), an optical path difference calculation method for the shaped Cassegrain antenna is presented in this paper. First, the mathematical model of the TM optics is established based on the antenna reciprocity theorem. Second, the TM sub-reflector and main reflector are fitted by Non-Uniform Rational B-Splines (NURBS). Finally, the optical path difference calculation is implemented, and the extended application of the Ruze optical path difference formulas to the TM is investigated. The method can be used to calculate the optical path difference distributions across the aperture field of the TM due to misalignments such as axial and lateral displacements of the feed and sub-reflector, or tilt of the sub-reflector. When the misalignment is small, the extended Ruze optical path difference formulas can be used to calculate the optical path difference quickly. The paper supports the real-time measurement and adjustment of the TM structure. The research is general and can serve as a reference for the optical path difference calculation of other radio telescopes with shaped surfaces.
Mutaf Yildiz, Belde; Sasanguie, Delphine; De Smedt, Bert; Reynvoet, Bert
2018-06-01
Home numeracy has been defined as the parent-child interactions that include experiences with numerical content in daily-life settings. Previous studies have commonly operationalized home numeracy either via questionnaires or via observational methods. These studies have shown that both types of measures are positively related to variability in children's mathematical skills. This study investigated whether these distinctive data collection methods index the same aspect of home numeracy. The frequency of home numeracy activities and parents' opinions about their children's mathematics education were assessed via a questionnaire. The amount of home numeracy talk was observed via two semi-structured videotaped parent-child activity sessions (Lego building and book reading). Children's mathematical skills were examined with two calculation subtests. We observed that parents' reports and the number of observed numeracy interactions were not related to each other. Interestingly, parents' reports of numeracy activities were positively related to children's calculation abilities, whereas the observed home numeracy talk was negatively related to children's calculation abilities. These results indicate that the two methods tap into different aspects of home numeracy. Statement of contribution What is already known on this subject? Home numeracy, that is, parent-child interactions that include experiences with numerical content, is supposed to have a positive impact on calculation and mathematical ability in general. Despite many positive results, some studies have failed to find such an association. Home numeracy has been assessed with questionnaires on the frequency of numerical experiences and with observations of parent-child interactions; however, these two measures of home numeracy have never been compared directly. What does this study add? This study assessed home numeracy through questionnaires and observations in 44 parent-child dyads and showed that home numeracy measures derived from questionnaires and observations are not related. Moreover, the relations between the reported frequency of home numeracy activities and calculation on the one hand, and parent-child number talk (derived from observations) and calculation on the other, run in opposite directions: the frequency of activities is positively related to calculation performance, and the amount of number talk is negatively related to calculation. This study shows that both measures tap into different aspects of home numeracy, which can be an important factor explaining inconsistencies in the literature. © 2018 The British Psychological Society.
Improved estimates of environmental copper release rates from antifouling products.
Finnie, Alistair A
2006-01-01
The US Navy Dome method for measuring copper release rates from antifouling paint in-service on ships' hulls can be considered to be the most reliable indicator of environmental release rates. In this paper, the relationship between the apparent copper release rate and the environmental release rate is established for a number of antifouling coating types using data from a variety of available laboratory, field and calculation methods. Apart from a modified Dome method using panels, all laboratory, field and calculation methods significantly overestimate the environmental release rate of copper from antifouling coatings. The difference is greatest for self-polishing copolymer antifoulings (SPCs) and smallest for certain erodible/ablative antifoulings, where the ASTM/ISO standard and the CEPE calculation method are seen to typically overestimate environmental release rates by factors of about 10 and 4, respectively. Where ASTM/ISO or CEPE copper release rate data are used for environmental risk assessment or regulatory purposes, it is proposed that the release rate values should be divided by a correction factor to enable more reliable generic environmental risk assessments to be made. Using a conservative approach based on a realistic worst case and accounting for experimental uncertainty in the data that are currently available, proposed default correction factors for use with all paint types are 5.4 for the ASTM/ISO method and 2.9 for the CEPE calculation method. Further work is required to expand this data-set and refine the correction factors through correlation of laboratory measured and calculated copper release rates with the direct in situ environmental release rate for different antifouling paints under a range of environmental conditions.
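The proposed default correction factors (5.4 for the ASTM/ISO method, 2.9 for the CEPE method, both taken from the paper) amount to a simple division; a minimal sketch, with a hypothetical measured rate:

```python
# Default correction factors proposed in the paper
CORRECTION = {"ASTM/ISO": 5.4, "CEPE": 2.9}

def environmental_release_rate(measured_rate, method):
    """Divide a laboratory/calculated copper release rate (e.g. ug/cm2/day)
    by the proposed default correction factor for the given method."""
    return measured_rate / CORRECTION[method]

print(environmental_release_rate(25.0, "ASTM/ISO"))  # hypothetical measured rate
```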
A Novel Attitude Estimation Algorithm Based on the Non-Orthogonal Magnetic Sensors
Zhu, Jianliang; Wu, Panlong; Bo, Yuming
2016-01-01
Because the existing extremum ratio method for projectile attitude measurement is vulnerable to random disturbance, a novel integral ratio method is proposed to calculate the projectile attitude. First, the non-orthogonal measurement theory of the magnetic sensors is analyzed. It is found that the projectile rotating velocity is constant over one spinning circle and that the attitude error is actually the pitch error. Next, by investigating the model of the extremum ratio method, an integral ratio mathematical model is established to improve the anti-disturbance performance. Finally, by combining the preprocessed magnetic sensor data based on the least-squares method and the rotating extremum features in one cycle, the analytical expression of the proposed integral ratio algorithm is derived with respect to the pitch angle. The simulation results show that the proposed integral ratio method gives more accurate attitude calculations than does the extremum ratio method, and that the attitude error variance can decrease by more than 90%. Compared to the extremum ratio method (which collects only a single data point in one rotation cycle), the proposed integral ratio method can utilize all of the data collected in the high-spin environment, making it a clearly superior calculation approach that can be applied under actual projectile flight disturbances. PMID:27213389
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardcastle, Nicholas; Bayliss, Adam; Wong, Jeannie Hsiu Ding
2012-08-15
Purpose: A recent field safety notice from TomoTherapy detailed the underdosing of small, off-axis targets when receiving high doses per fraction. This is due to angular undersampling in the dose calculation gantry angles. This study evaluates a correction method to reduce the underdosing, to be implemented in the current version (v4.1) of the TomoTherapy treatment planning software. Methods: The correction method, termed 'Super Sampling', involved tripling the number of gantry angles from which the dose is calculated during optimization and dose calculation. Radiochromic film was used to measure the dose to small targets at various off-axis distances receiving a minimum of 21 Gy in one fraction. Measurements were also performed for single small targets at the center of the Lucy phantom, using radiochromic film and the dose magnifying glass (DMG). Results: Without super sampling, the peak dose deficit increased from 0% to 18% for a 10 mm target and 0% to 30% for a 5 mm target as off-axis target distances increased from 0 to 16.5 cm. When super sampling was turned on, the dose deficit trend was removed and all peak doses were within 5% of the planned dose. For measurements in the Lucy phantom at 9.7 cm off-axis, the positional and dose magnitude accuracy using super sampling was verified using radiochromic film and the DMG. Conclusions: A correction method implemented in the TomoTherapy treatment planning system which triples the angular sampling of the gantry angles used during optimization and dose calculation removes the underdosing for targets as small as 5 mm diameter, up to 16.5 cm off-axis, receiving up to 21 Gy.
Control Method Stretches Suspensions by Measuring the Sag of Strands in Cable-Stayed Bridges
NASA Astrophysics Data System (ADS)
Bętkowski, Piotr
2017-10-01
The article describes a method for evaluating and validating the correctness of measurements from dynamometers (strain gauges, tension meters) used in suspension systems. Checking monitoring devices such as dynamometers is recommended during inspections of suspension bridges. A control device (dynamometer) works together with an anchor, and the quality of this cooperation can have a decisive impact on the correctness of the results. A method is described that determines the stress in a strand (cable) from the sag of the stayed cable; this method can be used to check the accuracy of measuring devices directly on the bridge. By measuring the strand sag, information is obtained about the force acting in the suspension cable. A digital camera is used for the measurement of cable sag. Ideally, a control measurement should be made independently of the controlled parameter while still verifying that parameter directly. In practice, however, the controlled parameter is often not obtained by direct measurement but by calculation from other measured parameters, as in the method described in the article. In such cases the problem arises of accumulating errors from the measurement of the intermediate parameters and of evaluating the reliability of the results. Control calculations for measuring devices installed in a bridge are of doubtful value without a procedure for estimating uncertainty. Such an accuracy assessment can be performed using interval numbers, which allow a parametric analysis of how the accuracy of the individual parameters relates to the uncertainty of the results. The measurement method, the relations and analytical formulas, and a numerical example can be found in the text of the article.
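The sag-to-force relation is not spelled out in the abstract; a minimal sketch assuming the standard parabolic approximation for a cable under self-weight, H = wL²/(8f). The stay parameters below are hypothetical, and real stays are inclined, which requires the inclined-chord variant of this formula:

```python
import math

def horizontal_tension_from_sag(w, span, sag):
    """Parabolic approximation for a cable under self-weight:
    H = w * L**2 / (8 * f), with w in N/m, span L and mid-span sag f in metres."""
    return w * span**2 / (8.0 * sag)

# Hypothetical stay: 60 kg/m cable over a 120 m chord with 0.35 m measured sag
w = 60.0 * 9.81
print(f"H = {horizontal_tension_from_sag(w, 120.0, 0.35) / 1e3:.0f} kN")
```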
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Y; Singh, H; Islam, M
2014-06-01
Purpose: Output dependence on field size for uniform scanning beams, and the accuracy of treatment planning system (TPS) calculation are not well studied. The purpose of this work is to investigate the dependence of output on field size for uniform scanning beams and compare it among TPS calculation, measurements and Monte Carlo simulations. Methods: Field size dependence was studied using various field sizes between 2.5 cm diameter to 10 cm diameter. The field size factor was studied for a number of proton range and modulation combinations based on output at the center of spread out Bragg peak normalized to amore » 10 cm diameter field. Three methods were used and compared in this study: 1) TPS calculation, 2) ionization chamber measurement, and 3) Monte Carlos simulation. The XiO TPS (Electa, St. Louis) was used to calculate the output factor using a pencil beam algorithm; a pinpoint ionization chamber was used for measurements; and the Fluka code was used for Monte Carlo simulations. Results: The field size factor varied with proton beam parameters, such as range, modulation, and calibration depth, and could decrease over 10% from a 10 cm to 3 cm diameter field for a large range proton beam. The XiO TPS predicted the field size factor relatively well at large field size, but could differ from measurements by 5% or more for small field and large range beams. Monte Carlo simulations predicted the field size factor within 1.5% of measurements. Conclusion: Output factor can vary largely with field size, and needs to be accounted for accurate proton beam delivery. This is especially important for small field beams such as in stereotactic proton therapy, where the field size dependence is large and TPS calculation is inaccurate. Measurements or Monte Carlo simulations are recommended for output determination for such cases.« less
Methods for determining the internal thrust of scramjet engine modules from experimental data
NASA Technical Reports Server (NTRS)
Voland, Randall T.
1990-01-01
Methods for calculating zero-fuel internal drag of scramjet engine modules from experimental measurements are presented. These methods include two control-volume approaches, and a pressure and skin-friction integration. The three calculation techniques are applied to experimental data taken during tests of a version of the NASA parametric scramjet. The methods agree to within seven percent of the mean value of zero-fuel internal drag even though several simplifying assumptions are made in the analysis. The mean zero-fuel internal drag coefficient for this particular engine is calculated to be 0.150. The zero-fuel internal drag coefficient when combined with the change in engine axial force with and without fuel defines the internal thrust of an engine.
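A sketch of the final combination step under one plausible sign convention; the abstract does not state signs, so the convention here is an assumption:

```python
def internal_thrust(delta_axial_force, zero_fuel_internal_drag):
    """Assumed convention (thrust-positive upstream): fuel-off the engine
    produces internal drag -D0, fuel-on it produces thrust T, so the measured
    fuel-on minus fuel-off increment is dF = T + D0 and hence T = dF - D0."""
    return delta_axial_force - zero_fuel_internal_drag

# Hypothetical coefficients, using the paper's mean zero-fuel drag of 0.150
print(internal_thrust(0.42, 0.150))
```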
Index cost estimate based BIM method - Computational example for sports fields
NASA Astrophysics Data System (ADS)
Zima, Krzysztof
2017-07-01
The paper presents an example of cost estimation in the early phase of a project. A fragment of a relational database containing solutions, descriptions, geometry of construction objects, and unit costs of sports facilities is shown. Calculations with the Index Cost Estimate Based BIM method using Case-Based Reasoning are presented as well. The article presents local and global similarity measurement and an example of a BIM-based quantity takeoff process. The outcome of cost calculations based on the CBR method is presented as the final result of the calculations.
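A minimal sketch of the Case-Based Reasoning similarity aggregation the article refers to: per-attribute (local) similarities are combined into a global similarity as a weighted mean. The attributes, weights, and similarity functions below are hypothetical:

```python
def global_similarity(case, query, local_sims, weights):
    """Weighted aggregate of per-attribute (local) similarities, the usual
    nearest-neighbour scheme in CBR cost estimation."""
    num = sum(weights[a] * local_sims[a](case[a], query[a]) for a in weights)
    return num / sum(weights.values())

# Hypothetical attributes for a sports-field case base
local_sims = {
    "area":    lambda c, q: 1 - abs(c - q) / max(c, q),   # numeric attribute
    "surface": lambda c, q: 1.0 if c == q else 0.0,       # symbolic attribute
}
weights = {"area": 0.7, "surface": 0.3}
case  = {"area": 7200.0, "surface": "artificial turf"}
query = {"area": 6800.0, "surface": "artificial turf"}
print(round(global_similarity(case, query, local_sims, weights), 3))
```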
Dose calculation for electron therapy using an improved LBR method.
Gebreamlak, Wondesen T; Tedeschi, David J; Alkhatib, Hassaan A
2013-07-01
To calculate the percentage depth dose (PDD) of any irregularly shaped electron beam using a modified lateral build-up ratio (LBR) method. Percentage depth dose curves were measured using 6, 9, 12, and 15 MeV electron beam energies for applicator cone sizes of 6 × 6, 10 × 10, 14 × 14, and 20 × 20 cm². Circular cutouts for each cone were prepared from 2.0 cm diameter to the maximum possible size for each cone. In addition, three irregular cutouts were prepared. The LBR for each circular cutout was calculated from the measured PDD curve using the open field of the 14 × 14 cm² cone as the reference field. Using the LBR values and the radius of the circular cutouts, the corresponding lateral spread parameter [σ_R(z)] of the electron shower was calculated. Unlike the commonly accepted assumption that σ_R(z) is independent of cutout size, it is shown that its value increases linearly with circular cutout size (R). Using this characteristic of the lateral spread parameter, the PDD curves of irregularly shaped cutouts were calculated. Finally, the calculated PDD curves were compared with measured PDD curves. In this research, it is shown that the lateral spread parameter σ_R(z) increases with cutout size. For radii of circular cutout sizes up to the equilibrium range of the electron beam, the increase of σ_R(z) with the cutout size is linear. The percentage difference of the calculated PDD curve from the measured PDD data for irregularly shaped cutouts was under 1.0% in the region between the surface and therapeutic range of the electron beam. Similar results were obtained for four electron beam energies (6, 9, 12, and 15 MeV).
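A sketch of the σ_R(z) extraction, assuming the usual Gaussian pencil-beam form of the lateral build-up ratio, LBR(z, R) = 1 − exp(−R²/σ_R(z)²); the LBR values below are hypothetical:

```python
import numpy as np

def sigma_from_lbr(radius_cm, lbr):
    """Invert the Gaussian pencil-beam relation LBR = 1 - exp(-R^2 / sigma^2)
    for the lateral spread parameter sigma at one depth."""
    return radius_cm / np.sqrt(-np.log(1.0 - lbr))

# Hypothetical LBRs measured at one depth for a set of circular cutouts
radii = np.array([1.0, 1.5, 2.0, 2.5, 3.0])        # cm
lbrs  = np.array([0.62, 0.80, 0.90, 0.95, 0.975])  # dimensionless

sigmas = sigma_from_lbr(radii, lbrs)
slope, intercept = np.polyfit(radii, sigmas, 1)    # paper: sigma grows linearly with R
print(f"sigma(R) ~ {intercept:.3f} + {slope:.3f} * R  (cm)")
```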
Agreement between methods of measurement of mean aortic wall thickness by MRI.
Rosero, Eric B; Peshock, Ronald M; Khera, Amit; Clagett, G Patrick; Lo, Hao; Timaran, Carlos
2009-03-01
To assess the agreement between three methods of calculation of mean aortic wall thickness (MAWT) using magnetic resonance imaging (MRI). High-resolution MRI of the infrarenal abdominal aorta was performed on 70 subjects with a history of coronary artery disease who were part of a multi-ethnic population-based sample. MAWT was calculated as the mean distance between the adventitial and luminal aortic boundaries using three different methods: average distance at four standard positions (AWT-4P), average distance at 100 automated positions (AWT-100P), and using a mathematical computation derived from the total vessel and luminal areas (AWT-VA). Bland-Altman plots and Passing-Bablok regression analyses were used to assess agreement between methods. Bland-Altman analyses demonstrated a positive bias of 3.02 ± 7.31% between the AWT-VA and the AWT-4P methods, and of 1.76 ± 6.82% between the AWT-100P and the AWT-4P methods. Passing-Bablok regression analyses demonstrated constant bias between the AWT-4P method and the other two methods. Proportional bias was, however, not evident among the three methods. MRI methods of measurement of MAWT using a limited number of positions of the aortic wall systematically underestimate the MAWT value compared with the method that calculates MAWT from the vessel areas. Copyright (c) 2009 Wiley-Liss, Inc.
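A minimal sketch of the percent-difference Bland-Altman statistics used for the comparison, with hypothetical MAWT values:

```python
import numpy as np

def bland_altman(a, b):
    """Percent-difference Bland-Altman statistics between two methods:
    bias (mean pairwise percent difference) and 95% limits of agreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pct_diff = 100.0 * (a - b) / ((a + b) / 2.0)
    bias, sd = pct_diff.mean(), pct_diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical MAWT values (mm) from the vessel-area and 4-position methods
awt_va = [2.31, 2.10, 2.55, 2.42, 2.20]
awt_4p = [2.25, 2.02, 2.49, 2.38, 2.11]
print(bland_altman(awt_va, awt_4p))
```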
Simulation of Thermographic Responses of Delaminations in Composites with Quadrupole Method
NASA Technical Reports Server (NTRS)
Winfree, William P.; Zalameda, Joseph N.; Howell, Patricia A.; Cramer, K. Elliott
2016-01-01
The application of the quadrupole method for simulating thermal responses of delaminations in carbon fiber reinforced epoxy composite materials is presented. The method solves for the flux at the interface containing the delamination. From the interface flux, the temperature at the surface is calculated. While the results presented are for single-sided measurements with flash heating, expansion of the technique to arbitrary temporal flux heating or through-transmission measurements is simple. The quadrupole method is shown to have two distinct advantages relative to finite element or finite difference techniques. First, it is straightforward to incorporate arbitrarily shaped delaminations into the simulation. Second, the quadrupole method enables calculation of the thermal response at only the times of interest. This, combined with a significant reduction in the number of degrees of freedom for the same simulation quality, results in a reduction of the computation time by at least an order of magnitude. Therefore, it is a more viable technique for model-based inversion of thermographic data. Results for simulations of delaminations in composites are presented and compared to measurements and finite element method results.
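To illustrate the "only the times of interest" advantage, a minimal quadrupole sketch for the simplest case, a homogeneous slab with an adiabatic rear face under a Dirac flash, inverted with the Gaver-Stehfest algorithm. The material values are hypothetical and the delamination-interface machinery of the paper is not reproduced:

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace-domain function F(p)."""
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        V = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            V += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j) *
                math.factorial(j - 1) * math.factorial(k - j) *
                math.factorial(2 * j - k))
        total += (-1) ** (k + N // 2) * V * F(k * ln2 / t)
    return total * ln2 / t

# Rear-face response of a slab of thickness e to a Dirac flash Q on the front,
# rear face adiabatic.  Quadrupole relation: theta_rear = Q / (lam * k * sinh(k e)),
# with k = sqrt(p / alpha).  Values below are hypothetical (SI units).
lam, alpha, e, Q = 30.0, 1.0e-5, 2.0e-3, 1000.0

def theta_rear(p):
    k = math.sqrt(p / alpha)
    return Q / (lam * k * math.sinh(k * e))

for t in (0.02, 0.05, 0.1, 0.2):      # evaluate only the times of interest
    print(t, stehfest_invert(theta_rear, t))
```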
NASA Astrophysics Data System (ADS)
Ji, Yanju; Wang, Hongyuan; Lin, Jun; Guan, Shanshan; Feng, Xue; Li, Suyi
2014-12-01
Performance testing and calibration of airborne transient electromagnetic (ATEM) systems are conducted to obtain the electromagnetic response of ground loops. It is necessary to accurately calculate the mutual inductance between transmitting coils, receiving coils and ground loops to compute the electromagnetic responses. Therefore, based on Neumann's formula and the measured attitudes of the coils, this study derives the formulas for the mutual inductance between circular and quadrilateral coils, circular and circular coils, and quadrilateral and quadrilateral coils using a rotation matrix, and then proposes a method to calculate the mutual inductance between two coils at arbitrary attitudes (roll, pitch, and yaw). Using simulated coil-attitude data for an ATEM system, we calculate the mutual inductance of transmitting coils and ground loops at different attitudes, analyze the impact of coil attitudes on mutual inductance, and compare the computational accuracy and speed of the proposed method with those of other methods using the same data. The results show that the relative error of the calculation is smaller and that the speed-up is significant compared to other methods. Moreover, the proposed method is also applicable to the mutual inductance calculation of polygonal and circular coils at arbitrary attitudes and is readily extensible.
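A minimal numerical sketch of Neumann's formula with a rotation matrix for coil attitude, the core of the calculation described; the segment counts and sample geometry are hypothetical, and the paper's closed-form derivations are not reproduced:

```python
import numpy as np

def rot(roll, pitch, yaw):
    """Rotation matrix from roll, pitch, yaw (radians), applied as Rz @ Ry @ Rx."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def circle(radius, attitude, center, n=360):
    """Points of a circular coil with the given attitude (roll, pitch, yaw)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    pts = np.column_stack([radius * np.cos(t), radius * np.sin(t), np.zeros(n)])
    return pts @ rot(*attitude).T + np.asarray(center)

def mutual_inductance(loop1, loop2):
    """Neumann's formula, M = mu0/(4 pi) * sum_i sum_j (dl_i . dl_j) / |r_ij|,
    evaluated by discretizing both loops into straight segments."""
    mu0 = 4e-7 * np.pi
    dl1 = np.roll(loop1, -1, axis=0) - loop1
    dl2 = np.roll(loop2, -1, axis=0) - loop2
    mid1 = loop1 + 0.5 * dl1
    mid2 = loop2 + 0.5 * dl2
    r = np.linalg.norm(mid1[:, None, :] - mid2[None, :, :], axis=-1)
    return mu0 / (4.0 * np.pi) * np.sum((dl1 @ dl2.T) / r)

# Hypothetical example: 5 m transmitting coil 30 m above a 10 m ground loop,
# with a 5 degree pitch on the transmitter
tx = circle(5.0, (0.0, np.radians(5.0), 0.0), (0.0, 0.0, 30.0))
gnd = circle(10.0, (0.0, 0.0, 0.0), (0.0, 0.0, 0.0))
print(mutual_inductance(tx, gnd))
```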
On Calculating the Zero-Gravity Surface Figure of a Mirror
NASA Technical Reports Server (NTRS)
Bloemhof, Eric E.
2010-01-01
An analysis of the classical method of calculating the zero-gravity surface figure of a mirror from surface-figure measurements in the presence of gravity has led to improved understanding of conditions under which the calculations are valid. In this method, one measures the surface figure in two or more gravity-reversed configurations, then calculates the zero-gravity surface figure as the average of the surface figures determined from these measurements. It is now understood that gravity reversal is not, by itself, sufficient to ensure validity of the calculations: It is also necessary to reverse mounting forces, for which purpose one must ensure that mounting-fixture/mirror contacts are located either at the same places or else sufficiently close to the same places in both gravity-reversed configurations. It is usually not practical to locate the contacts at the same places, raising the question of how close is sufficiently close. The criterion for sufficient closeness is embodied in the St. Venant principle, which, in the present context, translates to a requirement that the distance between corresponding gravity-reversed mounting positions be small in comparison to their distances to the optical surface of the mirror. The necessity of reversing mount forces is apparent in the behavior of the equations familiar from finite element analysis (FEA) that govern deformation of the mirror.
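The averaging step itself is trivial; a minimal sketch with synthetic data showing how the gravity print-through cancels when both gravity and mount forces are reversed:

```python
import numpy as np

def zero_g_figure(figure_up, figure_down):
    """Classical estimate: with gravity AND the mount forces reversed between
    the two measurements, the gravity print-through cancels in the average."""
    return 0.5 * (np.asarray(figure_up) + np.asarray(figure_down))

# Synthetic demo: true zero-g figure plus a gravity sag that flips sign
x = np.linspace(-1.0, 1.0, 5)
zero_g = 0.1 * x**2                      # hypothetical true figure
sag = 0.02 * (1.0 - x**2)                # gravity deflection
print(zero_g_figure(zero_g + sag, zero_g - sag) - zero_g)   # ~0 everywhere
```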
New soil water sensors for irrigation management
USDA-ARS?s Scientific Manuscript database
Effective irrigation management is key to obtaining the most crop production per unit of water applied and increasing production in the face of competing demands on water resources. Management methods have included calculating crop water needs based on weather station measurements, calculating soil ...
A simplified method for calculating temperature time histories in cryogenic wind tunnels
NASA Technical Reports Server (NTRS)
Stallings, R. L., Jr.; Lamb, M.
1976-01-01
A method for calculating average temperature time histories of the test medium and tunnel walls in cryogenic wind tunnels has been developed. Results are in general agreement with limited preliminary experimental measurements obtained in a 13.5-inch pilot cryogenic wind tunnel.
Method and apparatus for detecting phycocyanin-pigmented algae and bacteria from reflected light
NASA Technical Reports Server (NTRS)
Vincent, Robert (Inventor)
2013-01-01
The present invention relates to a method of detecting phycocyanin algae or bacteria in water from reflected light, and also includes devices for the measurement, calculation and transmission of data relating to that method.
Method and apparatus for detecting phycocyanin-pigmented algae and bacteria from reflected light
NASA Technical Reports Server (NTRS)
Vincent, Robert (Inventor)
2006-01-01
The present invention relates to a method of detecting phycocyanin algae or bacteria in water from reflected light, and also includes devices for the measurement, calculation and transmission of data relating to that method.
Lens of the eye dose calculation for neuro-interventional procedures and CBCT scans of the head
NASA Astrophysics Data System (ADS)
Xiong, Zhenyu; Vijayan, Sarath; Rana, Vijay; Jain, Amit; Rudin, Stephen; Bednarek, Daniel R.
2016-03-01
The aim of this work is to develop a method to calculate lens dose for fluoroscopically-guided neuro-interventional procedures and for CBCT scans of the head. EGSnrc Monte Carlo software is used to determine the dose to the lens of the eye for the projection geometry and exposure parameters used in these procedures. This information is provided by a digital CAN bus on the Toshiba Infinix C-Arm system and is saved in a log file by the real-time skin-dose tracking system (DTS) we previously developed. The x-ray beam spectra on this machine were simulated using BEAMnrc. These spectra were compared to those determined by SpekCalc and validated through measured percent-depth-dose (PDD) curves and half-value-layer (HVL) measurements. We simulated CBCT procedures in DOSXYZnrc for a CTDI head phantom and compared the surface dose distribution with that measured with Gafchromic film, and also for an SK150 head phantom and compared the lens dose with that measured with an ionization chamber. Both methods demonstrated good agreement. Organ dose calculated for a simulated neuro-interventional procedure using DOSXYZnrc with the Zubal CT voxel phantom agreed within 10% with that calculated by the PCXMC code for most organs. To calculate the lens dose in a neuro-interventional procedure, we developed a library of normalized lens dose values for different projection angles and kVp's. The total lens dose is then calculated by summing the values over all beam projections and can be included on the DTS report at the end of the procedure.
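A sketch of the summation over projections, with a hypothetical normalized-dose library on a (gantry angle, kVp) grid; interpolating between tabulated entries and weighting each projection by its output are assumptions not stated in the abstract:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical library: normalized lens dose (mGy per unit output) tabulated
# on a (gantry angle, kVp) grid, standing in for the Monte Carlo library
angles = np.array([-90.0, -45.0, 0.0, 45.0, 90.0])
kvps   = np.array([70.0, 80.0, 90.0, 100.0])
table  = np.random.default_rng(0).uniform(0.2, 1.0, (angles.size, kvps.size))
lens_dose_per_output = RegularGridInterpolator((angles, kvps), table)

def total_lens_dose(events):
    """Sum the normalized lens dose over all beam projections in the log file,
    each weighted by that projection's output (e.g. air kerma)."""
    return sum(out * float(lens_dose_per_output([[ang, kvp]])[0])
               for ang, kvp, out in events)

# (angle deg, kVp, output) triples for three hypothetical projections
print(total_lens_dose([(-30.0, 80.0, 12.0), (0.0, 90.0, 8.5), (15.0, 80.0, 4.2)]))
```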
Scoping estimates of the LDEF satellite induced radioactivity
NASA Technical Reports Server (NTRS)
Armstrong, Tony W.; Colborn, B. L.
1990-01-01
The Long Duration Exposure Facility (LDEF) satellite was recovered after almost six years in space. It was well-instrumented with ionizing radiation dosimeters, including thermoluminescent dosimeters, plastic nuclear track detectors, and a variety of metal foil samples for measuring nuclear activation products. The extensive LDEF radiation measurements provide the type of radiation environments and effects data needed to evaluate and help resolve uncertainties in present radiation models and calculational methods. A calculational program was established to aid in LDEF data interpretation and to utilize LDEF data for assessing the accuracy of current models. A summary of the calculational approach is presented. The purpose of the reported calculations is to obtain a general indication of: (1) the importance of different space radiation sources (trapped, galactic, and albedo protons, and albedo neutrons); (2) the importance of secondary particles; and (3) the spatial dependence of the radiation environments and effects expected within the spacecraft. The calculational method uses the High Energy Transport Code (HETC) to estimate the importance of different sources and secondary particles in terms of fluence, absorbed dose in tissue and silicon, and induced radioactivity as a function of depth in aluminum.
Electrical conductivity of electrolytes applicable to natural waters from 0 to 100 degrees C
McCleskey, R. Blaine
2011-01-01
The electrical conductivities of 34 electrolyte solutions found in natural waters ranging from 10⁻⁴ to 1 mol·kg⁻¹ in concentration and from 5 to 90 °C have been determined. High-quality electrical conductivity data for numerous electrolytes exist in the scientific literature, but the data do not span the concentration or temperature ranges of many electrolytes in natural waters. Methods for calculating the electrical conductivities of natural waters have incorporated these data from the literature, and as a result these methods cannot be used to reliably calculate the electrical conductivity over a large enough range of temperature and concentration. For the single-electrolyte solutions, empirical equations were developed that relate electrical conductivity to temperature and molality. For the 942 molar conductivity determinations for single electrolytes from this study, the mean relative difference between the calculated and measured values was 0.1%. The calculated molar conductivity was compared to literature data, and the mean relative difference for 1978 measurements was 0.2%. These data provide an improved basis for calculating electrical conductivity for most natural waters.
Mao, Debin; Lookman, Richard; Van De Weghe, Hendrik; Vanermen, Guido; De Brucker, Nicole; Diels, Ludo
2009-04-03
An assessment of the aqueous solubility (leaching potential) of soil contamination with petroleum hydrocarbons (TPH) is important in the context of the evaluation of (migration) risks and soil/groundwater remediation. Field measurements using monitoring wells often overestimate real TPH concentrations when pure oil is present in the screened interval of the well. This paper presents a method to calculate TPH equilibrium concentrations in groundwater using soil analysis by high-performance liquid chromatography followed by comprehensive two-dimensional gas chromatography (HPLC-GC×GC). The oil in the soil sample is divided into 79 defined hydrocarbon fractions on two GC×GC color plots. To each of these fractions a representative water solubility is assigned. The overall equilibrium water solubility of the non-aqueous phase liquid (NAPL) present in the sample and the water phase's chemical composition (in terms of the 79 fractions defined) are then calculated using Raoult's law. The calculation method was validated using soil spiked with 13 different TPH mixtures and 1 field-contaminated soil. Measured water solubilities from a column recirculation equilibration experiment agreed well with the calculated equilibrium concentrations and water phase TPH composition.
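A minimal sketch of the Raoult's-law step (79 fractions in the paper, three hypothetical ones here): each fraction contributes its NAPL mole fraction times its pure-fraction aqueous solubility, and the contributions sum to the overall equilibrium solubility:

```python
def equilibrium_tph(fractions):
    """Raoult's-law estimate of the water-phase TPH composition: each fraction
    contributes x_i * S_i, with x_i its mole fraction in the NAPL and S_i the
    aqueous solubility assigned to that (sub-cooled liquid) fraction."""
    total_moles = sum(n for n, _ in fractions)
    conc = [(n / total_moles) * s for n, s in fractions]
    return conc, sum(conc)

# (moles in NAPL, assigned pure-fraction solubility mg/L), hypothetical values
fractions = [(0.40, 18.0), (1.20, 1.5), (2.10, 0.02)]
per_fraction, total = equilibrium_tph(fractions)
print(per_fraction, total)
```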
Radiative decay rate of excitons in square quantum wells: Microscopic modeling and experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khramtsov, E. S.; Grigoryev, P. S.; Ignatiev, I. V.
The binding energy and the corresponding wave function of excitons in GaAs-based finite square quantum wells (QWs) are calculated by the direct numerical solution of the three-dimensional Schrödinger equation. Precise results for the lowest exciton state are obtained by discretizing the Hamiltonian with a high-order finite-difference scheme. The microscopic calculations are compared with the results obtained by the standard variational approach. The exciton binding energies found by the two methods coincide within 0.1 meV over a wide range of QW widths. The radiative decay rate is calculated for QWs of various widths using the exciton wave functions obtained by the direct and variational methods. The radiative decay rates are compared with the experimental data measured for high-quality GaAs/AlGaAs and InGaAs/GaAs QW heterostructures grown by molecular beam epitaxy. The calculated and measured values are in good agreement, though slight differences with earlier calculations of the radiative decay rate are observed.
Program helps quickly calculate deviated well path
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, M.P.
1993-11-22
A BASIC computer program quickly calculates the angle and measured depth of a simple directional well given only the true vertical depth and total displacement of the target. Many petroleum engineers and geologists need a quick, easy method to calculate the angle and measured depth necessary to reach a target in a proposed deviated well bore. Too many of the existing programs are large and require much input data. The drilling literature is full of equations and methods to calculate the course of well paths from surveys taken after a well is drilled; very little information, however, covers how to calculate well bore trajectories for proposed wells from limited data. Furthermore, many of the equations are quite complex and difficult to use. A figure lists a computer program with the equations to calculate the well bore trajectory necessary to reach a given displacement and true vertical depth (TVD) for a simple build plan. It can be run on an IBM-compatible computer with MS-DOS version 5 or higher, QBasic, or any BASIC that does not require line numbers. The QBasic 4.5 compiler will also run the program. The equations are based on conventional geometry and trigonometry.
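The listed BASIC program is not reproduced here; below is a sketch of the same conventional build-and-hold geometry, solved by bisection (the variable names and sample target are hypothetical):

```python
import math

def build_and_hold(tvd, disp, kop, build_rate_deg_per_100ft):
    """Solve the simple build-and-hold geometry for hold angle and measured depth:
        TVD = KOP + R sin(a) + L cos(a)
        X   = R (1 - cos(a)) + L sin(a)
    where R is the build radius and L the hold length; bisection on angle a."""
    R = 100.0 * 180.0 / (math.pi * build_rate_deg_per_100ft)  # build radius, ft

    def x_err(a):
        L = (tvd - kop - R * math.sin(a)) / math.cos(a)       # hold length
        return R * (1.0 - math.cos(a)) + L * math.sin(a) - disp

    lo, hi = 1e-6, math.radians(89.0)
    for _ in range(80):                                       # bisection
        mid = 0.5 * (lo + hi)
        if x_err(lo) * x_err(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    a = 0.5 * (lo + hi)
    L = (tvd - kop - R * math.sin(a)) / math.cos(a)
    return math.degrees(a), kop + R * a + L                   # angle, measured depth

# Hypothetical target: 9000 ft TVD, 3000 ft displacement, 2000 ft kickoff, 2 deg/100 ft
print(build_and_hold(9000.0, 3000.0, 2000.0, 2.0))
```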
Lin, Hsin-Hon; Peng, Shin-Lei; Wu, Jay; Shih, Tian-Yu; Chuang, Keh-Shih; Shih, Cheng-Ting
2017-05-01
Osteoporosis is a disease characterized by a degradation of bone structures. Various methods have been developed to diagnose osteoporosis by measuring bone mineral density (BMD) of patients. However, BMDs from these methods were not equivalent and were incomparable. In addition, partial volume effect introduces errors in estimating bone volume from computed tomography (CT) images using image segmentation. In this study, a two-compartment model (TCM) was proposed to calculate bone volume fraction (BV/TV) and BMD from CT images. The TCM considers bones to be composed of two sub-materials. Various equivalent BV/TV and BMD can be calculated by applying corresponding sub-material pairs in the TCM. In contrast to image segmentation, the TCM prevented the influence of the partial volume effect by calculating the volume percentage of sub-material in each image voxel. Validations of the TCM were performed using bone-equivalent uniform phantoms, a 3D-printed trabecular-structural phantom, a temporal bone flap, and abdominal CT images. By using the TCM, the calculated BV/TVs of the uniform phantoms were within percent errors of ±2%; the percent errors of the structural volumes with various CT slice thickness were below 9%; the volume of the temporal bone flap was close to that from micro-CT images with a percent error of 4.1%. No significant difference (p > 0.01) was found between the areal BMD of lumbar vertebrae calculated using the TCM and measured using dual-energy X-ray absorptiometry. In conclusion, the proposed TCM could be applied to diagnose osteoporosis, while providing a basis for comparing various measurement methods.
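A minimal sketch of the two-compartment idea: each voxel's CT number is treated as a linear mixture of two sub-materials, so a fractional bone volume is recovered instead of an all-or-nothing segmentation label. The HU endpoints below are hypothetical:

```python
def two_compartment(hu_voxel, hu_a, hu_b):
    """Volume fraction of sub-material B in a voxel, assuming its CT number is
    a linear mixture of the two sub-materials: HU = (1 - f)*HU_a + f*HU_b."""
    f = (hu_voxel - hu_a) / (hu_b - hu_a)
    return min(max(f, 0.0), 1.0)          # clamp against noise

def bv_tv(hu_voxels, hu_marrow=0.0, hu_bone=1200.0):
    """BV/TV over a region: mean bone fraction of all voxels, which avoids the
    all-or-nothing voxel labelling (partial volume) of threshold segmentation."""
    fractions = [two_compartment(hu, hu_marrow, hu_bone) for hu in hu_voxels]
    return sum(fractions) / len(fractions)

print(bv_tv([850.0, 300.0, 40.0, 1150.0]))  # hypothetical trabecular ROI
```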
SU-F-T-441: Dose Calculation Accuracy in CT Images Reconstructed with Artifact Reduction Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, C; Chan, S; Lee, F
Purpose: Accuracy of radiotherapy dose calculation in patients with surgical implants is complicated by two factors. First is the accuracy of the CT number; second is the dose calculation accuracy. We compared measured dose with dose calculated on CT images reconstructed with FBP and an artifact reduction algorithm (OMAR, Philips) for a phantom with high density inserts. Dose calculations were done with Varian AAA and AcurosXB. Methods: A phantom was constructed with solid water in which 2 titanium or stainless steel rods could be inserted. The phantom was scanned with the Philips Brilliance Big Bore CT. Image reconstruction was done with FBP and OMAR. Two 6 MV single-field photon plans were constructed for each phantom. Radiochromic films were placed at different locations to measure the dose deposited. One plan has normal incidence on the titanium/steel rods. In the second plan, the beam is at almost glancing incidence on the metal rods. Measurements were then compared with dose calculated with AAA and AcurosXB. Results: The use of OMAR images slightly improved the dose calculation accuracy. The agreement between measured and calculated dose was best with AcurosXB and images reconstructed with OMAR. Dose calculated on the titanium phantom had better agreement with measurement. Large discrepancies were seen at points directly above and below the high density inserts. Both AAA and AcurosXB underestimated the dose directly above the metal surface, while overestimating the dose below the metal surface. Doses measured downstream of metal were all within 3% of calculated values. Conclusion: When doing treatment planning for patients with metal implants, care must be taken to acquire correct CT images to improve dose calculation accuracy. Moreover, great discrepancies between measured and calculated dose were observed at the metal/tissue interface. Care must be taken in estimating the dose in critical structures that come into contact with metals.
Mouney, Meredith C; Townsend, Wendy M; Moore, George E
2012-12-01
To determine whether differences exist in the calculated intraocular lens (IOL) strengths of a population of adult horses and to assess the association between calculated IOL strength and horse height, body weight, and age, and between calculated IOL strength and corneal diameter. 28 clinically normal adult horses (56 eyes). Axial globe lengths and anterior chamber depths were measured ultrasonographically. Corneal curvatures were determined with a modified photokeratometer and brightness-mode ultrasonographic images. Data were used in the Binkhorst equation to calculate the predicted IOL strength for each eye. The calculated IOL strengths were compared with a repeated-measures ANOVA. Corneal curvature values (photokeratometer vs brightness-mode ultrasonographic images) were compared with a paired t test. Coefficients of determination were used to measure associations. Calculated IOL strengths (range, 15.4 to 30.1 diopters) differed significantly among horses. There was a significant difference in the corneal curvatures as determined via the 2 methods. Weak associations were found between calculated IOL strength and horse height and between calculated IOL strength and vertical corneal diameter. Calculated IOL strength differed significantly among horses. Because only weak associations were detected between calculated IOL strength and horse height and vertical corneal diameter, these factors would not serve as reliable indicators for selection of the IOL strength for a specific horse.
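A sketch using one common form of the Binkhorst equation; the exact form used in the study is not given in the abstract, and the sample biometry is hypothetical:

```python
def binkhorst_iol_power(axial_len_mm, corneal_radius_mm, acd_mm):
    """One common form of the Binkhorst equation for an aphakic eye:
        P = 1336 * (4r - a) / ((a - d) * (4r - d))
    with a = axial length, r = corneal radius of curvature, d = postoperative
    anterior chamber depth, all in millimetres; P in diopters."""
    a, r, d = axial_len_mm, corneal_radius_mm, acd_mm
    return 1336.0 * (4.0 * r - a) / ((a - d) * (4.0 * r - d))

# Hypothetical equine eye: 39 mm axial length, 17 mm corneal radius, 5.5 mm ACD
print(round(binkhorst_iol_power(39.0, 17.0, 5.5), 1))   # ~18.5 D
```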
40 CFR Appendix A-3 to Part 60 - Test Methods 4 through 5I
Code of Federal Regulations, 2010 CFR
2010-07-01
... moisture to aid in setting isokinetic sampling rates prior to a pollutant emission measurement run. The... simultaneously with a pollutant emission measurement run. When it is, calculation of percent isokinetic, pollutant emission rate, etc., for the run shall be based upon the results of the reference method or its...
40 CFR Appendix A-3 to Part 60 - Test Methods 4 through 5I
Code of Federal Regulations, 2012 CFR
2012-07-01
... moisture to aid in setting isokinetic sampling rates prior to a pollutant emission measurement run. The... simultaneously with a pollutant emission measurement run. When it is, calculation of percent isokinetic, pollutant emission rate, etc., for the run shall be based upon the results of the reference method or its...
40 CFR Appendix A-3 to Part 60 - Test Methods 4 through 5I
Code of Federal Regulations, 2014 CFR
2014-07-01
... moisture to aid in setting isokinetic sampling rates prior to a pollutant emission measurement run. The... simultaneously with a pollutant emission measurement run. When it is, calculation of percent isokinetic, pollutant emission rate, etc., for the run shall be based upon the results of the reference method or its...
40 CFR Appendix A-3 to Part 60 - Test Methods 4 through 5I
Code of Federal Regulations, 2013 CFR
2013-07-01
... moisture to aid in setting isokinetic sampling rates prior to a pollutant emission measurement run. The... simultaneously with a pollutant emission measurement run. When it is, calculation of percent isokinetic, pollutant emission rate, etc., for the run shall be based upon the results of the reference method or its...
A non-iterative twin image elimination method with two in-line digital holograms
NASA Astrophysics Data System (ADS)
Kim, Jongwu; Lee, Heejung; Jeon, Philjun; Kim, Dug Young
2018-02-01
We propose a simple non-iterative in-line holographic measurement method which can effectively eliminate a twin image in digital holographic 3D imaging. It is shown that a twin image can be effectively eliminated with only two measured holograms by using a simple numerical propagation algorithm and arithmetic calculations.
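The twin-image elimination itself is the authors' contribution and is not reproduced here; the numerical propagation step that such two-plane methods rely on is the standard angular spectrum method, sketched below:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Numerically propagate a complex field by distance z with the angular
    spectrum method: U(z) = IFFT{ FFT{U(0)} * exp(i kz z) }."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    k = 2.0 * np.pi / wavelength
    kz_sq = k**2 - (2.0 * np.pi * FX)**2 - (2.0 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))      # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))
```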
Electrical resistance tomography using steel cased boreholes as electrodes
Daily, W.D.; Ramirez, A.L.
1999-06-22
An electrical resistance tomography method is described which uses steel cased boreholes as electrodes. The method enables mapping the electrical resistivity distribution in the subsurface from measurements of electrical potential caused by electrical currents injected into an array of electrodes in the subsurface. By use of current injection and potential measurement electrodes to generate data about the subsurface resistivity distribution, which data is then used in an inverse calculation, a model of the electrical resistivity distribution can be obtained. The inverse model may be constrained by independent data to better define an inverse solution. The method utilizes pairs of electrically conductive (steel) borehole casings as current injection electrodes and as potential measurement electrodes. The greater the number of steel cased boreholes in an array, the greater the amount of data is obtained. The steel cased boreholes may be utilized for either current injection or potential measurement electrodes. The subsurface model produced by this method can be 2 or 3 dimensional in resistivity depending on the detail desired in the calculated resistivity distribution and the amount of data to constrain the models.
Electrical resistance tomography using steel cased boreholes as electrodes
Daily, William D.; Ramirez, Abelardo L.
1999-01-01
An electrical resistance tomography method using steel cased boreholes as electrodes. The method enables mapping the electrical resistivity distribution in the subsurface from measurements of electrical potential caused by electrical currents injected into an array of electrodes in the subsurface. By use of current injection and potential measurement electrodes to generate data about the subsurface resistivity distribution, which data is then used in an inverse calculation, a model of the electrical resistivity distribution can be obtained. The inverse model may be constrained by independent data to better define an inverse solution. The method utilizes pairs of electrically conductive (steel) borehole casings as current injection electrodes and as potential measurement electrodes. The greater the number of steel cased boreholes in an array, the greater the amount of data is obtained. The steel cased boreholes may be utilized for either current injection or potential measurement electrodes. The subsurface model produced by this method can be 2 or 3 dimensional in resistivity depending on the detail desired in the calculated resistivity distribution and the amount of data to constrain the models.
Heat flux measurements on ceramics with thin film thermocouples
NASA Technical Reports Server (NTRS)
Holanda, Raymond; Anderson, Robert C.; Liebert, Curt H.
1993-01-01
Two methods were devised to measure heat flux through a thick ceramic using thin film thermocouples. The thermocouples were deposited on the front and back faces of a flat ceramic substrate. The heat flux was applied to the front surface of the ceramic using an arc lamp Heat Flux Calibration Facility. Silicon nitride and mullite ceramics were used; two thicknesses of each material were tested, with ceramic temperatures to 1500 C. Heat flux ranged from 0.05 to 2.5 MW/m². One method for heat flux determination used an approximation technique to calculate instantaneous values of heat flux vs time; the other used an extrapolation technique to determine the steady-state heat flux from a record of transient data. Neither method measures heat flux in real time, but the techniques may easily be adapted for quasi-real-time measurement. In cases where a significant portion of the transient heat flux data is available, the calculated transient heat flux is seen to approach the extrapolated steady-state heat flux value as expected.
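The paper's extrapolation technique is not detailed in the abstract; below is a generic sketch assuming a first-order approach of the measured flux to its steady value, with hypothetical data:

```python
import numpy as np
from scipy.optimize import curve_fit

def rise(t, q_ss, tau):
    """Assumed first-order approach of the measured flux to its steady value."""
    return q_ss * (1.0 - np.exp(-t / tau))

# Hypothetical transient record: time (s) and flux (MW/m^2) from the TC pair
t = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 6.0])
q = np.array([0.21, 0.45, 0.72, 0.95, 1.08, 1.11])

(q_ss, tau), _ = curve_fit(rise, t, q, p0=(1.0, 1.0))
print(f"extrapolated steady-state heat flux ~ {q_ss:.2f} MW/m^2")
```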
Survey and Experimental Testing of Nongravimetric Mass Measurement Devices
NASA Technical Reports Server (NTRS)
Oakey, W. E.; Lorenz, R.
1977-01-01
Documentation presented describes the design, testing, and evaluation of an accelerated gravimetric balance, a low mass air bearing oscillator of the spring-mass type, and a centrifugal device for liquid mass measurement. A direct mass readout method was developed to replace the oscillation period readout method, which required manual calculations to determine mass. A prototype 25-gram-capacity micro mass measurement device was developed and tested.
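For the spring-mass oscillator, the period-to-mass conversion that the direct readout replaces is the textbook relation m = k(T/2π)²; a minimal sketch with hypothetical values:

```python
import math

def mass_from_period(period_s, stiffness_n_per_m, tare_kg=0.0):
    """Spring-mass oscillator: T = 2*pi*sqrt(m/k), so m = k*(T/(2*pi))**2.
    tare_kg removes the moving fixture's own mass."""
    return stiffness_n_per_m * (period_s / (2.0 * math.pi)) ** 2 - tare_kg

print(mass_from_period(0.35, 12.0))   # hypothetical period and spring constant
```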
Measuring allostatic load in the workforce: a systematic review
MAUSS, Daniel; LI, Jian; SCHMIDT, Burkhard; ANGERER, Peter; JARCZOK, Marc N.
2014-01-01
The Allostatic Load Index (ALI) has been used to establish associations between stress and health-related outcomes. This review summarizes the measurement and methodological challenges of allostatic load in occupational settings. Databases of Medline, PubPsych, and Cochrane were searched to systematically explore studies measuring ALI in working adults following the PRISMA statement. Study characteristics, biomarkers and methods were tabulated. Methodological quality was evaluated using a standardized checklist. Sixteen articles (2003–2013) met the inclusion criteria, with a total of 39 (range 6–17) different variables used to calculate ALI. Substantial heterogeneity was observed in the number and type of biomarkers used, the analytic techniques applied and study quality. Particularly, primary mediators were not regularly included in ALI calculation. Consensus on methods to measure ALI in working populations is limited. Research should include longitudinal studies using multi-systemic variables to measure employees at risk for biological wear and tear. PMID:25224337
Paper area density measurement from forward transmitted scattered light
Koo, Jackson C.
2001-01-01
A method whereby the average paper fiber area density (weight per unit area) can be directly calculated from the intensity of transmitted, scattered light at two different wavelengths, one being a non-absorbed wavelength. The method also makes it possible to derive the water percentage per fiber area density from a two-wavelength measurement. In this optical measuring technique, the transmitted intensity at the 2.1 micron cellulose absorption line is measured and compared with a reference scattered, transmitted intensity in a nearby spectral region, such as 1.68 microns, where there is no absorption. From the ratio of these two intensities, one can calculate the scattering absorption coefficient at 2.1 microns. The absorption coefficient at this wavelength is then experimentally correlated to the paper fiber area density. The water percentage per fiber area density can be derived from this two-wavelength measurement approach.
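A minimal sketch of the two-wavelength ratio idea; the calibration constant mapping absorbance to area density is hypothetical, standing in for the experimental correlation described:

```python
import math

def area_density(i_2p1, i_1p68, kappa):
    """Beer-Lambert-style estimate: the log ratio of the absorbed-line (2.1 um)
    to reference (1.68 um) transmitted intensities gives the absorption term,
    which is mapped to fiber area density with an empirical calibration
    constant kappa (per-unit-absorbance, hypothetical here)."""
    return -math.log(i_2p1 / i_1p68) / kappa

print(area_density(0.42, 0.61, 0.0047))  # hypothetical intensities -> ~79 g/m^2
```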
NASA Astrophysics Data System (ADS)
Furgerot, Lucille; Mouazé, Dominique; Tessier, Bernadette; Perez, Laurent; Haquin, Sylvain; Weill, Pierre; Crave, Alain
2016-07-01
Tidal bores are believed to induce significant sediment transport in macrotidal estuaries. However, due to high turbulence and very large suspended sediment concentration (SSC), the measurement of sediment transport induced by a tidal bore is a real technical challenge. Consequently, very few quantitative data have been published so far. This paper presents SSC measurements performed in the Sée River estuary (Mont-Saint-Michel Bay, northwestern France) during the tidal bore passage with direct and indirect (optical) methods. Both methods are calibrated in the laboratory in order to verify the consistency of the measurements, to calculate the uncertainties, and to correct the raw data. The SSC measurements coupled with ADCP velocity data are used to calculate the instantaneous sediment transport (qs) associated with the tidal bore passage (up to 40 kg/m²/s).
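The instantaneous transport is the product of the corrected SSC and the co-located velocity, consistent with the kg/m²/s units quoted; a minimal sketch with hypothetical series:

```python
import numpy as np

def instantaneous_transport(ssc_kg_m3, velocity_m_s):
    """Instantaneous suspended-sediment flux qs = C * u (kg/m^2/s), from the
    corrected SSC series and the co-located ADCP velocity series."""
    return np.asarray(ssc_kg_m3) * np.asarray(velocity_m_s)

# Hypothetical series around a bore passage
print(instantaneous_transport([5.0, 60.0, 35.0], [0.3, 0.9, 0.6]))
```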
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angelini, G.; Lanza, E.; Rozza Dionigi, A.
1983-05-01
The measurement of cerebral blood flow (CBF) by the extracranial detection of the radioactivity of ¹³³Xe injected into an internal carotid artery has proved to be of considerable value for the investigation of cerebral circulation in conscious rabbits. Methods are described for calculating CBF from the ¹³³Xe clearance curves, including exponential analysis (two-component model), initial slope, and the stochastic method. The different methods of curve analysis were compared in order to evaluate the fit to the theoretical model. The initial slope and stochastic methods, compared with the biexponential model, underestimate the CBF by 35% and 46%, respectively. Furthermore, the validity of recording the clearance curve for 10 min was tested by comparing these CBF values with those obtained from the whole curve. CBF values calculated with the shortened procedure are overestimated by 17%. A correlation exists between the '10 min' CBF values and the CBF calculated from the whole curve; in spite of that, the values are not accurate for limited animal populations or for single animals. The extent of the two main compartments into which the CBF is divided was also measured. There is no correlation between CBF values and the extent of the relative compartment. This fact suggests that these two parameters correspond to different biological entities.
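A sketch of the two-component (biexponential) fit and the initial-slope estimate on synthetic data; λ is the tissue/blood partition coefficient, set to 1 here as an assumption, and the exact constants of the paper's methods are not reproduced:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, k1, a2, k2):
    """Two-compartment model of the 133Xe clearance curve."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

# Hypothetical clearance record: time (min) and head counts
t = np.linspace(0.0, 10.0, 61)
c = biexp(t, 900.0, 1.1, 300.0, 0.12) + np.random.default_rng(1).normal(0, 5, t.size)

(a1, k1, a2, k2), _ = curve_fit(biexp, t, c, p0=(800, 1.0, 400, 0.1))

# Initial-slope estimate: flow from the early log-linear decay
lam = 1.0                                  # partition coefficient (assumed)
early = t < 1.0
slope = -np.polyfit(t[early], np.log(c[early]), 1)[0]
cbf_initial_slope = 100.0 * lam * slope    # ml/100 g/min
print(k1, k2, cbf_initial_slope)
```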
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gislason-Lee, Amber J., E-mail: A.J.Gislason@leeds.ac.uk; Tunstall, Clare M.; Kengyelics, Stephen K.
Purpose: Cardiac x-ray detectors are used to acquire moving images in real-time for angiography and interventional procedures. Detective quantum efficiency (DQE) is not generally measured on these dynamic detectors; the required “for processing” image data and control of x-ray settings have not been accessible. By 2016, USA hospital physicists will have the ability to measure DQE and will likely utilize the International Electrotechnical Commission (IEC) standard for measuring DQE of dynamic x-ray imaging devices. The current IEC standard requires an image of a tilted tungsten edge test object to obtain the modulation transfer function (MTF) for DQE calculation. It specifies the range of edge angles to use; however, it does not specify a preferred method to determine this angle for image analysis. The study aimed to answer the question “will my choice in method impact my results?” Four different established edge angle determination methods were compared to investigate the impact on DQE. Methods: Following the IEC standard, edge and flat field images were acquired on a cardiac flat-panel detector to calculate MTF and noise power spectrum, respectively, to determine DQE. Accuracy of the methods in determining the correct angle was ascertained using a simulated edge image with known angulations. Precision of the methods was ascertained using variability of MTF and DQE, calculated via bootstrapping. Results: Three methods provided near equal angles and the same MTF while the fourth, with an angular difference of 6%, had a MTF lower by 3% at 1.5 mm⁻¹ spatial frequency and 8% at 2.5 mm⁻¹; corresponding DQE differences were 6% at 1.5 mm⁻¹ and 17% at 2.5 mm⁻¹; differences were greater than the standard deviations in the measurements. Conclusions: DQE measurements may vary by a significant amount, depending on the method used to determine the edge angle when following the IEC standard methodology for a cardiac x-ray detector. The most accurate and precise methods are recommended for absolute assessments and reproducible measurements, respectively.
Tang, Wing Chun; Tang, Ying Yung; Lam, Carly S Y
2014-01-01
The aim of the study was to evaluate the level of agreement between the 'Representative Value' (RV) of refraction obtained from the Shin-Nippon NVision-K 5001 instrument and values calculated from individual measurement readings using standard algebraic methods. Cycloplegic autorefraction readings for 101 myopic children aged 8-13 years (10.9 ± 1.42 years) were obtained using the Shin-Nippon NVision-K 5001. Ten autorefractor measurements were taken for each eye. The spherical equivalent (SE), sphere (Sph) and cylindrical component (Cyl) power of each eye were calculated, firstly, by averaging the 10 repeated measurements (Mean SE, Mean Sph and Mean Cyl), and secondly, by the vector representation method (Vector SE, Vector Sph and Vector Cyl). These calculated values were then compared with the RV values (RV SE, RV Sph and RV Cyl) provided by the proprietary software of the NVision-K 5001 using one-way analysis of variance (ANOVA). The agreement between the methods was also assessed. The SE of the subjects ranged from -5.37 to -0.62 D (mean ± SD, -2.89 ± 1.01 D). The Mean SE was in exact agreement with the Vector SE. There were no significant differences between the RV readings and those calculated using non-vectorial or vectorial methods for any of the refractive powers (SE, p = 0.99; Sph, p = 0.93; Cyl, p = 0.24). The (mean ± SD) differences were: RV SE vs Mean SE (and also RV SE vs Vector SE), -0.01 ± 0.06 D; RV Sph vs Mean Sph, -0.01 ± 0.05 D; RV Sph vs Vector Sph, -0.04 ± 0.06 D; RV Cyl vs Mean Cyl, 0.01 ± 0.07 D; RV Cyl vs Vector Cyl, 0.06 ± 0.09 D. Ninety-eight percent of RV readings differed from their non-vectorial or vectorial counterparts by less than 0.25 D. The RV values showed good agreement with the results calculated using conventional methods. Although the formula used to calculate RV by the NVision-K 5001 autorefractor is proprietary, our results provide validation for the use of RV measurements in clinical practice and vision science research. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
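A minimal sketch of the vector-representation averaging, using the standard power-vector transform (M, J0, J45); the repeated readings below are hypothetical:

```python
import numpy as np

def to_power_vector(sph, cyl, axis_deg):
    """Power-vector form: M = S + C/2, J0 = -(C/2)cos(2a), J45 = -(C/2)sin(2a)."""
    a = np.radians(axis_deg)
    return np.array([sph + cyl / 2.0,
                     -(cyl / 2.0) * np.cos(2.0 * a),
                     -(cyl / 2.0) * np.sin(2.0 * a)])

def from_power_vector(m, j0, j45):
    """Back to sphero-cylinder (minus-cylinder) notation."""
    c = -2.0 * np.hypot(j0, j45)
    s = m - c / 2.0
    axis = np.degrees(0.5 * np.arctan2(j45, j0)) % 180.0
    return s, c, axis

# Average a few hypothetical repeated readings in vector space, then convert back
readings = [(-3.00, -0.75, 175.0), (-3.25, -0.50, 5.0), (-3.00, -0.75, 178.0)]
mean_vec = np.mean([to_power_vector(*r) for r in readings], axis=0)
print(from_power_vector(*mean_vec))
```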
System and method for measuring residual stress
Prime, Michael B.
2002-01-01
The present invention is a method and system for determining the residual stress within an elastic object. In the method, an elastic object is cut along a path having a known configuration. The cut creates a portion of the object having a new free surface. The free surface then deforms to a contour which is different from the path. Next, the contour is measured to determine how much deformation has occurred across the new free surface. Points defining the contour are collected in an empirical data set. The portion of the object is then modeled in a computer simulator. The points in the empirical data set are entered into the computer simulator. The computer simulator then calculates the residual stress along the path which caused the points within the object to move to the positions measured in the empirical data set. The calculated residual stress is then presented in a useful format to an analyst.
NASA Technical Reports Server (NTRS)
Ketchum, Eleanor A. (Inventor)
2000-01-01
A computer-implemented method and apparatus for determining the position of a vehicle to within 100 km autonomously from magnetic field measurements and attitude data, without a priori knowledge of position. Two candidate position solutions for each measurement of magnetic field data are deterministically calculated by a program-controlled processor solving the inverted first-order spherical harmonic representation of the geomagnetic field for two unit position vectors 180 degrees apart and the vehicle's distance from the center of the earth. Correction schemes such as successive substitutions and the Newton-Raphson method are applied to each dipole solution. The two position solutions for each measurement are saved separately. Velocity vectors for the position solutions are calculated so that a total energy difference for each of the two resultant position paths is computed. The position path with the smaller absolute total energy difference is chosen as the true position path of the vehicle.
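A sketch of only the radial part of such a dipole inversion, from the field magnitude of a tilted-dipole model; the patented method solves the full first-order spherical harmonic vector problem, yielding the two 180-degrees-apart candidate solutions, which is not reproduced here:

```python
import math

def dipole_distance(b_total_nt, mag_lat_rad, b0_nt=30000.0, re_km=6371.2):
    """Distance from Earth's centre obtained by inverting the dipole-field
    magnitude B = B0 * (Re/r)**3 * sqrt(1 + 3 sin^2(lat_m)) for r.
    B0 ~ 30000 nT is the approximate equatorial surface field."""
    return re_km * (b0_nt * math.sqrt(1.0 + 3.0 * math.sin(mag_lat_rad) ** 2)
                    / b_total_nt) ** (1.0 / 3.0)

# Hypothetical measurement: 25000 nT at 30 deg magnetic latitude
print(dipole_distance(25000.0, math.radians(30.0)))
```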
Quantitative Rainbow Schlieren Deflectometry as a Temperature Diagnostic for Spherical Flames
NASA Technical Reports Server (NTRS)
Feikema, Douglas A.
2004-01-01
Numerical analysis and experimental results are presented to define a method for quantitatively measuring the temperature distribution of a spherical diffusion flame using Rainbow Schlieren Deflectometry in microgravity. First, a numerical analysis is completed to show that the method can suitably determine temperature in the presence of spatially varying species composition. Also, a numerical forward-backward inversion calculation is presented to illustrate the types of calculations and deflections to be encountered. Lastly, a normal-gravity demonstration of temperature measurement in an axisymmetric laminar diffusion flame using Rainbow Schlieren Deflectometry is presented. The method employed in this paper illustrates the necessary steps for the preliminary design of a Schlieren system. The largest deflections for the normal-gravity flame considered in this paper are 7.4 × 10⁻⁴ radians, which can be accurately measured with 2 meter focal length collimating and decollimating optics. The experimental uncertainty of deflection is less than 5 × 10⁻⁵ radians.
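Once a refractive-index field has been recovered from the deflection data (the Abel-inversion step is omitted here), temperature follows from the Gladstone-Dale relation plus the ideal gas law at constant pressure; a minimal sketch with hypothetical values:

```python
def temperature_from_index(n, t_ref_k=293.0, n_ref=1.000271):
    """Gladstone-Dale plus ideal gas at constant pressure: (n - 1) is
    proportional to density, so T = T_ref * (n_ref - 1) / (n - 1)."""
    return t_ref_k * (n_ref - 1.0) / (n - 1.0)

# Hypothetical refractive index retrieved from Abel-inverted deflection data
print(round(temperature_from_index(1.000045), 0))   # ~1760 K
```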
Verevkin, Sergey P; Emel'yanenko, Vladimir N; Kozlova, Svetlana A
2008-10-23
This work has been undertaken in order to obtain data on thermodynamic properties of organic carbonates and to revise the group-additivity values necessary for predicting their standard enthalpies of formation and enthalpies of vaporization. The standard molar enthalpies of formation of dibenzyl carbonate, tert-butyl phenyl carbonate, and diphenyl carbonate were measured using combustion calorimetry. Molar enthalpies of vaporization of these compounds were obtained from the temperature dependence of the vapor pressure measured by the transpiration method. Molar enthalpy of sublimation of diphenyl carbonate was measured in the same way. Ab initio calculations of molar enthalpies of formation of organic carbonates have been performed using the G3MP2 method, and results are in excellent agreement with the available experiment. Then the group-contribution method has been developed to predict values of the enthalpies of formation and enthalpies of vaporization of organic carbonates.
View-limiting shrouds for insolation radiometers
NASA Technical Reports Server (NTRS)
Dennison, E. W.; Trentelman, G. F.
1985-01-01
Insolation radiometers (normal incidence pyrheliometers) are used to measure the solar radiation incident on solar concentrators for calibrating thermal power generation measurements. The measured insolation value is dependent on the atmospheric transparency, solar elevation angle, circumsolar radiation, and radiometer field of view. The radiant energy entering the thermal receiver is dependent on the same factors. The insolation value and the receiver input will be proportional if the concentrator and the radiometer have similar fields of view. This report describes one practical method for matching the field of view of a radiometer to that of a solar concentrator. The concentrator field of view can be calculated by optical ray tracing methods and the field of view of a radiometer with a simple shroud can be calculated by using geometric equations. The parameters for the shroud can be adjusted to provide an acceptable match between the respective fields of view. Concentrator fields of view have been calculated for a family of paraboloidal concentrators and receiver apertures. The corresponding shroud parameters have also been determined.
Modeling of crack bridging in a unidirectional metal matrix composite
NASA Technical Reports Server (NTRS)
Ghosn, Louis J.; Kantzos, Pete; Telesman, Jack
1991-01-01
The effective fatigue crack driving force and crack opening profiles were determined analytically for fatigue tested unidirectional composite specimens exhibiting fiber bridging. The crack closure pressure due to bridging was modeled using two approaches: the fiber pressure model and the shear lag model. For both closure models, the Bueckner weight function method and the finite element method were used to calculate crack opening displacements and the crack driving force. The predicted near crack tip opening profile agreed well with the experimentally measured profiles for single edge notch SCS-6/Ti-15-3 metal matrix composite specimens. The numerically determined effective crack driving force, ΔK_eff, was calculated using both models to correlate the measured crack growth rate in the composite. The calculated ΔK_eff from both models accounted for the crack bridging by showing a good agreement between the measured fatigue crack growth rates of the bridged composite and that of unreinforced, unbridged titanium matrix alloy specimens.
Reflection measurement of waveguide-injected high-power microwave antennas.
Yuan, Chengwei; Peng, Shengren; Shu, Ting; Zhang, Qiang; Zhao, Xuelong
2015-12-01
A method for reflection measurements of high-power microwave (HPM) antennas excited with overmoded waveguides is proposed and studied systematically. In theory, the principle of the method is presented and the data-processing formulas are developed. In simulations, a horn antenna excited by a TE11 mode exciter is examined, and its reflection is calculated by CST Microwave Studio and by the method proposed in this article, respectively. In experiments, reflection measurements of two HPM antennas are conducted, and the measured results are consistent with the theoretical expectations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maulina, Hervin; Santoso, Iman, E-mail: iman.santoso@ugm.ac.id; Subama, Emmistasega
2016-04-19
The dielectric constant of nanostructured graphene on SiC substrates has been extracted from spectroscopic ellipsometry measurements using the Gauss-Newton inversion (GNI) method. This study aims to calculate the dielectric constant and refractive index of graphene by extracting the values of ψ and Δ from the spectroscopic ellipsometry measurement using the GNI method and comparing them with a previous result extracted using the Drude-Lorentz (DL) model. The results show that the GNI method can be used to calculate the dielectric constant and refractive index of nanostructured graphene on SiC substrates faster than the DL model. Moreover, the imaginary part of the dielectric constant and the extinction coefficient increase drastically at 4.5 eV, similar to the values extracted using the known DL fitting. This increase is due to interband transitions and the interaction between electrons and electron-holes at the M-points in the Brillouin zone of graphene.
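As an illustration of the GNI step only (not the authors' implementation), the sketch below runs a generic Gauss-Newton iteration with a forward-difference Jacobian; the ellipsometric forward model mapping (n, k) to (ψ, Δ) is omitted and replaced by a toy model.

```python
import numpy as np

def gauss_newton(residual, x0, n_iter=20, eps=1e-6):
    """Generic Gauss-Newton with a forward-difference Jacobian.

    `residual(x)` should return model(x) - measurement; for ellipsometry
    x would hold the optical constants (n, k) and the residual would
    compare modelled and measured (psi, Delta). The forward model is
    omitted here and replaced by a toy function below.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual(x + dx) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
    return x

# Toy check: recover (2.0, 0.5) from a synthetic two-output model.
model = lambda x: np.array([x[0] ** 2 + x[1], x[0] - x[1] ** 2])
meas = model(np.array([2.0, 0.5]))
print(gauss_newton(lambda x: model(x) - meas, [1.0, 1.0]))
```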
Huang, Chenxi; Huang, Hongxin; Toyoda, Haruyoshi; Inoue, Takashi; Liu, Huafeng
2012-11-19
We propose a new method for realizing high-spatial-resolution detection of singularity points in optical vortex beams. The method uses a Shack-Hartmann wavefront sensor (SHWS) to record a Hartmanngram. A map of evaluation values related to phase slope is then calculated from the Hartmanngram. The position of an optical vortex is determined by comparing the map with reference maps that are calculated from numerically created spiral phases having various positions. Optical experiments were carried out to verify the method. We displayed various spiral phase distribution patterns on a phase-only spatial light modulator and measured the resulting singularity point using the proposed method. The results showed good linearity in detecting the position of singularity points. The RMS error of the measured position of the singularity point was approximately 0.056, in units normalized to the lens size of the lenslet array used in the SHWS.
Procedure for Determining Speed and Climbing Performance of Airships
NASA Technical Reports Server (NTRS)
Thompson, F L
1936-01-01
The procedure for obtaining air-speed and rate-of-climb measurements in performance tests of airships is described. Two methods of obtaining speed measurements, one by means of instruments in the airship and the other by flight over a measured ground course, are explained. Instruments, their calibrations, necessary correction factors, observations, and calculations are detailed for each method, and also for the rate-of-climb tests. A method of correction for the effect on density of moist air and a description of other methods of speed course testing are appended.
A three dimensional point cloud registration method based on rotation matrix eigenvalue
NASA Astrophysics Data System (ADS)
Wang, Chao; Zhou, Xiang; Fei, Zixuan; Gao, Xiaofei; Jin, Rui
2017-09-01
We usually need to measure an object at multiple angles in traditional optical three-dimensional measurement, owing to occlusion, and then use point cloud registration to obtain the complete three-dimensional shape of the object. Point cloud registration based on a turntable essentially requires calculating the coordinate transformation matrix between the camera coordinate system and the turntable coordinate system. In the traditional method, this transformation matrix is usually obtained by fitting the rotation center and the rotation axis normal of the turntable, which is limited by the measuring field of view. The exact feature points used for fitting the rotation center and the rotation axis normal are distributed within an arc of less than about 120 degrees, resulting in low fitting accuracy. In this paper, we propose a better method, based on the principle that the eigenvalues of the rotation matrix are invariant in the turntable coordinate system, and on the coordinate transformation matrix of the corresponding coordinate points. First, we control the rotation angle of the calibration plate with the turntable and calibrate the coordinate transformation matrix of the corresponding coordinate points using the least squares method. Then we use eigendecomposition to calculate the coordinate transformation matrix between the camera coordinate system and the turntable coordinate system. Compared with the traditional method, this approach has higher accuracy and better robustness and is not affected by the camera field of view. In this method, the coincidence error of the corresponding points on the calibration plate after registration is less than 0.1 mm.
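The linear-algebra fact underlying the eigendecomposition step is that a proper rotation matrix has eigenvalue 1, with the corresponding eigenvector giving the rotation axis. A minimal numpy sketch, illustrative rather than the paper's calibration code:

```python
import numpy as np

def rotation_axis(R):
    """Axis of a 3x3 rotation matrix: the eigenvector with eigenvalue 1."""
    w, v = np.linalg.eig(R)
    axis = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return axis / np.linalg.norm(axis)

# Example: a 30-degree rotation about z yields the axis (0, 0, 1).
t = np.radians(30.0)
Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
               [np.sin(t),  np.cos(t), 0.0],
               [0.0, 0.0, 1.0]])
print(rotation_axis(Rz))
```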
Wick, Carson A.; McClellan, James H.; Arepalli, Chesnal D.; Auffermann, William F.; Henry, Travis S.; Khosa, Faisal; Coy, Adam M.; Tridandapani, Srini
2015-01-01
Purpose: Accurate knowledge of cardiac quiescence is crucial to the performance of many cardiac imaging modalities, including computed tomography coronary angiography (CTCA). To accurately quantify quiescence, a method for detecting the quiescent periods of the heart from retrospective cardiac computed tomography (CT) using a correlation-based, phase-to-phase deviation measure was developed. Methods: Retrospective cardiac CT data were obtained from 20 patients (11 male, 9 female, 33–74 yr) and the left main, left anterior descending, left circumflex, right coronary artery (RCA), and interventricular septum (IVS) were segmented for each phase using a semiautomated technique. Cardiac motion of individual coronary vessels as well as the IVS was calculated using phase-to-phase deviation. As an easily identifiable feature, the IVS was analyzed to assess how well it predicts vessel quiescence. Finally, the diagnostic quality of the reconstructed volumes from the quiescent phases determined using the deviation measure from the vessels in aggregate and the IVS was compared to that from quiescent phases calculated by the CT scanner. Three board-certified radiologists, fellowship-trained in cardiothoracic imaging, graded the diagnostic quality of the reconstructions using a Likert response format: 1 = excellent, 2 = good, 3 = adequate, 4 = nondiagnostic. Results: Systolic and diastolic quiescent periods were identified for each subject from the vessel motion calculated using the phase-to-phase deviation measure. The motion of the IVS was found to be similar to the aggregate vessel (AGG) motion. The diagnostic quality of the coronary vessels for the quiescent phases calculated from the aggregate vessel (P_AGG) and IVS (P_IVS) deviation signals using the proposed methods was comparable to the quiescent phases calculated by the CT scanner (P_CT). The one exception was the RCA, which improved for P_AGG for 18 of the 20 subjects when compared to P_CT (P_CT = 2.48; P_AGG = 2.07, p = 0.001). Conclusions: A method for quantifying the motion of specific coronary vessels using a correlation-based, phase-to-phase deviation measure was developed and tested on 20 patients receiving cardiac CT exams. The IVS was found to be a suitable predictor of vessel quiescence. The diagnostic quality of the quiescent phases detected by the proposed methods was comparable to those calculated by the CT scanner. The ability to quantify coronary vessel quiescence from the motion of the IVS can be used to develop new CTCA gating techniques and quantify the resulting potential improvement in CTCA image quality. PMID:25652511
Comparison of beam position calculation methods for application in digital acquisition systems
NASA Astrophysics Data System (ADS)
Reiter, A.; Singh, R.
2018-05-01
Different approaches to the data analysis of beam position monitors in hadron accelerators are compared, adopting the perspective of an analog-to-digital converter in a sampling acquisition system. Special emphasis is given to position uncertainty and robustness against bias and interference that may be encountered in an accelerator environment. In a time-domain analysis of data in the presence of statistical noise, the position calculation based on the difference-over-sum method with algorithms like signal integral or power can be interpreted as a least-squares analysis of a corresponding fit function. This link to the least-squares method is exploited in the evaluation of analysis properties and in the calculation of position uncertainty. In an analytical model and in experimental evaluations, the positions derived from a straight-line fit, or equivalently from the standard deviation, are found to be the most robust and to offer the least variance. The measured position uncertainty is consistent with the model prediction in our experiment, and the results of tune measurements improve significantly.
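For reference, the difference-over-sum position estimate with the signal-integral algorithm reduces to a one-line computation; in the sketch below the sensitivity constant k is assumed known from calibration, and the sample pulse is synthetic.

```python
import numpy as np

def bpm_position(a, b, k=1.0):
    """Difference-over-sum position with the signal-integral algorithm.

    a, b: sampled waveforms of the two pickup electrodes; k: monitor
    sensitivity constant, assumed known from calibration.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    return k * (a - b).sum() / (a + b).sum()

# Synthetic pulse; electrode b sees 10% more signal than a.
pulse = np.exp(-np.linspace(-3, 3, 100) ** 2)
print(bpm_position(pulse, 1.1 * pulse))     # ~ -0.048
```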
[Calculating the Stark broadening of welding arc spectra by the Fourier transform method].
Pan, Cheng-Gang; Hua, Xue-Ming; Zhang, Wang; Li, Fang; Xiao, Xiao
2012-07-01
Calculating the electron density of a plasma from the Stark width of a spectral line is one of the most effective and accurate methods available. However, it is difficult to separate the Stark width from a composite line profile produced by several broadening mechanisms. In the present paper, a Fourier transform was used to separate the Lorentzian profile from the observed spectrum and thus obtain an accurate Stark width, from which we calculated the electron density distribution of the TIG welding arc plasma. This method does not require accurate measurement of the arc temperature or of the instrumental broadening of the spectral line, and it also rejects noise in the data. The results show that, on the axis, the electron density of the TIG welding arc decreases with increasing distance from the tungsten electrode, varying between 1.21 × 10^17 cm^-3 and 1.58 × 10^17 cm^-3; radially, the electron density decreases with increasing distance from the axis, and near the tungsten electrode the maximum electron density is off axis.
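The separation works because the Fourier transform of a Voigt profile factors into a Lorentzian part, exp(-2πγ|t|), times a Gaussian part, so log|FT| is linear in |t| and quadratic in t. A rough sketch under ideal, noise-free sampling (not the authors' code):

```python
import numpy as np

def lorentz_hwhm(x, y):
    """Estimate the Lorentzian (Stark) half-width of a Voigt-like line.

    log|FT| of a Voigt profile is -2*pi*gamma*|t| - 2*pi^2*sigma^2*t^2;
    a least-squares fit with basis (|t|, t^2) therefore isolates the
    Lorentzian half-width gamma from the Gaussian contribution.
    """
    F = np.abs(np.fft.rfft(y))
    t = np.fft.rfftfreq(len(x), d=x[1] - x[0])
    keep = F > 0.05 * F[0]                  # stay above truncation ripple
    A = np.column_stack([np.abs(t[keep]), t[keep] ** 2])
    coef, *_ = np.linalg.lstsq(A, np.log(F[keep] / F[0]), rcond=None)
    return -coef[0] / (2 * np.pi)           # gamma, in units of x

# Toy check with a pure Lorentzian of half-width 0.5.
x = np.linspace(-50, 50, 4001)
y = 0.5 / np.pi / (x ** 2 + 0.5 ** 2)
print(lorentz_hwhm(x, y))                   # ~0.5
```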
A fast RCS accuracy assessment method for passive radar calibrators
NASA Astrophysics Data System (ADS)
Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, QI
2016-10-01
In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is usually deformed during transportation and installation, or deformed by wind and gravity while permanently installed outdoors, which decreases the RCS accuracy and therefore the radiometric calibration accuracy. A fast RCS accuracy assessment method based on a 3-D measuring instrument and RCS simulation was proposed in this paper for tracking the characteristic variation of the corner reflector. In the first step, an RCS simulation algorithm was selected and its simulation accuracy assessed. In the second step, the 3-D measuring instrument was selected and its measuring accuracy evaluated. Once the accuracies of the selected RCS simulation algorithm and 3-D measuring instrument were satisfactory for the RCS accuracy assessment, the 3-D structure of the corner reflector was obtained by the 3-D measuring instrument, and the RCSs of the obtained 3-D structure and the corresponding ideal structure were calculated with the selected RCS simulation algorithm. The final RCS accuracy was the absolute difference of the two RCS calculation results. The advantage of the proposed method is that it can be applied easily outdoors, avoiding the correlations among the plate edge length error, plate orthogonality error, and plate curvature error. The accuracy of this method is higher than that of the method using a distortion equation. At the end of the paper, a measurement example is presented to show the performance of the proposed method.
Lorini, Chiara; Collini, Francesca; Castagnoli, Mariangela; Di Bari, Mauro; Cavallini, Maria Chiara; Zaffarana, Nicoletta; Pepe, Pasquale; Lucenteforte, Ersilia; Vannacci, Alfredo; Bonaccorsi, Guglielmo
2014-10-01
The aim of this study was to use the Malnutrition Universal Screening Tool (MUST) to assess the applicability of alternative versus direct anthropometric measurements for evaluating the risk for malnutrition in older individuals living in nursing homes (NHs). We conducted a cross-sectional survey in 67 NHs in Tuscany, Italy. We measured the weight, standing height (SH), knee height (KH), ulna length (UL), and middle-upper-arm circumference of 641 NH residents. Correlations between the different methods for calculating body mass index (BMI; using direct or alternative measurements) were evaluated by the intraclass correlation coefficient and the Bland-Altman method; agreement in the allocation of participants to the same risk category was assessed by the squared weighted kappa statistic and indicators of internal relative validity. The intraclass correlation coefficient for BMI calculated using KH was 0.839 (0.815-0.861), whereas that calculated using UL was 0.890 (0.872-0.905). The limits of agreement were ±6.13 kg/m^2 using KH and ±4.66 kg/m^2 using UL. For BMI calculated using SH, 79.9% of the patients were at low risk, 8.1% at medium risk, and 12.2% at high risk for malnutrition. The agreement between this classification and that obtained using BMI calculated by alternative measurements was "fair-good." When it is not possible to determine the risk category using SH, we suggest using the alternative measurements (primarily UL, due to its highest sensitivity) to predict the height and to compare these evaluations with those obtained by using middle-upper-arm circumference to predict the BMI. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, L.; Li, Z.; Li, K.; Blarel, L.; Wendisch, M.
2014-12-01
The polarized CIMEL sun/sky radiometers have been routinely operated within the Sun/sky-radiometer Observation NETwork (SONET) in China and at some sites of the AErosol RObotic NETwork (AERONET) around the world. However, the polarization measurements are not yet widely used, due in part to the lack of Stokes parameters derived directly from these measurements. Meanwhile, it has been shown that retrievals of several microphysical properties of aerosol particles can be significantly improved by using degree of linear polarization (DoLP) measurements from polarized CIMEL sun/sky radiometers (CE318-DP). The Stokes parameters Q and U, as well as the angle of polarization (AoP), contain additional information about linear polarization and its orientation. A method to calculate the Stokes parameters Q and U and the AoP from CE318-DP polarized skylight measurements is introduced in this study. A new polarized almucantar geometry based on the CE318-DP is measured to illustrate the rich variation features of these parameters. The polarization parameters calculated in this study are consistent with previous results for DoLP and I, and are also comparable to vector radiative transfer simulations.
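The conversions involved are the standard Stokes relations; a minimal sketch:

```python
import numpy as np

def dolp_aop(I, Q, U):
    """Degree of linear polarization and angle of polarization (deg)."""
    dolp = np.sqrt(Q ** 2 + U ** 2) / I
    aop = 0.5 * np.degrees(np.arctan2(U, Q))
    return dolp, aop

print(dolp_aop(1.0, 0.3, 0.2))   # (~0.36, ~16.8 deg)
```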
Evaluation on Cost Overrun Risks of Long-distance Water Diversion Project Based on SPA-IAHP Method
NASA Astrophysics Data System (ADS)
Yuanyue, Yang; Huimin, Li
2018-02-01
Large investment, long routes, and many change orders are the main causes of cost overruns in long-distance water diversion projects. This paper, based on existing research, builds a full-process cost overrun risk evaluation index system for water diversion projects, applies the SPA-IAHP method to set up a cost overrun risk evaluation model, and calculates and ranks the weights of the risk evaluation indexes. Finally, the cost overrun risks are comprehensively evaluated by calculating the linkage measure, and a comprehensive risk level is acquired. The SPA-IAHP method can evaluate risks accurately with high reliability. Case calculation and verification show that it can provide valid cost overrun decision-making information to construction companies.
Gravimetric surveys for assessing rock mass condition around a mine shaft
NASA Astrophysics Data System (ADS)
Madej, Janusz
2017-06-01
The fundamentals of the vertical gravimetric surveying method in mine shafts are presented in this paper. The methods of gravimetric measurement and the calculation of interval and complex density are discussed in detail. The density calculations are based on an original method accounting for the gravity influence of the mine shaft itself, thus guaranteeing close agreement between calculated and real densities of the rocks beyond the shaft lining. The results of many gravimetric surveys performed in shafts are presented and interpreted. As a result, information is obtained about the location of heterogeneous zones in the rock mass beyond the shaft lining. In many cases, such zones have threatened the safe operation of machines and utilities in the shaft.
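For context, the classic borehole-gravimetry relation connects the measured vertical gravity difference to the mean density of the rock between two stations. The sketch below uses that textbook relation only; it does not include the shaft-influence correction that is the subject of the paper.

```python
# Classic borehole-gravimetry relation: rho = (F - dg/dz) / (4*pi*G),
# with F the free-air gradient. Constants in mGal/m; density in g/cm^3.
FREE_AIR = 0.3086          # mGal/m
FOUR_PI_G = 0.08385        # mGal/m per (g/cm^3)

def interval_density(dg_mgal, dz_m):
    """Mean rock density (g/cm^3) between two gravity stations."""
    return (FREE_AIR - dg_mgal / dz_m) / FOUR_PI_G

# Example: gravity increases by 0.9 mGal over 10 m of depth.
print(interval_density(0.9, 10.0))   # ~2.6 g/cm^3
```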
Moen, Stephan Craig; Meyers, Craig Glenn; Petzen, John Alexander; Foard, Adam Muhling
2012-08-07
A method of calibrating a nuclear instrument using a gamma thermometer may include: measuring, in the instrument, local neutron flux; generating, from the instrument, a first signal proportional to the neutron flux; measuring, in the gamma thermometer, local gamma flux; generating, from the gamma thermometer, a second signal proportional to the gamma flux; compensating the second signal; and calibrating a gain of the instrument based on the compensated second signal. Compensating the second signal may include: calculating selected yield fractions for specific groups of delayed gamma sources; calculating time constants for the specific groups; calculating a third signal that corresponds to delayed local gamma flux based on the selected yield fractions and time constants; and calculating the compensated second signal by subtracting the third signal from the second signal. The specific groups may have decay time constants greater than 5×10^-1 seconds and less than 5×10^5 seconds.
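A minimal sketch of the compensation logic described above, modeling each delayed-gamma group as a first-order lag on the prompt signal; the group yield fractions and time constants are illustrative placeholders, not values from the patent:

```python
import numpy as np

# Illustrative delayed-gamma groups: (yield fraction, time constant in s).
GROUPS = [(0.02, 5.0), (0.01, 300.0)]

def compensate(gt_signal, dt):
    """Subtract a modeled delayed-gamma component from a GT trace.

    Each group is modeled as a first-order lag on the prompt signal;
    the compensated signal is the raw trace minus the summed lags.
    """
    delayed = np.zeros_like(gt_signal, dtype=float)
    for frac, tau in GROUPS:
        y = 0.0
        for i, s in enumerate(gt_signal):
            y += dt / tau * (s - y)      # first-order lag update
            delayed[i] += frac * y
    return gt_signal - delayed

# Step change in power: the delayed part builds up and is removed.
print(compensate(np.ones(1000), dt=1.0)[-1])   # ~0.97
```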
Determination Method of Bridge Rotation Angle Response Using MEMS IMU.
Sekiya, Hidehiko; Kinomoto, Takeshi; Miki, Chitoshi
2016-11-09
To implement steel bridge maintenance, especially that related to fatigue damage, it is important to monitor bridge deformations under traffic conditions. Bridges deform and rotate differently under traffic load conditions because their structures differ in length and flexibility. Such monitoring enables identification of the causes of the stress concentrations that lead to fatigue damage and the proposal of appropriate countermeasures. However, although bridge deformation monitoring requires observation of the bridge rotation angle response as well as the displacement response, measuring the rotation angle response of a bridge subject to traffic loads is difficult. Theoretically, the rotation angle response can be calculated by integrating the angular velocity, but for field measurements of actual in-service bridges, estimating the necessary boundary conditions would be difficult due to traffic-induced vibration. To solve this problem, this paper proposes a method for determining the rotation angle response of an in-service bridge from its angular velocity, as measured by an inertial measurement unit (IMU). To verify the proposed method, field measurements were conducted using nine micro-electro-mechanical systems (MEMS) IMUs and two contact displacement gauges. The results showed that the proposed method provided high accuracy when compared to the reference responses calculated from the contact displacement gauges.
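A simple way to see the boundary-condition issue is the sketch below: trapezoidal integration of the gyro signal followed by removal of a linear drift, under the assumption that the bridge is at rest before and after the vehicle passage. This illustrates the idea, not the paper's exact algorithm.

```python
import numpy as np

def rotation_angle(omega, fs):
    """Integrate angular velocity (rad/s) into a rotation-angle response.

    Trapezoidal integration plus removal of a linear drift, using the
    assumption that the angle is zero before and after the passage.
    """
    dt = 1.0 / fs
    theta = np.concatenate([[0.0], np.cumsum((omega[1:] + omega[:-1]) / 2 * dt)])
    return theta - np.linspace(0.0, theta[-1], len(theta))

# Toy check: a smooth angle bump plus a constant gyro bias.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
theta_true = np.sin(np.pi * t / 10) ** 2        # zero at both ends
omega = np.gradient(theta_true, t) + 0.01       # 0.01 rad/s bias
print(np.max(np.abs(rotation_angle(omega, fs) - theta_true)) < 0.02)
```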
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antoni, R.; Passard, C.; Perot, B.
2015-07-01
The fissile mass in radioactive waste drums filled with compacted metallic residues (spent fuel hulls and nozzles) produced at the AREVA La Hague reprocessing plant is measured by neutron interrogation with the Differential Die-away measurement Technique (DDT). In the coming years, old hulls and nozzles mixed with ion-exchange resins will be measured. The ion-exchange resins increase neutron moderation in the matrix, compared to the waste measured in the current process. In this context, the Nuclear Measurement Laboratory (NML) of CEA Cadarache has studied a matrix effect correction method based on a drum monitor (a 3He proportional counter inside the measurement cavity). A previous study performed with the NML R&D measurement cell PROMETHEE 6 showed the feasibility of the method and the capability of MCNP simulations to correctly reproduce experimental data and to assess the performance of the proposed correction. A next step of the study focused on the performance assessment of the method on the industrial station using numerical simulation. A correlation between the prompt calibration coefficient of the 239Pu signal and the drum monitor signal was established using the MCNPX computer code and a fractional factorial experimental design composed of matrix parameters representative of the variation range of historical waste. Calculations showed that the method allows assay of the fissile mass with an uncertainty within a factor of 2, whereas the uncorrected matrix effect spans two decades. In this paper, we present and discuss the first experimental tests on the industrial ACC measurement system. A calculation-versus-experiment benchmark was achieved by performing dedicated calibration measurements with a representative drum and 235U samples. The preliminary comparison between calculation and experiment shows satisfactory agreement for the drum monitor. The final objective of this work is to confirm the reliability of the modeling approach and the industrial feasibility of the method, which will be implemented on the industrial station for the measurement of historical wastes.
NASA Astrophysics Data System (ADS)
Harris, Courtney K.; Wiberg, Patricia L.
1997-09-01
Modeling shelf sediment transport rates and bed reworking depths is problematic when the wave and current forcing conditions are not precisely known, as is usually the case when long-term sedimentation patterns are of interest. Two approaches to modeling sediment transport under such circumstances are considered. The first relies on measured or simulated time series of flow conditions to drive model calculations. The second approach uses as model input probability distribution functions of bottom boundary layer flow conditions developed from wave and current measurements. Sediment transport rates, frequency of bed resuspension by waves and currents, and bed reworking calculated using the two methods are compared at the mid-shelf STRESS (Sediment TRansport on Shelves and Slopes) site on the northern California continental shelf. Current, wave and resuspension measurements at the site are used to generate model inputs and test model results. An 11-year record of bottom wave orbital velocity, calculated from surface wave spectra measured by the National Data Buoy Center (NDBC) Buoy 46013 and verified against bottom tripod measurements, is used to characterize the frequency and duration of wave-driven transport events and to estimate the joint probability distribution of wave orbital velocity and period. A 109-day record of hourly current measurements 10 m above bottom is used to estimate the probability distribution of bottom boundary layer current velocity at this site and to develop an auto-regressive model to simulate current velocities for times when direct measurements of currents are not available. Frequency of transport, the maximum volume of suspended sediment, and average flux calculated using measured wave and simulated current time series agree well with values calculated using measured time series. A probabilistic approach is more amenable to calculations over time scales longer than existing wave records, but it tends to underestimate net transport because it does not capture the episodic nature of transport events. Both methods enable estimates to be made of the uncertainty in transport quantities that arise from an incomplete knowledge of the specific timing of wave and current conditions. 1997 Elsevier Science Ltd
A simple method for measurement of maximal downstroke power on friction-loaded cycle ergometer.
Morin, Jean-Benoît; Belli, Alain
2004-01-01
The aim of this study was to propose and validate a post-hoc correction method to obtain maximal power values taking into account the inertia of the flywheel during sprints on friction-loaded cycle ergometers. The correction method rests on a basic postulate of a linear deceleration-time evolution during the initial phase of a sprint (until maximal power) and includes simple parameters such as flywheel inertia, maximal velocity, time to reach maximal velocity, and friction force. The validity of this model was tested by comparing measured and calculated maximal power values for 19 sprint bouts performed by five subjects against friction loads of 0.6-1 N kg(-1). Non-significant differences between measured and calculated maximal power (1151+/-169 vs. 1148+/-170 W) and a mean error index of 1.31+/-1.20% (ranging from 0.09% to 4.20%) demonstrated the validity of the method. Furthermore, the differences between measured maximal power and power calculated neglecting inertia (20.4+/-7.6%, ranging from 9.5% to 33.2%) emphasized the usefulness of correcting power in studies of anaerobic power that do not account for inertia, as well as the interest of this simple post-hoc method.
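The generic form of the correction is that total power equals friction power plus the power that accelerates the flywheel, P(t) = F·v(t) + I·ω(t)·dω/dt. The sketch below implements this general relation (not necessarily the authors' exact post-hoc formula); the inertia, radius, and velocity data are illustrative.

```python
import numpy as np

def corrected_power(v, t, friction_force, inertia, radius):
    """Instantaneous power (W) on a friction-loaded ergometer.

    v: flywheel rim velocity (m/s); friction_force in N; inertia in
    kg*m^2; radius is the flywheel radius in m. The inertial term is
    I * omega * domega/dt.
    """
    v = np.asarray(v, float)
    omega = v / radius
    alpha = np.gradient(omega, t)
    return friction_force * v + inertia * omega * alpha

# Illustrative sprint: exponential velocity build-up toward 12 m/s.
t = np.linspace(0.0, 4.0, 200)
v = 12.0 * (1.0 - np.exp(-t / 1.2))
P = corrected_power(v, t, friction_force=60.0, inertia=0.9, radius=0.26)
print(round(P.max()))                       # peak corrected power, W
```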
Determination of plasma volume in anaesthetized piglets using the carbon monoxide (CO) method.
Heltne, J K; Farstad, M; Lund, T; Koller, M E; Matre, K; Rynning, S E; Husby, P
2002-07-01
Based on measurements of the circulating red blood cell volume (V(RBC)) in seven anaesthetized piglets using carbon monoxide (CO) as a label, plasma volume (PV) was calculated for each animal. The increase in carboxyhaemoglobin (COHb) concentration following administration of a known amount of CO into a closed-circuit re-breathing system was determined by diode-array spectrophotometry. Simultaneously measured haematocrit (HCT) and haemoglobin (Hb) values were used for the PV calculation. The PV values were compared with simultaneously measured PVs determined using the Evans blue technique. Mean values (SD) for PV were 1708.6 (287.3) ml and 1738.7 (412.4) ml with the CO method and the Evans blue technique, respectively. Comparison of PVs determined with the two techniques demonstrated good correlation (r = 0.995). The mean difference between PV measurements was -29.9 ml and the limits of agreement (mean difference +/-2SD) were -289.1 ml and 229.3 ml. In conclusion, the CO method can be applied easily under general anaesthesia and controlled ventilation with a simple administration system, and the agreement between the compared methods was satisfactory. Plasma volume determination with the CO method is safe and accurate, with no signs of major side effects.
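The volume bookkeeping behind the CO method can be sketched as follows, assuming the classic Hüfner factor of about 1.39 ml CO bound per g Hb; the constants and inputs are illustrative, not the study's protocol values.

```python
def plasma_volume(v_co_ml, delta_cohb, hb_g_per_ml, hct):
    """Plasma volume (ml) from a CO re-breathing measurement.

    v_co_ml     : CO administered into the closed circuit (ml)
    delta_cohb  : rise in COHb as a fraction of total Hb
    hb_g_per_ml : haemoglobin concentration (g per ml of blood)
    hct         : haematocrit (fraction)
    Assumes ~1.39 ml CO bound per g Hb (Huefner factor).
    """
    co_capacity = 1.39 * hb_g_per_ml            # ml CO per ml blood
    blood_volume = v_co_ml / (delta_cohb * co_capacity)
    v_rbc = blood_volume * hct
    return v_rbc * (1.0 - hct) / hct            # plasma fraction

# Illustrative piglet-scale numbers, not the study's data.
print(round(plasma_volume(20.0, 0.05, 0.10, 0.30)))   # ~2014 ml
```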
SU-F-T-151: Measurement Evaluation of Skin Dose in Scanning Proton Beam Therapy for Breast Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, J; Nichols, E; Strauss, D
Purpose: To measure the skin dose and compare it with the calculated dose from a treatment planning system (TPS) for breast cancer treatment using scanning proton beam therapy (SPBT). Methods: A single en-face-beam SPBT plan was generated by a commercial TPS for two breast cancer patients. The treatment volumes were the entire breasts (218 cc and 1500 cc) prescribed to 50.4 Gy (RBE) in 28 fractions. A range shifter of 5 cm water-equivalent thickness was used. The organ at risk (skin) was defined as a 5 mm thick layer from the surface. The skin doses were measured in water with an ADCL-calibrated parallel-plate (PP) chamber. The measured data were compared with the values calculated by the TPS. Skin dose calculations can be subject to uncertainties created by the definition of the external contour and the limitations of correction-based algorithms, such as proton convolution superposition. Hence, the external contours were expanded by 0 mm, 3 mm, and 1 cm to include additional pixels for the dose calculation. In addition, to examine the effect of the cloth gown on the skin dose, the skin dose measurements were conducted with and without the gown. Results: On average, the measured skin dose was 4% higher than the calculated values. At deeper depths, the measured and calculated doses were in better agreement (<2%). Large discrepancies occurred for the dose calculated without external expansion, due to volume averaging. The addition of the gown increased the measured skin dose by only 0.4%. Conclusion: The implemented TPS underestimated the skin dose for breast treatments. Superficial dose calculation without external expansion would result in large errors for SPBT for breast cancer.
Drits, Victor A.; Eberl, Dennis D.; Środoń, Jan
1998-01-01
A modified version of the Bertaut-Warren-Averbach (BWA) technique (Bertaut 1949, 1950; Warren and Averbach 1950) has been developed to measure coherent scattering domain (CSD) sizes and strains in minerals by analysis of X-ray diffraction (XRD) data. This method is used to measure CSD thickness distributions for calculated and experimental XRD patterns of illites and illite-smectites (I-S). The method almost exactly recovers CSD thickness distributions for calculated illite XRD patterns. Natural I-S samples contain swelling layers that lead to nonperiodic structures in the c* direction and to XRD peaks that are broadened and made asymmetric by mixed layering. Therefore, these peaks cannot be analyzed by the BWA method. These difficulties are overcome by K-saturation and heating prior to X-ray analysis in order to form 10-Å periodic structures. BWA analysis yields the thickness distribution of mixed-layer crystals (coherently diffracting stacks of fundamental illite particles). For most I-S samples, CSD thickness distributions can be approximated by lognormal functions. Mixed-layer crystal mean thickness and expandability then can be used to calculate fundamental illite particle mean thickness. Analyses of the dehydrated, K-saturated samples indicate that basal XRD reflections are broadened by symmetrical strain that may be related to local variations in smectite interlayers caused by dehydration, and that the standard deviation of the strain increases regularly with expandability. The 001 and 002 reflections are affected only slightly by this strain and therefore are suited for CSD thickness analysis. Mean mixed-layer crystal thicknesses for dehydrated I-S measured by the BWA method are very close to those measured by an integral peak width method.
Investigation of Gamow Teller transition properties in 56-64Ni isotopes using QRPA methods
NASA Astrophysics Data System (ADS)
Cakmak, Sadiye; Nabi, Jameel-Un; Babacan, Tahsin
2018-02-01
Weak rates in nickel isotopes play an integral role in the dynamics of supernovae. Electron capture and β-decay of nickel isotopes, dictated by Gamow-Teller transitions, significantly alter the lepton fraction of the stellar matter. In this paper we calculate Gamow-Teller (GT) transitions for the nickel isotopes 56-64Ni using QRPA methods. The GT strength distributions were calculated using four different QRPA models. Our results are also compared with previous theoretical calculations and measured strength distributions wherever available. Our investigation concluded that, amongst all RPA models, the pn-QRPA(C) model best described the measured GT distributions (including total GT strength and centroid placement). It is hoped that the current investigation of GT properties will prove handy and may lead to a better understanding of the presupernova evolution of massive stars.
Shahbazi-Gahrouei, Daryoush; Ayat, Saba
2012-01-01
Radioiodine therapy is an effective method for treating thyroid carcinoma, but it has some effects on normal tissues; hence, dosimetry of vital organs is important to weigh the risks and benefits of this method. The aim of this study was to measure the absorbed doses of important organs by Monte Carlo N-Particle (MCNP) simulation and to compare the results of different dosimetry methods by performing a paired t-test. To calculate the absorbed dose of the thyroid, sternum, and cervical vertebra using the MCNP code, the *F8 tally was used. Organs were simulated using a neck phantom and the Medical Internal Radiation Dosimetry (MIRD) method. Finally, the results of MCNP, MIRD, and thermoluminescent dosimeter (TLD) measurements were compared using SPSS software. The absorbed dose obtained by Monte Carlo simulation for 100, 150, and 175 mCi of administered 131I was found to be 388.0, 427.9, and 444.8 cGy for the thyroid, 208.7, 230.1, and 239.3 cGy for the sternum, and 272.1, 299.9, and 312.1 cGy for the cervical vertebra. The results of the paired t-tests were 0.24 for the comparison of TLD dosimetry and MIRD calculation, 0.80 for MCNP simulation and MIRD, and 0.19 for TLD and MCNP. The results showed no significant differences among the three methods of Monte Carlo simulation, MIRD calculation, and direct experimental dosimetry using TLD. PMID:23717806
Fast Laplace solver approach to pore-scale permeability
NASA Astrophysics Data System (ADS)
Arns, C. H.; Adler, P. M.
2018-02-01
We introduce a powerful and easily implemented method to calculate the permeability of porous media at the pore scale, using an approximation based on the Poiseuille equation to calculate permeability to fluid flow with a Laplace solver. The method consists of calculating the Euclidean distance map of the fluid phase to assign local conductivities, and it lends itself naturally to the treatment of multiscale problems. We compare with analytical solutions as well as experimental measurements and lattice Boltzmann calculations of permeability for Fontainebleau sandstone. The solver is significantly more stable than the lattice Boltzmann approach, uses less memory, and is significantly faster. Permeabilities are in excellent agreement over a wide range of porosities.
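A crude 2D illustration of the approach described above (not the authors' solver): local conductivity is taken proportional to d², the Poiseuille scaling with the Euclidean distance d to the nearest solid, and a Laplace-type pressure equation is relaxed by Jacobi iteration with harmonic face averaging.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def pressure_field(pore, n_iter=5000):
    """Relax a Laplace-type pressure equation with conductivity ~ d^2.

    `pore` is a boolean image (True = pore). Local conductivity follows
    the Poiseuille scaling k ~ d^2 with d the Euclidean distance to the
    nearest solid; face conductivities use harmonic averaging, which
    seals the solid phase. Pressure is fixed on the left/right faces.
    """
    k = distance_transform_edt(pore) ** 2
    ny, nx = pore.shape
    p = np.tile(np.linspace(1.0, 0.0, nx), (ny, 1))
    for _ in range(n_iter):
        num = np.zeros_like(p)
        den = np.zeros_like(p)
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            kn = np.roll(k, shift, axis=(0, 1))
            kf = 2.0 * k * kn / (k + kn + 1e-30)   # harmonic face mean
            num += kf * np.roll(p, shift, axis=(0, 1))
            den += kf
        p_new = np.where(den > 0, num / np.maximum(den, 1e-30), p)
        p_new[:, 0], p_new[:, -1] = 1.0, 0.0       # boundary pressures
        p = np.where(pore, p_new, p)
    return p

# Straight channel: the pressure should fall linearly along the channel.
pore = np.zeros((20, 40), bool)
pore[8:12, :] = True
print(np.round(pressure_field(pore)[10, ::10], 2))
```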
NASA Astrophysics Data System (ADS)
Palmesi, P.; Abert, C.; Bruckner, F.; Suess, D.
2018-05-01
Fast stray field calculation is commonly considered of great importance for micromagnetic simulations, since it is the most time consuming part of the simulation. The Fast Multipole Method (FMM) has displayed linear O(N) parallelization behavior on many cores. This article investigates the error of a recent FMM approach approximating sources using linear—instead of constant—finite elements in the singular integral for calculating the stray field and the corresponding potential. After measuring performance in an earlier manuscript, this manuscript investigates the convergence of the relative L2 error for several FMM simulation parameters. Various scenarios either calculating the stray field directly or via potential are discussed.
Method of determining pH by the alkaline absorption of carbon dioxide
Hobbs, David T.
1992-01-01
A method for measuring the concentration of hydroxides in alkaline solutions at a remote location using the tendency of hydroxides to absorb carbon dioxide. The method includes passing carbon dioxide over the surface of an alkaline solution in a remote tank and measuring the carbon dioxide concentration before and after contact with the solution. A comparison of the measurements yields the absorption fraction, from which the hydroxide concentration can be calculated using a correlation of hydroxide concentration or pH to absorption fraction.
Seresht, L. Mousavi; Golparvar, Mohammad; Yaraghi, Ahmad
2014-01-01
Background: Appropriate determination of tidal volume (VT) is important for preventing ventilation-induced lung injury. We compared hemodynamic and respiratory parameters under two conditions of receiving VTs calculated using body weight (BW), estimated either from measured height (HBW) or from demi-span-based body weight (DBW). Materials and Methods: This controlled trial was conducted in St. Alzahra Hospital in 2009 on American Society of Anesthesiologists (ASA) class I and II patients aged 18-65 years. Standing height and weight were measured, and then height was calculated using the demi-span method. BW and VT were calculated with the acute respiratory distress syndrome-net formula. Patients were randomized and then crossed over to receive ventilation with both calculated VTs for 20 min each. Hemodynamic and respiratory parameters were analyzed with SPSS version 20.0 using univariate and multivariate analyses. Results: Forty-nine patients were studied. Demi-span-based body weight and thus VT (DTV) were lower than height-based body weight and VT (HTV) (P = 0.028), notably in male patients (P = 0.005). A difference was observed in peak airway pressure (PAP) and airway resistance (AR) changes, with higher PAP and AR at 20 min after receiving HTV compared with DTV. Conclusions: Estimated VT based on measured height is higher than that based on demi-span; this difference exists only in females, and the higher VT results in higher airway pressures during mechanical ventilation. PMID:24627845
Ruuska, Salla; Hämäläinen, Wilhelmiina; Kajava, Sari; Mughal, Mikaela; Matilainen, Pekka; Mononen, Jaakko
2018-03-01
The aim of the present study was to evaluate empirically confusion matrices in device validation. We compared the confusion matrix method to linear regression and error indices in the validation of a device measuring feeding behaviour of dairy cattle. In addition, we studied how to extract additional information on classification errors with confusion probabilities. The data consisted of 12 h behaviour measurements from five dairy cows; feeding and other behaviour were detected simultaneously with a device and from video recordings. The resulting 216 000 pairs of classifications were used to construct confusion matrices and calculate performance measures. In addition, hourly durations of each behaviour were calculated and the accuracy of measurements was evaluated with linear regression and error indices. All three validation methods agreed when the behaviour was detected very accurately or inaccurately. Otherwise, in the intermediate cases, the confusion matrix method and error indices produced relatively concordant results, but the linear regression method often disagreed with them. Our study supports the use of confusion matrix analysis in validation since it is robust to any data distribution and type of relationship, it makes a stringent evaluation of validity, and it offers extra information on the type and sources of errors. Copyright © 2018 Elsevier B.V. All rights reserved.
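For readers unfamiliar with the bookkeeping, the sketch below builds a 2x2 confusion matrix from paired device/reference labels and derives the usual summary measures; the data are synthetic.

```python
import numpy as np

def confusion_metrics(device, reference):
    """2x2 confusion matrix and summary measures (True = feeding)."""
    device, reference = np.asarray(device, bool), np.asarray(reference, bool)
    tp = int(np.sum(device & reference))
    tn = int(np.sum(~device & ~reference))
    fp = int(np.sum(device & ~reference))
    fn = int(np.sum(~device & reference))
    return {"matrix": [[tp, fn], [fp, tn]],
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / device.size}

ref = np.array([1, 1, 1, 0, 0, 0, 1, 0], bool)   # video labels
dev = np.array([1, 1, 0, 0, 0, 1, 1, 0], bool)   # device labels
print(confusion_metrics(dev, ref))
```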
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malík, M., E-mail: michal.malik@tul.cz; Primas, J.; Kopecký, V.
2014-01-15
This paper deals with the phenomenon of a mechanical force generated on a high-voltage asymmetrical capacitor (the so-called Biefeld-Brown effect). A method to measure this force is described and a formula to calculate its value is given. Based on this, the authors derive a formula characterising the velocity of the neutral air flow impinging on an asymmetrical capacitor connected to high voltage. Under normal circumstances, this air flow lessens the generated force. The velocity is then measured using the Particle Image Velocimetry technique, and the theoretically calculated and experimentally measured values are compared. The authors found good agreement between the results of both approaches.
Calculations vs. measurements of remnant dose rates for SNS spent structures
NASA Astrophysics Data System (ADS)
Popova, I. I.; Gallmeier, F. X.; Trotter, S.; Dayton, M.
2018-06-01
Residual dose rate measurements were conducted on target vessel #13 and proton beam window #5 after extraction from their service locations. These measurements were used to verify the calculation methods of radionuclide inventory assessment that are typically performed for nuclear waste characterization and transportation of these structures. Neutronics analyses for predicting residual dose rates were carried out using the transport code MCNPX and the transmutation code CINDER90. For the transport analyses, a complex and rigorous geometry model of the structures and their surroundings was applied. The neutronics analyses used the Bertini and CEM high-energy physics models for simulating particle interactions. The preliminary calculated results were analysed and compared to the measured dose rates and overall show good agreement, within 40% on average.
Method Of Characterizing An Electrode Binder
Cocciantelli, Jean-Michel; Coco, Isabelle; Villenave, Jean-Jacques
1999-05-11
In a method of characterizing a polymer binder for cell electrodes in contact with an electrolyte and including a current collector and a paste containing an electrochemically active material and said binder, a spreading coefficient of the binder on the active material is calculated from the measured angle of contact between standard liquids and the active material and the binder, respectively. An interaction energy of the binder with the electrolyte is calculated from the measured angle of contact between the electrolyte and the binder. The binder is selected such that the spreading coefficient is less than zero and the interaction energy is at least 60 mJ/m^2.
NASA Technical Reports Server (NTRS)
Sopher, R.; Twomey, W. J.
1990-01-01
NASA-Langley is sponsoring a rotorcraft structural dynamics program whose objective is to establish in the U.S. a superior capability to use finite element analysis models in calculations supporting the industrial design of helicopter airframe structures. In the initial phase of the program, teams from the major U.S. manufacturers of helicopter airframes will apply existing finite element analysis methods to calculate loads and vibrations of helicopter airframes and perform correlations between analysis and measurements. This rotorcraft structural dynamics program was given the acronym DAMVIBS (Design Analysis Methods for Vibrations). Sikorsky's RDYNE Rotorcraft Dynamics Analysis, used for the correlation study, the specifics of the application of RDYNE to the AH-1G, and comparisons of the predictions of the method with flight data for loads and vibrations on the AH-1G are described. RDYNE was able to predict trends in the variation of loads and vibrations with airspeed, but in some instances magnitudes differed from measured results by factors of two or three to one. Sensitivities of the predictions to rotor inflow modeling, effects of torsional modes, number of blade bending modes, fuselage structural damping, and hub modal content were studied.
NASA Astrophysics Data System (ADS)
Luo, Yao; Wu, Mei-Ping; Wang, Ping; Duan, Shu-Ling; Liu, Hao-Jun; Wang, Jin-Long; An, Zhan-Feng
2015-09-01
The full magnetic gradient tensor (MGT) refers to the spatial rate of change of the three field components of the geomagnetic field vector along three mutually orthogonal axes. The tensor is of use in geological mapping, resource exploration, magnetic navigation, and other applications. However, it is very difficult to measure the full magnetic gradient tensor using existing engineering technology. We present a method that uses triaxial aeromagnetic gradient measurements to derive the full MGT. The method uses the triaxial gradient data and makes full use of the variation of the magnetic anomaly modulus in three dimensions to obtain a self-consistent magnetic gradient tensor. Numerical simulations show that the full MGT data obtained with the proposed method are of high precision and satisfy the requirements of data processing. We selected triaxial aeromagnetic gradient data from Hebei Province for calculating the full MGT. Data processing shows that using triaxial gradient data takes advantage of the spatial rate of change of the total field in three dimensions and suppresses part of the independent noise in the aeromagnetic gradient. The calculated tensor components have improved resolution, and the transformed full gradient tensor satisfies the requirements of geological mapping and interpretation.
Parr, W C H; Chatterjee, H J; Soligo, C
2012-04-05
Orientation of the subtalar joint axis dictates inversion and eversion movements of the foot and has been the focus of evolutionary and clinical studies for a number of years. Previous studies have measured the subtalar joint axis against the axis of the whole foot, the talocrural joint axis and, recently, the principal axes of the talus. The present study introduces a new method for estimating average joint axes from 3D reconstructions of bones and applies the method to the talus to calculate the subtalar and talocrural joint axes. The study also assesses the validity of the principal axes as a reference coordinate system against which to measure the subtalar joint axis. In order to define the angle of the subtalar joint axis relative to that of another axis in the talus, we suggest measuring the subtalar joint axis against the talocrural joint axis. We present corresponding 3D vector angles calculated from a modern human skeletal sample. This method is applicable to virtual 3D models acquired through surface-scanning of disarticulated 'dry' osteological samples, as well as to 3D models created from CT or MRI scans. Copyright © 2012 Elsevier Ltd. All rights reserved.
Measuring Viscosities of Gases at Atmospheric Pressure
NASA Technical Reports Server (NTRS)
Singh, Jag J.; Mall, Gerald H.; Hoshang, Chegini
1987-01-01
Variant of general capillary method for measuring viscosities of unknown gases based on use of thermal mass-flowmeter section for direct measurement of pressure drops. In technique, flowmeter serves dual role, providing data for determining volume flow rates and serving as well-characterized capillary-tube section for measurement of differential pressures across it. New method simple, sensitive, and adaptable for absolute or relative viscosity measurements of low-pressure gases. Suited for very complex hydrocarbon mixtures where limitations of classical theory and compositional errors make theoretical calculations less reliable.
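The underlying relation is the Hagen-Poiseuille law for laminar flow through a capillary; a minimal sketch with illustrative numbers (entrance and slip corrections ignored):

```python
import math

def capillary_viscosity(radius_m, length_m, dp_pa, q_m3_s):
    """Dynamic viscosity (Pa*s) from laminar capillary flow.

    Hagen-Poiseuille: mu = pi * r^4 * dp / (8 * L * Q); entrance and
    slip corrections are neglected.
    """
    return math.pi * radius_m ** 4 * dp_pa / (8.0 * length_m * q_m3_s)

# Example: 0.25 mm radius, 100 mm long tube, 50 Pa drop, 2.5 ml/min.
print(capillary_viscosity(0.25e-3, 0.1, 50.0, 2.5e-6 / 60))  # ~1.8e-5 (air-like)
```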
Calawerts, William M; Lin, Liyu; Sprott, J C; Jiang, Jack J
2017-01-01
The purpose of this paper is to introduce the rate of divergence as an objective measure to differentiate between the four voice types based on the amount of disorder present in a signal. We hypothesized that the rate of divergence would provide an objective measure that can quantify all four voice types. A total of 150 acoustic voice recordings were randomly selected and analyzed using traditional perturbation, nonlinear, and rate-of-divergence analysis methods. We developed a new parameter, the rate of divergence, which uses a modified version of Wolf's algorithm for calculating Lyapunov exponents of a system. The outcome of this calculation is not a Lyapunov exponent, but rather a description of the divergence of two nearby data points over the next three points in the time series, followed in three time-delayed embedding dimensions. This measure was compared to existing perturbation and nonlinear dynamic methods of distinguishing between voice signals. There was a direct relationship between voice type and rate of divergence. The calculation is especially effective at differentiating between type 3 and type 4 voices (P < 0.001) and is as effective as existing methods at differentiating type 1, type 2, and type 3 signals. The rate-of-divergence calculation is an objective measure that can be used to distinguish between all four voice types based on the amount of disorder present, leading to quicker and more accurate voice typing as well as an improved understanding of the nonlinear dynamics involved in phonation. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zerouali, K; Aubry, J; Doucet, R
2016-06-15
Purpose: To implement the new EBT-XD Gafchromic films for accurate dosimetric and geometric validation of stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT) CyberKnife (CK) patient-specific QA. Methods: Film calibration was performed using triple-channel film analysis on an Epson 10000XL scanner. Calibration films were irradiated using a Varian Clinac 21EX flattened beam (0 to 20 Gy) to ensure sufficient dose homogeneity. Films were scanned at a resolution of 0.3 mm, 24 hours post irradiation, following a well-defined protocol. A set of 12 QAs was performed for several types of CK plans: trigeminal neuralgia, brain metastasis, prostate, and lung tumors. A custom-made insert for the CK head phantom was manufactured to yield an accurate measured-to-calculated dose registration. When the high-dose region was large enough, the absolute dose was also measured with an ionization chamber. Dose calculation was performed using the MultiPlan ray-tracing algorithm for all cases, since the phantom is mostly made of near-water-equivalent plastic. Results: Good agreement (<2%) was found between the dose to the chamber and the film when a chamber measurement was possible. The average dose difference and standard deviation between film measurements and TPS calculations were 1.75% and 3%, respectively. The geometric accuracy was estimated to be <1 mm, combining robot positioning uncertainty and registration of the film to the calculated dose. Conclusion: Patient-specific QA measurements using EBT-XD films yielded a full 2D dose plane with high spatial resolution and acceptable dose accuracy. This method is particularly promising for trigeminal neuralgia plan QA, where the positioning of the spatial dose distribution is equally or more important than the absolute delivered dose for achieving clinical goals.
Application of color Doppler flow mapping to calculate orifice area of St Jude mitral valve
NASA Technical Reports Server (NTRS)
Leung, D. Y.; Wong, J.; Rodriguez, L.; Pu, M.; Vandervoort, P. M.; Thomas, J. D.
1998-01-01
BACKGROUND: The effective orifice area (EOA) of a prosthetic valve is superior to transvalvular gradients as a measure of valve function, but measurement of mitral prosthesis EOA has not been reliable. METHODS AND RESULTS: In vitro flow across St Jude valves was calculated by hemispheric proximal isovelocity surface area (PISA) and segment-of-spheroid (SOS) methods. For steady and pulsatile conditions, PISA and SOS flows correlated with true flow, but SOS and not PISA underestimated flow. These principles were then used intraoperatively to calculate cardiac output and EOA of newly implanted St Jude mitral valves in 36 patients. Cardiac output by PISA agreed closely with thermodilution (r=0.91, Delta=-0.05+/-0.55 L/min), but SOS underestimated it (r=0.82, Delta=-1.33+/-0.73 L/min). Doppler EOAs correlated with Gorlin equation estimates (r=0.75 for PISA and r=0.68 for SOS, P<0.001) but were smaller than corresponding in vitro EOA estimates. CONCLUSIONS: Proximal flow convergence methods can calculate forward flow and estimate EOA of St Jude mitral valves, which may improve noninvasive assessment of prosthetic mitral valve obstruction.
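For the hemispheric PISA model referred to above, the instantaneous flow is 2πr² times the aliasing velocity, and the EOA follows by continuity; a minimal sketch with illustrative values:

```python
import math

def pisa_flow(radius_cm, aliasing_velocity_cm_s):
    """Instantaneous flow (ml/s) through a hemispheric PISA shell."""
    return 2.0 * math.pi * radius_cm ** 2 * aliasing_velocity_cm_s

def effective_orifice_area(flow_ml_s, transvalvular_velocity_cm_s):
    """EOA (cm^2) by continuity: flow / velocity through the orifice."""
    return flow_ml_s / transvalvular_velocity_cm_s

q = pisa_flow(radius_cm=1.0, aliasing_velocity_cm_s=40.0)   # ~251 ml/s
print(effective_orifice_area(q, 150.0))                     # ~1.7 cm^2
```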
Analysis of No-load Iron Losses of Turbine Generators by 3D Magnetic Field Analysis
NASA Astrophysics Data System (ADS)
Nakahara, Akihito; Mogi, Hisashi; Takahashi, Kazuhiko; Ide, Kazumasa; Kaneda, Junya; Hattori, Ken'Ichi; Watanabe, Takashi; Kaido, Chikara; Minematsu, Eisuke; Hanzawa, Kazufumi
This paper focuses on no-load iron losses of turbine generators. A program was developed to calculate the iron losses of turbine generators. In the program, the core loss curves of the materials used for the stator core are reproduced precisely by using tables of loss coefficients. The accuracy of calculation by this method was confirmed by comparing calculated values with values measured in a model stator core. The iron loss of a turbine generator was then estimated considering the three-dimensional distribution of magnetic flux, and the additional losses included in the measured iron loss were evaluated with three-dimensional magnetic field analysis.
NASA Astrophysics Data System (ADS)
Wada, Sanehiro; Furuichi, Noriyuki; Shimada, Takashi
2017-11-01
This paper proposes the application of a novel ultrasonic pulse, called a partial inversion pulse (PIP), to the measurement of the velocity profile and flow rate in a pipe using the ultrasound time-domain correlation (UTDC) method. In general, the measured flow rate depends on the velocity profile in the pipe; thus, on-site calibration is the only method of checking the accuracy of on-site flow rate measurements. Flow rate calculation using UTDC is based on the integration of the measured velocity profile. The advantages of this method compared with the ultrasonic pulse Doppler method include an essentially unlimited velocity range and its applicability to flow fields without a sufficient number of reflectors. However, it has been previously reported that the measurable velocity range for UTDC is limited by false detections. Considering the application of this method to on-site flow fields, the issue of velocity range is important. To reduce the effect of false detections, a PIP signal, which is an ultrasound signal that contains a partially inverted region, was developed in this study. The advantages of the PIP signal are that it requires little additional hardware cost and no additional software cost in comparison with conventional methods. The effects of inversion on the characteristics of the ultrasound transmission were estimated through numerical calculation. Then, experimental measurements were performed at a national standard calibration facility for water flow rate in Japan. The experimental results demonstrate that measurements made using a PIP signal are more accurate and yield a higher detection ratio than measurements using a normal pulse signal.
Saka, Masayuki; Yamauchi, Hiroki; Hoshi, Kenji; Yoshioka, Toru; Hamada, Hidetoshi; Gamada, Kazuyoshi
2015-05-01
Humeral retroversion is defined as the orientation of the humeral head relative to the distal humerus. Because none of the previous methods used to measure humeral retroversion strictly follow this definition, values obtained by these techniques vary and may be biased by morphologic variations of the humerus. The purpose of this study was 2-fold: to validate a method to define the axis of the distal humerus with a virtual cylinder and to establish the reliability of 3-dimensional (3D) measurement of humeral retroversion by this cylinder fitting method. Humeral retroversion in 14 baseball players (28 humeri) was measured by the 3D cylinder fitting method. The root mean square error was calculated to compare values obtained by a single tester and by 2 different testers using the embedded coordinate system. To establish the reliability, intraclass correlation coefficient (ICC) and precision (standard error of measurement [SEM]) were calculated. The root mean square errors for the humeral coordinate system were <1.0 mm/1.0° for comparison of all translations/rotations obtained by a single tester and <1.0 mm/2.0° for comparison obtained by 2 different testers. Assessment of reliability and precision of the 3D measurement of retroversion yielded an intratester ICC of 0.99 (SEM, 1.0°) and intertester ICC of 0.96 (SEM, 2.8°). The error in measurements obtained by a distal humerus cylinder fitting method was small enough not to affect retroversion measurement. The 3D measurement of retroversion by this method provides excellent intratester and intertester reliability. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Residual stress in glass: indentation crack and fractography approaches
Anunmana, Chuchai; Anusavice, Kenneth J.; Mecholsky, John J.
2009-01-01
Objective To test the hypothesis that the indentation crack technique can determine surface residual stresses that are not statistically significantly different from those determined from the analytical procedure using surface cracks, the four-point flexure test, and fracture surface analysis. Methods Soda-lime-silica glass bar specimens (4 mm × 2.3 mm × 28 mm) were prepared and annealed at 650 °C for 30 min before testing. The fracture toughness values of the glass bars were determined from 12 specimens based on induced surface cracks, four-point flexure, and fractographic analysis. To determine the residual stress from the indentation technique, 18 specimens were indented under 19.6 N load using a Vickers microhardness indenter. Crack lengths were measured within 1 min and 24 h after indentation, and the measured crack lengths were compared with the mean crack lengths of annealed specimens. Residual stress was calculated from an equation developed for the indentation technique. All specimens were fractured in a four-point flexure fixture and the residual stress was calculated from the strength and measured crack sizes on the fracture surfaces. Results The results show that there was no significant difference between the residual stresses calculated from the two techniques. However, the differences in mean residual stresses calculated within 1 min compared with those calculated after 24 h were statistically significant (p=0.003). Significance This study compared the indentation technique with the fractographic analysis method for determining the residual stress in the surface of soda-lime silica glass. The indentation method may be useful for estimating residual stress in glass. PMID:19671475
Satou, Tsukasa; Ito, Misae; Shinomiya, Yuma; Takahashi, Yoshiaki; Hara, Naoto; Niida, Takahiro
2018-04-04
To investigate differences in the stimulus accommodative convergence/accommodation (AC/A) ratio using various techniques and accommodative stimuli, and to describe a method for determining the stimulus AC/A ratio. A total of 81 subjects with a mean age of 21 years (range, 20-23 years) were enrolled. The relationship between ocular deviation and accommodation was assessed using two methods. Ocular deviation was measured by varying the accommodative requirement using spherical plus/minus lenses to create an accommodative stimulus of 10.00 diopters (D) (in 1.00 D steps). Ocular deviation was assessed using the alternate prism cover test in method 1 at distance (5 m) and near (1/3 m), and the major amblyoscope in method 2. The stimulus AC/A ratios obtained using methods 1 and 2 were calculated and defined as the stimulus AC/A ratios with low and high accommodation, respectively, using the following analysis method. The former was calculated as the difference between the convergence response to an accommodative stimulus of 3 D and 0 D, divided by 3. The latter was calculated as the difference between the convergence response to a maximum (max) accommodative stimulus with distinct vision of the subject and an accommodative stimulus of max minus 3.00 D, divided by 3. The median stimulus AC/A ratio with low accommodation (1.0 Δ/D for method 1 at distance, 2.0 Δ/D for method 1 at near, and 2.7 Δ/D for method 2) differed significantly among the measurement methods (P < 0.01). Differences in the median stimulus AC/A ratio with high accommodation (4.0 Δ/D for method 1 at distance, 3.7 Δ/D for method 1 at near, and 4.7 Δ/D for method 2) between method 1 at distance and method 2 were statistically significant (P < 0.05), while method 1 at near was not significantly different compared with other methods. Differences in the stimulus AC/A ratio value were significant according to measurement technique and accommodative stimuli. However, differences caused by measurement technique may be reduced by using a high accommodative stimulus during measurements.
Kehl, Sven; Siemer, Jörn; Brunnemer, Suna; Weiss, Christel; Eckert, Sven; Schaible, Thomas; Sütterlin, Marc
2014-05-01
The purpose of this study was to compare different methods for measuring the fetal lung area-to-head circumference ratio and to investigate their prediction of postpartum survival and the need for neonatal extracorporeal membrane oxygenation (ECMO) therapy in fetuses with isolated congenital diaphragmatic hernias. This prospective study included 118 fetuses of at least 20 weeks' gestation with isolated left-sided congenital diaphragmatic hernias. The lung-to-head ratio was measured with 3 different methods (longest diameter, anteroposterior diameter, and tracing). To eliminate the influence of gestational age, the observed-to-expected lung-to-head ratio was calculated. Receiver operating characteristic (ROC) curves were calculated for the statistical prediction of survival and need for ECMO therapy by the observed-to-expected lung-to-head ratio measured with the different methods. For survival and ECMO necessity 118 and 102 cases (16 neonates were not eligible for ECMO) were assessed, respectively. For prediction of postpartum survival and ECMO necessity, the areas under the ROC curves and 95% confidence intervals showed very similar results for the 3 methods for prediction of survival (tracing, 0.8445 [0.7553-0.9336]; longest diameter, 0.8248 [0.7360-0.9136]; and anteroposterior diameter, 0.8002 [0.7075-0.8928]) and for ECMO necessity (tracing, 0.7344 [0.6297-0.8391]; longest diameter, 0.7128 [0.6027-0.8228]; and anteroposterior diameter, 0.7212 [0.6142-0.8281]). Comparisons between the areas under the ROC curves showed that the tracing method was superior to the anteroposterior diameter method in predicting postpartum survival (P = .0300). Lung-to-head ratio and observed-to-expected lung-to-head ratio measurements were shown to accurately predict postnatal survival and the need for ECMO therapy in fetuses with left-sided congenital diaphragmatic hernias. Tracing the limits of the lungs seems to be the favorable method for calculating the fetal lung area.
Finding trap stiffness of optical tweezers using digital filters.
Almendarez-Rangel, Pedro; Morales-Cruzado, Beatriz; Sarmiento-Gómez, Erick; Pérez-Gutiérrez, Francisco G
2018-02-01
Obtaining the trap stiffness and calibrating the position detection system are the basis of force measurement using optical tweezers. Both calibration quantities can be calculated using several experimental methods available in the literature. In most cases, stiffness determination and detection system calibration are performed separately, often requiring procedures under very different conditions, so confidence in the calibration is not assured due to possible changes in the environment. In this work, a new method to simultaneously obtain both the detection system calibration and the trap stiffness is presented. The method is based on the calculation of the power spectral density of positions through digital filters to obtain the harmonic contributions of the position signal. This method has the advantage of calculating both the trap stiffness and the photodetector calibration factor from the same dataset, in situ. It also provides a direct way to avoid unwanted frequencies that could greatly affect the calibration procedure, such as electrical noise.
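For context, the most common spectral route to stiffness fits the position power spectrum to a Lorentzian and converts the corner frequency to stiffness; a sketch under that standard model (the paper's digital-filter harmonic analysis differs in detail), with gamma the bead drag coefficient and the initial guesses illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, D, fc):
    # One-sided power spectrum of an overdamped bead in a harmonic trap
    return D / (2.0 * np.pi**2 * (fc**2 + f**2))

def trap_stiffness(f, psd, gamma):
    """Fit the position PSD to a Lorentzian and convert the corner
    frequency fc to stiffness via kappa = 2*pi*gamma*fc."""
    (D, fc), _ = curve_fit(lorentzian, f, psd, p0=[psd[0], 100.0])
    return 2.0 * np.pi * gamma * abs(fc)
```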
Estimation of composite hydraulic resistance in ice-covered alluvial streams
NASA Astrophysics Data System (ADS)
Ghareh Aghaji Zare, Soheil; Moore, Stephanie A.; Rennie, Colin D.; Seidou, Ousmane; Ahmari, Habib; Malenchak, Jarrod
2016-02-01
Formation, propagation, and recession of ice cover introduce a dynamic boundary layer at the top of rivers during northern winters. Ice cover affects water velocity magnitude and distribution, water level, and consequently the conveyance capacity of the river. In this research, total resistance, i.e., "composite resistance," is studied over a 4-month period covering stable ice cover, breakup, and open-water stages in the Lower Nelson River (LNR), northern Manitoba, Canada. Flow and ice characteristics such as water velocity and depth and ice thickness and condition were measured continuously using acoustic techniques. An Acoustic Doppler Current Profiler (ADCP) and a Shallow Water Ice Profiling Sonar (SWIPS) were installed simultaneously on a bottom mount and deployed for this purpose. Total resistance to the flow and boundary roughness are estimated using measured bulk hydraulic parameters. A novel method is developed to calculate composite resistance directly from measured under-ice velocity profiles. The results of this method are compared to the measured total resistance and to the composite resistance calculated using formulae available in the literature. The new technique is demonstrated to compare favorably to measured total resistance and to outperform previously available methods.
Kim, Min Woo; Sun, Gwanggyu; Lee, Jung Hyuk; Kim, Byung-Gee
2018-06-01
Ribozymes (Rz) are attractive RNA molecules in metabolic engineering and synthetic biology, where their cleavage reaction is used for RNA processing as a control unit or ON/OFF signal. To be useful for such RNA processing, an Rz must have highly active and specific catalytic activity. However, current methods for assessing the intracellular activity of Rz have limitations, such as difficulty in handling and inaccuracies in the evaluation of correct cleavage activity. In this paper, we propose a simple method to accurately measure the "intracellular cleavage efficiency" of Rz. This method deactivates unwanted Rz activity, which may continue after cell lysis, using a DNA quenching method, and calculates the cleavage efficiency by analyzing the fraction of mRNA cleaved by Rz out of the total amount of mRNA containing Rz via quantitative real-time PCR (qPCR). The proposed method was applied to measure the intracellular cleavage efficiency of sTRSV, a representative Rz, and its mutant; their intracellular cleavage efficiencies were calculated as 89% and 93%, respectively. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Qin, J. J.; Jones, M.; Shiota, T.; Greenberg, N. L.; Firstenberg, M. S.; Tsujino, H.; Zetts, A. D.; Sun, J. P.; Cardon, L. A.; Odabashian, J. A.;
2000-01-01
AIM: The aim of this study was to investigate the feasibility and accuracy of using symmetrically rotated apical long axis planes for the determination of left ventricular (LV) volumes with real-time three-dimensional echocardiography (3DE). METHODS AND RESULTS: Real-time 3DE was performed in six sheep during 24 haemodynamic conditions with electromagnetic flow measurements (EM), and in 29 patients with magnetic resonance imaging measurements (MRI). LV volumes were calculated by Simpson's rule with five 3DE methods (i.e. apical biplane, four-plane, six-plane, nine-plane (in which the angle between each long axis plane was 90 degrees, 45 degrees, 30 degrees or 20 degrees, respectively) and standard short axis views (SAX)). Real-time 3DE correlated well with EM for LV stroke volumes in animals (r=0.68-0.95) and with MRI for absolute volumes in patients (r-values=0.93-0.98). However, agreement between MRI and apical nine-plane, six-plane, and SAX methods in patients was better than those with apical four-plane and bi-plane methods (mean difference = -15, -18, -13, vs. -31 and -48 ml for end-diastolic volume, respectively, P<0.05). CONCLUSION: Apically rotated measurement methods of real-time 3DE correlated well with reference standards for calculating LV volumes. Balancing accuracy and required time for these LV volume measurements, the apical six-plane method is recommended for clinical use.
Code of Federal Regulations, 2013 CFR
2013-01-01
... least three significant figures shall be reported. 4.3Off mode. 4.3.1Pool heaters with a seasonal off... significant figures shall be reported. 5.Calculations. 5.1Thermal efficiency. Calculate the thermal efficiency...
QUANTIFYING UNCERTAINTY DUE TO RANDOM ERRORS FOR MOMENT ANALYSES OF BREAKTHROUGH CURVES
The uncertainty in moments calculated from breakthrough curves (BTCs) is investigated as a function of random measurement errors in the data used to define the BTCs. The method presented assumes moments are calculated by numerical integration using the trapezoidal rule, and is t...
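A sketch of the trapezoidal-rule moment calculation the method assumes, computing the zeroth moment, mean arrival time, and variance of a breakthrough curve; names are illustrative:

```python
import numpy as np

def btc_moments(t, c):
    """Temporal moments of a breakthrough curve c(t) by the trapezoidal rule."""
    m0 = np.trapz(c, t)                       # zeroth moment (area under BTC)
    mu = np.trapz(t * c, t) / m0              # normalized first moment: mean arrival time
    var = np.trapz((t - mu)**2 * c, t) / m0   # second central moment: variance
    return m0, mu, var
```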
Research into Influence of Gaussian Beam on Terahertz Radar Cross Section of a Semicircular Boss
NASA Astrophysics Data System (ADS)
Li, Hui-Yu; Li, Qi; She, Jian-Yu; Zhao, Yong-Peng; Chen, De-Ying; Wang, Qi
2013-08-01
In radar cross section (RCS) calculation of a rough surface, the model can be simplified into the scattering of geometrically idealized bosses on a surface. Thus the problem of calculating the RCS of a rough surface is reduced to calculating the RCS of a semicircular boss. RCS measurement of a scale model can help save time and money. The use of terahertz radiation in RCS work is attractive because of its special properties: the wavelength of the terahertz wave can keep the size of the model in a suitable range in scale-model measurements and yield more detailed data in measurements of the real object. However, the incident beam of a terahertz source is usually a Gaussian beam, whereas theoretical RCS estimation usually assumes a plane wave as the incident beam for the sake of simplicity, which may lead to an error between the measurement and calculation results. In this paper, the method of images is used to calculate the RCS of a semicircular boss at 2.52 THz, and the results are compared with those calculated when the incident beam is a plane wave.
Dyer, Karrie; Lanning, Craig; Das, Bibhuti; Lee, Po-Feng; Ivy, D Dunbar; Valdes-Cruz, Lilliam; Shandas, Robin
2006-04-01
We have shown previously that input impedance of the pulmonary vasculature provides a comprehensive characterization of right ventricular afterload by including compliance. However, impedance-based compliance assessment requires invasive measurements. Here, we develop and validate a noninvasive method to measure pulmonary artery (PA) compliance using ultrasound color M-mode (CMM) Doppler tissue imaging (DTI). Dynamic compliance (Cdyn) of the PA was obtained from CMM DTI and continuous-wave Doppler measurement of the tricuspid regurgitant velocity. Cdyn was calculated as [(Ds − Dd)/(Dd × Ps)] × 10^4, where Ds = systolic diameter, Dd = diastolic diameter, and Ps = systolic pressure. The method was validated both in vitro and in 13 patients in the catheterization laboratory, and then tested on 27 pediatric patients with pulmonary hypertension, with comparison to 10 age-matched control subjects. Cdyn was also measured in an additional 13 patients undergoing reactivity studies. Instantaneous diameter measured using CMM DTI agreed well with intravascular ultrasound measurements in the in vitro models. Clinically, Cdyn calculated by CMM DTI agreed with Cdyn calculated using invasive techniques (23.4 ± 16.8 vs 29.1 ± 20.6 %/100 mm Hg; P = not significant). Patients with pulmonary hypertension had significantly lower peak wall velocity values and lower Cdyn values than control subjects (P < .01). Cdyn values followed an exponentially decaying relationship with PA pressure, indicating the nonlinear stress-strain behavior of these arteries. Reactivity in Cdyn agreed with reactivity measured using impedance techniques. The Cdyn method provides a noninvasive means of assessing PA compliance and should be useful as an additional measure of vascular reactivity subsequent to pulmonary vascular resistance in patients with pulmonary hypertension.
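The compliance formula quoted above transcribes directly into code; a sketch with illustrative inputs (units as reported, yielding %/100 mm Hg):

```python
def dynamic_compliance(d_sys_mm, d_dia_mm, p_sys_mmHg):
    """Cdyn = [(Ds - Dd) / (Dd * Ps)] * 10^4, as defined in the abstract;
    diameters in mm and systolic pressure in mmHg give %/100 mmHg."""
    return (d_sys_mm - d_dia_mm) / (d_dia_mm * p_sys_mmHg) * 1e4

# Illustrative values only: 20 -> 22 mm diameter excursion at Ps = 35 mmHg
print(dynamic_compliance(22.0, 20.0, 35.0))  # ~28.6 %/100 mmHg
```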
A method for cone fitting based on certain sampling strategy in CMM metrology
NASA Astrophysics Data System (ADS)
Zhang, Li; Guo, Chaopeng
2018-04-01
A method of cone fitting in engineering is explored and implemented to overcome the shortcomings of the current fitting method, in which the calculation of the initial geometric parameters is imprecise and causes poor accuracy in surface fitting. A geometric distance function of the cone is constructed first; then a certain sampling strategy is defined to calculate the initial geometric parameters, and afterwards a nonlinear least-squares method is used to fit the surface. An experiment was designed to verify the accuracy of the method. The experimental data prove that the proposed method can obtain the initial geometric parameters simply and efficiently, fits the surface precisely, and provides a new, accurate approach to cone fitting in coordinate measurement.
[Welding arc temperature field measurements based on Boltzmann spectrometry].
Si, Hong; Hua, Xue-Ming; Zhang, Wang; Li, Fang; Xiao, Xiao
2012-09-01
Arc plasma, as a non-uniform plasma, involves complicated energy and mass transport processes in its interior, so plasma temperature measurement is of great significance. Compared with the absolute spectral line intensity method and the standard temperature method, the Boltzmann plot method is more accurate and convenient. Based on Boltzmann theory, the present paper calculates the temperature distribution of the plasma and analyzes the principles of line selection, using real-time spatial scanning measurements of the TIG arc.
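The Boltzmann plot reduces to a straight-line fit: ln(I·lambda/(g·A)) versus upper-level energy has slope −1/(kB·T) for an optically thin plasma in local thermodynamic equilibrium; a sketch, with variable names as assumptions:

```python
import numpy as np

K_B_EV = 8.617333e-5  # Boltzmann constant, eV/K

def boltzmann_plot_temperature(intensity, wavelength_nm, g_upper, A_ul, E_upper_eV):
    """Excitation temperature (K) from the slope of ln(I*lambda/(g*A))
    versus upper-level energy; assumes an optically thin LTE plasma."""
    y = np.log(intensity * wavelength_nm / (g_upper * A_ul))
    slope, _intercept = np.polyfit(E_upper_eV, y, 1)
    return -1.0 / (K_B_EV * slope)   # slope = -1/(kB*T)
```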
NASA Astrophysics Data System (ADS)
Šiljeg, A.; Lozić, S.; Šiljeg, S.
2014-12-01
The bathymetric survey of Lake Vrana included a wide range of activities that were performed in several different stages, in accordance with the standards set by the International Hydrographic Organization. The survey was conducted using an integrated measuring system which consisted of three main parts: a single-beam sonar Hydrostar 4300, GPS devices Ashtech Promark 500 - base, and a Thales Z-Max - rover. A total of 12 851 points were gathered. In order to find continuous surfaces necessary for analysing the morphology of the bed of Lake Vrana, it was necessary to approximate values in certain areas that were not directly measured, by using an appropriate interpolation method. The main aims of this research were as follows: to compare the efficiency of 16 different interpolation methods, to discover the most appropriate interpolators for the development of a raster model, to calculate the surface area and volume of Lake Vrana, and to compare the differences in calculations between separate raster models. The best deterministic method of interpolation was ROF multi-quadratic, and the best geostatistical, ordinary cokriging. The mean quadratic error in both methods measured less than 0.3 m. The quality of the interpolation methods was analysed in 2 phases. The first phase used only points gathered by bathymetric measurement, while the second phase also included points gathered by photogrammetric restitution. The first bathymetric map of Lake Vrana in Croatia was produced, as well as scenarios of minimum and maximum water levels. The calculation also included the percentage of flooded areas and cadastre plots in the case of a 2 m increase in the water level. The research presented new scientific and methodological data related to the bathymetric features, surface area and volume of Lake Vrana.
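A hold-out comparison of interpolators of the kind reported above can be sketched with scipy's griddata; the nearest/linear/cubic options below stand in for the 16 deterministic and geostatistical methods the study compared:

```python
import numpy as np
from scipy.interpolate import griddata

def holdout_rmse(xy, z, method, test_frac=0.2, seed=0):
    """RMSE of an interpolator on a random hold-out subset of the points.
    xy: (N, 2) array of positions, z: (N,) array of depths."""
    rng = np.random.default_rng(seed)
    test = rng.random(len(z)) < test_frac
    zi = griddata(xy[~test], z[~test], xy[test], method=method)
    ok = ~np.isnan(zi)                 # hold-outs outside the convex hull
    return float(np.sqrt(np.mean((zi[ok] - z[test][ok]) ** 2)))

# e.g. for m in ("nearest", "linear", "cubic"): print(m, holdout_rmse(xy, z, m))
```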
Dexter, Franklin; Ledolter, Johannes; Hindman, Bradley J
2016-01-01
In this Statistical Grand Rounds, we review methods for the analysis of the diversity of procedures among hospitals, the activities among anesthesia providers, etc. We apply multiple methods and consider their relative reliability and usefulness for perioperative applications, including calculations of SEs. We also review methods for comparing the similarity of procedures among hospitals, activities among anesthesia providers, etc. We again apply multiple methods and consider their relative reliability and usefulness for perioperative applications. The applications include strategic analyses (e.g., hospital marketing) and human resource analytics (e.g., comparisons among providers). Measures of diversity of procedures and activities (e.g., Herfindahl and Gini-Simpson index) are used for quantification of each facility (hospital) or anesthesia provider, one at a time. Diversity can be thought of as a summary measure. Thus, if the diversity of procedures for 48 hospitals is studied, the diversity (and its SE) is being calculated for each hospital. Likewise, the effective numbers of common procedures at each hospital can be calculated (e.g., by using the exponential of the Shannon index). Measures of similarity are pairwise assessments. Thus, if quantifying the similarity of procedures among cases with a break or handoff versus cases without a break or handoff, a similarity index represents a correlation coefficient. There are several different measures of similarity, and we compare their features and applicability for perioperative data. We rely extensively on sensitivity analyses to interpret observed values of the similarity index.
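The indices named above have standard closed forms: Herfindahl = sum(p²), Gini-Simpson = 1 − sum(p²), and the effective number of common procedures is the exponential of the Shannon index; a sketch for one facility's procedure counts (SE calculations, which the article emphasizes, are omitted here):

```python
import numpy as np

def diversity_indices(counts):
    """Herfindahl, Gini-Simpson, and effective number of common
    categories (exp of the Shannon index) for one facility or provider."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()             # proportions of each procedure
    herfindahl = float(np.sum(p**2))
    gini_simpson = 1.0 - herfindahl
    shannon = float(-np.sum(p * np.log(p)))
    return herfindahl, gini_simpson, np.exp(shannon)

print(diversity_indices([120, 60, 15, 5]))  # illustrative procedure counts
```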
Reproducibility of Regional DEXA Examinations of Abdominal Fat and Lean Tissue
Tallroth, Kaj; Kettunen, Jyrki A.; Kujala, Urho M.
2013-01-01
Objective The aim of this study was to develop and test the validity of a new repeatable method to delimit abdominal areas for follow-up of fat mass (FM) and lean tissue mass (LM) in DEXA examinations. Methods 37 male volunteers underwent two DEXA examinations. Total body FM and LM measurements and corresponding abdominal measurements in a carefully defined region were calculated from the first scan. After repositioning of the subjects and a second scan, the delimited region was copied and the abdominal tissues re-calculated. Results The mean LM of the abdominal area was 2.804 kg (SD 0.556), and the mean FM was 1.026 kg (SD 0.537). The intra-class correlation coefficient for the repeated abdominal LM, FM, and LM/FM ratio measurements was 0.99. The mean difference (bias) for the repeated abdominal LM measurements was −13 g (95% confidence interval (CI) −193.0 to 166.8), and for the repeated abdominal FM measurements it was −35 g (95% CI −178.9 to 108.5). Conclusions The results indicate that regional DEXA is a sensitive method with excellent reproducibility in the measurements of the abdominal fat and lean tissues. The method may serve as a useful tool for evaluation and follow-up of various dietary and training programmes. PMID:23615566
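The reported bias and confidence intervals follow the usual Bland-Altman arithmetic; a sketch computing the bias, its 95% CI via the standard error, and the 95% limits of agreement for paired repeated measurements:

```python
import numpy as np

def bland_altman(x1, x2):
    """Bias, 95% CI of the bias, and 95% limits of agreement
    for paired repeated measurements x1 and x2."""
    d = np.asarray(x1, float) - np.asarray(x2, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    se = sd / np.sqrt(len(d))
    ci = (bias - 1.96 * se, bias + 1.96 * se)    # CI of the mean difference
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # limits of agreement
    return bias, ci, loa
```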
NASA Technical Reports Server (NTRS)
Schmucker, R. H.
1984-01-01
Methods for measuring the lateral forces, occurring as a result of asymmetric nozzle flow separation, are discussed. The effect of some parameters on the side load is explained. A new method was developed for calculation of the side load. The values calculated are compared with side load data of the J-2 engine. Results are used for predicting side loads of the space shuttle main engine.
Comparison of different phase retrieval algorithms
NASA Astrophysics Data System (ADS)
Kaufmann, Rolf; Plamondon, Mathieu; Hofmann, Jürgen; Neels, Antonia
2017-09-01
X-ray phase contrast imaging is attracting more and more interest. Since the phase cannot be measured directly an indirect method using e.g. a grating interferometer has to be applied. This contribution compares three different approaches to calculate the phase from Talbot-Lau interferometer measurements using a phase-stepping approach. Besides the usually applied Fourier coefficient method also a linear fitting technique and Taylor series expansion method are applied and compared.
Biases of chamber methods for measuring soil CO2 efflux demonstrated with a laboratory apparatus.
S. Mark Nay; Kim G. Mattson; Bernard T. Bormann
1994-01-01
Investigators have historically measured soil CO2 efflux as an indicator of soil microbial and root activity and more recently in calculations of carbon budgets. The most common methods estimate CO2 efflux by placing a chamber over the soil surface and quantifying the amount of CO2 entering the...
USDA-ARS?s Scientific Manuscript database
Kinetic energy of water droplets has a substantial effect on development of a soil surface seal and infiltration rate of bare soil. Methods for measuring sprinkler droplet size and velocity needed to calculate droplet kinetic energy have been developed and tested over the past 50 years, each with ad...
A Method of Measuring the Costs and Benefits of Applied Research.
ERIC Educational Resources Information Center
Sprague, John W.
The Bureau of Mines studied the application of the concepts and methods of cost-benefit analysis to the problem of ranking alternative applied research projects. Procedures for measuring the different classes of project costs and benefits, both private and public, are outlined, and cost-benefit calculations are presented, based on the criteria of…
New determination of the fine structure constant from the electron g value and QED.
Gabrielse, G; Hanneke, D; Kinoshita, T; Nio, M; Odom, B
2006-07-21
Quantum electrodynamics (QED) predicts a relationship between the dimensionless magnetic moment of the electron (g) and the fine structure constant (alpha). A new measurement of g using a one-electron quantum cyclotron, together with a QED calculation involving 891 eighth-order Feynman diagrams, determine alpha^(-1) = 137.035 999 710 (96) [0.70 ppb]. The uncertainties are 10 times smaller than those of nearest rival methods that include atom-recoil measurements. Comparisons of measured and calculated g test QED most stringently, and set a limit on internal electron structure.
Method of Detecting Coliform Bacteria and Escherichia Coli Bacteria from Reflected Light
NASA Technical Reports Server (NTRS)
Vincent, Robert (Inventor)
2013-01-01
The present invention relates to a method of detecting coliform bacteria in water from reflected light and a method of detecting Escherichia coli bacteria in water from reflected light, and also includes devices for the measurement, calculation, and transmission of data relating to those methods.
Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve
NASA Astrophysics Data System (ADS)
Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.
2009-04-01
Soil water retention curve (SWRC) is one of the soil hydraulic properties whose direct measurement is time consuming and expensive. Since its measurement is unavoidable in environmental studies, e.g., investigations of unsaturated hydraulic conductivity and solute transport, this study attempts to predict the soil water retention curve from two measured points. Using the Cresswell and Paydar (1996) method (two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air-entry value) were estimated; water contents at different matric potentials were then estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed that for the Cresswell and Paydar (1996) method there was no significant difference between case 1 and case 2, although the RMSE value in case 2 (2.35) was slightly less than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for cases 1 (1.63) and 2 (1.33) than the Cresswell and Paydar (1996) method.
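Assuming the Tyler and Wheatcraft (1990) form theta(h) = theta_s·(h_a/h)^(3−D) for tensions h > h_a, two measured points determine the fractal dimension D and the air-entry value h_a in closed form; theta_s is taken as known, and the function names are illustrative:

```python
import math

def tyler_wheatcraft_two_point(h1, t1, h2, t2, theta_s):
    """Fractal dimension D and air-entry value h_a from two points
    (h, theta) of the retention curve, assuming
    theta = theta_s * (h_a / h)**(3 - D)."""
    exponent = math.log(t1 / t2) / math.log(h2 / h1)   # equals 3 - D
    D = 3.0 - exponent
    h_a = h1 * (t1 / theta_s) ** (1.0 / exponent)
    return D, h_a

def theta_at(h, D, h_a, theta_s):
    """Predicted water content at tension h from the fitted model."""
    return theta_s if h <= h_a else theta_s * (h_a / h) ** (3.0 - D)
```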
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, N; Shen, C; Tian, Z
Purpose: Monte Carlo (MC) simulation is typically regarded as the most accurate dose calculation method for proton therapy. Yet for real clinical cases, the overall accuracy also depends on that of the MC beam model. Commissioning a beam model to faithfully represent a real beam requires finely tuning a set of model parameters, which can be tedious given the large number of pencil beams to commission. This abstract reports an automatic beam-model commissioning method for pencil-beam scanning proton therapy via an optimization approach. Methods: We modeled a real pencil beam with energy and spatial spread following Gaussian distributions. Mean energy, energy spread, and spatial spread are the model parameters. To commission against a real beam, we first performed MC simulations to calculate dose distributions of a set of ideal (monoenergetic, zero-size) pencil beams. The dose distribution for a real pencil beam is hence a linear superposition of the doses for those ideal pencil beams, with weights in Gaussian form. We formulated the commissioning task as an optimization problem, such that the calculated central-axis depth dose and lateral profiles at several depths match the corresponding measurements. An iterative algorithm combining a conjugate gradient method and parameter fitting was employed to solve the optimization problem. We validated our method in simulation studies. Results: We calculated dose distributions for three real pencil beams with nominal energies of 83, 147 and 199 MeV using realistic beam parameters. These data were regarded as measurements and used for commissioning. After commissioning, the average differences in energy and beam spread between the determined values and the ground truth were 4.6% and 0.2%. With the commissioned model, we recomputed the dose. Mean dose differences from measurements were 0.64%, 0.20% and 0.25%. Conclusion: The developed automatic MC beam-model commissioning method for pencil-beam scanning proton therapy can determine beam model parameters with satisfactory accuracy.
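A sketch of the superposition-plus-fitting idea: pre-computed monoenergetic depth doses are combined with Gaussian energy weights, and the (mean energy, energy spread) pair is optimized against measurement. The use of scipy's least_squares here is an assumption standing in for the authors' conjugate-gradient/parameter-fitting iteration, and the fit covers only the energy parameters, not the spatial spread:

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian_weights(energies, mean_e, sigma_e):
    """Normalized Gaussian weights over the ideal-beam energy grid."""
    w = np.exp(-0.5 * ((energies - mean_e) / sigma_e) ** 2)
    return w / w.sum()

def commission(energies, ideal_depth_dose, measured):
    """Find (mean energy, energy spread) so the Gaussian-weighted
    superposition of ideal monoenergetic depth doses matches measurement.
    ideal_depth_dose: array (n_energies, n_depths) from MC pre-calculation."""
    def residual(p):
        mean_e, sigma_e = p
        model = gaussian_weights(energies, mean_e, abs(sigma_e)) @ ideal_depth_dose
        return model - measured
    p0 = [energies.mean(), 0.01 * energies.mean()]   # rough initial guess
    return least_squares(residual, p0).x
```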
NASA Astrophysics Data System (ADS)
Otsuka, Mioko; Homma, Ryoei; Hasegawa, Yasuhiro
2017-05-01
The phonon and carrier thermal conductivities of thermoelectric materials were calculated using the Wiedemann-Franz law, the Boltzmann equation, and a method we propose in this study called the Debye specific heat method. We prepared polycrystalline n-type doped bismuth telluride (BiTe) and bismuth antimony (BiSb) bulk alloy samples and measured six parameters (Seebeck coefficient, resistivity, thermal conductivity, thermal diffusivity, magneto-resistivity, and Hall coefficient). The carrier density and mobility were estimated for calculating the carrier thermal conductivity by using the Boltzmann equation. In the Debye specific heat method, the phonon thermal diffusivity and thermal conductivity were calculated from the temperature dependence of the effective specific heat by using not only the measured thermal conductivity and the Debye model, but also the measured thermal diffusivity. The carrier thermal conductivity was also evaluated from the phonon thermal conductivity by using the specific heat. The ratio of carrier thermal conductivity to total thermal conductivity was evaluated for the BiTe and BiSb samples; the values obtained using the Debye specific heat method at 300 K were 52% for BiTe and <5.5% for BiSb, considerably larger (BiTe) or smaller (BiSb) than those obtained using the other methods. The Dulong-Petit law was applied to validate the Debye specific heat method at 300 K, which is significantly greater than the Debye temperature of the BiTe and BiSb samples, and it was confirmed that the phonon specific heat at 300 K was accurately reproduced using our proposed method.
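The Wiedemann-Franz step above is a one-liner: kappa_e = L·T/rho with the Sommerfeld Lorenz number, and the phonon part follows as the residual; the numbers below are illustrative, not the paper's data:

```python
L0 = 2.44e-8  # Sommerfeld Lorenz number, W*Ohm/K^2

def carrier_thermal_conductivity(resistivity_ohm_m, T_K, lorenz=L0):
    """Wiedemann-Franz law: kappa_e = L * T / rho."""
    return lorenz * T_K / resistivity_ohm_m

kappa_total = 1.6                                  # measured, W/(m K), illustrative
kappa_e = carrier_thermal_conductivity(1.0e-5, 300.0)
kappa_phonon = kappa_total - kappa_e               # phonon part as the residual
print(kappa_e, kappa_phonon, kappa_e / kappa_total)
```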
Nguyen, Anh-Dung; Boling, Michelle C; Slye, Carrie A; Hartley, Emily M; Parisi, Gina L
2013-01-01
Accurate, efficient, and reliable measurement methods are essential to prospectively identify risk factors for knee injuries in large cohorts. To determine tester reliability using digital photographs for the measurement of static lower extremity alignment (LEA) and whether values quantified with an electromagnetic motion-tracking system are in agreement with those quantified with clinical methods and digital photographs. Descriptive laboratory study. Laboratory. Thirty-three individuals participated and included 17 (10 women, 7 men; age = 21.7 ± 2.7 years, height = 163.4 ± 6.4 cm, mass = 59.7 ± 7.8 kg, body mass index = 23.7 ± 2.6 kg/m2) in study 1, in which we examined the reliability between clinical measures and digital photographs in 1 trained and 1 novice investigator, and 16 (11 women, 5 men; age = 22.3 ± 1.6 years, height = 170.3 ± 6.9 cm, mass = 72.9 ± 16.4 kg, body mass index = 25.2 ± 5.4 kg/m2) in study 2, in which we examined the agreement among clinical measures, digital photographs, and an electromagnetic tracking system. We evaluated measures of pelvic angle, quadriceps angle, tibiofemoral angle, genu recurvatum, femur length, and tibia length. Clinical measures were assessed using clinically accepted methods. Frontal- and sagittal-plane digital images were captured and imported into a computer software program. Anatomic landmarks were digitized using an electromagnetic tracking system to calculate static LEA. Intraclass correlation coefficients and standard errors of measurement were calculated to examine tester reliability. We calculated 95% limits of agreement and used Bland-Altman plots to examine agreement among clinical measures, digital photographs, and an electromagnetic tracking system. Using digital photographs, fair to excellent intratester (intraclass correlation coefficient range = 0.70-0.99) and intertester (intraclass correlation coefficient range = 0.75-0.97) reliability were observed for static knee alignment and limb-length measures. An acceptable level of agreement was observed between clinical measures and digital pictures for limb-length measures. When comparing clinical measures and digital photographs with the electromagnetic tracking system, an acceptable level of agreement was observed in measures of static knee angles and limb-length measures. The use of digital photographs and an electromagnetic tracking system appears to be an efficient and reliable method to assess static knee alignment and limb-length measurements.
40 CFR 60.704 - Test methods and procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... and hydrogen. (iii) Method 4 to measure the content of water vapor. (3) The volumetric flow rate shall... part) if published values are not available or cannot be calculated. Bws=Water vapor content of the...
40 CFR 60.704 - Test methods and procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... and hydrogen. (iii) Method 4 to measure the content of water vapor. (3) The volumetric flow rate shall... part) if published values are not available or cannot be calculated. Bws=Water vapor content of the...
New method of noncontact temperature measurement in on-line textile production
NASA Astrophysics Data System (ADS)
Cheng, Xianping; Song, Xing-Li; Deng, Xing-Zhong
1993-09-01
Under the conditions of textile production, the method of infrared non-contact temperature measurement is adopted in the heat-setting and drying heat-treatment processes. This method is used to monitor the moving cloth, and the temperature of the cloth is displayed rapidly and accurately. The principle of the temperature measurement is analysed theoretically in this paper, and mathematical analysis and calculation are used to introduce the signal transmission method. Using a combined software and hardware approach, the temperature is corrected and compensated with the aid of a single-chip microcomputer. The test results indicate that the temperature measurement instrument provides reliable parameters for quality control and is an important measure for improving product quality.
NASA Astrophysics Data System (ADS)
Park, J. Y.; Ramachandran, G.; Raynor, P. C.; Kim, S. W.
2011-10-01
Surface area was estimated by three different methods using number and/or mass concentrations obtained from either two or three instruments that are commonly used in the field. The estimated surface area concentrations were compared with reference surface area concentrations (SAREF) calculated from the particle size distributions obtained from a scanning mobility particle sizer and an optical particle counter (OPC). The first estimation method (SAPSD) used particle size distribution measured by a condensation particle counter (CPC) and an OPC. The second method (SAINV1) used an inversion routine based on PM1.0, PM2.5, and number concentrations to reconstruct assumed lognormal size distributions by minimizing the difference between measurements and calculated values. The third method (SAINV2) utilized a simpler inversion method that used PM1.0 and number concentrations to construct a lognormal size distribution with an assumed value of geometric standard deviation. All estimated surface area concentrations were calculated from the reconstructed size distributions. These methods were evaluated using particle measurements obtained in a restaurant, an aluminum die-casting factory, and a diesel engine laboratory. SAPSD was 0.7-1.8 times higher and SAINV1 and SAINV2 were 2.2-8 times higher than SAREF in the restaurant and diesel engine laboratory. In the die casting facility, all estimated surface area concentrations were lower than SAREF. However, the estimated surface area concentration using all three methods had qualitatively similar exposure trends and rankings to those using SAREF within a workplace. This study suggests that surface area concentration estimation based on particle size distribution (SAPSD) is a more accurate and convenient method to estimate surface area concentrations than estimation methods using inversion routines and may be feasible to use for classifying exposure groups and identifying exposure trends.
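The size-distribution route to surface area rests on the Hatch-Choate relation for a lognormal number distribution, mean(d²) = CMD²·exp(2·ln²GSD), assuming spherical particles; a sketch of that conversion (not the paper's inversion routines):

```python
import math

def lognormal_surface_area(number_conc_cm3, cmd_um, gsd):
    """Total particle surface area (um^2 per cm^3 of air) for a lognormal
    number distribution with count median diameter CMD (um) and geometric
    standard deviation GSD, via Hatch-Choate: <d^2> = CMD^2*exp(2*ln(GSD)^2).
    Spherical particles assumed."""
    mean_d2 = cmd_um**2 * math.exp(2.0 * math.log(gsd) ** 2)
    return number_conc_cm3 * math.pi * mean_d2

print(lognormal_surface_area(1.0e4, 0.1, 2.0))  # illustrative values
```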
Measurement Model and Precision Analysis of Accelerometers for Maglev Vibration Isolation Platforms.
Wu, Qianqian; Yue, Honghao; Liu, Rongqiang; Zhang, Xiaoyou; Ding, Liang; Liang, Tian; Deng, Zongquan
2015-08-14
High precision measurement of acceleration levels is required to allow active control for vibration isolation platforms. It is necessary to propose an accelerometer configuration measurement model that yields such a high measuring precision. In this paper, an accelerometer configuration to improve measurement accuracy is proposed. The corresponding calculation formulas of the angular acceleration were derived through theoretical analysis. A method is presented to minimize angular acceleration noise based on analysis of the root mean square noise of the angular acceleration. Moreover, the influence of installation position errors and accelerometer orientation errors on the calculation precision of the angular acceleration is studied. Comparisons of the output differences between the proposed configuration and the previous planar triangle configuration under the same installation errors are conducted by simulation. The simulation results show that installation errors have a relatively small impact on the calculation accuracy of the proposed configuration. To further verify the high calculation precision of the proposed configuration, experiments are carried out for both the proposed configuration and the planar triangle configuration. On the basis of the results of simulations and experiments, it can be concluded that the proposed configuration has higher angular acceleration calculation precision and can be applied to different platforms.
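The rigid-body relation behind such accelerometer configurations, in its simplest one-axis form: two accelerometers a distance d apart, sensing along the same direction perpendicular to the line joining them, give the angular acceleration as their difference over d once centripetal (omega-squared) terms are negligible or removed. This deliberately reduced sketch is illustrative and is not the paper's full multi-sensor configuration:

```python
def angular_acceleration(a1, a2, d):
    """Angular acceleration (rad/s^2) about the axis perpendicular to the
    line joining two accelerometers separated by d (m), both sensing along
    the same direction; valid when centripetal terms are negligible."""
    return (a1 - a2) / d

print(angular_acceleration(a1=0.0102, a2=0.0098, d=0.5))  # m/s^2 inputs
```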
Mizumura, Sunao; Nishikawa, Kazuhiro; Murata, Akihiro; Yoshimura, Kosei; Ishii, Nobutomo; Kokubo, Tadashi; Morooka, Miyako; Kajiyama, Akiko; Terahara, Atsuro
2018-05-01
In Japan, the Southampton method for dopamine transporter (DAT) SPECT is widely used to quantitatively evaluate striatal radioactivity. The specific binding ratio (SBR) is the ratio of specific to non-specific binding observed after placing pentagonal striatal voxels of interest (VOIs) as references. Although the method can reduce the partial volume effect, the SBR may fluctuate due to the presence of low-count areas of cerebrospinal fluid (CSF), caused by brain atrophy, in the striatal VOIs. We examined the effect of the exclusion of low-count VOIs on SBR measurement. We retrospectively reviewed DAT imaging of 36 patients with parkinsonian syndromes performed after injection of 123I-FP-CIT. SPECT data were reconstructed using three conditions. We defined the CSF area in each SPECT image after segmenting the brain tissues. A merged image of gray and white matter images was constructed from each patient's magnetic resonance imaging (MRI) to create an idealized brain image that excluded the CSF fraction (MRI-mask method). We calculated the SBR and asymmetric index (AI) with the MRI-mask method for each reconstruction condition. We then calculated the mean and standard deviation (SD) of voxel RI counts in the reference VOI without the striatal VOIs in each image, and determined the SBR by excluding the low-count pixels (threshold method) using five thresholds: mean-0.0SD, mean-0.5SD, mean-1.0SD, mean-1.5SD, and mean-2.0SD. We also calculated the AIs from the SBRs measured using the threshold method. We examined the correlation among the SBRs of the threshold method, between the uncorrected SBRs and the SBRs of the MRI-mask method, and between the uncorrected AIs and the AIs of the MRI-mask method. The intraclass correlation coefficient indicated an extremely high correlation among the SBRs and among the AIs of the MRI-mask and threshold methods at thresholds between mean-2.0SD and mean-1.0SD, regardless of the reconstruction correction. The differences among the SBRs and the AIs of the two methods were smallest at thresholds between mean-2.0SD and mean-1.0SD. The SBR calculated using the threshold method was highly correlated with the MRI-SBR. These results suggest that the CSF correction of the threshold method is effective for the calculation of idealized SBR and AI values.
New approach to CT pixel-based photon dose calculations in heterogeneous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, J.W.; Henkelman, R.M.
The effects of small cavities on dose in water and the dose in a homogeneous non-unit-density medium illustrate that inhomogeneities do not act independently in photon dose perturbation, and serve as two constraints which should be satisfied by approximate methods of computed tomography (CT) pixel-based dose calculation. Current methods at best satisfy only one of the two constraints and show inadequacies in some intermediate geometries. We have developed an approximate method that satisfies both constraints and treats much of the synergistic effect of multiple inhomogeneities correctly. The method calculates primary and first-scatter doses by first-order ray tracing, with the first-scatter contribution augmented by a component of second scatter that behaves like first scatter. Multiple-scatter dose perturbation values extracted from small-cavity experiments are used in a function which approximates the small residual multiple-scatter dose. For the wide range of geometries tested, our method agrees very well with measurements: the average deviation is less than 2%, with a maximum of 3%. In comparison, calculations based on existing methods can have errors larger than 10%.
NASA Technical Reports Server (NTRS)
Hughes, D. L.; Ray, R. J.; Walton, J. T.
1985-01-01
The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
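An influence coefficient here is the percent change in computed net thrust per one-percent change in an input; a generic finite-difference sketch, where thrust_fn and the dict-based interface are illustrative assumptions rather than the flight-test code:

```python
def influence_coefficients(thrust_fn, inputs, delta=0.01):
    """Percent change in net thrust per 1% change in each input parameter,
    by one-sided finite differences. thrust_fn maps an input dict to thrust."""
    f0 = thrust_fn(inputs)
    coeffs = {}
    for name, value in inputs.items():
        perturbed = dict(inputs, **{name: value * (1.0 + delta)})
        coeffs[name] = (thrust_fn(perturbed) - f0) / f0 / delta
    return coeffs
```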
Validation of doubly labeled water method using a ruminant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fancy, S.G.; Blanchard, J.M.; Holleman, D.F.
1986-07-01
CO2 production (CDP, ml CO2·g⁻¹·h⁻¹) by captive caribou and reindeer (Rangifer tarandus) was measured using the doubly labeled water method (³H₂O and H₂¹⁸O) and compared with CO2 expiration rates (VCO2), adjusted for CO2 losses in CH4 and urine, as determined by open-circuit respirometry. CDP calculated from samples of blood or urine from a reindeer in winter was 1-3% higher than the adjusted VCO2. Differences of 5-20% between values derived by the two methods were found in summer trials with caribou. None of these differences were statistically significant (P > 0.05). The differences in summer could in part be explained by the net deposition of ³H, ¹⁸O, and unlabeled CO2 in antlers and other growing tissues. Total body water volumes calculated from ³H₂O dilution were up to 15% higher than those calculated from H₂¹⁸O dilution. The doubly labeled water method appears to be a reasonably accurate method for measuring CDP by caribou and reindeer in winter when growth rates are low, but the method may overestimate CDP by rapidly growing and/or fattening animals.
Neutron apparatus for measuring strain in composites
Kupperman, David S.; Majumdar, Saurindranath; Faber, Jr., John F.; Singh, J. P.
1990-01-01
A method and apparatus for orienting a pulsed neutron source and a multi-angle diffractometer toward a sample of a ceramic-matrix or metal-matrix composite so that the measurement of internal strain (from which stress is calculated) is reduced to uncomplicated time-of-flight measurements.
Uncertainty Analysis in Humidity Measurements by the Psychrometer Method
Chen, Jiunyuan; Chen, Chiachung
2017-01-01
The most common and cheap indirect technique to measure relative humidity is to use a psychrometer based on a dry and a wet temperature sensor. In this study, the measurement uncertainty of relative humidity was evaluated for this indirect method with several empirical equations for calculating relative humidity. Among the six equations tested, the Penman equation had the best predictive ability for the dry bulb temperature range of 15–50 °C. At a fixed dry bulb temperature, an increase in the wet bulb depression increased the error. A new equation for the psychrometer constant was established by regression analysis; this equation can be evaluated with a hand calculator. The average predictive error of relative humidity was <0.1% with this new equation. The effect of the accuracy of the dry and wet bulb temperatures on the measurement uncertainty of relative humidity was evaluated, and numeric values of the measurement uncertainty were computed for various conditions. The uncertainty of the wet bulb temperature was the main factor in the RH measurement uncertainty. PMID:28216599
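A common form of the indirect computation: a Magnus-type saturation vapor pressure combined with the psychrometric equation e = es(Twet) − gamma·P·(Tdry − Twet). The Magnus coefficients and the gamma value below are standard textbook choices, not necessarily the Penman formulation the study favors:

```python
import math

def e_sat_kpa(t_c):
    """Magnus-type saturation vapor pressure (kPa) over water."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def relative_humidity(t_dry_c, t_wet_c, p_kpa=101.325, gamma=6.62e-4):
    """RH (%) from the psychrometric equation
    e = es(Twet) - gamma * P * (Tdry - Twet)."""
    e = e_sat_kpa(t_wet_c) - gamma * p_kpa * (t_dry_c - t_wet_c)
    return 100.0 * e / e_sat_kpa(t_dry_c)

print(relative_humidity(25.0, 20.0))  # ~63% for a 5 degC wet bulb depression
```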
A simple method of calculating Stirling engines for engine design optimization
NASA Technical Reports Server (NTRS)
Martini, W. R.
1978-01-01
A calculation method is presented for a rhombic drive Stirling engine with a tubular heater and cooler and a screen type regenerator. Generally the equations presented describe power generation and consumption and heat losses. It is the simplest type of analysis that takes into account the conflicting requirements inherent in Stirling engine design. The method itemizes the power and heat losses for intelligent engine optimization. The results of engine analysis of the GPU-3 Stirling engine are compared with more complicated engine analysis and with engine measurements.
Hmiel, A.; Winey, J. M.; Gupta, Y. M.; ...
2016-05-23
Accurate theoretical calculations of the nonlinear elastic response of strong solids (e.g., diamond) constitute a fundamental and important scientific need for understanding the response of such materials and for exploring the potential synthesis and design of novel solids. However, without corresponding experimental data, it is difficult to select between predictions from different theoretical methods. Recently the complete set of third-order elastic constants (TOECs) for diamond was determined experimentally, and the validity of various theoretical approaches to calculate the same may now be assessed. We report on the use of density functional theory (DFT) methods to calculate the six third-order elastic constants of diamond. Two different approaches based on homogeneous deformations were used: (1) an energy-strain fitting approach using a prescribed set of deformations, and (2) a longitudinal stress-strain fitting approach using uniaxial compressive strains along the [100], [110], and [111] directions, together with calculated pressure derivatives of the second-order elastic constants. The latter approach provides a direct comparison to the experimental results. The TOECs calculated using the energy-strain approach differ significantly from the measured TOECs. In contrast, calculations using the longitudinal stress-uniaxial strain approach show good agreement with the measured TOECs and match the experimental values significantly better than the TOECs reported in previous theoretical studies. Lastly, our results on diamond demonstrate that, with proper analysis procedures, first-principles calculations can indeed be used to accurately calculate the TOECs of strong solids.
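The energy-strain approach reduces, for each deformation mode, to fitting the strain-energy density u(eta) = u0 + (1/2)·C2·eta² + (1/6)·C3·eta³ and reading the effective constants off the polynomial coefficients; a one-mode sketch (real TOEC extraction combines several prescribed deformation modes):

```python
import numpy as np

def fit_second_third_order(strain, energy_density):
    """Fit u(eta) = u0 + (1/2)*C2*eta^2 + (1/6)*C3*eta^3 for a single
    deformation mode. Returns the effective second- and third-order
    constants (mode-specific combinations of Cij and Cijk).
    The linear coefficient should be ~0 at the equilibrium geometry."""
    c3, c2, _c1, _c0 = np.polyfit(strain, energy_density, 3)
    return 2.0 * c2, 6.0 * c3   # C2 from quadratic, C3 from cubic coefficient
```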
NASA Technical Reports Server (NTRS)
Carder, K. L.; Lee, Z. P.; Marra, John; Steward, R. G.; Perry, M. J.
1995-01-01
The quantum yield of photosynthesis (mol C/mol photons) was calculated at six depths for the waters of the Marine Light-Mixed Layer (MLML) cruise of May 1991. As there were photosynthetically available radiation (PAR) measurements but no spectral irradiance measurements for the primary production incubations, three ways of calculating the photons absorbed (AP) by phytoplankton, for the purpose of calculating phi, are presented here. The first is based on a simple, nonspectral model; the second is based on a nonlinear regression using measured PAR values with depth; and the third is derived from remote sensing measurements. We show that the results for phi calculated using the nonlinear regression method and those using remote sensing are in good agreement with each other, and are consistent with values reported in other studies. In deep waters, however, the simple nonspectral model may produce quantum yield values much higher than theoretically possible.
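The regression route above amounts to fitting PAR(z) = PAR0·exp(−Kd·z) and scaling the local PAR by the phytoplankton absorption coefficient; a broadband sketch under those simplifications (the single-exponential decay and the variable names are assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_kd(z_m, par):
    """Fit PAR(z) = PAR0 * exp(-Kd * z) to a measured PAR-depth profile."""
    f = lambda z, par0, kd: par0 * np.exp(-kd * z)
    (par0, kd), _ = curve_fit(f, z_m, par, p0=[par[0], 0.1])
    return par0, kd

def absorbed_photons(par_z, a_ph):
    """Photons absorbed by phytoplankton per unit volume: local PAR times
    the phytoplankton absorption coefficient a_ph (m^-1); broadband sketch."""
    return par_z * a_ph
```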
NASA Technical Reports Server (NTRS)
Rogge, Matthew D. (Inventor); Moore, Jason P. (Inventor)
2014-01-01
Shape of a multi-core optical fiber is determined by positioning the fiber in an arbitrary initial shape and measuring strain over the fiber's length using strain sensors. A three-coordinate p-vector is defined for each core as a function of the distance of the corresponding cores from a center point of the fiber and a bending angle of the cores. The method includes calculating, via a controller, an applied strain value of the fiber using the p-vector and the measured strain for each core, and calculating strain due to bending as a function of the measured and the applied strain values. Additionally, an apparent local curvature vector is defined for each core as a function of the calculated strain due to bending. Curvature and bend direction are calculated using the apparent local curvature vector, and fiber shape is determined via the controller using the calculated curvature and bend direction.
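A much-simplified sketch of one step of such a shape calculation appears below: recovering a curvature magnitude and bend direction from the strains of three outer cores spaced 120 degrees apart. The geometry, strain values, and least-squares formulation are illustrative assumptions, not the patented algorithm.

```python
import numpy as np

# For a bend of curvature kappa toward direction theta_b, a core at radius r
# and angular position theta_i sees strain
#   eps_i = eps_common - kappa * r * cos(theta_i - theta_b).
r = 35e-6                                    # core offset from fiber axis, m
theta = np.deg2rad([0.0, 120.0, 240.0])      # core angular positions
eps = np.array([120e-6, -40e-6, -20e-6])     # measured strains (placeholders)

d = eps - eps.mean()                         # remove common axial/thermal strain
# d_i = -r*(cos(theta_i)*kx + sin(theta_i)*ky), kx = kappa*cos(theta_b), etc.
A = -r * np.column_stack([np.cos(theta), np.sin(theta)])
(kx, ky), *_ = np.linalg.lstsq(A, d, rcond=None)
kappa = float(np.hypot(kx, ky))              # curvature magnitude, 1/m
theta_b = float(np.arctan2(ky, kx))          # bend direction, rad
print(kappa, np.rad2deg(theta_b))
```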
NASA Astrophysics Data System (ADS)
Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2015-08-01
The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to the same volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurements at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to 99.3% with 3%/3 mm and from 79.2% to 95.2% with 2%/2 mm when compared with the CC13 beam model. These results show the effectiveness of the proposed method. Less inter-user variability can be expected of the final beam model. It is also found that the method can be easily integrated into model-based TPSs.
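A toy version of the reoptimization loop is sketched below. The error-function edge profile, Gaussian detector kernel, and single penumbra parameter are assumptions standing in for the real beam model; the point is only that matching after convolution recovers the unconvolved (real) profile.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import erf

x = np.linspace(-30.0, 30.0, 601)                   # off-axis position, mm

def model_profile(sigma):
    """Idealized 20 mm-wide field profile with penumbra width sigma."""
    return 0.5 * (erf((x + 10.0) / sigma) - erf((x - 10.0) / sigma))

# Detector response of a finite-volume chamber, modeled as a Gaussian kernel.
kernel = np.exp(-x**2 / (2.0 * 2.0**2))
kernel /= kernel.sum()

# Stand-in "measurement": the true profile (sigma = 3) blurred by the chamber.
measured = np.convolve(model_profile(3.0), kernel, mode="same")

def cost(sigma):
    # Convolve the calculated profile with the same response before comparing.
    conv = np.convolve(model_profile(sigma), kernel, mode="same")
    return np.sum((conv - measured) ** 2)

best = minimize_scalar(cost, bounds=(1.0, 8.0), method="bounded")
print("recovered penumbra parameter:", round(best.x, 2))   # ~3.0
```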
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cebe, M; Pacaci, P; Mabhouti, H
Purpose: In this study, the two available calculation algorithms of the Varian Eclipse treatment planning system (TPS), the electron Monte Carlo (eMC) and Generalized Gaussian Pencil Beam (GGPB) algorithms, were used to compare measured and calculated peripheral dose distributions of electron beams. Methods: Peripheral dose measurements were carried out for the 6, 9, 12, 15, 18, and 22 MeV electron beams of a Varian Trilogy machine using a parallel plate ionization chamber and EBT3 films in a slab phantom. Measurements were performed for 6×6, 10×10, and 25×25 cm² cone sizes at the dmax of each energy, up to 20 cm beyond the field edges. Using the same film batch, the net OD to dose calibration curve was obtained for each energy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. Dose distributions measured using the parallel plate ionization chamber and EBT3 film and calculated by the eMC and GGPB algorithms were compared. The measured and calculated data were then compared to find which algorithm calculates the peripheral dose distribution more accurately. Results: The agreement between measurement and eMC was better than that for GGPB. The TPS underestimated the out-of-field doses, and the difference between measured and calculated doses increases with cone size. The largest deviation between calculated and parallel plate ionization chamber measured doses is less than 4.93% for eMC, but it can increase up to 7.51% for GGPB. For the film measurements, the minimum gamma analysis passing rates between measured and calculated dose distributions were 98.2% and 92.7% for eMC and GGPB, respectively, for all field sizes and energies. Conclusion: Our results show that the Monte Carlo algorithm for electron planning in Eclipse is more accurate than previous algorithms for peripheral dose distributions. It must be emphasized that the use of GGPB for planning large-field treatments with 6 MeV could lead to inaccuracies of clinical significance.
van der Heijden, R T; Heijnen, J J; Hellinga, C; Romein, B; Luyben, K C
1994-01-05
Measurements provide the basis for process monitoring and control as well as for model development and validation. Systematic approaches to increase the accuracy and credibility of the empirical data set are therefore of great value. In (bio)chemical conversions, linear conservation relations, such as the balance equations for charge, enthalpy, and/or chemical elements, can be employed to relate conversion rates. In a practical situation, some of these rates will be measured (in effect, be calculated directly from primary measurements of, e.g., concentrations and flow rates), while others may or may not be calculable from the measured ones. When certain measured rates can also be calculated from other measured rates, the set of equations is redundant, and the accuracy and credibility of the measured rates can indeed be improved by, respectively, balancing and gross error diagnosis. The balanced conversion rates are more accurate and form a consistent set of data, which is more suitable for further application (e.g., to calculate nonmeasured rates) than the raw measurements. Such an approach has drawn attention in previous studies. The current study deals mainly with the problem of mathematically classifying the conversion rates into balanceable and calculable rates, given the subset of measured rates. The significance of this problem is illustrated with some examples. It is shown that a simple matrix equation can be derived that contains the vector of measured conversion rates and the redundancy matrix R. Matrix R plays a predominant role in the classification problem. In supplementary articles, the significance of the redundancy matrix R for an improved gross error diagnosis approach will be shown. In addition, efficient equations have been derived to calculate the balanceable and/or calculable rates. The method is based entirely on matrix algebra (fundamentally different from the graph-theoretical approach), and it is easily implemented in a computer program. (c) 1994 John Wiley & Sons, Inc.
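The classification step lends itself to a compact linear-algebra sketch. The toy balance matrix below is invented; the projection used to form R assumes the formulation in which eliminating the unmeasured rates from E x = 0 leaves R x_m = 0.

```python
import numpy as np

# Conservation relations E @ x = 0; columns of E are conversion rates.
E = np.array([[1.0, 0.0, -1.0, -1.0],
              [0.0, 2.0, -1.0,  1.0],
              [4.0, 0.0, -2.0,  0.0]])
measured, unmeasured = [0, 1, 2], [3]
Em, Eu = E[:, measured], E[:, unmeasured]

# Eliminate the unmeasured rates: projecting onto the orthogonal complement
# of range(Eu) leaves the redundancy relations R @ x_m = 0.
R = (np.eye(E.shape[0]) - Eu @ np.linalg.pinv(Eu)) @ Em
print("redundancy (rank of R):", np.linalg.matrix_rank(R))

# A measured rate is balanceable if its column of R is nonzero; the
# unmeasured rate is calculable here because Eu has full column rank.
balanceable = [measured[j] for j in range(len(measured))
               if not np.allclose(R[:, j], 0.0)]
print("balanceable measured rates:", balanceable)
```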
Positron scattering from pyridine
NASA Astrophysics Data System (ADS)
Stevens, D.; Babij, T. J.; Machacek, J. R.; Buckman, S. J.; Brunger, M. J.; White, R. D.; García, G.; Blanco, F.; Ellis-Gibbings, L.; Sullivan, J. P.
2018-04-01
We present a range of cross section measurements for the low-energy scattering of positrons from pyridine, for incident positron energies below 20 eV, together with calculations of positron scattering from pyridine using the independent atom model with the screening-corrected additivity rule including interference effects, with dipole rotational excitations accounted for using the Born approximation. Comparisons are made between the experimental measurements and theoretical calculations. For the positronium formation cross section, we also compare with results from a recent empirical model. In general, quite good agreement is seen between the calculations and measurements, although some discrepancies remain that may require further investigation. It is hoped that the present study will stimulate the development of ab initio theoretical methods for this important scattering system.
Theoretical and experimental power from large horizontal-axis wind turbines
NASA Technical Reports Server (NTRS)
Viterna, L. A.; Janetzke, D. C.
1982-01-01
A method for calculating the output power from large horizontal-axis wind turbines is presented. Modifications to the airfoil characteristics and the momentum portion of classical blade element-momentum theory are given that improve correlation with measured data. Improvement is particularly evident at low tip-speed ratios where aerodynamic stall can occur as the blade experiences high angles of attack. Output power calculated using the modified theory is compared with measured data for several large wind turbines. These wind turbines range in size from the DOE/NASA 100 kW Mod-0 (38 m rotor diameter) to the 2000 kW Mod-1 (61 m rotor diameter). The calculated results are in good agreement with measured data from these machines.
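The airfoil-characteristics modification referenced above is commonly realized as a post-stall extrapolation of lift and drag; a sketch in that spirit, with placeholder stall values and not necessarily the exact relations used in the report, is:

```python
import numpy as np

# Viterna-style post-stall extrapolation of airfoil coefficients, of the
# kind used to extend airfoil data for blade element-momentum (BEM) power
# predictions. AR and the stall-point values below are placeholders.
AR = 14.0                          # blade aspect ratio
alpha_s = np.deg2rad(15.0)         # stall angle of attack
CL_s, CD_s = 1.2, 0.02             # lift/drag coefficients at stall

CD_max = 1.11 + 0.018 * AR
B1 = CD_max
B2 = (CD_s - CD_max * np.sin(alpha_s) ** 2) / np.cos(alpha_s)
A1 = B1 / 2.0
A2 = ((CL_s - CD_max * np.sin(alpha_s) * np.cos(alpha_s))
      * np.sin(alpha_s) / np.cos(alpha_s) ** 2)

def post_stall(alpha):
    """CL, CD for stall < alpha <= 90 deg."""
    CL = A1 * np.sin(2.0 * alpha) + A2 * np.cos(alpha) ** 2 / np.sin(alpha)
    CD = B1 * np.sin(alpha) ** 2 + B2 * np.cos(alpha)
    return CL, CD

print(post_stall(np.deg2rad(30.0)))
```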
Fatigue crack growth under general-yielding cyclic-loading
NASA Technical Reports Server (NTRS)
Minzhong, Z.; Liu, H. W.
1986-01-01
In low cycle fatigue, cracks are initiated and propagated under general yielding cyclic loading. For general yielding cyclic loading, Dowling and Begley have shown that the fatigue crack growth rate correlates well with the measured delta J. The correlation of da/dN with delta J was also studied by a number of other investigators. However, none of these studies correlated da/dN with delta J calculated specifically for the test specimens. Solomon measured fatigue crack growth in specimens under general yielding cyclic loading. The crack-tip fields for Solomon's specimens are calculated using the finite element method, and the delta J values of Solomon's tests are evaluated. The measured crack growth rate in Solomon's specimens correlates very well with the calculated delta J.
Automated Speech Rate Measurement in Dysarthria
ERIC Educational Resources Information Center
Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc
2015-01-01
Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. Method: The new algorithm was trained and tested using Dutch…
Code of Federal Regulations, 2013 CFR
2013-07-01
... must be determined from a source recognized by the industry (such as the load's manufacturer), or by a calculation method recognized by the industry (such as calculating a steel beam from measured dimensions and a...) Swinging locomotive cranes. A locomotive crane must not be swung into a position where railway cars on an...
Code of Federal Regulations, 2012 CFR
2012-07-01
... must be determined from a source recognized by the industry (such as the load's manufacturer), or by a calculation method recognized by the industry (such as calculating a steel beam from measured dimensions and a...) Swinging locomotive cranes. A locomotive crane must not be swung into a position where railway cars on an...
Code of Federal Regulations, 2014 CFR
2014-07-01
... must be determined from a source recognized by the industry (such as the load's manufacturer), or by a calculation method recognized by the industry (such as calculating a steel beam from measured dimensions and a...) Swinging locomotive cranes. A locomotive crane must not be swung into a position where railway cars on an...
Semi-automatic system for ultrasonic measurement of texture
Thompson, R. Bruce; Wormley, Samuel J.
1991-09-17
A means and method are disclosed for the ultrasonic measurement of texture, non-destructively and efficiently. Texture characteristics are derived by transmitting ultrasound energy into the material, measuring the time it takes to be received by ultrasound receiving means, and calculating the velocity of the ultrasound energy from the timed measurements. Texture characteristics can then be derived from the velocity calculations. One or more sets of ultrasound transmitters and receivers are utilized to derive velocity measurements in different angular orientations through the material and in different ultrasound modes. An ultrasound transmitter is utilized to direct ultrasound energy into the material, and one or more ultrasound receivers are utilized to receive the same. The receivers are at a predetermined fixed distance from the transmitter. A control means is utilized to control transmission of the ultrasound, and a processing means derives the timing, the calculation of velocity, and the derivation of texture characteristics.
Measuring Plant Water Status: A Simple Method for Investigative Laboratories.
ERIC Educational Resources Information Center
Mansfield, Donald H.; Anderson, Jay E.
1980-01-01
Describes a method suitable for quantitative studies of plant water status conducted by high school or college students and the calculation of the relative water content (RWC) of a plant. Materials, methods, procedures, and results are discussed, with sample data figures provided. (CS)
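The RWC referred to here has a standard definition in plant physiology, computed from fresh, turgid, and dry masses; the form below is the conventional one and is assumed here, since the abstract does not reproduce it:

```latex
\mathrm{RWC}\,(\%) \;=\; \frac{W_{\text{fresh}} - W_{\text{dry}}}{W_{\text{turgid}} - W_{\text{dry}}} \times 100
```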
Mercier, J R; Kopp, D T; McDavid, W D; Dove, S B; Lancaster, J L; Tucker, D M
2000-10-01
Two methods for determining ion chamber calibration factors (Nx) are presented for polychromatic tungsten x-ray beams whose spectra differ from beams with known Nx. Both methods take advantage of known x-ray fluence and kerma spectral distributions. In the first method, the x-ray tube potential is unchanged and spectra of differing filtration are measured. A primary standard ion chamber with known Nx for one beam is used to calculate the x-ray fluence spectrum of a second beam. Accurate air energy absorption coefficients are applied to the x-ray fluence spectra of the second beam to calculate actual air kerma and Nx. In the second method, two beams of differing tube potential and filtration with known Nx are used to bracket a beam of unknown Nx. A heuristically derived Nx interpolation scheme based on spectral characteristics of all three beams is described. Both methods are validated. Both methods improve accuracy over the current half value layer Nx estimating technique.
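Both methods hinge on weighting a fluence spectrum by photon energy and by air energy-absorption coefficients to obtain air kerma; a schematic version with entirely made-up spectra and coefficients (arbitrary units throughout) is:

```python
import numpy as np

# Air kerma from a fluence spectrum: K = sum over E of phi(E)*E*(mu_en/rho)(E).
E = np.arange(10.0, 121.0, 2.0)                 # photon energy, keV
phi = np.exp(-(((E - 55.0) / 20.0) ** 2))       # relative fluence (placeholder)
mu_en_rho = 4.0 * (30.0 / E) ** 2.6 + 0.02      # rough spectral shape only

K = np.sum(phi * E * mu_en_rho)                 # air kerma, arbitrary units
M = 2.5                                         # chamber reading, arbitrary units
Nx = K / M                                      # calibration-factor analogue
print(round(K, 1), round(Nx, 1))
```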
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H; Guerrero, M; Prado, K
Purpose: Building up a TG-71 based electron monitor-unit (MU) calculation protocol usually involves extensive measurements. This work investigates a minimum data set of measurements and its calculation accuracy and measurement time. Methods: For the 6, 9, 12, 16, and 20 MeV beams of our Varian Clinac-Series linear accelerators, complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20, and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15, and 20 cm, up to applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors, and effective SSDs that were then converted to air-gap factors for SSDs of 99–110 cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine the minimum data set of measurements as a subset of the complete measurements. The “missing” data excluded from the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MU using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements for each electron energy; more PDDs and fewer point measurements are generally needed as energy increases. Using less than 50% of the complete measurement time, the minimum data set generates acceptable MU calculation results compared to those with the complete data set. The PDD difference is within 1 mm and the calculated MU difference is less than 1.5%. Conclusion: Data set measurement for TG-71 electron MU calculations can be minimized based on knowledge of how each dosimetric quantity depends on various setup parameters. The suggested minimum data set allows acceptable MU calculation accuracy and shortens measurement time by a few hours.
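The MU calculation itself reduces to dividing the prescribed dose by a product of measured factors; a schematic form with assumed factor names (not the full TG-71 formalism) is:

```python
# Sketch of a TG-71-style electron MU calculation under assumed factor
# names; the actual protocol's factors and reference conditions are more
# detailed than shown here.
def electron_mu(dose_cGy, dose_rate_ref=1.0, pdd=0.95,
                applicator_factor=1.0, cutout_factor=0.98,
                air_gap_factor=0.99):
    """MU = D / (D'_ref * PDD * Se_applicator * Se_cutout * f_air_gap).

    dose_rate_ref: reference output in cGy/MU at calibration conditions.
    pdd: fractional depth dose at the prescription depth.
    """
    return dose_cGy / (dose_rate_ref * pdd * applicator_factor
                       * cutout_factor * air_gap_factor)

print(round(electron_mu(200.0), 1))   # MU for a 200 cGy prescription
```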
Calculation of unsteady airfoil loads with and without flap deflection at -90 degrees incidence
NASA Technical Reports Server (NTRS)
Stremel, Paul M.
1991-01-01
A method has been developed for calculating the viscous flow about airfoils with and without deflected flaps at -90 deg incidence. This unique method provides for the direct solution of the incompressible Navier-Stokes equations by means of a fully coupled implicit technique. The solution is calculated on a body-fitted computational mesh incorporating a staggered grid method. The vorticity is determined at the node points, and the velocity components are defined at the mesh-cell sides. The staggered-grid orientation provides for accurate representation of vorticity at the node points and for the conservation of mass at the mesh-cell centers. The method provides for the direct solution of the flow field and satisfies the conservation of mass to machine zero at each time step. The results of the present analysis and experimental results obtained for an XV-15 airfoil are compared. The comparisons indicate that the calculated drag reduction caused by flap deflection and the calculated average surface pressure are in excellent agreement with the measured results. Comparisons of the numerical results of the present method for several airfoils demonstrate the significant influence of airfoil curvature and flap deflection on the predicted download.
An algorithm to estimate unsteady and quasi-steady pressure fields from velocity field measurements.
Dabiri, John O; Bose, Sanjeeb; Gemmell, Brad J; Colin, Sean P; Costello, John H
2014-02-01
We describe and characterize a method for estimating the pressure field corresponding to velocity field measurements such as those obtained by using particle image velocimetry. The pressure gradient is estimated from a time series of velocity fields for unsteady calculations or from a single velocity field for quasi-steady calculations. The corresponding pressure field is determined based on median polling of several integration paths through the pressure gradient field in order to reduce the effect of measurement errors that accumulate along individual integration paths. Integration paths are restricted to the nodes of the measured velocity field, thereby eliminating the need for measurement interpolation during this step and significantly reducing the computational cost of the algorithm relative to previous approaches. The method is validated by using numerically simulated flow past a stationary, two-dimensional bluff body and a computational model of a three-dimensional, self-propelled anguilliform swimmer to study the effects of spatial and temporal resolution, domain size, signal-to-noise ratio and out-of-plane effects. Particle image velocimetry measurements of a freely swimming jellyfish medusa and a freely swimming lamprey are analyzed using the method to demonstrate the efficacy of the approach when applied to empirical data.
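A one-dimensional caricature of the median-polling step is given below: several noisy realizations of the pressure gradient are each integrated, and the pointwise median suppresses path-accumulated error. The flow, noise level, and path construction are invented; the actual algorithm polls distinct spatial paths through a measured 2-D gradient field.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 101)
grad_true = 2.0 * np.pi * np.cos(2.0 * np.pi * x)    # dp/dx for p = sin(2*pi*x)
p_true = np.sin(2.0 * np.pi * x)

paths = []
for _ in range(9):
    grad = grad_true + rng.normal(0.0, 0.5, x.size)  # noisy gradient sample
    # Trapezoidal integration from the left boundary, where p = 0.
    p = np.concatenate(([0.0],
        np.cumsum(0.5 * (grad[1:] + grad[:-1]) * np.diff(x))))
    paths.append(p)

p_est = np.median(np.array(paths), axis=0)           # median polling
print("max error:", float(np.max(np.abs(p_est - p_true))))
```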
Traffic Data Quality Measurement : Final Report
DOT National Transportation Integrated Search
2004-09-15
One of the foremost recommendations from the FHWA sponsored workshops on Traffic Data Quality (TDQ) in 2003 was a call for "guidelines and standards for calculating data quality measures." These guidelines and standards are expected to contain method...
Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review
Morris, Tom; Gray, Laura
2017-01-01
Objectives To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting Any, not limited to healthcare settings. Participants Any taking part in an SW-CRT published up to March 2016. Primary and secondary outcome measures The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Results Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22–0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Conclusions Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. PMID:29146637
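The CV statistic reported above is simply the standard deviation of the cluster sizes divided by their mean; for example, with made-up sizes:

```python
import numpy as np

sizes = np.array([34, 51, 48, 102, 77, 29, 63])   # cluster sizes (illustrative)
cv = sizes.std(ddof=1) / sizes.mean()             # coefficient of variation
print(f"CV = {cv:.2f}")
```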
MTF measurements on real time for performance analysis of electro-optical systems
NASA Astrophysics Data System (ADS)
Stuchi, Jose Augusto; Signoreto Barbarini, Elisa; Vieira, Flavio Pascoal; dos Santos, Daniel, Jr.; Stefani, Mário Antonio; Yasuoka, Fatima Maria Mitsue; Castro Neto, Jarbas C.; Linhari Rodrigues, Evandro Luis
2012-06-01
The need for methods and tools that assist in determining the performance of optical systems is increasing. One of the most widely used methods for analyzing optical systems is to measure the Modulation Transfer Function (MTF), which represents a direct and quantitative verification of image quality. This paper presents the implementation of software to calculate the MTF of electro-optical systems. The software was used for calculating the MTF of a Digital Fundus Camera, a Thermal Imager, and an Ophthalmologic Surgery Microscope. The MTF information aids the analysis of alignment and the measurement of optical quality, and also defines the limiting resolution of optical systems. The results obtained with the Fundus Camera and Thermal Imager were compared with theoretical values. For the Microscope, the results were compared with the measured MTF of a Zeiss model microscope, which is the quality standard for ophthalmological microscopes.
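One widely used route to an MTF measurement, which software of this kind might implement, is differentiating an edge spread function and Fourier-transforming the result; the sketch below uses a synthetic edge, not data from the instruments above.

```python
import numpy as np

# ESF -> LSF -> MTF: differentiate an edge spread function to get the line
# spread function, then take the FFT magnitude normalized at zero frequency.
x = np.linspace(-1.0, 1.0, 512)                    # position, mm
esf = 0.5 * (1.0 + np.tanh(x / 0.05))              # synthetic blurred edge
lsf = np.gradient(esf, x)                          # LSF = d(ESF)/dx
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                      # normalize MTF(0) = 1
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])     # spatial freq, cycles/mm
print("50% MTF at ~", round(float(freqs[mtf < 0.5][0]), 1), "cycles/mm")
```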
NASA Astrophysics Data System (ADS)
Morozov, A. N.
2017-11-01
The article reviews the possibility of describing physical time as a random Poisson process. An equation is proposed that allows the intensity of physical time fluctuations to be calculated from the entropy production density of irreversible natural processes. Based on the standard solar model, the work calculates the entropy production density inside the Sun and the dependence of the intensity of physical time fluctuations on the distance to the centre of the Sun. A free model parameter has been established, and a method for its evaluation has been suggested. The calculations of the entropy production density inside the Sun showed that it differs by 2-3 orders of magnitude in different parts of the Sun. The intensity of physical time fluctuations on the Earth's surface, as a function of the entropy production density during the conversion of sunlight to the Earth's thermal radiation, has been theoretically predicted. A method for evaluating the Kullback's measure of voltage fluctuations in small amounts of electrolyte has been proposed. Using a simple model of heat transfer from the Earth's surface to the upper atmosphere, the effective temperature of the Earth's thermal radiation has been determined. A comparison between the theoretical values of the Kullback's measure derived from the fluctuating physical time model and the experimentally measured values of this measure for two independent electrolytic cells showed good qualitative and quantitative agreement between the theoretical model and the experimental data.
NASA Technical Reports Server (NTRS)
Baer-Riedhart, J. L.
1982-01-01
A simplified gross thrust calculation method was evaluated for its ability to predict the gross thrust of a modified J85-21 engine. The method uses tailpipe pressure data and ambient pressure data to predict the gross thrust, with an algorithm based on a one-dimensional analysis of the flow in the afterburner and nozzle. The test results showed that the method was notably accurate over the engine operating envelope, using thrust measured in the altitude facility for comparison. A summary of these results, the simplified gross thrust method and its requirements, and the test techniques used are discussed in this paper.
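At its core, such a one-dimensional method reduces to the standard gross-thrust relation F_g = mdot * V_e + (P_e - P_amb) * A_e evaluated from pressure-derived flow quantities; a sketch with placeholder (non-J85) numbers:

```python
# Gross thrust as momentum flux plus pressure thrust at the nozzle exit.
# All values are illustrative placeholders, not engine data.
mdot = 20.0        # exhaust mass flow, kg/s
V_e = 600.0        # nozzle exit velocity, m/s
P_e = 1.05e5       # nozzle exit static pressure, Pa
P_amb = 1.0e5      # ambient pressure, Pa
A_e = 0.2          # nozzle exit area, m^2

F_g = mdot * V_e + (P_e - P_amb) * A_e
print(f"gross thrust = {F_g / 1000:.1f} kN")
```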
Customer loads of two-wheeled vehicles
NASA Astrophysics Data System (ADS)
Gorges, C.; Öztürk, K.; Liebich, R.
2017-12-01
Customer usage profiles are among the most unknown influences on vehicle design targets, and they play an important role in durability analysis. This publication presents a customer load acquisition system for two-wheeled vehicles that utilises the vehicle's onboard signals. A road slope estimator was developed to reveal the unknown slope resistance force with the help of a linear Kalman filter. Furthermore, an automated mass estimator was developed to account for the actual vehicle loading; the mass estimation is performed by an extended Kalman filter. Finally, a model-based wheel force calculation was derived, based on the superposition of forces calculated from measured onboard signals. The calculated wheel forces were validated against measurements from wheel-load transducers through the comparison of rainflow matrices, and they agree with the measured wheel forces both qualitatively and quantitatively. The proposed methods can be used to gather field data for improved vehicle design loads.
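As a flavor of the slope estimator, the sketch below runs a scalar Kalman filter on a synthetic longitudinal-dynamics residual; the small-angle measurement model, noise levels, and random-walk state model are assumptions, not the paper's identified values.

```python
import numpy as np

g = 9.81
q, r = 1e-6, 0.05          # process and measurement noise variances
slope_hat, P = 0.0, 1.0    # state: road grade (small-angle, rad)

rng = np.random.default_rng(1)
true_slope = 0.05          # ~2.9 deg uphill
for _ in range(2000):
    # Measurement: residual acceleration a = g*slope + noise, where the
    # residual is drive force/mass minus measured acceleration and drag.
    z = g * true_slope + rng.normal(0.0, np.sqrt(r))
    P += q                                  # predict (random-walk slope)
    K = P * g / (g * P * g + r)             # Kalman gain for H = g
    slope_hat += K * (z - g * slope_hat)    # update
    P *= (1.0 - K * g)

print(np.rad2deg(slope_hat))                # ~2.9 degrees
```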
ANALYSIS OF THE MOMENTS METHOD EXPERIMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kloster, R.L.
1959-09-01
Monte Carlo calculations show the effects of a plane water-air boundary on both fast neutron and gamma dose rates. A multigroup diffusion theory calculation for a reactor source shows the effects of a plane water-air boundary on the thermal neutron dose rate. The results of the Monte Carlo and multigroup calculations are compared with experimental values. The predicted boundary effect for fast neutrons of 7.3% agrees within 16% with the measured effect of 6.3%. The gamma detector did not measure a boundary effect because it lacked sensitivity at low energies. However, the effect predicted for gamma rays, 5 to 10%, is as large as that for neutrons. An estimate of the boundary effect for thermal neutrons from a PoBe source is obtained from the results of multigroup diffusion theory calculations for a reactor source. The calculated boundary effect agrees within 13% with the measured values. (auth)
NASA Astrophysics Data System (ADS)
Subashchandrabose, S.; Ramesh Babu, N.; Saleem, H.; Syed Ali Padusha, M.
2015-08-01
(E)-1-((Pyridine-2-yl)methylene)semicarbazide (PMSC) was synthesized, and an experimental and theoretical study of its molecular structure and vibrational spectra was carried out. The FT-IR (400-4000 cm⁻¹), FT-Raman (50-3500 cm⁻¹) and UV-Vis (200-500 nm) spectra of PMSC were recorded. The geometric structure, conformational analysis, and vibrational wavenumbers of PMSC in the ground state were calculated using the B3LYP method with the 6-311++G(d,p) basis set. The complete vibrational assignments were made on the basis of the TED, calculated by the SQM method. The nonlinear optical activity was assessed by means of a first-order hyperpolarizability calculation and the π-electrons of the conjugated bonds in the molecule. The intramolecular charge transfer, hyperconjugative interactions, and molecular stabilization energies were calculated. The band gap energies between occupied and unoccupied molecular orbitals were analyzed; the small band gap suggests high reactivity. To understand the electronic properties of this molecule, the Mulliken charges were also calculated.
Hierarchical emotion calculation model for virtual human modelling - biomed 2010.
Zhao, Yue; Wright, David
2010-01-01
This paper introduces a new emotion generation method for virtual human modelling. The method includes a novel hierarchical emotion structure, a group of emotion calculation equations, and a simple heuristic decision-making mechanism, which enables virtual humans to perform emotionally in real time according to their internal and external factors. The emotion calculation equations used in this research were derived from psychological emotion measurements. Virtual humans can utilise the information in virtual memory together with the emotion calculation equations to generate their own numerical emotion states within the hierarchical emotion structure. Those emotion states are important internal references for virtual humans to adopt appropriate behaviours, and also key cues for their decision making. A simple heuristics theory is introduced and integrated into the decision-making process in order to make the virtual humans' decision making more like a real human's. A data interface that connects the emotion calculation and the decision-making structure has also been designed and simulated to test the method in the Virtools environment.
Electronic structure of LiGaS2
NASA Astrophysics Data System (ADS)
Atuchin, V. V.; Isaenko, L. I.; Kesler, V. G.; Lobanov, S.; Huang, H.; Lin, Z. S.
2009-04-01
X-ray photoelectron spectroscopy (XPS) measurement has been performed to determine the valence band structure of LiGaS2 crystals. The experimental measurement is compared with the electronic structure obtained from density functional calculations. It is found that the Ga 3d states in the XPS spectrum lie much higher than the calculated results. In order to eliminate this discrepancy, the LDA+U method is employed and reasonable agreement is achieved. Further calculations show that the difference in the linear and nonlinear optical coefficients between the LDA and LDA+U calculations is negligibly small, indicating that the Ga 3d states are actually independent of the excited-state properties of LiGaS2 crystals, since they are located at a very deep position in the valence bands.
NASA Astrophysics Data System (ADS)
Smyth, R. T.; Ballance, C. P.; Ramsbottom, C. A.; Johnson, C. A.; Ennis, D. A.; Loch, S. D.
2018-05-01
Neutral tungsten is the primary candidate as a wall material in the divertor region of the International Thermonuclear Experimental Reactor (ITER). The efficient operation of ITER depends heavily on precise atomic physics calculations for the determination of reliable erosion diagnostics, helping to characterize the influx of tungsten impurities into the core plasma. The following paper presents detailed calculations of the atomic structure of neutral tungsten using the multiconfigurational Dirac-Fock method, drawing comparisons with experimental measurements where available, and includes a critical assessment of existing atomic structure data. We investigate the electron-impact excitation of neutral tungsten using the Dirac R-matrix method, and by employing collisional-radiative models, we benchmark our results with recent Compact Toroidal Hybrid measurements. The resulting comparisons highlight alternative diagnostic lines to the widely used 400.88-nm line.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santamarina, A.; Bernard, D.; Dos Santos, N.
This paper describes the method to define relevant targeted integral measurements that allow the improvement of nuclear data evaluations and the determination of corresponding reliable covariances. ²³⁵U and ⁵⁶Fe examples are pointed out for the improvement of JEFF3 data. Utilizations of these covariances are shown for sensitivity and representativity studies, uncertainty calculations, and transposition of experimental results to industrial applications. S/U studies are increasingly used in reactor physics and criticality safety; however, the reliability of study results relies strongly on the relevancy of the nuclear data covariances. Our method derives the real uncertainty associated with each evaluation from calibration on targeted integral measurements. These realistic covariance matrices allow reliable JEFF3.1.1 calculation of the prior uncertainty due to nuclear data, as well as uncertainty reduction based on representative integral experiments, in challenging design calculations such as GEN3 and RJH reactors.
Asymptotic Energies and QED Shifts for Rydberg States of Helium
NASA Technical Reports Server (NTRS)
Drake, G.W.F.
2007-01-01
This paper reviews progress that has been made in obtaining essentially exact solutions to the nonrelativistic three-body problem for helium by a combination of variational and asymptotic expansion methods. The calculation of relativistic and quantum electrodynamic corrections by perturbation theory is discussed, and in particular, methods for the accurate calculation of the Bethe logarithm part of the electron self-energy are presented. As an example, the results are applied to the calculation of isotope shifts for the short-lived 'halo' nucleus He-6 relative to He-4 in order to determine the nuclear charge radius of He-6 from high-precision spectroscopic measurements carried out at the Argonne National Laboratory. The results demonstrate that the high precision now available from atomic theory is creating new opportunities for novel measurement tools, and helium, along with hydrogen, can be regarded as a fundamental atomic system whose spectrum is well understood for all practical purposes.
Electron-positron momentum density in Tl2Ba2CuO6
NASA Astrophysics Data System (ADS)
Barbiellini, B.; Gauthier, M.; Hoffmann, L.; Jarlborg, T.; Manuel, A. A.; Massidda, S.; Peter, M.; Triscone, G.
1994-08-01
We present calculations of the electron-positron momentum density for the high-Tc superconductor Tl2Ba2CuO6, together with some preliminary two-dimensional angular correlation of annihilation radiation (2D-ACAR) measurements. The calculations are based on the first-principles electronic structure obtained using the full-potential linearized augmented plane wave (FLAPW) and the linear muffin-tin orbital (LMTO) methods. We also use a linear combination of atomic orbitals-molecular orbital (LCAO-MO) method to discuss orbital contributions to the anisotropies. Some agreement between calculated and measured 2D-ACAR anisotropies encourages sample improvement for further Fermi surface investigations. Indeed, our results indicate a non-negligible overlap of the positron wave function with the CuO2 plane electrons. Therefore, this compound may be well suited for investigating the relevant CuO2 Fermi surface by 2D-ACAR.
A method to measure internal contact angle in opaque systems by magnetic resonance imaging.
Zhu, Weiqin; Tian, Ye; Gao, Xuefeng; Jiang, Lei
2013-07-23
The internal contact angle is an important parameter for characterizing internal wettability. However, due to the limitations of optical imaging, the methods available for contact angle measurement are only suitable for transparent or open systems. For most practical situations that require contact angle measurement in opaque or enclosed systems, the traditional methods are not effective. To meet this need, a method suitable for contact angle measurement in nontransparent systems was developed by employing MRI technology. In this article, the method is demonstrated by measuring internal contact angles in opaque cylindrical tubes. The method also proves highly feasible in transparent situations and in opaque capillary systems. Using this method, contact angles in opaque systems could be measured successfully, which is significant for understanding wetting behaviors in nontransparent systems and for calculating interfacial parameters in enclosed systems.
Real-time measurement and monitoring of absorbed dose for electron beams
NASA Astrophysics Data System (ADS)
Korenev, Sergey; Korenev, Ivan; Rumega, Stanislav; Grossman, Leon
2004-09-01
A real-time method and system for measurement and monitoring of the absorbed dose for industrial and research electron accelerators are considered in this report. The system is based on the beam-parameters method, the main concept of which is to measure the kinetic energy dissipated by the electrons in the irradiated product, determine the number of electrons and the mass of irradiated product in the same cell, and then calculate the absorbed dose in that cell. Manual and automated systems for dose measurement are described. The systems are applicable to all types of electron accelerators.
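The stated dose concept translates directly into a per-cell division of deposited energy by mass; with placeholder numbers:

```python
# Absorbed dose in a cell = kinetic energy dissipated by the electrons
# reaching it / cell mass. All numbers are illustrative placeholders.
e_charge = 1.602e-19                     # C
E_dep_per_electron = 5.0e6 * e_charge    # 5 MeV deposited per electron, J
n_electrons = 1.0e14                     # electrons entering the cell
mass = 0.002                             # cell mass, kg

dose_Gy = n_electrons * E_dep_per_electron / mass
print(f"{dose_Gy / 1000:.1f} kGy")       # ~40 kGy, an industrial-scale dose
```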
NASA Technical Reports Server (NTRS)
Frohberg, M. G.; Betz, G.
1982-01-01
A method was tested for measuring the enthalpies of mixing of liquid metallic alloying systems, involving the combination of two samples in the electromagnetic field of an induction coil. The heat of solution is calculated from the pyrometrically measured temperature effect, the heat capacity of the alloy, and the heat content of the added sample. The usefulness of the method was tested experimentally with iron-copper and niobium-silicon systems. This method should be especially applicable to high-melting alloys, for which conventional measurements have failed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moura, Eduardo S., E-mail: emoura@wisc.edu; Micka, John A.; Hammer, Cliff G.
Purpose: This work presents the development of a phantom to verify the treatment planning system (TPS) algorithms used for high-dose-rate (HDR) brachytherapy. It is designed to measure the relative dose in heterogeneous media. The experimental details used, simulation methods, and comparisons with a commercial TPS are also provided. Methods: To simulate heterogeneous conditions, four materials were used: Virtual Water™ (VM), BR50/50™, cork, and aluminum. The materials were arranged in 11 heterogeneity configurations. Three dosimeters were used to measure the relative response from an HDR ¹⁹²Ir source: TLD-100™, Gafchromic® EBT3 film, and an Exradin™ A1SL ionization chamber. To compare the results from the experimental measurements, the various configurations were modeled in the PENELOPE/penEasy Monte Carlo code. Images of each setup geometry were acquired from a CT scanner and imported into the BrachyVision™ TPS software, which includes the grid-based Boltzmann solver Acuros™. The results of the measurements performed in the heterogeneous setups were normalized to the dose values measured in the homogeneous Virtual Water™ setup, and the respective differences due to the heterogeneities were considered. Additionally, dose values calculated based on the American Association of Physicists in Medicine Task Group 43 (TG-43) formalism were compared to dose values calculated with the Acuros™ algorithm in the phantom. Calculated doses were compared at the same points where measurements had been performed. Results: Differences in the relative response as high as 11.5% from the homogeneous setup were found when the heterogeneous materials were inserted into the experimental phantom. The aluminum and cork materials produced larger differences than the plastic materials, with the BR50/50™ material producing results similar to the Virtual Water™ results. Our experimental methods agree with the PENELOPE/penEasy simulations for most setups and dosimeters. The TPS relative differences with the Acuros™ algorithm were similar in both the experimental and simulated setups. The discrepancy between the BrachyVision™ Acuros™ and TG-43 dose responses in the phantom described by this work exceeded 12% for certain setups. Conclusions: The results derived from the phantom measurements show good agreement with the simulations and TPS calculations using the Acuros™ algorithm. Differences in the dose responses were evident in the experimental results when heterogeneous materials were introduced. These measurements prove the usefulness of the heterogeneous phantom for verification of HDR treatment planning systems based on model-based dose calculation algorithms.
Ritchie, Andrew W; Webb, Lauren J
2014-07-17
We have examined the effects of including explicit, near-probe solvent molecules in a continuum electrostatics strategy using the linear Poisson-Boltzmann equation with the Adaptive Poisson-Boltzmann Solver (APBS) to calculate electric fields at the midpoint of a nitrile bond, both at the surface of a monomeric protein and when docked at a protein-protein interface. Results were compared to experimental vibrational absorption energy measurements of the nitrile oscillator. We examined three methods for selecting explicit water molecules: (1) all water molecules within 5 Å of the nitrile nitrogen; (2) the water molecule closest to the nitrile nitrogen; and (3) any single water molecule hydrogen-bonding to the nitrile. The correlations of absolute field strengths with experimental absorption energies were calculated, and it was observed that method 1 was an improvement only for the monomer calculations, while methods 2 and 3 were not significantly different from the purely implicit solvent calculations for all protein systems examined. Upon taking the difference in calculated electrostatic fields and comparing to the difference in absorption frequencies, we typically observed an increase in experimental correlation for all methods, with method 1 showing the largest gain, likely due to the improved absolute monomer correlations using that method. These results suggest that, unlike with quantum mechanical methods, when calculating absolute fields using entirely classical models, implicit solvent is typically sufficient, and additional work to identify hydrogen-bonding or nearest waters does not significantly impact the results. Although we observed that a sphere of solvent near the field of interest improved results for relative field calculations, it should not be considered a panacea for all situations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Guang; Sun, Xin; Wang, Yuxin
A new inverse method was proposed to calculate the anisotropic elastic-plastic properties (flow stress) of a thin electrodeposited Ag coating utilizing nanoindentation tests, a previously reported inverse method for isotropic materials, and three-dimensional (3-D) finite element analyses (FEA). The indentation depth was ~4% of the coating thickness (~10 μm) to avoid substrate effects, and different indentation responses were observed in the longitudinal (L) and transverse (T) directions. The elastic-plastic properties were estimated in the newly developed inverse method by matching the predicted indentation responses in the L and T directions with experimental measurements, considering the indentation size effect (ISE). The results were validated with tensile flow curves measured from a free-standing (FS) Ag film. The current method can be utilized to characterize the anisotropic elastic-plastic properties of coatings and to provide constitutive properties for coating performance evaluations.
Analysis of Photothermal Characterization of Layered Materials: Design of Optimal Experiments
NASA Technical Reports Server (NTRS)
Cole, Kevin D.
2003-01-01
In this paper, numerical calculations are presented for the steady-periodic temperature in layered materials and functionally-graded materials to simulate photothermal methods for the measurement of thermal properties. No laboratory experiments were performed. The temperature is found from a new Green's function formulation which is particularly well-suited to machine calculation. The simulation method is verified by comparison with literature data for a layered material. The method is applied to a class of two-component functionally-graded materials, and results for temperature and sensitivity coefficients are presented. An optimality criterion, based on the sensitivity coefficients, is used for choosing the experimental conditions that will be needed for photothermal measurements to determine the spatial distribution of thermal properties. This method for optimal experiment design is completely general and may be applied to any photothermal technique and to any functionally-graded material.
Microrheology with optical tweezers: measuring the relative viscosity of solutions 'at a glance'.
Tassieri, Manlio; Del Giudice, Francesco; Robertson, Emma J; Jain, Neena; Fries, Bettina; Wilson, Rab; Glidle, Andrew; Greco, Francesco; Netti, Paolo Antonio; Maffettone, Pier Luca; Bicanic, Tihana; Cooper, Jonathan M
2015-03-06
We present a straightforward method for measuring the relative viscosity of fluids via a simple graphical analysis of the normalised position autocorrelation function of an optically trapped bead, without the need of embarking on laborious calculations. The advantages of the proposed microrheology method are evident when it is adopted for measurements of materials whose availability is limited, such as those involved in biological studies. The method has been validated by direct comparison with conventional bulk rheology methods, and has been applied both to characterise synthetic linear polyelectrolytes solutions and to study biomedical samples.
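In the simplest (Newtonian) reading of the method, the trapped bead's normalised position autocorrelation decays exponentially with a time constant proportional to viscosity, so the relative viscosity follows from the ratio of fitted lag times. Below is a synthetic-data sketch under that single-exponential assumption; it is not the paper's exact graphical procedure.

```python
import numpy as np

# For a bead in a harmonic trap (stiffness kappa, drag 6*pi*eta*a), the
# normalised position autocorrelation decays as exp(-t/tau) with
# tau = 6*pi*eta*a/kappa, so eta_rel = tau_sample / tau_solvent.
t = np.linspace(0.0, 0.1, 200)            # lag time, s
acf_water = np.exp(-t / 0.004)            # synthetic solvent autocorrelation
acf_sample = np.exp(-t / 0.013)           # more viscous -> slower decay

tau = lambda acf: -1.0 / np.polyfit(t, np.log(acf), 1)[0]   # fit exp decay
eta_rel = tau(acf_sample) / tau(acf_water)
print(f"relative viscosity ~ {eta_rel:.2f}")                # ~3.25
```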