Sample records for effect measurement method

  1. A Proposal of a Method to Measure and Evaluate the Effect to Apply External Support Measures for Owners by Construction Management Method, etc

    NASA Astrophysics Data System (ADS)

    Tada, Hiroshi; Miyatake, Ichiro; Mouri, Junji; Ajiki, Norihiko; Fueta, Toshiharu

    In Japan, various approaches have been taken to ensure the quality of public works and to support the procurement activities of governmental agencies by utilizing external resources, including procurement support services and the construction management (CM) method. Although these measures to utilize external resources (hereinafter, external support measures) have been discussed, and follow-up surveys have shown their positive effects, the surveys address only the overall effect of an external support measure as a whole; the effect of each task item has not been examined, and the extent to which the measure met the client's expectations remains unknown. However, effective future use of external support measures cannot be achieved without knowing why a measure was introduced, what effect was expected for each task item, and to what extent the expectation was fulfilled. Furthermore, it is important to clarify not only the effect relative to the client's expectation (performance) but also the public benefit of the measure (value improvement). From this point of view, there is no established method for determining the effect of a client's measure to utilize external resources. Against this background, this study takes the CM method as an example of an external support measure, proposes a method to measure and evaluate its effect for each task item, and discusses future issues and possible responses, with the aim of contributing to the promotion, improvement, and proper implementation of external support measures.

  2. A new point contact surface acoustic wave transducer for measurement of acoustoelastic effect of polymethylmethacrylate.

    PubMed

    Lee, Yung-Chun; Kuo, Shi Hoa

    2004-01-01

    A new acoustic transducer and measurement method have been developed for precise measurement of surface wave velocity. This measurement method is used to investigate the acoustoelastic effects for waves propagating on the surface of a polymethylmethacrylate (PMMA) sample. The transducer uses two miniature conical PZT elements for acoustic wave transmitter and receiver on the sample surface; hence, it can be viewed as a point-source/point-receiver transducer. Acoustic waves are excited and detected with the PZT elements, and the wave velocity can be accurately determined with a cross-correlation waveform comparison method. The transducer and its measurement method are particularly sensitive and accurate in determining small changes in wave velocity; therefore, they are applied to the measurement of acoustoelastic effects in PMMA materials. Both the surface skimming longitudinal wave and Rayleigh surface wave can be simultaneously excited and measured. With a uniaxial-loaded PMMA sample, both acoustoelastic effects for surface skimming longitudinal wave and Rayleigh waves of PMMA are measured. The acoustoelastic coefficients for both types of surface wave motions are simultaneously determined. The transducer and its measurement method provide a practical way for measuring surface stresses nondestructively.
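
    As an illustration of the cross-correlation waveform comparison step described above, the sketch below estimates the arrival-time shift between two digitized surface-wave signals and converts it into a relative velocity change. It is a minimal sketch, not the authors' code; the sampling rate, propagation distance, and waveforms are invented.

```python
import numpy as np

def arrival_delay(ref, sig, fs):
    """Estimate the delay of `sig` relative to `ref` (seconds) from the
    peak of their cross-correlation."""
    xcorr = np.correlate(sig - sig.mean(), ref - ref.mean(), mode="full")
    lag = np.argmax(xcorr) - (len(ref) - 1)
    return lag / fs

# Hypothetical example: 5 MHz sampling, 20 mm propagation path.
fs, distance = 5e6, 0.020
t = np.arange(0, 200e-6, 1 / fs)
ref = np.exp(-((t - 50e-6) / 5e-6) ** 2) * np.sin(2 * np.pi * 1e6 * t)
# A loaded sample whose wave arrives 0.4 us later (slower wave).
sig = np.exp(-((t - 50.4e-6) / 5e-6) ** 2) * np.sin(2 * np.pi * 1e6 * (t - 0.4e-6))

dt = arrival_delay(ref, sig, fs)
t0 = 50e-6                                  # unstressed arrival time (assumed known)
v0, v1 = distance / t0, distance / (t0 + dt)
print(f"delay = {dt*1e9:.1f} ns, relative velocity change = {(v1 - v0)/v0:.4%}")
```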

  3. Impact of parasitic thermal effects on thermoelectric property measurements by Harman method.

    PubMed

    Kwon, Beomjin; Baek, Seung-Hyub; Kim, Seong Keun; Kim, Jin-Sang

    2014-04-01

    The Harman method is a rapid and simple technique for measuring thermoelectric properties. However, its validity has often been questioned because of the over-simplified assumptions on which the method relies. Here, we quantitatively investigate the influence of previously ignored parasitic thermal effects on the Harman method and develop a method to determine an intrinsic ZT. We expand the original Harman relation with three extra terms: heat losses via the lead wires and via radiation, and Joule heating within the sample. Based on the expanded Harman relation, we use differential measurements of the sample geometry to measure the intrinsic ZT. To separately evaluate the parasitic terms, the measured ZTs with systematically varied sample geometries and lead wire types are fitted to the expanded relation. A large discrepancy (∼28%) in the measured ZTs, depending on the measurement configuration, is observed, and we are able to evaluate the parasitic terms separately. This work will help to evaluate intrinsic thermoelectric properties with the Harman method by eliminating ambiguities arising from extrinsic effects.

  4. A new sampling method for fibre length measurement

    NASA Astrophysics Data System (ADS)

    Wu, Hongyan; Li, Xianghong; Zhang, Junying

    2018-06-01

    This paper presents a new sampling method for fibre length measurement. The new method satisfies the three features of an effective sampling method and produces a beard with two symmetrical ends, which can be scanned from the holding line to obtain two full fibrograms for each sample. The methodology is introduced and experiments were performed to investigate the effectiveness of the new method. The results show that the new sampling method is effective.

  5. Effectiveness of Variable-Gain Kalman Filter Based on Angle Error Calculated from Acceleration Signals in Lower Limb Angle Measurement with Inertial Sensors

    PubMed Central

    Watanabe, Takashi

    2013-01-01

    The wearable sensor system developed by our group, which measures lower limb angles using a Kalman-filtering-based method, was suggested to be useful for evaluating gait function in rehabilitation support. However, the variation of its measurement errors needed to be reduced. In this paper, a variable-Kalman-gain method based on an angle error calculated from acceleration signals is proposed to improve measurement accuracy. The proposed method was tested against a fixed-gain Kalman filter and a variable-Kalman-gain method based on acceleration magnitude used in previous studies. First, in angle measurement during treadmill walking, the proposed method measured lower limb angles with the highest accuracy, improving foot inclination angle measurement significantly and shank and thigh inclination angles slightly. The variable-gain method based on acceleration magnitude was not effective for our Kalman filter system. Then, in angle measurement of a rigid body model, the proposed method showed measurement accuracy similar to or higher than that reported in other studies, which fixed markers of a camera-based motion measurement system on a rigid plate together with a sensor or on the sensor directly. The proposed method was found to be effective for angle measurement with inertial sensors. PMID:24282442
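
    The following is a minimal one-dimensional sketch of a variable-gain Kalman filter whose measurement noise is inflated when the accelerometer-derived angle error grows. It is an illustration under assumed noise parameters, not the paper's implementation (the paper derives its gain from acceleration signals measured on the lower limb).

```python
import numpy as np

def variable_gain_kf(gyro_rate, accel_angle, dt, q=1e-4, r0=1e-2, k_err=50.0):
    """Estimate inclination from gyro rate (rad/s) and accelerometer angle (rad).

    The measurement variance is scaled up when the accelerometer angle deviates
    strongly from the prediction, de-weighting it during dynamic motion
    (illustrative rule only).
    """
    angle, p = accel_angle[0], 1.0
    out = np.empty(len(gyro_rate))
    for i, (w, za) in enumerate(zip(gyro_rate, accel_angle)):
        # Predict with the gyro.
        angle += w * dt
        p += q
        # Variable measurement noise based on the innovation (angle error).
        err = za - angle
        r = r0 * (1.0 + k_err * err**2)
        # Update with the accelerometer angle.
        k = p / (p + r)
        angle += k * err
        p *= (1.0 - k)
        out[i] = angle
    return out

# Hypothetical signals: slow tilt, a biased gyro, and a noisy accelerometer angle.
dt = 0.01
t = np.arange(0, 10, dt)
true_angle = 0.3 * np.sin(0.5 * t)
gyro = np.gradient(true_angle, dt) + 0.02
accel = true_angle + 0.05 * np.random.randn(len(t))
est = variable_gain_kf(gyro, accel, dt)
print("RMS error [rad]:", np.sqrt(np.mean((est - true_angle) ** 2)))
```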

  6. Repeatability, Reproducibility, Separative Power and Subjectivity of Different Fish Morphometric Analysis Methods

    PubMed Central

    Takács, Péter

    2016-01-01

    We compared the repeatability, reproducibility (intra- and inter-measurer similarity), separative power and subjectivity (measurer effect on results) of four morphometric methods frequently used in ichthyological research: the “traditional” caliper-based (TRA) and truss-network (TRU) distance methods and two geometric methods that compare landmark coordinates on the body (GMB) and scales (GMS). In each case, measurements were performed three times by three measurers on the same specimens of three common cyprinid species (roach Rutilus rutilus (Linnaeus, 1758), bleak Alburnus alburnus (Linnaeus, 1758) and Prussian carp Carassius gibelio (Bloch, 1782)) collected from three closely-situated sites in the Lake Balaton catchment (Hungary) in 2014. TRA measurements were made on conserved specimens using a digital caliper, while TRU, GMB and GMS measurements were undertaken on digital images of the bodies and scales. In most cases, intra-measurer repeatability was similar. While all four methods were able to differentiate the source populations, significant differences were observed in their repeatability, reproducibility and subjectivity. GMB displayed the highest overall repeatability and reproducibility and was least burdened by measurer effect. While GMS showed repeatability similar to GMB when fish scales had a characteristic shape, it showed significantly lower reproducibility (compared with its repeatability) for each species than the other methods. TRU showed repeatability similar to that of GMS. TRA was the least applicable method, as measurements were obtained from the fish itself, resulting in poor repeatability and reproducibility. Although all four methods showed some degree of subjectivity, TRA was the only method where population-level separation was entirely overwritten by measurer effect. Based on these results, we recommend a) avoiding the aggregation of different measurers’ datasets when using the TRA and GMS methods; and b) using image-based methods for morphometric surveys. Automation of the morphometric workflow would also reduce any measurer effect and eliminate measurement and data-input errors. PMID:27327896

  7. [The validation of the effect of correcting spectral background changes based on floating reference method by simulation].

    PubMed

    Wang, Zhu-lou; Zhang, Wan-jie; Li, Chen-xi; Chen, Wen-liang; Xu, Ke-xin

    2015-02-01

    There are several challenges in near-infrared non-invasive blood glucose measurement, such as the low signal-to-noise ratio of the instrument, unstable measurement conditions, and unpredictable and irregular changes of the measured object. It is therefore difficult to extract information on blood glucose concentration accurately from the complicated signals. A reference measurement is usually considered as a means of eliminating the effect of background changes, but there is no reference substance that changes synchronously with the analyte. After many years of research, our group has proposed the floating reference method, which succeeded in eliminating the spectral effects induced by instrument drift and by the measured object's background variations. However, our studies indicate that the reference point changes with the measurement location and wavelength, so the effectiveness of the floating reference method should be verified comprehensively. In this paper, for simplicity, Monte Carlo simulations of Intralipid solutions with concentrations of 5% and 10% are performed to verify the ability of the floating reference method to eliminate the consequences of light source drift. The light source drift is introduced by varying the incident photon number. The effectiveness of the floating reference method, with the corresponding reference points at different wavelengths, in eliminating the variations caused by light source drift is estimated. A comparison of the prediction abilities of calibration models with and without the method shows that the RMSEPs are decreased by about 98.57% (5% Intralipid) and 99.36% (10% Intralipid). The results indicate that the floating reference method has a clear effect in eliminating background changes.

  8. Antiepileptic Drug Behavioral Side Effects in Individuals with Mental Retardation and the Use of Behavioral Measurement Techniques.

    ERIC Educational Resources Information Center

    Kalachnik, John E.; And Others

    1995-01-01

    Behavioral psychology measurement methods helped assess antiepileptic drug behavioral side effects in five individuals with mental retardation who could not verbally communicate presence of side effects. When the suspected antiepileptic drug was altered, an 81% reduction of maladaptive behaviors occurred. The measurement methods enabled systematic…

  9. Turbulence excited frequency domain damping measurement and truncation effects

    NASA Technical Reports Server (NTRS)

    Soovere, J.

    1976-01-01

    Existing frequency domain modal frequency and damping analysis methods are discussed. The effects of truncation in the Laplace and Fourier transform data analysis methods are described. Methods for eliminating truncation errors from measured damping are presented. Implications of truncation effects in fast Fourier transform analysis are discussed. Limited comparison with test data is presented.

  10. Effect of Heat Generation of Ultrasound Transducer on Ultrasonic Power Measured by Calorimetric Method

    NASA Astrophysics Data System (ADS)

    Uchida, Takeyoshi; Kikuchi, Tsuneo

    2013-07-01

    Ultrasonic power is one of the key quantities closely related to the safety of medical ultrasonic equipment. An ultrasonic power standard is required for establishment of safety. Generally, an ultrasonic power standard below approximately 20 W is established by the radiation force balance (RFB) method as the most accurate measurement method. However, RFB is not suitable for high ultrasonic power because of thermal damage to the absorbing target. Consequently, an alternative method to RFB is required. We have been developing a measurement technique for high ultrasonic power by the calorimetric method. In this study, we examined the effect of heat generation of an ultrasound transducer on ultrasonic power measured by the calorimetric method. As a result, an excessively high ultrasonic power was measured owing to the effect of heat generation from internal loss in the transducer. A reference ultrasound transducer with low heat generation is required for a high ultrasonic power standard established by the calorimetric method.
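
    A calorimetric power estimate of the kind described above can be obtained from the heating rate of a water load, P = m·c·dT/dt. The sketch below uses invented temperature data; the transducer's internal heat generation, which the paper examines, would bias such an estimate high.

```python
import numpy as np

# Hypothetical calorimetric run: water temperature recorded while sonicating.
mass_water = 0.200            # kg (assumed)
c_water = 4186.0              # J/(kg*K)
t = np.arange(0, 60, 5.0)     # s
temp = 22.0 + 0.012 * t + 0.01 * np.random.randn(t.size)  # degC, invented values

# Ultrasonic power from the heating rate, P = m * c * dT/dt.
slope = np.polyfit(t, temp, 1)[0]      # K/s
power = mass_water * c_water * slope
print(f"estimated ultrasonic power: {power:.2f} W")
# Heat generated inside the transducer itself also raises dT/dt and would make
# this estimate too high, which is the effect examined in the paper.
```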

  11. Impact of parasitic thermal effects on thermoelectric property measurements by Harman method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwon, Beomjin, E-mail: bkwon@kist.re.kr; Baek, Seung-Hyub; Keun Kim, Seong

    2014-04-15

    The Harman method is a rapid and simple technique for measuring thermoelectric properties. However, its validity has often been questioned because of the over-simplified assumptions on which the method relies. Here, we quantitatively investigate the influence of previously ignored parasitic thermal effects on the Harman method and develop a method to determine an intrinsic ZT. We expand the original Harman relation with three extra terms: heat losses via the lead wires and via radiation, and Joule heating within the sample. Based on the expanded Harman relation, we use differential measurements of the sample geometry to measure the intrinsic ZT. To separately evaluate the parasitic terms, the measured ZTs with systematically varied sample geometries and lead wire types are fitted to the expanded relation. A large discrepancy (∼28%) in the measured ZTs, depending on the measurement configuration, is observed, and we are able to evaluate the parasitic terms separately. This work will help to evaluate intrinsic thermoelectric properties with the Harman method by eliminating ambiguities arising from extrinsic effects.

  12. Micro-scale temperature measurement method using fluorescence polarization

    NASA Astrophysics Data System (ADS)

    Tatsumi, K.; Hsu, C.-H.; Suzuki, A.; Nakabe, K.

    2016-09-01

    A novel method that can measure fluid temperature at microscopic scale by measuring fluorescence polarization is described in this paper. The measurement technique is not influenced by the quenching effects that appear in conventional LIF methods and is believed to show higher reliability in temperature measurement. Experiments were performed using a microchannel flow and fluorescent molecular probes, and the effects of fluid temperature, fluid viscosity, measurement time, and pH of the solution on the measured fluorescence polarization degree are discussed to clarify the basic characteristics of the present method. The results showed that fluorescence polarization is considerably less sensitive to these quenching factors. A good correlation with fluid temperature, on the other hand, was obtained and agreed well with theoretical values, confirming the feasibility of the method.

  13. An orientation measurement method based on Hall-effect sensors for permanent magnet spherical actuators with 3D magnet array.

    PubMed

    Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming

    2014-10-24

    An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method can effectively avoid the friction torque and additional inertial moment present in conventional approaches. A curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space; comparison with the conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and the analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy can be improved by the novel 3D magnet array. The study results could be used for real-time motion control of PM spherical actuators.

  14. Transportation safety data and analysis : Volume 1, Analyzing the effectiveness of safety measures using Bayesian methods.

    DOT National Transportation Integrated Search

    2010-12-01

    Recent research suggests that traditional safety evaluation methods may be inadequate in accurately determining the effectiveness of roadway safety measures. In recent years, advanced statistical methods are being utilized in traffic safety studies t...

  15. Estimation of effective refractive index of birefringent particles using a combination of the immersion liquid method and light scattering.

    PubMed

    Niskanen, Ilpo; Räty, Jukka; Peiponen, Kai-Erik

    2008-04-01

    A method to detect the effective refractive index and concentration of birefringent pigments is suggested. The method is based on the utilization of the immersion liquid method and a multifunction spectrophotometer for the measurement of back scattered light. The method has applications in the measurement of the effective refractive index of pigments that are used, e.g., in the paper industry to improve the opacity of paper products.

  16. Grey signal processing and data reconstruction in the non-diffracting beam triangulation measurement system

    NASA Astrophysics Data System (ADS)

    Meng, Hao; Wang, Zhongyu; Fu, Jihua

    2008-12-01

    The non-diffracting beam triangulation measurement system possesses the advantages of longer measurement range, higher theoretical measurement accuracy and higher resolution over the traditional laser triangulation measurement system. Unfortunately, in practical applications the measurement accuracy of the system is greatly degraded by speckle noise, CCD photoelectric noise and background light noise. Hence, effective signal processing methods must be applied to improve the measurement accuracy. In this paper a novel and effective method for removing the noise in the non-diffracting beam triangulation measurement system is proposed. In this method the grey system theory is used to process and reconstruct the measurement signal. By implementing grey dynamic filtering based on the dynamic GM(1,1) model, the noise can be effectively removed from the primary measurement data, and the measurement accuracy of the system is improved as a result.
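
    For reference, the GM(1,1) grey model underlying the grey dynamic filtering mentioned above can be sketched as follows (generic textbook form; the series values are invented and the paper's exact dynamic filtering scheme may differ).

```python
import numpy as np

def gm11_fit_predict(x0, n_pred=0):
    """Fit a GM(1,1) grey model to the non-negative series x0 and return the
    reconstructed (smoothed) series plus optional extrapolated points."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]         # development / grey input coefficients
    k = np.arange(len(x0) + n_pred)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time-response function
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)]) # inverse accumulation
    return x0_hat

# Example: noisy triangulation spot positions (arbitrary units, invented).
raw = np.array([10.2, 10.8, 10.5, 11.1, 10.9, 11.4, 11.2, 11.8])
print(np.round(gm11_fit_predict(raw), 2))
```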

  17. Timing of nest vegetation measurement may obscure adaptive significance of nest-site characteristics: A simulation study.

    PubMed

    McConnell, Mark D; Monroe, Adrian P; Burger, Loren Wes; Martin, James A

    2017-02-01

    Advances in understanding avian nesting ecology are hindered by a prevalent lack of agreement between nest-site characteristics and fitness metrics such as nest success. We posit this is a result of inconsistent and improper timing of nest-site vegetation measurements. Therefore, we evaluated how the timing of nest vegetation measurement influences the estimated effects of vegetation structure on nest survival. We simulated phenological changes in nest-site vegetation growth over a typical nesting season and modeled how the timing of measuring that vegetation, relative to nest fate, creates bias in conclusions regarding its influence on nest survival. We modeled the bias associated with four methods of measuring nest-site vegetation: Method 1-measuring at nest initiation, Method 2-measuring at nest termination regardless of fate, Method 3-measuring at nest termination for successful nests and at estimated completion for unsuccessful nests, and Method 4-measuring at nest termination regardless of fate while also accounting for initiation date. We quantified and compared bias for each method for varying simulated effects, ranked models for each method using AIC, and calculated the proportion of simulations in which each model (measurement method) was selected as the best model. Our results indicate that the risk of drawing an erroneous or spurious conclusion was present in all methods but greater with Method 2 which is the most common method reported in the literature. Methods 1 and 3 were similarly less biased. Method 4 provided no additional value as bias was similar to Method 2 for all scenarios. While Method 1 is seldom practical to collect in the field, Method 3 is logistically practical and minimizes inherent bias. Implementation of Method 3 will facilitate estimating the effect of nest-site vegetation on survival, in the least biased way, and allow reliable conclusions to be drawn.

  18. Center effect on ankle-brachial index measurement when using the reference method (Doppler and manometer): results from a large cohort study.

    PubMed

    Vierron, Emilie; Halimi, Jean-Michel; Tichet, Jean; Balkau, Beverley; Cogneau, Joel; Giraudeau, Bruno

    2009-07-01

    The ankle-brachial index (ABI) is a simple and noninvasive tool used to detect peripheral arterial disease (PAD). We aimed to assess, in a French multicenter cohort, the center effect associated with arterial pressure (AP) and ABI measurements using the reference method and using a semiautomatic device. This study included baseline and 9-year follow-up data from 3,664 volunteers of 10 health examination centers of the DESIR (Data from an Epidemiological Study on the Insulin Resistance) syndrome French cohort. Ankle and brachial AP were measured at inclusion by the reference method (a mercury sphygmomanometer coupled with a Doppler probe for ankle measurements) and at 9 years by a semiautomatic device (Omron HEM-705CP). The center effect was assessed by the intraclass correlation coefficient (ICC), ratio of the between-center variance to the total variance of the measurement. At inclusion, the sample mean age was 47.5 (s.d. 9.9) years; 49.3% were men. Although ICCs were smaller than 0.05 for brachial AP measurements, they were close to 0.18 and 0.20 for ankle systolic AP (SAP) and ABI measurements, respectively, when the reference method was used. No center effect for measures other than ankle SAP was detected. With the semiautomatic device method, all ICCs, including those for ankle SAP and ABI measurements, were between 0.005 and 0.04. We found an important center effect on ABI measured with a sphygmomanometer and a Doppler probe but not a semiautomatic device. A center effect should be taken into account when planning any multicenter study on ABI measurement.
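
    The center effect above is quantified by the intraclass correlation coefficient, i.e., the ratio of between-center variance to total variance. A minimal sketch of a one-way random-effects ICC on invented ankle pressure data:

```python
import numpy as np

def icc_oneway(groups):
    """ICC(1): between-group variance / total variance, estimated from a
    balanced one-way random-effects ANOVA."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)
    n = len(groups[0])                       # assumes equal group sizes
    grand = np.mean(np.concatenate(groups))
    msb = n * sum((g.mean() - grand) ** 2 for g in groups) / (k - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Hypothetical ankle systolic pressures (mmHg) measured at three centers.
centers = [
    120 + 8 * np.random.randn(50),
    132 + 8 * np.random.randn(50),
    125 + 8 * np.random.randn(50),
]
print("ICC (center effect):", round(icc_oneway(centers), 3))
```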

  19. A novel measure of effect size for mediation analysis.

    PubMed

    Lachowicz, Mark J; Preacher, Kristopher J; Kelley, Ken

    2018-06-01

    Mediation analysis has become one of the most popular statistical methods in the social sciences. However, many currently available effect size measures for mediation have limitations that restrict their use to specific mediation models. In this article, we develop a measure of effect size that addresses these limitations. We show how modification of a currently existing effect size measure results in a novel effect size measure with many desirable properties. We also derive an expression for the bias of the sample estimator for the proposed effect size measure and propose an adjusted version of the estimator. We present a Monte Carlo simulation study conducted to examine the finite sampling properties of the adjusted and unadjusted estimators, which shows that the adjusted estimator is effective at recovering the true value it estimates. Finally, we demonstrate the use of the effect size measure with an empirical example. We provide freely available software so that researchers can immediately implement the methods we discuss. Our developments here extend the existing literature on effect sizes and mediation by developing a potentially useful method of communicating the magnitude of mediation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
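
    As a generic illustration (not the specific effect size proposed in this paper), the sketch below fits a simple X → M → Y mediation model, computes the indirect effect ab, and reports a variance-explained style effect size; the data are simulated and the scaling is an assumption.

```python
import numpy as np
import statsmodels.api as sm

# Simulated mediation data: X -> M -> Y with invented coefficients.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + 0.1 * x + rng.normal(size=n)

a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                 # X -> M path
bmod = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()
b = bmod.params[2]                                                # M -> Y path, controlling for X

indirect = a * b
# Simple variance-explained style effect size (illustrative, not the paper's measure).
es = indirect**2 * np.var(x) / np.var(y)
print(f"indirect effect = {indirect:.3f}, illustrative effect size = {es:.3f}")
```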

  20. Chlorine measurement in the jet singlet oxygen generator considering the effects of the droplets.

    PubMed

    Goodarzi, Mohamad S; Saghafifar, Hossein

    2016-09-01

    A new method is presented to measure chlorine concentration more accurately than the conventional method in the exhaust gases of a jet-type singlet oxygen generator. One problem in this measurement is the presence of micrometer-sized droplets. In this article, an empirical method is reported to eliminate the effects of the droplets. Two wavelengths from a fiber-coupled LED are adopted and the measurement is made at both selected wavelengths. Chlorine is measured more accurately by the two-wavelength method than by the one-wavelength method because the droplet term is eliminated from the equations. The method is validated without basic hydrogen peroxide injection in the reactor; in this case, the pressure meter reading in the diagnostic cell is compared with the optically calculated pressure obtained by the one-wavelength and two-wavelength methods. It is found that the chlorine measurements by the two-wavelength method and the pressure meter are nearly the same, while the one-wavelength method has a significant error due to the droplets.
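
    A minimal sketch of the two-wavelength idea: if the droplet contribution to the measured absorbance is assumed identical at both wavelengths (an assumption for this sketch), subtracting the two Beer-Lambert equations removes it and leaves only the chlorine term. Cross-sections and absorbances below are invented.

```python
import numpy as np

# Beer-Lambert at two wavelengths:  A_i = sigma_i * N_Cl * L + D
# where D is a droplet extinction term assumed equal at both wavelengths.
sigma = np.array([2.0e-19, 0.7e-19])   # Cl2 absorption cross-sections, cm^2 (illustrative)
L = 10.0                                # optical path length, cm
A = np.array([0.82, 0.41])              # measured absorbances (invented)

# Subtracting the two equations eliminates D and isolates the chlorine density.
n_cl = (A[0] - A[1]) / ((sigma[0] - sigma[1]) * L)     # molecules / cm^3
droplet_term = A[0] - sigma[0] * n_cl * L
print(f"Cl2 number density ~ {n_cl:.3e} cm^-3, droplet contribution ~ {droplet_term:.3f}")
```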

  1. Calibration-free self-absorption model for measuring nitric oxide concentration in a pulsed corona discharge.

    PubMed

    Du, Yanjun; Ding, Yanjun; Liu, Yufeng; Lan, Lijuan; Peng, Zhimin

    2014-08-01

    The effect of self-absorption on emission intensity distributions can be used for species concentration measurements. A calculation model is developed based on the Beer-Lambert law to quantify this effect, and a calibration-free measurement method is then proposed on the basis of this model by establishing the relationship between gas concentration and absorption strength. The effect of collision parameters and rotational temperature on the method is also discussed. The proposed method is verified by investigating the nitric oxide emission bands (A²Σ⁺→X²Π) generated by a pulsed corona discharge at various gas concentrations. Experimental results coincide well with expectations, confirming the precision and accuracy of the proposed measurement method.

  2. Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses.

    PubMed

    Ye, Jun

    2015-03-01

    In pattern recognition and medical diagnosis, the similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposes improved cosine similarity measures of SNSs based on the cosine function, including single-valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Weighted cosine similarity measures of SNSs are then introduced by taking into account the importance of each element. Further, a medical diagnosis method using the improved cosine similarity measures is proposed to solve medical diagnosis problems with simplified neutrosophic information. The improved measures are compared with existing cosine similarity measures of SNSs by numerical examples to demonstrate their effectiveness and rationality in overcoming some shortcomings of the existing measures in certain cases. In the medical diagnosis method, a proper diagnosis is found from the cosine similarity between the symptoms and the considered diseases, which are represented by SNSs. The diagnosis method based on the improved cosine similarity measures is then applied to two medical diagnosis problems to show its applicability and effectiveness. Both numerical examples demonstrate that the improved cosine similarity measures based on the cosine function can overcome the shortcomings of the existing cosine similarity measures between two vectors in some cases. In the two medical diagnosis problems, the diagnoses obtained using the various similarity measures of SNSs were identical, demonstrating the effectiveness and rationality of the proposed diagnosis method. The improved cosine measures of SNSs based on the cosine function overcome some drawbacks of existing cosine similarity measures of SNSs in vector space, and the resulting diagnosis method is well suited to handling medical diagnosis problems with simplified neutrosophic information. Copyright © 2014 Elsevier B.V. All rights reserved.
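
    One commonly cited form of the improved cosine similarity for single-valued neutrosophic sets applies a cosine to the element-wise differences of the (truth, indeterminacy, falsity) components. The sketch below implements that form with weights; the exact formula used in the paper may differ, so treat this as an assumption-laden illustration.

```python
import numpy as np

def improved_cosine_similarity(a, b, weights=None):
    """Weighted improved cosine similarity between two single-valued
    neutrosophic sets a, b of shape (n, 3) with (T, I, F) components in [0, 1]."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = a.shape[0]
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, float)
    diffs = np.abs(a - b).sum(axis=1)           # |dT| + |dI| + |dF| per element
    return float(np.sum(w * np.cos(np.pi * diffs / 6.0)))

# Toy example: a patient's symptoms vs. a disease profile (invented numbers).
patient = [(0.8, 0.2, 0.1), (0.6, 0.3, 0.3), (0.2, 0.1, 0.8)]
disease = [(0.7, 0.2, 0.2), (0.7, 0.2, 0.2), (0.1, 0.2, 0.8)]
print(round(improved_cosine_similarity(patient, disease), 4))
```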

  3. Different methods to analyze stepped wedge trial designs revealed different aspects of intervention effects.

    PubMed

    Twisk, J W R; Hoogendijk, E O; Zwijsen, S A; de Boer, M R

    2016-04-01

    Within epidemiology, a stepped wedge trial design (i.e., a one-way crossover trial in which several arms start the intervention at different time points) is increasingly popular as an alternative to a classical cluster randomized controlled trial. Despite this increasing popularity, there is a huge variation in the methods used to analyze data from a stepped wedge trial design. Four linear mixed models were used to analyze data from a stepped wedge trial design on two example data sets. The four methods were chosen because they have been (frequently) used in practice. Method 1 compares all the intervention measurements with the control measurements. Method 2 treats the intervention variable as a time-independent categorical variable comparing the different arms with each other. In method 3, the intervention variable is a time-dependent categorical variable comparing groups with different number of intervention measurements, whereas in method 4, the changes in the outcome variable between subsequent measurements are analyzed. Regarding the results in the first example data set, methods 1 and 3 showed a strong positive intervention effect, which disappeared after adjusting for time. Method 2 showed an inverse intervention effect, whereas method 4 did not show a significant effect at all. In the second example data set, the results were the opposite. Both methods 2 and 4 showed significant intervention effects, whereas the other two methods did not. For method 4, the intervention effect attenuated after adjustment for time. Different methods to analyze data from a stepped wedge trial design reveal different aspects of a possible intervention effect. The choice of a method partly depends on the type of the intervention and the possible time-dependent effect of the intervention. Furthermore, it is advised to combine the results of the different methods to obtain an interpretable overall result. Copyright © 2016 Elsevier Inc. All rights reserved.
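
    Two of the four approaches could be specified with linear mixed models roughly as follows, assuming a long-format table with columns cluster, period, intervention (0/1) and y; the column names and the synthetic data are assumptions, not the paper's datasets.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format stepped wedge data: 6 clusters, 4 periods, 20 subjects each.
rng = np.random.default_rng(0)
rows = []
for cluster in range(6):
    switch = 1 + cluster // 2                 # period at which this arm starts the intervention
    for period in range(4):
        treated = int(period >= switch)
        for _ in range(20):
            y = 1.0 * treated + 0.3 * period + 0.5 * cluster / 6 + rng.normal()
            rows.append((cluster, period, treated, y))
df = pd.DataFrame(rows, columns=["cluster", "period", "intervention", "y"])

# Method 1: all intervention vs. all control measurements, random cluster intercept.
m1 = smf.mixedlm("y ~ intervention", df, groups=df["cluster"]).fit()
# Method 1 adjusted for time (period as categorical), as discussed in the abstract.
m1_adj = smf.mixedlm("y ~ intervention + C(period)", df, groups=df["cluster"]).fit()
print(m1.params["intervention"], m1_adj.params["intervention"])
```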

  4. Flexible, multi-measurement guided wave damage detection under varying temperatures

    NASA Astrophysics Data System (ADS)

    Douglass, Alexander C. S.; Harley, Joel B.

    2018-04-01

    Temperature compensation in structural health monitoring helps identify damage in a structure by removing data variations due to environmental conditions such as temperature. Stretch-based methods are among the most commonly used temperature compensation methods. To account for variations in temperature, stretch-based methods stretch signals in time to optimally match a measurement to a baseline; all of the data is then compared with the single baseline to determine the presence of damage. Yet, for these methods to be effective, the measurement and the baseline must satisfy the inherent assumptions of the temperature compensation method. In many scenarios, these assumptions are wrong, the methods generate error, and damage detection fails. To improve damage detection, a multi-measurement damage detection method is introduced. By using each measurement in the dataset as a baseline, error caused by imperfect temperature compensation is reduced. The multi-measurement method increases the effectiveness of the damage metric, or damage indicator, over time and reduces the additional peaks caused by temperature that could be mistaken for damage. By using many baselines, the variance of the damage indicator is reduced and the effects of damage are amplified. Notably, the multi-measurement method improves damage detection over single-measurement methods. This is demonstrated through an increase in the maximum of the damage signature from 0.55 to 0.95 (where large values, up to a maximum of one, represent a statistically significant change in the data due to damage).
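
    A simplified sketch of stretch-based compensation with multiple baselines: each new measurement is stretched against every earlier measurement and the smallest residual is kept as the damage indicator. The grid-search stretch, the signals, and the indicator definition are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def best_stretch(baseline, signal, factors):
    """Return the stretch factor that best matches `signal` to `baseline`
    and the residual energy after compensation (simple grid search)."""
    n = len(baseline)
    t = np.arange(n)
    best = (None, np.inf)
    for f in factors:
        stretched = np.interp(t, t * f, signal)        # resample the signal in time
        resid = np.sum((stretched - baseline) ** 2)
        if resid < best[1]:
            best = (f, resid)
    return best

def damage_indicator(history, new_measurement, factors=np.linspace(0.99, 1.01, 41)):
    """Compare the new measurement against every earlier measurement (multi-baseline)
    and keep the smallest compensated residual."""
    residuals = [best_stretch(b, new_measurement, factors)[1] for b in history]
    return min(residuals)

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
history = [np.sin(2 * np.pi * 50 * t * (1 + eps)) for eps in (0, 2e-3, -2e-3)]
new = np.sin(2 * np.pi * 50 * t * 1.001) + 0.01 * rng.normal(size=t.size)
print("damage indicator:", round(damage_indicator(history, new), 4))
```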

  5. Determination of refractive index, size, and concentration of nonabsorbing colloidal nanoparticles from measurements of the complex effective refractive index.

    PubMed

    Márquez-Islas, Roberto; Sánchez-Pérez, Celia; García-Valenzuela, Augusto

    2014-02-01

    We describe a method for obtaining the refractive index (RI), size, and concentration of nonabsorbing nanoparticles in suspension from relatively simple optical measurements. The method requires measuring the complex effective RI of two dilute suspensions of the particles in liquids of different refractive indices. We describe the theoretical basis of the proposed method and provide experimental results validating the procedure.

  6. Study on the application of ambient vibration tests to evaluate the effectiveness of seismic retrofitting

    NASA Astrophysics Data System (ADS)

    Liang, Li; Takaaki, Ohkubo; Guang-hui, Li

    2018-03-01

    In recent years, earthquakes have occurred frequently, and the seismic performance of existing school buildings has become particularly important. The main method for improving the seismic resistance of existing buildings is reinforcement; however, there are few effective methods to evaluate the effect of reinforcement. Ambient vibration measurement experiments were conducted before and after seismic retrofitting using a wireless measurement system, and the changes in vibration characteristics were compared. The changes in acceleration response spectra, natural periods, and vibration modes indicate that the wireless vibration measurement system can be effectively applied to evaluate the effect of seismic retrofitting. The method can evaluate the effect of seismic retrofitting qualitatively; evaluating it quantitatively remains difficult at this stage.

  7. An Orientation Measurement Method Based on Hall-effect Sensors for Permanent Magnet Spherical Actuators with 3D Magnet Array

    PubMed Central

    Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming

    2014-01-01

    An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method can effectively avoid the friction torque and additional inertial moment present in conventional approaches. A curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space; comparison with the conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and the analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy can be improved by the novel 3D magnet array. The study results could be used for real-time motion control of PM spherical actuators. PMID:25342000

  8. Comparing 3D foot scanning with conventional measurement methods.

    PubMed

    Lee, Yu-Chi; Lin, Gloria; Wang, Mao-Jiun J

    2014-01-01

    Foot dimension information on different user groups is important for footwear design and clinical applications. Foot dimension data collected using different measurement methods presents accuracy problems. This study compared the precision and accuracy of the 3D foot scanning method with conventional foot dimension measurement methods including the digital caliper, ink footprint and digital footprint. Six commonly used foot dimensions, i.e. foot length, ball of foot length, outside ball of foot length, foot breadth diagonal, foot breadth horizontal and heel breadth were measured from 130 males and females using four foot measurement methods. Two-way ANOVA was performed to evaluate the sex and method effect on the measured foot dimensions. In addition, the mean absolute difference values and intra-class correlation coefficients (ICCs) were used for precision and accuracy evaluation. The results were also compared with the ISO 20685 criteria. The participant's sex and the measurement method were found (p < 0.05) to exert significant effects on the measured six foot dimensions. The precision of the 3D scanning measurement method with mean absolute difference values between 0.73 to 1.50 mm showed the best performance among the four measurement methods. The 3D scanning measurements showed better measurement accuracy performance than the other methods (mean absolute difference was 0.6 to 4.3 mm), except for measuring outside ball of foot length and foot breadth horizontal. The ICCs for all six foot dimension measurements among the four measurement methods were within the 0.61 to 0.98 range. Overall, the 3D foot scanner is recommended for collecting foot anthropometric data because it has relatively higher precision, accuracy and robustness. This finding suggests that when comparing foot anthropometric data among different references, it is important to consider the differences caused by the different measurement methods.

  9. Effects of Sample Preparation on the Infrared Reflectance Spectra of Powders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brauer, Carolyn S.; Johnson, Timothy J.; Myers, Tanya L.

    2015-05-22

    While reflectance spectroscopy is a useful tool in identifying molecular compounds, laboratory measurement of solid (particularly powder) samples often is confounded by sample preparation methods. For example, both the packing density and surface roughness can have an effect on the quantitative reflectance spectra of powdered samples. Recent efforts in our group have focused on developing standard methods for measuring reflectance spectra that account for sample preparation, as well as other factors such as particle size and provenance. In this work, the effect of preparation method on sample reflectivity was investigated by measuring the directional-hemispherical spectra of samples that were hand-packed as well as pressed into pellets, using an integrating sphere attached to a Fourier transform infrared spectrometer. The results show that the methods used to prepare the sample have a substantial effect on the measured reflectance spectra, as do other factors such as particle size.

  10. Effects of sample preparation on the infrared reflectance spectra of powders

    NASA Astrophysics Data System (ADS)

    Brauer, Carolyn S.; Johnson, Timothy J.; Myers, Tanya L.; Su, Yin-Fong; Blake, Thomas A.; Forland, Brenda M.

    2015-05-01

    While reflectance spectroscopy is a useful tool for identifying molecular compounds, laboratory measurement of solid (particularly powder) samples often is confounded by sample preparation methods. For example, both the packing density and surface roughness can have an effect on the quantitative reflectance spectra of powdered samples. Recent efforts in our group have focused on developing standard methods for measuring reflectance spectra that account for sample preparation, as well as other factors such as particle size and provenance. In this work, the effect of preparation method on sample reflectivity was investigated by measuring the directional-hemispherical spectra of samples that were hand-loaded as well as pressed into pellets, using an integrating sphere attached to a Fourier transform infrared spectrometer. The results show that the methods used to prepare the sample can have a substantial effect on the measured reflectance spectra, as do other factors such as particle size.

  11. Measurement correction method for force sensor used in dynamic pressure calibration based on artificial neural network optimized by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing

    2017-12-01

    We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods commonly used to measure propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are first analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and extend the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through bolt-fastened connections, which causes adverse effects such as additional pretightening and inertia forces. To address these effects, the influence mechanisms of the pretightening force, the inertia force and other factors on the force measurement are theoretically analysed, and a measurement correction method is proposed based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model are discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which meets the requirements of engineering applications.

  12. Mapping Urban Environmental Noise Using Smartphones.

    PubMed

    Zuo, Jinbo; Xia, Hao; Liu, Shuo; Qiao, Yanyou

    2016-10-13

    Noise mapping is an effective method of visualizing and assessing noise pollution. In this paper, a noise-mapping method based on smartphones to effectively and easily measure environmental noise is proposed. By using this method, a noise map of an entire area can be created using limited measurement data. To achieve the measurement with a certain precision, a set of methods was designed to calibrate the smartphones. Measuring noise with mobile phones is different from traditional static observations, as the users may be moving at any time; therefore, a method of attaching an additional microphone with a windscreen is proposed to reduce the wind effect. However, covering an entire area with measurements is impossible, so an interpolation method is needed to achieve full coverage of the area. To reduce the influence of spatial heterogeneity and improve the precision of noise mapping, a region-based noise-mapping method is proposed in this paper, which is based on the distribution of noise in different region types tagged by volunteers, interpolating the regions separately and combining them to create a noise map. To validate the effect of the method, a comparison of the interpolation results was made between our method and the ordinary Kriging method. The result shows that our method is more accurate in reflecting the local distribution of noise and has better interpolation precision. We believe that the proposed noise-mapping method is a feasible and low-cost noise-mapping solution.
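
    A toy sketch of the region-based interpolation idea: samples are interpolated separately within each volunteer-tagged region type and then combined, here using inverse-distance weighting as a stand-in for the interpolator; coordinates, noise levels, and region tags are invented.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse-distance-weighted interpolation (stand-in for Kriging)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)
    w = 1.0 / d**power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Invented measurements: (x, y, dB, region_type) where 0 = road, 1 = park.
pts = np.array([[0, 0, 72, 0], [1, 0, 70, 0], [0, 1, 55, 1], [1, 1, 53, 1]])
query = np.array([[0.5, 0.2], [0.5, 0.9]])
query_region = np.array([0, 1])           # region type of each query cell

# Region-based: only use samples whose region type matches the query cell.
noise = np.empty(len(query))
for r in np.unique(query_region):
    sel_q = query_region == r
    sel_p = pts[:, 3] == r
    noise[sel_q] = idw(pts[sel_p, :2], pts[sel_p, 2], query[sel_q])
print(np.round(noise, 1))
```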

  13. Mapping Urban Environmental Noise Using Smartphones

    PubMed Central

    Zuo, Jinbo; Xia, Hao; Liu, Shuo; Qiao, Yanyou

    2016-01-01

    Noise mapping is an effective method of visualizing and assessing noise pollution. In this paper, a noise-mapping method based on smartphones to effectively and easily measure environmental noise is proposed. By using this method, a noise map of an entire area can be created using limited measurement data. To achieve the measurement with a certain precision, a set of methods was designed to calibrate the smartphones. Measuring noise with mobile phones is different from traditional static observations, as the users may be moving at any time; therefore, a method of attaching an additional microphone with a windscreen is proposed to reduce the wind effect. However, covering an entire area with measurements is impossible, so an interpolation method is needed to achieve full coverage of the area. To reduce the influence of spatial heterogeneity and improve the precision of noise mapping, a region-based noise-mapping method is proposed in this paper, which is based on the distribution of noise in different region types tagged by volunteers, interpolating the regions separately and combining them to create a noise map. To validate the effect of the method, a comparison of the interpolation results was made between our method and the ordinary Kriging method. The result shows that our method is more accurate in reflecting the local distribution of noise and has better interpolation precision. We believe that the proposed noise-mapping method is a feasible and low-cost noise-mapping solution. PMID:27754359

  14. Multivariate meta-analysis of prognostic factor studies with multiple cut-points and/or methods of measurement.

    PubMed

    Riley, Richard D; Elia, Eleni G; Malin, Gemma; Hemming, Karla; Price, Malcolm P

    2015-07-30

    A prognostic factor is any measure that is associated with the risk of future health outcomes in those with existing disease. Often, the prognostic ability of a factor is evaluated in multiple studies. However, meta-analysis is difficult because primary studies often use different methods of measurement and/or different cut-points to dichotomise continuous factors into 'high' and 'low' groups; selective reporting is also common. We illustrate how multivariate random effects meta-analysis models can accommodate multiple prognostic effect estimates from the same study, relating to multiple cut-points and/or methods of measurement. The models account for within-study and between-study correlations, which utilises more information and reduces the impact of unreported cut-points and/or measurement methods in some studies. The applicability of the approach is improved with individual participant data and by assuming a functional relationship between prognostic effect and cut-point to reduce the number of unknown parameters. The models provide important inferential results for each cut-point and method of measurement, including the summary prognostic effect, the between-study variance and a 95% prediction interval for the prognostic effect in new populations. Two applications are presented. The first reveals that, in a multivariate meta-analysis using published results, the Apgar score is prognostic of neonatal mortality but effect sizes are smaller at most cut-points than previously thought. In the second, a multivariate meta-analysis of two methods of measurement provides weak evidence that microvessel density is prognostic of mortality in lung cancer, even when individual participant data are available so that a continuous prognostic trend is examined (rather than cut-points). © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  15. Tilt measurement using inclinometer based on redundant configuration of MEMS accelerometers

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Liu, Xuecong; Zhang, Hao

    2018-05-01

    Inclinometers are widely used in tilt measurement and their required accuracy is becoming ever higher. Most existing methods can work effectively only when the tilt is less than 60°, and their accuracy can still be improved. A redundant configuration of micro-electro-mechanical system accelerometers is proposed in this paper, and a least squares method and data processing normalization are used. A rigorous mathematical derivation is given, and simulation and experiment are used to verify its feasibility. The results of a Monte Carlo simulation repeated 3000 times and turntable reference experiments show that the tilt measurement range can be expanded to 0°–90° by this method, that the measurement accuracy of θ can be improved by more than 10 times, and that the measurement accuracy of γ can also be improved effectively. The proposed method is shown to be effective and significant in practical application.
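
    A sketch of the least-squares idea: with more than three accelerometer axes of known orientation, the gravity vector is recovered from an overdetermined system and the tilt angles follow from its components. The sensor geometry, angle definitions, and noise level below are assumptions, not the paper's configuration.

```python
import numpy as np

# Sensitivity matrix: each row is the unit direction of one accelerometer axis
# in the body frame (redundant 6-axis configuration, invented geometry).
H = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [0.577, 0.577, 0.577], [-0.577, 0.577, 0.577], [0.577, -0.577, 0.577]])

# Simulated static readings for a tilt of theta = 70 deg, gamma = 10 deg.
theta, gamma = np.deg2rad(70.0), np.deg2rad(10.0)
g_body = 9.81 * np.array([np.sin(theta),
                          -np.cos(theta) * np.sin(gamma),
                          -np.cos(theta) * np.cos(gamma)])
y = H @ g_body + 0.02 * np.random.randn(6)          # readings with sensor noise

# Least-squares estimate of gravity in the body frame, then the tilt angles.
g_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
g_hat /= np.linalg.norm(g_hat)                       # normalization step
theta_hat = np.arcsin(np.clip(g_hat[0], -1, 1))
gamma_hat = np.arctan2(-g_hat[1], -g_hat[2])
print(np.rad2deg([theta_hat, gamma_hat]))
```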

  16. Using spectral methods to obtain particle size information from optical data: applications to measurements from CARES 2010

    NASA Astrophysics Data System (ADS)

    Atkinson, Dean B.; Pekour, Mikhail; Chand, Duli; Radney, James G.; Kolesar, Katheryn R.; Zhang, Qi; Setyan, Ari; O'Neill, Norman T.; Cappa, Christopher D.

    2018-04-01

    Multi-wavelength in situ aerosol extinction, absorption and scattering measurements made at two ground sites during the 2010 Carbonaceous Aerosols and Radiative Effects Study (CARES) are analyzed using a spectral deconvolution method that allows extraction of particle-size-related information, including the fraction of extinction produced by the fine-mode particles and the effective radius of the fine mode. The spectral deconvolution method is typically applied to analysis of remote sensing measurements. Here, its application to in situ measurements allows for comparison with more direct measurement methods and validation of the retrieval approach. Overall, the retrieved fine-mode fraction and effective radius compare well with other in situ measurements, including size distribution measurements and scattering and absorption measurements made separately for PM1 and PM10, although there were some periods during which the different methods yielded different results. One key contributor to differences between the results obtained is the alternative, spectrally based definitions of fine and coarse modes from the optical methods, relative to instruments that use a physically defined cut point. These results indicate that for campaigns where size, composition and multi-wavelength optical property measurements are made, comparison of the results can result in closure or can identify unusual circumstances. The comparison here also demonstrates that in situ multi-wavelength optical property measurements can be used to determine information about particle size distributions in situations where direct size distribution measurements are not available.

  17. Using spectral methods to obtain particle size information from optical data: applications to measurements from CARES 2010

    DOE PAGES

    Atkinson, Dean B.; Pekour, Mikhail; Chand, Duli; ...

    2018-04-23

    Here, multi-wavelength in situ aerosol extinction, absorption and scattering measurements made at two ground sites during the 2010 Carbonaceous Aerosols and Radiative Effects Study (CARES) are analyzed using a spectral deconvolution method that allows extraction of particle-size-related information, including the fraction of extinction produced by the fine-mode particles and the effective radius of the fine mode. The spectral deconvolution method is typically applied to analysis of remote sensing measurements. Here, its application to in situ measurements allows for comparison with more direct measurement methods and validation of the retrieval approach. Overall, the retrieved fine-mode fraction and effective radius compare well with other in situ measurements, including size distribution measurements and scattering and absorption measurements made separately for PM1 and PM10, although there were some periods during which the different methods yielded different results. One key contributor to differences between the results obtained is the alternative, spectrally based definitions of fine and coarse modes from the optical methods, relative to instruments that use a physically defined cut point. These results indicate that for campaigns where size, composition and multi-wavelength optical property measurements are made, comparison of the results can result in closure or can identify unusual circumstances. The comparison here also demonstrates that in situ multi-wavelength optical property measurements can be used to determine information about particle size distributions in situations where direct size distribution measurements are not available.

  18. Using spectral methods to obtain particle size information from optical data: applications to measurements from CARES 2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atkinson, Dean B.; Pekour, Mikhail; Chand, Duli

    Here, multi-wavelength in situ aerosol extinction, absorption and scattering measurements made at two ground sites during the 2010 Carbonaceous Aerosols and Radiative Effects Study (CARES) are analyzed using a spectral deconvolution method that allows extraction of particle-size-related information, including the fraction of extinction produced by the fine-mode particles and the effective radius of the fine mode. The spectral deconvolution method is typically applied to analysis of remote sensing measurements. Here, its application to in situ measurements allows for comparison with more direct measurement methods and validation of the retrieval approach. Overall, the retrieved fine-mode fraction and effective radius compare well with other in situ measurements, including size distribution measurements and scattering and absorption measurements made separately for PM1 and PM10, although there were some periods during which the different methods yielded different results. One key contributor to differences between the results obtained is the alternative, spectrally based definitions of fine and coarse modes from the optical methods, relative to instruments that use a physically defined cut point. These results indicate that for campaigns where size, composition and multi-wavelength optical property measurements are made, comparison of the results can result in closure or can identify unusual circumstances. The comparison here also demonstrates that in situ multi-wavelength optical property measurements can be used to determine information about particle size distributions in situations where direct size distribution measurements are not available.

  19. Using spectral methods to obtain particle size information from optical data: applications to measurements from CARES 2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atkinson, Dean B.; Pekour, Mikhail; Chand, Duli

    Multi-wavelength in situ aerosol extinction, absorption and scattering measurements made at two ground sites during the 2010 Carbonaceous Aerosols and Radiative Effects Study (CARES) are analyzed using a spectral deconvolution method that allows extraction of particle-size-related information, including the fraction of extinction produced by the fine-mode particles and the effective radius of the fine mode. The spectral deconvolution method is typically applied to analysis of remote sensing measurements. Here, its application to in situ measurements allows for comparison with more direct measurement methods and validation of the retrieval approach. Overall, the retrieved fine-mode fraction and effective radius compare well with other in situ measurements, including size distribution measurements and scattering and absorption measurements made separately for PM1 and PM10, although there were some periods during which the different methods yielded different results. One key contributor to differences between the results obtained is the alternative, spectrally based definitions of fine and coarse modes from the optical methods, relative to instruments that use a physically defined cut point. These results indicate that for campaigns where size, composition and multi-wavelength optical property measurements are made, comparison of the results can result in closure or can identify unusual circumstances. The comparison here also demonstrates that in situ multi-wavelength optical property measurements can be used to determine information about particle size distributions in situations where direct size distribution measurements are not available.
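
    As an illustration only (not taken from the paper), a minimal Python sketch of the basic spectral quantity such retrievals build on, the extinction Angstrom exponent between two wavelengths; the full spectral deconvolution of fine-mode fraction and effective radius is considerably more involved.

      import numpy as np

      def angstrom_exponent(ext_1, ext_2, wavelength_1_nm, wavelength_2_nm):
          """Extinction Angstrom exponent between two wavelengths.

          alpha = -ln(ext_1 / ext_2) / ln(lambda_1 / lambda_2); a larger alpha means a
          steeper spectral slope, i.e. a larger contribution from fine-mode particles.
          """
          return -np.log(ext_1 / ext_2) / np.log(wavelength_1_nm / wavelength_2_nm)

      # Example with in-situ-style extinction coefficients (Mm^-1) at 467 and 660 nm.
      print(angstrom_exponent(55.0, 30.0, 467.0, 660.0))   # ~1.75, fine-mode dominated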

  20. Measurement of pattern roughness and local size variation using CD-SEM: current status

    NASA Astrophysics Data System (ADS)

    Fukuda, Hiroshi; Kawasaki, Takahiro; Kawada, Hiroki; Sakai, Kei; Kato, Takashi; Yamaguchi, Satoru; Ikota, Masami; Momonoi, Yoshinori

    2018-03-01

    Measurement of line edge roughness (LER) is discussed from four aspects: edge detection, PSD prediction, sampling strategy, and noise mitigation, and general guidelines and practical solutions for present-day LER measurement are introduced. Advanced edge detection algorithms such as the wave-matching method are shown to be effective for robustly detecting edges in low-SNR images, while a conventional algorithm with weak filtering is still effective in suppressing SEM noise and aliasing. Advanced PSD prediction methods such as the multi-taper method are effective in suppressing sampling noise within an analyzed line edge, while a sufficient number of lines is still required for suppressing line-to-line variation. Two types of SEM noise mitigation methods, the "apparent noise floor" subtraction method and LER-noise decomposition using regression analysis, are verified to successfully remove SEM noise from PSD curves. These results are extended to LCDU measurement to clarify the impact of SEM noise and sampling noise on LCDU.
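
    A hedged sketch of the "apparent noise floor" subtraction idea described above, assuming a flat SEM-noise floor that can be estimated from the high-frequency tail of the edge-position PSD; the estimator actually used in the paper may differ.

      import numpy as np
      from scipy.signal import periodogram

      def unbiased_ler_3sigma(edge_positions_nm, pixel_size_nm, noise_tail_fraction=0.2):
          # 3-sigma LER with a simple "apparent noise floor" subtraction: the flat
          # SEM-noise level is estimated from the high-frequency tail of the
          # edge-position PSD, subtracted, and the remaining power integrated back.
          x = np.asarray(edge_positions_nm, dtype=float)
          f, psd = periodogram(x - x.mean(), fs=1.0 / pixel_size_nm)
          n_tail = max(1, int(noise_tail_fraction * psd.size))
          noise_floor = np.median(psd[-n_tail:])            # flat noise-floor estimate
          psd_corrected = np.clip(psd - noise_floor, 0.0, None)
          variance = np.sum(psd_corrected) * (f[1] - f[0])  # integrate PSD back to a variance
          return 3.0 * np.sqrt(variance)

      rng = np.random.default_rng(0)
      true_edge = np.cumsum(rng.normal(0.0, 0.05, 1024))    # correlated "real" roughness
      measured = true_edge + rng.normal(0.0, 0.5, 1024)     # plus white SEM noise
      print(unbiased_ler_3sigma(measured, pixel_size_nm=1.0))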

  1. Calibration-independent measurement of complex permittivity of liquids using a coaxial transmission line

    NASA Astrophysics Data System (ADS)

    Guoxin, Cheng

    2015-01-01

    In recent years, several calibration-independent transmission/reflection methods have been developed to determine the complex permittivity of liquid materials. However, these methods have their own respective drawbacks, such as the requirement of multiple measurement cells or the presence of air-gap effects. To eliminate these drawbacks, a fast calibration-independent method is proposed in this paper. There are two main advantages of the present method over those in the literature. First, only one measurement cell is required. The cell is measured when it is empty and when it is filled with liquid. This avoids the air-gap effect present in approaches in which a structure with two reference ports connected to each other needs to be measured. Second, it eliminates the effects of uncalibrated coaxial cables, adaptors, and plug sections; systematic errors caused by the experimental setup are avoided through wave cascading matrix manipulations. Using this method, three dielectric reference liquids, i.e., ethanol, ethanediol, and pure water, as well as low-loss transformer oil, are measured over a wide frequency range to validate the proposed method. The accuracy is assessed by comparing the results with those obtained from other well-known techniques. It is demonstrated that the proposed method can be used as a robust approach for fast complex permittivity determination of liquid materials.

  2. [A New Distance Metric between Different Stellar Spectra: the Residual Distribution Distance].

    PubMed

    Liu, Jie; Pan, Jing-chang; Luo, A-li; Wei, Peng; Liu, Meng

    2015-12-01

    A distance metric is an important issue for spectroscopic survey data processing; it defines how the distance between two different spectra is calculated. On this basis, classification, clustering, parameter measurement, and outlier mining of spectral data can be carried out, so the distance measurement method affects the performance of all of these tasks. With the development of large-scale stellar spectral sky surveys, how to define a more efficient distance metric on stellar spectra has become a very important issue in spectral data processing. To address this problem, and taking full account of the characteristics and data features of stellar spectra, a new distance measurement method for stellar spectra, named the Residual Distribution Distance, is proposed. To measure the distance with this method, the two spectra are first scaled and the standard deviation of the residual is then used as the distance. That is, different from traditional distance metrics for stellar spectra, the method normalizes the two spectra to the same scale, calculates the residual at each common wavelength, and uses the standard deviation of the residual spectrum as the distance measure. The distance measurement method can be used for stellar classification, clustering, measurement of stellar atmospheric physical parameters, and so on. This paper takes stellar subclass classification as an example to test the distance measure. The results show that the distance defined by the proposed method describes the gap between different types of spectra in the classification more effectively than other methods, and that it can be well applied in other related applications. The paper also studies the effect of the signal-to-noise ratio (SNR) on the performance of the proposed method. The results show that the distance is affected by the SNR: the smaller the SNR, the greater its impact on the distance, while for SNR larger than 10 the SNR has little effect on classification performance.
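
    A minimal sketch of the residual-distribution distance as described above, assuming both spectra are sampled on the same wavelength grid; median normalization is used as the scaling step, which is an assumption since the abstract does not specify the scaling function.

      import numpy as np

      def residual_distribution_distance(flux_a, flux_b):
          """Distance between two spectra sampled on the same wavelength grid.

          Both spectra are first scaled to a common level (median normalization is
          assumed here), then the per-wavelength residual is formed and its
          standard deviation is returned as the distance.
          """
          a = np.asarray(flux_a, dtype=float)
          b = np.asarray(flux_b, dtype=float)
          a_scaled = a / np.median(a)     # normalize both spectra to the same scale
          b_scaled = b / np.median(b)
          residual = a_scaled - b_scaled
          return float(np.std(residual))

      # Example: a noisy copy of the same synthetic spectrum is close, a spectrally
      # different one is farther away.
      wavelength = np.linspace(4000, 9000, 1000)
      base = 1.0 + 0.3 * np.sin(wavelength / 500.0)
      rng = np.random.default_rng(0)
      print(residual_distribution_distance(base, base + rng.normal(0, 0.01, base.size)))
      print(residual_distribution_distance(base, 2.0 + 0.3 * np.cos(wavelength / 500.0)))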

  3. A novel method for effective diffusion coefficient measurement in gas diffusion media of polymer electrolyte fuel cells

    NASA Astrophysics Data System (ADS)

    Yang, Linlin; Sun, Hai; Fu, Xudong; Wang, Suli; Jiang, Luhua; Sun, Gongquan

    2014-07-01

    A novel method for measuring effective diffusion coefficient of porous materials is developed. The oxygen concentration gradient is established by an air-breathing proton exchange membrane fuel cell (PEMFC). The porous sample is set in a sample holder located in the cathode plate of the PEMFC. At a given oxygen flux, the effective diffusion coefficients are related to the difference of oxygen concentration across the samples, which can be correlated with the differences of the output voltage of the PEMFC with and without inserting the sample in the cathode plate. Compared to the conventional electrical conductivity method, this method is more reliable for measuring non-wetting samples.

  4. 12 CFR Appendix B to Part 741 - Guidance for an Interest Rate Risk Policy and an Effective Program

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Measurement Methods C. Components of IRR Measurement Methods V. Internal Controls VI. Decision-Making Informed... effective IRR management program identifies, measures, monitors, and controls IRR and is central to safe and... critical to the control of IRR exposure. All FICUs required to have an IRR policy and program should...

  5. 12 CFR Appendix B to Part 741 - Guidance for an Interest Rate Risk Policy and an Effective Program

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Measurement Methods C. Components of IRR Measurement Methods V. Internal Controls VI. Decision-Making Informed... effective IRR management program identifies, measures, monitors, and controls IRR and is central to safe and... critical to the control of IRR exposure. All FICUs required to have an IRR policy and program should...

  6. The Comparison of Matching Methods Using Different Measures of Balance: Benefits and Risks Exemplified within a Study to Evaluate the Effects of German Disease Management Programs on Long-Term Outcomes of Patients with Type 2 Diabetes.

    PubMed

    Fullerton, Birgit; Pöhlmann, Boris; Krohn, Robert; Adams, John L; Gerlach, Ferdinand M; Erler, Antje

    2016-10-01

    To present a case study on how to compare various matching methods applying different measures of balance and to point out some pitfalls involved in relying on such measures. Administrative claims data from a German statutory health insurance fund covering the years 2004-2008. We applied three different covariate balance diagnostics to a choice of 12 different matching methods used to evaluate the effectiveness of the German disease management program for type 2 diabetes (DMPDM2). We further compared the effect estimates resulting from applying these different matching techniques in the evaluation of the DMPDM2. The choice of balance measure leads to different conclusions about the performance of the applied matching methods. Exact matching methods performed well across all measures of balance, but resulted in the exclusion of many observations, changing the baseline characteristics of the study sample and also the effect estimate of the DMPDM2. All PS-based methods showed similar effect estimates. Applying a higher matching ratio and using a larger variable set generally resulted in better balance. Using a generalized boosted model instead of a logistic regression model showed slightly better performance for balance diagnostics that take into account imbalances at higher moments. Best practice should include the application of several matching methods and thorough balance diagnostics. Applying matching techniques can provide a useful preprocessing step to reveal areas of the data that lack common support. The use of different balance diagnostics can be helpful for the interpretation of the different effect estimates found with different matching methods. © Health Research and Educational Trust.
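
    For illustration, a sketch of one common covariate balance diagnostic, the standardized mean difference; the specific diagnostics used in the study are not reproduced here.

      import numpy as np

      def standardized_mean_difference(x_treated, x_control):
          """Absolute standardized mean difference for one covariate.

          A common covariate-balance diagnostic: the difference in group means
          divided by the pooled standard deviation. Values near zero indicate
          good balance after matching or weighting.
          """
          x_t = np.asarray(x_treated, dtype=float)
          x_c = np.asarray(x_control, dtype=float)
          pooled_sd = np.sqrt((x_t.var(ddof=1) + x_c.var(ddof=1)) / 2.0)
          return abs(x_t.mean() - x_c.mean()) / pooled_sd

      rng = np.random.default_rng(1)
      print(standardized_mean_difference(rng.normal(0.2, 1, 500), rng.normal(0.0, 1, 500)))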

  7. A method for sensitivity analysis to assess the effects of measurement error in multiple exposure variables using external validation data.

    PubMed

    Agogo, George O; van der Voet, Hilko; van 't Veer, Pieter; Ferrari, Pietro; Muller, David C; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A; Boshuizen, Hendriek C

    2016-10-13

    Measurement error in self-reported dietary intakes is known to bias the association between dietary intake and a health outcome of interest such as risk of a disease. The association can be distorted further by mismeasured confounders, leading to invalid results and conclusions. It is, however, difficult to adjust for the bias in the association when there is no internal validation data. We proposed a method to adjust for the bias in the diet-disease association (hereafter, association), due to measurement error in dietary intake and a mismeasured confounder, when there is no internal validation data. The method combines prior information on the validity of the self-report instrument with the observed data to adjust for the bias in the association. We compared the proposed method with the method that ignores the confounder effect, and with the method that ignores measurement errors completely. We assessed the sensitivity of the estimates to various magnitudes of measurement error, error correlations and uncertainty in the literature-reported validation data. We applied the methods to fruits and vegetables (FV) intakes, cigarette smoking (confounder) and all-cause mortality data from the European Prospective Investigation into Cancer and Nutrition study. Using the proposed method resulted in about four times increase in the strength of association between FV intake and mortality. For weakly correlated errors, measurement error in the confounder minimally affected the hazard ratio estimate for FV intake. The effect was more pronounced for strong error correlations. The proposed method permits sensitivity analysis on measurement error structures and accounts for uncertainties in the reported validity coefficients. The method is useful in assessing the direction and quantifying the magnitude of bias in the association due to measurement errors in the confounders.

  8. Prognostic score–based balance measures for propensity score methods in comparative effectiveness research

    PubMed Central

    Stuart, Elizabeth A.; Lee, Brian K.; Leacy, Finbarr P.

    2013-01-01

    Objective Examining covariate balance is the prescribed method for determining when propensity score methods are successful at reducing bias. This study assessed the performance of various balance measures, including a proposed balance measure based on the prognostic score (also known as the disease-risk score), to determine which balance measures best correlate with bias in the treatment effect estimate. Study Design and Setting The correlations of multiple common balance measures with bias in the treatment effect estimate produced by weighting by the odds, subclassification on the propensity score, and full matching on the propensity score were calculated. Simulated data were used, based on realistic data settings. Settings included both continuous and binary covariates and continuous covariates only. Results The standardized mean difference in prognostic scores, the mean standardized mean difference, and the mean t-statistic all had high correlations with bias in the effect estimate. Overall, prognostic scores displayed the highest correlations of all the balance measures considered. Prognostic score measure performance was generally not affected by model misspecification and performed well under a variety of scenarios. Conclusion Researchers should consider using prognostic score–based balance measures for assessing the performance of propensity score methods for reducing bias in non-experimental studies. PMID:23849158

  9. Multiobjective decision-making in integrated water management

    NASA Astrophysics Data System (ADS)

    Pouwels, I. H. M.; Wind, H. G.; Witter, V. J.

    1995-08-01

    Traditionally, decision-making by water authorities in the Netherlands is largely based on intuition. Their tasks were, after all, relatively few and straight-forward. The growing number of tasks, together with the new integrated approach on water management issues, however, induces water authorities to rationalise their decision process. In order to choose the most effective water management measures, the external effects of these measures need to be taken into account. Therefore, methods have been developed to incorporate these effects in the decision-making phase. Using analytical evaluation methods, the effects of various measures on the water system (physical and chemical quality, ecology and quantity) can be taken into consideration. In this manner a more cognitive way of choosing between alternative measures can be obtained. This paper describes an application of such a decision method on a river basin scale. Main topics, in this paper, are the extent to which uncertainties (in technical information and deficiencies in the techniques applied) limit the usefulness of these methods, and also the question whether these techniques can really be used to select measures that give maximum environmental benefit for minimum cost. It is shown that the influence of these restrictions on the validity of the outcome of the decision methods can be profound. Using these results, improvement of the methods can be realised.

  10. Comparing Hall Effect and Field Effect Measurements on the Same Single Nanowire.

    PubMed

    Hultin, Olof; Otnes, Gaute; Borgström, Magnus T; Björk, Mikael; Samuelson, Lars; Storm, Kristian

    2016-01-13

    We compare and discuss the two most commonly used electrical characterization techniques for nanowires (NWs). In a novel single-NW device, we combine Hall effect and back-gated and top-gated field effect measurements and quantify the carrier concentrations in a series of sulfur-doped InP NWs. The carrier concentrations from Hall effect and field effect measurements are found to correlate well when using the analysis methods described in this work. This shows that NWs can be accurately characterized with available electrical methods, an important result toward better understanding of semiconductor NW doping.

  11. Compensation of Verdet Constant Temperature Dependence by Crystal Core Temperature Measurement

    PubMed Central

    Petricevic, Slobodan J.; Mihailovic, Pedja M.

    2016-01-01

    Compensation of the temperature dependence of the Verdet constant in a polarimetric extrinsic Faraday sensor is of major importance for applying the magneto-optical effect to AC current measurements and magnetic field sensing. This paper presents a method for compensating the temperature effect on the Faraday rotation in a Bi12GeO20 crystal by sensing its optical activity effect on the polarization of a light beam. The method measures the temperature of the same volume of crystal that effects the beam polarization in a magnetic field or current sensing process. This eliminates the effect of temperature difference found in other indirect temperature compensation methods, thus allowing more accurate temperature compensation for the temperature dependence of the Verdet constant. The method does not require additional changes to an existing Δ/Σ configuration and is thus applicable for improving the performance of existing sensing devices. PMID:27706043

  12. An orthogonal return method for linearly polarized beam based on the Faraday effect and its application in interferometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Benyong, E-mail: chenby@zstu.edu.cn; Zhang, Enzheng; Yan, Liping

    2014-10-15

    Correct return of the measuring beam is essential for laser interferometers to carry out measurement. In practice, because the measured object inevitably rotates or moves laterally, the measurement accuracy decreases, or the measurement may even become impossible. To solve this problem, a novel orthogonal return method for a linearly polarized beam based on the Faraday effect is presented. The orthogonal return of the incident linearly polarized beam is realized by using a Faraday rotator with a rotation angle of 45°. The optical configuration of the method is designed and analyzed in detail. To verify its practicability in polarization interferometry, a laser heterodyne interferometer based on this method was constructed and precision displacement measurement experiments were performed. The results show that the advantage of the method is that the correct return of the incident measuring beam, and hence the interferometric measurement itself, is ensured even when large lateral displacement or angular rotation of the measured object occurs.

  13. Technique for active measurement of atmospheric transmittance using an imaging system: implementation at 10.6-μm wavelength

    NASA Astrophysics Data System (ADS)

    Sadot, Dan; Zaarur, O.; Zaarur, S.; Kopeika, Norman S.

    1994-10-01

    An active method is presented for measuring atmospheric transmittance with an imaging system. In comparison to other measurement methods, this method has the advantage of immunity to background noise, independence of atmospheric conditions such as solar radiation, and an improved capability to evaluate effects of turbulence on the measurements. Other significant advantages are integration over all particulate size distribution effects including very small and very large particulates whose concentration is hard to measure, and the fact that this method is a path-integrated measurement. In this implementation attenuation deriving from molecular absorption and from small and large particulate scatter and absorption and their weather dependences are separated out. Preliminary results indicate high correlation with direct transmittance calculations via particle size distribution measurement, and that even at 10.6 micrometers wavelength atmospheric transmission depends noticeably on aerosol size distribution and concentration.

  14. A technique for active measurement of atmospheric transmittance using an imaging system: implementation at 10.6 μm wavelength

    NASA Astrophysics Data System (ADS)

    Sadot, D.; Zaarur, O.; Zaarur, S.

    1995-12-01

    An active method is presented for measuring atmospheric transmittance with an imaging system. In comparison to other measurement methods, this method has the advantage of immunity to background noise, independence of atmospheric conditions such as solar radiation, and an improved capability to evaluate effects of turbulence on the measurements. Other significant advantages are integration over all particulate size distribution effects including very small and very large particulates whose concentration is hard to measure, and the fact that this method is a path-integrated measurement. Attenuation deriving from molecular absorption and from small and large particulate scatter and absorption and their weather dependences are separated out. Preliminary results indicate high correlation with direct transmittance calculations via particle size distribution measurement, and that even at 10.6 μm wavelength atmospheric transmission depends noticeably on aerosol size distribution and concentration.

  15. Gate frequency sweep: An effective method to evaluate the dynamic performance of AlGaN/GaN power heterojunction field effect transistors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santi, C. de; Meneghini, M., E-mail: matteo.meneghini@dei.unipd.it; Meneghesso, G.

    2014-08-18

    With this paper we propose a test method for evaluating the dynamic performance of GaN-based transistors, namely, gate-frequency sweep measurements: the effectiveness of the method is verified by characterizing the dynamic performance of Gate Injection Transistors. We demonstrate that this method can provide an effective description of the impact of traps on the transient performance of Heterojunction Field Effect Transistors, and information on the properties (activation energy and cross section) of the related defects. Moreover, we discuss the relation between the results obtained by gate-frequency sweep measurements and those collected by conventional drain current transients and double pulse characterization.

  16. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    PubMed

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be applied later, while methodological advances are needed in the multi-pollutant setting.
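
    A hedged illustration of the simplest of the corrections listed above, regression calibration under a classical measurement error model with a linear health model; in practice the attenuation factor is estimated from validation or replicate data rather than known.

      import numpy as np

      # Minimal sketch of regression calibration under classical measurement error:
      # observed exposure W = X + U, with U independent of X. The naive slope from
      # regressing the outcome on W is attenuated by lambda = var(X) / var(W);
      # dividing by lambda approximately recovers the slope with respect to X.
      rng = np.random.default_rng(3)
      n = 5000
      x = rng.normal(10.0, 2.0, n)               # true exposure (e.g., modeled pollutant)
      u = rng.normal(0.0, 1.5, n)                # classical measurement error
      w = x + u                                  # error-prone exposure actually used
      y = 0.5 * x + rng.normal(0.0, 1.0, n)      # health outcome, true slope 0.5

      beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
      lam = np.var(x, ddof=1) / np.var(w, ddof=1)   # here known; normally estimated
      beta_corrected = beta_naive / lam
      print(beta_naive, beta_corrected)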

  17. Internal Stress Monitoring of In-Service Structural Steel Members with Ultrasonic Method

    PubMed Central

    Li, Zuohua; He, Jingbo; Teng, Jun; Wang, Ying

    2016-01-01

    Internal stress in structural steel members is an important parameter for steel structures in their design, construction, and service stages. However, it is hard to measure via traditional approaches. Among the existing non-destructive testing (NDT) methods, the ultrasonic method has received the most research attention. Longitudinal critically refracted (Lcr) waves, which propagate parallel to the surface of the material within an effective depth, have shown great potential as an effective stress measurement approach. This paper presents a systematic non-destructive evaluation method to determine the internal stress in in-service structural steel members using Lcr waves. Based on the theory of acoustoelasticity, a stress evaluation formula is derived. A factor relating stress to the acoustic time difference is used to describe the relationship between stress and the measurable acoustic results. A testing facility is developed and used to demonstrate the performance of the proposed method. Two steel members are measured by using the proposed method and the traditional strain gauge method for verification. Parametric studies are performed on three steel members and an aluminum plate to investigate the factors that influence the testing results. The results show that the proposed method is effective and accurate for determining stress in in-service structural steel members. PMID:28773347

  18. Internal Stress Monitoring of In-Service Structural Steel Members with Ultrasonic Method.

    PubMed

    Li, Zuohua; He, Jingbo; Teng, Jun; Wang, Ying

    2016-03-23

    Internal stress in structural steel members is an important parameter for steel structures in their design, construction, and service stages. However, it is hard to measure via traditional approaches. Among the existing non-destructive testing (NDT) methods, the ultrasonic method has received the most research attention. Longitudinal critically refracted (Lcr) waves, which propagate parallel to the surface of the material within an effective depth, have shown great potential as an effective stress measurement approach. This paper presents a systematic non-destructive evaluation method to determine the internal stress in in-service structural steel members using Lcr waves. Based on the theory of acoustoelasticity, a stress evaluation formula is derived. A factor relating stress to the acoustic time difference is used to describe the relationship between stress and the measurable acoustic results. A testing facility is developed and used to demonstrate the performance of the proposed method. Two steel members are measured by using the proposed method and the traditional strain gauge method for verification. Parametric studies are performed on three steel members and an aluminum plate to investigate the factors that influence the testing results. The results show that the proposed method is effective and accurate for determining stress in in-service structural steel members.

  19. Simplifying Nanowire Hall Effect Characterization by Using a Three-Probe Device Design.

    PubMed

    Hultin, Olof; Otnes, Gaute; Samuelson, Lars; Storm, Kristian

    2017-02-08

    Electrical characterization of nanowires is a time-consuming and challenging task due to the complexity of single nanowire device fabrication and the difficulty in interpreting the measurements. We present a method to measure Hall effect in nanowires using a three-probe device that is simpler to fabricate than previous four-probe nanowire Hall devices and allows characterization of nanowires with smaller diameter. Extraction of charge carrier concentration from the three-probe measurements using an analytical model is discussed and compared to simulations. The validity of the method is experimentally verified by a comparison between results obtained with the three-probe method and results obtained using four-probe nanowire Hall measurements. In addition, a nanowire with a diameter of only 65 nm is characterized to demonstrate the capabilities of the method. The three-probe Hall effect method offers a relatively fast and simple, yet accurate way to quantify the charge carrier concentration in nanowires and has the potential to become a standard characterization technique for nanowires.

  20. Remote atmospheric probing by ground to ground line of sight optical methods

    NASA Technical Reports Server (NTRS)

    Lawrence, R. S.

    1969-01-01

    The optical effects arising from refractive-index variations in the clear air are qualitatively described, and the possibilities are discussed of using those effects for remotely sensing the physical properties of the atmosphere. The effects include scintillations, path length fluctuations, spreading of a laser beam, deflection of the beam, and depolarization. The physical properties that may be measured include the average temperature along the path, the vertical temperature gradient, and the distribution along the path of the strength of turbulence and the transverse wind velocity. Line-of-sight laser beam methods are clearly effective in measuring the average properties, but less effective in measuring distributions along the path. Fundamental limitations to the resolution are pointed out and experiments are recommended to investigate the practicality of the methods.

  1. Theoretical and numerical evaluation of polarimeter using counter-circularly-polarized-probing-laser under the coupling between Faraday and Cotton-Mouton effect.

    PubMed

    Imazawa, Ryota; Kawano, Yasunori; Itami, Kiyoshi

    2016-04-01

    This study evaluated the effect of the coupling between the Faraday and Cotton-Mouton effects on the measurement signal of the Dodel-Kunz method, which uses counter-circularly-polarized probing lasers to measure the Faraday effect. When the coupling is small (the Faraday effect is dominant and the characteristic eigenmodes are approximately circularly polarized), the measurement signal can be expressed algebraically, and it is shown that the finite effect of the coupling is still significant. When the Faraday effect is not dominant, a numerical calculation is necessary. The numerical calculation under an ITER-like condition (Bt = 5.3 T, Ip = 15 MA, a = 2 m, ne = 10^20 m^-3 and λ = 119 μm) showed that the difference between the pure Faraday rotation and the measurement signal of the Dodel-Kunz method was on the order of one degree, which exceeds the allowable error of the ITER poloidal polarimeter. In conclusion, similar to other polarimeter techniques, the Dodel-Kunz method is not free from the coupling between the Faraday and Cotton-Mouton effects.

  2. An Information Transmission Measure for the Analysis of Effective Connectivity among Cortical Neurons

    PubMed Central

    Law, Andrew J.; Sharma, Gaurav; Schieber, Marc H.

    2014-01-01

    We present a methodology for detecting effective connections between simultaneously recorded neurons using an information transmission measure to identify the presence and direction of information flow from one neuron to another. Using simulated and experimentally-measured data, we evaluate the performance of our proposed method and compare it to the traditional transfer entropy approach. In simulations, our measure of information transmission outperforms transfer entropy in identifying the effective connectivity structure of a neuron ensemble. For experimentally recorded data, where ground truth is unavailable, the proposed method also yields a more plausible connectivity structure than transfer entropy. PMID:21096617

  3. Dynamic gas temperature measurement system

    NASA Technical Reports Server (NTRS)

    Elmore, D. L.; Robinson, W. W.; Watkins, W. B.

    1983-01-01

    A gas temperature measurement system with a compensated frequency response of 1 kHz and the capability to operate in the exhaust of a gas turbine combustor was developed. Environmental guidelines for this measurement are presented, followed by a preliminary design of the selected measurement method. Transient thermal conduction effects were identified as important; a preliminary finite-element conduction model quantified the errors expected by neglecting conduction. A compensation method was developed to account for effects of conduction and convection. This method was verified in analog electrical simulations, and used to compensate dynamic temperature data from a laboratory combustor and a gas turbine engine. Detailed data compensations are presented. An analysis of error sources in the method was performed to derive confidence levels for the compensated data.

  4. [Analysis of variance of repeated data measured by water maze with SPSS].

    PubMed

    Qiu, Hong; Jin, Guo-qin; Jin, Ru-feng; Zhao, Wei-kang

    2007-01-01

    To introduce the method of analyzing repeated-measures data from water maze experiments with SPSS 11.0, and to offer a reference statistical method for clinical and basic medicine researchers who use repeated-measures designs. The repeated-measures and multivariate analysis of variance (ANOVA) procedures of the general linear model in SPSS are used, with pairwise comparisons among different groups and different measurement times. Firstly, Mauchly's test of sphericity should be used to judge whether there are correlations among the repeatedly measured data. If any (P

  5. Modulation infrared thermometry of caloric effects at up to kHz frequencies

    NASA Astrophysics Data System (ADS)

    Döntgen, Jago; Rudolph, Jörg; Waske, Anja; Hägele, Daniel

    2018-03-01

    We present a novel non-contact method for the direct measurement of caloric effects in low volume samples. The adiabatic temperature change ΔT of a magnetocaloric sample is very sensitively determined from thermal radiation. Rapid modulation of ΔT is induced by an oscillating external magnetic field. Detection of thermal radiation with a mercury-cadmium-telluride detector allows for measurements at field frequencies exceeding 1 kHz. In contrast to thermoacoustic methods, our method can be employed in vacuum which enhances adiabatic conditions especially in the case of small volume samples. Systematic measurements of the magnetocaloric effect as a function of temperature, magnetic field amplitude, and modulation frequency give a detailed picture of the thermal behavior of the sample. Highly sensitive measurements of the magnetocaloric effect are demonstrated on a 2 mm thick sample of gadolinium and a 60 μm thick Fe80B12Nb8 ribbon.
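
    As a generic illustration (not the paper's detection chain), a software lock-in sketch of the synchronous-detection idea such a modulation measurement relies on: the detector signal is demodulated at the field-modulation frequency to recover the amplitude of the periodic temperature swing.

      import numpy as np

      def lockin_amplitude(signal, fs_hz, f_ref_hz):
          """Amplitude of the component of `signal` at the reference frequency.

          Software lock-in: multiply by quadrature references at f_ref and average;
          the result is the amplitude of the modulated component, largely immune to
          broadband noise.
          """
          t = np.arange(signal.size) / fs_hz
          i = np.mean(signal * np.cos(2.0 * np.pi * f_ref_hz * t))
          q = np.mean(signal * np.sin(2.0 * np.pi * f_ref_hz * t))
          return 2.0 * np.hypot(i, q)

      rng = np.random.default_rng(5)
      fs, f_mod = 50_000.0, 1_000.0
      t = np.arange(int(fs)) / fs                 # one second of data
      detector = 0.02 * np.sin(2 * np.pi * f_mod * t + 0.4) + rng.normal(0, 0.5, t.size)
      print(lockin_amplitude(detector, fs, f_mod))   # recovers close to 0.02 despite noise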

  6. Global statistics of liquid water content and effective number concentration of water clouds over ocean derived from combined CALIPSO and MODIS measurements

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Vaughan, M.; McClain, C.; Behrenfeld, M.; Maring, H.; Anderson, D.; Sun-Mack, S.; Flittner, D.; Huang, J.; Wielicki, B.; Minnis, P.; Weimer, C.; Trepte, C.; Kuehn, R.

    2007-06-01

    This study presents an empirical relation that links the volume extinction coefficients of water clouds, the layer integrated depolarization ratios measured by lidar, and the effective radii of water clouds derived from collocated passive sensor observations. Based on Monte Carlo simulations of CALIPSO lidar observations, this method combines the cloud effective radius reported by MODIS with the lidar depolarization ratios measured by CALIPSO to estimate both the liquid water content and the effective number concentration of water clouds. The method is applied to collocated CALIPSO and MODIS measurements obtained during July and October of 2006, and January 2007. Global statistics of the cloud liquid water content and effective number concentration are presented.

  7. Method for the substantial reduction of quenching effects in luminescence spectrometry

    DOEpatents

    Demas, James N.; Jones, Wesley M.; Keller, Richard A.

    1989-01-01

    Method for reducing quenching effects in analytical luminescence measurements. Two embodiments of the present invention are described which relate to a form of time resolution based on the amplitudes and phase shifts of modulated emission signals. In the first embodiment, the measured modulated emission signal is substantially independent of sample quenching at sufficiently high frequencies. In the second embodiment, the modulated amplitude and the phase shift between the emission signal and the excitation source are simultaneously measured. Using either method, the observed modulated amplitude may be reduced to its unquenched value.

  8. Measurements of the Absorption by Auditorium SEATING—A Model Study

    NASA Astrophysics Data System (ADS)

    BARRON, M.; COLEMAN, S.

    2001-01-01

    One of several problems with seat absorption is that only small numbers of seats can be tested in standard reverberation chambers. One method proposed for reverberation chamber measurements involves extrapolation when the absorption coefficient results are applied to actual auditoria. Model seat measurements in an effectively large model reverberation chamber have allowed the validity of this extrapolation to be checked. The alternative barrier method for reverberation chamber measurements was also tested and the two methods were compared. The effect on the absorption of row-row spacing as well as absorption by small numbers of seating rows was also investigated with model seats.

  9. Simple, Low-Cost Data Collection Methods for Agricultural Field Studies.

    ERIC Educational Resources Information Center

    Koenig, Richard T.; Winger, Marlon; Kitchen, Boyd

    2000-01-01

    Summarizes relatively simple and inexpensive methods for collecting data from agricultural field studies. Describes methods involving on-farm testing, crop yield measurement, quality evaluations, weed control effectiveness, plant nutrient status, and other measures. Contains 29 references illustrating how these methods were used to conduct…

  10. Analysis and Correction of Diffraction Effect on the B/A Measurement at High Frequencies

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Gong, Xiu-Fen; Liu, Xiao-Zhou; Kushibiki, Jun-ichi; Nishino, Hideo

    2004-01-01

    A numerical method is developed to analyse and to correct the diffraction effect in the measurement of acoustic nonlinearity parameter B/A at high frequencies. By using the KZK nonlinear equation and the superposition approach of Gaussian beams, an analytical model is derived to describe the second harmonic generation through multi-layer medium SiO2/liquid specimen/SiO2. Frequency dependence of the nonlinear characterization curve for water in 110-155 MHz is numerically and experimentally investigated. With the measured dip position and the new model, values of B/A for water are evaluated. The results show that the present method can effectively correct the diffraction effect in the measurement.

  11. A Method to Determine the Impact of Patient-Centered Care Interventions in Primary Care

    PubMed Central

    Daaleman, Timothy P.; Shea, Christopher M.; Halladay, Jacqueline; Reed, David

    2014-01-01

    INTRODUCTION The implementation of patient-centered care (PCC) innovations continues to be poorly understood. We used the implementation effectiveness framework to pilot a method for measuring the impact of a PCC innovation in primary care practices. METHODS We analyzed data from a prior study that assessed the implementation of an electronic geriatric quality-of-life (QOL) module in 3 primary care practices in central North Carolina in 2011–12. Patients responded to the items and the subsequent patient-provider encounter was coded using the Roter Interaction Analysis System (RIAS). We developed an implementation effectiveness measure specific to the QOL module (i.e., frequency of usage during the encounter) using RIAS and then tested whether there were differences in RIAS codes using analysis of variance. RESULTS Across a total of 60 patient-provider encounters, we examined differences in the uptake of the QOL module (i.e., the implementation-effectiveness measure) in relation to the frequency of RIAS codes during the encounter (i.e., the patient-centeredness measure). There was a significant association between the effectiveness measure and patient-centered RIAS codes. CONCLUSION The concept of implementation effectiveness provided a useful framework to determine the impact of a PCC innovation. PRACTICE IMPLICATIONS A method that captures real-time interactions between patients and care staff over time can meaningfully evaluate PCC innovations. PMID:25269410

  12. Validation of a T1 and T2* leakage correction method based on multi-echo DSC-MRI using MION as a reference standard

    PubMed Central

    Stokes, Ashley M.; Semmineh, Natenael; Quarles, C. Chad

    2015-01-01

    Purpose A combined biophysical- and pharmacokinetic-based method is proposed to separate, quantify, and correct for both T1 and T2* leakage effects using dual-echo DSC acquisitions to provide more accurate hemodynamic measures, as validated by a reference intravascular contrast agent (CA). Methods Dual-echo DSC-MRI data were acquired in two rodent glioma models. The T1 leakage effects were removed and also quantified in order to subsequently correct for the remaining T2* leakage effects. Pharmacokinetic, biophysical, and combined biophysical and pharmacokinetic models were used to obtain corrected cerebral blood volume (CBV) and cerebral blood flow (CBF), and these were compared with CBV and CBF from an intravascular CA. Results T1-corrected CBV was significantly overestimated compared to MION CBV, while T1+T2*-correction yielded CBV values closer to the reference values. The pharmacokinetic and simplified biophysical methods showed similar results and underestimated CBV in tumors exhibiting strong T2* leakage effects. The combined method was effective for correcting T1 and T2* leakage effects across tumor types. Conclusions Correcting for both T1 and T2* leakage effects yielded more accurate measures of CBV. The combined correction method yields more reliable CBV measures than either correction method alone, but for certain brain tumor types (e.g., gliomas) the simplified biophysical method may provide a robust and computationally efficient alternative. PMID:26362714
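
    A hedged sketch of why a dual-echo acquisition removes T1 leakage effects: for gradient-echo signals at two echo times the T1-weighted factor cancels in the echo ratio, so ΔR2*(t) can be computed directly; the paper's full leakage-correction pipeline is not reproduced here.

      import numpy as np

      def delta_r2star(signal_te1, signal_te2, te1_ms, te2_ms):
          """T1-insensitive DeltaR2*(t) time course from dual-echo DSC-MRI signals.

          For a gradient echo S(TE) = S0(t) * exp(-TE * R2*(t)), the T1-dependent
          S0(t) cancels in the echo ratio, so
              R2*(t) = ln(S(TE1) / S(TE2)) / (TE2 - TE1).
          DeltaR2* is reported relative to the pre-contrast baseline.
          """
          s1 = np.asarray(signal_te1, dtype=float)
          s2 = np.asarray(signal_te2, dtype=float)
          r2star = np.log(s1 / s2) / (te2_ms - te1_ms)           # in 1/ms
          baseline = r2star[:10].mean()                          # assume first 10 frames pre-bolus
          return r2star - baseline

      # Tiny synthetic example with a bolus-shaped R2* change.
      t = np.arange(100)
      r2s = 0.02 + 0.015 * np.exp(-0.5 * ((t - 40) / 6.0) ** 2)  # 1/ms
      s0 = 1000.0 * np.ones(t.size)
      s_te1, s_te2 = s0 * np.exp(-7.0 * r2s), s0 * np.exp(-31.0 * r2s)
      print(delta_r2star(s_te1, s_te2, te1_ms=7.0, te2_ms=31.0).max())   # ~0.015 per ms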

  13. The non-contact biometric identified bio signal measurement sensor and algorithms.

    PubMed

    Kim, Chan-Il; Lee, Jong-Ha

    2018-01-01

    In recent years, wearable devices have been developed for effectively measuring biological data. However, these devices suffer from tissue allergy and noise problems. To solve these problems, biometric measurement based on non-contact methods, such as face image sequencing, has been developed. This makes it possible to measure biometric data without any special operation or side effects. However, it is impossible for a remote center to identify the person whose data are measured by such methods. In this paper, we propose a novel non-contact heart rate and blood pressure imaging system, the Deep Health Eye. This system performs authentication at the same time as it measures bio-signals, through a non-contact method. In the future, the system could serve as a convenient home bio-signal monitoring system when combined with a smart mirror.

  14. Measuring Multi-Ethnic Desegregation

    ERIC Educational Resources Information Center

    Straus, Ryane McAuliffe

    2010-01-01

    This article proposes a new method for measuring school desegregation in multiracial districts, and uses the new method to measure the desegregation effects of magnet schools in Los Angeles. Rather than measuring desegregation between only two groups at a time, I compute the index of interracial exposure for Blacks, Whites, Latinos, and Asians.…

  15. Effect of two sweating simulation methods on clothing evaporative resistance in a so-called isothermal condition.

    PubMed

    Lu, Yehu; Wang, Faming; Peng, Hui

    2016-07-01

    The effect of sweating simulation methods on clothing evaporative resistance was investigated in a so-called isothermal condition (T_manikin = T_a = T_r). Two sweating simulation methods, namely, the pre-wetted fabric "skin" (PW) and the water supplied sweating (WS), were applied to determine clothing evaporative resistance on a "Newton" thermal manikin. Results indicated that the clothing evaporative resistance determined by the WS method was significantly lower than that measured by the PW method. In addition, the evaporative resistances measured by the two methods were correlated and exhibited a linear relationship. Validation experiments demonstrated that the empirical regression equation showed highly acceptable estimations. The study contributes to improving the accuracy of measurements of clothing evaporative resistance by means of a sweating manikin.

  16. Effect Size as the Essential Statistic in Developing Methods for mTBI Diagnosis.

    PubMed

    Gibson, Douglas Brandt

    2015-01-01

    The descriptive statistic known as "effect size" measures the distinguishability of two sets of data. Distinguishability is at the core of diagnosis. This article is intended to point out the importance of effect size in the development of effective diagnostics for mild traumatic brain injury and to point out the applicability of the effect size statistic in comparing diagnostic efficiency across the main proposed TBI diagnostic methods: psychological, physiological, biochemical, and radiologic. Comparing diagnostic approaches is difficult because different researchers in different fields have different approaches to measuring efficacy. Converting diverse measures to effect sizes, as is done in meta-analysis, is a relatively easy way to make studies comparable.
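
    For illustration, a short sketch of the statistic the article centers on, Cohen's d, which expresses the separation of two measurement distributions in pooled standard-deviation units; the data below are hypothetical.

      import numpy as np

      def cohens_d(group_a, group_b):
          """Cohen's d: difference in means divided by the pooled standard deviation."""
          a = np.asarray(group_a, dtype=float)
          b = np.asarray(group_b, dtype=float)
          pooled_var = ((a.size - 1) * a.var(ddof=1) + (b.size - 1) * b.var(ddof=1)) / (a.size + b.size - 2)
          return (a.mean() - b.mean()) / np.sqrt(pooled_var)

      rng = np.random.default_rng(4)
      controls = rng.normal(100.0, 15.0, 200)   # hypothetical biomarker in controls
      mtbi = rng.normal(110.0, 15.0, 200)       # hypothetical biomarker after mTBI
      print(cohens_d(mtbi, controls))           # roughly 0.6-0.7, a medium-to-large effect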

  17. The estimation of the measurement results with using statistical methods

    NASA Astrophysics Data System (ADS)

    Velychko, O.; Gordiyenko, T.

    2015-02-01

    A series of international standards and guides describes various statistical methods that apply to the management, control and improvement of processes for the analysis of technical measurement results. An analysis of the international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is described. To carry out this analysis of the standards and guides, cause-and-effect Ishikawa diagrams concerning the application of statistical methods for the estimation of measurement results are constructed.

  18. A non-iterative twin image elimination method with two in-line digital holograms

    NASA Astrophysics Data System (ADS)

    Kim, Jongwu; Lee, Heejung; Jeon, Philjun; Kim, Dug Young

    2018-02-01

    We propose a simple non-iterative in-line holographic measurement method which can effectively eliminate a twin image in digital holographic 3D imaging. It is shown that a twin image can be effectively eliminated with only two measured holograms by using a simple numerical propagation algorithm and arithmetic calculations.

  19. Weighted Geometric Dilution of Precision Calculations with Matrix Multiplication

    PubMed Central

    Chen, Chien-Sheng

    2015-01-01

    To enhance the performance of location estimation in wireless positioning systems, the geometric dilution of precision (GDOP) is widely used as a criterion for selecting measurement units. Since GDOP represents the geometric effect on the relationship between measurement error and positioning determination error, the smallest GDOP of the measurement unit subset is usually chosen for positioning. The conventional GDOP calculation using matrix inversion method requires many operations. Because more and more measurement units can be chosen nowadays, an efficient calculation should be designed to decrease the complexity. Since the performance of each measurement unit is different, the weighted GDOP (WGDOP), instead of GDOP, is used to select the measurement units to improve the accuracy of location. To calculate WGDOP effectively and efficiently, the closed-form solution for WGDOP calculation is proposed when more than four measurements are available. In this paper, an efficient WGDOP calculation method applying matrix multiplication that is easy for hardware implementation is proposed. In addition, the proposed method can be used when more than exactly four measurements are available. Even when using all-in-view method for positioning, the proposed method still can reduce the computational overhead. The proposed WGDOP methods with less computation are compatible with global positioning system (GPS), wireless sensor networks (WSN) and cellular communication systems. PMID:25569755
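
    For reference, a sketch of the quantity being computed using the standard inversion-based definition, WGDOP = sqrt(trace((H^T W H)^-1)); the paper's contribution is a closed-form matrix-multiplication formulation that avoids the explicit inversion, which is not reproduced here. The geometry matrix and weights below are hypothetical.

      import numpy as np

      def wgdop(geometry_matrix, weights):
          """Weighted GDOP from the measurement geometry matrix H and per-unit weights.

          WGDOP = sqrt(trace((H^T W H)^{-1})), the reference (inversion-based)
          definition; smaller values indicate better measurement-unit geometry.
          """
          h = np.asarray(geometry_matrix, dtype=float)
          w = np.diag(np.asarray(weights, dtype=float))
          return float(np.sqrt(np.trace(np.linalg.inv(h.T @ w @ h))))

      # Four pseudorange-style rows: unit line-of-sight vectors plus a clock column.
      h = np.array([[ 0.6,  0.8, 0.0, 1.0],
                    [-0.8,  0.6, 0.0, 1.0],
                    [ 0.0,  0.6, 0.8, 1.0],
                    [ 0.0, -0.6, 0.8, 1.0]])
      print(wgdop(h, weights=[1.0, 0.8, 1.2, 1.0]))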

  20. Flip-angle profile of slice-selective excitation and the measurement of the MR longitudinal relaxation time with steady-state magnetization

    NASA Astrophysics Data System (ADS)

    Hsu, Jung-Jiin

    2015-08-01

    In MRI, the flip angle (FA) of slice-selective excitation is not uniform across the slice-thickness dimension. This work investigates the effect of the non-uniform FA profile on the accuracy of a commonly-used method for the measurement, in which the T1 value, i.e., the longitudinal relaxation time, is determined from the steady-state signals of an equally-spaced RF pulse train. By using the numerical solutions of the Bloch equation, it is shown that, because of the non-uniform FA profile, the outcome of the T1 measurement depends significantly on T1 of the specimen and on the FA and the inter-pulse spacing τ of the pulse train. A new method to restore the accuracy of the T1 measurement is described. Different from the existing approaches, the new method also removes the FA profile effect for the measurement of the FA, which is normally a part of the T1 measurement. In addition, the new method does not involve theoretical modeling, approximation, or modification to the underlying principle of the T1 measurement. An imaging experiment is performed, which shows that the new method can remove the FA-, the τ-, and the T1-dependence and produce T1 measurements in excellent agreement with the ones obtained from a gold standard method (the inversion-recovery method).

  1. A Pilot Study of a Novel Method of Measuring Stigma about Depression Developed for Latinos in the Faith-Based Setting.

    PubMed

    Caplan, Susan

    2016-08-01

    In order to understand the effects of interventions designed to reduce stigma about mental illness, we need valid measures. However, the validity of commonly used measures is compromised by social desirability bias. The purpose of this pilot study was to test an anonymous method of measuring stigma in the community setting. The method of data collection, Preguntas con Cartas (Questions with Cards), used numbered playing cards to conduct anonymous group polling about stigmatizing beliefs during a mental health literacy intervention. An analysis of the difference between Preguntas con Cartas stigma votes and corresponding face-to-face individual survey results for the same seven stigma questions indicated that there was a statistically significant difference in the distributions between the two methods of data collection (χ² = 8.27, p = 0.016). This exploratory study has shown the potential effectiveness of Preguntas con Cartas as a novel method of measuring stigma in the community-based setting.
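
    As a hedged illustration of the kind of comparison described above, a Pearson chi-square test of independence between two hypothetical response distributions for the same stigma question under the two data-collection methods (the counts are invented for illustration).

      import numpy as np
      from scipy.stats import chi2

      observed = np.array([[14, 6, 20],       # anonymous card polling: agree / unsure / disagree
                           [25, 5, 10]],      # face-to-face survey
                          dtype=float)
      row = observed.sum(axis=1, keepdims=True)
      col = observed.sum(axis=0, keepdims=True)
      expected = row @ col / observed.sum()
      stat = ((observed - expected) ** 2 / expected).sum()
      dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)
      p_value = chi2.sf(stat, dof)
      print(f"chi2 = {stat:.2f}, dof = {dof}, p = {p_value:.3f}")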

  2. New Analysis Methods Estimate a Critical Property of Ethanol Fuel Blends

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-03-01

    To date there have been no adequate methods for measuring the heat of vaporization of complex mixtures. This research developed two separate methods for measuring this key property of ethanol and gasoline blends, including the ability to estimate heat of vaporization at multiple temperatures. Methods for determining heat of vaporization of gasoline-ethanol blends by calculation from a compositional analysis and by direct calorimetric measurement were developed. Direct measurement produced values for pure compounds in good agreement with literature. A range of hydrocarbon gasolines were shown to have heat of vaporization of 325 kJ/kg to 375 kJ/kg. The effect of adding ethanol at 10 vol percent to 50 vol percent was significantly larger than the variation between hydrocarbon gasolines (E50 blends at 650 kJ/kg to 700 kJ/kg). The development of these new and accurate methods allows researchers to begin to both quantify the effect of fuel evaporative cooling on knock resistance, and exploit this effect for combustion of hydrocarbon-ethanol fuel blends in high-efficiency SI engines.
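
    A hedged sketch of the compositional-analysis route mentioned above: the blend heat of vaporization estimated as the mass-weighted average of component values. The component list and property values below are approximate, illustration-only numbers, not the report's data.

      # Mass-weighted estimate of blend heat of vaporization from a compositional
      # analysis (illustrative component values in kJ/kg; real analyses use full
      # speciation from gas chromatography).
      components = {
          # name: (mass fraction, heat of vaporization in kJ/kg)
          "ethanol":    (0.104, 920.0),   # roughly an E10 blend by volume
          "iso-octane": (0.450, 310.0),
          "toluene":    (0.300, 410.0),
          "n-heptane":  (0.146, 365.0),
      }
      total_mass = sum(frac for frac, _ in components.values())
      hov_blend = sum(frac * hov for frac, hov in components.values()) / total_mass
      print(f"estimated blend HOV: {hov_blend:.0f} kJ/kg")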

  3. Mechanics of Ballast Compaction. Volume 2 : Field Methods for Ballast Physical State Measurement

    DOT National Transportation Integrated Search

    1982-03-01

    Field methods for measuring ballast physical state are needed to study the effects of ballast compaction. Following a consideration of various alternatives, three methods were selected for development and evaluation. The first was in-place density, w...

  4. Improvement of photon correlation spectroscopy method for measuring nanoparticle size by using attenuated total reflectance.

    PubMed

    Krishtop, Victor; Doronin, Ivan; Okishev, Konstantin

    2012-11-05

    Photon correlation spectroscopy is an effective method for measuring nanoparticle sizes and has several advantages over alternative methods. However, this method suffers from a disadvantage in that its measurement accuracy is reduced in the presence of convective flows in the fluid containing the nanoparticles. In this paper, we propose a scheme based on attenuated total reflectance in order to reduce the influence of convection currents. The autocorrelation function for the light-scattering intensity was found for this case, and it was shown that this method afforded a significant decrease in the time required to measure the particle sizes and an increase in the measuring accuracy.
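
    For context, a minimal sketch of the standard PCS size retrieval the method builds on: the measured correlation decay rate is converted to a diffusion coefficient via Gamma = D q^2 and then to a hydrodynamic diameter through the Stokes-Einstein relation; the ATR-specific geometry of the proposed scheme is not modeled.

      import numpy as np

      def hydrodynamic_diameter(decay_rate_hz, wavelength_m, theta_rad, n_medium,
                                temperature_k=293.15, viscosity_pa_s=1.0e-3):
          """Particle diameter from a PCS correlation decay rate via Stokes-Einstein.

          The field correlation function decays as exp(-Gamma * tau) with
          Gamma = D * q^2, where q is the scattering vector; inverting gives D and
          then d = k_B * T / (3 * pi * eta * D).
          """
          k_b = 1.380649e-23
          q = 4.0 * np.pi * n_medium / wavelength_m * np.sin(theta_rad / 2.0)
          diffusion = decay_rate_hz / q**2
          return k_b * temperature_k / (3.0 * np.pi * viscosity_pa_s * diffusion)

      # Example: water, 633 nm laser, 90 degree scattering angle.
      print(hydrodynamic_diameter(decay_rate_hz=1.5e3, wavelength_m=633e-9,
                                  theta_rad=np.pi / 2, n_medium=1.33))   # ~1e-7 m (100 nm)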

  5. Size-dependent magnetic anisotropy of PEG coated Fe3O4 nanoparticles; comparing two magnetization methods

    NASA Astrophysics Data System (ADS)

    Nayek, C.; Manna, K.; Imam, A. A.; Alqasrawi, A. Y.; Obaidat, I. M.

    2018-02-01

    Understanding the size-dependent magnetic anisotropy of iron oxide nanoparticles is essential for the successful application of these nanoparticles in several technological and medical fields. PEG-coated iron oxide (Fe3O4) nanoparticles with core diameters of 12 nm, 15 nm, and 16 nm were synthesized by the usual co-precipitation method. The morphology and structure of the nanoparticles were investigated using transmission electron microscopy (TEM), high resolution transmission electron microscopy (HRTEM), selected area electron diffraction (SAED), and X-ray diffraction (XRD). Magnetic measurements were conducted using a SQUID. The effective magnetic anisotropy was calculated using two methods from the magnetization measurements. In the first method the zero-field-cooled magnetization versus temperature measurements were used at several applied magnetic fields. In the second method we used the temperature-dependent coercivity curves obtained from the zero-field-cooled magnetization versus magnetic field hysteresis loops. The role of the applied magnetic field on the effective magnetic anisotropy, calculated from the zero-field-cooled magnetization versus temperature measurements, was revealed. The size dependence of the effective magnetic anisotropy constant Keff obtained by the two methods is compared and discussed.
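
    A hedged worked example of the kind of estimate behind the first (ZFC-based) route: for non-interacting single-domain particles the effective anisotropy can be estimated from the blocking temperature via Keff * V ≈ 25 kB * TB; the values below are illustrative, and the paper's field-dependent analysis is not reproduced.

      import numpy as np

      def keff_from_blocking(t_block_k, diameter_m):
          """Effective anisotropy from the blocking temperature, Keff * V ~= 25 kB * TB."""
          k_b = 1.380649e-23
          volume = np.pi * diameter_m**3 / 6.0       # spherical particle assumed
          return 25.0 * k_b * t_block_k / volume     # J/m^3

      # Example: a 12 nm particle blocking near 150 K.
      print(keff_from_blocking(150.0, 12e-9))        # ~5.7e4 J/m^3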

  6. Method for the substantial reduction of quenching effects in luminescence spectrometry

    DOEpatents

    Demas, J.N.; Jones, W.M.; Keller, R.A.

    1987-06-26

    Method for reducing quenching effects in analytical luminescence measurements. Two embodiments of the present invention are described which relate to a form of time resolution based on the amplitudes and phase shifts of modulated emission signals. In the first embodiment, the measured modulated emission signal is substantially independent of sample quenching at sufficiently high frequencies. In the second embodiment, the modulated amplitude and the phase shift between the emission signal and the excitation source are simultaneously measured. Using either method, the observed modulated amplitude may be reduced to its unquenched value. 3 figs.

  7. Evaluation of effectiveness of combined sewer overflow control measures by operational data.

    PubMed

    Schroeder, K; Riechel, M; Matzinger, A; Rouault, P; Sonnenberg, H; Pawlowsky-Reusing, E; Gnirss, R

    2011-01-01

    The effect of combined sewer overflow (CSO) control measures should be validated during operation based on monitoring of CSO activity and subsequent comparison with (legal) requirements. However, most CSO monitoring programs have been started only recently and therefore no long-term data are available for reliable efficiency control. A method is proposed that focuses on rainfall data for evaluating the effectiveness of CSO control measures. It is applicable if a sufficient time-series of rainfall data and a limited set of data on CSO discharges are available. The method is demonstrated for four catchments of the Berlin combined sewer system. The analysis of the 2000-2007 data shows the effect of CSO control measures, such as activation of in-pipe storage capacities within the Berlin system. The catchment where measures are fully implemented shows less than 40% of the CSO activity of the catchments where measures have not yet, or not yet completely, been realised.

  8. Estimation of effective wind speed

    NASA Astrophysics Data System (ADS)

    Østergaard, K. Z.; Brath, P.; Stoustrup, J.

    2007-07-01

    The wind speed has a huge impact on the dynamic response of a wind turbine. Because of this, many control algorithms use a measure of the wind speed to increase performance, e.g. by gain scheduling and feed forward. Unfortunately, no accurate measurement of the effective wind speed is available online from direct measurements, which means that it must be estimated in order to make such control methods applicable in practice. In this paper a new method is presented for the estimation of the effective wind speed. First, the rotor speed and aerodynamic torque are estimated by a combined state and input observer. These two variables combined with the measured pitch angle are then used to calculate the effective wind speed by an inversion of a static aerodynamic model.
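    The final step described, recovering the effective wind speed from the estimated aerodynamic torque, rotor speed and pitch, amounts to numerically inverting a static aerodynamic model Q = 0.5*rho*pi*R^2*Cp(lambda, beta)*v^3/omega. The sketch below assumes a generic heuristic Cp(lambda, beta) surface and made-up turbine parameters; a real implementation would use the turbine's own Cp table and the observer outputs.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    rho, R = 1.225, 40.0                     # air density (kg/m^3), rotor radius (m) -- assumed

    def cp(lmbda, beta):
        """Toy power coefficient surface Cp(tip-speed ratio, pitch in deg) -- illustrative only."""
        lam_i = 1.0 / (1.0 / (lmbda + 0.08 * beta) - 0.035 / (beta**3 + 1.0))
        return max(0.0, 0.52 * (116.0 / lam_i - 0.4 * beta - 5.0) * np.exp(-21.0 / lam_i))

    def aero_torque(v, omega, beta):
        """Static aerodynamic torque model Q = 0.5*rho*pi*R^2*Cp*v^3/omega."""
        lmbda = omega * R / v
        return 0.5 * rho * np.pi * R**2 * cp(lmbda, beta) * v**3 / omega

    def effective_wind_speed(Q_est, omega, beta, v_lo=3.0, v_hi=25.0):
        """Invert the torque model for v given estimated torque, rotor speed and pitch."""
        return brentq(lambda v: aero_torque(v, omega, beta) - Q_est, v_lo, v_hi)

    omega, beta = 1.8, 2.0                    # rad/s, degrees (illustrative operating point)
    Q_est = aero_torque(9.0, omega, beta)     # pretend the observer produced this torque estimate
    print(f"Recovered effective wind speed: {effective_wind_speed(Q_est, omega, beta):.2f} m/s")
    ```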

  9. A meta-analytic review of self-reported, clinician-rated, and performance-based motivation measures in schizophrenia: Are we measuring the same "stuff"?

    PubMed

    Luther, Lauren; Firmin, Ruth L; Lysaker, Paul H; Minor, Kyle S; Salyers, Michelle P

    2018-04-07

    An array of self-reported, clinician-rated, and performance-based measures has been used to assess motivation in schizophrenia; however, the convergent validity evidence for these motivation assessment methods is mixed. The current study is a series of meta-analyses that summarize the relationships between methods of motivation measurement in 45 studies of people with schizophrenia. The overall mean effect size between self-reported and clinician-rated motivation measures (r = 0.27, k = 33) was significant, positive, and approaching medium in magnitude, and the overall effect size between performance-based and clinician-rated motivation measures (r = 0.21, k = 11) was positive, significant, and small in magnitude. The overall mean effect size between self-reported and performance-based motivation measures was negligible and non-significant (r = -0.001, k = 2), but this meta-analysis was underpowered. Findings suggest modest convergent validity between clinician-rated and both self-reported and performance-based motivation measures, but additional work is needed to clarify the convergent validity between self-reported and performance-based measures. Further, there is likely more variability than similarity in the underlying construct that is being assessed across the three methods, particularly between the performance-based and other motivation measurement types. These motivation assessment methods should not be used interchangeably, and measures should be more precisely described as the specific motivational construct or domain they are capturing. Copyright © 2018 Elsevier Ltd. All rights reserved.
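    Pooled correlations of the kind reported here are typically obtained by converting study-level effect sizes to Fisher's z, averaging with inverse-variance (sample-size based) weights, and back-transforming. The sketch below shows that standard fixed-effect pooling with made-up study correlations and sample sizes; it is not the authors' data or their exact meta-analytic model.

    ```python
    import numpy as np

    # Minimal fixed-effect pooling of correlations via Fisher's z transform.
    # Study correlations and sample sizes below are made up for illustration.
    r = np.array([0.31, 0.22, 0.35, 0.18, 0.27])   # study-level correlations
    n = np.array([120, 85, 200, 60, 150])          # study sample sizes

    z = np.arctanh(r)                 # Fisher z transform
    w = n - 3                         # inverse-variance weights, since var(z) = 1/(n-3)
    z_bar = np.sum(w * z) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))

    r_bar = np.tanh(z_bar)            # back-transform the pooled z to a correlation
    ci = np.tanh([z_bar - 1.96 * se, z_bar + 1.96 * se])
    print(f"Pooled r = {r_bar:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}], k = {len(r)}")
    ```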

  10. Measurement method for determining the magnetic hysteresis effects of reluctance actuators by evaluation of the force and flux variation.

    PubMed

    Vrijsen, N H; Jansen, J W; Compter, J C; Lomonova, E A

    2013-07-01

    A measurement method is presented which identifies the magnetic hysteresis effects present in the force of linear reluctance actuators. The measurement method is applied to determine the magnetic hysteresis in the force of an E-core reluctance actuator, with and without pre-biasing permanent magnet. The force measurements are conducted with a piezoelectric load cell (Kistler type 9272). This high-bandwidth force measurement instrument is identified in the frequency domain using a voice-coil actuator that has negligible magnetic hysteresis and eddy currents. Specifically, the phase delay between the current and force of the voice-coil actuator is used for the calibration of the measurement instrument. This phase delay is also obtained by evaluation of the measured force and flux variation in the E-core actuator, both with and without permanent magnet on the middle tooth. The measured magnetic flux variation is used to distinguish the phase delay due to magnetic hysteresis from the measured phase delay between the current and the force of the E-core actuator. Finally, an open loop steady-state ac model is presented that predicts the magnetic hysteresis effects in the force of the E-core actuator.

  11. Measurement of inflammation in man and animals by radiometry.

    PubMed

    Collins, A J; Ring, E F

    1972-01-01

    1. A radiometer is described, which is sensitive to infrared radiation in the range 0-25 μm, and which, after calibration with a black body standard, can be used as a non-contact, fast reading thermometer. 2. An example of acute joint inflammation in a patient with rheumatoid arthritis is described. The temperatures over the joint, measured by radiometry, followed inflammatory changes in the joint effusion. 3. Using rats, the method of measuring inflammation by radiometry was compared with measurements of increase in joint size. Changes measured by radiometry preceded changes shown by increase in joint size. 4. The radiometer method was able to demonstrate the effect of an anti-inflammatory drug, given orally, against carrageenin inflammation. 5. The procedure was found to be an accurate means of measuring inflammation and the anti-inflammatory effects of drugs. It was faster and less tedious than the other methods for the quantitative measurement of inflammation in man and animals.

  12. Effective Biot theory and its generalization to poroviscoelastic models

    NASA Astrophysics Data System (ADS)

    Liu, Xu; Greenhalgh, Stewart; Zhou, Bing; Greenhalgh, Mark

    2018-02-01

    A method is suggested to express the effective bulk modulus of the solid frame of a poroelastic material as a function of the saturated bulk modulus. This method enables effective Biot theory to be described through the use of seismic dispersion measurements or other models developed for the effective saturated bulk modulus. The effective Biot theory is generalized to a poroviscoelastic model of which the moduli are represented by the relaxation functions of the generalized fractional Zener model. The latter covers the general Zener and the Cole-Cole models as special cases. A global search method is described to determine the parameters of the relaxation functions, and a simple deterministic method is also developed to find the defining parameters of the single Cole-Cole model. These methods enable poroviscoelastic models to be constructed, which are based on measured seismic attenuation functions, and ensure that the model dispersion characteristics match the observations.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Zhiqiang; Department of Materials Science and Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53706; Geng, Dalong

    A simple and effective decoupled finite element analysis method was developed for simulating both the piezoelectric and flexoelectric effects of zinc oxide (ZnO) and barium titanate (BTO) nanowires (NWs). The piezoelectric potential distribution on a ZnO NW was calculated under three deformation conditions (cantilever, three-point, and four-point bending) and compared to the conventional fully coupled method. The discrepancies of the electric potential maximums from these two methods were found to be very small, validating the accuracy and effectiveness of the decoupled method. Both ZnO and BTO NWs yielded very similar potential distributions. Comparing the potential distributions induced by the piezoelectric and flexoelectric effects, we identified that the middle segment of a four-point bending NW beam is the ideal place for measuring the flexoelectric coefficient, because the uniform parallel plate capacitor-like potential distribution in this region is exclusively induced by the flexoelectric effect. This decoupled method could provide a valuable guideline for experimental measurements of the piezoelectric effects and flexoelectric effects at the nanometer scale.

  14. Predicting S-wave velocities for unconsolidated sediments at low effective pressure

    USGS Publications Warehouse

    Lee, Myung W.

    2010-01-01

    Accurate S-wave velocities for shallow sediments are important in performing a reliable elastic inversion for gas hydrate-bearing sediments and in evaluating velocity models for predicting S-wave velocities, but few S-wave velocities are measured at low effective pressure. Predicting S-wave velocities by using conventional methods based on the Biot-Gassmann theory appears to be inaccurate for laboratory-measured velocities at effective pressures less than about 4-5 megapascals (MPa). Measured laboratory and well log velocities show two distinct trends for S-wave velocities with respect to P-wave velocity: one for the S-wave velocity less than about 0.6 kilometer per second (km/s) which approximately corresponds to effective pressure of about 4-5 MPa, and the other for S-wave velocities greater than 0.6 km/s. To accurately predict S-wave velocities at low effective pressure less than about 4-5 MPa, a pressure-dependent parameter that relates the consolidation parameter to shear modulus of the sediments at low effective pressure is proposed. The proposed method in predicting S-wave velocity at low effective pressure worked well for velocities of water-saturated sands measured in the laboratory. However, this method underestimates the well-log S-wave velocities measured in the Gulf of Mexico, whereas the conventional method performs well for the well log velocities. The P-wave velocity dispersion due to fluid in the pore spaces, which is more pronounced at high frequency with low effective pressures less than about 4 MPa, is probably a cause for this discrepancy.

  15. Study on the effect of measuring methods on incident photon-to-electron conversion efficiency of dye-sensitized solar cells by home-made setup.

    PubMed

    Guo, Xiao-Zhi; Luo, Yan-Hong; Zhang, Yi-Duo; Huang, Xiao-Chun; Li, Dong-Mei; Meng, Qing-Bo

    2010-10-01

    An experimental setup is built for the measurement of the monochromatic incident photon-to-electron conversion efficiency (IPCE) of solar cells. With this setup, three kinds of IPCE measuring methods as well as convenient switching between them are achieved. The setup can also measure the response time and waveform of the short-circuit current of a solar cell. Using this setup, IPCE results of dye-sensitized solar cells (DSCs) are determined and compared under different illumination conditions with each method. It is found that the IPCE values measured by the AC method involving the lock-in technique are strongly influenced by modulation frequency and bias illumination. Measurements of the response time and waveform of the short-circuit current have revealed that this effect can be explained by the slow response of DSCs. To get accurate IPCE values by this method, the measurement should be carried out with a low modulation frequency and under bias illumination. The IPCE values measured by the DC method under bias light illumination will be disturbed since the short-circuit current increases continuously with time due to the temperature rise of the DSC. Therefore, temperature control of the DSC is considered necessary for IPCE measurement, especially in the DC method with bias light illumination. Additionally, high bias light intensity (>2 sun) is found to decrease the IPCE values due to the ion transport limitation of the electrolyte.
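    For reference, the quantity being measured is usually computed from the short-circuit current density under monochromatic illumination as IPCE(%) = 100 × 1240 × Jsc / (λ × Pin), with Jsc in A/cm², λ in nm and Pin in W/cm². A one-function sketch with illustrative readings (not data from the home-made setup):

    ```python
    def ipce_percent(jsc_a_per_cm2, wavelength_nm, power_w_per_cm2):
        """IPCE(%) = 100 * 1240 * Jsc / (lambda * Pin); Jsc in A/cm^2, Pin in W/cm^2, lambda in nm."""
        return 100.0 * 1240.0 * jsc_a_per_cm2 / (wavelength_nm * power_w_per_cm2)

    # Illustrative reading: 0.3 mA/cm^2 short-circuit current at 550 nm under 1 mW/cm^2
    print(f"IPCE = {ipce_percent(3.0e-4, 550.0, 1.0e-3):.1f} %")
    ```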

  16. Theoretical evaluation of accuracy in position and size of brain activity obtained by near-infrared topography

    NASA Astrophysics Data System (ADS)

    Kawaguchi, Hiroshi; Hayashi, Toshiyuki; Kato, Toshinori; Okada, Eiji

    2004-06-01

    Near-infrared (NIR) topography can obtain a topographical distribution of the activated region in the brain cortex. Near-infrared light is strongly scattered in the head, and the volume of tissue sampled by a source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in the topographic image of the brain activity. In this study, a one-dimensional distribution of absorption change in a head model is calculated by mapping and reconstruction methods to evaluate the effect of the image reconstruction algorithm and the interval of measurement points for topographic imaging on the accuracy of the topographic image. The light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The measurement points are one-dimensionally arranged on the surface of the model, and the distance between adjacent measurement points is varied from 4 mm to 28 mm. Small intervals of the measurement points improve the topographic image calculated by both the mapping and reconstruction methods. In the conventional mapping method, the limit of the spatial resolution depends upon the interval of the measurement points and the spatial sensitivity profile for source-detector pairs. The reconstruction method has advantages over the mapping method, improving the results of the one-dimensional analysis when the interval of measurement points is less than 12 mm. The effect of overlapping of spatial sensitivity profiles indicates that the reconstruction method may be effective in improving the spatial resolution of a two-dimensional reconstruction of the topographic image obtained with a larger interval of measurement points. Near-infrared topography with the reconstruction method can potentially obtain an accurate distribution of absorption change in the brain even if the size of the absorption change is less than 10 mm.

  17. LEAKAGE CHARACTERISTICS OF BASE OF RIVERBANK BY SELF POTENTIAL METHOD AND EXAMINATION OF EFFECTIVENESS OF SELF POTENTIAL METHOD TO HEALTH MONITORING OF BASE OF RIVERBANK

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kensaku; Okada, Takashi; Takeuchi, Atsuo; Yazawa, Masato; Uchibori, Sumio; Shimizu, Yoshihiko

    A field measurement with the Self Potential Method using copper sulfate electrodes was performed at the base of a riverbank along the WATARASE River, where a leakage problem exists, in order to examine the leakage characteristics. The measurement results showed a typical S-shape, which indicates the existence of flowing groundwater. The results agreed well with measurement results obtained by the Ministry of Land, Infrastructure and Transport. Results of 1 m depth ground temperature detection and Chain-Array detection also showed good agreement with the results of the Self Potential Method. The correlation between the Self Potential value and groundwater velocity was examined in a model experiment, and the result showed a clear correlation. These results indicate that the Self Potential Method is an effective method for examining the characteristics of groundwater at the base of a riverbank with a leakage problem.

  18. Simple, Fast and Effective Correction for Irradiance Spatial Nonuniformity in Measurement of IVs of Large Area Cells at NREL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriarty, Tom

    The NREL cell measurement lab measures the IV parameters of cells of multiple sizes and configurations. A large contributing factor to errors and uncertainty in Jsc, Imax, Pmax and efficiency can be the irradiance spatial nonuniformity. Correcting for this nonuniformity through its precise and frequent measurement can be very time consuming. This paper explains a simple, fast and effective method based on bicubic interpolation for determining and correcting for spatial nonuniformity, and verifies the method's efficacy.
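    The paper's exact procedure is not reproduced here, but the general idea of a bicubic-interpolation-based correction can be sketched as follows: fit a bicubic surface to a coarsely measured irradiance map, average the interpolated irradiance over the cell aperture, and scale the measured current by the ratio of the reference irradiance to that average. The grids, simulated nonuniformity, and cell aperture below are all assumptions for illustration.

    ```python
    import numpy as np
    from scipy.interpolate import RectBivariateSpline

    # Sketch: correct a measured Isc for irradiance spatial nonuniformity using a
    # bicubic spline fit to a coarse irradiance map. All numbers are illustrative.
    x_coarse = np.linspace(0.0, 10.0, 6)           # cm, coarse mapping grid
    y_coarse = np.linspace(0.0, 10.0, 6)
    rng = np.random.default_rng(0)
    irr_map = 1000.0 * (1.0 + 0.02 * rng.standard_normal((6, 6)))   # W/m^2, ~2% nonuniformity

    spline = RectBivariateSpline(x_coarse, y_coarse, irr_map, kx=3, ky=3)  # bicubic fit

    # Evaluate on a fine grid covering the cell aperture (here a 2-8 cm square)
    x_fine = np.linspace(2.0, 8.0, 200)
    y_fine = np.linspace(2.0, 8.0, 200)
    irr_cell = spline(x_fine, y_fine)              # fine irradiance map over the cell area

    G_ref = 1000.0                                 # W/m^2 reference irradiance
    scale = G_ref / irr_cell.mean()                # correction for the cell-average irradiance

    isc_measured = 0.1234                          # A, illustrative raw reading
    print(f"Nonuniformity-corrected Isc: {isc_measured * scale:.4f} A")
    ```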

  19. Development and accuracy of a multipoint method for measuring visibility.

    PubMed

    Tai, Hongda; Zhuang, Zibo; Sun, Dongsong

    2017-10-01

    Accurate measurements of visibility are of great importance in many fields. This paper reports a multipoint visibility measurement (MVM) method to measure and calculate the atmospheric transmittance, extinction coefficient, and meteorological optical range (MOR). The relative errors of atmospheric transmittance and MOR measured by the MVM method and traditional transmissometer method are analyzed and compared. Experiments were conducted indoors, and the data were simultaneously processed. The results revealed that the MVM can effectively improve the accuracy under different visibility conditions. The greatest improvement of accuracy was 27%. The MVM can be used to calibrate and evaluate visibility meters.
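    By definition, the meteorological optical range is the path length over which transmittance falls to 5%, so a transmittance T measured over a known baseline L converts directly to an extinction coefficient and MOR. The sketch below shows only that standard conversion with illustrative numbers, not the MVM multipoint scheme itself.

    ```python
    import math

    def mor_from_transmittance(T, baseline_m):
        """MOR from transmittance T measured over a baseline of length baseline_m (metres).

        sigma = -ln(T)/L (extinction coefficient); MOR = -ln(0.05)/sigma (~2.996/sigma)."""
        sigma = -math.log(T) / baseline_m
        return -math.log(0.05) / sigma

    # Illustrative reading: 92% transmittance over a 75 m baseline
    print(f"MOR ~ {mor_from_transmittance(0.92, 75.0):.0f} m")
    ```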

  20. Research on the method of improving the accuracy of CMM (coordinate measuring machine) testing aspheric surface

    NASA Astrophysics Data System (ADS)

    Cong, Wang; Xu, Lingdi; Li, Ang

    2017-10-01

    Large aspheric surfaces, which deviate from spherical surfaces, are widely used in various optical systems. Compared with spherical surfaces, large aspheric surfaces have many advantages, such as improving image quality, correcting aberration, expanding the field of view, increasing the effective distance, and making the optical system compact and lightweight. In particular, with the rapid development of space optics, space sensors require higher resolution and larger viewing angles, so aspheric surfaces are becoming essential components of such optical systems. After coarse grinding of an aspheric surface, the surface profile error is about tens of microns [1]. To achieve the final surface-accuracy requirement, the aspheric surface must be modified quickly, and high-precision testing is the basis for rapid convergence of the surface error. There are many methods for aspheric surface testing [2], such as geometric ray detection, Hartmann testing, Ronchi testing, the knife-edge method, direct profile testing, and interferometry, but all of them have disadvantages [6]. In recent years, measurement of aspheric surfaces has become one of the important factors restricting the development of aspheric surface processing. A two-meter-aperture industrial coordinate measuring machine (CMM) is available, but it has drawbacks such as large detection error and low repeatability in the measurement of aspheric surfaces during coarse grinding, which seriously affects the convergence efficiency during aspheric mirror processing. To solve these problems, this paper presents an effective error control, calibration and removal method based on real-time monitoring of the calibration mirror position and other means of error control, together with probe correction and a measurement-mode selection method for planning the distribution of measured points. Verified on real engineering examples, the method improves the nominal measurement accuracy of the original industrial-grade CMM from a PV value of 7 microns to 4 microns, which effectively improves the grinding efficiency of aspheric mirrors and verifies the correctness of the method. This paper also investigates the error detection and operation control method, the error calibration of the CMM, and the random error calibration of the CMM.

  1. Aspheric surface measurement using capacitive probes

    NASA Astrophysics Data System (ADS)

    Tao, Xin; Yuan, Daocheng; Li, Shaobo

    2017-02-01

    With the application of aspheres in optical fields, high precision and high efficiency aspheric surface metrology has become a hot research topic. We describe a novel method for non-contact measurement of aspheric surfaces with a capacitive probe. Taking an eccentric spherical surface as the object of study, the averaging effect of capacitive probe measurement and the influence of tilting the capacitive probe on the measurement results are investigated. By comparing results from simultaneous measurement with the capacitive probe and the contact probe of a roundness instrument, this paper demonstrates the feasibility of using capacitive probes to test aspheric surfaces and proposes a compensation method for the measurement error caused by the averaging effect and the tilting of the capacitive probe.

  2. Comparing different methods for assessing contaminant bioavailability during sediment remediation.

    PubMed

    Jia, Fang; Liao, Chunyang; Xue, Jiaying; Taylor, Allison; Gan, Jay

    2016-12-15

    Sediment contamination by persistent organic pollutants from historical episodes is widespread and remediation is often needed to clean up severely contaminated sites. Measuring contaminant bioavailability in a before-and-after manner lends itself to improved assessment of remediation effectiveness. However, a number of bioavailability measurement methods have been developed, posing a challenge in method selection for practitioners. In this study, three different bioavailability measurement methods, i.e., solid phase microextraction (SPME), Tenax desorption, and the isotope dilution method (IDM), were compared in evaluating changes in bioavailability of DDT and its degradates in sediment following simulated remediation treatments. When compared to the unamended sediments, all three methods predicted essentially the same degrees of changes in bioavailability after amendment with activated carbon, charcoal or sand. After normalizing over the unamended control, measurements by different methods were linearly correlated with each other, with slopes close to 1. The same observation was further made with a Superfund site marine sediment. This finding suggests that different methods may be used in evaluating remediation efficiency. However, Tenax desorption or IDM consistently offered better sensitivity than SPME in detecting bioavailability changes. Results from this study highlight the value of considering bioavailability when evaluating remediation effectiveness and provide guidance on the selection of bioavailability measurement methods in such assessments. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Measuring systems of hard to get objects: problems with analysis of measurement results

    NASA Astrophysics Data System (ADS)

    Gilewska, Grazyna

    2005-02-01

    The problem of limited access to the metrological parameters and features of objects arises in many measurements, especially for biological objects, whose parameters are very often determined on the basis of indirect research. The random component predominates in measurement results when access to the measured objects is very limited. Every measuring process has conditions that limit how it can be processed (e.g., increasing the number of measurement repetitions to decrease the random limiting error). These may be time or financial limitations or, in the case of a biological object, a small sample volume, the influence of the measuring tool and observers on the object, or fatigue effects, e.g., in a patient. Taking these difficulties into account, the author developed and verified the practical application of methods for the reduction of outlying observations and, subsequently, innovative methods for eliminating measured data with excess variance, in order to decrease the mean standard deviation of the measured data given a limited amount of data and an accepted level of confidence. The elaborated methods were verified on measurement results of knee-joint space width obtained from radiographs. The measurements were carried out indirectly on digital images of the radiographs. The results confirmed the legitimacy of the elaborated methodology and measurement procedures. Such a methodology is especially important when standard scientific approaches do not bring the expected effects.

  4. Mediation analysis when a continuous mediator is measured with error and the outcome follows a generalized linear model

    PubMed Central

    Valeri, Linda; Lin, Xihong; VanderWeele, Tyler J.

    2014-01-01

    Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis-measured the validity of mediation analysis can be severely undermined. In this paper we first study the bias of classical, non-differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure-mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non-linearities the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using method of moments, regression calibration and SIMEX are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk. PMID:25220625
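    One of the three corrections compared, regression calibration, replaces the error-prone mediator by its expected value given the observed surrogate (and covariates) before fitting the outcome model. The sketch below illustrates the idea on simulated data with a continuous outcome; the calibration model is fit using the true mediator as a stand-in for validation or replicate data, and none of the models, effect sizes or variable names come from the paper.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 5000

    # Simulate exposure A, true mediator M, outcome Y, and a mis-measured mediator W.
    A = rng.binomial(1, 0.5, n)
    M = 0.8 * A + rng.normal(0, 1, n)              # true mediator
    Y = 1.0 * A + 1.5 * M + rng.normal(0, 1, n)    # continuous outcome
    W = M + rng.normal(0, 0.8, n)                  # surrogate with classical measurement error

    # Naive fit using the mis-measured mediator attenuates the mediator coefficient.
    naive = sm.OLS(Y, sm.add_constant(np.column_stack([A, W]))).fit()

    # Regression calibration: replace W by E[M | A, W] from a calibration model
    # (here fit with the true M, standing in for a validation subsample).
    calib = sm.OLS(M, sm.add_constant(np.column_stack([A, W]))).fit()
    M_hat = calib.predict(sm.add_constant(np.column_stack([A, W])))
    corrected = sm.OLS(Y, sm.add_constant(np.column_stack([A, M_hat]))).fit()

    print("naive mediator coef:    ", round(naive.params[2], 3))
    print("corrected mediator coef:", round(corrected.params[2], 3))
    ```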

  5. Measurement of company effectiveness using analytic network process method

    NASA Astrophysics Data System (ADS)

    Goran, Janjić; Zorana, Tanasić; Borut, Kosec

    2017-07-01

    The sustainable development of an organisation is monitored through the organisation's performance, which beforehand incorporates all stakeholders' requirements in its strategy. The strategic management concept enables organisations to monitor and evaluate their effectiveness along with efficiency by monitoring the implementation of set strategic goals. In the process of monitoring and measuring effectiveness, an organisation can use multiple-criteria decision-making methods as an aid. This study uses the analytic network process (ANP) method to define the weight factors of the mutual influences of all the important elements of an organisation's strategy. The calculation of an organisation's effectiveness is based on the weight factors and the degree of fulfilment of the goal values of the strategic map measures. New business conditions change the importance of certain elements of an organisation's business in relation to competitive advantage on the market, where increasing emphasis is given to non-material resources in the process of selecting the organisation's most important measures.

  6. The performance of different propensity score methods for estimating absolute effects of treatments on survival outcomes: A simulation study.

    PubMed

    Austin, Peter C; Schuster, Tibor

    2016-10-01

    Observational studies are increasingly being used to estimate the effect of treatments, interventions and exposures on outcomes that can occur over time. Historically, the hazard ratio, which is a relative measure of effect, has been reported. However, medical decision making is best informed when both relative and absolute measures of effect are reported. When outcomes are time-to-event in nature, the effect of treatment can also be quantified as the change in mean or median survival time due to treatment and the absolute reduction in the probability of the occurrence of an event within a specified duration of follow-up. We describe how three different propensity score methods, propensity score matching, stratification on the propensity score and inverse probability of treatment weighting using the propensity score, can be used to estimate absolute measures of treatment effect on survival outcomes. These methods are all based on estimating marginal survival functions under treatment and lack of treatment. We then conducted an extensive series of Monte Carlo simulations to compare the relative performance of these methods for estimating the absolute effects of treatment on survival outcomes. We found that stratification on the propensity score resulted in the greatest bias. Caliper matching on the propensity score and a method based on earlier work by Cole and Hernán tended to have the best performance for estimating absolute effects of treatment on survival outcomes. When the prevalence of treatment was less extreme, then inverse probability of treatment weighting-based methods tended to perform better than matching-based methods. © The Author(s) 2014.
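    Of the methods compared, inverse probability of treatment weighting estimates marginal survival functions by fitting a propensity model, weighting each subject by the inverse probability of the treatment actually received, and computing a weighted Kaplan-Meier curve in each arm. The sketch below uses simulated data and a bare-bones weighted Kaplan-Meier; it is not the paper's simulation design or the Cole-and-Hernán estimator.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 2000

    # Simulated confounder, treatment assignment, and exponential event times.
    x = rng.normal(size=n)
    p_treat = 1.0 / (1.0 + np.exp(-0.5 * x))
    z = rng.binomial(1, p_treat)                       # treatment indicator
    rate = np.exp(-0.5 * z + 0.4 * x)                  # true hazard: treatment is protective
    t = rng.exponential(1.0 / rate)
    event = (t < 5.0).astype(int)                      # administrative censoring at t = 5
    t = np.minimum(t, 5.0)

    # Propensity score and stabilized IPT weights.
    ps = LogisticRegression().fit(x.reshape(-1, 1), z).predict_proba(x.reshape(-1, 1))[:, 1]
    w = np.where(z == 1, z.mean() / ps, (1 - z.mean()) / (1 - ps))

    def weighted_km(times, events, weights, grid):
        """Weighted Kaplan-Meier survival evaluated on a time grid."""
        order = np.argsort(times)
        times, events, weights = times[order], events[order], weights[order]
        at_risk = weights.sum()
        surv, s, i = [], 1.0, 0
        for g in grid:
            while i < len(times) and times[i] <= g:
                if events[i]:
                    s *= 1.0 - weights[i] / at_risk
                at_risk -= weights[i]
                i += 1
            surv.append(s)
        return np.array(surv)

    grid = np.linspace(0, 5, 6)
    s1 = weighted_km(t[z == 1], event[z == 1], w[z == 1], grid)
    s0 = weighted_km(t[z == 0], event[z == 0], w[z == 0], grid)
    print("time grid:               ", grid)
    print("marginal S(t), treated:  ", np.round(s1, 3))
    print("marginal S(t), untreated:", np.round(s0, 3))
    print("absolute risk difference at t=5:", round(s1[-1] - s0[-1], 3))
    ```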

  7. Osmotic fragility changes in preserved blood: measurements by coil planet centrifuge and parpart methods.

    PubMed

    Sasakawa, S; Tokunaga, E; Hasegawa, G; Nakagawa, S

    1977-09-01

    The coil planet centrifuge (CPC) can be used to measure the osmotic fragility of erythrocytes. Fragility measured by this method alters when different salts are used. The CPC and Parpart methods were used to measure the changes during storage in red cell osmotic fragility in ACD or CPD blood with or without adenine. More marked changes were detected by the CPC method, especially in old cells. The changes of fragility of erythrocytes during storage seem to occur mainly in old cells. Adenine is effective in preventing such changes.

  8. Dent detection method by high gradation photometric stereo

    NASA Astrophysics Data System (ADS)

    Hasebe, Akihisa; Kato, Kunihito; Tanahashi, Hideki; Kubota, Naoki

    2017-03-01

    This paper describes an automatic detection method for small dents on a metal plate. We adopted photometric stereo as the three-dimensional measurement method, which has advantages in terms of low cost and short measurement time. In addition, a high precision measurement system was realized by using an 18-bit camera. Furthermore, small dents on the surface of the metal plate are detected from the inner product of the surface normal vectors measured by photometric stereo. Finally, the effectiveness of our method was confirmed by detection experiments.
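    Photometric stereo recovers a per-pixel, albedo-scaled surface normal from images taken under several known light directions by solving I = L·(ρn) in a least-squares sense; a dent then shows up where the inner product between the recovered normal and the flat-plate reference normal drops. The sketch below uses synthetic Lambertian data with assumed light directions and an assumed threshold; it is not the authors' 18-bit imaging setup.

    ```python
    import numpy as np

    # Minimal photometric-stereo sketch: recover albedo-scaled normals from images
    # taken under known light directions, then flag pixels whose normal deviates
    # from the flat-plate reference normal. All data below are synthetic.
    rng = np.random.default_rng(0)

    L = np.array([[0.0, 0.0, 1.0],        # light directions (unit vectors), one per image
                  [0.6, 0.0, 0.8],
                  [0.0, 0.6, 0.8],
                  [-0.6, 0.0, 0.8]])

    h = w = 64
    normals_true = np.zeros((h, w, 3))
    normals_true[..., 2] = 1.0                                        # flat plate
    normals_true[30:34, 30:34] = [0.3, 0.0, np.sqrt(1 - 0.09)]        # small synthetic "dent"
    albedo = 0.9

    # Rendered intensities under Lambertian reflectance (one image per light) plus noise
    I = albedo * np.einsum('ij,hwj->hwi', L, normals_true)
    I += 0.002 * rng.standard_normal(I.shape)

    # Least-squares normal recovery per pixel: g = pinv(L) @ I, n = g / |g|
    g = np.einsum('mk,hwk->hwm', np.linalg.pinv(L), I)
    n_hat = g / np.linalg.norm(g, axis=-1, keepdims=True)

    # Dent detection: inner product with the flat reference normal [0, 0, 1]
    dot = n_hat[..., 2]
    dent_mask = dot < 0.995
    print("flagged dent pixels:", int(dent_mask.sum()))
    ```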

  9. Proposal on Calculation of Ventilation Threshold Using Non-contact Respiration Measurement with Pattern Light Projection

    NASA Astrophysics Data System (ADS)

    Aoki, Hirooki; Ichimura, Shiro; Fujiwara, Toyoki; Kiyooka, Satoru; Koshiji, Kohji; Tsuzuki, Keishi; Nakamura, Hidetoshi; Fujimoto, Hideo

    We proposed a calculation method for the ventilation threshold using non-contact respiration measurement with dot-matrix pattern light projection during pedaling exercise. The validity and effectiveness of our proposed method are examined by simultaneous measurement with an expiration gas analyzer. The experimental results showed that a correlation exists between the quasi ventilation thresholds calculated by our proposed method and the ventilation thresholds calculated by the expiration gas analyzer. This result indicates the possibility of non-contact measurement of the ventilation threshold by the proposed method.

  10. Derivation and interpretation of hazard quotients to assess ecological risks from the cultivation of insect-resistant transgenic crops.

    PubMed

    Raybould, Alan; Caron-Lormier, Geoffrey; Bohan, David A

    2011-06-08

    Cost-effective and rigorous risk assessments for chemicals may be based on hazard quotients (HQs): the ratio of a measure of exposure to a substance and a measure of the effect of that substance. HQs have been used for many years in ecological risk assessments for the use of synthetic pesticides in agriculture, and methods for calculating pesticide HQs have been adapted for use with transgenic crops. This paper describes how laboratory methods for assessing the ecotoxicological effects of synthetic pesticides have been modified for the measurement of effects of insecticidal proteins, and how these effect measures are combined with exposure estimates to derive HQs for assessing the ecological risks from the cultivation of insect-resistant transgenic crops. The potential for ecological modeling to inform the design of laboratory effects tests for insecticidal proteins is also discussed.

  11. The effect of a lignosulphate type additive on the lead-acid battery positive plate reactions

    NASA Astrophysics Data System (ADS)

    Ovuru, S. E.; Harrison, J. A.

    The electrochemical formation of lead dioxide has been investigated at a lead electrode in a 5 M sulphuric acid solution, and in the presence of phosphoric acid and a lignosulphate-type additive. The formation of lead dioxide from lead sulphate, and the reverse reaction, have been investigated by the linear potential sweep method, by an impedance method in which the impedance was measured at the end of each pulse during a potential pulse train, and by a charging curve method in which the current and charge were measured during a similar potential pulse train. The charge measurements prove that the main effect of the additive is to decrease the accompanying oxygen evolution reaction. The impedance measurements, however, show that the additive has a small but significant effect on the structure of the solid lead sulphate and lead dioxide layers.

  12. Quantitative measurement of pass-by noise radiated by vehicles running at high speeds

    NASA Astrophysics Data System (ADS)

    Yang, Diange; Wang, Ziteng; Li, Bing; Luo, Yugong; Lian, Xiaomin

    2011-03-01

    It has been a challenge in the past to accurately locate and quantify the pass-by noise sources radiated by running vehicles. A system composed of a microphone array is developed in the current work for this purpose. An acoustic-holography method for moving sound sources is designed to handle the Doppler effect effectively in the time domain. The effective sound pressure distribution is reconstructed on the surface of a running vehicle. The method achieves high calculation efficiency and is able to quantitatively measure the sound pressure at the sound source and identify the location of the main sound source. The method is also validated by simulation experiments and by measurement tests with known moving speakers. Finally, the engine noise, tire noise, exhaust noise and wind noise of the vehicle running at different speeds are successfully identified by this method.

  13. A new method for distortion magnetic field compensation of a geomagnetic vector measurement system

    NASA Astrophysics Data System (ADS)

    Liu, Zhongyan; Pan, Mengchun; Tang, Ying; Zhang, Qi; Geng, Yunling; Wan, Chengbiao; Chen, Dixiang; Tian, Wugang

    2016-12-01

    The geomagnetic vector measurement system mainly consists of a three-axis magnetometer and an INS (inertial navigation system), which carry many ferromagnetic parts. The magnetometer is always distorted by ferromagnetic parts and other electrical equipment within the system, such as the INS and the power circuit module, which can lead to geomagnetic vector measurement errors of thousands of nT. Thus, the geomagnetic vector measurement system has to be compensated in order to guarantee the measurement accuracy. In this paper, a new distortion magnetic field compensation method is proposed, in which a permanent magnet placed at different relative positions is used to change the ambient magnetic field and construct equations for the error model parameters, which can then be accurately estimated by solving linear equations. In order to verify the effectiveness of the proposed method, an experiment was conducted, and the results demonstrate that, after compensation, the component errors of the measured geomagnetic field are reduced significantly. This demonstrates that the proposed method can effectively improve the accuracy of the geomagnetic vector measurement system.

  14. Multi-method Assessment of Psychopathy in Relation to Factors of Internalizing and Externalizing from the Personality Assessment Inventory: The Impact of Method Variance and Suppressor Effects

    PubMed Central

    Blonigen, Daniel M.; Patrick, Christopher J.; Douglas, Kevin S.; Poythress, Norman G.; Skeem, Jennifer L.; Lilienfeld, Scott O.; Edens, John F.; Krueger, Robert F.

    2010-01-01

    Research to date has revealed divergent relations across factors of psychopathy measures with criteria of internalizing (INT; anxiety, depression) and externalizing (EXT; antisocial behavior, substance use). However, failure to account for method variance and suppressor effects has obscured the consistency of these findings across distinct measures of psychopathy. Using a large correctional sample, the current study employed a multi-method approach to psychopathy assessment (self-report, interview/file review) to explore convergent and discriminant relations between factors of psychopathy measures and latent criteria of INT and EXT derived from the Personality Assessment Inventory (PAI; L. Morey, 2007). Consistent with prediction, scores on the affective-interpersonal factor of psychopathy were negatively associated with INT and negligibly related to EXT, whereas scores on the social deviance factor exhibited positive associations (moderate and large, respectively) with both INT and EXT. Notably, associations were highly comparable across the psychopathy measures when accounting for method variance (in the case of EXT) and when assessing for suppressor effects (in the case of INT). Findings are discussed in terms of implications for clinical assessment and evaluation of the validity of interpretations drawn from scores on psychopathy measures. PMID:20230156

  15. The Accuracy and Precision of Flow Measurements Using Phase Contrast Techniques

    NASA Astrophysics Data System (ADS)

    Tang, Chao

    Quantitative volume flow rate measurements using the magnetic resonance imaging technique are studied in this dissertation because the volume flow rates have a special interest in the blood supply of the human body. The method of quantitative volume flow rate measurements is based on the phase contrast technique, which assumes a linear relationship between the phase and flow velocity of spins. By measuring the phase shift of nuclear spins and integrating velocity across the lumen of the vessel, we can determine the volume flow rate. The accuracy and precision of volume flow rate measurements obtained using the phase contrast technique are studied by computer simulations and experiments. The various factors studied include (1) the partial volume effect due to voxel dimensions and slice thickness relative to the vessel dimensions; (2) vessel angulation relative to the imaging plane; (3) intravoxel phase dispersion; (4) flow velocity relative to the magnitude of the flow encoding gradient. The partial volume effect is demonstrated to be the major obstacle to obtaining accurate flow measurements for both laminar and plug flow. Laminar flow can be measured more accurately than plug flow in the same condition. Both the experiment and simulation results for laminar flow show that, to obtain the accuracy of volume flow rate measurements to within 10%, at least 16 voxels are needed to cover the vessel lumen. The accuracy of flow measurements depends strongly on the relative intensity of signal from stationary tissues. A correction method is proposed to compensate for the partial volume effect. The correction method is based on a small phase shift approximation. After the correction, the errors due to the partial volume effect are compensated, allowing more accurate results to be obtained. An automatic program based on the correction method is developed and implemented on a Sun workstation. The correction method is applied to the simulation and experiment results. The results show that the correction significantly reduces the errors due to the partial volume effect. We apply the correction method to the data of in vivo studies. Because the blood flow is not known, the results of correction are tested according to the common knowledge (such as cardiac output) and conservation of flow. For example, the volume of blood flowing to the brain should be equal to the volume of blood flowing from the brain. Our measurement results are very convincing.
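    In phase-contrast imaging, the through-plane velocity in each voxel is proportional to the measured phase shift, v = (Δφ/π)·VENC, and the volume flow rate is the sum of the in-lumen velocities times the voxel area. The sketch below integrates a synthetic laminar profile with assumed VENC and voxel size; it is not the dissertation's partial-volume correction method.

    ```python
    import numpy as np

    # Sketch: volume flow rate from a phase-contrast velocity map.
    # v = (delta_phi / pi) * VENC per voxel; Q = sum(v) * voxel_area over the lumen ROI.
    venc = 100.0                      # cm/s, velocity encoding value (illustrative)
    voxel = 0.1                       # cm, in-plane voxel size -> area 0.01 cm^2

    # Synthetic parabolic (laminar) profile in a 0.4 cm radius vessel, peak 60 cm/s
    n = 64
    y, x = np.mgrid[:n, :n]
    r = np.hypot((x - n / 2) * voxel, (y - n / 2) * voxel)
    lumen = r <= 0.4
    v_true = np.where(lumen, 60.0 * (1.0 - (r / 0.4) ** 2), 0.0)      # cm/s

    phase = np.pi * v_true / venc     # phase map the scanner would measure
    v_meas = phase / np.pi * venc     # velocity decoded from the phase
    Q = v_meas[lumen].sum() * voxel**2          # cm^3/s

    print(f"Volume flow rate ~ {Q:.2f} mL/s ({Q * 60:.1f} mL/min)")
    ```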

  16. Measurements of the effective atomic numbers of minerals using bremsstrahlung produced by low-energy electrons

    NASA Astrophysics Data System (ADS)

    Czarnecki, S.; Williams, S.

    2017-12-01

    The accuracy of a method for measuring the effective atomic numbers of minerals using bremsstrahlung intensities has been investigated. The method is independent of detector-efficiency and maximum accelerating voltage. In order to test the method, experiments were performed which involved low-energy electrons incident on thick malachite, pyrite, and galena targets. The resultant thick-target bremsstrahlung was compared to bremsstrahlung produced using a standard target, and experimental effective atomic numbers were calculated using data from a previous study (in which the Z-dependence of thick-target bremsstrahlung was studied). Comparisons of the results to theoretical values suggest that the method has potential for implementation in energy-dispersive X-ray spectroscopy systems.

  17. Note: A dual-channel sensor for dew point measurement based on quartz crystal microbalance.

    PubMed

    Li, Ning; Meng, Xiaofeng; Nie, Jing

    2017-05-01

    A new sensor with dual-channel was designed for eliminating the temperature effect on the frequency measurement of the quartz crystal microbalance (QCM) in dew point detection. The sensor uses active temperature control, produces condensation on the surface of QCM, and then detects the dew point. Both the single-channel and the dual-channel methods were conducted based on the device. The measurement error of the single-channel method was less than 0.5 °C at the dew point range of -2 °C-10 °C while the dual-channel was 0.3 °C. The results showed that the dual-channel method was able to eliminate the temperature effect and yield better measurement accuracy.

  18. Note: A dual-channel sensor for dew point measurement based on quartz crystal microbalance

    NASA Astrophysics Data System (ADS)

    Li, Ning; Meng, Xiaofeng; Nie, Jing

    2017-05-01

    A new sensor with dual-channel was designed for eliminating the temperature effect on the frequency measurement of the quartz crystal microbalance (QCM) in dew point detection. The sensor uses active temperature control, produces condensation on the surface of QCM, and then detects the dew point. Both the single-channel and the dual-channel methods were conducted based on the device. The measurement error of the single-channel method was less than 0.5 °C at the dew point range of -2 °C-10 °C while the dual-channel was 0.3 °C. The results showed that the dual-channel method was able to eliminate the temperature effect and yield better measurement accuracy.

  19. ANOVA with Rasch Measures.

    ERIC Educational Resources Information Center

    Linacre, John Michael

    Various methods of estimating main effects from ordinal data are presented and contrasted. Problems discussed include: (1) at what level to accumulate ordinal data into linear measures; (2) how to maintain scaling across analyses; and (3) the inevitable confounding of within cell variance with measurement error. An example shows three methods of…

  20. A Review of Treatment Adherence Measurement Methods

    ERIC Educational Resources Information Center

    Schoenwald, Sonja K.; Garland, Ann F.

    2013-01-01

    Fidelity measurement is critical for testing the effectiveness and implementation in practice of psychosocial interventions. Adherence is a critical component of fidelity. The purposes of this review were to catalogue adherence measurement methods and assess existing evidence for the valid and reliable use of the scores that they generate and the…

  1. Full-field 3D shape measurement of specular object having discontinuous surfaces

    NASA Astrophysics Data System (ADS)

    Zhang, Zonghua; Huang, Shujun; Gao, Nan; Gao, Feng; Jiang, Xiangqian

    2017-06-01

    This paper presents a novel Phase Measuring Deflectometry (PMD) method to measure specular objects having discontinuous surfaces. A mathematical model is established to directly relate the absolute phase and depth, instead of the phase and gradient. Based on the model, a hardware measuring system has been set up, which consists of a precise translating stage, a projector, a diffuser and a camera. The stage positions the projector and the diffuser together at a known position during measurement. By using the model-based and machine vision methods, system calibration is accomplished to provide the required parameters and conditions. Verification tests are given to evaluate the effectiveness of the developed system. 3D (three-dimensional) shapes of a concave mirror and a monolithic multi-mirror array having multiple specular surfaces have been measured. Experimental results show that the proposed method can effectively obtain the 3D shape of specular objects having discontinuous surfaces.

  2. Film thickness measurement based on nonlinear phase analysis using a Linnik microscopic white-light spectral interferometer.

    PubMed

    Guo, Tong; Chen, Zhuo; Li, Minghui; Wu, Juhong; Fu, Xing; Hu, Xiaotang

    2018-04-20

    Based on white-light spectral interferometry and the Linnik microscopic interference configuration, the nonlinear phase components of the spectral interferometric signal were analyzed for film thickness measurement. The spectral interferometric signal was obtained using a Linnik microscopic white-light spectral interferometer, which includes the nonlinear phase components associated with the effective thickness, the nonlinear phase error caused by the double-objective lens, and the nonlinear phase of the thin film itself. To determine the influence of the effective thickness, a wavelength-correction method was proposed that converts the effective thickness into a constant value; the nonlinear phase caused by the effective thickness can then be determined and subtracted from the total nonlinear phase. A method for the extraction of the nonlinear phase error caused by the double-objective lens was also proposed. Accurate thickness measurement of a thin film can be achieved by fitting the nonlinear phase of the thin film after removal of the nonlinear phase caused by the effective thickness and by the nonlinear phase error caused by the double-objective lens. The experimental results demonstrated that both the wavelength-correction method and the extraction method for the nonlinear phase error caused by the double-objective lens improve the accuracy of film thickness measurements.

  3. Estimating regression to the mean and true effects of an intervention in a four-wave panel study.

    PubMed

    Gmel, Gerhard; Wicki, Matthias; Rehm, Jürgen; Heeb, Jean-Luc

    2008-01-01

    First, to analyse whether a taxation-related decrease in spirit prices had a similar effect on spirit consumption for low-, medium- and high-level drinkers. Secondly, as the relationship between baseline values and post-intervention changes is confounded with regression to the mean (RTM) effects, to apply different approaches for estimating the RTM effect and true change. Consumption of spirits and total alcohol consumption were analysed in a four-wave panel study (one pre-intervention and three post-intervention measurements) of 889 alcohol consumers sampled from the general population of Switzerland. Two correlational methods, one method quantitatively estimating the RTM effect and one growth curve approach based on hierarchical linear models (HLM), were used to estimate RTM effects among low-, medium- and high-level drinkers. Adjusted for RTM effects, high-level drinkers increased consumption more than lighter drinkers in the short term, but this was not a persisting effect. Changes in taxation affected mainly light and moderate drinkers in the long term. All methods concurred that RTM effects were present to a considerable degree, and methods quantifying the RTM effect or adjusting for it yielded similar estimates. Intervention studies have to consider RTM effects both in the study design and in the evaluation methods. Observed changes can be adjusted for RTM effects and true change can be estimated. The recommended method, particularly if the aim is to estimate change not only for the sample as a whole, but for groups of drinkers with different baseline consumption levels, is growth curve modelling. If reliability of measurement instruments cannot be increased, the incorporation of more than one pre-intervention measurement point may be a valuable adjustment of the study design.
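    The adjustment these methods quantify can be illustrated with the classical bivariate-normal result: for a baseline value x0 from a population with mean mu and test-retest correlation rho, the expected follow-up change due to regression to the mean alone is (1 - rho)*(mu - x0), and the "true" change is the observed change minus this quantity. The numbers in the sketch below are purely illustrative, not the Swiss panel data.

    ```python
    def expected_rtm_change(x0, pop_mean, test_retest_r):
        """Expected change from baseline due to regression to the mean alone:
        E[x1 - x0 | x0] = (1 - rho) * (mu - x0) under a bivariate normal model."""
        return (1.0 - test_retest_r) * (pop_mean - x0)

    # Illustrative: heavy drinker at 40 drinks/week, population mean 12, rho = 0.7
    x0, mu, rho = 40.0, 12.0, 0.7
    rtm = expected_rtm_change(x0, mu, rho)
    observed_change = -6.0                       # observed post-intervention change (illustrative)
    true_change = observed_change - rtm          # change adjusted for the RTM effect

    print(f"RTM-only expected change: {rtm:+.1f}")
    print(f"RTM-adjusted (true) change: {true_change:+.1f}")
    ```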

  4. Evaluation of Two Methods for Modeling Measurement Errors When Testing Interaction Effects with Observed Composite Scores

    ERIC Educational Resources Information Center

    Hsiao, Yu-Yu; Kwok, Oi-Man; Lai, Mark H. C.

    2018-01-01

    Path models with observed composites based on multiple items (e.g., mean or sum score of the items) are commonly used to test interaction effects. Under this practice, researchers generally assume that the observed composites are measured without errors. In this study, we reviewed and evaluated two alternative methods within the structural…

  5. Cost-Effectiveness Analysis of Three Leprosy Case Detection Methods in Northern Nigeria

    PubMed Central

    Ezenduka, Charles; Post, Erik; John, Steven; Suraj, Abdulkarim; Namadi, Abdulahi; Onwujekwe, Obinna

    2012-01-01

    Background: Despite several leprosy control measures in Nigeria, child proportion and disability grade 2 cases remain high while new cases have not significantly reduced, suggesting continuous spread of the disease. Hence, there is the need to review detection methods to enhance identification of early cases for effective control and prevention of permanent disability. This study evaluated the cost-effectiveness of three leprosy case detection methods in Northern Nigeria to identify the most cost-effective approach for detection of leprosy. Methods: A cross-sectional study was carried out to evaluate the additional benefits of using several case detection methods in addition to routine practice in two north-eastern states of Nigeria. Primary and secondary data were collected from routine practice records and the Nigerian Tuberculosis and Leprosy Control Programme of 2009. The methods evaluated were Rapid Village Survey (RVS), Household Contact Examination (HCE) and the Traditional Healers incentive method (TH). Effectiveness was measured as the number of new leprosy cases detected and cost-effectiveness was expressed as cost per case detected. Costs were measured from both providers' and patients' perspectives. Additional costs and effects of each method were estimated by comparing each method against routine practice and expressed as an incremental cost-effectiveness ratio (ICER). All costs were converted to the U.S. dollar at the 2010 exchange rate. Univariate sensitivity analysis was used to evaluate uncertainties around the ICER. Results: The ICER for HCE was $142 per additional case detected at all contact levels and it was the most cost-effective method. At an ICER of $194 per additional case detected, the TH method detected more cases at a lower cost than the RVS, which was not cost-effective at $313 per additional case detected. Sensitivity analysis showed that varying the proportion of shared costs and the subsistence wage for valuing unpaid time did not significantly change the results. Conclusion: Complementing routine practice with household contact examination is the most cost-effective approach to identify new leprosy cases and we recommend that, depending on acceptability and feasibility, this intervention is introduced for improved case detection in Northern Nigeria. PMID:23029580
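    The ICERs quoted are computed in the standard way as the additional cost divided by the additional cases detected relative to routine practice. A one-function sketch with hypothetical cost and case counts (not the study's data):

    ```python
    def icer(cost_new, effect_new, cost_base, effect_base):
        """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
        return (cost_new - cost_base) / (effect_new - effect_base)

    # Hypothetical numbers (USD, cases detected) -- not the study's data
    routine = {"cost": 10_000.0, "cases": 50}
    hce     = {"cost": 17_100.0, "cases": 100}   # routine + household contact examination

    print(f"ICER = ${icer(hce['cost'], hce['cases'], routine['cost'], routine['cases']):.0f} "
          f"per additional case detected")
    ```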

  6. The use of propensity score methods with survival or time-to-event outcomes: reporting measures of effect similar to those used in randomized experiments.

    PubMed

    Austin, Peter C

    2014-03-30

    Propensity score methods are increasingly being used to estimate causal treatment effects in observational studies. In medical and epidemiological studies, outcomes are frequently time-to-event in nature. Propensity-score methods are often applied incorrectly when estimating the effect of treatment on time-to-event outcomes. This article describes how two different propensity score methods (matching and inverse probability of treatment weighting) can be used to estimate the measures of effect that are frequently reported in randomized controlled trials: (i) marginal survival curves, which describe survival in the population if all subjects were treated or if all subjects were untreated; and (ii) marginal hazard ratios. The use of these propensity score methods allows one to replicate the measures of effect that are commonly reported in randomized controlled trials with time-to-event outcomes: both absolute and relative reductions in the probability of an event occurring can be determined. We also provide guidance on variable selection for the propensity score model, highlight methods for assessing the balance of baseline covariates between treated and untreated subjects, and describe the implementation of a sensitivity analysis to assess the effect of unmeasured confounding variables on the estimated treatment effect when outcomes are time-to-event in nature. The methods in the paper are illustrated by estimating the effect of discharge statin prescribing on the risk of death in a sample of patients hospitalized with acute myocardial infarction. In this tutorial article, we describe and illustrate all the steps necessary to conduct a comprehensive analysis of the effect of treatment on time-to-event outcomes. © 2013 The authors. Statistics in Medicine published by John Wiley & Sons, Ltd.

  7. Density measurements in low pressure, weakly magnetized, RF plasmas: experimental verification of the sheath expansion effect

    NASA Astrophysics Data System (ADS)

    Zhang, Yunchao; Charles, Christine; Boswell, Roderick W.

    2017-07-01

    This experimental study shows the validity of Sheridan's method in determining plasma density in low pressure, weakly magnetized, RF plasmas using ion saturation current data measured by a planar Langmuir probe. The ion density derived from Sheridan's method, which takes into account the sheath expansion around the negatively biased probe tip, shows good consistency with the electron density measured by a cylindrical RF-compensated Langmuir probe using the Druyvesteyn theory. The ion density obtained from the simplified method, which neglects the sheath expansion effect, overestimates the true density magnitude, e.g., by a factor of 3 to 12 for the present experiment.

  8. The method for measuring the groove density of variable-line-space gratings with elimination of the eccentricity effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qingbo; Liu, Zhengkun, E-mail: zhkliu@ustc.edu.cn; Chen, Huoyao

    2015-02-15

    To eliminate the eccentricity effect, a new method for measuring the groove density of a variable-line-space grating was adapted. Based on the grating equation, the groove density is calculated by measuring the internal angles between the zeroth-order and first-order diffracted light for two different wavelengths with the same angle of incidence. The measurement system mainly includes two laser sources, a phase plate, a plane mirror, and a charge coupled device. The measurement results for a variable-line-space grating demonstrate that the experimental data agree well with theoretical values, and the measurement error (ΔN/N) is less than 2.72 × 10⁻⁴.
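    With the incidence angle unknown but common to both measurements, the two measured zeroth-to-first-order angles give two grating equations in two unknowns (incidence angle and groove density), which can be solved by eliminating the groove density and finding the incidence angle numerically. The sketch below assumes a particular sign convention (angles measured from the grating normal) and illustrative measured angles; it is not the authors' instrument geometry.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Sketch: recover groove density N from the internal angles between the zeroth-
    # and first-order beams measured at two wavelengths with the same (unknown)
    # incidence angle. Assumed convention: sin(theta_i + A) - sin(theta_i) = m*lambda*N.
    lam1, lam2 = 632.8e-9, 532.0e-9                  # m, the two laser wavelengths
    A1, A2 = np.deg2rad(56.0), np.deg2rad(43.0)      # measured 0th-to-1st order angles (illustrative)
    m = 1

    def ratio_residual(theta_i):
        # Eliminating N: [sin(ti+A1)-sin(ti)] / [sin(ti+A2)-sin(ti)] must equal lam1/lam2
        return (np.sin(theta_i + A1) - np.sin(theta_i)) / \
               (np.sin(theta_i + A2) - np.sin(theta_i)) - lam1 / lam2

    theta_i = brentq(ratio_residual, np.deg2rad(0.1), np.deg2rad(40.0))
    N = (np.sin(theta_i + A1) - np.sin(theta_i)) / (m * lam1)   # grooves per metre

    print(f"incidence angle ~ {np.degrees(theta_i):.2f} deg, "
          f"groove density ~ {N / 1e3:.0f} lines/mm")
    ```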

  9. Effects of Barometric Fluctuations on Well Water-Level Measurements and Aquifer Test Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spane, Frank A.

    1999-12-16

    This report examines the effects of barometric fluctuations on well water-level measurements and evaluates adjustment and removal methods for determining areal aquifer head conditions and aquifer test analysis. Two examples of Hanford Site unconfined aquifer tests are examined that demonstrate barometric response analysis and illustrate the predictive/removal capabilities of various methods for well water-level and aquifer total head values. Good predictive/removal characteristics were demonstrated, with the best corrective results provided by multiple-regression deconvolution methods.

  10. Simple and cost-effective liquid chromatography-mass spectrometry method to measure dabrafenib quantitatively and six metabolites semi-quantitatively in human plasma.

    PubMed

    Vikingsson, Svante; Dahlberg, Jan-Olof; Hansson, Johan; Höiom, Veronica; Gréen, Henrik

    2017-06-01

    Dabrafenib is an inhibitor of BRAF V600E used for treating metastatic melanoma, but a majority of patients experience adverse effects. Methods to measure the levels of dabrafenib and major metabolites during treatment are needed to allow development of individualized dosing strategies to reduce the burden of such adverse events. In this study, an LC-MS/MS method capable of measuring dabrafenib quantitatively and six metabolites semi-quantitatively is presented. The method is fully validated with regard to dabrafenib in human plasma in the range 5-5000 ng/mL. The analytes were separated on a C18 column after protein precipitation and detected in positive electrospray ionization mode using a Xevo TQ triple quadrupole mass spectrometer. As no commercial reference standards are available, the calibration curve of dabrafenib was used for semi-quantification of dabrafenib metabolites. Compared to earlier methods, the presented method represents a simpler and more cost-effective approach suitable for clinical studies. Graphical abstract: combined multiple reaction monitoring transitions of dabrafenib and its metabolites in a typical case sample.

  11. Instrumentation and method for measuring NIR light absorbed in tissue during MR imaging in medical NIRS measurements

    NASA Astrophysics Data System (ADS)

    Myllylä, Teemu S.; Sorvoja, Hannu S. S.; Nikkinen, Juha; Tervonen, Osmo; Kiviniemi, Vesa; Myllylä, Risto A.

    2011-07-01

    Our goal is to provide a cost-effective method for examining human tissue, particularly the brain, by the simultaneous use of functional magnetic resonance imaging (fMRI) and near-infrared spectroscopy (NIRS). Due to its compatibility requirements, MRI poses a demanding challenge for NIRS measurements. This paper focuses particularly on presenting the instrumentation and a method for the non-invasive measurement of NIR light absorbed in human tissue during MR imaging. One practical method to avoid disturbances in MR imaging involves using long fibre bundles to enable conducting the measurements at some distance from the MRI scanner. This setup in fact serves a dual purpose, since the NIRS device is also less disturbed by the MRI scanner. However, measurements based on long fibre bundles suffer from light attenuation. Furthermore, because one of our primary goals was to make the measuring method as cost-effective as possible, we used high-power light emitting diodes instead of more expensive lasers. The use of LEDs, however, limits the maximum output power which can be extracted to illuminate the tissue. To meet these requirements, we improved methods of emitting light sufficiently deep into tissue. We also show how to measure NIR light of a very small power level that scatters from the tissue in the MRI environment, which is characterized by strong electromagnetic interference. In this paper, we present the implemented instrumentation and measuring method and report on test measurements conducted during MRI scanning. These measurements were performed in MRI operating rooms housing 1.5 Tesla-strength closed MRI scanners (manufactured by GE) in the Dept. of Diagnostic Radiology at the Oulu University Hospital.

  12. A method to measure internal contact angle in opaque systems by magnetic resonance imaging.

    PubMed

    Zhu, Weiqin; Tian, Ye; Gao, Xuefeng; Jiang, Lei

    2013-07-23

    Internal contact angle is an important parameter for internal wettability characterization. However, due to the limitations of optical imaging, the methods available for contact angle measurement are only suitable for transparent or open systems. For most practical situations that require contact angle measurement in opaque or enclosed systems, the traditional methods are not effective. To address this requirement, a method suitable for contact angle measurement in nontransparent systems is developed by employing MRI technology. In this article, the method is demonstrated by measuring internal contact angles in opaque cylindrical tubes. The method also proves feasible in transparent situations and in opaque capillary systems. Using this method, contact angles in opaque systems can be measured successfully, which is significant for understanding wetting behaviors in nontransparent systems and for calculating interfacial parameters in enclosed systems.

  13. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    PubMed

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools have been discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It has been shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate, or even change the sign of) regression coefficients, understate the explanatory contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussion is restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement errors, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experimental results, which the authors believe is critical to research progress in theory development and cumulative knowledge in the ergonomics field.
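    To make the attenuation effect concrete, the short sketch below (not from the paper) simulates two variables with a known true correlation, adds random measurement error with chosen reliabilities, and compares the observed correlation with the classical attenuation prediction r_obs ≈ r_true · sqrt(rel_x · rel_y). All numbers are illustrative.

        # Hedged illustration of correlation attenuation caused by random measurement error.
        import numpy as np

        rng = np.random.default_rng(1)
        n, r_true = 100_000, 0.60
        x_true = rng.normal(size=n)
        y_true = r_true * x_true + np.sqrt(1 - r_true**2) * rng.normal(size=n)

        # Add measurement error so the reliabilities (true/total variance) are 0.7 and 0.8.
        rel_x, rel_y = 0.7, 0.8
        x_obs = x_true + rng.normal(scale=np.sqrt(1 / rel_x - 1), size=n)
        y_obs = y_true + rng.normal(scale=np.sqrt(1 / rel_y - 1), size=n)

        r_obs = np.corrcoef(x_obs, y_obs)[0, 1]
        print(f"true r = {r_true}, observed r = {r_obs:.3f}, "
              f"predicted attenuated r = {r_true * np.sqrt(rel_x * rel_y):.3f}")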

  14. Motion induced interplay effects for VMAT radiotherapy.

    PubMed

    Edvardsson, Anneli; Nordström, Fredrik; Ceberg, Crister; Ceberg, Sofie

    2018-04-19

    The purpose of this study was to develop a method to simulate breathing motion induced interplay effects for volumetric modulated arc therapy (VMAT), to verify the proposed method with measurements, and to use the method to investigate how interplay effects vary with different patient- and machine-specific parameters. VMAT treatment plans were created on a virtual phantom in a treatment planning system (TPS). Interplay effects were simulated by dividing each plan into smaller sub-arcs using an in-house developed software and shifting the isocenter for each sub-arc to simulate a sin⁶ breathing motion in the superior-inferior direction. The simulations were performed for both flattening-filter (FF) and flattening-filter free (FFF) plans and for different breathing amplitudes, period times, initial breathing phases, dose levels, plan complexities, CTV sizes, and collimator angles. The resulting sub-arcs were calculated in the TPS, generating a dose distribution including the effects of motion. The interplay effects were separated from dose blurring and the relative dose differences to 2% and 98% of the CTV volume (ΔD98% and ΔD2%) were calculated. To verify the simulation method, measurements were carried out, both static and during motion, using a quasi-3D phantom and a motion platform. The results of the verification measurements during motion were comparable to the results of the static measurements. Considerable interplay effects were observed for individual fractions, with the minimum ΔD98% and maximum ΔD2% being -16.7% and 16.2%, respectively. The extent of interplay effects was larger for FFF compared to FF and generally increased for higher breathing amplitudes, larger period times, lower dose levels, and more complex treatment plans. Also, the interplay effects varied considerably with the initial breathing phase, and larger variations were observed for smaller CTV sizes. In conclusion, a method to simulate motion induced interplay effects was developed and verified with measurements, which allowed for a large number of treatment scenarios to be investigated. The simulations showed large interplay effects for individual fractions and that the extent of interplay effects varied with the breathing pattern, FFF/FF, dose level, CTV size, collimator angle, and the complexity of the treatment plan.

  15. Motion induced interplay effects for VMAT radiotherapy

    NASA Astrophysics Data System (ADS)

    Edvardsson, Anneli; Nordström, Fredrik; Ceberg, Crister; Ceberg, Sofie

    2018-04-01

    The purpose of this study was to develop a method to simulate breathing motion induced interplay effects for volumetric modulated arc therapy (VMAT), to verify the proposed method with measurements, and to use the method to investigate how interplay effects vary with different patient- and machine-specific parameters. VMAT treatment plans were created on a virtual phantom in a treatment planning system (TPS). Interplay effects were simulated by dividing each plan into smaller sub-arcs using an in-house developed software and shifting the isocenter for each sub-arc to simulate a sin⁶ breathing motion in the superior–inferior direction. The simulations were performed for both flattening-filter (FF) and flattening-filter free (FFF) plans and for different breathing amplitudes, period times, initial breathing phases, dose levels, plan complexities, CTV sizes, and collimator angles. The resulting sub-arcs were calculated in the TPS, generating a dose distribution including the effects of motion. The interplay effects were separated from dose blurring and the relative dose differences to 2% and 98% of the CTV volume (ΔD98% and ΔD2%) were calculated. To verify the simulation method, measurements were carried out, both static and during motion, using a quasi-3D phantom and a motion platform. The results of the verification measurements during motion were comparable to the results of the static measurements. Considerable interplay effects were observed for individual fractions, with the minimum ΔD98% and maximum ΔD2% being -16.7% and 16.2%, respectively. The extent of interplay effects was larger for FFF compared to FF and generally increased for higher breathing amplitudes, larger period times, lower dose levels, and more complex treatment plans. Also, the interplay effects varied considerably with the initial breathing phase, and larger variations were observed for smaller CTV sizes. In conclusion, a method to simulate motion induced interplay effects was developed and verified with measurements, which allowed for a large number of treatment scenarios to be investigated. The simulations showed large interplay effects for individual fractions and that the extent of interplay effects varied with the breathing pattern, FFF/FF, dose level, CTV size, collimator angle, and the complexity of the treatment plan.

  16. Ranking filter methods for concentrating pathogens in lake water

    USDA-ARS?s Scientific Manuscript database

    Accurately comparing filtration methods for concentrating waterborne pathogens is difficult because of two important water matrix effects on recovery measurements, the effect on PCR quantification and the effect on filter performance. Regarding the first effect, we show how to create a control water...

  17. Full-field measurement of surface topographies and thin film stresses at elevated temperatures by digital gradient sensing method.

    PubMed

    Zhang, Changxing; Qu, Zhe; Fang, Xufei; Feng, Xue; Hwang, Keh-Chih

    2015-02-01

    Thin film stresses in thin film/substrate systems at elevated temperatures affect the reliability and safety of such structures in microelectronic devices. The stresses result from the thermal mismatch strain between the film and substrate. The reflection mode digital gradient sensing (DGS) method, a real-time, full-field optical technique, measures deformations of reflective surface topographies. In this paper, we extend this method to measure topographies and thin film stresses of thin film/substrate systems at elevated temperatures. We calibrate and compensate for air convection at elevated temperatures, which is a serious problem for optical techniques. We cover the principles of surface topography measurement by the reflection mode DGS method at elevated temperatures and the governing equations used to remove the air convection effects. The proposed method is applied to successfully measure the full-field topography and deformation of a NiTi thin film on a silicon substrate at elevated temperatures. The evolution of the thin film stresses, obtained by extending Stoney's formula, reflects the nonuniform effect observed in the experimental results.
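    For readers unfamiliar with Stoney's formula, the sketch below converts a measured substrate curvature into a film stress using the classical thin-plate form of the formula (not the authors' extended, nonuniform version); the material parameters and curvature value are made-up numbers for illustration.

        # Hedged sketch: classical Stoney's formula relating substrate curvature to film stress.
        # sigma_f = E_s * h_s**2 * kappa / (6 * (1 - nu_s) * h_f)
        E_s = 130e9        # silicon substrate Young's modulus, Pa (approximate)
        nu_s = 0.28        # substrate Poisson's ratio (approximate)
        h_s = 500e-6       # substrate thickness, m (assumed)
        h_f = 1.0e-6       # NiTi film thickness, m (made-up value)
        kappa = 0.02       # measured curvature change, 1/m (made-up value)

        sigma_f = E_s * h_s**2 * kappa / (6 * (1 - nu_s) * h_f)
        print(f"film stress ~ {sigma_f / 1e6:.1f} MPa")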

  18. Magnetic Moment Quantifications of Small Spherical Objects in MRI

    PubMed Central

    Cheng, Yu-Chung N.; Hsieh, Ching-Yi; Tackett, Ronald; Kokeny, Paul; Regmi, Rajesh Kumar; Lawes, Gavin

    2014-01-01

    Purpose: The purpose of this work is to develop a method for accurately quantifying effective magnetic moments of spherical-like small objects from magnetic resonance imaging (MRI). A standard 3D gradient echo sequence with only one echo time is intended for our approach to measure the effective magnetic moment of a given object of interest. Methods: Our method sums over complex MR signals around the object and equates those sums to equations derived from the magnetostatic theory. With those equations, our method is able to determine the center of the object with subpixel precision. By rewriting those equations, the effective magnetic moment of the object becomes the only unknown to be solved. Each quantified effective magnetic moment has an uncertainty that is derived from the error propagation method. If the volume of the object can be measured from spin echo images, the susceptibility difference between the object and its surrounding can be further quantified from the effective magnetic moment. Numerical simulations, a variety of glass beads in phantom studies with different MR imaging parameters from a 1.5 T machine, and measurements from a SQUID (superconducting quantum interference device) based magnetometer have been conducted to test the robustness of our method. Results: Quantified effective magnetic moments and susceptibility differences from different imaging parameters and methods all agree with each other within two standard deviations of estimated uncertainties. Conclusion: An MRI method is developed to accurately quantify the effective magnetic moment of a given small object of interest. Most results are accurate within 10% of true values and roughly half of the total results are accurate within 5% of true values using very reasonable imaging parameters. Our method is minimally affected by the partial volume, dephasing, and phase aliasing effects. Our next goal is to apply this method to in vivo studies. PMID:25490517

  19. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    PubMed

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures as the outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR approach. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant on chromosome 3 that is associated with blood pressure, using simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
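    A minimal sketch of the kind of random-intercept linear mixed model described, on simulated longitudinal blood pressure data, is shown below. The column names (dbp, snp, age, subject) are hypothetical, and the per-subject random intercept only approximates the correlation among repeated measurements; a kinship-based covariance (as in GRAMMAR) would need a dedicated mixed-model package.

        # Hedged sketch of a random-intercept linear mixed model for a longitudinal outcome.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        n_subj, n_visits = 300, 3
        subj = np.repeat(np.arange(n_subj), n_visits)
        snp = np.repeat(rng.integers(0, 3, n_subj), n_visits)        # 0/1/2 minor-allele count
        age = 40 + np.repeat(rng.normal(0, 8, n_subj), n_visits) + np.tile(np.arange(n_visits) * 2, n_subj)
        u = np.repeat(rng.normal(0, 4, n_subj), n_visits)            # subject-level random intercept
        dbp = 75 + 1.5 * snp + 0.2 * age + u + rng.normal(0, 3, subj.size)

        df = pd.DataFrame({"dbp": dbp, "snp": snp, "age": age, "subject": subj})

        # Fixed effects for the SNP and age; random intercept per subject for repeated measures.
        model = smf.mixedlm("dbp ~ snp + age", data=df, groups=df["subject"])
        result = model.fit()
        print(result.summary())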

  20. Why Measure Outcomes?

    PubMed

    Kuhn, John E

    2016-01-01

    The concept of measuring the outcomes of treatment in health care was promoted by Ernest Amory Codman in the early 1900s, but, until recently, his ideas were generally ignored. The forces that have advanced outcome measurement to the forefront of health care include the shift in payers for health care from the patient to large insurance companies or government agencies, the movement toward assessing the care of populations not individuals, and the effort to find value (or cost-effective treatments) amid rising healthcare costs. No ideal method exists to measure outcomes, and the information gathered depends on the reason the outcome information is required. Outcome measures used in research are best able to answer research questions. The methods for assessing physician and hospital performance include process measures, patient-experience measures, structure measures, and measures used to assess the outcomes of treatment. The methods used to assess performance should be validated, be reliable, and reflect a patient's perception of the treatment results. The healthcare industry must measure outcomes to identify which treatments are most effective and provide the most benefit to patients.

  1. Measurement and evaluation practices of factors that contribute to effective health promotion collaboration functioning: A scoping review.

    PubMed

    Stolp, Sean; Bottorff, Joan L; Seaton, Cherisse L; Jones-Bricker, Margaret; Oliffe, John L; Johnson, Steven T; Errey, Sally; Medhurst, Kerensa; Lamont, Sonia

    2017-04-01

    The purpose of this scoping review was to identify promising factors that underpin effective health promotion collaborations, measurement approaches, and evaluation practices. Measurement approaches and evaluation practices employed in 14 English-language articles published between January 2001 and October 2015 were considered. Data extraction included research design, health focus of the collaboration, factors being evaluated, how factors were conceptualized and measured, and outcome measures. Studies were methodologically diverse, employing either quantitative methods (n=9), mixed methods (n=4), or qualitative methods (n=1). In total, these 14 studies examined 113 factors, 88 of which were only measured once. Leadership was the most commonly studied factor but was conceptualized differently across studies. Six factors were significantly associated with outcome measures across studies: leadership (n=3), gender (n=2), trust (n=2), length of the collaboration (n=2), budget (n=2), and changes in organizational model (n=2). Since factors were often conceptualized differently, drawing conclusions about their impact on collaborative functioning remains difficult. The use of reliable and validated tools would strengthen evaluation of health promotion collaborations and would support and enhance the effectiveness of collaboration. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Methodological Issues in Mobile Computer-Supported Collaborative Learning (mCSCL): What Methods, What to Measure and When to Measure?

    ERIC Educational Resources Information Center

    Song, Yanjie

    2014-01-01

    This study aims to investigate (1) methods utilized in mobile computer-supported collaborative learning (mCSCL) research which focuses on studying, learning and collaboration mediated by mobile devices; (2) whether these methods have examined mCSCL effectively; (3) when the methods are administered; and (4) what methodological issues exist in…

  3. Trends and regional variations in provision of contraception methods in a commercially insured population in the United States based on nationally proposed measures.

    PubMed

    Law, A; Yu, J S; Wang, W; Lin, J; Lynen, R

    2017-09-01

    Three measures to assess the provision of effective contraception methods among reproductive-aged women have recently been endorsed for national public reporting. Based on these measures, this study examined real-world trends and regional variations of contraceptive provision in a commercially insured population in the United States. Women 15-44 years old with continuous enrollment in each year from 2005 to 2014 were identified from a commercial claims database. In accordance with the proposed measures, percentages of women (a) provided most effective or moderately effective (MEME) methods of contraception and (b) provided a long-acting reversible contraceptive (LARC) method were calculated in two populations: women at risk for unintended pregnancy and women who had a live birth within 3 and 60 days of delivery. During the 10-year period, the percentages of women at risk for unintended pregnancy provided MEME contraceptive methods increased among 15-20-year-olds (24.5%-35.9%) and 21-44-year-olds (26.2%-31.5%), and those provided a LARC method also increased among 15-20-year-olds (0.1%-2.4%) and 21-44-year-olds (0.8%-3.9%). Provision of LARC methods increased most in the North Central and West among both age groups of women. Provision of MEME contraceptives and LARC methods to women who had a live birth within 60 days postpartum also increased across age groups and regions. This assessment indicates an overall trend of increasing provision of MEME contraceptive methods in the commercial sector, albeit with age group and regional variations. If implemented, these proposed measures may have impacts on health plan contraceptive access policy. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Application of Mixed Effects Limits of Agreement in the Presence of Multiple Sources of Variability: Exemplar from the Comparison of Several Devices to Measure Respiratory Rate in COPD Patients

    PubMed Central

    Weir, Christopher J.; Rubio, Noah; Rabinovich, Roberto; Pinnock, Hilary; Hanley, Janet; McCloughan, Lucy; Drost, Ellen M.; Mantoani, Leandro C.; MacNee, William; McKinstry, Brian

    2016-01-01

    Introduction: The Bland-Altman limits of agreement method is widely used to assess how well the measurements produced by two raters, devices or systems agree with each other. However, mixed effects versions of the method which take into account multiple sources of variability are less well described in the literature. We address the practical challenges of applying mixed effects limits of agreement to the comparison of several devices to measure respiratory rate in patients with chronic obstructive pulmonary disease (COPD). Methods: Respiratory rate was measured in 21 people with a range of severity of COPD. Participants were asked to perform eleven different activities representative of daily life during a laboratory-based standardised protocol of 57 minutes. A mixed effects limits of agreement method was used to assess the agreement of five commercially available monitors (Camera, Photoplethysmography (PPG), Impedance, Accelerometer, and Chest-band) with the current gold standard device for measuring respiratory rate. Results: Results produced using mixed effects limits of agreement were compared to results from a fixed effects method based on analysis of variance (ANOVA) and were found to be similar. The Accelerometer and Chest-band devices produced the narrowest limits of agreement (-8.63 to 4.27 and -9.99 to 6.80 respectively) with mean bias -2.18 and -1.60 breaths per minute. These devices also had the lowest within-participant and overall standard deviations (3.23 and 3.29 for Accelerometer and 4.17 and 4.28 for Chest-band respectively). Conclusions: The mixed effects limits of agreement analysis enabled us to answer the question of which devices showed the strongest agreement with the gold standard device with respect to measuring respiratory rates. In particular, the estimated within-participant and overall standard deviations of the differences, which are easily obtainable from the mixed effects model results, gave a clear indication that the Accelerometer and Chest-band devices performed best. PMID:27973556
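    A minimal sketch of how random-intercept variance components translate into mixed-effects limits of agreement is given below. The data are simulated device-minus-reference differences, the column names are illustrative, and the statsmodels call stands in for whatever software the study actually used.

        # Hedged sketch of mixed-effects limits of agreement for repeated device comparisons.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n_subj, n_activities = 21, 11
        subj = np.repeat(np.arange(n_subj), n_activities)
        subj_effect = np.repeat(rng.normal(0, 2.0, n_subj), n_activities)   # between-participant variation
        diff = -2.0 + subj_effect + rng.normal(0, 3.0, subj.size)           # device minus gold standard

        df = pd.DataFrame({"diff": diff, "subject": subj})

        # Random-intercept model: diff_ij = bias + u_i + e_ij
        m = smf.mixedlm("diff ~ 1", data=df, groups=df["subject"]).fit()
        bias = m.fe_params["Intercept"]
        var_between = float(m.cov_re.iloc[0, 0])      # variance of the random intercept
        var_within = m.scale                          # residual variance
        sd_total = np.sqrt(var_between + var_within)

        loa = (bias - 1.96 * sd_total, bias + 1.96 * sd_total)
        print(f"mean bias = {bias:.2f}, 95% limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f})")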

  5. A method for the modelling of porous and solid wind tunnel walls in computational fluid dynamics codes

    NASA Technical Reports Server (NTRS)

    Beutner, Thomas John

    1993-01-01

    Porous wall wind tunnels have been used for several decades and have proven effective in reducing wall interference effects in both low speed and transonic testing. They allow for testing through Mach 1, reduce blockage effects and reduce shock wave reflections in the test section. Their usefulness in developing computational fluid dynamics (CFD) codes has been limited, however, by the difficulties associated with modelling the effect of a porous wall in CFD codes. Previous approaches to modelling porous wall effects have depended either upon a simplified linear boundary condition, which has proven inadequate, or upon detailed measurements of the normal velocity near the wall, which require extensive wind tunnel time. The current work was initiated in an effort to find a simple, accurate method of modelling a porous wall boundary condition in CFD codes. The development of such a method would allow data from porous wall wind tunnels to be used more readily in validating CFD codes. This would be beneficial when transonic validations are desired, or when large models are used to achieve high Reynolds numbers in testing. A computational and experimental study was undertaken to investigate a new method of modelling solid and porous wall boundary conditions in CFD codes. The method utilized experimental measurements at the walls to develop a flow field solution based on the method of singularities. This flow field solution was then imposed as a pressure boundary condition in a CFD simulation of the internal flow field. The effectiveness of this method in describing the effect of porosity changes on the wall was investigated. Also, the effectiveness of this method when only sparse experimental measurements were available has been investigated. The current work demonstrated this approach for low speed flows and compared the results with experimental data obtained from a heavily instrumented variable porosity test section. The approach developed was simple, computationally inexpensive, and did not require extensive or intrusive measurements of the boundary conditions during the wind tunnel test. It may be applied to both solid and porous wall wind tunnel tests.

  6. A randomized trial to identify accurate and cost-effective fidelity measurement methods for cognitive-behavioral therapy: project FACTS study protocol.

    PubMed

    Beidas, Rinad S; Maclean, Johanna Catherine; Fishman, Jessica; Dorsey, Shannon; Schoenwald, Sonja K; Mandell, David S; Shea, Judy A; McLeod, Bryce D; French, Michael T; Hogue, Aaron; Adams, Danielle R; Lieberman, Adina; Becker-Haimes, Emily M; Marcus, Steven C

    2016-09-15

    This randomized trial will compare three methods of assessing fidelity to cognitive-behavioral therapy (CBT) for youth to identify the most accurate and cost-effective method. The three methods include self-report (i.e., therapist completes a self-report measure on the CBT interventions used in session while circumventing some of the typical barriers to self-report), chart-stimulated recall (i.e., therapist reports on the CBT interventions used in session via an interview with a trained rater, and with the chart to assist him/her), and behavioral rehearsal (i.e., therapist demonstrates the CBT interventions used in session via a role-play with a trained rater). Direct observation will be used as the gold-standard comparison for each of the three methods. This trial will recruit 135 therapists in approximately 12 community agencies in the City of Philadelphia. Therapists will be randomized to one of the three conditions. Each therapist will provide data from three unique sessions, for a total of 405 sessions. All sessions will be audio-recorded and coded using the Therapy Process Observational Coding System for Child Psychotherapy-Revised Strategies scale. This will enable comparison of each measurement approach to direct observation of therapist session behavior to determine which most accurately assesses fidelity. Cost data associated with each method will be gathered. To gather stakeholder perspectives of each measurement method, we will use purposive sampling to recruit 12 therapists from each condition (total of 36 therapists) and 12 supervisors to participate in semi-structured qualitative interviews. Results will provide needed information on how to accurately and cost-effectively measure therapist fidelity to CBT for youth, as well as important information about stakeholder perspectives with regard to each measurement method. Findings will inform fidelity measurement practices in future implementation studies as well as in clinical practice. NCT02820623, June 3rd, 2016.

  7. Effects of Environmental Toxicants on Metabolic Activity of Natural Microbial Communities

    PubMed Central

    Barnhart, Carole L. H.; Vestal, J. Robie

    1983-01-01

    Two methods of measuring microbial activity were used to study the effects of toxicants on natural microbial communities. The methods were compared for suitability for toxicity testing, sensitivity, and adaptability to field applications. This study included measurements of the incorporation of 14C-labeled acetate into microbial lipids and microbial glucosidase activity. Activities were measured per unit biomass, determined as lipid phosphate. The effects of various organic and inorganic toxicants on various natural microbial communities were studied. Both methods were useful in detecting toxicity, and their comparative sensitivities varied with the system studied. In one system, the methods showed approximately the same sensitivities in testing the effects of metals, but the acetate incorporation method was more sensitive in detecting the toxicity of organic compounds. The incorporation method was used to study the effects of a point source of pollution on the microbiota of a receiving stream. Toxic doses were found to be two orders of magnitude higher in sediments than in water taken from the same site, indicating chelation or adsorption of the toxicant by the sediment. The microbiota taken from below a point source outfall was 2 to 100 times more resistant to the toxicants tested than was that taken from above the outfall. Downstream filtrates in most cases had an inhibitory effect on the natural microbiota taken from above the pollution source. The microbial methods were compared with commonly used bioassay methods, using higher organisms, and were found to be similar in ability to detect comparative toxicities of compounds, but were less sensitive than methods which use standard media because of the influences of environmental factors. PMID:16346432

  8. Estimation of effective x-ray tissue attenuation differences for volumetric breast density measurement

    NASA Astrophysics Data System (ADS)

    Chen, Biao; Ruth, Chris; Jing, Zhenxue; Ren, Baorui; Smith, Andrew; Kshirsagar, Ashwini

    2014-03-01

    Breast density has been identified as a risk factor for developing breast cancer and an indicator of lesion diagnostic obstruction due to the masking effect. Volumetric density measurement evaluates fibro-glandular volume, breast volume, and breast volume density measures that have potential advantages over area density measurement in risk assessment. One class of volume density computing methods is based on finding the relative fibro-glandular tissue attenuation with regard to the reference fat tissue, and the estimation of the effective x-ray tissue attenuation differences between the fibro-glandular and fat tissue is key to computing volumetric breast density. We have modeled the effective attenuation difference as a function of the actual x-ray skin entrance spectrum, breast thickness, fibro-glandular tissue thickness distribution, and detector efficiency. Compared to other approaches, our method has threefold advantages: (1) it avoids the system calibration-based creation of effective attenuation differences, which may introduce tedious calibrations for each imaging system and may not reflect the spectrum change and scatter-induced overestimation or underestimation of breast density; (2) it obtains the system-specific separate and differential attenuation values of fibro-glandular and fat tissue for each mammographic image; and (3) it further reduces the impact of breast thickness accuracy on volumetric breast density. A quantitative breast volume phantom with a set of equivalent fibro-glandular thicknesses has been used to evaluate the volumetric breast density measurement with the proposed method. The experimental results have shown that the method significantly improves the accuracy of estimating breast density.
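    As a loose illustration of how an effective attenuation difference can be turned into a volumetric density (not the authors' spectrum-based model), the sketch below assumes a simple single-coefficient log model in which the per-pixel fibro-glandular thickness follows from the log-signal ratio against a fat-only reference; all values are invented.

        # Hedged sketch: per-pixel fibro-glandular thickness and volumetric breast density
        # from an assumed effective attenuation difference. Illustrative values only.
        import numpy as np

        rng = np.random.default_rng(4)
        delta_mu = 0.25                      # assumed effective attenuation difference, 1/cm
        breast_thickness = 5.0               # compressed breast thickness, cm (assumed)
        pixel_area = 0.01 * 0.01             # detector pixel area, cm^2

        # Simulated log-signal ratio ln(I_fat / I) per pixel; in practice it would come from
        # the image and a fat-equivalent reference for the same thickness and spectrum.
        log_ratio = np.clip(rng.normal(0.3, 0.15, size=(200, 200)), 0, None)

        t_fg = np.clip(log_ratio / delta_mu, 0, breast_thickness)   # fibro-glandular thickness, cm
        v_fg = (t_fg * pixel_area).sum()                            # fibro-glandular volume, cm^3
        v_breast = breast_thickness * pixel_area * t_fg.size        # breast volume over the same area
        print(f"volumetric breast density ~ {100 * v_fg / v_breast:.1f}%")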

  9. Quantifying the measurement uncertainty of results from environmental analytical methods.

    PubMed

    Moser, J; Wegscheider, W; Sperka-Gottlieb, C

    2001-07-01

    The Eurachem-CITAC Guide Quantifying Uncertainty in Analytical Measurement was put into practice in a public laboratory devoted to environmental analytical measurements. In doing so due regard was given to the provisions of ISO 17025 and an attempt was made to base the entire estimation of measurement uncertainty on available data from the literature or from previously performed validation studies. Most environmental analytical procedures laid down in national or international standards are the result of cooperative efforts and put into effect as part of a compromise between all parties involved, public and private, that also encompasses environmental standards and statutory limits. Central to many procedures is the focus on the measurement of environmental effects rather than on individual chemical species. In this situation it is particularly important to understand the measurement process well enough to produce a realistic uncertainty statement. Environmental analytical methods will be examined as far as necessary, but reference will also be made to analytical methods in general and to physical measurement methods where appropriate. This paper describes ways and means of quantifying uncertainty for frequently practised methods of environmental analysis. It will be shown that operationally defined measurands are no obstacle to the estimation process as described in the Eurachem/CITAC Guide if it is accepted that the dominating component of uncertainty comes from the actual practice of the method as a reproducibility standard deviation.

  10. A Noninvasive Method to Study Regulation of Extracellular Fluid Volume in Rats Using Nuclear Magnetic Resonance

    EPA Science Inventory

    Time-domain nuclear magnetic resonance (TD-NMR)-based measurement of body composition of rodents is an effective method to quickly and repeatedly measure proportions of fat, lean, and fluid without anesthesia. TD-NMR provides a measure of free water in a living animal, termed % f...

  11. Measurement of Chlorine Dioxide in Water by DPD Colorimetric Method

    NASA Astrophysics Data System (ADS)

    Song, Min; Yan, Panping; Yao, Jun

    2018-01-01

    To address problems in measuring chlorine dioxide in water by the DPD colorimetric method, this paper discusses the effects of the reagent formulation, temperature, color development time, and amount of color reagent on the measurement process, with the aim of improving the precision and accuracy of on-line chlorine dioxide instruments for domestic and drinking water.

  12. A method for surface topography measurement using a new focus function based on dual-tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Li, Shimiao; Guo, Tong; Yuan, Lin; Chen, Jinping

    2018-01-01

    Surface topography measurement is an important tool widely used in many fields to determine the characteristics and functionality of a part or material. Among existing methods for this purpose, the focus variation method has demonstrated high performance, particularly in large-slope scenarios. However, its performance depends largely on the effectiveness of the focus function. This paper presents a method for surface topography measurement using a new focus measurement function based on the dual-tree complex wavelet transform. Experiments are conducted on simulated defocused images to demonstrate its high performance in comparison with other traditional approaches. The results showed that the new algorithm has better unimodality and sharpness. The method was also verified by measuring a MEMS micro resonator structure.
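    The general idea of a wavelet-based focus function can be sketched as below, using an ordinary discrete wavelet transform (PyWavelets) rather than the paper's dual-tree complex wavelet transform: the energy of the detail coefficients serves as the focus value and peaks at the best-focus frame. The test pattern and blur model are made up for the example.

        # Hedged sketch of a wavelet-based focus measure (ordinary DWT, not the paper's DT-CWT):
        # high-frequency detail energy peaks at best focus.
        import numpy as np
        import pywt

        def focus_value(image, wavelet="db2", level=2):
            """Sum of squared detail coefficients over the first `level` decomposition levels."""
            coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
            return sum(float((d ** 2).sum()) for band in coeffs[1:] for d in band)

        def blur(img, sigma):
            """Separable Gaussian blur used to mimic defocus."""
            if sigma == 0:
                return img
            k = np.arange(-15, 16)
            g = np.exp(-k**2 / (2 * sigma**2))
            g /= g.sum()
            tmp = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, img)
            return np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, tmp)

        # Synthetic focus stack: a sharp striped test pattern progressively blurred.
        sharp = (np.indices((128, 128)).sum(axis=0) % 16 < 8).astype(float)
        stack = [blur(sharp, s) for s in (0, 1, 2, 4)]

        values = [focus_value(im) for im in stack]
        print("focus values:", [f"{v:.1f}" for v in values])
        print("best-focus frame index:", int(np.argmax(values)))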

  13. A rough set-based measurement model study on high-speed railway safety operation.

    PubMed

    Hu, Qizhou; Tan, Minjia; Lu, Huapu; Zhu, Yun

    2018-01-01

    Aiming to solve the safety problems of high-speed railway operation and management, a new method is urgently needed, constructed on the basis of rough set theory and uncertainty measurement theory. The method should carefully consider every factor of high-speed railway operation that contributes to the measurement indexes of its safety operation. After analyzing the factors that influence high-speed railway safety operation in detail, a rough measurement model is constructed to describe the operation process. Based on the above considerations, this paper redistricts the safety influence factors of high-speed railway operation into 16 measurement indexes, which include staff, vehicle, equipment, and environment indexes. The paper also provides a reasonable and effective theoretical method to solve the safety problems of multiple-attribute measurement in high-speed railway operation. In analyzing the operation data of 10 pivotal railway lines in China, this paper uses both the rough set-based measurement model and a value function model (a model for calculating the safety value) to calculate the operation safety value. The calculation results show that the safety value curve obtained with the proposed method has smaller error and greater stability than that of the value function method, which verifies the feasibility and effectiveness of the proposed approach.

  14. Dynamic characterization of small fibers based on the flexural vibrations of a piezoelectric cantilever probe

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaofei; Ye, Xuan; Li, Xide

    2016-08-01

    In this paper, we present a cantilever-probe system excited by a piezoelectric actuator, and use it to measure the dynamic mechanical properties of a micro- and nanoscale fiber. Coupling the fiber to the free end of the cantilever probe, we found the dynamic stiffness and damping coefficient of the fiber from the resonance frequency and the quality factor of the fiber-cantilever-probe system. The properties of Bacillus subtilis fibers measured using our proposed system agreed with tensile measurements, validating our method. Our measurements show that the piezoelectric actuator coupled to the cantilever probe can be made equivalent to a clamped cantilever with an effective length, and calculated results show that the errors in the measured natural frequency of the system can be ignored if the coupled fiber's alignment inclination angle is less than 10°. A sensitivity analysis indicates that the first or second resonant mode is the mode most sensitive for testing the sample's dynamic stiffness, while the damping property has different sensitivities for the first four modes. Our theoretical analysis demonstrates that the double-cantilever probe is also an effective sensitive structure that can be used to perform dynamic loading and characterize dynamic response. Our method has the advantage of using amplitude-frequency curves to obtain the dynamic mechanical properties without directly measuring displacements and forces as in tensile tests, and it also avoids the effects of the complex surface structure and deformation present in the contact resonance method. Our method is effective for measuring the dynamic mechanical properties of fiber-like one-dimensional (1D) materials.
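    A much-simplified single-degree-of-freedom view of the extraction step is sketched below: the resonance shift and quality-factor change between the bare and fiber-coupled cantilever yield an added stiffness and damping. The numbers are invented, and the paper's full model additionally accounts for the distributed cantilever modes and fiber alignment.

        # Hedged sketch: SDOF approximation for a fiber's dynamic stiffness and damping from
        # the resonance shift and Q change of a cantilever probe. Made-up values throughout.
        import numpy as np

        m_eff = 2.0e-6          # effective cantilever tip mass, kg (assumed)
        f0, q0 = 1200.0, 150.0  # bare cantilever resonance (Hz) and quality factor (assumed)
        f1, q1 = 1350.0, 90.0   # resonance and Q with the fiber coupled (assumed)

        w0, w1 = 2 * np.pi * f0, 2 * np.pi * f1
        k_fiber = m_eff * (w1**2 - w0**2)            # added dynamic stiffness, N/m
        c_total = w1 * m_eff / q1                    # total damping with fiber, N*s/m
        c_cantilever = w0 * m_eff / q0               # bare cantilever damping, N*s/m
        c_fiber = c_total - c_cantilever

        print(f"fiber dynamic stiffness ~ {k_fiber:.2f} N/m")
        print(f"fiber damping coefficient ~ {c_fiber:.3e} N*s/m")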

  15. A Simple and Accurate Method for Measuring Enzyme Activity.

    ERIC Educational Resources Information Center

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  16. A Comparison of Methods for Detecting Differential Distractor Functioning

    ERIC Educational Resources Information Center

    Koon, Sharon

    2010-01-01

    This study examined the effectiveness of the odds-ratio method (Penfield, 2008) and the multinomial logistic regression method (Kato, Moen, & Thurlow, 2009) for measuring differential distractor functioning (DDF) effects in comparison to the standardized distractor analysis approach (Schmitt & Bleistein, 1987). Students classified as participating…

  17. Surface texture measurement for additive manufacturing

    NASA Astrophysics Data System (ADS)

    Triantaphyllou, Andrew; Giusca, Claudiu L.; Macaulay, Gavin D.; Roerig, Felix; Hoebel, Matthias; Leach, Richard K.; Tomita, Ben; Milne, Katherine A.

    2015-06-01

    The surface texture of additively manufactured metallic surfaces made by powder bed methods is affected by a number of factors, including the powder’s particle size distribution, the effect of the heat source, the thickness of the printed layers, the angle of the surface relative to the horizontal build bed and the effect of any post processing/finishing. The aim of the research reported here is to understand the way these surfaces should be measured in order to characterise them. In published research to date, the surface texture is generally reported as an Ra value, measured across the lay. The appropriateness of this method for such surfaces is investigated here. A preliminary investigation was carried out on two additive manufacturing processes—selective laser melting (SLM) and electron beam melting (EBM)—focusing on the effect of build angle and post processing. The surfaces were measured using both tactile and optical methods and a range of profile and areal parameters were reported. Test coupons were manufactured at four angles relative to the horizontal plane of the powder bed using both SLM and EBM. The effect of lay—caused by the layered nature of the manufacturing process—was investigated, as was the required sample area for optical measurements. The surfaces were also measured before and after grit blasting.

  18. In-flight calibration of mesospheric rocket plasma probes.

    PubMed

    Havnes, Ove; Hartquist, Thomas W; Kassa, Meseret; Morfill, Gregor E

    2011-07-01

    Many effects and factors can influence the efficiency of a rocket plasma probe. These include payload charging, solar illumination, rocket payload orientation and rotation, and dust impact induced secondary charge production. As a consequence, considerable uncertainties can arise in the determination of the effective cross sections of plasma probes and the measured electron and ion densities. We present a new method for calibrating mesospheric rocket plasma probes and obtaining reliable measurements of plasma densities. This method can be used if a payload also carries a probe for measuring the dust charge density. It is based on the fact that a dust probe's effective cross section for measuring the charged component of dust is normally nearly equal to its geometric cross section, and it involves comparing variations in the dust charge density measured with the dust detector to the corresponding current variations measured with the electron and/or ion probes. In cases in which the dust charge density is significantly smaller than the electron density, the relation between plasma and dust charge density variations can be simplified and used to infer the effective cross sections of the plasma probes. We illustrate the utility of the method by analysing data from a specific rocket flight of a payload containing both dust and electron probes.

  19. Real Time Measures of Effectiveness

    DOT National Transportation Integrated Search

    2003-06-01

    This report describes research that is focused on identifying and determining methods for automatically computing measures of effectiveness (MOEs) when supplied with real time information. The MOEs, along with detection devices such as cameras, roadw...

  20. MTF measurement and analysis of linear array HgCdTe infrared detectors

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Lin, Chun; Chen, Honglei; Sun, Changhong; Lin, Jiamu; Wang, Xi

    2018-01-01

    The slanted-edge technique is the main method for measuring detector MTF; however, this method is commonly applied to planar array detectors. In this paper the authors present a modified slanted-edge method to measure the MTF of linear array HgCdTe detectors. Crosstalk is one of the major factors that degrade the MTF value of such an infrared detector. This paper presents an ion implantation guard-ring structure which was designed to effectively absorb photo-carriers that may laterally diffuse between adjacent pixels, thereby suppressing crosstalk. Measurement and analysis of the MTF of the linear array detectors with and without a guard-ring were carried out. The experimental results indicated that the ion implantation guard-ring structure effectively suppresses crosstalk and increases the MTF value.
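    The core of a generic slanted-edge MTF calculation can be sketched as below: differentiate an oversampled edge spread function (ESF) to obtain the line spread function (LSF), then take the normalized FFT magnitude as the MTF. The super-sampling step of the real slanted-edge method (projecting 2D pixels along the fitted edge) is replaced here by a simulated 1D ESF, and the edge width and sampling are invented; this is not the authors' modified procedure.

        # Hedged sketch of the ESF -> LSF -> MTF chain behind slanted-edge MTF measurement.
        import numpy as np

        dx = 0.25                                   # sample spacing in pixels (4x oversampled ESF)
        x = np.arange(-32, 32, dx)
        esf = 0.5 * (1 + np.tanh(x / 0.8))          # simulated blurred edge profile

        lsf = np.gradient(esf, dx)                  # line spread function
        lsf *= np.hanning(lsf.size)                 # window to reduce noise/truncation effects
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]                               # normalize so MTF(0) = 1
        freq = np.fft.rfftfreq(lsf.size, d=dx)      # spatial frequency in cycles/pixel

        nyquist = 0.5                               # detector Nyquist frequency, cycles/pixel
        print(f"MTF at Nyquist ~ {np.interp(nyquist, freq, mtf):.3f}")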

  1. Determination of Cluster Distances from Chandra Imaging Spectroscopy and Sunyaev-Zeldovich Effect Measurements. I; Analysis Methods and Initial Results

    NASA Technical Reports Server (NTRS)

    Bonamente, Massimiliano; Joy, Marshall K.; Carlstrom, John E.; LaRoque, Samuel J.

    2004-01-01

    X-ray and Sunyaev-Zeldovich Effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from the Chandra Observatory, which provides both spatial and spectral information, and interferometric radio measurements of the Sunyaev-Zeldovich Effect are available from the BIMA and OVRO arrays. We introduce a Monte Carlo Markov chain procedure for the joint analysis of X-ray and Sunyaev-Zeldovich Effect data. The advantages of this method are its high computational efficiency and the ability to measure the full probability distribution of all parameters of interest, such as the spatial and spectral properties of the cluster gas and the cluster distance. We apply this technique to the Chandra X-ray data and the OVRO radio data for the galaxy cluster Abell 611. Comparisons with traditional likelihood-ratio methods reveal the robustness of the method. This method will be used in a follow-up paper to determine the distances of a large sample of galaxy clusters for which high-resolution Chandra X-ray and BIMA/OVRO radio data are available.
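    To show the kind of sampling machinery such an analysis relies on, the sketch below runs a generic Metropolis-Hastings sampler on a toy two-parameter Gaussian likelihood; the parameter names and likelihood are invented stand-ins, not the actual joint X-ray/SZE cluster model.

        # Hedged sketch of a Metropolis-Hastings sampler on a toy two-parameter likelihood.
        import numpy as np

        rng = np.random.default_rng(5)

        def log_likelihood(theta):
            """Toy log-likelihood: independent Gaussian constraints on two cluster-like parameters."""
            core_radius, distance = theta
            return -0.5 * (((core_radius - 0.2) / 0.05) ** 2 + ((distance - 900.0) / 120.0) ** 2)

        def metropolis(log_like, start, step, n_steps=20_000):
            chain = np.empty((n_steps, len(start)))
            current, current_ll = np.array(start, float), log_like(start)
            for i in range(n_steps):
                proposal = current + step * rng.normal(size=current.size)
                proposal_ll = log_like(proposal)
                if np.log(rng.uniform()) < proposal_ll - current_ll:   # accept/reject
                    current, current_ll = proposal, proposal_ll
                chain[i] = current
            return chain

        chain = metropolis(log_likelihood, start=[0.3, 700.0], step=np.array([0.02, 40.0]))
        burned = chain[5000:]                      # discard burn-in
        print("posterior means:", burned.mean(axis=0))
        print("posterior std devs:", burned.std(axis=0))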

  2. High correlations between MRI brain volume measurements based on NeuroQuant® and FreeSurfer.

    PubMed

    Ross, David E; Ochs, Alfred L; Tate, David F; Tokac, Umit; Seabaugh, John; Abildskov, Tracy J; Bigler, Erin D

    2018-05-30

    NeuroQuant® (NQ) and FreeSurfer (FS) are commonly used computer-automated programs for measuring MRI brain volume. Previously they were reported to have high intermethod reliabilities but often large intermethod effect size differences. We hypothesized that linear transformations could be used to reduce the large effect sizes. This study was an extension of our previously reported study. We performed NQ and FS brain volume measurements on 60 subjects (including normal controls, patients with traumatic brain injury, and patients with Alzheimer's disease). We used two statistical approaches in parallel to develop methods for transforming FS volumes into NQ volumes: traditional linear regression, and Bayesian linear regression. For both methods, we used regression analyses to develop linear transformations of the FS volumes to make them more similar to the NQ volumes. The FS-to-NQ transformations based on traditional linear regression resulted in effect sizes which were small to moderate. The transformations based on Bayesian linear regression resulted in all effect sizes being trivially small. To our knowledge, this is the first report describing a method for transforming FS to NQ data so as to achieve high reliability and low effect size differences. Machine learning methods like Bayesian regression may be more useful than traditional methods. Copyright © 2018 Elsevier B.V. All rights reserved.
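    A minimal sketch of the kind of volume-to-volume transformation described is shown below, fitting a linear map from FS volumes to NQ volumes for a single region with both ordinary and Bayesian ridge regression from scikit-learn. The data, region, and regression choices are illustrative, not the study's.

        # Hedged sketch of a per-region FS-to-NQ linear transformation; simulated volumes.
        import numpy as np
        from sklearn.linear_model import LinearRegression, BayesianRidge

        rng = np.random.default_rng(6)
        n = 60
        fs_hippocampus = rng.normal(7.5, 1.0, n)                              # FS volumes, mL (simulated)
        nq_hippocampus = 0.9 * fs_hippocampus + 0.4 + rng.normal(0, 0.2, n)   # NQ volumes, mL (simulated)

        X = fs_hippocampus.reshape(-1, 1)
        ols = LinearRegression().fit(X, nq_hippocampus)
        bayes = BayesianRidge().fit(X, nq_hippocampus)

        fs_new = np.array([[8.2]])                                            # a new FS measurement to transform
        print(f"OLS-transformed volume:   {ols.predict(fs_new)[0]:.2f} mL")
        print(f"Bayes-transformed volume: {bayes.predict(fs_new)[0]:.2f} mL")

        # Effect size (Cohen's d) between NQ and transformed-FS volumes, which the
        # transformation is intended to shrink toward zero.
        transformed = ols.predict(X)
        d = (nq_hippocampus.mean() - transformed.mean()) / np.sqrt(
            0.5 * (nq_hippocampus.var(ddof=1) + transformed.var(ddof=1)))
        print(f"Cohen's d after transformation ~ {d:.3f}")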

  3. Comparison of different measurement methods for transmittance haze

    NASA Astrophysics Data System (ADS)

    Yu, Hsueh-Ling; Hsaio, Chin-Chai

    2009-08-01

    Transmittance haze is increasingly important to the LCD and solar cell industry. Most commercial haze measurement instruments are designed according to the method recommended in the documentary standards like ASTM D 1003 (ASTM 2003 Standard Test Method for Haze and Luminous Transmittance of Transparent Plastics), JIS K 7361 (JIS 1997 Plastics—Determination of the Total Luminous Transmittance of Transparent Materials—Part 1: Single Beam Instrument) and ISO 14782 (ISO 1997 Plastics—Determination of Haze of Transparent Materials). To improve the measurement accuracy of the current standards, a new apparatus was designed by the Center for Measurement Standards (Yu et al 2006 Meas. Sci. Technol. 17 N29-36). Besides the methods mentioned above, a double-beam method is used in the design of some instruments. There are discrepancies between the various methods. But no matter which method is used, a white standard is always needed. This paper compares the measurement results from different methods, presents the effect of the white standard, and analyses the measurement uncertainty.

  4. [Effect of 2 methods of occlusion adjustment on occlusal balance and muscles of mastication in patient with implant restoration].

    PubMed

    Wang, Rong; Xu, Xin

    2015-12-01

    To compare the effect of 2 methods of occlusion adjustment on occlusal balance and muscles of mastication in patients with dental implant restoration. Twenty patients, each with a single edentulous posterior dentition with no distal dentition, were selected and divided into 2 groups. Patients in group A underwent the original occlusion adjustment method and patients in group B underwent the occlusal plane reduction technique. Ankylos implants were implanted in the edentulous space in each patient and restored with a fixed prosthodontic single-unit crown. Occlusion was adjusted in each restoration accordingly. Electromyograms were recorded to determine the effect of the adjustment methods on occlusion and muscles of mastication 3 months and 6 months after initial restoration and adjustment. Data were collected and measurements for balanced occlusal measuring standards were obtained, including central occlusion force (COF) and the asymmetry index of molar occlusal force (AMOF). Balanced muscles of mastication measuring standards were also obtained, including electromyogram measurements of the muscles of mastication and the anterior bundle of the temporalis muscle at the mandibular rest position, average electromyogram measurements of the anterior bundle of the temporalis muscle at the intercuspal position (ICP), Astot, the masseter muscle asymmetry index, and the anterior temporalis asymmetry index (ASTA). Statistical analysis was performed using Student's t test with the SPSS 18.0 software package. Three months after occlusion adjustment, parameters of the original occlusion adjustment method were significantly different between group A and group B in both balanced occlusal measuring standards and balanced muscles of mastication measuring standards. Six months after occlusion adjustment, parameters of the original occlusion adjustment method were significantly different between group A and group B in balanced muscles of mastication measuring standards, but there was no significant difference in balanced occlusal measuring standards. Using the occlusal plane reduction adjustment technique, it is possible to obtain an occlusion index and muscles of mastication electromyogram index similar to those of the opposite side's natural dentition in patients with a single-unit fixed prosthodontic crown in a single posterior edentulous dentition without distal dentition.

  5. Magnetic moment quantifications of small spherical objects in MRI.

    PubMed

    Cheng, Yu-Chung N; Hsieh, Ching-Yi; Tackett, Ronald; Kokeny, Paul; Regmi, Rajesh Kumar; Lawes, Gavin

    2015-07-01

    The purpose of this work is to develop a method for accurately quantifying effective magnetic moments of spherical-like small objects from magnetic resonance imaging (MRI). A standard 3D gradient echo sequence with only one echo time is intended for our approach to measure the effective magnetic moment of a given object of interest. Our method sums over complex MR signals around the object and equates those sums to equations derived from the magnetostatic theory. With those equations, our method is able to determine the center of the object with subpixel precision. By rewriting those equations, the effective magnetic moment of the object becomes the only unknown to be solved. Each quantified effective magnetic moment has an uncertainty that is derived from the error propagation method. If the volume of the object can be measured from spin echo images, the susceptibility difference between the object and its surrounding can be further quantified from the effective magnetic moment. Numerical simulations, a variety of glass beads in phantom studies with different MR imaging parameters from a 1.5T machine, and measurements from a SQUID (superconducting quantum interference device) based magnetometer have been conducted to test the robustness of our method. Quantified effective magnetic moments and susceptibility differences from different imaging parameters and methods all agree with each other within two standard deviations of estimated uncertainties. An MRI method is developed to accurately quantify the effective magnetic moment of a given small object of interest. Most results are accurate within 10% of true values, and roughly half of the total results are accurate within 5% of true values using very reasonable imaging parameters. Our method is minimally affected by the partial volume, dephasing, and phase aliasing effects. Our next goal is to apply this method to in vivo studies. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Comparison of Methods for Evaluating Urban Transportation Alternatives

    DOT National Transportation Integrated Search

    1975-02-01

    The objective of the report was to compare five alternative methods for evaluating urban transportation improvement options: unaided judgmental evaluation, cost-benefit analysis, cost-effectiveness analysis based on a single measure of effectiveness, ...

  7. A new method for measuring the neutron lifetime using an in situ neutron detector

    DOE PAGES

    Morris, Christopher L.; Adamek, Evan Robert; Broussard, Leah Jacklyn; ...

    2017-05-30

    Here, we describe a new method for measuring surviving neutrons in neutron lifetime measurements using bottled ultracold neutrons (UCN), which provides better characterization of systematic uncertainties and enables higher precision than previous measurement techniques. We also used an active detector that can be lowered into the trap to measure the neutron distribution as a function of height and measure the influence of marginally trapped UCN on the neutron lifetime measurement. Additionally, measurements have demonstrated phase-space evolution and its effect on the lifetime measurement.

  8. A new method for measuring the neutron lifetime using an in situ neutron detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, Christopher L.; Adamek, Evan Robert; Broussard, Leah Jacklyn

    Here, we describe a new method for measuring surviving neutrons in neutron lifetime measurements using bottled ultracold neutrons (UCN), which provides better characterization of systematic uncertainties and enables higher precision than previous measurement techniques. We also used an active detector that can be lowered into the trap to measure the neutron distribution as a function of height and measure the influence of marginally trapped UCN on the neutron lifetime measurement. Additionally, measurements have demonstrated phase-space evolution and its effect on the lifetime measurement.

  9. Method and apparatus for correcting eddy current signal voltage for temperature effects

    DOEpatents

    Kustra, Thomas A.; Caffarel, Alfred J.

    1990-01-01

    An apparatus and method for measuring physical characteristics of an electrically conductive material by the use of eddy-current techniques and compensating measurement errors caused by changes in temperature includes a switching arrangement connected between primary and reference coils of an eddy-current probe which allows the probe to be selectively connected between an eddy current output oscilloscope and a digital ohm-meter for measuring the resistances of the primary and reference coils substantially at the time of eddy current measurement. In this way, changes in resistance due to temperature effects can be completely taken into account in determining the true error in the eddy current measurement. The true error can consequently be converted into an equivalent eddy current measurement correction.

  10. Effects of reconstructed magnetic field from sparse noisy boundary measurements on localization of active neural source.

    PubMed

    Shen, Hui-min; Lee, Kok-Meng; Hu, Liang; Foong, Shaohui; Fu, Xin

    2016-01-01

    Localization of active neural source (ANS) from measurements on head surface is vital in magnetoencephalography. As neuron-generated magnetic fields are extremely weak, significant uncertainties caused by stochastic measurement interference complicate its localization. This paper presents a novel computational method based on reconstructed magnetic field from sparse noisy measurements for enhanced ANS localization by suppressing effects of unrelated noise. In this approach, the magnetic flux density (MFD) in the nearby current-free space outside the head is reconstructed from measurements through formulating the infinite series solution of the Laplace's equation, where boundary condition (BC) integrals over the entire measurements provide "smooth" reconstructed MFD with the decrease in unrelated noise. Using a gradient-based method, reconstructed MFDs with good fidelity are selected for enhanced ANS localization. The reconstruction model, spatial interpolation of BC, parametric equivalent current dipole-based inverse estimation algorithm using reconstruction, and gradient-based selection are detailed and validated. The influences of various source depths and measurement signal-to-noise ratio levels on the estimated ANS location are analyzed numerically and compared with a traditional method (where measurements are directly used), and it was demonstrated that gradient-selected high-fidelity reconstructed data can effectively improve the accuracy of ANS localization.

  11. Effect of monochromatic aberrations on photorefractive patterns

    NASA Astrophysics Data System (ADS)

    Campbell, Melanie C. W.; Bobier, W. R.; Roorda, A.

    1995-08-01

    Photorefractive methods have become popular in the measurement of refractive and accommodative states of infants and children owing to their photographic nature and rapid speed of measurement. As in the case of any method that measures the refractive state of the human eye, monochromatic aberrations will reduce the accuracy of the measurement. Monochromatic aberrations cannot be as easily predicted or controlled as chromatic aberrations during the measurement, and accordingly they will introduce measurement errors. This study defines this error or uncertainty by extending the existing paraxial optical analyses of coaxial and eccentric photorefraction. This new optical analysis predicts that, for the amounts of spherical aberration (SA) reported for the human eye, there will be a significant degree of measurement uncertainty introduced for all photorefractive methods. The dioptric amount of this uncertainty may exceed the maximum amount of SA present in the eye. The calculated effects on photorefractive measurement of a real eye with a mixture of spherical aberration and coma are shown to be significant. The ability, developed here, to predict photorefractive patterns corresponding to different amounts and types of monochromatic aberration may in the future lead to an extension of photorefractive methods to the dual measurement of refractive states and aberrations of individual eyes.

  12. A new proportion measure of the treatment effect captured by candidate surrogate endpoints.

    PubMed

    Kobayashi, Fumiaki; Kuroki, Manabu

    2014-08-30

    The use of surrogate endpoints is expected to play an important role in the development of new drugs, as they can be used to reduce the sample size and/or duration of randomized clinical trials. Biostatistical researchers and practitioners have proposed various surrogacy measures; however, (i) most of these surrogacy measures often fall outside the range [0,1] without any assumptions, (ii) these surrogacy measures do not provide a cut-off value for judging a surrogacy level of candidate surrogate endpoints, and (iii) most surrogacy measures are highly variable; thus, the confidence intervals are often unacceptably wide. In order to solve problems (i) and (ii), we propose a new surrogacy measure, a proportion of the treatment effect captured by candidate surrogate endpoints (PCS), on the basis of the decomposition of the treatment effect into parts captured and non-captured by the candidate surrogate endpoints. In order to solve problem (iii), we propose an estimation method based on the half-range mode method with the bootstrap distribution of the estimated surrogacy measures. Finally, through numerical experiments and two empirical examples, we show that the PCS with the proposed estimation method overcomes these difficulties. The results of this paper contribute to the reliable evaluation of how much of the treatment effect is captured by candidate surrogate endpoints. Copyright © 2014 John Wiley & Sons, Ltd.

  13. Correction of stream quality trends for the effects of laboratory measurement bias

    USGS Publications Warehouse

    Alexander, Richard B.; Smith, Richard A.; Schwarz, Gregory E.

    1993-01-01

    We present a statistical model relating measurements of water quality to associated errors in laboratory methods. Estimation of the model allows us to correct trends in water quality for long-term and short-term variations in laboratory measurement errors. An illustration of the bias correction method for a large national set of stream water quality and quality assurance data shows that reductions in the bias of estimates of water quality trend slopes are achieved at the expense of increases in the variance of these estimates. Slight improvements occur in the precision of estimates of trend in bias by using correlative information on bias and water quality to estimate random variations in measurement bias. The results of this investigation stress the need for reliable, long-term quality assurance data and efficient statistical methods to assess the effects of measurement errors on the detection of water quality trends.

  14. The inference of vector magnetic fields from polarization measurements with limited spectral resolution

    NASA Technical Reports Server (NTRS)

    Lites, B. W.; Skumanich, A.

    1985-01-01

    A method is presented for recovery of the vector magnetic field and thermodynamic parameters from polarization measurement of photospheric line profiles measured with filtergraphs. The method includes magneto-optic effects and may be utilized on data sampled at arbitrary wavelengths within the line profile. The accuracy of this method is explored through inversion of synthetic Stokes profiles subjected to varying levels of random noise, instrumental wave-length resolution, and line profile sampling. The level of error introduced by the systematic effect of profile sampling over a finite fraction of the 5 minute oscillation cycle is also investigated. The results presented here are intended to guide instrumental design and observational procedure.

  15. MOE vs. M&E: considering the difference between measuring strategic effectiveness and monitoring tactical evaluation.

    PubMed

    Diehl, Glen; Major, Solomon

    2015-01-01

    Measuring the effectiveness of military Global Health Engagements (GHEs) has become an area of increasing interest to the military medical field. As a result, there have been efforts to more logically and rigorously evaluate GHE projects and programs; many of these have been based on the Logic and Results Frameworks. However, while these Frameworks are apt and appropriate planning tools, they are not ideally suited to measuring programs' effectiveness. This article introduces military medicine professionals to the Measures of Effectiveness for Defense Engagement and Learning (MODEL) program, which implements a new method of assessment, one that seeks to rigorously use Measures of Effectiveness (vs. Measures of Performance) to gauge programs' and projects' success and fidelity to Theater Campaign goals. While the MODEL method draws on the Logic and Results Frameworks where appropriate, it goes beyond their planning focus by using the latest social scientific and econometric evaluation methodologies to link on-the-ground GHE "lines of effort" to the realization of national and strategic goals and end-states. It is hoped these methods will find use beyond the MODEL project itself, and will catalyze a new body of rigorous, empirically based work, which measures the effectiveness of a broad spectrum of GHE and security cooperation activities. We based our strategies on the principle that it is much more cost-effective to prevent conflicts than it is to stop one once it's started. I cannot overstate the importance of our theater security cooperation programs as the centerpiece to securing our Homeland from the irregular and catastrophic threats of the 21st Century.-GEN James L. Jones, USMC (Ret.). Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.

  16. Compensation of the sheath effects in cylindrical floating probes

    NASA Astrophysics Data System (ADS)

    Park, Ji-Hwan; Chung, Chin-Wook

    2018-05-01

    In cylindrical floating probe measurements, the plasma density and electron temperature are overestimated due to sheath expansion and oscillation. To reduce these sheath effects, a compensation method based on well-developed floating sheath theories is proposed and applied to the floating harmonic method. The iterative calculation of the Allen-Boyd-Reynolds equation can derive the floating sheath thickness, which can be used to calculate the effective ion collection area; in this way, an accurate ion density is obtained. The Child-Langmuir law is used to calculate the ion harmonic currents caused by sheath oscillation of the alternating-voltage-biased probe tip. Accurate plasma parameters can be obtained by subtracting these ion harmonic currents from the total measured harmonic currents. Herein, the measurement principles and compensation method are discussed in detail and an experimental demonstration is presented.

  17. Study on photoelectric parameter measurement method of high capacitance solar cell

    NASA Astrophysics Data System (ADS)

    Zhang, Junchao; Xiong, Limin; Meng, Haifeng; He, Yingwei; Cai, Chuan; Zhang, Bifeng; Li, Xiaohui; Wang, Changshi

    2018-01-01

    High-efficiency solar cells usually have a high capacitance characteristic, so measurement of their photoelectric performance usually requires a long pulse width and a long sweep time. The effects of irradiance non-uniformity, probe shielding, and spectral mismatch on the I-V curve measurement are analyzed experimentally. A compensation method for the irradiance loss caused by probe shielding is proposed, enabling accurate measurement of the irradiance during the I-V curve measurement of a solar cell. Based on the sensitivity of a solar cell's open-circuit voltage to its junction temperature, an accurate method for measuring the cell temperature under continuous irradiation is proposed. Finally, a measurement method with high accuracy and a wide application range for high-capacitance solar cells is presented.

  18. The identification and repair of anomalous measurements in the measurement of big diameter based on rolling-wheel method

    NASA Astrophysics Data System (ADS)

    Chen, Haiou; Yu, Xiaofen

    2011-05-01

    The rolling-wheel method is an effective way of measuring large diameters. Even after the temperature and pressure errors are corrected, the measurement uncertainty cannot be held stably at φ = 5 μm/m because of the influence of wheel skid. The traditional method of identifying skid overlooks the influences of unstable motor speed, the form error, and the installation eccentricity of the large shaft and rolling wheel, among other factors, so that method has its limitations. In this paper, a new method of repeated identification and repair is introduced: n diameters are measured, Chauvenet's criterion is used to identify the anomalous measurements one by one, the average value of the remaining data is then used to repair the identified anomalous measurements, and the next round of identification and repair is carried out until the accuracy requirement of the measurement is satisfied. Experimental results indicate that the method can identify anomalous measurements whose skid-induced offsets are greater than 0.2φ, and that the measurement uncertainty is improved substantially.
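
    The identify-and-repair loop described in this record can be sketched in a few lines of Python. The fragment below is a simplified illustration only, not the authors' implementation; the function name, the fixed round limit, and the example readings are assumptions.

        import numpy as np
        from scipy.stats import norm

        def chauvenet_repair(diameters, max_rounds=10):
            """Iteratively flag anomalous diameter readings with Chauvenet's
            criterion and replace them by the mean of the remaining data
            (a simplified sketch of the repair loop described above)."""
            x = np.asarray(diameters, dtype=float).copy()
            for _ in range(max_rounds):
                mean, std = x.mean(), x.std(ddof=1)
                if std == 0:
                    break
                z = np.abs(x - mean) / std
                # Chauvenet: reject a point if the expected number of equally
                # deviant points in a sample of this size is below 0.5
                tail_prob = 2.0 * (1.0 - norm.cdf(z))
                outliers = tail_prob * x.size < 0.5
                if not outliers.any():
                    break
                x[outliers] = x[~outliers].mean()   # repair with the clean mean
            return x

        # One skid-affected reading among repeated diameter measurements (mm)
        readings = [1200.002, 1200.004, 1200.001, 1200.003, 1200.650, 1200.002]
        print(chauvenet_repair(readings))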

  19. Chapter 8: Whole-Building Retrofit with Consumption Data Analysis Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W.; Agnew, Ken; Goldberg, Mimi

    Whole-building retrofits involve the installation of multiple measures. Whole-building retrofit programs take many forms. With a focus on overall building performance, these programs usually begin with an energy audit to identify cost-effective energy efficiency measures for the home. Measures are then installed, either at no cost to the homeowner or partially paid for by rebates and/or financing. The methods described here may also be applied to evaluation of single-measure retrofit programs. Related methods exist for replace-on-failure programs and for new construction, but are not the subject of this chapter.

  20. MTF measurement of LCDs by a linear CCD imager: I. Monochrome case

    NASA Astrophysics Data System (ADS)

    Kim, Tae-hee; Choe, O. S.; Lee, Yun Woo; Cho, Hyun-Mo; Lee, In Won

    1997-11-01

    We construct a modulation transfer function (MTF) measurement system for an LCD using a linear charge-coupled device (CCD) imager. The MTF as used for optical systems cannot describe the combined effect of resolution and contrast on the image quality of a display. We therefore present a new measurement method based on the transmission property of an LCD. The MTF is measured while the contrast and brightness levels are controlled. From the results, we show that the method is useful for describing image quality. The new measurement method and its measurement conditions are described. To demonstrate its validity, the method is applied to compare the performance of two different LCDs.

  1. 3D digital image correlation using a single 3CCD colour camera and dichroic filter

    NASA Astrophysics Data System (ADS)

    Zhong, F. Q.; Shao, X. X.; Quan, C.

    2018-04-01

    In recent years, three-dimensional digital image correlation methods using a single colour camera have been reported. In this study, we propose a simplified system by employing a dichroic filter (DF) to replace the beam splitter and colour filters. The DF can be used to combine two views from different perspectives reflected by two planar mirrors and eliminate their interference. A 3CCD colour camera is then used to capture two different views simultaneously via its blue and red channels. Moreover, the measurement accuracy of the proposed method is higher since the effect of refraction is reduced. Experiments are carried out to verify the effectiveness of the proposed method. It is shown that the interference between the blue and red views is insignificant. In addition, the measurement accuracy of the proposed method is validated on the rigid body displacement. The experimental results demonstrate that the measurement accuracy of the proposed method is higher compared with the reported methods using a single colour camera. Finally, the proposed method is employed to measure the in- and out-of-plane displacements of a loaded plastic board. The re-projection errors of the proposed method are smaller than those of the reported methods using a single colour camera.

  2. Measuring continuous baseline covariate imbalances in clinical trial data

    PubMed Central

    Ciolino, Jody D.; Martin, Renee’ H.; Zhao, Wenle; Hill, Michael D.; Jauch, Edward C.; Palesch, Yuko Y.

    2014-01-01

    This paper presents and compares several methods of measuring continuous baseline covariate imbalance in clinical trial data. Simulations illustrate that though the t-test is an inappropriate method of assessing continuous baseline covariate imbalance, the test statistic itself is a robust measure in capturing imbalance in continuous covariate distributions. Guidelines to assess effects of imbalance on bias, type I error rate, and power for hypothesis test for treatment effect on continuous outcomes are presented, and the benefit of covariate-adjusted analysis (ANCOVA) is also illustrated. PMID:21865270
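
    As a rough illustration of using the test statistic descriptively rather than inferentially, the sketch below computes a pooled-variance two-sample t statistic for one continuous baseline covariate. The variable names, the pooled-variance choice, and the simulated data are assumptions, not taken from the paper.

        import numpy as np

        def imbalance_t_statistic(covariate, arm):
            """Two-sample t statistic used as a descriptive measure of baseline
            imbalance between two trial arms (not as a hypothesis test)."""
            x = np.asarray(covariate, dtype=float)
            g = np.asarray(arm)
            a, b = x[g == 0], x[g == 1]
            na, nb = a.size, b.size
            sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
            return (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))

        rng = np.random.default_rng(1)
        age = rng.normal(65, 10, size=100)          # baseline covariate
        arm = rng.integers(0, 2, size=100)          # randomized assignment
        print(round(imbalance_t_statistic(age, arm), 3))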

  3. Procedural Factors That Affect Psychophysical Measures of Spatial Selectivity in Cochlear Implant Users

    PubMed Central

    Deeks, John M.; Carlyon, Robert P.

    2015-01-01

    Behavioral measures of spatial selectivity in cochlear implants are important both for guiding the programming of individual users’ implants and for the evaluation of different stimulation methods. However, the methods used are subject to a number of confounding factors that can contaminate estimates of spatial selectivity. These factors include off-site listening, charge interactions between masker and probe pulses in interleaved masking paradigms, and confusion effects in forward masking. We review the effects of these confounds and discuss methods for minimizing them. We describe one such method in which the level of a 125-pps masker is adjusted so as to mask a 125-pps probe, and where the masker and probe pulses are temporally interleaved. Five experiments describe the method and evaluate the roles of the different potential confounding factors. No evidence was obtained for off-site listening of the type observed in acoustic hearing. The choice of the masking paradigm was shown to alter the measured spatial selectivity. For short gaps between masker and probe pulses, both facilitation and refractory mechanisms had an effect on masking; this finding should inform the choice of stimulation rate in interleaved masking experiments. No evidence for confusion effects in forward masking was revealed. It is concluded that the proposed method avoids many potential confounds but that the choice of method should depend on the research question under investigation. PMID:26420785

  4. Measuring Learning Gains in Chemical Education: A Comparison of Two Methods

    ERIC Educational Resources Information Center

    Pentecost, Thomas C.; Barbera, Jack

    2013-01-01

    Evaluating the effect of a pedagogical innovation is often done by looking for a significant difference in a content measure using a pre-post design. While this approach provides valuable information regarding the presence or absence of an effect, it is limited in providing details about the nature of the effect. A measure of the magnitude of the…

  5. Application of partial inversion pulse to ultrasonic time-domain correlation method to measure the flow rate in a pipe

    NASA Astrophysics Data System (ADS)

    Wada, Sanehiro; Furuichi, Noriyuki; Shimada, Takashi

    2017-11-01

    This paper proposes the application of a novel ultrasonic pulse, called a partial inversion pulse (PIP), to the measurement of the velocity profile and flow rate in a pipe using the ultrasound time-domain correlation (UTDC) method. In general, the measured flow rate depends on the velocity profile in the pipe; thus, on-site calibration is the only method of checking the accuracy of on-site flow rate measurements. Flow rate calculation using UTDC is based on the integration of the measured velocity profile. The advantages of this method over the ultrasonic pulse Doppler method are that the measurable velocity range is in principle unlimited and that it is applicable to flow fields without a sufficient number of reflectors. However, it has been previously reported that the measurable velocity range for UTDC is limited by false detections. Considering the application of this method to on-site flow fields, the issue of velocity range is important. To reduce the effect of false detections, a PIP signal, which is an ultrasound signal that contains a partially inverted region, was developed in this study. The advantages of the PIP signal are that it requires little additional hardware cost and no additional software cost in comparison with conventional methods. The effects of inversion on the characteristics of the ultrasound transmission were estimated through numerical calculation. Then, experimental measurements were performed at a national standard calibration facility for water flow rate in Japan. The experimental results demonstrate that measurements made using a PIP signal are more accurate and yield a higher detection ratio than measurements using a normal pulse signal.
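
    The core of the time-domain correlation principle, estimating the scatterer displacement between two successive echoes from the lag of their cross-correlation peak, can be sketched as follows. This is a bare-bones illustration under assumed parameters (sound speed, sampling rate, pulse repetition frequency); range gating, peak interpolation, and the PIP waveform itself are omitted.

        import numpy as np

        def utdc_velocity(echo1, echo2, fs, prf, c=1480.0):
            """Axial velocity estimate from two successive pulse echoes via
            time-domain cross-correlation (simplified UTDC sketch)."""
            e1 = echo1 - np.mean(echo1)
            e2 = echo2 - np.mean(echo2)
            corr = np.correlate(e2, e1, mode="full")
            lag = np.argmax(corr) - (len(e1) - 1)    # time shift in samples
            dt = lag / fs                            # round-trip shift [s]
            displacement = c * dt / 2.0              # one-way axial shift [m]
            return displacement * prf                # velocity [m/s]

        # Synthetic example: a scatterer echo shifted by 3 samples between pulses
        fs, prf = 40e6, 2000.0                       # 40 MHz sampling, 2 kHz PRF
        t = np.arange(256)
        echo1 = np.exp(-((t - 100) / 8.0) ** 2) * np.sin(0.8 * t)
        echo2 = np.exp(-((t - 103) / 8.0) ** 2) * np.sin(0.8 * (t - 3))
        print(utdc_velocity(echo1, echo2, fs, prf))  # about 0.11 m/s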

  6. Analysis and testing of a new method for drop size measurement using laser scatter interferometry

    NASA Technical Reports Server (NTRS)

    Bachalo, W. D.; Houser, M. J.

    1984-01-01

    Research was conducted on a laser light scatter detection method for measuring the size and velocity of spherical particles. The method is based upon the measurement of the interference fringe pattern produced by spheres passing through the intersection of two laser beams. A theoretical analysis of the method was carried out using the geometrical optics theory. Experimental verification of the theory was obtained by using monodisperse droplet streams. Several optical configurations were tested to identify all of the parametric effects upon the size measurements. Both off-axis forward and backscatter light detection were utilized. Simulated spray environments and fuel spray nozzles were used in the evaluation of the method. The measurements of the monodisperse drops showed complete agreement with the theoretical predictions. The method was demonstrated to be independent of the beam intensity and extinction resulting from the surrounding drops. Signal processing concepts were considered and a method was selected for development.

  7. Monitoring post-fire vegetation rehabilitation projects: A common approach for non-forested ecosystems

    USGS Publications Warehouse

    Wirth, Troy A.; Pyke, David A.

    2007-01-01

    Emergency Stabilization and Rehabilitation (ES&R) and Burned Area Emergency Response (BAER) treatments are short-term, high-intensity treatments designed to mitigate the adverse effects of wildfire on public lands. The federal government expends significant resources implementing ES&R and BAER treatments after wildfires; however, recent reviews have found that existing data from monitoring and research are insufficient to evaluate the effects of these activities. The purpose of this report is to: (1) document what monitoring methods are generally used by personnel in the field; (2) describe approaches and methods for post-fire vegetation and soil monitoring documented in agency manuals; (3) determine the common elements of monitoring programs recommended in these manuals; and (4) describe a common monitoring approach to determine the effectiveness of future ES&R and BAER treatments in non-forested regions. Both qualitative and quantitative methods to measure effectiveness of ES&R treatments are used by federal land management agencies. Quantitative methods are used in the field depending on factors such as funding, personnel, and time constraints. There are seven vegetation monitoring manuals produced by the federal government that address monitoring methods for (primarily) vegetation and soil attributes. These methods vary in their objectivity and repeatability. The most repeatable methods are point-intercept, quadrat-based density measurements, gap intercepts, and direct measurement of soil erosion. Additionally, these manuals recommend approaches for designing monitoring programs for the state of ecosystems or the effect of management actions. The elements of a defensible monitoring program applicable to ES&R and BAER projects that most of these manuals have in common are objectives, stratification, control areas, random sampling, data quality, and statistical analysis. The effectiveness of treatments can be determined more accurately if data are gathered using an approach that incorporates these six monitoring program design elements and objectives, as well as repeatable procedures to measure cover, density, gap intercept, and soil erosion within each ecoregion and plant community. Additionally, using a common monitoring program design with comparable methods, consistently documenting results, and creating and maintaining a central database for query and reporting, will ultimately allow a determination of the effectiveness of post-fire rehabilitation activities region-wide.

  8. Electrical method for the measurements of volume averaged electron density and effective coupled power to the plasma bulk

    NASA Astrophysics Data System (ADS)

    Henault, M.; Wattieaux, G.; Lecas, T.; Renouard, J. P.; Boufendi, L.

    2016-02-01

    Nanoparticles growing in, or injected into, a low-pressure cold plasma generated by a radiofrequency capacitively coupled discharge induce strong modifications in the electrical parameters of both the plasma and the discharge. In this paper, a non-intrusive method based on the measurement of the plasma impedance is used to determine the volume-averaged electron density and the effective power coupled to the plasma bulk. Good agreement is found when the results are compared with those given by other well-known and established methods.

  9. Cost-effectiveness with multiple outcomes.

    PubMed

    Bjørner, Jakob; Keiding, Hans

    2004-12-01

    In a large number of situations, activities in health care have to be measured in terms of outcome and cost. However, the cases where outcome is fully captured by a single measure are rather few, so that one uses some index for outcome, computed by weighing together several outcome measures using subjective and somewhat arbitrary weights. In the paper we propose an approach to cost-effectiveness analysis where such artificial aggregation is avoided. This is achieved by assigning to each activity the weights which are the most favourable in a comparison with the other options available, so that activities which have a poor score in this method are guaranteed to be inferior. The method corresponds to applying Data envelopment analysis, known from the theory of productivity, to the context of health economic evaluations. The method is applied to an analysis of the cost-effectiveness of alternative health plans using data from the Medical Outcome Study (JAMA 1996; 276: 1039-1047), where outcome is measured as improvement in mental and physical health. 2004 John Wiley & Sons, Ltd.
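
    The idea of scoring each activity with the outcome weights that are most favourable to it can be written as a small linear program per activity. The sketch below is a minimal single-cost-input formulation in the spirit of data envelopment analysis; the exact LP used in the paper, the function name, and the example numbers are assumptions.

        import numpy as np
        from scipy.optimize import linprog

        def favourable_weight_score(costs, outcomes, k):
            """Cost-effectiveness score of plan k with its most favourable
            outcome weights: maximise w.y_k subject to w.y_j <= cost_j for
            every plan j and w >= 0, then normalise by cost_k."""
            costs = np.asarray(costs, dtype=float)       # one cost per plan
            Y = np.asarray(outcomes, dtype=float)        # plans x outcomes
            res = linprog(c=-Y[k], A_ub=Y, b_ub=costs,
                          bounds=[(0, None)] * Y.shape[1], method="highs")
            return -res.fun / costs[k]                   # <= 1; 1 = not dominated

        costs = [100.0, 120.0, 150.0]                    # plan costs
        outcomes = [[4.0, 6.0],                          # mental / physical gain
                    [7.0, 3.0],
                    [5.0, 5.0]]
        # The third plan scores below 1 even with its most favourable weights
        print([round(favourable_weight_score(costs, outcomes, k), 3)
               for k in range(3)])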

  10. Autonomous Method and System for Minimizing the Magnitude of Plasma Discharge Current Oscillations in a Hall Effect Plasma Device

    NASA Technical Reports Server (NTRS)

    Hruby, Vladimir (Inventor); Demmons, Nathaniel (Inventor); Ehrbar, Eric (Inventor); Pote, Bruce (Inventor); Rosenblad, Nathan (Inventor)

    2014-01-01

    An autonomous method for minimizing the magnitude of plasma discharge current oscillations in a Hall effect plasma device includes iteratively measuring plasma discharge current oscillations of the plasma device and iteratively adjusting the magnet current delivered to the plasma device in response to measured plasma discharge current oscillations to reduce the magnitude of the plasma discharge current oscillations.
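
    The measure-then-adjust loop described in this patent abstract can be illustrated with a generic perturb-and-observe controller. The sketch below is purely illustrative; the device's actual search logic, step size, and oscillation response are not given in the abstract and are assumed here.

        import numpy as np

        def minimize_oscillations(measure_oscillation, magnet_current,
                                  step=0.05, n_iter=50):
            """Nudge the magnet current in the direction that reduces the
            measured discharge-current oscillation magnitude."""
            best = measure_oscillation(magnet_current)
            direction = 1.0
            for _ in range(n_iter):
                trial = magnet_current + direction * step
                level = measure_oscillation(trial)
                if level < best:
                    magnet_current, best = trial, level  # keep the improvement
                else:
                    direction = -direction               # reverse the search
            return magnet_current, best

        # Hypothetical oscillation response with a minimum near 4.2 A
        def response(i):
            return 0.8 * (i - 4.2) ** 2 + 0.1 + 0.005 * np.random.rand()

        print(minimize_oscillations(response, magnet_current=3.5))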

  11. Confidence intervals for single-case effect size measures based on randomization test inversion.

    PubMed

    Michiels, Bart; Heyvaert, Mieke; Meulders, Ann; Onghena, Patrick

    2017-02-01

    In the current paper, we present a method to construct nonparametric confidence intervals (CIs) for single-case effect size measures in the context of various single-case designs. We use the relationship between a two-sided statistical hypothesis test at significance level α and a 100 (1 - α) % two-sided CI to construct CIs for any effect size measure θ that contain all point null hypothesis θ values that cannot be rejected by the hypothesis test at significance level α. This method of hypothesis test inversion (HTI) can be employed using a randomization test as the statistical hypothesis test in order to construct a nonparametric CI for θ. We will refer to this procedure as randomization test inversion (RTI). We illustrate RTI in a situation in which θ is the unstandardized and the standardized difference in means between two treatments in a completely randomized single-case design. Additionally, we demonstrate how RTI can be extended to other types of single-case designs. Finally, we discuss a few challenges for RTI as well as possibilities when using the method with other effect size measures, such as rank-based nonoverlap indices. Supplementary to this paper, we provide easy-to-use R code, which allows the user to construct nonparametric CIs according to the proposed method.
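
    For the unstandardized mean difference in a completely randomized single-case design, the inversion can be sketched as a grid search over shift values: every shift that the randomization test fails to reject at level alpha is retained in the interval. The code below is a brute-force illustration (the paper inverts the test more efficiently); the data, grid, and helper names are assumptions.

        import numpy as np
        from itertools import combinations

        def rt_pvalue(values, is_b):
            """Two-sided randomization-test p value for the mean difference
            B - A over all possible treatment assignments."""
            values, is_b = np.asarray(values, float), np.asarray(is_b, bool)
            n, nb = len(values), int(is_b.sum())
            obs = values[is_b].mean() - values[~is_b].mean()
            stats = []
            for idx in combinations(range(n), nb):
                m = np.zeros(n, bool)
                m[list(idx)] = True
                stats.append(values[m].mean() - values[~m].mean())
            return np.mean(np.abs(stats) >= abs(obs) - 1e-12)

        def rti_confidence_interval(values, is_b, alpha=0.05, grid=None):
            """Nonparametric CI by randomization test inversion: keep every
            shift theta0 not rejected after subtracting theta0 from the
            B-phase observations."""
            values, is_b = np.asarray(values, float), np.asarray(is_b, bool)
            if grid is None:
                span = values.max() - values.min()
                grid = np.linspace(-2 * span, 2 * span, 401)
            kept = [t for t in grid if rt_pvalue(values - t * is_b, is_b) > alpha]
            return (min(kept), max(kept)) if kept else (np.nan, np.nan)

        scores = [3.1, 2.8, 6.0, 3.4, 5.7, 6.3, 2.9, 5.5]   # hypothetical data
        treat_b = np.array([0, 0, 1, 0, 1, 1, 0, 1], bool)
        print(rti_confidence_interval(scores, treat_b))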

  12. Time-of-flight measurements of heavy ions using Si PIN diodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strekalovsky, A. O., E-mail: alex.strek@bk.ru; Kamanin, D. V.; Pyatkov, Yu. V.

    2016-12-15

    A new off-line timing method for PIN diode signals is presented which allows the plasma delay effect to be suppressed. Velocities of heavy ions measured by the new method are in good agreement within a wide range of masses and energies with velocities measured by time stamp detectors based on microchannel plates.

  13. Automatic Method of Pause Measurement for Normal and Dysarthric Speech

    ERIC Educational Resources Information Center

    Rosen, Kristin; Murdoch, Bruce; Folker, Joanne; Vogel, Adam; Cahill, Louise; Delatycki, Martin; Corben, Louise

    2010-01-01

    This study proposes an automatic method for the detection of pauses and identification of pause types in conversational speech for the purpose of measuring the effects of Friedreich's Ataxia (FRDA) on speech. Speech samples of [approximately] 3 minutes were recorded from 13 speakers with FRDA and 18 healthy controls. Pauses were measured from the…

  14. Evaluation of Saltzman and phenoldisulfonic acid methods for determining NO/sub x/ in engine exhaust gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groth, R.H.; Calabro, D.S.

    1969-11-01

    The two methods normally used for the analysis of NOx are the Saltzman and the phenoldisulfonic acid techniques. This paper describes an evaluation of these wet chemical methods to determine their practical application to engine exhaust gas analysis. Parameters considered for the Saltzman method included bubbler collection efficiency, NO-to-NO2 conversion efficiency, the masking effect of other contaminants usually present in exhaust gases, and the time-temperature effect of these contaminants on stored developed solutions. Collection efficiency and the effects of contaminants were also considered for the phenoldisulfonic acid method. Test results indicated satisfactory collection and conversion efficiencies for the Saltzman method, but contaminants seriously affected the measurement accuracy, particularly if the developed solution was stored for a number of hours at room temperature before analysis; storage at 32°F minimized this effect. The standard procedure for the phenoldisulfonic acid method gave good results, but the process was found to be too time consuming for routine analysis and measured only total NOx. 3 references, 9 tables.

  15. Apparatus and method for quantitative assay of samples of transuranic waste contained in barrels in the presence of matrix material

    DOEpatents

    Caldwell, J.T.; Herrera, G.C.; Hastings, R.D.; Shunk, E.R.; Kunz, W.E.

    1987-08-28

    Apparatus and method for performing corrections for matrix material effects on the neutron measurements generated from analysis of transuranic waste drums using the differential-dieaway technique. By measuring the absorption index and the moderator index for a particular drum, correction factors can be determined for the effects of matrix materials on the ''observed'' quantity of fissile and fertile material present therein in order to determine the actual assays thereof. A barrel flux monitor is introduced into the measurement chamber to accomplish these measurements as a new contribution to the differential-dieaway technology. 9 figs.

  16. An algorithm to estimate unsteady and quasi-steady pressure fields from velocity field measurements.

    PubMed

    Dabiri, John O; Bose, Sanjeeb; Gemmell, Brad J; Colin, Sean P; Costello, John H

    2014-02-01

    We describe and characterize a method for estimating the pressure field corresponding to velocity field measurements such as those obtained by using particle image velocimetry. The pressure gradient is estimated from a time series of velocity fields for unsteady calculations or from a single velocity field for quasi-steady calculations. The corresponding pressure field is determined based on median polling of several integration paths through the pressure gradient field in order to reduce the effect of measurement errors that accumulate along individual integration paths. Integration paths are restricted to the nodes of the measured velocity field, thereby eliminating the need for measurement interpolation during this step and significantly reducing the computational cost of the algorithm relative to previous approaches. The method is validated by using numerically simulated flow past a stationary, two-dimensional bluff body and a computational model of a three-dimensional, self-propelled anguilliform swimmer to study the effects of spatial and temporal resolution, domain size, signal-to-noise ratio and out-of-plane effects. Particle image velocimetry measurements of a freely swimming jellyfish medusa and a freely swimming lamprey are analyzed using the method to demonstrate the efficacy of the approach when applied to empirical data.
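
    A stripped-down version of the median-polling idea can be written for a pressure-gradient field given on a regular grid: integrate along several grid-restricted paths from a reference node and take the median estimate at every node. The sketch below uses trapezoidal steps, monotone staircase paths, and a fixed reference corner, all of which are simplifying assumptions relative to the published algorithm.

        import numpy as np

        def pressure_from_gradient(dpdx, dpdy, dx, dy, n_paths=8, seed=0):
            """Median over several grid-restricted path integrals of the
            measured pressure gradient, with p = 0 at node (0, 0)."""
            ny, nx = dpdx.shape
            rng = np.random.default_rng(seed)
            estimates = np.zeros((n_paths, ny, nx))
            for k in range(n_paths):
                p = np.full((ny, nx), np.nan)
                p[0, 0] = 0.0
                for j in range(ny):
                    for i in range(nx):
                        if i == 0 and j == 0:
                            continue
                        steps = []
                        if i > 0:   # trapezoidal step from the left neighbour
                            steps.append(p[j, i - 1]
                                         + 0.5 * (dpdx[j, i - 1] + dpdx[j, i]) * dx)
                        if j > 0:   # trapezoidal step from the lower neighbour
                            steps.append(p[j - 1, i]
                                         + 0.5 * (dpdy[j - 1, i] + dpdy[j, i]) * dy)
                        p[j, i] = steps[rng.integers(len(steps))]
                estimates[k] = p
            return np.median(estimates, axis=0)

        # Synthetic check: p = x^2 + y^2, so dp/dx = 2x and dp/dy = 2y
        x = np.linspace(0.0, 1.0, 21)
        X, Y = np.meshgrid(x, x)
        p_est = pressure_from_gradient(2 * X, 2 * Y, x[1] - x[0], x[1] - x[0])
        print(np.max(np.abs(p_est - (X**2 + Y**2))))   # near zero for this field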

  17. Segmentation quality evaluation using region-based precision and recall measures for remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Xueliang; Feng, Xuezhi; Xiao, Pengfeng; He, Guangjun; Zhu, Liujun

    2015-04-01

    Segmentation of remote sensing images is a critical step in geographic object-based image analysis. Evaluating the performance of segmentation algorithms is essential to identify effective segmentation methods and optimize their parameters. In this study, we propose region-based precision and recall measures and use them to compare two image partitions for the purpose of evaluating segmentation quality. The two measures are calculated based on region overlapping and presented as a point or a curve in a precision-recall space, which can indicate segmentation quality in both geometric and arithmetic respects. Furthermore, the precision and recall measures are combined by using four different methods. We examine and compare the effectiveness of the combined indicators through geometric illustration, in an effort to reveal segmentation quality clearly and capture the trade-off between the two measures. In the experiments, we adopted the multiresolution segmentation (MRS) method for evaluation. The proposed measures are compared with four existing discrepancy measures to further confirm their capabilities. Finally, we suggest using a combination of the region-based precision-recall curve and the F-measure for supervised segmentation evaluation.
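
    A minimal region-overlap formulation of the precision and recall measures, combined here with the F-measure, is sketched below. The best-overlap matching rule and the toy label maps are assumptions made for illustration; the paper defines its own overlap-based measures and examines four ways of combining them.

        import numpy as np

        def region_precision_recall(seg, ref):
            """Region-based precision/recall between a segmentation and a
            reference partition (integer label maps of identical shape),
            using a best-overlap matching rule, plus their F-measure."""
            seg, ref = np.asarray(seg).ravel(), np.asarray(ref).ravel()
            pairs, counts = np.unique(np.stack([seg, ref]), axis=1,
                                      return_counts=True)
            overlap = {(s, r): c for (s, r), c in zip(pairs.T, counts)}
            seg_labels = np.unique(seg)
            ref_labels = np.unique(ref)
            # Precision: fraction of pixels of each segment covered by its
            # best-matching reference region (recall: roles swapped)
            precision = sum(max(c for (s, r), c in overlap.items() if s == lbl)
                            for lbl in seg_labels) / seg.size
            recall = sum(max(c for (s, r), c in overlap.items() if r == lbl)
                         for lbl in ref_labels) / ref.size
            f_measure = 2 * precision * recall / (precision + recall)
            return precision, recall, f_measure

        ref = np.zeros((8, 8), int)
        ref[:, 4:] = 1                 # two reference regions
        seg = np.zeros((8, 8), int)
        seg[:, 5:] = 1                 # segmentation boundary shifted by one column
        print(region_precision_recall(seg, ref))   # (0.875, 0.875, 0.875)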

  18. Echo movement and evolution from real-time processing.

    NASA Technical Reports Server (NTRS)

    Schaffner, M. R.

    1972-01-01

    Preliminary experimental data on the effectiveness of conventional radars in measuring the movement and evolution of meteorological echoes when the radar is connected to a programmable real-time processor are examined. In the processor, programming is accomplished by conceiving abstract machines which constitute the actual programs used in the methods employed. An analysis of these methods, such as the center-of-gravity method, the contour-displacement method, the method of slope, the cross-section method, the contour cross-correlation method, the method of echo evolution at each point, and three-dimensional measurements, shows that the motions deduced from them may differ notably (since each method determines different quantities), but the plurality of measurements may give additional information on the characteristics of the precipitation.

  19. [New assessment scale based on the type of person desired by an employer].

    PubMed

    Sasaki, Kenichi; Toyoda, Hideki

    2011-10-01

    In many cases, aptitude tests used in the hiring process fail to connect the measurement scale with the emotional type of the person desired by an employer. This experimental study introduced a new measuring method in which the measurement scale could be adjusted according to the type of person an employer is seeking. The effectiveness of this method was then verified by comparing the results of an aptitude test that used the new scale with the results of the typical hiring process.

  20. Procedure for Determining Speed and Climbing Performance of Airships

    NASA Technical Reports Server (NTRS)

    Thompson, F L

    1936-01-01

    The procedure for obtaining air-speed and rate-of-climb measurements in performance tests of airships is described. Two methods of obtaining speed measurements, one by means of instruments in the airship and the other by flight over a measured ground course, are explained. Instruments, their calibrations, necessary correction factors, observations, and calculations are detailed for each method, and also for the rate-of-climb tests. A method of correcting for the effect of moist air on density and a description of other methods of speed-course testing are appended.

  1. Evaluation of local site effect from microtremor measurements in Babol City, Iran

    NASA Astrophysics Data System (ADS)

    Rezaei, Sadegh; Choobbasti, Asskar Janalizadeh

    2018-03-01

    Every year, numerous casualties and substantial financial losses are incurred due to earthquakes. The losses incurred by an earthquake vary depending on the local site effect. Therefore, to mitigate the drastic effects of an earthquake, urban districts should be evaluated in terms of the local site effect. One of the methods for evaluating the local site effect is microtremor measurement and analysis. Aiming at evaluation of the local site effect across the city of Babol, the study area was gridded and microtremor measurements were performed with an appropriate spatial distribution. The acquired data were analyzed using the horizontal-to-vertical noise ratio (HVNR) method, and the fundamental frequency and associated amplitude of the H/V peak were obtained. The results indicate that the fundamental frequency of the study area is generally lower than 1.25 Hz, which is acceptably in agreement with the findings of previous studies. Also, in order to constrain and validate the seismostratigraphic model obtained with this method, the results were compared with geotechnical, geological, and seismic data. Comparing the results of the different methods, it was observed that the presented geophysical method can successfully determine the fundamental frequency across the study area as well as the local site effect. Using the data obtained from the microtremor analysis, a microzonation map of fundamental frequency across the city of Babol was prepared. This map has numerous applications in the design of high-rise buildings and urban development plans.
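
    The HVNR computation itself reduces to averaging the amplitude spectra of the horizontal and vertical components and locating the peak of their ratio. The sketch below is a bare-bones version with assumed parameters; the window rejection and Konno-Ohmachi-style smoothing normally applied in practice are omitted.

        import numpy as np

        def hv_fundamental_frequency(ns, ew, ud, fs, nseg=4096):
            """H/V spectral ratio of a three-component microtremor record and
            the frequency of its main peak (simplified HVNR sketch)."""
            def mean_amplitude_spectrum(x):
                segs = [x[i:i + nseg] for i in range(0, len(x) - nseg + 1, nseg)]
                return np.mean([np.abs(np.fft.rfft(s * np.hanning(nseg)))
                                for s in segs], axis=0)
            h = np.sqrt(mean_amplitude_spectrum(ns) * mean_amplitude_spectrum(ew))
            v = mean_amplitude_spectrum(ud)
            freqs = np.fft.rfftfreq(nseg, d=1.0 / fs)
            band = (freqs > 0.2) & (freqs < 20.0)   # usual engineering band
            hv = h[band] / v[band]
            return freqs[band][np.argmax(hv)], hv

        # Synthetic record with a resonance near 1 Hz on the horizontal components
        fs = 100.0
        t = np.arange(0.0, 600.0, 1.0 / fs)
        rng = np.random.default_rng(0)
        ud = rng.normal(size=t.size)
        ns = rng.normal(size=t.size) + 3 * np.sin(2 * np.pi * 1.0 * t)
        ew = rng.normal(size=t.size) + 3 * np.sin(2 * np.pi * 1.0 * t + 0.7)
        f0, _ = hv_fundamental_frequency(ns, ew, ud, fs)
        print(round(f0, 2))                          # close to 1 Hz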

  2. Inter-Method Reliability of School Effectiveness Measures: A Comparison of Value-Added and Regression Discontinuity Estimates

    ERIC Educational Resources Information Center

    Perry, Thomas

    2017-01-01

    Value-added (VA) measures are currently the predominant approach used to compare the effectiveness of schools. Recent educational effectiveness research, however, has developed alternative approaches including the regression discontinuity (RD) design, which also allows estimation of absolute school effects. Initial research suggests RD is a viable…

  3. The multi-line slope method for measuring the effective magnetic field of cool stars: an application to the solar-like cycle of ɛ Eri

    NASA Astrophysics Data System (ADS)

    Scalia, C.; Leone, F.; Gangi, M.; Giarrusso, M.; Stift, M. J.

    2017-12-01

    One method for the determination of integrated longitudinal stellar fields from low-resolution spectra is the so-called slope method, which is based on the regression of the Stokes V signal against the first derivative of Stokes I. Here we investigate the possibility of extending this technique to measure the magnetic fields of cool stars from high-resolution spectra. For this purpose we developed a multi-line modification to the slope method, called the multi-line slope method. We tested this technique by analysing synthetic spectra computed with the COSSAM code and real observations obtained with the high-resolution spectropolarimeters Narval, HARPSpol and the Catania Astrophysical Observatory Spectropolarimeter (CAOS). We show that the multi-line slope method is a fast alternative to the least squares deconvolution technique for the measurement of the effective magnetic fields of cool stars. Using a Fourier transform on the effective magnetic field variations of the star ε Eri, we find that the long-term periodicity of the field corresponds to the 2.95-yr period of the stellar dynamo, revealed by the variation of the activity index.
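
    In the weak-field regime the slope method amounts to a linear regression of Stokes V/I against the normalised derivative of Stokes I. The single-line sketch below uses the weak-field constant commonly quoted for wavelengths in angstroms and fields in gauss; that constant, the regression through the origin, and the synthetic line are assumptions here, and the paper's multi-line variant combines many lines rather than one.

        import numpy as np

        CZ = 4.67e-13   # weak-field constant, angstrom^-1 gauss^-1 (assumed)

        def slope_method_bz(wave, stokes_i, stokes_v, g_eff, lambda0):
            """Effective longitudinal field (gauss) from the least-squares
            slope of V/I against the normalised derivative of I."""
            didl = np.gradient(stokes_i, wave)
            x = -g_eff * CZ * lambda0**2 * didl / stokes_i
            y = stokes_v / stokes_i
            return np.sum(x * y) / np.sum(x * x)     # regression through origin

        # Synthetic Gaussian line with an injected 50 G longitudinal field
        wave = np.linspace(6172.0, 6174.0, 200)      # angstrom
        lambda0, g_eff, bz_true = 6173.3, 2.5, 50.0
        stokes_i = 1.0 - 0.6 * np.exp(-((wave - lambda0) / 0.08) ** 2)
        stokes_v = -g_eff * CZ * lambda0**2 * bz_true * np.gradient(stokes_i, wave)
        print(round(slope_method_bz(wave, stokes_i, stokes_v, g_eff, lambda0), 1))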

  4. Transfer path analysis: Current practice, trade-offs and consideration of damping

    NASA Astrophysics Data System (ADS)

    Oktav, Akın; Yılmaz, Çetin; Anlaş, Günay

    2017-02-01

    Current practice of experimental transfer path analysis is discussed in the context of trade-offs between accuracy and time cost. An overview of methods, which propose solutions for structure borne noise, is given, where assumptions, drawbacks and advantages of methods are stated theoretically. Applicability of methods is also investigated, where an engine induced structure borne noise of an automobile is taken as a reference problem. Depending on this particular problem, sources of measurement errors, processing operations that affect results and physical obstacles faced in the application are analysed. While an operational measurement is common in all stated methods, when it comes to removal of source, or the need for an external excitation, discrepancies are present. Depending on the chosen method, promised outcomes like independent characterisation of the source, or getting information about mounts also differ. Although many aspects of the problem are reported in the literature, damping and its effects are not considered. Damping effect is embedded in the measured complex frequency response functions, and it is needed to be analysed in the post processing step. Effects of damping, reasons and methods to analyse them are discussed in detail. In this regard, a new procedure, which increases the accuracy of results, is also proposed.

  5. Method effects: the problem with negatively versus positively keyed items.

    PubMed

    Lindwall, Magnus; Barkoukis, Vassilis; Grano, Caterina; Lucidi, Fabio; Raudsepp, Lennart; Liukkonen, Jarmo; Thøgersen-Ntoumani, Cecilie

    2012-01-01

    Using confirmatory factor analyses, we examined method effects on Rosenberg's Self-Esteem Scale (RSES; Rosenberg, 1965) in a sample of older European adults. Nine hundred forty nine community-dwelling adults 60 years of age or older from 5 European countries completed the RSES as well as measures of depression and life satisfaction. The 2 models that had an acceptable fit with the data included method effects. The method effects were associated with both positively and negatively worded items. Method effects models were invariant across gender and age, but not across countries. Both depression and life satisfaction predicted method effects. Individuals with higher depression scores and lower life satisfaction scores were more likely to endorse negatively phrased items.

  6. Peculiar velocity measurement in a clumpy universe

    NASA Astrophysics Data System (ADS)

    Habibi, Farhang; Baghram, Shant; Tavasoli, Saeed

    Aims: In this work, we address the issue of peculiar velocity measurement in a perturbed Friedmann universe using the deviations of the measured luminosity distances of standard candles from those of the background FRW universe. We want to show and quantify the statement that at intermediate redshifts (0.5 < z < 2), deviations from the background FRW model are not uniquely governed by peculiar velocities; luminosity distances are also modified by gravitational lensing. We also want to indicate the importance of relativistic calculations for peculiar velocity measurement at all redshifts. Methods: For this task, we discuss the relativistic corrections to luminosity distance and redshift measurements and show the contribution of each correction, namely the lensing term, the peculiar velocity of the source, and the Sachs-Wolfe effect. We then use the Union 2 SNe Ia sample to investigate the relativistic effects we consider. Results: We show that using the conventional peculiar velocity method, which ignores the lensing effect, results in an overestimate of the measured peculiar velocities at intermediate redshifts, and we quantify this effect. We show that at low redshifts the lensing effect is negligible compared with the effect of peculiar velocity. From the observational point of view, we show that the luminosity uncertainties of the present SNe Ia data prevent us from precisely measuring the peculiar velocities even at low redshifts (z < 0.2).

  7. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    Its powerful nondestructive capabilities are attracting more and more research into computed tomography (CT) for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty, arising from many factors among which the beam hardening (BH) effect plays a vital role, severely limit the further utilization of CT for dimensional metrology. This paper focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed, whose parameters are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a penalty term is added to the cost function, enabling more accurate measurement results to be obtained with a simple global threshold method. The proposed method is efficient and especially suited to cases where there is a large difference in gray value between material and background. Spheres with known diameters are used to verify the accuracy of the dimensional measurements. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is generally feasible.

  8. Broadband EIT borehole measurements with high phase accuracy using numerical corrections of electromagnetic coupling effects

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Zimmermann, E.; Huisman, J. A.; Treichel, A.; Wolters, B.; van Waasen, S.; Kemna, A.

    2013-08-01

    Electrical impedance tomography (EIT) is gaining importance in the field of geophysics and there is increasing interest for accurate borehole EIT measurements in a broad frequency range (mHz to kHz) in order to study subsurface properties. To characterize weakly polarizable soils and sediments with EIT, high phase accuracy is required. Typically, long electrode cables are used for borehole measurements. However, this may lead to undesired electromagnetic coupling effects associated with the inductive coupling between the double wire pairs for current injection and potential measurement and the capacitive coupling between the electrically conductive shield of the cable and the electrically conductive environment surrounding the electrode cables. Depending on the electrical properties of the subsurface and the measured transfer impedances, both coupling effects can cause large phase errors that have typically limited the frequency bandwidth of field EIT measurements to the mHz to Hz range. The aim of this paper is to develop numerical corrections for these phase errors. To this end, the inductive coupling effect was modeled using electronic circuit models, and the capacitive coupling effect was modeled by integrating discrete capacitances in the electrical forward model describing the EIT measurement process. The correction methods were successfully verified with measurements under controlled conditions in a water-filled rain barrel, where a high phase accuracy of 0.8 mrad in the frequency range up to 10 kHz was achieved. The corrections were also applied to field EIT measurements made using a 25 m long EIT borehole chain with eight electrodes and an electrode separation of 1 m. The results of a 1D inversion of these measurements showed that the correction methods increased the measurement accuracy considerably. It was concluded that the proposed correction methods enlarge the bandwidth of the field EIT measurement system, and that accurate EIT measurements can now be made in the mHz to kHz frequency range. This increased accuracy in the kHz range will allow a more accurate field characterization of the complex electrical conductivity of soils and sediments, which may lead to the improved estimation of saturated hydraulic conductivity from electrical properties. Although the correction methods have been developed for a custom-made EIT system, they also have potential to improve the phase accuracy of EIT measurements made with commercial systems relying on multicore cables.

  9. Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data

    PubMed Central

    Zhao, Shanshan

    2014-01-01

    Summary Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling design. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469

  10. Covariate measurement error correction methods in mediation analysis with failure time data.

    PubMed

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  11. Measuring air-water interfacial area for soils using the mass balance surfactant-tracer method.

    PubMed

    Araujo, Juliana B; Mainhagu, Jon; Brusseau, Mark L

    2015-09-01

    There are several methods for conducting interfacial partitioning tracer tests to measure air-water interfacial area in porous media. One such approach is the mass balance surfactant tracer method. An advantage of the mass-balance method compared to other tracer-based methods is that a single test can produce multiple interfacial area measurements over a wide range of water saturations. The mass-balance method has been used to date only for glass beads or treated quartz sand. The purpose of this research is to investigate the effectiveness and implementability of the mass-balance method for application to more complex porous media. The results indicate that interfacial areas measured with the mass-balance method are consistent with values obtained with the miscible-displacement method. This includes results for a soil, for which solid-phase adsorption was a significant component of total tracer retention. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Correlative multiple porosimetries for reservoir sandstones with adoption of a new reference-sample-guided computed-tomographic method.

    PubMed

    Jin, Jae Hwa; Kim, Junho; Lee, Jeong-Yil; Oh, Young Min

    2016-07-22

    One of the main interests in petroleum geology and reservoir engineering is to quantify the porosity of reservoir beds as accurately as possible. A variety of direct measurements, including methods of mercury intrusion, helium injection and petrographic image analysis, have been developed; however, their application frequently yields equivocal results because these methods are different in theoretical bases, means of measurement, and causes of measurement errors. Here, we present a set of porosities measured in Berea Sandstone samples by the multiple methods, in particular with adoption of a new method using computed tomography and reference samples. The multiple porosimetric data show a marked correlativeness among different methods, suggesting that these methods are compatible with each other. The new method of reference-sample-guided computed tomography is more effective than the previous methods when the accompanied merits such as experimental conveniences are taken into account.

  13. Correlative multiple porosimetries for reservoir sandstones with adoption of a new reference-sample-guided computed-tomographic method

    PubMed Central

    Jin, Jae Hwa; Kim, Junho; Lee, Jeong-Yil; Oh, Young Min

    2016-01-01

    One of the main interests in petroleum geology and reservoir engineering is to quantify the porosity of reservoir beds as accurately as possible. A variety of direct measurements, including methods of mercury intrusion, helium injection and petrographic image analysis, have been developed; however, their application frequently yields equivocal results because these methods are different in theoretical bases, means of measurement, and causes of measurement errors. Here, we present a set of porosities measured in Berea Sandstone samples by the multiple methods, in particular with adoption of a new method using computed tomography and reference samples. The multiple porosimetric data show a marked correlativeness among different methods, suggesting that these methods are compatible with each other. The new method of reference-sample-guided computed tomography is more effective than the previous methods when the accompanied merits such as experimental conveniences are taken into account. PMID:27445105

  14. Quantitative measurements of in-cylinder gas composition in a controlled auto-ignition combustion engine

    NASA Astrophysics Data System (ADS)

    Zhao, H.; Zhang, S.

    2008-01-01

    One of the most effective means to achieve controlled auto-ignition (CAI) combustion in a gasoline engine is by the residual gas trapping method. The amount of residual gas and mixture composition have significant effects on the subsequent combustion process and engine emissions. In order to obtain quantitative measurements of in-cylinder residual gas concentration and air/fuel ratio, a spontaneous Raman scattering (SRS) system has been developed recently. The optimized optical SRS setups are presented and discussed. The temperature effect on the SRS measurement is considered and a method has been developed to correct for the overestimated values due to the temperature effect. Simultaneous measurements of O2, H2O, CO2 and fuel were obtained throughout the intake, compression, combustion and expansion strokes. It shows that the SRS can provide valuable data on this process in a CAI combustion engine.

  15. Analysis of conditional genetic effects and variance components in developmental genetics.

    PubMed

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  16. Analysis of Conditional Genetic Effects and Variance Components in Developmental Genetics

    PubMed Central

    Zhu, J.

    1995-01-01

    A genetic model with additive-dominance effects and genotype X environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t - 1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects. PMID:8601500

  17. Depth Dose Measurement using a Scintillating Fiber Optic Dosimeter for Proton Therapy Beam of the Passive-Scattering Mode Having Range Modulator Wheel

    NASA Astrophysics Data System (ADS)

    Hwang, Ui-Jung; Shin, Dongho; Lee, Se Byeong; Lim, Young Kyung; Jeong, Jong Hwi; Kim, Hak Soo; Kim, Ki Hwan

    2018-05-01

    To apply a scintillating fiber dosimetry system to measure the range of a proton therapy beam, a new method was proposed to correct for the quenching effect when measuring a spread-out Bragg peak (SOBP) proton beam whose range is modulated by a range modulator wheel. The scintillating fiber dosimetry system was composed of a plastic scintillating fiber (BCF-12), an optical fiber (SH 2001), a photomultiplier tube (H7546), and a data acquisition system (PXI6221 and SCC68). The proton beam was generated by a cyclotron (Proteus-235) at the National Cancer Center in Korea. It operated in the double-scattering mode, and the spreading out of the Bragg peak was achieved by a spinning range modulation wheel. Bragg peak beams and SOBP beams of various ranges were measured, corrected, and compared to ion chamber data. For the Bragg peak beam, a quenching equation was used to correct the quenching effect. In the proposed process for correcting SOBP beams, the data measured with the scintillating fiber were decomposed into the individual Bragg peaks contained in the SOBP beam, each peak was corrected, and the peaks were then recomposed to reconstruct the SOBP. The measured depth-dose curve for the single Bragg peak beam was well corrected by using a simple quenching equation. Correction of the SOBP beam was conducted with the newly proposed method. The corrected SOBP signal was in accordance with the results measured with an ion chamber. We propose a new method to correct the SOBP beam for the quenching effect in a scintillating fiber dosimetry system. This method can be applied to other scintillator dosimetry systems for radiation beams in which the scintillator exhibits quenching.
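
    The correction strategy can be sketched generically: correct each constituent pristine Bragg peak for quenching and recompose the SOBP from the modulation weights. The Birks-type quenching model, the constant kB and the LET curves below are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch of correcting a scintillator-measured SOBP for quenching by
# correcting each weighted pristine Bragg peak and recomposing the SOBP.
import numpy as np

def birks_correction(signal, let, kB=0.01):
    """Invert a Birks-type quenching model: S = D / (1 + kB * LET)."""
    return signal * (1.0 + kB * let)

def correct_sobp(pristine_peaks, weights, let_curves, kB=0.01):
    """pristine_peaks, let_curves: sequences of depth-dose / LET arrays, one per peak."""
    corrected = np.zeros_like(np.asarray(pristine_peaks[0], dtype=float))
    for peak, w, let in zip(pristine_peaks, weights, let_curves):
        corrected += w * birks_correction(np.asarray(peak, dtype=float), let, kB)
    return corrected
```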

  18. The Relative Effectiveness of Expository Methods of Teaching Science

    ERIC Educational Resources Information Center

    Babikian, Elijah

    1973-01-01

    Two methods of instruction (question-answer and question-incomplete-answer) are utilized in teaching a unit on thermal energy. Effectiveness, as measured by a posttest, confirmed significant differences in the two modes of instruction. (DF)

  19. Bacteriocidal activity of sanitizers against Enterococcus faecium attached to stainless steel as determined by plate count and impedance methods.

    PubMed

    Andrade, N J; Bridgeman, T A; Zottola, E A

    1998-07-01

    Enterococcus faecium attached to stainless steel chips (100 mm²) was treated with the following sanitizers: sodium hypochlorite, peracetic acid (PA), peracetic acid plus an organic acid (PAS), quaternary ammonium, organic acid, and anionic acid. The effectiveness of sanitizer solutions on planktonic cells (not attached) was evaluated by the Association of Official Analytical Chemists (AOAC) suspension test. The number of attached cells was determined by impedance measurement and plate count method after vortexing. The decimal reduction (DR) in numbers of the E. faecium population was determined for the three methods and was analyzed by analysis of variance (P < 0.05) using Statview software. The adhered cells were more resistant (P < 0.05) than nonadherent cells. The DR averages for all of the sanitizers for 30 s of exposure were 6.4, 2.2, and 2.5 for the AOAC suspension test, plate count method after vortexing, and impedance measurement, respectively. Plate count and impedance methods showed a difference (P < 0.05) after 30 s of sanitizer exposure but not after 2 min. The impedance measurement was the best method to measure adherent cells. Impedance measurement required the development of a quadratic regression. The equation developed from 82 samples is as follows: log CFU/chip = 0.2385T² - 0.96T + 9.35, r² = 0.92, P < 0.05, T = impedance detection time in hours. This method showed that the sanitizers PAS and PA were more effective against E. faecium than the other sanitizers. At 30 s, the impedance method recovered about 25 times more cells than the plate count method after vortexing. These data suggest that impedance measurement is the method of choice when evaluating the number of bacterial cells adhered to a surface.
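
    The reported calibration can be applied directly; a minimal sketch in Python:

```python
# Convert an impedance detection time (hours) to attached-cell counts using the
# quadratic calibration reported in the abstract:
# log CFU/chip = 0.2385*T^2 - 0.96*T + 9.35 (r^2 = 0.92).
def log_cfu_per_chip(detection_time_h):
    t = detection_time_h
    return 0.2385 * t**2 - 0.96 * t + 9.35

# Example: a detection time of 4 h corresponds to roughly 10**9.3 CFU per chip.
print(log_cfu_per_chip(4.0))   # ~9.33
```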

  20. Examination of the Measurement of Absorption Using the Reverberant Room Method for Highly Absorptive Acoustic Foam

    NASA Technical Reports Server (NTRS)

    Hughes, William O.; McNelis, Anne M.; Chris Nottoli; Eric Wolfram

    2015-01-01

    The absorption coefficient for material specimens are needed to quantify the expected acoustic performance of that material in its actual usage and environment. The ASTM C423-09a standard, "Standard Test Method for Sound Absorption and Sound Absorption Coefficients by the Reverberant Room Method" is often used to measure the absorption coefficient of material test specimens. This method has its basics in the Sabine formula. Although widely used, the interpretation of these measurements are a topic of interest. For example, in certain cases the measured Sabine absorption coefficients are greater than 1.0 for highly absorptive materials. This is often attributed to the diffraction edge effect phenomenon. An investigative test program to measure the absorption properties of highly absorbent melamine foam has been performed at the Riverbank Acoustical Laboratories. This paper will present and discuss the test results relating to the effect of the test materials' surface area, thickness and edge sealing conditions. A follow-on paper is envisioned that will present and discuss the results relating to the spacing between multiple piece specimens, and the mounting condition of the test specimen.
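
    For orientation, the Sabine relation that underlies the ASTM C423 evaluation reduces to a few lines; the room volume, specimen area and reverberation times below are hypothetical inputs, not data from this test program.

```python
# Sabine absorption coefficient from reverberant-room decay times (the relation
# underlying ASTM C423). Inputs are hypothetical, not data from the test program.
def sabine_absorption(room_volume_m3, specimen_area_m2, t60_empty_s, t60_with_specimen_s):
    """Added absorption A = 0.161 * V * (1/T_with - 1/T_empty) in metric sabins."""
    added_absorption = 0.161 * room_volume_m3 * (1.0 / t60_with_specimen_s
                                                 - 1.0 / t60_empty_s)
    return added_absorption / specimen_area_m2

# A highly absorptive foam can yield alpha > 1.0, which the abstract attributes
# to the diffraction edge effect rather than to physical absorption above 100%.
print(sabine_absorption(200.0, 6.69, 5.0, 2.0))   # ~1.44
```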

  1. A simple method for characterizing and engineering thermal relaxation of an optical microcavity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Weijian; Zhu, Jiangang; Özdemir, Şahin Kaya

    2016-08-08

    Thermal properties of a photonic resonator are determined not only by intrinsic properties of materials, such as thermo-optic coefficient, but also by the geometry and structure of the resonator. Techniques for characterization and measurement of thermal properties of individual photonic resonators will benefit numerous applications. In this work, we demonstrate a method to optically measure the thermal relaxation time and effective thermal conductance of a whispering gallery mode microcavity using the optothermal effect. Two nearby optical modes within the cavity are optically probed, which allows us to quantify the thermal relaxation process of the cavity by analyzing changes in the transmission spectra induced by the optothermal effect. We show that the effective thermal conductance can be experimentally deduced from the thermal relaxation measurement, and it can be tailored by changing the geometric parameters of the cavity. The experimental observations are in good agreement with the proposed analytical modeling. This method can be applied to various resonators in different forms.

  2. Automated general temperature correction method for dielectric soil moisture sensors

    NASA Astrophysics Data System (ADS)

    Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao

    2017-08-01

    An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements in local- to regional-scale soil moisture monitoring networks. These networks rely extensively on highly temperature-sensitive dielectric sensors because of their low cost, ease of use and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective in soil moisture monitoring networks with different sensor setups or networks that cover diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors that can be applied regardless of sensor type, climatic conditions and soil type, and without rainfall data. An automated general temperature correction method was developed by adapting temperature correction algorithms previously developed for time domain reflectometry (TDR) measurements to ThetaProbe ML2X, Stevens Hydra Probe II and Decagon Devices EC-TM sensor measurements. The procedure for removing rainy-day effects from the SWC data was automated by combining a statistical inference technique with the temperature correction algorithms. The temperature correction method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia. The soil moisture monitoring networks used in this study cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can successfully eliminate temperature effects from dielectric sensor measurements even without on-site rainfall data. Furthermore, it was found that the actual daily average SWC was distorted by the temperature sensitivity of the dielectric sensors, with an error comparable to the manufacturer's stated accuracy of ±1%.

  3. Methods for assessing the quality of data in public health information systems: a critical review.

    PubMed

    Chen, Hong; Yu, Ping; Hailey, David; Wang, Ning

    2014-01-01

    The quality of data in public health information systems can be ensured by effective data quality assessment. In order to conduct effective data quality assessment, measurable data attributes have to be precisely defined. Then reliable and valid measurement methods for data attributes have to be used to measure each attribute. We conducted a systematic review of data quality assessment methods for public health using major databases and well-known institutional websites. 35 studies were eligible for inclusion in the study. A total of 49 attributes of data quality were identified from the literature. Completeness, accuracy and timeliness were the three most frequently assessed attributes of data quality. Most studies directly examined data values. This is complemented by exploring either data users' perception or documentation quality. However, there are limitations of current data quality assessment methods: a lack of consensus on attributes measured; inconsistent definition of the data quality attributes; a lack of mixed methods for assessing data quality; and inadequate attention to reliability and validity. Removal of these limitations is an opportunity for further improvement.

  4. Environmental Chemicals in Urine and Blood: Improving Methods for Creatinine and Lipid Adjustment.

    PubMed

    O'Brien, Katie M; Upson, Kristen; Cook, Nancy R; Weinberg, Clarice R

    2016-02-01

    Investigators measuring exposure biomarkers in urine typically adjust for creatinine to account for dilution-dependent sample variation in urine concentrations. Similarly, it is standard to adjust for serum lipids when measuring lipophilic chemicals in serum. However, there is controversy regarding the best approach, and existing methods may not effectively correct for measurement error. We compared adjustment methods, including novel approaches, using simulated case-control data. Using a directed acyclic graph framework, we defined six causal scenarios for epidemiologic studies of environmental chemicals measured in urine or serum. The scenarios include variables known to influence creatinine (e.g., age and hydration) or serum lipid levels (e.g., body mass index and recent fat intake). Over a range of true effect sizes, we analyzed each scenario using seven adjustment approaches and estimated the corresponding bias and confidence interval coverage across 1,000 simulated studies. For urinary biomarker measurements, our novel method, which incorporates both covariate-adjusted standardization and the inclusion of creatinine as a covariate in the regression model, had low bias and possessed 95% confidence interval coverage of nearly 95% for most simulated scenarios. For serum biomarker measurements, a similar approach involving standardization plus serum lipid level adjustment generally performed well. To control measurement error bias caused by variations in serum lipids or by urinary diluteness, we recommend improved methods for standardizing exposure concentrations across individuals.
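
    A rough sketch of the two-part idea described for urinary biomarkers (covariate-adjusted standardization combined with creatinine as a regression covariate) is given below; the column names, the covariates and the logistic outcome model are assumptions for illustration, not the authors' simulation code.

```python
# Rough sketch: standardize the urinary chemical by covariate-predicted creatinine,
# then also keep creatinine as a covariate in the outcome model.
import statsmodels.formula.api as smf

def standardized_exposure(df):
    # Predict creatinine from its known determinants (here age and BMI, assumed
    # column names), then divide the measured chemical by the ratio of observed
    # to predicted creatinine.
    creat_model = smf.ols("creatinine ~ age + bmi", data=df).fit()
    predicted = creat_model.predict(df)
    return df["chemical"] / (df["creatinine"] / predicted)

def fit_outcome_model(df):
    df = df.assign(chem_std=standardized_exposure(df))
    # Case-control outcome model with the standardized exposure plus creatinine.
    return smf.logit("case ~ chem_std + creatinine + age + bmi", data=df).fit()
```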

  5. Odors and Air Pollution: A Bibliography with Abstracts.

    ERIC Educational Resources Information Center

    Environmental Protection Agency, Research Triangle Park, NC. Office of Air Programs.

    The annotated bibliography presents a compilation of abstracts which deal with odors as they relate to air pollution. The abstracts are arranged within the following categories: Emission sources; Control methods; Measurement methods; Air quality measurements; Atmospheric interaction; Basic science and technology; Effects-human health;…

  6. A Simple Method for Decreasing the Liquid Junction Potential in a Flow-through-Type Differential pH Sensor Probe Consisting of pH-FETs by Exerting Spatiotemporal Control of the Liquid Junction

    PubMed Central

    Yamada, Akira; Mohri, Satoshi; Nakamura, Michihiro; Naruse, Keiji

    2015-01-01

    The liquid junction potential (LJP), the phenomenon that occurs when two electrolyte solutions of different composition come into contact, prevents accurate measurements in potentiometry. The effect of the LJP is usually remarkable in measurements of diluted solutions with low buffering capacities or low ion concentrations. Our group has constructed a simple method to eliminate the LJP by exerting spatiotemporal control of a liquid junction (LJ) formed between two solutions, a sample solution and a baseline solution (BLS), in a flow-through-type differential pH sensor probe. The method was contrived based on microfluidics. The sensor probe is a differential measurement system composed of two ion-sensitive field-effect transistors (ISFETs) and one Ag/AgCl electrode. With our new method, the border region of the sample solution and BLS is vibrated in order to mix solutions and suppress the overshoot after the sample solution is suctioned into the sensor probe. Compared to the conventional method without vibration, our method shortened the settling time from over two min to 15 s and reduced the measurement error by 86% to within 0.060 pH. This new method will be useful for improving the response characteristics and decreasing the measurement error of many apparatuses that use LJs. PMID:25835300

  7. Accurate evaluation of fast threshold voltage shift for SiC MOS devices under various gate bias stress conditions

    NASA Astrophysics Data System (ADS)

    Sometani, Mitsuru; Okamoto, Mitsuo; Hatakeyama, Tetsuo; Iwahashi, Yohei; Hayashi, Mariko; Okamoto, Dai; Yano, Hiroshi; Harada, Shinsuke; Yonezawa, Yoshiyuki; Okumura, Hajime

    2018-04-01

    We investigated methods of measuring the threshold voltage (V th) shift of 4H-silicon carbide (SiC) metal–oxide–semiconductor field-effect transistors (MOSFETs) under positive DC, negative DC, and AC gate bias stresses. A fast measurement method for V th shift under both positive and negative DC stresses revealed the existence of an extremely large V th shift in the short-stress-time region. We then examined the effect of fast V th shifts on drain current (I d) changes within a pulse under AC operation. The fast V th shifts were suppressed by nitridation. However, the I d change within one pulse occurred even in commercially available SiC MOSFETs. The correlation between I d changes within one pulse and V th shifts measured by a conventional method is weak. Thus, a fast and in situ measurement method is indispensable for the accurate evaluation of I d changes under AC operation.

  8. [Measuring client satisfaction in youth mental health care: qualitative methods and satisfaction questionnaires].

    PubMed

    Vanderfaeillie, J; Stroobants, T; van West, D; Andries, C

    2015-01-01

    Quality youth care and decisions about youth care should ideally be based on a combination of empirical data, the clinical judgment of health professionals and the views and preferences of clients. Additionally, the treatment provided needs to fit in with the client's social and cultural background. Clients' views about their treatment are often collected via satisfaction measurements and particularly via satisfaction questionnaires. To make a critical analysis of the factors that determine both client satisfaction and the content of the satisfaction questionnaires used as a measurement method in youth care. We made a selective study of the relevant literature. Our results show that client satisfaction is not an indicator of the effectiveness of treatment and that the degree of client satisfaction varies according to the client's outlook and perspective. Apparently, there are many disadvantages of using questionnaires as a measurement method. For the collection of a young person's views, qualitative methods seem to be more effective than questionnaires.

  9. Direct measurement of sub-surface mass change using the variable-baseline gravity gradient method

    USGS Publications Warehouse

    Kennedy, Jeffrey; Ferré, Ty P.A.; Güntner, Andreas; Abe, Maiko; Creutzfeldt, Benjamin

    2014-01-01

    Time-lapse gravity data provide a direct, non-destructive method to monitor mass changes at scales from cm to km. But, the effectively infinite spatial sensitivity of gravity measurements can make it difficult to isolate the signal of interest. The variable-baseline gravity gradient method, based on the difference of measurements between two gravimeters, is an alternative to the conventional approach of individually modeling all sources of mass and elevation change. This approach can improve the signal-to-noise ratio for many applications by removing the contributions of Earth tides, loading, and other signals that have the same effect on both gravimeters. At the same time, this approach can focus the support volume within a relatively small user-defined region of the subsurface. The method is demonstrated using paired superconducting gravimeters to make for the first time a large-scale, non-invasive measurement of infiltration wetting front velocity and change in water content above the wetting front.
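
    The core of the variable-baseline gradient idea, differencing two nearby gravimeter records so that common-mode signals cancel, can be illustrated with synthetic series (all values below are invented for illustration).

```python
# Differencing two gravimeter records removes Earth tides, loading and other
# common-mode signals, leaving the local mass-change signal. Synthetic example.
import numpy as np

t = np.arange(0, 48, 0.1)                         # hours
tide = 50.0 * np.sin(2 * np.pi * t / 12.42)       # common-mode tidal signal (microGal)
local = 2.0 * (1 - np.exp(-t / 10.0))             # infiltration signal seen mainly by g1

g1 = tide + local + np.random.normal(0, 0.2, t.size)   # lower / near-field meter
g2 = tide + np.random.normal(0, 0.2, t.size)           # upper / far-field meter

baseline_m = 1.0                                   # separation of the two meters
gradient_change = (g1 - g2) / baseline_m           # tidal signal cancels in the difference
```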

  10. Evaluation of methods for measuring particulate matter emissions from gas turbines.

    PubMed

    Petzold, Andreas; Marsh, Richard; Johnson, Mark; Miller, Michael; Sevcenco, Yura; Delhaye, David; Ibrahim, Amir; Williams, Paul; Bauer, Heidi; Crayford, Andrew; Bachalo, William D; Raper, David

    2011-04-15

    The project SAMPLE evaluated methods for measuring particle properties in the exhaust of aircraft engines with respect to the development of standardized operation procedures for particulate matter measurement in aviation industry. Filter-based off-line mass methods included gravimetry and chemical analysis of carbonaceous species by combustion methods. Online mass methods were based on light absorption measurement or used size distribution measurements obtained from an electrical mobility analyzer approach. Number concentrations were determined using different condensation particle counters (CPC). Total mass from filter-based methods balanced gravimetric mass within 8% error. Carbonaceous matter accounted for 70% of gravimetric mass while the remaining 30% were attributed to hydrated sulfate and noncarbonaceous organic matter fractions. Online methods were closely correlated over the entire range of emission levels studied in the tests. Elemental carbon from combustion methods and black carbon from optical methods deviated by maximum 5% with respect to mass for low to medium emission levels, whereas for high emission levels a systematic deviation between online methods and filter based methods was found which is attributed to sampling effects. CPC based instruments proved highly reproducible for number concentration measurements with a maximum interinstrument standard deviation of 7.5%.

  11. Improvement of the radiographic method for measurement of effective energy of pulsed X-ray emission from a PF device for different anode's insert materials.

    PubMed

    Miremad, Seyed Milad; Shirani, Babak

    2018-06-01

    In this paper, the effective energy of pulsed X-rays emitted from a Mather-type plasma focus device at a stored energy of 2.5 kJ was measured for six different anode insert materials using the radiographic method with attenuation filters. Since the intensity and energy of the X-ray beam changed considerably with the insert material, the method was improved by using different filters simultaneously in all the experiments and selecting the best filter in each experiment according to appropriate criteria. The effective energy of the pulsed X-ray beam was measured to be 16, 28, 50, 51, 34 and 44 keV when aluminum, copper, zinc, tin, tungsten and lead were used as insert materials, with aluminum, copper, silver, silver, copper and lead used as filters, respectively. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Speech Effort Measurement and Stuttering: Investigating the Chorus Reading Effect

    ERIC Educational Resources Information Center

    Ingham, Roger J.; Warner, Allison; Byrd, Anne; Cotton, John

    2006-01-01

    Purpose: The purpose of this study was to investigate chorus reading's (CR's) effect on speech effort during oral reading by adult stuttering speakers and control participants. The effect of a speech effort measurement highlighting strategy was also investigated. Method: Twelve persistent stuttering (PS) adults and 12 normally fluent control…

  13. Research on effects of baffle position in an integrating sphere on the luminous flux measurement

    NASA Astrophysics Data System (ADS)

    Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei; Xia, Ming

    2016-09-01

    In optical metrology, luminous flux is an important index for characterizing the quality of an electric light source. Currently, most luminous flux measurements are based on the integrating sphere method, so the measurement accuracy of the integrating sphere is the key factor. Many factors affect the measurement accuracy, such as the coating, the power and the position of the light source; the baffle, a key component of the integrating sphere, also has an important effect on the measurement results. This paper analyzes in detail the principle of an ideal integrating sphere. We use a moving rail to change the relative position of the baffle and the light source inside the sphere. Measured luminous flux values at different distances between the light source and the baffle are obtained experimentally and used to analyze the effect of baffle position on the measurement. By theoretical calculation, computer simulation and experiment, we obtain the optimum baffle position for luminous flux measurements. Based on an analysis of the overall luminous flux measurement error, we develop methods and apparatus to improve the accuracy and reliability of luminous flux measurement. This makes the unification and transfer of the luminous flux unit in East China more accurate and provides an effective safeguard for our traceability system.

  14. Self-informant Agreement for Personality and Evaluative Person Descriptors: Comparing Methods for Creating Informant Measures.

    PubMed

    Simms, Leonard J; Zelazny, Kerry; Yam, Wern How; Gros, Daniel F

    2010-05-01

    Little attention typically is paid to the way self-report measures are translated for use in self-informant agreement studies. We studied two possible methods for creating informant measures: (a) the traditional method in which self-report items were translated from the first- to the third-person and (b) an alternative meta-perceptual method in which informants were directed to rate their perception of the targets' self-perception. We hypothesized that the latter method would yield stronger self-informant agreement for evaluative personality dimensions measured by indirect item markers. We studied these methods in a sample of 303 undergraduate friendship dyads. Results revealed mean-level differences between methods, similar self-informant agreement across methods, stronger agreement for Big Five dimensions than for evaluative dimensions, and incremental validity for meta-perceptual informant rating methods. Limited power reduced the interpretability of several sparse acquaintanceship effects. We conclude that traditional informant methods are appropriate for most personality traits, but meta-perceptual methods may be more appropriate when personality questionnaire items reflect indirect indicators of the trait being measured, which is particularly likely for evaluative traits.

  15. Self-informant Agreement for Personality and Evaluative Person Descriptors: Comparing Methods for Creating Informant Measures

    PubMed Central

    Simms, Leonard J.; Zelazny, Kerry; Yam, Wern How; Gros, Daniel F.

    2011-01-01

    Little attention typically is paid to the way self-report measures are translated for use in self-informant agreement studies. We studied two possible methods for creating informant measures: (a) the traditional method in which self-report items were translated from the first- to the third-person and (b) an alternative meta-perceptual method in which informants were directed to rate their perception of the targets’ self-perception. We hypothesized that the latter method would yield stronger self-informant agreement for evaluative personality dimensions measured by indirect item markers. We studied these methods in a sample of 303 undergraduate friendship dyads. Results revealed mean-level differences between methods, similar self-informant agreement across methods, stronger agreement for Big Five dimensions than for evaluative dimensions, and incremental validity for meta-perceptual informant rating methods. Limited power reduced the interpretability of several sparse acquaintanceship effects. We conclude that traditional informant methods are appropriate for most personality traits, but meta-perceptual methods may be more appropriate when personality questionnaire items reflect indirect indicators of the trait being measured, which is particularly likely for evaluative traits. PMID:21541262

  16. Estimation of CO2 emissions from waste incinerators: Comparison of three methods.

    PubMed

    Lee, Hyeyoung; Yi, Seung-Muk; Holsen, Thomas M; Seo, Yong-Seok; Choi, Eunhwa

    2018-03-01

    Climate-relevant CO2 emissions from waste incineration were compared using three methods: making use of CO2 concentration data, converting O2 concentration and waste characteristic data, and using a mass balance method following Intergovernmental Panel on Climate Change (IPCC) guidelines. For the first two methods, CO2 and O2 concentrations were measured continuously from 24 to 86 days. The O2 conversion method in comparison to the direct CO2 measurement method had a 4.8% mean difference in daily CO2 emissions for four incinerators where analyzed waste composition data were available. However, the IPCC method had a higher difference of 13% relative to the direct CO2 measurement method. For three incinerators using designed values for waste composition, the O2 conversion and IPCC methods in comparison to the direct CO2 measurement method had mean differences of 7.5% and 89%, respectively. Therefore, the use of O2 concentration data measured for monitoring air pollutant emissions is an effective method for estimating CO2 emissions resulting from waste incineration. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Removal of batch effects using distribution-matching residual networks.

    PubMed

    Shaham, Uri; Stanton, Kelly P; Zhao, Jun; Li, Huamin; Raddassi, Khadir; Montgomery, Ruth; Kluger, Yuval

    2017-08-15

    Sources of variability in experimentally derived data include measurement error in addition to the physical phenomena of interest. This measurement error is a combination of systematic components originating from the measuring instrument and random measurement errors. Several novel biological technologies, such as mass cytometry and single-cell RNA-seq (scRNA-seq), are plagued with systematic errors that may severely affect statistical analysis if the data are not properly calibrated. We propose a novel deep learning approach for removing systematic batch effects. Our method is based on a residual neural network, trained to minimize the Maximum Mean Discrepancy between the multivariate distributions of two replicates, measured in different batches. We apply our method to mass cytometry and scRNA-seq datasets, and demonstrate that it effectively attenuates batch effects. Our codes and data are publicly available at https://github.com/ushaham/BatchEffectRemoval.git. Contact: yuval.kluger@yale.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
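
    The training target described, the Maximum Mean Discrepancy between two batches, can be sketched with a generic Gaussian-kernel estimator; this is an illustrative sketch, not the code released with the paper.

```python
# Generic (biased) Gaussian-kernel estimator of the squared Maximum Mean
# Discrepancy between two batches, the quantity the residual network is trained
# to minimize.
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Squared MMD between samples x (n, d) and y (m, d)."""
    return (gaussian_kernel(x, x, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean())
```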

  18. A simple and cost-effective method for cable root detection and extension measurement in estuary wetland forests

    NASA Astrophysics Data System (ADS)

    Vovides, Alejandra G.; Marín-Castro, Beatriz; Barradas, Guadalupe; Berger, Uta; López-Portillo, Jorge

    2016-12-01

    This work presents the development of a low-cost method to measure the length of cable roots of black mangrove (Avicennia germinans) trees in order to define the boundaries of the central part of the anchoring root system (CPRS) without the need to fully expose the root systems. The method was tested for locating shallow woody root systems and measuring their length. An ultrasonic Doppler fetal monitor (UD) and a stock of steel rods (SR) were used to probe root locations without removing sediments from the surface, measure root length and estimate root-soil plate dimensions. The method was validated by comparing the measurements with root lengths obtained by direct measurement of excavated cable roots and with root-soil plate radii (the exposed root-soil material when a tree tips over) of five up-rooted trees with stem diameters (D130) ranging between 10 and 50 cm. The mean CPRS radius estimated with the Doppler was directly correlated with tree stem diameter and was not significantly different from the mean root-soil plate radius measured from up-rooted trees or from the CPRS approximated by digging trenches. Our method proved effective and reliable in following cable roots for large numbers of both black and white mangrove trees. In 40 days of work, three people were able to measure 648 roots belonging to 81 trees, of which 37% were found grafted to other tree roots. This simple method can help follow shallow root systems with minimal impact and map root connection networks of grafted trees.

  19. Rigorous evaluation of chemical measurement uncertainty: liquid chromatographic analysis methods using detector response factor calibration

    NASA Astrophysics Data System (ADS)

    Toman, Blaza; Nelson, Michael A.; Bedner, Mary

    2017-06-01

    Chemical measurement methods are designed to promote accurate knowledge of a measurand or system. As such, these methods often allow elicitation of latent sources of variability and correlation in experimental data. They typically implement measurement equations that support quantification of effects associated with calibration standards and other known or observed parametric variables. Additionally, multiple samples and calibrants are usually analyzed to assess accuracy of the measurement procedure and repeatability by the analyst. Thus, a realistic assessment of uncertainty for most chemical measurement methods is not purely bottom-up (based on the measurement equation) or top-down (based on the experimental design), but inherently contains elements of both. Confidence in results must be rigorously evaluated for the sources of variability in all of the bottom-up and top-down elements. This type of analysis presents unique challenges due to various statistical correlations among the outputs of measurement equations. One approach is to use a Bayesian hierarchical (BH) model which is intrinsically rigorous, thus making it a straightforward method for use with complex experimental designs, particularly when correlations among data are numerous and difficult to elucidate or explicitly quantify. In simpler cases, careful analysis using GUM Supplement 1 (MC) methods augmented with random effects meta analysis yields similar results to a full BH model analysis. In this article we describe both approaches to rigorous uncertainty evaluation using as examples measurements of 25-hydroxyvitamin D3 in solution reference materials via liquid chromatography with UV absorbance detection (LC-UV) and liquid chromatography mass spectrometric detection using isotope dilution (LC-IDMS).
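
    The Monte Carlo (GUM Supplement 1) part of such an evaluation can be illustrated generically: sample the input quantities, propagate them through the measurement equation, and summarize the output distribution. The single-point calibration equation and all numbers below are placeholders, not the LC-UV or LC-IDMS models from the paper.

```python
# Generic GUM Supplement 1 style Monte Carlo propagation: sample the inputs,
# push them through the measurement equation, summarize the output.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

area_sample   = rng.normal(1.52e5, 1.2e3, n)   # detector response of the sample
area_standard = rng.normal(1.48e5, 1.1e3, n)   # detector response of the calibrant
conc_standard = rng.normal(25.0, 0.10, n)      # calibrant concentration (ng/g)

# Placeholder single-point calibration (response-factor) measurement equation.
conc_sample = conc_standard * area_sample / area_standard

print(f"estimate = {conc_sample.mean():.3f} ng/g, "
      f"u = {conc_sample.std(ddof=1):.3f} ng/g, "
      f"95% interval = {np.percentile(conc_sample, [2.5, 97.5])}")
```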

  20. A new experimental method for the determination of the effective orifice area based on the acoustical source term

    NASA Astrophysics Data System (ADS)

    Kadem, L.; Knapp, Y.; Pibarot, P.; Bertrand, E.; Garcia, D.; Durand, L. G.; Rieu, R.

    2005-12-01

    The effective orifice area (EOA) is the most commonly used parameter to assess the severity of aortic valve stenosis as well as the performance of valve substitutes. Particle image velocimetry (PIV) may be used for in vitro estimation of valve EOA. In the present study, we propose a new and simple method based on Howe’s developments of Lighthill’s aero-acoustic theory. This method is based on an acoustical source term (AST) to estimate the EOA from the transvalvular flow velocity measurements obtained by PIV. The EOAs measured by the AST method downstream of three sharp-edged orifices were in excellent agreement with the EOAs predicted from the potential flow theory used as the reference method in this study. Moreover, the AST method was more accurate than other conventional PIV methods based on streamlines, inflexion point or vorticity to predict the theoretical EOAs. The superiority of the AST method is likely due to the nonlinear form of the AST. There was also an excellent agreement between the EOAs measured by the AST method downstream of the three sharp-edged orifices as well as downstream of a bioprosthetic valve with those obtained by the conventional clinical method based on Doppler-echocardiographic measurements of transvalvular velocity. The results of this study suggest that this new simple PIV method provides an accurate estimation of the aortic valve flow EOA. This new method may thus be used as a reference method to estimate the EOA in experimental investigation of the performance of valve substitutes and to validate Doppler-echocardiographic measurements under various physiologic and pathologic flow conditions.

  1. Gas concentration measurement instrument based on the effects of a wave-mixing interference on stimulated emissions

    DOEpatents

    Garrett, W. Ray

    1997-01-01

    A method and apparatus for measuring partial pressures of gaseous components within a mixture. The apparatus comprises generally at least one tunable laser source, a beam splitter, mirrors, an optical filter, an optical spectrometer, and a data recorder. Measured in the forward direction along the path of the laser, the intensity of the emission spectra of the gaseous component, at wavelengths characteristic of the gas component being measured, is suppressed. Measured in the backward direction, the peak intensities characteristic of a given gaseous component will be wavelength shifted. These effects on peak intensity wavelengths are linearly dependent on the partial pressure of the compound being measured, but independent of the partial pressures of other gases which are present within the sample. The method and apparatus allow for efficient measurement of gaseous components.

  2. Gas concentration measurement instrument based on the effects of a wave-mixing interference on stimulated emissions

    DOEpatents

    Garrett, W.R.

    1997-11-11

    A method and apparatus are disclosed for measuring partial pressures of gaseous components within a mixture. The apparatus comprises generally at least one tunable laser source, a beam splitter, mirrors, an optical filter, an optical spectrometer, and a data recorder. Measured in the forward direction along the path of the laser, the intensity of the emission spectra of the gaseous component, at wavelengths characteristic of the gas component being measured, is suppressed. Measured in the backward direction, the peak intensities characteristic of a given gaseous component will be wavelength shifted. These effects on peak intensity wavelengths are linearly dependent on the partial pressure of the compound being measured, but independent of the partial pressures of other gases which are present within the sample. The method and apparatus allow for efficient measurement of gaseous components. 9 figs.

  3. Method of accurate thickness measurement of boron carbide coating on copper foil

    DOEpatents

    Lacy, Jeffrey L.; Regmi, Murari

    2017-11-07

    A method is disclosed of measuring the thickness of a thin coating on a substrate comprising dissolving the coating and substrate in a reagent and using the post-dissolution concentration of the coating in the reagent to calculate an effective thickness of the coating. The preferred method includes measuring non-conducting films on flexible and rough substrates, but other kinds of thin films can be measured by matching a reliable film-substrate dissolution technique. One preferred method includes determining the thickness of boron carbide films deposited on copper foil. The preferred method uses a standard technique known as inductively coupled plasma optical emission spectroscopy (ICPOES) to measure boron concentration in a liquid sample prepared by dissolving the boron carbide films and the copper substrates, preferably using a chemical etch known as ceric ammonium nitrate (CAN). The effective coating thickness can then be calculated from the measured boron concentration values.
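
    The back-calculation from a measured boron concentration to an effective film thickness is a simple mass balance; a sketch follows, in which the foil area, solution volume, B4C density and boron mass fraction are illustrative assumptions rather than values from the patent.

```python
# Mass-balance back-calculation from an ICP-OES boron concentration to an
# effective boron carbide film thickness. All parameter values are illustrative.
def b4c_thickness_um(boron_conc_mg_per_l, solution_volume_l, foil_area_cm2,
                     b4c_density_g_cm3=2.52, boron_mass_fraction=0.783):
    boron_mass_g = boron_conc_mg_per_l * solution_volume_l / 1000.0
    b4c_mass_g = boron_mass_g / boron_mass_fraction       # total dissolved B4C
    thickness_cm = b4c_mass_g / (b4c_density_g_cm3 * foil_area_cm2)
    return thickness_cm * 1.0e4                           # micrometres

print(b4c_thickness_um(boron_conc_mg_per_l=5.0, solution_volume_l=0.05,
                       foil_area_cm2=10.0))               # ~0.13 um
```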

  4. Measuring bi-directional current through a field-effect transistor by virtue of drain-to-source voltage measurement

    DOEpatents

    Turner, Steven Richard

    2006-12-26

    A method and apparatus for measuring current, and particularly bi-directional current, in a field-effect transistor (FET) using drain-to-source voltage measurements. The drain-to-source voltage of the FET is measured and amplified. This signal is then compensated for variations in the temperature of the FET, which affects the impedance of the FET when it is switched on. The output is a signal representative of the direction of the flow of current through the field-effect transistor and the level of the current through the field-effect transistor. Preferably, the measurement only occurs when the FET is switched on.
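
    The underlying relation is Ohmic: with the FET switched on, the drain current is the measured drain-to-source voltage divided by a temperature-corrected on-resistance, and the sign of the voltage gives the current direction. The reference resistance and temperature coefficient in the sketch below are hypothetical placeholders.

```python
# Bi-directional current from a drain-to-source voltage measurement:
# I = Vds / Rds(on), with Rds(on) corrected for temperature. Parameter values
# are hypothetical, not from the patent.
def drain_current_a(v_ds_v, temp_c, rds_on_25c_ohm=0.010, temp_coeff_per_c=0.006):
    rds_on = rds_on_25c_ohm * (1.0 + temp_coeff_per_c * (temp_c - 25.0))
    return v_ds_v / rds_on     # sign of Vds gives the direction of current flow

print(drain_current_a(v_ds_v=-0.025, temp_c=60.0))   # negative => reverse current
```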

  5. Note: Comparison experimental results of the laser heterodyne interferometer for angle measurement based on the Faraday effect.

    PubMed

    Zhang, Enzheng; Chen, Benyong; Zheng, Hao; Teng, Xueying; Yan, Liping

    2018-04-01

    A laser heterodyne interferometer for angle measurement based on the Faraday effect is proposed. A novel optical configuration, designed by using the orthogonal return method for a linearly polarized beam based on the Faraday effect, guarantees that the measurement beam can return effectively even though an angular reflector has a large lateral displacement movement. The optical configuration and measurement principle are presented in detail. Two verification experiments were performed; the experimental results show that the proposed interferometer can achieve a large lateral displacement tolerance of 7.4 mm and also can realize high precision angle measurement with a large measurement range.

  6. Note: Comparison experimental results of the laser heterodyne interferometer for angle measurement based on the Faraday effect

    NASA Astrophysics Data System (ADS)

    Zhang, Enzheng; Chen, Benyong; Zheng, Hao; Teng, Xueying; Yan, Liping

    2018-04-01

    A laser heterodyne interferometer for angle measurement based on the Faraday effect is proposed. A novel optical configuration, designed by using the orthogonal return method for a linearly polarized beam based on the Faraday effect, guarantees that the measurement beam can return effectively even though an angular reflector has a large lateral displacement movement. The optical configuration and measurement principle are presented in detail. Two verification experiments were performed; the experimental results show that the proposed interferometer can achieve a large lateral displacement tolerance of 7.4 mm and also can realize high precision angle measurement with a large measurement range.

  7. Evaluation of effective energy for QA and QC: measurement of half-value layer using radiochromic film density.

    PubMed

    Gotanda, T; Katsuda, T; Gotanda, R; Tabuchi, A; Yamamoto, K; Kuwano, T; Yatake, H; Takeda, Y

    2009-03-01

    The effective energy of diagnostic X-rays is important for quality assurance (QA) and quality control (QC). However, the half-value layer (HVL), which is necessary to evaluate the effective energy, is not ubiquitously monitored because ionization-chamber dosimetry is time-consuming and complicated. To verify the applicability of GAFCHROMIC XR type R (GAF-R) film for HVL measurement as an alternative to monitoring with an ionization chamber, a single-strip method for measuring the HVL has been evaluated. Calibration curves of absorbed dose versus film density were generated using this single-strip method with GAF-R film, and the coefficient of determination (r2) of the straight-line approximation was evaluated. The HVLs (effective energies) estimated using the GAF-R film and an ionization chamber were compared. The coefficient of determination (r2) of the straight-line approximation obtained with the GAF-R film was more than 0.99. The effective energies (HVLs) evaluated using the GAF-R film and the ionization chamber were 43.25 keV (5.10 mm) and 39.86 keV (4.45 mm), respectively. The difference in the effective energies determined by the two methods was thus 8.5%. These results suggest that GAF-R might be used to evaluate the effective energy from the film-density growth without the need for ionization-chamber measurements.

  8. Lies, Damn Lies, and Statistics Revisited: A Comparison of Three Methods of Representing Change. AIR 1991 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Pike, Gary R.

    Because change is fundamental to education and the measurement of change assesses the quality and effectiveness of postsecondary education, this study examined three methods of measuring change: (1) gain scores; (2) residual scores; and (3) repeated measures. Data for the study was obtained from transcripts of 722 graduating seniors at the…

  9. Single-case synthesis tools II: Comparing quantitative outcome measures.

    PubMed

    Zimmerman, Kathleen N; Pustejovsky, James E; Ledford, Jennifer R; Barton, Erin E; Severini, Katherine E; Lloyd, Blair P

    2018-03-07

    Varying methods for evaluating the outcomes of single case research designs (SCD) are currently used in reviews and meta-analyses of interventions. Quantitative effect size measures are often presented alongside visual analysis conclusions. Six measures across two classes, overlap measures (percentage of non-overlapping data, improvement rate difference [IRD], and Tau) and parametric within-case effect sizes (standardized mean difference and log response ratio [increasing and decreasing]), were compared to determine whether the choice of synthesis method within and across classes impacts conclusions regarding effectiveness. The effectiveness of sensory-based interventions (SBI), a commonly used class of treatments for young children, was evaluated. Separately from evaluations of rigor and quality, the authors evaluated behavior change between baseline and SBI conditions. SBI were unlikely to result in positive behavior change across all measures except IRD. However, subgroup analyses resulted in variable conclusions, indicating that the choice of measures for SCD meta-analyses can impact conclusions. Suggestions for using the log response ratio in SCD meta-analyses and considerations for understanding variability in SCD meta-analysis conclusions are discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.
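
    Two of the compared quantitative measures, the log response ratio (decreasing form) and the percentage of non-overlapping data, reduce to a few lines of code; the phase data below are hypothetical.

```python
# Two single-case outcome measures: the log response ratio (decreasing form)
# and the percentage of non-overlapping data (PND). Phase data are hypothetical.
import numpy as np

def log_response_ratio_decreasing(baseline, treatment):
    """LRR-d: positive values indicate the behaviour decreased during treatment."""
    return np.log(np.mean(baseline)) - np.log(np.mean(treatment))

def pnd_decreasing(baseline, treatment):
    """Share of treatment-phase points below the lowest baseline point."""
    return np.mean(np.array(treatment) < np.min(baseline)) * 100.0

baseline = [8, 9, 7, 10]
treatment = [6, 5, 7, 4, 5]
print(log_response_ratio_decreasing(baseline, treatment),   # ~0.45
      pnd_decreasing(baseline, treatment))                   # 80.0
```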

  10. Numerical simulations to assess the tracer dilution method for measurement of landfill methane emissions.

    PubMed

    Taylor, Diane M; Chow, Fotini K; Delkash, Madjid; Imhoff, Paul T

    2016-10-01

    Landfills are a significant contributor to anthropogenic methane emissions, but measuring these emissions can be challenging. This work uses numerical simulations to assess the accuracy of the tracer dilution method, which is used to estimate landfill emissions. Atmospheric dispersion simulations with the Weather Research and Forecast model (WRF) are run over Sandtown Landfill in Delaware, USA, using observation data to validate the meteorological model output. A steady landfill methane emissions rate is used in the model, and methane and tracer gas concentrations are collected along various transects downwind from the landfill for use in the tracer dilution method. The calculated methane emissions are compared to the methane emissions rate used in the model to find the percent error of the tracer dilution method for each simulation. The roles of different factors are examined: measurement distance from the landfill, transect angle relative to the wind direction, speed of the transect vehicle, tracer placement relative to the hot spot of methane emissions, complexity of topography, and wind direction. Results show that percent error generally decreases with distance from the landfill, where the tracer and methane plumes become well mixed. Tracer placement has the largest effect on percent error, and topography and wind direction both have significant effects, with measurement errors ranging from -12% to 42% over all simulations. Transect angle and transect speed have small to negligible effects on the accuracy of the tracer dilution method. These tracer dilution method simulations provide insight into measurement errors that might occur in the field, enhance understanding of the method's limitations, and aid interpretation of field data. Copyright © 2016 Elsevier Ltd. All rights reserved.
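
    The tracer dilution calculation itself is a ratio of plume-integrated, background-corrected concentrations scaled by the known tracer release rate and the molar-mass ratio; the sketch below uses acetylene as an example tracer and placeholder numbers.

```python
# Tracer dilution calculation: methane emission rate is the known tracer release
# rate scaled by the ratio of plume-integrated, background-corrected mixing
# ratios and the molar-mass ratio. Acetylene is used here as an example tracer.
import numpy as np

MW_CH4, MW_C2H2 = 16.04, 26.04   # g/mol

def tracer_dilution_emission(ch4_ppb, tracer_ppb, ch4_bg_ppb, tracer_bg_ppb,
                             tracer_release_kg_h):
    # Plume-integrated excess; uniform transect spacing cancels in the ratio.
    ch4_excess = (np.asarray(ch4_ppb) - ch4_bg_ppb).sum()
    tracer_excess = (np.asarray(tracer_ppb) - tracer_bg_ppb).sum()
    return tracer_release_kg_h * (ch4_excess / tracer_excess) * (MW_CH4 / MW_C2H2)
```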

  11. Global self-esteem and method effects: competing factor structures, longitudinal invariance, and response styles in adolescents.

    PubMed

    Urbán, Róbert; Szigeti, Réka; Kökönyei, Gyöngyi; Demetrovics, Zsolt

    2014-06-01

    The Rosenberg Self-Esteem Scale (RSES) is a widely used measure for assessing self-esteem, but its factor structure is debated. Our goals were to compare 10 alternative models for the RSES and to quantify and predict the method effects. This sample involves two waves (N =2,513 9th-grade and 2,370 10th-grade students) from five waves of a school-based longitudinal study. The RSES was administered in each wave. The global self-esteem factor with two latent method factors yielded the best fit to the data. The global factor explained a large amount of the common variance (61% and 46%); however, a relatively large proportion of the common variance was attributed to the negative method factor (34 % and 41%), and a small proportion of the common variance was explained by the positive method factor (5% and 13%). We conceptualized the method effect as a response style and found that being a girl and having a higher number of depressive symptoms were associated with both low self-esteem and negative response style, as measured by the negative method factor. Our study supported the one global self-esteem construct and quantified the method effects in adolescents.

  12. Global self-esteem and method effects: competing factor structures, longitudinal invariance and response styles in adolescents

    PubMed Central

    Urbán, Róbert; Szigeti, Réka; Kökönyei, Gyöngyi; Demetrovics, Zsolt

    2013-01-01

    The Rosenberg Self-Esteem Scale (RSES) is a widely used measure for assessing self-esteem, but its factor structure is debated. Our goals were to compare 10 alternative models for the RSES and to quantify and predict the method effects. This sample involves two waves (N=2513 ninth-grade and 2370 tenth-grade students) from five waves of a school-based longitudinal study. The RSES was administered in each wave. The global self-esteem factor with two latent method factors yielded the best fit to the data. The global factor explained a large amount of the common variance (61% and 46%); however, a relatively large proportion of the common variance was attributed to the negative method factor (34% and 41%), and a small proportion of the common variance was explained by the positive method factor (5% and 13%). We conceptualized the method effect as a response style, and found that being a girl and having a higher number of depressive symptoms were associated with both low self-esteem and negative response style, as measured by the negative method factor. Our study supported the one global self-esteem construct and quantified the method effects in adolescents. PMID:24061931

  13. Effective method of measuring the radioactivity of [ 131I]‐capsule prior to radioiodine therapy with significant reduction of the radiation exposure to the medical staff

    PubMed Central

    Lützen, Ulf; Zhao, Yi; Marx, Marlies; Imme, Thea; Assam, Isong; Siebert, Frank‐Andre; Culman, Juraj

    2016-01-01

    Radiation Protection in Radiology, Nuclear Medicine and Radio Oncology is of the utmost importance. Radioiodine therapy is a frequently used and effective method for the treatment of thyroid disease. Prior to each therapy the radioactivity of the [131I]‐capsule must be determined to prevent misadministration. This leads to a significant radiation exposure to the staff. We describe an alternative method, allowing a considerable reduction of the radiation exposure. Two [131I]‐capsules (A01=2818.5; A02=73.55.0 MBq) were measured multiple times in their own delivery lead containers, that is to say, the [131I]‐capsules remained inside the containers during the measurements (shielded measurement), using a dose calibrator and a well‐type and a thyroid uptake probe. The results of the shielded measurements were correlated linearly with the [131I]‐capsules' radioactivity to create calibration curves for the devices used. Additional radioactivity measurements of 50 [131I]‐capsules of different radioactivities were done to validate the shielded measuring method. The personal skin dose rate (HP(0.07)) was determined using calibrated thermoluminescent dosimeters. The determination coefficients for the calibration curves were R² > 0.9980 for all devices. The relative uncertainty of the shielded measurement was <6.8%. At a distance of 10 cm from the unshielded capsule the HP(0.07) was 46.18 μSv/(GBq⋅s), and on the surface of the lead container containing the [131I]‐capsule the HP(0.07) was 2.99 and 0.27 μSv/(GBq⋅s) for the two container sizes used. The calculated reduction of the effective dose by using the shielded measuring method was, depending on the container size used, 74.0% and 97.4%, compared to the measurement of the unshielded [131I]‐capsule using a dose calibrator. The measured reduction of the effective radiation dose in practice was 56.6% and 94.9% for size I and size II containers. The shielded [131I]‐capsule measurement reduces the radiation exposure to the staff significantly and offers the same accuracy as the unshielded measurement in the same amount of time. In order to maintain the consistency of the measuring method, monthly tests have to be done by measuring a [131I]‐capsule with known radioactivity. PACS number(s): 93.85.Np, 92.20.Td, 87.50.yk, 87.53.Bn PMID:27455475

  14. Volumetric error modeling, identification and compensation based on screw theory for a large multi-axis propeller-measuring machine

    NASA Astrophysics Data System (ADS)

    Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu

    2018-05-01

    Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.

  15. Advanced Recording and Preprocessing of Physiological Signals. [data processing equipment for flow measurement of blood by ultrasonics]

    NASA Technical Reports Server (NTRS)

    Bentley, P. B.

    1975-01-01

    The measurement of the volume flow-rate of blood in an artery or vein requires both an estimate of the flow velocity and its spatial distribution and the corresponding cross-sectional area. Transcutaneous measurements of these parameters can be performed using ultrasonic techniques analogous to the radar measurement of moving objects. Modern digital data recording and preprocessing methods were applied to the measurement of blood-flow velocity by means of the CW Doppler ultrasonic technique. Only the average flow velocity was measured, and no distribution or size information was obtained. Evaluations of current flowmeter design and performance, ultrasonic transducer fabrication methods, and other related items are given. The main thrust was the development of effective data-handling and processing methods through the application of modern digital techniques. The evaluation resulted in useful improvements in both the flowmeter instrumentation and the ultrasonic transducers. Effective digital processing algorithms that provided enhanced blood-flow measurement accuracy and sensitivity were developed. Block diagrams illustrating the equipment setup are included.
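
    As a worked illustration of the CW Doppler principle referred to above, the flow velocity follows from the measured Doppler shift through the standard textbook relation; this is not code from the report, and the carrier frequency, shift and angle below are assumed values.

        import math

        def doppler_velocity(f_shift_hz, f_carrier_hz, sound_speed_m_s=1540.0, angle_deg=60.0):
            """Blood velocity from a CW Doppler shift: v = f_d * c / (2 * f0 * cos(theta))."""
            return f_shift_hz * sound_speed_m_s / (2.0 * f_carrier_hz * math.cos(math.radians(angle_deg)))

        # Example: 1.3 kHz shift at a 5 MHz carrier with a 60-degree insonation angle
        print(f"{doppler_velocity(1300.0, 5e6):.3f} m/s")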

  16. Effective Porosity Measurements by Wet- and Dry-type Vacuum Saturations using Process-Programmable Vacuum Saturation System

    NASA Astrophysics Data System (ADS)

    Lee, T. J.; Lee, K. S.; Lee, S. K.

    2017-12-01

    One of the most important factors in measuring effective porosity by the vacuum saturation method is that the air in the pore space must be fully replaced by water during the vacuum saturation process. The International Society for Rock Mechanics (ISRM) suggests vacuuming a rock sample submerged in water, while the American Society for Testing and Materials (ASTM) suggests vacuuming the sample and the water separately and then pouring the water onto the sample. In this study, we call the former the wet-type vacuum saturation (WVS) method and the latter the dry-type vacuum saturation (DVS) method, and compare the effective porosities measured by the two different vacuum saturation processes. For that purpose, a vacuum saturation system has been developed which can support both WVS and DVS simply by reprogramming the process. Comparisons of effective porosity have been made for a cement mortar and for rock samples. As a result, DVS can replace more of the void volume with water than WVS, which in turn suggests that DVS provides a more accurate value of effective porosity than WVS.
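
    For reference, effective porosity from a vacuum saturation test is typically computed from the dry and saturated sample masses and the bulk volume; the sketch below uses that standard relation with made-up sample values, not data from this study.

        # Effective porosity from a (wet- or dry-type) vacuum saturation test:
        # phi_eff = (m_saturated - m_dry) / (rho_water * V_bulk)
        def effective_porosity(m_dry_g, m_sat_g, bulk_volume_cm3, rho_water_g_cm3=0.998):
            pore_volume_cm3 = (m_sat_g - m_dry_g) / rho_water_g_cm3
            return pore_volume_cm3 / bulk_volume_cm3

        # Hypothetical cement-mortar plug: 50 cm3 bulk volume, 7.2 g of water absorbed
        print(f"phi_eff = {effective_porosity(110.0, 117.2, 50.0):.3f}")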

  17. Calculating the trap density of states in organic field-effect transistors from experiment: A comparison of different methods

    NASA Astrophysics Data System (ADS)

    Kalb, Wolfgang L.; Batlogg, Bertram

    2010-01-01

    The spectral density of localized states in the band gap of pentacene (trap DOS) was determined with a pentacene-based thin-film transistor from measurements of the temperature dependence and gate-voltage dependence of the contact-corrected field-effect conductivity. Several analytical methods to calculate the trap DOS from the measured data were used to clarify whether the different methods lead to comparable results. We also used computer simulations to further test the results from the analytical methods. Most methods predict a trap DOS close to the valence-band edge that can be very well approximated by a single exponential function with a slope in the range of 50-60 meV and a trap density at the valence-band edge of ≈2×10^21 eV^-1 cm^-3. Interestingly, the trap DOS is always slightly steeper than exponential. An important finding is that the choice of the method used to calculate the trap DOS from the measured data can have a considerable effect on the final result. We identify two specific simplifying assumptions that lead to significant errors in the trap DOS. The temperature dependence of the band mobility should generally not be neglected. Moreover, the assumption of a constant effective accumulation-layer thickness leads to a significant underestimation of the slope of the trap DOS.

  18. Measurement methods for human exposure analysis.

    PubMed Central

    Lioy, P J

    1995-01-01

    The general methods used to complete measurements of human exposures are identified, and illustrations are provided for the cases of indirect and direct methods used for exposure analysis. The application of the techniques for external measurements of exposure, microenvironmental and personal monitors, is placed in the context of the need to test hypotheses concerning the biological effects of concern. The linkage of external measurements to measurements made in biological fluids is explored for a suite of contaminants. This information is placed in the context of the scientific framework used to conduct exposure assessment. Examples are taken from research on volatile organics and from a large-scale problem: hazardous waste sites. PMID:7635110

  19. Analysis of Water Volume Changes and Temperature Measurement Location Effect to the Accuracy of RTP Power Calibration

    NASA Astrophysics Data System (ADS)

    Lanyau, T.; Hamzah, N. S.; Jalal Bayar, A. M.; Karim, J. Abdul; Phongsakorn, P. K.; Suhaimi, K. Mohammad; Hashim, Z.; Razi, H. Md; Fazli, Z. Mohd; Ligam, A. S.; Mustafa, M. K. A.

    2018-01-01

    Power calibration is one of the important aspects of safe reactor operation. At RTP, the calorimetric method is applied for reactor power calibration. This method involves measurement of the water temperature in the RTP tank. The water volume and the location of the temperature measurement may play an important role in the accuracy of the measurement. In this study, the effect of water volume changes and thermocouple location on the power calibration accuracy has been analysed. The changes in water volume are controlled by varying the water level in the reactor tank; the water level is measured by an ultrasonic measurement device. Temperature measurements were made by thermocouples placed at three different locations. The accuracy of the temperature trend under the various measurement conditions has been determined and is discussed in this paper.
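
    In the calorimetric method referred to above, reactor power is commonly estimated from the heating rate of a known mass of tank water; the sketch below shows that standard relation with assumed values and is not taken from the RTP procedure.

        import numpy as np

        def calorimetric_power_kw(time_s, temp_c, water_mass_kg, cp_kj_per_kg_k=4.186):
            """Reactor power from the slope of the water heat-up curve: P = m * cp * dT/dt."""
            slope_k_per_s = np.polyfit(time_s, temp_c, 1)[0]   # linear fit of T(t)
            return water_mass_kg * cp_kj_per_kg_k * slope_k_per_s  # kJ/s = kW

        # Hypothetical heat-up data: 20 000 kg of water warming ~0.6 degC over 10 minutes
        t = np.linspace(0.0, 600.0, 7)
        T = 30.0 + 0.001 * t
        print(f"P ~ {calorimetric_power_kw(t, T, 20000.0):.0f} kW")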

  20. Review on methods for determination of metallothioneins in aquatic organisms.

    PubMed

    Shariati, Fatemeh; Shariati, Shahab

    2011-06-01

    One aspect of environmental degradation in coastal areas is pollution from toxic metals, which are persistent and are bioaccumulated by marine organisms, with serious public health implications. A conventional monitoring system for environmental metal pollution measures the levels of selected metals in the whole organism or in individual organs. However, measuring only the metal content of particular organs gives no information about its effects at the subcellular level. Therefore, evaluation of the biochemical biomarker metallothionein may be useful in assessing metal exposure and predicting potential detrimental effects induced by metal contamination. There are several methods for the determination of metallothioneins, including spectrophotometric methods, electrochemical methods, chromatography, saturation-based methods, immunological methods, electrophoresis, and RT-PCR. In this paper, the different methods are discussed briefly and a comparison between them is presented.

  1. Accuracy of two simple methods for estimation of thyroidal 131I kinetics for dosimetry-based treatment of Graves' disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Traino, A. C.; Xhafa, B.; Sezione di Fisica Medica, U.O. Fisica Sanitaria, Azienda Ospedaliero-Universitaria Pisana, via Roma n. 67, Pisa 56125

    2009-04-15

    One of the major challenges to the more widespread use of individualized, dosimetry-based radioiodine treatment of Graves' disease is the development of a reasonably fast, simple, and cost-effective method to measure thyroidal 131I kinetics in patients. Even though the fixed-activity administration method does not optimize the therapy, often giving too high or too low a dose to the gland, it provides effective treatment for almost 80% of patients without consuming excessive time and resources. In this article two simple methods for the evaluation of the kinetics of 131I in the thyroid gland are presented and discussed. The first is based on two measurements 4 and 24 h after a diagnostic 131I administration, and the second on one measurement 4 h after such an administration together with a linear correlation between this measurement and the maximum uptake in the thyroid. The thyroid absorbed dose calculated by each of the two methods is compared to that calculated by a more complete 131I kinetics evaluation, based on seven thyroid uptake measurements for 35 patients at various times after the therapy administration. There are differences between the thyroid absorbed doses derived by each of the two simpler methods and the "reference" value (derived from the more complete uptake measurements following the therapeutic 131I administration), with 20% median and 40% 90th-percentile differences for the first method (based on two thyroid uptake measurements at 4 and 24 h after 131I administration) and 25% median and 45% 90th-percentile differences for the second method (based on one measurement at 4 h post-administration). Predictably, although relatively fast and convenient, neither of these simpler methods appears to be as accurate as thyroid dose estimates based on more complete kinetic data.
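
    As a schematic illustration only (not the authors' protocol), a common simplification treats thyroid uptake after its maximum as a single exponential: two uptake measurements on the clearance limb give the effective half-life, and the resulting residence time yields the time-integrated activity from which an absorbed dose can then be computed with a dosimetric S factor. All values below are assumptions.

        import math

        def effective_lambda_per_h(u1, t1_h, u2, t2_h):
            """Effective clearance constant assuming mono-exponential uptake decline
            between two measurements on the clearance limb (u1 > u2)."""
            return math.log(u1 / u2) / (t2_h - t1_h)

        def residence_time_h(u_max, lam_per_h):
            """Time-integrated uptake (residence time) for U(t) = u_max * exp(-lam * t)."""
            return u_max / lam_per_h

        # Hypothetical uptake fractions measured on the clearance limb (assumed values)
        lam = effective_lambda_per_h(0.60, 24.0, 0.45, 96.0)
        tau = residence_time_h(0.60, lam)          # hours
        print(f"T_eff = {math.log(2)/lam:.1f} h, residence time = {tau:.0f} h")
        # The thyroid absorbed dose would follow by multiplying the time-integrated
        # activity (administered activity * residence time) by a dosimetric S factor.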

  2. A new method for determining the plasma electron density using optical frequency comb interferometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arakawa, Hiroyuki, E-mail: arakawa@fmt.teikyo-u.ac.jp; Tojo, Hiroshi; Sasao, Hajime

    2014-04-15

    A new method of plasma electron density measurement using the interferometric phases (fractional fringes) of an optical frequency comb interferometer is proposed. Using the characteristics of the optical frequency comb laser, high-density measurement can be achieved without fringe counting errors. Simulations show that the short wavelength and wide wavelength range of the laser source, together with low noise in the interferometric phase measurements, are effective in reducing the ambiguity of the measured density.
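
    For context, in a plasma interferometer the line-integrated electron density follows from the measured phase shift through the standard dispersion relation phi = r_e * lambda * integral(n_e dl), with r_e the classical electron radius. The short sketch below applies that textbook relation with assumed numbers; it is not the comb-specific algorithm proposed in the paper.

        import math

        R_E = 2.8179403262e-15  # classical electron radius, m

        def line_integrated_density(phase_rad, wavelength_m):
            """Line-integrated electron density (m^-2) from interferometric phase:
            phi = r_e * lambda * integral(n_e dl)."""
            return phase_rad / (R_E * wavelength_m)

        # Example: 0.8 rad fractional-fringe phase at a 1.55 um probing wavelength
        nl = line_integrated_density(0.8, 1.55e-6)
        print(f"n_e * L = {nl:.3e} m^-2")  # divide by the chord length to get an average density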

  3. Calibration method for a large-scale structured light measurement system.

    PubMed

    Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken

    2017-05-10

    The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.

  4. Robust Measurements of Phase Response Curves Realized via Multicycle Weighted Spike-Triggered Averages

    NASA Astrophysics Data System (ADS)

    Imai, Takashi; Ota, Kaiichiro; Aoyagi, Toshio

    2017-02-01

    Phase reduction has been extensively used to study rhythmic phenomena. As a result of phase reduction, the rhythm dynamics of a given system can be described using the phase response curve. Measuring this characteristic curve is an important step toward understanding a system's behavior. Recently, the basic idea for a new measurement method (called the multicycle weighted spike-triggered average method) was proposed. This paper confirms the validity of this method by providing an analytical proof and demonstrates its effectiveness in actual experimental systems by applying it to an oscillating electric circuit. Some practical tips for using the method are also presented.

  5. Methods to characterize charging effects

    NASA Astrophysics Data System (ADS)

    Slots, H.

    1984-08-01

    Methods to characterize charging in insulating material under high voltage dc stress, leading to electrical breakdown, are reviewed. The behavior of the charges can be studied by ac loss angle measurements after application or removal of dc bias. Measurements were performed on oil-paper and oil-Mylar systems. The poor reproducibility of the measurements makes it impossible to draw more than qualitative conclusions about the charging effects. With an ultrasound pressure wave the electric field distribution in a material can be determined. An alternative derivation for the transient response of a system which elucidates the influence of several parameters in a simple way is given.

  6. Air Pollution Translations: A Bibliography with Abstracts - Volume 4.

    ERIC Educational Resources Information Center

    Environmental Protection Agency, Research Triangle Park, NC. Air Pollution Technical Information Center.

    This volume is the fourth in a series of compilations presenting abstracts and indexes of translations of technical air pollution literature. The entries are grouped into 12 subject categories: Emission Sources, Control Methods, Measurement Methods, Air Quality Measurements, Atmospheric Interaction, Basic Science and Technology, Effects--Human…

  7. UTILIZING THE PAKS METHOD FOR MEASURING ACROLEIN AND OTHER ALDEHYDES IN DEARS

    EPA Science Inventory

    Acrolein is a hazardous air pollutant of high priority due to its high irritation potency and other potential adverse health effects. However, a reliable method is currently unavailable for measuring airborne acrolein at typical environmental levels. In the Detroit Exposure and A...

  8. Development of Porosity Measurement Method in Shale Gas Reservoir Rock

    NASA Astrophysics Data System (ADS)

    Siswandani, Alita; Nurhandoko, BagusEndar B.

    2016-08-01

    The pore scales have impacts on transport mechanisms in shale gas reservoirs. In this research, a digital helium porosity meter is used for porosity measurement under realistic conditions; accordingly, it is necessary to obtain a good approximation of the gas-filled porosity. Shale has a typical effective porosity that changes as a function of time. Effective porosity values for three different shale rocks are analyzed with the proposed measurement. We develop a new measurement method for characterizing porosity phenomena in shale gas as a function of time by measuring porosity at minute intervals using the digital helium porosity meter. The porosities of shale rock measured in this experiment are the free-gas and adsorbed-gas porosities. The pressure change over time shows that the porosity of shale contains at least two types of porosity: macro-scale porosity (fracture porosity) and fine-scale porosity (nano-scale porosity). We present the estimation of effective porosity values using the Boyle-Gay-Lussac approximation and the Van der Waals approximation.
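
    The underlying helium-porosimetry calculation is a gas-expansion balance (Boyle's law for an ideal gas): helium at a known pressure in a reference cell expands into the sample cell, and the equilibrium pressure gives the grain volume and hence the porosity. The sketch below is a generic ideal-gas version with invented chamber volumes and pressures, not the instrument's firmware.

        def grain_volume_cm3(p1_kpa, p2_kpa, v_ref_cm3, v_cell_cm3):
            """Boyle's-law expansion: p1*V_ref = p2*(V_ref + V_cell - V_grain)."""
            return v_ref_cm3 + v_cell_cm3 - p1_kpa * v_ref_cm3 / p2_kpa

        def helium_porosity(p1_kpa, p2_kpa, v_ref_cm3, v_cell_cm3, v_bulk_cm3):
            v_grain = grain_volume_cm3(p1_kpa, p2_kpa, v_ref_cm3, v_cell_cm3)
            return 1.0 - v_grain / v_bulk_cm3

        # Hypothetical run: 200 kPa expanding from a 100 cm3 reference cell into a
        # 150 cm3 sample cell holding a 90 cm3 (bulk) shale plug, settling at 118 kPa
        print(f"phi = {helium_porosity(200.0, 118.0, 100.0, 150.0, 90.0):.3f}")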

  9. Measurement of Thermal Conductivity of Porcine Liver in the Temperature Range of Cryotherapy and Hyperthermia (250~315 K) by a Thermal Sensor Made of a Micron-Scale Enameled Copper Wire.

    PubMed

    Jiang, Z D; Zhao, G; Lu, G R

    BACKGROUND: Cryotherapy and hyperthermia are effective treatments for several diseases, especially liver cancers. Thermal conductivity is a significant thermal property for the prediction and guidance of such surgical procedures. However, data on the thermal conductivities of organs and tissues, especially over the temperature range covering both cryotherapy and hyperthermia, are scarce. The aim was to provide comprehensive thermal conductivity data for liver over both the cryotherapy and hyperthermia ranges. A hot probe made of a stainless steel needle and a micron-sized copper wire is used for the measurement. To verify the data processing, both the least squares method and the Monte Carlo inversion method are used to determine the hot probe constants, with water and a 29.9% CaCl2 aqueous solution as reference materials. The thermal conductivities of Hanks solution and of porcine liver bathed in Hanks solution are then measured. The effective length obtained by the two methods is nearly the same, but the heat capacity of the probe calibrated by the Monte Carlo inversion is temperature dependent. The fairly comprehensive thermal conductivities of porcine liver measured with the two methods over the target temperature range are verified to be similar. We provide an integrated thermal conductivity of liver for cryotherapy and hyperthermia using the two methods, making more accurate predictions possible for surgery. The least squares method and the Monte Carlo inversion method have their respective advantages and disadvantages: the least squares method is suitable for measurements of liquids that are not prone to convection, or of solids, over a wide temperature range, while the Monte Carlo inversion method is suitable for accurate and rapid measurement.
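
    A hot-probe (transient line-source) measurement of this kind usually extracts the conductivity from the late-time slope of the probe temperature against ln(time), dT = (q / 4*pi*k) * ln(t) + const. The sketch below fits that slope by least squares with synthetic data; the probe power and temperatures are assumptions, not values from the study.

        import numpy as np

        def conductivity_from_hot_probe(time_s, delta_t_k, q_w_per_m):
            """Line-source model: dT = (q / (4*pi*k)) * ln(t) + C  ->  k = q / (4*pi*slope)."""
            slope = np.polyfit(np.log(time_s), delta_t_k, 1)[0]
            return q_w_per_m / (4.0 * np.pi * slope)

        # Synthetic late-time data for a probe dissipating 2 W/m in a medium with k ~ 0.5 W/(m K)
        t = np.linspace(5.0, 60.0, 12)
        dT = 2.0 / (4.0 * np.pi * 0.5) * np.log(t) + 0.3
        print(f"k = {conductivity_from_hot_probe(t, dT, 2.0):.3f} W/(m K)")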

  10. Methods and results of boundary layer measurements on a glider

    NASA Technical Reports Server (NTRS)

    Nes, W. V.

    1978-01-01

    Boundary layer measurements were carried out on a glider under natural conditions. Two effects are investigated: the effect of a non-constant static-pressure development within the boundary layer, and the effect of the negative pressure difference in a sublaminar boundary layer. The results obtained by means of an ion probe in parallel connection confirm those obtained by means of a pressure probe. Additional effects which occurred during these measurements are briefly dealt with.

  11. SU-G-IeP3-04: Effective Dose Measurements in Fast kVp Switch Dual Energy Computed Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raudabaugh, J; Moore, B; Nguyen, G

    2016-06-15

    Purpose: The objective of this study was two-fold: (a) to test a new approach to approximating organ dose by using the effective energy of the combined 80 kV/140 kV beam in dual-energy (DE) computed tomography (CT), and (b) to derive the effective dose (ED) for the abdomen-pelvis protocol in DECT. Methods: A commercial dual-energy CT scanner was employed using a fast-kV-switch abdomen/pelvis protocol alternating between 80 kV and 140 kV. MOSFET detectors were used for organ dose measurements. First, an experimental validation of the dose equivalency between MOSFET and ion chamber (as a gold standard) was performed using a CTDI phantom. Second, the ED of DECT scans was measured using MOSFET detectors and an anthropomorphic phantom. For the ED calculations, an abdomen/pelvis scan was used with ICRP 103 tissue weighting factors; ED was also computed using the AAPM dose-length product (DLP) method and compared to the MOSFET value. Results: The effective energy of the combined beam was determined as 42.9 keV from half-value layer (HVL) measurements. ED for the dual-energy scan was calculated as 16.49 ± 0.04 mSv by the MOSFET method and 14.62 mSv by the DLP method. Conclusion: Tissue dose in the center of the CTDI body phantom was 1.71 ± 0.01 cGy (ion chamber) and 1.71 ± 0.06 cGy (MOSFET), respectively; this validated the use of the effective-energy method for organ dose estimation. The ED from the abdomen-pelvis scan was 16.49 ± 0.04 mSv by MOSFET and 14.62 mSv by the DLP method; this suggests that the DLP method provides a reasonable approximation to the ED.
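
    The DLP estimate mentioned in the abstract is a simple product of the reported dose-length product and a region-specific conversion coefficient, ED ~ k x DLP. The snippet below illustrates that arithmetic; the k value shown (about 0.015 mSv per mGy*cm for an adult abdomen-pelvis) is the commonly tabulated coefficient and is used here only as an assumption.

        def effective_dose_dlp(dlp_mgy_cm, k_msv_per_mgy_cm=0.015):
            """DLP-based estimate: effective dose = k * DLP."""
            return dlp_mgy_cm * k_msv_per_mgy_cm

        # A DLP of ~975 mGy*cm would reproduce the ~14.6 mSv figure quoted above
        print(f"ED = {effective_dose_dlp(975.0):.1f} mSv")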

  12. Effects of Aggregation on Blood Sedimentation and Conductivity

    PubMed Central

    Zhbanov, Alexander; Yang, Sung

    2015-01-01

    The erythrocyte sedimentation rate (ESR) test has been used for over a century. The Westergren method is routinely used in a variety of clinics. However, the mechanism of erythrocyte sedimentation remains unclear, and the 60 min required for the test seems excessive. We investigated the effects of cell aggregation during blood sedimentation and electrical conductivity at different hematocrits. A sample of blood was drop cast into a small chamber with two planar electrodes placed on the bottom. The measured blood conductivity increased slightly during the first minute and decreased thereafter. We explored various methods of enhancing or retarding the erythrocyte aggregation. Using experimental measurements and theoretical calculations, we show that the initial increase in blood conductivity was indeed caused by aggregation, while the subsequent decrease in conductivity resulted from the deposition of erythrocytes. We present a method for calculating blood conductivity based on effective medium theory. Erythrocytes are modeled as conducting spheroids surrounded by a thin insulating membrane. A digital camera was used to investigate the erythrocyte sedimentation behavior and the distribution of the cell volume fraction in a capillary tube. Experimental observations and theoretical estimations of the settling velocity are provided. We experimentally demonstrate that the disaggregated cells settle much more slowly than the aggregated cells. We show that our method of measuring the electrical conductivity credibly reflects the ESR. The method was very sensitive to the initial stage of aggregation and sedimentation, while the sedimentation curve for the Westergren ESR test has a very mild slope at early times. We tested our method for rapid estimation of the Westergren ESR and show a correlation between our measurements of changes in blood conductivity and the standard Westergren ESR method. In the future, our method could be examined as a potential means of accelerating ESR tests in clinical practice. PMID:26047511

  13. The influence of NDT-Bobath and PNF methods on the field support and total path length measure foot pressure (COP) in patients after stroke.

    PubMed

    Krukowska, Jolanta; Bugajski, Marcin; Sienkiewicz, Monika; Czernicki, Jan

    In stroke patients, the NDT (Bobath, Neurodevelopmental Treatment) and PNF (Proprioceptive Neuromuscular Facilitation) methods are used to achieve the main objective of rehabilitation, which is the restoration of maximum patient independence in the shortest possible period of time (especially the balance of the body). The aim of the study was to evaluate the effect of the NDT-Bobath and PNF methods on the support area and the total path length of the centre of foot pressure (COP) in patients after stroke. The study included 72 patients aged from 20 to 69 years after ischemic stroke with hemiparesis. The patients were divided into 4 groups by simple randomization. The criteria for this division were the body side (right or left) affected by paresis and the applied rehabilitation method. All patients received the assigned kinesitherapeutic method (randomized): 35 therapy sessions, daily, over a period of six weeks. Before initiation of therapy and after 6 weeks, the total support area and the path length of the centre of pressure (COP) were measured using an alpha stabilometric platform. The results were statistically analyzed. After treatment, the studied parameters decreased in all groups. The greatest improvement was obtained in the groups receiving NDT-Bobath therapy. The NDT-Bobath method is a more effective treatment for improving the balance of the body than the PNF method. In stroke patients, the effectiveness of the NDT-Bobath method does not depend on hand paresis. Copyright © 2016 Polish Neurological Society. Published by Elsevier Urban & Partner Sp. z o.o. All rights reserved.

  14. Assessment of ground effects on the propagation of aircraft noise: The T-38A flight experiment

    NASA Technical Reports Server (NTRS)

    Willshire, W. L., Jr.

    1980-01-01

    A flight experiment was conducted to investigate air-to-ground propagation of sound at grazing angles of incidence. A turbojet-powered airplane was flown at altitudes ranging from 10 to 160 m over a 20-microphone array positioned over grass and concrete. The dependence of ground effects on frequency, incidence angle, and slant range was determined using two analysis methods. In one method, a microphone close to the flight path is compared to down-range microphones. In the other, comparisons are made between two microphones equidistant from the flight path but positioned over the two surfaces. In both methods, source directivity angle was the criterion by which portions of the microphone signals were compared. The ground effects were largest in the frequency range of 200 to 400 Hz and were found to be dependent on incidence angle and slant range. Ground effects measured for angles of incidence greater than 10 deg to 15 deg were near zero. Measured attenuation increased with increasing slant range for slant ranges less than 750 m. Theoretical predictions were found to be in good agreement with the major details of the measured results.

  15. Effective wavefront aberration measurement of spectacle lenses in as-worn status

    NASA Astrophysics Data System (ADS)

    Jia, Zhigang; Xu, Kai; Fang, Fengzhou

    2018-04-01

    An effective wavefront aberration analysis method for measuring spectacle lenses in as-worn status was proposed and verified using an experimental apparatus based on an eye rotation model. Two strategies were employed to improve the accuracy of measurement of the effective wavefront aberrations on the corneal sphere. The influences of three as-worn parameters, the vertex distance, pantoscopic angle, and face form angle, together with the eye rotation and corresponding incident beams, were objectively and quantitatively obtained. The experimental measurements of spherical single vision and freeform progressive addition lenses demonstrate the accuracy and validity of the proposed method and experimental apparatus, which provide a potential means of achieving supernormal vision correction with customization and personalization in optimizing the as-worn status-based design of spectacle lenses and evaluating their manufacturing and imaging qualities.

  16. Evaluation of Fiber Reinforced Cement Using Digital Image Correlation

    PubMed Central

    Melenka, Garrett W.; Carey, Jason P.

    2015-01-01

    The effect of short fiber reinforcements on the mechanical properties of cement has been examined using a splitting tensile – digital image correlation (DIC) measurement method. Three short-fiber reinforcement materials were used in this study: fiberglass, nylon, and polypropylene. The method outlined provides a simple experimental setup that can be used to evaluate the ultimate tensile strength of brittle materials as well as to measure the full-field strain across the surface of the cylindrical splitting tensile test specimen. Since the DIC measurement technique is contact-free, this method can also be used to assess sample failure. PMID:26039590
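
    For reference, the ultimate splitting tensile strength in a Brazilian-type test follows from the peak load and cylinder dimensions via sigma_t = 2P / (pi * D * L); the values below are placeholders, not results from the study.

        import math

        def splitting_tensile_strength_mpa(peak_load_kn, diameter_mm, length_mm):
            """Brazilian (splitting tensile) test: sigma_t = 2P / (pi * D * L)."""
            return 2.0 * peak_load_kn * 1e3 / (math.pi * diameter_mm * length_mm)  # N/mm^2 = MPa

        # Hypothetical 100 mm x 200 mm cement cylinder failing at 95 kN
        print(f"sigma_t = {splitting_tensile_strength_mpa(95.0, 100.0, 200.0):.2f} MPa")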

  17. The measurement of heats of solution of high melting metallic systems in an electromagnetic levitation field. Ph.D. Thesis - Tech. Univ. Berlin - 1979

    NASA Technical Reports Server (NTRS)

    Frohberg, M. G.; Betz, G.

    1982-01-01

    A method was tested for measuring the enthalpies of mixing of liquid metallic alloying systems, involving the combination of two samples in the electromagnetic field of an induction coil. The heat of solution is calculated from the pyrometrically measured temperature effect, the heat capacity of the alloy, and the heat content of the added sample. The usefulness of the method was tested experimentally with iron-copper and niobium-silicon systems. This method should be especially applicable to high-melting alloys, for which conventional measurements have failed.

  18. Input reconstruction of chaos sensors.

    PubMed

    Yu, Dongchuan; Liu, Fang; Lai, Pik-Yin

    2008-06-01

    Although the sensitivity of sensors can be significantly enhanced using chaotic dynamics due to its extremely sensitive dependence on initial conditions and parameters, reconstructing the measured signal from the distorted sensor response becomes challenging. In this paper we suggest an effective method to reconstruct the measured signal from the distorted (chaotic) response of chaos sensors. This measurement signal reconstruction method applies neural network techniques for system structure identification and therefore does not require precise information about the sensor's dynamics. We also discuss how to improve the robustness of the reconstruction. Some examples are presented to illustrate the suggested measurement signal reconstruction method.

  19. Note: Measuring instrument of singlet oxygen quantum yield in photodynamic effects

    NASA Astrophysics Data System (ADS)

    Li, Zhongwei; Zhang, Pengwei; Zang, Lixin; Qin, Feng; Zhang, Zhiguo; Zhang, Hongli

    2017-06-01

    Using diphenylisobenzofuran (C20H14O) as a singlet oxygen (1O2) reporter, a comparison method that can quantitatively measure the singlet oxygen quantum yield (ΦΔ) of a photosensitizer is presented in this paper. Based on this method, an automatic measuring instrument for the singlet oxygen quantum yield was developed. The singlet oxygen quantum yields of the photosensitizers hermimether and aloe-emodin were measured and found to be consistent with existing values, which verifies the validity of the measuring instrument.

  20. Current measurement apparatus

    DOEpatents

    Umans, Stephen D.

    2008-11-11

    Apparatus and methods are provided for a system for measurement of a current in a conductor such that the conductor current may be momentarily directed to a current measurement element in order to maintain proper current without significantly increasing the power dissipation attributable to the current measurement element or adding resistance to assist in current measurement. The apparatus and methods described herein are useful in superconducting circuits, where it is necessary to monitor the current carried by the superconducting elements while minimizing the effects of power dissipation attributable to the current measurement element.

  1. Open-field test site

    NASA Astrophysics Data System (ADS)

    Gyoda, Koichi; Shinozuka, Takashi

    1995-06-01

    An open-field test site with measurement equipment, a turntable, antenna positioners, and auxiliary measurement equipment was remodelled at the CRL north site. This paper introduces the configuration, specifications, and characteristics of the new open-field test site. Measured 3-m and 10-m site attenuations are in good agreement with theoretical values, which means that the site is suitable for 3-m and 10-m method EMI/EMC measurements. The site is also expected to be useful for antenna measurement, antenna calibration, and studies on EMI/EMC measurement methods.

  2. Antimicrobial Testing Methods & Procedures Developed by EPA's Microbiology Laboratory

    EPA Pesticide Factsheets

    We develop antimicrobial testing methods and standard operating procedures to measure the effectiveness of hard surface disinfectants against a variety of microorganisms. Find methods and procedures for antimicrobial testing.

  3. A Method of Calibrating Airspeed Installations on Airplanes at Transonic and Supersonic Speeds by the Use of Accelerometer and Attitude-Angle Measurements

    NASA Technical Reports Server (NTRS)

    Zalovick, John A; Lina, Lindsay J; Trant, James P , Jr

    1953-01-01

    A method is described for calibrating airspeed installations on airplanes at transonic and supersonic speeds in vertical-plane maneuvers, making use of measurements of normal and longitudinal accelerations and attitude angle. In this method all the required instrumentation is carried within the airplane. An analytical study of the effects of various sources of error on the accuracy of an airspeed calibration by the accelerometer method indicated that the required measurements can be made accurately enough to ensure a satisfactory calibration.

  4. Anomalous Shocks on the Measured Near-Field Pressure Signatures of Low-Boom Wind-Tunnel Models

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.

    2006-01-01

    Unexpected shocks on wind-tunnel-measured pressure signatures prompted questions about design methods, pressure-signature measurement techniques, and the quality of measurements in the flow fields near lifting models. Some of these unexpected shocks were the result of component integration methods. Others were attributed to the three-dimensional nature of the flow around a lifting model, to inaccuracies in the prediction of the area-ruled lift, or to wing-tip stall effects. This report discusses the low-boom model wind-tunnel data in which these unexpected shocks were initially observed, the physics of the lifting wing/body model's flow field, the wind-tunnel data used to evaluate the applicability of methods for calculating equivalent areas due to lift, the performance of lift-prediction codes, and tip-stall effects, so that the cause of these shocks could be determined.

  5. Fundamental study on non-invasive blood glucose sensing.

    PubMed

    Xu, K; Li, Q; Lu, Z; Jiang, J

    2002-01-01

    Diabetes is a disease which severely threatens the health of human beings. Unfortunately, current monitoring techniques based on finger sticks discourage regular use. Noninvasive spectroscopic measurement of blood glucose is a simple and painless technique and, because it requires no reagents, reduces the long-term health care costs of diabetic patients; it is therefore suitable for home use. Moreover, the methodology established here not only applies to noninvasive blood glucose measurement but can also be extended to noninvasive measurement of other analytes in body fluids, which is of significance for the development of clinical analysis techniques. In this paper, some of the fundamental research carried out in our laboratory in the field of non-invasive blood glucose measurement is introduced. 1. Fundamental research was performed on glucose concentrations in samples ranging from simple to complex using near- and mid-infrared spectroscopy: (1) the relationship between instrument precision and the prediction accuracy of the glucose measurement; (2) the change in the quantitative measurement result with the complexity of the samples; (3) attempts to increase the prediction accuracy of the glucose measurement by improving the modeling methods. The results showed that non-invasive blood glucose measurement with near- and mid-infrared spectroscopy is feasible in theory, and the experimental results from simple to complex samples proved the effectiveness of the methodology, which consists of both hardware and software. 2. In view of the characteristics of human-body measurement, the effects of measuring conditions on the measurement results were investigated, including: (1) the effect of measurement position; (2) the effect of measurement pressure; (3) the effect of measurement site; (4) the effect of the measured individual. Through these fundamental studies, the specific problems of human-body measurement were addressed, and a practical and effective method for noninvasive human blood glucose measurement was proposed.

  6. Compressive sensing method for recognizing cat-eye effect targets.

    PubMed

    Li, Li; Li, Hui; Dang, Ersheng; Liu, Bo

    2013-10-01

    This paper proposes a cat-eye effect target recognition method based on compressive sensing (CS) and presents a recognition method (sample processing before reconstruction based on compressed sensing, or SPCS) for image processing. In this method, linear projections of the original image sequences are used to remove dynamic background distractions and extract cat-eye effect targets. Furthermore, the corresponding imaging mechanism for acquiring active and passive image sequences is put forward. This method uses fewer images to recognize cat-eye effect targets, reduces data storage, and translates traditional target identification based on original image processing into the processing of measurement vectors. The experimental results show that the SPCS method is feasible and superior to the shape-frequency dual criteria method.

  7. Employment of sawtooth-shaped-function excitation signal and oversampling for improving resistance measurement accuracy

    NASA Astrophysics Data System (ADS)

    Lin, Ling; Li, Shujuan; Yan, Wenjuan; Li, Gang

    2016-10-01

    In order to achieve higher accuracy in routine resistance measurement without increasing the complexity and cost of the system circuit of existing methods, this paper presents a novel method that exploits a sawtooth-shaped-function excitation signal and oversampling technology. The excitation signal source for the resistance measurement is modulated by the sawtooth-shaped-function signal, and oversampling technology is employed to increase the resolution and accuracy of the measurement system. Compared with the traditional method of using a constant-amplitude excitation signal, this method can enhance the measurement accuracy by almost one order of magnitude and reduce the root mean square error by a factor of 3.75 under the same measurement conditions. The experimental results show that the novel method achieves a significant improvement in resistance measurement accuracy without increasing the system cost or circuit complexity, which makes it valuable for application in electronic instruments.
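
    The resolution gain from oversampling rests on a standard rule of thumb: averaging 4^n samples of a signal with sufficient noise (or a varying excitation, as here) adds roughly n bits of effective ADC resolution. The sketch below demonstrates the averaging step with a simulated quantizer; it illustrates the general technique, not the circuit described in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def quantize(x, n_bits=10, full_scale=1.0):
            """Ideal ADC: quantize x (0..full_scale) to n_bits."""
            lsb = full_scale / (2 ** n_bits)
            return np.round(x / lsb) * lsb

        true_value = 0.123456                      # volts across the measured resistor
        noise = rng.normal(0.0, 0.002, size=256)   # noise/dither spanning several LSBs

        single = quantize(true_value + noise[0])            # one conversion
        oversampled = quantize(true_value + noise).mean()   # 256 conversions averaged (4^4 -> ~+4 bits)

        print(f"single: {abs(single - true_value)*1e6:.1f} uV error, "
              f"oversampled: {abs(oversampled - true_value)*1e6:.1f} uV error")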

  8. Ostwald ripening and interparticle-diffraction effects for illite crystals

    USGS Publications Warehouse

    Eberl, D.D.; Srodon, J.

    1988-01-01

    The Warren-Averbach method, an X-ray diffraction (XRD) method used to measure mean particle thickness and particle-thickness distribution, is used to restudy sericite from the Silverton caldera. Apparent particle-thickness distributions indicate that the clays may have undergone Ostwald ripening and that this process has modified the K-Ar ages of the samples. The mechanism of Ostwald ripening can account for many of the features found for the hydrothermal alteration of illite. Expandabilities measured by the XRD peak-position method for illite/smectites (I/S) from various locations are smaller than expandabilities measured by transmission electron microscopy (TEM) and by the Warren-Averbach (W-A) method. This disparity is interpreted as being related to the presence of nonswelling basal surfaces that form the ends of stacks of illite particles (short-stack effect), stacks that, according to the theory of interparticle diffraction, diffract as coherent X-ray scattering domains. -from Authors

  9. Data fusion algorithm for rapid multi-mode dust concentration measurement system based on MEMS

    NASA Astrophysics Data System (ADS)

    Liao, Maohao; Lou, Wenzhong; Wang, Jinkui; Zhang, Yan

    2018-03-01

    As a single measurement method cannot fully meet the technical requirements of dust concentration measurement, a multi-mode detection method is put forward, which in turn places new requirements on data processing. This paper presents a new dust concentration measurement system that contains a MEMS ultrasonic sensor and a MEMS capacitance sensor, and presents a new data fusion algorithm for this multi-mode dust concentration measurement system. After analyzing the relation between the data from the two measurement modes, a data fusion algorithm based on Kalman filtering is established, which effectively improves the measurement accuracy and ultimately forms a rapid data-fusion model of dust concentration measurement. Test results show that the data fusion algorithm is able to realize rapid and accurate concentration detection.
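
    A minimal way to picture the fusion step is a scalar Kalman-style update in which each new reading, from either sensor, corrects the current concentration estimate in proportion to that sensor's noise variance. The sketch below is a generic illustration with assumed noise figures, not the algorithm from the paper.

        class ScalarKalman:
            """1-D Kalman filter for a slowly varying dust concentration."""

            def __init__(self, x0, p0, process_var):
                self.x, self.p, self.q = x0, p0, process_var

            def update(self, z, meas_var):
                self.p += self.q                    # predict (random-walk process model)
                k = self.p / (self.p + meas_var)    # Kalman gain
                self.x += k * (z - self.x)          # correct with the new reading
                self.p *= (1.0 - k)
                return self.x

        kf = ScalarKalman(x0=0.0, p0=10.0, process_var=0.01)
        # Interleaved readings (mg/m^3): ultrasonic sensor (variance ~0.25), capacitive (~0.09)
        for z, var in [(5.2, 0.25), (4.9, 0.09), (5.4, 0.25), (5.0, 0.09)]:
            est = kf.update(z, var)
        print(f"fused concentration ~ {est:.2f} mg/m^3")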

  10. Improving the surface metrology accuracy of optical profilers by using multiple measurements

    NASA Astrophysics Data System (ADS)

    Xu, Xudong; Huang, Qiushi; Shen, Zhengxiang; Wang, Zhanshan

    2016-10-01

    The performance of high-resolution optical systems is affected by small-angle scattering at the mid-spatial-frequency irregularities of the optical surface. Characterizing these irregularities is, therefore, important. However, surface measurements obtained with optical profilers are influenced by additive white noise, as indicated by the heavy-tail effect observable on their power spectral density (PSD). A multiple-measurement method is used to reduce the effects of white noise by averaging individual measurements. The intensity of white noise is determined using a model based on the theoretical PSD of fractal surface measurements with additive white noise. The intensity of white noise decreases as the number of averaged measurements increases. Using multiple measurements also increases the highest observed spatial frequency; this increase is derived and calculated. Additionally, the accuracy obtained using multiple measurements is carefully studied, with analysis of both the residual reference error after calibration and the random errors appearing in the range of measured spatial frequencies. The resulting insights on the effects of white noise in optical profiler measurements and the methods to mitigate them may prove invaluable for improving the quality of surface metrology with optical profilers.
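
    The core of the multiple-measurement idea is that averaging N independent measurements leaves the repeatable surface signal unchanged while dividing the additive white-noise variance (and hence the flat PSD floor) by N. A toy demonstration with synthetic 1-D profiles is sketched below; it is not the authors' analysis code.

        import numpy as np

        rng = np.random.default_rng(1)
        n_pts, n_meas = 1024, 16

        x = np.arange(n_pts)
        surface = 5e-9 * np.sin(2 * np.pi * x / 64)                      # repeatable mid-frequency ripple (m)
        measurements = surface + rng.normal(0.0, 2e-9, (n_meas, n_pts))  # each scan adds white noise

        single_noise = np.std(measurements[0] - surface)
        avg_noise = np.std(measurements.mean(axis=0) - surface)

        # Expect roughly a factor sqrt(n_meas) = 4 reduction in the residual noise
        print(f"single-scan noise {single_noise*1e9:.2f} nm, "
              f"{n_meas}-scan average {avg_noise*1e9:.2f} nm")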

  11. Quantum Field Energy Sensor based on the Casimir Effect

    NASA Astrophysics Data System (ADS)

    Ludwig, Thorsten

    The Casimir effect converts vacuum fluctuations into a measurable force. Some new energy technologies aim to utilize these vacuum fluctuations in commonly used forms of energy such as electricity or mechanical motion. In order to study these energy technologies it is helpful to have sensors for the energy density of vacuum fluctuations. In today's scientific instrumentation and scanning-microscope technologies there are several common methods to measure sub-nanonewton forces. While commercial atomic force microscopes (AFM) mostly work with silicon cantilevers, there are a large number of reports on the use of quartz tuning forks to obtain high-resolution force measurements or to create new force sensors. Both methods have certain advantages and disadvantages over the other. In this report the two methods are described and compared with regard to their usability for Casimir force measurements. Furthermore, a design for a quantum field energy sensor based on the Casimir force measurement is described. In addition, some general considerations on extracting energy from vacuum fluctuations are given.

  12. AMS Measurement of 36Cl with a Q3D Magnetic Spectrometer at CIAE

    NASA Astrophysics Data System (ADS)

    Li, Chaoli; He, Ming; Zhang, Wei; Wu, Shaoyong; Li, Zhenyu; He, Xianwen; Liu, Jiancheng; Dong, Kejun; Jiang, Shan

    2012-06-01

    The 36Cl/Cl ratio can be used to determine the exposure age of surface rocks and to monitor the secular equilibrium of 36Cl in sedimentary and igneous rock in groundwater. Due to the uncertainty introduced by the different chemical separation processes used for removing 36S, there is a high degree of uncertainty in 36Cl accelerator mass spectrometry (AMS) measurements if the 36Cl/Cl ratio is lower than 10^-14. A higher-sensitivity 36Cl AMS measurement has been set up using a ΔE-Q3D method at the China Institute of Atomic Energy (CIAE). The performance of the ΔE-Q3D method for 36Cl AMS measurement has been systematically studied. The experimental results show that the ΔE-Q3D method has a higher isobar suppression factor. By taking advantage of the direct removal of 36S, the sample preparation can be simplified and the uncertainty introduced by the different chemical separation processes can be reduced in 36Cl AMS measurements.

  13. A Comparison of Four Approaches to Account for Method Effects in Latent State-Trait Analyses

    ERIC Educational Resources Information Center

    Geiser, Christian; Lockhart, Ginger

    2012-01-01

    Latent state-trait (LST) analysis is frequently applied in psychological research to determine the degree to which observed scores reflect stable person-specific effects, effects of situations and/or person-situation interactions, and random measurement error. Most LST applications use multiple repeatedly measured observed variables as indicators…

  14. Methods to measure sedimentation of spawning gravels

    Treesearch

    Thomas E. Lisle; Rand E. Eads

    1991-01-01

    Sediment transport occurring after spawning can cause scour of incubating embryos and infiltration of fine sediment into spawning gravel, decreasing intergravel flow and preventing hatched fry from emerging from the gravel. Documentation of these effects requires measuring gravel conditions before and after high flow periods and combining methods to record scour and...

  15. 49 CFR 192.945 - What methods must an operator use to measure program effectiveness?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline Integrity Management § 192.945 What methods must an operator use to measure program...

  16. [A method of temperature measurement for hot forging with surface oxide based on infrared spectroscopy].

    PubMed

    Zhang, Yu-cun; Qi, Yan-de; Fu, Xian-bin

    2012-05-01

    High-temperature large forgings are covered with a thick oxide layer during forging, which leads to large errors in the measured data. In this paper, a method of measuring temperature based on infrared spectroscopy is presented that can effectively eliminate the influence of the surface oxide on the temperature measurement. The method measures the surface temperature and emissivity of the oxide directly from the infrared spectrum radiated by the surface oxide of the forging, and then derives the real temperature of the hot forging covered with oxide using the heat exchange equation. In order to suppress the interfering spectra contained in the received infrared radiation, a three-interference-filter system was proposed, and a set of optimal gap parameter values was obtained using spectral simulation, improving the precision of the temperature measurement. The experimental results show that the method can accurately measure the surface temperature of high-temperature forgings covered with oxide; it meets the requirements of measurement accuracy, and the temperature measurement method is feasible according to the experimental results.
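
    Radiometric temperature measurement of this kind ultimately inverts Planck's law: given a spectral radiance measured at wavelength lambda and an emissivity estimate, the surface temperature follows in closed form. The sketch below shows that generic inversion (constants from standard physics), not the three-filter algorithm of the paper; the radiance and emissivity values are assumptions.

        import math

        H = 6.62607015e-34   # Planck constant, J s
        C = 2.99792458e8     # speed of light, m/s
        KB = 1.380649e-23    # Boltzmann constant, J/K

        def planck_radiance(wavelength_m, temp_k):
            """Spectral radiance, W m^-3 sr^-1."""
            return (2 * H * C**2 / wavelength_m**5) / math.expm1(H * C / (wavelength_m * KB * temp_k))

        def temperature_from_radiance(radiance, wavelength_m, emissivity):
            """Invert L_meas = eps * Planck(lambda, T) for T."""
            x = 1.0 + emissivity * 2 * H * C**2 / (wavelength_m**5 * radiance)
            return H * C / (wavelength_m * KB * math.log(x))

        # Round-trip check: a surface at 1200 K with emissivity 0.8, observed at 1.6 um
        L = 0.8 * planck_radiance(1.6e-6, 1200.0)
        print(f"recovered T = {temperature_from_radiance(L, 1.6e-6, 0.8):.1f} K")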

  17. Generalization of the swelling method to measure the intrinsic curvature of lipids

    NASA Astrophysics Data System (ADS)

    Barragán Vidal, I. A.; Müller, M.

    2017-12-01

    Via computer simulation of a coarse-grained model of two-component lipid bilayers, we compare two methods of measuring the intrinsic curvatures of the constituting monolayers. The first one is a generalization of the swelling method that, in addition to the assumption that the spontaneous curvature linearly depends on the composition of the lipid mixture, incorporates contributions from its elastic energy. The second method measures the effective curvature-composition coupling between the apposing leaflets of bilayer structures (planar bilayers or cylindrical tethers) to extract the spontaneous curvature. Our findings demonstrate that both methods yield consistent results. However, we highlight that the two-leaflet structure inherent to the latter method has the advantage of allowing measurements for mixed lipid systems up to their critical point of demixing as well as in the regime of high concentration (of either species).

  18. In-flight calibration of mesospheric rocket plasma probes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Havnes, Ove; University Studies Svalbard; Hartquist, Thomas W.

    Many effects and factors can influence the efficiency of a rocket plasma probe. These include payload charging, solar illumination, rocket payload orientation and rotation, and dust-impact-induced secondary charge production. As a consequence, considerable uncertainties can arise in the determination of the effective cross sections of plasma probes and the measured electron and ion densities. We present a new method for calibrating mesospheric rocket plasma probes and obtaining reliable measurements of plasma densities. This method can be used if a payload also carries a probe for measuring the dust charge density. It is based on the fact that a dust probe's effective cross section for measuring the charged component of dust is normally nearly equal to its geometric cross section, and it involves comparing variations in the dust charge density measured with the dust detector to the corresponding current variations measured with the electron and/or ion probes. In cases in which the dust charge density is significantly smaller than the electron density, the relation between plasma and dust charge density variations can be simplified and used to infer the effective cross sections of the plasma probes. We illustrate the utility of the method by analysing the data from a specific rocket flight of a payload containing both dust and electron probes.

  19. Physics Notes

    ERIC Educational Resources Information Center

    School Science Review, 1977

    1977-01-01

    Includes methods for demonstrating the Schlieren effect, measuring refractive index, measuring acceleration, presenting concepts of optics, automatically recording weather, constructing apparatus for sound experiments, using thermistor thermometers, using the 741 operational amplifier in analog computing, measuring inductance, electronically ringing…

  20. Improved non-invasive method for aerosol particle charge measurement employing in-line digital holography

    NASA Astrophysics Data System (ADS)

    Tripathi, Anjan Kumar

    Electrically charged particles are found in a wide range of applications ranging from electrostatic powder coating, mineral processing, and powder handling to rain-producing cloud formation in atmospheric turbulent flows. In turbulent flows, particle dynamics is influenced by the electric force arising from particle charge generation. Quantifying particle charges in such systems will help in better predicting and controlling particle clustering, relative motion, collision, and growth. However, there is a lack of noninvasive techniques to measure particle charges. Recently, a non-invasive method for particle charge measurement using an in-line Digital Holographic Particle Tracking Velocimetry (DHPTV) technique was developed in our lab, where the charged particles to be measured were introduced into a uniform electric field, and their movement towards the oppositely charged electrode was deemed proportional to the amount of charge on the particles (Fan Yang, 2014 [1]). However, inherent speckle noise associated with the reconstructed images was not adequately removed and therefore the particle tracking data was contaminated. Furthermore, the particle charge calculation based on particle deflection velocity neglected the particle drag force and the rebound effect of highly charged particles from the electrodes. We improved upon the existing particle charge measurement method by: 1) hologram post-processing, 2) taking the drag force into account in the charge calculation, and 3) considering the rebound effect. The improved method was first fine-tuned through a calibration experiment. The complete method was then applied to two different experiments, namely conduction charging and an enclosed fan-driven turbulence chamber, to measure particle charges. In all three experiments conducted, the particle charge was found to obey a non-central t location-scale family of distributions. It was also noted that the charge distribution was insensitive to the change in voltage applied between the electrodes. The range of applied voltage over which reliable particle charges can be measured was also quantified by taking into account the rebound effect of highly charged particles. Finally, in the enclosed chamber experiment, it was found that using a carbon conductive coating on the inner walls of the chamber minimized the charge generation inside the chamber when glass bubble particles were used. The value of the electric charges obtained in the calibration experiment through the improved method was found to have the same order of magnitude as reported in existing work (Y. C. Ahn et al., 2004 [2]), indicating that the method is indeed effective.
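
    Accounting for drag in the charge calculation amounts to a force balance for a small particle drifting at terminal velocity in the applied field: qE = 3*pi*mu*d*v (Stokes drag), so q = 3*pi*mu*d*v/E. The helper below applies that balance with invented values; it is a generic sketch, not the thesis code.

        import math

        def particle_charge_c(deflection_velocity_m_s, diameter_m, field_v_per_m,
                              air_viscosity_pa_s=1.81e-5):
            """Charge from a Stokes-drag force balance at terminal drift: q = 3*pi*mu*d*v / E."""
            return 3.0 * math.pi * air_viscosity_pa_s * diameter_m * deflection_velocity_m_s / field_v_per_m

        # Hypothetical 50 um glass bubble drifting at 8 mm/s in a 100 kV/m field
        q = particle_charge_c(8e-3, 50e-6, 1e5)
        print(f"q = {q:.2e} C  (~{q/1.602e-19:.0f} elementary charges)")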

  1. Investigating Measurement Invariance in Computer-Based Personality Testing: The Impact of Using Anchor Items on Effect Size Indices

    ERIC Educational Resources Information Center

    Egberink, Iris J. L.; Meijer, Rob R.; Tendeiro, Jorge N.

    2015-01-01

    A popular method to assess measurement invariance of a particular item is based on likelihood ratio tests with all other items as anchor items. The results of this method are often only reported in terms of statistical significance, and researchers proposed different methods to empirically select anchor items. It is unclear, however, how many…

  2. Environmental Chemicals in Urine and Blood: Improving Methods for Creatinine and Lipid Adjustment

    PubMed Central

    O’Brien, Katie M.; Upson, Kristen; Cook, Nancy R.; Weinberg, Clarice R.

    2015-01-01

    Background Investigators measuring exposure biomarkers in urine typically adjust for creatinine to account for dilution-dependent sample variation in urine concentrations. Similarly, it is standard to adjust for serum lipids when measuring lipophilic chemicals in serum. However, there is controversy regarding the best approach, and existing methods may not effectively correct for measurement error. Objectives We compared adjustment methods, including novel approaches, using simulated case–control data. Methods Using a directed acyclic graph framework, we defined six causal scenarios for epidemiologic studies of environmental chemicals measured in urine or serum. The scenarios include variables known to influence creatinine (e.g., age and hydration) or serum lipid levels (e.g., body mass index and recent fat intake). Over a range of true effect sizes, we analyzed each scenario using seven adjustment approaches and estimated the corresponding bias and confidence interval coverage across 1,000 simulated studies. Results For urinary biomarker measurements, our novel method, which incorporates both covariate-adjusted standardization and the inclusion of creatinine as a covariate in the regression model, had low bias and possessed 95% confidence interval coverage of nearly 95% for most simulated scenarios. For serum biomarker measurements, a similar approach involving standardization plus serum lipid level adjustment generally performed well. Conclusions To control measurement error bias caused by variations in serum lipids or by urinary diluteness, we recommend improved methods for standardizing exposure concentrations across individuals. Citation O’Brien KM, Upson K, Cook NR, Weinberg CR. 2016. Environmental chemicals in urine and blood: improving methods for creatinine and lipid adjustment. Environ Health Perspect 124:220–227; http://dx.doi.org/10.1289/ehp.1509693 PMID:26219104
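
    One way to read the recommended approach for urinary biomarkers is: model creatinine as a function of dilution-related covariates, standardize each urinary concentration by the ratio of observed to predicted creatinine, and still keep creatinine as a covariate in the outcome regression. The sketch below is an interpretation of that idea using simulated data and assumed variable names; it is not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 500

        # Simulated covariates that drive urine dilution (age, hydration proxy) and creatinine
        age = rng.uniform(20, 70, n)
        hydration = rng.normal(0, 1, n)
        log_creat = 0.01 * age - 0.3 * hydration + rng.normal(0, 0.2, n)
        creatinine = np.exp(log_creat)
        biomarker = np.exp(rng.normal(0, 0.5, n)) * creatinine   # measured concentration tracks dilution

        # Step 1: predict creatinine from the covariates (simple linear model on the log scale)
        X = np.column_stack([np.ones(n), age, hydration])
        beta, *_ = np.linalg.lstsq(X, log_creat, rcond=None)
        creat_pred = np.exp(X @ beta)

        # Step 2: covariate-adjusted standardization of the exposure
        biomarker_std = biomarker / (creatinine / creat_pred)

        # Step 3: the outcome model would then include both biomarker_std and creatinine
        # as regressors (outcome regression omitted here).
        print(f"mean standardized exposure: {biomarker_std.mean():.2f}")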

  3. Bayesian adjustment for measurement error in continuous exposures in an individually matched case-control study.

    PubMed

    Espino-Hernandez, Gabriela; Gustafson, Paul; Burstyn, Igor

    2011-05-14

    In epidemiological studies, explanatory variables are frequently subject to measurement error. The aim of this paper is to develop a Bayesian method to correct for measurement error in multiple continuous exposures in individually matched case-control studies. This is a topic that has not been widely investigated. The new method is illustrated using data from an individually matched case-control study of the association between thyroid hormone levels during pregnancy and exposure to perfluorinated acids. The objective of the motivating study was to examine the risk of maternal hypothyroxinemia due to exposure to three perfluorinated acids measured on a continuous scale. Results from the proposed method are compared with those obtained from a naive analysis. Using a Bayesian approach, the developed method considers a classical measurement error model for the exposures, as well as the conditional logistic regression likelihood as the disease model, together with a random-effect exposure model. Proper and diffuse prior distributions are assigned, and results from a quality control experiment are used to estimate the perfluorinated acids' measurement error variability. As a result, posterior distributions and 95% credible intervals of the odds ratios are computed. A sensitivity analysis of the method's performance in this particular application with different measurement error variability was performed. The proposed Bayesian method to correct for measurement error is feasible and can be implemented using statistical software. For the study on perfluorinated acids, a comparison of the inferences which are corrected for measurement error to those which ignore it indicates that little adjustment is manifested for the level of measurement error actually exhibited in the exposures. Nevertheless, a sensitivity analysis shows that more substantial adjustments arise if larger measurement errors are assumed. In individually matched case-control studies, the use of the conditional logistic regression likelihood as a disease model in the presence of measurement error in multiple continuous exposures can be justified by having a random-effect exposure model. The proposed method can be successfully implemented in WinBUGS to correct individually matched case-control studies for several mismeasured continuous exposures under a classical measurement error model.

  4. Concerning the Video Drift Method to Measure Double Stars

    NASA Astrophysics Data System (ADS)

    Nugent, Richard L.; Iverson, Ernest W.

    2015-05-01

    Classical methods to measure position angles and separations of double stars rely on just a few measurements, either from visual observations or photographic means. Visual and photographic CCD observations are subject to errors from the following sources: misalignments from the eyepiece/camera/Barlow lens/micrometer/focal reducers, systematic errors from uncorrected optical distortions, aberrations from the telescope system, camera tilt, and magnitude and color effects. Conventional video methods rely on calibration doubles and on graphically determining the east-west direction, plus careful choice of select video frames stacked for measurement. Atmospheric motion is one of the larger sources of error in any exposure/measurement method, on the order of 0.5-1.5. Ideally, if a data set from a short video can be used to derive position angle and separation, with each data set self-calibrating independently of any calibration doubles or star catalogues, this would provide measurements of high systematic accuracy. These aims are achieved by the video drift method first proposed by the authors in 2011. This self-calibrating video method automatically analyzes thousands of measurements from a short video clip.
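
    The self-calibration idea can be sketched briefly: with the telescope drive off, a star drifts at the sidereal rate of 15.0411 arcseconds of right ascension per second of time, scaled by cos(declination), so fitting the drift track fixes both the plate scale and the east-west direction, after which separation and position angle follow from the two stars' pixel coordinates. The sketch below illustrates that geometry and is not the authors' software; the camera-parity sign of the north vector is an assumption.

```python
import numpy as np

SIDEREAL_RATE = 15.0411  # arcsec of right ascension per second of time

def calibrate_from_drift(times_s, xs, ys, dec_deg):
    """Fit the drift track of one star to get the plate scale (arcsec/pixel)
    and the east-west direction on the detector."""
    vx = np.polyfit(times_s, xs, 1)[0]          # pixels / s
    vy = np.polyfit(times_s, ys, 1)[0]
    drift_pix_per_s = np.hypot(vx, vy)
    drift_arcsec_per_s = SIDEREAL_RATE * np.cos(np.radians(dec_deg))
    scale = drift_arcsec_per_s / drift_pix_per_s   # arcsec / pixel
    east = np.array([vx, vy]) / drift_pix_per_s    # unit vector toward increasing RA
    return scale, east

def separation_and_pa(primary_xy, secondary_xy, scale, east):
    """Separation (arcsec) and position angle (deg, north through east) of the pair."""
    d = np.asarray(secondary_xy, float) - np.asarray(primary_xy, float)
    sep = scale * np.hypot(*d)
    north = np.array([east[1], -east[0]])  # 90 deg from east; sign depends on camera parity (assumed here)
    pa = np.degrees(np.arctan2(np.dot(d, east), np.dot(d, north))) % 360.0
    return sep, pa
```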

  5. MR diffusion-weighted imaging-based subcutaneous tumour volumetry in a xenografted nude mouse model using 3D Slicer: an accurate and repeatable method

    PubMed Central

    Ma, Zelan; Chen, Xin; Huang, Yanqi; He, Lan; Liang, Cuishan; Liang, Changhong; Liu, Zaiyi

    2015-01-01

    Accurate and repeatable measurement of the gross tumour volume (GTV) of subcutaneous xenografts is crucial in the evaluation of anti-tumour therapy. Formula and image-based manual segmentation methods are commonly used for GTV measurement but are hindered by low accuracy and reproducibility. 3D Slicer is open-source software that provides semiautomatic segmentation for GTV measurements. In our study, subcutaneous GTVs from nude mouse xenografts were measured by semiautomatic segmentation with 3D Slicer based on morphological magnetic resonance imaging (mMRI) or diffusion-weighted imaging (DWI) (b = 0, 20, 800 s/mm2). These GTVs were then compared with those obtained via the formula and image-based manual segmentation methods with ITK software, using the true tumour volume as the standard reference. The effects of tumour size and shape on GTV measurements were also investigated. Our results showed that, when compared with the true tumour volume, segmentation for DWI (P = 0.060–0.671) resulted in better accuracy than that of mMRI (P < 0.001) and the formula method (P < 0.001). Furthermore, semiautomatic segmentation for DWI (intraclass correlation coefficient, ICC = 0.9999) resulted in higher reliability than manual segmentation (ICC = 0.9996–0.9998). Tumour size and shape had no effect on GTV measurement across all methods. Therefore, DWI-based semiautomatic segmentation, which is accurate and reproducible and also provides biological information, is the optimal GTV measurement method in the assessment of anti-tumour treatments. PMID:26489359

  6. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    NASA Astrophysics Data System (ADS)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%), but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream measurements.

  7. Spectroscopic vector analysis for fast pattern quality monitoring

    NASA Astrophysics Data System (ADS)

    Sohn, Younghoon; Ryu, Sungyoon; Lee, Chihoon; Yang, Yusin

    2018-03-01

    In the semiconductor industry, fast and effective measurement of pattern variation has been a key challenge for assuring mass-production quality. Pattern measurement techniques such as conventional CD-SEM or optical CD have been extensively used, but these techniques are increasingly limited in terms of measurement throughput and the time spent in modeling. In this paper we propose a time-effective pattern monitoring method based on a direct spectrum-based approach. In this technique, a wavelength band sensitive to a specific pattern change is selected from the spectroscopic ellipsometry signal scattered by the pattern to be measured, and the amplitude and phase variation in that wavelength band are analyzed as a measurement index of the pattern change. This pattern-change measurement technique was applied to several process steps and its applicability verified. Owing to its fast and simple analysis, the method can be adapted to massive process-variation monitoring, maximizing measurement throughput.

  8. Solving the Capacitive Effect in the High-Frequency sweep for Langmuir Probe in SYMPLE

    NASA Astrophysics Data System (ADS)

    Pramila; Patel, J. J.; Rajpal, R.; Hansalia, C. J.; Anitha, V. P.; Sathyanarayana, K.

    2017-04-01

    Langmuir probe based measurements need to be routinely carried out to measure various plasma parameters such as the electron density (ne), the electron temperature (Te), the floating potential (Vf), and the plasma potential (Vp). For this, the diagnostic electronics, along with the biasing power supplies, are installed in standard industrial racks with a 2 kV isolation transformer. The Signal Conditioning Electronics (SCE) system is housed in a 4U-chassis-based system with front-end electronics designed around high-common-mode differential amplifiers, which can measure a small differential signal in the presence of the high common-mode DC bias or AC ramp voltage used for biasing the probes. DC biasing of the probe is the most common method for obtaining its I-V characteristic, but biasing the probe with a high-frequency sweep encounters the problem of signal corruption due to the capacitive effect, especially when the sweep period and the discharge time are very short, dying down on the order of microseconds or less. This paper presents and summarises the method of removing such effects encountered while measuring the probe current.
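
    The abstract does not spell out the correction itself; one common approach, shown here purely as an assumption, is to record a reference sweep with no plasma, estimate the stray capacitance from the displacement current C·dV/dt, and subtract that component from the plasma shot.

```python
import numpy as np

def remove_capacitive_current(t, v_sweep, i_measured, i_vacuum):
    """Estimate the stray capacitance from a no-plasma (vacuum) sweep, where the
    measured current is purely displacement current C*dV/dt, and subtract that
    component from the plasma shot."""
    dv_dt = np.gradient(v_sweep, t)
    # least-squares estimate of C from the vacuum shot: i_vacuum ~ C * dV/dt
    c_stray = np.dot(dv_dt, i_vacuum) / np.dot(dv_dt, dv_dt)
    return i_measured - c_stray * dv_dt, c_stray
```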

  9. Diffraction grating strain gauge method: error analysis and its application for the residual stress measurement in thermal barrier coatings

    NASA Astrophysics Data System (ADS)

    Yin, Yuanjie; Fan, Bozhao; He, Wei; Dai, Xianglu; Guo, Baoqiao; Xie, Huimin

    2018-03-01

    Diffraction grating strain gauge (DGSG) is an optical strain measurement method. Based on this method, a six-spot diffraction grating strain gauge (S-DGSG) system has been developed, with the advantages of high and adjustable sensitivity, compact structure, and non-contact measurement. In this study, the system is applied to the measurement of residual stress in thermal barrier coatings (TBCs) in combination with the hole-drilling method. During the experiment, the specimen's location is supposed to be reset accurately before and after hole-drilling; however, it is found that the rigid body displacements introduced by the resetting process can seriously influence the measurement accuracy. In order to understand and eliminate the effects of the rigid body displacements, such as the three-dimensional (3D) rotations and the out-of-plane displacement of the grating, the measurement error of the system is systematically analyzed and an optimized method is proposed. Moreover, a numerical experiment and a verification tensile test are conducted, and the results successfully verify the applicability of the optimized method. Finally, using this optimized method, a residual stress measurement experiment is conducted, and the results show that the method can be applied to measure the residual stress in TBCs.

  10. Food, Fun and Fitness Internet program for girls: influencing log-on rate

    USDA-ARS?s Scientific Manuscript database

    Internet-based interventions hold promise as an effective channel for reaching large numbers of youth. However, log-on rates, a measure of program dose, have been highly variable. Methods to enhance log-on rate are needed. Incentives may be an effective method. This paper reports the effect of reinf...

  11. Thermal conductivity of catalyst layer of polymer electrolyte membrane fuel cells: Part 1 - Experimental study

    NASA Astrophysics Data System (ADS)

    Ahadi, Mohammad; Tam, Mickey; Saha, Madhu S.; Stumper, Jürgen; Bahrami, Majid

    2017-06-01

    In this work, a new methodology is proposed for measuring the through-plane thermal conductivity of catalyst layers (CLs) in polymer electrolyte membrane fuel cells. The proposed methodology is based on deconvolution of the bulk thermal conductivity of a CL from measurements of two thicknesses of the CL, where the CLs are sandwiched in a stack made of two catalyst-coated substrates. Effects of hot-pressing, compression, measurement method, and substrate on the through-plane thermal conductivity of the CL are studied. For this purpose, different thicknesses of catalyst are coated on ethylene tetrafluoroethylene (ETFE) and aluminum (Al) substrates by a conventional Mayer bar coater and measured by scanning electron microscopy (SEM). The through-plane thermal conductivity of the CLs is measured by the well-known guarded heat flow (GHF) method as well as a recently developed transient plane source (TPS) method for thin films which modifies the original TPS thin film method. Measurements show that none of the studied factors has any effect on the through-plane thermal conductivity of the CL. GHF measurements of a non-hot-pressed CL on Al yield a thermal conductivity of 0.214 ± 0.005 W·m⁻¹·K⁻¹, and TPS measurements of a hot-pressed CL on ETFE yield a thermal conductivity of 0.218 ± 0.005 W·m⁻¹·K⁻¹.
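
    The two-thickness deconvolution reduces to a simple difference: substrate and contact resistances are common to both stacks and cancel, leaving k = Δt/ΔR. A minimal sketch, with purely illustrative numbers rather than the paper's data:

```python
def cl_thermal_conductivity(t1_m, r1_m2K_per_W, t2_m, r2_m2K_per_W):
    """Deconvolve the CL bulk conductivity from stacks with two CL thicknesses:
    substrate and contact resistances cancel in the difference, so
    k = (t2 - t1) / (R2 - R1), with R the area-specific thermal resistance."""
    return (t2_m - t1_m) / (r2_m2K_per_W - r1_m2K_per_W)

# Illustrative numbers only (not the paper's data):
k = cl_thermal_conductivity(10e-6, 1.0e-4, 30e-6, 1.8e-4)
print(f"k_CL ~ {k:.3f} W/(m K)")
```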

  12. Establishing System Measures of Effectiveness

    DTIC Science & Technology

    2001-03-01

    Halpin, 1991] Andriole, Stephen J. and Stanley M. Halpin, editors. Information Technology for Command and Control: Methods and Tools for Systems...Systems with Models and Objects, New York: McGraw-Hill, 1997. [Pawlowski, 1993a] Pawlowski, Thomas J. III, LTC. C3IEW Measures of Effectiveness

  13. A method for measuring sulfide toxicity in the nematode Caenorhabditis elegans.

    PubMed

    Livshits, Leonid; Gross, Einav

    2017-01-01

    Cysteine catabolism by gut microbiota produces high levels of sulfide. Excessive sulfide can interfere with colon function, and therefore may be involved in the etiology and risk of relapse of ulcerative colitis, an inflammatory bowel disease affecting millions of people worldwide. Therefore, it is crucial to understand how cells/animals regulate the detoxification of sulfide generated by bacterial cysteine catabolism in the gut. Here we describe a simple and cost-effective way to explore the mechanism of sulfide toxicity in the nematode Caenorhabditis elegans (C. elegans). • A rapid, cost-effective method to quantify and study sulfide tolerance in C. elegans and other free-living nematodes. • A cost-effective method to measure the concentration of sulfide in the inverted plate assay.

  14. Combustor kinetic energy efficiency analysis of the hypersonic research engine data

    NASA Astrophysics Data System (ADS)

    Hoose, K. V.

    1993-11-01

    A one-dimensional method for measuring combustor performance is needed to facilitate the design and development of scramjet engines. A one-dimensional kinetic energy efficiency method is used for measuring inlet and nozzle performance. The objective of this investigation was to assess the use of kinetic energy efficiency as an indicator of scramjet combustor performance. A combustor kinetic energy efficiency analysis was performed on the Hypersonic Research Engine (HRE) data. The HRE data were chosen for this analysis due to their thorough documentation and availability. The combustor, inlet, and nozzle kinetic energy efficiency values were utilized to determine an overall engine kinetic energy efficiency. Finally, a kinetic energy effectiveness method was developed to eliminate thermochemical losses from the combustion of fuel and air. All calculated values exhibit consistency over the flight speed range. Effects from fuel injection, altitude, angle of attack, subsonic-supersonic combustion transition, and inlet spike position are shown and discussed. The results of analyzing the HRE data indicate that the kinetic energy efficiency method is effective as a measure of scramjet combustor performance.

  15. A temperature compensation methodology for piezoelectric based sensor devices

    NASA Astrophysics Data System (ADS)

    Wang, Dong F.; Lou, Xueqiao; Bao, Aijian; Yang, Xu; Zhao, Ji

    2017-08-01

    A temperature compensation methodology that pairs a negative temperature coefficient (NTC) thermistor with the temperature characteristics of a piezoelectric material is proposed to improve the measurement accuracy of piezoelectric-sensing-based devices. The piezoelectric element is characterized using a disk-shaped structure, which is also used to verify the effectiveness of the proposed compensation method. With the proposed temperature compensation method, the measured output voltage shows a nearly linear relationship with the applied pressure over a temperature range of 25-65 °C. As a result, the maximum measurement accuracy is improved by 40%, and the higher the temperature, the more effective the method. The effective temperature range of the proposed method is theoretically analyzed by introducing the thermistor constant (B), the resistance at the initial temperature (R0), and the parallel resistance (Rx). The proposed methodology not only eliminates the influence of the piezoelectric material's temperature-dependent characteristics on the sensing accuracy but also decreases the power consumption of piezoelectric-sensing-based devices through the simplified sensing structure.
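
    The quantities named in the abstract (B, R0, Rx) fit the standard B-parameter NTC model, and the parallel combination with Rx is what shapes the effective compensation range. A minimal sketch under that assumption, with illustrative component values:

```python
import numpy as np

def thermistor_resistance(temp_c, r0_ohm, b_k, t0_c=25.0):
    """Standard B-parameter NTC model: R(T) = R0 * exp(B * (1/T - 1/T0)), T in kelvin."""
    t = temp_c + 273.15
    t0 = t0_c + 273.15
    return r0_ohm * np.exp(b_k * (1.0 / t - 1.0 / t0))

def compensated_resistance(temp_c, r0_ohm, b_k, rx_ohm):
    """Parallel combination of the NTC thermistor and a fixed resistor Rx, which
    flattens the temperature dependence over a limited range."""
    r_t = thermistor_resistance(temp_c, r0_ohm, b_k)
    return r_t * rx_ohm / (r_t + rx_ohm)

# Illustrative values over the 25-65 degC range discussed in the abstract.
for temp in (25, 45, 65):
    print(temp, round(compensated_resistance(temp, r0_ohm=10e3, b_k=3950, rx_ohm=10e3), 1))
```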

  16. Wavelet-based functional linear mixed models: an application to measurement error-corrected distributed lag models.

    PubMed

    Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A

    2010-07-01

    Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.

  17. Experiment and numerical simulation for laser ultrasonic measurement of residual stress.

    PubMed

    Zhan, Yu; Liu, Changsheng; Kong, Xiangwei; Lin, Zhongya

    2017-01-01

    Laser ultrasonics is one of the most promising methods for non-destructive evaluation of residual stress. The residual stress of a thin steel plate was measured by the laser ultrasonic technique. A pre-stress loading device was designed that allows the specimen to be tested by laser ultrasonics while in a known stress state. By means of pre-stress loading, the acoustoelastic constants were obtained, and the effect of different test directions on the surface wave velocity measurement is discussed. On the basis of the known acoustoelastic constants, the longitudinal and transverse welding residual stresses were measured by the laser ultrasonic technique. The finite element method was used to simulate the process of surface wave detection of welding residual stress. The pulsed laser is treated as an equivalent surface load, and the relationship between the physical parameters of the laser and the load is established by a correction coefficient. The welding residual stress of the specimen is applied in the simulation through the ABAQUS predefined field module. The results of the finite element analysis are in good agreement with the experimental results. Simple and effective numerical and experimental methods for laser ultrasonic measurement of residual stress are thus demonstrated. Copyright © 2016. Published by Elsevier B.V.
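
    The pre-stress calibration logic can be sketched compactly: acoustoelasticity gives an approximately linear relation between relative velocity change and stress, so a linear fit to the calibration data yields the acoustoelastic constant, which then converts measured velocity changes into residual stress. The function and variable names below are illustrative, not the authors' code.

```python
import numpy as np

def fit_acoustoelastic_constant(applied_stress_mpa, surface_wave_velocity):
    """Fit (v - v0)/v0 = K * sigma from the pre-stressed calibration specimen."""
    applied_stress_mpa = np.asarray(applied_stress_mpa, float)
    surface_wave_velocity = np.asarray(surface_wave_velocity, float)
    v0 = surface_wave_velocity[np.argmin(np.abs(applied_stress_mpa))]  # zero-load velocity
    rel_dv = (surface_wave_velocity - v0) / v0
    k = np.polyfit(applied_stress_mpa, rel_dv, 1)[0]   # acoustoelastic constant, 1/MPa
    return k, v0

def residual_stress(measured_velocity, k, v0):
    """Invert the acoustoelastic relation for the unknown residual stress (MPa)."""
    return (measured_velocity - v0) / (v0 * k)
```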

  18. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to 99.3% with 3%/3 mm and from 79.2% to 95.2% with 2%/2 mm when compared with the CC13 beam model. These results show the effectiveness of the proposed method. Less inter-user variability can be expected of the final beam model. It is also found that the method can be easily integrated into model-based TPS.
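
    A toy illustration of the convolution idea, under simplifying assumptions (an error-function penumbra model and a uniform 1D averaging kernel standing in for the chamber response, neither taken from the paper): both the "measured" and the modeled profiles are convolved with the same kernel, and the penumbra parameter is reoptimized until they match.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import minimize_scalar

x = np.linspace(-60.0, 60.0, 1201)          # off-axis position, mm

def ideal_profile(x, field_half_width=50.0, penumbra_sigma=2.0):
    """Toy open-field profile with error-function penumbrae."""
    return 0.5 * (erf((x + field_half_width) / penumbra_sigma)
                  - erf((x - field_half_width) / penumbra_sigma))

def chamber_response(x, cavity_radius=3.0):
    """Assumed 1D detector response: uniform averaging over the cavity diameter."""
    kernel = (np.abs(x) <= cavity_radius).astype(float)
    return kernel / kernel.sum()

kernel = chamber_response(x)
# stand-in for chamber data: a "true" profile already blurred by the detector response
measured = np.convolve(ideal_profile(x, penumbra_sigma=2.0), kernel, mode="same")

def cost(sigma):
    modeled = np.convolve(ideal_profile(x, penumbra_sigma=sigma), kernel, mode="same")
    return np.sum((modeled - measured) ** 2)

best = minimize_scalar(cost, bounds=(0.5, 8.0), method="bounded")
print("recovered penumbra parameter:", round(best.x, 2), "mm")
```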

  19. Evaluation of Specific Absorption Rate as a Dosimetric Quantity for Electromagnetic Fields Bioeffects

    PubMed Central

    Panagopoulos, Dimitris J.; Johansson, Olle; Carlo, George L.

    2013-01-01

    Purpose: To evaluate SAR as a dosimetric quantity for EMF bioeffects, and identify ways for increasing the precision in EMF dosimetry and bioactivity assessment. Methods: We discuss the interaction of man-made electromagnetic waves with biological matter and calculate the energy transferred to a single free ion within a cell. We analyze the physics and biology of SAR and evaluate the methods of its estimation. We discuss the experimentally observed non-linearity between electromagnetic exposure and biological effect. Results: We find that: a) The energy absorbed by living matter during exposure to environmentally accounted EMFs is normally well below the thermal level. b) All existing methods for SAR estimation, especially those based upon tissue conductivity and internal electric field, have serious deficiencies. c) The only method to estimate SAR without large error is by measuring temperature increases within biological tissue, which normally are negligible for environmental EMF intensities, and thus cannot be measured. Conclusions: SAR actually refers to thermal effects, while the vast majority of the recorded biological effects from man-made non-ionizing environmental radiation are non-thermal. Even if SAR could be accurately estimated for a whole tissue, organ, or body, the biological/health effect is determined by tiny amounts of energy/power absorbed by specific biomolecules, which cannot be calculated. Moreover, it depends upon field parameters not taken into account in SAR calculation. Thus, SAR should not be used as the primary dosimetric quantity, but used only as a complementary measure, always reporting the estimating method and the corresponding error. Radiation/field intensity along with additional physical parameters (such as frequency, modulation etc) which can be directly and in any case more accurately measured on the surface of biological tissues, should constitute the primary measure for EMF exposures, in spite of similar uncertainty to predict the biological effect due to non-linearity. PMID:23750202

  20. Concerns regarding 24-h sampling for formaldehyde, acetaldehyde, and acrolein using 2,4-dinitrophenylhydrazine (DNPH)-coated solid sorbents

    NASA Astrophysics Data System (ADS)

    Herrington, Jason S.; Hays, Michael D.

    2012-08-01

    There is high demand for accurate and reliable airborne carbonyl measurement methods due to the human and environmental health impacts of carbonyls and their effects on atmospheric chemistry. Standardized 2,4-dinitrophenylhydrazine (DNPH)-based sampling methods are frequently applied for measuring gaseous carbonyls in the atmospheric environment. However, there are multiple short-comings associated with these methods that detract from an accurate understanding of carbonyl-related exposure, health effects, and atmospheric chemistry. The purpose of this brief technical communication is to highlight these method challenges and their influence on national ambient monitoring networks, and to provide a logical path forward for accurate carbonyl measurement. This manuscript focuses on three specific carbonyl compounds of high toxicological interest—formaldehyde, acetaldehyde, and acrolein. Further method testing and development, the revision of standardized methods, and the plausibility of introducing novel technology for these carbonyls are considered elements of the path forward. The consolidation of this information is important because it seems clear that carbonyl data produced utilizing DNPH-based methods are being reported without acknowledgment of the method short-comings or how to best address them.

  1. A rapid and non-invasive method for measuring the peak positive pressure of HIFU fields by a laser beam.

    PubMed

    Wang, Hua; Zeng, Deping; Chen, Ziguang; Yang, Zengtao

    2017-04-12

    Based on the acousto-optic interaction, we propose a laser deflection method for rapidly, non-invasively and quantitatively measuring the peak positive pressure of HIFU fields. In the characterization of HIFU fields, the effect of nonlinear propagation is considered. The relation between the laser deflection length and the peak positive pressure is derived. The laser deflection method is then assessed by comparing it with the hydrophone method. The experimental results show that the peak positive pressure measured by the laser deflection method is slightly higher than that obtained by the hydrophone, and the two are in reasonable agreement. Considering that the peak pressure measured by hydrophones is always underestimated, the laser deflection method is assumed to be more accurate than the hydrophone method due to the absence of errors from hydrophone spatial averaging and from the influence of waveform distortion on hydrophone corrections. Moreover, noting that the Lorentz formula remains applicable in high-pressure environments, the laser deflection method exhibits great potential for measuring HIFU fields at high pressure amplitudes. Additionally, the laser deflection method provides a rapid way to measure the peak positive pressure without the scanning time required by hydrophones.

  2. Exploring expert opinion on the practicality and effectiveness of biosecurity measures on dairy farms in the United Kingdom using choice modeling.

    PubMed

    Shortall, Orla; Green, Martin; Brennan, Marnie; Wapenaar, Wendela; Kaler, Jasmeet

    2017-03-01

    Biosecurity, defined as a series of measures aiming to stop disease-causing agents entering or leaving an area where farm animals are present, is very important for the continuing economic viability of the United Kingdom dairy sector, and for animal welfare. This study gathered expert opinion from farmers, veterinarians, consultants, academics, and government and industry representatives on the practicality and effectiveness of different biosecurity measures on dairy farms. The study used best-worst scaling, a technique that allows for greater discrimination between choices and avoids the variability in interpretation associated with other methods, such as Likert scales and ranking methods. Keeping a closed herd was rated as the most effective measure overall, and maintaining regular contact with the veterinarian was the most practical measure. Measures relating to knowledge, planning, and veterinary involvement; buying-in practices; and quarantine and treatment scored highly for effectiveness overall. Measures relating to visitors, equipment, pest control, and hygiene scored much lower for effectiveness. Overall, measures relating to direct animal-to-animal contact scored much higher for effectiveness than measures relating to indirect disease transmission. Some of the most effective measures were also rated as the least practical, such as keeping a closed herd and avoiding nose-to-nose contact between contiguous animals, suggesting that real barriers exist for farmers when implementing biosecurity measures on dairy farms. We observed heterogeneity in expert opinion on biosecurity measures; for example, veterinarians rated the effectiveness of consulting the veterinarian on biosecurity significantly more highly than dairy farmers, suggesting a greater need for veterinarians to promote their services on-farm. Still, both groups rated it as a practical measure, suggesting that the farmer-veterinarian relationship holds some advantages for the promotion of biosecurity. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  3. Link prediction measures considering different neighbors’ effects and application in social networks

    NASA Astrophysics Data System (ADS)

    Luo, Peng; Wu, Chong; Li, Yongli

    Link prediction measures have attracted particular attention in the field of mathematical physics. In this paper, we consider the different effects of neighbors in link prediction and focus on four situations: considering only the individual's own effects; considering the effects of the individual, neighbors, and neighbors' neighbors; considering the effects of the individual and neighbors up to four hops away; and considering the effects of all network participants. According to these four situations, we present link prediction models that also take the effects of social characteristics into consideration. An artificial network is adopted to illustrate the parameter estimation based on logistic regression. Furthermore, we compare our methods with some other link prediction methods (LPMs) to examine the validity of the proposed models in online social networks; the results show the superiority of our link prediction methods over the others. In the application part, our models are applied to study social network evolution and to recommend friends and cooperators in social networks.
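
    As an illustration of the general recipe (neighborhood-depth features fitted by logistic regression), a small sketch follows; the specific features are standard stand-ins, not the authors' exact model.

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(G, u, v):
    """Illustrative features at increasing neighborhood depth:
    node-level effects, shared neighbors, and overlap of two-hop neighborhoods."""
    cn = len(list(nx.common_neighbors(G, u, v)))
    deg_prod = G.degree(u) * G.degree(v)
    two_hop = len(set(nx.single_source_shortest_path_length(G, u, cutoff=2))
                  & set(nx.single_source_shortest_path_length(G, v, cutoff=2)))
    return [deg_prod, cn, two_hop]

def fit_link_model(G, labeled_pairs):
    """labeled_pairs: iterable of ((u, v), 0/1) indicating whether the edge later appears."""
    X = np.array([pair_features(G, u, v) for (u, v), _ in labeled_pairs])
    y = np.array([label for _, label in labeled_pairs])
    return LogisticRegression(max_iter=1000).fit(X, y)
```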

  4. 42 CFR 67.101 - Purpose and scope.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) Section 1142 of the Social Security Act to support research on the outcomes, effectiveness, and... services and procedures; projects to improve methods and data bases for outcomes and effectiveness research..., performance measures, and review criteria; conferences; and research on dissemination methods. (b) The...

  5. Spectral method for the correction of the Cerenkov light effect in plastic scintillation detectors: A comparison study of calibration procedures and validation in Cerenkov light-dominated situations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guillot, Mathieu; Gingras, Luc; Archambault, Louis

    2011-04-15

    Purpose: The purposes of this work were: (1) To determine if a spectral method can accurately correct the Cerenkov light effect in plastic scintillation detectors (PSDs) for situations where the Cerenkov light is dominant over the scintillation light and (2) to develop a procedural guideline for accurately determining the calibration factors of PSDs. Methods: The authors demonstrate, by using the equations of the spectral method, that the condition for accurately correcting the effect of Cerenkov light is that the ratio of the two calibration factors must be equal to the ratio of the Cerenkov light measured within the two different spectral regions used for analysis. Based on this proof, the authors propose two new procedures to determine the calibration factors of PSDs, which were designed to respect this condition. A PSD that consists of a cylindrical polystyrene scintillating fiber (1.6 mm³) coupled to a plastic optical fiber was calibrated by using these new procedures and the two reference procedures described in the literature. To validate the extracted calibration factors, relative dose profiles and output factors for a 6 MV photon beam from a medical linac were measured with the PSD and an ionization chamber. Emphasis was placed on situations where the Cerenkov light is dominant over the scintillation light and on situations dissimilar to the calibration conditions. Results: The authors found that the accuracy of the spectral method depends on the procedure used to determine the calibration factors of the PSD and on the attenuation properties of the optical fiber used. The results from the relative dose profile measurements showed that the spectral method can correct the Cerenkov light effect with an accuracy level of 1%. The results obtained also indicate that PSDs measure output factors that are lower than those measured with ionization chambers for square field sizes larger than 25×25 cm², in general agreement with previously published Monte Carlo results. Conclusions: The authors conclude that the spectral method can be used to accurately correct the Cerenkov light effect in PSDs. The authors confirmed the importance of maximizing the difference of Cerenkov light production between calibration measurements. The authors also found that the attenuation of the optical fiber, which is assumed to be constant in the original formulation of the spectral method, may cause a variation of the calibration factors in some experimental setups.
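
    The two-band linear form commonly used for the spectral technique can be sketched as follows: dose is a linear combination of the signals in two spectral windows, with the two calibration factors obtained from two irradiations having deliberately different Cerenkov fractions. This is only an illustration of the relation discussed above, not the authors' implementation, and the numbers are made up.

```python
import numpy as np

def calibrate_spectral_method(q_cal, d_cal):
    """Solve D = a*q1 + b*q2 for (a, b) from two calibration irradiations with known
    doses and deliberately different Cerenkov contributions.
    q_cal: 2x2 array [[q1, q2], [q1, q2]]; d_cal: the two known doses."""
    return np.linalg.solve(np.asarray(q_cal, float), np.asarray(d_cal, float))

def dose(q1, q2, a, b):
    return a * q1 + b * q2

# Made-up calibration signals and doses, purely to show the arithmetic.
a, b = calibrate_spectral_method([[1.00, 0.40], [0.55, 0.50]], [1.0, 0.5])
print(dose(0.8, 0.45, a, b))
```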

  6. High-precision diode-laser-based temperature measurement for air refractive index compensation.

    PubMed

    Hieta, Tuomas; Merimaa, Mikko; Vainio, Markku; Seppä, Jeremias; Lassila, Antti

    2011-11-01

    We present a laser-based system to measure the refractive index of air over a long path length. In optical distance measurements, it is essential to know the refractive index of air with high accuracy. Commonly, the refractive index of air is calculated from the properties of the ambient air using either Ciddor or Edlén equations, where the dominant uncertainty component is in most cases the air temperature. The method developed in this work utilizes direct absorption spectroscopy of oxygen to measure the average temperature of air and of water vapor to measure relative humidity. The method allows measurement of temperature and humidity over the same beam path as in optical distance measurement, providing spatially well-matching data. Indoor and outdoor measurements demonstrate the effectiveness of the method. In particular, we demonstrate an effective compensation of the refractive index of air in an interferometric length measurement at a time-variant and spatially nonhomogeneous temperature over a long time period. Further, we were able to demonstrate 7 mK RMS noise over a 67 m path length using a 120 s sample time. To our knowledge, this is the best temperature precision reported for a spectroscopic temperature measurement. © 2011 Optical Society of America

  7. Non-invasive glucose measurement technologies: an update from 1999 to the dawn of the new millennium.

    PubMed

    Khalil, Omar S

    2004-10-01

    There are three main issues in non-invasive (NI) glucose measurements: namely, specificity, compartmentalization of glucose values, and calibration. There has been progress in the use of near-infrared and mid-infrared spectroscopy. Recently new glucose measurement methods have been developed, exploiting the effect of glucose on erythrocyte scattering, new photoacoustic phenomenon, optical coherence tomography, thermo-optical studies on human skin, Raman spectroscopy studies, fluorescence measurements, and use of photonic crystals. In addition to optical methods, in vivo electrical impedance results have been reported. Some of these methods measure intrinsic properties of glucose; others deal with its effect on tissue or blood properties. Recent studies on skin from individuals with diabetes and its response to stimuli, skin thermo-optical response, peripheral blood flow, and red blood cell rheology in diabetes shed new light on physical and physiological changes resulting from the disease that can affect NI glucose measurements. There have been advances in understanding compartmentalization of glucose values by targeting certain regions of human tissue. Calibration of NI measurements and devices is still an open question. More studies are needed to understand the specific glucose signals and signals that are due to the effect of glucose on blood and tissue properties. These studies should be performed under normal physiological conditions and in the presence of other co-morbidities.

  8. A method to measure internal stray radiation of cryogenic infrared imaging systems under various ambient temperatures

    NASA Astrophysics Data System (ADS)

    Tian, Qijie; Chang, Songtao; Li, Zhou; He, Fengyun; Qiao, Yanfeng

    2017-03-01

    The suppression level of internal stray radiation is a key criterion for infrared imaging systems, especially for high-precision cryogenic infrared imaging systems. To achieve accurate measurement for internal stray radiation of cryogenic infrared imaging systems under various ambient temperatures, a measurement method, which is based on radiometric calibration, is presented in this paper. First of all, the calibration formula is deduced considering the integration time, and the effect of ambient temperature on internal stray radiation is further analyzed in detail. Then, an approach is proposed to measure the internal stray radiation of cryogenic infrared imaging systems under various ambient temperatures. By calibrating the system under two ambient temperatures, the quantitative relation between the internal stray radiation and the ambient temperature can be acquired, and then the internal stray radiation of the cryogenic infrared imaging system under various ambient temperatures can be calculated. Finally, several experiments are performed in a chamber with controllable inside temperatures to evaluate the effectiveness of the proposed method. Experimental results indicate that the proposed method can be used to measure internal stray radiation with high accuracy at various ambient temperatures and integration times. The proposed method has some advantages, such as simple implementation and the capability of high-precision measurement. The measurement results can be used to guide the stray radiation suppression and to test whether the internal stray radiation suppression performance meets the requirement or not.
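
    The two-ambient-temperature calibration logic can be sketched as simple arithmetic: calibrate gray level against blackbody radiance at each ambient temperature, attribute the change in offset to internal stray radiation, and interpolate linearly to other ambient temperatures. Treating the offset as linear in ambient temperature is a simplifying assumption made only for this sketch.

```python
import numpy as np

def fit_calibration(radiance, gray_level):
    """Per-ambient-temperature linear calibration: DN = G * L + O."""
    gain, offset = np.polyfit(radiance, gray_level, 1)
    return gain, offset

def stray_offset_model(ambient_temps_c, offsets):
    """Fit the offset (dominated by internal stray radiation) as a linear
    function of ambient temperature from the two calibration runs."""
    slope, intercept = np.polyfit(ambient_temps_c, offsets, 1)
    return lambda t_c: slope * t_c + intercept

# usage sketch: calibrate at 10 degC and 30 degC, then predict the stray term at 22 degC
# g1, o1 = fit_calibration(L_bb, dn_at_10C); g2, o2 = fit_calibration(L_bb, dn_at_30C)
# stray_at = stray_offset_model([10.0, 30.0], [o1, o2]); print(stray_at(22.0))
```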

  9. Study of the location of testing area in residual stress measurement by Moiré interferometry combined with hole-drilling method

    NASA Astrophysics Data System (ADS)

    Qin, Le; Xie, HuiMin; Zhu, RongHua; Wu, Dan; Che, ZhiGang; Zou, ShiKun

    2014-04-01

    This paper investigates the effect of the location of the testing area in residual stress measurement by Moiré interferometry combined with the hole-drilling method. The selection of the location of the testing area is analyzed from theory and experiment. In the theoretical study, the factors that affect the released surface radial strain εr were analyzed on the basis of the hole-drilling formulae, and the relations between those factors and εr were established. By combining Moiré interferometry with the hole-drilling method, the residual stress of an interference-fit specimen was measured to verify the theoretical analysis. According to the analysis results, the testing area that minimizes the error of the strain measurement is determined. Moreover, if the orientation of the maximum principal stress is known, the strain can be measured with higher precision by the Moiré interferometry method.

  10. Evaluating the use of gas discharge visualization to measure massage therapy outcomes

    PubMed Central

    Haun, Jolie; Patel, Nitin; Schwartz, Gary; Ritenbaugh, Cheryl

    2017-01-01

    Background: The purpose of this study was to evaluate the short-term effects of massage therapy using gas discharge visualization (GDV), a computerized biophysical electrophoton capture (EPC), in tandem with traditional self-report measures to evaluate the use of GDV measurement to assess the bioenergetic whole-person effects of massage therapy. Methods: This study used a single treatment group, pre–post-repeated measures design with a sample of 23 healthy adults. This study utilized a single 50-min full-body relaxation massage with participants. GDV measurement method, an EPC, and traditional paper-based measures evaluating pain, stress, muscle tension, and well-being were used to assess intervention outcomes. Results: Significant differences were found between pre- and post-measures of well-being, pain, stress, muscle tension, and GDV parameters. Pearson correlations indicate the GDV measure is correlated with pain and stress, variables that impact the whole person. Conclusions: This study demonstrates that GDV parameters may be used to indicate significant bioenergetic change from pre- to post-massage. Findings warrant further investigation with a larger diverse sample size and control group to further explore GDV as a measure of whole-person bioenergetic effects associated with massage. PMID:26087069

  11. Burrowing as a novel voluntary strength training method for mice: A comparison of various voluntary strength or resistance exercise methods.

    PubMed

    Roemers, P; Mazzola, P N; De Deyn, P P; Bossers, W J; van Heuvelen, M J G; van der Zee, E A

    2018-04-15

    Voluntary strength training methods for rodents are necessary to investigate the effects of strength training on cognition and the brain. However, few voluntary methods are available. The current study tested functional and muscular effects of two novel voluntary strength training methods, burrowing (digging a substrate out of a tube) and unloaded tower climbing, in male C57Bl6 mice. To compare these two novel methods with existing exercise methods, resistance running and (non-resistance) running were included. Motor coordination, grip strength and muscle fatigue were measured at baseline, halfway through and near the end of a fourteen week exercise intervention. Endurance was measured by an incremental treadmill test after twelve weeks. Both burrowing and resistance running improved forelimb grip strength as compared to controls. Running and resistance running increased endurance in the treadmill test and improved motor skills as measured by the balance beam test. Post-mortem tissue analyses revealed that running and resistance running induced Soleus muscle hypertrophy and reduced epididymal fat mass. Tower climbing elicited no functional or muscular changes. As a voluntary strength exercise method, burrowing avoids the confounding effects of stress and positive reinforcers elicited in forced strength exercise methods. Compared to voluntary resistance running, burrowing likely reduces the contribution of aerobic exercise components. Burrowing qualifies as a suitable voluntary strength training method in mice. Furthermore, resistance running shares features of strength training and endurance (aerobic) exercise and should be considered a multi-modal aerobic-strength exercise method in mice. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Investigation of Aerosol Surface Area Estimation from Number and Mass Concentration Measurements: Particle Density Effect.

    PubMed

    Ku, Bon Ki; Evans, Douglas E

    2012-04-01

    For nanoparticles with nonspherical morphologies, e.g., open agglomerates or fibrous particles, it is expected that the actual density of agglomerates may be significantly different from the bulk material density. It is further expected that using the material density may upset the relationship between surface area and mass when a method for estimating aerosol surface area from number and mass concentrations (referred to as "Maynard's estimation method") is used. Therefore, it is necessary to quantitatively investigate how much the Maynard's estimation method depends on particle morphology and density. In this study, aerosol surface area estimated from number and mass concentration measurements was evaluated and compared with values from two reference methods: a method proposed by Lall and Friedlander for agglomerates and a mobility based method for compact nonspherical particles using well-defined polydisperse aerosols with known particle densities. Polydisperse silver aerosol particles were generated by an aerosol generation facility. Generated aerosols had a range of morphologies, count median diameters (CMD) between 25 and 50 nm, and geometric standard deviations (GSD) between 1.5 and 1.8. The surface area estimates from number and mass concentration measurements correlated well with the two reference values when gravimetric mass was used. The aerosol surface area estimates from the Maynard's estimation method were comparable to the reference method for all particle morphologies within the surface area ratios of 3.31 and 0.19 for assumed GSDs 1.5 and 1.8, respectively, when the bulk material density of silver was used. The difference between the Maynard's estimation method and surface area measured by the reference method for fractal-like agglomerates decreased from 79% to 23% when the measured effective particle density was used, while the difference for nearly spherical particles decreased from 30% to 24%. The results indicate that the use of particle density of agglomerates improves the accuracy of the Maynard's estimation method and that an effective density should be taken into account, when known, when estimating aerosol surface area of nonspherical aerosol such as open agglomerates and fibrous particles.
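
    Maynard's estimation method itself reduces to the Hatch-Choate relations for a lognormal distribution with an assumed GSD: solve the mass equation for the count median diameter, then convert to surface area. A minimal sketch of that arithmetic (not the study's code), in which the density argument corresponds to the bulk-versus-effective-density question raised above:

```python
import numpy as np

def maynard_surface_area(number_conc_m3, mass_conc_kg_m3, gsd=1.8, density=10_490.0):
    """Estimate aerosol surface area concentration (m^2 per m^3 of air) from number and
    mass concentrations, assuming spherical particles and a lognormal size distribution
    with a fixed GSD (Hatch-Choate relations). density is in kg/m^3 (default: bulk
    silver); substituting an effective agglomerate density is the point made above."""
    ln2s = np.log(gsd) ** 2
    # mass: M = N * rho * (pi/6) * CMD^3 * exp(4.5 * ln^2 GSD)  ->  solve for CMD
    cmd = ((6.0 * mass_conc_kg_m3) /
           (np.pi * density * number_conc_m3 * np.exp(4.5 * ln2s))) ** (1.0 / 3.0)
    # surface area: S = N * pi * CMD^2 * exp(2 * ln^2 GSD)
    return number_conc_m3 * np.pi * cmd ** 2 * np.exp(2.0 * ln2s)
```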

  13. Performance Measures for Public Participation Methods : Final Report

    DOT National Transportation Integrated Search

    2018-01-01

    Public engagement is an important part of transportation project development, but measuring its effectiveness is typically piecemeal. Performance measurement, described by the Urban Institute as the measurement on a regular basis of the results (o...

  14. Inversion of time-domain induced polarization data based on time-lapse concept

    NASA Astrophysics Data System (ADS)

    Kim, Bitnarae; Nam, Myung Jin; Kim, Hee Joon

    2018-05-01

    Induced polarization (IP) surveys, which measure overvoltage phenomena of the medium, are widely and increasingly performed not only for exploration of mineral resources but also for engineering applications. Among the several IP survey methods, such as time-domain, frequency-domain and spectral IP surveys, this study introduces a novel inversion method for time-domain IP data to recover the chargeability structure of the target medium. The inversion method employs the concept of 4D inversion of time-lapse resistivity data sets, exploiting the fact that the voltage measured in a time-domain IP survey is distorted by IP effects, increasing from the instantaneous voltage measured at the moment source current injection starts. Even though the increase saturates very quickly, the saturated and instantaneous voltages can be treated as a time-lapse data set. The 4D inversion method is one of the most powerful methods for inverting time-lapse resistivity data sets. Using the developed IP inversion algorithm, we invert not only synthetic but also field IP data to show the effectiveness of the proposed method, comparing the recovered chargeability models with those from the linear inversion used for the field data in a previous study. Numerical results confirm that the proposed inversion method generates reliable chargeability models even when the anomalous bodies have large IP effects.

  15. Latency as a region contrast: Measuring ERP latency differences with Dynamic Time Warping.

    PubMed

    Zoumpoulaki, A; Alsufyani, A; Filetti, M; Brammer, M; Bowman, H

    2015-12-01

    Methods for measuring onset latency contrasts are evaluated against a new method utilizing the dynamic time warping (DTW) algorithm. This new method allows latency to be measured across a region instead of at a single point. We use computer simulations to compare the methods' power and Type I error rates under different scenarios. We perform per-participant analysis for different signal-to-noise ratios and two sizes of window (broad vs. narrow). In addition, the methods are tested in combination with single-participant and jackknife average waveforms for different effect sizes, at the group level. DTW performs better than the other methods, being less sensitive to noise as well as to the placement and width of the window selected. © 2015 Society for Psychophysiological Research.
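
    The region-level latency contrast can be illustrated with a compact stand-in for the authors' procedure: warp one ERP onto the other with plain DTW and take the average signed shift along the warping path within the analysis window.

```python
import numpy as np

def dtw_path(a, b):
    """Plain O(n*m) dynamic time warping; returns the optimal alignment path."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # trace back the optimal path from the end of both series
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def dtw_latency_shift(erp_a, erp_b, sample_period_ms):
    """Average signed index shift along the warping path, converted to ms."""
    path = dtw_path(erp_a, erp_b)
    return np.mean([j - i for i, j in path]) * sample_period_ms

# usage sketch: restrict both waveforms to the analysis window before calling,
# e.g. dtw_latency_shift(wave_a[200:400], wave_b[200:400], sample_period_ms=4.0)
```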

  16. Measurement and Analysis of the Temperature Gradient of Blackbody Cavities, for Use in Radiation Thermometry

    NASA Astrophysics Data System (ADS)

    De Lucas, Javier; Segovia, José Juan

    2018-05-01

    Blackbody cavities are the standard radiation sources widely used in the fields of radiometry and radiation thermometry. Their effective emissivity and uncertainty depend to a large extent on the temperature gradient. An experimental procedure based on the radiometric method for measuring the gradient is followed. Results are applied to particular blackbody configurations where gradients can be thermometrically estimated by contact thermometers and where the relationship between the two basic methods can be established. The proposed procedure may be applied to commercial blackbodies if they are modified to allow a secondary contact temperature measurement. In addition, the established procedure may be incorporated as part of the quality assurance actions in routine calibrations of radiation thermometers, by using the secondary contact temperature measurement to detect departures from the radiometrically obtained gradient and their effect on the uncertainty. On the other hand, a theoretical model is proposed to evaluate the effect of temperature variations on effective emissivity and the associated uncertainty. This model is based on a gradient sample chosen following plausible criteria. The model is consistent with the Monte Carlo method for calculating the uncertainty of effective emissivity and complements others published in the literature, where uncertainty is calculated taking into account only geometrical variables and intrinsic emissivity. The mathematical model and experimental procedure are applied and validated using a commercial three-zone furnace, with a blackbody cavity modified to enable a secondary contact temperature measurement, in the range between 400 °C and 1000 °C.

  17. A method for determining the actual rate of orientation switching of DNA self-assembled monolayers using optical and electrochemical frequency response analysis.

    PubMed

    Casanova-Moreno, J; Bizzotto, D

    2015-02-17

    Electrostatic control of the orientation of fluorophore-labeled DNA strands immobilized on an electrode surface has been shown to be an effective bioanalytical tool. Modulation techniques and, later, time-resolved measurements were used to evaluate the kinetics of the switching between lying and standing DNA conformations. These measurements, however, are the result of a convolution between the DNA "switching" response time and the other frequency-limited responses in the measurement. In this work, a method for analyzing the response of a potential-driven DNA sensor is presented by calculating the potential effectively dropped across the electrode interface (using electrochemical impedance spectroscopy) as opposed to the potential applied to the electrochemical cell. This effectively deconvolutes the effect of the charging time on the observed frequency response. The corrected response shows that DNA is able to switch conformation faster than previously reported using modulation techniques. This approach ensures accurate measurements independent of the electrochemical system, removing the uncertainty in the analysis of the switching response and enabling comparison between samples and measurement systems.
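
    The correction idea can be sketched with a deliberately simple circuit assumption: solution resistance in series with an interfacial capacitance, so the fraction of the applied AC potential that actually reaches the interface is 1/(1 + jωRsC); dividing the observed frequency response by this transfer function deconvolutes the charging time. The circuit model and values here are assumptions for illustration, not the authors' fitted parameters.

```python
import numpy as np

def interfacial_fraction(freq_hz, r_solution_ohm, c_interface_f):
    """Fraction of the applied AC potential dropped across the interface for a
    series R_s - C_dl model: H(w) = Z_C / (R_s + Z_C) = 1 / (1 + j*w*R_s*C)."""
    w = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    return 1.0 / (1.0 + 1j * w * r_solution_ohm * c_interface_f)

# usage sketch: correct an observed frequency response by the interface transfer function
# corrected = observed_response / np.abs(interfacial_fraction(freqs, 500.0, 2e-6))
```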

  18. Objective Physiological Measurements but Not Subjective Reports Moderate the Effect of Hunger on Choice Behavior

    PubMed Central

    Shabat-Simon, Maytal; Shuster, Anastasia; Sela, Tal; Levy, Dino J.

    2018-01-01

    Hunger is a powerful driver of human behavior, and is therefore of great interest to the study of psychology, economics, and consumer behavior. Assessing hunger levels in experiments is often biased, when using self-report methods, or complex, when using blood tests. We propose a novel way of objectively measuring subjects’ levels of hunger by identifying levels of alpha-amylase (AA) enzyme in their saliva samples. We used this measure to uncover the effect of hunger on different types of choice behaviors. We found that hunger increases risk-seeking behavior in a lottery-choice task, modifies levels of vindictiveness in a social decision-making task, but does not have a detectible effect on economic inconsistency in a budget-set choice task. Importantly, these findings were moderated by AA levels and not by self-report measures. We demonstrate the effects hunger has on choice behavior and the problematic nature of subjective measures of physiological states, and propose to use reliable and valid biologically based methods to overcome these problems. PMID:29875715

  19. FOEHN: The critical experiment for the Franco-German High Flux Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scharmer, K.; Eckert, H. G.

    1991-01-01

    A critical experiment for the Franco-German High Flux Reactor was carried out in the French reactor EOLE (CEN Cadarache). The purpose of the experiment was to check the calculation methods in a realistic geometry and to measure effects that can only be calculated imprecisely (e.g. beam hole effects). The structure of the experiment and the measurement and calculation methods are described. A detailed comparison between theoretical and experimental results was performed. 30 refs., 105 figs.

  20. Interrupted Time Series Versus Statistical Process Control in Quality Improvement Projects.

    PubMed

    Andersson Hagiwara, Magnus; Andersson Gäre, Boel; Elg, Mattias

    2016-01-01

    To measure the effect of quality improvement interventions, it is appropriate to use analysis methods that measure data over time. Examples of such methods include statistical process control analysis and interrupted time series with segmented regression analysis. This article compares the use of statistical process control analysis and interrupted time series with segmented regression analysis for evaluating the longitudinal effects of quality improvement interventions, using an example study on an evaluation of a computerized decision support system.
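
    For readers unfamiliar with the second approach, the sketch below fits a standard interrupted-time-series segmented regression (baseline trend plus level and slope changes at the intervention) to simulated monthly data; the data and variable names are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_pre, n_post = 24, 24
t = np.arange(n_pre + n_post)                       # month index
post = (t >= n_pre).astype(float)                   # 1 after the intervention
t_after = np.where(post == 1, t - n_pre + 1, 0.0)   # months since intervention

# Simulated outcome: baseline trend, then a level drop and slope change after the intervention
y = 50 + 0.2 * t - 6.0 * post - 0.4 * t_after + rng.normal(0, 2, t.size)

X = sm.add_constant(np.column_stack([t, post, t_after]))
fit = sm.OLS(y, X).fit()
print(fit.params)       # [intercept, pre-trend, level change, slope change]
print(fit.conf_int())
```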

  1. Louisiana SIP: LAC 33:III Ch. 7 - Table 2 - Ambient Air--Methods of Contaminant Measurements; SIP effective 1989-05-08 (LAc49) and 1989-08-14 (LAc50) to 2011-08-03 (LAd34 - Moved to Section 711 and revised [adds PM-2.5])

    EPA Pesticide Factsheets

    Louisiana SIP: LAC 33:III Ch. 7 - Table 2 - Ambient Air--Methods of Contaminant Measurements; SIP effective 1989-05-08 (LAc49) and 1989-08-14 (LAc50) to 2011-08-03 (LAd34 - Moved to Section 711 and revised [adds PM-2.5])

  2. A novel automatic quantification method for high-content screening analysis of DNA double strand-break response.

    PubMed

    Feng, Jingwen; Lin, Jie; Zhang, Pengquan; Yang, Songnan; Sa, Yu; Feng, Yuanming

    2017-08-29

    High-content screening is commonly used in studies of the DNA damage response. The double-strand break (DSB) is one of the most harmful types of DNA damage lesions. The conventional method used to quantify DSBs is γH2AX foci counting, which requires manual adjustment and preset parameters and is usually regarded as imprecise, time-consuming, poorly reproducible, and inaccurate. Therefore, a robust automatic alternative method is highly desired. In this manuscript, we present a new method for quantifying DSBs which involves automatic image cropping, automatic foci-segmentation and fluorescent intensity measurement. Furthermore, an additional function was added for standardizing the measurement of DSB response inhibition based on co-localization analysis. We tested the method with a well-known inhibitor of DSB response. The new method requires only one preset parameter, which effectively minimizes operator-dependent variations. Compared with conventional methods, the new method detected a higher percentage difference of foci formation between different cells, which can improve measurement accuracy. The effects of the inhibitor on DSB response were successfully quantified with the new method (p < 0.001). The advantages of this method in terms of reliability, automation and simplicity show its potential in quantitative fluorescence imaging studies and high-content screening for compounds and factors involved in DSB response.

  3. Evaluating the impact of method bias in health behaviour research: a meta-analytic examination of studies utilising the theories of reasoned action and planned behaviour.

    PubMed

    McDermott, Máirtín S; Sharma, Rajeev

    2017-12-01

    The methods employed to measure behaviour in research testing the theories of reasoned action/planned behaviour (TRA/TPB) within the context of health behaviours have the potential to significantly bias findings. One bias yet to be examined in that literature is that due to common method variance (CMV). CMV introduces a variance in scores attributable to the method used to measure a construct, rather than the construct it represents. The primary aim of this study was to evaluate the impact of method bias on the associations of health behaviours with TRA/TPB variables. Data were sourced from four meta-analyses (177 studies). The method used to measure behaviour for each effect size was coded for susceptibility to bias. The moderating impact of method type was assessed using meta-regression. Method type significantly moderated the associations of intentions, attitudes and social norms with behaviour, but not that between perceived behavioural control and behaviour. The magnitude of the moderating effect of method type appeared consistent between cross-sectional and prospective studies, but varied across behaviours. The current findings strongly suggest that method bias significantly inflates associations in TRA/TPB research, and poses a potentially serious validity threat to the cumulative findings reported in that field.
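
    A bare-bones illustration of the moderator analysis described above: an inverse-variance-weighted regression of Fisher-z effect sizes on a dummy coding the behaviour-measurement method. The study data and the simple fixed-effect weighting are invented for illustration and are much cruder than the meta-regression models actually used.

```python
import numpy as np

# Hypothetical study-level data: correlation r, sample size n, 1 = self-report behaviour measure
r = np.array([0.55, 0.48, 0.60, 0.30, 0.35, 0.28, 0.52, 0.33])
n = np.array([120, 200, 90, 150, 250, 110, 80, 140])
self_report = np.array([1, 1, 1, 0, 0, 0, 1, 0], dtype=float)

z = np.arctanh(r)                 # Fisher-z effect sizes
var = 1.0 / (n - 3)               # approximate sampling variance of z
w = 1.0 / var                     # inverse-variance (fixed-effect) weights

X = np.column_stack([np.ones_like(z), self_report])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ z)
cov = np.linalg.inv(X.T @ W @ X)
se = np.sqrt(np.diag(cov))
print(f"method-type moderator: b = {beta[1]:.3f}, z = {beta[1] / se[1]:.2f}")
```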

  4. Comparison of Two Acoustic Waveguide Methods for Determining Liner Impedance

    NASA Technical Reports Server (NTRS)

    Jones, Michael G.; Watson, Willie R.; Tracy, Maureen B.; Parrott, Tony L.

    2001-01-01

    Acoustic measurements taken in a flow impedance tube are used to assess the relative accuracy of two waveguide methods for impedance eduction in the presence of grazing flow. The aeroacoustic environment is assumed to contain forward- and backward-traveling acoustic waves, consisting of multiple modes, and uniform mean flow. Both methods require a measurement of the complex acoustic pressure profile over the length of the test liner. The Single Mode Method assumes that the sound pressure level and phase decay rates of a single progressive mode can be extracted from this measured complex acoustic pressure profile. No a priori assumptions are made in the Finite Element Method regarding the modal or reflection content in the measured acoustic pressure profile. The integrity of each method is initially demonstrated by how well its no-flow impedances match those acquired in a normal incidence impedance tube. These tests were conducted using ceramic tubular and conventional perforate liners. Ceramic tubular liners were included because of their impedance insensitivity to mean flow effects. Conversely, the conventional perforate liner was included because its impedance is known to be sensitive to mean flow velocity effects. Excellent comparisons between impedance values educed with the two waveguide methods in the absence of mean flow and the corresponding values educed with the normal incidence impedance tube were observed. The two methods are then compared for mean flow Mach numbers up to 0.5, and are shown to give consistent results for both types of test liners. The quality of the results indicates that the Single Mode Method should be used when the measured acoustic pressure profile is clearly dominated by a single progressive mode, and the Finite Element Method should be used for all other cases.

  5. The Effects of Experimental Conditions on the Refractive Index and Density of Low-Temperature Ices: Solid Carbon Dioxide

    NASA Technical Reports Server (NTRS)

    Loeffler, M. J.; Moore, M. H.; Gerakines, P. A.

    2016-01-01

    We present the first study on the effects of the deposition technique on the measurements of the visible refractive index and the density of a low-temperature ice using solid carbon dioxide (CO2) at 14-70 K as an example. While our measurements generally agree with previous studies that show a dependence of index and density on temperature below 50 K, we also find that the measured values depend on the method used to create each sample. Below 50 K, we find that the refractive index varied by as much as 4% and the density by as much as 16% at a single temperature, depending on the deposition method. We also show that the Lorentz-Lorenz approximation is valid for solid CO2 across the full 14-70 K temperature range, regardless of the deposition method used. Since the refractive index and density are important in calculations of optical constants and infrared (IR) band strengths of materials, our results suggest that the deposition method must be considered in cases where n_vis and ρ are not measured in the same experimental setup where the IR spectral measurements are made.
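
    The Lorentz-Lorenz relation mentioned above links the visible refractive index to density through an approximately constant specific refraction; the sketch below uses it to predict density from a measured index given an assumed reference pair (the numbers are placeholders, not values from the paper).

```python
def lorentz_lorenz(n):
    """Specific refraction term (n^2 - 1)/(n^2 + 2)."""
    return (n**2 - 1.0) / (n**2 + 2.0)

# Assumed reference pair for the ice (illustrative values only)
n_ref, rho_ref = 1.30, 1.10                    # refractive index, g cm^-3
r_specific = lorentz_lorenz(n_ref) / rho_ref   # assumed constant across deposition conditions

# Predict density for another measured index
n_meas = 1.27
rho_pred = lorentz_lorenz(n_meas) / r_specific
print(f"predicted density: {rho_pred:.3f} g cm^-3")
```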

  6. Effect of adjuvant physical properties on spray characteristics

    USDA-ARS?s Scientific Manuscript database

    The effects of adjuvant physical properties on spray characteristics were studied. Dynamic surface tension was measured with a Sensa Dyne surface tensiometer 6000 using the maximum bubble pressure method. Viscosity was measured with a Brookfield synchro-lectric viscometer model LVT using a UL adap...

  7. Phantom Effects in Multilevel Compositional Analysis: Problems and Solutions

    ERIC Educational Resources Information Center

    Pokropek, Artur

    2015-01-01

    This article combines statistical and applied research perspective showing problems that might arise when measurement error in multilevel compositional effects analysis is ignored. This article focuses on data where independent variables are constructed measures. Simulation studies are conducted evaluating methods that could overcome the…

  8. Biases and power for groups comparison on subjective health measurements.

    PubMed

    Hamel, Jean-François; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Roquelaure, Yves; Sébille, Véronique

    2012-01-01

    Subjective health measurements are increasingly used in clinical research, particularly for comparisons between patient groups. Two main types of analytical strategies can be used for such data: so-called classical test theory (CTT), which relies on observed scores, and models from item response theory (IRT), which rely on a response model relating the item responses to a latent parameter, often called the latent trait. Whether IRT or CTT would be the most appropriate method to compare two independent groups of patients on a patient-reported outcomes measurement remains unknown and was investigated using simulations. For CTT-based analyses, group comparison was performed using a t-test on the scores. For IRT-based analyses, several methods were compared, according to whether the Rasch model was considered with random effects or with fixed effects, and whether the group effect was included as a covariate or not. Individual latent trait values were estimated using either a deterministic method or stochastic approaches. Latent traits were then compared with a t-test. Finally, a two-step method was performed to compare the latent trait distributions, and a Wald test was performed to test the group effect in the Rasch model including group covariates. The only unbiased IRT-based method was the Wald test on the group covariate, performed on the random-effects Rasch model. This model displayed the highest observed power, which was similar to the power using the score t-test. These results need to be extended to the case frequently encountered in practice where data are missing and possibly informative.

  9. Direct and ultrasonic measurements of macroscopic piezoelectricity in sintered hydroxyapatite

    NASA Astrophysics Data System (ADS)

    Tofail, S. A. M.; Haverty, D.; Cox, F.; Erhart, J.; Hána, P.; Ryzhenko, V.

    2009-03-01

    Macroscopic piezoelectricity in hydroxyapatite (HA) ceramic was measured by a direct quasistatic method and an ultrasonic interference technique. The effective symmetry of the polycrystalline aggregate was established, and a detailed theoretical analysis was carried out to determine, by these two methods, the shear piezoelectric coefficient d14 of HA. The piezoelectric nature of HA was demonstrated qualitatively, although a specific quantitative value for the d14 coefficient could not be established. The ultrasonic method was also employed to measure the anisotropic elastic constants, which agreed well with values obtained from first principles.

  10. Method and apparatus for measuring coupled flow, transport, and reaction processes under liquid unsaturated flow conditions

    DOEpatents

    McGrail, Bernard P.; Martin, Paul F.; Lindenmeier, Clark W.

    1999-01-01

    The present invention is a method and apparatus for measuring coupled flow, transport and reaction processes under liquid unsaturated flow conditions. The method and apparatus of the present invention permit distinguishing individual precipitation events and their effect on dissolution behavior isolated to the specific event. The present invention is especially useful for dynamically measuring hydraulic parameters when a chemical reaction occurs between a particulate material and either liquid or gas (e.g. air) or both, causing precipitation that changes the pore structure of the test material.

  11. The Effectiveness of an Individualized Learning Method of Instruction When Compared to the Lecture-Discussion Method. A Research Report of a Graduate Study.

    ERIC Educational Resources Information Center

    Oen, Urban T.; Sweany, H. Paul

    To compare the effectiveness of individualized and lecture-discussion methods with a non-instruction (Control) method in developing turfgrass competencies in 11th and 12th grade students as measured by achievement in a battery of tests, teachers from 29 Michigan schools were randomly placed in three groups and attended workshops where they were…

  12. Is the societal approach wide enough to include relatives? Incorporating relatives' costs and effects in a cost-effectiveness analysis.

    PubMed

    Davidson, Thomas; Levin, Lars-Ake

    2010-01-01

    It is important for economic evaluations in healthcare to cover all relevant information. However, many existing evaluations fall short of this goal, as they fail to include all the costs and effects for the relatives of a disabled or sick individual. The objective of this study was to analyse how relatives' costs and effects could be measured, valued and incorporated into a cost-effectiveness analysis. In this article, we discuss the theories underlying cost-effectiveness analyses in the healthcare arena; the general conclusion is that it is hard to find theoretical arguments for excluding relatives' costs and effects if a societal perspective is used. We argue that the cost of informal care should be calculated according to the opportunity cost method. To capture relatives' effects, we construct a new term, the R-QALY weight, which is defined as the effect on relatives' QALY weight of being related to a disabled or sick individual. We examine methods for measuring, valuing and incorporating the R-QALY weights. One suggested method is to estimate R-QALYs and incorporate them together with the patient's QALY in the analysis. However, there is no well established method as yet that can create R-QALY weights. One difficulty with measuring R-QALY weights using existing instruments is that these instruments are rarely focused on relative-related aspects. Even if generic quality-of-life instruments do cover some aspects relevant to relatives and caregivers, they may miss important aspects and potential altruistic preferences. A further development and validation of the existing caregiving instruments used for eliciting utility weights would therefore be beneficial for this area, as would further studies on the use of time trade-off or Standard Gamble methods for valuing R-QALY weights. Another potential method is to use the contingent valuation method to find a monetary value for all the relatives' costs and effects. Because cost-effectiveness analyses are used for decision making, and this is often achieved by comparing different cost-effectiveness ratios, we argue that it is important to find ways of incorporating all relatives' costs and effects into the analysis. This may not be necessary for every analysis of every intervention, but for treatments where relatives' costs and effects are substantial there may be some associated influence on the cost-effectiveness ratio.

  13. The assessment of lower face morphology changes in edentulous patients after prosthodontic rehabilitation, using two methods of measurement.

    PubMed

    Jivănescu, Anca; Bratu, Dana Cristina; Tomescu, Lucian; Măroiu, Alexandra Cristina; Popa, George; Bratu, Emanuel Adrian

    2015-01-01

    Using two measurement methods (a three-dimensional laser scanning system and a digital caliper), this study compares the lower face morphology of complete edentulous patients, before and after prosthodontic rehabilitation with bimaxillary complete dentures. Fourteen edentulous patients were randomly selected from the Department of Prosthodontics, at the Faculty of Dental Medicine, "Victor Babes" University of Medicine and Pharmacy, Timisoara, Romania. The changes that occurred in the lower third of the face after prosthodontic treatment were assessed quantitatively by measuring the vertical projection of the distances between two sets of anthropometric landmarks: Subnasale - cutaneous Pogonion (D1) and Labiale superius - Labiale inferius (D2). A two-way repeated measures ANOVA model design was carried out to test for significant interactions, main effects and differences between the two types of measuring devices and between the initial and final rehabilitation time points. The main effect of the type of measuring device showed no statistically significant differences in the measured distances (p=0.24 for D1 and p=0.39 for D2), between the initial and the final rehabilitation time points. Regarding the main effect of time, there were statistically significant differences in both the measured distances D1 and D2 (p=0.001), between the initial and the final rehabilitation time points. The two methods of measurement were equally reliable in the assessment of lower face morphology changes in edentulous patients after prosthodontic rehabilitation with bimaxillary complete dentures. The differences between the measurements taken before and after prosthodontic rehabilitation proved to be statistically significant.

  14. Measurement of unsteady pressures in rotating systems

    NASA Technical Reports Server (NTRS)

    Kienappel, K.

    1978-01-01

    The principles of the experimental determination of unsteady periodic pressure distributions in rotating systems are reported. An indirect method is discussed, and the effects of the centrifugal force and the transmission behavior of the pressure measurement circuit are outlined. The required correction procedures are described and experimentally implemented on a test bench. Results show that the indirect method is suited to the measurement of unsteady nonharmonic pressure distributions in rotating systems.

  15. A method for mapping fire hazard and risk across multiple scales and its application in fire management

    Treesearch

    Robert E. Keane; Stacy A. Drury; Eva C. Karau; Paul F. Hessburg; Keith M. Reynolds

    2010-01-01

    This paper presents modeling methods for mapping fire hazard and fire risk using a research model called FIREHARM (FIRE Hazard and Risk Model) that computes common measures of fire behavior, fire danger, and fire effects to spatially portray fire hazard over space. FIREHARM can compute a measure of risk associated with the distribution of these measures over time using...

  16. Gamma model and its analysis for phase measuring profilometry.

    PubMed

    Liu, Kai; Wang, Yongchang; Lau, Daniel L; Hao, Qi; Hassebrook, Laurence G

    2010-03-01

    Phase measuring profilometry is a method of structured light illumination whose three-dimensional reconstructions are susceptible to error from nonunitary gamma in the associated optical devices. While the effects of this distortion diminish with an increasing number of employed phase-shifted patterns, gamma distortion may be unavoidable in real-time systems where the number of projected patterns is limited by the presence of target motion. A mathematical model is developed for predicting the effects of nonunitary gamma on phase measuring profilometry, while also introducing an accurate gamma calibration method and two strategies for minimizing gamma's effect on phase determination. These phase correction strategies include phase corrections with and without gamma calibration. With the reduction in noise, for three-step phase measuring profilometry, analysis of the root mean squared error of the corrected phase shows a 60x reduction in phase error when the proposed gamma calibration is performed versus a 33x reduction without calibration.
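
    To make the gamma problem concrete, the sketch below simulates three-step phase-shifted patterns, applies a nonunitary gamma, and recovers the wrapped phase with and without a simple pre-correction that raises the captured intensities to the power 1/gamma. This is a simplified stand-in for the calibration-based corrections in the paper; the pattern parameters and gamma value are arbitrary.

```python
import numpy as np

gamma = 2.2                        # assumed nonunitary device gamma
phi_true = np.linspace(-np.pi, np.pi, 1000, endpoint=False)
A, B = 0.5, 0.4                    # pattern offset and modulation (normalized intensities)

n = np.arange(3)
I_ideal = A + B * np.cos(phi_true[:, None] - 2 * np.pi * n[None, :] / 3)  # N x 3 patterns
I_gamma = np.clip(I_ideal, 0.0, 1.0) ** gamma                             # gamma-distorted capture

def three_step_phase(I):
    """Standard three-step estimator for patterns cos(phi - 2*pi*n/3), n = 0, 1, 2."""
    return np.arctan2(np.sqrt(3.0) * (I[:, 1] - I[:, 2]),
                      2.0 * I[:, 0] - I[:, 1] - I[:, 2])

def wrap(x):
    return np.angle(np.exp(1j * x))

err_raw = wrap(three_step_phase(I_gamma) - phi_true)
err_corrected = wrap(three_step_phase(I_gamma ** (1.0 / gamma)) - phi_true)
print(f"RMS phase error, gamma-distorted:      {np.sqrt(np.mean(err_raw**2)):.4f} rad")
print(f"RMS phase error, 1/gamma pre-corrected: {np.sqrt(np.mean(err_corrected**2)):.4f} rad")
```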

  17. Dynamic axle and wheel loads identification: laboratory studies

    NASA Astrophysics Data System (ADS)

    Zhu, X. Q.; Law, S. S.

    2003-12-01

    Two methods have been reported by Zhu and Law to identify moving loads on the top of a bridge deck. One is based on the exact solution (ESM) and the other on the finite element formulation (FEM). Simulation studies on the effect of different influencing factors have been reported previously. This paper comparatively studies the performance of these two methods with experimental measurements obtained from a bridge/vehicle system in the laboratory. The strains of the bridge deck are measured as a model car moves across the deck along different paths. The moving loads on the bridge deck are identified from the measured strains using the two methods, and the responses are reconstructed from the identified loads for comparison with the measured responses to verify the performance of the methods. The identification accuracy is studied with respect to the number of vibration modes used, the number of measuring points, and the eccentricity of the travelling paths. Results show that the ESM can identify the moving loads individually, or as axle loads when they travel at an eccentricity, provided the sensors are located close to the travelling path of the forces, and that the accuracy of the FEM depends on the amount of measured information used in the identification.

  18. A Novel Operant-based Behavioral Assay of Mechanical Allodynia in the Orofacial Region of Rats

    PubMed Central

    Rohrs, Eric L.; Kloefkorn, Heidi E.; Lakes, Emily H.; Jacobs, Brittany Y.; Neubert, John K.; Caudle, Robert M.; Allen, Kyle D.

    2015-01-01

    Background Detecting behaviors related to orofacial pain in rodent models often relies on subjective investigator grades or methods that place the animal in a stressful environment. In this study, an operant-based behavioral assay is presented for the assessment of orofacial tactile sensitivity in the rat. New Methods In the testing chamber, rats are provided access to a sweetened condensed milk bottle; however, a 360° array of stainless steel wire loops impedes access. To receive the reward, an animal must engage the wires across the orofacial region. Contact with the bottle triggers a motor, requiring the animal to accept increasing pressure on the face during the test. To evaluate this approach, tolerated bottle distance was measured for 10 hairless Sprague-Dawley rats at baseline and 30 minutes after application of capsaicin cream (0.1%) to the face. The experiment was repeated to evaluate the ability of morphine to reverse this effect. Results The application of capsaicin cream reduced tolerated bottle distance measures relative to baseline (p<0.05). As long as morphine did not cause reduced participation due to sedation, subcutaneous morphine dosing reduced the effects of capsaicin (p<0.001). Comparison with Existing Method For behavioral tests, experimenters often make subjective decisions of an animal’s response. Operant methods can reduce these effects by measuring an animal’s selection in a reward-conflict decision. Herein, a method to measure orofacial sensitivity is presented using an operant system. Conclusions This operant device allows for consistent measurement of heightened tactile sensitivity in the orofacial regions of the rat. PMID:25823368

  19. Application of optical interferometry in focused acoustic field measurement

    NASA Astrophysics Data System (ADS)

    Wang, Yuebing; Sun, Min; Cao, Yonggang; Zhu, Jiang

    2018-07-01

    Optical interferometry has been successfully applied to measuring acoustic pressures in plane-wave and spherical-wave fields. In this paper, the "effective" refractive index for focused acoustic fields was developed; through numerical simulation and experiments, the feasibility of the optical method for measuring the acoustic fields of focused transducers was demonstrated. Compared with the results from a membrane hydrophone, it was concluded that the optical method has good spatial resolution and is suitable for detecting focused fields with fluctuating distributions. The influences of a few factors (the generated Lamb wave, laser beam directivity, etc.) were analyzed, and corresponding suggestions were proposed for effective application of this technology.

  20. Single-Image Distance Measurement by a Smart Mobile Device.

    PubMed

    Chen, Shangwen; Fang, Xianyong; Shen, Jianbing; Wang, Linbo; Shao, Ling

    2017-12-01

    Existing distance measurement methods either require multiple images and special photographing poses or only measure height with a special view configuration. We propose a novel image-based method that can measure various types of distance from a single image captured by a smart mobile device. The embedded accelerometer is used to determine the view orientation of the device. Consequently, pixels can be back-projected to the ground, thanks to an efficient calibration method using two known distances. Then the distance in pixels is transformed into a real distance in centimeters with a linear model parameterized by the magnification ratio. Various types of distance specified in the image can be computed accordingly. Experimental results demonstrate the effectiveness of the proposed method.

  1. N,N-Dimethyl-p-phenylenediamine dihydrochloride-based method for the measurement of plasma oxidative capacity during human aging.

    PubMed

    Mehdi, Mohammad Murtaza; Rizvi, Syed Ibrahim

    2013-05-15

    N,N-Dimethyl-p-phenylenediamine dihydrochloride (DMPD) is a compound normally used to measure antioxidant potential. In the presence of Fe(3+), it is converted to the DMPD(∙+) radical, which is scavenged by antioxidant molecules present in test samples. In plasma, due to the presence of iron, this method cannot be applied for the measurement of antioxidant potential. The modified DMPD method proposed here instead measures, with great accuracy, the oxidant potential of plasma by exploiting the ability of plasma to oxidize DMPD, producing a stable pink color. The method is fast and reproducible. We show that plasma oxidative capacity increases significantly during human aging.

  2. A Comparison of Signal Enhancement Methods for Extracting Tonal Acoustic Signals

    NASA Technical Reports Server (NTRS)

    Jones, Michael G.

    1998-01-01

    The measurement of pure tone acoustic pressure signals in the presence of masking noise, often generated by mean flow, is a continual problem in the field of passive liner duct acoustics research. In support of the Advanced Subsonic Technology Noise Reduction Program, methods were investigated for conducting measurements of advanced duct liner concepts in harsh, aeroacoustic environments. This report presents the results of a comparison study of three signal extraction methods for acquiring quality acoustic pressure measurements in the presence of broadband noise (used to simulate the effects of mean flow). The performance of each method was compared to a baseline measurement of a pure tone acoustic pressure 3 dB above a uniform, broadband noise background.
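
    One generic way to pull a known tone out of broadband masking noise is synchronous (lock-in style) detection against quadrature references at the tone frequency, sketched below with invented numbers; this illustrates the general idea only and is not a reproduction of the three methods compared in the report.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, f0, duration = 25_000.0, 1_000.0, 2.0      # sample rate [Hz], tone frequency [Hz], seconds
t = np.arange(int(fs * duration)) / fs

amp_true, phase_true = 0.02, 0.6               # Pa, rad (illustrative)
signal = amp_true * np.cos(2 * np.pi * f0 * t + phase_true) + rng.normal(0, 0.05, t.size)

# Synchronous detection: project onto quadrature references and average
i_comp = 2.0 * np.mean(signal * np.cos(2 * np.pi * f0 * t))
q_comp = -2.0 * np.mean(signal * np.sin(2 * np.pi * f0 * t))
amp_est = np.hypot(i_comp, q_comp)
phase_est = np.arctan2(q_comp, i_comp)
print(f"estimated amplitude {amp_est:.4f} Pa (true {amp_true}), phase {phase_est:.3f} rad")
```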

  3. How Teacher Evaluation Methods Matter for Accountability: A Comparative Analysis of Teacher Effectiveness Ratings by Principals and Teacher Value-Added Measures

    ERIC Educational Resources Information Center

    Harris, Douglas N.; Ingle, William K.; Rutledge, Stacey A.

    2014-01-01

    Policymakers are revolutionizing teacher evaluation by attaching greater stakes to student test scores and observation-based teacher effectiveness measures, but relatively little is known about why they often differ so much. Quantitative analysis of thirty schools suggests that teacher value-added measures and informal principal evaluations are…

  4. Cooling Effectiveness Measurements for Air Film Cooling of Thermal Barrier Coated Surfaces in a Burner Rig Environment Using Phosphor Thermometry

    NASA Technical Reports Server (NTRS)

    Eldridge, Jeffrey I.; Shyam, Vikram; Wroblewski, Adam C.; Zhu, Dongming; Cuy, Michael D.; Wolfe, Douglas E.

    2016-01-01

    While the effects of thermal barrier coating (TBC) thermal protection and air film cooling effectiveness are usually studied separately, their contributions to combined cooling effectiveness are interdependent and are not simply additive. Therefore, combined cooling effectiveness must be measured to achieve an optimum balance between TBC thermal protection and air film cooling. In this investigation, surface temperature mapping was performed using recently developed Cr-doped GdAlO3 phosphor thermometry. Measurements were performed in the NASA GRC Mach 0.3 burner rig on a TBC-coated plate using a scaled up cooling hole geometry where both the mainstream hot gas temperature and the blowing ratio were varied. Procedures for surface temperature and cooling effectiveness mapping of the air film-cooled TBC-coated surface are described. Applications are also shown for an engine component in both the burner rig test environment as well as an engine afterburner environment. The effects of thermal background radiation and flame chemiluminescence on the measurements are investigated, and advantages of this method over infrared thermography as well as the limitations of this method for studying air film cooling are discussed.
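
    For reference, surface-temperature maps of this kind typically feed an overall (combined) cooling effectiveness defined from the mainstream, coolant, and measured surface temperatures; the definition below is the standard one, but the numbers are illustrative only.

```python
def overall_cooling_effectiveness(T_gas, T_surface, T_coolant):
    """phi = (T_gas - T_surface) / (T_gas - T_coolant); 0 = no cooling, 1 = surface at coolant temperature."""
    return (T_gas - T_surface) / (T_gas - T_coolant)

# Illustrative numbers (not from the burner-rig tests)
T_gas, T_coolant = 1600.0, 700.0          # K
for T_surf in (1300.0, 1150.0, 1000.0):
    phi = overall_cooling_effectiveness(T_gas, T_surf, T_coolant)
    print(f"T_surface = {T_surf:6.1f} K  ->  phi = {phi:.2f}")
```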

  5. Measurement of Crystalline Silica Aerosol Using Quantum Cascade Laser-Based Infrared Spectroscopy.

    PubMed

    Wei, Shijun; Kulkarni, Pramod; Ashley, Kevin; Zheng, Lina

    2017-10-24

    Inhalation exposure to airborne respirable crystalline silica (RCS) poses major health risks in many industrial environments. There is a need for new sensitive instruments and methods for in-field or near real-time measurement of crystalline silica aerosol. The objective of this study was to develop an approach, using quantum cascade laser (QCL)-based infrared spectroscopy (IR), to quantify airborne concentrations of RCS. Three sampling methods were investigated for their potential for effective coupling with QCL-based transmittance measurements: (i) conventional aerosol filter collection, (ii) focused spot sample collection directly from the aerosol phase, and (iii) dried spot obtained from deposition of liquid suspensions. Spectral analysis methods were developed to obtain IR spectra from the collected particulate samples in the range 750-1030 cm⁻¹. The new instrument was calibrated and the results were compared with standardized methods based on Fourier transform infrared (FTIR) spectrometry. Results show that significantly lower detection limits for RCS (≈330 ng), compared to conventional infrared methods, could be achieved with effective microconcentration and careful coupling of the particulate sample with the QCL beam. These results offer promise for further development of sensitive filter-based laboratory methods and portable sensors for near real-time measurement of crystalline silica aerosol.

  6. [Measuring the effect of eyeglasses on determination of squint angle with Purkinje reflexes and the prism cover test].

    PubMed

    Barry, J C; Backes, A

    1998-04-01

    The alternating prism and cover test is the conventional test for the measurement of the angle of strabismus. The error induced by the prismatic effect of glasses is typically about 27-30%/10 D. Alternatively, the angle of strabismus can be measured with methods based on Purkinje reflex positions. This study examines the differences between three such options, taking into account the influence of glasses. The studied system comprised the eyes with or without glasses, a fixation object, and a device for recording the eye position: in the case of the alternating prism and cover test, a prism bar was required; in the case of a Purkinje-reflex-based device, light sources for generating the reflexes and a camera for documenting the reflex positions were used. Measurements performed on model eyes and computer ray traces were used to analyze and compare the options. When a single corneal reflex is used, the misalignment of the corneal axis can be measured; the error in this measurement due to the prismatic effect of glasses was 7.6%/10 D, the smallest found in this study. The individual Hirschberg ratio can be determined by monocular measurements in three gaze directions. The angle of strabismus can be measured with Purkinje-reflex-based methods, provided that the fundamental differences between these methods and the alternating prism and cover test, as well as the influence of glasses and other sources of error, are taken into account.

  7. Remote Vibration Measurements at a Sud Aviation Alouette III Helicopter with a CW CO2-Laser System

    DTIC Science & Technology

    1993-09-28

    …mrad and a continuous output of 0.4 W. The purpose of the measurements was to measure the vibration spectra of a helicopter of the Dutch Air Force. The report covers heterodyne detection, non-linear effects in vibrometry, the vibration sources of a helicopter, the measuring method, and the measurement scenario.

  8. Information fusion methods based on physical laws.

    PubMed

    Rao, Nageswara S V; Reister, David B; Barhen, Jacob

    2005-01-01

    We consider systems whose parameters satisfy certain easily computable physical laws. Each parameter is directly measured by a number of sensors, or estimated using measurements, or both. The measurement process may introduce both systematic and random errors which may then propagate into the estimates. Furthermore, the actual parameter values are not known since every parameter is measured or estimated, which makes the existing sample-based fusion methods inapplicable. We propose a fusion method for combining the measurements and estimators based on the least violation of physical laws that relate the parameters. Under fairly general smoothness and nonsmoothness conditions on the physical laws, we show the asymptotic convergence of our method and also derive distribution-free performance bounds based on finite samples. For suitable choices of the fuser classes, we show that for each parameter the fused estimate is probabilistically at least as good as its best measurement as well as best estimate. We illustrate the effectiveness of this method for a practical problem of fusing well-log data in methane hydrate exploration.
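
    As a toy version of fusing measurements by the least violation of physical laws, the sketch below combines noisy measurements of three parameters that should satisfy a known relation (here an invented linear law, x0 + x1 = x2) by minimizing the weighted misfit to the measurements subject to that law; the law, weights, and data are placeholders, and the paper's fuser classes and performance bounds are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Noisy measurements of three parameters and their standard deviations (illustrative)
m = np.array([2.1, 3.4, 5.9])
sigma = np.array([0.2, 0.3, 0.4])

def law_violation(x):
    # Hypothetical physical law relating the parameters: x0 + x1 - x2 = 0
    return x[0] + x[1] - x[2]

def objective(x):
    # Weighted misfit to the raw measurements
    return np.sum(((x - m) / sigma) ** 2)

result = minimize(objective, x0=m, method="SLSQP",
                  constraints=[{"type": "eq", "fun": law_violation}])
print("raw measurements :", m, " law violation:", law_violation(m))
print("fused estimates  :", result.x.round(3), " law violation:", round(law_violation(result.x), 6))
```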

  9. DEVELOPMENT AND VALIDATION OF A METHOD FOR MEASURING EXEMPT VOLATILE ORGANIC COMPOUNDS AND CARBON DIOXIDE IN CONSUMER PRODUCTS

    EPA Science Inventory

    The report describes the development and validation of a method for measuring exempt volatile organic compounds (VOCs) and carbon dioxide in consumer products. (NOTE: Ground-level ozone can cause a variety of adverse health effects as well as agricultural and ecological damage. C...

  10. Pendant-Drop Surface-Tension Measurement On Molten Metal

    NASA Technical Reports Server (NTRS)

    Man, Kin Fung; Thiessen, David

    1996-01-01

    Method of measuring surface tension of molten metal based on pendant-drop method implemented in quasi-containerless manner and augmented with digital processing of image data. Electrons bombard lower end of sample rod in vacuum, generating hanging drop of molten metal. Surface tension of drop computed from its shape. Technique minimizes effects of contamination.

  11. USING RESPIROMETRY TO MEASURE HYDROGEN UTILIZATION IN SULFATE REDUCING BACTERIA IN THE PRESENCE OF COPPER AND ZINC

    EPA Science Inventory

    A respirometric method has been developed to measure hydrogen utilization by sulfate reducing bacteria (SRB). One application of this method has been to test inhibitory metals effects on the SRB culture used in a novel acid mine drainage treatment technology. As a control param...

  12. Alternative Methods in the Evaluation of School District Cash Management Programs.

    ERIC Educational Resources Information Center

    Dembowski, Frederick L.

    1980-01-01

    Empirically evaluates three measures of effectiveness of school district cash management: the rate of return method in common use and two new measures--efficiency rating and Net Present Value (NPV). The NPV approach allows examination of efficiency and provides a framework for evaluating other areas of educational policy. (Author/IRT)

  13. Simultaneous Measurement of Thermal Conductivity and Specific Heat in a Single TDTR Experiment

    NASA Astrophysics Data System (ADS)

    Sun, Fangyuan; Wang, Xinwei; Yang, Ming; Chen, Zhe; Zhang, Hang; Tang, Dawei

    2018-01-01

    The time-domain thermoreflectance (TDTR) technique is a powerful thermal-property measurement method, especially for nano-structures and material interfaces. Thermal properties can be obtained by fitting TDTR experimental data with a proper thermal transport model. In a single TDTR experiment, thermal properties with different sensitivity trends can be extracted simultaneously. However, thermal conductivity and volumetric heat capacity usually have similar sensitivity trends for most materials, so it is difficult to measure them simultaneously. In this work, we present a two-step data-fitting method to measure thermal conductivity and volumetric heat capacity simultaneously from a set of TDTR experimental data at a single modulation frequency. This method takes full advantage of the information carried by both the amplitude and phase signals and is a more convenient and effective solution than the frequency-domain thermoreflectance method. The relative error is below 5% for most cases. A silicon wafer sample was measured by the TDTR method to verify the two-step fitting method.

  14. Fault detection of helicopter gearboxes using the multi-valued influence matrix method

    NASA Technical Reports Server (NTRS)

    Chin, Hsinyung; Danai, Kourosh; Lewicki, David G.

    1993-01-01

    In this paper we investigate the effectiveness of a pattern-classifying fault detection system that is designed to cope with the variability of fault signatures inherent in helicopter gearboxes. For detection, the measurements are monitored on-line and flagged upon the detection of abnormalities, so that they can be attributed to a faulty or normal case. As such, the detection system is composed of two components: a quantization matrix to flag the measurements, and a multi-valued influence matrix (MVIM) that represents the behavior of the measurements during normal operation and at fault instances. Both the quantization matrix and the influence matrix are tuned during a training session so as to minimize the error in detection. To demonstrate the effectiveness of this detection system, it was applied to vibration measurements collected from a helicopter gearbox during normal operation and at various fault instances. The results indicate that the MVIM method provides excellent results when the full range of fault effects on the measurements is included in the training set.

  15. Physical and Psychological Effects of Head Treatment in the Supine Position Using Specialized Ayurveda-Based Techniques

    PubMed Central

    Iwawaki, Yoko; Uebaba, Kazuo; Yamamoto, Yoko; Takishita, Yukie; Harada, Kiyomi; Shibata, Akemi; Narumoto, Jin; Fukui, Kenji

    2016-01-01

    Abstract Objective: To clarify the physical and psychological effects of head massage performed in the supine position using Ayurveda-based techniques (head treatment). Design: Twenty-four healthy female students were included in the study. Using a crossover study design, the same participants were enrolled in both the head treatment intervention group and control group. There was an interval of 1 week or more between measurements. Outcome measures: The physiologic indices measured included blood pressure and heart rate fluctuations (high frequency and low frequency/high frequency). The psychological markers measured included liveliness, depression, and boredom using the visual analogue scale method. State anxiety was measured using the State-Trait Anxiety Inventory method. Results: The parasympathetic nerve activity increased immediately after head treatment. Upon completion of head treatment, the parasympathetic nerve predominance tended to gradually ease. Head treatment boosted freshness and relieved anxiety. Conclusions: The results suggest that head treatment has a relaxing and refreshing effect and may be used to provide comfort. PMID:27163344

  16. Is literature search training for medical students and residents effective? a literature review.

    PubMed

    Just, Melissa L

    2012-10-01

    This literature review examines the effectiveness of literature searching skills instruction for medical students or residents, as determined in studies that either measure learning before and after an intervention or compare test and control groups. The review reports on the instruments used to measure learning and on their reliability and validity, where available. Finally, a summary of learning outcomes is presented. Fifteen studies published between 1998 and 2011 were identified for inclusion in the review. The selected studies all include a description of the intervention, a summary of the test used to measure learning, and the results of the measurement. Instruction generally resulted in improvement in clinical question writing, search strategy construction, article selection, and resource usage. Although the findings of most of the studies indicate that the current instructional methods are effective, the study designs are generally weak, there is little evidence that learning persists over time, and few validated methods of skill measurement have been developed.

  17. Research on Bidding Decision-making of International Public-Private Partnership Projects

    NASA Astrophysics Data System (ADS)

    Hu, Zhen Yu; Zhang, Shui Bo; Liu, Xin Yan

    2018-06-01

    In order to select the optimal quasi-bidding project for an investment enterprise, a bidding decision-making model for international PPP projects was established in this paper. Firstly, the literature frequency statistics method was adopted to screen out the bidding decision-making indexes, and accordingly the bidding decision-making index system for international PPP projects was constructed. Then, the group decision-making characteristic root method, the entropy weight method, and an optimization model based on the least-squares method were used to set the decision-making index weights. The optimal quasi-bidding project was then determined by calculating the consistent effect measure of each decision-making index value and the comprehensive effect measure of each quasi-bidding project. Finally, the bidding decision-making model for international PPP projects was illustrated with a hypothetical case. This model can effectively serve as a theoretical foundation and technical support for the bidding decision-making of international PPP projects.
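
    Of the three weighting schemes mentioned, the entropy weight method is the most mechanical and easiest to sketch: normalize the decision matrix column-wise, compute each index's information entropy, and weight each index by its normalized (1 - entropy). The decision matrix below is invented for illustration.

```python
import numpy as np

# Hypothetical decision matrix: rows = candidate PPP projects, columns = benefit-type indexes
X = np.array([[0.8, 0.6, 0.9, 0.5],
              [0.6, 0.9, 0.7, 0.8],
              [0.9, 0.5, 0.6, 0.7]])

# Column-wise normalization to proportions
P = X / X.sum(axis=0)

# Information entropy of each index (k = 1/ln(m), m = number of alternatives)
m = X.shape[0]
k = 1.0 / np.log(m)
entropy = -k * np.sum(P * np.log(P), axis=0)

# Entropy weights: indexes with more dispersion (lower entropy) get larger weight
weights = (1.0 - entropy) / np.sum(1.0 - entropy)
print("entropy:", entropy.round(3))
print("weights:", weights.round(3))
```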

  18. A method to eliminate the influence of incident light variations in spectral analysis

    NASA Astrophysics Data System (ADS)

    Luo, Yongshun; Li, Gang; Fu, Zhigang; Guan, Yang; Zhang, Shengzhao; Lin, Ling

    2018-06-01

    The intensity of the light source and consistency of the spectrum are the most important factors influencing the accuracy in quantitative spectrometric analysis. An efficient "measuring in layer" method was proposed in this paper to limit the influence of inconsistencies in the intensity and spectrum of the light source. In order to verify the effectiveness of this method, a light source with a variable intensity and spectrum was designed according to Planck's law and Wien's displacement law. Intra-lipid samples with 12 different concentrations were prepared and divided into modeling sets and prediction sets according to different incident lights and solution concentrations. The spectra of each sample were measured with five different light intensities. The experimental results showed that the proposed method was effective in eliminating the influence caused by incident light changes and was more effective than normalized processing.

  19. Error analysis and correction of lever-type stylus profilometer based on Nelder-Mead Simplex method

    NASA Astrophysics Data System (ADS)

    Hu, Chunbing; Chang, Suping; Li, Bo; Wang, Junwei; Zhang, Zhongyu

    2017-10-01

    Because of its high measurement accuracy and wide range of applications, lever-type stylus profilometry is commonly used in industrial research. However, the error caused by the lever structure has a great influence on profile measurement, so this paper analyzes the errors of a high-precision, large-range lever-type stylus profilometer. The errors are corrected with the Nelder-Mead simplex method, and the results are verified by spherical surface calibration. The results show that this method can effectively reduce the measurement error and improve the accuracy of the stylus profilometer in large-scale measurement.
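
    A minimal sketch of using the Nelder-Mead simplex to calibrate an error model against a known reference surface: a hypothetical two-parameter lever-response model is fitted so that corrected readings of a calibration sphere best match the nominal profile. The error model, parameter values, and noise level are invented and are not the profilometer's actual error model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Known calibration sphere profile (radius R), sampled across x, in mm
R = 10.0
x = np.linspace(-6.0, 6.0, 121)
z_true = np.sqrt(R**2 - x**2) - R           # sphere cap, apex at z = 0

# Hypothetical lever-type response: arcsine nonlinearity of arm length L plus an offset
def instrument_response(z, L, offset):
    return L * np.arcsin(np.clip(z / L, -1, 1)) + offset

L_actual, offset_actual = 15.0, 0.05        # "unknown" instrument parameters (assumed)
reading = instrument_response(z_true, L_actual, offset_actual) + rng.normal(0, 0.0005, x.size)

def corrected(reading, L, offset):
    # Invert the assumed response model
    return L * np.sin((reading - offset) / L)

def cost(params):
    L, offset = params
    return np.sum((corrected(reading, L, offset) - z_true) ** 2)

fit = minimize(cost, x0=[20.0, 0.0], method="Nelder-Mead")
rms = np.sqrt(np.mean((corrected(reading, *fit.x) - z_true) ** 2))
print("fitted arm length and offset:", fit.x.round(3))
print(f"RMS error after correction [mm]: {rms:.5f}")
```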

  20. Atmospheric Effects on InSAR Measurements and Their Mitigation

    PubMed Central

    Ding, Xiao-li; Li, Zhi-wei; Zhu, Jian-jun; Feng, Guang-cai; Long, Jiang-ping

    2008-01-01

    Interferometric Synthetic Aperture Radar (InSAR) is a powerful technology for observing the Earth surface, especially for mapping the Earth's topography and deformations. InSAR measurements are however often significantly affected by the atmosphere as the radar signals propagate through the atmosphere whose state varies both in space and in time. Great efforts have been made in recent years to better understand the properties of the atmospheric effects and to develop methods for mitigating the effects. This paper provides a systematic review of the work carried out in this area. The basic principles of atmospheric effects on repeat-pass InSAR are first introduced. The studies on the properties of the atmospheric effects, including the magnitudes of the effects determined in the various parts of the world, the spectra of the atmospheric effects, the isotropic properties and the statistical distributions of the effects, are then discussed. The various methods developed for mitigating the atmospheric effects are then reviewed, including the methods that are based on PSInSAR processing, the methods that are based on interferogram modeling, and those that are based on external data such as GPS observations, ground meteorological data, and satellite data including those from the MODIS and MERIS. Two examples that use MODIS and MERIS data respectively to calibrate atmospheric effects on InSAR are also given. PMID:27873822

  1. Atmospheric Effects on InSAR Measurements and Their Mitigation.

    PubMed

    Ding, Xiao-Li; Li, Zhi-Wei; Zhu, Jian-Jun; Feng, Guang-Cai; Long, Jiang-Ping

    2008-09-03

    Interferometric Synthetic Aperture Radar (InSAR) is a powerful technology for observing the Earth surface, especially for mapping the Earth's topography and deformations. InSAR measurements are however often significantly affected by the atmosphere as the radar signals propagate through the atmosphere whose state varies both in space and in time. Great efforts have been made in recent years to better understand the properties of the atmospheric effects and to develop methods for mitigating the effects. This paper provides a systematic review of the work carried out in this area. The basic principles of atmospheric effects on repeat-pass InSAR are first introduced. The studies on the properties of the atmospheric effects, including the magnitudes of the effects determined in the various parts of the world, the spectra of the atmospheric effects, the isotropic properties and the statistical distributions of the effects, are then discussed. The various methods developed for mitigating the atmospheric effects are then reviewed, including the methods that are based on PSInSAR processing, the methods that are based on interferogram modeling, and those that are based on external data such as GPS observations, ground meteorological data, and satellite data including those from the MODIS and MERIS. Two examples that use MODIS and MERIS data respectively to calibrate atmospheric effects on InSAR are also given.

  2. A Re-evaluation of the Ferrozine Method for Dissolved Iron: The Effect of Organic Interferences

    NASA Astrophysics Data System (ADS)

    Balind, K.; Barber, A.; Gelinas, Y.

    2016-12-01

    Among the most commonly used analytical methods in geochemistry is the ferrozine method for determining dissolved iron concentration in water (1). This cheap and easy-to-use spectrophotometric method involves a complexing agent (ferrozine), a reducing agent (hydroxylamine-HCl) and a buffer (ammonium acetate with ammonium hydroxide). Previous studies have demonstrated that complex organic matter (OM) originating from the Suwannee River did not lead to a significant underestimation of the measured iron content in OM-amended iron solutions (2). The authors concluded that this method could be used even in organic-rich (i.e., 25 mg/L) waters. Here we compare the concentration of Fe measured using this spectrophotometric method to the total Fe measured by ICP-MS in the presence/absence of specific organic molecules to ascertain whether they interfere with the ferrozine method. We show that certain molecules with hydroxyl and carboxyl functional groups, as well as multi-dentate chelating species, have a significant effect on the measured iron concentrations. Two mechanisms are likely responsible for the inefficiency of this method in the presence of specific organic molecules: 1) incomplete reduction of Fe(III) bound to organic molecules, or 2) competition between the OM and ferrozine for the available iron. We address these possibilities separately by varying the experimental conditions. These methodological artifacts may have far-reaching implications due to the extensive use of this method. Stookey, L. L., Anal. Chem., 42, 779 (1970). Viollier, E., et al., Applied Geochem., 15, 785 (2000).

  3. Estimating intercellular surface tension by laser-induced cell fusion.

    PubMed

    Fujita, Masashi; Onami, Shuichi

    2011-12-01

    Intercellular surface tension is a key variable in understanding cellular mechanics. However, conventional methods are not well suited for measuring the absolute magnitude of intercellular surface tension because these methods require determination of the effective viscosity of the whole cell, a quantity that is difficult to measure. In this study, we present a novel method for estimating the intercellular surface tension at single-cell resolution. This method exploits the cytoplasmic flow that accompanies laser-induced cell fusion when the pressure difference between cells is large. Because the cytoplasmic viscosity can be measured using well-established technology, this method can be used to estimate the absolute magnitudes of tension. We applied this method to two-cell-stage embryos of the nematode Caenorhabditis elegans and estimated the intercellular surface tension to be in the 30-90 µN m⁻¹ range. Our estimate was in close agreement with cell-medium surface tensions measured at single-cell resolution.

  4. The effect of the blackout method on acquisition and generalization

    PubMed Central

    Wildemann, Donald G.; Holland, James G.

    1973-01-01

    In discrimination training with the Lyons' blackout method, pecks to the negative stimulus are prevented by darkening the chamber each time the subject approaches the negative stimulus. Stimulus generalization along a stimulus dimension was measured after training with this method. For comparison, generalization was also measured after reinforced responding to the positive stimulus without discrimination training, and after discrimination training by extinction of pecks to the negative stimulus. The blackout procedure and the extinction of pecks to the negative stimulus both produced a peak shift in the generalization gradients. The results suggest that after discrimination training in which the positive and negative stimuli are on the same continuum, the blackout method produces extinction-like effects on generalization tests. PMID:16811655

  5. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    PubMed

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performance. The experimental results show that, among the three tested algorithms, the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
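
    The sketch below shows the general shape of such a pipeline, using a plain particle swarm (without the natural-selection and simulated-annealing steps, so not the NAPSO variant) to tune the C and gamma of an RBF support vector regressor on synthetic lagged error data; the data, search ranges, and swarm settings are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Synthetic dynamic-error series: lagged values predict the next error value
t = np.arange(400)
err = np.sin(0.07 * t) + 0.3 * np.sin(0.31 * t) + rng.normal(0, 0.05, t.size)
lags = 5
X = np.column_stack([err[i:i + len(err) - lags] for i in range(lags)])
y = err[lags:]

def fitness(params):
    # Negative cross-validated MSE for an RBF SVR with given (log10 C, log10 gamma)
    C, gamma = 10.0 ** params[0], 10.0 ** params[1]
    model = SVR(kernel="rbf", C=C, gamma=gamma)
    return cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

# Plain particle swarm over log10(C) in [-1, 3] and log10(gamma) in [-3, 1]
n_particles, n_iter = 12, 20
lo, hi = np.array([-1.0, -3.0]), np.array([3.0, 1.0])
pos = rng.uniform(lo, hi, (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)].copy()

print(f"best log10(C), log10(gamma): {gbest.round(2)},  CV MSE: {-pbest_val.max():.5f}")
```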

  6. DC/DC Converter Stability Testing Study

    NASA Technical Reports Server (NTRS)

    Wang, Bright L.

    2008-01-01

    This report presents study results on hybrid DC/DC converter stability testing methods. An input impedance measurement method and a gain/phase margin measurement method were found to be effective for determining front-end oscillation and feedback-loop oscillation. In particular, certain channel power levels of converter input noise were found to correlate strongly with the gain/phase margins. Spectral analysis of converter input noise therefore offers a potential new method for evaluating the stability of all types of DC/DC converters.

  7. Surface dose measurements with commonly used detectors: a consistent thickness correction method.

    PubMed

    Reynolds, Tatsiana A; Higgins, Patrick

    2015-09-08

    The purpose of this study was to review the application of a consistent correction method for solid-state detectors, such as thermoluminescent dosimeters (chips (cTLD) and powder (pTLD)), optically stimulated detectors (both closed (OSL) and open (eOSL)), and radiochromic (EBT2) and radiographic (EDR2) films. In addition, surface dose measured with an extrapolation ionization chamber (PTW 30-360) was compared with that from other parallel-plate chambers: RMI-449 (Attix), Capintec PS-033, PTW 30-329 (Markus), and Memorial. Measurements of surface dose for 6 MV photons with parallel-plate chambers were used to establish a baseline. cTLD, OSL, EDR2, and EBT2 measurements were corrected using a method that involved irradiation of three-dosimeter stacks, followed by linear extrapolation of the individual dosimeter measurements to zero thickness. We determined the magnitude of the correction for each detector and compared our results against an alternative correction method based on effective thickness. All uncorrected surface dose measurements exhibited overresponse compared with the extrapolation chamber data, except for the Attix chamber. The closest match was obtained with the Attix chamber (-0.1%), followed by pTLD (0.5%), Capintec (4.5%), Memorial (7.3%), Markus (10%), cTLD (11.8%), eOSL (12.8%), EBT2 (14%), EDR2 (14.8%), and OSL (26%). Application of published ionization chamber corrections brought all the parallel-plate results to within 1% of the extrapolation chamber. The extrapolation method corrected all solid-state detector results to within 2% of baseline, except the OSLs. Extrapolation of dose using a simple three-detector stack has been demonstrated to provide thickness corrections for cTLD, eOSLs, EBT2, and EDR2 that can then be used for surface dose measurements. Standard OSLs are not recommended for surface dose measurement. The effective thickness method suffers from the subjectivity inherent in the inclusion of measured percentage depth-dose curves and is not recommended for these types of measurements.
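
    The thickness correction itself is a straight-line extrapolation: read stacks of one, two, and three dosimeters and extrapolate the readings back to zero thickness. A minimal sketch with invented readings:

```python
import numpy as np

# Effective thickness of 1-, 2-, and 3-detector stacks (e.g., mg/cm^2) and their readings.
# Numbers are illustrative only, not the paper's data.
thickness = np.array([38.0, 76.0, 114.0])
reading = np.array([22.4, 27.1, 31.6])        # relative dose, e.g., % of Dmax

slope, intercept = np.polyfit(thickness, reading, 1)
surface_dose = intercept                       # extrapolated reading at zero thickness
correction = surface_dose / reading[0]         # factor applied to a single-detector measurement

print(f"extrapolated zero-thickness dose: {surface_dose:.1f} %")
print(f"single-detector correction factor: {correction:.3f}")
```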

  8. An Inexpensive, Stable, and Accurate Relative Humidity Measurement Method for Challenging Environments

    PubMed Central

    Zhang, Wei; Ma, Hong; Yang, Simon X.

    2016-01-01

    In this research, an improved psychrometer is developed to solve practical issues arising in the relative humidity measurement of challenging drying environments for meat manufacturing in agricultural and agri-food industries. The design in this research focused on the structure of the improved psychrometer, signal conversion, and calculation methods. The experimental results showed the effect of varying psychrometer structure on relative humidity measurement accuracy. An industrial application to dry-cured meat products demonstrated the effective performance of the improved psychrometer being used as a relative humidity measurement sensor in meat-drying rooms. In a drying environment for meat manufacturing, the achieved measurement accuracy for relative humidity using the improved psychrometer was ±0.6%. The system test results showed that the improved psychrometer can provide reliable and long-term stable relative humidity measurements with high accuracy in the drying system of meat products. PMID:26999161

  9. An Inexpensive, Stable, and Accurate Relative Humidity Measurement Method for Challenging Environments.

    PubMed

    Zhang, Wei; Ma, Hong; Yang, Simon X

    2016-03-18

    In this research, an improved psychrometer is developed to solve practical issues arising in the relative humidity measurement of challenging drying environments for meat manufacturing in agricultural and agri-food industries. The design in this research focused on the structure of the improved psychrometer, signal conversion, and calculation methods. The experimental results showed the effect of varying psychrometer structure on relative humidity measurement accuracy. An industrial application to dry-cured meat products demonstrated the effective performance of the improved psychrometer being used as a relative humidity measurement sensor in meat-drying rooms. In a drying environment for meat manufacturing, the achieved measurement accuracy for relative humidity using the improved psychrometer was ±0.6%. The system test results showed that the improved psychrometer can provide reliable and long-term stable relative humidity measurements with high accuracy in the drying system of meat products.

  10. Towards the estimation of effect measures in studies using respondent-driven sampling.

    PubMed

    Rotondi, Michael A

    2014-06-01

    Respondent-driven sampling (RDS) is an increasingly common sampling technique to recruit hidden populations. Statistical methods for RDS are not straightforward due to the correlation between individual outcomes and subject weighting; thus, analyses are typically limited to estimation of population proportions. This manuscript applies the method of variance estimates recovery (MOVER) to construct confidence intervals for effect measures such as risk difference (difference of proportions) or relative risk in studies using RDS. To illustrate the approach, MOVER is used to construct confidence intervals for differences in the prevalence of demographic characteristics between an RDS study and convenience study of injection drug users. MOVER is then applied to obtain a confidence interval for the relative risk between education levels and HIV seropositivity and current infection with syphilis, respectively. This approach provides a simple method to construct confidence intervals for effect measures in RDS studies. Since it only relies on a proportion and appropriate confidence limits, it can also be applied to previously published manuscripts.
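
    A small sketch of the MOVER construction for a risk difference, assuming Wilson intervals for the individual proportions as a stand-in for the design-adjusted RDS limits the paper would use; the counts are illustrative.

```python
import math

def wilson_ci(x, n, z=1.96):
    p = x / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return p, centre - half, centre + half

def mover_risk_difference(x1, n1, x2, n2):
    p1, l1, u1 = wilson_ci(x1, n1)
    p2, l2, u2 = wilson_ci(x2, n2)
    d = p1 - p2
    lower = d - math.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = d + math.sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return d, lower, upper

print(mover_risk_difference(45, 120, 30, 110))   # (difference, lower, upper)
```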

  11. A Hydrogen Exchange Method Using Tritium and Sephadex: Its Application to Ribonuclease*

    PubMed Central

    Englander, S. Walter

    2012-01-01

    A new method for measuring the hydrogen exchange of macromolecules in solution is described. The method uses tritium to trace the movement of hydrogen, and utilizes Sephadex columns to effect, in about 2 minutes, a separation between tritiated macromolecule and tritiated solvent great enough to allow the measurement of bound tritium. High sensitivity and freedom from artifact is demonstrated and the possible value of the technique for investigation of other kinds of colloid-small molecule interaction is indicated. Competition experiments involving tritium, hydrogen, and deuterium indicate the absence of any equilibrium isotope effect in the ribonuclease-hydrogen isotope system, though a secondary kinetic isotope effect is apparent when ribonuclease is largely deuterated. Ribonuclease shows four clearly distinguishable kinetic classes of exchangeable hydrogens. Evidence is marshaled to suggest the independently measurable classes II, III, and IV (in order of decreasing rate of exchange) to represent “random-chain” peptides, peptides involved in α-helix, and otherwise shielded side-chain and peptide hydrogens, respectively. PMID:14075117

  12. Diagnostics of Robust Growth Curve Modeling Using Student's "t" Distribution

    ERIC Educational Resources Information Center

    Tong, Xin; Zhang, Zhiyong

    2012-01-01

    Growth curve models with different types of distributions of random effects and of intraindividual measurement errors for robust analysis are compared. After demonstrating the influence of distribution specification on parameter estimation, 3 methods for diagnosing the distributions for both random effects and intraindividual measurement errors…

  13. Can We Control Cheating in the Classroom?

    ERIC Educational Resources Information Center

    Kerkvliet, Joe; Sigmund, Charles L.

    1999-01-01

    Examines the determinants of class-specific academic cheating on examinations, class-to-class differences in the severity of the cheating problem across 12 principles of economics classes, whether control measures are effective, and the relative effectiveness of deterrent measures. Considers methods for gathering data on cheating. (CMK)

  14. The Gas-Absorption/Chemical-Reaction Method for Measuring Air-Water Interfacial Area in Natural Porous Media

    NASA Astrophysics Data System (ADS)

    Lyu, Ying; Brusseau, Mark L.; El Ouni, Asma; Araujo, Juliana B.; Su, Xiaosi

    2017-11-01

    The gas-absorption/chemical-reaction (GACR) method used in chemical engineering to quantify gas-liquid interfacial area in reactor systems is adapted for the first time to measure the effective air-water interfacial area of natural porous media. Experiments were conducted with the GACR method, and two standard methods (X-ray microtomographic imaging and interfacial partitioning tracer tests) for comparison, using model glass beads and a natural sand. The results of a series of experiments conducted under identical conditions demonstrated that the GACR method exhibited excellent repeatability for measurement of interfacial area (Aia). Coefficients of variation for Aia were 3.5% for the glass beads and 11% for the sand. Extrapolated maximum interfacial areas (Am) obtained with the GACR method were statistically identical to independent measures of the specific solid surface areas of the media. For example, the Am for the glass beads is 29 (±1) cm-1, compared to 32 (±3), 30 (±2), and 31 (±2) cm-1 determined from geometric calculation, N2/BET measurement, and microtomographic measurement, respectively. This indicates that the method produced accurate measures of interfacial area. Interfacial areas determined with the GACR method were similar to those obtained with the standard methods. For example, Aias of 47 and 44 cm-1 were measured with the GACR and XMT methods, respectively, for the sand at a water saturation of 0.57. The results of the study indicate that the GACR method is a viable alternative for measuring air-water interfacial areas. The method is relatively quick, inexpensive, and requires no specialized instrumentation compared to the standard methods.

  15. MO-FG-CAMPUS-IeP2-01: Characterization of Beam Shaping Filters and Photon Spectra From HVL Profiles in CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bujila, R; Royal Institute of Technology, Stockholm; Kull, L

    Purpose: Advanced dosimetry in CT (e.g. the Monte Carlo method) requires an accurate characterization of the shaped filter and radiation quality used during a scan. The purpose of this work was to develop a method where half value layer (HVL) profiles along shaped filters could be made. From the HVL profiles the beam shaping properties and effective photon spectrum for a particular scan can be inferred. Methods: A measurement rig was developed to allow determinations of the HVL under a scatter-free narrow-beam geometry and constant focal spot to ionization chamber distance for different fan angles. For each fan angle the HVL is obtained by fitting the transmission of radiation through different thicknesses of an Al absorber (type 1100) using an appropriate model. The effective Al thickness of shaped filters and effective photon spectra are estimated using a model of photon emission from a Tungsten anode. This method is used to obtain the effective photon spectra and effective Al thickness of shaped filters for a CT scanner recently introduced to the market. Results: This study resulted in a set of effective photon spectra (central ray) for each kVp along with effective Al thicknesses of the different shaped filters. The effective photon spectra and effective Al thicknesses of shaped filters were used to obtain numerically approximated HVL profiles and compared to measured HVL profiles (mean absolute percentage error = 0.02). The central axis HVLs found in the vendor's technical documentation were compared to approximated HVL values (mean absolute percentage error = 0.03). Conclusion: This work has resulted in a unique method of measuring HVL profiles along shaped filters in CT. Further, the effective photon spectra and the effective Al thicknesses of shaped filters that were obtained can be incorporated into Monte Carlo simulations.
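
    As a rough illustration of how an HVL can be pulled out of transmission readings behind increasing Al thicknesses, the sketch below fits a single-exponential attenuation model and solves for the half-value thickness. A real CT beam is polyenergetic, so the paper fits a more appropriate model; the readings here are invented.

```python
import numpy as np

thickness_mm = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
reading = np.array([1.00, 0.83, 0.69, 0.57, 0.47])      # normalized, made up

mu = -np.polyfit(thickness_mm, np.log(reading), 1)[0]   # effective attenuation, 1/mm
hvl = np.log(2.0) / mu
print(f"HVL ~ {hvl:.1f} mm Al")
```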

  16. Investigation of Aerosol Surface Area Estimation from Number and Mass Concentration Measurements: Particle Density Effect

    PubMed Central

    Ku, Bon Ki; Evans, Douglas E.

    2015-01-01

    For nanoparticles with nonspherical morphologies, e.g., open agglomerates or fibrous particles, it is expected that the actual density of agglomerates may be significantly different from the bulk material density. It is further expected that using the material density may upset the relationship between surface area and mass when a method for estimating aerosol surface area from number and mass concentrations (referred to as “Maynard’s estimation method”) is used. Therefore, it is necessary to quantitatively investigate how much the Maynard’s estimation method depends on particle morphology and density. In this study, aerosol surface area estimated from number and mass concentration measurements was evaluated and compared with values from two reference methods: a method proposed by Lall and Friedlander for agglomerates and a mobility based method for compact nonspherical particles using well-defined polydisperse aerosols with known particle densities. Polydisperse silver aerosol particles were generated by an aerosol generation facility. Generated aerosols had a range of morphologies, count median diameters (CMD) between 25 and 50 nm, and geometric standard deviations (GSD) between 1.5 and 1.8. The surface area estimates from number and mass concentration measurements correlated well with the two reference values when gravimetric mass was used. The aerosol surface area estimates from the Maynard’s estimation method were comparable to the reference method for all particle morphologies within the surface area ratios of 3.31 and 0.19 for assumed GSDs 1.5 and 1.8, respectively, when the bulk material density of silver was used. The difference between the Maynard’s estimation method and surface area measured by the reference method for fractal-like agglomerates decreased from 79% to 23% when the measured effective particle density was used, while the difference for nearly spherical particles decreased from 30% to 24%. The results indicate that the use of particle density of agglomerates improves the accuracy of the Maynard’s estimation method and that an effective density should be taken into account, when known, when estimating aerosol surface area of nonspherical aerosol such as open agglomerates and fibrous particles. PMID:26526560
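
    A hedged sketch of estimating surface area from number and mass concentrations under an assumed lognormal size distribution, in the spirit of the estimation method discussed above; the equations are the standard Hatch-Choate moment relations, and the input values are illustrative.

```python
import math

def surface_area_estimate(N, M, gsd, density):
    """N in particles/cm^3, M in g/cm^3, density in g/cm^3; returns cm^2 per cm^3 of air."""
    ln2 = math.log(gsd) ** 2
    # mean of d^3 over a lognormal distribution: CMD^3 * exp(4.5 ln^2(gsd))
    cmd = (6.0 * M / (math.pi * density * N * math.exp(4.5 * ln2))) ** (1.0 / 3.0)
    # mean of d^2: CMD^2 * exp(2 ln^2(gsd))
    return N * math.pi * cmd ** 2 * math.exp(2.0 * ln2)

# example: 1e5 particles/cm^3, 50 ug/m^3 (= 5e-11 g/cm^3), GSD 1.8, silver density 10.5 g/cm^3
print(surface_area_estimate(1e5, 5e-11, 1.8, 10.5))
```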

  17. Estimation of blade airloads from rotor blade bending moments

    NASA Technical Reports Server (NTRS)

    Bousman, William G.

    1987-01-01

    A method is developed to estimate the blade normal airloads by using measured flap bending moments; that is, the rotor blade is used as a force balance. The blade's rotating, in-vacuum mode shapes are calculated, and the airloads are then expressed as an algebraic sum involving the mode shapes, modal amplitudes, mass distribution, and frequency properties. The modal amplitudes are identified from the blade bending moments using the Strain Pattern Analysis method. The application of the method is examined using simulated flap bending moment data that have been calculated from measured airloads for a full-scale rotor in a wind tunnel. The estimated airloads are compared with the wind tunnel measurements. The effects of the number of measurements, the number of modes, and errors in the measurements and the blade properties are examined, and the method is shown to be robust.

  18. Simultaneous measurement of ventilation using tracer gas techniques and VOC concentrations in homes, garages and vehicles.

    PubMed

    Batterman, Stuart; Jia, Chunrong; Hatzivasilis, Gina; Godwin, Chris

    2006-02-01

    Air exchange rates and interzonal flows are critical ventilation parameters that affect thermal comfort, air migration, and contaminant exposure in buildings and other environments. This paper presents the development of an updated approach to measure these parameters using perfluorocarbon tracer (PFT) gases, the constant injection rate method, and adsorbent-based sampling of PFT concentrations. The design of miniature PFT sources using hexafluorotoluene and octafluorobenzene tracers, and the development and validation of an analytical GC/MS method for these tracers are described. We show that simultaneous deployment of sources and passive samplers, which is logistically advantageous, will not cause significant errors over multiday measurement periods in buildings, or over shorter periods in rapidly ventilated spaces like vehicle cabins. Measurement of the tracers over periods of hours to a week may be accomplished using active or passive samplers, and low method detection limits (<0.025 microg m(-3)) and high precisions (<10%) are easily achieved. The method obtains the effective air exchange rate (AER), which is relevant to characterizing long-term exposures, especially when ventilation rates are time-varying. In addition to measuring the PFT tracers, concentrations of other volatile organic compounds (VOCs) are simultaneously determined. Pilot tests in three environments (residence, garage, and vehicle cabin) demonstrate the utility of the method. The 4 day effective AER in the house was 0.20 h(-1), the 4 day AER in the attached garage was 0.80 h(-1), and 16% of the ventilation in the house migrated from the garage. The 5 h AER in a vehicle traveling at 100 km h(-1) under a low-to-medium vent condition was 92 h(-1), and this represents the highest speed test found in the literature. The method is attractive in that it simultaneously determines AERs, interzonal flows, and VOC concentrations over long and representative test periods. These measurements are practical, cost-effective, and helpful in indoor air quality and other investigations.
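
    The constant-injection tracer balance behind the effective AER is simple: at quasi-steady state the tracer concentration approaches S/(AER*V), so AER is roughly S/(C*V). A toy calculation (values invented, chosen only to mirror the order of magnitude of the house result above):

```python
def effective_aer(source_rate_ug_per_h, mean_conc_ug_per_m3, volume_m3):
    # quasi-steady-state mass balance: C ~ S / (AER * V)  ->  AER ~ S / (C * V)
    return source_rate_ug_per_h / (mean_conc_ug_per_m3 * volume_m3)

# e.g. a 20 ug/h source, 0.25 ug/m^3 average tracer level, 400 m^3 house -> 0.2 1/h
print(effective_aer(20.0, 0.25, 400.0))
```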

  19. Effect of Anisotropy on Shape Measurement Accuracy of Silicon Wafer Using Three-Point-Support Inverting Method

    NASA Astrophysics Data System (ADS)

    Ito, Yukihiro; Natsu, Wataru; Kunieda, Masanori

    This paper describes the influences of anisotropy found in the elastic modulus of monocrystalline silicon wafers on the measurement accuracy of the three-point-support inverting method which can measure the warp and thickness of thin large panels simultaneously. Deflection due to gravity depends on the crystal orientation relative to the positions of the three-point-supports. Thus the deviation of actual crystal orientation from the direction indicated by the notch fabricated on the wafer causes measurement errors. Numerical analysis of the deflection confirmed that the uncertainty of thickness measurement increases from 0.168µm to 0.524µm due to this measurement error. In addition, experimental results showed that the rotation of crystal orientation relative to the three-point-supports is effective for preventing wafer vibration excited by disturbance vibration because the resonance frequency of wafers can be changed. Thus, surface shape measurement accuracy was improved by preventing resonant vibration during measurement.

  20. Comparing temporal order judgments and choice reaction time tasks as indices of exogenous spatial cuing.

    PubMed

    Eskes, Gail A; Klein, Raymond M; Dove, Mary Beth; Coolican, Jamesie; Shore, David I

    2007-11-30

    Attentional disorders are common in individuals with neurological or psychiatric conditions and impact on recovery and outcome. Thus, it is critical to develop theory-based measures of attentional function to understand potential mechanisms underlying the disorder and to evaluate the effect of intervention. The present study compared two alternative methods to measure the effects of attentional cuing that could be used in populations of individuals who may not be able to make manual responses normally or may show overall slowing in responses. Spatial attention was measured with speeded and unspeeded methods using either manual or voice responses in two standard attention paradigms: the cued target discrimination reaction time (RT) paradigm and the unspeeded temporal order judgment (TOJ) task. The comparison of speeded and unspeeded tasks specifically addresses the concern about interpreting RT differences between cued and uncued trials (taken as a proxy for attention) in the context of drastically different baseline RTs. We found significant cuing effects for both tasks (speeded RT and untimed TOJ) and both response types (vocal and manual) giving clinicians and researchers alternative methods with which to measure the effects of attention in different populations who may not be able to perform the standard speeded RT task.

  1. Exploring the problem of mold growth and the efficacy of various mold inhibitor methods during moisture sorption isotherm measurements.

    PubMed

    Yu, X; Martin, S E; Schmidt, S J

    2008-03-01

    Mold growth is a common problem during the equilibration of food materials at high relative humidity values using the standard saturated salt slurry method. Exposing samples to toluene vapor and mixing samples with mold inhibitor chemicals are suggested methods for preventing mold growth while obtaining isotherms. However, no published research was found that examined the effect of mold growth on isotherm performance or the efficacy of various mold inhibitor methods, including their possible effect on the physicochemical properties of food materials. Therefore, the objectives of this study were to (1) explore the effect of mold growth on isotherm performance in a range of food materials, (2) investigate the effectiveness of 4 mold inhibitor methods, irradiation, 2 chemical inhibitors (potassium sorbate and sodium acetate), and toluene vapor, on mold growth on dent corn starch inoculated with A. niger, and (3) examine the effect of mold inhibitor methods on the physicochemical properties of dent corn starch, including isotherm performance, pasting properties, gelatinization temperature, and enthalpy. Mold growth was found to affect starch isotherm performance by contributing to weight changes during sample equilibration. Among the 4 mold inhibitor methods tested, irradiation and toluene vapor were found to be the most effective for inhibiting growth of A. niger on dent corn starch. However, both methods exhibited a significant impact on the starches' physicochemical properties, suggesting the need to probe the efficacy of other mold inhibitor methods and explore the use of new rapid isotherm instruments, which hamper mold growth by significantly decreasing measurement time.

  2. Some effects on SPM based surface measurement

    NASA Astrophysics Data System (ADS)

    Wenhao, Huang; Yuhang, Chen

    2005-01-01

    The scanning probe microscope (SPM) has become a powerful tool for nanotechnology, especially in surface nanometrology. However, false images and modifications frequently arise during SPM measurements of surfaces because of the complex interaction between the SPM tip and the surface. The origin lies not only in the tip material or shape, but also in the structure of the sample, so much attention is being paid to extracting true information from SPM images. In this paper, we present some simulation methods and reconstruction examples for microstructures and surface roughness based on SPM measurement. For example, in AFM measurement we consider the effects of tip shape and dimension, as well as the surface topography distribution in both height and space. Some simulation results are compared with other measurement methods to verify their reliability.

  3. Astronaut mass measurement using linear acceleration method and the effect of body non-rigidity

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Li, LuMing; Hu, ChunHua; Chen, Hao; Hao, HongWei

    2011-04-01

    An astronaut's body mass is an essential factor in health monitoring in space. The latest mass measurement device for the International Space Station (ISS) employs a linear acceleration method. The principle of this method is that the device generates a constant pulling force, and the astronaut is accelerated on a parallelogram motion guide which rotates at a large radius to achieve a nearly linear trajectory. The acceleration is calculated by regression analysis of the displacement-versus-time trajectory, and the body mass is calculated using the formula m = F/a. In actual flight, however, the device is unstable, with run-to-run deviations of 6-7 kg. This paper considers body non-rigidity to be the major cause of error and instability and analyzes its effects from different aspects. Body non-rigidity makes the acceleration of the center of mass (C.M.) oscillate and lag behind the point where the force is applied. Actual acceleration curves showed that the overall effect of body non-rigidity is an oscillation at about 7 Hz and a deviation of about 25%. To enhance body rigidity, better body restraints were introduced and a prototype based on the linear acceleration method was built. A measurement experiment was carried out on the ground on an air table. Three human subjects weighing 60-70 kg were measured. The average variance was 0.04 kg and the average measurement error was 0.4%. This study provides a reference for future development of China's own mass measurement device.
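
    A minimal sketch of the linear-acceleration principle: fit a quadratic to a displacement-time record, read off the acceleration, and apply m = F/a. The force value and the synthetic trajectory below are illustrative, not flight data.

```python
import numpy as np

F = 30.0                                    # constant pulling force, N (assumed)
t = np.linspace(0.0, 1.0, 200)              # s
true_a = 0.45                               # m/s^2 (synthetic "truth")
x = 0.5 * true_a * t ** 2 + 0.001 * np.random.default_rng(1).standard_normal(t.size)

coeffs = np.polyfit(t, x, 2)                # x ~ c2*t^2 + c1*t + c0
a_hat = 2.0 * coeffs[0]
print(f"estimated mass: {F / a_hat:.1f} kg")  # ~ 30 / 0.45 ~ 67 kg
```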

  4. Automated cerebral infarct volume measurement in follow-up noncontrast CT scans of patients with acute ischemic stroke.

    PubMed

    Boers, A M; Marquering, H A; Jochem, J J; Besselink, N J; Berkhemer, O A; van der Lugt, A; Beenen, L F; Majoie, C B

    2013-08-01

    Cerebral infarct volume as observed in follow-up CT is an important radiologic outcome measure of the effectiveness of treatment of patients with acute ischemic stroke. However, manual measurement of CIV is time-consuming and operator-dependent. The purpose of this study was to develop and evaluate a robust automated measurement of the CIV. The CIV in early follow-up CT images of 34 consecutive patients with acute ischemic stroke was segmented with an automated intensity-based region-growing algorithm, which includes partial volume effect correction near the skull, midline determination, and ventricle and hemorrhage exclusion. Two observers manually delineated the CIV. Interobserver variability of the manual assessments and the accuracy of the automated method were evaluated by using the Pearson correlation, Bland-Altman analysis, and Dice coefficients. The accuracy was defined as the correlation with the manual assessment as a reference standard. The Pearson correlation for the automated method compared with the reference standard was similar to the manual correlation (R = 0.98). The accuracy of the automated method was excellent with a mean difference of 0.5 mL with limits of agreement of -38.0-39.1 mL, which were more consistent than the interobserver variability of the 2 observers (-40.9-44.1 mL). However, the Dice coefficients were higher for the manual delineation. The automated method showed a strong correlation and accuracy with the manual reference measurement. This approach has the potential to become the standard in assessing the infarct volume as a secondary outcome measure for evaluating the effectiveness of treatment.
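
    A toy sketch of intensity-based region growing on a synthetic CT slice, assuming SciPy; the HU window is an assumption, and the paper's partial-volume, midline, ventricle, and hemorrhage handling is omitted.

```python
import numpy as np
from scipy import ndimage

def grow_region(image_hu, seed, low=14, high=32):
    mask = (image_hu >= low) & (image_hu <= high)   # infarct-like HU window (assumed)
    labels, _ = ndimage.label(mask)                 # connected components
    return labels == labels[seed]                   # keep the component containing the seed

slice_hu = np.full((64, 64), 40.0)                  # synthetic "healthy" background
slice_hu[20:35, 20:35] = 22.0                       # synthetic hypodense region
print(grow_region(slice_hu, (25, 25)).sum(), "voxels in grown region")
```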

  5. Performance of bias-correction methods for exposure measurement error using repeated measurements with and without missing data.

    PubMed

    Batistatou, Evridiki; McNamee, Roseanne

    2012-12-10

    It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. For expensive replicates, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study, because in the second stage, measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large, the regression calibration method, and the simulation extrapolation method. For the 2S design, either the problem of 'missing' data was ignored or the 'missing' data were imputed using multiple imputation. In both 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended, in contrast to our implementation of this method; the 'problematic' implementation did, however, improve substantially when multiple imputation was used. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. In both 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.
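
    A small sketch of the textbook univariate regression-calibration correction with two replicates: estimate the error variance from replicate differences, form the reliability ratio, and scale the naive slope. This is not the paper's implementation or its two-stage designs, and the simulated data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
true_x = rng.normal(0, 1, n)
w1 = true_x + rng.normal(0, 0.8, n)        # error-prone replicate 1
w2 = true_x + rng.normal(0, 0.8, n)        # error-prone replicate 2
y = 0.5 * true_x + rng.normal(0, 1, n)     # outcome; true slope 0.5

w_bar = (w1 + w2) / 2
naive_beta = np.polyfit(w_bar, y, 1)[0]    # attenuated slope

sigma2_u = np.mean((w1 - w2) ** 2) / 2     # error variance of one replicate
reliability = 1 - (sigma2_u / 2) / np.var(w_bar, ddof=1)
print(naive_beta, naive_beta / reliability)  # corrected slope ~ 0.5
```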

  6. Tensor-based dynamic reconstruction method for electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.

    2017-03-01

    Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for the understanding of the underlying physical or chemical mechanisms of the dynamic behaviors of the measurement objects. In real-world measurement environments, imaging objects are often in a dynamic process, and the exploitation of the spatial-temporal correlations related to the dynamic nature will contribute to improving the imaging quality. Different from existing imaging methods that are often used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor that consists of a low rank tensor and a sparse tensor within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low rank tensor models the similar spatial distribution information among frames, which is slowly changing over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which is rapidly changing over time. With the assistance of the Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function, with the considerations of the multi-frames measurement data, the dynamic evolution information of a time-varying imaging object and the characteristics of the low rank tensor and the sparse tensor, is proposed to convert the imaging task in the ECT measurement into a reconstruction problem of a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed via a batching pattern. The feasibility and effectiveness of the developed reconstruction method are numerically validated.
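
    A crude stand-in for the "slowly varying plus rapidly varying" split that motivates the method above: stack vectorized frames into a matrix, take a truncated SVD as the low-rank (shared) part, and soft-threshold the residual as the sparse (frame-specific) part. The paper's regularized tensor algorithm is considerably more involved; this only illustrates the decomposition idea.

```python
import numpy as np

def low_rank_plus_sparse(frames, rank=1, thresh=0.1):
    X = np.stack([f.ravel() for f in frames], axis=1)      # pixels x frames
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]                # shared, slowly varying part
    R = X - L
    S = np.sign(R) * np.maximum(np.abs(R) - thresh, 0.0)    # frame-specific changes
    return L, S

frames = [np.ones((8, 8)) + (i == 2) * np.eye(8) for i in range(5)]
L, S = low_rank_plus_sparse(frames)
print(np.abs(S).max())   # the frame-2 perturbation ends up in the sparse part
```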

  7. 3-D surface profilometry based on modulation measurement by applying wavelet transform method

    NASA Astrophysics Data System (ADS)

    Zhong, Min; Chen, Feng; Xiao, Chao; Wei, Yongchao

    2017-01-01

    A new analysis of 3-D surface profilometry based on the modulation measurement technique with the Wavelet Transform method is proposed. As a tool that excels in multi-resolution analysis and localization in the time and frequency domains, the Wavelet Transform method, with its good localized time-frequency analysis ability and effective de-noising capacity, can extract the modulation distribution more accurately than the Fourier Transform method. Especially for the analysis of complex objects, more details of the measured object can be retained. In this paper, the theoretical derivation of the Wavelet Transform method that obtains the modulation values from a captured fringe pattern is given. Both computer simulation and an elementary experiment are used to show the validity of the proposed method by comparison with the results of the Fourier Transform method. The results show that the Wavelet Transform method performs better than the Fourier Transform method in modulation retrieval.
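
    A hedged sketch of extracting a local modulation estimate from one fringe line with a continuous wavelet transform, assuming the PyWavelets package is available; the ridge magnitude tracks (is proportional to, not equal to) the local fringe modulation, and the synthetic fringe is illustrative.

```python
import numpy as np
import pywt

x = np.arange(1024)
modulation = 0.5 + 0.4 * np.exp(-((x - 512) / 200.0) ** 2)   # slowly varying modulation
fringe = modulation * np.cos(2 * np.pi * x / 16.0)            # fringe with 16-pixel period

scales = np.arange(4, 40)
coeffs, _ = pywt.cwt(fringe, scales, "morl")
ridge = np.abs(coeffs).argmax(axis=0)                         # best scale at each pixel
local_mod = np.abs(coeffs)[ridge, np.arange(len(x))]          # ridge magnitude per pixel
print(local_mod[500:505])
```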

  8. Estimation of non-solid lung nodule volume with low-dose CT protocols: effect of reconstruction algorithm and measurement method

    NASA Astrophysics Data System (ADS)

    Gavrielides, Marios A.; DeFilippo, Gino; Berman, Benjamin P.; Li, Qin; Petrick, Nicholas; Schultz, Kurt; Siegelman, Jenifer

    2017-03-01

    Computed tomography is the primary modality of choice for assessing the stability of nonsolid pulmonary nodules (sometimes referred to as ground-glass opacity) over three or more years, with change in size being the primary factor to monitor. Since volume extracted from CT is being examined as a quantitative biomarker of lung nodule size, it is important to examine factors affecting the performance of volumetric CT for this task. More specifically, the effect of reconstruction algorithm and measurement method in the context of low-dose CT protocols has been an under-examined area of research. In this phantom study we assessed volumetric CT with two different measurement methods (model-based and segmentation-based) for nodules with radiodensities in both the nonsolid (-800 HU and -630 HU) and solid (-10 HU) ranges, sizes of 5 mm and 10 mm, and two different shapes (spherical and spiculated). Imaging protocols included CTDIvol values typical of screening (1.7 mGy) and sub-screening (0.6 mGy) scans and different types of reconstruction algorithms across three scanners. Results showed that radiodensity was the factor contributing most to overall error based on ANOVA. The choice of reconstruction algorithm or measurement method did not substantially affect the accuracy of measurements; however, measurement method affected repeatability, with repeatability coefficients ranging from around 3-5% for the model-based estimator to around 20-30% across reconstruction algorithms for the segmentation-based method. The findings of the study can be valuable toward developing standardized protocols and performance claims for nonsolid nodules.

  9. Measuring Down: Evaluating Digital Storytelling as a Process for Narrative Health Promotion.

    PubMed

    Gubrium, Aline C; Fiddian-Green, Alice; Lowe, Sarah; DiFulvio, Gloria; Del Toro-Mejías, Lizbeth

    2016-05-15

    Digital storytelling (DST) engages participants in a group-based process to create and share narrative accounts of life events. We present key evaluation findings of a 2-year, mixed-methods study that focused on the effects of participating in the DST process on young Puerto Rican Latinas' self-esteem, social support, empowerment, and sexual attitudes and behaviors. Quantitative results did not show significant changes in the expected outcomes. However, in our qualitative findings we identified several ways in which the DST process had positive, health-bearing effects. We argue for the importance of "measuring down" to reflect the locally grounded, felt experiences of participants who engage in the process, as current quantitative scales do not "measure up" to accurately capture these effects. We end by suggesting the need to develop mixed-methods, culturally relevant, and sensitive evaluation tools that prioritize process effects as they inform intervention and health promotion. © The Author(s) 2016.

  10. Estimating scaled treatment effects with multiple outcomes.

    PubMed

    Kennedy, Edward H; Kangovi, Shreya; Mitra, Nandita

    2017-01-01

    In classical study designs, the aim is often to learn about the effects of a treatment or intervention on a single outcome; in many modern studies, however, data on multiple outcomes are collected and it is of interest to explore effects on multiple outcomes simultaneously. Such designs can be particularly useful in patient-centered research, where different outcomes might be more or less important to different patients. In this paper, we propose scaled effect measures (via potential outcomes) that translate effects on multiple outcomes to a common scale, using mean-variance and median-interquartile range based standardizations. We present efficient, nonparametric, doubly robust methods for estimating these scaled effects (and weighted average summary measures), and for testing the null hypothesis that treatment affects all outcomes equally. We also discuss methods for exploring how treatment effects depend on covariates (i.e., effect modification). In addition to describing efficiency theory for our estimands and the asymptotic behavior of our estimators, we illustrate the methods in a simulation study and a data analysis. Importantly, and in contrast to much of the literature concerning effects on multiple outcomes, our methods are nonparametric and can be used not only in randomized trials to yield increased efficiency, but also in observational studies with high-dimensional covariates to reduce confounding bias.
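
    A plug-in sketch of the two standardizations named above (mean/SD and median/IQR) computed from raw group data; this is not the paper's doubly robust estimator, and the synthetic outcomes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
treated = {"bp": rng.normal(120, 15, 200), "chol": rng.normal(190, 30, 200)}
control = {"bp": rng.normal(126, 15, 200), "chol": rng.normal(200, 30, 200)}

for name in treated:
    t, c = treated[name], control[name]
    pooled = np.concatenate([t, c])
    mean_sd_effect = (t.mean() - c.mean()) / pooled.std(ddof=1)       # mean/SD scale
    q75, q25 = np.percentile(pooled, [75, 25])
    median_iqr_effect = (np.median(t) - np.median(c)) / (q75 - q25)   # median/IQR scale
    print(name, round(mean_sd_effect, 2), round(median_iqr_effect, 2))
```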

  11. Practical considerations for measuring hydrogen concentrations in groundwater

    USGS Publications Warehouse

    Chapelle, F.H.; Vroblesky, D.A.; Woodward, J.C.; Lovley, D.R.

    1997-01-01

    Several practical considerations for measuring concentrations of dissolved molecular hydrogen (H2) in groundwater, including (1) sampling methods, (2) pumping methods, and (3) effects of well casing materials, were evaluated. Three different sampling methodologies (a downhole sampler, a gas-stripping method, and a diffusion sampler) were compared. The downhole sampler and gas-stripping methods gave similar results when applied to the same wells. The diffusion sampler, on the other hand, appeared to overestimate H2 concentrations relative to the downhole sampler. Of these methods, the gas-stripping method is better suited to field conditions because it is faster (~30 min for a single analysis as opposed to 2 h for the downhole sampler or 8 h for the diffusion sampler), the analysis is easier (less sample manipulation is required), and the data computations are more straightforward (H2 concentrations need not be corrected for water sample volume). Measurement of H2 using the gas-stripping method can be affected by different pumping equipment. Peristaltic, piston, and bladder pumps all gave similar results when applied to water produced from the same well. It was observed, however, that peristaltic-pumped water (which is drawn under a negative pressure) enhanced the gas-stripping process and equilibrated slightly faster than water from either piston or bladder pumps (which push water under a positive pressure). A direct current (dc) electrically driven submersible pump was observed to produce H2 and was not suitable for measuring H2 in groundwater. Measurements from two field sites indicate that iron or steel well casings produce H2, which masks H2 concentrations in groundwater. PVC-cased wells or wells cased with other materials that do not produce H2 are necessary for measuring H2 concentrations in groundwater.

  12. Molar cusp deformation evaluated by micro-CT and enamel crack formation to compare incremental and bulk-filling techniques.

    PubMed

    Oliveira, Laís Rani Sales; Braga, Stella Sueli Lourenço; Bicalho, Aline Arêdes; Ribeiro, Maria Tereza Hordones; Price, Richard Bengt; Soares, Carlos José

    2018-07-01

    To describe a method of measuring molar cusp deformation using micro-computed tomography (micro-CT), the propagation of enamel cracks using transillumination, and the effects of hygroscopic expansion after incremental and bulk-filling resin composite restorations. Twenty human molars received standardized Class II mesio-occlusal-distal cavity preparations. They were restored with either a bulk-fill resin composite, X-tra fil (XTRA), or a conventional resin composite, Filtek Z100 (Z100). The resin composites were tested for post-gel shrinkage using a strain-gauge method. Cusp deformation (CD) was evaluated both from images obtained with a micro-CT protocol and with the strain-gauge method. Enamel cracks were detected using transillumination. The post-gel shrinkage of Z100 was higher than that of XTRA (P < 0.001). The cusp deformation produced by Z100 was higher than that of XTRA, irrespective of the measurement method used (P < 0.001). The thinner lingual cusp always had a higher CD than the buccal cusp, irrespective of the measurement method (P < 0.001). A positive correlation (r = 0.78) was found between cusp deformation measured by micro-CT and by the strain-gauge method. After hygroscopic expansion of the resin composite, the cusp displacement recovered around 85% (P < 0.001). After restoration, Z100 produced more cracks than XTRA (P = 0.012). Micro-CT was an effective method for evaluating cusp deformation. Transillumination was effective for detecting enamel cracks. There were fewer negative effects of polymerization shrinkage in bulk-fill resin restorations using XTRA than with the conventional incremental filling technique using the conventional composite resin Z100. Shrinkage and cusp deformation are directly related to the formation of enamel cracks. Cusp deformation and crack propagation may increase the risk of tooth fracture. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Improving solar ultraviolet irradiance measurements by applying a temperature correction method for Teflon diffusers.

    PubMed

    Jäkel, Evelyn; den Outer, Peter N; Tax, Rick B; Görts, Peter C; Reinen, Henk A J M

    2007-07-10

    To establish trends in surface ultraviolet radiation levels, accurate and stable long-term measurements are required. The accuracy level of today's measurements has become high enough to notice even smaller effects that influence instrument sensitivity. Laboratory measurements of the sensitivity of the entrance optics have shown a decrease of as much as 0.07-0.1%/deg temperature increase. Since the entrance optics can heat to greater than 45 degrees C in Dutch summers, corrections are necessary. A method is developed to estimate the entrance optics temperatures from pyranometer measurements and meteorological data. The method enables us to correct historic data records for which temperature information is not available. The temperature retrieval method has an uncertainty of less than 2.5 degrees C, resulting in a 0.3% uncertainty in the correction to be performed. The temperature correction improves the agreement between modeled and measured doses and instrument intercomparison as performed within the Quality Assurance of Spectral Ultraviolet Measurements in Europe project. The retrieval method is easily transferable to other instruments.
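
    A minimal sketch of the kind of multiplicative correction implied by a sensitivity loss of roughly 0.07-0.1% per degree: scale the measured irradiance by the estimated sensitivity at the diffuser temperature. The coefficient and reference temperature below are illustrative, not the instrument-specific calibration.

```python
def correct_irradiance(measured, temp_c, ref_temp_c=20.0, coeff_per_k=0.0008):
    # sensitivity drops by ~0.08 %/K above the reference temperature (assumed value)
    sensitivity = 1.0 - coeff_per_k * (temp_c - ref_temp_c)
    return measured / sensitivity

print(correct_irradiance(1.000, 45.0))   # ~2% upward correction on a hot day
```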

  14. CRAG (Composite Research Advisory Group) Test Methods for the Measurement of the Engineering Properties of Fibre Reinforced Plastics

    DTIC Science & Technology

    1988-02-01

    Environmental effects: Method 900 - Background information on environmental effects; Method 901 - Method of assessment of diffusivity properties of fibre... environmental conditioning. Strain gauges can sometimes fail during prolonged fatigue testing, and it is therefore prudent to undertake a secondary check... fibre volume fraction of the laminate as described in section 1000. (iv) The environmental history of the specimen prior to test. (v) The environmental...

  15. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

    Temperature is usually treated as a source of fluctuation in near-infrared spectral measurement. Chemometric methods have been extensively studied to correct for the effect of temperature variations. However, temperature can also be considered a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has investigated the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method is proposed to improve prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method is proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compare the prediction performance of PLS models based on the random sampling method and the proposed methods. The results of experimental studies showed that the prediction performance was improved by using the proposed methods. Therefore, the MTCS and DTCS methods are alternative methods for improving prediction accuracy in near-infrared spectral measurement.

  16. Method and apparatus for monitoring two-phase flow. [PWR

    DOEpatents

    Sheppard, J.D.; Tong, L.S.

    1975-12-19

    A method and apparatus for monitoring two-phase flow is provided that is particularly related to the monitoring of transient two-phase (liquid-vapor) flow rates such as may occur during a pressurized water reactor core blow-down. The present invention essentially comprises the use of flanged wire screens or similar devices, such as perforated plates, to produce certain desirable effects in the flow regime for monitoring purposes. One desirable effect is a measurable and reproducible pressure drop across the screen. The pressure drop can be characterized for various known flow rates and then used to monitor nonhomogeneous flow regimes. Another useful effect of the use of screens or plates in nonhomogeneous flow is that such apparatus tends to create a uniformly dispersed flow regime in the immediate downstream vicinity. This is a desirable effect because it usually increases the accuracy of flow rate measurements determined by conventional methods.

  17. Advancement of Analysis Method for Electromagnetic Screening Effect of Mountain Tunnel

    NASA Astrophysics Data System (ADS)

    Okutani, Tamio; Nakamura, Nobuyuki; Terada, Natsuki; Fukuda, Mitsuyoshi; Tate, Yutaka; Inada, Satoshi; Itoh, Hidenori; Wakao, Shinji

    In this paper we report on the advancement of an analysis method for the electromagnetic screening effect of mountain tunnels using a multiple conductor circuit model. On AC electrified railways, managing the influence of electromagnetic induction caused by feeding circuits is a major issue. Tunnels are said to have a screening effect that reduces the electromagnetic induction because a large amount of steel is used in them. Recently, however, less screening effect can be expected because the New Austrian Tunneling Method (NATM), in which less steel is used than in conventional methods, has been adopted as the standard method for constructing mountain tunnels. We therefore measured and analyzed the actual screening effect of mountain tunnels constructed with NATM. In the process of the analysis we have advanced a method to analyze the screening effect more precisely. In this method we can adequately model the tunnel structure as part of a multiple conductor circuit.

  18. Nondestructive evaluation of composite materials by pulsed time domain methods in imbedded optical fibers

    NASA Technical Reports Server (NTRS)

    Claus, R. O.; Bennett, K. D.; Jackson, B. S.

    1986-01-01

    The application of fiber-optical time domain reflectometry (OTDR) to nondestructive quantitative measurements of distributed internal strain in graphite-epoxy composites, using optical fiber waveguides imbedded between plies, is discussed. The basic OTDR measurement system is described, together with the methods used to imbed optical fibers within composites. Measurement results, system limitations, and the effect of the imbedded fiber on the integrity of the host composite material are considered.

  19. Multimethod assessment of psychopathy in relation to factors of internalizing and externalizing from the Personality Assessment Inventory: the impact of method variance and suppressor effects.

    PubMed

    Blonigen, Daniel M; Patrick, Christopher J; Douglas, Kevin S; Poythress, Norman G; Skeem, Jennifer L; Lilienfeld, Scott O; Edens, John F; Krueger, Robert F

    2010-03-01

    Research to date has revealed divergent relations across factors of psychopathy measures with criteria of internalizing (INT; anxiety, depression) and externalizing (EXT; antisocial behavior, substance use). However, failure to account for method variance and suppressor effects has obscured the consistency of these findings across distinct measures of psychopathy. Using a large correctional sample, the current study employed a multimethod approach to psychopathy assessment (self-report, interview and file review) to explore convergent and discriminant relations between factors of psychopathy measures and latent criteria of INT and EXT derived from the Personality Assessment Inventory (Morey, 2007). Consistent with prediction, scores on the affective-interpersonal factor of psychopathy were negatively associated with INT and negligibly related to EXT, whereas scores on the social deviance factor exhibited positive associations (moderate and large, respectively) with both INT and EXT. Notably, associations were highly comparable across the psychopathy measures when accounting for method variance (in the case of EXT) and when assessing for suppressor effects (in the case of INT). Findings are discussed in terms of implications for clinical assessment and evaluation of the validity of interpretations drawn from scores on psychopathy measures. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  20. Radiotracer investigation in gold leaching tanks.

    PubMed

    Dagadu, C P K; Akaho, E H K; Danso, K A; Stegowski, Z; Furman, L

    2012-01-01

    Measurement and analysis of residence time distribution (RTD) is a classical method to investigate performance of chemical reactors. In the present investigation, the radioactive tracer technique was used to measure the RTD of aqueous phase in a series of gold leaching tanks at the Damang gold processing plant in Ghana. The objective of the investigation was to measure the effective volume of each tank and validate the design data after recent process intensification or revamping of the plant. I-131 was used as a radioactive tracer and was instantaneously injected into the feed stream of the first tank and monitored at the outlet of different tanks. Both sampling and online measurement methods were used to monitor the tracer concentration. The results of measurements indicated that both the methods provided identical RTD curves. The mean residence time (MRT) and effective volume of each tank was estimated. The tanks-in-series model with exchange between active and stagnant volume was used and found suitable to describe the flow structure of aqueous phase in the tanks. The estimated effective volume of the tanks and high degree of mixing in tanks could validate the design data and confirmed the expectation of the plant engineer after intensification of the process. Copyright © 2011 Elsevier Ltd. All rights reserved.
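
    The standard RTD bookkeeping behind the mean residence time and effective volume is a first-moment calculation on the outlet tracer curve. A toy example with a synthetic response (the flow rate and curve are invented):

```python
import numpy as np

t = np.linspace(0.0, 10.0, 501)            # h
c = t * np.exp(-t / 1.5)                   # synthetic outlet tracer response
mrt = (t * c).sum() / c.sum()              # first moment; uniform spacing cancels
flow = 250.0                               # m^3/h, assumed volumetric flow
print(f"MRT = {mrt:.2f} h, effective volume = {mrt * flow:.0f} m^3")
```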

  1. Long-term continuous acoustical suspended-sediment measurements in rivers – Theory, evaluation, and results from 14 stations on five rivers

    USGS Publications Warehouse

    Topping, David; Wright, Scott A.; Griffiths, Ronald; Dean, David

    2016-01-01

    We have developed a physically based method for using two acoustic frequencies to measure suspended-silt-and-clay concentration, suspended-sand concentration, and suspended-sand median grain size in river cross sections at 15-minute intervals over decadal timescales. The method is strongly grounded in the extensive scientific literature on the scattering of sound by suspensions of small particles. In particular, the method takes advantage of the specific theoretical relations among acoustic frequency, acoustic attenuation, acoustic backscatter, suspended-sediment concentration, and suspended-sediment grain-size distribution. We briefly describe the theory and methods, demonstrate the application of the method, and compute biases and errors in the method at 14 stations in the Colorado River and Rio Grande basins, where large numbers of suspended-sediment samples have been collected concurrently with acoustical measurements over many years. Quantification of errors in sediment-transport measurements made using this method is essential if the measurements are to be used effectively, e.g., to evaluate uncertainty in long-term sediment loads and budgets.

  2. Ultrasonic power measurement system based on acousto-optic interaction.

    PubMed

    He, Liping; Zhu, Fulong; Chen, Yanming; Duan, Ke; Lin, Xinxin; Pan, Yongjun; Tao, Jiaquan

    2016-05-01

    Ultrasonic waves are widely used, with applications including the medical, military, and chemical fields. However, there are currently no effective methods for ultrasonic power measurement. Previously, ultrasonic power measurement has been reliant on mechanical methods such as hydrophones and radiation force balances. This paper deals with ultrasonic power measurement based on an unconventional method: acousto-optic interaction. Compared with mechanical methods, the optical method has a greater ability to resist interference and also has reduced environmental requirements. Therefore, this paper begins with an experimental determination of the acoustic power in water contained in a glass tank using a set of optical devices. Because the light intensity of the diffraction image generated by acousto-optic interaction contains the required ultrasonic power information, specific software was written to extract the light intensity information from the image through a combination of filtering, binarization, contour extraction, and other image processing operations. The power value can then be obtained rapidly by processing the diffraction image using a computer. The results of this work show that the optical method offers advantages that include accuracy, speed, and a noncontact measurement method.

  3. Ultrasonic power measurement system based on acousto-optic interaction

    NASA Astrophysics Data System (ADS)

    He, Liping; Zhu, Fulong; Chen, Yanming; Duan, Ke; Lin, Xinxin; Pan, Yongjun; Tao, Jiaquan

    2016-05-01

    Ultrasonic waves are widely used, with applications including the medical, military, and chemical fields. However, there are currently no effective methods for ultrasonic power measurement. Previously, ultrasonic power measurement has been reliant on mechanical methods such as hydrophones and radiation force balances. This paper deals with ultrasonic power measurement based on an unconventional method: acousto-optic interaction. Compared with mechanical methods, the optical method has a greater ability to resist interference and also has reduced environmental requirements. Therefore, this paper begins with an experimental determination of the acoustic power in water contained in a glass tank using a set of optical devices. Because the light intensity of the diffraction image generated by acousto-optic interaction contains the required ultrasonic power information, specific software was written to extract the light intensity information from the image through a combination of filtering, binarization, contour extraction, and other image processing operations. The power value can then be obtained rapidly by processing the diffraction image using a computer. The results of this work show that the optical method offers advantages that include accuracy, speed, and a noncontact measurement method.
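
    A hedged sketch of the image-processing chain described in the two records above (filtering, binarization, blob extraction, and intensity summation), assuming NumPy and SciPy; the synthetic image stands in for a camera frame of the diffraction orders.

```python
import numpy as np
from scipy import ndimage

img = np.zeros((128, 128))
for k, col in enumerate((34, 64, 94)):                        # fake -1, 0, +1 orders
    img[60:68, col - 3:col + 3] = 100.0 + 60.0 * (k == 1)
img += np.random.default_rng(4).normal(0.0, 2.0, img.shape)   # camera noise

smoothed = ndimage.gaussian_filter(img, sigma=1.0)            # filtering
binary = smoothed > 30.0                                      # binarization
labels, n = ndimage.label(binary)                             # blob/contour extraction
order_intensity = ndimage.sum(img, labels, index=list(range(1, n + 1)))
print(order_intensity)                                        # inputs to the power model
```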

  4. Evaluation of an empirical monitor output estimation in carbon ion radiotherapy.

    PubMed

    Matsumura, Akihiko; Yusa, Ken; Kanai, Tatsuaki; Mizota, Manabu; Ohno, Tatsuya; Nakano, Takashi

    2015-09-01

    A conventional broad-beam method is applied to carbon ion radiotherapy at Gunma University Heavy Ion Medical Center. In this method, accelerated carbon ions are scattered by various beam line devices to form the 3D dose distribution. The physical dose per monitor unit (d/MU) at the isocenter therefore depends on the beam line parameters and should be calibrated by measurement in clinical practice. This study aims to develop a calculation algorithm for d/MU using beam line parameters. Two major factors, the range shifter dependence and the field aperture effect, are measured with a PinPoint chamber in a water phantom, in a setup identical to that used for monitor calibration in clinical practice. An empirical monitor calibration method based on the measurement results is developed using a simple algorithm, utilizing a linear function and a double-Gaussian pencil beam distribution to express the range shifter dependence and the field aperture effect. The range shifter dependence and the field aperture effect are evaluated to have errors of 0.2% and 0.5%, respectively. The proposed method successfully estimates d/MU with a difference of less than 1% with respect to the measurement results. Taking the measurement deviation of about 0.3% into account, this result is sufficiently accurate for clinical applications. An empirical procedure to estimate d/MU with a simple algorithm is thus established. This procedure frees beam time for more treatments, quality assurance, and other research endeavors.
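
    A small sketch of the two ingredients named above: a linear dependence on range-shifter thickness and a field-aperture factor from a double-Gaussian lateral spread integrated over a square field. All coefficients are illustrative, not the centre's calibration constants.

```python
import math

def field_factor(half_width_mm, w=0.9, sigma1=3.0, sigma2=15.0):
    # fraction of a 2D Gaussian contained inside a square field of given half-width
    def frac(sigma):
        g = math.erf(half_width_mm / (math.sqrt(2.0) * sigma))
        return g * g
    return w * frac(sigma1) + (1.0 - w) * frac(sigma2)

def d_per_mu(range_shifter_mm, half_width_mm, a=1.00, b=-0.004):
    # linear range-shifter dependence times the double-Gaussian aperture factor
    return (a + b * range_shifter_mm) * field_factor(half_width_mm)

print(d_per_mu(range_shifter_mm=20.0, half_width_mm=50.0))
```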

  5. ULTRAVIOLET DISINFECTION OF A SECONDARY EFFLUENT: MEASUREMENT OF DOSE AND EFFECTS OF FILTRATION

    EPA Science Inventory

    Ultraviolet (UV) disinfection of wastewater secondary effluent was investigated in a two-phase study to develop methods for measuring UV dose and to determine the effects of filtration on UV disinfection. The first phase of this study involved a pilot plant study comparing filtra...

  6. The Josephson Effect and e/h

    ERIC Educational Resources Information Center

    Clarke, John

    1970-01-01

    Discusses the theory of the Josephson Effect, the derivation of the Josephson voltage-frequency relation, and methods of measuring the fundamental constant ratio e/h. Various types of Josephson junctions are described. The impact of the measurement of e/h upon the fundamental constants and quantum electrodynamics is briefly discussed.…

  7. An accuracy measurement method for star trackers based on direct astronomic observation

    PubMed Central

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-01-01

    The star tracker is one of the most promising optical attitude measurement devices and is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will eventually determine the satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted to account for the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to in-orbit conditions and can satisfy the stringent requirements for high-accuracy star trackers. PMID:26948412
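
    A hedged sketch of the basic bookkeeping behind an astronomical accuracy test of this kind: rotate the star direction measured in the tracker frame into the inertial frame using the tracker's own attitude estimate, then log the error angle against the catalog direction. The rotation matrix and vectors below are placeholders, not the authors' data or coordinate conventions.

    ```python
    # Illustrative error-angle computation between a measured and a catalog star vector.
    import numpy as np

    def error_angle_deg(v_measured_body, attitude_body_to_inertial, v_catalog_inertial):
        v_meas = attitude_body_to_inertial @ v_measured_body
        v_meas = v_meas / np.linalg.norm(v_meas)
        v_cat = v_catalog_inertial / np.linalg.norm(v_catalog_inertial)
        cos_angle = np.clip(np.dot(v_meas, v_cat), -1.0, 1.0)
        return np.degrees(np.arccos(cos_angle))

    # Placeholder example: identity attitude and a small simulated pointing error.
    R = np.eye(3)
    v_cat = np.array([0.0, 0.0, 1.0])
    v_meas = np.array([1e-5, 0.0, 1.0])
    print(error_angle_deg(v_meas, R, v_cat) * 3600.0, "arcsec")
    ```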

  8. An accuracy measurement method for star trackers based on direct astronomic observation.

    PubMed

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-03-07

    The star tracker is one of the most promising optical attitude measurement devices and is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will eventually determine the satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted to account for the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to in-orbit conditions and can satisfy the stringent requirements for high-accuracy star trackers.

  9. Comparison of Direct and Indirect Methods of Measuring Arterial Blood Pressure in Healthy Male Rhesus Macaques (Macaca mulatta).

    PubMed

    France, Logan K; Vermillion, Meghan S; Garrett, Caroline M

    2018-01-01

    Blood pressure is a critical parameter for evaluating cardiovascular health, assessing effects of drugs and procedures, monitoring physiologic status during anesthesia, and making clinical decisions. The placement of an arterial catheter is the most direct and accurate method for measuring blood pressure; however, this approach is invasive and of limited use during brief sedated examinations. The objective of this study was to determine which method of indirect blood pressure monitoring was most accurate compared with measurement by direct arterial catheterization. In addition, we sought to determine the relative accuracy of each indirect method (compared with direct arterial measurement) at a given body location and to assess whether the accuracy of each indirect method was dependent on body location. We compared direct blood pressure measurements by means of catheterization of the saphenous artery with oscillometric and ultrasonic Doppler flow detection measurements at 3 body locations (forearm, distal leg, and tail base) in 16 anesthetized, male rhesus macaques. The results indicate that oscillometry at the forearm is the best indirect method and location for accurately and consistently measuring blood pressure in healthy male rhesus macaques.

  10. A study on measuring occlusal contact area using silicone impression materials: an application of this method to the bite force measurement system using the pressure-sensitive sheet.

    PubMed

    Ando, Katsuya; Kurosawa, Masahiro; Fuwa, Yuji; Kondo, Takamasa; Goto, Shigemi

    2007-11-01

    The aim of this study was to establish an objective and quantitative method of measuring occlusal contact areas. To this end, bite records were taken with a silicone impression material and a light transmission device was used to read the silicone impression material. To examine the effectiveness of this novel method, the occlusal contact area of the silicone impression material and its thickness limit of readable range were measured. Results of this study suggested that easy and highly accurate measurements of occlusal contact area could be obtained by selecting an optimal applied voltage of the light transmission device and an appropriate color of the silicone impression material.

  11. Research on measurement method of optical camouflage effect of moving object

    NASA Astrophysics Data System (ADS)

    Wang, Juntang; Xu, Weidong; Qu, Yang; Cui, Guangzhen

    2016-10-01

    Camouflage effectiveness measurement is an important part of camouflage technology: it tests and measures the camouflage effect of a target and the performance of camouflage equipment against tactical and technical requirements. Current camouflage effectiveness measurement in the optical band is aimed mainly at static targets and therefore cannot objectively reflect the dynamic camouflage effect of a moving target. This paper combines moving-object detection with camouflage effect evaluation, taking the digital camouflage of a moving object as the research object. The adaptive background update algorithm of Surendra was improved, and a method for detecting the optical camouflage effect of moving objects using the Lab color space was presented. The binary image of the moving object is extracted by this measurement technique, and in the image sequence characteristic parameters such as dispersion, eccentricity, complexity, and moment invariants are used to construct a feature vector space. The Euclidean distance for the moving target treated with digital camouflage was calculated; the average Euclidean distance over 375 frames was 189.45, indicating that the dispersion, eccentricity, complexity, and moment invariants of the digital camouflage pattern differ greatly from those of the moving target without digital camouflage. These measurement results indicate a good camouflage effect. Meanwhile, with the performance evaluation module, the correlation coefficient of the dynamic target image ranged from 0.0035 to 0.1275, with some fluctuation, reflecting the adaptability of the target to the background under dynamic conditions. In view of existing infrared camouflage technology, the next step is to carry out camouflage effect measurement of moving targets in the infrared band.
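
    One plausible way, sketched here with OpenCV, to build a shape feature vector (dispersion, eccentricity, complexity, moment invariants) from a binary moving-object mask and to compare two masks by Euclidean distance; the exact parameter definitions used in the paper may differ, and Hu moments stand in for the unspecified moment invariants.

    ```python
    # Hedged sketch of a shape feature vector and Euclidean distance between two
    # binary moving-object masks.  Parameter definitions are assumptions.
    import cv2
    import numpy as np

    def feature_vector(binary_mask):
        contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        c = max(contours, key=cv2.contourArea)            # largest detected object
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        complexity = perimeter**2 / (4 * np.pi * area)    # 1.0 for a perfect circle
        (_, _), (w, h), _ = cv2.minAreaRect(c)
        eccentricity = max(w, h) / max(min(w, h), 1e-9)   # aspect-ratio proxy
        m = cv2.moments(c)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        pts = c.reshape(-1, 2).astype(float)
        dispersion = np.mean(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy))
        hu = cv2.HuMoments(m).flatten()                   # moment invariants
        return np.concatenate(([dispersion, eccentricity, complexity], hu))

    def euclidean_distance(mask_a, mask_b):
        return float(np.linalg.norm(feature_vector(mask_a) - feature_vector(mask_b)))
    ```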

  12. A novel approach to calibrate the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements.

    PubMed

    Khoram, Nafiseh; Zayane, Chadia; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2016-03-15

    The calibration of the hemodynamic model that describes changes in blood flow and blood oxygenation during brain activation is a crucial step for successfully monitoring and possibly predicting brain activity. This in turn has the potential to provide diagnosis and treatment of brain diseases in early stages. We propose an efficient numerical procedure for calibrating the hemodynamic model using some fMRI measurements. The proposed solution methodology is a regularized iterative method equipped with a Kalman filtering-type procedure. The Newton component of the proposed method addresses the nonlinear aspect of the problem. The regularization feature is used to ensure the stability of the algorithm. The Kalman filter procedure is incorporated here to address the noise in the data. Numerical results obtained with synthetic data as well as with real fMRI measurements are presented to illustrate the accuracy, robustness to noise, and cost-effectiveness of the proposed method. We present numerical results that clearly demonstrate that the proposed method outperforms the Cubature Kalman Filter (CKF), one of the most prominent existing numerical methods. We have designed an iterative numerical technique, called the TNM-CKF algorithm, for calibrating the mathematical model that describes the single-event related brain response when fMRI measurements are given. The method appears to be highly accurate and effective in reconstructing the BOLD signal even when the measurements are tainted with a high noise level (as high as 30%). Published by Elsevier B.V.
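
    A minimal, generic sketch of the Tikhonov-regularized Gauss-Newton update that the abstract describes in outline; this is not the published TNM-CKF algorithm, and the Kalman-filter treatment of measurement noise is deliberately omitted. The toy residual and Jacobian are assumptions for illustration only.

    ```python
    # Generic regularized Gauss-Newton step for a residual r(theta) = model(theta) - data.
    import numpy as np

    def regularized_gauss_newton(residual, jacobian, theta0, lam=1e-2, n_iter=20):
        theta = np.asarray(theta0, dtype=float)
        for _ in range(n_iter):
            r = residual(theta)
            J = jacobian(theta)
            # Tikhonov term lam*I stabilizes the ill-posed normal equations.
            step = np.linalg.solve(J.T @ J + lam * np.eye(theta.size), J.T @ r)
            theta = theta - step
        return theta

    # Toy usage: fit y = a * exp(-b * t) to noisy data.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 5, 50)
    true = np.array([2.0, 0.7])
    y = true[0] * np.exp(-true[1] * t) + 0.01 * rng.normal(size=t.size)
    res = lambda th: th[0] * np.exp(-th[1] * t) - y
    jac = lambda th: np.column_stack([np.exp(-th[1] * t),
                                      -th[0] * t * np.exp(-th[1] * t)])
    print(regularized_gauss_newton(res, jac, [1.0, 1.0]))
    ```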

  13. Measuring Government Effectiveness and Its Consequences for Social Welfare in Sub-Saharan African Countries

    ERIC Educational Resources Information Center

    Sacks, Audrey; Levi, Margaret

    2010-01-01

    We introduce a method for measuring effective government and modeling its consequences for social welfare at the individual level. Our focus is on the experiences of citizens living in African countries where famine remains a serious threat. If a government is effective, it will be able to deliver goods that individuals need to improve their…

  14. Alternative Methods for Estimating Plane Parameters Based on a Point Cloud

    NASA Astrophysics Data System (ADS)

    Stryczek, Roman

    2017-12-01

    Non-contact measurement techniques carried out using triangulation optical sensors are increasingly popular in measurements with the use of industrial robots directly on production lines. The result of such measurements is often a cloud of measurement points that is characterized by considerable measurement noise, the presence of a number of points that differ from the reference model, and excessive errors that must be eliminated from the analysis. To obtain vector information from the points contained in the cloud that describe the reference models, the data obtained during a measurement should be subjected to appropriate processing operations. The present paper analyses the suitability of methods known as RANdom Sample Consensus (RANSAC), the Monte Carlo Method (MCM), and Particle Swarm Optimization (PSO) for the extraction of the reference model. The effectiveness of the tested methods is illustrated by examples of measurement of the height of an object and the angle of a plane, which were made on the basis of experiments carried out under workshop conditions.
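
    A minimal RANSAC plane estimator over a noisy point cloud, as a hedged illustration of the first of the three compared approaches; the iteration count, inlier threshold, and synthetic data are assumptions, and the Monte Carlo and particle-swarm variants are not shown.

    ```python
    # Minimal RANSAC plane fit: repeatedly sample 3 points, build a candidate
    # plane, and keep the plane with the most inliers.
    import numpy as np

    def ransac_plane(points, n_iter=500, dist_thresh=0.5, rng=np.random.default_rng(0)):
        best_inliers, best_model = 0, None
        for _ in range(n_iter):
            sample = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-12:            # degenerate (collinear) sample
                continue
            normal = normal / norm
            d = -normal @ sample[0]
            dist = np.abs(points @ normal + d)
            inliers = np.count_nonzero(dist < dist_thresh)
            if inliers > best_inliers:
                best_inliers, best_model = inliers, (normal, d)
        return best_model, best_inliers

    # Synthetic example: a plane z = 2 with noise plus 20% gross outliers.
    rng = np.random.default_rng(1)
    pts = np.column_stack([rng.uniform(-50, 50, 1000),
                           rng.uniform(-50, 50, 1000),
                           2 + rng.normal(0, 0.2, 1000)])
    pts[::5] += rng.normal(0, 20, (200, 3))
    model, n_in = ransac_plane(pts)
    print(model, n_in)
    ```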

  15. Effective Methods of Teaching Moon Phases

    NASA Astrophysics Data System (ADS)

    Jones, Heather; Hintz, E. G.; Lawler, M. J.; Jones, M.; Mangrubang, F. R.; Neeley, J. E.

    2010-01-01

    This research investigates the effectiveness of several commonly used methods for teaching the causes of moon phases to sixth grade students. Common teaching methods being investigated are the use of diagrams, animations, modeling/kinesthetics, and direct observations of moon phases using a planetarium. Data for each method will be collected by a pre- and post-assessment of students' understanding of moon phases taught using one of the methods. The data will then be used to evaluate the effectiveness of each teaching method individually and comparatively, as well as each method's ability to discourage common misconceptions about moon phases. Results from this research will provide foundational data for the development of educational planetarium shows for deaf or other linguistically disadvantaged children.

  16. Influence of the weighing bar position in vessel on measurement of cement’s particle size distribution by using the buoyancy weighing-bar method

    NASA Astrophysics Data System (ADS)

    Tambun, R.; Sihombing, R. O.; Simanjuntak, A.; Hanum, F.

    2018-02-01

    The buoyancy weighing-bar method is a new, simple, and cost-effective method to determine the particle size distribution of both settling and floating particles. In this method, the density change in a suspension due to particle migration is measured by weighing the buoyancy acting on a weighing bar hung in the suspension, and the particle size distribution is then calculated from the length of the bar and the time-course change in its mass. The apparatus of this method consists of a weighing bar and an analytical balance with a hook for under-floor weighing. The weighing bar is used to detect the density change in the suspension. In this study we investigate the influence of the weighing-bar position in the vessel on settling particle size distribution measurements of cement using the buoyancy weighing-bar method. The vessel used in this experiment is a graduated cylinder with a diameter of 65 mm, and the weighing bar is placed either at the center or off center of the vessel. The diameter of the weighing bar in this experiment is 10 mm, and kerosene is used as the dispersion liquid. The results obtained show that the position of the weighing bar in the vessel has no significant effect on the determination of the cement’s particle size distribution by the buoyancy weighing-bar method, and the results are comparable to those measured by the settling balance method.
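
    A hedged sketch of the Stokes-settling relation that underlies sedimentation sizing of this type: a particle that has just settled past a bar of length L at time t has diameter D = sqrt(18 μ L / ((ρp − ρf) g t)). The property values below for kerosene and cement are typical literature figures, not values taken from the paper.

    ```python
    # Stokes diameter corresponding to a given settling time over the bar length.
    import numpy as np

    def stokes_diameter_um(t_s, bar_length_m=0.2,
                           mu_pa_s=1.6e-3,     # kerosene viscosity (approx.), Pa s
                           rho_p=3150.0,       # cement particle density, kg/m^3
                           rho_f=800.0,        # kerosene density, kg/m^3
                           g=9.81):
        d_m = np.sqrt(18.0 * mu_pa_s * bar_length_m / ((rho_p - rho_f) * g * t_s))
        return d_m * 1e6

    for t in (60, 600, 3600):                  # settling times in seconds
        print(t, "s ->", round(stokes_diameter_um(t), 1), "um")
    ```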

  17. Sensing of fluid viscoelasticity from piezoelectric actuation of cantilever flexural vibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jeongwon; Jeong, Seongbin; Kim, Seung Joon

    2015-01-15

    An experimental method is proposed to measure the rheological properties of fluids. The effects of fluids on the vibration actuated by piezoelectric patches were analyzed and used in measuring viscoelastic properties. Fluid-structure interactions induced changes in the beam vibration properties and frequency-dependent variations of the complex wavenumber of the beam structure were used in monitoring these changes. To account for the effects of fluid-structure interaction, fluids were modelled as a simple viscoelastic support at one end of the beam. The measured properties were the fluid’s dynamic shear modulus and loss tangent. Using the proposed method, the rheological properties of various non-Newtonian fluids were measured. The frequency range for which reliable viscoelasticity results could be obtained was 10–400 Hz. Viscosity standard fluids were tested to verify the accuracy of the proposed method, and the results agreed well with the manufacturer’s reported values. The simple proposed laboratory setup for measurements was flexible so that the frequency ranges of data acquisition were adjustable by changing the beam’s mechanical properties.

  18. Air Pollution Translations: A Bibliography with Abstracts - Volume 2.

    ERIC Educational Resources Information Center

    National Air Pollution Control Administration (DHEW), Raleigh, NC.

    This volume is the second in a series of compilations presenting abstracts and indexes of translations of technical air pollution literature. The 444 entries are grouped into 12 subject categories: General; Emission Sources; Atmospheric Interaction; Measurement Methods; Control Methods; Effects--Human Health; Effects--Plants and Livestock;…

  19. Monitoring colony-level effects of sublethal pesticide exposure on honey bees

    USDA-ARS?s Scientific Manuscript database

    The effects of sublethal pesticide exposure to honey bee colonies may be significant but difficult to detect in the field using standard visual assessment methods. Here we describe methods to measure the quantities of adult bees, brood and food resources by weighing hives and hive parts, by photogra...

  20. 75 FR 2523 - Office of Innovation and Improvement; Overview Information; Arts in Education Model Development...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-15

    ... that is based on rigorous scientifically based research methods to assess the effectiveness of a...) Relies on measurements or observational methods that provide reliable and valid data across evaluators... of innovative, cohesive models that are based on research and have demonstrated that they effectively...

  1. Enhancing the Possibility of Success by Measuring the Probability of Failure in an Educational Program.

    ERIC Educational Resources Information Center

    Brookhart, Susan M.; And Others

    1997-01-01

    Process Analysis is described as a method for identifying and measuring the probability of events that could cause the failure of a program, resulting in a cause-and-effect tree structure of events. The method is illustrated through the evaluation of a pilot instructional program at an elementary school. (SLD)

  2. Comparison of sprinkler droplet size and velocity measurements using a laser precipitation meter and photographic method

    USDA-ARS?s Scientific Manuscript database

    Kinetic energy of water droplets has a substantial effect on development of a soil surface seal and infiltration rate of bare soil. Methods for measuring sprinkler droplet size and velocity needed to calculate droplet kinetic energy have been developed and tested over the past 50 years, each with ad...

  3. Optimal Scoring Methods of Hand-Strength Tests in Patients with Stroke

    ERIC Educational Resources Information Center

    Huang, Sheau-Ling; Hsieh, Ching-Lin; Lin, Jau-Hong; Chen, Hui-Mei

    2011-01-01

    The purpose of this study was to determine the optimal scoring methods for measuring strength of the more-affected hand in patients with stroke by examining the effect of reducing measurement errors. Three hand-strength tests of grip, palmar pinch, and lateral pinch were administered at two sessions in 56 patients with stroke. Five scoring methods…

  4. Biological Field and Laboratory Methods for Measuring the Quality of Surface Waters and Effluents. Program Element 1BA027.

    ERIC Educational Resources Information Center

    Weber, Cornelius I., Ed.

    This Environmental Protection Agency manual was developed to provide pollution biologists with the most recent methods for measuring the effects of environmental contaminants on freshwater and marine organisms. The sections of this manual include: (1) Biometrics; (2) Plankton; (3) Periphyton; (4) Macrophyton; (5) Macroinvertebrates; (6) Fish; and…

  5. Absolute Steady-State Thermal Conductivity Measurements by Use of a Transient Hot-Wire System.

    PubMed

    Roder, H M; Perkins, R A; Laesecke, A; Nieto de Castro, C A

    2000-01-01

    A transient hot-wire apparatus was used to measure the thermal conductivity of argon with both steady-state and transient methods. The effects of wire diameter, eccentricity of the wire in the cavity, axial conduction, and natural convection were accounted for in the analysis of the steady-state measurements. Based on measurements on argon, the relative uncertainty at the 95 % level of confidence of the new steady-state measurements is 2 % at low densities. Using the same hot wires, the relative uncertainty of the transient measurements is 1 % at the 95 % level of confidence. This is the first report of thermal conductivity measurements made by two different methods in the same apparatus. The steady-state method is shown to complement normal transient measurements at low densities, particularly for fluids where the thermophysical properties at low densities are not known with high accuracy.
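
    A hedged illustration of the standard transient hot-wire working equation rather than the authors' full analysis: at long times the wire temperature rise grows as ΔT = (q / 4πλ) ln(t) + const, so the thermal conductivity follows from the slope of ΔT versus ln(t). The heat input and noise level below are assumptions; the reference conductivity is an approximate value for argon used only to generate synthetic data.

    ```python
    # Recover thermal conductivity from the slope of temperature rise vs ln(time).
    import numpy as np

    q = 0.5                      # heat input per unit wire length, W/m (assumed)
    lam_true = 0.0177            # approx. argon conductivity, W/(m K), synthetic data only
    t = np.linspace(0.1, 1.0, 50)                                  # seconds
    dT = q / (4 * np.pi * lam_true) * np.log(t) + 8.0
    dT += np.random.default_rng(0).normal(0, 0.002, t.size)        # measurement noise

    slope, _ = np.polyfit(np.log(t), dT, 1)
    lam_est = q / (4 * np.pi * slope)
    print(f"estimated thermal conductivity: {lam_est:.4f} W/(m K)")
    ```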

  6. Digital Moiré based transient interferometry and its application in optical surface measurement

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Tan, Yifeng; Wang, Shaopu; Hu, Yao

    2017-10-01

    Digital Moiré based transient interferometry (DMTI) is an effective non-contact testing method for optical surfaces. In a DMTI system, only one frame of a real interferogram is experimentally captured for the transient measurement of the surface under test (SUT). When combined with partial compensation interferometry (PCI), DMTI is especially appropriate for the measurement of aspheres with large apertures, large asphericity, or different surface parameters. A residual wavefront is allowed in PCI, so the same partial compensator can be applied to the detection of multiple SUTs. Excessive residual wavefront aberration, however, results in spectrum aliasing, which limits the dynamic range of DMTI. To solve this problem, a method based on the wavelet transform is proposed to extract the phase from a fringe pattern with spectrum aliasing. Simulation results demonstrate the validity of this method. The dynamic range of digital Moiré technology is effectively expanded, which makes DMTI promising for surface figure error measurement in the intelligent fabrication of aspheric surfaces.

  7. Evaluation of structural and thermophysical effects on the measurement accuracy of deep body thermometers based on dual-heat-flux method.

    PubMed

    Huang, Ming; Tamura, Toshiyo; Chen, Wenxi; Kanaya, Shigehiko

    2015-01-01

    To help pave a path toward the practical use of continuous unconstrained noninvasive deep body temperature measurement, this study aims to evaluate the structural and thermophysical effects on measurement accuracy for the dual-heat-flux method (DHFM). By considering the thermometer's height, radius, conductivity, density, and specific heat as variables affecting the accuracy of DHFM measurement, we investigated the relationship between those variables and accuracy using 3-D models based on the finite element method. The results of our simulation study show that accuracy is proportional to the radius but inversely proportional to the thickness of the thermometer when the radius is less than 30.0 mm, and is also inversely proportional to the heat conductivity of the heat insulator inside the thermometer. The insights from this study should help to establish guidelines for the design, fabrication, and optimization of DHFM-based thermometers, as well as their practical use. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Event-based measurement of boundary-layer winds and topographic effects with a small unmanned aircraft system (sUas)

    NASA Astrophysics Data System (ADS)

    Riddell, K.; Hugenholtz, C.

    2012-12-01

    Numerical models are invaluable tools for developing and testing hypotheses about interactions and feedbacks between wind and topography. However, field-based measurements are equally important for building and enhancing confidence in model output. Several field methods are available, including conventional approaches using tall masts equipped with an array of anemometers, as well as weather balloons, but few methods are able to match the level of detail available in model simulations of topographically modified windflow. Here we propose an alternative method that may enhance numerical models. The method involves a small unmanned aircraft system (sUas) equipped with a meteorological sensor payload. The sUas is a two-blade helicopter that weighs 5.5 kg and has a length of 1.32 m. We designed a simple measurement and control system using an Arduino micro-controller, which acquired measurements at pre-defined coordinates autonomously. The entire survey was pre-configured and uploaded to the aircraft, effectively avoiding the need for manual aircraft operation and data collection. We collected raw measurements at each waypoint, yielding a point cloud of windspeed data. During test flights the sUas was able to maintain a stable position (± 0.6 m vertical and horizontal) in wind speeds up to 50 km/h. We used the raw data to map the wind speed-up ratio relative to a reference anemometer. Although it would be preferable to acquire continuous measurements at each waypoint, the sUas method only provides a snapshot of wind at each location. However, despite this limitation, the sUas does fill a void in terms of spatial measurements within the boundary layer. It may be possible to enhance this method in the future through deployment of sUas swarms that measure wind concurrently at many locations. Furthermore, other sensors can be deployed on sUas for measuring aeolian processes such as dust.

  9. Vibration measurement with nonlinear converter in the presence of noise

    NASA Astrophysics Data System (ADS)

    Mozuras, Almantas

    2017-10-01

    Conventional vibration measurement methods use the linear properties of physical converters. These methods are strongly influenced by nonlinear distortions, because ideal linear converters are not available. Practically, any converter can be considered linear when the output signal is very small. However, the influence of noise increases significantly and the signal-to-noise ratio decreases at lower signals. When the output signal increases, the nonlinear distortions also grow. If wide-spectrum vibration is measured, conventional methods face harmonic distortion as well as intermodulation effects. The purpose of this research is to develop a method for measuring wide-spectrum vibration by using a converter described by a nonlinear function of type f(x), where x = x(t) denotes the dependence of coordinate x on time t due to the vibration. The parameter x(t) describing the vibration is expressed as a Fourier series. The spectral components of the converter output f(x(t)) are determined by using the Fourier transform. The obtained system of nonlinear equations is solved using the least-squares technique, which permits finding x(t) in the presence of noise. This method allows one to carry out absolute or relative vibration measurements. High resistance to noise is typical for the absolute vibration measurement, but it is necessary to know the Taylor expansion coefficients of the function f(x). If the Taylor expansion is not known, the relative measurement of vibration parameters is also possible, but with lower resistance to noise. This method allows one to eliminate the influence of nonlinear distortions on the measurement results, and consequently to eliminate harmonic distortion and intermodulation effects. The use of the nonlinear properties of the converter for measurement gives some advantages related to an increased frequency range of the output signal (and consequently an increased number of equations), which allows one to decrease the noise influence on the measurement results. The greater the nonlinearity, the lower the noise influence. This method enables the use of converters that are normally not suitable due to their high nonlinearity.
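
    A toy, heavily simplified illustration of the idea: for a converter with a quadratic nonlinearity f(x) = a1·x + a2·x², a single-tone vibration x(t) = A·cos(ωt) produces output lines at ω and 2ω with amplitudes a1·A and a2·A²/2, and a least-squares fit over both lines recovers A. The coefficients, amplitude, and noise level are assumptions; the paper treats the general multi-harmonic case.

    ```python
    # Recover a vibration amplitude from two noisy spectral lines of a
    # quadratically nonlinear converter, via least squares.
    import numpy as np
    from scipy.optimize import least_squares

    a1, a2 = 1.0, 0.3            # assumed (calibrated) Taylor coefficients of f(x)
    A_true = 0.8                 # vibration amplitude to be recovered

    rng = np.random.default_rng(2)
    meas = np.array([a1 * A_true, a2 * A_true**2 / 2]) + rng.normal(0, 0.01, 2)

    def residual(p):
        A = p[0]
        return np.array([a1 * A, a2 * A**2 / 2]) - meas

    A_est = least_squares(residual, x0=[0.5]).x[0]
    print(f"recovered amplitude: {A_est:.3f}")
    ```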

  10. Label-free viscosity measurement of complex fluids using reversal flow switching manipulation in a microfluidic channel

    PubMed Central

    Jun Kang, Yang; Ryu, Jeongeun; Lee, Sang-Joon

    2013-01-01

    The accurate viscosity measurement of complex fluids is essential for characterizing fluidic behaviors in blood vessels and in microfluidic channels of lab-on-a-chip devices. A microfluidic platform that accurately identifies biophysical properties of blood can be used as a promising tool for the early detection of cardiovascular and microcirculation diseases. In this study, a flow-switching phenomenon depending on hydrodynamic balancing in a microfluidic channel was adopted to conduct viscosity measurement of complex fluids with label-free operation. A microfluidic device for demonstrating this proposed method was designed to have two inlets for supplying the test and reference fluids, two side channels in parallel, and a junction channel connected to the midpoint of the two side channels. According to this proposed method, viscosities of various fluids with different phases (aqueous, oil, and blood) in relation to that of reference fluid were accurately determined by measuring the switching flow-rate ratio between the test and reference fluids, when a reverse flow of the test or reference fluid occurs in the junction channel. An analytical viscosity formula was derived to measure the viscosity of a test fluid in relation to that of the corresponding reference fluid using a discrete circuit model for the microfluidic device. The experimental analysis for evaluating the effects of various parameters on the performance of the proposed method revealed that the fluidic resistance ratio (RJL/RL, fluidic resistance in the junction channel (RJL) to fluidic resistance in the side channel (RL)) strongly affects the measurement accuracy. The microfluidic device with smaller RJL/RL values is helpful for accurately measuring the viscosity of the test fluid. The proposed method accurately measured the viscosities of various fluids, including single-phase (glycerin and plasma) and oil-water phase (oil vs. deionized water) fluids, compared with conventional methods. The proposed method was also successfully applied to measure viscosities of blood with varying hematocrits, chemically fixed RBCs, and channel sizes. Based on these experimental results, the proposed method can be effectively used to measure the viscosities of various fluids easily, without any fluorescent labeling and tedious calibration procedures. PMID:24404040
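
    A hedged sketch of the core hydraulic-circuit relation implied by the hydrodynamic balancing: when the reverse flow in the junction channel just vanishes, the pressure drops of the two parallel streams balance, so roughly μ_test ≈ μ_ref · Q_ref / Q_test at the switching flow-rate ratio. This simple ratio is an assumption for illustration; the paper's analytical formula includes the junction-channel (RJL/RL) correction that is omitted here.

    ```python
    # Relative viscosity from the switching flow-rate ratio (corrections omitted).
    def viscosity_from_switching(mu_ref_mpa_s, q_ref_ul_min, q_test_ul_min):
        return mu_ref_mpa_s * q_ref_ul_min / q_test_ul_min

    # Placeholder numbers: reference fluid of 1.0 mPa s, switching observed at
    # Q_test = 2.5 uL/min against Q_ref = 5.0 uL/min.
    print(viscosity_from_switching(1.0, 5.0, 2.5), "mPa s")
    ```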

  11. Comparing the Effectiveness of Self-Paced and Collaborative Frame-of-Reference Training on Rater Accuracy in a Large-Scale Writing Assessment

    ERIC Educational Resources Information Center

    Raczynski, Kevin R.; Cohen, Allan S.; Engelhard, George, Jr.; Lu, Zhenqiu

    2015-01-01

    There is a large body of research on the effectiveness of rater training methods in the industrial and organizational psychology literature. Less has been reported in the measurement literature on large-scale writing assessments. This study compared the effectiveness of two widely used rater training methods--self-paced and collaborative…

  12. Multi-Target State Extraction for the SMC-PHD Filter

    PubMed Central

    Si, Weijian; Wang, Liwei; Qu, Zhiyu

    2016-01-01

    The sequential Monte Carlo probability hypothesis density (SMC-PHD) filter has been demonstrated to be a favorable method for multi-target tracking. However, the time-varying target states need to be extracted from the particle approximation of the posterior PHD, which is difficult to implement due to the unknown relations between the large number of particles and the PHD peaks representing potential target locations. To address this problem, a novel multi-target state extraction algorithm is proposed in this paper. By exploiting the information of measurements and particle likelihoods in the filtering stage, we propose a validation mechanism which aims at selecting effective measurements and particles corresponding to detected targets. Subsequently, the state estimates of the detected and undetected targets are performed separately: the former are obtained from the particle clusters directed by effective measurements, while the latter are obtained from the particles corresponding to undetected targets via a clustering method. Simulation results demonstrate that the proposed method yields better estimation accuracy and reliability compared to existing methods. PMID:27322274

  13. Synergetic analgesic effect of the combination of arnica and hydroxyethyl salicylate in ethanolic solution following cutaneous application by transcutaneous electrostimulation.

    PubMed

    Kucera, Miroslav; Horácek, Ondrej; Kálal, Jan; Kolár, Pavel; Korbelar, Peter; Polesná, Zora

    2003-01-01

    A combination of the active agents arnica and hydroxyethyl salicylate (HES) in ethanolic solution (Sportino Acute Spray) is cutaneously applied for the treatment of sports injuries and diseases of the locomotor apparatus. The aim was to examine the efficacy and synergism of the single substances and the combination with regard to the analgesic effect after cutaneous application as well as to validate the method of transcutaneous electronic stimulation as a method of measuring the analgesic effect. In the present article, the method of transcutaneous electrostimulation was used in a randomized, controlled, single-blind trial on healthy volunteers to provide objective evidence that the combination of active agents displays a significantly greater analgesic effect than the individual active agents. Thus there is synergy between the active agents arnica and hydroxyethyl salicylate in the combination preparation. In addition, the effect of the vehicle ethanol and the reference substance water could be determined within the framework of these comparative experiments and the difference between the combination preparation and the individual substances arnica and HES could be shown. The method of transcutaneous electrostimulation used for the objective measurement of the analgesic effect was validated.

  14. Acoustic computer tomographic pyrometry for two-dimensional measurement of gases taking into account the effect of refraction of sound wave paths

    NASA Astrophysics Data System (ADS)

    Lu, J.; Wakai, K.; Takahashi, S.; Shimizu, S.

    2000-06-01

    An algorithm that takes into account the effect of refraction of sound wave paths is developed for acoustic computer tomography (CT). Incorporating a refraction algorithm into ordinary CT algorithms, which are based on Fourier transformation, is very difficult. In this paper, the least-squares method, which is capable of considering the refraction effect, is employed to reconstruct the two-dimensional temperature distribution. The refraction effect is handled by solving a set of differential equations derived from Fermat's principle and the calculus of variations. Because it is impossible to carry out the refraction analysis and the reconstruction of the temperature distribution simultaneously, the problem is solved using an iterative method. The measurement field is assumed to be circular, and 16 speakers, also serving as receivers, are set around it at equal intervals. The algorithm is checked through computer simulation with various kinds of temperature distributions. It is shown that the present method, which incorporates the refraction effect, can reconstruct temperature distributions with much greater accuracy than methods which do not.
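
    A hedged sketch of the straight-ray least-squares step only; the paper's contribution is to iterate this with refracted ray paths, which is omitted here. Travel times t_i = Σ_j L_ij·s_j are solved for the cell slownesses s_j, and temperature follows from the ideal-gas sound speed of air, c ≈ 20.05·√T. The geometry and data below are synthetic assumptions.

    ```python
    # Straight-ray least-squares reconstruction of a temperature field from
    # acoustic travel times (refraction iteration omitted).
    import numpy as np

    n_paths, n_cells = 40, 9
    rng = np.random.default_rng(3)
    L = rng.uniform(0.0, 0.3, (n_paths, n_cells))         # path length per cell, m

    T_true = np.linspace(400.0, 900.0, n_cells)           # kelvin
    s_true = 1.0 / (20.05 * np.sqrt(T_true))              # slowness, s/m
    times = L @ s_true + rng.normal(0, 1e-6, n_paths)     # "measured" travel times

    s_est, *_ = np.linalg.lstsq(L, times, rcond=None)
    T_est = (1.0 / (20.05 * s_est)) ** 2
    print(np.round(T_est, 1))
    ```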

  15. On the feasibility of the Chevron Notch Beam method to measure fracture toughness of fine-grained zirconia ceramics.

    PubMed

    Kailer, Andreas; Stephan, Marc

    2016-10-01

    The fracture toughness determination of fine-grained zirconia ceramics using the chevron notched beam method (CNB) was investigated to assess the feasibility of this method for quality assurance and material characterization. CNB tests were performed using four different yttria-stabilized zirconia ceramics under various testing modes and conditions, including displacement-controlled and load-rate-controlled four-point bending, to assess the influence of slow crack growth and to identify the most suitable test parameters. For comparison, tests using single-edge V-notch beams (SEVNB) were conducted. It was observed that the CNB method yields highly reproducible results. However, slow crack growth effects significantly affect the measured KIC values, especially when slow loading rates are used. To minimize the effect of slow crack growth, the application of high loading rates is recommended. Despite a certain effort needed for setting up a sample preparation routine, the CNB method is considered to be very useful for measuring and controlling the fracture toughness of zirconia ceramics. Copyright © 2016 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  16. The Effect of Laminar Flow on Rotor Hover Performance

    NASA Technical Reports Server (NTRS)

    Overmeyer, Austin D.; Martin, Preston B.

    2017-01-01

    The topic of laminar flow effects on hover performance is introduced with respect to some historical efforts where laminar flow was either measured or attempted. An analysis method is outlined using a combined blade element momentum method coupled to an airfoil analysis method, which includes the full e(sup N) transition model. The analysis results compared well with the measured hover performance including the measured location of transition on both the upper and lower blade surfaces. The analysis method is then used to understand the upper limits of hover efficiency as a function of disk loading. The impact of laminar flow is higher at low disk loading, but significant improvement in terms of power loading appears possible even up to high disk loading approaching 20 psf. An optimum planform design equation is derived for cases of zero profile drag and finite drag levels. These results are intended to be a guide for design studies and as a benchmark to compare higher fidelity analysis results. The details of the analysis method are given to enable other researchers to use the same approach for comparison to other approaches.
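
    A hedged sketch of the standard momentum-theory bookkeeping used in hover performance studies of this kind: ideal induced power, figure of merit, and power loading versus disk loading. The thrust, disk area, and shaft power below are illustrative placeholders, not the report's rotor.

    ```python
    # Figure of merit FM = T^(3/2) / (sqrt(2*rho*A) * P) and related hover metrics.
    import numpy as np

    rho = 1.225                               # air density, kg/m^3 (sea level)

    def ideal_power(thrust_n, disk_area_m2):
        return thrust_n**1.5 / np.sqrt(2.0 * rho * disk_area_m2)

    def figure_of_merit(thrust_n, disk_area_m2, shaft_power_w):
        return ideal_power(thrust_n, disk_area_m2) / shaft_power_w

    T, A, P = 10000.0, 20.0, 190000.0         # N, m^2, W (placeholders)
    print("disk loading   :", T / A, "N/m^2")
    print("power loading  :", T / P, "N/W")
    print("figure of merit:", round(figure_of_merit(T, A, P), 3))
    ```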

  17. Passive wireless strain monitoring of tyres using capacitance and tuning frequency changes

    NASA Astrophysics Data System (ADS)

    Matsuzaki, Ryosuke; Todoroki, Akira

    2005-08-01

    In-service strain monitoring of tyres of automobiles is quite effective for improving the reliability of tyres and anti-lock braking systems (ABS). Conventional strain gauges have high stiffness and require lead wires. Therefore, they are cumbersome for tyre strain measurements. In a previous study, the authors proposed a new wireless strain monitoring method that adopts the tyre itself as a sensor, with an oscillating circuit. This method is very simple and useful, but it requires a battery to activate the oscillating circuit. In the present study, the previous method for wireless tyre monitoring is improved to produce a passive wireless sensor. A specimen made from a commercially available tyre is connected to a tuning circuit comprising an inductance and a capacitance as a condenser. The capacitance change of the tyre alters the tuning frequency. This change of the tuned radio wave facilitates wireless measurement of the applied strain of the specimen without any power supply. This passive wireless method is applied to a specimen and the static applied strain is measured. Experiments demonstrate that the method is effective for passive wireless strain monitoring of tyres.
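
    A hedged sketch of the resonance relation the passive sensor relies on: the tuning frequency is f = 1/(2π√(LC)), so a strain-induced capacitance change shifts the tuned frequency. The inductance, base capacitance, and the linear calibration factor linking ΔC/C0 to strain are hypothetical placeholders, not values from the paper.

    ```python
    # Tuning frequency of an LC circuit and a back-calculated strain from a
    # measured frequency shift (calibration factor assumed).
    import numpy as np

    L_h = 10e-6                       # tuning-circuit inductance, H (assumed)
    C0_f = 100e-12                    # unstrained tyre capacitance, F (assumed)
    k_gauge = 2.0                     # assumed calibration: dC/C0 = k_gauge * strain

    def tuning_frequency(c_farad, l_henry=L_h):
        return 1.0 / (2.0 * np.pi * np.sqrt(l_henry * c_farad))

    def strain_from_frequency(f_measured):
        c_now = 1.0 / ((2.0 * np.pi * f_measured) ** 2 * L_h)
        return (c_now - C0_f) / (C0_f * k_gauge)

    f0 = tuning_frequency(C0_f)
    print("base tuning frequency:", round(f0 / 1e6, 3), "MHz")
    print("strain at 0.1% frequency shift:", strain_from_frequency(0.999 * f0))
    ```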

  18. Effectiveness of Biodiversity Surrogates for Conservation Planning: Different Measures of Effectiveness Generate a Kaleidoscope of Variation

    PubMed Central

    Grantham, Hedley S.; Pressey, Robert L.; Wells, Jessie A.; Beattie, Andrew J.

    2010-01-01

    Conservation planners represent many aspects of biodiversity by using surrogates with spatial distributions readily observed or quantified, but tests of their effectiveness have produced varied and conflicting results. We identified four factors likely to have a strong influence on the apparent effectiveness of surrogates: (1) the choice of surrogate; (2) differences among study regions, which might be large and unquantified (3) the test method, that is, how effectiveness is quantified, and (4) the test features that the surrogates are intended to represent. Analysis of an unusually rich dataset enabled us, for the first time, to disentangle these factors and to compare their individual and interacting influences. Using two data-rich regions, we estimated effectiveness using five alternative methods: two forms of incidental representation, two forms of species accumulation index and irreplaceability correlation, to assess the performance of ‘forest ecosystems’ and ‘environmental units’ as surrogates for six groups of threatened species—the test features—mammals, birds, reptiles, frogs, plants and all of these combined. Four methods tested the effectiveness of the surrogates by selecting areas for conservation of the surrogates then estimating how effective those areas were at representing test features. One method measured the spatial match between conservation priorities for surrogates and test features. For methods that selected conservation areas, we measured effectiveness using two analytical approaches: (1) when representation targets for the surrogates were achieved (incidental representation), or (2) progressively as areas were selected (species accumulation index). We estimated the spatial correlation of conservation priorities using an index known as summed irreplaceability. In general, the effectiveness of surrogates for our taxa (mostly threatened species) was low, although environmental units tended to be more effective than forest ecosystems. The surrogates were most effective for plants and mammals and least effective for frogs and reptiles. The five testing methods differed in their rankings of effectiveness of the two surrogates in relation to different groups of test features. There were differences between study areas in terms of the effectiveness of surrogates for different test feature groups. Overall, the effectiveness of the surrogates was sensitive to all four factors. This indicates the need for caution in generalizing surrogacy tests. PMID:20644726

  19. In Situ Effective Diffusion Coefficient Profiles in Live Biofilms Using Pulsed-Field Gradient Nuclear Magnetic Resonance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renslow, Ryan S.; Majors, Paul D.; McLean, Jeffrey S.

    2010-08-15

    Diffusive mass transfer in biofilms is characterized by the effective diffusion coefficient. It is well-documented that the effective diffusion coefficient can vary by location in a biofilm. The current literature is dominated by effective diffusion coefficient measurements for distinct cell clusters and stratified biofilms showing this spatial variation. Regardless of whether distinct cell clusters or surface-averaging methods are used, position-dependent measurements of the effective diffusion coefficient are currently: 1) invasive to the biofilm, 2) performed under unnatural conditions, 3) lethal to cells, and/or 4) spatially restricted to only certain regions of the biofilm. Invasive measurements can lead to inaccurate results and prohibit further (time dependent) measurements which are important for the mathematical modeling of biofilms. In this study our goals were to: 1) measure the effective diffusion coefficient for water in live biofilms, 2) monitor how the effective diffusion coefficient changes over time under growth conditions, and 3) correlate the effective diffusion coefficient with depth in the biofilm. We measured in situ two-dimensional effective diffusion coefficient maps within Shewanella oneidensis MR-1 biofilms using pulsed-field gradient nuclear magnetic resonance methods, and used them to calculate surface-averaged relative effective diffusion coefficient (Drs) profiles. We found that 1) Drs decreased from the top of the biofilm to the bottom, 2) Drs profiles differed for biofilms of different ages, 3) Drs profiles changed over time and generally decreased with time, 4) all the biofilms showed very similar Drs profiles near the top of the biofilm, and 5) the Drs profile near the bottom of the biofilm was different for each biofilm. Practically, our results demonstrate that advanced biofilm models should use a variable effective diffusivity which changes with time and location in the biofilm.
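
    A hedged sketch of the Stejskal-Tanner relation that underlies pulsed-field gradient NMR diffusion measurements: S/S0 = exp(−b·D) with b = (γ·g·δ)²·(Δ − δ/3). The pulse parameters and data below are synthetic assumptions, not the study's acquisition settings.

    ```python
    # Fit a diffusion coefficient from synthetic PFG-NMR signal attenuation.
    import numpy as np

    gamma = 2.675e8               # 1H gyromagnetic ratio, rad s^-1 T^-1
    delta = 2e-3                  # gradient pulse duration, s (assumed)
    Delta = 20e-3                 # diffusion time, s (assumed)
    g = np.linspace(0.0, 0.5, 8)  # gradient strengths, T/m (assumed)

    b = (gamma * g * delta) ** 2 * (Delta - delta / 3.0)

    D_true = 2.0e-9               # bulk water near room temperature, m^2/s
    S = np.exp(-b * D_true) * (1 + np.random.default_rng(4).normal(0, 0.01, g.size))

    D_fit = -np.polyfit(b, np.log(S), 1)[0]
    print(f"fitted D = {D_fit:.2e} m^2/s")
    ```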

  20. Reconstruction of solar spectral surface UV irradiances using radiative transfer simulations.

    PubMed

    Lindfors, Anders; Heikkilä, Anu; Kaurola, Jussi; Koskela, Tapani; Lakkala, Kaisa

    2009-01-01

    UV radiation exerts several effects concerning life on Earth, and spectral information on the prevailing UV radiation conditions is needed in order to study each of these effects. In this paper, we present a method for reconstruction of solar spectral UV irradiances at the Earth's surface. The method, which is a further development of an earlier published method for reconstruction of erythemally weighted UV, relies on radiative transfer simulations, and takes as input (1) the effective cloud optical depth as inferred from pyranometer measurements of global radiation (300-3000 nm); (2) the total ozone column; (3) the surface albedo as estimated from measurements of snow depth; (4) the total water vapor column; and (5) the altitude of the location. Reconstructed daily cumulative spectral irradiances at Jokioinen and Sodankylä in Finland are, in general, in good agreement with measurements. The mean percentage difference, for instance, is mostly within +/-8%, and the root mean square of the percentage difference is around 10% or below for wavelengths over 310 nm and daily minimum solar zenith angles (SZA) less than 70 degrees. In this study, we used pseudospherical radiative transfer simulations, which were shown to improve the performance of our method under large SZA (low Sun).

  1. Comparison of measurement methods with a mixed effects procedure accounting for replicated evaluations (COM3PARE): method comparison algorithm implementation for head and neck IGRT positional verification.

    PubMed

    Roy, Anuradha; Fuller, Clifton D; Rosenthal, David I; Thomas, Charles R

    2015-08-28

    Comparison of imaging measurement devices in the absence of a gold-standard comparator remains a vexing problem, especially in scenarios where multiple, non-paired, replicated measurements occur, as in image-guided radiotherapy (IGRT). As the number of commercially available IGRT systems presents a challenge in determining whether different IGRT methods may be used interchangeably, there is an unmet need for a conceptually parsimonious and statistically robust method to evaluate the agreement between two methods with replicated observations. Consequently, we sought to determine, using a previously reported head and neck positional verification dataset, the feasibility and utility of a Comparison of Measurement Methods with the Mixed Effects Procedure Accounting for Replicated Evaluations (COM3PARE), a unified conceptual schema and analytic algorithm based upon Roy's linear mixed effects (LME) model with Kronecker product covariance structure in a doubly multivariate set-up, for IGRT method comparison. An anonymized dataset consisting of 100 paired coordinate (X/Y/Z) measurements from a sequential series of head and neck cancer patients imaged near-simultaneously with cone beam CT (CBCT) and kilovoltage X-ray (KVX) imaging was used for model implementation. Software-suggested CBCT and KVX shifts for the lateral (X), vertical (Y) and longitudinal (Z) dimensions were evaluated for bias, inter-method (between-subject variation), intra-method (within-subject variation), and overall agreement using a script implementing COM3PARE with the MIXED procedure of the statistical software package SAS (SAS Institute, Cary, NC, USA). COM3PARE showed a statistically significant bias and a difference in inter-method agreement between CBCT and KVX in the Z-axis (both p-values < 0.01). Intra-method and overall agreement differences were noted as statistically significant for both the X- and Z-axes (all p-values < 0.01). Using pre-specified criteria based on intra-method agreement, CBCT was deemed preferable for X-axis positional verification, with KVX preferred for superoinferior alignment. The COM3PARE methodology was validated as feasible and useful in this pilot head and neck cancer positional verification dataset. COM3PARE represents a flexible and robust standardized analytic methodology for IGRT comparison. The implemented SAS script is included to encourage other groups to implement COM3PARE in other anatomic sites or IGRT platforms.

  2. The Effectiveness of PNF Versus Static Stretching on Increasing Hip-Flexion Range of Motion.

    PubMed

    Lempke, Landon; Wilkinson, Rebecca; Murray, Caitlin; Stanek, Justin

    2018-05-22

    Clinical Scenario: Stretching is applied for the purposes of injury prevention, increasing joint range of motion (ROM), and increasing muscle extensibility. Many researchers have investigated various methods and techniques to determine the most effective way to increase joint ROM and muscle extensibility. Despite the numerous studies conducted, controversy still remains within clinical practice and the literature regarding the best methods and techniques for stretching. Focused Clinical Question: Is proprioceptive neuromuscular facilitation (PNF) stretching more effective than static stretching for increasing hamstring muscle extensibility through increased hip ROM or increased knee extension angle (KEA) in a physically active population? Summary of Key Findings: Five studies met the inclusion criteria and were included. All 5 studies were randomized control trials examining mobility of the hamstring group. The studies measured hamstring ROM in a variety of ways. Three studies measured active KEA, 1 study measured passive KEA, and 1 study measured hip ROM via the single-leg raise test. Of the 5 studies, 1 study found greater improvements using PNF over static stretching for increasing hip flexion, and the remaining 4 studies found no significant difference between PNF stretching and static stretching in increasing muscle extensibility, active KEA, or hip ROM. Clinical Bottom Line: PNF stretching was not demonstrated to be more effective at increasing hamstring extensibility compared to static stretching. The literature reviewed suggests both are effective methods for increasing hip-flexion ROM. Strength of Recommendation: Using level 2 evidence and higher, the results show both static and PNF stretching effectively increase ROM; however, one does not appear to be more effective than the other.

  3. Measurement system and model for simultaneously measuring 6DOF geometric errors.

    PubMed

    Zhao, Yuqiong; Zhang, Bin; Feng, Qibo

    2017-09-04

    A measurement system to simultaneously measure six degree-of-freedom (6DOF) geometric errors is proposed. The measurement method is based on a combination of mono-frequency laser interferometry and laser fiber collimation. A simpler and more integrated optical configuration is designed. To compensate for the measurement errors introduced by error crosstalk, element fabrication error, laser beam drift, and nonparallelism of the two measurement beams, a unified measurement model, which can improve the measurement accuracy, is deduced and established using the ray-tracing method. A numerical simulation using the optical design software Zemax is conducted, and the results verify the correctness of the model. Several experiments are performed to demonstrate the feasibility and effectiveness of the proposed system and measurement model.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Dong-Hwan; Hong, Suk-Ho; National Fusion Research Institute

    Plasma characteristics in the far scrape-off layer region of a tokamak play a crucial role in stable plasma operation and its sustainability. Because of the huge size of the facility, electrical diagnostic systems for measuring plasma properties have extremely long cables, resulting in large stray currents. To overcome this problem, a sideband harmonic method was applied to the Korea Superconducting Tokamak Advanced Research tokamak plasma. The sideband method allows measurement of the electron temperature and the plasma density without the effect of the stray current. The measured plasma densities are compared with those from the interferometer, and the results show the reliability of the method.

  5. Gravitational Physics Research

    NASA Technical Reports Server (NTRS)

    Wu, S. T.

    2000-01-01

    Gravitational physics research at ISPAE is connected with NASA's Relativity Mission (Gravity Probe B (GP-B)) which will perform a test of Einstein's General Relativity Theory. GP-B will measure the geodetic and motional effect predicted by General Relativity Theory with extremely stable and sensitive gyroscopes in an Earth-orbiting satellite. Both effects cause a very small precession of the gyroscope spin axis. The goal of the GP-B experiment is the measurement of the gyroscope precession with very high precision. GP-B is being developed by a team at Stanford University and is scheduled for launch in the year 2001. The related UAH research is a collaboration with Stanford University and MSFC. This research is focused primarily on the error analysis and data reduction methods of the experiment but includes other topics concerned with experiment systems and their performance affecting the science measurements. The hydrogen maser is the most accurate and stable clock available. It will be used in future gravitational physics missions to measure relativistic effects such as the second order Doppler effect. The HMC experiment, currently under development at the Smithsonian Astrophysical Observatory (SAO), will test the performance and capability of the hydrogen maser clock for gravitational physics measurements. UAH in collaboration with the SAO science team will study methods to evaluate the behavior and performance of the HMC. The GP-B data analysis developed by the Stanford group involves complicated mathematical operations. This situation led to the idea of investigating alternative and possibly simpler mathematical procedures to extract the GP-B measurements from the data stream. Comparison of different methods would increase the confidence in the selected scheme.

  6. Cost-effectiveness analysis of risk-reduction measures to reach water safety targets.

    PubMed

    Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof; Pettersson, Thomas J R

    2011-01-01

    Identifying the most suitable risk-reduction measures in drinking water systems requires a thorough analysis of possible alternatives. In addition to the effects on the risk level, also the economic aspects of the risk-reduction alternatives are commonly considered important. Drinking water supplies are complex systems and to avoid sub-optimisation of risk-reduction measures, the entire system from source to tap needs to be considered. There is a lack of methods for quantification of water supply risk reduction in an economic context for entire drinking water systems. The aim of this paper is to present a novel approach for risk assessment in combination with economic analysis to evaluate risk-reduction measures based on a source-to-tap approach. The approach combines a probabilistic and dynamic fault tree method with cost-effectiveness analysis (CEA). The developed approach comprises the following main parts: (1) quantification of risk reduction of alternatives using a probabilistic fault tree model of the entire system; (2) combination of the modelling results with CEA; and (3) evaluation of the alternatives with respect to the risk reduction, the probability of not reaching water safety targets and the cost-effectiveness. The fault tree method and CEA enable comparison of risk-reduction measures in the same quantitative unit and consider costs and uncertainties. The approach provides a structured and thorough analysis of risk-reduction measures that facilitates transparency and long-term planning of drinking water systems in order to avoid sub-optimisation of available resources for risk reduction. Copyright © 2010 Elsevier Ltd. All rights reserved.
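
    A hedged sketch of the final ranking step described in the abstract: each risk-reduction alternative is scored by a cost-effectiveness ratio (annualised cost per unit of risk reduction) alongside the probability of not reaching the water safety target. All names and figures below are hypothetical placeholders, not results from the paper.

    ```python
    # Rank risk-reduction alternatives by cost-effectiveness ratio (CER).
    alternatives = [
        # name, annualised cost (kEUR/yr), risk reduction (risk units/yr), P(miss target)
        ("UV disinfection upgrade",    120.0, 0.8, 0.05),
        ("New raw-water intake",       450.0, 1.5, 0.02),
        ("Enhanced online monitoring",  60.0, 0.3, 0.12),
    ]

    scored = [(name, cost / reduction, p_miss)
              for name, cost, reduction, p_miss in alternatives]

    for name, cer, p_miss in sorted(scored, key=lambda row: row[1]):
        print(f"{name:28s} CER = {cer:6.1f} kEUR per unit risk  P(miss) = {p_miss:.2f}")
    ```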

  7. Comparison of ion-exchange resin counterions in the nutrient measurement of calcareous soils: implications for correlative studies of plant-soil relationships

    USGS Publications Warehouse

    Sherrod, S.K.; Belnap, Jayne; Miller, M.E.

    2003-01-01

    For more than 40 years, ion-exchange resins have been used to characterize nutrient bioavailability in terrestrial and aquatic ecosystems. To date, however, no standardized methodology has been developed, particularly with respect to the counterions that initially occupy resin exchange sites. To determine whether different resin counterions yield different measures of soil nutrients and rank soils differently with respect to their measured nutrient bioavailability, we compared nutrient measurements by three common counterion combinations (HCl, HOH, and NaHCO3). Five sandy calcareous soils were chosen to represent a range of soil characteristics at Canyonlands National Park, Utah, and resin capsules charged with the different counterions equilibrated in saturated pastes of these soils for one week. Data were converted to proportions of total ions of corresponding charge for ANOVA. Results from the different methods were not comparable with respect to any nutrient. Of eleven nutrients measured, all but iron (Fe2+), manganese (Mn2+), and zinc (Zn2+) differed significantly (p ≤ 0.05) as a function of soil × counterion interactions; Fe2+ and Zn2+ varied as functions of counterion alone. Of the counterion combinations, HCl-resins yielded the most net ion exchange with all measured nutrients except Na+, NH4+, and HPO42-, the three of which desorbed in the greatest quantities from HOH-resins. Conventional chemical extractions using ammonium acetate generally yielded high proportional values of Ca2+, K+, and Na+. Further, among-soil rankings of nutrient bioavailability varied widely among methods. This study highlights the fact that various ion-exchange resin techniques for measuring soil nutrients may have differential effects on the soil-resin environment and yield data that should not be compared nor considered interchangeable. The most appropriate methods for characterizing soil-nutrient bioavailability depend on soil characteristics and likely on the physiological uptake mechanisms of plants or functional groups of interest. The effects of different extraction techniques on nutrient measures should be understood before selecting an extraction method. For example, in the calcareous soils used for this experiment, nutrient extraction methods that alter soil carbonates through dissolution or precipitation could compromise the accurate measurement of plant-available nutrients. The implications of this study emphasize the universal importance of understanding the differential effects of alternate methods on soil chemistry.

  8. Comparison of ion-exchange resin counterions in the nutrient measurement of calcareous soils: Implications for correlative studies of plant-soil relationships

    USGS Publications Warehouse

    Sherrod, S.K.; Belnap, J.; Miller, M.E.

    2003-01-01

    For more than 40 years, ion-exchange resins have been used to characterize nutrient bioavailability in terrestrial and aquatic ecosystems. To date, however, no standardized methodology has been developed, particularly with respect to the counterions that initially occupy resin exchange sites. To determine whether different resin counterions yield different measures of soil nutrients and rank soils differently with respect to their measured nutrient bioavailability, we compared nutrient measurements by three common counterion combinations (HCl, HOH, and NaHCO3). Five sandy calcareous soils were chosen to represent a range of soil characteristics at Canyonlands National Park, Utah, and resin capsules charged with the different counterions equilibrated in saturated pastes of these soils for one week. Data were converted to proportions of total ions of corresponding charge for ANOVA. Results from the different methods were not comparable with respect to any nutrient. Of eleven nutrients measured, all but iron (Fe2+), manganese (Mn2+), and zinc (Zn2+) differed significantly (p ≤ 0.05) as a function of soil × counterion interactions; Fe2+ and Zn2+ varied as functions of counterion alone. Of the counterion combinations, HCl-resins yielded the most net ion exchange with all measured nutrients except Na+, NH4+, and HPO42-, the three of which desorbed in the greatest quantities from HOH-resins. Conventional chemical extractions using ammonium acetate generally yielded high proportional values of Ca2+, K+, and Na+. Further, among-soil rankings of nutrient bioavailability varied widely among methods. This study highlights the fact that various ion-exchange resin techniques for measuring soil nutrients may have differential effects on the soil-resin environment and yield data that should not be compared or considered interchangeable. The most appropriate methods for characterizing soil-nutrient bioavailability depend on soil characteristics and likely on the physiological uptake mechanisms of plants or functional groups of interest. The effects of different extraction techniques on nutrient measures should be understood before selecting an extraction method. For example, in the calcareous soils used for this experiment, nutrient extraction methods that alter soil carbonates through dissolution or precipitation could compromise the accurate measurement of plant-available nutrients. The implications of this study emphasize the universal importance of understanding the differential effects of alternate methods on soil chemistry.

  9. Absorbed dose measurement in low temperature samples: comparative methods using simulated material

    NASA Astrophysics Data System (ADS)

    Garcia, Ruth; Harris, Anthony; Winters, Martell; Howard, Betty; Mellor, Paul; Patil, Deepak; Meiner, Jason

    2004-09-01

    There is a growing need to reliably measure absorbed dose in low temperature samples, especially in the pharmaceutical and tissue banking industries. All dosimetry systems commonly used in the irradiation industry are temperature sensitive. Irradiation of low temperature samples, such as those packaged with dry ice, must therefore take these dosimeter temperature effects into consideration. This paper suggests a method to accurately deliver an absorbed radiation dose using dosimetry techniques designed to abrogate the skewing effects of low temperature environments on existing dosimetry systems.

  10. An Integrated Approach for Gear Health Prognostics

    NASA Technical Reports Server (NTRS)

    He, David; Bechhoefer, Eric; Dempsey, Paula; Ma, Jinghua

    2012-01-01

    In this paper, an integrated approach for gear health prognostics using particle filters is presented. The presented method effectively addresses the issues in applying particle filters to gear health prognostics by integrating several new components into a particle filter: (1) data mining based techniques to effectively define the degradation state transition and measurement functions using a one-dimensional health index obtained by whitening transform; (2) an unbiased l-step-ahead remaining useful life (RUL) estimator updated with measurement errors. The feasibility of the presented prognostics method is validated using data from a spiral bevel gear case study.
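
    As an illustration of the general particle-filter idea only (not the paper's specific integrated formulation), the sketch below tracks a one-dimensional health index with a simple drift model and estimates the remaining useful life as the time for the particles to reach a failure threshold. The drift rate, noise levels, threshold and measurement stream are all illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical degradation model: a health index drifts upward until it crosses
        # a failure threshold. All constants are illustrative, not values from the paper.
        DT, DRIFT, Q, R, THRESHOLD = 1.0, 0.05, 0.02, 0.10, 5.0
        N = 2000                                   # number of particles

        particles = rng.normal(0.0, 0.1, N)        # initial health-index particles
        weights = np.full(N, 1.0 / N)

        def step(p):
            """Propagate particles one time step through the degradation model."""
            return p + DRIFT * DT + rng.normal(0.0, Q, p.shape)

        def update(p, w, y):
            """Reweight particles by a Gaussian measurement likelihood and resample."""
            w = w * np.exp(-0.5 * ((y - p) / R) ** 2)
            w /= w.sum()
            idx = rng.choice(len(p), size=len(p), p=w)
            return p[idx], np.full(len(p), 1.0 / len(p))

        def rul_estimate(p, max_horizon=500):
            """Propagate each particle forward until it crosses the failure threshold."""
            p = p.copy()
            rul = np.full(len(p), max_horizon * DT, dtype=float)
            alive = np.ones(len(p), dtype=bool)
            for k in range(max_horizon):
                p[alive] = step(p[alive])
                crossed = alive & (p >= THRESHOLD)
                rul[crossed] = k * DT
                alive &= ~crossed
                if not alive.any():
                    break
            return rul.mean(), np.percentile(rul, [5, 95])

        # Assimilate a stream of synthetic health-index measurements.
        for y in [0.1, 0.2, 0.35, 0.4, 0.6]:
            particles = step(particles)
            particles, weights = update(particles, weights, y)

        print("Mean RUL and 90% interval:", rul_estimate(particles))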

  11. Does non-ionizing radiant energy affect determination of the evaporation rate by the gradient method?

    PubMed

    Kjartansson, S; Hammarlund, K; Oberg, P A; Sedin, G

    1991-01-01

    A study was performed to investigate whether measurements of the evaporation rate from the skin of newborn infants by the gradient method are affected by the presence of non-ionizing radiation from phototherapy equipment or a radiant heater. The evaporation rate was measured experimentally with the measuring sensors either exposed to or protected from non-ionizing radiation. Either blue light (phototherapy) or infrared light (radiant heater) was used; in the former case the evaporation rate was measured from a beaker of water covered with a semipermeable membrane, and in the latter case from the hand of an adult subject, aluminium foil or with the measuring probe in the air. No adverse effect on the determinations of the evaporation rate was found in the presence of blue light. Infrared radiation caused an error of 0.8 g/m2h when the radiant heater was set at its highest effect level or when the ambient humidity was high. At low and moderate levels the observed evaporation rate was not affected. It is concluded that when clinical measurements are made from the skin of newborn infants nursed under a radiant heater, the evaporation rate can appropriately be determined by the gradient method.

  12. Investigation on synthesis, growth and characterization of CdIn2S2Se2 single crystal grown by vertical Bridgman method

    NASA Astrophysics Data System (ADS)

    Vijayakumar, P.; Ramasamy, P.

    2017-06-01

    CdIn2S2Se2 polycrystalline material has been synthesized by the melt oscillation method. The vertical Bridgman method was used to grow a good quality CdIn2S2Se2 single crystal. The crystalline phase and growth orientation were confirmed by the powder X-ray diffraction pattern, and the unit cell parameters were determined by single crystal X-ray diffraction analysis. The structural uniformity of CdIn2S2Se2 was studied using Raman scattering spectroscopy at room temperature. The stoichiometric composition variation along the CdIn2S2Se2 crystal was measured using energy dispersive spectrometry. The transmission spectrum of the CdIn2S2Se2 single crystal showed 42% transmission in the NIR region. The thermal properties of CdIn2S2Se2 have been studied using differential thermal analysis. Thermal diffusivity, specific heat capacity and thermal conductivity were also measured. The electrical properties were measured using Hall effect measurements, confirming the n-type semiconducting nature. The hardness behavior was measured using Vickers microhardness measurements, and the indentation size effect was observed.

  13. Method of imaging the electrical conductivity distribution of a subsurface

    DOEpatents

    Johnson, Timothy C.

    2017-09-26

    A method of imaging electrical conductivity distribution of a subsurface containing metallic structures with known locations and dimensions is disclosed. Current is injected into the subsurface to measure electrical potentials using multiple sets of electrodes, thus generating electrical resistivity tomography measurements. A numeric code is applied to simulate the measured potentials in the presence of the metallic structures. An inversion code is applied that utilizes the electrical resistivity tomography measurements and the simulated measured potentials to image the subsurface electrical conductivity distribution and remove effects of the subsurface metallic structures with known locations and dimensions.

  14. Assessing Binocular Interaction in Amblyopia and Its Clinical Feasibility

    PubMed Central

    Kwon, MiYoung; Lu, Zhong-Lin; Miller, Alexandra; Kazlas, Melanie; Hunter, David G.; Bex, Peter J.

    2014-01-01

    Purpose To measure binocular interaction in amblyopes using a rapid and patient-friendly computer-based method, and to test the feasibility of the assessment in the clinic. Methods Binocular interaction was assessed in subjects with strabismic amblyopia (n = 7), anisometropic amblyopia (n = 6), strabismus without amblyopia (n = 15) and normal vision (n = 40). Binocular interaction was measured with a dichoptic phase matching task in which subjects matched the position of a binocular probe to the cyclopean perceived phase of a dichoptic pair of gratings whose contrast ratios were systematically varied. The resulting effective contrast ratio of the weak eye was taken as an indicator of interocular imbalance. Testing was performed in an ophthalmology clinic in under 8 minutes. We examined the relationships between our binocular interaction measure and standard clinical measures indicating abnormal binocularity such as interocular acuity difference and stereoacuity. The test-retest reliability of the testing method was also evaluated. Results Compared to normally-sighted controls, amblyopes exhibited significantly reduced effective contrast (∼20%) of the weak eye, suggesting a higher contrast requirement for the amblyopic eye compared to the fellow eye. We found that the effective contrast ratio of the weak eye covaried with standard clinical measures of binocular vision. Our results showed that there was a high correlation between the 1st and 2nd measurements (r = 0.94, p<0.001) but without any significant bias between the two. Conclusions Our findings demonstrate that abnormal binocular interaction can be reliably captured by measuring the effective contrast ratio of the weak eye and that quantitative assessment of binocular interaction is a quick and simple test that can be performed in the clinic. We believe that reliable and timely assessment of deficits in binocular interaction may improve detection and treatment of amblyopia. PMID:24959842

  15. Biases and Power for Groups Comparison on Subjective Health Measurements

    PubMed Central

    Hamel, Jean-François; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Roquelaure, Yves; Sébille, Véronique

    2012-01-01

    Subjective health measurements are increasingly used in clinical research, particularly for comparisons of patient groups. Two main types of analytical strategies can be used for such data: so-called classical test theory (CTT), relying on observed scores, and models from Item Response Theory (IRT), relying on a response model relating the item responses to a latent parameter, often called the latent trait. Whether IRT or CTT would be the most appropriate method to compare two independent groups of patients on a patient reported outcomes measurement remains unknown and was investigated using simulations. For CTT-based analyses, group comparison was performed using a t-test on the scores. For IRT-based analyses, several methods were compared, according to whether the Rasch model was considered with random effects or with fixed effects, and whether the group effect was included as a covariate or not. Individual latent trait values were estimated using either a deterministic method or stochastic approaches. Latent traits were then compared with a t-test. Finally, a two-step method was performed to compare the latent trait distributions, and a Wald test was performed to test the group effect in the Rasch model including group covariates. The only unbiased IRT-based method was the group-covariate Wald test, performed on the random effects Rasch model. This model displayed the highest observed power, which was similar to the power using the score t-test. These results need to be extended to the case frequently encountered in practice where data are missing and possibly informative. PMID:23115620

  16. Validation of Web-Based Physical Activity Measurement Systems Using Doubly Labeled Water

    PubMed Central

    Yamaguchi, Yukio; Yamada, Yosuke; Tokushima, Satoru; Hatamoto, Yoichi; Sagayama, Hiroyuki; Kimura, Misaka; Higaki, Yasuki; Tanaka, Hiroaki

    2012-01-01

    Background Online or Web-based measurement systems have been proposed as convenient methods for collecting physical activity data. We developed two Web-based physical activity systems—the 24-hour Physical Activity Record Web (24hPAR WEB) and 7 days Recall Web (7daysRecall WEB). Objective To examine the validity of two Web-based physical activity measurement systems using the doubly labeled water (DLW) method. Methods We assessed the validity of the 24hPAR WEB and 7daysRecall WEB in 20 individuals, aged 25 to 61 years. The order of email distribution and subsequent completion of the two Web-based measurements systems was randomized. Each measurement tool was used for a week. The participants’ activity energy expenditure (AEE) and total energy expenditure (TEE) were assessed over each week using the DLW method and compared with the respective energy expenditures estimated using the Web-based systems. Results The mean AEE was 3.90 (SD 1.43) MJ estimated using the 24hPAR WEB and 3.67 (SD 1.48) MJ measured by the DLW method. The Pearson correlation for AEE between the two methods was r = .679 (P < .001). The Bland-Altman 95% limits of agreement ranged from –2.10 to 2.57 MJ between the two methods. The Pearson correlation for TEE between the two methods was r = .874 (P < .001). The mean AEE was 4.29 (SD 1.94) MJ using the 7daysRecall WEB and 3.80 (SD 1.36) MJ by the DLW method. The Pearson correlation for AEE between the two methods was r = .144 (P = .54). The Bland-Altman 95% limits of agreement ranged from –3.83 to 4.81 MJ between the two methods. The Pearson correlation for TEE between the two methods was r = .590 (P = .006). The average input times using terminal devices were 8 minutes and 10 seconds for the 24hPAR WEB and 6 minutes and 38 seconds for the 7daysRecall WEB. Conclusions Both Web-based systems were found to be effective methods for collecting physical activity data and are appropriate for use in epidemiological studies. Because the measurement accuracy of the 24hPAR WEB was moderate to high, it could be suitable for evaluating the effect of interventions on individuals as well as for examining physical activity behavior. PMID:23010345
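
    The Bland-Altman limits of agreement quoted above can be reproduced from paired measurements with a few lines of code; the paired values below are hypothetical, not the study data.

        import numpy as np

        def bland_altman_limits(method_a, method_b):
            """Mean bias and 95% limits of agreement between two measurement methods."""
            a, b = np.asarray(method_a, float), np.asarray(method_b, float)
            diff = a - b
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

        # Hypothetical AEE values (MJ): web-based system vs doubly labeled water.
        web = [3.9, 4.2, 3.1, 5.0, 4.4, 2.8, 3.6]
        dlw = [3.7, 4.0, 3.3, 4.6, 4.1, 3.0, 3.5]

        bias, (lo, hi) = bland_altman_limits(web, dlw)
        print(f"bias = {bias:.2f} MJ, 95% limits of agreement = [{lo:.2f}, {hi:.2f}] MJ")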

  17. Impedance measurement using a two-microphone, random-excitation method

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Parrott, T. L.

    1978-01-01

    The feasibility of using a two-microphone, random-excitation technique for the measurement of acoustic impedance was studied. Equations were developed, including the effect of mean flow, which show that acoustic impedance is related to the pressure ratio and phase difference between two points in a duct carrying plane waves only. The impedances of a honeycomb ceramic specimen and a Helmholtz resonator were measured and compared with impedances obtained using the conventional standing-wave method. Agreement between the two methods was generally good. A sensitivity analysis was performed to pinpoint possible error sources and recommendations were made for future study. The two-microphone approach evaluated in this study appears to have some advantages over other impedance measuring techniques.
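
    A minimal sketch of the underlying relation is given below in the form of the standard two-microphone transfer-function method without mean flow (the paper additionally accounts for mean-flow effects). The microphone spacing, distance to the sample and pressure values are illustrative assumptions.

        import numpy as np

        def normal_impedance(p1, p2, freq, s, x1, c=343.0):
            """
            Normalized surface impedance from the two-microphone transfer-function
            method (no mean flow). p1, p2: complex pressure spectra at the microphone
            farther from the sample (distance x1) and the nearer one; s: mic spacing [m].
            """
            k = 2 * np.pi * freq / c          # wavenumber
            H12 = p2 / p1                     # transfer function between the two microphones
            R = (H12 - np.exp(-1j * k * s)) / (np.exp(1j * k * s) - H12) * np.exp(2j * k * x1)
            return (1 + R) / (1 - R)          # z = Z / (rho * c)

        # Hypothetical single-frequency example: complex pressures reduced from a
        # random-excitation measurement at 1 kHz (illustrative values only).
        print(normal_impedance(p1=1.0 + 0.2j, p2=0.8 - 0.1j, freq=1000.0, s=0.03, x1=0.10))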

  18. Determination of magneto-optical constant of Fe films with weak measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, Xiaodong; Hu, Dejiao; Du, Jinglei

    2014-09-29

    In this letter, a detection method for the magneto-optical constant is presented using weak measurements. The photonic spin Hall effect (PSHE), which manifests itself as spin-dependent splitting, is introduced to characterize the magneto-optical constant, and a propagation model to describe the quantitative relation between the magneto-optical constant and the PSHE is established. According to the amplified shift of the PSHE detected by weak measurements, we determine the magneto-optical constant of the Fe film sample. The Kerr rotation is measured via the standard polarimetry method to verify the rationality and feasibility of our method. These findings may provide possible applications in magnetic physics research.

  19. Accurate mass replacement method for the sediment concentration measurement with a constant volume container

    NASA Astrophysics Data System (ADS)

    Ban, Yunyun; Chen, Tianqin; Yan, Jun; Lei, Tingwu

    2017-04-01

    The measurement of sediment concentration in water is of great importance in soil erosion research and soil and water loss monitoring systems. The traditional weighing method has long been the foundation of all the other measuring methods and instrument calibration. The development of a new method to replace the traditional oven-drying method is of interest in research and practice for the quick and efficient measurement of sediment concentration, especially field measurements. A new method is advanced in this study for accurately measuring the sediment concentration based on the accurate measurement of the mass of the sediment-water mixture in a confined constant volume container (CVC). A sediment-laden water sample is put into the CVC to determine its mass before the CVC is filled with water and weighed again for the total mass of the water and sediments in the container. The known volume of the CVC, the mass of sediment-laden water, and the sediment particle density are used to calculate the mass of water that is replaced by sediments, and therefore the sediment concentration of the sample. The influence of water temperature was corrected for by measuring water density to determine the temperature of water before measurements were conducted. The CVC was used to eliminate the surface tension effect so as to obtain the accurate volume of the water and sediment mixture. Experimental results showed that the method was capable of measuring the sediment concentration from 0.5 up to 1200 kg m-3. A good linear relationship existed between the designed and measured sediment concentrations, with all the coefficients of determination greater than 0.999 and the average relative error less than 0.2%. All of these seem to indicate that the new method is capable of measuring a full range of sediment concentration above 0.5 kg m-3 and can replace the traditional oven-drying method as a standard method for evaluating and calibrating other methods.
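
    One plausible reading of the mass-balance arithmetic behind the mass replacement principle is sketched below; the density values, masses and container volume are illustrative assumptions, not data from the study.

        def sediment_concentration(m_sample, m_total, V_container, rho_s=2650.0, rho_w=998.0):
            """
            Sediment concentration [kg/m^3] by the mass replacement principle.
            m_sample: mass of the sediment-laden sample put into the container [kg]
            m_total:  mass of the sample plus top-up water filling the container of
                      known volume V_container [m^3]
            rho_s, rho_w: sediment particle and water densities [kg/m^3]
            """
            # Mass balance for the full container: m_total = rho_w * (V_container - m_s/rho_s) + m_s
            m_s = (m_total - rho_w * V_container) / (1.0 - rho_w / rho_s)
            # Volume occupied by the original sample = its water volume + its sediment volume
            V_sample = (m_sample - m_s) / rho_w + m_s / rho_s
            return m_s / V_sample

        # Hypothetical example: 1-litre container, 0.4 kg sample, 1.05 kg after topping up with water.
        print(f"{sediment_concentration(0.4, 1.05, 1.0e-3):.1f} kg/m^3")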

  20. Facial Masculinity: How the Choice of Measurement Method Enables to Detect Its Influence on Behaviour

    PubMed Central

    Sanchez-Pages, Santiago; Rodriguez-Ruiz, Claudia; Turiegano, Enrique

    2014-01-01

    Recent research has explored the relationship between facial masculinity, human male behaviour and males' perceived features (i.e. attractiveness). The methods of measurement of facial masculinity employed in the literature are quite diverse. In the present paper, we use several methods of measuring facial masculinity to study the effect of this feature on risk attitudes and trustworthiness. We employ two strategic interactions to measure these two traits, a first-price auction and a trust game. We find that facial width-to-height ratio is the best predictor of trustworthiness, and that measures of masculinity which use Geometric Morphometrics are the best suited to link masculinity and bidding behaviour. However, we observe that the link between masculinity and bidding in the first-price auction might be driven by competitiveness and not by risk aversion only. Finally, we test the relationship between facial measures of masculinity and perceived masculinity. As a conclusion, we suggest that researchers in the field should measure masculinity using one of these methods in order to obtain comparable results. We also encourage researchers to revise the existing literature on this topic following these measurement methods. PMID:25389770

  1. Facial masculinity: how the choice of measurement method enables to detect its influence on behaviour.

    PubMed

    Sanchez-Pages, Santiago; Rodriguez-Ruiz, Claudia; Turiegano, Enrique

    2014-01-01

    Recent research has explored the relationship between facial masculinity, human male behaviour and males' perceived features (i.e. attractiveness). The methods of measurement of facial masculinity employed in the literature are quite diverse. In the present paper, we use several methods of measuring facial masculinity to study the effect of this feature on risk attitudes and trustworthiness. We employ two strategic interactions to measure these two traits, a first-price auction and a trust game. We find that facial width-to-height ratio is the best predictor of trustworthiness, and that measures of masculinity which use Geometric Morphometrics are the best suited to link masculinity and bidding behaviour. However, we observe that the link between masculinity and bidding in the first-price auction might be driven by competitiveness and not by risk aversion only. Finally, we test the relationship between facial measures of masculinity and perceived masculinity. As a conclusion, we suggest that researchers in the field should measure masculinity using one of these methods in order to obtain comparable results. We also encourage researchers to revise the existing literature on this topic following these measurement methods.

  2. DNA damage in mouse and rat liver by caprolactam and benzoin, evaluated with three different methods.

    PubMed

    Parodi, S; Abelmoschi, M L; Balbi, C; De Angeli, M T; Pala, M; Russo, P; Taningher, M; Santi, L

    1989-11-01

    Benzoin and caprolactam were examined for their capability of inducing alkaline DNA fragmentation in mouse and rat liver DNA after treatment in vivo. Three different methods were used. With the alkaline elution technique we measured an effect presumably related to the conformation of the DNA coil. With a viscometric and a fluorometric unwinding method we measured an effect presumably related to the number of unwinding points in DNA. For both compounds only the alkaline elution technique was clearly positive. The results suggest that both caprolactam and benzoin can induce an important change in the conformation of the DNA coil without inducing true breaks in DNA.

  3. Research on signal processing method for total organic carbon of water quality online monitor

    NASA Astrophysics Data System (ADS)

    Ma, R.; Xie, Z. X.; Chu, D. Z.; Zhang, S. W.; Cao, X.; Wu, N.

    2017-08-01

    At present, there is no rapid, stable and effective approach to total organic carbon (TOC) measurement in the field of marine environmental online monitoring. Therefore, this paper proposes a chemiluminescence signal processing method for an online TOC monitor. The weak optical signal detected by the photomultiplier tube is enhanced and converted by a series of signal processing modules: a phase-locked (lock-in) amplifier module, a fourth-order band-pass filter module and an AD conversion module. Long-term comparison tests and measurements show that, compared with the traditional method, this chemiluminescence signal processing method offers greatly improved measuring speed with sufficient accuracy and high practicability for online monitoring.
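
    A minimal sketch of the lock-in style demodulation stage described above is given below; the reference frequency, filter order and cut-off are illustrative assumptions, not the instrument's actual parameters.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def lock_in(signal, fs, f_ref, f_cut=1.0):
            """
            Minimal digital lock-in: recover a weak signal buried in noise by mixing
            with quadrature references at f_ref and low-pass filtering the products.
            Returns the recovered amplitude envelope.
            """
            t = np.arange(len(signal)) / fs
            x = signal * np.cos(2 * np.pi * f_ref * t)   # in-phase mixing
            y = signal * np.sin(2 * np.pi * f_ref * t)   # quadrature mixing
            b, a = butter(2, f_cut, fs=fs)               # low-pass filter (illustrative order/cut-off)
            X, Y = filtfilt(b, a, x), filtfilt(b, a, y)
            return 2 * np.sqrt(X**2 + Y**2)

        # Hypothetical test: a weak 5 Hz modulated signal with heavy noise.
        fs, f_mod = 1000.0, 5.0
        t = np.arange(0, 10, 1 / fs)
        true_amp = 0.01 * (1 + 0.5 * np.sin(2 * np.pi * 0.1 * t))
        noisy = true_amp * np.sin(2 * np.pi * f_mod * t) + 0.05 * np.random.randn(t.size)
        recovered = lock_in(noisy, fs, f_mod)
        print(recovered[2000:2005])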

  4. Modified coaxial wire method for measurement of transfer impedance of beam position monitors

    NASA Astrophysics Data System (ADS)

    Kumar, Mukesh; Babbar, L. K.; Deo, R. K.; Puntambekar, T. A.; Senecha, V. K.

    2018-05-01

    The transfer impedance is a very important parameter of a beam position monitor (BPM), relating its output signal to the beam current. The coaxial wire method is a standard technique to measure the transfer impedance of a BPM. The conventional coaxial wire method requires impedance matching between the coaxial wire and the external circuits (vector network analyzer and associated cables). This paper presents a modified coaxial wire method for bench measurement of the transfer impedance of capacitive pickups like button electrodes and shoe box BPMs. Unlike the conventional coaxial wire method, in the modified coaxial wire method no impedance matching elements are used between the device under test and the external circuit. The effect of impedance mismatch has been treated mathematically, and a new expression for the transfer impedance has been derived. The proposed method is verified through simulation of a button electrode BPM using CST Studio Suite. The new method is also applied to measure the transfer impedance of a button electrode BPM developed for insertion devices of Indus-2, and the results are compared with its simulations. Close agreement between measured and simulated results suggests that the modified coaxial wire setup can be exploited for the measurement of the transfer impedance of capacitive BPMs like button electrodes and shoe box BPMs.

  5. Comparative evaluation of performance measures for shading correction in time-lapse fluorescence microscopy.

    PubMed

    Liu, L; Kan, A; Leckie, C; Hodgkin, P D

    2017-04-01

    Time-lapse fluorescence microscopy is a valuable technology in cell biology, but it suffers from the inherent problem of intensity inhomogeneity due to uneven illumination or camera nonlinearity, known as shading artefacts. This will lead to inaccurate estimates of single-cell features such as average and total intensity. Numerous shading correction methods have been proposed to remove this effect. In order to compare the performance of different methods, many quantitative performance measures have been developed. However, there is little discussion about which performance measure should be generally applied for evaluation on real data, where the ground truth is absent. In this paper, the state-of-the-art shading correction methods and performance evaluation methods are reviewed. We implement 10 popular shading correction methods on two artificial datasets and four real ones. In order to make an objective comparison between those methods, we employ a number of quantitative performance measures. Extensive validation demonstrates that the coefficient of joint variation (CJV) is the most applicable measure in time-lapse fluorescence images. Based on this measure, we have proposed a novel shading correction method that performs better compared to well-established methods for a range of real data tested. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
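
    The coefficient of joint variation (CJV) is commonly defined for two regions (for example foreground cells and background) as the sum of their standard deviations divided by the absolute difference of their means; the sketch below uses that common definition and a synthetic frame, both of which are assumptions rather than details from the paper.

        import numpy as np

        def cjv(image, mask_fg, mask_bg):
            """
            Coefficient of joint variation between foreground and background regions;
            lower values indicate less residual intensity inhomogeneity.
            """
            fg, bg = image[mask_fg], image[mask_bg]
            return (fg.std(ddof=1) + bg.std(ddof=1)) / abs(fg.mean() - bg.mean())

        # Hypothetical shading-corrected frame with a binary cell mask.
        rng = np.random.default_rng(1)
        frame = rng.normal(100, 5, (256, 256))
        cells = np.zeros((256, 256), bool)
        cells[100:150, 100:150] = True
        frame[cells] += 80                       # brighter cell region
        print(f"CJV = {cjv(frame, cells, ~cells):.3f}")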

  6. Method development and validation for measuring the particle size distribution of pentaerythritol tetranitrate (PETN) powders.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, Sharissa Gay

    2005-09-01

    Currently, the critical particle properties of pentaerythritol tetranitrate (PETN) that influence deflagration-to-detonation time in exploding bridge wire detonators (EBW) are not known in sufficient detail to allow development of a predictive failure model. The specific surface area (SSA) of many PETN powders has been measured using both permeametry and gas absorption methods and has been found to have a critical effect on EBW detonator performance. The permeametry measure of SSA is a function of particle shape, packed bed pore geometry, and particle size distribution (PSD). Yet there is a general lack of agreement in PSD measurements between laboratories, raising concerns regarding collaboration and complicating efforts to understand changes in EBW performance related to powder properties. Benchmarking of data between laboratories that routinely perform detailed PSD characterization of powder samples and the determination of the most appropriate method to measure each PETN powder are necessary to discern correlations between performance and powder properties and to collaborate with partnering laboratories. To this end, a comparison was made of the PSD measured by three laboratories using their own standard procedures for light scattering instruments. Three PETN powder samples with different surface areas and particle morphologies were characterized. Differences in bulk PSD data generated by each laboratory were found to result from variations in sonication of the samples during preparation. The effect of this sonication was found to depend on particle morphology of the PETN samples, being deleterious to some PETN samples and advantageous for others in moderation. Discrepancies in the submicron-sized particle characterization data were related to an instrument-specific artifact particular to one laboratory. The type of carrier fluid used by each laboratory to suspend the PETN particles for the light scattering measurement had no consistent effect on the resulting PSD data. Finally, the SSA of the three powders was measured using both permeametry and gas absorption methods, enabling the PSD to be linked to the SSA for these PETN powders. Consistent characterization of other PETN powders can be performed using the appropriate sample-specific preparation method, so that future studies can accurately identify the effect of changes in the PSD on the SSA and ultimately model EBW performance.

  7. A new method for the measurement of tremor at rest.

    PubMed

    Comby, B; Chevalier, G; Bouchoucha, M

    1992-01-01

    This paper establishes a standard method for measuring human tremor. The electronic instrument described is an application of this method. It addresses the need for an effective and simple tremor-measuring instrument suitable for wide distribution. This instrument consists of a piezoelectric accelerometer connected to an electronic circuit and to an LCD display. The signal is also analysed by a computer after analogue/digital conversion of the accelerometer output in order to test the method. The tremor of 1079 healthy subjects was studied. Spectral analysis showed frequency peaks between 5.85 and 8.80 Hz. Chronic cigarette smoking and coffee drinking did not modify the tremor as compared with controls. A relaxation session decreased tremor significantly in healthy subjects (P < 0.01). This new tremor-measuring method opens new horizons in the understanding of physiological and pathological tremor, stress and anxiety, and in the means to avoid or compensate for them.

  8. A critical comparison of electrical methods for measuring spin-orbit torques

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanzi; Hung, Yu-Ming; Rehm, Laura; Kent, Andrew D.

    Direct (DC) and alternating current (AC) transport measurements of spin-orbit torques (SOTs) in heavy metal-ferromagnet heterostructures with perpendicular magnetic anisotropy have been proposed and demonstrated. A DC method measures the change of the perpendicular magnetization component, while an AC method probes the first and second harmonic magnetization oscillations in response to an AC current (~1 kHz). Here we conduct both types of measurements on β-Ta/CoFeB/MgO in the form of patterned Hall bars (20 μm linewidth) and compare the results. Experimental results are qualitatively in agreement with a macrospin model including Slonczewski-like and field-like SOTs. However, the effective field from the AC method is larger than that obtained from the DC method. We discuss the possible origins of the discrepancy and its implications for quantitatively determining SOTs. Research supported by the SRC-INDEX program, NSF-DMR-1309202 and NYU-DURF award.

  9. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Washeleski, Robert L.; Meyer, Edmond J. IV; King, Lyon B.

    2013-10-15

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.

  10. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas.

    PubMed

    Washeleski, Robert L; Meyer, Edmond J; King, Lyon B

    2013-10-01

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.

  11. A Blade Tip Timing Method Based on a Microwave Sensor

    PubMed Central

    Zhang, Jilong; Duan, Fajie; Niu, Guangyue; Jiang, Jiajia; Li, Jie

    2017-01-01

    Blade tip timing is an effective method for blade vibration measurements in turbomachinery. This method is increasing in popularity because it is non-intrusive and has several advantages over the conventional strain gauge method. Different kinds of sensors have been developed for blade tip timing, including optical, eddy current and capacitance sensors. However, these sensors are unsuitable in environments with contaminants or high temperatures. Microwave sensors offer a promising potential solution to overcome these limitations. In this article, a microwave sensor-based blade tip timing measurement system is proposed. A patch antenna probe is used to transmit and receive the microwave signals. The signal model and processing method are analyzed. A zero intermediate frequency structure is employed to maintain timing accuracy and dynamic performance, and the received signal can also be used to measure tip clearance. The timing method uses the rising and falling edges of the signal and an auto-gain control circuit to reduce the effect of tip clearance change. To validate the accuracy of the system, it is compared experimentally with a fiber optic tip timing system. The results show that the microwave tip timing system achieves good accuracy. PMID:28492469

  12. A Blade Tip Timing Method Based on a Microwave Sensor.

    PubMed

    Zhang, Jilong; Duan, Fajie; Niu, Guangyue; Jiang, Jiajia; Li, Jie

    2017-05-11

    Blade tip timing is an effective method for blade vibration measurements in turbomachinery. This method is increasing in popularity because it is non-intrusive and has several advantages over the conventional strain gauge method. Different kinds of sensors have been developed for blade tip timing, including optical, eddy current and capacitance sensors. However, these sensors are unsuitable in environments with contaminants or high temperatures. Microwave sensors offer a promising potential solution to overcome these limitations. In this article, a microwave sensor-based blade tip timing measurement system is proposed. A patch antenna probe is used to transmit and receive the microwave signals. The signal model and processing method are analyzed. A zero intermediate frequency structure is employed to maintain timing accuracy and dynamic performance, and the received signal can also be used to measure tip clearance. The timing method uses the rising and falling edges of the signal and an auto-gain control circuit to reduce the effect of tip clearance change. To validate the accuracy of the system, it is compared experimentally with a fiber optic tip timing system. The results show that the microwave tip timing system achieves good accuracy.

  13. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    PubMed

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-05-01

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared to the use of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on Bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
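
    For the purely classical-error part of the problem, the method-of-moments idea can be sketched as dividing the naive regression slope by the reliability ratio. This toy example ignores the Berkson component, autocorrelation and confounders treated in the paper, and all numbers are simulated.

        import numpy as np

        def mom_corrected_slope(x_obs, y, var_u):
            """
            Method-of-moments correction of a regression slope for classical
            measurement error: beta_true ~= beta_naive / lambda, where
            lambda = (var(X_obs) - var_u) / var(X_obs) is the reliability ratio
            and var_u is the (known or replicated) error variance.
            """
            x_obs, y = np.asarray(x_obs, float), np.asarray(y, float)
            var_x = np.var(x_obs, ddof=1)
            beta_naive = np.cov(x_obs, y, ddof=1)[0, 1] / var_x
            lam = (var_x - var_u) / var_x
            return beta_naive / lam

        # Hypothetical exposure-response data: true slope 2.0, classical error added.
        rng = np.random.default_rng(2)
        x_true = rng.normal(10, 2, 500)
        x_obs = x_true + rng.normal(0, 1.5, 500)         # classical error, var_u = 2.25
        y = 2.0 * x_true + rng.normal(0, 1, 500)
        print(f"corrected slope ~ {mom_corrected_slope(x_obs, y, var_u=2.25):.2f}")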

  14. Development of a wireless displacement measurement system using acceleration responses.

    PubMed

    Park, Jong-Woong; Sim, Sung-Han; Jung, Hyung-Jo; Spencer, Billie F

    2013-07-01

    Displacement measurements are useful information for various engineering applications such as structural health monitoring (SHM), earthquake engineering and system identification. Most existing displacement measurement methods are costly, labor-intensive, and have difficulties particularly when applying to full-scale civil structures because the methods require stationary reference points. Indirect estimation methods converting acceleration to displacement can be a good alternative as acceleration transducers are generally cost-effective, easy to install, and have low noise. However, the application of acceleration-based methods to full-scale civil structures such as long span bridges is challenging due to the need to install cables to connect the sensors to a base station. This article proposes a low-cost wireless displacement measurement system using acceleration. Developed with smart sensors that are low-cost, wireless, and capable of on-board computation, the wireless displacement measurement system has significant potential to impact many applications that need displacement information at multiple locations of a structure. The system implements an FIR-filter type displacement estimation algorithm that can remove low frequency drifts typically caused by numerical integration of discrete acceleration signals. To verify the accuracy and feasibility of the proposed system, laboratory tests are carried out using a shaking table and on a three storey shear building model, experimentally confirming the effectiveness of the proposed system.
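
    A minimal sketch of the drift-suppressed double integration idea follows; the exact FIR filter design used by the authors is not reproduced here, and the cut-off frequency, filter length and test signal are assumptions.

        import numpy as np
        from scipy.signal import firwin, filtfilt, detrend

        def displacement_from_acceleration(acc, fs, f_cut=0.5, numtaps=501):
            """
            Estimate displacement from an acceleration record by double numerical
            integration, applying an FIR high-pass filter after each integration to
            suppress the low-frequency drift introduced by integrating discrete data.
            """
            hp = firwin(numtaps, f_cut, fs=fs, pass_zero=False)   # FIR high-pass
            vel = filtfilt(hp, [1.0], np.cumsum(detrend(acc)) / fs)
            disp = filtfilt(hp, [1.0], np.cumsum(vel) / fs)
            return disp

        # Hypothetical check: a 2 Hz sinusoidal motion with 5 mm amplitude.
        fs = 200.0
        t = np.arange(0, 40, 1 / fs)
        acc = -0.005 * (2 * np.pi * 2.0) ** 2 * np.sin(2 * np.pi * 2.0 * t)
        est = displacement_from_acceleration(acc, fs)
        print(np.max(np.abs(est[1000:-1000])))    # should be close to 0.005 m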

  15. Development of a Wireless Displacement Measurement System Using Acceleration Responses

    PubMed Central

    Park, Jong-Woong; Sim, Sung-Han; Jung, Hyung-Jo; Spencer, Billie F.

    2013-01-01

    Displacement measurements are useful information for various engineering applications such as structural health monitoring (SHM), earthquake engineering and system identification. Most existing displacement measurement methods are costly, labor-intensive, and have difficulties particularly when applying to full-scale civil structures because the methods require stationary reference points. Indirect estimation methods converting acceleration to displacement can be a good alternative as acceleration transducers are generally cost-effective, easy to install, and have low noise. However, the application of acceleration-based methods to full-scale civil structures such as long span bridges is challenging due to the need to install cables to connect the sensors to a base station. This article proposes a low-cost wireless displacement measurement system using acceleration. Developed with smart sensors that are low-cost, wireless, and capable of on-board computation, the wireless displacement measurement system has significant potential to impact many applications that need displacement information at multiple locations of a structure. The system implements an FIR-filter type displacement estimation algorithm that can remove low frequency drifts typically caused by numerical integration of discrete acceleration signals. To verify the accuracy and feasibility of the proposed system, laboratory tests are carried out using a shaking table and on a three storey shear building model, experimentally confirming the effectiveness of the proposed system. PMID:23881123

  16. A Causal Model for Joint Evaluation of Placebo and Treatment-Specific Effects in Clinical Trials

    PubMed Central

    Zhang, Zhiwei; Kotz, Richard M.; Wang, Chenguang; Ruan, Shiling; Ho, Martin

    2014-01-01

    Summary Evaluation of medical treatments is frequently complicated by the presence of substantial placebo effects, especially on relatively subjective endpoints, and the standard solution to this problem is a randomized, double-blinded, placebo-controlled clinical trial. However, effective blinding does not guarantee that all patients have the same belief or mentality about which treatment they have received (or treatmentality, for brevity), making it difficult to interpret the usual intent-to-treat effect as a causal effect. We discuss the causal relationships among treatment, treatmentality and the clinical outcome of interest, and propose a causal model for joint evaluation of placebo and treatment-specific effects. The model highlights the importance of measuring and incorporating patient treatmentality and suggests that each treatment group should be considered a separate observational study with a patient's treatmentality playing the role of an uncontrolled exposure. This perspective allows us to adapt existing methods for dealing with confounding to joint estimation of placebo and treatment-specific effects using measured treatmentality data, commonly known as blinding assessment data. We first apply this approach to the most common type of blinding assessment data, which is categorical, and illustrate the methods using an example from asthma. We then propose that blinding assessment data can be collected as a continuous variable, specifically when a patient's treatmentality is measured as a subjective probability, and describe analytic methods for that case. PMID:23432119

  17. The effective temperature of Peptide ions dissociated by sustained off-resonance irradiation collisional activation in fourier transform mass spectrometry.

    PubMed

    Schnier, P D; Jurchen, J C; Williams, E R

    1999-01-28

    A method for determining the internal energy of biomolecule ions activated by collisions is demonstrated. The dissociation kinetics of protonated leucine enkephalin and doubly protonated bradykinin were measured using sustained off-resonance irradiation (SORI) collisionally activated dissociation (CAD) in a Fourier transform mass spectrometer. Dissociation rate constants are obtained from these kinetic data. In combination with Arrhenius parameters measured with blackbody infrared radiative dissociation, the "effective" temperatures of these ions are obtained. Effects of excitation voltage and frequency and the ion cell pressure were investigated. With typical SORI-CAD experimental conditions, the effective temperatures of these peptide ions range between 200 and 400 degrees C. Higher temperatures can be easily obtained for ions that require more internal energy to dissociate. The effective temperatures of both protonated leucine enkephalin and doubly protonated bradykinin measured with the same experimental conditions are similar. Effective temperatures for protonated leucine enkephalin can also be obtained from the branching ratio of the b(4) and (M + H - H(2)O)(+) pathways. Values obtained from this method are in good agreement with those obtained from the overall dissociation rate constants. Protonated leucine enkephalin is an excellent "thermometer" ion and should be well suited to establishing effective temperatures of ions activated by other dissociation techniques, such as infrared photodissociation, as well as ionization methods, such as matrix assisted laser desorption/ionization.
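
    The conversion from a measured dissociation rate constant to an effective temperature follows directly from the Arrhenius relation; the numerical values below are of the right order for protonated peptides but are illustrative assumptions, not the paper's data.

        import numpy as np

        R = 8.314  # gas constant, J mol^-1 K^-1

        def effective_temperature(k_obs, Ea, log10_A):
            """
            'Effective' temperature of an activated ion population from its measured
            unimolecular dissociation rate constant and Arrhenius parameters:
            k = A * exp(-Ea / (R * T))  =>  T = Ea / (R * (ln A - ln k)).
            """
            return Ea / (R * (log10_A * np.log(10) - np.log(k_obs)))

        # Hypothetical values (Ea in J/mol, A in s^-1, k_obs in s^-1).
        print(f"T_eff ~ {effective_temperature(k_obs=2.0, Ea=1.2e5, log10_A=12.0):.0f} K")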

  18. Water level response measurement in a steel cylindrical liquid storage tank using image filter processing under seismic excitation

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Wan; Choi, Hyoung-Suk; Park, Dong-Uk; Baek, Eun-Rim; Kim, Jae-Min

    2018-02-01

    Sloshing refers to the movement of fluid that occurs when kinetic energy (e.g., from excitation and vibration) is continuously applied to the fluid inside a storage tank. As the frequency of the externally induced movement approaches the resonance frequency of the fluid, the effect of sloshing increases, and this can lead to a serious problem with the structural stability of the system. Thus, it is important to accurately understand the physics of sloshing and to effectively suppress and reduce it. An economical method for measuring the water level response of a liquid storage tank is also needed for the exact analysis of sloshing. In this study, an image-based method was employed to measure the water level response of a liquid storage tank: an image filter processing algorithm was used to reduce the noise on the fluid induced by light and to sharpen the structure installed in the liquid storage tank. A shaking table test was performed to verify the validity of this image-based water level measurement method, and the result was analyzed and compared with the response measured using a water level gauge.
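
    A generic sketch of how a water line can be extracted from a grayscale frame by smoothing and edge detection is given below; it is not the authors' specific filtering pipeline, and the synthetic frame and smoothing window are assumptions.

        import numpy as np

        def water_level_row(frame_gray, smooth=5):
            """
            Locate the water surface in a grayscale frame as the row with the largest
            vertical intensity change, after simple moving-average smoothing to
            suppress light-induced noise on the fluid surface.
            """
            col_profile = frame_gray.astype(float).mean(axis=1)          # mean intensity per row
            kernel = np.ones(smooth) / smooth
            col_profile = np.convolve(col_profile, kernel, mode="same")  # noise reduction
            grad = np.abs(np.diff(col_profile))                          # sharpen the air/water edge
            return int(np.argmax(grad))                                  # row index of the surface

        # Hypothetical synthetic frame: bright air above row 180, dark water below.
        frame = np.full((240, 320), 200, dtype=np.uint8)
        frame[180:, :] = 60
        frame = frame + np.random.randint(0, 10, frame.shape).astype(np.uint8)
        print("estimated water-level row:", water_level_row(frame))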

  19. Wavelength selection for portable noninvasive blood component measurement system based on spectral difference coefficient and dynamic spectrum

    NASA Astrophysics Data System (ADS)

    Feng, Ximeng; Li, Gang; Yu, Haixia; Wang, Shaohui; Yi, Xiaoqing; Lin, Ling

    2018-03-01

    Noninvasive blood component analysis by spectroscopy has been a hotspot in biomedical engineering in recent years. Dynamic spectrum provides an excellent idea for noninvasive blood component measurement, but studies have been limited to the application of broadband light sources and high-resolution spectroscopy instruments. In order to remove redundant information, a more effective wavelength selection method has been presented in this paper. In contrast to many common wavelength selection methods, this method is based on sensing mechanism which has a clear mechanism and can effectively avoid the noise from acquisition system. The spectral difference coefficient was theoretically proved to have a guiding significance for wavelength selection. After theoretical analysis, the multi-band spectral difference coefficient-wavelength selection method combining with the dynamic spectrum was proposed. An experimental analysis based on clinical trial data from 200 volunteers has been conducted to illustrate the effectiveness of this method. The extreme learning machine was used to develop the calibration models between the dynamic spectrum data and hemoglobin concentration. The experiment result shows that the prediction precision of hemoglobin concentration using multi-band spectral difference coefficient-wavelength selection method is higher compared with other methods.

  20. Compensation of the Ionospheric Effects on SAR Interferogram Based on Range Split-Spectrum and Azimuth Offset Methods - a Case Study of Yushu Earthquake

    NASA Astrophysics Data System (ADS)

    He, Y. F.; Zhu, W.; Zhang, Q.; Zhang, W. T.

    2018-04-01

    The InSAR technique can measure surface deformation with centimeter-level or even millimeter-level accuracy and has therefore been widely used in deformation monitoring associated with earthquakes, volcanoes, and other geologic processes. However, ionospheric irregularities can lead to wavy fringes in low-frequency SAR interferograms, which disturb the actual information on geophysical processes and thus place severe limitations on ground deformation measurements. In this paper, two common methods, the range split-spectrum and azimuth offset methods, are applied to estimate the contribution of the ionosphere, with the aim of correcting ionospheric effects in interferograms. Based on theoretical analysis and experiment, a performance analysis is conducted to evaluate the efficiency of these two methods. The results indicate that both methods can mitigate the ionospheric effect in SAR interferograms and that the range split-spectrum method is more precise than the azimuth offset method. However, it is also found that the range split-spectrum method is easily contaminated by noise, and that the achievable accuracy of the azimuth offset method is limited by the ambiguous integration constant, especially with the strong azimuth variations induced by the ionospheric disturbance.
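
    The range split-spectrum separation can be sketched from the phase model phi(f) = a*f + b/f evaluated at two sub-band centre frequencies; solving for the dispersive term b/f at the full-band carrier gives the ionospheric phase. The carrier and sub-band frequencies and the phase values below are illustrative assumptions, not data from this study.

        import numpy as np

        def split_spectrum(phi_low, phi_high, f_low, f_high, f0):
            """
            Separate the interferometric phase into dispersive (ionospheric) and
            non-dispersive parts using sub-band interferograms centred at f_low and
            f_high, under the phase model phi(f) = a*f + b/f.
            """
            denom = f_high**2 - f_low**2
            phi_iono = f_low * f_high / (f0 * denom) * (phi_low * f_high - phi_high * f_low)
            phi_nondisp = f0 / denom * (phi_high * f_high - phi_low * f_low)
            return phi_iono, phi_nondisp

        # Hypothetical L-band example: 1.27 GHz carrier split into 1.25 and 1.29 GHz sub-bands.
        f0, fL, fH = 1.27e9, 1.25e9, 1.29e9
        phi_L = np.array([2.10, 1.85])     # unwrapped sub-band phases [rad], illustrative
        phi_H = np.array([2.02, 1.80])
        iono, nondisp = split_spectrum(phi_L, phi_H, fL, fH, f0)
        print("ionospheric phase [rad]:", iono)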
