Sample records for measurement method based

  1. Effectiveness of Variable-Gain Kalman Filter Based on Angle Error Calculated from Acceleration Signals in Lower Limb Angle Measurement with Inertial Sensors

    PubMed Central

    Watanabe, Takashi

    2013-01-01

    The wearable sensor system developed by our group, which measures lower limb angles using a Kalman-filtering-based method, was suggested to be useful in the evaluation of gait function for rehabilitation support. However, a reduction in the variation of its measurement errors was still needed. In this paper, a variable-Kalman-gain method based on the angle error calculated from acceleration signals is proposed to improve measurement accuracy. The proposed method was tested against a fixed-gain Kalman filter and a variable-Kalman-gain method based on acceleration magnitude used in previous studies. First, in angle measurement during treadmill walking, the proposed method measured lower limb angles with the highest accuracy, significantly improving foot inclination angle measurement while slightly improving shank and thigh inclination angle measurement. The variable-gain method based on acceleration magnitude was not effective for our Kalman filter system. Then, in angle measurement of a rigid body model, the proposed method showed measurement accuracy similar to or higher than results in other studies that used markers of a camera-based motion measurement system fixed on a rigid plate together with a sensor, or directly on the sensor. The proposed method was found to be effective in angle measurement with inertial sensors. PMID:24282442
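
    As a companion to the abstract above, the following is a minimal, hypothetical 1-D sketch of a variable-gain fusion filter of the kind described: the angle predicted by integrating the gyroscope rate is corrected toward the accelerometer-derived inclination angle, and the correction gain is reduced as that angle error grows (i.e., when motion acceleration makes the accelerometer unreliable). The function names, gain law, and constants are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def accel_angle(ax, az):
        """Inclination angle from two accelerometer axes (valid when acceleration ~ gravity)."""
        return np.arctan2(ax, az)

    def variable_gain_filter(gyro_rate, accel_xz, dt, k0=0.05, alpha=5.0):
        """1-D sketch: fuse the integrated gyro rate with the accelerometer angle.
        The gain shrinks when the 'angle error' (prediction minus accelerometer
        angle) is large, i.e. when the accelerometer is presumed unreliable."""
        theta = accel_angle(*accel_xz[0])              # initialise from the accelerometer
        estimates = []
        for w, (ax, az) in zip(gyro_rate, accel_xz):
            theta_pred = theta + w * dt                # predict by integrating angular rate
            err = accel_angle(ax, az) - theta_pred     # angle error from acceleration signals
            gain = k0 / (1.0 + alpha * abs(err))       # variable gain: smaller for larger error
            theta = theta_pred + gain * err            # correct the prediction
            estimates.append(theta)
        return np.asarray(estimates)
    ```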

  2. Evaluation of methods for measuring particulate matter emissions from gas turbines.

    PubMed

    Petzold, Andreas; Marsh, Richard; Johnson, Mark; Miller, Michael; Sevcenco, Yura; Delhaye, David; Ibrahim, Amir; Williams, Paul; Bauer, Heidi; Crayford, Andrew; Bachalo, William D; Raper, David

    2011-04-15

    The project SAMPLE evaluated methods for measuring particle properties in the exhaust of aircraft engines with respect to the development of standardized operation procedures for particulate matter measurement in aviation industry. Filter-based off-line mass methods included gravimetry and chemical analysis of carbonaceous species by combustion methods. Online mass methods were based on light absorption measurement or used size distribution measurements obtained from an electrical mobility analyzer approach. Number concentrations were determined using different condensation particle counters (CPC). Total mass from filter-based methods balanced gravimetric mass within 8% error. Carbonaceous matter accounted for 70% of gravimetric mass while the remaining 30% were attributed to hydrated sulfate and noncarbonaceous organic matter fractions. Online methods were closely correlated over the entire range of emission levels studied in the tests. Elemental carbon from combustion methods and black carbon from optical methods deviated by maximum 5% with respect to mass for low to medium emission levels, whereas for high emission levels a systematic deviation between online methods and filter based methods was found which is attributed to sampling effects. CPC based instruments proved highly reproducible for number concentration measurements with a maximum interinstrument standard deviation of 7.5%.

  3. A practical material decomposition method for x-ray dual spectral computed tomography.

    PubMed

    Hu, Jingjing; Zhao, Xing

    2016-03-17

    X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform the material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated in two groups: image-based and rawdata-based. The image-based method is an approximative method, and beam hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but this method requires geometrically consistent rawdata. However, today's clinical dual energy CT scanners usually measure different rays for different energy spectra and acquire geometrically inconsistent rawdata sets, and thus cannot meet the requirement. This paper proposes a practical material decomposition method to perform rawdata-based material decomposition in the case of inconsistent measurement. This method first yields the desired consistent rawdata sets from the measured inconsistent rawdata sets, and then employs rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated by use of simulated FORBILD thorax phantom rawdata and dental CT rawdata, and simulation results indicate that this method can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.

  4. A COMBINED SPECTROSCOPIC AND PHOTOMETRIC STELLAR ACTIVITY STUDY OF EPSILON ERIDANI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giguere, Matthew J.; Fischer, Debra A.; Zhang, Cyril X. Y.

    2016-06-20

    We present simultaneous ground-based radial velocity (RV) measurements and space-based photometric measurements of the young and active K dwarf Epsilon Eridani. These measurements provide a data set for exploring methods of identifying and ultimately distinguishing stellar photospheric velocities from Keplerian motion. We compare three methods we have used in exploring this data set: Dalmatian, an MCMC spot modeling code that fits photometric and RV measurements simultaneously; the FF′ method, which uses photometric measurements to predict the stellar activity signal in simultaneous RV measurements; and Hα analysis. We show that our Hα measurements are strongly correlated with the Microvariability and Oscillations of STars telescope (MOST) photometry, which led to a promising new method based solely on the spectroscopic observations. This new method, which we refer to as the HH′ method, uses Hα measurements as input into the FF′ model. While the Dalmatian spot modeling analysis and the FF′ method with MOST space-based photometry are currently more robust, the HH′ method only makes use of one of the thousands of stellar lines in the visible spectrum. By leveraging additional spectral activity indicators, we believe the HH′ method may prove quite useful in disentangling stellar signals.
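
    For readers unfamiliar with the FF′ idea referenced above, the sketch below shows one hedged, simplified form of such a model: the activity-induced RV is modelled as a linear combination of F(t)·F′(t) (spot rotation term) and F(t)² (convective suppression term), with coefficients fit by least squares. In the HH′ variant, the photometric series F(t) would simply be replaced by the Hα index time series. The exact basis functions and scalings in the paper may differ.

    ```python
    import numpy as np

    def ff_prime_activity_model(time, flux, rv):
        """Hedged FF'-style sketch: predict the activity-induced RV signal from a
        simultaneous photometric (or Halpha) time series.  Coefficients are fit to
        the observed RVs by ordinary linear least squares."""
        f = flux - np.median(flux)                  # relative brightness variation F(t)
        fdot = np.gradient(f, time)                 # numerical time derivative F'(t)
        basis = np.column_stack([f * fdot, f**2, np.ones_like(f)])
        coeffs, *_ = np.linalg.lstsq(basis, rv, rcond=None)
        return basis @ coeffs                       # predicted activity RV at each epoch
    ```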

  5. Versatile light-emitting-diode-based spectral response measurement system for photovoltaic device characterization.

    PubMed

    Hamadani, Behrang H; Roller, John; Dougherty, Brian; Yoon, Howard W

    2012-07-01

    An absolute differential spectral response measurement system for solar cells is presented. The system couples an array of light emitting diodes with an optical waveguide to provide large area illumination. Two unique yet complementary measurement methods were developed and tested with the same measurement apparatus. Good agreement was observed between the two methods based on testing of a variety of solar cells. The first method is a lock-in technique that can be performed over a broad pulse frequency range. The second method is based on synchronous multifrequency optical excitation and electrical detection. An innovative scheme for providing light bias during each measurement method is discussed.

  6. Comparing deflection measurements of a magnetically steerable catheter using optical imaging and MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lillaney, Prasheel, E-mail: Prasheel.Lillaney@ucsf.edu; Caton, Curtis; Martin, Alastair J.

    2014-02-15

    Purpose: Magnetic resonance imaging (MRI) is an emerging modality for interventional radiology, giving clinicians another tool for minimally invasive image-guided interventional procedures. Difficulties associated with endovascular catheter navigation using MRI guidance led to the development of a magnetically steerable catheter. The focus of this study was to mechanically characterize deflections of two different prototypes of the magnetically steerable catheter in vitro to better understand their efficacy. Methods: A mathematical model for deflection of the magnetically steerable catheter is formulated based on the principle that at equilibrium the mechanical and magnetic torques are equal to each other. Furthermore, two different image-based methods for empirically measuring the catheter deflection angle are presented. The first, referred to as the absolute tip method, measures the angle of the line that is tangential to the catheter tip. The second, referred to as the base to tip method, is an approximation that is used when it is not possible to measure the angle of the tangent line. Optical images of the catheter deflection are analyzed using the absolute tip method to quantitatively validate the predicted deflections from the mathematical model. Optical images of the catheter deflection are also analyzed using the base to tip method to quantitatively determine the differences between the absolute tip and base to tip methods. Finally, the optical images are compared to MR images using the base to tip method to determine the accuracy of measuring the catheter deflection using MR. Results: The optical catheter deflection angles measured for both catheter prototypes using the absolute tip method fit very well to the mathematical model (R² = 0.91 and 0.86 for each prototype, respectively). It was found that the angles measured using the base to tip method were consistently smaller than those measured using the absolute tip method. The deflection angles measured using optical data did not demonstrate a significant difference from the angles measured using MR image data when compared using the base to tip method. Conclusions: This study validates the theoretical description of the magnetically steerable catheter, while also giving insight into different methods and modalities for measuring the deflection angles of the prototype catheters. These results can be used to mechanically model future iterations of the design. Quantifying the difference between the different methods for measuring catheter deflection will be important when making deflection measurements in future studies. Finally, MR images can be used to reliably measure deflection angles since there is no significant difference between the MR and optical measurements.
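
    The torque-balance principle stated in the Methods can be illustrated with a hedged numerical sketch: the magnetic torque on the tip is equated to a linearised elastic restoring torque of the catheter shaft, and the equilibrium deflection angle is found by root finding. The symbols, the cantilever stiffness approximation, and the example values are assumptions for illustration, not the paper's model parameters.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def deflection_angle(m, B, EI, L, phi):
        """Equilibrium deflection from a torque balance (illustrative sketch).
        m   -- magnetic moment of the tip coil/magnet [A*m^2]   (assumed)
        B   -- magnetic field strength [T]
        EI  -- flexural rigidity of the catheter [N*m^2]        (assumed)
        L   -- free catheter length [m]
        phi -- angle between the undeflected catheter axis and B [rad]
        """
        k = EI / L                                    # linearised bending stiffness [N*m/rad]
        balance = lambda theta: m * B * np.sin(phi - theta) - k * theta
        return brentq(balance, 0.0, phi)              # magnetic torque = elastic torque

    # Example: a 0.1 A*m^2 moment in a 1.5 T field, catheter initially perpendicular to B
    print(np.degrees(deflection_angle(m=0.1, B=1.5, EI=2e-3, L=0.05, phi=np.pi / 2)))
    ```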

  7. Morphological observation and analysis using automated image cytometry for the comparison of trypan blue and fluorescence-based viability detection method.

    PubMed

    Chan, Leo Li-Ying; Kuksin, Dmitry; Laverty, Daniel J; Saldi, Stephanie; Qiu, Jean

    2015-05-01

    The ability to accurately determine cell viability is essential to performing a well-controlled biological experiment. Typical experiments range from standard cell culturing to advanced cell-based assays that may require cell viability measurement for downstream experiments. The traditional cell viability measurement method has been the trypan blue (TB) exclusion assay. However, since the introduction of fluorescence-based dyes for cell viability measurement using flow or image-based cytometry systems, there have been numerous publications comparing the two detection methods. Although previous studies have shown discrepancies between TB exclusion and fluorescence-based viability measurements, image-based morphological analysis was not performed in order to examine the viability discrepancies. In this work, we compared TB exclusion and fluorescence-based viability detection methods using image cytometry to observe morphological changes due to the effect of TB on dead cells. Imaging results showed that as the viability of a naturally-dying Jurkat cell sample decreased below 70 %, many TB-stained cells began to exhibit non-uniform morphological characteristics. Dead cells with these characteristics may be difficult to count under light microscopy, thus generating an artificially higher viability measurement compared to fluorescence-based method. These morphological observations can potentially explain the differences in viability measurement between the two methods.

  8. Analysis of vestibular schwannoma size in multiple dimensions: a comparative cohort study of different measurement techniques.

    PubMed

    Varughese, J K; Wentzel-Larsen, T; Vassbotn, F; Moen, G; Lund-Johansen, M

    2010-04-01

    In this volumetric study of the vestibular schwannoma, we evaluated the accuracy and reliability of several approximation methods that are in use, and determined the minimum volume difference that needs to be measured for it to be attributable to an actual difference rather than a retest error. We also found empirical proportionality coefficients for the different methods. DESIGN/SETTING AND PARTICIPANTS: Methodological study with investigation of three different VS measurement methods compared to a reference method that was based on serial slice volume estimates. These volume estimates were based on: (i) one single diameter, (ii) three orthogonal diameters or (iii) the maximal slice area. Altogether 252 T1-weighted MRI images with gadolinium contrast, from 139 VS patients, were examined. The retest errors, in terms of relative percentages, were determined by undertaking repeated measurements on 63 scans for each method. Intraclass correlation coefficients were used to assess the agreement between each of the approximation methods and the reference method. The tendency for approximation methods to systematically overestimate/underestimate different-sized tumours was also assessed, with the help of Bland-Altman plots. The most commonly used approximation method, the maximum diameter, was the least reliable measurement method and has inherent weaknesses that need to be considered. This includes greater retest errors than area-based measurements (25% and 15%, respectively), and that it was the only approximation method that could not easily be converted into volumetric units. Area-based measurements can furthermore be more reliable for smaller volume differences than diameter-based measurements. All our findings suggest that the maximum diameter should not be used as an approximation method. We propose the use of measurement modalities that take into account growth in multiple dimensions instead.
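
    The three approximation methods compared above are commonly written as simple geometric formulas; the sketch below gives idealised versions (sphere from a single diameter, ellipsoid from three orthogonal diameters, sphere-equivalent from the maximal slice area). The study fits empirical proportionality coefficients rather than using these textbook constants, so treat the factors below as illustrative defaults only.

    ```python
    import numpy as np

    def vol_single_diameter(d):
        """Sphere approximation from one maximal diameter: V = pi/6 * d^3."""
        return np.pi / 6.0 * d**3

    def vol_three_diameters(a, b, c):
        """Ellipsoid approximation from three orthogonal diameters: V = pi/6 * a*b*c."""
        return np.pi / 6.0 * a * b * c

    def vol_max_slice_area(area):
        """Sphere-equivalent volume from the maximal slice area A = pi*r^2,
        giving V = 4/(3*sqrt(pi)) * A^(3/2)."""
        return 4.0 / (3.0 * np.sqrt(np.pi)) * area**1.5

    # Example: a roughly 15 x 12 x 10 mm tumour (volumes in mm^3)
    print(vol_single_diameter(15.0), vol_three_diameters(15.0, 12.0, 10.0))
    ```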

  9. Model-based segmentation in orbital volume measurement with cone beam computed tomography and evaluation against current concepts.

    PubMed

    Wagner, Maximilian E H; Gellrich, Nils-Claudius; Friese, Karl-Ingo; Becker, Matthias; Wolter, Franz-Erich; Lichtenstein, Juergen T; Stoetzer, Marcus; Rana, Majeed; Essig, Harald

    2016-01-01

    Objective determination of the orbital volume is important in the diagnostic process and in evaluating the efficacy of medical and/or surgical treatment of orbital diseases. Tools designed to measure orbital volume with computed tomography (CT) often cannot be used with cone beam CT (CBCT) because of inferior tissue representation, although CBCT has the benefit of greater availability and lower patient radiation exposure. Therefore, a model-based segmentation technique is presented as a new method for measuring orbital volume and compared to alternative techniques. Both eyes from thirty subjects with no known orbital pathology who had undergone CBCT as a part of routine care were evaluated (n = 60 eyes). Orbital volume was measured with manual, atlas-based, and model-based segmentation methods. Volume measurements, volume determination time, and usability were compared between the three methods. Differences in means were tested for statistical significance using two-tailed Student's t tests. Neither atlas-based (26.63 ± 3.15 mm(3)) nor model-based (26.87 ± 2.99 mm(3)) measurements were significantly different from manual volume measurements (26.65 ± 4.0 mm(3)). However, the time required to determine orbital volume was significantly longer for manual measurements (10.24 ± 1.21 min) than for atlas-based (6.96 ± 2.62 min, p < 0.001) or model-based (5.73 ± 1.12 min, p < 0.001) measurements. All three orbital volume measurement methods examined can accurately measure orbital volume, although atlas-based and model-based methods seem to be more user-friendly and less time-consuming. The new model-based technique achieves fully automated segmentation results, whereas all atlas-based segmentations at least required manipulations to the anterior closing. Additionally, model-based segmentation can provide reliable orbital volume measurements when CT image quality is poor.

  10. Analysis of methods to estimate spring flows in a karst aquifer

    USGS Publications Warehouse

    Sepulveda, N.

    2009-01-01

    Hydraulically and statistically based methods were analyzed to identify the most reliable method to predict spring flows in a karst aquifer. Measured water levels at nearby observation wells, measured spring pool altitudes, and the distance between observation wells and the spring pool were the parameters used to match measured spring flows. Measured spring flows at six Upper Floridan aquifer springs in central Florida were used to assess the reliability of these methods to predict spring flows. Hydraulically based methods involved the application of the Theis, Hantush-Jacob, and Darcy-Weisbach equations, whereas the statistically based methods were the multiple linear regressions and the technology of artificial neural networks (ANNs). Root mean square errors between measured and predicted spring flows using the Darcy-Weisbach method ranged between 5% and 15% of the measured flows, lower than the 7% to 27% range for the Theis or Hantush-Jacob methods. Flows at all springs were estimated to be turbulent based on the Reynolds number derived from the Darcy-Weisbach equation for conduit flow. The multiple linear regression and the Darcy-Weisbach methods had similar spring flow prediction capabilities. The ANNs provided the lowest residuals between measured and predicted spring flows, ranging from 1.6% to 5.3% of the measured flows. The model prediction efficiency criteria also indicated that the ANNs were the most accurate method predicting spring flows in a karst aquifer. © 2008 National Ground Water Association.

  11. Analysis of methods to estimate spring flows in a karst aquifer.

    PubMed

    Sepúlveda, Nicasio

    2009-01-01

    Hydraulically and statistically based methods were analyzed to identify the most reliable method to predict spring flows in a karst aquifer. Measured water levels at nearby observation wells, measured spring pool altitudes, and the distance between observation wells and the spring pool were the parameters used to match measured spring flows. Measured spring flows at six Upper Floridan aquifer springs in central Florida were used to assess the reliability of these methods to predict spring flows. Hydraulically based methods involved the application of the Theis, Hantush-Jacob, and Darcy-Weisbach equations, whereas the statistically based methods were the multiple linear regressions and the technology of artificial neural networks (ANNs). Root mean square errors between measured and predicted spring flows using the Darcy-Weisbach method ranged between 5% and 15% of the measured flows, lower than the 7% to 27% range for the Theis or Hantush-Jacob methods. Flows at all springs were estimated to be turbulent based on the Reynolds number derived from the Darcy-Weisbach equation for conduit flow. The multiple linear regression and the Darcy-Weisbach methods had similar spring flow prediction capabilities. The ANNs provided the lowest residuals between measured and predicted spring flows, ranging from 1.6% to 5.3% of the measured flows. The model prediction efficiency criteria also indicated that the ANNs were the most accurate method predicting spring flows in a karst aquifer.
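
    For the Darcy-Weisbach approach mentioned in both records above, a hedged sketch of the conduit-flow calculation is given below: the velocity follows from the head loss between an observation well and the spring pool, the flow from the conduit cross-section, and the Reynolds number indicates whether the flow is turbulent. The friction factor, conduit diameter, and example numbers are illustrative assumptions, not values from the study.

    ```python
    import numpy as np

    def darcy_weisbach_spring_flow(head_loss, length, diameter, friction_factor, nu=1.0e-6):
        """Conduit-flow sketch based on the Darcy-Weisbach equation.
        head_loss       -- water-level difference between well and spring pool [m]
        length          -- conduit length (well-to-spring distance) [m]
        diameter        -- effective conduit diameter [m]          (assumed)
        friction_factor -- Darcy friction factor [-]               (calibrated/assumed)
        nu              -- kinematic viscosity of water [m^2/s]
        """
        g = 9.81
        v = np.sqrt(2.0 * g * head_loss * diameter / (friction_factor * length))
        q = v * np.pi * diameter**2 / 4.0          # volumetric spring flow [m^3/s]
        reynolds = v * diameter / nu               # >~4000 suggests turbulent conduit flow
        return q, reynolds

    q, re = darcy_weisbach_spring_flow(head_loss=1.5, length=2000.0, diameter=2.0,
                                       friction_factor=0.03)
    print(f"Q = {q:.2f} m^3/s, Re = {re:.0f}")
    ```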

  12. Method for 3D profilometry measurement based on contouring moire fringe

    NASA Astrophysics Data System (ADS)

    Shi, Zhiwei; Lin, Juhua

    2007-12-01

    3D shape measurement has recently been one of the most active branches of optical research. A method of 3D profilometry that combines the moire projection method with phase-shifting technology under SCM (Single Chip Microcomputer) control is presented in this paper. Automatic measurement of 3D surface profiles can be carried out with high speed and high precision by applying this method.
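
    As a concrete illustration of the phase-shifting step used in such systems, the sketch below computes the wrapped phase from four fringe images shifted by π/2 and converts phase to height with a standard projection-moire triangulation formula. The geometry parameters and the height formula are generic textbook assumptions, not the specific optical configuration of this paper.

    ```python
    import numpy as np

    def four_step_phase(i1, i2, i3, i4):
        """Wrapped phase map from four fringe images shifted by pi/2."""
        return np.arctan2(i4 - i2, i1 - i3)

    def height_from_phase(phase, grating_period, separation, standoff):
        """Generic projection-moire conversion (assumed geometry):
        h = L * p * phase / (2*pi*d + p * phase), with grating period p,
        projector-camera separation d and standoff distance L."""
        p, d, l = grating_period, separation, standoff
        return l * p * phase / (2.0 * np.pi * d + p * phase)
    ```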

  13. A new measuring method for motion accuracy of 3-axis NC equipments based on composite trajectory of circle and non-circle

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Du, Zhengchun; Yang, Jiangguo; Hong, Maisheng

    2011-12-01

    Geometric motion error measurement has been considered an important task for accuracy enhancement and quality assurance of NC machine tools and CMMs. In view of the disadvantages of traditional measuring methods, a new measuring method for the motion accuracy of 3-axis NC equipment, based on a composite trajectory including circular and non-circular (straight-line and/or polygonal-line) segments, is proposed. The principles and techniques of the new measuring method are discussed in detail. Eight feasible measuring strategies based on different measuring groupings are summarized and optimized. An experiment with the most preferable strategy is carried out on the 3-axis CNC vertical machining center Cincinnati 750 Arrow using a cross grid encoder. The total measuring time for the 21 error components of the new method is cut down to 1-2 h because of easy installation, adjustment, and operation and the non-contact nature of the measurement. Results show that the new method is suitable for "on machine" measurement and has good prospects for wide application.

  14. Information fusion methods based on physical laws.

    PubMed

    Rao, Nageswara S V; Reister, David B; Barhen, Jacob

    2005-01-01

    We consider systems whose parameters satisfy certain easily computable physical laws. Each parameter is directly measured by a number of sensors, or estimated using measurements, or both. The measurement process may introduce both systematic and random errors which may then propagate into the estimates. Furthermore, the actual parameter values are not known since every parameter is measured or estimated, which makes the existing sample-based fusion methods inapplicable. We propose a fusion method for combining the measurements and estimators based on the least violation of physical laws that relate the parameters. Under fairly general smoothness and nonsmoothness conditions on the physical laws, we show the asymptotic convergence of our method and also derive distribution-free performance bounds based on finite samples. For suitable choices of the fuser classes, we show that for each parameter the fused estimate is probabilistically at least as good as its best measurement as well as best estimate. We illustrate the effectiveness of this method for a practical problem of fusing well-log data in methane hydrate exploration.
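
    The 'least violation of physical laws' idea can be illustrated with a small hedged sketch: fused parameter values are chosen to stay close to the available measurements and estimates while penalising violation of a known constraint relating the parameters. This is only an illustration of the principle with a generic least-squares fuser, not the paper's estimator or its performance bounds.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def fuse_with_law(measurements, sigmas, law, weight=10.0):
        """Fuse redundant measurements of several parameters by minimising the
        measurement misfit plus a penalty on violating the physical law
        `law(x) -> scalar residual` (zero when the law is satisfied)."""
        measurements = np.asarray(measurements, float)
        sigmas = np.asarray(sigmas, float)
        def residuals(x):
            return np.concatenate([(x - measurements) / sigmas, [weight * law(x)]])
        return least_squares(residuals, x0=measurements).x

    # Toy example: three quantities that must satisfy x0 + x1 - x2 = 0 (a conservation law)
    law = lambda x: x[0] + x[1] - x[2]
    print(fuse_with_law([2.1, 3.0, 4.8], [0.1, 0.1, 0.2], law))
    ```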

  15. Accuracy of two simple methods for estimation of thyroidal ¹³¹I kinetics for dosimetry-based treatment of Graves' disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Traino, A. C.; Xhafa, B.; Sezione di Fisica Medica, U.O. Fisica Sanitaria, Azienda Ospedaliero-Universitaria Pisana, via Roma n. 67, Pisa 56125

    2009-04-15

    One of the major challenges to the more widespread use of individualized, dosimetry-based radioiodine treatment of Graves' disease is the development of a reasonably fast, simple, and cost-effective method to measure thyroidal ¹³¹I kinetics in patients. Even though the fixed activity administration method does not optimize the therapy, giving often too high or too low a dose to the gland, it provides effective treatment for almost 80% of patients without consuming excessive time and resources. In this article two simple methods for the evaluation of the kinetics of ¹³¹I in the thyroid gland are presented and discussed. The first is based on two measurements 4 and 24 h after a diagnostic ¹³¹I administration and the second on one measurement 4 h after such an administration and a linear correlation between this measurement and the maximum uptake in the thyroid. The thyroid absorbed dose calculated by each of the two methods is compared to that calculated by a more complete ¹³¹I kinetics evaluation, based on seven thyroid uptake measurements for 35 patients at various times after the therapy administration. There are differences in the thyroid absorbed doses between those derived by each of the two simpler methods and the "reference" value (derived by more complete uptake measurements following the therapeutic ¹³¹I administration), with 20% median and 40% 90-percentile differences for the first method (i.e., based on two thyroid uptake measurements at 4 and 24 h after ¹³¹I administration) and 25% median and 45% 90-percentile differences for the second method (i.e., based on one measurement at 4 h post-administration). Predictably, although relatively fast and convenient, neither of these simpler methods appears to be as accurate as thyroid dose estimates based on more complete kinetic data.
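
    The first simplified protocol (uptake at 4 and 24 h) lends itself to a hedged back-of-the-envelope sketch: assuming mono-exponential thyroidal washout after the first measurement, the two uptake values give an effective decay constant, a back-extrapolated maximum uptake, and a residence time, from which the absorbed dose follows once an energy-deposition (S) factor and gland mass are supplied. The assumptions and the explicit S-factor input are illustrative; the paper's dosimetry protocol is more detailed.

    ```python
    import numpy as np

    def thyroid_kinetics_two_points(u4, u24, t1=4.0, t2=24.0):
        """Estimate effective decay constant, maximum uptake and residence time
        from fractional uptakes u4 (at t1 hours) and u24 (at t2 hours), assuming
        mono-exponential washout (an illustrative simplification)."""
        lam_eff = np.log(u4 / u24) / (t2 - t1)      # effective decay constant [1/h]
        u_max = u4 * np.exp(lam_eff * t1)           # back-extrapolated maximum uptake
        residence_time = u_max / lam_eff            # integral of u_max*exp(-lam*t) [h]
        return u_max, lam_eff, residence_time

    def thyroid_dose(a0_mbq, residence_time_h, mass_g, s_factor_gy_g_per_mbq_h):
        """Absorbed dose = administered activity x residence time x S-factor / mass.
        The S-factor must come from published 131I dosimetry tables."""
        return a0_mbq * residence_time_h * s_factor_gy_g_per_mbq_h / mass_g
    ```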

  16. Automatic lumbar spine measurement in CT images

    NASA Astrophysics Data System (ADS)

    Mao, Yunxiang; Zheng, Dong; Liao, Shu; Peng, Zhigang; Yan, Ruyi; Liu, Junhua; Dong, Zhongxing; Gong, Liyan; Zhou, Xiang Sean; Zhan, Yiqiang; Fei, Jun

    2017-03-01

    Accurate lumbar spine measurement in CT images provides an essential way for quantitative spinal diseases analysis such as spondylolisthesis and scoliosis. In today's clinical workflow, the measurements are manually performed by radiologists and surgeons, which is time consuming and irreproducible. Therefore, automatic and accurate lumbar spine measurement algorithm becomes highly desirable. In this study, we propose a method to automatically calculate five different lumbar spine measurements in CT images. There are three main stages of the proposed method: First, a learning based spine labeling method, which integrates both the image appearance and spine geometry information, is used to detect lumbar and sacrum vertebrae in CT images. Then, a multiatlases based image segmentation method is used to segment each lumbar vertebra and the sacrum based on the detection result. Finally, measurements are derived from the segmentation result of each vertebra. Our method has been evaluated on 138 spinal CT scans to automatically calculate five widely used clinical spine measurements. Experimental results show that our method can achieve more than 90% success rates across all the measurements. Our method also significantly improves the measurement efficiency compared to manual measurements. Besides benefiting the routine clinical diagnosis of spinal diseases, our method also enables the large scale data analytics for scientific and clinical researches.

  17. Measurement correction method for force sensor used in dynamic pressure calibration based on artificial neural network optimized by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing

    2017-12-01

    We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods that are commonly used to measure the propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are firstly analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and extend the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through the connection mode of bolt fastening, which causes adverse effects such as additional pretightening and inertia forces. To solve these effects, the influence mechanisms of the pretightening force, the inertia force and other influence factors on the force measurement are theoretically analysed. Then a measurement correction method for the force measurement is proposed based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model is discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which can meet the requirements of engineering applications.

  18. Estimating Classification Accuracy for Complex Decision Rules Based on Multiple Scores

    ERIC Educational Resources Information Center

    Douglas, Karen M.; Mislevy, Robert J.

    2010-01-01

    Important decisions about students are made by combining multiple measures using complex decision rules. Although methods for characterizing the accuracy of decisions based on a single measure have been suggested by numerous researchers, such methods are not useful for estimating the accuracy of decisions based on multiple measures. This study…

  19. An in-situ measuring method for planar straightness error

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Fu, Luhua; Yang, Tongyu; Sun, Changku; Wang, Zhong; Zhao, Yan; Liu, Changjie

    2018-01-01

    To address some current problems in measuring the plane shape error of a workpiece, an in-situ measuring method based on laser triangulation is presented in this paper. The method avoids the inefficiency of traditional methods such as the knife straightedge as well as the time and cost requirements of a coordinate measuring machine (CMM). A laser-based measuring head is designed and installed on the spindle of a numerical control (NC) machine. The measuring head moves along a planned path to sample measuring points. The spatial coordinates of the measuring points are obtained by combining the laser triangulation displacement sensor with the coordinate system of the NC machine, which makes the intended in-situ measurement achievable. The planar straightness error is evaluated using particle swarm optimization (PSO). To verify the feasibility and accuracy of the measuring method, simulation experiments were implemented with a CMM. Comparing the measurement results of the measuring head with the corresponding values obtained by a composite measuring machine verifies that the method can realize high-precision, automatic measurement of the planar straightness error of the workpiece.
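
    To make the PSO-based straightness evaluation concrete, the sketch below implements a generic minimum-zone objective (the width of the narrowest band y = a·x + b containing all sampled deviations) and a minimal particle swarm optimiser. The swarm parameters, bounds, and simulated data are illustrative assumptions and not the paper's specific PSO variant or measurement data.

    ```python
    import numpy as np

    def straightness_error(params, x, y):
        """Minimum-zone objective: width of the band y = a*x + b that encloses all
        measured deviations (b cancels in the width but is kept for clarity)."""
        a, b = params
        d = y - (a * x + b)
        return d.max() - d.min()

    def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimiser (illustrative, not the paper's exact variant)."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T
        pos = rng.uniform(lo, hi, (n_particles, len(lo)))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = np.array([objective(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            vals = np.array([objective(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, float(pbest_val.min())

    # Simulated profile: x in mm, deviations y in micrometres
    x = np.linspace(0.0, 100.0, 50)
    y = 0.002 * x + np.random.default_rng(1).normal(0.0, 0.5, x.size)
    line, err = pso(lambda p: straightness_error(p, x, y), bounds=[(-0.1, 0.1), (-5.0, 5.0)])
    print(f"straightness error = {err:.3f} micrometres")
    ```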

  20. A meta-analytic review of self-reported, clinician-rated, and performance-based motivation measures in schizophrenia: Are we measuring the same "stuff"?

    PubMed

    Luther, Lauren; Firmin, Ruth L; Lysaker, Paul H; Minor, Kyle S; Salyers, Michelle P

    2018-04-07

    An array of self-reported, clinician-rated, and performance-based measures has been used to assess motivation in schizophrenia; however, the convergent validity evidence for these motivation assessment methods is mixed. The current study is a series of meta-analyses that summarize the relationships between methods of motivation measurement in 45 studies of people with schizophrenia. The overall mean effect size between self-reported and clinician-rated motivation measures (r = 0.27, k = 33) was significant, positive, and approaching medium in magnitude, and the overall effect size between performance-based and clinician-rated motivation measures (r = 0.21, k = 11) was positive, significant, and small in magnitude. The overall mean effect size between self-reported and performance-based motivation measures was negligible and non-significant (r = -0.001, k = 2), but this meta-analysis was underpowered. Findings suggest modest convergent validity between clinician-rated and both self-reported and performance-based motivation measures, but additional work is needed to clarify the convergent validity between self-reported and performance-based measures. Further, there is likely more variability than similarity in the underlying construct that is being assessed across the three methods, particularly between the performance-based and other motivation measurement types. These motivation assessment methods should not be used interchangeably, and measures should be more precisely described as the specific motivational construct or domain they are capturing. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses.

    PubMed

    Ye, Jun

    2015-03-01

    In pattern recognition and medical diagnosis, similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposed improved cosine similarity measures of SNSs based on cosine function, including single valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Then, weighted cosine similarity measures of SNSs were introduced by taking into account the importance of each element. Further, a medical diagnosis method using the improved cosine similarity measures was proposed to solve medical diagnosis problems with simplified neutrosophic information. The improved cosine similarity measures between SNSs were introduced based on cosine function. Then, we compared the improved cosine similarity measures of SNSs with existing cosine similarity measures of SNSs by numerical examples to demonstrate their effectiveness and rationality for overcoming some shortcomings of existing cosine similarity measures of SNSs in some cases. In the medical diagnosis method, we can find a proper diagnosis by the cosine similarity measures between the symptoms and considered diseases which are represented by SNSs. Then, the medical diagnosis method based on the improved cosine similarity measures was applied to two medical diagnosis problems to show the applications and effectiveness of the proposed method. Two numerical examples all demonstrated that the improved cosine similarity measures of SNSs based on the cosine function can overcome the shortcomings of the existing cosine similarity measures between two vectors in some cases. By two medical diagnoses problems, the medical diagnoses using various similarity measures of SNSs indicated the identical diagnosis results and demonstrated the effectiveness and rationality of the diagnosis method proposed in this paper. The improved cosine measures of SNSs based on cosine function can overcome some drawbacks of existing cosine similarity measures of SNSs in vector space, and then their diagnosis method is very suitable for handling the medical diagnosis problems with simplified neutrosophic information and demonstrates the effectiveness and rationality of medical diagnoses. Copyright © 2014 Elsevier B.V. All rights reserved.
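
    To make the similarity measures concrete, the sketch below implements cosine-function-based similarities between single valued neutrosophic sets in the two general shapes described (one form using the sum of the membership differences and one using their maximum). The exact arguments of the cosine function and the weighting scheme should be taken from the paper; the code is an illustrative approximation.

    ```python
    import numpy as np

    def cosine_similarity_svns(a, b, variant="sum"):
        """Cosine-function-based similarity between two single valued neutrosophic
        sets, given as arrays of shape (n, 3) holding (truth, indeterminacy,
        falsity) degrees in [0, 1].  'sum' uses cos(pi/6 * (|dT|+|dI|+|dF|));
        'max' uses cos(pi/2 * max(|dT|, |dI|, |dF|))."""
        d = np.abs(np.asarray(a, float) - np.asarray(b, float))
        if variant == "sum":
            per_element = np.cos(np.pi / 6.0 * d.sum(axis=1))
        else:
            per_element = np.cos(np.pi / 2.0 * d.max(axis=1))
        return float(per_element.mean())

    # Diagnosis = the disease whose symptom profile is most similar to the patient's
    patient = [[0.8, 0.2, 0.1], [0.6, 0.3, 0.1]]
    disease = [[0.7, 0.2, 0.2], [0.4, 0.4, 0.3]]
    print(cosine_similarity_svns(patient, disease))
    ```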

  2. Review on recent research progress on laser power measurement based on light pressure

    NASA Astrophysics Data System (ADS)

    Lai, WenChang; Zhou, Pu

    2018-03-01

    Accurately measuring laser power is one of the most important issues in evaluating the performance of high-power lasers. For the time being, most demonstrated techniques belong to the direct measuring route. Indirect measurement of laser power based on light pressure, which has been under intensive investigation, has advantages such as fast response, real-time measurement and high accuracy compared with the direct measuring route. In this paper, we review several recently proposed non-traditional methods based on light pressure for precisely measuring laser power. The system setup, measuring principle and scaling methods are introduced and analyzed in detail. We also compare the benefits and drawbacks of these methods and analyze the uncertainties of the measurements.
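
    The basic physical relation behind light-pressure power measurement is worth stating explicitly: at normal incidence on a mirror of reflectivity R, the radiation force is F = (1 + R)·P/c, so P = F·c/(1 + R). The tiny magnitude of this force (about 0.67 mN for 100 kW on a perfect mirror) is what drives the need for the high-sensitivity schemes reviewed. The helper below is a simple worked illustration of that relation.

    ```python
    C = 299_792_458.0      # speed of light [m/s]

    def laser_power_from_force(force_n, reflectivity=1.0):
        """Invert the normal-incidence radiation-pressure relation F = (1 + R) * P / c."""
        return force_n * C / (1.0 + reflectivity)

    print(laser_power_from_force(6.7e-4))   # ~1.0e5 W, i.e. a 100 kW class beam
    ```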

  3. Comparison of different estimation techniques for biomass concentration in large scale yeast fermentation.

    PubMed

    Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U

    2011-04-01

    In this study, previously developed five different state estimation methods are examined and compared for estimation of biomass concentrations at a production scale fed-batch bioprocess. These methods are i. estimation based on kinetic model of overflow metabolism; ii. estimation based on metabolic black-box model; iii. estimation based on observer; iv. estimation based on artificial neural network; v. estimation based on differential evaluation. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, number of primary measurements required and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages although the number of measurements required is more than that for the other methods. However, the required extra measurements are based on commonly employed instruments in an industrial environment. This method is used for developing a model based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Comparison of a New Cobinamide-Based Method to a Standard Laboratory Method for Measuring Cyanide in Human Blood

    PubMed Central

    Swezey, Robert; Shinn, Walter; Green, Carol; Drover, David R.; Hammer, Gregory B.; Schulman, Scott R.; Zajicek, Anne; Jett, David A.; Boss, Gerry R.

    2013-01-01

    Most hospital laboratories do not measure blood cyanide concentrations, and samples must be sent to reference laboratories. A simple method is needed for measuring cyanide in hospitals. The authors previously developed a method to quantify cyanide based on the high binding affinity of the vitamin B12 analog, cobinamide, for cyanide and a major spectral change observed for cyanide-bound cobinamide. This method is now validated in human blood, and the findings include a mean inter-assay accuracy of 99.1%, precision of 8.75% and a lower limit of quantification of 3.27 µM cyanide. The method was applied to blood samples from children treated with sodium nitroprusside and it yielded measurable results in 88 of 172 samples (51%), whereas the reference laboratory yielded results in only 19 samples (11%). In all 19 samples, the cobinamide-based method also yielded measurable results. The two methods showed reasonable agreement when analyzed by linear regression, but not when analyzed by a standard error of the estimate or paired t-test. Differences in results between the two methods may be because samples were assayed at different times on different sample types. The cobinamide-based method is applicable to human blood, and can be used in hospital laboratories and emergency rooms. PMID:23653045

  5. Accuracy improvement of multimodal measurement of speed of sound based on image processing

    NASA Astrophysics Data System (ADS)

    Nitta, Naotaka; Kaya, Akio; Misawa, Masaki; Hyodo, Koji; Numano, Tomokazu

    2017-07-01

    Since the speed of sound (SOS) reflects tissue characteristics and is expected as an evaluation index of elasticity and water content, the noninvasive measurement of SOS is eagerly anticipated. However, it is difficult to measure the SOS by using an ultrasound device alone. Therefore, we have presented a noninvasive measurement method of SOS using ultrasound (US) and magnetic resonance (MR) images. By this method, we determine the longitudinal SOS based on the thickness measurement using the MR image and the time of flight (TOF) measurement using the US image. The accuracy of SOS measurement is affected by the accuracy of image registration and the accuracy of thickness measurements in the MR and US images. In this study, we address the accuracy improvement in the latter thickness measurement, and present an image-processing-based method for improving the accuracy of thickness measurement. The method was investigated by using in vivo data obtained from a tissue-engineered cartilage implanted in the back of a rat, with an unclear boundary.
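
    The core idea of the multimodal SOS estimate can be written in one line, under the usual assumption that the ultrasound scanner converts time of flight into distance with a nominal speed of sound (commonly 1540 m/s): comparing the apparent thickness on the US image with the true thickness from the registered MR image rescales that nominal value. The sketch below is this simplified relation, not the paper's full processing chain.

    ```python
    def speed_of_sound(thickness_mr_mm, apparent_thickness_us_mm, c_assumed=1540.0):
        """c_true = c_assumed * (MR thickness / apparent US thickness)."""
        return c_assumed * thickness_mr_mm / apparent_thickness_us_mm

    # Example: the layer looks 9.5 mm thick on the US image but is 10.0 mm on MR
    print(speed_of_sound(10.0, 9.5))   # ~1621 m/s
    ```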

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hohimer, J.P.

    The use of laser-based analytical methods in nuclear-fuel processing plants is considered. The species and locations for accountability, process control, and effluent control measurements in the Coprocessing, Thorex, and reference Purex fuel processing operations are identified and the conventional analytical methods used for these measurements are summarized. The laser analytical methods based upon Raman, absorption, fluorescence, and nonlinear spectroscopy are reviewed and evaluated for their use in fuel processing plants. After a comparison of the capabilities of the laser-based and conventional analytical methods, the promising areas of application of the laser-based methods in fuel processing plants are identified.

  7. Method and system for environmentally adaptive fault tolerant computing

    NASA Technical Reports Server (NTRS)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.

  8. An orthogonal return method for linearly polarized beam based on the Faraday effect and its application in interferometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Benyong, E-mail: chenby@zstu.edu.cn; Zhang, Enzheng; Yan, Liping

    2014-10-15

    Correct return of the measuring beam is essential for laser interferometers to carry out measurement. In practice, because the measured object inevitably rotates or moves laterally, the measurement accuracy decreases, and in some cases measurement becomes impossible. To solve this problem, a novel orthogonal return method for a linearly polarized beam based on the Faraday effect is presented. The orthogonal return of the incident linearly polarized beam is realized by using a Faraday rotator with a rotation angle of 45°. The optical configuration of the method is designed and analyzed in detail. To verify its practicability in polarization interferometry, a laser heterodyne interferometer based on this method was constructed and precision displacement measurement experiments were performed. The results show that the advantage of the method is that the correct return of the incident measuring beam is ensured when a large lateral displacement or angular rotation of the measured object occurs, so that interferometric measurement can still be performed.
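
    The orthogonal-return principle can be checked with a short Jones-calculus sketch: because Faraday rotation is non-reciprocal, the forward and return passes through a 45° rotator rotate the polarization in the same lab-frame sense (normal-incidence reflection leaves the lab-frame orientation unchanged), so the two passes add to 90° and the returned beam is orthogonal to the incident one. The matrix convention below is a simplified illustration.

    ```python
    import numpy as np

    def rot(theta):
        """Jones rotation matrix."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    # Horizontal linear polarization, 45-degree Faraday rotation on each pass
    E_in = np.array([1.0, 0.0])
    E_back = rot(np.pi / 4) @ rot(np.pi / 4) @ E_in
    print(np.round(E_back, 6))   # -> [0, 1]: orthogonal to the input polarization
    ```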

  9. Residual gravimetric method to measure nebulizer output.

    PubMed

    Vecellio, Laurent; Grimbert, Daniel; Bordenave, Joelle; Benoit, Guy; Furet, Yves; Fauroux, Brigitte; Boissinot, Eric; De Monte, Michele; Lemarié, Etienne; Diot, Patrice

    2004-01-01

    The aim of this study was to assess a residual gravimetric method based on weighing dry filters to measure the aerosol output of nebulizers. This residual gravimetric method was compared to assay methods based on spectrophotometric measurement of terbutaline (Bricanyl, Astra Zeneca, France), high-performance liquid chromatography (HPLC) measurement of tobramycin (Tobi, Chiron, U.S.A.), and electrochemical measurements of NaF (as defined by the European standard). Two breath-enhanced jet nebulizers, one standard jet nebulizer, and one ultrasonic nebulizer were tested. Output produced by the residual gravimetric method was calculated by weighing the filters both before and after aerosol collection and by filter drying corrected by the proportion of drug contained in total solute mass. Output produced by the electrochemical, spectrophotometric, and HPLC methods was determined after assaying the drug extraction filter. The results demonstrated a strong correlation between the residual gravimetric method (x axis) and assay methods (y axis) in terms of drug mass output (y = 1.00 x -0.02, r(2) = 0.99, n = 27). We conclude that a residual gravimetric method based on dry filters, when validated for a particular agent, is an accurate way of measuring aerosol output.
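
    The residual gravimetric calculation itself is simple enough to state as a worked example: the dry mass gained by the collection filter is the total solute deposited, and multiplying by the drug's fraction of the total solute in the nebulizer charge gives the drug mass output. The numbers below are invented for illustration only.

    ```python
    def aerosol_drug_output(filter_mass_before_g, filter_mass_after_dry_g,
                            drug_mass_mg, total_solute_mass_mg):
        """Drug mass output [mg] from the residual gravimetric method."""
        solute_on_filter_mg = (filter_mass_after_dry_g - filter_mass_before_g) * 1000.0
        drug_fraction = drug_mass_mg / total_solute_mass_mg
        return solute_on_filter_mg * drug_fraction

    # Example: filter gains 45 mg of dry solute; the drug is 5 mg of 14 mg total solute
    print(aerosol_drug_output(1.2000, 1.2450, drug_mass_mg=5.0, total_solute_mass_mg=14.0))
    ```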

  10. A novel method for accurate needle-tip identification in trans-rectal ultrasound-based high-dose-rate prostate brachytherapy.

    PubMed

    Zheng, Dandan; Todor, Dorin A

    2011-01-01

    In real-time trans-rectal ultrasound (TRUS)-based high-dose-rate prostate brachytherapy, the accurate identification of needle-tip position is critical for treatment planning and delivery. Currently, needle-tip identification on ultrasound images can be subject to large uncertainty and errors because of ultrasound image quality and imaging artifacts. To address this problem, we developed a method based on physical measurements with simple and practical implementation to improve the accuracy and robustness of needle-tip identification. Our method uses measurements of the residual needle length and an off-line pre-established coordinate transformation factor, to calculate the needle-tip position on the TRUS images. The transformation factor was established through a one-time systematic set of measurements of the probe and template holder positions, applicable to all patients. To compare the accuracy and robustness of the proposed method and the conventional method (ultrasound detection), based on the gold-standard X-ray fluoroscopy, extensive measurements were conducted in water and gel phantoms. In water phantom, our method showed an average tip-detection accuracy of 0.7 mm compared with 1.6 mm of the conventional method. In gel phantom (more realistic and tissue-like), our method maintained its level of accuracy while the uncertainty of the conventional method was 3.4mm on average with maximum values of over 10mm because of imaging artifacts. A novel method based on simple physical measurements was developed to accurately detect the needle-tip position for TRUS-based high-dose-rate prostate brachytherapy. The method demonstrated much improved accuracy and robustness over the conventional method. Copyright © 2011 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  11. Nanoparticle filtration performance of NIOSH-certified particulate air-purifying filtering facepiece respirators: evaluation by light scattering photometric and particle number-based test methods.

    PubMed

    Rengasamy, Samy; Eimer, Benjamin C

    2012-01-01

    National Institute for Occupational Safety and Health (NIOSH) certification test methods employ charge neutralized NaCl or dioctyl phthalate (DOP) aerosols to measure filter penetration levels of air-purifying particulate respirators photometrically using a TSI 8130 automated filter tester at 85 L/min. A previous study in our laboratory found that widely different filter penetration levels were measured for nanoparticles depending on whether a particle number (count)-based detector or a photometric detector was used. The purpose of this study was to better understand the influence of key test parameters, including filter media type, challenge aerosol size range, and detector system. Initial penetration levels for 17 models of NIOSH-approved N-, R-, and P-series filtering facepiece respirators were measured using the TSI 8130 photometric method and compared with the particle number-based penetration (obtained using two ultrafine condensation particle counters) for the same challenge aerosols generated by the TSI 8130. In general, the penetration obtained by the photometric method was less than the penetration obtained with the number-based method. Filter penetration was also measured for ambient room aerosols. Penetration measured by the TSI 8130 photometric method was lower than the number-based ambient aerosol penetration values. Number-based monodisperse NaCl aerosol penetration measurements showed that the most penetrating particle size was in the 50 nm range for all respirator models tested, with the exception of one model at ~200 nm size. Respirator models containing electrostatic filter media also showed lower penetration values with the TSI 8130 photometric method than the number-based penetration obtained for the most penetrating monodisperse particles. Results suggest that to provide a more challenging respirator filter test method than what is currently used for respirators containing electrostatic media, the test method should utilize a sufficient number of particles <100 nm and a count (particle number)-based detector.

  12. Comparison of methods to estimate water access: a pilot study of a GPS-based approach in low resource settings.

    PubMed

    Pearson, Amber L

    2016-09-20

    Most water access studies involve self-reported measures such as time spent or simple spatial measures such as Euclidean distance from home to source. GPS-based measures of access are often considered actual access and have shown little correlation with self-reported measures. One main obstacle to widespread use of GPS-based measurement of access to water has been technological limitations (e.g., battery life). As such, GPS-based measures have been limited by time and in sample size. The aim of this pilot study was to develop and test a novel GPS unit, (≤4-week battery life, waterproof) to measure access to water. The GPS-based method was pilot-tested to estimate number of trips per day, time spent and distance traveled to source for all water collected over a 3-day period in five households in south-western Uganda. This method was then compared to self-reported measures and commonly used spatial measures of access for the same households. Time spent collecting water was significantly overestimated using a self-reported measure, compared to GPS-based (p < 0.05). In contrast, both the GIS Euclidean distances to nearest and actual primary source significantly underestimated distances traveled, compared to the GPS-based measurement of actual travel paths to water source (p < 0.05). Households did not consistently collect water from the source nearest their home. Comparisons between the GPS-based measure and self-reported meters traveled were not made, as respondents did not feel that they could accurately estimate distance. However, there was complete agreement between self-reported primary source and GPS-based. Reliance on cross-sectional self-reported or simple GIS measures leads to misclassification in water access measurement. This new method offers reductions in such errors and may aid in understanding dynamic measures of access to water for health studies.
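
    The distinction the study draws between travelled distance and straight-line distance is easy to make concrete: summing great-circle distances along the recorded GPS track gives the distance actually walked, whereas a single home-to-source great-circle distance reproduces the Euclidean-style GIS measure. The sketch below assumes WGS84 latitude/longitude points; it is an illustration, not the study's processing code.

    ```python
    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two lat/lon points in metres."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def travelled_vs_straight(track, home, source):
        """Distance walked along a GPS track versus the straight-line home-to-source distance."""
        travelled = sum(haversine_m(*track[i], *track[i + 1]) for i in range(len(track) - 1))
        straight = haversine_m(*home, *source)
        return travelled, straight
    ```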

  13. Study on Measuring the Viscosity of Lubricating Oil by Viscometer Based on Hele-Shaw Principle

    NASA Astrophysics Data System (ADS)

    Li, Longfei

    2017-12-01

    To explore how to accurately measure the viscosity of oil samples using a viscometer based on the Hele-Shaw principle, three different measurement methods were designed in the laboratory and the statistical characteristics of the measured values were compared to identify the best measurement method. The results show that when the oil sample to be measured is placed in the magnetic field formed by the magnet and drawn in from the same distance from the magnet, the viscosity of the sample can be measured accurately.

  14. Real-Time GNSS-Based Attitude Determination in the Measurement Domain.

    PubMed

    Zhao, Lin; Li, Na; Li, Liang; Zhang, Yi; Cheng, Chun

    2017-02-05

    A multi-antenna-based GNSS receiver is capable of providing high-precision and drift-free attitude solution. Carrier phase measurements need be utilized to achieve high-precision attitude. The traditional attitude determination methods in the measurement domain and the position domain resolve the attitude and the ambiguity sequentially. The redundant measurements from multiple baselines have not been fully utilized to enhance the reliability of attitude determination. A multi-baseline-based attitude determination method in the measurement domain is proposed to estimate the attitude parameters and the ambiguity simultaneously. Meanwhile, the redundancy of attitude resolution has also been increased so that the reliability of ambiguity resolution and attitude determination can be enhanced. Moreover, in order to further improve the reliability of attitude determination, we propose a partial ambiguity resolution method based on the proposed attitude determination model. The static and kinematic experiments were conducted to verify the performance of the proposed method. When compared with the traditional attitude determination methods, the static experimental results show that the proposed method can improve the accuracy by at least 0.03° and enhance the continuity by 18%, at most. The kinematic result has shown that the proposed method can obtain an optimal balance between accuracy and reliability performance.

  15. LEAKAGE CHARACTERISTICS OF BASE OF RIVERBANK BY SELF POTENTIAL METHOD AND EXAMINATION OF EFFECTIVENESS OF SELF POTENTIAL METHOD TO HEALTH MONITORING OF BASE OF RIVERBANK

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kensaku; Okada, Takashi; Takeuchi, Atsuo; Yazawa, Masato; Uchibori, Sumio; Shimizu, Yoshihiko

    A field measurement with the Self Potential Method using copper sulfate electrodes was performed at the base of a riverbank of the WATARASE River, where a leakage problem exists, to examine the leakage characteristics. The measurement results showed the typical S-shape that indicates the existence of flowing groundwater. The results agreed with good accuracy with measurement results by the Ministry of Land, Infrastructure and Transport. Results of 1 m depth ground temperature detection and Chain-Array detection showed good agreement with the results of the Self Potential Method. The correlation between Self Potential values and groundwater velocity was examined in a model experiment, and an apparent correlation was found. These results indicate that the Self Potential Method is an effective method for examining the groundwater characteristics of a riverbank base in leakage problems.

  16. Self-recalibration of a robot-assisted structured-light-based measurement system.

    PubMed

    Xu, Jing; Chen, Rui; Liu, Shuntao; Guan, Yong

    2017-11-10

    The structured-light-based measurement method is widely employed in numerous fields. However, for industrial inspection, to achieve complete scanning of a work piece and overcome occlusion, the measurement system needs to be moved to different viewpoints. Moreover, frequent reconfiguration of the measurement system may be needed based on the size of the measured object, making the self-recalibration of extrinsic parameters indispensable. To this end, this paper proposes an automatic self-recalibration and reconstruction method, wherein a robot arm is employed to move the measurement system for complete scanning; the self-recalibration is achieved using fundamental matrix calculations and point cloud registration without the need for an accurate calibration gauge. Experimental results demonstrate the feasibility and accuracy of our method.
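
    The registration step at the heart of such self-recalibration can be illustrated with a standard closed-form rigid alignment: given corresponding points from the overlapping regions of two scans, the rotation and translation between viewpoints follow from a Kabsch/SVD solution. This is a simplified stand-in for the paper's full pipeline (fundamental matrix estimation plus point cloud registration), offered only to show the underlying computation.

    ```python
    import numpy as np

    def rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping points `src` (N, 3)
        onto corresponding points `dst` (N, 3): dst ~ R @ src + t."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        h = (src - c_src).T @ (dst - c_dst)
        u, _, vt = np.linalg.svd(h)
        d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        t = c_dst - r @ c_src
        return r, t
    ```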

  17. A Focusing Method in the Calibration Process of Image Sensors Based on IOFBs

    PubMed Central

    Fernández, Pedro R.; Lázaro, José L.; Gardel, Alfredo; Cano, Ángel E.; Bravo, Ignacio

    2010-01-01

    A focusing procedure in the calibration process of image sensors based on Incoherent Optical Fiber Bundles (IOFBs) is described using the information extracted from fibers. These procedures differ from any other currently known focusing method due to the non spatial in-out correspondence between fibers, which produces a natural codification of the image to transmit. Focus measuring is essential prior to carrying out calibration in order to guarantee accurate processing and decoding. Four algorithms have been developed to estimate the focus measure; two methods based on mean grey level, and the other two based on variance. In this paper, a few simple focus measures are defined and compared. Some experimental results referred to the focus measure and the accuracy of the developed methods are discussed in order to demonstrate its effectiveness. PMID:22315526
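
    The mean-grey-level and variance focus measures mentioned in the abstract reduce to one-line statistics of the image; the sketch below gives generic versions and a simple through-focus search. The exact measures and the IOFB-specific processing in the paper may differ.

    ```python
    import numpy as np

    def focus_mean_grey(image):
        """Mean grey level of the image (rises as the fibre cores come into focus)."""
        return float(np.mean(image))

    def focus_variance(image):
        """Grey-level variance: defocus blurs the fibre pattern and lowers contrast,
        so the variance peaks near best focus."""
        return float(np.var(image))

    def best_focus_index(image_stack):
        """Index of the sharpest frame in a through-focus stack, by the variance measure."""
        return int(np.argmax([focus_variance(im) for im in image_stack]))
    ```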

  18. Method for measuring the alternating current half-wave voltage of a Mach-Zehnder modulator based on opto-electronic oscillation.

    PubMed

    Hong, Jun; Chen, Dongchu; Peng, Zhiqiang; Li, Zulin; Liu, Haibo; Guo, Jian

    2018-05-01

    A new method for measuring the alternating current (AC) half-wave voltage of a Mach-Zehnder modulator is proposed and verified by experiment in this paper. Based on opto-electronic self-oscillation, the physical relationship between the saturation output power of the oscillating signal and the AC half-wave voltage is derived, and the AC half-wave voltage is obtained by measuring the saturation output power of the oscillating signal. The experimental results show that the data measured with the new method agree with those of a traditional method, and that neither an external microwave signal source nor calibration at different measurement frequencies is needed. The measuring process is therefore simplified while the measurement accuracy is preserved, which gives the method good practical value.

  19. Validation of Web-Based Physical Activity Measurement Systems Using Doubly Labeled Water

    PubMed Central

    Yamaguchi, Yukio; Yamada, Yosuke; Tokushima, Satoru; Hatamoto, Yoichi; Sagayama, Hiroyuki; Kimura, Misaka; Higaki, Yasuki; Tanaka, Hiroaki

    2012-01-01

    Background Online or Web-based measurement systems have been proposed as convenient methods for collecting physical activity data. We developed two Web-based physical activity systems—the 24-hour Physical Activity Record Web (24hPAR WEB) and 7 days Recall Web (7daysRecall WEB). Objective To examine the validity of two Web-based physical activity measurement systems using the doubly labeled water (DLW) method. Methods We assessed the validity of the 24hPAR WEB and 7daysRecall WEB in 20 individuals, aged 25 to 61 years. The order of email distribution and subsequent completion of the two Web-based measurement systems was randomized. Each measurement tool was used for a week. The participants’ activity energy expenditure (AEE) and total energy expenditure (TEE) were assessed over each week using the DLW method and compared with the respective energy expenditures estimated using the Web-based systems. Results The mean AEE was 3.90 (SD 1.43) MJ estimated using the 24hPAR WEB and 3.67 (SD 1.48) MJ measured by the DLW method. The Pearson correlation for AEE between the two methods was r = .679 (P < .001). The Bland-Altman 95% limits of agreement ranged from –2.10 to 2.57 MJ between the two methods. The Pearson correlation for TEE between the two methods was r = .874 (P < .001). The mean AEE was 4.29 (SD 1.94) MJ using the 7daysRecall WEB and 3.80 (SD 1.36) MJ by the DLW method. The Pearson correlation for AEE between the two methods was r = .144 (P = .54). The Bland-Altman 95% limits of agreement ranged from –3.83 to 4.81 MJ between the two methods. The Pearson correlation for TEE between the two methods was r = .590 (P = .006). The average input times using terminal devices were 8 minutes and 10 seconds for the 24hPAR WEB and 6 minutes and 38 seconds for the 7daysRecall WEB. Conclusions Both Web-based systems were found to be effective methods for collecting physical activity data and are appropriate for use in epidemiological studies. Because the measurement accuracy of the 24hPAR WEB was moderate to high, it could be suitable for evaluating the effect of interventions on individuals as well as for examining physical activity behavior. PMID:23010345
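    For readers who want to reproduce the agreement statistics quoted above, the following sketch (an assumed helper, not code from the study) computes the Pearson correlation and Bland-Altman 95% limits of agreement for paired energy-expenditure values; the sample numbers are hypothetical.

```python
# Minimal sketch (assumed helper, not from the study): Pearson correlation and
# Bland-Altman 95% limits of agreement between paired energy-expenditure values,
# e.g. Web-based AEE vs. DLW-based AEE, as reported in the abstract.
import numpy as np
from scipy import stats

def bland_altman_limits(a: np.ndarray, b: np.ndarray):
    """Return (mean difference, lower LoA, upper LoA) for paired measurements."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired per-subject values in MJ.
aee_web = np.array([3.1, 4.2, 5.0, 2.8, 4.6])
aee_dlw = np.array([2.9, 4.0, 4.6, 3.0, 4.1])
r, p = stats.pearsonr(aee_web, aee_dlw)
bias, lo, hi = bland_altman_limits(aee_web, aee_dlw)
print(f"r={r:.3f} (P={p:.3f}), bias={bias:.2f} MJ, LoA=[{lo:.2f}, {hi:.2f}] MJ")
```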

  20. Low-Resolution Raman-Spectroscopy Combustion Thermometry

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang-Viet; Kojima, Jun

    2008-01-01

    A method of optical thermometry, now undergoing development, involves low-resolution measurement of the spectrum of spontaneous Raman scattering (SRS) from N2 and O2 molecules. The method is especially suitable for measuring temperatures in high pressure combustion environments that contain N2, O2, or N2/O2 mixtures (including air). Methods based on SRS (in which scattered light is shifted in wavelength by amounts that depend on vibrational and rotational energy levels of laser-illuminated molecules) have been popular means of probing flames because they are almost the only methods that provide spatially and temporally resolved concentrations and temperatures of multiple molecular species in turbulent combustion. The present SRS-based method differs from prior SRS-based methods that have various drawbacks, a description of which would exceed the scope of this article. Two main differences between this and prior SRS-based methods are that it involves analysis in the frequency (equivalently, wavelength) domain, in contradistinction to analysis in the intensity domain in prior methods; and it involves low-resolution measurement of what amounts to predominantly the rotational Raman spectra of N2 and O2, in contradistinction to higher-resolution measurement of the vibrational Raman spectrum of N2 only in prior methods.

  1. A Parameter Identification Method for Helicopter Noise Source Identification and Physics-Based Semi-Empirical Modeling

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric, II; Schmitz, Fredric H.

    2010-01-01

    A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. This new method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made for operating conditions based on a small number of measurements taken at different operating conditions.

  2. [A new non-contact method based on relative spectral intensity for determining junction temperature of LED].

    PubMed

    Qiu, Xi-Zhen; Zhang, Fang-Hui

    2013-01-01

    A high-power white LED was prepared based on a high-thermal-conductivity aluminum substrate, blue chips and YAG phosphor. By studying the spectra at different junction temperatures, we found that the radiation spectrum of the white LED has a minimum at 485 nm, and that the radiation intensity at this wavelength shows a good linear relationship with the junction temperature. The LED junction temperature was then measured based on a formula relating the relative spectral intensity to the junction temperature. The result measured by this radiation intensity method was compared with the forward voltage method and the spectral method. The experimental results reveal that the junction temperature measured by this method differs by no more than 2 °C from that of the forward voltage method. The method maintains the accuracy of the forward voltage method while avoiding the small spectral shift that limits the spectral method. It also has the advantages of being practical, efficient and intuitive, providing non-contact measurement without damage to the lamp structure.

  3. Surface Temperature Measurement Using Hematite Coating

    NASA Technical Reports Server (NTRS)

    Bencic, Timothy J. (Inventor)

    2015-01-01

    Systems and methods that are capable of measuring temperature via spectrophotometry principles are discussed herein. These systems and methods are based on the temperature dependence of the reflection spectrum of hematite. Light reflected from these sensors can be measured to determine a temperature, based on changes in the reflection spectrum discussed herein.

  4. A new experimental method for the determination of the effective orifice area based on the acoustical source term

    NASA Astrophysics Data System (ADS)

    Kadem, L.; Knapp, Y.; Pibarot, P.; Bertrand, E.; Garcia, D.; Durand, L. G.; Rieu, R.

    2005-12-01

    The effective orifice area (EOA) is the most commonly used parameter to assess the severity of aortic valve stenosis as well as the performance of valve substitutes. Particle image velocimetry (PIV) may be used for in vitro estimation of valve EOA. In the present study, we propose a new and simple method based on Howe’s developments of Lighthill’s aero-acoustic theory. This method is based on an acoustical source term (AST) to estimate the EOA from the transvalvular flow velocity measurements obtained by PIV. The EOAs measured by the AST method downstream of three sharp-edged orifices were in excellent agreement with the EOAs predicted from the potential flow theory used as the reference method in this study. Moreover, the AST method was more accurate than other conventional PIV methods based on streamlines, inflexion point or vorticity to predict the theoretical EOAs. The superiority of the AST method is likely due to the nonlinear form of the AST. There was also an excellent agreement between the EOAs measured by the AST method downstream of the three sharp-edged orifices as well as downstream of a bioprosthetic valve with those obtained by the conventional clinical method based on Doppler-echocardiographic measurements of transvalvular velocity. The results of this study suggest that this new simple PIV method provides an accurate estimation of the aortic valve flow EOA. This new method may thus be used as a reference method to estimate the EOA in experimental investigation of the performance of valve substitutes and to validate Doppler-echocardiographic measurements under various physiologic and pathologic flow conditions.

  5. Segmentation quality evaluation using region-based precision and recall measures for remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Xueliang; Feng, Xuezhi; Xiao, Pengfeng; He, Guangjun; Zhu, Liujun

    2015-04-01

    Segmentation of remote sensing images is a critical step in geographic object-based image analysis. Evaluating the performance of segmentation algorithms is essential to identify effective segmentation methods and optimize their parameters. In this study, we propose region-based precision and recall measures and use them to compare two image partitions for the purpose of evaluating segmentation quality. The two measures are calculated based on region overlapping and presented as a point or a curve in a precision-recall space, which can indicate segmentation quality in both geometric and arithmetic respects. Furthermore, the precision and recall measures are combined by using four different methods. We examine and compare the effectiveness of the combined indicators through geometric illustration, in an effort to reveal segmentation quality clearly and capture the trade-off between the two measures. In the experiments, we adopted the multiresolution segmentation (MRS) method for evaluation. The proposed measures are compared with four existing discrepancy measures to further confirm their capabilities. Finally, we suggest using a combination of the region-based precision-recall curve and the F-measure for supervised segmentation evaluation.
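    A minimal sketch of a region-overlap-based precision/recall and the F-measure combination is given below; it is our simplified reading of the idea, not the authors' exact definitions, and the function names are ours.

```python
# Minimal sketch (our simplification, not the authors' exact measures): a
# region-overlap flavoured precision/recall for a segmentation `seg` against a
# reference partition `ref`, plus the F-measure used to combine them.
# Both inputs are integer label images of equal shape.
import numpy as np

def region_precision_recall(seg: np.ndarray, ref: np.ndarray):
    # Precision: how well each segment fits inside a single reference region.
    inter = 0.0
    for s in np.unique(seg):
        _, counts = np.unique(ref[seg == s], return_counts=True)
        inter += counts.max()          # area of the best-matching reference region
    precision = inter / seg.size

    # Recall: how well each reference region is covered by a single segment.
    inter = 0.0
    for r in np.unique(ref):
        _, counts = np.unique(seg[ref == r], return_counts=True)
        inter += counts.max()
    recall = inter / ref.size
    return precision, recall

def f_measure(p: float, r: float, beta: float = 1.0) -> float:
    """Weighted harmonic mean of precision and recall."""
    return (1 + beta**2) * p * r / (beta**2 * p + r)
```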

  6. Neutron Based Non-Destructive Assay (NDA) Measurement Systems for Safeguard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swinhoe, Martyn Thomas

    2017-09-21

    The objectives of this project are to introduce the assay methods for plutonium measurements using the HLNC; introduce the assay method for bulk uranium measurements using the AWCC; and introduce the assay method for fuel assembly measurements using the UNCL.

  7. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

    Temperature is usually treated as a source of fluctuation in near-infrared spectral measurement, and chemometric methods have been extensively studied to correct the effect of temperature variations. However, temperature can also be considered as a constructive parameter that provides detailed chemical information when it is systematically changed during the measurement. Our group has researched the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method was proposed to improve the prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method was proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compared the prediction performance of PLS models based on the random sampling method and the proposed methods. The results of experimental studies showed that the prediction performance was improved by the proposed methods. Therefore, the MTCS and DTCS methods are alternative methods for improving prediction accuracy in near-infrared spectral measurement.

  8. A method for surface topography measurement using a new focus function based on dual-tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Li, Shimiao; Guo, Tong; Yuan, Lin; Chen, Jinping

    2018-01-01

    Surface topography measurement is an important tool widely used in many fields to determine the characteristics and functionality of a part or material. Among existing methods for this purpose, the focus variation method has demonstrated high performance, particularly in large-slope scenarios. However, its performance depends largely on the effectiveness of the focus function. This paper presents a method for surface topography measurement using a new focus measurement function based on the dual-tree complex wavelet transform. Experiments were conducted on simulated defocused images to demonstrate its performance in comparison with traditional approaches; the results showed that the new algorithm has better unimodality and sharpness. The method was also verified by measuring a MEMS micro-resonator structure.

  9. An accuracy measurement method for star trackers based on direct astronomic observation

    PubMed Central

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-01-01

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will eventually determine the satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted taking the precise motions of the Earth into account, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. The measurement environment is close to in-orbit conditions and can satisfy the stringent requirements for high-accuracy star trackers. PMID:26948412

  10. An accuracy measurement method for star trackers based on direct astronomic observation.

    PubMed

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-03-07

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will eventually determine the satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted taking the precise motions of the Earth into account, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. The measurement environment is close to in-orbit conditions and can satisfy the stringent requirements for high-accuracy star trackers.

  11. SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, M; Jiang, S; Lu, W

    Purpose: To propose a hybrid method that combines advantages of the model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone convolution/superposition (CCCS) or the Monte Carlo method, models dose deposition in the patient body accurately; however, due to lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in case of hardware changes. On the contrary, the measurement-based method characterizes the beam properties accurately but lacks the capability of modeling dose deposition in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions was also calculated using the CCCS for the same setup. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator, here a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: (1) calculate D_model using CCCS; (2) calculate D_ΔDRT using ΔDRT; (3) combine them as D = D_model + D_ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and an IMRT plan. The results were compared to doses calculated by the treatment planning system (TPS). The agreement between the hybrid method and the TPS was within 3%, 3 mm for over 98% of the volume for phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple commissioning criteria for an independent dose calculator.
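    The combination step described in the Methods can be illustrated with a short sketch; this is an assumed illustration of D = D_model + D_ΔDRT, not the abstract's implementation, and the depth-dose numbers are placeholders.

```python
# Minimal sketch (assumed, not the abstract's implementation): the two-stage
# hybrid idea. The correction is commissioned from the tabulated difference
# between water-phantom measurements and CCCS predictions, and the final
# independent dose is D = D_model + D_deltaDRT. All arrays are placeholders.
import numpy as np

def commission_correction(measured_water: np.ndarray, cccs_water: np.ndarray) -> np.ndarray:
    """Tabulate measurement-minus-model differences used to drive the ΔDRT step."""
    return measured_water - cccs_water

def hybrid_dose(d_model: np.ndarray, d_delta_drt: np.ndarray) -> np.ndarray:
    """Step (3) of the abstract: combine the model dose and the ray-traced correction."""
    return d_model + d_delta_drt

# Hypothetical depth-dose example (not commissioning data).
depth_cm = np.arange(0, 20, 2.0)
measured = np.exp(-0.05 * depth_cm)        # measured depth dose, arbitrary normalization
cccs = 0.98 * np.exp(-0.05 * depth_cm)     # slightly mis-modelled machine
delta_table = commission_correction(measured, cccs)
print(hybrid_dose(cccs, delta_table))      # reproduces the measurement in this toy case
```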

  12. Validation of space-based polarization measurements by use of a single-scattering approximation, with application to the global ozone monitoring experiment.

    PubMed

    Aben, Ilse; Tanzi, Cristina P; Hartmann, Wouter; Stam, Daphne M; Stammes, Piet

    2003-06-20

    A method is presented for in-flight validation of space-based polarization measurements based on approximation of the direction of polarization of scattered sunlight by the Rayleigh single-scattering value. This approximation is verified by simulations of radiative transfer calculations for various atmospheric conditions. The simulations show locations along an orbit where the scattering geometries are such that the intensities of the parallel and orthogonal polarization components of the light are equal, regardless of the observed atmosphere and surface. The method can be applied to any space-based instrument that measures the polarization of reflected solar light. We successfully applied the method to validate the Global Ozone Monitoring Experiment (GOME) polarization measurements. The error in the GOME's three broadband polarization measurements appears to be approximately 1%.

  13. MEthods of ASsessing blood pressUre: identifying thReshold and target valuEs (MeasureBP): a review & study protocol.

    PubMed

    Blom, Kimberly C; Farina, Sasha; Gomez, Yessica-Haydee; Campbell, Norm R C; Hemmelgarn, Brenda R; Cloutier, Lyne; McKay, Donald W; Dawes, Martin; Tobe, Sheldon W; Bolli, Peter; Gelfer, Mark; McLean, Donna; Bartlett, Gillian; Joseph, Lawrence; Featherstone, Robin; Schiffrin, Ernesto L; Daskalopoulou, Stella S

    2015-04-01

    Despite progress in automated blood pressure measurement (BPM) technology, there is limited research linking hard outcomes to automated office BPM (OBPM) treatment targets and thresholds. Equivalences for automated BPM devices have been estimated from approximations of standardized manual measurements of 140/90 mmHg. Until outcome-driven targets and thresholds become available for automated measurement methods, deriving evidence-based equivalences between automated methods and standardized manual OBPM is the next best solution. The MeasureBP study group was initiated by the Canadian Hypertension Education Program to close this critical knowledge gap. MeasureBP aims to define evidence-based equivalent values between standardized manual OBPM and automated BPM methods by synthesizing available evidence using a systematic review and individual subject-level data meta-analyses. This manuscript provides a review of the literature and the MeasureBP study protocol. These results will lay the evidence-based foundation to resolve uncertainties within blood pressure guidelines, which, in turn, will improve the management of hypertension.

  14. Measurement of Walking Ground Reactions in Real-Life Environments: A Systematic Review of Techniques and Technologies.

    PubMed

    Shahabpoor, Erfan; Pavic, Aleksandar

    2017-09-12

    Monitoring natural human gait in real-life environments is essential in many applications, including quantification of disease progression, monitoring the effects of treatment, and monitoring alteration of performance biomarkers in professional sports. Nevertheless, developing reliable and practical techniques and technologies necessary for continuous real-life monitoring of gait is still an open challenge. A systematic review of English-language articles from scientific databases including Scopus, ScienceDirect, Pubmed, IEEE Xplore, EBSCO and MEDLINE was carried out to analyse the 'accuracy' and 'practicality' of the current techniques and technologies for quantitative measurement of the tri-axial walking ground reactions outside the laboratory environment, and to highlight their strengths and shortcomings. In total, 679 relevant abstracts were identified, 54 full-text papers were included in the review and the quantitative results of 17 papers were used for meta-analysis and comparison. Three classes of methods were reviewed: (1) methods based on measured kinematic data; (2) methods based on measured plantar pressure; and (3) methods based on direct measurement of ground reactions. It was found that all three classes of methods have competitive accuracy levels, with methods based on direct measurement of the ground reactions showing the highest accuracy while being least practical for long-term real-life measurement. On the other hand, methods that estimate ground reactions using measured body kinematics show the highest practicality of the three classes of methods reviewed. Among the most prominent technical and technological challenges are: (1) reducing the size and price of tri-axial load-cells; (2) improving the accuracy of orientation measurement using IMUs; (3) minimizing the number and optimizing the location of required IMUs for kinematic measurement; (4) increasing the durability of pressure insole sensors, and (5) enhancing the robustness and versatility of the ground reaction estimation methods to include pathological gaits and the natural variability of gait in real-life physical environments.

  15. Measurement of Walking Ground Reactions in Real-Life Environments: A Systematic Review of Techniques and Technologies

    PubMed Central

    Shahabpoor, Erfan; Pavic, Aleksandar

    2017-01-01

    Monitoring natural human gait in real-life environments is essential in many applications, including quantification of disease progression, monitoring the effects of treatment, and monitoring alteration of performance biomarkers in professional sports. Nevertheless, developing reliable and practical techniques and technologies necessary for continuous real-life monitoring of gait is still an open challenge. A systematic review of English-language articles from scientific databases including Scopus, ScienceDirect, Pubmed, IEEE Xplore, EBSCO and MEDLINE was carried out to analyse the ‘accuracy’ and ‘practicality’ of the current techniques and technologies for quantitative measurement of the tri-axial walking ground reactions outside the laboratory environment, and to highlight their strengths and shortcomings. In total, 679 relevant abstracts were identified, 54 full-text papers were included in the review and the quantitative results of 17 papers were used for meta-analysis and comparison. Three classes of methods were reviewed: (1) methods based on measured kinematic data; (2) methods based on measured plantar pressure; and (3) methods based on direct measurement of ground reactions. It was found that all three classes of methods have competitive accuracy levels, with methods based on direct measurement of the ground reactions showing the highest accuracy while being least practical for long-term real-life measurement. On the other hand, methods that estimate ground reactions using measured body kinematics show the highest practicality of the three classes of methods reviewed. Among the most prominent technical and technological challenges are: (1) reducing the size and price of tri-axial load-cells; (2) improving the accuracy of orientation measurement using IMUs; (3) minimizing the number and optimizing the location of required IMUs for kinematic measurement; (4) increasing the durability of pressure insole sensors, and (5) enhancing the robustness and versatility of the ground reaction estimation methods to include pathological gaits and the natural variability of gait in real-life physical environments. PMID:28895909

  16. Survey of in-situ and remote sensing methods for soil moisture determination

    NASA Technical Reports Server (NTRS)

    Schmugge, T. J.; Jackson, T. J.; Mckim, H. L.

    1981-01-01

    General methods for determining the moisture content in the surface layers of the soil based on in situ or point measurements, soil water models and remote sensing observations are surveyed. In situ methods described include gravimetric techniques, nuclear techniques based on neutron scattering or gamma-ray attenuation, electromagnetic techniques, tensiometric techniques and hygrometric techniques. Soil water models based on column mass balance treat soil moisture contents as a result of meteorological inputs (precipitation, runoff, subsurface flow) and demands (evaporation, transpiration, percolation). The remote sensing approaches are based on measurements of the diurnal range of surface temperature and the crop canopy temperature in the thermal infrared, measurements of the radar backscattering coefficient in the microwave region, and measurements of microwave emission or brightness temperature. Advantages and disadvantages of the various methods are pointed out, and it is concluded that a successful monitoring system must incorporate all of the approaches considered.

  17. Fiber-optical method of pyrometric measurement of melts temperature

    NASA Astrophysics Data System (ADS)

    Zakharenko, V. A.; Veprikova, Ya R.

    2018-01-01

    Non-contact measurement of the temperature of metal melts remains a scientific problem, related to the need to achieve specified measurement errors under uncertainty in the emissivity (blackness coefficient) of the radiating surfaces. The aim of this work is to substantiate a new measurement method in which the influence of the emissivity is eliminated. The task consisted in calculating the design and material of a special crucible placed in the molten metal, which acts as an emitter in the form of a blackbody (BB). The methods are based on the classical concepts of thermal radiation and on calculations using the Planck function. To solve the problem, the geometry of the crucible was calculated on the basis of the Gouffé method, so that it forms a blackbody emitter when immersed in the melt. The paper describes a pyrometric device based on a fiber-optic pyrometer for temperature measurement of melts, which implements the proposed measurement method using the special crucible. The emitter is formed by the melt in this crucible, and the temperature within it is measured by means of the fiber-optic pyrometer. Based on the results of experimental studies, the radiation coefficient is ε′ > 0.999, which confirms the theoretical and computational justification given in the article.
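    The Planck-function calculation underlying such a pyrometer can be sketched as follows; this is an assumed single-wavelength inversion with unit emissivity, not the authors' device code, and the wavelength and temperature are illustrative.

```python
# Minimal sketch (assumed, not the authors' device code): single-wavelength
# Planck-law inversion used in pyrometry. With the crucible acting as a
# blackbody (effective emissivity ~1), the temperature follows directly from
# the measured spectral radiance at wavelength lam.
import numpy as np

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(lam: float, T: float) -> float:
    """Spectral radiance of a blackbody, W / (m^2 * sr * m)."""
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

def temperature_from_radiance(lam: float, L: float) -> float:
    """Invert Planck's law for T, assuming unit emissivity."""
    return H * C / (lam * KB * np.log1p(2 * H * C**2 / (lam**5 * L)))

# Round-trip check: the radiance of an 1800 K blackbody at 900 nm maps back to 1800 K.
lam = 900e-9
print(temperature_from_radiance(lam, planck_radiance(lam, 1800.0)))
```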

  18. Improving Classification of Protein Interaction Articles Using Context Similarity-Based Feature Selection.

    PubMed

    Chen, Yifei; Sun, Yuxing; Han, Bing-Qing

    2015-01-01

    Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used to reduce the dimensionality of the features and speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measures of document frequency and term frequency. One potential drawback of these methods is that they treat features separately. Hence, we first design a similarity measure between context information that takes word co-occurrences and phrase chunks around the features into account. We then introduce the similarity of the context information into the importance measure of the features, in place of document and term frequency, and thus propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.

  19. Real-Time GNSS-Based Attitude Determination in the Measurement Domain

    PubMed Central

    Zhao, Lin; Li, Na; Li, Liang; Zhang, Yi; Cheng, Chun

    2017-01-01

    A multi-antenna GNSS receiver is capable of providing a high-precision and drift-free attitude solution. Carrier phase measurements need to be utilized to achieve high-precision attitude. The traditional attitude determination methods in the measurement domain and the position domain resolve the attitude and the ambiguity sequentially, and the redundant measurements from multiple baselines have not been fully utilized to enhance the reliability of attitude determination. A multi-baseline attitude determination method in the measurement domain is therefore proposed to estimate the attitude parameters and the ambiguity simultaneously. Meanwhile, the redundancy of the attitude solution is increased so that the reliability of ambiguity resolution and attitude determination can be enhanced. Moreover, to further improve the reliability of attitude determination, we propose a partial ambiguity resolution method based on the proposed attitude determination model. Static and kinematic experiments were conducted to verify the performance of the proposed method. Compared with the traditional attitude determination methods, the static experimental results show that the proposed method improves the accuracy by at least 0.03° and enhances the continuity by up to 18%. The kinematic results show that the proposed method achieves an optimal balance between accuracy and reliability. PMID:28165434
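    Once a baseline vector has been resolved, converting it to heading and pitch is straightforward; the sketch below shows only that standard single-baseline conversion (not the multi-baseline measurement-domain estimator proposed in the paper), with an illustrative ENU baseline.

```python
# Minimal sketch (standard single-baseline conversion, not the paper's method):
# once a baseline vector between two antennas has been resolved in local ENU
# coordinates, heading and pitch follow directly from its components.
import math

def heading_pitch_from_baseline(e: float, n: float, u: float):
    """Return (heading, pitch) in degrees from an ENU baseline vector."""
    heading = math.degrees(math.atan2(e, n)) % 360.0     # clockwise from north
    pitch = math.degrees(math.atan2(u, math.hypot(e, n)))
    return heading, pitch

# Example: a 2 m baseline pointing roughly north-east and slightly upward.
print(heading_pitch_from_baseline(1.40, 1.42, 0.05))
```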

  20. Modeling Complex Phenomena Using Multiscale Time Sequences

    DTIC Science & Technology

    2009-08-24

    …different scales and how these scales relate to each other. This can be done by combining a set of statistical fractal measures based on Hurst and Hölder exponents, auto-regressive methods, and Fourier and wavelet decomposition methods. The applications for this technology…
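    As a concrete example of one of the fractal measures named in the snippet, the sketch below estimates the Hurst exponent by rescaled-range (R/S) analysis; it is our illustration, not code from the report.

```python
# Minimal sketch (our illustration, not from the report): estimating the Hurst
# exponent of a time series by rescaled-range (R/S) analysis.
import numpy as np

def hurst_rs(x: np.ndarray, min_chunk: int = 8) -> float:
    """Estimate the Hurst exponent via the slope of log(R/S) versus log(n)."""
    x = np.asarray(x, dtype=float)
    sizes, rs_vals = [], []
    n = min_chunk
    while n <= len(x) // 2:
        rs = []
        for start in range(0, len(x) - n + 1, n):
            chunk = x[start:start + n]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviation from the mean
            r = dev.max() - dev.min()               # range
            s = chunk.std(ddof=1)                   # standard deviation
            if s > 0:
                rs.append(r / s)
        if rs:
            sizes.append(n)
            rs_vals.append(np.mean(rs))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

# White noise should give an estimate close to H = 0.5.
print(hurst_rs(np.random.default_rng(0).standard_normal(4096)))
```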

  1. A comparison study of size-specific dose estimate calculation methods.

    PubMed

    Parikh, Roshni A; Wien, Michael A; Novak, Ronald D; Jordan, David W; Klahr, Paul; Soriano, Stephanie; Ciancibello, Leslie; Berlin, Sheila C

    2018-01-01

    The size-specific dose estimate (SSDE) has emerged as an improved metric for use by medical physicists and radiologists for estimating individual patient dose. Several methods of calculating SSDE have been described, ranging from patient thickness or attenuation-based (automated and manual) measurements to weight-based techniques. The purpose of this study was to compare the accuracy of thickness vs. weight measurement of body size for the calculation of the SSDE in pediatric body CT. We retrospectively identified 109 pediatric body CT examinations for SSDE calculation. We examined two automated methods measuring a series of level-specific diameters of the patient's body: method A used the effective diameter and method B used the water-equivalent diameter. Two manual methods measured patient diameter at two predetermined levels: the superior endplate of L2, where body width is typically thinnest, and the superior femoral head or iliac crest (for scans that did not include the pelvis), where body width is typically thickest; method C averaged lateral measurements at these two levels from the CT projection scan, and method D averaged lateral and anteroposterior measurements at the same two levels from the axial CT images. Finally, we used body weight to characterize patient size, method E, and compared this with the various other measurement methods. Methods were compared across the entire population as well as by subgroup based on body width. Concordance correlation (ρc) between each of the SSDE calculation methods (methods A-E) was greater than 0.92 across the entire population, although the range was wider when analyzed by subgroup (0.42-0.99). When we compared each SSDE measurement method with CTDIvol, there was poor correlation, ρc < 0.77, with percentage differences between 20.8% and 51.0%. Automated computer algorithms are accurate and efficient in the calculation of SSDE. Manual methods based on patient thickness provide acceptable dose estimates for pediatric patients <30 cm in body width. Body weight provides a quick and practical method to identify conversion factors that can be used to estimate SSDE with reasonable accuracy in pediatric patients with body width ≥20 cm.
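    A simple SSDE computation of the kind compared in this study can be sketched as follows; the effective-diameter formula and the exponential conversion-factor fit are the commonly cited AAPM Report 204 values for the 32 cm phantom and should be read as assumptions here, not as parameters taken from the abstract.

```python
# Minimal sketch (not the paper's algorithms): size-specific dose estimate from
# CTDIvol and an effective-diameter-based conversion factor. The fit constants
# below are the commonly cited AAPM Report 204 values for the 32 cm body
# phantom and are an assumption, not data from the abstract.
import math

def effective_diameter_cm(ap_cm: float, lat_cm: float) -> float:
    """Effective diameter from anteroposterior and lateral dimensions."""
    return math.sqrt(ap_cm * lat_cm)

def ssde(ctdi_vol_mgy: float, eff_diam_cm: float) -> float:
    a, b = 3.704369, 0.03671937          # assumed AAPM 204 fit, 32 cm phantom
    f = a * math.exp(-b * eff_diam_cm)   # size-dependent conversion factor
    return f * ctdi_vol_mgy

# Example: a pediatric patient 15 cm AP x 20 cm lateral scanned at 4 mGy CTDIvol.
d = effective_diameter_cm(15, 20)
print(f"effective diameter {d:.1f} cm, SSDE {ssde(4.0, d):.2f} mGy")
```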

  2. Measurement of radon diffusion in polyethylene based on alpha detection

    NASA Astrophysics Data System (ADS)

    Rau, Wolfgang

    2012-02-01

    Radon diffusion in different materials has been measured in the past. Usually the diffusion measurements are based on a direct determination of the amount of radon that diffuses through a thin layer of material. Here we present a method based on the measurement of the radon daughter products which are deposited inside the material. Looking at the decay of 210Po allows us to directly measure the exponential diffusion profile characterized by the diffusion length. In addition we can determine the solubility of radon in PE. We also describe a second method to determine the diffusion constant based on the short-lived radon daughter products 218Po and 214Po, using the identical experimental setup. Measurements for regular polyethylene (PE) and High Molecular Weight Polyethylene (HMWPE) yielded diffusion lengths of (1.3±0.3) mm and (0.8±0.2) mm and solubilities of 0.5±0.1 and 0.7±0.2, respectively, for the first method; the diffusion lengths extracted from the second method are noticeably larger which may be caused by different experimental conditions during diffusion.
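    Extracting a diffusion length from a depth profile can be illustrated with a short fit; this is our sketch with hypothetical activity values, not the paper's analysis.

```python
# Minimal sketch (our illustration, not the paper's analysis): extracting a
# diffusion length by fitting the exponential profile A(x) = A0 * exp(-x / L)
# to depth-resolved 210Po activity data (hypothetical values below).
import numpy as np
from scipy.optimize import curve_fit

def profile(x, a0, L):
    return a0 * np.exp(-x / L)

depth_mm = np.array([0.2, 0.5, 1.0, 1.5, 2.0, 3.0])
activity = np.array([95.0, 75.0, 44.0, 30.0, 20.0, 9.0])   # arbitrary units

(a0, L), cov = curve_fit(profile, depth_mm, activity, p0=(100.0, 1.0))
L_err = np.sqrt(cov[1, 1])
print(f"diffusion length L = {L:.2f} +/- {L_err:.2f} mm")
```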

  3. Evaluation of new flux attribution methods for mapping N2O emissions at the landscape scale from EC measurements

    NASA Astrophysics Data System (ADS)

    Grossel, Agnes; Bureau, Jordan; Loubet, Benjamin; Laville, Patricia; Massad, Raia; Haas, Edwin; Butterbach-Bahl, Klaus; Guimbaud, Christophe; Hénault, Catherine

    2017-04-01

    The objective of this study was to develop and evaluate an attribution method based on a combination of eddy covariance (EC) and chamber measurements to map N2O emissions over a 3 km² area of croplands and forests in France. During two months of spring 2015, N2O fluxes were measured (i) by EC at 15 m height and (ii) at discrete points with a mobile chamber at 16 locations within 1 km of the EC mast. The attribution method was based on coupling the EC measurements, footprint information (Loubet et al., 2010) and emission ratios for the different crops and fertilizations calculated from the chamber measurements. The results were evaluated against an independent flux dataset measured by automatic chambers in a wheat field within the area. At the landscape scale, the method estimated a total emission of 114-271 kg N-N2O during the campaign. This new approach allows continuous estimation of N2O emissions and better accounts for the spatial variability of N2O emission at the landscape scale.

  4. Calibration-free absolute frequency response measurement of directly modulated lasers based on additional modulation.

    PubMed

    Zhang, Shangjian; Zou, Xinhai; Wang, Heng; Zhang, Yali; Lu, Rongguo; Liu, Yong

    2015-10-15

    A calibration-free electrical method is proposed for measuring the absolute frequency response of directly modulated semiconductor lasers based on additional modulation. The method achieves electrical-domain measurement of the modulation index of directly modulated lasers without the need to correct for the responsivity fluctuation of the photodetection. Moreover, it doubles the measuring frequency range by setting a specific frequency relationship between the direct and additional modulation. Both the absolute and relative frequency responses of semiconductor lasers are experimentally obtained from the electrical spectrum of the twice-modulated optical signal, and the measured results are compared with those obtained using conventional methods to check consistency. The proposed method provides calibration-free and accurate measurement for high-speed semiconductor lasers with high-resolution electrical spectrum analysis.

  5. LCR circuit: new simple methods for measuring the equivalent series resistance of a capacitor and inductance of a coil

    NASA Astrophysics Data System (ADS)

    Ivković, Saša S.; Marković, Marija Z.; Ivković, Dragica Ž.; Cvetanović, Nikola

    2017-09-01

    Equivalent series resistance (ESR) is a measure of the total energy loss in a capacitor. In this paper, a simple method for measuring the ESR of ceramic capacitors based on the analysis of the oscillations of an LCR circuit is proposed. It is shown that at frequencies below 3300 Hz, the ESR is directly proportional to the period of the oscillations. Based on the determined dependence of the ESR on the period, a method for measuring coil inductance is devised and tested. All measurements were performed using the standard equipment found in student laboratories, which makes both methods very suitable for implementation at high school and university levels.

  6. Comparison of usual and alternative methods to measure height in mechanically ventilated patients: potential impact on protective ventilation.

    PubMed

    Bojmehrani, Azadeh; Bergeron-Duchesne, Maude; Bouchard, Carmelle; Simard, Serge; Bouchard, Pierre-Alexandre; Vanderschuren, Abel; L'Her, Erwan; Lellouche, François

    2014-07-01

    Protective ventilation implementation requires the calculation of predicted body weight (PBW), determined by a formula based on gender and height. Consequently, height inaccuracy may be a limiting factor to correctly set tidal volumes. The objective of this study was to evaluate the accuracy of different methods in measuring heights in mechanically ventilated patients. Before cardiac surgery, actual height was measured with a height gauge while subjects were standing upright (reference method); the height was also estimated by alternative methods based on lower leg and forearm measurements. After cardiac surgery, upon ICU admission, a subject's height was visually estimated by a clinician and then measured with a tape measure while the subject was supine and undergoing mechanical ventilation. One hundred subjects (75 men, 25 women) were prospectively included. Mean PBW was 61.0 ± 9.7 kg, and mean actual weight was 30.3% higher. In comparison with the reference method, estimating the height visually and using the tape measure were less accurate than both lower leg and forearm measurements. Errors above 10% in calculating the PBW were present in 25 and 40 subjects when the tape measure or visual estimation of height was used in the formula, respectively. With lower leg and forearm measurements, 15 subjects had errors above 10% (P < .001). Our results demonstrate that significant variability exists between the different methods used to measure height in bedridden patients on mechanical ventilation. Alternative methods based on lower leg and forearm measurements are potentially interesting solutions to facilitate the accurate application of protective ventilation. Copyright © 2014 by Daedalus Enterprises.
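    The PBW calculation that motivates accurate height measurement can be sketched as follows; the ARDSNet/Devine-type constants and the 6 mL/kg target are common conventions assumed here, since the abstract does not state the formula.

```python
# Minimal sketch (assumed formulas, not given in the abstract): an ARDSNet /
# Devine-type predicted body weight (PBW) used to set protective tidal volumes.
# Treat the constants and the 6 mL/kg target as illustrative assumptions.

def predicted_body_weight_kg(height_cm: float, sex: str) -> float:
    base = 50.0 if sex == "male" else 45.5
    return base + 0.91 * (height_cm - 152.4)

def protective_tidal_volume_ml(height_cm: float, sex: str, ml_per_kg: float = 6.0) -> float:
    return ml_per_kg * predicted_body_weight_kg(height_cm, sex)

# A 10% error in measured height propagates directly into the tidal volume setting:
for h in (170.0, 170.0 * 1.10):
    print(f"height {h:.0f} cm -> VT {protective_tidal_volume_ml(h, 'male'):.0f} mL")
```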

  7. Repeatability, Reproducibility, Separative Power and Subjectivity of Different Fish Morphometric Analysis Methods

    PubMed Central

    Takács, Péter

    2016-01-01

    We compared the repeatability, reproducibility (intra- and inter-measurer similarity), separative power and subjectivity (measurer effect on results) of four morphometric methods frequently used in ichthyological research, the “traditional” caliper-based (TRA) and truss-network (TRU) distance methods and two geometric methods that compare landmark coordinates on the body (GMB) and scales (GMS). In each case, measurements were performed three times by three measurers on the same specimens of three common cyprinid species (roach Rutilus rutilus (Linnaeus, 1758), bleak Alburnus alburnus (Linnaeus, 1758) and Prussian carp Carassius gibelio (Bloch, 1782)) collected from three closely-situated sites in the Lake Balaton catchment (Hungary) in 2014. TRA measurements were made on conserved specimens using a digital caliper, while TRU, GMB and GMS measurements were undertaken on digital images of the bodies and scales. In most cases, intra-measurer repeatability was similar. While all four methods were able to differentiate the source populations, significant differences were observed in their repeatability, reproducibility and subjectivity. GMB displayed the highest overall repeatability and reproducibility and was least burdened by measurer effect. While GMS showed similar repeatability to GMB when fish scales had a characteristic shape, it showed significantly lower reproducibility (compared with its repeatability) for each species than the other methods. TRU showed similar repeatability to GMS. TRA was the least applicable method as measurements were obtained from the fish itself, resulting in poor repeatability and reproducibility. Although all four methods showed some degree of subjectivity, TRA was the only method where population-level detachment was entirely overwritten by measurer effect. Based on these results, we recommend a) avoidance of aggregating different measurers’ datasets when using TRA and GMS methods; and b) use of image-based methods for morphometric surveys. Automation of the morphometric workflow would also reduce any measurer effect and eliminate measurement and data-input errors. PMID:27327896

  8. Method and apparatus of a portable imaging-based measurement with self calibration

    DOEpatents

    Chang, Tzyy-Shuh [Ann Arbor, MI; Huang, Hsun-Hau [Ann Arbor, MI

    2012-07-31

    A portable imaging-based measurement device is developed to perform 2D projection-based measurements on an object that is difficult or dangerous to access. The device is equipped with a self-calibration capability and built-in operating procedures to ensure proper imaging-based measurement.

  9. The robustness and accuracy of in vivo linear wear measurements for knee prostheses based on model-based RSA.

    PubMed

    van Ijsseldijk, E A; Valstar, E R; Stoel, B C; Nelissen, R G H H; Reiber, J H C; Kaptein, B L

    2011-10-13

    Accurate in vivo measurement methods for wear in total knee arthroplasty are required for timely detection of excessive wear and to assess new implant designs. Component separation measurements based on model-based Roentgen stereophotogrammetric analysis (RSA), in which 3-dimensional reconstruction methods are used, have shown promising results, yet the robustness of these measurements is unknown. In this study, the accuracy and robustness of this measurement for clinical usage was assessed. The validation experiments were conducted in an RSA setup with a phantom setup of a knee in a vertical orientation. 72 RSA images were created using different variables for knee orientation, two prosthesis types (fixed-bearing Duracon knee and fixed-bearing Triathlon knee) and accuracies of the reconstruction models. The measurement error was determined for absolute and relative measurements, and the effects of knee positioning and true separation distance were assessed. The measurement method overestimated the separation distance by 0.1 mm on average. The precision of the method was 0.10 mm (2*SD) for the Duracon prosthesis and 0.20 mm for the Triathlon prosthesis. A slight difference in error was found between measurements with 0° and 10° anterior tilt (difference = 0.08 mm, p = 0.04). An accuracy of 0.1 mm and precision of 0.2 mm can be achieved for linear wear measurements based on model-based RSA, which is more than adequate for clinical applications. The measurement is robust in clinical settings. Although anterior tilt seems to influence the measurement, the size of this influence is small and clinically irrelevant. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. MR diffusion-weighted imaging-based subcutaneous tumour volumetry in a xenografted nude mouse model using 3D Slicer: an accurate and repeatable method

    PubMed Central

    Ma, Zelan; Chen, Xin; Huang, Yanqi; He, Lan; Liang, Cuishan; Liang, Changhong; Liu, Zaiyi

    2015-01-01

    Accurate and repeatable measurement of the gross tumour volume (GTV) of subcutaneous xenografts is crucial in the evaluation of anti-tumour therapy. Formula-based and image-based manual segmentation methods are commonly used for GTV measurement but are hindered by low accuracy and reproducibility. 3D Slicer is open-source software that provides semiautomatic segmentation for GTV measurements. In our study, subcutaneous GTVs from nude mouse xenografts were measured by semiautomatic segmentation with 3D Slicer based on morphological magnetic resonance imaging (mMRI) or diffusion-weighted imaging (DWI) (b = 0, 20, 800 s/mm²). These GTVs were then compared with those obtained via the formula and image-based manual segmentation methods with ITK software, using the true tumour volume as the standard reference. The effects of tumour size and shape on GTV measurements were also investigated. Our results showed that, when compared with the true tumour volume, segmentation for DWI (P = 0.060–0.671) resulted in better accuracy than that of mMRI (P < 0.001) and the formula method (P < 0.001). Furthermore, semiautomatic segmentation for DWI (intraclass correlation coefficient, ICC = 0.9999) resulted in higher reliability than manual segmentation (ICC = 0.9996–0.9998). Tumour size and shape had no effect on GTV measurement across all methods. Therefore, DWI-based semiautomatic segmentation, which is accurate and reproducible and also provides biological information, is the optimal GTV measurement method in the assessment of anti-tumour treatments. PMID:26489359

  11. High-resolution frequency measurement method with a wide-frequency range based on a quantized phase step law.

    PubMed

    Du, Baoqiang; Dong, Shaofeng; Wang, Yanfeng; Guo, Shuting; Cao, Lingzhi; Zhou, Wei; Zuo, Yandi; Liu, Dan

    2013-11-01

    A wide-frequency-range, high-resolution frequency measurement method based on the quantized phase step law is presented in this paper. Utilizing the variation law of the phase differences, direct different-frequency phase processing, and the phase group synchronization phenomenon, and combining an A/D converter with the adaptive phase-shifting principle, a counter gate is established at the phase coincidences occurring at one-group intervals, which eliminates the ±1 counting error of the traditional frequency measurement method. More importantly, direct phase comparison, measurement, and control between arbitrary periodic signals are realized without frequency normalization. Experimental results show that sub-picosecond resolution can easily be obtained in frequency measurement, frequency standard comparison, and phase-locked control based on the phase quantization processing technique. The method may be widely used in navigation and positioning, space techniques, communication, radar, astronomy, atomic frequency standards, and other high-tech fields.

  12. Research on distributed optical fiber sensing data processing method based on LabVIEW

    NASA Astrophysics Data System (ADS)

    Li, Zhonghu; Yang, Meifang; Wang, Luling; Wang, Jinming; Yan, Junhong; Zuo, Jing

    2018-01-01

    The pipeline leak detection and leak location problems have received extensive attention in industry. In this paper, a distributed optical fiber sensing system is designed for a heat supply pipeline, and the data processing method for distributed optical fiber sensing based on LabVIEW is studied in detail. The hardware system includes the laser, sensing optical fiber, wavelength division multiplexer, photoelectric detector, data acquisition card and computer, etc. The software system is developed using LabVIEW and adopts a wavelet denoising method to process the temperature information, which improves the SNR. By extracting characteristic values from the fiber temperature information, the system realizes temperature measurement, leak location, and measurement signal storage and query. Compared with the traditional negative pressure wave method or acoustic signal method, the distributed optical fiber temperature measurement system can measure many temperatures in a single measurement and locate the leak point accurately. It has broad application prospects.
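    The wavelet denoising step described above can be sketched in a few lines; this is an assumed Python stand-in for the LabVIEW processing, with a hypothetical temperature trace.

```python
# Minimal sketch (assumed, not the LabVIEW code described): soft-threshold
# wavelet denoising of a distributed temperature trace, the kind of step the
# abstract describes for improving SNR before leak detection and location.
import numpy as np
import pywt

def wavelet_denoise(signal: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Hypothetical trace: a warm anomaly (leak signature) buried in sensor noise.
x = np.linspace(0, 1000, 2000)                    # position along the fiber, m
temp = 20 + 3 * np.exp(-((x - 600) / 5) ** 2)     # leak-induced hot spot
noisy = temp + np.random.default_rng(1).normal(0, 0.5, x.size)
clean = wavelet_denoise(noisy)
print("leak located near x =", x[np.argmax(clean)], "m")
```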

  13. Blind system identification of two-thermocouple sensor based on cross-relation method.

    PubMed

    Li, Yanfeng; Zhang, Zhijie; Hao, Xiaojian

    2018-03-01

    In dynamic temperature measurement, the dynamic characteristics of the sensor affect the accuracy of the measurement results. Thermocouples are widely used for temperature measurement in harsh conditions due to their low cost, robustness, and reliability, but because of their thermal inertia, a dynamic error arises in dynamic temperature measurement. In order to eliminate this dynamic error, a two-thermocouple sensor was used in this paper to measure dynamic gas temperature in a constant-velocity flow environment. Blind system identification of the two-thermocouple sensor based on a cross-relation method was carried out. A particle swarm optimization algorithm was used to estimate the time constants of the two thermocouples and compared with a grid-based search method. The method was validated on experimental equipment built around a high-temperature furnace, and the input dynamic temperature was reconstructed using the output data of the thermocouple with the smaller time constant.
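    The cross-relation idea can be illustrated with a small simulation; the sketch below is our toy version using first-order thermocouple models and a grid search, not the authors' code.

```python
# Minimal sketch (our illustration of the cross-relation idea, not the authors'
# code): two first-order thermocouple models with time constants tau1, tau2 see
# the same gas temperature, so filtering y1 with the model of sensor 2 should
# equal filtering y2 with the model of sensor 1. A grid search over (tau1, tau2)
# minimizes that mismatch; the abstract uses PSO and grid search similarly.
import numpy as np
from scipy.signal import lfilter

def first_order(y, tau, dt):
    """Apply a discrete first-order lag (time constant tau) to signal y."""
    a = dt / (tau + dt)
    return lfilter([a], [1, -(1 - a)], y)

def cross_relation_cost(y1, y2, tau1, tau2, dt):
    return np.sum((first_order(y1, tau2, dt) - first_order(y2, tau1, dt)) ** 2)

# Synthetic demo: true time constants 0.05 s and 0.15 s, step-like gas temperature.
dt, n = 1e-3, 4000
t_gas = 300 + 500 * (np.arange(n) * dt > 0.5)
y1, y2 = first_order(t_gas, 0.05, dt), first_order(t_gas, 0.15, dt)

taus = np.linspace(0.01, 0.3, 59)
best = min((cross_relation_cost(y1, y2, a, b, dt), a, b)
           for a in taus for b in taus if a < b)
print("estimated time constants:", round(best[1], 3), round(best[2], 3))
```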

  14. Blind system identification of two-thermocouple sensor based on cross-relation method

    NASA Astrophysics Data System (ADS)

    Li, Yanfeng; Zhang, Zhijie; Hao, Xiaojian

    2018-03-01

    In dynamic temperature measurement, the dynamic characteristics of the sensor affect the accuracy of the measurement results. Thermocouples are widely used for temperature measurement in harsh conditions due to their low cost, robustness, and reliability, but because of their thermal inertia, a dynamic error arises in dynamic temperature measurement. In order to eliminate this dynamic error, a two-thermocouple sensor was used in this paper to measure dynamic gas temperature in a constant-velocity flow environment. Blind system identification of the two-thermocouple sensor based on a cross-relation method was carried out. A particle swarm optimization algorithm was used to estimate the time constants of the two thermocouples and compared with a grid-based search method. The method was validated on experimental equipment built around a high-temperature furnace, and the input dynamic temperature was reconstructed using the output data of the thermocouple with the smaller time constant.

  15. Parallelism measurement for base plate of standard artifact with multiple tactile approaches

    NASA Astrophysics Data System (ADS)

    Ye, Xiuling; Zhao, Yan; Wang, Yiwen; Wang, Zhong; Fu, Luhua; Liu, Changjie

    2018-01-01

    Nowadays, as workpieces become more precise and more specialized, their structures become more sophisticated and the accuracy requirements for artifacts increase, which puts forward higher requirements for measurement accuracy and measurement methods. As an important means of obtaining the dimensions of workpieces, the coordinate measuring machine (CMM) has been widely used in many industries. During the calibration of a self-developed CMM, while studying a self-made high-precision standard artifact, it was found that the parallelism of the base plate used for fixing the standard artifact is an important factor affecting the measurement accuracy. To measure the parallelism of the base plate, three tactile measurement methods are employed, using an existing high-precision CMM, gauge blocks, a dial gauge and a marble platform, and their results are compared. The experiments show that the final accuracy of all three methods reaches the micron level and meets the measurement requirements. Moreover, the three approaches suit different measurement conditions, which provides a basis for rapid and high-precision measurement under different equipment conditions.

  16. An unsupervised method for estimating the global horizontal irradiance from photovoltaic power measurements

    NASA Astrophysics Data System (ADS)

    Nespoli, Lorenzo; Medici, Vasco

    2017-12-01

    In this paper, we present a method to determine the global horizontal irradiance (GHI) from the power measurements of one or more PV systems, located in the same neighborhood. The method is completely unsupervised and is based on a physical model of a PV plant. The precise assessment of solar irradiance is pivotal for the forecast of the electric power generated by photovoltaic (PV) plants. However, on-ground measurements are expensive and are generally not performed for small and medium-sized PV plants. Satellite-based services represent a valid alternative to on site measurements, but their space-time resolution is limited. Results from two case studies located in Switzerland are presented. The performance of the proposed method at assessing GHI is compared with that of free and commercial satellite services. Our results show that the presented method is generally better than satellite-based services, especially at high temporal resolutions.
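    Inverting a simple PV performance model gives the flavour of how power maps back to irradiance; the sketch below is an assumed physical model with illustrative constants, not the unsupervised method of the paper.

```python
# Minimal sketch (assumed physical model, not the authors' unsupervised method):
# inverting a simple PV performance model to back out irradiance from measured
# AC power. Plane-of-array vs. horizontal transposition is ignored and the
# constants are illustrative assumptions.

def ghi_from_pv_power(p_ac_kw: float,
                      p_rated_kw: float,
                      t_cell_c: float,
                      gamma: float = -0.004,   # power temperature coefficient, 1/K
                      derate: float = 0.90,    # inverter + soiling + wiring losses
                      g_stc: float = 1000.0) -> float:
    """Estimate irradiance (W/m^2) from the AC power of a PV plant."""
    temp_factor = 1 + gamma * (t_cell_c - 25.0)
    return g_stc * p_ac_kw / (p_rated_kw * derate * temp_factor)

# Example: a 100 kWp plant producing 62 kW at 40 °C cell temperature.
print(round(ghi_from_pv_power(62.0, 100.0, 40.0)), "W/m^2")
```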

  17. An AFM-based pit-measuring method for indirect measurements of cell-surface membrane vesicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaojun; Department of Biotechnology, Nanchang University, Nanchang, Jiangxi 330031; Chen, Yuan

    2014-03-28

    Highlights: • Air drying induced the transformation of cell-surface membrane vesicles into pits. • An AFM-based pit-measuring method was developed to measure cell-surface vesicles. • Our method detected at least two populations of cell-surface membrane vesicles. - Abstract: Circulating membrane vesicles, which are shed from many cell types, have multiple functions and have been correlated with many diseases. Although circulating membrane vesicles have been extensively characterized, the status of cell-surface membrane vesicles prior to their release is less understood due to the lack of effective measurement methods. Recently, as a powerful, micro- or nano-scale imaging tool, atomic force microscopy (AFM) has been applied in measuring circulating membrane vesicles. However, it seems very difficult for AFM to directly image/identify and measure cell-bound membrane vesicles due to the similarity of surface morphology between membrane vesicles and cell surfaces. Therefore, until now no AFM studies on cell-surface membrane vesicles have been reported. In this study, we found that air drying can induce the transformation of most cell-surface membrane vesicles into pits that are more readily detectable by AFM. Based on this, we developed an AFM-based pit-measuring method and, for the first time, used AFM to indirectly measure cell-surface membrane vesicles on cultured endothelial cells. Using this approach, we observed and quantitatively measured at least two populations of cell-surface membrane vesicles, a nanoscale population (<500 nm in diameter peaking at ∼250 nm) and a microscale population (from 500 nm to ∼2 μm peaking at ∼0.8 μm), whereas confocal microscopy only detected the microscale population. The AFM-based pit-measuring method is potentially useful for studying cell-surface membrane vesicles and for investigating the mechanisms of membrane vesicle formation/release.

  18. Implications to Postsecondary Faculty of Alternative Calculation Methods of Gender-Based Wage Differentials.

    ERIC Educational Resources Information Center

    Hagedorn, Linda Serra

    1998-01-01

    A study explored two distinct methods of calculating a precise measure of gender-based wage differentials among college faculty. The first estimation considered wage differences using a formula based on human capital; the second included compensation for past discriminatory practices. Both measures were used to predict three specific aspects of…

  19. An investigation of density measurement method for yarn-dyed woven fabrics based on dual-side fusion technique

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Xin, Binjie

    2016-08-01

    Yarn density is always considered as the fundamental structural parameter used for the quality evaluation of woven fabrics. The conventional yarn density measurement method is based on one-side analysis. In this paper, a novel density measurement method is developed for yarn-dyed woven fabrics based on a dual-side fusion technique. Firstly, a lab-used dual-side imaging system is established to acquire both face-side and back-side images of woven fabric and the affine transform is used for the alignment and fusion of the dual-side images. Then, the color images of the woven fabrics are transferred from the RGB to the CIE-Lab color space, and the intensity information of the image extracted from the L component is used for texture fusion and analysis. Subsequently, three image fusion methods are developed and utilized to merge the dual-side images: the weighted average method, wavelet transform method and Laplacian pyramid blending method. The fusion efficacy of each method is evaluated by three evaluation indicators and the best of them is selected to do the reconstruction of the complete fabric texture. Finally, the yarn density of the fused image is measured based on the fast Fourier transform, and the yarn alignment image could be reconstructed using the inverse fast Fourier transform. Our experimental results show that the accuracy of density measurement by using the proposed method is close to 99.44% compared with the traditional method and the robustness of this new proposed method is better than that of conventional analysis methods.
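
    The final density-measurement step lends itself to a short sketch: after fusion, the fabric image is projected to a one-dimensional brightness profile and the dominant spatial frequency gives the yarn count. The function below assumes a grayscale image already aligned with the yarn directions and a known optical resolution, both of which are illustrative assumptions rather than details of the authors' pipeline.

```python
import numpy as np

def yarn_density(gray_img, pixels_per_mm, axis=0):
    """Estimate yarns per cm from a fused grayscale fabric image by finding the
    dominant spatial frequency of its 1-D brightness profile."""
    profile = gray_img.mean(axis=axis)                    # collapse to a 1-D profile
    profile = profile - profile.mean()                    # remove the DC term
    spectrum = np.abs(np.fft.rfft(profile))
    freqs = np.fft.rfftfreq(profile.size, d=1.0 / pixels_per_mm)  # cycles per mm
    k = int(np.argmax(spectrum[1:])) + 1                  # strongest periodicity
    return freqs[k] * 10.0                                # yarns per cm
```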

  20. Methanogenic activity tests by Infrared Tunable Diode Laser Absorption Spectroscopy.

    PubMed

    Martinez-Cruz, Karla; Sepulveda-Jauregui, Armando; Escobar-Orozco, Nayeli; Thalasso, Frederic

    2012-10-01

    Methanogenic activity (MA) tests are commonly carried out to estimate the capability of anaerobic biomass to treat effluents, to evaluate anaerobic activity in bioreactors or natural ecosystems, or to quantify inhibitory effects on methanogenic activity. These activity tests are usually based on measuring the volume of biogas produced, by volumetric, pressure-increase or gas chromatography (GC) methods. In this study, we present an alternative method for non-invasive measurement of the methane produced during activity tests in closed vials, based on Infrared Tunable Diode Laser Absorption Spectroscopy (MA-TDLAS). This new method was tested during model acetoclastic and hydrogenotrophic methanogenic activity tests and was compared to a more traditional method based on gas chromatography. From the results obtained, the CH₄ detection limit of the method was estimated at 60 ppm and the minimum measurable methane production rate at 1.09 × 10⁻³ mg l⁻¹ h⁻¹, which is below the CH₄ production rates usually reported in both anaerobic reactors and natural ecosystems. In addition to its sensitivity, the method has several potential advantages over more traditional methods, among them short measurement times that allow a large number of MA test vials to be measured, non-invasive measurements that avoid leakage or external interference, and a cost similar to GC-based methods. It is concluded that MA-TDLAS is a promising method that could be of interest not only in the field of anaerobic digestion but also in the field of environmental ecology, where CH₄ production rates are usually very low. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Talker Localization Based on Interference between Transmitted and Reflected Audible Sound

    NASA Astrophysics Data System (ADS)

    Nakayama, Masato; Nakasako, Noboru; Shinohara, Toshihiro; Uebo, Tetsuji

    In many engineering fields, the distance to a target is very important. General distance measurement methods use the time delay between transmitted and reflected waves, but short distances are difficult to estimate this way. On the other hand, methods using phase interference to measure short distances are well known in the field of microwave radar. We have therefore proposed a distance estimation method based on interference between transmitted and reflected audible sound, which can measure the distance between a microphone and a target with one microphone and one loudspeaker. In this paper, we propose a talker localization method based on distance estimation using phase interference. We extend the distance estimation method using phase interference to two microphones (a microphone array) in order to estimate the talker position. The proposed method estimates the talker position by measuring the distance and direction between the target and the microphone array. In addition, the talker's speech is regarded as noise in the proposed method. Therefore, we also propose combining the proposed method with the CSP (Cross-power Spectrum Phase analysis) method, which is one of the DOA (Direction Of Arrival) estimation methods. We evaluated the performance of talker localization in real environments. The experimental results show the effectiveness of the proposed method.
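
    A minimal sketch of the CSP step mentioned above (essentially GCC-PHAT): the phase of the cross-power spectrum of the two microphone signals is inverse-transformed and the peak gives the time difference of arrival. The signal lengths, sampling rate and two-microphone far-field geometry are assumptions of the sketch, not details taken from the paper.

```python
import numpy as np

def csp_tdoa(x1, x2, fs):
    """Time difference of arrival between two microphone signals via the CSP
    (cross-power spectrum phase, i.e. GCC-PHAT) method.  A positive result
    means the wavefront reaches microphone 1 later than microphone 2."""
    n = len(x1) + len(x2)                          # zero-pad to avoid wrap-around
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X1 * np.conj(X2)
    csp = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)
    csp = np.concatenate((csp[-(len(x2) - 1):], csp[:len(x1)]))   # negative lags first
    lag = int(np.argmax(np.abs(csp))) - (len(x2) - 1)
    return lag / fs
```

    For a two-microphone array with spacing d and a far-field source, the corresponding direction of arrival follows from θ = arcsin(c·τ/d) with c ≈ 343 m/s.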

  2. Possible incorporation of petroleum-based carbons in biochemicals produced by bioprocess--biomass carbon ratio measured by accelerator mass spectrometry.

    PubMed

    Kunioka, Masao

    2010-06-01

    The biomass carbon ratios of biochemicals related to biomass are reviewed, and commercial products from biomass are described. The biomass carbon ratios of biochemical compounds were measured by accelerator mass spectrometry (AMS) based on the ¹⁴C concentration of the carbon in the compounds. The measurement exploits the fact that biomass carbon contains a very low but measurable level of ¹⁴C whereas petroleum-derived carbon contains none, similar to the radiocarbon dating method. It was confirmed that some biochemicals are synthesized from petroleum-based carbons. The AMS method has high accuracy with a small standard deviation and can be applied to plastic products.

  3. Calibration-free in vivo transverse blood flowmetry based on cross correlation of slow-time profiles from photoacoustic microscopy

    PubMed Central

    Zhou, Yong; Liang, Jinyang; Maslov, Konstantin I.; Wang, Lihong V.

    2013-01-01

    We propose a cross-correlation-based method to measure blood flow velocity by using photoacoustic microscopy. Unlike in previous auto-correlation-based methods, the measured flow velocity here is independent of particle size. Thus, an absolute flow velocity can be obtained without calibration. We first measured the flow velocity ex vivo, using defibrinated bovine blood. Then, flow velocities in vessels with different structures in a mouse ear were quantified in vivo. We further measured the flow variation in the same vessel and at a vessel bifurcation. All the experimental results indicate that our method can be used to accurately quantify blood velocity in vivo. PMID:24081077
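
    The core of such a cross-correlation flow measurement can be sketched as follows: two slow-time intensity profiles recorded at points a known distance apart along the vessel are cross-correlated, and the lag of the correlation peak divided into the separation gives the flow speed. The separation, pulse-repetition frequency and the assumption of a single dominant transit delay are illustrative, not the paper's exact processing chain.

```python
import numpy as np

def flow_speed(profile_a, profile_b, separation_m, prf_hz):
    """Flow speed from two slow-time profiles recorded `separation_m` apart
    along the flow; the cross-correlation peak gives the transit delay."""
    a = profile_a - profile_a.mean()
    b = profile_b - profile_b.mean()
    corr = np.correlate(b, a, mode="full")
    delay_samples = int(np.argmax(corr)) - (len(a) - 1)   # positive: B lags A
    if delay_samples == 0:
        return float("inf")                               # delay not resolvable
    return separation_m * prf_hz / delay_samples
```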

  4. Calorimetric Measurement for Internal Conversion Efficiency of Photovoltaic Cells/Modules Based on Electrical Substitution Method

    NASA Astrophysics Data System (ADS)

    Saito, Terubumi; Tatsuta, Muneaki; Abe, Yamato; Takesawa, Minato

    2018-02-01

    We have succeeded in the direct measurement for solar cell/module internal conversion efficiency based on a calorimetric method or electrical substitution method by which the absorbed radiant power is determined by replacing the heat absorbed in the cell/module with the electrical power. The technique is advantageous in that the reflectance and transmittance measurements, which are required in the conventional methods, are not necessary. Also, the internal quantum efficiency can be derived from conversion efficiencies by using the average photon energy. Agreements of the measured data with the values estimated from the nominal values support the validity of this technique.

  5. An orientation measurement method based on Hall-effect sensors for permanent magnet spherical actuators with 3D magnet array.

    PubMed

    Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming

    2014-10-24

    An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method could effectively avoid friction torque and additional inertial moment existing in conventional approaches. Curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. The comparison with conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation and the value of magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy could be improved by the novel 3D magnet array. The study result could be used for real-time motion control of PM spherical actuators.

  6. A hybrid degradation tendency measurement method for mechanical equipment based on moving window and Grey-Markov model

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Zhou, Jianzhong; Zheng, Yang; Liu, Han

    2017-11-01

    Accurate degradation tendency measurement is vital for the secure operation of mechanical equipment. However, the existing techniques and methodologies for degradation measurement still face challenges, such as lack of appropriate degradation indicator, insufficient accuracy, and poor capability to track the data fluctuation. To solve these problems, a hybrid degradation tendency measurement method for mechanical equipment based on a moving window and Grey-Markov model is proposed in this paper. In the proposed method, a 1D normalized degradation index based on multi-feature fusion is designed to assess the extent of degradation. Subsequently, the moving window algorithm is integrated with the Grey-Markov model for the dynamic update of the model. Two key parameters, namely the step size and the number of states, contribute to the adaptive modeling and multi-step prediction. Finally, three types of combination prediction models are established to measure the degradation trend of equipment. The effectiveness of the proposed method is validated with a case study on the health monitoring of turbine engines. Experimental results show that the proposed method has better performance, in terms of both measuring accuracy and data fluctuation tracing, in comparison with other conventional methods.
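
    As a sketch of the grey-model core of such an approach (the Markov state-correction stage and the multi-feature degradation index are omitted), the functions below fit a standard GM(1,1) model on a moving window of a degradation index and forecast the next value; the window length and the use of the latest points are illustrative assumptions.

```python
import numpy as np

def gm11_forecast(window, steps=1):
    """Fit a GM(1,1) grey model on one data window and forecast `steps` ahead."""
    x0 = np.asarray(window, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack((-z1, np.ones_like(z1)))
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # grey parameters
    n = len(x0)
    k = np.arange(n, n + steps)                          # future (0-based) indices
    x1_next = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x1_next - x1_prev                             # back to the original scale

def rolling_forecast(series, w=8):
    """Moving-window use: refit on the latest w points before every prediction."""
    return [gm11_forecast(series[i - w:i], 1)[0] for i in range(w, len(series))]
```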

  7. Measuring signal-to-noise ratio automatically

    NASA Technical Reports Server (NTRS)

    Bergman, L. A.; Johnston, A. R.

    1980-01-01

    An automated method of measuring the signal-to-noise ratio in digital communication channels is more precise and 100 times faster than previously used methods. The method, based on bit-error-rate (BER) measurement, can be used with cable, microwave radio, or optical links.
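
    To illustrate the relationship such a BER-based approach relies on, the snippet below inverts the textbook BER expression for coherent BPSK over an AWGN channel to recover Eb/N0 from a measured bit-error rate; the BPSK/AWGN assumption is ours, not a detail given in the report.

```python
import numpy as np
from scipy.special import erfcinv

def snr_db_from_ber(ber):
    """Eb/N0 (dB) implied by a measured bit-error rate, assuming coherent BPSK
    over an AWGN channel, for which BER = 0.5*erfc(sqrt(Eb/N0))."""
    return 10.0 * np.log10(erfcinv(2.0 * ber) ** 2)

print(round(snr_db_from_ber(1e-6), 1))   # ~10.5 dB
```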

  8. Measuring Symmetry in Children With Unrepaired Cleft Lip: Defining a Standard for the Three-Dimensional Midfacial Reference Plane.

    PubMed

    Wu, Jia; Heike, Carrie; Birgfeld, Craig; Evans, Kelly; Maga, Murat; Morrison, Clinton; Saltzman, Babette; Shapiro, Linda; Tse, Raymond

    2016-11-01

      Quantitative measures of facial form to evaluate treatment outcomes for cleft lip (CL) are currently limited. Computer-based analysis of three-dimensional (3D) images provides an opportunity for efficient and objective analysis. The purpose of this study was to define a computer-based standard of identifying the 3D midfacial reference plane of the face in children with unrepaired cleft lip for measurement of facial symmetry.   The 3D images of 50 subjects (35 with unilateral CL, 10 with bilateral CL, five controls) were included in this study.   Five methods of defining a midfacial plane were applied to each image, including two human-based (Direct Placement, Manual Landmark) and three computer-based (Mirror, Deformation, Learning) methods.   Six blinded raters (three cleft surgeons, two craniofacial pediatricians, and one craniofacial researcher) independently ranked and rated the accuracy of the defined planes.   Among computer-based methods, the Deformation method performed significantly better than the others. Although human-based methods performed best, there was no significant difference compared with the Deformation method. The average correlation coefficient among raters was .4; however, it was .7 and .9 when the angular difference between planes was greater than 6° and 8°, respectively.   Raters can agree on the 3D midfacial reference plane in children with unrepaired CL using digital surface mesh. The Deformation method performed best among computer-based methods evaluated and can be considered a useful tool to carry out automated measurements of facial symmetry in children with unrepaired cleft lip.

  9. Assessing the Reliability of Curriculum-Based Measurement: An Application of Latent Growth Modeling

    ERIC Educational Resources Information Center

    Yeo, Seungsoo; Kim, Dong-Il; Branum-Martin, Lee; Wayman, Miya Miura; Espin, Christine A.

    2012-01-01

    The purpose of this study was to demonstrate the use of Latent Growth Modeling (LGM) as a method for estimating reliability of Curriculum-Based Measurement (CBM) progress-monitoring data. The LGM approach permits the error associated with each measure to differ at each time point, thus providing an alternative method for examining of the…

  10. Optimization methods of pulse-to-pulse alignment using femtosecond pulse laser based on temporal coherence function for practical distance measurement

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Yang, Linghui; Guo, Yin; Lin, Jiarui; Cui, Pengfei; Zhu, Jigui

    2018-02-01

    An interferometer technique based on the temporal coherence function of femtosecond pulses is demonstrated for practical distance measurement. Here, the pulse-to-pulse alignment is analyzed for large-delay distance measurement. Firstly, a temporal coherence function model between two femtosecond pulses is developed in the time domain for the dispersive unbalanced Michelson interferometer. Then, according to this model, the fringe analysis and the envelope extraction process are discussed. Meanwhile, optimization methods of pulse-to-pulse alignment for practical long distance measurement are presented. The order of the curve fitting and the selection of points for envelope extraction are analyzed. Furthermore, an averaging method based on the symmetry of the coherence function is demonstrated. Finally, the performance of the proposed methods is evaluated in an absolute distance measurement of 20 μm with a path length difference of 9 m. The improvement of the standard deviation in the experimental results shows that these approaches have the potential for practical distance measurement.

  11. Microsiemens or Milligrams: Measures of Ionic Mixtures ...

    EPA Pesticide Factsheets

    In December of 2016, EPA released the Draft Field-Based Methods for Developing Aquatic Life Criteria for Specific Conductivity for public comment. Once final, states and authorized tribes may use these methods to derive field-based ecoregional ambient Aquatic Life Ambient Water Quality Criteria (AWQC) for specific conductivity (SC) in flowing waters. The methods provide flexible approaches for developing science-based SC criteria that reflect ecoregional or state-specific factors. The concentration of a dissolved salt mixture can be measured in a number of ways, including measurement of total dissolved solids, freezing point depression, refractive index, density, or the sum of the concentrations of individually measured ions. For the draft method, SC was selected as the measure because SC is a measure of all ions in the mixture; the measurement technology is fast, inexpensive, and accurate; and it measures only dissolved ions. When developing water quality criteria for major ions, some stakeholders may prefer to identify the ionic constituents as a measure of exposure instead of SC. A field-based method was used to derive example chronic and acute water quality criteria for SC and for a common mixture of two anions (bicarbonate plus sulfate, [HCO₃⁻] + [SO₄²⁻], in mg/L) that represents common mixtures in streams. These two anions are sufficient to model the ion mixture and SC (R2 = 0.94). Using [HCO₃⁻] + [SO₄²⁻] does not imply that these two anions are the

  12. Novel method for measurement of transistor gate length using energy-filtered transmission electron microscopy

    NASA Astrophysics Data System (ADS)

    Lee, Sungho; Kim, Tae-Hoon; Kang, Jonghyuk; Yang, Cheol-Woong

    2016-12-01

    As the feature size of devices continues to decrease, transmission electron microscopy (TEM) is becoming indispensable for measuring the critical dimension (CD) of structures. Semiconductors consist primarily of silicon-based materials such as silicon, silicon dioxide, and silicon nitride, and the electrons transmitted through a plan-view TEM sample provide diverse information about various overlapped silicon-based materials. This information is exceedingly complex, which makes it difficult to clarify the boundary to be measured. Therefore, we propose a simple measurement method using energy-filtered TEM (EF-TEM). A precise and effective measurement condition was obtained by determining the maximum value of the integrated area ratio of the electron energy loss spectrum at the boundary to be measured. This method employs an adjustable slit allowing only electrons with a certain energy range to pass. EF-TEM imaging showed a sharp transition at the boundary when the energy-filter’s passband centre was set at 90 eV, with a slit width of 40 eV. This was the optimum condition for the CD measurement of silicon-based materials involving silicon nitride. Electron energy loss spectroscopy (EELS) and EF-TEM images were used to verify this method, which makes it possible to measure the transistor gate length in a dynamic random access memory manufactured using 35 nm process technology. This method can be adapted to measure the CD of other non-silicon-based materials using the EELS area ratio of the boundary materials.

  13. Comparison of SVM RBF-NN and DT for crop and weed identification based on spectral measurement over corn fields

    USDA-ARS?s Scientific Manuscript database

    It is important to find an appropriate pattern-recognition method for in-field plant identification based on spectral measurement in order to classify the crop and weeds accurately. In this study, the method of Support Vector Machine (SVM) was evaluated and compared with two other methods, Decision ...

  14. Correction Methods for Organic Carbon Artifacts when Using Quartz-Fiber Filters in Large Particulate Matter Monitoring Networks: The Regression Method and Other Options

    EPA Science Inventory

    Sampling and handling artifacts can bias filter-based measurements of particulate organic carbon (OC). Several measurement-based methods for OC artifact reduction and/or estimation are currently used in research-grade field studies. OC frequently is not artifact-corrected in larg...

  15. An ultrasound-based liquid pressure measurement method in small diameter pipelines considering the installation and temperature.

    PubMed

    Li, Xue; Song, Zhengxiang

    2015-04-09

    Liquid pressure is a key parameter for detecting and judging faults in hydraulic mechanisms, but traditional measurement methods have many deficiencies. An effective non-intrusive method using an ultrasound-based technique to measure liquid pressure in small diameter (less than 15 mm) pipelines is presented in this paper. The proposed method is based on the principle that the transmission speed of an ultrasonic wave in a Kneser liquid correlates with the liquid pressure. The liquid pressure was calculated from the variation of the ultrasonic propagation time in the liquid under two different pressures, 0 Pa and X Pa. In this research the time difference was obtained by an electrical processing approach and was measured accurately to the nanosecond level with a high-resolution time measurement module. Because installation differences and liquid temperature can influence the measurement accuracy, an automatic gain control (AGC) circuit and a new back propagation network (BPN) model accounting for liquid temperature were employed to improve the measurement results. The corresponding pressure values were finally obtained by utilizing the relationship between time difference, transient temperature and liquid pressure. An experimental pressure measurement platform was built, and the experimental results confirm that the proposed method has good measurement accuracy.

  16. Comparison of microstickies measurement methods. Part I, sample preparation and measurement methods

    Treesearch

    Mahendra R. Doshi; Angeles Blanco; Carlos Negro; Gilles M. Dorris; Carlos C. Castro; Axel Hamann; R. Daniel Haynes; Carl Houtman; Karen Scallon; Hans-Joachim Putz; Hans Johansson; R.A. Venditti; K. Copeland; H.-M. Chang

    2003-01-01

    Recently, we completed a project on the comparison of macrostickies measurement methods. Based on the success of the project, we decided to embark on this new project on comparison of microstickies measurement methods. When we started this project, there were some concerns and doubts principally due to the lack of an accepted definition of microstickies. However, we...

  17. Accuracy of Dual-Energy Virtual Monochromatic CT Numbers: Comparison between the Single-Source Projection-Based and Dual-Source Image-Based Methods.

    PubMed

    Ueguchi, Takashi; Ogihara, Ryota; Yamada, Sachiko

    2018-03-21

    To investigate the accuracy of dual-energy virtual monochromatic computed tomography (CT) numbers obtained by two typical hardware and software implementations: the single-source projection-based method and the dual-source image-based method. A phantom with different tissue equivalent inserts was scanned with both single-source and dual-source scanners. A fast kVp-switching feature was used on the single-source scanner, whereas a tin filter was used on the dual-source scanner. Virtual monochromatic CT images of the phantom at energy levels of 60, 100, and 140 keV were obtained by both projection-based (on the single-source scanner) and image-based (on the dual-source scanner) methods. The accuracy of virtual monochromatic CT numbers for all inserts was assessed by comparing measured values to their corresponding true values. Linear regression analysis was performed to evaluate the dependency of measured CT numbers on tissue attenuation, method, and their interaction. Root mean square values of systematic error over all inserts at 60, 100, and 140 keV were approximately 53, 21, and 29 Hounsfield unit (HU) with the single-source projection-based method, and 46, 7, and 6 HU with the dual-source image-based method, respectively. Linear regression analysis revealed that the interaction between the attenuation and the method had a statistically significant effect on the measured CT numbers at 100 and 140 keV. There were attenuation-, method-, and energy level-dependent systematic errors in the measured virtual monochromatic CT numbers. CT number reproducibility was comparable between the two scanners, and CT numbers had better accuracy with the dual-source image-based method at 100 and 140 keV. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  18. [Application of three heat pulse technique-based methods to determine the stem sap flow].

    PubMed

    Wang, Sheng; Fan, Jun

    2015-08-01

    It is of critical importance to acquire tree transpiration characteristics through sap flow methodology in order to understand tree water physiology, forest ecology and ecosystem water exchange. Tri-probe heat pulse (TPHP) sensors, which are widely utilized in measuring soil thermal parameters and soil evaporation, were applied to measure Salix matsudana sap flow density (Vs) via the heat-ratio method (HRM), the T-Max method (T-Max) and the single-probe heat pulse (SHPP) method, and a comparative analysis was conducted against results measured with additional Granier thermal dissipation probes (TDP). The results showed that it took about five weeks after TPHP installation to reach a stable measurement stage; Vs measured with the three methods in the early stage after installation was 135%-220% higher than Vs in the stable measurement stage; and Vs estimated via the HRM, T-Max and SHPP methods was significantly linearly correlated with Vs estimated via the TDP method, with R2 of 0.93, 0.73 and 0.91, respectively, while R2 between Vs measured by SHPP and HRM reached 0.94. HRM had relatively higher precision in measuring low and reverse sap flow rates. The SHPP method seemed very promising for sap flow measurement owing to its configuration simplicity and high measuring accuracy, although it could not distinguish the direction of flow. The T-Max method had relatively higher error in sap flow measurement and could not measure sap flow below 5 cm³ cm⁻² h⁻¹, so this method cannot be used alone; however, it can measure the thermal diffusivity needed to calculate sap flow when other methods are applied. It is recommended to choose a proper method, or a combination of several methods, to measure stem sap flow based on the specific research purpose.

  19. Prognostic score–based balance measures for propensity score methods in comparative effectiveness research

    PubMed Central

    Stuart, Elizabeth A.; Lee, Brian K.; Leacy, Finbarr P.

    2013-01-01

    Objective Examining covariate balance is the prescribed method for determining when propensity score methods are successful at reducing bias. This study assessed the performance of various balance measures, including a proposed balance measure based on the prognostic score (also known as the disease-risk score), to determine which balance measures best correlate with bias in the treatment effect estimate. Study Design and Setting The correlations of multiple common balance measures with bias in the treatment effect estimate produced by weighting by the odds, subclassification on the propensity score, and full matching on the propensity score were calculated. Simulated data were used, based on realistic data settings. Settings included both continuous and binary covariates and continuous covariates only. Results The standardized mean difference in prognostic scores, the mean standardized mean difference, and the mean t-statistic all had high correlations with bias in the effect estimate. Overall, prognostic scores displayed the highest correlations of all the balance measures considered. Prognostic score measure performance was generally not affected by model misspecification and performed well under a variety of scenarios. Conclusion Researchers should consider using prognostic score–based balance measures for assessing the performance of propensity score methods for reducing bias in non-experimental studies. PMID:23849158

  20. a New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.

    2018-05-01

    In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. Traditional time interval measurement techniques have the disadvantages of low measurement accuracy, complicated circuit structure and large error, so high-precision time interval data cannot be obtained with them. In order to obtain higher-quality remote sensing cloud images based on time interval measurement, a higher-accuracy time interval measurement method is proposed. The method is based on charging a capacitor while simultaneously sampling the change in the capacitor voltage. First, an approximate model of the capacitor voltage curve during the pulse time of flight is fitted to the sampled data. Then, the whole charging time is obtained from the fitted function. In this method, only a high-speed A/D sampler and a capacitor are required in a single receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20%.
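
    A minimal sketch of the curve-fitting step described above: sampled capacitor voltages are fitted to the RC charging law, and the fitted parameters convert the voltage reached when charging stopped into the elapsed time interval. The fitting function, its initial guesses and the assumption that charging starts at t = 0 are ours, not details given in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def charge_curve(t, v_s, tau):
    """RC charging law, with charging assumed to start at t = 0 (pulse emission)."""
    return v_s * (1.0 - np.exp(-t / tau))

def time_interval(t_samples, v_samples, v_stop):
    """Fit the sampled capacitor voltage, then convert the voltage reached when
    the return pulse stopped the charging into the elapsed time interval."""
    p0 = (np.max(v_samples) * 1.5, np.max(t_samples))     # rough initial guesses
    (v_s, tau), _ = curve_fit(charge_curve, t_samples, v_samples, p0=p0)
    return -tau * np.log(1.0 - v_stop / v_s)
```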

  1. Non-contact plant growth measurement method and system based on ubiquitous sensor network technologies.

    PubMed

    Suk, Jinweon; Kim, Seokhoon; Ryoo, Intae

    2011-01-01

    This paper proposes a non-contact plant growth measurement system using infrared sensors based on ubiquitous sensor network (USN) technology. The proposed system measures plant growth parameters such as the stem radius in real time with non-contact methods, and derives the diameter, cross-sectional area and thickening form of plant stems from the measured data. Non-contact sensors were used so as not to cause any damage to the plants while the growth parameters are measured. Once the growth parameters are measured, they are transmitted to a remote server over the sensor network and analyzed in the application program server. The analyzed data are then provided to administrators and a group of interested users. The proposed plant growth measurement system has been designed and implemented with fixed-type and rotary-type infrared-sensor-based measurement methods and devices. Finally, the system performance is compared and verified with measurement data obtained in practical field experiments.

  2. Temperature measurement of burning aluminum powder based on the double line method of atomic emission spectra

    NASA Astrophysics Data System (ADS)

    Tang, Huijuan; Hao, Xiaojian; Hu, Xiaotao

    2018-01-01

    Conventional contact temperature measurement suffers from response delay and is limited by the availability of high-temperature-resistant materials. Taking advantage of the faster response and the theoretically unlimited upper range of non-contact methods, a measurement system based on the double-line (two-line) atomic emission spectroscopy principle is put forward, and the structure and theory of the temperature measuring device are introduced. According to the atomic spectrum database (ASD), the Al I 690.6 nm and Al I 708.5 nm lines are selected as the two lines for the temperature measurement. The intensity ratio of the two emission lines was measured with a spectrometer to obtain the temperature of aluminum burning in pure oxygen, and the result was compared with the temperature measured by a thermocouple. The temperatures obtained by the two methods correlate well, which demonstrates the feasibility of the method.
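
    The two-line (Boltzmann ratio) relation underlying this measurement is compact enough to sketch: the temperature follows from the measured intensity ratio together with the transition probabilities, degeneracies, wavelengths and upper-level energies of the two lines. The dictionary layout below is an illustrative convention, and the spectroscopic constants for the specific Al I lines must be taken from the ASD.

```python
import numpy as np

K_B = 8.617333262e-5   # Boltzmann constant in eV/K

def two_line_temperature(i1, i2, line1, line2):
    """Boltzmann two-line temperature from the intensity ratio i1/i2 of two
    emission lines of the same species.  Each line dict holds the transition
    probability A (s^-1), upper-level degeneracy g, wavelength lam (nm) and
    upper-level energy E (eV); values must be taken from the ASD."""
    ratio = (i1 * line2["A"] * line2["g"] * line1["lam"]) / \
            (i2 * line1["A"] * line1["g"] * line2["lam"])
    return (line2["E"] - line1["E"]) / (K_B * np.log(ratio))
```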

  3. DNA Base-Calling from a Nanopore Using a Viterbi Algorithm

    PubMed Central

    Timp, Winston; Comer, Jeffrey; Aksimentiev, Aleksei

    2012-01-01

    Nanopore-based DNA sequencing is the most promising third-generation sequencing method. It has superior read length, speed, and sample requirements compared with state-of-the-art second-generation methods. However, base-calling still presents substantial difficulty because the resolution of the technique is limited compared with the measured signal/noise ratio. Here we demonstrate a method to decode 3-bp-resolution nanopore electrical measurements into a DNA sequence using a Hidden Markov model. This method shows tremendous potential for accuracy (∼98%), even with a poor signal/noise ratio. PMID:22677395
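
    A generic Viterbi decoder of the kind such base-calling relies on is short enough to sketch; in the nanopore setting the hidden states would be k-mers and the emissions the discretized blockade-current levels, but the state space, emission model and log-probability inputs below are placeholders, not the authors' trained model.

```python
import numpy as np

def viterbi(obs, log_start, log_trans, log_emit):
    """Most likely hidden-state path for an observation sequence.
    log_start[s], log_trans[s, s2] and log_emit[s, o] are log-probabilities."""
    n_states = log_start.shape[0]
    T = len(obs)
    score = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    score[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, T):
        cand = score[t - 1][:, None] + log_trans           # (from-state, to-state)
        back[t] = np.argmax(cand, axis=0)                  # best predecessor
        score[t] = cand[back[t], np.arange(n_states)] + log_emit[:, obs[t]]
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):                          # trace the path back
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```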

  4. Noncontact measurement of heart rate using facial video illuminated under natural light and signal weighted analysis.

    PubMed

    Yan, Yonggang; Ma, Xiang; Yao, Lifeng; Ouyang, Jianfei

    2015-01-01

    Non-contact, remote measurement of vital physical signals is important for reliable and comfortable physiological self-assessment. We present a novel optical imaging-based method to measure vital physical signals. Using a digital camera and ambient light, cardiovascular pulse waves were correctly extracted from color videos of the human face, and vital physiological parameters such as heart rate were measured with a proposed signal-weighted analysis method. The measured HRs were consistent with those measured simultaneously with reference technologies (r=0.94, p<0.001 for HR). The results show that the imaging-based method is suitable for measuring physiological parameters and provides a reliable and comfortable measurement mode. The study lays a physical foundation for noninvasively measuring multiple physiological parameters in humans.

  5. Measurement of Crystalline Silica Aerosol Using Quantum Cascade Laser-Based Infrared Spectroscopy.

    PubMed

    Wei, Shijun; Kulkarni, Pramod; Ashley, Kevin; Zheng, Lina

    2017-10-24

    Inhalation exposure to airborne respirable crystalline silica (RCS) poses major health risks in many industrial environments. There is a need for new sensitive instruments and methods for in-field or near real-time measurement of crystalline silica aerosol. The objective of this study was to develop an approach, using quantum cascade laser (QCL)-based infrared spectroscopy (IR), to quantify airborne concentrations of RCS. Three sampling methods were investigated for their potential for effective coupling with QCL-based transmittance measurements: (i) conventional aerosol filter collection, (ii) focused spot sample collection directly from the aerosol phase, and (iii) dried spot obtained from deposition of liquid suspensions. Spectral analysis methods were developed to obtain IR spectra from the collected particulate samples in the range 750-1030 cm⁻¹. The new instrument was calibrated and the results were compared with standardized methods based on Fourier transform infrared (FTIR) spectrometry. Results show that significantly lower detection limits for RCS (≈330 ng), compared to conventional infrared methods, could be achieved with effective microconcentration and careful coupling of the particulate sample with the QCL beam. These results offer promise for further development of sensitive filter-based laboratory methods and portable sensors for near real-time measurement of crystalline silica aerosol.

  6. Sea Ice Detection Based on an Improved Similarity Measurement Method Using Hyperspectral Data.

    PubMed

    Han, Yanling; Li, Jue; Zhang, Yun; Hong, Zhonghua; Wang, Jing

    2017-05-15

    Hyperspectral remote sensing technology can acquire nearly continuous spectrum information and rich sea ice image information, thus providing an important means of sea ice detection. However, the correlation and redundancy among hyperspectral bands reduce the accuracy of traditional sea ice detection methods. Based on the spectral characteristics of sea ice, this study presents an improved similarity measurement method based on linear prediction (ISMLP) to detect sea ice. First, the first original band with a large amount of information is determined based on mutual information theory. Subsequently, a second original band with the least similarity is chosen by the spectral correlation measuring method. Finally, subsequent bands are selected through the linear prediction method, and a support vector machine classifier model is applied to classify sea ice. In experiments performed on images of Baffin Bay and Bohai Bay, comparative analyses were conducted to compare the proposed method and traditional sea ice detection methods. Our proposed ISMLP method achieved the highest classification accuracies (91.18% and 94.22%) in both experiments. These results show that the ISMLP method exhibits better overall performance than the other methods and can be effectively applied to hyperspectral sea ice detection.

  8. A study on the measurement of radar cross section of flighting model based on the range-Doppler imaging

    NASA Astrophysics Data System (ADS)

    Hashimoto, Osamu; Mizokami, Osamu

    A method for measuring radar cross section (RCS) based on range-Doppler imaging is discussed. In this method, the measured targets are rotated and the Doppler frequencies caused by each scattering element along the targets are analyzed by FFT. Using this method, each scattered power peak along the flying model is measured. It is found that each part of the RCS of a flying model can be measured and that the RCS of its main wing (about 46 dB/sq cm) is greater than that of its body (about 20-30 dB/sq cm).

  9. Spatial feature analysis of a cosmic-ray sensor for measuring the soil water content: Comparison of four weighting methods

    NASA Astrophysics Data System (ADS)

    Cai, Jingya; Pang, Zhiguo; Fu, Jun'e.

    2018-04-01

    To quantitatively analyze the spatial features of a cosmic-ray sensor (CRS) (i.e., the measurement support volume of the CRS and the weight of the in situ point-scale soil water content (SWC) in terms of the regionally averaged SWC derived from the CRS) in measuring the SWC, cooperative observations based on CRS, oven drying and frequency domain reflectometry (FDR) methods are performed at the point and regional scales in a desert steppe area of the Inner Mongolia Autonomous Region. This region is flat with sparse vegetation cover consisting of only grass, thereby minimizing the effects of terrain and vegetation. Considering the two possibilities of the measurement support volume of the CRS, the results of four weighting methods are compared with the SWC monitored by FDR within an appropriate measurement support volume. The weighted average calculated using the neutron intensity-based weighting method (Ni weighting method) best fits the regionally averaged SWC measured by the CRS. Therefore, we conclude that the gyroscopic support volume and the weights determined by the Ni weighting method are the closest to the actual spatial features of the CRS when measuring the SWC. Based on these findings, a scale transformation model of the SWC from the point scale to the scale of the CRS measurement support volume is established. In addition, the spatial features simulated using the Ni weighting method are visualized by developing a software system.

  10. Estimation of non-solid lung nodule volume with low-dose CT protocols: effect of reconstruction algorithm and measurement method

    NASA Astrophysics Data System (ADS)

    Gavrielides, Marios A.; DeFilippo, Gino; Berman, Benjamin P.; Li, Qin; Petrick, Nicholas; Schultz, Kurt; Siegelman, Jenifer

    2017-03-01

    Computed tomography is primarily the modality of choice to assess stability of nonsolid pulmonary nodules (sometimes referred to as ground-glass opacity) for three or more years, with change in size being the primary factor to monitor. Since volume extracted from CT is being examined as a quantitative biomarker of lung nodule size, it is important to examine factors affecting the performance of volumetric CT for this task. More specifically, the effect of reconstruction algorithms and measurement method in the context of low-dose CT protocols has been an under-examined area of research. In this phantom study we assessed volumetric CT with two different measurement methods (model-based and segmentation-based) for nodules with radiodensities of both nonsolid (-800HU and -630HU) and solid (-10HU) nodules, sizes of 5mm and 10mm, and two different shapes (spherical and spiculated). Imaging protocols included CTDIvol typical of screening (1.7mGy) and sub-screening (0.6mGy) scans and different types of reconstruction algorithms across three scanners. Results showed that radio-density was the factor contributing most to overall error based on ANOVA. The choice of reconstruction algorithm or measurement method did not affect substantially the accuracy of measurements; however, measurement method affected repeatability with repeatability coefficients ranging from around 3-5% for the model-based estimator to around 20-30% across reconstruction algorithms for the segmentation-based method. The findings of the study can be valuable toward developing standardized protocols and performance claims for nonsolid nodules.

  11. Experimental comparison and validation of hot-ball method with guarded hot plate method on polyurethane foams

    NASA Astrophysics Data System (ADS)

    Hudec, Ján; Glorieux, Christ; Dieška, Peter; Kubičár, Ľudovít

    2016-07-01

    The hot-ball method is an innovative transient method for measuring thermophysical properties. The principle is based on heating a small ball, incorporated in the measured medium, with constant heating power while simultaneously measuring the ball's temperature response from the moment heating is initiated. The shape of the temperature response depends on the thermophysical properties of the medium in which the sensor is placed. The method is patented by the Institute of Physics, SAS, where the method and sensors based on it are being developed. At the beginning of the sensor development we focused on monitoring applications, where relative precision is much more important than accuracy. Meanwhile, the quality of the sensors has improved enough to be used for a new application: absolute measurement of the thermophysical parameters of low-thermal-conductivity materials. This paper describes the experimental verification and validation of measurement by the hot-ball method. Thanks to cooperation with the Laboratory of Soft Matter and Biophysics of the Catholic University of Leuven in Belgium, the established guarded hot plate method was used as a reference. Details of the measuring setups, a description of the experiments and the results of the comparison are presented.

  12. A comparison of five approaches to measurement of anatomic knee alignment from radiographs.

    PubMed

    McDaniel, G; Mitchell, K L; Charles, C; Kraus, V B

    2010-02-01

    The recent recognition of the correlation of the hip-knee-ankle angle (HKA) with femur-tibia angle (FTA) on a standard knee radiograph has led to the increasing inclusion of FTA assessments in OA studies due to its clinical relevance, cost effectiveness and minimal radiation exposure. Our goal was to investigate the performance metrics of currently used methods of FTA measurement to determine whether a specific protocol could be recommended based on these results. Inter- and intra-rater reliability of FTA measurements were determined by intraclass correlation coefficient (ICC) of two independent analysts. Minimal detectable differences were determined and the correlation of FTA and HKA was analyzed by linear regression. Differences among methods of measuring HKA were assessed by ANOVA. All five methods of FTA measurement demonstrated high precision by inter- and intra-rater reproducibility (ICCs ≥ 0.93). All five methods displayed good accuracy, but after correction for the offset of FTA from HKA, the femoral notch landmark method was the least accurate. However, the methods differed according to their minimal detectable differences; the FTA methods utilizing the center of the base of the tibial spines or the center of the tibial plateau as knee center landmarks yielded the smallest minimal detectable differences (1.25 degrees and 1.72 degrees, respectively). All methods of FTA were highly reproducible, but varied in their accuracy and sensitivity to detect meaningful differences. Based on these parameters we recommend standardizing measurement angles with vertices at the base of the tibial spines or the center of the tibia and comparing single-point and two-point methods in larger studies. Copyright 2009 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  13. An image registration-based technique for noninvasive vascular elastography

    NASA Astrophysics Data System (ADS)

    Valizadeh, Sina; Makkiabadi, Bahador; Mirbagheri, Alireza; Soozande, Mehdi; Manwar, Rayyan; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza

    2018-02-01

    Non-invasive vascular elastography is an emerging technique in vascular tissue imaging. During the past decades, several techniques have been suggested to estimate the tissue elasticity by measuring the displacement of the Carotid vessel wall. Cross correlation-based methods are the most prevalent approaches to measure the strain exerted in the vessel wall by the blood pressure. In the case of a low pressure, the displacement is too small to be apparent in ultrasound imaging, especially in the regions far from the center of the vessel, causing a high error of displacement measurement. On the other hand, increasing the compression leads to a relatively large displacement in the regions near the center, which reduces the performance of the cross correlation-based methods. In this study, a non-rigid image registration-based technique is proposed to measure the tissue displacement for a relatively large compression. The results show that the error of the displacement measurement obtained by the proposed method is reduced by increasing the amount of compression, while the error of the cross correlation-based method rises for a relatively large compression. We also used the synthetic aperture imaging method, benefiting from the directivity diagram, to improve the image quality, especially in the superficial regions. The best relative root-mean-square error (RMSE) of the proposed method and the adaptive cross correlation method were 4.5% and 6%, respectively. Consequently, the proposed algorithm outperforms the conventional method and reduces the relative RMSE by 25%.

  14. A method for the retrieval of atomic oxygen density and temperature profiles from ground-based measurements of the O(+)(2D-2P) 7320 A twilight airglow

    NASA Technical Reports Server (NTRS)

    Fennelly, J. A.; Torr, D. G.; Richards, P. G.; Torr, M. R.; Sharp, W. E.

    1991-01-01

    This paper describes a technique for extracting thermospheric profiles of the atomic-oxygen density and temperature, using ground-based measurements of the O(+)(2D-2P) doublet at 7320 and 7330 A in the twilight airglow. In this method, a local photochemical model is used to calculate the 7320-A intensity; the method also utilizes an iterative inversion procedure based on the Levenberg-Marquardt method described by Press et al. (1986). The results demonstrate that, if the measurements are only limited by errors due to Poisson noise, the altitude profiles of neutral temperature and atomic oxygen concentration can be determined accurately using currently available spectrometers.

  15. Intra-rater reliability and agreement of various methods of measurement to assess dorsiflexion in the Weight Bearing Dorsiflexion Lunge Test (WBLT) among female athletes.

    PubMed

    Langarika-Rocafort, Argia; Emparanza, José Ignacio; Aramendi, José F; Castellano, Julen; Calleja-González, Julio

    2017-01-01

    To examine the intra-observer reliability and agreement between five methods of measurement for dorsiflexion during the Weight Bearing Dorsiflexion Lunge Test and to assess the degree of agreement between three methods in female athletes. Repeated measurements study design. Volleyball club. Twenty-five volleyball players. Dorsiflexion was evaluated using five methods: heel-wall distance, first toe-wall distance, inclinometer at the tibia, inclinometer at the Achilles tendon and the dorsiflexion angle obtained by a simple trigonometric function. For the statistical analysis, agreement was studied using the Bland-Altman method, the Standard Error of Measurement and the Minimum Detectable Change. Reliability analysis was performed using the Intraclass Correlation Coefficient. Measurement methods using the inclinometer had more than 6° of measurement error. The angle calculated by the trigonometric function had a 3.28° error. The reliability of inclinometer-based methods had ICC values < 0.90. Distance-based methods and the trigonometric angle measurement had ICC values > 0.90. Concerning the agreement between methods, there was from 1.93° to 14.42° bias, and from 4.24° to 7.96° random error. To assess the DF angle in the WBLT, the angle calculated by a trigonometric function is the most repeatable method. The methods of measurement cannot be used interchangeably. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Comparison of on-site field measured inorganic arsenic in rice with laboratory measurements using a field deployable method: Method validation.

    PubMed

    Mlangeni, Angstone Thembachako; Vecchi, Valeria; Norton, Gareth J; Raab, Andrea; Krupp, Eva M; Feldmann, Joerg

    2018-10-15

    A commercial arsenic field kit designed to measure inorganic arsenic (iAs) in water was modified into a field deployable method (FDM) to measure iAs in rice. While the method has been validated to give precise and accurate results in the laboratory, its on-site field performance has not been evaluated. This study was designed to test the method on-site in Malawi in order to evaluate its accuracy and precision in the determination of iAs, by comparison with a validated reference method, and to provide original data on inorganic arsenic in Malawian rice and rice-based products. The method was validated against the established laboratory-based HPLC-ICP-MS. Statistical tests indicated there were no significant differences between on-site and laboratory iAs measurements determined using the FDM (p = 0.263, α = 0.05) or between on-site measurements and measurements determined using HPLC-ICP-MS (p = 0.299, α = 0.05). This method allows quick (within 1 h) and efficient on-site screening of the iAs concentration in rice. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. A new method for calculating number concentrations of cloud condensation nuclei based on measurements of a three-wavelength humidified nephelometer system

    NASA Astrophysics Data System (ADS)

    Tao, Jiangchuan; Zhao, Chunsheng; Kuang, Ye; Zhao, Gang; Shen, Chuanyang; Yu, Yingli; Bian, Yuxuan; Xu, Wanyun

    2018-02-01

    The number concentration of cloud condensation nuclei (CCN) plays a fundamental role in cloud physics. Instruments for direct measurement of the CCN number concentration (NCCN) based on chamber technology are complex and costly; thus a simpler way of measuring NCCN is needed. In this study, a new method for NCCN calculation based on measurements of a three-wavelength humidified nephelometer system is proposed. A three-wavelength humidified nephelometer system can measure the aerosol light-scattering coefficient (σsp) at three wavelengths and the light-scattering enhancement factor (fRH). The Ångström exponent (Å) inferred from σsp at the three wavelengths provides information on the mean predominant aerosol size, and the hygroscopicity parameter (κ) can be calculated from the combination of fRH and Å. Given this, a lookup table that includes σsp, κ and Å is established to predict NCCN. Because of the preconditions for its application, this new method is not suitable for externally mixed particles, large particles (e.g., dust and sea salt) or fresh aerosol particles. This method is validated against direct measurements of NCCN using a CCN counter on the North China Plain. Results show that relative deviations between calculated NCCN and measured NCCN are within 30% and confirm the robustness of this method. This method enables simpler NCCN measurements because the humidified nephelometer system is easily operated and stable. Compared with the method using a CCN counter, another advantage of this newly proposed method is that it can obtain NCCN at lower supersaturations in the ambient atmosphere.

  18. Data fusion algorithm for rapid multi-mode dust concentration measurement system based on MEMS

    NASA Astrophysics Data System (ADS)

    Liao, Maohao; Lou, Wenzhong; Wang, Jinkui; Zhang, Yan

    2018-03-01

    As a single measurement method cannot fully meet the technical requirements of dust concentration measurement, a multi-mode detection method is put forward, which places new requirements on data processing. This paper presents a new dust concentration measurement system that combines a MEMS ultrasonic sensor and a MEMS capacitance sensor, together with a new data fusion algorithm for this multi-mode measurement system. After analyzing the relation between the data from the composite measurement methods, a data fusion algorithm based on Kalman filtering is established, which effectively improves the measurement accuracy and ultimately forms a rapid data fusion model of dust concentration measurement. Test results show that the data fusion algorithm achieves rapid and accurate concentration detection.
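
    The paper's fusion algorithm is not reproduced here, but the basic idea of Kalman-filter fusion of two noisy concentration readings can be sketched as follows. The random-walk state model and the noise variances are illustrative assumptions, not the authors' values.

```python
import numpy as np

def fuse_concentration(z_ultra, z_cap, r_ultra=4.0, r_cap=9.0, q=0.5):
    """Scalar Kalman filter treating dust concentration as a random walk
    and fusing two sensor readings per time step.
    z_ultra, z_cap : arrays of raw readings (mg/m^3)
    r_ultra, r_cap : assumed measurement noise variances
    q              : assumed process noise variance
    """
    x, p = z_ultra[0], 10.0              # initial state and variance
    fused = []
    for zu, zc in zip(z_ultra, z_cap):
        p = p + q                        # predict: x_k = x_{k-1} + w
        for z, r in ((zu, r_ultra), (zc, r_cap)):
            k = p / (p + r)              # Kalman gain
            x = x + k * (z - x)          # update with this measurement
            p = (1.0 - k) * p
        fused.append(x)
    return np.array(fused)

# Synthetic example: a true concentration ramp plus sensor noise
rng = np.random.default_rng(1)
true = np.linspace(10, 30, 50)
ultra = true + rng.normal(0, 2.0, true.size)
cap = true + rng.normal(0, 3.0, true.size)
est = fuse_concentration(ultra, cap)
print("RMSE ultrasonic only:", np.sqrt(np.mean((ultra - true) ** 2)).round(2))
print("RMSE fused estimate :", np.sqrt(np.mean((est - true) ** 2)).round(2))
```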

  19. A comparison of manual anthropometric measurements with Kinect-based scanned measurements in terms of precision and reliability.

    PubMed

    Bragança, Sara; Arezes, Pedro; Carvalho, Miguel; Ashdown, Susan P; Castellucci, Ignacio; Leão, Celina

    2018-01-01

    Collecting anthropometric data for real-life applications demands a high degree of precision and reliability, so it is important to test new equipment that will be used for data collection. The objective was to compare two anthropometric data gathering techniques - manual methods and a Kinect-based 3D body scanner - to understand which of them gives more precise and reliable results. The data were collected using a measuring tape and a Kinect-based 3D body scanner and were evaluated in terms of precision by considering the regular and relative Technical Error of Measurement, and in terms of reliability by using the Intraclass Correlation Coefficient, Reliability Coefficient, Standard Error of Measurement and Coefficient of Variation. Both methods presented better results for reliability than for precision, and both showed relatively good results for these two variables; however, manual methods had better results for some body measurements. Despite being considered sufficiently precise and reliable for certain applications (e.g. the apparel industry), the 3D scanner tested showed, for almost every anthropometric measurement, a different result than the manual technique. Many companies design their products based on data obtained from 3D scanners, hence understanding the precision and reliability of the equipment used is essential to obtain feasible results.

  20. A complexity measure based method for studying the dependence of 222Rn concentration time series on indoor air temperature and humidity.

    PubMed

    Mihailovic, D T; Udovičić, V; Krmar, M; Arsenić, I

    2014-02-01

    We suggest a complexity measure based method for studying the dependence of measured (222)Rn concentration time series on indoor air temperature and humidity. The method is based on the Kolmogorov complexity (KL). We introduce (i) the sequence of KL values, (ii) the highest KL value in the sequence (KLM) and (iii) the KL of the product of time series. The observed loss of KLM complexity in the (222)Rn concentration time series can be attributed to indoor air humidity, which keeps the radon daughters in the air. © 2013 Published by Elsevier Ltd.
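
    Kolmogorov complexity is not computable exactly, so studies of this kind typically approximate it with the Lempel-Ziv (LZ76) complexity of a binarized series. The sketch below shows that standard approximation (binarize around the median, count LZ phrases, normalize), not the authors' exact implementation.

```python
import numpy as np

def lempel_ziv_complexity(binary):
    """LZ76 complexity: number of new phrases found when parsing the
    binary sequence from left to right."""
    s = "".join(map(str, binary))
    i, c, n = 0, 0, len(s)
    while i < n:
        k = 1
        # grow the candidate phrase while it already occurs in the prefix
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        c += 1
        i += k
    return c

def normalized_kl(series):
    """Binarize around the median and normalize by n / log2(n),
    the expected complexity of a random binary sequence."""
    x = np.asarray(series, dtype=float)
    binary = (x > np.median(x)).astype(int)
    n = len(binary)
    return lempel_ziv_complexity(binary) * np.log2(n) / n

# Example: a periodic signal vs noise (a radon series would replace these)
t = np.arange(1024)
print("periodic:", round(normalized_kl(np.sin(0.1 * t)), 3))
print("random  :", round(normalized_kl(np.random.default_rng(2).normal(size=1024)), 3))
```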

  1. An evidential link prediction method and link predictability based on Shannon entropy

    NASA Astrophysics Data System (ADS)

    Yin, Likang; Zheng, Haoyang; Bian, Tian; Deng, Yong

    2017-09-01

    Predicting missing links is of both theoretical value and practical interest in network science. In this paper, we empirically investigate a new link prediction method based on similarity and compare nine well-known local similarity measures on nine real networks. Most previous studies focus on accuracy; however, it is also crucial to consider link predictability as an intrinsic property of the network itself. Hence, this paper proposes a new link prediction approach called the evidential measure (EM), based on Dempster-Shafer theory, and a new method to measure link predictability via local information and Shannon entropy.
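
    As an illustration of the local similarity baselines that such studies compare against (not the evidential measure itself), two classic indices, common neighbours and resource allocation, can be computed from adjacency sets as sketched below on a toy network.

```python
from itertools import combinations

# Toy undirected network as an adjacency dict
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c", "e"},
    "e": {"d"},
}

def common_neighbors(g, u, v):
    return len(g[u] & g[v])

def resource_allocation(g, u, v):
    # Sum of 1/degree over shared neighbours (RA index)
    return sum(1.0 / len(g[w]) for w in g[u] & g[v])

# Score all currently unconnected pairs; higher score = more likely link
candidates = [(u, v) for u, v in combinations(graph, 2) if v not in graph[u]]
ranked = sorted(candidates, key=lambda p: resource_allocation(graph, *p),
                reverse=True)
for u, v in ranked:
    print(u, v, common_neighbors(graph, u, v),
          round(resource_allocation(graph, u, v), 3))
```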

  2. Maintaining tumor targeting accuracy in real-time motion compensation systems for respiration-induced tumor motion

    PubMed Central

    Malinowski, Kathleen; McAvoy, Thomas J.; George, Rohini; Dieterich, Sonja; D’Souza, Warren D.

    2013-01-01

    Purpose: To determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Methods: Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥3 mm), and always (approximately once per minute). Results: Radial tumor displacement prediction errors (mean ± standard deviation) for the four schema described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. Conclusions: The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was utilized. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization. PMID:23822413
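
    A rough illustration of the modelling idea described above, with synthetic data: a partial-least-squares model maps external surrogate positions to tumour position, and an error-based policy refits it whenever the localization error reaches 3 mm. The 3 mm threshold and the six-sample initial fit follow the abstract; sklearn's PLSRegression, the motion traces and all other numbers are assumptions for the sketch.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
T = 200
phase = np.linspace(0, 12 * np.pi, T)
surrogates = np.column_stack([np.sin(phase), np.cos(phase),
                              np.sin(phase + 0.3)])        # 3 external markers
drift = 0.002 * np.arange(T)                               # slow baseline drift
tumor = 8.0 * np.sin(phase)[:, None] + drift[:, None]      # 1-D tumour motion (mm)
tumor += rng.normal(0, 0.3, tumor.shape)

def run(update_policy, threshold_mm=3.0):
    """Fit on the first 6 samples, then predict; refit according to policy."""
    model = PLSRegression(n_components=2).fit(surrogates[:6], tumor[:6])
    errors, updates = [], 0
    for k in range(6, T):
        pred = model.predict(surrogates[k:k + 1])[0, 0]
        err = abs(pred - tumor[k, 0])
        errors.append(err)
        refit = (update_policy == "always" or
                 (update_policy == "error" and err >= threshold_mm))
        if refit:
            model = PLSRegression(n_components=2).fit(surrogates[:k + 1],
                                                      tumor[:k + 1])
            updates += 1
    return np.mean(errors), updates

for policy in ("never", "error", "always"):
    mean_err, n_up = run(policy)
    print(f"{policy:6s}: mean error {mean_err:.2f} mm, {n_up} updates")
```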

  3. High sensitivity optical measurement of skin gloss

    PubMed Central

    Ezerskaia, Anna; Ras, Arno; Bloemen, Pascal; Pereira, Silvania F.; Urbach, H. Paul; Varghese, Babu

    2017-01-01

    We demonstrate a low-cost optical method for measuring the gloss properties with improved sensitivity in the low gloss regime, relevant for skin gloss properties. The gloss estimation method is based on, on the one hand, the slope of the intensity gradient in the transition regime between specular and diffuse reflection and on the other on the sum over the intensities of pixels above threshold, derived from a camera image obtained using unpolarized white light illumination. We demonstrate the improved sensitivity of the two proposed methods using Monte Carlo simulations and experiments performed on ISO gloss calibration standards with an optical prototype. The performance and linearity of the method was compared with different professional gloss measurement devices based on the ratio of specular to diffuse intensity. We demonstrate the feasibility for in-vivo skin gloss measurements by quantifying the temporal evolution of skin gloss after application of standard paraffin cream bases on skin. The presented method opens new possibilities in the fields of cosmetology and dermatopharmacology for measuring the skin gloss and resorption kinetics and the pharmacodynamics of various external agents. PMID:29026683

  4. High sensitivity optical measurement of skin gloss.

    PubMed

    Ezerskaia, Anna; Ras, Arno; Bloemen, Pascal; Pereira, Silvania F; Urbach, H Paul; Varghese, Babu

    2017-09-01

    We demonstrate a low-cost optical method for measuring the gloss properties with improved sensitivity in the low gloss regime, relevant for skin gloss properties. The gloss estimation method is based on, on the one hand, the slope of the intensity gradient in the transition regime between specular and diffuse reflection and on the other on the sum over the intensities of pixels above threshold, derived from a camera image obtained using unpolarized white light illumination. We demonstrate the improved sensitivity of the two proposed methods using Monte Carlo simulations and experiments performed on ISO gloss calibration standards with an optical prototype. The performance and linearity of the method was compared with different professional gloss measurement devices based on the ratio of specular to diffuse intensity. We demonstrate the feasibility for in-vivo skin gloss measurements by quantifying the temporal evolution of skin gloss after application of standard paraffin cream bases on skin. The presented method opens new possibilities in the fields of cosmetology and dermatopharmacology for measuring the skin gloss and resorption kinetics and the pharmacodynamics of various external agents.

  5. The Uncertainty of Mass Discharge Measurements Using Pumping Methods Under Simplified Conditions

    EPA Science Inventory

    Mass discharge measurements at contaminated sites have been used to assist with site management decisions, and can be divided into two broad categories: point-scale measurement techniques and pumping methods. Pumping methods can be sub-divided based on the pumping procedures use...

  6. A new leakage measurement method for damaged seal material

    NASA Astrophysics Data System (ADS)

    Wang, Shen; Yao, Xue Feng; Yang, Heng; Yuan, Li; Dong, Yi Feng

    2018-07-01

    In this paper, a new leakage measurement method based on the temperature field and temperature gradient field is proposed for detecting the leakage location and measuring the leakage rate in damaged seal material. First, a heat transfer leakage model is established, which can calculate the leakage rate based on the temperature gradient field near the damaged zone. Second, a finite element model of an infinite plate with a damaged zone is built to calculate the leakage rate, which fits the simulated leakage rate well. Finally, specimens in a tubular rubber seal with different damage shapes are used to conduct the leakage experiment, validating the correctness of this new measurement principle for the leakage rate and the leakage position. The results indicate the feasibility of the leakage measurement method for damaged seal material based on the temperature gradient field from infrared thermography.

  7. A New Proposal to Redefine Kilogram by Measuring the Planck Constant Based on Inertial Mass

    NASA Astrophysics Data System (ADS)

    Liu, Yongmeng; Wang, Dawei

    2018-04-01

    A novel method to measure the Planck constant based on inertial mass is proposed here, as distinguished from the conventional Kibble balance experiment, which is based on gravitational mass. The kilogram unit is linked to the Planck constant by calculating the differences of the parameters, i.e. resistance, voltage, velocity and time, measured in a two-mode experiment comprising an unloaded-mass mode and a loaded-mass mode. In principle, all parameters measured in this experiment can reach a high accuracy, as in the Kibble balance experiment. This method has the advantage that some systematic errors are eliminated in the difference calculation of the measurements. In addition, the method is insensitive to air buoyancy, and the alignment work in the experiment is easy. Finally, the initial design of the apparatus is presented.

  8. What gets measured gets managed: A new method of measuring household food waste.

    PubMed

    Elimelech, Efrat; Ayalon, Ofira; Ert, Eyal

    2018-03-22

    The quantification of household food waste is an essential part of setting policies and waste reduction goals, but it is very difficult to estimate. Current methods include either direct measurements (physical waste surveys) or measurements based on self-reports (diaries, interviews, and questionnaires). The main limitation of the first method is that it cannot always trace the waste source, i.e., an individual household, whereas the second method lacks objectivity. This article presents a new measurement method that offers a solution to these challenges by measuring daily produced food waste at the household level. This method is based on four main principles: (1) capturing waste as it enters the stream, (2) collecting waste samples at the doorstep, (3) using the individual household as the sampling unit, and (4) collecting and sorting waste daily. We tested the feasibility of the new method with an empirical study of 192 households, measuring the actual amounts of food waste from households as well as its composition. Household food waste accounted for 45% of total waste (573 g/day per capita), of which 54% was identified as avoidable. Approximately two thirds of avoidable waste consisted of vegetables and fruit. These results are similar to previous findings from waste surveys, yet the new method showed a higher level of accuracy. The feasibility test suggests that the proposed method provides a practical tool for policy makers for setting policy based on reliable empirical data and monitoring the effectiveness of different policies over time. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. A Network Method of Measuring Affiliation-Based Peer Influence: Assessing the Influences of Teammates' Smoking on Adolescent Smoking

    ERIC Educational Resources Information Center

    Fujimoto, Kayo; Unger, Jennifer B.; Valente, Thomas W.

    2012-01-01

    Using a network analytic framework, this study introduces a new method to measure peer influence based on adolescents' affiliations or 2-mode social network data. Exposure based on affiliations is referred to as the "affiliation exposure model." This study demonstrates the methodology using data on young adolescent smoking being influenced by…

  10. [A method of temperature measurement for hot forging with surface oxide based on infrared spectroscopy].

    PubMed

    Zhang, Yu-cun; Qi, Yan-de; Fu, Xian-bin

    2012-05-01

    Large forgings at high temperature are covered with a thick oxide layer during forging, which leads to large errors in the measured temperature data. In this paper, a temperature measurement method based on infrared spectroscopy is presented that can effectively eliminate the influence of the surface oxide on the temperature measurement. The method measures the surface temperature and emissivity of the oxide directly from the infrared spectrum radiated by the oxide layer on the forging, and then derives the real temperature of the hot forging covered with the oxide using the heat exchange equation. In order to suppress interference spectra contained in the received infrared radiation, a three-interference-filter system was proposed and a set of optimal gap parameter values was obtained by spectral simulation, which improved the precision of the temperature measurement. The experimental results show that the method can accurately measure the surface temperature of high-temperature forgings covered with oxide, meets the measurement accuracy requirements, and is feasible.

  11. Three-dimensional sensing methodology combining stereo vision and phase-measuring profilometry based on dynamic programming

    NASA Astrophysics Data System (ADS)

    Lee, Hyunki; Kim, Min Young; Moon, Jeon Il

    2017-12-01

    Phase measuring profilometry (PMP) and moiré methods have been widely applied to the three-dimensional shape measurement of target objects because of their high measuring speed and accuracy. However, these methods suffer from an inherent limitation known as the correspondence, or 2π-ambiguity, problem. Although sensing methods that combine well-known stereo vision with the PMP technique have been developed to overcome this problem, they still require definite improvement in sensing speed and measurement accuracy. We propose a dynamic-programming-based stereo PMP method to acquire more reliable depth information in a relatively short time. The proposed method efficiently fuses information from the two stereo sensors in terms of phase and intensity simultaneously, based on a newly defined cost function for dynamic programming. In addition, the important parameters are analyzed from the viewpoint of the 2π-ambiguity problem and measurement accuracy. To analyze the influence of important hardware and software parameters on the measurement performance and to verify the method's efficiency, accuracy and sensing speed, a series of experimental tests was performed with various objects and sensor configurations.

  12. Gain determination of optical active doped planar waveguides

    NASA Astrophysics Data System (ADS)

    Šmejcký, J.; Jeřábek, V.; Nekvindová, P.

    2017-12-01

    This paper summarizes the results of gain transmission characteristic measurements carried out on new Ag+-Na+ ion-exchanged Er3+- and Yb3+-doped active planar optical waveguides realized on silica-based glass substrates. The results were used to optimize the precursor concentration in the glass substrates. The gain measurements were performed by a time-domain method using a pulse generator, as well as by a broadband wavelength-domain method using a supercontinuum optical source. Both methods were compared and the results were graphically processed. It was confirmed that the pulse method is useful, as it provides a very accurate measurement of the gain versus pumping power characteristic at a single wavelength. For the spectral characteristics, our measurement exactly determined the wavelength bandwidth of maximum gain of the active waveguide, and the spectral characteristics of the pumped and unpumped waveguides were compared. The gain parameters of the reported silica-based glasses are comparable with those of the phosphate-based glasses typically used for active optical devices.

  13. Maintaining tumor targeting accuracy in real-time motion compensation systems for respiration-induced tumor motion.

    PubMed

    Malinowski, Kathleen; McAvoy, Thomas J; George, Rohini; Dieterich, Sonja; D'Souza, Warren D

    2013-07-01

    To determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥ 3 mm), and always (approximately once per minute). Radial tumor displacement prediction errors (mean ± standard deviation) for the four schema described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was utilized. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization.

  14. An Orientation Measurement Method Based on Hall-effect Sensors for Permanent Magnet Spherical Actuators with 3D Magnet Array

    PubMed Central

    Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming

    2014-01-01

    An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method could effectively avoid friction torque and additional inertial moment existing in conventional approaches. Curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. The comparison with conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation and the value of magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy could be improved by the novel 3D magnet array. The study result could be used for real-time motion control of PM spherical actuators. PMID:25342000

  15. Error simulation of paired-comparison-based scaling methods

    NASA Astrophysics Data System (ADS)

    Cui, Chengwu

    2000-12-01

    Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods, and without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors in scaled values derived from paired-comparison-based scaling methods are simulated with a randomly introduced proportion of choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented as the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can have large errors in the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors in actual scaled values of color image prints as measured by the method of paired comparison.
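
    A sketch of the kind of Monte-Carlo experiment described, here using Thurstone Case V scaling with binomially distributed choices. The true scale values, number of judgements and number of runs are arbitrary, and the paper's empirical fitting step is omitted.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def simulate_scaling(true_scale, n_judgements, n_runs=200):
    """Simulate paired comparisons with binomial choice counts, recover the
    scale by the Thurstone Case V z-score method, and report the average
    standard deviation of the recovered scale values across runs."""
    n = len(true_scale)
    recovered = np.empty((n_runs, n))
    for r in range(n_runs):
        # probability that stimulus i is preferred over j under the model
        p_true = norm.cdf(true_scale[:, None] - true_scale[None, :])
        wins = rng.binomial(n_judgements, p_true)
        p_obs = np.clip(wins / n_judgements, 0.01, 0.99)   # avoid +/- inf
        z = norm.ppf(p_obs)
        scale = z.mean(axis=1)                             # row means
        recovered[r] = scale - scale.mean()                # zero-centre
    return recovered.std(axis=0).mean()

true_scale = np.array([0.0, 0.3, 0.8, 1.5, 2.0])
for m in (10, 30, 100):
    print(f"{m:3d} judgements per pair -> "
          f"avg std of scaled values {simulate_scaling(true_scale, m):.3f}")
```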

  16. A flexible new method for 3D measurement based on multi-view image sequences

    NASA Astrophysics Data System (ADS)

    Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu

    2016-11-01

    Three-dimensional measurement is a basic part of reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry theory. First, feature points are detected and matched with an improved SIFT algorithm in which the Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, making the matching robust to weakly textured images. Then a three-principle filter for the calculation of the essential matrix is designed, and the essential matrix is computed using an improved a contrario RANSAC method. A single-view point cloud is constructed accurately from two view images; after this, the overlapping features are used to eliminate the accumulated errors caused by adding view images, which improves the precision of the camera positions. Finally, the method is verified in a dental restoration CAD/CAM application; experimental results show that the proposed method is fast, accurate and flexible for 3D tooth measurement.
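
    The Hellinger-kernel matching step mentioned above amounts to L1-normalizing the descriptor histograms and taking element-wise square roots before comparing them (the "RootSIFT" trick). A minimal sketch with random stand-in descriptors follows; it is not the authors' full pipeline.

```python
import numpy as np

def hellinger_normalize(desc):
    """Map descriptors so that their Euclidean inner product equals the
    Hellinger kernel: L1-normalize, then take element-wise square roots."""
    desc = np.asarray(desc, dtype=float)
    desc = desc / (np.abs(desc).sum(axis=1, keepdims=True) + 1e-12)
    return np.sqrt(desc)

def match(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test on the
    Hellinger-normalized descriptors."""
    a = hellinger_normalize(desc_a)
    b = hellinger_normalize(desc_b)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    matches = []
    for i, row in enumerate(d):
        j1, j2 = np.argsort(row)[:2]
        if row[j1] < ratio * row[j2]:
            matches.append((i, j1))
    return matches

rng = np.random.default_rng(5)
desc_a = rng.random((50, 128))          # stand-ins for 128-D SIFT descriptors
desc_b = np.vstack([desc_a[:30] + rng.normal(0, 0.01, (30, 128)),
                    rng.random((40, 128))])
print(len(match(desc_a, desc_b)), "tentative matches")
```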

  17. Development of a commercially viable piezoelectric force sensor system for static force measurement

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Luo, Xinwei; Liu, Jingcheng; Li, Min; Qin, Lan

    2017-09-01

    A compensation method for measuring static force with a commercial piezoelectric force sensor is proposed to disprove the theory that piezoelectric sensors and generators can only operate under dynamic force. After studying the model of the piezoelectric force sensor measurement system, the principle of static force measurement using a piezoelectric material or piezoelectric force sensor is analyzed. Then, the distribution law of the decay time constant of the measurement system and the variation law of the measurement system's output are studied, and a compensation method based on the time interval threshold Δt and the attenuation threshold Δu_th is proposed. By calibrating the system and considering the influences of the environment and the hardware, a suitable Δu_th value is determined, and the system's output attenuation is compensated based on this value to realize the measurement. Finally, a static force measurement system with a piezoelectric force sensor is developed based on the compensation method. The experimental results confirm the successful development of a simple compensation method for static force measurement with a commercial piezoelectric force sensor. In addition, it is established that, contrary to the current perception, a piezoelectric force sensor system can be used to measure static force through further calibration.

  18. Curriculum-Based Measurement of Oral Reading: Evaluation of Growth Estimates Derived with Pre-Post Assessment Methods

    ERIC Educational Resources Information Center

    Christ, Theodore J.; Monaghen, Barbara D.; Zopluoglu, Cengiz; Van Norman, Ethan R.

    2013-01-01

    Curriculum-based measurement of oral reading (CBM-R) is used to index the level and rate of student growth across the academic year. The method is frequently used to set student goals and monitor student progress. This study examined the diagnostic accuracy and quality of growth estimates derived from pre-post measurement using CBM-R data. A…

  19. Blood oxygenation and flow measurements using a single 720-nm tunable V-cavity laser.

    PubMed

    Feng, Yafei; Deng, Haoyu; Chen, Xin; He, Jian-Jun

    2017-08-01

    We propose and demonstrate a single-laser-based sensing method for measuring both blood oxygenation and microvascular blood flow. Based on the optimal wavelength range found from theoretical analysis on differential absorption based blood oxygenation measurement, we designed and fabricated a 720-nm-band wavelength tunable V-cavity laser. Without any grating or bandgap engineering, the laser has a wavelength tuning range of 14.1 nm. By using the laser emitting at 710.3 nm and 724.4 nm to measure the oxygenation and blood flow, we experimentally demonstrate the proposed method.

  20. Investigating Measurement Invariance in Computer-Based Personality Testing: The Impact of Using Anchor Items on Effect Size Indices

    ERIC Educational Resources Information Center

    Egberink, Iris J. L.; Meijer, Rob R.; Tendeiro, Jorge N.

    2015-01-01

    A popular method to assess measurement invariance of a particular item is based on likelihood ratio tests with all other items as anchor items. The results of this method are often only reported in terms of statistical significance, and researchers proposed different methods to empirically select anchor items. It is unclear, however, how many…

  1. [Welding arc temperature field measurements based on Boltzmann spectrometry].

    PubMed

    Si, Hong; Hua, Xue-Ming; Zhang, Wang; Li, Fang; Xiao, Xiao

    2012-09-01

    Arc plasma is a non-uniform plasma with complicated energy and mass transport processes in its interior, so measuring the plasma temperature is of great significance. Compared with the absolute spectral line intensity method and the standard temperature method, the Boltzmann plot method is more accurate and convenient. Based on Boltzmann theory, the present paper calculates the temperature distribution of the plasma and analyzes the principle of line selection by scanning the space of the TIG arc in real time.
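
    For reference, the Boltzmann-plot estimate of excitation temperature is a straight-line fit of ln(Iλ/gA) against the upper-level energy, whose slope equals -1/(k_B T). The sketch below demonstrates the calculation; the line data are illustrative values, not measurements from the paper.

```python
import numpy as np

K_B_EV = 8.617333e-5   # Boltzmann constant, eV/K

def boltzmann_temperature(intensity, wavelength_nm, g_upper, A, E_upper_eV):
    """Fit ln(I*lambda / (g*A)) vs upper-level energy; the slope is
    -1/(k_B*T) for a plasma in (partial) local thermodynamic equilibrium."""
    y = np.log(intensity * wavelength_nm / (g_upper * A))
    slope, _ = np.polyfit(E_upper_eV, y, 1)
    return -1.0 / (K_B_EV * slope)

# Illustrative data for a few Ar I lines (approximate literature values)
wavelength = np.array([696.5, 706.7, 738.4, 750.4])   # nm
g_upper = np.array([3, 5, 5, 1])
A = np.array([6.4e6, 3.8e6, 8.5e6, 4.5e7])            # s^-1
E_upper = np.array([13.33, 13.30, 13.30, 13.48])      # eV

T_true = 11000.0                                      # K, assumed
intensity = g_upper * A / wavelength * np.exp(-E_upper / (K_B_EV * T_true))
T_fit = boltzmann_temperature(intensity, wavelength, g_upper, A, E_upper)
print(f"Recovered temperature: {T_fit:.0f} K")
```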

  2. Ultrasound semi-automated measurement of fetal nuchal translucency thickness based on principal direction estimation

    NASA Astrophysics Data System (ADS)

    Yoon, Heechul; Lee, Hyuntaek; Jung, Haekyung; Lee, Mi-Young; Won, Hye-Sung

    2015-03-01

    The objective of this paper is to introduce a novel method for nuchal translucency (NT) boundary detection and thickness measurement; NT is one of the most significant markers in the early screening of chromosomal defects, namely Down syndrome. To improve the reliability and reproducibility of NT measurements, several automated methods have been introduced; however, their performance degrades when the NT borders are tilted due to varying fetal movements. Therefore, we propose an NT measurement method based on principal direction estimation that provides reliable and consistent performance regardless of fetal position and NT direction. First, a Radon transform and a cost function are used to estimate the principal direction of the NT borders. Then, in the estimated angle bin, i.e., the main direction of the NT, gradient-based features are employed to find initial NT lines, which serve as the starting points of an active contour fitting method that finds the real NT borders. Finally, the maximum thickness is measured from the distances between the upper and lower NT borders by searching along lines orthogonal to the main NT direction. To evaluate the performance, 89 in vivo fetal images were collected and a ground-truth database was measured by clinical experts. Quantitative results using intraclass correlation coefficients and difference analysis verify that the proposed method can improve the reliability and reproducibility of maximum NT thickness measurement.

  3. Measurement of the Microwave Refractive Index of Materials Based on Parallel Plate Waveguides

    NASA Astrophysics Data System (ADS)

    Zhao, F.; Pei, J.; Kan, J. S.; Zhao, Q.

    2017-12-01

    An electric field scanning apparatus based on the parallel plate waveguide method is constructed, which collects the amplitude and phase matrices as a function of the relative position. On the basis of these data, a method for calculating the refractive index of the measured wedge samples is proposed in this paper. The measurement and calculation results for different PTFE samples reveal that the refractive index measured by the apparatus is substantially consistent with the refractive index inferred from the permittivity of the samples. The proposed refractive index calculation method is competitive for characterizing the refractive index of materials with a positive refractive index. Since the apparatus and method can measure and calculate the microwave propagation in arbitrary directions, it is believed that both can also be applied to negative-refractive-index materials, such as metamaterials or "left-handed" materials.

  4. Measurement System Analyses - Gauge Repeatability and Reproducibility Methods

    NASA Astrophysics Data System (ADS)

    Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej

    2018-02-01

    The submitted article focuses on a detailed explanation of the average and range method (Automotive Industry Action Group, Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility method (Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods and their results were compared on the basis of numerical evaluation. Both methods were additionally compared and their advantages and disadvantages were discussed. One difference between both methods is the calculation of variation components. The AIAG method calculates the variation components based on standard deviation (then a sum of variation components does not give 100 %) and the honest GRR study calculates the variation components based on variance, where the sum of all variation components (part to part variation, EV & AV) gives the total variation of 100 %. Acceptance of both methods among the professional society, future use, and acceptance by manufacturing industry were also discussed. Nowadays, the AIAG is the leading method in the industry.

  5. Satellite-based estimation of cloud-base updrafts for convective clouds and stratocumulus

    NASA Astrophysics Data System (ADS)

    Zheng, Y.; Rosenfeld, D.; Li, Z.

    2017-12-01

    Updraft speeds of thermals have always been notoriously difficult to measure, despite the significant roles they play in transporting pollutants and in cloud formation and precipitation. To our knowledge, no attempt to date has been made to estimate updraft speed from satellite information. In this study, we introduce three methods of retrieving updraft speeds at cloud base (Wb) for convective clouds and marine stratocumulus with VIIRS on board the Suomi-NPP satellite. The first method uses the ground-air temperature difference to characterize the surface sensible heat flux, which is found to be correlated with updraft speeds measured by the Doppler lidar over the Southern Great Plains (SGP); based on this relationship, we use the satellite-retrieved surface skin temperature and reanalysis surface air temperature to estimate the updrafts. The second method is based on a good linear correlation between cloud base height and updrafts, which was found over the SGP, the central Amazon, and on board a ship sailing between Honolulu and Los Angeles; we found a universal relationship for both land and ocean. The third method is for marine stratocumulus: a statistically significant relationship between Wb and the cloud-top radiative cooling rate (CTRC) is found from measurements over the northeastern Pacific and Atlantic, and based on this relation, satellite- and reanalysis-derived CTRC is utilized to infer the Wb of stratocumulus clouds. Evaluations against ground-based Doppler lidar measurements show estimation errors of 24%, 21% and 22% for the three methods, respectively.

  6. DNA base-calling from a nanopore using a Viterbi algorithm.

    PubMed

    Timp, Winston; Comer, Jeffrey; Aksimentiev, Aleksei

    2012-05-16

    Nanopore-based DNA sequencing is the most promising third-generation sequencing method. It has superior read length, speed, and sample requirements compared with state-of-the-art second-generation methods. However, base-calling still presents substantial difficulty because the resolution of the technique is limited compared with the measured signal/noise ratio. Here we demonstrate a method to decode 3-bp-resolution nanopore electrical measurements into a DNA sequence using a Hidden Markov model. This method shows tremendous potential for accuracy (~98%), even with a poor signal/noise ratio. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
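
    The decoding step described above, mapping a quantized current trace to the most likely base sequence through an HMM, is classically done with the Viterbi algorithm. The sketch below is a compact, generic Viterbi with made-up single-base states and emission/transition tables; it is not the authors' trained 3-bp model, which would use k-mer states.

```python
import numpy as np

def viterbi(observations, states, log_start, log_trans, log_emit):
    """Most likely hidden-state path for a sequence of observation symbols.
    log_start[s], log_trans[s, s'] and log_emit[s, o] are log-probabilities."""
    n, m = len(observations), len(states)
    v = np.full((n, m), -np.inf)
    ptr = np.zeros((n, m), dtype=int)
    v[0] = log_start + log_emit[:, observations[0]]
    for t in range(1, n):
        for s in range(m):
            cand = v[t - 1] + log_trans[:, s]
            ptr[t, s] = np.argmax(cand)
            v[t, s] = cand[ptr[t, s]] + log_emit[s, observations[t]]
    path = [int(np.argmax(v[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(int(ptr[t, path[-1]]))
    return [states[s] for s in reversed(path)]

# Toy model: hidden states are single bases, observations are 3 quantized
# current levels (a real nanopore HMM would use k-mer states).
states = ["A", "C", "G", "T"]
log_start = np.log(np.full(4, 0.25))
log_trans = np.log(np.full((4, 4), 0.25))
log_emit = np.log(np.array([[0.7, 0.2, 0.1],     # A -> mostly low current
                            [0.2, 0.6, 0.2],     # C -> mid
                            [0.1, 0.2, 0.7],     # G -> high
                            [0.3, 0.4, 0.3]]))   # T -> ambiguous
observations = [0, 0, 2, 1, 2, 0]
print("".join(viterbi(observations, states, log_start, log_trans, log_emit)))
```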

  7. Three dimensional measurement of minimum joint space width in the knee from stereo radiographs using statistical shape models.

    PubMed

    van IJsseldijk, E A; Valstar, E R; Stoel, B C; Nelissen, R G H H; Baka, N; Van't Klooster, R; Kaptein, B L

    2016-08-01

    An important measure for the diagnosis and monitoring of knee osteoarthritis is the minimum joint space width (mJSW). This requires accurate alignment of the x-ray beam with the tibial plateau, which may not be accomplished in practice. We investigate the feasibility of a new mJSW measurement method from stereo radiographs using 3D statistical shape models (SSM) and evaluate its sensitivity to changes in the mJSW and its robustness to variations in patient positioning and bone geometry. A validation study was performed using five cadaver specimens. The actual mJSW was varied and images were acquired with variation in the cadaver positioning. For comparison purposes, the mJSW was also assessed from plain radiographs. To study the influence of SSM model accuracy, the 3D mJSW measurement was repeated with models from the actual bones, obtained from CT scans. The SSM-based measurement method was more robust (consistent output for a wide range of input data/consistent output under varying measurement circumstances) than the conventional 2D method, showing that the 3D reconstruction indeed reduces the influence of patient positioning. However, the SSM-based method showed comparable sensitivity to changes in the mJSW with respect to the conventional method. The CT-based measurement was more accurate than the SSM-based measurement (smallest detectable differences 0.55 mm versus 0. 82 mm, respectively). The proposed measurement method is not a substitute for the conventional 2D measurement due to limitations in the SSM model accuracy. However, further improvement of the model accuracy and optimisation technique can be obtained. Combined with the promising options for applications using quantitative information on bone morphology, SSM based 3D reconstructions of natural knees are attractive for further development.Cite this article: E. A. van IJsseldijk, E. R. Valstar, B. C. Stoel, R. G. H. H. Nelissen, N. Baka, R. van't Klooster, B. L. Kaptein. Three dimensional measurement of minimum joint space width in the knee from stereo radiographs using statistical shape models. Bone Joint Res 2016;320-327. DOI: 10.1302/2046-3758.58.2000626. © 2016 van IJsseldijk et al.

  8. Measurement of susceptibility artifacts with histogram-based reference value on magnetic resonance images according to standard ASTM F2119.

    PubMed

    Heinrich, Andreas; Teichgräber, Ulf K; Güttler, Felix V

    2015-12-01

    The standard ASTM F2119 describes a test method for measuring the size of a susceptibility artifact based on the example of a passive implant. A pixel in an image is considered to be a part of an image artifact if the intensity is changed by at least 30% in the presence of a test object, compared to a reference image in which the test object is absent (reference value). The aim of this paper is to simplify and accelerate the test method using a histogram-based reference value. Four test objects were scanned parallel and perpendicular to the main magnetic field, and the largest susceptibility artifacts were measured using two methods of reference value determination (reference image-based and histogram-based reference value). The results between both methods were compared using the Mann-Whitney U-test. The difference between both reference values was 42.35 ± 23.66. The difference of artifact size was 0.64 ± 0.69 mm. The artifact sizes of both methods did not show significant differences; the p-value of the Mann-Whitney U-test was between 0.710 and 0.521. A standard-conform method for a rapid, objective, and reproducible evaluation of susceptibility artifacts could be implemented. The result of the histogram-based method does not significantly differ from the ASTM-conform method.
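
    The core ASTM F2119 criterion is a per-pixel 30% intensity change relative to a reference value. The sketch below applies that test with both a reference-image value and a simple histogram-derived stand-in (the mode of the test image); the stand-in only illustrates the idea and is not the paper's exact estimator, and the phantom data are synthetic.

```python
import numpy as np

def artifact_mask(image, reference_value, threshold=0.30):
    """ASTM F2119-style criterion: a pixel belongs to the artifact if its
    intensity differs from the reference value by at least 30%."""
    diff = np.abs(image.astype(float) - reference_value)
    return diff >= threshold * reference_value

rng = np.random.default_rng(6)
background = rng.normal(400, 20, (128, 128))      # homogeneous phantom signal
image = background.copy()
image[50:70, 50:90] *= 0.2                        # simulated signal void (artifact)

# Reference value from a reference image without the test object ...
ref_from_image = background.mean()
# ... or a histogram-based stand-in from the test image itself
# (most frequent intensity, which the small artifact barely shifts).
hist, edges = np.histogram(image, bins=256)
ref_from_hist = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])

for name, ref in (("reference image", ref_from_image),
                  ("histogram-based", ref_from_hist)):
    n = artifact_mask(image, ref).sum()
    print(f"{name:15s}: reference {ref:6.1f}, {n} artifact pixels")
```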

  9. A rough set-based measurement model study on high-speed railway safety operation.

    PubMed

    Hu, Qizhou; Tan, Minjia; Lu, Huapu; Zhu, Yun

    2018-01-01

    To address the safety problems of high-speed railway operation and management, a new method urgently needs to be constructed on the basis of rough set theory and uncertainty measurement theory. The method should carefully consider every factor of high-speed railway operation that contributes to the measurement indexes of its safe operation. After analyzing the factors that influence high-speed railway operation safety in detail, a rough measurement model is constructed to describe the operation process. Based on the above considerations, this paper regroups the safety influence factors of high-speed railway operation into 16 measurement indexes covering staff, vehicle, equipment and environment, and provides a reasonable and effective theoretical method for solving the multiple-attribute measurement problems of high-speed railway operation safety. Analyzing the operation data of 10 pivotal railway lines in China, this paper uses the rough set-based measurement model and a value function model (a model for calculating the safety value) to calculate the operation safety value. The calculation results show that the safety value curve obtained with the proposed method has smaller error and greater stability than that of the value function method, which verifies the method's feasibility and effectiveness.

  10. A Study for Texture Feature Extraction of High-Resolution Satellite Images Based on a Direction Measure and Gray Level Co-Occurrence Matrix Fusion Algorithm

    PubMed Central

    Zhang, Xin; Cui, Jintian; Wang, Weisheng; Lin, Chao

    2017-01-01

    To address the problem of image texture feature extraction, a direction measure statistic that is based on the directionality of image texture is constructed, and a new method of texture feature extraction, which is based on the direction measure and a gray level co-occurrence matrix (GLCM) fusion algorithm, is proposed in this paper. This method applies the GLCM to extract the texture feature value of an image and integrates the weight factor that is introduced by the direction measure to obtain the final texture feature of an image. A set of classification experiments for the high-resolution remote sensing images were performed by using support vector machine (SVM) classifier with the direction measure and gray level co-occurrence matrix fusion algorithm. Both qualitative and quantitative approaches were applied to assess the classification results. The experimental results demonstrated that texture feature extraction based on the fusion algorithm achieved a better image recognition, and the accuracy of classification based on this method has been significantly improved. PMID:28640181

  11. Uncertainty estimates of purity measurements based on current information: toward a "live validation" of purity methods.

    PubMed

    Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech

    2012-12-01

    The aim was to predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We conducted a comprehensive survey of purity methods and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable for dynamically assessing method performance characteristics based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results, utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitate the introduction of more advanced analytical technologies during the method lifecycle.

  12. A simple method to determine evaporation and compensate for liquid losses in small-scale cell culture systems.

    PubMed

    Wiegmann, Vincent; Martinez, Cristina Bernal; Baganz, Frank

    2018-04-24

    Establish a method to indirectly measure evaporation in microwell-based cell culture systems and show that the proposed method allows compensating for liquid losses in fed-batch processes. A correlation between evaporation and the concentration of Na+ was found (R2 = 0.95) when using the 24-well-based miniature bioreactor system (micro-Matrix) for a batch culture with GS-CHO. Based on these results, a method was developed to counteract evaporation with periodic water additions based on measurements of the Na+ concentration. Implementation of this method resulted in a reduction of the relative liquid loss after 15 days of a fed-batch cultivation from 36.7 ± 6.7% without volume corrections to 6.9 ± 6.5% with volume corrections. A procedure was thus established to indirectly measure evaporation through a correlation with the level of Na+ ions in solution and to derive a simple formula to account for liquid losses.
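
    The correction logic implied by the abstract is a simple mass balance: if Na+ is neither consumed nor removed, a rise in its concentration reflects the fraction of liquid lost, and the water addition restores the nominal volume. The sketch below shows that balance with illustrative numbers, not the study's data; in a fed-batch, any Na+ added with the feed would also have to enter the balance.

```python
def water_to_add(v_nominal_ml, na_initial_mM, na_measured_mM):
    """Estimate the current working volume from the Na+ ratio (mass balance:
    Na_initial * V_nominal = Na_measured * V_current) and return the volume
    of water needed to restore the nominal volume."""
    v_current = v_nominal_ml * na_initial_mM / na_measured_mM
    return max(0.0, v_nominal_ml - v_current)

# Illustrative example: a 5 mL well in which Na+ rose from 40 mM to 46 mM
addition = water_to_add(5.0, 40.0, 46.0)
print(f"Estimated evaporation: {addition:.2f} mL -> add {addition:.2f} mL water")
```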

  13. Reproducibility measurements of three methods for calculating in vivo MR-based knee kinematics.

    PubMed

    Lansdown, Drew A; Zaid, Musa; Pedoia, Valentina; Subburaj, Karupppasamy; Souza, Richard; Benjamin, C; Li, Xiaojuan

    2015-08-01

    To describe three quantification methods for magnetic resonance imaging (MRI)-based knee kinematic evaluation and to report on the reproducibility of these algorithms. T2 -weighted, fast-spin echo images were obtained of the bilateral knees in six healthy volunteers. Scans were repeated for each knee after repositioning to evaluate protocol reproducibility. Semiautomatic segmentation defined regions of interest for the tibia and femur. The posterior femoral condyles and diaphyseal axes were defined using the previously defined tibia and femur. All segmentation was performed twice to evaluate segmentation reliability. Anterior tibial translation (ATT) and internal tibial rotation (ITR) were calculated using three methods: a tibial-based registration system, a combined tibiofemoral-based registration method with all manual segmentation, and a combined tibiofemoral-based registration method with automatic definition of condyles and axes. Intraclass correlation coefficients and standard deviations across multiple measures were determined. Reproducibility of segmentation was excellent (ATT = 0.98; ITR = 0.99) for both combined methods. ATT and ITR measurements were also reproducible across multiple scans in the combined registration measurements with manual (ATT = 0.94; ITR = 0.94) or automatic (ATT = 0.95; ITR = 0.94) condyles and axes. The combined tibiofemoral registration with automatic definition of the posterior femoral condyle and diaphyseal axes allows for improved knee kinematics quantification with excellent in vivo reproducibility. © 2014 Wiley Periodicals, Inc.

  14. Pulse retrieval algorithm for interferometric frequency-resolved optical gating based on differential evolution.

    PubMed

    Hyyti, Janne; Escoto, Esmerando; Steinmeyer, Günter

    2017-10-01

    A novel algorithm for the ultrashort laser pulse characterization method of interferometric frequency-resolved optical gating (iFROG) is presented. Based on a genetic method, namely, differential evolution, the algorithm can exploit all available information of an iFROG measurement to retrieve the complex electric field of a pulse. The retrieval is subjected to a series of numerical tests to prove the robustness of the algorithm against experimental artifacts and noise. These tests show that the integrated error-correction mechanisms of the iFROG method can be successfully used to remove the effect from timing errors and spectrally varying efficiency in the detection. Moreover, the accuracy and noise resilience of the new algorithm are shown to outperform retrieval based on the generalized projections algorithm, which is widely used as the standard method in FROG retrieval. The differential evolution algorithm is further validated with experimental data, measured with unamplified three-cycle pulses from a mode-locked Ti:sapphire laser. Additionally introducing group delay dispersion in the beam path, the retrieval results show excellent agreement with independent measurements with a commercial pulse measurement device based on spectral phase interferometry for direct electric-field retrieval. Further experimental tests with strongly attenuated pulses indicate resilience of differential-evolution-based retrieval against massive measurement noise.
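
    As a much-simplified illustration of evolutionary pulse retrieval (not the iFROG algorithm of the paper), the sketch below uses scipy's differential_evolution to recover the duration and linear chirp of a Gaussian pulse from a spectrally integrated interferometric autocorrelation generated with the same forward model; the carrier frequency, grids and bounds are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

dt = 0.2                                   # time step, fs
t = np.arange(-150.0, 150.0, dt)
delay_idx = np.arange(-250, 251, 2)        # delays from -100 to 100 fs in 0.4 fs steps
w0 = 2.4                                   # assumed carrier frequency, rad/fs

def iac_trace(duration, chirp):
    """Spectrally integrated interferometric autocorrelation of a chirped
    Gaussian pulse; delays are realized as circular sample shifts (the pulse
    is negligible at the window edges, so wrap-around is harmless)."""
    E = np.exp(-t**2 / (2 * duration**2) + 1j * (w0 * t + chirp * t**2))
    trace = np.array([np.sum(np.abs((E + np.roll(E, s)) ** 2) ** 2)
                      for s in delay_idx])
    return trace / trace.max()

measured = iac_trace(12.0, 0.004)          # synthetic "measurement"

def cost(params):
    return np.sum((iac_trace(*params) - measured) ** 2)

result = differential_evolution(cost, bounds=[(5.0, 30.0), (0.0, 0.01)],
                                seed=7, maxiter=60)
print("retrieved duration = %.2f fs, chirp = %.4f rad/fs^2" % tuple(result.x))
```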

  15. [Electormagnetic field of the mobile phone base station: case study].

    PubMed

    Bieńkowski, Paweł; Zubrzak, Bartłomiej; Surma, Robert

    2011-01-01

    The paper presents changes in electromagnetic field (EMF) intensity in a school building and its surroundings after a mobile phone base station was installed on the roof of the school. The EMF intensity measured before the base station was launched (electromagnetic background measurement) is compared with that measured after it started operating (two independent control measurements). Analyses of the measurements are presented, and the authors also propose a method of adjusting the electromagnetic field distribution in the area of the antennas' side lobes, based on regulating the antenna inclination, to reduce the EMF level in the proximity of the base station. On the basis of the measurements, it was found that the EMF intensity increased in the building and its surroundings, but the measured values meet, with wide margins, the requirements of the Polish law on environmental protection.

  16. Analysis of radon and thoron progeny measurements based on air filtration.

    PubMed

    Stajic, J M; Nikezic, D

    2015-02-01

    The measurement of radon and thoron progeny concentrations in air based on air filtration was analysed in order to assess the reliability of the method. Changes of radon and thoron progeny activities on the filter during and after air sampling were investigated, and simulation experiments were performed involving realistic measuring parameters. The sensitivity of the results (radon and thoron concentrations in air) to variations of the alpha counting in three and five intervals was studied. The concentration of (218)Po proved to be the most sensitive to these changes, as expected because of its short half-life. The well-known method for measuring progeny concentrations based on air filtration is rather unreliable, and obtaining unrealistic or incorrect results appears to be quite possible. A simple method for quick estimation of the radon potential alpha energy concentration (PAEC), based on measurements of the alpha activity in a saturation regime, was proposed. The thoron PAEC can be determined from the saturation activity on the filter, through beta or alpha measurements. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  17. Absolute Radiometric Calibration of Narrow-Swath Imaging Sensors with Reference to Non-Coincident Wide-Swath Sensors

    NASA Technical Reports Server (NTRS)

    McCorkel, Joel; Thome, Kurtis; Lockwood, Ronald

    2012-01-01

    An inter-calibration method is developed to provide absolute radiometric calibration of narrow-swath imaging sensors with reference to non-coincident wide-swath sensors. The method predicts at-sensor radiance using non-coincident imagery from the reference sensor and knowledge of the spectral reflectance of the test site. The imagery of the reference sensor is restricted to acquisitions that provide similar view and solar illumination geometry, to reduce uncertainties due to directional reflectance effects. The spectral reflectance of the test site is found with a simple iterative radiative transfer method using radiance values of a well-understood wide-swath sensor and spectral shape information based on historical ground-based measurements. At-sensor radiance is calculated for the narrow-swath sensor using this spectral reflectance and atmospheric parameters that are also based on historical in situ measurements. Results of the inter-calibration method agree at the 2-5 percent level in most spectral regions with the vicarious calibration technique relying on coincident ground-based measurements, referred to as the reflectance-based approach. While the variability of the inter-calibration method based on non-coincident image pairs is significantly larger, the results are consistent with techniques relying on in situ measurements. The method is also insensitive to spectral differences between the sensors because it transfers to surface spectral reflectance prior to prediction of at-sensor radiance. The utility of this inter-calibration method is made clear by its flexibility to utilize image pairings with acquisition dates differing by more than 30 days, allowing frequent absolute calibration comparisons between wide- and narrow-swath sensors.

  18. Thermophysical Properties Measurements of Zr62Cu20Al10Ni8

    NASA Technical Reports Server (NTRS)

    Bradshaw, Richard C.; Waren, Mary; Rogers, Jan R.; Rathz, Thomas J.; Gangopadhyay, Anup K.; Kelton, Ken F.; Hyers, Robert W.

    2006-01-01

    Thermophysical property studies performed at high temperature can prove challenging because of reactivity problems brought on by the elevated temperatures: contaminants from measuring devices and container walls can cause changes in properties. To prevent this, containerless processing techniques can be employed to isolate a sample during study, commonly by levitation. Typical levitation methods used for containerless processing are aerodynamically, electromagnetically and electrostatically based. All levitation methods reduce heterogeneous nucleation sites, which in turn provides access to metastable undercooled phases. In particular, electrostatic levitation is appealing because sample motion and stirring are minimized, and by combining it with optically based non-contact measuring techniques, many thermophysical properties can be measured. Applying some of these techniques, surface tension, viscosity and density have been measured for the glass-forming alloy Zr62Cu20Al10Ni8 and will be presented with a brief overview of the non-contact measuring methods used.

  19. Ultrasound measurement of the brachial artery flow-mediated dilation without ECG gating.

    PubMed

    Gemignani, Vincenzo; Bianchini, Elisabetta; Faita, Francesco; Giannarelli, Chiara; Plantinga, Yvonne; Ghiadoni, Lorenzo; Demi, Marcello

    2008-03-01

    The methods commonly used for noninvasive ultrasound assessment of endothelium-dependent flow-mediated dilation (FMD) require an electrocardiogram (ECG) signal to synchronize the measurements with the cardiac cycle. In this article, we present a method for assessing FMD that does not require ECG gating. The approach is based on temporal filtering of the diameter-time curve, which is obtained by means of a B-mode image processing system. The method was tested on 22 healthy volunteers without cardiovascular risk factors. The measurements obtained with the proposed approach were compared with those obtained with ECG gating and with both systolic and end-diastolic measurements. Results showed good agreement between the methods and a higher precision of the new method due to the fact that it is based on a larger number of measurements. Further advantages were also found both in terms of reliability of the measure and simplification of the instrumentation. (E-mail: gemi@ifc.cnr.it).

  20. A study of active learning methods for named entity recognition in clinical text.

    PubMed

    Chen, Yukun; Lasko, Thomas A; Mei, Qiaozhu; Denny, Joshua C; Xu, Hua

    2015-12-01

    Named entity recognition (NER), a sequential labeling task, is one of the fundamental tasks for building clinical natural language processing (NLP) systems. Machine learning (ML) based approaches can achieve good performance, but they often require large amounts of annotated samples, which are expensive to build due to the requirement of domain experts in annotation. Active learning (AL), a sample selection approach integrated with supervised ML, aims to minimize the annotation cost while maximizing the performance of ML-based models. In this study, our goal was to develop and evaluate both existing and new AL methods for a clinical NER task to identify concepts of medical problems, treatments, and lab tests from the clinical notes. Using the annotated NER corpus from the 2010 i2b2/VA NLP challenge that contained 349 clinical documents with 20,423 unique sentences, we simulated AL experiments using a number of existing and novel algorithms in three different categories including uncertainty-based, diversity-based, and baseline sampling strategies. They were compared with the passive learning that uses random sampling. Learning curves that plot performance of the NER model against the estimated annotation cost (based on number of sentences or words in the training set) were generated to evaluate different active learning and the passive learning methods and the area under the learning curve (ALC) score was computed. Based on the learning curves of F-measure vs. number of sentences, uncertainty sampling algorithms outperformed all other methods in ALC. Most diversity-based methods also performed better than random sampling in ALC. To achieve an F-measure of 0.80, the best method based on uncertainty sampling could save 66% annotations in sentences, as compared to random sampling. For the learning curves of F-measure vs. number of words, uncertainty sampling methods again outperformed all other methods in ALC. To achieve 0.80 in F-measure, in comparison to random sampling, the best uncertainty based method saved 42% annotations in words. But the best diversity based method reduced only 7% annotation effort. In the simulated setting, AL methods, particularly uncertainty-sampling based approaches, seemed to significantly save annotation cost for the clinical NER task. The actual benefit of active learning in clinical NER should be further evaluated in a real-time setting. Copyright © 2015 Elsevier Inc. All rights reserved.
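
    A generic sketch of least-confidence uncertainty sampling for a classifier, standing in for the sequence-labelling NER setting of the study: at each round, the pool examples whose top predicted probability is lowest are sent for "annotation". The dataset, model and batch sizes are placeholders from scikit-learn, not the i2b2/VA corpus or the authors' algorithms.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                           n_classes=3, random_state=8)
pool = list(range(1500))                        # unlabelled pool (labels hidden)
test_X, test_y = X[1500:], y[1500:]

rng = np.random.default_rng(8)
labelled = list(rng.choice(pool, size=20, replace=False))
pool = [i for i in pool if i not in set(labelled)]

model = LogisticRegression(max_iter=1000)
for round_ in range(10):
    model.fit(X[labelled], y[labelled])
    # least-confidence uncertainty: 1 - max predicted class probability
    proba = model.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)
    query = np.argsort(uncertainty)[-20:]       # 20 most uncertain examples
    newly = [pool[i] for i in query]
    labelled.extend(newly)                      # "annotate" them
    pool = [i for i in pool if i not in set(newly)]
    acc = model.score(test_X, test_y)
    print(f"round {round_}: {len(labelled):4d} labelled, test accuracy {acc:.3f}")
```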

  1. Mapping Urban Environmental Noise Using Smartphones.

    PubMed

    Zuo, Jinbo; Xia, Hao; Liu, Shuo; Qiao, Yanyou

    2016-10-13

    Noise mapping is an effective method of visualizing and assessing noise pollution. In this paper, a noise-mapping method based on smartphones is proposed to measure environmental noise effectively and easily. By using this method, a noise map of an entire area can be created from limited measurement data. To achieve measurements of a certain precision, a set of methods was designed to calibrate the smartphones. Measuring noise with mobile phones differs from traditional static observation: the users may be moving at any time. Therefore, attaching an additional microphone with a windscreen is proposed to reduce the wind effect. Covering an entire area with measurements is impossible, however, so an interpolation method is needed to achieve full coverage of the area. To reduce the influence of spatial heterogeneity and improve the precision of noise mapping, a region-based noise-mapping method is proposed in this paper, based on the distribution of noise in different region types tagged by volunteers, which are interpolated and combined to create a noise map. To validate the method, the interpolation results were compared with those of the ordinary Kriging method. The results show that our method is more accurate in reflecting the local distribution of noise and has better interpolation precision. We believe that the proposed noise-mapping method is a feasible and low-cost noise-mapping solution.

  2. Mapping Urban Environmental Noise Using Smartphones

    PubMed Central

    Zuo, Jinbo; Xia, Hao; Liu, Shuo; Qiao, Yanyou

    2016-01-01

    Noise mapping is an effective method of visualizing and assessing noise pollution. In this paper, a noise-mapping method based on smartphones is proposed to measure environmental noise effectively and easily. By using this method, a noise map of an entire area can be created from limited measurement data. To achieve measurements of a certain precision, a set of methods was designed to calibrate the smartphones. Measuring noise with mobile phones differs from traditional static observation: the users may be moving at any time. Therefore, attaching an additional microphone with a windscreen is proposed to reduce the wind effect. Covering an entire area with measurements is impossible, however, so an interpolation method is needed to achieve full coverage of the area. To reduce the influence of spatial heterogeneity and improve the precision of noise mapping, a region-based noise-mapping method is proposed in this paper, based on the distribution of noise in different region types tagged by volunteers, which are interpolated and combined to create a noise map. To validate the method, the interpolation results were compared with those of the ordinary Kriging method. The results show that our method is more accurate in reflecting the local distribution of noise and has better interpolation precision. We believe that the proposed noise-mapping method is a feasible and low-cost noise-mapping solution. PMID:27754359

  3. Measurement of left ventricular torsion using block-matching-based speckle tracking for two-dimensional echocardiography

    NASA Astrophysics Data System (ADS)

    Sun, Feng-Rong; Wang, Xiao-Jing; Wu, Qiang; Yao, Gui-Hua; Zhang, Yun

    2013-01-01

    Left ventricular (LV) torsion is a sensitive and global index of LV systolic and diastolic function, but measuring it noninvasively is challenging. Two-dimensional echocardiography and a block-matching-based speckle tracking method were used to measure LV torsion. The main advantages of the proposed method over previous ones are as follows: (1) The method is automatic, except for manually selecting some endocardium points on the end-diastolic frame in the initialization step. (2) The diamond search strategy is applied, with a spatial smoothness constraint introduced into the sum of absolute differences matching criterion, and the reference frame during the search is determined adaptively. (3) The method is capable of removing abnormal measurement data automatically. The proposed method was validated against Doppler tissue imaging, and preliminary clinical studies are presented to illustrate its clinical value.
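
    The core matching step in (2) can be sketched as an exhaustive sum-of-absolute-differences (SAD) search over a small window; the diamond search strategy, spatial smoothness constraint, and adaptive reference-frame selection of the method above are omitted, and the block and window sizes are arbitrary assumptions.

      import numpy as np

      def sad_block_match(ref_frame, cur_frame, y, x, block=16, search=8):
          """Find the displacement of a block centred at (y, x) in ref_frame by
          minimising the sum of absolute differences (SAD) over a +/- `search`
          pixel window in cur_frame."""
          h = block // 2
          template = ref_frame[y - h:y + h, x - h:x + h].astype(np.float64)
          best_cost, best_dy, best_dx = np.inf, 0, 0
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  cand = cur_frame[y - h + dy:y + h + dy, x - h + dx:x + h + dx].astype(np.float64)
                  if cand.shape != template.shape:
                      continue  # candidate block falls outside the image
                  cost = np.abs(template - cand).sum()
                  if cost < best_cost:
                      best_cost, best_dy, best_dx = cost, dy, dx
          return best_dy, best_dx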

  4. Vehicle Lateral State Estimation Based on Measured Tyre Forces

    PubMed Central

    Tuononen, Ari J.

    2009-01-01

    Future active safety systems need more accurate information about the state of vehicles. This article proposes a method to evaluate the lateral state of a vehicle based on measured tyre forces. The tyre forces of two tyres are estimated from optically measured tyre carcass deflections and transmitted wirelessly to the vehicle body. The two remaining tyres are so-called virtual tyre sensors, the forces of which are calculated from the real tyre sensor estimates. The Kalman filter estimator for lateral vehicle state based on measured tyre forces is presented, together with a simple method to define adaptive measurement error covariance depending on the driving condition of the vehicle. The estimated yaw rate and lateral velocity are compared with the validation sensor measurements. PMID:22291535
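
    The estimator described above is a Kalman filter whose measurement-noise covariance adapts to the driving condition. The sketch below shows a generic linear predict/update step plus an illustrative switching rule for the covariance; the state and measurement models and the lateral-acceleration threshold are assumptions, not the paper's formulation.

      import numpy as np

      def kalman_step(x, P, z, F, H, Q, R):
          """One predict/update cycle of a linear Kalman filter."""
          # Predict
          x = F @ x
          P = F @ P @ F.T + Q
          # Update with measurement z
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P

      def adaptive_R(base_R, lateral_accel, threshold=4.0, inflate=10.0):
          """Illustrative rule: trust the force-based measurements less when the
          vehicle is driven near the handling limit (high lateral acceleration)."""
          return base_R * inflate if abs(lateral_accel) > threshold else base_R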

  5. Measuring geographic access to health care: raster and network-based methods

    PubMed Central

    2012-01-01

    Background Inequalities in geographic access to health care result from the configuration of facilities, population distribution, and the transportation infrastructure. In recent accessibility studies, the traditional distance measure (Euclidean) has been replaced with more plausible measures such as travel distance or time. Both network and raster-based methods are often utilized for estimating travel time in a Geographic Information System. Therefore, exploring the differences in the underlying data models and associated methods and their impact on geographic accessibility estimates is warranted. Methods We examine the assumptions present in population-based travel time models. Conceptual and practical differences between raster and network data models are reviewed, along with methodological implications for service area estimates. Our case study investigates Limited Access Areas defined by Michigan’s Certificate of Need (CON) Program. Geographic accessibility is calculated by identifying the number of people residing more than 30 minutes from an acute care hospital. Both network and raster-based methods are implemented and their results are compared. We also examine sensitivity to changes in travel speed settings and population assignment. Results In both methods, the areas identified as having limited accessibility were similar in their location, configuration, and shape. However, the number of people identified as having limited accessibility varied substantially between methods. Over all permutations, the raster-based method identified more area and people with limited accessibility. The raster-based method was more sensitive to travel speed settings, while the network-based method was more sensitive to the specific population assignment method employed in Michigan. Conclusions Differences between the underlying data models help to explain the variation in results between raster and network-based methods. Considering that the choice of data model/method may substantially alter the outcomes of a geographic accessibility analysis, we advise researchers to use caution in model selection. For policy, we recommend that Michigan adopt the network-based method or reevaluate the travel speed assignment rule in the raster-based method. Additionally, we recommend that the state revisit the population assignment method. PMID:22587023
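
    For the network-based estimate, the 30-minute service areas reduce to shortest-path travel-time queries on a road graph. A minimal sketch, assuming a networkx graph whose edges carry a 'minutes' travel-time attribute and whose nodes carry an assigned 'population' value (both field names are hypothetical, not part of the study):

      import networkx as nx

      def population_beyond_threshold(road_graph, hospital_nodes, threshold_min=30.0):
          """Count the population at nodes whose shortest travel time to the
          nearest acute care hospital exceeds `threshold_min` minutes."""
          reachable = set()
          for h in hospital_nodes:
              # All nodes reachable from this hospital within the cutoff
              times = nx.single_source_dijkstra_path_length(
                  road_graph, h, cutoff=threshold_min, weight="minutes")
              reachable.update(times.keys())
          return sum(
              data.get("population", 0)
              for node, data in road_graph.nodes(data=True)
              if node not in reachable
          )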

  6. S-F graphic representation analysis of photoelectric facula focometer poroo-plate glass

    NASA Astrophysics Data System (ADS)

    Tong, Yilin; Han, Xuecai

    2016-10-01

    Focal length measurement of an optical system is usually based on the magnification method, in which poroo-plate glass is used as the base element of the focometer. Based on an analysis of the accuracy of the magnification method for measuring the focal length of an optical lens, an expression relating the ruling span of the poroo-plate glass to the focal length of the measured optical system was deduced, an efficient way to produce the S-F graph with AUTOCAD was developed, the selection principle for the focometer parameters was analyzed, and applied examples for designing poroo-plate glass from the S-F figure were obtained.

  7. MEMS piezoresistive cantilever for the direct measurement of cardiomyocyte contractile force

    NASA Astrophysics Data System (ADS)

    Matsudaira, Kenei; Nguyen, Thanh-Vinh; Hirayama Shoji, Kayoko; Tsukagoshi, Takuya; Takahata, Tomoyuki; Shimoyama, Isao

    2017-10-01

    This paper reports on a method to directly measure the contractile forces of cardiomyocytes using MEMS (micro electro mechanical systems)-based force sensors. The fabricated sensor chip consists of piezoresistive cantilevers that can measure contractile forces with high frequency (several tens of kHz) and high sensing resolution (less than 0.1 nN). Moreover, the proposed method does not require a complex observation system or image processing, which are necessary in conventional optical-based methods. This paper describes the design, fabrication, and evaluation of the proposed device and demonstrates the direct measurements of contractile forces of cardiomyocytes using the fabricated device.

  8. Estimation of selected streamflow statistics for a network of low-flow partial-record stations in areas affected by Base Realignment and Closure (BRAC) in Maryland

    USGS Publications Warehouse

    Ries, Kernell G.; Eng, Ken

    2010-01-01

    The U.S. Geological Survey, in cooperation with the Maryland Department of the Environment, operated a network of 20 low-flow partial-record stations during 2008 in a region that extends from southwest of Baltimore to the northeastern corner of Maryland to obtain estimates of selected streamflow statistics at the station locations. The study area is expected to face a substantial influx of new residents and businesses as a result of military and civilian personnel transfers associated with the Federal Base Realignment and Closure Act of 2005. The estimated streamflow statistics, which include monthly 85-percent duration flows, the 10-year recurrence-interval minimum base flow, and the 7-day, 10-year low flow, are needed to provide a better understanding of the availability of water resources in the area to be affected by base-realignment activities. Streamflow measurements collected for this study at the low-flow partial-record stations and measurements collected previously for 8 of the 20 stations were related to concurrent daily flows at nearby index streamgages to estimate the streamflow statistics. Three methods were used to estimate the streamflow statistics and two methods were used to select the index streamgages. Of the three methods used to estimate the streamflow statistics, two of them--the Moments and MOVE1 methods--rely on correlating the streamflow measurements at the low-flow partial-record stations with concurrent streamflows at nearby, hydrologically similar index streamgages to determine the estimates. These methods, recommended for use by the U.S. Geological Survey, generally require about 10 streamflow measurements at the low-flow partial-record station. The third method transfers the streamflow statistics from the index streamgage to the partial-record station based on the average of the ratios of the measured streamflows at the partial-record station to the concurrent streamflows at the index streamgage. This method can be used with as few as one pair of streamflow measurements made on a single streamflow recession at the low-flow partial-record station, although additional pairs of measurements will increase the accuracy of the estimates. Errors associated with the two correlation methods generally were lower than the errors associated with the flow-ratio method, but the advantages of the flow-ratio method are that it can produce reasonably accurate estimates from streamflow measurements much faster and at lower cost than estimates obtained using the correlation methods. The two index-streamgage selection methods were (1) selection based on the highest correlation coefficient between the low-flow partial-record station and the index streamgages, and (2) selection based on Euclidean distance, where the Euclidean distance was computed as a function of geographic proximity and the basin characteristics: drainage area, percentage of forested area, percentage of impervious area, and the base-flow recession time constant, t. Method 1 generally selected index streamgages that were significantly closer to the low-flow partial-record stations than method 2. The errors associated with the estimated streamflow statistics generally were lower for method 1 than for method 2, but the differences were not statistically significant. The flow-ratio method for estimating streamflow statistics at low-flow partial-record stations was shown to be independent from the two correlation-based estimation methods. 
As a result, final estimates were determined for eight low-flow partial-record stations by weighting estimates from the flow-ratio method with estimates from one of the two correlation methods according to the respective variances of the estimates. Average standard errors of estimate for the final estimates ranged from 7.0 to 90.0 percent, with an average value of 26.5 percent. Average standard errors of estimate for the weighted estimates were, on average, 4.3 percent less than the best average standard errors of estimate.
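
    The flow-ratio transfer described above amounts to scaling the index-streamgage statistic by the mean ratio of the paired flows. A minimal sketch, with illustrative variable names and example numbers:

      import numpy as np

      def flow_ratio_estimate(partial_record_flows, concurrent_index_flows, index_statistic):
          """Transfer a streamflow statistic (e.g., a 7-day, 10-year low flow) from an
          index streamgage to a low-flow partial-record station using the average of
          the ratios of measured flows at the station to concurrent index-gage flows."""
          ratios = (np.asarray(partial_record_flows, dtype=float)
                    / np.asarray(concurrent_index_flows, dtype=float))
          return ratios.mean() * index_statistic

      # Example with three paired measurements and an illustrative index-gage statistic:
      # flow_ratio_estimate([2.1, 1.8, 2.4], [10.0, 9.2, 11.5], 12.4)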

  9. In-line bulk supersaturation measurement by electrical conductometry in KDP crystal growth from aqueous solution

    NASA Astrophysics Data System (ADS)

    Bordui, P. F.; Loiacono, G. M.

    1984-07-01

    A method is presented for in-line bulk supersaturation measurement in crystal growth from aqueous solution. The method is based on a computer-controlled concentration measurement exploiting an experimentally predetermined cross-correlation between the concentration, electrical conductivity, and temperature of the growth solution. The method was applied to Holden crystallization of potassium dihydrogen phosphate (KDP). An extensive conductivity-temperature-concentration data base was generated for this system over a temperature range of 31 to 41°C. The method yielded continuous, automated bulk supersaturation output accurate to within ±0.05 g KDP/100 g water (±0.15% relative supersaturation).
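
    In outline, the measurement reduces to evaluating an empirical concentration surface C(conductivity, temperature) fitted to the predetermined calibration data set and converting the inferred concentration to relative supersaturation against the solubility at the growth temperature. The quadratic surface and the externally supplied solubility value below are illustrative assumptions, not the authors' calibration:

      import numpy as np

      def fit_concentration_surface(conductivity, temperature, concentration):
          """Least-squares fit of concentration as a quadratic polynomial in
          conductivity and temperature, from a calibration data set."""
          k, T, C = map(np.asarray, (conductivity, temperature, concentration))
          A = np.column_stack([np.ones_like(k), k, T, k * T, k**2, T**2])
          coeffs, *_ = np.linalg.lstsq(A, C, rcond=None)
          return coeffs

      def relative_supersaturation(coeffs, k, T, solubility_at_T):
          """Evaluate the fitted surface at (k, T) and compare with solubility."""
          A = np.array([1.0, k, T, k * T, k**2, T**2])
          C = A @ coeffs
          return (C - solubility_at_T) / solubility_at_T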

  10. Initial review of rapid moisture measurement for roadway base and subgrade.

    DOT National Transportation Integrated Search

    2013-05-01

    This project searched available moisture-measurement technologies using gravimetric, dielectric, electrical conductivity, and suction-based methods, as potential replacements for the nuclear gauge to provide rapid moisture measurement on field constr...

  11. The Reliability of Randomly Generated Math Curriculum-Based Measurements

    ERIC Educational Resources Information Center

    Strait, Gerald G.; Smith, Bradley H.; Pender, Carolyn; Malone, Patrick S.; Roberts, Jarod; Hall, John D.

    2015-01-01

    "Curriculum-Based Measurement" (CBM) is a direct method of academic assessment used to screen and evaluate students' skills and monitor their responses to academic instruction and intervention. Interventioncentral.org offers a math worksheet generator at no cost that creates randomly generated "math curriculum-based measures"…

  12. A data processing method based on tracking light spot for the laser differential confocal component parameters measurement system

    NASA Astrophysics Data System (ADS)

    Shao, Rongjun; Qiu, Lirong; Yang, Jiamiao; Zhao, Weiqian; Zhang, Xin

    2013-12-01

    We have proposed a component-parameter measurement method based on differential confocal focusing theory. To improve the positioning precision of the laser differential confocal component parameters measurement system (LDDCPMS), this paper provides a data processing method based on tracking the light spot. To reduce the error caused by movement of the light spot while collecting the axial intensity signal, an image centroiding algorithm is used to find and track the center of the Airy disk in the images collected by the laser differential confocal system. To weaken the influence of higher-harmonic noise during the measurement, a Gaussian filter is used to process the axial intensity signal. Finally, the zero point corresponding to the focus of the objective in the differential confocal system is obtained by linear fitting of the differential confocal axial intensity data. Preliminary experiments indicate that the method based on tracking the light spot can accurately collect the axial intensity response signal of the virtual pinhole and improve the anti-interference ability of the system, thus improving the system positioning accuracy.
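
    The three processing steps named above (intensity-weighted centroiding of the Airy-disk image, Gaussian smoothing of the axial response, and a linear fit to locate the zero point of the differential signal) can be sketched as follows; the window size and smoothing width are assumptions:

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      def spot_centroid(image):
          """Intensity-weighted centroid of a focal-spot image (2-D array)."""
          total = image.sum()
          ys, xs = np.indices(image.shape)
          return (ys * image).sum() / total, (xs * image).sum() / total

      def differential_zero_point(z, differential_signal, sigma=3, window=5):
          """Smooth the differential confocal axial response and locate its zero
          crossing by a linear fit over a few samples around the sign change."""
          s = gaussian_filter1d(np.asarray(differential_signal, float), sigma)
          i = np.argmin(np.abs(s))                        # sample closest to zero
          lo, hi = max(0, i - window), min(len(z), i + window + 1)
          slope, intercept = np.polyfit(z[lo:hi], s[lo:hi], 1)
          return -intercept / slope                       # z where the fitted line crosses zero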

  13. Poor visualization limits diagnosis of proximal junctional kyphosis in adolescent idiopathic scoliosis.

    PubMed

    Basques, Bryce A; Long, William D; Golinvaux, Nicholas S; Bohl, Daniel D; Samuel, Andre M; Lukasiewicz, Adam M; Webb, Matthew L; Grauer, Jonathan N

    2017-06-01

    Multiple methods are used to measure proximal junctional angle (PJA) and diagnose proximal junctional kyphosis (PJK) after fusion for adolescent idiopathic scoliosis (AIS); however, there is no gold standard. Previous studies using the three most common measurement methods, upper-instrumented vertebra (UIV)+1, UIV+2, and UIV to T2, have minimized the difficulty in obtaining these measurements, and often exclude patients for whom measurements cannot be recorded. The purpose of this study is to assess the technical feasibility of measuring PJA and PJK in a series of AIS patients who have undergone posterior instrumented fusion and to assess the variability in results depending on the measurement technique used. A retrospective cohort study was carried out. There were 460 radiographs from 98 patients with AIS who underwent posterior spinal fusion at a single institution from 2006 through 2012. The outcomes for this study were the ability to obtain a PJA measurement for each method, the ability to diagnose PJK, and the inter- and intra-rater reliability of these measurements. Proximal junctional angle was determined by measuring the sagittal Cobb angle on preoperative and postoperative lateral upright films using the three most common methods (UIV+1, UIV+2, and UIV to T2). The ability to obtain a PJA measurement, the ability to assess PJK, and the total number of patients with a PJK diagnosis were tabulated for each method based on established definitions. Intra- and inter-rater reliability of each measurement method was assessed using intra-class correlation coefficients (ICCs). A total of 460 radiographs from 98 patients were evaluated. The average number of radiographs per patient was 5.3±1.7 (mean±standard deviation), with an average follow-up of 2.1 years (780±562 days). A PJA measurement was only readable on 13%-18% of preoperative films and 31%-49% of postoperative films (range based on measurement technique). Only 12%-31% of films could be assessed for PJK based on established definitions. The rate of PJK diagnosis ranged from 1% to 29%. Of these diagnoses, 21%-100% disappeared on at least one subsequent film for the given patient. ICC ranges for intra-rater and inter-rater reliability were 0.730-0.799 and 0.794-0.836, respectively. This study suggests significant limitations of the three most common methods of measuring and diagnosing PJK. The results of studies using these methods can be significantly affected by the exclusion of patients for whom measurements cannot be made and by the choice of measurement technique.

  14. Is Aggression the Same for Boys and Girls? Assessing Measurement Invariance with Confirmatory Factor Analysis and Item Response Theory

    ERIC Educational Resources Information Center

    Kim, Sangwon; Kim, Seock-Ho; Kamphaus, Randy W.

    2010-01-01

    Gender differences in aggression have typically been based on studies utilizing a mean difference method. From a measurement perspective, this method is inherently problematic unless an aggression measure possesses comparable validity across gender. Stated differently, establishing measurement invariance on the measure of aggression is…

  15. Research on Damage Identification of Bridge Based on Digital Image Measurement

    NASA Astrophysics Data System (ADS)

    Liang, Yingjing; Huan, Shi; Tao, Weijun

    2017-12-01

    In recent years, the number of bridges damaged by excessive deformation has gradually increased, causing significant property damage and casualties. Hence, health monitoring and damage detection of bridge structures based on deflection measurement are particularly important. The current conventional deflection measurement methods, such as total station, connected pipe, GPS, etc., have many shortcomings, including low efficiency, heavy workload, a low degree of automation, and constraints on operating frequency and working time. GPS has low accuracy in vertical displacement measurement and cannot meet the dynamic measurement requirements of current bridge engineering. This paper presents a bridge health-monitoring and damage-detection technology based on a digital image measurement method, which achieves sub-millimeter measurement accuracy and 24-hour automatic, non-destructive monitoring of deflection. It is concluded that using the digital image measurement method to identify damage in bridge structures is feasible, as validated by theoretical analysis, a laboratory model, and application to a real bridge.

  16. A new method of time difference measurement: The time difference method by dual phase coincidence points detection

    NASA Technical Reports Server (NTRS)

    Zhou, Wei

    1993-01-01

    In high-accuracy measurement of periodic signals, the greatest common factor frequency and its characteristics have special functions. A method of time difference measurement, the time difference method by dual 'phase coincidence points' detection, is described. This method utilizes the characteristics of the greatest common factor frequency to measure the time or phase difference between periodic signals. It suits a very wide frequency range. Measurement precision and potential accuracy of several picoseconds were demonstrated with this new method. The instrument based on this method is very simple, and the demand on the common oscillator is low. This method and instrument can be widely used.

  17. 3-D surface profilometry based on modulation measurement by applying wavelet transform method

    NASA Astrophysics Data System (ADS)

    Zhong, Min; Chen, Feng; Xiao, Chao; Wei, Yongchao

    2017-01-01

    A new analysis of 3-D surface profilometry based on the modulation measurement technique, applying the Wavelet Transform method, is proposed. As a tool excelling in multi-resolution and localization in the time and frequency domains, the Wavelet Transform method, with its good localized time-frequency analysis ability and effective de-noising capacity, can extract the modulation distribution more accurately than the Fourier Transform method. Especially for complex objects, more details of the measured object are well preserved. In this paper, the theoretical derivation of the Wavelet Transform method that obtains the modulation values from a captured fringe pattern is given. Both computer simulations and an elementary experiment are used to show the validity of the proposed method by comparison with the results of the Fourier Transform method. The results show that the Wavelet Transform method performs better than the Fourier Transform method in modulation value retrieval.

  18. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-02-01

    Experimental or operational modal analysis traditionally requires physically attached wired or wireless sensors for vibration measurement of structures. This instrumentation can result in mass-loading on lightweight structures, and is costly and time-consuming to install and maintain on large civil structures, especially for long-term applications (e.g., structural health monitoring) that require significant maintenance for cabling (wired sensors) or periodic replacement of the energy supply (wireless sensors). Moreover, these sensors are typically placed at a limited number of discrete locations, providing low spatial sensing resolution that is hardly sufficient for modal-based damage localization, or for model correlation and updating of larger-scale structures. Non-contact measurement methods such as scanning laser vibrometers provide high-resolution sensing capacity without the mass-loading effect; however, they make sequential measurements that require considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation, optical flow), video camera based measurements have been successfully used for vibration measurements and subsequent modal analysis, based on techniques such as digital image correlation (DIC) and point tracking. However, these typically require a speckle pattern or high-contrast markers to be placed on the surface of structures, which poses challenges when the measurement area is large or inaccessible. This work explores advanced computer vision and video processing algorithms to develop a novel video measurement and vision-based operational (output-only) modal analysis method that alleviates the need for the structural surface preparation associated with existing vision-based methods and can be implemented in a relatively efficient and autonomous manner with little user supervision and calibration. First, a multi-scale image processing method is applied to the frames of the video of a vibrating structure to extract the local pixel phases that encode local structural vibration, establishing a full-field spatiotemporal motion matrix. Then a high-spatial-dimensional, yet low-modal-dimensional, over-complete model is used to represent the extracted full-field motion matrix using modal superposition, which is physically connected to, and manipulated by, a family of unsupervised learning models and techniques. Thus, the proposed method is able to blindly extract modal frequencies, damping ratios, and full-field (as many points as the pixel count of the video frame) mode shapes from line-of-sight video measurements of the structure. The method is validated by laboratory experiments on a bench-scale building structure and a cantilever beam. Its ability for output (video measurements)-only identification and visualization of the weakly excited mode is demonstrated, and several implementation issues are discussed.
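
    One common way to blindly decompose a full-field motion matrix (pixels by frames) is an SVD followed by peak-picking of the temporal spectra; the sketch below uses that simpler stand-in rather than the unsupervised-learning machinery described above, and the sampling rate and mode count are assumptions:

      import numpy as np

      def blind_modes_from_motion(motion, fs, n_modes=3):
          """motion: (n_pixels, n_frames) array of extracted pixel-phase motions.
          Returns approximate full-field mode shapes and their dominant frequencies."""
          motion = motion - motion.mean(axis=1, keepdims=True)
          U, S, Vt = np.linalg.svd(motion, full_matrices=False)
          shapes = U[:, :n_modes]                       # spatial patterns (candidate mode shapes)
          coords = Vt[:n_modes]                         # temporal modal coordinates
          freqs = []
          for q in coords:
              spec = np.abs(np.fft.rfft(q))
              f = np.fft.rfftfreq(len(q), d=1.0 / fs)
              freqs.append(f[np.argmax(spec[1:]) + 1])  # skip the DC bin
          return shapes, np.array(freqs)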

  19. "Reagent-free" L-asparaginase activity assay based on CD spectroscopy and conductometry.

    PubMed

    Kudryashova, Elena V; Sukhoverkov, Kirill V

    2016-02-01

    A new method to determine the catalytic parameters of L-asparaginase using circular dichroism spectroscopy (CD spectroscopy) has been developed. The assay is based on the difference in CD signal between the substrate (L-asparagine) and the product (L-aspartic acid) of the enzymatic reaction. CD spectroscopy, being a direct method, enables continuous measurement; it thus differs from the multistage and laborious approach based on Nessler's method and overcomes the limitations of coupled enzymatic reaction methods. In this work, we show robust measurements of L-asparaginase activity in conjugates with PEG-chitosan copolymers, which otherwise would not have been possible. The main limitation of the CD method is that the analysis should be performed at substrate saturation conditions (Vmax regime). For KM measurement, the conductometry method is suggested, which can serve as a complementary method to CD spectroscopy. The activity assay based on CD spectroscopy and conductometry was successfully applied to examine the catalytic parameters of L-asparaginase conjugates with chitosan and its derivatives, and to optimize the molecular architecture and composition of such conjugates for improving the biocatalytic properties of the enzyme under physiological conditions. The approach developed is potentially applicable to other enzymatic reactions where the spectroscopic properties of the substrate and product do not enable direct measurement with absorption or fluorescence spectroscopy. This may include a number of amino acid or glycoside-transforming enzymes.

  20. Efficient, graph-based white matter connectivity from orientation distribution functions via multi-directional graph propagation

    NASA Astrophysics Data System (ADS)

    Boucharin, Alexis; Oguz, Ipek; Vachet, Clement; Shi, Yundi; Sanchez, Mar; Styner, Martin

    2011-03-01

    The use of regional connectivity measurements derived from diffusion imaging datasets has become of considerable interest in the neuroimaging community in order to better understand cortical and subcortical white matter connectivity. Current connectivity assessment methods are based on streamline fiber tractography, usually applied in a Monte-Carlo fashion. In this work we present a novel, graph-based method that performs a fully deterministic, efficient and stable connectivity computation. The method handles crossing fibers and deals well with multiple seed regions. The computation is based on a multi-directional graph propagation method applied to a sampled orientation distribution function (ODF), which can be computed directly from the original diffusion imaging data. We show early results of our method on synthetic and real datasets. The results illustrate the potential of our method towards subject-specific connectivity measurements that are performed in an efficient, stable and reproducible manner. Such individual connectivity measurements would be well suited for application in population studies of neuropathology, such as Autism, Huntington's Disease, Multiple Sclerosis or leukodystrophies. The proposed method is generic and could easily be applied to non-diffusion data as long as local directional data can be derived.

  1. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence.

    PubMed

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia

    2016-02-18

    Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on the equation solution but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the space geometry relation, the characteristic lines can be made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.

  2. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence

    PubMed Central

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia

    2016-01-01

    Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on the equation solution but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the space geometry relation, the characteristic lines can be made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203

  3. Field test of available methods to measure remotely SOx and NOx emissions from ships

    NASA Astrophysics Data System (ADS)

    Balzani Lööv, J. M.; Alfoldy, B.; Gast, L. F. L.; Hjorth, J.; Lagler, F.; Mellqvist, J.; Beecken, J.; Berg, N.; Duyzer, J.; Westrate, H.; Swart, D. P. J.; Berkhout, A. J. C.; Jalkanen, J.-P.; Prata, A. J.; van der Hoff, G. R.; Borowiak, A.

    2014-08-01

    Methods for the determination of ship fuel sulphur content and NOx emission factors based on remote measurements were compared in the harbour of Rotterdam and evaluated against direct stack emission measurements on the ferry Stena Hollandica. The methods were selected based on a review of the available literature on ship emission measurements. They were either optical (LIDAR, Differential Optical Absorption Spectroscopy (DOAS), UV camera), combined with model-based estimates of fuel consumption, or based on the so-called "sniffer" principle, where SO2 or NOx emission factors are determined from simultaneous measurement of the increase of CO2 and SO2 or NOx concentrations in the plume of the ship relative to the background. The measurements were performed from stations on land, from a boat and from a helicopter. Mobile measurement platforms were found to have important advantages over land-based ones because they allow optimizing the sampling conditions and sampling from ships on the open sea. Although optical methods can provide reliable results, it was found that, at the state of the art, the "sniffer" approach is the most convenient technique for determining both SO2 and NOx emission factors remotely. The average random error on the determination of SO2 emission factors, comparing two identical instrumental set-ups, was 6%. However, it was found that apparently minor differences in the instrumental characteristics, such as response time, could cause significant differences between the emission factors determined. Direct stack measurements showed that about 14% of the fuel sulphur content was not emitted as SO2. This was supported by the remote measurements and is in agreement with the results of other field studies.
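
    With the "sniffer" principle, the fuel sulphur content follows from the ratio of the background-subtracted, plume-integrated SO2 and CO2 enhancements together with an assumed fuel carbon fraction (roughly 0.87 for marine fuels). A minimal sketch of that conversion, with the carbon fraction and equal-sampling assumption stated explicitly:

      import numpy as np

      def fuel_sulphur_content(so2_ppb, co2_ppm, carbon_fraction=0.87):
          """Estimate fuel sulphur content (% m/m) from background-subtracted SO2 (ppb)
          and CO2 (ppm) enhancements sampled across the plume.

          FSC ~ (n_SO2 / n_CO2) * (M_S / M_C) * carbon_fraction * 100%
          Equal time sampling is assumed, so sums stand in for plume integrals.
          """
          molar_ratio = (np.asarray(so2_ppb, float).sum() * 1e-9) \
                        / (np.asarray(co2_ppm, float).sum() * 1e-6)
          return molar_ratio * (32.06 / 12.01) * carbon_fraction * 100.0

      # Example: an SO2/CO2 enhancement ratio of about 4.3 ppb/ppm corresponds to ~1% sulphur.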

  4. A method for modelling GP practice level deprivation scores using GIS

    PubMed Central

    Strong, Mark; Maheswaran, Ravi; Pearson, Tim; Fryers, Paul

    2007-01-01

    Background A measure of general practice level socioeconomic deprivation can be used to explore the association between deprivation and other practice characteristics. An area-based categorisation is commonly chosen as the basis for such a deprivation measure. Ideally a practice population-weighted area-based deprivation score would be calculated using individual level spatially referenced data. However, these data are often unavailable. One approach is to link the practice postcode to an area-based deprivation score, but this method has limitations. This study aimed to develop a Geographical Information Systems (GIS) based model that could better predict a practice population-weighted deprivation score in the absence of patient level data than simple practice postcode linkage. Results We calculated predicted practice level Index of Multiple Deprivation (IMD) 2004 deprivation scores using two methods that did not require patient level data. Firstly we linked the practice postcode to an IMD 2004 score, and secondly we used a GIS model derived using data from Rotherham, UK. We compared our two sets of predicted scores to "gold standard" practice population-weighted scores for practices in Doncaster, Havering and Warrington. Overall, the practice postcode linkage method overestimated "gold standard" IMD scores by 2.54 points (95% CI 0.94, 4.14), whereas our modelling method showed no such bias (mean difference 0.36, 95% CI -0.30, 1.02). The postcode-linked method systematically underestimated the gold standard score in less deprived areas, and overestimated it in more deprived areas. Our modelling method showed a small underestimation in scores at higher levels of deprivation in Havering, but showed no bias in Doncaster or Warrington. The postcode-linked method showed more variability when predicting scores than did the GIS modelling method. Conclusion A GIS based model can be used to predict a practice population-weighted area-based deprivation measure in the absence of patient level data. Our modelled measure generally had better agreement with the population-weighted measure than did a postcode-linked measure. Our model may also avoid an underestimation of IMD scores in less deprived areas, and overestimation of scores in more deprived areas, seen when using postcode linked scores. The proposed method may be of use to researchers who do not have access to patient level spatially referenced data. PMID:17822545
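
    The "gold standard" practice population-weighted score referred to above is simply a weighted mean of area-level IMD scores; a minimal sketch, with the dictionary field names as assumptions:

      def practice_weighted_imd(patient_counts_by_area, imd_by_area):
          """Population-weighted IMD score for one practice.

          patient_counts_by_area: {area_code: number of registered patients}
          imd_by_area:            {area_code: IMD 2004 score}
          """
          total = sum(patient_counts_by_area.values())
          return sum(n * imd_by_area[a] for a, n in patient_counts_by_area.items()) / total

      # e.g. practice_weighted_imd({"E01000001": 820, "E01000002": 430},
      #                            {"E01000001": 12.3, "E01000002": 34.1})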

  5. An integrative framework for sensor-based measurement of teamwork in healthcare

    PubMed Central

    Rosen, Michael A; Dietz, Aaron S; Yang, Ting; Priebe, Carey E; Pronovost, Peter J

    2015-01-01

    There is a strong link between teamwork and patient safety. Emerging evidence supports the efficacy of teamwork improvement interventions. However, the availability of reliable, valid, and practical measurement tools and strategies is commonly cited as a barrier to long-term sustainment and spread of these teamwork interventions. This article describes the potential value of sensor-based technology as a methodology to measure and evaluate teamwork in healthcare. The article summarizes the teamwork literature within healthcare, including team improvement interventions and measurement. Current applications of sensor-based measurement of teamwork are reviewed to assess the feasibility of employing this approach in healthcare. The article concludes with a discussion highlighting current application needs and gaps and relevant analytical techniques to overcome the challenges to implementation. Compelling studies exist documenting the feasibility of capturing a broad array of team input, process, and output variables with sensor-based methods. Implications of this research are summarized in a framework for development of multi-method team performance measurement systems. Sensor-based measurement within healthcare can unobtrusively capture information related to social networks, conversational patterns, physical activity, and an array of other meaningful information without having to directly observe or periodically survey clinicians. However, trust and privacy concerns present challenges that need to be overcome through engagement of end users in healthcare. Initial evidence exists to support the feasibility of sensor-based measurement to drive feedback and learning across individual, team, unit, and organizational levels. Future research is needed to refine methods, technologies, theory, and analytical strategies. PMID:25053579

  6. An Intelligent Optical Dissolved Oxygen Measurement Method Based on a Fluorescent Quenching Mechanism.

    PubMed

    Li, Fengmei; Wei, Yaoguang; Chen, Yingyi; Li, Daoliang; Zhang, Xu

    2015-12-09

    Dissolved oxygen (DO) is a key factor that influences the healthy growth of fish in aquaculture. The DO content changes with the aquatic environment and should therefore be monitored online. However, traditional measurement methods, such as iodometry and other chemical analysis methods, are not suitable for online monitoring, and the Clark method is not stable enough for extended periods of monitoring. To solve these problems, this paper proposes an intelligent DO measurement method based on the fluorescence quenching mechanism. The measurement system is composed of fluorescent quenching detection, signal conditioning, intelligent processing, and power supply modules. The optical probe adopts the fluorescent quenching mechanism to detect the DO content, which avoids the problem that traditional chemical methods are easily influenced by the environment. The optical probe contains a thermistor and dual excitation sources to isolate visible parasitic light and execute a compensation strategy. The intelligent processing module adopts the IEEE 1451.2 standard and realizes intelligent compensation. Experimental results show that the optical measurement method is stable, accurate, and suitable for online DO monitoring in aquaculture applications.
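
    Quenching-based oxygen optodes are commonly read out through a Stern-Volmer-type relation between the fluorescence lifetime (or phase) and the oxygen concentration. The sketch below shows only that relation, with the calibration constants as assumptions; the probe's temperature and parasitic-light compensation described above is not reproduced:

      def dissolved_oxygen_from_lifetime(tau, tau0, k_sv):
          """Stern-Volmer relation for a quenching-based optode:
              tau0 / tau = 1 + K_sv * [O2]
          Returns the oxygen concentration in the units implied by 1/K_sv."""
          return (tau0 / tau - 1.0) / k_sv

      # Illustrative calibration: tau0 = 60 us (oxygen-free), tau = 40 us, K_sv = 0.05 L/mg
      # dissolved_oxygen_from_lifetime(40e-6, 60e-6, 0.05) -> 10.0 mg/L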

  7. Ultrasonic power measurement system based on acousto-optic interaction.

    PubMed

    He, Liping; Zhu, Fulong; Chen, Yanming; Duan, Ke; Lin, Xinxin; Pan, Yongjun; Tao, Jiaquan

    2016-05-01

    Ultrasonic waves are widely used, with applications including the medical, military, and chemical fields. However, there are currently no effective methods for ultrasonic power measurement. Previously, ultrasonic power measurement has been reliant on mechanical methods such as hydrophones and radiation force balances. This paper deals with ultrasonic power measurement based on an unconventional method: acousto-optic interaction. Compared with mechanical methods, the optical method has a greater ability to resist interference and also has reduced environmental requirements. Therefore, this paper begins with an experimental determination of the acoustic power in water contained in a glass tank using a set of optical devices. Because the light intensity of the diffraction image generated by acousto-optic interaction contains the required ultrasonic power information, specific software was written to extract the light intensity information from the image through a combination of filtering, binarization, contour extraction, and other image processing operations. The power value can then be obtained rapidly by processing the diffraction image using a computer. The results of this work show that the optical method offers advantages that include accuracy, speed, and a noncontact measurement method.
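
    The image-processing chain named above (filtering, binarization, contour extraction, intensity read-out) could look roughly like the OpenCV sketch below; the thresholds are assumptions, the findContours return signature assumes OpenCV 4.x, and converting the extracted intensities into an acoustic power value still requires the calibration described in the paper:

      import cv2
      import numpy as np

      def diffraction_order_intensities(image_path, blur_ksize=5, thresh=60, min_area=20):
          """Return the mean grey-level intensity of each detected diffraction spot."""
          img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
          blurred = cv2.GaussianBlur(img, (blur_ksize, blur_ksize), 0)
          _, binary = cv2.threshold(blurred, thresh, 255, cv2.THRESH_BINARY)
          contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          intensities = []
          for c in contours:
              if cv2.contourArea(c) < min_area:
                  continue  # ignore small noise blobs
              mask = np.zeros_like(img)
              cv2.drawContours(mask, [c], -1, 255, thickness=-1)  # filled spot mask
              intensities.append(cv2.mean(img, mask=mask)[0])
          return sorted(intensities, reverse=True)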

  8. An Intelligent Optical Dissolved Oxygen Measurement Method Based on a Fluorescent Quenching Mechanism

    PubMed Central

    Li, Fengmei; Wei, Yaoguang; Chen, Yingyi; Li, Daoliang; Zhang, Xu

    2015-01-01

    Dissolved oxygen (DO) is a key factor that influences the healthy growth of fish in aquaculture. The DO content changes with the aquatic environment and should therefore be monitored online. However, traditional measurement methods, such as iodometry and other chemical analysis methods, are not suitable for online monitoring, and the Clark method is not stable enough for extended periods of monitoring. To solve these problems, this paper proposes an intelligent DO measurement method based on the fluorescence quenching mechanism. The measurement system is composed of fluorescent quenching detection, signal conditioning, intelligent processing, and power supply modules. The optical probe adopts the fluorescent quenching mechanism to detect the DO content, which avoids the problem that traditional chemical methods are easily influenced by the environment. The optical probe contains a thermistor and dual excitation sources to isolate visible parasitic light and execute a compensation strategy. The intelligent processing module adopts the IEEE 1451.2 standard and realizes intelligent compensation. Experimental results show that the optical measurement method is stable, accurate, and suitable for online DO monitoring in aquaculture applications. PMID:26690176

  9. Ultrasonic power measurement system based on acousto-optic interaction

    NASA Astrophysics Data System (ADS)

    He, Liping; Zhu, Fulong; Chen, Yanming; Duan, Ke; Lin, Xinxin; Pan, Yongjun; Tao, Jiaquan

    2016-05-01

    Ultrasonic waves are widely used, with applications including the medical, military, and chemical fields. However, there are currently no effective methods for ultrasonic power measurement. Previously, ultrasonic power measurement has been reliant on mechanical methods such as hydrophones and radiation force balances. This paper deals with ultrasonic power measurement based on an unconventional method: acousto-optic interaction. Compared with mechanical methods, the optical method has a greater ability to resist interference and also has reduced environmental requirements. Therefore, this paper begins with an experimental determination of the acoustic power in water contained in a glass tank using a set of optical devices. Because the light intensity of the diffraction image generated by acousto-optic interaction contains the required ultrasonic power information, specific software was written to extract the light intensity information from the image through a combination of filtering, binarization, contour extraction, and other image processing operations. The power value can then be obtained rapidly by processing the diffraction image using a computer. The results of this work show that the optical method offers advantages that include accuracy, speed, and a noncontact measurement method.

  10. Cost of Einstein-Podolsky-Rosen steering in the context of extremal boxes

    NASA Astrophysics Data System (ADS)

    Das, Debarshi; Datta, Shounak; Jebaratnam, C.; Majumdar, A. S.

    2018-02-01

    Einstein-Podolsky-Rosen steering is a form of quantum nonlocality, which is weaker than Bell nonlocality, but stronger than entanglement. Here we present a method to check Einstein-Podolsky-Rosen steering in the scenario where the steering party performs two black-box measurements and the trusted party performs projective qubit measurements corresponding to two arbitrary mutually unbiased bases. This method is based on decomposing the measurement correlations in terms of extremal boxes of the steering scenario. In this context, we propose a measure of steerability called steering cost. We show that our steering cost is a convex steering monotone. We illustrate our method to check steerability with two families of measurement correlations and find out their steering cost.

  11. Adding Personality to Gifted Identification: Relationships among Traditional and Personality-Based Constructs

    ERIC Educational Resources Information Center

    Carman, Carol A.

    2011-01-01

    One of the underutilized tools in gifted identification is personality-based measures. A multiple confirmatory factor analysis was utilized to examine the relationships between traditional identification methods and personality-based measures. The pattern of correlations indicated this model could be measuring two constructs, one related to…

  12. Passive emission colorimetric sensor (PECS) for measuring emission rates of formaldehyde based on an enzymatic reaction and reflectance photometry.

    PubMed

    Shinohara, Naohide; Kajiwara, Tomohisa; Ohnishi, Masato; Kodama, Kenichi; Yanagisawa, Yukio

    2008-06-15

    A coin-sized passive emission colorimetric sensor (PECS) based on an enzymatic reaction and a portable reflectance photometry device were developed to determine the emission rates of formaldehyde from building materials and other materials found indoors in only 30 minutes on-site. The color change of the PECS linearly correlated to the concentration of formaldehyde aqueous solutions up to 28 microg/mL. The correlation between the emission rates measured by using the PECS and those measured by using a desiccator method or by using a chamber method was fitted with a linear function and a power function, and the determination coefficients were more than 0.98. The reproducible results indicate that the emission rates could be obtained with the correlation equations from the data measured by using the PECS and the portable reflectance photometry device. Limits of detection (LODs) were 0.051 mg/L for the desiccator method and 3.1 microg/m2/h for the chamber method. Thus, it was confirmed that the emission rates of formaldehyde from the building materials classified as F four-star (< 0.3 mg/L (desiccator method) or < 5.0 microg/m2/h (chamber method)), based on Japanese Industrial Standards (JIS), could be measured with the PECS. The measurement with PECS was confirmed to be precise (RSD < 10%). Other chemicals emitted from indoor materials, such as methanol, ethanol, acetone, toluene, and xylene, interfered little with the measurement of formaldehyde emission rates by using the PECS.

  13. Reduction of atmospheric disturbances in PSInSAR measure technique based on ENVISAT ASAR data for Erta Ale Ridge

    NASA Astrophysics Data System (ADS)

    Kopeć, Anna

    2018-01-01

    Interferometric synthetic aperture radar (InSAR) is becoming more and more popular for investigating surface deformation associated with volcanism, earthquakes, landslides, and post-mining surface subsidence. The measurement accuracy depends on many factors, such as surface, temporal and geometric decorrelation and orbit errors; however, the largest challenge is the tropospheric delay. Spatial and temporal variations in temperature, pressure, and relative humidity are responsible for tropospheric delays. Many methods have been developed so far, but researchers are still searching for one that corrects interferograms consistently across different regions and times. The article focuses on examining approaches based on empirical phase-based corrections, spectrometer measurements and weather models. These methods were applied to ENVISAT ASAR data for the Erta Ale Ridge in the Afar Depression, East Africa.

  14. A new method for noninvasive venous blood oxygen detection.

    PubMed

    Zhang, Xu; Zhang, Meimei; Zheng, Shengkun; Wang, Liqi; Ye, Jilun

    2016-07-19

    Venous blood oxygen saturation (SvO2) is an important clinical parameter for patient monitoring. However, the existing clinical methods are invasive, expensive, and painful for patients. Based on light absorption, this study describes a new noninvasive SvO2 measurement method that uses an external stimulation signal to generate a cyclical fluctuation signal in the vein, which overcomes the low signal-to-noise ratio problem in the measurement process. In this way, the value of SvO2 can be obtained continuously in real time. The experimental results demonstrate that the method can successfully measure venous oxygen saturation by artificial addition of stimulation. Under hypoxic conditions, the system reflects the overall decline of venous oxygen saturation well. When the results measured by the new method are compared with those measured by the invasive method, the root mean square error of the difference is 5.31 and the correlation coefficient is 0.72. The new method can be used to measure SvO2 and evaluate body oxygen consumption, although its accuracy needs improvement. Real-time and continuous monitoring can be achieved by replacing the invasive method with the noninvasive one, which provides more comprehensive clinical information in a timely manner and better meets the needs of clinical treatment. However, the accuracy of the new noninvasive, light-absorption-based SvO2 measurement has to be further improved.

  15. Heart rate measurement based on face video sequence

    NASA Astrophysics Data System (ADS)

    Xu, Fang; Zhou, Qin-Wu; Wu, Peng; Chen, Xing; Yang, Xiaofeng; Yan, Hong-jian

    2015-03-01

    This paper proposes a new non-contact heart rate measurement method based on photoplethysmography (PPG) theory. With this method we can measure heart rate remotely with a camera and ambient light. We collected video sequences of subjects and detected remote PPG signals from the video sequences. The remote PPG signals were analyzed with two methods, Blind Source Separation Technology (BSST) and Cross Spectral Power Technology (CSPT). BSST is a commonly used method, whereas CSPT is used for the first time in the study of remote PPG signals in this paper. Both methods can recover heart rate, but compared with BSST, CSPT has a clearer physical meaning and lower computational complexity. Our work shows that heart rates detected by the CSPT method are in good agreement with the heart rates measured by a finger-clip oximeter. With good accuracy and low computational complexity, the CSPT method has good prospects for application in home medical devices and mobile health devices.
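
    A simplified stand-in for the spectral analysis is to band-pass the mean green-channel trace of the face region and read the heart rate from the dominant spectral peak; this is not the paper's BSST or CSPT pipeline, and the 0.7-3 Hz band and filter order are assumptions:

      import numpy as np
      from scipy.signal import butter, filtfilt

      def heart_rate_from_trace(green_trace, fps):
          """green_trace: mean green value of the face region per video frame.
          Returns heart rate in beats per minute from the dominant peak in the
          0.7-3 Hz (42-180 bpm) band."""
          x = np.asarray(green_trace, float)
          x = x - x.mean()
          b, a = butter(3, [0.7, 3.0], btype="bandpass", fs=fps)
          x = filtfilt(b, a, x)
          spec = np.abs(np.fft.rfft(x))
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
          band = (freqs >= 0.7) & (freqs <= 3.0)
          return 60.0 * freqs[band][np.argmax(spec[band])]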

  16. Method for measuring target rotation angle by theodolites

    NASA Astrophysics Data System (ADS)

    Sun, Zelin; Wang, Zhao; Zhai, Huanchun; Yang, Xiaoxu

    2013-05-01

    To overcome the disadvantages of current theodolite-based measurement methods in environments with shock, long working hours and similar constraints, this paper proposes a new method for 3D coordinate measurement based on an immovable measuring coordinate system. According to the measuring principle, the mathematical model is established and the measurement uncertainty is analysed. The measurement uncertainty of the new method is a function of the theodolite observation angles and their uncertainty, and can be reduced by optimizing the theodolites' placement. Compared to other methods, this method allows the theodolite positions to be changed during the measuring process, and mutual collimation between the theodolites is not required. The experimental results show that the measurement model and the optimal placement principle are correct, and the measurement error is less than 0.01° after optimizing the theodolites' placement.

  17. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

    PubMed Central

    Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario

    2014-01-01

    Background: Selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are examined using several medical examples. We present two clustering examples with ordinal variables, a more challenging variable type to analyze, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: by using an appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is achieved. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565

  18. Statistical approaches to lifetime measurements with restricted observation times

    NASA Astrophysics Data System (ADS)

    Chen, X. C.; Zeng, Q.; Litvinov, Yu. A.; Tu, X. L.; Walker, P. M.; Wang, M.; Wang, Q.; Yue, K.; Zhang, Y. H.

    2017-09-01

    Two generic methods based on frequentism and Bayesianism are presented in this work aiming to adequately estimate decay lifetimes from measured data, while accounting for restricted observation times in the measurements. All the experimental scenarios that can possibly arise from the observation constraints are treated systematically and formulas are derived. The methods are then tested against the decay data of bare isomeric 94mRu44+ ions, which were measured using isochronous mass spectrometry with a timing detector at the CSRe in Lanzhou, China. Applying both methods in three distinct scenarios yields six different but consistent lifetime estimates. The deduced values are all in good agreement with a prediction based on the neutral-atom value modified to take the absence of internal conversion into account. Potential applications of such methods are discussed.
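
    As a hedged illustration of the frequentist side of such an analysis, the sketch below fits a decay lifetime when events are only observed inside a finite window [0, T]: the truncated-exponential likelihood is maximized numerically. This is a generic textbook-style treatment of restricted observation times, not the specific formulas or scenarios derived in the paper.

```python
# Minimal frequentist sketch: maximum-likelihood lifetime estimate when decays
# are only observed within a restricted window [0, T]. The truncated-exponential
# likelihood below is a generic illustration, not the paper's derivation.
import numpy as np
from scipy.optimize import minimize_scalar

def mle_lifetime(times, T_obs):
    """Lifetime tau maximizing the truncated-exponential likelihood
    f(t|tau) = exp(-t/tau) / (tau * (1 - exp(-T_obs/tau))), 0 <= t <= T_obs."""
    t = np.asarray(times, dtype=float)

    def neg_log_like(tau):
        return (np.sum(t) / tau + t.size * np.log(tau)
                + t.size * np.log1p(-np.exp(-T_obs / tau)))

    res = minimize_scalar(neg_log_like, bounds=(1e-3 * T_obs, 1e3 * T_obs),
                          method="bounded")
    return res.x

# Example: true lifetime 5 s, observation window 8 s.
rng = np.random.default_rng(1)
raw = rng.exponential(5.0, size=20000)
observed = raw[raw <= 8.0]                    # decays outside the window are lost
print(round(mle_lifetime(observed, 8.0), 2))  # close to 5.0 despite truncation
```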

  19. Effects of reconstructed magnetic field from sparse noisy boundary measurements on localization of active neural source.

    PubMed

    Shen, Hui-min; Lee, Kok-Meng; Hu, Liang; Foong, Shaohui; Fu, Xin

    2016-01-01

    Localization of active neural source (ANS) from measurements on head surface is vital in magnetoencephalography. As neuron-generated magnetic fields are extremely weak, significant uncertainties caused by stochastic measurement interference complicate its localization. This paper presents a novel computational method based on reconstructed magnetic field from sparse noisy measurements for enhanced ANS localization by suppressing effects of unrelated noise. In this approach, the magnetic flux density (MFD) in the nearby current-free space outside the head is reconstructed from measurements through formulating the infinite series solution of the Laplace's equation, where boundary condition (BC) integrals over the entire measurements provide "smooth" reconstructed MFD with the decrease in unrelated noise. Using a gradient-based method, reconstructed MFDs with good fidelity are selected for enhanced ANS localization. The reconstruction model, spatial interpolation of BC, parametric equivalent current dipole-based inverse estimation algorithm using reconstruction, and gradient-based selection are detailed and validated. The influences of various source depths and measurement signal-to-noise ratio levels on the estimated ANS location are analyzed numerically and compared with a traditional method (where measurements are directly used), and it was demonstrated that gradient-selected high-fidelity reconstructed data can effectively improve the accuracy of ANS localization.

  20. An image-based method to measure all-terrain vehicle dimensions for engineering safety purposes.

    PubMed

    Jennissen, Charles A; Miller, Nathan S; Tang, Kaiyang; Denning, Gerene M

    2014-04-01

    All-terrain vehicle (ATV) crashes are a serious public health and safety concern. Engineering approaches that address ATV injury prevention are critically needed. Avenues to pursue include evidence-based seat design that decreases risky behaviours, such as carrying passengers and operation of adult-size vehicles by children. The goal of this study was to create and validate an image-based method to measure ATV seat length and placement. Publicly available ATV images were downloaded. Adobe Photoshop was then used to generate a vertical grid through the centre of the vehicle, to define the grid scale using the manufacturer's reported wheelbase, and to determine seat length and placement relative to the front and rear axles using this scale. Images that yielded a difference greater than 5% between the calculated and the manufacturer's reported ATV lengths were excluded from further analysis. For the 77 images that met inclusion criteria, the mean±SD for the difference in calculated versus reported vehicle length was 1.8%±1.2%. The Pearson correlation coefficient for comparing image-based seat lengths determined by two independent measurers (20 models) and image-based lengths versus lengths measured at dealerships (12 models) were 0.95 and 0.96, respectively. The image-based method provides accurate and reproducible results for determining ATV measurements, including seat length and placement. This method greatly expands the number of ATV models that can be studied, and may be generalisable to other motor vehicle types. These measurements can be used to guide engineering approaches that improve ATV safety design.
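
    The scaling step described above is simple arithmetic and can be made concrete with a short sketch: the manufacturer's wheelbase fixes the metres-per-pixel scale, which is then applied to the seat pixels, and the 5% length check decides whether an image is kept. All numeric values below are hypothetical, for illustration only.

```python
# Minimal sketch of the image-scaling idea: the reported wheelbase defines the
# pixel-to-length scale, which is then applied to the measured seat pixels.
def seat_length_from_image(wheelbase_m, wheelbase_px, seat_px):
    scale = wheelbase_m / wheelbase_px        # metres per pixel along the grid
    return seat_px * scale

def percent_length_error(calculated_m, reported_m):
    """Inclusion check used in the study: keep images with <= 5% difference."""
    return 100.0 * abs(calculated_m - reported_m) / reported_m

# Hypothetical numbers, not measurements from the study.
seat_m = seat_length_from_image(wheelbase_m=1.27, wheelbase_px=508, seat_px=312)
print(round(seat_m, 3), "m seat length (hypothetical example)")
print(percent_length_error(2.11, 2.08) <= 5.0)   # would this image be kept?
```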

  1. Development of Gravity Acceleration Measurement Using Simple Harmonic Motion Pendulum Method Based on Digital Technology and Photogate Sensor

    NASA Astrophysics Data System (ADS)

    Yulkifli; Afandi, Zurian; Yohandri

    2018-04-01

    The development of a gravitational acceleration measurement using the simple harmonic motion pendulum method, digital technology and a photogate sensor has been carried out. Digital technology is more practical and optimizes the time of experimentation. The pendulum method calculates the acceleration of gravity using a solid ball connected to a rope attached to a stand. The pendulum is swung at a small angle, resulting in simple harmonic motion. The measurement system consists of a power supply, photogate sensors, an Arduino Pro Mini and a seven-segment display. The Arduino Pro Mini receives digital data from the photogate sensor and processes it into the timing of the pendulum oscillation. The calculated pendulum oscillation time is shown on the seven-segment display. Based on the measured data, the accuracy and precision of the experimental system are 98.76% and 99.81%, respectively. These results indicate that the system can be used in physics experiments, especially in the determination of the gravitational acceleration.
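
    The underlying physics is the small-angle pendulum relation g = 4π²L/T². A minimal sketch, assuming the photogate yields a list of measured periods, is given below; the length and timing values are made-up illustrative numbers, not data from the experiment.

```python
# Minimal sketch of the simple-pendulum relation g = 4*pi^2 * L / T^2, with the
# period taken as the mean of several photogate timings (hypothetical values).
import math

def gravity_from_pendulum(length_m, periods_s):
    T = sum(periods_s) / len(periods_s)       # mean oscillation period
    return 4.0 * math.pi ** 2 * length_m / T ** 2

timings = [1.736, 1.741, 1.738, 1.739]        # hypothetical photogate periods (s)
print(round(gravity_from_pendulum(0.75, timings), 3), "m/s^2")
```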

  2. A study on new method of noninvasive esophageal venous pressure measurement based on the airflow and laser detection technology.

    PubMed

    Hu, Chenghuan; Huang, Feizhou; Zhang, Rui; Zhu, Shaihong; Nie, Wanpin; Liu, Xunyang; Liu, Yinglong; Li, Peng

    2015-01-01

    Using optics combined with automatic control and real-time computer image detection technology, a novel noninvasive, noncontact pressure manometry method based on airflow and laser detection technology was developed in this study. The new esophageal venous pressure measurement system was tested in in-vitro experiments. A stable and adjustable pulse stream was produced by a self-developed pump, and a laser-emitting apparatus generated optical signals that were captured by the image acquisition and analysis program. A synchronization system measured the changes in air pressure and the deformation of the vein wall at the same time, capturing the vascular deformation while recording the corresponding pressure value. The results of this study indicated that the pressure values obtained by the new method correlate well with the actual pressure values in animal experiments. The new method of noninvasive pressure measurement based on airflow and laser detection technology is accurate, feasible and repeatable, and has good application prospects.

  3. An ultra-wide bandwidth-based range/GPS tight integration approach for relative positioning in vehicular ad hoc networks

    NASA Astrophysics Data System (ADS)

    Shen, Feng; Wayn Cheong, Joon; Dempster, Andrew G.

    2015-04-01

    Relative position awareness is a vital premise for the implementation of emerging intelligent transportation systems, such as collision warning. However, commercial global navigation satellite system (GNSS) receivers do not satisfy the requirements of these applications. Fortunately, cooperative positioning (CP) techniques, by sharing GNSS measurements between vehicles, can improve the performance of relative positioning in a vehicular ad hoc network (VANET). In this paper, assuming there are no obstacles between vehicles, a new enhanced tightly coupled CP technique is presented that adds ultra-wide bandwidth (UWB)-based inter-vehicular range measurements. In the proposed CP method, each vehicle fuses the GPS measurements with the inter-vehicular range measurements. Based on analytical and experimental results, in a full GPS coverage environment the new tight integration CP method outperforms the INS-aided tight CP method, the tight CP method, and DGPS by 11%, 15%, and 24%, respectively; in the GPS outage scenario, the performance improvements reach 60%, 65%, and 73%, respectively.

  4. Flexible, multi-measurement guided wave damage detection under varying temperatures

    NASA Astrophysics Data System (ADS)

    Douglass, Alexander C. S.; Harley, Joel B.

    2018-04-01

    Temperature compensation in structural health monitoring helps identify damage in a structure by removing data variations due to environmental conditions, such as temperature. Stretch-based methods are among the most commonly used temperature compensation methods. To account for variations in temperature, stretch-based methods stretch signals in time to optimally match a measurement to a baseline. All of the data are then compared with the single baseline to determine the presence of damage. Yet, for these methods to be effective, the measurement and the baseline must satisfy the inherent assumptions of the temperature compensation method. In many scenarios, these assumptions do not hold, the methods generate error, and damage detection fails. To improve damage detection, a multi-measurement damage detection method is introduced. By using each measurement in the dataset as a baseline, the error caused by imperfect temperature compensation is reduced. The multi-measurement method increases the detection effectiveness of our damage metric, or damage indicator, over time and reduces the presence of additional peaks caused by temperature that could be mistaken for damage. By using many baselines, the variance of the damage indicator is reduced and the effects of damage are amplified. Notably, the multi-measurement method improves damage detection over single-measurement methods. This is demonstrated through an increase in the maximum of our damage signature from 0.55 to 0.95 (where large values, up to a maximum of one, represent a statistically significant change in the data due to damage).
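
    A hedged sketch of the idea follows: a measurement is resampled over a small range of stretch factors and compared with a baseline, and the multi-measurement variant keeps the best match over many baselines. The residual metric, stretch range and signals are illustrative assumptions, not the authors' exact damage indicator.

```python
# Minimal sketch of stretch-based temperature compensation with multiple
# baselines. A generic normalized residual stands in for the damage indicator.
import numpy as np

def best_stretch_residual(baseline, measurement,
                          factors=np.linspace(0.98, 1.02, 81)):
    """Smallest normalized residual over candidate time-stretch factors."""
    t = np.arange(len(baseline), dtype=float)
    best = np.inf
    for a in factors:
        stretched = np.interp(t, a * t, measurement)   # stretch the time axis by a
        r = np.linalg.norm(stretched - baseline) / np.linalg.norm(baseline)
        best = min(best, r)
    return best

def multi_baseline_indicator(measurement, baselines):
    """Keep the best (smallest) residual over all available baselines."""
    return min(best_stretch_residual(b, measurement) for b in baselines)

# Example: the measurement is a slightly time-stretched copy of one baseline.
t = np.linspace(0, 1, 500)
base1 = np.sin(2 * np.pi * 40 * t) * np.exp(-3 * t)
base2 = np.sin(2 * np.pi * 40 * t + 0.4) * np.exp(-3 * t)
meas = np.interp(t, 1.01 * t, base1)                   # about 1% thermal stretch
print(round(multi_baseline_indicator(meas, [base1, base2]), 3))
```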

  5. A Profilometry-Based Dentifrice Abrasion Method for V8 Brushing Machines Part II: Comparison of RDA-PE and Radiotracer RDA Measures.

    PubMed

    Schneiderman, Eva; Colón, Ellen; White, Donald J; St John, Samuel

    2015-01-01

    The purpose of this study was to compare the abrasivity of commercial dentifrices by two techniques: the conventional gold standard radiotracer-based Radioactive Dentin Abrasivity (RDA) method; and a newly validated technique based on V8 brushing that included a profilometry-based evaluation of dentin wear. This profilometry-based method is referred to as RDA-Profilometry Equivalent, or RDA-PE. A total of 36 dentifrices were sourced from four global dentifrice markets (Asia Pacific [including China], Europe, Latin America, and North America) and tested blindly using both the standard radiotracer (RDA) method and the new profilometry method (RDA-PE), taking care to follow specific details related to specimen preparation and treatment. Commercial dentifrices tested exhibited a wide range of abrasivity, with virtually all falling well under the industry accepted upper limit of 250; that is, 2.5 times the level of abrasion measured using an ISO 11609 abrasivity reference calcium pyrophosphate as the reference control. RDA and RDA-PE comparisons were linear across the entire range of abrasivity (r2 = 0.7102) and both measures exhibited similar reproducibility with replicate assessments. RDA-PE assessments were not just linearly correlated, but were also proportional to conventional RDA measures. The linearity and proportionality of the results of the current study support that both methods (RDA or RDA-PE) provide similar results and justify a rationale for making the upper abrasivity limit of 250 apply to both RDA and RDA-PE.

  6. A Comparison of Measurement Equivalence Methods Based on Confirmatory Factor Analysis and Item Response Theory.

    ERIC Educational Resources Information Center

    Flowers, Claudia P.; Raju, Nambury S.; Oshima, T. C.

    Current interest in the assessment of measurement equivalence emphasizes two methods of analysis, linear and nonlinear procedures. This study simulated data using the graded response model to examine the performance of linear (confirmatory factor analysis or CFA) and nonlinear (item-response-theory-based differential item function or IRT-Based…

  7. Calculation of acoustic field based on laser-measured vibration velocities on ultrasonic transducer surface

    NASA Astrophysics Data System (ADS)

    Hu, Liang; Zhao, Nannan; Gao, Zhijian; Mao, Kai; Chen, Wenyu; Fu, Xin

    2018-05-01

    Determination of the distribution of a generated acoustic field is valuable for studying ultrasonic transducers, for example by providing guidance for transducer design and a basis for analyzing their performance. A method for calculating the acoustic field based on laser-measured vibration velocities on the ultrasonic transducer surface is proposed in this paper. Without knowing the inner structure of the transducer, the acoustic field outside it can be calculated by solving the governing partial differential equation (PDE) of the field based on the specified boundary conditions (BCs). In our study, the BC on the transducer surface, i.e. the distribution of the vibration velocity on the surface, is accurately determined by laser scanning measurement of discrete points, followed by a data-fitting computation. In addition, to ensure the calculation accuracy for the whole field even in an inhomogeneous medium, a finite element method is used to solve the governing PDE based on the mixed BCs, including the discretely measured velocity data and other specified BCs. The method is first validated on numerical piezoelectric transducer models. The acoustic pressure distributions generated by a transducer operating in a homogeneous and an inhomogeneous medium, respectively, are both calculated by the proposed method and compared with the results from other existing methods. The method is then further validated experimentally with two actual ultrasonic transducers used for flow measurement in our lab. The amplitude change of the output voltage signal from the receiver transducer due to changing the relative position of the two transducers is calculated by the proposed method and compared with the experimental data. This method can also provide the basis for complex multi-physics coupling computations where the effect of the acoustic field should be taken into account.

  8. Volumetric error modeling, identification and compensation based on screw theory for a large multi-axis propeller-measuring machine

    NASA Astrophysics Data System (ADS)

    Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu

    2018-05-01

    Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.

  9. A Pilot Study of a Novel Method of Measuring Stigma about Depression Developed for Latinos in the Faith-Based Setting.

    PubMed

    Caplan, Susan

    2016-08-01

    In order to understand the effects of interventions designed to reduce stigma about mental illness, we need valid measures. However, the validity of commonly used measures is compromised by social desirability bias. The purpose of this pilot study was to test an anonymous method of measuring stigma in the community setting. The method of data collection, Preguntas con Cartas (Questions with Cards), used numbered playing cards to conduct anonymous group polling about stigmatizing beliefs during a mental health literacy intervention. An analysis of the difference between the Preguntas con Cartas stigma votes and the corresponding face-to-face individual survey results for the same seven stigma questions indicated that there was a statistically significant difference in the distributions between the two methods of data collection (χ² = 8.27, p = 0.016). This exploratory study has shown the potential effectiveness of Preguntas con Cartas as a novel method of measuring stigma in the community-based setting.

  10. Determining surface areas of marine alga cells by acid-base titration method.

    PubMed

    Wang, X; Ma, Y; Su, Y

    1997-09-01

    A new method for determining the surface area of living marine alga cells is described. The method uses acid-base titration to measure the amount of acid/base sites on the surface of alga cells and uses the BET (Brunauer, Emmett, and Teller) equation to estimate the maximum surface acid/base amount, assuming that hydrous cell walls have carbohydrates or other structural compounds which can behave like surface Brønsted acid-base sites due to coordination of environmental H2O molecules. The method was applied to 18 diverse alga species (including 7 diatoms, 2 flagellates, 8 green algae and 1 red alga) maintained in seawater cultures. For the species examined, the surface areas of individual cells ranged from 2.8 x 10(-8) m2 for Nannochloropsis oculata to 690 x 10(-8) m2 for Dunaliella viridis, and specific surface areas ranged from 1,030 m2.g-1 for Dunaliella salina to 28,900 m2.g-1 for Pyramidomonas sp. The measurement accuracy was 15.2%. Preliminary studies show that the method may be more promising and accurate than light/electron microscopic measurements for coarse estimation of the surface area of living algae.

  11. Ultrasensitive low noise voltage amplifier for spectral analysis.

    PubMed

    Giusi, G; Crupi, F; Pace, C

    2008-08-01

    Recently we have proposed several voltage noise measurement methods that allow, at least in principle, the complete elimination of the noise introduced by the measurement amplifier. The most severe drawback of these methods is that they require a multistep measurement procedure. Since environmental conditions may change in the different measurement steps, the final result could be affected by these changes. This problem is solved by the one-step voltage noise measurement methodology based on a novel amplifier topology proposed in this paper. Circuit implementations for the amplifier building blocks based on operational amplifiers are critically discussed. The proposed approach is validated through measurements performed on a prototype circuit.

  12. Study of Current Measurement Method Based on Circular Magnetic Field Sensing Array

    PubMed Central

    Li, Zhenhua; Zhang, Siqiu; Wu, Zhengtian; Tao, Yuan

    2018-01-01

    Classic core-based instrument transformers are prone to magnetic saturation, which affects their measurement accuracy and limits their application in measuring large direct currents (DC). Moreover, protection and control systems may malfunction because of such measurement errors. This paper presents a more accurate method for current measurement based on a circular magnetic field sensing array. The proposed measurement approach utilizes multiple Hall sensors that are evenly distributed on a circle, and the average value over all Hall sensors is regarded as the final measurement. The calculation model is established for the case of magnetic field interference from a parallel wire, and the simulation results show that the error decreases significantly when the number of Hall sensors n is greater than 8. The measurement error is less than 0.06% when the wire spacing is greater than 2.5 times the radius of the sensor array. A simulation study of an off-center primary conductor is also conducted, and a Hall sensor compensation method is adopted to improve the accuracy. The simulation and test results indicate that the measurement error of the system is less than 0.1%. PMID:29734742
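
    The averaging step can be illustrated with a short sketch based on a discrete form of Ampère's law: N evenly spaced sensors measure the tangential field on a circle, and their average, scaled by 2πR/μ0, estimates the enclosed current, with the contribution of an external parallel wire largely cancelling in the average. The geometry and current values below are illustrative assumptions, not the paper's test setup.

```python
# Minimal sketch of the circular sensing-array idea: average the tangential
# field of N evenly spaced Hall sensors and apply a discrete Ampere's law.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def tangential_field(sensor_xy, tangent_xy, wire_xy, current):
    """Tangential B component at one sensor due to an infinite straight wire."""
    r = sensor_xy - wire_xy
    b = MU0 * current / (2 * np.pi * np.dot(r, r)) * np.array([-r[1], r[0]])
    return float(np.dot(b, tangent_xy))

def estimate_current(n_sensors, radius, primary_xy, primary_i, noise_xy, noise_i):
    angles = 2 * np.pi * np.arange(n_sensors) / n_sensors
    total = 0.0
    for a in angles:
        pos = radius * np.array([np.cos(a), np.sin(a)])
        tan = np.array([-np.sin(a), np.cos(a)])          # unit tangent direction
        total += (tangential_field(pos, tan, primary_xy, primary_i)
                  + tangential_field(pos, tan, noise_xy, noise_i))
    # Discrete Ampere's law: I ~ (average tangential B) * 2*pi*R / mu0
    return (total / n_sensors) * 2 * np.pi * radius / MU0

# Off-centre 1000 A primary conductor, 800 A interfering wire 2.5 radii away
# (illustrative geometry only).
i_hat = estimate_current(n_sensors=12, radius=0.05,
                         primary_xy=np.array([0.01, 0.0]), primary_i=1000.0,
                         noise_xy=np.array([0.125, 0.0]), noise_i=800.0)
print(round(i_hat, 1), "A estimated vs 1000 A true")
```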

  13. Study of Current Measurement Method Based on Circular Magnetic Field Sensing Array.

    PubMed

    Li, Zhenhua; Zhang, Siqiu; Wu, Zhengtian; Abu-Siada, Ahmed; Tao, Yuan

    2018-05-05

    Classic core-based instrument transformers are prone to magnetic saturation, which affects their measurement accuracy and limits their application in measuring large direct currents (DC). Moreover, protection and control systems may malfunction because of such measurement errors. This paper presents a more accurate method for current measurement based on a circular magnetic field sensing array. The proposed measurement approach utilizes multiple Hall sensors that are evenly distributed on a circle, and the average value over all Hall sensors is regarded as the final measurement. The calculation model is established for the case of magnetic field interference from a parallel wire, and the simulation results show that the error decreases significantly when the number of Hall sensors n is greater than 8. The measurement error is less than 0.06% when the wire spacing is greater than 2.5 times the radius of the sensor array. A simulation study of an off-center primary conductor is also conducted, and a Hall sensor compensation method is adopted to improve the accuracy. The simulation and test results indicate that the measurement error of the system is less than 0.1%.

  14. [Determination of net exchange of CO2 between paddy fields and atmosphere with static opaque-chamber-based measurements].

    PubMed

    Zheng, Xunhua; Xu, Zhongjun; Wang, Yuesi; Han, Shenghui; Huang, Yao; Cai, Zucong; Zhu, Jianguo

    2002-10-01

    We first introduce a method for determining the net ecosystem exchange flux of CO2 (NEE) between croplands and the atmosphere, based on field measurements using a static opaque-chamber/gas chromatography technique, and describe its application in a FACE (free-air CO2 enrichment) study examining the effects of elevated CO2 on the NEE of a typical paddy ecosystem. Because observation data were lacking for some necessary parameters, e.g., the dark maintenance respiration coefficient, only the minimum value of NEE (NEEmin) was calculated from the opaque-chamber measurements. The NEEmin data indicate that CO2 elevated by 200 +/- 40 micromol.mol-1 significantly increased the ecosystem uptake of atmospheric CO2, by a factor of ca. 3. To accurately determine the NEE from opaque-chamber measurements, the dark maintenance respiration coefficient, the above-ground biomass and the root:shoot (R:S) ratio should be observed over the whole growing season.

  15. A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure.

    PubMed

    Zhou, Yang; Wu, Dewei

    2016-01-01

    Generating visual place cells (VPCs) is an important topic in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and the existing methods for generating VPCs, a model of generating visual place cells based on environment perception and similar measure is abstracted in this paper. The VPC generation process is divided into three phases: environment perception, similar measure, and recruitment of a new place cell. According to this process, a specific method for generating VPCs is presented. External reference landmarks are obtained based on local invariant image features, and a similar-measure function is designed based on Euclidean distance and a Gaussian function. Simulations validate that the proposed method is effective. The firing characteristics of the generated VPCs are similar to those of biological place cells, and the VPCs' firing fields can be adjusted flexibly by changing the adjustment factor of the firing field (AFFF) and the firing rate threshold (FRT).
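
    The similar-measure step (a Gaussian function of Euclidean distance, with recruitment of a new place cell when no existing cell responds strongly enough) can be sketched as follows. Parameter names, the recruitment threshold and the feature vectors are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a Gaussian similar-measure function and place-cell
# recruitment. Thresholds and features are hypothetical.
import numpy as np

def similarity(features, template, sigma=1.0):
    """Gaussian similarity in [0, 1] based on Euclidean distance."""
    d = np.linalg.norm(np.asarray(features) - np.asarray(template))
    return float(np.exp(-d ** 2 / (2.0 * sigma ** 2)))

def update_place_cells(features, templates, recruit_threshold=0.3, sigma=1.0):
    """Return firing rates of existing cells, recruiting a new cell if needed."""
    rates = [similarity(features, t, sigma) for t in templates]
    if not rates or max(rates) < recruit_threshold:
        templates.append(np.asarray(features, dtype=float))  # new place cell
        rates.append(1.0)                                     # fires at its centre
    return rates

cells = []
print(update_place_cells([0.0, 0.0], cells))   # first view recruits a cell
print(update_place_cells([0.2, 0.1], cells))   # nearby view reuses it
print(update_place_cells([5.0, 5.0], cells))   # far view recruits another
```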

  16. A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure

    PubMed Central

    2016-01-01

    Generating visual place cells (VPCs) is an important topic in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and the existing methods for generating VPCs, a model of generating visual place cells based on environment perception and similar measure is abstracted in this paper. The VPC generation process is divided into three phases: environment perception, similar measure, and recruitment of a new place cell. According to this process, a specific method for generating VPCs is presented. External reference landmarks are obtained based on local invariant image features, and a similar-measure function is designed based on Euclidean distance and a Gaussian function. Simulations validate that the proposed method is effective. The firing characteristics of the generated VPCs are similar to those of biological place cells, and the VPCs' firing fields can be adjusted flexibly by changing the adjustment factor of the firing field (AFFF) and the firing rate threshold (FRT). PMID:27597859

  17. A calibration method for fringe reflection technique based on the analytical phase-slope description

    NASA Astrophysics Data System (ADS)

    Wu, Yuxiang; Yue, Huimin; Pan, Zhipeng; Liu, Yong

    2018-05-01

    The fringe reflection technique (FRT) has become one of the most popular methods for measuring the shape of specular surfaces in recent years. Existing system calibration methods for the FRT usually contain two parts, camera calibration and geometric calibration. In the geometric calibration, the liquid crystal display (LCD) screen position calibration is one of the most difficult steps among all the calibration procedures, and its accuracy is affected by factors such as imaging aberration, plane mirror flatness, and the accuracy of the LCD screen pixel size. In this paper, based on the deduction of an analytical phase-slope description of the FRT, we present a novel calibration method that does not require calibrating the position of the LCD screen. Moreover, the system can be arranged arbitrarily, and the imaging system can be either telecentric or non-telecentric. In our experiment measuring a 5000 mm radius spherical mirror, the proposed calibration method achieves a measurement error 2.5 times smaller than that of the geometric calibration method. In the wafer surface measurement experiment, the result obtained with the proposed calibration method is closer to the interferometer result than that of the geometric calibration method.

  18. Comparative study of performance of neutral axis tracking based damage detection

    NASA Astrophysics Data System (ADS)

    Soman, R.; Malinowski, P.; Ostachowicz, W.

    2015-07-01

    This paper presents a comparative study of a novel SHM technique for damage isolation. The performance of the Neutral Axis (NA) tracking based damage detection strategy is compared with other popular vibration-based damage detection methods, viz. ECOMAC, the Mode Shape Curvature Method and the Strain Flexibility Index Method. The sensitivity of the novel method is compared under changing ambient temperature conditions and in the presence of measurement noise. A Finite Element Analysis (FEA) of the DTU 10 MW Wind Turbine was conducted to compare the local damage identification capability of each method, and the results are presented. Under the conditions examined, the proposed method was found to be robust to ambient condition changes and measurement noise. Its damage identification is either on par with the methods reported in the literature or better under the investigated damage scenarios.

  19. Fluence-based and microdosimetric event-based methods for radiation protection in space

    NASA Technical Reports Server (NTRS)

    Curtis, Stanley B.; Meinhold, C. B. (Principal Investigator)

    2002-01-01

    The National Council on Radiation Protection and Measurements (NCRP) has recently published a report (Report #137) that discusses various aspects of the concepts used in radiation protection and the difficulties in measuring the radiation environment in spacecraft for the estimation of radiation risk to space travelers. Two novel dosimetric methodologies, fluence-based and microdosimetric event-based methods, are discussed and evaluated, along with the more conventional quality factor/LET method. It was concluded that, for the present, there is no compelling reason to switch to a new methodology. It is suggested, however, that because of certain drawbacks in the presently used conventional method, these alternative methodologies should be kept in mind. As new data become available and dosimetric techniques become more refined, the question should be revisited, and significant improvement might be realized in the future. In addition, concepts such as equivalent dose and organ dose equivalent are discussed, and various problems regarding the measurement/estimation of these quantities are presented.

  20. Simple Radiowave-Based Method For Measuring Peripheral Blood Flow Project

    NASA Technical Reports Server (NTRS)

    Oliva-Buisson, Yvette J.

    2014-01-01

    The project objective is to design small radio-frequency-based flow probes for the measurement of blood flow velocity in peripheral arteries such as the femoral artery and middle cerebral artery. The result will be the technological capability to measure peripheral blood flow rates and flow changes during various environmental stressors, such as microgravity, without contact with the individual being monitored. This technology may also lead to an easier method of detecting venous gas emboli during extravehicular activities.

  1. Quantitative data standardization of X-ray based densitometry methods

    NASA Astrophysics Data System (ADS)

    Sergunova, K. A.; Petraikin, A. V.; Petrjajkin, F. A.; Akhmad, K. S.; Semenov, D. S.; Potrakhov, N. N.

    2018-02-01

    In the present work, the design of a special liquid phantom for assessing the accuracy of quantitative densitometric data is proposed. The dependencies between the measured bone mineral density (BMD) values and the given values are also presented for different X-ray based densitometry techniques. The resulting linear plots make it possible to introduce correction factors that increase the accuracy of BMD measurement by the QCT, DXA and DECT methods, and to use them for standardization and comparison of measurements.

  2. Development of Michelson interferometer based spatial phase-shift digital shearography

    NASA Astrophysics Data System (ADS)

    Xie, Xin

    Digital shearography is a non-contact, full-field optical measurement method that can directly measure the gradient of deformation. For high measurement sensitivity, a phase evaluation method has to be introduced into digital shearography through a phase-shift technique. Categorized by the phase-shift method, digital phase-shift shearography can be divided into Temporal Phase-Shift Digital Shearography (TPS-DS) and Spatial Phase-Shift Digital Shearography (SPS-DS). TPS-DS is the most widely used phase-shift shearography system, due to its simple algorithm, easy operation and good phase-map quality. However, the application of TPS-DS is limited to static/step-by-step loading measurements because of its multi-step shifting process. In order to measure strain under dynamic/continuous loading, an SPS-DS system has to be developed. This dissertation aims to develop a series of Michelson interferometer based SPS-DS measurement methods that achieve strain measurement using only a single pair of speckle pattern images. The Michelson interferometer based SPS-DS systems use specially designed optical setups to introduce an extra carrier frequency into the laser wavefront. The phase information corresponding to the strain field can be separated in the Fourier domain using a Fourier transform and can further be evaluated with a windowed inverse Fourier transform. With different optical setups and carrier frequency arrangements, the Michelson interferometer based SPS-DS method is capable of performing a variety of measurement tasks using only a single pair of speckle pattern images. Categorized by the target measurand, these measurement tasks can be divided into five categories: 1) measurement of the out-of-plane strain field with a small shearing amount; 2) measurement of the relative out-of-plane deformation field with a large shearing amount; 3) simultaneous measurement of the relative out-of-plane deformation field and the deformation gradient field using multiple carrier frequencies; 4) simultaneous measurement of the strain fields in two directions using dual measurement channels; and 5) measurement of pure in-plane strain and pure out-of-plane strain with multiple carrier frequencies. The basic theory, optical path analysis, preliminary studies, results analysis and research plan are presented in detail in this dissertation.

  3. Analysis of rocket flight stability based on optical image measurement

    NASA Astrophysics Data System (ADS)

    Cui, Shuhua; Liu, Junhu; Shen, Si; Wang, Min; Liu, Jun

    2018-02-01

    Based on the abundant optical image measurement data, this paper puts forward a method of evaluating rocket flight stability using imaging measurements of the carrier rocket's characteristics. Building on these characteristic measurements, the attitude parameters of the rocket body in the coordinate system are calculated from the measurement data of multiple high-speed television cameras; these parameters are then converted to the rocket body's angle of attack, and it is assessed whether the rocket has good flight stability, i.e. whether it flies with a small angle of attack. The measurement method and the mathematical algorithm were verified through a data processing test, in which the rocket's flight stability state can be observed intuitively and guidance system faults can be identified visually for failure analysis.

  4. A brief measure of attitudes toward mixed methods research in psychology.

    PubMed

    Roberts, Lynne D; Povee, Kate

    2014-01-01

    The adoption of mixed methods research in psychology has trailed behind other social science disciplines. Teaching psychology students, academics, and practitioners about mixed methodologies may increase the use of mixed methods within the discipline. However, tailoring and evaluating education and training in mixed methodologies requires an understanding of, and way of measuring, attitudes toward mixed methods research in psychology. To date, no such measure exists. In this article we present the development and initial validation of a new measure: Attitudes toward Mixed Methods Research in Psychology. A pool of 42 items developed from previous qualitative research on attitudes toward mixed methods research, along with validation measures, was administered via an online survey to a convenience sample of 274 psychology students, academics and psychologists. Principal axis factoring with varimax rotation on a subset of the sample produced a four-factor, 12-item solution. Confirmatory factor analysis on a separate subset of the sample indicated that a higher-order four-factor model provided the best fit to the data. The four factors, 'Limited Exposure,' '(in)Compatibility,' 'Validity,' and 'Tokenistic Qualitative Component,' each have acceptable internal reliability. Known-groups validity analyses based on preferred research orientation and self-rated mixed methods research skills, and convergent and divergent validity analyses based on measures of attitudes toward psychology as a science and scientist and practitioner orientation, provide initial validation of the measure. This brief, internally reliable measure can be used in assessing attitudes toward mixed methods research in psychology, measuring change in attitudes as part of the evaluation of mixed methods education, and in larger research programs.

  5. Cerebral blood flow and autoregulation: current measurement techniques and prospects for noninvasive optical methods

    PubMed Central

    Fantini, Sergio; Sassaroli, Angelo; Tgavalekos, Kristen T.; Kornbluth, Joshua

    2016-01-01

    Abstract. Cerebral blood flow (CBF) and cerebral autoregulation (CA) are critically important to maintain proper brain perfusion and supply the brain with the necessary oxygen and energy substrates. Adequate brain perfusion is required to support normal brain function, to achieve successful aging, and to navigate acute and chronic medical conditions. We review the general principles of CBF measurements and the current techniques to measure CBF based on direct intravascular measurements, nuclear medicine, X-ray imaging, magnetic resonance imaging, ultrasound techniques, thermal diffusion, and optical methods. We also review techniques for arterial blood pressure measurements as well as theoretical and experimental methods for the assessment of CA, including recent approaches based on optical techniques. The assessment of cerebral perfusion in the clinical practice is also presented. The comprehensive description of principles, methods, and clinical requirements of CBF and CA measurements highlights the potentially important role that noninvasive optical methods can play in the assessment of neurovascular health. In fact, optical techniques have the ability to provide a noninvasive, quantitative, and continuous monitor of CBF and autoregulation. PMID:27403447

  6. Towards predicting the encoding capability of MR fingerprinting sequences.

    PubMed

    Sommer, K; Amthor, T; Doneva, M; Koken, P; Meineke, J; Börnert, P

    2017-09-01

    Sequence optimization and appropriate sequence selection are still an unmet need in magnetic resonance fingerprinting (MRF). The main challenge in MRF sequence design is the lack of an appropriate measure of a sequence's encoding capability. To find such a measure, three different candidates for judging the encoding capability were investigated: local and global dot-product-based measures judging dictionary entry similarity, as well as a Monte Carlo method that evaluates the noise propagation properties of an MRF sequence. The consistency of these measures for different sequence lengths as well as their capability to predict actual sequence performance in both phantom and in vivo measurements were analyzed. While the dot-product-based measures yielded inconsistent results for different sequence lengths, the Monte Carlo method was in good agreement with the phantom experiments. In particular, the Monte Carlo method could accurately predict the performance of different flip angle patterns in actual measurements. The proposed Monte Carlo method provides an appropriate measure of MRF sequence encoding capability and may be used for sequence optimization. Copyright © 2017 Elsevier Inc. All rights reserved.
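
    A dot-product-based similarity measure of the kind compared in the study can be sketched as below: the normalized inner product between neighbouring dictionary fingerprints, where values near 1 indicate entries that are hard to distinguish. The toy dictionary is a simple relaxation-curve model used only for illustration; it is not an actual MRF simulation or the paper's exact measure definition.

```python
# Minimal sketch of a dot-product-based encoding measure over a toy dictionary.
import numpy as np

def normalized_dot(a, b):
    return float(np.abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b)))

def neighbour_similarity(dictionary):
    """Mean similarity between consecutive dictionary entries (rows)."""
    sims = [normalized_dot(dictionary[i], dictionary[i + 1])
            for i in range(len(dictionary) - 1)]
    return float(np.mean(sims))

# Toy "fingerprints": exponential-like signal evolutions for a range of T1 values
# (hypothetical schedule, not an MRF flip-angle pattern).
t = np.linspace(0.01, 3.0, 200)                    # seconds
t1_values = np.linspace(0.5, 2.0, 30)              # seconds
dictionary = np.array([1.0 - 2.0 * np.exp(-t / t1) for t1 in t1_values])
print(round(neighbour_similarity(dictionary), 4))  # close to 1: weak encoding
```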

  7. Pulse Transit Time Based Continuous Cuffless Blood Pressure Estimation: A New Extension and A Comprehensive Evaluation.

    PubMed

    Ding, Xiaorong; Yan, Bryan P; Zhang, Yuan-Ting; Liu, Jing; Zhao, Ni; Tsang, Hon Ki

    2017-09-14

    Cuffless techniques enable continuous blood pressure (BP) measurement in an unobtrusive manner and thus have the potential to revolutionize conventional cuff-based approaches. This study extends the pulse transit time (PTT) based cuffless BP measurement method by introducing a new indicator, the photoplethysmogram (PPG) intensity ratio (PIR). The performance of the models using PTT and PIR was comprehensively evaluated in comparison with six models based on PTT alone. The validation was conducted on 33 subjects with and without hypertension, at rest and under various maneuvers with induced BP changes, and over an extended calibration interval. The results showed that, compared to the PTT models, the proposed methods achieved better accuracy in each subject group at the resting state and over a 24-hour calibration interval. Although the BP estimation errors under dynamic maneuvers and over the extended calibration interval increased significantly for all methods, the proposed methods still outperformed the compared methods in the latter situation. These findings suggest that an additional BP-related indicator other than PTT has added value for improving the accuracy of cuffless BP measurement. This study also offers insights into future research in cuffless BP measurement for tracking dynamic BP changes and over extended periods of time.

  8. Genetic Algorithm-Based Motion Estimation Method using Orientations and EMGs for Robot Controls

    PubMed Central

    Chae, Jeongsook; Jin, Yong; Sung, Yunsick

    2018-01-01

    Demand for interactive wearable devices is rapidly increasing with the development of smart devices. To accurately utilize wearable devices for remote robot control, the limited data they provide should be analyzed and utilized efficiently. For example, the motions of a wearable device, such as the Myo device, can be estimated by measuring its orientation and calculating a Bayesian probability based on these orientation data. Given that the Myo device can measure various types of data, the accuracy of its motion estimation can be increased by utilizing these additional data types. This paper proposes a motion estimation method based on weighted Bayesian probability and concurrently measured data, namely orientations and electromyograms (EMG). The most probable of the estimated motions is treated as the final estimated motion. Thus, recognition accuracy can be improved compared with traditional methods that employ only a single type of data. In our experiments, seven subjects performed five predefined motions. When orientation is measured by the traditional methods, the sum of the motion estimation errors is 37.3%; likewise, when only EMG data are used, the error in motion estimation was also 37.3%. The proposed combined method has an error of 25%. Therefore, the proposed method reduces motion estimation errors by 12%. PMID:29324641
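
    A hedged sketch of the fusion idea follows: per-motion probabilities obtained separately from orientation data and from EMG data are combined with weights, and the most probable motion is selected. The motion set, weights and probability values are illustrative assumptions, not the paper's trained model or its exact weighted Bayesian formulation.

```python
# Minimal sketch of weighted fusion of two per-motion probability estimates.
import numpy as np

MOTIONS = ["wave_in", "wave_out", "fist", "spread", "rest"]  # hypothetical set

def fuse_and_estimate(p_orientation, p_emg, w_orientation=0.6, w_emg=0.4):
    """Weighted combination of two probability vectors over the motion set."""
    p_o = np.asarray(p_orientation, dtype=float)
    p_e = np.asarray(p_emg, dtype=float)
    combined = w_orientation * p_o + w_emg * p_e
    combined /= combined.sum()                    # renormalize to a distribution
    return MOTIONS[int(np.argmax(combined))], combined

# Hypothetical per-sensor estimates for one gesture sample.
p_from_orientation = [0.30, 0.35, 0.15, 0.10, 0.10]
p_from_emg = [0.10, 0.55, 0.20, 0.10, 0.05]
motion, probs = fuse_and_estimate(p_from_orientation, p_from_emg)
print(motion, np.round(probs, 3))
```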

  9. Features calibration of the dynamic force transducers

    NASA Astrophysics Data System (ADS)

    Prilepko, M. Yu., D. Sc.; Lysenko, V. G.

    2018-04-01

    The article discusses calibration methods for dynamic force measuring instruments. The relevance of this work is dictated by the need for valid determination of the metrological characteristics of dynamic force transducers, taking into account their intended application. The aim of this work is to justify the choice of a calibration method that provides the metrological characteristics of dynamic force transducers under simulated operating conditions, in order to determine their suitability for use in accordance with their purpose. The following tasks are solved: the mathematical model and the main measurement equation for calibrating dynamic force transducers by load weight are formulated, and the main uncertainty budget components of the calibration are defined. A new method for calibrating dynamic force transducers is offered, which uses a reference “force-deformation” converter based on a calibrated elastic element whose deformation is measured by a laser interferometer. The mathematical model and the main measurement equation of the offered method are constructed. It is shown that a calibration method based on laser interferometer measurements of the calibrated elastic element's deformations allows the uncertainty budget components inherent in the load-weight method to be excluded or considerably reduced.

  10. A simple measurement method of molecular relaxation in a gas by reconstructing acoustic velocity dispersion

    NASA Astrophysics Data System (ADS)

    Zhu, Ming; Liu, Tingting; Zhang, Xiangqun; Li, Caiyun

    2018-01-01

    Recently, a decomposition method of acoustic relaxation absorption spectra was used to capture the entire molecular multimode relaxation process of gas. In this method, the acoustic attenuation and phase velocity were measured jointly based on the relaxation absorption spectra. However, fast and accurate measurements of the acoustic attenuation remain challenging. In this paper, we present a method of capturing the molecular relaxation process by only measuring acoustic velocity, without the necessity of obtaining acoustic absorption. The method is based on the fact that the frequency-dependent velocity dispersion of a multi-relaxation process in a gas is the serial connection of the dispersions of interior single-relaxation processes. Thus, one can capture the relaxation times and relaxation strengths of N decomposed single-relaxation dispersions to reconstruct the entire multi-relaxation dispersion using the measurements of acoustic velocity at 2N  +  1 frequencies. The reconstructed dispersion spectra are in good agreement with experimental data for various gases and mixtures. The simulations also demonstrate the robustness of our reconstructive method.

  11. Subpixel displacement measurement method based on the combination of particle swarm optimization and gradient algorithm

    NASA Astrophysics Data System (ADS)

    Guang, Chen; Qibo, Feng; Keqin, Ding; Zhan, Gao

    2017-10-01

    A subpixel displacement measurement method based on the combination of particle swarm optimization (PSO) and a gradient algorithm (GA) was proposed to optimize the accuracy and speed of the GA, yielding a subpixel displacement measurement method better suited to engineering practice. An initial integer-pixel value was obtained using the global searching ability of PSO, and gradient operators were then adopted for the subpixel displacement search. A comparison was made between this method and the GA using simulated speckle images and rigid-body displacements of metal specimens. The results showed that the computational accuracy of the combined PSO and GA method reached 0.1 pixel in the simulated speckle images, and even 0.01 pixel in the metal specimens. The computational efficiency and anti-noise performance of the improved method were also markedly enhanced.

  12. Apparatus and systems for measuring elongation of objects, methods of measuring, and reactor

    DOEpatents

    Rempe, Joy L [Idaho Falls, ID; Knudson, Darrell L [Firth, ID; Daw, Joshua E [Idaho Falls, ID; Condie, Keith G [Idaho Falls, ID; Stoots, Carl M [Idaho Falls, ID

    2011-11-29

    Elongation measurement apparatuses and systems comprise at least two Linear Variable Differential Transformers (LVDTs) with a push rod coupled to each of the at least two LVDTs at one longitudinal end thereof. At least one push rod extends to a base and is coupled thereto at an opposing longitudinal end, and at least one other push rod extends to a location spaced apart from the base and is configured to receive a sample between an opposing longitudinal end of the at least one other push rod and the base. Nuclear reactors comprising such apparatuses and systems and methods of measuring elongation of a material are also disclosed.

  13. The instantaneous linear motion information measurement method based on inertial sensors for ships

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Huang, Jing; Gao, Chen; Quan, Wei; Li, Ming; Zhang, Yanshun

    2018-05-01

    Instantaneous linear motion information is an important foundation for ship control and needs to be measured accurately. For this purpose, an instantaneous linear motion measurement method based on inertial sensors is put forward for ships. By introducing a half-fixed coordinate system to separate the instantaneous linear motion from the ship's main motion, the instantaneous linear acceleration of the ship can be obtained with higher accuracy. A digital high-pass filter is then applied to suppress the velocity error caused by low-frequency signals such as the Schuler period. Finally, the instantaneous linear displacement of the ship can be measured accurately. Simulation results show that the method is reliable and effective, and can realize precise measurement of the velocity and displacement of the instantaneous linear motion of ships.
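
    The filtering-and-integration step can be sketched as below: the separated acceleration is high-pass filtered to suppress slow drift (such as Schuler-type low-frequency error), then integrated to velocity and displacement. The cut-off frequency and signal values are illustrative assumptions rather than the paper's design parameters.

```python
# Minimal sketch: high-pass filter the acceleration, then integrate twice.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.integrate import cumulative_trapezoid

def instantaneous_motion(accel, fs, cutoff_hz=0.05):
    """Return (velocity, displacement) of the instantaneous linear motion."""
    b, a = butter(2, cutoff_hz, btype="highpass", fs=fs)
    acc_hp = filtfilt(b, a, accel)                       # remove slow drift
    vel = cumulative_trapezoid(acc_hp, dx=1.0 / fs, initial=0.0)
    vel = filtfilt(b, a, vel)                            # suppress integration drift
    disp = cumulative_trapezoid(vel, dx=1.0 / fs, initial=0.0)
    return vel, disp

# Example: 0.5 Hz heave-like acceleration plus a slow bias term (synthetic).
fs = 100.0
t = np.arange(0, 120, 1 / fs)
accel = 0.3 * np.sin(2 * np.pi * 0.5 * t) + 0.02 * np.sin(2 * np.pi * 0.002 * t)
vel, disp = instantaneous_motion(accel, fs)
print(round(float(np.max(np.abs(disp))), 3), "m peak instantaneous displacement")
```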

  14. [Quantitative evaluation of Gd-EOB-DTPA uptake in phantom study for liver MRI].

    PubMed

    Hayashi, Norio; Miyati, Tosiaki; Koda, Wataru; Suzuki, Masayuki; Sanada, Shigeru; Ohno, Naoki; Hamaguchi, Takashi; Matsuura, Yukihiro; Kawahara, Kazuhiro; Yamamoto, Tomoyuki; Matsui, Osamu

    2010-05-20

    Gd-EOB-DTPA is a new liver-specific MRI contrast medium. In the hepatobiliary phase, the contrast medium is taken up by normal liver tissue, so normal liver shows high signal intensity, the tumor/liver contrast becomes high, and the diagnostic ability improves. In order to indicate the degree of uptake of the contrast medium, the enhancement ratio (ER) is calculated as ER = (signal intensity (SI) after injection - SI before injection) / SI before injection. However, because there is no linearity between contrast medium concentration and SI, the ER is not correctly estimated by this method. We discuss a method of measuring the ER based on SI and T(1) values using a phantom. We used a column phantom with an internal diameter of 3 cm that was filled with diluted Gd-EOB-DTPA solution. Measurement of the T(1) value by the inversion recovery (IR) method was also performed. The ER measuring method of this technique consists of the following three components: 1) measurement of ER based on differences in 1/T(1) values using the variable flip angle (FA) method, 2) measurement of differences in SI, and 3) measurement of differences in 1/T(1) values using the IR method. The ER values calculated by these three methods were compared. In the measurements made using the variable FA method and the IR method, linearity was found between contrast medium concentration and ER. On the other hand, linearity was not found between contrast medium concentration and SI. For calculation of the ER using Gd-EOB-DTPA, a more correct ER is obtained by measuring the T(1) value using the variable FA method.
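
    The two enhancement-ratio definitions contrasted in the abstract reduce to short formulas: one based on signal intensity and one based on the relaxation rate R1 = 1/T1. A minimal sketch with hypothetical pre/post-contrast values follows; the numbers are not measurements from the phantom study.

```python
# Minimal sketch of the two enhancement-ratio (ER) definitions.
def er_from_signal(si_pre, si_post):
    """ER from signal intensity: (SI_post - SI_pre) / SI_pre."""
    return (si_post - si_pre) / si_pre

def er_from_t1(t1_pre_ms, t1_post_ms):
    """ER from relaxation rates R1 = 1/T1: (R1_post - R1_pre) / R1_pre."""
    r1_pre, r1_post = 1.0 / t1_pre_ms, 1.0 / t1_post_ms
    return (r1_post - r1_pre) / r1_pre

# Hypothetical pre/post-contrast values for one phantom tube.
print(round(er_from_signal(si_pre=210.0, si_post=340.0), 3))
print(round(er_from_t1(t1_pre_ms=800.0, t1_post_ms=300.0), 3))
```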

  15. Simultaneous in-plane and out-of-plane displacement measurement based on a dual-camera imaging system and its application to inspection of large-scale space structures

    NASA Astrophysics Data System (ADS)

    Ri, Shien; Tsuda, Hiroshi; Yoshida, Takeshi; Umebayashi, Takashi; Sato, Akiyoshi; Sato, Eiichi

    2015-07-01

    Optical methods providing full-field deformation data are of potentially enormous interest to mechanical engineers. In this study, an in-plane and out-of-plane displacement measurement method based on a dual-camera imaging system is proposed. The in-plane and out-of-plane displacements are determined simultaneously using two sets of in-plane displacement data observed by two digital cameras at different view angles. The fundamental measurement principle and experimental results confirming the accuracy are presented. In addition, we applied this method to displacement measurement in a static loading and bending test of a solid rocket motor case (CFRP material; 2.2 m diameter and 2.3 m long) for the up-to-date Epsilon rocket developed by JAXA. The effectiveness and measurement accuracy are confirmed by comparison with a conventional displacement sensor. This method could be useful for diagnosing the reliability of large-scale space structures in rocket development.

  16. A Local Agreement Pattern Measure Based on Hazard Functions for Survival Outcomes

    PubMed Central

    Dai, Tian; Guo, Ying; Peng, Limin; Manatunga, Amita K.

    2017-01-01

    Summary Assessing agreement is often of interest in biomedical and clinical research when measurements are obtained on the same subjects by different raters or methods. Most classical agreement methods have been focused on global summary statistics, which cannot be used to describe various local agreement patterns. The objective of this work is to study the local agreement pattern between two continuous measurements subject to censoring. In this paper, we propose a new agreement measure based on bivariate hazard functions to characterize the local agreement pattern between two correlated survival outcomes. The proposed measure naturally accommodates censored observations, fully captures the dependence structure between bivariate survival times and provides detailed information on how the strength of agreement evolves over time. We develop a nonparametric estimation method for the proposed local agreement pattern measure and study theoretical properties including strong consistency and asymptotical normality. We then evaluate the performance of the estimator through simulation studies and illustrate the method using a prostate cancer data example. PMID:28724196

  17. A local agreement pattern measure based on hazard functions for survival outcomes.

    PubMed

    Dai, Tian; Guo, Ying; Peng, Limin; Manatunga, Amita K

    2018-03-01

    Assessing agreement is often of interest in biomedical and clinical research when measurements are obtained on the same subjects by different raters or methods. Most classical agreement methods have been focused on global summary statistics, which cannot be used to describe various local agreement patterns. The objective of this work is to study the local agreement pattern between two continuous measurements subject to censoring. In this article, we propose a new agreement measure based on bivariate hazard functions to characterize the local agreement pattern between two correlated survival outcomes. The proposed measure naturally accommodates censored observations, fully captures the dependence structure between bivariate survival times and provides detailed information on how the strength of agreement evolves over time. We develop a nonparametric estimation method for the proposed local agreement pattern measure and study theoretical properties including strong consistency and asymptotical normality. We then evaluate the performance of the estimator through simulation studies and illustrate the method using a prostate cancer data example. © 2017, The International Biometric Society.

  18. Research on droplet size measurement of impulse antiriots water cannon based on sheet laser

    NASA Astrophysics Data System (ADS)

    Fa-dong, Zhao; Hong-wei, Zhuang; Ren-jun, Zhan

    2014-04-01

    For the impulse anti-riot water cannon, a new type of counter-personnel non-lethal weapon, the unsteady flow characteristics and the large water mist field make it difficult to measure the droplet size distribution, which is the most important index for examining its tactical and technical performance. A method based on particle scattering, sheet laser imaging and high-speed image processing was proposed, and a universal droplet size measuring algorithm was designed and verified. Using this method, the droplet size distribution was measured. The measured size distributions at the same position with different timescales, at the same axial distance with different radial distances, and at the same radial distance with different axial distances were analyzed qualitatively, and plausible explanations were presented. The droplet size measuring method proposed in this article provides a scientific and effective experimental means of ascertaining the technical and tactical performance of the weapon and optimizing the relevant system performance.

  19. A novel method for calculating ambient aerosol liquid water content based on measurements of a humidified nephelometer system

    NASA Astrophysics Data System (ADS)

    Kuang, Ye; Zhao, Chun Sheng; Zhao, Gang; Tao, Jiang Chuan; Xu, Wanyun; Ma, Nan; Bian, Yu Xuan

    2018-05-01

    Water condensed on ambient aerosol particles plays significant roles in atmospheric environment, atmospheric chemistry and climate. Before now, no instruments were available for real-time monitoring of ambient aerosol liquid water contents (ALWCs). In this paper, a novel method is proposed to calculate ambient ALWC based on measurements of a three-wavelength humidified nephelometer system, which measures aerosol light scattering coefficients and backscattering coefficients at three wavelengths under dry state and different relative humidity (RH) conditions, providing measurements of light scattering enhancement factor f(RH). The proposed ALWC calculation method includes two steps: the first step is the estimation of the dry state total volume concentration of ambient aerosol particles, Va(dry), with a machine learning method called random forest model based on measurements of the dry nephelometer. The estimated Va(dry) agrees well with the measured one. The second step is the estimation of the volume growth factor Vg(RH) of ambient aerosol particles due to water uptake, using f(RH) and the Ångström exponent. The ALWC is calculated from the estimated Va(dry) and Vg(RH). To validate the new method, the ambient ALWC calculated from measurements of the humidified nephelometer system during the Gucheng campaign was compared with ambient ALWC calculated from ISORROPIA thermodynamic model using aerosol chemistry data. A good agreement was achieved, with a slope and intercept of 1.14 and -8.6 µm3 cm-3 (r2 = 0.92), respectively. The advantage of this new method is that the ambient ALWC can be obtained solely based on measurements of a three-wavelength humidified nephelometer system, facilitating the real-time monitoring of the ambient ALWC and promoting the study of aerosol liquid water and its role in atmospheric chemistry, secondary aerosol formation and climate change.
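
    The sketch below illustrates the two-step idea described above in a minimal form: a random-forest regressor stands in for the Va(dry) estimator trained on dry-nephelometer scattering coefficients, and a crude placeholder maps f(RH) and the Ångström exponent to the volume growth factor Vg(RH). The feature set, the growth-factor mapping and all parameter values are assumptions made for illustration, not the authors' implementation.

        # Minimal sketch (not the authors' code) of the two-step ALWC estimation idea:
        # step 1 estimates the dry total volume concentration Va(dry) from dry-nephelometer
        # scattering coefficients with a random forest; step 2 converts the scattering
        # enhancement f(RH) into a volume growth factor Vg(RH); ALWC = Va(dry) * (Vg(RH) - 1).
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def train_dry_volume_model(X_train, Va_dry_train):
            """X_train: dry scattering/backscattering coefficients at three wavelengths
            (n_samples x 6); Va_dry_train: reference dry volume concentrations (um^3 cm^-3)."""
            model = RandomForestRegressor(n_estimators=300, random_state=0)
            model.fit(X_train, Va_dry_train)
            return model

        def volume_growth_factor(f_RH, angstrom_exp):
            """Illustrative placeholder: maps the measured scattering enhancement f(RH) and
            the Angstrom exponent to a volume growth factor Vg(RH).  The real mapping in the
            paper comes from Mie theory; this crude power law only keeps the sketch runnable."""
            f_RH = np.asarray(f_RH, dtype=float)
            return f_RH ** (1.5 / (1.0 + angstrom_exp))

        def ambient_alwc(model, X_dry, f_RH, angstrom_exp):
            Va_dry = model.predict(X_dry)                    # um^3 cm^-3, dry state
            Vg = volume_growth_factor(f_RH, angstrom_exp)    # dimensionless
            return Va_dry * (Vg - 1.0)                       # liquid water volume, um^3 cm^-3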

  20. Videodensitometric Methods for Cardiac Output Measurements

    NASA Astrophysics Data System (ADS)

    Mischi, Massimo; Kalker, Ton; Korsten, Erik

    2003-12-01

    Cardiac output is often measured by indicator dilution techniques, usually based on dye or cold saline injections. Developments of more stable ultrasound contrast agents (UCA) are leading to new noninvasive indicator dilution methods. However, several problems concerning the interpretation of dilution curves as detected by ultrasound transducers have arisen. This paper presents a method for blood flow measurements based on UCA dilution. Dilution curves are determined by real-time densitometric analysis of the video output of an ultrasound scanner and are automatically fitted by the Local Density Random Walk model. A new fitting algorithm based on multiple linear regression is developed. Calibration, that is, the relation between videodensity and UCA concentration, is modelled by in vitro experimentation. The flow measurement system is validated by in vitro perfusion of SonoVue contrast agent. The results show an accurate dilution curve fit and flow estimation with determination coefficient larger than 0.95 and 0.99, respectively.

  1. A review on creatinine measurement techniques.

    PubMed

    Mohabbati-Kalejahi, Elham; Azimirad, Vahid; Bahrami, Manouchehr; Ganbari, Ahmad

    2012-08-15

    This paper reviews recent global trends in creatinine measurement. Creatinine biosensors involve complex interactions between the biological sample and the micro-mechatronics to which the blood is subjected. Comparison between new and old methods shows that newer techniques (e.g., molecularly imprinted polymer (MIP)-based sensors) are better than older methods (e.g., ELISA) in terms of stability and linear range. All methods and their details for serum, plasma, urine and blood samples are surveyed. They are categorized into five main approaches: optical, electrochemical, impedimetric, ion-selective field-effect transistor (ISFET)-based and chromatographic techniques. Response time, detection limit, linear range and selectivity of the reported sensors are discussed. The potentiometric technique has the shortest response time (4-10 s), while the lowest detection limit, 0.28 nmol L(-1), belongs to the chromatographic technique. Comparison between the various measurement techniques indicates that the best selectivity belongs to the MIP-based and chromatographic techniques. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Molecular dispersion spectroscopy for chemical sensing using chirped mid-infrared quantum cascade laser.

    PubMed

    Wysocki, Gerard; Weidmann, Damien

    2010-12-06

    A spectroscopic method of molecular detection based on dispersion measurements using a frequency-chirped laser source is presented. An infrared quantum cascade laser emitting around 1912 cm(-1) is used as a tunable spectroscopic source to measure dispersion that occurs in the vicinity of molecular ro-vibrational transitions. The sample under study is a mixture of nitric oxide in dry nitrogen. Two experimental configurations based on a coherent detection scheme are investigated and discussed. The theoretical models, which describe the observed spectral signals, are developed and verified experimentally. The method is particularly relevant to optical sensing based on mid-infrared quantum cascade lasers as the high chirp rates available with those sources can significantly enhance the magnitude of the measured dispersion signals. The method relies on heterodyne beatnote frequency measurements and shows high immunity to variations in the optical power received by the photodetector.

  3. Improved collaborative filtering recommendation algorithm of similarity measure

    NASA Astrophysics Data System (ADS)

    Zhang, Baofu; Yuan, Baoping

    2017-05-01

    Collaborative filtering is one of the most widely used recommendation algorithms in personalized recommender systems. Its key step is to find the nearest-neighbor set of the active user by using a similarity measure. However, traditional similarity measures mainly focus on the similarity of the items users have rated in common, but ignore the relationship between those common items and all of the items a user has rated; in addition, because the rating matrix is very sparse, the traditional collaborative filtering recommendation algorithm is not efficient. In order to obtain better accuracy, this paper presents an improved similarity measure that takes into account the common preference between users, the difference in rating scales and the scores of the common items, and based on this measure a collaborative filtering recommendation algorithm with improved similarity is proposed. Experimental results show that the algorithm can effectively improve the quality of recommendation and thus alleviate the impact of data sparseness.
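
    As a rough illustration of how the three ingredients named above (co-rated items, rating-scale difference and the size of the common set) can be combined, the sketch below multiplies a Pearson term by a scale-difference penalty and a Jaccard-style overlap weight. The abstract does not give the paper's exact formula, so this particular combination is an assumption.

        # Illustrative sketch only: one way to combine the ingredients mentioned in the
        # abstract (overlap of co-rated items, rating-scale difference, scores on common
        # items) into a similarity score.  The exact formula of the paper may differ.
        import numpy as np

        def improved_similarity(ratings_u, ratings_v):
            """ratings_u, ratings_v: dicts mapping item id -> rating."""
            common = set(ratings_u) & set(ratings_v)
            if not common:
                return 0.0
            ru = np.array([ratings_u[i] for i in common], dtype=float)
            rv = np.array([ratings_v[i] for i in common], dtype=float)
            # Pearson correlation over the co-rated items (score term)
            if ru.std() == 0 or rv.std() == 0:
                pearson = 0.0
            else:
                pearson = np.corrcoef(ru, rv)[0, 1]
            # penalty for different rating scales (mean-rating difference term)
            scale = 1.0 / (1.0 + abs(ru.mean() - rv.mean()))
            # weight by how large the common set is relative to all items either user rated
            jaccard = len(common) / len(set(ratings_u) | set(ratings_v))
            return pearson * scale * jaccard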

  4. A Comparison of Three Methods for Computing Scale Score Conditional Standard Errors of Measurement. ACT Research Report Series, 2013 (7)

    ERIC Educational Resources Information Center

    Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu

    2013-01-01

    Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…

  5. Archimedes Revisited: A Faster, Better, Cheaper Method of Accurately Measuring the Volume of Small Objects

    ERIC Educational Resources Information Center

    Hughes, Stephen W.

    2005-01-01

    A little-known method of measuring the volume of small objects based on Archimedes' principle is described, which involves suspending an object in a water-filled container placed on electronic scales. The suspension technique is a variation on the hydrostatic weighing technique used for measuring volume. The suspension method was compared with two…
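
    A worked example of the principle behind the suspension technique, under the usual assumption that the suspended, fully submerged object touches nothing but the water: the scale reading increases by the mass of the displaced water, so the volume follows from that increase divided by the water density. The numbers below are purely illustrative.

        # Worked example of the suspension variant of Archimedes' principle: the balance
        # reading increases by the mass of displaced water, so the object's volume is
        # (reading_with_object - reading_without) / rho_water.
        RHO_WATER = 0.9982  # g/cm^3 at about 20 degC (assumed temperature)

        def volume_from_suspension(reading_without_g, reading_with_g, rho_water=RHO_WATER):
            """Both readings in grams from the electronic scales under the water container;
            the object hangs from above, fully submerged, touching nothing."""
            delta_m = reading_with_g - reading_without_g   # mass of displaced water, g
            return delta_m / rho_water                     # volume in cm^3

        # Example: the reading rises from 250.00 g to 262.48 g -> V ~ 12.50 cm^3
        print(volume_from_suspension(250.00, 262.48))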

  6. Method for a detailed measurement of image intensity nonuniformity in magnetic resonance imaging.

    PubMed

    Wang, Deming; Doddrell, David M

    2005-04-01

    In magnetic resonance imaging (MRI), the MR signal intensity can vary spatially and this spatial variation is usually referred to as MR intensity nonuniformity. Although the main source of intensity nonuniformity arises from B1 inhomogeneity of the coil acting as a receiver and/or transmitter, geometric distortion also alters the MR signal intensity. It is useful on some occasions to have these two different sources be separately measured and analyzed. In this paper, we present a practical method for a detailed measurement of the MR intensity nonuniformity. This method is based on the same three-dimensional geometric phantom that was recently developed for a complete measurement of the geometric distortion in MR systems. In this paper, the contribution to the intensity nonuniformity from the geometric distortion can be estimated and thus, it provides a mechanism for estimation of the intensity nonuniformity that reflects solely the spatial characteristics arising from B1. Additionally, a comprehensive scheme for characterization of the intensity nonuniformity based on the new measurement method is proposed. To demonstrate the method, the intensity nonuniformity in a 1.5 T Sonata MR system was measured and is used to illustrate the main features of the method.

  7. Spatial Mutual Information Based Hyperspectral Band Selection for Classification

    PubMed Central

    2015-01-01

    The amount of information involved in hyperspectral imaging is large. Hyperspectral band selection is a popular method for reducing dimensionality. Several information based measures such as mutual information have been proposed to reduce information redundancy among spectral bands. Unfortunately, mutual information does not take into account the spatial dependency between adjacent pixels in images thus reducing its robustness as a similarity measure. In this paper, we propose a new band selection method based on spatial mutual information. As validation criteria, a supervised classification method using support vector machine (SVM) is used. Experimental results of the classification of hyperspectral datasets show that the proposed method can achieve more accurate results. PMID:25918742

  8. Measurement of edge residual stresses in glass by the phase-shifting method

    NASA Astrophysics Data System (ADS)

    Ajovalasit, A.; Petrucci, G.; Scafidi, M.

    2011-05-01

    Control and measurement of residual stress in glass is of great importance in the industrial field. Since glass is a birefringent material, the residual stress analysis is based mainly on the photoelastic method. This paper considers two methods of automated analysis of membrane residual stress in glass sheets, based on the phase-shifting concept in monochromatic light. In particular these methods are the automated versions of goniometric compensation methods of Tardy and Sénarmont. The proposed methods can effectively replace manual methods of compensation (goniometric compensation of Tardy and Sénarmont, Babinet and Babinet-Soleil compensators) provided by current standards on the analysis of residual stresses in glasses.

  9. Uncertainty Modeling and Evaluation of CMM Task Oriented Measurement Based on SVCMM

    NASA Astrophysics Data System (ADS)

    Li, Hongli; Chen, Xiaohuai; Cheng, Yinbao; Liu, Houde; Wang, Hanbin; Cheng, Zhenying; Wang, Hongtao

    2017-10-01

    Due to the variety of measurement tasks and the complexity of the error sources of a coordinate measuring machine (CMM), it is very difficult to evaluate the uncertainty of CMM measurement results reasonably, which has limited the application of CMMs. Task-oriented uncertainty evaluation has therefore become a difficult problem to be solved. Taking dimension measurement as an example, this paper puts forward a practical method for uncertainty modeling and evaluation of CMM task-oriented measurement (called the SVCMM method). The method makes full use of the CMM acceptance or reinspection report and the Monte Carlo computer simulation method (MCM). An evaluation example is presented, and the results are evaluated by the traditional method given in the GUM and by the proposed method, respectively. The SVCMM method is verified to be feasible and practical. It can help CMM users to conveniently complete the measurement uncertainty evaluation through a single measurement cycle.
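
    A generic sketch of the Monte Carlo (MCM) step is given below; it is not the SVCMM procedure itself. It assumes that the acceptance or reinspection report supplies a length-dependent maximum permissible error of the common A + L/K form and that this can be mapped to a normal error distribution; both assumptions, and all numerical values, are illustrative.

        # Generic Monte Carlo (MCM) sketch: draw simulated measurement errors from a
        # distribution whose width comes from the CMM acceptance report and report a
        # 95 % coverage interval for the measured dimension.
        import numpy as np

        def mcm_uncertainty(measured_length_mm, A_um=1.5, K=333.0, n_draws=100_000, seed=0):
            rng = np.random.default_rng(seed)
            mpe_um = A_um + measured_length_mm / K          # assumed MPE from the report, um
            sigma_mm = (mpe_um / 2.0) * 1e-3                # crude assumption: MPE ~ 2 sigma
            sims = measured_length_mm + rng.normal(0.0, sigma_mm, n_draws)
            lo, hi = np.percentile(sims, [2.5, 97.5])
            return (lo, hi), (hi - lo) / 2.0                # coverage interval, expanded uncertainty

        interval, U = mcm_uncertainty(100.0)                # a 100 mm dimension, illustrative
        print(interval, U)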

  10. Improvement of photon correlation spectroscopy method for measuring nanoparticle size by using attenuated total reflectance.

    PubMed

    Krishtop, Victor; Doronin, Ivan; Okishev, Konstantin

    2012-11-05

    Photon correlation spectroscopy is an effective method for measuring nanoparticle sizes and has several advantages over alternative methods. However, its measurement accuracy is reduced in the presence of convective flows in the fluid containing the nanoparticles. In this paper, we propose a scheme based on attenuated total reflectance in order to reduce the influence of convection currents. The autocorrelation function of the light-scattering intensity was found for this case, and it was shown that this method affords a significant decrease in the time required to measure the particle sizes and an increase in the measurement accuracy.

  11. A method for remote sounding of a bottom relief of water objects with using GPS

    NASA Astrophysics Data System (ADS)

    Mamontova, L. S.

    2014-12-01

    This article examines an unmanned automated system for measuring the depths of small rivers, based on combining differential GPS positioning of the survey vessel, echo-sounder depth measurement and automatic control of the vessel. At the central station, a digital map of the bottom relief of the surveyed zone of the reservoir is built, and the position of the survey vessel on its tacks is controlled using the vessel's coordinates and the echo-sounder depth measurements. The proposed system allows the quality of depth-survey work to be raised.

  12. Improved volumetric measurement of brain structure with a distortion correction procedure using an ADNI phantom.

    PubMed

    Maikusa, Norihide; Yamashita, Fumio; Tanaka, Kenichiro; Abe, Osamu; Kawaguchi, Atsushi; Kabasawa, Hiroyuki; Chiba, Shoma; Kasahara, Akihiro; Kobayashi, Nobuhisa; Yuasa, Tetsuya; Sato, Noriko; Matsuda, Hiroshi; Iwatsubo, Takeshi

    2013-06-01

    Serial magnetic resonance imaging (MRI) images acquired from multisite and multivendor MRI scanners are widely used in measuring longitudinal structural changes in the brain. Precise and accurate measurements are important in understanding the natural progression of neurodegenerative disorders such as Alzheimer's disease. However, geometric distortions in MRI images decrease the accuracy and precision of volumetric or morphometric measurements. To solve this problem, the authors suggest a commercially available phantom-based distortion correction method that accommodates the variation in geometric distortion within MRI images obtained with multivendor MRI scanners. The authors' method is based on image warping using a polynomial function. The method detects fiducial points within a phantom image using phantom analysis software developed by the Mayo Clinic and calculates warping functions for distortion correction. To quantify the effectiveness of the authors' method, the authors corrected phantom images obtained from multivendor MRI scanners and calculated the root-mean-square (RMS) of fiducial errors and the circularity ratio as evaluation values. The authors also compared the performance of the authors' method with that of a distortion correction method based on a spherical harmonics description of the generic gradient design parameters. Moreover, the authors evaluated whether this correction improves the test-retest reproducibility of voxel-based morphometry in human studies. A Wilcoxon signed-rank test with uncorrected and corrected images was performed. The root-mean-square errors and circularity ratios for all slices significantly improved (p < 0.0001) after the authors' distortion correction. Additionally, the authors' method was significantly better than a distortion correction method based on a description of spherical harmonics in improving the distortion of root-mean-square errors (p < 0.001 and 0.0337, respectively). Moreover, the authors' method reduced the RMS error arising from gradient nonlinearity more than gradwarp methods. In human studies, the coefficient of variation of voxel-based morphometry analysis of the whole brain improved significantly from 3.46% to 2.70% after distortion correction of the whole gray matter using the authors' method (Wilcoxon signed-rank test, p < 0.05). The authors proposed a phantom-based distortion correction method to improve reproducibility in longitudinal structural brain analysis using multivendor MRI. The authors evaluated the authors' method for phantom images in terms of two geometrical values and for human images in terms of test-retest reproducibility. The results showed that distortion was corrected significantly using the authors' method. In human studies, the reproducibility of voxel-based morphometry analysis for the whole gray matter significantly improved after distortion correction using the authors' method.

  13. Intrathoracic airway measurement: ex-vivo validation

    NASA Astrophysics Data System (ADS)

    Reinhardt, Joseph M.; Raab, Stephen A.; D'Souza, Neil D.; Hoffman, Eric A.

    1997-05-01

    High-resolution x-ray CT (HRCT) provides detailed images of the lungs and bronchial tree. HRCT-based imaging and quantitation of peripheral bronchial airway geometry provides a valuable tool for assessing regional airway physiology. Such measurements have been used to address physiological questions related to the mechanics of airway collapse in sleep apnea, the measurement of airway response to broncho-constriction agents, and to evaluate and track the progression of diseases affecting the airways, such as asthma and cystic fibrosis. Significant attention has been paid to the measurement of extra- and intra-thoracic airways in 2D sections from volumetric x-ray CT. A variety of manual and semi-automatic techniques have been proposed for airway geometry measurement, including the use of standardized display window and level settings for caliper measurements, methods based on manual or semi-automatic border tracing, and more objective, quantitative approaches such as the use of the 'half-max' criteria. A recently proposed measurement technique uses a model-based deconvolution to estimate the location of the inner and outer airway walls. Validation using a plexiglass phantom indicates that the model-based method is more accurate than the half-max approach for thin-walled structures. In vivo validation of these airway measurement techniques is difficult because of the problems in identifying a reliable measurement 'gold standard.' In this paper we report on ex vivo validation of the half-max and model-based methods using an excised pig lung. The lung is sliced into thin sections of tissue and scanned using an electron beam CT scanner. Airways of interest are measured from the CT images, and also measured with a microscope and micrometer to obtain a measurement gold standard. The results show no significant difference between the model-based measurements and the gold standard, while the half-max estimates exhibited a measurement bias and were significantly different from the gold standard.

  14. Multi-beam laser heterodyne measurement with ultra-precision for Young modulus based on oscillating mirror modulation

    NASA Astrophysics Data System (ADS)

    Li, Y. Chao; Ding, Q.; Gao, Y.; Ran, L. Ling; Yang, J. Ru; Liu, C. Yu; Wang, C. Hui; Sun, J. Feng

    2014-07-01

    This paper proposes a novel multi-beam laser heterodyne method for measuring Young's modulus. Based on the Doppler effect and heterodyne technology, the length variation is encoded in the frequency difference of the multi-beam laser heterodyne signal through frequency modulation by an oscillating mirror, so that many values of the length variation caused by a change in the applied mass can be obtained simultaneously after demodulation of the heterodyne signal. By taking a weighted average of these values, the length variation is obtained accurately, and the Young's modulus of the sample is then calculated. This method was used in MATLAB simulations of Young's modulus measurements of a wire under different loads; the results show a relative measurement error of only 0.3%.

  15. AGREEMENT AND COVERAGE OF INDICATORS OF RESPONSE TO INTERVENTION: A MULTI-METHOD COMPARISON AND SIMULATION

    PubMed Central

    Fletcher, Jack M.; Stuebing, Karla K.; Barth, Amy E.; Miciak, Jeremy; Francis, David J.; Denton, Carolyn A.

    2013-01-01

    Purpose Agreement across methods for identifying students as inadequate responders or as learning disabled is often poor. We report (1) an empirical examination of final status (post-intervention benchmarks) and dual-discrepancy growth methods based on growth during the intervention and final status for assessing response to intervention; and (2) a statistical simulation of psychometric issues that may explain low agreement. Methods After a Tier 2 intervention, final status benchmark criteria were used to identify 104 inadequate and 85 adequate responders to intervention, with comparisons of agreement and coverage for these methods and a dual-discrepancy method. Factors affecting agreement were investigated using computer simulation to manipulate reliability, the intercorrelation between measures, cut points, normative samples, and sample size. Results Identification of inadequate responders based on individual measures showed that single measures tended not to identify many members of the pool of 104 inadequate responders. Poor to fair levels of agreement for identifying inadequate responders were apparent between pairs of measures. In the simulation, comparisons across two simulated measures generated indices of agreement (kappa) that were generally low because of multiple psychometric issues inherent in any test. Conclusions Expecting excellent agreement between two correlated tests with even small amounts of unreliability may not be realistic. Assessing outcomes based on multiple measures, such as the level of CBM performance and short norm-referenced assessments of fluency, may improve the reliability of diagnostic decisions. PMID:25364090

  16. Method of high precision interval measurement in pulse laser ranging system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Lv, Xin-yuan; Mao, Jin-jin; Liu, Wei; Yang, Dong

    2013-09-01

    Pulsed laser ranging offers high measurement precision, fast measurement speed, no need for cooperative targets and strong resistance to electromagnetic interference, and its time-interval measurement is the key parameter affecting the performance of the whole system. Since the precision of a pulsed laser ranging system is determined by the precision of the time-interval measurement, this paper introduces the basic structure of a laser ranging system and establishes a method of high-precision time-interval measurement for pulsed laser ranging. Based on an analysis of the factors that affect range precision, a rising-edge discriminator is adopted to produce the timing marks for start-stop time discrimination, and a high-precision interval measurement system based on the TDC-GP2 chip and a TMS320F2812 DSP is designed to improve the measurement precision. Experimental results indicate that the proposed time-interval measurement method achieves higher range accuracy. Compared with traditional time-interval measurement systems, the method simplifies the system design, reduces the influence of bad weather conditions and, furthermore, satisfies the requirements of low cost and miniaturization.

  17. Study on photon transport problem based on the platform of molecular optical simulation environment.

    PubMed

    Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie

    2010-01-01

    As an important molecular imaging modality, optical imaging has attracted increasing attention in recent years. Since the physical experiment is usually complicated and expensive, research methods based on simulation platforms have received extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, the Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for the MC method is employed to improve the computational efficiency. In this paper, we study the photon transport problems in both biological tissues and free space using MOSE. The results are compared with Tracepro, the simplified spherical harmonics method (SP(n)), and physical measurement to verify the performance of our study method on both accuracy and efficiency.

  18. Study on Photon Transport Problem Based on the Platform of Molecular Optical Simulation Environment

    PubMed Central

    Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie

    2010-01-01

    As an important molecular imaging modality, optical imaging has attracted increasing attention in recent years. Since the physical experiment is usually complicated and expensive, research methods based on simulation platforms have received extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, the Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for the MC method is employed to improve the computational efficiency. In this paper, we study the photon transport problems in both biological tissues and free space using MOSE. The results are compared with Tracepro, the simplified spherical harmonics method (SPn), and physical measurement to verify the performance of our study method on both accuracy and efficiency. PMID:20445737

  19. Investigating the technical adequacy of curriculum-based measurement in written expression for students who are deaf or hard of hearing.

    PubMed

    Cheng, Shu-Fen; Rose, Susan

    2009-01-01

    This study investigated the technical adequacy of curriculum-based measures of written expression (CBM-W) in terms of writing prompts and scoring methods for deaf and hard-of-hearing students. Twenty-two students at the secondary school-level completed 3-min essays within two weeks, which were scored for nine existing and alternative curriculum-based measurement (CBM) scoring methods. The technical features of the nine scoring methods were examined for interrater reliability, alternate-form reliability, and criterion-related validity. The existing CBM scoring method--number of correct minus incorrect word sequences--yielded the highest reliability and validity coefficients. The findings from this study support the use of the CBM-W as a reliable and valid tool for assessing general writing proficiency with secondary students who are deaf or hard of hearing. The CBM alternative scoring methods that may serve as additional indicators of written expression include correct subject-verb agreements, correct clauses, and correct morphemes.

  20. New optical frequency domain differential mode delay measurement method for a multimode optical fiber.

    PubMed

    Ahn, T; Moon, S; Youk, Y; Jung, Y; Oh, K; Kim, D

    2005-05-30

    A novel mode analysis method and differential mode delay (DMD) measurement technique for a multimode optical fiber based on optical frequency domain reflectometry has been proposed for the first time. We have used a conventional OFDR with a tunable external cavity laser and a Michelson interferometer. A few-mode optical multimode fiber was prepared to test our proposed measurement technique. We have also compared the OFDR measurement results with those obtained using a traditional time-domain measurement method.

  1. Measure the Earth's Radius and the Speed of Light with Simple and Inexpensive Computer-Based Experiments

    ERIC Educational Resources Information Center

    Martin, Michael J.

    2004-01-01

    With new and inexpensive computer-based methods, measuring the speed of light and the Earth's radius--historically difficult endeavors--can be simple enough to be tackled by high school and college students working in labs that have limited budgets. In this article, the author describes two methods of estimating the Earth's radius using two…

  2. Curriculum-Based Measurement of Reading: An Evaluation of Frequentist and Bayesian Methods to Model Progress Monitoring Data

    ERIC Educational Resources Information Center

    Christ, Theodore J.; Desjardins, Christopher David

    2018-01-01

    Curriculum-Based Measurement of Oral Reading (CBM-R) is often used to monitor student progress and guide educational decisions. Ordinary least squares regression (OLSR) is the most widely used method to estimate the slope, or rate of improvement (ROI), even though published research demonstrates OLSR's lack of validity and reliability, and…

  3. Band excitation method applicable to scanning probe microscopy

    DOEpatents

    Jesse, Stephen; Kalinin, Sergei V.

    2015-08-04

    Scanning probe microscopy may include a method for generating a band excitation (BE) signal and simultaneously exciting a probe at a plurality of frequencies within a predetermined frequency band based on the excitation signal. A response of the probe is measured across a subset of frequencies of the predetermined frequency band and the excitation signal is adjusted based on the measured response.
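
    One common way to synthesize such a band-excitation waveform (not necessarily the patented implementation) is to fill the FFT bins inside the chosen band with unit amplitude and random phase and take an inverse FFT, as in the sketch below; the band edges, sample rate and record length are placeholder values.

        # Sketch of one common way to build a band-excitation waveform (flat amplitude
        # inside the band, random phases, inverse FFT); the patented method may differ.
        import numpy as np

        def band_excitation_signal(f_lo, f_hi, fs=1.0e6, n=2**14, seed=0):
            """Waveform of n samples at sample rate fs exciting all FFT bins in [f_lo, f_hi] Hz."""
            rng = np.random.default_rng(seed)
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            spectrum = np.zeros(freqs.size, dtype=complex)
            band = (freqs >= f_lo) & (freqs <= f_hi)
            # unit amplitude, random phase in the band -> low crest factor in the time domain
            spectrum[band] = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, band.sum()))
            signal = np.fft.irfft(spectrum, n=n)
            return signal / np.abs(signal).max()            # normalize to the DAC range

        drive = band_excitation_signal(250e3, 350e3)        # e.g. a 100 kHz band around 300 kHz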

  4. Band excitation method applicable to scanning probe microscopy

    DOEpatents

    Jesse, Stephen; Kalinin, Sergei V.

    2017-01-03

    Scanning probe microscopy may include a method for generating a band excitation (BE) signal and simultaneously exciting a probe at a plurality of frequencies within a predetermined frequency band based on the excitation signal. A response of the probe is measured across a subset of frequencies of the predetermined frequency band and the excitation signal is adjusted based on the measured response.

  5. Computer vision based method and system for online measurement of geometric parameters of train wheel sets.

    PubMed

    Zhang, Zhi-Feng; Gao, Zhan; Liu, Yuan-Yuan; Jiang, Feng-Chun; Yang, Yan-Li; Ren, Yu-Fen; Yang, Hong-Jun; Yang, Kun; Zhang, Xiao-Dong

    2012-01-01

    Train wheel sets must be periodically inspected for possible or actual premature failures, and it is important to record the wear history over the full service life of the wheel sets. This means that an online measuring system could be of great benefit to overall process control. An online non-contact method for measuring a wheel set's geometric parameters based on opto-electronic measuring techniques is presented in this paper. A charge coupled device (CCD) camera with a selected optical lens and a frame grabber was used to capture the image of the light profile of the wheel set illuminated by a linear laser. The analogue signals of the image were transformed into corresponding digital grey-level values. The 'mapping function method' is used to transform image pixel coordinates into space coordinates. The images of wheel sets were captured when the train passed through the measuring system. The rim inside thickness and flange thickness were measured and analyzed. The spatial resolution of the whole image capturing system is about 0.33 mm. Theoretical and experimental results show that the online measurement system based on computer vision can meet wheel set measurement requirements.

  6. Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method

    PubMed Central

    Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni

    2017-01-01

    The real-time accurate measurement of the geomagnetic field is the foundation of high-precision geomagnetic navigation. The existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. This paper, on the basis of a systematic analysis of the sources of geomagnetic-field measurement error, builds a complete measurement model into which the previously unconsidered geomagnetic daily variation field is introduced. It then proposes an extended Kalman-filter based compensation method, which allows a large amount of measurement data to be used in estimating the parameters to obtain the optimal solution in the statistical sense. The experimental results showed that the compensated geomagnetic-field strength remained close to the real value and the measurement error was basically controlled within 5 nT. In addition, this compensation method has strong applicability due to its easy data collection and its ability to remove the dependence on a high-precision measurement instrument. PMID:28445508
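
    The sketch below is a deliberately simplified stand-in for the paper's compensation scheme: an extended Kalman filter that estimates only a constant three-axis bias from the mismatch between the measured field magnitude and a known reference magnitude. The full error model of the paper (including the daily variation field) is not reproduced, and the noise settings are assumptions.

        # Minimal EKF sketch (an assumption, far simpler than the paper's full error model):
        # estimate a constant 3-axis magnetometer bias b so that the magnitude of the
        # corrected reading matches the known local field strength B_ref.
        import numpy as np

        def ekf_bias_estimate(readings, B_ref, q=1e-6, r=4.0):
            """readings: (N, 3) raw field vectors in nT; B_ref: reference magnitude in nT."""
            b = np.zeros(3)                    # state: bias estimate
            P = np.eye(3) * 1e4                # state covariance
            Q = np.eye(3) * q                  # small process noise: bias assumed ~constant
            for m in readings:
                P = P + Q                      # predict (bias modelled as a random constant)
                diff = m - b
                pred = np.linalg.norm(diff)    # h(b): predicted magnitude
                H = -(diff / pred)[None, :]    # 1x3 Jacobian of h with respect to b
                S = H @ P @ H.T + r            # innovation covariance (1x1)
                K = P @ H.T / S                # Kalman gain, 3x1
                b = b + (K * (B_ref - pred)).ravel()
                P = (np.eye(3) - K @ H) @ P
            return b                           # subtract from raw readings to compensate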

  7. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

    NASA Astrophysics Data System (ADS)

    Shrivastava, Akash; Mohanty, A. R.

    2018-03-01

    This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using Kalman filter and recursive least square based input force estimation technique. Kalman filter based input force estimation technique requires state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.

  8. Reflector-based phase calibration of ultrasound transducers.

    PubMed

    van Neer, Paul L M J; Vos, Hendrik J; de Jong, Nico

    2011-01-01

    Recently, the measurement of phase transfer functions (PTFs) of piezoelectric transducers has received more attention. These PTFs are useful for, e.g., coding and interference based imaging methods, and ultrasound contrast microbubble research. Several optical and acoustic methods to measure a transducer's PTF have been reported in the literature. The optical methods require a setup to which not all ultrasound laboratories have access. The acoustic methods require accurate distance and acoustic wave speed measurements. A small error in these leads to a large error in phase, e.g., an accuracy of 0.1% on an axial distance of 10 cm leads to an uncertainty in the PTF measurement of ±97° at 4 MHz. In this paper we present an acoustic pulse-echo method to measure the PTF of a transducer, which is based on linear wave propagation and only requires an estimate of the wave travel distance and the acoustic wave speed. In our method the transducer is excited by a monofrequency sine burst with a rectangular envelope. The transducer initially vibrates at resonance (transient regime) prior to the forcing frequency response (steady state regime). The PTF value of the system is the difference between the phases deduced from the transient and the steady state regimes. Good agreement, to within 7°, was obtained between KLM simulations and measurements on two transducers in a 1-8 MHz frequency range. The reproducibility of the method was ±10°, with a systematic error of 2° at 1 MHz increasing to 16° at 8 MHz. This work demonstrates that the PTF of a transducer can be measured in a simple laboratory setting. Copyright © 2010 Elsevier B.V. All rights reserved.

  9. A New Frequency-Domain Method for Bunch Length Measurement

    NASA Astrophysics Data System (ADS)

    Ferianis, M.; Pros, M.

    1997-05-01

    A new method for bunch length measurements has been developed at Elettra. It is based on a spectral observation of the synchrotron radiation light pulses. The single pulse spectrum is shaped by means of an optical process which gives the method an increased sensitivity compared to the usual spectral observations. Some simulations have been carried out to check the method in non-ideal conditions. The results of the first measurements are also presented.

  10. Analytical methods for the measurement of polymerization kinetics and stresses of dental resin-based composites: A review

    PubMed Central

    Ghavami-Lahiji, Mehrsima; Hooshmand, Tabassom

    2017-01-01

    Resin-based composites are commonly used restorative materials in dentistry. Such tooth-colored restorations can adhere to the dental tissues. One drawback is that polymerization shrinkage and the stresses induced during the curing procedure are inherent properties of resin composite materials that might impair their performance. This review focuses on the significant developments of laboratory tools for the measurement of polymerization shrinkage and stresses of dental resin-based materials during polymerization. An electronic search of publications from January 1977 to July 2016 was made using ScienceDirect, PubMed, Medline, and Google Scholar databases. The search included only English-language articles. Only studies that used laboratory methods to evaluate the amount of polymerization shrinkage and/or stresses of dental resin-based materials during polymerization were selected. The results indicated that various techniques have been introduced with different mechanical/physical bases. Besides, there are factors that may contribute to the differences between the various methods in measuring the amount of shrinkage and stresses of resin composites. The search for an ideal and standard apparatus for measuring the shrinkage stress and volumetric polymerization shrinkage of resin-based materials in dentistry is still ongoing. Researchers and clinicians must be aware of the differences between analytical methods to make proper interpretations and indications of each technique relevant to a clinical situation. PMID:28928776

  11. Resource-use measurement based on patient recall: issues and challenges for economic evaluation.

    PubMed

    Thorn, Joanna C; Coast, Joanna; Cohen, David; Hollingworth, William; Knapp, Martin; Noble, Sian M; Ridyard, Colin; Wordsworth, Sarah; Hughes, Dyfrig

    2013-06-01

    Accurate resource-use measurement is challenging within an economic evaluation, but is a fundamental requirement for estimating efficiency. Considerable research effort has been concentrated on the appropriate measurement of outcomes and the policy implications of economic evaluation, while methods for resource-use measurement have been relatively neglected. Recently, the Database of Instruments for Resource Use Measurement (DIRUM) was set up at http://www.dirum.org to provide a repository where researchers can share resource-use measures and methods. A workshop to discuss the issues was held at the University of Birmingham in October 2011. Based on material presented at the workshop, this article highlights the state of the art of UK instruments for resource-use data collection based on patient recall. We consider methodological issues in the design and analysis of resource-use instruments, and the challenges associated with designing new questionnaires. We suggest a method of developing a good practice guideline, and identify some areas for future research. Consensus amongst health economists has yet to be reached on many aspects of resource-use measurement. We argue that researchers should now afford costing methodologies the same attention as outcome measurement, and we hope that this Current Opinion article will stimulate a debate on methods of resource-use data collection and establish a research agenda to improve the precision and accuracy of resource-use estimates.

  12. Numerical evaluation of magnetic absolute measurements with arbitrarily distributed DI-fluxgate theodolite orientations

    NASA Astrophysics Data System (ADS)

    Brunke, Heinz-Peter; Matzka, Jürgen

    2018-01-01

    At geomagnetic observatories, absolute measurements are needed to determine the calibration parameters of the continuously recording vector magnetometer (variometer). Absolute measurements are indispensable for determining the vector of the geomagnetic field over long periods of time. A standard DI (declination, inclination) measuring scheme for absolute measurements is routinely used at magnetic observatories. The traditional measuring scheme uses a fixed number of eight telescope orientations (Jankowski et al., 1996).

    We present a numerical method, allowing for the evaluation of an arbitrary number (minimum of five as there are five independent parameters) of telescope orientations. Our method provides D, I and Z base values and calculated error bars of them.

    A general approach has significant advantages. Additional measurements may be seamlessly incorporated for higher accuracy. Individual erroneous readings are identified and can be discarded without invalidating the entire data set. A priori information can be incorporated. We expect the general method to also ease requirements for automated DI-flux measurements. The method can reveal certain properties of the DI theodolite which are not captured by the conventional method.

    Based on the alternative evaluation method, a new, faster and less error-prone measuring scheme is presented. It avoids the need to calculate the magnetic meridian prior to the inclination measurements.

    Measurements in the vicinity of the magnetic equator are possible with theodolites and without a zenith ocular.

    The implementation of the method in MATLAB is available as source code at the GFZ Data Center (Brunke, 2017).

  13. An Approach to the Evaluation of Hypermedia.

    ERIC Educational Resources Information Center

    Knussen, Christina; And Others

    1991-01-01

    Discusses methods that may be applied to the evaluation of hypermedia, based on six models described by Lawton. Techniques described include observation, self-report measures, interviews, automated measures, psychometric tests, checklists and criterion-based techniques, process models, Experimentally Measuring Usability (EMU), and a naturalistic…

  14. Viscosity measuring using microcantilevers

    DOEpatents

    Oden, Patrick Ian

    2001-01-01

    A method for the measurement of the viscosity of a fluid uses a micromachined cantilever mounted on a moveable base. As the base is rastered while in contact with the fluid, the deflection of the cantilever is measured and the viscosity determined by comparison with standards.

  15. A new method to measure Bowen ratios using high-resolution vertical dry and wet bulb temperature profiles

    NASA Astrophysics Data System (ADS)

    Euser, T.; Luxemburg, W. M. J.; Everson, C. S.; Mengistu, M. G.; Clulow, A. D.; Bastiaanssen, W. G. M.

    2014-06-01

    The Bowen ratio surface energy balance method is a relatively simple method to determine the latent heat flux and the actual land surface evaporation. The Bowen ratio method is based on the measurement of air temperature and vapour pressure gradients. If these measurements are performed at only two heights, correctness of data becomes critical. In this paper we present the concept of a new measurement method to estimate the Bowen ratio based on vertical dry and wet bulb temperature profiles with high spatial resolution. A short field experiment with distributed temperature sensing (DTS) in a fibre optic cable with 13 measurement points in the vertical was undertaken. A dry and a wetted section of a fibre optic cable were suspended on a 6 m high tower installed over a sugar beet trial plot near Pietermaritzburg (South Africa). Using the DTS cable as a psychrometer, a near continuous observation of vapour pressure and air temperature at 0.20 m intervals was established. These data allowed the computation of the Bowen ratio with a high spatial and temporal precision. The daytime latent and sensible heat fluxes were estimated by combining the Bowen ratio values from the DTS-based system with independent measurements of net radiation and soil heat flux. The sensible heat flux, which is the relevant term to evaluate, derived from the DTS-based Bowen ratio (BR-DTS) was compared with that derived from co-located eddy covariance (R2 = 0.91), surface layer scintillometer (R2 = 0.81) and surface renewal (R2 = 0.86) systems. By using multiple measurement points instead of two, more confidence in the derived Bowen ratio values is obtained.
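
    For reference, the sketch below shows the standard profile-based Bowen ratio energy-balance computation that such a system feeds into: vapour pressure from the psychrometric relation, gradients from least-squares fits over all measurement heights, and the flux partition from net radiation and soil heat flux. The constants and the Tetens formula are textbook values rather than values taken from the paper, and the function names are illustrative.

        # Sketch of the standard Bowen-ratio energy-balance computation from dry/wet bulb
        # profiles.  Gradients are obtained by least-squares fits over all heights, which
        # is where the multi-point DTS profile helps compared with two-level systems.
        import numpy as np

        GAMMA = 0.066  # psychrometric constant, kPa/degC (near-sea-level assumption)

        def vapour_pressure(T_dry, T_wet):
            """Actual vapour pressure (kPa) from dry/wet bulb temperatures (degC)."""
            es_wet = 0.6108 * np.exp(17.27 * T_wet / (T_wet + 237.3))   # Tetens formula
            return es_wet - GAMMA * (T_dry - T_wet)

        def bowen_fluxes(z, T_dry, T_wet, Rn, G):
            """z, T_dry, T_wet: arrays over the profile heights; Rn, G in W m^-2."""
            e = vapour_pressure(T_dry, T_wet)
            dT_dz = np.polyfit(z, T_dry, 1)[0]        # temperature gradient from all heights
            de_dz = np.polyfit(z, e, 1)[0]            # vapour-pressure gradient
            beta = GAMMA * dT_dz / de_dz              # Bowen ratio
            LE = (Rn - G) / (1.0 + beta)              # latent heat flux
            H = beta * LE                             # sensible heat flux
            return beta, H, LE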

  16. [Optimization of measurement methods for a multi-frequency electromagnetic field from mobile phone base station using broadband EMF meter].

    PubMed

    Bieńkowski, Paweł; Cała, Paweł; Zubrzak, Bartłomiej

    2015-01-01

    This paper presents the characteristics of the mobile phone base station (BS) as an electromagnetic field (EMF) source. The most common system configurations and their construction are described. The parameters of the radiated EMF are discussed in the context of the access methods and other parameters of the radio transmission. Attention is also paid to the antennas used in this technology. The influence of the individual components of a multi-frequency EMF, most commonly found in the BS surroundings, on the resultant EMF strength value indicated by popular broadband EMF meters was analyzed. Examples of the metrological characteristics of the most common EMF probes and 2 measurement scenarios for a multisystem base station, with and without microwave relays, are shown. The presented method for measuring a multi-frequency EMF using 2 broadband probes allows for a significant minimization of the measurement uncertainty. Equations and formulas that can be used to calculate the actual EMF intensity from multi-frequency sources are shown. They have been verified under laboratory conditions on a specific standard setup as well as under real conditions in a survey of an existing base station with microwave relays. The presented measurement methodology for a multi-frequency EMF from a BS with microwave relays was validated both in laboratory and in real conditions, and it has been proven to be the optimal approach to the evaluation of EMF exposure in the BS surroundings. Alternative approaches with much greater uncertainty (the precaution method) or a more complex measuring procedure (the source-exclusion method) are also presented. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  17. Low-frequency sound speed and attenuation in sandy seabottom from long-range broadband acoustic measurements.

    PubMed

    Wan, Lin; Zhou, Ji-Xun; Rogers, Peter H

    2010-08-01

    A joint China-U.S. underwater acoustics experiment was conducted in the Yellow Sea with a very flat bottom and a strong and sharp thermocline. Broadband explosive sources were deployed both above and below the thermocline along two radial lines up to 57.2 km and a quarter circle with a radius of 34 km. Two inversion schemes are used to obtain the seabottom sound speed. One is based on extracting normal mode depth functions from the cross-spectral density matrix. The other is based on the best match between the calculated and measured modal arrival times for different frequencies. The inverted seabottom sound speed is used as a constraint condition to extract the seabottom sound attenuation by three methods. The first method involves measuring the attenuation coefficients of normal modes. In the second method, the seabottom sound attenuation is estimated by minimizing the difference between the theoretical and measured modal amplitude ratios. The third method is based on finding the best match between the measured and modeled transmission losses (TLs). The resultant seabottom attenuation, averaged over three independent methods, can be expressed as alpha = (0.33 +/- 0.02) f^(1.86 +/- 0.04) (dB/m kHz) over a frequency range of 80-1000 Hz.

  18. Construction of measurement uncertainty profiles for quantitative analysis of genetically modified organisms based on interlaboratory validation data.

    PubMed

    Macarthur, Roy; Feinberg, Max; Bertheau, Yves

    2010-01-01

    A method is presented for estimating the size of uncertainty associated with the measurement of products derived from genetically modified organisms (GMOs). The method is based on the uncertainty profile, which is an extension, for the estimation of uncertainty, of a recent graphical statistical tool called an accuracy profile that was developed for the validation of quantitative analytical methods. The application of uncertainty profiles as an aid to decision making and assessment of fitness for purpose is also presented. Results of the measurement of the quantity of GMOs in flour by PCR-based methods collected through a number of interlaboratory studies followed the log-normal distribution. Uncertainty profiles built using the results generally give an expected range for measurement results of 50-200% of reference concentrations for materials that contain at least 1% GMO. This range is consistent with European Network of GM Laboratories and the European Union (EU) Community Reference Laboratory validation criteria and can be used as a fitness for purpose criterion for measurement methods. The effect on the enforcement of EU labeling regulations is that, in general, an individual analytical result needs to be < 0.45% to demonstrate compliance, and > 1.8% to demonstrate noncompliance with a labeling threshold of 0.9%.
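
    The decision limits quoted above follow directly from the 50-200% expected measurement range; the short check below spells out the arithmetic (the factor-of-two interpretation is taken from the range stated in the abstract).

        # Worked check of the decision limits implied by the stated 50-200 % expected range:
        # a result is safely below the 0.9 % threshold only if even its upper factor-of-2
        # bound stays under 0.9 %, and safely above only if its lower factor-of-2 bound does.
        threshold = 0.9          # % GMO labeling threshold
        upper_factor, lower_factor = 2.0, 0.5
        compliant_below = threshold / upper_factor      # 0.45 %: result*2 still < 0.9 %
        noncompliant_above = threshold / lower_factor   # 1.8 %:  result*0.5 still > 0.9 %
        print(compliant_below, noncompliant_above)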

  19. Enzymatic method for measuring starch gelatinization in dry products in situ

    USDA-ARS?s Scientific Manuscript database

    An enzymatic method based on hydrolysis of starch by amyloglucosidase and measurement of D-glucose released by glucose oxidase-peroxidase was developed to measure both gelatinized starch and hydrolyzable starch in situ of dried starchy products. Efforts focused on the development of sample handling ...

  20. A vision-based method for planar position measurement

    NASA Astrophysics Data System (ADS)

    Chen, Zong-Hao; Huang, Peisen S.

    2016-12-01

    In this paper, a vision-based method is proposed for three-degree-of-freedom (3-DOF) planar position (X, Y, θZ) measurement. This method uses a single camera to capture the image of a 2D periodic pattern and then uses the 2D discrete Fourier transform (2D DFT) method to estimate the phase of its fundamental frequency component for position measurement. To improve position measurement accuracy, the phase estimation error of the 2D DFT is analyzed and a phase estimation method is proposed. Different simulations are done to verify the feasibility of this method and study the factors that influence the accuracy and precision of phase estimation. To demonstrate the performance of the proposed method for position measurement, a prototype encoder consisting of a black-and-white industrial camera with VGA resolution (480 × 640 pixels) and an iPhone 4s has been developed. Experimental results show the peak-to-peak resolutions to be 3.5 nm in the X axis, 8 nm in the Y axis and 4 μrad in the θZ axis. The corresponding RMS resolutions are 0.52 nm, 1.06 nm, and 0.60 μrad, respectively.
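
    The core phase-to-position step can be sketched as follows, assuming a pattern with the same known period along both image axes and using the DFT shift theorem; the paper's refined phase estimator and the recovery of θZ are not reproduced here, and the function name is illustrative.

        # Sketch of the phase-to-position idea: the phase of the fundamental spatial
        # frequency of a periodic pattern shifts by 2*pi per pattern period of translation.
        import numpy as np

        def planar_shift(ref_image, image, period_px):
            """Relative X/Y shift (pixels) between two images of the same periodic pattern."""
            F0, F1 = np.fft.fft2(ref_image), np.fft.fft2(image)
            ky = int(round(ref_image.shape[0] / period_px))     # fundamental bin along rows (Y)
            kx = int(round(ref_image.shape[1] / period_px))     # fundamental bin along cols (X)
            dphi_x = np.angle(F1[0, kx] * np.conj(F0[0, kx]))   # phase change of X fundamental
            dphi_y = np.angle(F1[ky, 0] * np.conj(F0[ky, 0]))   # phase change of Y fundamental
            # shift theorem: a translation d gives a phase change of -2*pi*d/period
            return -dphi_x / (2 * np.pi) * period_px, -dphi_y / (2 * np.pi) * period_px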

  1. Anthropometry-corrected exposure modeling as a method to improve trunk posture assessment with a single inclinometer.

    PubMed

    Van Driel, Robin; Trask, Catherine; Johnson, Peter W; Callaghan, Jack P; Koehoorn, Mieke; Teschke, Kay

    2013-01-01

    Measuring trunk posture in the workplace commonly involves subjective observation or self-report methods or the use of costly and time-consuming motion analysis systems (current gold standard). This work compared trunk inclination measurements using a simple data-logging inclinometer with trunk flexion measurements using a motion analysis system, and evaluated adding measures of subject anthropometry to exposure prediction models to improve the agreement between the two methods. Simulated lifting tasks (n=36) were performed by eight participants, and trunk postures were simultaneously measured with each method. There were significant differences between the two methods, with the inclinometer initially explaining 47% of the variance in the motion analysis measurements. However, adding one key anthropometric parameter (lower arm length) to the inclinometer-based trunk flexion prediction model reduced the differences between the two systems and accounted for 79% of the motion analysis method's variance. Although caution must be applied when generalizing lower-arm length as a correction factor, the overall strategy of anthropometric modeling is a novel contribution. In this lifting-based study, by accounting for subject anthropometry, a single, simple data-logging inclinometer shows promise for trunk posture measurement and may have utility in larger-scale field studies where similar types of tasks are performed.
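
    The kind of anthropometry-corrected model described above can be sketched generically as a two-predictor least-squares fit (inclinometer angle plus lower-arm length); the data arrays and function name below are placeholders, not the authors' protocol.

        # Generic sketch of an anthropometry-corrected prediction model: predict
        # motion-analysis trunk flexion from inclinometer inclination plus one
        # anthropometric covariate (lower-arm length).
        import numpy as np

        def fit_corrected_model(inclinometer_deg, lower_arm_len_cm, motion_capture_deg):
            """Least-squares fit of flexion ~ b0 + b1*inclination + b2*lower_arm_length."""
            X = np.column_stack([np.ones_like(inclinometer_deg),
                                 inclinometer_deg,
                                 lower_arm_len_cm])
            beta, *_ = np.linalg.lstsq(X, motion_capture_deg, rcond=None)
            fitted = X @ beta
            ss_res = np.sum((motion_capture_deg - fitted) ** 2)
            ss_tot = np.sum((motion_capture_deg - motion_capture_deg.mean()) ** 2)
            r_squared = 1.0 - ss_res / ss_tot      # variance explained by the corrected model
            return beta, r_squared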

  2. Application of neural based estimation algorithm for gait phases of above knee prosthesis.

    PubMed

    Tileylioğlu, E; Yilmaz, A

    2015-01-01

    In this study, two gait phase estimation methods, one using rule-based quantization and the other an artificial neural network model, are developed and applied to a microcontroller-based semi-active knee prosthesis in order to respond to user demands and adapt to environmental conditions. In this context, an experimental environment was set up in which gait data were collected synchronously from both inertial and image-based measurement systems. The inertial measurement system, which incorporates MEMS accelerometers and gyroscopes, is used to perform direct motion measurement through the microcontroller, while the image-based measurement system is employed for producing the verification data and assessing the success of the prosthesis. Embedded algorithms dynamically normalize the input data prior to gait phase estimation. Real-time analyses of the two methods revealed that the embedded ANN-based approach performs slightly better than the rule-based algorithm and has the advantage of being easily scalable, and is thus able to accommodate additional input parameters within the microcontroller's constraints.

  3. Evaluation of Normalization Methods to Pave the Way Towards Large-Scale LC-MS-Based Metabolomics Profiling Experiments

    PubMed Central

    Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya

    2013-01-01

    Abstract Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. Until now, most normalization methods for LC-MS data are model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data-analysis originating from multiple experimental runs. In the second part, we apply cyclic-Loess normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time and maintains the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increase the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods, if only a few internal standards were used. Moreover, data-driven normalization methods are the best option to normalize datasets from untargeted LC-MS experiments. PMID:23808607

  4. Quantum tomography for measuring experimentally the matrix elements of an arbitrary quantum operation.

    PubMed

    D'Ariano, G M; Lo Presti, P

    2001-05-07

    Quantum operations describe any state change allowed in quantum mechanics, including the evolution of an open system or the state change due to a measurement. We present a general method based on quantum tomography for measuring experimentally the matrix elements of an arbitrary quantum operation. As input the method needs only a single entangled state. The feasibility of the technique for the electromagnetic field is shown, and the experimental setup is illustrated based on homodyne tomography of a twin beam.

  5. Analysis of energy-based algorithms for RNA secondary structure prediction

    PubMed Central

    2012-01-01

    Background RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. Results We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Conclusions Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets. PMID:22296803
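
    A compact sketch of the bootstrap percentile method referred to above, applied to per-RNA F-measures to obtain a confidence interval for the average accuracy. The per-RNA values here are synthetic placeholders, not benchmark results.

```python
# Minimal sketch of the bootstrap percentile method used above to judge how
# reliable an average F-measure is for a given reference dataset.
import numpy as np

rng = np.random.default_rng(42)
f_measures = rng.beta(a=7, b=3, size=2000)   # stand-in for per-RNA F-measures

def bootstrap_percentile_ci(values, n_boot=2000, alpha=0.05):
    """95% percentile interval for the mean of `values`."""
    means = np.empty(n_boot)
    n = len(values)
    for i in range(n_boot):
        means[i] = rng.choice(values, size=n, replace=True).mean()
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

lo, hi = bootstrap_percentile_ci(f_measures)
print(f"mean F = {f_measures.mean():.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```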

  6. Analysis of energy-based algorithms for RNA secondary structure prediction.

    PubMed

    Hajiaghayi, Monir; Condon, Anne; Hoos, Holger H

    2012-02-01

    RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets.

  7. Temperature and Pressure Sensors Based on Spin-Allowed Broadband Luminescence of Doped Orthorhombic Perovskite Structures

    NASA Technical Reports Server (NTRS)

    Eldridge, Jeffrey I. (Inventor); Chambers, Matthew D. (Inventor)

    2014-01-01

    Systems and methods that are capable of measuring pressure or temperature based on luminescence are discussed herein. These systems and methods are based on spin-allowed broadband luminescence of sensors with orthorhombic perovskite structures of rare earth aluminates doped with chromium or similar transition metals, such as chromium-doped gadolinium aluminate. Luminescence from these sensors can be measured to determine at least one of temperature or pressure, based on either the intense luminescence of these sensors, even at high temperatures, or low temperature techniques discussed herein.

  8. Complex motion measurement using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Jianjun; Tu, Dan; Shen, Zhenkang

    1997-12-01

    A genetic algorithm (GA) is an optimization technique that provides an unconventional approach to many nonlinear, complicated problems. The notion of motion measurement using a genetic algorithm arises from the fact that motion measurement is essentially an optimization process based on some criteria. In this paper, we propose a complex motion measurement method using a genetic algorithm based on a block-matching criterion. The following three problems are mainly discussed and solved in the paper: (1) an adaptive method is applied to modify the control parameters that are critical to the GA, together with an elitism strategy; (2) an evaluation function for motion measurement is derived for the GA based on the block-matching technique; (3) a hill-climbing (HC) method is employed in a hybrid manner to assist the GA's search for the globally optimal solution. Some other related problems are also discussed. Experimental results are given at the end of the paper; six motion parameters are estimated in our experiments. The results show that the GA performs well and can find the object motion accurately and rapidly.
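
    The sketch below illustrates the kind of hybrid search described above: a small genetic algorithm with elitism optimizes a block-matching (sum of absolute differences) fitness, and a hill-climbing step refines the best individual. It is restricted to a two-parameter translation on synthetic frames, whereas the study estimates six motion parameters; all GA settings are illustrative assumptions.

```python
# Hedged sketch of GA-based motion measurement with a block-matching fitness
# and a hill-climbing refinement, restricted here to a 2-parameter translation.
import numpy as np

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
true_shift = (5, -3)
cur = np.roll(ref, true_shift, axis=(0, 1))           # simulated moved frame

def fitness(shift):
    """Negative sum of absolute differences for a candidate (dy, dx)."""
    dy, dx = int(round(shift[0])), int(round(shift[1]))
    return -np.abs(np.roll(ref, (dy, dx), axis=(0, 1)) - cur).sum()

def hill_climb(shift):
    """Greedy local refinement of the best GA individual."""
    best, best_f = shift, fitness(shift)
    improved = True
    while improved:
        improved = False
        for dy, dx in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            cand = (best[0] + dy, best[1] + dx)
            f = fitness(cand)
            if f > best_f:
                best, best_f, improved = cand, f, True
    return best

pop = [tuple(rng.integers(-8, 9, size=2)) for _ in range(30)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                                   # elitism: keep the best
    children = []
    while len(children) < 20:
        a, b = rng.choice(len(elite), 2, replace=False)
        child = (elite[a][0], elite[b][1])             # crossover of the two genes
        if rng.random() < 0.3:                         # mutation
            child = (child[0] + rng.integers(-1, 2), child[1] + rng.integers(-1, 2))
        children.append(child)
    pop = elite + children
print("GA + hill climbing estimate:", hill_climb(max(pop, key=fitness)))
```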

  9. Influence analysis of fluctuation parameters on flow stability based on uncertainty method

    NASA Astrophysics Data System (ADS)

    Meng, Tao; Fan, Shangchun; Wang, Chi; Shi, Huichao

    2018-05-01

    The relationship between flow fluctuation and pressure in a flow facility is studied theoretically and experimentally in this paper, and a method for measuring the flow fluctuation is proposed. Exploiting the synchronicity of pressure and flow fluctuation, the amplitude of the flow fluctuation is calculated from the pressure measured in the flow facility, and measurement of the flow fluctuation over a wide frequency range is realized. Based on the proposed method, an uncertainty analysis supported by an established sample-based stochastic model is used to evaluate the influence of different parameters on the flow fluctuation, and the parameters with the greatest influence are identified, which can serve as a reference for the optimized design and stability improvement of the flow facility.

  10. Note: A dual-channel sensor for dew point measurement based on quartz crystal microbalance.

    PubMed

    Li, Ning; Meng, Xiaofeng; Nie, Jing

    2017-05-01

    A new dual-channel sensor was designed to eliminate the temperature effect on the frequency measurement of the quartz crystal microbalance (QCM) in dew point detection. The sensor uses active temperature control to produce condensation on the surface of the QCM and then detects the dew point. Both the single-channel and the dual-channel methods were evaluated on the device. The measurement error of the single-channel method was less than 0.5 °C over the dew point range of -2 °C to 10 °C, while that of the dual-channel method was 0.3 °C. The results showed that the dual-channel method was able to eliminate the temperature effect and yield better measurement accuracy.

  11. Note: A dual-channel sensor for dew point measurement based on quartz crystal microbalance

    NASA Astrophysics Data System (ADS)

    Li, Ning; Meng, Xiaofeng; Nie, Jing

    2017-05-01

    A new dual-channel sensor was designed to eliminate the temperature effect on the frequency measurement of the quartz crystal microbalance (QCM) in dew point detection. The sensor uses active temperature control to produce condensation on the surface of the QCM and then detects the dew point. Both the single-channel and the dual-channel methods were evaluated on the device. The measurement error of the single-channel method was less than 0.5 °C over the dew point range of -2 °C to 10 °C, while that of the dual-channel method was 0.3 °C. The results showed that the dual-channel method was able to eliminate the temperature effect and yield better measurement accuracy.

  12. A flexible method for residual stress measurement of spray coated layers by laser made hole drilling and SLM based beam steering

    NASA Astrophysics Data System (ADS)

    Osten, W.; Pedrini, G.; Weidmann, P.; Gadow, R.

    2015-08-01

    A minimally invasive but high-resolution method for residual stress analysis of ceramic coatings made by thermal spray coating, using a pulsed laser for flexible hole drilling, is described. The residual stresses are retrieved by applying the measured surface data to a model-based reconstruction procedure. While the 3D deformations and the profile of the machined area are measured with digital holography, the residual stresses are calculated by FE analysis. To improve the sensitivity of the method, an SLM is applied to control the distribution and the shape of the holes. The paper presents the complete measurement and reconstruction procedure and discusses the advantages and challenges of the new technology.

  13. Structure of Hybrid Polyhedral Oligomeric Silsesquioxane Polymethacrylate Oligomers Using Ion Mobility Mass Spectrometry and Molecular Mechanics

    DTIC Science & Technology

    2004-12-01

    …Jones interaction potential is included [45], better results are obtained, but this method at times overestimates cross-sections in the intermediate 1500 to… range. …utilized to generate sodiated [(PMA)Cp7T8]xNa+ ions, and their collision cross-sections were measured in helium using ion-mobility-based methods. Results for x = 1, 2, and 3 were consistent with only one conformer occurring for the Na+1…

  14. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also while eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike-polynomials and a power spectral density model, such re-sampling does not introduce any aliasing and interpolation errors as is done by the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering method. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method used for re-sampling a surface map is two-dimensional interpolation. The main problem of this method is that the same pixel can take different values when the method of interpolation is changed among the different methods such as the "nearest," "linear," "cubic," and "spline" fitting in Matlab. The conventional, FFT-based spatial filtering method used to eliminate the surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low spatial-frequency characteristic of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains mid- and high-spatial-frequency characteristics of the given surface map by the use of a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resultant surface map.
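
    The fragment below sketches only the low-spatial-frequency part of the idea described above: a least-squares fit of a few low-order Zernike terms to a measured surface map, which can then be re-evaluated analytically on any grid. The Zernike set, grid and noise level are assumptions, and the PSD-based handling of mid- and high-spatial-frequency content is not reproduced.

```python
# Hedged sketch: fit low-order Zernike polynomials to a measured surface
# height map by least squares, then re-sample the smooth component on a
# finer grid purely from the analytical fit.
import numpy as np

def zernike_basis(x, y):
    """A handful of low-order Zernike terms in Cartesian form on the unit disk:
    piston, x-tilt, y-tilt, defocus, and the two astigmatism terms."""
    r2 = x**2 + y**2
    return np.stack([np.ones_like(x), x, y,
                     2 * r2 - 1, x**2 - y**2, 2 * x * y], axis=-1)

# Synthetic "measured" map on a 128x128 grid, with noise standing in for
# measurement errors.
n = 128
yy, xx = np.mgrid[-1:1:1j * n, -1:1:1j * n]
mask = xx**2 + yy**2 <= 1.0
rng = np.random.default_rng(3)
surface = 50e-9 * (2 * (xx**2 + yy**2) - 1) + 20e-9 * (xx**2 - yy**2)
measured = surface + 5e-9 * rng.standard_normal(surface.shape)

# Least-squares Zernike fit over valid (in-aperture) pixels.
A = zernike_basis(xx[mask], yy[mask])
coeffs, *_ = np.linalg.lstsq(A, measured[mask], rcond=None)
print("fitted coefficients (m):", np.round(coeffs, 10))

# Re-sample the smooth component on a finer grid from the fit alone.
m = 256
yy2, xx2 = np.mgrid[-1:1:1j * m, -1:1:1j * m]
resampled = zernike_basis(xx2, yy2) @ coeffs
print("re-sampled grid:", resampled.shape)
```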

  15. A method for estimating the mass properties of a manipulator by measuring the reaction moments at its base

    NASA Technical Reports Server (NTRS)

    West, Harry; Papadopoulos, Evangelos; Dubowsky, Steven; Cheah, Hanson

    1989-01-01

    Emulating on earth the weightlessness of a manipulator floating in space requires knowledge of the manipulator's mass properties. A method for calculating these properties by measuring the reaction forces and moments at the base of the manipulator is described. A manipulator is mounted on a 6-DOF sensor, and the reaction forces and moments at its base are measured for different positions of the links as well as for different orientations of its base. A procedure is developed to calculate from these measurements some combinations of the mass properties. The mass properties identified are not sufficiently complete for computed torque and other dynamic control techniques, but do allow compensation for the gravitational load on the links, and for simulation of weightless conditions on a space emulator. The algorithm has been experimentally demonstrated on a PUMA 260 and used to measure the independent combinations of the 16 mass parameters of the base and three proximal links.

  16. Detection of honeycomb cell walls from measurement data based on Harris corner detection algorithm

    NASA Astrophysics Data System (ADS)

    Qin, Yan; Dong, Zhigang; Kang, Renke; Yang, Jie; Ayinde, Babajide O.

    2018-06-01

    A honeycomb core is a discontinuous material with a thin-wall structure—a characteristic that makes accurate surface measurement difficult. This paper presents a cell wall detection method based on the Harris corner detection algorithm using laser measurement data. The vertexes of honeycomb cores are recognized with two different methods: one method is the reduction of data density, and the other is the optimization of the threshold of the Harris corner detection algorithm. Each cell wall is then identified in accordance with the neighboring relationships of its vertexes. Experiments were carried out for different types and surface shapes of honeycomb cores, where the proposed method was proved effective in dealing with noise due to burrs and/or deformation of cell walls.
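
    A hedged sketch of the vertex-detection step: once the laser measurement data have been rasterized into a grayscale height image, the Harris corner response marks candidate cell-wall vertexes, and thresholding that response is the tuning step mentioned above. The synthetic wall image and all parameter values are stand-ins for real data.

```python
# Hedged sketch: Harris corner response on a rasterized height image marks
# candidate honeycomb-cell vertexes; the threshold is the key tuning knob.
import cv2
import numpy as np

# Synthetic "height image": thin bright walls meeting at a vertex.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.line(img, (20, 100), (100, 100), 255, 2)
cv2.line(img, (100, 100), (140, 30), 255, 2)
cv2.line(img, (100, 100), (140, 170), 255, 2)

gray = np.float32(img)
response = cv2.cornerHarris(gray, 5, 3, 0.04)    # blockSize=5, ksize=3, k=0.04
threshold = 0.01 * response.max()
vertex_mask = response > threshold
ys, xs = np.nonzero(vertex_mask)
print(f"{len(xs)} candidate vertex pixels, e.g. {(xs[0], ys[0]) if len(xs) else None}")
```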

  17. A GPS-Based Pitot-Static Calibration Method Using Global Output-Error Optimization

    NASA Technical Reports Server (NTRS)

    Foster, John V.; Cunningham, Kevin

    2010-01-01

    Pressure-based airspeed and altitude measurements for aircraft typically require calibration of the installed system to account for pressure sensing errors such as those due to local flow field effects. In some cases, calibration is used to meet requirements such as those specified in Federal Aviation Regulation Part 25. Several methods are used for in-flight pitot-static calibration including tower fly-by, pacer aircraft, and trailing cone methods. In the 1990s, the introduction of satellite-based positioning systems to the civilian market enabled new in-flight calibration methods based on accurate ground speed measurements provided by Global Positioning Systems (GPS). Use of GPS for airspeed calibration has many advantages such as accuracy, ease of portability (e.g. hand-held) and the flexibility of operating in airspace without the limitations of test range boundaries or ground telemetry support. The current research was motivated by the need for a rapid and statistically accurate method for in-flight calibration of pitot-static systems for remotely piloted, dynamically-scaled research aircraft. Current calibration methods were deemed not practical for this application because of confined test range size and limited flight time available for each sortie. A method was developed that uses high data rate measurements of static and total pressure, and GPS-based ground speed measurements to compute the pressure errors over a range of airspeed. The novel application of this approach is the use of system identification methods that rapidly compute optimal pressure error models with defined confidence intervals in near-real time. This method has been demonstrated in flight tests and has shown 2-sigma bounds of approximately 0.2 kts with an order of magnitude reduction in test time over other methods. As part of this experiment, a unique database of wind measurements was acquired concurrently with the flight experiments, for the purpose of experimental validation of the optimization method. This paper describes the GPS-based pitot-static calibration method developed for the AirSTAR research test-bed operated as part of the Integrated Resilient Aircraft Controls (IRAC) project in the NASA Aviation Safety Program (AvSP). A description of the method will be provided and results from recent flight tests will be shown to illustrate the performance and advantages of this approach. Discussion of maneuver requirements and data reduction will be included as well as potential applications.
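
    As a simplified illustration of the idea above, the sketch below regresses the difference between a GPS-derived airspeed and the indicated airspeed against indicated airspeed to obtain a low-order pressure-error model. The flight data are synthetic, and the actual method's output-error system identification, wind estimation and confidence-interval machinery are not reproduced.

```python
# Simplified, hedged sketch of GPS-based airspeed-error calibration: fit a
# first-order error model dV(V_ind) = c1*V_ind + c0 from synthetic data.
import numpy as np

rng = np.random.default_rng(7)
v_indicated = rng.uniform(60, 150, size=400)                      # kts, from pitot-static
true_error = 2.0 + 0.03 * v_indicated                             # assumed "true" position error
v_gps_true = v_indicated + true_error + rng.normal(0, 0.5, 400)   # GPS-derived airspeed

c1, c0 = np.polyfit(v_indicated, v_gps_true - v_indicated, 1)
residual = (v_gps_true - v_indicated) - (c1 * v_indicated + c0)
print(f"error model: dV = {c1:.3f}*V_ind + {c0:.2f} kts, "
      f"2-sigma residual = {2 * residual.std():.2f} kts")
```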

  18. The Danish Organic Action Plan 2020: assessment method and baseline status of organic procurement in public kitchens.

    PubMed

    Sørensen, Nina N; Lassen, Anne D; Løje, Hanne; Tetens, Inge

    2015-09-01

    With political support from the Danish Organic Action Plan 2020, organic public procurement in Denmark is expected to increase. In order to evaluate changes in organic food procurement in Danish public kitchens, reliable methods are needed. The present study aimed to compare organic food procurement measurements by two methods and to collect and discuss baseline organic food procurement measurements from public kitchens participating in the Danish Organic Action Plan 2020. Comparison study measuring organic food procurement by applying two different methods, one based on the use of procurement invoices (the Organic Cuisine Label method) and the other on self-reported procurement (the Dogme method). Baseline organic food procurement status was based on organic food procurement measurements and background information from public kitchens. Public kitchens participating in the six organic food conversion projects funded by the Danish Organic Action Plan 2020 during 2012 and 2013. Twenty-six public kitchens (comparison study) and 345 public kitchens (baseline organic food procurement status). A high significant correlation coefficient was found between the two organic food procurement measurement methods (r=0·83, P<0·001) with measurements relevant for the baseline status. Mean baseline organic food procurement was found to be 24 % when including measurements from both methods. The results indicate that organic food procurement measurements by both methods were valid for the baseline status report of the Danish Organic Action Plan 2020. Baseline results in Danish public kitchens suggest there is room for more organic as well as sustainable public procurement in Denmark.

  19. Evaluating convex roof entanglement measures.

    PubMed

    Tóth, Géza; Moroder, Tobias; Gühne, Otfried

    2015-04-24

    We show a powerful method to compute entanglement measures based on convex roof constructions. In particular, our method is applicable to measures that, for pure states, can be written as low order polynomials of operator expectation values. We show how to compute the linear entropy of entanglement, the linear entanglement of assistance, and a bound on the dimension of the entanglement for bipartite systems. We discuss how to obtain the convex roof of the three-tangle for three-qubit states. We also show how to calculate the linear entropy of entanglement and the quantum Fisher information based on partial information or device independent information. We demonstrate the usefulness of our method by concrete examples.
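
    For reference, the convex roof construction the abstract builds on extends a pure-state entanglement measure E to mixed states by minimizing over pure-state decompositions:

```latex
% Convex roof extension of a pure-state entanglement measure E to mixed states:
% minimize the average pure-state entanglement over all decompositions of rho.
\[
  E(\varrho) \;=\; \min_{\{p_k,\,|\psi_k\rangle\}} \;\sum_k p_k\, E\!\left(|\psi_k\rangle\right),
  \qquad \varrho = \sum_k p_k\, |\psi_k\rangle\langle\psi_k| .
\]
```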

  20. Novel X-ray Communication Based XNAV Augmentation Method Using X-ray Detectors

    PubMed Central

    Song, Shibin; Xu, Luping; Zhang, Hua; Bai, Yuanjie

    2015-01-01

    The further development of X-ray pulsar-based NAVigation (XNAV) is hindered by its lack of accuracy, so accuracy improvement has become a critical issue for XNAV. In this paper, an XNAV augmentation method which utilizes both pulsar observation and X-ray ranging observation for navigation filtering is proposed to deal with this issue. As a newly emerged concept, X-ray communication (XCOM) shows great potential in space exploration. X-ray ranging, derived from XCOM, could achieve high accuracy in range measurement, which could provide accurate information for XNAV. For the proposed method, the measurement models of pulsar observation and range measurement observation are established, and a Kalman filtering algorithm based on the observations and orbit dynamics is proposed to estimate the position and velocity of a spacecraft. A performance comparison of the proposed method with the traditional pulsar observation method is conducted by numerical experiments. Besides, the parameters that influence the performance of the proposed method, such as the pulsar observation time, the SNR of the ranging signal, etc., are analyzed and evaluated by numerical experiments. PMID:26404295
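
    A one-dimensional, hedged illustration of the fusion idea: a constant-velocity Kalman filter ingests a noisy pulsar-derived position measurement every step and a much more precise X-ray ranging measurement intermittently. The state, noise levels and update rates are assumptions; the paper's orbit dynamics and measurement models are not reproduced.

```python
# Hedged 1-D illustration: Kalman filter fusing coarse pulsar-derived position
# fixes with intermittent, precise X-ray ranging measurements.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity dynamics
Q = 1e-4 * np.eye(2)                          # process noise
H = np.array([[1.0, 0.0]])                    # both sensors observe position here
R_pulsar, R_range = 500.0**2, 20.0**2         # assumed 1-sigma of 500 m vs 20 m

x = np.array([0.0, 10.0])                     # [position m, velocity m/s]
P = np.diag([1e6, 1e2])
truth = np.array([0.0, 10.0])
rng = np.random.default_rng(0)

def update(x, P, z, R):
    S = H @ P @ H.T + R
    K = P @ H.T / S                           # scalar measurement -> simple division
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

for k in range(100):
    truth = F @ truth
    x, P = F @ x, F @ P @ F.T + Q             # predict
    z_pulsar = truth[0] + rng.normal(0, 500.0)
    x, P = update(x, P, z_pulsar, R_pulsar)
    if k % 10 == 0:                           # ranging available intermittently
        z_range = truth[0] + rng.normal(0, 20.0)
        x, P = update(x, P, z_range, R_range)
print("position error (m):", abs(x[0] - truth[0]), "1-sigma:", np.sqrt(P[0, 0]))
```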

  1. Design of a temperature measurement and feedback control system based on an improved magnetic nanoparticle thermometer

    NASA Astrophysics Data System (ADS)

    Du, Zhongzhou; Sun, Yi; Liu, Jie; Su, Rijian; Yang, Ming; Li, Nana; Gan, Yong; Ye, Na

    2018-04-01

    Magnetic fluid hyperthermia, as a novel cancer treatment, requires precise temperature control at 315 K-319 K (42 °C-46 °C). However, the traditional temperature measurement method cannot obtain the real-time temperature in vivo, resulting in a lack of temperature feedback during the heating process. In this study, the feasibility of temperature measurement and feedback control using magnetic nanoparticles is proposed and demonstrated. This technique could be applied in hyperthermia. Specifically, the triangular-wave temperature measurement method is improved by reconstructing the original magnetization response of magnetic nanoparticles based on a digital phase-sensitive detection algorithm. The standard deviation of the temperature in the magnetic nanoparticle thermometer is about 0.1256 K. In experiments, the temperature fluctuation of the temperature measurement and feedback control system using magnetic nanoparticles is less than 0.5 K at the expected temperature of 315 K. This shows the feasibility of the temperature measurement method for temperature control. The method provides a new solution for temperature measurement and feedback control in hyperthermia.
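
    The building block mentioned above, digital phase-sensitive detection, can be sketched as follows: multiply the sampled signal by in-phase and quadrature references at the excitation frequency and average to recover the amplitude and phase of that harmonic. Sampling rate, excitation frequency and noise level are illustrative assumptions.

```python
# Hedged sketch of digital phase-sensitive (lock-in) detection: project the
# sampled signal onto cosine and sine references at the reference frequency.
import numpy as np

fs, f_exc, duration = 100_000.0, 1_000.0, 0.1     # assumed sampling / excitation setup
t = np.arange(0.0, duration, 1.0 / fs)
rng = np.random.default_rng(0)
signal = 0.8 * np.sin(2 * np.pi * f_exc * t + 0.3) + 0.2 * rng.standard_normal(t.size)

def lock_in(sig, t, f_ref):
    c = 2.0 * np.mean(sig * np.cos(2 * np.pi * f_ref * t))   # cosine-reference component
    s = 2.0 * np.mean(sig * np.sin(2 * np.pi * f_ref * t))   # sine-reference component
    return np.hypot(c, s), np.arctan2(c, s)                  # amplitude, phase (rad)

amplitude, phase = lock_in(signal, t, f_exc)
print(f"recovered amplitude = {amplitude:.3f}, phase = {phase:.3f} rad")
```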

  2. Novel method for measuring a dense 3D strain map of robotic flapping wings

    NASA Astrophysics Data System (ADS)

    Li, Beiwen; Zhang, Song

    2018-04-01

    Measuring dense 3D strain maps of the inextensible membranous flapping wings of robots is of vital importance to the field of bio-inspired engineering. Conventional high-speed 3D videography methods typically reconstruct the wing geometries through measuring sparse points with fiducial markers, and thus cannot obtain the full-field mechanics of the wings in detail. In this research, we propose a novel system to measure a dense strain map of inextensible membranous flapping wings by developing a superfast 3D imaging system and a computational framework for strain analysis. Specifically, first we developed a 5000 Hz 3D imaging system based on the digital fringe projection technique using the defocused binary patterns to precisely measure the dynamic 3D geometries of rapidly flapping wings. Then, we developed a geometry-based algorithm to perform point tracking on the precisely measured 3D surface data. Finally, we developed a dense strain computational method using the Kirchhoff-Love shell theory. Experiments demonstrate that our method can effectively perform point tracking and measure a highly dense strain map of the wings without many fiducial markers.

  3. Full-field 3D shape measurement of specular object having discontinuous surfaces

    NASA Astrophysics Data System (ADS)

    Zhang, Zonghua; Huang, Shujun; Gao, Nan; Gao, Feng; Jiang, Xiangqian

    2017-06-01

    This paper presents a novel Phase Measuring Deflectometry (PMD) method to measure specular objects having discontinuous surfaces. A mathematical model is established to directly relate the absolute phase and depth, instead of the phase and gradient. Based on the model, a hardware measuring system has been set up, which consists of a precise translating stage, a projector, a diffuser and a camera. The stage moves the projector and the diffuser together to a known position during measurement. By using model-based and machine vision methods, system calibration is accomplished to provide the required parameters and conditions. Verification tests are given to evaluate the effectiveness of the developed system. 3D (Three-Dimensional) shapes of a concave mirror and a monolithic multi-mirror array having multiple specular surfaces have been measured. Experimental results show that the proposed method can effectively obtain the 3D shape of specular objects having discontinuous surfaces.

  4. Image based method for aberration measurement of lithographic tools

    NASA Astrophysics Data System (ADS)

    Xu, Shuang; Tao, Bo; Guo, Yongxing; Li, Gongfa

    2018-01-01

    Information on the lens aberrations of lithographic tools is important, as they directly affect the intensity distribution in the image plane. Zernike polynomials are commonly used for a mathematical description of lens aberrations. Due to their lower cost and easier implementation, image-based measurement techniques have been widely used. Lithographic tools are typically partially coherent systems that can be described by a bilinear model, which entails time-consuming calculations and does not provide a simple and intuitive relationship between lens aberrations and the resulting images. Previous methods for retrieving lens aberrations in such partially coherent systems involve through-focus image measurements and time-consuming iterative algorithms. In this work, we propose a method for aberration measurement in lithographic tools which only requires measuring two images of the intensity distribution. Two linear formulations are derived in matrix form that directly relate the measured images to the unknown Zernike coefficients. Consequently, an efficient non-iterative solution is obtained.

  5. Semantic text relatedness on Al-Qur’an translation using modified path based method

    NASA Astrophysics Data System (ADS)

    Irwanto, Yudi; Arif Bijaksana, Moch; Adiwijaya

    2018-03-01

    Abdul Baquee Muhammad [1] built a corpus covering the Al-Qur'an domain, together with WordNet and a dictionary, and thereby initiated the development of knowledge about the Al-Qur'an and about the relatedness between texts in the Al-Qur'an. The path-based measurement method proposed by Liu, Zhou and Zheng [3] has never been used in the Al-Qur'an domain. Using an Al-Qur'an translation dataset, this research applies that path-based method to the Al-Qur'an domain to obtain similarity values and to measure their correlation. In this study, the degree value is proposed for modifying the path-based method from the previous research. The degree value is the number of links owned by an LCS (lowest common subsumer) node in a taxonomy; the links owned by a node represent the semantic relationships that the node has in the taxonomy. Using the degree value to modify the path-based method is expected to increase the correlation value. After running experiments with the proposed method on 200 noun word pairs derived from SimLex-999, a fairly good correlation was obtained: the correlation value is 93.3%, which indicates a very strong correlation. For parts of speech other than nouns, the vocabulary covered by WordNet is incomplete, so many word pairs have a similarity value of zero and the correlation value is low.
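
    The sketch below is an illustrative reimplementation of the general idea, not the paper's exact formula: a path-based similarity is weighted by the degree (number of taxonomy links) of the lowest common subsumer. It uses English WordNet via NLTK as a stand-in for the Al-Qur'an resources of the study, and the weighting function is an assumption.

```python
# Illustrative sketch of a path-based similarity modified by the degree of
# the lowest common subsumer (LCS). Requires: nltk.download('wordnet').
from nltk.corpus import wordnet as wn

def lcs_degree(synset):
    """Number of taxonomy links attached to a node (hypernyms + hyponyms)."""
    return len(synset.hypernyms()) + len(synset.hyponyms())

def degree_weighted_path_similarity(word1, word2):
    best = 0.0
    for s1 in wn.synsets(word1, pos=wn.NOUN):
        for s2 in wn.synsets(word2, pos=wn.NOUN):
            lcs_nodes = s1.lowest_common_hypernyms(s2)
            path = s1.path_similarity(s2)
            if not lcs_nodes or path is None:
                continue
            degree = max(lcs_degree(node) for node in lcs_nodes)
            score = path * (1.0 + 1.0 / (1.0 + degree))   # illustrative weighting
            best = max(best, score)
    return best

print(degree_weighted_path_similarity("book", "paper"))
```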

  6. Sensorless battery temperature measurements based on electrochemical impedance spectroscopy

    NASA Astrophysics Data System (ADS)

    Raijmakers, L. H. J.; Danilov, D. L.; van Lammeren, J. P. M.; Lammers, M. J. G.; Notten, P. H. L.

    2014-02-01

    A new method is proposed to measure the internal temperature of (Li-ion) batteries. Based on electrochemical impedance spectroscopy measurements, an intercept frequency (f0) can be determined which is exclusively related to the internal battery temperature. The intercept frequency is defined as the frequency at which the imaginary part of the impedance is zero (Zim = 0), i.e. where the phase shift between the battery current and voltage is absent. The advantage of the proposed method is twofold: (i) no hardware temperature sensors are required anymore to monitor the battery temperature and (ii) the method does not suffer from heat transfer delays. Mathematical analysis of the equivalent electrical-circuit, representing the battery performance, confirms that the intercept frequency decreases with rising temperatures. Impedance measurements on rechargeable Li-ion cells of various chemistries were conducted to verify the proposed method. These experiments reveal that the intercept frequency is clearly dependent on the temperature and does not depend on State-of-Charge (SoC) and aging. These impedance-based sensorless temperature measurements are therefore simple and convenient for application in a wide range of stationary, mobile and high-power devices, such as hybrid- and full electric vehicles.
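
    Extracting the intercept frequency from a measured spectrum reduces to finding the zero crossing of the imaginary impedance, for example by linear interpolation between the two frequencies that bracket it, as in the hedged sketch below with a synthetic spectrum.

```python
# Hedged sketch: estimate the intercept frequency f0 (where Im(Z) = 0) from
# an impedance spectrum by linear interpolation at the sign change.
import numpy as np

freqs = np.logspace(1, 5, 200)                    # 10 Hz .. 100 kHz
im_z = 1e-3 * np.log10(freqs / 1_000.0)           # stand-in for measured Im(Z)

def intercept_frequency(freqs, im_z):
    sign_change = np.where(np.diff(np.sign(im_z)) != 0)[0]
    if sign_change.size == 0:
        return None
    i = sign_change[0]
    f1, f2, y1, y2 = freqs[i], freqs[i + 1], im_z[i], im_z[i + 1]
    return f1 + (0.0 - y1) * (f2 - f1) / (y2 - y1)   # linear interpolation

print(f"intercept frequency f0 = {intercept_frequency(freqs, im_z):.1f} Hz")
```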

  7. A Distance Measure for Genome Phylogenetic Analysis

    NASA Astrophysics Data System (ADS)

    Cao, Minh Duc; Allison, Lloyd; Dix, Trevor

    Phylogenetic analyses of species based on single genes or parts of the genomes are often inconsistent because of factors such as variable rates of evolution and horizontal gene transfer. The availability of more and more sequenced genomes allows phylogeny construction from complete genomes that is less sensitive to such inconsistency. For such long sequences, construction methods like maximum parsimony and maximum likelihood are often not possible due to their intensive computational requirements. Another class of tree construction methods, namely distance-based methods, require a measure of distances between any two genomes. Some measures, such as the evolutionary edit distance of gene order and gene content, are computationally expensive or do not perform well when the gene content of the organisms is similar. This study presents an information theoretic measure of genetic distances between genomes based on the biological compression algorithm expert model. We demonstrate that our distance measure can be applied to reconstruct the consensus phylogenetic tree of a number of Plasmodium parasites from their genomes, the statistical bias of which would mislead conventional analysis methods. Our approach is also used to successfully construct a plausible evolutionary tree for the γ-Proteobacteria group whose genomes are known to contain many horizontally transferred genes.
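
    As an accessible stand-in for the expert-model compressor used in the study, the normalized compression distance with a general-purpose compressor illustrates the same information-theoretic idea of measuring how much one sequence helps compress another; the sequences below are synthetic.

```python
# Illustrative stand-in for a compression-based genome distance: the
# normalized compression distance (NCD) computed with zlib. Distances like
# this can feed any distance-based tree builder.
import zlib

def c(data: bytes) -> int:
    return len(zlib.compress(data, level=9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

genome_a = b"ATGACCATTGGC" * 500
genome_b = b"ATGACCATTGGC" * 450 + b"ATGTTTATTGGC" * 50     # related sequence
genome_c = b"GGCCATTAATCG" * 500                             # unrelated sequence
print("d(A,B) =", round(ncd(genome_a, genome_b), 3))
print("d(A,C) =", round(ncd(genome_a, genome_c), 3))
```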

  8. A partial least squares based spectrum normalization method for uncertainty reduction for laser-induced breakdown spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou

    2013-10-01

    A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
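
    The sketch below conveys the general flavour of PLS-based uncertainty reduction with synthetic data: the pulse-to-pulse fluctuation of an analyte line is predicted from several other line intensities (which carry the plasma-parameter variation) and subtracted. The paper's explicit spectrum-standardization model and multi-line formulation are not reproduced.

```python
# Hedged sketch of PLS-based signal-uncertainty reduction: regress the shot-
# to-shot fluctuation of an analyte line on other line intensities and
# correct each shot by the predicted fluctuation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_shots = 200
plasma_state = rng.normal(size=(n_shots, 1))                 # hidden shot-to-shot variation
other_lines = plasma_state @ rng.normal(size=(1, 6)) + 0.1 * rng.normal(size=(n_shots, 6))
analyte_line = 10.0 + 2.0 * plasma_state[:, 0] + 0.1 * rng.normal(size=n_shots)

pls = PLSRegression(n_components=2)
pls.fit(other_lines, analyte_line - analyte_line.mean())
predicted_fluctuation = pls.predict(other_lines).ravel()
corrected = analyte_line - predicted_fluctuation

rsd = lambda v: 100.0 * v.std() / v.mean()
print(f"RSD before: {rsd(analyte_line):.2f}%  after: {rsd(corrected):.2f}%")
```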

  9. Identification of characteristic frequencies of damaged railway tracks using field hammer test measurements

    NASA Astrophysics Data System (ADS)

    Oregui, M.; Li, Z.; Dollevoet, R.

    2015-03-01

    In this paper, the feasibility of the Frequency Response Function (FRF)-based statistical method to identify the characteristic frequencies of railway track defects is studied. The method compares a damaged track state to a healthy state based on non-destructive field hammer test measurements. First, a study is carried out to investigate the repeatability of hammer tests in railway tracks. By changing the excitation and measurement locations it is shown that the variability introduced by the test process is negligible. Second, following the concepts of control charts employed in process monitoring, a method to define an approximate healthy state is introduced by using hammer test measurements at locations without visual damage. Then, the feasibility study includes an investigation into squats (i.e. a major type of rail surface defect) of varying severity. The identified frequency ranges related to squats agree with those found in an extensively validated vehicle-borne detection system. Therefore, the FRF-based statistical method in combination with the non-destructive hammer test measurements has the potential to be employed to identify the characteristic frequencies of damaged conditions in railway tracks in the frequency range of 300-3000 Hz.

  10. Compensation method of cloud infrared radiation interference based on a spinning projectile's attitude measurement

    NASA Astrophysics Data System (ADS)

    Xu, Miaomiao; Bu, Xiongzhu; Yu, Jing; He, Zilu

    2018-01-01

    Based on the study of the earth's infrared radiation and the requirement of anti-cloud-interference capability for a spinning projectile's infrared attitude measurement, a compensation method for cloud infrared radiation interference is proposed. First, the theoretical model of the infrared radiation interference is established by analyzing the generation mechanism and interference characteristics of cloud infrared radiation. Then, the influence of cloud infrared radiation on the attitude angle is calculated for the following two situations. The first is the projectile inside the cloud, where the maximum roll angle error can reach ±20 deg. The second is the projectile outside the cloud, which results in the inability to measure the projectile's attitude angle. Finally, a multisensor weighted fusion algorithm based on the trust function method is proposed to reduce the influence of cloud infrared radiation. The results of semiphysical experiments show that the error of the roll angle with the weighted fusion algorithm can be kept within ±0.5 deg in the presence of cloud infrared radiation interference. The proposed method improves the accuracy of the roll angle in attitude measurement by nearly four times and also solves the problem of low accuracy of infrared-radiation-based attitude measurement in the navigation and guidance field.

  11. IMU-Based Joint Angle Measurement for Gait Analysis

    PubMed Central

    Seel, Thomas; Raisch, Jorg; Schauer, Thomas

    2014-01-01

    This contribution is concerned with joint angle calculation based on inertial measurement data in the context of human motion analysis. Unlike most robotic devices, the human body lacks even surfaces and right angles. Therefore, we focus on methods that avoid assuming certain orientations in which the sensors are mounted with respect to the body segments. After a review of available methods that may cope with this challenge, we present a set of new methods for: (1) joint axis and position identification; and (2) flexion/extension joint angle measurement. In particular, we propose methods that use only gyroscopes and accelerometers and, therefore, do not rely on a homogeneous magnetic field. We provide results from gait trials of a transfemoral amputee in which we compare the inertial measurement unit (IMU)-based methods to an optical 3D motion capture system. Unlike most authors, we place the optical markers on anatomical landmarks instead of attaching them to the IMUs. Root mean square errors of the knee flexion/extension angles are found to be less than 1° on the prosthesis and about 3° on the human leg. For the plantar/dorsiflexion of the ankle, both deviations are about 1°. PMID:24743160

  12. [Methods for measuring skin aging].

    PubMed

    Zieger, M; Kaatz, M

    2016-02-01

    Aging affects human skin and is becoming increasingly important with regard to medical, social and aesthetic issues. Detection of intrinsic and extrinsic components of skin aging requires reliable measurement methods. Modern techniques, e.g., based on direct imaging, spectroscopy or skin physiological measurements, provide a broad spectrum of parameters for different applications.

  13. Estimating School Efficiency: A Comparison of Methods Using Simulated Data.

    ERIC Educational Resources Information Center

    Bifulco, Robert; Bretschneider, Stuart

    2001-01-01

    Uses simulated data to assess the adequacy of two econometric and linear-programming techniques (data-envelopment analysis and corrected ordinary least squares) for measuring performance-based school reform. In complex data sets (simulated to contain measurement error and endogeneity), these methods are inadequate efficiency measures. (Contains 40…

  14. Blade resonance parameter identification based on tip-timing method without the once-per revolution sensor

    NASA Astrophysics Data System (ADS)

    Guo, Haotian; Duan, Fajie; Zhang, Jilong

    2016-01-01

    Blade tip-timing is the most effective method for online blade vibration measurement in turbomachinery. In this article, a tip-timing-based method for measuring synchronous blade resonance vibration is presented. This method requires no once-per-revolution sensor, which makes it more generally applicable in conditions where such a sensor is difficult to install, especially for the high-pressure rotors of dual-rotor engines. Only three casing-mounted probes are required to identify the engine order, amplitude, natural frequency and damping coefficient of the blade. A method is developed to identify the blade to which each tip-timing datum belongs without a once-per-revolution sensor. Theoretical analyses of the resonance parameter measurement are presented, and the theoretical error of the method is investigated and corrected. Experiments are conducted and the results indicate that blade resonance parameter identification is achieved without a once-per-revolution sensor.

  15. Method to improve the blade tip-timing accuracy of fiber bundle sensor under varying tip clearance

    NASA Astrophysics Data System (ADS)

    Duan, Fajie; Zhang, Jilong; Jiang, Jiajia; Guo, Haotian; Ye, Dechao

    2016-01-01

    Blade vibration measurement based on the blade tip-timing method has become an industry-standard procedure. Fiber bundle sensors are widely used for tip-timing measurement. However, the variation of clearance between the sensor and the blade will bring a tip-timing error to fiber bundle sensors due to the change in signal amplitude. This article presents methods based on software and hardware to reduce the error caused by the tip clearance change. The software method utilizes both the rising and falling edges of the tip-timing signal to determine the blade arrival time, and a calibration process suitable for asymmetric tip-timing signals is presented. The hardware method uses an automatic gain control circuit to stabilize the signal amplitude. Experiments are conducted and the results prove that both methods can effectively reduce the impact of tip clearance variation on the blade tip-timing and improve the accuracy of measurements.

  16. Measurement of smaller colon polyp in CT colonography images using morphological image processing.

    PubMed

    Manjunath, K N; Siddalingaswamy, P C; Prabhu, G K

    2017-11-01

    Automated measurement of the size and shape of colon polyps is one of the challenges in computed tomography colonography (CTC). The objective of this retrospective study was to improve the sensitivity and specificity of smaller polyp measurement in CTC using image processing techniques. A domain-knowledge-based method has been implemented with a hybrid method of colon segmentation, morphological image processing operators for detecting the colonic structures, and a decision-making system for delineating the smaller polyps based on a priori knowledge. The method was applied to 45 CTC datasets. The key finding was that the smaller polyps were accurately measured. In addition to the 6-9 mm range, polyps of even <5 mm were also detected. The results were validated qualitatively and quantitatively using both 2D MPR and 3D view. Implementation was done on a high-performance computer with parallel processing. It takes [Formula: see text] min for measuring the smaller polyps in a dataset of 500 CTC images. With this method, [Formula: see text] and [Formula: see text] were achieved. The domain-based approach with morphological image processing has given good results. The smaller polyps were measured accurately, which helps in making the right clinical decisions. Qualitatively and quantitatively the results were acceptable when compared to the ground truth at [Formula: see text].

  17. New Matching Method for Accelerometers in Gravity Gradiometer

    PubMed Central

    Wei, Hongwei; Wu, Meiping; Cao, Juliang

    2017-01-01

    The gravity gradiometer is widely used in mineral prospecting, including in the exploration of mineral, oil and gas deposits. The mismatch of accelerometers adversely affects the measuring precision of rotating accelerometer-based gravity gradiometers. Several strategies have been investigated to address the imbalance of accelerometers in gradiometers. These strategies, however, complicate gradiometer structures because feedback loops and re-designed accelerometers are needed in these strategies. In this paper, we present a novel matching method, which is based on a new configuration of accelerometers in a gravity gradiometer. In the new configuration, an angle was introduced between the measurement direction of the accelerometer and the spin direction. With the introduced angle, accelerometers could measure the centrifugal acceleration generated by the rotating disc. Matching was realized by updating the scale factors of the accelerometers with the help of centrifugal acceleration. Further simulation computations showed that after adopting the new matching method, signal-to-noise ratio improved from −41 dB to 22 dB. Compared with other matching methods, our method is more flexible and costs less. The matching accuracy of this new method is similar to that of other methods. Our method provides a new idea for matching methods in gravity gradiometer measurement. PMID:28757584

  18. On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood

    ERIC Educational Resources Information Center

    Karabatsos, George

    2017-01-01

    This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon…

  19. RAPID, PCR-BASED METHODS FOR MEASURING THE QUALITY OF BATHING BEACH WATERS

    EPA Science Inventory

    The current methods for measuring the quality of recreational waters were developed in the 1970's and were recommended to the States by EPA in 1986. These methods detect and quantify Escherichia coli and enterococci, two bacteria that are consistently associated with fecal wast...

  20. Research on position and orientation measurement method for roadheader based on vision/INS

    NASA Astrophysics Data System (ADS)

    Yang, Jinyong; Zhang, Guanqin; Huang, Zhe; Ye, Yaozhong; Ma, Bowen; Wang, Yizhong

    2018-01-01

    The roadheader, a special kind of equipment for large tunnel excavation, has been widely used in coal mines. It is one of the main pieces of mechanical-electrical equipment for mine production and is regarded as the core equipment for underground tunnel driving construction. With the wider application of rapid driving systems, underground tunnel driving methods with a higher level of automation are required. In this respect, real-time position and orientation measurement of the roadheader is one of the most important research topics. To solve the problem of automatic real-time position and orientation measurement for roadheaders, this paper analyses and compares the features of several existing measuring methods. Then a new method based on the combination of monocular vision and a strapdown inertial navigation system (SINS) is proposed. The method realizes five-degree-of-freedom (DOF) real-time position and orientation measurement of the roadheader and has been verified on rapid excavation equipment in the Daliuta coal mine. Experimental results show that the accuracy of orientation measurement is better than 0.1°, the standard deviation of static drift is better than 0.25° and the accuracy of position measurement is better than 1 cm. This shows that the method can be used for real-time position and orientation measurement of roadheaders and has broad prospects in coal mine engineering.

  1. Superior accuracy of model-based radiostereometric analysis for measurement of polyethylene wear

    PubMed Central

    Stilling, M.; Kold, S.; de Raedt, S.; Andersen, N. T.; Rahbek, O.; Søballe, K.

    2012-01-01

    Objectives The accuracy and precision of two new methods of model-based radiostereometric analysis (RSA) were hypothesised to be superior to a plain radiograph method in the assessment of polyethylene (PE) wear. Methods A phantom device was constructed to simulate three-dimensional (3D) PE wear. Images were obtained consecutively for each simulated wear position for each modality. Three commercially available packages were evaluated: model-based RSA using laser-scanned cup models (MB-RSA), model-based RSA using computer-generated elementary geometrical shape models (EGS-RSA), and PolyWare. Precision (95% repeatability limits) and accuracy (Root Mean Square Errors) for two-dimensional (2D) and 3D wear measurements were assessed. Results The precision for 2D wear measures was 0.078 mm, 0.102 mm, and 0.076 mm for EGS-RSA, MB-RSA, and PolyWare, respectively. For the 3D wear measures the precision was 0.185 mm, 0.189 mm, and 0.244 mm for EGS-RSA, MB-RSA, and PolyWare respectively. Repeatability was similar for all methods within the same dimension, when compared between 2D and 3D (all p > 0.28). For the 2D RSA methods, accuracy was below 0.055 mm and at least 0.335 mm for PolyWare. For 3D measurements, accuracy was 0.1 mm, 0.2 mm, and 0.3 mm for EGS-RSA, MB-RSA and PolyWare respectively. PolyWare was less accurate compared with RSA methods (p = 0.036). No difference was observed between the RSA methods (p = 0.10). Conclusions For all methods, precision and accuracy were better in 2D, with RSA methods being superior in accuracy. Although less accurate and precise, 3D RSA defines the clinically relevant wear pattern (multidirectional). PolyWare is a good and low-cost alternative to RSA, despite being less accurate and requiring a larger sample size. PMID:23610688

  2. A brief measure of attitudes toward mixed methods research in psychology

    PubMed Central

    Roberts, Lynne D.; Povee, Kate

    2014-01-01

    The adoption of mixed methods research in psychology has trailed behind other social science disciplines. Teaching psychology students, academics, and practitioners about mixed methodologies may increase the use of mixed methods within the discipline. However, tailoring and evaluating education and training in mixed methodologies requires an understanding of, and way of measuring, attitudes toward mixed methods research in psychology. To date, no such measure exists. In this article we present the development and initial validation of a new measure: Attitudes toward Mixed Methods Research in Psychology. A pool of 42 items developed from previous qualitative research on attitudes toward mixed methods research along with validation measures was administered via an online survey to a convenience sample of 274 psychology students, academics and psychologists. Principal axis factoring with varimax rotation on a subset of the sample produced a four-factor, 12-item solution. Confirmatory factor analysis on a separate subset of the sample indicated that a higher order four factor model provided the best fit to the data. The four factors; ‘Limited Exposure,’ ‘(in)Compatibility,’ ‘Validity,’ and ‘Tokenistic Qualitative Component’; each have acceptable internal reliability. Known groups validity analyses based on preferred research orientation and self-rated mixed methods research skills, and convergent and divergent validity analyses based on measures of attitudes toward psychology as a science and scientist and practitioner orientation, provide initial validation of the measure. This brief, internally reliable measure can be used in assessing attitudes toward mixed methods research in psychology, measuring change in attitudes as part of the evaluation of mixed methods education, and in larger research programs. PMID:25429281

  3. Localization of small arms fire using acoustic measurements of muzzle blast and/or ballistic shock wave arrivals.

    PubMed

    Lo, Kam W; Ferguson, Brian G

    2012-11-01

    The accurate localization of small arms fire using fixed acoustic sensors is considered. First, the conventional wavefront-curvature passive ranging method, which requires only differential time-of-arrival (DTOA) measurements of the muzzle blast wave to estimate the source position, is modified to account for sensor positions that are not strictly collinear (bowed array). Second, an existing single-sensor-node ballistic model-based localization method, which requires both DTOA and differential angle-of-arrival (DAOA) measurements of the muzzle blast wave and ballistic shock wave, is improved by replacing the basic external ballistics model (which describes the bullet's deceleration along its trajectory) with a more rigorous model and replacing the look-up table ranging procedure with a nonlinear (or polynomial) equation-based ranging procedure. Third, a new multiple-sensor-node ballistic model-based localization method, which requires only DTOA measurements of the ballistic shock wave to localize the point of fire, is formulated. The first method is applicable to situations when only the muzzle blast wave is received, whereas the third method applies when only the ballistic shock wave is received. The effectiveness of each of these methods is verified using an extensive set of real data recorded during a 7 day field experiment.

  4. Misalignment calibration of geomagnetic vector measurement system using parallelepiped frame rotation method

    NASA Astrophysics Data System (ADS)

    Pang, Hongfeng; Zhu, XueJun; Pan, Mengchun; Zhang, Qi; Wan, Chengbiao; Luo, Shitu; Chen, Dixiang; Chen, Jinfei; Li, Ji; Lv, Yunxiao

    2016-12-01

    Misalignment error is a key factor limiting the measurement accuracy of a geomagnetic vector measurement system; calibrating it is difficult because the sensors measure different physical quantities and their coordinate axes cannot be observed directly. A new misalignment calibration method based on rotating a parallelepiped frame is proposed. Simulation and experimental results show the effectiveness of the calibration method. The experimental system mainly comprises a DM-050 three-axis fluxgate magnetometer, an INS (inertial navigation system), an aluminium parallelepiped frame, and an aluminium plane base. Misalignment angles are calculated from the data measured by the magnetometer and the INS after rotating the aluminium parallelepiped frame on the aluminium plane base. After calibration, the RMS errors of the geomagnetic north, vertical and east components are reduced from 349.441 nT, 392.530 nT and 562.316 nT to 40.130 nT, 91.586 nT and 141.989 nT respectively.

  5. Economic method for helical gear flank surface characterisation

    NASA Astrophysics Data System (ADS)

    Koulin, G.; Reavie, T.; Frazer, R. C.; Shaw, B. A.

    2018-03-01

    Typically, the quality of a gear pair is assessed using simplified geometric tolerances which do not always correlate with functional performance. In order to identify and quantify functional performance-based parameters, further development of the gear measurement approach is required. A methodology for interpolating the full active helical gear flank surface from sparse line measurements is presented. The method seeks to identify the minimum number of line measurements required to sufficiently characterise an active gear flank. In the form-ground gear example presented, a single helix and three profile line measurements were considered acceptable. The resulting surfaces can be used to simulate the meshing engagement of a gear pair and therefore provide insight into functional performance-based parameters. The assessment of quality can then be based on the predicted performance in the context of an application.
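
    As a rough illustration of reconstructing a full flank map from sparse line measurements, the sketch below interpolates a synthetic deviation surface from three profile lines and one helix line using SciPy's scattered-data interpolation. The parameterisation and data are invented and this is not the paper's interpolation methodology.

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    # Hypothetical sparse line measurements over a gear flank, parameterised by a
    # profile coordinate u and a face-width coordinate w, with measured deviation d
    # from the nominal involute surface (micrometres).
    u_profile = np.linspace(0.0, 1.0, 50)
    lines = []
    for w_line in (0.25, 0.5, 0.75):              # three profile line measurements
        d = 5.0 * np.sin(3.0 * u_profile) + w_line
        lines.append(np.column_stack([u_profile, np.full_like(u_profile, w_line), d]))
    w_helix = np.linspace(0.0, 1.0, 50)           # one helix line measurement at u = 0.5
    d_helix = 5.0 * np.sin(1.5) + w_helix
    lines.append(np.column_stack([np.full_like(w_helix, 0.5), w_helix, d_helix]))

    pts = np.vstack(lines)
    ui, wi = np.meshgrid(np.linspace(0.0, 1.0, 40), np.linspace(0.25, 0.75, 40))
    surface = griddata(pts[:, :2], pts[:, 2], (ui, wi), method='cubic')
    print(surface.shape)   # interpolated deviation map over the active flank
    ```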

  6. Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions.

    PubMed

    Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A

    2008-10-01

    Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
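
    The pseudo multiple replica idea can be sketched generically: add many synthetic noise realisations, coloured by the prescan noise covariance, to the measured k-space, push each through the reconstruction, and take the pixel-wise standard deviation as the noise map. The toy below uses a root-sum-of-squares reconstruction as a stand-in for SENSE or GRAPPA (strictly speaking RSS is not linear) and random data; the g-factor would follow by comparing accelerated and unaccelerated SNR maps scaled by the square root of the acceleration factor, which is omitted here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pseudo_multiple_replica(kspace, recon, noise_cov, n_replicas=100):
        """Estimate pixel-wise noise and SNR of a reconstruction by Monte Carlo.

        kspace    : (coils, ny, nx) measured k-space data
        recon     : callable mapping k-space -> magnitude image
        noise_cov : (coils, coils) receiver noise covariance from a prescan
        """
        L = np.linalg.cholesky(noise_cov)          # colours white noise like the array
        images = []
        for _ in range(n_replicas):
            white = (rng.standard_normal(kspace.shape)
                     + 1j * rng.standard_normal(kspace.shape)) / np.sqrt(2.0)
            correlated = np.einsum('ij,j...->i...', L, white)
            images.append(recon(kspace + correlated))
        noise_map = np.std(np.stack(images), axis=0)
        snr_map = recon(kspace) / noise_map
        return snr_map, noise_map

    # Toy example: 4-coil data, reconstruction = inverse FFT + root sum of squares
    def rss_recon(k):
        imgs = np.fft.ifft2(k, axes=(-2, -1))
        return np.sqrt((np.abs(imgs) ** 2).sum(axis=0))

    kspace = rng.standard_normal((4, 32, 32)) + 1j * rng.standard_normal((4, 32, 32))
    snr, noise = pseudo_multiple_replica(kspace, rss_recon, np.eye(4), n_replicas=50)
    print(snr.shape)
    ```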

  7. Radiance and atmosphere propagation-based method for the target range estimation

    NASA Astrophysics Data System (ADS)

    Cho, Hoonkyung; Chun, Joohwan

    2012-06-01

    Target range estimation is traditionally based on radar and active sonar systems in modern combat systems. However, the performance of such active sensor devices is degraded tremendously by jamming signals from the enemy. This paper proposes a simple range estimation method between the target and the sensor. Passive IR sensors measure infrared (IR) radiance radiating from objects at different wavelengths, so the method is robust against electromagnetic jamming. The target radiance measured at the IR sensor in each wavelength band depends on the emissive properties of the target material and is attenuated by various factors, in particular the distance between the sensor and the target and the atmospheric environment. MODTRAN is a tool that models atmospheric propagation of electromagnetic radiation. Based on the MODTRAN results and the measured radiance, the target range is estimated. To statistically analyze the performance of the proposed method, we use maximum likelihood estimation (MLE) and evaluate the Cramer-Rao lower bound (CRLB) via the probability density function of the measured radiance. We also compare the CRLB with the variance of the ML estimate using Monte Carlo simulation.
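
    A toy version of the MLE/CRLB analysis can be written down by replacing the MODTRAN transmittance tables with a simple per-band exponential attenuation model; everything below (band radiances, extinction coefficients, noise level) is invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy forward model: measured radiance in each band falls off exponentially with
    # range (a stand-in for a MODTRAN-derived transmittance table).
    L0    = np.array([2.0, 1.5, 1.0])            # target radiance per band at R = 0
    beta  = np.array([0.8e-3, 1.2e-3, 2.0e-3])   # band extinction coefficients (1/m)
    sigma = 0.01                                 # sensor noise std per band
    R_true = 1500.0                              # metres

    def model(R):
        return L0 * np.exp(-beta * R)

    def mle_range(y, grid=np.linspace(100.0, 5000.0, 4901)):
        # Gaussian noise => the MLE is the range minimising the squared residual
        resid = y[None, :] - model(grid[:, None])
        return grid[np.argmin((resid ** 2).sum(axis=1))]

    # Cramer-Rao lower bound for Gaussian noise: 1 / sum((dmodel/dR)^2 / sigma^2)
    dmdR = -beta * model(R_true)
    crlb = 1.0 / np.sum(dmdR ** 2 / sigma ** 2)

    estimates = [mle_range(model(R_true) + sigma * rng.standard_normal(3))
                 for _ in range(2000)]
    print("MLE variance:", np.var(estimates), " CRLB:", crlb)
    ```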

  8. A Varian DynaLog file-based procedure for patient dose-volume histogram-based IMRT QA.

    PubMed

    Calvo-Ortega, Juan F; Teke, Tony; Moragues, Sandra; Pozo, Miquel; Casals-Farran, Joan

    2014-03-06

    In the present study, we describe a method based on the analysis of the dynamic MLC log files (DynaLog) generated by the controller of a Varian linear accelerator in order to perform patient-specific IMRT QA. The DynaLog files of a Varian Millennium MLC, recorded during an IMRT treatment, can be processed using a MATLAB-based code in order to generate the actual fluence for each beam and so recalculate the actual patient dose distribution using the Eclipse treatment planning system. The accuracy of the DynaLog-based dose reconstruction procedure was assessed by introducing ten intended errors to perturb the fluence of the beams of a reference plan such that ten subsequent erroneous plans were generated. In-phantom measurements with an ionization chamber (ion chamber) and planar dose measurements using an EPID system were performed to investigate the correlation between the measured dose changes and the expected ones detected by the reconstructed plans for the ten intended erroneous cases. Moreover, the method was applied to 20 cases of clinical plans for different locations (prostate, lung, breast, and head and neck). A dose-volume histogram (DVH) metric was used to evaluate the impact of the delivery errors in terms of dose to the patient. The ionometric measurements revealed a significant positive correlation (R² = 0.9993) between the variations of the dose induced in the erroneous plans with respect to the reference plan and the corresponding changes indicated by the DynaLog-based reconstructed plans. The EPID measurements showed that the accuracy of the DynaLog-based method to reconstruct the beam fluence was comparable with the dosimetric resolution of the portal dosimetry used in this work (3%/3 mm). The DynaLog-based reconstruction method described in this study is a suitable tool to perform a patient-specific IMRT QA. This method allows us to perform patient-specific IMRT QA by evaluating the result based on the DVH metric of the planning CT image (patient DVH-based IMRT QA).

  9. Serum protein measurement using a tapered fluorescent fibre-optic evanescent wave-based biosensor

    NASA Astrophysics Data System (ADS)

    Preejith, P. V.; Lim, C. S.; Chia, T. F.

    2006-12-01

    A novel method to measure the total serum protein concentration is described in this paper. The method is based on the principles of fibre-optic evanescent wave spectroscopy. The biosensor applies a fluorescent dye-immobilized porous glass coating on a multi-mode optical fibre. The evanescent wave's intensity at the fibre-optic core-cladding interface is used to monitor the protein-induced changes in the sensor element. The sensor offers a rapid, single-step method for quantifying protein concentrations without destroying the sample. This unique sensing method presents a sensitive and accurate platform for the quantification of protein.

  10. Methods of Measurement in epidemiology: Sedentary Behaviour

    PubMed Central

    Atkin, Andrew J; Gorely, Trish; Clemes, Stacy A; Yates, Thomas; Edwardson, Charlotte; Brage, Soren; Salmon, Jo; Marshall, Simon J; Biddle, Stuart JH

    2012-01-01

    Background Research examining sedentary behaviour as a potentially independent risk factor for chronic disease morbidity and mortality has expanded rapidly in recent years. Methods We present a narrative overview of the sedentary behaviour measurement literature. Subjective and objective methods of measuring sedentary behaviour suitable for use in population-based research with children and adults are examined. The validity and reliability of each method is considered, gaps in the literature specific to each method identified and potential future directions discussed. Results To date, subjective approaches to sedentary behaviour measurement, e.g. questionnaires, have focused predominantly on TV viewing or other screen-based behaviours. Typically, such measures demonstrate moderate reliability but slight to moderate validity. Accelerometry is increasingly being used for sedentary behaviour assessments; this approach overcomes some of the limitations of subjective methods, but detection of specific postures and postural changes by this method is somewhat limited. Instruments developed specifically for the assessment of body posture have demonstrated good reliability and validity in the limited research conducted to date. Miniaturization of monitoring devices, interoperability between measurement and communication technologies and advanced analytical approaches are potential avenues for future developments in this field. Conclusions High-quality measurement is essential in all elements of sedentary behaviour epidemiology, from determining associations with health outcomes to the development and evaluation of behaviour change interventions. Sedentary behaviour measurement remains relatively under-developed, although new instruments, both objective and subjective, show considerable promise and warrant further testing. PMID:23045206

  11. Laser focal profiler based on forward scattering of a nanoparticle

    NASA Astrophysics Data System (ADS)

    Ota, Taisuke

    2018-03-01

    A laser focal intensity profiling method based on the forward scattering from a nanoparticle is demonstrated for in situ measurements using a laser focusing system with six microscope objective lenses with different numerical apertures ranging from 0.15 to 1.4. The measured profiles showed Airy disc patterns although their rings showed some imperfections due to aberrations and misalignment of the test system. The dipole radiation model revealed that the artefact of this method was much smaller than the influence of the deterioration in the experimental system; a condition where no artefact appears was predicted based on proper selection of measurement angles.

  12. Accelerometer-Based Method for Extracting Respiratory and Cardiac Gating Information for Dual Gating during Nuclear Medicine Imaging

    PubMed Central

    Pänkäälä, Mikko; Paasio, Ari

    2014-01-01

    Both respiratory and cardiac motions reduce the quality and consistency of medical imaging specifically in nuclear medicine imaging. Motion artifacts can be eliminated by gating the image acquisition based on the respiratory phase and cardiac contractions throughout the medical imaging procedure. Electrocardiography (ECG), 3-axis accelerometer, and respiration belt data were processed and analyzed from ten healthy volunteers. Seismocardiography (SCG) is a noninvasive accelerometer-based method that measures accelerations caused by respiration and myocardial movements. This study was conducted to investigate the feasibility of the accelerometer-based method in dual gating technique. The SCG provides accelerometer-derived respiratory (ADR) data and accurate information about quiescent phases within the cardiac cycle. The correct information about the status of ventricles and atria helps us to create an improved estimate for quiescent phases within a cardiac cycle. The correlation of ADR signals with the reference respiration belt was investigated using Pearson correlation. High linear correlation was observed between accelerometer-based measurement and reference measurement methods (ECG and Respiration belt). Above all, due to the simplicity of the proposed method, the technique has high potential to be applied in dual gating in clinical cardiac positron emission tomography (PET) to obtain motion-free images in the future. PMID:25120563

  13. Fast and accurate enzyme activity measurements using a chip-based microfluidic calorimeter.

    PubMed

    van Schie, Morten M C H; Ebrahimi, Kourosh Honarmand; Hagen, Wilfred R; Hagedoorn, Peter-Leon

    2018-03-01

    Recent developments in microfluidic and nanofluidic technologies have resulted in development of new chip-based microfluidic calorimeters with potential use in different fields. One application would be the accurate high-throughput measurement of enzyme activity. Calorimetry is a generic way to measure activity of enzymes, but unlike conventional calorimeters, chip-based calorimeters can be easily automated and implemented in high-throughput screening platforms. However, application of chip-based microfluidic calorimeters to measure enzyme activity has been limited due to problems associated with miniaturization such as incomplete mixing and a decrease in volumetric heat generated. To address these problems we introduced a calibration method and devised a convenient protocol for using a chip-based microfluidic calorimeter. Using the new calibration method, the progress curve of alkaline phosphatase, which has product inhibition for phosphate, measured by the calorimeter was the same as that recorded by UV-visible spectroscopy. Our results may enable use of current chip-based microfluidic calorimeters in a simple manner as a tool for high-throughput screening of enzyme activity with potential applications in drug discovery and enzyme engineering. Copyright © 2017. Published by Elsevier Inc.

  14. A knowledge-driven approach to biomedical document conceptualization.

    PubMed

    Zheng, Hai-Tao; Borchert, Charles; Jiang, Yong

    2010-06-01

    Biomedical document conceptualization is the process of clustering biomedical documents based on ontology-represented domain knowledge. The result of this process is the representation of the biomedical documents by a set of key concepts and their relationships. Most clustering methods cluster documents based on invariant domain knowledge. The objective of this work is to develop an effective method to cluster biomedical documents based on various user-specified ontologies, so that users can exploit the concept structures of documents more effectively. We develop a flexible framework to allow users to specify the knowledge bases, in the form of ontologies. Based on the user-specified ontologies, we develop a key concept induction algorithm, which uses latent semantic analysis to identify key concepts and cluster documents. A corpus-related ontology generation algorithm is developed to generate the concept structures of documents. Based on two biomedical datasets, we evaluate the proposed method and five other clustering algorithms. The clustering results of the proposed method outperform those of the five other algorithms in terms of key concept identification. With respect to the first biomedical dataset, our method achieves F-measure values of 0.7294 and 0.5294 based on the MeSH ontology and gene ontology (GO), respectively. With respect to the second biomedical dataset, our method achieves F-measure values of 0.6751 and 0.6746 based on the MeSH ontology and GO, respectively. Both results outperform those of the five other algorithms in terms of F-measure. Based on the MeSH ontology and GO, the generated corpus-related ontologies show informative conceptual structures. The proposed method enables users to specify the domain knowledge to exploit the conceptual structures of biomedical document collections. In addition, the proposed method is able to extract the key concepts and cluster the documents with a relatively high precision. Copyright 2010 Elsevier B.V. All rights reserved.
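
    A minimal, generic sketch of the latent semantic analysis step, using scikit-learn rather than the authors' ontology-driven pipeline: TF-IDF features are reduced by truncated SVD and the documents are clustered in the reduced space. The documents and cluster count are invented; an F-measure against gold-standard labels could then be computed with sklearn.metrics.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.cluster import KMeans

    docs = [
        "BRCA1 mutation increases breast cancer risk",
        "Chemotherapy response in breast cancer patients",
        "Insulin signalling and type 2 diabetes",
        "Glucose metabolism in diabetic patients",
    ]

    # Latent semantic analysis: TF-IDF term-document matrix reduced by truncated SVD
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(lsa)
    print(labels)
    ```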

  15. More than just tracking time: Complex measures of user engagement with an internet-based health promotion intervention.

    PubMed

    Baltierra, Nina B; Muessig, Kathryn E; Pike, Emily C; LeGrand, Sara; Bull, Sheana S; Hightow-Weidman, Lisa B

    2016-02-01

    There has been a rise in internet-based health interventions without a concomitant focus on new methods to measure user engagement and its effect on outcomes. We describe current user tracking methods for internet-based health interventions and offer suggestions for improvement based on the design and pilot testing of healthMpowerment.org (HMP). HMP is a multi-component online intervention for young Black men and transgender women who have sex with men (YBMSM/TW) to reduce risky sexual behaviors, promote healthy living and build social support. The intervention is non-directive, incorporates interactive features, and utilizes a point-based reward system. Fifteen YBMSM/TW (age 20-30) participated in a one-month pilot study to test the usability and efficacy of HMP. Engagement with the intervention was tracked using a customized data capture system and validated with Google Analytics. Usage was measured in time spent (total and across sections) and points earned. Average total time spent on HMP was five hours per person (range 0-13). Total time spent was correlated with total points earned and overall site satisfaction. Measuring engagement in internet-based interventions is crucial to determining efficacy. Multiple methods of tracking helped derive more comprehensive user profiles. Results highlighted the limitations of measures to capture user activity and the elusiveness of the concept of engagement. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. RSS Fingerprint Based Indoor Localization Using Sparse Representation with Spatio-Temporal Constraint

    PubMed Central

    Piao, Xinglin; Zhang, Yong; Li, Tingshu; Hu, Yongli; Liu, Hao; Zhang, Ke; Ge, Yun

    2016-01-01

    The Received Signal Strength (RSS) fingerprint-based indoor localization is an important research topic in wireless network communications. Most current RSS fingerprint-based indoor localization methods do not explore and utilize the spatial or temporal correlation existing in fingerprint data and measurement data, which is helpful for improving localization accuracy. In this paper, we propose an RSS fingerprint-based indoor localization method by integrating the spatio-temporal constraints into the sparse representation model. The proposed model utilizes the inherent spatial correlation of fingerprint data in the fingerprint matching and uses the temporal continuity of the RSS measurement data in the localization phase. Experiments on the simulated data and the localization tests in the real scenes show that the proposed method improves the localization accuracy and stability effectively compared with state-of-the-art indoor localization methods. PMID:27827882
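
    The basic sparse-representation idea (without the paper's spatio-temporal constraints) can be sketched as follows: the online RSS vector is expressed as a sparse non-negative combination of offline fingerprint columns, solved here with a Lasso, and the position is taken as the weighted centroid of the selected reference points. All data are synthetic and the regularisation weight is arbitrary.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(2)

    n_aps, n_refpoints = 8, 25
    grid = np.array([(x, y) for x in range(5) for y in range(5)], float)  # reference points
    fingerprints = -40.0 - 10.0 * rng.random((n_aps, n_refpoints))        # offline RSS map (dBm)

    # Online measurement: RSS taken near reference point 12, plus noise
    rss = fingerprints[:, 12] + rng.normal(0.0, 1.0, n_aps)

    # Sparse representation: rss ~ fingerprints @ c with few non-zero coefficients
    lasso = Lasso(alpha=0.1, positive=True, fit_intercept=False, max_iter=10000)
    c = lasso.fit(fingerprints, rss).coef_

    position = grid.T @ c / c.sum()       # weighted centroid of selected reference points
    print(position, grid[12])
    ```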

  17. Numerical investigation of multi-beam laser heterodyne measurement with ultra-precision for linear expansion coefficient of metal based on oscillating mirror modulation

    NASA Astrophysics Data System (ADS)

    Li, Yan-Chao; Wang, Chun-Hui; Qu, Yang; Gao, Long; Cong, Hai-Fang; Yang, Yan-Ling; Gao, Jie; Wang, Ao-You

    2011-01-01

    This paper proposes a novel multi-beam laser heterodyne method for measuring the linear expansion coefficient of metal. Based on the Doppler effect and heterodyne technology, the length-variation information is loaded onto the frequency difference of the multi-beam laser heterodyne signal through frequency modulation by the oscillating mirror, so that many values of the length variation caused by a temperature change can be obtained simultaneously after demodulation of the heterodyne signal. Processing these values by weighted averaging yields the length variation accurately, and the linear expansion coefficient of the metal is then obtained by calculation. The method is used in a MATLAB simulation of the measurement of the linear expansion coefficient of a metal rod at different temperatures; the results show that the relative measurement error of the method is only 0.4%.
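
    The final weighted-average step reduces to simple arithmetic. Assuming an inverse-variance weighting (the abstract does not specify the weights), a sketch with invented numbers is:

    ```python
    import numpy as np

    def weighted_length_change(dL, var):
        """Inverse-variance weighted average of repeated length-change estimates."""
        w = 1.0 / np.asarray(var, float)
        return float(np.sum(w * np.asarray(dL, float)) / np.sum(w))

    # Hypothetical demodulated length-change values (metres) and their variances
    dL_estimates = [2.41e-5, 2.38e-5, 2.43e-5, 2.39e-5]
    variances    = [4e-14, 6e-14, 5e-14, 4e-14]

    L0, dT = 0.100, 20.0                       # 100 mm rod heated by 20 K (assumed)
    dL = weighted_length_change(dL_estimates, variances)
    alpha = dL / (L0 * dT)                     # linear expansion coefficient (1/K)
    print(alpha)
    ```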

  18. Innovative FRF measurement technique for frequency based substructuring method

    NASA Astrophysics Data System (ADS)

    Mirza, W. I. I. Wan Iskandar; Rani, M. N. Abdul; Ayub, M. A.; Yunus, M. A.; Omar, R.; Mohd Zin, M. S.

    2018-04-01

    In this paper, frequency based substructuring (FBS) is used to predict the dynamic behaviour of an assembled structure. The assembled structure, which consists of two beam substructures, namely substructure A (finite element model) and substructure B (experimental model), was tested. The FE model of substructure A was constructed using 3D elements, and its Frequency Response Functions (FRFs) were derived via an FRF synthesis method. A specially customised bolt was used to allow sensors to be attached and excitation to be applied at the interfaces of substructure B, and the FRFs were measured using an impact testing method. Substructures A and B were then coupled using the FBS method to predict the FRFs of the assembly. The coupled FRFs obtained were validated against their measured counterparts. This work revealed that using a specially customised bolt during the measurement of the interface FRFs led to an improvement in the FBS-predicted results.

  19. Research on key technology of the verification system of steel rule based on vision measurement

    NASA Astrophysics Data System (ADS)

    Jia, Siyuan; Wang, Zhong; Liu, Changjie; Fu, Luhua; Li, Yiming; Lu, Ruijun

    2018-01-01

    The steel rule plays an important role in quantity transmission. However, the traditional verification method for steel rules, based on manual operation and reading, yields low precision and low efficiency. A machine-vision-based verification system for steel rules is designed with reference to JJG1-1999, Verification Regulation of Steel Rule [1]. What differentiates this system is that it uses a new calibration method for the pixel equivalent and decontaminates the surface of the steel rule. Experiments show that these two methods fully meet the requirements of the verification system. The measurement results demonstrate that the methods not only meet the precision required by the verification regulation, but also improve the reliability and efficiency of the verification system.

  20. Verification of three-microphone impedance tube method for measurement of transmission loss in aerogels

    NASA Astrophysics Data System (ADS)

    Connick, Robert J.

    Accurate measurement of normal incidence transmission loss is essential for the acoustic characterization of building materials. In this research, a method of measuring normal incidence sound transmission loss proposed by Salissou et al. as a complement to standard E2611-09 of the American Society for Testing and Materials [Standard Test Method for Measurement of Normal Incidence Sound Transmission of Acoustical Materials Based on the Transfer Matrix Method (American Society for Testing and Materials, New York, 2009)] is verified. Two samples from the original literature are used to verify the method, as well as a Filtros RTM sample. Following the verification, several nano-material aerogel samples are measured.

  1. Improving the dictionary lookup approach for disease normalization using enhanced dictionary and query expansion

    PubMed Central

    Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie

    2016-01-01

    The rapidly increasing biomedical literature calls for an automatic approach to the recognition and normalization of disease mentions in order to increase the precision and effectiveness of disease-based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among all the proposed methods, conditional random fields (CRFs) and the dictionary lookup method are widely used for named entity recognition and normalization, respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions, and studied the effect of various techniques in improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and compare them with other existing dictionary-lookup-based normalization methods. The best configuration achieved an F-measure of 0.77 for the disease normalization, which outperformed the best dictionary-lookup-based baseline method studied in this work by 0.13 in F-measure. Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract PMID:27504009
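
    A bare-bones dictionary lookup normaliser, shown only to make the baseline concrete (the paper's enhanced dictionary and query expansion are not reproduced); the dictionary entries and matching rules are illustrative.

    ```python
    import re

    # Toy disease dictionary: surface form -> concept identifier (illustrative entries)
    DICTIONARY = {
        "breast cancer": "D001943",
        "breast carcinoma": "D001943",
        "type 2 diabetes mellitus": "D003924",
        "diabetes mellitus type 2": "D003924",
    }

    def normalize_key(text):
        """Case folding, punctuation stripping, and token sorting for loose matching."""
        tokens = re.findall(r"[a-z0-9]+", text.lower())
        return " ".join(sorted(tokens))

    LOOKUP = {normalize_key(k): v for k, v in DICTIONARY.items()}

    def normalize_mention(mention):
        return LOOKUP.get(normalize_key(mention))  # None when no dictionary entry matches

    print(normalize_mention("Breast Carcinoma"))          # D001943
    print(normalize_mention("type-2 diabetes mellitus"))  # D003924
    ```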

  2. Determination of lysine content based on an in situ pretreatment and headspace gas chromatographic measurement technique.

    PubMed

    Wan, Xiao-Fang; Liu, Bao-Lian; Yu, Teng; Yan, Ning; Chai, Xin-Sheng; Li, You-Ming; Chen, Guang-Xue

    2018-05-01

    This work reports on a simple method for the determination of lysine content by an in situ sample pretreatment and headspace gas chromatographic measurement (HS-GC) technique, based on carbon dioxide (CO2) formation from the pretreatment reaction (between lysine and ninhydrin solution) in a closed vial. It was observed that complete lysine conversion to CO2 could be achieved within 60 min at 60 °C in a phosphate buffer medium (pH = 4.0), with a minimum molar ratio of ninhydrin/lysine of 16. The results showed that the method had a good precision (RSD < 5.23%) and accuracy (within 6.80%), compared to the results measured by a reference method (ninhydrin spectroscopic method). Due to the feature of in situ sample pretreatment and headspace measurement, the present method becomes very simple and particularly suitable to be used for batch sample analysis in lysine-related research and applications. Graphical abstract: The flow path of the reaction and HS-GC measurement for the lysine analysis.

  3. Applications of broadband cavity enhanced spectroscopy for measurements of trace gases and aerosols

    NASA Astrophysics Data System (ADS)

    Washenfelder, R. A.; Attwood, A. R.; Brock, C. A.; Brown, S. S.; Dube, W. P.; Flores, J. M.; Langford, A. O.; Min, K. E.; Rudich, Y.; Stutz, J.; Wagner, N.; Young, C.; Zarzana, K. J.

    2015-12-01

    Broadband cavity enhanced spectroscopy (BBCES) uses a broadband light source, optical cavity, and multichannel detector to measure light extinction with high sensitivity. This method differs from cavity ringdown spectroscopy, because it uses an inexpensive, incoherent light source and allows optical extinction to be determined simultaneously across a broad wavelength region. Spectral fitting methods can be used to retrieve multiple absorbers across the observed wavelength region. We have successfully used this method to measure glyoxal (CHOCHO), nitrous acid (HONO), and nitrogen dioxide (NO2) from ground-based and aircraft-based sampling platforms. The detection limit (2-sigma) in 5 s for retrievals of CHOCHO, HONO and NO2 is 32, 250 and 80 parts per trillion (pptv). Alternatively, gas-phase absorbers can be chemically removed to allow the accurate determination of aerosol extinction. In the laboratory, we have used the aerosol extinction measurements to determine scattering and absorption as a function of wavelength. We have deployed a ground-based field instrument to measure aerosol extinction, with a detection limit of approximately 0.2 Mm-1 in 1 min. BBCES methods are most widely used in the near-ultraviolet and visible spectral region. Recently, we have demonstrated measurements at 315-350 nm for formaldehyde (CH2O) and NO2. Extending the technique further into the ultraviolet spectral region will allow important additional measurements of trace gas species and aerosol extinction.
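
    The spectral-fitting step can be illustrated with a linear least-squares retrieval: the measured extinction spectrum is modelled as a sum of reference absorption cross-sections scaled by number densities, plus a smooth baseline. The cross-sections, concentrations, and noise level below are synthetic stand-ins, not literature data.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    wl = np.linspace(440.0, 470.0, 300)                 # wavelength grid (nm)

    # Synthetic reference cross-sections (cm^2/molecule) standing in for literature data
    sigma_no2     = 5e-19 * np.exp(-((wl - 450.0) / 6.0) ** 2)
    sigma_glyoxal = 3e-19 * np.exp(-((wl - 455.0) / 2.0) ** 2)

    # Simulated extinction spectrum (cm^-1): two absorbers, a smooth baseline, and noise
    n_no2, n_gly = 2.5e11, 4.0e10                       # molecules/cm^3
    alpha = (n_no2 * sigma_no2 + n_gly * sigma_glyoxal
             + 1e-8 * (wl - 455.0) + 3e-7 + 2e-9 * rng.standard_normal(wl.size))

    # Linear least squares: columns are cross-sections plus baseline polynomial terms
    A = np.column_stack([sigma_no2, sigma_glyoxal, np.ones_like(wl), wl - 455.0])
    coef, *_ = np.linalg.lstsq(A, alpha, rcond=None)
    print("retrieved number densities:", coef[0], coef[1])
    ```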

  4. Calibration of a flexible measurement system based on industrial articulated robot and structured light sensor

    NASA Astrophysics Data System (ADS)

    Mu, Nan; Wang, Kun; Xie, Zexiao; Ren, Ping

    2017-05-01

    To realize online rapid measurement for complex workpieces, a flexible measurement system based on an articulated industrial robot with a structured light sensor mounted on the end-effector is developed. A method for calibrating the system parameters is proposed in which the hand-eye transformation parameters and the robot kinematic parameters are synthesized in the calibration process. An initial hand-eye calibration is first performed using a standard sphere as the calibration target. By applying the modified complete and parametrically continuous method, we establish a synthesized kinematic model that combines the initial hand-eye transformation and distal link parameters as a whole with the sensor coordinate system as the tool frame. According to the synthesized kinematic model, an error model is constructed based on spheres' center-to-center distance errors. Consequently, the error model parameters can be identified in a calibration experiment using a three-standard-sphere target. Furthermore, the redundancy of error model parameters is eliminated to ensure the accuracy and robustness of the parameter identification. Calibration and measurement experiments are carried out based on an ER3A-C60 robot. The experimental results show that the proposed calibration method enjoys high measurement accuracy, and this efficient and flexible system is suitable for online measurement in industrial scenes.

  5. Adapting Document Similarity Measures for Ligand-Based Virtual Screening.

    PubMed

    Himmat, Mubarak; Salim, Naomie; Al-Dabbagh, Mohammed Mumtaz; Saeed, Faisal; Ahmed, Ali

    2016-04-13

    Quantifying the similarity of molecules is considered one of the major tasks in virtual screening. Many similarity measures have been proposed for this purpose, some of which have been derived from the document and text retrieval field, since similarity methods that perform well in document retrieval can also achieve good results in virtual screening. In this work, we propose a similarity measure for ligand-based virtual screening, derived from a text processing similarity measure and adapted to suit virtual screening; we call this proposed measure the Adapted Similarity Measure of Text Processing (ASMTP). For evaluating and testing the proposed ASMTP we conducted several experiments on two different benchmark datasets: the Maximum Unbiased Validation (MUV) and the MDL Drug Data Report (MDDR). The experiments were conducted by randomly choosing 10 reference structures from each class as queries and evaluating the recall at cut-offs of 1% and 5%. The overall results are compared with those of several similarity methods, including the Tanimoto coefficient, which is considered the conventional and standard similarity coefficient for fingerprint-based similarity calculations. The results show that the proposed measure performs better in ligand-based virtual screening and outperforms the Tanimoto coefficient and the other methods.
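
    For reference, the Tanimoto coefficient used as the baseline comparison is straightforward to compute on binary fingerprints; the snippet below shows the standard definition, not the proposed ASMTP measure.

    ```python
    import numpy as np

    def tanimoto(a, b):
        """Tanimoto coefficient between two binary fingerprints."""
        a, b = np.asarray(a, bool), np.asarray(b, bool)
        common = np.logical_and(a, b).sum()
        return common / (a.sum() + b.sum() - common)

    query = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    hit   = np.array([1, 0, 1, 0, 0, 1, 1, 0])
    print(tanimoto(query, hit))   # 3 / (4 + 4 - 3) = 0.6
    ```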

  6. Influence of stem temperature changes on heat pulse sap flux density measurements.

    PubMed

    Vandegehuchte, Maurits W; Burgess, Stephen S O; Downey, Alec; Steppe, Kathy

    2015-04-01

    While natural spatial temperature gradients between measurement needles have been thoroughly investigated for continuous heat-based sap flow methods, little attention has been given to how natural changes in stem temperature impact heat pulse-based methods through temporal rather than spatial effects. By modelling the theoretical equation for both an ideal instantaneous pulse and a step pulse and applying a finite element model which included actual needle dimensions and wound effects, the influence of a varying stem temperature on heat pulse-based methods was investigated. It was shown that the heat ratio (HR) method was influenced, while for the compensation heat pulse and Tmax methods changes in stem temperatures of up to 0.002 °C/s did not lead to significantly different results. For the HR method, rising stem temperatures during measurements led to lower heat pulse velocity values, while decreasing stem temperatures led to both higher and lower heat pulse velocities, and to imaginary results for high flows. These errors of up to 40% can easily be prevented by including a temperature correction in the data analysis procedure, calculating the slope of the natural temperature change based on the measured temperatures before application of the heat pulse. Results of a greenhouse and outdoor experiment on Pinus pinea L. show the influence of this correction on low and average sap flux densities. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
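
    A sketch of the suggested correction, assuming the commonly used heat-ratio relation Vh = (k/x)·ln(ΔTd/ΔTu): fit the pre-pulse temperature drift with a straight line, subtract it from both probe traces, then form the ratio over a post-pulse averaging window. The window limits, default diffusivity and spacing, and the demo data are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def heat_ratio_velocity(t, T_down, T_up, pulse_time, k=2.5e-3, x=0.6):
        """Heat-pulse velocity (cm/h) by the heat-ratio method with drift correction.

        t : time stamps (s); T_down, T_up : downstream/upstream probe temperatures (C)
        pulse_time : time (s) at which the heat pulse was fired
        k : assumed thermal diffusivity (cm^2/s); x : probe-to-heater spacing (cm)
        """
        t = np.asarray(t, float)
        Td = np.array(T_down, float)
        Tu = np.array(T_up, float)
        pre = t < pulse_time
        # Fit the natural stem-temperature drift to the pre-pulse data and remove it
        for T in (Td, Tu):
            slope, intercept = np.polyfit(t[pre], T[pre], 1)
            T -= slope * t + intercept
        window = (t > pulse_time + 60.0) & (t < pulse_time + 100.0)  # averaging window
        ratio = np.mean(Td[window]) / np.mean(Tu[window])
        return 3600.0 * (k / x) * np.log(ratio)

    # Synthetic demo: linear cooling drift plus pulse-induced rises of 0.8 and 0.5 C
    t = np.arange(0.0, 200.0, 1.0)
    drift = 20.0 - 0.001 * t
    rise = np.where(t > 100.0, 1.0 - np.exp(-(t - 100.0) / 30.0), 0.0)
    print(heat_ratio_velocity(t, drift + 0.8 * rise, drift + 0.5 * rise, pulse_time=100.0))
    ```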

  7. Comparing Computer-Adaptive and Curriculum-Based Measurement Methods of Assessment

    ERIC Educational Resources Information Center

    Shapiro, Edward S.; Gebhardt, Sarah N.

    2012-01-01

    This article reported the concurrent, predictive, and diagnostic accuracy of a computer-adaptive test (CAT) and curriculum-based measurements (CBM; both computation and concepts/application measures) for universal screening in mathematics among students in first through fourth grade. Correlational analyses indicated moderate to strong…

  8. The Spectrophotometric Method of Determining the Transmission of Solar Energy in Salt Gradient Solar Ponds

    NASA Technical Reports Server (NTRS)

    Giulianelli, J.

    1984-01-01

    In order to predict the thermal efficiency of a solar pond it is necessary to know the total average solar energy reaching the storage layer. One method for determining this energy for water containing dissolved colored species is based upon spectral transmission measurements using a laboratory spectrophotometer. This method is examined, along with some of the theoretical groundwork needed to discuss the measurement of light transmission in water. Results of in situ irradiance measurements from oceanographic research are presented, and the difficulties inherent in extrapolating laboratory data obtained with ten-centimeter cells to real three-dimensional pond situations are discussed. Particular emphasis is put on the need to account for molecular and particulate scattering in measurements on weakly absorbing solutions. Despite these considerations, it is expected that attenuation calculations based upon careful measurements using a dual-beam spectrophotometer technique, combined with known attenuation coefficients, will be useful in solar pond modeling and in monitoring for color buildup. Preliminary results using the CSM method are presented.

  9. Methods Development for Spectral Simplification of Room-Temperature Rotational Spectra

    NASA Astrophysics Data System (ADS)

    Kent, Erin B.; Shipman, Steven

    2014-06-01

    Room-temperature rotational spectra are dense and difficult to assign, and so we have been working to develop methods to accelerate this process. We have tested two different methods with our waveguide-based spectrometer, which operates from 8.7 to 26.5 GHz. The first method, based on previous work by Medvedev and De Lucia, was used to estimate lower state energies of transitions by performing relative intensity measurements at a range of temperatures between -20 and +50 °C. The second method employed hundreds of microwave-microwave double resonance measurements to determine level connectivity between rotational transitions. The relative intensity measurements were not particularly successful in this frequency range (the reasons for this will be discussed), but the information gleaned from the double-resonance measurements can be incorporated into other spectral search algorithms (such as autofit or genetic algorithm approaches) via scoring or penalty functions to help with the spectral assignment process. I.R. Medvedev, F.C. De Lucia, Astrophys. J. 656, 621-628 (2007).

  10. An evaluation method of computer usability based on human-to-computer information transmission model.

    PubMed

    Ogawa, K

    1992-01-01

    This paper proposes a new evaluation and prediction method for computer usability. This method is based on our two previously proposed information transmission measures created from a human-to-computer information transmission model. The model has three information transmission levels: the device, software, and task content levels. Two measures, called the device independent information measure (DI) and the computer independent information measure (CI), defined on the software and task content levels respectively, are given as the amount of information transmitted. Two information transmission rates are defined as DI/T and CI/T, where T is the task completion time: the device independent information transmission rate (RDI), and the computer independent information transmission rate (RCI). The method utilizes the RDI and RCI rates to comparatively evaluate the usability of software and device operations on different computer systems. Experiments on three different systems, using a graphical information input task, confirm that the method offers an efficient way of determining computer usability.
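
    The two rates are plain ratios once DI and CI have been obtained; a trivial sketch with hypothetical numbers:

    ```python
    def usability_rates(di_bits, ci_bits, task_time_s):
        """Device- and computer-independent information transmission rates (bits/s)."""
        return di_bits / task_time_s, ci_bits / task_time_s

    # Hypothetical task: 96 bits of software-level (DI) and 64 bits of task-level (CI)
    # information transmitted in a 40 s task
    print(usability_rates(96.0, 64.0, 40.0))   # (2.4, 1.6)
    ```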

  11. Reliability of lower limb alignment measures using an established landmark-based method with a customized computer software program

    PubMed Central

    Sled, Elizabeth A.; Sheehy, Lisa M.; Felson, David T.; Costigan, Patrick A.; Lam, Miu; Cooke, T. Derek V.

    2010-01-01

    The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. 1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. 2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis (MOST) Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977 – 0.999 for computer analysis; 0.820 – 0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839 – 0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers. PMID:19882339

  12. An integrative framework for sensor-based measurement of teamwork in healthcare.

    PubMed

    Rosen, Michael A; Dietz, Aaron S; Yang, Ting; Priebe, Carey E; Pronovost, Peter J

    2015-01-01

    There is a strong link between teamwork and patient safety. Emerging evidence supports the efficacy of teamwork improvement interventions. However, the availability of reliable, valid, and practical measurement tools and strategies is commonly cited as a barrier to long-term sustainment and spread of these teamwork interventions. This article describes the potential value of sensor-based technology as a methodology to measure and evaluate teamwork in healthcare. The article summarizes the teamwork literature within healthcare, including team improvement interventions and measurement. Current applications of sensor-based measurement of teamwork are reviewed to assess the feasibility of employing this approach in healthcare. The article concludes with a discussion highlighting current application needs and gaps and relevant analytical techniques to overcome the challenges to implementation. Compelling studies exist documenting the feasibility of capturing a broad array of team input, process, and output variables with sensor-based methods. Implications of this research are summarized in a framework for development of multi-method team performance measurement systems. Sensor-based measurement within healthcare can unobtrusively capture information related to social networks, conversational patterns, physical activity, and an array of other meaningful information without having to directly observe or periodically survey clinicians. However, trust and privacy concerns present challenges that need to be overcome through engagement of end users in healthcare. Initial evidence exists to support the feasibility of sensor-based measurement to drive feedback and learning across individual, team, unit, and organizational levels. Future research is needed to refine methods, technologies, theory, and analytical strategies. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Solution-Based Electro-Orientation Spectroscopy (EOS) for Contactless Measurement of Semiconductor Nanowires

    NASA Astrophysics Data System (ADS)

    Yuan, Wuhan; Mohabir, Amar; Tutuncuoglu, Gozde; Filler, Michael; Feldman, Leonard; Shan, Jerry

    2017-11-01

    Solution-based, contactless methods for determining the electrical conductivity of nanowires and nanotubes have unique advantages over conventional techniques in terms of high throughput and compatibility with further solution-based processing and assembly methods. Here, we describe the solution-based electro-orientation spectroscopy (EOS) method, in which nanowire conductivity is measured from the AC-electric-field-induced alignment rate of the nanowire in a suspending fluid. The particle conductivity is determined from the measured crossover frequency between conductivity-dominated, low-frequency alignment to the permittivity-dominated, high-frequency regime. We discuss the extension of the EOS measurement range by an order-of-magnitude, taking advantage of the high dielectric constant of deionized water. With water and other fluids, we demonstrate that EOS can quantitatively characterize the electrical conductivities of nanowires over a 7-order-of-magnitude range, 10-5 to 102 S/m. We highlight the efficiency and utility of EOS for nanomaterial characterization by statistically characterizing the variability of semiconductor nanowires of the same nominal composition, and studying the connection between synthesis parameters and properties. NSF CBET-1604931.

  14. Detection of ferromagnetic target based on mobile magnetic gradient tensor system

    NASA Astrophysics Data System (ADS)

    Gang, Y. I. N.; Yingtang, Zhang; Zhining, Li; Hongbo, Fan; Guoquan, Ren

    2016-03-01

    Attitude change of a mobile magnetic gradient tensor system critically affects the precision of gradient measurements, thereby increasing ambiguity in target detection. This paper presents a rotational-invariant-based method for locating and identifying ferromagnetic targets. Firstly, the unit magnetic moment vector was derived from the geometrical invariant that the intermediate eigenvector of the magnetic gradient tensor is perpendicular to both the magnetic moment vector and the source-sensor displacement vector. Secondly, the unit source-sensor displacement vector was derived from the property that the angle between the magnetic moment vector and the source-sensor displacement is a rotational invariant. By introducing a displacement vector between two measurement points, the magnetic moment vector and the source-sensor displacement vector were theoretically derived. To cope with the measurement noise present in realistic detection applications, linear equations were formulated using invariants corresponding to several distinct measurement points, and least-squares solutions for the magnetic moment vector and the source-sensor displacement vector were obtained. Results of simulation and a principle-verification experiment showed the correctness of the analytical method, along with the practicability of the least-squares method.

  15. Measuring the activity of a {sup 51}Cr neutrino source based on the gamma-radiation spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorbachev, V. V., E-mail: vvgor-gfb1@mail.ru; Gavrin, V. N.; Ibragimova, T. V.

    A technique for the measurement of activities of intense β sources by measuring the continuous gamma-radiation (internal bremsstrahlung) spectra is developed. A method for reconstructing the spectrum recorded by a germanium semiconductor detector is described. A method for the absolute measurement of the internal bremsstrahlung spectrum of {sup 51}Cr is presented.

  16. Range estimation of passive infrared targets through the atmosphere

    NASA Astrophysics Data System (ADS)

    Cho, Hoonkyung; Chun, Joohwan; Seo, Doochun; Choi, Seokweon

    2013-04-01

    Target range estimation is traditionally based on radar and active sonar systems in modern combat systems. However, jamming signals tremendously degrade the performance of such active sensor devices. We introduce a simple target range estimation method and the fundamental limits of the proposed method based on the atmosphere propagation model. Since passive infrared (IR) sensors measure IR signals radiating from objects in different wavelengths, this method has robustness against electromagnetic jamming. The measured target radiance of each wavelength at the IR sensor depends on the emissive properties of target material and various attenuation factors (i.e., the distance between sensor and target and atmosphere environment parameters). MODTRAN is a tool that models atmospheric propagation of electromagnetic radiation. Based on the results from MODTRAN and atmosphere propagation-based modeling, the target range can be estimated. To analyze the proposed method's performance statistically, we use maximum likelihood estimation (MLE) and evaluate the Cramer-Rao lower bound (CRLB) via the probability density function of measured radiance. We also compare CRLB and the variance of MLE using Monte-Carlo simulation.

  17. Josephson frequency meter for millimeter and submillimeter wavelengths

    NASA Technical Reports Server (NTRS)

    Anischenko, S. E.; Larkin, S. Y.; Chaikovsky, V. I.; Kabayev, P. V.; Kamyshin, V. V.

    1995-01-01

    Frequency measurement of electromagnetic oscillations in the millimeter and submillimeter wavebands becomes more and more difficult as frequency increases, for a number of reasons. First, these frequencies are close to the cutoffs of semiconductor converting devices, so optical measurement methods have to be used instead of traditional frequency-transfer methods. Second, resonance measurement methods use relatively narrow bands, while optical methods are limited in frequency and time resolution by the limited range and velocity of movement of their mechanical elements, and their efficiency decreases with increasing wavelength because of diffraction losses; they also require a priori information on the radiation frequency band of the source involved. A method of measuring the frequency of harmonic microwave signals in the millimeter and submillimeter wavebands based on the ac Josephson effect in superconducting contacts is free of all the above drawbacks. This approach offers a number of major advantages over the more traditional measurement methods, that is, those based on frequency conversion, resonance, and interferometric techniques. It is characterized by high potential accuracy, a wide range of measurable frequencies, prompt measurement, the possibility of a panoramic display of the results, and full automation of the measuring process.

  18. Measuring Recognition Performance Using Computer-Based and Paper-Based Methods.

    ERIC Educational Resources Information Center

    Federico, Pat-Anthony

    1991-01-01

    Using a within-subjects design, computer-based and paper-based tests of aircraft silhouette recognition were administered to 83 male naval pilots and flight officers to determine the relative reliabilities and validities of 2 measurement modes. Relative reliabilities and validities of the two modes were contingent on the multivariate measurement…

  19. Policy to implementation: evidence-based practice in community mental health – study protocol

    PubMed Central

    2013-01-01

    Background Evidence-based treatments (EBTs) are not widely available in community mental health settings. In response to the call for implementation of evidence-based treatments in the United States, states and counties have mandated behavioral health reform through policies and other initiatives. Evaluations of the impact of these policies on implementation are rare. A systems transformation about to occur in Philadelphia, Pennsylvania, offers an important opportunity to prospectively study implementation in response to a policy mandate. Methods/design Using a prospective sequential mixed-methods design, with observations at multiple points in time, we will investigate the responses of staff from 30 community mental health clinics to a policy from the Department of Behavioral Health encouraging and incentivizing providers to implement evidence-based treatments to treat youth with mental health problems. Study participants will be 30 executive directors, 30 clinical directors, and 240 therapists. Data will be collected prior to the policy implementation, and then at two and four years following policy implementation. Quantitative data will include measures of intervention implementation and potential moderators of implementation (i.e., organizational- and leader-level variables) and will be collected from executive directors, clinical directors, and therapists. Measures include self-reported therapist fidelity to evidence-based treatment techniques as measured by the Therapist Procedures Checklist-Revised, organizational variables as measured by the Organizational Social Context Measurement System and the Implementation Climate Assessment, leader variables as measured by the Multifactor Leadership Questionnaire, attitudes towards EBTs as measured by the Evidence-Based Practice Attitude Scale, and knowledge of EBTs as measured by the Knowledge of Evidence- Based Services Questionnaire. Qualitative data will include semi-structured interviews with a subset of the sample to assess the implementation experience of high-, average-, and low-performing agencies. Mixed methods will be integrated through comparing and contrasting results from the two methods for each of the primary hypotheses in this study. Discussion Findings from the proposed research will inform both future policy mandates around implementation and the support required for the success of these policies, with the ultimate goal of improving the quality of treatment provided to youth in the public sector. PMID:23522556

  20. Normalized Rotational Multiple Yield Surface Framework (NRMYSF) stress-strain curve prediction method based on small strain triaxial test data on undisturbed Auckland residual clay soils

    NASA Astrophysics Data System (ADS)

    Noor, M. J. Md; Ibrahim, A.; Rahman, A. S. A.

    2018-04-01

    Small-strain triaxial measurement with local instrumentation is considered significantly more accurate than external strain measurement with the conventional method, which is prone to systematic errors. Three submersible miniature linear variable differential transducers (LVDTs) were mounted on yokes clamped directly onto the soil sample, spaced equally at 120° from one another. The setup, using a load cell with 0.4 N resolution and a 16-bit AD converter, was capable of consistently resolving displacements of less than 1 µm and measuring axial strains ranging from less than 0.001% to 2.5%. Further analysis of the small-strain local measurement data was performed using the new Normalized Rotational Multiple Yield Surface Framework (NRMYSF) method and compared with the existing Rotational Multiple Yield Surface Framework (RMYSF) prediction method. The prediction of shear strength based on the combined intrinsic curvilinear shear strength envelope using small-strain triaxial test data confirmed the significant improvement and reliability of the measurement and analysis methods. Moreover, the NRMYSF method shows excellent data prediction and a significant improvement toward more reliable prediction of soil strength, which can reduce the cost and time of experimental laboratory testing.

  1. Determination of isocyanate groups in the organic intermediates by reaction-based headspace gas chromatography.

    PubMed

    Xie, Wei-Qi; Chai, Xin-Sheng

    2016-10-14

    This work reports on a novel method for the determination of isocyanate groups in the related organic intermediates by a reaction-based headspace gas chromatography. The method is based on measuring the CO2 formed from the reaction between the isocyanate groups in the organic intermediates and water in a closed headspace sample vial at 45 °C for 20 min. The results showed that the method has a good precision and accuracy, in which the relative standard deviation in the repeatability measurement was 5.26%, and the relative differences between the data obtained by the HS-GC method and the reference back-titration method were within 9.42%. The present method is simple and efficient and is particularly suitable to be used for determining the isocyanate groups in the batch sample analysis. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Design of Compressed Sensing Algorithm for Coal Mine IoT Moving Measurement Data Based on a Multi-Hop Network and Total Variation.

    PubMed

    Wang, Gang; Zhao, Zhikai; Ning, Yongjie

    2018-05-28

    With the application of the coal mine Internet of Things (IoT), mobile measurement devices such as intelligent mine lamps are generating increasing amounts of moving measurement data. How to transmit these large amounts of mobile measurement data effectively has become an urgent problem. This paper presents a compressed sensing algorithm for the large amount of coal mine IoT moving measurement data based on a multi-hop network and total variation. Taking the gas data in the mobile measurement data as an example, two network models for the transmission of the gas data flow, namely single-hop and multi-hop transmission modes, are investigated in depth, and a gas data compressed sensing collection model is built based on a multi-hop network. To exploit the sparse characteristics of gas data, the concept of total variation is introduced and a high-efficiency gas data compression and reconstruction method based on Total Variation Sparsity based on Multi-Hop (TVS-MH) is proposed. According to the simulation results, by using the proposed method, the moving measurement data flow from an underground distributed mobile network can be acquired and transmitted efficiently.
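
    A generic total-variation reconstruction (not the TVS-MH algorithm itself) can be sketched by recovering a piecewise-constant gas signal from a reduced number of random linear measurements, minimising a least-squares data term plus a smoothed TV penalty; all sizes and parameters below are invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)

    # Piecewise-constant "gas concentration" signal: sparse in the gradient domain
    n = 100
    x_true = np.concatenate([np.full(40, 0.5), np.full(35, 1.2), np.full(25, 0.8)])

    # Compressed measurements collected along the route: y = A x + noise
    m = 40
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = A @ x_true + 0.01 * rng.standard_normal(m)

    def objective(x, lam=0.05, eps=1e-6):
        data = 0.5 * np.sum((A @ x - y) ** 2)
        tv = np.sum(np.sqrt(np.diff(x) ** 2 + eps))   # smoothed total variation
        return data + lam * tv

    res = minimize(objective, np.zeros(n), method="L-BFGS-B")
    print("relative reconstruction error:",
          np.linalg.norm(res.x - x_true) / np.linalg.norm(x_true))
    ```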

  3. Direct Measurement of Intracellular Pressure

    PubMed Central

    Petrie, Ryan J.; Koo, Hyun

    2014-01-01

    A method to directly measure the intracellular pressure of adherent, migrating cells is described in the Basic Protocol. This approach is based on the servo-null method where a microelectrode is introduced into the cell to directly measure the physical pressure of the cytoplasm. We also describe the initial calibration of the microelectrode as well as the application of the method to cells migrating inside three-dimensional (3D) extracellular matrix (ECM). PMID:24894836

  4. Method and Apparatus for Measuring Surface Air Pressure

    NASA Technical Reports Server (NTRS)

    Lin, Bing (Inventor); Hu, Yongxiang (Inventor)

    2014-01-01

    The present invention is directed to an apparatus and method for remotely measuring surface air pressure. In one embodiment, the method of the present invention utilizes the steps of transmitting a signal having multiple frequencies into the atmosphere, measuring the transmitted/reflected signal to determine the relative received power level of each frequency and then determining the surface air pressure based upon the attenuation of the transmitted frequencies.

  5. Handheld laser scanner automatic registration based on random coding

    NASA Astrophysics Data System (ADS)

    He, Lei; Yu, Chun-ping; Wang, Li

    2011-06-01

    Current research on laser scanners focuses mainly on static measurement; little use has been made of dynamic measurement, which is appropriate for a wider range of problems and situations. In particular, a traditional laser scanner must be kept stable while scanning, and coordinate transformation parameters must be measured between stations. To make scanning measurement intelligent and rapid, this paper develops a new registration algorithm for a handheld laser scanner based on the positions of targets, which realizes dynamic measurement with a handheld scanner without additional complex work. The two cameras on the laser scanner photograph artificial target points, designed by random coding, to obtain their three-dimensional coordinates. A set of matched points is then found among the control points to orient the scanner by a least-squares common-points transformation. After that, the two cameras can directly measure the laser point cloud on the object surface and obtain point cloud data in a unified coordinate system. The paper makes three major contributions. First, a laser scanner based on binocular vision is designed with two cameras and one laser head, which realizes real-time orientation of the scanner and improves efficiency. Second, coded markers are introduced to solve the data-matching problem and a random coding method is proposed; compared with other coding methods, these markers are simple to match and avoid shading the object. Finally, a recognition method for the coded markers based on distance recognition is proposed, which is more efficient. The method presented here can be used widely in measurements of objects from small to very large, such as vehicles and airplanes, strengthening intelligence and efficiency. Theoretical analysis and experiments demonstrate that the proposed method is reasonable and efficient and realizes dynamic measurement with a handheld laser scanner.
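
    The "least-squares common-points transformation" used for orientation is, in essence, the classical rigid registration of two matched 3D point sets. A standard SVD-based solution (often attributed to Kabsch/Horn) is sketched below on synthetic points; it is not the authors' code or data:

      import numpy as np

      def rigid_transform(src, dst):
          """Least-squares rotation R and translation t such that dst ~ R @ src + t.
          src, dst: (N, 3) arrays of matched 3D points (N >= 3, non-degenerate)."""
          c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
          H = (src - c_src).T @ (dst - c_dst)              # 3x3 cross-covariance
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T                               # proper rotation, det(R) = +1
          t = c_dst - R @ c_src
          return R, t

      # Hypothetical check: recover a known pose from simulated matched target points.
      rng = np.random.default_rng(1)
      pts = rng.uniform(-1.0, 1.0, size=(10, 3))
      ang = np.deg2rad(30.0)
      R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                         [np.sin(ang),  np.cos(ang), 0.0],
                         [0.0, 0.0, 1.0]])
      t_true = np.array([0.2, -0.1, 0.5])
      R_est, t_est = rigid_transform(pts, pts @ R_true.T + t_true)
      print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))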

  6. Microscale Concentration Measurements Using Laser Light Scattering Methods

    NASA Technical Reports Server (NTRS)

    Niederhaus, Charles; Miller, Fletcher

    2004-01-01

    The development of lab-on-a-chip devices for microscale biochemical assays has led to the need for microscale concentration measurements of specific analytes. While fluorescence methods are the current choice, this approach requires developing fluorophore-tagged conjugates for each analyte of interest. In addition, fluorescent imaging is a volume-based method and can become limiting as smaller detection regions are required.

  7. Simplified power control method for cellular mobile communication

    NASA Astrophysics Data System (ADS)

    Leung, Y. W.

    1994-04-01

    The centralized power control (CPC) method measures the gain of the communication links between every mobile and every base station in the cochannel cells and determines the optimal transmitter powers that maximize the minimum carrier-to-interference ratio. The authors propose a simplified power control method that has nearly the same performance as the CPC method but involves much less measurement overhead.
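
    For the noise-free case, the max-min carrier-to-interference (C/I) balancing that CPC targets has a well-known eigenvalue formulation: the best achievable balanced C/I is the reciprocal of the Perron eigenvalue of the normalized interference-gain matrix, and the optimal powers form the corresponding eigenvector. The sketch below illustrates that classical formulation on hypothetical gains; it is not the authors' simplified scheme:

      import numpy as np

      def balance_cir(G):
          """Noise-free max-min C/I balancing via the classical eigenvalue formulation.
          G[i, j] is the link gain from transmitter j to receiver i; the desired
          link of receiver i is transmitter i. Returns the balanced C/I and powers."""
          B = G / np.diag(G)[:, None]          # normalize rows by desired-link gains
          np.fill_diagonal(B, 0.0)             # keep only interference terms
          eigvals, eigvecs = np.linalg.eig(B)
          k = np.argmax(eigvals.real)          # Perron-Frobenius eigenvalue/eigenvector
          p = np.abs(eigvecs[:, k].real)
          return 1.0 / eigvals[k].real, p / p.max()

      # Hypothetical 3-link example with strong desired links and weak cross gains.
      rng = np.random.default_rng(0)
      G = rng.uniform(0.01, 0.2, size=(3, 3)) + np.diag([5.0, 4.0, 6.0])
      cir, powers = balance_cir(G)
      print("balanced C/I:", round(cir, 1), "relative powers:", powers.round(3))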

  8. A New Method for Non-destructive Measurement of Biomass, Growth Rates, Vertical Biomass Distribution and Dry Matter Content Based on Digital Image Analysis

    PubMed Central

    Tackenberg, Oliver

    2007-01-01

    Background and Aims Biomass is an important trait in functional ecology and growth analysis. The typical methods for measuring biomass are destructive. Thus, they do not allow the development of individual plants to be followed and they require many individuals to be cultivated for repeated measurements. Non-destructive methods do not have these limitations. Here, a non-destructive method based on digital image analysis is presented, addressing not only above-ground fresh biomass (FBM) and oven-dried biomass (DBM), but also vertical biomass distribution as well as dry matter content (DMC) and growth rates. Methods Scaled digital images of the plant silhouettes were taken for 582 individuals of 27 grass species (Poaceae). Above-ground biomass and DMC were measured using destructive methods. With the image analysis software Zeiss KS 300, the projected area and the proportion of greenish pixels were calculated, and generalized linear models (GLMs) were developed with destructively measured parameters as dependent variables and parameters derived from image analysis as independent variables. A bootstrap analysis was performed to assess the number of individuals required for re-calibration of the models. Key Results The results of the developed models showed no systematic errors compared with traditionally measured values and explained most of their variance (R2 ≥ 0.85 for all models). The presented models can be directly applied to herbaceous grasses without further calibration. Applying the models to other growth forms might require a re-calibration, which can be based on only 10–20 individuals for FBM or DMC and on 40–50 individuals for DBM. Conclusions The methods presented are time- and cost-effective compared with traditional methods, especially if development or growth rates are to be measured repeatedly. Hence, they offer an alternative way of determining biomass, especially as they are non-destructive and address not only FBM and DBM, but also vertical biomass distribution and DMC. PMID:17353204
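
    As a much-simplified stand-in for the calibration idea (an ordinary least-squares fit on log-transformed values rather than the GLMs used in the paper, and with synthetic data in place of the 582 measured individuals), predicting fresh biomass from projected silhouette area could look like:

      import numpy as np

      rng = np.random.default_rng(42)

      # Synthetic "calibration" data: projected silhouette area (cm^2) from the
      # images versus destructively measured fresh biomass (g).
      area_cm2 = rng.uniform(5.0, 400.0, size=60)
      fbm_g = 0.08 * area_cm2**1.1 * rng.lognormal(0.0, 0.15, size=60)

      # Fit log(FBM) = a * log(area) + b by ordinary least squares.
      slope, intercept = np.polyfit(np.log(area_cm2), np.log(fbm_g), 1)

      def predict_fbm(area):
          """Non-destructive fresh-biomass prediction (g) from projected area (cm^2)."""
          return np.exp(intercept) * area**slope

      print(f"FBM ~ {np.exp(intercept):.3f} * area^{slope:.2f}")
      print("predicted FBM for 100 cm^2:", round(float(predict_fbm(100.0)), 1), "g")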

  9. Dynamic Rod Worth Measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, Y.A.; Chapman, D.M.; Hill, D.J.

    2000-12-15

    The dynamic rod worth measurement (DRWM) technique is a method of quickly validating the predicted bank worth of control rods and shutdown rods. The DRWM analytic method is based on three-dimensional, space-time kinetic simulations of the rapid rod movements. Its measurement data is processed with an advanced digital reactivity computer. DRWM has been used as the method of bank worth validation at numerous plant startups with excellent results. The process and methodology of DRWM are described, and the measurement results of using DRWM are presented.

  10. Measuring the Return on Information Technology: A Knowledge-Based Approach for Revenue Allocation at the Process and Firm Level

    DTIC Science & Technology

    2005-07-01

    approach for measuring the return on Information Technology (IT) investments. A review of existing methods suggests the difficulty in adequately...measuring the returns of IT at various levels of analysis (e.g., firm or process level). To address this issue, this study aims to develop a method for...view (KBV), this paper proposes an analytic method for measuring the historical revenue and cost of IT investments by estimating the amount of

  11. DBH Prediction Using Allometry Described by Bivariate Copula Distribution

    NASA Astrophysics Data System (ADS)

    Xu, Q.; Hou, Z.; Li, B.; Greenberg, J. A.

    2017-12-01

    Forest biomass mapping based on single tree detection from airborne laser scanning (ALS) usually depends on an allometric equation that relates diameter at breast height (DBH) with per-tree aboveground biomass. Because ALS cannot directly measure DBH, DBH must be predicted from other ALS-measured tree-level structural parameters. A copula-based method is proposed in this study to predict DBH from the ALS-measured tree height and crown diameter using a dataset measured in the Lassen National Forest in California. Instead of seeking an explicit mathematical equation for the underlying relationship between DBH and the other structural parameters, the copula-based prediction method utilizes the dependency between the cumulative distributions of these variables and solves for DBH under the assumption that, for a single tree, the cumulative probability of each structural parameter is identical. Results show that, compared with the benchmark least-squares linear regression and the k-MSN imputation, the copula-based method achieves better DBH accuracy for the Lassen National Forest. To assess the generalization of the proposed method, prediction uncertainty is quantified using bootstrapping techniques that examine the variability of the RMSE of the predicted DBH. We find that the copula distribution is reliable in describing the allometric relationship between tree-level structural parameters and that it contributes to the reduction of prediction uncertainty.
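
    The stated assumption, that a single tree occupies the same cumulative-probability position in each structural parameter's distribution, amounts to quantile mapping between marginals. A minimal empirical-CDF sketch of that idea follows (synthetic data; the paper itself fits a bivariate copula and also uses crown diameter):

      import numpy as np

      rng = np.random.default_rng(7)

      # Synthetic "training" trees with correlated height (m) and DBH (cm).
      height = rng.gamma(shape=9.0, scale=2.5, size=500)
      dbh = 1.8 * height * rng.lognormal(0.0, 0.15, size=500)

      def predict_dbh(new_height):
          """Map a height through its empirical CDF, then invert the DBH CDF."""
          p = np.searchsorted(np.sort(height), new_height) / height.size
          return np.quantile(dbh, np.clip(p, 0.0, 1.0))

      print(predict_dbh(np.array([15.0, 25.0, 35.0])).round(1))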

  12. Measurement of the Young's modulus of thin or flexible specimen with digital-image correlation method

    NASA Astrophysics Data System (ADS)

    Xu, Lianyun; Hou, Zhende; Qin, Yuwen

    2002-05-01

    Because some composite materials, thin-film materials, and biomaterials are very thin, and some of them are flexible, the classical methods for measuring their Young's moduli by mounting extensometers on specimens are not applicable. A bi-image method based on image correlation for measuring Young's moduli is developed in this paper. The measurement precision achieved is one order of magnitude higher than that of general digital image correlation (the single-image method). In this way, the Young's modulus of an SS301 stainless steel thin tape with a thickness of 0.067 mm is measured, and the moduli of polyester fiber films, a kind of flexible sheet with a thickness of 0.25 mm, are also measured.

  13. Measuring carbon in forests: current status and future challenges.

    PubMed

    Brown, Sandra

    2002-01-01

    Accurate and precise measurement of carbon in forests is gaining global attention as countries seek to comply with agreements under the UN Framework Convention on Climate Change. Established methods for measuring carbon in forests exist and are best based on permanent sample plots laid out in a statistically sound design. Measurements on trees in these plots can be readily converted to aboveground biomass using either biomass expansion factors or allometric regression equations. A compilation of existing root biomass data for upland forests of the world generated a significant regression equation that can be used to predict root biomass from aboveground biomass alone. Methods for measuring coarse dead wood have been tested in many forest types, but they could be improved if a non-destructive tool for measuring the density of dead wood were developed. Future measurements of carbon storage in forests may rely more on remote sensing data, and new remote data collection technologies are in development.

  14. Model-based sensor-less wavefront aberration correction in optical coherence tomography.

    PubMed

    Verstraete, Hans R G W; Wahls, Sander; Kalkman, Jeroen; Verhaegen, Michel

    2015-12-15

    Several sensor-less wavefront aberration correction methods that correct nonlinear wavefront aberrations by maximizing the optical coherence tomography (OCT) signal are tested on an OCT setup. A conventional coordinate search method is compared to two model-based optimization methods. The first model-based method takes advantage of the well-known optimization algorithm (NEWUOA) and utilizes a quadratic model. The second model-based method (DONE) is new and utilizes a random multidimensional Fourier-basis expansion. The model-based algorithms achieve lower wavefront errors with up to ten times fewer measurements. Furthermore, the newly proposed DONE method outperforms the NEWUOA method significantly. The DONE algorithm is tested on OCT images and shows a significantly improved image quality.

  15. On epicardial potential reconstruction using regularization schemes with the L1-norm data term.

    PubMed

    Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart

    2011-01-07

    The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing an L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noise was considered in the BSP data. The proposed L1-norm data-term-based regularization schemes (with L1 and L2 penalty terms on the normal derivative constraint, labelled L1TV and L1L2) were compared with the L2-norm data terms (Tikhonov with zero-order and normal derivative constraints, labelled ZOT and FOT, and the total variation method labelled L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have lower relative errors. However, when larger noise occurred at some electrodes (for example, when signal was lost during measurement), the L1TV and L1L2 methods obtained more accurate EPs in a robust manner. Therefore, the L1-norm data-term-based solutions are generally less perturbed by measurement noise, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
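
    The iteratively reweighted norm idea used to handle the L1 data term can be sketched in a few lines: each iteration solves a weighted least-squares problem whose weights down-weight large residuals, so grossly corrupted electrodes (e.g., lost signals) have little influence. The toy problem below, with random matrices and an identity penalty standing in for the normal-derivative constraint, only illustrates the L1-data-term mechanism and is not the authors' EP reconstruction code:

      import numpy as np

      def irls_l1_data(A, b, L, lam, iters=50, eps=1e-6):
          """Minimize ||A x - b||_1 + lam * ||L x||_2^2 via iteratively reweighted
          least squares: weights w_i = 1 / max(|r_i|, eps) on the current residual."""
          x = np.linalg.lstsq(A, b, rcond=None)[0]              # plain L2 start
          for _ in range(iters):
              w = 1.0 / np.maximum(np.abs(A @ x - b), eps)
              AW = A * w[:, None]                               # row-weighted A
              x = np.linalg.solve(A.T @ AW + lam * (L.T @ L), AW.T @ b)
          return x

      # Toy problem: a few grossly corrupted "electrodes" (outliers in b).
      rng = np.random.default_rng(0)
      m, n = 80, 30
      A = rng.standard_normal((m, n))
      x_true = rng.standard_normal(n)
      b = A @ x_true + 0.01 * rng.standard_normal(m)
      b[:5] += 10.0                                             # e.g., signal lost/saturated

      x_l2 = np.linalg.solve(A.T @ A + 1e-2 * np.eye(n), A.T @ b)
      x_l1 = irls_l1_data(A, b, np.eye(n), lam=1e-2)
      print("L2-data error:", np.linalg.norm(x_l2 - x_true))
      print("L1-data error:", np.linalg.norm(x_l1 - x_true))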

  16. Measuring the Index of Refraction.

    ERIC Educational Resources Information Center

    Phelps, F. M., III; Jacobson, B. S.

    1980-01-01

    Presents two methods for measuring the index of refraction of glass or lucite. These two methods, used in the freshman laboratory, are based on the fact that a ray of light inside a block will be refracted parallel to the surface. (HM)

  17. Impedance-estimation methods, modeling methods, articles of manufacture, impedance-modeling devices, and estimated-impedance monitoring systems

    DOEpatents

    Richardson, John G [Idaho Falls, ID

    2009-11-17

    An impedance estimation method includes measuring three or more impedances of an object having a periphery using three or more probes coupled to the periphery. The three or more impedance measurements are made at a first frequency. Three or more additional impedance measurements of the object are made using the three or more probes. The three or more additional impedance measurements are made at a second frequency different from the first frequency. An impedance of the object at a point within the periphery is estimated based on the impedance measurements and the additional impedance measurements.

  18. Robust Strategy for Rocket Engine Health Monitoring

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    2001-01-01

    Monitoring the health of rocket engine systems is essentially a two-phase process. The acquisition phase involves sensing physical conditions at selected locations, converting physical inputs to electrical signals, conditioning the signals as appropriate to establish scale or filter interference, and recording results in a form that is easy to interpret. The inference phase involves analysis of results from the acquisition phase, comparison of analysis results to established health measures, and assessment of health indications. A variety of analytical tools may be employed in the inference phase of health monitoring. These tools can be separated into three broad categories: statistical, rule based, and model based. Statistical methods can provide excellent comparative measures of engine operating health. They require well-characterized data from an ensemble of "typical" engines, or "golden" data from a specific test assumed to define the operating norm in order to establish reliable comparative measures. Statistical methods are generally suitable for real-time health monitoring because they do not deal with the physical complexities of engine operation. The utility of statistical methods in rocket engine health monitoring is hindered by practical limits on the quantity and quality of available data. This is due to the difficulty and high cost of data acquisition, the limited number of available test engines, and the problem of simulating flight conditions in ground test facilities. In addition, statistical methods incur a penalty for disregarding flow complexity and are therefore limited in their ability to define performance shift causality. Rule based methods infer the health state of the engine system based on comparison of individual measurements or combinations of measurements with defined health norms or rules. This does not mean that rule based methods are necessarily simple. Although binary yes-no health assessment can sometimes be established by relatively simple rules, the causality assignment needed for refined health monitoring often requires an exceptionally complex rule base involving complicated logical maps. Structuring the rule system to be clear and unambiguous can be difficult, and the expert input required to maintain a large logic network and associated rule base can be prohibitive.

  19. Bi-color near infrared thermoreflectometry: a method for true temperature field measurement.

    PubMed

    Sentenac, Thierry; Gilblas, Rémi; Hernandez, Daniel; Le Maoult, Yannick

    2012-12-01

    In a context of radiative temperature field measurement, this paper deals with an innovative method, called bicolor near infrared thermoreflectometry, for the measurement of true temperature fields without prior knowledge of the emissivity field of an opaque material. This method is achieved by a simultaneous measurement, in the near infrared spectral band, of the radiance temperature fields and of the emissivity fields measured indirectly by reflectometry. The theoretical framework of the method is introduced and the principle of the measurements at two wavelengths is detailed. The crucial features of the indirect measurement of emissivity are the measurement of bidirectional reflectivities in a single direction and the introduction of an unknown variable, called the "diffusion factor." Radiance temperature and bidirectional reflectivities are then merged into a bichromatic system based on Kirchhoff's laws. The assumption of the system, based on the invariance of the diffusion factor for two near wavelengths, and the value of the chosen wavelengths, are then discussed in relation to a database of several material properties. A thermoreflectometer prototype was developed, dimensioned, and evaluated. Experiments were carried out to outline its trueness in challenging cases. First, experiments were performed on a metallic sample with a high emissivity value. The bidirectional reflectivity was then measured from low signals. The results on erbium oxide demonstrate the power of the method with materials with high emissivity variations in near infrared spectral band.

  20. A Noninvasive Method to Study Regulation of Extracellular Fluid Volume in Rats Using Nuclear Magnetic Resonance

    EPA Science Inventory

    Time-domain nuclear magnetic resonance (TD-NMR)-based measurement of body composition of rodents is an effective method to quickly and repeatedly measure proportions of fat, lean, and fluid without anesthesia. TD-NMR provides a measure of free water in a living animal, termed % f...

  1. Validating Accelerometry and Skinfold Measures in Youth with Down Syndrome

    ERIC Educational Resources Information Center

    Esposito, Phil Michael

    2012-01-01

    Current methods for measuring quantity and intensity of physical activity based on accelerometer output have been studied and validated in youth. These methods have been applied to youth with Down syndrome (DS) with no empirical research done to validate these measures. Similarly, individuals with DS have unique body proportions not represented by…

  2. Two-Method Planned Missing Designs for Longitudinal Research

    ERIC Educational Resources Information Center

    Garnier-Villarreal, Mauricio; Rhemtulla, Mijke; Little, Todd D.

    2014-01-01

    We examine longitudinal extensions of the two-method measurement design, which uses planned missingness to optimize cost-efficiency and validity of hard-to-measure constructs. These designs use a combination of two measures: a "gold standard" that is highly valid but expensive to administer, and an inexpensive (e.g., survey-based)…

  3. Information loss method to measure node similarity in networks

    NASA Astrophysics Data System (ADS)

    Li, Yongli; Luo, Peng; Wu, Chong

    2014-09-01

    Similarity measurement for network nodes has received increasing attention in the field of statistical physics. In this paper, we propose an entropy-based information loss method to measure node similarity. The model is built on the idea that treating two more-similar nodes as identical causes less information loss. The proposed method has relatively low algorithmic complexity, making it less time-consuming and more efficient for dealing with large-scale real-world networks. To clarify its applicability and accuracy, the new approach was compared with several selected approaches on two artificial examples and on synthetic networks. Furthermore, the proposed method is successfully applied to predict network evolution and to predict unknown nodes' attributes in two application examples.

  4. A Weighted Multipath Measurement Based on Gene Ontology for Estimating Gene Products Similarity

    PubMed Central

    Liu, Lizhen; Dai, Xuemin; Song, Wei; Lu, Jingli

    2014-01-01

    Many different methods have been proposed for calculating the semantic similarity of term pairs based on the Gene Ontology (GO). Most existing methods are based on information content (IC), and IC-based methods are used more commonly than those based on the structure of the GO. However, most IC-based methods not only fail to handle identical annotations but also show a strong bias toward well-annotated proteins. We propose a new method called weighted multipath measurement (WMM) for estimating the semantic similarity of gene products based on the structure of the GO. We consider not only the contribution of every path between two GO terms but also the depth of their lowest common ancestors, and we assign different weights to different kinds of edges in the GO graph. The similarity values calculated by WMM can be reused because they depend only on the characteristics of the GO terms. Experimental results showed that the similarity values obtained by WMM have higher accuracy. We compared the performance of WMM with that of other methods using GO data and gene annotation datasets for yeast and humans downloaded from the GO database. We found that WMM is better suited for prediction of gene function than most existing IC-based methods and that it can distinguish proteins with identical annotations (two proteins annotated with the same terms) from each other. PMID:25229994

  5. VALIDATION OF A METHOD FOR ESTIMATING LONG-TERM EXPOSURES BASED ON SHORT-TERM MEASUREMENTS

    EPA Science Inventory

    A method for estimating long-term exposures from short-term measurements is validated using data from a recent EPA study of exposure to fine particles. The method was developed a decade ago but data to validate it did not exist until recently. In this paper, data from repeated ...

  6. 40 CFR Appendix A-3 to Part 60 - Test Methods 4 through 5I

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... isokinetic sampling rates prior to a pollutant emission measurement run. The approximation method described... with a pollutant emission measurement run. When it is, calculation of percent isokinetic, pollutant emission rate, etc., for the run shall be based upon the results of the reference method or its equivalent...

  7. VALIDATION OF A METHOD FOR ESTIMATING LONG-TERM EXPOSURES BASED ON SHORT-TERM MEASUREMENTS

    EPA Science Inventory

    A method for estimating long-term exposures from short-term measurements is validated using data from a recent EPA study of exposure to fine particles. The method was developed a decade ago but long-term exposure data to validate it did not exist until recently. In this paper, ...

  8. Objective and subjective methods for quantifying training load in wheelchair basketball small-sided games.

    PubMed

    Iturricastillo, Aitor; Granados, Cristina; Los Arcos, Asier; Yanci, Javier

    2017-04-01

    The aim of the present study was to analyse the training load in wheelchair basketball small-sided games and determine the relationship between heart rate (HR)-based training load and perceived exertion (RPE)-based training load methods among small-sided games bouts. HR-based measurements of training load included Edwards' training load and Stagno's training impulses (TRIMPMOD), while RPE-based training load measurements included cardiopulmonary (session RPEres) and muscular (session RPEmus) values. Data were collected from 12 wheelchair basketball players during five consecutive weeks. The total load for the small-sided games sessions was 67.5 ± 6.7 and 55.3 ± 12.5 AU in HR-based training load (Edwards' training load and TRIMPMOD), while the RPE-based training loads were 99.3 ± 26.9 (session RPEres) and 100.8 ± 31.2 AU (session RPEmus). Bout-to-bout analysis identified greater session RPEmus in the third [P < 0.05; effect size (ES) = 0.66, moderate] and fourth bouts (P < 0.05; ES = 0.64, moderate) than in the first bout, but other measures did not differ. Mean correlations indicated a trivial and small relationship among HR-based and RPE-based training loads. It is suggested that HR-based and RPE-based training loads provide different information, but these two methods could be complementary because one method could help us to understand the limitations of the other.

  9. Objective measurement of bread crumb texture

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Coles, Graeme D.

    1995-01-01

    Evaluation of bread crumb texture plays an important role in judging bread quality. This paper discusses the application of image analysis methods to the objective measurement of the visual texture of bread crumb. The application of the Fast Fourier Transform and mathematical morphology methods was discussed by the authors in previous work, and a commercial bread texture measurement system has been developed. Based on the nature of bread crumb texture, we compare the advantages and disadvantages of those two methods and of a third method based on features derived directly from statistics of edge density in local windows of the bread image. The analysis of the various methods and the experimental results provides insight into the characteristics of the bread texture image and the interconnections between texture measurement algorithms. The usefulness of general stochastic process modelling of texture is thus revealed; it leads to more reliable and accurate evaluation of bread crumb texture. During the development of these methods, we also gained useful insights into how subjective judges form opinions about bread visual texture. These are discussed here.

  10. A method to measure internal contact angle in opaque systems by magnetic resonance imaging.

    PubMed

    Zhu, Weiqin; Tian, Ye; Gao, Xuefeng; Jiang, Lei

    2013-07-23

    Internal contact angle is an important parameter for characterizing internal wettability. However, due to the limitations of optical imaging, available methods for contact angle measurement are suitable only for transparent or open systems. For most practical situations that require contact angle measurement in opaque or enclosed systems, the traditional methods are not effective. To meet this need, a method suitable for contact angle measurement in nontransparent systems is developed by employing MRI technology. In this article, the method is demonstrated by measuring internal contact angles in opaque cylindrical tubes. The method also proves feasible for transparent situations and opaque capillary systems. Using this method, contact angles in opaque systems can be measured successfully, which is significant for understanding wetting behavior in nontransparent systems and calculating interfacial parameters in enclosed systems.

  11. Predicting survival in heart failure case and control subjects by use of fully automated methods for deriving nonlinear and conventional indices of heart rate dynamics

    NASA Technical Reports Server (NTRS)

    Ho, K. K.; Moody, G. B.; Peng, C. K.; Mietus, J. E.; Larson, M. G.; Levy, D.; Goldberger, A. L.

    1997-01-01

    BACKGROUND: Despite much recent interest in quantification of heart rate variability (HRV), the prognostic value of conventional measures of HRV and of newer indices based on nonlinear dynamics is not universally accepted. METHODS AND RESULTS: We have designed algorithms for analyzing ambulatory ECG recordings and measuring HRV without human intervention, using robust methods for obtaining time-domain measures (mean and SD of heart rate), frequency-domain measures (power in the bands of 0.001 to 0.01 Hz [VLF], 0.01 to 0.15 Hz [LF], and 0.15 to 0.5 Hz [HF] and total spectral power [TP] over all three of these bands), and measures based on nonlinear dynamics (approximate entropy [ApEn], a measure of complexity, and detrended fluctuation analysis [DFA], a measure of long-term correlations). The study population consisted of chronic congestive heart failure (CHF) case patients and sex- and age-matched control subjects in the Framingham Heart Study. After exclusion of technically inadequate studies and those with atrial fibrillation, we used these algorithms to study HRV in 2-hour ambulatory ECG recordings of 69 participants (mean age, 71.7+/-8.1 years). By use of separate Cox proportional-hazards models, the conventional measures SD (P<.01), LF (P<.01), VLF (P<.05), and TP (P<.01) and the nonlinear measure DFA (P<.05) were predictors of survival over a mean follow-up period of 1.9 years; other measures, including ApEn (P>.3), were not. In multivariable models, DFA was of borderline predictive significance (P=.06) after adjustment for the diagnosis of CHF and SD. CONCLUSIONS: These results demonstrate that HRV analysis of ambulatory ECG recordings based on fully automated methods can have prognostic value in a population-based study and that nonlinear HRV indices may contribute prognostic value to complement traditional HRV measures.
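
    Of the indices listed, detrended fluctuation analysis is the most algorithmically involved. A compact sketch of the standard DFA scaling-exponent computation is shown below; it is not the authors' pipeline and is run on synthetic data rather than RR intervals:

      import numpy as np

      def dfa_alpha(x, scales=None):
          """Detrended fluctuation analysis scaling exponent of a 1-D series."""
          x = np.asarray(x, dtype=float)
          y = np.cumsum(x - x.mean())                      # integrated series
          if scales is None:
              scales = np.unique(np.logspace(2, 6, 20, base=2).astype(int))
          flucts = []
          for s in scales:
              n_seg = len(y) // s
              segs = y[: n_seg * s].reshape(n_seg, s)
              t = np.arange(s)
              f2 = []
              for seg in segs:
                  coef = np.polyfit(t, seg, 1)             # local linear trend
                  f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
              flucts.append(np.sqrt(np.mean(f2)))
          alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
          return alpha

      # Sanity check: uncorrelated white noise should give alpha close to 0.5.
      rng = np.random.default_rng(3)
      print("alpha (white noise):", round(dfa_alpha(rng.standard_normal(4000)), 2))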

  12. 3D geometric phase analysis and its application in 3D microscopic morphology measurement

    NASA Astrophysics Data System (ADS)

    Zhu, Ronghua; Shi, Wenxiong; Cao, Quankun; Liu, Zhanwei; Guo, Baoqiao; Xie, Huimin

    2018-04-01

    Although three-dimensional (3D) morphology measurement has been widely applied on the macro-scale, there is still a lack of 3D measurement technology on the microscopic scale. In this paper, a microscopic 3D measurement technique based on the 3D-geometric phase analysis (GPA) method is proposed. In this method, with machine vision and phase matching, the traditional GPA method is extended to three dimensions. Using this method, 3D deformation measurement on the micro-scale can be realized using a light microscope. Simulation experiments were conducted in this study, and the results demonstrate that the proposed method has a good anti-noise ability. In addition, the 3D morphology of the necking zone in a tensile specimen was measured, and the results demonstrate that this method is feasible.

  13. Microwave absorption properties of gold nanoparticle doped polymers

    NASA Astrophysics Data System (ADS)

    Jiang, C.; Ouattara, L.; Ingrosso, C.; Curri, M. L.; Krozer, V.; Boisen, A.; Jakobsen, M. H.; Johansen, T. K.

    2011-03-01

    This paper presents a method for characterizing the microwave absorption properties of gold-nanoparticle-doped polymers. The method is based on on-wafer measurements at frequencies from 0.5 GHz to 20 GHz. The on-wafer measurement method makes it possible to characterize the electromagnetic (EM) properties of small-volume samples. The epoxy-based SU8 polymer and SU8 doped with gold nanoparticles are chosen as the samples under test. Two types of microwave test devices are designed for exciting the samples through electrical coupling and magnetic coupling, respectively. Measurement results demonstrate that the nanocomposites absorb a certain amount of microwave energy due to the gold nanoparticles; higher nanoparticle concentrations result in a more significant absorption effect.

  14. Simple method based on intensity measurements for characterization of aberrations from micro-optical components.

    PubMed

    Perrin, Stephane; Baranski, Maciej; Froehly, Luc; Albero, Jorge; Passilly, Nicolas; Gorecki, Christophe

    2015-11-01

    We report a simple method, based on intensity measurements, for the characterization of the wavefront and aberrations produced by micro-optical focusing elements. This method employs the setup presented earlier in [Opt. Express 22, 13202 (2014)] for measurements of the 3D point spread function, on which a basic phase-retrieval algorithm is applied. This combination allows for retrieval of the wavefront generated by the micro-optical element and, in addition, quantification of the optical aberrations through the wavefront decomposition with Zernike polynomials. The optical setup requires only an in-motion imaging system. The technique, adapted for the optimization of micro-optical component fabrication, is demonstrated by characterizing a planoconvex microlens.

  15. On accurate determination of contact angle

    NASA Technical Reports Server (NTRS)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  16. The difficulty of measuring the absorption of scattered sunlight by H2O and CO2 in volcanic plumes: A comment on Pering et al. “A novel and inexpensive method for measuring volcanic plume water fluxes at high temporal resolution,” Remote Sens. 2017, 9, 146

    USGS Publications Warehouse

    Kern, Christoph

    2017-01-01

    In their recent study, Pering et al. (2017) presented a novel method for measuring volcanic water vapor fluxes. Their method is based on imaging volcanic gas and aerosol plumes using a camera sensitive to the near-infrared (NIR) absorption of water vapor. The imaging data are empirically calibrated by comparison with in situ water measurements made within the plumes. Though the presented method may give reasonable results over short time scales, the authors fail to recognize the sensitivity of the technique to light scattering on aerosols within the plume. In fact, the signals measured by Pering et al. are not related to the absorption of NIR radiation by water vapor within the plume. Instead, the measured signals are most likely caused by a change in the effective light path of the detected radiation through the atmospheric background water vapor column. Therefore, their method is actually based on establishing an empirical relationship between in-plume scattering efficiency and plume water content. Since this relationship is sensitive to plume aerosol abundance and numerous environmental factors, the method will only yield accurate results if it is calibrated very frequently using other measurement techniques.

  17. Novel imaging analysis system to measure the spatial dimension of engineered tissue construct.

    PubMed

    Choi, Kyoung-Hwan; Yoo, Byung-Su; Park, So Ra; Choi, Byung Hyune; Min, Byoung-Hyun

    2010-02-01

    The measurement of the spatial dimensions of tissue-engineered constructs is very important for their clinical applications. In this study, a novel method to measure the volume of tissue-engineered constructs was developed using iterative mathematical computations. The method measures and analyzes three-dimensional (3D) parameters of a construct to estimate its actual volume using a sequence of software-based mathematical algorithms. The algorithm is composed of two stages: shape extraction and determination of volume. The shape extraction uses 3D images of a construct (length, width, and thickness) captured by a high-quality camera with a charge-coupled device. The surface of the 3D images is then divided into fine sections; the area of each section is measured and combined to obtain the total surface area. The 3D volume of the target construct is then obtained mathematically from its total surface area and thickness. The accuracy of the measurement method was verified by comparing the results with those obtained from the hydrostatic weighing method (Korea Research Institute of Standards and Science [KRISS], Korea). The mean difference in volume between the two methods was 0.0313 +/- 0.0003% (n = 5, P = 0.523), with no statistically significant difference. In conclusion, our image-based spatial measurement system is a reliable and easy method for obtaining an accurate 3D volume of a tissue-engineered construct.

  18. Diameter measurement of optical nanofiber based on high-order Bragg reflections using a ruled grating.

    PubMed

    Zhu, Ming; Wang, Yao-Ting; Sun, Yi-Zhi; Zhang, Lijian; Ding, Wei

    2018-02-01

    A convenient method using a commercially available ruled grating for precise and overall diameter measurement of optical nanofibers (ONFs) is presented. We form a composite Bragg reflector with a micron-scale period by dissolving the aluminum coating, slicing the grating along the ruling lines, and mounting it on an ONF. The resonant wavelengths of the high-order Bragg reflections depend on the fiber diameter, enabling nondestructive measurement of the ONF diameter profile. This method provides an easy and economical diagnostic tool for a wide variety of ONF-based applications.

  19. Addressing Phase Errors in Fat-Water Imaging Using a Mixed Magnitude/Complex Fitting Method

    PubMed Central

    Hernando, D.; Hines, C. D. G.; Yu, H.; Reeder, S.B.

    2012-01-01

    Accurate, noninvasive measurements of liver fat content are needed for the early diagnosis and quantitative staging of nonalcoholic fatty liver disease. Chemical shift-based fat quantification methods acquire images at multiple echo times using a multiecho spoiled gradient echo sequence, and provide fat fraction measurements through postprocessing. However, phase errors, such as those caused by eddy currents, can adversely affect fat quantification. These phase errors are typically most significant at the first echo of the echo train, and introduce bias in complex-based fat quantification techniques. These errors can be overcome using a magnitude-based technique (where the phase of all echoes is discarded), but at the cost of significantly degraded signal-to-noise ratio, particularly for certain choices of echo time combinations. In this work, we develop a reconstruction method that overcomes these phase errors without the signal-to-noise ratio penalty incurred by magnitude fitting. This method discards the phase of the first echo (which is often corrupted) while maintaining the phase of the remaining echoes (where phase is unaltered). We test the proposed method on 104 patient liver datasets (from 52 patients, each scanned twice), where the fat fraction measurements are compared to coregistered spectroscopy measurements. We demonstrate that mixed fitting is able to provide accurate fat fraction measurements with high signal-to-noise ratio and low bias over a wide choice of echo combinations. PMID:21713978

  20. Subcopula-based measure of asymmetric association for contingency tables.

    PubMed

    Wei, Zheng; Kim, Daeyoung

    2017-10-30

    For the analysis of a two-way contingency table, a new asymmetric association measure is developed. The proposed method uses the subcopula-based regression between the discrete variables to measure the asymmetric predictive powers of the variables of interest. Unlike the existing measures of asymmetric association, the subcopula-based measure is insensitive to the number of categories in a variable, and thus, the magnitude of the proposed measure can be interpreted as the degree of asymmetric association in the contingency table. The theoretical properties of the proposed subcopula-based asymmetric association measure are investigated. We illustrate the performance and advantages of the proposed measure using simulation studies and real data examples. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Physically based method for measuring suspended-sediment concentration and grain size using multi-frequency arrays of acoustic-doppler profilers

    USGS Publications Warehouse

    Topping, David J.; Wright, Scott A.; Griffiths, Ronald; Dean, David

    2014-01-01

    As the result of a 12-year program of sediment-transport research and field testing on the Colorado River (6 stations in UT and AZ), Yampa River (2 stations in CO), Little Snake River (1 station in CO), Green River (1 station in CO and 2 stations in UT), and Rio Grande (2 stations in TX), we have developed a physically based method for measuring suspended-sediment concentration and grain size at 15-minute intervals using multi-frequency arrays of acoustic-Doppler profilers. This multi-frequency method is able to achieve much higher accuracies than single-frequency acoustic methods because it allows removal of the influence of changes in grain size on acoustic backscatter. The method proceeds as follows. (1) Acoustic attenuation at each frequency is related to the concentration of silt and clay with a known grain-size distribution in a river cross section using physical samples and theory. (2) The combination of acoustic backscatter and attenuation at each frequency is uniquely related to the concentration of sand (with a known reference grain-size distribution) and the concentration of silt and clay (with a known reference grain-size distribution) in a river cross section using physical samples and theory. (3) Comparison of the suspended-sand concentrations measured at each frequency using this approach then allows theory-based calculation of the median grain size of the suspended sand and final correction of the suspended-sand concentration to compensate for the influence of changing grain size on backscatter. Although this method of measuring suspended-sediment concentration is somewhat less accurate than using conventional samplers in either the EDI or EWI methods, it is much more accurate than estimating suspended-sediment concentrations using calibrated pump measurements or single-frequency acoustics. Though the EDI and EWI methods provide the most accurate measurements of suspended-sediment concentration, these measurements are labor-intensive, expensive, and may be impossible to collect at time intervals shorter than those over which discharge-independent changes in suspended-sediment concentration occur (< hours). Therefore, our physically based multi-frequency acoustic method shows promise as a cost-effective, valid approach for calculating suspended-sediment loads in rivers at a level of accuracy sufficient for many scientific and management purposes.

  2. Digital Moiré based transient interferometry and its application in optical surface measurement

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Tan, Yifeng; Wang, Shaopu; Hu, Yao

    2017-10-01

    Digital Moiré based transient interferometry (DMTI) is an effective non-contact testing method for optical surfaces. In a DMTI system, only one frame of the real interferogram is captured experimentally for transient measurement of the surface under test (SUT). When combined with partial compensation interferometry (PCI), DMTI is especially appropriate for the measurement of aspheres with large apertures, large asphericity, or different surface parameters. A residual wavefront is allowed in PCI, so the same partial compensator can be applied to the detection of multiple SUTs. Excessive residual wavefront aberration, however, results in spectrum aliasing and limits the dynamic range of DMTI. To solve this problem, a method based on the wavelet transform is proposed to extract the phase from fringe patterns with spectrum aliasing. Simulation results demonstrate the validity of this method. The dynamic range of the digital Moiré technique is effectively expanded, which makes DMTI promising for surface figure error measurement in the intelligent fabrication of aspheric surfaces.

  3. Evaluation of structural and thermophysical effects on the measurement accuracy of deep body thermometers based on dual-heat-flux method.

    PubMed

    Huang, Ming; Tamura, Toshiyo; Chen, Wenxi; Kanaya, Shigehiko

    2015-01-01

    To help pave a path toward practical continuous, unconstrained, noninvasive deep body temperature measurement, this study evaluates the structural and thermophysical effects on the measurement accuracy of the dual-heat-flux method (DHFM). Considering the thermometer's height, radius, conductivity, density, and specific heat as variables affecting the accuracy of DHFM measurement, we investigated the relationship between those variables and accuracy using 3-D models based on the finite element method. The results of our simulation study show that accuracy is proportional to the radius but inversely proportional to the thickness of the thermometer when the radius is less than 30.0 mm, and is also inversely proportional to the heat conductivity of the insulator inside the thermometer. The insights from this study help build a guideline for the design, fabrication, and optimization of DHFM-based thermometers, as well as their practical use. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. ACCELERATORS: Beam based alignment of the SSRF storage ring

    NASA Astrophysics Data System (ADS)

    Zhang, Man-Zhou; Li, Hao-Hu; Jiang, Bo-Cheng; Liu, Gui-Min; Li, De-Ming

    2009-04-01

    There are 140 beam position monitors (BPMs) in the Shanghai Synchrotron Radiation Facility (SSRF) storage ring used for measuring the closed orbit. As the BPM pickup electrodes are assembled directly on the vacuum chamber, it is important to calibrate the electrical center offset of each BPM with respect to the adjacent quadrupole magnetic center. A beam based alignment (BBA) method, which varies the strength of an individual quadrupole magnet and observes the effect on the orbit, is used to measure the BPM offsets in both the horizontal and vertical planes. It is a completely automated technique with various data processing methods. Several parameters, such as the strength changes of the correctors and the quadrupoles, must be chosen carefully in real measurements. After several rounds of BBA measurement and closed orbit correction, these offsets were determined to an accuracy better than 10 μm. In this paper we present the method of beam based calibration of BPMs, the experimental results for the SSRF storage ring, and the error analysis.

  5. Discovering Central Practitioners in a Medical Discussion Forum Using Semantic Web Analytics.

    PubMed

    Rajabi, Enayat; Abidi, Syed Sibte Raza

    2017-01-01

    The aim of this paper is to investigate semantic web based methods to enrich and transform a medical discussion forum in order to perform semantics-driven social network analysis. We use the centrality measures as well as semantic similarity metrics to identify the most influential practitioners within a discussion forum. The centrality results of our approach are in line with centrality measures produced by traditional SNA methods, thus validating the applicability of semantic web based methods for SNA, particularly for analyzing social networks for specialized discussion forums.
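
    For the centrality step alone, a tiny illustration with NetworkX on a hypothetical practitioner interaction graph might read as follows; the semantic-web enrichment and similarity metrics that the paper adds are not reproduced here, and all user names are made up:

      import networkx as nx

      # Hypothetical reply/interaction network among forum practitioners.
      edges = [
          ("dr_ahmed", "dr_liu"), ("dr_ahmed", "dr_smith"), ("dr_liu", "dr_smith"),
          ("dr_patel", "dr_ahmed"), ("dr_patel", "dr_liu"), ("dr_gomez", "dr_ahmed"),
      ]
      G = nx.Graph(edges)

      degree = nx.degree_centrality(G)
      betweenness = nx.betweenness_centrality(G)

      # Rank practitioners by degree centrality, tie-broken by betweenness.
      ranked = sorted(G.nodes, key=lambda n: (degree[n], betweenness[n]), reverse=True)
      print("most central practitioners:", ranked[:3])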

  6. Retooling Predictive Relations for non-volatile PM by Comparison to Measurements

    NASA Astrophysics Data System (ADS)

    Vander Wal, R. L.; Abrahamson, J. P.

    2015-12-01

    Non-volatile particulate matter (nvPM) emissions from jet aircraft at cruise altitude are of particular interest for climate and atmospheric processes but are difficult to measure and are normally approximated. To provide such inventory estimates, the present approach is to use measured, ground-based values with scaling to cruise (engine operating) conditions. Several points are raised by this approach. The first is which ground-based values to use. Empirical and semi-empirical approaches, such as the revised first order approximation (FOA3) and formation-oxidation (FOX) methods, each with embedded assumptions, are available to calculate a ground-based black carbon concentration, CBC. The second is the scaling relation, which can depend upon the ratios of fuel-air equivalence, pressure, and combustor flame temperature. We are using measured ground-based values to evaluate the accuracy of present methods toward developing alternative methods for CBC, by smoke number or via a semi-empirical kinetic method, for the specific engine, the CFM56-2C, which is representative of a rich-dome style combustor and one of the most prevalent engine families in commercial use. Applying scaling relations to measured ground-based values and comparing against measurements at cruise evaluates the accuracy of the current scaling formalism. In partnership with GE Aviation, performing engine cycle deck calculations enables critical comparison between estimated or predicted thermodynamic parameters and true (engine) operational values for the CFM56-2C engine. Such specific comparisons allow differences between predictive estimates for, and measurements of, nvPM to be traced to their origin, as either divergence of input parameters or the functional form of the predictive relations. Such insights will lead to the development of new predictive tools for jet aircraft nvPM emissions. The validated relations can then be extended to alternative fuels with confidence in the operational thermodynamic values and functional form. Comparisons will then be made between these new predictive relationships and measurements of nvPM from alternative fuels using ground and cruise data, as collected during the NASA-led AAFEX and ACCESS field campaigns, respectively.

  7. Surface fractal dimension, water adsorption efficiency, and cloud nucleation activity of insoluble aerosol.

    PubMed

    Laaksonen, Ari; Malila, Jussi; Nenes, Athanasios; Hung, Hui-Ming; Chen, Jen-Ping

    2016-05-03

    Surface porosity affects the ability of a substance to adsorb gases. The surface fractal dimension D is a measure that indicates the amount that a surface fills a space, and can thereby be used to characterize the surface porosity. Here we propose a new method for determining D, based on measuring both the water vapour adsorption isotherm of a given substance, and its ability to act as a cloud condensation nucleus when introduced to humidified air in aerosol form. We show that our method agrees well with previous methods based on measurement of nitrogen adsorption. Besides proving the usefulness of the new method for general surface characterization of materials, our results show that the surface fractal dimension is an important determinant in cloud drop formation on water insoluble particles. We suggest that a closure can be obtained between experimental critical supersaturation for cloud drop activation and that calculated based on water adsorption data, if the latter is corrected using the surface fractal dimension of the insoluble cloud nucleus.

  8. Surface fractal dimension, water adsorption efficiency, and cloud nucleation activity of insoluble aerosol

    NASA Astrophysics Data System (ADS)

    Laaksonen, Ari; Malila, Jussi; Nenes, Athanasios; Hung, Hui-Ming; Chen, Jen-Ping

    2016-05-01

    Surface porosity affects the ability of a substance to adsorb gases. The surface fractal dimension D is a measure that indicates the amount that a surface fills a space, and can thereby be used to characterize the surface porosity. Here we propose a new method for determining D, based on measuring both the water vapour adsorption isotherm of a given substance, and its ability to act as a cloud condensation nucleus when introduced to humidified air in aerosol form. We show that our method agrees well with previous methods based on measurement of nitrogen adsorption. Besides proving the usefulness of the new method for general surface characterization of materials, our results show that the surface fractal dimension is an important determinant in cloud drop formation on water insoluble particles. We suggest that a closure can be obtained between experimental critical supersaturation for cloud drop activation and that calculated based on water adsorption data, if the latter is corrected using the surface fractal dimension of the insoluble cloud nucleus.

  9. Use of petroleum-based correlations and estimation methods for synthetic fuels

    NASA Technical Reports Server (NTRS)

    Antoine, A. C.

    1980-01-01

    Correlations of hydrogen content with aromatics content, heat of combustion, and smoke point are derived for some synthetic fuels prepared from oil and coal syncrudes. Comparing the results of the aromatics content with correlations derived for petroleum fuels shows that the shale-derived fuels fit the petroleum-based correlations, but the coal-derived fuels do not. The correlations derived for heat of combustion and smoke point are comparable to some found for petroleum-based correlations. Calculated values of hydrogen content and of heat of combustion are obtained for the synthetic fuels by use of ASTM estimation methods. Comparisons of the measured and calculated values show biases in the equations that exceed the critical statistics values. Comparison of the measured hydrogen content by the standard ASTM combustion method with that by a nuclear magnetic resonance (NMR) method shows a decided bias. The comparison of the calculated and measured NMR hydrogen contents shows a difference similar to that found with petroleum fuels.

  10. Comparison of NAVSTAR satellite L band ionospheric calibrations with Faraday rotation measurements

    NASA Technical Reports Server (NTRS)

    Royden, H. N.; Miller, R. B.; Buennagel, L. A.

    1984-01-01

    It is pointed out that interplanetary navigation at the Jet Propulsion Laboratory (JPL) is performed by analyzing measurements derived from the radio link between spacecraft and earth and, near the target, onboard optical measurements. For precise navigation, corrections for ionospheric effects must be applied, because the earth's ionosphere degrades the accuracy of the radiometric data. These corrections are based on ionospheric total electron content (TEC) determinations. The determinations are based on the measurement of the Faraday rotation of linearly polarized VHF signals from geostationary satellites. Problems arise in connection with the steadily declining number of satellites which are suitable for Faraday rotation measurements. For this reason, alternate methods of determining ionospheric electron content are being explored. One promising method involves the use of satellites of the NAVSTAR Global Positioning System (GPS). The results of a comparative study regarding this method are encouraging.

  11. A comparison of latent class, K-means, and K-median methods for clustering dichotomous data.

    PubMed

    Brusco, Michael J; Shireman, Emilie; Steinley, Douglas

    2017-09-01

    The problem of partitioning a collection of objects based on their measurements on a set of dichotomous variables is a well-established problem in psychological research, with applications including clinical diagnosis, educational testing, cognitive categorization, and choice analysis. Latent class analysis and K-means clustering are popular methods for partitioning objects based on dichotomous measures in the psychological literature. The K-median clustering method has recently been touted as a potentially useful tool for psychological data and might be preferable to its close neighbor, K-means, when the variable measures are dichotomous. We conducted simulation-based comparisons of the latent class, K-means, and K-median approaches for partitioning dichotomous data. Although all 3 methods proved capable of recovering cluster structure, K-median clustering yielded the best average performance, followed closely by latent class analysis. We also report results for the 3 methods within the context of an application to transitive reasoning data, in which it was found that the 3 approaches can exhibit profound differences when applied to real data. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
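
    For orientation, a minimal sketch contrasting K-means and a Lloyd-style K-median on synthetic dichotomous data is given below; the data-generating settings are illustrative assumptions and do not reproduce the study's simulation design.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic dichotomous data: two groups with different item-endorsement probabilities.
      X = np.vstack([(rng.random((100, 10)) < 0.8),
                     (rng.random((100, 10)) < 0.2)]).astype(float)

      def lloyd(X, k, center_fn, n_iter=50, seed=0):
          """Generic Lloyd iterations: center_fn=np.mean gives K-means (squared
          Euclidean assignment), center_fn=np.median gives K-median (L1 assignment)."""
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), size=k, replace=False)]
          for _ in range(n_iter):
              diff = np.abs(X[:, None, :] - centers[None, :, :])
              dist = (diff ** 2).sum(-1) if center_fn is np.mean else diff.sum(-1)
              labels = dist.argmin(axis=1)
              centers = np.array([center_fn(X[labels == j], axis=0)
                                  if np.any(labels == j) else centers[j]
                                  for j in range(k)])
          return labels

      print(np.bincount(lloyd(X, 2, np.mean)))     # K-means partition sizes
      print(np.bincount(lloyd(X, 2, np.median)))   # K-median partition sizes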

  12. An Accurate Non-Cooperative Method for Measuring Textureless Spherical Target Based on Calibrated Lasers.

    PubMed

    Wang, Fei; Dong, Hang; Chen, Yanan; Zheng, Nanning

    2016-12-09

    Strong demands for accurate non-cooperative target measurement have been arising recently for the tasks of assembling and capturing. Spherical objects are one of the most common targets in these applications. However, the performance of the traditional vision-based reconstruction method was limited for practical use when handling poorly-textured targets. In this paper, we propose a novel multi-sensor fusion system for measuring and reconstructing textureless non-cooperative spherical targets. Our system consists of four simple lasers and a visual camera. This paper presents a complete framework of estimating the geometric parameters of textureless spherical targets: (1) an approach to calibrate the extrinsic parameters between a camera and simple lasers; and (2) a method to reconstruct the 3D position of the laser spots on the target surface and achieve the refined results via an optimized scheme. The experiment results show that our proposed calibration method can obtain a fine calibration result, which is comparable to the state-of-the-art LRF-based methods, and our calibrated system can estimate the geometric parameters with high accuracy in real time.
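
    Once the 3D positions of the laser spots on the target surface are available, the sphere's geometric parameters can be estimated; a minimal algebraic least-squares sphere fit is sketched below as an illustration (the paper's calibration and optimized refinement scheme are not reproduced, and the four synthetic spots are an assumption).

      import numpy as np

      def fit_sphere(points):
          """Algebraic least-squares sphere fit returning (center, radius).

          Uses |p|^2 = 2 p.c + (r^2 - |c|^2), which is linear in c and in r^2 - |c|^2."""
          P = np.asarray(points, dtype=float)
          A = np.hstack([2.0 * P, np.ones((len(P), 1))])
          b = (P ** 2).sum(axis=1)
          w, *_ = np.linalg.lstsq(A, b, rcond=None)
          center = w[:3]
          radius = np.sqrt(w[3] + center @ center)
          return center, radius

      # Four synthetic "laser spots" on a sphere of radius 0.5 m centred at (1, 0, 2).
      true_c, true_r = np.array([1.0, 0.0, 2.0]), 0.5
      dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
      dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
      print(fit_sphere(true_c + true_r * dirs))    # ~ (array([1., 0., 2.]), 0.5)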

  13. An Accurate Non-Cooperative Method for Measuring Textureless Spherical Target Based on Calibrated Lasers

    PubMed Central

    Wang, Fei; Dong, Hang; Chen, Yanan; Zheng, Nanning

    2016-01-01

    Strong demands for accurate non-cooperative target measurement have been arising recently for the tasks of assembling and capturing. Spherical objects are one of the most common targets in these applications. However, the performance of the traditional vision-based reconstruction method was limited for practical use when handling poorly-textured targets. In this paper, we propose a novel multi-sensor fusion system for measuring and reconstructing textureless non-cooperative spherical targets. Our system consists of four simple lasers and a visual camera. This paper presents a complete framework of estimating the geometric parameters of textureless spherical targets: (1) an approach to calibrate the extrinsic parameters between a camera and simple lasers; and (2) a method to reconstruct the 3D position of the laser spots on the target surface and achieve the refined results via an optimized scheme. The experiment results show that our proposed calibration method can obtain a fine calibration result, which is comparable to the state-of-the-art LRF-based methods, and our calibrated system can estimate the geometric parameters with high accuracy in real time. PMID:27941705

  14. A new wavelet transform to sparsely represent cortical current densities for EEG/MEG inverse problems.

    PubMed

    Liao, Ke; Zhu, Min; Ding, Lei

    2013-08-01

    The present study investigated the use of transform sparseness of cortical current density on human brain surface to improve electroencephalography/magnetoencephalography (EEG/MEG) inverse solutions. Transform sparseness was assessed by evaluating compressibility of cortical current densities in transform domains. To do that, a structure compression method from computer graphics was first adopted to compress cortical surface structure, either regular or irregular, into hierarchical multi-resolution meshes. Then, a new face-based wavelet method based on generated multi-resolution meshes was proposed to compress current density functions defined on cortical surfaces. Twelve cortical surface models were built by three EEG/MEG softwares and their structural compressibility was evaluated and compared by the proposed method. Monte Carlo simulations were implemented to evaluate the performance of the proposed wavelet method in compressing various cortical current density distributions as compared to other two available vertex-based wavelet methods. The present results indicate that the face-based wavelet method can achieve higher transform sparseness than vertex-based wavelet methods. Furthermore, basis functions from the face-based wavelet method have lower coherence against typical EEG and MEG measurement systems than vertex-based wavelet methods. Both high transform sparseness and low coherent measurements suggest that the proposed face-based wavelet method can improve the performance of L1-norm regularized EEG/MEG inverse solutions, which was further demonstrated in simulations and experimental setups using MEG data. Thus, this new transform on complicated cortical structure is promising to significantly advance EEG/MEG inverse source imaging technologies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
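
    The record's argument is that a sparser transform representation improves L1-norm regularized inverse solutions. A minimal, generic sketch of such an L1 (Lasso-type) inverse problem is given below; the random matrix L is only a stand-in for the product of a lead field and the face-based wavelet basis (an assumption), and the regularization weight is illustrative.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(1)

      n_sensors, n_coeffs = 64, 500                    # e.g. MEG channels vs. basis coefficients
      L = rng.standard_normal((n_sensors, n_coeffs))   # stand-in "lead field x basis" matrix

      c_true = np.zeros(n_coeffs)                      # sparse coefficients in the transform domain
      c_true[rng.choice(n_coeffs, 5, replace=False)] = 5.0 * rng.standard_normal(5)
      y = L @ c_true + 0.01 * rng.standard_normal(n_sensors)   # simulated measurements

      est = Lasso(alpha=0.05, fit_intercept=False, max_iter=10000).fit(L, y)
      print("non-zero coefficients recovered:", np.count_nonzero(est.coef_))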

  15. Remote sensing of temperature and concentration profiles of a gas jet by coupling infrared emission spectroscopy and LIDAR for characterization of aircraft engine exhaust

    NASA Astrophysics Data System (ADS)

    Offret, J.-P.; Lebedinsky, J.; Navello, L.; Pina, V.; Serio, B.; Bailly, Y.; Hervé, P.

    2015-05-01

    Temperature plays an important role in the combustion chamber, since it determines both the efficiency and the rate of pollutant emission of engines. The air pollution problem concerns the emission of gases such as CO, CO2, NO, NO2 and SO2, and also aerosols, soot and volatile organic compounds. Flame combustion occurs in hostile environments where temperature and concentration profiles are often not easy to measure. In this study, an optical method for measuring temperature and CO2 concentration profiles, suitable for combustion analysis, is presented and discussed. The proposed optical metrology method offers numerous advantages over intrusive methods. The experimental setup combines a passive radiative emission measurement method with an active laser measurement method. The passive method is based on gas emission spectroscopy; the experimental spectrometer device is coupled with an active method used to investigate and correct complex flame profiles. This active method, similar to a LIDAR (Light Detection And Ranging) device, is based on the measurement of the Rayleigh scattering of a short laser pulse recorded with a high-speed streak camera. The whole experimental system of this new method is presented. Results obtained on a small-scale turbojet are shown and discussed in order to illustrate the potential of the method. Both temperature and concentration profiles of the gas jet are presented and discussed.

  16. Standardizing lightweight deflectometer modulus measurements for compaction quality assurance : research summary.

    DOT National Transportation Integrated Search

    2017-09-01

    The mechanistic-empirical pavement design method requires the elastic resilient modulus as the key input for characterization of geomaterials. Current density-based QA procedures do not measure resilient modulus. Additionally, the density-based metho...

  17. Measures of precision for dissimilarity-based multivariate analysis of ecological communities

    PubMed Central

    Anderson, Marti J; Santana-Garcon, Julia

    2015-01-01

    Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. PMID:25438826
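
    A minimal sketch of the basic quantity described above, computed from an n x n dissimilarity matrix, is given below; the constants follow the usual PERMANOVA convention and the double-resampling uncertainty procedure is not reproduced, so the exact definition should be checked against the paper before quantitative use.

      import numpy as np
      from scipy.spatial.distance import pdist, squareform

      def mult_se(D):
          """Pseudo multivariate standard error from an n x n dissimilarity matrix D.

          Uses the PERMANOVA-style pseudo variance: SS = sum_{i<j} d_ij^2 / n,
          V = SS / (n - 1), MultSE = sqrt(V / n).  Constants are hedged."""
          D = np.asarray(D, dtype=float)
          n = D.shape[0]
          ss = np.triu(D ** 2, k=1).sum() / n
          v = ss / (n - 1)
          return np.sqrt(v / n)

      # Toy community matrix: 20 samples x 8 species, Bray-Curtis dissimilarities.
      rng = np.random.default_rng(2)
      Y = rng.poisson(3.0, size=(20, 8))
      print(mult_se(squareform(pdist(Y, metric="braycurtis"))))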

  18. Fast and precise dense grid size measurement method based on coaxial dual optical imaging system

    NASA Astrophysics Data System (ADS)

    Guo, Jiping; Peng, Xiang; Yu, Jiping; Hao, Jian; Diao, Yan; Song, Tao; Li, Ameng; Lu, Xiaowei

    2015-10-01

    Test sieves with dense grid structures are widely used in many fields, and accurate grid size calibration is critical for successful grading analysis and test sieving. Traditional calibration methods, however, suffer from low measurement efficiency and from sampling too few grids, which can lead to erroneous quality judgments. Here, a fast and precise test sieve inspection method is presented. First, a coaxial imaging system with low- and high-magnification optical probes is designed to capture grid images of the test sieve. Then, a scaling ratio between the low- and high-magnification probes is obtained from corresponding grids in the captured images. With this ratio, all grid dimensions in the low-magnification image can be obtained with high accuracy by measuring a few corresponding grids in the high-magnification image. Finally, by scanning the stage of the tri-axis platform of the measuring apparatus, the whole surface of the test sieve can be inspected quickly. Experimental results show that the proposed method measures test sieves more efficiently than traditional methods: it can measure 0.15 million grids (grid size 0.1 mm) within 60 seconds, and it can measure grid sizes from 20 μm to 5 mm precisely. In short, the presented method calibrates the grid size of a test sieve automatically with high efficiency and accuracy, so that statistical surface evaluation can be implemented effectively and quality judgments become more reliable.

  19. Neurologic examination and instrument-based measurements in the evaluation of ulnar neuropathy at the elbow.

    PubMed

    Omejec, Gregor; Podnar, Simon

    2018-06-01

    The aim of the study was to compare the utility of instrument-based assessment of peripheral nerve function with the neurologic examination in ulnar neuropathy at the elbow (UNE). We prospectively recruited consecutive patients with suspected UNE, performed a neurologic examination, and performed instrument-based measurements (muscle cross-sectional area by ultrasonography, muscle strength by dynamometry, and sensation using monofilaments). We found good correlations between clinical estimates and corresponding instrument-based measurements, with similar ability to diagnose UNE and predict UNE pathophysiology. Although instrument-based methods provide quantitative evaluation of peripheral nerve function, we did not find them to be more sensitive or specific in the diagnosis of UNE than the standard neurologic examination. Likewise, instrument-based methods were not better able to differentiate between groups of UNE patients with different pathophysiologies. Muscle Nerve 57: 951-957, 2018. © 2017 Wiley Periodicals, Inc.

  20. Generalized sample entropy analysis for traffic signals based on similarity measure

    NASA Astrophysics Data System (ADS)

    Shang, Du; Xu, Mengjia; Shang, Pengjian

    2017-05-01

    Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper, a modified generalized sample entropy combined with surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method, based on a similarity distance, matches signal patterns in a different way and reveals distinct complexity behaviors. Simulations are conducted on synthetic data and traffic signals to provide a comparative study that shows the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. The first is that it overcomes the limitation on the relationship between the dimension parameter and the length of the series. The second is that the modified sample entropy functions can quantitatively distinguish time series from different complex systems by means of the similarity measure.
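
    For orientation, a compact implementation of the standard sample entropy that the record modifies is sketched below (Chebyshev distance, default tolerance r = 0.2·SD); the similarity-measure-based generalization and the surrogate analysis of the paper are not reproduced.

      import numpy as np

      def sample_entropy(x, m=2, r=None):
          """Standard sample entropy SampEn(m, r) with Chebyshev distance."""
          x = np.asarray(x, dtype=float)
          if r is None:
              r = 0.2 * x.std()
          N = len(x)
          def n_matches(length):
              # Template vectors of the given length for i = 0 .. N - m - 1.
              T = np.array([x[i:i + length] for i in range(N - m)])
              d = np.abs(T[:, None, :] - T[None, :, :]).max(-1)
              return (d <= r).sum() - len(T)      # exclude self-matches
          B = n_matches(m)
          A = n_matches(m + 1)
          return -np.log(A / B) if A > 0 and B > 0 else np.inf

      rng = np.random.default_rng(3)
      print(sample_entropy(rng.standard_normal(500)))                  # irregular: larger
      print(sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 500))))   # regular: smaller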

  1. Golden angle based scanning for robust corneal topography with OCT

    PubMed Central

    Wagner, Joerg; Goldblum, David; Cattin, Philippe C.

    2017-01-01

    Corneal topography allows the assessment of the cornea’s refractive power which is crucial for diagnostics and surgical planning. The use of optical coherence tomography (OCT) for corneal topography is still limited. One limitation is the susceptibility to disturbances like blinking of the eye. This can result in partially corrupted scans that cannot be evaluated using common methods. We present a new scanning method for reliable corneal topography from partial scans. Based on the golden angle, the method features a balanced scan point distribution which refines over measurement time and remains balanced when part of the scan is removed. The performance of the method is assessed numerically and by measurements of test surfaces. The results confirm that the method enables numerically well-conditioned and reliable corneal topography from partially corrupted scans and reduces the need for repeated measurements in case of abrupt disturbances. PMID:28270961
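
    A minimal sketch of one common golden-angle scheme (successive meridian orientations separated by about 137.5 degrees), which has the prefix-balance property the record relies on, is shown below; the authors' actual OCT scan-point parameterization may differ.

      import numpy as np

      GOLDEN_ANGLE = np.pi * (3.0 - np.sqrt(5.0))   # ~2.400 rad, ~137.5 degrees

      def golden_angle_meridians(n):
          """Orientations of n successive radial scan lines, folded into [0, pi).

          Because consecutive orientations differ by the golden angle, any prefix of
          the sequence -- and any subset left after discarding a corrupted block --
          stays nearly uniformly distributed in angle."""
          return (np.arange(n) * GOLDEN_ANGLE) % np.pi

      print(np.degrees(golden_angle_meridians(6)))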

  2. Predicting the thermal conductivity of aluminium alloys in the cryogenic to room temperature range

    NASA Astrophysics Data System (ADS)

    Woodcraft, Adam L.

    2005-06-01

    Aluminium alloys are being used increasingly in cryogenic systems. However, cryogenic thermal conductivity measurements have been made on only a few of the many types in general use. This paper describes a method of predicting the thermal conductivity of any aluminium alloy between the superconducting transition temperature (approximately 1 K) and room temperature, based on a measurement of the thermal conductivity or electrical resistivity at a single temperature. Where predictions are based on low temperature measurements (approximately 4 K and below), the accuracy is generally better than 10%. Useful predictions can also be made from room temperature measurements for most alloys, but with reduced accuracy. This method permits aluminium alloys to be used in situations where the thermal conductivity is important without having to make (or find) direct measurements over the entire temperature range of interest. There is therefore greater scope to choose alloys based on mechanical properties and availability, rather than on whether cryogenic thermal conductivity measurements have been made. Recommended thermal conductivity values are presented for aluminium 6082 (based on a new measurement), and for 1000 series, and types 2014, 2024, 2219, 3003, 5052, 5083, 5086, 5154, 6061, 6063, 6082, 7039 and 7075 (based on low temperature measurements in the literature).

  3. A Practical, Robust and Fast Method for Location Localization in Range-Based Systems.

    PubMed

    Huang, Shiping; Wu, Zhifeng; Misra, Anil

    2017-12-11

    Location localization technology is used in a number of industrial and civil applications. Real time location localization accuracy is highly dependent on the quality of the distance measurements and efficiency of solving the localization equations. In this paper, we provide a novel approach to solve the nonlinear localization equations efficiently and simultaneously eliminate the bad measurement data in range-based systems. A geometric intersection model was developed to narrow the target search area, where Newton's Method and the Direct Search Method are used to search for the unknown position. Not only does the geometric intersection model offer a small bounded search domain for Newton's Method and the Direct Search Method, but also it can self-correct bad measurement data. The Direct Search Method is useful for the coarse localization or small target search domain, while the Newton's Method can be used for accurate localization. For accurate localization, by utilizing the proposed Modified Newton's Method (MNM), challenges of avoiding the local extrema, singularities, and initial value choice are addressed. The applicability and robustness of the developed method has been demonstrated by experiments with an indoor system.
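
    A minimal Gauss-Newton sketch for the core range-based localization step (position from noisy distances to known anchors) is given below; the paper's geometric intersection model, Modified Newton's Method and bad-measurement rejection are not reproduced, and the anchor layout is illustrative.

      import numpy as np

      def locate(anchors, ranges, n_iter=20):
          """Gauss-Newton least-squares solution of ||x - a_i|| = r_i."""
          A = np.asarray(anchors, dtype=float)
          r = np.asarray(ranges, dtype=float)
          x = A.mean(axis=0)                     # start at the anchor centroid
          for _ in range(n_iter):
              diff = x - A
              dist = np.linalg.norm(diff, axis=1)
              J = diff / dist[:, None]           # Jacobian of the range residuals
              res = dist - r
              dx, *_ = np.linalg.lstsq(J, -res, rcond=None)
              x = x + dx
              if np.linalg.norm(dx) < 1e-9:
                  break
          return x

      anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
      true_x = np.array([3.0, 7.0])
      rng = np.random.default_rng(4)
      ranges = np.linalg.norm(anchors - true_x, axis=1) + 0.01 * rng.standard_normal(4)
      print(locate(anchors, ranges))             # ~ [3, 7]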

  4. Establishment of a Method for Measuring Antioxidant Capacity in Urine, Based on Oxidation Reduction Potential and Redox Couple I2/KI

    PubMed Central

    Cao, Tinghui; He, Min; Bai, Tianyu

    2016-01-01

    Objectives. To establish a new method for determination of the antioxidant capacity of human urine based on the redox couple I2/KI and to evaluate the redox status of healthy and diseased individuals. Methods. The method was based on the linear relationship between oxidation reduction potential (ORP) and the logarithm of the concentration ratio of I2/KI. The ORP of a solution with a known concentration ratio of I2/KI changes when reacted with urine. To determine the accuracy of the method, both vitamin C and urine were reacted separately with the I2/KI solution. The new method was compared with the traditional method of iodine titration and then used to measure the antioxidant capacity of urine samples from 30 diabetic patients and 30 healthy subjects. Results. A linear relationship was found between the logarithm of the concentration ratio of I2/KI and ORP (R² = 0.998). Both vitamin C and urine concentration showed a linear relationship with ORP (R² = 0.994 and 0.986, respectively). The precision of the method was in the acceptable range and the results of the two methods were linearly correlated (R² = 0.987). Differences in ORP values between the diabetic group and the control group were statistically significant (P < 0.05). Conclusions. A new method for measuring the antioxidant capacity of clinical urine has been established. PMID:28115919

  5. Radon Diffusion Measurement in Polyethylene based on Alpha Detection

    NASA Astrophysics Data System (ADS)

    Rau, Wolfgang

    2011-04-01

    We present a method to measure the diffusion of radon in solid materials based on the alpha decay of the radon daughter products. In contrast to usual diffusion measurements, which detect the radon that penetrates a thin barrier, we let the radon diffuse into the material and then measure the alpha decays of the radon daughter products in the material. We applied this method to regular and ultra-high-molecular-weight polyethylene and find diffusion lengths on the order of millimetres, as expected. However, the preliminary analysis shows significant differences between the two approaches we have chosen; these differences may be explained by the different experimental conditions.
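
    Under a simple one-dimensional steady-state model (an assumption for illustration, not necessarily the analysis used here), radon diffusing into a slab while decaying produces the concentration profile

      C(x) = C_0\, e^{-x/L}, \qquad L = \sqrt{D_{\mathrm{Rn}}\, \tau_{\mathrm{Rn}}},

    where D_Rn is the diffusion coefficient and τ_Rn ≈ 5.5 d is the mean lifetime of 222Rn; the depth distribution of the daughters' alpha decays then tracks this exponential, which is how a millimetre-scale diffusion length shows up in the data.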

  6. Pulse Transit Time Measurement Using Seismocardiogram, Photoplethysmogram, and Acoustic Recordings: Evaluation and Comparison.

    PubMed

    Yang, Chenxi; Tavassolian, Negar

    2018-05-01

    This work proposes a novel method of pulse transit time (PTT) measurement. The proximal arterial location data are collected from seismocardiogram (SCG) recordings by placing a micro-electromechanical accelerometer on the chest wall. The distal arterial location data are recorded using an acoustic sensor placed inside the ear. The performance of distal location recordings is evaluated by comparing SCG-acoustic and SCG-photoplethysmogram (PPG) measurements. PPG and acoustic performances under motion noise are also compared. Experimental results suggest comparable performances for the acoustic-based and PPG-based devices. The feasibility of each PTT measurement method is validated for blood pressure evaluations and its limitations are analyzed.

  7. Accurate aircraft wind measurements using the global positioning system (GPS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobosy, R.J.; Crawford, T.L., McMillen, R.T., Dumas, E.J.

    1996-11-01

    High accuracy measurements of the spatial distribution of wind speed are required in the study of turbulent exchange between the atmosphere and the earth. The use of a differential global positioning system (GPS) to determine the sensor velocity vector component of wind speed is discussed in this paper. The results of noise and rocking testing are summarized, and fluxes obtained from the GPS-based methods are compared to those measured from systems on towers and airplanes. The GPS-based methods provided usable measurements that compared well with tower and aircraft data at a significantly lower cost. 21 refs., 1 fig., 2 tabs.
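
    The basic vector relation behind airborne wind measurement (attitude rotations and probe-calibration terms are omitted here) is

      \mathbf{v}_{\mathrm{wind}} = \mathbf{v}_{\mathrm{ground}} - \mathbf{v}_{\mathrm{air}},

    where v_ground is the platform velocity obtained from differential GPS and v_air is the air-relative velocity from the flow probe, both expressed in earth coordinates; errors in the GPS-derived platform velocity therefore map directly into the computed wind.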

  8. Differentiation and Exploration of Model MACP for HE VER 1.0 on Prototype Performance Measurement Application for Higher Education

    NASA Astrophysics Data System (ADS)

    El Akbar, R. Reza; Anshary, Muhammad Adi Khairul; Hariadi, Dennis

    2018-02-01

    Model MACP for HE ver. 1.0 describes how to measure and monitor performance in higher education. A review of the research related to the model identified several components that should be developed further, so this research has four main objectives. The first objective is to differentiate the CSF (critical success factor) components of the previous model; the second is to explore the KPIs (key performance indicators) of the previous model; the third, building on the first two, is to design a new and more detailed model; and the fourth is to design a prototype application for performance measurement in higher education based on the new model. The methods used are explorative research and application design using a prototype approach. The results of this study are, first, a more detailed model for measuring and monitoring performance in higher education, obtained through differentiation and exploration of Model MACP for HE Ver.1; second, a dictionary of college performance measurement compiled by re-evaluating the existing indicators; and third, the design of a prototype application for performance measurement in higher education.

  9. A computed tomography-based spatial normalization for the analysis of [18F] fluorodeoxyglucose positron emission tomography of the brain.

    PubMed

    Cho, Hanna; Kim, Jin Su; Choi, Jae Yong; Ryu, Young Hoon; Lyoo, Chul Hyoung

    2014-01-01

    We developed a new computed tomography (CT)-based spatial normalization method and CT template to demonstrate its usefulness in spatial normalization of positron emission tomography (PET) images with [(18)F] fluorodeoxyglucose (FDG) PET studies in healthy controls. Seventy healthy controls underwent brain CT scan (120 KeV, 180 mAs, and 3 mm of thickness) and [(18)F] FDG PET scans using a PET/CT scanner. T1-weighted magnetic resonance (MR) images were acquired for all subjects. By averaging skull-stripped and spatially-normalized MR and CT images, we created skull-stripped MR and CT templates for spatial normalization. The skull-stripped MR and CT images were spatially normalized to each structural template. PET images were spatially normalized by applying spatial transformation parameters to normalize skull-stripped MR and CT images. A conventional perfusion PET template was used for PET-based spatial normalization. Regional standardized uptake values (SUV) measured by overlaying the template volume of interest (VOI) were compared to those measured with FreeSurfer-generated VOI (FSVOI). All three spatial normalization methods underestimated regional SUV values by 0.3-20% compared to those measured with FSVOI. The CT-based method showed slightly greater underestimation bias. Regional SUV values derived from all three spatial normalization methods were correlated significantly (p < 0.0001) with those measured with FSVOI. CT-based spatial normalization may be an alternative method for structure-based spatial normalization of [(18)F] FDG PET when MR imaging is unavailable. Therefore, it is useful for PET/CT studies with various radiotracers whose uptake is expected to be limited to specific brain regions or highly variable within study population.

  10. Intelligent condition monitoring method for bearing faults from highly compressed measurements using sparse over-complete features

    NASA Astrophysics Data System (ADS)

    Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.

    2018-01-01

    Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a rolling element bearing. Among them, methods based on compressed sensing (CS) have received some attention recently due to their ability to sample below the Nyquist rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS-based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse-autoencoder-based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature learning algorithm based on a sparse autoencoder is used to learn over-complete sparse representations of these compressed datasets. Finally, fault classification is achieved in two stages: a pre-training stage, in which a stacked autoencoder and a softmax regression layer form the deep network, and a fine-tuning stage, in which the network is re-trained with the backpropagation (BP) algorithm. The experimental results show that the proposed method is able to achieve high levels of accuracy even with extremely compressed measurements compared with the existing techniques.
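
    A minimal sketch of the first step only is given below: producing highly compressed measurements y = Φx of a vibration signal with a random Gaussian sensing matrix. The signal model, sampling rate and 10% compression ratio are illustrative assumptions; the sparse-autoencoder and DNN stages are not shown.

      import numpy as np

      rng = np.random.default_rng(5)

      n = 2048                                   # original signal length (illustrative)
      m = int(0.10 * n)                          # keep 10% as many measurements

      t = np.arange(n) / 12000.0                 # assumed 12 kHz sampling
      x = np.sin(2 * np.pi * 157.0 * t)          # stand-in fault-frequency tone
      x += 0.5 * (rng.random(n) < 0.01) * rng.standard_normal(n)   # sparse impacts
      x += 0.05 * rng.standard_normal(n)         # broadband noise

      Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
      y = Phi @ x                                # compressed measurements that would be
      print(x.shape, "->", y.shape)              # fed to the feature-learning stage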

  11. New method of noncontact temperature measurement in on-line textile production

    NASA Astrophysics Data System (ADS)

    Cheng, Xianping; Song, Xing-Li; Deng, Xing-Zhong

    1993-09-01

    Based on the conditions of textile production, infrared non-contact temperature measurement is adopted in the heat-setting and drying heat-treatment processes. The method is used to monitor the moving cloth, whose temperature is displayed rapidly and accurately. The principle of the temperature measurement is analysed theoretically in this paper, and mathematical analysis and calculation are used to introduce the signal transmission method. By combining software with hardware, the temperature reading is corrected and compensated with the aid of a single-chip microcomputer. Test results indicate that the temperature measurement instrument provides reliable parameters for quality control and is an important measure for improving product quality.

  12. Reverse engineering of the homogeneous-entity product profiles based on CCD

    NASA Astrophysics Data System (ADS)

    Gan, Yong; Zhong, Jingru; Sun, Ning; Sun, Aoran

    2011-08-01

    This measurement system uses a layered measurement principle and measures the entity along three perpendicular directions. As the measured entity is immersed in the liquid layer by layer, each layer's image is collected by a CCD and digitally processed. The paper introduces the basic measuring principle and the working process of the method. According to Archimedes' law, the buoyancy and submerged volume at different immersion depths are measured by an electronic balance and mathematical models are established. By computing each layer's weight and centre of gravity with an artificial-intelligence-based method, the 3D coordinates of every small entity cell in the different layers can be estimated and a 3D contour image constructed. The experimental results show that the system can measure any homogeneous entity that is insoluble in water. The measurement is fast and non-destructive, and it can measure entities with internal holes.

  13. An Exploration of Alternative Scoring Methods Using Curriculum-Based Measurement in Early Writing

    ERIC Educational Resources Information Center

    Allen, Abigail A.; Poch, Apryl L.; Lembke, Erica S.

    2018-01-01

    This manuscript describes two empirical studies of alternative scoring procedures used with curriculum-based measurement in writing (CBM-W). Study 1 explored the technical adequacy of a trait-based rubric in first grade. Study 2 explored the technical adequacy of a trait-based rubric, production-dependent, and production-independent scores in…

  14. Compensation for positioning error of industrial robot for flexible vision measuring system

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    The positioning error of the robot is a main factor limiting the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods based on the kinematic model of the robot have a significant limitation: they are not effective throughout the whole measuring space. A new compensation method for robot positioning error based on vision measuring techniques is presented. One approach sets global control points in the measured field and attaches an orientation camera to the vision sensor; the global control points are measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach sets control points on the vision sensor with two large-field cameras behind the sensor; the three-dimensional coordinates of the control points are measured and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS of spatial positioning is 3.422 mm with a single camera and 0.031 mm with dual cameras. The conclusion is that the algorithm of the single-camera method needs improvement for higher accuracy, while the accuracy of the dual-camera method is sufficient for application.

  15. Optimization with artificial neural network systems - A mapping principle and a comparison to gradient based methods

    NASA Technical Reports Server (NTRS)

    Leong, Harrison Monfook

    1988-01-01

    General formulae for mapping optimization problems into systems of ordinary differential equations associated with artificial neural networks are presented. A comparison is made to optimization using gradient-search methods. The performance measure is the settling time from an initial state to a target state. A simple analytical example illustrates a situation where dynamical systems representing artificial neural network methods would settle faster than those representing gradient-search. Settling time was investigated for a more complicated optimization problem using computer simulations. The problem was a simplified version of a problem in medical imaging: determining loci of cerebral activity from electromagnetic measurements at the scalp. The simulations showed that gradient based systems typically settled 50 to 100 times faster than systems based on current neural network optimization methods.

  16. 75 FR 2523 - Office of Innovation and Improvement; Overview Information; Arts in Education Model Development...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-15

    ... that is based on rigorous scientifically based research methods to assess the effectiveness of a...) Relies on measurements or observational methods that provide reliable and valid data across evaluators... of innovative, cohesive models that are based on research and have demonstrated that they effectively...

  17. On isocentre adjustment and quality control in linear accelerator based radiosurgery with circular collimators and room lasers.

    PubMed

    Treuer, H; Hoevels, M; Luyken, K; Gierich, A; Kocher, M; Müller, R P; Sturm, V

    2000-08-01

    We have developed a densitometric method for measuring the isocentric accuracy and the accuracy of marking the isocentre position for linear accelerator based radiosurgery with circular collimators and room lasers. Isocentric shots are used to determine the accuracy of marking the isocentre position with room lasers and star shots are used to determine the wobble of the gantry and table rotation movement, the effect of gantry sag, the stereotactic collimator alignment, and the minimal distance between gantry and table rotation axes. Since the method is based on densitometric measurements, beam spot stability is implicitly tested. The method developed is also suitable for quality assurance and has proved to be useful in optimizing isocentric accuracy. The method is simple to perform and only requires a film box and film scanner for instrumentation. Thus, the method has the potential to become widely available and may therefore be useful in standardizing the description of linear accelerator based radiosurgical systems.

  18. Lunar-edge based on-orbit modulation transfer function (MTF) measurement

    NASA Astrophysics Data System (ADS)

    Cheng, Ying; Yi, Hongwei; Liu, Xinlong

    2017-10-01

    Modulation transfer function (MTF) is an important parameter for image quality evaluation of on-orbit optical imaging systems. Various methods have been proposed to determine the MTF of an imaging system based on images containing point, pulse and edge features. In this paper, the edge of the moon is used as a high-contrast target to measure the on-orbit MTF of imaging systems based on knife-edge methods. The proposed method is an extension of the ISO 12233 slanted-edge spatial frequency response test, except that the shape of the edge is a circular arc instead of a straight line. In order to get more accurate edge locations and thereby obtain a more faithful edge spread function (ESF), we use a least-squares circle fit to locate the lunar edge in the sub-pixel edge detection process. Finally, simulation results show that the MTF value at the Nyquist frequency calculated using our lunar-edge method is reliable and accurate, with an error of less than 2% compared with the theoretical MTF value.
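
    A minimal sketch of the back end of a slanted/circular-edge MTF computation once an oversampled edge-spread function (ESF) has been assembled: differentiate to the line-spread function, window, FFT and normalize. The lunar-edge detection and circle fit themselves are not reproduced, and the smooth synthetic edge is only a stand-in.

      import numpy as np

      def mtf_from_esf(esf, oversample=4):
          """MTF from an oversampled edge-spread function (edge-method back end).

          Returns spatial frequencies in cycles per original pixel, assuming the ESF
          was binned at 'oversample' samples per pixel."""
          lsf = np.gradient(np.asarray(esf, dtype=float))   # line-spread function
          lsf *= np.hanning(len(lsf))                       # reduce truncation leakage
          spec = np.abs(np.fft.rfft(lsf))
          freqs = np.fft.rfftfreq(len(lsf), d=1.0 / oversample)
          return freqs, spec / spec[0]

      # Smooth synthetic edge binned at quarter-pixel steps, as a stand-in for the
      # profile obtained from the fitted lunar edge.
      x = np.arange(-64, 65) / 4.0
      esf = 0.5 * (1.0 + np.tanh(x / 1.5))
      freqs, mtf = mtf_from_esf(esf, oversample=4)
      print(np.interp(0.5, freqs, mtf))          # MTF at Nyquist (0.5 cycles/pixel)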

  19. Index cost estimate based BIM method - Computational example for sports fields

    NASA Astrophysics Data System (ADS)

    Zima, Krzysztof

    2017-07-01

    The paper presents an example of cost estimation in the early phase of a project. A fragment of a relational database containing solutions, descriptions, geometry of the construction object and unit costs of sports facilities is shown. Calculations with the Index Cost Estimate Based BIM method using Case-Based Reasoning are presented as well. The article presents local and global similarity measurement and an example of a BIM-based quantity takeoff process. The outcome of the cost calculations based on the CBR method is presented as the final result.

  20. Estimation of Alpine Skier Posture Using Machine Learning Techniques

    PubMed Central

    Nemec, Bojan; Petrič, Tadej; Babič, Jan; Supej, Matej

    2014-01-01

    High precision Global Navigation Satellite System (GNSS) measurements are becoming more and more popular in alpine skiing due to the relatively undemanding setup and excellent performance. However, GNSS provides only single-point measurements that are defined with the antenna placed typically behind the skier's neck. A key issue is how to estimate other more relevant parameters of the skier's body, like the center of mass (COM) and ski trajectories. Previously, these parameters were estimated by modeling the skier's body with an inverted-pendulum model that oversimplified the skier's body. In this study, we propose two machine learning methods that overcome this shortcoming and estimate COM and skis trajectories based on a more faithful approximation of the skier's body with nine degrees-of-freedom. The first method utilizes a well-established approach of artificial neural networks, while the second method is based on a state-of-the-art statistical generalization method. Both methods were evaluated using the reference measurements obtained on a typical giant slalom course and compared with the inverted-pendulum method. Our results outperform the results of commonly used inverted-pendulum methods and demonstrate the applicability of machine learning techniques in biomechanical measurements of alpine skiing. PMID:25313492

  1. A Generalized Approach for Measuring Relationships Among Genes.

    PubMed

    Wang, Lijun; Ahsan, Md Asif; Chen, Ming

    2017-07-21

    Several methods for identifying relationships among pairs of genes have been developed. In this article, we present a generalized approach for measuring relationships between any pair of genes, based on statistical prediction. We derive two particular versions of the generalized approach: least squares estimation (LSE) and nearest neighbors prediction (NNP). Mathematical proof shows that LSE is equivalent to correlation-based methods, and NNP approximates one popular method, the maximal information coefficient (MIC), according to its performance in simulations and on a real dataset. Moreover, the approach based on statistical prediction can be extended from two-gene relationships to multi-gene relationships. This application would help to identify relationships among multiple genes.

  2. Palladium-based Mass-Tag Cell Barcoding with a Doublet-Filtering Scheme and Single Cell Deconvolution Algorithm

    PubMed Central

    Zunder, Eli R.; Finck, Rachel; Behbehani, Gregory K.; Amir, El-ad D.; Krishnaswamy, Smita; Gonzalez, Veronica D.; Lorang, Cynthia G.; Bjornson, Zach; Spitzer, Matthew H.; Bodenmiller, Bernd; Fantl, Wendy J.; Pe’er, Dana; Nolan, Garry P.

    2015-01-01

    SUMMARY Mass-tag cell barcoding (MCB) labels individual cell samples with unique combinatorial barcodes, after which they are pooled for processing and measurement as a single multiplexed sample. The MCB method eliminates variability between samples in antibody staining and instrument sensitivity, reduces antibody consumption, and shortens instrument measurement time. Here, we present an optimized MCB protocol with several improvements over previously described methods. The use of palladium-based labeling reagents expands the number of measurement channels available for mass cytometry and reduces interference with lanthanide-based antibody measurement. An error-detecting combinatorial barcoding scheme allows cell doublets to be identified and removed from the analysis. A debarcoding algorithm that is single cell-based rather than population-based improves the accuracy and efficiency of sample deconvolution. This debarcoding algorithm has been packaged into software that allows rapid and unbiased sample deconvolution. The MCB procedure takes 3–4 h, not including sample acquisition time of ~1 h per million cells. PMID:25612231

  3. Correlation of cystatin C and creatinine based estimates of renal function in children with hydronephrosis.

    PubMed

    Momtaz, Hossein-Emad; Dehghan, Arash; Karimian, Mohammad

    2016-01-01

    The use of a simple and accurate glomerular filtration rate (GFR) estimating method allowing close assessment of renal function can be of great clinical importance. This study aimed to determine the association between a GFR estimated by an equation that includes only cystatin C (Gentian equation) and one estimated by an equation that includes only creatinine (Schwartz equation) in children. A total of 31 children aged from 1 day to 5 years with a final diagnosis of unilateral or bilateral hydronephrosis, referred to Besat hospital in Hamadan between March 2010 and February 2011, were consecutively enrolled. The Schwartz and Gentian equations were employed to determine GFR based on plasma creatinine and cystatin C levels, respectively. The GFR based on the Schwartz equation was 70.19 ± 24.86 ml/min/1.73 m², while that based on the Gentian method using cystatin C was 86.97 ± 21.57 ml/min/1.73 m². Pearson correlation analysis showed a strong direct association between the GFR values measured by the Schwartz equation based on serum creatinine and by the Gentian method using cystatin C (r = 0.594, P < 0.001). The linear relation between the GFR values measured with the two methods was: cystatin C based GFR = 50.8 + 0.515 × Schwartz GFR. The correlation between GFR values measured using serum creatinine and serum cystatin C remained significant even after adjustment for patients' gender and age (r = 0.724, P < 0.001). The equation based on cystatin C level is comparable with the equation based on serum creatinine (Schwartz formula) for estimating GFR in children.

  4. Hyper thin 3D edge measurement of honeycomb core structures based on the triangular camera-projector layout & phase-based stereo matching.

    PubMed

    Jiang, Hongzhi; Zhao, Huijie; Li, Xudong; Quan, Chenggen

    2016-03-07

    We propose a novel hyper thin 3D edge measurement technique to measure the profile of 3D outer envelope of honeycomb core structures. The width of the edges of the honeycomb core is less than 0.1 mm. We introduce a triangular layout design consisting of two cameras and one projector to measure hyper thin 3D edges and eliminate data interference from the walls. A phase-shifting algorithm and the multi-frequency heterodyne phase-unwrapping principle are applied for phase retrievals on edges. A new stereo matching method based on phase mapping and epipolar constraint is presented to solve correspondence searching on the edges and remove false matches resulting in 3D outliers. Experimental results demonstrate the effectiveness of the proposed method for measuring the 3D profile of honeycomb core structures.
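
    For reference, the standard four-step phase-shifting formula on which phase retrieval of this kind is typically built is sketched below (the record's multi-frequency heterodyne unwrapping and phase-based stereo matching are not shown).

      import numpy as np

      def four_step_phase(I1, I2, I3, I4):
          """Wrapped phase from four fringe images with pi/2 phase shifts:
          I_k = A + B*cos(phi + (k - 1)*pi/2), k = 1..4."""
          return np.arctan2(I4 - I2, I1 - I3)

      # Synthetic fringes on a small patch, background 1.0 and modulation 0.8.
      phi = np.linspace(0.0, np.pi, 16).reshape(4, 4)
      frames = [1.0 + 0.8 * np.cos(phi + k * np.pi / 2) for k in range(4)]
      print(np.allclose(four_step_phase(*frames), phi, atol=1e-6))   # True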

  5. Enhancement of intrafield overlay using a design-based metrology system

    NASA Astrophysics Data System (ADS)

    Jo, Gyoyeon; Ji, Sunkeun; Kim, Shinyoung; Kang, Hyunwoo; Park, Minwoo; Kim, Sangwoo; Kim, Jungchan; Park, Chanha; Yang, Hyunjo; Maruyama, Kotaro; Park, Byungjun

    2016-03-01

    As the scales of semiconductor devices continue to shrink, accurate measurement and control of overlay have been emphasized for securing more overlay margin. Conventional overlay analysis methods are based on the optical measurement of an overlay mark. However, the overlay data obtained from these optical methods cannot represent the exact misregistration between two layers at the circuit level. The overlay mismatch may arise from the size or pitch difference between the overlay mark and the real pattern; pattern distortion caused by CMP or etching can be a source of overlay mismatch as well. Another issue is that the overlay variation in the real circuit pattern depends on its location. Optical overlay measurement methods such as IBO and DBO, which use overlay marks on the scribe line, are not capable of defining the exact overlay values of the real circuit. Therefore, the overlay values of the real circuit need to be extracted to integrate the semiconductor device properly. Circuit-level overlay measurement using CD-SEM is too time-consuming to extract enough data to indicate the overall trend of the chip, whereas a DBM tool can derive sufficient data to display the overlay tendency of the real circuit region with high repeatability. An e-beam-based DBM (Design Based Metrology) tool can thus be an alternative overlay measurement method. In this paper, we show that the overlay values extracted from optical measurement cannot represent the circuit-level overlay values, and we demonstrate the possibility of correcting the misregistration between two layers using the overlay data obtained from the DBM system.

  6. A Machine Learning Approach to Measurement of Text Readability for EFL Learners Using Various Linguistic Features

    ERIC Educational Resources Information Center

    Kotani, Katsunori; Yoshimi, Takehiko; Isahara, Hitoshi

    2011-01-01

    The present paper introduces and evaluates a readability measurement method designed for learners of EFL (English as a foreign language). The proposed readability measurement method (a regression model) estimates the text readability based on linguistic features, such as lexical, syntactic and discourse features. Text readability refers to the…

  7. Time-of-flight measurements of heavy ions using Si PIN diodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strekalovsky, A. O., E-mail: alex.strek@bk.ru; Kamanin, D. V.; Pyatkov, Yu. V.

    2016-12-15

    A new off-line timing method for PIN diode signals is presented which allows the plasma delay effect to be suppressed. Velocities of heavy ions measured by the new method are in good agreement within a wide range of masses and energies with velocities measured by time stamp detectors based on microchannel plates.

  8. Measuring Disorientation Based on the Needleman-Wunsch Algorithm

    ERIC Educational Resources Information Center

    Güyer, Tolga; Atasoy, Bilal; Somyürek, Sibel

    2015-01-01

    This study offers a new method to measure navigation disorientation in web-based systems, which are a powerful learning medium for distance and open education. The Needleman-Wunsch algorithm is used to measure disorientation in a more precise manner. The process combines theoretical and applied knowledge from two previously distinct research areas,…
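
    A compact implementation of the Needleman-Wunsch global alignment underlying the proposed disorientation measure is sketched below; the scoring values and the example page sequences are illustrative assumptions, and the mapping from alignment score to a disorientation index is not reproduced.

      def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
          """Global alignment score between two sequences (e.g. visited-page sequences)."""
          n, m = len(a), len(b)
          # DP table: score[i][j] = best score aligning a[:i] with b[:j].
          score = [[0] * (m + 1) for _ in range(n + 1)]
          for i in range(1, n + 1):
              score[i][0] = i * gap
          for j in range(1, m + 1):
              score[0][j] = j * gap
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                  score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
          return score[n][m]

      # Learner's navigation path vs. an "ideal" path through the hypermedia content.
      ideal   = ["intro", "unit1", "unit2", "quiz", "summary"]
      learner = ["intro", "unit1", "home", "unit1", "unit2", "summary"]
      print(needleman_wunsch(ideal, learner))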

  9. An advanced analysis method of initial orbit determination with too short arc data

    NASA Astrophysics Data System (ADS)

    Li, Binzhe; Fang, Li

    2018-02-01

    This paper studies initial orbit determination (IOD) based on space-based angle measurements. Such space-based observations commonly have short durations, so classical initial orbit determination algorithms, such as Laplace and Gauss methods, give poor results. In this paper, an advanced analysis method of initial orbit determination is developed for space-based observations. The admissible region and triangulation are introduced in the method, and a genetic algorithm is used to add constraints on the parameters. Simulation results show that the algorithm can successfully complete the initial orbit determination.

  10. A novel diagnosis method for a Hall plates-based rotary encoder with a magnetic concentrator.

    PubMed

    Meng, Bumin; Wang, Yaonan; Sun, Wei; Yuan, Xiaofang

    2014-07-31

    In the last few years, rotary encoders based on two-dimensional complementary metal oxide semiconductor (CMOS) Hall plates with a magnetic concentrator have been developed to measure contactless absolute angle. Various error factors influence the measuring accuracy and are difficult to locate after assembly of the encoder. In this paper, a model-based rapid diagnosis method is presented. Based on an analysis of the error mechanism, an error model is built to compare the minimum residual angle error and to quantify the error factors. Additionally, a modified particle swarm optimization (PSO) algorithm is used to reduce the computational load. The simulation and experimental results show that this diagnosis method is feasible for quantifying the causes of the error and reduces the number of iterations significantly.

  11. A rotation-translation invariant molecular descriptor of partial charges and its use in ligand-based virtual screening

    PubMed Central

    2014-01-01

    Background Measures of similarity for chemical molecules have been developed since the dawn of chemoinformatics. Molecular similarity has been measured by a variety of methods including molecular descriptor based similarity, common molecular fragments, graph matching and 3D methods such as shape matching. Similarity measures are widespread in practice and have proven to be useful in drug discovery. Because of our interest in electrostatics and high throughput ligand-based virtual screening, we sought to exploit the information contained in atomic coordinates and partial charges of a molecule. Results A new molecular descriptor based on partial charges is proposed. It uses the autocorrelation function and linear binning to encode all atoms of a molecule into two rotation-translation invariant vectors. Combined with a scoring function, the descriptor allows to rank-order a database of compounds versus a query molecule. The proposed implementation is called ACPC (AutoCorrelation of Partial Charges) and released in open source. Extensive retrospective ligand-based virtual screening experiments were performed and other methods were compared with in order to validate the method and associated protocol. Conclusions While it is a simple method, it performed remarkably well in experiments. At an average speed of 1649 molecules per second, it reached an average median area under the curve of 0.81 on 40 different targets; hence validating the proposed protocol and implementation. PMID:24887178
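
    A simplified sketch of the descriptor idea as stated (autocorrelation of partial charges accumulated into linear distance bins, which is rotation- and translation-invariant) is given below; bin width, maximum distance and the cosine scoring are assumptions, and the published ACPC tool encodes the molecule into two vectors rather than the single vector used here.

      import numpy as np

      def acpc_like_descriptor(coords, charges, dx=0.5, max_dist=12.0):
          """Rotation/translation-invariant vector: charge products q_i*q_j summed
          into linear distance bins (a simplified stand-in for the ACPC descriptor)."""
          coords = np.asarray(coords, dtype=float)
          q = np.asarray(charges, dtype=float)
          n_bins = int(max_dist / dx)
          desc = np.zeros(n_bins)
          for i in range(len(q)):
              d = np.linalg.norm(coords[i + 1:] - coords[i], axis=1)
              w = q[i] * q[i + 1:]
              idx = np.minimum((d / dx).astype(int), n_bins - 1)
              np.add.at(desc, idx, w)
          return desc

      def score(d1, d2):
          """Simple cosine similarity between two descriptors; the published ACPC
          scoring function may differ."""
          return float(d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-12))

      # Toy 3-atom "molecule" with partial charges.
      xyz = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [1.8, 1.0, 0.0]])
      q = np.array([-0.4, 0.1, 0.3])
      d = acpc_like_descriptor(xyz, q)
      print(score(d, d))   # 1.0 for identical molecules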

  12. Development of a photogrammetric method of measuring tree taper outside bark

    Treesearch

    David R. Larsen

    2006-01-01

    A photogrammetric method is presented for measuring tree diameters outside bark using calibrated control ground-based digital photographs. The method was designed to rapidly collect tree taper information from subject trees for the development of tree taper equations. Software that is commercially available, but designed for a different purpose, can be readily adapted...

  13. Wave processes in the human cardiovascular system: The measuring complex, computing models, and diagnostic analysis

    NASA Astrophysics Data System (ADS)

    Ganiev, R. F.; Reviznikov, D. L.; Rogoza, A. N.; Slastushenskiy, Yu. V.; Ukrainskiy, L. E.

    2017-03-01

    A description of a complex approach to investigation of nonlinear wave processes in the human cardiovascular system based on a combination of high-precision methods of measuring a pulse wave, mathematical methods of processing the empirical data, and methods of direct numerical modeling of hemodynamic processes in an arterial tree is given.

  14. EVALUATION OF IODINE BASED IMPINGER SOLUTIONS FOR THE EFFICIENT CAPTURE OF HG USING DIRECT INJECTION NEBULIZATION INDUCTIVELY COUPLED PLASMA MASS SPECTROMETRY (DIN-ICP/MS) ANALYSIS

    EPA Science Inventory

    Currently there are no EPA reference sampling methods that have been promulgated for measuring stack emissions of Hg from coal combustion sources, however, EPA Method 29 is most commonly applied. The draft ASTM Ontario Hydro Method for measuring oxidized, elemental, particulate-b...

  15. Fluorescence based spectral assessment of pork meat freshness

    USDA-ARS?s Scientific Manuscript database

    Development of sensitive, nondestructive measurement methods for meat freshness is necessary to ensure safe distribution of meat products in the continually growing meat market. Fluorescence spectral technology has been shown to be a promising measurement method for quality and safety evaluation of ...

  16. A New PC and LabVIEW Package Based System for Electrochemical Investigations

    PubMed Central

    Stević, Zoran; Andjelković, Zoran; Antić, Dejan

    2008-01-01

    The paper describes a new PC and LabVIEW software package based system for electrochemical research. An overview of well known electrochemical methods, such as potential measurements, galvanostatic and potentiostatic method, cyclic voltammetry and EIS is given. Electrochemical impedance spectroscopy has been adapted for systems containing large capacitances. For signal generation and recording of the response of investigated electrochemical cell, a measurement and control system was developed, based on a PC P4. The rest of the hardware consists of a commercially available AD-DA converter and an external interface for analog signal processing. The interface is a result of authors own research. The software platform for desired measurement methods is LabVIEW 8.2 package, which is regarded as a high standard in the area of modern virtual instruments. The developed system was adjusted, tested and compared with commercially available system and ORCAD simulation. PMID:27879794

  17. Design of distributed FBG vibration measuring system based on Fabry-Perot tunable filter

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Miao, Changyun; Li, Hongqiang; Gao, Hua; Gan, Jingmeng

    2011-11-01

    A distributed optical fiber grating wavelength interrogator based on a fiber Fabry-Perot tunable filter (FFP-TF) is proposed, which can measure the dynamic strain or vibration of multiple sensing fiber gratings in one optical fiber by time-division multiplexing. A mathematical model of wavelength demodulation is built, the formulas for system output voltage and sensitivity are deduced, and the method of finding the static operating point is determined. The wavelength drifting characteristic of the FFP-TF is discussed when its center wavelength is set at the static operating point, and a wavelength locking method is proposed by introducing a high-frequency driving voltage signal. A demodulation system is established based on LabVIEW; its demodulated wavelength dynamic range is 290 pm in theory. In experiments, with digital filtering applied to the system output data, 100 Hz and 250 Hz vibration signals were measured. The experimental results prove the feasibility of the demodulation method.

  18. Large radius of curvature measurement based on the evaluation of interferogram-quality metric in non-null interferometry

    NASA Astrophysics Data System (ADS)

    Yang, Zhongming; Dou, Jiantai; Du, Jinyu; Gao, Zhishan

    2018-03-01

    Non-null interferometry can be used to measure the radius of curvature (ROC); we previously presented a virtual quadratic Newton rings phase-shifting moiré-fringes measurement method for large ROC measurement (Yang et al., 2016). In this paper, we propose a large ROC measurement method based on the evaluation of an interferogram-quality metric in a non-null interferometer. With the multi-configuration model of the non-null interferometric system in ZEMAX, the retrace errors and the phase introduced by the test surface are reconstructed. The interferogram-quality metric is obtained from the normalized phase-shifted testing Newton rings with the spherical-surface model of the non-null interferometric system. The radius of curvature of the test spherical surface is obtained when the minimum of the interferogram-quality metric is found. Simulations and experimental results verify the feasibility of the proposed method. For a spherical mirror with a ROC of 41,400 mm, the measurement accuracy is better than 0.13%.

  19. Distributed phase birefringence measurements based on polarization correlation in phase-sensitive optical time-domain reflectometers.

    PubMed

    Soto, Marcelo A; Lu, Xin; Martins, Hugo F; Gonzalez-Herraez, Miguel; Thévenaz, Luc

    2015-09-21

    In this paper a technique to measure the distributed birefringence profile along optical fibers is proposed and experimentally validated. The method is based on the spectral correlation between two sets of orthogonally-polarized measurements acquired using a phase-sensitive optical time-domain reflectometer (ϕOTDR). The correlation between the two measured spectra gives a resonance (correlation) peak at a frequency detuning that is proportional to the local refractive index difference between the two orthogonal polarization axes of the fiber. In this way the method enables local phase birefringence measurements at any position along optical fibers, so that any longitudinal fluctuation can be precisely evaluated with metric spatial resolution. The method has been experimentally validated by measuring fibers with low and high birefringence, such as standard single-mode fibers as well as conventional polarization-maintaining fibers. The technique has potential applications in the characterization of optical fibers for telecommunications as well as in distributed optical fiber sensing.
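
    The core of the data processing is the spectral correlation between the two polarization-resolved ϕOTDR measurements at each fiber position. The sketch below cross-correlates two spectra sampled on a common frequency grid and returns the detuning of the correlation peak; the grid spacing, the synthetic data, and the final conversion to birefringence are assumptions for illustration.

import numpy as np

def correlation_peak_detuning(spec_x, spec_y, df_hz):
    """Cross-correlate two orthogonally polarized phi-OTDR spectra measured at the
    same fiber position and return the frequency detuning of the correlation peak
    (the sign depends on which polarization is taken as reference)."""
    x = spec_x - np.mean(spec_x)
    y = spec_y - np.mean(spec_y)
    corr = np.correlate(x, y, mode="full")
    lag_bins = np.argmax(corr) - (len(x) - 1)
    return lag_bins * df_hz

# Synthetic check: the y-polarized spectrum is the x-polarized one shifted by 25 bins.
rng = np.random.default_rng(0)
base = rng.normal(size=400)
detuning = correlation_peak_detuning(base, np.roll(base, 25), df_hz=10e6)
print(detuning / 1e6, "MHz")             # ~ +/-250 MHz for this assumed 10 MHz grid
# The local phase birefringence then follows from delta_n ~ n_g * detuning / nu_0,
# with nu_0 the optical carrier frequency (an assumed relation for this sketch).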

  20. Influence of Individual Differences on the Calculation Method for FBG-Type Blood Pressure Sensors

    PubMed Central

    Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun; Kobayashi, Yuka

    2016-01-01

    In this paper, we propose a blood pressure calculation and associated measurement method based on a fiber Bragg grating (FBG) sensor. There are several points at which the pulse can be measured on the surface of the human body, and when an FBG sensor is located at any of these points, the pulse wave signal can be measured. The measured waveform is similar to the acceleration pulse wave. The pulse wave signal changes depending on several factors, including whether or not the individual is healthy and/or elderly. The measured pulse wave signal can be used to calculate the blood pressure using a calibration curve, which is constructed by a partial least squares (PLS) regression analysis using a reference blood pressure and the pulse wave signal. In this paper, we focus on the influence of individual differences on the blood pressure calculated from each calibration curve. In our study, the blood pressures calculated from the individual and overall calibration curves were compared, and our results show that the blood pressure calculated from the overall calibration curve had a lower measurement accuracy than that based on an individual calibration curve. We also found that the influence of individual differences on the calculated blood pressure when using the FBG sensor method was very low. Therefore, the FBG sensor method that we developed for measuring blood pressure was found to be suitable for use by many people. PMID:28036015

  1. Influence of Individual Differences on the Calculation Method for FBG-Type Blood Pressure Sensors.

    PubMed

    Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun; Kobayashi, Yuka

    2016-12-28

    In this paper, we propose a blood pressure calculation and associated measurement method based on a fiber Bragg grating (FBG) sensor. There are several points at which the pulse can be measured on the surface of the human body, and when an FBG sensor is located at any of these points, the pulse wave signal can be measured. The measured waveform is similar to the acceleration pulse wave. The pulse wave signal changes depending on several factors, including whether or not the individual is healthy and/or elderly. The measured pulse wave signal can be used to calculate the blood pressure using a calibration curve, which is constructed by a partial least squares (PLS) regression analysis using a reference blood pressure and the pulse wave signal. In this paper, we focus on the influence of individual differences on the blood pressure calculated from each calibration curve. In our study, the blood pressures calculated from the individual and overall calibration curves were compared, and our results show that the blood pressure calculated from the overall calibration curve had a lower measurement accuracy than that based on an individual calibration curve. We also found that the influence of individual differences on the calculated blood pressure when using the FBG sensor method was very low. Therefore, the FBG sensor method that we developed for measuring blood pressure was found to be suitable for use by many people.
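
    A minimal sketch of the calibration-curve step: a PLS regression mapping pulse waveforms to a reference blood pressure, shown with placeholder synthetic data. The waveform length, number of latent variables, and the data itself are assumptions, not values from the study.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# X: one row per measurement, each row a pulse waveform from the FBG sensor
# resampled to a fixed length; y: reference blood pressure. Both are placeholders.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 200))
y = 110.0 + 3.0 * X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=8)          # number of latent variables is a tuning choice
pls.fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()
rmse = np.sqrt(np.mean((y_hat - y_te) ** 2))
print(f"calibration-curve RMSE: {rmse:.1f} mmHg")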

  2. Precision measurement of refractive index of air based on laser synthetic wavelength interferometry with Edlén equation estimation.

    PubMed

    Yan, Liping; Chen, Benyong; Zhang, Enzheng; Zhang, Shihua; Yang, Ye

    2015-08-01

    A novel method for the precision measurement of the refractive index of air (n_air), based on combining laser synthetic wavelength interferometry with an Edlén equation estimation, is proposed. First, an estimate n_air_e is calculated from the modified Edlén equation using environmental parameters measured by low-precision sensors, with an uncertainty of 10^-6. Second, the unique integral fringe number N corresponding to n_air is determined from the calculated n_air_e. Then, the fractional fringe ε corresponding to n_air is obtained with high accuracy according to the principle of fringe subdivision in laser synthetic wavelength interferometry. Finally, a highly accurate measurement of n_air is achieved from the determined fringe numbers N and ε. The merit of the proposed method is that it not only removes the limitation that the measurement accuracy of n_air is set by the accuracies of the environmental sensors, but also avoids the complicated vacuum pumping needed to measure the integral fringe number N in conventional laser interferometry. To verify the feasibility of the proposed method, comparison experiments against the Edlén equation over short and long time scales were performed. Experimental results show that the measurement accuracy of n_air is better than 2.5 × 10^-8 in short-term tests and 6.2 × 10^-8 in long-term tests.
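
    The combination of the coarse Edlén estimate with the measured fractional fringe reduces to a simple integer-recovery step. The sketch below assumes the relation (n_air - 1) * L = (N + ε) * Λ_s for an air path L and synthetic wavelength Λ_s; the relation, path length, and numbers are illustrative assumptions, not the paper's exact optical configuration.

def refine_n_air(n_edlen, eps_fraction, path_mm, synthetic_wavelength_mm):
    """Combine the coarse Edlen estimate with the measured fractional fringe eps,
    assuming (n_air - 1) * L = (N + eps) * Lambda_s (a hedged, simplified relation)."""
    fringes_est = (n_edlen - 1.0) * path_mm / synthetic_wavelength_mm
    N = round(fringes_est - eps_fraction)        # unique integer while the Edlen error << 1 fringe
    return 1.0 + (N + eps_fraction) * synthetic_wavelength_mm / path_mm

# Example with assumed numbers: 300 mm air path, 50 um synthetic wavelength.
true_n = 1.00027130
L_mm, lam_s_mm = 300.0, 0.05
eps = ((true_n - 1.0) * L_mm / lam_s_mm) % 1.0          # what the interferometer would measure
print(refine_n_air(1.0002720, eps, L_mm, lam_s_mm))     # recovers ~1.00027130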

  3. Digital photography and transparency-based methods for measuring wound surface area.

    PubMed

    Bhedi, Amul; Saxena, Atul K; Gadani, Ravi; Patel, Ritesh

    2013-04-01

    The aim was to compare the linear, transparency, and photographic methods of wound surface area measurement, to identify a credible method for accurately monitoring the progress of wound healing, and to ascertain whether these methods differ significantly. From April 2005 to December 2006, 40 patients (30 men, 5 women, 5 children) admitted to the surgical ward of Shree Sayaji General Hospital, Baroda, had clean as well as infected wounds following trauma, debridement, pressure sores, venous ulcers, and incision and drainage. Wound surface areas were measured by the three methods (linear, transparency, and photographic) simultaneously on alternate days. The linear method differed statistically significantly from the transparency and photographic methods (P < 0.05), whereas there was no significant difference between the transparency and photographic methods (P > 0.05). The photographic and transparency methods provided measurements of wound surface area with equivalent results, with no statistically significant difference between the two.

  4. ConGEMs: Condensed Gene Co-Expression Module Discovery Through Rule-Based Clustering and Its Application to Carcinogenesis.

    PubMed

    Mallik, Saurav; Zhao, Zhongming

    2017-12-28

    For transcriptomic analysis, there are numerous microarray-based genomic data, especially those generated for cancer research. The typical analysis measures the difference between a cancer sample group and a matched control group for each transcript or gene. Association rule mining discovers interesting item sets through a rule-based methodology and is therefore well suited to finding causal relationships between transcripts. In this work, we introduce two new rule-based similarity measures, the weighted rank-based Jaccard and Cosine measures, and then propose a novel computational framework to detect condensed gene co-expression modules (ConGEMs) through an association rule-based learning system and the weighted similarity scores. In practice, the list of evolved condensed markers, which consists of both singular and complex markers, depends on the corresponding condensed gene sets in either the antecedent or the consequent of the rules of the resultant modules. In our evaluation, these markers could be supported by literature evidence, KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways and Gene Ontology annotations. Specifically, we first identified differentially expressed genes using an empirical Bayes test. A recently developed algorithm, RANWAR, was then utilized to determine the association rules from these genes. Based on that, we computed the integrated similarity scores of these rule-based similarity measures between each rule pair, and the resultant scores were used for clustering to identify the co-expressed rule modules. We applied our method to a gene expression dataset for lung squamous cell carcinoma and a genome methylation dataset for uterine cervical carcinogenesis. Our proposed module discovery method produced better results than traditional gene-module discovery measures. In summary, the proposed rule-based method is useful for exploring biomarker modules from transcriptomic data.
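
    The similarity step can be pictured with a simple rank-weighted Jaccard index between the gene sets of two rules. The 1/rank weighting used below is an illustrative choice and the gene names are placeholders; the paper's exact formula is not reproduced here.

def weighted_rank_jaccard(rule_a_genes, rule_b_genes, gene_rank):
    """Rank-weighted Jaccard similarity between the gene sets of two association
    rules. gene_rank maps gene -> rank (1 = most significant) and each gene is
    weighted by 1 / rank; this weighting is illustrative only."""
    a, b = set(rule_a_genes), set(rule_b_genes)
    weight = lambda g: 1.0 / gene_rank.get(g, len(gene_rank) + 1)
    intersection = sum(weight(g) for g in a & b)
    union = sum(weight(g) for g in a | b)
    return intersection / union if union else 0.0

rank = {"TP53": 1, "EGFR": 2, "KRAS": 3, "MYC": 4}      # placeholder gene ranks
print(weighted_rank_jaccard({"TP53", "EGFR"}, {"TP53", "KRAS"}, rank))   # ~0.55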

  5. Dimensional metrology of micro structure based on modulation depth in scanning broadband light interferometry

    NASA Astrophysics Data System (ADS)

    Zhou, Yi; Tang, Yan; Deng, Qinyuan; Zhao, Lixin; Hu, Song

    2017-08-01

    Three-dimensional measurement and inspection is an area of growing need and interest in many domains, such as integrated circuits (IC), medicine, and chemistry. Among the available methods, broadband light interferometry is widely utilized owing to its large measurement range, noncontact operation, and high precision. In this paper, we propose a spatial-modulation-depth-based method to retrieve the surface topography by analyzing the characteristics of both the frequency and spatial domains of the interferogram. Owing to the characteristics of the spatial modulation depth, the technique can effectively suppress the negative influence of light fluctuations and external disturbances. Both theory and experiments confirm that the proposed method greatly improves measurement stability and sensitivity while maintaining high precision. The technique achieves superior robustness and has the potential to be applied to online topography measurement.

  6. Reliability and validity of the AutoCAD software method in lumbar lordosis measurement

    PubMed Central

    Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe

    2011-01-01

    Objective The aim of this study was to determine the reliability and validity of the AutoCAD software method in lumbar lordosis measurement. Methods Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken of all participants, and the lordosis was measured according to the Cobb method. Afterward, the lumbar lordosis angle was measured via the AutoCAD software and flexible ruler methods. The study comprised 2 parts: intratester and intertester evaluations of reliability, as well as the validity of the flexible ruler and software methods. Results Based on the intraclass correlation coefficient, AutoCAD's reliability and validity in measuring lumbar lordosis were 0.984 and 0.962, respectively. Conclusions AutoCAD was shown to be a reliable and valid method to measure lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis. PMID:22654681

  7. MTF measurement of LCDs by a linear CCD imager: I. Monochrome case

    NASA Astrophysics Data System (ADS)

    Kim, Tae-hee; Choe, O. S.; Lee, Yun Woo; Cho, Hyun-Mo; Lee, In Won

    1997-11-01

    We construct a modulation transfer function (MTF) measurement system for an LCD using a linear charge-coupled device (CCD) imager. The MTF as used for optical systems cannot describe the combined effect of resolution and contrast on the image quality of a display. We therefore present a new measurement method based on the transmission properties of an LCD. The MTF is measured while the contrast and brightness levels are controlled. From the results, we show that the method is useful for describing the image quality. The new measurement method and its measurement conditions are described. To demonstrate its validity, the method is applied to compare the performance of two different LCDs.

  8. Regional fringe analysis for improving depth measurement in phase-shifting fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Chien, Kuang-Che Chang; Tu, Han-Yen; Hsieh, Ching-Huang; Cheng, Chau-Jern; Chang, Chun-Yen

    2018-01-01

    This study proposes a regional fringe analysis (RFA) method to detect the regions of a target object in the captured phase-shifted images and thereby improve depth measurement in phase-shifting fringe projection profilometry (PS-FPP). In the RFA method, region-based segmentation is exploited to segment the de-fringed image of a target object, and a multi-level fuzzy-based classification with five presented features is used to analyze and discriminate the regions of the object from the segmented regions, which are associated with explicit fringe information. In the experiments, the performance of the proposed method is tested and evaluated on 26 test cases made of five types of materials. The qualitative and quantitative results demonstrate that the proposed RFA method can effectively detect the desired regions of an object and improve depth measurement in the PS-FPP system.

  9. Evaluating the quality of a cell counting measurement process via a dilution series experimental design.

    PubMed

    Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng

    2017-12-01

    Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process. Published by Elsevier Inc.
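
    A minimal sketch of the kind of analysis a dilution series supports: replicate counts at each dilution level give per-level coefficients of variation (precision), and a zero-intercept fit of mean count against dilution fraction gives a simple proportionality index. This is a simplified stand-in for the paper's statistical analysis, with made-up counts.

import numpy as np

def dilution_series_quality(dilution_fractions, replicate_counts):
    """Per-level coefficients of variation (precision) and the R^2 of a
    zero-intercept fit of mean count vs. dilution fraction (proportionality)."""
    f = np.asarray(dilution_fractions, dtype=float)
    means = np.array([np.mean(c) for c in replicate_counts])
    cvs = np.array([np.std(c, ddof=1) / np.mean(c) for c in replicate_counts])
    slope = np.sum(f * means) / np.sum(f ** 2)          # least squares through the origin
    ss_res = np.sum((means - slope * f) ** 2)
    ss_tot = np.sum((means - means.mean()) ** 2)
    return cvs, 1.0 - ss_res / ss_tot

fractions = [1.0, 0.8, 0.6, 0.4, 0.2]                    # target dilution fractions
replicates = [[98, 102, 100], [81, 79, 83], [59, 62, 61], [41, 39, 40], [20, 22, 21]]
cvs, r2 = dilution_series_quality(fractions, replicates)
print(np.round(cvs, 3), round(r2, 4))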

  10. Use of visible, near-infrared, and thermal infrared remote sensing to study soil moisture

    NASA Technical Reports Server (NTRS)

    Blanchard, M. B.; Greeley, R.; Goettelman, R.

    1974-01-01

    Two methods are described which are used to estimate soil moisture remotely using the 0.4- to 14.0 micron wavelength region: (1) measurement of spectral reflectance, and (2) measurement of soil temperature. The reflectance method is based on observations which show that directional reflectance decreases as soil moisture increases for a given material. The soil temperature method is based on observations which show that differences between daytime and nighttime soil temperatures decrease as moisture content increases for a given material. In some circumstances, separate reflectance or temperature measurements yield ambiguous data, in which case these two methods may be combined to obtain a valid soil moisture determination. In this combined approach, reflectance is used to estimate low moisture levels; and thermal inertia (or thermal diffusivity) is used to estimate higher levels. The reflectance method appears promising for surface estimates of soil moisture, whereas the temperature method appears promising for estimates of near-subsurface (0 to 10 cm).

  11. Use of visible, near-infrared, and thermal infrared remote sensing to study soil moisture

    NASA Technical Reports Server (NTRS)

    Blanchard, M. B.; Greeley, R.; Goettelman, R.

    1974-01-01

    Two methods are used to estimate soil moisture remotely using the 0.4- to 14.0-micron wavelength region: (1) measurement of spectral reflectance, and (2) measurement of soil temperature. The reflectance method is based on observations which show that directional reflectance decreases as soil moisture increases for a given material. The soil temperature method is based on observations which show that differences between daytime and nighttime soil temperatures decrease as moisture content increases for a given material. In some circumstances, separate reflectance or temperature measurements yield ambiguous data, in which case these two methods may be combined to obtain a valid soil moisture determination. In this combined approach, reflectance is used to estimate low moisture levels; and thermal inertia (or thermal diffusivity) is used to estimate higher levels. The reflectance method appears promising for surface estimates of soil moisture, whereas the temperature method appears promising for estimates of near-subsurface (0 to 10 cm).

  12. Study on photoelectric parameter measurement method of high capacitance solar cell

    NASA Astrophysics Data System (ADS)

    Zhang, Junchao; Xiong, Limin; Meng, Haifeng; He, Yingwei; Cai, Chuan; Zhang, Bifeng; Li, Xiaohui; Wang, Changshi

    2018-01-01

    High-efficiency solar cells usually have a high capacitance characteristic, so measuring their photoelectric performance usually requires a long pulse width and a long sweep time. The effects of irradiance non-uniformity, probe shielding, and spectral mismatch on the IV curve measurement are analyzed experimentally. A compensation method for the irradiance loss caused by probe shielding is proposed, enabling accurate measurement of the irradiance during the IV curve measurement of a solar cell. Based on the fact that the open-circuit voltage of a solar cell is sensitive to the junction temperature, an accurate method for measuring the cell temperature under continuous irradiation is proposed. Finally, a measurement method with high accuracy and a wide application range for high-capacitance solar cells is presented.

  13. Structural health monitoring using DOG multi-scale space: an approach for analyzing damage characteristics

    NASA Astrophysics Data System (ADS)

    Guo, Tian; Xu, Zili

    2018-03-01

    Measurement noise is inevitable in practice; thus, it is difficult to identify defects, cracks or damage in a structure while simultaneously suppressing noise. In this work, a novel method is introduced to detect multiple damage in noisy environments. Based on multi-scale space analysis for discrete signals, a method for extracting damage characteristics from the measured displacement mode shape is illustrated. Moreover, the proposed method incorporates a data fusion algorithm to further eliminate interference from measurement noise. The effectiveness of the method is verified numerically and experimentally for different structural types. The results demonstrate two advantages of the proposed method. First, damage features are extracted from the difference of the multi-scale representations, which avoids the interference caused by noise amplification. Second, a data fusion technique applied to the proposed method provides a global decision, which retains the damage features while maximally eliminating the uncertainty. Monte Carlo simulations are used to validate that the proposed method has a higher accuracy in damage detection.
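
    The difference-of-Gaussians (DOG) idea behind the multi-scale feature can be sketched in a few lines: smooth the measured mode shape at two scales and subtract, so that the smooth global shape and the high-frequency noise are both attenuated while a localized irregularity stands out. The synthetic mode shape, scales, and damage location below are assumptions for illustration; the data fusion step is not shown.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def dog_damage_feature(mode_shape, sigma_fine=2.0, sigma_coarse=6.0):
    """Difference-of-Gaussians feature of a measured displacement mode shape:
    band-pass behaviour keeps localized irregularities while attenuating both
    the smooth global shape and high-frequency measurement noise."""
    return np.abs(gaussian_filter1d(mode_shape, sigma_fine) -
                  gaussian_filter1d(mode_shape, sigma_coarse))

# Synthetic first mode of a beam with a small local irregularity near x = 0.62 L.
x = np.linspace(0.0, 1.0, 400)
shape = np.sin(np.pi * x) + 0.03 * np.exp(-((x - 0.62) / 0.01) ** 2)
noisy = shape + np.random.default_rng(2).normal(scale=0.002, size=x.size)
feature = dog_damage_feature(noisy)
print(x[np.argmax(feature)])        # should fall close to 0.62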

  14. A sampling and classification item selection approach with content balancing.

    PubMed

    Chen, Pei-Hua

    2015-03-01

    Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in nonparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical-perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.

  15. Scatter measurement and correction method for cone-beam CT based on single grating scan

    NASA Astrophysics Data System (ADS)

    Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua

    2017-06-01

    In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of the reconstructed slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, the scanning method using a single grating and the design requirements of the grating are analyzed and determined. Second, by analyzing the composition of the object projection images and the object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid additional scans, an angle interpolation method for scatter images is proposed to reduce the scanning cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of the slices can still be controlled within 3%.

  16. Predictive Analytical Model for Isolator Shock-Train Location in a Mach 2.2 Direct-Connect Supersonic Combustion Tunnel

    NASA Astrophysics Data System (ADS)

    Lingren, Joe; Vanstone, Leon; Hashemi, Kelley; Gogineni, Sivaram; Donbar, Jeffrey; Akella, Maruthi; Clemens, Noel

    2016-11-01

    This study develops an analytical model for predicting the leading shock of a shock-train in the constant-area isolator section of a Mach 2.2 direct-connect scramjet simulation tunnel. The effective geometry of the isolator is assumed to be a weakly converging duct owing to boundary-layer growth. For a given pressure rise across the isolator, quasi-1D equations for isentropic and normal-shock flow can be used to predict the normal shock location in the isolator. The surface pressure distribution through the isolator was measured during experiments, and both the actual and predicted shock locations can be determined. Three methods of finding the shock-train location are examined: one based on the measured pressure rise, one using a non-physics-based control model, and one using the physics-based analytical model. It is shown that the analytical model performs better than the non-physics-based model in all cases. The analytical model is less accurate than the pressure threshold method but requires significantly less information to compute. In contrast to other methods for predicting shock-train location, this method is relatively accurate and requires as little as a single pressure measurement. This makes the method potentially useful for unstart control applications.
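
    The quasi-1D reasoning can be sketched as follows: the measured static pressure ratio fixes the pre-shock Mach number through the normal-shock relation, and the isentropic area-Mach relation then maps that Mach number to an axial station of the assumed weakly converging effective duct. The linear area taper, inflow Mach number, and pressure ratio below are illustrative assumptions, not the tunnel's calibrated effective geometry.

import numpy as np
from scipy.optimize import brentq

GAMMA = 1.4

def area_ratio(M):
    """Isentropic A/A* as a function of Mach number."""
    return (1.0 / M) * ((2.0 / (GAMMA + 1.0)) *
            (1.0 + 0.5 * (GAMMA - 1.0) * M ** 2)) ** ((GAMMA + 1.0) / (2.0 * (GAMMA - 1.0)))

def mach_before_normal_shock(p_ratio):
    """Pre-shock Mach number from the measured static pressure ratio p2/p1."""
    return np.sqrt(1.0 + (GAMMA + 1.0) / (2.0 * GAMMA) * (p_ratio - 1.0))

def leading_shock_location(p_ratio, M_in=2.2, L=1.0, taper=0.2):
    """Axial station (0..L) where a normal shock gives the measured pressure ratio,
    for an assumed linearly converging effective area A(x)/A_in = 1 - taper * x / L.
    The flow is assumed isentropic and supersonic upstream of the leading shock."""
    M1 = mach_before_normal_shock(p_ratio)
    target = area_ratio(M1) / area_ratio(M_in)          # required A(x)/A_in
    return brentq(lambda x: (1.0 - taper * x / L) - target, 0.0, L)

print(leading_shock_location(p_ratio=4.5))   # ~0.79 isolator lengths for these assumed numbers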

  17. Improving the dictionary lookup approach for disease normalization using enhanced dictionary and query expansion.

    PubMed

    Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie

    2016-01-01

    The rapidly increasing volume of biomedical literature calls for automatic approaches to the recognition and normalization of disease mentions, in order to increase the precision and effectiveness of disease-based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among them, conditional random fields (CRFs) and dictionary lookup are widely used for named entity recognition and normalization, respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions, and studied the effect of various techniques on improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and to compare them with other existing dictionary-lookup-based normalization methods. The best configuration achieved an F-measure of 0.77 for disease normalization, which outperformed the best dictionary-lookup-based baseline method studied in this work by an F-measure of 0.13. Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract. © The Author(s) 2016. Published by Oxford University Press.

  18. New spatial upscaling methods for multi-point measurements: From normal to p-normal

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Li, Xin

    2017-12-01

    Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimation (the least power estimation, LPE) were introduced into upscaling methods for multi-point measurements. Six methods are compared: three normal-based methods (arithmetic average, least squares estimation, and block kriging) and three p-normal-based methods (LPE, geostatistical LPE, and inverse-distance-weighted LPE). They are evaluated in two types of experiments: a synthetic experiment assessing the upscaling methods in terms of accuracy, stability, and robustness, and a real-world experiment producing upscaling estimates from soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques owing to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among the variance, the spatial correlation information, and the parameter p.
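
    The least power estimation at the heart of the p-normal methods is a one-parameter generalization of the familiar mean: it minimizes the sum of |x_i - m|^p over the point measurements. The sketch below shows a plain (unweighted) LPE; the geostatistical and inverse-distance-weighted variants in the paper add spatial weights that are not reproduced here, and the sample values and p are assumptions.

import numpy as np
from scipy.optimize import minimize_scalar

def least_power_estimate(values, p):
    """Least power estimation (LPE): the scalar m minimizing sum(|x_i - m|^p).
    p = 2 gives the arithmetic mean and p = 1 the median; other p values follow
    the p-normal (generalized Gaussian) assumption."""
    x = np.asarray(values, dtype=float)
    cost = lambda m: np.sum(np.abs(x - m) ** p)
    return minimize_scalar(cost, bounds=(x.min(), x.max()), method="bounded").x

point_obs = [0.21, 0.23, 0.22, 0.24, 0.55]     # point soil-moisture values with one outlier
print(np.mean(point_obs), least_power_estimate(point_obs, p=1.2))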

  19. Determination of the human spine curve based on laser triangulation.

    PubMed

    Poredoš, Primož; Čelan, Dušan; Možina, Janez; Jezeršek, Matija

    2015-02-05

    The main objective of the present method was to automatically obtain a spatial curve of the thoracic and lumbar spine based on a 3D shape measurement of a human torso with developed scoliosis. Manual determination of the spine curve, based on palpation of the thoracic and lumbar spinous processes, was found to be an appropriate way to validate the method. A new, noninvasive, optical 3D method for human torso evaluation in medical practice is therefore introduced. Twenty-four patients with a confirmed clinical diagnosis of scoliosis were scanned using a specially developed 3D laser profilometer. The measuring principle of the system is based on laser triangulation with one-laser-plane illumination. The measurement took approximately 10 seconds for 700 mm of longitudinal translation along the back, and the single-point measurement accuracy was 0.1 mm. Computer analysis of the measured surface returned two 3D curves: the first determined by manual marking (manual curve) and the second by detecting surface curvature extremes (automatic curve). The manual and automatic curves were compared via the root mean square deviation (RMSD) for each patient. The intra-operator study assessed 20 successive measurements of the same person, and the inter-operator study assessed measurements from 8 operators. The results for the 24 patients showed that the typical RMSD between the manual and automatic curves was 5.0 mm in the frontal plane and 1.0 mm in the sagittal plane, which is a good result compared with palpatory accuracy (9.8 mm). The intra-operator repeatability of the presented method in the frontal and sagittal planes was 0.45 mm and 0.06 mm, respectively. The inter-operator repeatability assessment shows that the presented method is invariant to the operator of the computer program. The main novelty of the paper is the development of a new, non-contact method that provides a quick, precise, and non-invasive way to determine the spatial spine curve in patients with developed scoliosis, together with its validation against palpation of the spinous processes; no harmful ionizing radiation is involved.

  20. Comparing Methods for UAV-Based Autonomous Surveillance

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Harris, Robert; Shafto, Michael

    2004-01-01

    We describe an approach to evaluating algorithmic and human performance in directing UAV-based surveillance. Its key elements are a decision-theoretic framework for measuring the utility of a surveillance schedule and an evaluation testbed consisting of 243 scenarios covering a well-defined space of possible missions. We apply this approach to two example UAV-based surveillance methods, a TSP-based algorithm and a human-directed approach, and then compare them to identify general strengths and weaknesses of each method.

  1. Noncontact methods for measuring water-surface elevations and velocities in rivers: Implications for depth and discharge extraction

    USGS Publications Warehouse

    Nelson, Jonathan M.; Kinzel, Paul J.; McDonald, Richard R.; Schmeeckle, Mark

    2016-01-01

    Recently developed optical and videographic methods for measuring water-surface properties in a noninvasive manner hold great promise for extracting river hydraulic and bathymetric information. This paper describes such a technique, concentrating on the method of infrared videography for measuring surface velocities and both acoustic (laboratory-based) and laser-scanning (field-based) techniques for measuring water-surface elevations. In ideal laboratory situations with simple flows, appropriate spatial and temporal averaging results in accurate water-surface elevations and water-surface velocities. In test cases, this accuracy is sufficient to allow direct inversion of the governing equations of motion to produce estimates of depth and discharge. Unlike other optical techniques for determining local depth that rely on transmissivity of the water column (bathymetric lidar, multi/hyperspectral correlation), this method uses only water-surface information, so even deep and/or turbid flows can be investigated. However, significant errors arise in areas of nonhydrostatic spatial accelerations, such as those associated with flow over bedforms or other relatively steep obstacles. Using laboratory measurements for test cases, the cause of these errors is examined and both a simple semi-empirical method and computational results are presented that can potentially reduce bathymetric inversion errors.

  2. Mass Measurements with the CSS2 and CIME cyclotrons at GANIL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gomez Hornillos, M. B.; Chartier, M.; Demonchy, C. E.

    2006-03-13

    This paper presents two original direct mass-measurement techniques developed at GANIL using the CSS2 and CIME cyclotrons as high-resolution mass spectrometers. The mass measurement with the CSS2 cyclotron is based on a time-of-flight method along the spiral trajectory of the ions inside the cyclotron. The atomic mass excesses of 68Se and 80Y recently measured with this technique are -53.958(246) MeV and -60.971(180) MeV, respectively. The new mass-measurement technique with the CIME cyclotron is based on the sweep of the acceleration radio-frequency of the cyclotron. Tests with stable beams have been performed in order to study the accuracy of this new mass-measurement method and to understand the systematic errors.

  3. Lumbar joint torque estimation based on simplified motion measurement using multiple inertial sensors.

    PubMed

    Miyajima, Saori; Tanaka, Takayuki; Imamura, Yumeko; Kusaka, Takashi

    2015-01-01

    We estimate lumbar torque based on motion measurement using only three inertial sensors. First, human motion is measured by 6-axis motion tracking devices, each combining a 3-axis accelerometer and a 3-axis gyroscope, placed on the shank, thigh, and back. Next, the lumbar joint torque during the motion is estimated by kinematic musculoskeletal simulation. The conventional method for estimating joint torque uses full-body motion data measured by an optical motion capture system; in this research, however, the joint torque is estimated using only the three link angles of the body, thigh, and shank. The utility of our method was verified by experiments in which the motions of bending the knee and waist were measured simultaneously. As a result, we were able to estimate the lumbar joint torque from the measured motion.

  4. An improved method for measuring the magnetic inhomogeneity shift in hydrogen masers

    NASA Technical Reports Server (NTRS)

    Reinhardt, V. S.; Peters, H. E.

    1975-01-01

    The reported method makes it possible to conduct all maser frequency measurements under conditions of low magnetic field intensity for which the hydrogen maser is most stable. Aspects concerning the origin of the magnetic inhomogeneity shift are examined and the available approaches for measuring this shift are considered, taking into account certain drawbacks of currently used methods. An approach free of these drawbacks can be based on the measurement of changes in a parameter representing the difference between the number of atoms in the involved states.

  5. Time signal distribution in communication networks based on synchronous digital hierarchy

    NASA Technical Reports Server (NTRS)

    Imaoka, Atsushi; Kihara, Masami

    1993-01-01

    A new method that uses round-trip paths to accurately measure transmission delay for time synchronization is proposed, and its performance in Synchronous Digital Hierarchy (SDH) networks is discussed. The feature of this method is that it separately measures the initial round-trip path delay and the variations in round-trip path delay. The delay generated in SDH equipment is determined by measuring the initial round-trip path delay. In an experiment with actual SDH equipment, the error of the initial delay measurement was suppressed to 30 ns.

  6. Interplay between past market correlation structure changes and future volatility outbursts.

    PubMed

    Musmeci, Nicoló; Aste, Tomaso; Di Matteo, T

    2016-11-18

    We report significant relations between past changes in the market correlation structure and future changes in the market volatility. This relation is made evident by using a measure of "correlation structure persistence" on correlation-based information filtering networks that quantifies the rate of change of the market dependence structure. We also measured changes in the correlation structure by means of a "metacorrelation" that measures a lagged correlation between correlation matrices computed over different time windows. Both methods show a deep interplay between past changes in correlation structure and future changes in volatility and we demonstrate they can anticipate market risk variations and this can be used to better forecast portfolio risk. Notably, these methods overcome the curse of dimensionality that limits the applicability of traditional econometric tools to portfolios made of a large number of assets. We report on forecasting performances and statistical significance of both methods for two different equity datasets. We also identify an optimal region of parameters in terms of True Positive and False Positive trade-off, through a ROC curve analysis. We find that this forecasting method is robust and it outperforms logistic regression predictors based on past volatility only. Moreover the temporal analysis indicates that methods based on correlation structural persistence are able to adapt to abrupt changes in the market, such as financial crises, more rapidly than methods based on past volatility.

  7. Interplay between past market correlation structure changes and future volatility outbursts

    NASA Astrophysics Data System (ADS)

    Musmeci, Nicoló; Aste, Tomaso; Di Matteo, T.

    2016-11-01

    We report significant relations between past changes in the market correlation structure and future changes in the market volatility. This relation is made evident by using a measure of “correlation structure persistence” on correlation-based information filtering networks that quantifies the rate of change of the market dependence structure. We also measured changes in the correlation structure by means of a “metacorrelation” that measures a lagged correlation between correlation matrices computed over different time windows. Both methods show a deep interplay between past changes in correlation structure and future changes in volatility and we demonstrate they can anticipate market risk variations and this can be used to better forecast portfolio risk. Notably, these methods overcome the curse of dimensionality that limits the applicability of traditional econometric tools to portfolios made of a large number of assets. We report on forecasting performances and statistical significance of both methods for two different equity datasets. We also identify an optimal region of parameters in terms of True Positive and False Positive trade-off, through a ROC curve analysis. We find that this forecasting method is robust and it outperforms logistic regression predictors based on past volatility only. Moreover the temporal analysis indicates that methods based on correlation structural persistence are able to adapt to abrupt changes in the market, such as financial crises, more rapidly than methods based on past volatility.

  8. Interplay between past market correlation structure changes and future volatility outbursts

    PubMed Central

    Musmeci, Nicoló; Aste, Tomaso; Di Matteo, T.

    2016-01-01

    We report significant relations between past changes in the market correlation structure and future changes in the market volatility. This relation is made evident by using a measure of “correlation structure persistence” on correlation-based information filtering networks that quantifies the rate of change of the market dependence structure. We also measured changes in the correlation structure by means of a “metacorrelation” that measures a lagged correlation between correlation matrices computed over different time windows. Both methods show a deep interplay between past changes in correlation structure and future changes in volatility and we demonstrate they can anticipate market risk variations and this can be used to better forecast portfolio risk. Notably, these methods overcome the curse of dimensionality that limits the applicability of traditional econometric tools to portfolios made of a large number of assets. We report on forecasting performances and statistical significance of both methods for two different equity datasets. We also identify an optimal region of parameters in terms of True Positive and False Positive trade-off, through a ROC curve analysis. We find that this forecasting method is robust and it outperforms logistic regression predictors based on past volatility only. Moreover the temporal analysis indicates that methods based on correlation structural persistence are able to adapt to abrupt changes in the market, such as financial crises, more rapidly than methods based on past volatility. PMID:27857144

  9. [New assessment scale based on the type of person desired by an employer].

    PubMed

    Sasaki, Kenichi; Toyoda, Hideki

    2011-10-01

    In many cases, aptitude tests used in the hiring process fail to connect the measurement scale with the emotional type of the person desired by an employer. This experimental study introduced a new measuring method in which the measurement scale can be adjusted according to the type of person an employer is seeking. The effectiveness of this method was then verified by comparing the results of an aptitude test utilizing the method with the results of the typical hiring process.

  10. Absolute method of measuring magnetic susceptibility

    USGS Publications Warehouse

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. Because the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample, the susceptibility can be measured simply without recourse to a standard sample. Typical results for a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.

  11. Structure-preserving interpolation of temporal and spatial image sequences using an optical flow-based method.

    PubMed

    Ehrhardt, J; Säring, D; Handels, H

    2007-01-01

    Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. However, the spatial and temporal resolution of such devices is limited, and therefore image interpolation techniques are needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm are demonstrated on synthetic images. A population of 17 temporal and spatial image sequences is used to compare the optical flow-based interpolation method with linear and shape-based interpolation. The quantitative results show that the optical flow-based method outperforms linear and shape-based interpolation with statistical significance. The presented interpolation method is able to generate image sequences with the spatial or temporal resolution needed for image comparison, analysis, or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method has clear advantages over linear and shape-based interpolation.

  12. Developing a Measure of Wealth for Primary Student Families in a Developing Country: Comparison of Two Methods of Psychometric Calibration

    ERIC Educational Resources Information Center

    Griffin, Patrick

    2005-01-01

    This article compares the invariance properties of two methods of psychometric instrument calibration for the development of a measure of wealth among families of Grade 5 pupils in five provinces in Vietnam. The measure is based on self-reported lists of possessions in the home. Its stability has been measured over two time periods. The concept of…

  13. NEMA image quality phantom measurements and attenuation correction in integrated PET/MR hybrid imaging.

    PubMed

    Ziegler, Susanne; Jakoby, Bjoern W; Braun, Harald; Paulus, Daniel H; Quick, Harald H

    2015-12-01

    In integrated PET/MR hybrid imaging, the evaluation of PET performance characteristics according to the NEMA standard NU 2-2007 is challenging because of incomplete MR-based attenuation correction (AC) for phantom imaging. In this study, a strategy for CT-based AC of the NEMA image quality (IQ) phantom is assessed and systematically evaluated in NEMA IQ phantom measurements on an integrated PET/MR system. NEMA IQ measurements were performed on the integrated 3.0 Tesla PET/MR hybrid system (Biograph mMR, Siemens Healthcare). AC of the NEMA IQ phantom was realized by an MR-based and by a CT-based method. The suggested CT-based AC uses a template μ-map of the NEMA IQ phantom and a phantom holder for exact repositioning of the phantom on the system's patient table. The PET image quality parameters contrast recovery, background variability, and signal-to-noise ratio (SNR) were determined and compared for both phantom AC methods. The reconstruction parameters of an iterative 3D OP-OSEM reconstruction were optimized for the highest lesion SNR in NEMA IQ phantom imaging. Using a CT-based NEMA IQ phantom μ-map on the PET/MR system is straightforward and allowed accurate NEMA IQ measurements to be performed on the hybrid system. MR-based AC was found to be insufficient for PET quantification in the tested NEMA IQ phantom because only the photon attenuation caused by the MR-visible phantom filling, and not that of the phantom housing, is considered. Using the suggested CT-based AC, the highest SNR in this phantom experiment for small lesions (<= 13 mm) was obtained with 3 iterations, 21 subsets, and 4 mm Gaussian filtering. This study suggests CT-based AC for the NEMA IQ phantom when performing PET NEMA IQ measurements on an integrated PET/MR hybrid system. The superiority of CT-based AC for this phantom is demonstrated by comparison with measurements using MR-based AC. Furthermore, optimized PET image reconstruction parameters are provided for the highest lesion SNR in NEMA IQ phantom measurements.

  14. Josephson frequency meter for millimeter and submillimeter wavelengths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anischenko, S.E.; Larkin, S.Y.; Chaikovsky, V.I.

    1994-12-31

    Frequency measurements of electromagnetic oscillations in the millimeter and submillimeter wavebands become increasingly difficult as the frequency grows, for a number of reasons. First, these frequencies are near the cutoff of semiconductor converting devices, so optical measurement methods have to be used instead of the traditional frequency-conversion techniques. Second, resonance measurement methods are limited to relatively narrow bands, while optical methods are limited in frequency and time resolution by the limited range and speed of movement of their mechanical elements; moreover, the efficiency of these optical techniques decreases with increasing wavelength because of diffraction losses. This requires a priori information on the radiation frequency band of the source involved. A method of measuring the frequency of harmonic microwave signals in the millimeter and submillimeter wavebands based on the ac Josephson effect in superconducting contacts is free of all the above drawbacks. This approach offers a number of major advantages over the more traditional measurement methods, i.e., those based on frequency conversion, resonance, and interferometric techniques. It is characterized by high potential accuracy, a wide range of measurable frequencies, prompt measurement, the possibility of a panoramic display of the results, and full automation of the measuring process.

  15. Total ozone observation by sun photometry at Arosa, Switzerland

    NASA Astrophysics Data System (ADS)

    Staehelin, Johannes; Schill, Herbert; Hoegger, Bruno; Viatte, Pierre; Levrat, Gilbert; Gamma, Adrian

    1995-07-01

    The method used for ground-based total ozone observations and the design of the two instruments used to monitor atmospheric total ozone at Arosa (Dobson spectrophotometer and Brewer spectrometer) are briefly described. Two different procedures for the calibration of the Dobson spectrophotometer, both based on the Langley plot method, are presented. Data quality problems that occurred in recent years in the measurements of one Dobson instrument at Arosa are discussed, and two different methods to reassess the total ozone observations are compared. Two partially automated Dobson spectrophotometers and two completely automated Brewer spectrometers are currently in operation at Arosa. Careful comparison of the results of the measurements of the different instruments yields valuable information on possible small long-term drifts of the instruments involved in the operational measurements.

  16. A framework for the meta-analysis of Bland-Altman studies based on a limits of agreement approach.

    PubMed

    Tipton, Elizabeth; Shuster, Jonathan

    2017-10-15

    Bland-Altman method comparison studies are common in the medical sciences and are used to compare a new measure to a gold-standard (often costlier or more invasive) measure. The distribution of these differences is summarized by two statistics, the 'bias' and standard deviation, and these measures are combined to provide estimates of the limits of agreement (LoA). When these LoA are within the bounds of clinically insignificant differences, the new non-invasive measure is preferred. Very often, multiple Bland-Altman studies have been conducted comparing the same two measures, and random-effects meta-analysis provides a means to pool these estimates. We provide a framework for the meta-analysis of Bland-Altman studies, including methods for estimating the LoA and measures of uncertainty (i.e., confidence intervals). Importantly, these LoA are likely to be wider than those typically reported in Bland-Altman meta-analyses. Frequently, Bland-Altman studies report results based on repeated measures designs but do not properly adjust for this design in the analysis. Meta-analyses of Bland-Altman studies frequently exclude these studies for this reason. We provide a meta-analytic approach that allows inclusion of estimates from these studies. This includes adjustments to the estimate of the standard deviation and a method for pooling the estimates based upon robust variance estimation. An example is included based on a previously published meta-analysis. Copyright © 2017 John Wiley & Sons, Ltd.
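
    For orientation, the single-study quantities being pooled are simple to compute. The sketch below gives the bias, SD, 95% limits of agreement, and approximate confidence intervals for one study with one observation per subject; the meta-analytic pooling and the repeated-measures adjustment described above are not reproduced, and the data are synthetic.

import numpy as np
from scipy import stats

def limits_of_agreement(new, gold, alpha=0.05):
    """Single-study Bland-Altman summary: bias, SD of the differences, 95% limits
    of agreement and approximate confidence intervals for those limits (one
    observation per subject; no repeated-measures adjustment)."""
    d = np.asarray(new, dtype=float) - np.asarray(gold, dtype=float)
    n = d.size
    bias, sd = d.mean(), d.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    se_loa = sd * np.sqrt(1.0 / n + 1.96 ** 2 / (2.0 * (n - 1)))   # Bland & Altman approximation
    t = stats.t.ppf(1.0 - alpha / 2.0, n - 1)
    return {"bias": bias, "sd": sd, "loa": loa,
            "loa_ci": ((loa[0] - t * se_loa, loa[0] + t * se_loa),
                       (loa[1] - t * se_loa, loa[1] + t * se_loa))}

rng = np.random.default_rng(3)
gold = rng.normal(100.0, 10.0, size=40)
new = gold + rng.normal(1.5, 4.0, size=40)     # synthetic new method: bias ~1.5, SD ~4
print(limits_of_agreement(new, gold))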

  17. [Measurements of the concentration of atmospheric CO2 based on OP/FTIR method and infrared reflecting scanning Fourier transform spectrometry].

    PubMed

    Wei, Ru-Yi; Zhou, Jin-Song; Zhang, Xue-Min; Yu, Tao; Gao, Xiao-Hui; Ren, Xiao-Qiang

    2014-11-01

    The present paper describes observations and measurements of the infrared absorption spectra of CO2 at the Earth's surface with the OP/FTIR (open-path Fourier transform infrared) method, employing a mid-infrared reflecting scanning Fourier transform spectrometer; these are the first results produced by the first prototype of this instrument in China, developed by the authors' team. The reflecting scanning Fourier transform spectrometer works in the spectral range 2100-3150 cm^-1 with a spectral resolution of 2 cm^-1. A method to measure atmospheric molecules is described, and a mathematical derivation and quantitative algorithms to retrieve molecular concentrations are established. The retrievals were performed both by a direct method based on the Beer-Lambert law and by a simulating-fitting method based on the HITRAN database and the instrument functions, and concentrations of CO2 were retrieved by the two models. The results of the observations and modeling analyses indicate that the concentrations range from 300 to 370 ppm and, following the variation of the environment, first decrease slowly and then increase rapidly during the observation period, reaching low points in the afternoon and around sunset. The time series of concentrations retrieved by the direct method and by the simulating-fitting method agree very well: the correlation over all data reaches 99.79%, and the relative error is no more than 2.00%. The retrieval precision is relatively high. The results of this paper demonstrate that, in the field of detecting atmospheric compositions, the OP/FTIR method implemented with the infrared reflecting scanning Fourier transform spectrometer is a feasible and effective technical approach, and either the direct method or the simulating-fitting method is capable of retrieving concentrations with high precision.
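
    The direct method reduces to the Beer-Lambert law once an effective absorption cross-section for the analysed band at the instrument resolution is available. The sketch below converts a band absorbance to a CO2 mixing ratio under that assumption; the cross-section, path length, and absorbance values are illustrative, not the paper's, and in practice the cross-section would come from HITRAN convolved with the instrument line shape.

K_B = 1.380649e-23   # Boltzmann constant, J/K

def co2_ppm_from_absorbance(absorbance, sigma_cm2, path_m, T_K=296.0, p_Pa=101325.0):
    """Direct Beer-Lambert retrieval: absorbance (natural log, ln(I0/I)) equals
    sigma * N_CO2 * L, with sigma an assumed effective band cross-section
    (cm^2/molecule) at the instrument resolution."""
    path_cm = path_m * 100.0
    n_co2 = absorbance / (sigma_cm2 * path_cm)       # CO2 number density, molecules/cm^3
    n_air = p_Pa / (K_B * T_K) * 1e-6                # total number density, molecules/cm^3
    return n_co2 / n_air * 1e6                       # mixing ratio in ppm

# Example with assumed numbers: 200 m open path, effective sigma = 5e-21 cm^2.
print(co2_ppm_from_absorbance(absorbance=0.88, sigma_cm2=5e-21, path_m=200.0))   # ~355 ppm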

  18. Adaptive thresholding and dynamic windowing method for automatic centroid detection of digital Shack-Hartmann wavefront sensor.

    PubMed

    Yin, Xiaoming; Li, Xiang; Zhao, Liping; Fang, Zhongping

    2009-11-10

    A Shack-Hartmann wavefront sensor (SHWS) splits the incident wavefront into many subsections and transforms distorted-wavefront detection into a centroid measurement, so the accuracy of the centroid measurement determines the accuracy of the SHWS. Many methods have been presented to improve the accuracy of the wavefront centroid measurement. However, most of these methods are discussed from the point of view of optics, based on the assumption that the spot intensity of the SHWS has a Gaussian distribution, which is not applicable to the digital SHWS. In this paper, we present a centroid measurement algorithm based on adaptive thresholding and dynamic windowing, utilizing image processing techniques, for practical application of the digital SHWS in surface profile measurement. The method can detect the centroid of each focal spot precisely and robustly by eliminating the influence of various noise sources, such as diffraction of the digital SHWS, unevenness and instability of the light source, as well as deviation between the centroid of the focal spot and the center of the detection area. The experimental results demonstrate that the algorithm has better precision, repeatability, and stability than other commonly used centroid methods, such as the statistical averaging, thresholding, and windowing algorithms.
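
    A minimal sketch of the two ideas named in the title: an intensity threshold adapted to the frame statistics and a window placed dynamically around the brightest pixel before the intensity-weighted centroid is taken. The threshold rule, window size, and synthetic spot below are assumptions; the paper's full noise handling is not reproduced.

import numpy as np

def spot_centroid(subaperture, window_half=8):
    """Centroid of one Shack-Hartmann focal spot: threshold adapted to the frame
    statistics (mean + 3 sigma) and a window placed dynamically around the
    brightest pixel before taking the intensity-weighted centroid."""
    img = subaperture.astype(float)
    threshold = img.mean() + 3.0 * img.std()
    peak_r, peak_c = np.unravel_index(np.argmax(img), img.shape)
    r0, r1 = max(0, peak_r - window_half), min(img.shape[0], peak_r + window_half + 1)
    c0, c1 = max(0, peak_c - window_half), min(img.shape[1], peak_c + window_half + 1)
    win = np.clip(img[r0:r1, c0:c1] - threshold, 0.0, None)
    total = win.sum()
    if total == 0.0:
        return float(peak_r), float(peak_c)
    rows, cols = np.mgrid[r0:r1, c0:c1]
    return (rows * win).sum() / total, (cols * win).sum() / total

# Synthetic spot centred at (20.3, 17.7) on a noisy 40 x 40 subaperture.
yy, xx = np.mgrid[0:40, 0:40]
spot = 200.0 * np.exp(-(((yy - 20.3) ** 2 + (xx - 17.7) ** 2) / (2.0 * 2.0 ** 2)))
spot += np.random.default_rng(4).normal(5.0, 1.0, size=spot.shape)
print(spot_centroid(spot))        # close to (20.3, 17.7)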

  19. A Comparative Study of the Applied Methods for Estimating Deflection of the Vertical in Terrestrial Geodetic Measurements

    PubMed Central

    Vittuari, Luca; Tini, Maria Alessandra; Sarti, Pierguido; Serantoni, Eugenio; Borghi, Alessandra; Negusini, Monia; Guillaume, Sébastien

    2016-01-01

    This paper compares three different methods capable of estimating the deflection of the vertical (DoV): the first is based on the joint use of high-precision spirit leveling and Global Navigation Satellite Systems (GNSS), the second uses astro-geodetic measurements, and the third uses gravimetric geoid models. The working data sets refer to the geodetic International Terrestrial Reference Frame (ITRF) co-location sites of Medicina (Northern Italy) and Noto (Sicily), which are excellent test beds for our investigations. The measurements were planned and realized to estimate the DoV with a level of precision comparable to the angular accuracy achievable in high-precision networks measured by modern high-end total stations. The three methods are in excellent agreement, with the astro-geodetic method operationally superior, being faster and more precise than the others. The method that combines leveling and GNSS has slightly larger standard deviations, although well within the 1 arcsec level that was assumed as a threshold. Finally, the geoid-model-based method, whose 2.5 arcsec standard deviations exceed this threshold, is also statistically consistent with the others and should be used to determine the DoV components where local ad hoc measurements are lacking. PMID:27104544

  20. A method for measuring the inertia properties of rigid bodies

    NASA Astrophysics Data System (ADS)

    Gobbi, M.; Mastinu, G.; Previati, G.

    2011-01-01

    A method for the measurement of the inertia properties of rigid bodies is presented. Given a rigid body and its mass, the method allows the centre of gravity location and the inertia tensor to be measured (identified) in a single test. The proposed technique is based on the analysis of the free motion of a multi-cable pendulum to which the body under consideration is connected. The motion of the pendulum and the forces acting on the system are recorded, and the inertia properties are identified by means of a mathematical procedure based on least squares estimation. After the body is positioned on the test rig, the full identification procedure takes less than 10 min. The natural frequencies of the pendulum and the accelerations involved are quite low, making this method suitable for many practical applications. In this paper, the proposed method is described and two test rigs are presented: the first developed for bodies up to 3500 kg and the second for bodies up to 400 kg. A validation of the measurement method is performed with satisfactory results. The test rig holds a third-party quality certificate according to the ISO 9001 standard and could be scaled up to measure the inertia properties of very large bodies, such as trucks, airplanes or even ships.
