Sample records for maximum measurement error

  1. Predicting tropical cyclone intensity using satellite measured equivalent blackbody temperatures of cloud tops [regression analysis]

    NASA Technical Reports Server (NTRS)

    Gentry, R. C.; Rodgers, E.; Steranka, J.; Shenk, W. E.

    1978-01-01

    A regression technique was developed to forecast 24 hour changes of the maximum winds for weak (maximum winds less than or equal to 65 kt) and strong (maximum winds greater than 65 kt) tropical cyclones by utilizing satellite measured equivalent blackbody temperatures around the storm, both alone and together with the changes in maximum winds during the preceding 24 hours and the current maximum winds. Independent testing of these regression equations shows that the mean errors made by the equations are lower than the errors in forecasts made by persistence techniques.
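
    A minimal sketch of the regression idea using synthetic data and hypothetical predictor names (mean blackbody temperature, prior 24-h wind change, current maximum wind); the paper's actual predictors and coefficients are not reproduced here:

    ```python
    import numpy as np

    # Ordinary least squares relating the next-24-h change in maximum wind
    # to a blackbody-temperature predictor plus the two persistence terms
    # named in the abstract. All data below are invented for illustration.
    rng = np.random.default_rng(0)
    n = 200
    t_bb   = rng.normal(-60.0, 10.0, n)    # mean equivalent blackbody temperature (deg C)
    dv_m24 = rng.normal(0.0, 15.0, n)      # wind change over the preceding 24 h (kt)
    v_now  = rng.uniform(20.0, 120.0, n)   # current maximum wind (kt)
    dv_p24 = 0.3 * dv_m24 - 0.5 * (t_bb + 60.0) + 0.05 * v_now + rng.normal(0.0, 5.0, n)

    X = np.column_stack([np.ones(n), t_bb, dv_m24, v_now])
    coef, *_ = np.linalg.lstsq(X, dv_p24, rcond=None)   # fit regression coefficients
    forecast = X @ coef
    print("mean absolute forecast error (kt):",
          round(float(np.mean(np.abs(forecast - dv_p24))), 2))
    ```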

  2. Multipath induced errors in meteorological Doppler/interferometer location systems

    NASA Technical Reports Server (NTRS)

    Wallace, R. G.

    1984-01-01

    One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.

  3. Assessment of Spectral Doppler in Preclinical Ultrasound Using a Small-Size Rotating Phantom

    PubMed Central

    Yang, Xin; Sun, Chao; Anderson, Tom; Moran, Carmel M.; Hadoke, Patrick W.F.; Gray, Gillian A.; Hoskins, Peter R.

    2013-01-01

    Preclinical ultrasound scanners are used to measure blood flow in small animals, but the potential errors in blood velocity measurements have not been quantified. This investigation rectifies this omission through the design and use of phantoms and evaluation of measurement errors for a preclinical ultrasound system (Vevo 770, Visualsonics, Toronto, ON, Canada). A ray model of geometric spectral broadening was used to predict velocity errors. A small-scale rotating phantom, made from tissue-mimicking material, was developed. True and Doppler-measured maximum velocities of the moving targets were compared over a range of angles from 10° to 80°. Results indicate that the maximum velocity was overestimated by up to 158% by spectral Doppler. There was good agreement (<10%) between theoretical velocity errors and measured errors for beam-target angles of 50°–80°. However, for angles of 10°–40°, the agreement was not as good (>50%). The phantom is capable of validating the performance of blood velocity measurement in preclinical ultrasound. PMID:23711503
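
    A hedged sketch of one common ray model of geometric spectral broadening (the aperture-edge ray argument): the highest Doppler shift comes from the edge of the aperture rather than the beam axis, so correcting the maximum frequency with the axial angle overestimates velocity. The aperture half-width and focal depth below are assumptions, not the Vevo 770's values:

    ```python
    import numpy as np

    # Fractional overestimation of maximum velocity under the edge-ray model:
    #   err = cos(theta - alpha) / cos(theta) - 1,  alpha = arctan(a / F)
    # where theta is the beam-target angle, a the aperture half-width and
    # F the focal depth. Numbers are illustrative only.
    a, F = 3e-3, 15e-3                       # assumed aperture half-width, focal depth (m)
    alpha = np.arctan(a / F)
    for theta_deg in range(10, 81, 10):
        theta = np.radians(theta_deg)
        err = np.cos(theta - alpha) / np.cos(theta) - 1.0
        print(f"beam-target angle {theta_deg:2d} deg: overestimation ~ {100*err:5.1f} %")
    ```

    The model reproduces the qualitative trend in the abstract: the overestimation grows steeply with beam-target angle.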

  4. Simulation on measurement of five-DOF motion errors of high precision spindle with cylindrical capacitive sensor

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Wang, Wen; Xiang, Kui; Lu, Keqing; Fan, Zongwei

    2015-02-01

    This paper describes a novel cylindrical capacitive sensor (CCS) to measure the spindle five degree-of-freedom (DOF) motion errors. The operating principle and mathematical models of the CCS are presented. Using Ansoft Maxwell software to calculate the capacitances in different configurations, the structural parameters of the end face electrode are then investigated. Radial, axial and tilt motions are also simulated by comparing the given displacements with the simulation values. It was found that the proposed CCS has a high accuracy for measuring radial motion error when the average eccentricity is about 15 μm. In addition, the maximum relative error of the axial displacement is 1.3% when the axial motion is within [0.7, 1.3] mm, and the maximum relative error of the tilt displacement is 1.6% as the rotor tilts around a single axis within [-0.6, 0.6]°. Finally, the feasibility of the CCS for measuring five DOF motion errors is verified through simulation and analysis.

  5. Measurement Model Specification Error in LISREL Structural Equation Models.

    ERIC Educational Resources Information Center

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  6. Synopsis of timing measurement techniques used in telecommunications

    NASA Technical Reports Server (NTRS)

    Zampetti, George

    1993-01-01

    Historically, Maximum Time Interval Error (MTIE) and Maximum Relative Time Interval Error (MRTIE) have been the main measurement techniques used to characterize timing performance in telecommunications networks. Recently, a new measurement technique, Time Variance (TVAR), has gained acceptance in the North American (ANSI) standards body. TVAR was developed in concurrence with NIST to address certain inadequacies in the MTIE approach. The advantages and disadvantages of each of these approaches are described. Real measurement examples are presented to illustrate the critical issues in actual telecommunication applications. Finally, a new MTIE measurement is proposed (ZTIE) that complements TVAR. Together, TVAR and ZTIE provide a very good characterization of network timing.
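
    A minimal sketch of an MTIE computation under its usual definition, the largest peak-to-peak excursion of the time-error signal over any observation window of a given length, with invented phase data:

    ```python
    import numpy as np

    def mtie(x, window):
        """Largest peak-to-peak time error over all windows of `window` samples."""
        x = np.asarray(x, dtype=float)
        worst = 0.0
        for i in range(len(x) - window + 1):
            seg = x[i:i + window]
            worst = max(worst, seg.max() - seg.min())
        return worst

    # Toy time-error sequence: 1 Hz phase samples with random-walk noise (s).
    rng = np.random.default_rng(1)
    x = 1e-9 * np.cumsum(rng.normal(0.0, 1.0, 1000))
    for tau in (10, 100, 1000):
        print(f"MTIE({tau} s) = {mtie(x, tau):.3e} s")
    ```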

  7. An online detection system for aggregate sizes and shapes based on digital image processing

    NASA Astrophysics Data System (ADS)

    Yang, Jianhong; Chen, Sijia

    2017-02-01

    Traditional aggregate size measuring methods are time-consuming, taxing, and do not deliver online measurements. A new online detection system for determining aggregate size and shape, based on a digital camera with a charge-coupled device and subsequent digital image processing, has been developed to overcome these problems. The system captures images of aggregates while falling and while lying flat. Using these data, the particle size and shape distribution can be obtained in real time. Here, we calibrate this method using standard globules. Our experiments show that the maximum particle size distribution error was only 3 wt%, while the maximum particle shape distribution error was only 2 wt% for data derived from falling aggregates, which have good dispersion. In contrast, the data for flat-lying aggregates had a maximum particle size distribution error of 12 wt% and a maximum particle shape distribution error of 10 wt%; their accuracy was clearly lower than for falling aggregates. However, flat-lying measurements performed well for single-graded aggregates and did not require a dispersion device. Our system is low-cost and easy to install. It can successfully achieve online detection of aggregate size and shape with good reliability, and it has great potential for aggregate quality assurance.

  8. Exploiting Measurement Uncertainty Estimation in Evaluation of GOES-R ABI Image Navigation Accuracy Using Image Registration Techniques

    NASA Technical Reports Server (NTRS)

    Haas, Evan; DeLuccia, Frank

    2016-01-01

    In evaluating GOES-R Advanced Baseline Imager (ABI) image navigation quality, upsampled sub-images of ABI images are translated against downsampled Landsat 8 images of localized, high contrast earth scenes to determine the translations in the East-West and North-South directions that provide maximum correlation. The native Landsat resolution is much finer than that of ABI, and Landsat navigation accuracy is much better than ABI required navigation accuracy and expected performance. Therefore, Landsat images are considered to provide ground truth for comparison with ABI images, and the translations of ABI sub-images that produce maximum correlation with Landsat localized images are interpreted as ABI navigation errors. The measured local navigation errors from registration of numerous sub-images with the Landsat images are averaged to provide a statistically reliable measurement of the overall navigation error of the ABI image. The dispersion of the local navigation errors is also of great interest, since ABI navigation requirements are specified as bounds on the 99.73rd percentile of the magnitudes of per pixel navigation errors. However, the measurement uncertainty inherent in the use of image registration techniques tends to broaden the dispersion in measured local navigation errors, masking the true navigation performance of the ABI system. We have devised a novel and simple method for estimating the magnitude of the measurement uncertainty in registration error for any pair of images of the same earth scene. We use these measurement uncertainty estimates to filter out the higher quality measurements of local navigation error for inclusion in statistics. In so doing, we substantially reduce the dispersion in measured local navigation errors, thereby better approximating the true navigation performance of the ABI system.
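
    A hedged sketch of the registration step: find the integer shift of a sub-image that maximizes normalized cross-correlation against a reference image. The ABI/Landsat processing also involves resampling and sub-pixel refinement not shown here:

    ```python
    import numpy as np

    def best_shift(ref, sub, max_shift=5):
        """Integer (dx, dy) shift maximizing normalized cross-correlation."""
        best, best_ncc = (0, 0), -np.inf
        h, w = sub.shape
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                win = ref[max_shift + dy:max_shift + dy + h,
                          max_shift + dx:max_shift + dx + w]
                a, b = win - win.mean(), sub - sub.mean()
                ncc = (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())
                if ncc > best_ncc:
                    best_ncc, best = ncc, (dx, dy)
        return best, best_ncc

    # Synthetic test: cut a window out of a random "reference" scene and
    # recover its offset relative to the nominal search origin.
    rng = np.random.default_rng(4)
    ref = rng.normal(size=(40, 40))
    sub = ref[8:24, 9:25].copy()        # true offset (dx, dy) = (4, 3) from origin (5, 5)
    print(best_shift(ref, sub))         # -> ((4, 3), 1.0)
    ```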

  9. Optimum data analysis procedures for Titan 4 and Space Shuttle payload acoustic measurements during lift-off

    NASA Technical Reports Server (NTRS)

    Piersol, Allan G.

    1991-01-01

    Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, it has been determined that the optimum averaging times for computing the maximum levels are (1) T_o = 1.14 sec for the maximum overall level and T_oi = 4.88 f_i^(-0.2) sec for the maximum 1/3 octave band levels inside the Titan IV PLF, and (2) T_o = 1.65 sec for the maximum overall level and T_oi = 7.10 f_i^(-0.2) sec for the maximum 1/3 octave band levels inside the Space Shuttle PLB, where f_i is the 1/3 octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.
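
    A short worked example evaluating the reported averaging-time formulas over a few 1/3-octave band center frequencies (every third center shown, for brevity):

    ```python
    import numpy as np

    # Optimum averaging times from the abstract: T_oi = 4.88 * f_i**-0.2 s
    # (Titan IV PLF) and T_oi = 7.10 * f_i**-0.2 s (Shuttle PLB).
    f = np.array([31.5, 63.0, 125.0, 250.0, 500.0, 1000.0, 2000.0])  # Hz
    T_titan   = 4.88 * f**-0.2
    T_shuttle = 7.10 * f**-0.2
    for fi, t1, t2 in zip(f, T_titan, T_shuttle):
        print(f"{fi:7.1f} Hz: Titan IV {t1:4.2f} s, Shuttle {t2:4.2f} s")
    # Overall levels use fixed times: 1.14 s (Titan IV) and 1.65 s (Shuttle).
    ```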

  10. Study on the three-station typical network deployments of workspace Measurement and Positioning System

    NASA Astrophysics Data System (ADS)

    Xiong, Zhi; Zhu, J. G.; Xue, B.; Ye, Sh. H.; Xiong, Y.

    2013-10-01

    As a novel network coordinate measurement system based on multi-directional positioning, the workspace Measurement and Positioning System (wMPS) has the outstanding advantages of good parallelism, wide measurement range and high measurement accuracy, which make it a research hotspot and an important development direction in the field of large-scale measurement. Since station deployment has a significant impact on the measurement range and accuracy, and also constrains the cost of use, the optimization of station deployment was researched in this paper. Firstly, a positioning error model was established. Then, focusing on the small network consisting of three stations, the typical deployments and error distribution characteristics were studied. Finally, by measuring a simulated fuselage using the typical deployments at an industrial site and comparing the results with a Laser Tracker, some conclusions were obtained. The comparison results show that under existing prototype conditions, the I_3 deployment, in which the three stations are distributed along a straight line, has an average error of 0.30 mm and a maximum error of 0.50 mm in the range of 12 m. Meanwhile, the C_3 deployment, in which the three stations are uniformly distributed over half the circumference of a circle, has an average error of 0.17 mm and a maximum error of 0.28 mm. The C_3 deployment thus controls precision more effectively than the I_3 type. This research provides effective theoretical support for global measurement network optimization in future work.

  11. On the application of photogrammetry to the fitting of jawbone-anchored bridges.

    PubMed

    Strid, K G

    1985-01-01

    Misfit between a jawbone-anchored bridge and the abutments in the patient's jaw may result in, for example, fixture fracture. To achieve improved alignment, the bridge base could be prepared in a numerically-controlled tooling machine using measured abutment coordinates as primary data. For each abutment, the measured values must comprise the coordinates of a reference surface as well as the spatial orientation of the fixture/abutment longitudinal axis. Stereophotogrammetry was assumed to be the measuring method of choice. To assess its potential, a lower-jaw model with accurately positioned signals was stereophotographed and the films were measured in a stereocomparator. Model-space coordinates, computed from the image coordinates, were compared to the known signal coordinates. The root-mean-square error in position was determined to be 0.03-0.08 mm, with the maximum individual error amounting to 0.12 mm, whereas the r.m.s. error in axis direction was found to be 0.5-1.5 degrees with a maximum individual error of 1.8 degrees. These errors are of the same order as can be achieved by careful impression techniques. The method could be useful, but because of its complexity, stereophotogrammetry is not recommended as a standard procedure.

  12. Design and Implementation of an Intrinsically Safe Liquid-Level Sensor Using Coaxial Cable

    PubMed Central

    Jin, Baoquan; Liu, Xin; Bai, Qing; Wang, Dong; Wang, Yu

    2015-01-01

    Real-time detection of liquid level in complex environments has always been a knotty issue. In this paper, an intrinsically safe liquid-level sensor system for flammable and explosive environments is designed and implemented. The poly vinyl chloride (PVC) coaxial cable is chosen as the sensing element and the measuring mechanism is analyzed. Then, the capacitance-to-voltage conversion circuit is designed and the expected output signal is achieved by adopting parameter optimization. Furthermore, the experimental platform of the liquid-level sensor system is constructed, which involves the entire process of measuring, converting, filtering, processing, visualizing and communicating. Additionally, the system is designed with characteristics of intrinsic safety by limiting the energy of the circuit to avoid or restrain the thermal effects and sparks. Finally, the approach of the piecewise linearization is adopted in order to improve the measuring accuracy by matching the appropriate calibration points. The test results demonstrate that over the measurement range of 1.0 m, the maximum nonlinearity error is 0.8% full-scale span (FSS), the maximum repeatability error is 0.5% FSS, and the maximum hysteresis error is reduced from 0.7% FSS to 0.5% FSS by applying software compensation algorithms. PMID:26029949

  13. Design and implementation of an intrinsically safe liquid-level sensor using coaxial cable.

    PubMed

    Jin, Baoquan; Liu, Xin; Bai, Qing; Wang, Dong; Wang, Yu

    2015-05-28

    Real-time detection of liquid level in complex environments has always been a knotty issue. In this paper, an intrinsically safe liquid-level sensor system for flammable and explosive environments is designed and implemented. The poly vinyl chloride (PVC) coaxial cable is chosen as the sensing element and the measuring mechanism is analyzed. Then, the capacitance-to-voltage conversion circuit is designed and the expected output signal is achieved by adopting parameter optimization. Furthermore, the experimental platform of the liquid-level sensor system is constructed, which involves the entire process of measuring, converting, filtering, processing, visualizing and communicating. Additionally, the system is designed with characteristics of intrinsic safety by limiting the energy of the circuit to avoid or restrain the thermal effects and sparks. Finally, the approach of the piecewise linearization is adopted in order to improve the measuring accuracy by matching the appropriate calibration points. The test results demonstrate that over the measurement range of 1.0 m, the maximum nonlinearity error is 0.8% full-scale span (FSS), the maximum repeatability error is 0.5% FSS, and the maximum hysteresis error is reduced from 0.7% FSS to 0.5% FSS by applying software compensation algorithms.
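
    A minimal sketch of the piecewise-linearization idea shared by the two records above: calibrate the sensor at a handful of known liquid levels, then map any measured output back to a level by linear interpolation between the nearest calibration points. The calibration values below are invented for illustration:

    ```python
    import numpy as np

    # Hypothetical calibration table: sensor output voltage vs. known level.
    cal_voltage = np.array([0.42, 0.95, 1.51, 2.08, 2.60, 3.17])  # measured (V)
    cal_level   = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])        # known level (m)

    def voltage_to_level(v):
        """Piecewise-linear inverse of the calibration curve."""
        return np.interp(v, cal_voltage, cal_level)

    print(voltage_to_level(1.80))   # level estimate between the 0.4 m and 0.6 m points
    ```

    Adding more calibration points where the sensor response is least linear is what lets this approach reduce the nonlinearity error, as the abstracts describe.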

  14. Quantifying precision of in situ length and weight measurements of fish

    USGS Publications Warehouse

    Gutreuter, S.; Krzoska, D.J.

    1994-01-01

    We estimated and compared errors in field-made (in situ) measurements of lengths and weights of fish. We made three measurements of length and weight on each of 33 common carp Cyprinus carpio, and on each of a total of 34 bluegills Lepomis macrochirus and black crappies Pomoxis nigromaculatus. Maximum total lengths of all fish were measured to the nearest 1 mm on a conventional measuring board. The bluegills and black crappies (85–282 mm maximum total length) were weighed to the nearest 1 g on a 1,000-g spring-loaded scale. The common carp (415–600 mm maximum total length) were weighed to the nearest 0.05 kg on a 20-kg spring-loaded scale. We present a statistical model for comparison of coefficients of variation of length (C_l) and weight (C_w). Expected C_l was near zero and constant across mean length, indicating that length can be measured with good precision in the field. Expected C_w decreased with increasing mean length, and was larger than expected C_l by 5.8 to over 100 times for the bluegills and black crappies, and by 3 to over 20 times for the common carp. Unrecognized in situ weighing errors bias the apparent content of unique information in weight, which is the information not explained by either length or measurement error. We recommend procedures to circumvent effects of weighing errors, including elimination of unnecessary weighing from routine monitoring programs. In situ weighing must be conducted with greater care than is common if the content of unique and nontrivial information in weight is to be correctly identified.
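
    A minimal sketch of the precision comparison, computing per-fish coefficients of variation from invented triplicate measurements:

    ```python
    import numpy as np

    # Invented triplicate measurements, one row per fish.
    lengths = np.array([[251, 252, 251],    # mm
                        [318, 317, 318]])
    weights = np.array([[255, 240, 260],    # g
                        [510, 530, 505]])

    def cv(x):
        """Per-row coefficient of variation (sample SD / mean)."""
        return x.std(axis=1, ddof=1) / x.mean(axis=1)

    print("CV length:", cv(lengths))   # near zero: length is precise in the field
    print("CV weight:", cv(weights))   # substantially larger than CV length
    ```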

  15. The epoch state navigation filter. [for maximum likelihood estimates of position and velocity vectors

    NASA Technical Reports Server (NTRS)

    Battin, R. H.; Croopnick, S. R.; Edwards, J. A.

    1977-01-01

    The formulation of a recursive maximum likelihood navigation system employing reference position and velocity vectors as state variables is presented. Convenient forms of the required variational equations of motion are developed together with an explicit form of the associated state transition matrix needed to refer measurement data from the measurement time to the epoch time. Computational advantages accrue from this design in that the usual forward extrapolation of the covariance matrix of estimation errors can be avoided without incurring unacceptable system errors. Simulation data for earth orbiting satellites are provided to substantiate this assertion.

  16. Algorithm for pose estimation based on objective function with uncertainty-weighted measuring error of feature point cling to the curved surface.

    PubMed

    Huo, Ju; Zhang, Guiyang; Yang, Ming

    2018-04-20

    This paper is concerned with the anisotropic and non-identical gray distribution of feature points clinging to a curved surface, for which a high-precision, uncertainty-resistant algorithm for pose estimation is proposed. The weighted contribution of uncertainty to the objective function of the feature-point measuring error is analyzed. Then a novel error objective function based on the spatial collinearity error is constructed by transforming the uncertainty into a covariance-weighted matrix, which is suitable for practical applications. Further, the optimized generalized orthogonal iterative (GOI) algorithm is utilized for the iterative solution, avoiding poor convergence and significantly resisting the uncertainty. The optimized GOI algorithm hence extends the field-of-view applications and improves the accuracy and robustness of the measuring results through redundant information. Finally, simulation and practical experiments show that the maximum error of the re-projected image coordinates of the target is less than 0.110 pixels. Within the space 3000 mm×3000 mm×4000 mm, the maximum estimation errors of static and dynamic measurement for rocket nozzle motion are better than 0.065° and 0.128°, respectively. The results verify the high accuracy and uncertainty-attenuation performance of the proposed approach, which should therefore have potential for engineering applications.

  17. Backus Effect and Perpendicular Errors in Harmonic Models of Real vs. Synthetic Data

    NASA Technical Reports Server (NTRS)

    Voorhies, C. V.; Santana, J.; Sabaka, T.

    1999-01-01

    Measurements of geomagnetic scalar intensity on a thin spherical shell alone are not enough to separate internal from external source fields; moreover, such scalar data are not enough for accurate modeling of the vector field from internal sources because of unmodeled fields and small data errors. Spherical harmonic models of the geomagnetic potential fitted to scalar data alone therefore suffer from the well-understood Backus effect and perpendicular errors. Curiously, errors in some models of simulated 'data' are very much less than those in models of real data. We analyze select Magsat vector and scalar measurements separately to illustrate the Backus effect and perpendicular errors in models of real scalar data. By using a model to synthesize 'data' at the observation points, and by adding various types of 'noise', we illustrate such errors in models of synthetic 'data'. Perpendicular errors prove quite sensitive to the maximum degree in the spherical harmonic expansion of the potential field model fitted to the scalar data. Small errors in models of synthetic 'data' are found to be an artifact of matched truncation levels. For example, consider scalar synthetic 'data' computed from a degree 14 model. A degree 14 model fitted to such synthetic 'data' yields negligible error, but amplifies 4 nT (rms) added noise into a 60 nT (rms) error; however, a degree 12 model fitted to the noisy 'data' suffers a 492 nT error (rms through degree 12). Real geomagnetic measurements are unaffected by model truncation choices, so the small errors indicated by some simulations cannot be realized in practice. Errors in models fitted to scalar data alone approach 1000 nT (rms) and several thousand nT (maximum).

  18. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    NASA Technical Reports Server (NTRS)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  19. Correcting for sequencing error in maximum likelihood phylogeny inference.

    PubMed

    Kuhner, Mary K; McGill, James

    2014-11-04

    Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
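
    A hedged sketch of how a sequencing-error rate can enter a pruning-style likelihood at the tips, assuming a simple uniform-miscall model (the paper's exact parameterization may differ): instead of treating the observed base as certain, the tip's conditional likelihood vector spreads probability e of a sequencing error uniformly over the three other bases.

    ```python
    import numpy as np

    BASES = "ACGT"

    def tip_likelihood(observed_base, e):
        """Conditional likelihood vector P(observed | true base) over ACGT."""
        v = np.full(4, e / 3.0)                 # miscalled from any other base
        v[BASES.index(observed_base)] = 1.0 - e # called correctly
        return v

    print(tip_likelihood("A", 0.0))    # [1, 0, 0, 0]   -- error ignored
    print(tip_likelihood("A", 0.01))   # [0.99, 0.0033, 0.0033, 0.0033]
    ```

    These vectors replace the usual 0/1 tip states in Felsenstein-style pruning, which is how an assumed error rate propagates into branch-length estimates.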

  20. Tests for detecting overdispersion in models with measurement error in covariates.

    PubMed

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Heng, E-mail: hengli@mdanderson.org; Zhu, X. Ronald; Zhang, Xiaodong

    Purpose: To develop and validate a novel delivery strategy for reducing the respiratory motion–induced dose uncertainty of spot-scanning proton therapy. Methods and Materials: The spot delivery sequence was optimized to reduce dose uncertainty. The effectiveness of the delivery sequence optimization was evaluated using measurements and patient simulation. One hundred ninety-one 2-dimensional measurements using different delivery sequences of a single-layer uniform pattern were obtained with a detector array on a 1-dimensional moving platform. Intensity modulated proton therapy plans were generated for 10 lung cancer patients, and dose uncertainties for different delivery sequences were evaluated by simulation. Results: Without delivery sequence optimization, the maximum absolute dose error can be up to 97.2% in a single measurement, whereas the optimized delivery sequence results in a maximum absolute dose error of ≤11.8%. In patient simulation, the optimized delivery sequence reduces the mean of fractional maximum absolute dose error compared with the regular delivery sequence by 3.3% to 10.6% (32.5-68.0% relative reduction) for different patients. Conclusions: Optimizing the delivery sequence can reduce dose uncertainty due to respiratory motion in spot-scanning proton therapy, assuming the 4-dimensional CT is a true representation of the patients' breathing patterns.

  2. Accuracy assessment of high-rate GPS measurements for seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Davis, J. L.; Ekström, G.

    2007-12-01

    Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.

  3. Error of the slanted edge method for measuring the modulation transfer function of imaging systems.

    PubMed

    Xie, Xufen; Fan, Hongda; Wang, Hongyuan; Wang, Zebin; Zou, Nianyu

    2018-03-01

    The slanted edge method is a basic approach for measuring the modulation transfer function (MTF) of imaging systems; however, its measurement accuracy is limited in practice. Theoretical analysis of the slanted edge MTF measurement method performed in this paper reveals that inappropriate edge angles and random noise reduce this accuracy. The error caused by edge angles is analyzed using sampling and reconstruction theory. Furthermore, an error model combining noise and edge angles is proposed. We verify the analyses and model with respect to (i) the edge angle, (ii) a statistical analysis of the measurement error, (iii) the full width at half-maximum of a point spread function, and (iv) the error model. The experimental results verify the theoretical findings. This research can be referential for applications of the slanted edge MTF measurement method.

  4. Measurement of the Errors of Service Altimeter Installations During Landing-Approach and Take-Off Operations

    NASA Technical Reports Server (NTRS)

    Gracey, William; Jewel, Joseph W., Jr.; Carpenter, Gene T.

    1960-01-01

    The overall errors of the service altimeter installations of a variety of civil transport, military, and general-aviation airplanes have been experimentally determined during normal landing-approach and take-off operations. The average height above the runway at which the data were obtained was about 280 feet for the landings and about 440 feet for the take-offs. An analysis of the data obtained from 196 airplanes during 415 landing approaches and from 70 airplanes during 152 take-offs showed that: 1. The overall error of the altimeter installations in the landing- approach condition had a probable value (50 percent probability) of +/- 36 feet and a maximum probable value (99.7 percent probability) of +/- 159 feet with a bias of +10 feet. 2. The overall error in the take-off condition had a probable value of +/- 47 feet and a maximum probable value of +/- 207 feet with a bias of -33 feet. 3. The overall errors of the military airplanes were generally larger than those of the civil transports in both the landing-approach and take-off conditions. In the landing-approach condition the probable error and the maximum probable error of the military airplanes were +/- 43 and +/- 189 feet, respectively, with a bias of +15 feet, whereas those for the civil transports were +/- 22 and +/- 96 feet, respectively, with a bias of +1 foot. 4. The bias values of the error distributions (+10 feet for the landings and -33 feet for the take-offs) appear to represent a measure of the hysteresis characteristics (after effect and recovery) and friction of the instrument and the pressure lag of the tubing-instrument system.
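
    A worked example of the reported statistics, assuming a normal error model in which the "probable value" is the 50%-probability bound (0.6745 sigma about the bias) and the "maximum probable value" is the 99.7% bound (3 sigma); the sample below is invented to roughly reproduce the landing-approach numbers:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    errors = rng.normal(10.0, 53.0, 415)     # feet; invented landing-approach sample

    bias = errors.mean()
    sigma = errors.std(ddof=1)
    print(f"bias                 : {bias:+.0f} ft")          # ~ +10 ft
    print(f"probable value (50%) : +/- {0.6745*sigma:.0f} ft")  # ~ +/-36 ft
    print(f"max probable (99.7%) : +/- {3*sigma:.0f} ft")       # ~ +/-159 ft
    ```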

  5. Development of multiple-eye PIV using mirror array

    NASA Astrophysics Data System (ADS)

    Maekawa, Akiyoshi; Sakakibara, Jun

    2018-06-01

    In order to reduce particle image velocimetry measurement error, we manufactured an ellipsoidal polyhedral mirror and placed it between a camera and the flow target to capture n images of identical particles from n (=80 maximum) different directions. The 3D particle positions were determined from the ensemble average of the nC2 intersecting points of the line-of-sight back-projections of a particle found in any combination of two of the n images. The method was then applied to a rigid-body rotating flow and a turbulent pipe flow. In the former measurement, bias error and random error fell in ranges of ±0.02 pixels and 0.02–0.05 pixels, respectively; additionally, random error decreased as n increased. In the latter measurement, in which the measured values were compared to direct numerical simulation, bias error was reduced and random error again decreased as n increased.
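
    A hedged sketch of the triangulation step: back-project a particle from two views into two lines of sight and take the midpoint of their common perpendicular; the full method averages this over all n-choose-2 view pairs.

    ```python
    import numpy as np

    def midpoint_of_lines(p1, d1, p2, d2):
        """Midpoint of the common perpendicular of lines p + t*d (d unit vectors)."""
        w = p1 - p2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w, d2 @ w
        denom = a * c - b * b                  # zero only for parallel lines
        t1 = (b * e - c * d) / denom
        t2 = (a * e - b * d) / denom
        return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

    # Two skew lines of sight: the z-axis and the line x=1, z=0.
    p = midpoint_of_lines(np.array([0., 0., 0.]), np.array([0., 0., 1.]),
                          np.array([1., 0., 0.]), np.array([0., 1., 0.]))
    print(p)   # -> [0.5, 0. , 0. ]
    ```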

  6. Novel Downhole Electromagnetic Flowmeter for Oil-Water Two-Phase Flow in High-Water-Cut Oil-Producing Wells.

    PubMed

    Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang

    2016-10-14

    First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, the dynamic experiments of two EMFs in oil-water two-phase flow are carried out. The experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale errors is better than 5%, the total flowrate is 5-60 m³/d, and the water-cut is higher than 60%. The maximum absolute value of the full-scale errors is better than 7%, the total flowrate is 2-60 m³/d, and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow.

  7. Novel Downhole Electromagnetic Flowmeter for Oil-Water Two-Phase Flow in High-Water-Cut Oil-Producing Wells

    PubMed Central

    Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang

    2016-01-01

    First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, the dynamic experiments of two EMFs in oil-water two-phase flow are carried out. The experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale errors is better than 5%, the total flowrate is 5–60 m³/d, and the water-cut is higher than 60%. The maximum absolute value of the full-scale errors is better than 7%, the total flowrate is 2–60 m³/d, and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow. PMID:27754412

  8. Space charge enhanced plasma gradient effects on satellite electric field measurements

    NASA Technical Reports Server (NTRS)

    Diebold, Dan; Hershkowitz, Noah; Dekock, J.; Intrator, T.; Hsieh, M-K.

    1991-01-01

    It has been recognized that plasma gradients can cause error in magnetospheric electric field measurements made by double probes. Space charge enhanced plasma gradient induced error (PGIE) is discussed in general terms; the results of a laboratory experiment designed to demonstrate this error are presented, and a simple expression that quantifies it is derived. Experimental conditions were not identical to magnetospheric conditions, although efforts were made to ensure the relevant physics applied to both cases. The experimental data demonstrate some of the possible errors in electric field measurements made by strongly emitting probes due to space charge effects in the presence of plasma gradients. Probe errors in space and laboratory conditions are discussed, as well as experimental error. In the final section, theoretical aspects are examined and an expression is derived for the maximum steady state space charge enhanced PGIE measured by two identical current-biased probes.

  9. Retinal Image Quality During Accommodation

    PubMed Central

    López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.

    2013-01-01

    Purpose We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodative errors on visual acuity is mitigated by pupillary constriction associated with accommodation and binocular convergence, and also by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to the maximum achievable (given the eye’s higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. PMID:23786386

  10. Retinal image quality during accommodation.

    PubMed

    López-Gil, Norberto; Martin, Jesson; Liu, Tao; Bradley, Arthur; Díaz-Muñoz, David; Thibos, Larry N

    2013-07-01

    We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodative errors on visual acuity is mitigated by pupillary constriction associated with accommodation and binocular convergence, and also by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to the maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.

  11. Error quantification of osteometric data in forensic anthropology.

    PubMed

    Langley, Natalie R; Meadows Jantz, Lee; McNulty, Shauna; Maijanen, Heli; Ousley, Stephen D; Jantz, Richard L

    2018-06-01

    This study evaluates the reliability of osteometric data commonly used in forensic case analyses, with specific reference to the measurements in Data Collection Procedures 2.0 (DCP 2.0). Four observers took a set of 99 measurements four times on a sample of 50 skeletons (each measurement was taken 200 times by each observer). Two-way mixed ANOVAs and repeated measures ANOVAs with pairwise comparisons were used to examine interobserver (between-subjects) and intraobserver (within-subjects) variability. Relative technical error of measurement (TEM) was calculated for measurements with significant ANOVA results to examine the error among a single observer repeating a measurement multiple times (e.g. repeatability or intraobserver error), as well as the variability between multiple observers (interobserver error). Two general trends emerged from these analyses: (1) maximum lengths and breadths have the lowest error across the board (TEM<0.5), and (2) maximum and minimum diameters at midshaft are more reliable than their positionally-dependent counterparts (i.e. sagittal, vertical, transverse, dorso-volar). Therefore, maxima and minima are specified for all midshaft measurements in DCP 2.0. Twenty-two measurements were flagged for excessive variability (either interobserver, intraobserver, or both); 15 of these measurements were part of the standard set of measurements in Data Collection Procedures for Forensic Skeletal Material, 3rd edition. Each measurement was examined carefully to determine the likely source of the error (e.g. data input, instrumentation, observer's method, or measurement definition). For several measurements (e.g. anterior sacral breadth, distal epiphyseal breadth of the tibia) only one observer differed significantly from the remaining observers, indicating a likely problem with the measurement definition as interpreted by that observer; these definitions were clarified in DCP 2.0 to eliminate this confusion. Other measurements were taken from landmarks that are difficult to locate consistently (e.g. pubis length, ischium length); these measurements were omitted from DCP 2.0. This manual is available for free download online (https://fac.utk.edu/wp-content/uploads/2016/03/DCP20_webversion.pdf), along with an accompanying instructional video (https://www.youtube.com/watch?v=BtkLFl3vim4). Copyright © 2018 Elsevier B.V. All rights reserved.
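
    A minimal sketch of technical error of measurement (TEM) and relative TEM (%TEM) in their standard anthropometric form, here for one observer and two trials; the study's setup of 4 observers by 4 repeats generalizes this:

    ```python
    import numpy as np

    def tem_two_trials(trial1, trial2):
        """TEM = sqrt(sum(d^2) / 2N) for paired repeat measurements."""
        d = np.asarray(trial1, float) - np.asarray(trial2, float)
        return np.sqrt(np.sum(d**2) / (2 * len(d)))

    # Invented repeat measurements of maximum femur length (mm).
    t1 = np.array([452.0, 448.0, 461.0, 470.0])
    t2 = np.array([452.5, 447.5, 461.0, 469.0])
    tem = tem_two_trials(t1, t2)
    rel_tem = 100 * tem / np.concatenate([t1, t2]).mean()
    print(f"TEM = {tem:.2f} mm, relative TEM = {rel_tem:.3f} %")
    ```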

  12. Clinical implementation and error sensitivity of a 3D quality assurance protocol for prostate and thoracic IMRT

    PubMed Central

    Cotter, Christopher; Turcotte, Julie Catherine; Crawford, Bruce; Sharp, Gregory; Mah'D, Mufeed

    2015-01-01

    This work aims at three goals: first, to define a set of statistical parameters and plan structures for a 3D pretreatment thoracic and prostate intensity-modulated radiation therapy (IMRT) quality assurance (QA) protocol; secondly, to test if the 3D QA protocol is able to detect certain clinical errors; and third, to compare the 3D QA method with QA performed with a single ion chamber and a 2D gamma test in detecting those errors. The 3D QA protocol measurements were performed on 13 prostate and 25 thoracic IMRT patients using IBA's COMPASS system. For each treatment planning structure included in the protocol, the following statistical parameters were evaluated: average absolute dose difference (AADD), percent structure volume with absolute dose difference greater than 6% (ADD6), and 3D gamma test. To test the 3D QA protocol error sensitivity, two prostate and two thoracic step-and-shoot IMRT patients were investigated. Errors introduced to each of the treatment plans included energy switched from 6 MV to 10 MV, multileaf collimator (MLC) leaf errors, linac jaws errors, monitor unit (MU) errors, MLC and gantry angle errors, and detector shift errors. QA was performed on each plan using a single ion chamber and a 2D array of ion chambers for 2D and 3D QA. Based on the measurements performed, we established a uniform set of tolerance levels to determine if QA passes for each IMRT treatment plan structure: the maximum allowed AADD is 6%; a maximum of 4% of any structure volume may have an absolute dose difference greater than 6%; and a maximum of 4% of any structure volume may fail the 3D gamma test with test parameters 3%/3 mm DTA. Of the three QA methods tested, the single ion chamber performed the worst, detecting 4 out of 18 introduced errors; 2D QA detected 11 out of 18 errors, and 3D QA detected 14 out of 18 errors. PACS number: 87.56.Fc PMID:26699299
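
    A hedged 1D illustration of the gamma test with the protocol's 3%/3 mm criteria (the protocol itself is 3D, but the per-point formula is the same); the dose profiles below are synthetic:

    ```python
    import numpy as np

    # For each measured point, gamma is the minimum over reference points of
    #   sqrt((dose diff / 3% of max dose)^2 + (distance / 3 mm)^2);
    # a point passes when gamma <= 1.
    def gamma_1d(x_ref, d_ref, x_meas, d_meas, dta=3.0, dd=0.03):
        d_norm = dd * d_ref.max()               # global 3% dose criterion
        gam = np.empty_like(d_meas)
        for i, (x, d) in enumerate(zip(x_meas, d_meas)):
            g2 = ((x - x_ref) / dta)**2 + ((d - d_ref) / d_norm)**2
            gam[i] = np.sqrt(g2.min())
        return gam

    x = np.linspace(0.0, 100.0, 201)            # position (mm)
    ref  = np.exp(-((x - 50) / 20)**2)          # reference dose profile
    meas = np.exp(-((x - 51) / 20)**2) * 1.01   # shifted, scaled "measurement"
    g = gamma_1d(x, ref, x, meas)
    print(f"gamma pass rate: {100 * np.mean(g <= 1):.1f} %")
    ```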

  13. Estimation of perspective errors in 2D2C-PIV measurements for 3D concentrated vortices

    NASA Astrophysics Data System (ADS)

    Ma, Bao-Feng; Jiang, Hong-Gang

    2018-06-01

    Two-dimensional planar PIV (2D2C) is still extensively employed in flow measurement owing to its availability and reliability, although more advanced PIVs have been developed. It has long been recognized that there exist perspective errors in velocity fields when employing the 2D2C PIV to measure three-dimensional (3D) flows, the magnitude of which depends on out-of-plane velocity and geometric layouts of the PIV. For a variety of vortex flows, however, the results are commonly represented by vorticity fields, instead of velocity fields. The present study indicates that the perspective error in vorticity fields relies on gradients of the out-of-plane velocity along a measurement plane, instead of the out-of-plane velocity itself. More importantly, an estimation approach to the perspective error in 3D vortex measurements was proposed based on a theoretical vortex model and an analysis on physical characteristics of the vortices, in which the gradient of out-of-plane velocity is uniquely determined by the ratio of the maximum out-of-plane velocity to maximum swirling velocity of the vortex; meanwhile, the ratio has upper limits for naturally formed vortices. Therefore, if the ratio is imposed with the upper limits, the perspective error will only rely on the geometric layouts of PIV that are known in practical measurements. Using this approach, the upper limits of perspective errors of a concentrated vortex can be estimated for vorticity and other characteristic quantities of the vortex. In addition, the study indicates that the perspective errors in vortex location, vortex strength, and vortex radius can be all zero for axisymmetric vortices if they are calculated by proper methods. The dynamic mode decomposition on an oscillatory vortex indicates that the perspective errors of each DMD mode are also only dependent on the gradient of out-of-plane velocity if the modes are represented by vorticity.

  14. Higher-order ionospheric error at Arecibo, Millstone, and Jicamarca

    NASA Astrophysics Data System (ADS)

    Matteo, N. A.; Morton, Y. T.

    2010-12-01

    The ionosphere is a dominant source of Global Positioning System receiver range measurement error. Although dual-frequency receivers can eliminate the first-order ionospheric error, most second- and third-order errors remain in the range measurements. Higher-order ionospheric error is a function of both electron density distribution and the magnetic field vector along the GPS signal propagation path. This paper expands previous efforts by combining incoherent scatter radar (ISR) electron density measurements, the International Reference Ionosphere model, exponential decay extensions of electron densities, the International Geomagnetic Reference Field, and total electron content maps to compute higher-order error at ISRs in Arecibo, Puerto Rico; Jicamarca, Peru; and Millstone Hill, Massachusetts. Diurnal patterns, dependency on signal direction, seasonal variation, and geomagnetic activity dependency are analyzed. Higher-order error is largest at Arecibo with code phase maxima circa 7 cm for low-elevation southern signals. The maximum variation of the error over all angles of arrival is circa 8 cm.

  15. Effect of asymmetrical transfer coefficients of a non-polarizing beam splitter on the nonlinear error of the polarization interferometer

    NASA Astrophysics Data System (ADS)

    Zhao, Chen-Guang; Tan, Jiu-Bin; Liu, Tao

    2010-09-01

    The mechanism by which a non-polarizing beam splitter (NPBS) with asymmetrical transfer coefficients causes rotation of the polarization direction is explained in principle, and the nonlinear measurement error caused by the NPBS is analyzed based on Jones matrix theory. Theoretical calculations show that the nonlinear error changes periodically, and that the error period and peak values increase with the deviation between the transmissivities of the p-polarization and s-polarization states. When the transmissivity of p-polarization is 53% and that of s-polarization is 48%, the maximum error reaches 2.7 nm. The imperfection of the NPBS is one of the main error sources in a simultaneous phase-shifting polarization interferometer, and its influence cannot be neglected in nanoscale ultra-precision measurement.

  16. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.
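
    A hedged sketch of the deterioration scheme described above: progressively add bias, noise, and resolution (quantization) errors to a clean signal before re-running parameter estimation; names and magnitudes are illustrative, not the study's values:

    ```python
    import numpy as np

    def deteriorate(signal, noise_sd=0.0, resolution=0.0, bias=0.0, rng=None):
        """Add bias, Gaussian noise, and quantization to a measurement trace."""
        rng = rng or np.random.default_rng()
        out = signal + bias + rng.normal(0.0, noise_sd, signal.shape)
        if resolution > 0:
            out = resolution * np.round(out / resolution)   # sensor resolution
        return out

    t = np.linspace(0.0, 10.0, 1001)
    alpha = 2.0 + 0.5 * np.sin(1.2 * t)          # "true" angle-of-attack trace (deg)
    for sd in (0.0, 0.05, 0.2):
        noisy = deteriorate(alpha, noise_sd=sd, resolution=0.01, bias=0.1)
        rms = np.sqrt(np.mean((noisy - alpha)**2))
        print(f"noise sd {sd:4.2f}: rms deviation {rms:.3f} deg")
    ```

    Running the parameter estimator on many such deteriorated realizations is the Monte Carlo step that maps sensor error levels to modeling accuracy.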

  17. Laser damage metrology in biaxial nonlinear crystals using different test beams

    NASA Astrophysics Data System (ADS)

    Hildenbrand, Anne; Wagner, Frank R.; Akhouayri, Hassan; Natoli, Jean-Yves; Commandre, Mireille

    2008-01-01

    Laser damage measurements in nonlinear optical crystals, in particular biaxial crystals, may be influenced by several effects proper to these materials or greatly enhanced in them. Before discussing these effects, we address the topic of error bar determination for probability measurements. Error bars for damage probabilities are important because nonlinear crystals are often small and expensive, so only a few sites are available for a single damage probability measurement. We present the mathematical basics and a flow diagram for the numerical calculation of error bars for probability measurements that correspond to a chosen confidence level. Effects that can modify the maximum intensity in a biaxial nonlinear crystal are focusing aberration, walk-off and self-focusing. Depending on the focusing conditions, propagation direction, polarization of the light and the position of the focal point in the crystal, strong aberrations may change the beam profile and drastically decrease the maximum intensity in the crystal. A correction factor for this effect is proposed, but quantitative corrections are not possible without taking into account the experimental beam profile after the focusing lens. The characteristics of walk-off and self-focusing are briefly reviewed for the sake of completeness. Finally, parasitic second harmonic generation may influence the laser damage behavior of crystals. The important point for laser damage measurements is that the amount of externally observed SHG after the crystal does not correspond to the maximum amount of second harmonic light inside the crystal.
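
    A minimal sketch of error bars for a damage probability measured on few sites, using the Clopper-Pearson binomial interval as one standard choice (the paper presents its own numerical procedure):

    ```python
    from scipy.stats import beta

    def damage_prob_ci(k, n, conf=0.95):
        """Clopper-Pearson interval for k damaged sites out of n tested."""
        a = (1 - conf) / 2
        lo = beta.ppf(a, k, n - k + 1) if k > 0 else 0.0
        hi = beta.ppf(1 - a, k + 1, n - k) if k < n else 1.0
        return lo, hi

    k, n = 3, 10                       # 3 damaged sites of 10 tested
    lo, hi = damage_prob_ci(k, n)
    print(f"p = {k/n:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")   # wide interval: few sites
    ```

    The width of the interval for small n is exactly why error bars matter when only a handful of sites can be sacrificed per fluence level.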

  18. Sampling for compliance with USDA Forest Service guidelines using information derived from LIDAR

    Treesearch

    Bogdan M. Strimbu; Daniel Cooke; Samuel Strozier

    2015-01-01

    Forest resources are traditionally assessed using field measurements. The USDA Forest Service developed a series of guidelines for planning and executing the measurements, specifically the significance level and maximum allowed sampling error.

  19. Wire-positioning algorithm for coreless Hall array sensors in current measurement

    NASA Astrophysics Data System (ADS)

    Chen, Wenli; Zhang, Huaiqing; Chen, Lin; Gu, Shanyun

    2018-05-01

    This paper presents a scheme of circular-arrayed, coreless Hall-effect current transformers. It can satisfy the demands for wide dynamic range and wide bandwidth in distribution-system current measurement, as well as the demand for simultaneous AC and DC measurement. In order to improve the signal to noise ratio (SNR) of the sensor, a wire-positioning algorithm is proposed, which improves the measurement accuracy through post-processing of the measurement data. The simulation results demonstrate that the maximum errors are 70%, 6.1% and 0.95% for Ampère's circuital method, the approximate positioning algorithm and the precise positioning algorithm, respectively. The accuracy of the positioning algorithms is thus significantly improved compared with that of Ampère's circuital method. The maximum error of the positioning algorithm is also smaller in the experiment.

  20. A dual-phantom system for validation of velocity measurements in stenosis models under steady flow.

    PubMed

    Blake, James R; Easson, William J; Hoskins, Peter R

    2009-09-01

    A dual-phantom system is developed for validation of velocity measurements in stenosis models. Pairs of phantoms with identical geometry and flow conditions are manufactured, one for ultrasound and one for particle image velocimetry (PIV). The PIV model is made from silicone rubber, and a new PIV fluid is made that matches the refractive index of 1.41 of silicone. Dynamic scaling was performed to correct for the increased viscosity of the PIV fluid compared with that of the ultrasound blood mimic. The degree of stenosis in the models pairs agreed to less than 1%. The velocities in the laminar flow region up to the peak velocity location agreed to within 15%, and the difference could be explained by errors in ultrasound velocity estimation. At low flow rates and in mild stenoses, good agreement was observed in the distal flow fields, excepting the maximum velocities. At high flow rates, there was considerable difference in velocities in the poststenosis flow field (maximum centreline differences of 30%), which would seem to represent real differences in hydrodynamic behavior between the two models. Sources of error included: variation of viscosity because of temperature (random error, which could account for differences of up to 7%); ultrasound velocity estimation errors (systematic errors); and geometry effects in each model, particularly because of imperfect connectors and corners (systematic errors, potentially affecting the inlet length and flow stability). The current system is best placed to investigate measurement errors in the laminar flow region rather than the poststenosis turbulent flow region.

  1. Error threshold inference from Global Precipitation Measurement (GPM) satellite rainfall data and interpolated ground-based rainfall measurements in Metro Manila

    NASA Astrophysics Data System (ADS)

    Ampil, L. J. Y.; Yao, J. G.; Lagrosas, N.; Lorenzo, G. R. H.; Simpas, J.

    2017-12-01

    The Global Precipitation Measurement (GPM) mission is a group of satellites that provides global observations of precipitation. Satellite-based observations act as an alternative if ground-based measurements are inadequate or unavailable. Data provided by satellites however must be validated for this data to be reliable and used effectively. In this study, the Integrated Multisatellite Retrievals for GPM (IMERG) Final Run v3 half-hourly product is validated by comparing against interpolated ground measurements derived from sixteen ground stations in Metro Manila. The area considered in this study is the region 14.4° - 14.8° latitude and 120.9° - 121.2° longitude, subdivided into twelve 0.1° x 0.1° grid squares. Satellite data from June 1 - August 31, 2014 with the data aggregated to 1-day temporal resolution are used in this study. The satellite data is directly compared to measurements from individual ground stations to determine the effect of the interpolation by contrast against the comparison of satellite data and interpolated measurements. The comparisons are calculated by taking a fractional root-mean-square error (F-RMSE) between two datasets. The results show that interpolation reduces errors compared to using raw station data except during days with very small amounts of rainfall. F-RMSE reaches extreme values of up to 654 without a rainfall threshold. A rainfall threshold is inferred to remove extreme error values and make the distribution of F-RMSE more consistent. Results show that the rainfall threshold varies slightly per month. The threshold for June is inferred to be 0.5 mm, reducing the maximum F-RMSE to 9.78, while the threshold for July and August is inferred to be 0.1 mm, reducing the maximum F-RMSE to 4.8 and 10.7, respectively. The maximum F-RMSE is reduced further as the threshold is increased. Maximum F-RMSE is reduced to 3.06 when a rainfall threshold of 10 mm is applied over the entire duration of JJA. These results indicate that IMERG performs well for moderate to high intensity rainfall and that the interpolation remains effective only when rainfall exceeds a certain threshold value. Over Metro Manila, a rainfall threshold of 0.5 mm indicated better correspondence between ground-measured and satellite-measured rainfall.
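
    A minimal sketch of the comparison metric, assuming F-RMSE is the RMSE normalized by the mean ground-based rainfall (one common definition; the authors' exact normalization may differ) and that the rainfall threshold is applied to the ground data:

        import numpy as np

        def fractional_rmse(satellite, ground, rain_threshold=0.0):
            """Fractional RMSE between satellite and ground rainfall, keeping
            only days where the ground measurement exceeds the threshold (mm)."""
            satellite, ground = np.asarray(satellite), np.asarray(ground)
            mask = ground > rain_threshold
            rmse = np.sqrt(np.mean((satellite[mask] - ground[mask]) ** 2))
            return rmse / np.mean(ground[mask])

        # Example: a 0.5 mm threshold removes near-zero days that inflate F-RMSE.
        sat = np.array([0.0, 1.2, 10.5, 30.1])
        gnd = np.array([0.1, 1.0, 12.0, 28.0])
        print(fractional_rmse(sat, gnd, rain_threshold=0.5))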

  2. Dependence of Dynamic Modeling Accuracy on Sensor Measurements, Mass Properties, and Aircraft Geometry

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    The NASA Generic Transport Model (GTM) nonlinear simulation was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of identified parameters in mathematical models describing the flight dynamics and determined from flight data. Measurements from a typical flight condition and system identification maneuver were systematically and progressively deteriorated by introducing noise, resolution errors, and bias errors. The data were then used to estimate nondimensional stability and control derivatives within a Monte Carlo simulation. Based on these results, recommendations are provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using additional flight conditions and parameter estimation methods, as well as a nonlinear flight simulation of the General Dynamics F-16 aircraft, were compared with these recommendations.
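
    A minimal sketch of the kind of progressive deterioration described, applied to a clean measurement record; the error magnitudes and variable names are illustrative, not the study's values:

        import numpy as np

        rng = np.random.default_rng(0)

        def deteriorate(signal, noise_std=0.0, resolution=0.0, bias=0.0):
            """Corrupt a clean sensor record with noise, resolution, and bias errors."""
            out = signal + rng.normal(0.0, noise_std, size=signal.shape)  # random noise
            if resolution > 0:
                out = np.round(out / resolution) * resolution  # quantization error
            return out + bias                                   # systematic offset

        # Example: a pitch-rate record (deg/s) with progressively worse errors.
        q_clean = np.sin(np.linspace(0, 10, 500))
        q_meas = deteriorate(q_clean, noise_std=0.05, resolution=0.02, bias=0.1)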

  3. DtaRefinery: a software tool for elimination of systematic errors from parent ion mass measurements in tandem mass spectra datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.

    2009-12-16

    Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and MS/MS fragmentation have become widely available in recent years and have allowed for significantly better discrimination between true and false MS/MS peptide identifications by applying relatively narrow windows for maximum allowable deviations for parent ion mass measurements. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. The DtaRefinery software tool can correct systematic errors in parent ion masses by reading a set of fragmentation spectra, searching for MS/MS peptide identifications, then fitting a model that can estimate systematic errors, and removing them. This results in a new fragmentation spectrum file with updated parent ion masses.
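
    DtaRefinery's actual model is more elaborate, but a minimal sketch of the core idea — fit a smooth trend to observed parent ion mass errors as a function of m/z and subtract it — might look like this (polynomial degree and names are illustrative):

        import numpy as np

        def remove_systematic_mass_error(mz, error_ppm, degree=2):
            """Fit a smooth trend to parent-ion mass errors (ppm) versus m/z for
            confidently identified peptides, then return the corrected errors."""
            coeffs = np.polyfit(mz, error_ppm, degree)   # systematic trend
            trend = np.polyval(coeffs, mz)
            return error_ppm - trend                     # residual (random) error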

  4. Maximum likelihood techniques applied to quasi-elastic light scattering

    NASA Technical Reports Server (NTRS)

    Edwards, Robert V.

    1992-01-01

    An automatic procedure for reliably estimating the quality of particle size measurements from QELS (Quasi-Elastic Light Scattering) is needed. Obtaining the measurement itself, before any error estimates can be made, is a problem because it comes from a very indirect measurement of a signal derived from the motion of particles in the system and requires the solution of an inverse problem. The eigenvalue structure of the transform that generates the signal is such that an arbitrarily small amount of noise can obliterate parts of any practical inversion spectrum. This project uses Maximum Likelihood Estimation (MLE) as a framework to generate a theory and a functioning set of software to oversee the measurement process and extract the particle size information, while at the same time providing error estimates for those measurements. The theory involved verifying a correct form of the covariance matrix for the noise on the measurement and then estimating particle size parameters using a modified histogram approach.

  5. Examining Impulse-Variability in Kicking.

    PubMed

    Chappell, Andrew; Molina, Sergio L; McKibben, Jonathon; Stodden, David F

    2016-07-01

    This study examined variability in kicking speed and spatial accuracy to test the impulse-variability theory prediction of an inverted-U function and the speed-accuracy trade-off. Twenty-eight 18- to 25-year-old adults kicked a playground ball at various percentages (50-100%) of their maximum speed at a wall target. Speed variability and spatial error were analyzed using repeated-measures ANOVA with built-in polynomial contrasts. Results indicated a significant inverse linear trajectory for speed variability (p < .001, η2 = .345) where 50% and 60% maximum speed had significantly higher variability than the 100% condition. A significant quadratic fit was found for spatial error scores of mean radial error (p < .0001, η2 = .474) and subject-centroid radial error (p < .0001, η2 = .453). Findings suggest variability and accuracy of multijoint, ballistic skill performance may not follow the general principles of impulse-variability theory or the speed-accuracy trade-off.

  6. Test-retest reliability of sudden ankle inversion measurements in subjects with healthy ankle joints.

    PubMed

    Eechaute, Christophe; Vaes, Peter; Duquet, William; Van Gheluwe, Bart

    2007-01-01

    Sudden ankle inversion tests have been used to investigate whether the onset of peroneal muscle activity is delayed in patients with chronically unstable ankle joints. Before interpreting test results of latency times in patients with chronic ankle instability and healthy subjects, the reliability of these measures must first be demonstrated. To investigate the test-retest reliability of variables measured during a sudden ankle inversion movement in standing subjects with healthy ankle joints. Validation study. Research laboratory. 15 subjects with healthy ankle joints (30 ankles). Subjects stood on an ankle inversion platform with both feet tightly fixed to independently moveable trapdoors. An unexpected sudden ankle inversion of 50 degrees was imposed. We measured latency and motor response times and electromechanical delay of the peroneus longus muscle, along with the time and angular position of the first and second decelerating moments, the mean and maximum inversion speed, and the total inversion time. Correlation coefficients and standard errors of measurement were calculated. Intraclass correlation coefficients ranged from 0.17 for the electromechanical delay of the peroneus longus muscle (standard error of measurement = 2.7 milliseconds) to 0.89 for the maximum inversion speed (standard error of measurement = 34.8 milliseconds). The reliability of the latency and motor response times of the peroneus longus muscle, the time of the first and second decelerating moments, and the mean and maximum inversion speed was acceptable in subjects with healthy ankle joints and supports the investigation of the reliability of these measures in subjects with chronic ankle instability. The lower reliability of the electromechanical delay of the peroneus longus muscle and the angular positions of both decelerating moments calls the use of these variables into question.

  7. [De-noising and measurement of pulse wave velocity of the wavelet].

    PubMed

    Liu, Baohua; Zhu, Honglian; Ren, Xiaohua

    2011-02-01

    Pulse wave velocity (PWV) is a vital index of cardiovascular pathology, so accurate measurement of PWV can benefit the prevention and treatment of cardiovascular diseases. Noise in the pulse wave measurement system, rounding error, and the selection of the recording site all cause errors in the measured result. In this paper, wavelet transformation was used to eliminate the noise and raise the precision, and the point of maximum slope on the reconstructed pulse wave was chosen as the recording site, improving the accuracy of the measuring system.
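
    A minimal sketch of this pipeline using PyWavelets, under assumed choices (db4 wavelet, soft universal-threshold denoising) that the abstract does not specify:

        import numpy as np
        import pywt

        def denoise_and_locate(pulse, fs, wavelet="db4", level=4):
            """Wavelet-denoise a pulse waveform and return the index of the
            maximum-slope point, used as the recording (fiducial) site."""
            coeffs = pywt.wavedec(pulse, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745         # noise estimate
            thr = sigma * np.sqrt(2 * np.log(len(pulse)))          # universal threshold
            coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            clean = pywt.waverec(coeffs, wavelet)[: len(pulse)]
            return clean, int(np.argmax(np.gradient(clean) * fs))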

  8. Methods for estimation of radiation risk in epidemiological studies accounting for classical and Berkson errors in doses.

    PubMed

    Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre

    2011-02-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R/(1+R), R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i·Q_i^mes/M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t_mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr·V_i^Q (a classical measurement error model) and M_i^tr = M_i^mes·V_i^M (a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
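
    To make the distinction concrete, a short simulation sketch of the two error models with illustrative lognormal parameters (not the study's values):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 1000

        # "True" quantities (illustrative distributions)
        Q_tr = rng.lognormal(mean=2.0, sigma=0.5, size=n)   # true thyroid activity
        M_mes = rng.lognormal(mean=1.5, sigma=0.3, size=n)  # measured thyroid mass
        f = 1.0                                             # normalizing multiplier

        # Classical error: the *measurement* scatters around the true value.
        Q_mes = Q_tr * rng.lognormal(0.0, 0.2, size=n)      # Q_mes = Q_tr * V_Q
        # Berkson error: the *true* value scatters around the measurement.
        M_tr = M_mes * rng.lognormal(0.0, 0.2, size=n)      # M_tr = M_mes * V_M

        D_mes = f * Q_mes / M_mes   # calculated dose
        D_tr = f * Q_tr / M_tr      # true dose (unobserved in practice)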

  9. MO-FG-BRA-06: Electromagnetic Beacon Insertion in Lung Cancer Patients and Resultant Surrogacy Errors for Dynamic MLC Tumour Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardcastle, N; Booth, J; Caillet, V

    Purpose: To assess endo-bronchial electromagnetic beacon insertion and to quantify the geometric accuracy of using beacons as a surrogate for tumour motion in real-time multileaf collimator (MLC) tracking of lung tumours. Methods: The LIGHT SABR trial is a world-first clinical trial in which the MLC leaves move with lung tumours in real time on a standard linear accelerator. Tracking is performed based on implanted electromagnetic beacons (CalypsoTM, Varian Medical Systems, USA) as a surrogate for tumour motion. Five patients have been treated and have each had three beacons implanted endo-bronchially under fluoroscopic guidance. The centre of mass (C.O.M) has been used to adapt the MLC in real-time. The geometric error in using the beacon C.O.M as a surrogate for tumour motion was measured by measuring the tumour and beacon C.O.M in all phases of the respiratory cycle of a 4DCT. The surrogacy error was defined as the difference in beacon and tumour C.O.M relative to the reference phase (maximum exhale). Results: All five patients have had three beacons successfully implanted with no migration between simulation and end of treatment. Beacon placement relative to tumour C.O.M varied from 14 to 74 mm and in one patient spanned two lobes. Surrogacy error was measured in each patient on the simulation 4DCT and ranged from 0 to 3 mm. Surrogacy error as measured on 4DCT was subject to artefacts in mid-ventilation phases. Surrogacy error was a function of breathing phase and was typically larger at maximum inhale. Conclusion: Beacon placement and thus surrogacy error is a major component of geometric uncertainty in MLC tracking of lung tumours. Surrogacy error must be measured on each patient and incorporated into margin calculation. Reduction of surrogacy error is limited by airway anatomy, however it should be taken into consideration when performing beacon insertion and planning. This research is funded by Varian Medical Systems via a collaborative research agreement.

  10. An introduction of component fusion extend Kalman filtering method

    NASA Astrophysics Data System (ADS)

    Geng, Yue; Lei, Xusheng

    2018-05-01

    In this paper, the Component Fusion Extended Kalman Filtering (CFEKF) algorithm is proposed. Each component of the error propagation is assumed to be independent and Gaussian. The CFEKF is obtained through maximum likelihood over the propagation error, which allows the state transition matrix and the measurement matrix to be adjusted adaptively. By minimizing the linearization error, CFEKF can effectively improve the estimation accuracy of the nonlinear system state. The computational cost of CFEKF is similar to that of the EKF, which makes it easy to apply.
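
    For orientation, a sketch of one standard EKF predict/update cycle — the baseline that CFEKF modifies; this is not the authors' component-fusion step:

        import numpy as np

        def ekf_step(x, P, z, f, F, h, H, Q, R):
            """One predict/update cycle of a standard extended Kalman filter.
            f, h are the (nonlinear) process and measurement functions;
            F, H are their Jacobians evaluated at the current estimate."""
            # Predict
            x_pred = f(x)
            Fx = F(x)
            P_pred = Fx @ P @ Fx.T + Q
            # Update
            Hx = H(x_pred)
            S = Hx @ P_pred @ Hx.T + R                  # innovation covariance
            K = P_pred @ Hx.T @ np.linalg.inv(S)        # Kalman gain
            x_new = x_pred + K @ (z - h(x_pred))
            P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
            return x_new, P_new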

  11. Accuracy of non-resonant laser-induced thermal acoustics (LITA) in a convergent-divergent nozzle flow

    NASA Astrophysics Data System (ADS)

    Richter, J.; Mayer, J.; Weigand, B.

    2018-02-01

    Non-resonant laser-induced thermal acoustics (LITA) was applied to measure Mach number, temperature and turbulence level along the centerline of a transonic nozzle flow. The accuracy of the measurement results was systematically studied regarding misalignment of the interrogation beam and frequency analysis of the LITA signals. 2D steady-state Reynolds-averaged Navier-Stokes (RANS) simulations were performed for reference. The simulations were conducted using ANSYS CFX 18 employing the shear-stress transport turbulence model. Post-processing of the LITA signals is performed by applying a discrete Fourier transformation (DFT) to determine the beat frequencies. It is shown that the systematic error of the DFT, which depends on the number of oscillations, signal chirp, and damping rate, is less than 1.5% for our experiments, resulting in an average error of 1.9% for Mach number. Further, the maximum calibration error is investigated for a worst-case scenario involving maximum in situ readjustment of the interrogation beam within the limits of constructive interference. It is shown that the signal intensity becomes zero if the interrogation angle is altered by 2%. This, together with the accuracy of the frequency analysis, results in an error of about 5.4% for temperature throughout the nozzle. Comparison with numerical results shows good agreement within the error bars.
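
    A minimal sketch of beat-frequency extraction by DFT with parabolic peak interpolation, a common way to reduce the bin-quantization error the authors analyse; the windowing and interpolation choices here are assumptions:

        import numpy as np

        def beat_frequency(signal, fs):
            """Estimate the dominant beat frequency of a LITA-type signal via a
            discrete Fourier transform with parabolic peak interpolation."""
            sig = signal - np.mean(signal)              # remove DC offset
            win = np.hanning(len(sig))
            spec = np.abs(np.fft.rfft(sig * win))
            k = int(np.argmax(spec[1:-1]) + 1)          # skip zero and edge bins
            # Parabolic interpolation around the peak reduces the DFT bin error.
            a, b, c = np.log(spec[k - 1 : k + 2])
            delta = 0.5 * (a - c) / (a - 2 * b + c)
            return (k + delta) * fs / len(sig)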

  12. Accounting for the measurement error of spectroscopically inferred soil carbon data for improved precision of spatial predictions.

    PubMed

    Somarathna, P D S N; Minasny, Budiman; Malone, Brendan P; Stockmann, Uta; McBratney, Alex B

    2018-08-01

    Spatial modelling of environmental data commonly only considers spatial variability as the single source of uncertainty. In reality however, the measurement errors should also be accounted for. In recent years, infrared spectroscopy has been shown to offer low cost, yet invaluable information needed for digital soil mapping at meaningful spatial scales for land management. However, spectrally inferred soil carbon data are known to be less accurate compared to laboratory analysed measurements. This study establishes a methodology to filter out the measurement error variability by incorporating the measurement error variance in the spatial covariance structure of the model. The study was carried out in the Lower Hunter Valley, New South Wales, Australia where a combination of laboratory measured, and vis-NIR and MIR inferred topsoil and subsoil soil carbon data are available. We investigated the applicability of residual maximum likelihood (REML) and Markov Chain Monte Carlo (MCMC) simulation methods to generate parameters of the Matérn covariance function directly from the data in the presence of measurement error. The results revealed that the measurement error can be effectively filtered-out through the proposed technique. When the measurement error was filtered from the data, the prediction variance almost halved, which ultimately yielded a greater certainty in spatial predictions of soil carbon. Further, the MCMC technique was successfully used to define the posterior distribution of measurement error. This is an important outcome, as the MCMC technique can be used to estimate the measurement error if it is not explicitly quantified. Although this study dealt with soil carbon data, this method is amenable for filtering the measurement error of any kind of continuous spatial environmental data. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Seepage investigation and selected hydrologic data for the Escalante River drainage basin, Garfield and Kane Counties, Utah, 1909-2002

    USGS Publications Warehouse

    Wilberg, Dale E.; Stolp, Bernard J.

    2005-01-01

    This report contains the results of an October 2001 seepage investigation conducted along a reach of the Escalante River in Utah extending from the U.S. Geological Survey streamflow-gaging station near Escalante to the mouth of Stevens Canyon. Discharge was measured at 16 individual sites along 15 consecutive reaches. Total reach length was about 86 miles. A reconnaissance-level sampling of water for tritium and chlorofluorocarbons was also done. In addition, hydrologic and water-quality data previously collected and published by the U.S. Geological Survey for the 2,020-square-mile Escalante River drainage basin were compiled and are presented in 12 tables. These data were collected from 64 surface-water sites and 28 springs from 1909 to 2002. None of the 15 consecutive reaches along the Escalante River had a measured loss or gain that exceeded the measurement error. All discharge measurements taken during the seepage investigation were assigned a qualitative rating of accuracy that ranged from 5 percent to greater than 8 percent of the actual flow. The normalized error for a reach was determined by summing the potential error for each measurement and dividing by the larger of the upstream discharge plus any tributary inflow, or the downstream discharge. This was compared to the computed loss or gain, which was also normalized to the maximum discharge. A loss or gain for a specified reach is considered significant when the loss or gain (normalized percentage difference) is greater than the measurement error (normalized percentage error). The percentage difference and percentage error were normalized to allow comparison between reaches with different amounts of discharge. The plate that accompanies the report is 36" by 40" and can be printed in 16 tiles, 8.5 by 11 inches. An index for the tiles is located on the lower left-hand side of the plate. Using Adobe Acrobat, the plate can be viewed independently of the report; all Acrobat functions are available.
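
    A minimal sketch of the significance test described, assuming the per-measurement errors have already been converted from the qualitative percent ratings into discharge units:

        def reach_is_significant(q_up, q_trib, q_down, err_up, err_trib, err_down):
            """Seepage-run significance test: a reach gain/loss is significant
            only if the normalized difference exceeds the normalized error."""
            q_max = max(q_up + q_trib, q_down)
            pct_difference = abs(q_down - (q_up + q_trib)) / q_max * 100
            pct_error = (err_up + err_trib + err_down) / q_max * 100
            return pct_difference > pct_error

        # Example: ~5% measurement ratings on a reach with a small apparent loss.
        print(reach_is_significant(100.0, 5.0, 102.0, 5.0, 0.25, 5.1))  # False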

  14. DNAPL MAPPING AND WATER SATURATION MEASUREMENTS IN 2-D MODELS USING LIGHT TRANSMISSION VISUALIZATION (LTV) TECHNIQUE

    EPA Science Inventory

    • LTV can be used to characterize free phase PCE architecture in 2-D flow chambers without using a dye. • Results to date suggest that error in PCE detection using LTV can be less than 10% if the imaging system is optimized. • Mass balance calculations show a maximum error of 9...

  15. Non-contact method for characterization of small size thermoelectric modules.

    PubMed

    Manno, Michael; Yang, Bao; Bar-Cohen, Avram

    2015-08-01

    Conventional techniques for characterization of thermoelectric performance require bringing measurement equipment into direct contact with the thermoelectric device, which is increasingly error prone as device size decreases. Therefore, the novel work presented here describes a non-contact technique capable of accurately measuring the maximum ΔT and maximum heat pumping of mini- to micro-sized thin-film thermoelectric coolers. The non-contact characterization method eliminates the measurement errors associated with using thermocouples and traditional heat flux sensors to test small samples and large heat fluxes. Using the non-contact approach, an infrared camera, rather than thermocouples, measures the temperature of the hot and cold sides of the device to determine the device ΔT, and a laser is used to heat the cold side of the thermoelectric module to characterize its heat pumping capacity. As a demonstration of the general applicability of the non-contact characterization technique, testing of a thin-film thermoelectric module is presented and the results agree well with those published in the literature.

  16. Experimental study on an FBG strain sensor

    NASA Astrophysics Data System (ADS)

    Liu, Hong-lin; Zhu, Zheng-wei; Zheng, Yong; Liu, Bang; Xiao, Feng

    2018-01-01

    Landslides and other geological disasters occur frequently and often cause high financial and humanitarian costs. Real-time, early-warning monitoring of landslides is therefore important for reducing casualties and property losses. In this paper, taking advantage of the high initial precision and high sensitivity of FBGs, an FBG strain sensor is designed by combining FBGs with an inclinometer. The sensor was modelled as a cantilever beam with one end fixed. Based on the anisotropic material properties of the inclinometer, a theoretical formula relating the FBG wavelength to the deflection of the sensor was established using elastic mechanics principles. The accuracy of the established formula was verified through laboratory calibration testing and model slope monitoring experiments. The landslide displacement could be calculated from the established theoretical formula using the changes in FBG central wavelength obtained remotely by the demodulation instrument. Results showed that the maximum error at different heights was 9.09%; the average of the maximum error was 6.35%, with a corresponding variance of 2.12; the minimum error was 4.18%; the average of the minimum error was 5.99%, with a corresponding variance of 0.50. The error between the theoretical and the measured displacement decreases gradually, and the variance of the error also decreases gradually, indicating that the theoretical results are increasingly reliable. This shows that the sensor and the theoretical formula established in this paper can be used for remote, real-time, high-precision, early-warning monitoring of slopes.

  17. Design, construction and performance evaluation of the target tissue thickness measurement system in intraoperative radiotherapy for breast cancer

    NASA Astrophysics Data System (ADS)

    Yazdani, Mohammad Reza; Setayeshi, Saeed; Arabalibeik, Hossein; Akbari, Mohammad Esmaeil

    2017-05-01

    Intraoperative electron radiation therapy (IOERT), which uses electron beams to irradiate the target directly during surgery, has the advantage of delivering a homogeneous dose to a controlled layer of tissue. Since the dose falls off quickly below the target thickness, the underlying normal tissues are spared. In selecting the appropriate electron energy, the accuracy of the target tissue thickness measurement is critical. In contrast to the other procedures applied in IOERT, the routine thickness measurement method remains entirely traditional and approximate. In this work, a novel mechanism is proposed for measuring the target tissue thickness with an acceptable level of accuracy. An electronic system has been designed and manufactured with the capability of measuring the tissue thickness based on the recorded electron density under the target. The results indicated the possibility of thickness measurement with a maximum error of 2 mm for 91.35% of the data. Aside from a system limitation in estimating the thickness of the 5 mm phantom, the maximum error is 1 mm for 88.94% of the data.

  18. Validation of the Kp Geomagnetic Index Forecast at CCMC

    NASA Astrophysics Data System (ADS)

    Frechette, B. P.; Mays, M. L.

    2017-12-01

    The Community Coordinated Modeling Center (CCMC) Space Weather Research Center (SWRC) sub-team provides space weather services to NASA robotic mission operators and science campaigns and prototypes new models, forecasting techniques, and procedures. The Kp index is a measure of geomagnetic disturbances for space weather in the magnetosphere such as geomagnetic storms and substorms. In this study, we performed validation of the Newell et al. (2007) Kp prediction equation from December 2010 to July 2017. The purpose of this research is to understand Kp forecast performance, because it is critical for NASA missions to have confidence in the space weather forecast. This was done by computing the Kp error for each forecast (average, minimum, maximum) and each synoptic period. To quantify forecast performance we then computed the mean error, mean absolute error, root mean square error, multiplicative bias and correlation coefficient. A contingency table was made for each forecast and skill scores were computed. The results are compared to the perfect score and the reference forecast skill score. In conclusion, the skill score and error results show that the minimum of the predicted Kp over each synoptic period from the Newell et al. (2007) Kp prediction equation performed better than the maximum or average of the prediction. However, persistence (the reference forecast) outperformed all of the Kp forecasts (minimum, maximum, and average). Overall, the Newell Kp prediction still predicts within a range of 1 Kp unit, even though persistence beats it.
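
    A minimal sketch of the verification statistics listed, for paired forecast/observed Kp series (contingency-table skill scores omitted for brevity):

        import numpy as np

        def verify(forecast, observed):
            """Basic verification statistics for a Kp-style forecast."""
            f, o = np.asarray(forecast, float), np.asarray(observed, float)
            err = f - o
            return {
                "mean_error": err.mean(),
                "mean_abs_error": np.abs(err).mean(),
                "rmse": np.sqrt((err ** 2).mean()),
                "multiplicative_bias": f.mean() / o.mean(),
                "correlation": np.corrcoef(f, o)[0, 1],
            }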

  19. Accuracy of measurement in electrically evoked compound action potentials.

    PubMed

    Hey, Matthias; Müller-Deile, Joachim

    2015-01-15

    Electrically evoked compound action potentials (ECAP) in cochlear implant (CI) patients are characterized by the amplitude of the N1P1 complex. The measurement of evoked potentials yields a combination of the measured signal with various noise components, but for ECAP procedures performed in the clinical routine, only the averaged curve is accessible. To date, no detailed analysis of the error magnitude has been published. The aim of this study was to determine the error of the N1P1 amplitude and the factors that affect it. Measurements were performed on 32 CI patients with either CI24RE (CA) or CI512 implants using the software Custom Sound EP (Cochlear). N1P1 error approximation from non-averaged raw data consisting of recorded single sweeps was compared to methods of error approximation based on mean curves. The error approximation of the N1P1 amplitude using averaged data showed results comparable to single-point error estimation. The error of the N1P1 amplitude depends on the number of averaging steps and on amplification; in contrast, it does not depend on the stimulus intensity. The single-point error was smaller and coincided better with a 1/√N function (N is the number of measured sweeps) than the known maximum-minimum criterion. Evaluation of the N1P1 amplitude should be accompanied by an indication of its error. The retrospective approximation of this measurement error from the averaged data available in clinically used software is possible and is best done utilizing the D-trace in forward-masking artefact-reduction mode (no stimulation applied, so the recording contains only the switch-on artefact). Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Translation fidelity coevolves with longevity.

    PubMed

    Ke, Zhonghe; Mallik, Pramit; Johnson, Adam B; Luna, Facundo; Nevo, Eviatar; Zhang, Zhengdong D; Gladyshev, Vadim N; Seluanov, Andrei; Gorbunova, Vera

    2017-10-01

    Whether errors in protein synthesis play a role in aging has been a subject of intense debate. It has been suggested that rare mistakes in protein synthesis in young organisms may result in errors in the protein synthesis machinery, eventually leading to an increasing cascade of errors as organisms age. Studies that followed generally failed to identify a dramatic increase in translation errors with aging. However, whether translation fidelity plays a role in aging remained an open question. To address this issue, we examined the relationship between translation fidelity and maximum lifespan across 17 rodent species with diverse lifespans. To measure translation fidelity, we utilized sensitive luciferase-based reporter constructs with mutations in an amino acid residue critical to luciferase activity, wherein misincorporation of amino acids at this mutated codon re-activated the luciferase. The frequency of amino acid misincorporation at the first and second codon positions showed strong negative correlation with maximum lifespan. This correlation remained significant after phylogenetic correction, indicating that translation fidelity coevolves with longevity. These results give new life to the role of protein synthesis errors in aging: Although the error rate may not significantly change with age, the basal rate of translation errors is important in defining lifespan across mammals. © 2017 The Authors. Aging Cell published by the Anatomical Society and John Wiley & Sons Ltd.

  1. Modeling methodology for MLS range navigation system errors using flight test data

    NASA Technical Reports Server (NTRS)

    Karmali, M. S.; Phatak, A. V.

    1982-01-01

    Flight test data was used to develop a methodology for modeling MLS range navigation system errors. The data used corresponded to the constant velocity and glideslope approach segment of a helicopter landing trajectory. The MLS range measurement was assumed to consist of low frequency and random high frequency components. The random high frequency component was extracted from the MLS range measurements. This was done by appropriate filtering of the range residual generated from a linearization of the range profile for the final approach segment. This range navigation system error was then modeled as an autoregressive moving average (ARMA) process. Maximum likelihood techniques were used to identify the parameters of the ARMA process.
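
    A minimal sketch of the final step using statsmodels, assuming the high-frequency residual has already been extracted by detrending; the ARMA(2,1) order and the placeholder data are illustrative:

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        # range_residual: high-frequency MLS range error after removing the
        # low-frequency (trend) component from the range measurements.
        rng = np.random.default_rng(2)
        range_residual = rng.normal(0, 1, 500)  # placeholder for real residuals

        # Fit an ARMA(2,1) model (d = 0) by maximum likelihood.
        model = ARIMA(range_residual, order=(2, 0, 1))
        result = model.fit()
        print(result.params)   # AR, MA coefficients and innovation variance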

  2. Intra-arterial pressure measurement in neonates: dynamic response requirements.

    PubMed

    van Genderingen, H R; Gevers, M; Hack, W W

    1995-02-01

    A computer simulation of a catheter manometer system was used to quantify measurement errors in neonatal blood pressure parameters. Accurate intra-arterial pressure recordings of 21 critically ill newborns were fed into this simulated system. The dynamic characteristics, natural frequency and damping coefficient, were varied from 2.5 to 60 Hz and from 0.1 to 1.4, respectively. As a result, errors in systolic, diastolic and pulse arterial pressure were obtained as a function of natural frequency and damping coefficient. Iso-error curves for 2%, 5% and 10% were constructed. Using these curves, the maximum inaccuracy of any neonatal catheter manometer system can be determined and used in the clinical setting.
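
    A minimal sketch of such a simulation, modelling the catheter-manometer system as a second-order low-pass system with natural frequency fn and damping coefficient zeta (the waveform and parameter values are illustrative):

        import numpy as np
        from scipy import signal

        def catheter_manometer_response(pressure, t, fn_hz, zeta):
            """Simulate the distortion of an intra-arterial pressure waveform by
            a catheter-manometer system modelled as a 2nd-order low-pass system
            with natural frequency fn_hz and damping coefficient zeta."""
            wn = 2 * np.pi * fn_hz
            system = signal.lti([wn ** 2], [1.0, 2 * zeta * wn, wn ** 2])
            _, out, _ = signal.lsim(system, U=pressure, T=t)
            return out

        # Example: a 25 Hz, under-damped (zeta = 0.2) system acting on a pulse.
        t = np.linspace(0, 1, 2000)
        p = 50 + 25 * np.sin(2 * np.pi * 2.5 * t)   # crude pressure waveform
        distorted = catheter_manometer_response(p, t, fn_hz=25.0, zeta=0.2)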

  3. Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters

    PubMed Central

    Park, Chan Gook

    2018-01-01

    An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with TCKF. The first stage filter estimates the course angle error of a human, which is closely related to the heading error of the IMU. In order to obtain the course measurements, the filter uses magnetic sensors and a position-trace based course angle. For preventing magnetic disturbance from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the characteristic error of the heading angle is highly dependent on that of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second stage filter. At the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly by using course-angle error measurements, the estimation accuracy for the heading and yaw gyro bias can be enhanced, compared with the ZUPT-only case, which eventually enhances the position accuracy more efficiently. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with those for the ZUPT-only case and with other cases that use ZUPT and various types of magnetic heading measurements. The results show that the position errors are reduced by a maximum of 90% compared with the conventional ZUPT based PDR algorithms. PMID:29690539

  4. Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2012-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which the experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8% with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
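
    A minimal sketch of the kind of grid-convergence error estimate described, using Richardson extrapolation from three grids with a constant refinement ratio; the coefficient values are illustrative:

        import math

        def richardson_extrapolate(f_fine, f_medium, f_coarse, r=2.0):
            """Estimate the grid-converged value and observed order of accuracy
            from solutions on three grids with constant refinement ratio r."""
            p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
            f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
            return f_exact, p

        # Example: a force coefficient on coarse/medium/fine grids.
        cn_inf, order = richardson_extrapolate(1.100, 1.110, 1.150)
        print(cn_inf, order)  # extrapolated value and observed order ~2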

  5. Evaluation of the geometric stability and the accuracy potential of digital cameras — Comparing mechanical stabilisation versus parameterisation

    NASA Astrophysics Data System (ADS)

    Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia

    Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accord with a German guideline for evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes, which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration, the best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens whose focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive, resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image-variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with FiBun software to model not only an image-variant interior orientation but also deformations in the sensor domain of the cameras showed significant improvements only for a small group of cameras. The Nikon D3 camera yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space) with this calibration procedure, indicating at the same time the presence of image-invariant errors in the sensor domain. Overall, the calibration results showed that digital cameras can be applied for an accurate photogrammetric survey and that only a little effort is sufficient to greatly improve the accuracy potential of digital cameras.

  6. Online pretreatment verification of high-dose rate brachytherapy using an imaging panel

    NASA Astrophysics Data System (ADS)

    Fonseca, Gabriel P.; Podesta, Mark; Bellezzo, Murillo; Van den Bosch, Michiel R.; Lutgens, Ludy; Vanneste, Ben G. L.; Voncken, Robert; Van Limbergen, Evert J.; Reniers, Brigitte; Verhaegen, Frank

    2017-07-01

    Brachytherapy is employed to treat a wide variety of cancers. However, an accurate treatment verification method is currently not available. This study describes a pre-treatment verification system that uses an imaging panel (IP) to verify important aspects of the treatment plan. A detailed modelling of the IP was only possible with an extensive calibration performed using a robotic arm. Irradiations were performed with a high dose rate (HDR) 192Ir source within a water phantom. An empirical fit was applied to measure the distance between the source and the detector so that 3D Cartesian coordinates of the dwell positions can be obtained using a single panel. The IP acquires images at 7.14 fps to verify the dwell times, dwell positions and air kerma strength (Sk). A gynecological applicator was used to create a treatment plan that was registered with a CT image of the water phantom used during the experiments for verification purposes. Errors (shifts, exchanged connections and wrong dwell times) were simulated to verify the proposed verification system. Cartesian source positions (panel measurement plane) have a standard deviation of about 0.02 cm. The measured distances between the source and the panel (z-coordinate) have a standard deviation up to 0.16 cm and a maximum absolute error of ≈0.6 cm if the signal is close to the sensitivity limit of the panel. The average response of the panel is very linear with Sk. Therefore, Sk measurements can be performed with relatively small errors. The measured dwell times show a maximum error of 0.2 s, which is consistent with the acquisition rate of the panel. All simulated errors were clearly identified by the proposed system. The use of IPs is not common in brachytherapy, however, it provides considerable advantages. It was demonstrated that the IP can accurately measure Sk, dwell times and dwell positions.

  7. Online pretreatment verification of high-dose rate brachytherapy using an imaging panel.

    PubMed

    Fonseca, Gabriel P; Podesta, Mark; Bellezzo, Murillo; Van den Bosch, Michiel R; Lutgens, Ludy; Vanneste, Ben G L; Voncken, Robert; Van Limbergen, Evert J; Reniers, Brigitte; Verhaegen, Frank

    2017-07-07

    Brachytherapy is employed to treat a wide variety of cancers. However, an accurate treatment verification method is currently not available. This study describes a pre-treatment verification system that uses an imaging panel (IP) to verify important aspects of the treatment plan. A detailed modelling of the IP was only possible with an extensive calibration performed using a robotic arm. Irradiations were performed with a high dose rate (HDR) 192Ir source within a water phantom. An empirical fit was applied to measure the distance between the source and the detector so that 3D Cartesian coordinates of the dwell positions can be obtained using a single panel. The IP acquires images at 7.14 fps to verify the dwell times, dwell positions and air kerma strength (Sk). A gynecological applicator was used to create a treatment plan that was registered with a CT image of the water phantom used during the experiments for verification purposes. Errors (shifts, exchanged connections and wrong dwell times) were simulated to verify the proposed verification system. Cartesian source positions (panel measurement plane) have a standard deviation of about 0.02 cm. The measured distances between the source and the panel (z-coordinate) have a standard deviation up to 0.16 cm and a maximum absolute error of ≈0.6 cm if the signal is close to the sensitivity limit of the panel. The average response of the panel is very linear with Sk. Therefore, Sk measurements can be performed with relatively small errors. The measured dwell times show a maximum error of 0.2 s, which is consistent with the acquisition rate of the panel. All simulated errors were clearly identified by the proposed system. The use of IPs is not common in brachytherapy, however, it provides considerable advantages. It was demonstrated that the IP can accurately measure Sk, dwell times and dwell positions.

  8. Implementation of bayesian model averaging on the weather data forecasting applications utilizing open weather map

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.

    2018-02-01

    Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity and other phenomena in the atmosphere. In fact, extreme weather due to global warming can lead to drought, flood, hurricanes and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict the weather with distinctive output, particularly a GIS-based mapping process with information about the current weather status at certain coordinates of each region and the capability to forecast seven days ahead. The data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The result shows that the BMA method has good accuracy. The forecasting error is calculated using the mean square error (MSE). The error for minimum temperature is 0.28 and for maximum temperature 0.15. Meanwhile, the error for minimum humidity is 0.38 and for maximum humidity 0.04. The forecasting error of wind speed is 0.076. The lower the forecasting error, the more accurate the forecast.

  9. Evaluation of glued-diaphragm fibre optic pressure sensors in a shock tube

    NASA Astrophysics Data System (ADS)

    Sharifian, S. Ahmad; Buttsworth, David R.

    2007-02-01

    Glued-diaphragm fibre optic pressure sensors that utilize standard telecommunications components which are based on Fabry-Perot interferometry are appealing in a number of respects. Principally, they have high spatial and temporal resolution and are low in cost. These features potentially make them well suited to operation in extreme environments produced in short-duration high-enthalpy wind tunnel facilities where spatial and temporal resolution are essential, but attrition rates for sensors are typically very high. The sensors we consider utilize a zirconia ferrule substrate and a thin copper foil which are bonded together using an adhesive. The sensors show a fast response and can measure fluctuations with a frequency up to 250 kHz. The sensors also have a high spatial resolution on the order of 0.1 mm. However, with the interrogation and calibration processes adopted in this work, apparent errors of up to 30% of the maximum pressure have been observed. Such errors are primarily caused by mechanical hysteresis and adhesive viscoelasticity. If a dynamic calibration is adopted, the maximum measurement error can be limited to about 10% of the maximum pressure. However, a better approach is to eliminate the adhesive from the construction process or design the diaphragm and substrate in a way that does not require the adhesive to carry a significant fraction of the mechanical loading.

  10. Auto-tracking system for human lumbar motion analysis.

    PubMed

    Sui, Fuge; Zhang, Da; Lam, Shing Chun Benny; Zhao, Lifeng; Wang, Dongjun; Bi, Zhenggang; Hu, Yong

    2011-01-01

    Previous lumbar motion analyses suggest the usefulness of quantitatively characterizing spine motion. However, the application of such measurements is still limited by the lack of user-friendly automatic spine motion analysis systems. This paper describes an automatic analysis system to measure lumbar spine disorders that consists of a spine motion guidance device, an X-ray imaging modality to acquire digitized video fluoroscopy (DVF) sequences and an automated tracking module with a graphical user interface (GUI). DVF sequences of the lumbar spine are recorded during flexion-extension under a guidance device. The automatic tracking software, utilizing a particle filter, locates the vertebra of interest in every frame of the sequence, and the tracking result is displayed on the GUI. Kinematic parameters are also extracted from the tracking results for motion analysis. We observed that, in a bone model test, the maximum fiducial error was 3.7%, and the maximum repeatability errors in translation and rotation were 1.2% and 2.6%, respectively. In our simulated DVF sequence study, automatic tracking was not successful when the noise intensity was greater than 0.50. In a noisy situation, the maximal difference was 1.3 mm in translation and 1° in rotation angle. The errors were calculated in translation (fiducial error: 2.4%, repeatability error: 0.5%) and in rotation angle (fiducial error: 1.0%, repeatability error: 0.7%). However, the automatic tracking software could successfully track simulated sequences contaminated by noise at a density ≤ 0.5 with very high accuracy, providing good reliability and robustness. Ten healthy subjects and two lumbar spondylolisthesis patients were enrolled in a clinical trial. Measurement with automatic tracking of DVF provided information not seen in conventional X-ray. The results suggest the potential of the proposed system for clinical applications.

  11. Role of turbulence fluctuations on uncertainties of acoustic Doppler current profiler discharge measurements

    USGS Publications Warehouse

    Tarrab, Leticia; Garcia, Carlos M.; Cantero, Mariano I.; Oberg, Kevin

    2012-01-01

    This work presents a systematic analysis quantifying the role of turbulence fluctuations in the uncertainties (random errors) of acoustic Doppler current profiler (ADCP) discharge measurements from moving platforms. Data sets of three-dimensional flow velocities with high temporal and spatial resolution were generated from direct numerical simulation (DNS) of turbulent open channel flow. Dimensionless functions relating parameters that quantify the uncertainty in discharge measurements due to flow turbulence (relative variance and relative maximum random error) to the sampling configuration were developed from the DNS simulations and then validated with field-scale discharge measurements. The validated functions were used to evaluate the role of flow turbulence fluctuations in the uncertainties of ADCP discharge measurements. The results of this work indicate that random errors due to flow turbulence are significant when: (a) a low number of transects is used for a discharge measurement, and (b) measurements are made in shallow rivers using high boat velocity (short time for the boat to cross a flow turbulence structure).

  12. Removal of batch effects using distribution-matching residual networks.

    PubMed

    Shaham, Uri; Stanton, Kelly P; Zhao, Jun; Li, Huamin; Raddassi, Khadir; Montgomery, Ruth; Kluger, Yuval

    2017-08-15

    Sources of variability in experimentally derived data include measurement error in addition to the physical phenomena of interest. This measurement error is a combination of systematic components originating from the measuring instrument and random measurement errors. Several novel biological technologies, such as mass cytometry and single-cell RNA-seq (scRNA-seq), are plagued with systematic errors that may severely affect statistical analysis if the data are not properly calibrated. We propose a novel deep learning approach for removing systematic batch effects. Our method is based on a residual neural network, trained to minimize the Maximum Mean Discrepancy between the multivariate distributions of two replicates, measured in different batches. We apply our method to mass cytometry and scRNA-seq datasets, and demonstrate that it effectively attenuates batch effects. Our code and data are publicly available at https://github.com/ushaham/BatchEffectRemoval.git. yuval.kluger@yale.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
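
    A minimal sketch of the squared MMD the network is trained to minimize, using a Gaussian kernel and the (biased) V-statistic estimator for brevity; the bandwidth and batch shapes are illustrative:

        import numpy as np

        def gaussian_mmd2(x, y, sigma=1.0):
            """Squared Maximum Mean Discrepancy between samples x and y
            (rows = observations) under a Gaussian RBF kernel."""
            def k(a, b):
                d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
                return np.exp(-d2 / (2 * sigma ** 2))
            return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

        # Example: two batches of 3-D measurements with a shifted mean.
        rng = np.random.default_rng(3)
        batch1 = rng.normal(0.0, 1.0, (200, 3))
        batch2 = rng.normal(0.3, 1.0, (200, 3))
        print(gaussian_mmd2(batch1, batch2))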

  13. Validation of a Biofeedback System for Wheelchair Propulsion Training

    PubMed Central

    Guo, Liyun; Kwarciak, Andrew M.; Rodriguez, Russell; Sarkar, Nilanjan; Richter, W. Mark

    2011-01-01

    This paper describes the design and validation of the OptiPush Biofeedback System, a commercially available, instrumented wheel system that records handrim biomechanics and provides stroke-by-stroke biofeedback and targeting for 11 propulsion variables. Testing of the system revealed accurate measurement of wheel angle (0.02% error), wheel speed (0.06% error), and handrim loads. The maximum errors in static force and torque measurements were 3.80% and 2.05%, respectively. Measured forces were also found to be highly linear (0.985 < slope < 1.011) and highly correlated to the reference forces (r2 > .998). Dynamic measurements of planar forces (Fx and Fy) and axle torque also had low error (−0.96 N to 0.83 N for force and 0.10 Nm to 0.14 Nm for torque) and were highly correlated (r > .986) with expected force and torque values. Overall, the OptiPush Biofeedback System provides accurate measurement of wheel dynamics and handrim biomechanics and may be a useful tool for improving manual wheelchair propulsion. PMID:22110977

  14. Estimation of the sea surface's two-scale backscatter parameters

    NASA Technical Reports Server (NTRS)

    Wentz, F. J.

    1978-01-01

    The relationship between the sea-surface normalized radar cross section and the friction velocity vector is determined using a parametric two-scale scattering model. The model parameters are found from a nonlinear maximum likelihood estimation. The estimation is based on aircraft scatterometer measurements and the sea-surface anemometer measurements collected during the JONSWAP '75 experiment. The estimates of the ten model parameters converge to realistic values that are in good agreement with the available oceanographic data. The rms discrepancy between the model and the cross section measurements is 0.7 dB, which is the rms sum of a 0.3 dB average measurement error and a 0.6 dB modeling error.

  15. A two-dimensional, finite-difference model of the high plains aquifer in southern South Dakota

    USGS Publications Warehouse

    Kolm, K.E.; Case, H. L.

    1983-01-01

    The High Plains aquifer is the principal source of water for irrigation, industry, municipalities, and domestic use in south-central South Dakota. The aquifer, composed of upper sandstone units of the Arikaree Formation, and the overlying Ogallala and Sand Hills Formations, was simulated using a two-dimensional, finite-difference computer model. The maximum difference between simulated and measured potentiometric heads was less than 60 feet (1- to 4-percent error). Two-thirds of the simulated potentiometric heads were within 26 feet of the measured values (3-percent error). The estimated saturated thickness, computed from simulated potentiometric heads, was within 25-percent error of the known saturated thickness for 95 percent of the study area. (USGS)

  16. Information systems as a tool to improve legal metrology activities

    NASA Astrophysics Data System (ADS)

    Rodrigues Filho, B. A.; Soratto, A. N. R.; Gonçalves, R. F.

    2016-07-01

    This study explores the importance of information systems applied to legal metrology as a tool to improve the control of measuring instruments used in trade. The information system implemented in Brazil has also helped in understanding and appraising the control of measurements through the behavior of the errors and deviations of instruments used in trade, allowing resources to be allocated wisely and leading to more effective planning and control in the legal metrology field. A case study analyzing the fuel sector is carried out in order to show the conformity of fuel dispensers with the maximum permissible errors. The statistics of measurement errors of 167,310 fuel dispensers of gasoline, ethanol and diesel used in the field were analyzed, demonstrating the conformity of the Brazilian fuel market with the legal requirements.

  17. Design and performance evaluation of a master controller for endovascular catheterization.

    PubMed

    Guo, Jin; Guo, Shuxiang; Tamiya, Takashi; Hirata, Hideyuki; Ishihara, Hidenori

    2016-01-01

    It is difficult to manipulate a flexible catheter to reach a target position within a patient's complicated and delicate vessels. However, few researchers have focused on controller designs that give much consideration to the natural catheter manipulation skills acquired during manual catheterization. In addition, existing catheter motion measurement methods complicate the design of the force feedback device. Commercially available systems are also too expensive, which makes them cost prohibitive for most hospitals. This paper presents a simple and cost-effective master controller for endovascular catheterization that allows interventionalists to apply the conventional pull, push and twist of the catheter used in current practice. A catheter-sensing unit (used to measure the motion of the catheter) and a force feedback unit (used to provide a sense of resistance force) are both presented. A camera was used to allow contactless measurement, avoiding additional friction, and the force feedback in the axial direction was provided by the magnetic force generated between permanent magnets and a powered coil. The controller was evaluated by first conducting comparison experiments to quantify the accuracy of the catheter-sensing unit, and then conducting several experiments to evaluate the force feedback unit. From the experimental results, the minimum and maximum errors of translational displacement were 0.003 mm (0.01%) and 0.425 mm (1.06%), respectively, with an average error of 0.113 mm (0.28%). In terms of rotational angles, the minimum and maximum errors were 0.39° (0.33%) and 7.2° (6%), respectively, with an average error of 3.61° (3.01%). The force resolution was approximately 25 mN, and a maximum current of 3 A generated a force of approximately 1.5 N. Based on an analysis of requirements and state-of-the-art computer-assisted and robot-assisted training systems for endovascular catheterization, a new master controller with a force feedback interface was proposed to preserve the natural endovascular catheterization skills of interventionalists.

  18. Characterization of the International Linear Collider damping ring optics

    NASA Astrophysics Data System (ADS)

    Shanks, J.; Rubin, D. L.; Sagan, D.

    2014-10-01

    A method is presented for characterizing the emittance dilution and dynamic aperture for an arbitrary closed lattice that includes guide field magnet errors, multipole errors and misalignments. This method, developed and tested at the Cornell Electron Storage Ring Test Accelerator (CesrTA), has been applied to the damping ring lattice for the International Linear Collider (ILC). The effectiveness of beam based emittance tuning is limited by beam position monitor (BPM) measurement errors, number of corrector magnets and their placement, and correction algorithm. The specifications for damping ring magnet alignment, multipole errors, number of BPMs, and precision in BPM measurements are shown to be consistent with the required emittances and dynamic aperture. The methodology is then used to determine the minimum number of position monitors that is required to achieve the emittance targets, and how that minimum depends on the location of the BPMs. Similarly, the maximum tolerable multipole errors are evaluated. Finally, the robustness of each BPM configuration with respect to random failures is explored.

  19. Bayesian correction for covariate measurement error: A frequentist evaluation and comparison with regression calibration.

    PubMed

    Bartlett, Jonathan W; Keogh, Ruth H

    2018-06-01

    Bayesian approaches for handling covariate measurement error are well established and yet arguably are still relatively little used by researchers. For some this is likely due to unfamiliarity or disagreement with the Bayesian inferential paradigm. For others a contributory factor is the inability of standard statistical packages to perform such Bayesian analyses. In this paper, we first give an overview of the Bayesian approach to handling covariate measurement error, and contrast it with regression calibration, arguably the most commonly adopted approach. We then argue why the Bayesian approach has a number of statistical advantages compared to regression calibration and demonstrate that implementing the Bayesian approach is usually quite feasible for the analyst. Next, we describe the closely related maximum likelihood and multiple imputation approaches and explain why we believe the Bayesian approach to generally be preferable. We then empirically compare the frequentist properties of regression calibration and the Bayesian approach through simulation studies. The flexibility of the Bayesian approach to handle both measurement error and missing data is then illustrated through an analysis of data from the Third National Health and Nutrition Examination Survey.
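
    As a concrete point of reference, here is a minimal simulation of the regression calibration approach for classical measurement error with two replicates. All parameter values are invented for illustration, and the Bayesian machinery discussed in the paper is not shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5_000
    x = rng.normal(0.0, 1.0, n)                    # true covariate (unobserved)
    y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, n)    # outcome model, beta = 0.5

    # Two error-prone replicates of x (classical measurement error)
    w1 = x + rng.normal(0.0, 0.8, n)
    w2 = x + rng.normal(0.0, 0.8, n)
    w = (w1 + w2) / 2

    # Naive regression of y on the error-prone mean is attenuated
    beta_naive = np.polyfit(w, y, 1)[0]

    # Regression calibration: rescale by the estimated reliability ratio
    sigma_u2 = np.var(w1 - w2, ddof=1) / 2          # error variance, one replicate
    lam = 1 - (sigma_u2 / 2) / np.var(w, ddof=1)    # reliability of the mean of two
    beta_rc = beta_naive / lam

    print(f"naive: {beta_naive:.3f}, calibrated: {beta_rc:.3f} (true 0.5)")
    ```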

  20. Using digital inpainting to estimate incident light intensity for the calculation of red blood cell oxygen saturation from microscopy images.

    PubMed

    Sové, Richard J; Drakos, Nicole E; Fraser, Graham M; Ellis, Christopher G

    2018-05-25

    Red blood cell oxygen saturation is an important indicator of oxygen supply to tissues in the body. Oxygen saturation can be measured by taking advantage of the spectroscopic properties of hemoglobin. When this technique is applied to transmission microscopy, the calculation of saturation requires determination of the incident light intensity at each pixel occupied by the red blood cell; this value is often approximated from a sequence of images as the maximum intensity over time. This method often fails when the red blood cells are moving too slowly, or when hematocrit is too high, since there is not a large enough gap between the cells to accurately calculate the incident intensity. A new method of approximating incident light intensity is proposed using digital inpainting. This novel approach estimates incident light intensity with an average percent error of approximately 3%, which exceeds the accuracy of the maximum-intensity-based method in most cases. The error in incident light intensity corresponds to a maximum error of approximately 2% saturation. Therefore, though this new method is computationally more demanding than the traditional technique, it can be used in cases where the maximum-intensity-based method fails (e.g. stationary cells), or when higher accuracy is required.
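
    An illustrative sketch of the idea using OpenCV's inpainting (not the authors' implementation; the file name, segmentation threshold, and inpainting radius are placeholders):

    ```python
    import cv2
    import numpy as np

    # Illustrative only: estimate the incident intensity under red blood
    # cells by inpainting the cell regions from the surrounding background.
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
    cell_mask = (frame < 100).astype(np.uint8) * 255       # assumed cell pixels

    incident = cv2.inpaint(frame, cell_mask, 5, cv2.INPAINT_TELEA)

    # Optical density per pixel, the quantity used downstream for saturation
    od = -np.log10(np.clip(frame, 1, None)
                   / np.clip(incident, 1, None).astype(float))
    ```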

  1. Aquatic habitat mapping with an acoustic doppler current profiler: Considerations for data quality

    USGS Publications Warehouse

    Gaeuman, David; Jacobson, Robert B.

    2005-01-01

    When mounted on a boat or other moving platform, acoustic Doppler current profilers (ADCPs) can be used to map a wide range of ecologically significant phenomena, including measures of fluid shear, turbulence, vorticity, and near-bed sediment transport. However, the instrument movement necessary for mapping applications can generate significant errors, many of which have not been adequately described. This report focuses on the mechanisms by which moving-platform errors are generated, and quantifies their magnitudes under typical habitat-mapping conditions. The potential for velocity errors caused by misalignment of the instrument's internal compass is widely recognized, but it has not previously been quantified for moving instruments. Numerical analyses show that even relatively minor compass misalignments can produce significant velocity errors, depending on the ratio of absolute instrument velocity to the target velocity and on the relative directions of instrument and target motion. A maximum absolute instrument velocity of about 1 m/s is recommended for most mapping applications. Lower velocities are appropriate when making bed velocity measurements, an emerging application that uses ADCP bottom-tracking to measure the velocity of sediment particles at the bed. The mechanisms by which heterogeneities in the flow velocity field generate horizontal velocity errors are also quantified, and some basic limitations in the effectiveness of standard error-detection criteria for identifying these errors are described. Bed velocity measurements may be particularly vulnerable to errors caused by spatial variability in the sediment transport field.
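
    To make the scaling concrete, here is a small worst-case model of the compass misalignment error (our simplification, not the report's full numerical analysis): removing the platform velocity with a heading rotated by δ leaves a residual of magnitude 2·V·sin(δ/2).

    ```python
    import numpy as np

    def misalignment_error(v_instrument, v_target, misalignment_deg):
        """Worst-case velocity error from a compass misalignment.

        Removing the platform velocity with a heading rotated by delta
        leaves a residual of magnitude 2*V*sin(delta/2). Illustrative only.
        """
        delta = np.radians(misalignment_deg)
        err = 2.0 * v_instrument * np.sin(delta / 2.0)
        return err, err / v_target

    err, frac = misalignment_error(v_instrument=1.0, v_target=0.25,
                                   misalignment_deg=2.0)
    print(f"error = {err:.3f} m/s = {frac:.0%} of the target velocity")
    ```

    Even a 2° misalignment at 1 m/s platform speed produces an error of roughly 14% of a 25 cm/s target velocity, which is why the ratio of instrument to target velocity matters so much.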

  2. Examining impulse-variability in overarm throwing.

    PubMed

    Urbin, M A; Stodden, David; Boros, Rhonda; Shannon, David

    2012-01-01

    The purpose of this study was to examine variability in overarm throwing velocity and spatial output error at various percentages of maximum, to test the prediction of an inverted-U function by impulse-variability theory and of a speed-accuracy trade-off by Fitts' law. Thirty subjects (16 skilled, 14 unskilled) were instructed to throw a tennis ball at seven percentages of their maximum velocity (40-100%) in random order (9 trials per condition) at a target 30 feet away. Throwing velocity was measured with a radar gun and interpreted as an index of overall systemic power output. Within-subject throwing velocity variability was examined using within-subjects repeated-measures ANOVAs (7 repeated conditions) with built-in polynomial contrasts. Spatial error was analyzed using mixed-model regression. Results indicated a quadratic fit, with variability in throwing velocity increasing from 40% up to 60%, where it peaked, and then decreasing at each subsequent interval to maximum (p < .001, η² = .555). There was no linear relationship between speed and accuracy. Overall, these data support the notion of an inverted-U function in overarm throwing velocity variability as both skilled and unskilled subjects approach maximum effort. However, these data do not support the notion of a speed-accuracy trade-off. The consistent demonstration of an inverted-U function associated with systemic power output variability indicates an enhanced capability to regulate aspects of force production and relative timing between segments as individuals approach maximum effort, even in a complex ballistic skill.

  3. Analysis of vestibular schwannoma size in multiple dimensions: a comparative cohort study of different measurement techniques.

    PubMed

    Varughese, J K; Wentzel-Larsen, T; Vassbotn, F; Moen, G; Lund-Johansen, M

    2010-04-01

    In this volumetric study of the vestibular schwannoma (VS), we evaluated the accuracy and reliability of several approximation methods that are in use, and determined the minimum volume difference that needs to be measured for it to be attributable to an actual difference rather than to retest error. We also found empirical proportionality coefficients for the different methods. DESIGN/SETTING AND PARTICIPANTS: Methodological study of three different VS measurement methods compared to a reference method based on serial slice volume estimates. The approximation methods were based on: (i) a single diameter, (ii) three orthogonal diameters, or (iii) the maximal slice area. Altogether 252 T1-weighted MRI images with gadolinium contrast, from 139 VS patients, were examined. The retest errors, in terms of relative percentages, were determined by undertaking repeated measurements on 63 scans for each method. Intraclass correlation coefficients were used to assess the agreement between each of the approximation methods and the reference method. The tendency for approximation methods to systematically overestimate or underestimate different-sized tumours was also assessed with the help of Bland-Altman plots. The most commonly used approximation method, the maximum diameter, was the least reliable measurement method and has inherent weaknesses that need to be considered: it showed greater retest error than area-based measurements (25% versus 15%), and it was the only approximation method that could not easily be converted into volumetric units. Area-based measurements can furthermore be more reliable for smaller volume differences than diameter-based measurements. All our findings suggest that the maximum diameter should not be used as an approximation method. We propose the use of measurement modalities that take into account growth in multiple dimensions instead.
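
    For orientation, the diameter-based approximations have simple geometric forms. The coefficients below are the idealized sphere/ellipsoid ones, not the empirical proportionality coefficients fitted in the study.

    ```python
    import numpy as np

    def volume_single_diameter(d_mm):
        """Sphere of diameter d: V = (pi/6) * d^3."""
        return np.pi / 6.0 * d_mm**3

    def volume_three_diameters(a_mm, b_mm, c_mm):
        """Ellipsoid with orthogonal diameters a, b, c: V = (pi/6) * a*b*c."""
        return np.pi / 6.0 * a_mm * b_mm * c_mm

    print(volume_single_diameter(20.0))             # mm^3
    print(volume_three_diameters(20.0, 15.0, 12.0))
    ```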

  4. Tropospheric profiles of wet refractivity and humidity from the combination of remote sensing data sets and measurements on the ground

    NASA Astrophysics Data System (ADS)

    Hurter, F.; Maier, O.

    2013-11-01

    We reconstruct atmospheric wet refractivity profiles for the western part of Switzerland with a least-squares collocation approach from data sets of (a) zenith path delays that are a byproduct of GPS (global positioning system) processing, (b) ground meteorological measurements, (c) wet refractivity profiles from radio occultations whose tangent points lie within the study area, and (d) radiosonde measurements. Wet refractivity is a parameter partly describing the propagation of electromagnetic waves and depends on the atmospheric parameters temperature and water vapour pressure. In addition, we have measurements of a lower V-band microwave radiometer at Payerne. It delivers temperature profiles at high temporal resolution, especially in the range from ground to 3000 m a.g.l., though vertical information content decreases with height. The temperature profiles together with the collocated wet refractivity profiles provide near-continuous dew point temperature or relative humidity profiles at Payerne for the study period from 2009 to 2011. In the validation of the humidity profiles, we adopt a two-step procedure. We first investigate the reconstruction quality of the wet refractivity profiles at the location of Payerne by comparing them to wet refractivity profiles computed from radiosonde profiles available for that location. We also assess the individual contributions of the data sets to the reconstruction quality and demonstrate a clear benefit from the data combination. Secondly, the accuracy of the conversion from wet refractivity to dew point temperature and relative humidity profiles with the radiometer temperature profiles is examined, comparing these also to radiosonde profiles. For the least-squares collocation solution combining GPS and ground meteorological measurements, we achieve the following error figures with respect to the radiosonde reference: the maximum median offset of the relative refractivity error is -16%, with quartiles of 5% to 40%, for the lower troposphere. We further added 189 radio occultations that met our requirements. They mostly improved the accuracy in the upper troposphere: maximum median offsets decreased from 120% relative error to 44% at 8 km height. Dew point temperature profiles obtained after the conversion with radiometer temperatures compare to radiosonde profiles as follows: absolute dew point temperature errors in the lower troposphere have a maximum median offset of -2 K and maximum quartiles of 4.5 K. For relative humidity, we get a maximum mean offset of 7.3%, with standard deviations of 12-20%. The methodology presented allows us to reconstruct humidity profiles at any location where temperature profiles, but no atmospheric humidity measurements other than GPS, are available. Additional data sets of wet refractivity are shown to be easily integrated into the framework and to strongly aid the reconstruction. Since the data sets used are all operational and available in near-real time, we envisage the methodology of this paper as a tool for nowcasting of clouds and rain and for understanding processes in the boundary layer and at its top.
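
    The conversion step can be sketched compactly. Assuming the Smith-Weintraub form N_wet = 3.73e5 · e / T² (e in hPa, T in K) and a Magnus-type saturation pressure (the paper's exact constants may differ), wet refractivity plus temperature yields vapour pressure, relative humidity, and dew point:

    ```python
    import numpy as np

    def wet_refractivity_to_humidity(n_wet, temp_k):
        """Convert wet refractivity + temperature to RH (%) and dew point (C).

        Assumes N_wet = 3.73e5 * e / T^2 and a Magnus saturation formula;
        constants are common textbook values, not necessarily the paper's.
        """
        e = n_wet * temp_k**2 / 3.73e5            # water vapour pressure, hPa
        t_c = temp_k - 273.15
        e_sat = 6.112 * np.exp(17.62 * t_c / (243.12 + t_c))
        rh = 100.0 * e / e_sat                    # relative humidity, %
        ln_ratio = np.log(e / 6.112)
        t_dew = 243.12 * ln_ratio / (17.62 - ln_ratio)  # dew point, deg C
        return rh, t_dew

    print(wet_refractivity_to_humidity(n_wet=60.0, temp_k=288.15))
    ```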

  5. Ares I Static Tests Design

    NASA Technical Reports Server (NTRS)

    Carson, William; Lindemuth, Kathleen; Mich, John; White, K. Preston; Parker, Peter A.

    2009-01-01

    Probabilistic engineering design enhances safety and reduces costs by incorporating risk assessment directly into the design process. In this paper, we assess the format of the quantitative metrics for the vehicle which will replace the Space Shuttle, the Ares I rocket. Specifically, we address the metrics for in-flight measurement error in the vector position of the motor nozzle, dictated by limits on guidance, navigation, and control systems. Analyses include the propagation of error from measured to derived parameters, the time-series of dwell points for the duty cycle during static tests, and commanded versus achieved yaw angle during tests. Based on these analyses, we recommend a probabilistic template for specifying the maximum error in angular displacement and radial offset for the nozzle-position vector. Criteria for evaluating individual tests and risky decisions also are developed.

  6. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
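
    A toy version of the underdetermined estimation setting may help fix ideas: more health parameters than sensors, a Gaussian prior, and the MAP estimate in closed form. Dimensions and values are invented; this is not NASA's engine model, and the paper's reduced-order tuner selection routine is not implemented here.

    ```python
    import numpy as np

    # Toy underdetermined MAP estimate: y = H x + v, prior x ~ N(0, P),
    # noise v ~ N(0, R). All dimensions and values are placeholders.
    rng = np.random.default_rng(2)
    n_health, n_sensors = 10, 6                  # more unknowns than sensors
    H = rng.normal(size=(n_sensors, n_health))
    P = np.eye(n_health)                         # prior covariance
    R = 0.01 * np.eye(n_sensors)                 # sensor noise covariance

    x_true = rng.normal(size=n_health)
    y = H @ x_true + rng.multivariate_normal(np.zeros(n_sensors), R)

    # MAP estimate: x_hat = (H' R^-1 H + P^-1)^-1 H' R^-1 y
    Rinv = np.linalg.inv(R)
    A = H.T @ Rinv @ H + np.linalg.inv(P)
    x_map = np.linalg.solve(A, H.T @ Rinv @ y)
    print("estimation error:", np.round(x_map - x_true, 2))
    ```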

  7. Robust Methods for Moderation Analysis with a Two-Level Regression Model.

    PubMed

    Yang, Miao; Yuan, Ke-Hai

    2016-01-01

    Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
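
    As a rough illustration of the Huber-type branch of the paper, here is a robust fit of a moderation model with statsmodels' RLM on simulated heavy-tailed data; the paper's two-level formulation and its Student's t likelihood are not reproduced.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 500
    x = rng.normal(size=n)                  # predictor
    z = rng.normal(size=n)                  # moderator
    e = rng.standard_t(df=3, size=n)        # heavy-tailed errors
    y = 0.5 + 0.4 * x + 0.3 * z + 0.25 * x * z + e

    X = sm.add_constant(np.column_stack([x, z, x * z]))
    fit_ols = sm.OLS(y, X).fit()
    fit_rob = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()

    print("OLS  :", np.round(fit_ols.params, 3))
    print("Huber:", np.round(fit_rob.params, 3))
    ```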

  8. A partial least squares based spectrum normalization method for uncertainty reduction for laser-induced breakdown spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou

    2013-10-01

    A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement in both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R²), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
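
    A compact sketch of the idea: predict the shot-to-shot fluctuation from plasma-state descriptors with PLS and divide it out. The variable names and the simulated relationship are ours, not the paper's model.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(4)
    n_shots = 200
    plasma_state = rng.normal(size=(n_shots, 3))   # stand-ins for T, n_e, n_tot
    true_signal = 100.0
    intensity = true_signal * (1.0
                               + plasma_state @ np.array([0.05, 0.03, 0.02])
                               + 0.01 * rng.normal(size=n_shots))

    pls = PLSRegression(n_components=2).fit(plasma_state, intensity)
    predicted = pls.predict(plasma_state).ravel()
    normalized = intensity / (predicted / predicted.mean())

    rsd = lambda v: v.std(ddof=1) / v.mean() * 100
    print(f"RSD before: {rsd(intensity):.2f} %, after: {rsd(normalized):.2f} %")
    ```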

  9. Spectral contaminant identifier for off-axis integrated cavity output spectroscopy measurements of liquid water isotopes

    NASA Astrophysics Data System (ADS)

    Brian Leen, J.; Berman, Elena S. F.; Liebson, Lindsay; Gupta, Manish

    2012-04-01

    Developments in cavity-enhanced absorption spectrometry have made it possible to measure water isotopes using faster, more cost-effective field-deployable instrumentation. Several groups have attempted to extend this technology to measure water extracted from plants and found that other extracted organics absorb light at frequencies similar to that absorbed by the water isotopomers, leading to δ2H and δ18O measurement errors (Δδ2H and Δδ18O). In this note, the off-axis integrated cavity output spectroscopy (ICOS) spectra of stable isotopes in liquid water are analyzed to determine the presence of interfering absorbers that lead to erroneous isotope measurements. The baseline offset of the spectra is used to calculate a broadband spectral metric, mBB, and the mean subtracted fit residuals in two regions of interest are used to determine a narrowband metric, mNB. These metrics are used to correct for Δδ2H and Δδ18O. The method was tested on 14 instruments, and Δδ18O was found to scale linearly with contaminant concentration for both narrowband (e.g., methanol) and broadband (e.g., ethanol) absorbers, while Δδ2H scaled linearly with narrowband and as a polynomial with broadband absorbers. Additionally, the isotope errors scaled logarithmically with mNB. Using the isotope error versus mNB and mBB curves, Δδ2H and Δδ18O resulting from methanol contamination were corrected to a maximum mean absolute error of 0.93 ‰ and 0.25 ‰, respectively, while Δδ2H and Δδ18O from ethanol contamination were corrected to a maximum mean absolute error of 1.22 ‰ and 0.22 ‰. Large variation between instruments indicates that the sensitivities must be calibrated for each individual isotope analyzer. These results suggest that properly calibrated interference metrics can be used to correct for polluted samples and extend off-axis ICOS measurements of liquid water to include plant waters, soil extracts, wastewater, and alcoholic beverages. The general technique may also be extended to other laser-based analyzers, including methane and carbon dioxide isotope sensors.

  10. Optics measurement algorithms and error analysis for the proton energy frontier

    NASA Astrophysics Data System (ADS)

    Langner, A.; Tomás, R.

    2015-03-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters, and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed; due to the improved algorithms, the derived optical parameters have significantly higher precision, with average error bars decreased by a factor of three to four. This allowed the calculation of β* values and proved fundamental to understanding the emittance evolution during the energy ramp.

  11. Analysis of variance to assess statistical significance of Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes.

    PubMed

    Makeyev, Oleksandr; Joe, Cody; Lee, Colin; Besio, Walter G

    2017-07-01

    Concentric ring electrodes have shown promise in non-invasive electrophysiological measurement demonstrating their superiority to conventional disc electrodes, in particular, in accuracy of Laplacian estimation. Recently, we have proposed novel variable inter-ring distances concentric ring electrodes. Analytic and finite element method modeling results for linearly increasing distances electrode configurations suggested they may decrease the truncation error resulting in more accurate Laplacian estimates compared to currently used constant inter-ring distances configurations. This study assesses statistical significance of Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes. Full factorial design of analysis of variance was used with one categorical and two numerical factors: the inter-ring distances, the electrode diameter, and the number of concentric rings in the electrode. The response variables were the Relative Error and the Maximum Error of Laplacian estimation computed using a finite element method model for each of the combinations of levels of three factors. Effects of the main factors and their interactions on Relative Error and Maximum Error were assessed and the obtained results suggest that all three factors have statistically significant effects in the model confirming the potential of using inter-ring distances as a means of improving accuracy of Laplacian estimation.
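
    A schematic of this full factorial analysis in code. The responses below are simulated placeholders standing in for the finite element model outputs, so only the structure of the ANOVA matches the study.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(5)
    rows = []
    for dist in ("constant", "linear"):          # inter-ring distances (categorical)
        for diam in (1.0, 2.0, 3.0):             # electrode diameter (numerical)
            for rings in (2, 3, 4):              # number of concentric rings
                for _ in range(5):               # replicates
                    err = (10.0 - 2.0 * (dist == "linear") - 1.5 * rings
                           + 0.5 * diam + rng.normal(scale=0.5))
                    rows.append((dist, diam, rings, err))
    df = pd.DataFrame(rows, columns=["distances", "diameter", "rings", "rel_error"])

    model = ols("rel_error ~ C(distances) * diameter * rings", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))       # main effects and interactions
    ```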

  12. A Measurement of Gravitational Lensing of the Cosmic Microwave Background by Galaxy Clusters Using Data from the South Pole Telescope

    DOE PAGES

    Baxter, E. J.; Keisler, R.; Dodelson, S.; ...

    2015-06-22

    Clusters of galaxies are expected to gravitationally lens the cosmic microwave background (CMB) and thereby generate a distinct signal in the CMB on arcminute scales. Measurements of this effect can be used to constrain the masses of galaxy clusters with CMB data alone. Here we present a measurement of lensing of the CMB by galaxy clusters using data from the South Pole Telescope (SPT). We also develop a maximum likelihood approach to extract the CMB cluster lensing signal and validate the method on mock data. We quantify the effects on our analysis of several potential sources of systematic error and find that they generally act to reduce the best-fit cluster mass. It is estimated that this bias to lower cluster mass is roughly 0.85σ in units of the statistical error bar, although this estimate should be viewed as an upper limit. Furthermore, we apply our maximum likelihood technique to 513 clusters selected via their Sunyaev-Zeldovich (SZ) signatures in SPT data, and rule out the null hypothesis of no lensing at 3.1σ. The lensing-derived mass estimate for the full cluster sample is consistent with that inferred from the SZ flux: M_200,lens = 0.83^{+0.38}_{-0.37} M_200,SZ (68% C.L., statistical error only).

  13. Error analysis of 3D-PTV through unsteady interfaces

    NASA Astrophysics Data System (ADS)

    Akutina, Yulia; Mydlarski, Laurent; Gaskin, Susan; Eiff, Olivier

    2018-03-01

    The feasibility of stereoscopic flow measurements through an unsteady optical interface is investigated. Position errors produced by a wavy optical surface are determined analytically, as are the optimal viewing angles of the cameras to minimize such errors. Two methods of measuring the resulting velocity errors are proposed. These methods are applied to 3D particle tracking velocimetry (3D-PTV) data obtained through the free surface of a water flow within a cavity adjacent to a shallow channel. The experiments were performed using two sets of conditions, one having no strong surface perturbations, and the other exhibiting surface gravity waves. In the latter case, the amplitude of the gravity waves was 6% of the water depth, resulting in water surface inclinations of about 0.2°. (The water depth is used herein as a relevant length scale, because the measurements are performed in the entire water column. In a more general case, the relevant scale is the maximum distance from the interface to the measurement plane, H, which here is the same as the water depth.) It was found that the contribution of the waves to the overall measurement error is low. The absolute position errors of the system were moderate (1.2% of H). However, given that the velocity is calculated from the relative displacement of a particle between two frames, the errors in the measured water velocities were reasonably small, because the error in the velocity is the relative position error over the average displacement distance. The relative position error was measured to be 0.04% of H, resulting in small velocity errors of 0.3% of the free-stream velocity (equivalent to 1.1% of the average velocity in the domain). It is concluded that even though the absolute positions to which the velocity vectors are assigned is distorted by the unsteady interface, the magnitude of the velocity vectors themselves remains accurate as long as the waves are slowly varying (have low curvature). The stronger the disturbances on the interface are (high amplitude, short wave length), the smaller is the distance from the interface at which the measurements can be performed.

  14. Improvement of GPS radio occultation retrieval error of E region electron density: COSMIC measurement and IRI model simulation

    NASA Astrophysics Data System (ADS)

    Wu, Kang-Hung; Su, Ching-Lun; Chu, Yen-Hsyang

    2015-03-01

    In this article, we use the International Reference Ionosphere (IRI) model to simulate the temporal and spatial distributions of global E region electron densities retrieved by the FORMOSAT-3/COSMIC satellites by means of the GPS radio occultation (RO) technique. Despite regional discrepancies in the magnitudes of the E region electron density, the IRI model simulations can, on the whole, describe the COSMIC measurements in quality and quantity. On the basis of the global ionosonde network and the IRI model, the retrieval errors of the global COSMIC-measured E region peak electron density (NmE) from July 2006 to July 2011 are examined and simulated. The COSMIC measurement and the IRI model simulation both reveal that the magnitudes of the percentage error (PE) and root-mean-square error (RMSE) of the relative RO retrieval errors of the NmE values depend on local time (LT) and geomagnetic latitude, with minima in the early morning and at high latitudes and maxima in the afternoon and at middle latitudes. In addition, the seasonal variation of the PE and RMSE values appears to be latitude dependent. After removing the IRI model-simulated GPS RO retrieval errors from the original COSMIC measurements, the average values of the annual and monthly mean percentage errors of the RO retrieval errors of the COSMIC-measured E region electron density are substantially reduced, by factors of about 2.95 and 3.35, respectively, and the corresponding root-mean-square errors show average decreases of 15.6% and 15.4%, respectively. It is found that, with this process, the largest reductions in the PE and RMSE of the COSMIC-measured NmE occur at the equatorial anomaly latitudes 10°N-30°N in the afternoon from 14 to 18 LT, by factors of 25 and 2, respectively. Statistics show that the residual errors remaining in the corrected COSMIC-measured NmE vary in a range of -20% to 38%, which is comparable to or larger than the percentage errors of the IRI-predicted NmE, which fluctuate in a range of -6.5% to 20%.

  15. Characterizing error distributions for MISR and MODIS optical depth data

    NASA Astrophysics Data System (ADS)

    Paradise, S.; Braverman, A.; Kahn, R.; Wilson, B.

    2008-12-01

    The Multi-angle Imaging SpectroRadiometer (MISR) and Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's EOS satellites collect massive, long-term data records on aerosol amounts and particle properties. MISR and MODIS have different but complementary sampling characteristics. In order to realize the maximum scientific benefit from these data, the nature of their error distributions must be quantified and understood so that discrepancies between them can be rectified and their information combined in the most beneficial way. By 'error' we mean all sources of discrepancy between the true value of the quantity of interest and the measured value, including instrument measurement errors, artifacts of retrieval algorithms, and differential spatial and temporal sampling characteristics. Previously in [Paradise et al., Fall AGU 2007: A12A-05] we presented a unified, global analysis and comparison of MISR and MODIS measurement biases and variances over the lives of the missions. We used AErosol RObotic NETwork (AERONET) data as ground truth and evaluated MISR and MODIS optical depth distributions relative to AERONET using simple linear regression. However, AERONET data are themselves instrumental measurements subject to sources of uncertainty. In this talk, we discuss results from an improved analysis of MISR and MODIS error distributions that uses errors-in-variables regression, accounting for uncertainties in both the dependent and independent variables. We demonstrate on optical depth data, but the method is generally applicable to other aerosol properties as well.
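
    For readers unfamiliar with errors-in-variables fitting, a minimal Deming regression (the simplest errors-in-variables estimator, assuming a known error-variance ratio δ) is sketched below on synthetic optical depth pairs; the actual MISR/MODIS analysis is considerably more elaborate.

    ```python
    import numpy as np

    def deming_fit(x, y, delta=1.0):
        """Errors-in-variables (Deming) regression of y on x.

        delta = var(e_y)/var(e_x); delta = 1 treats both instruments as
        equally noisy. Illustrative only.
        """
        mx, my = x.mean(), y.mean()
        sxx = np.sum((x - mx) ** 2)
        syy = np.sum((y - my) ** 2)
        sxy = np.sum((x - mx) * (y - my))
        slope = (syy - delta * sxx
                 + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy**2)
                 ) / (2 * sxy)
        return slope, my - slope * mx

    rng = np.random.default_rng(6)
    truth = rng.uniform(0.05, 0.6, 300)                 # "true" optical depth
    aeronet = truth + rng.normal(0, 0.02, 300)          # noisy ground truth
    satellite = 0.9 * truth + rng.normal(0, 0.02, 300)  # biased, noisy retrieval
    print(deming_fit(aeronet, satellite, delta=1.0))    # slope near 0.9
    ```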

  16. Determination of stability and control parameters of a light airplane from flight data using two estimation methods. [equation error and maximum likelihood methods

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1979-01-01

    Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.

  17. An intersecting chord method for minimum circumscribed sphere and maximum inscribed sphere evaluations of sphericity error

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Xu, Guanghua; Zhang, Qing; Liang, Lin; Liu, Dan

    2015-11-01

    As one of the Geometrical Product Specifications that are widely applied in industrial manufacturing and measurement, sphericity error synthetically characterizes a 3D structure and reflects the machining quality of a spherical workpiece. With increasing demands on the motion performance of spherical parts, sphericity error is becoming an indispensable component in the evaluation of form error. However, the evaluation of sphericity error is still considered a complex mathematical issue, and related research on the development of available models is lacking. In this paper, an intersecting chord method is first proposed to solve the minimum circumscribed sphere and maximum inscribed sphere evaluations of sphericity error. This new modelling method leverages chord relationships to replace the characteristic points, thereby significantly reducing the computational complexity and improving the computational efficiency. Using the intersecting chords to generate a virtual centre, the reference sphere in two concentric spheres is simplified as a space intersecting structure. The position of the virtual centre on the space intersecting structure is determined by characteristic chords, which may reduce the deviation between the virtual centre and the centre of the reference sphere. In addition, two experiments are used to verify the effectiveness of the proposed method with real data sets in Cartesian coordinates. The results indicate that the estimated errors are in perfect agreement with those of the published methods, while the computational efficiency is improved, a remarkable gain for the evaluation of sphericity error.
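
    For comparison, the conventional baseline that such methods improve on can be written in a few lines: fit a least-squares reference sphere, then take the spread of radial distances about its centre. This is only a reference implementation under that assumption, not the intersecting chord method itself.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def radial_residuals(params, pts):
        centre, r = params[:3], params[3]
        return np.linalg.norm(pts - centre, axis=1) - r

    def sphericity_error(pts):
        # Least-squares reference sphere, then peak-to-valley radial spread
        centre0 = pts.mean(axis=0)
        r0 = np.linalg.norm(pts - centre0, axis=1).mean()
        fit = least_squares(radial_residuals, np.append(centre0, r0), args=(pts,))
        radii = np.linalg.norm(pts - fit.x[:3], axis=1)
        return radii.max() - radii.min()

    rng = np.random.default_rng(7)
    u = rng.normal(size=(1000, 3))
    pts = 10.0 * u / np.linalg.norm(u, axis=1, keepdims=True)  # nominal R = 10 mm
    pts += rng.normal(scale=0.01, size=pts.shape)              # form deviation
    print(f"sphericity error: {sphericity_error(pts):.4f} mm")
    ```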

  18. Techniques for measurement of thoracoabdominal asynchrony

    NASA Technical Reports Server (NTRS)

    Prisk, G. Kim; Hammer, J.; Newth, Christopher J L.

    2002-01-01

    Respiratory motion measured by respiratory inductance plethysmography often deviates from the sinusoidal pattern assumed in the traditional Lissajous figure (loop) analysis used to determine thoracoabdominal asynchrony, or phase angle φ. We investigated six different time-domain methods of measuring φ, using simulated data with sinusoidal and triangular waveforms, phase shifts of 0-135 degrees, and 10% noise. The techniques were then used on data from 11 lightly anesthetized rhesus monkeys (Macaca mulatta; 7.6 +/- 0.8 kg; 5.7 +/- 0.5 years old), instrumented with a respiratory inductive plethysmograph and subjected to increasing levels of inspiratory resistive loading ranging from 5 to 1,000 cmH2O·L⁻¹·s⁻¹. The best results were obtained from cross-correlation and maximum linear correlation, with errors less than approximately 5 degrees from the actual phase angle in the simulated data. The worst performance was produced by the loop analysis, which in some cases was in error by more than 30 degrees. Compared to correlation, the other analysis techniques performed at an intermediate level. Maximum linear correlation and cross-correlation produced similar results on the data collected from monkeys (SD of the difference, 4.1 degrees), but all other techniques had a high SD of the difference compared to the correlation techniques. We conclude that phase angles are best measured using cross-correlation or maximum linear correlation, techniques that are independent of waveform shape and robust in the presence of noise.
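
    A minimal sketch of the cross-correlation technique on simulated sinusoidal signals (our toy signals, not the monkey data); the lag that maximizes the cross-correlation is converted to degrees of the breathing cycle:

    ```python
    import numpy as np

    def phase_angle_deg(ribcage, abdomen, fs, f_breath):
        """Thoracoabdominal phase angle via cross-correlation.

        Returns degrees of the breathing cycle, positive when the ribcage
        signal leads the abdomen. Waveform-shape independent, unlike the
        Lissajous loop analysis.
        """
        rc = ribcage - ribcage.mean()
        ab = abdomen - abdomen.mean()
        xc = np.correlate(rc, ab, mode="full")
        lag = np.argmax(xc) - (len(ab) - 1)     # lag in samples
        return -360.0 * f_breath * lag / fs

    fs, f_breath = 100.0, 0.5                    # Hz sampling, Hz breathing
    t = np.arange(0, 30, 1 / fs)
    rc = np.sin(2 * np.pi * f_breath * t)
    ab = np.sin(2 * np.pi * f_breath * t - np.radians(40))  # abdomen lags 40 deg
    print(f"{phase_angle_deg(rc, ab, fs, f_breath):.1f} deg")  # ~40
    ```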

  19. Relative dosimetrical verification in high dose rate brachytherapy using two-dimensional detector array IMatriXX

    PubMed Central

    Manikandan, A.; Biplab, Sarkar; David, Perianayagam A.; Holla, R.; Vivek, T. R.; Sujatha, N.

    2011-01-01

    For high dose rate (HDR) brachytherapy, independent treatment verification is needed to ensure that the treatment is performed as per prescription. This study demonstrates dosimetric quality assurance of the HDR brachytherapy using a commercially available two-dimensional ion chamber array called IMatriXX, which has a detector separation of 0.7619 cm. The reference isodose length, step size, and source dwell positional accuracy were verified. A total of 24 dwell positions, which were verified for positional accuracy gave a total error (systematic and random) of –0.45 mm, with a standard deviation of 1.01 mm and maximum error of 1.8 mm. Using a step size of 5 mm, reference isodose length (the length of 100% isodose line) was verified for single and multiple catheters of same and different source loadings. An error ≤1 mm was measured in 57% of tests analyzed. Step size verification for 2, 3, 4, and 5 cm was performed and 70% of the step size errors were below 1 mm, with maximum of 1.2 mm. The step size ≤1 cm could not be verified by the IMatriXX as it could not resolve the peaks in dose profile. PMID:21897562

  20. Skin Friction at Very High Reynolds Numbers in the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Watson, Ralph D.; Anders, John B.; Hall, Robert M.

    2006-01-01

    Skin friction coefficients were derived from measurements using standard measurement technologies on an axisymmetric cylinder in the NASA Langley National Transonic Facility (NTF) at Mach numbers from 0.2 to 0.85. The pressure gradient was nominally zero, the wall temperature was nominally adiabatic, and the ratio of boundary layer thickness to model diameter within the measurement region was 0.10 to 0.14, varying with distance along the model. Reynolds numbers based on momentum thicknesses ranged from 37,000 to 605,000. The measurements approximately doubled the range of available data for flat plate skin friction coefficients. Three different techniques were used to measure surface shear. The maximum error of Preston tube measurements was estimated to be 2.5 percent, while that of Clauser derived measurements was estimated to be approximately 5 percent. Direct measurements by skin friction balance proved to be subject to large errors and were not considered reliable.

  1. Monte Carlo studies of ocean wind vector measurements by SCATT: Objective criteria and maximum likelihood estimates for removal of aliases, and effects of cell size on accuracy of vector winds

    NASA Technical Reports Server (NTRS)

    Pierson, W. J.

    1982-01-01

    The scatterometer on the National Oceanic Satellite System (NOSS) is studied by means of Monte Carlo techniques to determine the effect of two additional antennas for alias (or ambiguity) removal by means of an objective criterion technique and a normalized maximum likelihood estimator. Cells nominally 10 km by 10 km, 10 km by 50 km, and 50 km by 50 km are simulated for winds of 4, 8, 12 and 24 m/s and incidence angles of 29, 39, 47, and 53.5 deg for 15 deg changes in direction. The normalized maximum likelihood estimate (MLE) is correct a large part of the time, but the objective criterion technique is recommended as a reserve, and more quickly computed, procedure. Both methods for alias removal depend on the differences in the present model function at upwind and downwind. For 10 km by 10 km cells, it is found that the MLE method introduces a correlation between wind speed errors and aspect angle (wind direction) errors that can be as high as 0.8 or 0.9, and that the wind direction errors are unacceptably large compared to those obtained for the SASS under similar assumptions.

  2. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    For optical sensors aboard Earth-orbiting satellites, such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS), the radiometric response in the Reflective Solar Bands (RSB) is assumed to be a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on orbit by observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
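
    The core least-squares step can be illustrated with a weighted fit of the quadratic response; the weights below are invented stand-ins for the count-noise and digitization contributions described in the abstract.

    ```python
    import numpy as np

    # Weighted least-squares fit of L = c0 + c1*dn + c2*dn^2 (weight = 1/sigma^2)
    rng = np.random.default_rng(8)
    dn = np.linspace(100, 4000, 40)
    c_true = (0.5, 2.0e-2, 3.0e-7)
    radiance = c_true[0] + c_true[1] * dn + c_true[2] * dn**2
    sigma = 0.01 * radiance + 0.5          # assumed per-point uncertainty
    radiance_obs = radiance + rng.normal(0.0, sigma)

    X = np.column_stack([np.ones_like(dn), dn, dn**2])
    W = np.diag(1.0 / sigma**2)
    coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ radiance_obs)
    cov = np.linalg.inv(X.T @ W @ X)       # coefficient covariance estimate
    print(coef, np.sqrt(np.diag(cov)))
    ```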

  3. Comparison of three-dimensional parameters of Halo CMEs using three cone models

    NASA Astrophysics Data System (ADS)

    Na, H.; Moon, Y.; Jang, S.; Lee, K.

    2012-12-01

    Halo coronal mass ejections (HCMEs) are a major cause of geomagnetic storms, and their three-dimensional structures are important for space weather. In this study, we compare three cone models: an elliptical cone model, an ice-cream cone model, and an asymmetric cone model. These models allow us to determine the three-dimensional parameters of HCMEs such as the radial speed, the angular width, and the angle (γ) between the sky plane and the cone axis. We compare these parameters obtained from the three models using 62 well-observed HCMEs observed by SOHO/LASCO from 2001 to 2002. We then obtain the root-mean-square error (RMS error) between the maximum measured projection speeds and the projection speeds calculated from the cone models. As a result, we find that the radial speeds obtained from the models are well correlated with one another (R > 0.84). The correlation coefficients between angular widths range from 0.04 to 0.53, and those between γ values from -0.15 to 0.47, which are much smaller than expected; this may be due to the different assumptions and methods. The RMS errors between the maximum measured projection speeds and the maximum estimated projection speeds of the elliptical cone model, the ice-cream cone model, and the asymmetric cone model are 213 km/s, 254 km/s, and 267 km/s, respectively. We also obtain the correlation coefficients between the source locations derived from the models and the flare locations (R > 0.75). Finally, we discuss the strengths and weaknesses of these models in terms of space weather applications.

  4. The application of a Grey Markov Model to forecasting annual maximum water levels at hydrological stations

    NASA Astrophysics Data System (ADS)

    Dong, Sheng; Chi, Kun; Zhang, Qiyi; Zhang, Xiangdong

    2012-03-01

    Compared with traditional real-time forecasting, this paper proposes a Grey Markov Model (GMM) to forecast the maximum water levels at hydrological stations in the estuary area. The GMM combines the Grey System and Markov theory into a higher-precision model. The GMM takes advantage of the Grey System to predict the trend values and uses the Markov theory to forecast fluctuation values, and thus gives forecast results incorporating two aspects of information. The procedure for forecasting annual maximum water levels with the GMM contains five main steps: 1) establish the GM(1,1) model based on the data series; 2) estimate the trend values; 3) establish a Markov Model based on the relative error series; 4) modify the relative errors obtained in step 2, and then obtain the relative errors of the second-order estimation; 5) compare the results with measured data and estimate the accuracy. The historical water level records (from 1960 to 1992) at Yuqiao Hydrological Station in the estuary area of the Haihe River near Tianjin, China, are utilized to calibrate and verify the proposed model according to the above steps. Every 25 years' data are regarded as a hydro-sequence. Eight groups of simulated results show reasonable agreement between the predicted values and the measured data. The GMM is also applied to 10 other hydrological stations in the same estuary. The forecast results for all of the hydrological stations are good or acceptable. The feasibility and effectiveness of this new forecasting model have been proved in this paper.
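
    Steps 1-2 (the GM(1,1) trend component) are compact enough to sketch; the Markov correction of the residuals in steps 3-4 is omitted, and the sample series below is hypothetical.

    ```python
    import numpy as np

    def gm11_forecast(x0, n_ahead=1):
        """GM(1,1) grey model fit and forecast (steps 1-2 of the procedure).

        Accumulate the series, estimate the development coefficient a and
        grey input b by least squares, then invert the time response.
        """
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                              # accumulated series
        z1 = 0.5 * (x1[1:] + x1[:-1])                   # mean generating sequence
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        k = np.arange(len(x0) + n_ahead)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x0_hat = np.diff(x1_hat, prepend=0.0)           # de-accumulate
        return x0_hat[-n_ahead:]

    levels = [3.1, 3.3, 3.2, 3.5, 3.6, 3.8, 3.7]        # hypothetical annual maxima
    print(gm11_forecast(levels, n_ahead=2))
    ```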

  5. Influence of Spatial Resolution in Three-dimensional Cine Phase Contrast Magnetic Resonance Imaging on the Accuracy of Hemodynamic Analysis

    PubMed Central

    Fukuyama, Atsushi; Isoda, Haruo; Morita, Kento; Mori, Marika; Watanabe, Tomoya; Ishiguro, Kenta; Komori, Yoshiaki; Kosugi, Takafumi

    2017-01-01

    Introduction: We aim to elucidate the effect of the spatial resolution of three-dimensional cine phase contrast magnetic resonance (3D cine PC MR) imaging on the accuracy of blood flow analysis, and to examine the optimal setting for spatial resolution using flow phantoms. Materials and Methods: The flow phantom has five types of acrylic pipes that represent human blood vessels (inner diameters: 15, 12, 9, 6, and 3 mm). The pipes were fixed with 1% agarose containing 0.025 mol/L gadolinium contrast agent. A blood-mimicking fluid with human blood property values was circulated through the pipes at a steady flow. Magnetic resonance (MR) images (three-directional phase images with speed information and magnitude images for shape information) were acquired using a 3-Tesla MR system and receiving coil. Temporal changes in spatially-averaged velocity and maximum velocity were calculated using hemodynamic analysis software. We calculated the error rates of the flow velocities based on the volume flow rates measured with a flowmeter and examined the measurement accuracy. Results: When the acrylic pipe was the size of the thoracoabdominal or cervical artery and the ratio of pixel size to pipe size was set at 30% or lower, spatially-averaged velocity measurements were highly accurate. When the pixel size ratio was set at 10% or lower, maximum velocity could be measured with high accuracy. It was difficult to accurately measure the maximum velocity of the 3-mm pipe, which was the size of a major intracranial artery, but the error for spatially-averaged velocity was 20% or less. Conclusions: The flow velocity measurement accuracy of 3D cine PC MR imaging for pipes with inner sizes equivalent to vessels in the cervical and thoracoabdominal arteries is good. The flow velocity accuracy for the pipe with a 3-mm diameter, equivalent to major intracranial arteries, is poor for maximum velocity, but relatively good for spatially-averaged velocity.

  6. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    PubMed

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
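
    In symbols (our notation, not necessarily the paper's), the construction is the standard constrained entropy maximization:

    ```latex
    % Maximize the entropy functional subject to a fixed expected error:
    %   \max_{p} \; -\int p(\theta)\,\ln p(\theta)\,d\theta
    %   \text{s.t.}\;\; \int p(\theta)\,E(\theta)\,d\theta = \bar{E},
    %   \qquad \int p(\theta)\,d\theta = 1.
    % The Lagrange multiplier solution is the canonical (Gibbs) form:
    p(\theta \mid \beta) \;=\; \frac{e^{-\beta E(\theta)}}
                                    {\int e^{-\beta E(\theta')}\,d\theta'},
    \qquad \beta \text{ fixed by } \int p(\theta \mid \beta)\,E(\theta)\,d\theta = \bar{E}.
    ```

    The sensitivity factor β plays the role of an inverse temperature: the larger the allowed expected error, the flatter (more conservative) the resulting distribution over parameter values.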

  7. Degradation data analysis based on a generalized Wiener process subject to measurement error

    NASA Astrophysics Data System (ADS)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated by a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and the failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is carried out to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach derives a reasonable result with enhanced inference precision.
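
    A simulation sketch of one such degradation path under an assumed power-law time scale Λ(t) = t^q; all parameter values are illustrative, and the MLE fitting step is not shown.

    ```python
    import numpy as np

    # One path of Y(t) = a*Lambda(t) + sigma_b*B(Lambda(t)) + eps,
    # with Lambda(t) = t**q. All values are placeholders.
    rng = np.random.default_rng(9)
    t = np.linspace(0.0, 100.0, 201)
    a, sigma_b, sigma_eps, q = 0.05, 0.10, 0.05, 1.2

    lam = t**q                                     # transformed time scale
    d_lam = np.diff(lam, prepend=0.0)
    brownian = np.cumsum(rng.normal(0.0, np.sqrt(d_lam)))   # B(Lambda(t))
    y_true = a * lam + sigma_b * brownian          # latent degradation
    y_obs = y_true + rng.normal(0.0, sigma_eps, size=t.size)  # with meas. error

    threshold = 8.0                                # assumed failure level
    crossed = y_true >= threshold
    print("first hitting time:", t[np.argmax(crossed)] if crossed.any() else None)
    ```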

  8. Analytical quality goals derived from the total deviation from patients' homeostatic set points, with a margin for analytical errors.

    PubMed

    Bolann, B J; Asberg, A

    2004-01-01

    The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate the interpretation of results and the comparison of results with clinical decision limits. In this study, the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) the stable systematic error should be approximately 0, and 2) a systematic error that would be detected by the control program with 90% probability should not be larger than half the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
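
    The ~12% figure can be checked numerically. Under our reading of the limiting case (analytical SD of 0.15 in units of the intra-individual SD, plus an undetected systematic error of half the combined SD), a simulation gives an inflation of roughly 12-13%:

    ```python
    import numpy as np

    # Units of the intra-individual SD (sigma_i = 1); assumed limiting case.
    rng = np.random.default_rng(10)
    sigma_i, sigma_a = 1.0, 0.15
    combined = np.hypot(sigma_i, sigma_a)
    bias = 0.5 * combined                       # worst undetected systematic error

    dev = np.abs(bias + rng.normal(0.0, combined, 2_000_000))
    d95 = np.quantile(dev, 0.95)                # 95% maximum absolute deviation
    d95_bio = 1.96 * sigma_i                    # biology alone
    print(f"inflation: {d95 / d95_bio - 1:.1%}")
    ```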

  9. Building a kinetic Monte Carlo model with a chosen accuracy.

    PubMed

    Bhute, Vijesh J; Chatterjee, Abhijit

    2013-06-28

    The kinetic Monte Carlo (KMC) method is a popular modeling approach for reaching large materials length and time scales. The KMC dynamics is erroneous when atomic processes that are relevant to the dynamics are missing from the KMC model. Recently, we developed the first error measure for KMC in Bhute and Chatterjee [J. Chem. Phys. 138, 084103 (2013)]. The error measure, which is given in terms of the probability that a missing process will be selected in the correct dynamics, requires estimation of the missing rate. In this work, we present an improved procedure for estimating the missing rate. The estimate found using the new procedure is within an order of magnitude of the correct missing rate, unlike our previous approach, where the estimate was larger by orders of magnitude. This enables one to find the error in the KMC model more accurately. In addition, we find the time for which the KMC model can be used before a maximum error in the dynamics is reached.

  10. The CO2 laser frequency stability measurements

    NASA Technical Reports Server (NTRS)

    Johnson, E. H., Jr.

    1973-01-01

    Carbon dioxide laser frequency stability data are considered for a receiver design that relates to maximum Doppler frequency and its rate of change. Results show that an adequate margin exists in terms of data acquisition, Doppler tracking, and bit error rate as they relate to laser stability and transmitter power.

  11. A Comparison of Item Selection Procedures Using Different Ability Estimation Methods in Computerized Adaptive Testing Based on the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Ho, Tsung-Han

    2010-01-01

    Computerized adaptive testing (CAT) provides a highly efficient alternative to the paper-and-pencil test. By selecting items that match examinees' ability levels, CAT not only can shorten test length and administration time but it can also increase measurement precision and reduce measurement error. In CAT, maximum information (MI) is the most…
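
    A hedged sketch of the MI criterion itself, using the two-parameter logistic (2PL) information function for simplicity (the study used the generalized partial credit model); the item bank and ability estimate are simulated.

    ```python
    import numpy as np

    def item_information_2pl(theta, a, b):
        """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)."""
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a**2 * p * (1.0 - p)

    rng = np.random.default_rng(11)
    a = rng.uniform(0.5, 2.0, 100)      # discrimination parameters (item bank)
    b = rng.normal(0.0, 1.0, 100)       # difficulty parameters

    theta_hat = 0.3                      # current ability estimate
    administered = {4, 17, 52}           # items already given
    info = item_information_2pl(theta_hat, a, b)
    info[list(administered)] = -np.inf   # never reselect an item
    next_item = int(np.argmax(info))     # maximum information criterion
    print(next_item, info[next_item])
    ```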

  12. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  13. Target thrust measurement for applied-field magnetoplasmadynamic thruster

    NASA Astrophysics Data System (ADS)

    Wang, B.; Yang, W.; Tang, H.; Li, Z.; Kitaeva, A.; Chen, Z.; Cao, J.; Herdrich, G.; Zhang, K.

    2018-07-01

    In this paper, we present a flat target thrust stand designed to measure the thrust of a steady-state applied-field magnetoplasmadynamic thruster (AF-MPDT). In our experiments we varied the target-thruster distance and the target size to analyze their influence on the target thrust measurement. The obtained thrust-distance curves increase to a local maximum and then decrease with increasing distance, which means that the plume of the AF-MPDT can still accelerate outside the thruster exit. The peak positions are related to the target sizes: larger targets move the peak positions further from the thruster and decrease the measurement errors. To further improve the reliability of the measurement results, a thermal equilibrium assumption combined with Knudsen's cosine law is adopted to analyze the error caused by the backstream of plume particles. Under this assumption, the error caused by particle backflow is no more than 3.6%, and the largest difference between the measured thrust and the theoretical thrust is 14%. Moreover, it was verified that the target can disturb the operation of the AF-MPD thruster, but the influence on the thrust measurement result was no more than 1% in our experiment.

  14. Impact of Uncertainties in Exposure Assessment on Thyroid Cancer Risk among Persons in Belarus Exposed as Children or Adolescents Due to the Chernobyl Accident.

    PubMed

    Little, Mark P; Kwon, Deukwoo; Zablotska, Lydia B; Brenner, Alina V; Cahoon, Elizabeth K; Rozhko, Alexander V; Polyanskaya, Olga N; Minenko, Victor F; Golovanov, Ivan; Bouville, André; Drozdovitch, Vladimir

    2015-01-01

    The excess incidence of thyroid cancer in Ukraine and Belarus observed a few years after the Chernobyl accident is considered to be largely the result of 131I released from the reactor. Although the Belarus thyroid cancer prevalence data have been analyzed previously, no account was taken of dose measurement error. We examined dose-response patterns in a thyroid screening prevalence cohort of 11,732 persons aged under 18 at the time of the accident, diagnosed during 1996-2004, who had direct thyroid 131I activity measurement and were resident in the most radioactively contaminated regions of Belarus. Three methods of dose-error correction (regression calibration, Monte Carlo maximum likelihood, Bayesian Markov chain Monte Carlo) were applied. There was a statistically significant (p<0.001) increasing dose-response for prevalent thyroid cancer, irrespective of the regression-adjustment method used. Without adjustment for dose errors the excess odds ratio was 1.51 Gy⁻¹ (95% CI 0.53, 3.86), which was reduced by 13% when regression-calibration adjustment was used, to 1.31 Gy⁻¹ (95% CI 0.47, 3.31). The Monte Carlo maximum likelihood method yielded an excess odds ratio of 1.48 Gy⁻¹ (95% CI 0.53, 3.87), about 2% lower than the unadjusted analysis. The Bayesian method yielded a maximum posterior excess odds ratio of 1.16 Gy⁻¹ (95% BCI 0.20, 4.32), 23% lower than the unadjusted analysis. There were borderline significant (p = 0.053-0.078) indications of downward curvature in the dose response, depending on the adjustment method used, and borderline significant (p = 0.102) modification of the radiation dose trend by gender, but no significant modification by age at the time of the accident or age at screening (p>0.2). In summary, the relatively small contribution of unshared classical dose error in the current study results in comparatively modest effects on the regression parameters.
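    As a sketch of the first of these corrections, regression calibration under a classical additive error model replaces each measured dose with the conditional expectation of the true dose given the measurement; the normal-model shrinkage below is a textbook simplification, not the study's actual dosimetry, and all names and data are ours:

        import numpy as np

        def regression_calibration(measured, error_var):
            # Classical error model W = X + U with U ~ N(0, error_var):
            # E[X | W] = mean + shrink * (W - mean), shrink = var(X) / var(W)
            mu_w = measured.mean()
            var_w = measured.var(ddof=1)
            var_x = max(var_w - error_var, 1e-12)
            return mu_w + (var_x / var_w) * (measured - mu_w)

        rng = np.random.default_rng(0)
        true_dose = rng.lognormal(mean=-1.0, sigma=0.8, size=5000)  # Gy, synthetic
        measured = true_dose + rng.normal(0.0, 0.3, size=5000)
        calibrated = regression_calibration(measured, error_var=0.3**2)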

  15. Comparative interpretations of renormalization inversion technique for reconstructing unknown emissions from measured atmospheric concentrations

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory

    2017-04-01

    The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to other inversion techniques based on the principles of regularization, Bayesian inference, minimum norm, maximum entropy on the mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a manner similar to the other techniques, with a practical choice of a priori information and error statistics, while eliminating the need for additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice for the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made by using real measurements from a continuous point release conducted in the Fusion Field Trials, Dugway Proving Ground, Utah.
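    For orientation, a weighted minimum-norm reconstruction of the family this abstract compares against can be written in a few lines, with the weight matrix playing the role of the a priori (background) covariance and A W Aᵀ the weighted Gram matrix. This is an illustrative member of that family, not the renormalization algorithm itself, and all data below are synthetic:

        import numpy as np

        def weighted_minimum_norm(A, mu, W):
            # s = W A^T (A W A^T)^{-1} mu : minimum-W-norm solution of A s = mu
            G = A @ W @ A.T                      # weighted Gram matrix
            return W @ A.T @ np.linalg.solve(G, mu)

        rng = np.random.default_rng(1)
        A = rng.random((5, 50))                  # 5 receptors, 50 source cells
        s_true = np.zeros(50); s_true[17] = 2.0  # single point release
        mu = A @ s_true                          # noise-free concentrations
        W = np.diag(rng.random(50) + 0.5)        # a priori weights (assumed)
        s_hat = weighted_minimum_norm(A, mu, W)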

  16. An Upper Bound on Orbital Debris Collision Probability When Only One Object has Position Uncertainty Information

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    Upper bounds on the high-speed satellite collision probability, Pc, have been investigated. Previous methods assume that an individual position error covariance matrix is available for each object, the two matrices being combined into a single relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum Pc. If error covariance information was available for only one of the two objects, either some default shape had to be used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but useful upper bound on Pc. There are various avenues along which an upper bound on the high-speed satellite collision probability has been pursued. Typically, for the collision-plane representation of the problem, the predicted miss position in the collision plane is assumed fixed. Then the shape (aspect ratio of the ellipse), the size (scaling of the standard deviations) or the orientation (rotation of the ellipse principal axes) of the combined position error ellipse is varied to obtain a maximum Pc. Regardless of the exact details of the approach, previously presented methods all assume that an individual position error covariance matrix is available for each object and that the two are combined into a single relative position error covariance matrix, which is then modified according to the chosen scheme to arrive at a maximum Pc. But what if error covariance information for one of the two objects is not available? In that situation the analyst has commonly defaulted to the case in which only the relative miss position and velocity are known, without any corresponding state error covariance information. The usual methods of finding a maximum Pc do no good because the analyst has no knowledge of the combined relative position error covariance matrix. It is reasonable to think that, given no covariance information, an analyst might still attempt to determine the error covariance matrix that results in an upper bound on Pc. Without some limits on the shape, size and orientation of the unknown covariance matrix, the limiting case is a degenerate ellipse lying along the relative miss vector in the collision plane. Unless the miss distance is exceptionally large or the at-risk object is exceptionally small, this method yields a maximum Pc too large to be of practical use. For example, assume that the miss distance equals the current ISS alert volume along-track distance of ±25 kilometers and that the at-risk area has a 70 meter radius. The maximum (degenerate ellipse) Pc is about 0.00136. At 40 kilometers, the maximum Pc would be 0.00085, still almost an order of magnitude larger than the ISS maneuver threshold of 0.0001. In fact, a miss distance of almost 340 kilometers is necessary to reduce the maximum Pc associated with this degenerate ellipse to the ISS maneuver threshold value. Such a result is frequently of no practical value to the analyst.
Some improvement may be made by realizing that while the position error covariance matrix of one of the objects (usually the debris object) may not be known, the position error covariance matrix of the other object (usually the asset) is almost always available. Making use of the position error covariance information for the one object provides an improvement in finding a maximum Pc which, in some cases, may offer real utility. The equations to be used are presented and their use discussed.
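    The degenerate-ellipse numbers quoted above can be reproduced directly. For a one-dimensional Gaussian with standard deviation sigma centered at the miss distance d, the probability of falling within ±r of the origin is maximized near sigma = d, giving Pc,max ≈ 2r / (d·sqrt(2πe)) for r << d. A short sketch (our variable names):

        import math

        def max_degenerate_pc(miss_distance_m, radius_m):
            # Worst case over sigma of P(|N(0, sigma^2) - d| < r) for r << d:
            # the maximizing sigma equals d, so Pc_max ~ 2 r / (d sqrt(2 pi e))
            d, r = miss_distance_m, radius_m
            return 2.0 * r / (d * math.sqrt(2.0 * math.pi * math.e))

        print(max_degenerate_pc(25_000, 70))    # ~0.00136, as quoted above
        print(max_degenerate_pc(40_000, 70))    # ~0.00085
        print(max_degenerate_pc(340_000, 70))   # ~0.0001, the ISS threshold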

  17. Evaluation of dynamic electromagnetic tracking deviation

    NASA Astrophysics Data System (ADS)

    Hummel, Johann; Figl, Michael; Bax, Michael; Shahidi, Ramin; Bergmann, Helmar; Birkfellner, Wolfgang

    2009-02-01

    Electromagnetic tracking systems (EMTSs) are widely used in clinical applications. Many reports have evaluated their static behavior and examined errors caused by metallic objects. Although some publications address the dynamic behavior of EMTSs, the measurement protocols are either difficult to reproduce with respect to the movement path or require considerable technical effort. Because dynamic behavior is of major interest for clinical applications, we established a simple but effective measurement protocol that is easy to repeat at other laboratories. We built a simple pendulum on which the sensor of our EMTS (Aurora, NDI, CA) could be mounted. The pendulum was mounted on a special bearing to guarantee that the pendulum path is planar; this assumption was tested before starting the measurements. All relevant parameters defining the pendulum motion, such as the rotation center and length, were determined by static measurement at satisfactory accuracy. Position and orientation data were then gathered over a period of 8 seconds and timestamps were recorded. Data analysis provided a positioning error and an overall error combining both position and orientation. All errors were calculated by means of the well-known equations for pendulum motion. Additionally, latency - the elapsed time from input motion until the immediate consequences of that input are available - was calculated for different velocities using the same pendulum equations. We repeated the measurements with different metal objects (rods made of stainless steel types 303 and 416) between the field generator and the pendulum. We found a root mean square error (eRMS) of 1.02 mm with respect to the distance of the sensor position from the fit plane (maximum error emax = 2.31 mm, minimum error emin = -2.36 mm). The eRMS for the positional error amounted to 1.32 mm, while the overall error was 3.24 mm. The latency at a pendulum angle of 0° (vertical) was 7.8 ms.

  18. Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage

    PubMed Central

    Torralba, Marta; Yagüe-Fabra, José Antonio; Albajez, José Antonio; Aguilar, Juan José

    2016-01-01

    Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning represents a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D long-range stage during its design phase. The nanopositioning platform NanoPla is presented here. Its specifications, e.g., an XY travel range of 50 mm × 50 mm and sub-micrometric accuracy, and some novel design solutions, e.g., a three-layer and two-stage architecture, are described. Once the prototype was defined, an error analysis was performed to propose design improvements. Then, the metrology loop of the system was mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid-body behavior, which is demonstrated by a finite element analysis verification. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in the X-, Y- and Z-axes, respectively. PMID:26761014

  19. Body surface posture evaluation: construction, validation and protocol of the SPGAP system (Posture evaluation rotating platform system).

    PubMed

    Schwertner, Debora Soccal; Oliveira, Raul; Mazo, Giovana Zarpellon; Gioda, Fabiane Rosa; Kelber, Christian Roberto; Swarowsky, Alessandra

    2016-05-04

    Several posture evaluation devices have been used to detect deviations of the vertebral column. However, these instruments present measurement errors related to the equipment, environment or measurement protocol. This study aimed to build and validate the Posture Evaluation Rotating Platform System (SPGAP, Brazilian abbreviation), analyze its reliability and describe a measurement protocol for its use. The posture evaluation system comprises a posture evaluation rotating platform, video camera, calibration support and measurement software. Two pilot studies were carried out with 102 elderly individuals (average age 69 years, SD = ±7.3) to establish a protocol for SPGAP, controlling the measurement errors related to the environment, the equipment and the person under evaluation. Content validation was completed with input from judges with expertise in posture measurement. The variation coefficient method was used to validate the instrument's measurement of an object with known dimensions. Finally, reliability was established using repeated measurements of the known object. Expert content judges gave the system excellent ratings for content validity (mean 9.4 out of 10; SD 1.13). The measurement of an object with known dimensions indicated excellent validity (all measurement errors <1%) and test-retest reliability. A total of 26 images were needed to stabilize the system. Participants in the pilot studies indicated that they felt comfortable throughout the assessment. The use of only one image can yield measurements that underestimate or overestimate reality. For the images of the object with known dimensions, the coefficients of variation were 0.88 (width) and 2.33 (height), the SDs 0.22 (width) and 0.35 (height), and the minimum and maximum values 24.83-25.2 (width) and 14.56-15.75 (height). In the analysis of different (similar) images of an individual, greater discrepancies were observed. The cervical index, for example, presented minimum and maximum values of 15.38 and 37.5, a coefficient of variation of 0.29 and a standard deviation of 6.78. The SPGAP was shown to be a valid and reliable instrument for the quantitative analysis of body posture, with clinical applicability, since it managed to reduce several measurement errors, among which parallax distortion.

  20. Measurement of the length of pedestrian crossings and detection of traffic lights from image data

    NASA Astrophysics Data System (ADS)

    Shioyama, Tadayoshi; Wu, Haiyuan; Nakamura, Naoki; Kitawaki, Suguru

    2002-09-01

    This paper proposes a method for measurement of the length of a pedestrian crossing and for the detection of traffic lights from image data observed with a single camera. The length of a crossing is measured from image data of white lines painted on the road at a crossing by using projective geometry. Furthermore, the state of the traffic lights, green (go signal) or red (stop signal), is detected by extracting candidates for the traffic light region with colour similarity and selecting a true traffic light from them using affine moment invariants. From the experimental results, the length of a crossing is measured with an accuracy such that the maximum relative error of measured length is less than 5% and the rms error is 0.38 m. A traffic light is efficiently detected by selecting a true traffic light region with an affine moment invariant.

  1. Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation

    ERIC Educational Resources Information Center

    Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting

    2011-01-01

    Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…

  2. Spectral contaminant identifier for off-axis integrated cavity output spectroscopy measurements of liquid water isotopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brian Leen, J.; Berman, Elena S. F.; Gupta, Manish

    Developments in cavity-enhanced absorption spectrometry have made it possible to measure water isotopes using faster, more cost-effective field-deployable instrumentation. Several groups have attempted to extend this technology to measure water extracted from plants and found that other extracted organics absorb light at frequencies similar to those absorbed by the water isotopomers, leading to δ²H and δ¹⁸O measurement errors (Δδ²H and Δδ¹⁸O). In this note, the off-axis integrated cavity output spectroscopy (ICOS) spectra of stable isotopes in liquid water are analyzed to determine the presence of interfering absorbers that lead to erroneous isotope measurements. The baseline offset of the spectra is used to calculate a broadband spectral metric, m_BB, and the mean subtracted fit residuals in two regions of interest are used to determine a narrowband metric, m_NB. These metrics are used to correct for Δδ²H and Δδ¹⁸O. The method was tested on 14 instruments, and Δδ¹⁸O was found to scale linearly with contaminant concentration for both narrowband (e.g., methanol) and broadband (e.g., ethanol) absorbers, while Δδ²H scaled linearly with narrowband and as a polynomial with broadband absorbers. Additionally, the isotope errors scaled logarithmically with m_NB. Using the isotope error versus m_NB and m_BB curves, Δδ²H and Δδ¹⁸O resulting from methanol contamination were corrected to a maximum mean absolute error of 0.93‰ and 0.25‰ respectively, while Δδ²H and Δδ¹⁸O from ethanol contamination were corrected to a maximum mean absolute error of 1.22‰ and 0.22‰. Large variation between instruments indicates that the sensitivities must be calibrated for each individual isotope analyzer. These results suggest that properly calibrated interference metrics can be used to correct for polluted samples and extend off-axis ICOS measurements of liquid water to include plant waters, soil extracts, wastewater, and alcoholic beverages. The general technique may also be extended to other laser-based analyzers, including methane and carbon dioxide isotope sensors.

  3. A maximum likelihood convolutional decoder model vs experimental data comparison

    NASA Technical Reports Server (NTRS)

    Chen, R. Y.

    1979-01-01

    This article describes the comparison of a maximum likelihood convolutional decoder (MCD) prediction model with the actual performance of the MCD at the Madrid Deep Space Station. The MCD prediction model is used to develop a subroutine that has been utilized by the Telemetry Analysis Program (TAP) to compute the MCD bit error rate for a given signal-to-noise ratio. The results indicate that the TAP predictions agree well with the experimental measurements. An optimal modulation index can also be found with TAP.

  4. WE-A-17A-03: Catheter Digitization in High-Dose-Rate Brachytherapy with the Assistance of An Electromagnetic (EM) Tracking System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, AL; Bhagwat, MS; Buzurovic, I

    Purpose: To investigate the use of a system using EM tracking, postprocessing and error-detection algorithms for measuring brachytherapy catheter locations and for detecting errors and resolving uncertainties in treatment-planning catheter digitization. Methods: An EM tracker was used to localize 13 catheters in a clinical surface applicator (A) and 15 catheters inserted into a phantom (B). Two pairs of catheters in (B) crossed paths at a distance <2 mm, producing an undistinguishable catheter artifact in that location. EM data was post-processed for noise reduction and reformatted to provide the dwell location configuration. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT). EM dwell digitization error was characterized in terms of the average and maximum distance between corresponding EM and CT dwells per catheter. The error detection rate (detected errors / all errors) was calculated for 3 types of errors: swap of two catheter numbers; incorrect catheter number identification superior to the closest position between two catheters (mix); and catheter-tip shift. Results: The averages ± 1 standard deviation of the average and maximum registration error per catheter were 1.9±0.7 mm and 3.0±1.1 mm for (A) and 1.6±0.6 mm and 2.7±0.8 mm for (B). The error detection rate was 100% (A and B) for swap errors, mix errors, and shift >4.5 mm (A) and >5.5 mm (B); errors were detected for shifts on average >2.0 mm (A) and >2.4 mm (B). Both mix errors associated with undistinguishable catheter artifacts were detected and at least one of the involved catheters was identified. Conclusion: We demonstrated the use of an EM tracking system for localization of brachytherapy catheters, detection of digitization errors and resolution of undistinguishable catheter artifacts. Automatic digitization may be possible with a registration between the imaging and the EM frame of reference. Research funded by the Kaye Family Award 2012.

  5. Critical Analysis of Dual-Probe Heat-Pulse Technique Applied to Measuring Thermal Diffusivity

    NASA Astrophysics Data System (ADS)

    Bovesecchi, G.; Coppa, P.; Corasaniti, S.; Potenza, M.

    2018-07-01

    The paper presents an analysis of the experimental parameters involved in applying the dual-probe heat-pulse technique, followed by a critical review of methods for processing thermal response data (e.g., maximum detection and nonlinear least-squares regression) and the attainable uncertainty. Glycerol was selected as the test liquid, and its thermal diffusivity was evaluated over the temperature range from -20 °C to 60 °C. In addition, Monte Carlo simulation was used to assess the uncertainty propagation for maximum detection. It was concluded that the maximum-detection approach to processing thermal response data gives results closest to the reference data, whereas nonlinear regression results are affected by larger uncertainties due to partial correlation between the evaluated parameters. Moreover, interpolating the temperature data with a polynomial to find the maximum leads to a systematic difference between measured and reference data, as shown by the Monte Carlo simulations; with its correction, this systematic error can be reduced to a negligible value of about 0.8%.

  6. E-ELT M5 field stabilisation unit scale 1 demonstrator design and performances evaluation

    NASA Astrophysics Data System (ADS)

    Casalta, J. M.; Barriga, J.; Ariño, J.; Mercader, J.; San Andrés, M.; Serra, J.; Kjelberg, I.; Hubin, N.; Jochum, L.; Vernet, E.; Dimmler, M.; Müller, M.

    2010-07-01

    The M5 Field Stabilization Unit (M5FU) for the European Extremely Large Telescope (E-ELT) is a fast correcting optical system that shall provide tip-tilt corrections for the telescope's dynamic pointing errors and for the effects of atmospheric tip-tilt and wind disturbances. An M5FU scale-1 demonstrator (M5FU1D) is being built to assess the feasibility of the key elements (actuators, sensors, mirror, mirror interfaces) and of the real-time control algorithm. The strict constraints (e.g., 100 Hz tip-tilt control frequency range, 3 m ellipse mirror size, 300 Hz mirror first eigenfrequency, maximum tip/tilt range of ±30 arcsec, maximum tip-tilt error < 40 marcsec) posed a major challenge in developing the M5FU conceptual design and its scale-1 demonstrator. The paper summarises the proposed design for the final unit and the demonstrator, and compares the measured performance with the applicable specifications.

  7. Estimation of the Total Atmospheric Water Vapor Content and Land Surface Temperature Based on AATSR Thermal Data

    PubMed Central

    Zhang, Tangtang; Wen, Jun; van der Velde, Rogier; Meng, Xianhong; Li, Zhenchao; Liu, Yuanyong; Liu, Rong

    2008-01-01

    The total atmospheric water vapor content (TAWV) and land surface temperature (LST) play important roles in meteorology, hydrology, ecology and other disciplines. In this paper, the ENVISAT/AATSR (Advanced Along-Track Scanning Radiometer) thermal data are used to estimate the TAWV and LST over the Loess Plateau in China by using a practical split-window algorithm. The distribution of the TAWV accords with that of the MODIS TAWV products, which indicates that the estimation of the total atmospheric water vapor content is reliable. Validation of the LST against ground measurements indicates that the maximum absolute deviation, the maximum relative error and the average relative error are 4.0 K, 11.8% and 5.0%, respectively, which shows that the retrievals are credible; this algorithm can provide a new way to estimate the LST from AATSR data. PMID:27879795

  8. Flexible, multi-measurement guided wave damage detection under varying temperatures

    NASA Astrophysics Data System (ADS)

    Douglass, Alexander C. S.; Harley, Joel B.

    2018-04-01

    Temperature compensation in structural health monitoring helps identify damage in a structure by removing data variations due to environmental conditions, such as temperature. Stretch-based methods are among the most commonly used temperature compensation methods. To account for variations in temperature, stretch-based methods stretch signals in time to optimally match a measurement to a baseline. All of the data are then compared with the single baseline to determine the presence of damage. Yet, for these methods to be effective, the measurement and the baseline must satisfy the inherent assumptions of the temperature compensation method. In many scenarios, these assumptions are wrong, the methods generate error, and damage detection fails. To improve damage detection, a multi-measurement damage detection method is introduced. By using each measurement in the dataset as a baseline, error caused by imperfect temperature compensation is reduced. The multi-measurement method increases the detection effectiveness of our damage indicator over time and reduces the presence of additional peaks caused by temperature that could be mistaken for damage. By using many baselines, the variance of the damage indicator is reduced and the effects of damage are amplified. Notably, the multi-measurement method improves damage detection over single-measurement methods. This is demonstrated through an increase in the maximum of our damage signature from 0.55 to 0.95 (where large values, up to a maximum of one, represent a statistically significant change in the data due to damage).
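    A minimal sketch of the baseline-stretch step these methods share, assuming a simple grid search over stretch factors with correlation as the match criterion (real implementations typically use the scale transform; everything here is illustrative):

        import numpy as np

        def best_stretch(baseline, measurement, factors=np.linspace(0.98, 1.02, 81)):
            # Resample the measurement at each candidate stretch factor and
            # keep the factor whose stretched signal best correlates with the
            # baseline (edge samples are held constant by np.interp).
            t = np.arange(len(baseline), dtype=float)
            best_a, best_c = 1.0, -np.inf
            for a in factors:
                stretched = np.interp(t / a, t, measurement)
                c = np.corrcoef(baseline, stretched)[0, 1]
                if c > best_c:
                    best_a, best_c = a, c
            return best_a, best_c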

  9. Spectroscopic ellipsometer based on direct measurement of polarization ellipticity.

    PubMed

    Watkins, Lionel R

    2011-06-20

    A polarizer-sample-Wollaston prism analyzer ellipsometer is described in which the ellipsometric angles ψ and Δ are determined by direct measurement of the elliptically polarized light reflected from the sample. With the Wollaston prism initially set to transmit p- and s-polarized light, the azimuthal angle P of the polarizer is adjusted until the two beams have equal intensity. This condition yields ψ=±P and ensures that the reflected elliptically polarized light has an azimuthal angle of ±45° and maximum ellipticity. Rotating the Wollaston prism through 45° and adjusting the analyzer azimuth until the two beams again have equal intensity yields the ellipticity that allows Δ to be determined via a simple linear relationship. The errors produced by nonideal components are analyzed. We show that the polarizer dominates these errors but that for most practical purposes, the error in ψ is negligible and the error in Δ may be corrected exactly. A native oxide layer on a silicon substrate was measured at a single wavelength and multiple angles of incidence and spectroscopically at a single angle of incidence. The best fit film thicknesses obtained were in excellent agreement with those determined using a traditional null ellipsometer.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Y; Fullerton, G; Goins, B

    Purpose: In our previous study a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3-4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After tumors were excised, in-air micro-CT imaging was performed to determine the reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using the formula V = (π/6)*a*b*c, where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Linear regression analysis was then performed to compare the image-based tumor volumes with the reference tumor volume and the known test object volume for the rats and the phantom, respectively. Results: The slopes of the regression lines for in-vivo tumor volumes measured by the three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US, respectively. For the phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US, respectively. Conclusion: For both the animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent and systematic errors were mainly due to the selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to the ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement errors during the animal study.
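    The volume formula and the regression check are easy to reproduce; the 2% bias below is a synthetic stand-in for the modality biases reported above (e.g., 1.021 for in-vivo MRI), not the study's data:

        import numpy as np

        def ellipsoid_volume(a, b, c):
            # V = (pi/6) * a * b * c from the three maximum perpendicular diameters
            return np.pi / 6.0 * a * b * c

        dia = np.array([2.0, 4.0, 7.0, 10.0, 14.0])   # mm, the phantom object sizes
        ref = ellipsoid_volume(dia, dia, dia)          # spherical objects: (pi/6) d^3
        img = ref * 1.02                               # image-based volumes, 2% bias (synthetic)
        slope = (ref @ img) / (ref @ ref)              # through-origin least-squares slope
        print(f"regression slope = {slope:.3f}")       # -> 1.020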

  11. SU-G-BRB-03: Assessing the Sensitivity and False Positive Rate of the Integrated Quality Monitor (IQM) Large Area Ion Chamber to MLC Positioning Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boehnke, E McKenzie; DeMarco, J; Steers, J

    2016-06-15

    Purpose: To examine both the IQM's sensitivity and false positive rate to varying MLC errors. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An unmodified SBRT liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1mm to ±5mm). These unmodified and modified plans were each measured multiple times by the IQM (a large area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field's delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed a significant (p<0.005) ability to predict MLC errors. Using the area under the curve, we show the IQM's ability to detect errors increases with increasing MLC error (Spearman's Rho=0.8056, p<0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven able to detect not only MLC errors, but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.
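    For reference, the threshold-independent core of such an analysis reduces to the rank-based (Mann-Whitney) area under the ROC curve; the deviations below are synthetic examples, not measured IQM counts:

        import numpy as np

        def auc_rank(neg, pos):
            # AUC = P(random error-plan deviation > random unmodified deviation),
            # computed from all pairwise comparisons (Mann-Whitney U / (n*m))
            neg, pos = np.asarray(neg), np.asarray(pos)
            greater = (pos[:, None] > neg[None, :]).sum()
            ties = (pos[:, None] == neg[None, :]).sum()
            return (greater + 0.5 * ties) / (len(pos) * len(neg))

        unmodified = np.array([0.4, 0.6, 0.5, 0.7, 0.3])   # % deviation, synthetic
        with_errors = np.array([1.2, 2.5, 0.8, 3.1, 1.9])
        print(auc_rank(unmodified, with_errors))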

  12. Methods for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, M.R.; Bland, R.

    2000-01-01

    Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water-slope measurements often yielded errors as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments to measure a representative 'index' velocity for in-situ estimation of the mean water velocity and 2) the use of an acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. The methods used to calibrate (rate) the index velocity against the channel velocity measured with the acoustic Doppler current profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity is first related to the mean channel velocity and then used to calculate instantaneous channel discharge. Finally, the discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods: two sets during spring tides (monthly maximum tidal current) and one set during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second, or less than 0.5% of a typical peak tidal discharge rate of 750 cubic meters per second.
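    A minimal sketch of the three steps just described: rate the index velocity against concurrent ADCP mean channel velocities, convert to instantaneous discharge, then low-pass filter to remove the tides. The linear rating and the moving-average filter (149 samples, roughly 37 hours of 15-minute data) are simplifying assumptions, not the USGS procedure:

        import numpy as np

        def rate_index_velocity(v_index, v_adcp):
            # Linear rating: mean channel velocity = slope * index velocity + intercept
            slope, intercept = np.polyfit(v_index, v_adcp, 1)
            return slope, intercept

        def net_discharge(v_index, area_m2, slope, intercept, window=149):
            # Instantaneous discharge from the rating, then a moving average
            # as a stand-in for a proper tidal (low-pass) filter
            q = (slope * v_index + intercept) * area_m2
            return np.convolve(q, np.ones(window) / window, mode="same")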

  13. Stochastic control system parameter identifiability

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Herget, C. J.

    1975-01-01

    The parameter identification problem for general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian white measurement errors is considered. The system parameterization is assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability are established. A set of sufficient conditions for the existence of a region of parameter identifiability is derived, and a computation procedure employing interval arithmetic is provided for finding the regions of parameter identifiability. If the vector of true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one it is a unique maximal point of the maximum likelihood function in the region of parameter identifiability, and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.

  14. Cost-effectiveness of the stream-gaging program in Nebraska

    USGS Publications Warehouse

    Engel, G.B.; Wahl, K.L.; Boohar, J.A.

    1984-01-01

    This report documents the results of a study of the cost-effectiveness of the streamflow information program in Nebraska. Presently, 145 continuous surface-water stations are operated in Nebraska on a budget of $908,500. Data uses and funding sources are identified for each of the 145 stations. Data from most stations have multiple uses. All stations have sufficient justification for continuation, but two stations primarily are used in short-term research studies; their continued operation needs to be evaluated when the research studies end. The present measurement frequency produces an average standard error for instantaneous discharges of about 12 percent, including periods when stage data are missing. Altering the travel routes and the measurement frequency will allow a reduction in standard error of about 1 percent with the present budget. Standard error could be reduced to about 8 percent if lost record could be eliminated. A minimum budget of $822,000 is required to operate the present network, but operations at that funding level would result in an increase in standard error to about 16 percent. The maximum budget analyzed was $1,363,000, which would result in an average standard error of 6 percent. (USGS)

  15. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    PubMed

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and allocation rate to the treatment arms are modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.

  16. Coordinated design of coding and modulation systems

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Work on partial unit-memory codes continued; it was shown that for a given virtual state complexity, the maximum free distance over the class of all convolutional codes is achieved within the class of unit-memory codes. The effect of phase-lock loop (PLL) tracking error on coding system performance was studied by using the channel cut-off rate as the measure of quality of a modulation system. Optimum modulation signal sets for a non-white Gaussian channel were considered using a heuristic selection rule based on a water-filling argument. The use of error-correcting codes to perform data compression by the technique of syndrome source coding was researched, and a weight-and-error-locations scheme was developed that is closely related to LDSC coding.

  17. Uses and biases of volunteer water quality data

    USGS Publications Warehouse

    Loperfido, J.V.; Beyer, P.; Just, C.L.; Schnoor, J.L.

    2010-01-01

    State water quality monitoring has been augmented by volunteer monitoring programs throughout the United States. Although a significant effort has been put forth by volunteers, questions remain as to whether volunteer data are accurate and can be used by regulators. In this study, typical volunteer water quality measurements from laboratory and environmental samples in Iowa were analyzed for error and bias. Volunteer measurements of nitrate+nitrite were significantly lower (about 2-fold) than concentrations determined via standard methods in both laboratory-prepared and environmental samples. Total reactive phosphorus concentrations analyzed by volunteers were similar to measurements determined via standard methods in laboratory-prepared samples and environmental samples, but were statistically lower than the actual concentration in four of the five laboratory-prepared samples. Volunteer water quality measurements were successful in identifying and classifying most of the waters which violate United States Environmental Protection Agency recommended water quality criteria for total nitrogen (66%) and for total phosphorus (52%) with the accuracy improving when accounting for error and biases in the volunteer data. An understanding of the error and bias in volunteer water quality measurements can allow regulators to incorporate volunteer water quality data into total maximum daily load planning or state water quality reporting. ?? 2010 American Chemical Society.

  18. A Maximum Likelihood Approach for Multisample Nonlinear Structural Equation Models with Missing Continuous and Dichotomous Data

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Lee, Sik-Yum

    2006-01-01

    Structural equation models are widely appreciated in social-psychological research and other behavioral research to model relations between latent constructs and manifest variables and to control for measurement error. Most applications of SEMs are based on fully observed continuous normal data and models with a linear structural equation.…

  19. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1973-01-01

    The results are reported of research into the effects of signal quantization on the operation of a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed and implemented by a digital computer program based on a digital simulation of the system. As output, the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.
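    The kind of per-configuration error figure such a program reports can be illustrated by running a simple filter in full precision and with each result rounded to a reduced mantissa, then taking the maximum and rms of the output difference; the first-order filter and bit count below are arbitrary choices for illustration, not the program's method:

        import numpy as np

        def quantize(x, mantissa_bits):
            # Round x to the given number of mantissa bits
            m, e = np.frexp(x)
            return np.ldexp(np.round(m * 2**mantissa_bits) / 2**mantissa_bits, e)

        def filter_quantization_error(b0, b1, a1, x, mantissa_bits):
            # y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1], run in full precision and
            # with each step's result quantized; returns (max, rms) output error
            y_ref = y_q = x_prev = 0.0
            errs = []
            for xn in x:
                y_ref = b0 * xn + b1 * x_prev - a1 * y_ref
                y_q = quantize(b0 * xn + b1 * x_prev - a1 * y_q, mantissa_bits)
                errs.append(y_q - y_ref)
                x_prev = xn
            errs = np.array(errs)
            return np.abs(errs).max(), np.sqrt(np.mean(errs ** 2))

        x = np.sin(0.05 * np.arange(2000))
        print(filter_quantization_error(0.2, 0.2, -0.6, x, mantissa_bits=8))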

  20. Minimum error discrimination between similarity-transformed quantum states

    NASA Astrophysics Data System (ADS)

    Jafarizadeh, M. A.; Sufiani, R.; Mazhari Khiavi, Y.

    2011-07-01

    Using the well-known necessary and sufficient conditions for minimum error discrimination (MED), we extract an equivalent form of the MED conditions. In fact, by replacing the inequalities corresponding to the MED conditions with an equivalent but more suitable and convenient identity, the problem of mixed-state discrimination with optimal success probability is solved. Moreover, we show that the mentioned optimality conditions can be viewed as a Helstrom family of ensembles under some circumstances. Using the given identity, MED between N similarity-transformed equiprobable quantum states is investigated. In the case that the unitary operators generate a set of irreducible representations, the optimal set of measurements and the corresponding maximum success probability of discrimination can be determined precisely. In particular, it is shown that for equiprobable pure states the optimal measurement strategy is the square-root measurement (SRM), whereas for mixed states the SRM is not optimal. In the case that the unitary operators are reducible, there is no closed-form formula in general, but the procedure can be applied separately to each specific case. Finally, we give the maximum success probability of optimal discrimination for some important examples of mixed quantum states, such as generalized Bloch sphere m-qubit states, spin-j states, particular nonsymmetric qudit states, etc.

  1. Direct absorption spectroscopy sensor for temperature and H2O concentration of flat flame burner

    NASA Astrophysics Data System (ADS)

    Duan, Jin-hu; Jin, Xing; Wang, Guang-yu; Qu, Dong-sheng

    2016-01-01

    A tunable diode laser absorption sensor, based on direct absorption spectroscopy and a time-division multiplexing scheme, was developed to measure the H2O concentration and temperature of a flat flame burner. At a height of 15 mm above the furnace surface, temperature and concentration were measured at different equivalence ratios. The distance between the laser and the furnace surface was then varied while the equivalence ratio was fixed at 1, and experiments were performed to measure the temperature and H2O concentration at each height. Finally, flame temperatures and H2O concentrations were obtained by simulation and computational analysis, and these combustion parameters were compared with the reference. The results showed that the experimental values were in accordance with the reference values. Temperature errors were less than 4% and H2O concentration errors were less than 5%, and both reached their maximum when the equivalence ratio was set at 1. The temperature and H2O concentration increased with the height from the furnace surface to the laser as it varied from 3 mm to 9 mm, decreased as it varied from 9 mm to 30 mm, and thus reached their maximum at a height of 9 mm.

  2. Minimum error discrimination between similarity-transformed quantum states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jafarizadeh, M. A.; Institute for Studies in Theoretical Physics and Mathematics, Tehran 19395-1795; Research Institute for Fundamental Sciences, Tabriz 51664

    2011-07-15

    Using the well-known necessary and sufficient conditions for minimum error discrimination (MED), we extract an equivalent form of the MED conditions. In fact, by replacing the inequalities corresponding to the MED conditions with an equivalent but more suitable and convenient identity, the problem of mixed-state discrimination with optimal success probability is solved. Moreover, we show that the mentioned optimality conditions can be viewed as a Helstrom family of ensembles under some circumstances. Using the given identity, MED between N similarity-transformed equiprobable quantum states is investigated. In the case that the unitary operators generate a set of irreducible representations, the optimal set of measurements and the corresponding maximum success probability of discrimination can be determined precisely. In particular, it is shown that for equiprobable pure states the optimal measurement strategy is the square-root measurement (SRM), whereas for mixed states the SRM is not optimal. In the case that the unitary operators are reducible, there is no closed-form formula in general, but the procedure can be applied separately to each specific case. Finally, we give the maximum success probability of optimal discrimination for some important examples of mixed quantum states, such as generalized Bloch sphere m-qubit states, spin-j states, particular nonsymmetric qudit states, etc.

  3. A Monte Carlo comparison of the recovery of winds near upwind and downwind from the SASS-1 model function by means of the sum of squares algorithm and a maximum likelihood estimator

    NASA Technical Reports Server (NTRS)

    Pierson, W. J., Jr.

    1984-01-01

    Backscatter measurements at upwind and crosswind are simulated for five incidence angles by means of the SASS-1 model function. The effects of communication noise and attitude errors are simulated by Monte Carlo methods, and the winds are recovered by both the Sum of Squares (SOS) algorithm and a Maximum Likelihood Estimator (MLE). The SOS algorithm is shown to fail for sufficiently light winds at all incidence angles, and to fail to show areas of calm, because backscatter estimates that were negative or that produced incorrect values of Kp greater than one were discarded. The MLE performs well for all input backscatter estimates and returns calm when both are negative. The use of the SOS algorithm is shown to have introduced errors in the SASS-1 model function that, in part, cancel out the errors that result from using it, but that also cause disagreement with other data sources, such as the AAFE circle flight data, at light winds. Implications for future scatterometer systems are given.

  4. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure; University of Primorska,; Turk, Dušan, E-mail: dusan.turk@ijs.si

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement by simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement, as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps, and may use a smaller portion of data for the test set for the calculation of Rfree or may leave it out completely.

  5. Metainference: A Bayesian inference method for heterogeneous systems.

    PubMed

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.

  6. A Model of Self-Monitoring Blood Glucose Measurement Error.

    PubMed

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, such as testing insulin therapies in silico. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) has a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to the experimental data, and model validation is performed by goodness-of-fit tests. The method was tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models were used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error and zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology allows one to derive realistic models of the SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials comparing SMBG-based with nonadjunctive CGM-based insulin treatments.
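    A minimal sketch of the zone-wise maximum-likelihood fit described above, assuming two zones split at an illustrative glucose threshold (the 75 mg/dl value and the zone assignment are our assumptions; the paper derives its zones from the data):

        import numpy as np
        from scipy import stats

        def fit_zone_error_pdfs(reference, smbg, threshold=75.0):
            # Zone 1 (below threshold): absolute error; zone 2: relative error.
            # Each zone's error distribution is fitted with a skew-normal PDF
            # by maximum likelihood (scipy returns shape, loc, scale).
            reference, smbg = np.asarray(reference), np.asarray(smbg)
            low = reference < threshold
            abs_err = smbg[low] - reference[low]
            rel_err = (smbg[~low] - reference[~low]) / reference[~low]
            return stats.skewnorm.fit(abs_err), stats.skewnorm.fit(rel_err)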

  7. Development of one-shot aspheric measurement system with a Shack-Hartmann sensor.

    PubMed

    Furukawa, Yasunori; Takaie, Yuichi; Maeda, Yoshiki; Ohsaki, Yumiko; Takeuchi, Seiji; Hasegawa, Masanobu

    2016-10-10

    We present a measurement system for a rotationally symmetric aspheric surface that is designed for accurate and high-volume measurements. The system uses the Shack-Hartmann sensor and is capable of measuring aspheres with a maximum diameter of 90 mm in one shot. In our system, a reference surface, made with the same aspheric parameter as the test surface, is prepared. The test surface is recovered as the deviation from the reference surface using a figure-error reconstruction algorithm with a ray coordinate and angle variant table. In addition, we developed a method to calibrate the rotationally symmetric system error. These techniques produce stable measurements and high accuracy. For high-throughput measurements, a single measurement scheme and auto alignment are implemented; they produce a 4.5 min measurement time, including calibration and alignment. In this paper, we introduce the principle and calibration method of our system. We also demonstrate that our system achieved an accuracy better than 5.8 nm RMS and a repeatability of 0.75 nm RMS by comparing our system's aspheric measurement results with those of a probe measurement machine.

  8. An affordable cuff-less blood pressure estimation solution.

    PubMed

    Jain, Monika; Kumar, Niranjan; Deb, Sujay

    2016-08-01

    This paper presents a cuff-less hypertension pre-screening device that non-invasively monitors Blood Pressure (BP) and Heart Rate (HR) continuously. The proposed device simultaneously records two clinically significant and highly correlated biomedical signals, viz., the Electrocardiogram (ECG) and Photoplethysmogram (PPG). The device provides a common data acquisition platform that can interface with a PC/laptop, smartphone/tablet, Raspberry Pi, etc. The hardware stores and processes the recorded ECG and PPG in order to extract real-time BP and HR using a kernel regression approach. The BP and HR estimation error is measured in terms of normalized mean square error, Error Standard Deviation (ESD) and Mean Absolute Error (MAE), with respect to a clinically proven digital BP monitor (OMRON HBP1300). The computed error falls under the maximum allowable error specified by the Association for the Advancement of Medical Instrumentation: MAE < 5 mmHg and ESD < 8 mmHg. The results are also validated using a two-tailed dependent-sample t-test. The proposed device is a portable, low-cost, home- and clinic-based solution for continuous health monitoring.
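
    A minimal sketch of the error check against the AAMI limits quoted above; the paired readings are hypothetical placeholders, not data from the paper.

        import numpy as np

        def aami_check(estimated, reference):
            """Compare device estimates against a reference monitor (mmHg)."""
            err = np.asarray(estimated, float) - np.asarray(reference, float)
            mae = np.mean(np.abs(err))         # mean absolute error
            esd = np.std(err, ddof=1)          # error standard deviation
            return mae, esd, (mae < 5.0) and (esd < 8.0)   # AAMI limits

        # Hypothetical systolic readings: cuff-less device vs. OMRON HBP1300
        device = [118, 124, 131, 127, 119, 135]
        omron = [120, 122, 128, 130, 117, 133]
        print(aami_check(device, omron))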

  9. SU-E-T-210: Surviving a Visit by the Radiological Physics Center.

    PubMed

    Grant, W; Mcgary, J; Rosen, I; Nitsch, P; Davidson, S

    2012-06-01

    To demonstrate an objective approach to determining if a negative report from the Radiological Physics Center (RPC) of greater than 10% error is valid or has clinical significance. The discrepancy involved the clinical activity (mgRaEq) of Cs-137 sources, some manufactured by 3M and some by Amersham. Measurements were made in the proprietary RPC Well Counter calibrated by the MD Anderson ADCL and our Well Counter (CNMC, Model 44D) calibrated by the same laboratory as well as the University of Wisconsin ADCL. In addition, we possess an Amersham Cs-137 Check Source that had been calibrated by the UW-ADCL in 2002. All clinical sources were checked in both Well Counters on the first visit. One clinical source and the Check Source were measured in a second visit that occurred 51 days later. On the initial RPC visit, 9 of 25 sources had a minimum of an 8% discrepancy between the RPC and the Institution, with a maximum of 11%. Contributing errors included our use of an incorrect straw position, an unexplained 2.3% error in the RPC data identified 73 days post-visit, and a 2% variation in Chamber Factors for our Well Counter between the two ADCLs. When we used the 2004 value of Air Kerma Strength for the Check Source to determine a Calibration Factor for the Well Counter, all sources were within 0.5% of their decayed value established in 2002. This work emphasizes the value of having simple Constancy Check systems in a Quality Assurance program, as 'Accuracy' has error bars. The disagreement in calibration data between the ADCL Laboratories, which was at the 2% maximum quoted in their Calibration Reports, is a reminder that there is uncertainty in measurements. Constancy Checks allow one to sort out discrepancies and to answer challenges to the validity of your program. © 2012 American Association of Physicists in Medicine.

  10. Eddy-covariance data with low signal-to-noise ratio: time-lag determination, uncertainties and limit of detection

    NASA Astrophysics Data System (ADS)

    Langford, B.; Acton, W.; Ammann, C.; Valach, A.; Nemitz, E.

    2015-10-01

    All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. Here, we are applying a consistent approach based on auto- and cross-covariance functions to quantify the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining data sets from several analysers and using simulations, we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time lag eliminates these effects (provided the time lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time lag. Finally, we make recommendations for the analysis and reporting of data with low signal-to-noise and their associated errors.
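
    The bias mechanism described above, where searching for the cross-covariance maximum in noisy data inflates the flux, is straightforward to reproduce in simulation. In the sketch below the 10 Hz series, the true lag of 12 samples and the weak transfer coefficient are all invented for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        n, true_lag, coef = 36000, 12, 0.01    # 1 h at 10 Hz; weak scalar flux

        def cov_at(w, c, lag):
            """Cross-covariance of wind w and concentration c at a given lag."""
            return np.cov(w[:-lag], c[lag:])[0, 1]

        lags = np.arange(1, 50)                # one-sided search window
        search, fixed = [], []
        for _ in range(50):
            w = rng.normal(size=n)                               # vertical wind
            c = coef * np.roll(w, true_lag) + rng.normal(size=n) # noisy analyser
            covs = np.array([cov_at(w, c, L) for L in lags])
            search.append(abs(covs[np.argmax(np.abs(covs))]))    # max-search lag
            fixed.append(covs[true_lag - 1])                     # prescribed lag

        print(f"mean |flux|, max-search lag: {np.mean(search):.4f}")
        print(f"mean flux, prescribed lag  : {np.mean(fixed):.4f} (true = {coef})")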

  11. Eddy-covariance data with low signal-to-noise ratio: time-lag determination, uncertainties and limit of detection

    NASA Astrophysics Data System (ADS)

    Langford, B.; Acton, W.; Ammann, C.; Valach, A.; Nemitz, E.

    2015-03-01

    All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. Here we apply a consistent approach based on auto- and cross-covariance functions to quantify the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time-lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time-lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining datasets from several analysers and using simulations, we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time-lag eliminates these effects (provided the time-lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time-lag. Finally, we make recommendations for the analysis and reporting of data with low signal-to-noise and their associated errors.

  12. Bayesian energy landscape tilting: towards concordant models of molecular ensembles.

    PubMed

    Beauchamp, Kyle A; Pande, Vijay S; Das, Rhiju

    2014-03-18

    Predicting biological structure has remained challenging for systems such as disordered proteins that take on myriad conformations. Hybrid simulation/experiment strategies have been undermined by difficulties in evaluating errors from computational model inaccuracies and data uncertainties. Building on recent proposals from maximum entropy theory and nonequilibrium thermodynamics, we address these issues through a Bayesian energy landscape tilting (BELT) scheme for computing Bayesian hyperensembles over conformational ensembles. BELT uses Markov chain Monte Carlo to directly sample maximum-entropy conformational ensembles consistent with a set of input experimental observables. To test this framework, we apply BELT to model trialanine, starting from disagreeing simulations with the force fields ff96, ff99, ff99sbnmr-ildn, CHARMM27, and OPLS-AA. BELT incorporation of limited chemical shift and ³J measurements gives convergent values of the peptide's α, β, and PPII conformational populations in all cases. As a test of predictive power, all five BELT hyperensembles recover set-aside measurements not used in the fitting and report accurate errors, even when starting from highly inaccurate simulations. BELT's principled framework thus enables practical predictions for complex biomolecular systems from discordant simulations and sparse data. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.
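
    The tilting idea at the heart of this approach, reduced to a single observable, can be sketched as follows: maximum-entropy weights w ∝ exp(λs) are applied to a simulated ensemble, and λ is solved so that the reweighted average matches the measurement. The synthetic predicted shifts and the one-parameter root-finding are illustrative simplifications; the actual BELT method samples full Bayesian hyperensembles by Markov chain Monte Carlo.

        import numpy as np
        from scipy.optimize import brentq

        rng = np.random.default_rng(2)
        # Predicted observable (e.g., a chemical shift) for 1000 conformations
        predicted = rng.normal(4.0, 0.5, 1000)
        target = 4.2                       # experimental ensemble average

        def tilted_mean(lam):
            """Ensemble average under maximum-entropy weights w ~ exp(lam*s)."""
            w = np.exp(lam * (predicted - predicted.mean()))  # centred for stability
            w /= w.sum()
            return np.sum(w * predicted)

        # Solve for the tilt that reproduces the measured average
        lam = brentq(lambda l: tilted_mean(l) - target, -50.0, 50.0)
        print(f"lambda = {lam:.3f}, reweighted mean = {tilted_mean(lam):.3f}")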

  13. Amorphous Silicon p-i-n Structure Acting as Light and Temperature Sensor

    PubMed Central

    de Cesare, Giampiero; Nascetti, Augusto; Caputo, Domenico

    2015-01-01

    In this work, we propose a multi-parametric sensor able to measure both temperature and radiation intensity, suitable for increasing the level of integration and miniaturization in Lab-on-Chip applications. The device is based on an amorphous silicon p-doped/intrinsic/n-doped thin film junction. The device is first characterized independently as a radiation sensor and as a temperature sensor. We found a maximum responsivity of 350 mA/W at 510 nm and a temperature sensitivity of 3.2 mV/K. We then investigated the effects of temperature variation on the light intensity measurement and of light intensity variation on the accuracy of the temperature measurement. We found that the temperature variation induces an error lower than 0.55 pW/K in the light intensity measurement at 550 nm when the diode is biased in short-circuit condition, while the error in the temperature measurement is below 1 K/µW when a forward bias current density higher than 25 µA/cm² is applied. PMID:26016913

  14. An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.

    ERIC Educational Resources Information Center

    De Ayala, R. J.; And Others

    Expected a posteriori (EAP) estimation has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…

  15. Navigator alignment using radar scan

    DOEpatents

    Doerry, Armin W.; Marquette, Brandeis

    2016-04-05

    The various technologies presented herein relate to the determination and correction of the heading error of a platform. Knowledge of at least one of a maximum Doppler frequency or a minimum Doppler bandwidth pertaining to a plurality of radar echoes can be utilized to facilitate correction of the heading error. Heading error can occur as a result of component drift. In an ideal situation, the boresight direction of an antenna or the front of an aircraft will have associated therewith at least one of a maximum Doppler frequency or a minimum Doppler bandwidth. As the boresight direction of the antenna strays from the direction of travel, at least one of the maximum Doppler frequency or the minimum Doppler bandwidth will shift away, either left or right, from the ideal situation.

  16. Modeling Nonlinear Errors in Surface Electromyography Due To Baseline Noise: A New Methodology

    PubMed Central

    Law, Laura Frey; Krishnan, Chandramouli; Avin, Keith

    2010-01-01

    The surface electromyographic (EMG) signal is often contaminated by some degree of baseline noise. It is customary for scientists to subtract baseline noise from the measured EMG signal prior to further analyses, based on the assumption that baseline noise adds linearly to the observed EMG signal. The stochastic nature of both the baseline and the EMG signal, however, may invalidate this assumption. Alternately, “true” EMG signals may be either minimally or nonlinearly affected by baseline noise. This information is particularly relevant at low contraction intensities, when signal-to-noise ratios (SNR) may be lowest. Thus, the purpose of this simulation study was to investigate the influence of varying levels of baseline noise (approximately 2-40% of maximum EMG amplitude) on mean EMG burst amplitude and to assess the best means of accounting for signal noise. The simulations indicated that baseline noise had minimal effects on mean EMG activity for maximum contractions, but its effect increased nonlinearly with increasing noise levels and decreasing signal amplitudes. Thus, simple baseline noise subtraction resulted in substantial error when estimating mean activity during low-intensity EMG bursts. Conversely, correcting the EMG signal as a nonlinear function of both baseline and measured signal amplitude provided highly accurate estimates of EMG amplitude. This novel nonlinear error modeling approach has potential implications for EMG signal processing, particularly when assessing co-activation of antagonist muscles or small-amplitude contractions where the SNR can be low. PMID:20869716
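
    One concrete nonlinear correction consistent with the argument above is to assume that signal and noise powers (not amplitudes) add, so the noise-free amplitude is recovered by quadrature subtraction. The sketch below contrasts this with plain linear subtraction on simulated Gaussian bursts; it illustrates the idea and is not the paper's fitted correction function.

        import numpy as np

        def subtract_linear(measured_rms, baseline_rms):
            """Conventional correction: noise assumed to add linearly."""
            return measured_rms - baseline_rms

        def subtract_quadrature(measured_rms, baseline_rms):
            """Nonlinear correction: signal and noise powers assumed to add."""
            return np.sqrt(np.maximum(measured_rms**2 - baseline_rms**2, 0.0))

        rng = np.random.default_rng(3)
        emg = rng.normal(0, 0.05, 10000)      # "true" low-intensity burst (mV)
        noise = rng.normal(0, 0.04, 10000)    # baseline noise (mV)
        measured_rms = np.sqrt(np.mean((emg + noise) ** 2))

        print("true RMS        : 0.0500 mV")
        print(f"linear estimate : {subtract_linear(measured_rms, 0.04):.4f} mV")
        print(f"quadrature est. : {subtract_quadrature(measured_rms, 0.04):.4f} mV")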

  17. Electroinduction disk sensor of electric field strength

    NASA Astrophysics Data System (ADS)

    Biryukov, S. V.; Korolyova, M. A.

    2018-01-01

    Measuring the level of electric field exposure of technical and biological objects has long been an urgent task. Solving it requires electric field sensors with specified metrological characteristics. The aim of the study is the establishment of theoretical foundations for the calculation of flat electric field sensors. It is proved that the sensor error does not exceed 3% in the spatial range 0

  18. A hybrid demodulation method of fiber-optic Fabry-Perot pressure sensor

    NASA Astrophysics Data System (ADS)

    Yu, Le; Lang, Jianjun; Pan, Yong; Wu, Di; Zhang, Min

    2013-12-01

    Fiber-optic Fabry-Perot pressure sensors have been widely applied to measure pressure in oilfields. For multi-well installations, demodulating the downhole pressure values of all wells with a single demodulation system takes a long time (dozens of seconds), while equipping every well with its own system is costly; both factors heavily limit the sensor's application in oilfields. In the present paper, a new hybrid demodulation method, combining the windowed nonequispaced discrete Fourier transform (nDFT) method with a segment-search minimum mean square error estimation (MMSE) method, was developed, by which the demodulation time can be reduced to 200 ms, i.e., measuring 10 channels/wells takes less than 2 s. Moreover, experimental results showed that the demodulated cavity length of the fiber-optic Fabry-Perot sensor has a maximum error of 0.5 nm, and consequently the pressure measurement accuracy can reach 0.4% F.S.

  19. Improved simulation of aerosol, cloud, and density measurements by shuttle lidar

    NASA Technical Reports Server (NTRS)

    Russell, P. B.; Morley, B. M.; Livingston, J. M.; Grams, G. W.; Patterson, E. W.

    1981-01-01

    Data retrievals are simulated for a Nd:YAG lidar suitable for early flight on the space shuttle. Maximum assumed vertical and horizontal resolutions are 0.1 and 100 km, respectively, in the boundary layer, increasing to 2 and 2000 km in the mesosphere. Aerosol and cloud retrievals are simulated using the 1.06 and 0.53 micron wavelengths independently. Error sources include signal measurement, conventional density information, atmospheric transmission, and lidar calibration. By day, tenuous clouds and Saharan and boundary layer aerosols are retrieved at both wavelengths. By night, these constituents are retrieved, plus upper tropospheric, stratospheric, and mesospheric aerosols and noctilucent clouds. Density, temperature, and improved aerosol and cloud retrievals are simulated by combining signals at 0.35, 1.06, and 0.53 microns. Particulate contamination limits the technique to the cloud-free upper troposphere and above. Error bars automatically show the effect of this contamination, as well as errors in absolute density normalization, reference temperature or pressure, and the sources listed above. For nonvolcanic conditions, relative density profiles have rms errors of 0.54 to 2% in the upper troposphere and stratosphere. Temperature profiles have rms errors of 1.2 to 2.5 K and can define the tropopause to 0.5 km and higher wave structures to 1 or 2 km.

  20. Calibration of low-temperature ac susceptometers with a copper cylinder standard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, D.-X.; Skumryev, V.

    2010-02-15

    A high-quality low-temperature ac susceptometer is calibrated by comparing the measured ac susceptibility of a copper cylinder with its eddy-current ac susceptibility accurately calculated. Different from conventional calibration techniques that compare the measured results with the known property of a standard sample at certain fixed temperature T, field amplitude H_m, and frequency f, to get a magnitude correction factor, here, the electromagnetic properties of the copper cylinder are unknown and are determined during the calibration of the ac susceptometer in the entire T, H_m, and f range. It is shown that the maximum magnitude error and the maximum phase error of the susceptometer are less than 0.7% and 0.3 deg., respectively, in the region T=5-300 K and f=111-1111 Hz at H_m=800 A/m, after a magnitude correction by a constant factor as done in a conventional calibration. However, the magnitude and phase errors can reach 2% and 4.3 deg. at 10 000 and 11 Hz, respectively. Since the errors are reproducible, a large portion of them may be further corrected after a calibration, the procedure for which is given. Conceptual discussions concerning the error sources, comparison with other calibration methods, and applications of ac susceptibility techniques are presented.

  1. Bolus-dependent dosimetric effect of positioning errors for tangential scalp radiotherapy with helical tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobb, Eric, E-mail: eclobb2@gmail.com

    2014-04-01

    The dosimetric effect of errors in patient position is studied on-phantom as a function of simulated bolus thickness to assess the need for bolus utilization in scalp radiotherapy with tomotherapy. A treatment plan is generated on a cylindrical phantom, mimicking a radiotherapy technique for the scalp utilizing primarily tangential beamlets. A planning target volume with embedded scalplike clinical target volumes (CTVs) is planned to a uniform dose of 200 cGy. Translational errors in phantom position are introduced in 1-mm increments and dose is recomputed from the original sinogram. For each error the maximum dose, minimum dose, clinical target dose homogeneity index (HI), and dose-volume histogram (DVH) are presented for simulated bolus thicknesses from 0 to 10 mm. Baseline HI values for all bolus thicknesses were in the 5.5 to 7.0 range, increasing to a maximum of 18.0 to 30.5 for the largest positioning errors when 0 to 2 mm of bolus is used. Utilizing 5 mm of bolus resulted in a maximum HI value of 9.5 for the largest positioning errors. Using 0 to 2 mm of bolus resulted in minimum and maximum dose values of 85% to 94% and 118% to 125% of the prescription dose, respectively. When using 5 mm of bolus these values were 98.5% and 109.5%. DVHs showed minimal changes in CTV dose coverage when using 5 mm of bolus, even for the largest positioning errors. CTV dose homogeneity becomes increasingly sensitive to errors in patient position as bolus thickness decreases when treating the scalp with primarily tangential beamlets. Performing a radial expansion of the scalp CTV into 5 mm of bolus material minimizes dosimetric sensitivity to errors in patient position as large as 5 mm and is therefore recommended.

  2. Estimating extreme stream temperatures by the standard deviate method

    NASA Astrophysics Data System (ADS)

    Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz

    2006-02-01

    It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (the standard deviate). Various KE values were explored; values of KE larger than 8 were found physically unreasonable. It is concluded that the value of KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dKE = 1.0 (range 0.5-1.5) and an error in projected high air temperature dTa = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dTs = 0.8 °C.
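
    The estimate itself is a one-line calculation: T_extreme = mean + KE x SD of the partial maximum series. A worked sketch with a hypothetical series of annual maxima:

        import numpy as np

        def extreme_estimate(partial_maxima, k_e):
            """Standard deviate method: mean of the partial maximum series
            plus k_e times its standard deviation."""
            t = np.asarray(partial_maxima, dtype=float)
            return t.mean() + k_e * t.std(ddof=1)

        # Hypothetical annual maximum stream temperatures (deg C) at one gauge
        maxima = [27.1, 27.8, 26.9, 27.5, 27.7, 27.3, 28.0]
        for k_e in (7, 8):   # the range the study recommends
            print(f"KE = {k_e}: extreme estimate = "
                  f"{extreme_estimate(maxima, k_e):.1f} deg C")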

  3. Volumetric breast density measurement: sensitivity analysis of a relative physics approach

    PubMed Central

    Lau, Susie; Abdul Aziz, Yang Faridah

    2016-01-01

    Objective: To investigate the sensitivity and robustness of a volumetric breast density (VBD) measurement system to errors in the imaging physics parameters including compressed breast thickness (CBT), tube voltage (kVp), filter thickness, tube current-exposure time product (mAs), detector gain, detector offset and image noise. Methods: 3317 raw digital mammograms were processed with Volpara® (Matakina Technology Ltd, Wellington, New Zealand) to obtain fibroglandular tissue volume (FGV), breast volume (BV) and VBD. Errors in parameters including CBT, kVp, filter thickness and mAs were simulated by varying them in the Digital Imaging and Communications in Medicine (DICOM) tags of the images up to ±10% of the original values. Errors in detector gain and offset were simulated by varying them in the Volpara configuration file up to ±10% from their default values. For image noise, Gaussian noise was generated and introduced into the original images. Results: Errors in filter thickness, mAs, detector gain and offset had limited effects on FGV, BV and VBD. Significant effects in VBD were observed when CBT, kVp, detector offset and image noise were varied (p < 0.0001). Maximum shifts in the mean (1.2%) and median (1.1%) VBD of the study population occurred when CBT was varied. Conclusion: Volpara was robust to expected clinical variations, with errors in most investigated parameters giving limited changes in results, although extreme variations in CBT and kVp could lead to greater errors. Advances in knowledge: Despite Volpara's robustness, rigorous quality control is essential to keep the parameter errors within reasonable bounds. Volpara appears robust within those bounds, albeit for more advanced applications such as tracking density change over time, it remains to be seen how accurate the measures need to be. PMID:27452264

  4. Volumetric breast density measurement: sensitivity analysis of a relative physics approach.

    PubMed

    Lau, Susie; Ng, Kwan Hoong; Abdul Aziz, Yang Faridah

    2016-10-01

    To investigate the sensitivity and robustness of a volumetric breast density (VBD) measurement system to errors in the imaging physics parameters including compressed breast thickness (CBT), tube voltage (kVp), filter thickness, tube current-exposure time product (mAs), detector gain, detector offset and image noise. 3317 raw digital mammograms were processed with Volpara(®) (Matakina Technology Ltd, Wellington, New Zealand) to obtain fibroglandular tissue volume (FGV), breast volume (BV) and VBD. Errors in parameters including CBT, kVp, filter thickness and mAs were simulated by varying them in the Digital Imaging and Communications in Medicine (DICOM) tags of the images up to ±10% of the original values. Errors in detector gain and offset were simulated by varying them in the Volpara configuration file up to ±10% from their default values. For image noise, Gaussian noise was generated and introduced into the original images. Errors in filter thickness, mAs, detector gain and offset had limited effects on FGV, BV and VBD. Significant effects in VBD were observed when CBT, kVp, detector offset and image noise were varied (p < 0.0001). Maximum shifts in the mean (1.2%) and median (1.1%) VBD of the study population occurred when CBT was varied. Volpara was robust to expected clinical variations, with errors in most investigated parameters giving limited changes in results, although extreme variations in CBT and kVp could lead to greater errors. Despite Volpara's robustness, rigorous quality control is essential to keep the parameter errors within reasonable bounds. Volpara appears robust within those bounds, albeit for more advanced applications such as tracking density change over time, it remains to be seen how accurate the measures need to be.

  5. A Ground Flash Fraction Retrieval Algorithm for GLM

    NASA Technical Reports Server (NTRS)

    Koshak, William J.

    2010-01-01

    A Bayesian inversion method is introduced for retrieving the fraction of ground flashes in a set of N lightning observed by a satellite lightning imager (such as the Geostationary Lightning Mapper, GLM). An exponential model is applied as a physically reasonable constraint to describe the measured lightning optical parameter distributions. Population statistics (i.e., the mean and variance) are invoked to add additional constraints to the retrieval process. The Maximum A Posteriori (MAP) solution is employed. The approach is tested by performing simulated retrievals, and retrieval error statistics are provided. The approach is feasible for N greater than 2000, and retrieval errors decrease as N is increased.
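
    A toy version of the retrieval: model the optical parameter of ground and cloud flashes as exponential distributions with different means, and maximize the posterior of the ground-flash fraction over a grid. The distribution means and sample size are invented, and the flat prior used here makes the MAP estimate coincide with maximum likelihood.

        import numpy as np

        rng = np.random.default_rng(4)
        mu_g, mu_c, true_frac, N = 1.0, 3.0, 0.3, 2500   # assumed parameters
        n_g = rng.binomial(N, true_frac)
        x = np.concatenate([rng.exponential(mu_g, n_g),
                            rng.exponential(mu_c, N - n_g)])

        def log_posterior(alpha):
            """Log-likelihood of the exponential mixture; with a flat prior
            on alpha this is also the log-posterior."""
            f_g = np.exp(-x / mu_g) / mu_g
            f_c = np.exp(-x / mu_c) / mu_c
            return np.sum(np.log(alpha * f_g + (1 - alpha) * f_c))

        grid = np.linspace(0.01, 0.99, 197)
        alpha_map = grid[np.argmax([log_posterior(a) for a in grid])]
        print(f"true fraction {true_frac}, MAP estimate {alpha_map:.3f}")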

  6. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient.

    PubMed

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-06-10

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample's high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use.

  7. The Infrared Hubble Diagram of Type Ia Supernovae

    NASA Astrophysics Data System (ADS)

    Krisciunas, Kevin

    Photometry of Type Ia supernovae reveals that these objects are standardizable candles in optical passbands - the peak luminosities are related to the rate of decline after maximum light. In the near-infrared bands, there is essentially a characteristic brightness at maximum light for each photometric band. Thus, in the near-infrared they are better than standardizable candles; they are essentially standard candles. Their absolute magnitudes are known to ±0.15 magnitude or better. The infrared observations have the extra advantage that interstellar extinction by dust along the line of sight is a factor of 3-10 smaller than in the optical B- and V -bands. The size of any systematic errors in the infrared extinction corrections typically become smaller than the photometric errors of the observations. Thus, we can obtain distances to the hosts of Type Ia supernovae to ±8 % or better. This is particularly useful for extragalactic astronomy and precise measurements of the dark energy component of the universe.

  8. A CT-based software tool for evaluating compensator quality in passively scattered proton therapy

    NASA Astrophysics Data System (ADS)

    Li, Heng; Zhang, Lifei; Dong, Lei; Sahoo, Narayan; Gillin, Michael T.; Zhu, X. Ronald

    2010-11-01

    We have developed a quantitative computed tomography (CT)-based quality assurance (QA) tool for evaluating the accuracy of manufactured compensators used in passively scattered proton therapy. The thickness of a manufactured compensator was measured from its CT images and compared with the planned thickness defined by the treatment planning system. The difference between the measured and planned thicknesses was calculated with use of the Euclidean distance transformation and the kd-tree search method. Compensator accuracy was evaluated by examining several parameters including mean distance, maximum distance, global thickness error and central axis shifts. Two rectangular phantoms were used to validate the performance of the QA tool. Nine patients and 20 compensators were included in this study. We found that mean distances, global thickness errors and central axis shifts were all within 1 mm for all compensators studied, with maximum distances ranging from 1.1 to 3.8 mm. Although all compensators passed manual verification at selected points, about 5% of the pixels still had maximum distances of >2 mm, most of which correlated with large depth gradients. The correlation between the mean depth gradient of the compensator and the percentage of pixels with mean distance <1 mm is -0.93 with p < 0.001, which suggests that the mean depth gradient is a good indicator of compensator complexity. These results demonstrate that the CT-based compensator QA tool can be used to quantitatively evaluate manufactured compensators.
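
    The thickness comparison can be sketched with a kd-tree nearest-neighbour search in the spirit of the tool described above; the synthetic compensator surfaces, 1 mm grid and milling-noise level are assumptions for illustration.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(5)
        # Hypothetical planned and measured surfaces as (x, y, thickness) in mm
        xx, yy = np.meshgrid(np.arange(60.0), np.arange(60.0))
        planned_t = 20 + 5 * np.sin(xx / 10) * np.cos(yy / 10)
        measured_t = planned_t + rng.normal(0, 0.3, planned_t.shape)

        planned = np.column_stack([xx.ravel(), yy.ravel(), planned_t.ravel()])
        measured = np.column_stack([xx.ravel(), yy.ravel(), measured_t.ravel()])

        # Distance from each measured point to the nearest planned point
        dist, _ = cKDTree(planned).query(measured)
        print(f"mean distance    : {dist.mean():.2f} mm")
        print(f"maximum distance : {dist.max():.2f} mm")
        print(f"points > 1 mm    : {100 * np.mean(dist > 1.0):.1f} %")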

  9. A system to use electromagnetic tracking for the quality assurance of brachytherapy catheter digitization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, Antonio L., E-mail: adamato@lroc.harvard.edu; Viswanathan, Akila N.; Don, Sarah M.

    2014-10-15

    Purpose: To investigate the use of a system using electromagnetic tracking (EMT), post-processing and an error-detection algorithm for detecting errors and resolving uncertainties in high-dose-rate brachytherapy catheter digitization for treatment planning. Methods: EMT was used to localize 15 catheters inserted into a phantom using a stepwise acquisition technique. Five distinct acquisition experiments were performed. Noise associated with the acquisition was calculated. The dwell location configuration was extracted from the EMT data. A CT scan of the phantom was performed, and five distinct catheter digitization sessions were performed. No a priori registration of the CT scan coordinate system with the EMT coordinate system was performed. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT), and rigid registration was performed between EMT and CT dwell positions. EMT registration error was characterized in terms of the mean and maximum distance between corresponding EMT and CT dwell positions per catheter. An algorithm for error detection and identification was presented. Three types of errors were systematically simulated: swap of two catheter numbers, partial swap of catheter number identification for parts of the catheters (mix), and catheter-tip shift. Error-detection sensitivity (number of simulated scenarios correctly identified as containing an error/number of simulated scenarios containing an error) and specificity (number of scenarios correctly identified as not containing errors/number of correct scenarios) were calculated. Catheter identification sensitivity (number of catheters correctly identified as erroneous across all scenarios/number of erroneous catheters across all scenarios) and specificity (number of catheters correctly identified as correct across all scenarios/number of correct catheters across all scenarios) were calculated. The mean detected and identified shift was calculated. Results: The maximum noise ±1 standard deviation associated with the EMT acquisitions was 1.0 ± 0.1 mm, and the mean noise was 0.6 ± 0.1 mm. Registration of all the EMT and CT dwell positions was associated with a mean catheter error of 0.6 ± 0.2 mm, a maximum catheter error of 0.9 ± 0.4 mm, a mean dwell error of 1.0 ± 0.3 mm, and a maximum dwell error of 1.3 ± 0.7 mm. Error detection and catheter identification sensitivity and specificity of 100% were observed for swap, mix and shift (≥2.6 mm for error detection; ≥2.7 mm for catheter identification) errors. A mean detected shift of 1.8 ± 0.4 mm and a mean identified shift of 1.9 ± 0.4 mm were observed. Conclusions: Registration of the EMT dwell positions to the CT dwell positions was possible with a residual mean error per catheter of 0.6 ± 0.2 mm and a maximum error for any dwell of 1.3 ± 0.7 mm. These low residual registration errors show that quality assurance of the general characteristics of the catheters and of possible errors affecting one specific dwell position is possible. The sensitivity and specificity of the catheter digitization verification algorithm was 100% for swap and mix errors and for shifts ≥2.6 mm. On average, shifts ≥1.8 mm were detected, and shifts ≥1.9 mm were detected and identified.
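
    The rigid-registration step (aligning EMT dwell positions to CT dwell positions, then reporting mean and maximum errors) can be sketched with the Kabsch algorithm; the simulated geometry, rotation and ~0.6 mm noise below are assumptions, since the record does not specify the implementation used.

        import numpy as np

        def rigid_register(src, dst):
            """Least-squares rigid alignment (rotation + translation) of two
            corresponding point sets: the Kabsch algorithm."""
            sc, dc = src.mean(axis=0), dst.mean(axis=0)
            H = (src - sc).T @ (dst - dc)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, dc - R @ sc

        rng = np.random.default_rng(6)
        ct = rng.uniform(0, 100, (60, 3))            # CT dwell positions (mm)
        theta = 0.4                                  # arbitrary frame rotation
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                           [np.sin(theta),  np.cos(theta), 0.0],
                           [0.0, 0.0, 1.0]])
        emt = ct @ R_true.T + [5.0, -3.0, 12.0] + rng.normal(0, 0.6, ct.shape)

        R, t = rigid_register(emt, ct)
        resid = np.linalg.norm(emt @ R.T + t - ct, axis=1)
        print(f"mean dwell error {resid.mean():.2f} mm, max {resid.max():.2f} mm")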

  10. Correcting reaction rates measured by saturation-transfer magnetic resonance spectroscopy

    NASA Astrophysics Data System (ADS)

    Gabr, Refaat E.; Weiss, Robert G.; Bottomley, Paul A.

    2008-04-01

    Off-resonance or spillover irradiation and incomplete saturation can introduce significant errors in the estimates of chemical rate constants measured by saturation-transfer magnetic resonance spectroscopy (MRS). Existing methods of correction are effective only over a limited parameter range. Here, a general approach of numerically solving the Bloch-McConnell equations to calculate exchange rates, relaxation times and concentrations for the saturation-transfer experiment is investigated, but found to require more measurements and higher signal-to-noise ratios than in vivo studies can practically afford. As an alternative, correction formulae for the reaction rate are provided which account for the expected parameter ranges and limited measurements available in vivo. The correction term is a quadratic function of experimental measurements. In computer simulations, the new formulae showed negligible bias and reduced the maximum error in the rate constants by about 3-fold compared to traditional formulae, and the error scatter by about 4-fold, over a wide range of parameters for conventional saturation transfer employing progressive saturation, and for the four-angle saturation-transfer method applied to the creatine kinase (CK) reaction in the human heart at 1.5 T. In normal in vivo spectra affected by spillover, the correction increases the mean calculated forward CK reaction rate by 6-16% over traditional and prior correction formulae.

  11. Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy

    PubMed Central

    Cohen, E. A. K.; Ober, R. J.

    2014-01-01

    We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise; a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs this is an errors-in-variables problem and linear least squares is inappropriate, the correct method being generalized least squares. To allow for point-dependent errors the equivalence of a generalized maximum likelihood and heteroscedastic generalized least squares model is achieved, allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity) we provide closed form solutions to estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE) believed to be useful, especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distributions for the TRE and LRE are themselves Gaussian and the parameterized distributions are derived. Results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573

  12. Final acceptance testing of the LSST monolithic primary/tertiary mirror

    NASA Astrophysics Data System (ADS)

    Tuell, Michael T.; Burge, James H.; Cuerden, Brian; Gressler, William; Martin, Hubert M.; West, Steven C.; Zhao, Chunyu

    2014-07-01

    The Large Synoptic Survey Telescope (LSST) is a three-mirror wide-field survey telescope with the primary and tertiary mirrors on one monolithic substrate [1]. This substrate is made of Ohara E6 borosilicate glass in a honeycomb sandwich, spin cast at the Steward Observatory Mirror Lab at The University of Arizona [2]. Each surface is aspheric, with the specification given in terms of conic constant error, maximum active bending forces and, finally, a structure function specification on the residual errors [3]. There are high-order deformation terms, but these carry no tolerance of their own; any such error is counted as surface error and is included in the structure function. The radii of curvature are very different, requiring two independent test stations, each with instantaneous phase-shifting interferometers with null correctors. The primary null corrector is a standard two-element Offner null lens. The tertiary null corrector is a phase-etched computer-generated hologram (CGH). This paper details the two optical systems and their tolerances, showing that the uncertainty in measuring the figure is a small fraction of the structure function specification. Additional metrology includes the radii of curvature, optical axis locations, and relative surface tilts. The methods for measuring these will also be described along with their tolerances.

  13. SU-E-J-88: The Study of Setup Error Measured by CBCT in Postoperative Radiotherapy for Cervical Carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Runxiao, L; Aikun, W; Xiaomei, F

    2015-06-15

    Purpose: To compare two registration methods in CBCT-guided radiotherapy for cervical carcinoma, analyze the setup errors and registration methods, and determine the margin required for extending the clinical target volume (CTV) to the planning target volume (PTV). Methods: Twenty patients with cervical carcinoma were enrolled. All patients underwent CT simulation in the supine position. The CT images were transferred to the treatment planning system, where the CTV, PTV and organs at risk (OAR) were defined, and then transmitted to the XVI workstation. CBCT scans were performed before radiotherapy and registered to the planning CT images using bone and gray-value registration methods. The two methods were compared to obtain the left-right (X), superior-inferior (Y) and anterior-posterior (Z) setup errors, and the margin required for CTV to PTV was calculated. Results: Setup errors were unavoidable in postoperative cervical carcinoma irradiation. The setup errors measured by the bone method (systematic ± random) in the X (left-right), Y (superior-inferior) and Z (anterior-posterior) directions were (0.24±3.62), (0.77±5.05) and (0.13±3.89) mm, respectively; the setup errors measured by the gray-value method (systematic ± random) in the X, Y and Z directions were (0.31±3.93), (0.85±5.16) and (0.21±4.12) mm, respectively. The spatial distribution of setup error was largest in the Y direction. The margins were 4 mm in the X axis, 6 mm in the Y axis and 4 mm in the Z axis, respectively. The two registration methods gave similar results and are both highly recommended. Conclusion: Both the bone and gray-value registration methods offer an accurate measure of setup error. PTV margins of 4 mm, 6 mm and 4 mm in the X, Y and Z directions are suggested for postoperative radiotherapy for cervical carcinoma.

  14. Prediction-error variance in Bayesian model updating: a comparative study

    NASA Astrophysics Data System (ADS)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness of the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies for dealing with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structure model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.

  15. Measurement uncertainty associated with chromatic confocal profilometry for 3D surface texture characterization of natural human enamel.

    PubMed

    Mullan, F; Bartlett, D; Austin, R S

    2017-06-01

    To investigate the measurement performance of a chromatic confocal profilometer for quantification of the surface texture of natural human enamel in vitro. Contributions to the measurement uncertainty from all potential sources of measurement error using a chromatic confocal profilometer and surface metrology software were quantified using a series of surface metrology calibration artifacts and pre-worn enamel samples. The 3D surface texture analysis protocol was optimized across 0.04 mm² of natural, unpolished enamel undergoing dietary acid erosion (pH 3.2, titratable acidity 41.3 mmol OH/L). Flatness deviations due to the x, y stage mechanical movement were the major contribution to the measurement uncertainty, with maximum Sz flatness errors of 0.49 μm, whereas measurement noise, non-linearities in x, y, z and enamel sample dimensional instability contributed minimal errors. The measurement errors were propagated into an uncertainty budget following a Type B uncertainty evaluation in order to calculate the combined standard uncertainty (u_c), which was ±0.28 μm. Statistically significant increases in the median (IQR) roughness (Sa) of the polished samples occurred after 15 (+0.17 (0.13) μm), 30 (+0.12 (0.09) μm) and 45 (+0.18 (0.15) μm) min of erosion (P<0.001 vs. baseline). In contrast, natural unpolished enamel samples revealed a statistically significant decrease in Sa roughness of -0.14 (0.34) μm only after 45 min of erosion (P<0.05 vs. baseline). The main contribution to measurement uncertainty using chromatic confocal profilometry was from flatness deviations; however, by optimizing measurement protocols the profilometer successfully characterized surface texture changes in enamel from erosive wear in vitro. Copyright © 2017 The Academy of Dental Materials. All rights reserved.
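
    A Type B budget of this kind combines the component standard uncertainties in quadrature. The component values below are placeholders chosen to land near the quoted ±0.28 μm, not the paper's measured figures.

        import numpy as np

        # Hypothetical Type B budget for the profilometer (micrometres)
        components = {
            "stage flatness deviation": 0.27,
            "measurement noise": 0.03,
            "x, y, z non-linearity": 0.04,
            "sample dimensional drift": 0.02,
        }

        # Combined standard uncertainty: root sum of squares of the components
        u_c = np.sqrt(sum(u**2 for u in components.values()))
        print(f"combined standard uncertainty u_c = {u_c:.2f} um")
        print(f"expanded uncertainty (k = 2)      = {2 * u_c:.2f} um")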

  16. Measurement of the inertial properties of the Helios F-1 spacecraft

    NASA Technical Reports Server (NTRS)

    Gayman, W. H.

    1975-01-01

    A gravity pendulum method of measuring lateral moments of inertia of large structures with an error of less than 1% is outlined. The method is based on the fact that, for a physical pendulum with a knife-edge support, the distance from the axis of rotation to the system center of gravity that minimizes the period of oscillation is equal to the system's centroidal radius of gyration. The method is applied to the results of a test procedure in which the Helios F-1 spacecraft was placed in a roll fixture with crossed flexure pivots as elastic constraints, and system oscillation measurements were made with each of a set of added moment-of-inertia increments. Equations of motion are derived with allowance for the effect of the finite pivot radius, and an error analysis is carried out to find the criterion for maximum accuracy in determining the square of the centroidal radius of gyration. The test procedure allows all measurements to be made with the specimen in the upright position.
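
    For a physical pendulum the small-oscillation period is T(d) = 2*pi*sqrt((k^2 + d^2)/(g*d)), where d is the pivot-to-center-of-gravity distance and k the centroidal radius of gyration; T is minimal exactly at d = k, which is the fact the method exploits. A numerical check with an assumed k:

        import numpy as np

        g = 9.81                      # m/s^2

        def period(d, k):
            """Small-oscillation period of a physical pendulum with pivot at
            distance d from the center of gravity and radius of gyration k."""
            return 2 * np.pi * np.sqrt((k**2 + d**2) / (g * d))

        k_true = 0.8                  # assumed radius of gyration, metres
        d = np.linspace(0.2, 2.0, 2001)
        T = period(d, k_true)

        d_min = d[np.argmin(T)]       # distance giving the minimal period
        print(f"distance at minimum period: {d_min:.3f} m (k = {k_true} m)")
        print(f"minimum period: {T.min():.3f} s "
              f"(theory 2*pi*sqrt(2k/g) = {2*np.pi*np.sqrt(2*k_true/g):.3f} s)")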

  17. Failure analysis and modeling of a multicomputer system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Subramani, Sujatha Srinivasan

    1990-01-01

    This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).

  18. Why a simulation system doesn't match the plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sowell, R.

    1998-03-01

    Process simulations, or mathematical models, are widely used by plant engineers and planners to obtain a better understanding of a particular process. These simulations are used to answer questions such as how can feed rate be increased, how can yields be improved, how can energy consumption be decreased, or how should the available independent variables be set to maximize profit? Although current process simulations are greatly improved over those of the '70s and '80s, there are many reasons why a process simulation doesn't match the plant. Understanding these reasons can assist in using simulations to maximum advantage. The reasons simulations do not match the plant may be placed in three main categories: simulation effects or inherent error, sampling and analysis effects or measurement error, and misapplication effects or set-up error.

  19. TDRS orbit determination by radio interferometry

    NASA Technical Reports Server (NTRS)

    Pavloff, Michael S.

    1994-01-01

    In support of a NASA study on the application of radio interferometry to satellite orbit determination, MITRE developed a simulation tool for assessing interferometry tracking accuracy. The Orbit Determination Accuracy Estimator (ODAE) models the general batch maximum likelihood orbit determination algorithms of the Goddard Trajectory Determination System (GTDS) with the group and phase delay measurements from radio interferometry. ODAE models the statistical properties of tracking error sources, including inherent observable imprecision, atmospheric delays, clock offsets, station location uncertainty, and measurement biases, and through Monte Carlo simulation, ODAE calculates the statistical properties of errors in the predicted satellite state vector. This paper presents results from the application of ODAE to orbit determination of the Tracking and Data Relay Satellite (TDRS) by radio interferometry. Conclusions about optimal ground station locations for interferometric tracking of TDRS are presented, along with a discussion of the operational advantages of radio interferometry.

  20. Applicability of AgMERRA Forcing Dataset to Fill Gaps in Historical in-situ Meteorological Data

    NASA Astrophysics Data System (ADS)

    Bannayan, M.; Lashkari, A.; Zare, H.; Asadi, S.; Salehnia, N.

    2015-12-01

    Integrated assessment studies of food production systems use crop models to simulate the effects of climate and socio-economic changes on food security. Climate forcing data is one of the key inputs of such crop models. This study evaluated the performance of the AgMERRA climate forcing dataset for filling gaps in historical in-situ meteorological data across different climatic regions of Iran. The AgMERRA dataset was intercompared with the in-situ observational dataset for daily maximum and minimum temperature and precipitation over the 1980-2010 period via Root Mean Square Error (RMSE), Mean Absolute Error (MAE) and Mean Bias Error (MBE) for 17 stations in four climatic regions: humid and moderate, cold, dry and arid, and hot and humid. Moreover, the probability distribution function and cumulative distribution function were compared between model and observed data. The agreement measures demonstrated small errors in the model data for all stations. Except at stations located in cold regions, the model data showed under-prediction of daily maximum temperature and precipitation; however, it was not significant. In addition, the probability distribution function and cumulative distribution function showed the same trend between model and observed data for all stations. Therefore, the AgMERRA dataset is highly reliable for filling gaps in historical observations in the different climatic regions of Iran, and it could be applied as a basis for future climate scenarios.
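
    The three agreement metrics used in the study are simple to compute; the sketch below applies them to hypothetical daily maximum temperatures rather than the AgMERRA data.

        import numpy as np

        def rmse(model, obs):
            return np.sqrt(np.mean((np.asarray(model) - np.asarray(obs)) ** 2))

        def mae(model, obs):
            return np.mean(np.abs(np.asarray(model) - np.asarray(obs)))

        def mbe(model, obs):
            """Mean bias error: negative values indicate under-prediction."""
            return np.mean(np.asarray(model) - np.asarray(obs))

        # Hypothetical daily maximum temperatures (deg C): model vs. station
        agmerra = [31.2, 33.0, 29.8, 35.1, 34.0, 30.5]
        station = [31.8, 33.4, 30.1, 35.9, 34.6, 31.2]
        print(f"RMSE {rmse(agmerra, station):.2f}, "
              f"MAE {mae(agmerra, station):.2f}, "
              f"MBE {mbe(agmerra, station):.2f}")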

  1. The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2016-01-01

    Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.

  2. Bayesian learning for spatial filtering in an EEG-based brain-computer interface.

    PubMed

    Zhang, Haihong; Yang, Huijuan; Guan, Cuntai

    2013-07-01

    Spatial filtering for EEG feature extraction and classification is an important tool in brain-computer interface. However, there is generally no established theory that links spatial filtering directly to Bayes classification error. To address this issue, this paper proposes and studies a Bayesian analysis theory for spatial filtering in relation to Bayes error. Following the maximum entropy principle, we introduce a gamma probability model for describing single-trial EEG power features. We then formulate and analyze the theoretical relationship between Bayes classification error and the so-called Rayleigh quotient, which is a function of spatial filters and basically measures the ratio in power features between two classes. This paper also reports our extensive study that examines the theory and its use in classification, using three publicly available EEG data sets and state-of-the-art spatial filtering techniques and various classifiers. Specifically, we validate the positive relationship between Bayes error and Rayleigh quotient in real EEG power features. Finally, we demonstrate that the Bayes error can be practically reduced by applying a new spatial filter with lower Rayleigh quotient.
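
    The Rayleigh quotient in question, J(w) = (w'S_a w)/(w'S_b w) for the two class covariance matrices, is extremized by generalized eigenvectors, which is the usual route to CSP-style spatial filters. A minimal sketch on synthetic data (the covariance structure is invented):

        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(7)
        # Simulated 8-channel EEG with class-dependent channel power
        X_a = rng.normal(size=(500, 8)) * np.array([3, 1, 1, 1, 1, 1, 1, 1])
        X_b = rng.normal(size=(500, 8)) * np.array([1, 1, 1, 1, 1, 1, 1, 3])
        S_a, S_b = np.cov(X_a.T), np.cov(X_b.T)

        # The filter maximizing J(w) is the leading generalized eigenvector
        vals, vecs = eigh(S_a, S_b)           # solves S_a v = lambda S_b v
        w = vecs[:, -1]                       # largest generalized eigenvalue
        ratio = (w @ S_a @ w) / (w @ S_b @ w)
        print(f"largest eigenvalue {vals[-1]:.2f}, power ratio {ratio:.2f}")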

  3. What is the acceptable hemolysis index for the measurements of plasma potassium, LDH and AST?

    PubMed

    Rousseau, Nathalie; Pige, Raphaëlle; Cohen, Richard; Pecquet, Matthieu

    2016-06-01

    Hemolysis is a cause of variability in test results for plasma potassium, LDH and AST and is a non-negligible part of measurement uncertainty. However, allowable levels of hemolysis provided by reagent suppliers take neither analytical variability (trueness and precision) nor the measurand into account. Using a calibration range of hemolysis, we measured the plasma concentrations of potassium, LDH and AST, and hemolysis indices with a Cobas C501 analyzer (Roche Diagnostics®, Meylan, France). Based on the allowable total error (according to Ricós et al.) and the expanded measurement uncertainty equation, we calculated the maximum allowable bias for two concentrations of each measurand. Finally, we determined the allowable hemolysis indices for all three measurands. We observed a linear relationship between the observed increases in concentration and the hemolysis indices. The LDH measurement was the most sensitive to hemolysis, followed by the AST and potassium measurements. The determination of the allowable hemolysis index depends on the targeted measurand, its concentration and the chosen level of requirement of allowable total error.

  4. Statistical inference of seabed sound-speed structure in the Gulf of Oman Basin.

    PubMed

    Sagers, Jason D; Knobles, David P

    2014-06-01

    Addressed is the statistical inference of the sound-speed depth profile of a thick soft seabed from broadband sound propagation data recorded in the Gulf of Oman Basin in 1977. The acoustic data are in the form of time series signals recorded on a sparse vertical line array and generated by explosive sources deployed along a 280 km track. The acoustic data offer a unique opportunity to study a deep-water bottom-limited thickly sedimented environment because of the large number of time series measurements, very low seabed attenuation, and auxiliary measurements. A maximum entropy method is employed to obtain a conditional posterior probability distribution (PPD) for the sound-speed ratio and the near-surface sound-speed gradient. The multiple data samples allow for a determination of the average error constraint value required to uniquely specify the PPD for each data sample. Two complicating features of the statistical inference study are addressed: (1) the need to develop an error function that can both utilize the measured multipath arrival structure and mitigate the effects of data errors and (2) the effect of small bathymetric slopes on the structure of the bottom interacting arrivals.

  5. Metainference: A Bayesian inference method for heterogeneous systems

    PubMed Central

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called “metainference,” that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors. PMID:26844300

  6. Real time heart rate variability assessment from Android smartphone camera photoplethysmography: Postural and device influences.

    PubMed

    Guede-Fernandez, F; Ferrer-Mileo, V; Ramos-Castro, J; Fernandez-Chimeno, M; Garcia-Gonzalez, M A

    2015-01-01

    The aim of this paper is to present a smartphone-based system for real-time pulse-to-pulse (PP) interval time series acquisition by frame-to-frame camera image processing. The developed smartphone application acquires image frames from the built-in rear camera at the maximum available rate (30 Hz), and the smartphone GPU is used through the Renderscript API for high-performance frame-by-frame image acquisition and computation of the PPG signal and the PP interval time series. The relative error of mean heart rate is negligible. In addition, the influence of measurement posture and smartphone model on the beat-to-beat error of heart rate and HRV indices was analyzed. The standard deviation of the beat-to-beat error (SDE) was 7.81 ± 3.81 ms in the worst case. Furthermore, in the supine measurement posture a significant device influence on the SDE was found: the SDE is lower with the Samsung S5 than with the Motorola X. This study can be applied to analyze the reliability of different smartphone models for HRV assessment from real-time processing of Android camera frames.

  7. An evaluation of the underlying mechanisms of bloodstain pattern analysis error.

    PubMed

    Behrooz, Nima; Hulse-Smith, Lee; Chandra, Sanjeev

    2011-09-01

    An experiment was designed to explore the underlying mechanisms of blood disintegration and its subsequent effect on area of origin (AO) calculations. Blood spatter patterns were created through the controlled application of pressurized air (20-80 kPa) for 0.1 msec onto suspended blood droplets (2.7-3.2 mm diameter). The resulting disintegration process was captured using high-speed photography. Straight-line triangulation resulted in a 50% height overestimation, whereas using the lowest calculated height for each spatter pattern reduced this error to 8%. Incorporation of projectile motion resulted in a 28% height underestimation. The AO xy-coordinate was found to be very accurate with a maximum offset of only 4 mm, while AO size calculations were found to be two- to fivefold greater than expected. Subsequently, reverse triangulation analysis revealed the rotational offset for 26% of stains could not be attributed to measurement error, suggesting that some portion of error is inherent in the disintegration process.

  8. Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction

    PubMed Central

    Dai, Hongjun; Zhao, Shulin; Jia, Zhiping; Chen, Tianzhou

    2013-01-01

    Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant within industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the topological structure of sensor arrays, is implemented to compensate for erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes quickly. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients for neighbor sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation. PMID:24013491
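
    For independent Gaussian readings, the maximum likelihood fusion step reduces to inverse-variance weighting. A minimal sketch of just that step (synthetic numbers; the NEC topology optimization itself is not reproduced here):

        import numpy as np

        def ml_fuse(readings, variances):
            """Maximum likelihood fusion of independent Gaussian range readings:
            inverse-variance weighted mean, plus the variance of the fused value."""
            w = 1.0 / np.asarray(variances, dtype=float)
            fused = np.sum(w * readings) / np.sum(w)
            return fused, 1.0 / np.sum(w)

        # A node corrects its own reading using neighbours' readings of the
        # same target, weighting each by its (environment-dependent) variance.
        readings = np.array([2.41, 2.38, 2.52])   # metres
        variances = np.array([0.0004, 0.0009, 0.0025])
        fused, var = ml_fuse(readings, variances)
        print(f"fused range: {fused:.3f} m (sd {var**0.5:.3f} m)")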

  9. Strength tests for elite rowers: low- or high-repetition?

    PubMed

    Lawton, Trent W; Cronin, John B; McGuigan, Michael R

    2014-01-01

    The purpose of this project was to evaluate the utility of low- and high-repetition maximum (RM) strength tests used to assess rowers. Twenty elite heavyweight males (age 23.7 ± 4.0 years) performed four tests (5 RM, 30 RM, 60 RM and 120 RM) using leg press and seated arm pulling exercise on a dynamometer. Each test was repeated on two further occasions; 3 and 7 days from the initial trial. Per cent typical error (within-participant variation) and intraclass correlation coefficients (ICCs) were calculated using log-transformed repeated-measures data. High-repetition tests (30 RM, 60 RM and 120 RM), involving seated arm pulling exercise are not recommended to be included in an assessment battery, as they had unsatisfactory measurement precision (per cent typical error > 5% or ICC < 0.9). Conversely, low-repetition tests (5 RM) involving leg press and seated arm pulling exercises could be used to assess elite rowers (per cent typical error ≤ 5% and ICC ≥ 0.9); however, only 5 RM leg pressing met criteria (per cent typical error = 2.7%, ICC = 0.98) for research involving small samples (n = 20). In summary, low-repetition 5 RM strength testing offers greater utility as assessments of rowers, as they can be used to measure upper- and lower-body strength; however, only the leg press exercise is recommended for research involving small squads of elite rowers.
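
    Per cent typical error from log-transformed test-retest data is conventionally the standard deviation of the difference scores divided by the square root of 2, back-transformed to a coefficient of variation. A sketch with invented loads (not the study's data):

        import numpy as np

        def percent_typical_error(trial_a, trial_b):
            """Within-participant variation from a test-retest pair, computed
            on log-transformed scores and expressed as a CV (%).
            Typical error = SD of difference scores / sqrt(2)."""
            diffs = np.log(trial_b) - np.log(trial_a)
            te_log = np.std(diffs, ddof=1) / np.sqrt(2.0)
            return 100.0 * (np.exp(te_log) - 1.0)

        # Illustrative 5RM leg-press loads (kg) for ten athletes, two occasions.
        a = np.array([220, 245, 250, 230, 260, 240, 255, 235, 248, 252.0])
        b = np.array([224, 243, 255, 232, 262, 236, 258, 239, 250, 249.0])
        print(f"typical error: {percent_typical_error(a, b):.1f}%")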

  10. [A plane-based hand-eye calibration method for surgical robots].

    PubMed

    Zeng, Bowei; Meng, Fanle; Ding, Hui; Liu, Wenbo; Wu, Di; Wang, Guangzhi

    2017-04-01

    In order to calibrate the hand-eye transformation between a surgical robot and a laser range finder (LRF), a calibration algorithm based on a planar template was designed. A mathematical model of the planar template is given and an approach to solving its equations is derived. To address measurement error in a practical system, we propose a new algorithm for selecting coplanar data. This algorithm can effectively eliminate data with considerable measurement error and thereby improve the calibration accuracy. Furthermore, three orthogonal planes were used to improve the calibration accuracy further, with a nonlinear optimization applied to the hand-eye calibration. To verify the calibration precision, we used the LRF to measure fixed points in different directions and the surfaces of a cuboid. Experimental results indicated that the precision of the single planar template method was (1.37±0.24) mm, and that of the three orthogonal planes method was (0.37±0.05) mm. Moreover, the mean FRE of three-dimensional (3D) points was 0.24 mm and the mean TRE was 0.26 mm. The maximum angle measurement error was 0.4 degrees. Experimental results show that the method presented in this paper is effective, achieves high accuracy and can meet the requirements of precise surgical robot localization.

  11. Prediction of adult height in girls: the Beunen-Malina-Freitas method.

    PubMed

    Beunen, Gaston P; Malina, Robert M; Freitas, Duarte L; Thomis, Martine A; Maia, José A; Claessens, Albrecht L; Gouveia, Elvio R; Maes, Hermine H; Lefevre, Johan

    2011-12-01

    The purpose of this study was to validate and cross-validate the Beunen-Malina-Freitas method for non-invasive prediction of adult height in girls. A sample of 420 girls aged 10-15 years from the Madeira Growth Study were measured at yearly intervals and then 8 years later. Anthropometric dimensions (lengths, breadths, circumferences, and skinfolds) were measured; skeletal age was assessed using the Tanner-Whitehouse 3 method and menarcheal status (present or absent) was recorded. Adult height was measured and predicted using stepwise, forward, and maximum R² regression techniques. Multiple correlations, mean differences, standard errors of prediction, and error boundaries were calculated. A sample of the Leuven Longitudinal Twin Study was used to cross-validate the regressions. Age-specific coefficients of determination (R²) between predicted and measured adult height varied between 0.57 and 0.96, while standard errors of prediction varied between 1.1 and 3.9 cm. The cross-validation confirmed the validity of the Beunen-Malina-Freitas method in girls aged 12-15 years, but at lower ages the cross-validation was less consistent. We conclude that the Beunen-Malina-Freitas method is valid for the prediction of adult height in girls aged 12-15 years. It is applicable to European populations or populations of European ancestry.

  12. SU-G-TeP4-12: Individual Beam QA for a Robotic Radiosurgery System Using a Scintillator Cone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGuinness, C; Descovich, M; Sudhyadhom, A

    2016-06-15

    Purpose: The targeting accuracy of the Cyberknife system is measured by end-to-end tests delivering multiple isocentric beams to a point in space. While the targeting accuracy of two representative beams can be determined by a Winston-Lutz-type test, no test is available today to determine the targeting accuracy of each clinical beam. We used a scintillator cone to measure the accuracy of each individual beam. Methods: The XRV-124 from Logos Systems Int'l is a scintillator cone with an imaging system that is able to measure individual beam vectors and a resulting error between planned and measured beam coordinates. We measured the targeting accuracy of isocentric and non-isocentric beams for a number of test cases using the Iris and the fixed collimator. Results: The average difference between planned and measured beam position was 0.8-1.2 mm across the collimator sizes and plans considered here. The maximum error for a single beam was 2.5 mm for the isocentric plans, and 1.67 mm for the non-isocentric plans. The standard deviation of the differences was 0.5 mm or less. Conclusion: The CyberKnife System is specified to have an overall targeting accuracy for static targets of less than 0.95 mm. In end-to-end tests using the XRV-124 system we measured average beam accuracy between 0.8 and 1.23 mm, with a maximum of 2.5 mm. We plan to investigate correlations between beam position error and robot position, and to quantify the effect of beam position errors on patient-specific plans. Martina Descovich has received research support and speaker honoraria from Accuray.

  13. Empirical best linear unbiased prediction method for small areas with restricted maximum likelihood and bootstrap procedure to estimate the average of household expenditure per capita in Banjar Regency

    NASA Astrophysics Data System (ADS)

    Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho

    2017-03-01

    So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are still limited to the district level. Sample sizes at smaller area levels are often insufficient, so direct estimation of poverty indicators produces high standard errors, and analysis based on such estimates is unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One method often used for this is Small Area Estimation (SAE). Among the many SAE methods is Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method with the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean square error) in order to compare the accuracy of the EBLUP method with the direct estimation method. Results show that the EBLUP method reduced the MSE in small area estimation.
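
    As a rough illustration of EBLUP under REML: a unit-level random-intercept sketch on synthetic data (not the paper's exact area-level specification, and with the bootstrap MSE step omitted for brevity), using statsmodels:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        # Synthetic survey: 30 small areas, a covariate x, area random effects.
        areas = np.repeat(np.arange(30), 12)
        x = rng.normal(size=areas.size)
        u = rng.normal(scale=0.5, size=30)[areas]          # area effects
        y = 2.0 + 1.5 * x + u + rng.normal(scale=1.0, size=areas.size)
        df = pd.DataFrame({"y": y, "x": x, "area": areas})

        # Linear mixed model fitted by REML; the EBLUP for an area combines
        # the fixed-effect fit with that area's predicted random effect.
        fit = smf.mixedlm("y ~ x", df, groups=df["area"]).fit(reml=True)
        eblup_area0 = (fit.fe_params["Intercept"]
                       + fit.fe_params["x"] * df.loc[df.area == 0, "x"].mean()
                       + fit.random_effects[0].iloc[0])
        print(f"EBLUP for area 0 mean: {eblup_area0:.2f}")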

  14. Modeling Water Temperature in the Yakima River, Washington, from Roza Diversion Dam to Prosser Dam, 2005-06

    USGS Publications Warehouse

    Voss, Frank D.; Curran, Christopher A.; Mastin, Mark C.

    2008-01-01

    A mechanistic water-temperature model was constructed by the U.S. Geological Survey for use by the Bureau of Reclamation for studying the effect of potential water management decisions on water temperature in the Yakima River between Roza and Prosser, Washington. Flow and water temperature data for model input were obtained from the Bureau of Reclamation Hydromet database and from measurements collected by the U.S. Geological Survey during field trips in autumn 2005. Shading data for the model were collected by the U.S. Geological Survey in autumn 2006. The model was calibrated with data collected from April 1 through October 31, 2005, and tested with data collected from April 1 through October 31, 2006. Sensitivity analysis results showed that for the parameters tested, daily maximum water temperature was most sensitive to changes in air temperature and solar radiation. Root mean squared error for the five sites used for model calibration ranged from 1.3 to 1.9 degrees Celsius (°C) and mean error ranged from −1.3 to 1.6°C. The root mean squared error for the five sites used for testing simulation ranged from 1.6 to 2.2°C and mean error ranged from 0.1 to 1.3°C. The accuracy of the stream temperatures estimated by the model is limited by four errors (model error, data error, parameter error, and user error).

  15. Shape sensing using multi-core fiber optic cable and parametric curve solutions.

    PubMed

    Moore, Jason P; Rogge, Matthew D

    2012-01-30

    The shape of a multi-core optical fiber is calculated by numerically solving a set of Frenet-Serret equations describing the path of the fiber in three dimensions. Included in the Frenet-Serret equations are curvature and bending direction functions derived from distributed fiber Bragg grating strain measurements in each core. The method offers advantages over prior art in that it determines complex three-dimensional fiber shape as a continuous parametric solution rather than an integrated series of discrete planar bends. Results and error analysis of the method using a tri-core optical fiber are presented. Maximum error expressed as a percentage of fiber length was found to be 7.2%.
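
    A minimal sketch of the underlying computation: integrate the Frenet-Serret system numerically given curvature and torsion functions (here constants; in the application these would come from the FBG strain measurements):

        import numpy as np
        from scipy.integrate import solve_ivp

        def fiber_shape(kappa, tau, length, r0=np.zeros(3)):
            """Integrate the Frenet-Serret equations r'=T, T'=kappa*N,
            N'=-kappa*T + tau*B, B'=-tau*N along arclength s to recover
            the 3-D fiber path from curvature kappa(s) and torsion tau(s)."""
            def rhs(s, y):
                r, T, N, B = y[:3], y[3:6], y[6:9], y[9:12]
                return np.concatenate([T,
                                       kappa(s) * N,
                                       -kappa(s) * T + tau(s) * B,
                                       -tau(s) * N])
            y0 = np.concatenate([r0, [1, 0, 0], [0, 1, 0], [0, 0, 1]])
            sol = solve_ivp(rhs, (0.0, length), y0, dense_output=True, rtol=1e-8)
            return sol  # sol.sol(s)[:3] is the position at arclength s

        # Constant curvature 2, zero torsion: a half circle of radius 0.5
        # whose end point is (0, 1, 0).
        sol = fiber_shape(lambda s: 2.0, lambda s: 0.0, length=np.pi / 2)
        print(sol.sol(np.pi / 2)[:3])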

  16. Best estimate of luminal cross-sectional area of coronary arteries from angiograms

    NASA Technical Reports Server (NTRS)

    Lee, P. L.; Selzer, R. H.

    1988-01-01

    We have reexamined the problem of estimating the luminal area of an elliptically-shaped coronary artery cross section from two or more radiographic diameter measurements. The expected error is found to be much smaller than the maximum potential error. In the case of two orthogonal views, closed form expressions have been derived for calculating the area and the uncertainty. Assuming that the underlying ellipse has limited ellipticity (major/minor axis ratio less than five), it is shown that the average uncertainty in the area is less than 14 percent. When more than two views are available, we suggest using a least-squares fit method to extract all available information from the data.
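
    The gap between expected and maximum potential error can be illustrated numerically: for two orthogonal projected diameters, the estimate A = πd₁d₂/4 is exact when the views align with the ellipse axes and worst when the ellipse sits at 45°. A sketch (axis ratio and orientation grid are assumptions, not the paper's closed-form analysis):

        import numpy as np

        def ellipse_area_error(axis_ratio, phis=np.linspace(0, np.pi / 2, 181)):
            """Relative error of A = pi*d1*d2/4 from two orthogonal projected
            diameters of an ellipse (semi-axes a=1, b=1/axis_ratio), as a
            function of the unknown orientation phi."""
            a, b = 1.0, 1.0 / axis_ratio
            def width(theta, phi):  # full projected width of the ellipse
                return 2.0 * np.sqrt((a * np.cos(theta - phi))**2
                                     + (b * np.sin(theta - phi))**2)
            d1, d2 = width(0.0, phis), width(np.pi / 2, phis)
            estimate = np.pi * d1 * d2 / 4.0
            return (estimate - np.pi * a * b) / (np.pi * a * b)

        err = ellipse_area_error(axis_ratio=5.0)
        print(f"max overestimate for a 5:1 ellipse: {err.max():.1%}")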

  17. Determination of thorium by fluorescent x-ray spectrometry

    USGS Publications Warehouse

    Adler, I.; Axelrod, J.M.

    1955-01-01

    A fluorescent x-ray spectrographic method for the determination of thoria in rock samples uses thallium as an internal standard. Measurements are made with a two-channel spectrometer equipped with quartz (d = 1.817 Å) analyzing crystals. Particle-size effects are minimized by grinding the sample components with a mixture of silicon carbide and aluminum and then briquetting. Analyses of 17 samples showed that for the 16 samples containing over 0.7% thoria the average error, based on chemical results, is 4.7% and the maximum error, 9.5%. Because of limitations of instrumentation, 0.2% thoria is considered the lower limit of detection. An analysis can be made in about an hour.

  18. Digital terrestrial photogrammetric methods for tree stem analysis

    Treesearch

    Neil A. Clark; Randolph H. Wynne; Daniel L. Schmoldt; Matt Winn

    2000-01-01

    A digital camera was used to measure diameters at various heights along the stem on 20 red oak trees. Diameter at breast height ranged from 16 to over 60 cm, and height to a 10-cm top ranged from 12 to 20 m. The chi-square maximum anticipated error of geometric mean diameter estimates at the 95 percent confidence level was within ±4 cm for all heights when...

  19. The early maximum likelihood estimation model of audiovisual integration in speech perception.

    PubMed

    Andersen, Tobias S

    2015-05-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures favored more complex models. This difference between conventional error measures and cross-validation was found to be indicative of over-fitting in more complex models such as the FLMP.

  20. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    NASA Astrophysics Data System (ADS)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; Bianchini, Federico; Bleem, Lindsey E.; Crawford, Thomas M.; Holder, Gilbert P.; Manzotti, Alessandro; Reichardt, Christian L.

    2017-08-01

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  1. Monte Carlo simulation of edge placement error

    NASA Astrophysics Data System (ADS)

    Kobayashi, Shinji; Okada, Soichiro; Shimura, Satoru; Nafus, Kathleen; Fonseca, Carlos; Estrella, Joel; Enomoto, Masashi

    2018-03-01

    In the discussion of edge placement error (EPE), we proposed interactive pattern fidelity error (IPFE) as an indicator to judge pass/fail of integrated patterns. IPFE consists of lower and upper layer EPEs (CD and center of gravity: COG) and overlay, and is determined from the combination of each contributor's maximum variation. We succeeded in obtaining the IPFE density function by Monte Carlo simulation. From the results, we also found that the standard deviation (σ) of each indicator should be controlled to within 4.0σ at semiconductor scale, such as 100 billion patterns per die. Moreover, CD, COG and overlay were analyzed by analysis of variance (ANOVA); we can discuss all variations, from wafer to wafer (WTW), pattern to pattern (PTP), line edge roughness (LWR) and stochastic pattern noise (SPN), on an equal footing. From the analysis results, we can determine which processes and tools these variations belong to. Furthermore, the measurement length for LWR is also discussed in the ANOVA. We propose that the measurement length for IPFE analysis should not be fixed at the micrometer order, such as >2 μm, but chosen according to the device actually being fabricated.
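
    A stripped-down version of such a Monte Carlo EPE budget (the linear combination and the 1σ budgets below are illustrative assumptions; the paper's IPFE combines the contributors' maximum variations):

        import numpy as np

        rng = np.random.default_rng(42)
        N = 1_000_000  # simulated pattern pairs

        # Illustrative 1-sigma budgets (nm) for the contributors named above.
        cd_lower, cd_upper = rng.normal(0, 1.0, N), rng.normal(0, 1.2, N)
        cog_lower, cog_upper = rng.normal(0, 0.8, N), rng.normal(0, 0.9, N)
        overlay = rng.normal(0, 1.5, N)

        # Edge placement of each layer combines half the CD variation with the
        # centre-of-gravity shift; IPFE adds the overlay between the layers.
        epe_lower = 0.5 * cd_lower + cog_lower
        epe_upper = 0.5 * cd_upper + cog_upper
        ipfe = epe_upper - epe_lower + overlay

        sigma = ipfe.std()
        fail_rate = np.mean(np.abs(ipfe) > 4.0 * sigma)
        print(f"sigma = {sigma:.2f} nm, P(|IPFE| > 4 sigma) = {fail_rate:.2e}")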

  2. Ranging/tracking system for proximity operations

    NASA Technical Reports Server (NTRS)

    Nilsen, P.; Udalov, S.

    1982-01-01

    The hardware development and testing phase of a hand-held radar for ranging and tracking in Shuttle proximity operations is considered. The radar is to measure range to a 3 sigma accuracy of 1 m (3.28 ft) to a maximum range of 1850 m (6000 ft) and velocity to a 3 sigma accuracy of 0.03 m/s (0.1 ft/s). Size and weight are similar to the hand-held radars frequently seen in use by motorcycle police officers. Meeting these goals for a target in free space proved very difficult in the testing program; however, at a range of approximately 700 m, the 3 sigma range error was found to be 0.96 m. It is felt that much of this error is due to clutter in the test environment. As an example of the velocity accuracy, at a range of 450 m, a 3 sigma velocity error of 0.02 m/s was measured. The principles of the radar and recommended changes to its design are given. Analyses performed in support of the design process, the actual circuit diagrams, and the software listing are included.

  3. Wind friction parametrisation used in emission models for wastewater treatment plants: A critical review.

    PubMed

    Prata, Ademir A; Santos, Jane M; Timchenko, Victoria; Reis, Neyval C; Stuetz, Richard M

    2017-11-01

    Emission models are widely applied tools for estimating atmospheric emissions from wastewater treatment plants (WWTPs). The friction velocity u* is a key variable for the modelling of emissions from passive liquid surfaces in WWTPs. This work evaluated different parametrisations of u* for passive liquid surfaces at the scale of WWTP units, which present relatively small fetches, based on available wind friction and wave data measured at wind-wave tanks (fetches spanning from approximately 3 to 100 m, and wind speeds from 2 to 17 m s⁻¹). The empirical correlation by Smith (1980; J. Phys. Oceanogr. 10, 709-726), which has been frequently adopted in air emission models (despite the fact that it was originally derived for the ocean), presented a general tendency to overestimate u*, with significant (although not extreme) relative errors (mean and maximum errors of 13.5% and 36.6%, respectively); the use of Charnock's relation, with Charnock constant 0.010, performed in a very similar manner (mean and maximum errors of 13.3% and 37.8%, respectively). Better estimates of u* were achieved by parametrisations based on the significant wave steepness. Simplified correlations between the wind drag and the non-dimensional fetch were obtained. An approach was devised, comprising the use of Charnock's relation (with Charnock constant 0.010) and of these simplified correlations, depending on the ranges of frequency of the peak waves, fetch and wind speed. The proposed approach predicted u* with improved accuracy (mean, maximum and 95%-percentile relative errors of 6.6%, 16.7% and 13.9%, respectively), besides being able to incorporate the influence of the fetch in the wind drag, thus taking into account the size of the tanks in the WWTPs.

  4. Cross-Sectional Elasticity Imaging of Arterial Wall by Comparing Measured Change in Thickness with Model Waveform

    NASA Astrophysics Data System (ADS)

    Tang, Jiang; Hasegawa, Hideyuki; Kanai, Hiroshi

    2005-06-01

    For the assessment of the elasticity of the arterial wall, we have developed the phased tracking method [H. Kanai et al.: IEEE Trans. Ultrason. Ferroelectr. Freq. Control 43 (1996) 791] for measuring, with transcutaneous ultrasound, the minute change in thickness due to heartbeats and the elasticity of the arterial wall. For various reasons (for example, an extremely small deformation of the wall), the minute change in wall thickness during one heartbeat can be heavily corrupted by noise, and the reliability of the elasticity distribution obtained from the maximum change in thickness then deteriorates because the estimation of the maximum value is strongly influenced by noise. To obtain a more reliable cross-sectional image of the elasticity of the arterial wall, in this paper a matching method is proposed that evaluates the waveform of the measured change in wall thickness by comparing it with a template waveform. The maximum deformation, which is used in the calculation of elasticity, was determined from the amplitude of the matched model waveform to reduce the influence of noise. The matched model waveform was obtained by minimizing the difference between the measured and template waveforms. Furthermore, a random error, obtained from the reproducibility of the measured waveform among heartbeats, was considered useful for evaluating the reliability of the measured waveform.
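
    The amplitude of a matched model waveform has a closed-form least-squares solution: the scale a minimizing ||x - a*t||² is the inner-product ratio. A minimal sketch (synthetic waveform; the time-alignment part of the matching is omitted):

        import numpy as np

        def match_template(measured, template):
            """Least-squares fit of a scaled template to a measured waveform.
            The residual norm indicates how reliable the waveform is."""
            a = np.dot(measured, template) / np.dot(template, template)
            residual = np.linalg.norm(measured - a * template)
            return a, residual

        t = np.sin(np.linspace(0, np.pi, 100))      # template thickness change
        x = 12e-6 * t + np.random.default_rng(3).normal(0, 1e-6, 100)  # 12 um
        a, res = match_template(x, t)
        print(f"estimated maximum thickness change: {a * t.max() * 1e6:.1f} um")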

  5. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    PubMed

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously-proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.

  6. Methods of automatic nucleotide-sequence analysis. Multicomponent spectrophotometric analysis of mixtures of nucleic acid components by a least-squares procedure

    PubMed Central

    Lee, Sheila; McMullen, D.; Brown, G. L.; Stokes, A. R.

    1965-01-01

    1. A theoretical analysis of the errors in multicomponent spectrophotometric analysis of nucleoside mixtures, by a least-squares procedure, has been made to obtain an expression for the error coefficient, relating the error in calculated concentration to the error in extinction measurements. 2. The error coefficients, which depend only on the "library" of spectra used to fit the experimental curves, have been computed for a number of "libraries" containing the following nucleosides found in s-RNA: adenosine, guanosine, cytidine, uridine, 5-ribosyluracil, 7-methylguanosine, 6-dimethylaminopurine riboside, 6-methylaminopurine riboside and thymine riboside. 3. The error coefficients have been used to determine the best conditions for maximum accuracy in the determination of the compositions of nucleoside mixtures. 4. Experimental determinations of the compositions of nucleoside mixtures have been made and the errors found to be consistent with those predicted by the theoretical analysis. 5. It has been demonstrated that, with certain precautions, the multicomponent spectrophotometric method described is suitable as a basis for automatic nucleotide-composition analysis of oligonucleotides containing nine nucleotides. Used in conjunction with continuous chromatography and flow chemical techniques, this method can be applied to the study of the sequence of s-RNA. PMID:14346087
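
    A compact modern rendering of the least-squares step: concentrations solve min ||Ec - e||², and the diagonal of (EᵀE)⁻¹ scales extinction error into concentration error, one plausible reading of the error coefficients above. Library and noise level are invented:

        import numpy as np

        def composition(E_library, extinctions):
            """Least-squares concentrations from a mixture spectrum.
            E_library: (wavelengths x components) extinction matrix;
            extinctions: measured extinction at each wavelength."""
            c, *_ = np.linalg.lstsq(E_library, extinctions, rcond=None)
            error_coeff = np.sqrt(np.diag(np.linalg.inv(E_library.T @ E_library)))
            return c, error_coeff

        rng = np.random.default_rng(7)
        E = rng.uniform(0.1, 1.0, size=(25, 4))   # 25 wavelengths, 4 nucleosides
        true_c = np.array([0.3, 0.1, 0.25, 0.35])
        meas = E @ true_c + rng.normal(0, 0.002, 25)  # extinction noise
        c, k = composition(E, meas)
        print(np.round(c, 3), np.round(k, 2))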

  7. An error covariance model for sea surface topography and velocity derived from TOPEX/POSEIDON altimetry

    NASA Technical Reports Server (NTRS)

    Tsaoussi, Lucia S.; Koblinsky, Chester J.

    1994-01-01

    In order to facilitate the use of satellite-derived sea surface topography and velocity in oceanographic models, methodology is presented for deriving the total error covariance and its geographic distribution from TOPEX/POSEIDON measurements. The model is formulated using a parametric model fit to the altimeter range observations. The topography and velocity are modeled with spherical harmonic expansions whose coefficients are found through optimal adjustment to the altimeter range residuals using Bayesian statistics. All other parameters, including the orbit, geoid, surface models, and range corrections, are provided as unadjusted parameters. The maximum likelihood estimates and errors are derived from the probability density function of the altimeter range residuals conditioned with a priori information. Estimates of model errors for the unadjusted parameters are obtained from the TOPEX/POSEIDON postlaunch verification results and the error covariances for the orbit and the geoid, except for the ocean tides. The error in the ocean tides is modeled, first, as the difference between two global tide models and, second, as the correction to the present tide model, the correction derived from the TOPEX/POSEIDON data. A formal error covariance propagation scheme is used to derive the total error. Our global total error estimate for the TOPEX/POSEIDON topography relative to the geoid for one 10-day period is found to be 11 cm RMS. When the error in the geoid is removed, thereby providing an estimate of the time dependent error, the uncertainty in the topography is 3.5 cm root mean square (RMS). This level of accuracy is consistent with direct comparisons of TOPEX/POSEIDON altimeter heights with tide gauge measurements at 28 stations. In addition, the error correlation length scales are derived globally in both east-west and north-south directions, which should prove useful for data assimilation. The largest error correlation length scales are found in the tropics. Errors in the velocity field are smallest in midlatitude regions. For both variables, the largest errors are caused by uncertainty in the geoid. More accurate representations of the geoid await a dedicated geopotential satellite mission. Substantial improvements in the accuracy of ocean tide models are expected in the very near future from research with TOPEX/POSEIDON data.

  8. Design implementation in model-reference adaptive systems. [application and implementation on space shuttle

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1973-01-01

    The derivation of an approximate error characteristic equation describing the transient system error response is given, along with a procedure for selecting adaptive gain parameters so as to shape the transient error response. A detailed example of the application and implementation of these methods for a space shuttle type vehicle is included. An extension of the characteristic equation technique is used to provide an estimate of the magnitude of the maximum system error and an estimate of the time of occurrence of this maximum after a plant parameter disturbance. Techniques for relaxing certain stability requirements, and the conditions under which this can be done while still guaranteeing asymptotic stability of the system error, are discussed. Such conditions are possible because the Lyapunov methods used in the stability derivation allow for overconstraining a problem in the process of ensuring stability.

  9. PAPER-CHROMATOGRAM MEASUREMENT OF SUBSTANCES LABELLED WITH H$sup 3$ (in German)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenzel, M.

    1961-03-01

    Compounds labelled with ³H can be detected on a paper chromatogram using a methane flow counter with a count yield of 1%. The yield can be estimated from the beta maximum energy. A new double counter was developed which increases the count yield to 2% and also considerably decreases the margin of error. Calibration curves with leucine and glucosamine show satisfactory linearity between measured and applied activity in the range from 4 to 50 × 10⁻³ μc of ³H.

  10. An intelligent maximum permissible exposure meter for safety assessments of laser radiation

    NASA Astrophysics Data System (ADS)

    Corder, D. A.; Evans, D. R.; Tyrer, J. R.

    1996-09-01

    There is frequently a need to make laser power or energy density measurements when determining whether radiation from a laser system exceeds the Maximum Permissible Exposure (MPE) as defined in BS EN 60825. This can be achieved using standard commercially available laser power or energy measurement equipment, but some of these have shortcomings when used in this application. Calculations must be performed by the user to compare the measured value to the MPE. The measurement and calculation procedure appears complex to the nonexpert who may be performing the assessment. A novel approach is described which uses purpose designed hardware and software to simplify the process. The hardware is optimized for measuring the relatively low powers associated with MPEs. The software runs on a Psion Series 3a palmtop computer. This reduces the cost and size of the system yet allows graphical and numerical presentation of data. Data output to other software running on PCs is also possible, enabling the instrument to be used as part of a quality system. Throughout the measurement process the opportunity for user error has been minimized by the hardware and software design.

  11. Development of real-time rotating waveplate Stokes polarimeter using multi-order retardation for ITER poloidal polarimeter.

    PubMed

    Imazawa, R; Kawano, Y; Ono, T; Itami, K

    2016-01-01

    The rotating waveplate Stokes polarimeter was developed for the ITER (International Thermonuclear Experimental Reactor) poloidal polarimeter. A generalized model of the rotating waveplate Stokes polarimeter and an algorithm suitable for real-time field-programmable gate array (FPGA) processing are proposed. Since the generalized model takes into account each component associated with the rotation of the waveplate, the Stokes parameters can be accurately measured even under non-ideal conditions, such as non-uniformity of the waveplate retardation. Experiments using a He-Ne laser showed that the maximum error and the precision of the Stokes parameters were 3.5% and 1.2%, respectively. The rotation speed of the waveplate was 20 000 rpm and the time resolution of the Stokes parameter measurement was 3.3 ms. Software emulation showed that real-time measurement of the Stokes parameters with a time resolution of less than 10 ms is possible by using several FPGA boards. An evaluation of measurement capability using a far-infrared laser, which the ITER poloidal polarimeter will use, concluded that the measurement error will be reduced by a factor of nine.
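
    For the textbook special case of an ideal quarter-wave plate rotating in front of a fixed polarizer, Stokes recovery reduces to Fourier analysis of the detected intensity over a rotation. The sketch below uses that special case, not the paper's generalized model for multi-order retardation and non-ideal components:

        import numpy as np

        def stokes_from_rotating_qwp(theta, I):
            """Stokes parameters from an ideal rotating quarter-wave plate
            followed by a fixed horizontal polarizer. Detected intensity:
            I = 0.5*S0 + 0.25*S1 + 0.25*S1*cos4t + 0.25*S2*sin4t - 0.5*S3*sin2t.
            Fourier analysis over full rotations isolates each term."""
            s1 = 8.0 * np.mean(I * np.cos(4 * theta))
            s2 = 8.0 * np.mean(I * np.sin(4 * theta))
            s3 = -4.0 * np.mean(I * np.sin(2 * theta))
            s0 = 2.0 * np.mean(I) - s1 / 2.0
            return np.array([s0, s1, s2, s3])

        theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
        S_true = np.array([1.0, 0.3, -0.2, 0.5])
        I = (0.5 * S_true[0] + 0.25 * S_true[1] * (1 + np.cos(4 * theta))
             + 0.25 * S_true[2] * np.sin(4 * theta)
             - 0.5 * S_true[3] * np.sin(2 * theta))
        print(np.round(stokes_from_rotating_qwp(theta, I), 6))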

  12. Measurement error in time-series analysis: a simulation study comparing modelled and monitored data.

    PubMed

    Butland, Barbara K; Armstrong, Ben; Atkinson, Richard W; Wilkinson, Paul; Heal, Mathew R; Doherty, Ruth M; Vieno, Massimo

    2013-11-13

    Assessing health effects from background exposure to air pollution is often hampered by the sparseness of pollution monitoring networks. However, regional atmospheric chemistry-transport models (CTMs) can provide pollution data with national coverage at fine geographical and temporal resolution. We used statistical simulation to compare the impact on epidemiological time-series analysis of additive measurement error in sparse monitor data as opposed to geographically and temporally complete model data. Statistical simulations were based on a theoretical area of 4 regions each consisting of twenty-five 5 km × 5 km grid-squares. In the context of a 3-year Poisson regression time-series analysis of the association between mortality and a single pollutant, we compared the error impact of using daily grid-specific model data as opposed to daily regional average monitor data. We investigated how this comparison was affected if we changed the number of grids per region containing a monitor. To inform simulations, estimates (e.g. of pollutant means) were obtained from observed monitor data for 2003-2006 for national network sites across the UK and corresponding model data that were generated by the EMEP-WRF CTM. Average within-site correlations between observed monitor and model data were 0.73 and 0.76 for rural and urban daily maximum 8-hour ozone respectively, and 0.67 and 0.61 for rural and urban loge(daily 1-hour maximum NO2). When regional averages were based on 5 or 10 monitors per region, health effect estimates exhibited little bias. However, with only 1 monitor per region, the regression coefficient in our time-series analysis was attenuated by an estimated 6% for urban background ozone, 13% for rural ozone, 29% for urban background loge(NO2) and 38% for rural loge(NO2). For grid-specific model data the corresponding figures were 19%, 22%, 54% and 44% respectively, i.e. similar for rural loge(NO2) but more marked for urban loge(NO2). Even if correlations between model and monitor data appear reasonably strong, additive classical measurement error in model data may lead to appreciable bias in health effect estimates. As process-based air pollution models become more widely used in epidemiological time-series analysis, assessments of error impact that include statistical simulation may be useful.
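
    The attenuation mechanism can be reproduced in a few lines: regress Poisson counts on an exposure contaminated with additive classical error and watch the coefficient shrink as the error grows. All parameters are invented; this is a simplified sketch, not the paper's simulation design:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n_days = 1095                                  # three years of daily data
        true_x = rng.normal(20, 5, n_days)             # true regional exposure
        beta = 0.004                                   # log relative risk per unit
        y = rng.poisson(np.exp(np.log(50) + beta * true_x))

        def fitted_beta(x):
            X = sm.add_constant(x)
            return sm.GLM(y, X, family=sm.families.Poisson()).fit().params[1]

        # Classical additive error (e.g. a single noisy monitor per region)
        # attenuates the estimated coefficient; more error, more attenuation.
        for sd in (0.0, 3.0, 6.0):
            xerr = true_x + rng.normal(0, sd, n_days)
            print(f"error sd {sd}: beta_hat = {fitted_beta(xerr):.5f}")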

  13. Flat Field Anomalies in an X-ray CCD Camera Measured Using a Manson X-ray Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. J. Haugh and M. B. Schneider

    2008-10-31

    The Static X-ray Imager (SXI) is a diagnostic used at the National Ignition Facility (NIF) to measure the position of the X-rays produced by lasers hitting a gold foil target. The intensity distribution taken by the SXI camera during a NIF shot is used to determine how accurately NIF can aim laser beams. This is critical to proper NIF operation. Imagers are located at the top and the bottom of the NIF target chamber. The CCD chip is an X-ray sensitive silicon sensor, with a large format array (2k x 2k), 24 μm square pixels, and 15 μm thick. A multi-anode Manson X-ray source, operating up to 10 kV and 10 W, was used to characterize and calibrate the imagers. The output beam is heavily filtered to narrow the spectral beam width, giving a typical resolution E/ΔE≈10. The X-ray beam intensity was measured using an absolute photodiode that has accuracy better than 1% up to the Si K edge and better than 5% at higher energies. The X-ray beam provides full CCD illumination and is flat, within ±1% maximum to minimum. The spectral efficiency was measured at 10 energy bands ranging from 930 eV to 8470 eV. We observed an energy dependent pixel sensitivity variation that showed continuous change over a large portion of the CCD. The maximum sensitivity variation occurred at 8470 eV. The geometric pattern did not change at lower energies, but the maximum contrast decreased and was not observable below 4 keV. We were also able to observe debris, damage, and surface defects on the CCD chip. The Manson source is a powerful tool for characterizing the imaging errors of an X-ray CCD imager. These errors are quite different from those found in a visible CCD imager.

  14. A Likelihood-Based Framework for Association Analysis of Allele-Specific Copy Numbers.

    PubMed

    Hu, Y J; Lin, D Y; Sun, W; Zeng, D

    2014-10-01

    Copy number variants (CNVs) and single nucleotide polymorphisms (SNPs) co-exist throughout the human genome and jointly contribute to phenotypic variations. Thus, it is desirable to consider both types of variants, as characterized by allele-specific copy numbers (ASCNs), in association studies of complex human diseases. Current SNP genotyping technologies capture the CNV and SNP information simultaneously via fluorescent intensity measurements. The common practice of calling ASCNs from the intensity measurements and then using the ASCN calls in downstream association analysis has important limitations. First, the association tests are prone to false-positive findings when differential measurement errors between cases and controls arise from differences in DNA quality or handling. Second, the uncertainties in the ASCN calls are ignored. We present a general framework for the integrated analysis of CNVs and SNPs, including the analysis of total copy numbers as a special case. Our approach combines the ASCN calling and the association analysis into a single step while allowing for differential measurement errors. We construct likelihood functions that properly account for case-control sampling and measurement errors. We establish the asymptotic properties of the maximum likelihood estimators and develop EM algorithms to implement the corresponding inference procedures. The advantages of the proposed methods over the existing ones are demonstrated through realistic simulation studies and an application to a genome-wide association study of schizophrenia. Extensions to next-generation sequencing data are discussed.

  15. Accuracy of the Heidelberg Spectralis in the alignment between near-infrared image and tomographic scan in a model eye: a multicenter study.

    PubMed

    Barteselli, Giulio; Bartsch, Dirk-Uwe; Viola, Francesco; Mojana, Francesca; Pellegrini, Marco; Hartmann, Kathrin I; Benatti, Eleonora; Leicht, Simon; Ratiglia, Roberto; Staurenghi, Giovanni; Weinreb, Robert N; Freeman, William R

    2013-09-01

    To evaluate temporal changes and predictors of accuracy in the alignment between simultaneous near-infrared image and optical coherence tomography (OCT) scan on the Heidelberg Spectralis using a model eye. Laboratory investigation. After calibrating the device, 6 sites performed weekly testing of the alignment for 12 weeks using a model eye. The maximum error was compared with multiple variables to evaluate predictors of inaccurate alignment. Variables included the number of weekly scanned patients, total number of OCT scans and B-scans performed, room temperature and its variation, and working time of the scanning laser. A 4-week extension study was subsequently performed to analyze short-term changes in the alignment. The average maximum error in the alignment was 15 ± 6 μm; the greatest error was 35 μm. The error increased significantly at week 1 (P = .01), specifically after the second imaging study (P < .05); reached a maximum after the eighth patient (P < .001); and then varied randomly over time. Predictors for inaccurate alignment were temperature variation and scans per patient (P < .001). For each 1 unit of increase in temperature variation, the estimated increase in maximum error was 1.26 μm. For the average number of scans per patient, each increase of 1 unit increased the error by 0.34 μm. Overall, the accuracy of the Heidelberg Spectralis was excellent. The greatest error happened in the first week after calibration, and specifically after the second imaging study. To improve the accuracy, room temperature should be kept stable and unnecessary scans should be avoided. The alignment of the device does not need to be checked on a regular basis in the clinical setting, but it should be checked after every other patient for more precise research purposes.

  16. Portable bioimpedance monitor evaluation for continuous impedance measurements. Towards wearable plethysmography applications.

    PubMed

    Ferreira, J; Seoane, F; Lindecrantz, K

    2013-01-01

    Personalised Health Systems (PHS), which could improve patients' quality of life and reduce health care costs for society, among other benefits, are emerging. The purpose of this paper is to study the capability of the System-on-Chip Impedance Network Analyser AD5933 to perform high-speed, single-frequency, continuous bioimpedance measurements. From a theoretical analysis, the minimum continuous impedance estimation time was determined, and the AD5933 with a custom 4-electrode analog front-end (AFE) was used to experimentally determine the maximum continuous impedance estimation rate as well as the system's impedance estimation error when measuring a 2R1C electrical circuit model. Transthoracic Electrical Bioimpedance (TEB) measurements in a healthy subject were obtained using 3M gel electrodes in a tetrapolar lateral spot electrode configuration. The obtained TEB raw signal was filtered in MATLAB to obtain the respiration and cardiogenic signals, and from the cardiogenic signal the impedance derivative signal (dZ/dt) was also calculated. The results show that the maximum continuous impedance estimation rate was approximately 550 measurements per second, with a magnitude estimation error below 1% in 2R1C parallel-bridge measurements. The extracted respiration and cardiac signals exhibited good quality and could be used to obtain valuable information in some plethysmography monitoring applications. These results suggest that an AD5933-based monitor could be used to implement a portable and wearable bioimpedance plethysmograph for applications such as impedance cardiography. Combined with research on functional garments and textile electrodes, this might enable the implementation of PHS applications in the relatively near future.
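
    A sketch of a 2R1C test load of the kind used to verify such a system, assuming the common topology of extracellular resistance in parallel with intracellular resistance plus membrane capacitance; component values and noise level are invented, and the AD5933 front-end details are not modelled:

        import numpy as np

        def z_2r1c(freq_hz, r_e, r_i, c_m):
            """Impedance of the 2R1C tissue model: R_e in parallel with the
            series branch of R_i and membrane capacitance C_m."""
            w = 2 * np.pi * freq_hz
            branch = r_i + 1.0 / (1j * w * c_m)
            return r_e * branch / (r_e + branch)

        # Magnitude estimation error of a simulated sweep against the model.
        f = np.array([5e3, 10e3, 50e3, 100e3])
        z_model = z_2r1c(f, r_e=500.0, r_i=300.0, c_m=50e-9)
        noise = np.random.default_rng(5).normal(0, 0.005, 4)
        z_meas = np.abs(z_model) * (1 + noise)
        print(np.round(100 * np.abs(z_meas - np.abs(z_model)) / np.abs(z_model), 2))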

  17. Enhancing interferometer phase estimation, sensing sensitivity, and resolution using robust entangled states

    NASA Astrophysics Data System (ADS)

    Smith, James F.

    2017-11-01

    With the goal of designing interferometers and interferometer sensors, e.g., LADARs, with enhanced sensitivity, resolution, and phase estimation, states using quantum entanglement are discussed. These states include N00N states, plain M and M states (PMMSs), and linear combinations of M and M states (LCMMS). Closed-form expressions are provided for the optimal detection operators; the visibility, a measure of the state's robustness to loss and noise; a resolution measure; and the phase estimate error. The optimal resolution for the maximum visibility and minimum phase error is found. For the visibility, comparisons between PMMSs, LCMMS, and N00N states are provided. For the minimum phase error, comparisons between LCMMS, PMMSs, N00N states, separate photon states (SPSs), the shot noise limit (SNL), and the Heisenberg limit (HL) are provided. A representative collection of computational results illustrating the superiority of LCMMS when compared to PMMSs and N00N states is given. It is found that, for a resolution 12 times the classical result, LCMMS has visibility 11 times that of N00N states and 4 times that of PMMSs. For the same case, the minimum phase error for LCMMS is 10.7 times smaller than that of PMMSs and 29.7 times smaller than that of N00N states.

  18. Position Tracking During Human Walking Using an Integrated Wearable Sensing System.

    PubMed

    Zizzo, Giulio; Ren, Lei

    2017-12-10

    Progress has been made enabling expensive, high-end inertial measurement units (IMUs) to be used as tracking sensors. However, the cost of these IMUs is prohibitive to their widespread use, and hence the potential of low-cost IMUs is investigated in this study. A wearable low-cost sensing system consisting of IMUs and ultrasound sensors was developed. Core to this system is an extended Kalman filter (EKF), which provides both zero-velocity updates (ZUPTs) and Heuristic Drift Reduction (HDR). The IMU data was combined with ultrasound range measurements to improve accuracy. When a map of the environment was available, a particle filter was used to impose constraints on the possible user motions. The system was therefore composed of three subsystems: IMUs, ultrasound sensors, and a particle filter. A Vicon motion capture system was used to provide ground truth information, enabling validation of the sensing system. Using only the IMU, the system showed loop misclosure errors of 1% with a maximum error of 4-5% during walking. The addition of the ultrasound sensors resulted in a 15% reduction in the total accumulated error. Lastly, the particle filter was capable of providing noticeable corrections, which could keep the tracking error below 2% after the first few steps.
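
    A minimal stance-phase detector of the kind that triggers ZUPTs in such an EKF: flag samples where the angular rate is low and the acceleration magnitude is near gravity over a short window. Thresholds and window length are illustrative assumptions, not the paper's values:

        import numpy as np

        def detect_stance(gyro_norm, accel_norm, g=9.81,
                          gyro_thresh=0.6, accel_tol=0.4, win=15):
            """Flag samples where the foot is stationary; during flagged
            samples the EKF would apply a zero-velocity update (ZUPT)."""
            still = (gyro_norm < gyro_thresh) & (np.abs(accel_norm - g) < accel_tol)
            # Require (nearly) the whole window to be still to avoid
            # false positives during slow parts of the swing phase.
            counts = np.convolve(still.astype(float), np.ones(win), mode="same")
            return counts >= win - 1

        rng = np.random.default_rng(9)
        gyro = np.abs(rng.normal(0.1, 0.05, 1000)); gyro[300:400] += 3.0  # swing
        acc = 9.81 + rng.normal(0, 0.1, 1000);      acc[300:400] += 2.0
        zupt = detect_stance(gyro, acc)
        print(f"stance samples: {zupt.sum()} / 1000")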

  1. Levels of asymmetry in Formica pratensis Retz. (Hymenoptera, Insecta) from a chronic metal-contaminated site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rabitsch, W.B.

    1997-07-01

    Asymmetries of bilaterally symmetrical morphological traits in workers of the ant Formica pratensis Retzius were compared at sites with different levels of metal contamination and between mature and pre-mature colonies. Statistical analyses of the right-minus-left differences revealed that their distributions fit assumptions of fluctuating asymmetry (FA). No directional asymmetry or antisymmetry was present. Mean measurement error accounts for a third of the variation, but the maximum measurement error was 65%. Although significant differences of FA in ants were observed, the inconsistent results render uncovering a clear pattern difficult. Lead, cadmium, and zinc concentrations in the ants decreased with the distance from the contamination source, but no relation was found between FA and the heavy metal levels. Ants from the premature colonies were more asymmetrical than those from mature colonies but accumulated less metal. The use of asymmetry measures in ecotoxicology and biomonitoring is criticized; it should remain applicable only if statistical assumptions are complemented by genetic and historical data.

  2. [Left ventricular volume determination by first-pass radionuclide angiocardiography using a semi-geometric count-based method].

    PubMed

    Kinoshita, S; Suzuki, T; Yamashita, S; Muramatsu, T; Ide, M; Dohi, Y; Nishimura, K; Miyamae, T; Yamamoto, I

    1992-01-01

    A new radionuclide technique for the calculation of left ventricular (LV) volume by the first-pass (FP) method was developed and examined. Using a semi-geometric count-based method, the LV volume can be measured by the following equations: CV = CM/(L/d); V = (CT/CV) × d³ = (CT/CM) × L × d², where V = LV volume, CV = voxel count, CM = the maximum LV count, CT = the total LV count, L = LV depth where the maximum count was obtained, and d = pixel size. This theorem was applied to FP LV images obtained in the 30-degree right anterior oblique position. Frame-mode acquisition was performed and the LV end-diastolic maximum count and total count were obtained. The maximum LV depth was obtained as the maximum width of the LV on the FP end-diastolic image, using the assumption that the LV cross-section is circular. These values were substituted in the above equation and the LV end-diastolic volume (FP-EDV) was calculated. A routine equilibrium (EQ) study was done, and the end-diastolic maximum count and total count were obtained. The LV maximum depth was measured on the FP end-diastolic frame, as the maximum length of the LV image. Using these values, the EQ-EDV was calculated and the FP-EDV was compared to the EQ-EDV. The correlation coefficient for these two values was r = 0.96 (n = 23, p < 0.001), and the standard error of the estimated volume was 10 ml. (ABSTRACT TRUNCATED AT 250 WORDS)
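
    The volume equation can be applied directly; a minimal sketch using the variable names from the abstract, with made-up example counts and dimensions:

    ```python
    def lv_volume(total_count: float, max_count: float,
                  depth_cm: float, pixel_cm: float) -> float:
        """Left ventricular volume from the semi-geometric count-based method.

        V = (CT / CM) * L * d^2, where CT is the total LV count, CM the maximum
        LV count, L the LV depth at the maximum count, and d the pixel size.
        Returns volume in ml (cm^3) when lengths are given in cm.
        """
        return (total_count / max_count) * depth_cm * pixel_cm ** 2

    # Illustrative numbers only: CT = 2.0e5, CM = 2000, L = 7.5 cm, d = 0.4 cm
    print(lv_volume(2.0e5, 2000, 7.5, 0.4))  # -> 120.0 ml
    ```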

  3. Investigation of scene identification algorithms for radiation budget measurements

    NASA Technical Reports Server (NTRS)

    Diekmann, F. J.

    1986-01-01

    The computation of the Earth radiation budget from satellite measurements requires the identification of the scene in order to select spectral factors and bidirectional models. A scene identification procedure is developed for AVHRR SW and LW data by using two radiative transfer models. These AVHRR GAC pixels are then attached to corresponding ERBE pixels and the results are sorted into scene identification probability matrices. These scene intercomparisons show that there is generally a tendency for underestimation of cloudiness over ocean at high cloud amounts, e.g., mostly cloudy instead of overcast, partly cloudy instead of mostly cloudy, for the ERBE relative to the AVHRR results. Reasons for this are explained. Preliminary estimates of the errors of exitances due to scene misidentification demonstrate the high dependency on the probability matrices. While the longwave error can generally be neglected, the shortwave deviations have reached maximum values of more than 12% of the respective exitances.

  4. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    USGS Publications Warehouse

    Langbein, John O.

    2017-01-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.

  5. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    NASA Astrophysics Data System (ADS)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
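
    A rough sketch of the covariance construction this describes is given below: white noise is colored into power-law noise by a lower-triangular fractional-difference filter (Hosking weights), the white-noise term is added rather than combined in quadrature, and the Gaussian log-likelihood is evaluated via a Cholesky factorization. This is an illustration of the idea, not the author's code; missing data would be handled by deleting the corresponding rows and columns of C, which is exactly what breaks the Toeplitz structure that faster methods depend on.

    ```python
    import numpy as np
    from scipy.linalg import cho_factor, cho_solve, toeplitz

    def powerlaw_filter(n: int, alpha: float) -> np.ndarray:
        """First column of the fractional-difference filter that colors white
        noise into power-law noise of spectral index alpha (Hosking weights)."""
        h = np.zeros(n)
        h[0] = 1.0
        for k in range(1, n):
            h[k] = h[k - 1] * (k - 1 + alpha / 2.0) / k
        return h

    def covariance(n: int, alpha: float, sig_pl: float, sig_wh: float) -> np.ndarray:
        """C = sig_pl^2 F F^T + sig_wh^2 I, with F lower-triangular Toeplitz."""
        F = np.tril(toeplitz(powerlaw_filter(n, alpha)))
        return sig_pl ** 2 * F @ F.T + sig_wh ** 2 * np.eye(n)

    def log_likelihood(resid: np.ndarray, alpha: float,
                       sig_pl: float, sig_wh: float) -> float:
        """Gaussian log-likelihood of residuals under the combined noise model."""
        n = len(resid)
        cf = cho_factor(covariance(n, alpha, sig_pl, sig_wh), lower=True)
        logdet = 2.0 * np.log(np.diag(cf[0])).sum()
        quad = resid @ cho_solve(cf, resid)
        return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)
    ```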

  6. Low-Budget Instrumentation of a Conventional Leg Press to Measure Reliable Isometric-Strength Capacity.

    PubMed

    Baur, Heiner; Groppa, Alessia Severina; Limacher, Regula; Radlinger, Lorenz

    2016-02-02

    Maximum strength and rate of force development (RFD) are 2 important strength characteristics for everyday tasks and athletic performance. Measurements of both parameters must be reliable. Expensive isokinetic devices with isometric modes are often used. The possibility of cost-effective measurements in a practical setting would facilitate quality control. The purpose of this study was to assess the reliability of measurements of maximum isometric strength (Fmax) and RFD on a conventional leg press. Sixteen subjects (23 ± 2 y, 1.68 ± 0.05 m, 59 ± 5 kg) were tested twice within 1 session. After a warm-up, subjects performed 2 blocks of 5 maximum voluntary isometric contraction trials on an instrumented leg press (1- and 2-legged, in randomized order). Fmax (N) and RFD (N/s) were extracted from force-time curves. Reliability was determined for Fmax and RFD by calculating the intraclass correlation coefficient (ICC), the test-retest variability (TRV), and the bias and limits of agreement. Reliability measures revealed good to excellent ICCs of .80-.93. TRV showed mean differences between measurement sessions of 0.4-6.9%. The systematic error was low compared with the absolute mean values (Fmax 5-6%, RFD 1-4%). The implementation of a force transducer into a conventional leg press provides a viable procedure to assess Fmax and RFD. Both performance parameters can be assessed with good to excellent reliability, allowing quality control of interventions.

  7. Re-assessing accumulated oxygen deficit in middle-distance runners.

    PubMed

    Bickham, D; Le Rossignol, P; Gibbons, C; Russell, A P

    2002-12-01

    The purpose of this study was to re-assess the accumulated oxygen deficit (AOD), incorporating recent methodological improvements, i.e., 4 min submaximal tests spread above and below the lactate threshold (LT). We investigated the influence of the VO2-speed regression on the precision of the estimated total energy demand and AOD, utilising different numbers of regression points and including measurement errors. Seven trained middle-distance runners (mean +/- SD age: 25.3 +/- 5.4 y, mass: 73.7 +/- 4.3 kg, VO2max: 64.4 +/- 6.1 mL x kg(-1) x min(-1)) completed a VO2max test, an LT test, 10 x 4 min exercise tests (above and below LT), and high-intensity exhaustive tests. The VO2-speed regression was developed using 10 submaximal points and a forced y-intercept value. The average precision (measured as the width of the 95% confidence interval) for the estimated total energy demand using this regression was 7.8 mL O2 Eq x kg(-1) x min(-1). There was a two-fold decrease in precision of the estimated total energy demand with the inclusion of measurement errors from the metabolic system. The mean AOD value was 43.3 mL O2 Eq x kg(-1) (lower and upper 95% CI 32.1 and 54.5 mL O2 Eq x kg(-1), respectively). Converting the 95% CI for estimated total energy demand to AOD or including maximum possible measurement errors amplified the error associated with the estimated total energy demand. No significant difference in AOD variables was found using 10, 4, or 2 regression points with a forced y-intercept. For practical purposes we recommend the use of 4 submaximal values with a y-intercept. Using 95% CIs and calculating error highlighted possible error in estimating AOD. Without accurate data collection, increased variability could decrease the accuracy of the AOD, as shown by the 95% CI of the AOD.

  8. Ultrasound monitoring of inter-knee distances during gait.

    PubMed

    Lai, Daniel T H; Wrigley, Tim V; Palaniswami, M

    2009-01-01

    Knee osteoarthritis is an extremely common, debilitating disease associated with pain and loss of function. There is considerable interest in monitoring lower limb alignment due to its close association with joint overload leading to disease progression. The effects of gait modifications that can lower joint loading are of particular interest. Here we describe an ultrasound-based system for monitoring an important aspect of dynamic lower limb alignment, the inter-knee distance during walking. Monitoring this gait parameter should facilitate studies in reducing knee loading, a primary risk factor of knee osteoarthritis progression. The portable device is composed of an ultrasound sensor connected to an Intel iMote2 equipped with Bluetooth wireless capability. Static tests and calibration results show that the sensor possesses an effective beam envelope of 120 degrees, with maximum distance errors of 10% at the envelope edges. Dynamic walking trials reveal close correlation of inter-knee distance trends between that measured by an optical system (Optotrak Certus NDI) and the sensor device. The maximum average root mean square error was found to be 1.46 cm. Future work will focus on improving the accuracy of the device.

  9. Development of a micro hole measuring system based on the capacitance principle

    NASA Astrophysics Data System (ADS)

    Chang, Ting-Yen; Liao, Yunn-Shiuan; Liu, Wei-Cheng

    2009-10-01

    A new 3D micro hole measuring system is presented in this paper. The system is mainly composed of a probe, a rotary stage, and a program which converts data points to a 3D profile. The principle of capacitance is adopted, and a device to sense the variation of capacitance when the probe touches the workpiece is designed and implemented. With the aid of the rotary stage, positions around the contour are measured. The measured coordinates are calculated by an algorithm proposed in this paper. The developed system is capable of measuring the interior profile of a high-aspect-ratio micro hole and calculating its roundness. A grade A gauge block is used to verify the developed system. It is found that the repeatability error of the system is within ±0.78 µm. The linearity error can approach 1 µm and the maximum measuring depth is 15 mm. Finally, a micro hole of 1.0 mm in diameter and 10 mm in depth is successfully measured and the 3D profile is constructed accordingly. The roundness of each layer, spaced 1 mm apart, and the inclination of the axis of the micro hole are calculated as well.

  10. Maximum likelihood estimation in calibrating a stereo camera setup.

    PubMed

    Muijtjens, A M; Roos, J M; Arts, T; Hasman, A

    1999-02-01

    Motion and deformation of the cardiac wall may be measured by following the positions of implanted radiopaque markers in three dimensions, using two x-ray cameras simultaneously. Usually, calibration of the position measurement system is obtained by registration of the images of a calibration object containing 10-20 radiopaque markers at known positions. Unfortunately, an accidental change of the position of a camera after calibration requires complete recalibration. Alternatively, redundant information in the measured image positions of stereo pairs can be used for calibration. Thus, a separate calibration procedure can be avoided. In the current study a model is developed that describes the geometry of the camera setup by five dimensionless parameters. Maximum likelihood (ML) estimates of these parameters were obtained in an error analysis. It is shown that the ML estimates can be found by application of a nonlinear least squares procedure. Compared to the standard unweighted least squares procedure, the ML method resulted in more accurate estimates without noticeable bias. The accuracy of the ML method was investigated in relation to the object aperture. The reconstruction problem appeared well conditioned as long as the object aperture is larger than 0.1 rad. The angle between the two viewing directions appeared to be the parameter most likely to cause major inaccuracies in the reconstruction of the 3-D positions of the markers. Hence, attempts to improve the robustness of the method should primarily focus on reduction of the error in this parameter.

  11. Hierarchical, Three-Dimensional Measurement System for Crime Scene Scanning.

    PubMed

    Marcin, Adamczyk; Maciej, Sieniło; Robert, Sitnik; Adam, Woźniak

    2017-07-01

    We present a new generation of three-dimensional (3D) measuring systems, developed for the process of crime scene documentation. This measuring system facilitates the preparation of more insightful, complete, and objective documentation for crime scenes. Our system reflects the actual requirements for hierarchical documentation, and it consists of three independent 3D scanners: a laser scanner for overall measurements, a situational structured light scanner for more minute measurements, and a detailed structured light scanner for the most detailed parts of the scene. Each scanner has its own spatial resolution, of 2.0, 0.3, and 0.05 mm, respectively. The results of interviews we have conducted with technicians indicate that our developed 3D measuring system has significant potential to become a useful tool for forensic technicians. To ensure the maximum compatibility of our measuring system with the standards that regulate the documentation process, we have also performed a metrological validation and designated the maximum permissible length measurement error (EMPE) for each structured light scanner. In this study, we present additional results regarding documentation processes conducted during crime scene inspections and a training session. © 2017 American Academy of Forensic Sciences.

  12. Dynamic characterization of Galfenol

    NASA Astrophysics Data System (ADS)

    Scheidler, Justin J.; Asnani, Vivake M.; Deng, Zhangxian; Dapino, Marcelo J.

    2015-04-01

    A novel and precise characterization of the constitutive behavior of solid and laminated research-grade, polycrystalline Galfenol (Fe81.6Ga18.4) under quasi-static (1 Hz) and dynamic (4 to 1000 Hz) stress loadings was recently conducted by the authors. This paper summarizes the characterization by focusing on the experimental design and the dynamic sensing response of the solid Galfenol specimen. Mechanical loads are applied using a high frequency load frame. The dynamic stress amplitude for minor and major loops is 2.88 and 31.4 MPa, respectively. Dynamic minor and major loops are measured for the bias condition resulting in maximum, quasi-static sensitivity. Three key sources of error in the dynamic measurements are accounted for: (1) electromagnetic noise in strain signals due to Galfenol's magnetic response, (2) error in load signals due to the inertial force of fixturing, and (3) time delays imposed by conditioning electronics. For dynamic characterization, strain error is kept below 1.2 % of full scale by wiring two collocated gauges in series (noise cancellation) and through lead wire weaving. Inertial force error is kept below 0.41 % by measuring the dynamic force in the specimen using a nearly collocated piezoelectric load washer. The phase response of all conditioning electronics is explicitly measured and corrected for. In general, as frequency increases, the sensing response becomes more linear due to an increase in eddy currents. The location of positive and negative saturation is the same at all frequencies. As frequency increases above about 100 Hz, the elbow in the strain versus stress response disappears as the active (soft) regime stiffens toward the passive (hard) regime.

  13. Dynamic Characterization of Galfenol

    NASA Technical Reports Server (NTRS)

    Scheidler, Justin; Asnani, Vivake M.; Deng, Zhangxian; Dapino, Marcelo J.

    2015-01-01

    A novel and precise characterization of the constitutive behavior of solid and laminated research-grade, polycrystalline Galfenol (Fe81.6Ga18.4) under quasi-static (1 Hz) and dynamic (4 to 1000 Hz) stress loadings was recently conducted by the authors. This paper summarizes the characterization by focusing on the experimental design and the dynamic sensing response of the solid Galfenol specimen. Mechanical loads are applied using a high frequency load frame. The dynamic stress amplitude for minor and major loops is 2.88 and 31.4 MPa, respectively. Dynamic minor and major loops are measured for the bias condition resulting in maximum, quasi-static sensitivity. Three key sources of error in the dynamic measurements are accounted for: (1) electromagnetic noise in strain signals due to Galfenol's magnetic response, (2) error in load signals due to the inertial force of fixturing, and (3) time delays imposed by conditioning electronics. For dynamic characterization, strain error is kept below 1.2 % of full scale by wiring two collocated gauges in series (noise cancellation) and through lead wire weaving. Inertial force error is kept below 0.41 % by measuring the dynamic force in the specimen using a nearly collocated piezoelectric load washer. The phase response of all conditioning electronics is explicitly measured and corrected for. In general, as frequency increases, the sensing response becomes more linear due to an increase in eddy currents. The location of positive and negative saturation is the same at all frequencies. As frequency increases above about 100 Hz, the elbow in the strain versus stress response disappears as the active (soft) regime stiffens toward the passive (hard) regime.

  14. Determining the Uncertainty of X-Ray Absorption Measurements

    PubMed Central

    Wojcik, Gary S.

    2004-01-01

    X-ray absorption (or more properly, x-ray attenuation) techniques have been applied to study the moisture movement in and moisture content of materials like cement paste, mortar, and wood. An increase in the number of x-ray counts with time at a location in a specimen may indicate a decrease in moisture content. The uncertainty of measurements from an x-ray absorption system, which must be known to properly interpret the data, is often assumed to be the square root of the number of counts, as in a Poisson process. No detailed studies have heretofore been conducted to determine the uncertainty of x-ray absorption measurements or the effect of averaging data on the uncertainty. In this study, the Poisson estimate was found to adequately approximate normalized root mean square errors (a measure of uncertainty) of counts for point measurements and profile measurements of water specimens. The Poisson estimate, however, was not reliable in approximating the magnitude of the uncertainty when averaging data from paste and mortar specimens. Changes in uncertainty from differing averaging procedures were well-approximated by a Poisson process. The normalized root mean square errors decreased when the x-ray source intensity, integration time, collimator size, and number of scanning repetitions increased. Uncertainties in mean paste and mortar count profiles were kept below 2 % by averaging vertical profiles at horizontal spacings of 1 mm or larger with counts per point above 4000. Maximum normalized root mean square errors did not exceed 10 % in any of the tests conducted. PMID:27366627
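
    The Poisson scaling invoked here is easy to illustrate: for a mean of N counts, the normalized root mean square error is 1/√N. A minimal simulation (illustrative, not from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    for n in (100, 1_000, 10_000):              # mean counts per measurement
        counts = rng.poisson(n, size=100_000)   # simulated repeated measurements
        nrmse = np.sqrt(np.mean((counts - n) ** 2)) / n
        print(f"N={n:>6}: simulated {nrmse:.4f}  vs  1/sqrt(N) = {n ** -0.5:.4f}")
    ```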

  15. The effect of respiratory induced density variations on non-TOF PET quantitation in the lung.

    PubMed

    Holman, Beverley F; Cuplov, Vesna; Hutton, Brian F; Groves, Ashley M; Thielemans, Kris

    2016-04-21

    Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant (18)F-FDG and (18)F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.

  16. The effect of respiratory induced density variations on non-TOF PET quantitation in the lung

    NASA Astrophysics Data System (ADS)

    Holman, Beverley F.; Cuplov, Vesna; Hutton, Brian F.; Groves, Ashley M.; Thielemans, Kris

    2016-04-01

    Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant 18F-FDG and 18F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.

  17. Quantitative, Comparable Coherent Anti-Stokes Raman Scattering (CARS) Spectroscopy: Correcting Errors in Phase Retrieval

    PubMed Central

    Camp, Charles H.; Lee, Young Jong; Cicerone, Marcus T.

    2017-01-01

    Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download. PMID:28819335
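
    One common numerical route to Kramers-Kronig phase retrieval, consistent with the description above, is a Hilbert transform of the log magnitude of the ratio between the CARS spectrum and an NRB reference. The sketch below is a generic illustration under the usual assumptions (minimum-phase response, positive spectra, sign convention dependent on the definition used), not the authors' exact pipeline; note how an erroneous NRB estimate enters the log magnitude additively, which is why detrending and scaling the retrieved phase can correct it afterwards.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def kk_phase(i_cars: np.ndarray, i_nrb: np.ndarray) -> np.ndarray:
        """Phase retrieved via the Kramers-Kronig relation, implemented as the
        Hilbert transform of the log magnitude (imag part of the analytic signal)."""
        log_mag = 0.5 * np.log(i_cars / i_nrb)   # ln|chi| up to an additive NRB error
        return np.imag(hilbert(log_mag))

    def raman_like(i_cars: np.ndarray, i_nrb: np.ndarray) -> np.ndarray:
        """Imaginary part of the retrieved response, resembling the spontaneous
        Raman spectrum up to amplitude errors from the NRB estimate."""
        phase = kk_phase(i_cars, i_nrb)
        return np.sqrt(i_cars / i_nrb) * np.sin(phase)
    ```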

  18. Comparison of HRV parameters derived from photoplethysmography and electrocardiography signals.

    PubMed

    Jeyhani, Vala; Mahdiani, Shadi; Peltokangas, Mikko; Vehkaoja, Antti

    2015-01-01

    Heart rate variability (HRV) has become a useful tool in the analysis of the cardiovascular system in both research and clinical fields. HRV has also been used in other applications such as stress level estimation in wearable devices. HRV is normally obtained from the ECG as the time interval between two successive R waves. Recently, PPG has been proposed as an alternative to ECG in HRV analysis to overcome some difficulties in the measurement of ECG. In addition, PPG-based HRV is used in some commercial devices such as modern optical wrist-worn heart rate monitors. However, some studies have shown that PPG is not a surrogate for heart rate variability analysis. In this work, HRV analysis was applied to beat-to-beat intervals obtained from ECG and PPG in 19 healthy male subjects. Several important HRV parameters were calculated from PPG-HRV and ECG-HRV. The maximum of the PPG and the maximum of its second derivative were considered as two methods for obtaining beat-to-beat intervals from the PPG, and the results were compared with those obtained from ECG-HRV. Our results show that the smallest error occurs in SDNN and SD2, with relative errors of 2.46% and 2%, respectively. The most affected parameter is pNN50, with a relative error of 29.89%. In addition, in our trial, using the maximum of the PPG gave better results than its second derivative.
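
    For reference, the time-domain parameters named above are simple functions of the beat-to-beat interval series; a minimal sketch (SD2, a Poincaré-plot measure, is omitted):

    ```python
    import numpy as np

    def hrv_metrics(rr_ms: np.ndarray) -> dict:
        """Common time-domain HRV parameters from beat-to-beat intervals (ms)."""
        diff = np.diff(rr_ms)
        return {
            "SDNN": rr_ms.std(ddof=1),                    # overall variability
            "RMSSD": np.sqrt(np.mean(diff ** 2)),         # short-term variability
            "pNN50": 100.0 * np.mean(np.abs(diff) > 50),  # % successive diffs > 50 ms
        }

    def relative_error(ppg_value: float, ecg_value: float) -> float:
        """Relative error (%) of a PPG-derived parameter against the ECG value."""
        return 100.0 * abs(ppg_value - ecg_value) / ecg_value
    ```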

  19. CORRECTION OF THE INERTIAL EFFECT RESULTING FROM A PLATE MOVING UNDER LOW FRICTION CONDITIONS

    PubMed Central

    Yang, Feng; Pai, Yi-Chung

    2007-01-01

    The purpose of the present study was to develop a set of equations that can be employed to remove the inertial effect introduced by the movable platform upon which a person stands during a slip induced in gait; this allows the real ground reaction force (GRF) and its center of pressure (COP) to be determined. Analyses were also performed to determine how sensitive the COP offsets were to the changes of the parameters in the equation that affected the correction of the inertial effect. In addition, the results were verified empirically using a low friction movable platform together with a stationary object, a pendulum, and human subjects during a slip induced during gait. Our analyses revealed that the amount of correction required for the inertial effect due to the movable component is affected by its mass and its center of mass (COM) position, acceleration, the friction coefficient, and the landing position of the foot relative to the COM. The maximum error in the horizontal component of the GRF was close to 0.09 body weight during the recovery from a slip in walking. When uncorrected, the maximum error in the COP measurement could reach as much as 4 cm. Finally, these errors were magnified in the joint moment computation and propagated proximally, ranging from 0.2 to 1.0 Nm/body mass from the ankle to the hip. PMID:17306274

  20. Noise-Enhanced Eversion Force Sense in Ankles With or Without Functional Instability.

    PubMed

    Ross, Scott E; Linens, Shelley W; Wright, Cynthia J; Arnold, Brent L

    2015-08-01

    Force sense impairments are associated with functional ankle instability. Stochastic resonance stimulation (SRS) may have implications for correcting these force sense deficits. To determine if SRS improved force sense. Case-control study. Research laboratory. Twelve people with functional ankle instability (age = 23 ± 3 years, height = 174 ± 8 cm, mass = 69 ± 10 kg) and 12 people with stable ankles (age = 22 ± 2 years, height = 170 ± 7 cm, mass = 64 ± 10 kg). The eversion force sense protocol required participants to reproduce a targeted muscle tension (10% of maximum voluntary isometric contraction). This protocol was assessed under SRSon and SRSoff (control) conditions. During SRSon, random subsensory mechanical noise was applied to the lower leg at a customized optimal intensity for each participant. Constant error, absolute error, and variable error measures quantified accuracy, overall performance, and consistency of force reproduction, respectively. With SRS, we observed main effects for force sense absolute error (SRSoff = 1.01 ± 0.67 N, SRSon = 0.69 ± 0.42 N) and variable error (SRSoff = 1.11 ± 0.64 N, SRSon = 0.78 ± 0.56 N) (P < .05). No other main effects or treatment-by-group interactions were found (P > .05). Although SRS reduced the overall magnitude (absolute error) and variability (variable error) of force sense errors, it had no effect on the directionality (constant error). Clinically, SRS may enhance muscle tension ability, which could have treatment implications for ankle stability.

  1. Evaluation of Eight Methods for Aligning Orientation of Two Coordinate Systems.

    PubMed

    Mecheri, Hakim; Robert-Lachaine, Xavier; Larue, Christian; Plamondon, André

    2016-08-01

    The aim of this study was to evaluate eight methods for aligning the orientation of two different local coordinate systems. Alignment is very important when combining two different systems of motion analysis. Two of the methods were developed specifically for biomechanical studies, and because there have been at least three decades of algorithm development in robotics, it was decided to include six methods from this field. To compare these methods, an Xsens sensor and two Optotrak clusters were attached to a Plexiglas plate. The first optical marker cluster was fixed on the sensor and 20 trials were recorded. The error of alignment was calculated for each trial, and the mean, the standard deviation, and the maximum values of this error over all trials were reported. One-way repeated measures analysis of variance revealed that the alignment error differed significantly across the eight methods. Post-hoc tests showed that the alignment error from the methods based on angular velocities was significantly lower than for the other methods. The method using angular velocities performed the best, with an average error of 0.17 ± 0.08 deg. We therefore recommend this method, which is easy to perform and provides accurate alignment.
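
    One way to align two coordinate systems from angular velocities, sketched below, is to solve Wahba's problem over paired samples with the SVD (the Kabsch solution); the angular-velocity methods evaluated in the paper may differ in detail. Angular velocity is a convenient signal for this because, for a rigid body, it is the same regardless of where each sensor is mounted.

    ```python
    import numpy as np

    def align_rotation(w_a: np.ndarray, w_b: np.ndarray) -> np.ndarray:
        """Rotation R (frame b -> frame a) minimizing sum ||w_a - R w_b||^2.

        w_a, w_b: (N, 3) angular-velocity samples of the same motion expressed
        in the two local coordinate systems. Returns a proper 3x3 rotation.
        """
        H = w_b.T @ w_a                  # 3x3 cross-correlation of the samples
        U, _, Vt = np.linalg.svd(H)
        # Sign correction guarantees det(R) = +1 (rotation, not reflection).
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        return Vt.T @ D @ U.T
    ```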

  2. An occultation satellite system for determining pressure levels in the atmosphere

    NASA Technical Reports Server (NTRS)

    Ungar, S. G.; Lusignan, B. B.

    1972-01-01

    An operational two-satellite microwave occultation system will establish a pressure reference level to be used in fixing the temperature-pressure profile generated by the SIRS infrared sensor as a function of altitude. In the final error analysis, simulated data for the SIRS sensor were used to test the performance of the occultation system. The results of this analysis indicate that the occultation system is capable of measuring the altitude of the 300-mb level to within 24 m (rms), given a maximum error of 2 K in the input temperature profile. The effects of water vapor can be corrected by suitable climatological profiles, and improvements in the accuracy of the SIRS instrument should yield additional improvements in the performance of the occultation system.

  3. A channel dynamics model for real-time flood forecasting

    USGS Publications Warehouse

    Hoos, Anne B.; Koussis, Antonis D.; Beale, Guy O.

    1989-01-01

    A new channel dynamics scheme (alternative system predictor in real time (ASPIRE)), designed specifically for real-time river flow forecasting, is introduced to reduce uncertainty in the forecast. ASPIRE is a storage routing model that limits the influence of catchment model forecast errors to the downstream station closest to the catchment. Comparisons with the Muskingum routing scheme in field tests suggest that the ASPIRE scheme can provide more accurate forecasts, probably because discharge observations are used to a maximum advantage and routing reaches (and model errors in each reach) are uncoupled. Using ASPIRE in conjunction with the Kalman filter did not improve forecast accuracy relative to a deterministic updating procedure. Theoretical analysis suggests that this is due to a large process noise to measurement noise ratio.
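
    For context, the Muskingum scheme used as the comparison baseline is the classical storage-routing update below (textbook form; K, X, and Δt are user-chosen, subject to the usual constraint that all coefficients stay non-negative):

    ```python
    def muskingum_step(i1: float, i2: float, o1: float,
                       k_h: float, x: float, dt_h: float) -> float:
        """One Muskingum routing step: outflow O2 from inflows I1, I2 and the
        previous outflow O1. k_h is the storage constant (h), x the weighting
        factor (0 to 0.5), dt_h the time step (h)."""
        denom = 2.0 * k_h * (1.0 - x) + dt_h
        c0 = (dt_h - 2.0 * k_h * x) / denom
        c1 = (dt_h + 2.0 * k_h * x) / denom
        c2 = (2.0 * k_h * (1.0 - x) - dt_h) / denom
        return c0 * i2 + c1 * i1 + c2 * o1   # c0 + c1 + c2 == 1
    ```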

  4. Evaluation of techniques for slice sensitivity profile measurement and analysis

    PubMed Central

    Greene, Travis C.

    2014-01-01

    The purpose of this study was to compare the resulting full width at half maximum (FWHM) of slice sensitivity profiles (SSP) generated by several commercially available point response phantoms, and to determine an appropriate imaging technique and analysis method. Four CT phantoms containing point response objects designed to produce a delta impulse signal were used in this study: a Fluke CT-SSP phantom, a Gammex 464, a CatPhan 600, and a Kagaku Micro Disc phantom. Each phantom was imaged using 120 kVp, 325 mAs, head scan field of view, and a 32×0.625 mm helical scan with a 20 mm beam width and a pitch of 0.969. The acquired images were then reconstructed into all available slice thicknesses (0.625-5.0 mm). A computer program was developed to analyze the images of each dataset, generating an SSP from which the FWHM was determined. Two methods for generating SSPs were evaluated and compared, using the mean vs. the maximum value in the ROI, along with two methods for evaluating the FWHM of the SSP: linear interpolation and Gaussian curve fitting. FWHMs were compared with the manufacturer's specifications using percent error and a z-test with a significance value of p < 0.05. The FWHMs from each phantom were not significantly different (p ≥ 0.089), with an average error of 3.5%. The FWHMs from SSPs generated from the mean value were statistically different (p ≤ 3.99×10⁻¹³). The FWHMs from the two FWHM methods were not statistically different (p ≤ 0.499). Evaluation of the SSP is dependent on the ROI value used. The maximum value from the ROI should be used to generate the SSP whenever possible. SSP measurement is independent of the phantoms used in this study. PMID:24710429
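
    A sketch of the linear-interpolation FWHM evaluation applied to a sampled SSP is given below; it assumes the profile peak lies strictly inside the sampled z range and is purely illustrative:

    ```python
    import numpy as np

    def fwhm(z_mm: np.ndarray, ssp: np.ndarray) -> float:
        """Full width at half maximum of a sampled profile, using linear
        interpolation to locate the two half-maximum crossings.

        Assumes z_mm is increasing and the peak is interior to the range."""
        half = ssp.max() / 2.0
        above = ssp >= half
        i_first = np.argmax(above)                    # first sample at/above half max
        i_last = len(ssp) - 1 - np.argmax(above[::-1])

        def crossing(i_lo: int, i_hi: int) -> float:
            # Interpolate between the two samples straddling the half maximum.
            f = (half - ssp[i_lo]) / (ssp[i_hi] - ssp[i_lo])
            return z_mm[i_lo] + f * (z_mm[i_hi] - z_mm[i_lo])

        left = crossing(i_first - 1, i_first)
        right = crossing(i_last + 1, i_last)
        return right - left
    ```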

  5. High-resolution moisture profiles from full-waveform probabilistic inversion of TDR signals

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Huisman, Johan Alexander; Jacques, Diederik

    2014-11-01

    This study presents a novel Bayesian inversion scheme for high-dimensional, underdetermined TDR waveform inversion. The methodology quantifies uncertainty in the moisture content distribution, using a Gaussian Markov random field (GMRF) prior as a regularization operator. A spatial resolution of 1 cm along a 70-cm long TDR probe is considered for the inferred moisture content. Numerical testing shows that the proposed inversion approach works very well in the case of a perfect model and Gaussian measurement errors. Real-world application results are generally satisfying. For a series of TDR measurements made during imbibition and evaporation from a laboratory soil column, the average root-mean-square error (RMSE) between the maximum a posteriori (MAP) moisture distribution and reference TDR measurements is 0.04 cm³ cm⁻³. This RMSE value reduces to less than 0.02 cm³ cm⁻³ for a field application in a podzol soil. The observed model-data discrepancies are primarily due to model inadequacy, such as our simplified modeling of the bulk soil electrical conductivity profile. Among the important issues that should be addressed in future work are the explicit inference of the soil electrical conductivity profile along with the other sampled variables, the modeling of the temperature dependence of the coaxial cable properties, and the definition of an appropriate statistical model of the residual errors.

  6. Integrity monitoring of vehicle positioning in urban environment using RTK-GNSS, IMU and speedometer

    NASA Astrophysics Data System (ADS)

    El-Mowafy, Ahmed; Kubo, Nobuaki

    2017-05-01

    Continuous and trustworthy positioning is a critical capability for advanced driver assistance systems (ADAS). To achieve continuous positioning, methods such as global navigation satellite system (GNSS) real-time kinematic (RTK) positioning, Doppler-based positioning, and positioning using a low-cost inertial measurement unit (IMU) with car speedometer data are combined in this study. To ensure reliable positioning, the system should have integrity monitoring above a certain level, such as 99%. Achieving this level when combining different types of measurements that have different characteristics and different types of errors is a challenge. In this study, a novel integrity monitoring approach is presented for the proposed integrated system. A threat model of the measurements of the system components is discussed, which includes both the nominal performance and possible fault modes. A new protection level is presented to bound the maximum directional position error. The proposed approach was evaluated through a kinematic test in an urban area in Japan with a focus on horizontal positioning. Test results show that by integrating RTK and Doppler with the IMU/speedometer, 100% positioning availability was achieved. The integrity monitoring availability was assessed and found to meet the target value, where the position errors were bounded by the protection level, which was also less than the alert level, indicating the effectiveness of the proposed approach.
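
    As a simplified illustration of the protection-level idea (the paper's formulation also covers fault modes and is more involved): under a zero-mean Gaussian nominal error model, the protection level is the bound exceeded with probability equal to the allocated integrity risk, and integrity availability is then the fraction of epochs where this bound stays below the alert limit.

    ```python
    from scipy.stats import norm

    def protection_level(sigma_m: float, integrity_risk: float) -> float:
        """Protection level for a zero-mean Gaussian position error: the bound
        exceeded with probability integrity_risk (fault-free case only)."""
        k = norm.ppf(1.0 - integrity_risk / 2.0)   # two-sided Gaussian quantile
        return k * sigma_m

    # Example: sigma = 0.5 m at an allocated risk of 1e-5 -> PL ~ 2.2 m
    print(protection_level(0.5, 1e-5))
    ```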

  7. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  8. Reliability and Measurement Error of Tensiomyography to Assess Mechanical Muscle Function: A Systematic Review.

    PubMed

    Martín-Rodríguez, Saúl; Loturco, Irineu; Hunter, Angus M; Rodríguez-Ruiz, David; Munguia-Izquierdo, Diego

    2017-12-01

    Martín-Rodríguez, S, Loturco, I, Hunter, AM, Rodríguez-Ruiz, D, and Munguia-Izquierdo, D. Reliability and measurement error of tensiomyography to assess mechanical muscle function: A systematic review. J Strength Cond Res 31(12): 3524-3536, 2017-Interest in studying mechanical skeletal muscle function through tensiomyography (TMG) has increased in recent years. This systematic review aimed to (a) report the reliability and measurement error of all TMG parameters (i.e., maximum radial displacement of the muscle belly [Dm], contraction time [Tc], delay time [Td], half-relaxation time [½ Tr], and sustained contraction time [Ts]) and (b) provide critical reflection on how to perform accurate and appropriate measurements for informing clinicians, exercise professionals, and researchers. A comprehensive literature search was performed of the Pubmed, Scopus, Science Direct, and Cochrane databases up to July 2017. Eight studies were included in this systematic review. Meta-analysis could not be performed because of the low quality of the evidence of some studies evaluated. Overall, the review of the 9 studies involving 158 participants revealed high relative reliability (intraclass correlation coefficient [ICC]) for Dm (0.91-0.99); moderate-to-high ICC for Ts (0.80-0.96), Tc (0.70-0.98), and ½ Tr (0.77-0.93); and low-to-high ICC for Td (0.60-0.98), independently of the evaluated muscles. In addition, absolute reliability (coefficient of variation [CV]) was low for all TMG parameters except for ½ Tr (CV > 20%), whereas measurement error indexes were high for this parameter. In conclusion, this study indicates that 3 of the TMG parameters (Dm, Td, and Tc) are highly reliable, whereas ½ Tr demonstrates insufficient reliability and thus should not be used in future studies.

  9. Aerosol lidar observations of atmospheric mixing in Los Angeles: Climatology and implications for greenhouse gas observations

    NASA Astrophysics Data System (ADS)

    Ware, John; Kort, Eric A.; DeCola, Phil; Duren, Riley

    2016-08-01

    Atmospheric observations of greenhouse gases provide essential information on sources and sinks of these key atmospheric constituents. To quantify fluxes from atmospheric observations, representation of transport—especially vertical mixing—is a necessity and often a source of error. We report on remotely sensed profiles of vertical aerosol distribution taken over a 2 year period in Pasadena, California. Using an automated analysis system, we estimate daytime mixing layer depth, achieving high confidence in the afternoon maximum on 51% of days with profiles from a Sigma Space Mini Micropulse LiDAR (MiniMPL) and on 36% of days with a Vaisala CL51 ceilometer. We note that considering ceilometer data on a logarithmic scale, a standard method, introduces an offset in mixing height retrievals. The mean afternoon maximum mixing height is 770 m Above Ground Level in summer and 670 m in winter, with significant day-to-day variance (within-season σ = 220 m ≈ 30%). Taking advantage of the MiniMPL's portability, we demonstrate the feasibility of measuring the detailed horizontal structure of the mixing layer by automobile. We compare our observations to planetary boundary layer (PBL) heights from sonde launches, North American regional reanalysis (NARR), and a custom Weather Research and Forecasting (WRF) model developed for greenhouse gas (GHG) monitoring in Los Angeles. NARR and WRF PBL heights at Pasadena are both systematically higher than measured, NARR by 2.5 times; these biases will cause proportional errors in GHG flux estimates using modeled transport. We discuss how sustained lidar observations can be used to reduce flux inversion error by selecting suitable analysis periods, calibrating models, or characterizing bias for correction in post processing.

  10. Aerosol lidar observations of atmospheric mixing in Los Angeles: Climatology and implications for greenhouse gas observations.

    PubMed

    Ware, John; Kort, Eric A; DeCola, Phil; Duren, Riley

    2016-08-27

    Atmospheric observations of greenhouse gases provide essential information on sources and sinks of these key atmospheric constituents. To quantify fluxes from atmospheric observations, representation of transport-especially vertical mixing-is a necessity and often a source of error. We report on remotely sensed profiles of vertical aerosol distribution taken over a 2 year period in Pasadena, California. Using an automated analysis system, we estimate daytime mixing layer depth, achieving high confidence in the afternoon maximum on 51% of days with profiles from a Sigma Space Mini Micropulse LiDAR (MiniMPL) and on 36% of days with a Vaisala CL51 ceilometer. We note that considering ceilometer data on a logarithmic scale, a standard method, introduces an offset in mixing height retrievals. The mean afternoon maximum mixing height is 770 m Above Ground Level in summer and 670 m in winter, with significant day-to-day variance (within-season σ = 220 m ≈ 30%). Taking advantage of the MiniMPL's portability, we demonstrate the feasibility of measuring the detailed horizontal structure of the mixing layer by automobile. We compare our observations to planetary boundary layer (PBL) heights from sonde launches, North American regional reanalysis (NARR), and a custom Weather Research and Forecasting (WRF) model developed for greenhouse gas (GHG) monitoring in Los Angeles. NARR and WRF PBL heights at Pasadena are both systematically higher than measured, NARR by 2.5 times; these biases will cause proportional errors in GHG flux estimates using modeled transport. We discuss how sustained lidar observations can be used to reduce flux inversion error by selecting suitable analysis periods, calibrating models, or characterizing bias for correction in post processing.

  11. Quantifying the Seasonal and Interannual Variability of North American Isoprene Emissions Using Satellite Observations of the Formaldehyde Column

    NASA Technical Reports Server (NTRS)

    Palmer, Paul I.; Abbot, Dorian S.; Fu, Tzung-May; Jacob, Daniel J.; Chance, Kelly; Kurosu, Thomas P.; Guenther, Alex; Wiedinmyer, Christine; Stanton, Jenny C.; Pilling, Michael J.

    2006-01-01

    Quantifying isoprene emissions using satellite observations of the formaldehyde (HCHO) columns is subject to errors involving the column retrieval and the assumed relationship between HCHO columns and isoprene emissions, taken here from the GEOS-CHEM chemical transport model. Here we use a 6-year (1996-2001) HCHO column data set from the Global Ozone Monitoring Experiment (GOME) satellite instrument to (1) quantify these errors, (2) evaluate GOME-derived isoprene emissions with in situ flux measurements and a process-based emission inventory (Model of Emissions of Gases and Aerosols from Nature, MEGAN), and (3) investigate the factors driving the seasonal and interannual variability of North American isoprene emissions. The error in the GOME HCHO column retrieval is estimated to be 40%. We use the Master Chemical Mechanism (MCM) to quantify the time-dependent HCHO production from isoprene, alpha- and beta-pinenes, and methylbutenol and show that only emissions of isoprene are detectable by GOME. The time-dependent HCHO yield from isoprene oxidation calculated by MCM is 20-30% larger than in GEOS-CHEM. GOME-derived isoprene fluxes track the observed seasonal variation of in situ measurements at a Michigan forest site with a -30% bias. The seasonal variation of North American isoprene emissions during 2001 inferred from GOME is similar to MEGAN, with GOME emissions typically 25% higher (lower) at the beginning (end) of the growing season. GOME and MEGAN both show a maximum over the southeastern United States, but they differ in the precise location. The observed interannual variability of this maximum is 20-30%, depending on month. The MEGAN isoprene emission dependence on surface air temperature explains 75% of the month-to-month variability in GOME-derived isoprene emissions over the southeastern United States during May-September 1996-2001.

  12. Validation of Globsnow-2 Snow Water Equivalent Over Eastern Canada

    NASA Technical Reports Server (NTRS)

    Larue, Fanny; Royer, Alain; De Seve, Danielle; Langlois, Alexandre; Roy, Alexandre R.; Brucker, Ludovic

    2017-01-01

    In Québec, Eastern Canada, snowmelt runoff contributes more than 30% of the annual energy reserve for hydroelectricity production, and uncertainties in annual maximum snow water equivalent (SWE) over the region are one of the main constraints for improved hydrological forecasting. Current satellite-based methods for mapping SWE over Québec's main hydropower basins do not meet Hydro-Québec operational requirements for SWE accuracies with less than 15% error. This paper assesses the accuracy of the GlobSnow-2 (GS-2) SWE product, which combines microwave satellite data and in situ measurements, for hydrological applications in Québec. GS-2 SWE values for a 30-year period (1980 to 2009) were compared with space- and time-matched values from a comprehensive dataset of in situ SWE measurements (a total of 38,990 observations in Eastern Canada). The root mean square error (RMSE) of the GS-2 SWE product is 94.1 +/- 20.3 mm, corresponding to an overall relative percentage error (RPE) of 35.9%. The main sources of uncertainty are wet and deep snow conditions (when SWE is higher than 150 mm), and forest cover type. However, compared to a typical stand-alone brightness temperature channel difference algorithm, the assimilation of surface information in the GS-2 algorithm clearly improves SWE accuracy by reducing the RPE by about 30%. Comparison of trends in annual mean and maximum SWE between surface observations and GS-2 over 1980-2009 showed agreement for increasing trends over southern Québec, but less agreement on the sign and magnitude of trends over northern Québec. Extended to a continental scale, the GS-2 SWE trends highlight a strong regional variability.

  13. Accuracy and optimal timing of activity measurements in estimating the absorbed dose of radioiodine in the treatment of Graves' disease

    NASA Astrophysics Data System (ADS)

    Merrill, S.; Horowitz, J.; Traino, A. C.; Chipkin, S. R.; Hollot, C. V.; Chait, Y.

    2011-02-01

    Calculation of the therapeutic activity of radioiodine 131I for individualized dosimetry in the treatment of Graves' disease requires an accurate estimate of the thyroid absorbed radiation dose based on a tracer activity administration of 131I. Common approaches (Marinelli-Quimby formula, MIRD algorithm) use, respectively, the effective half-life of radioiodine in the thyroid and the time-integrated activity. Many physicians perform one, two, or at most three tracer dose activity measurements at various times and calculate the required therapeutic activity by ad hoc methods. In this paper, we study the accuracy of estimates of four 'target variables': time-integrated activity coefficient, time of maximum activity, maximum activity, and effective half-life in the gland. Clinical data from 41 patients who underwent 131I therapy for Graves' disease at the University Hospital in Pisa, Italy, are used for analysis. The radioiodine kinetics are described using a nonlinear mixed-effects model. The distributions of the target variables in the patient population are characterized. Using minimum root mean squared error as the criterion, optimal 1-, 2-, and 3-point sampling schedules are determined for estimation of the target variables, and probabilistic bounds are given for the errors under the optimal times. An algorithm is developed for computing the optimal 1-, 2-, and 3-point sampling schedules for the target variables. This algorithm is implemented in a freely available software tool. Taking into consideration 131I effective half-life in the thyroid and measurement noise, the optimal 1-point time for time-integrated activity coefficient is a measurement 1 week following the tracer dose. Additional measurements give only a slight improvement in accuracy.

  14. Dynamic-MLC leaf control utilizing on-flight intensity calculations: a robust method for real-time IMRT delivery over moving rigid targets.

    PubMed

    McMahon, Ryan; Papiez, Lech; Rangaraj, Dharanipathy

    2007-08-01

    An algorithm is presented that allows for the control of multileaf collimation (MLC) leaves based entirely on real-time calculations of the intensity delivered over the target. The algorithm is capable of efficiently correcting generalized delivery errors without requiring the interruption of delivery (self-correcting trajectories), where a generalized delivery error represents anything that causes a discrepancy between the delivered and intended intensity profiles. The intensity actually delivered over the target is continually compared to its intended value. For each pair of leaves, these comparisons are used to guide the control of the following leaf and keep this discrepancy below a user-specified value. To demonstrate the basic principles of the algorithm, results of corrected delivery are shown for a leading leaf positional error during dynamic-MLC (DMLC) IMRT delivery over a rigid moving target. It is then shown that, with slight modifications, the algorithm can be used to track moving targets in real time. The primary results of this article indicate that the algorithm is capable of accurately delivering DMLC IMRT over a rigid moving target whose motion is (1) completely unknown prior to delivery and (2) not faster than the maximum MLC leaf velocity over extended periods of time. These capabilities are demonstrated for clinically derived intensity profiles and actual tumor motion data, including situations when the target moves in some instances faster than the maximum admissible MLC leaf velocity. The results show that using the algorithm while calculating the delivered intensity every 50 ms will provide a good level of accuracy when delivering IMRT over a rigid moving target translating along the direction of MLC leaf travel. When the maximum velocities of the MLC leaves and target were 4 and 4.2 cm/s, respectively, the resulting error in the two intensity profiles used was 0.1 +/- 3.1% and -0.5 +/- 2.8% relative to the maximum of the intensity profiles. For the same target motion, the error was shown to increase rapidly as (1) the maximum MLC leaf velocity was reduced below 75% of the maximum target velocity and (2) the system response time was increased.

  15. The effect of subject measurement error on joint kinematics in the conventional gait model: Insights from the open-source pyCGM tool using high performance computing methods.

    PubMed

    Schwartz, Mathew; Dixon, Philippe C

    2018-01-01

    The conventional gait model (CGM) is a widely used biomechanical model which has been validated over many years. The CGM relies on retro-reflective markers placed along anatomical landmarks, a static calibration pose, and subject measurements as inputs for joint angle calculations. While past literature has shown the possible errors caused by improper marker placement, studies on the effects of inaccurate subject measurements are lacking. Moreover, as many laboratories rely on the commercial version of the CGM, released as the Plug-in Gait (Vicon Motion Systems Ltd, Oxford, UK), integrating improvements into the CGM code is not easily accomplished. This paper introduces a Python implementation for the CGM, referred to as pyCGM, which is an open-source, easily modifiable, cross-platform, and high performance computational implementation. The aims of pyCGM are to (1) reproduce joint kinematic outputs from the Vicon CGM and (2) be implemented in a parallel approach to allow integration on a high performance computer. The aims of this paper are to (1) demonstrate that pyCGM can systematically and efficiently examine the effect of subject measurements on joint angles and (2) be updated to include new calculation methods suggested in the literature. The results show that the calculated joint angles from pyCGM agree with Vicon CGM outputs, with a maximum lower body joint angle difference of less than 10^-5 degrees. Through the hierarchical system, the ankle joint is the most vulnerable to subject measurement error. Leg length has the greatest effect on all joints as a percentage of measurement error. When compared to the errors previously found through inter-laboratory measurements, the impact of subject measurements is minimal, and researchers should rather focus on marker placement. Finally, we showed that code modifications can be performed to include improved hip, knee, and ankle joint centre estimations suggested in the existing literature. The pyCGM code is provided in open source format and available at https://github.com/cadop/pyCGM.
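
    As a hedged illustration of the sensitivity analysis described above, the sketch below perturbs one subject measurement over a range of percentage errors and records the resulting joint angle; `compute_joint_angles` is a toy stand-in so the sweep is runnable, not the actual pyCGM API:

    ```python
    def compute_joint_angles(measurements):
        # toy stand-in for a CGM implementation such as pyCGM; a real model
        # maps marker trajectories plus subject measurements to joint angles
        return {"ankle": 90.0 - 0.1 * measurements["LeftLegLength"]}

    def sensitivity(measurements, key, errors_pct):
        base = measurements[key]
        out = {}
        for e in errors_pct:
            perturbed = dict(measurements, **{key: base * (1.0 + e / 100.0)})
            out[e] = compute_joint_angles(perturbed)["ankle"]
        return out

    meas = {"LeftLegLength": 900.0}  # mm, hypothetical subject measurement
    print(sensitivity(meas, "LeftLegLength", range(-10, 11, 5)))
    ```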

  16. The effect of subject measurement error on joint kinematics in the conventional gait model: Insights from the open-source pyCGM tool using high performance computing methods

    PubMed Central

    Dixon, Philippe C.

    2018-01-01

    The conventional gait model (CGM) is a widely used biomechanical model which has been validated over many years. The CGM relies on retro-reflective markers placed along anatomical landmarks, a static calibration pose, and subject measurements as inputs for joint angle calculations. While past literature has shown the possible errors caused by improper marker placement, studies on the effects of inaccurate subject measurements are lacking. Moreover, as many laboratories rely on the commercial version of the CGM, released as the Plug-in Gait (Vicon Motion Systems Ltd, Oxford, UK), integrating improvements into the CGM code is not easily accomplished. This paper introduces a Python implementation for the CGM, referred to as pyCGM, which is an open-source, easily modifiable, cross-platform, and high performance computational implementation. The aims of pyCGM are to (1) reproduce joint kinematic outputs from the Vicon CGM and (2) be implemented in a parallel approach to allow integration on a high performance computer. The aims of this paper are to (1) demonstrate that pyCGM can systematically and efficiently examine the effect of subject measurements on joint angles and (2) be updated to include new calculation methods suggested in the literature. The results show that the calculated joint angles from pyCGM agree with Vicon CGM outputs, with a maximum lower body joint angle difference of less than 10^-5 degrees. Through the hierarchical system, the ankle joint is the most vulnerable to subject measurement error. Leg length has the greatest effect on all joints as a percentage of measurement error. When compared to the errors previously found through inter-laboratory measurements, the impact of subject measurements is minimal, and researchers should rather focus on marker placement. Finally, we showed that code modifications can be performed to include improved hip, knee, and ankle joint centre estimations suggested in the existing literature. The pyCGM code is provided in open source format and available at https://github.com/cadop/pyCGM. PMID:29293565

  17. Proprioceptive deficit in individuals with unilateral tearing of the anterior cruciate ligament after active evaluation of the sense of joint position.

    PubMed

    Cossich, Victor; Mallrich, Frédéric; Titonelli, Victor; de Sousa, Eduardo Branco; Velasques, Bruna; Salles, José Inácio

    2014-01-01

    To ascertain whether the proprioceptive deficit in the sense of joint position continues to be present when patients with a limb presenting a deficient anterior cruciate ligament (ACL) are assessed by testing their active reproduction of joint position, in comparison with the contralateral limb. Twenty patients with unilateral ACL tearing participated in the study. Their active reproduction of joint position in the limb with the deficient ACL and in the healthy contralateral limb was tested. Meta-positions of 20% and 50% of the maximum joint range of motion were used. Proprioceptive performance was determined through the values of the absolute error, variable error and constant error. Significant differences in absolute error were found at both of the positions evaluated, and in constant error at 50% of the maximum joint range of motion. When evaluated in terms of absolute error, the proprioceptive deficit continues to be present even when an active evaluation of the sense of joint position is made. Consequently, this sense involves activity of both intramuscular and tendon receptors.
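
    For reference, the three error scores used above have simple definitions; the sketch below computes them for a hypothetical set of joint-position reproduction trials:

    ```python
    import numpy as np

    target = 30.0                                # reference joint angle (deg)
    trials = np.array([28.5, 31.2, 29.0, 32.4])  # reproduced angles, synthetic

    err = trials - target
    AE = np.mean(np.abs(err))   # absolute error: magnitude, ignores direction
    CE = np.mean(err)           # constant error: signed bias (under/overshoot)
    VE = np.std(err)            # variable error: trial-to-trial consistency
    print(f"AE={AE:.2f}  CE={CE:.2f}  VE={VE:.2f} deg")
    ```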

  18. Pencil beam proton radiography using a multilayer ionization chamber

    NASA Astrophysics Data System (ADS)

    Farace, Paolo; Righetto, Roberto; Meijers, Arturs

    2016-06-01

    A pencil beam proton radiography (PR) method, using a commercial multilayer ionization chamber (MLIC) integrated with a treatment planning system (TPS), was developed. A Giraffe (IBA Dosimetry) MLIC (±0.5 mm accuracy) was used to obtain pencil beam PR by delivering spots uniformly positioned at a 5.0 mm distance in a 9 × 9 square of spots. PRs of an electron-density phantom (with tissue-equivalent inserts) and a head phantom were acquired. The integral depth dose (IDD) curves of the delivered spots were computed by the TPS in a volume of water simulating the MLIC, and virtually added to the CT at the exit side of the phantoms. For each spot, the measured and calculated IDD were overlapped in order to compute a map of range errors. On the head phantom, the maximum dose from PR acquisition was estimated. Additionally, on the head phantom the impact on the range error map was estimated in the case of a 1 mm position misalignment. In the electron-density phantom, range errors were within 1 mm in the soft-tissue rods, but greater in the dense rod. In the head phantom the range errors were -0.9 ± 2.7 mm on the whole map and within 1 mm in the brain area. On both phantoms greater errors were observed at inhomogeneity interfaces, due to sensitivity to small misalignments and inaccurate TPS dose computation. The effect of the 1 mm misalignment was clearly visible on the range error map and produced an increased spread of range errors (-1.0 ± 3.8 mm on the whole map). The dose to the patient for such PR acquisitions would be acceptable, as the maximum dose to the head phantom was < 2 cGyE. Because the described 2D method can discriminate misalignments, range verification can be performed in selected areas to implement an in vivo quality assurance program.
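
    A sketch of the per-spot comparison under simplified assumptions: the range error is taken as the depth shift that best overlays the TPS-computed IDD on the measured one. Both curves below are synthetic toys, not real MLIC or TPS data:

    ```python
    import numpy as np

    depth = np.linspace(0.0, 150.0, 301)          # depth in water (mm)

    def idd(z, r):                                # toy Bragg-like depth-dose curve
        return np.exp(-((z - r) / 8.0) ** 2) + 0.3 * (z < r)

    measured = idd(depth, 100.0)                  # "MLIC measurement" (toy)
    computed = idd(depth, 98.5)                   # "TPS prediction" (toy)

    shifts = np.arange(-10.0, 10.0, 0.1)
    sse = [np.sum((measured - np.interp(depth - s, depth, computed)) ** 2)
           for s in shifts]
    range_error = shifts[int(np.argmin(sse))]     # expect about +1.5 mm here
    print(f"range error = {range_error:.1f} mm")
    ```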

  19. Pencil beam proton radiography using a multilayer ionization chamber.

    PubMed

    Farace, Paolo; Righetto, Roberto; Meijers, Arturs

    2016-06-07

    A pencil beam proton radiography (PR) method, using a commercial multilayer ionization chamber (MLIC) integrated with a treatment planning system (TPS) was developed. A Giraffe (IBA Dosimetry) MLIC (±0.5 mm accuracy) was used to obtain pencil beam PR by delivering spots uniformly positioned at a 5.0 mm distance in a 9  ×  9 square of spots. PRs of an electron-density (with tissue-equivalent inserts) phantom and a head phantom were acquired. The integral depth dose (IDD) curves of the delivered spots were computed by the TPS in a volume of water simulating the MLIC, and virtually added to the CT at the exit side of the phantoms. For each spot, measured and calculated IDD were overlapped in order to compute a map of range errors. On the head-phantom, the maximum dose from PR acquisition was estimated. Additionally, on the head phantom the impact on the range errors map was estimated in case of a 1 mm position misalignment. In the electron-density phantom, range errors were within 1 mm in the soft-tissue rods, but greater in the dense-rod. In the head-phantom the range errors were  -0.9  ±  2.7 mm on the whole map and within 1 mm in the brain area. On both phantoms greater errors were observed at inhomogeneity interfaces, due to sensitivity to small misalignment, and inaccurate TPS dose computation. The effect of the 1 mm misalignment was clearly visible on the range error map and produced an increased spread of range errors (-1.0  ±  3.8 mm on the whole map). The dose to the patient for such PR acquisitions would be acceptable as the maximum dose to the head phantom was  <2cGyE. By the described 2D method, allowing to discriminate misalignments, range verification can be performed in selected areas to implement an in vivo quality assurance program.

  20. Uncertainty analysis of thermocouple measurements used in normal and abnormal thermal environment experiments at Sandia's Radiant Heat Facility and Lurance Canyon Burn Site.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakos, James Thomas

    2004-04-01

    It would not be possible to confidently qualify weapon systems performance or validate computer codes without knowing the uncertainty of the experimental data used. This report provides uncertainty estimates associated with thermocouple data for temperature measurements from two of Sandia's large-scale thermal facilities. These two facilities (the Radiant Heat Facility (RHF) and the Lurance Canyon Burn Site (LCBS)) routinely gather data from normal and abnormal thermal environment experiments. They are managed by Fire Science & Technology Department 09132. Uncertainty analyses were performed for several thermocouple (TC) data acquisition systems (DASs) used at the RHF and LCBS. These analyses apply to Type K, chromel-alumel thermocouples of various types: fiberglass sheathed TC wire, mineral-insulated, metal-sheathed (MIMS) TC assemblies, and are easily extended to other TC materials (e.g., copper-constantan). Several DASs were analyzed: (1) a Hewlett-Packard (HP) 3852A system, and (2) several National Instruments (NI) systems. The uncertainty analyses were performed on the entire system from the TC to the DAS output file. Uncertainty sources include TC mounting errors, ANSI standard calibration uncertainty for Type K TC wire, potential errors due to temperature gradients inside connectors, extension wire uncertainty, DAS hardware uncertainties including noise, common mode rejection ratio, digital voltmeter accuracy, mV to temperature conversion, analog to digital conversion, and other possible sources. Typical results for 'normal' environments (e.g., maximum of 300-400 K) showed the total uncertainty to be about ±1% of the reading in absolute temperature. In high temperature or high heat flux ('abnormal') thermal environments, total uncertainties range up to ±2-3% of the reading (maximum of 1300 K). The higher uncertainties in abnormal thermal environments are caused by increased errors due to the effects of imperfect TC attachment to the test item. 'Best practices' are provided in Section 9 to help the user to obtain the best measurements possible.
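
    Such budgets conventionally combine independent error terms in root-sum-square fashion; the sketch below uses illustrative placeholder values for each source, not the report's actual numbers:

    ```python
    import math

    # Root-sum-square uncertainty budget for one thermocouple channel.
    # Every term below is an assumed placeholder, expressed in kelvin.
    terms = {
        "TC mounting":           2.0,
        "wire calibration":      1.1,
        "extension wire":        0.5,
        "connector gradients":   0.3,
        "DAS (noise, DVM, A/D)": 0.4,
        "mV-to-T conversion":    0.2,
    }
    u_total = math.sqrt(sum(u ** 2 for u in terms.values()))
    reading = 350.0  # K
    print(f"total uncertainty = ±{u_total:.1f} K "
          f"(±{100 * u_total / reading:.1f}% of {reading} K reading)")
    ```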

  1. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
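
    A Monte Carlo sketch of the quantities involved, using a systematic (7,4) Hamming code as a stand-in (the paper treats randomly generated codes); it compares the measured P_b with the (d_H/N)P_s approximation. The SNR point and trial count are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
    G = np.hstack([np.eye(4, dtype=int), P])          # systematic generator
    msgs = np.array([[(m >> i) & 1 for i in range(4)] for m in range(16)])
    tx = 1 - 2 * (msgs @ G % 2)                       # BPSK codebook (16 x 7)

    sigma, n_blocks = 0.6, 50_000                     # noise std (assumed), trials
    blk = bit = 0
    for _ in range(n_blocks):
        k = rng.integers(16)
        r = tx[k] + sigma * rng.standard_normal(7)
        khat = np.argmin(((r - tx) ** 2).sum(axis=1))  # ML decoding on AWGN
        if khat != k:
            blk += 1
            bit += int(np.sum(msgs[k] != msgs[khat]))
    Ps, Pb = blk / n_blocks, bit / (4 * n_blocks)
    print(f"P_s={Ps:.4f}  P_b={Pb:.4f}  (d_H/N)P_s={(3 / 7) * Ps:.4f}")
    ```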

  2. Synergies in Astrometry: Predicting Navigational Error of Visual Binary Stars

    NASA Astrophysics Data System (ADS)

    Gessner Stewart, Susan

    2015-08-01

    Celestial navigation can employ a number of bright stars which are in binary systems. Often these are unresolved, appearing as a single, center-of-light object. A number of these systems are, however, in wide systems which could introduce a margin of error in the navigation solution if not handled properly. To illustrate the importance of good orbital solutions for binary systems - as well as good astrometry in general - the relationship between the center-of-light versus individual catalog position of celestial bodies and the error in terrestrial position derived via celestial navigation is demonstrated. From the list of navigational binary stars, fourteen such binary systems with at least 3.0 arcseconds apparent separation are explored. Maximum navigational error is estimated under the assumption that the bright star in the pair is observed at maximum separation, but the center-of-light is employed in the navigational solution. The relationships between navigational error and separation, orbital periods, and observers' latitude are discussed.
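
    The scale of the effect follows from the standard rule that one arcminute of altitude error corresponds to roughly one nautical mile of position error; the separations below are illustrative:

    ```python
    # An unmodeled center-of-light offset of s arcseconds contributes at most
    # about s/60 nautical miles to the fix, since 1 arcmin ~ 1 nmi.
    for sep_arcsec in (3.0, 5.0, 10.0):
        max_err_nmi = sep_arcsec / 60.0
        print(f"{sep_arcsec:4.1f} arcsec separation -> up to ~{max_err_nmi:.2f} nmi")
    ```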

  3. Examining Impulse-Variability Theory and the Speed-Accuracy Trade-Off in Children's Overarm Throwing Performance.

    PubMed

    Molina, Sergio L; Stodden, David F

    2018-04-01

    This study examined variability in throwing speed and spatial error to test the prediction of an inverted-U function (i.e., impulse-variability [IV] theory) and the speed-accuracy trade-off. Forty-five 9- to 11-year-old children were instructed to throw at a specified percentage of maximum speed (45%, 65%, 85%, and 100%) and hit the wall target. Results indicated no statistically significant differences in variable error across the target conditions (p = .72), failing to support the inverted-U hypothesis. Spatial accuracy results indicated no statistically significant differences with mean radial error (p = .18), centroid radial error (p = .13), and bivariate variable error (p = .08) also failing to support the speed-accuracy trade-off in overarm throwing. As neither throwing performance variability nor accuracy changed across percentages of maximum speed in this sample of children as well as in a previous adult sample, current policy and practices of practitioners may need to be reevaluated.

  4. Chair rise transfer detection and analysis using a pendant sensor: an algorithm for fall risk assessment in older people.

    PubMed

    Zhang, Wei; Regterschot, G Ruben H; Wahle, Fabian; Geraedts, Hilde; Baldus, Heribert; Zijlstra, Wiebren

    2014-01-01

    Falls result in substantial disability, morbidity, and mortality among older people. Early detection of fall risks and timely intervention can prevent falls and injuries due to falls. Simple field tests, such as repeated chair rise, are used in clinical assessment of fall risks in older people. Development of on-body sensors introduces potential beneficial alternatives for traditional clinical methods. In this article, we present a pendant sensor based chair rise detection and analysis algorithm for fall risk assessment in older people. The recall and the precision of the transfer detection were 85% and 87% in standard protocol, and 61% and 89% in daily life activities. Estimation errors of chair rise performance indicators: duration, maximum acceleration, peak power and maximum jerk were tested in over 800 transfers. Median estimation error in transfer peak power ranged from 1.9% to 4.6% in various tests. Among all the performance indicators, maximum acceleration had the lowest median estimation error of 0% and duration had the highest median estimation error of 24% over all tests. The developed algorithm might be feasible for continuous fall risk assessment in older people.
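
    A sketch of how the four performance indicators might be computed from a vertical acceleration trace; the signal, detection threshold, and body mass are synthetic assumptions rather than the paper's algorithm:

    ```python
    import numpy as np

    fs, m = 100.0, 75.0                           # sampling rate (Hz), body mass (kg)
    t = np.arange(0.0, 2.0, 1.0 / fs)
    a = 1.5 * np.exp(-((t - 0.8) / 0.25) ** 2)    # vertical accel minus gravity (m/s^2)

    active = a > 0.1                              # crude transfer detection threshold
    duration = active.sum() / fs                  # transfer duration (s)
    a_max = a.max()                               # maximum acceleration (m/s^2)
    v = np.cumsum(a) / fs                         # velocity by integration (m/s)
    peak_power = (m * a * v).max()                # crude instantaneous power proxy (W)
    max_jerk = np.abs(np.diff(a)).max() * fs      # maximum jerk (m/s^3)
    print(f"duration={duration:.2f} s  a_max={a_max:.2f} m/s^2  "
          f"P_peak={peak_power:.0f} W  jerk_max={max_jerk:.1f} m/s^3")
    ```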

  5. Application of fiber spectrometers for etch depth measurement of binary computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Korolkov, V. P.; Konchenko, A. S.; Poleshchuk, A. G.

    2013-01-01

    A novel spectrophotometric method for measuring the etch depth of computer-generated holograms is presented. It is based on the spectral properties of binary phase multi-order gratings: the intensity of the zero diffraction order is a periodic function of the wavenumber of the illuminating light, and the groove depth can be calculated because it is inversely proportional to that period. Measurement in reflection doubles the phase depth of the grooves and allows shallow phase gratings to be measured more precisely. Binary diffractive structures with depths from several hundred to several thousand nanometers can be measured by the method. The measurement uncertainty is mainly defined by the following parameters: shifts of the spectral maxima caused by tilted groove sidewalls, uncertainty in the measured angle of light incidence, and the wavelength error of the spectrophotometer. It is shown theoretically and experimentally that the method can ensure a 0.25-1% error for desktop spectrophotometers. Fiber spectrometers, however, are more convenient for building a real measurement system with scanning measurement of large-area computer-generated holograms, which are used for optical testing of aspheric optics. Diffractive Fizeau null lenses especially need to be carefully tested for uniformity of etch depth. An experimental system for characterization of binary computer-generated holograms was developed using the spectrophotometric unit of a CHR-150 confocal sensor (STIL SA).
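
    Under the stated relationship, depth recovery reduces to estimating the spectral period; a minimal sketch, assuming a 50% duty cycle, normal incidence, and reflection (round-trip phase):

    ```python
    import numpy as np

    d_true = 1.2e-6                                 # groove depth (m), hypothetical
    lam = np.linspace(400e-9, 900e-9, 4000)
    nu = np.sort(1.0 / lam)                         # wavenumber (1/m), ascending
    I0 = np.cos(2.0 * np.pi * d_true * nu) ** 2     # zero-order reflectance model

    # spectral period from the spacing of successive maxima
    peaks = [i for i in range(1, nu.size - 1)
             if I0[i] > I0[i - 1] and I0[i] >= I0[i + 1]]
    period = np.diff(nu[peaks]).mean()              # period in wavenumber (1/m)
    d_est = 1.0 / (2.0 * period)                    # depth is inverse to the period
    print(f"estimated depth = {d_est * 1e9:.0f} nm")
    ```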

  6. Effect of monochromatic aberrations on photorefractive patterns

    NASA Astrophysics Data System (ADS)

    Campbell, Melanie C. W.; Bobier, W. R.; Roorda, A.

    1995-08-01

    Photorefractive methods have become popular in the measurement of refractive and accommodative states of infants and children owing to their photographic nature and rapid speed of measurement. As in the case of any method that measures the refractive state of the human eye, monochromatic aberrations will reduce the accuracy of the measurement. Monochromatic aberrations cannot be as easily predicted or controlled as chromatic aberrations during the measurement, and accordingly they will introduce measurement errors. This study defines this error or uncertainty by extending the existing paraxial optical analyses of coaxial and eccentric photorefraction. This new optical analysis predicts that, for the amounts of spherical aberration (SA) reported for the human eye, there will be a significant degree of measurement uncertainty introduced for all photorefractive methods. The dioptric amount of this uncertainty may exceed the maximum amount of SA present in the eye. The calculated effects on photorefractive measurement of a real eye with a mixture of spherical aberration and coma are shown to be significant. The ability, developed here, to predict photorefractive patterns corresponding to different amounts and types of monochromatic aberration may in the future lead to an extension of photorefractive methods to the dual measurement of refractive states and aberrations of individual eyes.

  7. Prediction of Mass Evaporation of During Measurements of Thermophysical Properties Using an Electrostatic Levitator

    NASA Astrophysics Data System (ADS)

    Lee, J.; Matson, D. M.

    2014-10-01

    This paper describes the prediction of mass evaporation of at% alloys during thermophysical property measurements using the electrostatic levitator at NASA Marshall Space Flight Center in Huntsville, AL. The final mass, the final composition, and the activity of the individual components are considered in the calculation of mass evaporation. The predicted reduction in mass and variation in composition are validated with six ESL samples which underwent different thermal cycles. The predicted mass evaporation and composition shift show good agreement with experiments, with maximum relative errors of 4.8% and 1.7%, respectively.

  8. Microwave non-contact imaging of subcutaneous human body tissues.

    PubMed

    Kletsov, Andrey; Chernokalov, Alexander; Khripkov, Alexander; Cho, Jaegeol; Druchinin, Sergey

    2015-10-01

    A small-size microwave sensor is developed for non-contact imaging of a human body structure in 2D, enabling fitness and health monitoring using mobile devices. A method for human body tissue structure imaging is developed and experimentally validated. Subcutaneous fat tissue reconstruction depth of up to 70 mm and maximum fat thickness measurement error below 2 mm are demonstrated by measurements with a human body phantom and human subjects. Electrically small antennas are developed for integration of the microwave sensor into a mobile device. Usability of the developed microwave sensor for fitness applications, healthcare, and body weight management is demonstrated.

  9. Improved model of the retardance in citric acid coated ferrofluids using stepwise regression

    NASA Astrophysics Data System (ADS)

    Lin, J. F.; Qiu, X. R.

    2017-06-01

    Citric acid (CA) coated Fe3O4 ferrofluids (FFs) have been developed for biomedical applications. The magneto-optical retardance of CA coated FFs was measured by a Stokes polarimeter. Optimization and multiple regression of the retardance in FFs had previously been carried out with the Taguchi method and Microsoft Excel, and the F value of that regression model was large enough; however, the model built in Excel was not systematic. Instead, we adopted stepwise regression to model the retardance of CA coated FFs. The results of stepwise regression in MATLAB show that the developed model has high predictive ability, with an F value of 2.55897e+7 and a correlation coefficient of one. The average absolute error between predicted and measured retardances was just 0.0044%. Using the genetic algorithm (GA) in MATLAB, the optimized parametric combination was determined as [4.709 0.12 39.998 70.006], corresponding to the pH of the suspension, the molar ratio of CA to Fe3O4, the CA volume, and the coating temperature. The maximum retardance was found to be 31.712°, close to that obtained by the evolutionary solver in Excel, with a relative error of -0.013%. Above all, the stepwise regression method was successfully used to model the retardance of CA coated FFs, and the maximum global retardance was determined by the use of GA.
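
    A minimal forward-selection sketch of stepwise regression (the authors used MATLAB's implementation); the data, which stand in for the four process factors, and the stopping rule are synthetic assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(30, 4))          # stand-ins for pH, molar ratio, volume, temp
    y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.01 * rng.normal(size=30)

    def rss(cols):
        # residual sum of squares of an OLS fit on the chosen columns
        A = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.sum((y - A @ beta) ** 2)

    selected, remaining = [], [0, 1, 2, 3]
    best = rss(selected)
    while remaining:
        cand = min(remaining, key=lambda j: rss(selected + [j]))
        new = rss(selected + [cand])
        if new > 0.95 * best:             # stop if improvement < 5% (ad hoc rule)
            break
        selected.append(cand); remaining.remove(cand); best = new
    print("selected predictors:", selected, " RSS:", best)
    ```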

  10. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    DOE PAGES

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; ...

    2017-08-25

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment’s beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  11. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment’s beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  12. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Bianchini, Federico

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
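
    The core of such an estimator can be seen in a toy linear model where the lensing signature scales with cluster mass: with Gaussian noise, the maximum likelihood mass and its uncertainty are closed-form. Everything below (template, noise level, "data") is synthetic; the real estimator works on CMB maps:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    npix = 500
    template = np.exp(-np.linspace(-3, 3, npix) ** 2)  # lensing signature per unit mass
    noise_std = 5.0
    Ninv = np.eye(npix) / noise_std**2                 # inverse noise covariance

    M_true = 4.0                                       # mass in arbitrary units
    data = M_true * template + noise_std * rng.standard_normal(npix)

    # MLE for a linear amplitude: generalized least squares / matched filter
    M_hat = (template @ Ninv @ data) / (template @ Ninv @ template)
    sigma_M = (template @ Ninv @ template) ** -0.5
    print(f"M_hat = {M_hat:.2f} +/- {sigma_M:.2f}")
    ```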

  13. Nearshore coastal mapping. [in Lake Michigan and Puerto Rico

    NASA Technical Reports Server (NTRS)

    Polcyn, F. C.; Lyzenga, D. R.

    1975-01-01

    Two test sites of different water quality and bottom topography were used to test for maximum water depth penetration using the Skylab S-192 MSS for measurement of nearshore coastal bathymetry. Sites under investigation lie along the Lake Michigan coastline where littoral transport acts to erode sand bluffs and endangers developments along 1,200 miles of shore, and on the west coast of Puerto Rico where unreliable shoal location and depth information constitutes a safety hazard to navigation. The S-192 and S-190A and B provide data on underwater features because of water transparency in the blue/green portion of the spectrum. Depths of 20 meters were measured with the S-192 in the Puerto Rico test site. The S-190B photography with its improved spatial resolution clearly delineates the triple sand bar topography in the Lake Michigan test site. Several processing techniques were employed to test for maximum depth measurement with least error. The results are useful for helping to determine an optimum spectral bandwidth for future space sensors that will increase depth measurements for different water attenuation conditions where a bottom reflection is detectable.

  14. Nimbus 7 earth radiation budget wide field of view climate data set improvement. I - The earth albedo from deconvolution of shortwave measurements

    NASA Technical Reports Server (NTRS)

    Hucek, Richard R.; Ardanuy, Philip E.; Kyle, H. Lee

    1987-01-01

    A deconvolution method for extracting the top of the atmosphere (TOA) mean, daily albedo field from a set of wide-FOV (WFOV) shortwave radiometer measurements is proposed. The method is based on constructing a synthetic measurement for each satellite observation. The albedo field is represented as a truncated series of spherical harmonic functions, and the resulting linear measurement equations are presented. Simulation studies were conducted to determine the sensitivity of the method. It is observed that a maximum of about 289 pieces of data can be extracted from a set of Nimbus 7 WFOV satellite measurements. The albedos derived using the deconvolution method are compared with albedos derived using the WFOV archival method; the deconvolved albedo field achieved a 20 percent reduction in the global rms regional reflected flux density errors. The deconvolution method is applied to estimate the mean, daily average TOA albedo field for January 1983. A strong and extensive albedo maximum (0.42), which corresponds to the El Nino/Southern Oscillation event of 1982-1983, is detected over the south central Pacific Ocean.
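
    A toy version of the linear inversion, assuming a hand-coded real spherical-harmonic basis through degree 2 and a crude footprint average as the synthetic measurement; coefficients, footprints, and noise are all illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def basis(lat, lon):  # real spherical harmonics through l = 2 (unnormalized)
        s, c = np.sin(lat), np.cos(lat)
        return np.stack([np.ones_like(lat), s, c * np.cos(lon), c * np.sin(lon),
                         1.5 * s**2 - 0.5, s * c * np.cos(lon), s * c * np.sin(lon),
                         c**2 * np.cos(2 * lon), c**2 * np.sin(2 * lon)])

    c_true = np.array([0.30, 0.05, -0.02, 0.01, 0.04, 0.0, 0.02, -0.01, 0.0])
    lat0 = np.arcsin(rng.uniform(-1, 1, 300)); lon0 = rng.uniform(0, 2 * np.pi, 300)
    A = np.zeros((300, 9))
    for i in range(300):
        # footprint average over jittered sample points ~ "synthetic measurement"
        la = lat0[i] + rng.normal(0, 0.2, 50); lo = lon0[i] + rng.normal(0, 0.2, 50)
        A[i] = basis(la, lo).mean(axis=1)
    y = A @ c_true + rng.normal(0, 1e-3, 300)
    c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)   # deconvolution by least squares
    print("max |c_hat - c_true| =", np.abs(c_hat - c_true).max())
    ```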

  15. Heading Estimation for Pedestrian Dead Reckoning Based on Robust Adaptive Kalman Filtering.

    PubMed

    Wu, Dongjin; Xia, Linyuan; Geng, Jijun

    2018-06-19

    Pedestrian dead reckoning (PDR) using smart phone-embedded micro-electro-mechanical system (MEMS) sensors plays a key role in ubiquitous localization indoors and outdoors. However, as a relative localization method, it suffers from the problem of error accumulation, which prevents it from long-term independent running. Heading estimation error is one of the main location error sources, and therefore, in order to improve the location tracking performance of the PDR method in complex environments, an approach based on robust adaptive Kalman filtering (RAKF) for estimating accurate headings is proposed. In our approach, outputs from gyroscope, accelerometer, and magnetometer sensors are fused with a Kalman filter (KF) in which heading measurements derived from the acceleration and magnetic field data correct the states integrated from the angular rates. In order to identify and control measurement outliers, a maximum likelihood-type estimator (M-estimator)-based model is used. Moreover, an adaptive factor is applied to resist the negative effects of state model disturbances. Extensive experiments under static and dynamic conditions were conducted in indoor environments. The experimental results demonstrate that the proposed approach provides more accurate heading estimates and supports more robust and dynamic adaptive location tracking, compared with methods based on conventional KF.
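
    A one-state sketch of the filter logic, with a Huber-type weight standing in for the paper's M-estimator and the adaptive factor simplified away; all signals and tuning values are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    dt, n = 0.02, 500
    true = np.cumsum(rng.normal(0.0, 0.2, n)) * dt        # true heading (rad)
    gyro = np.diff(true, prepend=0.0) / dt + rng.normal(0.0, 0.05, n)
    mag = true + rng.normal(0.0, 0.05, n)                 # accel/mag heading
    mag[rng.random(n) < 0.05] += 1.0                      # occasional disturbances

    x, P = 0.0, 1.0
    Q, R, c = (0.05 * dt) ** 2, 0.05 ** 2, 2.0
    est = np.empty(n)
    for k in range(n):
        x, P = x + gyro[k] * dt, P + Q                    # predict with gyro rate
        v = mag[k] - x                                    # innovation
        z = abs(v) / np.sqrt(P + R)                       # standardized innovation
        w = 1.0 if z < c else c / z                       # Huber-style down-weighting
        K = P / (P + R / w)                               # outliers see an inflated R
        x, P = x + K * v, (1.0 - K) * P
        est[k] = x
    print("RMS heading error (rad):", np.sqrt(np.mean((est - true) ** 2)))
    ```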

  16. Measuring human remains in the field: Grid technique, total station, or MicroScribe?

    PubMed

    Sládek, Vladimír; Galeta, Patrik; Sosna, Daniel

    2012-09-10

    Although three-dimensional (3D) coordinates for human intra-skeletal landmarks are among the most important data that anthropologists have to record in the field, little is known about the reliability of various measuring techniques. We compared the reliability of three techniques used for 3D measurement of human remains in the field: grid technique (GT), total station (TS), and MicroScribe (MS). We measured 365 field osteometric points on 12 skeletal sequences excavated at the Late Medieval/Early Modern churchyard in Všeruby, Czech Republic. We compared intra-observer, inter-observer, and inter-technique variation using mean difference (MD), mean absolute difference (MAD), standard deviation of difference (SDD), and limits of agreement (LA). All three measuring techniques can be used when accepted error ranges can be measured in centimeters. When a range of accepted error measurable in millimeters is needed, MS offers the best solution. TS can achieve the same reliability as does MS, but only when the laser beam is accurately pointed into the center of the prism. When the prism is not accurately oriented, TS produces unreliable data. TS is more sensitive to initialization than is MS. GT measures the human skeleton with acceptable reliability for general purposes but insufficiently when highly accurate skeletal data are needed. We observed high inter-technique variation, indicating that just one technique should be used when spatial data from one individual are recorded. Subadults are measured with slightly lower error than are adults. The effect of maximum excavated skeletal length has little practical significance in field recording. When MS is not available, we offer practical suggestions that can help to increase reliability when measuring the human skeleton in the field. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
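
    The agreement statistics named above (MD, MAD, SDD, LA) are easy to reproduce; the sketch below computes them for a pair of synthetic measurement series:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    a = rng.normal(100.0, 5.0, 40)               # technique 1 (mm), synthetic
    b = a + rng.normal(1.0, 2.0, 40)             # technique 2 (mm), biased + noisy

    d = a - b
    MD = d.mean()                                # mean difference (bias)
    MAD = np.abs(d).mean()                       # mean absolute difference
    SDD = d.std(ddof=1)                          # standard deviation of differences
    LA = (MD - 1.96 * SDD, MD + 1.96 * SDD)      # 95% limits of agreement
    print(f"MD={MD:.2f}  MAD={MAD:.2f}  SDD={SDD:.2f}  "
          f"LA=({LA[0]:.2f}, {LA[1]:.2f}) mm")
    ```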

  17. Evaluation of methods for measuring particulate matter emissions from gas turbines.

    PubMed

    Petzold, Andreas; Marsh, Richard; Johnson, Mark; Miller, Michael; Sevcenco, Yura; Delhaye, David; Ibrahim, Amir; Williams, Paul; Bauer, Heidi; Crayford, Andrew; Bachalo, William D; Raper, David

    2011-04-15

    The project SAMPLE evaluated methods for measuring particle properties in the exhaust of aircraft engines with respect to the development of standardized operation procedures for particulate matter measurement in the aviation industry. Filter-based off-line mass methods included gravimetry and chemical analysis of carbonaceous species by combustion methods. Online mass methods were based on light absorption measurement or used size distribution measurements obtained from an electrical mobility analyzer approach. Number concentrations were determined using different condensation particle counters (CPC). Total mass from filter-based methods balanced gravimetric mass within 8% error. Carbonaceous matter accounted for 70% of gravimetric mass while the remaining 30% were attributed to hydrated sulfate and noncarbonaceous organic matter fractions. Online methods were closely correlated over the entire range of emission levels studied in the tests. Elemental carbon from combustion methods and black carbon from optical methods deviated by a maximum of 5% with respect to mass for low to medium emission levels, whereas for high emission levels a systematic deviation between online and filter-based methods was found, which is attributed to sampling effects. CPC-based instruments proved highly reproducible for number concentration measurements with a maximum interinstrument standard deviation of 7.5%.

  18. Fast scattering simulation tool for multi-energy x-ray imaging

    NASA Astrophysics Data System (ADS)

    Sossin, A.; Tabary, J.; Rebuffel, V.; Létang, J. M.; Freud, N.; Verger, L.

    2015-12-01

    A combination of Monte Carlo (MC) and deterministic approaches was employed as a means of creating a simulation tool capable of providing energy resolved x-ray primary and scatter images within a reasonable time interval. Libraries of Sindbad, a previously developed x-ray simulation software, were used in the development. The scatter simulation capabilities of the tool were validated through simulation with the aid of GATE and through experimentation by using a spectrometric CdTe detector. A simple cylindrical phantom with cavities and an aluminum insert was used. Cross-validation with GATE showed good agreement with a global spatial error of 1.5% and a maximum scatter spectrum error of around 6%. Experimental validation also supported the accuracy of the simulations obtained from the developed software with a global spatial error of 1.8% and a maximum error of around 8.5% in the scatter spectra.

  19. Etch depth mapping of phase binary computer-generated holograms by means of specular spectroscopic scatterometry

    NASA Astrophysics Data System (ADS)

    Korolkov, Victor P.; Konchenko, Alexander S.; Cherkashin, Vadim V.; Mironnikov, Nikolay G.; Poleshchuk, Alexander G.

    2013-09-01

    Detailed analysis of the etch depth map of phase binary computer-generated holograms intended for testing aspheric optics is a very important task. In particular, diffractive Fizeau null lenses need to be carefully tested for uniformity of etch depth. We offer a simplified version of the specular spectroscopic scatterometry method. It is based on the spectral properties of binary phase multi-order gratings: the intensity of the zero diffraction order is a periodic function of the wavenumber of the illuminating light, and the groove depth can be calculated because it is inversely proportional to that period. Measurement in reflection increases the phase depth of the grooves by a factor of 2 and allows shallow phase gratings to be measured more precisely. The measurement uncertainty is mainly defined by the following parameters: shifts of the spectral maxima that occur due to tilted groove sidewalls, uncertainty in the measured angle of light incidence, and the wavelength error of the spectrophotometer. It is theoretically and experimentally shown that the method we describe can ensure 1% error. However, fiber spectrometers are more convenient for scanning measurements of large-area computer-generated holograms. Our experimental system for characterization of binary computer-generated holograms was developed using a fiber spectrometer.

  20. Compensation method of cloud infrared radiation interference based on a spinning projectile's attitude measurement

    NASA Astrophysics Data System (ADS)

    Xu, Miaomiao; Bu, Xiongzhu; Yu, Jing; He, Zilu

    2018-01-01

    Motivated by the study of earth infrared radiation and the need for cloud-interference rejection in a spinning projectile's infrared attitude measurement, a compensation method for cloud infrared radiation interference is proposed. First, the theoretical model of the infrared radiation interference is established by analyzing the generation mechanism and interference characteristics of cloud infrared radiation. Then, the influence of cloud infrared radiation on the attitude angle is calculated in the following two situations. The first situation is the projectile in cloud, where the roll angle error can reach ±20 deg. The second situation is the projectile outside of cloud, which makes it impossible to measure the projectile's attitude angle. Finally, a multisensor weighted fusion algorithm based on a trust function is proposed to reduce the influence of cloud infrared radiation. The results of semiphysical experiments show that the error of the roll angle with the weighted fusion algorithm can be kept within ±0.5 deg in the presence of cloud infrared radiation interference. The proposed method improves the accuracy of the roll angle by nearly four times in attitude measurement and also solves the problem of low accuracy of infrared attitude measurement in the navigation and guidance field.
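
    A sketch of consistency-based weighted fusion, with an inverse-deviation trust function assumed as a stand-in for the paper's trust function; a channel skewed by cloud radiation then receives little weight:

    ```python
    import numpy as np

    z = np.array([30.2, 29.8, 30.5, 50.0])   # roll-angle estimates (deg), one corrupted
    # mean absolute deviation of each sensor from the others
    dev = np.array([np.abs(z[i] - np.delete(z, i)).mean() for i in range(z.size)])
    trust = 1.0 / (dev + 1e-6)               # assumed trust function: inverse deviation
    w = trust / trust.sum()
    fused = np.dot(w, z)
    print("weights:", np.round(w, 3), " fused roll angle:", round(fused, 2), "deg")
    ```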

  1. Casing pipe damage detection with optical fiber sensors: a case study in oil well constructions

    NASA Astrophysics Data System (ADS)

    Zhou, Zhi; He, Jianping; Huang, Minghua; He, Jun; Ou, Jinping; Chen, Genda

    2010-04-01

    Casing pipes in oil well constructions may suddenly buckle inward as their inside and outside hydrostatic pressure difference increases. For the safety of construction workers and the steady development of oil industries, it is critically important to measure the stress state of a casing pipe. This study develops a rugged, real-time monitoring, and warning system that combines the distributed Brillouin Scattering Time Domain Reflectometry (BOTDR) and the discrete fiber Bragg grating (FBG) measurement. The BOTDR optical fiber sensors were embedded with no optical fiber splice joints in a fiber reinforced polymer (FRP) rebar and the FBG sensors were wrapped in epoxy resins and glass clothes, both installed during the segmental construction of casing pipes. In-situ tests indicate that the proposed sensing system and installation technique can survive the downhole driving process of casing pipes, withstand a harsh service environment, and remain intact with the casing pipes for compatible strain measurements. The relative error of the measured strains between the distributed and discrete sensors is less than 12%. The FBG sensors successfully measured the maximum horizontal principal stress with a relative error of 6.7% in comparison with a cross multi-pole array acoustic instrument.

  2. Numerical experiment for ultrasonic-measurement-integrated simulation of three-dimensional unsteady blood flow.

    PubMed

    Funamoto, Kenichi; Hayase, Toshiyuki; Saijo, Yoshifumi; Yambe, Tomoyuki

    2008-08-01

    Integration of ultrasonic measurement and numerical simulation is a possible way to break through limitations of existing methods for obtaining complete information on hemodynamics. We herein propose Ultrasonic-Measurement-Integrated (UMI) simulation, in which feedback signals based on the optimal estimation of errors in the velocity vector, determined from measured and computed Doppler velocities at feedback points, are added to the governing equations. With an eye towards practical implementation of UMI simulation with real measurement data, its efficiency for three-dimensional unsteady blood flow analysis and a method for treating the low time resolution of ultrasonic measurement were investigated by a numerical experiment dealing with complicated blood flow in an aneurysm. Even when simplified boundary conditions were applied, the UMI simulation reduced the errors of velocity and pressure to 31% and 53%, respectively, in the feedback domain covering the aneurysm. Local maximum wall shear stress was estimated, reproducing both its position and its value to within 1%. A properly designed intermittent feedback applied only at the time when measurement data were obtained had the same computational accuracy as feedback applied at every computational time step. Hence, this feedback method is a possible solution to overcome the insufficient time resolution of ultrasonic measurement.
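
    The feedback idea can be shown in a one-dimensional toy: a solver with an imperfect model is nudged toward "measured" values at sparse feedback points. The PDE, gain, and measurement model are all assumptions for illustration, not the UMI formulation itself:

    ```python
    import numpy as np

    nx, dt, nu, gain = 50, 1e-3, 0.1, 50.0
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    truth = np.sin(np.pi * x)                  # reference "measured" field
    u = np.zeros(nx)                           # simulated field, wrong initial state
    fb = np.arange(5, 45, 5)                   # sparse feedback point indices

    for _ in range(8000):
        lap = np.zeros(nx)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        # imperfect model: forcing 20% too weak, so the plain solver drifts low
        u += dt * (nu * lap + 0.8 * np.pi**2 * nu * truth)
        u[fb] += dt * gain * (truth[fb] - u[fb])   # measurement-integrated feedback
        u[0] = u[-1] = 0.0
    print("max error with feedback:", np.abs(u - truth).max())  # < 0.2 model error
    ```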

  3. Fourier transform profilometry (FTP) using an innovative band-pass filter for accurate 3-D surface reconstruction

    NASA Astrophysics Data System (ADS)

    Chen, Liang-Chia; Ho, Hsuan-Wei; Nguyen, Xuan-Loc

    2010-02-01

    This article presents a novel band-pass filter for Fourier transform profilometry (FTP) for accurate 3-D surface reconstruction. FTP can be employed to obtain 3-D surface profiles from one-shot images to achieve high-speed measurement. However, its measurement accuracy is significantly influenced by the spectrum filtering process required to extract the phase information representing the various surface heights. With the commonly applied 2-D Hanning filter, measurement errors can be up to 5-10% of the overall measuring height, which is unacceptable for many industrial applications. To resolve this issue, the article proposes an elliptical band-pass filter for extracting the spectral region possessing the essential phase information for reconstructing accurate 3-D surface profiles. The elliptical band-pass filter was developed and optimized to reconstruct 3-D surface models with improved measurement accuracy. Experimental results verify that the accuracy can be effectively enhanced by using the elliptical filter. Accuracy improvements of 44.1% and 30.4% were achieved in 3-D and sphericity measurement, respectively, when the elliptical filter replaced the traditional filter as the band-pass filtering method. Employing the developed method, the maximum measured error can be kept within 3.3% of the overall measuring range.
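
    A one-dimensional sketch of the FTP pipeline, with a Gaussian band-pass standing in for the paper's elliptical 2-D filter; the fringe pattern, carrier frequency, and surface are synthetic:

    ```python
    import numpy as np

    N, f0 = 1024, 64                       # samples, carrier fringes per field
    x = np.arange(N) / N
    height = 0.8 * np.exp(-((x - 0.5) / 0.15) ** 2)      # true phase/height (rad)
    fringes = 0.5 + 0.5 * np.cos(2 * np.pi * f0 * x + height)

    F = np.fft.fft(fringes)
    k = np.fft.fftfreq(N, d=1.0 / N)
    bandpass = np.exp(-0.5 * ((k - f0) / 10.0) ** 2)     # isolate the +f0 sideband
    sideband = np.fft.ifft(F * bandpass)
    phase = np.unwrap(np.angle(sideband)) - 2 * np.pi * f0 * x   # remove carrier
    phase -= phase[0]                                    # remove constant offset
    print("max reconstruction error (rad):", np.abs(phase - height).max())
    ```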

  4. Design and Analysis of the Measurement Characteristics of a Bidirectional-Decoupling Over-Constrained Six-Dimensional Parallel-Mechanism Force Sensor

    PubMed Central

    Zhao, Tieshi; Zhao, Yanzhi; Hu, Qiangqiang; Ding, Shixing

    2017-01-01

    The measurement of large forces and the presence of errors due to dimensional coupling are significant challenges for multi-dimensional force sensors. To address these challenges, this paper proposes an over-constrained six-dimensional force sensor based on a parallel mechanism of steel ball structures as a measurement module. The steel ball structure can be subject to rolling friction instead of sliding friction, thus reducing the influence of friction. However, because the structure can only withstand unidirectional pressure, the application of steel balls in a six-dimensional force sensor is difficult. Accordingly, a new sensor measurement structure was designed in this study. The static equilibrium and displacement compatibility equations of the sensor prototype’s over-constrained structure were established to obtain the transformation function, from which the forces in the measurement branches of the proposed sensor were then analytically derived. The sensor’s measurement characteristics were then analysed through numerical examples. Finally, these measurement characteristics were confirmed through calibration and application experiments. The measurement accuracy of the proposed sensor was determined to be 1.28%, with a maximum coupling error of 1.98%, indicating that the proposed sensor successfully overcomes the issues related to steel ball structures and provides sufficient accuracy. PMID:28867812

  5. Low-cost FM oscillator for capacitance type of blade tip clearance measurement system

    NASA Technical Reports Server (NTRS)

    Barranger, John P.

    1987-01-01

    The frequency-modulated (FM) oscillator described is part of a blade tip clearance measurement system that meets the needs of a wide class of fans, compressors, and turbines. As a result of advancements in the technology of ultra-high-frequency operational amplifiers, the FM oscillator requires only a single low-cost integrated circuit. Its carrier frequency is 42.8 MHz when it is used with an integrated probe and connecting cable assembly consisting of a 0.81 cm diameter engine-mounted capacitance probe and a 61 cm long hermetically sealed coaxial cable. A complete circuit analysis is given, including amplifier negative resistance characteristics. An error analysis of environmentally induced effects is also derived, and an error-correcting technique is proposed. The oscillator can be calibrated in the static mode and has a negative peak frequency deviation of 400 kHz for a rotor blade thickness of 1.2 mm. High-temperature performance tests of the probe and 13 cm of the adjacent cable show good accuracy up to 600 C, the maximum permissible seal temperature. The major source of error is the residual FM oscillator noise, which produces a clearance error of + or - 10 microns at a clearance of 0.5 mm. The oscillator electronics accommodates the high rotor speeds associated with small engines, the signals from which may have frequency components as high as 1 MHz.

  6. Limitations of Airway Dimension Measurement on Images Obtained Using Multi-Detector Row Computed Tomography

    PubMed Central

    Oguma, Tsuyoshi; Hirai, Toyohiro; Niimi, Akio; Matsumoto, Hisako; Muro, Shigeo; Shigematsu, Michio; Nishimura, Takashi; Kubo, Yoshiro; Mishima, Michiaki

    2013-01-01

    Objectives (a) To assess the effects of computed tomography (CT) scanners, scanning conditions, airway size, and phantom composition on airway dimension measurement and (b) to investigate the limitations of accurate quantitative assessment of small airways using CT images. Methods An airway phantom, which was constructed using various types of material and with various tube sizes, was scanned using four CT scanner types under different conditions to calculate airway dimensions, luminal area (Ai), and the wall area percentage (WA%). To investigate the limitations of accurate airway dimension measurement, we then developed a second airway phantom with a thinner tube wall, and compared the clinical CT images of healthy subjects with the phantom images scanned using the same CT scanner. The study using clinical CT images was approved by the local ethics committee, and written informed consent was obtained from all subjects. Data were statistically analyzed using one-way ANOVA. Results Errors noted in airway dimension measurement were greater in the tube of small inner radius made of material with a high CT density and on images reconstructed by body algorithm (p<0.001), and there was some variation in error among CT scanners under different fields of view. Airway wall thickness had the maximum effect on the accuracy of measurements with all CT scanners under all scanning conditions, and the magnitude of errors for WA% and Ai varied depending on wall thickness when airways of <1.0-mm wall thickness were measured. Conclusions The parameters of airway dimensions measured were affected by airway size, reconstruction algorithm, composition of the airway phantom, and CT scanner types. In dimension measurement of small airways with wall thickness of <1.0 mm, the accuracy of measurement according to quantitative CT parameters can decrease as the walls become thinner. PMID:24116105

  7. Constrained Maximum Likelihood Estimation of Relative Abundances of Protein Conformation in a Heterogeneous Mixture from Small Angle X-Ray Scattering Intensity Measurements

    PubMed Central

    Onuk, A. Emre; Akcakaya, Murat; Bardhan, Jaydeep P.; Erdogmus, Deniz; Brooks, Dana H.; Makowski, Lee

    2015-01-01

    In this paper, we describe a model for maximum likelihood estimation (MLE) of the relative abundances of different conformations of a protein in a heterogeneous mixture from small angle X-ray scattering (SAXS) intensities. To consider cases where the solution includes intermediate or unknown conformations, we develop a subset selection method based on k-means clustering and the Cramér-Rao bound on the mixture coefficient estimation error to find a sparse basis set that represents the space spanned by the measured SAXS intensities of the known conformations of a protein. Then, using the selected basis set and the assumptions on the model for the intensity measurements, we show that the MLE model can be expressed as a constrained convex optimization problem. Employing the adenylate kinase (ADK) protein and its known conformations as an example, and using Monte Carlo simulations, we demonstrate the performance of the proposed estimation scheme. Here, although we use 45 crystallographically determined experimental structures and we could generate many more using, for instance, molecular dynamics calculations, the clustering technique indicates that the data cannot support the determination of relative abundances for more than 5 conformations. The estimation of this maximum number of conformations is intrinsic to the methodology we have used here. PMID:26924916
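
    With Gaussian noise, the constrained estimation step reduces to a simplex-constrained least-squares problem; a sketch with synthetic basis intensities standing in for the known conformations' SAXS curves:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(7)
    n_q, n_conf = 200, 5
    B = np.abs(rng.normal(1.0, 0.3, (n_q, n_conf))).cumsum(axis=0)  # toy basis curves
    w_true = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
    I_obs = B @ w_true + rng.normal(0.0, 0.5, n_q)                  # "measured" intensity

    # MLE under Gaussian noise: least squares subject to w >= 0, sum(w) = 1
    res = minimize(lambda w: np.sum((I_obs - B @ w) ** 2),
                   x0=np.full(n_conf, 1.0 / n_conf),
                   bounds=[(0.0, 1.0)] * n_conf,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
                   method="SLSQP")
    print("estimated relative abundances:", np.round(res.x, 3))
    ```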

  8. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
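
    The weighting step can be sketched as follows: evaluate each model's negative log likelihood with a full (correlated) total-error covariance and form information-criterion weights. Residuals, covariance, and parameter counts below are synthetic stand-ins:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n = 50
    t = np.arange(n)
    C = 0.8 ** np.abs(t[:, None] - t[None, :])   # AR(1)-like total-error covariance

    def neg2lnL(r, C):
        # Gaussian -2 ln L with correlated errors: ln|C| + r' C^-1 r + n ln(2 pi)
        _, logdet = np.linalg.slogdet(C)
        return logdet + r @ np.linalg.solve(C, r) + n * np.log(2 * np.pi)

    models = {"M1": (rng.multivariate_normal(np.zeros(n), C), 3),
              "M2": (rng.multivariate_normal(np.zeros(n), 1.3 * C), 5)}
    aic = {k: neg2lnL(r, C) + 2 * p for k, (r, p) in models.items()}
    delta = {k: v - min(aic.values()) for k, v in aic.items()}
    w = {k: np.exp(-0.5 * v) for k, v in delta.items()}
    s = sum(w.values())
    print({k: round(v / s, 3) for k, v in w.items()})   # averaging weights
    ```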

  9. The application of micro-vacuo-certo-contacting ophthalmophanto in X-ray radiosurgery for tumors in an eyeball.

    PubMed

    Li, Shuying; Wang, Yunyan; Hu, Likuan; Liang, Yingchun; Cai, Jing

    2014-11-01

    The large errors of routine localization for eyeball tumors have restricted the application of X-ray radiosurgery, simply because the eyeball can rotate. To localize the target site accurately, the micro-vacuo-certo-contacting ophthalmophanto (MVCCOP) method was used, and the outcome of patients with tumors in the eyeball was evaluated. In this study, computed tomography (CT) localization accuracy was measured by repeated CT scans using MVCCOP to fix the eyeball in radiosurgery. This study evaluated the outcome of the tumors and the survival of the patients by follow-up. The results indicated that the accuracy of CT localization with the Brown-Roberts-Wells (BRW) head ring was 0.65 mm, with a maximum error of 1.09 mm. The accuracy of target localization of tumors in the eyeball using MVCCOP was 0.87 mm on average, with a maximum error of 1.19 mm. The errors of fixation of the eyeball were 0.84 mm on average and 1.17 mm at maximum. The total accuracy was 1.34 mm, and the 95% confidence accuracy was 2.09 mm. The clinical application of this method in 14 tumor patients showed satisfactory results, and all of the tumors showed clear rims. The size of ten retinoblastomas decreased significantly. The local control interval of the tumors was 6-24 months, with a median of 10.5 months. The survival of ten patients was 7-30 months, with a median of 16.5 months. The tumors remained stable or shrank in the other four patients with angioma and melanoma. In conclusion, MVCCOP is suitable and dependable for X-ray radiosurgery of eyeball tumors. The tumor control and survival of patients are satisfactory, and this method can effectively postpone or avoid extirpation of the eyeball.

  10. An undulator based soft x-ray source for microscopy on the Duke electron storage ring

    NASA Astrophysics Data System (ADS)

    Johnson, Lewis Elgin

    1998-09-01

    This dissertation describes the design, development, and installation of an undulator-based soft x-ray source on the Duke Free Electron Laser laboratory electron storage ring. Insertion device and soft x-ray beamline physics and technology are all discussed in detail. The Duke/NIST undulator is a 3.64-m long hybrid design constructed by the Brobeck Division of Maxwell Laboratories. Originally built for an FEL project at the National Institute of Standards and Technology, the undulator was acquired by Duke in 1992 for use as a soft x-ray source for the FEL laboratory. Initial Hall probe measurements on the magnetic field distribution of the undulator revealed field errors of more than 0.80%. Initial phase errors for the device were more than 11 degrees. Through a series of in situ and off-line measurements and modifications we have re-tuned the magnetic field structure of the device to produce strong spectral characteristics through the 5th harmonic. A low operating K has served to reduce the effects of magnetic field errors on the harmonic spectral content. Although rms field errors remained at 0.75%, we succeeded in reducing phase errors to less than 5 degrees. Using trajectory simulations from magnetic field data, we have computed the spectral output given the interaction of the Duke storage ring electron beam and the NIST undulator. Driven by a series of concerns and constraints over maximum utility, personnel safety and funding, we have also constructed a unique front end beamline for the undulator. The front end has been designed for maximum throughput of the 1st harmonic around 40 Å in its standard mode of operation. The front end has an alternative mode of operation which transmits the 3rd and 5th harmonics. This compact system also allows for the extraction of some of the bend magnet produced synchrotron and transition radiation from the storage ring. As with any well designed front end system, it also provides excellent protection to personnel and to the storage ring. A diagnostic beamline consisting of a transmission grating spectrometer and scanning wire beam profile monitor was constructed to measure the spatial and spectral characteristics of the undulator radiation. Tests of the system with a circulating electron beam have confirmed the magnetic and focusing properties of the undulator, and verified that it can be used without perturbing the orbit of the beam.

  11. The development of alignment turning system for precision lens cells

    NASA Astrophysics Data System (ADS)

    Huang, Chien-Yao; Ho, Cheng-Fang; Wang, Jung-Hsing; Chung, Chien-Kai; Chen, Jun-Cheng; Chang, Keng-Shou; Kuo, Ching-Hsiang; Hsu, Wei-Yao; Chen, Fong-Zhi

    2017-08-01

    In general, drop-in and cell-mounted assembly are used for standard and high performance optical systems, respectively. The optical performance is limited by the residual centration error and position accuracy of the conventional assembly. Recently, poker chip assembly with high precision lens barrels, which can overcome the limitations of conventional assembly, has been widely applied to ultra-high performance optical systems. ITRC has also developed a poker chip assembly solution for high numerical aperture objective lenses and lithography projection lenses. In order to achieve high precision lens cells for poker chip assembly, an alignment turning system (ATS) was developed. The ATS includes measurement, alignment and turning modules. The measurement module, comprising a non-contact displacement sensor and an autocollimator, can measure centration errors of the top and the bottom surfaces of a lens, respectively. The alignment module, comprising tilt and translation stages, can align the optical axis of the lens to the rotating axis of the vertical lathe. The key specifications of the ATS are a maximum lens diameter of 400 mm and radial and axial runout of the rotary table < 2 μm. The cutting performance of the ATS is surface roughness Ra < 1 μm, flatness < 2 μm, and parallelism < 5 μm. After the measurement, alignment and turning processes on the ATS, the centration error of a lens cell 200 mm in diameter can be controlled within 10 arcsec. This paper also presents the thermal expansion of the hydrostatic rotating table. A poker chip assembly lens cell with three sub-cells was accomplished with an average transmission centration error of 12.45 arcsec by newly trained technicians. The results show that the ATS can achieve high assembly efficiency for precision optical systems.

  12. Pennation angle dependency in skeletal muscle tissue doppler strain in dynamic contractions.

    PubMed

    Lindberg, Frida; Öhberg, Fredrik; Granåsen, Gabriel; Brodin, Lars-Åke; Grönlund, Christer

    2011-07-01

    Tissue velocity imaging (TVI) is a Doppler-based ultrasound technique that can be used to study regional deformation in skeletal muscle tissue. The aim of this study was to develop a biomechanical model describing the dependency of TVI strain on the pennation angle, and to demonstrate its impact in terms of the resulting strain measurement error, using dynamic elbow contractions from the medial and the lateral parts of biceps brachii at two different loadings: 5% and 25% of maximum voluntary contraction (MVC). The estimated pennation angles were on average about 4° in the extended position and increased to a maximum of 13° in the flexed elbow position. The corresponding relative angular error ranged from around 7% up to around 40%. To apply TVI accurately to skeletal muscles, the error due to angle changes should be compensated for; as a suggestion, this could be done according to the presented model. Copyright © 2011 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  13. Development of a bio-magnetic measurement system and sensor configuration analysis for rats

    NASA Astrophysics Data System (ADS)

    Kim, Ji-Eun; Kim, In-Seon; Kim, Kiwoong; Lim, Sanghyun; Kwon, Hyukchan; Kang, Chan Seok; Ahn, San; Yu, Kwon Kyu; Lee, Yong-Ho

    2017-04-01

    Magnetoencephalography (MEG) based on superconducting quantum interference devices enables the measurement of very weak magnetic fields (10-1000 fT) generated from the human or animal brain. In this article, we introduce a small MEG system that we developed specifically for use with rats. Our system has the following characteristics: (1) variable distance between the pick-up coil and outer Dewar bottom (~5 mm), (2) small pick-up coil (4 mm) for high spatial resolution, (3) good field sensitivity (45-80 fT/cm/√Hz), (4) a sensor interval that satisfies the Nyquist spatial sampling theorem, and (5) small source localization error for the region to be investigated. To reduce source localization error, it is necessary to establish an optimal sensor layout. To this end, we simulated confidence volumes at each point on a grid on the surface of a virtual rat head. In this simulation, we used locally fitted spheres as model rat heads. This enabled us to consider more realistic volume currents. We constrained the model such that the dipoles could have only four possible orientations: the x- and y-axes from the original coordinates, and two tangentially layered dipoles (local x- and y-axes) in the locally fitted spheres. We considered the confidence volumes according to the sensor layout and dipole orientation and positions. We then conducted a preliminary test with a 4-channel MEG system prior to manufacturing the multi-channel system. Using the 4-channel MEG system, we measured rat magnetocardiograms. We obtained well defined P-, QRS-, and T-waves in rats with a maximum value of 15 pT/cm. Finally, we measured auditory evoked fields and steady state auditory evoked fields with maximum values of 400 fT/cm and 250 fT/cm, respectively.

  14. Ciliary Muscle Thickness in Anisometropia

    PubMed Central

    Kuchem, Mallory K; Sinnott, Loraine T; Kao, Chiu-Yen; Bailey, Melissa D

    2014-01-01

    Purpose The purpose of this study was to investigate the relationships between ciliary muscle thickness (CMT), refractive error, and axial length both across subjects and between the more and less myopic eyes of adults with anisometropia. Methods Both eyes of 29 adult subjects with at least 1.00 D of anisometropia were measured. Ciliary muscle thickness was measured at the maximum thickness (CMTMAX) and at 1.0 mm (CMT1), 2.0 mm (CMT2), and 3.0 mm (CMT3) posterior to the scleral spur, and also at the apical region (Apical CMTMAX = CMTMAX – CMT2, and Apical CMT1 = CMT1 – CMT2). Multilevel regression models were used to determine the relationship between the various CMT measures and cycloplegic refractive error or axial length, and to assess whether there are CMT differences between the more and less myopic eyes of an anisometropic adult. Results CMTMAX, CMT1, CMT2 and CMT3 were negatively associated with mean refractive error (all p ≤ 0.03), and the strongest association was in the posterior region (CMT2 and CMT3). Apical CMTMAX and Apical CMT1, however, were positively associated with mean refractive error (both p < 0.0001) across subjects. Within a subject, i.e., comparing the two anisometropic eyes, there was no statistically significant difference in CMT in any region. Conclusions Similar to previous studies, across anisometropic subjects, a thicker posterior region of the ciliary muscle (CMT2 and CMT3) was associated with increased myopic refractive error. Conversely, shorter, more hyperopic eyes tended to have thicker anterior, apical fiber portions of their ciliary muscle (Apical CMTMAX and Apical CMT1). There was no difference between the two eyes for any CMT measurement, indicating that in anisometropia, an eye can grow longer and more myopic than its fellow eye without resulting in an increase in CMT. PMID:24100479

  15. Evaluation of measurement errors of temperature and relative humidity from HOBO data logger under different conditions of exposure to solar radiation.

    PubMed

    da Cunha, Antonio Ribeiro

    2015-05-01

    This study aimed to assess measurements of temperature and relative humidity obtained with a HOBO data logger under various conditions of exposure to solar radiation, comparing them with those obtained using a temperature/relative humidity probe and a copper-constantan thermocouple psychrometer, which are considered the standards for such measurements. Data were collected over a 6-day period (from 25 March to 1 April, 2010), during which the equipment was monitored continuously and simultaneously. We employed the following combinations of equipment and conditions: a HOBO data logger in full sunlight; a HOBO data logger shielded within a white plastic cup with windows for air circulation; a HOBO data logger shielded within a gill-type shelter (multi-plate prototype plastic); a copper-constantan thermocouple psychrometer exposed to natural ventilation and protected from sunlight; and a temperature/relative humidity probe under a commercial, multi-plate radiation shield. Comparisons between the measurements obtained with the various devices were made on the basis of statistical indicators: linear regression, with coefficient of determination; index of agreement; maximum absolute error; and mean absolute error. The prototype multi-plate shelter (gill-type) used to protect the HOBO data logger was found to provide the best protection against the effects of solar radiation on measurements of temperature and relative humidity. The precision and accuracy of a device that measures temperature and relative humidity depend on an efficient shelter that minimizes the interference caused by solar radiation, thereby avoiding erroneous analysis of the data obtained.

  16. Impact of Scanning Density on Measurements from Spectral Domain Optical Coherence Tomography

    PubMed Central

    Keane, Pearse A.; Ouyang, Yanling; Updike, Jared F.; Walsh, Alexander C.

    2010-01-01

    Purpose. To investigate the relationship between B-scan density and retinal thickness measurements obtained by spectral domain optical coherence tomography (SDOCT) in eyes with retinal disease. Methods. Data were collected from 115 patients who underwent volume OCT imaging with Cirrus HD-OCT using the 512 × 128 horizontal raster protocol. Raw OCT data, including the location of the automated retinal boundaries, were exported from the Cirrus HD-OCT instrument and imported into the Doheny Image Reading Center (DIRC) OCT viewing and grading software, termed “3D-OCTOR.” For each case, retinal thickness maps similar to those produced by Cirrus HD-OCT were generated using all 128 B-scans, as well as using less dense subsets of scans, ranging from every other scan to every 16th scan. Retinal thickness measurements derived using only a subset of scans were compared to measurements using all 128 B-scans, and differences for the foveal central subfield (FCS) and total macular volume were computed. Results. The mean error in FCS retinal thickness measurement increased as the density of B-scans decreased, but the error was small (<2 μm), except at the sparsest densities evaluated. The maximum error at a density of every fourth scan (32 scans spaced 188 μm apart) was <1%. Conclusions. B-scan density in volume SDOCT acquisitions can be reduced to 32 horizontal B-scans (spaced 188 μm apart) with minimal change in calculated retinal thickness measurements. This information may be of value in design of scanning protocols for SDOCT for use in future clinical trials. PMID:19797199
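
    A minimal sketch of the subsampling comparison described above, assuming a hypothetical (128, 512) thickness map: keep every k-th B-scan, reconstruct the dropped scans by linear interpolation, and compare the mean thickness against the full-density value. For brevity the whole-map mean stands in for the foveal central subfield computed by the study's 3D-OCTOR software.

    ```python
    import numpy as np

    def subsampled_error(thickness, k):
        """Absolute error in mean thickness when keeping every k-th B-scan."""
        n_scans = thickness.shape[0]
        kept = np.arange(0, n_scans, k)
        full_idx = np.arange(n_scans)
        interp = np.empty_like(thickness)
        for col in range(thickness.shape[1]):   # interpolate dropped B-scans
            interp[:, col] = np.interp(full_idx, kept, thickness[kept, col])
        return abs(interp.mean() - thickness.mean())

    rng = np.random.default_rng(0)
    demo = 250 + 30 * rng.random((128, 512))    # synthetic map, microns
    for k in (2, 4, 8, 16):                     # 64, 32, 16, 8 B-scans kept
        print(k, subsampled_error(demo, k))
    ```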

  17. Feasibility study of direct spectra measurements for Thomson scattered signals for KSTAR fusion-grade plasmas

    NASA Astrophysics Data System (ADS)

    Park, K.-R.; Kim, K.-h.; Kwak, S.; Svensson, J.; Lee, J.; Ghim, Y.-c.

    2017-11-01

    Feasibility study of direct spectra measurements of Thomson scattered photons for fusion-grade plasmas is performed based on a forward model of the KSTAR Thomson scattering system. Expected spectra in the forward model are calculated based on Selden function including the relativistic polarization correction. Noise in the signal is modeled with photon noise and Gaussian electrical noise. Electron temperature and density are inferred using Bayesian probability theory. Based on bias error, full width at half maximum and entropy of posterior distributions, spectral measurements are found to be feasible. Comparisons between spectrometer-based and polychromator-based Thomson scattering systems are performed with varying quantum efficiency and electrical noise levels.

  18. Impact of rock mass temperature on potential power and electricity generation in the ORC installation

    NASA Astrophysics Data System (ADS)

    Kaczmarczyk, Michał

    2017-11-01

    The basic sources of information for determining the temperature distribution in the rock mass, and thus the potential of the thermal energy contained in geothermal water for conversion to electricity, are: temperature measurements under stable geothermic conditions, temperature measurements under unstable conditions, and measurements of maximum temperatures at the bottom of the well. Incorrect temperature estimation can lead to errors in the calculation of thermodynamic parameters and, consequently, in the economic viability of the project. The analysis was performed for a geothermal water temperature range of 86-100°C, for the dry working fluid R245fa. The calculated data indicate an increase in geothermal power as the geothermal water temperature increases: at 86°C the potential power is 817.48 kW, increasing to 912.20 kW at 88°C and to 1493.34 kW at 100°C. These results are not surprising, but they show the scale of error in assessing the potential that can result from improper interpretation of the rock mass and geothermal water temperatures.

  19. Development of Bio-impedance Analyzer (BIA) for Body Fat Calculation

    NASA Astrophysics Data System (ADS)

    Riyadi, Munawar A.; Nugraha, A.; Santoso, M. B.; Septaditya, D.; Prakoso, T.

    2017-04-01

    Common weight scales cannot assess body composition or determine the fat mass and fat-free mass that make up body weight. This research proposes a bio-impedance analysis (BIA) tool capable of body composition assessment. The tool uses four electrodes: two are used to pass a 50 kHz sine-wave current through the body, and the other two measure the voltage produced by the body for impedance analysis. Parameters such as height, weight, age, and gender are provided individually. These parameters, together with the impedance measurements, are then processed to produce a body fat percentage. The experimental results show impressive repeatability for successive measurements (stdev ≤ 0.25% fat mass). Moreover, results for the hand-to-hand electrode scheme reveal an average absolute difference between the two analyzer tools of 0.48% (fat mass) across all subjects, with a maximum absolute discrepancy of 1.22% (fat mass). The relative error normalized to the Omron HBF-306 comparison tool is less than 2%. Overall, the system offers a good evaluation tool for body fat mass.
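
    The abstract does not give the regression used, so the sketch below only illustrates the common structure of BIA body-fat calculations: a fat-free-mass regression on height²/impedance plus anthropometric terms, with the fat percentage derived from body weight. Every coefficient here is a hypothetical placeholder, not the paper's model.

    ```python
    def body_fat_percent(height_cm, weight_kg, age_yr, is_male, impedance_ohm,
                         a=0.7, b=0.2, c=-0.07, d=4.0, e=2.0):
        """Illustrative BIA-style estimate: FFM = a*h^2/Z + b*m + c*age
        + d*sex + e (all coefficients are made-up placeholders)."""
        ffm = (a * height_cm**2 / impedance_ohm + b * weight_kg
               + c * age_yr + d * (1 if is_male else 0) + e)
        return 100.0 * (weight_kg - ffm) / weight_kg

    # Impedance Z = V/I comes from the voltage sensed by the second electrode pair.
    print(round(body_fat_percent(175, 70, 30, True, 500), 1))
    ```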

  20. A Gompertzian model with random effects to cervical cancer growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazlan, Mazma Syahidatul Ayuni; Rosli, Norhayati

    2015-05-15

    In this paper, a Gompertzian model with random effects is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via maximum likelihood estimation. We apply a 4-stage stochastic Runge-Kutta (SRK4) scheme to solve the stochastic model numerically. The efficiency of the mathematical model is measured by comparing the simulated results with the clinical data of cervical cancer growth. Low values of the root mean-square error (RMSE) of the Gompertzian model with random effects indicate a good fit.
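
    The abstract does not spell out the model equations; the sketch below simulates one common form of the stochastic Gompertz model with a plain Euler-Maruyama scheme (the paper used the 4-stage stochastic Runge-Kutta, SRK4) and scores the fit by RMSE against mock observations. All parameter values are illustrative.

    ```python
    import numpy as np

    def gompertz_em(x0, alpha, beta, sigma, t_end, n_steps, seed=0):
        """Euler-Maruyama path of dX = X*(beta - alpha*ln X)*dt + sigma*X*dW."""
        rng = np.random.default_rng(seed)
        dt = t_end / n_steps
        x = np.empty(n_steps + 1)
        x[0] = x0
        for i in range(n_steps):
            drift = x[i] * (beta - alpha * np.log(x[i]))
            x[i + 1] = x[i] + drift * dt + sigma * x[i] * rng.normal(0, np.sqrt(dt))
        return x

    path = gompertz_em(0.5, alpha=0.3, beta=0.6, sigma=0.05, t_end=30, n_steps=3000)
    rng = np.random.default_rng(1)
    obs = path[::300] * (1 + 0.05 * rng.normal(size=path[::300].size))  # mock data
    rmse = np.sqrt(np.mean((path[::300] - obs) ** 2))  # fit measure, as in the paper
    print(f"RMSE = {rmse:.4f}")
    ```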

  1. Opto-Mechanical Design of a Chromotomographic Imager Direct-Vision Prism Element

    DTIC Science & Technology

    2013-03-01

    The paramount conclusion to be made from these relationships is that the angular dispersion must be known for all wavelengths of interest in order to...respect to the range of angular spread of approximately 4° seen in Figure 3.4, the angular error in the measurement is as much as 2.4 minutes of arc...angle is the maximum angular difference between the surface normal, N̂, and the incident ray direction vector, î, for which refraction occurs across a

  2. Transmuted of Rayleigh Distribution with Estimation and Application on Noise Signal

    NASA Astrophysics Data System (ADS)

    Ahmed, Suhad; Qasim, Zainab

    2018-05-01

    This paper deals with transforming the one-parameter Rayleigh distribution into a transmuted probability distribution by introducing a new parameter (λ), since the resulting distribution is useful for representing signal data and failure data models. The transmuted parameter, with |λ| ≤ 1, is estimated together with the original parameter (θ) by the methods of moments and maximum likelihood, using different sample sizes (n = 25, 50, 75, 100), and the estimation results are compared by a statistical measure (mean square error, MSE).
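
    For reference, the transmuted family is defined by G(x) = (1 + λ)F(x) − λF(x)², |λ| ≤ 1, so with a Rayleigh baseline F(x) = 1 − exp(−x²/2θ²) the density is g(x) = f(x)[1 + λ − 2λF(x)]. The sketch below fits θ and λ by maximum likelihood on synthetic stand-in data.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_loglik(params, x):
        """Negative log-likelihood of the transmuted Rayleigh distribution."""
        theta, lam = params
        F = 1.0 - np.exp(-x**2 / (2.0 * theta**2))
        f = (x / theta**2) * np.exp(-x**2 / (2.0 * theta**2))
        return -np.sum(np.log(f * (1.0 + lam - 2.0 * lam * F)))

    rng = np.random.default_rng(0)
    sample = rng.rayleigh(scale=2.0, size=100)   # stand-in for signal data
    fit = minimize(neg_loglik, x0=[1.0, 0.0], args=(sample,),
                   bounds=[(1e-6, None), (-1.0, 1.0)])
    theta_hat, lam_hat = fit.x
    print(theta_hat, lam_hat)   # compare estimators via MSE over replications
    ```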

  3. Estimation of Power Consumption in the Circular Sawing of Stone Based on Tangential Force Distribution

    NASA Astrophysics Data System (ADS)

    Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng

    2018-04-01

    Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model (PFD) of sawing power, which is based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. With regard to the influence of sawing speed on tangential force distribution, the modified PFD (MPFD) performed with high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power by the MPFD with few initial experimental samples was proved in case studies. On the premise of high sample measurement accuracy, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was validated. The case study shows that energy use was reduced 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy.

  4. Maximum Likelihood Estimation of Spectra Information from Multiple Independent Astrophysics Data Sets

    NASA Technical Reports Server (NTRS)

    Howell, Leonard W., Jr.; Six, N. Frank (Technical Monitor)

    2002-01-01

    The Maximum Likelihood (ML) statistical theory required to estimate spectra information from an arbitrary number of astrophysics data sets produced by vastly different science instruments is developed in this paper. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral information based on the combination of data sets. The procedure is of significant value to both existing data sets and those to be produced by future astrophysics missions consisting of two or more detectors by allowing instrument developers to optimize each detector's design parameters through simulation studies in order to design and build complementary detectors that will maximize the precision with which the science objectives may be obtained. The benefits of this ML theory and its application are measured in terms of the reduction of the statistical errors (standard deviations) of the spectra information when the multiple data sets are used in concert, as compared to the statistical errors of the spectra information when the data sets are considered separately, as well as any biases resulting from poor statistics in one or more of the individual data sets that might be reduced when the data sets are combined.
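
    A toy illustration of the central idea, under assumed instrument configurations: one set of spectral parameters is constrained jointly by several data sets through a summed negative log-likelihood, tightening the estimate relative to any single data set. The power-law form, energies and noise levels are all assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(42)
    A_true, g_true = 1.0, 2.0                    # power law F(E) = A * E**-g
    E1, s1 = np.logspace(0.0, 1.0, 10), 0.05     # instrument 1: energies, noise
    E2, s2 = np.logspace(0.5, 1.5, 8), 0.10      # instrument 2
    y1 = A_true * E1**-g_true * (1 + s1 * rng.normal(size=E1.size))
    y2 = A_true * E2**-g_true * (1 + s2 * rng.normal(size=E2.size))

    def nll(params, datasets):
        """Joint Gaussian negative log-likelihood over all data sets."""
        A, g = params
        return sum(0.5 * np.sum(((y - A * E**-g) / (s * A * E**-g)) ** 2)
                   for E, y, s in datasets)

    joint = minimize(nll, [0.5, 1.5], args=([(E1, y1, s1), (E2, y2, s2)],),
                     method="Nelder-Mead")
    solo = minimize(nll, [0.5, 1.5], args=([(E1, y1, s1)],), method="Nelder-Mead")
    print("joint:", joint.x, "single:", solo.x)   # joint fit is better constrained
    ```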

   5. Spirality: A Novel Way to Measure Spiral Arm Pitch Angle

    NASA Astrophysics Data System (ADS)

    Shields, Douglas W.; Boe, Benjamin; Henderson, Casey L.; Hartley, Matthew; Davis, Benjamin L.; Pour Imani, Hamed; Kennefick, Daniel; Kennefick, Julia D.

    2015-01-01

    We present the MATLAB code Spirality, a novel method for measuring spiral arm pitch angles by fitting galaxy images to spiral templates of known pitch. For a given pitch angle template, the mean pixel value is found along each of typically 1000 spiral axes. The fitting function, which shows a local maximum at the best-fit pitch angle, is the variance of these means. Error bars are found by varying the inner radius of the measurement annulus and finding the standard deviation of the best-fit pitches. Computation time is typically on the order of 2 minutes per galaxy, assuming at least 8 GB of working memory. We tested the code using 128 synthetic spiral images of known pitch. These spirals varied in the number of spiral arms, pitch angle, degree of logarithmicity, radius, SNR, inclination angle, bar length, and bulge radius. A correct result is defined as a result that matches the true pitch within the error bars, with error bars no greater than ±7°. For the non-logarithmic spiral sample, the correct answer is similarly defined, with the mean pitch as a function of radius in place of the true pitch. For all synthetic spirals, correct results were obtained so long as SNR > 0.25, the bar length was no more than 60% of the spiral's diameter (when the bar was included in the measurement), the input center of the spiral was no more than 6% of the spiral radius away from the true center, and the inclination angle was no more than 30°. The synthetic spirals were not deprojected prior to measurement. The code produced the correct result for all barred spirals when the measurement annulus was placed outside the bar. Additionally, we compared the code's results against 2DFFT results for 203 visually selected spiral galaxies in GOODS North and South; among the entire sample, Spirality's error bars overlapped 2DFFT's error bars 64% of the time. Source code is available by email request from the primary author.
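
    Spirality itself is MATLAB code; the Python sketch below reproduces only the core of the fitting function described above: for each candidate pitch, average the pixel values along many logarithmic spiral axes and score the template by the variance of those means. The function name and defaults are illustrative, not taken from the code.

    ```python
    import numpy as np

    def pitch_fit(img, pitches_deg, n_axes=360, n_r=200, r_in=5.0):
        """Variance-of-means fitting function; maximal at the best-fit pitch."""
        cy, cx = np.array(img.shape) / 2.0
        r = np.linspace(r_in, min(cy, cx) - 1, n_r)
        scores = []
        for p in np.deg2rad(pitches_deg):
            theta0 = np.linspace(0, 2 * np.pi, n_axes, endpoint=False)
            # logarithmic spiral: theta = theta0 + ln(r/r_in) / tan(pitch)
            theta = theta0[:, None] + np.log(r / r_in)[None, :] / np.tan(p)
            ys = np.clip((cy + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
            xs = np.clip((cx + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
            means = img[ys, xs].mean(axis=1)     # mean along each spiral axis
            scores.append(means.var())
        return np.asarray(scores)

    # Usage: scores = pitch_fit(galaxy_img, np.arange(5, 41)); the best-fit
    # pitch is pitches[np.argmax(scores)]; varying r_in yields error bars.
    ```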

  6. Student Errors in Fractions and Possible Causes of These Errors

    ERIC Educational Resources Information Center

    Aksoy, Nuri Can; Yazlik, Derya Ozlem

    2017-01-01

    In this study, it was aimed to determine the errors and misunderstandings of 5th and 6th grade middle school students in fractions and operations with fractions. For this purpose, the case study model, which is a qualitative research design, was used in the research. In the study, maximum diversity sampling, which is a purposeful sampling method,…

  7. Thermal Energy Storage and Heat Transfer Support Program. Task 4. Thermionic Energy Conversion Studies. Volume 2

    DTIC Science & Technology

    1991-03-01

    Excerpts from the report's list of figures and calibration results: Target Temperature as a Function of the Pyrometer Temperature; Emitter Temperature as a Function of the Diode Target Temperature; Experimental Calibration Data and Polynomial Fit for ASTAR-811C Diode. Calibration polynomials P(V) with maximum errors: ... + 12.2152(V) - 0.0099 (Eq. 5.2), maximum error 0.0093%; (c) TR = 420 K: P = 4.5541(V)^3 - 23.5818(V)^2 + 18.1602(V) + 0.002 (Eq. 5.3), maximum error 1.632%; (d) TR = 450 K: ...

  8. Structure design and characteristic analysis of micro-nano probe based on six dimensional micro-force measuring principle

    NASA Astrophysics Data System (ADS)

    Yang, Hong-tao; Cai, Chun-mei; Fang, Chuan-zhi; Wu, Tian-feng

    2013-10-01

    In order to develop a micro-nano probe with an error self-correcting function and a rigid structure, a new micro-nano probe system was developed based on a six-dimensional micro-force measuring principle. The structure and working principle of the probe are introduced in detail. A static nonlinear decoupling method was established with a BP neural network to perform static decoupling of the dimensional coupling present in the force measurements in each direction. The optimal parameters of the BP neural network were selected and decoupling simulation experiments were performed. The maximum probe coupling rate after decoupling is 0.039% in the X direction, 0.025% in the Y direction and 0.027% in the Z direction. The static measurement sensitivity of the probe reaches 10.76 με/mN in the Z direction and 14.55 με/mN in the X and Y directions. Modal analysis and harmonic response analysis of the probe under three-dimensional harmonic load were carried out using the finite element method. The natural frequencies under different vibration modes were obtained and the working frequency of the probe was determined to be higher than 10000 Hz. Transient response analysis of the probe indicates that its response time can reach 0.4 ms. These results show that the developed micro-nano probe meets the triggering requirements of a micro-nano probe. Three-dimensional measuring forces can be measured precisely by the developed probe, which can be used to predict and correct the force deformation error and the touch error of the measuring ball and the measuring rod.
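
    The decoupling idea above can be illustrated with any backpropagation-trained network that learns the inverse map from coupled readings to axis forces. The sketch below uses scikit-learn's MLPRegressor as a stand-in for the paper's BP network; the coupling matrix and noise level are hypothetical.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    F_true = rng.uniform(-1.0, 1.0, size=(2000, 3))   # applied X/Y/Z forces
    C = np.array([[1.00, 0.04, 0.03],                 # hypothetical cross-axis
                  [0.05, 1.00, 0.02],                 # coupling matrix
                  [0.03, 0.06, 1.00]])
    readings = F_true @ C.T + 0.002 * rng.normal(size=F_true.shape)

    net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
    net.fit(readings, F_true)                         # learn the inverse map
    resid = np.abs(net.predict(readings) - F_true)
    print(resid.max(axis=0) / np.abs(F_true).max(axis=0))  # residual coupling
    ```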

  9. Utilization of advanced calibration techniques in stochastic rock fall analysis of quarry slopes

    NASA Astrophysics Data System (ADS)

    Preh, Alexander; Ahmadabadi, Morteza; Kolenprat, Bernd

    2016-04-01

    In order to study rock fall dynamics, a research project was conducted by the Vienna University of Technology and the Austrian Central Labour Inspectorate (Federal Ministry of Labour, Social Affairs and Consumer Protection). Part of this project comprised 277 full-scale drop tests at three different quarries in Austria, with key parameters of the rock fall trajectories recorded. The tested boulders ranged from 0.18 to 1.8 m in diameter and from 0.009 to 8.1 Mg in mass. The geology of these sites included strong rock belonging to igneous, metamorphic and volcanic types. In this paper the results of the tests are used for calibration and validation of a new stochastic computer model. It is demonstrated that the error of the model (i.e. the difference between observed and simulated results) has a lognormal distribution. For two selected parameters, advanced calibration techniques, including the Markov chain Monte Carlo technique, maximum likelihood and root mean square error (RMSE), are utilized to minimize the error. Validation of the model based on the cross-validation technique reveals that, in general, reasonable stochastic approximations of the rock fall trajectories are obtained in all dimensions, including runout, bounce heights and velocities. The approximations are compared to the measured data in terms of median, 95% and maximum values. The results of the comparisons indicate that approximate first-order predictions, using a single set of input parameters, are possible and can be used to aid practical hazard and risk assessment.
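
    A minimal sketch of the error-model step described above: fit a lognormal distribution to the differences between observed and simulated trajectory parameters and report the RMSE. The runout values are mock numbers for illustration only.

    ```python
    import numpy as np
    from scipy import stats

    observed = np.array([42.0, 55.0, 61.0, 48.0, 70.0, 52.0])   # mock runouts, m
    simulated = np.array([40.0, 50.0, 66.0, 45.0, 75.0, 49.0])
    errors = np.abs(observed - simulated)

    # Lognormal error model (the study found model error to be lognormal).
    shape, loc, scale = stats.lognorm.fit(errors, floc=0.0)
    rmse = np.sqrt(np.mean((observed - simulated) ** 2))
    print("lognormal sigma:", shape, "median error:", scale, "RMSE:", rmse)
    ```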

  10. Geomagnetic storm forecasting service StormFocus: 5 years online

    NASA Astrophysics Data System (ADS)

    Podladchikova, Tatiana; Petrukovich, Anatoly; Yermolaev, Yuri

    2018-04-01

    Forecasting geomagnetic storms is highly important for many space weather applications. In this study, we review the performance of the geomagnetic storm forecasting service StormFocus during 2011-2016. The service was implemented in 2011 at SpaceWeather.Ru and predicts the expected strength of geomagnetic storms, as measured by the Dst index, several hours ahead. The forecast is based on L1 solar wind and IMF measurements and is updated every hour. The solar maximum of cycle 24 is weak, so most of the statistics are for rather moderate storms. We verify the quality of the selection criteria, as well as the reliability of real-time input data in comparison with the final values available in archives. In real-time operation, 87% of storms were correctly predicted, while the reanalysis running on final OMNI data successfully predicted 97% of storms. Thus the main reasons for prediction errors are discrepancies between real-time and final data (Dst, solar wind and IMF) due to processing errors and the specifics of the datasets.

  11. RELIABILITY OF THE ONE REPETITION-MAXIMUM POWER CLEAN TEST IN ADOLESCENT ATHLETES

    PubMed Central

    Faigenbaum, Avery D.; McFarland, James E.; Herman, Robert; Naclerio, Fernando; Ratamess, Nicholas A.; Kang, Jie; Myer, Gregory D.

    2013-01-01

    Although the power clean test is routinely used to assess strength and power performance in adult athletes, the reliability of this measure in younger populations has not been examined. Therefore, the purpose of this study was to determine the reliability of the one repetition maximum (1 RM) power clean in adolescent athletes. Thirty-six male athletes (age 15.9 ± 1.1 yrs, body mass 79.1 ± 20.3 kg, height 175.1 ± 7.4 cm) who had more than 1 year of training experience with weightlifting exercises performed a 1 RM power clean on two nonconsecutive days in the afternoon following standardized procedures. All test procedures were supervised by a senior level weightlifting coach and consisted of a systematic progression in test load until the maximum resistance that could be lifted for one repetition using proper exercise technique was determined. Data were analyzed using an intraclass correlation coefficient (ICC(2,k)), Pearson correlation coefficient (r), repeated measures ANOVA, Bland-Altman plots, and typical error analyses. Analysis of the data revealed that the test measures were highly reliable, demonstrating a test-retest ICC of 0.98 (95% CI = 0.96–0.99). Testing also demonstrated a strong relationship between 1 RM measures on trial 1 and trial 2 (r = 0.98, p < 0.0001) with no significant difference in power clean performance between trials (70.6 ± 19.8 vs. 69.8 ± 19.8 kg). Bland-Altman plots confirmed no systematic shift in 1 RM between trial 1 and trial 2. The typical error to be expected between 1 RM power clean trials is 2.9 kg, and a change of at least 8.0 kg is indicated to determine a real change in lifting performance between tests in young lifters. No injuries occurred during the study period and the testing protocol was well-tolerated by all subjects. These findings indicate that 1 RM power clean testing has a high degree of reproducibility in trained male adolescent athletes when standardized testing procedures are followed and qualified instruction is present. PMID:22233786
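
    The reliability statistics above follow directly from the two trials: the typical error is the standard deviation of the inter-trial differences divided by √2, and the threshold for a real change is 1.96·√2·TE, which with the study's TE of 2.9 kg reproduces the stated 8.0 kg. A minimal sketch:

    ```python
    import numpy as np

    def typical_error(trial1, trial2):
        """Typical error: SD of inter-trial differences divided by sqrt(2)."""
        diff = np.asarray(trial1, float) - np.asarray(trial2, float)
        return diff.std(ddof=1) / np.sqrt(2.0)

    def minimal_detectable_change(te, z=1.96):
        """Change needed to infer a real difference: z * sqrt(2) * TE."""
        return z * np.sqrt(2.0) * te

    print(minimal_detectable_change(2.9))   # -> ~8.0 kg, as reported above
    ```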

  12. An interlaboratory comparison of dosimetry for a multi-institutional radiobiological research project: Observations, problems, solutions and lessons learned.

    PubMed

    Seed, Thomas M; Xiao, Shiyun; Manley, Nancy; Nikolich-Zugich, Janko; Pugh, Jason; Van den Brink, Marcel; Hirabayashi, Yoko; Yasutomo, Koji; Iwama, Atsushi; Koyasu, Shigeo; Shterev, Ivo; Sempowski, Gregory; Macchiarini, Francesca; Nakachi, Kei; Kunugi, Keith C; Hammer, Clifford G; Dewerd, Lawrence A

    2016-01-01

    An interlaboratory comparison of radiation dosimetry was conducted to determine the accuracy of doses being used experimentally for animal exposures within a large multi-institutional research project. The background and approach to this effort are described and discussed in terms of basic findings, problems and solutions. Dosimetry tests were carried out utilizing optically stimulated luminescence (OSL) dosimeters embedded midline into mouse carcasses and thermal luminescence dosimeters (TLD) embedded midline into acrylic phantoms. The effort demonstrated that the majority (4/7) of the laboratories was able to deliver sufficiently accurate exposures having maximum dosing errors of ≤5%. Comparable rates of 'dosimetric compliance' were noted between OSL- and TLD-based tests. Data analysis showed a highly linear relationship between 'measured' and 'target' doses, with errors falling largely between 0 and 20%. Outliers were most notable for OSL-based tests, while multiple tests by 'non-compliant' laboratories using orthovoltage X-rays contributed heavily to the wide variation in dosing errors. For the dosimetrically non-compliant laboratories, the relatively high rates of dosing errors were problematic, potentially compromising the quality of ongoing radiobiological research. This dosimetry effort proved to be instructive in establishing rigorous reviews of basic dosimetry protocols ensuring that dosing errors were minimized.

  13. Does the Length of Elbow Flexors and Visual Feedback Have Effect on Accuracy of Isometric Muscle Contraction in Men after Stroke?

    PubMed Central

    Juodzbaliene, Vilma; Darbutas, Tomas; Skurvydas, Albertas

    2016-01-01

    The aim of the study was to determine the effect of different muscle lengths and visual feedback information (VFI) on the accuracy of isometric contraction of the elbow flexors in men after an ischemic stroke (IS). Materials and Methods. Maximum voluntary muscle contraction force (MVMCF) and an accurately determinate muscle force (20% of MVMCF) developed during an isometric contraction of the elbow flexors at 90° and 60° of elbow flexion were measured by an isokinetic dynamometer in healthy subjects (MH, n = 20) and subjects after an IS during their postrehabilitation period (MS, n = 20). Results. In order to evaluate the accuracy of the isometric contraction of the elbow flexors, absolute errors were calculated. The absolute errors provide information about the difference between the determinate and achieved muscle force. Conclusions. Both MH and MS subjects tended to make greater absolute errors when generating the determinate force at the greater elbow flexor length, despite the presence of VFI. Absolute errors also increased in both groups at the greater elbow flexor length without VFI. MS subjects made greater absolute errors generating the determinate force without VFI than MH subjects at the shorter elbow flexor length. PMID:27042670

  14. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at the 95% confidence limits, and that the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. The predicted series is close to the original series, providing a very good fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.
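
    A minimal sketch of the modeling step with statsmodels' ARIMA, using a mock monthly pH series; the ARIMA order and data are illustrative, and the study selected its model by the validation statistics listed above.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    idx = pd.date_range("2005-01-01", periods=96, freq="MS")
    ph = pd.Series(7.5 + 0.2 * np.sin(np.arange(96) * 2 * np.pi / 12)
                   + 0.05 * rng.normal(size=96), index=idx)   # mock monthly pH

    model = ARIMA(ph, order=(1, 0, 1)).fit()     # order chosen for illustration
    fc = model.get_forecast(steps=12)
    print(fc.predicted_mean.head())
    print(fc.conf_int(alpha=0.05).head())        # 95% confidence limits
    print("RMSE:", np.sqrt(np.mean(model.resid ** 2)))
    ```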

  15. A Modified MinMax k-Means Algorithm Based on PSO.

    PubMed

    Wang, Xiaoyan; Bai, Yanping

    The MinMax k-means algorithm is widely used to tackle the effect of bad initialization by minimizing the maximum intra-cluster error. Two parameters, an exponent parameter and a memory parameter, are involved in the executive process. Since different parameter values yield different clustering errors, it is crucial to choose appropriate parameters. The original algorithm provides a practical framework that extends MinMax k-means to automatically adapt the exponent parameter to the data set. It had been believed that once the maximum exponent parameter is set, the programme can reach the lowest intra-clustering errors; however, our experiments show that this is not always the case. In this paper, we modify the MinMax k-means algorithm with PSO to determine the parameter values that enable the algorithm to attain the lowest clustering errors. The proposed clustering method is tested on several popular data sets in different initial situations and is compared to the k-means algorithm and the original MinMax k-means algorithm. The experimental results indicate that our proposed algorithm can reach the lowest clustering errors automatically.
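
    For orientation, a compact sketch of the underlying MinMax k-means iteration: weighted assignments, center updates, and cluster weights that grow with intra-cluster error, mixed through the memory parameter. Here the exponent p and memory beta are fixed by hand; choosing good values automatically (via PSO) is the paper's contribution.

    ```python
    import numpy as np

    def minmax_kmeans(X, k, p=0.5, beta=0.3, iters=50, seed=0):
        """Sketch of MinMax k-means: minimizes the relaxed maximum
        intra-cluster error sum_k w_k**p * V_k."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)].copy()
        w = np.full(k, 1.0 / k)
        for _ in range(iters):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = (w ** p * d2).argmin(axis=1)          # weighted assignment
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
            V = np.array([((X[labels == j] - centers[j]) ** 2).sum()
                          for j in range(k)])              # intra-cluster errors
            w_new = V ** (1.0 / (1.0 - p))
            w = beta * w + (1.0 - beta) * w_new / w_new.sum()   # memory update
        return labels, centers, V.max()

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in (0.0, 2.0, 4.0)])
    print(minmax_kmeans(X, k=3)[2])   # maximum intra-cluster error
    ```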

  16. Effect of Missing Inter-Beat Interval Data on Heart Rate Variability Analysis Using Wrist-Worn Wearables.

    PubMed

    Baek, Hyun Jae; Shin, JaeWook

    2017-08-15

    Most of the wrist-worn devices on the market provide a continuous heart rate measurement function using photoplethysmography, but they do not yet provide a function to measure continuous heart rate variability (HRV) from the beat-to-beat pulse interval. The reason is the difficulty of measuring a continuous pulse interval during movement with a wearable device, because photoplethysmography is by nature susceptible to motion noise. This study investigated the effect of missing heart beat interval data on HRV analysis in cases where the pulse interval cannot be measured because of movement noise. First, we performed simulations by randomly removing data from the RR intervals of electrocardiograms measured from 39 subjects and observed the changes in the relative and normalized errors of the HRV parameters according to the total length of the missing heart beat interval data. Second, we measured the pulse interval from 20 subjects using a wrist-worn device for 24 h and observed the error values for the missing pulse interval data caused by movement during actual daily life. The experimental results showed that mean NN and RMSSD were the most robust to missing heart beat interval data among all the parameters in the time and frequency domains. Most of the pulse interval data could not be obtained during daily life; in other words, the sample number was too small for spectral analysis because of the long missing durations. Therefore, the frequency domain parameters often could not be calculated, except during the sleep state with little motion. The errors of the HRV parameters were proportional to the missing data duration. Based on the results of this study, the maximum missing duration yielding acceptable errors for each parameter is recommended for use when HRV analysis is performed on a wrist-worn device.
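
    The two parameters found most robust above are simple functions of the RR series, so the missing-data experiment can be sketched directly: compute mean NN and RMSSD on a full synthetic series, delete a contiguous chunk (which splices one artificial successive difference across the gap), and report relative errors. The series and gap placement are illustrative.

    ```python
    import numpy as np

    def mean_nn(rr):
        """Mean of normal-to-normal intervals (ms)."""
        return np.mean(rr)

    def rmssd(rr):
        """Root mean square of successive differences (ms)."""
        return np.sqrt(np.mean(np.diff(rr) ** 2))

    rng = np.random.default_rng(0)
    rr = 800 + 50 * rng.normal(size=5000)          # synthetic RR series, ms
    for missing in (0.1, 0.3, 0.5):                # fraction of data lost
        n_miss = int(missing * rr.size)
        start = rng.integers(0, rr.size - n_miss)
        kept = np.delete(rr, np.arange(start, start + n_miss))
        print(missing,
              abs(mean_nn(kept) - mean_nn(rr)) / mean_nn(rr),
              abs(rmssd(kept) - rmssd(rr)) / rmssd(rr))
    ```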

  17. The characteristics of dose at mass interface on lung cancer Stereotactic Body Radiotherapy (SBRT) simulation

    NASA Astrophysics Data System (ADS)

    Wulansari, I. H.; Wibowo, W. E.; Pawiro, S. A.

    2017-05-01

    In lung cancer cases, it is difficult for the treatment planning system (TPS) to predict the dose at or near the mass interface. This prediction error might influence the minimum or maximum dose received by the tumor. In addition to target motion, the target dose prediction error also contributes to the combined error during the course of treatment. The objective of this work was to verify the dose plan calculated by the adaptive convolution algorithm in Pinnacle3 at the mass interface against a set of measurements. The measurements were performed using Gafchromic EBT 3 film in static and dynamic CIRS phantoms with amplitudes of 5 mm, 10 mm, and 20 mm in the superior-inferior motion direction. The static and dynamic phantoms were scanned with fast CT and slow CT before planning. The results showed that the adaptive convolution algorithm mostly predicted the mass interface dose lower than the measured dose, in a range of -0.63% to 8.37% for the static phantom with fast CT scanning and -0.27% to 15.9% for the static phantom with slow CT scanning. In the dynamic phantom, the algorithm predicted mass interface doses that deviated from the measured dose by up to -89% for fast CT and from -17% to 37% for slow CT. These interface dose differences caused the mass dose to decrease in fast CT, except for the 10 mm motion amplitude, and to increase in slow CT for the greater amplitudes of motion.

  18. Effect of simulated intraoral variables on the accuracy of a photogrammetric imaging technique for complete-arch implant prostheses.

    PubMed

    Bratos, Manuel; Bergin, Jumping M; Rubenstein, Jeffrey E; Sorensen, John A

    2018-03-17

    Conventional impression techniques to obtain a definitive cast for a complete-arch implant-supported prosthesis are technique-sensitive and time-consuming. Direct optical recording with a camera could offer an alternative to conventional impression making. The purpose of this in vitro study was to test a novel intraoral image capture protocol to obtain 3-dimensional (3D) implant spatial measurement data under simulated oral conditions of vertical opening and lip retraction. A mannequin was assembled simulating the intraoral conditions of a patient having an edentulous mandible with 5 interforaminal implants. Simulated mouth openings with 2 interincisal openings (35 mm and 55 mm) and 3 lip retractions (55 mm, 75 mm, and 85 mm) were evaluated to record the implant positions. The 3D spatial orientations of implant replicas embedded in the reference model were measured using a coordinate measuring machine (CMM) (control). Five definitive casts were made with a splinted conventional impression technique of the reference model. The positions of the implant replicas for each of the 5 casts were measured with a Nobel Procera Scanner (conventional digital method). For the prototype, optical targets were secured to the implant replicas, and 3 sets of 12 images each were recorded for the photogrammetric process of 6 groups of retractions and openings using a digital camera and a standardized image capture protocol. Dimensional data were imported into photogrammetry software (photogrammetry method). The calculated and/or measured precision and accuracy of the implant positions in 3D space for the 6 groups were compared with 1-way ANOVA with an F-test (α=.05). The precision (standard error [SE] of measurement) for CMM was 3.9 μm (95% confidence interval [CI] 2.7 to 7.1 μm). For the conventional impression method, the SE of measurement was 17.2 μm (95% CI 10.3 to 49.4 μm). For photogrammetry, a grand mean was calculated for groups MinR-AvgO, MinR-MaxO, AvgR-AvgO, and MaxR-AvgO obtaining a value of 26.8 μm (95% CI 18.1 to 51.4 μm). The overall linear measurement error for accurately locating the top center points (TCP) followed a similar pattern as for precision. CMM (coordinate measurement machine) measurement represents the nonclinical gold standard, with an average error TCP distance of 4.6 μm (95% CI 3.5 to 6 μm). All photogrammetry groups presented an accuracy that ranged from 63 μm (SD 17.6) to 47 μm (SD 9.2). The grand mean of accuracy was calculated as 55.2 μm (95% CI 8.8 to 130.8 μm). The CMM group (control) demonstrated the highest levels of accuracy and precision. Most of the groups with the photogrammetric method were statistically similar to the conventional group except for groups AvgR-MaxO and MaxR-MaxO, which represented maximum opening with average retraction and maximum opening with maximum retraction. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  19. Adaptive color halftoning for minimum perceived error using the blue noise mask

    NASA Astrophysics Data System (ADS)

    Yu, Qing; Parker, Kevin J.

    1997-04-01

    Color halftoning using a conventional screen requires careful selection of screen angles to avoid Moire patterns. An obvious advantage of halftoning using a blue noise mask (BNM) is that no conventional screen angles or Moire patterns are produced. However, a simple strategy of employing the same BNM on all color planes is unacceptable in cases where a small registration error can cause objectionable color shifts. In a previous paper by Yao and Parker, strategies were presented for shifting or inverting the BNM as well as using mutually exclusive BNMs for different color planes. In this paper, these schemes are studied in CIE-LAB color space in terms of root mean square error and variance for the luminance and chrominance channels, respectively. We demonstrate that the dot-on-dot scheme results in minimum chrominance error but maximum luminance error, that the 4-mask scheme results in minimum luminance error but maximum chrominance error, and that the shift scheme falls in between. Based on this study, we propose a new adaptive color halftoning algorithm that takes colorimetric color reproduction into account by applying two mutually exclusive BNMs on two color planes and an adaptive scheme on the other planes to reduce color error. We show that by having one adaptive color channel, we obtain increased flexibility to manipulate the output so as to reduce colorimetric error while permitting customization to specific printing hardware.
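
    A minimal sketch of the shift scheme discussed above: each color plane is thresholded against a circularly shifted copy of one mask, which decorrelates the dot patterns between planes. A uniform random array stands in for a true blue noise mask, and the shift offsets are arbitrary.

    ```python
    import numpy as np

    def halftone(plane, mask):
        """Threshold one color plane (values in 0-1) against a mask (0-1)."""
        return (plane > mask).astype(np.uint8)

    rng = np.random.default_rng(0)
    mask = rng.random((64, 64))          # stand-in for a real blue noise mask
    img = rng.random((3, 64, 64))        # C, M, Y planes in [0, 1]

    shifts = [(0, 0), (21, 21), (42, 42)]          # per-plane mask shifts
    out = np.stack([halftone(img[c], np.roll(mask, s, axis=(0, 1)))
                    for c, s in enumerate(shifts)])
    print(out.shape, out.mean(axis=(1, 2)))        # dot coverage per plane
    ```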

  20. Coarse initial orbit determination for a geostationary satellite using single-epoch GPS measurements.

    PubMed

    Kim, Ghangho; Kim, Chongwon; Kee, Changdon

    2015-04-01

    A practical algorithm is proposed for determining the orbit of a geostationary orbit (GEO) satellite using single-epoch measurements from a Global Positioning System (GPS) receiver under the sparse visibility of the GPS satellites. The algorithm uses three components of a state vector to determine the satellite's state, even when it is impossible to apply the classical single-point solutions (SPS). Through consideration of the characteristics of the GEO orbital elements and GPS measurements, the components of the state vector are reduced to three. However, the algorithm remains sufficiently accurate for a GEO satellite. The developed algorithm was tested on simulated measurements from two or three GPS satellites, and the calculated maximum position error was found to be less than approximately 40 km or even several kilometers within the geometric range, even when the classical SPS solution was unattainable. In addition, extended Kalman filter (EKF) tests of a GEO satellite with the estimated initial state were performed to validate the algorithm. In the EKF, a reliable dynamic model was adapted to reduce the probability of divergence that can be caused by large errors in the initial state.

  2. Vertical profile of tropospheric ozone derived from synergetic retrieval using three different wavelength ranges, UV, IR, and microwave: sensitivity study for satellite observation

    NASA Astrophysics Data System (ADS)

    Sato, Tomohiro O.; Sato, Takao M.; Sagawa, Hideo; Noguchi, Katsuyuki; Saitoh, Naoko; Irie, Hitoshi; Kita, Kazuyuki; Mahani, Mona E.; Zettsu, Koji; Imasu, Ryoichi; Hayashida, Sachiko; Kasai, Yasuko

    2018-03-01

    We performed a feasibility study of constraining the vertical profile of the tropospheric ozone by using a synergetic retrieval method on multiple spectra, i.e., ultraviolet (UV), thermal infrared (TIR), and microwave (MW) ranges, measured from space. This work provides, for the first time, a quantitative evaluation of the retrieval sensitivity of the tropospheric ozone by adding the MW measurement to the UV and TIR measurements. Two observation points in East Asia (one in an urban area and one in an ocean area) and two observation times (one during summer and one during winter) were assumed. Geometry of line of sight was nadir down-looking for the UV and TIR measurements, and limb sounding for the MW measurement. The retrieval sensitivities of the ozone profiles in the upper troposphere (UT), middle troposphere (MT), and lowermost troposphere (LMT) were estimated using the degree of freedom for signal (DFS), the pressure of maximum sensitivity, reduction rate of error from the a priori error, and the averaging kernel matrix, derived based on the optimal estimation method. The measurement noise levels were assumed to be the same as those for currently available instruments. The weighting functions for the UV, TIR, and MW ranges were calculated using the SCIATRAN radiative transfer model, the Line-By-Line Radiative Transfer Model (LBLRTM), and the Advanced Model for Atmospheric Terahertz Radiation Analysis and Simulation (AMATERASU), respectively. The DFS value was increased by approximately 96, 23, and 30 % by adding the MW measurements to the combination of UV and TIR measurements in the UT, MT, and LMT regions, respectively. The MW measurement increased the DFS value of the LMT ozone; nevertheless, the MW measurement alone has no sensitivity to the LMT ozone. The pressure of maximum sensitivity value for the LMT ozone was also increased by adding the MW measurement. These findings indicate that better information on LMT ozone can be obtained by adding constraints on the UT and MT ozone from the MW measurement. The results of this study are applicable to the upcoming air-quality monitoring missions, APOLLO, GMAP-Asia, and uvSCOPE.
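
    The DFS used above is a standard optimal-estimation quantity: the trace of the averaging kernel A = (Kᵀ Se⁻¹ K + Sa⁻¹)⁻¹ Kᵀ Se⁻¹ K. The toy sketch below shows how stacking an extra measurement's Jacobian rows (e.g., adding MW to UV+TIR) raises the DFS; matrix sizes and values are assumptions.

    ```python
    import numpy as np

    def dfs(K, Se, Sa):
        """Degrees of freedom for signal: trace of the averaging kernel."""
        Se_inv = np.linalg.inv(Se)
        G = np.linalg.inv(K.T @ Se_inv @ K + np.linalg.inv(Sa)) @ K.T @ Se_inv
        return np.trace(G @ K)

    rng = np.random.default_rng(0)
    n = 5                                   # retrieval grid levels
    K_uv_tir = rng.normal(size=(8, n))      # toy UV+TIR Jacobian
    K_mw = rng.normal(size=(4, n))          # toy MW Jacobian
    Sa = np.eye(n)                          # prior covariance
    for K in (K_uv_tir, np.vstack([K_uv_tir, K_mw])):
        print(dfs(K, 0.5 * np.eye(K.shape[0]), Sa))   # DFS grows with added rows
    ```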

  3. Optical Fourier filtering for whole lens assessment of progressive power lenses.

    PubMed

    Spiers, T; Hull, C C

    2000-07-01

    Four binary filter designs for use in an optical Fourier filtering set-up were evaluated when taking quantitative measurements and when qualitatively mapping the power variation of progressive power lenses (PPLs). The binary filters tested were concentric ring, linear grating, grid and "chevron" designs. The chevron filter was considered best for quantitative measurements since it permitted a vernier acuity task to be used for measuring the fringe spacing, significantly reducing errors, and it also gave information on the polarity of the lens power. The linear grating filter was considered best for qualitatively evaluating the power variation. Optical Fourier filtering and a Nidek automatic focimeter were then used to measure the powers in the distance and near portions of five PPLs of differing design. Mean measurement error was 0.04 D with a maximum value of 0.13 D. Good qualitative agreement was found between the iso-cylinder plots provided by the manufacturer and the Fourier filter fringe patterns for the PPLs indicating that optical Fourier filtering provides the ability to map the power distribution across the entire lens aperture without the need for multiple point measurements. Arguments are presented that demonstrate that it should be possible to derive both iso-sphere and iso-cylinder plots from the binary filter patterns.

  4. Scoping a field experiment: error diagnostics of TRMM precipitation radar estimates in complex terrain as a basis for IPHEx2014

    NASA Astrophysics Data System (ADS)

    Duan, Y.; Wilson, A. M.; Barros, A. P.

    2014-10-01

    A diagnostic analysis of the space-time structure of error in Quantitative Precipitation Estimates (QPE) from the Precipitation Radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network was deployed at mid to high elevations in the Southern Appalachian Mountains, USA since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 V7 using 5 years of data 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA, and missed detection, MD) and magnitude errors (underestimation, UND, and overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the Southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter), and especially in the inner region. Although UND dominates the magnitude error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total consistent with regional hydrometeorology. The 2A25 V7 product underestimates low level orographic enhancement of rainfall associated with fog, cap clouds and cloud to cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the terrain topography mask used to remove ground clutter effects. Precipitation associated with small-scale systems (< 25 km2) and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are more likely subject to measurement uncertainty, that is raingauge underestimation errors due to under-catch and precipitation phase. Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and local hydrometeorological regime that is strongly modulated by the diurnal cycle, pointing to three major error causes that are inter-related: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non uniform beam filling (NUBF) effects and ambiguity in the detection of bright band position; and (3) spatial resolution and ground clutter correction.

  5. Scoping a field experiment: error diagnostics of TRMM precipitation radar estimates in complex terrain as a basis for IPHEx2014

    NASA Astrophysics Data System (ADS)

    Duan, Y.; Wilson, A. M.; Barros, A. P.

    2015-03-01

    A diagnostic analysis of the space-time structure of error in quantitative precipitation estimates (QPEs) from the precipitation radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network has been deployed at mid to high elevations in the southern Appalachian Mountains, USA, since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 Version 7 using 5 years of data, 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA; missed detection, MD) and magnitude errors (underestimation, UND; overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter) and especially in the inner region. Although UND dominates the error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total, consistent with regional hydrometeorology. The 2A25 V7 product underestimates low-level orographic enhancement of rainfall associated with fog, cap clouds and cloud to cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the topography mask used to remove ground-clutter effects. Precipitation associated with small-scale systems (< 25 km²) and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are more likely subject to measurement uncertainty, that is raingauge underestimation errors due to undercatch and precipitation phase. Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and a local hydrometeorological regime that is strongly modulated by the diurnal cycle, pointing to three major error causes that are inter-related: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non-uniform beam filling (NUBF) effects and ambiguity in the detection of bright band position; and (3) spatial resolution and ground-clutter correction.
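
    As a rough illustration of the four-way error classification used in this diagnostic analysis, the sketch below bins paired gauge/radar rain-rate samples into FA, MD, UND, and OVR counts. The wet/dry threshold and the synthetic data are assumptions for illustration, not values from the study.

    ```python
    import numpy as np

    def classify_errors(gauge, radar, wet_threshold=0.1):
        """Four-way detection/magnitude error classification for paired
        gauge (reference) and radar rain-rate samples, in mm/h.

        FA  : radar wet, gauge dry       (false alarm)
        MD  : radar dry, gauge wet       (missed detection)
        UND : both wet, radar <  gauge   (underestimation)
        OVR : both wet, radar >= gauge   (overestimation)
        """
        gauge, radar = np.asarray(gauge), np.asarray(radar)
        gauge_wet = gauge >= wet_threshold
        radar_wet = radar >= wet_threshold
        return {
            "FA":  int(np.sum(radar_wet & ~gauge_wet)),
            "MD":  int(np.sum(~radar_wet & gauge_wet)),
            "UND": int(np.sum(radar_wet & gauge_wet & (radar < gauge))),
            "OVR": int(np.sum(radar_wet & gauge_wet & (radar >= gauge))),
        }

    # Synthetic rain rates (mm/h) with multiplicative retrieval error,
    # occasional spurious echoes (FAs) and occasional missed detections
    rng = np.random.default_rng(0)
    wet = rng.random(1000) < 0.3
    gauge = np.where(wet, rng.gamma(0.5, 2.0, 1000), 0.0)
    radar = gauge * rng.lognormal(0.0, 0.5, 1000)
    radar[rng.random(1000) < 0.05] += 0.5
    radar[rng.random(1000) < 0.10] = 0.0
    print(classify_errors(gauge, radar))
    ```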

  6. The Refurbishment and Upgrade of the Atmospheric Radiation Measurement Raman Lidar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, D.D.; Goldsmith, J.E.M.

    The Atmospheric Radiation Measurement Program (ARM) Climate Research Facility (ACRF) Raman lidar (CARL) is an autonomous, turn-key system that profiles water vapor, aerosols, and clouds throughout the diurnal cycle for days without attention (Goldsmith et al. 1998). CARL was first deployed to the Southern Great Plains CRF during the summer of 1996 and participated in the 1996 and 1997 water vapor intensive operational periods (IOPs). Since February 1998, the system has collected over 38,000 hrs of data (equivalent to almost 4.4 years), with an average monthly uptime of 62% during this time period. This unprecedented performance makes CARL the premier operational Raman lidar in the world. Unfortunately, CARL began degrading in early 2002. This loss of sensitivity, which affected all observed variables, was very gradual and thus was not identified until the autumn of 2003. Analysis of the data suggested the problem was not associated with the laser or transmit portion of the system, but rather with the detection subsystem, as both the background values and the peak signals showed a marked decrease over this time period. The loss of sensitivity of a factor of 2-4, depending on the channel, resulted in higher random error in the retrieved products, such as the aerosol backscatter coefficient and water vapor mixing ratio. Figure 1 shows the random error at 2 km for aerosol backscatter coefficient (top) and water vapor mixing ratio (middle), in terms of percent of the signal for both average daytime (red) and nighttime (blue) data from 1998 to 2005. The seasonal variation of water vapor is easily seen in the random error in the water vapor mixing ratio data. The loss of sensitivity also affected the maximum range of the usable data, as illustrated by the dramatic decrease in the maximum height seen in the water vapor mixing ratio data (bottom). This degradation, which results in much larger random errors, greatly hinders the analysis of data sets such as the Aerosol IOP (March 2003) and the AIRS Water Vapor Experiment (December 2003). The degradation and its impact on the Aerosol IOP analysis are reported in Ferrare et al. 2005.

  7. Impact of survey workflow on precision and accuracy of terrestrial LiDAR datasets

    NASA Astrophysics Data System (ADS)

    Gold, P. O.; Cowgill, E.; Kreylos, O.

    2009-12-01

    Ground-based LiDAR (Light Detection and Ranging) survey techniques are enabling remote visualization and quantitative analysis of geologic features at unprecedented levels of detail. For example, digital terrain models computed from LiDAR data have been used to measure displaced landforms along active faults and to quantify fault-surface roughness. But how accurately do terrestrial LiDAR data represent the true ground surface, and in particular, how internally consistent and precise are the mosaicked LiDAR datasets from which surface models are constructed? Addressing this question is essential for designing survey workflows that capture the necessary level of accuracy for a given project while minimizing survey time and equipment, which is critical for effective surveying of remote sites. To address this problem, we seek to define a metric that quantifies how scan registration error changes as a function of survey workflow. Specifically, we are using a Trimble GX3D laser scanner to conduct a series of experimental surveys to quantify how common variables in field workflows impact the precision of scan registration. Primary variables we are testing include 1) use of an independently measured network of control points to locate scanner and target positions, 2) the number of known-point locations used to place the scanner and point clouds in 3-D space, 3) the type of target used to measure distances between the scanner and the known points, and 4) setting up the scanner over a known point as opposed to resectioning of known points. Precision of the registered point cloud is quantified using Trimble RealWorks software by automatic calculation of registration errors (errors between locations of the same known points in different scans). Accuracy of the registered cloud (i.e., its ground-truth) will be measured in subsequent experiments. To obtain an independent measure of scan-registration errors and to better visualize the effects of these errors on a registered point cloud, we scan an object of known geometry (a cylinder mounted above a square box) from multiple locations. Preliminary results show that even in a controlled experimental scan of an object of known dimensions, there is significant variability in the precision of the registered point cloud. For example, when 3 scans of the central object are registered using 4 known points (maximum time, maximum equipment), the point clouds align to within ~1 cm (normal to the object surface). However, when the same point clouds are registered with only 1 known point (minimum time, minimum equipment), misalignment of the point clouds can range from 2.5 to 5 cm, depending on target type. The greater misalignment of the 3 point clouds when registered with fewer known points stems from the field method employed in acquiring the dataset and demonstrates the impact of field workflow on LiDAR dataset precision. By quantifying the degree of scan mismatch in results such as this, we can provide users with the information needed to maximize efficiency in remote field surveys.
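
    The registration-error metric described above (distances between locations of the same known points in different scans) can be illustrated with a short sketch. The control-point coordinates and noise level below are hypothetical; in the study the computation is performed by Trimble RealWorks.

    ```python
    import numpy as np

    def registration_error(points_scan_a, points_scan_b):
        """Per-point misalignment between two registered scans that share
        a set of control points (N x 3 arrays, same point order).
        Returns the individual distances and their RMS, as used to
        quantify precision of the registered point cloud."""
        d = np.linalg.norm(np.asarray(points_scan_a) - np.asarray(points_scan_b), axis=1)
        return d, float(np.sqrt(np.mean(d**2)))

    # Hypothetical control-point coordinates (metres) from two scans
    a = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
    b = a + np.random.default_rng(1).normal(0, 0.01, a.shape)  # ~1 cm noise
    dists, rms = registration_error(a, b)
    print(f"per-point error (m): {dists}, RMS = {rms:.4f} m")
    ```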

  8. Microwave non-contact imaging of subcutaneous human body tissues

    PubMed Central

    Chernokalov, Alexander; Khripkov, Alexander; Cho, Jaegeol; Druchinin, Sergey

    2015-01-01

    A small-size microwave sensor is developed for non-contact imaging of a human body structure in 2D, enabling fitness and health monitoring using mobile devices. A method for human body tissue structure imaging is developed and experimentally validated. Subcutaneous fat tissue reconstruction depth of up to 70 mm and maximum fat thickness measurement error below 2 mm are demonstrated by measurements with a human body phantom and human subjects. Electrically small antennas are developed for integration of the microwave sensor into a mobile device. Usability of the developed microwave sensor for fitness applications, healthcare, and body weight management is demonstrated. PMID:26609415

  9. Apollo 16, LM-11 descent propulsion system final flight evaluation

    NASA Technical Reports Server (NTRS)

    Avvenire, A. T.

    1974-01-01

    The performance of the LM-11 descent propulsion system during the Apollo 16 mission was evaluated and found satisfactory. The average engine effective specific impulse was 0.1 second higher than predicted, but well within the predicted one-sigma uncertainty of 0.2 seconds. Several flight measurement discrepancies were noted: (1) the chamber pressure transducer exhibited a noticeable drift, with a maximum error of about 1.5 psi at approximately 130 seconds after engine ignition; (2) the fuel and oxidizer interface pressure measurements appeared to be low during the entire flight; and (3) the fuel propellant quantity gaging system did not perform within expected accuracies.

  10. A microprocessor controlled pressure scanning system

    NASA Technical Reports Server (NTRS)

    Anderson, R. C.

    1976-01-01

    A microprocessor-based controller and data logger for pressure scanning systems is described. The microcomputer positions and manages data from as many as four 48-port electro-mechanical pressure scanners. The maximum scanning rate is 80 pressure measurements per second (20 ports per second on each of four scanners). The system features on-line calibration, position-directed data storage, and once-per-scan display in engineering units of data from a selected port. The system is designed to be interfaced to a facility computer through a shared memory. System hardware and software are described. Factors affecting measurement error in this type of system are also discussed.

  11. A method for calibration of Soleil-Babinet compensator using a spectrophotometer

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Chen, Lei; Li, Bo; Shi, Lili; Luo, Ting

    2010-06-01

    A method using a spectrophotometer for calibrating a Soleil-Babinet compensator is proposed. It is based on the spectroscopic method, which utilizes the relation between transmittance and wavelength to obtain retardation. By placing a multiple-order half-wave plate behind the Soleil-Babinet compensator, zero-order retardation can be measured, which is difficult to accomplish by the spectroscopic method alone. In the experiment, the retardations of the compensator in the range 0-λ are measured. It is demonstrated that the precision of the retardation is 0.45 nm at the positions 0 and λ, while the maximum error is less than 1 nm between the two positions.
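
    For reference, the spectroscopic method referred to above is commonly based on the transmittance of a retarder placed between crossed polarizers; the relation below is the standard textbook form and is an assumption about the exact configuration used in the paper.

    ```latex
    % Transmittance of a retarder of retardation R at 45 degrees between
    % crossed polarizers (standard textbook relation; the paper's exact
    % configuration may differ):
    T(\lambda) = \sin^2\!\left(\frac{\pi R}{\lambda}\right)
    % A recorded spectrum T(\lambda) therefore yields R from the
    % wavelengths of its extrema: minima at R = m\lambda (integer m),
    % maxima at R = (m + \tfrac{1}{2})\lambda .
    ```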

  12. Simulation of water-table aquifers using specified saturated thickness

    USGS Publications Warehouse

    Sheets, Rodney A.; Hill, Mary C.; Haitjema, Henk M.; Provost, Alden M.; Masterson, John P.

    2014-01-01

    Simulating groundwater flow in a water-table (unconfined) aquifer can be difficult because the saturated thickness available for flow depends on model-calculated hydraulic heads. It is often possible to realize substantial time savings and still obtain accurate head and flow solutions by specifying an approximate saturated thickness a priori, thus linearizing this aspect of the model. This specified-thickness approximation often relies on the use of the “confined” option in numerical models, which has led to confusion and criticism of the method. This article reviews the theoretical basis for the specified-thickness approximation, derives an error analysis for relatively ideal problems, and illustrates the utility of the approximation with a complex test problem. In the transient version of our complex test problem, the specified-thickness approximation produced maximum errors in computed drawdown of about 4% of initial aquifer saturated thickness even when maximum drawdowns were nearly 20% of initial saturated thickness. In the final steady-state version, the approximation produced maximum errors in computed drawdown of about 20% of initial aquifer saturated thickness (mean errors of about 5%) when maximum drawdowns were about 35% of initial saturated thickness. In early phases of model development, such as during initial model calibration efforts, the specified-thickness approximation can be a very effective tool to facilitate convergence. The reduced execution time and increased stability obtained through the approximation can be especially useful when many model runs are required, such as during inverse model calibration, sensitivity and uncertainty analyses, multimodel analysis, and development of optimal resource management scenarios.
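
    A minimal sketch of the linearization underlying the specified-thickness approximation, assuming Dupuit-type flow with drawdown s measured from an initial saturated thickness b0; this is illustrative only, not the article's full error analysis.

    ```latex
    % Unconfined transmissivity depends on the computed head h, while the
    % specified-thickness ("confined") option fixes it a priori:
    T_{\text{unconfined}} = K\,h, \qquad T_{\text{specified}} = K\,b_0 .
    % With drawdown s = b_0 - h, the relative transmissivity error is
    \frac{T_{\text{specified}} - T_{\text{unconfined}}}{T_{\text{unconfined}}}
      = \frac{s}{\,b_0 - s\,},
    % which is small when s << b_0, consistent with the test-problem
    % results above (larger drawdown fractions yield larger errors).
    ```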

  13. The Significance of the Record Length in Flood Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Senarath, S. U.

    2013-12-01

    Of all potential natural hazards, flooding is the most costly in many regions of the world. For example, floods cause over a third of Europe's average annual catastrophe losses and affect about two thirds of the people impacted by natural catastrophes. Increased attention is being paid to determining flow estimates associated with pre-specified return periods so that flood-prone areas can be adequately protected against floods of particular magnitudes or return periods. Flood frequency analysis, which is conducted by fitting an appropriate probability density function to the observed annual maximum flow data, is frequently used for obtaining these flow estimates. Consequently, flood frequency analysis plays an integral role in determining the flood risk in flood-prone watersheds. A long annual maximum flow record is vital for obtaining accurate estimates of discharges associated with high-return-period flows. However, in many areas of the world, flood frequency analysis is conducted with limited flow data or short annual maximum flow records. These inevitably lead to flow estimates that are subject to error, and this is especially the case for high-return-period flow estimates. In this study, several statistical techniques are used to identify errors caused by short annual maximum flow records. The flow estimates used in the error analysis are obtained by fitting a log-Pearson III distribution to the flood time series. These errors can then be used to better evaluate the return-period flows in data-limited streams. The study findings, therefore, have important implications for hydrologists, water resources engineers and floodplain managers.
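
    A minimal sketch of the fitting step described above, assuming SciPy's pearson3 distribution applied to log-transformed annual maxima; the synthetic record and the subsampling comparison are illustrative of the record-length effect, not the study's data or methods.

    ```python
    import numpy as np
    from scipy import stats

    def lp3_quantiles(annual_max_q, return_periods=(10, 50, 100)):
        """Fit a log-Pearson III distribution to an annual-maximum flow
        series and return flow estimates for the requested return
        periods. Illustrative sketch only."""
        log_q = np.log10(annual_max_q)
        skew, loc, scale = stats.pearson3.fit(log_q)   # Pearson III on log flows
        aep = 1.0 - 1.0 / np.asarray(return_periods, float)  # non-exceedance prob.
        return 10 ** stats.pearson3.ppf(aep, skew, loc=loc, scale=scale)

    # Effect of record length: subsample a longer synthetic record
    rng = np.random.default_rng(2)
    full_record = 10 ** rng.normal(2.0, 0.3, 80)   # 80 years, synthetic
    short_record = full_record[:20]                # 20-year subset
    print("Q100, 80-yr record:", lp3_quantiles(full_record, (100,)))
    print("Q100, 20-yr record:", lp3_quantiles(short_record, (100,)))
    ```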

  14. The passive cable properties of hair cell stereocilia and their contribution to somatic capacitance measurements.

    PubMed

    Breneman, Kathryn D; Highstein, Stephen M; Boyle, Richard D; Rabbitt, Richard D

    2009-01-01

    Somatic measurements of whole-cell capacitance are routinely used to understand physiologic events occurring in remote portions of cells. These studies often assume the intracellular space is voltage-clamped. We questioned this assumption in auditory and vestibular hair cells with respect to their stereocilia, based on earlier studies showing that neurons with radial dimensions similar to stereocilia are not always isopotential under voltage clamp. To explore this, we modeled the stereocilia as passive cables with transduction channels located at their tips. We found that the input capacitance measured at the soma changes when the transduction channels at the tips of the stereocilia are open compared to when the channels are closed. The maximum capacitance is measured with the transducer closed and decreases as the transducer opens, due to a voltage drop along the length of the stereocilium. This potential drop is proportional to the intracellular resistance and stereocilium tip conductance and can produce a maximum capacitance error on the order of fF for single stereocilia and pF for the bundle.

  15. Comparison of measured and modeled BRDF of natural targets

    NASA Astrophysics Data System (ADS)

    Boucher, Yannick; Cosnefroy, Helene; Petit, Alain D.; Serrot, Gerard; Briottet, Xavier

    1999-07-01

    The Bidirectional Reflectance Distribution Function (BRDF) plays a major role in evaluating or simulating the signatures of natural and artificial targets in the solar spectrum. A goniometer covering a large spectral and directional domain has recently been developed by ONERA/DOTA. It was designed to allow both laboratory and outdoor measurements. The spectral domain ranges from 0.40 to 0.95 micrometer, with a resolution of 3 nm. The geometrical domain covers 0-60 degrees for the zenith angle of the source and the sensor, and 0-180 degrees for the relative azimuth between the source and the sensor. The maximum target size for nadir measurements is 22 cm. The spatial non-uniformity of the target irradiance has been evaluated and then used to correct the raw measurements. BRDF measurements are calibrated against a Spectralon reference panel. BRDF measurements performed on sand and short grass are presented here. Eight bidirectional models, among the most popular found in the literature, have been tested on this measured data set. A code fitting the model parameters to the measured BRDF data has been developed. A comparative evaluation of model performance is carried out using different criteria (root mean square error, root mean square relative error, correlation diagram, etc.). The robustness of the models is evaluated with respect to the number of BRDF measurements, noise and interpolation.
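
    As a sketch of the model-fitting and evaluation procedure described above, the following fits one simple candidate BRDF model (Minnaert, used here only as an example; the paper tests eight models) by least squares and reports absolute and relative RMS errors on synthetic data.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def minnaert(params, theta_i, theta_v):
        """Simple Minnaert BRDF model (one of many candidate models;
        used here only to illustrate the fitting/assessment procedure)."""
        rho0, k = params
        return rho0 * np.cos(theta_i) ** (k - 1) * np.cos(theta_v) ** (k - 1)

    def fit_and_score(theta_i, theta_v, brdf_meas):
        res = least_squares(
            lambda p: minnaert(p, theta_i, theta_v) - brdf_meas,
            x0=[0.3, 1.0],
        )
        pred = minnaert(res.x, theta_i, theta_v)
        rmse = np.sqrt(np.mean((pred - brdf_meas) ** 2))                # absolute
        rel_rmse = np.sqrt(np.mean(((pred - brdf_meas) / brdf_meas) ** 2))
        return res.x, rmse, rel_rmse

    # Synthetic "measurements" over the goniometer's angular domain
    rng = np.random.default_rng(3)
    ti = np.radians(rng.uniform(0, 60, 200))
    tv = np.radians(rng.uniform(0, 60, 200))
    meas = minnaert([0.25, 1.2], ti, tv) * rng.lognormal(0, 0.05, 200)
    print(fit_and_score(ti, tv, meas))
    ```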

  16. Evaluation of FNS control systems: software development and sensor characterization.

    PubMed

    Riess, J; Abbas, J J

    1997-01-01

    Functional Neuromuscular Stimulation (FNS) systems activate paralyzed limbs by electrically stimulating motor neurons. These systems have been used to restore functions such as standing and stepping in people with thoracic-level spinal cord injury. Research in our laboratory is directed at the design and evaluation of control algorithms for generating posture and movement. This paper describes software developed for implementing FNS control systems and the characterization of a sensor system used to implement and evaluate controllers in the laboratory. In order to assess FNS control algorithms, we have developed a versatile software package using LabVIEW (National Instruments Corp.). This package provides the ability to interface with sensor systems via serial port or A/D board, implement data processing and real-time control algorithms, and interface with neuromuscular stimulation devices. In our laboratory, we use the Flock of Birds (Ascension Technology Corp.) motion tracking sensor system to monitor limb segment position and orientation (6 degrees of freedom). Errors in the sensor system have been characterized, and nonlinear polynomial models have been developed to account for these errors. With this compensation, the error in the distance measurement is reduced by 90%, so that the maximum error is less than 1 cm.
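
    A minimal sketch of the polynomial error-compensation idea, assuming a scalar distance measurement and a cubic correction; the polynomial degree, the data, and the error shape are illustrative assumptions, not the laboratory's actual calibration.

    ```python
    import numpy as np

    # Fit a polynomial mapping raw sensor distances to reference distances,
    # then apply it as a correction (the compensation idea described above).
    true_d = np.linspace(0.2, 1.5, 50)                  # reference distances (m)
    raw_d = true_d + 0.04 * true_d**2 - 0.02 * true_d   # nonlinear sensor error

    coeffs = np.polyfit(raw_d, true_d, deg=3)           # calibration model
    corrected = np.polyval(coeffs, raw_d)

    print("max error before:", np.max(np.abs(raw_d - true_d)))
    print("max error after: ", np.max(np.abs(corrected - true_d)))
    ```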

  17. Seasonal Differences in Spatial Scales of Chlorophyll-A Concentration in Lake TAIHU,CHINA

    NASA Astrophysics Data System (ADS)

    Bao, Y.; Tian, Q.; Sun, S.; Wei, H.; Tian, J.

    2012-08-01

    The spatial distribution of chlorophyll-a (chla) concentration in Lake Taihu is non-uniform and varies seasonally. Chla concentration retrieval algorithms were separately established using measured data and remote sensing images (HJ-1 CCD and MODIS data) from October 2010, March 2011, and September 2011. Semi-variance parameters were then calculated at scales of 30 m, 250 m and 500 m to analyze the spatial heterogeneity in different seasons. Finally, based on the definitions of lumped chla (chlaL) and distributed chla (chlaD), a seasonal model of chla concentration scale error was built. The results indicated that the spatial distribution of chla concentration in spring was relatively uniform, whereas in summer and autumn the chla concentration in the north of the lake, such as Meiliang Bay and Zhushan Bay, was higher than in the south of Lake Taihu. Chla concentration at different scales showed a similar structure within the same season, but a different structure across seasons. Chla concentration retrieved from MODIS 500 m data had the greatest scale error. The spatial scale error changed with the seasons, being higher in summer and autumn than in spring, with a maximum relative error of 23%.

  18. A directional cylindrical anemometer with four sets of differential pressure sensors

    NASA Astrophysics Data System (ADS)

    Liu, C.; Du, L.; Zhao, Z.

    2016-03-01

    This paper presents a solid-state directional anemometer for simultaneously measuring the speed and direction of wind in a plane over a speed range of 1-40 m/s. The instrument has a cylindrical shape and works by detecting the pressure differences across diameters of the cylinder when exposed to wind. By analyzing our experimental data in a Reynolds number regime of 1.7 × 10³ to 7 × 10⁴, we determine the relationship between the pressure difference distribution and the wind velocity. We propose a novel and simple solution based on this relationship and design an anemometer composed of a circular cylinder with four sets of differential pressure sensors, tubes connecting these sensors to the cylinder's surface, and corresponding circuits. With no moving parts, the instrument is small and free of friction; it has a simple internal structure, and the fragile sensing elements are well protected. Prototypes have been fabricated to evaluate the performance of the proposed approach. The power consumption of the prototype is less than 0.5 W, and the sample rate is up to 31 Hz. Test results in a wind tunnel indicate that the maximum relative speed measurement error is 5% and the direction error is no more than 5° over a speed range of 2-40 m/s. In theory, the instrument is capable of measuring wind up to 60 m/s. When the air stream is slower than 2 m/s, the direction errors are slightly greater and the speed measurement performance degrades but remains within an acceptable range of ±0.2 m/s.
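
    The speed/direction recovery can be sketched with a toy first-harmonic model of the diametric pressure differences; the real instrument relies on an empirically calibrated relationship over the stated Reynolds-number regime, so the model below is an assumption for illustration only.

    ```python
    import numpy as np

    RHO = 1.225  # air density, kg/m^3

    def wind_from_dp(dp_x, dp_y):
        """Recover wind speed and direction from two orthogonal diametric
        pressure differences, using a toy first-harmonic model
        dp = q*cos(theta - phi_sensor) with q = 0.5*rho*v^2."""
        q = np.hypot(dp_x, dp_y)            # dynamic-pressure magnitude
        speed = np.sqrt(2.0 * q / RHO)
        direction = np.degrees(np.arctan2(dp_y, dp_x)) % 360.0
        return speed, direction

    # Forward-simulate a 10 m/s wind from 30 degrees, then invert
    v_true, th_true = 10.0, np.radians(30.0)
    q_true = 0.5 * RHO * v_true**2
    dp_x, dp_y = q_true * np.cos(th_true), q_true * np.sin(th_true)
    print(wind_from_dp(dp_x, dp_y))  # -> (10.0, 30.0)
    ```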

  19. Probability of Detection of Genotyping Errors and Mutations as Inheritance Inconsistencies in Nuclear-Family Data

    PubMed Central

    Douglas, Julie A.; Skol, Andrew D.; Boehnke, Michael

    2002-01-01

    Gene-mapping studies routinely rely on checking for Mendelian transmission of marker alleles in a pedigree, as a means of screening for genotyping errors and mutations, with the implicit assumption that, if a pedigree is consistent with Mendel’s laws of inheritance, then there are no genotyping errors. However, the occurrence of inheritance inconsistencies alone is an inadequate measure of the number of genotyping errors, since the rate of occurrence depends on the number and relationships of genotyped pedigree members, the type of errors, and the distribution of marker-allele frequencies. In this article, we calculate the expected probability of detection of a genotyping error or mutation as an inheritance inconsistency in nuclear-family data, as a function of both the number of genotyped parents and offspring and the marker-allele frequency distribution. Through computer simulation, we explore the sensitivity of our analytic calculations to the underlying error model. Under a random-allele–error model, we find that detection rates are 51%–77% for multiallelic markers and 13%–75% for biallelic markers; detection rates are generally lower when the error occurs in a parent than in an offspring, unless a large number of offspring are genotyped. Errors are especially difficult to detect for biallelic markers with equally frequent alleles, even when both parents are genotyped; in this case, the maximum detection rate is 34% for four-person nuclear families. Error detection in families in which parents are not genotyped is limited, even with multiallelic markers. Given these results, we recommend that additional error checking (e.g., on the basis of multipoint analysis) be performed, beyond routine checking for Mendelian consistency. Furthermore, our results permit assessment of the plausibility of an observed number of inheritance inconsistencies for a family, allowing the detection of likely pedigree—rather than genotyping—errors in the early stages of a genome scan. Such early assessments are valuable in either the targeting of families for resampling or discontinued genotyping. PMID:11791214
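
    A small Monte Carlo sketch of the random-allele error model for a fully genotyped parent-offspring trio with a biallelic marker; the trio setting and sample size are illustrative assumptions (the article derives analytic detection probabilities for general nuclear families).

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def consistent(child, mom, dad):
        """Mendelian consistency check for a trio: each child allele must
        be attributable to a different parent."""
        a, b = child
        return (a in mom and b in dad) or (b in mom and a in dad)

    def detection_rate(p=0.5, n_trials=50_000):
        """Monte Carlo estimate of the probability that a random-allele
        error in an offspring genotype is detected as a Mendelian
        inconsistency, for a biallelic marker with allele frequency p."""
        detected = 0
        for _ in range(n_trials):
            mom = tuple(rng.random(2) < p)   # True = allele '1'
            dad = tuple(rng.random(2) < p)
            child = [mom[rng.integers(2)], dad[rng.integers(2)]]
            # random-allele error: overwrite one child allele at random
            child[rng.integers(2)] = rng.random() < p
            if not consistent(tuple(child), mom, dad):
                detected += 1
        return detected / n_trials

    print(detection_rate(0.5))  # low for equally frequent biallelic alleles
    ```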

  20. Cognitive performance in women with fibromyalgia: A case-control study.

    PubMed

    Pérez de Heredia-Torres, Marta; Huertas-Hoyas, Elisabet; Máximo-Bocanegra, Nuria; Palacios-Ceña, Domingo; Fernández-De-Las-Peñas, César

    2016-10-01

    This study aimed to evaluate the differences in cognitive skills between women with fibromyalgia and healthy women, and the correlations between functional independence and cognitive limitations. A cross-sectional study was performed. Twenty women with fibromyalgia and 20 matched controls participated. Outcomes included the Numerical Pain Rating Scale, the Functional Independence Measure, the Fibromyalgia Impact Questionnaire and Gradior© software. The Student's t-test and Spearman's rho test were applied to the data. Affected women required a greater mean time (P < 0.020) and maximum time (P < 0.015) during the attention test than the healthy controls. In the memory test they displayed greater execution errors (P < 0.001), minimal time (P < 0.001) and mean time (P < 0.001), whereas in the perception tests they displayed a greater mean time (P < 0.009) and maximum time (P < 0.048). Correlations were found between the domains of the Functional Independence Measure and the cognitive abilities assessed. Women with fibromyalgia exhibited decreased cognitive ability compared to healthy controls, which negatively affected the performance of daily activities, such as upper limb dressing, feeding and personal hygiene. Patients required more time to perform activities requiring both attention and perception, decreasing their functional independence. Also, they displayed greater errors when performing activities requiring the use of memory. Occupational therapists treating women with fibromyalgia should consider the negative impact of possible cognitive deficits on the performance of daily activities and offer targeted support strategies. © 2016 Occupational Therapy Australia.

  1. Detrimental Effect Elimination of Laser Frequency Instability in Brillouin Optical Time Domain Reflectometer by Using Self-Heterodyne Detection

    PubMed Central

    Li, Yongqian; Li, Xiaojuan; An, Qi; Zhang, Lixin

    2017-01-01

    A useful method for eliminating the detrimental effect of laser frequency instability on Brillouin signals by employing the self-heterodyne detection of Rayleigh and Brillouin scattering is presented. From the analysis of Brillouin scattering spectra from fibers of different lengths measured by heterodyne detection, the maximum usable pulse width immune to laser frequency instability is found to be about 4 µs in a self-heterodyne detection Brillouin optical time domain reflectometer (BOTDR) system using a broad-band laser with low frequency stability. Applying the self-heterodyne detection of Rayleigh and Brillouin scattering in the BOTDR system, we successfully demonstrate that the detrimental effect of laser frequency instability on Brillouin signals can be eliminated effectively. Employing the broad-band laser modulated by an electro-optic modulator driven with 130-ns-wide pulses, the observed maximum errors in temperatures measured by the local heterodyne and self-heterodyne detection BOTDR systems are 7.9 °C and 1.2 °C, respectively. PMID:28335508

  2. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr; Picarelli, Athena, E-mail: athena.picarelli@inria.fr; Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.

  3. Comparisons of neural networks to standard techniques for image classification and correlation

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1994-01-01

    Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum likelihood for a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.

  4. Maximum correntropy square-root cubature Kalman filter with application to SINS/GPS integrated systems.

    PubMed

    Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng

    2018-05-31

    For a nonlinear system, the cubature Kalman filter (CKF) and its square-root version are useful methods for solving state estimation problems, and both can obtain good performance in Gaussian noise. However, their performance often degrades significantly in the face of non-Gaussian noise, particularly when the measurements are contaminated by heavy-tailed impulsive noise. By utilizing the maximum correntropy criterion (MCC) to improve robustness instead of the traditional minimum mean square error (MMSE) criterion, a new square-root nonlinear filter is proposed in this study, named the maximum correntropy square-root cubature Kalman filter (MCSCKF). The new filter not only retains the advantage of the square-root cubature Kalman filter (SCKF), but also exhibits robust performance against heavy-tailed non-Gaussian noise. A judgment condition that avoids numerical problems is also given. The results of two illustrative examples, especially the SINS/GPS integrated systems, demonstrate the desirable performance of the proposed filter. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
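
    For reference, the Gaussian-kernel correntropy criterion that replaces the MMSE objective is commonly written as below; this is the standard MCC definition from the literature, not the paper's full MCSCKF derivation.

    ```latex
    % Instead of minimizing the mean square error E[e^2], MCC maximizes
    % the correntropy of the estimation error e with kernel bandwidth \sigma:
    V_\sigma(e) \;=\; \mathbb{E}\!\left[\kappa_\sigma(e)\right],
    \qquad
    \kappa_\sigma(e) \;=\; \exp\!\left(-\frac{e^{2}}{2\sigma^{2}}\right).
    % Because \kappa_\sigma decays for large |e|, heavy-tailed outliers
    % receive exponentially small weight, which is the source of the
    % filter's robustness to impulsive measurement noise.
    ```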

  5. Understanding seasonal variability of uncertainty in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Li, M.; Wang, Q. J.

    2012-04-01

    Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, combined with a Bayesian joint probability approach and alternative error models, is used to investigate the seasonal dependency of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for the model error and accounts for no seasonal variation. In contrast, a seasonally variant error model uses a different set of bias, variance and autocorrelation parameters for each individual calendar month. Potential connections among model parameters from similar months are not considered within the seasonally variant model, which could result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on the model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability than the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The parameters of the seasonally variant error model are very sensitive to each cross-validation fold, while the hierarchical error model produces much more robust and reliable model parameters. Furthermore, the results of the hierarchical error model show that most model parameters are not seasonally variant except for the error bias. The seasonally variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. This flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.

  6. Ultrasound virtual endoscopy: Polyp detection and reliability of measurement in an in vitro study with pig intestine specimens

    PubMed Central

    Liu, Jin-Ya; Chen, Li-Da; Cai, Hua-Song; Liang, Jin-Yu; Xu, Ming; Huang, Yang; Li, Wei; Feng, Shi-Ting; Xie, Xiao-Yan; Lu, Ming-De; Wang, Wei

    2016-01-01

    AIM: To present our initial experience regarding the feasibility of ultrasound virtual endoscopy (USVE) and its measurement reliability for polyp detection in an in vitro study using pig intestine specimens. METHODS: Six porcine intestine specimens containing 30 synthetic polyps underwent USVE, computed tomography colonography (CTC) and optical colonoscopy (OC) for polyp detection. The polyp measurement, defined as the maximum polyp diameter on two-dimensional (2D) multiplanar reformatted (MPR) planes, was obtained by USVE, and the absolute measurement error was analyzed using the direct measurement as the reference standard. RESULTS: USVE detected 29 (96.7%) of 30 polyps, missing one 7-mm polyp. There was one false-positive finding. Twenty-six (89.7%) of 29 reconstructed images were clearly depicted, while 29 (96.7%) of 30 polyps were displayed on CTC, with one false-negative finding. In OC, all polyps were detected. The intraclass correlation coefficient was 0.876 (95%CI: 0.745-0.940) for measurements obtained with USVE. The pooled absolute measurement errors ± standard deviations of the depicted polyps with actual sizes ≤ 5 mm, 6-9 mm, and ≥ 10 mm were 1.9 ± 0.8 mm, 0.9 ± 1.2 mm, and 1.0 ± 1.4 mm, respectively. CONCLUSION: USVE is reliable for polyp detection and measurement in this in vitro study.

  7. Slotted rotatable target assembly and systematic error analysis for a search for long range spin dependent interactions from exotic vector boson exchange using neutron spin rotation

    NASA Astrophysics Data System (ADS)

    Haddock, C.; Crawford, B.; Fox, W.; Francis, I.; Holley, A.; Magers, S.; Sarsour, M.; Snow, W. M.; Vanderwerp, J.

    2018-03-01

    We discuss the design and construction of a novel target array of nonmagnetic test masses used in a neutron polarimetry measurement made in search of possible new exotic spin-dependent neutron-atom interactions of Nature at sub-mm length scales. This target was designed to accept and efficiently transmit a transversely polarized slow neutron beam through a series of long open parallel slots bounded by flat rectangular plates. These openings possessed equal atom density gradients normal to the slots from the flat test masses, with dimensions optimized to achieve maximum sensitivity to an exotic spin-dependent interaction from vector boson exchanges with ranges in the mm-μm regime. The parallel slots were oriented differently in four quadrants that can be rotated about the neutron beam axis in discrete 90° increments using a Geneva drive. The spin rotation signals from the four quadrants were measured using a segmented neutron ion chamber to suppress possible systematic errors from stray magnetic fields in the target region. We discuss the per-neutron sensitivity of the target to the exotic interaction, the design constraints, the potential sources of systematic errors which could be present in this design, and our estimate of the achievable sensitivity using this method.

  8. A feasibility study on estimation of tissue mixture contributions in 3D arterial spin labeling sequence

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Pu, Huangsheng; Zhang, Xi; Li, Baojuan; Liang, Zhengrong; Lu, Hongbing

    2017-03-01

    Arterial spin labeling (ASL) provides a noninvasive measurement of cerebral blood flow (CBF). Due to its relatively low spatial resolution, the accuracy of CBF measurement is affected by the partial volume (PV) effect. To obtain accurate CBF estimation, the contribution of each tissue type in the mixture is desirable. In current ASL studies, this is generally obtained by registering the ASL image to a structural image. This approach yields the probability of each tissue type inside each voxel, but it also introduces error, including registration-algorithm error and imaging error in the acquisition of the ASL and structural images. Therefore, estimating the tissue mixture percentages directly from ASL data is greatly needed. Under the assumption that the ASL signal follows a Gaussian distribution and that each tissue type is independent, a maximum a posteriori expectation-maximization (MAP-EM) approach was formulated to estimate the contribution of each tissue type to the observed perfusion signal at each voxel. Considering the sensitivity of MAP-EM to initialization, an approximately accurate initialization was obtained using a 3D fuzzy c-means method. Our preliminary results demonstrated that the GM and WM patterns across the perfusion image can be sufficiently visualized by the voxel-wise tissue mixtures, which may be promising for the diagnosis of various brain diseases.

  9. Accuracy and Precision of a Surgical Navigation System: Effect of Camera and Patient Tracker Position and Number of Active Markers.

    PubMed

    Gundle, Kenneth R; White, Jedediah K; Conrad, Ernest U; Ching, Randal P

    2017-01-01

    Surgical navigation systems are increasingly used to aid resection and reconstruction of osseous malignancies. In the process of implementing image-based surgical navigation systems, there are numerous opportunities for error that may impact surgical outcome. This study aimed to examine modifiable sources of error in an idealized scenario, when using a bidirectional infrared surgical navigation system. Accuracy and precision were assessed using a computerized-numerical-controlled (CNC) machined grid with known distances between indentations while varying: 1) the distance from the grid to the navigation camera (range 150 to 247 cm), 2) the distance from the grid to the patient tracker device (range 20 to 40 cm), and 3) whether the minimum or maximum number of bidirectional infrared markers were actively functioning. For each scenario, distances between grid points were measured at 10-mm increments between 10 and 120 mm, with twelve measurements made at each distance. The accuracy outcome was the root mean square (RMS) error between the navigation system distance and the actual grid distance. To assess precision, four indentations were recorded six times for each scenario while also varying the angle of the navigation system pointer. The outcome for precision testing was the standard deviation of the distance from each measured point to the mean three-dimensional coordinate of the six points for each cluster. Univariate and multiple linear regression revealed that as the distance from the navigation camera to the grid increased, the RMS error increased (p<0.001). The RMS error also increased when not all infrared markers were actively tracking (p=0.03), and as the measured distance increased (p<0.001). In a multivariate model, these factors accounted for 58% of the overall variance in the RMS error. Standard deviations in repeated measures also increased when not all infrared markers were active (p<0.001), and as the distance between navigation camera and physical space increased (p=0.005). Location of the patient tracker did not affect accuracy (p=0.36) or precision (p=0.97). In our model laboratory test environment, the infrared bidirectional navigation system was more accurate and precise when the distance from the navigation camera to the physical (working) space was minimized and all bidirectional markers were active. These findings may require alterations in operating room setup and software changes to improve the performance of this system.

  10. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.

  11. Improving receiver performance of diffusive molecular communication with enzymes.

    PubMed

    Noel, Adam; Cheung, Karen C; Schober, Robert

    2014-03-01

    This paper studies the mitigation of intersymbol interference in a diffusive molecular communication system using enzymes that freely diffuse in the propagation environment. The enzymes form reaction intermediates with information molecules and then degrade them so that they cannot interfere with future transmissions. A lower bound expression on the expected number of molecules measured at the receiver is derived. A simple binary receiver detection scheme is proposed where the number of observed molecules is sampled at the time when the maximum number of molecules is expected. Insight is also provided into the selection of an appropriate bit interval. The expected bit error probability is derived as a function of the current and all previously transmitted bits. Simulation results show the accuracy of the bit error probability expression and the improvement in communication performance by having active enzymes present.
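
    The "sample at the expected peak" idea can be sketched with the textbook impulse response of free diffusion in 3D (without enzyme degradation, so this simplifies the paper's setting); the release size, diffusion coefficient and distance below are illustrative assumptions.

    ```python
    import numpy as np

    def expected_concentration(N, D, d, t):
        """Expected molecule concentration at distance d (m) and time t (s)
        after an impulsive release of N molecules into an unbounded 3D
        medium with diffusion coefficient D (m^2/s); standard diffusion
        impulse response, no degradation."""
        return N / (4 * np.pi * D * t) ** 1.5 * np.exp(-d**2 / (4 * D * t))

    # The expected signal peaks at t* = d^2 / (6D); sample there.
    N, D, d = 1e4, 1e-10, 1e-6          # molecules, m^2/s, m (illustrative)
    t_peak = d**2 / (6 * D)
    t = np.linspace(0.5 * t_peak, 3 * t_peak, 1000)
    c = expected_concentration(N, D, d, t)
    assert abs(t[np.argmax(c)] - t_peak) < 2 * (t[1] - t[0])
    print(f"peak sampling time ~ {t_peak:.3e} s")
    ```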

  12. Error analysis of speed of sound reconstruction in ultrasound limited angle transmission tomography.

    PubMed

    Jintamethasawat, Rungroj; Lee, Won-Mean; Carson, Paul L; Hooi, Fong Ming; Fowlkes, J Brian; Goodsitt, Mitchell M; Sampson, Richard; Wenisch, Thomas F; Wei, Siyuan; Zhou, Jian; Chakrabarti, Chaitali; Kripfgans, Oliver D

    2018-04-07

    We have investigated limited angle transmission tomography to estimate speed of sound (SOS) distributions for breast cancer detection. That requires both accurate delineation of major tissues, in this case by segmentation of prior B-mode images, and calibration of the relative positions of the opposed transducers. Experimental sensitivity evaluation of the reconstructions with respect to segmentation and calibration errors is difficult with our current system. Therefore, parametric studies of SOS errors in our bent-ray reconstructions were simulated. They included mis-segmentation of an object of interest or a nearby object, and miscalibration of relative transducer positions in 3D. Close correspondence of reconstruction accuracy was verified in the simplest case, a cylindrical object in homogeneous background with induced segmentation and calibration inaccuracies. Simulated mis-segmentation in object size and lateral location produced maximum SOS errors of 6.3% within a 10 mm diameter change and 9.1% within a 5 mm shift, respectively. Modest errors in assumed transducer separation produced the maximum SOS error from miscalibrations (57.3% within a 5 mm shift); still, correction of this type of error can easily be achieved in the clinic. This study should aid in designing adequate transducer mounts and calibration procedures, and in specification of B-mode image quality and segmentation algorithms for limited angle transmission tomography relying on ray tracing algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Evaluation of dynamic balance among community-dwelling older adult fallers: a generalizability study of the limits of stability test.

    PubMed

    Clark, S; Rose, D J

    2001-04-01

    To establish reliability estimates of the 75% Limits of Stability Test (75% LOS test) when administered to community-dwelling older adults with a history of falls. Generalizability theory was used to estimate both the relative contribution of identified error sources to the total measurement error and generalizability coefficients. A random effects repeated-measures analysis of variance (ANOVA) was used to assess consistency of LOS test movement variables across both days and targets. A motor control research laboratory in a university setting. Fifty community-dwelling older adults with 2 or more falls in the previous year. Spatial and temporal measures of dynamic balance derived from the 75% LOS test included average movement velocity, maximum center of gravity (COG) excursion, end-point COG excursion, and directional control. Estimated generalizability coefficients for 2 testing days ranged from.58 to.87. Total variance in LOS test measures attributable to inconsistencies in day-to-day test performance (Day and Subject x Day facets) ranged from 2.5% to 8.4%. The ANOVA results indicated that no significant differences were observed in the LOS test variables across the 2 testing days. The 75% LOS test administered to older adult fallers on 2 consecutive days provides consistent and reliable measures of dynamic balance.

  14. Proprioceptive deficit in patients with complete tearing of the anterior cruciate ligament.

    PubMed

    Godinho, Pedro; Nicoliche, Eduardo; Cossich, Victor; de Sousa, Eduardo Branco; Velasques, Bruna; Salles, José Inácio

    2014-01-01

    To investigate the existence of proprioceptive deficits between the injured limb and the uninjured (i.e. contralateral normal) limb, in individuals who suffered complete tearing of the anterior cruciate ligament (ACL), using a strength reproduction test. Sixteen patients with complete tearing of the ACL participated in the study. A voluntary maximum isometric strength test was performed, with reproduction of the muscle strength in the limb with complete tearing of the ACL and the healthy contralateral limb, with the knee flexed at 60°. The intensity used for the reproduction procedure was 20% of the voluntary maximum isometric strength. The proprioceptive performance was determined by means of absolute error, variable error and constant error values. Significant differences were found between the control group and ACL group for the variables of absolute error (p = 0.05) and constant error (p = 0.01). No difference was found in relation to variable error (p = 0.83). Our data corroborate the hypothesis that there is a proprioceptive deficit in subjects with complete tearing of the ACL in the injured limb, in comparison with the uninjured limb, during evaluation of the sense of strength. This deficit can be explained in terms of partial or total loss of the mechanoreceptors of the ACL.

  15. The reliability of three devices used for measuring vertical jump height.

    PubMed

    Nuzzo, James L; Anning, Jonathan H; Scharfenberg, Jessica M

    2011-09-01

    The purpose of this investigation was to assess the intrasession and intersession reliability of the Vertec, Just Jump System, and Myotest for measuring countermovement vertical jump (CMJ) height. Forty male and 39 female university students completed 3 maximal-effort CMJs during 2 testing sessions, which were separated by 24-48 hours. The height of the CMJ was measured from all 3 devices simultaneously. Systematic error, relative reliability, absolute reliability, and heteroscedasticity were assessed for each device. Systematic error across the 3 CMJ trials was observed within both sessions for males and females, and this was most frequently observed when the CMJ height was measured by the Vertec. No systematic error was discovered across the 2 testing sessions when the maximum CMJ heights from the 2 sessions were compared. In males, the Myotest demonstrated the best intrasession reliability (intraclass correlation coefficient [ICC] = 0.95; SEM = 1.5 cm; coefficient of variation [CV] = 3.3%) and intersession reliability (ICC = 0.88; SEM = 2.4 cm; CV = 5.3%; limits of agreement = -0.08 ± 4.06 cm). Similarly, in females, the Myotest demonstrated the best intrasession reliability (ICC = 0.91; SEM = 1.4 cm; CV = 4.5%) and intersession reliability (ICC = 0.92; SEM = 1.3 cm; CV = 4.1%; limits of agreement = 0.33 ± 3.53 cm). Additional analysis revealed that heteroscedasticity was present in the CMJ when measured from all 3 devices, indicating that better jumpers demonstrate greater fluctuations in CMJ scores across testing sessions. To attain reliable CMJ height measurements, practitioners are encouraged to familiarize athletes with the CMJ technique and then allow the athletes to complete numerous repetitions until performance plateaus, particularly if the Vertec is being used.

  16. Radiated microwave power transmission system efficiency measurements

    NASA Technical Reports Server (NTRS)

    Dickinson, R. M.; Brown, W. C.

    1975-01-01

    The measured and calculated results from determining the operating efficiencies of a laboratory version of a system for transporting electric power from one point to another via a wireless free-space radiated microwave beam are reported. The system's overall end-to-end efficiency as well as intermediate conversion efficiencies were measured. The maximum achieved end-to-end dc-to-dc system efficiency was 54.18% with a probable error of ±0.94%. The dc-to-RF conversion efficiency was measured to be 68.87% ± 1.0% and the RF-to-dc conversion efficiency was 78.67% ± 1.1%. Under these conditions a dc power of 495.62 ± 3.57 W was received with a free-space transmitter-antenna-to-receiver-antenna separation of 170.2 cm (67 in).
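
    As a consistency check on the quoted figures, the two stage efficiencies multiply to the end-to-end value, implying near-unity beam capture in this laboratory configuration:

    ```latex
    \eta_{\mathrm{end\text{-}to\text{-}end}}
      = \eta_{\mathrm{dc \to RF}} \times \eta_{\mathrm{RF \to dc}}
      = 0.6887 \times 0.7867 \approx 0.5418 = 54.18\% .
    ```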

  17. Absorption of Solar Radiation by Clouds: Interpretations of Satellite, Surface, and Aircraft Measurements

    NASA Technical Reports Server (NTRS)

    Cess, R. D.; Zhang, M. H.; Zhou, Y.; Jing, X.; Dvortsov, V.

    1996-01-01

    To investigate the absorption of shortwave radiation by clouds, we have collocated satellite and surface measurements of shortwave radiation at several locations. Considerable effort has been directed toward understanding and minimizing sampling errors caused by the satellite measurements being instantaneous and over a grid that is much larger than the field of view of an upward facing surface pyranometer. The collocated data indicate that clouds absorb considerably more shortwave radiation than is predicted by theoretical models. This is consistent with the finding from both satellite and aircraft measurements that observed clouds are darker than model clouds. In the limit of thick clouds, observed top-of-the-atmosphere albedos do not exceed a value of 0.7, whereas in models the maximum albedo can be 0.8.

  18. Temperature-compensated distributed hydrostatic pressure sensor with a thin-diameter polarization-maintaining photonic crystal fiber based on Brillouin dynamic gratings.

    PubMed

    Teng, Lei; Zhang, Hongying; Dong, Yongkang; Zhou, Dengwang; Jiang, Taofei; Gao, Wei; Lu, Zhiwei; Chen, Liang; Bao, Xiaoyi

    2016-09-15

    A temperature-compensated distributed hydrostatic pressure sensor based on Brillouin dynamic gratings (BDGs) is proposed and demonstrated experimentally for the first time, to the best of our knowledge. The principle is to measure the hydrostatic pressure induced birefringence changes through exciting and probing the BDGs in a thin-diameter pure silica polarization-maintaining photonic crystal fiber. The temperature cross-talk to the hydrostatic pressure sensing can be compensated through measuring the temperature-induced Brillouin frequency shift (BFS) changes using Brillouin optical time-domain analysis. A distributed measurement of hydrostatic pressure is demonstrated experimentally using a 4-m sensing fiber, which has a high sensitivity, with a maximum measurement error less than 0.03 MPa at a 20-cm spatial resolution.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herberger, Sarah M.; Boring, Ronald L.

    Abstract Objectives: This paper discusses the differences between classical human reliability analysis (HRA) dependence and the full spectrum of probabilistic dependence. Positive influence suggests an error increases the likelihood of subsequent errors or success increases the likelihood of subsequent success. Currently the typical method for dependence in HRA implements the Technique for Human Error Rate Prediction (THERP) positive dependence equations. This assumes that the dependence between two human failure events varies at discrete levels between zero and complete dependence (as defined by THERP). Dependence in THERP does not consistently span dependence values between 0 and 1. In contrast, probabilistic dependence employs Bayes' Law and addresses a continuous range of dependence. Methods: Under the laws of probability, complete dependence and maximum positive dependence do not always agree. Maximum dependence is when two events overlap to their fullest amount; maximum negative dependence is the smallest amount that two events can overlap. When the minimum probability of two events overlapping is less than independence, negative dependence occurs. For example, negative dependence is when an operator fails to actuate Pump A, thereby increasing his or her chance of actuating Pump B: the initial error actually increases the chance of subsequent success. Results: Comparing THERP and probability theory yields different results in certain scenarios, with the latter addressing negative dependence. Given that most human failure events are rare, the minimum overlap is typically 0, and when the second event is smaller than the first event the maximum dependence is less than 1, as defined by Bayes' Law. As such, alternative dependence equations are provided along with a look-up table defining the maximum and maximum negative dependence given the probabilities of two events. Conclusions: THERP dependence has been used ubiquitously for decades and has provided approximations of the dependencies between two events. Since its inception, computational abilities have increased exponentially, and alternative approaches that follow the laws of probability dependence need to be implemented. These new approaches need to consider negative dependence and identify when THERP output is not appropriate.
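
    The maximum and maximum-negative dependence described here follow from the Fréchet bounds on a joint probability; a minimal sketch of those bounds (not THERP's discrete dependence levels) is:

    ```python
    def joint_probability_bounds(p_a, p_b):
        """Feasible (min, max) values of P(A and B) given the marginals:
            max(0, P(A)+P(B)-1) <= P(A and B) <= min(P(A), P(B))."""
        return max(0.0, p_a + p_b - 1.0), min(p_a, p_b)

    def conditional_bounds(p_a, p_b):
        """Corresponding bounds on P(B | A): 'complete dependence'
        P(B|A) = 1 is feasible only when P(B) >= P(A)."""
        lo, hi = joint_probability_bounds(p_a, p_b)
        return lo / p_a, hi / p_a

    # Rare human failure events: the minimum overlap is typically 0, and
    # when P(B) < P(A) the maximum of P(B|A) is P(B)/P(A) < 1.
    print(conditional_bounds(0.01, 0.005))  # -> (0.0, 0.5)
    ```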

  20. A maximum likelihood approach to diffeomorphic speckle tracking for 3D strain estimation in echocardiography.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago

    2015-08-01

The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), the strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make the strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in a better performance of speckle tracking. However, those models do not consider common transformations applied to obtain the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. The accuracy and agreement assessment was evaluated on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. Results revealed that the proposed method is the most accurate method to measure the motion and strain, with an average median motion error of 0.42 mm and median strain errors of 2.0 ± 0.9%, 2.1 ± 1.3% and 7.1 ± 4.9% for circumferential, longitudinal and radial strain respectively. It also showed its capability to identify abnormal segments with reduced cardiac function and timing differences for the dyssynchrony cases. These results indicate that the proposed diffeomorphic speckle tracking method provides robust and accurate motion and strain estimation.
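
    For the simplest of the conventional speckle models named above, the maximum likelihood fit is closed-form; a sketch for the Rayleigh case only (the paper generalizes these models and adds local correlation, which this does not capture):

        import numpy as np

        def rayleigh_mle(envelope):
            """Closed-form ML estimate of the Rayleigh scale parameter from
            B-mode envelope samples: sigma_hat = sqrt(mean(x^2) / 2)."""
            x = np.asarray(envelope, dtype=float)
            return np.sqrt(np.mean(x ** 2) / 2.0)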

  1. Single-ping ADCP measurements in the Strait of Gibraltar

    NASA Astrophysics Data System (ADS)

    Sammartino, Simone; García Lafuente, Jesús; Naranjo, Cristina; Sánchez Garrido, José Carlos; Sánchez Leal, Ricardo

    2016-04-01

In most Acoustic Doppler Current Profiler (ADCP) user manuals, it is recommended to apply ensemble averaging of the single-ping measurements in order to obtain reliable observations of the current speed. The random error of a single-ping measurement is typically too high for direct use, while averaging reduces the ensemble error by a factor of approximately √N, with N the number of averaged pings. A 75 kHz ADCP moored in the western exit of the Strait of Gibraltar, part of the long-term monitoring of the Mediterranean outflow, has recently served as a test setup for a different approach to current measurements. The ensemble averaging was disabled, while the internal coordinate conversion made by the instrument was maintained, and a series of single-ping measurements was collected every 36 seconds over a period of approximately 5 months. The large volume of data was handled smoothly by the instrument, and no abnormal battery consumption was recorded. The result is a long and unique series of very high frequency current measurements. This novel approach has been exploited in two ways. From a statistical point of view, the availability of single-ping measurements allows a real (a posteriori) estimate of the ensemble average error of both current and ancillary variables. While the theoretical random error for horizontal velocity is estimated a priori as ˜2 cm s-1 for a 50-ping ensemble, the value obtained by a posteriori averaging is ˜15 cm s-1, with an asymptotic behavior starting from an averaging size of 10 pings per ensemble. This result suggests the presence of external sources of random error (e.g., turbulence) of higher magnitude than the internal sources (ADCP intrinsic precision), which cannot be reduced by ensemble averaging. On the other hand, although the instrumental configuration is clearly not suitable for a precise estimation of turbulent parameters, some hints of the turbulent structure of the flow can be obtained by the empirical computation of the zonal Reynolds stress (along the predominant direction of the current) and the rates of production and dissipation of turbulent kinetic energy. All the parameters show a clear correlation with tidal fluctuations of the current, with maximum values coinciding with flood tides, during the maxima of the outflow Mediterranean current.
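
    A sketch of the a posteriori error estimate described above, on synthetic single-ping data (the 36 s ping interval matches the deployment; the noise and tidal-band amplitudes are invented for illustration). The instrument-noise term shrinks as 1/√N, while the slowly varying component sets the asymptote:

        import numpy as np

        def ensemble_error(pings, n):
            """Std. dev. of n-ping ensemble means, estimated a posteriori
            from a long single-ping record (1-D array, m/s)."""
            m = len(pings) // n
            means = pings[:m * n].reshape(m, n).mean(axis=1)
            return means.std(ddof=1)

        rng = np.random.default_rng(0)
        t = np.arange(36000) * 36.0                     # one ping every 36 s
        tidal = 0.15 * np.sin(2 * np.pi * t / 44700.0)  # tidal-band variability
        pings = 0.5 + tidal + rng.normal(0.0, 0.40, t.size)
        for n in (1, 10, 50):
            print(n, round(ensemble_error(pings, n), 3))  # flattens near n ~ 10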

  2. Biomass Thermogravimetric Analysis: Uncertainty Determination Methodology and Sampling Maps Generation

    PubMed Central

    Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Eguía, Pablo; Collazo, Joaquín

    2010-01-01

The objective of this study was to develop a methodology for the determination of the maximum sampling error and confidence intervals of thermal properties obtained from thermogravimetric analysis (TG), including moisture, volatile matter, fixed carbon and ash content. The sampling procedure of the TG analysis was of particular interest and was conducted with care. The results of the present study were compared to those of a prompt analysis, and a correlation between the mean values and maximum sampling errors of the two methods was not observed. In general, low and acceptable levels of uncertainty and error were obtained, demonstrating that the properties evaluated by TG analysis were representative of the overall fuel composition. The accurate determination of the thermal properties of biomass with precise confidence intervals is of particular interest in energetic biomass applications. PMID:20717532

  3. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    PubMed

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  4. Wing Shape Sensing from Measured Strain

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2015-01-01

A new two-step theory is investigated for predicting the deflection and slope of an entire structure using strain measurements at discrete locations. In the first step, a measured strain is fitted using a piecewise least-squares curve fitting method together with the cubic spline technique. These fitted strains are integrated twice to obtain deflection data along the fibers. In the second step, computed deflection along the fibers are combined with a finite element model of the structure in order to interpolate and extrapolate the deflection and slope of the entire structure through the use of the System Equivalent Reduction and Expansion Process. The theory is first validated on a computational model, a cantilevered rectangular plate wing. The theory is then applied to test data from a cantilevered swept-plate wing model. Computed results are compared with finite element results, results using another strain-based method, and photogrammetry data. For the computational model under an aeroelastic load, maximum deflection errors in the fore and aft, lateral, and vertical directions are -3.2 percent, 0.28 percent, and 0.09 percent, respectively; and maximum slope errors in roll and pitch directions are 0.28 percent and -3.2 percent, respectively. For the experimental model, deflection results at the tip are shown to be accurate to within 3.8 percent of the photogrammetry data and are accurate to within 2.2 percent in most cases. In general, excellent matching between target and computed values is accomplished in this study. Future refinement of this theory will allow it to monitor the deflection and health of an entire aircraft in real time, allowing for aerodynamic load computation, active flexible motion control, and active induced drag reduction.
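
    A compressed sketch of the first step only (fitting the measured strain and integrating twice along a fiber); a plain cubic spline stands in for the piecewise least-squares fit, and the SEREP expansion of the second step is omitted:

        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.integrate import cumulative_trapezoid  # cumtrapz in older SciPy

        def deflection_from_strain(x, strain, c):
            """x: sensor locations along the fiber (m), strain: measured bending
            strain, c: sensor offset from the neutral axis (m). Cantilever root
            at x[0], so slope and deflection start at zero."""
            eps = CubicSpline(x, strain)        # smooth fit to discrete strain
            xs = np.linspace(x[0], x[-1], 200)
            kappa = eps(xs) / c                 # curvature = strain / offset
            slope = cumulative_trapezoid(kappa, xs, initial=0.0)  # w'(x)
            w = cumulative_trapezoid(slope, xs, initial=0.0)      # w(x)
            return xs, w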

  6. Estimation of Uncertainties in Stage-Discharge Curve for an Experimental Himalayan Watershed

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Sen, S.

    2016-12-01

Various water resource projects developed on rivers originating from the Himalayan region, the "Water Tower of Asia", play an important role in downstream development. Flow measurements at the desired river site are critical for river engineers and hydrologists for water resources planning and management, flood forecasting, reservoir operation and flood inundation studies. However, accurate discharge assessment of these mountainous rivers is costly, tedious and frequently dangerous to operators during flood events. Currently, in India, discharge estimation relies on the stage-discharge relationship known as the rating curve. This relationship is affected by a high degree of uncertainty. Estimating the uncertainty of a rating curve remains a relevant challenge because it is not easy to parameterize. The main sources of rating curve uncertainty are errors from incorrect discharge measurement, variation in hydraulic conditions and depth measurement. In this study our objective is to obtain the best parameters of the rating curve that fit the limited record of observations and to estimate the uncertainties at different depths obtained from the rating curve. The rating curve parameters of the standard power law are estimated for three different streams of the Aglar watershed, located in the lesser Himalayas, by a maximum-likelihood estimator. Quantification of uncertainties in the developed rating curves is obtained from the estimated variances and covariances of the rating curve parameters. Results showed that the uncertainties varied with catchment behavior, with errors between 0.006 and 1.831 m3/s. Discharge uncertainty in the Aglar watershed streams depends significantly on the extent of extrapolation outside the range of observed water levels. Extrapolation analysis confirmed that extrapolating more than 15% beyond maximum discharges or 5% beyond minimum discharges is not recommended for these mountainous gauging sites.
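
    A sketch of fitting the standard power-law rating curve Q = a(h - h0)^b and propagating the parameter covariance to a discharge uncertainty. The gauged pairs below are illustrative, not the Aglar observations, and nonlinear least squares stands in for the maximum-likelihood estimator under Gaussian errors:

        import numpy as np
        from scipy.optimize import curve_fit

        def rating(h, a, h0, b):
            return a * np.clip(h - h0, 1e-9, None) ** b

        h = np.array([0.30, 0.45, 0.60, 0.80, 1.05, 1.30])   # stage (m)
        q = np.array([0.11, 0.41, 0.95, 1.74, 3.12, 4.61])   # discharge (m^3/s)

        popt, pcov = curve_fit(rating, h, q, p0=(2.0, 0.1, 1.5))
        q_new = rating(1.20, *popt)

        # First-order propagation of the parameter covariance to Q(1.20 m):
        eps = 1e-6
        jac = np.array([(rating(1.20, *(popt + d)) - q_new) / eps
                        for d in np.eye(3) * eps])
        q_sd = float(np.sqrt(jac @ pcov @ jac))
        print(q_new, q_sd)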

  7. Intra and inter-session reliability of rapid Transcranial Magnetic Stimulation stimulus-response curves of tibialis anterior muscle in healthy older adults.

    PubMed

    Peri, Elisabetta; Ambrosini, Emilia; Colombo, Vera Maria; van de Ruit, Mark; Grey, Michael J; Monticone, Marco; Ferriero, Giorgio; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Ferrante, Simona

    2017-01-01

The clinical use of Transcranial Magnetic Stimulation (TMS) as a technique to assess corticospinal excitability is limited by the time needed for data acquisition and by measurement variability. This study aimed at evaluating the reliability of Stimulus-Response (SR) curves acquired with a recently proposed rapid protocol on the tibialis anterior muscle of healthy older adults. Twenty-four neurologically intact adults (age: 55-75 years) were recruited for this test-retest study. During each session, six SR curves, 3 at rest and 3 during isometric muscle contractions at 5% of maximum voluntary contraction (MVC), were acquired. Motor Evoked Potentials (MEPs) were normalized to the maximum peripherally evoked response; the coil position and orientation were monitored with an optical tracking system. Intra- and inter-session reliability of motor threshold (MT), area under the curve (AURC), MEPmax, stimulation intensity at which the MEP is mid-way between MEPmax and MEPmin (I50), slope at I50, MEP latency, and silent period (SP) were assessed in terms of Standard Error of Measurement (SEM), relative SEM, Minimum Detectable Change (MDC), and Intraclass Correlation Coefficient (ICC). The relative SEM was ≤10% for MT, I50, latency and SP both at rest and at 5%MVC, while it ranged between 11% and 37% for AURC, MEPmax, and slope. MDC values were overall quite large; e.g., MT required a change of 12%MSO at rest and 10%MSO at 5%MVC to be considered a real change. Inter-session ICCs were >0.6 for all measures except slope at rest, and MEPmax and latency at 5%MVC. Measures derived from SR curves acquired in <4 minutes are affected by measurement errors similar to those found with long-lasting protocols, suggesting that the rapid method is at least as reliable as the traditional methods. As it was specifically designed to include older adults, this study provides normative data for future studies involving older neurological patients (e.g. stroke survivors).
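
    The reliability statistics above can be reproduced from paired test-retest scores; a sketch using the usual SEM and MDC95 formulas (assuming one score per subject per session):

        import numpy as np

        def sem_mdc(test, retest):
            """Standard error of measurement from within-pair variability,
            and the 95% minimum detectable change derived from it."""
            diff = np.asarray(retest, float) - np.asarray(test, float)
            sem = diff.std(ddof=1) / np.sqrt(2.0)
            mdc95 = 1.96 * np.sqrt(2.0) * sem
            return sem, mdc95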

  8. Development of Maximum Bubble Pressure Method for Surface Tension Measurement of High Viscosity Molten Silicate

    NASA Astrophysics Data System (ADS)

    Takeda, Osamu; Iwamoto, Hirone; Sakashita, Ryota; Iseki, Chiaki; Zhu, Hongmin

    2017-07-01

A surface tension measurement method based on the maximum bubble pressure (MBP) method was developed in order to precisely determine the surface tension of molten silicates in this study. Specifically, the influence of viscosity on surface tension measurements was quantified, and the criteria for accurate measurement were investigated. It was found that the MBP apparently increased with an increase in viscosity. This was because extra pressure was required to drive the liquid flowing inside the capillary against viscous resistance. It was also expected that the extra pressure would decrease with decreasing fluid velocity. For silicone oil with a viscosity of 1000 mPa·s, the error in the MBP could be decreased to +1.7% by increasing the bubble detachment time to 300 s. However, the error was still over 1% even when the bubble detachment time was increased to 600 s. Therefore, a true value of the MBP was determined by using a curve-fitting technique with a simple relaxation function, and this succeeded for silicone oil at 1000 mPa·s viscosity. Furthermore, for silicone oil with a viscosity as high as 10 000 mPa·s, the apparent MBP approached the true value when the gas introduction was interrupted during the pressure-rising period and the gas was re-introduced at a slow flow rate. Based on this fundamental investigation at room temperature, the surface tension of SiO2-40 mol% Na2O and SiO2-50 mol% Na2O melts was determined at high temperature. The obtained values were slightly lower than the literature values, which might be because the influence of viscosity on surface tension measurements was removed in this study.
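
    For reference, the idealized MBP arithmetic with no viscosity correction, which is exactly the regime the paper shows breaks down at high viscosity; the symbols and example values are illustrative:

        def surface_tension_mbp(p_max, rho, depth, r, g=9.81):
            """Young-Laplace estimate from the maximum bubble pressure: the
            capillary pressure at the maximum is p_max minus the hydrostatic
            head at the capillary tip. Units: Pa, kg/m^3, m, m -> N/m."""
            p_capillary = p_max - rho * g * depth
            return p_capillary * r / 2.0

        # Water-like liquid, 10 mm immersion, 0.5 mm capillary radius:
        print(surface_tension_mbp(386.0, 1000.0, 0.010, 0.0005))  # ~0.072 N/m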

  9. Dependence of Aerosol Light Absorption and Single-Scattering Albedo On Ambient Relative Humidity for Sulfate Aerosols with Black Carbon Cores

    NASA Technical Reports Server (NTRS)

    Redemann, Jens; Russell, Philip B.; Hamill, Patrick

    2001-01-01

    Atmospheric aerosols frequently contain hygroscopic sulfate species and black carbon (soot) inclusions. In this paper we report results of a modeling study to determine the change in aerosol absorption due to increases in ambient relative humidity (RH), for three common sulfate species, assuming that the soot mass fraction is present as a single concentric core within each particle. Because of the lack of detailed knowledge about various input parameters to models describing internally mixed aerosol particle optics, we focus on results that were aimed at determining the maximum effect that particle humidification may have on aerosol light absorption. In the wavelength range from 450 to 750 nm, maximum absorption humidification factors (ratio of wet to 'dry=30% RH' absorption) for single aerosol particles are found to be as large as 1.75 when the RH changes from 30 to 99.5%. Upon lesser humidification from 30 to 80% RH, absorption humidification for single particles is only as much as 1.2, even for the most favorable combination of initial ('dry') soot mass fraction and particle size. Integrated over monomodal lognormal particle size distributions, maximum absorption humidification factors range between 1.07 and 1.15 for humidification from 30 to 80% and between 1.1 and 1.35 for humidification from 30 to 95% RH for all species considered. The largest humidification factors at a wavelength of 450 nm are obtained for 'dry' particle size distributions that peak at a radius of 0.05 microns, while the absorption humidification factors at 700 nm are largest for 'dry' size distributions that are dominated by particles in the radius range of 0.06 to 0.08 microns. Single-scattering albedo estimates at ambient conditions are often based on absorption measurements at low RH (approx. 30%) and the assumption that aerosol absorption does not change upon humidification (i.e., absorption humidification equal to unity). Our modeling study suggests that this assumption alone can introduce absolute errors in estimates of the midvisible single-scattering albedo of up to 0.05 for realistic dry particle size distributions. Our study also indicates that this error increases with increasing wavelength. The potential errors in aerosol single-scattering albedo derived here are comparable in magnitude and in addition to uncertainties in single-scattering albedo estimates that are based on measurements of aerosol light absorption and scattering.

  10. Finite element modelling and updating of a lively footbridge: The complete process

    NASA Astrophysics Data System (ADS)

    Živanović, Stana; Pavic, Aleksandar; Reynolds, Paul

    2007-03-01

The finite element (FE) model updating technology was originally developed in the aerospace and mechanical engineering disciplines to automatically update numerical models of structures to match their experimentally measured counterparts. The process of updating identifies the drawbacks in the FE modelling, and the updated FE model can be used to produce more reliable results in further dynamic analysis. In the last decade, the updating technology has been introduced into civil structural engineering, where it can serve as an advanced tool for obtaining reliable modal properties of large structures. The updating process has four key phases: initial FE modelling, modal testing, manual model tuning and automatic updating (conducted using specialist software). However, the published literature does not connect these phases well, although this is crucial when implementing the updating technology. This paper therefore aims to clarify the importance of this linking and to describe the complete model updating process as applicable in civil structural engineering. The complete process consisting of the four phases is outlined and brief theory is presented as appropriate. Then, the procedure is implemented on a lively steel box girder footbridge. It was found that even a very detailed initial FE model underestimated the natural frequencies of all seven experimentally identified modes of vibration, with the maximum error being almost 30%. Manual FE model tuning by trial and error found that flexible supports in the longitudinal direction should be introduced at the girder ends to improve correlation between the measured and FE-calculated modes. This significantly reduced the maximum frequency error to only 4%. It was demonstrated that only then could the FE model be automatically updated in a meaningful way. The automatic updating was successfully conducted by updating 22 uncertain structural parameters. Finally, a physical interpretation of all parameter changes is discussed; this interpretation is often missing in the published literature. It was found that the composite slabs were less stiff than originally assumed and that the asphalt layer contributed considerably to the deck stiffness.

  11. SU-G-BRA-07: An Innovative Fiducial-Less Tracking Method for Radiation Treatment of Abdominal Tumors by Diaphragm Disparity Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dick, D; Zhao, W; Wu, X

    2016-06-15

Purpose: To investigate the feasibility of tracking abdominal tumors without the use of gold fiducial markers. Methods: In this simulation study, an abdominal 4DCT dataset, acquired previously and containing 8 phases of the breathing cycle, was used as the testing data. Two sets of digitally reconstructed radiograph (DRR) images (45 and 135 degrees) were generated for each phase. Three anatomical points along the lung-diaphragm interface on each of the DRR images were identified by cross-correlation. The gallbladder, which simulates the tumor, was contoured for each phase of the breathing cycle, and the corresponding centroid values serve as the measured center of the tumor. A linear model was created to correlate the diaphragm's disparity at the three identified anatomical points with the center of the tumor. To verify the established linear model, we sequentially removed one phase of the data (i.e., 3 anatomical points and the corresponding tumor center) and created new linear models with the remaining 7 phases. Then we substituted the eliminated phase data (disparities of the 3 anatomical points) into the corresponding model to compare the model-generated tumor center with the measured tumor center. Results: The maximum differences between the modeled and the measured centroid values across the 8 phases were 0.72, 0.29 and 0.30 pixels in the x, y and z directions respectively, which yielded a maximum mean-squared-error value of 0.75 pixels. The verification process, eliminating each phase in turn, produced mean-squared errors ranging from 0.41 to 1.28 pixels. Conclusion: Gold fiducial markers, requiring surgical procedures to be implanted, are conventionally used in radiation therapy. The present work shows the feasibility of a fiducial-less tracking method for localizing abdominal tumors. Through the developed diaphragm disparity analysis, the established linear model was verified with clinically accepted errors. The tracking method in real time under different radiation therapy platforms will be further investigated.
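
    A sketch of the linear disparity-to-centroid model and the leave-one-phase-out verification described above, assuming one disparity value per anatomical point (an n x 3 design matrix) and a 3-D centroid per phase:

        import numpy as np

        def fit_linear(X, Y):
            """Least-squares linear map from disparities X (n x 3) to tumor
            centroids Y (n x 3), with an intercept term."""
            A = np.hstack([X, np.ones((len(X), 1))])
            W, *_ = np.linalg.lstsq(A, Y, rcond=None)
            return W

        def loo_errors(X, Y):
            """Leave one phase out, refit, predict the held-out centroid."""
            errs = []
            for k in range(len(X)):
                keep = np.arange(len(X)) != k
                W = fit_linear(X[keep], Y[keep])
                pred = np.append(X[k], 1.0) @ W
                errs.append(np.sqrt(np.mean((pred - Y[k]) ** 2)))
            return np.array(errs)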

  12. A Modified MinMax k-Means Algorithm Based on PSO

    PubMed Central

    2016-01-01

The MinMax k-means algorithm is widely used to tackle the effect of bad initialization by minimizing the maximum intra-cluster error. Two parameters, the exponent parameter and the memory parameter, are involved in the executive process. Since different parameters lead to different clustering errors, it is crucial to choose appropriate parameters. In the original algorithm, a practical framework is given. Such a framework extends MinMax k-means to automatically adapt the exponent parameter to the data set. It has been believed that if the maximum exponent parameter has been set, then the programme can reach the lowest intra-cluster errors. However, our experiments show that this is not always correct. In this paper, we modified the MinMax k-means algorithm by PSO to determine the proper values of the parameters which allow the algorithm to attain the lowest clustering errors. The proposed clustering method is tested on several benchmark data sets in different initial situations and is compared to the k-means algorithm and the original MinMax k-means algorithm. The experimental results indicate that our proposed algorithm can reach the lowest clustering errors automatically. PMID:27656201

  13. Design and simulation of sensor networks for tracking Wifi users in outdoor urban environments

    NASA Astrophysics Data System (ADS)

    Thron, Christopher; Tran, Khoi; Smith, Douglas; Benincasa, Daniel

    2017-05-01

We present a proof-of-concept investigation into the use of sensor networks for tracking of WiFi users in outdoor urban environments. Sensors are fixed and are capable of measuring signal power from users' WiFi devices. We derive a maximum likelihood estimate for user location based on instantaneous sensor power measurements. The algorithm takes into account the effects of power control and is self-calibrating, in that the signal power model used by the location algorithm is adjusted and improved as part of the operation of the network. Simulation results to verify the system's performance are presented. The simulation scenario is based on a 1.5 km2 area of lower Manhattan. The self-calibration mechanism was verified for initial rms (root mean square) errors of up to 12 dB in the channel power estimates: rms errors were reduced by over 60% in 300 track-hours, in systems with limited power control. Under typical operating conditions with (without) power control, location rms errors are about 8.5 (5) meters, with 90% accuracy within 9 (13) meters, for both pedestrian and vehicular users. The distance error distributions for smaller distances (<30 m) are well approximated by an exponential distribution, while the distributions for large distance errors have fat tails. The issue of optimal sensor placement in the sensor network is also addressed. We specify a linear programming algorithm for determining sensor placement for networks with a reduced number of sensors. In our test case, the algorithm produces a network with 18.5% fewer sensors and comparable estimation accuracy. Finally, we discuss future research directions for improving the accuracy and capabilities of sensor network systems in urban environments.
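
    A minimal sketch of the maximum-likelihood location step under an assumed log-distance path-loss model; the sensor layout, path-loss exponent and noise level are invented, and the paper's power-control handling and self-calibration are omitted:

        import numpy as np
        from scipy.optimize import minimize

        SENSORS = np.array([[0.0, 0.0], [300.0, 0.0],
                            [0.0, 300.0], [300.0, 300.0]])  # fixed positions (m)
        P0, N_EXP, SIGMA = -30.0, 3.0, 8.0  # dB at 1 m, exponent, noise sd (dB)

        def neg_log_lik(xy, p_meas):
            d = np.linalg.norm(SENSORS - xy, axis=1).clip(1.0)
            p_pred = P0 - 10.0 * N_EXP * np.log10(d)
            return np.sum((p_meas - p_pred) ** 2) / (2.0 * SIGMA ** 2)

        def locate(p_meas):
            """ML position from one power snapshot; two starting points guard
            against local minima of the non-convex likelihood."""
            starts = [np.array([50.0, 50.0]), np.array([250.0, 250.0])]
            fits = [minimize(neg_log_lik, g, args=(p_meas,)) for g in starts]
            return min(fits, key=lambda r: r.fun).x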

  14. Land Surface Temperature Measurements form EOS MODIS Data

    NASA Technical Reports Server (NTRS)

    Wan, Zhengming

    1996-01-01

We have developed a physics-based land-surface temperature (LST) algorithm for simultaneously retrieving surface band-averaged emissivities and temperatures from day/night pairs of MODIS (Moderate Resolution Imaging Spectroradiometer) data in seven thermal infrared bands. The set of 14 nonlinear equations in the algorithm is solved with the statistical regression method and the least-squares fit method. This new LST algorithm was tested with simulated MODIS data for 80 sets of band-averaged emissivities calculated from published spectral data of terrestrial materials in wide ranges of atmospheric and surface temperature conditions. A comprehensive sensitivity and error analysis has been made to evaluate the performance of the new LST algorithm and its dependence on variations in surface emissivity and temperature, on atmospheric conditions, and on the noise-equivalent temperature difference (NEΔT) and calibration accuracy specifications of the MODIS instrument. In cases with a systematic calibration error of 0.5%, the standard deviations of errors in retrieved surface daytime and nighttime temperatures fall between 0.4-0.5 K over a wide range of surface temperatures for mid-latitude summer conditions. The standard deviations of errors in retrieved emissivities in bands 31 and 32 (in the 10-12.5 micrometer IR spectral window region) are 0.009, and the maximum error in retrieved LST values falls between 2-3 K. Several issues related to the day/night LST algorithm (uncertainties in the day/night registration and in surface emissivity changes caused by dew occurrence, and cloud cover) have been investigated. The LST algorithms have been validated with MODIS Airborne Simulator (MAS) data and ground-based measurement data in two field campaigns conducted in Railroad Valley playa, NV, in 1995 and 1996. The MODIS LST version 1 software has been delivered.

  15. Determinants of Standard Errors of MLEs in Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Cheng, Ying; Zhang, Wei

    2010-01-01

    This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found.…

  16. Output Error Analysis of Planar 2-DOF Five-bar Mechanism

    NASA Astrophysics Data System (ADS)

    Niu, Kejia; Wang, Jun; Ting, Kwun-Lon; Tao, Fen; Cheng, Qunchao; Wang, Quan; Zhang, Kaiyang

    2018-03-01

Aiming at the mechanism error caused by joint clearance in the planar 2-DOF five-bar mechanism, the method of modeling the clearance of each kinematic pair as an equivalent virtual link is applied. The structural error model for revolute joint clearance is established based on the N-bar rotation laws and the concept of joint rotation space. The influence of joint clearance on the output error of the mechanism is studied, and the calculation method and basis for the maximum error are given. The error rotation space of the mechanism under the influence of joint clearance is obtained. The results show that this method can accurately calculate the error rotation space, which provides a new way to analyze errors of planar parallel mechanisms caused by joint clearance.

  17. Combination volumetric and gravimetric sorption instrument for high accuracy measurements of methane adsorption

    NASA Astrophysics Data System (ADS)

    Burress, Jacob; Bethea, Donald; Troub, Brandon

    2017-05-01

    The accurate measurement of adsorbed gas up to high pressures (˜100 bars) is critical for the development of new materials for adsorbed gas storage. The typical Sievert-type volumetric method introduces accumulating errors that can become large at maximum pressures. Alternatively, gravimetric methods employing microbalances require careful buoyancy corrections. In this paper, we present a combination gravimetric and volumetric system for methane sorption measurements on samples between ˜0.5 and 1 g. The gravimetric method described requires no buoyancy corrections. The tandem use of the gravimetric method allows for a check on the highest uncertainty volumetric measurements. The sources and proper calculation of uncertainties are discussed. Results from methane measurements on activated carbon MSC-30 and metal-organic framework HKUST-1 are compared across methods and within the literature.

  18. Combination volumetric and gravimetric sorption instrument for high accuracy measurements of methane adsorption.

    PubMed

    Burress, Jacob; Bethea, Donald; Troub, Brandon

    2017-05-01

    The accurate measurement of adsorbed gas up to high pressures (∼100 bars) is critical for the development of new materials for adsorbed gas storage. The typical Sievert-type volumetric method introduces accumulating errors that can become large at maximum pressures. Alternatively, gravimetric methods employing microbalances require careful buoyancy corrections. In this paper, we present a combination gravimetric and volumetric system for methane sorption measurements on samples between ∼0.5 and 1 g. The gravimetric method described requires no buoyancy corrections. The tandem use of the gravimetric method allows for a check on the highest uncertainty volumetric measurements. The sources and proper calculation of uncertainties are discussed. Results from methane measurements on activated carbon MSC-30 and metal-organic framework HKUST-1 are compared across methods and within the literature.

  19. Multiple symbol partially coherent detection of MPSK

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Divsalar, D.

    1992-01-01

It is shown that by using the known (or estimated) value of carrier tracking loop signal-to-noise ratio (SNR) in the decision metric, it is possible to improve the error probability performance of a partially coherent multiple phase-shift-keying (MPSK) system relative to that corresponding to the commonly used ideal coherent decision rule. Using a maximum-likelihood approach, an optimum decision metric is derived and shown to take the form of a weighted sum of the ideal coherent decision metric (i.e., correlation) and the noncoherent decision metric which is optimum for differential detection of MPSK. The performance of a receiver based on this optimum decision rule is derived and shown to provide continued improvement with increasing length of observation interval (data symbol sequence length). Unfortunately, increasing the observation length does not eliminate the error floor associated with the finite loop SNR. Nevertheless, in the limit of infinite observation length, the average error probability performance approaches the algebraic sum of the error floor and the performance of ideal coherent detection, i.e., at any error probability above the error floor, there is no degradation due to the partial coherence. It is shown that this limiting behavior is virtually achievable with practical size observation lengths. Furthermore, the performance is quite insensitive to mismatch between the estimate of loop SNR (e.g., obtained from measurement) fed to the decision metric and its true value. These results may be of use in low-cost Earth-orbiting or deep-space missions employing coded modulations.

  20. The Measurement of Pressure Through Tubes in Pressure Distribution Tests

    NASA Technical Reports Server (NTRS)

    Hemke, Paul E

    1928-01-01

    The tests described in this report were made to determine the error caused by using small tubes to connect orifices on the surface of aircraft to central pressure capsules in making pressure distribution tests. Aluminum tubes of 3/16-inch inside diameter were used to determine this error. Lengths from 20 feet to 226 feet and pressures whose maxima varied from 2 inches to 140 inches of water were used. Single-pressure impulses for which the time of rise of pressure from zero to a maximum varied from 0.25 second to 3 seconds were investigated. The results show that the pressure recorded at the capsule on the far end of the tube lags behind the pressure at the orifice end and experiences also a change in magnitude. For the values used in these tests the time lag and pressure change vary principally with the time of rise of pressure from zero to a maximum and the tube length. Curves are constructed showing the time lag and pressure change. Empirical formulas are also given for computing the time lag. Analysis of pressure distribution tests made on airplanes in flight shows that the recorded pressures are slightly higher than the pressures at the orifice and that the time lag is negligible. The apparent increase in pressure is usually within the experimental error, but in the case of the modern pursuit type of airplane the pressure increase may be 5 per cent. For pressure-distribution tests on airships the analysis shows that the time lag and pressure change may be neglected.

  1. CEDAR-GEM Challenge for Systematic Assessment of Ionosphere/Thermosphere Models in Predicting TEC During the 2006 December Storm Event

    NASA Astrophysics Data System (ADS)

    Shim, J. S.; Rastätter, L.; Kuznetsova, M.; Bilitza, D.; Codrescu, M.; Coster, A. J.; Emery, B. A.; Fedrizzi, M.; Förster, M.; Fuller-Rowell, T. J.; Gardner, L. C.; Goncharenko, L.; Huba, J.; McDonald, S. E.; Mannucci, A. J.; Namgaladze, A. A.; Pi, X.; Prokhorov, B. E.; Ridley, A. J.; Scherliess, L.; Schunk, R. W.; Sojka, J. J.; Zhu, L.

    2017-10-01

In order to assess current modeling capability of reproducing storm impacts on total electron content (TEC), we considered quantities such as TEC, TEC changes compared to quiet time values, and the maximum value of the TEC and TEC changes during a storm. We compared the quantities obtained from ionospheric models against ground-based GPS TEC measurements during the 2006 AGU storm event (14-15 December 2006) in the selected eight longitude sectors. We used 15 simulations obtained from eight ionospheric models, including empirical, physics-based, coupled ionosphere-thermosphere, and data assimilation models. To quantitatively evaluate performance of the models in TEC prediction during the storm, we calculated skill scores such as RMS error, normalized RMS error (NRMSE), the ratio of the modeled to observed maximum increase (Yield), and the difference between the modeled peak time and observed peak time. Furthermore, to investigate the latitudinal dependence of the performance of the models, the skill scores were calculated for five latitude regions. Our study shows that the RMSE of TEC and TEC changes of the model simulations ranges from about 3 TECU (total electron content unit, 1 TECU = 10¹⁶ el m⁻²) (in high latitudes) to about 13 TECU (in low latitudes), which is larger than the latitudinal average GPS TEC error of about 2 TECU. Most model simulations predict TEC better than TEC changes in terms of NRMSE and the difference in peak time, while the opposite holds true in terms of Yield. Model performance strongly depends on the quantities considered, the type of metrics used, and the latitude considered.
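
    The skill scores named above are simple to compute per time series; a sketch assuming the inputs are storm-time TEC-change series sampled at common times:

        import numpy as np

        def skill_scores(model, obs):
            """RMSE, normalized RMSE, Yield, and peak-time offset (samples)."""
            model, obs = np.asarray(model, float), np.asarray(obs, float)
            rmse = np.sqrt(np.mean((model - obs) ** 2))
            nrmse = rmse / np.sqrt(np.mean(obs ** 2))
            yield_ratio = model.max() / obs.max()  # modeled vs observed max increase
            dt_peak = int(np.argmax(model)) - int(np.argmax(obs))
            return rmse, nrmse, yield_ratio, dt_peak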

  2. Levels of maximum end-expiratory carbon monoxide and certain cardiovascular parameters following hubble-bubble smoking.

    PubMed

    Shafagoj, Yanal A; Mohammed, Faisal I

    2002-08-01

The physiological effects of cigarette smoking have been widely studied; however, little is known regarding the effects of smoking hubble-bubble. We examined the acute effects of hubble-bubble smoking on heart rate, systolic, diastolic, and mean arterial blood pressure and maximum end-expiratory carbon monoxide. This study was carried out in the student laboratory, School of Medicine, Department of Physiology, University of Jordan, Amman, Jordan, during the summer of 1999. In 18 healthy habitual hubble-bubble smokers, heart rate, blood pressure, and maximum end-expiratory carbon monoxide were measured before, during and after smoking of one hubble-bubble run (45 minutes). Compared to baseline (time zero), at the end of smoking the heart rate, systolic blood pressure, diastolic blood pressure, mean arterial blood pressure, and maximum end-expiratory carbon monoxide were increased by 16 ± 2.4 beats per minute, 6.7 ± 2.5 mm Hg, 4.4 ± 1.6 mm Hg, 5.2 ± 1.7 mm Hg, and 14.2 ± 1.8 ppm, respectively (mean ± standard error of the mean, P < .05). Acute short-term active hubble-bubble smoking elicits a modest increase in heart rate, systolic blood pressure, diastolic blood pressure, mean arterial blood pressure and maximum end-expiratory carbon monoxide in healthy hubble-bubble smokers.

  3. Maximum likelihood phase-retrieval algorithm: applications.

    PubMed

    Nahrstedt, D A; Southwell, W H

    1984-12-01

    The maximum likelihood estimator approach is shown to be effective in determining the wave front aberration in systems involving laser and flow field diagnostics and optical testing. The robustness of the algorithm enables convergence even in cases of severe wave front error and real, nonsymmetrical, obscured amplitude distributions.

  4. A Comparison of Three Multivariate Models for Estimating Test Battery Reliability.

    ERIC Educational Resources Information Center

    Wood, Terry M.; Safrit, Margaret J.

    1987-01-01

    A comparison of three multivariate models (canonical reliability model, maximum generalizability model, canonical correlation model) for estimating test battery reliability indicated that the maximum generalizability model showed the least degree of bias, smallest errors in estimation, and the greatest relative efficiency across all experimental…

  5. MIXREG: a computer program for mixed-effects regression analysis with autocorrelated errors.

    PubMed

    Hedeker, D; Gibbons, R D

    1996-05-01

    MIXREG is a program that provides estimates for a mixed-effects regression model (MRM) for normally-distributed response data including autocorrelated errors. This model can be used for analysis of unbalanced longitudinal data, where individuals may be measured at a different number of timepoints, or even at different timepoints. Autocorrelated errors of a general form or following an AR(1), MA(1), or ARMA(1,1) form are allowable. This model can also be used for analysis of clustered data, where the mixed-effects model assumes data within clusters are dependent. The degree of dependency is estimated jointly with estimates of the usual model parameters, thus adjusting for clustering. MIXREG uses maximum marginal likelihood estimation, utilizing both the EM algorithm and a Fisher-scoring solution. For the scoring solution, the covariance matrix of the random effects is expressed in its Gaussian decomposition, and the diagonal matrix reparameterized using the exponential transformation. Estimation of the individual random effects is accomplished using an empirical Bayes approach. Examples illustrating usage and features of MIXREG are provided.

  6. Small Body GN and C Research Report: G-SAMPLE - An In-Flight Dynamical Method for Identifying Sample Mass [External Release Version

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Bayard, David S.

    2006-01-01

    G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.

  7. Analytic Perturbation Method for Estimating Ground Flash Fraction from Satellite Lightning Observations

    NASA Technical Reports Server (NTRS)

    Koshak, William; Solakiewicz, Richard

    2013-01-01

    An analytic perturbation method is introduced for estimating the lightning ground flash fraction in a set of N lightning flashes observed by a satellite lightning mapper. The value of N is large, typically in the thousands, and the observations consist of the maximum optical group area produced by each flash. The method is tested using simulated observations that are based on Optical Transient Detector (OTD) and Lightning Imaging Sensor (LIS) data. National Lightning Detection NetworkTM (NLDN) data is used to determine the flash-type (ground or cloud) of the satellite-observed flashes, and provides the ground flash fraction truth for the simulation runs. It is found that the mean ground flash fraction retrieval errors are below 0.04 across the full range 0-1 under certain simulation conditions. In general, it is demonstrated that the retrieval errors depend on many factors (i.e., the number, N, of satellite observations, the magnitude of random and systematic measurement errors, and the number of samples used to form certain climate distributions employed in the model).
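
    The abstract's estimator is an analytic perturbation method; purely as a point of reference, the same ground-flash fraction can be posed as a two-population mixture and maximized numerically. A sketch, assuming the climate densities pdf_ground and pdf_cloud are supplied as callables evaluated at the observed maximum group areas:

        import numpy as np
        from scipy.optimize import minimize_scalar

        def ground_fraction_mle(obs, pdf_ground, pdf_cloud):
            """ML estimate of the ground-flash fraction alpha in the mixture
            alpha * f_ground + (1 - alpha) * f_cloud."""
            g, c = pdf_ground(obs), pdf_cloud(obs)
            nll = lambda a: -np.sum(np.log(a * g + (1.0 - a) * c + 1e-300))
            res = minimize_scalar(nll, bounds=(0.0, 1.0), method='bounded')
            return res.x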

  8. Fast vision-based catheter 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D.

    2016-07-01

Continuum robots offer better maneuverability and inherent compliance and are well suited for surgical applications such as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed in the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots based on the views of two arbitrarily positioned cameras is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for the segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length and bending and orientation angles for known circular and elliptical catheter-shaped tubes. Sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute errors of 1.74 mm and 3.64 deg for the added noise) of the proposed high-speed algorithms.

  9. Maximum likelihood bolometric tomography for the determination of the uncertainties in the radiation emission on JET TOKAMAK

    NASA Astrophysics Data System (ADS)

    Craciunescu, Teddy; Peluso, Emmanuele; Murari, Andrea; Gelfusa, Michela; JET Contributors

    2018-05-01

    The total emission of radiation is a crucial quantity to calculate the power balances and to understand the physics of any Tokamak. Bolometric systems are the main tool to measure this important physical quantity through quite sophisticated tomographic inversion methods. On the Joint European Torus, the coverage of the bolometric diagnostic, due to the availability of basically only two projection angles, is quite limited, rendering the inversion a very ill-posed mathematical problem. A new approach, based on the maximum likelihood, has therefore been developed and implemented to alleviate one of the major weaknesses of traditional tomographic techniques: the difficulty to determine routinely the confidence intervals in the results. The method has been validated by numerical simulations with phantoms to assess the quality of the results and to optimise the configuration of the parameters for the main types of emissivity encountered experimentally. The typical levels of statistical errors, which may significantly influence the quality of the reconstructions, have been identified. The systematic tests with phantoms indicate that the errors in the reconstructions are quite limited and their effect on the total radiated power remains well below 10%. A comparison with other approaches to the inversion and to the regularization has also been performed.

  10. Solar Mesosphere Explorer near-infrared spectrometer Measurements of 1.27-micron radiances and the inference of mesospheric ozone

    NASA Astrophysics Data System (ADS)

    Thomas, R. J.; Barth, C. A.; Rusch, W.; Sanders, R. W.

    1984-10-01

Ozone in the mesosphere is determined from observations made by the near-infrared spectrometer experiment on the Solar Mesosphere Explorer satellite (SME) between 50 and 90 km over most latitudes at 3:00 p.m. local time. The spectrometer measures emission from O2(1Δg) at 1.27 microns that is primarily due to the photodissociation of ozone. The instrument consists of a parabolic telescope that limits the field of view to less than 0.1 degrees, an Ebert-Fastie spectrometer, and a passively cooled lead sulfide detector system. The limb radiances, measured as the spacecraft spins, are inverted, producing volume emission rate profiles from which ozone densities are inferred. The vertical resolution is better than 3.5 km. The calculation of ozone accounts for quenching and atmospheric transmission of both solar radiation and 1.27-micron radiation. The existence of a secondary maximum of ozone density near 80 km is established. An error analysis shows that the effects of random errors in the data and in the analysis on the final ozone profile are less than 10 percent between 50 and 82 km.

  11. On Inertial Body Tracking in the Presence of Model Calibration Errors

    PubMed Central

    Miezal, Markus; Taetz, Bertram; Bleser, Gabriele

    2016-01-01

    In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments—the IMU-to-segment calibrations, subsequently called I2S calibrations—to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and segment length errors in the tested ranges. Errors in the I2S orientations were, however, linearly propagated into the estimated segment orientations. In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU centered EKF-based method yielded comparable results with lower computational complexity. PMID:27455266

  12. Picosecond-precision multichannel autonomous time and frequency counter

    NASA Astrophysics Data System (ADS)

Szplet, R.; Kwiatkowski, P.; Różyc, K.; Jachna, Z.; Sondej, T.

    2017-12-01

This paper presents the design, implementation, and test results of a multichannel time interval and frequency counter developed as a desktop instrument. The counter contains four main functional modules for (1) performing precise measurements, (2) controlling and fast data processing, (3) low-noise power supplying, and (4) supplying a stable reference clock (optionally a rubidium standard). Fundamental to the counter, the time interval measurement is based on time stamping combined with period counting and in-period two-stage time interpolation, which allows a wide measurement range (above 1 h), high precision (even better than 4.5 ps), and high measurement speed (up to 91.2 × 10⁶ timestamps/s) to be achieved. The frequency is measured up to 3.0 GHz with the use of the reciprocal method. The wide functionality of the counter also includes the evaluation of the frequency stability of clocks and oscillators (Allan deviation) and of phase variation (time interval error, maximum time interval error, time deviation). The 8-channel measurement module is based on a field programmable gate array device, while the control unit involves a microcontroller with a high-performance ARM Cortex core. Efficient and user-friendly control of the counter is provided either locally, through the built-in keypad or/and color touch panel, or remotely, with the aid of USB, Ethernet, RS232C, or RS485 interfaces.

  13. Picosecond-precision multichannel autonomous time and frequency counter.

    PubMed

    Szplet, R; Kwiatkowski, P; Różyc, K; Jachna, Z; Sondej, T

    2017-12-01

This paper presents the design, implementation, and test results of a multichannel time interval and frequency counter developed as a desktop instrument. The counter contains four main functional modules for (1) performing precise measurements, (2) controlling and fast data processing, (3) low-noise power supplying, and (4) supplying a stable reference clock (optionally a rubidium standard). Fundamental to the counter, the time interval measurement is based on time stamping combined with period counting and in-period two-stage time interpolation, which allows a wide measurement range (above 1 h), high precision (even better than 4.5 ps), and high measurement speed (up to 91.2 × 10⁶ timestamps/s) to be achieved. The frequency is measured up to 3.0 GHz with the use of the reciprocal method. The wide functionality of the counter also includes the evaluation of the frequency stability of clocks and oscillators (Allan deviation) and of phase variation (time interval error, maximum time interval error, time deviation). The 8-channel measurement module is based on a field programmable gate array device, while the control unit involves a microcontroller with a high-performance ARM Cortex core. Efficient and user-friendly control of the counter is provided either locally, through the built-in keypad or/and color touch panel, or remotely, with the aid of USB, Ethernet, RS232C, or RS485 interfaces.
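
    Of the stability measures listed, the Allan deviation is the easiest to state precisely; a sketch of the overlapping estimator from phase (timestamp) data:

        import numpy as np

        def allan_deviation(x, tau0, m):
            """Overlapping Allan deviation from phase data x[k] (seconds)
            sampled every tau0 seconds, at averaging factor m (tau = m*tau0)."""
            x = np.asarray(x, float)
            d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]  # second differences
            avar = np.sum(d2 ** 2) / (2.0 * (m * tau0) ** 2 * d2.size)
            return np.sqrt(avar)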

  14. Poor interoperability of the Adams-Harbertson method for analysis of anthocyanins: comparison with AOAC pH differential method.

    PubMed

    Brooks, Larry M; Kuhlman, Benjamin J; McKesson, Doug W; McCloskey, Leo

    2013-01-01

    The poor interoperability of anthocyanin glycoside measurements by two pH differential methods is documented. The Adams-Harbertson method, which was proposed for commercial winemaking, was compared to AOAC Official Method 2005.02 for wine. California bottled wines (Pinot Noir, Merlot, and Cabernet Sauvignon) were assayed in a collaborative study (n=105), which found the mean precision of Adams-Harbertson winery versus reference measurements to be 77 ± 20%. From the reproducibility RSD, the maximum error is expected to be 48% for Pinot Noir, 42% for Merlot, and 34% for Cabernet Sauvignon; the actual range of measurements was 30 to 91% for Pinot Noir. An interoperability study (n=30) found that Adams-Harbertson produces measurements that are nominally 150% of the AOAC pH differential method. The main analytical differences are: the AOAC method uses the Beer-Lambert equation and measures absorbance at pH 1.0 and 4.5, as proposed a priori by Fuleki and Francis; whereas Adams-Harbertson uses a "universal" standard curve and measures absorbance ad hoc at pH 1.8 and 4.9 to reduce the effects of so-called co-pigmentation. Errors relative to AOAC are produced by the Adams-Harbertson standard curve (over Beer-Lambert) and by pH 1.8 (over pH 1.0). The study recommends using AOAC Official Method 2005.02 for the analysis of wine anthocyanin glycosides.
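    For reference, the AOAC pH differential calculation itself is a short Beer-Lambert computation; the sketch below uses the standard constants for cyanidin-3-glucoside equivalents (MW = 449.2 g/mol, molar absorptivity 26900 L/(mol·cm)), and the absorbance values in the example call are invented.

    ```python
    def anthocyanin_mg_per_l(a520_ph1, a700_ph1, a520_ph45, a700_ph45,
                             dilution_factor=1.0, path_cm=1.0,
                             mw=449.2, eps=26900.0):
        # Corrected absorbance difference between pH 1.0 and pH 4.5 readings
        a = (a520_ph1 - a700_ph1) - (a520_ph45 - a700_ph45)
        # Beer-Lambert, scaled to mg/L of cyanidin-3-glucoside equivalents
        return a * mw * dilution_factor * 1000.0 / (eps * path_cm)

    print(anthocyanin_mg_per_l(0.90, 0.02, 0.25, 0.02, dilution_factor=10))
    ```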

  15. Determining the effect of grain size and maximum induction upon coercive field of electrical steels

    NASA Astrophysics Data System (ADS)

    Landgraf, Fernando José Gomes; da Silveira, João Ricardo Filipini; Rodrigues-Jr., Daniel

    2011-10-01

    Although theoretical models have already been proposed, experimental data are still lacking to quantify the influence of grain size upon the coercivity of electrical steels. Some authors consider a linear inverse proportionality, while others suggest a square-root inverse proportionality. Results also differ with regard to the slope of the coercive field versus reciprocal grain size relation for a given material. This paper discusses two aspects of the problem: the maximum induction used for determining the coercive force, and the possible effect of lurking variables such as the breadth of the grain size distribution and the crystallographic texture. Electrical steel sheets containing 0.7% Si, 0.3% Al and 24 ppm C were cold-rolled and annealed in order to produce different grain sizes (ranging from 20 to 150 μm). The coercive field was measured along the rolling direction and found to depend linearly on the reciprocal of grain size, with a slope of approximately 0.9 (A/m)mm at 1.0 T induction. A general relation for the coercive field as a function of grain size and maximum induction was established, yielding an average absolute error below 4%. Through measurement of B50 and image analysis of micrographs, the effects of crystallographic texture and grain size distribution breadth were qualitatively discussed.
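    A minimal sketch of the linear fit described above (coercive field against the reciprocal of grain size at a fixed maximum induction); the data points are invented for illustration.

    ```python
    import numpy as np

    d_um = np.array([20.0, 40.0, 60.0, 100.0, 150.0])   # grain size, micrometers
    hc = np.array([55.0, 35.0, 28.0, 22.0, 18.0])       # coercive field, A/m

    # Fit H_c = H_0 + k / d with d in mm, so the slope k is in (A/m)mm
    k, h0 = np.polyfit(1.0 / (d_um * 1e-3), hc, 1)
    print(f"slope = {k:.2f} (A/m)mm, intercept = {h0:.2f} A/m")
    ```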

  16. A novel simultaneous streak and framing camera without principle errors

    NASA Astrophysics Data System (ADS)

    Jingzhen, L.; Fengshan, S.; Ningwen, L.; Xiangdong, G.; Bin, H.; Qingyang, W.; Hongyi, C.; Yi, C.; Xiaowei, L.

    2018-02-01

    A novel simultaneous streak and framing camera with continuous access has been developed; the complete information it provides is important for the exact interpretation and precise evaluation of many detonation events and shockwave phenomena. The camera, with a maximum imaging frequency of 2 × 10⁶ fps and a maximum scanning velocity of 16.3 mm/μs, has fine imaging properties: an eigen resolution of over 40 lp/mm in the temporal direction and over 60 lp/mm in the spatial direction with zero framing frequency principle error for framing records, and a maximum time resolving power of 8 ns with a scanning velocity nonuniformity of 0.136% to −0.277% for streak records. The test data have verified the performance of the camera quantitatively. This camera, which simultaneously acquires frames and streaks with a parallax-free and identical time base, is characterized by a plane optical system at oblique incidence (as distinct from a space system), an innovative camera obscura without principle errors, and a high-velocity motor-driven beryllium-like rotating mirror made of high-strength aluminum alloy with a cellular lateral structure. Experiments demonstrate that the camera is very useful and reliable for taking high-quality pictures of detonation events.

  17. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  18. The effect of sediment loading in Fennoscandia and the Barents Sea during the last glacial cycle on glacial isostatic adjustment observations

    NASA Astrophysics Data System (ADS)

    van der Wal, Wouter; IJpelaar, Thijs

    2017-09-01

    Models for glacial isostatic adjustment (GIA) routinely include the effects of meltwater redistribution and changes in topography and coastlines. Since the sediment transport related to the dynamics of ice sheets may be comparable to that of sea level rise in terms of surface pressure, the loading effect of sediment deposition could cause measurable ongoing viscous readjustment. Here, we study the loading effect of glacially induced sediment redistribution (GISR) related to the Weichselian ice sheet in Fennoscandia and the Barents Sea. The surface loading effect and its effect on the gravitational potential is modeled by including changes in sediment thickness in the sea level equation following the method of Dalca et al. (2013). Sediment displacement is estimated in two different ways: (i) from a compilation of studies on local features (trough mouth fans, large-scale failures, and basin flux) and (ii) from output of a coupled ice-sediment model. To account for uncertainty in Earth's rheology, three viscosity profiles are used. It is found that sediment transport can lead to changes in relative sea level of up to 2 m in the last 6000 years, with larger effects occurring earlier in the deglaciation. This magnitude is below the error level of most of the relative sea level data because those data are sparse and errors increase with length of time before present. The effect on present-day uplift rates reaches a few tenths of millimeters per year in large parts of Norway and Sweden, which is around the measurement error of long-term GNSS (global navigation satellite system) monitoring networks. The maximum effect on present-day gravity rates as measured by the GRACE (Gravity Recovery and Climate Experiment) satellite mission is up to tenths of microgal per year, which is larger than the measurement error but below other error sources. Since GISR causes systematic uplift in most of mainland Scandinavia, including GISR in GIA models would improve the interpretation of GNSS and GRACE observations there.

  19. Results of x-ray mirror round-robin metrology measurements at the APS, ESRF, and SPring-8 optical metrology laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Assoufid, L.; Rommeveaux, A.; Ohashi, H.

    2005-01-01

    This paper presents the first series of round-robin metrology measurements of x-ray mirrors organized at the Advanced Photon Source (APS) in the USA, the European Synchrotron Radiation Facility (ESRF) in France, and the Super Photon Ring (SPring-8, in collaboration with Osaka University) in Japan. This work is part of the three institutions' three-way agreement to promote a direct exchange of research information and experience amongst their specialists. The purpose of the metrology round robin is to compare the performance and limitations of the instrumentation used at the optical metrology laboratories of these facilities and to set the basis for establishing guidelines and procedures to accurately perform the measurements. The optics used in the measurements were selected to reflect typical, as well as state-of-the-art, mirror fabrication. The first series of the round-robin measurements focuses on flat and cylindrical mirrors of varying sizes and quality. Three mirrors (two flats and one cylinder) were successively measured using long trace profilers (LTPs). Although the three facilities' LTPs are of different design, the measurements were found to be in excellent agreement. The maximum discrepancy of the rms slope error values is 0.1 µrad and that of the rms shape error is 3 nm, and both relate to the measurement of the cylindrical mirror. The next round-robin measurements will deal with elliptical and spherical optics.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chengqiang, L; Yin, Y; Chen, L

    Purpose: To investigate the impact of MLC position errors on simultaneous integrated boost intensity-modulated radiotherapy (SIB-IMRT) for patients with nasopharyngeal carcinoma. Methods: To compare the dosimetric differences between the simulated plans and the clinical plans, ten patients with locally advanced NPC treated with SIB-IMRT were enrolled in this study. All plans were calculated with an inverse planning system (Pinnacle3, Philips Medical Systems). Random errors (−2 mm to 2 mm), shift errors (2 mm, 1 mm and 0.5 mm) and systematic extension/contraction errors (±2 mm, ±1 mm and ±0.5 mm) of the MLC leaf position were introduced respectively into the original plans to create the simulated plans. Dosimetry factors were compared between the original and the simulated plans. Results: The dosimetric impact of the random and systematic shift errors of MLC position was insignificant within 2 mm; the maximum changes in D95% of PGTV, PTV1 and PTV2 were −0.92 ± 0.51%, 1.00 ± 0.24% and 0.62 ± 0.17%, the maximum changes in the D0.1cc of the spinal cord and brainstem were 1.90 ± 2.80% and −1.78 ± 1.42%, and the maximum changes in the Dmean of the parotids were 1.36 ± 1.23% and −2.25 ± 2.04%. However, the impact of MLC extension or contraction errors was found to be significant. For 2 mm leaf extension errors, the average changes in D95% of PGTV, PTV1 and PTV2 were 4.31 ± 0.67%, 4.29 ± 0.65% and 4.79 ± 0.82%; the average D0.1cc of the spinal cord and brainstem increased by 7.39 ± 5.25% and 6.32 ± 2.28%; and the mean doses to the left and right parotids increased by 12.75 ± 2.02% and 13.39 ± 2.17%, respectively. Conclusion: The dosimetric effect was insignificant for random MLC leaf position errors up to 2 mm, but dose distributions were highly sensitive to MLC extension or contraction errors. Attention should be paid to anatomic changes in target volumes and normal structures during the course of treatment, and individualized radiotherapy is recommended to ensure adaptive doses.

  1. PRECISE TULLY-FISHER RELATIONS WITHOUT GALAXY INCLINATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Obreschkow, D.; Meyer, M.

    2013-11-10

    Power-law relations between tracers of baryonic mass and rotational velocities of disk galaxies, so-called Tully-Fisher relations (TFRs), offer a wealth of applications in galaxy evolution and cosmology. However, measurements of rotational velocities require galaxy inclinations, which are difficult to measure, thus limiting the range of TFR studies. This work introduces a maximum likelihood estimation (MLE) method for recovering the TFR in galaxy samples with limited or no information on inclinations. The robustness and accuracy of this method is demonstrated using virtual and real galaxy samples. Intriguingly, the MLE reliably recovers the TFR of all test samples, even without using any inclination measurements—that is, assuming a random sin i-distribution for galaxy inclinations. Explicitly, this 'inclination-free MLE' recovers the three TFR parameters (zero-point, slope, scatter) with statistical errors only about 1.5 times larger than the best estimates based on perfectly known galaxy inclinations with zero uncertainty. Thus, given realistic uncertainties, the inclination-free MLE is highly competitive. If inclination measurements have mean errors larger than 10°, it is better not to use any inclinations than to consider the inclination measurements to be exact. The inclination-free MLE opens interesting perspectives for future H I surveys by the Square Kilometer Array and its pathfinders.

  2. Statistical inference with quantum measurements: methodologies for nitrogen vacancy centers in diamond

    NASA Astrophysics Data System (ADS)

    Hincks, Ian; Granade, Christopher; Cory, David G.

    2018-01-01

    The analysis of photon count data from the standard nitrogen vacancy (NV) measurement process is treated as a statistical inference problem. This has applications toward gaining better and more rigorous error bars for tasks such as parameter estimation (e.g. magnetometry), tomography, and randomized benchmarking. We start by providing a summary of the standard phenomenological model of the NV optical process in terms of Lindblad jump operators. This model is used to derive random variables describing emitted photons during measurement, to which finite visibility, dark counts, and imperfect state preparation are added. NV spin-state measurement is then stated as an abstract statistical inference problem consisting of an underlying biased coin obstructed by three Poisson rates. Relevant frequentist and Bayesian estimators are provided, discussed, and quantitatively compared. We show numerically that the risk of the maximum likelihood estimator is well approximated by the Cramér-Rao bound, for which we provide a simple formula. Of the estimators, we in particular promote the Bayes estimator, owing to its slightly better risk performance, and straightforward error propagation into more complex experiments. This is illustrated on experimental data, where quantum Hamiltonian learning is performed and cross-validated in a fully Bayesian setting, and compared to a more traditional weighted least squares fit.
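    As a toy version of the stated inference problem, the sketch below grid-searches the maximum likelihood estimate of the underlying coin bias from Poisson-distributed photon counts; the bright, dark and background rates are assumptions for illustration, not values from the paper.

    ```python
    import numpy as np
    from scipy.stats import poisson

    # Over n_rep repetitions the spin is in |0> with probability p; bright and
    # dark states emit photons at rates lam0 and lam1 per shot, plus background.
    lam0, lam1, lam_bg, n_rep = 0.04, 0.025, 0.002, 100_000

    def loglike(p, total_counts):
        mean = n_rep * (p * lam0 + (1 - p) * lam1 + lam_bg)
        return poisson.logpmf(total_counts, mean)

    rng = np.random.default_rng(1)
    true_p = 0.7
    data = rng.poisson(n_rep * (true_p * lam0 + (1 - true_p) * lam1 + lam_bg))

    grid = np.linspace(0.0, 1.0, 1001)
    p_hat = grid[np.argmax([loglike(p, data) for p in grid])]
    print(f"MLE of p: {p_hat:.3f} (true value {true_p})")
    ```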

  3. A validation study of the psychometric properties of the Groningen Reflection Ability Scale.

    PubMed

    Andersen, Nina Bjerre; O'Neill, Lotte; Gormsen, Lise Kirstine; Hvidberg, Line; Morcke, Anne Mette

    2014-10-10

    Reflection, the ability to critically examine one's own learning and functioning, is considered important for 'the good doctor'. The Groningen Reflection Ability Scale (GRAS) is an instrument for measuring student reflection that has not yet been validated beyond the original Dutch study. The aim of this study was to adapt GRAS for use in a Danish setting and to investigate the psychometric properties of GRAS-DK. We performed a cross-cultural adaptation of GRAS from Dutch to Danish. Next, we collected primary data online, performed a retest, analysed the data descriptively, estimated measurement error, and performed an exploratory and a confirmatory factor analysis to test the proposed three-factor structure. 361 (69%) of 523 invited students completed GRAS-DK. Their mean score was 88 (SD = 11.42; scale maximum 115). Scores were approximately normally distributed. Measurement error and test-retest score differences were acceptable, apart from a few extreme outliers. However, the confirmatory factor analysis did not replicate the original three-factor model, and neither could a one-dimensional structure be confirmed. GRAS is already in use; however, we advise that the use of GRAS-DK for effect measurement and group comparison await further review and validation studies. Our negative finding might be explained by a weak conceptualisation of personal reflection.

  4. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.

    PubMed

    Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-09-13

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.

  5. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter

    PubMed Central

    Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-01-01

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected. PMID:28902154
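    A hedged sketch of the general approach follows: generalised cross-correlation with a regularised, ML-style frequency weighting. The weighting shown is a simplification of the kind of prefilter described above, not the paper's exact modified prefilter, and in practice the spectra would be averaged over many segments rather than taken from a single snapshot.

    ```python
    import numpy as np

    def gcc_delay(x1, x2, fs, reg=1e-3):
        """Estimate the time delay between two sensor signals, in seconds."""
        n = len(x1)
        X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
        g12 = X1 * np.conj(X2)                            # cross-spectrum
        coh2 = np.abs(g12) ** 2 / (np.abs(X1) ** 2 * np.abs(X2) ** 2 + reg)
        w = coh2 / (np.abs(g12) * (1.0 - coh2) + reg)     # regularised ML-like weight
        cc = np.fft.irfft(w * g12, n=n)                   # weighted cross-correlation
        lag = np.argmax(np.fft.fftshift(cc)) - n // 2
        return lag / fs

    fs = 8000
    rng = np.random.default_rng(2)
    s = rng.standard_normal(fs)                           # broadband leak-like noise
    x1 = s + 0.1 * rng.standard_normal(fs)
    x2 = np.roll(s, 40) + 0.1 * rng.standard_normal(fs)   # copy delayed by 5 ms
    print(gcc_delay(x1, x2, fs))                          # about -0.005 (x2 lags x1)
    ```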

  6. Fault-tolerant clock synchronization validation methodology. [in computer systems

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.

    1987-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.

  7. Beam localization in HIFU temperature measurements using thermocouples, with application to cooling by large blood vessels.

    PubMed

    Dasgupta, Subhashish; Banerjee, Rupak K; Hariharan, Prasanna; Myers, Matthew R

    2011-02-01

    Experimental studies of thermal effects in high-intensity focused ultrasound (HIFU) procedures are often performed with the aid of fine-wire thermocouples positioned within tissue phantoms. Thermocouple measurements are subject to several types of error which must be accounted for before reliable inferences can be made on the basis of the measurements. Thermocouple artifact due to viscous heating is one source of error. A second is the uncertainty regarding the position of the beam relative to the target location or the thermocouple junction, due to the error in positioning the beam at the junction. This paper presents a method for determining the location of the beam relative to a fixed pair of thermocouples. The localization technique reduces the uncertainty introduced by positioning errors associated with very narrow HIFU beams. The technique is presented in the context of an investigation into the effect of blood flow through large vessels on the efficacy of HIFU procedures targeted near the vessel. Application of the beam localization method allowed conclusions regarding the effects of blood flow to be drawn from previously inconclusive (because of localization uncertainties) data. Comparison of the position-adjusted transient temperature profiles for flow rates of 0 and 400 ml/min showed that blood flow can reduce temperature elevations by more than 10% when the HIFU focus is within 2 mm of the vessel wall. At acoustic power levels of 17.3 and 24.8 W there is a 20- to 70-fold decrease in thermal dose due to the convective cooling effect of blood flow, implying a shrinkage in lesion size. The beam-localization technique also revealed the level of thermocouple artifact as a function of sonication time, providing investigators with an indication of the quality of thermocouple data for a given exposure time. The maximum artifact was found to be double the measured temperature rise during the initial few seconds of sonication.

  8. Harvesting tree biomass at the stand level to assess the accuracy of field and airborne biomass estimation in savannas.

    PubMed

    Colgan, Matthew S; Asner, Gregory P; Swemmer, Tony

    2013-07-01

    Tree biomass is an integrated measure of net growth and is critical for understanding, monitoring, and modeling ecosystem functions. Despite the importance of accurately measuring tree biomass, several fundamental barriers preclude direct measurement at large spatial scales, including the facts that trees must be felled to be weighed and that even modestly sized trees are challenging to maneuver once felled. Allometric methods allow for estimation of tree mass using structural characteristics, such as trunk diameter. Savanna trees present additional challenges, including limited available allometry and a prevalence of multiple stems per individual. Here we collected airborne lidar data over a semiarid savanna adjacent to the Kruger National Park, South Africa, and then harvested and weighed woody plant biomass at the plot scale to provide a standard against which field and airborne estimation methods could be compared. For an existing airborne lidar method, we found that half of the total error was due to averaging canopy height at the plot scale. This error was eliminated by instead measuring maximum height and crown area of individual trees from lidar data using an object-based method to identify individual tree crowns and estimate their biomass. The best object-based model approached the accuracy of field allometry at both the tree and plot levels, and it more than doubled the accuracy compared to existing airborne methods (17% vs. 44% deviation from harvested biomass). Allometric error accounted for less than one-third of the total residual error in airborne biomass estimates at the plot scale when using allometry with low bias. Airborne methods also gave more accurate predictions at the plot level than did field methods based on diameter-only allometry. These results provide a novel comparison of field and airborne biomass estimates using harvested plots and advance the role of lidar remote sensing in savanna ecosystems.

  9. Quality assurance of dynamic parameters in volumetric modulated arc therapy.

    PubMed

    Manikandan, A; Sarkar, B; Holla, R; Vivek, T R; Sujatha, N

    2012-07-01

    The purpose of this study was to demonstrate quality assurance checks for the accuracy of gantry speed and position, dose rate, and multileaf collimator (MLC) speed and position for a volumetric modulated arc therapy (VMAT) modality (Synergy S; Elekta, Stockholm, Sweden), and to check that all the necessary variables and parameters were synchronous. Three tests (for gantry position-dose delivery synchronisation, gantry speed-dose delivery synchronisation, and MLC leaf speed and positions) were performed. The average error in gantry position was 0.5° and the average difference was 3 MU for a linear and a parabolic relationship between gantry position and delivered dose. In the third part of this test (sawtooth variation), the maximum difference was 9.3 MU, with a gantry position difference of 1.2°. In the sweeping field method test, a linear relationship was observed between recorded doses and distance from the central axis, as expected. In the open field method, errors were encountered at the beginning and at the end of the delivery arc, termed the "beginning" and "end" errors. For MLC position verification, the maximum error was −2.46 mm and the mean error was 0.0153 ± 0.4668 mm; 3.4% of the leaves analysed showed errors of more than ±1 mm. This experiment demonstrates that the variables and parameters of the Synergy S are synchronous and that the system is suitable for delivering VMAT using a dynamic MLC.

  10. Accelerated Compressed Sensing Based CT Image Reconstruction.

    PubMed

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.

  11. Accelerated Compressed Sensing Based CT Image Reconstruction

    PubMed Central

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R.; Paul, Narinder S.; Cobbold, Richard S. C.

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization. PMID:26167200
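    At the core of such a reconstruction is a weighted l1-regularised least-squares problem. The sketch below solves it with plain ISTA; a random Gaussian matrix stands in for the rebinned pseudopolar Radon operator, and the scalar `lam` stands in for the statistically derived weights.

    ```python
    import numpy as np

    def ista(A, b, lam, n_iter=200):
        """Solve min_x ||A x - b||^2 + lam * ||x||_1 by iterative soft thresholding."""
        L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = A.T @ (A @ x - b)                     # gradient of the data term
            z = x - g / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(3)
    n, m, k = 200, 80, 8
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)      # stand-in sensing operator
    b = A @ x_true + 0.01 * rng.standard_normal(m)
    x_hat = ista(A, b, lam=0.02)
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```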

  12. SAR System for UAV Operation with Motion Error Compensation beyond the Resolution Cell

    PubMed Central

    González-Partida, José-Tomás; Almorox-González, Pablo; Burgos-García, Mateo; Dorta-Naranjo, Blas-Pablo

    2008-01-01

    This paper presents an experimental Synthetic Aperture Radar (SAR) system that is under development in the Universidad Politécnica de Madrid. The system uses Linear Frequency Modulated Continuous Wave (LFM-CW) radar with a two antenna configuration for transmission and reception. The radar operates in the millimeter-wave band with a maximum transmitted bandwidth of 2 GHz. The proposed system is being developed for Unmanned Aerial Vehicle (UAV) operation. Motion errors in UAV operation can be critical. Therefore, this paper proposes a method for focusing SAR images with movement errors larger than the resolution cell. Typically, this problem is solved using two processing steps: first, coarse motion compensation based on the information provided by an Inertial Measuring Unit (IMU); and second, fine motion compensation for the residual errors within the resolution cell based on the received raw data. The proposed technique tries to focus the image without using data of an IMU. The method is based on a combination of the well known Phase Gradient Autofocus (PGA) for SAR imagery and typical algorithms for translational motion compensation on Inverse SAR (ISAR). This paper shows the first real experiments for obtaining high resolution SAR images using a car as a mobile platform for our radar. PMID:27879884

  13. SAR System for UAV Operation with Motion Error Compensation beyond the Resolution Cell.

    PubMed

    González-Partida, José-Tomás; Almorox-González, Pablo; Burgos-Garcia, Mateo; Dorta-Naranjo, Blas-Pablo

    2008-05-23

    This paper presents an experimental Synthetic Aperture Radar (SAR) system that is under development in the Universidad Politécnica de Madrid. The system uses Linear Frequency Modulated Continuous Wave (LFM-CW) radar with a two antenna configuration for transmission and reception. The radar operates in the millimeter-wave band with a maximum transmitted bandwidth of 2 GHz. The proposed system is being developed for Unmanned Aerial Vehicle (UAV) operation. Motion errors in UAV operation can be critical. Therefore, this paper proposes a method for focusing SAR images with movement errors larger than the resolution cell. Typically, this problem is solved using two processing steps: first, coarse motion compensation based on the information provided by an Inertial Measuring Unit (IMU); and second, fine motion compensation for the residual errors within the resolution cell based on the received raw data. The proposed technique tries to focus the image without using data of an IMU. The method is based on a combination of the well known Phase Gradient Autofocus (PGA) for SAR imagery and typical algorithms for translational motion compensation on Inverse SAR (ISAR). This paper shows the first real experiments for obtaining high resolution SAR images using a car as a mobile platform for our radar.
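    For orientation, here is a compact sketch of one Phase Gradient Autofocus iteration, the azimuth autofocus technique named above; the windowing, iteration control and phase-gradient estimator are simplified relative to a production implementation.

    ```python
    import numpy as np

    def pga_iteration(img, win=32):
        """One PGA pass over a complex image; azimuth along axis 0."""
        n_az, n_rg = img.shape
        g = np.empty_like(img)
        for r in range(n_rg):                       # center the brightest scatterers
            k = np.argmax(np.abs(img[:, r]))
            g[:, r] = np.roll(img[:, r], n_az // 2 - k)
        w = np.zeros(n_az)
        w[n_az // 2 - win // 2: n_az // 2 + win // 2] = 1.0
        g *= w[:, None]                             # window around the centers
        G = np.fft.fft(g, axis=0)                   # back to the phase-history domain
        # phase-gradient estimate, summed coherently over range bins
        dphi = np.angle(np.sum(np.conj(G[:-1]) * G[1:], axis=1))
        phi = np.concatenate(([0.0], np.cumsum(dphi)))
        phi -= np.linspace(phi[0], phi[-1], n_az)   # remove the linear trend
        correction = np.exp(-1j * phi)[:, None]
        return np.fft.ifft(np.fft.fft(img, axis=0) * correction, axis=0)
    ```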

  14. Influence of erroneous patient records on population pharmacokinetic modeling and individual bayesian estimation.

    PubMed

    van der Meer, Aize Franciscus; Touw, Daniël J; Marcus, Marco A E; Neef, Cornelis; Proost, Johannes H

    2012-10-01

    Observational data sets can be used for population pharmacokinetic (PK) modeling. However, these data sets are generally less precisely recorded than experimental data sets. This article aims to investigate the influence of erroneous records on population PK modeling and individual maximum a posteriori Bayesian (MAPB) estimation. A total of 1123 patient records of neonates who were administered vancomycin were used for population PK modeling by iterative 2-stage Bayesian (ITSB) analysis. Cut-off values for weighted residuals were tested for exclusion of records from the analysis. A simulation study was performed to assess the influence of erroneous records on population modeling and individual MAPB estimation. The cut-off values for weighted residuals were also tested in the simulation study. Registration errors had limited influence on the outcomes of population PK modeling but can have detrimental effects on individual MAPB estimation. A population PK model created from a data set with many registration errors has little influence on subsequent MAPB estimates for precisely recorded data. A weighted residual value of 2 for concentration measurements has good discriminative power for the identification of erroneous records. ITSB analysis and its individual estimates are hardly affected by most registration errors, and large registration errors can be detected by the weighted residuals of concentration measurements.
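    The weighted-residual screening described above reduces to a short computation. The sketch below assumes individual model predictions and a combined proportional-plus-additive residual error model from the fitted population PK model; all numbers are invented.

    ```python
    import numpy as np

    def flag_records(observed, predicted, sd_prop=0.15, sd_add=0.5, cutoff=2.0):
        """Flag records whose weighted residual exceeds the cutoff."""
        sd = np.sqrt((sd_prop * predicted) ** 2 + sd_add ** 2)  # error model
        wres = (observed - predicted) / sd
        return np.abs(wres) > cutoff, wres

    obs = np.array([12.1, 8.4, 30.2, 6.9])    # measured concentrations, mg/L
    pred = np.array([11.0, 9.0, 10.5, 7.2])   # individual model predictions
    flags, wres = flag_records(obs, pred)
    print(flags)                               # only the third record is flagged
    ```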

  15. Development and application of the maximum entropy method and other spectral estimation techniques

    NASA Astrophysics Data System (ADS)

    King, W. R.

    1980-09-01

    This summary report is a collection of four separate progress reports prepared under three contracts, all sponsored by the Office of Naval Research in Arlington, Virginia. The report contains the results of investigations into the application of the maximum entropy method (MEM), a high-resolution frequency and wavenumber estimation technique, together with a description of two new, stable, high-resolution spectral estimation techniques provided in the final section. Many examples of wavenumber spectral patterns for all investigated techniques are included throughout the report. The maximum entropy method is also known as the maximum entropy spectral analysis (MESA) technique, and both names are used in the report. Many MEM wavenumber spectral patterns are demonstrated using both simulated and measured radar signal and noise data. Methods for obtaining stable MEM wavenumber spectra are discussed, broadband signal detection using the MEM prediction error transform (PET) is discussed, and Doppler radar narrowband signal detection is demonstrated using the MEM technique. It is also shown that MEM cannot be applied to randomly sampled data. The two new, stable, high-resolution spectral estimation techniques discussed in the final section, named the Wiener-King and the Fourier spectral estimation techniques, share a similar derivation based upon the Wiener prediction filter but are otherwise quite different. Further development of the techniques and measurement of their spectral characteristics are recommended for subsequent investigation.

  16. Continuous quantum measurements and the action uncertainty principle

    NASA Astrophysics Data System (ADS)

    Mensky, Michael B.

    1992-09-01

    The path-integral approach to the quantum theory of continuous measurements has been developed in preceding works of the author. According to this approach, the measurement amplitude determining the probabilities of different outputs of the measurement can be evaluated in the form of a restricted path integral (a path integral "in finite limits"). With the help of the measurement amplitude, the maximum deviation of measurement outputs from the classical one can be easily determined. The aim of the present paper is to express this variance in the simpler and more transparent form of a specific uncertainty principle, called the action uncertainty principle (AUP). The simplest (but weak) form of the AUP is δS ≳ ℏ, where S is the action functional. It can be applied for a simple derivation of the Bohr-Rosenfeld inequality for the measurability of the gravitational field. A stronger form of the AUP (with wider application, for ideal measurements performed in the quantum regime) is |∫_{t′}^{t″} (δS[q]/δq(t)) Δq(t) dt| ≃ ℏ, where the paths [q] and [Δq] stand, respectively, for the measurement output and the measurement error. It can also be presented in the symbolic form Δ(Equation) Δ(Path) ≃ ℏ. This means that the deviation of the observed (measured) motion from that obeying the classical equation of motion is reciprocally proportional to the uncertainty in the path (the latter uncertainty resulting from the measurement error). The consequence of the AUP is that improving the measurement precision beyond the threshold of the quantum regime leads to decreasing information resulting from the measurement.

  17. A Novel Approach of Understanding and Incorporating Error of Chemical Transport Models into a Geostatistical Framework

    NASA Astrophysics Data System (ADS)

    Reyes, J.; Vizuete, W.; Serre, M. L.; Xu, Y.

    2015-12-01

    The EPA employs a vast monitoring network to measure ambient PM2.5 concentrations across the United States, with one of its goals being to quantify exposure within the population. However, there are several areas of the country with spatially and temporally sparse monitoring. One means to fill in these monitoring gaps is to use PM2.5 estimates from chemical transport models (CTMs), specifically the Community Multi-scale Air Quality (CMAQ) model. CMAQ is able to provide complete spatial coverage but is subject to systematic and random error due to model uncertainty. Due to the deterministic nature of CMAQ, these uncertainties are often not quantified. Much effort is employed to quantify the efficacy of these models through different metrics of model performance, but current evaluation is specific to locations with observed data. Multiyear studies across the United States are challenging because the error and model performance of CMAQ are not uniform over such large space/time domains: error changes regionally and temporally, and, because of the complex mix of species that constitute PM2.5, CMAQ error is also a function of increasing PM2.5 concentration. To address this issue we introduce a model performance evaluation for PM2.5 CMAQ that is regionalized and non-linear, which leads to error quantification for each CMAQ grid cell, so that areas and time periods of error are better characterized. The regionalized error correction approach is non-linear and is therefore more flexible at characterizing model performance than approaches that rely on linearity assumptions and assume homoscedasticity of CMAQ prediction errors. Corrected CMAQ data are then incorporated into the modern geostatistical framework of Bayesian Maximum Entropy (BME). Through cross-validation it is shown that incorporating error-corrected CMAQ data leads to more accurate estimates than using observed data alone.

  18. The design and analysis of channel transmission communication system of XCTD profiler

    NASA Astrophysics Data System (ADS)

    Zheng, Yu; Wang, Xiao-Rui; Jin, Xiang-Yu; Song, Guo-Min; Shang, Ying-Sheng; Li, Hong-Zhi

    2016-10-01

    In this paper, a channel transmission communication system for an expendable conductivity-temperature-depth (XCTD) profiler is established in accordance with the operating characteristics of the transmission line, in order to more accurately assess the characteristics of the deep-sea expendable profiler channel. The wrapping inductance is eliminated to the maximum extent through the wrapping pattern of the underwater and overwater spools and the calculation of the wrapping diameter. The feasibility of the proposed channel transmission communication system is verified through theoretical analysis and practical measurement of the transmitted signal error rate under amplitude shift keying (ASK) modulation. The proposed design provides a new research method for the channel assessment of complex expendable measuring instruments and important experimental evidence for the rapid development of deep-sea expendable measuring instruments.

  19. The design and analysis of channel transmission communication system of XCTD profiler.

    PubMed

    Zheng, Yu; Wang, Xiao-Rui; Jin, Xiang-Yu; Song, Guo-Min; Shang, Ying-Sheng; Li, Hong-Zhi

    2016-10-01

    In this paper, a channel transmission communication system for an expendable conductivity-temperature-depth (XCTD) profiler is established in accordance with the operating characteristics of the transmission line, in order to more accurately assess the characteristics of the deep-sea expendable profiler channel. The wrapping inductance is eliminated to the maximum extent through the wrapping pattern of the underwater and overwater spools and the calculation of the wrapping diameter. The feasibility of the proposed channel transmission communication system is verified through theoretical analysis and practical measurement of the transmitted signal error rate under amplitude shift keying (ASK) modulation. The proposed design provides a new research method for the channel assessment of complex expendable measuring instruments and important experimental evidence for the rapid development of deep-sea expendable measuring instruments.

  20. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.

  1. Reducing Errors in Satellite Simulated Views of Clouds with an Improved Parameterization of Unresolved Scales

    NASA Astrophysics Data System (ADS)

    Hillman, B. R.; Marchand, R.; Ackerman, T. P.

    2016-12-01

    Satellite instrument simulators have emerged as a means to reduce errors in model evaluation by producing simulated or pseudo-retrievals from model fields, which account for limitations in the satellite retrieval process. Because of the mismatch in resolved scales between satellite retrievals and large-scale models, model cloud fields must first be downscaled to scales consistent with satellite retrievals. This downscaling is analogous to that required for model radiative transfer calculations. The assumption is often made in both model radiative transfer codes and satellite simulators that the unresolved clouds follow maximum-random overlap with horizontally homogeneous cloud condensate amounts. We examine errors in simulated MISR and CloudSat retrievals that arise due to these assumptions by applying the MISR and CloudSat simulators to cloud resolving model (CRM) output generated by the Super-parameterized Community Atmosphere Model (SP-CAM). Errors are quantified by comparing simulated retrievals performed directly on the CRM fields with those simulated by first averaging the CRM fields to approximately 2-degree resolution, applying a "subcolumn generator" to regenerate pseudo-resolved cloud and precipitation condensate fields, and then applying the MISR and CloudSat simulators on the regenerated condensate fields. We show that errors due to both assumptions of maximum-random overlap and homogeneous condensate are significant (relative to uncertainties in the observations and other simulator limitations). The treatment of precipitation is particularly problematic for CloudSat-simulated radar reflectivity. We introduce an improved subcolumn generator for use with the simulators, and show that these errors can be greatly reduced by replacing the maximum-random overlap assumption with the more realistic generalized overlap and incorporating a simple parameterization of subgrid-scale cloud and precipitation condensate heterogeneity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND NO. SAND2016-7485 A
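    For concreteness, here is a sketch of the baseline maximum-random overlap subcolumn generator that the improved generator replaces: vertically contiguous cloud is maximally overlapped, while cloud layers separated by clear air are randomly overlapped. The generalized-overlap and condensate-heterogeneity extensions are not shown.

    ```python
    import numpy as np

    def subcolumns_max_random(cf, n_sub, rng):
        """Generate boolean cloud masks (n_sub, nlev) from a cloud fraction
        profile cf ordered top to bottom."""
        nlev = len(cf)
        x = np.empty((n_sub, nlev))
        x[:, 0] = rng.random(n_sub)
        for k in range(1, nlev):
            fresh = cf[k - 1] + (1.0 - cf[k - 1]) * rng.random(n_sub)
            # keep the rank where the level above was cloudy (maximum overlap);
            # draw a new rank where it was clear (random overlap)
            x[:, k] = np.where(x[:, k - 1] <= cf[k - 1], x[:, k - 1], fresh)
        return x <= cf[None, :]

    rng = np.random.default_rng(4)
    cloudy = subcolumns_max_random(np.array([0.3, 0.3, 0.0, 0.5]), 100_000, rng)
    print(cloudy.mean(axis=0))    # recovers ~ [0.3, 0.3, 0.0, 0.5]
    ```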

  2. Magnetometer-enhanced personal locator for tunnels and GPS-denied outdoor environments

    NASA Astrophysics Data System (ADS)

    Kwanmuang, Surat; Ojeda, Lauro; Borenstein, Johann

    2011-06-01

    This paper describes recent advances with our earlier developed Personal Dead-reckoning (PDR) system for GPS-denied environments. The PDR system uses a foot-mounted Inertial Measurement Unit (IMU) that also houses a three-axis magnetometer. In earlier work we developed methods for correcting the drift errors in the accelerometers, thereby allowing very accurate measurements of distance traveled. In addition, we developed a powerful heuristic method for correcting heading errors caused by gyro drift. The heuristics exploit the rectilinear features found in almost all manmade structures and therefore limit this technology to indoor use only. Most recently we integrated a three-axis magnetometer with the IMU, using a Kalman filter. While it is well known that the ubiquitous magnetic disturbances found in most modern buildings render magnetometers almost completely useless indoors, these sensors are nonetheless very effective in pristine outdoor environments as well as in some tunnels and caves. The present paper describes the integrated magnetometer/IMU system and presents detailed experimental results. Specifically, the paper reports results of an objective test conducted by firefighters of California's CAL-FIRE. In this test, two firefighters in full operational gear and one civilian hiked up a two-mile long mountain trail over rocky, sometimes steeply inclined terrain, each wearing one of our magnetometer-enhanced PDR systems but not using any GPS. During the hour-long hike the average position error was about 20 meters and the maximum error was less than 45 meters, which is about 1.4% of distance traveled, for all three PDR systems.

  3. Systematic errors in temperature estimates from MODIS data covering the western Palearctic and their impact on a parasite development model.

    PubMed

    Alonso-Carné, Jorge; García-Martín, Alberto; Estrada-Peña, Agustin

    2013-11-01

    The modelling of habitat suitability for parasites is a growing area of research due to its association with climate change and the ensuing shifts in the distribution of infectious diseases. Such models depend on remote sensing data and require accurate, high-resolution temperature measurements. Temperature is critical for accurate estimation of development rates and potential habitat ranges for a given parasite. The MODIS sensors aboard the Aqua and Terra satellites provide high-resolution temperature data for remote sensing applications. This paper describes a comparative analysis of MODIS-derived temperatures relative to ground records of surface temperature in the western Palaearctic. The results show that MODIS overestimated maximum temperature values and underestimated minimum temperatures by up to 5-6 °C. The combined use of both Aqua and Terra datasets provided the most accurate temperature estimates around latitude 35-44° N, with an overestimation during spring-summer months and an underestimation in autumn-winter. Errors in temperature estimation were associated with specific ecological regions within the target area as well as technical limitations in the temporal and orbital coverage of the satellites (e.g. sensor limitations and satellite transit times). We estimated the propagation of temperature uncertainties into parasite habitat suitability models by comparing the outcomes of published models. Depending on the model used, error estimates reached 36% of the respective annual measurements. Our analysis demonstrates the importance of adequate image processing and points out the limitations of MODIS temperature data as inputs into predictive models concerning parasite lifecycles.

  4. Temporal Variability of Daily Personal Magnetic Field Exposure Metrics in Pregnant Women

    PubMed Central

    Lewis, Ryan C.; Evenson, Kelly R.; Savitz, David A.; Meeker, John D.

    2015-01-01

    Recent epidemiology studies of power-frequency magnetic fields and reproductive health have characterized exposures using data collected from personal exposure monitors over a single day, possibly resulting in exposure misclassification due to temporal variability in daily personal magnetic field exposure metrics, but relevant data in adults are limited. We assessed the temporal variability of daily central tendency (time-weighted average, median) and peak (upper percentiles, maximum) personal magnetic field exposure metrics over seven consecutive days in 100 pregnant women. When exposure was modeled as a continuous variable, central tendency metrics had substantial reliability, whereas peak metrics had fair (maximum) to moderate (upper percentiles) reliability. The predictive ability of a single day metric to accurately classify participants into exposure categories based on a weeklong metric depended on the selected exposure threshold, with sensitivity decreasing with increasing exposure threshold. Consistent with the continuous measures analysis, sensitivity was higher for central tendency metrics than for peak metrics. If there is interest in peak metrics, more than one day of measurement is needed over the window of disease susceptibility to minimize measurement error, but one day may be sufficient for central tendency metrics. PMID:24691007

  5. A New Localization System for Indoor Service Robots in Low Luminance and Slippery Indoor Environment Using Afocal Optical Flow Sensor Based Sensor Fusion.

    PubMed

    Yi, Dong-Hoon; Lee, Tae-Jae; Cho, Dong-Il Dan

    2018-01-10

    In this paper, a new localization system utilizing afocal optical flow sensor (AOFS) based sensor fusion for indoor service robots in low-luminance and slippery environments is proposed, where conventional localization systems do not perform well. To accurately estimate the moving distance of a robot in a slippery environment, the robot was equipped with an AOFS along with two conventional wheel encoders. To estimate the orientation of the robot, we adopted a forward-viewing mono-camera and a gyroscope. In a very low luminance environment, it is hard to conduct conventional feature extraction and matching for localization; instead, the interior space structure from an image and the robot orientation were assessed. To enhance the appearance of image boundaries, a rolling guidance filter was applied after histogram equalization. The proposed system was developed to be operable on a low-cost processor and implemented on a consumer robot. Experiments were conducted in a low illumination condition of 0.1 lx and a carpeted environment. The robot traversed a 1.5 × 2.0 m rectangular trajectory 20 times. When only wheel encoders and a gyroscope were used for robot localization, the maximum position error was 10.3 m and the maximum orientation error was 15.4°. Using the proposed system, the maximum position error was 0.8 m and the maximum orientation error was within 1.0°.

  6. A New Localization System for Indoor Service Robots in Low Luminance and Slippery Indoor Environment Using Afocal Optical Flow Sensor Based Sensor Fusion

    PubMed Central

    Yi, Dong-Hoon; Lee, Tae-Jae; Cho, Dong-Il “Dan”

    2018-01-01

    In this paper, a new localization system utilizing afocal optical flow sensor (AOFS) based sensor fusion for indoor service robots in low-luminance and slippery environments is proposed, where conventional localization systems do not perform well. To accurately estimate the moving distance of a robot in a slippery environment, the robot was equipped with an AOFS along with two conventional wheel encoders. To estimate the orientation of the robot, we adopted a forward-viewing mono-camera and a gyroscope. In a very low luminance environment, it is hard to conduct conventional feature extraction and matching for localization; instead, the interior space structure from an image and the robot orientation were assessed. To enhance the appearance of image boundaries, a rolling guidance filter was applied after histogram equalization. The proposed system was developed to be operable on a low-cost processor and implemented on a consumer robot. Experiments were conducted in a low illumination condition of 0.1 lx and a carpeted environment. The robot traversed a 1.5 × 2.0 m rectangular trajectory 20 times. When only wheel encoders and a gyroscope were used for robot localization, the maximum position error was 10.3 m and the maximum orientation error was 15.4°. Using the proposed system, the maximum position error was 0.8 m and the maximum orientation error was within 1.0°. PMID:29320414

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grant, Alexander M.; Deller, Timothy W.; Maramraju, Sri Harsha

    Purpose: The GE SIGNA PET/MR is a new whole body integrated time-of-flight (ToF)-PET/MR scanner from GE Healthcare. The system is capable of simultaneous PET and MR image acquisition with sub-400 ps coincidence time resolution. Simultaneous PET/MR holds great potential as a method of interrogating molecular, functional, and anatomical parameters in clinical disease in one study. Despite the complementary imaging capabilities of PET and MRI, their respective hardware tends to be incompatible due to mutual interference. In this work, the GE SIGNA PET/MR is evaluated in terms of PET performance and the potential effects of interference from MRI operation. Methods: The NEMA NU 2-2012 protocol was followed to measure PET performance parameters including spatial resolution, noise equivalent count rate, sensitivity, accuracy, and image quality. Each of these tests was performed both with the MR subsystem idle and with continuous MR pulsing for the duration of the PET data acquisition. Most measurements were repeated at three separate test sites where the system is installed. Results: The scanner has achieved an average of 4.4, 4.1, and 5.3 mm full width at half maximum radial, tangential, and axial spatial resolutions, respectively, at 1 cm from the transaxial FOV center. The peak noise equivalent count rate (NECR) of 218 kcps and a scatter fraction of 43.6% are reached at an activity concentration of 17.8 kBq/ml. Sensitivity at the center position is 23.3 cps/kBq. The maximum relative slice count rate error below peak NECR was 3.3%, and the residual error from attenuation and scatter corrections was 3.6%. Continuous MR pulsing had either no effect or a minor effect on each measurement. Conclusions: Performance measurements of the ToF-PET whole body GE SIGNA PET/MR system indicate that it is a promising new simultaneous imaging platform.

  8. NEMA NU 2-2012 performance studies for the SiPM-based ToF-PET component of the GE SIGNA PET/MR system.

    PubMed

    Grant, Alexander M; Deller, Timothy W; Khalighi, Mohammad Mehdi; Maramraju, Sri Harsha; Delso, Gaspar; Levin, Craig S

    2016-05-01

    The GE SIGNA PET/MR is a new whole body integrated time-of-flight (ToF)-PET/MR scanner from GE Healthcare. The system is capable of simultaneous PET and MR image acquisition with sub-400 ps coincidence time resolution. Simultaneous PET/MR holds great potential as a method of interrogating molecular, functional, and anatomical parameters in clinical disease in one study. Despite the complementary imaging capabilities of PET and MRI, their respective hardware tends to be incompatible due to mutual interference. In this work, the GE SIGNA PET/MR is evaluated in terms of PET performance and the potential effects of interference from MRI operation. The NEMA NU 2-2012 protocol was followed to measure PET performance parameters including spatial resolution, noise equivalent count rate, sensitivity, accuracy, and image quality. Each of these tests was performed both with the MR subsystem idle and with continuous MR pulsing for the duration of the PET data acquisition. Most measurements were repeated at three separate test sites where the system is installed. The scanner has achieved an average of 4.4, 4.1, and 5.3 mm full width at half maximum radial, tangential, and axial spatial resolutions, respectively, at 1 cm from the transaxial FOV center. The peak noise equivalent count rate (NECR) of 218 kcps and a scatter fraction of 43.6% are reached at an activity concentration of 17.8 kBq/ml. Sensitivity at the center position is 23.3 cps/kBq. The maximum relative slice count rate error below peak NECR was 3.3%, and the residual error from attenuation and scatter corrections was 3.6%. Continuous MR pulsing had either no effect or a minor effect on each measurement. Performance measurements of the ToF-PET whole body GE SIGNA PET/MR system indicate that it is a promising new simultaneous imaging platform.

  9. Study of the location of testing area in residual stress measurement by Moiré interferometry combined with hole-drilling method

    NASA Astrophysics Data System (ADS)

    Qin, Le; Xie, HuiMin; Zhu, RongHua; Wu, Dan; Che, ZhiGang; Zou, ShiKun

    2014-04-01

    This paper investigates the effect of the location of the testing area in residual stress measurement by Moiré interferometry combined with the hole-drilling method. The selection of the location of the testing area is analyzed theoretically and experimentally. In the theoretical study, the factors which affect the surface released radial strain εr were analyzed on the basis of the formulae of the hole-drilling method, and the relations between those factors and εr were established. By combining Moiré interferometry with the hole-drilling method, the residual stress of an interference-fit specimen was measured to verify the theoretical analysis. According to the analysis results, the testing area for minimizing the error of strain measurement is determined. Moreover, if the orientation of the maximum principal stress is known, the value of strain can be measured with higher precision by the Moiré interferometry method.

  10. Accounting for the decrease of photosystem photochemical efficiency with increasing irradiance to estimate quantum yield of leaf photosynthesis.

    PubMed

    Yin, Xinyou; Belay, Daniel W; van der Putten, Peter E L; Struik, Paul C

    2014-12-01

    Maximum quantum yield for leaf CO2 assimilation under limiting light conditions (ΦCO2LL) is commonly estimated as the slope of the linear regression of net photosynthetic rate against absorbed irradiance over a range of low-irradiance conditions. Methodological errors associated with this estimation have often been attributed either to light absorptance by non-photosynthetic pigments or to some data points being beyond the linear range of the irradiance response, both causing an underestimation of ΦCO2LL. We demonstrate here that a decrease in photosystem (PS) photochemical efficiency with increasing irradiance, even at very low levels, is another source of error that causes a systematic underestimation of ΦCO2LL. A model method accounting for this error was developed, and was used to estimate ΦCO2LL from simultaneous measurements of gas exchange and chlorophyll fluorescence on leaves using various combinations of species, CO2, O2, or leaf temperature levels. The conventional linear regression method underestimated ΦCO2LL by ca. 10-15%. Differences in the estimated ΦCO2LL among measurement conditions were generally accounted for by different levels of photorespiration as described by the Farquhar-von Caemmerer-Berry model. However, our data revealed that the temperature dependence of PSII photochemical efficiency under low light was an additional factor that should be accounted for in the model.

  11. Mapping health outcome measures from a stroke registry to EQ-5D weights.

    PubMed

    Ghatnekar, Ola; Eriksson, Marie; Glader, Eva-Lotta

    2013-03-07

    To map health outcome related variables from a national register, not part of any validated instrument, to EQ-5D weights among stroke patients. We used two cross-sectional data sets including patient characteristics, outcome variables and EQ-5D weights from the national Swedish stroke register. Three regression techniques were used on the estimation set (n=272): ordinary least squares (OLS), Tobit, and censored least absolute deviation (CLAD). The regression coefficients for "dressing", "toileting", "mobility", "mood", "general health" and "proxy-responders" were applied to the validation set (n=272), and the performance was analysed with mean absolute error (MAE) and mean square error (MSE). The number of statistically significant coefficients varied by model, but all models generated consistent coefficients in terms of sign. Mean utility was underestimated in all models (least by OLS), and the predicted weights showed lower variation than the observed weights (again least for OLS). The maximum attainable EQ-5D weight ranged from 0.90 (OLS) to 1.00 (Tobit and CLAD). Health states with utility weights <0.5 had greater errors than those with weights ≥0.5 (P<0.01). This study indicates that it is possible to map non-validated health outcome measures from a stroke register onto preference-based utilities to study the development of stroke care over time, and to compare with other conditions in terms of utility.
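
    As a sketch of the mapping approach, the snippet below fits an OLS model on synthetic register items and scores a held-out set with MAE and MSE. The items, coefficients, and data are hypothetical stand-ins, and the paper's Tobit and CLAD variants would additionally require a censored-regression routine:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 272

      def simulate(n):
          """Synthetic stand-in for binary register items and EQ-5D weights."""
          X = rng.integers(0, 2, size=(n, 6)).astype(float)
          beta = np.array([-0.15, -0.12, -0.20, -0.10, -0.08, -0.05])
          y = np.clip(0.95 + X @ beta + rng.normal(0, 0.05, n), -0.59, 1.0)
          return X, y

      X_est, y_est = simulate(n)                       # estimation set
      coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X_est]), y_est,
                                 rcond=None)           # OLS with intercept

      X_val, y_val = simulate(n)                       # validation set
      pred = np.column_stack([np.ones(n), X_val]) @ coef

      print(f"MAE = {np.mean(np.abs(pred - y_val)):.3f}")
      print(f"MSE = {np.mean((pred - y_val) ** 2):.4f}")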

  12. An improved maximum permissible exposure meter for safety assessments of laser radiation

    NASA Astrophysics Data System (ADS)

    Corder, D. A.; Evans, D. R.; Tyrer, J. R.

    1997-12-01

    Current interest in laser radiation safety requires demonstration that a laser system has been designed to prevent exposure to levels of laser radiation exceeding the Maximum Permissible Exposure. In some simple systems it is possible to prove this by calculation, but in most cases it is preferable to confirm calculated results with a measurement. This measurement may be made with commercially available equipment, but there are limitations with this approach. A custom-designed instrument is presented in which the full range of measurement issues has been addressed. Important features of the instrument are the design and optimisation of detector heads for the measurement task, and consideration of user interface requirements. Three designs for the detector head are presented; these cover the majority of common laser types. The detector heads are designed to optimise the performance of relatively low-cost detector elements for this measurement task. The three detector head designs are suitable for interfacing to photodiodes, low-power thermopiles and pyroelectric detectors. Design of the user interface was an important aspect of the work. A user interface designed for the specific application minimises the risk of user error or misinterpretation of the measurement results. A palmtop computer was used to provide an advanced user interface. User requirements were considered in order that the final instrument was well matched to the task of laser radiation hazard audits.

  13. Angular motion estimation using dynamic models in a gyro-free inertial measurement unit.

    PubMed

    Edwan, Ezzaldeen; Knedlik, Stefan; Loffeld, Otmar

    2012-01-01

    In this paper, we summarize the results of using dynamic models borrowed from tracking theory to describe the time evolution of the state vector and thereby estimate the angular motion in a gyro-free inertial measurement unit (GF-IMU). The GF-IMU is a special type of inertial measurement unit (IMU) that uses only a set of accelerometers to infer the angular motion. Using distributed accelerometers, we obtain an angular information vector (AIV) composed of angular acceleration and quadratic angular velocity terms. We use a Kalman filter approach to estimate the angular velocity vector since it is not expressed explicitly within the AIV. The bias parameters inherent in the accelerometer measurements produce a biased AIV, and hence the AIV bias parameters are estimated within an augmented state vector. Using dynamic models, the appended bias parameters of the AIV become observable and hence we can obtain an unbiased angular motion estimate. Moreover, a good model is required to extract the maximum amount of information from the observation. Observability analysis is done to determine the conditions for having an observable state space model. For higher grades of accelerometers and under relatively higher sampling frequencies, the error of accelerometer measurements is dominated by noise. Consequently, simulations are conducted on two models: one has bias parameters appended in the state space model and the other is a reduced model without bias parameters.
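
    The sketch below is a minimal linear Kalman filter with an augmented constant-bias state, not the authors' exact GF-IMU model: it tracks a scalar angular rate from a biased angular-acceleration measurement plus a rate pseudo-measurement (a stand-in for the information recovered from the quadratic terms), and shows the appended bias becoming observable:

      import numpy as np

      dt = 0.01
      F = np.array([[1, dt, 0],
                    [0, 1,  0],
                    [0, 0,  1]], dtype=float)   # state: [omega, alpha, bias]
      H = np.array([[1, 0, 0],                  # rate pseudo-measurement
                    [0, 1, 1]], dtype=float)    # biased angular acceleration
      Q = np.diag([1e-6, 1e-4, 1e-10])          # process noise; bias ~ constant
      R = np.diag([1e-2, 1e-3])                 # measurement noise covariances

      rng = np.random.default_rng(1)
      true_bias = 0.05                          # rad/s^2, to be recovered
      omega_true = 0.0
      x, P = np.zeros(3), np.eye(3)

      for k in range(2000):
          t = k * dt
          alpha_true = 0.5 * np.sin(2 * np.pi * 0.5 * t)   # simulated truth
          omega_true += alpha_true * dt
          z = np.array([omega_true, alpha_true + true_bias])
          z += rng.multivariate_normal(np.zeros(2), R)

          x = F @ x                                        # predict
          P = F @ P @ F.T + Q
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # update
          x = x + K @ (z - H @ x)
          P = (np.eye(3) - K @ H) @ P

      print(f"estimated bias = {x[2]:+.4f}  (true {true_bias:+.4f})")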

  14. Angular Motion Estimation Using Dynamic Models in a Gyro-Free Inertial Measurement Unit

    PubMed Central

    Edwan, Ezzaldeen; Knedlik, Stefan; Loffeld, Otmar

    2012-01-01

    In this paper, we summarize the results of using dynamic models borrowed from tracking theory to describe the time evolution of the state vector and thereby estimate the angular motion in a gyro-free inertial measurement unit (GF-IMU). The GF-IMU is a special type of inertial measurement unit (IMU) that uses only a set of accelerometers to infer the angular motion. Using distributed accelerometers, we obtain an angular information vector (AIV) composed of angular acceleration and quadratic angular velocity terms. We use a Kalman filter approach to estimate the angular velocity vector since it is not expressed explicitly within the AIV. The bias parameters inherent in the accelerometer measurements produce a biased AIV, and hence the AIV bias parameters are estimated within an augmented state vector. Using dynamic models, the appended bias parameters of the AIV become observable and hence we can obtain an unbiased angular motion estimate. Moreover, a good model is required to extract the maximum amount of information from the observation. Observability analysis is done to determine the conditions for having an observable state space model. For higher grades of accelerometers and under relatively higher sampling frequencies, the error of accelerometer measurements is dominated by noise. Consequently, simulations are conducted on two models: one has bias parameters appended in the state space model and the other is a reduced model without bias parameters. PMID:22778586

  15. Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2010-01-01

    Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…

  16. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis: A Comparison of Maximum Likelihood and Bayesian Estimations.

    PubMed

    Can, Seda; van de Schoot, Rens; Hox, Joop

    2015-06-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence on the convergence rate of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors versus Bayesian estimation) is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions.

  17. An Assessment of State-of-the-Art Mean Sea Surface and Geoid Models of the Arctic Ocean: Implications for Sea Ice Freeboard Retrieval

    NASA Astrophysics Data System (ADS)

    Skourup, Henriette; Farrell, Sinéad Louise; Hendricks, Stefan; Ricker, Robert; Armitage, Thomas W. K.; Ridout, Andy; Andersen, Ole Baltazar; Haas, Christian; Baker, Steven

    2017-11-01

    State-of-the-art Arctic Ocean mean sea surface (MSS) models and global geoid models (GGMs) are used to support sea ice freeboard estimation from satellite altimeters, as well as in oceanographic studies such as mapping sea level anomalies and mean dynamic ocean topography. However, errors in a given model in the high-frequency domain, primarily due to unresolved gravity features, can result in errors in the estimated along-track freeboard. These errors are exacerbated in areas with a sparse lead distribution in consolidated ice pack conditions. Additionally, model errors can impact ocean geostrophic currents, derived from satellite altimeter data, while remaining biases in these models may impact longer-term, multisensor oceanographic time series of sea level change in the Arctic. This study focuses on an assessment of five state-of-the-art Arctic MSS models (UCL13/04 and DTU15/13/10) and a commonly used GGM (EGM2008). We describe errors due to unresolved gravity features, intersatellite biases, and remaining satellite orbit errors, and their impact on the derivation of sea ice freeboard. The latest MSS models, incorporating CryoSat-2 sea surface height measurements, show improved definition of gravity features, such as the Gakkel Ridge. The standard deviation between models ranges from 0.03 to 0.25 m. The impact of remaining MSS/GGM errors on freeboard retrieval can reach several decimeters in parts of the Arctic. While the maximum observed freeboard difference found in the central Arctic was 0.59 m (UCL13 MSS minus EGM2008 GGM), the standard deviation in freeboard differences is 0.03-0.06 m.

  18. Assessment and quantification of sources of variability in breast apparent diffusion coefficient (ADC) measurements at diffusion weighted imaging.

    PubMed

    Giannotti, E; Waugh, S; Priba, L; Davis, Z; Crowe, E; Vinnicombe, S

    2015-09-01

    Apparent diffusion coefficient (ADC) measurements are increasingly used for assessing breast cancer response to neoadjuvant chemotherapy, although little data exist on ADC measurement reproducibility. The purpose of this work was to investigate and characterise the magnitude of errors in ADC measures that may be encountered in such follow-up studies, namely scanner stability, scan-scan reproducibility, inter- and intra-observer measures, and the most reproducible measurement of ADC. Institutional Review Board approval was obtained for the prospective study of healthy volunteers and written consent acquired for the retrospective study of patient images. All scanning was performed on a 3.0-T MRI scanner. Scanner stability was assessed using an ice-water phantom weekly for 12 weeks. Inter-scan repeatability was assessed across two scans of 10 healthy volunteers (26-61 years; mean: 44.7 years). Inter- and intra-reader analysis repeatability was measured in 52 carcinomas from clinical patients (29-70 years; mean: 50.0 years) by measuring the whole-tumor ADC value on a single slice with maximum tumor diameter (ADCS) and the ADC value of a small region of interest (ROI) on the same slice (ADCmin). Repeatability was assessed using intraclass correlation coefficients (ICC) and coefficients of repeatability (CoR). Scanner stability contributed 6% error to phantom ADC measurements (0.071×10⁻³ mm²/s; mean ADC = 1.089×10⁻³ mm²/s). The measured scan-scan CoR in the volunteers was 0.122×10⁻³ mm²/s, contributing an error of 8% to the mean measured values (ADCscan1 = 1.529×10⁻³ mm²/s; ADCscan2 = 1.507×10⁻³ mm²/s). Technical and clinical observers demonstrated excellent intra-observer repeatability (ICC > 0.9). Clinical observer CoR values were marginally better than technical observer measures (ADCS: 0.035×10⁻³ mm²/s vs. 0.097×10⁻³ mm²/s; ADCmin: 0.09×10⁻³ mm²/s vs. 0.114×10⁻³ mm²/s). Inter-reader ICC values were good (0.864, ADCS) and fair (0.677, ADCmin). Corresponding CoR values were 0.202×10⁻³ mm²/s and 0.264×10⁻³ mm²/s, respectively. Both scanner stability and scan-scan variation have minimal influence on breast ADC measurements, contributing less than 10% error of average measured ADC values. Measurement of ADC values from a small ROI contributes greater variability compared with measurement of ADC across the whole visible tumor on one slice. The greatest source of error in follow-up studies is likely to be associated with measures made by multiple observers, and this should be considered where multiple measures are required to assess response to treatment.
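
    For reference, the repeatability statistics named above can be computed as below. This is a generic sketch on synthetic volunteer data (a one-way random-effects ICC and the Bland-Altman coefficient of repeatability, 1.96 × SD of paired differences), not the exact software pipeline used in the study:

      import numpy as np

      def icc_oneway(m):
          """One-way random-effects ICC(1,1) for an (n subjects x k repeats) array."""
          n, k = m.shape
          row_mean = m.mean(axis=1, keepdims=True)
          bms = k * np.sum((row_mean - m.mean()) ** 2) / (n - 1)   # between-subject MS
          wms = np.sum((m - row_mean) ** 2) / (n * (k - 1))        # within-subject MS
          return (bms - wms) / (bms + (k - 1) * wms)

      def coefficient_of_repeatability(a, b):
          """Bland-Altman CoR: 1.96 x SD of the paired differences."""
          return 1.96 * np.std(a - b, ddof=1)

      rng = np.random.default_rng(2)
      true_adc = rng.normal(1.5, 0.2, 10)        # 10 volunteers, units of 1e-3 mm^2/s
      scan1 = true_adc + rng.normal(0, 0.04, 10)
      scan2 = true_adc + rng.normal(0, 0.04, 10)
      print(f"ICC = {icc_oneway(np.column_stack([scan1, scan2])):.3f}")
      print(f"CoR = {coefficient_of_repeatability(scan1, scan2):.3f}")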

  19. Micro-vibration detection with heterodyne holography based on time-averaged method

    NASA Astrophysics Data System (ADS)

    Qin, XiaoDong; Pan, Feng; Chen, ZongHui; Hou, XueQin; Xiao, Wen

    2017-02-01

    We propose a micro-vibration detection method by introducing heterodyne interferometry to time-averaged holography. This method compensates for the deficiency of time-average holography in quantitative measurements and widens its range of application effectively. Acousto-optic modulators are used to modulate the frequencies of the reference beam and the object beam. Accurate detection of the maximum amplitude of each point in the vibration plane is performed by altering the frequency difference of both beams. The range of amplitude detection of plane vibration is extended. In the stable vibration mode, the distribution of the maximum amplitude of each point is measured and the fitted curves are plotted. Hence the plane vibration mode of the object is demonstrated intuitively and detected quantitatively. We analyzed the method in theory and built an experimental system with a sine signal as the excitation source and a typical piezoelectric ceramic plate as the target. The experimental results indicate that, within a certain error range, the detected vibration mode agrees with the intrinsic vibration characteristics of the object, thus proving the validity of this method.

  20. Design of Helical Capacitance Sensor for Holdup Measurement in Two-Phase Stratified Flow: A Sinusoidal Function Approach

    PubMed Central

    Lim, Lam Ghai; Pao, William K. S.; Hamid, Nor Hisham; Tang, Tong Boon

    2016-01-01

    A 360° twisted helical capacitance sensor was developed for holdup measurement in horizontal two-phase stratified flow. Instead of suppressing the nonlinear response, the sensor was optimized in such a way that a ‘sine-like’ function was displayed on top of the linear function. This design concept was implemented and verified in both software and hardware. A good agreement was achieved between the finite element model of the proposed design and the approximation model (pure sinusoidal function), with a maximum difference of ±1.2%. In addition, the design parameters of the sensor were analysed and investigated. It was found that the error in symmetry of the sinusoidal function could be minimized by adjusting the pitch of the helix. Experiments on air-water and oil-water stratified flows were carried out and validated the sinusoidal relationship with maximum differences of ±1.2% and ±1.3%, respectively, for water holdup in the range 0.15 to 0.85. The proposed design concept therefore may pose a promising alternative for the optimization of capacitance sensor design. PMID:27384567

  1. Accuracy and Precision of a Surgical Navigation System: Effect of Camera and Patient Tracker Position and Number of Active Markers

    PubMed Central

    Gundle, Kenneth R.; White, Jedediah K.; Conrad, Ernest U.; Ching, Randal P.

    2017-01-01

    Introduction: Surgical navigation systems are increasingly used to aid resection and reconstruction of osseous malignancies. In the process of implementing image-based surgical navigation systems, there are numerous opportunities for error that may impact surgical outcome. This study aimed to examine modifiable sources of error in an idealized scenario, when using a bidirectional infrared surgical navigation system. Materials and Methods: Accuracy and precision were assessed using a computerized-numerical-controlled (CNC) machined grid with known distances between indentations while varying: 1) the distance from the grid to the navigation camera (range 150 to 247 cm), 2) the distance from the grid to the patient tracker device (range 20 to 40 cm), and 3) whether the minimum or maximum number of bidirectional infrared markers were actively functioning. For each scenario, distances between grid points were measured at 10-mm increments between 10 and 120 mm, with twelve measurements made at each distance. The accuracy outcome was the root mean square (RMS) error between the navigation system distance and the actual grid distance. To assess precision, four indentations were recorded six times for each scenario while also varying the angle of the navigation system pointer. The outcome for precision testing was the standard deviation of the distance between each measured point and the mean three-dimensional coordinate of the six points for each cluster. Results: Univariate and multiple linear regression revealed that as the distance from the navigation camera to the grid increased, the RMS error increased (p<0.001). The RMS error also increased when not all infrared markers were actively tracking (p=0.03), and as the measured distance increased (p<0.001). In a multivariate model, these factors accounted for 58% of the overall variance in the RMS error. Standard deviations in repeated measures also increased when not all infrared markers were active (p<0.001), and as the distance between the navigation camera and physical space increased (p=0.005). Location of the patient tracker did not affect accuracy (p=0.36) or precision (p=0.97). Conclusion: In our model laboratory test environment, the infrared bidirectional navigation system was more accurate and precise when the distance from the navigation camera to the physical (working) space was minimized and all bidirectional markers were active. These findings may require alterations in operating room setup and software changes to improve the performance of this system. PMID:28694888
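
    The two outcomes above reduce to simple statistics; a sketch with hypothetical numbers (not the study's data) follows:

      import numpy as np

      def rms_error(measured, actual):
          """RMS error between navigated and true grid distances (mm)."""
          measured, actual = np.asarray(measured), np.asarray(actual)
          return np.sqrt(np.mean((measured - actual) ** 2))

      def precision_sd(points):
          """SD of distances from repeated acquisitions of one point to their centroid."""
          points = np.asarray(points, dtype=float)          # shape (n, 3), in mm
          d = np.linalg.norm(points - points.mean(axis=0), axis=1)
          return d.std(ddof=1)

      print(f"accuracy RMS: {rms_error([10.2, 20.1, 29.7, 40.4], [10, 20, 30, 40]):.2f} mm")
      rng = np.random.default_rng(3)
      print(f"precision SD: {precision_sd(rng.normal([5, 5, 5], 0.2, (6, 3))):.2f} mm")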

  2. Initial testing of a 3D printed perfusion phantom using digital subtraction angiography

    NASA Astrophysics Data System (ADS)

    Wood, Rachel P.; Khobragade, Parag; Ying, Leslie; Snyder, Kenneth; Wack, David; Bednarek, Daniel R.; Rudin, Stephen; Ionita, Ciprian N.

    2015-03-01

    Perfusion imaging is the most widely applied modality for the assessment of acute stroke. Parameters such as Cerebral Blood Flow (CBF), Cerebral Blood Volume (CBV) and Mean Transit Time (MTT) are used to distinguish the tissue infarct core and ischemic penumbra. Due to a lack of standardization, these parameters vary significantly between vendors and software even when provided with the same data set. There is a critical need to standardize the systems and make them more reliable. We have designed a uniform phantom to test and verify perfusion systems. We implemented a flow loop with different flow rates (250, 300, 350 ml/min) and injected the same amount of contrast for each. The images of the phantom were acquired using a digital angiographic system. Since this phantom is uniform, projection images obtained using DSA are sufficient for initial validation. To validate the phantom, we measured the contrast concentration at three regions of interest (arterial input, venous output, perfused area) and derived time-density curves (TDCs). We then calculated the maximum slope, the area under the TDCs, and the flow. The maximum slope increased linearly with increasing flow rate, while the area under the curve decreased with increasing flow rate. There was a 25% error between the calculated flow and the measured flow. The derived TDCs were clinically relevant, and the calculated flow, maximum slope and areas under the curve were sensitive to the measured flow. We have created a systematic way to calibrate existing perfusion systems and assess their reliability.
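
    A sketch of the derived quantities, assuming a Mullani-Gould-style maximum-slope flow index (tissue upslope normalized by arterial peak); the time-density curves here are synthetic, not the phantom data:

      import numpy as np

      def trapezoid(y, t):
          """Trapezoidal area under a sampled curve."""
          return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

      def max_slope_flow(t, tissue_tdc, arterial_tdc):
          """Flow index: maximum tissue upslope / arterial peak."""
          max_slope = np.gradient(tissue_tdc, t).max()
          return max_slope / np.max(arterial_tdc)

      # Synthetic gamma-variate-like time-density curves (illustration only).
      t = np.linspace(0, 30, 151)
      arterial = 100 * (t / 5) ** 2 * np.exp(-t / 5)
      tissue = 20 * (t / 8) ** 2 * np.exp(-t / 8)
      print(f"AUC(tissue) = {trapezoid(tissue, t):.1f}")
      print(f"flow index  = {max_slope_flow(t, tissue, arterial):.4f} 1/s")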

  3. Automatic Detection of Preposition Errors in Learner Writing

    ERIC Educational Resources Information Center

    De Felice, Rachele; Pulman, Stephen

    2009-01-01

    In this article, we present an approach to the automatic correction of preposition errors in L2 English. Our system, based on a maximum entropy classifier, achieves average precision of 42% and recall of 35% on this task. The discussion of results obtained on correct and incorrect data aims to establish what characteristics of L2 writing prove…

  4. Land use surveys by means of automatic interpretation of LANDSAT system data

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Lombardo, M. A.; Novo, E. M. L. D.; Niero, M.; Foresti, C.

    1981-01-01

    Analyses for seven land-use classes are presented. The classes are: urban area, industrial area, bare soil, cultivated area, pastureland, reforestation, and natural vegetation. The automatic classification of LANDSAT MSS data using a maximum likelihood algorithm shows a 39% average error of omission and a 3.45% error of commission for the seven classes.
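
    A compact version of such a per-class Gaussian maximum likelihood classifier (a generic sketch, not the original MSS processing chain; the bands and class statistics are hypothetical):

      import numpy as np

      class GaussianMLClassifier:
          """Per-class multivariate Gaussian maximum likelihood classifier."""

          def fit(self, X, y):
              self.classes_ = np.unique(y)
              self.params_ = []
              for c in self.classes_:
                  Xc = X[y == c]
                  cov = np.cov(Xc, rowvar=False)
                  self.params_.append((Xc.mean(axis=0), np.linalg.inv(cov),
                                       np.linalg.slogdet(cov)[1]))
              return self

          def predict(self, X):
              scores = []
              for mu, icov, logdet in self.params_:
                  d = X - mu
                  # log-likelihood up to a constant: -0.5 (log|C| + d' C^-1 d)
                  scores.append(-0.5 * (logdet + np.einsum('ij,jk,ik->i', d, icov, d)))
              return self.classes_[np.argmax(scores, axis=0)]

      # Two hypothetical spectral bands, two land-use classes.
      rng = np.random.default_rng(8)
      X = np.vstack([rng.normal([0, 0], 0.3, (200, 2)), rng.normal([1, 1], 0.3, (200, 2))])
      y = np.repeat([0, 1], 200)
      clf = GaussianMLClassifier().fit(X, y)
      print(f"resubstitution error: {(clf.predict(X) != y).mean():.1%}")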

  5. Hand-held dynamometry in patients with haematological malignancies: Measurement error in the clinical assessment of knee extension strength

    PubMed Central

    Knols, Ruud H; Aufdemkampe, Geert; de Bruin, Eling D; Uebelhart, Daniel; Aaronson, Neil K

    2009-01-01

    Background: Hand-held dynamometry is a portable and inexpensive method to quantify muscle strength. To determine if muscle strength has changed, an examiner must know what part of the difference between a patient's pre-treatment and post-treatment measurements is attributable to real change, and what part is due to measurement error. This study aimed to determine the relative and absolute reliability of intra- and inter-observer strength measurements with a hand-held dynamometer (HHD). Methods: Two observers performed maximum voluntary peak torque measurements (MVPT) for isometric knee extension in 24 patients with haematological malignancies. For each patient, the measurements were carried out on the same day. The main outcome measures were the intraclass correlation coefficient (ICC ± 95%CI), the standard error of measurement (SEM), the smallest detectable difference (SDD), the relative values as % of the grand mean of the SEM and SDD, and the limits of agreement for the intra- and inter-observer '3 repetition average' and the 'highest value of 3 MVPT' knee extension strength measures. Results: The intra-observer ICCs were 0.94 for the average of 3 MVPT (95%CI: 0.86–0.97) and 0.86 for the highest value of 3 MVPT (95%CI: 0.71–0.94). The ICCs for the inter-observer measurements were 0.89 for the average of 3 MVPT (95%CI: 0.75–0.95) and 0.77 for the highest value of 3 MVPT (95%CI: 0.54–0.90). The SEMs for the intra-observer measurements were 6.22 Nm (3.98% of the grand mean (GM)) and 9.83 Nm (5.88% of GM). For the inter-observer measurements, the SEMs were 9.65 Nm (6.65% of GM) and 11.41 Nm (6.73% of GM). The SDDs for the generated parameters varied from 17.23 Nm (11.04% of GM) to 27.26 Nm (17.09% of GM) for intra-observer measurements, and 26.76 Nm (16.77% of GM) to 31.62 Nm (18.66% of GM) for inter-observer measurements, with similar results for the limits of agreement. Conclusion: The results indicate that there is acceptable relative reliability for evaluating knee strength with a HHD, while the measurement error observed was modest. The HHD may be useful in detecting changes in knee extension strength at the individual patient level. PMID:19272149
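
    The SEM and SDD reported above follow the usual definitions SEM = SD·√(1 − ICC) and SDD = 1.96·√2·SEM (e.g., 1.96·√2·6.22 ≈ 17.2 Nm, matching the intra-observer figure). A sketch with round, hypothetical inputs:

      import numpy as np

      def sem(sd, icc):
          """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
          return sd * np.sqrt(1 - icc)

      def sdd(sem_value):
          """Smallest detectable difference: SDD = 1.96 * sqrt(2) * SEM."""
          return 1.96 * np.sqrt(2) * sem_value

      # Illustrative values: between-subject SD = 25 Nm, ICC = 0.94.
      s = sem(25.0, 0.94)
      print(f"SEM = {s:.2f} Nm, SDD = {sdd(s):.2f} Nm")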

  6. Navigation accuracy comparing non-covered frame and use of plastic sterile drapes to cover the reference frame in 3D acquisition.

    PubMed

    Corenman, Donald S; Strauch, Eric L; Dornan, Grant J; Otterstrom, Eric; Zalepa King, Lisa

    2017-09-01

    Advancements in surgical navigation technology coupled with 3-dimensional (3D) radiographic data have significantly enhanced the accuracy and efficiency of spinal fusion implant placement. Increased usage of such technology has led to rising concerns regarding maintenance of the sterile field, as makeshift drape systems are fraught with breaches, thus presenting an increased risk of surgical site infections (SSIs). A clinical need exists for a sterile draping solution with these techniques. Our objective was to quantify the expected accuracy error associated with the 2 mm and 4 mm thickness Sterile-Z Patient Drape® using the Medtronic O-Arm® Surgical Imaging with StealthStation® S7® Navigation System. Camera distance to the reference frame was investigated for its contribution to accuracy error. A testing jig was placed on the radiolucent table and the Medtronic passive reference frame was attached to the jig. The StealthStation® S7® navigation camera was placed at various distances from the testing jig and the geometry error of the reference frame was captured for three different drape configurations: no drape, 2 mm drape and 4 mm drape. The O-Arm® gantry location and StealthStation® S7® camera position were maintained and seven 3D acquisitions for each drape configuration were measured. Data were analyzed by a two-factor analysis of variance (ANOVA) and Bonferroni comparisons were used to assess the independent effects of camera angle and drape on accuracy error. Median (and maximum) measurement accuracy error was higher for the 2 mm than for the 4 mm drape at each camera distance. The most extreme error observed (4.6 mm) occurred when using the 2 mm drape at the 'far' camera distance. The 4 mm drape was found to induce an accuracy error of 0.11 mm (95% confidence interval, 0.06-0.15; P<0.001) relative to no-drape testing, regardless of camera distance. The medium camera distance produced lower accuracy error than either the close (additional 0.08 mm error; 95% CI, 0-0.15; P=0.035) or far (additional 0.21 mm error; 95% CI, 0.13-0.28; P<0.001) camera distances, regardless of whether a drape was used. In comparison to the no-drape condition, the accuracy error of 0.11 mm when using a 4 mm film drape is minimal and clinically insignificant.

  7. The ESS neutrino facility for CP violation discovery

    NASA Astrophysics Data System (ADS)

    Baussan, Eric; Bouquerel, Elian; Dracos, Marcos

    2017-09-01

    The comparatively large value of the neutrino mixing angle θ13 measured in 2012 by neutrino reactor experiments has opened the possibility to observe for the first time CP violation in the leptonic sector. The measured value of θ13 also favors the 2nd oscillation maximum for the discovery of CP violation instead of the usually used 1st oscillation maximum. The sensitivity at the 2nd oscillation maximum is about three times higher than at the 1st oscillation maximum, implying a significantly lower sensitivity to systematic errors. Measuring at the 2nd oscillation maximum necessitates a very intense neutrino beam with the appropriate energy. The world's most intense pulsed spallation neutron source, the European Spallation Source, has a proton linac with 5 MW power and 2 GeV energy. This linac also has the potential to become the proton driver of the world's most intense neutrino beam with very high potential for the discovery of neutrino CP violation. The physics performance of that neutrino Super Beam in conjunction with a megaton water Cherenkov neutrino detector installed ca. 1000 m down in a mine at a distance of about 500 km from the ESS has been evaluated. In addition, the use of such a detector will make it possible to extend the physics program to proton decay, atmospheric neutrinos and astrophysics searches. The ESS proton linac upgrade, the accumulator ring needed for proton pulse compression, the target station optimization and the physics potential are described. In addition to the production of neutrinos, this facility will also be a copious source of muons which could be used to feed a low-energy nuSTORM facility, a future neutrino factory or a muon collider. The ESS linac, under construction, will reach full operation at 5 MW by 2023, after which the upgrades for the neutrino facility could start.

  8. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
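
    A minimal fixed-bandwidth Gaussian kernel estimator and an integrated squared error comparison against the true density, sketched on synthetic data (the paper's interactive scaling-factor selection and discrete penalized-likelihood estimator are not reproduced here):

      import numpy as np

      def gaussian_kde(x_grid, sample, h):
          """Fixed-bandwidth Gaussian kernel density estimate."""
          u = (x_grid[:, None] - sample[None, :]) / h
          return np.exp(-0.5 * u**2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

      rng = np.random.default_rng(4)
      sample = rng.normal(0, 1, 200)
      grid = np.linspace(-4, 4, 401)
      true = np.exp(-0.5 * grid**2) / np.sqrt(2 * np.pi)

      # Integrated squared error as a function of the kernel scaling factor h.
      for h in (0.1, 0.3, 0.5, 1.0):
          ise = np.sum((gaussian_kde(grid, sample, h) - true) ** 2) * (grid[1] - grid[0])
          print(f"h={h:.1f}  ISE={ise:.5f}")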

  9. Extending the impulse response in order to reduce errors due to impulse noise and signal fading

    NASA Technical Reports Server (NTRS)

    Webb, Joseph A.; Rolls, Andrew J.; Sirisena, H. R.

    1988-01-01

    A finite impulse response (FIR) digital smearing filter was designed to produce maximum intersymbol interference and maximum extension of the impulse response of the signal in a noiseless binary channel. A matched FIR desmearing filter at the receiver then reduced the intersymbol interference to zero. Signal fades were simulated by means of 100 percent signal blockage in the channel. Smearing and desmearing filters of length 256, 512, and 1024 were used for these simulations. Results indicate that impulse response extension by means of bit smearing appears to be a useful technique for correcting errors due to impulse noise or signal fading in a binary channel.
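
    One way to realize a smearing/desmearing pair with zero residual intersymbol interference is an all-pass (unit-magnitude, random-phase) filter whose matched inverse is its spectral conjugate. The block demo below illustrates that principle on a simulated fade; it is a hedged sketch, not the paper's length-256/512/1024 filter designs:

      import numpy as np

      rng = np.random.default_rng(5)
      N = 256

      # Unit-magnitude, random-phase spectrum with conjugate symmetry, so the
      # smearing filter h = ifft(H) is real and spreads each bit over the block.
      H = np.ones(N, dtype=complex)
      H[1:N // 2] = np.exp(1j * rng.uniform(-np.pi, np.pi, N // 2 - 1))
      H[N // 2 + 1:] = np.conj(H[1:N // 2][::-1])

      bits = rng.choice([-1.0, 1.0], N)                 # one block of binary data
      smeared = np.fft.ifft(np.fft.fft(bits) * H).real  # circular smearing

      faded = smeared.copy()
      faded[100:105] = 0.0                              # 100% blockage over 5 samples

      # The matched desmearing filter is the inverse all-pass, conj(H); with no
      # fade the cascade is exactly a delta (zero intersymbol interference).
      recovered = np.fft.ifft(np.fft.fft(faded) * np.conj(H)).real

      direct = bits.copy()
      direct[100:105] = 0.0                             # same fade, no smearing
      print("bit errors with smearing   :", int(np.sum(np.sign(recovered) != bits)))
      print("bit errors without smearing:", int(np.sum(np.sign(direct) != bits)))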

  10. Evaluation of fiber Bragg grating sensor interrogation using InGaAs linear detector arrays and Gaussian approximation on embedded hardware.

    PubMed

    Kumar, Saurabh; Amrutur, Bharadwaj; Asokan, Sundarrajan

    2018-02-01

    Fiber Bragg Grating (FBG) sensors have become popular for applications related to structural health monitoring, biomedical engineering, and robotics. However, for successful large scale adoption, FBG interrogation systems are as important as sensor characteristics. Apart from accuracy, the required number of FBG sensors per fiber and the distance between the device in which the sensors are used and the interrogation system also influence the selection of the interrogation technique. For several measurement devices developed for applications in biomedical engineering and robotics, only a few sensors per fiber are required and the device is close to the interrogation system. For these applications, interrogation systems based on InGaAs linear detector arrays provide a good choice. However, their resolution is dependent on the algorithms used for curve fitting. In this work, a detailed analysis of the choice of algorithm using the Gaussian approximation for the FBG spectrum and the number of pixels used for curve fitting on the errors is provided. The points where the maximum errors occur have been identified. All comparisons for wavelength shift detection have been made against another interrogation system based on the tunable swept laser. It has been shown that maximum errors occur when the wavelength shift is such that one new pixel is included for curve fitting. It has also been shown that an algorithm with lower computation cost compared to the more popular methods using iterative non-linear least squares estimation can be used without leading to the loss of accuracy. The algorithm has been implemented on embedded hardware, and a speed-up of approximately six times has been observed.
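
    A common low-cost, non-iterative peak estimator consistent with this description is a parabola fit to the log intensities (Caruana's method). The sketch below shows how the number of pixels in the fitting window drives the error on a simulated FBG spectrum; the window sizes and spectrum parameters are assumptions for illustration, not the paper's configuration:

      import numpy as np

      def gaussian_peak_position(pixels, counts):
          """Closed-form Gaussian centroid via a parabola fit to log intensities."""
          y = np.log(np.asarray(counts, dtype=float))
          a, b, c = np.polyfit(pixels, y, 2)    # y = a p^2 + b p + c
          return -b / (2 * a)                   # vertex of the parabola

      # Simulated FBG reflection spectrum sampled by a 512-pixel detector array.
      pix = np.arange(512)
      true_center = 243.37
      spectrum = 1000 * np.exp(-0.5 * ((pix - true_center) / 4.0) ** 2) + 1e-3

      # Fit only a window of pixels around the maximum; the window size matters.
      k = np.argmax(spectrum)
      for half in (2, 4, 8):
          w = slice(k - half, k + half + 1)
          est = gaussian_peak_position(pix[w], spectrum[w])
          print(f"window ±{half}: center error = {est - true_center:+.4f} px")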

  11. Evaluation of fiber Bragg grating sensor interrogation using InGaAs linear detector arrays and Gaussian approximation on embedded hardware

    NASA Astrophysics Data System (ADS)

    Kumar, Saurabh; Amrutur, Bharadwaj; Asokan, Sundarrajan

    2018-02-01

    Fiber Bragg Grating (FBG) sensors have become popular for applications related to structural health monitoring, biomedical engineering, and robotics. However, for successful large scale adoption, FBG interrogation systems are as important as sensor characteristics. Apart from accuracy, the required number of FBG sensors per fiber and the distance between the device in which the sensors are used and the interrogation system also influence the selection of the interrogation technique. For several measurement devices developed for applications in biomedical engineering and robotics, only a few sensors per fiber are required and the device is close to the interrogation system. For these applications, interrogation systems based on InGaAs linear detector arrays provide a good choice. However, their resolution is dependent on the algorithms used for curve fitting. In this work, a detailed analysis of the choice of algorithm using the Gaussian approximation for the FBG spectrum and the number of pixels used for curve fitting on the errors is provided. The points where the maximum errors occur have been identified. All comparisons for wavelength shift detection have been made against another interrogation system based on the tunable swept laser. It has been shown that maximum errors occur when the wavelength shift is such that one new pixel is included for curve fitting. It has also been shown that an algorithm with lower computation cost compared to the more popular methods using iterative non-linear least squares estimation can be used without leading to the loss of accuracy. The algorithm has been implemented on embedded hardware, and a speed-up of approximately six times has been observed.

  12. Improved accuracy of markerless motion tracking on bone suppression images: preliminary study for image-guided radiation therapy (IGRT)

    NASA Astrophysics Data System (ADS)

    Tanaka, Rie; Sanada, Shigeru; Sakuta, Keita; Kawashima, Hiroki

    2015-05-01

    The bone suppression technique based on advanced image processing can suppress the conspicuity of bones on chest radiographs, creating soft-tissue images similar to those obtained by the dual-energy subtraction technique. This study was performed to evaluate the usefulness of bone suppression image processing in image-guided radiation therapy. We demonstrated the improved accuracy of markerless motion tracking on bone suppression images. Chest fluoroscopic images of nine patients with lung nodules during respiration were obtained using a flat-panel detector system (120 kV, 0.1 mAs/pulse, 5 fps). Commercial bone suppression image processing software was applied to the fluoroscopic images to create corresponding bone suppression images. Regions of interest were manually located on lung nodules and automatic target tracking was conducted based on the template matching technique. To evaluate the accuracy of target tracking, the maximum tracking error in the resulting images was compared with that of conventional fluoroscopic images. The tracking errors were decreased by half in eight of nine cases. The average maximum tracking errors in bone suppression and conventional fluoroscopic images were 1.3 ± 1.0 and 3.3 ± 3.3 mm, respectively. The bone suppression technique was especially effective in the lower lung area where pulmonary vessels, bronchi, and ribs show complex movements. The bone suppression technique improved tracking accuracy without special equipment or implantation of fiducial markers, and with only a small additional dose to the patient. Bone suppression fluoroscopy is a potential method for measuring respiratory displacement of the target. This paper was presented at RSNA 2013 and the work was carried out at Kanazawa University, Japan.

  13. Validation and uncertainty analysis of a pre-treatment 2D dose prediction model

    NASA Astrophysics Data System (ADS)

    Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank

    2018-02-01

    Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been used effectively to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximally deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed good agreement, with on average 90.8% and 90.5% of pixels passing a (2%, 2 mm) global gamma analysis, respectively, with a low-dose threshold of 10%. The maximum and overall uncertainty of the model depend on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.
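
    For context, a brute-force version of the global (2%, 2 mm) gamma pass rate used in such comparisons can be written as below. This simplified same-grid implementation is a sketch, not the authors' analysis code:

      import numpy as np

      def gamma_pass_rate(ref, ev, spacing_mm, dd_mm=2.0, dp_frac=0.02, thresh=0.10):
          """Simplified global gamma analysis for two dose images on one grid.

          ref/ev: reference and evaluated 2D dose arrays; spacing_mm: pixel size;
          dd_mm: distance criterion; dp_frac: dose criterion (fraction of the
          global reference maximum); thresh: low-dose cutoff excluded from scoring.
          """
          dp = dp_frac * ref.max()
          r = int(np.ceil(3 * dd_mm / spacing_mm))        # limited search window
          ny, nx = ref.shape
          passed = scored = 0
          for i in range(ny):
              for j in range(nx):
                  if ref[i, j] < thresh * ref.max():
                      continue
                  scored += 1
                  best = np.inf
                  for di in range(-r, r + 1):
                      for dj in range(-r, r + 1):
                          k, l = i + di, j + dj
                          if 0 <= k < ny and 0 <= l < nx:
                              g2 = ((di * di + dj * dj) * spacing_mm**2 / dd_mm**2
                                    + (ev[k, l] - ref[i, j]) ** 2 / dp**2)
                              best = min(best, g2)
                  passed += best <= 1.0                   # gamma^2 <= 1 iff gamma <= 1
          return passed / scored

      # Usage: rate = gamma_pass_rate(predicted_pdi, measured_pdi, spacing_mm=1.0)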

  14. An Approach to Maximize Weld Penetration During TIG Welding of P91 Steel Plates by Utilizing Image Processing and Taguchi Orthogonal Array

    NASA Astrophysics Data System (ADS)

    Singh, Akhilesh Kumar; Debnath, Tapas; Dey, Vidyut; Rai, Ram Naresh

    2017-10-01

    P91 is a modified 9Cr-1Mo steel. Fabricated structures and components of P91 have many applications in the power and chemical industries owing to excellent properties such as high-temperature stress corrosion resistance and low susceptibility to thermal fatigue at high operating temperatures. The weld quality and surface finish of fabricated P91 structures are very good when welded by tungsten inert gas (TIG) welding. However, the process has limitations regarding weld penetration. The success of a welding process lies in fabricating with such a combination of parameters that gives maximum weld penetration and minimum weld width. To investigate the effect of the autogenous TIG welding parameters on weld penetration and weld width, bead-on-plate welds were carried out on P91 plates of thickness 6 mm in accordance with a Taguchi L9 design. Welding current, welding speed and gas flow rate were the three control variables in the investigation. After autogenous TIG welding, the dimensions of the weld width, weld penetration and weld area were successfully measured by an image analysis technique developed for the study. The maximum error for the dimensions measured with the developed image analysis technique was only 2% compared to the measurements of the Leica-Q-Win-V3 software installed on an optical microscope. The measurements with the developed software, unlike measurements under a microscope, required minimal human intervention. An analysis of variance (ANOVA) confirms the significance of the selected parameters. Thereafter, Taguchi's method was successfully used to trade off between maximum penetration and minimum weld width while keeping the weld area at a minimum.
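
    Taguchi analysis typically scores each L9 run with a signal-to-noise ratio: larger-the-better for penetration and smaller-the-better for weld width. A sketch with hypothetical repeat measurements (not the study's data):

      import numpy as np

      def sn_larger_the_better(y):
          """Taguchi S/N for responses to maximize (e.g., weld penetration)."""
          y = np.asarray(y, dtype=float)
          return -10 * np.log10(np.mean(1.0 / y**2))

      def sn_smaller_the_better(y):
          """Taguchi S/N for responses to minimize (e.g., weld width)."""
          y = np.asarray(y, dtype=float)
          return -10 * np.log10(np.mean(y**2))

      # Hypothetical penetration depths (mm) for repeats of one L9 run:
      print(f"S/N (penetration) = {sn_larger_the_better([3.1, 3.3, 3.0]):.2f} dB")
      print(f"S/N (width)       = {sn_smaller_the_better([7.8, 8.1, 8.0]):.2f} dB")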

  15. Magnetic resonance imaging-targeted, 3D transrectal ultrasound-guided fusion biopsy for prostate cancer: Quantifying the impact of needle delivery error on diagnosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Peter R., E-mail: pmarti46@uwo.ca; Cool, Derek W.; Romagnoli, Cesare

    2014-07-15

    Purpose: Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided “fusion” prostate biopsy intends to reduce the ∼23% false negative rate of clinical two-dimensional TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsies continue to yield false negatives. Therefore, the authors propose to investigate how biopsy system needle delivery error affects the probability of sampling each tumor, by accounting for uncertainties due to guidance system error, image registration error, and irregular tumor shapes. Methods: T2-weighted, dynamic contrast-enhanced T1-weighted, and diffusion-weighted prostate MRI and 3D TRUS images were obtained from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D tumor surfaces that were registered to the 3D TRUS images using an iterative closest point prostate surface-based method to yield 3D binary images of the suspicious regions in the TRUS context. The probability P of obtaining a sample of tumor tissue in one biopsy core was calculated by integrating a 3D Gaussian distribution over each suspicious region domain. Next, the authors performed an exhaustive search to determine the maximum root mean squared error (RMSE, in mm) of a biopsy system that gives P ≥ 95% for each tumor sample, and then repeated this procedure for equal-volume spheres corresponding to each tumor sample. Finally, the authors investigated the effect of probe-axis-direction error on measured tumor burden by studying the relationship between the error and the estimated percentage of core involvement. Results: Given a 3.5 mm RMSE for contemporary fusion biopsy systems, P ≥ 95% for 21 out of 81 tumors. The authors determined that for a biopsy system with 3.5 mm RMSE, one cannot expect to sample tumors of approximately 1 cm³ or smaller with 95% probability with only one biopsy core. The predicted maximum RMSE giving P ≥ 95% for each tumor was consistently greater when using spherical tumor shapes as opposed to no shape assumption. However, an assumption of spherical tumor shape for RMSE = 3.5 mm led to a mean overestimation of tumor sampling probabilities of 3%, implying that assuming spherical tumor shape may be reasonable for many prostate tumors. The authors also determined that a biopsy system would need to have a RMS needle delivery error of no more than 1.6 mm in order to sample 95% of tumors with one core. The authors’ experiments also indicated that the effect of axial-direction error on the measured tumor burden was mitigated by the 18 mm core length at 3.5 mm RMSE. Conclusions: For biopsy systems with RMSE ≥ 3.5 mm, more than one biopsy core must be taken from the majority of tumors to achieve P ≥ 95%. These observations support the authors’ perspective that some tumors of clinically significant sizes may require more than one biopsy attempt in order to be sampled during the first biopsy session. This motivates the authors’ ongoing development of an approach to optimize biopsy plans with the aim of achieving a desired probability of obtaining a sample from each tumor, while minimizing the number of biopsies. Optimized planning of within-tumor targets for MRI-3D TRUS fusion biopsy could support earlier diagnosis of prostate cancer while it remains localized to the gland and curable.
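
    The probability integral described above can be approximated by Monte Carlo sampling. The sketch below treats the biopsy as a point sample (ignoring core length) and assumes the total RMSE splits isotropically across axes; the spherical tumor and grid are hypothetical:

      import numpy as np

      def sampling_probability(mask, voxel_mm, target_mm, rmse_mm, n=200_000, seed=0):
          """Monte Carlo estimate of P(one needle sample lands in the tumor).

          mask: 3D boolean tumor segmentation; voxel_mm: isotropic voxel size;
          target_mm: intended target (mm); rmse_mm: total 3D RMS needle error.
          """
          rng = np.random.default_rng(seed)
          sigma = rmse_mm / np.sqrt(3.0)        # per-axis sigma from total RMSE
          pts = rng.normal(target_mm, sigma, size=(n, 3))
          idx = np.rint(pts / voxel_mm).astype(int)
          inside = np.all((idx >= 0) & (idx < mask.shape), axis=1)
          hits = mask[idx[inside, 0], idx[inside, 1], idx[inside, 2]]
          return hits.sum() / n

      # Hypothetical 10 mm diameter spherical "tumor" on a 0.5 mm grid:
      x, y, z = np.indices((60, 60, 60)) * 0.5
      mask = (x - 15) ** 2 + (y - 15) ** 2 + (z - 15) ** 2 <= 5.0 ** 2
      target = np.array([15.0, 15.0, 15.0])
      print(f"P(hit) at RMSE 3.5 mm: {sampling_probability(mask, 0.5, target, 3.5):.2f}")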

  16. Trunk repositioning errors are increased in balance-impaired older adults.

    PubMed

    Goldberg, Allon; Hernandez, Manuel Enrique; Alexander, Neil B

    2005-10-01

    Controlling the flexing trunk is critical in recovering from a loss of balance and avoiding a fall. To investigate the relationship between trunk control and balance in older adults, we measured trunk repositioning accuracy in young adults and in balance-impaired and unimpaired older adults. Young adults (N = 8, mean age 24.3 years) and two groups of community-dwelling older adults defined by unipedal stance time (UST), a balance-unimpaired group (UST > 30 seconds, N = 7, mean age 73.9 years) and a balance-impaired group (UST < 5 seconds, N = 8, mean age 79.6 years), were tested for standing trunk control ability by reproducing an approximately 30° trunk flexion angle under three visual-surface conditions: eyes open and eyes closed on the floor, and eyes open on foam. Errors in reproducing the angle were defined as trunk repositioning errors (TREs). Clinical measures related to balance, trunk extensor strength, and self-reported disability were obtained. TREs were significantly greater in the balance-impaired group than in the other groups, even when controlling for trunk extensor strength and body mass. In older adults, there were significant correlations between TREs and three clinical measures of balance and fall risk, UST and maximum step length (-0.65 to -0.75) and Timed Up & Go score (0.55), and between TREs and age (0.63-0.76). In each group TREs were similar under the three visual-surface conditions. Test-retest reliability for TREs was good to excellent (intraclass correlation coefficients ≥ 0.74). Older balance-impaired adults have larger TREs, and thus poorer trunk control, than do balance-unimpaired older individuals. TREs are reliable and valid measures of underlying balance impairment in older adults, and may eventually prove to be useful in predicting the ability to recover from losses of balance and to avoid falls.

  17. Multi-sensor calibration of low-cost magnetic, angular rate and gravity systems.

    PubMed

    Lüken, Markus; Misgeld, Berno J E; Rüschen, Daniel; Leonhardt, Steffen

    2015-10-13

    We present a new calibration procedure for low-cost nine degrees-of-freedom (9DOF) magnetic, angular rate and gravity (MARG) sensor systems, which relies on a calibration cube, a reference table and a body sensor network (BSN). The 9DOF MARG sensor is part of our recently-developed "Integrated Posture and Activity Network by Medit Aachen" (IPANEMA) BSN. The advantage of this new approach is the use of the calibration cube, which allows for easy integration of two sensor nodes of the IPANEMA BSN. One 9DOF MARG sensor node is thereby used for calibration; the second 9DOF MARG sensor node is used for reference measurements. A novel algorithm uses these measurements to further improve the performance of the calibration procedure by processing arbitrarily-executed motions. In addition, the calibration routine can be used in an alignment procedure to minimize errors in the orientation between the 9DOF MARG sensor system and a motion capture inertial reference system. A two-stage experimental study is conducted to underline the performance of our calibration procedure. In both stages of the proposed calibration procedure, the BSN data, as well as reference tracking data, are recorded. In the first stage, the mean values of all sensor outputs are determined as the absolute measurement offset to minimize integration errors in the derived movement model of the corresponding body segment. The second stage deals with the dynamic characteristics of the measurement system, where the dynamic deviation of the sensor output compared to a reference system is corrected. In practical validation experiments, this procedure showed promising results with a maximum RMS error of 3.89°.

  18. Force-Sensing Enhanced Simulation Environment (ForSense) for laparoscopic surgery training and assessment.

    PubMed

    Cundy, Thomas P; Thangaraj, Evelyn; Rafii-Tari, Hedyeh; Payne, Christopher J; Azzie, Georges; Sodergren, Mikael H; Yang, Guang-Zhong; Darzi, Ara

    2015-04-01

    Excessive or inappropriate tissue interaction force during laparoscopic surgery is a recognized contributor to surgical error, especially for robotic surgery. Measurement of force at the tool-tissue interface is, therefore, a clinically relevant skill assessment variable that may improve effectiveness of surgical simulation. Popular box trainer simulators lack the necessary technology to measure force. The aim of this study was to develop a force sensing unit that may be integrated easily with existing box trainer simulators and to (1) validate multiple force variables as objective measurements of laparoscopic skill, and (2) determine concurrent validity of a revised scoring metric. A base plate unit sensitized to a force transducer was retrofitted to a box trainer. Participants of 3 different levels of operative experience performed 5 repetitions of a peg transfer and suture task. Multiple outcome variables of force were assessed as well as a revised scoring metric that incorporated a penalty for force error. Mean, maximum, and overall magnitudes of force were significantly different among the 3 levels of experience, as well as force error. Experts were found to exert the least force and fastest task completion times, and vice versa for novices. Overall magnitude of force was the variable most correlated with experience level and task completion time. The revised scoring metric had similar predictive strength for experience level compared with the standard scoring metric. Current box trainer simulators can be adapted for enhanced objective measurements of skill involving force sensing. These outcomes are significantly influenced by level of expertise and are relevant to operative safety in laparoscopic surgery. Conventional proficiency standards that focus predominantly on task completion time may be integrated with force-based outcomes to be more accurately reflective of skill quality.

  19. Multi-Sensor Calibration of Low-Cost Magnetic, Angular Rate and Gravity Systems

    PubMed Central

    Lüken, Markus; Misgeld, Berno J.E.; Rüschen, Daniel; Leonhardt, Steffen

    2015-01-01

    We present a new calibration procedure for low-cost nine degrees-of-freedom (9DOF) magnetic, angular rate and gravity (MARG) sensor systems, which relies on a calibration cube, a reference table and a body sensor network (BSN). The 9DOF MARG sensor is part of our recently-developed “Integrated Posture and Activity Network by Medit Aachen” (IPANEMA) BSN. The advantage of this new approach is the use of the calibration cube, which allows for easy integration of two sensor nodes of the IPANEMA BSN. One 9DOF MARG sensor node is thereby used for calibration; the second 9DOF MARG sensor node is used for reference measurements. A novel algorithm uses these measurements to further improve the performance of the calibration procedure by processing arbitrarily-executed motions. In addition, the calibration routine can be used in an alignment procedure to minimize errors in the orientation between the 9DOF MARG sensor system and a motion capture inertial reference system. A two-stage experimental study is conducted to underline the performance of our calibration procedure. In both stages of the proposed calibration procedure, the BSN data, as well as reference tracking data are recorded. In the first stage, the mean values of all sensor outputs are determined as the absolute measurement offset to minimize integration errors in the derived movement model of the corresponding body segment. The second stage deals with the dynamic characteristics of the measurement system where the dynamic deviation of the sensor output compared to a reference system is corrected. In practical validation experiments, this procedure showed promising results with a maximum RMS error of 3.89°. PMID:26473873

  20. A Comparative Analysis of Three Monocular Passive Ranging Methods on Real Infrared Sequences

    NASA Astrophysics Data System (ADS)

    Bondžulić, Boban P.; Mitrović, Srđan T.; Barbarić, Žarko P.; Andrić, Milenko S.

    2013-09-01

    Three monocular passive ranging methods are analyzed and tested on real infrared sequences. The first method exploits scale changes of an object in successive frames, while the other two use the Beer-Lambert law. The ranging methods are evaluated by comparison with simultaneously obtained reference data at the test site. The research addresses scenarios where multiple sensor views or active measurements are not possible. The results show that these methods for range estimation can provide the fidelity required for object tracking. Maximum values of relative distance estimation errors in near-ideal conditions are less than 8%.
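
    The scale-change method follows from the pinhole camera model: apparent size is inversely proportional to range, so w1/w2 = R2/R1. A one-line sketch (it assumes the range at the first frame is known from some other cue):

      def range_from_scale(r1_m, w1_px, w2_px):
          """Range at frame 2 from range at frame 1 and the apparent-size change."""
          return r1_m * (w1_px / w2_px)

      # Object grew from 20 px to 25 px while closing from a known 1000 m:
      print(f"R2 = {range_from_scale(1000.0, 20.0, 25.0):.0f} m")  # -> 800 m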

  1. Use of Fuzzycones for Sun-Only Attitude Determination: THEMIS Becomes ARTEMIS

    NASA Technical Reports Server (NTRS)

    Hashmall, Joseph A.; Felikson, Denis; Sedlak, Joseph E.

    2009-01-01

    In order for two THEMIS probes to successfully transition to ARTEMIS, it will be necessary to determine attitudes with moderate accuracy using Sun sensor data only. To meet this requirement, an implementation of the Fuzzycones maximum likelihood algorithm was developed. The effect of different measurement uncertainty models on Fuzzycones attitude accuracy was investigated, and a bin-transition technique was introduced to improve attitude accuracy using data with uniform error distributions. The algorithm was tested with THEMIS data and in simulations. The analysis results show that the attitude requirements can be met using Fuzzycones and data containing two bin-transitions.

  2. PREVALENCE OF UNCORRECTED REFRACTIVE ERRORS IN ADULTS AGED 30 YEARS AND ABOVE IN A RURAL POPULATION IN PAKISTAN.

    PubMed

    Abdullah, Ayesha S; Jadoon, Milhammad Zahid; Akram, Mohammad; Awan, Zahid Hussain; Azam, Mohammad; Safdar, Mohammad; Nigar, Mohammad

    2015-01-01

    Uncorrected refractive errors are a leading cause of visual disability globally. This population-based study was done to estimate the prevalence of uncorrected refractive errors in adults aged 30 years and above of village Pawakah, Khyber Pakhtunkhwa (KPK), Pakistan. It was a cross-sectional survey in which 1000 individuals were included randomly. All the individuals were screened for uncorrected refractive errors, and those whose visual acuity (VA) was found to be less than 6/6 were refracted. Where refraction proved unsatisfactory (i.e., a best corrected visual acuity of <6/6), further examination was done to establish the cause of the subnormal vision. A total of 917 subjects participated in the survey (response rate 92%). The prevalence of uncorrected refractive errors was found to be 23.97% among males and 20% among females. The prevalence of visually disabling refractive errors was 6.89% in males and 5.71% in females. The prevalence was seen to increase with age, with maximum prevalence in the 51-60 years age group. Hypermetropia (10.14%) was found to be the commonest refractive error, followed by myopia (6.00%) and astigmatism (5.6%). The prevalence of presbyopia was 57.5% (60.45% in males and 55.23% in females). Poor affordability was the commonest barrier to the use of spectacles, followed by unawareness. Cataract was the commonest reason for impaired vision after refractive correction. The prevalence of blindness was 1.96% (1.53% in males and 2.28% in females) in this community, with cataract as the commonest cause. Despite being the most easily avoidable cause of subnormal vision, uncorrected refractive errors still account for a major proportion of the burden of decreased vision in this area. Effective measures for the screening and affordable correction of uncorrected refractive errors need to be incorporated into the health care delivery system.

  3. Fault Identification Based on Nlpca in Complex Electrical Engineering

    NASA Astrophysics Data System (ADS)

    Zhang, Yagang; Wang, Zengping; Zhang, Jinfang

    2012-07-01

    The fault is inevitable in any complex systems engineering. The electric power system is an essentially nonlinear system and one of the most complex artificial systems in the world. In our research, based on the real-time measurements of phasor measurement units, under the influence of white Gaussian noise (with a standard deviation of 0.01 and a mean error of 0), we mainly used nonlinear principal component analysis (NLPCA) to resolve the fault identification problem in complex electrical engineering. The simulation results show that the fault in complex electrical engineering usually corresponds to the variable with the maximum absolute coefficient in the first principal component. This research has significant theoretical value and practical engineering significance.
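
    In the linear case, the identification rule above reduces to inspecting the loadings of the first principal component. A minimal sketch using linear PCA as a simplified stand-in for NLPCA, with a hypothetical five-channel measurement record (the fault signature, noise seed, and channel layout are assumptions, not from the paper):

    ```python
    import numpy as np

    def fault_variable_pca(X):
        """Index of the variable with the largest-magnitude loading on the
        first principal component of the measurement matrix X."""
        Xc = X - X.mean(axis=0)                    # center each channel
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        pc1 = vt[0]                                # first principal direction
        return int(np.argmax(np.abs(pc1))), pc1

    # Hypothetical PMU-like record: 5 channels of white Gaussian noise
    # (zero mean, sigma = 0.01, as in the paper), fault injected on channel 2.
    rng = np.random.default_rng(7)
    X = rng.normal(0.0, 0.01, (300, 5))
    X[150:, 2] += np.linspace(0.0, 1.0, 150)       # drifting fault signature
    idx, loadings = fault_variable_pca(X)
    print("suspected faulted variable:", idx)       # expected: 2
    ```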

  4. Quantum State Tomography via Linear Regression Estimation

    PubMed Central

    Qi, Bo; Hou, Zhibo; Li, Li; Dong, Daoyi; Xiang, Guoyong; Guo, Guangcan

    2013-01-01

    A simple yet efficient state reconstruction algorithm of linear regression estimation (LRE) is presented for quantum state tomography. In this method, quantum state reconstruction is converted into a parameter estimation problem of a linear regression model and the least-squares method is employed to estimate the unknown parameters. An asymptotic mean squared error (MSE) upper bound for all possible states to be estimated is given analytically, which depends explicitly upon the involved measurement bases. This analytical MSE upper bound can guide one to choose optimal measurement sets. The computational complexity of LRE is O(d^4), where d is the dimension of the quantum state. Numerical examples show that LRE is much faster than maximum-likelihood estimation for quantum state tomography. PMID:24336519
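
    A minimal single-qubit sketch of the least-squares reconstruction step (the paper treats general d-dimensional states, and the measurement bases enter its MSE bound; the Pauli measurement set, test state, and shot count below are illustrative assumptions):

    ```python
    import numpy as np

    # Single-qubit operators: rho = (I + r . sigma) / 2 with Bloch vector r
    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.diag([1.0, -1.0]).astype(complex)
    PAULIS = [X, Y, Z]

    def simulate_freqs(rho, n_shots, rng):
        """Frequencies of the +1 outcome for sigma_x/y/z measurements (Born rule)."""
        ps = [0.5 * (1 + np.real(np.trace(rho @ P))) for P in PAULIS]
        return np.array([rng.binomial(n_shots, p) / n_shots for p in ps])

    def lre_reconstruct(freqs):
        """Least-squares (here: closed-form) estimate r_i = 2 p_i - 1; the
        estimate is not guaranteed positive semidefinite without projection."""
        r = 2 * freqs - 1
        return 0.5 * (I2 + sum(ri * P for ri, P in zip(r, PAULIS)))

    rng = np.random.default_rng(0)
    rho_true = np.array([[0.85, 0.3 - 0.1j], [0.3 + 0.1j, 0.15]])
    rho_hat = lre_reconstruct(simulate_freqs(rho_true, 10_000, rng))
    print(np.round(rho_hat, 3))
    ```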

  5. Randomized controlled trial on the effects of training in the use of closed-circuit television on reading performance.

    PubMed

    Burggraaff, Marloes C; van Nispen, Ruth M A; Hoeben, Frank P; Knol, Dirk L; van Rens, Ger H M B

    2012-04-24

    To investigate the effectiveness of training in the use of closed-circuit television (CCTV) on reading performance in visually impaired patients. In a multicenter masked randomized controlled trial, 122 patients were randomized either to a treatment group that received usual delivery instructions from the CCTV supplier combined with concise outpatient standardized training, or to a control group that received delivery instructions only. The main outcome measure was reading performance, which was obtained by measuring reading acuity, reading speed, reading errors, column-tracking time, and technical reading, approximately two weeks after patients had received their CCTV and 3 months later. Videotapes of all measurements were rated by two investigators. Training effects were analyzed with linear mixed modeling. There were no statistically significant differences in results between the treatment and control groups. However, introducing a CCTV increased reading acuity (mean difference [MD] 0.93 logRAD; P < 0.01) and maximum reading speed (MD 15 wpm; P < 0.01), and decreased the number of errors (MD 0.33; P = 0.04), compared to reading without CCTV. Average reading speed (P = 0.05), number of errors (P = 0.04), and column-tracking time (P = 0.01) improved over time. Prescribing a CCTV and the delivery instructions by the supplier seemed sufficient to improve reading performance. Additional training in the use of this device did not result in further improvement. Based on these results, outpatient low-vision rehabilitation centers may consider reallocating part of the training resources into other evidence-based rehabilitation programs. (trialregister.nl number, NTR1031).

  6. Customization, control, and characterization of a commercial haptic device for high-fidelity rendering of weak forces.

    PubMed

    Gurari, Netta; Baud-Bovy, Gabriel

    2014-09-30

    The emergence of commercial haptic devices offers new research opportunities to enhance our understanding of the human sensory-motor system. Yet, commercial device capabilities have limitations which need to be addressed. This paper describes the customization of a commercial force feedback device for displaying forces with a precision that exceeds the human force perception threshold. The device was outfitted with a multi-axis force sensor and closed-loop controlled to improve its transparency. Additionally, two force sensing resistors were attached to the device to measure grip force. Force errors were modeled in the frequency- and time-domain to identify contributions from the mass, viscous friction, and Coulomb friction during open- and closed-loop control. The effect of user interaction on system stability was assessed in the context of a user study which aimed to measure force perceptual thresholds. Findings based on 15 participants demonstrate that the system maintains stability when rendering forces ranging from 0 to 0.20 N, with an average maximum absolute force error of 0.041 ± 0.013 N. Modeling the force errors revealed that Coulomb friction and inertia were the main contributors to force distortions during slow and fast motions, respectively. Existing commercial force feedback devices cannot render forces with the required precision for certain testing scenarios. Building on existing robotics work, this paper shows how a device can be customized to make it reliable for studying the perception of weak forces. The customized and closed-loop controlled device is suitable for measuring force perceptual thresholds. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Dosimetric Characteristics of Wedged Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sidhu, N.P.S.; Breitman, Karen

    2015-01-15

    The beam characteristics of the wedged fields in the nonwedged planes (planes normal to the wedged planes) were studied for 6 MV and 15 MV x-ray beams. A method was proposed for determining the maximum field length of a wedged field that can be used in the nonwedged plane without introducing undesirable alterations in the dose distributions of these fields. The method requires very few measurements. The relative wedge factors of 6 MV and 15 MV X-rays were determined for wedge filters of nominal wedge angles of 15°, 30°, 45°, and 60° as a function of depth and field size. For a 6 MV beam the relative wedge factors determined for a field size of 10 × 10 cm² for 30°, 45°, and 60° wedge filters can be used for various field sizes ranging from 4 cm² to 20 cm² (except for the 60° wedge, for which the maximum field size that can be used is 15 × 20 cm²) without introducing errors in the dosimetric calculations of more than 0.5% for depths up to 20 cm and 1% for depths up to 30 cm. For the 15° wedge filter the relative wedge factor for a field size of 10 × 10 cm² can be used over the same range of field sizes by introducing slightly higher error, 0.5% for depths up to 10 cm and 1% for depths up to 30 cm. For a 15 MV beam the maximum magnitude of the relative wedge factors for 45° and 60° lead wedges is of the order of 1%, and it is not important clinically to apply a correction of that magnitude. For a 15 MV beam the relative wedge factors determined for a field size of 6 × 6 cm² for the 15° and 30° steel wedges can be used over a range of field sizes from 4 cm² to 20 cm² without causing dosimetric errors greater than 0.5% for depths up to 10 cm.

  8. Quality assurance of dynamic parameters in volumetric modulated arc therapy

    PubMed Central

    Manikandan, A; Sarkar, B; Holla, R; Vivek, T R; Sujatha, N

    2012-01-01

    Objectives The purpose of this study was to demonstrate quality assurance checks for accuracy of gantry speed and position, dose rate and multileaf collimator (MLC) speed and position for a volumetric modulated arc treatment (VMAT) modality (Synergy® S; Elekta, Stockholm, Sweden), and to check that all the necessary variables and parameters were synchronous. Methods Three tests (for gantry position–dose delivery synchronisation, gantry speed–dose delivery synchronisation and MLC leaf speed and positions) were performed. Results The average error in gantry position was 0.5° and the average difference was 3 MU for a linear and a parabolic relationship between gantry position and delivered dose. In the third part of this test (sawtooth variation), the maximum difference was 9.3 MU, with a gantry position difference of 1.2°. In the sweeping field method test, a linear relationship was observed between recorded doses and distance from the central axis, as expected. In the open field method, errors were encountered at the beginning and at the end of the delivery arc, termed the “beginning” and “end” errors. For MLC position verification, the maximum error was −2.46 mm and the mean error was 0.0153 ± 0.4668 mm, and 3.4% of leaves analysed showed errors of >±1 mm. Conclusion This experiment demonstrates that the variables and parameters of the Synergy® S are synchronous and that the system is suitable for delivering VMAT using a dynamic MLC. PMID:22745206

  9. Population pharmacokinetics and maximum a posteriori probability Bayesian estimator of abacavir: application of individualized therapy in HIV-infected infants and toddlers

    PubMed Central

    Zhao, Wei; Cella, Massimo; Della Pasqua, Oscar; Burger, David; Jacqz-Aigrain, Evelyne

    2012-01-01

    AIMS To develop a population pharmacokinetic model for abacavir in HIV-infected infants and toddlers, which will be used to describe both once and twice daily pharmacokinetic profiles, identify covariates that explain variability and propose optimal time points to optimize the area under the concentration–time curve (AUC) targeted dosage and individualize therapy. METHODS The pharmacokinetics of abacavir was described with plasma concentrations from 23 patients using nonlinear mixed-effects modelling (NONMEM) software. A two-compartment model with first-order absorption and elimination was developed. The final model was validated using bootstrap, visual predictive check and normalized prediction distribution errors. The Bayesian estimator was validated using the cross-validation and simulation–estimation method. RESULTS The typical population pharmacokinetic parameters and relative standard errors (RSE) were apparent systemic clearance (CL) 13.4 l h−1 (RSE 6.3%), apparent central volume of distribution 4.94 l (RSE 28.7%), apparent peripheral volume of distribution 8.12 l (RSE 14.2%), apparent intercompartment clearance 1.25 l h−1 (RSE 16.9%) and absorption rate constant 0.758 h−1 (RSE 5.8%). The covariate analysis identified weight as the individual factor influencing the apparent oral clearance: CL = 13.4 × (weight/12)^1.14. The maximum a posteriori probability Bayesian estimator, based on three concentrations measured at 0, 1 or 2, and 3 h after drug intake allowed predicting individual AUC(0–t). CONCLUSIONS The population pharmacokinetic model developed for abacavir in HIV-infected infants and toddlers accurately described both once and twice daily pharmacokinetic profiles. The maximum a posteriori probability Bayesian estimator of AUC(0–t) was developed from the final model and can be used routinely to optimize individual dosing. PMID:21988586
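
    The quoted covariate model is directly computable. A small sketch of the typical-clearance calculation using only the equation above (the example weights are hypothetical):

    ```python
    def abacavir_typical_clearance(weight_kg: float) -> float:
        """Typical apparent oral clearance (l/h) from the reported covariate
        model: CL = 13.4 * (weight / 12) ** 1.14 (12 kg reference weight)."""
        return 13.4 * (weight_kg / 12.0) ** 1.14

    for w in (9, 12, 15):  # hypothetical infant/toddler weights in kg
        print(f"{w:>2} kg -> CL = {abacavir_typical_clearance(w):.1f} l/h")
    ```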

  10. Solar Tracking Error Analysis of Fresnel Reflector

    PubMed Central

    Zheng, Jiantao; Yan, Junjie; Pei, Jie; Liu, Guanjie

    2014-01-01

    Based on the rotational structure of the Fresnel reflector, the rotation angle of the mirror was deduced under the eccentric condition. By analyzing the influence of the main factors on the sun-tracking rotation angle error, the pattern and extent of the influence were revealed. It is concluded that the tracking error caused by the deviation of the rotation axis from the true-north meridian is, under certain conditions, at its maximum at noon and decreases gradually through the morning and afternoon. The tracking error caused by other deviations, such as rotational eccentricity, latitude, and solar altitude, is positive in the morning, negative in the afternoon, and zero at a certain moment around noon. PMID:24895664

  11. Application of parameter estimation to highly unstable aircraft

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Murray, J. E.

    1986-01-01

    This paper discusses the application of parameter estimation to highly unstable aircraft. It includes a discussion of the problems in applying the output error method to such aircraft and demonstrates that the filter error method eliminates these problems. The paper shows that the maximum likelihood estimator with no process noise does not reduce to the output error method when the system is unstable. It also proposes and demonstrates an ad hoc method that is similar in form to the filter error method, but applicable to nonlinear problems. Flight data from the X-29 forward-swept-wing demonstrator is used to illustrate the problems and methods discussed.

  13. Ultraviolet absorption spectrum of HOCl

    NASA Technical Reports Server (NTRS)

    Burkholder, James B.

    1993-01-01

    The room temperature UV absorption spectrum of HOCl was measured over the wavelength range 200 to 380 nm with a diode array spectrometer. The absorption spectrum was identified from UV absorption spectra recorded following UV photolysis of equilibrium mixtures of Cl2O/H2O/HOCl. The HOCl spectrum is continuous with a maximum at 242 nm and a secondary peak at 304 nm. The measured absorption cross section at 242 nm was (2.1 ± 0.3) × 10⁻¹⁹ cm² (2σ error limits). These results are in excellent agreement with the work of Knauth et al. (1979) but in poor agreement with the more recent measurements of Mishalanie et al. (1986) and Permien et al. (1988). An HOCl ν₂ infrared band intensity of 230 ± 35 cm⁻² atm⁻¹ was determined based on this UV absorption cross section. The present results are compared with these previous measurements and the discrepancies are discussed.

  14. Evaluation of methods for calculating maximum allowable standing height in amputees competing in Paralympic athletics.

    PubMed

    Connick, M J; Beckman, E; Ibusuki, T; Malone, L; Tweedy, S M

    2016-11-01

    The International Paralympic Committee has a maximum allowable standing height (MASH) rule that limits stature to a pre-trauma estimation. The MASH rule reduces the probability that bilateral lower limb amputees use disproportionately long prostheses in competition. Although there are several methods for estimating stature, the validity of these methods has not been compared. To identify the most appropriate method for the MASH rule, this study aimed to compare the criterion validity of estimations resulting from the current method, the Contini method, and four Canda methods (Canda-1, Canda-2, Canda-3, and Canda-4). Stature, ulna length, demispan, sitting height, thigh length, upper arm length, and forearm length measurements in 31 males and 30 females were used to calculate the respective estimation for each method. Results showed that Canda-1 (based on four anthropometric variables) produced the smallest error and best fitted the data in males and females. The current method was associated with the largest error of the methods tested because it increasingly overestimated height in people with smaller stature. The results suggest that the set of Canda equations provides a more valid MASH estimation in people with a range of upper limb and bilateral lower limb amputations compared with the current method. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. Mathematical model to interpret localized reflectance spectra measured in the presence of a strong fluorescence marker

    NASA Astrophysics Data System (ADS)

    Bravo, Jaime J.; Davis, Scott C.; Roberts, David W.; Paulsen, Keith D.; Kanick, Stephen C.

    2016-06-01

    Quantification of multiple fluorescence markers during neurosurgery has the potential to provide complementary contrast mechanisms between normal and malignant tissues, and one potential combination involves fluorescein sodium (FS) and aminolevulinic acid-induced protoporphyrin IX (PpIX). We focus on the interpretation of reflectance spectra containing contributions from elastically scattered (reflected) photons as well as fluorescence emissions from a strong fluorophore (i.e., FS). A model-based approach to extract μa and μs′ in the presence of FS emission is validated in optical phantoms constructed with Intralipid (1% to 2% lipid) and whole blood (1% to 3% volume fraction), over a wide range of FS concentrations (0 to 1000 μg/ml). The results show that modeling reflectance as a combination of elastically scattered light and attenuation-corrected FS-based emission yielded more accurate tissue parameter estimates when compared with a nonmodified reflectance model, with reduced maximum errors for blood volume (22% versus 90%), microvascular saturation (21% versus 100%), and μs′ (13% versus 207%). Additionally, quantitative PpIX fluorescence sampled in the same phantom as FS showed significant differences depending on the reflectance model used to estimate optical properties (i.e., maximum error 29% versus 86%). These data represent a first step toward using quantitative optical spectroscopy to guide surgeries through simultaneous assessment of FS and PpIX.

  16. Estimation of Surface Air Temperature from MODIS 1km Resolution Land Surface Temperature Over Northern China

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Leptoukh, Gregory G.; Gerasimov, Irina

    2010-01-01

    Surface air temperature is a critical variable in describing the energy and water cycle of the Earth-atmosphere system and is a key input element for hydrology and land surface models. It is a very important variable in agricultural applications and climate change studies. This is a preliminary study to examine statistical relationships between ground-station measurements of daily maximum/minimum surface air temperature and satellite remotely sensed land surface temperature from MODIS over the dry and semiarid regions of northern China. Studies were conducted for both MODIS-Terra and MODIS-Aqua using data from 2009. Results indicate that the relationships between surface air temperature and remotely sensed land surface temperature are statistically significant. The relationship between maximum air temperature and daytime land surface temperature depends significantly on land surface type and vegetation index, but the relationship between minimum air temperature and nighttime land surface temperature has little dependence on surface conditions. Based on the linear regression relationship between surface air temperature and MODIS land surface temperature, surface maximum and minimum air temperatures are estimated from 1 km MODIS land surface temperature under clear-sky conditions. The statistical error (sigma) of the estimated daily maximum (minimum) air temperature is about 3.8 °C (3.7 °C).
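
    A minimal sketch of the regression step described above: ordinary least squares of daily maximum air temperature against daytime land surface temperature, with the residual standard error playing the role of the reported sigma (the synthetic station data and coefficients are assumptions; the study fits separate relations per platform, overpass, and surface type):

    ```python
    import numpy as np

    def fit_air_temp_model(lst, t_air):
        """OLS fit of T_air ~ a + b * LST; returns coefficients and the
        residual standard error (n - 2 degrees of freedom)."""
        A = np.column_stack([np.ones_like(lst), lst])
        coef, *_ = np.linalg.lstsq(A, t_air, rcond=None)
        sigma = (t_air - A @ coef).std(ddof=2)
        return coef, sigma

    # Hypothetical pairs: daytime LST (C) vs. daily maximum air temperature (C)
    rng = np.random.default_rng(1)
    lst = rng.uniform(5.0, 45.0, 200)
    t_air = 2.0 + 0.85 * lst + rng.normal(0.0, 3.8, 200)
    coef, sigma = fit_air_temp_model(lst, t_air)
    print(f"intercept = {coef[0]:.2f} C, slope = {coef[1]:.2f}, sigma = {sigma:.2f} C")
    ```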

  17. Theoretical model for design and analysis of protectional eyewear.

    PubMed

    Zelzer, B; Speck, A; Langenbucher, A; Eppig, T

    2013-05-01

    Protectional eyewear has to fulfill both mechanical and optical stress tests. To pass these optical tests, the surfaces of safety spectacles have to be optimized to minimize optical aberrations. Starting with the surface data of three measured safety spectacles, a theoretical spectacle model (four spherical surfaces) is recalculated first and then optimized while keeping the front surface unchanged. In addition to spherical power, astigmatic power, and prism imbalance, we used the wavefront error (five different viewing directions) to simulate the optical performance and to optimize the safety spectacle geometries. All surfaces were spherical (maximum global deviation 'peak-to-valley' between the measured surface and the best-fit sphere: 0.132 mm). Except for the spherical power of the model Axcont (-0.07 m⁻¹), all simulated optical performance before optimization was better than the limits defined by standards. The optimization reduced the wavefront error by 1% to 0.150 λ (Windor/Infield), by 63% to 0.194 λ (Axcont/Bolle) and by 55% to 0.199 λ (2720/3M) without dropping below the measured thickness. The simulated optical performance of spectacle designs could be improved when using a smart optimization. A good optical design counteracts degradation by parameter variation throughout the manufacturing process. Copyright © 2013. Published by Elsevier GmbH.

  18. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    NASA Astrophysics Data System (ADS)

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; Carlson, Thomas J.

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements obtained from sensors (i.e., receivers) is an important research area that is attracting much interest. In this paper, we review several representative localization algorithms that use time-of-arrival (TOA) and time-difference-of-arrival (TDOA) measurements to achieve high signal source position estimation accuracy when a transmitter is in the line-of-sight of a receiver. Circular (TOA) and hyperbolic (TDOA) position estimation approaches both use nonlinear equations that relate the known locations of receivers and unknown locations of transmitters. Estimation of transmitter locations using the standard nonlinear equations may not be very accurate because of receiver location errors, receiver measurement errors, and computational challenges that impose high processing burdens. Least squares and maximum likelihood based algorithms have become the most popular computational approaches to transmitter location estimation. In this paper, we summarize the computational characteristics and position estimation accuracies of various positioning algorithms. By improving methods for estimating the time of arrival of transmissions at receivers and transmitter location estimation algorithms, transmitter location estimation may be applied across a range of applications and technologies such as radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
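
    A minimal sketch of the circular (TOA) approach: the nonlinear range equations are solved by iterative least squares (the 2-D geometry, propagation speed, noise level, and known emission time are illustrative assumptions, not from the review):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    C = 1500.0  # assumed propagation speed (m/s), e.g. sound in water

    def toa_residuals(x, receivers, toas, t0):
        """Residuals of the circular model: c * (t_i - t0) = ||x - r_i||."""
        return C * (toas - t0) - np.linalg.norm(receivers - x, axis=1)

    # Hypothetical geometry: four receivers at the corners, one transmitter
    receivers = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
    source, t0 = np.array([30.0, 70.0]), 0.0
    rng = np.random.default_rng(2)
    toas = np.linalg.norm(receivers - source, axis=1) / C + rng.normal(0, 1e-4, 4)

    sol = least_squares(toa_residuals, x0=np.array([50.0, 50.0]),
                        args=(receivers, toas, t0))
    print("estimated source position:", np.round(sol.x, 2))
    ```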

  19. Cost-effectiveness of the stream-gaging program in New Jersey

    USGS Publications Warehouse

    Schopp, R.D.; Ulery, R.L.

    1984-01-01

    The results of a study of the cost-effectiveness of the stream-gaging program in New Jersey are documented. This study is part of a 5-year nationwide analysis undertaken by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. This report identifies the principal uses of the data and relates those uses to funding sources, applies, at selected stations, alternative less costly methods (that is, flow routing and regression analysis) for furnishing the data, and defines a strategy for operating the program which minimizes uncertainty in the streamflow data for specific operating budgets. Uncertainty in streamflow data is primarily a function of the percentage of missing record and the frequency of discharge measurements. In this report, 101 continuous stream gages and 73 crest-stage or stage-only gages are analyzed. A minimum budget of $548,000 is required to operate the present stream-gaging program in New Jersey with an average standard error of 27.6 percent. The maximum budget analyzed was $650,000, which resulted in an average standard error of 17.8 percent. The 1983 budget of $569,000 resulted in a standard error of 24.9 percent under present operating policy. (USGS)

  20. Software for Quantifying and Simulating Microsatellite Genotyping Error

    PubMed Central

    Johnson, Paul C.D.; Haydon, Daniel T.

    2007-01-01

    Microsatellite genetic marker data are exploited in a variety of fields, including forensics, gene mapping, kinship inference and population genetics. In all of these fields, inference can be thwarted by failure to quantify and account for data errors, and kinship inference in particular can benefit from separating errors into two distinct classes: allelic dropout and false alleles. Pedant is MS Windows software for estimating locus-specific maximum likelihood rates of these two classes of error. Estimation is based on comparison of duplicate error-prone genotypes: neither reference genotypes nor pedigree data are required. Other functions include: plotting of error rate estimates and confidence intervals; simulations for performing power analysis and for testing the robustness of error rate estimates to violation of the underlying assumptions; and estimation of expected heterozygosity, which is a required input. The program, documentation and source code are available from http://www.stats.gla.ac.uk/~paulj/pedant.html. PMID:20066126
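
    A heavily simplified sketch of the estimation idea: treat mismatches between duplicate genotypes as binomial draws and take the maximum-likelihood rate (Pedant itself models allelic dropout and false alleles as distinct, locus-specific processes; the counts below are hypothetical):

    ```python
    import math

    def mismatch_rate_mle(n_mismatches: int, n_comparisons: int, z: float = 1.96):
        """Binomial MLE of the per-comparison error rate between duplicate
        genotypes, with a Wald confidence interval (a crude simplification
        of Pedant's dropout/false-allele likelihood)."""
        p = n_mismatches / n_comparisons
        se = math.sqrt(p * (1 - p) / n_comparisons)
        return p, (max(0.0, p - z * se), min(1.0, p + z * se))

    rate, ci = mismatch_rate_mle(7, 480)  # hypothetical duplicate-typing counts
    print(f"error rate = {rate:.4f}, 95% CI = ({ci[0]:.4f}, {ci[1]:.4f})")
    ```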

  1. Streamflow simulation studies of the Hillsborough, Alafia, and Anclote Rivers, west-central Florida

    USGS Publications Warehouse

    Turner, J.F.

    1979-01-01

    A modified version of the Georgia Tech Watershed Model was applied for the purpose of flow simulation in three large river basins of west-central Florida. Calibrations were evaluated by comparing the following synthesized and observed data: annual hydrographs for the 1959, 1960, 1973 and 1974 water years, flood hydrographs (maximum daily discharge and flood volume), and long-term annual flood-peak discharges (1950-72). Annual hydrographs, excluding the 1973 water year, were compared using average absolute error in annual runoff and daily flows and correlation coefficients of monthly and daily flows. Correlation coefficients for simulated and observed maximum daily discharges and flood volumes used for calibrating range from 0.91 to 0.98 and average standard errors of estimate range from 18 to 45 percent. Correlation coefficients for simulated and observed annual flood-peak discharges range from 0.60 to 0.74 and average standard errors of estimate range from 33 to 44 percent. (Woodard-USGS)

  2. Direct comparison of phase-sensitive vibrational sum frequency generation with maximum entropy method: case study of water.

    PubMed

    de Beer, Alex G F; Samson, Jean-Sebastièn; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie

    2011-12-14

    We present a direct comparison of phase-sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/. © 2011 American Institute of Physics.

  3. Continuous-wave ultrasound reflectometry for surface roughness imaging applications

    PubMed Central

    Kinnick, R. R.; Greenleaf, J. F.; Fatemi, M.

    2009-01-01

    Background Measurement of the surface roughness irregularities that result from various sources, such as manufacturing processes, surface damage, and corrosion, is an important indicator of product quality in many nondestructive testing (NDT) industries. Many techniques exist; however, because they are qualitative, time-consuming, and contact-based, new experimental methods and efficient tools for quantitative estimation of surface roughness are needed. Objective and Method Here we present continuous-wave ultrasound reflectometry (CWUR) as a novel nondestructive modality for imaging and measuring surface roughness in a non-contact mode. In CWUR, voltage variations due to phase shifts in the reflected ultrasound waves are recorded and processed to form an image of surface roughness. Results An acrylic test block with surface irregularities ranging from 4.22 μm to 19.05 μm, as measured by a coordinate measuring machine (CMM), is scanned by an ultrasound transducer having a diameter of 45 mm, a focal distance of 70 mm, and a central frequency of 3 MHz. It is shown that the CWUR technique gives very good agreement with the results obtained through the CMM, with a maximum average percent error of around 11.5%. Conclusion The images obtained here demonstrate that CWUR may be used as a powerful noncontact and quantitative tool for nondestructive inspection and imaging of surface irregularities at the micron level with an average error of less than 11.5%. PMID:18664399

  4. Wrist electrogoniometry: are current mathematical correction procedures effective in reducing crosstalk in functional assessment?

    PubMed

    Foltran, Fabiana A; Silva, Luciana C C B; Sato, Tatiana O; Coury, Helenice J C G

    2013-01-01

    The recording of human movement is an essential requirement for biomechanical, clinical, and occupational analysis, allowing assessment of postural variation, occupational risks, and preventive programs in physical therapy and rehabilitation. The flexible electrogoniometer (EGM), considered a reliable and accurate device, is used for dynamic recordings of different joints. Despite these advantages, the EGM is susceptible to measurement errors known as crosstalk. There are two known types of crosstalk: crosstalk due to sensor rotation and inherent crosstalk. Correction procedures have been proposed to correct these errors; however, no study has applied both procedures to clinical measures of wrist movement with the aim of optimizing the correction. The objectives were to evaluate the effects of mathematical correction procedures on: 1) crosstalk due to forearm rotation; 2) inherent sensor crosstalk; and 3) the combination of these two procedures. Forty-three healthy subjects had their maximum range of motion in wrist flexion/extension and ulnar/radial deviation recorded by EGM. The results were analyzed descriptively, and procedures were compared by their differences. There was no significant difference between measurements before and after the application of the correction procedures at the 0.05 level. Furthermore, the differences between the correction procedures were less than 5° in most cases, having little impact on the measurements. Considering the time-consuming data analysis, the specific technical knowledge involved, and the inefficient results, the correction procedures are not recommended for wrist recordings by EGM.

  5. Resolution performance of a 0.60-NA, 364-nm laser direct writer

    NASA Astrophysics Data System (ADS)

    Allen, Paul C.; Buck, Peter D.

    1990-06-01

    ATEQ has developed a high resolution laser scanning printing engine based on the 8-beam architecture of the CORE-2000. This printing engine has been incorporated into two systems: the CORE-2500 for the production of advanced masks and reticles and a prototype system for direct write on wafers. The laser direct writer incorporates a through-the-lens alignment system and a rotary chuck for theta alignment. Its resolution performance is delivered by a 0.60 NA laser scan lens and a novel air-jet focus system. The short focal length high resolution lens also reduces beam position errors, thereby improving overall pattern accuracy. In order to take advantage of the high NA optics, a high performance focus servo was developed, capable of dynamic focus with a maximum error of 0.15 μm. The focus system uses a hot wire anemometer to measure air flow through an orifice abutting the wafer, providing a direct measurement to the top surface of resist independent of substrate properties. Lens specifications are presented and compared with the previous design. Bench data of spot size vs. entrance pupil filling show spot size performance down to 0.35 μm FWHM. The lens has a linearity specification of 0.05 μm; system measurements of lens linearity indicate system performance substantially below this. The aerial image of the scanned beams is measured using resist as a threshold detector. An effective spot size is

  6. Corrected score estimation in the proportional hazards model with misclassified discrete covariates

    PubMed Central

    Zucker, David M.; Spiegelman, Donna

    2013-01-01

    SUMMARY We consider Cox proportional hazards regression when the covariate vector includes error-prone discrete covariates along with error-free covariates, which may be discrete or continuous. The misclassification in the discrete error-prone covariates is allowed to be of any specified form. Building on the work of Nakamura and his colleagues, we present a corrected score method for this setting. The method can handle all three major study designs (internal validation design, external validation design, and replicate measures design), both functional and structural error models, and time-dependent covariates satisfying a certain ‘localized error’ condition. We derive the asymptotic properties of the method and indicate how to adjust the covariance matrix of the regression coefficient estimates to account for estimation of the misclassification matrix. We present the results of a finite-sample simulation study under Weibull survival with a single binary covariate having known misclassification rates. The performance of the method described here was similar to that of related methods we have examined in previous works. Specifically, our new estimator performed as well as or, in a few cases, better than the full Weibull maximum likelihood estimator. We also present simulation results for our method for the case where the misclassification probabilities are estimated from an external replicate measures study. Our method generally performed well in these simulations. The new estimator has a broader range of applicability than many other estimators proposed in the literature, including those described in our own earlier work, in that it can handle time-dependent covariates with an arbitrary misclassification structure. We illustrate the method on data from a study of the relationship between dietary calcium intake and distal colon cancer. PMID:18219700

  7. Delay Analysis and Optimization of Bandwidth Request under Unicast Polling in IEEE 802.16e over Gilbert-Elliot Error Channel

    NASA Astrophysics Data System (ADS)

    Hwang, Eunju; Kim, Kyung Jae; Roijers, Frank; Choi, Bong Dae

    In the centralized polling mode in IEEE 802.16e, a base station (BS) polls mobile stations (MSs) for bandwidth reservation in one of three polling modes: unicast, multicast, or broadcast polling. In unicast polling, the BS polls each individual MS to allow it to transmit a bandwidth request packet. This paper presents an analytical model for the unicast polling of bandwidth requests in IEEE 802.16e networks over a Gilbert-Elliot error channel. We derive the probability distribution of the delay of bandwidth requests due to wireless transmission errors and find the loss probability of request packets due to finite retransmission attempts. Using the delay distribution and the loss probability, we optimize the number of polling slots within a frame and the maximum retransmission number while satisfying QoS on the total loss probability, which combines two losses: packet loss due to exceeding the maximum number of retransmissions and delay outage loss due to exceeding the maximum tolerable delay bound. In addition, we obtain the utilization of polling slots, defined as the ratio of the number of polling slots used for the MS's successful transmissions to the total number of polling slots used by the MS over a long run time. The analysis results are shown to match well with simulation results. Numerical results give examples of the optimal number of polling slots within a frame and the optimal maximum retransmission number depending on delay bounds, the number of MSs, and the channel conditions.
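
    A minimal sketch of the channel model underlying the analysis: a two-state Gilbert-Elliott simulation with a crude estimate of the request loss probability under a retransmission limit (all parameter values and the consecutive-slot retry assumption are illustrative, not the paper's analytical model):

    ```python
    import random

    def gilbert_elliott(n, p_gb=0.05, p_bg=0.3, e_good=0.01, e_bad=0.3, seed=0):
        """Per-slot error indicators from a two-state Markov (Gilbert-Elliott)
        channel; p_gb/p_bg are good->bad / bad->good transition probabilities."""
        rng, bad, errors = random.Random(seed), False, []
        for _ in range(n):
            bad = rng.random() < ((1 - p_bg) if bad else p_gb)
            errors.append(rng.random() < (e_bad if bad else e_good))
        return errors

    def request_loss_probability(errors, max_retx=3):
        """Fraction of requests lost after max_retx retransmissions, assuming
        each attempt occupies the next slot (a simplification)."""
        losses = trials = 0
        for i in range(0, len(errors) - max_retx, max_retx + 1):
            trials += 1
            losses += all(errors[i:i + max_retx + 1])
        return losses / trials

    errs = gilbert_elliott(100_000)
    print(f"request loss probability: {request_loss_probability(errs):.4f}")
    ```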

  8. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
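
    A minimal sketch of the ML estimator under the Poisson point-process model: when the pulse lies well inside the observation window, the integral term of the log-likelihood is nearly independent of the arrival time, so the estimate maximizes the sum of log-intensities at the observed photon times (the pulse shape, rates, and grid search below are illustrative assumptions):

    ```python
    import numpy as np

    def pulse(t, width=1e-9):
        """Assumed Gaussian pulse shape (stand-in for the actual laser pulse)."""
        return np.exp(-0.5 * (t / width) ** 2)

    def log_likelihood(tau, arrivals, lam_s=5e9, lam_b=1e8):
        # Sum of log-intensities; the integral of the intensity over the
        # window is treated as constant in tau and dropped.
        return np.sum(np.log(lam_b + lam_s * pulse(arrivals - tau)))

    def ml_toa(arrivals, grid):
        """Grid-search ML estimate of the pulse arrival time."""
        return grid[np.argmax([log_likelihood(tau, arrivals) for tau in grid])]

    # Hypothetical photon record: signal photons near tau = 3 ns plus background
    rng = np.random.default_rng(3)
    arrivals = np.sort(np.concatenate([
        3e-9 + rng.normal(0, 1e-9, 20),       # signal photon times
        rng.uniform(0, 20e-9, 10),            # background photon times
    ]))
    grid = np.linspace(0, 20e-9, 2001)
    print(f"ML arrival-time estimate: {ml_toa(arrivals, grid) * 1e9:.2f} ns")
    ```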

  9. Impact of ozone observations on the structure of a tropical cyclone using coupled atmosphere-chemistry data assimilation

    NASA Astrophysics Data System (ADS)

    Lim, S.; Park, S. K.; Zupanski, M.

    2015-04-01

    Since air quality forecasting is related to both chemistry and meteorology, a coupled atmosphere-chemistry data assimilation (DA) system is essential to air quality forecasting. Ozone (O3) plays an important role in chemical reactions and is usually assimilated in chemical DA. In tropical cyclones (TCs), O3 usually shows a lower concentration inside the eyewall and an elevated concentration around the eye, impacting atmospheric as well as chemical variables. To identify the impact of O3 observations on TC structure, including atmospheric and chemical information, we employed the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) with an ensemble-based DA algorithm - the maximum likelihood ensemble filter (MLEF). For a TC case that occurred over East Asia, our results indicate that the ensemble forecast is reasonable, accompanied by larger background state uncertainty over the TC, and also over eastern China. Similarly, the assimilation of O3 observations impacts atmospheric and chemical variables near the TC and over eastern China. The strongest impact on air quality in the lower troposphere was over China, likely due to pollution advection. In the vicinity of the TC, however, the strongest impact on chemical variable adjustment was at higher levels. The impact on atmospheric variables was similar both over China and near the TC. The analysis results are validated using several measures that include the cost function, root-mean-squared error with respect to observations, and degrees of freedom for signal (DFS). All measures indicate a positive impact of DA on the analysis - the cost function and root-mean-squared error decreased by 16.9% and 8.87%, respectively. In particular, the DFS indicates a strong positive impact of observations in the TC area, with a weaker maximum over northeast China.

  10. Ensemble data assimilation of total column ozone using a coupled meteorology-chemistry model and its impact on the structure of Typhoon Nabi (2005)

    NASA Astrophysics Data System (ADS)

    Lim, S.; Park, S. K.; Zupanski, M.

    2015-09-01

    Ozone (O3) plays an important role in chemical reactions and is usually incorporated in chemical data assimilation (DA). In tropical cyclones (TCs), O3 usually shows a lower concentration inside the eyewall and an elevated concentration around the eye, impacting meteorological as well as chemical variables. To identify the impact of O3 observations on TC structure, including meteorological and chemical information, we developed a coupled meteorology-chemistry DA system by employing the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) and an ensemble-based DA algorithm - the maximum likelihood ensemble filter (MLEF). For a TC case that occurred over East Asia, Typhoon Nabi (2005), our results indicate that the ensemble forecast is reasonable, accompanied by larger background state uncertainty over the TC, and also over eastern China. Similarly, the assimilation of O3 observations impacts meteorological and chemical variables near the TC and over eastern China. The strongest impact on air quality in the lower troposphere was over China, likely due to pollution advection. In the vicinity of the TC, however, the strongest impact on chemical variable adjustment was at higher levels. The impact on meteorological variables was similar both over China and near the TC. The analysis results are verified using several measures that include the cost function, root mean square (RMS) error with respect to observations, and degrees of freedom for signal (DFS). All measures indicate a positive impact of DA on the analysis - the cost function and RMS error decreased by 16.9% and 8.87%, respectively. In particular, the DFS indicates a strong positive impact of observations in the TC area, with a weaker maximum over northeastern China.

  11. Dynamic Method for Identifying Collected Sample Mass

    NASA Technical Reports Server (NTRS)

    Carson, John

    2008-01-01

    G-Sample is designed for sample collection missions to identify the presence and quantity of sample material gathered by spacecraft equipped with end effectors. The software method uses a maximum-likelihood estimator to identify the collected sample's mass based on onboard force-sensor measurements, thruster firings, and a dynamics model of the spacecraft. This makes sample mass identification a computation rather than a process requiring additional hardware. Simulation examples of G-Sample are provided for spacecraft model configurations with a sample collection device mounted on the end of an extended boom. In the absence of thrust knowledge errors, the results indicate that G-Sample can identify the amount of collected sample mass to within 10 grams (with 95-percent confidence) by using a force sensor with a noise and quantization floor of 50 micrometers. These results hold even in the presence of realistic parametric uncertainty in actual spacecraft inertia, center-of-mass offset, and first flexibility modes. Thrust profile knowledge is shown to be a dominant sensitivity for G-Sample, entering in a nearly one-to-one relationship with the final mass estimation error. This means thrust profiles should be well characterized with onboard accelerometers prior to sample collection. An overall sample-mass estimation error budget has been developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
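
    A heavily simplified sketch of the identification idea: if the dynamics are reduced to F = (m_known + dm) * a along one axis, the added sample mass follows from least squares over force measurements taken during thruster firings (G-Sample's actual estimator uses a full spacecraft dynamics model and thrust profiles; all values below are hypothetical):

    ```python
    import numpy as np

    def estimate_sample_mass(forces, accels, m_known):
        """Least-squares estimate of dm in F = (m_known + dm) * a."""
        return np.sum(accels * (forces - m_known * accels)) / np.sum(accels ** 2)

    rng = np.random.default_rng(6)
    m_known, dm_true = 1.50, 0.025               # kg, hypothetical boom-tip masses
    accels = rng.uniform(0.5, 2.0, 200)          # m/s^2 during thruster firings
    forces = (m_known + dm_true) * accels + rng.normal(0, 0.01, 200)  # N, noisy
    dm_hat = estimate_sample_mass(forces, accels, m_known)
    print(f"estimated sample mass: {dm_hat * 1e3:.1f} g")
    ```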

  12. Design of Instrument Dials for Maximum Legibility. Part 5. Origin Location, Scale Break, Number Location, and Contrast Direction

    DTIC Science & Technology

    1951-05-01

    procedures to be of high accuracy. Ambiguity of subject responses due to overlap of entries on the record sheets was negligible. Handwriting ... experimental variables on reading errors was carried out by analysis of variance methods. For this purpose it was convenient to consider different classes ... on any scale - an error of one numbered division. For this reason, the results of the analysis of variance of the 1/10's errors by dial types may

  13. Generation of a crowned pinion tooth surface by a surface of revolution

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Zhang, J.; Handschuh, R. F.

    1988-01-01

    A method of generating crowned pinion tooth surfaces using a surface of revolution is developed. The crowned pinion meshes with a regular involute gear and has a prescribed parabolic type of transmission errors when the gears operate in the aligned mode. When the gears are misaligned the transmission error remains parabolic with the maximum level still remaining very small (less than 0.34 arc sec for the numerical examples). Tooth contact analysis (TCA) is used to simulate the conditions of meshing, determine the transmission error, and determine the bearing contact.

  14. Multi-institutional evaluation of end-to-end protocol for IMRT/VMAT treatment chains utilizing conventional linacs.

    PubMed

    Loughery, Brian; Knill, Cory; Silverstein, Evan; Zakjevskii, Viatcheslav; Masi, Kathryn; Covington, Elizabeth; Snyder, Karen; Song, Kwang; Snyder, Michael

    2018-03-20

    We conducted a multi-institutional assessment of a recently developed end-to-end monthly quality assurance (QA) protocol for external beam radiation therapy treatment chains. This protocol validates the entire treatment chain against a baseline to detect the presence of complex errors not easily found in standard component-based QA methods. Participating physicists from 3 institutions ran the end-to-end protocol on treatment chains that include Imaging and Radiation Oncology Core (IROC)-credentialed linacs. Results were analyzed in the style of American Association of Physicists in Medicine (AAPM) Task Group (TG) 119 so that they may be referenced by future test participants. Optically stimulated luminescent dosimeter (OSLD), EBT3 radiochromic film, and A1SL ion chamber readings were accumulated across 10 test runs. Confidence limits were calculated to determine where 95% of measurements should fall. From the calculated confidence limits, 95% of measurements should be within 5% error for OSLDs, 4% error for ionization chambers, and 4% error (a 96% relative gamma pass rate) for radiochromic film at 3% agreement/3 mm distance to agreement. Data were separated by institution, model of linac, and treatment protocol (intensity-modulated radiation therapy [IMRT] vs volumetric modulated arc therapy [VMAT]). A total of 97% of OSLDs, 98% of ion chambers, and 93% of films were within the confidence limits; measurements fell outside these limits by a maximum of 4%, < 1%, and < 1%, respectively. Data were consistent despite institutional differences in OSLD reading equipment and radiochromic film calibration techniques. Results from this test may be used by clinics for data comparison. Areas of improvement were identified in the end-to-end protocol that can be implemented in an updated version. The consistency of our data demonstrates the reproducibility and ease-of-use of such tests and suggests a potential role for their use in broad end-to-end QA initiatives. Copyright © 2018 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
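
    A small sketch of the confidence-limit computation referenced above, assuming the common TG-119 form |mean error| + 1.96 * SD (the sample error values are hypothetical):

    ```python
    import numpy as np

    def tg119_confidence_limit(errors_pct):
        """TG-119-style confidence limit: the bound within which ~95% of
        measurements are expected to fall."""
        e = np.asarray(errors_pct, dtype=float)
        return abs(e.mean()) + 1.96 * e.std(ddof=1)

    # Hypothetical OSLD point-dose errors (%) from repeated end-to-end runs
    osld_errors = [1.2, -0.8, 2.1, 0.5, -1.5, 1.9, 0.3, -0.2, 2.4, -1.1]
    print(f"confidence limit: {tg119_confidence_limit(osld_errors):.1f} %")
    ```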

  15. Magnetic Resonance Imaging–Guided versus Surrogate-Based Motion Tracking in Liver Radiation Therapy: A Prospective Comparative Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paganelli, Chiara, E-mail: chiara.paganelli@polimi.it; Seregni, Matteo; Fattori, Giovanni

    Purpose: This study applied automatic feature detection on cine–magnetic resonance imaging (MRI) liver images in order to provide a prospective comparison between MRI-guided and surrogate-based tracking methods for motion-compensated liver radiation therapy. Methods and Materials: In a population of 30 subjects (5 volunteers plus 25 patients), 2 oblique sagittal slices were acquired across the liver at high temporal resolution. An algorithm based on scale invariant feature transform (SIFT) was used to extract and track multiple features throughout the image sequence. The position of abdominal markers was also measured directly from the image series, and the internal motion of each feature was quantified through multiparametric analysis. Surrogate-based tumor tracking with a state-of-the-art external/internal correlation model was simulated. The geometrical tracking error was measured, and its correlation with external motion parameters was also investigated. Finally, the potential gain in tracking accuracy relying on MRI guidance was quantified as a function of the maximum allowed tracking error. Results: An average of 45 features was extracted for each subject across the whole liver. The multiparametric motion analysis reported relevant inter- and intrasubject variability, highlighting the value of patient-specific and spatially-distributed measurements. Surrogate-based tracking errors (relative to the motion amplitude) were in the range of 7% to 23% (1.02–3.57 mm) and were significantly influenced by external motion parameters. The gain of MRI guidance compared to surrogate-based motion tracking was larger than 30% in 50% of the subjects when considering a 1.5-mm tracking error tolerance. Conclusions: Automatic feature detection applied to cine-MRI allows detailed liver motion description to be obtained. Such information was used to quantify the performance of surrogate-based tracking methods and to provide a prospective comparison with respect to MRI-guided radiation therapy, which could support the definition of patient-specific optimal treatment strategies.

  16. Force and Directional Force Modulation Effects on Accuracy and Variability in Low-Level Pinch Force Tracking.

    PubMed

    Park, Sangsoo; Spirduso, Waneen; Eakin, Tim; Abraham, Lawrence

    2018-01-01

    The authors investigated how varying the required low-level forces and the direction of force change affect accuracy and variability of force production in a cyclic isometric pinch force tracking task. Eighteen healthy right-handed adult volunteers performed the tracking task over 3 different force ranges. Root mean square error and coefficient of variation were higher at lower force levels and during minimum reversals compared with maximum reversals. Overall, the thumb showed greater root mean square error and coefficient of variation scores than did the index finger during maximum reversals, but not during minimum reversals. The observed impaired performance during minimum reversals might originate from history-dependent mechanisms of force production and highly coupled 2-digit performance.
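
    A minimal sketch of the two outcome measures, root mean square error against the target and coefficient of variation of the produced force, over a hypothetical segment of cyclic tracking (the target waveform and noise level are assumptions):

    ```python
    import numpy as np

    def tracking_metrics(force, target):
        """RMSE (accuracy) and coefficient of variation in percent (variability)."""
        rmse = np.sqrt(np.mean((force - target) ** 2))
        cv = 100.0 * np.std(force) / np.mean(force)
        return rmse, cv

    t = np.linspace(0.0, 1.0, 100)
    target = 2.0 + 0.5 * np.cos(2 * np.pi * t)   # N, one cycle of a low-level task
    force = target + np.random.default_rng(5).normal(0.0, 0.1, t.size)
    rmse, cv = tracking_metrics(force, target)
    print(f"RMSE = {rmse:.3f} N, CV = {cv:.1f} %")
    ```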

  17. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.
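
    A minimal sketch of the MMSE frequency-domain equalization that the proposed channel estimator feeds into, for a single-carrier block with a known channel (the paper's DS-CDMA spreading, pilot structure, and 2-step MLCE are not modeled; all parameters are illustrative):

    ```python
    import numpy as np

    def mmse_fde(received_freq, channel, noise_var):
        """Per-frequency MMSE weights W(k) = H*(k) / (|H(k)|^2 + noise_var)
        applied to the received spectrum (unit-power symbols assumed)."""
        w = np.conj(channel) / (np.abs(channel) ** 2 + noise_var)
        return w * received_freq

    rng = np.random.default_rng(4)
    N, noise_var = 64, 0.01
    symbols = rng.choice([-1.0, 1.0], N)                       # BPSK block
    h = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)  # 4-tap channel
    H = np.fft.fft(h, N)
    noise = (rng.normal(0, np.sqrt(noise_var / 2), N)
             + 1j * rng.normal(0, np.sqrt(noise_var / 2), N))
    r = np.fft.ifft(H * np.fft.fft(symbols)) + noise           # circular channel
    eq = np.fft.ifft(mmse_fde(np.fft.fft(r), H, noise_var))
    print("bit errors:", int(np.sum(np.sign(eq.real) != symbols)))
    ```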

  18. Theoretical Study of the Conditions of Maximum Manifestation of the Error Due to Inhomogeneity of Thermocouple Legs

    NASA Astrophysics Data System (ADS)

    Liu, Zhigang; Song, Wenguang; Kochan, Orest; Mykyichuk, Mykola; Jun, Su

    2017-07-01

    The method of theoretical analysis of the temperature ranges of maximum manifestation of the error due to acquired thermoelectric inhomogeneity of thermocouple legs is proposed in this paper. The drift of the reference function of type K thermocouples in ceramic insulation, consisting of 1.2 mm diameter thermoelements, after exposure to 800 °C for 10,000 h in an oxidizing atmosphere (air), is analyzed. The method takes into account various operating conditions to determine the optimal conditions for studying inhomogeneous thermocouples. It can be applied to other types of thermocouples when their specific characteristics and exposure conditions are taken into account.

  19. Exploiting the Modified Colombo-Nyquist Rule for Co-estimating Sub-monthly Gravity Field Solutions from a GRACE-like Mission

    NASA Astrophysics Data System (ADS)

    Devaraju, B.; Weigelt, M.; Mueller, J.

    2017-12-01

    In order to suppress the impact of aliasing errors on the standard monthly GRACE gravity-field solutions, co-estimating sub-monthly (daily/two-day) low-degree solutions has been suggested as a remedy. The maximum degree of the low-degree solutions is chosen via the Colombo-Nyquist rule of thumb. However, it is now established that the sampling of the satellites restricts the maximum estimable order, not the degree (the modified Colombo-Nyquist rule). Therefore, in this contribution, we co-estimate low-order sub-monthly solutions, and compare and contrast them with the low-degree sub-monthly solutions. We also investigate their efficacy in dealing with aliasing errors.

  20. Intra and inter-session reliability of rapid Transcranial Magnetic Stimulation stimulus-response curves of tibialis anterior muscle in healthy older adults

    PubMed Central

    Colombo, Vera Maria; van de Ruit, Mark; Grey, Michael J.; Monticone, Marco; Ferriero, Giorgio; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Ferrante, Simona

    2017-01-01

    Objective The clinical use of Transcranial Magnetic Stimulation (TMS) as a technique to assess corticospinal excitability is limited by the time for data acquisition and the measurement variability. This study aimed at evaluating the reliability of Stimulus-Response (SR) curves acquired with a recently proposed rapid protocol on the tibialis anterior muscle of healthy older adults. Methods Twenty-four neurologically-intact adults (age: 55–75 years) were recruited for this test-retest study. During each session, six SR curves, 3 at rest and 3 during isometric muscle contractions at 5% of maximum voluntary contraction (MVC), were acquired. Motor Evoked Potentials (MEPs) were normalized to the maximum peripherally evoked response; the coil position and orientation were monitored with an optical tracking system. Intra- and inter-session reliability of motor threshold (MT), area under the curve (AURC), MEPmax, stimulation intensity at which the MEP is mid-way between MEPmax and MEPmin (I50), slope in I50, MEP latency, and silent period (SP) were assessed in terms of Standard Error of Measurement (SEM), relative SEM, Minimum Detectable Change (MDC), and Intraclass Correlation Coefficient (ICC). Results The relative SEM was ≤10% for MT, I50, latency, and SP both at rest and at 5% MVC, while it ranged between 11% and 37% for AURC, MEPmax, and slope. MDC values were overall quite large; e.g., MT required a change of 12% MSO at rest and 10% MSO at 5% MVC to be considered a real change. Inter-session ICCs were >0.6 for all measures except slope at rest, and MEPmax and latency at 5% MVC. Conclusions Measures derived from SR curves acquired in <4 minutes are affected by measurement errors similar to those found with long-lasting protocols, suggesting that the rapid method is at least as reliable as the traditional methods. As it was specifically designed to include older adults, this study provides normative data for future studies involving older neurological patients (e.g., stroke survivors). PMID:28910370
